[JBoss JIRA] (ISPN-2779) Lost data on remotecache get() after put()
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2779?page=com.atlassian.jira.plugin.... ]
Mircea Markus resolved ISPN-2779.
---------------------------------
Resolution: Rejected
Can you please give it another try with 5.3.0.Final?
> Lost data on remotecache get() after put()
> ------------------------------------------
>
> Key: ISPN-2779
> URL: https://issues.jboss.org/browse/ISPN-2779
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache, Eviction, Loaders and Stores, Server
> Affects Versions: 5.1.6.FINAL, 5.2.0.CR3
> Environment: Windows 7SP1 Pro
> JDK 7 or JRE6
> Reporter: ThienLong Hong
> Assignee: Tristan Tarrant
> Fix For: 6.0.0.CR1
>
>
> I start the Infinispan server on Ubuntu with the following command:
> {noformat}
> ./startServer.sh -r hotrod -c infinispan-distribution.xml -l 192.168.23.120 -Djgroups.bind_addr=192.168.23.120
> {noformat}
> Make sure we can open many files, in /etc/security/limits.conf:
> {noformat}
> * soft nofile 100002
> * hard nofile 100002
> {noformat}
> Here is content of infinispan-distribution.xml:
> {code:xml}
> <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
> xsi:schemaLocation="urn:infinispan:config:5.1 http://www.infinispan.org/schemas/infinispan-config-5.1.xsd"
> xmlns="urn:infinispan:config:5.1">
> <global>
> <transport>
> <properties>
> <property name="configurationFile" value="jgroups-tcp.xml"/>
> </properties>
> </transport>
> </global>
> <default>
> <clustering mode="distribution">
> <sync />
> <hash numOwners="2" />
> </clustering>
> </default>
> <namedCache name="myCache">
> <clustering mode="distribution">
> <sync />
> <hash numOwners="2" />
> </clustering>
> </namedCache>
> <namedCache name="evictionCache">
> <clustering mode="distribution">
> <sync />
> <hash numOwners="2" rehashEnabled="true" rehashRpcTimeout="600000" numVirtualNodes="50"/>
> </clustering>
> <eviction
> maxEntries="10000"
> strategy="LRU"
> />
> <loaders passivation="true" preload="false">
> <loader class="org.infinispan.loaders.file.FileCacheStore" fetchPersistentState="true" purgeOnStartup="false">
> <properties>
> <property name="location" value="data"/>
> </properties>
> </loader>
> </loaders>
> </namedCache>
> </infinispan>
> {code}
> I wrote a simple program to benchmark Infinispan, but I ran into a problem with lost data: I put many key-value pairs, but when I retrieve some of them, get() returns null.
> Here is my program's source code:
> {code:java}
> package vn.vccorp.benmark.infinispan;
> import java.io.FileWriter;
> import java.io.PrintWriter;
> import java.net.URL;
> import java.util.ArrayList;
> import java.util.LinkedHashMap;
> import java.util.List;
> import java.util.Map;
> import java.util.Map.Entry;
> import java.util.Random;
> import java.util.UUID;
> import java.util.concurrent.Callable;
> import java.util.concurrent.Executor;
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import java.util.concurrent.TimeUnit;
> import org.infinispan.client.hotrod.RemoteCache;
> import org.infinispan.client.hotrod.RemoteCacheManager;
> public class Benmark implements Callable<Void> {
> private static final int MAX_LENGHT = 20;
> private static final int MIN_LENGHT = 8;
> private Map<String, String> accs = new LinkedHashMap<String, String>();
> private final Random random = new Random(System.currentTimeMillis());
> private RemoteCache<String, String> rc;
> private boolean getOperator = false;
> public Benmark(RemoteCache<String, String> rc, int numAcc) {
> generateRandomAccs(numAcc);
> this.rc = rc;
> }
> /**
> * @param args
> * @throws InterruptedException
> */
> public static void main(String[] args) throws InterruptedException {
> URL resource = Thread.currentThread().getContextClassLoader()
> .getResource("hotrod-client.properties");
> RemoteCacheManager rcm = new RemoteCacheManager(resource, true);
> RemoteCache<String, String> rc = rcm.getCache("evictionCache");
> List<Benmark> bens = new ArrayList<Benmark>();
> int numThreads = Runtime.getRuntime().availableProcessors() * 2;
> int numAccsPerThread = 5000;
> for (int i = 0; i < numThreads; ++i) {
> bens.add(new Benmark(rc, numAccsPerThread));
> }
> long time = testOperator(bens);
> System.out.println("finish test Put with "
> + (numThreads * numAccsPerThread) + " records in "
> + (time) + "ns");
> for (Benmark benmark : bens) {
> benmark.setGetOperator(true);
> }
> time = testOperator(bens);
> System.out.println("finish test Get with "
> + (numThreads * numAccsPerThread) + " records in "
> + (time) + "ns");
> rcm.stop();
> saveDataTest(bens);
> }
> private static void saveDataTest(List<Benmark> bens) {
> PrintWriter writer = null;
> try {
> writer = new PrintWriter(new FileWriter("data_test.txt"));
> for (Benmark ben : bens) {
> for (Entry<String, String> entry : ben.getAccs().entrySet()) {
> writer.println(entry.getKey());
> writer.println(entry.getValue());
> }
> }
> } catch (Exception e) {
> e.printStackTrace(); // report the failure instead of silently swallowing it
> } finally {
> if (writer != null) {
> writer.close();
> }
> }
> }
> private static long testOperator(List<Benmark> bens) throws InterruptedException {
> ExecutorService pool = Executors.newCachedThreadPool();
> long time = System.nanoTime();
> pool.invokeAll(bens);
> pool.shutdown();
> while (!pool.awaitTermination(1, TimeUnit.MINUTES)) {
> }
> long end = System.nanoTime();
> return end - time;
> }
> private void generateRandomAccs(int numberAcc) {
> String user = null;
> String pass = null;
> for (int i = 0; i < numberAcc;) {
> user = randomString();
> pass = randomString();
> if (getAccs().put(user, pass) == null) {
> ++i;
> }
> }
> }
> private String randomString() {
> // int len = MIN_LENGHT + random.nextInt(MAX_LENGHT - MIN_LENGHT);
> // StringBuilder builder = new StringBuilder(len);
> // for (int i = 0; i < len; ++i) {
> // char c = (char) (32 + random.nextInt(126 - 32));
> // builder.append(c);
> // }
> // return builder.toString();
> return UUID.randomUUID().toString();
> }
> @Override
> public Void call() {
> if (!getOperator) {
> System.out.println("Start test PUT");
> long startPut = System.nanoTime();
> for (Entry<String, String> acc : getAccs().entrySet()) {
> rc.put(acc.getKey(), acc.getValue());
> }
> long endPut = System.nanoTime();
> System.out
> .println("Finish test PUT: " + (endPut - startPut) + "ns");
> } else {
> System.out.println("Start test GET");
> long startGet = System.nanoTime();
> for (Entry<String, String> acc : getAccs().entrySet()) {
> if (rc.get(acc.getKey()) == null) {
> System.err.println("Error get data");
> break;
> }
> }
> long endGet = System.nanoTime();
> System.out
> .println("Finish test GET: " + (endGet - startGet) + "ns");
> }
> return null;
> }
> public boolean isGetOperator() {
> return getOperator;
> }
> public void setGetOperator(boolean getOperator) {
> this.getOperator = getOperator;
> }
> public Map<String, String> getAccs() {
> return accs;
> }
> public void setAccs(Map<String, String> accs) {
> this.accs = accs;
> }
> }
> {code}
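> The client's connection settings come from hotrod-client.properties on the classpath; that file is not included above, but a minimal version, assuming the server started earlier listens on the default HotRod port 11222, would look something like:
> {noformat}
> # minimal hotrod-client.properties (assumed, not taken from the original report)
> infinispan.client.hotrod.server_list = 192.168.23.120:11222
> {noformat}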
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3244) TopologyAwareSyncConsistentHashFactory should limit the number of segments per node
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-3244?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-3244:
--------------------------------
Fix Version/s: 6.1.0.Final
(was: 6.0.0.CR1)
> TopologyAwareSyncConsistentHashFactory should limit the number of segments per node
> -----------------------------------------------------------------------------------
>
> Key: ISPN-3244
> URL: https://issues.jboss.org/browse/ISPN-3244
> Project: Infinispan
> Issue Type: Bug
> Components: State transfer
> Affects Versions: 5.2.6.Final, 5.3.0.CR2
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 6.1.0.Final
>
>
> Let's say we have a cluster with 5 nodes: A(r1), B(r2), C(r2), D(r3), E(r3)
> TopologyAwareSyncConsistentHashFactory will spread the segments equally across the racks, meaning A will own 2x as many segments as the other nodes.
> TopologyAwareConsistentHashFactory limits the maximum number of segments per node, so that A owns just as many segments as the other nodes, with one caveat: the number of racks must be greater than numOwners, otherwise each rack must hold (at least) one copy of all the data.
> TopologyAwareSyncConsistentHashFactory relies on randomness, so we can't distribute the data perfectly, but we can cap the number of segments on each node at something like 1.5x the average number of segments.
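> A rough illustration of the proposed cap (plain helper code, not the actual factory implementation):
> {code:java}
> // Illustrative only: compute the per-node segment cap suggested above.
> final class SegmentCap {
>     static int maxSegmentsPerNode(int numSegments, int numOwners, int numNodes) {
>         // Each segment is owned numOwners times, so on average a node owns
>         // numSegments * numOwners / numNodes segments; cap any node at ~1.5x that.
>         double average = (double) numSegments * numOwners / numNodes;
>         return (int) Math.ceil(1.5 * average);
>     }
> }
> {code}
> For example, with 60 segments, numOwners=2 and the 5-node cluster above, the average is 24 segments per node, so the cap would be 36, whereas node A currently ends up with about 40.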
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3229) L1 cache entries should not be passivated
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-3229?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-3229:
--------------------------------
Fix Version/s: 6.1.0.Final
(was: 6.0.0.CR1)
> L1 cache entries should not be passivated
> -----------------------------------------
>
> Key: ISPN-3229
> URL: https://issues.jboss.org/browse/ISPN-3229
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 5.2.6.Final
> Reporter: William Burns
> Assignee: William Burns
> Fix For: 6.1.0.Final
>
> Attachments: DistSyncL1PassivationFuncTest.java
>
>
> L1 entries are stored in the same data container as the real entries. They can be evicted, which is fine; however, we don't want them to be passivated, since writing them to and reading them back from the cache store could be costly. We should either prevent L1 cache entries from being passivated or provide an option to allow it.
> Currently L1 entries are not differentiated from other entries except by the fact that they are mortal, and mortality is used for other operations as well, so they would need a placeholder of some kind to tell the container that an entry is an L1 entry.
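> A minimal sketch of the "placeholder" idea; the names below are illustrative, not existing Infinispan API:
> {code:java}
> // Hypothetical filter deciding whether an evicted entry should be passivated.
> public final class L1PassivationFilter {
>
>     /** Marker interface an L1-specific entry type could implement. */
>     public interface L1Entry { }
>
>     /** Only owned entries are worth a cache-store write on eviction. */
>     public static boolean shouldPassivate(Object entry) {
>         // L1 copies can simply be dropped and re-fetched from the owners later,
>         // so skip the costly passivation round-trip for them.
>         return !(entry instanceof L1Entry);
>     }
> }
> {code}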
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3270) Hotrod clients removeWithVersion doesn't work with replicated cache
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-3270?page=com.atlassian.jira.plugin.... ]
Mircea Markus commented on ISPN-3270:
-------------------------------------
[~jmarkos] please make sure you fill in the "Affects Version(s)" field in the JIRA; it's very important for us when reproducing the issue ;)
> Hotrod clients removeWithVersion doesn't work with replicated cache
> -------------------------------------------------------------------
>
> Key: ISPN-3270
> URL: https://issues.jboss.org/browse/ISPN-3270
> Project: Infinispan
> Issue Type: Bug
> Components: Remote protocols
> Reporter: Jakub Markos
> Assignee: Galder Zamarreño
> Fix For: 6.0.0.CR1
>
>
> I have a cluster of 2 latest infinispan servers (6.0.0-SNAPSHOT) with the following container configuration:
> {code:xml}
> <cache-container name="default" default-cache="default" listener-executor="infinispan-listener">
> <transport stack="udp" executor="infinispan-transport" lock-timeout="240000"/>
> <replicated-cache name="default" start="EAGER" mode="SYNC" batching="false" remote-timeout="60000">
> <transaction mode="NONE"/>
> <state-transfer enabled="true" timeout="60000"/>
> </replicated-cache>
> </cache-container>
> {code}
> Running this code:
> {code}
> remoteCache = remoteCacheManager.getCache();
> remoteCache.clear();
> assertFalse(remoteCache.removeWithVersion("aKey", 12321212L));
> remoteCache.put("aKey", "aValue");
> VersionedValue valueBinary = remoteCache.getVersioned("aKey");
> System.out.println("value = " + valueBinary.getValue());
> System.out.println("version = " + valueBinary.getVersion());
> System.out.println(remoteCache.removeWithVersion("aKey",valueBinary.getVersion()));
> valueBinary = remoteCache.getVersioned("aKey");
> System.out.println("value = " + valueBinary.getValue());
> System.out.println("version = " + valueBinary.getVersion());
> {code}
> most of the time this results in the following output (the other times, removeWithVersion returns false):
> {quote}
> value = aValue
> version = 281483566645249
> true
> value = aValue
> version = 281483566645249
> {quote}
> The command works with distributed/local cache.
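> For reference, the expected outcome is that the entry is gone after a successful removal, e.g.:
> {code}
> assertTrue(remoteCache.removeWithVersion("aKey", valueBinary.getVersion()));
> assertNull(remoteCache.getVersioned("aKey")); // the entry should be removed on all nodes
> {code}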
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3270) Hotrod clients removeWithVersion doesn't work with replicated cache
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-3270?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-3270:
--------------------------------
Assignee: Mircea Markus (was: Galder Zamarreño)
> Hotrod clients removeWithVersion doesn't work with replicated cache
> -------------------------------------------------------------------
>
> Key: ISPN-3270
> URL: https://issues.jboss.org/browse/ISPN-3270
> Project: Infinispan
> Issue Type: Bug
> Components: Remote protocols
> Reporter: Jakub Markos
> Assignee: Mircea Markus
> Fix For: 6.0.0.CR1
>
>
> I have a cluster of 2 latest infinispan servers (6.0.0-SNAPSHOT) with the following container configuration:
> {code:xml}
> <cache-container name="default" default-cache="default" listener-executor="infinispan-listener">
> <transport stack="udp" executor="infinispan-transport" lock-timeout="240000"/>
> <replicated-cache name="default" start="EAGER" mode="SYNC" batching="false" remote-timeout="60000">
> <transaction mode="NONE"/>
> <state-transfer enabled="true" timeout="60000"/>
> </replicated-cache>
> </cache-container>
> {code}
> Running this code:
> {code}
> remoteCache = remoteCacheManager.getCache();
> remoteCache.clear();
> assertFalse(remoteCache.removeWithVersion("aKey", 12321212L));
> remoteCache.put("aKey", "aValue");
> VersionedValue valueBinary = remoteCache.getVersioned("aKey");
> System.out.println("value = " + valueBinary.getValue());
> System.out.println("version = " + valueBinary.getVersion());
> System.out.println(remoteCache.removeWithVersion("aKey",valueBinary.getVersion()));
> valueBinary = remoteCache.getVersioned("aKey");
> System.out.println("value = " + valueBinary.getValue());
> System.out.println("version = " + valueBinary.getVersion());
> {code}
> most of the time this results in the following output (the other times, removeWithVersion returns false):
> {quote}
> value = aValue
> version = 281483566645249
> true
> value = aValue
> version = 281483566645249
> {quote}
> The command works with distributed/local cache.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira