[JBoss JIRA] (ISPN-2823) TransactionManagerLookup is silently ignored with invocation batching
by Jeremy Stone (JIRA)
[ https://issues.jboss.org/browse/ISPN-2823?page=com.atlassian.jira.plugin.... ]
Jeremy Stone commented on ISPN-2823:
------------------------------------
OK. Yes, the test still fails in the same way (with a different message).
Glad you're on the case, though. Thanks.
> TransactionManagerLookup is silently ignored with invocation batching
> ---------------------------------------------------------------------
>
> Key: ISPN-2823
> URL: https://issues.jboss.org/browse/ISPN-2823
> Project: Infinispan
> Issue Type: Bug
> Components: Core API, Transactions
> Affects Versions: 5.2.0.Final, 5.2.1.Final
> Reporter: Jeremy Stone
> Assignee: Tristan Tarrant
> Fix For: 5.3.0.Final
>
> Attachments: infinispan_batch_tx.zip
>
>
> A configured TransactionManagerLookup is ignored when invocation batching is enabled.
> Attempts to put an entry into the cache are greeted with "java.lang.IllegalStateException: This is a tx cache!" despite the presence of an active transaction.
> This seems to make it impossible to use the Tree API, where invocation batch mode is mandatory, in a transactional environment.
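For reference, a minimal sketch of the configuration that exposes the report, assuming the Infinispan 5.x programmatic API; the class name BatchingTxLookupRepro and the printout are illustrative only, not part of the issue:
{code:java}
import javax.transaction.TransactionManager;

import org.infinispan.Cache;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.transaction.lookup.GenericTransactionManagerLookup;

public class BatchingTxLookupRepro {
   public static void main(String[] args) {
      // Enable invocation batching (mandatory for the Tree API) while also
      // configuring an explicit, user-provided TransactionManagerLookup.
      Configuration cfg = new ConfigurationBuilder()
            .invocationBatching().enable()
            .transaction().transactionManagerLookup(new GenericTransactionManagerLookup())
            .build();

      DefaultCacheManager cm = new DefaultCacheManager(cfg);
      Cache<String, String> cache = cm.getCache();

      // Per the report, the configured lookup is silently replaced, so this
      // prints BatchModeTransactionManager instead of the external TM.
      TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
      System.out.println("Effective TM: " + tm.getClass().getName());
      cm.stop();
   }
}
{code}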
[JBoss JIRA] (ISPN-2836) org.jgroups.TimeoutException after invoking MapCombineCommand in Map/Reduce task with 2 nodes
by Alan Field (JIRA)
[ https://issues.jboss.org/browse/ISPN-2836?page=com.atlassian.jira.plugin.... ]
Alan Field edited comment on ISPN-2836 at 6/5/13 11:59 AM:
-----------------------------------------------------------
[~pruivo] My latest version of LoadFileStage.java will solve your decoding problem. If you set the stringData property to "true" in your benchmark file, then the cache values will use String objects. https://github.com/alanfx/radargun/blob/4932376272a0d24cf5258f287da35f7b8...
Also, if you look in the RadarGun log, you should see some statistics from the stage about how large the file is and how the data was written to the cache. Something like this:
17:01:05,882 DEBUG [org.radargun.Master] (main) Starting 'LoadFileStage' on 1 slave nodes. Details: LoadFile {bucket=null, exitBenchmarkOnSlaveFailure=false, filePath=/qa/services/hudson/static_build_env/jdg/data/william-shakespeare-10MB.txt, printWriteStatistics=false, runOnAllSlaves=false, slaves=null, useSmartClassLoading=true, valueSize=8192 }
17:01:06,592 INFO [org.radargun.stages.LoadFileStage] (main) Received responses from all 1 slaves. Durations [0 = 697 milliseconds]
17:01:06,593 INFO [org.radargun.stages.LoadFileStage] (main) --------------------
17:01:06,594 INFO [org.radargun.stages.LoadFileStage] (main) Size of file '/qa/services/hudson/static_build_env/jdg/data/william-shakespeare-10MB.txt' is 11180386 bytes
17:01:06,594 INFO [org.radargun.stages.LoadFileStage] (main) Value size is '8192' which will produce 1365 keys
17:01:06,595 INFO [org.radargun.stages.LoadFileStage] (main) Slave 0 wrote 1365 values to the cache with a total size of 11180386 bytes
17:01:06,595 INFO [org.radargun.stages.LoadFileStage] (main) --------------------
was (Author: afield):
[~pruivo] My latest version of LoadFileStage.java will solve your decoding problem. If you set the stringData property to "true" in your benchmark file, then the cache values will use String objects. https://github.com/alanfx/radargun/blob/4932376272a0d24cf5258f287da35f7b8...
> org.jgroups.TimeoutException after invoking MapCombineCommand in Map/Reduce task with 2 nodes
> ---------------------------------------------------------------------------------------------
>
> Key: ISPN-2836
> URL: https://issues.jboss.org/browse/ISPN-2836
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Execution and Map/Reduce
> Affects Versions: 5.2.1.Final
> Reporter: Alan Field
> Assignee: Pedro Ruivo
> Priority: Blocker
> Labels: onboard
> Fix For: 5.3.0.Final
>
> Attachments: afield-tcp-521-final.txt, benchmark-mapreduce-multifilesize.xml, dist-udp-no-tx.xml, jgroups-udp.xml, udp-edg-perf01.txt, udp-edg-perf02.txt
>
>
> Using RadarGun with two nodes to execute the example WordCount Map/Reduce job against a cache holding ~550 keys, each with a value size of 1 MB, produces a thread deadlock. The cache is distributed, with transactions disabled.
> The TCP transport deadlocks without throwing an exception. Disabling the send queue and setting UNICAST2.conn_expiry_timeout=0 prevents the deadlock, but the job still does not complete: the nodes send "are-you-alive" messages back and forth, and I have seen the following exception:
> {noformat}
> 11:44:29,970 ERROR [org.jgroups.protocols.TCP] (OOB-98,default,edg-perf01-1907) failed sending message to edg-perf02-32536 (76 bytes): java.net.SocketException: Socket closed, cause: null
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:352)
> at org.radargun.cachewrappers.InfinispanMapReduceWrapper.executeMapReduceTask(InfinispanMapReduceWrapper.java:98)
> at org.radargun.stages.MapReduceStage.executeOnSlave(MapReduceStage.java:74)
> at org.radargun.Slave$2.run(Slave.java:103)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.util.concurrent.ExecutionException: org.infinispan.CacheException: org.jgroups.TimeoutException: timeout sending message to edg-perf02-32536
> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
> at java.util.concurrent.FutureTask.get(FutureTask.java:83)
> at org.infinispan.distexec.mapreduce.MapReduceTask$TaskPart.get(MapReduceTask.java:832)
> at org.infinispan.distexec.mapreduce.MapReduceTask.executeMapPhaseWithLocalReduction(MapReduceTask.java:477)
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:350)
> ... 9 more
> Caused by: org.infinispan.CacheException: org.jgroups.TimeoutException: timeout sending message to edg-perf02-32536
> at org.infinispan.util.Util.rewrapAsCacheException(Util.java:541)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:186)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:515)
> 11:44:29,978 ERROR [org.jgroups.protocols.TCP] (Timer-3,default,edg-perf01-1907) failed sending message to edg-perf02-32536 (60 bytes): java.net.SocketException: Socket closed, cause: null
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:175)
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:197)
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:254)
> at org.infinispan.remoting.rpc.RpcManagerImpl.access$000(RpcManagerImpl.java:80)
> at org.infinispan.remoting.rpc.RpcManagerImpl$1.call(RpcManagerImpl.java:288)
> ... 5 more
> Caused by: org.jgroups.TimeoutException: timeout sending message to edg-perf02-32536
> at org.jgroups.blocks.MessageDispatcher.sendMessage(MessageDispatcher.java:390)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:301)
> 11:44:29,979 ERROR [org.jgroups.protocols.TCP] (Timer-4,default,edg-perf01-1907) failed sending message to edg-perf02-32536 (63 bytes): java.net.SocketException: Socket closed, cause: null
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:179)
> ... 11 more
> {noformat}
> With UDP transport, both threads are deadlocked. I will attach thread dumps from runs using TCP and UDP transport.
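For context, the job in question follows the standard Infinispan 5.2 Map/Reduce pattern. A self-contained sketch of such a WordCount job (class and method names here are illustrative, not the exact RadarGun code):
{code:java}
import java.util.Iterator;
import java.util.Map;
import java.util.StringTokenizer;

import org.infinispan.Cache;
import org.infinispan.distexec.mapreduce.Collector;
import org.infinispan.distexec.mapreduce.MapReduceTask;
import org.infinispan.distexec.mapreduce.Mapper;
import org.infinispan.distexec.mapreduce.Reducer;

public class WordCountJob {

   // Emits (word, 1) for every token of a cached text chunk.
   static class WordCountMapper implements Mapper<String, String, String, Integer> {
      @Override
      public void map(String key, String value, Collector<String, Integer> collector) {
         StringTokenizer tokens = new StringTokenizer(value);
         while (tokens.hasMoreTokens())
            collector.emit(tokens.nextToken(), 1);
      }
   }

   // Sums the per-word counts emitted by the map phase.
   static class WordCountReducer implements Reducer<String, Integer> {
      @Override
      public Integer reduce(String word, Iterator<Integer> counts) {
         int sum = 0;
         while (counts.hasNext())
            sum += counts.next();
         return sum;
      }
   }

   // execute() is where the reported TimeoutException/deadlock shows up
   // once the map phase is distributed across the two nodes.
   static Map<String, Integer> run(Cache<String, String> cache) {
      return new MapReduceTask<String, String, String, Integer>(cache)
            .mappedWith(new WordCountMapper())
            .reducedWith(new WordCountReducer())
            .execute();
   }
}
{code}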
[JBoss JIRA] (ISPN-2823) TransactionManagerLookup is silently ignored with invocation batching
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-2823?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant commented on ISPN-2823:
---------------------------------------
Ignore me.
The problem resides in TransactionManagerFactory.construct(), where the transaction manager is forced to be an instance of BatchModeTransactionManager, overriding any user-provided TransactionManager.
[JBoss JIRA] (ISPN-3163) Replacing entry via HotRod which was initially stored via Memcached does not change CAS
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-3163?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-3163:
-----------------------------------
Status: Pull Request Sent (was: Coding In Progress)
Git Pull Request: https://github.com/infinispan/infinispan/pull/1876
> Replacing entry via HotRod which was initially stored via Memcached does not change CAS
> ---------------------------------------------------------------------------------------
>
> Key: ISPN-3163
> URL: https://issues.jboss.org/browse/ISPN-3163
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 5.3.0.CR1
> Reporter: Martin Gencur
> Assignee: Galder Zamarreño
> Fix For: 5.3.0.Final
>
>
> Users might expect the CAS (check-and-set) operation to work even in compatibility mode, which is currently not the case in the following scenario:
> 1) store a key/value via Memcached
> 2) change the value via HotRod or Embedded
> 3) use Memcached's CAS operation
> In step #3, the memcached client updates the value even though it was changed by another client in the meantime; the CAS operation was supposed to change it only if the value had not been modified in the interim.
> The following test snippet shows the problem:
> {code:java}
> public void testMemcachedPutHotRodEmbbeddedReplaceMemcachedCASTest() throws Exception {
>    final String key1 = "5";
>    // 1. Put with Memcached
>    Future<Boolean> f = cacheFactory.getMemcachedClient().set(key1, 0, "v1");
>    assertTrue(f.get(60, TimeUnit.SECONDS));
>    CASValue oldValue = cacheFactory.getMemcachedClient().gets(key1);
>    // 2. Replace with Hot Rod
>    VersionedValue versioned = cacheFactory.getHotRodCache().getVersioned(key1);
>    assertTrue(cacheFactory.getHotRodCache().replaceWithVersion(key1, "v2", versioned.getVersion()));
>    // 3. Replace with Embedded
>    assertTrue(cacheFactory.getEmbeddedCache().replace(key1, "v2", "v3"));
>    // 4. Get with Memcached and verify value/CAS
>    CASValue newValue = cacheFactory.getMemcachedClient().gets(key1);
>    assertEquals("v3", newValue.getValue());
>    assertTrue("The version (CAS) should have changed", oldValue.getCas() != newValue.getCas());
>    // <---- fails here
> }
> {code}
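To spell out the contract the failing assertion relies on, here is a hedged sketch against the spymemcached client that the test's gets()/CASValue calls suggest; StaleCasCheck and staleCasRejected are hypothetical names introduced only for illustration:
{code:java}
import net.spy.memcached.CASResponse;
import net.spy.memcached.CASValue;
import net.spy.memcached.MemcachedClient;

public class StaleCasCheck {

   // A cas() issued with a stale version id must come back EXISTS, not OK.
   // With this bug, the id returned by gets() never changes after a
   // Hot Rod/Embedded replace, so the stale CAS incorrectly succeeds.
   public static boolean staleCasRejected(MemcachedClient client, String key) {
      CASValue<Object> before = client.gets(key);   // capture current CAS id
      // ... value is replaced concurrently via Hot Rod or Embedded here ...
      CASResponse result = client.cas(key, before.getCas(), "v-stale");
      return result == CASResponse.EXISTS;          // the expected outcome
   }
}
{code}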
[JBoss JIRA] (ISPN-2823) TransactionManagerLookup is silently ignored with invocation batching
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-2823?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant commented on ISPN-2823:
---------------------------------------
Jeremy, this seems to work with 5.3.0.CR1. Can you please verify it at your end?
[JBoss JIRA] (ISPN-2836) org.jgroups.TimeoutException after invoking MapCombineCommand in Map/Reduce task with 2 nodes
by Alan Field (JIRA)
[ https://issues.jboss.org/browse/ISPN-2836?page=com.atlassian.jira.plugin.... ]
Alan Field commented on ISPN-2836:
----------------------------------
[~pruivo] My latest version of LoadFileStage.java will solve your decoding problem. If you set the stringData property to "true" in your benchmark file, then the cache values will use String objects. https://github.com/alanfx/radargun/blob/4932376272a0d24cf5258f287da35f7b8...
[JBoss JIRA] (ISPN-3194) Create diagnostics dumping tool
by Manik Surtani (JIRA)
Manik Surtani created ISPN-3194:
-----------------------------------
Summary: Create diagnostics dumping tool
Key: ISPN-3194
URL: https://issues.jboss.org/browse/ISPN-3194
Project: Infinispan
Issue Type: Feature Request
Components: Core API
Affects Versions: 5.3.0.Final
Reporter: Manik Surtani
Assignee: Mircea Markus
Fix For: 6.0.0.Alpha1, 6.0.0.Final
A simple script that an end-user can run against an existing cluster, which will:
1. Connect to a given node via JMX
2. Get a list of all caches on all nodes
3. Run JMX calls on each cache on each node to capture diagnostic data
4. Serialise this data (maybe as JSON?) and zip it up.
This will allow end-users to share such stats for debugging and perf tuning.
Diagnostic data to be captured would include all JMX info on hit/miss ratio, RPC performance, transaction commit/rollback rates, config details, time budgeting info, etc.
This tool would require that JMX statistics be enabled and left collecting for a while before the snapshot is captured.
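Steps 1 through 3 need nothing beyond the standard JMX remote API. A rough sketch (the endpoint URL and the DiagnosticsDump class name are placeholders, and step 4's JSON/zip work is left as a comment):
{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import javax.management.MBeanAttributeInfo;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class DiagnosticsDump {
   public static void main(String[] args) throws Exception {
      // Placeholder endpoint; a real tool would take host/port as arguments.
      JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
      JMXConnector connector = JMXConnectorFactory.connect(url);
      MBeanServerConnection conn = connector.getMBeanServerConnection();

      // Infinispan registers its cache MBeans under the org.infinispan domain.
      Set<ObjectName> beans = conn.queryNames(new ObjectName("org.infinispan:*"), null);
      Map<String, Map<String, Object>> snapshot = new HashMap<String, Map<String, Object>>();
      for (ObjectName bean : beans) {
         Map<String, Object> attrs = new HashMap<String, Object>();
         for (MBeanAttributeInfo ai : conn.getMBeanInfo(bean).getAttributes()) {
            if (!ai.isReadable())
               continue;
            try {
               attrs.put(ai.getName(), conn.getAttribute(bean, ai.getName()));
            } catch (Exception e) {
               // Some attributes throw when statistics are disabled; skip them.
            }
         }
         snapshot.put(bean.getCanonicalName(), attrs);
      }
      connector.close();

      // Step 4 (serialise to JSON and zip) would go here.
      System.out.println(snapshot);
   }
}
{code}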
[JBoss JIRA] (ISPN-3193) Capture time budgeting information
by Manik Surtani (JIRA)
Manik Surtani created ISPN-3193:
-----------------------------------
Summary: Capture time budgeting information
Key: ISPN-3193
URL: https://issues.jboss.org/browse/ISPN-3193
Project: Infinispan
Issue Type: Feature Request
Components: Core API
Affects Versions: 5.3.0.Final
Reporter: Manik Surtani
Assignee: Mircea Markus
Fix For: 6.0.0.Alpha1, 6.0.0.Final
This should be the most recent timing for each major subsystem, for each type of call. E.g.,
PUT: 10ms (locking), 10ms (datacontainer), 20ms (RPC), 30ms (CacheStore).
GET: 0ms (locking), 5ms (datacontainer), 0ms (RPC), 10ms (CacheStore).
etc.
Could be implemented as a simple ring buffer in a dedicated component (TimeBudgetMonitor?) that just stores the most recent N calls, and that's it. Cheap to capture, cheap to store.
This data could then be made available via a JMX operation on TimeBudgetMonitor. This is extremely valuable for tuning and debugging perf issues.
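A hedged sketch of such a ring buffer; the class exists nowhere in Infinispan, and its name and structure are assumptions taken from this description:
{code:java}
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicLongArray;

public class TimeBudgetMonitor {
   private final AtomicLongArray samplesNanos; // most recent N timings
   private final AtomicLong writeIndex = new AtomicLong();

   public TimeBudgetMonitor(int capacity) {
      samplesNanos = new AtomicLongArray(capacity);
   }

   // Cheap to capture: one counter increment plus one array write.
   public void record(long durationNanos) {
      int slot = (int) (writeIndex.getAndIncrement() % samplesNanos.length());
      samplesNanos.set(slot, durationNanos);
   }

   // Would back the proposed JMX operation exposing recent timings.
   public long[] snapshot() {
      long[] out = new long[samplesNanos.length()];
      for (int i = 0; i < out.length; i++)
         out[i] = samplesNanos.get(i);
      return out;
   }
}
{code}
In practice there would be one such buffer per (call type, subsystem) pair, which keeps both capture and storage cost constant.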
[JBoss JIRA] (ISPN-3163) Replacing entry via HotRod which was initially stored via Memcached does not change CAS
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-3163?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño commented on ISPN-3163:
----------------------------------------
The fix works; a pull request should be coming up soon.