[JBoss JIRA] (ISPN-3455) Cache replication not warranted under load
by Lukasz Szelag (JIRA)
[ https://issues.jboss.org/browse/ISPN-3455?page=com.atlassian.jira.plugin.... ]
Lukasz Szelag commented on ISPN-3455:
-------------------------------------
Giovanni, you are right - it's not Infinispan but inconsistent hash codes of enum constants that cause this behavior: enums inherit the identity-based hashCode() from Object, so the same constant can hash differently in each JVM. I ended up creating a wrapper class that calculates the hash code from the underlying enum's class name and the enum constant's name. This issue affects all classes that inherit hashCode() from Object.
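A minimal sketch of such a wrapper (hypothetical names; the real class may differ):
{code:java}
// Sketch only: a serializable cache key whose hash is derived from the enum's
// class name and constant name (both stable across JVMs), instead of the
// identity hash inherited from Object.
import java.io.Serializable;

public final class EnumKey implements Serializable {

   private static final long serialVersionUID = 1L;

   private final Enum<?> constant;

   public EnumKey(Enum<?> constant) {
      this.constant = constant;
   }

   @Override
   public int hashCode() {
      return 31 * constant.getDeclaringClass().getName().hashCode()
            + constant.name().hashCode();
   }

   @Override
   public boolean equals(Object o) {
      // Enum constants are singletons within a JVM, and deserialization
      // resolves them back to the same singleton, so == is safe here.
      return o instanceof EnumKey && ((EnumKey) o).constant == constant;
   }
}
{code}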
Thanks a lot for your help in resolving this!
Lukasz
> Cache replication not warranted under load
> ------------------------------------------
>
> Key: ISPN-3455
> URL: https://issues.jboss.org/browse/ISPN-3455
> Project: Infinispan
> Issue Type: Feature Request
> Affects Versions: 5.3.0.Final, 6.0.0.Alpha3
> Environment: JSE 1.6.0_45; Windows 7
> Reporter: Lukasz Szelag
> Assignee: Mircea Markus
> Attachments: infinispan.zip
>
>
> Problem:
> When running a replicated cache and repeatedly calling a cacheable method (using the Spring cache abstraction), Infinispan enters an infinite replication loop. This can be confirmed by observing replication counts growing over time even though there are no cache misses.
> Expected behavior:
> Caches shouldn't be replicated when there is a cache hit.
> Test case:
> - 3 cluster members; asynchronous replication with a replication queue
> - a cacheable method is executed repeatedly using 2 different keys
> Notes:
> - for some reason, this issue only occurs when using Enum arguments for a cache key; I was not able to reproduce it with int or String types (see com.designamus.infinispan.Main.works() and the sketch after these notes)
> - the behavior is not deterministic (random), which points to a race condition
> - the problem does not seem to be related to Spring's default cache key generator; I was able to reproduce the same behavior with a custom, thread-safe cache key generator
> - the cacheable method is executed only twice; once both keys are stored in the cache, subsequent invocations retrieve the stored values from it (this can be confirmed by inspecting the log file)
> - the cache doesn't expire and entries are not evicted
> - memory usage grows over time, eventually causing an OutOfMemoryError on a heavily loaded system
> - since the issue is random in nature, it may take 3-4 attempts to reproduce it; I was able to reproduce this behavior numerous times
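> A minimal sketch of the kind of cacheable method the test exercises (illustrative names only; the attached project has the real code):
> {code:java}
> import org.springframework.cache.annotation.Cacheable;
> import org.springframework.stereotype.Service;
>
> @Service
> public class LookupService {
>
>    public enum Key { FIRST, SECOND }
>
>    // Invoked repeatedly with both constants; after the first two misses,
>    // every call should be a cache hit and should trigger no replication.
>    @Cacheable("MyCache")
>    public String lookup(Key key) {
>       return "value-" + key.name();   // stands in for an expensive computation
>    }
> }
> {code}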
> Steps to reproduce:
> 1. Build a test project (mvn clean compile)
> 2. Execute /run.sh (this will spawn 3 JVMs)
> 3. Start JConsole to monitor 3 cluster members (jconsole localhost:17001 localhost:17002 localhost:17003)
> 4. Monitor "replicationCount" attribute under RpcManager for cache "MyCache" for all JVMs (see /replication-counts.png)
> 5. Observe that replication counts grow over time
> 6. Observe that all caches are of size 2 and there are no cache misses (see /cache-statistics.png)
> If the issue cannot be reproduced (replication counts stay at the same level):
> 7. Terminate all 3 JVM processes (as a convenience you can execute /stop.sh)
> 8. Repeat steps 2 through 6 above
> When testing the above scenario in distribution mode, I observed some other anomalies (e.g. the cacheable method was executed multiple times, as if the value was not in the cache). While this may be related, it deserves a separate JIRA.
[JBoss JIRA] (ISPN-3536) Exception when handling command SingleRpcCommand
by Mark De Leon (JIRA)
[ https://issues.jboss.org/browse/ISPN-3536?page=com.atlassian.jira.plugin.... ]
Mark De Leon updated ISPN-3536:
-------------------------------
Attachment: pearson-clustered-xsite-va.xml
> Exception when handling command SingleRpcCommand
> ------------------------------------------------
>
> Key: ISPN-3536
> URL: https://issues.jboss.org/browse/ISPN-3536
> Project: Infinispan
> Issue Type: Bug
> Components: Cross-Site Replication
> Affects Versions: 5.3.0.Final, 6.0.0.Alpha4
> Environment: Linux ip-10-252-170-214 3.4.43-43.43.amzn1.x86_64 #1 SMP Mon May 6 18:04:41 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
> java version "1.6.0_24"
> OpenJDK Runtime Environment (IcedTea6 1.11.11.90) (amazon-62.1.11.11.90.55.amzn1-x86_64)
> OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)
> Reporter: Mark De Leon
> Assignee: Mircea Markus
> Attachments: pearson-clustered-xsite-bos.xml, pearson-clustered-xsite-va.xml
>
>
> The following can be referenced from the forum post by Chris Riley about ISPN000071.
> We are trying to insert a cache value into a distributed cache that is replicated via Cross-Site Replication in Infinispan 6.0.0.Alpha4. The cross-site replication has two nodes in the global cluster.
> When a value is put on the local site, we get the following error on the remote site. We just upgraded to 6.0.0.Alpha4 because of JIRA issue ISPN-3346, which we reproduced in 5.3.0.Final. I have attached our JBoss configuration files for review.
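> A minimal sketch of the operation that triggers the warning below (cache name, key, and value are taken from the log line; obtaining the cache manager from the attached configuration is assumed):
> {code:java}
> // Sketch only -- setup of the EmbeddedCacheManager is assumed.
> Cache<Integer, byte[]> cache = cacheManager.getCache("importantCache");
> cache.put(1, "payload".getBytes());   // backed up cross-site; NPEs on the remote site
> {code}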
>
> 17:22:41,798 WARN [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (Incoming-2,shared=tcp) ISPN000071: Caught exception when handling command SingleRpcCommand{cacheName='importantCache', command=PutKeyValueCommand{key=1, value=[B@363f983f, flags=null, putIfAbsent=false, metadata=MimeMetadata(contentType=text/plain), successful=true, ignorePreviousValue=false}}: java.lang.NullPointerException
> at org.infinispan.xsite.BackupReceiverRepositoryImpl.isBackupForRemoteCache(BackupReceiverRepositoryImpl.java:112) [infinispan-core-6.0.0.Alpha4.jar:6.0.0.Alpha4]
> at org.infinispan.xsite.BackupReceiverRepositoryImpl.getBackupCacheManager(BackupReceiverRepositoryImpl.java:95) [infinispan-core-6.0.0.Alpha4.jar:6.0.0.Alpha4]
> at org.infinispan.xsite.BackupReceiverRepositoryImpl.handleRemoteCommand(BackupReceiverRepositoryImpl.java:67) [infinispan-core-6.0.0.Alpha4.jar:6.0.0.Alpha4]
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromRemoteSite(CommandAwareRpcDispatcher.java:234) [infinispan-core-6.0.0.Alpha4.jar:6.0.0.Alpha4]
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:209) [infinispan-core-6.0.0.Alpha4.jar:6.0.0.Alpha4]
> at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:460) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:377) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:247) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:665) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.blocks.mux.MuxUpHandler.up(MuxUpHandler.java:130) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.JChannel.up(JChannel.java:719) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1002) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.relay.RELAY2.deliver(RELAY2.java:612) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.relay.RELAY2.route(RELAY2.java:508) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.relay.RELAY2.handleMessage(RELAY2.java:483) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.relay.RELAY2.handleRelayMessage(RELAY2.java:461) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.relay.Relayer$Bridge.receive(Relayer.java:263) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.JChannel.up(JChannel.java:749) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1006) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.Protocol.up(Protocol.java:406) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.FRAG2.up(FRAG2.java:195) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:439) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:439) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.Protocol.up(Protocol.java:406) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:304) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.UNICAST.removeAndDeliver(UNICAST.java:748) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.UNICAST.handleBatchReceived(UNICAST.java:704) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.UNICAST.up(UNICAST.java:454) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.Protocol.up(Protocol.java:406) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.Protocol.up(Protocol.java:406) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.FD.up(FD.java:274) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.Protocol.up(Protocol.java:406) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.MERGE2.up(MERGE2.java:223) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.Protocol.up(Protocol.java:406) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.Protocol.up(Protocol.java:406) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.TP.passBatchUp(TP.java:1409) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.TP$BatchHandler.run(TP.java:1564) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) [rt.jar:1.6.0_24]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) [rt.jar:1.6.0_24]
> at java.lang.Thread.run(Thread.java:679) [rt.jar:1.6.0_24]
[JBoss JIRA] (ISPN-3536) Exception when handling command SingleRpcCommand
by Mark De Leon (JIRA)
[ https://issues.jboss.org/browse/ISPN-3536?page=com.atlassian.jira.plugin.... ]
Mark De Leon updated ISPN-3536:
-------------------------------
Attachment: pearson-clustered-xsite-bos.xml
[JBoss JIRA] (ISPN-3536) Exception when handling command SingleRpcCommand
by Mark De Leon (JIRA)
Mark De Leon created ISPN-3536:
----------------------------------
Summary: Exception when handling command SingleRpcCommand
Key: ISPN-3536
URL: https://issues.jboss.org/browse/ISPN-3536
Project: Infinispan
Issue Type: Bug
Components: Cross-Site Replication
Affects Versions: 6.0.0.Alpha4, 5.3.0.Final
Environment: Linux ip-10-252-170-214 3.4.43-43.43.amzn1.x86_64 #1 SMP Mon May 6 18:04:41 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
java version "1.6.0_24"
OpenJDK Runtime Environment (IcedTea6 1.11.11.90) (amazon-62.1.11.11.90.55.amzn1-x86_64)
OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)
Reporter: Mark De Leon
Assignee: Mircea Markus
[JBoss JIRA] (ISPN-3431) Lost version info during state transfer causes overwrite of newer data that joining node has read from store
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-3431?page=com.atlassian.jira.plugin.... ]
Mircea Markus closed ISPN-3431.
-------------------------------
Resolution: Rejected
This is not a bug; we don't support preloading from local cache stores at the moment.
The feature you're after is described here: https://community.jboss.org/wiki/ControlledClusterShutdownWithDataRestore...
And the corresponding JIRA is ISPN-3351.
> Lost version info during state transfer causes overwrite of newer data that joining node has read from store
> ------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-3431
> URL: https://issues.jboss.org/browse/ISPN-3431
> Project: Infinispan
> Issue Type: Bug
> Components: State transfer
> Affects Versions: 6.0.0.Alpha3
> Environment: uname -s ; cat /etc/redhat-release ; java -version
> Linux
> Red Hat Enterprise Linux Server release 5.9 (Tikanga)
> java version "1.6.0_30"
> Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)
> Reporter: Nikolay Martynov
> Assignee: Pedro Ruivo
> Priority: Critical
> Labels: jdg62blocker
> Fix For: 6.0.0.CR1
>
>
> When state transfer sends data to a newly joining node, no version information is provided, even though this information is available to the cluster (for example, loaded from a store). When the joining node has loaded a newer version of the data from its store than the cluster has, that newer version is overwritten, because no version information accompanies the state transfer.
> Scenario:
> {noformat}
> 1. Start node1, node2, node3
> 2. Put {A=>A1} from node1
> 3. Put {B=>B1} from node2
> 4. Put {C=>C1} from node3
> 5. Gracefully shutdown node1 saving the data
> 6. Gracefully shutdown node2 saving the data
> 7. Put {C=>C2} from node3
> 8. Gracefully shutdown node3 saving the data
> 9. Start node1 loading the data
> 10. Start node2 loading the data
> 11. Start node3 loading the data
> {noformat}
> Loaded from the store on node3 - the version information shows it is newer than what nodes 1 and 2 have:
> {noformat}
> org.infinispan.interceptors.CallInterceptor
> Executing command: PutKeyValueCommand{key=C, value=C2, flags=[CACHE_MODE_LOCAL, SKIP_LOCKING, SKIP_CACHE_STORE, SKIP_INDEXING, SKIP_OWNERSHIP_CHECK, IGNORE_RETURN_VALUES], putIfAbsent=false, metadata=EmbeddedMetadata{version=SimpleClusteredVersion{topologyId=8, version=2}}, successful=true, ignorePreviousValue=false}.
> {noformat}
> During state transfer the older value C1 is received with null version information:
> {noformat}
> org.infinispan.statetransfer.StateTransferInterceptor
> handleNonTxWriteCommand for command PutKeyValueCommand{key=C, value=C1, flags=[CACHE_MODE_LOCAL, SKIP_REMOTE_LOOKUP, PUT_FOR_STATE_TRANSFER, SKIP_SHARED_CACHE_STORE, SKIP_OWNERSHIP_CHECK, IGNORE_RETURN_VALUES, SKIP_XSITE_BACKUP], putIfAbsent=false, metadata=EmbeddedMetadata{version=null}, successful=true, ignorePreviousValue=false}
> {noformat}
> As a result, the newer value is replaced with the older one:
> {noformat}
> org.infinispan.interceptors.EntryWrappingInterceptor
> About to commit entry ClusteredRepeatableReadEntry(323f265b){key=C, value=C1, oldValue=C2, isCreated=false, isChanged=true, isRemoved=false, isValid=true, skipRemoteGet=true, metadata=EmbeddedMetadata{version=SimpleClusteredVersion{topologyId=8, version=2}}}
> {noformat}
> As far as I can see, any key that a node did not write itself but received during resync has version=null after loading from the store.
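> For illustration, this is the kind of versioned write the first log excerpt shows (a sketch assuming Infinispan's AdvancedCache and EmbeddedMetadata APIs; the version values are the ones from the log):
> {code:java}
> import org.infinispan.container.versioning.SimpleClusteredVersion;
> import org.infinispan.metadata.EmbeddedMetadata;
> import org.infinispan.metadata.Metadata;
>
> // node3's entry carries topologyId=8, version=2 when loaded from its store;
> // during state transfer the same entry reaches the joiner with version=null,
> // so this metadata is lost and the older C1 wins.
> Metadata versioned = new EmbeddedMetadata.Builder()
>       .version(new SimpleClusteredVersion(8, 2))
>       .build();
> advancedCache.put("C", "C2", versioned);
> {code}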
> Config:
> {code:xml}
> <?xml version="1.0" encoding="UTF-8"?>
> <!--
> This is just a very simplistic example configuration file. For more information, please see
> http://docs.jboss.org/infinispan/5.3/configdocs/infinispan-config-5.3.html
> -->
> <infinispan
>       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>       xsi:schemaLocation="urn:infinispan:config:6.0 http://docs.jboss.org/infinispan/schemas/infinispan-config-6.0.xsd"
>       xmlns="urn:infinispan:config:6.0">
>    <global>
>       <globalJmxStatistics enabled="true" jmxDomain="Infinispan" />
>       <transport>
>          <properties>
>             <property name="configurationFile" value="jgroups.xml" />
>          </properties>
>       </transport>
>    </global>
>    <namedCache name="routing_table">
>       <clustering mode="REPL">
>          <stateTransfer awaitInitialTransfer="true" fetchInMemoryState="true"/>
>          <sync replTimeout="1000"/>
>          <!--async useReplQueue="true" replQueueMaxElements="100" /-->
>       </clustering>
>       <loaders passivation="false" shared="false" preload="true">
>          <singleFileStore location="target/routing_table${infinispan.store_name:not_specified}/">
>          </singleFileStore>
>       </loaders>
>       <versioning enabled="true" versioningScheme="SIMPLE"/>
>       <transaction
>             transactionMode="TRANSACTIONAL"
>             autoCommit="true"
>             transactionManagerLookupClass="org.infinispan.transaction.lookup.GenericTransactionManagerLookup"
>             lockingMode="OPTIMISTIC"/>
>       <locking writeSkewCheck="true" isolationLevel="REPEATABLE_READ"/>
>    </namedCache>
> </infinispan>
> {code}
[JBoss JIRA] (ISPN-604) Re-design CacheStore transactions
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-604?page=com.atlassian.jira.plugin.s... ]
Mircea Markus resolved ISPN-604.
--------------------------------
Resolution: Rejected
Making the store participate in the XA transaction only works if the TM and storage are collocated, but not in the general case. E.g.:
- a tx running on node A may, as part of that tx, update the local storage (local cache store) on node B for a distributed cache
- XA is built around the idea that B's resource manager is collocated (same process) with the TransactionManager instance running on A, which is not the case here
> Re-design CacheStore transactions
> ----------------------------------
>
> Key: ISPN-604
> URL: https://issues.jboss.org/browse/ISPN-604
> Project: Infinispan
> Issue Type: Sub-task
> Components: Loaders and Stores, Transactions
> Affects Versions: 4.0.0.Final, 4.1.0.CR2
> Reporter: Mircea Markus
> Assignee: Mircea Markus
> Labels: as7-ignored, modshape
> Fix For: 6.0.0.Beta2, 6.0.0.CR1
>
>
> The current (4.1.x) transaction implementation in CacheStores is broken in several ways:
> 1st problem:
> {code}
> AbstractCacheStore.prepare:
> public void prepare(List<? extends Modification> mods, GlobalTransaction tx, boolean isOnePhase) throws CacheLoaderException {
>    if (isOnePhase) {
>       applyModifications(mods);
>    } else {
>       transactions.put(tx, mods);
>    }
> }
> {code}
> If this is 1PC, we apply the modifications in the prepare phase - we should do it in the commit phase (as JTA does).
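> A minimal sketch of the suggested behavior (using the same methods and transactions map shown above): always buffer at prepare time and apply only at commit, so 1PC collapses into prepare plus an immediate commit:
> {code:java}
> public void prepare(List<? extends Modification> mods, GlobalTransaction tx, boolean isOnePhase) throws CacheLoaderException {
>    transactions.put(tx, mods);   // always buffer, never apply here
>    if (isOnePhase) {
>       commit(tx);                // 1PC: prepare and commit collapse into one call
>    }
> }
>
> public void commit(GlobalTransaction tx) throws CacheLoaderException {
>    List<? extends Modification> mods = transactions.remove(tx);
>    if (mods != null) {
>       applyModifications(mods);  // modifications hit the store only at commit time
>    }
> }
> {code}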
> 2nd problem:
> This currently manifests during commit/rollback with the JdbcXyzCacheStores, but it is really a more general cache store issue.
> When using a TransactionManager, AbstractCacheStore.commit is called internally during TM.commit and tries to apply all the modifications that happened during that transaction.
> Within the scope of AbstractCacheStore.commit, the JdbcStore obtains a connection from a DataSource and tries to write the modifications on that connection.
> Now, if the DataSource is managed (e.g. by an application server), then on the DS.getConnection call the application server tries to enlist the connection with the ongoing transaction by calling Transaction.enlistResource(XAResource xaRes) [1].
> This method fails with an IllegalStateException, because the transaction's status is preparing (see javax.transaction.Transaction.enlistResource).
> Suggested fix:
> - the modifications should be registered to the transaction as they happen (vs. during prepare/commit, as happens now)
> - this requires API changes in CacheStore, e.g.
> void store(InternalCacheEntry entry)
> should become
> void store(InternalCacheEntry entry, GlobalTransaction gtx)
> (gtx would be null if this is not a transactional call).
> [1] This behavior is specified by the JDBC 2.0 Standard Extension API, chapter 7 (distributed transactions).
[JBoss JIRA] (ISPN-604) Re-design CacheStore transactions
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-604?page=com.atlassian.jira.plugin.s... ]
Mircea Markus reopened ISPN-604:
--------------------------------
[JBoss JIRA] (ISPN-3535) ConfigurationBuilder.withProperties adds empty address when SERVER_LIST not defined
by Radim Vansa (JIRA)
Radim Vansa created ISPN-3535:
---------------------------------
Summary: ConfigurationBuilder.withProperties adds empty address when SERVER_LIST not defined
Key: ISPN-3535
URL: https://issues.jboss.org/browse/ISPN-3535
Project: Infinispan
Issue Type: Bug
Components: Remote protocols
Affects Versions: 6.0.0.Beta1
Reporter: Radim Vansa
Assignee: Galder Zamarreño
Priority: Minor
ConfigurationBuilder.withProperties uses getProperty(SERVER_LIST, "") with an empty string as the default and passes the result to addServers. This adds one server with an empty address (= localhost) and the default port.
Because of that, it's not possible to use withProperties and still set the servers properly. However, IPv6 addresses cannot be parsed by addServers.
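A minimal illustration of the reported behavior (a sketch assuming the standard Hot Rod client API from infinispan-client-hotrod):
{code:java}
import java.util.Properties;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

// No infinispan.client.hotrod.server_list property is set, yet withProperties
// effectively calls addServers(""), registering a single server with an empty
// host (resolved as localhost) and the default port.
Properties props = new Properties();            // SERVER_LIST deliberately absent
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.withProperties(props);                  // silently adds the empty address
RemoteCacheManager rcm = new RemoteCacheManager(builder.build());
// expected: no server added; actual: one server, localhost with the default port
{code}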
[JBoss JIRA] (ISPN-3527) Transaction boundary commands can be completed before state transfer applies an active transaction
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-3527?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-3527:
-------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/2090
> Transaction boundary commands can be completed before state transfer applies an active transaction
> --------------------------------------------------------------------------------------------------
>
> Key: ISPN-3527
> URL: https://issues.jboss.org/browse/ISPN-3527
> Project: Infinispan
> Issue Type: Bug
> Components: State transfer
> Affects Versions: 5.2.7.Final
> Reporter: Erik Salter
> Assignee: Dan Berindei
> Fix For: 6.0.0.Beta2, 6.0.0.Final
>
>
> There is a condition where a transaction boundary command, like a RollbackCommand, can be processed by the new owner of a tx-encapsulated key before that owner applies the active transaction.
> On a view change:
> The forwarding node waits until its transaction data for the new topology ID is set before issuing the command.
> 2013-09-17 14:28:33,725 TRACE [org.infinispan.interceptors.InvocationContextInterceptor] (OOB-350,session-resource-cluster,east-dht5-60816(CMC-Denver-CO)) Invoked with command RollbackCommand {gtx=GlobalTransaction:<east-dht5-60816(CMC-Denver-CO)>:48002:local, cacheName='eigAllocation', topologyId=-1} and InvocationContext [org.infinispan.context.impl.LocalTxInvocationContext@1f225e51]
> 2013-09-17 14:28:33,727 TRACE [org.infinispan.statetransfer.StateTransferInterceptor] (OOB-350,session-resource-cluster,east-dht5-60816(CMC-Denver-CO)) handleTopologyAffectedCommand for command RollbackCommand {gtx=GlobalTransaction:<east-dht5-60816(CMC-Denver-CO)>:48002:local, cacheName='eigAllocation', topologyId=-1}
> 2013-09-17 14:28:33,727 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (OOB-350,session-resource-cluster,east-dht5-60816(CMC-Denver-CO)) Waiting for transaction data for topology 26, current topology is 25
> ...
> 2013-09-17 14:28:34,101 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (OOB-350,session-resource-cluster,east-dht5-60816(CMC-Denver-CO)) east-dht5-60816(CMC-Denver-CO) broadcasting call RollbackCommand {gtx=GlobalTransaction:<east-dht5-60816(CMC-Denver-CO)>:48002:local, cacheName='eigAllocation', topologyId=26} to recipient list [west-dht4-48045(CH2-Chicago-IL), east-dht5-60816(CMC-Denver-CO), east-dht2-60243(CMC-Denver-CO)]
> However, the receiving node has not yet finished applying the transactions from the forwarding node for this topology ID, so the tx cannot be found, resulting in a stale tx and an unusable key.
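> Conceptually, the receiver is missing the same wait the sender performs (a sketch only, assuming the internal StateTransferLock API; the actual fix is in the pull request above):
> {code:java}
> // Before perform()ing a remote transaction boundary command, wait until
> // transaction data for the command's topology has been applied, mirroring
> // the sender's "Waiting for transaction data for topology N" step.
> stateTransferLock.waitForTransactionData(command.getTopologyId());
> command.perform(ctx);
> {code}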
> 2013-09-17 14:28:34,116 TRACE [org.infinispan.remoting.InboundInvocationHandlerImpl] (OOB-153,session-resource-cluster,west-dht4-48045(CH2-Chicago-IL)) Calling perform() on RollbackCommand {gtx=GlobalTransaction:<east-dht5-60816(CMC-Denver-CO)>:48002:remote, cacheName='eigAllocation', topologyId=26}
> 2013-09-17 14:28:34,116 TRACE [org.infinispan.commands.tx.AbstractTransactionBoundaryCommand] (OOB-153,session-resource-cluster,west-dht4-48045(CH2-Chicago-IL)) Did not find a RemoteTransaction for GlobalTransaction:<east-dht5-60816(CMC-Denver-CO)>:48002:remote
> 2013-09-17 14:28:34,116 TRACE [org.infinispan.remoting.InboundInvocationHandlerImpl] (OOB-153,session-resource-cluster,west-dht4-48045(CH2-Chicago-IL)) About to send back response SuccessfulResponse{responseValue=null} for command RollbackCommand {gtx=GlobalTransaction:<east-dht5-60816(CMC-Denver-CO)>:48002:remote, cacheName='eigAllocation', topologyId=26}
> ...
> 2013-09-17 14:28:34,161 TRACE [org.infinispan.transaction.TransactionTable] (OOB-408,session-resource-cluster,west-dht4-48045(CH2-Chicago-IL)) Created and registered remote transaction RemoteTransaction{modifications=[PutKeyValueCommand{key=EdgeResourceCacheKey[edgeDeviceId=4109,resourceId=16825], value=EdgeInputResource [edgeInputId=16825, currentBandwidth=0, maxBandwidth=1048576000], flags=null, putIfAbsent=false, lifespanMillis=-1, maxIdleTimeMillis=-1, successful=true, ignorePreviousValue=false}], lookedUpEntries={}, lockedKeys=null, backupKeyLocks=null, missingLookedUpEntries=false, isMarkedForRollback=false, tx=GlobalTransaction:<east-dht5-60816(CMC-Denver-CO)>:48002:remote}
> 2013-09-17 14:28:34,733 TRACE [org.infinispan.statetransfer.StateTransferLockImpl] (OOB-408,session-resource-cluster,west-dht4-48045(CH2-Chicago-IL)) Signalling transaction data received for topology 26