[JBoss JIRA] (ISPN-3457) Infinispan error running on IBM JDK
by Luis Montoya (JIRA)
[ https://issues.jboss.org/browse/ISPN-3457?page=com.atlassian.jira.plugin.... ]
Luis Montoya commented on ISPN-3457:
------------------------------------
The required information:
java version "1.6.0"
Java(TM) SE Runtime Environment (build pwa6460_26sr5fp2-20130423_01(SR5 FP2))
IBM J9 VM (build 2.6, JRE 1.6.0 Windows 7 amd64-64 Compressed References 20130419_145740 (JIT enabled, AOT enabled)
J9VM - R26_Java626_SR5_FP2_20130419_1420_B145740
JIT - r11.b03_20130131_32403ifx4
GC - R26_Java626_SR5_FP2_20130419_1420_B145740_CMPRSS
J9CL - 20130419_145740)
JCL - 20130419_01
Something else:
I have been writing a test case in the class org.infinispan.manager.CacheManagerComponentRegistryTest:
public void testCreateCacheManagerGlobalConfiguration() {
    cm = new DefaultCacheManager(GlobalConfigurationBuilder.defaultClusteredBuilder().build());
}
When I run these lines of code in the Eclipse IDE on the IBM JDK, they throw the exception mentioned above; however, when I run them with Maven (from the command line, using the same IBM JDK), no exception is thrown. That is why I have not committed the unit test to GitHub.
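For context, the IllegalArgumentException seen in this issue is the standard failure mode when a reflective call receives arguments of the wrong type. A minimal, hypothetical sketch (the inject method here is illustrative, not the actual Infinispan signature):

```java
import java.lang.reflect.Method;

public class ReflectionMismatchDemo {
    // Hypothetical injection target: expects a String first, then an Integer.
    public void inject(String name, Integer count) { }

    public static void main(String[] args) throws Exception {
        Method m = ReflectionMismatchDemo.class.getMethod("inject", String.class, Integer.class);
        try {
            // Arguments supplied in the wrong order: the JVM rejects the call
            // with IllegalArgumentException ("argument type mismatch").
            m.invoke(new ReflectionMismatchDemo(), 42, "transport");
        } catch (IllegalArgumentException e) {
            System.out.println("Rejected: " + e);
        }
    }
}
```

This mirrors the quoted stack trace, where the first parameter should be a Transport instance but an executor service is passed instead.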
> Infinispan error running on IBM JDK
> -----------------------------------
>
> Key: ISPN-3457
> URL: https://issues.jboss.org/browse/ISPN-3457
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache
> Affects Versions: 6.0.0.Alpha3
> Environment: WAS 8.0.0.6 JDK, Windows 7 Professional
> Reporter: Luis Montoya
> Assignee: Mircea Markus
> Fix For: 6.0.0.Alpha3
>
>
> I created a sample application using Infinispan on the standard JDK (Sun/Oracle). The app works fine with that JDK.
>
> I tried to run the app on the IBM JDK (the one required by WAS), but I get the error below:
>
> org.infinispan.commons.CacheException: Unable to construct a GlobalComponentRegistry!
> at org.infinispan.factories.GlobalComponentRegistry.<init>(GlobalComponentRegistry.java:129)
> at org.infinispan.manager.DefaultCacheManager.<init>(DefaultCacheManager.java:276)
> at org.infinispan.manager.DefaultCacheManager.<init>(DefaultCacheManager.java:246)
> at org.infinispan.quickstart.clusteredcache.replication.AbstractNode.createCacheManagerProgramatically(AbstractNode.java:41)
> at org.infinispan.quickstart.clusteredcache.replication.AbstractNode.<init>(AbstractNode.java:62)
> at org.infinispan.quickstart.clusteredcache.replication.Node0.main(Node0.java:32)
> Caused by: org.infinispan.commons.CacheException: Unable to invoke method public void org.infinispan.topology.LocalTopologyManagerImpl.inject(org.infinispan.remoting.transport.Transport,java.util.concurrent.ExecutorService,org.infinispan.factories.GlobalComponentRegistry,org.infinispan.util.TimeService) on object of type LocalTopologyManagerImpl with parameters [org.infinispan.executors.LazyInitializingExecutorService@96d7b55b, org.infinispan.executors.LazyInitializingExecutorService@96d7b55b, org.infinispan.factories.GlobalComponentRegistry@9fd5a559, org.infinispan.util.DefaultTimeService@725adace]
> at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:188)
> at org.infinispan.factories.AbstractComponentRegistry.invokeInjectionMethod(AbstractComponentRegistry.java:229)
> at org.infinispan.factories.AbstractComponentRegistry.access$000(AbstractComponentRegistry.java:65)
> at org.infinispan.factories.AbstractComponentRegistry$Component.injectDependencies(AbstractComponentRegistry.java:797)
> at org.infinispan.factories.AbstractComponentRegistry.registerComponentInternal(AbstractComponentRegistry.java:201)
> at org.infinispan.factories.AbstractComponentRegistry.registerComponent(AbstractComponentRegistry.java:156)
> at org.infinispan.factories.AbstractComponentRegistry.getOrCreateComponent(AbstractComponentRegistry.java:277)
> at org.infinispan.factories.AbstractComponentRegistry.getOrCreateComponent(AbstractComponentRegistry.java:253)
> at org.infinispan.factories.GlobalComponentRegistry.<init>(GlobalComponentRegistry.java:125)
> ... 5 more
> Caused by: java.lang.IllegalArgumentException: discrepancia en el tipo de argumento (Spanish: "argument type mismatch")
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:48)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:600)
> at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:183)
> ... 13 more
>
>
> It seems that a method being invoked through reflection is receiving an incorrect first parameter: it should receive an org.infinispan.remoting.transport.Transport instance, but it is instead receiving an org.infinispan.executors.LazyInitializingExecutorService@96d7b55b instance.
>
> The code that triggers the error is the following:
>
>
> new DefaultCacheManager(
>         GlobalConfigurationBuilder.defaultClusteredBuilder()
>                 .globalJmxStatistics().allowDuplicateDomains(true)
>                 .transport().addProperty("configurationFile", "jgroups.xml")
>                 .build(),
>         new ConfigurationBuilder()
>                 .clustering().cacheMode(CacheMode.REPL_SYNC)
>                 .build()
> );
> While reviewing and debugging the code, the following behavior was observed, and it produces the error:
> If a map contains something like {1=some.class.type} and you look up the key 0 (map.get(0)), it does not return null; instead it returns the value mapped to key 1, i.e. map.get(0) returns "some.class.type", as if map.get(1) had been called.
> Likewise, calling containsKey (map.containsKey(0)) returns true, which is incorrect because the map only contains the key 1.
> This behavior occurs in the following class and methods:
> class: org.infinispan.factories.components.ComponentMetadata$InjectMetadata
> methods: getParameterName, isParameterNameSet
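For contrast, the java.util.Map contract requires that a lookup with an absent key return null and that containsKey return false. A minimal sketch of the expected behavior on a conforming JDK:

```java
import java.util.HashMap;
import java.util.Map;

public class MapContractDemo {
    public static void main(String[] args) {
        Map<Integer, String> map = new HashMap<>();
        map.put(1, "some.class.type");

        // Per the java.util.Map contract, key 0 is absent, so:
        System.out.println(map.get(0));          // null
        System.out.println(map.containsKey(0));  // false
        System.out.println(map.get(1));          // some.class.type
    }
}
```

The behavior described in the report (map.get(0) returning the value mapped to key 1) would violate this contract.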
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-3455) Cache replication not warranted under load
by Lukasz Szelag (JIRA)
[ https://issues.jboss.org/browse/ISPN-3455?page=com.atlassian.jira.plugin.... ]
Lukasz Szelag commented on ISPN-3455:
-------------------------------------
Giovanni, you are right: it is not Infinispan but inconsistent hash codes in the enum constants that cause this behavior. I ended up creating a wrapper class that calculates the hash code from the underlying enum's class name and constant name. This issue affects all classes that inherit hashCode() from Object.
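The wrapper described above can be sketched as follows (the class and member names here are illustrative, not from the actual project). It derives hashCode/equals from the enum's class name and constant name, both of which are stable across JVMs, unlike the identity-based Object.hashCode() that enums inherit:

```java
import java.io.Serializable;

// Illustrative cache-key wrapper for enum constants.
public final class EnumKey implements Serializable {
    private final String className;
    private final String name;

    public EnumKey(Enum<?> value) {
        this.className = value.getDeclaringClass().getName();
        this.name = value.name();
    }

    @Override
    public int hashCode() {
        // Built only from stable strings, so identical on every cluster node.
        return 31 * className.hashCode() + name.hashCode();
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof EnumKey)) return false;
        EnumKey other = (EnumKey) o;
        return className.equals(other.className) && name.equals(other.name);
    }

    public static void main(String[] args) {
        EnumKey k1 = new EnumKey(java.util.concurrent.TimeUnit.SECONDS);
        EnumKey k2 = new EnumKey(java.util.concurrent.TimeUnit.SECONDS);
        System.out.println(k1.equals(k2) && k1.hashCode() == k2.hashCode());
    }
}
```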
Thanks a lot for your help in resolving this!
Lukasz
> Cache replication not warranted under load
> ------------------------------------------
>
> Key: ISPN-3455
> URL: https://issues.jboss.org/browse/ISPN-3455
> Project: Infinispan
> Issue Type: Feature Request
> Affects Versions: 5.3.0.Final, 6.0.0.Alpha3
> Environment: JSE 1.6.0_45; Windows 7
> Reporter: Lukasz Szelag
> Assignee: Mircea Markus
> Attachments: infinispan.zip
>
>
> Problem:
> When running a replicated cache and repeatedly calling a cacheable method (using the Spring cache abstraction), Infinispan enters an infinite replication loop. This can be confirmed by observing replication counts growing over time, even though there are no cache misses.
> Expected behavior:
> Caches shouldn't be replicated when there is a cache hit.
> Test case:
> - 3 cluster members; asynchronous replication with a replication queue
> - a cacheable method is executed repeatedly using 2 different keys
> Notes:
> - for some reason, this issue only occurs when using enum arguments for the cache key; I was not able to reproduce it with int or String types (see com.designamus.infinispan.Main.works())
> - the behavior is not deterministic (random), which points to a race condition
> - the problem does not seem to be related to Spring's default cache key generator; I was able to reproduce the same behavior with a custom, thread-safe cache key generator
> - the cacheable method is executed only twice (once both keys are stored in the cache); subsequent invocations retrieve stored values from the cache; this can be confirmed by inspecting the log file
> - the cache doesn't expire and entries are not evicted
> - memory usage grows over time, eventually causing an OOM on a heavily loaded system
> - since the issue is random in nature, it may take 3-4 attempts to reproduce; I was able to reproduce this behavior numerous times
> Steps to reproduce:
> 1. Build a test project (mvn clean compile)
> 2. Execute /run.sh (this will spawn 3 JVMs)
> 3. Start JConsole to monitor 3 cluster members (jconsole localhost:17001 localhost:17002 localhost:17003)
> 4. Monitor "replicationCount" attribute under RpcManager for cache "MyCache" for all JVMs (see /replication-counts.png)
> 5. Observe that replication counts grow over time
> 6. Observe that all caches are of size 2 and there are no cache misses (see /cache-statistics.png)
> If the issue cannot be reproduced (replication counts stay at the same level):
> 5. Terminate all 3 JVM processes (as a convenience you can execute /stop.sh)
> 6. Repeat steps 2 through 5 above
> When testing the above scenario in distributed mode, I observed some other anomalies (e.g. the cacheable method was executed multiple times, as if the value were not there). While this may be related, it deserves a separate JIRA.
[JBoss JIRA] (ISPN-3536) Exception when handling command SingleRpcCommand
by Mark De Leon (JIRA)
[ https://issues.jboss.org/browse/ISPN-3536?page=com.atlassian.jira.plugin.... ]
Mark De Leon updated ISPN-3536:
-------------------------------
Attachment: pearson-clustered-xsite-va.xml
> Exception when handling command SingleRpcCommand
> ------------------------------------------------
>
> Key: ISPN-3536
> URL: https://issues.jboss.org/browse/ISPN-3536
> Project: Infinispan
> Issue Type: Bug
> Components: Cross-Site Replication
> Affects Versions: 5.3.0.Final, 6.0.0.Alpha4
> Environment: Linux ip-10-252-170-214 3.4.43-43.43.amzn1.x86_64 #1 SMP Mon May 6 18:04:41 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
> java version "1.6.0_24"
> OpenJDK Runtime Environment (IcedTea6 1.11.11.90) (amazon-62.1.11.11.90.55.amzn1-x86_64)
> OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)
> Reporter: Mark De Leon
> Assignee: Mircea Markus
> Attachments: pearson-clustered-xsite-bos.xml, pearson-clustered-xsite-va.xml
>
>
> The following can be referenced from the forum post by Chris Riley about ISPN000071.
> We are trying to insert a cache value into a distributed cache that is replicated via Cross-Site Replication in JBoss 6.0.0.Alpha4. The cross-site replication has two nodes in the global cluster.
> When a value is put on the local site, we get the following error on the remote site. We just upgraded to 6.0.0.Alpha4 because of JIRA issue ISPN-3346, which we reproduced in 5.3.0.Final. I have attached our JBoss configuration file for review.
>
> 17:22:41,798 WARN [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (Incoming-2,shared=tcp) ISPN000071: Caught exception when handling command SingleRpcCommand{cacheName='importantCache', command=PutKeyValueCommand{key=1, value=[B@363f983f, flags=null, putIfAbsent=false, metadata=MimeMetadata(contentType=text/plain), successful=true, ignorePreviousValue=false}}: java.lang.NullPointerException
> at org.infinispan.xsite.BackupReceiverRepositoryImpl.isBackupForRemoteCache(BackupReceiverRepositoryImpl.java:112) [infinispan-core-6.0.0.Alpha4.jar:6.0.0.Alpha4]
> at org.infinispan.xsite.BackupReceiverRepositoryImpl.getBackupCacheManager(BackupReceiverRepositoryImpl.java:95) [infinispan-core-6.0.0.Alpha4.jar:6.0.0.Alpha4]
> at org.infinispan.xsite.BackupReceiverRepositoryImpl.handleRemoteCommand(BackupReceiverRepositoryImpl.java:67) [infinispan-core-6.0.0.Alpha4.jar:6.0.0.Alpha4]
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromRemoteSite(CommandAwareRpcDispatcher.java:234) [infinispan-core-6.0.0.Alpha4.jar:6.0.0.Alpha4]
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:209) [infinispan-core-6.0.0.Alpha4.jar:6.0.0.Alpha4]
> at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:460) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:377) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:247) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:665) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.blocks.mux.MuxUpHandler.up(MuxUpHandler.java:130) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.JChannel.up(JChannel.java:719) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1002) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.relay.RELAY2.deliver(RELAY2.java:612) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.relay.RELAY2.route(RELAY2.java:508) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.relay.RELAY2.handleMessage(RELAY2.java:483) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.relay.RELAY2.handleRelayMessage(RELAY2.java:461) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.relay.Relayer$Bridge.receive(Relayer.java:263) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.JChannel.up(JChannel.java:749) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1006) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.Protocol.up(Protocol.java:406) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.FRAG2.up(FRAG2.java:195) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:439) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:439) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.Protocol.up(Protocol.java:406) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:304) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.UNICAST.removeAndDeliver(UNICAST.java:748) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.UNICAST.handleBatchReceived(UNICAST.java:704) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.UNICAST.up(UNICAST.java:454) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.Protocol.up(Protocol.java:406) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.Protocol.up(Protocol.java:406) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.FD.up(FD.java:274) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.Protocol.up(Protocol.java:406) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.MERGE2.up(MERGE2.java:223) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.Protocol.up(Protocol.java:406) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.stack.Protocol.up(Protocol.java:406) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.TP.passBatchUp(TP.java:1409) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at org.jgroups.protocols.TP$BatchHandler.run(TP.java:1564) [jgroups-3.4.0.Alpha1.jar:3.4.0.Alpha1]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) [rt.jar:1.6.0_24]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) [rt.jar:1.6.0_24]
> at java.lang.Thread.run(Thread.java:679) [rt.jar:1.6.0_24]
[JBoss JIRA] (ISPN-3536) Exception when handling command SingleRpcCommand
by Mark De Leon (JIRA)
[ https://issues.jboss.org/browse/ISPN-3536?page=com.atlassian.jira.plugin.... ]
Mark De Leon updated ISPN-3536:
-------------------------------
Attachment: pearson-clustered-xsite-bos.xml
[JBoss JIRA] (ISPN-3536) Exception when handling command SingleRpcCommand
by Mark De Leon (JIRA)
Mark De Leon created ISPN-3536:
----------------------------------
Summary: Exception when handling command SingleRpcCommand
Key: ISPN-3536
URL: https://issues.jboss.org/browse/ISPN-3536
Project: Infinispan
Issue Type: Bug
Components: Cross-Site Replication
Affects Versions: 6.0.0.Alpha4, 5.3.0.Final
Environment: Linux ip-10-252-170-214 3.4.43-43.43.amzn1.x86_64 #1 SMP Mon May 6 18:04:41 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
java version "1.6.0_24"
OpenJDK Runtime Environment (IcedTea6 1.11.11.90) (amazon-62.1.11.11.90.55.amzn1-x86_64)
OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)
Reporter: Mark De Leon
Assignee: Mircea Markus
The following can be referenced from the forum post by Chris Riley about ISPN000071.
We are trying to insert a cache value into a distributed cache that is replicated via Cross-Site Replication in JBoss 6.0.0.Alpha4. The cross-site replication has two nodes in the global cluster.
When a value is put on the local site, we get the following error on the remote site. We just upgraded to 6.0.0.Alpha4 because of JIRA issue ISPN-3346, which we reproduced in 5.3.0.Final. I have attached our JBoss configuration file for review.
(stack trace identical to the one quoted earlier in this thread)
[JBoss JIRA] (ISPN-3431) Lost version info during state transfer causes overwrite of newer data that joining node has read from store
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-3431?page=com.atlassian.jira.plugin.... ]
Mircea Markus closed ISPN-3431.
-------------------------------
Resolution: Rejected
This is not a bug; we don't support preloading from local cache stores at the moment.
The feature you're after is described here: https://community.jboss.org/wiki/ControlledClusterShutdownWithDataRestore...
And the corresponding JIRA is ISPN-3351.
> Lost version info during state transfer causes overwrite of newer data that joining node has read from store
> ------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-3431
> URL: https://issues.jboss.org/browse/ISPN-3431
> Project: Infinispan
> Issue Type: Bug
> Components: State transfer
> Affects Versions: 6.0.0.Alpha3
> Environment: uname -s ; cat /etc/redhat-release ; java -version
> Linux
> Red Hat Enterprise Linux Server release 5.9 (Tikanga)
> java version "1.6.0_30"
> Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)
> Reporter: Nikolay Martynov
> Assignee: Pedro Ruivo
> Priority: Critical
> Labels: jdg62blocker
> Fix For: 6.0.0.CR1
>
>
> When state transfer sends data to a newly joining node, no version information is provided, even though this information is available to the cluster (for example, loaded from a store). When the joining node has loaded a newer version of the data from its store than the cluster has, that newer version is overwritten, because no version information is provided during state transfer.
> Scenario:
> {noformat}
> 1. Start node1, node2, node3
> 2. Put {A=>A1} from node1
> 3. Put {B=>B1} from node2
> 4. Put {C=>C1} from node3
> 5. Gracefully shutdown node1 saving the data
> 6. Gracefully shutdown node2 saving the data
> 7. Put {C=>C2} from node3
> 8. Gracefully shutdown node3 saving the data
> 9. Start node1 loading the data
> 10. Start node2 loading the data
> 11. Start node3 loading the data
> {noformat}
> Loaded from store on node3 - the version information shows it is newer than what nodes 1 and 2 have:
> {noformat}
> org.infinispan.interceptors.CallInterceptor
> Executing command: PutKeyValueCommand{key=C, value=C2, flags=[CACHE_MODE_LOCAL, SKIP_LOCKING, SKIP_CACHE_STORE, SKIP_INDEXING, SKIP_OWNERSHIP_CHECK, IGNORE_RETURN_VALUES], putIfAbsent=false, metadata=EmbeddedMetadata{version=SimpleClusteredVersion{topologyId=8, version=2}}, successful=true, ignorePreviousValue=false}.
> {noformat}
> During state transfer older value C1 is received with version information being null:
> {noformat}
> org.infinispan.statetransfer.StateTransferInterceptor
> handleNonTxWriteCommand for command PutKeyValueCommand{key=C, value=C1, flags=[CACHE_MODE_LOCAL, SKIP_REMOTE_LOOKUP, PUT_FOR_STATE_TRANSFER, SKIP_SHARED_CACHE_STORE, SKIP_OWNERSHIP_CHECK, IGNORE_RETURN_VALUES, SKIP_XSITE_BACKUP], putIfAbsent=false, metadata=EmbeddedMetadata{version=null}, successful=true, ignorePreviousValue=false}
> {noformat}
> As a result newer value is replaced with older:
> {noformat}
> org.infinispan.interceptors.EntryWrappingInterceptor
> About to commit entry ClusteredRepeatableReadEntry(323f265b){key=C, value=C1, oldValue=C2, isCreated=false, isChanged=true, isRemoved=false, isValid=true, skipRemoteGet=true, metadata=EmbeddedMetadata{version=SimpleClusteredVersion{topologyId=8, version=2}}}
> {noformat}
> As far as I can see, any key that a node did not write itself but received during resync has version=null after loading from the store.
> Config:
> {code:xml}
> <?xml version="1.0" encoding="UTF-8"?>
> <!--
> This is just a very simplistic example configuration file. For more information, please see
> http://docs.jboss.org/infinispan/5.3/configdocs/infinispan-config-5.3.html
> -->
> <infinispan
> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
> xsi:schemaLocation="urn:infinispan:config:6.0 http://docs.jboss.org/infinispan/schemas/infinispan-config-6.0.xsd"
> xmlns="urn:infinispan:config:6.0">
> <global>
> <globalJmxStatistics enabled="true" jmxDomain="Infinispan" />
> <transport>
> <properties>
> <property name="configurationFile" value="jgroups.xml" />
> </properties>
> </transport>
> </global>
> <namedCache name="routing_table">
> <clustering mode="REPL">
> <stateTransfer awaitInitialTransfer="true" fetchInMemoryState="true"/>
> <sync replTimeout="1000"/>
> <!--async useReplQueue="true" replQueueMaxElements="100" /-->
> </clustering>
> <loaders passivation="false" shared="false" preload="true">
> <singleFileStore
> location="target/routing_table${infinispan.store_name:not_specified}/">
> </singleFileStore>
> </loaders>
> <versioning enabled="true" versioningScheme="SIMPLE"/>
> <transaction
> transactionMode="TRANSACTIONAL"
> autoCommit="true"
> transactionManagerLookupClass="org.infinispan.transaction.lookup.GenericTransactionManagerLookup"
> lockingMode="OPTIMISTIC"
> />
> <locking
> writeSkewCheck="true"
> isolationLevel="REPEATABLE_READ"/>
> </namedCache>
> </infinispan>
> {code}
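The overwrite described above comes down to a missing version-comparison rule on the receiving side. Below is a minimal, self-contained sketch of such a guard; `EntryVersion` and `shouldOverwrite` are hypothetical stand-ins modeled on SimpleClusteredVersion's topologyId/version pair, not actual Infinispan classes:

```java
// Illustrative sketch only: a receiver-side guard that refuses to let an
// unversioned (or older) state-transfer entry replace a newer local entry.
// EntryVersion mimics SimpleClusteredVersion's {topologyId, version} pair;
// names and types are hypothetical, not Infinispan API.
final class EntryVersion {
    final int topologyId;
    final long version;

    EntryVersion(int topologyId, long version) {
        this.topologyId = topologyId;
        this.version = version;
    }
}

final class StateTransferMerge {
    // Decide whether an incoming state-transfer entry may replace the local one.
    // A null incoming version (the bug reported above) must never clobber a
    // non-null local version.
    static boolean shouldOverwrite(EntryVersion local, EntryVersion incoming) {
        if (incoming == null) return local == null; // no version info: only fill empty slots
        if (local == null) return true;
        if (incoming.topologyId != local.topologyId)
            return incoming.topologyId > local.topologyId;
        return incoming.version > local.version;
    }
}
```

With such a guard, the C=C1 entry received during resync with version=null would be rejected in favor of the locally loaded C=C2 (topologyId=8, version=2).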
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
12 years, 6 months
[JBoss JIRA] (ISPN-604) Re-design CacheStore transactions
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-604?page=com.atlassian.jira.plugin.s... ]
Mircea Markus resolved ISPN-604.
--------------------------------
Resolution: Rejected
Making the store participate in the XA transaction only works if the TM and storage are collocated, but not in the general case. E.g.:
- a tx running on node A may, as part of that tx, update the local storage (local cache store) on node B for a distributed cache
- XA is built around the idea that B's resource manager is collocated (same process) with the TransactionManager instance running on A, which is not the case here
> Re-design CacheStore transactions
> ----------------------------------
>
> Key: ISPN-604
> URL: https://issues.jboss.org/browse/ISPN-604
> Project: Infinispan
> Issue Type: Sub-task
> Components: Loaders and Stores, Transactions
> Affects Versions: 4.0.0.Final, 4.1.0.CR2
> Reporter: Mircea Markus
> Assignee: Mircea Markus
> Labels: as7-ignored, modshape
> Fix For: 6.0.0.Beta2, 6.0.0.CR1
>
>
> The current (4.1.x) transaction implementation in CacheStores is broken in several ways:
> 1st problem.
> {code}AbstractCacheStore.prepare:
> public void prepare(List<? extends Modification> mods, GlobalTransaction tx, boolean isOnePhase) throws CacheLoaderException {
> if (isOnePhase) {
> applyModifications(mods);
> } else {
> transactions.put(tx, mods);
> }
> }
> {code}
> If this is 1PC, we apply the modifications in the prepare phase - we should do it in the commit phase (as JTA does).
> 2nd problem.
> This currently manifests during commit/rollback with JdbcXyzCacheStore, but it is a more general cache store issue.
> When using a TransactionManager, AbstractCacheStore.commit is called internally during TM.commit and tries to apply all the modifications that happened during that transaction.
> Within the scope of AbstractCacheStore.commit, JdbcStore obtains a connection from a DataSource and tries to write the modifications on that connection.
> Now, if the DataSource is managed (e.g. by an application server), then on the DS.getConnection call the application server tries to enlist the connection with the ongoing transaction by calling Transaction.enlistResource(XAResource xaRes) [1].
> This method fails with an IllegalStateException, because the transaction's status is preparing (see javax.transaction.Transaction.enlistResource).
> Suggested fix:
> - the modifications should be registered with the transaction as they happen (vs. during prepare/commit, as happens now)
> - this requires API changes in CacheStore, e.g.
> void store(InternalCacheEntry entry)
> should become
> void store(InternalCacheEntry entry, GlobalTransaction gtx)
> (gtx would be null if this is not a transactional call).
> [1] This behavior is specified by the JDBC 2.0 Standard Extension API, chapter 7 - distributed transaction
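The suggested fix for the 1PC problem above can be sketched as follows: prepare only records the modifications, and both 1PC and 2PC apply them at commit time, mirroring JTA semantics. The types below are simplified, hypothetical stand-ins for the CacheStore SPI, not the real Infinispan classes:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a transaction-aware store where prepare never applies anything.
// GlobalTransaction and Modification are minimal stand-ins for the real SPI.
final class TxStoreSketch {
    static final class GlobalTransaction { }

    interface Modification { void apply(); }

    private final Map<GlobalTransaction, List<Modification>> transactions =
            new ConcurrentHashMap<>();

    // Prepare only registers the work; 1PC collapses into an immediate commit,
    // so modifications are still applied in the commit phase, as JTA does.
    void prepare(List<Modification> mods, GlobalTransaction tx, boolean onePhase) {
        transactions.put(tx, mods);
        if (onePhase) {
            commit(tx);
        }
    }

    void commit(GlobalTransaction tx) {
        List<Modification> mods = transactions.remove(tx);
        if (mods != null) mods.forEach(Modification::apply);
    }

    void rollback(GlobalTransaction tx) {
        transactions.remove(tx); // discard without applying
    }
}
```

The key property is that nothing is written in prepare, so a rollback after prepare leaves the store untouched even for 1PC callers.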
[JBoss JIRA] (ISPN-604) Re-design CacheStore transactions
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-604?page=com.atlassian.jira.plugin.s... ]
Mircea Markus reopened ISPN-604:
--------------------------------
> Re-design CacheStore transactions
> ----------------------------------
>
> Key: ISPN-604
> URL: https://issues.jboss.org/browse/ISPN-604
> Project: Infinispan
> Issue Type: Sub-task
> Components: Loaders and Stores, Transactions
> Affects Versions: 4.0.0.Final, 4.1.0.CR2
> Reporter: Mircea Markus
> Assignee: Mircea Markus
> Labels: as7-ignored, modshape
> Fix For: 6.0.0.Beta2, 6.0.0.CR1
>
>
[JBoss JIRA] (ISPN-3535) ConfigurationBuilder.withProperties adds empty address when SERVER_LIST not defined
by Radim Vansa (JIRA)
Radim Vansa created ISPN-3535:
---------------------------------
Summary: ConfigurationBuilder.withProperties adds empty address when SERVER_LIST not defined
Key: ISPN-3535
URL: https://issues.jboss.org/browse/ISPN-3535
Project: Infinispan
Issue Type: Bug
Components: Remote protocols
Affects Versions: 6.0.0.Beta1
Reporter: Radim Vansa
Assignee: Galder Zamarreño
Priority: Minor
ConfigurationBuilder.withProperties uses getProperty(SERVER_LIST, "") with an empty-string default and passes the result to addServers. That call then adds one server with an empty address (= localhost) and the default port.
Because of that, it is not possible to use withProperties and still set the servers properly. Additionally, addServers cannot parse IPv6 addresses.
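A minimal sketch of a fix: only parse and add servers when the property is actually set, so no empty (= localhost) address is silently added. `parseServers` below is a hypothetical stand-in for the client's server-list parsing, not the real implementation; it also shows one way to tolerate bracketed IPv6 literals such as `[::1]:11222`:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

// Illustrative sketch, not the actual Hot Rod client code.
final class ServerListSketch {
    record Server(String host, int port) { }

    static List<Server> parseServers(Properties props, int defaultPort) {
        List<Server> servers = new ArrayList<>();
        String list = props.getProperty("infinispan.client.hotrod.server_list");
        if (list == null || list.trim().isEmpty()) {
            return servers; // property absent: add nothing, not an empty address
        }
        for (String item : list.split(";")) {
            item = item.trim();
            String host;
            int port = defaultPort;
            if (item.startsWith("[")) {
                // bracketed IPv6 literal, e.g. [::1]:11222
                int close = item.indexOf(']');
                host = item.substring(1, close);
                if (close + 1 < item.length() && item.charAt(close + 1) == ':')
                    port = Integer.parseInt(item.substring(close + 2));
            } else {
                int colon = item.lastIndexOf(':');
                if (colon >= 0) {
                    host = item.substring(0, colon);
                    port = Integer.parseInt(item.substring(colon + 1));
                } else {
                    host = item;
                }
            }
            servers.add(new Server(host, port));
        }
        return servers;
    }
}
```

An absent property yields an empty server list (so the builder's defaults can apply), while a set property yields exactly the configured servers.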