[JBoss JIRA] (ISPN-2553) JBossMarshaller can be used before properly initialized
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2553?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-2553:
-----------------------------------
Sprint: Beta6
> JBossMarshaller can be used before properly initialized
> -------------------------------------------------------
>
> Key: ISPN-2553
> URL: https://issues.jboss.org/browse/ISPN-2553
> Project: Infinispan
> Issue Type: Bug
> Components: Marshalling
> Affects Versions: 5.2.0.Beta4
> Reporter: Radim Vansa
> Assignee: Galder Zamarreño
> Fix For: 5.2.0.Beta6
>
>
> The {{JBossMarshaller}} can be used before its {{start()}} method is called. I've noticed that with a replicated cache without transactions, an OOB thread can start demarshalling a {{SingleRpcCommand}} in {{CacheRpcCommandExternalizer}}, but when it tries to create a new unmarshaller (through {{AbstractJBossMarshaller.startObjectInput(...)}} and {{marshallerTL.initialValue()}}), the {{baseCfg}} configuration is not fully initialized yet. This results in marshallers being created in {{PerThreadInstanceHolder}} with {{objectTable == null}}, and objects are then deserialized to {{null}}.
> I have verified this by inserting log messages into the constructors and the {{start()}} method:
> {code}
> 19:49:02,404 INFO [org.infinispan.marshall.jboss.AbstractJBossMarshaller] (pool-1-thread-1) Creating AbstractJBossMarshaller with org.jboss.marshalling.MarshallingConfiguration@1d296aa3: classExternalizerFactory=<org.infinispan.marshall.jboss.SerializeWithExtFactory@a18024a> exceptionListener=<null> instanceCount=16 classCount=8 bufferSize=512 version=3
> 19:49:02,409 INFO [org.infinispan.marshall.jboss.AbstractJBossMarshaller] (pool-1-thread-1) Creating JBossMarshaller org.infinispan.marshall.jboss.JBossMarshaller@2e3e4d73
> 19:49:02,410 INFO [org.infinispan.marshall.jboss.AbstractJBossMarshaller] (pool-1-thread-1) Starting JBossMarshaller
> {code}
> and into the thread-local initialization and into {{getUnmarshaller()}}, just before {{factory.createUnmarshaller}}:
> {code}
> 19:49:02,410 ERROR [org.infinispan.marshall.jboss.AbstractJBossMarshaller] (OOB-49,rvansa-22965) No object table in org.jboss.marshalling.MarshallingConfiguration@7c4ed0bc: classExternalizerFactory=<org.infinispan.marshall.jboss.SerializeWithExtFactory@a18024a> exceptionListener=<null> instanceCount=16 classCount=8 bufferSize=512 version=3, base is org.jboss.marshalling.MarshallingConfiguration@1d296aa3: classExternalizerFactory=<org.infinispan.marshall.jboss.SerializeWithExtFactory@a18024a> exceptionListener=<null> instanceCount=16 classCount=8 bufferSize=512 version=3
> 19:49:02,453 ERROR [org.infinispan.marshall.jboss.AbstractJBossMarshaller] (OOB-49,rvansa-22965) Unmarshaller with cfg org.jboss.marshalling.MarshallingConfiguration@7c4ed0bc: classExternalizerFactory=<org.infinispan.marshall.jboss.SerializeWithExtFactory@a18024a> exceptionListener=<null> instanceCount=16 classCount=8 bufferSize=512 version=3
> {code}
> Note that the timestamps for {{start()}} and the thread-local initialization are the same, and the base configuration ({{baseCfg}}) does not have {{objectTable}} initialized.
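> A minimal sketch of this race, using hypothetical names rather than the actual Infinispan classes: a per-thread snapshot of a shared configuration, taken before {{start()}} runs, caches the missing object table for the lifetime of the thread.
> {code}
> import java.util.concurrent.CountDownLatch;
>
> public class MarshallerInitRace {
>
>    static class Config {
>       volatile Object objectTable;             // populated by start()
>       Config snapshot() {
>          Config c = new Config();
>          c.objectTable = this.objectTable;     // may still be null!
>          return c;
>       }
>    }
>
>    static final Config baseCfg = new Config();
>    // Analogous to marshallerTL: snapshots baseCfg on first use per thread.
>    static final ThreadLocal<Config> perThreadCfg =
>          ThreadLocal.withInitial(baseCfg::snapshot);
>
>    static void start() { baseCfg.objectTable = new Object(); }
>
>    public static void main(String[] args) throws InterruptedException {
>       CountDownLatch started = new CountDownLatch(1);
>       Thread oob = new Thread(() -> {
>          Config cfg = perThreadCfg.get();      // snapshot taken pre-start()
>          try { started.await(); } catch (InterruptedException ignored) {}
>          // Prints 'objectTable = null': the stale snapshot never sees
>          // the object table that start() installed later.
>          System.out.println("objectTable = " + cfg.objectTable);
>       });
>       oob.start();                             // the "OOB" thread races ahead
>       Thread.sleep(100);                       // let it win the race
>       start();                                 // too late for 'oob'
>       started.countDown();
>       oob.join();
>    }
> }
> {code}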
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2552) Support concurrent updates for non-transactional caches
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2552?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-2552:
-----------------------------------
Sprint: Beta6
> Support concurrent updates for non-transactional caches
> --------------------------------------------------------
>
> Key: ISPN-2552
> URL: https://issues.jboss.org/browse/ISPN-2552
> Project: Infinispan
> Issue Type: Feature Request
> Affects Versions: 5.1.0.FINAL
> Reporter: Mircea Markus
> Assignee: Mircea Markus
> Priority: Blocker
> Fix For: 5.2.0.Beta6, 5.2.0.Final
>
>
> For non-transactional caches, when a key is updated, a lock is acquired locally and on all the owning nodes as well. This is very inefficient for concurrent updates, as it is highly deadlock-prone.
> The following locking approach should solve this problem at the cost of an additional RPC:
> - 'k' is written on node A, owners(k)={B,C}
> - A forwards the given command to B
> - B acquires a lock on 'k', then forwards the command to the remaining owner: C
> - C applies the change and returns to B (no lock acquisition is needed)
> - B applies the result as well, releases the lock and returns the result of the operation to A.
> Note that even though this introduces an additional RPC (the forwarding), it behaves very well in conjunction with consistent-hash-aware Hot Rod clients, which connect directly to the lock owner.
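> A minimal sketch of the proposed flow, with hypothetical helper interfaces rather than the actual Infinispan APIs:
> {code}
> import java.util.List;
> import java.util.concurrent.locks.ReentrantLock;
>
> interface Rpc {
>    // Synchronous RPCs; the signatures are illustrative only.
>    Object forwardToPrimary(String node, String key, Object value);
>    void applyOnBackups(List<String> backups, String key, Object value);
> }
>
> class PrimaryForwardingCache {
>    private final String self;
>    private final Rpc rpc;
>    private final ReentrantLock lock = new ReentrantLock(); // per-key in practice
>
>    PrimaryForwardingCache(String self, Rpc rpc) {
>       this.self = self;
>       this.rpc = rpc;
>    }
>
>    // owners(k) = {B, C}: the first owner, B, is the one that locks.
>    Object put(List<String> owners, String key, Object value) {
>       String primary = owners.get(0);
>       if (!self.equals(primary)) {
>          // Originator A: forward the command to the primary owner B.
>          return rpc.forwardToPrimary(primary, key, value);
>       }
>       lock.lock();                              // only B acquires the lock
>       try {
>          // B pushes the change to the remaining owners (C), which apply
>          // it without any lock acquisition, then applies it locally.
>          rpc.applyOnBackups(owners.subList(1, owners.size()), key, value);
>          return applyLocally(key, value);
>       } finally {
>          lock.unlock();                         // released before replying to A
>       }
>    }
>
>    private Object applyLocally(String key, Object value) { return value; }
> }
> {code}
> Because only the primary owner ever locks 'k', concurrent writers serialize on that single node instead of deadlocking across several.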
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2483) State transfer issue with the transactions for which the originator has crashed
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2483?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-2483:
-----------------------------------
Sprint: Beta6
> State transfer issue with the transactions for which the originator has crashed
> -------------------------------------------------------------------------------
>
> Key: ISPN-2483
> URL: https://issues.jboss.org/browse/ISPN-2483
> Project: Infinispan
> Issue Type: Bug
> Components: State transfer, Transactions
> Affects Versions: 5.1.8.Final, 5.2.0.Beta3
> Reporter: Mircea Markus
> Assignee: Adrian Nistor
> Priority: Blocker
> Fix For: 5.2.0.Beta6, 5.2.0.Final
>
>
> State transfer migrates and prepares the transactions for which the originator has left. On the receiving node, this results in the transaction being prepared and acquiring backup locks that are never released (short of manual intervention).
> This should behave as follows:
> - if recovery is not enabled, the state producer should not send such transactions but drop them
> - if recovery is enabled, these transactions should be sent across; they shouldn't be prepared or acquire backup locks, but should be placed in the recovery cache (see {{RecoveryManagerImpl.inDoubtTransactions}} and the sketch below)
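> A sketch of the producer-side filtering this implies (hypothetical names, not the actual state-transfer code):
> {code}
> import java.util.Collection;
> import java.util.Set;
>
> class TxStateProducer {
>    interface Tx { String originator(); }
>
>    private final Set<String> currentMembers;
>    private final boolean recoveryEnabled;
>
>    TxStateProducer(Set<String> currentMembers, boolean recoveryEnabled) {
>       this.currentMembers = currentMembers;
>       this.recoveryEnabled = recoveryEnabled;
>    }
>
>    void transfer(Collection<Tx> pendingTxs) {
>       for (Tx tx : pendingTxs) {
>          if (currentMembers.contains(tx.originator())) {
>             sendForPrepare(tx);     // normal path: originator still alive
>          } else if (recoveryEnabled) {
>             sendAsInDoubt(tx);      // receiver files it in the recovery
>                                     // cache; no prepare, no backup locks
>          }
>          // else: originator gone and no recovery -> drop the transaction
>       }
>    }
>
>    private void sendForPrepare(Tx tx) { /* ship and prepare on receiver */ }
>    private void sendAsInDoubt(Tx tx) { /* ship to the recovery cache only */ }
> }
> {code}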
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2362) Remove-inconsistency during NBST
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2362?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-2362:
-----------------------------------
Sprint: Beta6
> Remove-inconsistency during NBST
> --------------------------------
>
> Key: ISPN-2362
> URL: https://issues.jboss.org/browse/ISPN-2362
> Project: Infinispan
> Issue Type: Bug
> Components: State transfer
> Affects Versions: 5.2.0.Alpha3
> Reporter: Mircea Markus
> Assignee: Adrian Nistor
> Priority: Blocker
> Fix For: 5.2.0.Beta6
>
>
> The NBST functionality leaves room for inconsistencies during removals:
> 1. the joiner first requests transaction data
> 2. after all transaction data is integrated (i.e. the tx is prepared on the state-receiver node), it requests the rest of the data (the data container)
> 3. in order not to override the more recent data from transactions (transactions at step 1 might commit during step 2), the insertion at step 2 happens with {{putIfAbsent}}. Whilst this prevents overriding newer values (written by transactions), it doesn't guard against the situation in which a tx from step 1 removed data, so it is possible for deleted data to resurrect.
> A solution to this inconsistency issue is to use tombstones for the duration of the state transfer.
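> A sketch of the resurrection, and of how a tombstone would block it (hypothetical marker, not the actual NBST code):
> {code}
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.ConcurrentMap;
>
> class TombstoneSketch {
>    static final Object TOMBSTONE = new Object();
>    static final ConcurrentMap<String, Object> container = new ConcurrentHashMap<>();
>
>    public static void main(String[] args) {
>       // Step 1: a migrated tx commits a removal on the joiner.
>       container.put("k", "v-old");
>       container.remove("k");                  // plain removal...
>
>       // Step 2: the data-container chunk from the old owner still has k.
>       container.putIfAbsent("k", "v-old");    // ...so the old value resurrects
>       System.out.println(container.get("k")); // prints v-old
>
>       // With a tombstone kept for the duration of the state transfer,
>       // the removal leaves a marker and putIfAbsent becomes a no-op:
>       container.put("k2", "v-old");
>       container.put("k2", TOMBSTONE);         // remove -> tombstone
>       container.putIfAbsent("k2", "v-old");   // key present -> ignored
>       System.out.println(container.get("k2") == TOMBSTONE); // true
>    }
> }
> {code}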
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2612) Problem broadcasting CH_UPDATE command
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-2612?page=com.atlassian.jira.plugin.... ]
Dan Berindei reassigned ISPN-2612:
----------------------------------
Assignee: Dan Berindei (was: Mircea Markus)
> Problem broadcasting CH_UPDATE command
> --------------------------------------
>
> Key: ISPN-2612
> URL: https://issues.jboss.org/browse/ISPN-2612
> Project: Infinispan
> Issue Type: Bug
> Components: RPC
> Affects Versions: 5.2.0.Beta5
> Reporter: Michal Linhard
> Assignee: Dan Berindei
> Attachments: test.zip
>
>
> Infinispan 5.2.0.Beta5
> JGroups 3.2.4.Final
> Steps to reproduce (I'm using two virtual interfaces, test1 and test2):
> 1. Start org.jboss.qa.jdg.Test with -Djgroups.udp.bind_addr=test1 -Djava.net.preferIPv4Stack=true
> 2. wait 10 sec
> 3. Start org.jboss.qa.jdg.Test with -Djgroups.udp.bind_addr=test2 -Djava.net.preferIPv4Stack=true
> After 5 seconds, this timeout exception appears:
> {code}
> 19:42:14,146 WARN [org.infinispan.topology.CacheTopologyControlCommand] (OOB-2,mlinhard-work-37329) ISPN000071: Caught exception when handling command CacheTopologyControlCommand{cache=___defaultcache, type=REBALANCE_CONFIRM, sender=mlinhard-work-47337, joinInfo=null, topologyId=1, currentCH=null, pendingCH=null, throwable=null, viewId=1}
> java.util.concurrent.ExecutionException: org.infinispan.CacheException: org.jgroups.TimeoutException: TimeoutException
> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
> at java.util.concurrent.FutureTask.get(FutureTask.java:91)
> at org.infinispan.topology.ClusterTopologyManagerImpl.executeOnClusterSync(ClusterTopologyManagerImpl.java:563)
> at org.infinispan.topology.ClusterTopologyManagerImpl.broadcastConsistentHashUpdate(ClusterTopologyManagerImpl.java:349)
> at org.infinispan.topology.ClusterTopologyManagerImpl.handleRebalanceCompleted(ClusterTopologyManagerImpl.java:213)
> at org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:160)
> at org.infinispan.topology.CacheTopologyControlCommand.perform(CacheTopologyControlCommand.java:137)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:252)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:219)
> at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:483)
> at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:390)
> at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:248)
> at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:598)
> at org.jgroups.JChannel.up(JChannel.java:703)
> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1020)
> at org.jgroups.protocols.RSVP.up(RSVP.java:172)
> at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:400)
> at org.jgroups.protocols.pbcast.GMS.up(GMS.java:896)
> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244)
> at org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:736)
> at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:414)
> at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:606)
> at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:143)
> at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:187)
> at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)
> at org.jgroups.protocols.MERGE2.up(MERGE2.java:205)
> at org.jgroups.protocols.Discovery.up(Discovery.java:359)
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1287)
> at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1850)
> at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1823)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: org.infinispan.CacheException: org.jgroups.TimeoutException: TimeoutException
> at org.infinispan.util.Util.rewrapAsCacheException(Util.java:532)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommands(CommandAwareRpcDispatcher.java:152)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:518)
> at org.infinispan.topology.ClusterTopologyManagerImpl$2.call(ClusterTopologyManagerImpl.java:545)
> at org.infinispan.topology.ClusterTopologyManagerImpl$2.call(ClusterTopologyManagerImpl.java:542)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> ... 3 more
> Caused by: org.jgroups.TimeoutException: TimeoutException
> at org.jgroups.util.Promise._getResultWithTimeout(Promise.java:145)
> at org.jgroups.util.Promise.getResultWithTimeout(Promise.java:40)
> at org.jgroups.util.AckCollector.waitForAllAcks(AckCollector.java:93)
> at org.jgroups.protocols.RSVP$Entry.block(RSVP.java:287)
> at org.jgroups.protocols.RSVP.down(RSVP.java:118)
> at org.jgroups.stack.ProtocolStack.down(ProtocolStack.java:1025)
> at org.jgroups.JChannel.down(JChannel.java:718)
> at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.down(MessageDispatcher.java:616)
> at org.jgroups.blocks.RequestCorrelator.sendRequest(RequestCorrelator.java:173)
> at org.jgroups.blocks.GroupRequest.sendRequest(GroupRequest.java:360)
> at org.jgroups.blocks.GroupRequest.sendRequest(GroupRequest.java:103)
> at org.jgroups.blocks.Request.execute(Request.java:83)
> at org.jgroups.blocks.MessageDispatcher.cast(MessageDispatcher.java:335)
> at org.jgroups.blocks.MessageDispatcher.castMessage(MessageDispatcher.java:249)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processCalls(CommandAwareRpcDispatcher.java:330)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommands(CommandAwareRpcDispatcher.java:145)
> ... 8 more
> {code}
> Analysis:
> These are the messages sent after view change:
> {code}
> test1 test2
> <--- JOIN ----
> ---- REBALANCE_START --->
> <--- StateRequestCommand ----
> ---- StateResponseCommand --->
> <--- REBALANCE_CONFIRM ----
> ---- CH_UPDATE --->
> {code}
> The last CH_UPDATE message is broadcast; test2 processes it successfully, but test1 stays in a waiting state because, for some reason, it also awaits a response from itself: the local variable {{entry}} in the method {{RSVP.down}}
> (https://github.com/belaban/JGroups/blob/master/src/org/jgroups/protocols/...)
> contained the local address.
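> Conceptually, the suspected behaviour amounts to the following sketch (hypothetical code, not the actual JGroups implementation):
> {code}
> import java.util.HashSet;
> import java.util.List;
> import java.util.Set;
>
> class RsvpSelfAckSketch {
>    // The RSVP entry collects acks from every member it was seeded with.
>    static Set<String> pendingAcks(List<String> viewMembers, String self) {
>       Set<String> pending = new HashSet<>(viewMembers);
>       // The behaviour described above: 'self' stays in the set, so the
>       // sender waits for an ack from itself that never arrives, and the
>       // RSVP timeout fires (the TimeoutException in the stack trace).
>       // The expected behaviour would be: pending.remove(self);
>       return pending;
>    }
> }
> {code}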
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2617) Intermittent failure of NodeMoveAPITest.testConcurrency
by Jitka Kudrnacova (JIRA)
Jitka Kudrnacova created ISPN-2617:
--------------------------------------
Summary: Intermittent failure of NodeMoveAPITest.testConcurrency
Key: ISPN-2617
URL: https://issues.jboss.org/browse/ISPN-2617
Project: Infinispan
Issue Type: Bug
Components: Test Suite, Tree API
Affects Versions: 5.1.8.Final
Reporter: Jitka Kudrnacova
Assignee: Mircea Markus
The test NodeMoveAPITest.testConcurrency fails intermittently across platforms and JDKs.
The output below is from a test-suite run on RHEL6 x86_64 with Oracle JDK 6.
Stacktrace:
{code}
java.lang.AssertionError: Should have only found x once
at org.testng.AssertJUnit.fail(AssertJUnit.java:57)
at org.testng.AssertJUnit.assertTrue(AssertJUnit.java:22)
at org.testng.AssertJUnit.assertFalse(AssertJUnit.java:39)
at org.infinispan.api.tree.NodeMoveAPITest.testConcurrency(NodeMoveAPITest.java:405)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:74)
at org.testng.internal.Invoker.invokeMethod(Invoker.java:673)
at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:846)
at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1170)
at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:125)
at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:109)
at org.testng.TestRunner.runWorkers(TestRunner.java:1147)
at org.testng.TestRunner.privateRun(TestRunner.java:749)
at org.testng.TestRunner.run(TestRunner.java:600)
at org.testng.SuiteRunner.runTest(SuiteRunner.java:317)
at org.testng.SuiteRunner.access$000(SuiteRunner.java:34)
at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:351)
at org.testng.internal.thread.ThreadUtil$CountDownLatchedRunnable.run(ThreadUtil.java:147)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
{code}
Output:
{code}
2012-12-05 10:21:34,463 FATAL [NodeMoveAPITest] (testng-NodeMoveAPITest) Tree:
+ / {}
+ b/ {}
+ x/ {}
+ a/ {}
+ e/ {}
+ c/ {}
+ x/ {}
+ y/ {}
+ d/ {}
2012-12-05 10:21:34,464 ERROR [UnitTestTestNGListener] (testng-NodeMoveAPITest) Method testConcurrency(org.infinispan.api.tree.NodeMoveAPITest) threw an exception
java.lang.AssertionError: Should have only found x once
at org.testng.AssertJUnit.fail(AssertJUnit.java:57)
at org.testng.AssertJUnit.assertTrue(AssertJUnit.java:22)
at org.testng.AssertJUnit.assertFalse(AssertJUnit.java:39)
at org.infinispan.api.tree.NodeMoveAPITest.testConcurrency(NodeMoveAPITest.java:405)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:74)
at org.testng.internal.Invoker.invokeMethod(Invoker.java:673)
at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:846)
at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1170)
at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:125)
at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:109)
at org.testng.TestRunner.runWorkers(TestRunner.java:1147)
at org.testng.TestRunner.privateRun(TestRunner.java:749)
at org.testng.TestRunner.run(TestRunner.java:600)
at org.testng.SuiteRunner.runTest(SuiteRunner.java:317)
at org.testng.SuiteRunner.access$000(SuiteRunner.java:34)
at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:351)
at org.testng.internal.thread.ThreadUtil$CountDownLatchedRunnable.run(ThreadUtil.java:147)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
[testng-NodeMoveAPITest] Test testConcurrency(org.infinispan.api.tree.NodeMoveAPITest) failed.
2012-12-05 10:21:34,466 ERROR [UnitTestTestNGListener] (testng-NodeMoveAPITest) Test failed testConcurrency(org.infinispan.api.tree.NodeMoveAPITest)
java.lang.AssertionError: Should have only found x once
at org.testng.AssertJUnit.fail(AssertJUnit.java:57)
at org.testng.AssertJUnit.assertTrue(AssertJUnit.java:22)
at org.testng.AssertJUnit.assertFalse(AssertJUnit.java:39)
at org.infinispan.api.tree.NodeMoveAPITest.testConcurrency(NodeMoveAPITest.java:405)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:74)
at org.testng.internal.Invoker.invokeMethod(Invoker.java:673)
at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:846)
at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1170)
at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:125)
at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:109)
at org.testng.TestRunner.runWorkers(TestRunner.java:1147)
at org.testng.TestRunner.privateRun(TestRunner.java:749)
at org.testng.TestRunner.run(TestRunner.java:600)
at org.testng.SuiteRunner.runTest(SuiteRunner.java:317)
at org.testng.SuiteRunner.access$000(SuiteRunner.java:34)
at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:351)
at org.testng.internal.thread.ThreadUtil$CountDownLatchedRunnable.run(ThreadUtil.java:147)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
{code}
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2564) Send distributed tasks to cache members rather than the entire cluster
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-2564?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-2564:
------------------------------------
Sorry Vladimir, I meant to say I know what should be done but I don't have a solution ready.
I think we should have a RpcManagerImpl.getMembers() method that uses StateTransferManager.getCacheTopology().getMembers(), and all the cache-level stuff that currently uses Transport.getMembers() should use RpcManagerImpl.getMembers() instead.
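Something along these lines (a sketch of the suggested delegation, not an existing method):
{code}
// In RpcManagerImpl: derive the member list from the cache topology
// rather than from the cluster-wide transport view.
public List<Address> getMembers() {
   return stateTransferManager.getCacheTopology().getMembers();
}
{code}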
> Send distributed tasks to cache members rather than the entire cluster
> ----------------------------------------------------------------------
>
> Key: ISPN-2564
> URL: https://issues.jboss.org/browse/ISPN-2564
> Project: Infinispan
> Issue Type: Bug
> Components: Core API
> Affects Versions: 5.2.0.Beta4
> Reporter: Vladimir Blagojevic
> Assignee: Vladimir Blagojevic
> Fix For: 5.2.0.CR1, 5.2.0.Final
>
>
> Currently our codebase relies on cache views being equal to the entire cluster; however, this might not be the case in an asymmetric cluster, where certain caches run only on particular Infinispan nodes. We have to make sure that cache views, where needed, are scoped properly to a particular cache rather than to the entire cluster.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2550) NoSuchElementException in Hot Rod Encoder
by Michal Linhard (JIRA)
[ https://issues.jboss.org/browse/ISPN-2550?page=com.atlassian.jira.plugin.... ]
Michal Linhard commented on ISPN-2550:
--------------------------------------
Tomas' trace log shows exactly the same spot as my scenario: https://bugzilla.redhat.com/attachment.cgi?id=641649 (I'm not sure about his test scenario, though)
> NoSuchElementException in Hot Rod Encoder
> -----------------------------------------
>
> Key: ISPN-2550
> URL: https://issues.jboss.org/browse/ISPN-2550
> Project: Infinispan
> Issue Type: Bug
> Components: Remote protocols
> Affects Versions: 5.2.0.Beta4
> Reporter: Michal Linhard
> Assignee: Galder Zamarreño
> Priority: Blocker
> Fix For: 5.2.0.Beta6
>
>
> Tomas noticed this a while ago in a specific functional test:
> https://bugzilla.redhat.com/show_bug.cgi?id=875151
> I'm creating a more general JIRA because I'm hitting this in a resilience test as well.
> What I found by a quick debug is that here:
> https://github.com/infinispan/infinispan/blob/master/server/hotrod/src/ma...
> {code}
> for (segmentIdx <- 0 until numSegments) {
>    val denormalizedSegmentHashIds = allDenormalizedHashIds(segmentIdx)
>    val segmentOwners = ch.locateOwnersForSegment(segmentIdx)
>    for (ownerIdx <- 0 until segmentOwners.length) {
>       val address = segmentOwners(ownerIdx % segmentOwners.size)
>       val serverAddress = members(address)
>       val hashId = denormalizedSegmentHashIds(ownerIdx)
>       log.tracef("Writing hash id %d for %s:%s", hashId, serverAddress.host, serverAddress.port)
>       writeString(serverAddress.host, buf)
>       writeUnsignedShort(serverAddress.port, buf)
>       buf.writeInt(hashId)
>    }
> }
> {code}
> we're trying to obtain a {{serverAddress}} for a nonexistent address, and the resulting NoSuchElementException is not handled properly.
> It happens after I kill a node in the resilience test; the exception appears when querying for the node in the members cache.
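> A sketch of the kind of guard that's missing, in Java for brevity (the actual encoder is Scala; the names are illustrative):
> {code}
> import java.util.Map;
>
> class TopologyEncoderSketch {
>    // members maps cluster addresses to Hot Rod endpoints; after a node
>    // is killed, the consistent hash may still list it as a segment owner.
>    static void writeOwner(Map<String, String> members, String address) {
>       String serverAddress = members.get(address);  // null, no exception
>       if (serverAddress == null) {
>          // Skip (or log) owners with no known endpoint instead of letting
>          // a NoSuchElementException escape the encoder.
>          return;
>       }
>       // ... write host, port and hash id to the buffer ...
>    }
> }
> {code}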
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2550) NoSuchElementException in Hot Rod Encoder
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2550?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño commented on ISPN-2550:
----------------------------------------
Tomas, I was wondering which of the functional tests you had developed was failing, and where (stacktrace of the failure, etc.). The idea is to replicate that specific test in the Infinispan codebase. Thanks.
> NoSuchElementException in Hot Rod Encoder
> -----------------------------------------
>
> Key: ISPN-2550
> URL: https://issues.jboss.org/browse/ISPN-2550
> Project: Infinispan
> Issue Type: Bug
> Components: Remote protocols
> Affects Versions: 5.2.0.Beta4
> Reporter: Michal Linhard
> Assignee: Galder Zamarreño
> Priority: Blocker
> Fix For: 5.2.0.Beta6
>
>
> Tomas noticed this a while ago in a specific functional test:
> https://bugzilla.redhat.com/show_bug.cgi?id=875151
> I'm creating a more general JIRA because I'm hitting this in a resilience test as well.
> What I found by a quick debug is that here:
> https://github.com/infinispan/infinispan/blob/master/server/hotrod/src/ma...
> {code}
> for (segmentIdx <- 0 until numSegments) {
>    val denormalizedSegmentHashIds = allDenormalizedHashIds(segmentIdx)
>    val segmentOwners = ch.locateOwnersForSegment(segmentIdx)
>    for (ownerIdx <- 0 until segmentOwners.length) {
>       val address = segmentOwners(ownerIdx % segmentOwners.size)
>       val serverAddress = members(address)
>       val hashId = denormalizedSegmentHashIds(ownerIdx)
>       log.tracef("Writing hash id %d for %s:%s", hashId, serverAddress.host, serverAddress.port)
>       writeString(serverAddress.host, buf)
>       writeUnsignedShort(serverAddress.port, buf)
>       buf.writeInt(hashId)
>    }
> }
> {code}
> we're trying to obtain a {{serverAddress}} for a nonexistent address, and the resulting NoSuchElementException is not handled properly.
> It happens after I kill a node in the resilience test; the exception appears when querying for the node in the members cache.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira