[JBoss JIRA] (HRJS-18) Local server iterator test fails and hangs randomly with NoSuchElementException
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/HRJS-18?page=com.atlassian.jira.plugin.sy... ]
Galder Zamarreño updated HRJS-18:
---------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> Local server iterator test fails and hangs randomly with NoSuchElementException
> -------------------------------------------------------------------------------
>
> Key: HRJS-18
> URL: https://issues.jboss.org/browse/HRJS-18
> Project: Infinispan Javascript client
> Issue Type: Bug
> Affects Versions: 0.3.0
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Fix For: 0.4.0
>
> Attachments: server.log, tmp-tests.log
>
>
> Apart from the server returning {{NoSuchElementException}}, this causes confusion in the client, which results in the testsuite hanging completely.
> Here are snippets from client and server logs:
> {code}
> [2016-06-01 17:57:36.210] [DEBUG] client - Invoke putAll(msgId=323,pairs=[{"key":"local-it1","value":"v1","done":false},{"key":"local-it2","value":"v2","done":false},{"key":"local-it3","value":"v3","done":false}],opts=undefined)
> [2016-06-01 17:57:36.210] [TRACE] encoder - Encode operation with topology id 0
> [2016-06-01 17:57:36.210] [TRACE] transport - Write buffer(msgId=323) to 127.0.0.1:11222: A0C302192D000003007703096C6F63616C2D697431027631096C6F63616C2D697432027632096C6F63616C2D697433027633
> [2016-06-01 17:57:36.214] [TRACE] decoder - Read header(msgId=323): opCode=46, status=0, hasNewTopology=0
> [2016-06-01 17:57:36.215] [TRACE] decoder - Call decode for request(msgId=323)
> [2016-06-01 17:57:36.215] [TRACE] connection - After decoding request(msgId=323), buffer size is 6, and offset 6
> [2016-06-01 17:57:36.215] [TRACE] connection - Complete success for request(msgId=323) with undefined
> [2016-06-01 17:57:36.215] [DEBUG] client - Invoke iterator(msgId=324,batchSize=1,opts=undefined)
> [2016-06-01 17:57:36.215] [TRACE] encoder - Encode operation with topology id 0
> [2016-06-01 17:57:36.215] [TRACE] transport - Write buffer(msgId=324) to 127.0.0.1:11222: A0C40219310000030001010100
> [2016-06-01 17:57:36.230] [TRACE] decoder - Read header(msgId=324): opCode=50, status=0, hasNewTopology=0
> [2016-06-01 17:57:36.230] [TRACE] decoder - Call decode for request(msgId=324)
> [2016-06-01 17:57:36.230] [TRACE] connection - After decoding request(msgId=324), buffer size is 43, and offset 43
> [2016-06-01 17:57:36.230] [TRACE] connection - Complete success for request(msgId=324) with {"iterId":"28cab848-73ac-47c5-ad68-f518b89c5ba4","conn":{}}
> [2016-06-01 17:57:36.230] [TRACE] client - Invoke iterator.next(msgId=325,iteratorId=28cab848-73ac-47c5-ad68-f518b89c5ba4) on 127.0.0.1:11222
> [2016-06-01 17:57:36.230] [TRACE] encoder - Encode operation with topology id 0
> [2016-06-01 17:57:36.231] [TRACE] transport - Write buffer(msgId=325) to 127.0.0.1:11222: A0C5021933000003002432386361623834382D373361632D343763352D616436382D663531386238396335626134
> [2016-06-01 17:57:36.231] [TRACE] client - Invoke iterator.next(msgId=326,iteratorId=28cab848-73ac-47c5-ad68-f518b89c5ba4) on 127.0.0.1:11222
> [2016-06-01 17:57:36.231] [TRACE] encoder - Encode operation with topology id 0
> [2016-06-01 17:57:36.231] [TRACE] transport - Write buffer(msgId=326) to 127.0.0.1:11222: A0C6021933000003002432386361623834382D373361632D343763352D616436382D663531386238396335626134
> [2016-06-01 17:57:36.231] [TRACE] client - Invoke iterator.next(msgId=327,iteratorId=28cab848-73ac-47c5-ad68-f518b89c5ba4) on 127.0.0.1:11222
> [2016-06-01 17:57:36.231] [TRACE] encoder - Encode operation with topology id 0
> [2016-06-01 17:57:36.231] [TRACE] transport - Write buffer(msgId=327) to 127.0.0.1:11222: A0C7021933000003002432386361623834382D373361632D343763352D616436382D663531386238396335626134
> [2016-06-01 17:57:36.244] [TRACE] decoder - Read header(msgId=327): opCode=80, status=133, hasNewTopology=0
> [2016-06-01 17:57:36.244] [ERROR] decoder - Error decoding body of request(msgId=327): java.util.NoSuchElementException
> [2016-06-01 17:57:36.244] [TRACE] connection - After decoding request(msgId=327), buffer size is 39, and offset 39
> [2016-06-01 17:57:36.244] [TRACE] connection - Complete failure for request(msgId=327) with java.util.NoSuchElementException
> [2016-06-01 17:57:36.249] [TRACE] decoder - Read header(msgId=327): opCode=52, status=0, hasNewTopology=0
> {code}
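> The client log shows how the hang follows: three {{iterator.next}} calls (msgId 325-327) are written back-to-back on the same connection before any reply is read, and once the error reply for msgId 327 arrives the decoder's request bookkeeping gets confused, so the remaining in-flight requests are never completed. A minimal sketch of the correlation pattern that keeps replies paired with requests (hypothetical names, written in Java for illustration; the actual client is JavaScript):
> {code}
> // Sketch (assumption, not the actual js-client code): each outgoing request
> // registers a promise keyed by msgId; every decoded reply header completes
> // exactly the request it names, success or failure, so an error reply for
> // one msgId cannot leave a different request pending forever.
> import java.util.Map;
> import java.util.concurrent.CompletableFuture;
> import java.util.concurrent.ConcurrentHashMap;
>
> final class PendingRequests {
>    private final Map<Long, CompletableFuture<Object>> pending = new ConcurrentHashMap<>();
>
>    CompletableFuture<Object> register(long msgId) {
>       CompletableFuture<Object> promise = new CompletableFuture<>();
>       pending.put(msgId, promise);
>       return promise;
>    }
>
>    void complete(long msgId, Object result, Throwable error) {
>       CompletableFuture<Object> promise = pending.remove(msgId);
>       if (promise == null)
>          return;                          // stale or duplicate reply: ignore, don't misattribute
>       if (error != null)
>          promise.completeExceptionally(error);
>       else
>          promise.complete(result);
>    }
> }
> {code}
> The server log below shows the matching server-side view of the same exchange.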
> {code}
> 2016-06-01 17:57:36,216 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decoded header HotRodHeader{op=IterationStartRequest, version=25, messageId=324, cacheName=, flag=0, clientIntelligence=3, topologyId=0}
> 2016-06-01 17:57:36,222 TRACE [org.infinispan.interceptors.impl.InvocationContextInterceptor] (HotRodServerHandler-6-115) Invoked with command EntrySetCommand{cache=default} and InvocationContext [org.infinispan.context.impl.NonTxInvocationContext@7bc2d6d3]
> 2016-06-01 17:57:36,222 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (HotRodServerHandler-6-115) Executing command: EntrySetCommand{cache=default}.
> 2016-06-01 17:57:36,229 TRACE [org.infinispan.server.hotrod.ContextHandler] (HotRodServerHandler-6-115) Write response IterationStartResponse{version=25, messageId=324, cacheName=, operation=IterationStartResponse, status=Success, iterationId=28cab848-73ac-47c5-ad68-f518b89c5ba4}
> 2016-06-01 17:57:36,229 TRACE [org.infinispan.server.hotrod.HotRodEncoder] (HotRodServerWorker-8-2) Encode msg IterationStartResponse{version=25, messageId=324, cacheName=, operation=IterationStartResponse, status=Success, iterationId=28cab848-73ac-47c5-ad68-f518b89c5ba4}
> 2016-06-01 17:57:36,229 TRACE [org.infinispan.server.hotrod.Encoder2x$] (HotRodServerWorker-8-2) Write topology response header with no change
> 2016-06-01 17:57:36,229 TRACE [org.infinispan.server.hotrod.HotRodEncoder] (HotRodServerWorker-8-2) Write buffer contents A1C4023200002432386361623834382D373361632D343763352D616436382D663531386238396335626134 to channel [id: 0xd8959d2b, L:/127.0.0.1:11222 - R:/127.0.0.1:52367]
> 2016-06-01 17:57:36,231 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decode using instance @54abd903
> 2016-06-01 17:57:36,231 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decoded header HotRodHeader{op=IterationNextRequest, version=25, messageId=325, cacheName=, flag=0, clientIntelligence=3, topologyId=0}
> 2016-06-01 17:57:36,231 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decode using instance @54abd903
> 2016-06-01 17:57:36,231 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decoded header HotRodHeader{op=IterationNextRequest, version=25, messageId=326, cacheName=, flag=0, clientIntelligence=3, topologyId=0}
> 2016-06-01 17:57:36,231 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decode using instance @54abd903
> 2016-06-01 17:57:36,231 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decoded header HotRodHeader{op=IterationNextRequest, version=25, messageId=327, cacheName=, flag=0, clientIntelligence=3, topologyId=0}
> 2016-06-01 17:57:36,239 DEBUG [org.infinispan.server.hotrod.HotRodExceptionHandler] (HotRodServerWorker-8-2) Exception caught: java.util.NoSuchElementException
> at org.infinispan.stream.impl.RemovableIterator.next(RemovableIterator.java:49)
> at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:43)
> at org.infinispan.server.hotrod.iteration.DefaultIterationManager$$anonfun$next$1$$anonfun$6.apply(IterationManager.scala:142)
> at org.infinispan.server.hotrod.iteration.DefaultIterationManager$$anonfun$next$1$$anonfun$6.apply(IterationManager.scala:142)
> at scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:728)
> at scala.collection.immutable.Range.foreach(Range.scala:166)
> at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:727)
> at org.infinispan.server.hotrod.iteration.DefaultIterationManager$$anonfun$next$1.apply(IterationManager.scala:142)
> at org.infinispan.server.hotrod.iteration.DefaultIterationManager$$anonfun$next$1.apply(IterationManager.scala:138)
> at scala.Option.map(Option.scala:146)
> at org.infinispan.server.hotrod.iteration.DefaultIterationManager.next(IterationManager.scala:138)
> at org.infinispan.server.hotrod.ContextHandler.realRead(ContextHandler.java:182)
> at org.infinispan.server.hotrod.ContextHandler.lambda$channelRead0$1(ContextHandler.java:56)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
> at java.lang.Thread.run(Thread.java:745)
> 2016-06-01 17:57:36,244 TRACE [org.infinispan.server.hotrod.HotRodEncoder] (HotRodServerWorker-8-2) Encode msg ErrorResponse{version=25, messageId=327, operation=ErrorResponse, status=ServerError, msg=java.util.NoSuchElementException}
> {code}
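> On the server side, the stack trace points at {{RemovableIterator.next}}: the three IterationNextRequests for the same iterationId are handed to pooled handler threads and race on one shared, non-thread-safe iterator, so one thread can call {{next()}} right after another has drained the last element. A minimal sketch of the guarded batch fill this requires (assumed shape, in Java for illustration; the actual {{DefaultIterationManager}} is Scala):
> {code}
> // Sketch (assumption): per-iterationId state. Serializing nextBatch() and
> // re-checking hasNext() before every next() under the same lock prevents
> // the NoSuchElementException seen in the server log.
> import java.util.ArrayList;
> import java.util.Iterator;
> import java.util.List;
>
> final class IterationState<E> {
>    private final Iterator<E> iterator;
>
>    IterationState(Iterator<E> iterator) {
>       this.iterator = iterator;
>    }
>
>    synchronized List<E> nextBatch(int batchSize) {
>       List<E> batch = new ArrayList<>(batchSize);
>       while (batch.size() < batchSize && iterator.hasNext()) {
>          batch.add(iterator.next());      // safe: hasNext() checked under the same lock
>       }
>       return batch;                       // an empty batch marks the end of the iteration
>    }
> }
> {code}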
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-3395) ISPN000196: Failed to recover cluster state after the current node became the coordinator
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-3395?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-3395:
-----------------------------------------------
Petr Penicka <ppenicka@redhat.com> changed the Status of [bug 1283465|https://bugzilla.redhat.com/show_bug.cgi?id=1283465] from VERIFIED to CLOSED
> ISPN000196: Failed to recover cluster state after the current node became the coordinator
> -----------------------------------------------------------------------------------------
>
> Key: ISPN-3395
> URL: https://issues.jboss.org/browse/ISPN-3395
> Project: Infinispan
> Issue Type: Bug
> Components: State Transfer
> Affects Versions: 5.3.0.Final
> Reporter: Mayank Agarwal
> Fix For: 6.0.2.Final, 7.0.0.Final, 5.2.16.Final
>
>
> We are using Infinispan 5.3.0.Final in our distributed application. We are testing Infinispan in HA scenarios and get the following exception when a new node becomes the coordinator:
> ISPN000196: Failed to recover cluster state after the current node became the coordinator
> java.lang.NullPointerException: null
> at org.infinispan.topology.ClusterTopologyManagerImpl.recoverClusterStatus(ClusterTopologyManagerImpl.java:455) ~[infinispan-core-5.3.0.1.Final.jar:5.3.0.1.Final]
> at org.infinispan.topology.ClusterTopologyManagerImpl.handleNewView(ClusterTopologyManagerImpl.java:235) ~[infinispan-core-5.3.0.1.Final.jar:5.3.0.1.Final]
> at org.infinispan.topology.ClusterTopologyManagerImpl$ClusterViewListener$1.run(ClusterTopologyManagerImpl.java:647) ~[infinispan-core-5.3.0.1.Final.jar:5.3.0.1.Final]
> at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) ~[na:1.6.0_25]
> at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) ~[na:1.6.0_25]
> at java.util.concurrent.FutureTask.run(Unknown Source) [na:1.6.0_25]
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source) [na:1.6.0_25]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.6.0_25]
> at java.lang.Thread.run(Unknown Source) [na:1.6.0_25]
> This is happening because {{cacheTopology}} is null at ClusterTopologyManagerImpl.java:455.
> The code already checks {{cacheTopology}} for null at line 449, but the for loop that updates {{cacheStatusMap}} at line 457 sits outside that check; it should be moved inside it.
> Fix:
> {code}
> --- a/core/src/main/java/org/infinispan/topology/ClusterTopologyManagerImpl.java
> +++ b/core/src/main/java/org/infinispan/topology/ClusterTopologyManagerImpl.java
> @@ -448,7 +448,7 @@ public class ClusterTopologyManagerImpl implements ClusterTopologyManager {
>           // but didn't get a response back yet
>           if (cacheTopology != null) {
>              topologyList.add(cacheTopology);
> -          }
> +
>
>           // Add all the members of the topology that have sent responses first
>           // If we only added the sender, we could end up with a different member order
> @@ -457,6 +457,7 @@ public class ClusterTopologyManagerImpl implements ClusterTopologyManager {
>                 cacheStatusMap.get(cacheName).addMember(member);
>              }
>           }
> +       }
> {code}
>
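> For clarity, this is the control flow the patch produces (a sketch reconstructed from the hunks above; the loop header and the {{Address}} type are assumed, and the rest of the method is not shown):
> {code}
> // Reconstructed from the diff (assumption: only the patched region of
> // recoverClusterStatus is sketched here).
> if (cacheTopology != null) {
>    topologyList.add(cacheTopology);
>
>    // Add all the members of the topology that have sent responses first.
>    // The loop is now inside the null check, so it can no longer NPE on a
>    // missing topology.
>    for (Address member : cacheTopology.getMembers()) {
>       cacheStatusMap.get(cacheName).addMember(member);
>    }
> }
> {code}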
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-7362) Use bundler "no-bundler" in the default UDP configuration
by Dan Berindei (JIRA)
Dan Berindei created ISPN-7362:
----------------------------------
Summary: Use bundler "no-bundler" in the default UDP configuration
Key: ISPN-7362
URL: https://issues.jboss.org/browse/ISPN-7362
Project: Infinispan
Issue Type: Task
Components: Configuration, Core
Affects Versions: 9.0.0.Beta1
Reporter: Dan Berindei
Assignee: Dan Berindei
Fix For: 9.0.0.Beta2
Large datagrams don't seem to work very well in some networks, like our QE cluster.
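For reference, the bundler is a property of the JGroups transport protocol; a sketch of how "no-bundler" would be selected in a UDP stack (assumed attribute placement and addresses, not the actual configuration shipped with this change):
{code}
<!-- Sketch (assumption): bundler_type on the UDP transport selects the
     bundler; "no-bundler" writes each message as its own datagram instead
     of bundling several messages into one large datagram. -->
<UDP bundler_type="no-bundler"
     mcast_addr="${jgroups.udp.mcast_addr:228.6.7.8}"
     mcast_port="${jgroups.udp.mcast_port:46655}"/>
{code}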
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-4810) Local Transactional Cache loses data when eviction is enabled and there are multiple readers and one writer
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-4810?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-4810:
-----------------------------------------------
Petr Penicka <ppenicka@redhat.com> changed the Status of [bug 1202354|https://bugzilla.redhat.com/show_bug.cgi?id=1202354] from VERIFIED to CLOSED
> Local Transactional Cache loses data when eviction is enabled and there are multiple readers and one writer
> -----------------------------------------------------------------------------------------------------------
>
> Key: ISPN-4810
> URL: https://issues.jboss.org/browse/ISPN-4810
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 6.0.2.Final
> Environment: Windows 7 x64 (NTFS)
> Oracle JDK1.7.0_40
> Apache Maven 3.0.5
> Reporter: Horia Chiorean
> Assignee: William Burns
> Labels: modeshape
> Fix For: 7.2.0.Beta1, 5.2.12.Final, 5.2.13.Final
>
> Attachments: ispn_concurrent.zip
>
>
> Using Infinispan 6.0.2 and a local, transactional cache backed by a <singleFile> store, with eviction enabled and a small {{max-entries}} setting, we have the following scenario:
> * the main thread (i.e. the "writer") starts a transaction, adds a batch of strings into the cache and also appends the same strings into a List cache entry and then commits the transaction
> * after the above has finished (i.e. after tx.commit) it fires a number of reader threads where each reader thread
> ** checks that the string entries were added into the cache and
> ** checks that the entries were correctly appended to the List entry
> * the above steps are repeated a number of times
> On any given run, depending on timing, some of the reader threads will at some point stop seeing the latest version of the List entry (i.e. the latest elements added to the list) and instead read an old, stale List (in effect a "lost update").
> The problem doesn't show up if we either:
> * disable eviction, or
> * set {{max-entries}} to a large enough value (which I suspect amounts to the same thing - nothing gets evicted).
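> A condensed sketch of the scenario (assumed shape, reconstructed from the description above rather than from the attached ispn_concurrent.zip):
> {code}
> // Reproduction sketch (assumption: names and batch logic inferred from the
> // description; the real test is in the attached ispn_concurrent.zip).
> import java.util.ArrayList;
> import java.util.List;
> import javax.transaction.TransactionManager;
> import org.infinispan.Cache;
>
> final class EvictionStaleListRepro {
>
>    // Writer: one transaction adds a batch of strings and appends the same
>    // strings to a shared List entry, then commits.
>    static void writeBatch(Cache<String, Object> cache, int from, int to) throws Exception {
>       TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
>       tm.begin();
>       List<String> list = (List<String>) cache.get("list");
>       if (list == null)
>          list = new ArrayList<String>();
>       for (int i = from; i < to; i++) {
>          String value = "value-" + i;
>          cache.put(value, value);   // individual string entry
>          list.add(value);           // same string appended to the List entry
>       }
>       cache.put("list", list);
>       tm.commit();
>    }
>
>    // Each reader thread, fired only after tm.commit() has returned: the
>    // string entries are always found, but with eviction enabled the List
>    // entry is sometimes an old, stale version.
>    static void check(Cache<String, Object> cache, int upTo) {
>       List<String> list = (List<String>) cache.get("list");
>       for (int i = 0; i < upTo; i++) {
>          assert cache.containsKey("value-" + i);
>          assert list.contains("value-" + i);   // fails intermittently (lost update)
>       }
>    }
> }
> {code}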
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)