[JBoss JIRA] (ISPN-2697) HotRodServer startup fails when its record cannot be inserted into topology cache
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/ISPN-2697?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on ISPN-2697:
--------------------------------
Can't clients use withFlags()? So, yes, HotRod doesn't *need* to be any different.
Using flags is the best solution I can think of. I'll eventually implement JGRP-1570, but I think using a flag is better; that's exactly the use case I had in mind when writing RSVP.
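For illustration, a minimal sketch of the flag-based approach at the Infinispan API level. The withFlags() call is the real AdvancedCache API; the flag name GUARANTEED_DELIVERY is an assumption here (a stand-in for whatever flag ends up mapping the replicated put onto an RSVP-flagged JGroups message), not something confirmed in this thread:
{code:java}
import org.infinispan.Cache;
import org.infinispan.context.Flag;

public final class TopologyRegistrationSketch {

   // Sketch only: register this node's endpoint in the shared topology cache,
   // asking the transport for guaranteed delivery of the replicated put.
   static void registerEndpoint(Cache<String, String> topologyCache,
                                String nodeName, String hostAndPort) {
      topologyCache.getAdvancedCache()
                   .withFlags(Flag.GUARANTEED_DELIVERY) // assumed flag; would map to a JGroups RSVP message
                   .put(nodeName, hostAndPort);
   }
}
{code}
If no such flag exists, the same withFlags() call site is where any other delivery-guarantee hint would plug in.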
> HotRodServer startup fails when its record cannot be inserted into topology cache
> ---------------------------------------------------------------------------------
>
> Key: ISPN-2697
> URL: https://issues.jboss.org/browse/ISPN-2697
> Project: Infinispan
> Issue Type: Bug
> Components: Remote protocols
> Affects Versions: 5.2.0.Beta6
> Reporter: Radim Vansa
> Assignee: Galder Zamarreño
> Priority: Critical
> Fix For: 5.2.0.Final
>
>
> When the HotRodServer starts, it inserts its record into __hotRodTopologyCache ({{HotRodServer.addSelfToTopologyView(...)}}).
> However, this put may very easily fail: the command is broadcast using the NAKACK2 protocol, so if the message gets lost and no further message is broadcast, it will not be retransmitted and the put operation times out (replication timeout). This fails the whole HotRodServer startup, all because of one lost UDP message.
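For context, a minimal JGroups-level sketch of the RSVP flag mentioned in the comment above: a message flagged with RSVP makes send() block until all current members have acknowledged it (or the RSVP timeout fires), so a dropped datagram is retransmitted promptly instead of waiting for a later broadcast to trigger NAKACK2 retransmission. This assumes an RSVP protocol is configured near the top of the stack; the stack file name below is made up:
{code:java}
import org.jgroups.JChannel;
import org.jgroups.Message;

public class RsvpSketch {
   public static void main(String[] args) throws Exception {
      // Assumes the configuration file includes the RSVP protocol above NAKACK2.
      JChannel channel = new JChannel("udp-with-rsvp.xml");
      channel.connect("demo-cluster");

      Message msg = new Message(null);      // null destination = broadcast to the whole group
      msg.setObject("topology-record");
      msg.setFlag(Message.Flag.RSVP);       // send() now blocks until every member acks, or times out
      channel.send(msg);

      channel.close();
   }
}
{code}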
[JBoss JIRA] (ISPN-2697) HotRodServer startup fails when its record cannot be inserted into topology cache
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2697?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño commented on ISPN-2697:
----------------------------------------
Well, Hot Rod uses Infinispan just like any other client; it's no different. So there's currently no way to have a cache flag that only applies to Hot Rod.
> HotRodServer startup fails when its record cannot be inserted into topology cache
> ---------------------------------------------------------------------------------
>
> Key: ISPN-2697
> URL: https://issues.jboss.org/browse/ISPN-2697
> Project: Infinispan
> Issue Type: Bug
> Components: Remote protocols
> Affects Versions: 5.2.0.Beta6
> Reporter: Radim Vansa
> Assignee: Galder Zamarreño
> Priority: Critical
> Fix For: 5.2.0.Final
>
>
> When the HotRodServer starts, it inserts its record into __hotRodTopologyCache ({{HotRodServer.addSelfToTopologyView(...)}}).
> However, this put may very easily fail: the command is broadcast using the NAKACK2 protocol, so if the message gets lost and no further message is broadcast, it will not be retransmitted and the put operation times out (replication timeout). This fails the whole HotRodServer startup, all because of one lost UDP message.
[JBoss JIRA] (ISPN-2712) Initial state transfer doesn't appear to all be persisted when using eviction in a replicated cluster
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-2712?page=com.atlassian.jira.plugin.... ]
Adrian Nistor edited comment on ISPN-2712 at 1/28/13 9:46 AM:
--------------------------------------------------------------
I've just created ReplStateTransferCacheLoaderTest using your config and can confirm that your scenario works in 5.1.x and fails in 5.2.
was (Author: anistor):
I've just created a test using your config and can confirm that your scenario works in 5.1.x and fails in 5.2: https://github.com/anistor/infinispan/commit/b9351b50ec7b33678d01c71b888c...
It seems that the issue is in the area of eviction and cache loaders.
> Initial state transfer doesn't appear to all be persisted when using eviction in a replicated cluster
> ------------------------------------------------------------------------------------------------------
>
> Key: ISPN-2712
> URL: https://issues.jboss.org/browse/ISPN-2712
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 5.2.0.CR1
> Reporter: Randall Hauch
> Assignee: Adrian Nistor
> Priority: Critical
> Fix For: 5.2.0.Final
>
> Attachments: ispn_eviction_log.txt, spectrum-repository-infinispan.xml
>
>
> The setup is a clustered cache with 2 nodes, where the cache in each node is configured identically with replication, eviction, and a (non-shared) file cache store. (See the attached configuration.)
> The first (coordinator) process in the cluster is started and populated with 293 entries. The first process then continually adds a few entries every 5 seconds. After a short delay, the second process is started, at which point it joins the cluster and starts the state-transfer process; logging in the first process shows that all 293 entries are transferred to the new cluster member, and the second log shows that they are all received. The second process then attempts to look up a specific entry that was created during initial population in the first process. This lookup fails to find the existing entry.
> By enabling trace logging and "IDE breakpoint output messages" around state transfer, it's visible that of the 293 keys only 218 are placed into the cache; the others are lost.
> (This problem was originally discovered when clustering ModeShape, which behaves roughly in the manner described above. The entries populated upon initialization are content created when a new repository is started. The second process looks for this content, and if it finds it, it knows not to create the initial content again. However, if it doesn't find it, it assumes the repository has not yet been initialized and creates the initial content. The problem described by this bug then manifests itself in ModeShape through dozens of exceptions because the repository has been corrupted. See MODE-1745 for details on that problem; ModeShape's corresponding known issue for this Infinispan bug is MODE-1754.)
> The eviction is configured like this:
> {code:xml}
> <eviction strategy="LIRS" maxEntries="1000"/>
> {code}
> The attached log file is from the second process (the "receiver" node) and it contains the following key points:
> * line 40 - the total number of keys & entries to be transferred = 293
> * line 1352 and onwards (1358, 1364, ..., i.e. every 6 lines) - the data container's size stops growing at 218 while the remaining entries are still being sent. This means that, in effect, they are ignored.
> * line 1797 - the loop from {{org.infinispan.statetransfer.StateConsumerImpl#doApplyState}} finishes
> Disabling eviction fixes the problem and all 293 entries are placed in Node2's cache.
> (I initially marked this as CRITICAL priority, though it is a blocker for our use of Infinispan 5.2.)
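For reference, a hedged sketch of an equivalent programmatic configuration for the setup described above (synchronous replication, LIRS eviction capped at 1000 entries, a non-shared file cache store). It is written against the 5.2-era ConfigurationBuilder API from memory; the store location is arbitrary and the authoritative configuration is the attached spectrum-repository-infinispan.xml:
{code:java}
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.eviction.EvictionStrategy;

public class ReplEvictionStoreConfigSketch {

   static Configuration build() {
      return new ConfigurationBuilder()
            .clustering().cacheMode(CacheMode.REPL_SYNC)
            .eviction().strategy(EvictionStrategy.LIRS).maxEntries(1000)
            .loaders()
               .shared(false)                                        // each node keeps its own file store
               .addFileCacheStore().location("/var/tmp/ispn-store")  // arbitrary path, for illustration
            .build();
   }
}
{code}
With eviction enabled, entries evicted from the data container while doApplyState() is running should still be available from the file store; the behaviour reported above suggests they are not, which matches the eviction/cache-loader interaction pointed out in the comments.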
[JBoss JIRA] (ISPN-2712) Initial state transfer doesn't appear to all be persisted when using eviction in a replicated cluster
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-2712?page=com.atlassian.jira.plugin.... ]
Adrian Nistor updated ISPN-2712:
--------------------------------
Status: Pull Request Sent (was: Coding In Progress)
Git Pull Request: https://github.com/infinispan/infinispan/pull/1617
> Initial state transfer doesn't appear to all be persisted when using eviction in a replicated cluster
> ------------------------------------------------------------------------------------------------------
>
> Key: ISPN-2712
> URL: https://issues.jboss.org/browse/ISPN-2712
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 5.2.0.CR1
> Reporter: Randall Hauch
> Assignee: Adrian Nistor
> Priority: Critical
> Fix For: 5.2.0.Final
>
> Attachments: ispn_eviction_log.txt, spectrum-repository-infinispan.xml
>
>
> The setup is a clustered cache with 2 nodes, where the cache in each node is configured identically with replication, eviction, and a (non-shared) file cache store. (See the attached configuration.)
> The first (coordinator) process in the cluster is started and populated with 293 entries. The first process then continually adds a few entries every 5 seconds. After a short delay, the second process is started, at which point it joins the cluster and starts the state-transfer process; logging in the first process shows that all 293 entries are transferred to the new cluster member, and the second log shows that they are all received. The second process then attempts to look up a specific entry that was created during initial population in the first process. This lookup fails to find the existing entry.
> By enabling trace logging and "IDE breakpoint output messages" around state transfer, it's visible that of the 293 keys only 218 are placed into the cache; the others are lost.
> (This problem was originally discovered when clustering ModeShape, which behaves roughly in the manner described above. The entries populated upon initialization are content created when a new repository is started. The second process looks for this content, and if it finds it, it knows not to create the initial content again. However, if it doesn't find it, it assumes the repository has not yet been initialized and creates the initial content. The problem described by this bug then manifests itself in ModeShape through dozens of exceptions because the repository has been corrupted. See MODE-1745 for details on that problem; ModeShape's corresponding known issue for this Infinispan bug is MODE-1754.)
> The eviction is configured like this:
> {code:xml}
> <eviction strategy="LIRS" maxEntries="1000"/>
> {code}
> The attached log file is from the second process (the "receiver" node) and it contains the following key points:
> * line 40 - the total number of keys & entries to be transferred = 293
> * line 1352 and onwards (1358, 1364, ..., i.e. every 6 lines) - the data container's size stops growing at 218 while the remaining entries are still being sent. This means that, in effect, they are ignored.
> * line 1797 - the loop from {{org.infinispan.statetransfer.StateConsumerImpl#doApplyState}} finishes
> Disabling eviction fixes the problem and all 293 entries are placed in Node2's cache.
> (I initially marked this as CRITICAL priority, though it is a blocker for our use of Infinispan 5.2.)
[JBoss JIRA] (ISPN-2762) Add version information to infinispan-core JAR file
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2762?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-2762:
-----------------------------------
Assignee: Galder Zamarreño (was: Mircea Markus)
Fix Version/s: 5.2.0.Final
> Add version information to infinispan-core JAR file
> ---------------------------------------------------
>
> Key: ISPN-2762
> URL: https://issues.jboss.org/browse/ISPN-2762
> Project: Infinispan
> Issue Type: Feature Request
> Components: Core API
> Affects Versions: 5.2.0.CR3
> Reporter: Alan Field
> Assignee: Galder Zamarreño
> Fix For: 5.2.0.Final
>
>
> Infinispan version information is logged when a cache is created, but it would be useful to have this information included in the infinispan-core JAR file. I discovered this when I was building and packaging RadarGun and wanted to know which version of Infinispan 5.2 was included. Currently no version information is included in the JAR file.
> The version information for a SNAPSHOT build could include the build date and a git hash, along with the version string.
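As a rough illustration of how such metadata could be consumed once the build adds it, a small sketch that reads the standard Implementation-Version attribute from the infinispan-core JAR, plus two extra attributes for a build date and git hash; "Build-Date" and "Git-Commit" are hypothetical attribute names, not something the current JAR provides:
{code:java}
import java.io.InputStream;
import java.util.jar.Attributes;
import java.util.jar.Manifest;

import org.infinispan.manager.DefaultCacheManager;

public class InfinispanVersionInfoSketch {
   public static void main(String[] args) throws Exception {
      // Standard JAR metadata, populated from the manifest if the build sets it.
      Package pkg = DefaultCacheManager.class.getPackage();
      System.out.println("Implementation-Version: " + pkg.getImplementationVersion());

      // Full manifest lookup for extra attributes (hypothetical names).
      // Note: on a large classpath this may resolve another JAR's manifest; shown for illustration only.
      InputStream in = DefaultCacheManager.class.getClassLoader()
            .getResourceAsStream("META-INF/MANIFEST.MF");
      try {
         Attributes attrs = new Manifest(in).getMainAttributes();
         System.out.println("Build-Date: " + attrs.getValue("Build-Date"));
         System.out.println("Git-Commit: " + attrs.getValue("Git-Commit"));
      } finally {
         in.close();
      }
   }
}
{code}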
[JBoss JIRA] (ISPN-2752) IllegalStateException on shutdown
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2752?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-2752:
-----------------------------------
Fix Version/s: 5.2.0.Final
Assignee: Galder Zamarreño (was: Mircea Markus)
> IllegalStateException on shutdown
> ---------------------------------
>
> Key: ISPN-2752
> URL: https://issues.jboss.org/browse/ISPN-2752
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 5.2.0.CR2
> Reporter: Michal Linhard
> Assignee: Galder Zamarreño
> Priority: Minor
> Fix For: 5.2.0.Final
>
>
> This happens on graceful shutdown of a 6- or 8-node cluster running 5.2.0.CR2:
> {code}
> 07:17:45,892 ERROR [org.infinispan.topology.ClusterTopologyManagerImpl] ISPN000196: Failed to recover cluster state after the current node became the coordinator: java.util.concurrent.ExecutionException: org.infinispan.CacheException: Remote (node0005/default) failed unexpectedly
> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232) [rt.jar:1.6.0_33]
> at java.util.concurrent.FutureTask.get(FutureTask.java:91) [rt.jar:1.6.0_33]
> at org.infinispan.topology.ClusterTopologyManagerImpl.executeOnClusterSync(ClusterTopologyManagerImpl.java:567) [infinispan-core-5.2.0.CR2-redhat-1.jar:5.2.0.CR2-redhat-1]
> at org.infinispan.topology.ClusterTopologyManagerImpl.recoverClusterStatus(ClusterTopologyManagerImpl.java:432) [infinispan-core-5.2.0.CR2-redhat-1.jar:5.2.0.CR2-redhat-1]
> at org.infinispan.topology.ClusterTopologyManagerImpl.handleNewView(ClusterTopologyManagerImpl.java:231) [infinispan-core-5.2.0.CR2-redhat-1.jar:5.2.0.CR2-redhat-1]
> at org.infinispan.topology.ClusterTopologyManagerImpl$ClusterViewListener.handleViewChange(ClusterTopologyManagerImpl.java:597) [infinispan-core-5.2.0.CR2-redhat-1.jar:5.2.0.CR2-redhat-1]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.6.0_33]
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) [rt.jar:1.6.0_33]
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [rt.jar:1.6.0_33]
> at java.lang.reflect.Method.invoke(Method.java:597) [rt.jar:1.6.0_33]
> at org.infinispan.notifications.AbstractListenerImpl$ListenerInvocation$1.run(AbstractListenerImpl.java:212) [infinispan-core-5.2.0.CR2-redhat-1.jar:5.2.0.CR2-redhat-1]
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_33]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_33]
> at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_33]
> Caused by: org.infinispan.CacheException: Remote (node0005/default) failed unexpectedly
> at org.infinispan.remoting.transport.AbstractTransport.parseResponseAndAddToResponseList(AbstractTransport.java:96) [infinispan-core-5.2.0.CR2-redhat-1.jar:5.2.0.CR2-redhat-1]
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:541) [infinispan-core-5.2.0.CR2-redhat-1.jar:5.2.0.CR2-redhat-1]
> at org.infinispan.topology.ClusterTopologyManagerImpl$2.call(ClusterTopologyManagerImpl.java:549) [infinispan-core-5.2.0.CR2-redhat-1.jar:5.2.0.CR2-redhat-1]
> at org.infinispan.topology.ClusterTopologyManagerImpl$2.call(ClusterTopologyManagerImpl.java:546) [infinispan-core-5.2.0.CR2-redhat-1.jar:5.2.0.CR2-redhat-1]
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) [rt.jar:1.6.0_33]
> at java.util.concurrent.FutureTask.run(FutureTask.java:138) [rt.jar:1.6.0_33]
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_33]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_33]
> at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_33]
> at org.jboss.threads.JBossThread.run(JBossThread.java:122) [jboss-threads-2.0.0.GA-redhat-2.jar:2.0.0.GA-redhat-2]
> Caused by: java.lang.IllegalStateException: transport was closed
> at org.jgroups.blocks.GroupRequest.transportClosed(GroupRequest.java:273) [jgroups-3.2.6.Final-redhat-1.jar:3.2.6.Final-redhat-1]
> at org.jgroups.blocks.RequestCorrelator.stop(RequestCorrelator.java:269) [jgroups-3.2.6.Final-redhat-1.jar:3.2.6.Final-redhat-1]
> at org.jgroups.blocks.MessageDispatcher.stop(MessageDispatcher.java:152) [jgroups-3.2.6.Final-redhat-1.jar:3.2.6.Final-redhat-1]
> at org.jgroups.blocks.MessageDispatcher.channelDisconnected(MessageDispatcher.java:455) [jgroups-3.2.6.Final-redhat-1.jar:3.2.6.Final-redhat-1]
> at org.jgroups.Channel.notifyChannelDisconnected(Channel.java:507) [jgroups-3.2.6.Final-redhat-1.jar:3.2.6.Final-redhat-1]
> at org.jgroups.JChannel.disconnect(JChannel.java:363) [jgroups-3.2.6.Final-redhat-1.jar:3.2.6.Final-redhat-1]
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.stop(JGroupsTransport.java:258) [infinispan-core-5.2.0.CR2-redhat-1.jar:5.2.0.CR2-redhat-1]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.6.0_33]
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) [rt.jar:1.6.0_33]
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [rt.jar:1.6.0_33]
> at java.lang.reflect.Method.invoke(Method.java:597) [rt.jar:1.6.0_33]
> at org.infinispan.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:203) [infinispan-core-5.2.0.CR2-redhat-1.jar:5.2.0.CR2-redhat-1]
> at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:883) [infinispan-core-5.2.0.CR2-redhat-1.jar:5.2.0.CR2-redhat-1]
> at org.infinispan.factories.AbstractComponentRegistry.internalStop(AbstractComponentRegistry.java:690) [infinispan-core-5.2.0.CR2-redhat-1.jar:5.2.0.CR2-redhat-1]
> at org.infinispan.factories.AbstractComponentRegistry.stop(AbstractComponentRegistry.java:568) [infinispan-core-5.2.0.CR2-redhat-1.jar:5.2.0.CR2-redhat-1]
> at org.infinispan.factories.GlobalComponentRegistry.stop(GlobalComponentRegistry.java:260) [infinispan-core-5.2.0.CR2-redhat-1.jar:5.2.0.CR2-redhat-1]
> at org.infinispan.manager.DefaultCacheManager.stop(DefaultCacheManager.java:742) [infinispan-core-5.2.0.CR2-redhat-1.jar:5.2.0.CR2-redhat-1]
> at org.infinispan.manager.AbstractDelegatingEmbeddedCacheManager.stop(AbstractDelegatingEmbeddedCacheManager.java:179) [infinispan-core-5.2.0.CR2-redhat-1.jar:5.2.0.CR2-redhat-1]
> at org.jboss.as.clustering.infinispan.subsystem.EmbeddedCacheManagerService.stop(EmbeddedCacheManagerService.java:76) [jboss-datagrid-infinispan-6.1.0.ER9-redhat-1.jar:6.1.0.ER9-redhat-1]
> at org.jboss.msc.service.ServiceControllerImpl$StopTask.stopService(ServiceControllerImpl.java:1911) [jboss-msc-1.0.2.GA-redhat-2.jar:1.0.2.GA-redhat-2]
> at org.jboss.msc.service.ServiceControllerImpl$StopTask.run(ServiceControllerImpl.java:1874) [jboss-msc-1.0.2.GA-redhat-2.jar:1.0.2.GA-redhat-2]
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_33]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_33]
> at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_33]
> {code}
[JBoss JIRA] (ISPN-2737) Thread naming anomaly when reporting lock timeout
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2737?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-2737:
-----------------------------------
Fix Version/s: 5.2.0.Final
> Thread naming anomaly when reporting lock timeout
> -------------------------------------------------
>
> Key: ISPN-2737
> URL: https://issues.jboss.org/browse/ISPN-2737
> Project: Infinispan
> Issue Type: Bug
> Components: Locking and Concurrency
> Affects Versions: 5.2.0.CR1
> Reporter: Michal Linhard
> Assignee: Mircea Markus
> Priority: Minor
> Fix For: 5.2.0.Final
>
>
> https://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/EDG6/view/EDG-REPOR...
> {code}
> 11:47:30,859 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (MemcachedServerWorker-277) ISPN000136: Execution error: org.infinispan.util.concurrent.TimeoutException: Unable to acquire lock after [3 seconds] on key [memcachedCache#key763328] for requestor [Thread[OOB-127,null,5,Thread Pools]]! Lock held by [Thread[OOB-150,null,5,Thread Pools]]
> at org.infinispan.util.concurrent.locks.LockManagerImpl.lock(LockManagerImpl.java:217) [infinispan-core-5.2.0.CR1-redhat-1.jar:5.2.0.CR1-redhat-1]
> at org.infinispan.util.concurrent.locks.LockManagerImpl.acquireLockNoCheck(LockManagerImpl.java:200) [infinispan-core-5.2.0.CR1-redhat-1.jar:5.2.0.CR1-redhat-1]
> at org.infinispan.interceptors.locking.AbstractLockingInterceptor.lockKey(AbstractLockingInterceptor.java:114) [infinispan-core-5.2.0.CR1-redhat-1.jar:5.2.0.CR1-redhat-1]
> ....
> at org.jgroups.protocols.MERGE2.up(MERGE2.java:205) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.Discovery.up(Discovery.java:359) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP$ProtocolAdapter.up(TP.java:2640) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1287) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1850) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1823) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_38]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_38]
> at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_38]
> {code}
> Note the thread name "MemcachedServerWorker" in an operation coming from the JGroups stack...
[JBoss JIRA] (ISPN-2737) Thread naming anomaly when reporting lock timeout
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2737?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño reassigned ISPN-2737:
--------------------------------------
Assignee: Galder Zamarreño (was: Mircea Markus)
> Thread naming anomaly when reporting lock timeout
> -------------------------------------------------
>
> Key: ISPN-2737
> URL: https://issues.jboss.org/browse/ISPN-2737
> Project: Infinispan
> Issue Type: Bug
> Components: Locking and Concurrency
> Affects Versions: 5.2.0.CR1
> Reporter: Michal Linhard
> Assignee: Galder Zamarreño
> Priority: Minor
> Fix For: 5.2.0.Final
>
>
> https://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/EDG6/view/EDG-REPOR...
> {code}
> 11:47:30,859 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (MemcachedServerWorker-277) ISPN000136: Execution error: org.infinispan.util.concurrent.TimeoutException: Unable to acquire lock after [3 seconds] on key [memcachedCache#key763328] for requestor [Thread[OOB-127,null,5,Thread Pools]]! Lock held by [Thread[OOB-150,null,5,Thread Pools]]
> at org.infinispan.util.concurrent.locks.LockManagerImpl.lock(LockManagerImpl.java:217) [infinispan-core-5.2.0.CR1-redhat-1.jar:5.2.0.CR1-redhat-1]
> at org.infinispan.util.concurrent.locks.LockManagerImpl.acquireLockNoCheck(LockManagerImpl.java:200) [infinispan-core-5.2.0.CR1-redhat-1.jar:5.2.0.CR1-redhat-1]
> at org.infinispan.interceptors.locking.AbstractLockingInterceptor.lockKey(AbstractLockingInterceptor.java:114) [infinispan-core-5.2.0.CR1-redhat-1.jar:5.2.0.CR1-redhat-1]
> ....
> at org.jgroups.protocols.MERGE2.up(MERGE2.java:205) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.Discovery.up(Discovery.java:359) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP$ProtocolAdapter.up(TP.java:2640) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1287) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1850) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1823) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_38]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_38]
> at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_38]
> {code}
> Note the thread name "MemcachedServerWorker" in an operation coming from the JGroups stack...
[JBoss JIRA] (ISPN-2734) KeyAffinityServiceImpl.KeyGeneratorWorker hangs in loop
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2734?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño reassigned ISPN-2734:
--------------------------------------
Assignee: Galder Zamarreño (was: Mircea Markus)
> KeyAffinityServiceImpl.KeyGeneratorWorker hangs in loop
> -------------------------------------------------------
>
> Key: ISPN-2734
> URL: https://issues.jboss.org/browse/ISPN-2734
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache
> Affects Versions: 5.2.0.CR1
> Reporter: Thomas Fromm
> Assignee: Galder Zamarreño
>
> I found a node in the cluster which seems to hang in a loop because of a latch that was not closed, possibly caused by a prior error.
> TRACE 21.01.13 09:43:04,653 [pool-44-thread-1] KeyAffinityServiceImpl KeyGeneratorWorker marked as ACTIVE
> TRACE 21.01.13 09:43:04,653 [pool-44-thread-1] KeyAffinityServiceImpl KeyGeneratorWorker marked as INACTIVE
> TRACE 21.01.13 09:43:04,653 [pool-44-thread-1] KeyAffinityServiceImpl KeyGeneratorWorker marked as ACTIVE
> No further log output is available, since I changed the log level of org.infinispan at runtime.
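For context on the component involved, here is the typical way the service is created and used, following the documented KeyAffinityServiceFactory API (the executor and the buffer size of 100 are arbitrary choices). The KeyGeneratorWorker in the TRACE lines above is the internal thread that refills the key buffers and, per the report, should block on a latch when no new keys are needed:
{code:java}
import java.util.concurrent.Executors;

import org.infinispan.Cache;
import org.infinispan.affinity.KeyAffinityService;
import org.infinispan.affinity.KeyAffinityServiceFactory;
import org.infinispan.affinity.RndKeyGenerator;
import org.infinispan.manager.EmbeddedCacheManager;

public class KeyAffinityUsageSketch {

   static void example(EmbeddedCacheManager cacheManager) {
      Cache<Object, String> cache = cacheManager.getCache();

      // Starting the service spawns the KeyGeneratorWorker seen in the TRACE output.
      KeyAffinityService<Object> affinity = KeyAffinityServiceFactory.newLocalKeyAffinityService(
            cache, new RndKeyGenerator(), Executors.newSingleThreadExecutor(), 100);

      // Ask for a key that maps to the local node; the worker tops up the buffer as keys are consumed.
      Object localKey = affinity.getKeyForAddress(cacheManager.getAddress());
      cache.put(localKey, "value");

      affinity.stop();
   }
}
{code}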
[JBoss JIRA] (ISPN-2734) KeyAffinityServiceImpl.KeyGeneratorWorker hangs in loop
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2734?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-2734:
-----------------------------------
Fix Version/s: 5.2.0.Final
> KeyAffinityServiceImpl.KeyGeneratorWorker hangs in loop
> -------------------------------------------------------
>
> Key: ISPN-2734
> URL: https://issues.jboss.org/browse/ISPN-2734
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache
> Affects Versions: 5.2.0.CR1
> Reporter: Thomas Fromm
> Assignee: Galder Zamarreño
> Fix For: 5.2.0.Final
>
>
> I found a node in the cluster which seems to hang in a loop because of a latch that was not closed, possibly caused by a prior error.
> TRACE 21.01.13 09:43:04,653 [pool-44-thread-1] KeyAffinityServiceImpl KeyGeneratorWorker marked as ACTIVE
> TRACE 21.01.13 09:43:04,653 [pool-44-thread-1] KeyAffinityServiceImpl KeyGeneratorWorker marked as INACTIVE
> TRACE 21.01.13 09:43:04,653 [pool-44-thread-1] KeyAffinityServiceImpl KeyGeneratorWorker marked as ACTIVE
> No further log output is available, since I changed the log level of org.infinispan at runtime.