[JBoss JIRA] (ISPN-2726) Sporadic NPE in KeyAffinityServiceImpl
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2726?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño commented on ISPN-2726:
----------------------------------------
Assigning this to Mircea, who's the expert on KeyAffinityServiceImpl.
Assuming this is related to rehashing, getAddressForKey() could return null; in generateKeys(), when the returned address is null, the loop could simply continue. The bigger problem is what happens when getCollocatedKey() calls getAddressForKey(): what should be done with a null Address? The only option there would be to wait until the address is not null, but that looks like a hack to me.
[~dan.berindei], should DistributionManager.getConsistentHash() return null at all?
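To make the idea concrete, here's a minimal sketch of the kind of null guard the generator loop could use during a rehash (all helper names below are made up for illustration; this is not the actual KeyAffinityServiceImpl code):
{code}
// Hypothetical sketch, not the real KeyAffinityServiceImpl: shows how the
// key generation loop could tolerate a null owner while the topology changes.
private void generateKeys() {
   while (isActive()) {                           // hypothetical stop flag
      Object key = keyGenerator.getKey();         // hypothetical key source
      Address owner = getAddressForKey(key);      // may return null mid-rehash
      if (owner == null) {
         // Topology is in flux: skip this key instead of throwing an NPE
         // and pick up the new consistent hash on the next iteration.
         continue;
      }
      queueForOwner(owner).offer(key);            // hypothetical per-owner queue
   }
}
{code}
The harder part, as noted above, is getCollocatedKey(), where skipping isn't an option because the caller expects a key for a concrete address.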
> Sporadic NPE in KeyAffinityServiceImpl
> --------------------------------------
>
> Key: ISPN-2726
> URL: https://issues.jboss.org/browse/ISPN-2726
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache
> Affects Versions: 5.2.0.CR1
> Reporter: Thomas Fromm
> Assignee: Mircea Markus
> Fix For: 5.2.0.Final
>
>
> The NPE does not appear often; unfortunately, with TRACE logging enabled it never appears :-( I'll keep trying to get TRACEs.
> Exception in thread "pool-70-thread-1" java.lang.NullPointerException
> at org.infinispan.affinity.KeyAffinityServiceImpl.getAddressForKey(KeyAffinityServiceImpl.java:347)
> at org.infinispan.affinity.KeyAffinityServiceImpl.access$700(KeyAffinityServiceImpl.java:59)
> at org.infinispan.affinity.KeyAffinityServiceImpl$KeyGeneratorWorker.generateKeys(KeyAffinityServiceImpl.java:270)
> at org.infinispan.affinity.KeyAffinityServiceImpl$KeyGeneratorWorker.run(KeyAffinityServiceImpl.java:242)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:722)
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2726) Sporadic NPE in KeyAffinityServiceImpl
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2726?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-2726:
-----------------------------------
Assignee: Mircea Markus (was: Galder Zamarreño)
> Sporadic NPE in KeyAffinityServiceImpl
> --------------------------------------
>
> Key: ISPN-2726
> URL: https://issues.jboss.org/browse/ISPN-2726
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache
> Affects Versions: 5.2.0.CR1
> Reporter: Thomas Fromm
> Assignee: Mircea Markus
> Fix For: 5.2.0.Final
>
>
> The NPE does not appear often; unfortunately, with TRACE logging enabled it never appears :-( I'll keep trying to get TRACEs.
> Exception in thread "pool-70-thread-1" java.lang.NullPointerException
> at org.infinispan.affinity.KeyAffinityServiceImpl.getAddressForKey(KeyAffinityServiceImpl.java:347)
> at org.infinispan.affinity.KeyAffinityServiceImpl.access$700(KeyAffinityServiceImpl.java:59)
> at org.infinispan.affinity.KeyAffinityServiceImpl$KeyGeneratorWorker.generateKeys(KeyAffinityServiceImpl.java:270)
> at org.infinispan.affinity.KeyAffinityServiceImpl$KeyGeneratorWorker.run(KeyAffinityServiceImpl.java:242)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:722)
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2737) Thread naming anomaly when reporting lock timeout
by Michal Linhard (JIRA)
[ https://issues.jboss.org/browse/ISPN-2737?page=com.atlassian.jira.plugin.... ]
Michal Linhard commented on ISPN-2737:
--------------------------------------
Agreed. The information that the stacktrace is not local is enough. Knowing where it's coming from is a nice bonus.
> Thread naming anomaly when reporting lock timeout
> -------------------------------------------------
>
> Key: ISPN-2737
> URL: https://issues.jboss.org/browse/ISPN-2737
> Project: Infinispan
> Issue Type: Bug
> Components: Locking and Concurrency, RPC
> Affects Versions: 5.2.0.CR1
> Reporter: Michal Linhard
> Assignee: Galder Zamarreño
> Priority: Minor
> Fix For: 5.2.0.Final
>
>
> https://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/EDG6/view/EDG-REPOR...
> {code}
> 11:47:30,859 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (MemcachedServerWorker-277) ISPN000136: Execution error: org.infinispan.util.concurrent.TimeoutException: Unable to acquire lock after [3 seconds] on key [memcachedCache#key763328] for requestor [Thread[OOB-127,null,5,Thread Pools]]! Lock held by [Thread[OOB-150,null,5,Thread Pools]]
> at org.infinispan.util.concurrent.locks.LockManagerImpl.lock(LockManagerImpl.java:217) [infinispan-core-5.2.0.CR1-redhat-1.jar:5.2.0.CR1-redhat-1]
> at org.infinispan.util.concurrent.locks.LockManagerImpl.acquireLockNoCheck(LockManagerImpl.java:200) [infinispan-core-5.2.0.CR1-redhat-1.jar:5.2.0.CR1-redhat-1]
> at org.infinispan.interceptors.locking.AbstractLockingInterceptor.lockKey(AbstractLockingInterceptor.java:114) [infinispan-core-5.2.0.CR1-redhat-1.jar:5.2.0.CR1-redhat-1]
> ....
> at org.jgroups.protocols.MERGE2.up(MERGE2.java:205) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.Discovery.up(Discovery.java:359) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP$ProtocolAdapter.up(TP.java:2640) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1287) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1850) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1823) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_38]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_38]
> at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_38]
> {code}
> note the thread name "MemcachedServerWorker" in an operation coming from the JGroups stack...
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2483) State transfer issue with the transactions for which the originator has crashed
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-2483?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-2483:
-------------------------------
Status: Pull Request Sent (was: Coding In Progress)
Git Pull Request: https://github.com/infinispan/infinispan/pull/1621
Both scenarios could happen: the state consumer could receive the topology update before requesting transactions, while the provider may only receive the same topology update after sending the transactions.
The fix is to apply all transactions, without checking the originator, and then call TransactionTable.updateStateOnNodesLeaving() again to catch transactions from any new leavers.
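Very roughly, the ordering is (the helper names and types here are illustrative only, not the actual StateConsumer/TransactionTable code):
{code}
// Illustrative sketch of the fix's ordering; only updateStateOnNodesLeaving()
// is named in this issue, everything else here is a made-up placeholder.
void applyReceivedTransactions(Collection<Object> transactions, Collection<Address> recentLeavers) {
   // 1. Apply every received transaction, without filtering on the originator.
   for (Object tx : transactions) {
      applyTransaction(tx);                             // hypothetical helper
   }
   // 2. Re-run the leaver cleanup so transactions whose originator has since
   //    left are cleaned up (or kept for recovery when recovery is enabled).
   transactionTable.updateStateOnNodesLeaving(recentLeavers);
}
{code}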
> State transfer issue with the transactions for which the originator has crashed
> -------------------------------------------------------------------------------
>
> Key: ISPN-2483
> URL: https://issues.jboss.org/browse/ISPN-2483
> Project: Infinispan
> Issue Type: Bug
> Components: State transfer, Transactions
> Affects Versions: 5.1.8.Final, 5.2.0.Beta3
> Reporter: Mircea Markus
> Assignee: Dan Berindei
> Priority: Critical
> Fix For: 5.2.0.Final
>
>
> State transfer migrates and prepares the transactions for which the originator has left. On the receiving node, this results in the transaction being prepared and acquiring backup locks that are never released (unless there is manual intervention).
> This should behave as follows:
> - if recovery is not enabled, the state producer should not send such transactions but drop them
> - if recovery is enabled, these transactions should be sent across. They shouldn't be prepared or acquire backup locks, but should be placed in the recovery cache (see RecoveryManagerImpl.inDoubtTransactions)
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2734) KeyAffinityServiceImpl.KeyGeneratorWorker hangs in loop
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2734?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño resolved ISPN-2734.
------------------------------------
Fix Version/s: (was: 5.2.0.Final)
Resolution: Rejected
> KeyAffinityServiceImpl.KeyGeneratorWorker hangs in loop
> -------------------------------------------------------
>
> Key: ISPN-2734
> URL: https://issues.jboss.org/browse/ISPN-2734
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache
> Affects Versions: 5.2.0.CR1
> Reporter: Thomas Fromm
> Assignee: Galder Zamarreño
>
> I found a node in the cluster which seems to hang in a loop because of a latch that is never closed, maybe caused by a prior error.
> TRACE 21.01.13 09:43:04,653 [pool-44-thread-1] KeyAffinityServiceImpl KeyGeneratorWorker marked as ACTIVE
> TRACE 21.01.13 09:43:04,653 [pool-44-thread-1] KeyAffinityServiceImpl KeyGeneratorWorker marked as INACTIVE
> TRACE 21.01.13 09:43:04,653 [pool-44-thread-1] KeyAffinityServiceImpl KeyGeneratorWorker marked as ACTIVE
> No further log is available, since I've changed the log level of org.infinispan at runtime.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2734) KeyAffinityServiceImpl.KeyGeneratorWorker hangs in loop
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2734?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño commented on ISPN-2734:
----------------------------------------
Oh, if it was an endless ACTIVE/INACTIVE loop, then that is fine because that's what it's meant to do unless it's stopped:
https://github.com/infinispan/infinispan/blob/master/core/src/main/java/o...
The question is what the root cause of the hang is, and for that we need a thread dump. I'm rejecting this JIRA for the moment. Please reopen if you find out more about this.
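For context, the intended behaviour is roughly the following (a simplified sketch with made-up method names, not the code behind the truncated link above):
{code}
// Simplified sketch of the worker's intended behaviour: it legitimately flips
// between ACTIVE and INACTIVE until the service is stopped. Method names are
// illustrative, not the actual KeyGeneratorWorker API.
public void run() {
   while (!stopped) {
      markActive();       // logs "KeyGeneratorWorker marked as ACTIVE"
      generateKeys();     // top up the per-owner key queues
      markInactive();     // logs "KeyGeneratorWorker marked as INACTIVE"
      awaitDemand();      // block until more keys are requested or stop is signalled
   }
}
{code}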
> KeyAffinityServiceImpl.KeyGeneratorWorker hangs in loop
> -------------------------------------------------------
>
> Key: ISPN-2734
> URL: https://issues.jboss.org/browse/ISPN-2734
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache
> Affects Versions: 5.2.0.CR1
> Reporter: Thomas Fromm
> Assignee: Galder Zamarreño
> Fix For: 5.2.0.Final
>
>
> I found a node in the cluster which seems to hang in a loop because of a latch that is never closed, maybe caused by a prior error.
> TRACE 21.01.13 09:43:04,653 [pool-44-thread-1] KeyAffinityServiceImpl KeyGeneratorWorker marked as ACTIVE
> TRACE 21.01.13 09:43:04,653 [pool-44-thread-1] KeyAffinityServiceImpl KeyGeneratorWorker marked as INACTIVE
> TRACE 21.01.13 09:43:04,653 [pool-44-thread-1] KeyAffinityServiceImpl KeyGeneratorWorker marked as ACTIVE
> No further log is available, since I've changed the log level of org.infinispan at runtime.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2737) Thread naming anomaly when reporting lock timeout
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2737?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño commented on ISPN-2737:
----------------------------------------
The reason I knew it was a remote exception is because of this clue:
{code}...for requestor [Thread[OOB-127,null,5,Thread Pools]]! Lock held by [Thread[OOB-150,null,5,Thread Pools]]{code}
So, the code requesting the lock was an `OOB-` thread, and another `OOB-` thread held the lock. This is basically saying: a JGroups inbound thread tried to acquire the lock on a remote node, but another thread had it.
WRT your suggestion, I think that's doable. We currently have a response filter that takes a remote exception and puts it in a response:
https://github.com/infinispan/infinispan/blob/master/core/src/main/java/o...
At this point we could add sender information to the ExceptionResponse instance. On the sender side, the remote exception is thrown as-is:
https://github.com/infinispan/infinispan/blob/master/core/src/main/java/o...
I'm not sure if the local stacktrace information is really that important here. What's more important is knowing that this exception was produced remotely, and where it's coming from.
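As a rough illustration of the idea (the class and field names below are made up; the real change would go into the response filter / ExceptionResponse handling linked above):
{code}
// Illustrative only: wrap the remote exception so the local log makes it clear
// where it was produced. RemoteNodeException is a hypothetical name, not an
// existing Infinispan class.
public class RemoteNodeException extends RuntimeException {
   private final Address origin;

   public RemoteNodeException(Address origin, Throwable remoteCause) {
      super("Exception originally thrown on " + origin, remoteCause);
      this.origin = origin;
   }

   public Address getOrigin() {
      return origin;
   }
}
{code}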
> Thread naming anomaly when reporting lock timeout
> -------------------------------------------------
>
> Key: ISPN-2737
> URL: https://issues.jboss.org/browse/ISPN-2737
> Project: Infinispan
> Issue Type: Bug
> Components: Locking and Concurrency
> Affects Versions: 5.2.0.CR1
> Reporter: Michal Linhard
> Assignee: Galder Zamarreño
> Priority: Minor
>
> https://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/EDG6/view/EDG-REPOR...
> {code}
> 11:47:30,859 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (MemcachedServerWorker-277) ISPN000136: Execution error: org.infinispan.util.concurrent.TimeoutException: Unable to acquire lock after [3 seconds] on key [memcachedCache#key763328] for requestor [Thread[OOB-127,null,5,Thread Pools]]! Lock held by [Thread[OOB-150,null,5,Thread Pools]]
> at org.infinispan.util.concurrent.locks.LockManagerImpl.lock(LockManagerImpl.java:217) [infinispan-core-5.2.0.CR1-redhat-1.jar:5.2.0.CR1-redhat-1]
> at org.infinispan.util.concurrent.locks.LockManagerImpl.acquireLockNoCheck(LockManagerImpl.java:200) [infinispan-core-5.2.0.CR1-redhat-1.jar:5.2.0.CR1-redhat-1]
> at org.infinispan.interceptors.locking.AbstractLockingInterceptor.lockKey(AbstractLockingInterceptor.java:114) [infinispan-core-5.2.0.CR1-redhat-1.jar:5.2.0.CR1-redhat-1]
> ....
> at org.jgroups.protocols.MERGE2.up(MERGE2.java:205) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.Discovery.up(Discovery.java:359) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP$ProtocolAdapter.up(TP.java:2640) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1287) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1850) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1823) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_38]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_38]
> at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_38]
> {code}
> note the thread name "MemcachedServerWorker" in an operation coming from the JGroups stack...
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2737) Thread naming anomaly when reporting lock timeout
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2737?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño reopened ISPN-2737:
------------------------------------
> Thread naming anomaly when reporting lock timeout
> -------------------------------------------------
>
> Key: ISPN-2737
> URL: https://issues.jboss.org/browse/ISPN-2737
> Project: Infinispan
> Issue Type: Bug
> Components: Locking and Concurrency
> Affects Versions: 5.2.0.CR1
> Reporter: Michal Linhard
> Assignee: Galder Zamarreño
> Priority: Minor
>
> https://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/EDG6/view/EDG-REPOR...
> {code}
> 11:47:30,859 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (MemcachedServerWorker-277) ISPN000136: Execution error: org.infinispan.util.concurrent.TimeoutException: Unable to acquire lock after [3 seconds] on key [memcachedCache#key763328] for requestor [Thread[OOB-127,null,5,Thread Pools]]! Lock held by [Thread[OOB-150,null,5,Thread Pools]]
> at org.infinispan.util.concurrent.locks.LockManagerImpl.lock(LockManagerImpl.java:217) [infinispan-core-5.2.0.CR1-redhat-1.jar:5.2.0.CR1-redhat-1]
> at org.infinispan.util.concurrent.locks.LockManagerImpl.acquireLockNoCheck(LockManagerImpl.java:200) [infinispan-core-5.2.0.CR1-redhat-1.jar:5.2.0.CR1-redhat-1]
> at org.infinispan.interceptors.locking.AbstractLockingInterceptor.lockKey(AbstractLockingInterceptor.java:114) [infinispan-core-5.2.0.CR1-redhat-1.jar:5.2.0.CR1-redhat-1]
> ....
> at org.jgroups.protocols.MERGE2.up(MERGE2.java:205) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.Discovery.up(Discovery.java:359) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP$ProtocolAdapter.up(TP.java:2640) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1287) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1850) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1823) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_38]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_38]
> at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_38]
> {code}
> note the thread name "MemcachedServerWorker" in an operation coming from the JGroups stack...
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2737) Thread naming anomaly when reporting lock timeout
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2737?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-2737:
-----------------------------------
Fix Version/s: 5.2.0.Final
> Thread naming anomaly when reporting lock timeout
> -------------------------------------------------
>
> Key: ISPN-2737
> URL: https://issues.jboss.org/browse/ISPN-2737
> Project: Infinispan
> Issue Type: Bug
> Components: Locking and Concurrency
> Affects Versions: 5.2.0.CR1
> Reporter: Michal Linhard
> Assignee: Galder Zamarreño
> Priority: Minor
> Fix For: 5.2.0.Final
>
>
> https://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/EDG6/view/EDG-REPOR...
> {code}
> 11:47:30,859 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (MemcachedServerWorker-277) ISPN000136: Execution error: org.infinispan.util.concurrent.TimeoutException: Unable to acquire lock after [3 seconds] on key [memcachedCache#key763328] for requestor [Thread[OOB-127,null,5,Thread Pools]]! Lock held by [Thread[OOB-150,null,5,Thread Pools]]
> at org.infinispan.util.concurrent.locks.LockManagerImpl.lock(LockManagerImpl.java:217) [infinispan-core-5.2.0.CR1-redhat-1.jar:5.2.0.CR1-redhat-1]
> at org.infinispan.util.concurrent.locks.LockManagerImpl.acquireLockNoCheck(LockManagerImpl.java:200) [infinispan-core-5.2.0.CR1-redhat-1.jar:5.2.0.CR1-redhat-1]
> at org.infinispan.interceptors.locking.AbstractLockingInterceptor.lockKey(AbstractLockingInterceptor.java:114) [infinispan-core-5.2.0.CR1-redhat-1.jar:5.2.0.CR1-redhat-1]
> ....
> at org.jgroups.protocols.MERGE2.up(MERGE2.java:205) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.Discovery.up(Discovery.java:359) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP$ProtocolAdapter.up(TP.java:2640) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1287) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1850) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1823) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_38]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_38]
> at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_38]
> {code}
> note the thread name "MemcachedServerWorker" in an operation coming from the JGroups stack...
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2737) Thread naming anomaly when reporting lock timeout
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-2737?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-2737:
-----------------------------------
Component/s: RPC
> Thread naming anomaly when reporting lock timeout
> -------------------------------------------------
>
> Key: ISPN-2737
> URL: https://issues.jboss.org/browse/ISPN-2737
> Project: Infinispan
> Issue Type: Bug
> Components: Locking and Concurrency, RPC
> Affects Versions: 5.2.0.CR1
> Reporter: Michal Linhard
> Assignee: Galder Zamarreño
> Priority: Minor
> Fix For: 5.2.0.Final
>
>
> https://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/EDG6/view/EDG-REPOR...
> {code}
> 11:47:30,859 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (MemcachedServerWorker-277) ISPN000136: Execution error: org.infinispan.util.concurrent.TimeoutException: Unable to acquire lock after [3 seconds] on key [memcachedCache#key763328] for requestor [Thread[OOB-127,null,5,Thread Pools]]! Lock held by [Thread[OOB-150,null,5,Thread Pools]]
> at org.infinispan.util.concurrent.locks.LockManagerImpl.lock(LockManagerImpl.java:217) [infinispan-core-5.2.0.CR1-redhat-1.jar:5.2.0.CR1-redhat-1]
> at org.infinispan.util.concurrent.locks.LockManagerImpl.acquireLockNoCheck(LockManagerImpl.java:200) [infinispan-core-5.2.0.CR1-redhat-1.jar:5.2.0.CR1-redhat-1]
> at org.infinispan.interceptors.locking.AbstractLockingInterceptor.lockKey(AbstractLockingInterceptor.java:114) [infinispan-core-5.2.0.CR1-redhat-1.jar:5.2.0.CR1-redhat-1]
> ....
> at org.jgroups.protocols.MERGE2.up(MERGE2.java:205) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.Discovery.up(Discovery.java:359) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP$ProtocolAdapter.up(TP.java:2640) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1287) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1850) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1823) [jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_38]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_38]
> at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_38]
> {code}
> note the thread name "MemcachedServerWorker" in an operation coming from the JGroups stack...
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira