[ https://issues.jboss.org/browse/ISPN-2737?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño commented on ISPN-2737:
----------------------------------------
I knew it was a remote exception because of this clue:
{code}...for requestor [Thread[OOB-127,null,5,Thread Pools]]! Lock held by
[Thread[OOB-150,null,5,Thread Pools]]{code}
So, the code requesting the lock was an {{OOB-}} thread, and another {{OOB-}} thread was
holding the lock. In other words: a JGroups inbound thread tried to acquire the lock on a
remote node, but another thread already held it.
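Just to spell out where those names come from, here's a rough sketch (hypothetical helper,
not LockManagerImpl's actual code) of how that message gets built: the "requestor" is simply
the thread executing the command, so on a remote node it's a JGroups OOB thread.
{code}
// Sketch only, not LockManagerImpl's actual code: the "requestor" in the
// message is just the current thread, which on a remote node is an OOB thread.
static void reportLockTimeout(Object key, long timeoutSeconds, Object owner) {
   String requestor = Thread.currentThread().toString(); // e.g. Thread[OOB-127,null,5,Thread Pools]
   throw new org.infinispan.util.concurrent.TimeoutException(
         "Unable to acquire lock after [" + timeoutSeconds + " seconds] on key ["
         + key + "] for requestor [" + requestor + "]! Lock held by [" + owner + "]");
}
{code}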
WRT your suggestion, I think that's doable. We currently have a response filter that
takes a remote exception and puts it in a response:
https://github.com/infinispan/infinispan/blob/master/core/src/main/java/o...
At this point we could add the sender information to the ExceptionResponse instance. On the
sender side, the remote exception is thrown as is:
https://github.com/infinispan/infinispan/blob/master/core/src/main/java/o...
I'm not sure if the local stacktrace information is really that important here.
What's more important is knowing that this exception was produced remotely, and where
it's coming from.
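Something along these lines is what I have in mind, as a rough sketch (the {{origin}} field
and the wrapping are assumptions, not what ExceptionResponse does today):
{code}
// Rough sketch of the idea, not Infinispan's actual API: carry the address of
// the node that produced the exception in the response, and wrap on the
// originator so the remote origin shows up in logs.
final class ExceptionResponseSketch implements java.io.Serializable {
   final Exception exception;
   final String origin; // assumed new field: the remote node's address, as a string

   ExceptionResponseSketch(Exception exception, String origin) {
      this.exception = exception;
      this.origin = origin;
   }

   // On the sender side, instead of throwing the remote exception as is:
   void rethrow() throws Exception {
      throw new Exception("Exception raised on remote node " + origin, exception);
   }
}
{code}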
Thread naming anomaly when reporting lock timeout
-------------------------------------------------
Key: ISPN-2737
URL: https://issues.jboss.org/browse/ISPN-2737
Project: Infinispan
Issue Type: Bug
Components: Locking and Concurrency
Affects Versions: 5.2.0.CR1
Reporter: Michal Linhard
Assignee: Galder Zamarreño
Priority: Minor
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/EDG6/view/EDG-REPOR...
{code}
11:47:30,859 ERROR [org.infinispan.interceptors.InvocationContextInterceptor]
(MemcachedServerWorker-277) ISPN000136: Execution error:
org.infinispan.util.concurrent.TimeoutException: Unable to acquire lock after [3 seconds]
on key [memcachedCache#key763328] for requestor [Thread[OOB-127,null,5,Thread Pools]]!
Lock held by [Thread[OOB-150,null,5,Thread Pools]]
at org.infinispan.util.concurrent.locks.LockManagerImpl.lock(LockManagerImpl.java:217)
[infinispan-core-5.2.0.CR1-redhat-1.jar:5.2.0.CR1-redhat-1]
at org.infinispan.util.concurrent.locks.LockManagerImpl.acquireLockNoCheck(LockManagerImpl.java:200)
[infinispan-core-5.2.0.CR1-redhat-1.jar:5.2.0.CR1-redhat-1]
at org.infinispan.interceptors.locking.AbstractLockingInterceptor.lockKey(AbstractLockingInterceptor.java:114)
[infinispan-core-5.2.0.CR1-redhat-1.jar:5.2.0.CR1-redhat-1]
....
at org.jgroups.protocols.MERGE2.up(MERGE2.java:205)
[jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
at org.jgroups.protocols.Discovery.up(Discovery.java:359)
[jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
at org.jgroups.protocols.TP$ProtocolAdapter.up(TP.java:2640)
[jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
at org.jgroups.protocols.TP.passMessageUp(TP.java:1287)
[jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1850)
[jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1823)
[jgroups-3.2.5.Final-redhat-1.jar:3.2.5.Final-redhat-1]
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
[rt.jar:1.6.0_38]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
[rt.jar:1.6.0_38]
at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_38]
{code}
Note the thread name "MemcachedServerWorker" on an operation whose stack trace comes from
the JGroups stack...
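What seems to be happening here (per the comment above): a Throwable serialized on the
remote node keeps the stack frames captured where it was created, so when the local worker
thread rethrows and logs it, the trace shows JGroups frames that never ran on that thread.
A standalone demo (plain java.io serialization standing in for the RPC layer):
{code}
import java.io.*;

// Demo: a deserialized Throwable keeps the frames captured where it was
// created, while the thread that logs it is a different, local one.
public class RemoteTraceDemo {
   public static void main(String[] args) throws Exception {
      final Exception[] holder = new Exception[1];
      Thread oob = new Thread(new Runnable() {
         public void run() { holder[0] = new Exception("raised remotely"); }
      }, "OOB-127");
      oob.start();
      oob.join();

      // Round-trip through serialization, standing in for the RPC response.
      ByteArrayOutputStream bos = new ByteArrayOutputStream();
      ObjectOutputStream oos = new ObjectOutputStream(bos);
      oos.writeObject(holder[0]);
      oos.flush();
      Exception local = (Exception) new ObjectInputStream(
            new ByteArrayInputStream(bos.toByteArray())).readObject();

      // Logged on "main", but the frames come from the other thread's run():
      System.out.println("Logging thread: " + Thread.currentThread().getName());
      local.printStackTrace();
   }
}
{code}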