[JBoss JIRA] (ISPN-5409) Primitive values lead to failure with remote non-indexed query
by Adrian Nistor (JIRA)
Adrian Nistor created ISPN-5409:
-----------------------------------
Summary: Primitive values lead to failure with remote non-indexed query
Key: ISPN-5409
URL: https://issues.jboss.org/browse/ISPN-5409
Project: Infinispan
Issue Type: Bug
Components: Remote Querying
Reporter: Adrian Nistor
Assignee: Adrian Nistor
Querying of primitive values is not currently supported (see ISPN-3949).
Indexed queries successfully ignore such values, but non-indexed queries fail on them. Non-indexed queries should handle this scenario gracefully too.
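For illustration, a minimal sketch of the failing scenario over Hot Rod; the {{User}} entity and its marshalling setup are assumed for the example, not part of the report:
{code}
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.Search;
import org.infinispan.query.dsl.Query;
import org.infinispan.query.dsl.QueryFactory;

public class PrimitiveValueQuerySketch {
   public static void main(String[] args) {
      // Protostream marshaller registration for User is omitted here.
      RemoteCacheManager rcm = new RemoteCacheManager();
      RemoteCache<String, Object> cache = rcm.getCache("testCache");

      cache.put("u1", new User("Tom")); // protobuf-annotated entity, queryable
      cache.put("k1", 42);              // primitive value, not queryable (ISPN-3949)

      QueryFactory qf = Search.getQueryFactory(cache);
      Query query = qf.from(User.class).having("name").eq("Tom").toBuilder().build();
      // An indexed query skips the primitive entry "k1"; a non-indexed
      // query currently fails on it instead of ignoring it.
      query.list();
   }
}
{code}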
[JBoss JIRA] (ISPN-5386) Tx succeeds on coord, while being rolled back on other participants due to Tx pruning
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-5386?page=com.atlassian.jira.plugin.... ]
Work on ISPN-5386 started by Dan Berindei.
------------------------------------------
> Tx succeeds on coord, while being rolled back on other participants due to Tx pruning
> -------------------------------------------------------------------------------------
>
> Key: ISPN-5386
> URL: https://issues.jboss.org/browse/ISPN-5386
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Reporter: Matej Čimbora
> Assignee: Dan Berindei
>
> All participants of the transaction share the same topology. The TX gets successfully prepared & committed on the coordinator.
> {code}
> 03:49:27,759 DEBUG [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,edg-perf08-48196) New view accepted: [edg-perf08-48196|18] (5) [edg-perf08-48196, edg-perf01-23632, edg-perf02-34805, edg-perf03-16232, edg-perf04-41106]
> 03:49:41,051 TRACE [org.infinispan.statetransfer.StateTransferManagerImpl] (transport-thread-9) Installing new cache topology CacheTopology{id=53, rebalanceId=19, currentCH=DefaultConsistentHash{ns = 512, owners = (5)[edg-perf08-48196: 103+101, edg-perf01-23632: 102+103, edg-perf02-34805: 102+103, edg-perf03-16232: 102+103, edg-perf04-41106: 103+102]}, pendingCH=null, unionCH=null, actualMembers=[edg-perf08-48196, edg-perf01-23632, edg-perf02-34805, edg-perf03-16232, edg-perf04-41106]} on cache testCache
> ...
> 03:51:34,005 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (DefaultStressor-1) edg-perf08-48196 invoking PrepareCommand { ... gtx=GlobalTransaction:<edg-perf08-48196>:13330:local, cacheName='testCache', topologyId=53} to recipient list [edg-perf03-16232, edg-perf08-48196, edg-perf02-34805, edg-perf04-41106, edg-perf01-23632]
> 03:51:36,329 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (DefaultStressor-1) Responses: [sender=edg-perf03-16232,received=true, suspected=false] [sender=edg-perf02-34805, received=true, suspected=false] [sender=edg-perf04-41106, received=true, suspected=false] [sender=edg-perf01-23632, received=true, suspected=false]
> 03:51:36,342 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (DefaultStressor-1) edg-perf08-48196 invoking CommitCommand {gtx=GlobalTransaction:<edg-perf08-48196>:13330:local, cacheName='testCache', topologyId=53} to recipient list [edg-perf03-16232, edg-perf08-48196, edg-perf02-34805, edg-perf04-41106, edg-perf01-23632] with options RpcOptions{timeout=60000, unit=MILLISECONDS, fifoOrder=false, totalOrder=false, responseFilter=null, responseMode=SYNCHRONOUS_IGNORE_LEAVERS, skipReplicationQueue=false}
> 03:51:36,703 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (DefaultStressor-1) Responses: [sender=edg-perf03-16232, retval=SuccessfulResponse{responseValue=null} , received=true, suspected=false] [sender=edg-perf02-34805, retval=SuccessfulResponse{responseValue=null} , received=true, suspected=false] [sender=edg-perf04-41106, retval=SuccessfulResponse{responseValue=null} , received=true, suspected=false] [sender=edg-perf01-23632, retval=SuccessfulResponse{responseValue=null} , received=true, suspected=false]
> {code}
> The problem is that the other participating nodes roll it back, because a TX with a higher id was completed before. A successful response is nevertheless returned for both the prepare & commit commands.
> {code}
> 03:49:58,190 TRACE [org.infinispan.transaction.TransactionTable] (remote-thread-499) Marking transaction GlobalTransaction:<edg-perf08-48196>:13337:local as completed
> ...
> 03:51:34,122 TRACE [org.infinispan.transaction.TransactionTable] (remote-thread-593) Created and registered remote transaction RemoteTransaction{ ... lookedUpEntries={}, lockedKeys=null, backupKeyLocks=null, lookedUpEntriesTopology=2147483647, isMarkedForRollback=false, tx=GlobalTransaction:<edg-perf08-48196>:13330:remote, state=null}
> 03:51:34,073 TRACE [org.infinispan.remoting.InboundInvocationHandlerImpl] (remote-thread-593) Calling perform() on PrepareCommand { ... gtx=GlobalTransaction:<edg-perf08-48196>:13330:remote, cacheName='testCache', topologyId=53}
> 03:51:34,342 TRACE [org.infinispan.interceptors.TxInterceptor] (remote-thread-593) Rolling back remote transaction GlobalTransaction:<edg-perf08-48196>:13330:remote because either already completed (true) or originator no longer in the cluster (false).
> 03:51:34,639 TRACE [org.infinispan.remoting.InboundInvocationHandlerImpl] (remote-thread-593) About to send back response null for command PrepareCommand { ... gtx=GlobalTransaction:<edg-perf08-48196>:13330:remote, cacheName='testCache', topologyId=53}
> 03:51:36,355 TRACE [org.infinispan.remoting.InboundInvocationHandlerImpl] (remote-thread-589) Calling perform() on CommitCommand {gtx=GlobalTransaction:<edg-perf08-48196>:13330:remote, cacheName='testCache', topologyId=53}
> 03:51:36,355 TRACE [org.infinispan.commands.tx.AbstractTransactionBoundaryCommand] (remote-thread-589) Did not find a RemoteTransaction for GlobalTransaction:<edg-perf08-48196>:13330:remote
> 03:51:36,355 TRACE [org.infinispan.remoting.InboundInvocationHandlerImpl] (remote-thread-589) About to send back response SuccessfulResponse{responseValue=null} for command CommitCommand {gtx=GlobalTransaction:<edg-perf08-48196>:13330:remote, cacheName='testCache', topologyId=53}
> {code}
> An exception response should be returned instead, to avoid incorrect assumptions about the presence of the updated entry in the cache.
> [~dan.berindei] spotted that lastPrunedTxId modifications are not logged; let's make sure they are.
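For illustration, a minimal sketch of the suggested behaviour using plain-JDK stand-ins (not the actual Infinispan transaction table):
{code}
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: a node that already pruned a transaction as completed should fail
// a late PrepareCommand/CommitCommand instead of acknowledging it with a
// silent SuccessfulResponse.
final class TxTableSketch {
   private final Map<Long, Object> remoteTxs = new ConcurrentHashMap<>();
   private final Set<Long> completedTxIds = ConcurrentHashMap.newKeySet();

   Object handleCommit(long txId) {
      Object tx = remoteTxs.get(txId);
      if (tx == null) {
         if (completedTxIds.contains(txId)) {
            // previously: SuccessfulResponse, so the originator assumed success
            throw new IllegalStateException("Tx " + txId
                  + " was already completed and pruned; refusing to ack the commit");
         }
         return null; // genuinely unknown tx: keep the old behaviour
      }
      // ... perform the actual commit, then mark it completed ...
      completedTxIds.add(txId);
      remoteTxs.remove(txId);
      return tx;
   }
}
{code}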
[JBoss JIRA] (ISPN-5106) Deadlock on GlobalComponentRegistry when starting a cluster
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-5106?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-5106:
-------------------------------
Status: Pull Request Sent (was: Coding In Progress)
Git Pull Request: https://github.com/infinispan/infinispan/pull/3416
> Deadlock on GlobalComponentRegistry when starting a cluster
> -----------------------------------------------------------
>
> Key: ISPN-5106
> URL: https://issues.jboss.org/browse/ISPN-5106
> Project: Infinispan
> Issue Type: Bug
> Components: Server
> Reporter: Jakub Markos
> Assignee: Dan Berindei
> Priority: Critical
> Attachments: dumps_and_logs.zip
>
>
> We have a test which starts 4 server nodes, and sometimes they fail to complete startup. This happens with the current snapshot.
> It appears there's a deadlock on the intrinsic lock of GlobalComponentRegistry: the CacheTopologyControlCommand.POLICY_GET_STATUS command is sent with the lock held, but the same lock is also needed for injecting dependencies when the command is processed on the remote node.
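For illustration, a minimal self-contained sketch of the lock cycle (simplified names, not the actual Infinispan classes):
{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified illustration of the reported deadlock: start() holds the
// registry monitor while blocking on a synchronous RPC, but the remote
// side must take the same monitor to wire the incoming command before it
// can reply. Two nodes starting simultaneously block each other forever.
final class RegistrySketch {
   private final Map<Class<?>, Object> components = new ConcurrentHashMap<>();
   private final Transport transport;

   RegistrySketch(Transport transport) { this.transport = transport; }

   // "MSC service thread": monitor held across a blocking remote call.
   synchronized void start() {
      transport.invokeRemotelySync("POLICY_GET_STATUS");
   }

   // "remote-thread" handling the other node's command: needs the monitor
   // that start() is holding, so the response the other node is waiting
   // for is never sent.
   synchronized Object getOrCreateComponent(Class<?> type) {
      return components.computeIfAbsent(type, t -> new Object());
   }

   interface Transport {
      void invokeRemotelySync(String command); // blocks until all nodes reply
   }
}
{code}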
> Here are the relevant parts from the dumps, node02:
> {code}
> "remote-thread--p3-t1" daemon prio=10 tid=0x00007f7a00002800 nid=0x487f waiting for monitor entry [0x00007f796bbfa000]
> java.lang.Thread.State: BLOCKED (on object monitor)
> at org.infinispan.factories.AbstractComponentRegistry.getOrCreateComponent(AbstractComponentRegistry.java:262)
> - waiting to lock <0x000000060365b6b8> (a org.infinispan.factories.GlobalComponentRegistry)
> at org.infinispan.factories.AbstractComponentRegistry.invokeInjectionMethod(AbstractComponentRegistry.java:227)
> at org.infinispan.factories.AbstractComponentRegistry.wireDependencies(AbstractComponentRegistry.java:132)
> at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler$2.run(GlobalInboundInvocationHandler.java:156)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> Locked ownable synchronizers:
> - <0x0000000615af46d0> (a java.util.concurrent.ThreadPoolExecutor$Worker)
> "MSC service thread 1-16" prio=10 tid=0x00007f79ec071800 nid=0x4839 waiting on condition [0x00007f7a40239000]
> java.lang.Thread.State: TIMED_WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x0000000614d47e60> (a java.util.concurrent.FutureTask)
> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
> at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:422)
> at java.util.concurrent.FutureTask.get(FutureTask.java:199)
> at org.infinispan.topology.ClusterTopologyManagerImpl.executeOnClusterSync(ClusterTopologyManagerImpl.java:432)
> at org.infinispan.topology.ClusterTopologyManagerImpl.executeOnClusterSync(ClusterTopologyManagerImpl.java:385)
> at org.infinispan.topology.ClusterTopologyManagerImpl.confirmMembersAvailable(ClusterTopologyManagerImpl.java:368)
> at org.infinispan.topology.ClusterTopologyManagerImpl.updateCacheMembers(ClusterTopologyManagerImpl.java:359)
> at org.infinispan.topology.ClusterTopologyManagerImpl.handleClusterView(ClusterTopologyManagerImpl.java:281)
> - locked <0x000000060420d4a8> (a java.lang.Object)
> at org.infinispan.topology.ClusterTopologyManagerImpl.start(ClusterTopologyManagerImpl.java:103)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:168)
> at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:869)
> at org.infinispan.factories.AbstractComponentRegistry.invokeStartMethods(AbstractComponentRegistry.java:638)
> at org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:627)
> at org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:530)
> - locked <0x000000060365b6b8> (a org.infinispan.factories.GlobalComponentRegistry)
> at org.infinispan.factories.GlobalComponentRegistry.start(GlobalComponentRegistry.java:221)
> - locked <0x000000060365b6b8> (a org.infinispan.factories.GlobalComponentRegistry)
> at org.infinispan.manager.DefaultCacheManager.wireAndStartCache(DefaultCacheManager.java:580)
> at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:546)
> at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:423)
> at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:437)
> at org.jboss.as.clustering.infinispan.DefaultEmbeddedCacheManager.getCache(DefaultEmbeddedCacheManager.java:89)
> at org.jboss.as.clustering.infinispan.DefaultEmbeddedCacheManager.getCache(DefaultEmbeddedCacheManager.java:80)
> at org.infinispan.server.infinispan.SecurityActions$4.run(SecurityActions.java:116)
> at org.infinispan.server.infinispan.SecurityActions$4.run(SecurityActions.java:113)
> at org.infinispan.security.Security.doPrivileged(Security.java:76)
> at org.infinispan.server.infinispan.SecurityActions.doPrivileged(SecurityActions.java:60)
> at org.infinispan.server.infinispan.SecurityActions.startCache(SecurityActions.java:121)
> at org.jboss.as.clustering.infinispan.subsystem.CacheService.start(CacheService.java:79)
> at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1948)
> at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1881)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> Locked ownable synchronizers:
> - <0x0000000653444750> (a java.util.concurrent.ThreadPoolExecutor$Worker)
> {code}
> and node03
> {code}
> "remote-thread--p3-t1" daemon prio=10 tid=0x00007f016c079000 nid=0x1a43 waiting for monitor entry [0x00007f0114396000]
> java.lang.Thread.State: BLOCKED (on object monitor)
> at org.infinispan.factories.AbstractComponentRegistry.getOrCreateComponent(AbstractComponentRegistry.java:262)
> - waiting to lock <0x0000000609c2bf50> (a org.infinispan.factories.GlobalComponentRegistry)
> at org.infinispan.factories.AbstractComponentRegistry.invokeInjectionMethod(AbstractComponentRegistry.java:227)
> at org.infinispan.factories.AbstractComponentRegistry.wireDependencies(AbstractComponentRegistry.java:132)
> at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler$2.run(GlobalInboundInvocationHandler.java:156)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> Locked ownable synchronizers:
> - <0x0000000615a05750> (a java.util.concurrent.ThreadPoolExecutor$Worker)
> "MSC service thread 1-16" prio=10 tid=0x00007f015c071800 nid=0x19ff waiting on condition [0x00007f01b0558000]
> java.lang.Thread.State: TIMED_WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x0000000615025bb0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
> at org.jgroups.util.CondVar.waitFor(CondVar.java:64)
> at org.jgroups.blocks.Request.waitForResults(Request.java:195)
> at org.jgroups.blocks.Request.responsesComplete(Request.java:181)
> at org.jgroups.blocks.Request.execute(Request.java:89)
> at org.jgroups.blocks.MessageDispatcher.sendMessage(MessageDispatcher.java:409)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:374)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:188)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:562)
> at org.infinispan.topology.ClusterTopologyManagerImpl.start(ClusterTopologyManagerImpl.java:112)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:168)
> at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:869)
> at org.infinispan.factories.AbstractComponentRegistry.invokeStartMethods(AbstractComponentRegistry.java:638)
> at org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:627)
> at org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:530)
> - locked <0x0000000609c2bf50> (a org.infinispan.factories.GlobalComponentRegistry)
> at org.infinispan.factories.GlobalComponentRegistry.start(GlobalComponentRegistry.java:221)
> - locked <0x0000000609c2bf50> (a org.infinispan.factories.GlobalComponentRegistry)
> at org.infinispan.manager.DefaultCacheManager.wireAndStartCache(DefaultCacheManager.java:580)
> at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:546)
> at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:423)
> at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:437)
> at org.jboss.as.clustering.infinispan.DefaultEmbeddedCacheManager.getCache(DefaultEmbeddedCacheManager.java:89)
> at org.jboss.as.clustering.infinispan.DefaultEmbeddedCacheManager.getCache(DefaultEmbeddedCacheManager.java:80)
> at org.infinispan.server.infinispan.SecurityActions$4.run(SecurityActions.java:116)
> at org.infinispan.server.infinispan.SecurityActions$4.run(SecurityActions.java:113)
> at org.infinispan.security.Security.doPrivileged(Security.java:76)
> at org.infinispan.server.infinispan.SecurityActions.doPrivileged(SecurityActions.java:60)
> at org.infinispan.server.infinispan.SecurityActions.startCache(SecurityActions.java:121)
> at org.jboss.as.clustering.infinispan.subsystem.CacheService.start(CacheService.java:79)
> at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1948)
> at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1881)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> Locked ownable synchronizers:
> - <0x00000006534e9628> (a java.util.concurrent.ThreadPoolExecutor$Worker)
> {code}
[JBoss JIRA] (ISPN-5408) High availability status
by Galder Zamarreño (JIRA)
Galder Zamarreño created ISPN-5408:
--------------------------------------
Summary: High availability status
Key: ISPN-5408
URL: https://issues.jboss.org/browse/ISPN-5408
Project: Infinispan
Issue Type: Feature Request
Components: Core
Reporter: Galder Zamarreño
Currently, DistributionManager has a couple of methods that indicate whether the node is in the process of rehashing (`isRehashInProgress`) and whether the node has joined the cluster (`isJoinComplete`).
It'd be easier for users if these boolean flags were transformed into some kind of status enum representing the node's current state. The cache's status is currently not enough to represent this.
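For illustration, a sketch of the kind of enum requested; all names are hypothetical, not an existing Infinispan API:
{code}
// Hypothetical status enum replacing the two boolean flags.
public enum NodeStatus {
   JOINING,     // isJoinComplete() == false
   REHASHING,   // isRehashInProgress() == true
   STABLE;      // joined, no rehash running

   // Hypothetical mapping from the two existing boolean methods.
   public static NodeStatus from(boolean joinComplete, boolean rehashInProgress) {
      if (!joinComplete) return JOINING;
      return rehashInProgress ? REHASHING : STABLE;
   }
}
{code}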
[JBoss JIRA] (ISPN-4778) PessimisticLockingInterceptor throws when handling remote clear command
by Daniele Pirola (JIRA)
[ https://issues.jboss.org/browse/ISPN-4778?page=com.atlassian.jira.plugin.... ]
Daniele Pirola commented on ISPN-4778:
--------------------------------------
Is there a chance to see this fix in 6.0.2.Final? JBoss WildFly 8.2 ships with Infinispan 6.0.2.Final; I tried upgrading to 7.2.0.CR1 with no luck.
> PessimisticLockingInterceptor throws when handling remote clear command
> -----------------------------------------------------------------------
>
> Key: ISPN-4778
> URL: https://issues.jboss.org/browse/ISPN-4778
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 6.0.2.Final
> Environment: JBoss WildFly 8.1.0.FINAL
> Reporter: Arjan t
> Assignee: Pedro Ruivo
> Labels: remote
>
> Using Infinispan as shipped with JBoss WildFly 8.1.0.Final as a distributed cache for Hibernate, it appears that the ClearCommand does not work in a cluster when *pessimistic locking* is used. Pessimistic locking seems to be the default in WildFly, even though theoretically it shouldn't be.
> This will result in the following exception:
> {noformat}
> java.lang.ClassCastException: org.infinispan.context.impl.NonTxInvocationContext cannot be cast to org.infinispan.context.impl.TxInvocationContext
> at org.infinispan.interceptors.locking.PessimisticLockingInterceptor.visitClearCommand(PessimisticLockingInterceptor.java:194)
> at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)
> at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112)
> at org.infinispan.commands.AbstractVisitor.visitClearCommand(AbstractVisitor.java:47)
> at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)
> at org.infinispan.interceptors.TxInterceptor.enlistWriteAndInvokeNext(TxInterceptor.java:255)
> at org.infinispan.interceptors.TxInterceptor.visitClearCommand(TxInterceptor.java:206)
> at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)
> at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112)
> at org.infinispan.commands.AbstractVisitor.visitClearCommand(AbstractVisitor.java:47)
> at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)
> at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:110)
> at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:73)
> at org.infinispan.commands.AbstractVisitor.visitClearCommand(AbstractVisitor.java:47)
> at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38)
> at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:333)
> at org.infinispan.commands.remote.BaseRpcInvokingCommand.processVisitableCommand(BaseRpcInvokingCommand.java:39)
> at org.infinispan.commands.remote.SingleRpcCommand.perform(SingleRpcCommand.java:48)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:95)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.access$000(InboundInvocationHandlerImpl.java:50)
> at org.infinispan.remoting.InboundInvocationHandlerImpl$2.run(InboundInvocationHandlerImpl.java:172)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> {noformat}
> The incoming command looks as follows:
> {noformat}
> CacheRpcCommand cmd:
> command:
> ClearCommand{flags=null}
> icf:
> org.infinispan.context.TransactionalInvocationContextFactory@3ef1861e
> Interceptor chain:
>
> >> org.infinispan.interceptors.InvocationContextInterceptor -- checks if stopping, otherwise continues
> >> org.infinispan.interceptors.CacheMgmtInterceptor -- does nothing
> >> org.infinispan.interceptors.TxInterceptor -- checks "shouldEnlist", if false does nothing
> >> org.infinispan.interceptors.NotificationInterceptor -- does nothing
> >> org.infinispan.interceptors.locking.PessimisticLockingInterceptor -- Throws exception if something in cache
> >> org.infinispan.interceptors.EntryWrappingInterceptor
> >> org.infinispan.interceptors.InvalidationInterceptor
> >> org.infinispan.interceptors.CallInterceptor
> {noformat}
> The problem seems to be that {{org.infinispan.commands.remote.BaseRpcInvokingCommand.processVisitableCommand}} always creates a {{NonTxInvocationContext}}, as per the following line of code:
> {code}
> final InvocationContext ctx = icf.createRemoteInvocationContextForCommand(vc, getOrigin());
> {code}
> When handling the ClearCommand, the {{PessimisticLockingInterceptor}} always casts this context to a {{TxInvocationContext}} whenever {{dataContainer}} is not empty, i.e. when there is cached data on the node where the clear command arrives. This happens in the following code:
> {code}
> public Object visitClearCommand(InvocationContext ctx, ClearCommand command) throws Throwable {
>    try {
>       boolean skipLocking = hasSkipLocking(command);
>       long lockTimeout = getLockAcquisitionTimeout(command, skipLocking);
>       for (InternalCacheEntry entry : dataContainer.entrySet())
>          lockAndRegisterBackupLock((TxInvocationContext) ctx, entry.getKey(), lockTimeout, skipLocking);
>       return invokeNextInterceptor(ctx, command);
>    } catch (Throwable te) {
>       releaseLocksOnFailureBeforePrepare(ctx);
>       throw te;
>    }
> }
> {code}
> So seemingly this can't ever work.
> Either the {{PessimisticLockingInterceptor}} can't be in the interceptor chain when handling commands from a remote destination, or something has to be done about the {{InvocationContext}} when handling remote commands?
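For illustration, one possible guard sketched against the method quoted above; the non-transactional branch ({{lockKey}}) is hypothetical, the rest reuses names from the original snippet:
{code}
public Object visitClearCommand(InvocationContext ctx, ClearCommand command) throws Throwable {
   try {
      boolean skipLocking = hasSkipLocking(command);
      long lockTimeout = getLockAcquisitionTimeout(command, skipLocking);
      for (InternalCacheEntry entry : dataContainer.entrySet()) {
         if (ctx.isInTxScope()) {
            // transactional caller: keep the existing backup-lock handling
            lockAndRegisterBackupLock((TxInvocationContext) ctx, entry.getKey(), lockTimeout, skipLocking);
         } else {
            // hypothetical non-tx path: plain lock, no backup registration
            lockKey(ctx, entry.getKey(), lockTimeout, skipLocking);
         }
      }
      return invokeNextInterceptor(ctx, command);
   } catch (Throwable te) {
      releaseLocksOnFailureBeforePrepare(ctx);
      throw te;
   }
}
{code}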
[JBoss JIRA] (ISPN-5405) org.infinispan.statetransfer.StateConsumerTest.test1 fails randomly with Assertion Error
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-5405?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-5405:
-----------------------------------------------
Sebastian Łaskawiec <slaskawi(a)redhat.com> changed the Status of [bug 1214692|https://bugzilla.redhat.com/show_bug.cgi?id=1214692] from NEW to ASSIGNED
> org.infinispan.statetransfer.StateConsumerTest.test1 fails randomly with Assertion Error
> ----------------------------------------------------------------------------------------
>
> Key: ISPN-5405
> URL: https://issues.jboss.org/browse/ISPN-5405
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 7.1.1.Final
> Reporter: Roman Macor
>
> Test org.infinispan.statetransfer.StateConsumerTest.test1 fails randomly with:
> {noformat}
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertFalse(Assert.java:64)
> at org.junit.Assert.assertFalse(Assert.java:74)
> at org.infinispan.statetransfer.StateConsumerTest.test1(StateConsumerTest.java:245)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:56)
> at java.lang.reflect.Method.invoke(Method.java:620)
> at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80)
> at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
> at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> at org.testng.TestRunner.privateRun(TestRunner.java:767)
> at org.testng.TestRunner.run(TestRunner.java:617)
> at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
> at org.testng.SuiteRunner.access$000(SuiteRunner.java:37)
> at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:368)
> at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
> at java.util.concurrent.FutureTask.run(FutureTask.java:274)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1157)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:627)
> at java.lang.Thread.run(Thread.java:863)
> {noformat}
[JBoss JIRA] (ISPN-4842) Reduce the overhead of clustered caches
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-4842?page=com.atlassian.jira.plugin.... ]
Dan Berindei reassigned ISPN-4842:
----------------------------------
Assignee: Dan Berindei
> Reduce the overhead of clustered caches
> ---------------------------------------
>
> Key: ISPN-4842
> URL: https://issues.jboss.org/browse/ISPN-4842
> Project: Infinispan
> Issue Type: Feature Request
> Components: Core
> Affects Versions: 7.0.0.CR1
> Reporter: Takayoshi Kimura
> Assignee: Dan Berindei
>
> A user is testing a 500-node cluster with 500 dist caches defined, and plans to expand it to 3000 caches.
> Infinispan manages a consistent hash per cache, uses a JGroups channel per cache, and uses several threads per cache. This adds significant overhead in a cluster of this size. When tested at this scale, Infinispan easily exhausted all threads in its thread pools and deadlocked, and it requires several thousand threads to handle the massive number of JOIN requests - the coordinator receives 499 * 3000 = 1,497,000 JOIN requests.
> It would be great if we could share the consistent hash and resources between caches. For example, define a "master" dist cache and allow other caches to refer to it for resource sharing.
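For illustration only (no such Infinispan API exists today), the sharing idea amounts to caches in a group resolving one shared membership/consistent-hash holder instead of each owning its own:
{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustration: one shared topology per group means the coordinator handles
// one JOIN per node instead of one JOIN per node per cache.
final class SharedTopologySketch {
   private static final Map<String, SharedTopologySketch> GROUPS = new ConcurrentHashMap<>();

   static SharedTopologySketch forGroup(String group) {
      return GROUPS.computeIfAbsent(group, g -> new SharedTopologySketch());
   }

   private final List<String> members = Collections.synchronizedList(new ArrayList<>());

   void join(String node) {
      members.add(node); // one JOIN, visible to every cache in the group
   }

   List<String> members() {
      synchronized (members) {
         return new ArrayList<>(members);
      }
   }
}
{code}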
[JBoss JIRA] (ISPN-4842) Reduce the overhead of clustered caches
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-4842?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-4842:
-------------------------------
Status: Open (was: New)
> Reduce the overhead of clustered caches
> ---------------------------------------
>
> Key: ISPN-4842
> URL: https://issues.jboss.org/browse/ISPN-4842
> Project: Infinispan
> Issue Type: Feature Request
> Components: Core
> Affects Versions: 7.0.0.CR1
> Reporter: Takayoshi Kimura
> Assignee: Dan Berindei
>
> A user is testing a 500-node cluster with 500 dist caches defined, and plans to expand it to 3000 caches.
> Infinispan manages a consistent hash per cache, uses a JGroups channel per cache, and uses several threads per cache. This adds significant overhead in a cluster of this size. When tested at this scale, Infinispan easily exhausted all threads in its thread pools and deadlocked, and it requires several thousand threads to handle the massive number of JOIN requests - the coordinator receives 499 * 3000 = 1,497,000 JOIN requests.
> It would be great if we could share the consistent hash and resources between caches. For example, define a "master" dist cache and allow other caches to refer to it for resource sharing.