[JBoss JIRA] (ISPN-4780) Protostream should not require package name on imports
by Gustavo Fernandes (JIRA)
Gustavo Fernandes created ISPN-4780:
---------------------------------------
Summary: Protostream should not require package name on imports
Key: ISPN-4780
URL: https://issues.jboss.org/browse/ISPN-4780
Project: Infinispan
Issue Type: Bug
Components: Remote Querying
Affects Versions: 7.0.0.Beta2
Reporter: Gustavo Fernandes
Assignee: Sanne Grinovero
Fix For: 7.0.0.CR1
{code:title=file1.proto}
package p;
message A {
   optional int32 f1 = 1;
}
{code}
{code:title=file2.proto}
package org.infinispan;
import "file1.proto";
message B {
   required p.A ma = 1;
}
{code}
This does not work, because Protostream resolves an import by combining the package name with the file name.
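A minimal sketch of the resolution the issue asks for, using a hypothetical map-based registry (these names are illustrative, not the actual Protostream API): try the package-qualified path first, then fall back to matching the bare file name, so {{import "file1.proto"}} resolves even when the imported file was registered under a different package.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for Protostream's file descriptor registry.
class ImportResolver {
    // Files keyed by "package/fileName", e.g. "p/file1.proto".
    private final Map<String, String> filesByQualifiedName = new HashMap<>();

    void register(String packageName, String fileName, String contents) {
        filesByQualifiedName.put(packageName + "/" + fileName, contents);
    }

    // Resolve an import first relative to the importing file's package,
    // then fall back to scanning registered files for a bare-name match.
    String resolve(String importingPackage, String importPath) {
        String qualified = importingPackage + "/" + importPath;
        String direct = filesByQualifiedName.get(qualified);
        if (direct != null) {
            return direct;
        }
        for (Map.Entry<String, String> e : filesByQualifiedName.entrySet()) {
            if (e.getKey().endsWith("/" + importPath)) {
                return e.getValue(); // fallback: file name alone, package ignored
            }
        }
        throw new IllegalArgumentException("Import not found: " + importPath);
    }
}
```

With the fallback, file2.proto in package {{org.infinispan}} can import file1.proto registered under package {{p}} without spelling out the package in the import statement.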
--
This message was sent by Atlassian JIRA
(v6.3.1#6329)
[JBoss JIRA] (ISPN-4631) NodeAuthentication*PassIT.testReadItemOnJoiningNode fails on RHEL6
by Vojtech Juranek (JIRA)
[ https://issues.jboss.org/browse/ISPN-4631?page=com.atlassian.jira.plugin.... ]
Vojtech Juranek updated ISPN-4631:
----------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/2910
> NodeAuthentication*PassIT.testReadItemOnJoiningNode fails on RHEL6
> ------------------------------------------------------------------
>
> Key: ISPN-4631
> URL: https://issues.jboss.org/browse/ISPN-4631
> Project: Infinispan
> Issue Type: Bug
> Components: Integration, Security
> Affects Versions: 7.0.0.Beta1
> Reporter: Dan Berindei
> Assignee: Vojtech Juranek
> Priority: Blocker
> Labels: testsuite_stability
> Fix For: 7.0.0.CR1
>
>
> Failures appear only on the RHEL agents in CI, both in NodeAuthenticationKrbPassIT and NodeAuthenticationMD5PassIT:
> {noformat}
> java.lang.AssertionError: expected:<test_value> but was:<null>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at org.infinispan.test.integration.security.embedded.AbstractNodeAuthentication.testReadItemOnJoiningNode(AbstractNodeAuthentication.java:94)
> at org.infinispan.test.integration.security.embedded.NodeAuthenticationKrbPassIT.testReadItemOnJoiningNode(NodeAuthenticationKrbPassIT.java:71)
> {noformat}
> The failure in {{NodeAuthentication*FailIT.testReadItemOnJoiningNode}} is almost certainly related:
> {noformat}
> java.lang.Exception: Unexpected exception, expected<org.infinispan.manager.EmbeddedCacheManagerStartupException> but was<java.lang.Exception>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at org.infinispan.test.integration.security.embedded.AbstractNodeAuthentication.testReadItemOnJoiningNode(AbstractNodeAuthentication.java:94)
> at org.infinispan.test.integration.security.embedded.NodeAuthenticationMD5FailIT.testReadItemOnJoiningNode(NodeAuthenticationMD5FailIT.java:55)
> {noformat}
> http://ci.infinispan.org/viewLog.html?buildId=10776&tab=buildResultsDiv&b...
[JBoss JIRA] (ISPN-4776) The topology id for the merged cache topology is not always bigger than all the partition topology ids
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-4776?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-4776:
-------------------------------
Status: Open (was: New)
> The topology id for the merged cache topology is not always bigger than all the partition topology ids
> ------------------------------------------------------------------------------------------------------
>
> Key: ISPN-4776
> URL: https://issues.jboss.org/browse/ISPN-4776
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 7.0.0.Beta2
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Blocker
> Labels: testsuite_stability
> Fix For: 7.0.0.CR1
>
>
> With the ISPN-4574 fix, I changed the merge algorithm to pick the partition with the most members (both in the _stable_ topology and in the _current_ topology) instead of the partition with the highest topology id.
> However, the partition with the most members does not necessarily have the highest topology id, so it's possible that some nodes will ignore the merged topology because they already have a higher topology id installed. This happened once in ClusterTopologyManagerTest.testClusterRecoveryAfterThreeWaySplit:
> {noformat}
> 00:24:59,286 DEBUG (transport-thread-NodeL-p33097-t6:) [ClusterCacheStatus] Recovered 3 partition(s) for cache cache: [CacheTopology{id=8, rebalanceId=3, currentCH=DefaultConsistentHash{ns = 60, owners = (1)[, NodeL-25322: 60+0]}, pendingCH=null, unionCH=null}, CacheTopology{id=6, rebalanceId=3, currentCH=DefaultConsistentHash{ns = 60, owners = (2)[, NodeL-25322: 30+10, NodeN-6727: 30+10]}, pendingCH=DefaultConsistentHash{ns = 60, owners = (2)[, NodeL-25322: 30+30, NodeN-6727: 30+30]}, unionCH=null}, CacheTopology{id=5, rebalanceId=2, currentCH=DefaultConsistentHash{ns = 60, owners = (1)[, NodeM-12972: 60+0]}, pendingCH=null, unionCH=null}]
> 00:24:59,287 DEBUG (transport-thread-NodeL-p33097-t6:) [ClusterCacheStatus] Updating topologies after merge for cache cache, current topology = CacheTopology{id=5, rebalanceId=2, currentCH=DefaultConsistentHash{ns = 60, owners = (1)[, NodeM-12972: 60+0]}, pendingCH=null, unionCH=null}, stable topology = CacheTopology{id=4, rebalanceId=2, currentCH=DefaultConsistentHash{ns = 60, owners = (3)[, NodeL-25322: 20+20, NodeM-12972: 20+20, NodeN-6727: 20+20]}, pendingCH=null, unionCH=null}, availability mode = null
> 00:24:59,287 DEBUG (transport-thread-NodeL-p33097-t6:) [ClusterTopologyManagerImpl] Updating cluster-wide current topology for cache cache, topology = CacheTopology{id=5, rebalanceId=2, currentCH=DefaultConsistentHash{ns = 60, owners = (1)[, NodeM-12972: 60+0]}, pendingCH=null, unionCH=null}, availability mode = null
> 00:24:59,288 TRACE (transport-thread-NodeL-p33097-t3:) [LocalTopologyManagerImpl] Ignoring consistent hash update for cache cache, current topology is 8: CacheTopology{id=5, rebalanceId=2, currentCH=DefaultConsistentHash{ns = 60, owners = (1)[, NodeM-12972: 60+0]}, pendingCH=null, unionCH=null}
> {noformat}
> Failure logs here: http://ci.infinispan.org/viewLog.html?buildId=12364&buildTypeId=Infinispa...
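In the log above, the three partitions had topology ids 8, 6, and 5, but the merge reused id 5, which the node already at id 8 ignored as stale. A minimal sketch of the constraint the fix implies (a hypothetical helper, not the actual ClusterCacheStatus code): whichever partition's consistent hash wins the merge, the merged topology id must strictly exceed every partition's id.

```java
import java.util.List;

// Hypothetical helper: choose the id for a merged cache topology.
class MergedTopologyId {
    // Return an id strictly greater than every partition's topology id,
    // so no node can reject the merged topology as older than its own.
    static int merge(List<Integer> partitionTopologyIds) {
        int max = 0;
        for (int id : partitionTopologyIds) {
            max = Math.max(max, id);
        }
        return max + 1;
    }
}
```

For the recovered partitions in the log, this would yield id 9 instead of 5, so the node holding topology 8 would accept the update.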
[JBoss JIRA] (ISPN-4776) The topology id for the merged cache topology is not always bigger than all the partition topology ids
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-4776?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-4776:
-------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/2908
> The topology id for the merged cache topology is not always bigger than all the partition topology ids
> ------------------------------------------------------------------------------------------------------
>
> Key: ISPN-4776
> URL: https://issues.jboss.org/browse/ISPN-4776
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 7.0.0.Beta2
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Blocker
> Labels: testsuite_stability
> Fix For: 7.0.0.CR1
>
>
> With the ISPN-4574 fix, I changed the merge algorithm to pick the partition with the most members (both in the _stable_ topology and in the _current_ topology) instead of the partition with the highest topology id.
> However, the partition with the most members does not necessarily have the highest topology id, so it's possible that some nodes will ignore the merged topology because they already have a higher topology id installed. This happened once in ClusterTopologyManagerTest.testClusterRecoveryAfterThreeWaySplit:
> {noformat}
> 00:24:59,286 DEBUG (transport-thread-NodeL-p33097-t6:) [ClusterCacheStatus] Recovered 3 partition(s) for cache cache: [CacheTopology{id=8, rebalanceId=3, currentCH=DefaultConsistentHash{ns = 60, owners = (1)[, NodeL-25322: 60+0]}, pendingCH=null, unionCH=null}, CacheTopology{id=6, rebalanceId=3, currentCH=DefaultConsistentHash{ns = 60, owners = (2)[, NodeL-25322: 30+10, NodeN-6727: 30+10]}, pendingCH=DefaultConsistentHash{ns = 60, owners = (2)[, NodeL-25322: 30+30, NodeN-6727: 30+30]}, unionCH=null}, CacheTopology{id=5, rebalanceId=2, currentCH=DefaultConsistentHash{ns = 60, owners = (1)[, NodeM-12972: 60+0]}, pendingCH=null, unionCH=null}]
> 00:24:59,287 DEBUG (transport-thread-NodeL-p33097-t6:) [ClusterCacheStatus] Updating topologies after merge for cache cache, current topology = CacheTopology{id=5, rebalanceId=2, currentCH=DefaultConsistentHash{ns = 60, owners = (1)[, NodeM-12972: 60+0]}, pendingCH=null, unionCH=null}, stable topology = CacheTopology{id=4, rebalanceId=2, currentCH=DefaultConsistentHash{ns = 60, owners = (3)[, NodeL-25322: 20+20, NodeM-12972: 20+20, NodeN-6727: 20+20]}, pendingCH=null, unionCH=null}, availability mode = null
> 00:24:59,287 DEBUG (transport-thread-NodeL-p33097-t6:) [ClusterTopologyManagerImpl] Updating cluster-wide current topology for cache cache, topology = CacheTopology{id=5, rebalanceId=2, currentCH=DefaultConsistentHash{ns = 60, owners = (1)[, NodeM-12972: 60+0]}, pendingCH=null, unionCH=null}, availability mode = null
> 00:24:59,288 TRACE (transport-thread-NodeL-p33097-t3:) [LocalTopologyManagerImpl] Ignoring consistent hash update for cache cache, current topology is 8: CacheTopology{id=5, rebalanceId=2, currentCH=DefaultConsistentHash{ns = 60, owners = (1)[, NodeM-12972: 60+0]}, pendingCH=null, unionCH=null}
> {noformat}
> Failure logs here: http://ci.infinispan.org/viewLog.html?buildId=12364&buildTypeId=Infinispa...
[JBoss JIRA] (ISPN-4778) PessimisticLockingInterceptor throws when handling remote clear command
by Arjan t (JIRA)
[ https://issues.jboss.org/browse/ISPN-4778?page=com.atlassian.jira.plugin.... ]
Arjan t updated ISPN-4778:
--------------------------
Description:
Using Infinispan as shipped with JBoss WildFly 8.1.0.Final as the distributed cache for Hibernate, it appears that the ClearCommand does not work in a cluster when *pessimistic locking* is used. Pessimistic locking seems to be the default in WildFly, even though in theory it shouldn't be.
This will result in the following exception:
{noformat}
java.lang.ClassCastException: org.infinispan.context.impl.NonTxInvocationContext cannot be cast to org.infinispan.context.impl.TxInvocationContext
at org.infinispan.interceptors.locking.PessimisticLockingInterceptor.visitClearCommand(PessimisticLockingInterceptor.java:194)
at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)
at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112)
at org.infinispan.commands.AbstractVisitor.visitClearCommand(AbstractVisitor.java:47)
at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)
at org.infinispan.interceptors.TxInterceptor.enlistWriteAndInvokeNext(TxInterceptor.java:255)
at org.infinispan.interceptors.TxInterceptor.visitClearCommand(TxInterceptor.java:206)
at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)
at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112)
at org.infinispan.commands.AbstractVisitor.visitClearCommand(AbstractVisitor.java:47)
at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)
at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:110)
at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:73)
at org.infinispan.commands.AbstractVisitor.visitClearCommand(AbstractVisitor.java:47)
at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38)
at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:333)
at org.infinispan.commands.remote.BaseRpcInvokingCommand.processVisitableCommand(BaseRpcInvokingCommand.java:39)
at org.infinispan.commands.remote.SingleRpcCommand.perform(SingleRpcCommand.java:48)
at org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:95)
at org.infinispan.remoting.InboundInvocationHandlerImpl.access$000(InboundInvocationHandlerImpl.java:50)
at org.infinispan.remoting.InboundInvocationHandlerImpl$2.run(InboundInvocationHandlerImpl.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
{noformat}
The incoming command looks as follows:
{noformat}
CacheRpcCommand cmd:
command:
ClearCommand{flags=null}
icf:
org.infinispan.context.TransactionalInvocationContextFactory@3ef1861e
Interceptor chain:
>> org.infinispan.interceptors.InvocationContextInterceptor -- checks if stopping, otherwise continues
>> org.infinispan.interceptors.CacheMgmtInterceptor -- does nothing
>> org.infinispan.interceptors.TxInterceptor -- checks "shouldEnlist", if false does nothing
>> org.infinispan.interceptors.NotificationInterceptor -- does nothing
>> org.infinispan.interceptors.locking.PessimisticLockingInterceptor -- Throws exception if something in cache
>> org.infinispan.interceptors.EntryWrappingInterceptor
>> org.infinispan.interceptors.InvalidationInterceptor
>> org.infinispan.interceptors.CallInterceptor
{noformat}
The problem seems to be that {{org.infinispan.commands.remote.BaseRpcInvokingCommand.processVisitableCommand}} always creates a {{NonTxInvocationContext}}, as per the following line of code:
{code}
final InvocationContext ctx = icf.createRemoteInvocationContextForCommand(vc, getOrigin());
{code}
When handling the ClearCommand, the {{PessimisticLockingInterceptor}} always casts this context to a {{TxInvocationContext}} whenever {{dataContainer}} is not empty, i.e. when there is cached data on the node where the clear command arrives. This happens in the following code:
{code}
public Object visitClearCommand(InvocationContext ctx, ClearCommand command) throws Throwable {
   try {
      boolean skipLocking = hasSkipLocking(command);
      long lockTimeout = getLockAcquisitionTimeout(command, skipLocking);
      for (InternalCacheEntry entry : dataContainer.entrySet())
         lockAndRegisterBackupLock((TxInvocationContext) ctx, entry.getKey(), lockTimeout, skipLocking);
      return invokeNextInterceptor(ctx, command);
   } catch (Throwable te) {
      releaseLocksOnFailureBeforePrepare(ctx);
      throw te;
   }
}
{code}
So seemingly this can never work.
Either the {{PessimisticLockingInterceptor}} cannot be in the interceptor chain when handling commands from a remote node, or something has to be done about the {{InvocationContext}} when handling remote commands?
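One possible guard, sketched here against simplified stand-ins for the Infinispan types involved (an assumption about the shape of a fix, not the committed one): only register backup locks when the invocation context is actually transactional, instead of casting unconditionally.

```java
// Simplified stand-ins for the real Infinispan context interfaces.
interface InvocationContext {
    boolean isInTxScope();
}

interface TxInvocationContext extends InvocationContext {
}

class ClearGuard {
    // A remote ClearCommand arrives with a NonTxInvocationContext; casting it
    // to TxInvocationContext produces the ClassCastException in the report.
    // Checking the context first lets the non-tx path skip lock registration.
    static boolean shouldRegisterBackupLocks(InvocationContext ctx) {
        return ctx.isInTxScope() && ctx instanceof TxInvocationContext;
    }
}
```

With this check in {{visitClearCommand}}, the locking loop would run only for transactional invocations, and the remote non-tx clear would simply proceed down the chain.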
> PessimisticLockingInterceptor throws when handling remote clear command
> -----------------------------------------------------------------------
>
> Key: ISPN-4778
> URL: https://issues.jboss.org/browse/ISPN-4778
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 6.0.2.Final
> Environment: JBoss WildFly 8.1.0.FINAL
> Reporter: Arjan t
> Assignee: Mircea Markus
> Labels: remote
>
> Using Infinispan as shipped with JBoss WildFly 8.1.0.Final as the distributed cache for Hibernate, it appears that the ClearCommand does not work in a cluster when *pessimistic locking* is used. Pessimistic locking seems to be the default in WildFly, even though in theory it shouldn't be.
> This will result in the following exception:
> {noformat}
> java.lang.ClassCastException: org.infinispan.context.impl.NonTxInvocationContext cannot be cast to org.infinispan.context.impl.TxInvocationContext
> at org.infinispan.interceptors.locking.PessimisticLockingInterceptor.visitClearCommand(PessimisticLockingInterceptor.java:194)
> at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)
> at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112)
> at org.infinispan.commands.AbstractVisitor.visitClearCommand(AbstractVisitor.java:47)
> at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)
> at org.infinispan.interceptors.TxInterceptor.enlistWriteAndInvokeNext(TxInterceptor.java:255)
> at org.infinispan.interceptors.TxInterceptor.visitClearCommand(TxInterceptor.java:206)
> at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)
> at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112)
> at org.infinispan.commands.AbstractVisitor.visitClearCommand(AbstractVisitor.java:47)
> at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)
> at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:110)
> at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:73)
> at org.infinispan.commands.AbstractVisitor.visitClearCommand(AbstractVisitor.java:47)
> at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38)
> at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:333)
> at org.infinispan.commands.remote.BaseRpcInvokingCommand.processVisitableCommand(BaseRpcInvokingCommand.java:39)
> at org.infinispan.commands.remote.SingleRpcCommand.perform(SingleRpcCommand.java:48)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:95)
> at org.infinispan.remoting.InboundInvocationHandlerImpl.access$000(InboundInvocationHandlerImpl.java:50)
> at org.infinispan.remoting.InboundInvocationHandlerImpl$2.run(InboundInvocationHandlerImpl.java:172)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> {noformat}
> The incoming command looks as follows:
> {noformat}
> CacheRpcCommand cmd:
> command:
> ClearCommand{flags=null}
> icf:
> org.infinispan.context.TransactionalInvocationContextFactory@3ef1861e
> Interceptor chain:
>
> >> org.infinispan.interceptors.InvocationContextInterceptor -- checks if stopping, otherwise continues
> >> org.infinispan.interceptors.CacheMgmtInterceptor -- does nothing
> >> org.infinispan.interceptors.TxInterceptor -- checks "shouldEnlist", if false does nothing
> >> org.infinispan.interceptors.NotificationInterceptor -- does nothing
> >> org.infinispan.interceptors.locking.PessimisticLockingInterceptor -- Throws exception if something in cache
> >> org.infinispan.interceptors.EntryWrappingInterceptor
> >> org.infinispan.interceptors.InvalidationInterceptor
> >> org.infinispan.interceptors.CallInterceptor
> {noformat}
> The problem seems to be that {{org.infinispan.commands.remote.BaseRpcInvokingCommand.processVisitableCommand}} always creates a {{NonTxInvocationContext}}, as per the following line of code:
> {code}
> final InvocationContext ctx = icf.createRemoteInvocationContextForCommand(vc, getOrigin());
> {code}
> When handling the ClearCommand, the {{PessimisticLockingInterceptor}} always casts this context to a {{TxInvocationContext}} whenever {{dataContainer}} is not empty, i.e. when there is cached data on the node where the clear command arrives. This happens in the following code:
> {code}
> public Object visitClearCommand(InvocationContext ctx, ClearCommand command) throws Throwable {
>    try {
>       boolean skipLocking = hasSkipLocking(command);
>       long lockTimeout = getLockAcquisitionTimeout(command, skipLocking);
>       for (InternalCacheEntry entry : dataContainer.entrySet())
>          lockAndRegisterBackupLock((TxInvocationContext) ctx, entry.getKey(), lockTimeout, skipLocking);
>       return invokeNextInterceptor(ctx, command);
>    } catch (Throwable te) {
>       releaseLocksOnFailureBeforePrepare(ctx);
>       throw te;
>    }
> }
> {code}
> So seemingly this can never work.
> Either the {{PessimisticLockingInterceptor}} cannot be in the interceptor chain when handling commands from a remote node, or something has to be done about the {{InvocationContext}} when handling remote commands?
[JBoss JIRA] (ISPN-4778) PessimisticLockingInterceptor throws when handling remote clear command
by Arjan t (JIRA)
Arjan t created ISPN-4778:
-----------------------------
Summary: PessimisticLockingInterceptor throws when handling remote clear command
Key: ISPN-4778
URL: https://issues.jboss.org/browse/ISPN-4778
Project: Infinispan
Issue Type: Bug
Affects Versions: 6.0.2.Final
Environment: JBoss WildFly 8.1.0.FINAL
Reporter: Arjan t
Assignee: Mircea Markus
Using Infinispan as shipped with JBoss WildFly 8.1.0.Final as the distributed cache for Hibernate, it appears that the ClearCommand does not work in a cluster when *pessimistic locking* is used. Pessimistic locking seems to be the default in WildFly, even though in theory it shouldn't be.
This will result in the following exception:
{noformat}
java.lang.ClassCastException: org.infinispan.context.impl.NonTxInvocationContext cannot be cast to org.infinispan.context.impl.TxInvocationContext
at org.infinispan.interceptors.locking.PessimisticLockingInterceptor.visitClearCommand(PessimisticLockingInterceptor.java:194)
at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)
at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112)
at org.infinispan.commands.AbstractVisitor.visitClearCommand(AbstractVisitor.java:47)
at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)
at org.infinispan.interceptors.TxInterceptor.enlistWriteAndInvokeNext(TxInterceptor.java:255)
at org.infinispan.interceptors.TxInterceptor.visitClearCommand(TxInterceptor.java:206)
at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)
at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112)
at org.infinispan.commands.AbstractVisitor.visitClearCommand(AbstractVisitor.java:47)
at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)
at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:110)
at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:73)
at org.infinispan.commands.AbstractVisitor.visitClearCommand(AbstractVisitor.java:47)
at org.infinispan.commands.write.ClearCommand.acceptVisitor(ClearCommand.java:38)
at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:333)
at org.infinispan.commands.remote.BaseRpcInvokingCommand.processVisitableCommand(BaseRpcInvokingCommand.java:39)
at org.infinispan.commands.remote.SingleRpcCommand.perform(SingleRpcCommand.java:48)
at org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:95)
at org.infinispan.remoting.InboundInvocationHandlerImpl.access$000(InboundInvocationHandlerImpl.java:50)
at org.infinispan.remoting.InboundInvocationHandlerImpl$2.run(InboundInvocationHandlerImpl.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
{noformat}
The incoming command looks as follows:
{noformat}
CacheRpcCommand cmd:
command:
ClearCommand{flags=null}
icf:
org.infinispan.context.TransactionalInvocationContextFactory@3ef1861e
Interceptor chain:
>> org.infinispan.interceptors.InvocationContextInterceptor -- checks if stopping, otherwise continues
>> org.infinispan.interceptors.CacheMgmtInterceptor -- does nothing
>> org.infinispan.interceptors.TxInterceptor -- checks "shouldEnlist", if false does nothing
>> org.infinispan.interceptors.NotificationInterceptor -- does nothing
>> org.infinispan.interceptors.locking.PessimisticLockingInterceptor -- Throws exception if something in cache
>> org.infinispan.interceptors.EntryWrappingInterceptor
>> org.infinispan.interceptors.InvalidationInterceptor
>> org.infinispan.interceptors.CallInterceptor
{noformat}
The problem seems to be that {{org.infinispan.commands.remote.BaseRpcInvokingCommand.processVisitableCommand}} always creates a {{NonTxInvocationContext}}, as per the following line of code:
{code}
final InvocationContext ctx = icf.createRemoteInvocationContextForCommand(vc, getOrigin());
{code}
When handling the ClearCommand, the {{PessimisticLockingInterceptor}} always casts this context to a {{TxInvocationContext}} whenever {{dataContainer}} is not empty, i.e. when there is cached data on the node where the clear command arrives. This happens in the following code:
{code}
public Object visitClearCommand(InvocationContext ctx, ClearCommand command) throws Throwable {
   try {
      boolean skipLocking = hasSkipLocking(command);
      long lockTimeout = getLockAcquisitionTimeout(command, skipLocking);
      for (InternalCacheEntry entry : dataContainer.entrySet())
         lockAndRegisterBackupLock((TxInvocationContext) ctx, entry.getKey(), lockTimeout, skipLocking);
      return invokeNextInterceptor(ctx, command);
   } catch (Throwable te) {
      releaseLocksOnFailureBeforePrepare(ctx);
      throw te;
   }
}
{code}
So seemingly this can never work.
Either the {{PessimisticLockingInterceptor}} cannot be in the interceptor chain when handling commands from a remote node, or something has to be done about the {{InvocationContext}} when handling remote commands?