[JBoss JIRA] (ISPN-6099) ConcurrentJoinTest random failures
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/ISPN-6099?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on ISPN-6099:
--------------------------------
So the attached C demo demonstrates this. Start 2 instances: {{./BindAddress 7900}}
On MacOSX, the first program calls {{bind()}} and then waits. The second program fails on {{bind()}} with address already in use.
On Linux, both programs call {{bind()}} successfully, but the second one fails when it calls {{listen()}}.
So the question is what openjdk does: if {{listen()}} fails, it _should_ close the socket, but - according to your observations - it apparently doesn't.
I'll investigate what happens when {{SO_REUSEADDR}} is not set. I'd expect that {{bind()}} would fail, even on Linux...
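For reference, here is a minimal standalone Java NIO sketch of the corresponding calls on the JDK side (the port, backlog and class name are made up to mirror the demo; this is not the JGroups code). Note that {{ServerSocketChannel.bind()}} performs both the native {{bind()}} and {{listen()}} in one call, so on Linux a {{listen()}} failure would surface from that single Java call:
{noformat}
// Hypothetical probe, not the JGroups code: run two copies to mimic the C demo.
import java.net.InetSocketAddress;
import java.net.StandardSocketOptions;
import java.nio.channels.ServerSocketChannel;

public class BindProbe {
    public static void main(String[] args) throws Exception {
        ServerSocketChannel ch = ServerSocketChannel.open();
        // SO_REUSEADDR enabled, as in the scenario discussed above
        ch.setOption(StandardSocketOptions.SO_REUSEADDR, true);
        try {
            ch.bind(new InetSocketAddress(7900), 50); // native bind() + listen()
            System.out.println("Bound to " + ch.getLocalAddress() + ", press ENTER to exit");
            System.in.read(); // hold the port, like the C demo waiting for input
        } catch (Exception e) {
            // What the comment above argues should happen on failure:
            // close the socket rather than leave it bound.
            ch.close();
            throw e;
        }
    }
}
{noformat}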
> ConcurrentJoinTest random failures
> ----------------------------------
>
> Key: ISPN-6099
> URL: https://issues.jboss.org/browse/ISPN-6099
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 8.1.0.Final
> Environment: java version "1.8.0_60"
> Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
> Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 8.2.0.Beta1
>
> Attachments: main.cpp
>
>
> Since the switch to {{TCP_NIO2}} in the test suite, I've been seeing random failures in {{ConcurrentJoinTest}} and other tests that attempt to start multiple channels in parallel (e.g. {{StateTransferFunctionalTest}} and its subclasses).
> Normally JGroups only reports a {{java.net.BindException: No available port to bind to in range [8000 .. 8099]}}, but I modified {{org.jgroups.util.Util.createServerSocket()}} to report the cause exception and got this:
> {noformat}
> java.net.BindException: No available port to bind to in range [8000 .. 8099]
> at org.jgroups.util.Util.createServerSocketChannel(Util.java:3077) ~[jgroups-3.6.7.Final.jar:3.6.7.Final]
> at org.jgroups.blocks.cs.NioServer.<init>(NioServer.java:86) ~[jgroups-3.6.7.Final.jar:3.6.7.Final]
> at org.jgroups.protocols.TCP_NIO2.start(TCP_NIO2.java:97) ~[jgroups-3.6.7.Final.jar:3.6.7.Final]
> at org.jgroups.stack.ProtocolStack.startStack(ProtocolStack.java:966) ~[jgroups-3.6.7.Final.jar:3.6.7.Final]
> at org.jgroups.JChannel.startStack(JChannel.java:890) ~[jgroups-3.6.7.Final.jar:3.6.7.Final]
> at org.jgroups.JChannel._preConnect(JChannel.java:553) ~[jgroups-3.6.7.Final.jar:3.6.7.Final]
> at org.jgroups.JChannel.connect(JChannel.java:288) ~[jgroups-3.6.7.Final.jar:3.6.7.Final]
> at org.jgroups.JChannel.connect(JChannel.java:279) ~[jgroups-3.6.7.Final.jar:3.6.7.Final]
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.startJGroupsChannelIfNeeded(JGroupsTransport.java:199) ~[classes/:?]
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.start(JGroupsTransport.java:190) ~[classes/:?]
> at sun.reflect.GeneratedMethodAccessor129.invoke(Unknown Source) ~[?:?]
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_60]
> at java.lang.reflect.Method.invoke(Method.java:497) ~[?:1.8.0_60]
> at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:168) ~[infinispan-commons-8.2.0-SNAPSHOT.jar:8.2.0-SNAPSHOT]
> at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:870) ~[classes/:?]
> at org.infinispan.factories.AbstractComponentRegistry.invokeStartMethods(AbstractComponentRegistry.java:639) ~[classes/:?]
> at org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:628) ~[classes/:?]
> at org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:531) ~[classes/:?]
> at org.infinispan.factories.GlobalComponentRegistry.start(GlobalComponentRegistry.java:229) ~[classes/:?]
> ... 11 more
> Caused by: java.net.SocketException: Invalid argument
> at sun.nio.ch.Net.bind0(Native Method) ~[?:1.8.0_60]
> at sun.nio.ch.Net.bind(Net.java:433) ~[?:1.8.0_60]
> at sun.nio.ch.Net.bind(Net.java:425) ~[?:1.8.0_60]
> at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) ~[?:1.8.0_60]
> at java.nio.channels.ServerSocketChannel.bind(ServerSocketChannel.java:157) ~[?:1.8.0_60]
> at org.jgroups.util.Util.createServerSocketChannel(Util.java:3072) ~[jgroups-3.6.7.Final.jar:3.6.7.Final]
> {noformat}
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6099) ConcurrentJoinTest random failures
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/ISPN-6099?page=com.atlassian.jira.plugin.... ]
Bela Ban updated ISPN-6099:
---------------------------
Attachment: main.cpp
* Creates socket
* Calls bind()
* Waits for input
* Calls listen()
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6007) Cache.get(...) using Flag.FORCE_WRITE_LOCK should retry on OutdatedTopologyException
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-6007?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-6007:
----------------------------------
Status: Resolved (was: Pull Request Sent)
Fix Version/s: 8.2.0.Final
Resolution: Done
> Cache.get(...) using Flag.FORCE_WRITE_LOCK should retry on OutdatedTopologyException
> ------------------------------------------------------------------------------------
>
> Key: ISPN-6007
> URL: https://issues.jboss.org/browse/ISPN-6007
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 8.1.0.CR1, 8.0.2.Final
> Reporter: Paul Ferraro
> Assignee: Dan Berindei
> Fix For: 8.2.0.Beta1, 8.2.0.Final, 8.1.1.Final
>
>
> From IRC:
> pferraro: Following a shutdown of a node, we're seeing OutdatedTopologyExceptions during a Cache.get(...) using Flag.FORCE_WRITE_LOCK when the requested key is not owned by the requesting node
> pferraro: is there a reason why Infinispan doesn't automatically retry here?
> dberindei: I think we just overlooked it
> dberindei: we are retrying a LockControlCommand when you call lock() explicitly
> dberindei: but maybe not when it's invoked for a get()
> pferraro_: is that something that can be fixed easily?
> dberindei: yes, I think so
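For context, a minimal sketch of the call pattern from the IRC excerpt above (the wrapper class, cache, and key names are invented; {{AdvancedCache.withFlags()}} and {{Flag.FORCE_WRITE_LOCK}} are the regular Infinispan API). This is the {{get()}} that currently surfaces an {{OutdatedTopologyException}} instead of being retried when the key is not owned locally and the topology has just changed:
{noformat}
// Hypothetical illustration of the scenario described above; names are made up.
import javax.transaction.TransactionManager;
import org.infinispan.AdvancedCache;
import org.infinispan.Cache;
import org.infinispan.context.Flag;

public class ForceWriteLockRead {
    public static Object readForUpdate(Cache<String, Object> cache, String key) throws Exception {
        AdvancedCache<String, Object> advanced = cache.getAdvancedCache();
        TransactionManager tm = advanced.getTransactionManager();
        tm.begin();
        Object value;
        try {
            // FORCE_WRITE_LOCK makes the get() acquire the write lock, which goes
            // to the primary owner; after a node shutdown this is where the
            // OutdatedTopologyException was observed instead of a transparent retry.
            value = advanced.withFlags(Flag.FORCE_WRITE_LOCK).get(key);
        } catch (Exception e) {
            tm.rollback();
            throw e;
        }
        tm.commit();
        return value;
    }
}
{noformat}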
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-5816) Implement an event logger for server
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-5816?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-5816:
----------------------------------
Status: Resolved (was: Pull Request Sent)
Fix Version/s: 8.2.0.Final
Resolution: Done
> Implement an event logger for server
> ------------------------------------
>
> Key: ISPN-5816
> URL: https://issues.jboss.org/browse/ISPN-5816
> Project: Infinispan
> Issue Type: Feature Request
> Components: Server
> Reporter: Tristan Tarrant
> Assignee: Tristan Tarrant
> Fix For: 8.2.0.Beta1, 8.2.0.Final
>
>
> Create an event logger so that the management interface can show events:
> - 7 day retention
> - Task execution
> - Cluster events (node join/leave, split/merge, rebalance start/stop, mass-indexer start/stop, server shutdown/start, remote site up/down)
> - Cache events (start, stop)
> - Security audit
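A purely hypothetical sketch, just to make the requested surface concrete; none of these type or method names come from Infinispan:
{noformat}
// Invented illustration of an event-logger API covering the categories listed above.
public interface EventLogger {

    // Event categories from the list above: task execution, cluster events,
    // cache events (start/stop), security audit.
    enum Category { TASK, CLUSTER, CACHE, SECURITY }

    // Record an event; an implementation would persist it so the management
    // interface can display it, with the requested 7 day retention.
    void log(Category category, String message);
}
{noformat}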
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-5955) FAIL_SILENTLY doesn't always prevent rollback with pessimistic transactions
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-5955?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-5955:
----------------------------------
Status: Resolved (was: Pull Request Sent)
Fix Version/s: 8.2.0.Final
8.1.1.Final
Resolution: Done
> FAIL_SILENTLY doesn't always prevent rollback with pessimistic transactions
> ---------------------------------------------------------------------------
>
> Key: ISPN-5955
> URL: https://issues.jboss.org/browse/ISPN-5955
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 8.0.1.Final, 8.1.0.Beta1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 8.2.0.Beta1, 8.2.0.Final, 8.1.1.Final
>
>
> The {{FAIL_SILENTLY}} flag should "prevent a failing operation from affecting any ongoing JTA transactions", but sometimes this doesn't work properly.
> {{FlagsReplicationTest}} uses both {{FAIL_SILENTLY}} and {{SKIP_LOCKING}} in the same transaction:
> # A {{lock(FAIL_SILENTLY)(key)}} operation fails.
> Both on the primary owner and on the originator, {{PessimisticLockingInterceptor}} sends a {{TxCompletionNotificationCommand}} to all the nodes affected by the tx so far, to release the locks. This marks the transaction as completed, but doesn't mark it as rolled back.
> # A {{remove(SKIP_LOCKING)(key)}} operation should then succeed.
> But the operation will invoke a {{ClusteredGetCommand(key, acquireRemoteLock=true, SKIP_LOCKING)}} on both owners, which will in turn invoke locally a {{LockControlCommand(key, SKIP_LOCKING)}}.
> At this point, {{TxInterceptor}} sees that the transaction is already completed, and invokes a {{RollbackCommand}} to mark it as rolled back, then remove it from the transaction table. The command still succeeds.
> # The test then tries to commit the transaction. Usually, the {{PrepareCommand}} doesn't find the remote transaction in the transaction table, and it succeeds.
> But sometimes, because the {{ClusteredGetCommand}} only uses the first response, the remote transaction is not removed from the transaction table by the time the {{PrepareCommand}} is executed on one of the owners.
> If that happens, the {{PrepareCommand}} executes with a remote transaction instance that's already marked for rollback, and fails when trying to put the key in the context.
> {noformat}
> 15:42:56,736 ERROR (remote-thread-NodeF-p1112-t5:GlobalTx:NodeG-6318:216) [InvocationContextInterceptor] ISPN000136: Error executing command LockControlCommand, writing keys []
> org.infinispan.util.concurrent.TimeoutException: ISPN000299: Unable to acquire lock after 0 milliseconds for key replication.FlagsReplicationTest and requestor GlobalTx:NodeG-6318:216. Lock is held by GlobalTx:NodeG-6318:215
> 15:42:56,802 TRACE (OOB-1,NodeF-64741:) [NonTotalOrderTxPerCacheInboundInvocationHandler] Calling perform() on ClusteredGetCommand{key=replication.FlagsReplicationTest, flags=[SKIP_LOCKING]}
> 15:42:56,803 TRACE (OOB-1,NodeF-64741:GlobalTx:NodeG-6318:216) [InvocationContextInterceptor] Invoked with command LockControlCommand{cache=dist, keys=[replication.FlagsReplicationTest], flags=[SKIP_LOCKING], unlock=false, gtx=GlobalTx:NodeG-6318:216} and InvocationContext [org.infinispan.context.impl.RemoteTxInvocationContext@75bf6da7]
> 15:42:56,822 TRACE (remote-thread-NodeF-p1112-t6:GlobalTx:NodeG-6318:216) [InvocationContextInterceptor] Invoked with command PrepareCommand {modifications=[RemoveCommand{key=replication.FlagsReplicationTest, value=null, flags=[SKIP_LOCKING], valueMatcher=MATCH_ALWAYS}], onePhaseCommit=true, gtx=GlobalTx:NodeG-6318:216, cacheName='dist', topologyId=6} and InvocationContext [org.infinispan.context.impl.RemoteTxInvocationContext@75bf6da7]
> 15:42:56,832 TRACE (OOB-1,NodeF-64741:GlobalTx:NodeG-6318:216) [TxInterceptor] Rolling back remote transaction GlobalTx:NodeG-6318:216 because either already completed (true) or originator no longer in the cluster (false).
> 15:42:56,864 TRACE (remote-thread-NodeF-p1112-t6:GlobalTx:NodeG-6318:216) [EntryFactoryImpl] Creating new entry for key replication.FlagsReplicationTest
> 15:42:56,864 TRACE (remote-thread-NodeF-p1112-t6:GlobalTx:NodeG-6318:216) [BaseSequentialInvocationContext] Interceptor class org.infinispan.interceptors.EntryWrappingInterceptor threw exception org.infinispan.transaction.xa.InvalidTransactionException: This remote transaction GlobalTx:NodeG-6318:216 is already rolled back
> 15:42:56,873 TRACE (remote-thread-NodeF-p1112-t6:GlobalTx:NodeG-6318:216) [TxInterceptor] invokeNextInterceptorAndVerifyTransaction :: originatorMissing=false, alreadyCompleted=true
> 15:42:56,873 TRACE (remote-thread-NodeF-p1112-t6:GlobalTx:NodeG-6318:216) [TxInterceptor] Rolling back remote transaction GlobalTx:NodeG-6318:216 because either already completed (true) or originator no longer in the cluster (false).
> {noformat}
> Possible actions:
> * Never try to release locks from a {{LockControlCommand}}, wait for the {{RollbackCommand}} instead. This could cause problems if the primary owner dies, the transaction tries to lock the same entry again, and the backup owner that became primary wrongly assumes that it is a proper owner.
> * Use a {{LockControlCommand(unlock=true)}} instead of a {{TxCompletionNotificationCommand}} to release the backup locks after an operation with {{FAIL_SILENTLY}} failed.
> * Don't set {{acquireRemoteLock=true}} when the {{SKIP_LOCKING}} flag has been set.
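For reference, a minimal sketch of the transaction sequence described above (this is not the actual {{FlagsReplicationTest}} code; the key name is invented and the zero lock-acquisition timeout is an assumption based on the "Unable to acquire lock after 0 milliseconds" line in the trace):
{noformat}
// Hypothetical reproduction of the sequence from the description; names are
// invented and ZERO_LOCK_ACQUISITION_TIMEOUT is an assumption, not taken from the test.
import javax.transaction.TransactionManager;
import org.infinispan.AdvancedCache;
import org.infinispan.context.Flag;

public class FailSilentlyScenario {
    public static void run(AdvancedCache<String, String> cache, String key) throws Exception {
        TransactionManager tm = cache.getTransactionManager();
        tm.begin();
        // 1. lock() with FAIL_SILENTLY fails (another tx holds the lock); the flag
        //    means the failure must not affect the ongoing transaction, so lock()
        //    simply returns false instead of rolling the tx back.
        boolean locked = cache
                .withFlags(Flag.FAIL_SILENTLY, Flag.ZERO_LOCK_ACQUISITION_TIMEOUT)
                .lock(key);
        assert !locked;
        // 2. remove() with SKIP_LOCKING should still succeed, but its remote
        //    ClusteredGetCommand triggers the premature rollback described above.
        cache.withFlags(Flag.SKIP_LOCKING).remove(key);
        // 3. The commit then fails intermittently on an owner where the remote tx
        //    is still in the transaction table and already marked as rolled back.
        tm.commit();
    }
}
{noformat}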
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)