[JBoss JIRA] (ISPN-5570) Cross-site: retry backup commands
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-5570?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-5570:
-----------------------------------
Fix Version/s: 9.4.0.Final
(was: 9.3.0.Final)
> Cross-site: retry backup commands
> ---------------------------------
>
> Key: ISPN-5570
> URL: https://issues.jboss.org/browse/ISPN-5570
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Cross-Site Replication
> Affects Versions: 7.2.3.Final
> Reporter: Dan Berindei
> Fix For: 9.4.0.Final
>
>
> There are 3 phases in a backup RPC:
> 1. Sender -> Local site master: failures here are caused by the local site master shutting down or crashing, or by a network split.
> 2. Local site master -> Remote site master:
> 2.1. Local site master is no longer a site master, e.g. because it's shutting down or because it's no longer coordinator after a merge.
> 2.2. Remote site master is no longer a site master.
> 2.3. Link between local site and remote site is down.
> 3. Remote site master -> Backup targets
> Replication failures in phase 3 are handled by retrying (except for TimeoutExceptions), because {{BaseBackupReceiver}} uses regular cache methods to perform the updates.
> But replication failures in phases 1 and 2 are not handled in any way, except for causing the remote site to be taken offline after a certain number of replication failures (if backup is synchronous). We should instead retry backup RPCs when we get a {{SuspectException}} or {{UnreachableException}}, and perhaps even when we get no response (2.2?), and only stop when the timeout expires or when the backup site is taken offline.
> Async backup probably needs retrying as well, and perhaps even a more sophisticated approach like I-RAC (ISPN-2634).
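> A minimal sketch of the proposed retry loop (illustration only, not Infinispan code): the backup RPC is passed in as a {{Runnable}}, the caller decides which failures are retriable (e.g. {{SuspectException}}, {{UnreachableException}}), and the check for the backup site having been taken offline is omitted for brevity.
> {code}
> import java.util.function.Predicate;
>
> public final class BackupRetrySketch {
>    public static void runWithRetry(Runnable backupRpc,
>                                    Predicate<RuntimeException> retriable,
>                                    long timeoutMillis,
>                                    long retryIntervalMillis) throws InterruptedException {
>       long deadline = System.currentTimeMillis() + timeoutMillis;
>       while (true) {
>          try {
>             backupRpc.run();   // phase 1/2 RPC towards the site master
>             return;            // success, stop retrying
>          } catch (RuntimeException e) {
>             // Give up on non-retriable failures, or once the replication timeout expires
>             if (!retriable.test(e) || System.currentTimeMillis() >= deadline) {
>                throw e;
>             }
>             Thread.sleep(retryIntervalMillis); // back off before the next attempt
>          }
>       }
>    }
> }
> {code}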
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-5557) Core threading redesign
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-5557?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-5557:
-----------------------------------
Fix Version/s: 9.4.0.Final
(was: 9.3.0.Final)
> Core threading redesign
> -----------------------
>
> Key: ISPN-5557
> URL: https://issues.jboss.org/browse/ISPN-5557
> Project: Infinispan
> Issue Type: Task
> Components: Core
> Affects Versions: 7.2.2.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Critical
> Fix For: 9.4.0.Final
>
>
> Infinispan needs a lot of threads because everything is synchronous: locking, remote command invocations, and cache store writes. This causes various issues, from general context-switching overhead to thread pools filling up and causing deadlocks.
> We should redesign the core so that most blocking happens on the application threads, and the number of internal threads is kept to a minimum.
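> The redesign itself is internal, but the intended shape is already visible at the API level in the async cache operations: internal threads hand back futures and the application thread composes or blocks by choice. A minimal embedded sketch (cache name and values are arbitrary, and it assumes the {{CompletableFuture}}-returning async methods of recent versions):
> {code}
> import java.util.concurrent.CompletableFuture;
> import org.infinispan.Cache;
> import org.infinispan.configuration.cache.ConfigurationBuilder;
> import org.infinispan.manager.DefaultCacheManager;
>
> public class AsyncShapeExample {
>    public static void main(String[] args) throws Exception {
>       try (DefaultCacheManager cm = new DefaultCacheManager()) {
>          cm.defineConfiguration("example", new ConfigurationBuilder().build());
>          Cache<String, String> cache = cm.getCache("example");
>          // The continuation runs when the operation completes; no internal
>          // thread is parked on behalf of the caller.
>          CompletableFuture<String> put = cache.putAsync("k", "v");
>          put.thenCompose(prev -> cache.getAsync("k"))
>             .thenAccept(v -> System.out.println("value = " + v))
>             .join(); // only the application thread blocks, and only by choice
>       }
>    }
> }
> {code}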
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-5530) AtomicObjectFactoryTest.distributedCacheTest random failures
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-5530?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-5530:
-----------------------------------
Fix Version/s: 9.4.0.Final
(was: 9.3.0.Final)
> AtomicObjectFactoryTest.distributedCacheTest random failures
> ------------------------------------------------------------
>
> Key: ISPN-5530
> URL: https://issues.jboss.org/browse/ISPN-5530
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 7.2.2.Final, 8.0.0.Alpha1
> Reporter: Dan Berindei
> Assignee: Tristan Tarrant
> Priority: Blocker
> Labels: testsuite_stability
> Fix For: 9.4.0.Final
>
>
> {noformat}
> java.lang.AssertionError: obtained = 999; espected = 1000 expected:<1000> but was:<999>
> at org.testng.AssertJUnit.fail(AssertJUnit.java:59)
> at org.testng.AssertJUnit.failNotEquals(AssertJUnit.java:364)
> at org.testng.AssertJUnit.assertEquals(AssertJUnit.java:80)
> at org.infinispan.atomic.AtomicObjectFactoryTest.distributedCacheTest(AtomicObjectFactoryTest.java:114)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:483)
> at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
> at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
> at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> at org.testng.TestRunner.privateRun(TestRunner.java:767)
> at org.testng.TestRunner.run(TestRunner.java:617)
> at org.testng.SuiteRunner.runTest(SuiteRunner.java:348)
> at org.testng.SuiteRunner.access$000(SuiteRunner.java:38)
> at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:382)
> at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-5515) Purge store if there is another node already running
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-5515?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-5515:
-----------------------------------
Fix Version/s: 9.4.0.Final
(was: 9.3.0.Final)
> Purge store if there is another node already running
> ----------------------------------------------------
>
> Key: ISPN-5515
> URL: https://issues.jboss.org/browse/ISPN-5515
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core, Loaders and Stores
> Affects Versions: 7.2.2.Final, 8.0.0.Alpha1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 9.4.0.Final
>
>
> Preloading happens before communicating with other nodes that might already have the cache running. When joining the existing members, the cache then waits to receive the first consistent hash (CH) in which it is a member, and then deletes only the entries in the segments that it does not own in that CH.
> The intention was to remove as little as possible of the existing data, e.g. in case the first node to start up is not the one that was stopped last. But the preloaded entries are not replicated to the other nodes, so this can lead to inconsistencies.
> It would be better to delay preloading until we know we are the first node to start up, but failing that we could clear the data container and the store before receiving the initial state.
> Note that this will only allow preloading data from one node. Restoring data from more nodes is harder to do, and we will implement it as part of graceful restart.
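> For reference, this behaviour is controlled statically per store today; the proposal would make the purge decision dynamic, based on whether another node already runs the cache. A configuration sketch (store type and location are arbitrary examples):
> {code}
> import org.infinispan.configuration.cache.Configuration;
> import org.infinispan.configuration.cache.ConfigurationBuilder;
>
> public class PreloadConfigExample {
>    public static Configuration build() {
>       ConfigurationBuilder builder = new ConfigurationBuilder();
>       builder.persistence()
>              .addSingleFileStore()
>                 .location("/var/data/ispn")  // arbitrary example location
>                 .preload(true)               // preload runs before talking to other nodes
>                 .purgeOnStartup(false);      // static flag today; the proposal would purge
>                                              // automatically when another node is already up
>       return builder.build();
>    }
> }
> {code}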
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-5513) State Transfer can miss entries that are concurrently activated
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-5513?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-5513:
-----------------------------------
Fix Version/s: 9.4.0.Final
(was: 9.3.0.Final)
> State Transfer can miss entries that are concurrently activated
> ---------------------------------------------------------------
>
> Key: ISPN-5513
> URL: https://issues.jboss.org/browse/ISPN-5513
> Project: Infinispan
> Issue Type: Bug
> Components: State Transfer
> Affects Versions: 8.0.0.Alpha1
> Reporter: William Burns
> Fix For: 9.4.0.Final
>
>
> Currently the OutboundTransferTask iterates over the data container and then processes the entries in the store. However, if an entry is activated during or after the data container iteration, it may not be seen in the container, and by the time the store is processed the activation has already removed it from the store, so the entry is missed entirely.
> EntryRetriever had the same issue; the fix was to register a cache listener for activations and replay the activated entries after finishing with the store.
> This can also produce duplicate values, but replacing an entry with the exact same value is fine, and if a non-state-transfer write occurs the transferred state is ignored anyway.
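> A rough sketch of that listener-based approach (the wiring into the transfer task is omitted; the listener would be registered with {{cache.addListener(...)}} before the iteration starts and removed afterwards):
> {code}
> import java.util.Set;
> import java.util.concurrent.ConcurrentHashMap;
> import org.infinispan.notifications.Listener;
> import org.infinispan.notifications.cachelistener.annotation.CacheEntryActivated;
> import org.infinispan.notifications.cachelistener.event.CacheEntryActivatedEvent;
>
> @Listener
> public class ActivationTracker {
>    // Keys activated while the data container and store are being iterated
>    private final Set<Object> activatedKeys = ConcurrentHashMap.newKeySet();
>
>    @CacheEntryActivated
>    public void onActivation(CacheEntryActivatedEvent<Object, Object> event) {
>       activatedKeys.add(event.getKey());
>    }
>
>    public Set<Object> activatedKeys() {
>       return activatedKeys; // replayed after the store pass finishes
>    }
> }
> {code}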
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-5510) Provide better Hot Rod client socket timeout and retry defaults
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-5510?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-5510:
-----------------------------------
Fix Version/s: 9.4.0.Final
(was: 9.3.0.Final)
> Provide better Hot Rod client socket timeout and retry defaults
> ---------------------------------------------------------------
>
> Key: ISPN-5510
> URL: https://issues.jboss.org/browse/ISPN-5510
> Project: Infinispan
> Issue Type: Enhancement
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Fix For: 9.4.0.Final
>
>
> The current defaults are:
> * Socket timeout = 60 seconds
> * Max retries = 10
> As a result of these defaults, if the server hangs while processing an operation, it takes 10 minutes (60-second timeout x 10 retries) before the operation finally returns an exception to the client, which is far too long.
> So these default values should change to be more aggressive: a 30-second socket timeout and 3 max retries.
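> With the programmatic client configuration the proposed defaults would correspond to the following (the server address is an arbitrary example):
> {code}
> import org.infinispan.client.hotrod.RemoteCacheManager;
> import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
>
> public class HotRodDefaultsExample {
>    public static RemoteCacheManager connect() {
>       ConfigurationBuilder builder = new ConfigurationBuilder();
>       builder.addServer().host("127.0.0.1").port(11222); // example server address
>       builder.socketTimeout(30_000); // proposed default: 30 seconds instead of 60
>       builder.maxRetries(3);         // proposed default: 3 instead of 10
>       return new RemoteCacheManager(builder.build());
>    }
> }
> {code}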
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-5506) Dist/ReplEmbeddedRestHotRodTest failing randomly with "java.net.SocketException: Socket closed"
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-5506?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-5506:
-----------------------------------
Fix Version/s: 9.4.0.Final
(was: 9.3.0.Final)
> Dist/ReplEmbeddedRestHotRodTest failing randomly with "java.net.SocketException: Socket closed"
> -----------------------------------------------------------------------------------------------
>
> Key: ISPN-5506
> URL: https://issues.jboss.org/browse/ISPN-5506
> Project: Infinispan
> Issue Type: Bug
> Components: Remote Protocols
> Affects Versions: 7.2.1.Final, 8.0.0.Alpha1
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Fix For: 9.4.0.Final
>
> Attachments: infinispan_5506.tgz
>
>
> {code}
> testEmbeddedPutRestHotRodGet(org.infinispan.it.compatibility.DistEmbeddedRestHotRodTest) Time elapsed: 0.002 sec <<< FAILURE!
> java.net.SocketException: Socket closed
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
> at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
> at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:589)
> at java.net.Socket.connect(Socket.java:538)
> at java.net.Socket.<init>(Socket.java:434)
> at java.net.Socket.<init>(Socket.java:286)
> at org.apache.commons.httpclient.protocol.DefaultProtocolSocketFactory.createSocket(DefaultProtocolSocketFactory.java:80)
> at org.apache.commons.httpclient.protocol.DefaultProtocolSocketFactory.createSocket(DefaultProtocolSocketFactory.java:122)
> at org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:707)
> at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:387)
> at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
> at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
> at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
> at org.infinispan.it.compatibility.ReplEmbeddedRestHotRodTest.testEmbeddedPutRestHotRodGet(ReplEmbeddedRestHotRodTest.java:80)
> {code}
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-5498) ClientClusterFailoverEventsTest.testEventReplayWithAndWithoutStateAfterFailover random failures
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-5498?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-5498:
-----------------------------------
Fix Version/s: 9.4.0.Final
(was: 9.3.0.Final)
> ClientClusterFailoverEventsTest.testEventReplayWithAndWithoutStateAfterFailover random failures
> -----------------------------------------------------------------------------------------------
>
> Key: ISPN-5498
> URL: https://issues.jboss.org/browse/ISPN-5498
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Server
> Affects Versions: 7.2.1.Final
> Reporter: Dan Berindei
> Priority: Blocker
> Labels: testsuite_stability
> Fix For: 9.4.0.Final
>
>
> It appears the cluster event notifier tries to send some events to a node that is no longer a cluster member, and the exception is propagated all the way to the Hot Rod client:
> {noformat}
> 12:04:03,088 ERROR (HotRodServerWorker-78-1:) [InvocationContextInterceptor] ISPN000136: Execution errorjava.lang.IllegalArgumentException: Target node ClientClusterFailoverEventsTest-NodeA-33510 is not a cluster member, members are [ClientClusterFailoverEventsTest-NodeB-14182]
> at org.infinispan.distexec.DefaultExecutorService.submit(DefaultExecutorService.java:414)
> at org.infinispan.distexec.DefaultExecutorService.submit(DefaultExecutorService.java:403)
> at org.infinispan.distexec.DistributedExecutionCompletionService.submit(DistributedExecutionCompletionService.java:178)
> at org.infinispan.notifications.cachelistener.cluster.impl.BatchingClusterEventManagerImpl$UnicastEventContext.sendToTargets(BatchingClusterEventManagerImpl.java:97)
> at org.infinispan.notifications.cachelistener.cluster.impl.BatchingClusterEventManagerImpl.sendEvents(BatchingClusterEventManagerImpl.java:50)
> at org.infinispan.notifications.cachelistener.CacheNotifierImpl.notifyCacheEntryCreated(CacheNotifierImpl.java:292)
> at org.infinispan.interceptors.locking.ClusteringDependentLogic$AbstractClusteringDependentLogic.notifyCommitEntry(ClusteringDependentLogic.java:138)
> at org.infinispan.interceptors.locking.ClusteringDependentLogic$DistributionLogic.commitSingleEntry(ClusteringDependentLogic.java:493)
> at org.infinispan.interceptors.locking.ClusteringDependentLogic$AbstractClusteringDependentLogic.commitEntry(ClusteringDependentLogic.java:108)
> at org.infinispan.interceptors.EntryWrappingInterceptor.commitContextEntry(EntryWrappingInterceptor.java:367)
> at org.infinispan.interceptors.EntryWrappingInterceptor.commitEntryIfNeeded(EntryWrappingInterceptor.java:545)
> at org.infinispan.interceptors.EntryWrappingInterceptor.commitContextEntries(EntryWrappingInterceptor.java:344)
> at org.infinispan.interceptors.EntryWrappingInterceptor.invokeNextAndApplyChanges(EntryWrappingInterceptor.java:418)
> at org.infinispan.interceptors.EntryWrappingInterceptor.setSkipRemoteGetsAndInvokeNextForDataCommand(EntryWrappingInterceptor.java:449)
> at org.infinispan.interceptors.EntryWrappingInterceptor.visitPutKeyValueCommand(EntryWrappingInterceptor.java:195)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)
> at org.infinispan.interceptors.locking.AbstractLockingInterceptor.visitNonTxDataWriteCommand(AbstractLockingInterceptor.java:88)
> at org.infinispan.interceptors.locking.NonTransactionalLockingInterceptor.visitDataWriteCommand(NonTransactionalLockingInterceptor.java:40)
> at org.infinispan.interceptors.locking.AbstractLockingInterceptor.visitPutKeyValueCommand(AbstractLockingInterceptor.java:55)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)
> at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:111)
> at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:44)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)
> at org.infinispan.statetransfer.StateTransferInterceptor.handleNonTxWriteCommand(StateTransferInterceptor.java:324)
> at org.infinispan.statetransfer.StateTransferInterceptor.handleWriteCommand(StateTransferInterceptor.java:256)
> at org.infinispan.statetransfer.StateTransferInterceptor.visitPutKeyValueCommand(StateTransferInterceptor.java:115)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)
> at org.infinispan.interceptors.CacheMgmtInterceptor.updateStoreStatistics(CacheMgmtInterceptor.java:191)
> at org.infinispan.interceptors.CacheMgmtInterceptor.visitPutKeyValueCommand(CacheMgmtInterceptor.java:177)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71)
> at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)
> at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:102)
> at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:71)
> at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:44)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71)
> at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:336)
> at org.infinispan.cache.impl.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:1617)
> at org.infinispan.cache.impl.CacheImpl.putInternal(CacheImpl.java:1097)
> at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:1089)
> at org.infinispan.cache.impl.DecoratedCache.put(DecoratedCache.java:522)
> at org.infinispan.server.hotrod.CacheDecodeContext.put(CacheDecodeContext.scala:216)
> at org.infinispan.server.hotrod.HotRodDecoder.org$infinispan$server$hotrod$HotRodDecoder$$decodeValue(HotRodDecoder.scala:132)
> at org.infinispan.server.hotrod.HotRodDecoder$$anonfun$decode$1.apply$mcV$sp(HotRodDecoder.scala:50)
> at org.infinispan.server.hotrod.HotRodDecoder.wrapSecurity(HotRodDecoder.scala:208)
> at org.infinispan.server.hotrod.HotRodDecoder.decode(HotRodDecoder.scala:45)
> org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for message id[922] returned server error (status=0x85): java.lang.IllegalArgumentException: Target node ClientClusterFailoverEventsTest-NodeA-33510 is not a cluster member, members are [ClientClusterFailoverEventsTest-NodeB-14182]
> at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:336)
> at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:126)
> at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:112)
> at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56)
> at org.infinispan.client.hotrod.impl.operations.AbstractKeyValueOperation.sendPutOperation(AbstractKeyValueOperation.java:57)
> at org.infinispan.client.hotrod.impl.operations.PutOperation.executeOperation(PutOperation.java:31)
> at org.infinispan.client.hotrod.impl.operations.PutOperation.executeOperation(PutOperation.java:20)
> at org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation.execute(RetryOnFailureOperation.java:52)
> at org.infinispan.client.hotrod.impl.RemoteCacheImpl.put(RemoteCacheImpl.java:247)
> at org.infinispan.client.hotrod.impl.RemoteCacheSupport.put(RemoteCacheSupport.java:79)
> at org.infinispan.client.hotrod.event.ClientClusterFailoverEventsTest.testEventReplayWithAndWithoutStateAfterFailover(ClientClusterFailoverEventsTest.java:59)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
> at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
> at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> at org.testng.TestRunner.privateRun(TestRunner.java:767)
> at org.testng.TestRunner.run(TestRunner.java:617)
> at org.testng.SuiteRunner.runTest(SuiteRunner.java:348)
> at org.testng.SuiteRunner.access$000(SuiteRunner.java:38)
> at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:382)
> at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-5470) Remote-executor threads should not block to acquire locks
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-5470?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-5470:
-----------------------------------
Fix Version/s: 9.4.0.Final
(was: 9.3.0.Final)
> Remote-executor threads should not block to acquire locks
> ---------------------------------------------------------
>
> Key: ISPN-5470
> URL: https://issues.jboss.org/browse/ISPN-5470
> Project: Infinispan
> Issue Type: Task
> Components: Core
> Affects Versions: 7.2.1.Final
> Reporter: Dan Berindei
> Fix For: 9.4.0.Final
>
>
> Currently, enabling the queue on the remote-executor thread pool can cause deadlocks, because a CommitCommand/1PCPrepareCommand could end up in the queue while a remote-executor thread is blocked waiting to acquire the very lock that the queued commit would release.
> If trying to acquire a lock freed the thread until the lock on the key becomes available, we could enable the queue on the remote-executor/OOB thread pools, and we would need far fewer threads for the same load.
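> A hypothetical sketch of that non-blocking shape (none of these names are Infinispan API): lock acquisition returns a future, the command body runs as a continuation, and no remote-executor thread is parked while waiting.
> {code}
> import java.util.concurrent.CompletableFuture;
> import java.util.concurrent.ConcurrentHashMap;
>
> public final class AsyncLockSketch {
>    // Tail of the per-key waiter chain; map clean-up is omitted for brevity.
>    private final ConcurrentHashMap<Object, CompletableFuture<Void>> tails = new ConcurrentHashMap<>();
>
>    /**
>     * Returns a future that completes when the caller holds the lock for the key.
>     * The caller signals release by completing the {@code release} future it passed in.
>     */
>    public CompletableFuture<Void> lockAsync(Object key, CompletableFuture<Void> release) {
>       CompletableFuture<Void> previous = tails.put(key, release);
>       return previous == null ? CompletableFuture.completedFuture(null) : previous;
>    }
> }
>
> // Usage from a command handler: no thread blocks between queuing and running.
> // CompletableFuture<Void> release = new CompletableFuture<>();
> // locks.lockAsync(key, release)
> //      .thenRun(() -> { /* apply the write */ release.complete(null); });
> {code}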
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)