[JBoss JIRA] (ISPN-10366) ScatteredStateConsumerImpl sets segment state to OWNED before applying values
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-10366?page=com.atlassian.jira.plugin... ]
Tristan Tarrant updated ISPN-10366:
-----------------------------------
Fix Version/s: 10.0.1.Final
(was: 10.0.0.Final)
> ScatteredStateConsumerImpl sets segment state to OWNED before applying values
> -----------------------------------------------------------------------------
>
> Key: ISPN-10366
> URL: https://issues.jboss.org/browse/ISPN-10366
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Test Suite - Core
> Affects Versions: 10.0.0.Beta3, 9.4.15.Final
> Reporter: Dan Berindei
> Assignee: Radim Vansa
> Priority: Major
> Labels: testsuite_stability
> Fix For: 10.0.1.Final
>
> Attachments: ISPN-10363_LazyInitializingExecutorService_94x_20190627-2010_PrefetchTest-infinispan-core.log.gz
>
>
> {{ScatteredStateConsumerImpl}} uses {{InboundTransferTask}} only to request keys; after it has received all the keys of a segment, it changes the segment state to {{VALUE_TRANSFER}} and starts an asynchronous request to fetch the values and replace the {{RemoteMetadata}} entries with real entries.
> {{ScatteredStateConsumerImpl.chunkCounter}} is supposed to delay the end of state transfer and the segment state change to {{OWNED}} until those values have been applied, but on rare occasions this doesn't happen.
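> The protection can be pictured with a minimal sketch (hypothetical names, not Infinispan's actual code): a counter is incremented before each asynchronous chunk of work and decremented when it completes, and only the drop to zero may trigger the {{OWNED}} transition. If the counter can momentarily reach zero while another phase is still pending, the transition fires too early:
> {code:java}
> // Minimal sketch of the chunk-counter idea (hypothetical, not Infinispan's code).
> import java.util.concurrent.atomic.AtomicInteger;
>
> class ChunkCounter {
>    private final AtomicInteger counter = new AtomicInteger();
>    private final Runnable onAllChunksDone; // e.g. mark the segment OWNED
>
>    ChunkCounter(Runnable onAllChunksDone) {
>       this.onAllChunksDone = onAllChunksDone;
>    }
>
>    void begin() {                 // call BEFORE starting a chunk of async work
>       counter.incrementAndGet();
>    }
>
>    void end() {                   // call when the chunk completes
>       if (counter.decrementAndGet() == 0) {
>          // premature if a begin() for the next phase has not happened yet
>          onAllChunksDone.run();
>       }
>    }
> }
> {code}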
> This happened in {{PrefetchTest.testPrefetch12}} while running the test suite with {{taskset -c 1-2}}:
> {noformat}
> 21:54:43,304 TRACE (transport-thread-Test-NodeC-p69907-t5:[Topology-___defaultcache]) [StateConsumerImpl] Received new topology for cache ___defaultcache, isRebalance = true, isMember = true, topology = CacheTopology{id=9, phase=TRANSITORY, rebalanceId=5, currentCH=PartitionerConsistentHash:ScatteredConsistentHash{ns=1, rebalanced=false, owners = (2)[Test-NodeA-39104: 1, Test-NodeC-3746: 0]}, pendingCH=PartitionerConsistentHash:ScatteredConsistentHash{ns=1, rebalanced=true, owners = (2)[Test-NodeA-39104: 0, Test-NodeC-3746: 1]}, unionCH=PartitionerConsistentHash:ScatteredConsistentHash{ns=1, rebalanced=false, owners = (2)[Test-NodeA-39104: 0, Test-NodeC-3746: 1]}, actualMembers=[Test-NodeA-39104, Test-NodeC-3746], persistentUUIDs=[f58e0a9a-dd4e-429a-8464-da64bf001d4e, 1471096f-c59a-4dc9-8f4d-31fbf399a2aa]}
> 21:54:43,305 TRACE (stateTransferExecutor-thread-Test-NodeC-p69908-t6:[StateRequest-___defaultcache]) [ScatteredStateConsumerImpl] Requesting keys for segments {0} from Test-NodeA-39104
> 21:54:43,313 TRACE (transport-thread-Test-NodeC-p69907-t5:[Topology-___defaultcache]) [StateConsumerImpl] Topology update processed, stateTransferTopologyId = 9, startRebalance = true, pending CH = PartitionerConsistentHash:ScatteredConsistentHash{ns=1, rebalanced=true, owners = (2)[Test-NodeA-39104: 0, Test-NodeC-3746: 1]}
> 21:54:43,313 TRACE (transport-thread-Test-NodeC-p69907-t5:[Topology-___defaultcache]) [StateTransferLockImpl] Signalling transaction data received for topology 9
> 21:54:43,313 TRACE (remote-thread-Test-NodeC-p69905-t2:[]) [TrianglePerCacheInboundInvocationHandler] Calling perform() on StateResponseCommand{cache=___defaultcache, pushTransfer=false, stateChunks=[StateChunk{segmentId=0, cacheEntries=1, isLastChunk=true}], origin=Test-NodeA-39104, topologyId=9, applyState=true}
> 21:54:43,313 TRACE (stateTransferExecutor-thread-Test-NodeC-p69908-t6:[]) [StateConsumerImpl] Applying new state chunk for segment 0 of cache ___defaultcache from node Test-NodeA-39104: received 1 cache entries
> 21:54:43,314 TRACE (stateTransferExecutor-thread-Test-NodeC-p69908-t6:[]) [ScatteredVersionManagerImpl] Finished transfer for segment 0 = KEY_TRANSFER -> VALUE_TRANSFER
> 21:54:43,314 TRACE (stateTransferExecutor-thread-Test-NodeC-p69908-t6:[]) [ScatteredVersionManagerImpl] Node Test-NodeC-3746, segment 0 has all keys in, expects value transfer
> 21:54:43,314 TRACE (stateTransferExecutor-thread-Test-NodeC-p69908-t6:[]) [ScatteredStateConsumerImpl] Requesting values from segments {0}, for in-memory keys
> 21:54:43,314 TRACE (stateTransferExecutor-thread-Test-NodeC-p69908-t6:[]) [ScatteredStateConsumerImpl] Retrieving values, chunk counter is 1
> 21:54:43,314 TRACE (stateTransferExecutor-thread-Test-NodeC-p69908-t6:[]) [JGroupsTransport] Test-NodeC-3746 sending request 11 to Test-NodeA-39104: ClusteredGetAllCommand{keys=[key], flags=[SKIP_OWNERSHIP_CHECK], topologyId=9}
> 21:54:43,314 TRACE (stateTransferExecutor-thread-Test-NodeC-p69908-t6:[]) [ScatteredStateConsumerImpl] Invalidating versions on Test-NodeC-3746, chunk counter incremented to 2
> 21:54:43,314 TRACE (stateTransferExecutor-thread-Test-NodeC-p69908-t6:[]) [ScatteredStateConsumerImpl] Versions invalidated on Test-NodeC-3746, chunk counter decremented to 1
> 21:54:43,314 TRACE (stateTransferExecutor-thread-Test-NodeC-p69908-t6:[]) [StateConsumerImpl] Removing inbound transfers from node {0} for segments Test-NodeA-39104
> 21:54:43,314 TRACE (stateTransferExecutor-thread-Test-NodeC-p69908-t6:[]) [ScatteredStateConsumerImpl] Inbound transfer removed, chunk counter is 1
> 21:54:43,314 TRACE (stateTransferExecutor-thread-Test-NodeC-p69908-t6:[]) [StateConsumerImpl] Latch 0
> 21:54:43,315 TRACE (jgroups-7,Test-NodeC-3746:[]) [JGroupsTransport] Test-NodeC-3746 received response for request 11 from Test-NodeA-39104: SuccessfulResponse([MetadataImmortalCacheValue {value=v0, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=SimpleClusteredVersion{topologyId=7, version=1}}}])
> 21:54:43,316 TRACE (jgroups-7,Test-NodeC-3746:[]) [BlockingInterceptor] Command blocking before completion of PutKeyValueCommand{key=key, value=v0, flags=[CACHE_MODE_LOCAL, SKIP_REMOTE_LOOKUP, PUT_FOR_STATE_TRANSFER, SKIP_SHARED_CACHE_STORE, SKIP_OWNERSHIP_CHECK, IGNORE_RETURN_VALUES, SKIP_XSITE_BACKUP], commandInvocationId=CommandInvocation:Test-NodeC-3746:121294, putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=InternalMetadataImpl{actual=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=SimpleClusteredVersion{topologyId=7, version=1}}, created=-1, lastUsed=-1}, successful=true, topologyId=-1}
> 21:54:43,316 TRACE (remote-thread-Test-NodeC-p69905-t2:[___defaultcache]) [StateConsumerImpl] After applying the received state the data container of cache ___defaultcache has 1 keys
> 21:54:43,316 TRACE (remote-thread-Test-NodeC-p69905-t2:[___defaultcache]) [StateConsumerImpl] Segments not received yet for cache ___defaultcache: {}
> 21:54:43,316 DEBUG (transport-thread-Test-NodeC-p69907-t5:[Topology-___defaultcache]) [StateConsumerImpl] Finished receiving of segments for cache ___defaultcache for topology 9.
> 21:54:43,316 DEBUG (transport-thread-Test-NodeC-p69907-t5:[Topology-___defaultcache]) [ScatteredVersionManagerImpl] Node Test-NodeC-3746 received values for all segments in topology 9
> 21:54:43,316 TRACE (transport-thread-Test-NodeC-p69907-t5:[Topology-___defaultcache]) [StateConsumerImpl] Stop keeping track of changed keys for state transfer in topology 9
> {noformat}
> The test then starts a put operation and expects it to prefetch the previous value, but because the segment is {{OWNED}}, the {{RemoteMetadata}} is ignored:
> {noformat}
> 21:54:43,316 TRACE (ForkThread-1,Test:[]) [InvocationContextInterceptor] Invoked with command PutKeyValueCommand{key=key, value=v1, flags=[], commandInvocationId=CommandInvocation:Test-NodeC-3746:121295, putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=null}, successful=true, topologyId=-1} and InvocationContext [SingleKeyNonTxInvocationContext{isLocked=false, key=null, cacheEntry=null, origin=null, lockOwner=CommandInvocation:Test-NodeC-3746:121295}]
> 21:54:43,316 TRACE (ForkThread-1,Test:[]) [EntryFactoryImpl] Retrieved from container MetadataImmortalCacheEntry{key=key, value=null, metadata=RemoteMetadata{address=Test-NodeA-39104, version=1}}
> 21:54:43,316 TRACE (ForkThread-1,Test:[]) [ScatteredDistributionInterceptor] Committing entry RepeatableReadEntry(108d175b){key=key, value=v1, isCreated=false, isChanged=true, isRemoved=false, isExpired=false, skipLookup=true, metadata=EmbeddedExpirableMetadata{lifespan=-1, maxIdle=-1, version=SimpleClusteredVersion{topologyId=9, version=1}}}, replaced MetadataImmortalCacheEntry{key=key, value=null, metadata=RemoteMetadata{address=Test-NodeA-39104, version=1}}
> 21:54:53,316 ERROR (testng-Test:[]) [TestSuiteProgress] Test failed: org.infinispan.scattered.statetransfer.PrefetchTest.testPrefetch12
> org.infinispan.test.TestException: java.util.concurrent.TimeoutException
> at org.infinispan.util.ControlledRpcManager.uncheckedGet(ControlledRpcManager.java:259) ~[test-classes/:?]
> at org.infinispan.util.ControlledRpcManager.expectCommand(ControlledRpcManager.java:124) ~[test-classes/:?]
> at org.infinispan.scattered.statetransfer.PrefetchTest.testPrefetch(PrefetchTest.java:110) ~[test-classes/:?]
> at org.infinispan.scattered.statetransfer.PrefetchTest.testPrefetch12(PrefetchTest.java:67) ~[test-classes/:?]
> {noformat}
> On a related note, {{StateConsumerImpl.applyState(pushTransfer=true)}} initializes a {{CountDownLatch(stateChunks.size())}}, but doesn't actually count down if {{stateChunk.getCacheEntries() == null}}, potentially hanging state transfer until it times out.
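> A minimal sketch of that latch pattern (hypothetical names; only the shape described above is reproduced):
> {code:java}
> // Sketch of the latch problem: the latch is sized for every chunk, but
> // chunks with null entries never count down, so await() ends via timeout.
> import java.util.List;
> import java.util.concurrent.CompletableFuture;
> import java.util.concurrent.CountDownLatch;
> import java.util.concurrent.TimeUnit;
>
> class ApplyStateSketch {
>    interface StateChunk {
>       List<Object> getCacheEntries();
>    }
>
>    void applyState(List<StateChunk> stateChunks) throws InterruptedException {
>       CountDownLatch latch = new CountDownLatch(stateChunks.size());
>       for (StateChunk chunk : stateChunks) {
>          if (chunk.getCacheEntries() == null) {
>             // BUG: this chunk never counts down; counting down here
>             // instead would be one possible fix
>             continue;
>          }
>          CompletableFuture.runAsync(() -> {
>             // ... apply the entries, then:
>             latch.countDown();
>          });
>       }
>       latch.await(60, TimeUnit.SECONDS); // hangs until timeout if any chunk was skipped
>    }
> }
> {code}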
--
[JBoss JIRA] (ISPN-10367) Possible loss of (pessimistic) lock if a transaction will reach timeout and/or is removed
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-10367?page=com.atlassian.jira.plugin... ]
Tristan Tarrant updated ISPN-10367:
-----------------------------------
Fix Version/s: 10.0.1.Final
(was: 10.0.0.Final)
> Possible loss of (pessimistic) lock if a transaction will reach timeout and/or is removed
> -----------------------------------------------------------------------------------------
>
> Key: ISPN-10367
> URL: https://issues.jboss.org/browse/ISPN-10367
> Project: Infinispan
> Issue Type: Bug
> Components: Transactions
> Affects Versions: 10.0.0.Beta3, 9.4.15.Final
> Environment: Infinispan with pessimistic locking enabled
> Reporter: Wolf-Dieter Fink
> Assignee: Pedro Ruivo
> Priority: Critical
> Fix For: 10.0.1.Final
>
> Attachments: StressApp.zip
>
>
> If entries are locked, no matter whether by the {{FORCE_WRITE_LOCK}} flag or by {{getAdvancedCache().lock(key)}}, and the lock is held longer than the current Tx timeout setting ({{.completedTxTimeout(...)}}), the transaction might be removed if
> - the node is blocked and expelled from the cluster (and joins back later), or
> - the thread processing the lock takes longer than the Tx-timeout setting.
> Either case forces the Tx to be removed and the lock to be freed.
> An indicator is the exception below, which is shown when the Tx times out; it is not a (remote) access timeout.
> If the originator comes back after this, the (still ongoing) Tx is treated as new and, by accident, it continues without the lock.
> This can cause unexpected inconsistency!
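> For reference, the two locking styles named above look roughly like this (a minimal sketch; the cache configuration and key are made up, and the stack trace follows after it):
> {code:java}
> // Sketch only: requires a cache configured as TRANSACTIONAL with
> // LockingMode.PESSIMISTIC; "key" is a made-up example.
> import org.infinispan.AdvancedCache;
> import org.infinispan.Cache;
> import org.infinispan.context.Flag;
>
> public class PessimisticLockSketch {
>    static void lockBothWays(Cache<String, String> cache) throws Exception {
>       AdvancedCache<String, String> advanced = cache.getAdvancedCache();
>       advanced.getTransactionManager().begin();
>       try {
>          advanced.withFlags(Flag.FORCE_WRITE_LOCK).get("key"); // the read acquires the write lock
>          advanced.lock("key");                                 // explicit lock call
>          // ... if the work here outlives completedTxTimeout, the owner may
>          // reap the Tx and free the lock while this originator carries on
>          advanced.getTransactionManager().commit();
>       } catch (Exception e) {
>          advanced.getTransactionManager().rollback();
>          throw e;
>       }
>    }
> }
> {code}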
> ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (EJB timer - 13) ISPN000136: Error executing command LockControlCommand, writing keys []: org.infinispan.util.concurrent.TimeoutException: Replication timeout for lt-33828
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.checkRsp(JGroupsTransport.java:803)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$invokeRemotelyAsync$1(JGroupsTransport.java:641)
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
> at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
> at org.infinispan.remoting.transport.jgroups.RspListFuture.call(RspListFuture.java:47)
> at org.infinispan.remoting.transport.jgroups.RspListFuture.call(RspListFuture.java:16)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> ERROR [org.jboss.as.ejb3.timer] (EJB timer - 13) WFLYEJB0020: Error invoking timeout for timer: [id=8a53d2c3-190d-4c74-9327-8e7554e1df2c timedObjectId=embeddedStressTest-ejb.embeddedStressTest-ejb.CacheAccessSingletonBean auto-timer?:false persistent?:true timerService=org.jboss.as.ejb3.timerservice.TimerServiceImpl@72c41b07 initialExpiration=Fri Jun 28 10:56:16 CEST 2019 intervalDuration(in milli sec)=1 nextExpiration=Fri Jun 28 10:56:43 CEST 2019 timerState=IN_TIMEOUT info=org.infinispan.wfink.stress.TimerInfo@47ae2053]: javax.ejb.EJBException: org.infinispan.util.concurrent.TimeoutException: Replication timeout for lt-33828
> at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInOurTx(CMTTxInterceptor.java:246)
> at org.jboss.as.ejb3.tx.CMTTxInterceptor.requiresNew(CMTTxInterceptor.java:388)
> at org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:146)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
> at org.jboss.weld.module.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:81)
> at org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:89)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
> at org.jboss.as.ejb3.component.singleton.ContainerManagedConcurrencyInterceptor.processInvocation(ContainerManagedConcurrencyInterceptor.java:106)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:60)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:438)
> at org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:619)
> at org.jboss.invocation.AccessCheckingInterceptor.processInvocation(AccessCheckingInterceptor.java:57)
> at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
> at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53)
> at org.jboss.as.ejb3.timerservice.TimedObjectInvokerImpl.callTimeout(TimedObjectInvokerImpl.java:99)
> at org.jboss.as.ejb3.timerservice.TimedObjectInvokerImpl.callTimeout(TimedObjectInvokerImpl.java:109)
> at org.jboss.as.ejb3.timerservice.TimerTask.invokeBeanMethod(TimerTask.java:189)
> at org.jboss.as.ejb3.timerservice.TimerTask.callTimeout(TimerTask.java:185)
> at org.jboss.as.ejb3.timerservice.TimerTask.run(TimerTask.java:159)
> at org.jboss.as.ejb3.timerservice.TimerServiceImpl$Task$1.run(TimerServiceImpl.java:1304)
> at org.wildfly.extension.requestcontroller.RequestController$QueuedTask$1.run(RequestController.java:494)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> at org.jboss.threads.JBossThread.run(JBossThread.java:485)
> Caused by: org.infinispan.util.concurrent.TimeoutException: Replication timeout for lt-33828
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.checkRsp(JGroupsTransport.java:803)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$invokeRemotelyAsync$1(JGroupsTransport.java:641)
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
> at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
> at org.infinispan.remoting.transport.jgroups.RspListFuture.call(RspListFuture.java:47)
> at org.infinispan.remoting.transport.jgroups.RspListFuture.call(RspListFuture.java:16)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
--
[JBoss JIRA] (ISPN-10368) All thread pools should have a queue
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-10368?page=com.atlassian.jira.plugin... ]
Tristan Tarrant updated ISPN-10368:
-----------------------------------
Fix Version/s: 10.0.1.Final
(was: 10.0.0.Final)
> All thread pools should have a queue
> ------------------------------------
>
> Key: ISPN-10368
> URL: https://issues.jboss.org/browse/ISPN-10368
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Test Suite - Core
> Affects Versions: 10.0.0.Beta3
> Reporter: Dan Berindei
> Assignee: Will Burns
> Priority: Major
> Labels: testsuite_stability
> Fix For: 10.0.1.Final
>
>
> Random failures in {{DistSyncStoreNotSharedTest}} occur because the CPU executor ({{ASYNC_OPERATIONS_EXECUTOR}}) rejects a task; a sketch of the failure mode follows the log:
> {noformat}
> java.util.concurrent.CompletionException: java.lang.AssertionError: Thread name is: persistence-thread-DistSyncStoreNotSharedTest-NodeB-p16499-t5
> at org.infinispan.util.concurrent.CompletionStages.join(CompletionStages.java:75)
> at org.infinispan.stream.impl.AbstractCacheStream.performPublisherOperation(AbstractCacheStream.java:290)
> at org.infinispan.stream.impl.DistributedCacheStream.anyMatch(DistributedCacheStream.java:328)
> at org.infinispan.cache.impl.CacheImpl.isEmpty(CacheImpl.java:502)
> at org.infinispan.cache.impl.CacheImpl.isEmpty(CacheImpl.java:498)
> at org.infinispan.cache.impl.AbstractDelegatingCache.isEmpty(AbstractDelegatingCache.java:379)
> at org.infinispan.distribution.DistSyncStoreNotSharedTest.prepareClearTest(DistSyncStoreNotSharedTest.java:348)
> at org.infinispan.distribution.DistSyncStoreNotSharedTest.testClearWithFlag(DistSyncStoreNotSharedTest.java:305)
> at org.infinispan.commons.test.TestNGLongTestsHook.run(TestNGLongTestsHook.java:24)
> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: java.lang.AssertionError: Thread name is: persistence-thread-DistSyncStoreNotSharedTest-NodeB-p16499-t5
> at org.infinispan.persistence.manager.PersistenceManagerImpl.publishEntries(PersistenceManagerImpl.java:699)
> at org.infinispan.persistence.manager.PersistenceManagerImpl.publishEntries(PersistenceManagerImpl.java:120)
> at org.infinispan.interceptors.impl.CacheLoaderInterceptor$WrappedEntrySet.getCacheEntryPublisher(CacheLoaderInterceptor.java:726)
> at org.infinispan.interceptors.impl.CacheLoaderInterceptor$WrappedEntrySet.localPublisher(CacheLoaderInterceptor.java:713)
> at org.infinispan.reactive.publisher.impl.LocalPublisherManagerImpl.lambda$exactlyOnceSequential$7(LocalPublisherManagerImpl.java:372)
> at org.infinispan.util.rxjava.FlowableFromIntSetFunction$IteratorSubscription.slowPath(FlowableFromIntSetFunction.java:223)
> at org.infinispan.util.rxjava.FlowableFromIntSetFunction$BaseRangeSubscription.request(FlowableFromIntSetFunction.java:127)
> at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.drainLoop(FlowableFlatMap.java:546)
> at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.drain(FlowableFlatMap.java:366)
> at io.reactivex.internal.operators.flowable.FlowableFlatMap$InnerSubscriber.onComplete(FlowableFlatMap.java:678)
> at io.reactivex.internal.subscriptions.DeferredScalarSubscription.complete(DeferredScalarSubscription.java:119)
> at io.reactivex.processors.AsyncProcessor.onComplete(AsyncProcessor.java:201)
> at org.infinispan.reactive.RxJavaInterop.lambda$static$5(RxJavaInterop.java:120)
> at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
> at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
> at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
> at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
> at io.reactivex.internal.observers.ConsumerSingleObserver.onSuccess(ConsumerSingleObserver.java:62)
> at io.reactivex.internal.operators.flowable.FlowableAnySingle$AnySubscriber.onComplete(FlowableAnySingle.java:109)
> at io.reactivex.internal.operators.flowable.FlowableDoOnEach$DoOnEachSubscriber.onComplete(FlowableDoOnEach.java:135)
> at io.reactivex.internal.operators.flowable.FlowableConcatArray$ConcatArraySubscriber.onComplete(FlowableConcatArray.java:112)
> at io.reactivex.internal.subscribers.BasicFuseableSubscriber.onComplete(BasicFuseableSubscriber.java:120)
> at io.reactivex.internal.operators.flowable.FlowableObserveOn$BaseObserveOnSubscriber.checkTerminated(FlowableObserveOn.java:215)
> at io.reactivex.internal.operators.flowable.FlowableObserveOn$ObserveOnSubscriber.runAsync(FlowableObserveOn.java:399)
> at io.reactivex.internal.operators.flowable.FlowableObserveOn$BaseObserveOnSubscriber.run(FlowableObserveOn.java:176)
> at io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker$BooleanRunnable.run(ExecutorScheduler.java:260)
> at io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker.run(ExecutorScheduler.java:225)
> at org.infinispan.commons.util.concurrent.CallerRunsRejectOnShutdownPolicy.rejectedExecution(CallerRunsRejectOnShutdownPolicy.java:19)
> at java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:825)
> at java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1355)
> at org.infinispan.executors.LazyInitializingExecutorService.execute(LazyInitializingExecutorService.java:138)
> at io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker.schedule(ExecutorScheduler.java:143)
> at io.reactivex.internal.operators.flowable.FlowableObserveOn$BaseObserveOnSubscriber.trySchedule(FlowableObserveOn.java:166)
> at io.reactivex.internal.operators.flowable.FlowableObserveOn$BaseObserveOnSubscriber.onComplete(FlowableObserveOn.java:135)
> at io.reactivex.internal.operators.flowable.FlowableSubscribeOn$SubscribeOnSubscriber.onComplete(FlowableSubscribeOn.java:108)
> at io.reactivex.internal.operators.flowable.FlowableUsing$UsingSubscriber.onComplete(FlowableUsing.java:148)
> at io.reactivex.internal.subscribers.BasicFuseableSubscriber.onComplete(BasicFuseableSubscriber.java:120)
> at io.reactivex.internal.subscribers.BasicFuseableConditionalSubscriber.onComplete(BasicFuseableConditionalSubscriber.java:119)
> at io.reactivex.internal.subscribers.BasicFuseableConditionalSubscriber.onComplete(BasicFuseableConditionalSubscriber.java:119)
> at io.reactivex.internal.subscriptions.EmptySubscription.complete(EmptySubscription.java:69)
> {noformat}
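> The failure mode can be reproduced with a plain JDK executor (a minimal sketch, unrelated to Infinispan's actual pool wiring): a pool without a usable queue pushes burst tasks into the {{RejectedExecutionHandler}}, and a caller-runs policy then executes them on whatever thread submitted them, as {{CallerRunsRejectOnShutdownPolicy}} did on the persistence thread in the trace above:
> {code:java}
> // Sketch: without a queue, bursts overflow to the caller thread;
> // with a queue, every task runs on the pool thread.
> import java.util.concurrent.LinkedBlockingQueue;
> import java.util.concurrent.SynchronousQueue;
> import java.util.concurrent.ThreadPoolExecutor;
> import java.util.concurrent.TimeUnit;
>
> public class QueueDemo {
>    public static void main(String[] args) {
>       ThreadPoolExecutor noQueue = new ThreadPoolExecutor(1, 1, 0, TimeUnit.SECONDS,
>             new SynchronousQueue<>(),                 // no buffering at all
>             new ThreadPoolExecutor.CallerRunsPolicy());
>       ThreadPoolExecutor withQueue = new ThreadPoolExecutor(1, 1, 0, TimeUnit.SECONDS,
>             new LinkedBlockingQueue<>());             // tasks wait instead
>
>       for (int i = 0; i < 4; i++) {
>          noQueue.execute(QueueDemo::report);   // overflow typically prints "main"
>          withQueue.execute(QueueDemo::report); // always prints a pool thread
>       }
>       noQueue.shutdown();
>       withQueue.shutdown();
>    }
>
>    static void report() {
>       System.out.println(Thread.currentThread().getName());
>       try { Thread.sleep(100); } catch (InterruptedException ignored) {}
>    }
> }
> {code}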
> https://ci.infinispan.org/job/Infinispan/job/master/1269/
--
[JBoss JIRA] (ISPN-10369) InvalidatedNearCacheTest random failures
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-10369?page=com.atlassian.jira.plugin... ]
Tristan Tarrant updated ISPN-10369:
-----------------------------------
Fix Version/s: 10.0.1.Final
(was: 10.0.0.Final)
> InvalidatedNearCacheTest random failures
> ----------------------------------------
>
> Key: ISPN-10369
> URL: https://issues.jboss.org/browse/ISPN-10369
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Server
> Affects Versions: 10.0.0.Beta3
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Major
> Labels: testsuite_stability
> Fix For: 10.0.1.Final
>
>
> {{InvalidatedNearCacheTest.testGetNearCacheAfterConnect}} has 2 clients: it performs some operations, waits for the corresponding near-cache events on client 1, and then resets the events on client 2.
> This is not correct, because client 2 may not have received the events yet, in which case {{resetEvents()}} doesn't do anything and the leftover events fail an assertion in the next test (see the sketch after the log below):
> {noformat}
> 18:48:47,813 INFO (testng-InvalidatedNearCacheTest:[]) [TestSuiteProgress] Test succeeded: org.infinispan.client.hotrod.near.InvalidatedNearCacheTest.testGetNearCacheAfterConnect
> 18:48:47,814 INFO (testng-InvalidatedNearCacheTest:[]) [TestSuiteProgress] Test starting: org.infinispan.client.hotrod.near.InvalidatedNearCacheTest.testGetUpdatesNearCache
> 18:48:47,821 ERROR (testng-InvalidatedNearCacheTest:[]) [TestSuiteProgress] Test failed: org.infinispan.client.hotrod.near.InvalidatedNearCacheTest.testGetUpdatesNearCache
> 18:48:47,821 ERROR (testng-InvalidatedNearCacheTest:[]) [TestSuiteProgress] Test failed: org.infinispan.client.hotrod.near.InvalidatedNearCacheTest.testGetUpdatesNearCache
> java.lang.AssertionError: [org.infinispan.client.hotrod.near.MockNearCacheService$MockRemoveEvent@514f3f0c, org.infinispan.client.hotrod.near.MockNearCacheService$MockRemoveEvent@76bf7455] expected:<0> but was:<2>
> at org.testng.AssertJUnit.fail(AssertJUnit.java:59) ~[testng-6.14.3.jar:?]
> at org.testng.AssertJUnit.failNotEquals(AssertJUnit.java:364) ~[testng-6.14.3.jar:?]
> at org.testng.AssertJUnit.assertEquals(AssertJUnit.java:80) ~[testng-6.14.3.jar:?]
> at org.testng.AssertJUnit.assertEquals(AssertJUnit.java:245) ~[testng-6.14.3.jar:?]
> at org.infinispan.client.hotrod.near.AssertsNearCache.expectNoNearEvents(AssertsNearCache.java:123) ~[test-classes/:?]
> at org.infinispan.client.hotrod.near.InvalidatedNearCacheTest.testGetUpdatesNearCache(InvalidatedNearCacheTest.java:153) ~[test-classes/:?]
> {noformat}
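> A sketch of a race-free alternative (hypothetical helper; the real test uses {{AssertsNearCache}} utilities): wait until client 2 has actually observed the expected events before resetting them:
> {code:java}
> // Sketch only: poll until the second client has seen the events,
> // then reset; nothing that is still in flight gets silently dropped.
> import java.util.function.BooleanSupplier;
>
> final class EventuallyUtil {
>    static void eventually(BooleanSupplier condition, long timeoutMs) throws InterruptedException {
>       long deadline = System.currentTimeMillis() + timeoutMs;
>       while (!condition.getAsBoolean()) {
>          if (System.currentTimeMillis() > deadline)
>             throw new AssertionError("Condition not met within " + timeoutMs + " ms");
>          Thread.sleep(50);
>       }
>    }
> }
>
> // Hypothetical usage in the test:
> //   eventually(() -> client2.eventCount() == expectedEvents, 10_000);
> //   client2.resetEvents();
> {code}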
> https://ci.infinispan.org/job/Infinispan/job/master/1240/
--
[JBoss JIRA] (ISPN-10371) Refactor ProtocolServer thread pools
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-10371?page=com.atlassian.jira.plugin... ]
Tristan Tarrant updated ISPN-10371:
-----------------------------------
Fix Version/s: 10.0.1.Final
(was: 10.0.0.Final)
> Refactor ProtocolServer thread pools
> ------------------------------------
>
> Key: ISPN-10371
> URL: https://issues.jboss.org/browse/ISPN-10371
> Project: Infinispan
> Issue Type: Sub-task
> Components: Server
> Reporter: Will Burns
> Priority: Major
> Fix For: 10.0.1.Final
>
>
> Today we have protocol servers for REST/Hotrod/Memcached etc. Each uses its own pool with a default size of 160 threads. These pools should be removed entirely and replaced with the appropriate thread pool from the cache manager itself. We can't really touch the Netty pools, but fortunately, with the single port, the Netty thread pool is consolidated down to one for all protocols combined, which is good.
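> As an illustration of the consolidation (a minimal Netty sketch, not the actual server wiring; the ports and the empty initializer are made up), several protocol bootstraps can share a single {{EventLoopGroup}}:
> {code:java}
> // Sketch: one shared event loop group serving multiple protocol servers,
> // so the thread count no longer multiplies per protocol.
> import io.netty.bootstrap.ServerBootstrap;
> import io.netty.channel.ChannelInitializer;
> import io.netty.channel.EventLoopGroup;
> import io.netty.channel.nio.NioEventLoopGroup;
> import io.netty.channel.socket.SocketChannel;
> import io.netty.channel.socket.nio.NioServerSocketChannel;
>
> public class SharedLoopDemo {
>    public static void main(String[] args) throws InterruptedException {
>       EventLoopGroup shared = new NioEventLoopGroup(4); // one pool for everything
>       try {
>          for (int port : new int[] {11222, 11221}) {    // e.g. two protocol endpoints
>             new ServerBootstrap()
>                   .group(shared)                        // same group for every protocol
>                   .channel(NioServerSocketChannel.class)
>                   .childHandler(new ChannelInitializer<SocketChannel>() {
>                      @Override
>                      protected void initChannel(SocketChannel ch) {
>                         // protocol-specific handlers would be added here
>                      }
>                   })
>                   .bind(port).sync();
>          }
>          shared.terminationFuture().sync(); // keep the servers running
>       } finally {
>          shared.shutdownGracefully();
>       }
>    }
> }
> {code}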
--
[JBoss JIRA] (ISPN-10683) Remove CustomFailurePolicy transaction parameters
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-10683?page=com.atlassian.jira.plugin... ]
Tristan Tarrant updated ISPN-10683:
-----------------------------------
Fix Version/s: 10.0.1.Final
(was: 10.0.0.Final)
> Remove CustomFailurePolicy transaction parameters
> -------------------------------------------------
>
> Key: ISPN-10683
> URL: https://issues.jboss.org/browse/ISPN-10683
> Project: Infinispan
> Issue Type: Task
> Components: Core
> Affects Versions: 10.0.0.CR2
> Reporter: Dan Berindei
> Priority: Major
> Fix For: 10.0.1.Final
>
>
> The {{CustomFailurePolicy}} javadoc says implementations should be thread-safe, but the {{javax.transaction.Transaction}} javadoc doesn't say anything about thread safety.
> We should replace the {{javax.transaction.Transaction}} parameters with a {{GlobalTransaction}}, which we know is thread-safe.
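> A sketch of what the change might look like (abbreviated to a single callback; the exact method set and signatures here are an assumption, not the real interface):
> {code:java}
> // Assumed shape of the parameter swap described above.
> import javax.transaction.Transaction;
> import org.infinispan.transaction.xa.GlobalTransaction;
>
> interface FailurePolicyToday<K, V> {
>    void handleCommitFailure(String site, Transaction transaction); // thread-safety unspecified
> }
>
> interface FailurePolicyProposed<K, V> {
>    void handleCommitFailure(String site, GlobalTransaction gtx);   // known to be thread-safe
> }
> {code}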
--
[JBoss JIRA] (ISPN-10749) Invalidation mode needs a proper key partitioner
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-10749?page=com.atlassian.jira.plugin... ]
Tristan Tarrant updated ISPN-10749:
-----------------------------------
Fix Version/s: 10.0.1.Final
(was: 10.0.0.Final)
> Invalidation mode needs a proper key partitioner
> ------------------------------------------------
>
> Key: ISPN-10749
> URL: https://issues.jboss.org/browse/ISPN-10749
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.4.16.Final, 10.0.0.CR3
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Major
> Fix For: 9.4.17.Final, 10.0.1.Final
>
>
> Caches in invalidation mode used to do everything on the local node and only broadcast an invalidation command to all the members. ISPN-10029 changed this, and now transactional invalidation caches acquire locks for affected keys on the primary owners.
> Unfortunately {{KeyPartitionerFactory}} constructs a {{SingleSegmentKeyPartitioner}} in invalidation mode, so there is only one primary owner for all the keys.
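> A minimal sketch of the contrast (own interface name, not the real {{KeyPartitioner}}): a single-segment partitioner maps every key to the same primary owner, while a hash-based one spreads them:
> {code:java}
> // Sketch: why a single segment concentrates all primary ownership.
> interface KeyPartitionerSketch {
>    int getSegment(Object key);
> }
>
> final class SingleSegmentSketch implements KeyPartitionerSketch {
>    public int getSegment(Object key) {
>       return 0;                      // one primary owner for ALL keys
>    }
> }
>
> final class HashPartitionerSketch implements KeyPartitionerSketch {
>    private final int numSegments;
>    HashPartitionerSketch(int numSegments) { this.numSegments = numSegments; }
>    public int getSegment(Object key) {
>       // floorMod avoids negative segments for negative hash codes
>       return Math.floorMod(key.hashCode(), numSegments);
>    }
> }
> {code}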
--