[JBoss JIRA] (ISPN-9812) Implement streaming response publisher method
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-9812?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-9812:
----------------------------------
Fix Version/s: 10.0.1.Final
(was: 10.0.0.Final)
> Implement streaming response publisher method
> ---------------------------------------------
>
> Key: ISPN-9812
> URL: https://issues.jboss.org/browse/ISPN-9812
> Project: Infinispan
> Issue Type: Sub-task
> Components: Publisher
> Reporter: Will Burns
> Assignee: Will Burns
> Priority: Major
> Fix For: 10.0.1.Final
>
>
> We need to implement a streaming-based publisher that supports rehash. This is required for properly exposing the Cache as a Publisher or iterator. The method we want to expose is quite simple:
> {code}
> <R> Publisher<R> compose(Function<? super Publisher<T>, ? extends Publisher<R>> publisherFunction);
> {code}
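> A rough usage sketch (not an API that exists yet): assuming a hypothetical holder of the {{compose}} method above and RxJava 2's {{Flowable}} (already on the classpath), a caller could count matching values without ever materializing them, leaving the implementation free to deal with rehash underneath:
> {code}
> import java.util.function.Function;
> import org.reactivestreams.Publisher;
> import io.reactivex.Flowable;
>
> // Hypothetical interface carrying the compose method from the description
> interface EntryPublisher<T> {
>    <R> Publisher<R> compose(Function<? super Publisher<T>, ? extends Publisher<R>> publisherFunction);
> }
>
> class ComposeUsageSketch {
>    // Count values above a threshold; the caller never touches the raw stream,
>    // so the underlying implementation can restart/deduplicate it across rehash.
>    static Publisher<Long> countLargeValues(EntryPublisher<Integer> values) {
>       return values.compose((Publisher<Integer> pub) -> Flowable.fromPublisher(pub)
>             .filter(v -> v > 100)
>             .count()
>             .toFlowable());
>    }
> }
> {code}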
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (ISPN-9821) Add partition handling for PublisherManager
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-9821?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-9821:
----------------------------------
Fix Version/s: 10.0.1.Final
(was: 10.0.0.Final)
> Add partition handling for PublisherManager
> -------------------------------------------
>
> Key: ISPN-9821
> URL: https://issues.jboss.org/browse/ISPN-9821
> Project: Infinispan
> Issue Type: Sub-task
> Components: Partition Handling, Publisher
> Reporter: Will Burns
> Priority: Major
> Fix For: 10.0.1.Final
>
> Attachments: PartitionAwareClusterPublisherManager.java
>
>
> We need to make sure the publisher supports partition handling as well, similar to DistributedStreams.
> Attached is a Java file that should work, although it has not been tested yet.
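> Just to make the description concrete, a minimal illustration of one way to guard a publisher (hypothetical names; the attached {{PartitionAwareClusterPublisherManager.java}} is the actual proposal):
> {code}
> import org.reactivestreams.Publisher;
> import io.reactivex.Flowable;
>
> // Illustration only: a guard that re-checks availability on every subscription.
> class PartitionAwarePublisherSketch<T> {
>    interface AvailabilityCheck {
>       boolean isAvailable();   // e.g. backed by the cache's partition handling state
>    }
>
>    private final AvailabilityCheck availability;
>
>    PartitionAwarePublisherSketch(AvailabilityCheck availability) {
>       this.availability = availability;
>    }
>
>    Publisher<T> guard(Publisher<T> delegate) {
>       // defer() re-evaluates availability for each subscriber
>       return Flowable.defer(() -> availability.isAvailable()
>             ? Flowable.fromPublisher(delegate)
>             : Flowable.<T>error(new IllegalStateException("cluster is in degraded mode")));
>    }
> }
> {code}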
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (ISPN-10093) PersistenceManagerImpl stop deadlock with topology update
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-10093?page=com.atlassian.jira.plugin... ]
Tristan Tarrant updated ISPN-10093:
-----------------------------------
Fix Version/s: 10.0.1.Final
(was: 10.0.0.Final)
> PersistenceManagerImpl stop deadlock with topology update
> ---------------------------------------------------------
>
> Key: ISPN-10093
> URL: https://issues.jboss.org/browse/ISPN-10093
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Test Suite - Core
> Affects Versions: 10.0.0.Beta3
> Reporter: Dan Berindei
> Assignee: Will Burns
> Priority: Major
> Labels: testsuite_stability
> Fix For: 10.0.1.Final
>
> Attachments: threaddump.txt
>
>
> {{DistSyncStoreNotSharedTest.clearContent}} hung in CI recently:
> {noformat}
> "testng-DistSyncStoreNotSharedTest" #16 prio=5 os_prio=0 cpu=11511.26ms elapsed=435.14s tid=0x00007fdb710b6000 nid=0x3222 waiting on condition [0x00007fdb352d3000]
> java.lang.Thread.State: WAITING (parking)
> at jdk.internal.misc.Unsafe.park(java.base@11/Native Method)
> - parking to wait for <0x00000000c8a22450> (a java.util.concurrent.Semaphore$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(java.base@11/LockSupport.java:194)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(java.base@11/AbstractQueuedSynchronizer.java:885)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(java.base@11/AbstractQueuedSynchronizer.java:1009)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(java.base@11/AbstractQueuedSynchronizer.java:1324)
> at java.util.concurrent.Semaphore.acquireUninterruptibly(java.base@11/Semaphore.java:504)
> at org.infinispan.persistence.manager.PersistenceManagerImpl.stop(PersistenceManagerImpl.java:222)
> at jdk.internal.reflect.GeneratedMethodAccessor72.invoke(Unknown Source)
> at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@11/DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(java.base@11/Method.java:566)
> at org.infinispan.commons.util.SecurityActions.lambda$invokeAccessibly$0(SecurityActions.java:79)
> at org.infinispan.commons.util.SecurityActions$$Lambda$237/0x0000000100661c40.run(Unknown Source)
> at org.infinispan.commons.util.SecurityActions.doPrivileged(SecurityActions.java:71)
> at org.infinispan.commons.util.SecurityActions.invokeAccessibly(SecurityActions.java:76)
> at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:181)
> at org.infinispan.factories.impl.BasicComponentRegistryImpl.performStop(BasicComponentRegistryImpl.java:601)
> at org.infinispan.factories.impl.BasicComponentRegistryImpl.stopWrapper(BasicComponentRegistryImpl.java:590)
> at org.infinispan.factories.impl.BasicComponentRegistryImpl.stop(BasicComponentRegistryImpl.java:461)
> at org.infinispan.factories.AbstractComponentRegistry.internalStop(AbstractComponentRegistry.java:431)
> at org.infinispan.factories.AbstractComponentRegistry.stop(AbstractComponentRegistry.java:366)
> at org.infinispan.cache.impl.CacheImpl.performImmediateShutdown(CacheImpl.java:1160)
> at org.infinispan.cache.impl.CacheImpl.stop(CacheImpl.java:1125)
> at org.infinispan.cache.impl.AbstractDelegatingCache.stop(AbstractDelegatingCache.java:521)
> at org.infinispan.manager.DefaultCacheManager.terminate(DefaultCacheManager.java:747)
> at org.infinispan.manager.DefaultCacheManager.stopCaches(DefaultCacheManager.java:799)
> at org.infinispan.manager.DefaultCacheManager.stop(DefaultCacheManager.java:775)
> at org.infinispan.test.TestingUtil.killCacheManagers(TestingUtil.java:846)
> at org.infinispan.test.MultipleCacheManagersTest.clearContent(MultipleCacheManagersTest.java:158)
> "persistence-thread-DistSyncStoreNotSharedTest-NodeB-p16432-t1" #53654 daemon prio=5 os_prio=0 cpu=1.26ms elapsed=301.93s tid=0x00007fdb3c3d8000 nid=0x8ef waiting on condition [0x00007fdb00055000]
> java.lang.Thread.State: WAITING (parking)
> at jdk.internal.misc.Unsafe.park(java.base@11/Native Method)
> - parking to wait for <0x00000000c8b1fb88> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(java.base@11/LockSupport.java:194)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(java.base@11/AbstractQueuedSynchronizer.java:885)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(java.base@11/AbstractQueuedSynchronizer.java:1009)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(java.base@11/AbstractQueuedSynchronizer.java:1324)
> at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(java.base@11/ReentrantReadWriteLock.java:738)
> at org.infinispan.persistence.manager.PersistenceManagerImpl.pollStoreAvailability(PersistenceManagerImpl.java:196)
> at org.infinispan.persistence.manager.PersistenceManagerImpl$$Lambda$492/0x00000001007fb440.run(Unknown Source)
> at java.util.concurrent.Executors$RunnableAdapter.call(java.base@11/Executors.java:515)
> at java.util.concurrent.FutureTask.runAndReset(java.base@11/FutureTask.java:305)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(java.base@11/ScheduledThreadPoolExecutor.java:305)
> "transport-thread-DistSyncStoreNotSharedTest-NodeB-p16424-t5" #53646 daemon prio=5 os_prio=0 cpu=3.15ms elapsed=301.94s tid=0x00007fdb2007a000 nid=0x8e8 waiting on condition [0x00007fdb0b406000]
> java.lang.Thread.State: WAITING (parking)
> at jdk.internal.misc.Unsafe.park(java.base@11/Native Method)
> - parking to wait for <0x00000000c8d2abb0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(java.base@11/LockSupport.java:194)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(java.base@11/AbstractQueuedSynchronizer.java:2081)
> at io.reactivex.internal.operators.flowable.BlockingFlowableIterable$BlockingFlowableIterator.hasNext(BlockingFlowableIterable.java:94)
> at io.reactivex.Flowable.blockingForEach(Flowable.java:5682)
> at org.infinispan.statetransfer.StateConsumerImpl.removeStaleData(StateConsumerImpl.java:1011)
> at org.infinispan.statetransfer.StateConsumerImpl.onTopologyUpdate(StateConsumerImpl.java:453)
> at org.infinispan.statetransfer.StateTransferManagerImpl.doTopologyUpdate(StateTransferManagerImpl.java:202)
> at org.infinispan.statetransfer.StateTransferManagerImpl.access$000(StateTransferManagerImpl.java:58)
> at org.infinispan.statetransfer.StateTransferManagerImpl$1.updateConsistentHash(StateTransferManagerImpl.java:114)
> at org.infinispan.topology.LocalTopologyManagerImpl.resetLocalTopologyBeforeRebalance(LocalTopologyManagerImpl.java:437)
> at org.infinispan.topology.LocalTopologyManagerImpl.doHandleRebalance(LocalTopologyManagerImpl.java:519)
> - locked <0x00000000c8b30b30> (a org.infinispan.topology.LocalCacheStatus)
> at org.infinispan.topology.LocalTopologyManagerImpl.lambda$handleRebalance$3(LocalTopologyManagerImpl.java:484)
> at org.infinispan.topology.LocalTopologyManagerImpl$$Lambda$574/0x000000010089a040.run(Unknown Source)
> at org.infinispan.executors.LimitedExecutor.runTasks(LimitedExecutor.java:175){noformat}
> [Full thread dump|https://ci.infinispan.org/job/Infinispan/job/master/1133/artifact/core/]
> Somehow the producer thread for the transport-thread iteration is blocked, but it is not waiting for the persistence mutex. Maybe it's waiting for a topology? Not sure if it's relevant, but the last test to run was {{testClearWithFlag}}, so the data container was empty and the store had 5 entries.
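> One lock ordering that would produce the first two stacks (purely illustrative, not the actual {{PersistenceManagerImpl}} code) is a {{stop()}} that takes the write lock and then drains a semaphore whose permits are held by in-flight work; any permit holder that is itself blocked, e.g. waiting for the read lock or for a publisher that never emits, then parks {{stop()}} forever:
> {code}
> import java.util.concurrent.Semaphore;
> import java.util.concurrent.locks.ReentrantReadWriteLock;
>
> // Illustrative sketch only: stop() drains permits while holding the write lock.
> class StopDrainSketch {
>    static final int MAX_CONCURRENT = 10;
>    final Semaphore inFlight = new Semaphore(MAX_CONCURRENT);
>    final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
>
>    void operation(Runnable work) {
>       inFlight.acquireUninterruptibly();      // take a permit for the operation
>       try {
>          lock.readLock().lock();              // also needs the read lock
>          try {
>             work.run();                       // if this blocks, the permit is never released
>          } finally {
>             lock.readLock().unlock();
>          }
>       } finally {
>          inFlight.release();
>       }
>    }
>
>    void stop() {
>       lock.writeLock().lock();                // blocks new work and availability polling
>       try {
>          // Wait until every in-flight operation has released its permit.
>          // Deadlocks if a permit holder is itself blocked, e.g. waiting for
>          // the read lock above, or for a publisher that never produces.
>          inFlight.acquireUninterruptibly(MAX_CONCURRENT);
>          inFlight.release(MAX_CONCURRENT);
>       } finally {
>          lock.writeLock().unlock();
>       }
>    }
> }
> {code}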
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (ISPN-10238) RemoteCacheManager.stop() hangs if a client thread is waiting for a server response
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-10238?page=com.atlassian.jira.plugin... ]
Tristan Tarrant updated ISPN-10238:
-----------------------------------
Fix Version/s: 10.0.1.Final
(was: 10.0.0.Final)
> RemoteCacheManager.stop() hangs if a client thread is waiting for a server response
> -----------------------------------------------------------------------------------
>
> Key: ISPN-10238
> URL: https://issues.jboss.org/browse/ISPN-10238
> Project: Infinispan
> Issue Type: Bug
> Components: Server, Test Suite - Server
> Affects Versions: 10.0.0.Beta3, 9.4.14.Final
> Reporter: Dan Berindei
> Priority: Major
> Labels: testsuite_stability
> Fix For: 10.0.1.Final
>
>
> One of our integration tests performs a blocking {{RemoteCache.size()}} operation on the thread where another asynchronous operation was completed (a {{HotRod-client-async-pool}} thread):
> {code:title=EvictionIT}
> CompletableFuture res = rc.putAllAsync(entries);
> res.thenRun(() -> assertEquals(3, rc.size()));
> {code}
> The test then finishes, but doesn't stop the {{RemoteCacheManager}}. When I changed the test to stop the {{RemoteCacheManager}}, the test started hanging:
> {noformat}
> "HotRod-client-async-pool-139-1" #2880 daemon prio=5 os_prio=0 cpu=434.56ms elapsed=1621.24s tid=0x00007f43a6b99800 nid=0x19c0 waiting on condition [0x00007f42ec9fd000]
> java.lang.Thread.State: TIMED_WAITING (parking)
> at jdk.internal.misc.Unsafe.park(java.base@11.0.3/Native Method)
> - parking to wait for <0x00000000d3321350> (a java.util.concurrent.CompletableFuture$Signaller)
> at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.3/LockSupport.java:234)
> at java.util.concurrent.CompletableFuture$Signaller.block(java.base@11.0.3/CompletableFuture.java:1798)
> at java.util.concurrent.ForkJoinPool.managedBlock(java.base@11.0.3/ForkJoinPool.java:3128)
> at java.util.concurrent.CompletableFuture.timedGet(java.base@11.0.3/CompletableFuture.java:1868)
> at java.util.concurrent.CompletableFuture.get(java.base@11.0.3/CompletableFuture.java:2021)
> at org.infinispan.client.hotrod.impl.Util.await(Util.java:46)
> at org.infinispan.client.hotrod.impl.RemoteCacheImpl.size(RemoteCacheImpl.java:307)
> at org.infinispan.server.test.eviction.EvictionIT.lambda$testPutAllAsyncEviction$0(EvictionIT.java:73)
> at org.infinispan.server.test.eviction.EvictionIT$$Lambda$347/0x000000010074a440.run(Unknown Source)
> at java.util.concurrent.CompletableFuture$UniRun.tryFire(java.base@11.0.3/CompletableFuture.java:783)
> at java.util.concurrent.CompletableFuture.postComplete(java.base@11.0.3/CompletableFuture.java:506)
> at java.util.concurrent.CompletableFuture.complete(java.base@11.0.3/CompletableFuture.java:2073)
> at org.infinispan.client.hotrod.impl.operations.HotRodOperation.complete(HotRodOperation.java:162)
> at org.infinispan.client.hotrod.impl.operations.PutAllOperation.acceptResponse(PutAllOperation.java:83)
> at org.infinispan.client.hotrod.impl.transport.netty.HeaderDecoder.decode(HeaderDecoder.java:144)
> at org.infinispan.client.hotrod.impl.transport.netty.HintedReplayingDecoder.callDecode(HintedReplayingDecoder.java:94)
> at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
> at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
> at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
> at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:799)
> at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:421)
> at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:321)
> at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.3/ThreadPoolExecutor.java:1128)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.3/ThreadPoolExecutor.java:628)
> at java.lang.Thread.run(java.base@11.0.3/Thread.java:834)
> Locked ownable synchronizers:
> - <0x00000000ca248c30> (a java.util.concurrent.ThreadPoolExecutor$Worker)
> "main" #1 prio=5 os_prio=0 cpu=37300.10ms elapsed=2911.99s tid=0x00007f43a4023000 nid=0x37f7 in Object.wait() [0x00007f43a9c21000]
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(java.base@11.0.3/Native Method)
> - waiting on <no object reference available>
> at java.lang.Object.wait(java.base@11.0.3/Object.java:328)
> at io.netty.util.concurrent.DefaultPromise.await(DefaultPromise.java:231)
> - waiting to re-lock in wait() <0x00000000ca174af8> (a io.netty.util.concurrent.DefaultPromise)
> at io.netty.util.concurrent.DefaultPromise.await(DefaultPromise.java:33)
> at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:32)
> at org.infinispan.client.hotrod.impl.transport.netty.ChannelFactory.destroy(ChannelFactory.java:216)
> at org.infinispan.client.hotrod.RemoteCacheManager.stop(RemoteCacheManager.java:365)
> at org.infinispan.client.hotrod.RemoteCacheManager.close(RemoteCacheManager.java:513)
> at org.infinispan.commons.junit.ClassResource.lambda$new$0(ClassResource.java:24)
> at org.infinispan.commons.junit.ClassResource$$Lambda$286/0x0000000100573040.accept(Unknown Source)
> at org.infinispan.commons.junit.ClassResource.after(ClassResource.java:41)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:50)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.jboss.arquillian.junit.Arquillian.run(Arquillian.java:167)
> at org.junit.runners.Suite.runChild(Suite.java:128)
> at org.junit.runners.Suite.runChild(Suite.java:27)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
> at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
> at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
> at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
> at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
> at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
> at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> Locked ownable synchronizers:
> - None
> {noformat}
> {{HotRod-client-async-pool}} threads are never appropriate for blocking cache operations, but we need to do more than just change the test:
> * We need an asynchronous {{RemoteCache.size()}} alternative.
> * Blocking operations like {{size()}} currently wait up to 1 day for a response from the server; they should wait for a much smaller (and configurable) timeout.
> * {{RemoteCacheManager.stop()}} should have a timeout as well, but more importantly it should cancel any pending operations.
> * We should consider running all application code on a separate thread pool (see the sketch below).
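> For the test itself (and as the client-side workaround suggested in the last point), the continuation can be shifted off the {{HotRod-client-async-pool}} thread with {{thenRunAsync}} and an application-owned executor, so the blocking {{size()}} call never runs on the thread that has to read the server's response. A minimal sketch, assuming the same {{rc}} and {{entries}} as in the snippet above:
> {code}
> import java.util.Map;
> import java.util.concurrent.CompletableFuture;
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import org.infinispan.client.hotrod.RemoteCache;
>
> class EvictionWorkaroundSketch {
>    static void putAllThenCheckSize(RemoteCache<String, String> rc,
>                                    Map<String, String> entries) throws Exception {
>       // Application-owned pool: blocking calls here cannot stall the HotRod threads.
>       ExecutorService appExecutor = Executors.newSingleThreadExecutor();
>       try {
>          CompletableFuture<Void> res = rc.putAllAsync(entries);
>          // thenRunAsync moves the continuation off the HotRod-client-async-pool thread.
>          res.thenRunAsync(() -> {
>             int size = rc.size();   // still blocking, but now on appExecutor
>             if (size != 3) {
>                throw new AssertionError("expected 3 entries, got " + size);
>             }
>          }, appExecutor).join();
>       } finally {
>          appExecutor.shutdown();
>       }
>    }
> }
> {code}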
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (ISPN-10240) Explicitly disallow concurrent operations in the same transaction
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-10240?page=com.atlassian.jira.plugin... ]
Tristan Tarrant updated ISPN-10240:
-----------------------------------
Fix Version/s: 10.0.1.Final
(was: 10.0.0.Final)
> Explicitly disallow concurrent operations in the same transaction
> -----------------------------------------------------------------
>
> Key: ISPN-10240
> URL: https://issues.jboss.org/browse/ISPN-10240
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core
> Affects Versions: 10.0.0.Beta3, 9.4.14.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Major
> Fix For: 10.0.1.Final
>
>
> Invoking multiple cache operations in parallel in the same transaction is inherently dangerous, especially when they touch the same keys (e.g. if one of them is a bulk operation).
> Invoking multiple cache operations in parallel became much easier when we added asynchronous operations, and while we "know" it's not ok to access the same transaction in parallel, there is no explicit guard against it.
> {{AbstractCacheTransaction.lookedUpEntries}} is a {{HashMap}}, which will sometimes throw a {{ConcurrentModificationException}}, but most of the time the exception is something completely unrelated, making the problem even harder to trace back to its cause (e.g. ISPN-10239).
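> For illustration, this is the kind of code such a guard would reject (a sketch, not taken from the issue): two asynchronous operations started and awaited in parallel while a single transaction is active on the calling thread, so both may mutate {{lookedUpEntries}} concurrently:
> {code}
> import java.util.Map;
> import java.util.concurrent.CompletableFuture;
> import javax.transaction.TransactionManager;
> import org.infinispan.AdvancedCache;
>
> class ParallelTxAntiPattern {
>    // Dangerous: both async operations are enlisted in the same transaction and may
>    // touch AbstractCacheTransaction.lookedUpEntries (a plain HashMap) concurrently.
>    static void doNotDoThis(AdvancedCache<String, String> cache,
>                            Map<String, String> bulk) throws Exception {
>       TransactionManager tm = cache.getTransactionManager();
>       tm.begin();
>       try {
>          CompletableFuture<String> single = cache.putAsync("k1", "v1");
>          CompletableFuture<Void> all = cache.putAllAsync(bulk);
>          CompletableFuture.allOf(single, all).join();   // operations race inside one tx
>          tm.commit();
>       } catch (Exception e) {
>          tm.rollback();
>          throw e;
>       }
>    }
> }
> {code}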
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (ISPN-10215) Send clear event to near cache when client is not accepting events quick enough
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-10215?page=com.atlassian.jira.plugin... ]
Tristan Tarrant updated ISPN-10215:
-----------------------------------
Fix Version/s: 10.0.1.Final
(was: 10.0.0.Final)
> Send clear event to near cache when client is not accepting events quick enough
> -------------------------------------------------------------------------------
>
> Key: ISPN-10215
> URL: https://issues.jboss.org/browse/ISPN-10215
> Project: Infinispan
> Issue Type: Sub-task
> Components: Hot Rod, Listeners
> Reporter: Will Burns
> Priority: Major
> Fix For: 10.0.1.Final
>
>
> When a near cache is used it installs a remote listener that receives all modified/removed/expired events. If a client is unable to keep up with the volume of updates, events can back up on the server, which in turn can slow down writes.
> One solution would be, once a client's backlog exceeds a given threshold, to drop the pending events and send a single clear event to the near cache. This would temporarily hurt that client's read performance but would allow write-heavy clients to continue operating.
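> A rough sketch of the server-side policy (hypothetical names, not an existing class): bound the per-client event backlog and, once it overflows, replace the queued events with a single clear notification:
> {code}
> import java.util.ArrayDeque;
> import java.util.Queue;
>
> // Hypothetical sketch of the backlog/clear policy; these names do not exist in Infinispan.
> class NearCacheEventBuffer<E> {
>    private final int threshold;
>    private final Queue<E> pending = new ArrayDeque<>();
>    private final E clearEvent;
>    private boolean collapsedToClear;
>
>    NearCacheEventBuffer(int threshold, E clearEvent) {
>       this.threshold = threshold;
>       this.clearEvent = clearEvent;
>    }
>
>    synchronized void offer(E event) {
>       if (collapsedToClear) {
>          return;   // already collapsed; the single clear event covers everything
>       }
>       pending.add(event);
>       if (pending.size() > threshold) {
>          // Client is too slow: drop the backlog and send one clear event instead,
>          // so writers on the server are no longer throttled by this listener.
>          pending.clear();
>          pending.add(clearEvent);
>          collapsedToClear = true;
>       }
>    }
>
>    synchronized E poll() {
>       E e = pending.poll();
>       if (e != null && collapsedToClear && pending.isEmpty()) {
>          collapsedToClear = false;   // clear event flushed; resume normal delivery
>       }
>       return e;
>    }
> }
> {code}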
--
This message was sent by Atlassian Jira
(v7.13.8#713008)