[JBoss JIRA] (ISPN-2926) MBean to access cache content
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2926?page=com.atlassian.jira.plugin.... ]
Mircea Markus commented on ISPN-2926:
-------------------------------------
Perhaps an RDBMS-dump-like operation, which seems very popular in the database world, plus some viewers for the dump file.
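To make the idea concrete, here is a hypothetical sketch of what a dump-style operation exposed through an MBean could look like. The class and method names are invented for illustration; this is not the Infinispan API.
{code}
// Hypothetical sketch only -- not the Infinispan API. Shows what a
// dump-style MBean operation over a cache's contents might look like.
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;

public class CacheDumpSketch {
    private final Map<String, String> cache; // stand-in for the real cache

    public CacheDumpSketch(Map<String, String> cache) {
        this.cache = cache;
    }

    /** JMX-exposed operation: write every entry to a dump file, one per line. */
    public void dumpTo(String path) throws IOException {
        try (PrintWriter out = new PrintWriter(Files.newBufferedWriter(Paths.get(path)))) {
            for (Map.Entry<String, String> e : cache.entrySet()) {
                out.println(e.getKey() + "=" + e.getValue());
            }
        }
    }
}
{code}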
> MBean to access cache content
> -----------------------------
>
> Key: ISPN-2926
> URL: https://issues.jboss.org/browse/ISPN-2926
> Project: Infinispan
> Issue Type: Feature Request
> Components: JMX, reporting and management
> Affects Versions: 5.2.5.Final
> Reporter: Edoardo Schepis
> Assignee: Manik Surtani
> Fix For: 5.3.0.Alpha1, 5.3.0.Final
>
>
> It would be great to have a tool to access the cache and inspect it.
> For troubleshooting, debugging or simply for monitoring, it would be very helpful to show the content of the cache using some kind of tool
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2931) async mode changes remove behaviour
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2931?page=com.atlassian.jira.plugin.... ]
Mircea Markus commented on ISPN-2931:
-------------------------------------
[~sebastiantusk] Thanks for the bug report. Indeed, by default the async operations should return the previous value, even though this makes them effectively synchronous. If the cache is configured with "unreliableReturnValues=true" or the Flag.IGNORE_RETURN_VALUES flag is used, then the operation behaves asynchronously.
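A short sketch of the two behaviours described above, against the 5.2-era API (Cache.removeAsync, AdvancedCache.withFlags and Flag.IGNORE_RETURN_VALUES are real API; the wrapper class is illustrative):
{code}
import org.infinispan.AdvancedCache;
import org.infinispan.Cache;
import org.infinispan.context.Flag;

public class AsyncRemoveExample {
    public static void remove(Cache<String, String> cache, String key) {
        // Default: even in async mode, removeAsync must fetch the previous
        // value, which forces a synchronous remote lookup.
        cache.removeAsync(key);

        // Opting out of return values lets the operation stay truly async.
        AdvancedCache<String, String> advanced = cache.getAdvancedCache();
        advanced.withFlags(Flag.IGNORE_RETURN_VALUES).removeAsync(key);
    }
}
{code}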
> async mode changes remove behaviour
> -----------------------------------
>
> Key: ISPN-2931
> URL: https://issues.jboss.org/browse/ISPN-2931
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache
> Affects Versions: 5.2.1.Final
> Reporter: Sebastian Tusk
> Assignee: Galder Zamarreño
> Fix For: 5.3.0.Final
>
>
> With a cache set up as clustering mode dist, 2 owners, and async mode, the Cache.remove API does not behave correctly. Cache.remove(key) should return the old value and Cache.remove(key, value) should return true if the entry was removed. Both methods work correctly only if invoked on the primary owner of the key. If invoked on another node, remove(key) returns null every time and remove(key, value) returns false every time. The Infinispan documentation says that in async mode these operations should work as expected. https://docs.jboss.org/author/display/ISPN/Asynchronous+Options
> Complete cache config:
> <namedCache name="distributed">
>    <!-- Used to register JMX statistics in any available MBean server -->
>    <jmxStatistics enabled="true" />
>
>    <clustering mode="dist">
>       <stateTransfer fetchInMemoryState="true" timeout="20000" />
>       <hash numOwners="2"/>
>       <async/>
>    </clustering>
>
>    <locking isolationLevel="READ_COMMITTED"
>             lockAcquisitionTimeout="15000" useLockStriping="false" />
>
>    <eviction maxEntries="10000" strategy="LRU" />
>    <expiration maxIdle="3600000" wakeUpInterval="5000"/>
>    <storeAsBinary storeKeysAsBinary="true" storeValuesAsBinary="false" enabled="false" />
> </namedCache>
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2960) '[ClusterTopologyManagerImpl] Failed to start rebalance: ... IllegalStateException: transport was closed' on cache stop
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/ISPN-2960?page=com.atlassian.jira.plugin.... ]
Radoslav Husar commented on ISPN-2960:
--------------------------------------
The other nodes generate this message (below). Afterwards, there are some failures (missing/stale entries), but I am unable to isolate this particular failure as there are quite a few things failing at once.
{code}
[JBossINF] 07:01:44,585 WARN [org.infinispan.topology.CacheTopologyControlCommand] (transport-thread-8) ISPN000071: Caught exception when handling command CacheTopologyControlCommand{cache=dist, type=REBALANCE_CONFIRM, sender=perf21/web, joinInfo=null, topologyId=13, currentCH=null, pendingCH=null, throwable=null, viewId=6}: org.infinispan.CacheException: Received invalid rebalance confirmation from perf21/web for cache dist, we don't have a rebalance in progress
[JBossINF] at org.infinispan.topology.ClusterTopologyManagerImpl.handleRebalanceCompleted(ClusterTopologyManagerImpl.java:206)
[JBossINF] at org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:160)
[JBossINF] at org.infinispan.topology.CacheTopologyControlCommand.perform(CacheTopologyControlCommand.java:137)
[JBossINF] at org.infinispan.topology.LocalTopologyManagerImpl$1.call(LocalTopologyManagerImpl.java:291)
[JBossINF] at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) [rt.jar:1.6.0_43]
[JBossINF] at java.util.concurrent.FutureTask.run(FutureTask.java:138) [rt.jar:1.6.0_43]
[JBossINF] at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) [rt.jar:1.6.0_43]
[JBossINF] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) [rt.jar:1.6.0_43]
[JBossINF] at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_43]
[JBossINF]
[JBossINF] 07:01:44,584 WARN [org.infinispan.topology.CacheTopologyControlCommand] (OOB-19,shared=udp) ISPN000071: Caught exception when handling command CacheTopologyControlCommand{cache=default-host/clusterbench, type=REBALANCE_CONFIRM, sender=perf20/web, joinInfo=null, topologyId=13, currentCH=null, pendingCH=null, throwable=null, viewId=6}: org.infinispan.CacheException: Received invalid rebalance confirmation from perf20/web for cache default-host/clusterbench, we don't have a rebalance in progress
[JBossINF] at org.infinispan.topology.ClusterTopologyManagerImpl.handleRebalanceCompleted(ClusterTopologyManagerImpl.java:206)
[JBossINF] at org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:160)
[JBossINF] at org.infinispan.topology.CacheTopologyControlCommand.perform(CacheTopologyControlCommand.java:137)
[JBossINF] at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:253)
[JBossINF] at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:220)
[JBossINF] at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:484)
[JBossINF] at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:391)
[JBossINF] at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:249)
[JBossINF] at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:598)
[JBossINF] at org.jgroups.blocks.mux.MuxUpHandler.up(MuxUpHandler.java:130)
[JBossINF] at org.jgroups.JChannel.up(JChannel.java:707)
[JBossINF] at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1020)
[JBossINF] at org.jgroups.protocols.RSVP.up(RSVP.java:172)
[JBossINF] at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
[JBossINF] at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
[JBossINF] at org.jgroups.protocols.FlowControl.up(FlowControl.java:400)
[JBossINF] at org.jgroups.protocols.pbcast.GMS.up(GMS.java:896)
[JBossINF] at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245)
[JBossINF] at org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:765)
[JBossINF] at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:420)
[JBossINF] at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:606)
[JBossINF] at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:143)
[JBossINF] at org.jgroups.protocols.FD.up(FD.java:253)
[JBossINF] at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)
[JBossINF] at org.jgroups.protocols.MERGE3.up(MERGE3.java:290)
[JBossINF] at org.jgroups.protocols.Discovery.up(Discovery.java:359)
[JBossINF] at org.jgroups.protocols.TP$ProtocolAdapter.up(TP.java:2616)
[JBossINF] at org.jgroups.protocols.TP.passMessageUp(TP.java:1263)
[JBossINF] at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1825)
[JBossINF] at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1798)
[JBossINF] at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) [rt.jar:1.6.0_43]
[JBossINF] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) [rt.jar:1.6.0_43]
[JBossINF] at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_43]
[JBossINF]
{code}
> '[ClusterTopologyManagerImpl] Failed to start rebalance: ... IllegalStateException: transport was closed' on cache stop
> -----------------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-2960
> URL: https://issues.jboss.org/browse/ISPN-2960
> Project: Infinispan
> Issue Type: Bug
> Components: State transfer
> Affects Versions: 5.2.5.Final
> Reporter: Radoslav Husar
> Assignee: Dan Berindei
>
> Components are still not keeping track of whether the cache is shutting down and holding off stopping the channel. This is biting us in AS on undeploy/shutdown.
> {code}
> [JBossINF] 07:07:58,044 ERROR [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread-8) Failed to start rebalance: org.infinispan.CacheException: Remote (perf18/web) failed unexpectedly: java.util.concurrent.ExecutionException: org.infinispan.CacheException: Remote (perf18/web) failed unexpectedly
> [JBossINF] at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232) [rt.jar:1.6.0_43]
> [JBossINF] at java.util.concurrent.FutureTask.get(FutureTask.java:91) [rt.jar:1.6.0_43]
> [JBossINF] at org.infinispan.topology.ClusterTopologyManagerImpl.executeOnClusterSync(ClusterTopologyManagerImpl.java:549)
> [JBossINF] at org.infinispan.topology.ClusterTopologyManagerImpl.broadcastRebalanceStart(ClusterTopologyManagerImpl.java:392)
> [JBossINF] at org.infinispan.topology.ClusterTopologyManagerImpl.startRebalance(ClusterTopologyManagerImpl.java:382)
> [JBossINF] at org.infinispan.topology.ClusterTopologyManagerImpl.access$000(ClusterTopologyManagerImpl.java:66)
> [JBossINF] at org.infinispan.topology.ClusterTopologyManagerImpl$1.call(ClusterTopologyManagerImpl.java:128)
> [JBossINF] at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) [rt.jar:1.6.0_43]
> [JBossINF] at java.util.concurrent.FutureTask.run(FutureTask.java:138) [rt.jar:1.6.0_43]
> [JBossINF] at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) [rt.jar:1.6.0_43]
> [JBossINF] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) [rt.jar:1.6.0_43]
> [JBossINF] at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_43]
> [JBossINF] Caused by: org.infinispan.CacheException: Remote (perf18/web) failed unexpectedly
> [JBossINF] at org.infinispan.remoting.transport.AbstractTransport.parseResponseAndAddToResponseList(AbstractTransport.java:99)
> [JBossINF] at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:541)
> [JBossINF] at org.infinispan.topology.ClusterTopologyManagerImpl$2.call(ClusterTopologyManagerImpl.java:531)
> [JBossINF] at org.infinispan.topology.ClusterTopologyManagerImpl$2.call(ClusterTopologyManagerImpl.java:528)
> [JBossINF] ... 5 more
> [JBossINF] Caused by: java.lang.IllegalStateException: transport was closed
> [JBossINF] at org.jgroups.blocks.GroupRequest.transportClosed(GroupRequest.java:273)
> [JBossINF] at org.jgroups.blocks.RequestCorrelator.stop(RequestCorrelator.java:269)
> [JBossINF] at org.jgroups.blocks.MessageDispatcher.stop(MessageDispatcher.java:152)
> [JBossINF] at org.jgroups.blocks.MessageDispatcher.channelDisconnected(MessageDispatcher.java:455)
> [JBossINF] at org.jgroups.Channel.notifyChannelDisconnected(Channel.java:507)
> [JBossINF] at org.jgroups.JChannel.disconnect(JChannel.java:363)
> [JBossINF] at org.infinispan.remoting.transport.jgroups.JGroupsTransport.stop(JGroupsTransport.java:258)
> [JBossINF] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.6.0_43]
> [JBossINF] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) [rt.jar:1.6.0_43]
> [JBossINF] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [rt.jar:1.6.0_43]
> [JBossINF] at java.lang.reflect.Method.invoke(Method.java:597) [rt.jar:1.6.0_43]
> [JBossINF] at org.infinispan.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:203)
> [JBossINF] at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:886)
> [JBossINF] at org.infinispan.factories.AbstractComponentRegistry.internalStop(AbstractComponentRegistry.java:693)
> [JBossINF] at org.infinispan.factories.AbstractComponentRegistry.stop(AbstractComponentRegistry.java:571)
> [JBossINF] at org.infinispan.factories.GlobalComponentRegistry.stop(GlobalComponentRegistry.java:260)
> [JBossINF] at org.infinispan.manager.DefaultCacheManager.stop(DefaultCacheManager.java:742)
> [JBossINF] at org.infinispan.manager.AbstractDelegatingEmbeddedCacheManager.stop(AbstractDelegatingEmbeddedCacheManager.java:179)
> [JBossINF] at org.jboss.as.clustering.infinispan.subsystem.EmbeddedCacheManagerService.stop(EmbeddedCacheManagerService.java:76)
> [JBossINF] at org.jboss.msc.service.ServiceControllerImpl$StopTask.stopService(ServiceControllerImpl.java:1911) [jboss-msc-1.0.4.GA-redhat-1.jar:1.0.4.GA-redhat-1]
> [JBossINF] at org.jboss.msc.service.ServiceControllerImpl$StopTask.run(ServiceControllerImpl.java:1874) [jboss-msc-1.0.4.GA-redhat-1.jar:1.0.4.GA-redhat-1]
> [JBossINF] ... 3 more
> {code}
> http://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/eap-6x-failover-http-...
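To illustrate the kind of guard the description asks for, a hypothetical sketch (not the actual Infinispan fix): components consult a shared stopping flag before touching the transport, instead of racing the shutdown.
{code}
// Hypothetical illustration only. A shared "are we stopping?" flag is
// checked before broadcasting a rebalance, so the topology thread does not
// hit IllegalStateException("transport was closed") during shutdown.
import java.util.concurrent.atomic.AtomicBoolean;

public class RebalanceGuardSketch {
    private final AtomicBoolean stopping = new AtomicBoolean(false);

    /** Called by the cache manager before the JGroups channel is disconnected. */
    public void beginShutdown() {
        stopping.set(true);
    }

    /** Called on the topology thread before broadcasting a rebalance start. */
    public void startRebalance(Runnable broadcast) {
        if (stopping.get()) {
            // Transport is about to close; skip the rebalance quietly.
            return;
        }
        broadcast.run();
    }
}
{code}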
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2477) AsyncStore shutdown can leak threads
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-2477?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-2477:
-------------------------------
Status: Resolved (was: Pull Request Sent)
Assignee: Adrian Nistor (was: Galder Zamarreño)
Resolution: Done
I think Adrian's updated pull request fixes the bug completely.
> AsyncStore shutdown can leak threads
> ------------------------------------
>
> Key: ISPN-2477
> URL: https://issues.jboss.org/browse/ISPN-2477
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 5.2.0.Beta3
> Reporter: Adrian Nistor
> Assignee: Adrian Nistor
> Fix For: 5.3.0.Final
>
>
> AsyncStore stop() should ensure that all threads (coordinator and workers) are shut down. Right now worker threads can be left hanging, with the coordinator thread waiting forever for them to end.
> coordinator.join(shutdownTimeout) in AsyncStore.stop() can time out, in which case both the coordinator and the worker threads will leak.
> We must ensure the coordinator is terminated and the ExecutorService used for worker threads is shut down by the time we exit AsyncStore.stop().
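A hypothetical sketch of the shutdown ordering described above; the names (coordinator, workerExecutor, shutdownTimeout) mirror the report, but this is not the actual AsyncStore code.
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

public class AsyncStoreStopSketch {
    private final Thread coordinator;
    private final ExecutorService workerExecutor;
    private final long shutdownTimeoutMillis;

    public AsyncStoreStopSketch(Thread coordinator, ExecutorService workerExecutor,
                                long shutdownTimeoutMillis) {
        this.coordinator = coordinator;
        this.workerExecutor = workerExecutor;
        this.shutdownTimeoutMillis = shutdownTimeoutMillis;
    }

    public void stop() throws InterruptedException {
        coordinator.join(shutdownTimeoutMillis);
        if (coordinator.isAlive()) {
            // join() timed out: interrupt rather than leak the thread.
            coordinator.interrupt();
            coordinator.join();
        }
        // Always shut the workers down, even if the coordinator misbehaved.
        workerExecutor.shutdown();
        if (!workerExecutor.awaitTermination(shutdownTimeoutMillis, TimeUnit.MILLISECONDS)) {
            workerExecutor.shutdownNow();
        }
    }
}
{code}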
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2962) Fix thread leaks in the core test suite
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-2962?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-2962:
-------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
I integrated Adrian's fixes, complete with a dump of the live threads at the end of the test suite (only with TRACE enabled) to make leaks easier to spot in the future.
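A minimal sketch of how such an end-of-suite dump can be produced with plain JDK calls; the TRACE gate and names are illustrative, not the actual test-suite code.
{code}
// Sketch: dump every live thread with its stack, gated behind a flag so it
// only runs when detailed tracing is wanted.
import java.util.Map;

public class ThreadLeakDump {
    private static final boolean TRACE = Boolean.getBoolean("trace.threads");

    public static void dumpLiveThreads() {
        if (!TRACE) return;
        for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
            Thread t = e.getKey();
            System.out.printf("\"%s\" daemon=%s state=%s%n",
                    t.getName(), t.isDaemon(), t.getState());
            for (StackTraceElement frame : e.getValue()) {
                System.out.println("\tat " + frame);
            }
        }
    }
}
{code}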
> Fix thread leaks in the core test suite
> ---------------------------------------
>
> Key: ISPN-2962
> URL: https://issues.jboss.org/browse/ISPN-2962
> Project: Infinispan
> Issue Type: Task
> Components: Test Suite
> Affects Versions: 5.2.5.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 5.3.0.Final
>
>
> The core test suite leaks several threads, which then keep their context classloaders alive and cause PermGen leaks.
> For example, I found these threads still alive at the end of a test run:
> {noformat}
> "Scheduled-eviction-thread-2486" daemon prio=10 tid=0x00007f2d6009e000 nid=0xecd waiting on condition [0x00007f2d48278000]
> "Scheduled-eviction-thread-2485" daemon prio=10 tid=0x00007f2d600b7800 nid=0xeb6 waiting on condition [0x00007f2d3f549000]
> "Scheduled-eviction-thread-2484" daemon prio=10 tid=0x00007f2d6005d000 nid=0xe9d waiting on condition [0x00007f2d22aca000]
> "AsyncStoreCoordinator-null" daemon prio=10 tid=0x0000000001cbf000 nid=0xb33 waiting on condition [0x00007f2d2b094000]
> "AsyncStoreCoordinator-null" daemon prio=10 tid=0x0000000001ca4000 nid=0xb1e waiting on condition [0x00007f2d3bbab000]
> "AsyncStoreCoordinator-null" daemon prio=10 tid=0x0000000001c9f000 nid=0xaee waiting on condition [0x00007f2db113b000]
> "AsyncStoreCoordinator-null" daemon prio=10 tid=0x0000000001cb7800 nid=0x8bf waiting on condition [0x00007f2d488d1000]
> "transport-thread-0,ReplSyncDistributedExecutorTest-NodeCD" daemon prio=10 tid=0x00007f2d64784000 nid=0x5308 waiting on condition [0x00007f2d7261d000]
> "transport-thread-0,DistributedExecutorWithCacheLoaderTest-NodeBN" daemon prio=10 tid=0x00007f2d64403800 nid=0x4b38 waiting on condition [0x00007f2d42a82000]
> "transport-thread-0,DistributedExecutorWithCacheLoaderTest-NodeBH" daemon prio=10 tid=0x00007f2d643f3800 nid=0x4840 waiting on condition [0x00007f2d35bf7000]
> "asyncTransportThread-0,ReplSyncDistributedExecutorWithTopologyAwareNodesTest-NodeAV" daemon prio=10 tid=0x00007f2d646a8800 nid=0x41ca waiting on condition [0x00007f2d3b552000]
> "transport-thread-0,DistributedExecutorWithCacheLoaderTest-NodeAP" daemon prio=10 tid=0x00007f2d644d6000 nid=0x3edc waiting on condition [0x00007f2d7290b000]
> "transport-thread-0,DistributedExecutorNonConcurrentTest-NodeAJ" daemon prio=10 tid=0x00007f2d645a0000 nid=0x3be6 waiting on condition [0x00007f2d39ce8000]
> "transport-thread-0,DistributedExecutorTest-NodeH" daemon prio=10 tid=0x00007f2d6406b000 nid=0x2af6 waiting on condition [0x00007f2d4261d000]
> "transport-thread-0,DistributedExecutorTest-NodeD" daemon prio=10 tid=0x00007f2d6410f000 nid=0x283f waiting on condition [0x00007f2d43e87000]
> "transport-thread-0,InDoubtXidReturnedOnceTest-NodeC" daemon prio=10 tid=0x00007f2d64621000 nid=0x188b waiting on condition [0x00007f2d4a988000]
> "Scheduled-eviction-thread-409" daemon prio=10 tid=0x00007f2d600b5000 nid=0x6ac3 waiting on condition [0x00007f2d2cbec000]
> "Scheduled-eviction-thread-403" daemon prio=10 tid=0x00007f2d600ad000 nid=0x6a35 waiting on condition [0x00007f2d2924e000]
> "Scheduled-eviction-thread-396" daemon prio=10 tid=0x00007f2d60089000 nid=0x698c waiting on condition [0x00007f2d30a6c000]
> "Scheduled-eviction-thread-390" daemon prio=10 tid=0x00007f2d60052000 nid=0x6903 waiting on condition [0x00007f2d3d703000]
> "pool-292-thread-2" prio=10 tid=0x00007f2d6c0bb000 nid=0x5cb3 waiting on condition [0x00007f2db0218000]
> "pool-292-thread-1" prio=10 tid=0x00007f2d6c073000 nid=0x5cb2 waiting on condition [0x00007f2d3aef9000]
> {noformat}
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2955) Async marshalling executor retry when queue fills
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-2955?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-2955:
------------------------------------
[~mircea.markus] Using CallerRunsPolicy would make the caller thread block for the entire duration of the RPC, not just while the command is serialized. So it could lead to much bigger delays in the caller thread than CallerBlocksPolicy (which would block only until the fastest RPC finished).
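A small self-contained demonstration of the difference: with CallerRunsPolicy, once the pool and its queue are full, the submitting thread executes the task itself, so it blocks for the task's full duration rather than just the hand-off.
{code}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CallerRunsDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(1),
                new ThreadPoolExecutor.CallerRunsPolicy());
        for (int i = 0; i < 4; i++) {
            final int n = i;
            // Once the single worker and the one queue slot are busy, the
            // caller runs the task inline -- it blocks for the full duration
            // of the task (here, a stand-in for the whole RPC).
            pool.execute(new Runnable() {
                public void run() {
                    System.out.println(Thread.currentThread().getName() + " runs task " + n);
                }
            });
        }
        pool.shutdown();
    }
}
{code}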
> Async marshalling executor retry when queue fills
> -------------------------------------------------
>
> Key: ISPN-2955
> URL: https://issues.jboss.org/browse/ISPN-2955
> Project: Infinispan
> Issue Type: Enhancement
> Components: Marshalling
> Affects Versions: 5.2.5.Final
> Reporter: Manik Surtani
> Assignee: Manik Surtani
> Fix For: 5.3.0.Alpha1, 5.3.0.Final
>
>
> When using an async transport and async marshalling, an executor is used to process the marshalling task in a separate thread and the caller's thread is allowed to return immediately.
> When the executor's queue fills and cannot accept any more tasks, it throws a {{RejectedExecutionException}}, causing a very bad user/developer experience. A more correct approach is to catch the {{RejectedExecutionException}}, block, and retry the task submission.
> The end result is that, in the degenerate case (when the executor queue is full), those invocations will perform slightly slower instead of throwing exceptions.
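A hypothetical sketch of the block-and-retry submission the enhancement describes; not the actual Infinispan change.
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.RejectedExecutionException;

public class RetryingSubmitter {
    /** Keep retrying until the executor's queue has room again. */
    public static void submitWithRetry(ExecutorService executor, Runnable task)
            throws InterruptedException {
        while (true) {
            try {
                executor.execute(task);
                return;
            } catch (RejectedExecutionException full) {
                // Queue is full: back off briefly, then retry instead of
                // propagating the exception to the caller.
                Thread.sleep(10);
            }
        }
    }
}
{code}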
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira