[JBoss JIRA] (ISPN-10138) JMS ClassCastException
by worldline_ge_dev worldline_ge_dev (Jira)
worldline_ge_dev worldline_ge_dev created ISPN-10138:
--------------------------------------------------------
Summary: JMS ClassCastException
Key: ISPN-10138
URL: https://issues.jboss.org/browse/ISPN-10138
Project: Infinispan
Issue Type: Bug
Components: Listeners
Affects Versions: 9.4.4.Final
Environment: JBoss 6.4.10.GA
Infinispan 9.4.4.FINAL
hornetq 2.3.25.FINAL
jdk 1.8.0_92
Reporter: worldline_ge_dev worldline_ge_dev
We have a malfunction that we believe is a bug, but we are not sure whether it is a JBoss, HornetQ or Infinispan issue. Here is the situation:
Two WAR applications run on one JBoss instance; one sends data into a JMS queue, the other consumes it. On startup, each application registers itself in a replicated Infinispan cache.
The Infinispan subsystem is excluded in jboss-deployment-structure.xml and the Infinispan libraries are in the lib folder of each application. When we start JBoss and both applications are deployed simultaneously,
everything is fine. When the consuming app is deployed first and the sending app afterwards, we observe the following error in the last line of this code in our MessageListener:
{code:java}
public void process(Message inMessage) {
    DistributionMetadata metadata = null;
    log.info("DistributionMetadata.class.getClassLoader().toString(): " + DistributionMetadata.class.getClassLoader().toString());
    try {
        Object obj = ((ObjectMessage) inMessage).getObject();
        log.info("obj.getClass().getClassLoader().toString(): " + obj.getClass().getClassLoader().toString());
        metadata = (DistributionMetadata) obj;
{code}
{noformat}
INFO 12:01:41,805 (Thread-9 (HornetQ-client-global-threads-1054694434)) (JmsConsumer.java:process:214) -
DistributionMetadata.class.getClassLoader().toString(): ModuleClassLoader for Module "deployment.msp.war:main" from Service Module Loader
obj.getClass().getClassLoader().toString(): ModuleClassLoader for Module "deployment.bvn-idx-routing.war:main" from Service Module Loader
ERROR 12:01:41,805 (Thread-9 (HornetQ-client-global-threads-1054694434)) (JmsConsumer.java:process:244) -FAILED: com.equensworldline.jms.entities.DistributionMetadata cannot be cast to com.equensworldline.jms.entities.DistributionMetadata: java.lang.ClassCastException: com.equensworldline.jms.entities.DistributionMetadata cannot be cast to com.equensworldline.jms.entities.DistributionMetadata
    at com.equensworldline.jms.api.external.JmsConsumer.process(JmsConsumer.java:217)
    at com.equensworldline.correlationidmgmt.jee.CorrelationIDMessageListener.onMessage(CorrelationIDMessageListener.java:32)
    at org.hornetq.jms.client.JMSMessageListenerWrapper.onMessage(JMSMessageListenerWrapper.java:100)
    at org.hornetq.core.client.impl.ClientConsumerImpl.callOnMessage(ClientConsumerImpl.java:1123)
    at org.hornetq.core.client.impl.ClientConsumerImpl.access$500(ClientConsumerImpl.java:57)
    at org.hornetq.core.client.impl.ClientConsumerImpl$Runner.run(ClientConsumerImpl.java:1258)
    at org.hornetq.utils.OrderedExecutorFactory$OrderedExecutor$1.run(OrderedExecutorFactory.java:105)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
{noformat}
As you can see, the DistributionMetadata class is loaded by two different classloaders, which causes this error.
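One generic way to neutralise this kind of two-classloader mismatch is to re-serialize the received object and deserialize it again against the consuming application's classloader. The helper below is only an illustrative sketch; the class ClassLoaderRehydrator and its method are invented for this example and are not part of the report.
{code:java}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.ObjectStreamClass;
import java.io.Serializable;

// Illustrative sketch: produce an equivalent instance whose class is resolved
// by the given target classloader instead of the sender's classloader.
public final class ClassLoaderRehydrator {

    public static Object rehydrate(Serializable source, ClassLoader target)
            throws IOException, ClassNotFoundException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
            out.writeObject(source);
        }
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(buffer.toByteArray())) {
            @Override
            protected Class<?> resolveClass(ObjectStreamClass desc) throws IOException, ClassNotFoundException {
                // Resolve classes against the consumer's classloader.
                return Class.forName(desc.getName(), false, target);
            }
        }) {
            return in.readObject();
        }
    }
}
{code}
In the process() method above, obj could then be passed through rehydrate(obj, DistributionMetadata.class.getClassLoader()) before the cast, at the cost of an extra serialization round trip.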
The remedy we found is very strange and at first sight has nothing to do with the failure:
We have registered an Infinispan cache listener, and within the @CacheEntryCreated event we start a JMS listener and access the cache. Because we observed org.infinispan.util.concurrent.TimeoutException,
we do this asynchronously, following the advice in [https://developer.jboss.org/thread/268919].
{code:java}
CompletableFuture.runAsync(() -> regService.startListener(event.getValue()));
{code}
When we change back to a synchronous call, everything works and the ClassCastException does not occur. It seems that either Infinispan, HornetQ or JBoss does something odd with the classloaders in the situation described.
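One factor worth noting: CompletableFuture.runAsync without an explicit executor runs the task on the common ForkJoinPool, whose threads do not necessarily carry the web application's context classloader. Whether that is the root cause here is not confirmed; the following is only a sketch of pinning the consuming deployment's classloader around the asynchronous call (regService, event and DistributionMetadata are the names used in the snippets above):
{code:java}
// Sketch only: start the listener asynchronously, but with the consuming
// application's classloader set as the worker thread's context classloader.
ClassLoader deploymentCl = DistributionMetadata.class.getClassLoader();
CompletableFuture.runAsync(() -> {
    Thread worker = Thread.currentThread();
    ClassLoader previous = worker.getContextClassLoader();
    worker.setContextClassLoader(deploymentCl);
    try {
        regService.startListener(event.getValue());
    } finally {
        worker.setContextClassLoader(previous);
    }
});
{code}
An alternative along the same lines is to pass an explicit, application-managed executor to runAsync instead of relying on the common pool.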
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-9588) JGroups fails to install new cluster view after coordinator leave
by Dan Berindei (Jira)
[ https://issues.jboss.org/browse/ISPN-9588?page=com.atlassian.jira.plugin.... ]
Dan Berindei resolved ISPN-9588.
--------------------------------
Fix Version/s: 10.0.0.Beta1
9.4.7.Final
Resolution: Done
Fixed by upgrading to JGroups 4.0.18.Final.
The more recent failure was indeed caused by ISPN-9291.
> JGroups fails to install new cluster view after coordinator leave
> -----------------------------------------------------------------
>
> Key: ISPN-9588
> URL: https://issues.jboss.org/browse/ISPN-9588
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Test Suite - Core
> Affects Versions: 9.4.0.CR3
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Major
> Fix For: 10.0.0.Beta1, 9.4.7.Final
>
> Attachments: ISPN-9127_ScatteredCrashInSequenceTest-infinispan-core.log.gz, ISPN-9127_ThreeWaySplitAndMergeTest-infinispan-core.log.gz, ThreeWaySplitAndMergeTest.clearContent.log.gz
>
>
> The core test suite normally shuts down cache managers in the reverse order of their start, so that the coordinator stops last. This should be just an optimization to reduce the number of coordinator changes, and with them the test suite duration and log size.
> Unfortunately, the few tests that stop the coordinator first routinely fail to stop the cluster properly; usually, when there are 2 nodes left in the cluster, the 2nd node doesn't receive the new view:
> {noformat}
> 10:08:01,656 DEBUG (jgroups-26,Test-NodeB-20661:[]) [GMS] Test-NodeB-20661: installing view [Test-NodeC-25266|33] (3) [Test-NodeC-25266, Test-NodeA-8936, Test-NodeB-20661]
> 10:08:01,862 TRACE (testng-Test:[]) [GMS] Test-NodeC-25266: sending LEAVE request to Test-NodeA-8936
> 10:08:01,863 DEBUG (jgroups-6,Test-NodeA-8936:[]) [GMS] Test-NodeA-8936: members are (3) Test-NodeC-25266,Test-NodeA-8936,Test-NodeB-20661, coord=Test-NodeA-8936: I'm the new coordinator
> 10:08:01,896 TRACE (jgroups-6,Test-NodeA-8936:[]) [GMS] Test-NodeA-8936: joiners=[], suspected=[], leaving=[Test-NodeC-25266], new view: [Test-NodeA-8936|34] (2) [Test-NodeA-8936, Test-NodeB-20661]
> 10:08:01,896 TRACE (jgroups-6,Test-NodeA-8936:[]) [GMS] Test-NodeA-8936: mcasting view [Test-NodeA-8936|34] (2) [Test-NodeA-8936, Test-NodeB-20661]
> 10:08:01,900 DEBUG (jgroups-6,Test-NodeA-8936:[]) [GMS] Test-NodeA-8936: installing view [Test-NodeA-8936|34] (2) [Test-NodeA-8936, Test-NodeB-20661]
> 10:08:02,245 TRACE (testng-Test:[]) [JGroupsTransport] Test-NodeB-20661 sending request 140 to Test-NodeC-25266: CacheTopologyControlCommand{cache=org.infinispan.CONFIG, type=LEAVE, sender=Test-NodeB-20661, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, phase=null, actualMembers=null, throwable=null, viewId=33}
> 10:12:02,247 DEBUG (testng-Test:[]) [LocalTopologyManagerImpl] Error sending the leave request for cache org.infinispan.CONFIG to coordinator
> org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 140 from 7d5d965d-eacc-fd47-2ab8-5cec95f4c10b
> {noformat}
> Sometimes the new coordinator gets stuck in a merge loop, keeps trying to re-include the stopped node and fails:
> {noformat}
> 10:06:21,484 DEBUG (jgroups-24,Test-NodeC-57941:[]) [GMS] Test-NodeC-57941: installing view [Test-NodeD-37696|8] (3) [Test-NodeD-37696, Test-NodeB-39137, Test-NodeC-57941]
> 10:06:21,486 DEBUG (jgroups-25,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: installing view [Test-NodeD-37696|8] (3) [Test-NodeD-37696, Test-NodeB-39137, Test-NodeC-57941]
> 10:06:22,070 TRACE (testng-Test:[]) [GMS] Test-NodeD-37696: sending LEAVE request to Test-NodeB-39137
> 10:06:22,078 TRACE (jgroups-10,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: mcasting view [Test-NodeB-39137|9] (2) [Test-NodeB-39137, Test-NodeC-57941]
> 10:06:22,080 DEBUG (jgroups-10,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: installing view [Test-NodeB-39137|9] (2) [Test-NodeB-39137, Test-NodeC-57941]
> 10:06:30,887 DEBUG (jgroups-24,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: I will be the merge leader. Starting the merge task. Views: {Test-NodeB-39137=[Test-NodeB-39137|9] (2) [Test-NodeB-39137, Test-NodeC-57941], Test-NodeC-57941=[Test-NodeD-37696|8] (3) [Test-NodeD-37696, Test-NodeB-39137, Test-NodeC-57941]}
> 10:06:30,888 DEBUG (MergeTask-108,Test-NodeB-39137:[]) [STABLE] suspending message garbage collection
> 10:06:30,888 DEBUG (MergeTask-108,Test-NodeB-39137:[]) [STABLE] Test-NodeB-39137: resume task started, max_suspend_time=220000
> 10:06:30,888 DEBUG (MergeTask-108,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: merge task Test-NodeB-39137::7 started with 2 participants
> 10:06:30,888 TRACE (MergeTask-108,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: sending MERGE_REQ to [Test-NodeD-37696, Test-NodeB-39137]
> 10:06:36,041 TRACE (MergeTask-108,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: collected 1 merge response(s) in 5153 ms
> 10:06:36,041 DEBUG (MergeTask-108,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: merge leader Test-NodeB-39137 did not get responses from all 2 partition coordinators; missing responses from 1 members, removing them from the merge
> 10:06:36,043 TRACE (MergeTask-108,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: consolidated view=MergeView::[Test-NodeB-39137|10] (2) [Test-NodeB-39137, Test-NodeC-57941], 1 subgroups: [Test-NodeB-39137|9] (2) [Test-NodeB-39137, Test-NodeC-57941]
> consolidated digest=Test-NodeB-39137: [15 (16)], Test-NodeC-57941: [5 (5)]
> 10:06:36,043 DEBUG (MergeTask-108,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: installing merge view [Test-NodeB-39137|10] (2 members) in 1 coords
> 10:06:36,043 DEBUG (MergeTask-108,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: merge Test-NodeB-39137::7 took 5155 ms
> 10:06:36,043 TRACE (jgroups-24,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: mcasting view MergeView::[Test-NodeB-39137|10] (2) [Test-NodeB-39137, Test-NodeC-57941], 1 subgroups: [Test-NodeB-39137|9] (2) [Test-NodeB-39137, Test-NodeC-57941]
> 10:06:49,877 DEBUG (MergeTask-229,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: merge leader Test-NodeB-39137 did not get responses from all 2 partition coordinators; missing responses from 1 members, removing them from the merge
> 10:07:03,755 DEBUG (MergeTask-342,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: merge leader Test-NodeB-39137 did not get responses from all 2 partition coordinators; missing responses from 1 members, removing them from the merge
> 10:07:17,703 DEBUG (MergeTask-447,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: merge leader Test-NodeB-39137 did not get responses from all 2 partition coordinators; missing responses from 1 members, removing them from the merge
> 10:07:31,547 DEBUG (MergeTask-568,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: merge leader Test-NodeB-39137 did not get responses from all 2 partition coordinators; missing responses from 1 members, removing them from the merge
> 10:07:45,381 DEBUG (MergeTask-688,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: merge leader Test-NodeB-39137 did not get responses from all 2 partition coordinators; missing responses from 1 members, removing them from the merge
> 10:07:59,196 DEBUG (MergeTask-806,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: merge leader Test-NodeB-39137 did not get responses from all 2 partition coordinators; missing responses from 1 members, removing them from the merge
> 10:08:13,031 DEBUG (MergeTask-932,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: merge leader Test-NodeB-39137 did not get responses from all 2 partition coordinators; missing responses from 1 members, removing them from the merge
> 10:08:26,885 DEBUG (MergeTask-1061,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: merge leader Test-NodeB-39137 did not get responses from all 2 partition coordinators; missing responses from 1 members, removing them from the merge
> 10:08:40,697 DEBUG (MergeTask-1197,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: merge leader Test-NodeB-39137 did not get responses from all 2 partition coordinators; missing responses from 1 members, removing them from the merge
> 10:08:54,510 DEBUG (MergeTask-1330,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: merge leader Test-NodeB-39137 did not get responses from all 2 partition coordinators; missing responses from 1 members, removing them from the merge
> 10:09:08,324 DEBUG (MergeTask-1463,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: merge leader Test-NodeB-39137 did not get responses from all 2 partition coordinators; missing responses from 1 members, removing them from the merge
> 10:09:22,135 DEBUG (MergeTask-1596,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: merge leader Test-NodeB-39137 did not get responses from all 2 partition coordinators; missing responses from 1 members, removing them from the merge
> 10:09:35,953 DEBUG (MergeTask-1729,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: merge leader Test-NodeB-39137 did not get responses from all 2 partition coordinators; missing responses from 1 members, removing them from the merge
> 10:09:49,770 DEBUG (MergeTask-1857,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: merge leader Test-NodeB-39137 did not get responses from all 2 partition coordinators; missing responses from 1 members, removing them from the merge
> 10:10:03,609 DEBUG (MergeTask-1986,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: merge leader Test-NodeB-39137 did not get responses from all 2 partition coordinators; missing responses from 1 members, removing them from the merge
> 10:10:17,420 DEBUG (MergeTask-2117,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: merge leader Test-NodeB-39137 did not get responses from all 2 partition coordinators; missing responses from 1 members, removing them from the merge
> 10:10:31,252 DEBUG (MergeTask-2248,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: merge leader Test-NodeB-39137 did not get responses from all 2 partition coordinators; missing responses from 1 members, removing them from the merge
> 10:10:45,099 DEBUG (MergeTask-2383,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: merge leader Test-NodeB-39137 did not get responses from all 2 partition coordinators; missing responses from 1 members, removing them from the merge
> 10:10:58,916 DEBUG (MergeTask-2516,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: merge leader Test-NodeB-39137 did not get responses from all 2 partition coordinators; missing responses from 1 members, removing them from the merge
> 10:11:12,743 DEBUG (MergeTask-2648,Test-NodeB-39137:[]) [GMS] Test-NodeB-39137: merge leader Test-NodeB-39137 did not get responses from all 2 partition coordinators; missing responses from 1 members, removing them from the merge
> {noformat}
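For reference, the reverse-order shutdown convention mentioned at the top of the description amounts to something like the following sketch (illustrative only; the helper is invented and is not the actual test suite code):
{code:java}
import java.util.List;

import org.infinispan.manager.EmbeddedCacheManager;

// Illustrative sketch: stop cache managers in reverse start order so that the
// original coordinator leaves last, minimising view changes during teardown.
final class ReverseOrderTeardown {
    static void stopAll(List<EmbeddedCacheManager> managersInStartOrder) {
        for (int i = managersInStartOrder.size() - 1; i >= 0; i--) {
            managersInStartOrder.get(i).stop();
        }
    }
}
{code}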
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-10137) Component dependency injection should not use reflection
by Dan Berindei (Jira)
Dan Berindei created ISPN-10137:
-----------------------------------
Summary: Component dependency injection should not use reflection
Key: ISPN-10137
URL: https://issues.jboss.org/browse/ISPN-10137
Project: Infinispan
Issue Type: Enhancement
Components: Core
Affects Versions: 10.0.0.Beta3, 9.4.12.Final
Reporter: Dan Berindei
Assignee: Dan Berindei
Fix For: 10.0.0.Beta4
Quarkus allows reflection, but "[t]his is normally achieved by listing every class, method, field and constructor in a JSON file, and passing this as a parameter into the native image build", so it would be much better if we generated code to perform the injection without reflection.
Because the generated code needs to obey Java's accessibility rules and generating code in the same class is impractical, private fields and methods annotated {{@Inject}} will not be supported.
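For illustration only, the generated code could look roughly like the sketch below; the component and injector class names are invented and are not actual Infinispan classes. It also shows why private members cannot be supported: a separate generated class can only reach package-private (or more visible) injection points.
{code:java}
import java.util.concurrent.Executor;

import org.infinispan.factories.annotations.Inject;

// Hypothetical component: the injection point is package-private rather than
// private, so generated code in the same package can assign it directly.
class ExampleComponent {
   @Inject Executor asyncExecutor;
}

// Sketch of what a generator could emit next to the component, replacing the
// reflective field write with a plain assignment.
final class ExampleComponentInjector {
   static void inject(ExampleComponent instance, Executor asyncExecutor) {
      instance.asyncExecutor = asyncExecutor;
   }
}
{code}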
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-10137) Component dependency injection should not use reflection
by Dan Berindei (Jira)
[ https://issues.jboss.org/browse/ISPN-10137?page=com.atlassian.jira.plugin... ]
Dan Berindei updated ISPN-10137:
--------------------------------
Status: Open (was: New)
> Component dependency injection should not use reflection
> --------------------------------------------------------
>
> Key: ISPN-10137
> URL: https://issues.jboss.org/browse/ISPN-10137
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core
> Affects Versions: 9.4.12.Final, 10.0.0.Beta3
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Major
> Fix For: 10.0.0.Beta4
>
>
> Quarkus allows reflection, but "[t]his is normally achieved by listing every class, method, field and constructor in a JSON file, and passing this as a parameter into the native image build", so it would be much better if we generated code to perform the injection without reflection.
> Because the generated code needs to obey Java's accessibility rules and generating code in the same class is impractical, private fields and methods annotated {{@Inject}} will not be supported.
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-10136) Spurious dependency cycle detected error
by Dan Berindei (Jira)
[ https://issues.jboss.org/browse/ISPN-10136?page=com.atlassian.jira.plugin... ]
Dan Berindei updated ISPN-10136:
--------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/6855
> Spurious dependency cycle detected error
> ----------------------------------------
>
> Key: ISPN-10136
> URL: https://issues.jboss.org/browse/ISPN-10136
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.4.6.Final, 10.0.0.Beta3
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Major
> Fix For: 10.0.0.Beta4
>
>
> {{BasicComponentRegistryImpl}} keeps track of which threads are currently wiring or starting components in order to make the error messages more user-friendly. In rare cases, however, the tracking information is not updated after a failure, and a spurious dependency cycle is logged:
> {noformat}
> 14:56:09,073 WARN [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (jgroups-30,vlhebwmpr04-infinispan1) ISPN000071: Caught exception when handling command SingleRpcCommand{cacheName='offer-templates', command=PutKeyValueCommand{key=WrappedByteArray{bytes=[B0x0101290B033E0931..[16], hashCode=-798243737}, value=WrappedByteArray{bytes=[B0x01012A2962030409..[10856], hashCode=0}, flags=[IGNORE_RETURN_VALUES], commandInvocationId=CommandInvocation:vlkrbwmpr01-infinispan1:1968005, putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=162205431146, maxIdle=-1, version=NumericVersion{version=159314845408592057}}, successful=true, topologyId=2329}}: org.infinispan.commons.CacheConfigurationException: Dependency cycle detected, please use ComponentRef<T> to break the cycle in path org.infinispan.interceptors.AsyncInterceptorChain (a org.infinispan.interceptors.impl.AsyncInterceptorChainImpl)
> << org.infinispan.expiration.impl.InternalExpirationManager (a org.infinispan.expiration.impl.ClusterExpirationManager)
> << org.infinispan.container.impl.InternalDataContainer (a org.infinispan.container.impl.BoundedSegmentedDataContainer)
> << org.infinispan.commands.CommandsFactory (a org.infinispan.commands.CommandsFactoryImpl)
> << org.infinispan.distribution.L1Manager (a org.infinispan.distribution.impl.L1ManagerImpl)
> << org.infinispan.distribution.RemoteValueRetrievedListener (a org.infinispan.factories.impl.ComponentAlias)
> << org.infinispan.interceptors.distribution.NonTxDistributionInterceptor (a org.infinispan.interceptors.distribution.NonTxDistributionInterceptor)
> << org.infinispan.interceptors.AsyncInterceptorChain (a org.infinispan.interceptors.impl.AsyncInterceptorChainImpl)
> at org.infinispan.factories.impl.BasicComponentRegistryImpl.awaitWrapperState(BasicComponentRegistryImpl.java:646)
> at org.infinispan.factories.impl.BasicComponentRegistryImpl.startWrapper(BasicComponentRegistryImpl.java:498)
> at org.infinispan.factories.impl.BasicComponentRegistryImpl$ComponentWrapper.running(BasicComponentRegistryImpl.java:714)
> at org.infinispan.commands.CommandsFactoryImpl.initializeReplicableCommand(CommandsFactoryImpl.java:397)
> at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.initializeCacheRpcCommand(GlobalInboundInvocationHandler.java:127)
> at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleCacheRpcCommand(GlobalInboundInvocationHandler.java:119)
> at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleFromCluster(GlobalInboundInvocationHandler.java:74)
> {noformat}
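The bookkeeping pattern involved looks roughly like the sketch below (names invented; this is not the actual BasicComponentRegistryImpl code): the per-thread tracking entry has to be cleared even when wiring fails, otherwise a later operation on the same thread can be misreported as a cycle.
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Invented sketch of per-thread "currently wiring" bookkeeping.
final class WiringTracker {
    private final Map<Thread, String> wiringInProgress = new ConcurrentHashMap<>();

    void wire(String componentName, Runnable doWire) {
        Thread self = Thread.currentThread();
        wiringInProgress.put(self, componentName);
        try {
            doWire.run();
        } finally {
            // Clearing in a finally block keeps the tracking correct after failures,
            // avoiding spurious "dependency cycle" reports later on the same thread.
            wiringInProgress.remove(self);
        }
    }
}
{code}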
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-9979) AbstractComponentRegistry.stop() can hang if called concurrently
by Dan Berindei (Jira)
[ https://issues.jboss.org/browse/ISPN-9979?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-9979:
-------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/6855
> AbstractComponentRegistry.stop() can hang if called concurrently
> ----------------------------------------------------------------
>
> Key: ISPN-9979
> URL: https://issues.jboss.org/browse/ISPN-9979
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 9.4.3.Final
> Reporter: Philippe Julien
> Assignee: Dan Berindei
> Priority: Major
>
> I believe that there is a bug in org.infinispan.factories.AbstractComponentRegistry.stop().
> Our WildFly 15 nodes often hang on shutdown with the following stack:
> {noformat}
> "MSC service thread 1-2" #37 prio=5 os_prio=0 tid=0x0000000003807000 nid=0xf32d in Object.wait() [0x00007f0012c6f000]
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0x00000005cc795fa8> (a org.infinispan.factories.ComponentRegistry)
> at java.lang.Object.wait(Object.java:502)
> at org.infinispan.factories.AbstractComponentRegistry.stop(AbstractComponentRegistry.java:359)
> - locked <0x00000005cc795fa8> (a org.infinispan.factories.ComponentRegistry)
> at org.infinispan.cache.impl.CacheImpl.performImmediateShutdown(CacheImpl.java:1159)
> at org.infinispan.cache.impl.CacheImpl.stop(CacheImpl.java:1124)
> at org.infinispan.cache.impl.AbstractDelegatingCache.stop(AbstractDelegatingCache.java:520)
> at org.infinispan.manager.DefaultCacheManager.terminate(DefaultCacheManager.java:734)
> at org.infinispan.manager.DefaultCacheManager.stopCaches(DefaultCacheManager.java:786)
> at org.infinispan.manager.DefaultCacheManager.stop(DefaultCacheManager.java:762)
> at org.jboss.as.clustering.infinispan.subsystem.CacheContainerServiceConfigurator.accept(CacheContainerServiceConfigurator.java:114)
> at org.jboss.as.clustering.infinispan.subsystem.CacheContainerServiceConfigurator.accept(CacheContainerServiceConfigurator.java:70)
> at org.wildfly.clustering.service.FunctionalService.stop(FunctionalService.java:77)
> at org.jboss.msc.service.ServiceControllerImpl$StopTask.stopService(ServiceControllerImpl.java:1794)
> at org.jboss.msc.service.ServiceControllerImpl$StopTask.execute(ServiceControllerImpl.java:1763)
> at org.jboss.msc.service.ServiceControllerImpl$ControllerTask.run(ServiceControllerImpl.java:1558)
> at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
> at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1985)
> at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1487)
> at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1364)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> There are no other threads that are working with 0x00000005cc795fa8 in the thread dump.
> I checked in a heap dump and the 0x00000005cc795fa8 org.infinispan.factories.ComponentRegistry object state is TERMINATED.
> I think that the finally block of AbstractComponentRegistry.stop() is missing a notifyAll().
> Shouldn't this:
> {code}
> ...
> } finally {
>    synchronized (this) {
>       state = ComponentStatus.TERMINATED;
>    }
> }
> {code}
> Be:
> {code}
> ...
> } finally {
>    synchronized (this) {
>       state = ComponentStatus.TERMINATED;
>       notifyAll();
>    }
> }
> {code}
> This way, if two threads try to stop the same cache, the one that wins will notify the one that is waiting, letting its own stop() call complete.
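To illustrate why the proposed notifyAll() matters, here is an invented sketch of the pattern (not the actual AbstractComponentRegistry code): a concurrent stop() call waits on the monitor until the state becomes TERMINATED, so the winning call has to wake it.
{code:java}
// Invented sketch: two threads call stop(); the loser waits until the winner
// flips the state to TERMINATED and calls notifyAll().
final class StoppableRegistry {
    enum State { RUNNING, STOPPING, TERMINATED }

    private State state = State.RUNNING;

    void stop() throws InterruptedException {
        synchronized (this) {
            if (state == State.TERMINATED) {
                return;
            }
            if (state == State.STOPPING) {
                // The losing caller parks here; without notifyAll() below it may wait forever.
                while (state != State.TERMINATED) {
                    wait();
                }
                return;
            }
            state = State.STOPPING;
        }
        try {
            // ... the winning caller performs the actual shutdown work here ...
        } finally {
            synchronized (this) {
                state = State.TERMINATED;
                notifyAll();   // wake any concurrent stop() caller
            }
        }
    }
}
{code}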
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-9979) AbstractComponentRegistry.stop() can hang if called concurrently
by Dan Berindei (Jira)
[ https://issues.jboss.org/browse/ISPN-9979?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-9979:
------------------------------------
[~philjulien] you are right, we need {{notifyAll()}} there.
> AbstractComponentRegistry.stop() can hang if called concurrently
> ----------------------------------------------------------------
>
> Key: ISPN-9979
> URL: https://issues.jboss.org/browse/ISPN-9979
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 9.4.3.Final
> Reporter: Philippe Julien
> Assignee: Dan Berindei
> Priority: Major
>
> I believe that there is a bug in org.infinispan.factories.AbstractComponentRegistry.stop().
> Our WildFly 15 nodes often hang on shutdown with the following stack:
> {noformat}
> "MSC service thread 1-2" #37 prio=5 os_prio=0 tid=0x0000000003807000 nid=0xf32d in Object.wait() [0x00007f0012c6f000]
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0x00000005cc795fa8> (a org.infinispan.factories.ComponentRegistry)
> at java.lang.Object.wait(Object.java:502)
> at org.infinispan.factories.AbstractComponentRegistry.stop(AbstractComponentRegistry.java:359)
> - locked <0x00000005cc795fa8> (a org.infinispan.factories.ComponentRegistry)
> at org.infinispan.cache.impl.CacheImpl.performImmediateShutdown(CacheImpl.java:1159)
> at org.infinispan.cache.impl.CacheImpl.stop(CacheImpl.java:1124)
> at org.infinispan.cache.impl.AbstractDelegatingCache.stop(AbstractDelegatingCache.java:520)
> at org.infinispan.manager.DefaultCacheManager.terminate(DefaultCacheManager.java:734)
> at org.infinispan.manager.DefaultCacheManager.stopCaches(DefaultCacheManager.java:786)
> at org.infinispan.manager.DefaultCacheManager.stop(DefaultCacheManager.java:762)
> at org.jboss.as.clustering.infinispan.subsystem.CacheContainerServiceConfigurator.accept(CacheContainerServiceConfigurator.java:114)
> at org.jboss.as.clustering.infinispan.subsystem.CacheContainerServiceConfigurator.accept(CacheContainerServiceConfigurator.java:70)
> at org.wildfly.clustering.service.FunctionalService.stop(FunctionalService.java:77)
> at org.jboss.msc.service.ServiceControllerImpl$StopTask.stopService(ServiceControllerImpl.java:1794)
> at org.jboss.msc.service.ServiceControllerImpl$StopTask.execute(ServiceControllerImpl.java:1763)
> at org.jboss.msc.service.ServiceControllerImpl$ControllerTask.run(ServiceControllerImpl.java:1558)
> at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
> at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1985)
> at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1487)
> at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1364)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> There are no other threads that are working with 0x00000005cc795fa8 in the thread dump.
> I checked in a heap dump and the 0x00000005cc795fa8 org.infinispan.factories.ComponentRegistry object state is TERMINATED.
> I think that the finally block of AbstractComponentRegistry.stop() is missing a notifyAll().
> Shouldn't this:
> {code}
> ...
> } finally {
>    synchronized (this) {
>       state = ComponentStatus.TERMINATED;
>    }
> }
> {code}
> Be:
> {code}
> ...
> } finally {
>    synchronized (this) {
>       state = ComponentStatus.TERMINATED;
>       notifyAll();
>    }
> }
> {code}
> This way, if two threads try to stop the same cache, the one that wins will notify the one that is waiting, letting its own stop() call complete.
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-9979) AbstractComponentRegistry.stop() can hang if called concurrently
by Dan Berindei (Jira)
[ https://issues.jboss.org/browse/ISPN-9979?page=com.atlassian.jira.plugin.... ]
Dan Berindei reassigned ISPN-9979:
----------------------------------
Assignee: Dan Berindei
> AbstractComponentRegistry.stop() can hang if called concurrently
> ----------------------------------------------------------------
>
> Key: ISPN-9979
> URL: https://issues.jboss.org/browse/ISPN-9979
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 9.4.3.Final
> Reporter: Philippe Julien
> Assignee: Dan Berindei
> Priority: Major
>
> I believe that there is a bug in org.infinispan.factories.AbstractComponentRegistry.stop().
> Our WildFly 15 nodes often hang on shutdown with the following stack:
> {noformat}
> "MSC service thread 1-2" #37 prio=5 os_prio=0 tid=0x0000000003807000 nid=0xf32d in Object.wait() [0x00007f0012c6f000]
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0x00000005cc795fa8> (a org.infinispan.factories.ComponentRegistry)
> at java.lang.Object.wait(Object.java:502)
> at org.infinispan.factories.AbstractComponentRegistry.stop(AbstractComponentRegistry.java:359)
> - locked <0x00000005cc795fa8> (a org.infinispan.factories.ComponentRegistry)
> at org.infinispan.cache.impl.CacheImpl.performImmediateShutdown(CacheImpl.java:1159)
> at org.infinispan.cache.impl.CacheImpl.stop(CacheImpl.java:1124)
> at org.infinispan.cache.impl.AbstractDelegatingCache.stop(AbstractDelegatingCache.java:520)
> at org.infinispan.manager.DefaultCacheManager.terminate(DefaultCacheManager.java:734)
> at org.infinispan.manager.DefaultCacheManager.stopCaches(DefaultCacheManager.java:786)
> at org.infinispan.manager.DefaultCacheManager.stop(DefaultCacheManager.java:762)
> at org.jboss.as.clustering.infinispan.subsystem.CacheContainerServiceConfigurator.accept(CacheContainerServiceConfigurator.java:114)
> at org.jboss.as.clustering.infinispan.subsystem.CacheContainerServiceConfigurator.accept(CacheContainerServiceConfigurator.java:70)
> at org.wildfly.clustering.service.FunctionalService.stop(FunctionalService.java:77)
> at org.jboss.msc.service.ServiceControllerImpl$StopTask.stopService(ServiceControllerImpl.java:1794)
> at org.jboss.msc.service.ServiceControllerImpl$StopTask.execute(ServiceControllerImpl.java:1763)
> at org.jboss.msc.service.ServiceControllerImpl$ControllerTask.run(ServiceControllerImpl.java:1558)
> at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
> at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1985)
> at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1487)
> at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1364)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> There are no other threads that are working with 0x00000005cc795fa8 in the thread dump.
> I checked in a heap dump and the 0x00000005cc795fa8 org.infinispan.factories.ComponentRegistry object state is TERMINATED.
> I think that the finally block of AbstractComponentRegistry.stop() is missing a notifyAll().
> Shouldn't this:
> {code}
> ...
> } finally {
>    synchronized (this) {
>       state = ComponentStatus.TERMINATED;
>    }
> }
> {code}
> Be:
> {code}
> ...
> } finally {
>    synchronized (this) {
>       state = ComponentStatus.TERMINATED;
>       notifyAll();
>    }
> }
> {code}
> This way, if two threads try to stop the same cache, the one that wins will notify the one that is waiting, letting its own stop() call complete.
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-9979) AbstractComponentRegistry.stop() can hang if called concurrently
by Dan Berindei (Jira)
[ https://issues.jboss.org/browse/ISPN-9979?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-9979:
-------------------------------
Status: Open (was: New)
> AbstractComponentRegistry.stop() can hang if called concurrently
> ----------------------------------------------------------------
>
> Key: ISPN-9979
> URL: https://issues.jboss.org/browse/ISPN-9979
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 9.4.3.Final
> Reporter: Philippe Julien
> Assignee: Dan Berindei
> Priority: Major
>
> I believe that there is a bug in org.infinispan.factories.AbstractComponentRegistry.stop().
> Our WildFly 15 nodes often hang on shutdown with the following stack:
> {noformat}
> "MSC service thread 1-2" #37 prio=5 os_prio=0 tid=0x0000000003807000 nid=0xf32d in Object.wait() [0x00007f0012c6f000]
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0x00000005cc795fa8> (a org.infinispan.factories.ComponentRegistry)
> at java.lang.Object.wait(Object.java:502)
> at org.infinispan.factories.AbstractComponentRegistry.stop(AbstractComponentRegistry.java:359)
> - locked <0x00000005cc795fa8> (a org.infinispan.factories.ComponentRegistry)
> at org.infinispan.cache.impl.CacheImpl.performImmediateShutdown(CacheImpl.java:1159)
> at org.infinispan.cache.impl.CacheImpl.stop(CacheImpl.java:1124)
> at org.infinispan.cache.impl.AbstractDelegatingCache.stop(AbstractDelegatingCache.java:520)
> at org.infinispan.manager.DefaultCacheManager.terminate(DefaultCacheManager.java:734)
> at org.infinispan.manager.DefaultCacheManager.stopCaches(DefaultCacheManager.java:786)
> at org.infinispan.manager.DefaultCacheManager.stop(DefaultCacheManager.java:762)
> at org.jboss.as.clustering.infinispan.subsystem.CacheContainerServiceConfigurator.accept(CacheContainerServiceConfigurator.java:114)
> at org.jboss.as.clustering.infinispan.subsystem.CacheContainerServiceConfigurator.accept(CacheContainerServiceConfigurator.java:70)
> at org.wildfly.clustering.service.FunctionalService.stop(FunctionalService.java:77)
> at org.jboss.msc.service.ServiceControllerImpl$StopTask.stopService(ServiceControllerImpl.java:1794)
> at org.jboss.msc.service.ServiceControllerImpl$StopTask.execute(ServiceControllerImpl.java:1763)
> at org.jboss.msc.service.ServiceControllerImpl$ControllerTask.run(ServiceControllerImpl.java:1558)
> at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
> at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1985)
> at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1487)
> at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1364)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> There are no other threads that are working with 0x00000005cc795fa8 in the thread dump.
> I checked in a heap dump and the 0x00000005cc795fa8 org.infinispan.factories.ComponentRegistry object state is TERMINATED.
> I think that the finally block of AbstractComponentRegistry.stop() is missing a notifyAll().
> Shouldn't this:
> {code}
> ...
> } finally {
>    synchronized (this) {
>       state = ComponentStatus.TERMINATED;
>    }
> }
> {code}
> Be:
> {code}
> ...
> } finally {
>    synchronized (this) {
>       state = ComponentStatus.TERMINATED;
>       notifyAll();
>    }
> }
> {code}
> This way, if two threads try to stop the same cache, the one that wins will notify the one that is waiting, letting its own stop() call complete.
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-10136) Spurious dependency cycle detected error
by Dan Berindei (Jira)
[ https://issues.jboss.org/browse/ISPN-10136?page=com.atlassian.jira.plugin... ]
Dan Berindei updated ISPN-10136:
--------------------------------
Status: Open (was: New)
> Spurious dependency cycle detected error
> ----------------------------------------
>
> Key: ISPN-10136
> URL: https://issues.jboss.org/browse/ISPN-10136
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.4.6.Final, 10.0.0.Beta3
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Major
> Fix For: 10.0.0.Beta4
>
>
> {{BasicComponentRegistryImpl}} keeps track of which threads are currently wiring or starting components in order to make the error messages more user-friendly. In rare cases, however, the tracking information is not updated after a failure, and a spurious dependency cycle is logged:
> {noformat}
> 14:56:09,073 WARN [org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler] (jgroups-30,vlhebwmpr04-infinispan1) ISPN000071: Caught exception when handling command SingleRpcCommand{cacheName='offer-templates', command=PutKeyValueCommand{key=WrappedByteArray{bytes=[B0x0101290B033E0931..[16], hashCode=-798243737}, value=WrappedByteArray{bytes=[B0x01012A2962030409..[10856], hashCode=0}, flags=[IGNORE_RETURN_VALUES], commandInvocationId=CommandInvocation:vlkrbwmpr01-infinispan1:1968005, putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{lifespan=162205431146, maxIdle=-1, version=NumericVersion{version=159314845408592057}}, successful=true, topologyId=2329}}: org.infinispan.commons.CacheConfigurationException: Dependency cycle detected, please use ComponentRef<T> to break the cycle in path org.infinispan.interceptors.AsyncInterceptorChain (a org.infinispan.interceptors.impl.AsyncInterceptorChainImpl)
> << org.infinispan.expiration.impl.InternalExpirationManager (a org.infinispan.expiration.impl.ClusterExpirationManager)
> << org.infinispan.container.impl.InternalDataContainer (a org.infinispan.container.impl.BoundedSegmentedDataContainer)
> << org.infinispan.commands.CommandsFactory (a org.infinispan.commands.CommandsFactoryImpl)
> << org.infinispan.distribution.L1Manager (a org.infinispan.distribution.impl.L1ManagerImpl)
> << org.infinispan.distribution.RemoteValueRetrievedListener (a org.infinispan.factories.impl.ComponentAlias)
> << org.infinispan.interceptors.distribution.NonTxDistributionInterceptor (a org.infinispan.interceptors.distribution.NonTxDistributionInterceptor)
> << org.infinispan.interceptors.AsyncInterceptorChain (a org.infinispan.interceptors.impl.AsyncInterceptorChainImpl)
> at org.infinispan.factories.impl.BasicComponentRegistryImpl.awaitWrapperState(BasicComponentRegistryImpl.java:646)
> at org.infinispan.factories.impl.BasicComponentRegistryImpl.startWrapper(BasicComponentRegistryImpl.java:498)
> at org.infinispan.factories.impl.BasicComponentRegistryImpl$ComponentWrapper.running(BasicComponentRegistryImpl.java:714)
> at org.infinispan.commands.CommandsFactoryImpl.initializeReplicableCommand(CommandsFactoryImpl.java:397)
> at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.initializeCacheRpcCommand(GlobalInboundInvocationHandler.java:127)
> at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleCacheRpcCommand(GlobalInboundInvocationHandler.java:119)
> at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleFromCluster(GlobalInboundInvocationHandler.java:74)
> {noformat}
--
This message was sent by Atlassian Jira
(v7.12.1#712002)