[JBoss JIRA] (AS7-2274) Top-level directory tidy-up
by Rich Sharples (Created) (JIRA)
Top-level directory tidy-up
---------------------------
Key: AS7-2274
URL: https://issues.jboss.org/browse/AS7-2274
Project: Application Server 7
Issue Type: Enhancement
Components: Server
Affects Versions: 7.1.0.Alpha1
Environment: all
Reporter: Rich Sharples
Assignee: Jason Greene
Fix For: 7.1.0.Beta1
There is a lot of stuff in the top-level directory - much of which most users will never need to see.
0. Create a top-level "lib" directory for all static / read-only artifacts - modules, schemas, bundles, welcome-content. It's quite possible that there is a better name than "lib" - but this is consistent with Linux / Unix lib / Library dirs.
1. Rename and move <JBOSS-HOME>/docs
<JBOSS-HOME>/docs contains no docs; it contains schemas and an odd subdirectory with a HornetQ config. I suggest we rename it "schemas" and hide it under "lib". As for the examples/configs dir. - is it required by anything?
2. Move app-client under lib - it doesn't need to be in the top-level directory
3. Move jboss-modules.jar under lib
This will leave JBOSS-HOME with just:
README.txt
copyright.txt
License.txt
bin
domain
standalone
Also - any reason README and LICENSE are upper-case but copyright is lower-case?
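The four steps above can be sketched as a dry run against a mock tree. This is a hypothetical illustration only: the directory names are taken from the issue, but the class name, the mock tree, and the temp location are assumptions, and actually relocating jboss-modules.jar would also require updating the launcher scripts in bin.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch of the proposed top-level re-layout (AS7-2274).
public class TidyLayout {

    // Build a throwaway mock of the current JBOSS-HOME tree.
    static Path buildMockHome() throws IOException {
        Path home = Files.createTempDirectory("jboss-home");
        for (String d : new String[] {"modules", "bundles", "welcome-content",
                                      "docs", "appclient", "bin", "domain", "standalone"}) {
            Files.createDirectory(home.resolve(d));
        }
        Files.createFile(home.resolve("jboss-modules.jar"));
        return home;
    }

    // Apply steps 0-3 from the issue description.
    static void relayout(Path home) throws IOException {
        Path lib = Files.createDirectory(home.resolve("lib"));
        // 0. static / read-only artifacts under lib
        for (String d : new String[] {"modules", "bundles", "welcome-content"}) {
            Files.move(home.resolve(d), lib.resolve(d));
        }
        // 1. docs -> lib/schemas (it holds schemas, not docs)
        Files.move(home.resolve("docs"), lib.resolve("schemas"));
        // 2. app client out of the top level
        Files.move(home.resolve("appclient"), lib.resolve("appclient"));
        // 3. bootstrap jar under lib
        Files.move(home.resolve("jboss-modules.jar"), lib.resolve("jboss-modules.jar"));
    }

    public static void main(String[] args) throws IOException {
        Path home = buildMockHome();
        relayout(home);
        // Remaining top level: bin, domain, standalone, lib
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(home)) {
            ds.forEach(p -> System.out.println(p.getFileName()));
        }
    }
}
```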
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.jboss.org/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
12 years, 8 months
[JBoss JIRA] (JBRULES-3358) Fix reload of KnowledgeAgent resources: CSV, XLS, BPMN
by Edson Tirelli (JIRA)
Edson Tirelli created JBRULES-3358:
--------------------------------------
Summary: Fix reload of KnowledgeAgent resources: CSV, XLS, BPMN
Key: JBRULES-3358
URL: https://issues.jboss.org/browse/JBRULES-3358
Project: Drools
Issue Type: Bug
Security Level: Public (Everyone can see)
Components: drools-compiler, drools-core, drools-decisiontables
Affects Versions: 5.4.0.Beta1, 5.3.1.Final
Reporter: Edson Tirelli
Assignee: Edson Tirelli
Fix For: 5.3.2.Final, 5.4.0.Beta2
Placeholder JIRA for several BZ issues as linked.
[JBoss JIRA] (AS7-3343) Concurrency issues in ReferenceCountingEntityCache
by Alexey Makhmutov (JIRA)
Alexey Makhmutov created AS7-3343:
-------------------------------------
Summary: Concurrency issues in ReferenceCountingEntityCache
Key: AS7-3343
URL: https://issues.jboss.org/browse/AS7-3343
Project: Application Server 7
Issue Type: Bug
Components: EJB
Affects Versions: 7.1.0.CR1b
Reporter: Alexey Makhmutov
Assignee: jaikiran pai
Attachments: cacheProblemTC.zip
While running a multithreaded workload against AS 7.1 Beta/CR1 for an application with Entity EJB 2.x, we faced a lot of the following errors:
* Instance for PK [XXX] already registerd.
* Instance [YYY] not found in cache
It seems that these errors are caused by synchronization issues in ReferenceCountingEntityCache:
* If two threads try to access the cache and there is no ready instance for a particular PK in the cache, then both threads try to get an instance from the pool and put it into the cache; as a result, the second thread can get an 'already registered' exception. We were able to compose a minimal test case application which reproduces this kind of problem (see below).
* There is a gap in time between the call to the Associate interceptor (which puts the instance in the cache via the 'get' method) and the Synchronization interceptor (which calls the 'reference' method). During this time the instance is not referenced, so it seems it can be removed by some other thread which finishes its invocation at the same time. This is probably the root cause of the 'not found in cache' error; however, it's hard to create a synthetic test case for it.
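The check-then-act race in the first bullet can be sketched in miniature as follows. This is an illustrative model, not the actual ReferenceCountingEntityCache code: the class and method names are hypothetical, and createFromPool stands in for fetching an instance from the EJB pool.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical model of the cache race described in AS7-3343.
class EntityCache {
    private final ConcurrentMap<Object, Object> cache = new ConcurrentHashMap<>();

    // Racy version: two threads can both observe a miss, both create an
    // instance, and the second insert conflicts ("already registered").
    Object getRacy(Object pk) {
        Object instance = cache.get(pk);
        if (instance == null) {               // both threads can pass this check
            instance = createFromPool(pk);
            cache.put(pk, instance);          // second thread conflicts here
        }
        return instance;
    }

    // Atomic version: putIfAbsent guarantees exactly one instance wins,
    // and the loser discards its instance instead of failing.
    Object getSafe(Object pk) {
        Object instance = cache.get(pk);
        if (instance == null) {
            Object fresh = createFromPool(pk);
            Object prior = cache.putIfAbsent(pk, fresh);
            instance = (prior != null) ? prior : fresh;
        }
        return instance;
    }

    private Object createFromPool(Object pk) {
        return new Object();                  // stand-in for the EJB pool
    }

    public static void main(String[] args) {
        EntityCache cache = new EntityCache();
        Object first = cache.getSafe("pk-1");
        Object second = cache.getSafe("pk-1");
        System.out.println("same instance: " + (first == second)); // true
    }
}
```

The second bullet (the unreferenced window between the Associate and Synchronization interceptors) is a different kind of gap and would need the instance to be reference-counted across the whole interceptor chain, not just made atomic at insertion.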
[JBoss JIRA] Created: (JBRULES-3112) Shift operator differs between Java and MVEL
by Wolfgang Laun (JIRA)
Shift operator differs between Java and MVEL
--------------------------------------------
Key: JBRULES-3112
URL: https://issues.jboss.org/browse/JBRULES-3112
Project: Drools
Issue Type: Bug
Security Level: Public (Everyone can see)
Components: drools-compiler (expert)
Affects Versions: 5.2.0.Final
Reporter: Wolfgang Laun
Assignee: Mark Proctor
Priority: Critical
Fix For: 5.3.0.Beta1
The following DRL file can be run without and with "dialect 'mvel'"; in the latter case, rules 2 and 4 do not fire, whereas otherwise all 4 rules fire.
*** Using dialect Java ***
test1 1 << 65552 = 10000
test1 1 << 65536 = 1
test1 1 << 65568 = 1
test2 1 << 4294967296 = 1
test2 1 << 65536 = 1
test2 1 << 65552 = 10000
test2 1 << 65568 = 1
test3 1 << 4294967296 = 1
test3 1 << 65536 = 1
test3 1 << 65552 = 10000
test4 1 << 65552 = 10000
test4 1 << 65536 = 1
*** Using dialect MVEL ***
test1 1 << 65552 = 10000
test1 1 << 65536 = 1
test1 1 << 65568 = 1
test3 1 << 4294967296 = 1
test3 1 << 65536 = 1
test3 1 << 65552 = 10000
=== DRL ===
# dialect "mvel"
rule kickOff
when
then
insert( Integer.valueOf( 1 ) );
insert( Long.valueOf( 1 ) );
insert( Integer.valueOf( 65552 ) ); // 0x10010
insert( Long.valueOf( 65552 ) );
insert( Integer.valueOf( 65568 ) ); // 0x10020
insert( Long.valueOf( 65568 ) );
insert( Integer.valueOf( 65536 ) ); // 0x10000
insert( Long.valueOf( 65536L ) );
insert( Long.valueOf( 4294967296L ) ); // 0x100000000L
end
rule test1
salience -1
when
$a: Integer( $one: intValue == 1 )
$b: Integer( $shift: intValue )
$c: Integer( $i: intValue, intValue == ($one << $shift ) )
then
System.out.println( "test1 " + $a + " << " + $b + " = " + Integer.toHexString( $c ) );
end
rule test2
salience -2
when
$a: Integer( $one: intValue == 1 )
$b: Long ( $shift: longValue )
$c: Integer( $i: intValue, intValue == ($one << $shift ) )
then
System.out.println( "test2 " + $a + " << " + $b + " = " + Integer.toHexString( $c ) );
end
rule test3
salience -3
when
$a: Long ( $one: longValue == 1 )
$b: Long ( $shift: longValue )
$c: Integer( $i: intValue, intValue == ($one << $shift ) )
then
System.out.println( "test3 " + $a + " << " + $b + " = " + Integer.toHexString( $c ) );
end
rule test4
salience -4
when
$a: Long ( $one: longValue == 1 )
$b: Integer( $shift: intValue )
$c: Integer( $i: intValue, intValue == ($one << $shift ) )
then
System.out.println( "test4 " + $a + " << " + $b + " = " + Integer.toHexString( $c ) );
end
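The Java-dialect numbers above follow directly from Java's shift semantics: for an int left operand only the low 5 bits of the shift distance are used, and for a long only the low 6 bits (JLS §15.19). A minimal standalone program (class name assumed) reproduces the Java-side values; the bug is that MVEL diverges from this masking when the operand types are mixed (rules 2 and 4).

```java
// Java masks the shift distance: int uses (shift & 31), long uses (shift & 63).
public class ShiftDemo {
    public static void main(String[] args) {
        // 65552 = 0x10010, so 65552 & 31 == 16  ->  1 << 16 == 0x10000
        System.out.println(Integer.toHexString(1 << 65552));       // 10000
        // 65536 = 0x10000, so 65536 & 31 == 0   ->  1 << 0 == 1
        System.out.println(Integer.toHexString(1 << 65536));       // 1
        // 65568 = 0x10020, so 65568 & 31 == 0   ->  1 << 0 == 1
        System.out.println(Integer.toHexString(1 << 65568));       // 1
        // For a long, 65568 & 63 == 32          ->  1L << 32 == 0x100000000
        System.out.println(Long.toHexString(1L << 65568));         // 100000000
        // 4294967296 = 2^32, and 2^32 & 63 == 0 ->  1L << 0 == 1
        System.out.println(Long.toHexString(1L << 4294967296L));   // 1
    }
}
```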
[JBoss JIRA] (AS7-3166) Removing nodes breaks further cluster formation with "ISPN000172 Failed to prepare view"
by Radoslav Husar (Created) (JIRA)
Removing nodes breaks further cluster formation with "ISPN000172 Failed to prepare view"
----------------------------------------------------------------------------------------
Key: AS7-3166
URL: https://issues.jboss.org/browse/AS7-3166
Project: Application Server 7
Issue Type: Feature Request
Components: Clustering
Affects Versions: 7.1.0.CR1
Reporter: Radoslav Husar
Assignee: Paul Ferraro
Priority: Critical
Fix For: 7.1.0.Final
When 2 nodes of a 3-node cluster are shut down (nearly at the same time), the remaining node keeps logging the following every minute:
{noformat}
13:30:25,475 ERROR [org.infinispan.cacheviews.CacheViewsManagerImpl] (CacheViewInstaller-1,s2/web) ISPN000172: Failed to prepare view CacheView{viewId=42, members=[s2/web, rhusar/web]} for cache registry, rolling back to view CacheView{viewId=41, members=[s2/web]}: java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: One or more nodes have left the cluster while replicating command CacheViewControlCommand{cache=registry, type=PREPARE_VIEW, sender=s2/web, newViewId=42, newMembers=[s2/web, rhusar/web], oldViewId=41, oldMembers=[s2/web]}
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232) [:1.6.0_29]
at java.util.concurrent.FutureTask.get(FutureTask.java:91) [:1.6.0_29]
at org.infinispan.cacheviews.CacheViewsManagerImpl.clusterPrepareView(CacheViewsManagerImpl.java:305) [infinispan-core-5.1.0.CR1.jar:5.1.0.CR1]
at org.infinispan.cacheviews.CacheViewsManagerImpl.clusterInstallView(CacheViewsManagerImpl.java:251) [infinispan-core-5.1.0.CR1.jar:5.1.0.CR1]
at org.infinispan.cacheviews.CacheViewsManagerImpl$ViewInstallationTask.call(CacheViewsManagerImpl.java:831) [infinispan-core-5.1.0.CR1.jar:5.1.0.CR1]
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) [:1.6.0_29]
at java.util.concurrent.FutureTask.run(FutureTask.java:138) [:1.6.0_29]
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [:1.6.0_29]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [:1.6.0_29]
at java.lang.Thread.run(Thread.java:662) [:1.6.0_29]
Caused by: org.infinispan.remoting.transport.jgroups.SuspectException: One or more nodes have left the cluster while replicating command CacheViewControlCommand{cache=registry, type=PREPARE_VIEW, sender=s2/web, newViewId=42, newMembers=[s2/web, rhusar/web], oldViewId=41, oldMembers=[s2/web]}
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:404) [infinispan-core-5.1.0.CR1.jar:5.1.0.CR1]
at org.infinispan.cacheviews.CacheViewsManagerImpl$2.call(CacheViewsManagerImpl.java:294) [infinispan-core-5.1.0.CR1.jar:5.1.0.CR1]
at org.infinispan.cacheviews.CacheViewsManagerImpl$2.call(CacheViewsManagerImpl.java:291) [infinispan-core-5.1.0.CR1.jar:5.1.0.CR1]
... 5 more
13:30:25,476 ERROR [org.infinispan.cacheviews.CacheViewsManagerImpl] (CacheViewInstaller-2,s2/web) ISPN000172: Failed to prepare view CacheView{viewId=40, members=[s2/web, rhusar/web]} for cache repl, rolling back to view CacheView{viewId=39, members=[s2/web]}: java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: One or more nodes have left the cluster while replicating command CacheViewControlCommand{cache=repl, type=PREPARE_VIEW, sender=s2/web, newViewId=40, newMembers=[s2/web, rhusar/web], oldViewId=39, oldMembers=[s2/web]}
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232) [:1.6.0_29]
at java.util.concurrent.FutureTask.get(FutureTask.java:91) [:1.6.0_29]
at org.infinispan.cacheviews.CacheViewsManagerImpl.clusterPrepareView(CacheViewsManagerImpl.java:305) [infinispan-core-5.1.0.CR1.jar:5.1.0.CR1]
at org.infinispan.cacheviews.CacheViewsManagerImpl.clusterInstallView(CacheViewsManagerImpl.java:251) [infinispan-core-5.1.0.CR1.jar:5.1.0.CR1]
at org.infinispan.cacheviews.CacheViewsManagerImpl$ViewInstallationTask.call(CacheViewsManagerImpl.java:831) [infinispan-core-5.1.0.CR1.jar:5.1.0.CR1]
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) [:1.6.0_29]
at java.util.concurrent.FutureTask.run(FutureTask.java:138) [:1.6.0_29]
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [:1.6.0_29]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [:1.6.0_29]
at java.lang.Thread.run(Thread.java:662) [:1.6.0_29]
Caused by: org.infinispan.remoting.transport.jgroups.SuspectException: One or more nodes have left the cluster while replicating command CacheViewControlCommand{cache=repl, type=PREPARE_VIEW, sender=s2/web, newViewId=40, newMembers=[s2/web, rhusar/web], oldViewId=39, oldMembers=[s2/web]}
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:404) [infinispan-core-5.1.0.CR1.jar:5.1.0.CR1]
at org.infinispan.cacheviews.CacheViewsManagerImpl$2.call(CacheViewsManagerImpl.java:294) [infinispan-core-5.1.0.CR1.jar:5.1.0.CR1]
at org.infinispan.cacheviews.CacheViewsManagerImpl$2.call(CacheViewsManagerImpl.java:291) [infinispan-core-5.1.0.CR1.jar:5.1.0.CR1]
... 5 more
13:30:25,484 WARNING [org.jgroups.protocols.UDP] (CacheViewInstaller-2,s2/web) null: no physical address for rhusar/web, dropping message
13:30:25,484 WARNING [org.jgroups.protocols.UDP] (CacheViewInstaller-1,s2/web) null: no physical address for rhusar/web, dropping message
{noformat}
The 1st node terminated correctly, while the 2nd one terminated with a warning:
{noformat}
13:13:06,128 WARN [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] Problems unmarshalling remote command from byte buffer: org.infinispan.CacheException: Cache manager is either starting up or shutting down but it's not interrupted, so type (id=74) cannot be resolved.
at org.infinispan.marshall.jboss.ExternalizerTable.readObject(ExternalizerTable.java:257) [infinispan-core-5.1.0.CR1.jar:5.1.0.CR1]
at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:351)
at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)
at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:37) [jboss-marshalling-1.3.4.GA.jar:1.3.4.GA]
at org.infinispan.marshall.jboss.AbstractJBossMarshaller.objectFromObjectStream(AbstractJBossMarshaller.java:120) [infinispan-core-5.1.0.CR1.jar:5.1.0.CR1]
at org.infinispan.marshall.VersionAwareMarshaller.objectFromByteBuffer(VersionAwareMarshaller.java:115) [infinispan-core-5.1.0.CR1.jar:5.1.0.CR1]
at org.infinispan.marshall.AbstractDelegatingMarshaller.objectFromByteBuffer(AbstractDelegatingMarshaller.java:79) [infinispan-core-5.1.0.CR1.jar:5.1.0.CR1]
at org.infinispan.remoting.transport.jgroups.MarshallerAdapter.objectFromBuffer(MarshallerAdapter.java:50) [infinispan-core-5.1.0.CR1.jar:5.1.0.CR1]
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:139) [infinispan-core-5.1.0.CR1.jar:5.1.0.CR1]
at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:447) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:354) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:230) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:556) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jboss.as.clustering.jgroups.ClassLoaderAwareUpHandler.up(ClassLoaderAwareUpHandler.java:56) [jboss-as-clustering-jgroups-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jgroups.blocks.mux.MuxUpHandler.up(MuxUpHandler.java:130) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jboss.as.clustering.jgroups.MuxChannel$ClassLoaderAwareMuxUpHandler.up(MuxChannel.java:64) [jboss-as-clustering-jgroups-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jgroups.JChannel.up(JChannel.java:716) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.pbcast.FLUSH.up(FLUSH.java:433) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.pbcast.STATE_TRANSFER.up(STATE_TRANSFER.java:178) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.FRAG2.up(FRAG2.java:181) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:881) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:759) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:365) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:597) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.BARRIER.up(BARRIER.java:102) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:140) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.FD.up(FD.java:273) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:284) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.MERGE2.up(MERGE2.java:205) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.Discovery.up(Discovery.java:354) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.stack.Protocol.up(Protocol.java:358) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.TP.passMessageUp(TP.java:1174) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1709) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1691) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [:1.6.0_29]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [:1.6.0_29]
at java.lang.Thread.run(Thread.java:662) [:1.6.0_29]
13:13:06,131 WARN [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] Problems unmarshalling remote command from byte buffer: org.infinispan.CacheException: Cache manager is either starting up or shutting down but it's not interrupted, so type (id=74) cannot be resolved.
at org.infinispan.marshall.jboss.ExternalizerTable.readObject(ExternalizerTable.java:257) [infinispan-core-5.1.0.CR1.jar:5.1.0.CR1]
at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:351)
at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)
at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:37) [jboss-marshalling-1.3.4.GA.jar:1.3.4.GA]
at org.infinispan.marshall.jboss.AbstractJBossMarshaller.objectFromObjectStream(AbstractJBossMarshaller.java:120) [infinispan-core-5.1.0.CR1.jar:5.1.0.CR1]
at org.infinispan.marshall.VersionAwareMarshaller.objectFromByteBuffer(VersionAwareMarshaller.java:115) [infinispan-core-5.1.0.CR1.jar:5.1.0.CR1]
at org.infinispan.marshall.AbstractDelegatingMarshaller.objectFromByteBuffer(AbstractDelegatingMarshaller.java:79) [infinispan-core-5.1.0.CR1.jar:5.1.0.CR1]
at org.infinispan.remoting.transport.jgroups.MarshallerAdapter.objectFromBuffer(MarshallerAdapter.java:50) [infinispan-core-5.1.0.CR1.jar:5.1.0.CR1]
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:139) [infinispan-core-5.1.0.CR1.jar:5.1.0.CR1]
at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:447) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:354) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:230) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:556) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jboss.as.clustering.jgroups.ClassLoaderAwareUpHandler.up(ClassLoaderAwareUpHandler.java:56) [jboss-as-clustering-jgroups-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jgroups.blocks.mux.MuxUpHandler.up(MuxUpHandler.java:130) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jboss.as.clustering.jgroups.MuxChannel$ClassLoaderAwareMuxUpHandler.up(MuxChannel.java:64) [jboss-as-clustering-jgroups-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jgroups.JChannel.up(JChannel.java:716) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.pbcast.FLUSH.up(FLUSH.java:433) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.pbcast.STATE_TRANSFER.up(STATE_TRANSFER.java:178) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.FRAG2.up(FRAG2.java:181) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:881) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:759) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:365) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:597) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.BARRIER.up(BARRIER.java:102) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:140) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.FD.up(FD.java:273) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:284) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.MERGE2.up(MERGE2.java:205) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.Discovery.up(Discovery.java:354) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.stack.Protocol.up(Protocol.java:358) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.TP.passMessageUp(TP.java:1174) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1709) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1691) [jgroups-3.0.1.Final.jar:3.0.1.Final]
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [:1.6.0_29]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [:1.6.0_29]
at java.lang.Thread.run(Thread.java:662) [:1.6.0_29]
13:13:06,150 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] ISPN000082: Stopping the RpcDispatcher
13:13:06,158 INFO [com.arjuna.ats.jbossatx] ARJUNA032018: Destroying TransactionManagerService
13:13:06,159 INFO [com.arjuna.ats.jbossatx] ARJUNA032014: Stopping transaction recovery manager
13:13:06,488 INFO [org.jboss.as] JBoss AS 7.1.0.Final-SNAPSHOT "Flux Capacitor" stopped in 961ms
{noformat}
The nodes are then unable to join back in, and deployments are rolled back after several minutes:
{noformat}
13:35:02,755 INFO [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS015856: Undeploy of deployment "clusterbench-ee6.ear" was rolled back with failure message Operation cancelled
13:35:02,757 ERROR [org.jboss.as.controller.management-operation] (DeploymentScanner-threads - 2) JBAS014612: Operation ("deploy") failed - address: ([("deployment" => "clusterbench-ee6.ear")]): java.util.concurrent.CancellationException: Operation cancelled asynchronously
at org.jboss.as.controller.OperationContextImpl.awaitContainerMonitor(OperationContextImpl.java:356) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.OperationContextImpl.releaseStepLocks(OperationContextImpl.java:599) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.AbstractOperationContext$Step.finalizeInternal(AbstractOperationContext.java:620) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.AbstractOperationContext$Step.finalizeStep(AbstractOperationContext.java:579) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.AbstractOperationContext$Step.access$200(AbstractOperationContext.java:560) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.AbstractOperationContext.executeStep(AbstractOperationContext.java:429) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.AbstractOperationContext.doCompleteStep(AbstractOperationContext.java:254) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.AbstractOperationContext.completeStep(AbstractOperationContext.java:190) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.server.deployment.DeploymentDeployHandler.execute(DeploymentDeployHandler.java:68) [jboss-as-server-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.AbstractOperationContext.executeStep(AbstractOperationContext.java:359) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.AbstractOperationContext.doCompleteStep(AbstractOperationContext.java:254) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.AbstractOperationContext.completeStep(AbstractOperationContext.java:190) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.server.deployment.DeploymentAddHandler.execute(DeploymentAddHandler.java:185) [jboss-as-server-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.AbstractOperationContext.executeStep(AbstractOperationContext.java:359) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.AbstractOperationContext.doCompleteStep(AbstractOperationContext.java:254) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.AbstractOperationContext.completeStep(AbstractOperationContext.java:190) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.CompositeOperationHandler.execute(CompositeOperationHandler.java:84) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.AbstractOperationContext.executeStep(AbstractOperationContext.java:359) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.AbstractOperationContext.doCompleteStep(AbstractOperationContext.java:254) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.AbstractOperationContext.completeStep(AbstractOperationContext.java:190) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.CompositeOperationHandler.execute(CompositeOperationHandler.java:84) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.AbstractOperationContext.executeStep(AbstractOperationContext.java:359) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.AbstractOperationContext.doCompleteStep(AbstractOperationContext.java:254) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.AbstractOperationContext.completeStep(AbstractOperationContext.java:190) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.ModelControllerImpl$DefaultPrepareStepHandler.execute(ModelControllerImpl.java:432) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.AbstractOperationContext.executeStep(AbstractOperationContext.java:359) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.AbstractOperationContext.doCompleteStep(AbstractOperationContext.java:254) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.AbstractOperationContext.completeStep(AbstractOperationContext.java:190) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.ModelControllerImpl.execute(ModelControllerImpl.java:119) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.ModelControllerImpl$1.execute(ModelControllerImpl.java:302) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.controller.ModelControllerImpl$1.execute(ModelControllerImpl.java:292) [jboss-as-controller-7.1.0.Final-SNAPSHOT.jar:7.1.0.Final-SNAPSHOT]
at org.jboss.as.server.deployment.scanner.FileSystemDeploymentService$DeploymentTask.call(FileSystemDeploymentService.java:1178)
at org.jboss.as.server.deployment.scanner.FileSystemDeploymentService$DeploymentTask.call(FileSystemDeploymentService.java:1168)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) [:1.6.0_29]
at java.util.concurrent.FutureTask.run(FutureTask.java:138) [:1.6.0_29]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98) [:1.6.0_29]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206) [:1.6.0_29]
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [:1.6.0_29]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [:1.6.0_29]
at java.lang.Thread.run(Thread.java:662) [:1.6.0_29]
at org.jboss.threads.JBossThread.run(JBossThread.java:122) [jboss-threads-2.0.0.GA.jar:2.0.0.GA]
{noformat}