[JBoss JIRA] (WFLY-6762) Windows environment: WildFly failover test not working when disabling the network on a VM (node) or shutting down the VM (node).
by Preeta Kuruvilla (JIRA)
[ https://issues.jboss.org/browse/WFLY-6762?page=com.atlassian.jira.plugin.... ]
Preeta Kuruvilla updated WFLY-6762:
-----------------------------------
Description:
In your mail related to WFLY-6749 you said the following:
**The default stack contains the following failure detection protocols:
FD_SOCK
FD_ALL
These protocols are described here:
http://www.jgroups.org/manual/index.html#FailureDetection
I suspect that your method of simulating a failure - by disabling the network of the host machine is not being detected by FD_SOCK. It will however, be detected by FD_ALL, but only after 1 minute. The heartbeat timeout used by FD_ALL can be manipulated via the timeout property.
e.g.
<protocol type="FD_ALL" ><property name="timeout">60000</property></protocol>
**************************************************************************************************
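For reference, here is roughly where that FD_ALL timeout would sit in the jgroups subsystem of standalone-ha.xml. This is only a sketch: the stack layout follows the WildFly 8.x defaults from memory, and the namespace, socket bindings and the 10000 ms value are illustrative and may need adjusting to the actual configuration.
{code}
<subsystem xmlns="urn:jboss:domain:jgroups:2.0" default-stack="udp">
    <stack name="udp">
        <transport type="UDP" socket-binding="jgroups-udp"/>
        <protocol type="PING"/>
        <protocol type="MERGE3"/>
        <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
        <!-- lower the heartbeat timeout so FD_ALL suspects an unreachable node sooner -->
        <protocol type="FD_ALL">
            <property name="timeout">10000</property>
        </protocol>
        <protocol type="VERIFY_SUSPECT"/>
        <!-- remaining default protocols unchanged -->
    </stack>
</subsystem>
{code}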
Thanks for the quick response on WFLY-6749.
Based on your suggestion, I took a look at the testing scenarios mentioned in "Table 29. Failure detection behavior" in the link you provided - http://www.jgroups.org/manual/index.html#FailureDetection. Nowhere is it mentioned that disabling the network on a node is a valid testing scenario for a WildFly cluster.
Failover works properly for our application when the network on a node is disabled in a WebLogic cluster. However, it does not work, and it hampers application functionality, when we disable the network on a node in a WildFly cluster.
As I said earlier, failover on the WildFly cluster works when we stop a node from the admin console or press Ctrl+C to stop the services on a node.
I would like confirmation from you that disabling the network on a node is not a valid failover testing scenario for a WildFly cluster.
We also tried the same failover scenario by shutting down a VM (node) in the WildFly cluster. It did not work in the Windows environment, although it worked in the Linux environment.
Note: we are using a Windows 2012 environment. Here is a link I found: http://stackoverflow.com/questions/31218710/unable-to-stop-wildfly-8-2-se...
Thanks,
Preeta
was:
In your mail related to WFLY-6749 you said the following:
**The default stack contains the following failure detection protocols:
FD_SOCK
FD_ALL
These protocols are described here:
http://www.jgroups.org/manual/index.html#FailureDetection
I suspect that your method of simulating a failure - by disabling the network of the host machine is not being detected by FD_SOCK. It will however, be detected by FD_ALL, but only after 1 minute. The heartbeat timeout used by FD_ALL can be manipulated via the timeout property.
e.g.
<protocol type="FD_ALL" ><property name="timeout">60000</property></protocol>
**
Thanks for the quick response on WFLY-6749.
Based on your suggestion, I took a look at the testing scenarios mentioned in "Table 29. Failure detection behavior" in the link you provided - http://www.jgroups.org/manual/index.html#FailureDetection. Nowhere is it mentioned that disabling the network on a node is a valid testing scenario for a WildFly cluster.
Failover works properly for our application when the network on a node is disabled in a WebLogic cluster. However, it does not work, and it hampers application functionality, when we disable the network on a node in a WildFly cluster.
As I said earlier, failover on the WildFly cluster works when we stop a node from the admin console or press Ctrl+C to stop the services on a node.
I would like confirmation from you that disabling the network on a node is not a valid failover testing scenario for a WildFly cluster.
Thanks,
Preeta
> Windows environment: WildFly failover test not working when disabling the network on a VM (node) or shutting down the VM (node).
> -------------------------------------------------------------------------------------------------------------------------------
>
> Key: WFLY-6762
> URL: https://issues.jboss.org/browse/WFLY-6762
> Project: WildFly
> Issue Type: Quality Risk
> Components: Clustering
> Affects Versions: 8.2.0.Final
> Reporter: Preeta Kuruvilla
> Assignee: Paul Ferraro
> Priority: Blocker
>
> In your mail related to WFLY-6749 you said the following:
> **The default stack contains the following failure detection protocols:
> FD_SOCK
> FD_ALL
> These protocols are described here:
> http://www.jgroups.org/manual/index.html#FailureDetection
> I suspect that your method of simulating a failure - by disabling the network of the host machine is not being detected by FD_SOCK. It will however, be detected by FD_ALL, but only after 1 minute. The heartbeat timeout used by FD_ALL can be manipulated via the timeout property.
> e.g.
> <protocol type="FD_ALL" ><property name="timeout">60000</property></protocol>
> **************************************************************************************************
> Thanks for the quick response on WFLY-6749.
> Based on your suggestion, I took a look at the testing scenarios mentioned in "Table 29. Failure detection behavior" in the link you provided - http://www.jgroups.org/manual/index.html#FailureDetection. Nowhere is it mentioned that disabling the network on a node is a valid testing scenario for a WildFly cluster.
> Failover works properly for our application when the network on a node is disabled in a WebLogic cluster. However, it does not work, and it hampers application functionality, when we disable the network on a node in a WildFly cluster.
> As I said earlier, failover on the WildFly cluster works when we stop a node from the admin console or press Ctrl+C to stop the services on a node.
> I would like confirmation from you that disabling the network on a node is not a valid failover testing scenario for a WildFly cluster.
> We also tried the same failover scenario by shutting down a VM (node) in the WildFly cluster. It did not work in the Windows environment, although it worked in the Linux environment.
> Note: we are using a Windows 2012 environment. Here is a link I found: http://stackoverflow.com/questions/31218710/unable-to-stop-wildfly-8-2-se...
> Thanks,
> Preeta
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6762) Windows environment: WildFly failover test not working when disabling the network on a VM (node) or shutting down the VM (node).
by Preeta Kuruvilla (JIRA)
[ https://issues.jboss.org/browse/WFLY-6762?page=com.atlassian.jira.plugin.... ]
Preeta Kuruvilla updated WFLY-6762:
-----------------------------------
Summary: Windows environment: WildFly failover test not working when disabling the network on a VM (node) or shutting down the VM (node). (was: WildFly failover test not working when disabling the network on a VM (node) or shutting down the VM (node).)
> Windows environment: WildFly failover test not working when disabling the network on a VM (node) or shutting down the VM (node).
> -------------------------------------------------------------------------------------------------------------------------------
>
> Key: WFLY-6762
> URL: https://issues.jboss.org/browse/WFLY-6762
> Project: WildFly
> Issue Type: Quality Risk
> Components: Clustering
> Affects Versions: 8.2.0.Final
> Reporter: Preeta Kuruvilla
> Assignee: Paul Ferraro
> Priority: Blocker
>
> In your mail related to WFLY-6749 you said the following:
> **The default stack contains the following failure detection protocols:
> FD_SOCK
> FD_ALL
> These protocols are described here:
> http://www.jgroups.org/manual/index.html#FailureDetection
> I suspect that your method of simulating a failure - by disabling the network of the host machine is not being detected by FD_SOCK. It will however, be detected by FD_ALL, but only after 1 minute. The heartbeat timeout used by FD_ALL can be manipulated via the timeout property.
> e.g.
> <protocol type="FD_ALL" ><property name="timeout">60000</property></protocol>
> **
> Thanks for the quick response on WFLY-6749.
> Based on your suggestion, I took a look at the testing scenarios mentioned in "Table 29. Failure detection behavior" in the link you provided - http://www.jgroups.org/manual/index.html#FailureDetection. Nowhere is it mentioned that disabling the network on a node is a valid testing scenario for a WildFly cluster.
> Failover works properly for our application when the network on a node is disabled in a WebLogic cluster. However, it does not work, and it hampers application functionality, when we disable the network on a node in a WildFly cluster.
> As I said earlier, failover on the WildFly cluster works when we stop a node from the admin console or press Ctrl+C to stop the services on a node.
> I would like confirmation from you that disabling the network on a node is not a valid failover testing scenario for a WildFly cluster.
> Thanks,
> Preeta
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6762) WildFly failover test not working when disabling the network on a VM (node) or shutting down the VM (node).
by Preeta Kuruvilla (JIRA)
[ https://issues.jboss.org/browse/WFLY-6762?page=com.atlassian.jira.plugin.... ]
Preeta Kuruvilla updated WFLY-6762:
-----------------------------------
Issue Type: Quality Risk (was: CTS Challenge)
Summary: WildFly failover test not working when disabling the network on a VM (node) or shutting down the VM (node). (was: Is disabling the network on a node a valid test scenario for testing WildFly cluster failover?)
Priority: Blocker (was: Critical)
> WildFly failover test not working when disabling the network on a VM (node) or shutting down the VM (node).
> ---------------------------------------------------------------------------------------------------------
>
> Key: WFLY-6762
> URL: https://issues.jboss.org/browse/WFLY-6762
> Project: WildFly
> Issue Type: Quality Risk
> Components: Clustering
> Affects Versions: 8.2.0.Final
> Reporter: Preeta Kuruvilla
> Assignee: Paul Ferraro
> Priority: Blocker
>
> In your mail related to WFLY-6749 you said the following:
> **The default stack contains the following failure detection protocols:
> FD_SOCK
> FD_ALL
> These protocols are described here:
> http://www.jgroups.org/manual/index.html#FailureDetection
> I suspect that your method of simulating a failure - by disabling the network of the host machine is not being detected by FD_SOCK. It will however, be detected by FD_ALL, but only after 1 minute. The heartbeat timeout used by FD_ALL can be manipulated via the timeout property.
> e.g.
> <protocol type="FD_ALL" ><property name="timeout">60000</property></protocol>
> **
> Thanks for the quick response on WFLY-6749.
> Based on your suggestion, I took a look at the testing scenarios mentioned in "Table 29. Failure detection behavior" in the link you provided - http://www.jgroups.org/manual/index.html#FailureDetection. Nowhere is it mentioned that disabling the network on a node is a valid testing scenario for a WildFly cluster.
> Failover works properly for our application when the network on a node is disabled in a WebLogic cluster. However, it does not work, and it hampers application functionality, when we disable the network on a node in a WildFly cluster.
> As I said earlier, failover on the WildFly cluster works when we stop a node from the admin console or press Ctrl+C to stop the services on a node.
> I would like confirmation from you that disabling the network on a node is not a valid failover testing scenario for a WildFly cluster.
> Thanks,
> Preeta
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (JGRP-2030) GMS: view_ack_collection_timeout delay when last 2 members leave concurrently
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2030?page=com.atlassian.jira.plugin.... ]
Bela Ban updated JGRP-2030:
---------------------------
Fix Version/s: 3.6.11
(was: 3.6.10)
Hi Dan,
Is this still relevant? I don't want to introduce retrying at the transport level; IMO this is the task of one of the retransmission layers.
> GMS: view_ack_collection_timeout delay when last 2 members leave concurrently
> -----------------------------------------------------------------------------
>
> Key: JGRP-2030
> URL: https://issues.jboss.org/browse/JGRP-2030
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 3.6.8
> Reporter: Dan Berindei
> Assignee: Bela Ban
> Priority: Minor
> Fix For: 3.6.11, 4.0
>
>
> When the coordinator ({{NodeE}}) leaves, it tries to install a new view on behalf of the new coordinator ({{NodeG}}, the last member).
> {noformat}
> 21:33:26,844 TRACE (ViewHandler,InitialClusterSizeTest-NodeE-42422:) [GMS] InitialClusterSizeTest-NodeE-42422: mcasting view [InitialClusterSizeTest-NodeG-30521|3] (1) [InitialClusterSizeTest-NodeG-30521] (1 mbrs)
> 21:33:26,844 TRACE (ViewHandler,InitialClusterSizeTest-NodeE-42422:) [TCP_NIO2] InitialClusterSizeTest-NodeE-42422: sending msg to null, src=InitialClusterSizeTest-NodeE-42422, headers are GMS: GmsHeader[VIEW], NAKACK2: [MSG, seqno=1], TP: [cluster_name=ISPN]
> {noformat}
> The message is actually sent later by the bundler, but {{NodeG}} is also sending its {{LEAVE_REQ}} message at the same time. Both nodes try to create a connection to each other, and only {{NodeG}} succeeds:
> {noformat}
> 21:33:26,844 TRACE (ForkThread-2,InitialClusterSizeTest:) [TCP_NIO2] InitialClusterSizeTest-NodeG-30521: sending msg to InitialClusterSizeTest-NodeE-42422, src=InitialClusterSizeTest-NodeG-30521, headers are GMS: GmsHeader[LEAVE_REQ]: mbr=InitialClusterSizeTest-NodeG-30521, UNICAST3: DATA, seqno=1, conn_id=1, first, TP: [cluster_name=ISPN]
> 21:33:26,865 TRACE (Timer-2,InitialClusterSizeTest-NodeG-30521:) [TCP_NIO2] InitialClusterSizeTest-NodeG-30521: sending 1 msgs (83 bytes (0.27% of max_bundle_size) to 1 dests(s): [ISPN:InitialClusterSizeTest-NodeE-42422]
> 21:33:26,865 TRACE (Timer-2,InitialClusterSizeTest-NodeE-42422:) [TCP_NIO2] InitialClusterSizeTest-NodeE-42422: sending 1 msgs (91 bytes (0.29% of max_bundle_size) to 1 dests(s): [ISPN]
> 21:33:26,865 TRACE (Timer-2,InitialClusterSizeTest-NodeG-30521:) [TCP_NIO2] dest=127.0.0.1:7900 (86 bytes)
> 21:33:26,865 TRACE (Timer-2,InitialClusterSizeTest-NodeE-42422:) [TCP_NIO2] dest=127.0.0.1:7920 (94 bytes)
> 21:33:26,865 TRACE (Timer-2,InitialClusterSizeTest-NodeE-42422:) [TCP_NIO2] 127.0.0.1:7900: connecting to 127.0.0.1:7920
> 21:33:26,865 TRACE (Timer-2,InitialClusterSizeTest-NodeG-30521:) [TCP_NIO2] 127.0.0.1:7920: connecting to 127.0.0.1:7900
> 21:33:26,866 TRACE (NioConnection.Reader [null],InitialClusterSizeTest-NodeG-30521:) [TCP_NIO2] 127.0.0.1:7920: rejected connection from 127.0.0.1:7900 (connection existed and my address won as it's higher)
> 21:33:26,867 TRACE (OOB-1,InitialClusterSizeTest-NodeE-42422:) [TCP_NIO2] InitialClusterSizeTest-NodeE-42422: received [dst: InitialClusterSizeTest-NodeE-42422, src: InitialClusterSizeTest-NodeG-30521 (3 headers), size=0 bytes, flags=OOB], headers are GMS: GmsHeader[LEAVE_REQ]: mbr=InitialClusterSizeTest-NodeG-30521, UNICAST3: DATA, seqno=1, conn_id=1, first, TP: [cluster_name=ISPN]
> {noformat}
> I'm guessing {{NodeE}} would need a {{STABLE}} round in order to retransmit the {{VIEW}} message, but I'm not sure if the stable round would work, since it already (partially?) installed the new view with {{NodeG}} as the only member. However, I think it should be possible for {{NodeE}} to remove {{NodeG}} from its {{AckCollector}} once it receives its {{LEAVE_REQ}}, and stop blocking.
> This is a minor annoyance for a few of the Infinispan tests - most of them shut down the nodes serially, so they don't see this delay.
> The question is whether the concurrent connection setup can have an impact for other messages as well - e.g. during startup, when there aren't a lot of messages being sent around to trigger retransmission. Could the node that failed to open its connection retry immediately on the connection opened by the other node?
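> For illustration only - this is not JGroups' actual GMS/AckCollector API, just a self-contained sketch of the suggested behaviour: a LEAVE_REQ from a member whose view ack is still outstanding should clear that member, so the coordinator stops blocking for view_ack_collection_timeout.
> {code}
> import java.util.Set;
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.CountDownLatch;
> import java.util.concurrent.TimeUnit;
>
> // Hypothetical stand-in for the bookkeeping of outstanding view acks.
> public class PendingViewAcks {
>     private final Set<String> pending = ConcurrentHashMap.newKeySet();
>     private final CountDownLatch done = new CountDownLatch(1);
>
>     public PendingViewAcks(Set<String> members) {
>         pending.addAll(members);
>         if (pending.isEmpty()) done.countDown();
>     }
>
>     // A VIEW ack arrived from a member.
>     public void ack(String member) { clear(member); }
>
>     // Suggested change: a LEAVE_REQ from a member we are still waiting on also clears it.
>     public void memberLeft(String member) { clear(member); }
>
>     // Returns true if every pending member acked (or left) before the timeout.
>     public boolean await(long timeoutMs) throws InterruptedException {
>         return done.await(timeoutMs, TimeUnit.MILLISECONDS);
>     }
>
>     private void clear(String member) {
>         pending.remove(member);
>         if (pending.isEmpty()) done.countDown();
>     }
> }
> {code}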
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6402) EJBs accessible too early (spec violation)
by Miroslav Sochurek (JIRA)
[ https://issues.jboss.org/browse/WFLY-6402?page=com.atlassian.jira.plugin.... ]
Miroslav Sochurek updated WFLY-6402:
------------------------------------
Description:
{code}
EJB 3.1 spec, section 4.8.1:
"If the Startup annotation appears on the Singleton bean class or if the Singleton has been designated via the deployment descriptor as requiring eager initialization, the container must initialize the Singleton bean instance during the application startup sequence. The container must initialize all such startup-time Singletons before any external client requests (that is, client requests originating outside of the application) are delivered to any enterprise bean components in the application.
{code}
WildFly does not implement this correctly, and allows calls to other EJBs before a @Startup @Singleton finishes its @PostConstruct call.
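A minimal illustration of the requirement (hypothetical bean names; javax.* APIs as defined by EJB 3.1): no request from outside the application should reach OtherBean.ping() before Initializer.init() has returned.
{code}
import javax.annotation.PostConstruct;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.ejb.Stateless;

// Eagerly initialized during application startup; per EJB 3.1 section 4.8.1 the container
// must finish this @PostConstruct before delivering any external client request to any
// enterprise bean in the application.
@Startup
@Singleton
public class Initializer {
    @PostConstruct
    void init() {
        // e.g. prime caches or run migrations - may take a while
    }
}

// In its own source file.
@Stateless
public class OtherBean {
    // The reported bug: WildFly allowed external calls to reach this method while
    // Initializer.init() was still running.
    public String ping() {
        return "ready";
    }
}
{code}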
This JIRA ticket covers two PRs on WFLY:
https://github.com/wildfly/wildfly/pull/8824 (that is already merged)
and https://github.com/wildfly/wildfly/pull/8989
was:
{code}
EJB 3.1 spec, section 4.8.1:
"If the Startup annotation appears on the Singleton bean class or if the Singleton has been designated via the deployment descriptor as requiring eager initialization, the container must initialize the Singleton bean instance during the application startup sequence. The container must initialize all such startup-time Singletons before any external client requests (that is, client requests originating outside of the application) are delivered to any enterprise bean components in the application.
{code}
WildFly does not implement this correctly, and allows calls to other EJBs before a @Startup @Singleton finishes its @PostConstruct call.
Git Pull Request: https://github.com/wildfly/wildfly/pull/8824, https://github.com/wildfly/wildfly/pull/8989 (was: https://github.com/wildfly/wildfly/pull/8824)
> EJBs accessible too early (spec violation)
> ------------------------------------------
>
> Key: WFLY-6402
> URL: https://issues.jboss.org/browse/WFLY-6402
> Project: WildFly
> Issue Type: Bug
> Components: EJB
> Affects Versions: 10.0.0.Final
> Reporter: Brad Maxwell
> Assignee: Fedor Gavrilov
> Labels: downstream_dependency
> Attachments: auto-test-reproducer.zip
>
>
> {code}
> EJB 3.1 spec, section 4.8.1:
> "If the Startup annotation appears on the Singleton bean class or if the Singleton has been designated via the deployment descriptor as requiring eager initialization, the container must initialize the Singleton bean instance during the application startup sequence. The container must initialize all such startup-time Singletons before any external client requests (that is, client requests originating outside of the application) are delivered to any enterprise bean components in the application.
> {code}
> WildFly does not implement this correctly, and allows calls to other EJBs before a @Startup @Singleton finishes its @PostConstruct call.
> This JIRA ticket covers two PRs on WFLY:
> https://github.com/wildfly/wildfly/pull/8824 (that is already merged)
> and https://github.com/wildfly/wildfly/pull/8989
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (JGRP-2082) Coordinator failover is taking longer because VERIFY_SUSPECT runs twice
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2082?page=com.atlassian.jira.plugin.... ]
Bela Ban resolved JGRP-2082.
----------------------------
Resolution: Done
Your fix is certainly not wrong - I accepted it. Note to self: should we do the same for FD_ALL?
> Coordinator failover is taking longer because VERIFY_SUSPECT runs twice
> -----------------------------------------------------------------------
>
> Key: JGRP-2082
> URL: https://issues.jboss.org/browse/JGRP-2082
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 3.6.3
> Reporter: Osamu Nagano
> Assignee: Bela Ban
> Fix For: 3.6.10, 4.0
>
>
> There are 4 machines (m03, ..., m06) with 7 nodes (n001, ..., n007) on each. To test the coordinator failover behaviour, the nodes on m03 are all killed at the same time. They are suspected and verified in sequence the first time (lines 9 to 18; only m03_n007 is verified as dead, the others are just queued), whereas they are verified as a whole the second time (lines 19 to 34). Since the coordinator is among those queued the first time, the view change is not triggered, causing a delay.
> - m03_n001 is the original coordinator.
> - m05_n001 is the next coordinator and the owner of the following log messages.
> {code}
> 8 12:04:10,997 TRACE [org.jgroups.protocols.FD_SOCK] (FD_SOCK acceptor,m05_n001/clustered) m05_n001/clustered: accepted connection from /172.20.66.36:29702
> 9 12:04:10,999 TRACE [org.jgroups.protocols.FD_SOCK] (INT-28,shared=udp) m05_n001/clustered: received SUSPECT message from m06_n007/clustered: suspects=[m03_n001/clustered]
> 10 12:04:10,999 TRACE [org.jgroups.protocols.FD_SOCK] (INT-28,shared=udp) m05_n001/clustered: received SUSPECT message from m06_n007/clustered: suspects=[m03_n002/clustered]
> 11 12:04:10,999 TRACE [org.jgroups.protocols.FD_SOCK] (INT-28,shared=udp) m05_n001/clustered: received SUSPECT message from m06_n007/clustered: suspects=[m03_n003/clustered]
> 12 12:04:10,999 TRACE [org.jgroups.protocols.FD_SOCK] (INT-28,shared=udp) m05_n001/clustered: received SUSPECT message from m06_n007/clustered: suspects=[m03_n004/clustered]
> 13 12:04:10,999 TRACE [org.jgroups.protocols.FD_SOCK] (INT-28,shared=udp) m05_n001/clustered: received SUSPECT message from m06_n007/clustered: suspects=[m03_n005/clustered]
> 14 12:04:10,999 TRACE [org.jgroups.protocols.FD_SOCK] (INT-28,shared=udp) m05_n001/clustered: received SUSPECT message from m06_n007/clustered: suspects=[m03_n006/clustered]
> 15 12:04:10,999 TRACE [org.jgroups.protocols.FD_SOCK] (INT-28,shared=udp) m05_n001/clustered: received SUSPECT message from m06_n007/clustered: suspects=[m03_n007/clustered]
> 16 12:04:10,999 DEBUG [org.jgroups.protocols.FD_SOCK] (INT-28,shared=udp) m05_n001/clustered: suspecting [m03_n001/clustered, m03_n002/clustered, m03_n003/clustered, m03_n004/clustered, m03_n005/clustered, m03_n006/clustered, m03_n007/clustered]
> 17 12:04:11,000 TRACE [org.jgroups.protocols.VERIFY_SUSPECT] (INT-28,shared=udp) verifying that m03_n007/clustered is dead
> 18 12:04:12,000 TRACE [org.jgroups.protocols.VERIFY_SUSPECT] (VERIFY_SUSPECT.TimerThread,m05_n001/clustered) m03_n007/clustered is dead (passing up SUSPECT event)
> 19 12:04:16,000 TRACE [org.jgroups.protocols.FD_SOCK] (INT-28,shared=udp) m05_n001/clustered: received SUSPECT message from m06_n007/clustered: suspects=[m03_n003/clustered, m03_n004/clustered, m03_n005/clustered, m03_n002/clustered, m03_n001/clustered, m03_n007/clustered, m03_n006/clustered]
> 20 12:04:16,000 DEBUG [org.jgroups.protocols.FD_SOCK] (INT-28,shared=udp) m05_n001/clustered: suspecting [m03_n001/clustered, m03_n002/clustered, m03_n003/clustered, m03_n004/clustered, m03_n005/clustered, m03_n006/clustered, m03_n007/clustered]
> 21 12:04:16,000 TRACE [org.jgroups.protocols.VERIFY_SUSPECT] (INT-28,shared=udp) verifying that m03_n003/clustered is dead
> 22 12:04:16,000 TRACE [org.jgroups.protocols.VERIFY_SUSPECT] (INT-28,shared=udp) verifying that m03_n004/clustered is dead
> 23 12:04:16,001 TRACE [org.jgroups.protocols.VERIFY_SUSPECT] (INT-28,shared=udp) verifying that m03_n005/clustered is dead
> 24 12:04:16,001 TRACE [org.jgroups.protocols.VERIFY_SUSPECT] (INT-28,shared=udp) verifying that m03_n002/clustered is dead
> 25 12:04:16,001 TRACE [org.jgroups.protocols.VERIFY_SUSPECT] (INT-28,shared=udp) verifying that m03_n001/clustered is dead
> 26 12:04:16,001 TRACE [org.jgroups.protocols.VERIFY_SUSPECT] (INT-28,shared=udp) verifying that m03_n007/clustered is dead
> 27 12:04:16,001 TRACE [org.jgroups.protocols.VERIFY_SUSPECT] (INT-28,shared=udp) verifying that m03_n006/clustered is dead
> 28 12:04:17,001 TRACE [org.jgroups.protocols.VERIFY_SUSPECT] (VERIFY_SUSPECT.TimerThread,m05_n001/clustered) m03_n003/clustered is dead (passing up SUSPECT event)
> 29 12:04:17,001 TRACE [org.jgroups.protocols.VERIFY_SUSPECT] (VERIFY_SUSPECT.TimerThread,m05_n001/clustered) m03_n004/clustered is dead (passing up SUSPECT event)
> 30 12:04:17,002 TRACE [org.jgroups.protocols.VERIFY_SUSPECT] (VERIFY_SUSPECT.TimerThread,m05_n001/clustered) m03_n007/clustered is dead (passing up SUSPECT event)
> 31 12:04:17,002 TRACE [org.jgroups.protocols.VERIFY_SUSPECT] (VERIFY_SUSPECT.TimerThread,m05_n001/clustered) m03_n001/clustered is dead (passing up SUSPECT event)
> 32 12:04:17,002 TRACE [org.jgroups.protocols.VERIFY_SUSPECT] (VERIFY_SUSPECT.TimerThread,m05_n001/clustered) m03_n002/clustered is dead (passing up SUSPECT event)
> 33 12:04:17,003 TRACE [org.jgroups.protocols.VERIFY_SUSPECT] (VERIFY_SUSPECT.TimerThread,m05_n001/clustered) m03_n005/clustered is dead (passing up SUSPECT event)
> 34 12:04:17,003 TRACE [org.jgroups.protocols.VERIFY_SUSPECT] (VERIFY_SUSPECT.TimerThread,m05_n001/clustered) m03_n006/clustered is dead (passing up SUSPECT event)
> {code}
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6402) EJBs accessible too early (spec violation)
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/WFLY-6402?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on WFLY-6402:
-----------------------------------------------
Miroslav Sochurek <msochure(a)redhat.com> changed the Status of [bug 1310908|https://bugzilla.redhat.com/show_bug.cgi?id=1310908] from POST to ON_QA
> EJBs accessible too early (spec violation)
> ------------------------------------------
>
> Key: WFLY-6402
> URL: https://issues.jboss.org/browse/WFLY-6402
> Project: WildFly
> Issue Type: Bug
> Components: EJB
> Affects Versions: 10.0.0.Final
> Reporter: Brad Maxwell
> Assignee: Fedor Gavrilov
> Labels: downstream_dependency
> Attachments: auto-test-reproducer.zip
>
>
> {code}
> EJB 3.1 spec, section 4.8.1:
> "If the Startup annotation appears on the Singleton bean class or if the Singleton has been designated via the deployment descriptor as requiring eager initialization, the container must initialize the Singleton bean instance during the application startup sequence. The container must initialize all such startup-time Singletons before any external client requests (that is, client requests originating outside of the application) are delivered to any enterprise bean components in the application.
> {code}
> WildFly does not implement this correctly, and allows calls to other EJBs before a @Startup @Singleton finishes its @PostConstruct call.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFCORE-1623) BoundedQueueThreadPoolResourceDefinition cannot add additional checks on the threads executor
by Lin Gao (JIRA)
Lin Gao created WFCORE-1623:
-------------------------------
Summary: BoundedQueueThreadPoolResourceDefinition cannot add additional checks on the threads executor
Key: WFCORE-1623
URL: https://issues.jboss.org/browse/WFCORE-1623
Project: WildFly Core
Issue Type: Bug
Reporter: Lin Gao
Assignee: Lin Gao
When there is already a long-running thread pool for a work manager and you try to create another one:
{{/subsystem=jca/workmanager=default/long-running-threads=custom:add(max-threads=30, queue-length=30)}}
you only get an opaque error message:
{{"failure-description" => "WFLYCTL0086: Failed to persist configuration change: WFLYCTL0084: Failed to marshal configuration",}} with a, also useless, {{java.lang.IllegalArgumentException}} in the server log.
It should be more obvious that the error is that you cannot create two long-running thread pools
To be able to check whether the {{long-running-threads/short-running-threads}} within one JCA workmanager is already defined when adding one, {{BoundedQueueThreadPoolResourceDefinition}} in {{threads}} subsystem needs to be extended to be able to check this.
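As a workaround until such validation exists, one can check from the CLI whether a long-running pool is already defined before adding one (the resource names below are the defaults from the example above; adjust to the work manager in question):
{{/subsystem=jca/workmanager=default:read-children-names(child-type=long-running-threads)}}
If the returned list is non-empty, adding a second {{long-running-threads}} pool will fail to marshal as described above.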
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6402) EJBs accessible too early (spec violation)
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/WFLY-6402?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on WFLY-6402:
-----------------------------------------------
Miroslav Sochurek <msochure(a)redhat.com> changed the Status of [bug 1350355|https://bugzilla.redhat.com/show_bug.cgi?id=1350355] from NEW to POST
> EJBs accessible too early (spec violation)
> ------------------------------------------
>
> Key: WFLY-6402
> URL: https://issues.jboss.org/browse/WFLY-6402
> Project: WildFly
> Issue Type: Bug
> Components: EJB
> Affects Versions: 10.0.0.Final
> Reporter: Brad Maxwell
> Assignee: Fedor Gavrilov
> Labels: downstream_dependency
> Attachments: auto-test-reproducer.zip
>
>
> {code}
> EJB 3.1 spec, section 4.8.1:
> "If the Startup annotation appears on the Singleton bean class or if the Singleton has been designated via the deployment descriptor as requiring eager initialization, the container must initialize the Singleton bean instance during the application startup sequence. The container must initialize all such startup-time Singletons before any external client requests (that is, client requests originating outside of the application) are delivered to any enterprise bean components in the application.
> {code}
> WildFly does not implement this correctly, and allows calls to other EJBs before a @Startup @Singleton finishes its @PostConstruct call.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)