[JBoss JIRA] (JGRP-1821) SEQUENCER2: new impl of total order protocol
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-1821?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on JGRP-1821:
--------------------------------
First impl is on branch {{JGRP-1821}}
> SEQUENCER2: new impl of total order protocol
> --------------------------------------------
>
> Key: JGRP-1821
> URL: https://issues.jboss.org/browse/JGRP-1821
> Project: JGroups
> Issue Type: Feature Request
> Reporter: Bela Ban
> Assignee: Bela Ban
> Fix For: 3.5
>
>
> When in {{SEQUENCER}} a member P wants to send a multicast message M, it unicasts it to the coordinator, who multicasts it on behalf of P.
> The new impl {{SEQUENCER2}} is different:
> * P asks the coord for a seqno
> * The coord responds with a (monotonically increasing) seqno
> * P multicasts M with that seqno
> * Everyone uses one global {{Table}} to deliver messages and weed out duplicates
> Advantages:
> # A sender sends messages itself, so the sequencer doesn't need to do sending (and potential retransmissions)
> # Compared to {{SEQUENCER}}, the data is only sent and marshalled once (better for large messages)
> # A sender grabs entire ranges of seqnos, so this should be efficient
> Edge case handling, though, requires some work, e.g.:
> * A member B crashes after having received a seqno (e.g. 4)
> ** The sequencer will give out 5 next, but since the message with seqno 4 is never multicast, all subsequent messages will get stuck waiting for 4
> * The sequencer (coord) dies or leaves
> ** The next-in-line probably needs to run some reconciliation protocol, asking all members for their highest received seqnos
> ** Messages like 4 would get marked as dummy, removed from table and dropped
[JBoss JIRA] (JGRP-1821) SEQUENCER2: new impl of total order protocol
by Bela Ban (JIRA)
Bela Ban created JGRP-1821:
------------------------------
Summary: SEQUENCER2: new impl of total order protocol
Key: JGRP-1821
URL: https://issues.jboss.org/browse/JGRP-1821
Project: JGroups
Issue Type: Feature Request
Reporter: Bela Ban
Assignee: Bela Ban
Fix For: 3.5
When in {{SEQUENCER}} a member P wants to send a multicast message M, it unicasts it to the coordinator, who multicasts it on behalf of P.
The new impl {{SEQUENCER2}} is different:
* P asks the coord for a seqno
* The coord responds with a (monotonically increasing) seqno
* P multicasts M with that seqno
* Everyone uses one global {{Table}} to deliver messages and weed out duplicates
Advantages:
# A sender sends messages itself, so the sequencer doesn't need to do sending (and potential retransmissions)
# Compared to {{SEQUENCER}}, the data is only sent and marshalled once (better for large messages)
# A sender grabs entire ranges of seqnos, so this should be efficient
Edge case handling, though, requires some work, e.g.:
* A member B crashes after having received a seqno (e.g. 4)
** The sequencer will give out 5 next, but since the message with seqno 4 is never multicast, all subsequent messages will get stuck waiting for 4
* The sequencer (coord) dies or leaves
** The next-in-line probably needs to run some reconciliation protocol, asking all members for their highest received seqnos
** Messages like 4 would get marked as dummy, removed from table and dropped
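For illustration, here is a minimal sketch of the seqno-request/multicast flow described above. Class and method names are hypothetical, not the actual code on the {{JGRP-1821}} branch, and only the happy path is shown, not the edge cases listed above.
{code}
import java.util.List;

// Hypothetical sketch of the SEQUENCER2 flow; names are illustrative, not JGroups API.
public class Sequencer2Sketch {

    private long nextSeqno = 1;   // maintained by the coordinator only

    /** Coordinator: hand out a contiguous, monotonically increasing range of seqnos. */
    public synchronized long[] grabSeqnoRange(int count) {
        long first = nextSeqno;
        nextSeqno += count;
        return new long[] { first, nextSeqno - 1 };   // inclusive range [first, last]
    }

    /** Sender P: request seqnos from the coordinator, then multicast the messages itself. */
    public void send(List<byte[]> payloads) {
        long[] range = requestSeqnosFromCoord(payloads.size());   // unicast round trip (hypothetical)
        long seqno = range[0];
        for (byte[] payload : payloads)
            multicast(payload, seqno++);   // receivers insert (seqno -> msg) into one global table
    }

    // Placeholders for the actual transport interactions.
    protected long[] requestSeqnosFromCoord(int count) { return grabSeqnoRange(count); }
    protected void multicast(byte[] payload, long seqno) { /* deliver in seqno order, drop duplicates */ }
}
{code}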
[JBoss JIRA] (WFLY-2881) org.jboss.as.ejb3.timer.schedule.CalendarBasedTimeoutTestCase#testCalendarBasedTimeout
by Eduardo Martins (JIRA)
[ https://issues.jboss.org/browse/WFLY-2881?page=com.atlassian.jira.plugin.... ]
Eduardo Martins resolved WFLY-2881.
-----------------------------------
Resolution: Done
> org.jboss.as.ejb3.timer.schedule.CalendarBasedTimeoutTestCase#testCalendarBasedTimeout
> --------------------------------------------------------------------------------------
>
> Key: WFLY-2881
> URL: https://issues.jboss.org/browse/WFLY-2881
> Project: WildFly
> Issue Type: Bug
> Security Level: Public(Everyone can see)
> Components: EJB
> Affects Versions: 8.0.0.Final
> Reporter: Frank Langelage
> Assignee: Eduardo Martins
> Priority: Blocker
> Fix For: 8.0.1.Final
>
> Attachments: org.jboss.as.ejb3.timer.schedule.CalendarBasedTimeoutTestCase.txt, TEST-org.jboss.as.ejb3.timer.schedule.CalendarBasedTimeoutTestCase.xml
>
>
> Running the build with smoke tests on current GitHub sources, I get a failure in this test case.
> HOUR_OF_DAY is not 0 as expected but 1.
> I changed the Assert in the test case to print out firstTimeout.toString() instead of only timeZoneDisplayName.
> See attached files for more.
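A minimal sketch of the assertion change described above, with assumed variable names (not copied from the actual test), so that the full calendar state is visible on failure:
{code}
import java.util.Calendar;
import org.junit.Assert;

// Hypothetical sketch: fail with the full firstTimeout.toString() instead of only the
// time zone display name, so the unexpected HOUR_OF_DAY (1 instead of 0) is visible.
public class FirstTimeoutAssertSketch {

    static void assertFirstTimeoutAtMidnight(Calendar firstTimeout) {
        Assert.assertEquals("unexpected first timeout: " + firstTimeout,
                            0, firstTimeout.get(Calendar.HOUR_OF_DAY));
    }
}
{code}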
[JBoss JIRA] (JGRP-1817) OverlappingMergeTest testSameCreatorDifferentIDs fails to create correct merged view
by Richard Achmatowicz (JIRA)
[ https://issues.jboss.org/browse/JGRP-1817?page=com.atlassian.jira.plugin.... ]
Richard Achmatowicz commented on JGRP-1817:
-------------------------------------------
I've been looking at this for some time now.
When channels start up, the discovery phase calls Discovery.findAllMembers() which results in an interaction like this:
- channel C sends GET_MBRS_REQ to channels A and B
- these channels respond with GET_MBRS_RSP
- C determines who the members are
{noformat}
Discovery: C calling findInitialMbrs:
Discovery: C calling findMembers: num_expected = 10, view_id = null
Discovery: sending discovery request: view_id = null, data = non-null
C: sending in-line discovery request to 10.16.95.7:27199
C: sending in-line discovery request to 10.16.95.7:27200
TCPPING: A received discovery request from C
Discovery: A processing GET_MBRS_REQ from sender: C
Discovery: A sending discovery response to C
C: sending in-line discovery request to 10.16.95.7:27202
Discovery: C processing GET_MBRS_RSP from A:A, view_id=[A|1], is_server=true, is_coord=true, logical_name=A, physical_addrs=10.16.95.7:27199
Discovery: C called findMembers
417364 [TRACE] GMS: - C: initial_mbrs are A
417364 [DEBUG] GMS: - election results: {A=1}
417364 [DEBUG] GMS: - sending JOIN(C) to A
TCPPING: B received discovery request from C
Discovery: B processing GET_MBRS_REQ from sender: C
Discovery: B sending discovery response to C
Discovery: C processing GET_MBRS_RSP from B:B, view_id=[A|1], is_server=true, is_coord=false, logical_name=B, physical_addrs=10.16.95.7:27200
{noformat}
This seems to work fine.
However, when MERGE.sendMergeSolicitation() is called, Discovery.findAllViews() is used instead of Discovery.findAllMembers().
This goes through the same underlying method, Discovery.findMembers(), but the behaviour ends up being completely different: in many cases there is no evidence of the GET_MBRS_REQ messages even arriving at the remote members, among other things.
For example:
{noformat}
==== triggering merge solicitation ====:
Discovery: A calling findAllViews:
Discovery: A calling findMembers: num_expected = 10, view_id = [A|5]
Discovery: sending discovery request: view_id = [A|5], data = null
370387 [TRACE] TCPPING: - A: sending discovery request to 10.16.95.7:27216
A: sending in-line discovery request to 10.16.95.7:27216
370392 [TRACE] TCPPING: - A: sending discovery request to 10.16.95.7:27218
A: sending in-line discovery request to 10.16.95.7:27218
370393 [TRACE] TCPPING: - A: sending discovery request to 10.16.95.7:27217
A: sending in-line discovery request to 10.16.95.7:27217
373394 [TRACE] TCPPING: - A: discovery took 3007 ms: responses: 1 total (1 servers (0 coord), 0 clients); responses received = B
373395 [TRACE] MERGE2: - Discovery results:
[B]: view_id=[A|6] ([A|6] [A, B])
[A]: view_id=[A|5] ([A|5] [A])
373395 [DEBUG] MERGE2: - A found different views : [A|6], [A|5]; sending up MERGE event with merge participants [B, A].
Discovery results:
[B]: coord=A
[A]: coord=A
==== checking views after merge ====:
....................Disabling TRACE debugging for GMS, MERGE2 and Discovery
A's view: [B|7] [B, A]
B's view: [B|7] [B, A]
C's view: [A|7] [A, B, C]
{noformat}
Note the absence of the messages concerning GET_MBRS_REQ.
I'm still looking at this. It's a puzzle.
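For reference, a rough sketch of how the two call paths relate, as described above. The types and signatures are placeholders, not the actual JGroups API:
{code}
import java.util.List;

// Illustrative only: hypothetical types/signatures for the two discovery entry points.
interface DiscoverySketch {

    interface Address {}              // placeholder for a member address
    interface View {}                 // placeholder for a view, e.g. [A|5] [A]
    interface ViewId {}               // placeholder for a view id, e.g. [A|5]
    interface DiscoveryResponse {}    // placeholder for a GET_MBRS_RSP

    // Startup path: C sends GET_MBRS_REQ, A and B answer with GET_MBRS_RSP,
    // C derives the initial membership and elects the coordinator.
    List<Address> findAllMembers();

    // Merge path (sendMergeSolicitation()): collect the current views of all reachable
    // members; differing view ids trigger a MERGE event that is sent up to GMS.
    List<View> findAllViews();

    // Shared underlying round trip. In the failing runs the requests are logged as sent,
    // but some responses never arrive, which is where the investigation currently points.
    List<DiscoveryResponse> findMembers(int numExpected, ViewId viewId);
}
{code}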
> OverlappingMergeTest testSameCreatorDifferentIDs fails to create correct merged view
> ------------------------------------------------------------------------------------
>
> Key: JGRP-1817
> URL: https://issues.jboss.org/browse/JGRP-1817
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 3.2.13
> Environment: RHEL
> Reporter: Richard Achmatowicz
> Assignee: Bela Ban
> Fix For: 3.2.14
>
>
> This test does the following:
> - creates three channels a,b,c
> - injects views
> {noformat}
> A: {A|5 A}, B:{A|6 A,B}, C:{A|7 A,B,C}
> {noformat}
> - calls MERGE.sendMergeSolicitation() on channel A to simulate the periodic task MERGE.findSubgroupsTask, which should find the views of all reachable members, check whether they differ and, if they do, prepare and send a MERGE event up to GMS
> - checks that all channels have the final view of size 3
> The test fails intermittently but frequently on RHEL, with the same failure each time:
> {noformat}
> -------------------------------------------------------------------
> GMS: address=A, cluster=OverlappingMergeTest, physical address=10.16.95.7:27215
> -------------------------------------------------------------------
> -------------------------------------------------------------------
> GMS: address=B, cluster=OverlappingMergeTest, physical address=10.16.95.7:27216
> -------------------------------------------------------------------
> -------------------------------------------------------------------
> GMS: address=C, cluster=OverlappingMergeTest, physical address=10.16.95.7:27217
> -------------------------------------------------------------------
> ------------- testSameCreatorDifferentIDs -----------
> [A] view=[A|5] [A]
> [B] view=[A|6] [A, B]
> [C] view=[A|7] [A, B, C]
> A's view: [A|5] [A]
> B's view: [A|6] [A, B]
> C's view: [A|7] [A, B, C]
> Enabling TRACE debugging for GMS, MERGE2 and Discovery
> ==== triggering merge solicitation ====:
> 212534 [TRACE] TCPPING: - A: sending discovery request to 10.16.95.7:27216
> 212537 [TRACE] TCPPING: - A: sending discovery request to 10.16.95.7:27218
> 212538 [TRACE] TCPPING: - A: sending discovery request to 10.16.95.7:27217
> 215538 [TRACE] TCPPING: - A: discovery took 3004 ms: responses: 1 total (1 servers (0 coord), 0 clients)
> 215539 [TRACE] MERGE2: - Discovery results:
> [B]: view_id=[A|6] ([A|6] [A, B])
> [A]: view_id=[A|5] ([A|5] [A])
> 215539 [DEBUG] MERGE2: - A found different views : [A|5], [A|6]; sending up MERGE event with merge participants [B, A].
> Discovery results:
> [B]: coord=A
> [A]: coord=A
> ==== checking views after merge ====:
> ....................Disabling TRACE debugging for GMS, MERGE2 and Discovery
> A's view: [A|7] [A, B]
> B's view: [A|7] [A, B]
> C's view: [A|7] [A, B, C]
> {noformat}
> Whenever this test fails, it is the discovery phase which fails to find the correct set of views. Instead of finding views for channels A, B and C, it only finds views for channels A and B.
>
> Also, the discovery requests are sent to host:port combinations which are offset by 1. For example, in the case above, the host:port combinations of the channels are 10.16.95.7:27215, 10.16.95.7:27216, and 10.16.95.7:27217, but the pings go out to 10.16.95.7:27216, 10.16.95.7:27217, and 10.16.95.7:27218. Not sure if this is significant, as the range still covers channels B and C.
[JBoss JIRA] (JGRP-1817) OverlappingMergeTest testSameCreatorDifferentIDs fails to create correct merged view
by Richard Achmatowicz (JIRA)
[ https://issues.jboss.org/browse/JGRP-1817?page=com.atlassian.jira.plugin.... ]
Richard Achmatowicz updated JGRP-1817:
--------------------------------------
Description:
This test does the following:
- creates three channels a,b,c
- injects views
{noformat}
A: {A|5 A}, B:{A|6 A,B}, C:{A|7 A,B,C}
{noformat}
- calls MERGE.sendMergeSolicitation() on channel A to simulate the periodic task MERGE.findSubgroupsTask, which should find the views of all reachable members, check whether they differ and, if they do, prepare and send a MERGE event up to GMS
- checks that all channels have the final view of size 3
The test fails intermittently but frequently on RHEL, with the same failure each time:
{noformat}
-------------------------------------------------------------------
GMS: address=A, cluster=OverlappingMergeTest, physical address=10.16.95.7:27215
-------------------------------------------------------------------
-------------------------------------------------------------------
GMS: address=B, cluster=OverlappingMergeTest, physical address=10.16.95.7:27216
-------------------------------------------------------------------
-------------------------------------------------------------------
GMS: address=C, cluster=OverlappingMergeTest, physical address=10.16.95.7:27217
-------------------------------------------------------------------
------------- testSameCreatorDifferentIDs -----------
[A] view=[A|5] [A]
[B] view=[A|6] [A, B]
[C] view=[A|7] [A, B, C]
A's view: [A|5] [A]
B's view: [A|6] [A, B]
C's view: [A|7] [A, B, C]
Enabling TRACE debugging for GMS, MERGE2 and Discovery
==== triggering merge solicitation ====:
212534 [TRACE] TCPPING: - A: sending discovery request to 10.16.95.7:27216
212537 [TRACE] TCPPING: - A: sending discovery request to 10.16.95.7:27218
212538 [TRACE] TCPPING: - A: sending discovery request to 10.16.95.7:27217
215538 [TRACE] TCPPING: - A: discovery took 3004 ms: responses: 1 total (1 servers (0 coord), 0 clients)
215539 [TRACE] MERGE2: - Discovery results:
[B]: view_id=[A|6] ([A|6] [A, B])
[A]: view_id=[A|5] ([A|5] [A])
215539 [DEBUG] MERGE2: - A found different views : [A|5], [A|6]; sending up MERGE event with merge participants [B, A].
Discovery results:
[B]: coord=A
[A]: coord=A
==== checking views after merge ====:
....................Disabling TRACE debugging for GMS, MERGE2 and Discovery
A's view: [A|7] [A, B]
B's view: [A|7] [A, B]
C's view: [A|7] [A, B, C]
{noformat}
Whenever this test fails, it is the discovery phase which fails to find the correct set of views. Instead of finding views for channels A, B and C, it only finds views for channels A and B.
Also, the discovery requests are sent to host:port combinations which are offset by 1. For example, in the case above, the host:port combinations of the channels are 10.16.95.7:27215, 10.16.95.7:27216, and 10.16.95.7:27217, but the pings go out to 10.16.95.7:27216, 10.16.95.7:27217, and 10.16.95.7:27218. Not sure if this is significant, as the range still covers channels B and C.
was:
This test does the following:
- creates three channels a,b,c
- injects views
{noformat}
A: {A|5 A}, B:{A|6 A,B}, C:{A|7 A,B,C}
{noformat}
- injects a merge event in each of channels A,B,C representing these three views
- checks that all channels have the final view of size 3
The test fails intermittently on RHEL, with the same failure each time:
{noformat}
-------------------------------------------------------------------
GMS: address=A, cluster=OverlappingMergeTest, physical address=10.16.95.7:27215
-------------------------------------------------------------------
-------------------------------------------------------------------
GMS: address=B, cluster=OverlappingMergeTest, physical address=10.16.95.7:27216
-------------------------------------------------------------------
-------------------------------------------------------------------
GMS: address=C, cluster=OverlappingMergeTest, physical address=10.16.95.7:27217
-------------------------------------------------------------------
------------- testSameCreatorDifferentIDs -----------
[A] view=[A|5] [A]
[B] view=[A|6] [A, B]
[C] view=[A|7] [A, B, C]
A's view: [A|5] [A]
B's view: [A|6] [A, B]
C's view: [A|7] [A, B, C]
Enabling TRACE debugging for GMS, MERGE2 and Discovery
==== triggering merge solicitation ====:
212534 [TRACE] TCPPING: - A: sending discovery request to 10.16.95.7:27216
212537 [TRACE] TCPPING: - A: sending discovery request to 10.16.95.7:27218
212538 [TRACE] TCPPING: - A: sending discovery request to 10.16.95.7:27217
215538 [TRACE] TCPPING: - A: discovery took 3004 ms: responses: 1 total (1 servers (0 coord), 0 clients)
215539 [TRACE] MERGE2: - Discovery results:
[B]: view_id=[A|6] ([A|6] [A, B])
[A]: view_id=[A|5] ([A|5] [A])
215539 [DEBUG] MERGE2: - A found different views : [A|5], [A|6]; sending up MERGE event with merge participants [B, A].
Discovery results:
[B]: coord=A
[A]: coord=A
==== checking views after merge ====:
....................Disabling TRACE debugging for GMS, MERGE2 and Discovery
A's view: [A|7] [A, B]
B's view: [A|7] [A, B]
C's view: [A|7] [A, B, C]
{noformat}
Whenever this test fails, it is the discovery phase which fails to find the correct set of views. Instead of finding views for channels A, B and C, it only finds views for channels A and B.
Also, the discovery requests are sent to host:port combinations which are offset by 1. For example, in the case above, the host:port combinations of the channels are 10.16.95.7:27215, 10.16.95.7:27216, and 10.16.95.7:27217, but the pings go out to 10.16.95.7:27216, 10.16.95.7:27217, and 10.16.95.7:27218. Not sure if this is significant, as the range still covers channels B and C.
> OverlappingMergeTest testSameCreatorDifferentIDs fails to create correct merged view
> ------------------------------------------------------------------------------------
>
> Key: JGRP-1817
> URL: https://issues.jboss.org/browse/JGRP-1817
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 3.2.13
> Environment: RHEL
> Reporter: Richard Achmatowicz
> Assignee: Bela Ban
> Fix For: 3.2.14
>
>
> This test does the following:
> - creates three channels a,b,c
> - injects views
> {noformat}
> A: {A|5 A}, B:{A|6 A,B}, C:{A|7 A,B,C}
> {noformat}
> - calls MERGE.sendMergeSolicitation() on channel A to simulate the periodic task MERGE.findSubgroupsTask, which should find the views of all reachable members, check whether they differ and, if they do, prepare and send a MERGE event up to GMS
> - checks that all channels have the final view of size 3
> The test fails intermittently but frequently on RHEL, with the same failure each time:
> {noformat}
> -------------------------------------------------------------------
> GMS: address=A, cluster=OverlappingMergeTest, physical address=10.16.95.7:27215
> -------------------------------------------------------------------
> -------------------------------------------------------------------
> GMS: address=B, cluster=OverlappingMergeTest, physical address=10.16.95.7:27216
> -------------------------------------------------------------------
> -------------------------------------------------------------------
> GMS: address=C, cluster=OverlappingMergeTest, physical address=10.16.95.7:27217
> -------------------------------------------------------------------
> ------------- testSameCreatorDifferentIDs -----------
> [A] view=[A|5] [A]
> [B] view=[A|6] [A, B]
> [C] view=[A|7] [A, B, C]
> A's view: [A|5] [A]
> B's view: [A|6] [A, B]
> C's view: [A|7] [A, B, C]
> Enabling TRACE debugging for GMS, MERGE2 and Discovery
> ==== triggering merge solicitation ====:
> 212534 [TRACE] TCPPING: - A: sending discovery request to 10.16.95.7:27216
> 212537 [TRACE] TCPPING: - A: sending discovery request to 10.16.95.7:27218
> 212538 [TRACE] TCPPING: - A: sending discovery request to 10.16.95.7:27217
> 215538 [TRACE] TCPPING: - A: discovery took 3004 ms: responses: 1 total (1 servers (0 coord), 0 clients)
> 215539 [TRACE] MERGE2: - Discovery results:
> [B]: view_id=[A|6] ([A|6] [A, B])
> [A]: view_id=[A|5] ([A|5] [A])
> 215539 [DEBUG] MERGE2: - A found different views : [A|5], [A|6]; sending up MERGE event with merge participants [B, A].
> Discovery results:
> [B]: coord=A
> [A]: coord=A
> ==== checking views after merge ====:
> ....................Disabling TRACE debugging for GMS, MERGE2 and Discovery
> A's view: [A|7] [A, B]
> B's view: [A|7] [A, B]
> C's view: [A|7] [A, B, C]
> {noformat}
> Whenever this test fails, it is the discovery phase which fails to find the correct set of views. Instead of finding views for channels A, B and C, it only finds views for channels A and B.
>
> Also, the discovery requests are sent to host:port combinations which are offset by 1. For example, in the case above, the host:port combinations of the channels are 10.16.95.7:27215, 10.16.95.7:27216, and 10.16.95.7:27217, but the pings go out to 10.16.95.7:27216, 10.16.95.7:27217, and 10.16.95.7:27218. Not sure if this is significant, as the range still covers channels B and C.
[JBoss JIRA] (WFLY-3207) ContextNotActiveException: WELD-001303: No active contexts for scope type javax.faces.flow.builder.FlowDefinition on reload
by Farah Juma (JIRA)
[ https://issues.jboss.org/browse/WFLY-3207?page=com.atlassian.jira.plugin.... ]
Farah Juma commented on WFLY-3207:
----------------------------------
Please attach a reproducer app (I just tried reproducing this but wasn't able to).
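In case it helps with a reproducer: the failure occurs while Mojarra loads CDI-defined flows, so a deployment containing a flow-definition producer along these lines (the flow id and view path are assumed here, plus a faces-config.xml declaring version 2.2) should exercise the {{javax.faces.flow.builder.FlowDefinition}} lookup that the stack trace complains about:
{code}
import javax.enterprise.inject.Produces;
import javax.faces.flow.Flow;
import javax.faces.flow.builder.FlowBuilder;
import javax.faces.flow.builder.FlowBuilderParameter;
import javax.faces.flow.builder.FlowDefinition;

// Minimal JSF 2.2 flow definition via a CDI producer; "sample-flow" and the view
// path are assumptions for the sketch, not taken from the original report.
public class SampleFlowDefinition {

    @Produces
    @FlowDefinition
    public Flow defineSampleFlow(@FlowBuilderParameter FlowBuilder builder) {
        String flowId = "sample-flow";
        builder.id("", flowId);
        builder.viewNode(flowId, "/" + flowId + "/" + flowId + ".xhtml").markAsStartNode();
        return builder.getFlow();
    }
}
{code}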
> ContextNotActiveException: WELD-001303: No active contexts for scope type javax.faces.flow.builder.FlowDefinition on reload
> ---------------------------------------------------------------------------------------------------------------------------
>
> Key: WFLY-3207
> URL: https://issues.jboss.org/browse/WFLY-3207
> Project: WildFly
> Issue Type: Bug
> Security Level: Public(Everyone can see)
> Components: JSF
> Affects Versions: 8.0.0.Final
> Reporter: Radoslav Husar
> Assignee: Farah Juma
>
> Deploy JSF application and reload the server (:reload from CLI).
> {noformat}
> 14:07:53,201 SEVERE [javax.enterprise.resource.webcontainer.jsf.config] (MSC service thread 1-2) Critical error during deployment: : org.jboss.weld.context.ContextNotActiveException: WELD-001303: No active contexts for scope type javax.faces.flow.builder.FlowDefinition
> at org.jboss.weld.manager.BeanManagerImpl.getContext(BeanManagerImpl.java:680) [weld-core-impl-2.1.2.Final.jar:2014-01-09 09:23]
> at org.jboss.weld.util.ForwardingBeanManager.getContext(ForwardingBeanManager.java:181) [weld-core-impl-2.1.2.Final.jar:2014-01-09 09:23]
> at com.sun.faces.application.ApplicationAssociate$PostConstructApplicationListener.loadFlows(ApplicationAssociate.java:323) [jsf-impl-2.2.6-jbossorg-2.jar:]
> at com.sun.faces.application.ApplicationAssociate$PostConstructApplicationListener.processEvent(ApplicationAssociate.java:303) [jsf-impl-2.2.6-jbossorg-2.jar:]
> at javax.faces.event.SystemEvent.processListener(SystemEvent.java:108) [jboss-jsf-api_2.2_spec-2.2.6.jar:2.2.6]
> at com.sun.faces.application.ApplicationImpl.processListeners(ApplicationImpl.java:2187) [jsf-impl-2.2.6-jbossorg-2.jar:]
> at com.sun.faces.application.ApplicationImpl.invokeListenersFor(ApplicationImpl.java:2163) [jsf-impl-2.2.6-jbossorg-2.jar:]
> at com.sun.faces.application.ApplicationImpl.publishEvent(ApplicationImpl.java:296) [jsf-impl-2.2.6-jbossorg-2.jar:]
> at org.jboss.as.jsf.injection.weld.ForwardingApplication.publishEvent(ForwardingApplication.java:294) [wildfly-jsf-injection-8.0.1.Final-SNAPSHOT.jar:8.0.1.Final-SNAPSHOT]
> at com.sun.faces.config.ConfigManager.publishPostConfigEvent(ConfigManager.java:692) [jsf-impl-2.2.6-jbossorg-2.jar:]
> at com.sun.faces.config.ConfigureListener.contextInitialized(ConfigureListener.java:253) [jsf-impl-2.2.6-jbossorg-2.jar:]
> at io.undertow.servlet.core.ApplicationListeners.contextInitialized(ApplicationListeners.java:173) [undertow-servlet-1.0.3.Final.jar:1.0.3.Final]
> at io.undertow.servlet.core.DeploymentManagerImpl.deploy(DeploymentManagerImpl.java:189) [undertow-servlet-1.0.3.Final.jar:1.0.3.Final]
> at org.wildfly.extension.undertow.deployment.UndertowDeploymentService.startContext(UndertowDeploymentService.java:86)
> at org.wildfly.extension.undertow.deployment.UndertowDeploymentService.start(UndertowDeploymentService.java:71)
> at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1948)
> at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1881)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0]
> at java.lang.Thread.run(Thread.java:744) [rt.jar:1.8.0]
> 14:07:53,208 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-2) MSC000001: Failed to start service jboss.undertow.deployment.default-server.default-host./clusterbench-passivating: org.jboss.msc.service.StartException in service jboss.undertow.deployment.default-server.default-host./clusterbench-passivating: Failed to start service
> at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1904) [jboss-msc-1.2.1.Final.jar:1.2.1.Final]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0]
> at java.lang.Thread.run(Thread.java:744) [rt.jar:1.8.0]
> Caused by: java.lang.RuntimeException: java.lang.RuntimeException: org.jboss.weld.context.ContextNotActiveException: WELD-001303: No active contexts for scope type javax.faces.flow.builder.FlowDefinition
> at io.undertow.servlet.core.DeploymentManagerImpl.deploy(DeploymentManagerImpl.java:218)
> at org.wildfly.extension.undertow.deployment.UndertowDeploymentService.startContext(UndertowDeploymentService.java:86)
> at org.wildfly.extension.undertow.deployment.UndertowDeploymentService.start(UndertowDeploymentService.java:71)
> at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1948) [jboss-msc-1.2.1.Final.jar:1.2.1.Final]
> at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1881) [jboss-msc-1.2.1.Final.jar:1.2.1.Final]
> ... 3 more
> Caused by: java.lang.RuntimeException: org.jboss.weld.context.ContextNotActiveException: WELD-001303: No active contexts for scope type javax.faces.flow.builder.FlowDefinition
> at com.sun.faces.config.ConfigureListener.contextInitialized(ConfigureListener.java:273)
> at io.undertow.servlet.core.ApplicationListeners.contextInitialized(ApplicationListeners.java:173)
> at io.undertow.servlet.core.DeploymentManagerImpl.deploy(DeploymentManagerImpl.java:189)
> ... 7 more
> Caused by: org.jboss.weld.context.ContextNotActiveException: WELD-001303: No active contexts for scope type javax.faces.flow.builder.FlowDefinition
> at org.jboss.weld.manager.BeanManagerImpl.getContext(BeanManagerImpl.java:680)
> at org.jboss.weld.util.ForwardingBeanManager.getContext(ForwardingBeanManager.java:181)
> at com.sun.faces.application.ApplicationAssociate$PostConstructApplicationListener.loadFlows(ApplicationAssociate.java:323)
> at com.sun.faces.application.ApplicationAssociate$PostConstructApplicationListener.processEvent(ApplicationAssociate.java:303)
> at javax.faces.event.SystemEvent.processListener(SystemEvent.java:108)
> at com.sun.faces.application.ApplicationImpl.processListeners(ApplicationImpl.java:2187)
> at com.sun.faces.application.ApplicationImpl.invokeListenersFor(ApplicationImpl.java:2163)
> at com.sun.faces.application.ApplicationImpl.publishEvent(ApplicationImpl.java:296)
> at org.jboss.as.jsf.injection.weld.ForwardingApplication.publishEvent(ForwardingApplication.java:294)
> at com.sun.faces.config.ConfigManager.publishPostConfigEvent(ConfigManager.java:692)
> at com.sun.faces.config.ConfigureListener.contextInitialized(ConfigureListener.java:253)
> ... 9 more
> 14:07:53,437 INFO [javax.enterprise.resource.webcontainer.jsf.config] (MSC service thread 1-1) Initializing Mojarra 2.2.6-jbossorg-2 20140318-1712 for context '/translator'
> 14:07:53,542 INFO [javax.enterprise.resource.webcontainer.jsf.config] (MSC service thread 1-6) Initializing Mojarra 2.2.6-jbossorg-2 20140318-1712 for context '/clusterbench'
> 14:07:53,704 INFO [org.wildfly.extension.undertow] (MSC service thread 1-6) JBAS017534: Registered web context: /clusterbench
> 14:07:54,448 INFO [org.wildfly.extension.undertow] (MSC service thread 1-1) JBAS017534: Registered web context: /translator
> 14:07:54,456 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) JBAS014613: Operation ("deploy") failed - address: ([("deployment" => "clusterbench-ee7.ear")]) - failure description: {"JBAS014671: Failed services" => {"jboss.undertow.deployment.default-server.default-host./clusterbench-passivating" => "org.jboss.msc.service.StartException in service jboss.undertow.deployment.default-server.default-host./clusterbench-passivating: Failed to start service
> Caused by: java.lang.RuntimeException: java.lang.RuntimeException: org.jboss.weld.context.ContextNotActiveException: WELD-001303: No active contexts for scope type javax.faces.flow.builder.FlowDefinition
> Caused by: java.lang.RuntimeException: org.jboss.weld.context.ContextNotActiveException: WELD-001303: No active contexts for scope type javax.faces.flow.builder.FlowDefinition
> Caused by: org.jboss.weld.context.ContextNotActiveException: WELD-001303: No active contexts for scope type javax.faces.flow.builder.FlowDefinition"}}
> 14:07:54,475 INFO [org.jboss.as.server] (ServerService Thread Pool -- 31) JBAS018559: Deployed "clusterbench-ee7.ear" (runtime-name : "clusterbench-ee7.ear")
> 14:07:54,475 INFO [org.jboss.as.server] (ServerService Thread Pool -- 31) JBAS018559: Deployed "translator.war" (runtime-name : "translator.war")
> 14:07:54,477 INFO [org.jboss.as.controller] (Controller Boot Thread) JBAS014774: Service status report
> JBAS014777: Services which failed to start: service jboss.undertow.deployment.default-server.default-host./clusterbench-passivating: org.jboss.msc.service.StartException in service jboss.undertow.deployment.default-server.default-host./clusterbench-passivating: Failed to start service
> 14:07:54,489 INFO [org.jboss.as] (Controller Boot Thread) JBAS015961: Http management interface listening on http://127.0.0.1:9990/management
> 14:07:54,493 INFO [org.jboss.as] (Controller Boot Thread) JBAS015951: Admin console listening on http://127.0.0.1:9990
> 14:07:54,493 ERROR [org.jboss.as] (Controller Boot Thread) JBAS015875: WildFly 8.0.1.Final-SNAPSHOT "WildFly" started (with errors) in 4618ms - Started 843 of 957 services (3 services failed or missing dependencies, 243 services are lazy, passive or on-demand)
> 14:07:54,704 INFO [org.wildfly.extension.undertow] (MSC service thread 1-2) JBAS017535: Unregistered web context: /clusterbench
> 14:07:54,704 INFO [org.wildfly.extension.undertow] (MSC service thread 1-8) JBAS017535: Unregistered web context: /clusterbench-granular
> 14:07:54,707 INFO [org.infinispan.eviction.PassivationManagerImpl] (ServerService Thread Pool -- 28) ISPN000029: Passivating all entries to disk
> 14:07:54,709 SEVERE [javax.faces] (MSC service thread 1-8) Unable to obtain InjectionProvider from init time FacesContext. Does this container implement the Mojarra Injection SPI?
> 14:07:54,707 INFO [org.infinispan.eviction.PassivationManagerImpl] (ServerService Thread Pool -- 27) ISPN000029: Passivating all entries to disk
> 14:07:54,709 SEVERE [javax.faces] (MSC service thread 1-8) Unable to call @PreDestroy annotated methods because no InjectionProvider can be found. Does this container implement the Mojarra Injection SPI?
> 14:07:54,709 INFO [org.infinispan.eviction.PassivationManagerImpl] (ServerService Thread Pool -- 27) ISPN000030: Passivated 0 entries in 1 milliseconds
> 14:07:54,709 SEVERE [javax.faces] (MSC service thread 1-8) Unable to obtain InjectionProvider from init time FacesContext. Does this container implement the Mojarra Injection SPI?
> 14:07:54,709 INFO [org.infinispan.eviction.PassivationManagerImpl] (ServerService Thread Pool -- 28) ISPN000030: Passivated 0 entries in 1 milliseconds
> 14:07:54,709 SEVERE [javax.faces] (MSC service thread 1-8) Unable to call @PreDestroy annotated methods because no InjectionProvider can be found. Does this container implement the Mojarra Injection SPI?
> 14:07:54,712 INFO [org.infinispan.eviction.PassivationManagerImpl] (ServerService Thread Pool -- 21) ISPN000029: Passivating all entries to disk
> 14:07:54,713 INFO [org.infinispan.eviction.PassivationManagerImpl] (ServerService Thread Pool -- 21) ISPN000030: Passivated 0 entries in 0 milliseconds
> 14:07:54,721 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 27) JBAS010282: Stopped clusterbench-ee7.ear/clusterbench-ee7-ejb-1.1.0-SNAPSHOT.jar cache from ejb container
> 14:07:54,722 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 28) JBAS010282: Stopped default-host/clusterbench-passivating cache from web container
> 14:07:54,724 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 21) JBAS010282: Stopped default-host/clusterbench-granular cache from web container
> 14:07:54,728 INFO [org.infinispan.eviction.PassivationManagerImpl] (ServerService Thread Pool -- 27) ISPN000029: Passivating all entries to disk
> 14:07:54,728 INFO [org.infinispan.eviction.PassivationManagerImpl] (ServerService Thread Pool -- 27) ISPN000030: Passivated 0 entries in 0 milliseconds
> 14:07:54,737 INFO [org.jboss.weld.deployer] (MSC service thread 1-4) JBAS016009: Stopping weld service for deployment clusterbench-ee7.ear
> 14:07:54,739 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 27) JBAS010282: Stopped default-host/clusterbench cache from web container
> 14:07:54,755 INFO [org.jboss.as.server.deployment] (MSC service thread 1-5) JBAS015974: Stopped subdeployment (runtime-name: clusterbench-ee7-web-1.1.0-SNAPSHOT-passivating.war) in 57ms
> 14:07:54,755 INFO [org.jboss.as.server.deployment] (MSC service thread 1-1) JBAS015974: Stopped subdeployment (runtime-name: clusterbench-ee7-ejb-1.1.0-SNAPSHOT.jar) in 57ms
> 14:07:54,755 INFO [org.jboss.as.server.deployment] (MSC service thread 1-5) JBAS015974: Stopped subdeployment (runtime-name: clusterbench-ee7-web-1.1.0-SNAPSHOT-granular.war) in 58ms
> 14:07:54,758 INFO [org.jboss.as.server.deployment] (MSC service thread 1-6) JBAS015974: Stopped subdeployment (runtime-name: clusterbench-ee7-web-1.1.0-SNAPSHOT-default.war) in 60ms
> 14:07:54,759 INFO [org.jboss.as.server.deployment] (MSC service thread 1-3) JBAS015877: Stopped deployment clusterbench-ee7.ear (runtime-name: clusterbench-ee7.ear) in 61ms
> 14:07:54,852 INFO [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS018558: Undeployed "clusterbench-ee7.ear" (runtime-name: "clusterbench-ee7.ear")
> 14:07:54,853 INFO [org.jboss.as.controller] (DeploymentScanner-threads - 2) JBAS014774: Service status report
> JBAS014775: New missing/unsatisfied dependencies:
> service jboss.deployment.subunit."clusterbench-ee7.ear"."clusterbench-ee7-ejb-1.1.0-SNAPSHOT.jar".deploymentCompleteService (missing) dependents: [service jboss.deployment.unit."clusterbench-ee7.ear".deploymentCompleteService]
> service jboss.deployment.subunit."clusterbench-ee7.ear"."clusterbench-ee7-web-1.1.0-SNAPSHOT-default.war".deploymentCompleteService (missing) dependents: [service jboss.deployment.unit."clusterbench-ee7.ear".deploymentCompleteService]
> service jboss.deployment.subunit."clusterbench-ee7.ear"."clusterbench-ee7-web-1.1.0-SNAPSHOT-granular.war".deploymentCompleteService (missing) dependents: [service jboss.deployment.unit."clusterbench-ee7.ear".deploymentCompleteService]
> service jboss.deployment.subunit."clusterbench-ee7.ear"."clusterbench-ee7-web-1.1.0-SNAPSHOT-passivating.war".component."com.sun.faces.config.ConfigureListener".START (missing) dependents: [service jboss.deployment.subunit."clusterbench-ee7.ear"."clusterbench-ee7-web-1.1.0-SNAPSHOT-passivating.war".deploymentCompleteService]
> service jboss.deployment.subunit."clusterbench-ee7.ear"."clusterbench-ee7-web-1.1.0-SNAPSHOT-passivating.war".component."javax.faces.webapp.FacesServlet".START (missing) dependents: [service jboss.deployment.subunit."clusterbench-ee7.ear"."clusterbench-ee7-web-1.1.0-SNAPSHOT-passivating.war".deploymentCompleteService]
> service jboss.deployment.subunit."clusterbench-ee7.ear"."clusterbench-ee7-web-1.1.0-SNAPSHOT-passivating.war".component."javax.faces.webapp.FacetTag".START (missing) dependents: [service jboss.deployment.subunit."clusterbench-ee7.ear"."clusterbench-ee7-web-1.1.0-SNAPSHOT-passivating.war".deploymentCompleteService]
> service jboss.deployment.subunit."clusterbench-ee7.ear"."clusterbench-ee7-web-1.1.0-SNAPSHOT-passivating.war".component."javax.servlet.jsp.jstl.tlv.PermittedTaglibsTLV".START (missing) dependents: [service jboss.deployment.subunit."clusterbench-ee7.ear"."clusterbench-ee7-web-1.1.0-SNAPSHOT-passivating.war".deploymentCompleteService]
> service jboss.deployment.subunit."clusterbench-ee7.ear"."clusterbench-ee7-web-1.1.0-SNAPSHOT-passivating.war".component."javax.servlet.jsp.jstl.tlv.ScriptFreeTLV".START (missing) dependents: [service jboss.deployment.subunit."clusterbench-ee7.ear"."clusterbench-ee7-web-1.1.0-SNAPSHOT-passivating.war".deploymentCompleteService]
> service jboss.deployment.subunit."clusterbench-ee7.ear"."clusterbench-ee7-web-1.1.0-SNAPSHOT-passivating.war".component."org.jboss.test.clusterbench.web.session.GranularHttpSessionServlet".START (missing) dependents: [service jboss.deployment.subunit."clusterbench-ee7.ear"."clusterbench-ee7-web-1.1.0-SNAPSHOT-passivating.war".deploymentCompleteService]
> service jboss.deployment.subunit."clusterbench-ee7.ear"."clusterbench-ee7-web-1.1.0-SNAPSHOT-passivating.war".deploymentCompleteService (missing) dependents: [service jboss.deployment.unit."clusterbench-ee7.ear".deploymentCompleteService]
> service jboss.undertow.deployment.default-server.default-host./clusterbench-passivating (missing) dependents: [service jboss.deployment.subunit."clusterbench-ee7.ear"."clusterbench-ee7-web-1.1.0-SNAPSHOT-passivating.war".deploymentCompleteService]
> JBAS014777: Services which failed to start: service jboss.undertow.deployment.default-server.default-host./clusterbench-passivating
> {noformat}
[JBoss JIRA] (WFLY-3209) CLI NullPointerException for Ctrl+D when prompted for username.
by Darran Lofthouse (JIRA)
Darran Lofthouse created WFLY-3209:
--------------------------------------
Summary: CLI NullPointerException for Ctrl+D when prompted for username.
Key: WFLY-3209
URL: https://issues.jboss.org/browse/WFLY-3209
Project: WildFly
Issue Type: Bug
Security Level: Public (Everyone can see)
Components: CLI
Reporter: Darran Lofthouse
Assignee: Darran Lofthouse
Fix For: 9.0.0.CR1
{code}
[standalone@localhost:9990 /] [darranl@localhost bin]$ ./jboss-cli.sh
You are disconnected at the moment. Type 'connect' to connect to the server or 'help' for the list of supported commands.
[disconnected /] connect
Username: java.lang.NullPointerException
at org.jboss.aesh.console.Console.pushToStdOut(Console.java:227)
at org.jboss.as.cli.impl.Console$Factory$1.print(Console.java:160)
at org.jboss.as.cli.impl.CommandContextImpl.printLine(CommandContextImpl.java:704)
at org.jboss.as.cli.impl.CommandContextImpl.error(CommandContextImpl.java:722)
at org.jboss.as.cli.impl.CommandContextImpl.handleSafe(CommandContextImpl.java:647)
at org.jboss.as.cli.impl.CommandContextImpl.interact(CommandContextImpl.java:1272)
at org.jboss.as.cli.impl.CliLauncher.main(CliLauncher.java:254)
at org.jboss.as.cli.CommandLineMain.main(CommandLineMain.java:34)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.jboss.modules.Module.run(Module.java:312)
at org.jboss.modules.Main.main(Main.java:460)
{code}
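One plausible reading of the trace (an assumption, not confirmed against the CLI sources): Ctrl+D ends the input stream at the {{Username:}} prompt, the interactive loop surfaces that as an error, and something null then reaches the console print path. A minimal sketch of the kind of EOF handling that avoids this, using hypothetical method names rather than the actual jboss-cli code:
{code}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

// Hypothetical sketch: treat EOF (Ctrl+D) at an interactive prompt as a cancelled
// connect instead of letting a null propagate into the console print path.
public class PromptSketch {

    /** Returns the entered username, or null if the user pressed Ctrl+D (EOF). */
    static String promptForUsername() throws IOException {
        System.out.print("Username: ");
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        return in.readLine();   // null on end-of-stream
    }

    public static void main(String[] args) throws IOException {
        String username = promptForUsername();
        if (username == null) {   // Ctrl+D: abort the connect quietly, no NPE
            System.out.println();
            System.out.println("Connection attempt cancelled.");
            return;
        }
        System.out.println("Would connect as: " + username);
    }
}
{code}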