[JBoss JIRA] (JGRP-2237) The single node in the cluster does not become coordinator after the coordinator leaves.
by kfir avraham (JIRA)
[ https://issues.jboss.org/browse/JGRP-2237?page=com.atlassian.jira.plugin.... ]
kfir avraham edited comment on JGRP-2237 at 11/29/17 7:01 AM:
--------------------------------------------------------------
I attached logs from server_A, server_B, and the conf file.
In this case (after upgrading to version 4.0.8) it looks better, but for some reason, after a few restarts, server_A doesn't send a join message to the other machine and creates a new view with 'is_coord=true'.
Any idea what could be the reason for that?
BTW, we must use ' port_range="0" ' because we want to use only this port, and it worked perfectly in version 3.6.11 (we upgraded because of a security issue).
When I set ' port_range="5" ', they did not discover each other from the beginning.
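For context, fixed-port discovery with TCPPING depends on the bind port and the ports listed in initial_hosts matching exactly. The fragment below is a hypothetical illustration only; the addresses, port, and surrounding protocols are assumptions, not values taken from the attached conf.txt:

```xml
<!-- Hypothetical fragment; the real configuration is in the attached
     conf.txt. With port_range="0", TCPPING probes only the exact ports
     listed in initial_hosts, so every member must succeed in binding
     bind_port itself (there are no fallback ports to try). -->
<config xmlns="urn:org:jgroups">
    <TCP bind_port="8102"/>
    <TCPPING initial_hosts="10.63.16.3[8102],10.63.16.13[8102]"
             port_range="0"/>
    <!-- ... remaining protocols (MERGE3, FD_SOCK, GMS, etc.) ... -->
</config>
```

One possible reading of the symptom, under these assumptions: if a restarted member cannot rebind 8102 immediately (e.g. the old socket is still closing), it may come up on a different port that the peers never probe with port_range="0", and it would then form its own singleton view.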
*+Log in trace mode:+*
Nov-28-2017 23:31:36 GMT-12:00 [org.jgroups.protocols.TCP] [main] [DEBUG] - thread pool min/max/keep-alive: 2/30/60000 use_fork_join=false, internal pool: 0/4/30000 (2 cores available)
Nov-28-2017 23:31:36 GMT-12:00 [org.jgroups.protocols.pbcast.NAKACK2] [main] [TRACE] - null: set max_xmit_req_size from 0 to 247600
Nov-28-2017 23:31:36 GMT-12:00 [org.jgroups.protocols.UNICAST3] [main] [TRACE] - null: set max_xmit_req_size from 0 to 247600
Nov-28-2017 23:31:36 GMT-12:00 [org.jgroups.protocols.pbcast.STABLE] [main] [TRACE] - clm-tlv-spih31-6939: stable task started
Nov-28-2017 23:31:36 GMT-12:00 [org.jgroups.protocols.TCPPING] [main] [TRACE] - clm-tlv-spih31-6939: sending discovery request to 10.63.16.13:8102
Nov-28-2017 23:31:36 GMT-12:00 [org.jgroups.protocols.TCP] [main] [TRACE] - clm-tlv-spih31-6939: sending msg to 10.63.16.13:8102, src=clm-tlv-spih31-6939, headers are TCPPING: [GET_MBRS_REQ cluster=HACluster initial_discovery=true], TP: [cluster_name=HACluster]
Nov-28-2017 23:31:36 GMT-12:00 [org.jgroups.protocols.TCP] [TQ-Bundler-7,clm-tlv-spih31-6939] [TRACE] - 10.63.16.3:8102: +*connecting to 10.63.16.13:8102*+
Nov-28-2017 23:31:36 GMT-12:00 [org.jgroups.protocols.TCP] [Connection.Receiver [10.63.16.3:50534 - 10.63.16.13:8102]-9,clm-tlv-spih31-6939] [TRACE] - 10.63.16.3:8102: +*removed connection to 10.63.16.13:8102*+
Nov-28-2017 23:31:36 GMT-12:00 [org.jgroups.protocols.TCP] [TcpServer.Acceptor[8102]-5,clm-tlv-spih31-6939] [TRACE] - 10.63.16.3:8102: +*accepted connection from 10.63.16.13:8102*+
Nov-28-2017 23:31:37 GMT-12:00 [org.jgroups.protocols.pbcast.GMS] [main] [TRACE] - clm-tlv-spih31-6939: *+no members discovered after 1029 ms:+* creating cluster as first member
Nov-28-2017 23:31:37 GMT-12:00 [org.jgroups.protocols.pbcast.NAKACK2] [main] [DEBUG] -
[clm-tlv-spih31-6939 setDigest()]
existing digest: []
new digest: clm-tlv-spih31-6939: [0 (0)]
resulting digest: clm-tlv-spih31-6939: [0 (0)]
Nov-28-2017 23:31:37 GMT-12:00 [org.jgroups.protocols.pbcast.GMS] [main] [DEBUG] - clm-tlv-spih31-6939: installing view [clm-tlv-spih31-6939|0] (1) [clm-tlv-spih31-6939]
> The single node in the cluster does not become coordinator after the coordinator leaves.
> -----------------------------------------------------------------------------------------
>
> Key: JGRP-2237
> URL: https://issues.jboss.org/browse/JGRP-2237
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.0.2, 4.0.8
> Reporter: kfir avraham
> Assignee: Bela Ban
> Priority: Minor
> Attachments: Server_A.txt, Server_B.txt, conf.txt, test.xml
>
>
> I have a cluster with 2 members; sometimes when the first node (the coordinator) leaves the cluster, the second one does not become the coordinator.
> When the first one rejoins, it cannot determine the coordinator and selects a new one from the node list.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (DROOLS-2154) Design navigation between the DRG and DRD's
by Jozef Marko (JIRA)
[ https://issues.jboss.org/browse/DROOLS-2154?page=com.atlassian.jira.plugi... ]
Jozef Marko updated DROOLS-2154:
--------------------------------
Tester: Jozef Marko
> Design navigation between the DRG and DRD's
> -------------------------------------------
>
> Key: DROOLS-2154
> URL: https://issues.jboss.org/browse/DROOLS-2154
> Project: Drools
> Issue Type: Task
> Components: DMN Editor
> Reporter: Liz Clayton
> Assignee: Liz Clayton
> Labels: UX
>
> *Use case description:*
> As a business user (Citizen Developer…), I want to navigate between the DRG and the DRDs, and the content of each node, to model business decision logic.
> *Verification Conditions*
> * The user of this feature is able to understand that the DRDs are subsets of the whole DRG model/file.
> * The user of this feature can seamlessly navigate between the diagrams and the content of the nodes.
--
[JBoss JIRA] (WFLY-9582) server not starting from windows service
by Roshan Royal (JIRA)
[ https://issues.jboss.org/browse/WFLY-9582?page=com.atlassian.jira.plugin.... ]
Roshan Royal updated WFLY-9582:
-------------------------------
Summary: server not starting from windows service (was: Service is not starting)
> server not starting from windows service
> ----------------------------------------
>
> Key: WFLY-9582
> URL: https://issues.jboss.org/browse/WFLY-9582
> Project: WildFly
> Issue Type: Bug
> Components: Batch, Server
> Affects Versions: 9.0.2.Final
> Environment: Window 7 64-bit system
> Reporter: Roshan Royal
> Assignee: Cheng Fang
> Priority: Blocker
>
> Getting the error below while trying to start the server through a Windows service.
> [2017-11-29 14:24:47] [info] [14908] Commons Daemon procrun (1.0.15.0 64-bit) started
> [2017-11-29 14:24:47] [info] [14908] Starting service 'Wildfly' ...
> [2017-11-29 14:24:47] [info] [ 3760] Commons Daemon procrun (1.0.15.0 64-bit) started
> [2017-11-29 14:24:47] [info] [ 3760] Running 'Wildfly' Service...
> [2017-11-29 14:24:47] [info] [15500] Starting service...
> [2017-11-29 14:24:47] [info] [15500] Service started in 3 ms.
> [2017-11-29 14:24:47] [info] [ 3760] Run service finished.
> [2017-11-29 14:24:47] [info] [ 3760] Commons Daemon procrun finished
> [2017-11-29 14:24:48] [error] [14908] Failed to start 'Wildfly' service
> [2017-11-29 14:24:48] [error] [14908] The data area passed to a system call is too small.
> [2017-11-29 14:24:48] [info] [14908] Start service finished.
> [2017-11-29 14:24:48] [error] [14908] Commons Daemon procrun failed with exit value: 5 (Failed to start service)
> [2017-11-29 14:24:48] [error] [14908] The data area passed to a system call is too small.
--