[JBoss JIRA] (JGRP-2395) LOCAL_PING fails when 2 nodes start at the same time
by Bela Ban (Jira)
[ https://issues.jboss.org/browse/JGRP-2395?page=com.atlassian.jira.plugin.... ]
Bela Ban edited comment on JGRP-2395 at 11/6/19 8:52 AM:
---------------------------------------------------------
Hmm, these things cannot be controlled; if multiple nodes are started without an existing coord, then (like with UDP:PING) LOCAL_PING (and SHARED_LOOPBACK_PING) may end up doing discovery at exactly the same time and become singletons, only to be merged later.
So, in that sense, both LOCAL_PING and SHARED_LOOPBACK_PING mimic the real world (UDP & PING).
However, I *can* change this:
* The first node to register for a given cluster becomes _coordinator_ (registration needs to be atomic)
* When the coord leaves, the next-in-line becomes coord (this is atomic, too, wrt gets)
* On a view, we make sure that the first node in the view is the coord (also in the {{discovery}} map). This is because of JGRP-2381.
* This _may_ fail when a user has installed a custom view generation policy, need to think about this
The important thing here is that we need to have a coord after the first member registers. After that, we adjust the {{discovery}} map (who is coord) based on view changes. A given is that there's always only *one* coord for a given cluster in the {{discovery}} map.
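For illustration, a minimal sketch of the atomic registration described above (hypothetical names, not the actual {{LOCAL_PING}} code): the first registrant for a cluster is promoted to coordinator inside one atomic step, so two concurrent joiners can never both see "no coord".
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: the first member to register for a cluster becomes
// coordinator; when it leaves, the next in line (index 0) takes over.
public class CoordRegistry {
    // cluster name -> members; the member at index 0 is the coordinator
    private final Map<String,List<String>> discovery = new ConcurrentHashMap<>();

    /** Returns true if member became coordinator by registering first. */
    public boolean register(String cluster, String member) {
        List<String> members = discovery.computeIfAbsent(cluster, k -> new ArrayList<>());
        synchronized(members) { // registration and coord check are one atomic step
            members.add(member);
            return members.get(0).equals(member);
        }
    }

    /** Removes a member; if the coord left, the next in line moves to index 0. */
    public void unregister(String cluster, String member) {
        List<String> members = discovery.get(cluster);
        if (members == null)
            return;
        synchronized(members) {
            members.remove(member);
        }
    }
}
{code}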
WDYT?
was (Author: belaban):
Hmm, these things cannot be controlled; if multiple nodes are started without an existing coord, then (like with UDP:PING) LOCAL_PING (and SHARED_LOOPBACK_PING) may end up doing discovery at exactly the same time and become singletons, only to be merged later.
So, in that sense, both LOCAL_PING and SHARED_LOOPBACK_PING mimic the real world (UDP & PING).
However, I *can* change this:
* The first node to register for a given cluster becomes _coordinator_ (registration needs to be atomic)
* When the coord leaves, the next-in-line becomes coord (this is atomic, too, wrt gets)
* On a view, we make sure that the first node in the view is the coord (also in the {{discovery}} map)
* This _may_ fail when a user has installed a custom view generation policy, need to think about this
The important thing here is that we need to have a coord after the first member registers. After that, we adjust the {{discovery}} map (who is coord) based on view changes. A given is that there's always only *one* coord for a given cluster in the {{discovery}} map.
WDYT?
> LOCAL_PING fails when 2 nodes start at the same time
> ----------------------------------------------------
>
> Key: JGRP-2395
> URL: https://issues.jboss.org/browse/JGRP-2395
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.1.6
> Reporter: Dan Berindei
> Assignee: Bela Ban
> Priority: Major
> Fix For: 4.1.8
>
>
> We have a test that starts 2 nodes in parallel ({{ConcurrentStartTest}}), and it has been failing randomly since we started using {{LOCAL_PING}}.
> {noformat}
> 01:02:11,930 TRACE (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: discovery took 3 ms, members: 1 rsps (0 coords) [done]
> 01:02:11,930 TRACE (ForkThread-2,ConcurrentStartTest:[]) [GMS] Test-NodeB-43694: discovery took 3 ms, members: 1 rsps (0 coords) [done]
> 01:02:11,931 TRACE (ForkThread-2,ConcurrentStartTest:[]) [GMS] Test-NodeB-43694: could not determine coordinator from rsps 1 rsps (0 coords) [done]
> 01:02:11,931 TRACE (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: could not determine coordinator from rsps 1 rsps (0 coords) [done]
> 01:02:11,931 TRACE (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: nodes to choose new coord from are: [Test-NodeB-43694, Test-NodeA-29550]
> 01:02:11,931 TRACE (ForkThread-2,ConcurrentStartTest:[]) [GMS] Test-NodeB-43694: nodes to choose new coord from are: [Test-NodeB-43694, Test-NodeA-29550]
> 01:02:11,931 TRACE (ForkThread-2,ConcurrentStartTest:[]) [GMS] Test-NodeB-43694: I (Test-NodeB-43694) am the first of the nodes, will become coordinator
> 01:02:11,931 TRACE (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: I (Test-NodeA-29550) am not the first of the nodes, waiting for another client to become coordinator
> 01:02:11,932 TRACE (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: discovery took 0 ms, members: 1 rsps (0 coords) [done]
> 01:02:11,932 TRACE (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: could not determine coordinator from rsps 1 rsps (0 coords) [done]
> 01:02:11,932 TRACE (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: nodes to choose new coord from are: [Test-NodeB-43694, Test-NodeA-29550]
> ...
> 01:02:11,941 TRACE (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: could not determine coordinator from rsps 1 rsps (0 coords) [done]
> 01:02:11,941 TRACE (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: nodes to choose new coord from are: [Test-NodeB-43694, Test-NodeA-29550]
> 01:02:11,941 TRACE (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: I (Test-NodeA-29550) am not the first of the nodes, waiting for another client to become coordinator
> 01:02:11,942 WARN (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: too many JOIN attempts (10): becoming singleton
> 01:02:11,942 DEBUG (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: installing view [Test-NodeA-29550|0] (1) [Test-NodeA-29550]
> 01:02:11,977 DEBUG (ForkThread-2,ConcurrentStartTest:[]) [GMS] Test-NodeB-43694: created cluster (first member). My view is [Test-NodeB-43694|0], impl is org.jgroups.protocols.pbcast.CoordGmsImpl
> 01:02:11,977 DEBUG (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-29550: created cluster (first member). My view is [Test-NodeA-29550|0], impl is org.jgroups.protocols.pbcast.CoordGmsImpl
> {noformat}
> The problem seems to be that it takes longer for the coordinator to install the initial view and update {{LOCAL_PING}}'s {{PingData}} than it takes the other node to retry the discovery process 10 times.
> In some cases there is no retry, because one node starts slightly faster, but it's not yet coordinator when the 2nd node does its discovery, and both nodes decide they should be coordinator:
> {noformat}
> 01:13:44,460 INFO (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-5386: no members discovered after 3 ms: creating cluster as first member
> 01:13:44,463 TRACE (ForkThread-2,ConcurrentStartTest:[]) [GMS] Test-NodeB-51165: discovery took 1 ms, members: 1 rsps (0 coords) [done]
> 01:13:44,465 TRACE (ForkThread-2,ConcurrentStartTest:[]) [GMS] Test-NodeB-51165: could not determine coordinator from rsps 1 rsps (0 coords) [done]
> 01:13:44,465 TRACE (ForkThread-2,ConcurrentStartTest:[]) [GMS] Test-NodeB-51165: nodes to choose new coord from are: [Test-NodeB-51165, Test-NodeA-5386]
> 01:13:44,466 TRACE (ForkThread-2,ConcurrentStartTest:[]) [GMS] Test-NodeB-51165: I (Test-NodeB-51165) am the first of the nodes, will become coordinator
> 01:13:44,466 DEBUG (ForkThread-2,ConcurrentStartTest:[]) [GMS] Test-NodeB-51165: installing view [Test-NodeB-51165|0] (1) [Test-NodeB-51165]
> 01:13:44,466 DEBUG (ForkThread-1,ConcurrentStartTest:[]) [GMS] Test-NodeA-5386: installing view [Test-NodeA-5386|0] (1) [Test-NodeA-5386]
> {noformat}
> This second failure mode seems to go away if I move the {{discovery}} map access inside the {{synchronized}} block both in {{findMembers()}} and in {{down()}}.
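For illustration, a sketch of the race and of the reported fix (hypothetical shape, not the actual {{LOCAL_PING}} source): in the racy variant the map is read before the lock is taken, so a concurrent view installation can publish the coordinator between the read and the decision; reading under the same lock closes that window.
{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the check-then-act race described above.
class DiscoverySketch {
    private final Map<String,List<String>> discovery = new HashMap<>();
    private final Object lock = new Object();

    // Racy: the map is read outside the lock, so a concurrent down(VIEW)
    // may install the coordinator after this thread took its snapshot.
    List<String> findMembersRacy(String cluster) {
        List<String> members = discovery.get(cluster);   // unsynchronized read
        synchronized(lock) {
            return members == null ? new ArrayList<>() : new ArrayList<>(members);
        }
    }

    // Fixed: both the read in findMembers() and the update in down(VIEW)
    // happen under the same lock, so a just-installed coord is never missed.
    List<String> findMembers(String cluster) {
        synchronized(lock) {
            List<String> members = discovery.get(cluster);
            return members == null ? new ArrayList<>() : new ArrayList<>(members);
        }
    }

    void down(String cluster, List<String> newView) {    // view installation
        synchronized(lock) {
            discovery.put(cluster, new ArrayList<>(newView));
        }
    }
}
{code}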
[JBoss JIRA] (WFWIP-256) Duration of server configuration is not printed
by Yeray Borges (Jira)
[ https://issues.jboss.org/browse/WFWIP-256?page=com.atlassian.jira.plugin.... ]
Yeray Borges reassigned WFWIP-256:
----------------------------------
Assignee: Yeray Borges (was: Jean Francois Denise)
> Duration of server configuration is not printed
> -----------------------------------------------
>
> Key: WFWIP-256
> URL: https://issues.jboss.org/browse/WFWIP-256
> Project: WildFly WIP
> Issue Type: Bug
> Components: OpenShift
> Reporter: Jan Blizňák
> Assignee: Yeray Borges
> Priority: Minor
>
> {code}
> INFO Configuring the server using embedded server
> INFO Duration:
> INFO Running jboss-eap-7-tech-preview/eap-cd-openshift-rhel8 image, version 18.0
> {code}
> The cause is incorrect usage of the {{log_info}} function in {{/opt/eap/bin/launch/openshift-common.sh}}
> {code}
> log_info "Duration: " $((end-start)) " milliseconds"
> {code}
> which actually calls it with three arguments, while the function prints only the first one.
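Presumably the fix is to pass a single argument so the whole message is printed (a sketch, assuming {{log_info}}'s one-argument behavior stays as-is):
{code}
# one argument instead of three
log_info "Duration: $((end-start)) milliseconds"
{code}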
[JBoss JIRA] (JBJCA-1395) Use HTTPS for JBoss artifact repository
by Jan Stourac (Jira)
[ https://issues.jboss.org/browse/JBJCA-1395?page=com.atlassian.jira.plugin... ]
Jan Stourac resolved JBJCA-1395.
--------------------------------
Resolution: Done
The MR has been merged into the 1.4 branch.
As such, this should be fixed in the upcoming {{1.4.19.Final}} release.
> Use HTTPS for JBoss artifact repository
> ---------------------------------------
>
> Key: JBJCA-1395
> URL: https://issues.jboss.org/browse/JBJCA-1395
> Project: IronJacamar
> Issue Type: Bug
> Components: Build
> Reporter: Jan Stourac
> Assignee: Jan Stourac
> Priority: Major
>
> The JBoss repository now redirects from HTTP to HTTPS. As such, we need to update the {{build.xml}} file to use HTTPS for the JBoss repository to avoid errors with unresolved dependencies:
> {code}
> 22:22:20 [ivy:retrieve] ::::::::::::::::::::::::::::::::::::::::::::::
> 22:22:20 [ivy:retrieve] :: UNRESOLVED DEPENDENCIES ::
> 22:22:20 [ivy:retrieve] ::::::::::::::::::::::::::::::::::::::::::::::
> 22:22:20 [ivy:retrieve] :: apache-logging#commons-logging;1.1.0.jboss: not found
> 22:22:20 [ivy:retrieve] :: org.jboss.naming#jnpserver;5.0.3.GA: not found
> 22:22:20 [ivy:retrieve] :: org.apache#jasper;glassfish_2.1.0.v201004190952: not found
> 22:22:20 [ivy:retrieve] ::::::::::::::::::::::::::::::::::::::::::::::
> {code}
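For illustration, a hedged sketch of the kind of change involved (a hypothetical Ivy resolver fragment; the actual layout of IronJacamar's {{build.xml}}/ivysettings may differ, the point is the {{https://}} root):
{code:xml}
<!-- hypothetical ivysettings fragment: switch the repository root to https -->
<resolvers>
    <ibiblio name="jboss" m2compatible="true"
             root="https://repository.jboss.org/nexus/content/groups/public/"/>
</resolvers>
{code}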
[JBoss JIRA] (WFLY-12766) Upgrade WildFly Common to 1.5.2.Final
by Yeray Borges (Jira)
Yeray Borges created WFLY-12766:
-----------------------------------
Summary: Upgrade WildFly Common to 1.5.2.Final
Key: WFLY-12766
URL: https://issues.jboss.org/browse/WFLY-12766
Project: WildFly
Issue Type: Component Upgrade
Reporter: Yeray Borges
Assignee: Yeray Borges
This dependency is brought in by wildfly-core into wildfly; however, we still require it explicitly in wildfly because it is used as a plugin dependency in wildfly-galleon-maven-plugin:
{code:xml}
<plugin>
<groupId>org.wildfly.galleon-plugins</groupId>
<artifactId>wildfly-galleon-maven-plugin</artifactId>
<version>${version.org.wildfly.galleon-plugins}</version>
<dependencies>
<!-- feature-spec-gen uses wildfly-embedded to generate the feature specs, hence the designated wildfly-embedded version must match the pack one -->
<dependency>
<groupId>org.wildfly.core</groupId>
<artifactId>wildfly-embedded</artifactId>
<version>${version.org.wildfly.core}</version>
</dependency>
<!-- If you add a dependency on wildfly-embedded you need to bring your own transitives -->
<dependency>
<groupId>org.wildfly.common</groupId>
<artifactId>wildfly-common</artifactId>
<version>${version.org.wildfly.common}</version>
</dependency>
</dependencies>
</plugin>
{code}
So, this upgrade is just to keep it in sync with the wildfly-core version we are currently using.
[JBoss JIRA] (JBJCA-1395) Use HTTPS for JBoss artifact repository
by Jan Stourac (Jira)
[ https://issues.jboss.org/browse/JBJCA-1395?page=com.atlassian.jira.plugin... ]
Jan Stourac commented on JBJCA-1395:
------------------------------------
Not sure what the correct state is now, as I've provided an MR here: https://github.com/ironjacamar/ironjacamar/pull/693.
[JBoss JIRA] (JBJCA-1395) Use HTTPS for JBoss artifact repository
by Jan Stourac (Jira)
[ https://issues.jboss.org/browse/JBJCA-1395?page=com.atlassian.jira.plugin... ]
Jan Stourac reassigned JBJCA-1395:
----------------------------------
Assignee: Jan Stourac
[JBoss JIRA] (JBJCA-1395) Use HTTPS for JBoss artifact repository
by Jan Stourac (Jira)
[ https://issues.jboss.org/browse/JBJCA-1395?page=com.atlassian.jira.plugin... ]
Jan Stourac updated JBJCA-1395:
-------------------------------
Git Pull Request: https://github.com/ironjacamar/ironjacamar/pull/693