[JBoss JIRA] (HAWKULARQE-150) Redeploy archive on EAP - Should fail
by Hayk Hovsepyan (JIRA)
[ https://issues.jboss.org/browse/HAWKULARQE-150?page=com.atlassian.jira.pl... ]
Hayk Hovsepyan commented on HAWKULARQE-150:
-------------------------------------------
BZ has been created: https://bugzilla.redhat.com/show_bug.cgi?id=1475422
> Redeploy archive on EAP - Should fail
> -------------------------------------
>
> Key: HAWKULARQE-150
> URL: https://issues.jboss.org/browse/HAWKULARQE-150
> Project: Hawkular QE
> Issue Type: Task
> Reporter: Hayk Hovsepyan
> Assignee: Hayk Hovsepyan
> Priority: Critical
>
> <mfoley> both apps are deployed in the same eap?
> <bhirsch> yep
> <mfoley> so you deployed 1 ... shows in the display
> <bhirsch> And actions work. I can use cf to start, stop, and deploy
> <mfoley> deploy a 2nd ... not showing the display
> <bhirsch> correct
> <mfoley> odd
> <bhirsch> agreed
> <mfoley> i suppose a 3rd would not display either
> <bhirsch> haven't tried yet. I'm not really an MW guy, so all I had was ticket-monster and helloworld :-) Need a third .war
> <mfoley> it's okay
> <mfoley> seems like a bug to me ...
> <mfoley> i appreciate the feedback
> <bhirsch> np
> <mfoley> you are a solutions architect?
> <bhirsch> correct, SA in the Northeast
> <mfoley> cool ...
> <mfoley> general question ... what is your impression of the MW Provider?
> <bhirsch> I have a customer that wanted a CF demo, with a focus on the MW provider area (they are a JBoss customer) So, here we are.
> <mfoley> you can be direct .... it's okay
> <bhirsch> So far so good
> <mfoley> :)
> <mfoley> excellent
> <bhirsch> No complaints really. The deployment is a little involved, but very well documented
> <mfoley> well ... only demo 1 deployment :)
> <bhirsch> haha... yeah. I'm not very worried about that part. Just figured it was something you all would want to know
> <mfoley> oh we do
> <bhirsch> they'll probably be thrilled to see MW in CF at all
> <mfoley> i hope so ... it would be validation of the new direction and strategy
> <bhirsch> They use a lot of EAP, so this plus OpenShift (also an OSE customer) could help us beat VMware VRA out the door
> <mfoley> :)
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (WFCORE-3113) Upgrade DMR to 1.4.1.Final
by Brian Stansberry (JIRA)
Brian Stansberry created WFCORE-3113:
----------------------------------------
Summary: Upgrade DMR to 1.4.1.Final
Key: WFCORE-3113
URL: https://issues.jboss.org/browse/WFCORE-3113
Project: WildFly Core
Issue Type: Component Upgrade
Components: Domain Management
Reporter: Brian Stansberry
Assignee: Brian Stansberry
Pull in a release with license info in the pom.
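In wildfly-core this kind of component upgrade is usually just a version-property bump in the root pom, along these lines (the property name is an assumption for illustration, not taken from the ticket):

```xml
<!-- root pom.xml: bump the DMR version property (property name assumed) -->
<properties>
    <version.org.jboss.jboss-dmr>1.4.1.Final</version.org.jboss.jboss-dmr>
</properties>
```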
--
[JBoss JIRA] (JGRP-2206) Property strings are correct but JGROUPS is not recognizing other nodes
by Swathi Kumar (JIRA)
[ https://issues.jboss.org/browse/JGRP-2206?page=com.atlassian.jira.plugin.... ]
Swathi Kumar commented on JGRP-2206:
------------------------------------
Hi Bela,
The property string above shows our customer *IS* setting the bind address: TCP(bind_addr=10.38.46.27;bind_port=5061;level=ERROR), so something else in his VMware environment seems to be causing this issue.
Our customer can ping between his nodes using the addresses above. We asked him to run ipconfig from a command window on node1.
Here is what they see:
C:\Users>ipconfig
Windows IP Configuration
Ethernet adapter Local Area Connection 5:
Connection-specific DNS Suffix . : s3.chp.cba
Link-local IPv6 Address . . . . . : fe80::1dda:27b9:ca17:40d5%11
IPv4 Address. . . . . . . . . . . : 10.38.46.27
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 10.38.46.254
Tunnel adapter isatap.s3.chp.cba:
Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix . :
Do you have insight (or troubleshooting experience) into VMware configurations that might explain why the localhost value is actually bound rather than the "bind_addr=10.38.46.27" above?
I appreciate any help you are able to provide,
Jeff
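One low-level check that can help rule out a JGroups misconfiguration is verifying that the configured bind_addr is actually bindable from a plain JDK socket on that host; a minimal sketch (the class and method names are mine, and the address/port used below are placeholders, not values from the ticket):

```java
import java.net.InetAddress;
import java.net.ServerSocket;

public class BindCheck {

    // Tries to bind a server socket to the given address. If this fails for
    // the address configured as bind_addr but succeeds for 127.0.0.1, the
    // problem is at the OS/VM networking level rather than inside JGroups.
    static boolean canBind(String addr, int port) {
        try (ServerSocket s = new ServerSocket(port, 50, InetAddress.getByName(addr))) {
            return s.isBound();
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Port 0 asks the OS for any free port; the loopback address
        // should always be bindable.
        System.out.println(canBind("127.0.0.1", 0)); // prints "true"
    }
}
```

Running this on node1 with 10.38.46.27 would confirm whether the OS will actually hand JGroups a socket on that interface.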
> Property strings are correct but JGROUPS is not recognizing other nodes
> -----------------------------------------------------------------------
>
> Key: JGRP-2206
> URL: https://issues.jboss.org/browse/JGRP-2206
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 3.4
> Environment: With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP, Data Mining and Real Application Testing options
> OS: Windows Server 2008 R2 6.1,amd64
> Java version: 1.7.0,pwa6470sr9fp10-20150708_01 (SR9 FP10),IBM Corporation
> Reporter: Swathi Kumar
> Assignee: Bela Ban
> Priority: Blocker
> Attachments: VisibilityIssue.zip
>
>
> Our customer has a four-node cluster which we believe is correctly defined, yet the nodes are not communicating with each other.
> All nodes are on VMware. None of the hostnames are virtual (they are all directly attached to an IP and are not managed by load balancers, etc.).
>
> The nodes are located in separate data centers (two in each) and JGroups is operating over TCP rather than UDP multicast.
> NOTE: The issue occurs only in the customer's environment (we are not able to reproduce this issue in our lab).
> We are attaching our logs (noapp.log.<timestamp>) with JGroups debugging enabled.
> *Node1 Property strings*:
> [2017-07-24 21:58:30.867] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.property_string. Receivied this property: TCP(bind_addr=10.38.46.27;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.46.27[5061],10.38.46.28[5061],10.38.175.30[5061],10.38.175.32[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
> [2017-07-24 21:58:30.867] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.property_string. Using this property: TCP(bind_addr=10.38.46.27;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.46.27[5061],10.38.46.28[5061],10.38.175.30[5061],10.38.175.32[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
> [2017-07-24 21:58:30.867] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.distributed_property_string. Receivied this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.46.27[5060],10.38.46.28[5060],10.38.175.30[5060],10.38.175.32[5060];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
> [2017-07-24 21:58:30.867] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.distributed_property_string. Using this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.46.27[5060],10.38.46.28[5060],10.38.175.30[5060],10.38.175.32[5060];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
> *Node2 Property strings*:
> [2017-07-24 22:01:01.666] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.property_string. Receivied this property: TCP(bind_addr=10.38.46.28;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.46.28[5061],10.38.46.27[5061],10.38.175.30[5061],10.38.175.32[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
> [2017-07-24 22:01:01.666] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.property_string. Using this property: TCP(bind_addr=10.38.46.28;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.46.28[5061],10.38.46.27[5061],10.38.175.30[5061],10.38.175.32[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
> [2017-07-24 22:01:01.666] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.distributed_property_string. Receivied this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.46.28[5060],10.38.46.27[5060],10.38.175.30[5060],10.38.175.32[5060];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
> [2017-07-24 22:01:01.666] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.distributed_property_string. Using this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.46.28[5060],10.38.46.27[5060],10.38.175.30[5060],10.38.175.32[5060];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
> *Node3 Property strings*:
> [2017-07-24 22:02:01.411] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.property_string. Receivied this property: TCP(bind_addr=10.38.175.30;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.175.30[5061],10.38.46.27[5061],10.38.46.28[5061],10.38.175.32[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
> [2017-07-24 22:02:01.411] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.property_string. Using this property: TCP(bind_addr=10.38.175.30;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.175.30[5061],10.38.46.27[5061],10.38.46.28[5061],10.38.175.32[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
> [2017-07-24 22:02:01.411] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.distributed_property_string. Receivied this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.175.30[5060],10.38.46.27[5060],10.38.46.28[5060],10.38.175.32[5060];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
> [2017-07-24 22:02:01.411] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.distributed_property_string. Using this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.175.30[5060],10.38.46.27[5060],10.38.46.28[5060],10.38.175.32[5060];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
> *Node4 Property strings*:
> [2017-07-24 22:01:14.365] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.property_string. Receivied this property: TCP(bind_addr=10.38.175.32;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.175.32[5061],10.38.46.27[5061],10.38.46.28[5061],10.38.175.30[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
> [2017-07-24 22:01:14.365] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.property_string. Using this property: TCP(bind_addr=10.38.175.32;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.175.32[5061],10.38.46.27[5061],10.38.46.28[5061],10.38.175.30[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
> [2017-07-24 22:01:14.365] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.distributed_property_string. Receivied this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.175.32[5060],10.38.46.27[5060],10.38.46.28[5060],10.38.175.30[5060];port_range=1;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
> [2017-07-24 22:01:14.365] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.distributed_property_string. Using this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.175.32[5060],10.38.46.27[5060],10.38.46.28[5060],10.38.175.30[5060];port_range=1;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
>
--
[JBoss JIRA] (ELY-1304) Elytron subsystem does not expose digest-sha-384 for digest password
by Yeray Borges (JIRA)
[ https://issues.jboss.org/browse/ELY-1304?page=com.atlassian.jira.plugin.s... ]
Yeray Borges reassigned ELY-1304:
---------------------------------
Assignee: Yeray Borges (was: Darran Lofthouse)
> Elytron subsystem does not expose digest-sha-384 for digest password
> --------------------------------------------------------------------
>
> Key: ELY-1304
> URL: https://issues.jboss.org/browse/ELY-1304
> Project: WildFly Elytron
> Issue Type: Bug
> Reporter: Martin Choma
> Assignee: Yeray Borges
>
> For the sake of completeness, add digest-sha-384 to the allowed values of the algorithm attribute of the set-password operation:
> {code:title=/subsystem=elytron/ldap-realm=a:read-operation-description(name=set-password)}
> "digest" => {
>     "type" => OBJECT,
>     "description" => "A digest password.",
>     "expressions-allowed" => false,
>     "required" => false,
>     "nillable" => true,
>     "value-type" => {
>         "algorithm" => {
>             "type" => STRING,
>             "description" => "The algorithm used to encrypt the password.",
>             "expressions-allowed" => false,
>             "required" => false,
>             "nillable" => true,
>             "default" => "digest-sha-512",
>             "allowed" => [
>                 "digest-md5",
>                 "digest-sha",
>                 "digest-sha-256",
>                 "digest-sha-512"
>             ]
>         },
>         "password" => {
>             "type" => STRING,
>             "description" => "The actual password to set.",
>             "expressions-allowed" => false,
>             "required" => true,
>             "nillable" => false,
>             "min-length" => 1L,
>             "max-length" => 2147483647L
>         },
>         "realm" => {
>             "type" => STRING,
>             "description" => "The realm.",
>             "expressions-allowed" => false,
>             "required" => true,
>             "nillable" => false,
>             "min-length" => 1L,
>             "max-length" => 2147483647L
>         }
>     }
> },
> {code}
> Password types otp, salted-simple-digest, and simple-digest already expose a sha-384 variant.
> It seems to me the underlying Elytron implementation is already prepared for this:
> {code:java|title=DigestPasswordImpl.java}
> private static MessageDigest getMessageDigest(final String algorithm) throws NoSuchAlgorithmException {
>     switch (algorithm) {
>         case ALGORITHM_DIGEST_MD5:
>             return MessageDigest.getInstance("MD5");
>         case ALGORITHM_DIGEST_SHA:
>             return MessageDigest.getInstance("SHA-1");
>         case ALGORITHM_DIGEST_SHA_256:
>             return MessageDigest.getInstance("SHA-256");
>         case ALGORITHM_DIGEST_SHA_384:
>             return MessageDigest.getInstance("SHA-384");
>         case ALGORITHM_DIGEST_SHA_512:
>             return MessageDigest.getInstance("SHA-512");
>         default:
>             throw log.noSuchAlgorithmInvalidAlgorithm(algorithm);
>     }
> }
> {code}
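Supporting this should also be cheap on the JDK side, since SHA-384 is a standard MessageDigest algorithm. A minimal sketch (class and helper names are mine, not from Elytron; the username:realm:password input shape follows the HTTP Digest convention these password types are modeled on):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class DigestSha384Sketch {

    // Hashes "username:realm:password" with SHA-384, mirroring what a
    // digest-sha-384 branch in getMessageDigest above would enable.
    static byte[] digest(String user, String realm, String password) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-384");
            return md.digest((user + ":" + realm + ":" + password)
                    .getBytes(StandardCharsets.UTF_8));
        } catch (NoSuchAlgorithmException e) {
            // SHA-384 is mandatory in every compliant JDK, so this is unreachable.
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // SHA-384 produces 384 bits = 48 bytes.
        System.out.println(digest("alice", "ManagementRealm", "secret").length); // prints 48
    }
}
```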
--
[JBoss JIRA] (WFCORE-3112) Upgrade DMR to 1.4.1.Final
by Brian Stansberry (JIRA)
Brian Stansberry created WFCORE-3112:
----------------------------------------
Summary: Upgrade DMR to 1.4.1.Final
Key: WFCORE-3112
URL: https://issues.jboss.org/browse/WFCORE-3112
Project: WildFly Core
Issue Type: Component Upgrade
Components: Domain Management
Reporter: Brian Stansberry
Assignee: Brian Stansberry
Pull in a release with license info in the pom.
--