[JBoss JIRA] (WFCORE-4707) OperationContext.getCapabilityServiceName(String, String...) generates wrong ServiceName if capability is not registered and dynamic part contains a DOT
by Ivo Studensky (Jira)
[ https://issues.redhat.com/browse/WFCORE-4707?page=com.atlassian.jira.plug... ]
Ivo Studensky reassigned WFCORE-4707:
-------------------------------------
Assignee: Ivo Studensky (was: Jeff Mesnil)
> OperationContext.getCapabilityServiceName(String, String...) generates wrong ServiceName if capability is not registered and dynamic part contains a DOT
> --------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: WFCORE-4707
> URL: https://issues.redhat.com/browse/WFCORE-4707
> Project: WildFly Core
> Issue Type: Bug
> Components: Management
> Affects Versions: 10.0.0.Final
> Reporter: Paul Ferraro
> Assignee: Ivo Studensky
> Priority: Major
>
> Consider the call:
> {code:java}
> OperationContext.getCapabilityServiceName("test", "foo.bar");
> {code}
> This method first resolves the qualified capability name as "test.foo.bar". If no capability is registered with that name, the service name is generated using:
> {code:java}
> ServiceNameFactory.parseServiceName("test.foo.bar");
> {code}
> This generates a ServiceName with 3 elements.
> However, when this capability is actually created with the name "test" and the dynamic part "foo.bar", its service name will contain 2 elements, not 3.
> i.e.
> {code:java}
> RuntimeCapability.Builder.of("test", true, ServiceType.class).fromBaseCapability("foo.bar").getCapabilityServiceName();
> {code}
> Interestingly, the corresponding method returned by the OperationContext.getCapabilityServiceSupport() implementation handles this correctly.
> i.e.
> {code:java}
> OperationContext.getCapabilityServiceSupport().getCapabilityServiceName("test", "foo.bar");
> {code}
> returns the result of:
> {code:java}
> ServiceNameFactory.parseServiceName("test").append("foo.bar");
> {code}
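To make the element-count mismatch concrete, here is a minimal standalone sketch, assuming only that ServiceNameFactory.parseServiceName splits a name on dots (plain Java, no jboss-msc dependency; {{parseLikeFactory}} is a hypothetical stand-in, not the real implementation):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Standalone sketch of the mismatch: no jboss-msc dependency; parseLikeFactory
// only mimics the dot-splitting performed by ServiceNameFactory.parseServiceName.
public class ServiceNameMismatch {

    static List<String> parseLikeFactory(String name) {
        return Arrays.asList(name.split("\\."));
    }

    public static void main(String[] args) {
        // Capability not registered: the fully qualified name is parsed as one string.
        List<String> unregistered = parseLikeFactory("test" + "." + "foo.bar");
        System.out.println(unregistered); // [test, foo, bar] -> 3 elements

        // Capability registered: the base name is parsed, and the dynamic part is
        // appended as a single element (as fromBaseCapability does).
        List<String> registered = new ArrayList<>(parseLikeFactory("test"));
        registered.add("foo.bar");
        System.out.println(registered); // [test, foo.bar] -> 2 elements
    }
}
```

The same dynamic part thus yields two different service names depending on whether the capability was registered at resolution time, which is the inconsistency this issue describes.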
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (WFLY-13599) Admin console will not open - I added users
by Harald Pehl (Jira)
[ https://issues.redhat.com/browse/WFLY-13599?page=com.atlassian.jira.plugi... ]
Harald Pehl edited comment on WFLY-13599 at 6/19/20 5:59 AM:
-------------------------------------------------------------
I wasn't able to reproduce the error.
Here are my steps:
# Download clean WildFly 20.0.0.Final
# {{bin/add-user.sh -a -u admin -p admin -ro admin,kie-server,rest-all,kiemgmt}}
# {{bin/standalone.sh}}
# Open [http://localhost:9990|http://localhost:9990/]
# Login with admin/admin
Did I miss anything?
was (Author: harald.pehl):
I wasn't able to reproduce the error.
Here are my steps:
# Download clean WildFly 20.0.0.Final
# {{bin/add-user.sh -u admin -p admin --silent}}
# {{bin/standalone.sh}}
# Open [http://localhost:9990|http://localhost:9990/]
# Login with admin/admin
Did I miss anything?
> Admin console will not open - I added users
> -------------------------------------------
>
> Key: WFLY-13599
> URL: https://issues.redhat.com/browse/WFLY-13599
> Project: WildFly
> Issue Type: Bug
> Components: Web Console
> Affects Versions: 18.0.1.Final, 20.0.0.Final
> Reporter: Charles Herrick
> Assignee: Harald Pehl
> Priority: Major
>
> WildFly 18.0.1.Final and WildFly 20.0.0.Final installed.
> add-user.bat executed for admin and one other user.
> KIE Server 7.38 war deployed to the deployments directory; server started.
> The admin console will not display; the error says "you have not yet added users".
> Used: add-user -a -u admin -p **** -ro admin,kie-server,rest-all,kiemgmt
> Got: Updated user 'admin' to file 'C:\Redhat\Wildfly18\standalone\configuration\application-users.properties'
> Updated user 'admin' to file 'C:\Redhat\Wildfly18\domain\configuration\application-users.properties'
> Updated user 'admin' with groups admin,kie-server,rest-all,kiemgmt to file 'C:\Redhat\Wildfly18\standalone\configuration\application-roles.properties'
> Updated user 'admin' with groups admin,kie-server,rest-all,kiemgmt to file 'C:\Redhat\Wildfly18\domain\configuration\application-roles.properties'
>
> Please help me get the admin console up.
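Worth noting: the quoted output shows the user being written only to application-users.properties, i.e. the application realm, while the admin console authenticates against the management realm (mgmt-users.properties). Assuming a default WildFly install, a management user is created by omitting the {{-a}} switch, e.g.:

```shell
# -a targets the application realm; omit it so add-user writes to
# mgmt-users.properties, which the admin console authenticates against
bin/add-user.sh -u admin -p '<password>'
```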
[JBoss JIRA] (WFLY-13599) Admin console will not open - I added users
by Harald Pehl (Jira)
[ https://issues.redhat.com/browse/WFLY-13599?page=com.atlassian.jira.plugi... ]
Harald Pehl edited comment on WFLY-13599 at 6/19/20 5:57 AM:
-------------------------------------------------------------
I wasn't able to reproduce the error.
Here are my steps:
# Download clean WildFly 20.0.0.Final
# {{bin/add-user.sh -u admin -p admin --silent}}
# {{bin/standalone.sh}}
# Open [http://localhost:9990|http://localhost:9990/]
# Login with admin/admin
Did I miss anything?
was (Author: harald.pehl):
I wasn't able to reproduce the error.
Here are my steps:
# Download clean WildFly 20.0.0.Final
# {{bin/add-user.sh -u admin -p admin --silent}}
# {{bin/standalone.sh}}}}
# Open [http://localhost:9990|http://localhost:9990/]
# Login with admin/admin
Did I miss anything?
[JBoss JIRA] (WFLY-13599) Admin console will not open - I added users
by Harald Pehl (Jira)
[ https://issues.redhat.com/browse/WFLY-13599?page=com.atlassian.jira.plugi... ]
Harald Pehl commented on WFLY-13599:
------------------------------------
I wasn't able to reproduce the error.
Here are my steps:
# Download clean WildFly 20.0.0.Final
# {{bin/add-user.sh -u admin -p admin --silent}}
# {{bin/standalone.sh}}
# Open [http://localhost:9990|http://localhost:9990/]
# Login with admin/admin
Did I miss anything?
[JBoss JIRA] (DROOLS-5443) Move DMN-PMML tests to a separate module
by Jiri Petrlik (Jira)
Jiri Petrlik created DROOLS-5443:
------------------------------------
Summary: Move DMN-PMML tests to a separate module
Key: DROOLS-5443
URL: https://issues.redhat.com/browse/DROOLS-5443
Project: Drools
Issue Type: Task
Reporter: Jiri Petrlik
Assignee: Jiri Petrlik
According to DROOLS-5372, it seems we will need two implementations of PMML at the same time. It would be better to move the DMN-PMML tests to two separate modules in the drools repo.
[JBoss JIRA] (WFLY-13604) Infinispan does not publish Statistic under object name 'javax.cache:type=CacheStatistics'
by Oliver Breidenbach (Jira)
Oliver Breidenbach created WFLY-13604:
-----------------------------------------
Summary: Infinispan does not publish Statistic under object name 'javax.cache:type=CacheStatistics'
Key: WFLY-13604
URL: https://issues.redhat.com/browse/WFLY-13604
Project: WildFly
Issue Type: Feature Request
Components: JPA / Hibernate
Affects Versions: 18.0.1.Final
Reporter: Oliver Breidenbach
Assignee: Scott Marlow
WildFly uses Infinispan as its default JCache implementation. We monitor our application with JavaMelody, which expects to find the cache statistics under 'javax.cache:type=CacheStatistics' as defined by the [spec|https://www.javadoc.io/doc/javax.cache/cache-api/latest/javax/cache/CacheManager.html#enableStatistics-java.lang.String-boolean-].
So if Infinispan is used as the JCache provider, it must publish its statistics under {{javax.cache:type=CacheStatistics}}.
Not sure whether the failure is in how WildFly integrates Infinispan or in the infinispan-jcache module.
Also see: [Missing cache statistics|https://stackoverflow.com/questions/62423799/missing-mbean-typ...]
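The expectation can be checked with a small JMX query against the spec-mandated ObjectName pattern. A sketch using only the JDK (no cache-api dependency); on a plain JVM the query simply returns an empty set, while on a server whose JCache provider registers statistics correctly it should not:

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

// Sketch: look up JCache statistics MBeans under the JSR-107 mandated domain.
public class JCacheStatsLookup {

    // Spec-defined domain/type; CacheManager and Cache key properties are wildcarded.
    static ObjectName statsPattern() throws MalformedObjectNameException {
        return new ObjectName("javax.cache:type=CacheStatistics,CacheManager=*,Cache=*");
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        Set<ObjectName> stats = server.queryNames(statsPattern(), null);
        // Empty on a plain JVM; non-empty when a compliant provider has
        // registered statistics (and statistics are enabled for the cache).
        System.out.println("CacheStatistics MBeans found: " + stats.size());
    }
}
```

This is the lookup a tool like JavaMelody effectively performs, so an empty result against a running WildFly instance would reproduce the report.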
[JBoss JIRA] (JGRP-2485) UDP is not working after upgrade to 3_6_19 from jgroups-3.4.0.Alpha2
by Janardhan Naidu (Jira)
[ https://issues.redhat.com/browse/JGRP-2485?page=com.atlassian.jira.plugin... ]
Janardhan Naidu commented on JGRP-2485:
---------------------------------------
With *jgroups-3.4.0.Alpha2* we were using the UDP property string below, and it had no issues; cluster communication was working fine.
*property_string=UDP*(bind_addr=*hostIP*;bind_port=39061;mcast_addr=239.255.166.17;mcast_port=39060;ip_ttl=32;mcast_send_buf_size=150000;mcast_recv_buf_size=80000):PING(timeout=2000;num_initial_members=3):MERGE2(min_interval=5000;max_interval=10000):FD_ALL(interval=5000;timeout=20000):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=300,600,1200,2400,4800;discard_delivered_msgs=true):UNICAST(timeout=5000):pbcast.STABLE(desired_avg_gossip=20000):FRAG(frag_size=8096):pbcast.GMS(join_timeout=5000;print_local_addr=true)
After upgrading to *3_6_19*, cluster communication stopped working, so we modified the UDP property string as below. Cluster communication is still broken, and no errors or warnings appear after these changes.
*property_string=UDP*(bind_addr=*hostIP*;bind_port=39061;mcast_addr=239.255.166.17;mcast_port=39060;ip_ttl=32;mcast_send_buf_size=150000;mcast_recv_buf_size=80000):PING:MERGE2(min_interval=5000;max_interval=10000):FD_ALL(interval=5000;timeout=20000):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(use_mcast_xmit=true;retransmit_timeout=300,600,1200,2400,4800;discard_delivered_msgs=true):UNICAST:pbcast.STABLE(desired_avg_gossip=20000):FRAG(frag_size=8096):pbcast.GMS(join_timeout=5000;print_local_addr=true)
> UDP is not working after upgrade to 3_6_19 from jgroups-3.4.0.Alpha2
> --------------------------------------------------------------------
>
> Key: JGRP-2485
> URL: https://issues.redhat.com/browse/JGRP-2485
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 3.6.19
> Reporter: Janardhan Naidu
> Assignee: Bela Ban
> Priority: Major
> Attachments: noapp.log.D20200619.T053202_NODE1
>
>
> Hi Team,
>
> We just upgraded from jgroups-3.4.0.Alpha2 to 3_6_19. Post-upgrade, UDP cluster communication is not working.
> After the upgrade we were hitting the warnings below, and we resolved them by changing the UDP property_string.
> *Warnings:*
> *WARNING: JGRP000014: Discovery.timeout has been deprecated: GMS.join_timeout should be used instead*
> [2020-06-17 14:05:49.271] ALL 000000000000 GLOBAL_SCOPE Jun 17, 2020 2:05:49 PM org.jgroups.stack.Configurator resolveAndAssignField
> *WARNING: JGRP000014: Discovery.num_initial_members has been deprecated: will be ignored*
> [2020-06-17 14:05:49.396] ALL 000000000000 GLOBAL_SCOPE Jun 17, 2020 2:05:49 PM org.jgroups.protocols.pbcast.NAKACK init
> *WARNING: use_mcast_xmit should not be used because the transport (TCP) does not support IP multicasting; setting use_mcast_xmit to false*
>
> To resolve the above warnings, we went through the tcp.xml shipped in the 3_6_19 jar,
> and now our properties look like:
> *property_string=UDP*(bind_addr=*hostIP*;bind_port=39061;mcast_addr=239.255.166.17;mcast_port=39060;ip_ttl=32;mcast_send_buf_size=150000;mcast_recv_buf_size=80000):PING:MERGE2(min_interval=5000;max_interval=10000):FD_ALL(interval=5000;timeout=20000):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(use_mcast_xmit=true;retransmit_timeout=300,600,1200,2400,4800;discard_delivered_msgs=true):UNICAST:pbcast.STABLE(desired_avg_gossip=20000):FRAG(frag_size=8096):pbcast.GMS(join_timeout=5000;print_local_addr=true)
>
> *distribution_property_string=TCP*(bind_port=39060;thread_pool_rejection_policy=run):TCPPING(async_discovery=true;initial_hosts=*hostIP*[39060];port_range=0;):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(use_mcast_xmit=false;discard_delivered_msgs=true;):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
>
> *lock.protocolStack=UDP*(bind_addr=*hostIP*;bind_port=39062;mcast_addr=239.255.166.17;mcast_port=39069;ip_ttl=32;mcast_send_buf_size=150000;mcast_recv_buf_size=80000):PING:MERGE2(min_interval=5000;max_interval=10000):FD_ALL(interval=5000;timeout=20000):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(use_mcast_xmit=true;retransmit_timeout=300,600,1200,2400,4800):UNICAST:pbcast.STABLE(desired_avg_gossip=20000):FRAG(frag_size=8096):pbcast.GMS(join_timeout=5000;print_local_addr=true):CENTRAL_LOCK(num_backups=2)
>
>
> With the above properties we are not seeing any warnings or errors, but cluster communication over UDP is still not working.
> Please help us resolve this.
>
> Thanks,
> Janardhan
[JBoss JIRA] (JGRP-2485) UDP is not working after upgrade to 3_6_19 from jgroups-3.4.0.Alpha2
by Janardhan Naidu (Jira)
[ https://issues.redhat.com/browse/JGRP-2485?page=com.atlassian.jira.plugin... ]
Janardhan Naidu updated JGRP-2485:
----------------------------------
Attachment: noapp.log.D20200619.T053202_NODE1