[JBoss JIRA] (JGRP-2206) Property strings are correct but JGROUPS is not recognizing other nodes
by Swathi Kumar (JIRA)
[ https://issues.jboss.org/browse/JGRP-2206?page=com.atlassian.jira.plugin.... ]
Swathi Kumar commented on JGRP-2206:
------------------------------------
Hi Bela,
The property string above shows our customer *IS* setting the bind address: TCP(bind_addr=10.38.46.27;bind_port=5061;level=ERROR), so something else in his VMware environment seems to be causing this issue.
Our customer can ping between his nodes using the addresses above. We have asked him to execute ipconfig from a command window on node1.
Here is what they see:
C:\Users>ipconfig
Windows IP Configuration
Ethernet adapter Local Area Connection 5:
Connection-specific DNS Suffix . : s3.chp.cba
Link-local IPv6 Address . . . . . : fe80::1dda:27b9:ca17:40d5%11
IPv4 Address. . . . . . . . . . . : 10.38.46.27
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 10.38.46.254
Tunnel adapter isatap.s3.chp.cba:
Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix . :
Do you have insight (or troubleshooting experience) into VMware configurations that might explain why the localhost value is actually bound instead of the "bind_addr=10.38.46.27" above?
I appreciate any help you may be able to provide,
Jeff
> Property strings are correct but JGROUPS is not recognizing other nodes
> -----------------------------------------------------------------------
>
> Key: JGRP-2206
> URL: https://issues.jboss.org/browse/JGRP-2206
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 3.4
> Environment: With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP, Data Mining and Real Application Testing options
> OS: Windows Server 2008 R2 6.1,amd64
> Java version: 1.7.0,pwa6470sr9fp10-20150708_01 (SR9 FP10),IBM Corporation
> Reporter: Swathi Kumar
> Assignee: Bela Ban
> Priority: Blocker
> Attachments: VisibilityIssue.zip
>
>
> Our customer has a four-node cluster which we believe is correctly defined, yet the nodes are not communicating with each other.
> All nodes are on VMware. None of the hostnames are virtual (in that they are all directly attached to an IP and are not managed by load balancers, etc.).
>
> The nodes are located in separate data centers (2 in each) and JGroups is operating over TCP, rather than UDP multicast.
> NOTE: The issue occurs only in the customer's environment (we are not able to reproduce this issue in our lab).
> We are attaching our logs (noapp.log.<timestamp>) with JGroups debugging enabled.
> *Node1 Property strings*:
> [2017-07-24 21:58:30.867] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.property_string. Receivied this property: TCP(bind_addr=10.38.46.27;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.46.27[5061],10.38.46.28[5061],10.38.175.30[5061],10.38.175.32[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
> [2017-07-24 21:58:30.867] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.property_string. Using this property: TCP(bind_addr=10.38.46.27;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.46.27[5061],10.38.46.28[5061],10.38.175.30[5061],10.38.175.32[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
> [2017-07-24 21:58:30.867] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.distributed_property_string. Receivied this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.46.27[5060],10.38.46.28[5060],10.38.175.30[5060],10.38.175.32[5060];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
> [2017-07-24 21:58:30.867] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.distributed_property_string. Using this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.46.27[5060],10.38.46.28[5060],10.38.175.30[5060],10.38.175.32[5060];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
> *Node2 Property strings*:
> [2017-07-24 22:01:01.666] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.property_string. Receivied this property: TCP(bind_addr=10.38.46.28;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.46.28[5061],10.38.46.27[5061],10.38.175.30[5061],10.38.175.32[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
> [2017-07-24 22:01:01.666] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.property_string. Using this property: TCP(bind_addr=10.38.46.28;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.46.28[5061],10.38.46.27[5061],10.38.175.30[5061],10.38.175.32[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
> [2017-07-24 22:01:01.666] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.distributed_property_string. Receivied this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.46.28[5060],10.38.46.27[5060],10.38.175.30[5060],10.38.175.32[5060];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
> [2017-07-24 22:01:01.666] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.distributed_property_string. Using this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.46.28[5060],10.38.46.27[5060],10.38.175.30[5060],10.38.175.32[5060];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
> *Node3 Property strings*:
> [2017-07-24 22:02:01.411] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.property_string. Receivied this property: TCP(bind_addr=10.38.175.30;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.175.30[5061],10.38.46.27[5061],10.38.46.28[5061],10.38.175.32[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
> [2017-07-24 22:02:01.411] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.property_string. Using this property: TCP(bind_addr=10.38.175.30;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.175.30[5061],10.38.46.27[5061],10.38.46.28[5061],10.38.175.32[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
> [2017-07-24 22:02:01.411] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.distributed_property_string. Receivied this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.175.30[5060],10.38.46.27[5060],10.38.46.28[5060],10.38.175.32[5060];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
> [2017-07-24 22:02:01.411] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.distributed_property_string. Using this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.175.30[5060],10.38.46.27[5060],10.38.46.28[5060],10.38.175.32[5060];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
> *Node4 Property strings*:
> [2017-07-24 22:01:14.365] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.property_string. Receivied this property: TCP(bind_addr=10.38.175.32;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.175.32[5061],10.38.46.27[5061],10.38.46.28[5061],10.38.175.30[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
> [2017-07-24 22:01:14.365] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.property_string. Using this property: TCP(bind_addr=10.38.175.32;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.175.32[5061],10.38.46.27[5061],10.38.46.28[5061],10.38.175.30[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
> [2017-07-24 22:01:14.365] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.distributed_property_string. Receivied this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.175.32[5060],10.38.46.27[5060],10.38.46.28[5060],10.38.175.30[5060];port_range=1;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
> [2017-07-24 22:01:14.365] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.distributed_property_string. Using this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.175.32[5060],10.38.46.27[5060],10.38.46.28[5060],10.38.175.30[5060];port_range=1;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
>
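A quick way to check whether the configured bind_addr is actually usable on a host, independent of JGroups, is to try binding a plain server socket to it, which is essentially what the TCP transport does with bind_addr/bind_port. The helper below is a minimal sketch of mine (not JGroups code); on node1 one would pass "10.38.46.27" and 5061.

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class BindCheck {
    // Attempts to bind a server socket to the given address and port.
    // Returns the locally bound address, or null if the bind fails
    // (e.g. the address is not assigned to any local interface).
    static InetAddress tryBind(String addr, int port) {
        try (ServerSocket ss = new ServerSocket()) {
            ss.bind(new InetSocketAddress(InetAddress.getByName(addr), port));
            return ss.getInetAddress();
        } catch (Exception e) {
            return null;
        }
    }

    public static void main(String[] args) {
        // Port 0 lets the OS pick a free port; replace with the real
        // bind_addr/bind_port when diagnosing on the customer's node.
        System.out.println(tryBind("127.0.0.1", 0));
    }
}
```

If this returns null for 10.38.46.27 on node1, the problem is below JGroups (VMware networking or the Windows stack); if it succeeds, the property string is not the one actually reaching the transport.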
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ELY-1304) Elytron subsystem does not expose digest-sha-384 for digest password
by Yeray Borges (JIRA)
[ https://issues.jboss.org/browse/ELY-1304?page=com.atlassian.jira.plugin.s... ]
Yeray Borges reassigned ELY-1304:
---------------------------------
Assignee: Yeray Borges (was: Darran Lofthouse)
> Elytron subsystem does not expose digest-sha-384 for digest password
> --------------------------------------------------------------------
>
> Key: ELY-1304
> URL: https://issues.jboss.org/browse/ELY-1304
> Project: WildFly Elytron
> Issue Type: Bug
> Reporter: Martin Choma
> Assignee: Yeray Borges
>
> For the sake of completeness, add digest-sha-384 to the allowed values of the algorithm attribute of the set-password operation:
> {code:title=/subsystem=elytron/ldap-realm=a:read-operation-description(name=set-password)}
> "digest" => {
> "type" => OBJECT,
> "description" => "A digest password.",
> "expressions-allowed" => false,
> "required" => false,
> "nillable" => true,
> "value-type" => {
> "algorithm" => {
> "type" => STRING,
> "description" => "The algorithm used to encrypt the password.",
> "expressions-allowed" => false,
> "required" => false,
> "nillable" => true,
> "default" => "digest-sha-512",
> "allowed" => [
> "digest-md5",
> "digest-sha",
> "digest-sha-256",
> "digest-sha-512"
> ]
> },
> "password" => {
> "type" => STRING,
> "description" => "The actual password to set.",
> "expressions-allowed" => false,
> "required" => true,
> "nillable" => false,
> "min-length" => 1L,
> "max-length" => 2147483647L
> },
> "realm" => {
> "type" => STRING,
> "description" => "The realm.",
> "expressions-allowed" => false,
> "required" => true,
> "nillable" => false,
> "min-length" => 1L,
> "max-length" => 2147483647L
> }
> }
> },
> {code}
> Passwords of types otp, salted-simple-digest, and simple-digest already expose the sha-384 variant.
> It seems to me the underlying Elytron implementation is already prepared for this.
> {code:java|title=DigestPasswordImpl.java}
> private static MessageDigest getMessageDigest(final String algorithm) throws NoSuchAlgorithmException {
> switch (algorithm) {
> case ALGORITHM_DIGEST_MD5:
> return MessageDigest.getInstance("MD5");
> case ALGORITHM_DIGEST_SHA:
> return MessageDigest.getInstance("SHA-1");
> case ALGORITHM_DIGEST_SHA_256:
> return MessageDigest.getInstance("SHA-256");
> case ALGORITHM_DIGEST_SHA_384:
> return MessageDigest.getInstance("SHA-384");
> case ALGORITHM_DIGEST_SHA_512:
> return MessageDigest.getInstance("SHA-512");
> default:
> throw log.noSuchAlgorithmInvalidAlgorithm(algorithm);
> }
> }
> {code}
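As the switch above suggests, the JDK itself already ships a standard SHA-384 MessageDigest, so exposing digest-sha-384 in the management model should not need new crypto support. A small standalone check (the helper name is mine, not Elytron API):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Sha384Demo {
    // Mirrors the digest-* -> JDK algorithm-name mapping shown above:
    // "digest-sha-384" would map to the standard "SHA-384" algorithm.
    static byte[] digest(String jdkName, byte[] input) throws NoSuchAlgorithmException {
        return MessageDigest.getInstance(jdkName).digest(input);
    }

    public static void main(String[] args) throws Exception {
        // SHA-384 produces 384 bits = 48 bytes.
        System.out.println(digest("SHA-384", "secret".getBytes()).length);
    }
}
```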
[JBoss JIRA] (WFCORE-3112) Upgrade DMR to 1.4.1.Final
by Brian Stansberry (JIRA)
Brian Stansberry created WFCORE-3112:
----------------------------------------
Summary: Upgrade DMR to 1.4.1.Final
Key: WFCORE-3112
URL: https://issues.jboss.org/browse/WFCORE-3112
Project: WildFly Core
Issue Type: Component Upgrade
Components: Domain Management
Reporter: Brian Stansberry
Assignee: Brian Stansberry
Pull in a release with license info in the pom.
[JBoss JIRA] (WFCORE-3107) Allow slave hosts to ignore missing RBAC config resources
by Brian Stansberry (JIRA)
[ https://issues.jboss.org/browse/WFCORE-3107?page=com.atlassian.jira.plugi... ]
Brian Stansberry updated WFCORE-3107:
-------------------------------------
Description:
Part of parent issue whereby slaves can ignore missing RBAC constraint resources for write requests coming from the DC.
If the DC sent the request, then the address is ok overall. So if it's missing on the slave that means the slave doesn't have that constraint registered and doesn't need to handle the op.
This fix could possibly be backported to the 2.1.x and to EAP 6.4.x in lieu of adding transformers as part of the parent issue. In the case of 2.1.x it also allows slaves to ignore the related extension even if the code for it is present (which is only a minor benefit.)
was:
Part of parent issue whereby slaves can ignore missing RBAC constraint resources for write requests coming from the DC.
If the DC sent the request, then the address is ok overall. So if it's missing on the slave that means the slave doesn't have that constraint registered and doesn't need to handle the op.
This fix could possibly be backported to the 2.1.x and to EAP 6.4.x in lieu of adding transformers as part of the parent issue.
> Allow slave hosts to ignore missing RBAC config resources
> ---------------------------------------------------------
>
> Key: WFCORE-3107
> URL: https://issues.jboss.org/browse/WFCORE-3107
> Project: WildFly Core
> Issue Type: Sub-task
> Components: Domain Management
> Reporter: Brian Stansberry
> Assignee: Brian Stansberry
>
> Part of parent issue whereby slaves can ignore missing RBAC constraint resources for write requests coming from the DC.
> If the DC sent the request, then the address is ok overall. So if it's missing on the slave that means the slave doesn't have that constraint registered and doesn't need to handle the op.
> This fix could possibly be backported to the 2.1.x and to EAP 6.4.x in lieu of adding transformers as part of the parent issue. In the case of 2.1.x it also allows slaves to ignore the related extension even if the code for it is present (which is only a minor benefit.)
[JBoss JIRA] (WFCORE-3110) NullPointerException in ModelControllerMBeanServerPlugin::accepts
by Brian Stansberry (JIRA)
Brian Stansberry created WFCORE-3110:
----------------------------------------
Summary: NullPointerException in ModelControllerMBeanServerPlugin::accepts
Key: WFCORE-3110
URL: https://issues.jboss.org/browse/WFCORE-3110
Project: WildFly Core
Issue Type: Bug
Components: JMX
Reporter: Brian Stansberry
ModelControllerMBeanServerPlugin.accepts throws an NPE if getLegacyDomain returns null.
{code}
java.lang.NullPointerException
at java.util.regex.Matcher.getTextLength(Unknown Source)
at java.util.regex.Matcher.reset(Unknown Source)
at java.util.regex.Matcher.<init>(Unknown Source)
at java.util.regex.Pattern.matcher(Unknown Source)
at org.jboss.as.jmx.model.ModelControllerMBeanServerPlugin.accepts(ModelControllerMBeanServerPlugin.java:111)
at org.jboss.as.jmx.PluggableMBeanServerImpl.queryMBeans(PluggableMBeanServerImpl.java:821)
{code}
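The trace shows the NPE comes from Pattern.matcher(null), which throws because Matcher.reset rejects a null input. A minimal sketch of the kind of null guard that would avoid it (a hypothetical reduction of the accepts logic, not the actual WildFly Core code):

```java
import java.util.regex.Pattern;

public class AcceptsGuard {
    // Hypothetical reduction of ModelControllerMBeanServerPlugin.accepts:
    // Pattern.matcher(null) throws NullPointerException, so a null legacy
    // domain must be rejected before matching.
    static boolean accepts(Pattern domainPattern, String legacyDomain) {
        if (legacyDomain == null) {
            return false; // guard: matcher(null) would NPE, as in the trace above
        }
        return domainPattern.matcher(legacyDomain).matches();
    }

    public static void main(String[] args) {
        Pattern p = Pattern.compile("jboss\\.as.*");
        System.out.println(accepts(p, null));        // false, no NPE
        System.out.println(accepts(p, "jboss.as"));  // true
    }
}
```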
[JBoss JIRA] (SECURITY-864) NameNotFoundException due to policyRegistration -- service jboss.naming.context.java.policyRegistration
by Eric B (JIRA)
[ https://issues.jboss.org/browse/SECURITY-864?page=com.atlassian.jira.plug... ]
Eric B commented on SECURITY-864:
---------------------------------
[~pmm] I noticed the `JBossPolicyRegistrationObjectFactory`, but I'm not entirely sure how it should be used.
I created an issue on GitHub relating to my question.
> NameNotFoundException due to policyRegistration -- service jboss.naming.context.java.policyRegistration
> -------------------------------------------------------------------------------------------------------
>
> Key: SECURITY-864
> URL: https://issues.jboss.org/browse/SECURITY-864
> Project: PicketBox
> Issue Type: Bug
> Components: PicketBox
> Reporter: Chao Wang
> Assignee: Stefan Guilhen
>
> "NameNotFoundException due to policyRegistration -- service jboss.naming.context.java.policyRegistration" is recorded in server.log during quickstart example run by changing log level:
> {noformat}
> <logger category="org.jboss.as.security">
> <level name="TRACE"/>
> </logger>
> <logger category="org.jboss.security">
> <level name="TRACE"/>
> </logger>
> {noformat}
> See the detailed description in community discussion [#907134|https://developer.jboss.org/message/907134]
> I chose the Jira component picketbox since the exception is titled "PBOX000293: Exception caught: javax.naming.NameNotFoundException"
[JBoss JIRA] (WFLY-9140) Elytron API doesn't work in JSPs
by Josef Cacek (JIRA)
Josef Cacek created WFLY-9140:
---------------------------------
Summary: Elytron API doesn't work in JSPs
Key: WFLY-9140
URL: https://issues.jboss.org/browse/WFLY-9140
Project: WildFly
Issue Type: Bug
Components: Web (Undertow), Security
Reporter: Josef Cacek
Assignee: Stuart Douglas
Priority: Blocker
Elytron dependencies seem to be missing from the Jasper classpath, so using the Elytron API in JSPs can lead to ClassNotFound issues.
I have the following JSP:
{code:jsp}
<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8" session="false"%>
<%
final java.util.concurrent.Callable<String> callable = () -> {
return "test";
};
System.out.println(org.wildfly.security.auth.client.AuthenticationContext.captureCurrent().runCallable(callable));
%>
{code}
But its deployment fails due to a ClassNotFound error during compilation: Jasper is missing the Contextual class from wildfly-common.
{noformat}
13:44:56,380 ERROR [io.undertow.request] (default task-7) UT005023: Exception handling request to /authenticator/: org.apache.jasper.JasperException: JBWEB004062: Unable to compile class for JSP:
JBWEB004060: An error occurred at line: 8 in the jsp file: /index.jsp
The type org.wildfly.common.context.Contextual cannot be resolved. It is indirectly referenced from required .class files
5: return "test";
6: };
7:
8: System.out.println(org.wildfly.security.auth.client.AuthenticationContext.captureCurrent().runCallable(callable));
9: %>
10:
JBWEB004060: An error occurred at line: 8 in the jsp file: /index.jsp
The method runCallable(Callable<String>) is undefined for the type AuthenticationContext
5: return "test";
6: };
7:
8: System.out.println(org.wildfly.security.auth.client.AuthenticationContext.captureCurrent().runCallable(callable));
9: %>
10:
Stacktrace:
at org.apache.jasper.compiler.DefaultErrorHandler.javacError(DefaultErrorHandler.java:95)
at org.apache.jasper.compiler.ErrorDispatcher.javacError(ErrorDispatcher.java:198)
at org.apache.jasper.compiler.JDTCompiler.generateClass(JDTCompiler.java:449)
at org.apache.jasper.compiler.Compiler.compile(Compiler.java:359)
at org.apache.jasper.compiler.Compiler.compile(Compiler.java:334)
at org.apache.jasper.compiler.Compiler.compile(Compiler.java:321)
at org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:652)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:358)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:403)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:347)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85)
at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
at io.undertow.jsp.JspFileHandler.handleRequest(JspFileHandler.java:32)
at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
at org.wildfly.elytron.web.undertow.server.ElytronRunAsHandler.lambda$handleRequest$1(ElytronRunAsHandler.java:68)
at org.wildfly.security.auth.server.FlexibleIdentityAssociation.runAsFunctionEx(FlexibleIdentityAssociation.java:101)
at org.wildfly.security.auth.server.Scoped.runAsFunctionEx(Scoped.java:150)
at org.wildfly.security.auth.server.Scoped.runAs(Scoped.java:62)
at org.wildfly.elytron.web.undertow.server.ElytronRunAsHandler.handleRequest(ElytronRunAsHandler.java:67)
at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)
at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at org.wildfly.extension.undertow.deployment.GlobalRequestControllerHandler.handleRequest(GlobalRequestControllerHandler.java:68)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:292)
at io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:81)
at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:138)
at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:135)
at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48)
at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:272)
at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:104)
at io.undertow.server.Connectors.executeRootHandler(Connectors.java:326)
at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:812)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
{noformat}
This is a problem similar (in module dependencies) to the one we already reported for Maven dependencies in JBEAP-10893.
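Until the module dependencies are fixed, a possible deployment-level workaround would be to expose the missing modules to the application via jboss-deployment-structure.xml. This is only a sketch under the assumption that the org.wildfly.common and org.wildfly.security.elytron module names cover the missing classes:

```xml
<jboss-deployment-structure>
  <deployment>
    <dependencies>
      <!-- assumed module names: wildfly-common provides
           org.wildfly.common.context.Contextual -->
      <module name="org.wildfly.common"/>
      <module name="org.wildfly.security.elytron"/>
    </dependencies>
  </deployment>
</jboss-deployment-structure>
```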