[JBoss JIRA] (JGRP-2485) UDP is not working after upgrade to 3.6.19 from jgroups-3.4.0.Alpha2
by Bela Ban (Jira)
[ https://issues.redhat.com/browse/JGRP-2485?page=com.atlassian.jira.plugin... ]
Bela Ban closed JGRP-2485.
--------------------------
Resolution: Cannot Reproduce
> UDP is not working after upgrade to 3.6.19 from jgroups-3.4.0.Alpha2
> --------------------------------------------------------------------
>
> Key: JGRP-2485
> URL: https://issues.redhat.com/browse/JGRP-2485
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 3.6.19
> Reporter: Janardhan Naidu
> Assignee: Bela Ban
> Priority: Major
> Attachments: noapp.log.D20200619.T053202_NODE1, udp.xml
>
>
> Hi Team,
>
> We just upgraded from jgroups-3.4.0.Alpha2 to 3.6.19, and since the upgrade UDP cluster communication is not working.
> After the upgrade we were hitting the warnings below, which we addressed by changing the property_string of UDP.
> *Warnings:*
> *WARNING: JGRP000014: Discovery.timeout has been deprecated: GMS.join_timeout should be used instead*
> [2020-06-17 14:05:49.271] ALL 000000000000 GLOBAL_SCOPE Jun 17, 2020 2:05:49 PM org.jgroups.stack.Configurator resolveAndAssignField
> *WARNING: JGRP000014: Discovery.num_initial_members has been deprecated: will be ignored*
> [2020-06-17 14:05:49.396] ALL 000000000000 GLOBAL_SCOPE Jun 17, 2020 2:05:49 PM org.jgroups.protocols.pbcast.NAKACK init
> *WARNING: use_mcast_xmit should not be used because the transport (TCP) does not support IP multicasting; setting use_mcast_xmit to false*
>
> To resolve the above warnings, we went through the tcp.xml shipped in the 3.6.19 jar,
> and our properties now look like this:
> *property_string=UDP*(bind_addr=*hostIP*;bind_port=39061;mcast_addr=239.255.166.17;mcast_port=39060;ip_ttl=32;mcast_send_buf_size=150000;mcast_recv_buf_size=80000):PING:MERGE2(min_interval=5000;max_interval=10000):FD_ALL(interval=5000;timeout=20000):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(use_mcast_xmit=true;retransmit_timeout=300,600,1200,2400,4800;discard_delivered_msgs=true):UNICAST:pbcast.STABLE(desired_avg_gossip=20000):FRAG(frag_size=8096):pbcast.GMS(join_timeout=5000;print_local_addr=true)
>
> *distribution_property_string=TCP*(bind_port=39060;thread_pool_rejection_policy=run):TCPPING(async_discovery=true;initial_hosts=*hostIP*[39060];port_range=0;):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(use_mcast_xmit=false;discard_delivered_msgs=true;):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
>
> *lock.protocolStack=UDP*(bind_addr=*hostIP*;bind_port=39062;mcast_addr=239.255.166.17;mcast_port=39069;ip_ttl=32;mcast_send_buf_size=150000;mcast_recv_buf_size=80000):PING:MERGE2(min_interval=5000;max_interval=10000):FD_ALL(interval=5000;timeout=20000):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(use_mcast_xmit=true;retransmit_timeout=300,600,1200,2400,4800):UNICAST:pbcast.STABLE(desired_avg_gossip=20000):FRAG(frag_size=8096):pbcast.GMS(join_timeout=5000;print_local_addr=true):CENTRAL_LOCK(num_backups=2)
>
>
> With the above properties we no longer see any warnings or errors, but cluster communication over UDP is still not working.
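> For what it's worth, a standalone test along the lines below can isolate the JGroups stack from the rest of the application (a sketch only: it assumes the UDP property string above is passed in via -Dprops, and the class and cluster names are illustrative):
> {code:java}
> // Sketch: smoke-test the UDP stack on its own, independent of the application.
> import org.jgroups.JChannel;
> import org.jgroups.Message;
> import org.jgroups.ReceiverAdapter;
> import org.jgroups.View;
>
> public class UdpSmokeTest {
>     public static void main(String[] args) throws Exception {
>         String props = System.getProperty("props");   // e.g. the UDP(...):PING:... string above
>         JChannel ch = new JChannel(props);
>         ch.setReceiver(new ReceiverAdapter() {
>             @Override public void viewAccepted(View view) {
>                 System.out.println("-- view: " + view);    // should eventually list all nodes
>             }
>             @Override public void receive(Message msg) {
>                 System.out.println("<< " + msg.getObject() + " from " + msg.getSrc());
>             }
>         });
>         ch.connect("udp-smoke-test");
>         ch.send(new Message(null, "hello from " + ch.getAddress()));
>         Thread.sleep(30_000);   // leave time to observe views and messages from the other nodes
>         ch.close();
>     }
> }
> {code}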
> Please help us resolve this.
>
> Thanks,
> Janardhan
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (JGRP-2256) Connection.Receiver - Failed handling incoming message
by Bela Ban (Jira)
[ https://issues.redhat.com/browse/JGRP-2256?page=com.atlassian.jira.plugin... ]
Bela Ban closed JGRP-2256.
--------------------------
Resolution: Cannot Reproduce
> Connection.Receiver - Failed handling incoming message
> ------------------------------------------------------
>
> Key: JGRP-2256
> URL: https://issues.redhat.com/browse/JGRP-2256
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.0.10
> Reporter: Sibin Karnavar
> Assignee: Bela Ban
> Priority: Minor
>
> In an AWS environment,
> I have not defined the port {color:red}41493{color}. I have configured the TCP bind_port as 7803.
> But I can see from the stack trace below that it is being used:
> Connection.Receiver [10.91.133.210:7803 - 10.91.135.64:{color:red}41493{color}]
> 1) Do I need to open/configure any other port? I am wondering how 41493 is used when I have configured my bind_port as 7803.
> 2) What is the significance of client_bind_port for TCP? What happens if I don't configure it?
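> (For context, the relationship between the two ports can be seen with plain sockets; the sketch below is not the JGroups API, just the underlying socket behaviour. The commented-out explicit bind corresponds to setting a non-zero client_bind_port; the default of 0 means the OS picks an ephemeral local port for outgoing connections, which is where a port such as 41493 comes from.)
> {code:java}
> // Sketch: a TCP client socket that is not explicitly bound gets an OS-assigned
> // ephemeral local port, which is what the peer reports in
> // "Connection.Receiver [10.91.133.210:7803 - 10.91.135.64:41493]".
> import java.net.InetSocketAddress;
> import java.net.ServerSocket;
> import java.net.Socket;
>
> public class EphemeralPortDemo {
>     public static void main(String[] args) throws Exception {
>         try (ServerSocket server = new ServerSocket(7803)) {                 // listening side, like bind_port
>             try (Socket client = new Socket()) {
>                 // client.bind(new InetSocketAddress(7804));                 // explicit local port, like client_bind_port=7804
>                 client.connect(new InetSocketAddress("127.0.0.1", 7803));
>                 System.out.println("client local port = " + client.getLocalPort());  // ephemeral, e.g. 41493
>                 try (Socket accepted = server.accept()) {
>                     System.out.println("server sees remote port = " + accepted.getPort());
>                 }
>             }
>         }
>     }
> }
> {code}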
> 2018-03-05 19:07:28.833 ERROR 23012 --- [TQ-Bundler-9,ABC-SKS-stage_SJX_040320181613_XSJ,ip-10-91-133-210-46045] org.jgroups.protocols.TCP : - - JGRP000034: ip-10-91-133-210-46045: failure sending message to ip-10-91-135-64-18021: java.net.SocketTimeoutException: connect timed out
> 2018-03-05 19:07:29.834 WARN 23012 --- [localhost-startStop-1] org.jgroups.protocols.pbcast.GMS : - - ip-10-91-133-210-46045: JOIN(ip-10-91-133-210-46045) sent to ip-10-91-135-64-18021 timed out (after 3000 ms), on try 1
> 2018-03-05 19:07:32.842 ERROR 23012 --- [TQ-Bundler-9,ABC-SKS-stage_SJX_040320181613_XSJ,ip-10-91-133-210-46045] org.jgroups.protocols.TCP : - - JGRP000034: ip-10-91-133-210-46045: failure sending message to ip-10-91-135-64-18021: java.net.SocketTimeoutException: connect timed out
> 2018-03-05 19:07:32.889 WARN 23012 --- [localhost-startStop-1] org.jgroups.protocols.pbcast.GMS : - - ip-10-91-133-210-46045: JOIN(ip-10-91-133-210-46045) sent to ip-10-91-135-64-18021 timed out (after 3000 ms), on try 2
> 2018-03-05 19:07:35.944 WARN 23012 --- [localhost-startStop-1] org.jgroups.protocols.pbcast.GMS : - - ip-10-91-133-210-46045: JOIN(ip-10-91-133-210-46045) sent to ip-10-91-135-64-18021 timed out (after 3000 ms), on try 3
> 2018-03-05 19:07:36.848 ERROR 23012 --- [TQ-Bundler-9,ABC-SKS-stage_SJX_040320181613_XSJ,ip-10-91-133-210-46045] org.jgroups.protocols.TCP : - - JGRP000034: ip-10-91-133-210-46045: failure sending message to ip-10-91-135-64-18021: java.net.SocketTimeoutException: connect timed out
> 2018-03-05 19:07:38.999 WARN 23012 --- [localhost-startStop-1] org.jgroups.protocols.pbcast.GMS : - - ip-10-91-133-210-46045: JOIN(ip-10-91-133-210-46045) sent to ip-10-91-135-64-18021 timed out (after 3000 ms), on try 4
> 2018-03-05 19:07:40.854 ERROR 23012 --- [TQ-Bundler-9,ABC-SKS-stage_SJX_040320181613_XSJ,ip-10-91-133-210-46045] org.jgroups.protocols.TCP : - - JGRP000034: ip-10-91-133-210-46045: failure sending message to ip-10-91-135-64-18021: java.net.SocketTimeoutException: connect timed out
> 2018-03-05 19:07:42.053 WARN 23012 --- [localhost-startStop-1] org.jgroups.protocols.pbcast.GMS : - - ip-10-91-133-210-46045: JOIN(ip-10-91-133-210-46045) sent to ip-10-91-135-64-18021 timed out (after 3000 ms), on try 5
> 2018-03-05 19:07:44.860 ERROR 23012 --- [TQ-Bundler-9,ABC-SKS-stage_SJX_040320181613_XSJ,ip-10-91-133-210-46045] org.jgroups.protocols.TCP : - - JGRP000034: ip-10-91-133-210-46045: failure sending message to ip-10-91-135-64-18021: java.net.SocketTimeoutException: connect timed out
> 2018-03-05 19:07:45.108 WARN 23012 --- [localhost-startStop-1] org.jgroups.protocols.pbcast.GMS : - - ip-10-91-133-210-46045: JOIN(ip-10-91-133-210-46045) sent to ip-10-91-135-64-18021 timed out (after 3000 ms), on try 6
> 2018-03-05 19:07:48.163 WARN 23012 --- [localhost-startStop-1] org.jgroups.protocols.pbcast.GMS : - - ip-10-91-133-210-46045: JOIN(ip-10-91-133-210-46045) sent to ip-10-91-135-64-18021 timed out (after 3000 ms), on try 7
> 2018-03-05 19:07:48.866 ERROR 23012 --- [TQ-Bundler-9,ABC-SKS-stage_SJX_040320181613_XSJ,ip-10-91-133-210-46045] org.jgroups.protocols.TCP : - - JGRP000034: ip-10-91-133-210-46045: failure sending message to ip-10-91-135-64-18021: java.net.SocketTimeoutException: connect timed out
> 2018-03-05 19:07:51.218 WARN 23012 --- [localhost-startStop-1] org.jgroups.protocols.pbcast.GMS : - - ip-10-91-133-210-46045: JOIN(ip-10-91-133-210-46045) sent to ip-10-91-135-64-18021 timed out (after 3000 ms), on try 8
> 2018-03-05 19:07:51.218 WARN 23012 --- [localhost-startStop-1] org.jgroups.protocols.pbcast.GMS : - - ip-10-91-133-210-46045: too many JOIN attempts (8): becoming singleton
> 2018-03-05 19:07:52.041 ERROR 23012 --- [Connection.Receiver [10.91.133.210:7803 - 10.91.135.64:41493]-12,ABC-SKS-stage_SJX_040320181613_XSJ,ip-10-91-133-210-46045] org.jgroups.protocols.TCP : - - JGRP000030: ip-10-91-133-210-46045: failed handling incoming message
> java.io.IOException: Stream closed
> at java.io.BufferedInputStream.getBufIfOpen(BufferedInputStream.java:170)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:269)
> at java.io.DataInputStream.readByte(DataInputStream.java:265)
> at org.jgroups.Message.readFrom(Message.java:724)
> at org.jgroups.util.Util.readMessageBatch(Util.java:1193)
> at org.jgroups.protocols.TP.handleMessageBatch(TP.java:1329)
> at org.jgroups.protocols.TP.receive(TP.java:1321)
> at org.jgroups.blocks.cs.BaseServer.receive(BaseServer.java:171)
> at org.jgroups.blocks.cs.TcpConnection$Receiver.run(TcpConnection.java:290)
> at java.lang.Thread.run(Thread.java:745)
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (JGRP-2461) Clustering can fail when re-adding an existing node using TCP_NIO2
by Bela Ban (Jira)
[ https://issues.redhat.com/browse/JGRP-2461?page=com.atlassian.jira.plugin... ]
Bela Ban closed JGRP-2461.
--------------------------
Resolution: Won't Fix
> Clustering can fail when re-adding an existing node using TCP_NIO2
> ------------------------------------------------------------------
>
> Key: JGRP-2461
> URL: https://issues.redhat.com/browse/JGRP-2461
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.1.8
> Reporter: Robert Mitchell
> Assignee: Bela Ban
> Priority: Major
>
> When a node leaves a cluster and then later attempts to re-enter, a race condition can occur in which clustering fails. Here is the sequence of events that seems to allow this:
> # The rejoining node must have a "higher" IP address than the current cluster coordinator.
> # On the rejoin attempt, the coordinator sends a message to the rejoining node (using its prior address) before the rejoining node sends to the coordinator. I have seen this happen for two reasons:
> ## UNICAST3 is resending messages (which often happens with the final LEAVE_RSP from the prior cluster membership because it apparently does not get acked before the connection closes)
> ## TCPPING is sending a ping request to the cached prior address.
> # The connection gets established. It will then be used by the rejoining node whenever communicating with the cluster coordinator.
> # However, the cluster coordinator has this as the connection for the prior address. So the following happens whenever it wants to send a message to the rejoining node:
> ## It will attempt to create a new connection.
> ## The rejoining node will reject it as a redundant connection, its current connection taking precedence since it comes from the same logical address as the "bad" connection.
> Since the messages needed to find and join the cluster, or to merge the two clusters, are all unicast messages, the rejoining node will never receive them and will not be able to join until something causes the initial connection to be closed.
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (JGRP-2399) Problem with loading resources from the bundle root (/) in OSGI environment
by Bela Ban (Jira)
[ https://issues.redhat.com/browse/JGRP-2399?page=com.atlassian.jira.plugin... ]
Bela Ban resolved JGRP-2399.
----------------------------
Resolution: Done
> Problem with loading resources from the bundle root (/) in OSGI environment
> ----------------------------------------------------------------------------
>
> Key: JGRP-2399
> URL: https://issues.redhat.com/browse/JGRP-2399
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 3.6.19
> Environment: Reproducible both on Linux (CentOS 7.3) and Windows 10; my project uses org.eclipse.equinox 3.6.0.v20100503 and Oracle JDK 1.8.0.212
> Reporter: Orlin Stalyanov
> Assignee: Bela Ban
> Priority: Major
>
> {color:#0747A6}Util.java:108
> resource_bundle=ResourceBundle.getBundle("jg-messages",Locale.getDefault(),Util.class.getClassLoader());{color}
> is crashing with:
> ...
> {color:#0747A6}Caused by: java.util.MissingResourceException: Can't find bundle for base name jg-messages, locale en_US
> at java.util.ResourceBundle.throwMissingResourceException(ResourceBundle.java:1581)
> at java.util.ResourceBundle.getBundleImpl(ResourceBundle.java:1396)
> at java.util.ResourceBundle.getBundle(ResourceBundle.java:1091)
> at org.jgroups.util.Util.<clinit>(Util.java:108)
> ... 162 more{color}
> Found some relevant info here: https://stackoverflow.com/questions/7564370/importing-resources-from-osgi...
> It seems OSGi doesn't really like resources residing in the bundle root.
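> A possible workaround is to resolve the bundle against a class loader that can actually see the bundle root (a sketch only; this is not necessarily the change applied in JGroups, and the fallback order is illustrative):
> {code:java}
> // Sketch of a workaround: try the thread context class loader first, then the
> // defining class loader, before giving up.
> import java.util.Locale;
> import java.util.MissingResourceException;
> import java.util.ResourceBundle;
>
> public final class MessagesLookup {
>     public static ResourceBundle load() {
>         ClassLoader tccl = Thread.currentThread().getContextClassLoader();
>         if (tccl != null) {
>             try {
>                 return ResourceBundle.getBundle("jg-messages", Locale.getDefault(), tccl);
>             } catch (MissingResourceException ignored) {
>                 // fall through and retry with the defining class loader
>             }
>         }
>         return ResourceBundle.getBundle("jg-messages", Locale.getDefault(),
>                                         MessagesLookup.class.getClassLoader());
>     }
> }
> {code}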
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (JGRP-2361) Error related to JGroups and database connection is getting reset
by Bela Ban (Jira)
[ https://issues.redhat.com/browse/JGRP-2361?page=com.atlassian.jira.plugin... ]
Bela Ban closed JGRP-2361.
--------------------------
Resolution: Cannot Reproduce
This is not JGroups itself, but Hybris. I suggest contacting Hybris for help (I know they've done some work on upgrading JGroups in their product).
> Error related to JGroups and database connection is getting reset
> ----------------------------------------------------------------
>
> Key: JGRP-2361
> URL: https://issues.redhat.com/browse/JGRP-2361
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 3.6.11
> Environment: Hybris running on tomcat - Centos 7
> Reporter: karthikeyan Aruljothi
> Assignee: Bela Ban
> Priority: Major
> Attachments: Jgroup error in preprod-000.txt, Jgroup node configuration.txt, Jgroups blocking and terminating connection.txt, Jgroups error in console.txt, error Jgroups.txt, jgroups-tcp.xml
>
>
> Hi,
> We are facing an issue with our cluster configuration, and because of it JVM response time also increases; after clearing the cache / restarting all nodes, the application works as expected.
> When the issue arises, one of the cores hits 100% CPU utilization and the server has to be restarted, otherwise it never processes any requests. Below is our configuration in local.properties; the error logs are also provided as attachments. We can see errors in the logs related to JGroups blocking and connections being terminated between nodes.
> Please let us know your valuable input on what exactly is causing the slowness and then blocking the whole server.
> The cluster configuration for each node and the error logs are attached.
> In addition, we are getting the errors below while deploying/restarting the servers:
> WARN [localhost-startStop-1] [GMS] hybrisnode-0: JOIN(hybrisnode-0) sent to hybrisnode-2 timed out (after 3000 ms), on try 3
> WARN [pool-3-thread-1] [GMS] hybrisnode-3: JOIN(hybrisnode-3) sent to hybrisnode-1 timed out (after 3000 ms), on try 4
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (JGRP-2363) DNS Ping cannot lookup SRV record for service port
by Bela Ban (Jira)
[ https://issues.redhat.com/browse/JGRP-2363?page=com.atlassian.jira.plugin... ]
Bela Ban closed JGRP-2363.
--------------------------
Resolution: Out of Date
> DNS Ping cannot lookup SRV record for service port
> --------------------------------------------------
>
> Key: JGRP-2363
> URL: https://issues.redhat.com/browse/JGRP-2363
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.0.20
> Reporter: Howard Gao
> Assignee: Bela Ban
> Priority: Major
> Attachments: App2.java
>
>
> I've got a problem getting the service port via the DNS_PING DNS lookup.
> It seems that in my OpenShift environment the JNDI DNS lookup cannot query the
> correct SRV record from the OpenShift DNS server. Ref:
> https://github.com/jboss-openshift/openshift-ping/blob/1.2.1.Final/dns/sr...
> For example, here is the ping service:
> apiVersion: v1
> kind: Service
> metadata:
>   annotations:
>     description: The JGroups ping port for clustering.
>     service.alpha.kubernetes.io/tolerate-unready-endpoints: 'true'
>   labels:
>     application: application0
>     template: amq-broker-73-persistence-clustered
>     xpaas: 1.4.16
>   name: application0-ping
> spec:
>   clusterIP: None
>   publishNotReadyAddresses: true
>   ports:
>     - port: 8888
>       protocol: TCP
>       name: jgroup-port
>       targetPort: 8888
>   selector:
>     deploymentConfig: application0-amq
> After it was deployed, I deployed an application pod
> with the JGroups DNS_PING protocol loaded. The relevant
> part of the JGroups XML looks like this:
> <config> ... <openshift.DNS_PING timeout="3000" serviceName="application0-ping" /> ... </config>
> After my application pod was in the running state, I checked the log
> and there is a warning message from DNS_PING:
> 2019-07-22 04:16:59,600 INFO [org.openshift.ping.common.Utils] 3 attempt(s) with a 1000ms sleep to execute [GetServicePort] failed. Last failure was [java.lang.NullPointerException: null]
> 2019-07-22 04:16:59,601 WARNING [org.jgroups.protocols.openshift.DNS_PING] No DNS SRV record found for service [application0-ping]
> After some debugging it turns out that the DNS lookup for the record with the name
> "_tcp.application0-ping" returned null.
> However, if I log into the application pod and do an nslookup, it gives me the correct record:
> sh-5.0# nslookup -type=srv _tcp.application0-ping
> Server: 10.74.177.77
> Address: 10.74.177.77#53
> _tcp.application0-ping.default.svc.cluster.local service = 10 100 8888 44c84e52.application0-ping.default.svc.cluster.local.
> And you can get the full name from the record, which is
> _tcp.application0-ping.default.svc.cluster.local
> If I then pass the fully qualified name into the application, it can query the SRV
> record successfully.
> I have no idea why my application can't query the record using the short-form name (i.e. _tcp.application0-ping). Could it be some configuration issue for the DNS ping?
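> For reference, the kind of JNDI SRV query involved looks roughly like the sketch below (it assumes the com.sun.jndi.dns provider shipped with the JDK; the fully qualified name is used because the short form failed here, and one common difference is that nslookup applies the search domains from /etc/resolv.conf while a JNDI lookup of a relative name may not):
> {code:java}
> // Sketch of a JNDI SRV lookup; names are taken from the report above.
> import java.util.Hashtable;
> import javax.naming.Context;
> import javax.naming.directory.Attributes;
> import javax.naming.directory.DirContext;
> import javax.naming.directory.InitialDirContext;
>
> public class SrvLookup {
>     public static void main(String[] args) throws Exception {
>         Hashtable<String, String> env = new Hashtable<>();
>         env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory");
>         env.put(Context.PROVIDER_URL, "dns:");   // use the platform's configured DNS servers
>         DirContext ctx = new InitialDirContext(env);
>         // The fully qualified name works; the short form "_tcp.application0-ping" did not.
>         Attributes attrs =
>             ctx.getAttributes("_tcp.application0-ping.default.svc.cluster.local", new String[] {"SRV"});
>         System.out.println(attrs.get("SRV"));    // e.g. 10 100 8888 44c84e52.application0-ping...
>         ctx.close();
>     }
> }
> {code}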
> My OpenShift environment details are:
> oc v3.11.117
> kubernetes v1.11.0+d4cacc0
> features: Basic-Auth GSSAPI Kerberos SPNEGO
> and the Java version used in the pod:
> sh-5.0# java -version
> openjdk version "1.8.0_212"
> OpenJDK Runtime Environment (build 1.8.0_212-b04)
> OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)
> The base OS is Fedora 30.
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (JGRP-2265) JGroups MERGE3 is not merging big cluster
by Bela Ban (Jira)
[ https://issues.redhat.com/browse/JGRP-2265?page=com.atlassian.jira.plugin... ]
Bela Ban closed JGRP-2265.
--------------------------
Resolution: Out of Date
> JGroups MERGE3 is not merging big cluster
> ----------------------------------------
>
> Key: JGRP-2265
> URL: https://issues.redhat.com/browse/JGRP-2265
> Project: JGroups
> Issue Type: Feature Request
> Reporter: lokesh raheja
> Assignee: Bela Ban
> Priority: Major
>
> Hi Bela,
> JGroups 3.6.13
> We have a cluster of 105 nodes. After a cluster split it usually merges the cluster again every time.
> But sometimes it gets stuck and stays divided into two or three clusters: one big cluster and two small clusters.
> We have to restart the nodes in the small clusters to make them join the big cluster.
> In the case where a node doesn't rejoin, I noticed that a node which is not in the big cluster shows as its coordinator (using MERGE3.dumpViews) a node which is already in the big cluster. The strange part is that if I shut down the coordinator of a small cluster, all the nodes of that small cluster still show that same coordinator, which is shut down.
> Is this some caching issue? Could you please help with this?
>
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (ELY-1851) Elytron ldaps realm fails if a referral is returned inside a search
by Ivo Studensky (Jira)
[ https://issues.redhat.com/browse/ELY-1851?page=com.atlassian.jira.plugin.... ]
Ivo Studensky updated ELY-1851:
-------------------------------
Labels: downstream_dependency (was: )
> Elytron ldaps realm fails if a referral is returned inside a search
> -------------------------------------------------------------------
>
> Key: ELY-1851
> URL: https://issues.redhat.com/browse/ELY-1851
> Project: WildFly Elytron
> Issue Type: Bug
> Affects Versions: 1.6.3.Final, 1.10.8.Final
> Reporter: Chao Wang
> Assignee: Chao Wang
> Priority: Major
> Labels: downstream_dependency
>
> Elytron LdapRealm fails to follow a referral when ldaps is used (the {{ThreadLocalSSLSocketFactory}} is not set).
> With a configuration similar to this one ({{memberOf}} is used to locate groups):
> {code:xml}
> <ldap-realm name="ldap-realm" dir-context="ldap-dir-context" direct-verification="true">
> <identity-mapping rdn-identifier="sAMAccountName" use-recursive-search="true" search-base-dn="DC=redhat,DC=com">
> <attribute-mapping>
> <attribute reference="memberOf" from="cn" to="Roles" role-recursion="3"/>
> </attribute-mapping>
> </identity-mapping>
> </ldap-realm>
> ...
> <dir-context name="ldap-dir-context" url="ldaps://ldap.redhat.com:636" principal="cn=Administrator,cn=Users,DC=redhat,DC=com" referral-mode="FOLLOW" ssl-context="ldaps-context">
> <credential-reference store="credstore" alias="ldap_password"/>
> </dir-context>
> {code}
> If we have a group (or user) which contains a {{memberOf}} from another LDAP domain, something like the following:
> {noformat}
> dn: CN=group-with-external-members,OU=Groups,DC=redhat,DC=com
> ...
> memberOf: CN=group-in-another-domain,OU=Groups,DC=lab,DC=redhat,DC=com
> {noformat}
> The following exception is thrown when a referral is returned for a group that is inside another LDAP server of the forest:
> {noformat}
> TRACE [org.jboss.remoting.remote.server] (management task-1) Server sending authentication rejected: java.lang.RuntimeException: ELY01079: ldap-realm realm failed to obtain attributes for entry [CN=group-with-external-members,OU=Groups,DC=redhat,DC=com]
> at org.wildfly.security.auth.realm.ldap.LdapSecurityRealm$LdapRealmIdentity.extractFilteredAttributesFromSearch(LdapSecurityRealm.java:808)
> at org.wildfly.security.auth.realm.ldap.LdapSecurityRealm$LdapRealmIdentity.lambda$null$4(LdapSecurityRealm.java:768)
> at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
> at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
> at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
> at org.wildfly.security.auth.realm.ldap.LdapSecurityRealm$LdapRealmIdentity.forEachAttributeValue(LdapSecurityRealm.java:841)
> at org.wildfly.security.auth.realm.ldap.LdapSecurityRealm$LdapRealmIdentity.lambda$extractFilteredAttributes$6(LdapSecurityRealm.java:766)
> at java.util.stream.Collectors.lambda$toMap$58(Collectors.java:1321)
> at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
> at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
> at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> at org.wildfly.security.auth.realm.ldap.LdapSecurityRealm$LdapRealmIdentity.extractAttributes(LdapSecurityRealm.java:828)
> at org.wildfly.security.auth.realm.ldap.LdapSecurityRealm$LdapRealmIdentity.extractFilteredAttributes(LdapSecurityRealm.java:754)
> at org.wildfly.security.auth.realm.ldap.LdapSecurityRealm$LdapRealmIdentity.getAttributes(LdapSecurityRealm.java:516)
> at org.wildfly.security.auth.realm.ldap.LdapSecurityRealm$LdapRealmIdentity.getAuthorizationIdentity(LdapSecurityRealm.java:497)
> at org.wildfly.security.auth.server.ServerAuthenticationContext$NameAssignedState.doAuthorization(ServerAuthenticationContext.java:1923)
> at org.wildfly.security.auth.server.ServerAuthenticationContext$NameAssignedState.authorize(ServerAuthenticationContext.java:1952)
> at org.wildfly.security.auth.server.ServerAuthenticationContext.authorize(ServerAuthenticationContext.java:509)
> at org.wildfly.security.auth.server.ServerAuthenticationContext.authorize(ServerAuthenticationContext.java:489)
> at org.wildfly.security.auth.server.ServerAuthenticationContext$1.handleOne(ServerAuthenticationContext.java:872)
> at org.wildfly.security.auth.server.ServerAuthenticationContext$1.handle(ServerAuthenticationContext.java:839)
> at org.wildfly.security.sasl.util.SSLQueryCallbackHandler.handle(SSLQueryCallbackHandler.java:60)
> at org.wildfly.security.sasl.util.TrustManagerSaslServerFactory.lambda$createSaslServer$0(TrustManagerSaslServerFactory.java:96)
> at org.wildfly.security.sasl.plain.PlainSaslServer.evaluateResponse(PlainSaslServer.java:146)
> at org.wildfly.security.sasl.util.AuthenticationCompleteCallbackSaslServerFactory$1.evaluateResponse(AuthenticationCompleteCallbackSaslServerFactory.java:58)
> at org.wildfly.security.sasl.util.AuthenticationTimeoutSaslServerFactory$DelegatingTimeoutSaslServer.evaluateResponse(AuthenticationTimeoutSaslServerFactory.java:106)
> at org.wildfly.security.sasl.util.SecurityIdentitySaslServerFactory$1.evaluateResponse(SecurityIdentitySaslServerFactory.java:59)
> at org.xnio.sasl.SaslUtils.evaluateResponse(SaslUtils.java:245)
> at org.xnio.sasl.SaslUtils.evaluateResponse(SaslUtils.java:217)
> at org.jboss.remoting3.remote.ServerConnectionOpenListener$AuthStepRunnable.run(ServerConnectionOpenListener.java:486)
> at org.jboss.remoting3.EndpointImpl$TrackingExecutor.lambda$execute$0(EndpointImpl.java:942)
> at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
> at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1985)
> at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1487)
> at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1378)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.wildfly.security.auth.server.RealmUnavailableException: ELY01108: ldap-realm realm identity search failed
> at org.wildfly.security.auth.realm.ldap.LdapSecurityRealm$LdapSearch.search(LdapSecurityRealm.java:1141)
> at org.wildfly.security.auth.realm.ldap.LdapSecurityRealm$LdapRealmIdentity.extractFilteredAttributesFromSearch(LdapSecurityRealm.java:797)
> ... 46 more
> Caused by: javax.naming.CommunicationException: ldap.lab.redhat.com:636 [Root exception is java.lang.IllegalStateException: ELY04025: DirContext tries to connect without ThreadLocalSSLSocketFactory thread local setting]
> at com.sun.jndi.ldap.LdapReferralContext.<init>(LdapReferralContext.java:96)
> at com.sun.jndi.ldap.LdapReferralException.getReferralContext(LdapReferralException.java:151)
> at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1861)
> at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1769)
> at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1786)
> at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:418)
> at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:396)
> at javax.naming.directory.InitialDirContext.search(InitialDirContext.java:297)
> at javax.naming.directory.InitialDirContext.search(InitialDirContext.java:297)
> at org.wildfly.security.auth.realm.ldap.DelegatingLdapContext.search(DelegatingLdapContext.java:335)
> at org.wildfly.security.auth.realm.ldap.LdapSecurityRealm$LdapSearch.searchWithPagination(LdapSecurityRealm.java:1161)
> at org.wildfly.security.auth.realm.ldap.LdapSecurityRealm$LdapSearch.search(LdapSecurityRealm.java:1038)
> ... 47 more
> Caused by: java.lang.IllegalStateException: ELY04025: DirContext tries to connect without ThreadLocalSSLSocketFactory thread local setting
> at org.wildfly.security.auth.realm.ldap.ThreadLocalSSLSocketFactory.getDefault(ThreadLocalSSLSocketFactory.java:46)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at com.sun.jndi.ldap.Connection.createSocket(Connection.java:296)
> at com.sun.jndi.ldap.Connection.<init>(Connection.java:215)
> at com.sun.jndi.ldap.LdapClient.<init>(LdapClient.java:137)
> at com.sun.jndi.ldap.LdapClient.getInstance(LdapClient.java:1609)
> at com.sun.jndi.ldap.LdapCtx.connect(LdapCtx.java:2749)
> at com.sun.jndi.ldap.LdapCtx.<init>(LdapCtx.java:319)
> at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(LdapCtxFactory.java:192)
> at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(LdapCtxFactory.java:151)
> at com.sun.jndi.url.ldap.ldapURLContextFactory.getObjectInstance(ldapURLContextFactory.java:52)
> at org.jboss.as.naming.context.ObjectFactoryBuilder$ReferenceUrlContextFactoryWrapper.getObjectInstance(ObjectFactoryBuilder.java:293)
> at javax.naming.spi.NamingManager.getObjectInstance(NamingManager.java:300)
> at com.sun.jndi.ldap.LdapReferralContext.<init>(LdapReferralContext.java:119)
> ... 58 more
> {noformat}
> The reason seems to be that the {{ThreadLocalSSLSocketFactory}} is not set when doing a search, so if a referral is returned, the new search created inside the current one has no access to the {{SSLSocketFactory}} in the thread local.
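> The pattern involved is roughly the following (an illustrative sketch only; the class and method names are generic placeholders, not Elytron's actual API):
> {code:java}
> // Illustration of the "thread-local SSL socket factory" pattern that ELY04025
> // refers to. Generic names; not the Elytron implementation.
> import javax.net.ssl.SSLSocketFactory;
>
> final class SocketFactoryScope implements AutoCloseable {
>     private static final ThreadLocal<SSLSocketFactory> CURRENT = new ThreadLocal<>();
>
>     SocketFactoryScope(SSLSocketFactory factory) {
>         CURRENT.set(factory);                // make the factory visible to code run on this thread
>     }
>
>     static SSLSocketFactory current() {      // what the JNDI socket-factory hook would look up
>         return CURRENT.get();
>     }
>
>     @Override
>     public void close() {
>         CURRENT.remove();
>     }
> }
>
> // Every search, including any referral it follows, has to run inside the scope:
> //
> //   try (SocketFactoryScope scope = new SocketFactoryScope(sslContext.getSocketFactory())) {
> //       dirContext.search(...);             // referral contexts created here still see current()
> //   }
> //
> // The exception above occurs when the referral search runs outside such a scope.
> {code}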
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (JGRP-2155) Weight loss program
by Bela Ban (Jira)
[ https://issues.redhat.com/browse/JGRP-2155?page=com.atlassian.jira.plugin... ]
Bela Ban closed JGRP-2155.
--------------------------
Resolution: Out of Date
> Weight loss program
> -------------------
>
> Key: JGRP-2155
> URL: https://issues.redhat.com/browse/JGRP-2155
> Project: JGroups
> Issue Type: Enhancement
> Reporter: Yves Cuillerdier
> Assignee: Bela Ban
> Priority: Major
>
> The JGroups jar is quite large and it is sometimes desirable to shrink it.
> Currently the jar contains unnecessary classes (demo, test and perf) and it is not possible to keep only the needed protocols using tools like ProGuard (it's like having a server with all ports open).
> My suggestions are:
> h2. Move JGroups' side classes
> Demo: move the {{demo}} package to a Maven module.
> Test and perf: move the test package to the Maven test sources and generate test-source jars.
> Note that {{MPerf$MPerfHeader}} must then be removed from the {{jg-magic-map.xml}} file.
> h2. Make JGroups ProGuard compatible
> A classical way to shrink and optimize a project is to use the *{{ProGuard}}* tool.
> This tool cannot be used for JGroups, mainly because of the two configuration files {{jg-magic-map.xml}} and {{jg-protocol-ids.xml}}.
> These two files list the classes for all possible protocols, even those that are not required by the configuration selected for the project (for example, using {{fast.xml}} does not require {{ENCRYPT}}, {{TUNNEL}} and many more).
> The problem is that ProGuard cannot understand these two files and removes all the classes because there are no entry points.
> One way to solve this may be to create annotations (for example {{@MagicMap}} and {{@ProtocolIds}}) and remove the two files. These annotations could be scanned by the initialization process in the classes kept by ProGuard. This should not affect the loading time, as all relevant classes are in the same package, {{org.jgroups.protocols}}.
> This is not enough, because some fields are initialized by reflection using hard-coded names (for example "{{bind_addr}}"). For such fields we need an annotation like {{@KeepField}} to tell ProGuard not to optimize, remove or rename them. For example:
> -keepclassmembers class * { @jgroups.annotations.KeepField <fields>; }
> This annotation may also be used for classes whose instances are created by reflection (like {{GMS.GmsHeader}}), as sketched below.
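> A minimal sketch of what such a marker annotation could look like (hypothetical; JGroups does not currently ship these annotations, and the retention and targets are only a suggestion):
> {code:java}
> // Hypothetical marker annotation for the scheme described above; not part of JGroups.
> import java.lang.annotation.ElementType;
> import java.lang.annotation.Retention;
> import java.lang.annotation.RetentionPolicy;
> import java.lang.annotation.Target;
>
> @Retention(RetentionPolicy.CLASS)              // must survive into the class file for ProGuard,
>                                                // but is not needed at runtime
> @Target({ElementType.FIELD, ElementType.TYPE}) // fields set by reflection, classes instantiated by reflection
> public @interface KeepField {
> }
> {code}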
> Finally, we need to specify the entry points for the project configuration, else ProGuard will still remove everything. The XML configuration files (like {{fast.xml}}) should be kept to provide the protocol configuration.
> This is a matter of the ProGuard configuration file. For example, in a project using {{fast.xml}} we should have:
> -keep public class org.jgroups.protocols.UDP.** { *; }
> -keep public class org.jgroups.protocols.PING.** { *; }
> -keep public class org.jgroups.protocols.MERGE3.** { *; }
> -keep public class org.jgroups.protocols.FD_SOCK.** { *; }
> -keep public class org.jgroups.protocols.FD_ALL.** { *; }
> -keep public class org.jgroups.protocols.VERIFY_SUSPECT.** { *; }
> -keep public class org.jgroups.protocols.BARRIER.** { *; }
> -keep public class org.jgroups.protocols.pbcast.NAKACK2.** { *; }
> ... etc
> My 2 cents.
> Yves
--
This message was sent by Atlassian Jira
(v7.13.8#713008)