[JBoss JIRA] (SECURITY-864) NameNotFoundException due to policyRegistration -- service jboss.naming.context.java.policyRegistration
by Eric B (JIRA)
[ https://issues.jboss.org/browse/SECURITY-864?page=com.atlassian.jira.plug... ]
Eric B edited comment on SECURITY-864 at 7/24/17 5:00 PM:
----------------------------------------------------------
I am encountering the same issue after a JBoss 4 -> WildFly 10 migration. Although I have not yet run performance tests, my application is also EJB intensive, and I am definitely concerned by [~mletenay]'s performance results.
[~mletenay] Are you able to provide the sample code you use to work around this issue?
[~pmm] I looked at your code on GitHub. Won't the exceptions thrown by the EmptyPolicy still cause the same slowdowns? Or is the exception thrown by WildFly limited to the fact that no policyRegistration is found (even though it is never actually used)?
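To make the concern concrete, here is a rough, hypothetical sketch of the kind of lookup caching I have in mind: remember the outcome of the {{policyRegistration}} lookup so the {{NameNotFoundException}} (and its stack-trace fill-in) is paid at most once instead of on every EJB invocation. The class name and the {{java:policyRegistration}} JNDI string are my own guesses derived from the service name in the summary; this is not [~pmm]'s code.
{code}
import javax.naming.InitialContext;
import javax.naming.NameNotFoundException;
import javax.naming.NamingException;

// Hypothetical helper: remember that the policyRegistration binding is absent instead of
// re-running a failing JNDI lookup (and building a new exception) on every call.
public final class PolicyRegistrationLookup {

    private static volatile Object cached;   // bound object, if the lookup ever succeeds
    private static volatile boolean absent;  // remembered NameNotFoundException

    public static Object policyRegistration() throws NamingException {
        if (cached != null) return cached;
        if (absent) return null;             // skip the known-failing lookup
        try {
            cached = new InitialContext().lookup("java:policyRegistration");
            return cached;
        } catch (NameNotFoundException e) {
            absent = true;                   // pay the exception cost only once
            return null;
        }
    }
}
{code}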
was (Author: benze):
I am encountering the same issue after a JBoss 4 -> WildFly 10 migration. Although I have not yet run performance tests, my application is also EJB intensive, and I am definitely concerned by [~mletenay]'s performance results.
[~mletenay] Are you able to provide the sample code you use to work around this issue?
> NameNotFoundException due to policyRegistration -- service jboss.naming.context.java.policyRegistration
> -------------------------------------------------------------------------------------------------------
>
> Key: SECURITY-864
> URL: https://issues.jboss.org/browse/SECURITY-864
> Project: PicketBox
> Issue Type: Bug
> Components: PicketBox
> Reporter: Chao Wang
> Assignee: Stefan Guilhen
>
> "NameNotFoundException due to policyRegistration -- service jboss.naming.context.java.policyRegistration" is recorded in server.log during quickstart example run by changing log level:
> {noformat}
> <logger category="org.jboss.as.security">
> <level name="TRACE"/>
> </logger>
> <logger category="org.jboss.security">
> <level name="TRACE"/>
> </logger>
> {noformat}
> See detailed description in community discussion [#907134|https://developer.jboss.org/message/907134]
> I chose the JIRA component PicketBox since the exception is titled "PBOX000293: Exception caught: javax.naming.NameNotFoundException"
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (SECURITY-864) NameNotFoundException due to policyRegistration -- service jboss.naming.context.java.policyRegistration
by Eric B (JIRA)
[ https://issues.jboss.org/browse/SECURITY-864?page=com.atlassian.jira.plug... ]
Eric B commented on SECURITY-864:
---------------------------------
I am encountering the same issue after a JBoss 4 -> WildFly 10 migration. Although I have not yet run performance tests, my application is also EJB intensive, and I am definitely concerned by [~mletenay]'s performance results.
[~mletenay] Are you able to provide the sample code you use to work around this issue?
> NameNotFoundException due to policyRegistration -- service jboss.naming.context.java.policyRegistration
> -------------------------------------------------------------------------------------------------------
>
> Key: SECURITY-864
> URL: https://issues.jboss.org/browse/SECURITY-864
> Project: PicketBox
> Issue Type: Bug
> Components: PicketBox
> Reporter: Chao Wang
> Assignee: Stefan Guilhen
>
> "NameNotFoundException due to policyRegistration -- service jboss.naming.context.java.policyRegistration" is recorded in server.log during quickstart example run by changing log level:
> {noformat}
> <logger category="org.jboss.as.security">
> <level name="TRACE"/>
> </logger>
> <logger category="org.jboss.security">
> <level name="TRACE"/>
> </logger>
> {noformat}
> See detailed description in community discussion [#907134|https://developer.jboss.org/message/907134]
> I chose the JIRA component PicketBox since the exception is titled "PBOX000293: Exception caught: javax.naming.NameNotFoundException"
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (JGRP-2206) Property strings are correct but JGROUPS is not recognizing other nodes
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2206?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on JGRP-2206:
--------------------------------
This is your problem: {{Setting jgroups.bind_addr = localhost}}. If you grep for "physical address", you get:
{noformat}
[belasmac] /Users/bela/Downloads$ grep physical node*
node1.noapp.log.D20170724.T215702:GMS: address=P02HBNFW9657-64048, cluster=Sterling_NodeInfo_group, physical address=127.0.0.1:5061
node1.noapp.log.D20170724.T215702:GMS: address=P02HBNFW9657-2450, cluster=Sterling_NodeInfo_group_WFC, physical address=127.0.0.1:5060
node2.noapp.log.D20170724.T215702:GMS: address=P02HBNFW9657-64048, cluster=Sterling_NodeInfo_group, physical address=127.0.0.1:5061
node2.noapp.log.D20170724.T215702:GMS: address=P02HBNFW9657-2450, cluster=Sterling_NodeInfo_group_WFC, physical address=127.0.0.1:5060
node3.noapp.log.D20170724.T220020:GMS: address=P02HBNFW6872-9702, cluster=Sterling_NodeInfo_group, physical address=127.0.0.1:5061
node3.noapp.log.D20170724.T220020:GMS: address=P02HBNFW6872-37964, cluster=Sterling_NodeInfo_group_WFC, physical address=127.0.0.1:5060
node4.noapp.log.D20170724.T215955:GMS: address=P02HBNFW9137-63139, cluster=Sterling_NodeInfo_group, physical address=127.0.0.1:5061
node4.noapp.log.D20170724.T215955:GMS: address=P02HBNFW9137-4273, cluster=Sterling_NodeInfo_group_WFC, physical address=127.0.0.1:5060
{noformat}
As you can see, the members bind to {{localhost}}, which is {{127.0.0.1}}, so they can't communicate across hosts.
You need to set {{jgroups.bind_addr}} to a routable IP address on each member.
Note that your config is also missing {{UNICAST2}}; without it, point-to-point communication is lossy.
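For illustration, here is a minimal sketch of what a corrected node start could look like. The address {{10.38.46.27}} is taken from the Node1 property string (each member needs its *own* routable address), and the property string is a trimmed-down version of yours with {{UNICAST2}} added between {{NAKACK}} and {{STABLE}}; this is not your actual startup code, just the shape of the change:
{code}
import org.jgroups.JChannel;

public class RoutableBindExample {
    public static void main(String[] args) throws Exception {
        // If the stack resolves bind_addr from the jgroups.bind_addr system property,
        // this is where "localhost" must be replaced by the node's routable IP.
        System.setProperty("jgroups.bind_addr", "10.38.46.27"); // node-specific

        // Trimmed plain property string based on the one in the report,
        // with UNICAST2 added for reliable point-to-point delivery.
        String props =
            "TCP(bind_addr=10.38.46.27;bind_port=5061):" +
            "TCPPING(initial_hosts=10.38.46.27[5061],10.38.46.28[5061]," +
                "10.38.175.30[5061],10.38.175.32[5061];port_range=0;num_initial_members=4):" +
            "MERGE2(min_interval=3000;max_interval=5000):" +
            "FD_ALL(interval=5000;timeout=20000):" +
            "VERIFY_SUSPECT(timeout=1500):" +
            "pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800):" +
            "UNICAST2:" +
            "pbcast.STABLE(desired_avg_gossip=20000):" +
            "pbcast.GMS(print_local_addr=true;join_timeout=5000)";

        JChannel ch = new JChannel(props);
        ch.connect("Sterling_NodeInfo_group");
        System.out.println("view=" + ch.getView());
        ch.close();
    }
}
{code}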
> Property strings are correct but JGROUPS is not recognizing other nodes
> -----------------------------------------------------------------------
>
> Key: JGRP-2206
> URL: https://issues.jboss.org/browse/JGRP-2206
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 3.4
> Environment: With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP, Data Mining and Real Application Testing options
> OS: Windows Server 2008 R2 6.1,amd64
> Java version: 1.7.0,pwa6470sr9fp10-20150708_01 (SR9 FP10),IBM Corporation
> Reporter: Swathi Kumar
> Assignee: Bela Ban
> Priority: Blocker
> Attachments: VisibilityIssue.zip
>
>
> Our customer has a four-node cluster which we believe is correctly defined, yet the nodes are not communicating with each other.
> All nodes are on VMware. None of the hostnames are virtual (they are all directly attached to an IP and are not managed by load balancers, etc.).
>
> The nodes are located in separate data centers (two in each) and JGroups is operating over TCP rather than UDP multicast.
> NOTE: The issue occurs only in the customer's environment (we are not able to reproduce it in our lab).
> We are attaching our logs (noapp.log.<timestamp>) with JGroups debugging enabled.
> *Node1 Property strings*:
> [2017-07-24 21:58:30.867] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.property_string. Receivied this property: TCP(bind_addr=10.38.46.27;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.46.27[5061],10.38.46.28[5061],10.38.175.30[5061],10.38.175.32[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
> [2017-07-24 21:58:30.867] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.property_string. Using this property: TCP(bind_addr=10.38.46.27;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.46.27[5061],10.38.46.28[5061],10.38.175.30[5061],10.38.175.32[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
> [2017-07-24 21:58:30.867] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.distributed_property_string. Receivied this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.46.27[5060],10.38.46.28[5060],10.38.175.30[5060],10.38.175.32[5060];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
> [2017-07-24 21:58:30.867] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.distributed_property_string. Using this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.46.27[5060],10.38.46.28[5060],10.38.175.30[5060],10.38.175.32[5060];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
> *Node2 Property strings*:
> [2017-07-24 22:01:01.666] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.property_string. Receivied this property: TCP(bind_addr=10.38.46.28;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.46.28[5061],10.38.46.27[5061],10.38.175.30[5061],10.38.175.32[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
> [2017-07-24 22:01:01.666] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.property_string. Using this property: TCP(bind_addr=10.38.46.28;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.46.28[5061],10.38.46.27[5061],10.38.175.30[5061],10.38.175.32[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
> [2017-07-24 22:01:01.666] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.distributed_property_string. Receivied this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.46.28[5060],10.38.46.27[5060],10.38.175.30[5060],10.38.175.32[5060];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
> [2017-07-24 22:01:01.666] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.distributed_property_string. Using this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.46.28[5060],10.38.46.27[5060],10.38.175.30[5060],10.38.175.32[5060];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
> *Node3 Property strings*:
> [2017-07-24 22:02:01.411] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.property_string. Receivied this property: TCP(bind_addr=10.38.175.30;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.175.30[5061],10.38.46.27[5061],10.38.46.28[5061],10.38.175.32[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
> [2017-07-24 22:02:01.411] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.property_string. Using this property: TCP(bind_addr=10.38.175.30;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.175.30[5061],10.38.46.27[5061],10.38.46.28[5061],10.38.175.32[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
> [2017-07-24 22:02:01.411] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.distributed_property_string. Receivied this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.175.30[5060],10.38.46.27[5060],10.38.46.28[5060],10.38.175.32[5060];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
> [2017-07-24 22:02:01.411] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.distributed_property_string. Using this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.175.30[5060],10.38.46.27[5060],10.38.46.28[5060],10.38.175.32[5060];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
> *Node4 Property strings*:
> [2017-07-24 22:01:14.365] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.property_string. Receivied this property: TCP(bind_addr=10.38.175.32;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.175.32[5061],10.38.46.27[5061],10.38.46.28[5061],10.38.175.30[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
> [2017-07-24 22:01:14.365] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.property_string. Using this property: TCP(bind_addr=10.38.175.32;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.175.32[5061],10.38.46.27[5061],10.38.46.28[5061],10.38.175.30[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
> [2017-07-24 22:01:14.365] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.distributed_property_string. Receivied this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.175.32[5060],10.38.46.27[5060],10.38.46.28[5060],10.38.175.30[5060];port_range=1;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
> [2017-07-24 22:01:14.365] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.distributed_property_string. Using this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.175.32[5060],10.38.46.27[5060],10.38.46.28[5060],10.38.175.30[5060];port_range=1;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
>
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (JGRP-2205) DISCARD ignores the DONT_LOOPBACK transient flag
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/JGRP-2205?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on JGRP-2205:
------------------------------------
No, I would only change {{down(Message)}} to clarify what {{DISCARD}} is supposed to do when {{discard_all == true && excludeItself == true}}. I would instead change {{loopback(Message)}} to something like this:
{code}
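// Copy the message (including its buffer), make sure it carries a source address, then
// address it to the local member and pass it down, so that TP makes the loopback decision
// (and honors transient flags such as DONT_LOOPBACK) instead of DISCARD looping it back itself.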
final Message rsp=msg.copy(true);
if(rsp.getSrc() == null)
rsp.setSrc(localAddress());
rsp.dest(localAddress());
down(rsp);
{code}
> DISCARD ignores the DONT_LOOPBACK transient flag
> ------------------------------------------------
>
> Key: JGRP-2205
> URL: https://issues.jboss.org/browse/JGRP-2205
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.0.4
> Reporter: Dan Berindei
> Assignee: Bela Ban
> Fix For: 4.0.5
>
>
> When {{discard_all = true}}, {{DISCARD}} does its own loopback and doesn't check for {{DONT_LOOPBACK}} the way {{TP}} does. It always sends the message back up, even if {{excludeItself = false}}.
> If possible, {{DISCARD}} should just set the message destination to the local address and pass the message down. That way, {{TP}} would make the loopback decision, and using the {{TP}} thread pool would also make the thread name nicer in the logs. (Currently the thread name is {{Thread-n}}, which means searching for the test name in our test suite's log misses some messages.)
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (DROOLS-1386) NPE in org.drools.core.common.TupleSetsImpl.setNextTuple
by Mario Fusco (JIRA)
[ https://issues.jboss.org/browse/DROOLS-1386?page=com.atlassian.jira.plugi... ]
Mario Fusco updated DROOLS-1386:
--------------------------------
Fix Version/s: 7.2.0.Final
> NPE in org.drools.core.common.TupleSetsImpl.setNextTuple
> --------------------------------------------------------
>
> Key: DROOLS-1386
> URL: https://issues.jboss.org/browse/DROOLS-1386
> Project: Drools
> Issue Type: Bug
> Affects Versions: 6.5.0.Final, 7.0.0.Beta4, 7.0.0.Final
> Reporter: Arkady Syamtomov
> Assignee: Mario Fusco
> Priority: Critical
> Fix For: 7.2.0.Final
>
>
> In our integration tests, which ran perfectly with Drools 6.3.0.Final, we now have failures with the following exception during rule evaluation:
> java.lang.NullPointerException: null
> at org.drools.core.common.TupleSetsImpl.setNextTuple(TupleSetsImpl.java:349) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.common.TupleSetsImpl.removeUpdate(TupleSetsImpl.java:205) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.common.TupleSetsImpl.addDelete(TupleSetsImpl.java:110) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.reteoo.QueryElementNode$UnificationNodeViewChangedEventListener.rowRemoved(QueryElementNode.java:444) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.phreak.PhreakQueryTerminalNode.doLeftDeletes(PhreakQueryTerminalNode.java:154) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.phreak.PhreakQueryTerminalNode.doNode(PhreakQueryTerminalNode.java:46) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.phreak.RuleNetworkEvaluator.innerEval(RuleNetworkEvaluator.java:282) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.phreak.RuleNetworkEvaluator.evalStackEntry(RuleNetworkEvaluator.java:198) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.phreak.RuleNetworkEvaluator.outerEval(RuleNetworkEvaluator.java:141) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.phreak.RuleNetworkEvaluator.evaluateNetwork(RuleNetworkEvaluator.java:94) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.phreak.RuleExecutor.reEvaluateNetwork(RuleExecutor.java:194) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.phreak.RuleExecutor.evaluateNetworkAndFire(RuleExecutor.java:73) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.common.DefaultAgenda.fireNextItem(DefaultAgenda.java:970) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.common.DefaultAgenda.fireLoop(DefaultAgenda.java:1312) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.common.DefaultAgenda.fireAllRules(DefaultAgenda.java:1251) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.impl.StatefulKnowledgeSessionImpl.internalFireAllRules(StatefulKnowledgeSessionImpl.java:1364) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.impl.StatefulKnowledgeSessionImpl.fireAllRules(StatefulKnowledgeSessionImpl.java:1355) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.impl.StatefulKnowledgeSessionImpl.fireAllRules(StatefulKnowledgeSessionImpl.java:1346) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.command.runtime.rule.FireAllRulesCommand.execute(FireAllRulesCommand.java:109) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.command.runtime.rule.FireAllRulesCommand.execute(FireAllRulesCommand.java:36) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.command.runtime.BatchExecutionCommandImpl.execute(BatchExecutionCommandImpl.java:137) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.command.runtime.BatchExecutionCommandImpl.execute(BatchExecutionCommandImpl.java:51) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.impl.StatelessKnowledgeSessionImpl.execute(StatelessKnowledgeSessionImpl.java:254) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (DROOLS-1386) NPE in org.drools.core.common.TupleSetsImpl.setNextTuple
by Mario Fusco (JIRA)
[ https://issues.jboss.org/browse/DROOLS-1386?page=com.atlassian.jira.plugi... ]
Mario Fusco updated DROOLS-1386:
--------------------------------
Sprint: 2017 Week 30-31
> NPE in org.drools.core.common.TupleSetsImpl.setNextTuple
> --------------------------------------------------------
>
> Key: DROOLS-1386
> URL: https://issues.jboss.org/browse/DROOLS-1386
> Project: Drools
> Issue Type: Bug
> Affects Versions: 6.5.0.Final, 7.0.0.Beta4, 7.0.0.Final
> Reporter: Arkady Syamtomov
> Assignee: Mario Fusco
> Priority: Critical
>
> In our integration tests, which ran perfectly with Drools 6.3.0.Final, we now have failures with the following exception during rule evaluation:
> java.lang.NullPointerException: null
> at org.drools.core.common.TupleSetsImpl.setNextTuple(TupleSetsImpl.java:349) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.common.TupleSetsImpl.removeUpdate(TupleSetsImpl.java:205) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.common.TupleSetsImpl.addDelete(TupleSetsImpl.java:110) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.reteoo.QueryElementNode$UnificationNodeViewChangedEventListener.rowRemoved(QueryElementNode.java:444) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.phreak.PhreakQueryTerminalNode.doLeftDeletes(PhreakQueryTerminalNode.java:154) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.phreak.PhreakQueryTerminalNode.doNode(PhreakQueryTerminalNode.java:46) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.phreak.RuleNetworkEvaluator.innerEval(RuleNetworkEvaluator.java:282) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.phreak.RuleNetworkEvaluator.evalStackEntry(RuleNetworkEvaluator.java:198) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.phreak.RuleNetworkEvaluator.outerEval(RuleNetworkEvaluator.java:141) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.phreak.RuleNetworkEvaluator.evaluateNetwork(RuleNetworkEvaluator.java:94) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.phreak.RuleExecutor.reEvaluateNetwork(RuleExecutor.java:194) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.phreak.RuleExecutor.evaluateNetworkAndFire(RuleExecutor.java:73) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.common.DefaultAgenda.fireNextItem(DefaultAgenda.java:970) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.common.DefaultAgenda.fireLoop(DefaultAgenda.java:1312) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.common.DefaultAgenda.fireAllRules(DefaultAgenda.java:1251) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.impl.StatefulKnowledgeSessionImpl.internalFireAllRules(StatefulKnowledgeSessionImpl.java:1364) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.impl.StatefulKnowledgeSessionImpl.fireAllRules(StatefulKnowledgeSessionImpl.java:1355) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.impl.StatefulKnowledgeSessionImpl.fireAllRules(StatefulKnowledgeSessionImpl.java:1346) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.command.runtime.rule.FireAllRulesCommand.execute(FireAllRulesCommand.java:109) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.command.runtime.rule.FireAllRulesCommand.execute(FireAllRulesCommand.java:36) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.command.runtime.BatchExecutionCommandImpl.execute(BatchExecutionCommandImpl.java:137) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.command.runtime.BatchExecutionCommandImpl.execute(BatchExecutionCommandImpl.java:51) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
> at org.drools.core.impl.StatelessKnowledgeSessionImpl.execute(StatelessKnowledgeSessionImpl.java:254) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (JGRP-2206) Property strings are correct but JGROUPS is not recognizing other nodes
by Swathi Kumar (JIRA)
Swathi Kumar created JGRP-2206:
----------------------------------
Summary: Property strings are correct but JGROUPS is not recognizing other nodes
Key: JGRP-2206
URL: https://issues.jboss.org/browse/JGRP-2206
Project: JGroups
Issue Type: Bug
Affects Versions: 3.4
Environment: With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP, Data Mining and Real Application Testing options
OS: Windows Server 2008 R2 6.1,amd64
Java version: 1.7.0,pwa6470sr9fp10-20150708_01 (SR9 FP10),IBM Corporation
Reporter: Swathi Kumar
Assignee: Bela Ban
Priority: Blocker
Attachments: VisibilityIssue.zip
Our customer has a four-node cluster which we believe is correctly defined, yet the nodes are not communicating with each other.
All nodes are on VMware. None of the hostnames are virtual (they are all directly attached to an IP and are not managed by load balancers, etc.).
The nodes are located in separate data centers (two in each) and JGroups is operating over TCP rather than UDP multicast.
NOTE: The issue occurs only in the customer's environment (we are not able to reproduce it in our lab).
We are attaching our logs (noapp.log.<timestamp>) with JGroups debugging enabled.
*Node1 Property strings*:
[2017-07-24 21:58:30.867] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.property_string. Receivied this property: TCP(bind_addr=10.38.46.27;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.46.27[5061],10.38.46.28[5061],10.38.175.30[5061],10.38.175.32[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
[2017-07-24 21:58:30.867] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.property_string. Using this property: TCP(bind_addr=10.38.46.27;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.46.27[5061],10.38.46.28[5061],10.38.175.30[5061],10.38.175.32[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
[2017-07-24 21:58:30.867] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.distributed_property_string. Receivied this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.46.27[5060],10.38.46.28[5060],10.38.175.30[5060],10.38.175.32[5060];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
[2017-07-24 21:58:30.867] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.distributed_property_string. Using this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.46.27[5060],10.38.46.28[5060],10.38.175.30[5060],10.38.175.32[5060];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
*Node2 Property strings*:
[2017-07-24 22:01:01.666] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.property_string. Receivied this property: TCP(bind_addr=10.38.46.28;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.46.28[5061],10.38.46.27[5061],10.38.175.30[5061],10.38.175.32[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
[2017-07-24 22:01:01.666] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.property_string. Using this property: TCP(bind_addr=10.38.46.28;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.46.28[5061],10.38.46.27[5061],10.38.175.30[5061],10.38.175.32[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
[2017-07-24 22:01:01.666] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.distributed_property_string. Receivied this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.46.28[5060],10.38.46.27[5060],10.38.175.30[5060],10.38.175.32[5060];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
[2017-07-24 22:01:01.666] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.distributed_property_string. Using this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.46.28[5060],10.38.46.27[5060],10.38.175.30[5060],10.38.175.32[5060];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
*Node3 Property strings*:
[2017-07-24 22:02:01.411] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.property_string. Receivied this property: TCP(bind_addr=10.38.175.30;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.175.30[5061],10.38.46.27[5061],10.38.46.28[5061],10.38.175.32[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
[2017-07-24 22:02:01.411] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.property_string. Using this property: TCP(bind_addr=10.38.175.30;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.175.30[5061],10.38.46.27[5061],10.38.46.28[5061],10.38.175.32[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
[2017-07-24 22:02:01.411] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.distributed_property_string. Receivied this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.175.30[5060],10.38.46.27[5060],10.38.46.28[5060],10.38.175.32[5060];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
[2017-07-24 22:02:01.411] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.distributed_property_string. Using this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.175.30[5060],10.38.46.27[5060],10.38.46.28[5060],10.38.175.32[5060];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
*Node4 Property strings*:
[2017-07-24 22:01:14.365] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.property_string. Receivied this property: TCP(bind_addr=10.38.175.32;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.175.32[5061],10.38.46.27[5061],10.38.46.28[5061],10.38.175.30[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
[2017-07-24 22:01:14.365] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.property_string. Using this property: TCP(bind_addr=10.38.175.32;bind_port=5061;level=ERROR):TCPPING(initial_hosts=10.38.175.32[5061],10.38.46.27[5061],10.38.46.28[5061],10.38.175.30[5061];port_range=0;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_ALL(interval=5000;timeout=20000):FD(timeout=5000;max_tries=110):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=100,200,300,600,1200,2400,4800;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(print_local_addr=true;join_timeout=5000)
[2017-07-24 22:01:14.365] ALL 000000000000 GLOBAL_SCOPE Initializing jgroups_cluster.distributed_property_string. Receivied this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.175.32[5060],10.38.46.27[5060],10.38.46.28[5060],10.38.175.30[5060];port_range=1;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48;):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
[2017-07-24 22:01:14.365] ALL 000000000000 GLOBAL_SCOPE Done initializing jgroups_cluster.distributed_property_string. Using this property: TCP(bind_port=5060;thread_pool_rejection_policy=run;level=ERROR):TCPPING(initial_hosts=10.38.175.32[5060],10.38.46.27[5060],10.38.46.28[5060],10.38.175.30[5060];port_range=1;timeout=5000;num_initial_members=4):MERGE2(min_interval=3000;max_interval=5000):FD_SOCK:FD(timeout=5000;max_tries=48):VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(retransmit_timeout=3000;discard_delivered_msgs=true):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=20000;max_bytes=0):pbcast.GMS(join_timeout=5000;print_local_addr=true)
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (JGRP-2205) DISCARD ignores the DONT_LOOPBACK transient flag
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2205?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on JGRP-2205:
--------------------------------
Are you suggesting to _not_ drop the message in {{down(Message)}} if {{discard_all == true}}, but to drop it later in {{up(Message)}} / {{up(MessageBatch)}}?
> DISCARD ignores the DONT_LOOPBACK transient flag
> ------------------------------------------------
>
> Key: JGRP-2205
> URL: https://issues.jboss.org/browse/JGRP-2205
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.0.4
> Reporter: Dan Berindei
> Assignee: Bela Ban
> Fix For: 4.0.5
>
>
> When {{discard_all = true}}, {{DISCARD}} does its own loopback and doesn't check for {{DONT_LOOPBACK}} the way {{TP}} does. It always sends the message back up, even if {{excludeItself = false}}.
> If possible, {{DISCARD}} should just set the message destination to the local address and pass the message down. That way, {{TP}} would make the loopback decision, and using the {{TP}} thread pool would also make the thread name nicer in the logs. (Currently the thread name is {{Thread-n}}, which means searching for the test name in our test suite's log misses some messages.)
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)