[jboss-jira] [JBoss JIRA] (JGRP-1741) JGroups went into an infinite loop

Sebastiano Vigna (JIRA) jira-events at lists.jboss.org
Wed Nov 13 09:14:06 EST 2013


    [ https://issues.jboss.org/browse/JGRP-1741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12923179#comment-12923179 ] 

Sebastiano Vigna commented on JGRP-1741:
----------------------------------------

No, that's not possible. A few hours later s1 was happily saying

logs/bubing.2013-11-13_11.log:2013-11-13 11:02:27,347 466892233 INFO [Incoming-59,eu,s1{1}@service:jmx:rmi:///jndi/rmi://192.168.0.21:9999/jmxrmi] i.u.d.j.j.JGroupsJobManager - Currently knowing 4 job managers (4 alive)

So communication was OK. s1 knew s0 had come to life.
                
> JGroups went into an infinite loop
> ----------------------------------
>
>                 Key: JGRP-1741
>                 URL: https://issues.jboss.org/browse/JGRP-1741
>             Project: JGroups
>          Issue Type: Bug
>    Affects Versions: 3.3.5
>         Environment: Linux s0.law.di.unimi.it 2.6.32-358.18.1.el6.x86_64 #1 SMP Tue Aug 27 14:23:09 CDT 2013 x86_64 x86_64 x86_64 GNU/Linux
> java version "1.7.0_45"
> Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
> Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
>            Reporter: Sebastiano Vigna
>            Assignee: Bela Ban
>             Fix For: 3.5
>
>
> We experienced a severe problem with JGroups going into an infinite loop and locking all our threads trying to send messages.
> We had four agents running in parallel and exchanging messages. After about four days, the network between the agents was interrupted for about half an hour for unknown hardware reasons. Each agent started to suspect the others and then, when the network came back, found them again. Except s0, which logged
> 2013-11-13 04:26:47,017 443146923 WARN [Incoming-28,eu,s0{1}@service:jmx:rmi:///jndi/rmi://192.168.0.20:9999/jmxrmi] o.j.p.p.GMS - s0{1}@service:jmx:rmi:///jndi/rmi://192.168.0.20:9999/jmxrmi: not member of view [s3{1}@service:jmx:rmi:///jndi/rmi://192.168.0.23:9999/jmxrmi|6]; discarding it
> We then had a couple of log entries like
> 2013-11-13 04:26:47,017 443146923 DEBUG [OOB-1,eu,s0{1}@service:jmx:rmi:///jndi/rmi://192.168.0.20:9999/jmxrmi] o.j.p.p.STABLE - s0{1}@service:jmx:rmi:///jndi/rmi://192.168.0.20:9999/jmxrmi: received digest from s3{1}@service:jmx:rmi:///jndi/rmi://192.168.0.23:9999/jmxrmi (digest=s2{1}@service:jmx:rmi:///jndi/rmi://192.168.0.22:9999/jmxrmi: [0 (0)], s1{1}@service:jmx:rmi:///jndi/rmi://192.168.0.21:9999/jmxrmi: [0 (0)], s3{1}@service:jmx:rmi:///jndi/rmi://192.168.0.23:9999/jmxrmi: [5353 (5353)]) which does not match my own digest (s2{1}@service:jmx:rmi:///jndi/rmi://192.168.0.22:9999/jmxrmi: [0 (0)], s1{1}@service:jmx:rmi:///jndi/rmi://192.168.0.21:9999/jmxrmi: [0 (0)], s0{1}@service:jmx:rmi:///jndi/rmi://192.168.0.20:9999/jmxrmi: [1042 (1043)], s3{1}@service:jmx:rmi:///jndi/rmi://192.168.0.23:9999/jmxrmi: [5352 (5352)]): ignoring digest and re-initializing own digest
> Then 
> 2013-11-13 04:26:47,681 443147587 WARN [OOB-25,eu,s0{1}@service:jmx:rmi:///jndi/rmi://192.168.0.20:9999/jmxrmi] o.j.p.UNICAST3 - s0{1}@service:jmx:rmi:///jndi/rmi://192.168.0.20:9999/jmxrmi: (requester=s3{1}@service:jmx:rmi:///jndi/rmi://192.168.0.23:9999/jmxrmi) message s3{1}@service:jmx:rmi:///jndi/rmi://192.168.0.23:9999/jmxrmi::963120278 not found in retransmission table of s3{1}@service:jmx:rmi:///jndi/rmi://192.168.0.23:9999/jmxrmi:
> Then at some point JGroups went into an infinite loop. The application stopped working entirely: all threads were waiting on a send(). We got something like 60 GB of logs like this:
> 2013-11-13 14:00:00,393 477540299 WARN [OOB-865,eu,s0{1}@service:jmx:rmi:///jndi/rmi://192.168.0.20:9999/jmxrmi] o.j.p.UNICAST3 - s0{1}@service:jmx:rmi:///jndi/rmi://192.168.0.20:9999/jmxrmi: (requester=s3{1}@service:jmx:rmi:///jndi/rmi://192.168.0.23:9999/jmxrmi) message s3{1}@service:jmx:rmi:///jndi/rmi://192.168.0.23:9999/jmxrmi::963120278 not found in retransmission table of s3{1}@service:jmx:rmi:///jndi/rmi://192.168.0.23:9999/jmxrmi:
> There was apparently an infinite loop. We have a (large) JVM stack trace showing all our threads stuck in a send() like this:
> "ParsingThread-63" prio=10 tid=0x00007fdcceded000 nid=0xbd9b in Object.wait() [0x00007fdb76896000]
>    java.lang.Thread.State: TIMED_WAITING (on object monitor)
>         at java.lang.Object.wait(Native Method)
>         at org.jgroups.protocols.FlowControl$Credit.decrementIfEnoughCredits(FlowControl.java:575)
>         - locked <0x00000004bd4c4c30> (a org.jgroups.protocols.FlowControl$Credit)
>         at org.jgroups.protocols.UFC.handleDownMessage(UFC.java:121)
>         at org.jgroups.protocols.FlowControl.down(FlowControl.java:325)
>         at org.jgroups.protocols.FlowControl.down(FlowControl.java:335)
>         at org.jgroups.protocols.FRAG2.down(FRAG2.java:143)
>         at org.jgroups.protocols.RSVP.down(RSVP.java:142)
>         at org.jgroups.stack.ProtocolStack.down(ProtocolStack.java:1022)
>         at org.jgroups.JChannel.down(JChannel.java:767)
>         at org.jgroups.JChannel.send(JChannel.java:432)
>         at it.unimi.dsi.jai4j.jgroups.JGroupsRemoteJobManager.process(JGroupsRemoteJobManager.java:144)
>         at it.unimi.dsi.jai4j.jgroups.JGroupsJobManager.submit(JGroupsJobManager.java:465)
>         at it.unimi.di.law.bubing.frontier.Frontier.enqueue(Frontier.java:642)
>         at it.unimi.di.law.bubing.frontier.ParsingThread$FrontierEnqueuer.enqueue(ParsingThread.java:204)
>         at it.unimi.di.law.bubing.frontier.ParsingThread.run(ParsingThread.java:392)
> You can find the full stack trace here:
> http://vigna.di.unimi.it/stack
> When the other agents exited a few hours later, however, the loop ended:
> 2013-11-13 14:04:13,088 477792994 DEBUG [Incoming-46,eu,s0{1}@service:jmx:rmi:///jndi/rmi://192.168.0.20:9999/jmxrmi] o.j.p.p.GMS - s0{1}@service:jmx:rmi:///jndi/rmi://192.168.0.20:9999/jmxrmi: installing [boss{0}@service:jmx:rmi:///jndi/rmi://192.168.0.1:9998/jmxrmi|1043] [boss{0}@service:jmx:rmi:///jndi/rmi://192.168.0.1:9998/jmxrmi, s0{1}@service:jmx:rmi:///jndi/rmi://192.168.0.20:9999/jmxrmi, s1{1}@service:jmx:rmi:///jndi/rmi://192.168.0.21:9999/jmxrmi]
> 2013-11-13 14:04:13,088 477792994 DEBUG [Incoming-46,eu,s0{1}@service:jmx:rmi:///jndi/rmi://192.168.0.20:9999/jmxrmi] o.j.p.p.NAKACK2 - s0{1}@service:jmx:rmi:///jndi/rmi://192.168.0.20:9999/jmxrmi: removed s3{1}@service:jmx:rmi:///jndi/rmi://192.168.0.23:9999/jmxrmi from xmit_table (not member anymore)
> 2013-11-13 14:04:13,088 477792994 DEBUG [Incoming-46,eu,s0{1}@service:jmx:rmi:///jndi/rmi://192.168.0.20:9999/jmxrmi] o.j.p.FD_SOCK - VIEW_CHANGE received: [boss{0}@service:jmx:rmi:///jndi/rmi://192.168.0.1:9998/jmxrmi, s0{1}@service:jmx:rmi:///jndi/rmi://192.168.0.20:9999/jmxrmi, s1{1}@service:jmx:rmi:///jndi/rmi://192.168.0.21:9999/jmxrmi]
> 2013-11-13 14:04:13,088 477792994 DEBUG [FD_SOCK pinger,eu,s0{1}@service:jmx:rmi:///jndi/rmi://192.168.0.20:9999/jmxrmi] o.j.p.FD_SOCK - socket to s3{1}@service:jmx:rmi:///jndi/rmi://192.168.0.23:9999/jmxrmi was closed gracefully
> 2013-11-13 14:04:13,088 477792994 DEBUG [FD_SOCK pinger,eu,s0{1}@service:jmx:rmi:///jndi/rmi://192.168.0.20:9999/jmxrmi] o.j.p.FD_SOCK - ping_dest is s1{1}@service:jmx:rmi:///jndi/rmi://192.168.0.21:9999/jmxrmi, pingable_mbrs=[boss{0}@service:jmx:rmi:///jndi/rmi://192.168.0.1:9998/jmxrmi, s0{1}@service:jmx:rmi:///jndi/rmi://192.168.0.20:9999/jmxrmi, s1{1}@service:jmx:rmi:///jndi/rmi://192.168.0.21:9999/jmxrmi]
> 2013-11-13 14:04:13,088 477792994 INFO [Incoming-46,eu,s0{1}@service:jmx:rmi:///jndi/rmi://192.168.0.20:9999/jmxrmi] i.u.d.j.j.JGroupsJobManager - New JGroups view [boss{0}@service:jmx:rmi:///jndi/rmi://192.168.0.1:9998/jmxrmi|1043] [boss{0}@service:jmx:rmi:///jndi/rmi://192.168.0.1:9998/jmxrmi, s0{1}@service:jmx:rmi:///jndi/rmi://192.168.0.20:9999/jmxrmi, s1{1}@service:jmx:rmi:///jndi/rmi://192.168.0.21:9999/jmxrmi]
> After that, our agent was able to close the JGroups connection.
> We will be happy to provide any other information you might find useful. Unfortunately, reproducing this behaviour in vitro is practically impossible.
> Our configuration follows:
> <config xmlns="urn:org:jgroups"
>         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>         xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.3.xsd">
>     <UDP
>          mcast_port="${jgroups.udp.mcast_port:45588}"
>          tos="8"
>          ucast_recv_buf_size="5M"
>          ucast_send_buf_size="640K"
>          mcast_recv_buf_size="5M"
>          mcast_send_buf_size="640K"
>          loopback="true"
>          max_bundle_size="64K"
>          max_bundle_timeout="30"
>          ip_ttl="${jgroups.udp.ip_ttl:8}"
>          enable_diagnostics="true"
>          thread_naming_pattern="cl"
>          timer_type="new3"
>          timer.min_threads="4"
>          timer.max_threads="10"
>          timer.keep_alive_time="3000"
>          timer.queue_max_size="500"
>          thread_pool.enabled="true"
>          thread_pool.min_threads="2"
>          thread_pool.max_threads="8"
>          thread_pool.keep_alive_time="5000"
>          thread_pool.queue_enabled="true"
>          thread_pool.queue_max_size="10000"
>          thread_pool.rejection_policy="discard"
>          oob_thread_pool.enabled="true"
>          oob_thread_pool.min_threads="1"
>          oob_thread_pool.max_threads="8"
>          oob_thread_pool.keep_alive_time="5000"
>          oob_thread_pool.queue_enabled="false"
>          oob_thread_pool.queue_max_size="100"
>          oob_thread_pool.rejection_policy="discard"/>
>     <PING timeout="2000"
>             num_initial_members="20"/>
>     <MERGE2 max_interval="30000"
>             min_interval="10000"/>
>     <FD_SOCK/>
>     <FD_ALL timeout="120000"
>             interval="10000"
>             />
>     <VERIFY_SUSPECT timeout="1500"  />
>     <BARRIER />
>     <pbcast.NAKACK2 xmit_interval="500"
>                     xmit_table_num_rows="100"
>                     xmit_table_msgs_per_row="2000"
>                     xmit_table_max_compaction_time="30000"
>                     max_msg_batch_size="500"
>                     use_mcast_xmit="false"
>                     discard_delivered_msgs="true"/>
>     <UNICAST3 xmit_interval="500"
>               xmit_table_num_rows="100"
>               xmit_table_msgs_per_row="2000"
>               xmit_table_max_compaction_time="60000"
>               conn_expiry_timeout="0"
>               max_msg_batch_size="500"/>
>     <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
>                    max_bytes="4M"/>
>     <pbcast.GMS print_local_addr="true" join_timeout="3000"
>                 view_bundling="true"/>
>     <UFC max_credits="2M"
>          min_threshold="0.4"/>
>     <MFC max_credits="2M"
>          min_threshold="0.4"/>
>     <FRAG2 frag_size="60K"  />
>     <RSVP resend_interval="2000" timeout="10000"/>
> </config>
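
For readers not familiar with the UFC frames in the quoted stack trace: below is a minimal, self-contained sketch, in our own simplified code rather than the actual JGroups implementation, of how credit-based flow control can leave senders parked in timed waits when the receiver never replenishes credits. The class and method names echo the frames in the dump; the details (amounts, wait loop, replenishment) are assumptions for illustration only.

    // Simplified stand-in for the idea behind org.jgroups.protocols.FlowControl$Credit.
    // NOT the JGroups source; semantics are assumed for illustration.
    public class CreditSketch {

        static class Credit {
            private long creditsLeft;

            Credit(long initialCredits) { this.creditsLeft = initialCredits; }

            // Waits in bounded chunks (hence the TIMED_WAITING frames in the dump)
            // until enough credits are available, then decrements and returns.
            synchronized void decrementIfEnoughCredits(long amount, long waitMillis)
                    throws InterruptedException {
                while (creditsLeft < amount) {
                    wait(waitMillis); // if nobody ever calls replenish(), this loops forever
                }
                creditsLeft -= amount;
            }

            // Called when the receiver acknowledges consumed bytes. If the receiver is
            // wedged (or effectively gone), this never happens and senders stay blocked.
            synchronized void replenish(long amount) {
                creditsLeft += amount;
                notifyAll();
            }
        }

        public static void main(String[] args) throws Exception {
            Credit credit = new Credit(2_000_000); // analogous to max_credits="2M" above
            // Each send consumes credits; without replenishment, sends eventually block,
            // which is what every ParsingThread in the dump is doing inside JChannel.send().
            credit.decrementIfEnoughCredits(10_000, 500);
            System.out.println("one message worth of credits consumed");
        }
    }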
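For completeness, a channel built from a configuration like the one quoted above is typically created along these lines (plain JGroups 3.x API; the file name bubing-udp.xml and the payload are hypothetical, the cluster name "eu" is taken from the thread names in the logs):

    import org.jgroups.JChannel;
    import org.jgroups.Message;
    import org.jgroups.ReceiverAdapter;

    public class ChannelSketch {
        public static void main(String[] args) throws Exception {
            // Builds the protocol stack from an XML file like the one in the report.
            JChannel channel = new JChannel("bubing-udp.xml");
            channel.setReceiver(new ReceiverAdapter() {
                @Override public void receive(Message msg) {
                    System.out.println("received: " + msg.getObject());
                }
            });
            channel.connect("eu");        // cluster name as seen in the log thread names
            channel.send(null, "hello");  // null destination = send to the whole cluster;
                                          // the report's threads are blocked inside JChannel.send()
            channel.close();
        }
    }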

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


