[jboss-remoting-issues] [JBoss JIRA] Commented: (JBREM-1182) Update testsuite to run under Hudson

Richard Achmatowicz (JIRA) jira-events at lists.jboss.org
Fri Jan 22 16:00:21 EST 2010


    [ https://jira.jboss.org/jira/browse/JBREM-1182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12507902#action_12507902 ] 

Richard Achmatowicz commented on JBREM-1182:
--------------------------------------------

I have now found out why JRunit was not working with the settings: 
-Djrunit.bind_addr=127.0.0.1
-Djrunit.mcast_addr=228.15.2.3
-Djrunit.mcast_port=45565
-Djrunit.receive_on_all_interfaces=false
-Djrunit.send_on_all_interfaces=false
-Djrunit.send_interfaces=127.0.0.1 

At the level of TestDriver communicating with the two ServerTestHarness instances, the JGroups groups were forming correctly, but the ServerTestHarness processes were not receiving the RunTestMessage that TestDriver multicasts after it gets confirmation of server startup. I couldn't understand how some messages were getting through while others were not. Eventually, I realised that this was the first message TestDriver multicasts on the specified send interface, 127.0.0.1. 

The problem arises in JGroups: look at this section of output from UDP.createSockets:

jrunit using bind_address: 127.0.0.1
jrunit using mcast_addr: 228.15.2.3
jrunit using mcast_port: 45566
jrunit using receive_on_all_interfaces=false
jrunit using send_on_all_interfaces=false
jrunit using send_interfaces=127.0.0.1
props: UDP(mcast_addr=228.15.2.3;mcast_port=45566;bind_addr=127.0.0.1;tos=8;ucast_recv_buf_size=80000;ucast_send_buf_size=150000;mcast_send_buf_size=150000;mcast_recv_buf_size=80000;loopback=false;discard_incompatible_packets=true;max_bundle_size=64000;max_bundle_timeout=30;use_incoming_packet_handler=true;use_outgoing_packet_handler=false;ip_ttl=2;down_thread=false;up_thread=false;enable_bundling=false;receive_on_all_interfaces=false;send_on_all_interfaces=false;send_interfaces=127.0.0.1;):PING(timeout=2000;num_initial_members=3;up_thread=false;down_thread=false):MERGE2(min_interval=10000;max_interval=20000;up_thread=false;down_thread=false):FD_SOCK(up_thread=false;down_thread=false):FD(timeout=10000;max_tries=5;shun=true;up_thread=false;down_thread=false):VERIFY_SUSPECT(timeout=1500;up_thread=false;down_thread=false):pbcast.NAKACK(max_xmit_size=8192;use_mcast_xmit=false;gc_lag=0;retransmit_timeout=300,600,1200,2400,4800;up_thread=false;down_thread=false):UNICAST(timeout=300,600,1200,2400;up_thread=false;down_thread=false):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=50000;max_bytes=400000;up_thread=false;down_thread=false):VIEW_SYNC(avg_send_interval=60000;up_thread=false;down_thread=false):pbcast.GMS(join_timeout=3000;join_retry_timeout=2000;shun=false;view_bundling=false;print_local_addr=true;up_thread=false;down_thread=false):FC(max_credits=2000000;min_threshold=0.10;up_thread=false;down_thread=false):FRAG2(frag_size=60000;down_thread=false;up_thread=false):pbcast.STATE_TRANSFER(up_thread=false;down_thread=false)
HELLO!
local_addr=127.0.0.1:57464, mcast_addr=228.15.2.3:45566, bind_addr=/127.0.0.1, ttl=2
sock: bound to 127.0.0.1:57464, receive buffer size=55808, send buffer size=55808
mcast_recv_sock: bound to 127.0.0.1:45566, send buffer size=55808, receive buffer size=55808
1 mcast send sockets:
0:0:0:0:0:0:0:1%1:36720, send buffer size=55808, receive buffer size=55808
-------------------------------------------------------
GMS: address is 127.0.0.1:57464
-------------------------------------------------------
  
Note the mcast send socket used: it is bound to an IPv6 form of localhost (::1) rather than the IPv4 address 127.0.0.1 specified in the system property -Djrunit.send_interfaces.

It turns out that, if you don't start the MessageBus JVM with -Djava.net.preferIPv4Stack=true, this situation can arise, due to the way JGroups creates its multicast send sockets and the way NetworkInterface.getInetAddresses() behaves when more than one IP address is associated with an interface (in this case, 127.0.0.1 and ::1). So JGroups was sending multicasts on lo, but it was using the IPv6 stack and not the IPv4 stack. The multicast receive in ServerTestHarness (whose processes, by the way, are started with -Djava.net.preferIPv4Stack=true), using its IPv4 stack, would therefore not receive the messages.
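A quick way to see what JGroups has to choose from is to enumerate the addresses bound to each interface. This is just a diagnostic sketch (the class name is mine; it is not part of JRunit or JGroups):

import java.net.Inet4Address;
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.util.Collections;

public class ListInterfaceAddresses {
    // Print every address bound to each interface. On a dual-stack Linux
    // host, lo typically reports both ::1 and 127.0.0.1; without
    // -Djava.net.preferIPv4Stack=true, code that selects an address from
    // an interface can end up with the IPv6 one.
    public static void main(String[] args) throws Exception {
        for (NetworkInterface nif : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            for (InetAddress addr : Collections.list(nif.getInetAddresses())) {
                String family = (addr instanceof Inet4Address) ? "IPv4" : "IPv6";
                System.out.println(nif.getName() + " -> " + addr.getHostAddress() + " (" + family + ")");
            }
        }
    }
}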

The problem is fixed by adding a <sysproperty key="java.net.preferIPv4Stack" value="true"/> element to all of the junit targets which start test cases.
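For reference, the change looks something like this in each target (the <junit> attributes shown are illustrative; the actual targets in the Remoting build.xml will differ):

<junit fork="yes" printsummary="yes">
    <!-- Force the test JVM onto the IPv4 stack so that TestDriver and
         the ServerTestHarness processes multicast over the same
         protocol. -->
    <sysproperty key="java.net.preferIPv4Stack" value="true"/>
    <!-- existing classpath, formatter and test elements unchanged -->
</junit>

The same property is already passed to the ServerTestHarness processes; this just makes the test-driver side consistent with them.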



> Update testsuite to run under Hudson  
> --------------------------------------
>
>                 Key: JBREM-1182
>                 URL: https://jira.jboss.org/jira/browse/JBREM-1182
>             Project: JBoss Remoting
>          Issue Type: Feature Request
>      Security Level: Public (Everyone can see)
>          Components: general
>    Affects Versions: 2.2.3.SP1
>            Reporter: Richard Achmatowicz
>            Assignee: Richard Achmatowicz
>             Fix For: 2.2.3.SP2
>
>         Attachments: JBREM-1182.patch, jgroups.jar
>
>
> Update the JBoss Remoting testsuite to run under Hudson. Some current problems include:
> (i) on Linux, JRunit based tests are failing due to members not finding each other
> (ii) on Linux, under JDK6, JRunit based tests are not able to create a JGroups stack
> (iii) JRunit system properties specified by the user on the command line are not being passed to the JUnit targets correctly, and so have no effect on the tests
>  

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: https://jira.jboss.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

       


