Bela Ban edited comment on JGRP-2504 at 9/30/20 9:50 AM:
---------------------------------------------------------
I made the following changes:
* For TCP, I set the receive buffer size on {{ServerSocket}} _before_ calling {{bind()}}.
This should be copied to the sockets returned by {{accept()}}; however, I read that this
behavior is implementation-dependent (see the sketch after this list).
* Ditto for TCP_NIO2 and {{ServerSocketChannel}}
* For {{DatagramSocket}} and {{MulticastSocket}}, there is no need to do this, as
receive and send buffer sizes can be set after calling {{bind()}}.
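A minimal sketch of the order of operations for the TCP and TCP_NIO2 cases (port and buffer size are made-up values, and this is only an illustration, not the actual JGroups code):
{code:java}
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.StandardSocketOptions;
import java.nio.channels.ServerSocketChannel;

public class RecvBufBeforeBind {
    public static void main(String[] args) throws Exception {
        int recvBufSize = 5 * 1024 * 1024; // illustrative: 5 MB
        int port = 7800;                   // illustrative port

        // TCP: create the ServerSocket unbound, set the receive buffer, then bind().
        // The size should then be inherited by the sockets returned by accept().
        ServerSocket srv = new ServerSocket();
        srv.setReceiveBufferSize(recvBufSize);
        srv.bind(new InetSocketAddress(port));
        System.out.println("ServerSocket recv buf: " + srv.getReceiveBufferSize());
        srv.close();

        // TCP_NIO2: same idea with ServerSocketChannel, calling setOption() before bind().
        ServerSocketChannel ch = ServerSocketChannel.open();
        ch.setOption(StandardSocketOptions.SO_RCVBUF, recvBufSize);
        ch.bind(new InetSocketAddress(port));
        System.out.println("ServerSocketChannel recv buf: " + ch.getOption(StandardSocketOptions.SO_RCVBUF));
        ch.close();
    }
}
{code}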
Unfortunately, I cannot reproduce this, as I don't have 2 boxes in different geos.
Please verify this fix works for you and re-open if needed.
I was also under the impression that setting buffer sizes on TCP sockets is only a hint,
and that the OS can choose to ignore it. Since TCP adapts the receive-window size
dynamically, I'm also a bit surprised that this didn't happen in your case.
Perhaps the configured receive buffer size is the *max size* of the TCP receive window...?
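One way to see what the OS actually grants is to read the value back after setting it (a quick sketch, not part of the fix; on Linux the kernel typically doubles the requested value and clamps it to {{net.core.rmem_max}}, so the returned size may differ from the request):
{code:java}
import java.net.ServerSocket;

public class CheckGrantedBuf {
    public static void main(String[] args) throws Exception {
        try (ServerSocket s = new ServerSocket()) {
            s.setReceiveBufferSize(5 * 1024 * 1024); // request 5 MB (illustrative)
            // The value reported here is what the OS effectively granted,
            // which may be larger (Linux doubles it) or capped by kernel limits.
            System.out.println("granted: " + s.getReceiveBufferSize() + " bytes");
        }
    }
}
{code}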
Poor throughput over high latency TCP connection when recv_buf_size is configured
---------------------------------------------------------------------------------
Key: JGRP-2504
URL:
https://issues.redhat.com/browse/JGRP-2504
Project: JGroups
Issue Type: Bug
Affects Versions: 5.0.0.Final
Reporter: Andrew Skalski
Assignee: Bela Ban
Priority: Minor
Fix For: 5.1
Attachments: SpeedTest.java, bla5.java, bla6.java, bla7.java
I recently finished troubleshooting a unidirectional throughput bottleneck involving a
JGroups application (Infinispan) communicating over a high-latency (~45 milliseconds) TCP
connection.
The root cause was JGroups improperly configuring the receive/send buffers on the
listening socket. According to the tcp(7) man page:
{code:java}
On individual connections, the socket buffer size must be set prior to
the listen(2) or connect(2) calls in order to have it take effect.
{code}
However, JGroups does not set the buffer size on the listening side until after
accept().
The result is poor throughput when sending data from the client (connecting side) to the
server (listening side). Because the issue is a too-small TCP receive window, throughput is
ultimately latency-bound.
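For a rough sense of that bound (an illustrative back-of-the-envelope calculation, not figures from this report): with an effective receive window of 64 KiB and a 45 ms round-trip time, throughput cannot exceed window / RTT = 65536 bytes / 0.045 s, or roughly 1.4 MiB/s, regardless of available bandwidth. Raising the effective window to, say, 4 MiB lifts that ceiling to roughly 89 MiB/s.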