We are using NIO - the configuration settings you provided in an earlier
email failed to configure the OIO UDP buffer size.
-Neil.
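For context, the OIO/NIO split for UDP in Netty 3.x comes down to which
channel factory the bootstrap is given. A minimal sketch (the class name
and executor wiring are illustrative, not from this thread):

import java.util.concurrent.Executors;

import org.jboss.netty.bootstrap.ConnectionlessBootstrap;
import org.jboss.netty.channel.socket.nio.NioDatagramChannelFactory;
import org.jboss.netty.channel.socket.oio.OioDatagramChannelFactory;

// Illustrative class name, not from this thread.
public class UdpTransportChoice {
    public static void main(String[] args) {
        // NIO UDP: a selector-driven worker pool serves many channels.
        ConnectionlessBootstrap nio = new ConnectionlessBootstrap(
                new NioDatagramChannelFactory(Executors.newCachedThreadPool()));

        // OIO UDP: one blocking thread per channel.
        ConnectionlessBootstrap oio = new ConnectionlessBootstrap(
                new OioDatagramChannelFactory(Executors.newCachedThreadPool()));
    }
}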
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">Thanks for the feed back. It seems like your application is running in a controlled network. Otherwise such a large UDP packet could be truncated or discarded.<br>
<br>
Out of curiosity, do you use OIO UDP or NIO UDP?<br>
<br>
Trustin<br>
<br>
Neil Avery wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><div class="im">
Oops - forgot to add my point - thats our particular use case - so 768<br>
is probably ok for most cases!<br>
Cheers.<br>
<br></div>
2009/6/26 Neil Avery <<a href="mailto:neil@liquidlabs.co.uk" target="_blank">neil@liquidlabs.co.uk</a> <mailto:<a href="mailto:neil@liquidlabs.co.uk" target="_blank">neil@liquidlabs.co.uk</a>>><div class="im">
<br>
<br>
We are using it (UDP) to burst a search request to hundreds or
thousands of machines. Sending over TCP degrades as more machines
are added to the network - even with our custom connection pooling
we still see degradation - whereas UDP gives much better performance
(and we cannot use multicast). Our message size is a maximum of 5K.
BTW, we use Netty TCP elsewhere when reliability is required.
Cheers, Neil.
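A sketch of matching the predictor to that 5K maximum (assuming Netty
3.x; the helper name and the 256K kernel buffer value are illustrative
guesses, not settings from this thread):

import org.jboss.netty.bootstrap.ConnectionlessBootstrap;
import org.jboss.netty.channel.FixedReceiveBufferSizePredictor;

// Hypothetical helper, not from this thread.
public class BurstReceiverConfig {
    static void configure(ConnectionlessBootstrap b) {
        // Fixed predictor sized to the largest expected datagram (5K).
        b.setOption("receiveBufferSizePredictor",
                new FixedReceiveBufferSizePredictor(5 * 1024));
        // Larger kernel socket buffer (illustrative value) so a burst of
        // replies from many machines is less likely to be dropped.
        b.setOption("receiveBufferSize", 256 * 1024);
    }
}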
<br>
2009/6/26 "ÀÌÈñ½Â (Trustin Lee)" <<a href="mailto:trustin@gmail.com" target="_blank">trustin@gmail.com</a><br></div>
<mailto:<a href="mailto:trustin@gmail.com" target="_blank">trustin@gmail.com</a>>><div><div></div><div class="h5"><br>
<br>
By the way, please let me know if you think the default (768) is
not sane and should be increased. I'm not really a UDP expert, so
other people's suggestions are appreciated.

Thanks

이희승 (Trustin Lee) wrote:

Hi Neil,

It's because the default receiveBufferSizePredictor of DatagramChannel
is FixedReceiveBufferSizePredictor(768). You can configure the channel
to use a FixedReceiveBufferSizePredictor with a different payload size.
For example:

// Configure an existing channel directly.
DatagramChannel ch = ...;
ch.getConfig().setReceiveBufferSizePredictor(
        new FixedReceiveBufferSizePredictor(1024));

or:

// Or set it as a bootstrap option before binding.
ConnectionlessBootstrap b = ...;
b.setOption("receiveBufferSizePredictor",
        new FixedReceiveBufferSizePredictor(1024));
<br>
BTW, I wouldn't recommend to using<br>
AdaptiveReceiveBufferSizePrediector<br>
for datagrams.<br>
<br>
HTH,<br>
Trustin<br>
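Putting the two fragments above together, a self-contained NIO receiver
might look like the sketch below (the port, no-op handler, and 1024-byte
size are illustrative assumptions):

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

import org.jboss.netty.bootstrap.ConnectionlessBootstrap;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.FixedReceiveBufferSizePredictor;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
import org.jboss.netty.channel.socket.nio.NioDatagramChannelFactory;

// Illustrative class name; port and handler are placeholders.
public class UdpPredictorReceiver {
    public static void main(String[] args) {
        ConnectionlessBootstrap b = new ConnectionlessBootstrap(
                new NioDatagramChannelFactory(Executors.newCachedThreadPool()));
        b.setPipelineFactory(new ChannelPipelineFactory() {
            public ChannelPipeline getPipeline() throws Exception {
                return Channels.pipeline(new SimpleChannelUpstreamHandler() {
                    @Override
                    public void messageReceived(ChannelHandlerContext ctx,
                                                MessageEvent e) {
                        System.out.println("received: " + e.getMessage());
                    }
                });
            }
        });
        // Without this, any datagram over 768 bytes would be cut short.
        b.setOption("receiveBufferSizePredictor",
                new FixedReceiveBufferSizePredictor(1024));
        b.bind(new InetSocketAddress(1234));
    }
}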
<br>
On 2009-06-25 ¿ÀÀü 4:49, neilson9 wrote:<br>
<br>
Hi,<br>
<br>
Im having a couple of problems sending UDP byte[]> 768K<br>
<br>
For example if Im passing 1024bytes the buffer.readIndex<br>
does not get<br>
updated. The sender only passes 768 bytes and if I<br>
manually set the<br>
readIndex to 768 it sends another 768 bytes on the<br>
second send. I would like<br>
to iterate of the byte[] and use the offsets to prevent<br>
copying data etc.<br>
<br>
Any help appreciated.<br>
Regards Neil.<br>
<br>
For example:

byte[] payload = userdata....;

channel = (DatagramChannel) b.bind(new InetSocketAddress(0));
ChannelBuffer buffer = dynamicBuffer(payload.length);
buffer.writeBytes(payload);
LOGGER.info("Sending:" + payload.length + " sent:" + buffer.readerIndex());
ChannelFuture channelFuture = channel.write(buffer,
        new InetSocketAddress(uri.getHost(), port));
// manually setting readerIndex to see if it sends the remainder
buffer.readerIndex(768);

channelFuture = channel.write(buffer,
        new InetSocketAddress(uri.getHost(), port));
LOGGER.info("Sending:" + buffer.readerIndex());
<br>
<br>
<br>
<br>
<br>
------------------------------------------------------------------------<br>
<br>
_______________________________________________<br>
netty-users mailing list<br></div></div>
<a href="mailto:netty-users@lists.jboss.org" target="_blank">netty-users@lists.jboss.org</a> <mailto:<a href="mailto:netty-users@lists.jboss.org" target="_blank">netty-users@lists.jboss.org</a>><div class="im">
<br>
<a href="https://lists.jboss.org/mailman/listinfo/netty-users" target="_blank">https://lists.jboss.org/mailman/listinfo/netty-users</a><br>
<br>
<br>
<br>
--<br></div>
Trustin Lee, <a href="http://gleamynode.net" target="_blank">http://gleamynode.net</a> <<a href="http://gleamynode.net/" target="_blank">http://gleamynode.net/</a>><br>
<br>
_______________________________________________<br>
netty-users mailing list<br>
<a href="mailto:netty-users@lists.jboss.org" target="_blank">netty-users@lists.jboss.org</a> <mailto:<a href="mailto:netty-users@lists.jboss.org" target="_blank">netty-users@lists.jboss.org</a>><div class="im">
<br>
<a href="https://lists.jboss.org/mailman/listinfo/netty-users" target="_blank">https://lists.jboss.org/mailman/listinfo/netty-users</a><br>
<br>
<br>
<br>
<br>
------------------------------------------------------------------------<br>
<br>
_______________________________________________<br>
netty-users mailing list<br>
<a href="mailto:netty-users@lists.jboss.org" target="_blank">netty-users@lists.jboss.org</a><br>
<a href="https://lists.jboss.org/mailman/listinfo/netty-users" target="_blank">https://lists.jboss.org/mailman/listinfo/netty-users</a><br>
</div></blockquote><div><div></div><div class="h5">
<br>
<br>

--
Trustin Lee, http://gleamynode.net

_______________________________________________
netty-users mailing list
netty-users@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/netty-users