Unfair writers/readers ratio (revisited)

"Trustin Lee (이희승)" trustin at gmail.com
Thu Mar 25 01:36:58 EDT 2010



David Alves wrote:
> Hi Trustin
> 
>> If I understood correctly, the problem is that the client takes 3 times
>> more CPU than the server for encoding, right?
> 
> 	Actually more than that: one server is stable at 65% CPU, and I need three clients (each maintaining a stable 75% load) to saturate the Ethernet I/O (i.e. to the point where adding more clients doesn't increase a single server's throughput).
> 
>>> - Should I make sure that "streams" of the same client to the same
>>>  server use the same pipeline?
>> I don't think so.
> 
> 	My question was actually whether I should use the same channel (i.e. async socket connection) or keep the ~12 connections (channels) from a single client to a single server (a server usually has at most three clients, i.e. 36 connections, and a client usually sends to three different servers, i.e. 36 connections).

You need to experiment with increasing/decreasing the number of
connections to find the optimal connection count on the client side.
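
For example, something like this makes the connection count easy to
sweep (a sketch; factory, pipelineFactory, host, port and
connectionCount are placeholders, not names from your code):

    ClientBootstrap bootstrap = new ClientBootstrap(factory);
    bootstrap.setPipelineFactory(pipelineFactory);
    List<Channel> channels = new ArrayList<Channel>();
    for (int i = 0; i < connectionCount; i++) {
        // Vary connectionCount between runs and measure throughput.
        ChannelFuture f = bootstrap.connect(new InetSocketAddress(host, port));
        channels.add(f.awaitUninterruptibly().getChannel());
    }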

>>> - Can I make serialization any faster (I'm using
>>>  ChannelBuffers.dynamicBuffer())?
>> I guess you already tuned the dynamic buffer creation by specifying an
>> estimated message size, right?  If not, try to specify a better default
>> when you create the dynamic buffer to reduce the number of expansions
>> and unnecessary allocations.
> 
> 	You are right, I already specified the size, and I actually update the estimated length based on the size of the previous tuple, as tuples are mostly the same size.
> 	E.g. (where bout is the ChannelBufferOutputStream wrapping the buffer created by ChannelBuffers.dynamicBuffer()):
> 	
> 	ChannelBuffer encoded = bout.buffer();
> 	encoded.setInt(0, encoded.writerIndex() - 4);
> 	if (this.estimatedLength != bout.buffer().writerIndex()) {
> 	    this.estimatedLength = bout.buffer().writerIndex();
> 	}
> 
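
For reference, a minimal sketch of the full round trip with that
estimate fed back into the next allocation in a OneToOneEncoder
(writeTuple() and the field name are illustrative, not your actual
code):

    private int estimatedLength = 512;  // starting guess

    protected Object encode(ChannelHandlerContext ctx, Channel channel,
                            Object msg) throws Exception {
        ChannelBufferOutputStream bout = new ChannelBufferOutputStream(
                ChannelBuffers.dynamicBuffer(estimatedLength));
        bout.writeInt(0);               // placeholder for the length prefix
        writeTuple(bout, msg);          // your serialization step
        ChannelBuffer encoded = bout.buffer();
        encoded.setInt(0, encoded.writerIndex() - 4);
        estimatedLength = encoded.writerIndex();  // remember for the next tuple
        return encoded;
    }
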
>>> - the serialization is performed in a OneToOneEncoder; this is run by
>>>  my own threads, right? Not by Netty's client I/O workers?
>> The serialization is performed in the I/O worker thread if you called
>> write() in your handler and you did not insert an ExecutionHandler into
>> the pipeline.  If the number of connections is small, I'd recommend
>> increasing the number of I/O worker threads by passing an integer to
>> the NioClient/ServerSocketChannelFactory constructors.  As of Netty
>> 3.2.0.BETA1, the default has been doubled for better out-of-the-box
>> performance.
>>
> 
> 	I'm already using 3.2.0.BETA1 and I call channel.write() within my own code, so this means serialization is actually performed by my own threads, right?

It's determined not by whose code writes the message but by which
thread calls write().  Try logging the name of the current thread.
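
For example (just a debugging sketch):

    // Inside encode(): see which thread runs the serialization.
    System.out.println("encode() on: " + Thread.currentThread().getName());
    // Netty's I/O workers have names like "New I/O client worker #1-1";
    // anything else means one of your own threads made the write.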

> 	Would I still gain from adding more workers? Would it be better to delegate the serialization to Netty's I/O workers?

If you mean the I/O workers, yes: try increasing their number to see
if it helps.
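
A sketch of the constructor call mentioned above (the executors and
the worker count are up to you):

    import java.util.concurrent.Executors;
    import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;

    // The third argument is the number of I/O worker threads; the
    // default was doubled in 3.2.0.BETA1.
    NioClientSocketChannelFactory factory = new NioClientSocketChannelFactory(
            Executors.newCachedThreadPool(),   // boss threads
            Executors.newCachedThreadPool(),   // worker threads
            16);                               // e.g. try 16 workers

If instead you want to push handler work off the I/O threads, that is
what ExecutionHandler is for (from org.jboss.netty.handler.execution):

    pipeline.addLast("executor", new ExecutionHandler(
            new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576)));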

HTH,
Trustin

-- 
what we call human nature in actuality is human habit
http://gleamynode.net/

