Unfair writer/reader ratio (revisited)

"Trustin Lee (이희승)" trustin at gmail.com
Wed Mar 24 02:18:11 EDT 2010


Hi David,

If I understood correctly, the problem is that the client uses about
three times as much CPU as the server for encoding, right?

> - Should I make sure that "streams" of the same client to the same
>   server use the same pipeline?

I don't think so.

> - Can I make serialization any faster (I'm using
>   ChannelBuffers.dynamicBuffer())?

I guess you have already tuned the dynamic buffer creation by specifying
an estimated message size, right?  If not, try specifying a better
estimated length when you create a dynamic buffer to reduce the number
of expansions and unnecessary allocations.
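
For example, a minimal sketch in Netty 3.x (the 2048-byte estimate is a
hypothetical figure; use your typical encoded message size):

    import org.jboss.netty.buffer.ChannelBuffer;
    import org.jboss.netty.buffer.ChannelBuffers;

    // Pre-sizing the dynamic buffer to the typical encoded message size
    // avoids repeated grow-and-copy cycles while encoding.
    ChannelBuffer buf = ChannelBuffers.dynamicBuffer(2048);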

> - The serialization is performed in a OneToOneEncoder; this is run by
>   my own threads, right? Not by Netty's client I/O workers?

The serialization is performed in the I/O worker thread if you called
write() in your handler and you did not insert an ExecutionHandler into
the pipeline.  If the number of connections is small, I'd recommend
increasing the number of I/O worker threads by passing an integer to the
NioClientSocketChannelFactory / NioServerSocketChannelFactory
constructors.  As of Netty 3.2.0.BETA1, the default has been doubled for
better out-of-the-box performance.
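
As a sketch of both options in Netty 3.x (the worker count of 8 and the
executor sizes below are assumptions; tune them to your machines):

    import java.util.concurrent.Executors;
    import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;
    import org.jboss.netty.handler.execution.ExecutionHandler;
    import org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor;

    // Option 1: more I/O worker threads.  The third constructor argument
    // overrides the default worker count.
    NioClientSocketChannelFactory factory =
            new NioClientSocketChannelFactory(
                    Executors.newCachedThreadPool(),  // boss threads
                    Executors.newCachedThreadPool(),  // worker threads
                    8);                               // I/O worker count

    // Option 2: run your handler (and hence the write()/encode path it
    // triggers) in a separate thread pool instead of the I/O threads.
    ExecutionHandler executionHandler = new ExecutionHandler(
            new OrderedMemoryAwareThreadPoolExecutor(
                    16,          // thread pool size
                    1048576,     // max queued memory per channel (bytes)
                    1048576));   // max total queued memory (bytes)
    // Add it to the pipeline before your business handler, e.g.:
    // pipeline.addLast("executor", executionHandler);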

HTH,
Trustin

dralves wrote:
> Hi 
> 
>         I've built a large-scale application using Netty, and the unfair
> writer/reader ratio is getting problematic. 
>         I actually need 3 client machines to flood one server machine, which
> means that to test scaling to 100 nodes I actually need 300 more (which is
> very expensive on EC2 :). 
>         If I can get this ratio even to 1.5/1 (from 3/1), that would already
> be enormous progress and would allow me to continue my work (my ultimate
> goal is to run a 1000-node cluster). 
>         
>         My setup: each client runs several threads; each thread has its own
> set of connections to the servers (to avoid unnecessary contention on some
> bottlenecks). 
>         Each thread has its own set of Netty pipelines (one for each
> different server and for each "stream" within that server, up to about 32
> different "streams" split across 3-4 different servers). I did this for ease
> of abstraction (clients simply request connections to streams, disregarding
> where they are). 
>         For this particular test (more of an I/O test), clients do mostly
> nothing except serialization and socket writes (objects are kept in a pool,
> so there is no object-creation overhead, and serialization is very simple:
> each object knows how to write and read itself from a DataOutput/DataInput). 
>         
>         Servers cope well (even when flooded, i.e. no more network I/O
> coming in); they maintain a stable load (about 65%). 
>         Clients require more CPU (about 75% each), and I actually need three
> whole client machines to flood one server machine. 
> 
>         I've tested several configurations, tuning the buffer size on the
> client and server side as well as other parameters. I found the optimal
> configuration, but my problem didn't go away. 
>         
>         I must be doing something wrong. Any pointers? 
> 
>         Some specific doubts: 
>         - Should I make sure that "streams" of the same client to the same
> server use the same pipeline? 
>         - Can I make serialization any faster (I'm using
> ChannelBuffers.dynamicBuffer())? 
>         - The serialization is performed in a OneToOneEncoder; this is run
> by my own threads, right? Not by Netty's client I/O workers? 
> 
>         On a more positive note, when configured for latency (small batch
> flushes and TCP_NODELAY on), Netty performs great (total cluster throughput
> of about 2.5 GB/sec and latencies of <0.5 sec :) 
> 
> Any help would be greatly appreciated 
> Best Regards 
> David Alves 

-- 
what we call human nature in actuality is human habit
http://gleamynode.net/

