How to host a high-performing Netty app in a fluctuating network environment?
George
georgel2004 at gmail.com
Wed Jul 27 17:35:04 EDT 2011
Thanks for your comments, John.
Regarding parts 1 & 3 of your reply: scaling a video stream in terms of network
throughput is complex, as every user needs a consistent stream. But I am working
on HTTP streaming, so the scaling part is comparatively easier than consistent
video streaming: I can redirect users at intervals of seconds or minutes, lower
per-user throughput, etc.
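The redirect itself should be cheap with Netty's HTTP codec. A rough sketch of
what I have in mind (Netty 3; "newhost.example.com" is just a placeholder for
whatever host the routing logic picks):

    import org.jboss.netty.channel.ChannelFutureListener;
    import org.jboss.netty.channel.ChannelHandlerContext;
    import org.jboss.netty.channel.MessageEvent;
    import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
    import org.jboss.netty.handler.codec.http.DefaultHttpResponse;
    import org.jboss.netty.handler.codec.http.HttpHeaders;
    import org.jboss.netty.handler.codec.http.HttpResponse;
    import org.jboss.netty.handler.codec.http.HttpResponseStatus;
    import org.jboss.netty.handler.codec.http.HttpVersion;

    // Sketch: answer the request with a 302 pointing at a less loaded
    // server, then close so the client reconnects to the new host.
    public class RedirectHandler extends SimpleChannelUpstreamHandler {
        @Override
        public void messageReceived(ChannelHandlerContext ctx, MessageEvent e)
                throws Exception {
            HttpResponse response = new DefaultHttpResponse(
                    HttpVersion.HTTP_1_1, HttpResponseStatus.FOUND);
            // Placeholder target; real routing logic would choose this.
            response.setHeader(HttpHeaders.Names.LOCATION,
                    "http://newhost.example.com/stream");
            e.getChannel().write(response)
                    .addListener(ChannelFutureListener.CLOSE);
        }
    }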
The requests will be 1 or 2 KB, but the responses could run into MBs, so the
possibility of hitting OOM is there, from what I understood from other posts.
But it depends on how the app reacts to the channel's writability status. I will
read more on this.
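From what I have read so far, reacting to writability would look roughly like
the sketch below. StreamFeeder is a made-up interface standing in for whatever
produces the MB-sized response; the pause/resume points are what I understand
Netty 3's NIO transport to signal via interest-changed events:

    import org.jboss.netty.channel.Channel;
    import org.jboss.netty.channel.ChannelHandlerContext;
    import org.jboss.netty.channel.ChannelStateEvent;
    import org.jboss.netty.channel.SimpleChannelHandler;

    // Hypothetical producer that can be paused while the socket is congested.
    interface StreamFeeder {
        void pause();
        void resume();
    }

    public class BackpressureHandler extends SimpleChannelHandler {
        private final StreamFeeder feeder;

        public BackpressureHandler(StreamFeeder feeder) {
            this.feeder = feeder;
        }

        // Netty 3 fires this when the outbound buffer crosses the configured
        // write buffer high/low water marks.
        @Override
        public void channelInterestChanged(ChannelHandlerContext ctx,
                ChannelStateEvent e) throws Exception {
            Channel ch = e.getChannel();
            if (ch.isWritable()) {
                feeder.resume(); // buffer drained: safe to produce again
            } else {
                feeder.pause();  // buffer full: stop queueing writes, or OOM
            }
            super.channelInterestChanged(ctx, e);
        }
    }

I gather the thresholds can be tuned on the NIO transport through the
"child.writeBufferHighWaterMark" / "child.writeBufferLowWaterMark" bootstrap
options, but I still need to verify that.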
The real problem, and a very common one, is part 2: good servers connected to a
poor backbone switch. This is the one I am trying to solve.
For example, when people report getting only 300 Mbps of throughput on their
1 Gbps NICs, it basically means the hosting environment is allowing them only
that much. This is the real metric I want to find out.
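To get at that metric per server, I was thinking of counting the bytes that
actually make it onto the wire, roughly like the sketch below. The class is my
own, not a Netty API (I believe Netty 3.2+ also ships a TrafficCounter in its
traffic-shaping handlers that does this properly):

    import java.util.concurrent.atomic.AtomicLong;
    import org.jboss.netty.channel.ChannelHandlerContext;
    import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
    import org.jboss.netty.channel.WriteCompletionEvent;

    // Counts bytes actually flushed to the socket, so periodic samples can
    // be compared against what the NIC is rated for.
    public class ThroughputMeter extends SimpleChannelUpstreamHandler {
        private final AtomicLong bytes = new AtomicLong();
        private volatile long windowStart = System.currentTimeMillis();

        @Override
        public void writeComplete(ChannelHandlerContext ctx,
                WriteCompletionEvent e) throws Exception {
            bytes.addAndGet(e.getWrittenAmount());
            super.writeComplete(ctx, e);
        }

        // Mbps achieved since the last sample; call from a scheduled task.
        public double sampleMbps() {
            long now = System.currentTimeMillis();
            double seconds = Math.max(1, now - windowStart) / 1000.0;
            windowStart = now;
            return bytes.getAndSet(0) * 8 / 1000000.0 / seconds;
        }
    }

As you point out, though, a number like this from a single machine can still be
misleading when the congestion is downstream in the rack or backbone switch.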
From your reply, what I understand is that trying to predict congestion in the
hosting environment from within the Netty app, using channel status, is not a
good solution; it may very well not reflect the real state of the network or
the experience of the users.
This problem is not a clear-cut Netty-domain one, but the Netty app has to
react to such a scenario. Is there any better way of solving this particular
problem?
Thanks
George
On Wed, Jul 27, 2011 at 3:30 PM, John D. Mitchell <jdmitchell at gmail.com> wrote:
> On Jul 27, 2011, at 12:59 , George wrote:
> [...]
> > I am using TCP NIO and the servers will be in different host environments
> across regions.
>
> So you can do "client to different servers" testing (and then select which
> server a client should talk to based on that) but given that you're doing
> video (IIRC), it's the longer-term steady state throughput that matters
> rather than the quick and dirty connectivity latency & early throughput --
> so that's tougher to figure out cheaply/quickly/accurately.
>
> > I was looking into collectd to gather bandwidth throughput of the server
> and compare with the Netty app throughput to decide on the routing logic.
>
> Depending on your networking setup (e.g. in your colos), it's not just the
> saturation of e.g., the local NIC but also any switches/up-links/etc. At a
> recent place I was working, they had full racks of machines with dual 1Gbps
> NICs on the front-end servers but all of those links were being run through
> rack switches that had (a) much, much less than N-servers * 1Gbps of
> switching fabric capacity and, even worse, (b) had only 1 or 2 * 1Gbps
> network drops into the backbone switch. So, looking at the performance on a
> single machine may confuse you when the congestion happens downstream, etc.
>
> > But that didn't seem like a proper solution because of Netty app hitting
> OOM in case Channel.isWritable becomes false.
>
> Why is Netty running out of memory just because a channel isn't writable?
> That back-pressure should be propagated to stall the upstream sources.
>
> FWIW, you might also want to google around a bit as I recall there have
> been some nice write ups earlier this year by people using plain TCP to
> serve video streams. Might find some helpful details there.
>
> Hope this helps,
> John
>