Connections get closed for no apparent reason but no exceptions are thrown

dralves davidralves at gmail.com
Tue Sep 8 01:46:57 EDT 2009


Hi 

 I'm using Netty (3.1.2.GA) as the async protocol layer in a large cluster
application prototype I use for research.
 Recently my client stopped being able to communicate with the server.

 It's strange because on my local machine everything works fine: both the
client and the server show the connections as open and start to communicate.
 When I deploy it on the cluster, however (Windows 2008 Server or Ubuntu
Jaunty clients -> Ubuntu Jaunty server running as a Xen domU), the
communication channel is opened but immediately closed afterwards, and no
exception is thrown on either the client or the server to explain why the
connection was closed.

 This only started happening a short time ago (the prototype was undergoing
big changes), but I can't figure out why.

 Any ideas?
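
 In case it helps with diagnosing: the best trace I can think of is a
pass-through handler that logs channel lifecycle events, so a silent close
at least leaves something in the output. This is a minimal sketch against
the Netty 3.x API (CloseDebugHandler is just an illustrative name; it logs
to stderr so it doesn't depend on the logging setup I ask about below):

import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelStateEvent;
import org.jboss.netty.channel.ExceptionEvent;
import org.jboss.netty.channel.SimpleChannelHandler;

public class CloseDebugHandler extends SimpleChannelHandler {
    @Override
    public void channelDisconnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
        // The peer (or the stack) dropped the connection.
        System.err.println("disconnected: " + e.getChannel());
        ctx.sendUpstream(e);
    }

    @Override
    public void channelClosed(ChannelHandlerContext ctx, ChannelStateEvent e) {
        System.err.println("closed: " + e.getChannel());
        ctx.sendUpstream(e);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        // Any exception that would otherwise be swallowed shows up here.
        System.err.println("exceptionCaught: " + e.getCause());
        e.getCause().printStackTrace();
        ctx.sendUpstream(e);
    }
}

It would go in the pipeline on both sides, e.g.
pipeline.addLast("closeDebug", new CloseDebugHandler());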

 On another note, how do I make sure TRACE-level logging is turned on in
Netty? Or how do I force Netty to use Log4j? I have an uber jar with all the
dependencies that includes log4j as well as slf4j, and adding the line
log4j.logger.org.jboss.netty=TRACE doesn't seem to do anything; only the log
statements in my own hooks (like handlers) appear.
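
 (For whoever answers: as far as I can tell, Netty 3 chooses its internal
logger through org.jboss.netty.logging.InternalLoggerFactory, so I'm
guessing that forcing Log4j means setting the factory explicitly, roughly
like the sketch below, run before anything else touches a Netty class.
ForceLog4j is just a placeholder name.)

import org.jboss.netty.logging.InternalLoggerFactory;
import org.jboss.netty.logging.Log4JLoggerFactory;

public class ForceLog4j {
    public static void main(String[] args) {
        // Bind Netty's internal logging to Log4j explicitly instead of
        // relying on the default factory (slf4j is also in the uber jar,
        // and by default Netty may log through java.util.logging).
        InternalLoggerFactory.setDefaultFactory(new Log4JLoggerFactory());

        // ... bootstrap the client/server here ...
    }
}

together with the matching line in log4j.properties:

log4j.logger.org.jboss.netty=TRACE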

 One more note: before I had this problem I benchmarked Netty in my
application against other frameworks (like MINA), and it rocks. The only
problem I had before, latency, is solved now: latency depends only on the
frequency of flushes and is very stable, and Netty still achieves the
highest throughput.
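
 (By "frequency of flushes" I mean application-level batching: writes are
accumulated in a buffer and pushed to the channel on a fixed period, so the
period bounds the added latency. A rough sketch of the scheme, with
illustrative names:)

import java.util.concurrent.TimeUnit;

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.util.HashedWheelTimer;
import org.jboss.netty.util.Timeout;
import org.jboss.netty.util.TimerTask;

public class BatchingWriter {
    private final Channel channel;
    private final HashedWheelTimer timer = new HashedWheelTimer();
    private ChannelBuffer pending = ChannelBuffers.dynamicBuffer();

    public BatchingWriter(Channel channel, final long periodMillis) {
        this.channel = channel;
        // Re-arm a one-shot timeout each period; every tick flushes
        // whatever has accumulated since the last one.
        timer.newTimeout(new TimerTask() {
            public void run(Timeout timeout) {
                flush();
                timer.newTimeout(this, periodMillis, TimeUnit.MILLISECONDS);
            }
        }, periodMillis, TimeUnit.MILLISECONDS);
    }

    public synchronized void enqueue(ChannelBuffer msg) {
        // Messages are only buffered here; nothing hits the wire yet.
        pending.writeBytes(msg);
    }

    private synchronized void flush() {
        if (pending.readable()) {
            channel.write(pending);
            pending = ChannelBuffers.dynamicBuffer();
        }
    }
}

A shorter period means lower latency at the cost of more, smaller writes;
with this in place latency is stable and tracks the period directly.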

Cheers
David


