The basic HTTP implementation we have would not work through a clever firewall, because it does not maintain a true one-to-one mapping between request and response. To solve this we could do the following.
The client side will only change slightly. Since HTTP 1.1 supports request pipelining we should be able to send requests without waiting for a response, so the order could be req1 req2 req3 res1 res2 res3. The spec does say the following, however:
"Clients SHOULD NOT pipeline requests using non-idempotent methods or non-idempotent sequences of methods (see section 9.1.2). Otherwise, a premature termination of the transport connection could lead to indeterminate results. A client wishing to send a non-idempotent request SHOULD wait to send that request until it has received the response status for the previous request."
But I think it should be ok for our usage. The client will also need to send a
'bogus' request when idle for n seconds to make sure the connection isn't
closed.
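The client-side behaviour above could be sketched roughly as follows. This is just an illustration, assuming a simple POST-based framing; the class and method names (TunnelClient, buildRequest, buildBogusRequest) are hypothetical, not from any real JBoss API.

```java
// Hypothetical sketch of the client side: frame payloads as HTTP/1.1
// POSTs so several can be written back-to-back (pipelined) before any
// response is read, plus an empty "bogus" request for keep-alive.
public class TunnelClient {

    // Frame one payload as an HTTP/1.1 request. Several of these can be
    // written to the socket in sequence without waiting for responses.
    public static String buildRequest(String path, byte[] payload) {
        return "POST " + path + " HTTP/1.1\r\n"
             + "Host: example\r\n"
             + "Content-Length: " + payload.length + "\r\n"
             + "Connection: keep-alive\r\n"
             + "\r\n"
             + new String(payload);
    }

    // An empty-body request sent when the client has been idle for
    // n seconds, purely so the firewall doesn't close the connection.
    public static String buildBogusRequest(String path) {
        return buildRequest(path, new byte[0]);
    }
}
```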
The server cannot send an unsolicited response to a client but *must* send a response to every request. This should work as follows.
The server does not send anything directly to the client; instead it keeps a list of 'pending packets'. When the server receives a request it will initiate a response: any pending packets will be wrapped up in a single response and sent*. If there are no pending packets then the server will wait for a configurable amount of time for a pending packet to arrive, or send a bogus response; this is again to keep the connection alive.
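The server-side queueing described above could be modelled on a blocking queue with a poll timeout, something like the sketch below. The names (PendingPackets, drainForResponse) and the timeout handling are assumptions for illustration only.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the server side: packets destined for the
// client are queued, and each incoming request either drains them all
// into one response or times out, in which case the caller sends a
// bogus (empty) response to keep the connection alive.
public class PendingPackets {
    private final BlockingQueue<byte[]> pending = new LinkedBlockingQueue<>();

    // Called whenever the server has data for the client but no
    // request to attach it to yet.
    public void enqueue(byte[] packet) {
        pending.add(packet);
    }

    // Called when a client request arrives: wait up to timeoutMillis
    // for at least one pending packet, then drain everything queued.
    // An empty list means "send a bogus response".
    public List<byte[]> drainForResponse(long timeoutMillis) throws InterruptedException {
        List<byte[]> batch = new ArrayList<>();
        byte[] first = pending.poll(timeoutMillis, TimeUnit.MILLISECONDS);
        if (first != null) {
            batch.add(first);
            pending.drainTo(batch);
        }
        return batch;
    }
}
```

The poll-then-drain shape keeps the common case cheap: a request that arrives while packets are waiting returns immediately with everything queued so far.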
*Wrapping up all pending packets into a single response will require a copy, so there's an overhead; it may be better just to send back a single packet at a time, but I'll need to test this to see. It all depends how many actual requests vs responses there are.
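The copy in question might look something like this: every packet's bytes get written again into one combined body, each with a length prefix so the client can split them apart. The 4-byte length framing is purely an assumption for the sketch, not a documented format.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.List;

// Hypothetical sketch of wrapping several pending packets into one
// response body. Each packet is prefixed with its length (an assumed
// framing) so the receiver can reconstruct the individual packets.
public class PacketWrapper {
    public static byte[] wrap(List<byte[]> packets) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        for (byte[] p : packets) {
            out.writeInt(p.length); // 4-byte length prefix
            out.write(p);           // the copy: every packet moves again
        }
        return buf.toByteArray();
    }
}
```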