This may not give you any performance increase:
#1 In my experience, serialization is much faster than de-serialization,
unless you're doing something fancy in your serializer.
#2 Assuming that the serialization thread pool has a bounded queue/size,
pushing serialization further down the stack will make Infinispan
faster, but at some point you'll block on a full queue or an exhausted
thread pool.
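To illustrate #2, here's a minimal sketch (not Infinispan's actual pool configuration; the sizes and the rejection policy are assumptions for the demo) showing what happens when a bounded serialization pool is overrun: once the single worker is busy and the queue is full, every further submission fails, so the caller pays the price one way or another.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolDemo {
    public static void main(String[] args) {
        // Hypothetical serialization pool: 1 worker thread, queue capacity 2
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(2),
                new ThreadPoolExecutor.AbortPolicy()); // reject when full

        int rejected = 0;
        for (int i = 0; i < 10; i++) {
            try {
                // Each "serialization task" is slow enough that the worker
                // is still busy with the first one while we keep submitting
                pool.execute(() -> {
                    try { Thread.sleep(500); } catch (InterruptedException ignored) {}
                });
            } catch (RejectedExecutionException e) {
                rejected++; // queue full + pool at max size: submission fails
            }
        }
        // 1 task runs, 2 queue up, the remaining 7 are rejected
        System.out.println("rejected " + rejected + " of 10 submissions");
        pool.shutdown();
    }
}
```

A real pool might use a blocking handoff instead of `AbortPolicy`, in which case the submitting thread stalls rather than seeing a rejection; either way the backpressure surfaces.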
#3 If you don't pass me a byte buffer, FRAG(2) effectively cannot do
fragmentation. If the object is serialized into a large byte buffer only
later, it's too late for fragmentation.
#4 Ditto for flow control, which depends on the number of bytes sent to
throttle a sender.
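Points #3 and #4 both hinge on the same thing: the protocol needs the serialized buffer, not the object, because its decisions are a function of byte length. A toy stand-in for what fragmentation does (this is not the FRAG2 implementation, just a sketch of the principle):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FragSketch {
    // Split a serialized buffer into chunks of at most fragSize bytes.
    // If only the unserialized object were passed down the stack, the
    // buffer length would be unknown here and no splitting could happen.
    static List<byte[]> fragment(byte[] buf, int fragSize) {
        List<byte[]> frags = new ArrayList<>();
        for (int off = 0; off < buf.length; off += fragSize) {
            frags.add(Arrays.copyOfRange(buf, off,
                    Math.min(off + fragSize, buf.length)));
        }
        return frags;
    }

    public static void main(String[] args) {
        byte[] payload = new byte[10_000]; // e.g. a marshalled object
        // 10,000 bytes with a 4,000-byte frag size -> 3 fragments
        System.out.println(fragment(payload, 4_000).size());
    }
}
```

Flow control is analogous: the credits a sender consumes are counted in bytes, so without the buffer there is nothing to count.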
The only benefit I see here is to handle temporary performance spikes.
I remember a couple of years ago, in Neuchatel, Manik, Jason and I
looked at ehcache and why it was so much faster than JBossCache with
async replication. We found out they had a replication buffer, into
which they placed all replication tasks, and which was periodically
flushed (causing real network traffic). Turns out they didn't flush it
until the test was over, therefore getting super performance! :-) Had
they run it long enough, they would have paid the penalty, with most
threads getting blocked on a full replication queue...
On 1/19/12 3:31 PM, Mircea Markus wrote:
Hi Bela,
ATM when asyncMarshalling is enabled[1], we use our own thread pool for a) serializing
the object and then b) pass it to the jgroups transport to send it async.
As per [1], this has a major limitation: requests might be re-ordered at infinispan's
level, in the thread pool that handles serialization.
A possible way to improve this is to send both the marshaller and the object to the
jgroups transport, which would serialize the object on the same thread used for sending
it. This way we would avoid re-ordering and potentially reduce thread context switching.
Wdyt?
[1]
http://bit.ly/yNk6In
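The proposal in the quoted mail could be sketched roughly like this (all names here are hypothetical, not the actual Infinispan/JGroups APIs; a real implementation would use Infinispan's marshaller and the transport's own sender thread): because a single thread both marshals and transmits, messages go out in submission order and re-ordering cannot occur.

```java
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;

public class SendOnSenderThread {
    // Single sender thread: marshalling and sending happen in submission
    // order, avoiding the re-ordering a multi-threaded marshalling pool allows
    private final ExecutorService sender = Executors.newSingleThreadExecutor();

    // Hypothetical transport entry point: takes the object *and* the
    // marshaller, and serializes on the sending thread itself
    void send(Object msg, Function<Object, byte[]> marshaller) {
        sender.execute(() -> {
            byte[] buf = marshaller.apply(msg); // marshal on the send thread
            System.out.println("sent msg of " + buf.length + " bytes");
        });
    }

    public static void main(String[] args) throws InterruptedException {
        SendOnSenderThread t = new SendOnSenderThread();
        // Toy marshaller for the demo; a real one would be Infinispan's
        Function<Object, byte[]> m =
                o -> o.toString().getBytes(StandardCharsets.UTF_8);
        t.send("hello", m);
        t.send("world!", m);
        t.sender.shutdown();
        t.sender.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Note that this only shifts where the work happens, which is exactly Bela's point in #2: the sender thread's queue becomes the new bounded resource.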
--
Bela Ban
Lead JGroups (http://www.jgroups.org)
JBoss / Red Hat