[jboss-jira] [JBoss JIRA] (JGRP-1564) TP: handling of message batches

Bela Ban (JIRA) jira-events at lists.jboss.org
Wed Jan 23 09:10:47 EST 2013


    [ https://issues.jboss.org/browse/JGRP-1564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12749898#comment-12749898 ] 

Bela Ban commented on JGRP-1564:
--------------------------------

When sending message batches, the transport header (TpHeader) is now removed and the cluster name is instead sent as part of the message batch header. This greatly reduces the size of the marshalled message batch!
So if we have a batch of 60 messages and they are sent in a cluster called "my-cluster", then we *add* the length of the cluster name (+12) to the batch, but remove the TpHeader from all 60 messages; that's a saving of (2+2+12) * 60 = 960 bytes removed, minus the 12 bytes added, i.e. 948 bytes for the message batch!
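
To make the arithmetic concrete, here is a rough back-of-the-envelope check in plain Java. The byte counts (2 for the protocol ID, 2 for the header magic number, ~12 for the marshalled cluster name) are assumptions taken from the comment above, not measurements of the actual wire format:

// Rough check of the saving claimed above; all sizes are assumptions taken
// from the comment, not the exact JGroups wire format.
public class BatchSavingEstimate {
    public static void main(String[] args) {
        int clusterNameBytes = 12;                        // "my-cluster" plus length prefix (assumed)
        int tpHeaderBytes    = 2 + 2 + clusterNameBytes;  // assumed per-message TpHeader cost
        int batchSize        = 60;

        int removed = tpHeaderBytes * batchSize;          // TpHeader dropped from every message
        int added   = clusterNameBytes;                   // cluster name written once per batch
        System.out.println("saving = " + (removed - added) + " bytes"); // prints: saving = 948 bytes
    }
}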
                
> TP: handling of message batches
> -------------------------------
>
>                 Key: JGRP-1564
>                 URL: https://issues.jboss.org/browse/JGRP-1564
>             Project: JGroups
>          Issue Type: Enhancement
>            Reporter: Bela Ban
>            Assignee: Bela Ban
>             Fix For: 3.3
>
>
> When B receives a batch of 5 messages from A (unicast or multicast), then B uses the *same thread* to send the 5 messages up (this isn't the case for OOB messages).
> It would be more efficient to either have different threads passing the 5 messages up, or use a new *message batch event type* to pass all 5 messages up in one go.
> The advantage of different threads is that all 5 threads add their message to the window, but only 1 removes them and passes them up, rather than each thread adding and removing its own message (fewer lock acquisitions).
> We could try moving the unmarshalling of messages and message batches into TP.receive(). If a batch was received, that code could unmarshal the 5 messages and pass them to the corresponding thread pools to send them up (this receive-and-dispatch idea is sketched below the issue description).
> The unmarshalling shouldn't take long, so TP.receive() should return quickly.
> This approach would allow us to send OOB messages in message batches, too (currently not allowed).
> The advantage of a message batch is that we pass *one* event up the stack, passing only *once* through all protocols from TP to UNICAST/2 and NAKACK/2, and not 5 times. Also, adding 5 messages to the window under the same lock is more efficient than acquiring the lock 5 times. Ditto for removal.
> The disadvantage is that a different event type now needs to be handled by all protocols under UNICAST/NAKACK, e.g. ENCRYPT, SIZE, FRAG(2) (if placed under UNICAST/NAKACK), COMPRESS, etc. However, we could add another up(Batch) method, which by default (in Protocol):
> - removes all messages for a given protocol P (by P.ID)
>   and calls up(Event.MSG, msg) for all messages in the batch
> - calls up_prot.up(batch) if the batch is not empty
> This would allow all current protocols to continue working; only the protocols which don't check for headers and/or need special processing (such as UNICAST and NAKACK) would have to implement up(Batch). A rough sketch of this default behaviour follows below the issue description.
> This solution would be better than introducing another event type MSG_BATCH, as not every protocol overriding up(Event) calls super.up(Event).
> However, this solution is not symmetric, i.e. messages are batched at the transport level, and should be unbatched at the transport level of the receiver(s) as well...
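
As a rough illustration of the receive-and-dispatch idea mentioned in the description: the sketch below unmarshals a batch in the receive path and hands it to a thread pool, so receive() returns quickly. unmarshalBatch() and passBatchUp() are hypothetical placeholders, not actual JGroups API; only java.util.concurrent is real here.

// Hypothetical sketch of batch dispatching in the transport's receive path.
// unmarshalBatch() and passBatchUp() are placeholders, not JGroups methods.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

import org.jgroups.Message;

public class BatchReceiveSketch {
    private final Executor regularPool = Executors.newFixedThreadPool(4);
    private final Executor oobPool     = Executors.newFixedThreadPool(4);

    // Called when a packet arrives. Unmarshalling is cheap, so this method
    // returns quickly; the up-call for the whole batch runs on a pool thread.
    void receive(byte[] buf, int offset, int length, boolean isBatch, boolean isOob) {
        if (!isBatch)
            return; // single message: unchanged path (omitted)
        List<Message> batch = unmarshalBatch(buf, offset, length); // hypothetical helper
        Executor pool = isOob ? oobPool : regularPool;
        pool.execute(() -> passBatchUp(batch)); // one up-call for the whole batch
    }

    // Stubs standing in for the real unmarshalling and up-call logic.
    List<Message> unmarshalBatch(byte[] buf, int offset, int length) { return new ArrayList<>(); }
    void passBatchUp(List<Message> batch) { /* pass the batch up the stack */ }
}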
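
The default up(Batch) behaviour proposed in the description could look roughly like this. MessageBatch here is a placeholder interface invented for the sketch (the issue predates the real implementation), so its methods removeAll() and isEmpty() are assumptions, not a confirmed API:

// Hypothetical sketch of the default batch handling described in the issue:
// a protocol removes its own messages from the batch (by protocol ID),
// delivers them through the existing up(Event) path, and forwards the rest.
import java.util.List;

import org.jgroups.Event;
import org.jgroups.Message;

public abstract class ProtocolBatchSketch {
    protected ProtocolBatchSketch up_prot; // the protocol above this one in the stack
    protected short id;                    // this protocol's ID

    // Placeholder batch type; not the real JGroups MessageBatch API.
    public interface MessageBatch {
        List<Message> removeAll(short protocolId); // remove and return messages carrying a header for this ID
        boolean isEmpty();
    }

    public abstract Object up(Event evt);

    // Default behaviour: existing protocols keep working via up(Event);
    // only protocols such as UNICAST or NAKACK would override this method.
    public Object up(MessageBatch batch) {
        for (Message msg : batch.removeAll(id))
            up(new Event(Event.MSG, msg));  // deliver this protocol's messages one by one, as before
        if (!batch.isEmpty())
            return up_prot.up(batch);       // pass the remaining messages further up the stack
        return null;
    }
}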

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

