It's a copy of your email, nicely reformatted by Mircea. What (public)
place would you like them to be in?
We have a "meeting minutes" section; it's in there now.
On 4 March 2013 10:52, Bela Ban <bban(a)redhat.com> wrote:
Is this a copy, or a ref? Because I'd like to have all minutes for
cluster team meetings in one place... +1 on making this available to
the public though...
On 3/4/13 11:43 AM, Sanne Grinovero wrote:
> Thanks Bela,
> I've moved it to the wiki:
> https://community.jboss.org/wiki/ClusteringMeetingLondonFeb2013
>
> Sanne
>
> On 24 February 2013 10:26, Bela Ban <bban(a)redhat.com> wrote:
>>
>> Mircea, Dan, Pedro, Sanne and I had a meeting in London this week on how
>> to use the new features of JGroups 3.3 in Infinispan 5.3; I've copied
>> the minutes from the wiki below.
>>
>>
>> London meeting
>>
>> Bela, Pedro, Mircea, Dan, Sanne
>>
>>
>> Message bundling and OOB messages
>>
>> In 3.3, all messages will be bundled: not just regular messages, but
>> also OOB messages. The way this works on the sender side is:
>>
>> - A thread sending a message in the transport adds it to a queue
>> - There's one thread which dequeues messages and sends them as bundles
>> - It sends a message bundle if the max size has been reached, or
>> there are no more messages in the queue
>> - This means single messages are sent immediately, or we fill up a
>> bundle (in a few microseconds) and send it
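The sender-side steps above can be sketched in plain Java. This is a simplified, self-contained model of the idea (queue, single drainer thread, flush on max size or empty queue), not the actual JGroups bundler; all names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Simplified model of the sender-side bundler: any thread enqueues messages,
// one drainer thread sends a bundle when max_bundle_size is reached or the
// queue runs empty (so lone messages go out immediately).
public class BundlerSketch {
    static final int MAX_BUNDLE_SIZE = 64_000;

    private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();
    private final List<List<byte[]>> sentBundles = new ArrayList<>(); // stand-in for the wire

    // Called by any thread sending a message through the transport
    public void send(byte[] msg) {
        queue.offer(msg);
    }

    // Body of the single bundler thread's loop: dequeue and accumulate,
    // flushing on "max size reached" or "queue empty"
    public void drainOnce() {
        List<byte[]> bundle = new ArrayList<>();
        int size = 0;
        byte[] msg;
        while ((msg = queue.poll()) != null) {
            if (size + msg.length > MAX_BUNDLE_SIZE && !bundle.isEmpty()) {
                flush(bundle);               // max size reached: send bundle
                bundle = new ArrayList<>();
                size = 0;
            }
            bundle.add(msg);
            size += msg.length;
        }
        if (!bundle.isEmpty())
            flush(bundle);                   // queue empty: send what we have
    }

    private void flush(List<byte[]> bundle) {
        sentBundles.add(new ArrayList<>(bundle));
    }

    public int bundlesSent()       { return sentBundles.size(); }
    public int bundleSize(int i)   { return sentBundles.get(i).size(); }
}
```

Note how three messages enqueued before one drain pass leave as a single bundle, while a single enqueued message is sent on its own without waiting.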
>>
>> Impact on Infinispan:
>>
>> - Use DONT_BUNDLE instead of OOB if you don't want to bundle messages
>> - However, even DONT_BUNDLE might get deprecated
>> - If we have 1 sender invoking sync RPCs, we don't need to set
>> DONT_BUNDLE anymore
>> - If we have multiple senders invoking sync RPCs, performance should
>> get better as RPCs and responses are bundled
>> - Since bundling will result in message *batches* on the receiver,
>> performance should increase in general
>>
>>
>> Message batching
>>
>> Message bundles sent by a sender are received as message batches
>> (MessageBatch) by the receivers. When a batch is received, the batch
>> is passed up using up(MessageBatch).
>> Protocols can remove / replace / add messages in a batch and pass the
>> batch further up.
>> The advantage of a batch is that resources such as locks are acquired
>> only once for a batch of N messages rather than N times. Example: when
>> NAKACK2 receives a batch of 10 messages, it adds the 10 messages to
>> the receiver table in a bulk operation, which is more efficient than
>> doing this 10 times.
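The NAKACK2 example can be modeled in a few lines. This is a toy stand-in for the receiver table (not JGroups code) that just counts lock acquisitions, to show why one bulk insert beats ten single inserts:

```java
import java.util.Map;
import java.util.TreeMap;

// Toy model of why batching helps: the receiver table is locked once per
// batch instead of once per message (loosely modeled on NAKACK2's table).
public class BatchSketch {
    private final Map<Long, String> table = new TreeMap<>(); // seqno -> msg
    private int lockAcquisitions;

    // Per-message path: one lock acquisition per message
    public void addOne(long seqno, String msg) {
        synchronized (table) {
            lockAcquisitions++;
            table.put(seqno, msg);
        }
    }

    // Batched path: one lock acquisition for the whole batch of N messages
    public void addBatch(Map<Long, String> batch) {
        synchronized (table) {
            lockAcquisitions++;
            table.putAll(batch);
        }
    }

    public int lockAcquisitions() { return lockAcquisitions; }
    public int size()             { return table.size(); }

    // Helper: a batch of n consecutively numbered messages
    public static Map<Long, String> sampleBatch(int n) {
        Map<Long, String> batch = new TreeMap<>();
        for (long i = 1; i <= n; i++)
            batch.put(i, "msg-" + i);
        return batch;
    }
}
```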
>>
>> Further optimizations on batching (probably 3.4):
>>
>> - Remove similar ops, e.g. UNICAST3 acks for A:15, A:25 and A:35 can
>> be clubbed together into just ack(A:35)
>> - Merge similar headers, e.g. multicast messages 20-30 can be ordered
>> by seqno, and we simply send a range [20..30] and let the receiver
>> generate the headers on the fly
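The first optimization is simple to state in code: since acks are cumulative, acks for the same sender collapse to the highest seqno. A minimal sketch (illustrative names, not the UNICAST3 implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of ack collapsing: acks (sender, seqno) from one batch are reduced
// to a single cumulative ack per sender, e.g. A:15, A:25, A:35 -> ack(A:35).
public class AckCollapser {
    public static Map<String, Long> collapse(String[] senders, long[] seqnos) {
        Map<String, Long> highest = new HashMap<>();
        for (int i = 0; i < senders.length; i++)
            highest.merge(senders[i], seqnos[i], Math::max); // keep the max seqno
        return highest;
    }
}
```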
>>
>>
>> Async Invocation API (AIA)
>>
>> JGroups only passes up messages to Infinispan, which then uses its own
>> thread pool to deliver them. E.g. based on Pedro's code for TO, we
>> could parallelize delivery based on the target keys of the
>> transaction. E.g. if we have tx1 modifying keys {A,B,C} and tx2
>> modifying keys {T,U}, then tx1 and tx2 can be run concurrently.
>> If tx1 and tx2 modify overlapping key sets, then tx2 would be queued
>> and executed *after* tx1, not taking up a thread from the pool,
>> reducing the chances of the thread pool maxing out and also
>> ensuring different threads are not going to contend on the locks
>> on the same keys.
>> The implementation could be done in an interceptor fronting the
>> interceptor stack, which queues dependent TXs and - when ready to be
>> executed - sends them up the interceptor stack on a thread from the
>> internal pool.
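The dependency rule behind this can be sketched without any Infinispan code. The model below (all names hypothetical, threading elided for clarity) dispatches a transaction immediately if its key set is disjoint from all in-flight ones, and queues it otherwise, retrying queued transactions when an earlier one completes:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;

// Sketch of AIA's dependency rule: a tx runs concurrently with in-flight txs
// only if its key set does not overlap theirs; otherwise it waits in a queue
// instead of blocking a thread from the pool.
public class DependencySketch {
    private final Set<String> keysInFlight = new HashSet<>();
    private final Queue<Set<String>> waiting = new ArrayDeque<>();

    // Returns true if the tx was dispatched immediately, false if queued
    public boolean submit(Set<String> txKeys) {
        for (String k : txKeys) {
            if (keysInFlight.contains(k)) {  // overlap: execute *after* the
                waiting.add(txKeys);         // earlier tx, without holding a
                return false;                // pool thread in the meantime
            }
        }
        keysInFlight.addAll(txKeys);         // disjoint: run concurrently
        return true;
    }

    // Called when a tx finishes: release its keys and retry queued txs
    public void complete(Set<String> txKeys) {
        keysInFlight.removeAll(txKeys);
        int n = waiting.size();
        for (int i = 0; i < n; i++)          // one retry pass over the queue
            submit(waiting.poll());          // submit() re-queues if still blocked
    }

    public int queued() { return waiting.size(); }

    public static Set<String> keys(String... ks) {
        return new HashSet<>(Arrays.asList(ks));
    }
}
```

With tx1 = {A,B,C} and tx2 = {T,U} both run at once; a tx3 touching {B,X} is queued until tx1 completes, matching the example in the minutes.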
>>
>> Infinispan having its own thread pool means that JGroups threads will
>> not block anymore, e.g. trying to acquire a lock for a TX. The size of
>> those pools can therefore be reduced.
>> The advantage of AIA is that it's up to Infinispan, not JGroups, how
>> to deliver messages. JGroups delivers messages based on the order in
>> which they were sent by a sender (FIFO), whereas Infinispan can make
>> much more informed decisions as to how to deliver the messages.
>>
>>
>> Internal thread pool for JGroups
>>
>> All JGroups internal messages use the internal thread pool (message
>> flag=INTERNAL). Not having to share the OOB pool with apps (such as
>> Infinispan) means that internal messages can always be processed, and
>> are not discarded or blocked, e.g. by a maxed-out thread pool.
>> The internal pool can be switched off, and - if AIA is implemented in
>> Infinispan - the number of OOB and regular threads can be massively
>> reduced. The internal thread pool doesn't need to be big either.
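For reference, the internal pool is sized on the transport element of the protocol stack. The fragment below uses the dotted attribute names as I recall them from JGroups 3.x's shipped configs (udp.xml/tcp.xml); verify the exact names and defaults against your JGroups version before relying on them:

```xml
<UDP
    ...
    internal_thread_pool.enabled="true"
    internal_thread_pool.min_threads="1"
    internal_thread_pool.max_threads="4"
    internal_thread_pool.keep_alive_time="30000"
    internal_thread_pool.queue_enabled="true"
    internal_thread_pool.queue_max_size="500"
    internal_thread_pool.rejection_policy="discard" />
```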
>>
>>
>> UNICAST3
>>
>> Successor to UNICAST and UNICAST2, best of both worlds. Acks single
>> messages quickly, so we have no first-msg-lost or last-msg-lost issues
>> anymore. Doesn't generate many acks though.
>> Proposed to trigger an ACK only after a certain number of messages
>> rather than after any batch, to avoid ACKs on small batches:
>> https://issues.jboss.org/browse/JGRP-1594
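The proposed ack policy amounts to a simple counter. A sketch of the idea behind JGRP-1594 (not the actual UNICAST3 code; names are illustrative), relying on acks being cumulative so skipped acks are safe:

```java
// Sketch of the proposed ack policy: instead of acking every received batch,
// ack only once the number of messages delivered since the last ack reaches
// a threshold. One cumulative ack then covers everything delivered so far.
public class AckThresholdSketch {
    private final int threshold;
    private long delivered;      // messages delivered so far (highest seqno)
    private long lastAcked;      // seqno covered by the last ack we sent

    public AckThresholdSketch(int threshold) {
        this.threshold = threshold;
    }

    // Returns the seqno to ack, or -1 if this batch should not trigger an ack
    public long deliver(int batchSize) {
        delivered += batchSize;
        if (delivered - lastAcked >= threshold) {
            lastAcked = delivered;
            return delivered;    // cumulative ack for all delivered messages
        }
        return -1;               // small batch: stay quiet, ack later
    }
}
```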
>>
>> --
>> Bela Ban, JGroups lead (http://www.jgroups.org)
>>
>> _______________________________________________
>> infinispan-dev mailing list
>> infinispan-dev(a)lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
--
Bela Ban, JGroups lead (http://www.jgroups.org)