[infinispan-dev] L1 consistency for transactional caches.

Dan Berindei dan.berindei at gmail.com
Wed Jul 3 05:28:29 EDT 2013


On Tue, Jul 2, 2013 at 9:51 PM, Pedro Ruivo <pedro at infinispan.org> wrote:

>
>
> On 07/02/2013 04:55 PM, Dan Berindei wrote:
> >
> >
> >
> > On Tue, Jul 2, 2013 at 6:35 PM, Pedro Ruivo <pedro at infinispan.org> wrote:
> >
> >
> >
> >     On 07/02/2013 04:29 PM, Dan Berindei wrote:
> >      >
> >      >
> >      >
> >      > On Tue, Jul 2, 2013 at 5:59 PM, Pedro Ruivo <pedro at infinispan.org> wrote:
> >      >
> >      >     Hi all,
> >      >
> >      >     simple question: what consistency guarantees are we
> >      >     supposed to ensure?
> >      >
> >      >     I have the following scenario (happened in a test case):
> >      >
> >      >     NonOwner: remote get key
> >      >     BackupOwner: receives the remote get and replies (with the
> >      >     correct value)
> >      >     BackupOwner: puts the value in L1
> >      >
> >      >
> >      > I assume you meant NonOwner here?
> >
> >     yes
> >
> >      >
> >      >     PrimaryOwner: [at the same time] is committing a transaction
> >      >     that will update the key.
> >      >     PrimaryOwner: receives the remote get after sending the
> >      >     commit. The L1 invalidation is not sent to NonOwner.
> >      >
> >      >
> >      > At some point, BackupOwner has to execute the commit as well,
> >      > and it should send an InvalidateL1Command(key) to NonOwner.
> >      > However, one of the bugs that Will is working on could prevent
> >      > that invalidation from working (maybe
> >      > https://issues.jboss.org/browse/ISPN-2965).
> >
> >     only the primary owner is sending the invalidation command.
> >
> >
> > Oops, you're right! And I'm pretty sure I made the same assumptions in
> > my replies to Will's L1 thread...
> >
> > I guess we could make it work either by sending the invalidations from
> > all the owners (slowing down writes, because most of the time we'd send
> > the same commands twice), or by sending remote get commands ONLY to the
> > primary owner (which would slow down remote reads).
> >
> > Staggering remote GET commands won't work, because with staggering you
> > still have the possibility of the first request reaching the primary
> > owner only after the second request returned and the entry was written
> > to L1.
>
> This may be a stupid idea, but what if we only store the entry in L1 when
> the reply comes from the primary owner? Of course this would reduce the
> L1 hit ratio... :(
>
>
I like that... writing the entry to L1 only if the primary owner replied
first would also allow for staggering get requests.
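A minimal sketch of what that check could look like on the non-owner, with hypothetical names (Address, RemoteGetResponse, handleResponse) rather than the real Infinispan internals: the value is always returned to the caller, but it is only cached in L1 when the reply came from the primary owner, since only the primary owner sends L1 invalidations.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch, not the actual Infinispan API: cache a remote GET
// reply in L1 only when it came from the primary owner, so every L1 entry
// is covered by the primary owner's invalidations.
class L1Sketch {
    record Address(String name) {}
    record RemoteGetResponse(Address sender, Object value) {}

    private final Map<Object, Object> l1 = new ConcurrentHashMap<>();

    /** owners.get(0) is assumed to be the primary owner for the key. */
    Object handleResponse(Object key, List<Address> owners, RemoteGetResponse rsp) {
        // A value read from a backup owner might never be invalidated,
        // so only primary-owner replies are safe to keep in L1.
        if (rsp.sender().equals(owners.get(0))) {
            l1.put(key, rsp.value());
        }
        return rsp.value();
    }

    Object getFromL1(Object key) {
        return l1.get(key);
    }
}
```

With staggered GETs this composes naturally: whichever reply arrives first is returned to the caller, but L1 is only populated when that reply happens to be the primary owner's.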


>  >
> > Sending the invalidations from the tx originator after the commit might
> > work as well, but only if L1.invalidationThreshold == 0 and all the L1
> > invalidations are sent as multicasts.
> >
> >      >
> >      >
> >      >     The test finishes and I check the key's value in all the
> >      >     caches. The NonOwner returns the L1-cached value (== test
> >      >     fail).
> >      >
> >      >     IMO, this is a bug (or not), depending on what guarantees
> >      >     we provide.
> >      >
> >      >     wdyt?
> >      >
> >      >
> >      > It's a bug!
> >      >
> >      > IMO, at least in DIST_SYNC mode with sync commit, we should
> >      > guarantee that stale entries are removed from non-owners before
> >      > the TM.commit() call returns on the originator.
> >      >
> >      > With other configurations we probably can't guarantee that;
> >      > instead we should guarantee that stale entries are removed from
> >      > non-owners "soon" after the TM.commit() call returns on the
> >      > originator.
> >      >
> >      > I don't think it's ok to say that a stale entry can stay in L1
> >      > indefinitely in any configuration - otherwise why have L1
> >      > invalidation at all?
> >      >
> >      >
> >      >
> >      >
> >      > _______________________________________________
> >      > infinispan-dev mailing list
> >      > infinispan-dev at lists.jboss.org
> >      > https://lists.jboss.org/mailman/listinfo/infinispan-dev
> >
> >
> >
> >
> >
>

