[infinispan-dev] PutForExternalRead consistency
William Burns
mudokonman at gmail.com
Fri Nov 22 11:00:21 EST 2013
On Fri, Nov 22, 2013 at 10:51 AM, Pedro Ruivo <pedro at infinispan.org> wrote:
>
>
> On 11/22/2013 03:39 PM, Dan Berindei wrote:
>> I think I need to clarify my earlier email a bit: the problem I'm
>> worried about is that we could have a thread execute a
>> putForExternalRead(k, v1), then a put(k, v2), then a remove(k), and
>> end up with k = v1 in the cache. (Without the remove(k) we'd be ok,
>> because PFER uses putIfAbsent() under the hood.)
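>>
>> To make the interleaving concrete, it's something like this
>> (a hypothetical snippet, with cache being any clustered Cache, all
>> calls from one thread):
>>
>>     cache.putForExternalRead("k", "v1"); // replicated asynchronously
>>     cache.put("k", "v2");                // synchronous
>>     cache.remove("k");                   // synchronous
>>     // If the async PFER command reaches an owner only after the
>>     // remove, its putIfAbsent succeeds there and leaves k = v1.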
>>
>> This is quite different from the problem that Pedro raised, that of
>> different owners ending up with different values for the same key.
>> Will's suggestion to implement PFER as a regular put from a background
>> thread does fix that problem.
>
> Nope, it's the same as I described. We have no guarantee, so you can
> have nodes with v1 and others with nothing (different owners ending up
> with different values).
I think Dan's point was that even making PFER a sync put from another
thread won't change the fact that, after the remove completes, you don't
know whether the cache (primary and backups) holds v1 or nothing. But I
think that is just the contract of PFER.
>
> Without the remove(k), it is the same problem: some owners with v1 and
> others with v2...
To add to the previous point: if PFER were sync on another thread, the
put case could never happen, since PFER uses putIfAbsent and the put
always wins in that case.
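As a minimal sketch of why (assuming, as Dan noted, that PFER delegates
to putIfAbsent under the hood, with cache being any clustered Cache):

    // thread 1: PFER running as a sync putIfAbsent on a background thread
    cache.putIfAbsent("k", "v1"); // no-op if "k" is already present
    // thread 2: the application
    cache.put("k", "v2");         // unconditional write
    // Either ordering ends with k = v2: put first makes the putIfAbsent
    // a no-op; PFER first is simply overwritten by the put.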
>
>>
>> Writing the value only locally, as Galder suggested, would also
>> address my concern, but at the expense of extra misses from the cache,
>> especially in DIST mode. Hence my proposal to not support PFER in DIST
>> mode at all.
>
> REPL and DIST are similar now: they both send the command to the
> primary owner, and I think they both have the same problem. Making PFER
> local only would help (as suggested by Galder), though it's not so
> useful for DIST.
>
>>
>>
>>
>> On Fri, Nov 22, 2013 at 3:45 PM, Dan Berindei <dan.berindei at gmail.com> wrote:
>>
>> That doesn't sound right: we don't keep any lock for the duration of
>> the replication. In non-tx mode, we have to do an RPC to the primary
>> owner before we acquire any lock. So there's nothing stopping the PFER
>> from writing its value after a regular (sync) put, even when the put
>> was initiated after the PFER.
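>>
>> I.e., sketching just those two calls (hypothetical, same thread):
>>
>>     cache.putForExternalRead("k", "v1"); // returns immediately; the
>>                                          // replication is still in flight
>>     cache.put("k", "v2");                // sync, yet an owner may apply
>>                                          // it before the PFER arrives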
>>
>>
>> On Fri, Nov 22, 2013 at 2:49 PM, William Burns <mudokonman at gmail.com> wrote:
>>
>> I wonder if we are over-analyzing this. It seems the main issue is
>> that the replication is done asynchronously, and Infinispan has many
>> ways to make something asynchronous; in my opinion we just chose the
>> wrong one. Wouldn't it be easier to change PFER so that, instead of
>> passing along the FORCE_ASYNCHRONOUS flag, we just perform the
>> operation asynchronously using putIfAbsentAsync? That way the lock is
>> held for the duration of the replication, so the result should be
>> consistent with other operations. The user also regains control
>> faster, since the call doesn't even have to go through the local
>> interceptor chain. We could also change the putForExternalRead method
>> declaration to return a NotifiableFuture<Void> or something, so
>> callers know when the operation is completed (if they want).
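>>
>> A rough sketch of what I mean (hypothetical, glossing over flags and
>> classloader handling, and assuming the NotifyingFuture<V> that
>> putIfAbsentAsync already returns):
>>
>>     // inside the cache implementation, roughly:
>>     public NotifyingFuture<V> putForExternalRead(K key, V value) {
>>         // putIfAbsentAsync runs the regular sync putIfAbsent on
>>         // another thread, so the lock spans the whole replication
>>         return putIfAbsentAsync(key, value);
>>     }
>>
>> Callers that don't care about completion can simply ignore the
>> returned future.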
>>
>> - Will
>>
>> On Thu, Nov 21, 2013 at 9:54 AM, Dan Berindei <dan.berindei at gmail.com> wrote:
>> >
>> >
>> >
>> > On Thu, Nov 21, 2013 at 12:35 PM, Galder Zamarreño <galder at redhat.com> wrote:
>> >>
>> >>
>> >> On Nov 18, 2013, at 12:42 PM, Dan Berindei <dan.berindei at gmail.com> wrote:
>> >>
>> >> >
>> >> >
>> >> >
>> >> > On Mon, Nov 18, 2013 at 9:43 AM, Galder Zamarreño <galder at redhat.com> wrote:
>> >> >
>> >> > On Nov 14, 2013, at 1:20 PM, Pedro Ruivo <pedro at infinispan.org> wrote:
>> >> >
>> >> > > Hi,
>> >> > >
>> >> > > Simple question: shouldn't PFER ensure some consistency?
>> >> > >
>> >> > > I know that PFER is asynchronous, but (IMO) it can create
>> >> > > inconsistencies in the data. If the primary owner replicates a
>> >> > > PFER followed by a PUT for the same key (the PFER is sent async
>> >> > > and the lock is released immediately), we have no way to be sure
>> >> > > whether the PFER is delivered before or after the PUT on all the
>> >> > > backup owners.
>> >> > >
>> >> > > comments?
>> >> >
>> >> > Assuming that PFER and PUT happen in the same thread, we normally
>> >> > rely on JGroups' ordering: we send the first put without waiting
>> >> > for a response, and then send the second one. That should
>> >> > guarantee the order in which the puts are received on the other
>> >> > nodes, but beyond that, yeah, there's a risk that it could happen.
>> >> > PFER and PUT for a given key normally do happen in the same thread
>> >> > in cache-heavy use cases such as Hibernate 2LC, but there's no
>> >> > guarantee.
>> >> >
>> >> > I don't think that's correct. If the cache is synchronous, the PUT
>> >> > will be sent as an OOB message, and as such it can be delivered on
>> >> > the target before the previous PFER command. That's regardless of
>> >> > whether the PFER command was sent as a regular or as an OOB
>> >> > message.
>> >>
>> >> ^ Hmmmm, that's definitely risky. I think we should make PFER local
>> >> only.
>> >>
>> >> The fact that PFER is asynchronous is nice to have. IOW, if you read
>> >> a value from a database and you want to store it in the cache for
>> >> later reads, replicating it asynchronously is just so that other
>> >> nodes can take advantage of the value being in the cache. Since it's
>> >> asynchronous, some nodes could fail to apply it, but that's fine,
>> >> since you can go to the database and re-retrieve it from there. So,
>> >> making PFER local only would be the degenerate case, where all nodes
>> >> fail to apply it except the local node, which is fine. This is
>> >> better than having the reordering above.
>> >>
>> >> In a chat I had with Dan, he pointed out that having PFER local only
>> >> would be problematic for DIST mode w/ L1 enabled, since the local
>> >> write would not invalidate other nodes. But this is fine, because
>> >> PFER only really makes sense for situations where Infinispan is used
>> >> as a cache. So, if the data is in the DB, you might as well go there
>> >> (1 network trip), as opposed to asking the other nodes for the data
>> >> and then, in the worst case, the database too (2 network trips).
>> >>
>> >> PFER is really designed for replication or invalidation use cases,
>> >> which are precisely the ones configured for Hibernate 2LC.
>> >>
>> >> Thoughts?
>> >>
>> >
>> > +1 to make PFER local-only in replicated caches, but I now think we
>> > should go all the way and disallow PFER completely in dist caches.
>> >
>> > I still think having L1 enabled would be a problem, because a regular
>> > put() won't invalidate the entry on all the nodes that did a PFER for
>> > that key (there are no requestors, and even if we assume that we do a
>> > remote get before the PFER we'd still have race conditions).
>> >
>> > With L1 disabled, we have the problem that you mentioned: we're
>> > trying to read the value from the proper owners, but we never write
>> > it to the proper owners, so the hit ratio will be pretty bad. Using
>> > the SKIP_REMOTE_LOOKUP flag on reads, we'll avoid the extra RPC in
>> > Infinispan, but that will make the hit ratio even worse. E.g. in a
>> > 4-node cluster with numOwners=2, the hit ratio will never go above
>> > 50%.
>> >
>> > I don't think anyone would use a cache knowing that its hit ratio can
>> > never get above 50%, so we should just save ourselves some effort and
>> > stop supporting PFER in DIST mode.
>> >
>> > Cheers
>> > Dan