I didn't read all the details of the optimistic lock implementation,
but I want to warn here against this interpretation of the
SKIP_REMOTE_LOOKUP flag for these purposes: it should only apply to
REMOTE lookups, not imply a general-purpose "I'm ignoring the return
value". That would be misleading for users, as it's not how things
work: it doesn't automatically imply, for example, also skipping the
cache store load or even a local data container GET hit, which could
affect access statistics and eviction decisions.
https://github.com/infinispan/infinispan/pull/1221#r1310549
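To make the distinction concrete, here's a rough sketch, not taken from
the PR itself; it assumes the AdvancedCache flag API and that
SKIP_CACHE_LOAD and IGNORE_RETURN_VALUES are available in the version
in use:

    import org.infinispan.AdvancedCache;
    import org.infinispan.Cache;
    import org.infinispan.context.Flag;

    public class FlagExample {
       static void write(Cache<String, String> cache) {
          AdvancedCache<String, String> advanced = cache.getAdvancedCache();

          // SKIP_REMOTE_LOOKUP only avoids fetching the previous value
          // from a remote owner...
          advanced.withFlags(Flag.SKIP_REMOTE_LOOKUP).put("k", "v");

          // ...it does not also skip the cache store load or the local data
          // container read; those concerns have their own flags:
          advanced.withFlags(Flag.SKIP_REMOTE_LOOKUP,
                             Flag.SKIP_CACHE_LOAD,
                             Flag.IGNORE_RETURN_VALUES).put("k", "v");
       }
    }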
On 25 July 2012 14:27, Mircea Markus <mircea.markus(a)jboss.com> wrote:
On 25 Jul 2012, at 12:26, Galder Zamarreño wrote:
On Jul 25, 2012, at 1:14 PM, Mircea Markus wrote:
On 24 Jul 2012, at 20:44, Galder Zamarreño wrote:
Mircea, one last thing. Why is there a check for local here?
https://github.com/infinispan/infinispan/blob/master/core/src/main/java/o...
That basically means that concurrent cluster-wide conditional removes won't
work with OL + RR + writeSkew.
Is there a reason why you added this local check?
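For reference, the OL + RR + writeSkew setup under discussion would be
configured roughly like this; just a sketch using the programmatic
ConfigurationBuilder API, with method names as in the 5.x configuration:

    import org.infinispan.configuration.cache.CacheMode;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.configuration.cache.VersioningScheme;
    import org.infinispan.transaction.LockingMode;
    import org.infinispan.util.concurrent.IsolationLevel;

    public class WriteSkewConfigSketch {
       static ConfigurationBuilder optimisticWithWriteSkew() {
          ConfigurationBuilder builder = new ConfigurationBuilder();
          // Optimistic locking, repeatable read and versioned write skew checks
          builder.clustering().cacheMode(CacheMode.DIST_SYNC)
                 .transaction().lockingMode(LockingMode.OPTIMISTIC)
                 .versioning().enable().scheme(VersioningScheme.SIMPLE)
                 .locking().isolationLevel(IsolationLevel.REPEATABLE_READ)
                           .writeSkewCheck(true);
          return builder;
       }
    }

The conditional remove in question is then just the ConcurrentMap-style
cache.remove(key, expectedValue) call issued concurrently from several nodes.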
That code was added with ISPN-1941 and only handles write skew (ws) for
local caches; Manik might be able to comment a bit more on it.
The code that handles ws for distributed caches is in the
VersionedEntryWrappingInterceptor (VEWI).
I don't think the code you pointed to belongs in the
OptimisticLockingInterceptor (OLI), for two reasons: the OLI should handle
the locking, and write skew checking is an orthogonal concern. Also, the rest
of the write skew checking is handled in the VEWI, so this logic should be
placed there as well.
TBH I think the write skew check logic deserves its own dedicated interceptor,
as the code in this area is spread over too many places: the OLI, the VEWI and
some not-so-nice static methods in ClusteringDependentLogic. At least for me it
is quite hard to follow as it is now. Wdyt?
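To make that concrete, here is a purely hypothetical sketch of such a
dedicated interceptor; the class name and the body of the check are made
up, only the CommandInterceptor plumbing is the existing API, and the
actual check would reuse the logic currently in the VEWI and
ClusteringDependentLogic:

    import org.infinispan.commands.tx.PrepareCommand;
    import org.infinispan.commands.tx.VersionedPrepareCommand;
    import org.infinispan.context.impl.TxInvocationContext;
    import org.infinispan.interceptors.base.CommandInterceptor;

    public class WriteSkewCheckInterceptor extends CommandInterceptor {

       @Override
       public Object visitPrepareCommand(TxInvocationContext ctx, PrepareCommand command) throws Throwable {
          // Do all write skew checking in one place, at prepare time,
          // for both local and distributed caches.
          if (command instanceof VersionedPrepareCommand) {
             performWriteSkewCheck(ctx, (VersionedPrepareCommand) command);
          }
          return invokeNextInterceptor(ctx, command);
       }

       private void performWriteSkewCheck(TxInvocationContext ctx, VersionedPrepareCommand command) {
          // Placeholder: compare the version each entry was read at with the
          // current version and fail the prepare on a mismatch, reusing the
          // checks that today live in the VEWI and ClusteringDependentLogic.
       }
    }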
Good points. So, we need a better way of doing write skew checks, both locally
and cluster-wide, from a more centralized place then?
Let's see what Manik and others say, but I won't be able to properly fix this
before I go on holidays. There's a valid workaround for it though, so it's not
massively urgent.
+1.