On 13 March 2012 16:06, Dan Berindei <dan.berindei@gmail.com> wrote:
> On Tue, Mar 13, 2012 at 3:57 PM, Sanne Grinovero
> <sanne@infinispan.org> wrote:
>> As it already came up during other design discussions, we should make
>> a very clear split between a logical lock (something owned by a
>> transaction) and an internal entry lock.
>> A logical lock needs to be addressable uniquely; striping or
>> semaphores are not a viable option, as these locks:
>> - are long lived (definitely more than 1 RPC)
>> - defeat any attempt by end users / intermediate patterns to avoid deadlocks
>> - likely cover only a small ratio of overall keys, so should not be
>> too expensive to store
>>
> While the locks may not be very expensive to store, they are complex
> objects, and creating them over and over again can get quite expensive.
> For instance, if 10 tx threads wait for a key lock, the lock will keep
> a queue with the 10 waiting threads. When the tx that owned the key
> unlocks it, the key is removed, and all 10 threads wake up and try to
> create a new lock. Only one of them succeeds; the others add
> themselves to the queue once again.
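That's this pattern, roughly (a minimal sketch with made-up names, not
our actual LockContainer):

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReentrantLock;

    // Every acquire may allocate a fresh ReentrantLock; on release the
    // mapping is removed, so all waiters wake up only to race at
    // installing yet another new lock. (Sketch assumes release happens
    // on the acquiring thread.)
    class PerKeyLockContainer {
        private final ConcurrentHashMap<Object, ReentrantLock> locks =
                new ConcurrentHashMap<Object, ReentrantLock>();

        void acquire(Object key) {
            for (;;) {
                ReentrantLock ourLock = new ReentrantLock();
                ourLock.lock();
                ReentrantLock existing = locks.putIfAbsent(key, ourLock);
                if (existing == null) {
                    return; // installed our lock: we own the key
                }
                ourLock.unlock();  // wasted allocation
                existing.lock();   // queue behind the current owner
                existing.unlock(); // owner is gone; retry installing ours
            }
        }

        void release(Object key) {
            ReentrantLock lock = locks.remove(key); // wakes all waiters
            lock.unlock();
        }
    }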
Agreed, but if you think of a lock as a logical attribute of the entry,
it's essentially a boolean, not an object which needs pooling.
You don't even need "expensive" AtomicBoolean instances, assuming you
have the "structure locks" in place anyway
(not sure about the name; I mean the ones we discuss below).
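Concretely, something like this (just a sketch; I'm assuming the
"structure lock" is the per-segment lock we need anyway to navigate
the container):

    // The "lock" is a flag (plus owner) on the entry itself, flipped
    // while holding the segment's structure lock. No lock object is
    // ever allocated or pooled. Names are illustrative.
    class CacheEntry {
        volatile Object value;
        boolean locked;   // guarded by the segment's structure lock
        Object lockOwner; // e.g. the GlobalTransaction owning the key
    }

    class Segment {
        private final Object structureLock = new Object();

        boolean tryLockEntry(CacheEntry e, Object owner) {
            synchronized (structureLock) {
                if (e.locked) {
                    return false; // caller may wait, retry or give up
                }
                e.locked = true;
                e.lockOwner = owner;
                return true;
            }
        }

        void unlockEntry(CacheEntry e, Object owner) {
            synchronized (structureLock) {
                if (e.locked && e.lockOwner == owner) {
                    e.locked = false;
                    e.lockOwner = null;
                }
            }
        }
    }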
> This may not be a problem in our perf tests, because we update keys
> randomly using a uniform distribution, but a regular application will
> almost certainly have a non-uniform distribution, and a few keys will
> be highly contended compared to the rest. This makes me think that
> LIRS could be a nice fit for a LockContainer implementation.
you lost me here :) We don't want to drop lock information?
>> Internal entry locks are very short lived (they never live across
>> multiple RPCs) and essentially match what we have as segment locks in
>> the CHM (which is already striping == concurrency level), just that
>> the CHM model doesn't fit our needs well: we need explicit control
>> of these, for example for when a value is being moved from the
>> DataContainer to a CacheLoader.
>>
> Sanne, I'm not sure about surfacing the DataContainer's locking, as
> ConcurrentHashMapV8 doesn't have segment locks any more and we'll
> probably want to move our BCHM towards that model as well in the
> future.
> For the situation you mentioned, I would prefer making the CacheLoader
> a participant in cache transactions and holding the logical lock while
> passivating the entry.
In many cases you don't have a transaction, or at least you don't need
to agree on an eviction operation with other nodes.
Still, it's a nice idea to treat moving a value from the memory
container to the CacheLoader as transactional! You should open a JIRA
for that.
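To capture the idea for the JIRA, a hypothetical sketch (none of these
names or signatures are our real SPI):

    import java.util.concurrent.ConcurrentMap;

    // Passivation takes the logical key lock first, makes the entry
    // durable in the store, and only then removes it from memory, so no
    // reader can catch the value "in flight" between the two locations.
    class Passivator {
        interface KeyLockManager {
            void lock(Object key, Object owner);
            void unlock(Object key, Object owner);
        }
        interface Store {
            void store(Object key, Object value);
        }

        private final KeyLockManager lockManager;
        private final ConcurrentMap<Object, Object> dataContainer;
        private final Store cacheStore;
        private final Object passivationOwner = new Object();

        Passivator(KeyLockManager lm, ConcurrentMap<Object, Object> dc,
                   Store cs) {
            this.lockManager = lm;
            this.dataContainer = dc;
            this.cacheStore = cs;
        }

        void passivate(Object key) {
            lockManager.lock(key, passivationOwner); // the logical lock
            try {
                Object value = dataContainer.get(key);
                if (value != null) {
                    cacheStore.store(key, value); // durable before removal
                    dataContainer.remove(key);
                }
            } finally {
                lockManager.unlock(key, passivationOwner);
            }
        }
    }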
Think as well about CAS operations on our entries: such in-place swaps
can be done almost atomically. "Almost" in the sense that some lock on
the structure will be needed to locate the entry, but the value itself
can be swapped in place without the lock scope ever leaving a specific
method.
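For example (sketch, assuming the entry keeps its value in an
AtomicReference):

    import java.util.concurrent.atomic.AtomicReference;

    // An in-place swap is a plain CAS retry loop: no lock scope leaves
    // this method, and the structure lock is only needed by whoever
    // locates the entry in the container.
    class SwappableEntry {
        interface Transform { Object apply(Object current); }

        private final AtomicReference<Object> ref;

        SwappableEntry(Object initial) {
            ref = new AtomicReference<Object>(initial);
        }

        Object update(Transform f) {
            for (;;) {
                Object current = ref.get();
                Object next = f.apply(current);
                if (ref.compareAndSet(current, next)) {
                    return next;
                }
                // lost the race: re-read and retry, still without locks
            }
        }
    }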
And let's extend this to MVCC CAS with vector clocks: you definitely
don't need a cluster-wide lock to apply changes on the entry
container.
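A toy sketch of what I mean (sibling/conflict resolution is elided, and
a real clock comparison would be more careful):

    import java.util.Map;
    import java.util.concurrent.atomic.AtomicReference;

    // Each version is a (value, vector clock) pair; an incoming update
    // is applied with a local CAS only if its clock strictly dominates
    // the current one. No cluster-wide lock anywhere.
    class VersionedEntry {
        static final class Version {
            final Object value;
            final Map<String, Long> clock; // node name -> counter
            Version(Object value, Map<String, Long> clock) {
                this.value = value;
                this.clock = clock;
            }
        }

        private final AtomicReference<Version> ref =
                new AtomicReference<Version>();

        // a dominates b: a >= b on every node, and > on at least one
        static boolean dominates(Map<String, Long> a, Map<String, Long> b) {
            boolean strictlyGreater = false;
            for (Map.Entry<String, Long> e : b.entrySet()) {
                long inA = a.containsKey(e.getKey()) ? a.get(e.getKey()) : 0L;
                if (inA < e.getValue()) return false;
            }
            for (Map.Entry<String, Long> e : a.entrySet()) {
                long inB = b.containsKey(e.getKey()) ? b.get(e.getKey()) : 0L;
                if (e.getValue() > inB) strictlyGreater = true;
            }
            return strictlyGreater;
        }

        boolean tryApply(Version incoming) {
            for (;;) {
                Version current = ref.get();
                if (current != null
                        && !dominates(incoming.clock, current.clock)) {
                    return false; // stale or concurrent: resolve, don't clobber
                }
                if (ref.compareAndSet(current, incoming)) {
                    return true;
                }
                // someone swapped concurrently: re-check and retry
            }
        }
    }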
>> In conclusion, I'd not spend time arguing about small improvements of
>> the existing design - at least it's serving us well for now.
> The discussion may be a bit premature, but it's a nice change from
> thinking about NBST ;-)
I'm not saying it's premature or pointless, sorry... I meant: let's
push this beyond little optimisations ;-)
--Sanne
> Cheers
> Dan