On Tue, Jul 5, 2011 at 12:23 PM, Galder ZamarreƱo <galder(a)redhat.com> wrote:
On Jul 4, 2011, at 11:25 AM, Sanne Grinovero wrote:
> I agree they don't make sense, but only in the sense of the exposed API
> during a transaction: some time ago I admit I was expecting them to
> just work: the API is there, nice public methods in the public
> interface with javadocs explaining that this was exactly what I was
> looking for, no warnings, no failures. Even worse, everything works fine
> when running a local test because, the way locks currently work, they
> are acquired locally first; so unless you're running such a test in DIST
> mode, and happen to *not* be the owner of the key under test, people
> won't even notice that this is not supported.
>
> Still being able to use them is very important, also in combination
> with transactions: I might be running blocks of transactional code
> (like a CRUD operation via OGM) and still need to advance a
> sequence for primary key generation. This needs to be an atomic
> operation, and I should really not forget to suspend the transaction.
Fair point. At first glance, the best way to deal with this is suspending the tx, since that
guarantees the API contract while not forcing locks to be held for too long.
I'd advise though that whoever works on this needs to go over existing use
cases and see whether the end result could differ somehow if this change gets applied. If any
divergences are found or are to be expected, these need to be thoroughly documented.
I've gone through some cases and the end results would not differ at first glance if the
atomic ops suspend the txs. The only thing that would change would be the expectations around
lock acquisition timeouts for atomic ops within txs.
For example:
Cache contains: k1=galder

If the atomic op suspends the tx:
1. Tx1 does a cache.replace(k1, "galder", "sanne") -> suspends tx
   and applies the change -> k1=sanne now
2. Tx2 does a cache.replace(k1, "galder", "manik") -> suspends tx
   and is not able to apply the change
3. Tx2 commits
4. Tx1 commits
End result: k1=sanne

With the current behaviour (lock acquired within the tx):
1. Tx1 does a cache.replace(k1, "galder", "sanne") -> acquires the lock
2. Tx2 does a cache.replace(k1, "galder", "manik") -> waits for the lock
3. Tx2 rolls back -> timed out acquiring the lock
4. Tx1 commits -> applies the change
End result: k1=sanne
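For reference, here's a minimal sketch of what the suspend-and-resume pattern looks like from calling code today, assuming a JTA TransactionManager obtained from the AdvancedCache (the method and key names are made up for illustration):

    import javax.transaction.Transaction;
    import javax.transaction.TransactionManager;
    import org.infinispan.AdvancedCache;

    // Run an atomic operation outside the scope of the ongoing transaction.
    boolean replaceOutsideTx(AdvancedCache<String, String> cache,
                             String key, String expected, String newValue) throws Exception {
        TransactionManager tm = cache.getTransactionManager();
        Transaction ongoing = tm.suspend();   // detach the current tx from this thread
        try {
            // the replace now runs on its own, with its usual compare-and-swap semantics
            return cache.replace(key, expected, newValue);
        } finally {
            if (ongoing != null) {
                tm.resume(ongoing);           // re-attach the original tx
            }
        }
    }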
Galder, the big difference would be that with optimistic transactions
you don't acquire the lock on the spot, so the tx will be rolled back
if someone else modifies the key between your get and the prepare.
We could make a compromise: instead of forcing the atomic operations
to happen outside the transaction, we could force them to always use
pessimistic locking. Actually Mircea and I discussed this on Friday
evening too, but I forgot about it until now.
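Just to illustrate what pessimistic would mean at the call site: something like the sketch below should already be doable by hand, assuming Flag.FORCE_WRITE_LOCK behaves here as a select-for-update style read that locks the key for the rest of the tx. It's not the same as the CHM-style contract, just the same spirit:

    import org.infinispan.AdvancedCache;
    import org.infinispan.context.Flag;

    // Inside a transaction: lock the key first, then emulate putIfAbsent by hand.
    // The lock stays held until the transaction commits or rolls back.
    String putIfAbsentPessimistic(AdvancedCache<String, String> cache,
                                  String key, String value) {
        String current = cache.withFlags(Flag.FORCE_WRITE_LOCK).get(key); // write-locks the key
        if (current == null) {
            cache.put(key, value);
            return null;       // mimic ConcurrentMap.putIfAbsent's "was absent" result
        }
        return current;        // key already present, nothing written
    }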
After all, Sanne has two use cases for atomic operations: sequences and
reference counts. Sequences can and should happen outside
transactions, but as we discussed on the train, we could make each
node/thread acquire a range of, say, 100 sequence numbers at a time and
remove the need for any locking to get a new sequence number.
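A sketch of that range-based approach (outside any transaction), assuming a plain CAS loop on a counter key; the key name and the idea of returning the start of the reserved block are just for illustration:

    import org.infinispan.Cache;

    // Reserve a block of sequence numbers with a CAS loop, then hand them out locally.
    // No transaction and no explicit locking; contention is only on the reservation itself.
    long reserveBlock(Cache<String, Long> cache, String counterKey, long blockSize) {
        while (true) {
            Long current = cache.get(counterKey);
            if (current == null) {
                if (cache.putIfAbsent(counterKey, blockSize) == null) {
                    return 0L;                                  // we own [0, blockSize)
                }
            } else if (cache.replace(counterKey, current, current + blockSize)) {
                return current;                                 // we own [current, current + blockSize)
            }
            // somebody else won the race, retry
        }
    }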
Reference counts on the other hand should remain inside the
transaction, because you would have to do the refcount rollback by
hand (plus I don't think you should let other transactions see the
modified refcount before they see the new data).
Dan
>
> Sanne
>
> 2011/7/4 Galder ZamarreƱo <galder(a)redhat.com>:
>> Do these atomic operations really make sense within an (optimistic) transaction?
>>
>> For example, putIfAbsent(): it stores a k,v pair if the key is not present. But
>> the key to its usability is that the return value of putIfAbsent can tell you whether
>> the put succeeded or not.
>>
>> Once you go into transactions, the result is only valid once the transaction has
>> been prepared, unless pessimistic locking as defined in
>> http://community.jboss.org/docs/DOC-16973 is in use, and that's already pretty
>> confusing IMO.
>>
>> I get the feeling that those atomic operations are particularly useful when
>> transactions are not used, because they allow you to reduce the cache operations to one, hence
>> avoiding the need to use a lock or synchronized block, or in our case, a transaction.
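(That is, outside a transaction the single call replaces the classic check-then-act block; a quick sketch of the two styles, with invented names and a hypothetical Config type:)

    import org.infinispan.Cache;

    // Without the atomic operation: check-then-act needing external coordination
    // (and a JVM-local lock doesn't even protect against other nodes).
    void initWithLock(Cache<String, Config> cache, Object lock, Config defaults) {
        synchronized (lock) {
            if (cache.get("config") == null) {
                cache.put("config", defaults);
            }
        }
    }

    // With the atomic operation: one call, no explicit lock, atomic cluster-wide.
    void initAtomically(Cache<String, Config> cache, Config defaults) {
        cache.putIfAbsent("config", defaults);
    }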
>>
>> On Jun 30, 2011, at 3:11 PM, Sanne Grinovero wrote:
>>
>>> Hello all,
>>> some team members had a meeting yesterday, one of the discussed
>>> subjects was about using atomic operations (putIfAbsent, etc..).
>>> Mircea just summarised it in the following proposal:
>>>
>>> The atomic operations, as defined by ConcurrentHashMap, don't fit
>>> well within the scope of an optimistic transaction: this is because there
>>> is a discrepancy between the value returned by the operation and
>>> whether the operation is actually applied or not.
>>> E.g. putIfAbsent(k, v) might indicate success, as there's no entry for k in
>>> the scope of the current transaction, but in fact there might be a
>>> value committed by another transaction, hidden by the fact that we're
>>> running in repeatable read mode.
>>> Later on, at prepare time, when the same operation is applied on the
>>> node that actually holds k, it might not succeed as another
>>> transaction has updated k in between, but the return value of the
>>> method was already evaluated long before this point.
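(To make the discrepancy concrete, here's a sketch of the interleaving being described, with made-up keys and values:)

    import javax.transaction.TransactionManager;
    import org.infinispan.Cache;

    // Optimistic tx with repeatable read; initially there is no entry for "k".
    void staleReturnValue(Cache<String, String> cache, TransactionManager tm) throws Exception {
        tm.begin();
        cache.get("k");                               // null becomes this tx's repeatable-read view of "k"
        // ... another transaction now commits cache.put("k", "other") ...
        String prev = cache.putIfAbsent("k", "mine");
        // prev == null, i.e. the call reports success based on the tx-local snapshot,
        // even though the prepare-time outcome on the node owning "k" may be different.
        tm.commit();
    }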
>>> In order to solve this problem, if an atomic operation happens within
>>> the scope of a transaction, Infinispan eagerly acquires a lock on the
>>> remote node. This lock is held for the entire duration of the
>>> transaction, and it is an expensive lock as it involves an RPC. If
>>> keeping the lock remotely for a potentially long time represents a
>>> problem, the user can suspend the running transaction, run the
>>> atomic operation outside the transaction's scope, then resume the
>>> transaction.
>>>
>>>
>>> In addition to this, what would you think about adding a flag to
>>> these methods which acts as suspending the transaction just before and
>>> resuming it right after? I don't know what the cost of suspending &
>>> resuming a transaction is, but such a flag could optionally be optimized
>>> in future by just ignoring the current transaction instead of really
>>> suspending it, or by applying other clever tricks we might come across.
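(Purely to illustrate the call-site ergonomics being proposed; the flag name below is hypothetical and doesn't exist today:)

    // Hypothetical API: run the atomic op outside the surrounding transaction's scope.
    // Flag.SUSPEND_TX is invented here and stands in for whatever name gets picked.
    cache.getAdvancedCache()
         .withFlags(Flag.SUSPEND_TX)
         .putIfAbsent(key, value);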
>>>
>>> I also think that we should discuss whether such a behaviour shouldn't be
>>> the default - anybody using an atomic operation is going to make some
>>> assumptions which are clearly incompatible with the transaction, so
>>> I'm wondering what the path to "least surprise" is here for the default
>>> invocation.
>>>