Email summary:
Number of lines: 188
Number of useful lines (strict): 1 (0.53%)
Number of useful lines (contextual): 12 (6.3%)
Position of the useful line in the amount of data: 57 (had to scroll
through 30% of the data to find it, and through the additional 70% as I
was not sure another one wasn't lost somewhere later)
I'm sure one can do better than that.
On Jul 5, 2011, at 15:15, Sanne Grinovero wrote:
2011/7/5 Galder Zamarreño <galder(a)redhat.com>:
>
> On Jul 5, 2011, at 11:46 AM, Sanne Grinovero wrote:
>
>> 2011/7/5 Galder Zamarreño <galder(a)redhat.com>:
>>>
>>>
>>> On Jul 4, 2011, at 11:25 AM, Sanne Grinovero wrote:
>>>
>>>> I agree they don't make sense, but only in the sense of the API
>>>> exposed during a transaction: some time ago, I admit, I was expecting
>>>> them to just work: the API is there, nice public methods in the
>>>> public interface with javadocs explaining that this was exactly what
>>>> I was looking for, no warnings, no failures. Even worse, everything
>>>> works fine when running a local test because of how the locks
>>>> currently work: they are acquired locally first, so unless you're
>>>> running such a test in DIST mode, and happen to be *not* the owner of
>>>> the key being tested, people won't even notice that this is not
>>>> supported.
>>>>
>>>> Still being able to use them is very important, also in combination
>>>> with transactions: I might be running blocks of transactional code
>>>> (like a CRUD operation via OGM) and still need to advance a sequence
>>>> for primary key generation. This needs to be an atomic operation, and
>>>> I should really not forget to suspend the transaction.
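
To make the sequence use case concrete, here is a minimal sketch of
advancing such a counter with the atomic operations alone; the class,
cache and key names are illustrative (not OGM code), and the call needs
to run outside the transaction's scope, or with it suspended, for the
result to be reliable:

    import org.infinispan.Cache;

    // Illustrative helper only: advance a sequence counter with a
    // compare-and-swap loop built on the atomic cache operations.
    final class SequenceSketch {
        static long next(Cache<String, Long> sequences, String key) {
            while (true) {
                Long current = sequences.get(key);
                if (current == null) {
                    // first use: try to install the initial value
                    if (sequences.putIfAbsent(key, 1L) == null) return 1L;
                } else {
                    long candidate = current + 1;
                    // replace() only succeeds if nobody advanced the counter meanwhile
                    if (sequences.replace(key, current, candidate)) return candidate;
                }
            }
        }
    }
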
>>>
>>> Fair point. At first glance, the best way to deal with this is
>>> suspending the tx cos that guarantees the API contract while not
>>> forcing locks to be acquired for too long.
>>>
>>> I'd advise though that whoever works on this needs to go over
>>> existing use cases and see if the end result could differ somehow if
>>> this change gets applied. If any divergences are found and are to be
>>> expected, these need to be thoroughly documented.
>>>
>>> I've gone through some cases and end results would not differ at
>>> first glance if the atomic ops suspend the txs. The only thing that
>>> would change would be the expectations of lock acquisition timeouts
>>> by atomic ops within txs.
>>>
>>> For example:
>>>
>>> Cache contains: k1=galder
>>>
>>> 1. Tx1 does a cache.replace(k1, "galder", "sanne") -> suspends tx
>>> and applies change -> k1=sanne now
>>> 2. Tx2 does a cache.replace(k1, "galder", "manik") -> suspends tx
>>> and is not able to apply the change
>>> 3. Tx2 commits
>>> 4. Tx1 commits
>>> End result: k1=sanne
>>
>> Right.
>> To clarify, this is what would happen with the current implementation:
>>
>> 1. Tx2 does a cache.get(k1) -> it reads the value of k1, and is
>> returned "galder"
>> 2. Tx1 does a cache.replace(k1, "galder", "sanne") -> k1="sanne" in
>> the scope of this transaction, but not seen by other txs
>> 3. Tx2 does a cache.replace(k1, "galder", "manik") -> k1="manik" is
>> assigned, because due to repeatable read we're still seeing "galder"
>> 4. Tx2 & Tx1 commit
>>
>> ..and the end result depends on who commits first.
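
To make the repeatable-read discrepancy concrete, a minimal sketch of
what Tx2 observes when the replace is evaluated inside the transaction's
scope; the class and the TransactionManager wiring are illustrative:

    import javax.transaction.TransactionManager;
    import org.infinispan.Cache;

    // Illustrative sketch of Tx2 under repeatable read.
    final class RepeatableReadSketch {
        static void tx2(Cache<String, String> cache, TransactionManager tm) throws Exception {
            tm.begin();
            cache.get("k1");                         // reads "galder" into the tx context
            // ... meanwhile Tx1 commits k1="sanne" ...
            boolean swapped = cache.replace("k1", "galder", "manik");
            // swapped == true here: repeatable read still shows "galder",
            // but whether "manik" survives depends on which tx commits first
            tm.commit();
        }
    }
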
>
> The sequence of events above is what I suppose would happen with the
> suspended tx mode, not the current impl.
Thanks, I just felt the need to double-check we were on the same page.
>>> 1. Tx1 does a cache.replace(k1, "galder", "sanne") -> acquires lock
>>> 2. Tx2 does a cache.replace(k1, "galder", "manik") -> waits for lock
>>> 3. Tx2 rollback -> times out acquiring lock
>>> 4. Tx1 commits -> applies change
>>> End result: k1=sanne
>>
>> I'm not sure we're on the same page here. 1) should apply the
>> operation right away, so even if it might very briefly have to acquire
>> a lock on it, it's immediately released (not at the end of the
>> transaction), so why would Tx2 have to wait for it to the point that
>> it needs to roll back?
>
> This is what I was trying to picture as the current implementation.
> It's true that it should apply the operation, but it also acquires the
> lock, at least in local mode, and the locks are only released at
> prepare/commit time.
>
> Well, Tx2 is trying to acquire a WL on an entry that's being modified
> by Tx1. Here I'm assuming that Tx1 does 'something else' and so Tx2
> times out waiting for the lock.
>
>>
>>
>>>
>>>>
>>>> Sanne
>>>>
>>>> 2011/7/4 Galder Zamarreño <galder(a)redhat.com>:
>>>>> Do these atomic operations really make sense within an (optimistic)
>>>>> transaction?
>>>>>
>>>>> For example, putIfAbsent(): it stores a k,v pair if the key is not
>>>>> present. But the key to its usability is that the return value of
>>>>> putIfAbsent can tell you whether the put succeeded or not.
>>>>>
>>>>> Once you go into transactions, the result is only valid once the
>>>>> transaction has been prepared, unless pessimistic locking (as
>>>>> defined in http://community.jboss.org/docs/DOC-16973) is in use, and
>>>>> that's already pretty confusing IMO.
>>>>>
>>>>> I get the feeling that those atomic operations are particularly
>>>>> useful when transactions are not used, cos they allow you to reduce
>>>>> the cache operations to one, hence avoiding the need to use a lock
>>>>> or synchronized block, or in our case, a transaction.
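
A small sketch of the contrast Galder describes for the non-transactional
case; the class and lock are illustrative, and the synchronized variant
of course only guards callers within a single JVM:

    import org.infinispan.Cache;

    // Illustrative contrast: external locking vs. a single atomic operation.
    final class PutIfAbsentSketch {
        private final Object lock = new Object();

        // Two cache calls that have to be guarded externally (single JVM only).
        void withExternalLock(Cache<String, String> cache) {
            synchronized (lock) {
                if (cache.get("k1") == null) {
                    cache.put("k1", "galder");
                }
            }
        }

        // One atomic call; the return value says whether the put won the race.
        boolean withAtomicOp(Cache<String, String> cache) {
            return cache.putIfAbsent("k1", "galder") == null;
        }
    }
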
>>>>>
>>>>> On Jun 30, 2011, at 3:11 PM, Sanne Grinovero wrote:
>>>>>
>>>>>> Hello all,
>>>>>> some team members had a meeting yesterday; one of the discussed
>>>>>> subjects was the use of atomic operations (putIfAbsent, etc.).
>>>>>> Mircea just summarised it in the following proposal:
>>>>>>
>>>>>> The atomic operations, as defined by ConcurrentHashMap, don't fit
>>>>>> well within the scope of an optimistic transaction: this is because
>>>>>> there is a discrepancy between the value returned by the operation
>>>>>> and whether the operation is actually applied or not:
>>>>>> E.g. putIfAbsent(k, v) might appear to succeed as there's no entry
>>>>>> for k in the scope of the current transaction, but in fact there
>>>>>> might be a value committed by another transaction, hidden by the
>>>>>> fact we're running in repeatable read mode.
>>>>>> Later on, at prepare time, when the same operation is applied on
>>>>>> the node that actually holds k, it might not succeed as another
>>>>>> transaction has updated k in between, but the return value of the
>>>>>> method was already evaluated long before this point.
>>>>>> In order to solve this problem, if an atomic operation happens
>>>>>> within the scope of a transaction, Infinispan eagerly acquires a
>>>>>> lock on the remote node. This lock is held for the entire duration
>>>>>> of the transaction, and it is an expensive lock as it involves an
>>>>>> RPC. If keeping the lock remotely for a potentially long time
>>>>>> represents a problem, the user can suspend the running transaction,
>>>>>> run the atomic operation outside the transaction's scope, and then
>>>>>> resume the transaction.
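
For completeness, the suspend/run/resume workaround described above,
sketched with the standard JTA TransactionManager; the class and helper
names are illustrative and error handling is trimmed:

    import javax.transaction.Transaction;
    import javax.transaction.TransactionManager;
    import org.infinispan.Cache;

    // Illustrative sketch of the workaround described above: run the atomic
    // operation outside the ongoing transaction, then resume that transaction.
    final class SuspendSketch {
        static String putIfAbsentOutsideTx(Cache<String, String> cache,
                                           TransactionManager tm,
                                           String key, String value) throws Exception {
            Transaction ongoing = tm.suspend();        // detach the current tx from this thread
            try {
                return cache.putIfAbsent(key, value);  // applied and resolved immediately
            } finally {
                if (ongoing != null) {
                    tm.resume(ongoing);                // reattach the original tx
                }
            }
        }
    }
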
>>>>>>
>>>>>>
>>>>>> In addition to this, what would you think about adding a flag to
>>>>>> these methods which suspends the transaction just before and
>>>>>> resumes it right after? I don't know what the cost of suspending &
>>>>>> resuming a transaction is, but such a flag could optionally be
>>>>>> optimized in future by just ignoring the current transaction
>>>>>> instead of really suspending it, or by applying other clever tricks
>>>>>> we might come across.
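
If such a flag were added, it could perhaps ride on the existing
per-invocation flag mechanism; Flag.SUSPEND_TX below is purely
hypothetical (no such flag exists), only getAdvancedCache().withFlags(...)
is an existing API:

    // Hypothetical only: Flag.SUSPEND_TX does not exist; this merely sketches
    // how the proposed per-invocation flag might look on the existing API.
    String previous = cache.getAdvancedCache()
                           .withFlags(Flag.SUSPEND_TX)   // hypothetical flag
                           .putIfAbsent("k1", "galder");
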
>>>>>>
>>>>>> I also think that we should discuss whether such a behaviour
>>>>>> should not be the default - anybody using an atomic operation is
>>>>>> going to make some assumptions which are clearly incompatible with
>>>>>> the transaction, so I'm wondering what is the path here to "least
>>>>>> surprise" for default invocation.
>>>>>>
>>>>>> Regards,
>>>>>> Sanne
>>>>>
>>>>> --
>>>>> Galder Zamarreño
>>>>> Sr. Software Engineer
>>>>> Infinispan, JBoss Cache
>>>>>
>>>>>
>>>>
>>>
>>> --
>>> Galder Zamarreño
>>> Sr. Software Engineer
>>> Infinispan, JBoss Cache
>>>
>>>
>>
>
> --
> Galder Zamarreño
> Sr. Software Engineer
> Infinispan, JBoss Cache
>
>
_______________________________________________
infinispan-dev mailing list
infinispan-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev