[jbosscache-dev] READ_COMMITTED should be enforced for Hibernate 2nd level caching?

Jason T. Greene jason.greene at redhat.com
Wed Mar 18 10:07:19 EDT 2009


Manik Surtani wrote:
> 
> On 18 Mar 2009, at 13:54, Jason T. Greene wrote:
> 
>> Manik Surtani wrote:
>>> On 17 Mar 2009, at 20:33, Jason T. Greene wrote:
>>>> Brian Stansberry wrote:
>>>>
>>>>>> However, this sounds like a problem with PFER. If someone calls 
>>>>>> PFER, I think the original transaction should resync the node 
>>>>>> snapshot.
>>>>> How would this be done? AFAIK the application has no control over 
>>> the data in JBC's transaction context.
>>>>
>>>> The PFER implementation, not the application, would just drop the 
>>>> node from the tx context which invoked pfer. That would mean that 
>>>> any subsequent read would fetch the most current data.
>>> No, that is not correct.  PFER suspends ongoing TXs and runs outside 
>>> of any TX, to prevent a failure rolling back the TX.  And this is the 
>>> root of the problem.
>>
>> "correctness" I think is in the eye of the beholder :)
>>
>> To me it does not seem correct that I can do
>>
>> pfer(k, 7)
>> get(k) == null
> 
> The above would only happen if you did:
> 
> tx.start() // ensure this in a transactional context
> assert get(k) == null // initially empty
> pfer(k, 7) // this *always* happens outside of the context of a tx
> assert get(k) == null // this still holds true since we initially read 
> this as a null.

Yep
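
Concretely, here is that sequence spelled out -- a minimal sketch
assuming a JBoss Cache 3.x Cache<String, Integer> named "cache",
configured with REPEATABLE_READ and a JTA transaction manager (the
getRuntimeConfig() lookup and the /example Fqn are illustrative, and
checked JTA exceptions are elided):

import javax.transaction.TransactionManager;
import org.jboss.cache.Cache;
import org.jboss.cache.Fqn;

Fqn fqn = Fqn.fromString("/example");
TransactionManager tm =
    cache.getConfiguration().getRuntimeConfig().getTransactionManager();

tm.begin();
assert cache.get(fqn, "k") == null;     // first read: null enters the
                                        // tx's repeatable-read snapshot
cache.putForExternalRead(fqn, "k", 7);  // suspends the tx internally and
                                        // writes outside any transaction
assert cache.get(fqn, "k") == null;     // still null: the tx re-reads
                                        // its snapshot, not the new value
tm.commit();

assert cache.get(fqn, "k") == 7;        // a fresh, non-transactional
                                        // read now sees the pfer'd value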

> Anyway, this pattern has nothing to do with the problem at hand, or the 
> correctness/consistency I discussed, which has to do with handling a 
> remove() on a null entry with repeatable read.

Sure, remove() was also broken, and fixing the above would have hidden 
that (since the issues are related). However, that doesn't mean this 
shouldn't be addressed at some point.
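
For reference, a minimal sketch of that remove()-on-a-null-entry
interleaving, with the same assumed setup and illustrative names as the
sketch above:

tm.begin();
assert cache.get(fqn, "k") == null;     // tx only ever sees this entry
                                        // as null
cache.putForExternalRead(fqn, "k", 7);  // outside the tx, the entry
                                        // now exists
cache.remove(fqn, "k");                 // tx-scoped remove of an entry
                                        // this tx read as null
tm.commit();
// Whether a later read returns 7 or null here is exactly the
// repeatable-read consistency question raised above.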

-- 
Jason T. Greene
JBoss, a division of Red Hat


