[jbosscache-dev] READ_COMMITTED should be enforced for Hibernate 2nd level caching?

Manik Surtani manik at jboss.org
Wed Mar 18 10:00:30 EDT 2009


On 18 Mar 2009, at 13:54, Jason T. Greene wrote:

> Manik Surtani wrote:
>> On 17 Mar 2009, at 20:33, Jason T. Greene wrote:
>>> Brian Stansberry wrote:
>>>
>>>>> However, this sounds like a problem with PFER. If someone calls  
>>>>> PFER, I think the original transaction should resync the node  
>>>>> snapshot.
>>>> How would this be done? AFAIK the application has no control over  
>>>> the data in JBC's transaction context.
>>>
>>> The PFER implementation, not the application, would just drop the  
>>> node from the tx context which invoked pfer. That would mean that  
>>> any subsequent read would fetch the most current data.
>> No, that is not correct.  PFER suspends ongoing TXs and runs  
>> outside of any TX, to prevent a failure rolling back the TX.  And  
>> this is the root of the problem.
>
> "correctness" I think is in the eye of the beholder :)
>
> To me it does not seem correct that I can do
>
> pfer(k, 7)
> get(k) == null

The above would only happen if you did:

tx.start()             // ensure this runs in a transactional context
assert get(k) == null  // initially empty
pfer(k, 7)             // this *always* happens outside of the context of a tx
assert get(k) == null  // still holds true, since we initially read k as null

Anyway, this pattern has nothing to do with the problem at hand, or with the
correctness/consistency issue I discussed, which is about handling a remove()
on a null entry under repeatable read.
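
To make that concrete, a rough sketch of the interleaving in question (node
name, key and tm/cache wiring are made up for illustration; actual behaviour
depends on the cache configuration):

Fqn fqn = Fqn.fromString("/entity");

tm.begin();
cache.remove(fqn, "k");          // "k" does not exist; the tx records a null snapshot
// elsewhere, putForExternalRead(fqn, "k", 7) runs outside this tx...
Object v = cache.get(fqn, "k");  // ...but this tx still sees null under repeatable read
tm.commit();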

Cheers
--
Manik Surtani
Lead, JBoss Cache
http://www.jbosscache.org
manik at jboss.org



