[hibernate-dev] ReadWrite 2LC "read uncommitted" data anomaly
Sanne Grinovero
sanne at hibernate.org
Tue Apr 5 09:30:17 EDT 2016
On 5 April 2016 at 14:11, Vlad Mihalcea <mihalcea.vlad at gmail.com> wrote:
> Hi,
>
> While reviewing the PR for this issue:
>
> https://hibernate.atlassian.net/browse/HHH-10649
>
> I realized that the ReadWrite cache concurrency strategy has a flaw that
> permits "read uncommitted" anomalies.
> The RW cache concurrency strategy guards any modifications with Lock
> entries, as explained in this post that I wrote some time ago:
>
> http://vladmihalcea.com/2015/05/25/how-does-hibernate-read_write-cacheconcurrencystrategy-work/
>
> Every time we update/delete an entry, a Lock is put in the cache under the
> entity key, and, this way, "read uncommitted" anomalies should be prevented.
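>
> To make this concrete, a minimal mapping and update sketch (the fields
> and the identifier value below are assumptions, not taken from the PR):
>
> @Entity
> @Cacheable
> @Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
> public class CacheableItem {
>
>     @Id
>     private Long id;
>
>     private String name;
>
>     // getters and setters omitted
> }
>
> Session session = sessionFactory.openSession();
> session.beginTransaction();
>
> CacheableItem item1 = session.get( CacheableItem.class, 1L );
> // flushing the update installs a Lock entry in the 2LC under the entity
> // key, so other sessions bypass the cache until the change is committed
> item1.setName( "updated name" );
> session.flush();
>
> session.getTransaction().commit();
> session.close();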
>
> The problem comes when entries are evicted either explicitly:
>
> session.getSessionFactory().getCache().evictEntity( CacheableItem.class,
> item1.getId() );
>
> or implicitly:
>
> session.refresh( item1 );
Good catch!
I think this is caused by the fact that we generally don't expect the
evict operation to be triggered explicitly by the application.
In my personal experience, I would use the evictAll method to nuke the
cache state after some significant operation, like restoring a
backup... and no other Session would have been opened in the meantime.
I have never used an explicit single-shot evict myself, so I can't say
what the use case would be.
But of course you're right that it might be used differently, or at
least such a limitation should be clear.
>
> During eviction, the 2LC will remove the Lock entry, and if the user
> attempts to load the entity anew (in the same transaction that has modified
> the entity but has not committed yet), an uncommitted change could be
> propagated to the 2LC.
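>
> Roughly, the manual-eviction scenario looks like this (the identifier
> value and the field are assumptions, not the PR's test):
>
> Session session = sessionFactory.openSession();
> session.beginTransaction();
>
> CacheableItem item1 = session.get( CacheableItem.class, 1L );
> item1.setName( "dirty name" );
> // the UPDATE is flushed, but the transaction is still uncommitted
> session.flush();
>
> // the eviction also discards the Lock entry that was guarding this key
> session.getSessionFactory().getCache().evictEntity( CacheableItem.class,
> item1.getId() );
>
> session.clear();
> // re-loading issues a SELECT that sees our own uncommitted UPDATE and,
> // with no Lock entry left, puts the uncommitted value into the 2LC
> session.get( CacheableItem.class, 1L );
>
> // even if we now roll back, other sessions can read "dirty name" from the 2LC
> session.getTransaction().rollback();
> session.close();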
>
> This issue is replicated by the PR associated with this Jira issue, and I
> also replicated it with manual eviction and entity loading.
>
> To fix it, the RW cache concurrency strategy should not delete entries from
> the 2LC upon eviction, but instead it should turn them into Lock entries.
I'm not sure I understood this part. Shouldn't it rather be allowed to
delete everything, except any existing locks?
Then, rather than turning the remaining locks into locks, it would be
enough to leave them in place.
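Purely as a hypothetical sketch (the storage handle and the Lock type
check below are made up for illustration, not the actual SPI):

// hypothetical evictAll variant: drop value entries, keep soft locks
for ( Object key : storage.keySet() ) {
    Object entry = storage.get( key );
    if ( !( entry instanceof Lock ) ) {
        storage.remove( key );
    }
}
// the Lock entries that are left keep guarding their keys, so an
// uncommitted transaction still cannot re-populate them with dirty data
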
> For the evict method, this is not really a problem, but evictAll would
> imply taking all entries and replacing them with Locks, and that might not
> perform very well in a distributed-cache scenario.
>
> Ideally, lock entries would be stored separately from the actual cached value
> entries, and this problem would be fixed in a much cleaner fashion.
I'd leave this as a detail of the Cache implementation; some
implementations might be able to perform such operations more
efficiently.
It's probably a good idea to clarify this expectation in the javadocs
of the SPI methods.
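Something along these lines could work as a strawman for the wording
(whether it ends up on the region or on the access strategy is a
detail I'm glossing over):

/**
 * Forcibly evict all items from this cache region.
 * Implementations should take care not to drop pending soft-lock
 * entries, otherwise an in-flight transaction may re-populate the
 * region with uncommitted data.
 */
void evictAll();
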
Thanks,
Sanne
>
> Let me know what you think about this.
>
> Vlad
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev