[infinispan-issues] [JBoss JIRA] (ISPN-5876) Pre-commit cache invalidation creates stale cache vulnerability
Stephen Fikes (JIRA)
issues at jboss.org
Wed Oct 21 11:01:00 EDT 2015
[ https://issues.jboss.org/browse/ISPN-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Stephen Fikes updated ISPN-5876:
--------------------------------
Description:
In a cluster where Infinispan serves as the level 2 cache for Hibernate (configured for invalidation), invalidation requests for modified entities are sent *before* the database commit. A node receiving the invalidation request can therefore evict the entity and then, due to "local" read requests, reload it from the database before the commit has taken place on the server where the entity was modified.
Consequently, other servers in the cluster may contain data that remains stale until a subsequent change in another server or until the entity times out from lack of use.
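To make the timing concrete, here is a minimal sketch of the interleaving. The Account entity, its fields and the method names below are hypothetical, and the setup assumes a resource-local JPA EntityManager:
{code:java}
// Minimal sketch of the race; Account and all names are illustrative only.
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Account {
    @Id Long id;
    @Version Long version;   // optimistic version property
    String owner;
}

class Interleaving {
    /* "server 1" - the writer */
    void updateOnServer1(EntityManager em, Long id) {
        em.getTransaction().begin();
        Account a = em.find(Account.class, id);
        a.owner = "new owner";
        // During commit() the following happens, in this order (the problem):
        //   t1: the L2 invalidation for this Account is broadcast to the cluster
        //   t3: the database commit completes
        em.getTransaction().commit();
    }

    /* "server 2" - a reader, running between t1 and t3 */
    Account readOnServer2(EntityManager em, Long id) {
        // t2: the invalidation has already evicted the entry, so this find()
        //     misses the L2 cache and reloads the *old* row from the database
        //     (server 1 has not committed yet).  The stale state is cached
        //     again and survives until the next change or until it expires.
        return em.find(Account.class, id);
    }
}
{code}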
It isn't easy to write a testcase for this - it requires manual intervention to control the timing - but it can be reproduced with any entity class, cluster, etc. (at least using Oracle - results may differ in other databases), so I've not attached a testcase. The issue is quite general and can be seen/understood by code inspection (i.e. the timing of invalidation vs. database commit). That said, I set up a two-node cluster and used Byteman rules to delay the database commit of a change to an entity (with an optimistic version property) long enough in "server 1" for eviction and reload to take place in "server 2". Following the reload in "server 2", the database commit proceeds in "server 1", and "server 2" is left with a stale copy of the entity in cache. This can be seen with any entity.
One option is pessimistic locking, which blocks any read attempt until the DB commit completes. It is not feasible, however, for many applications to use pessimistic locking for all reads, as this can have a severe impact on concurrency - which is the reason optimistic locking was created. But because the invalidation is broadcast early (*before* the database commit), optimistic locking is insufficient to guard against "permanently" stale data. Some databases do default to blocking repeatable reads even outside transactions and without explicit lock requests, but Oracle does not provide such a mode. So all reads would have to use pessimistic locks (which must be enclosed in explicit transactions - (b)locking reads are disallowed when autocommit=true), and this could require significant effort (re-writes) to use pessimistic reads throughout.
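For illustration, a pessimistic read along those lines might look roughly like the following (again using the hypothetical Account entity from the sketch above); note that it only works inside an explicit transaction:
{code:java}
// Rough sketch of the pessimistic-read workaround; the blocking read cannot
// be taken with autocommit=true, hence the explicit transaction.
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

class PessimisticRead {
    Account load(EntityManager em, Long id) {
        em.getTransaction().begin();
        try {
            // Blocks until the writer's database transaction commits, so the
            // value read (and re-cached) can never be the pre-commit row.
            Account a = em.find(Account.class, id, LockModeType.PESSIMISTIC_READ);
            em.getTransaction().commit();
            return a;
        } catch (RuntimeException e) {
            if (em.getTransaction().isActive()) {
                em.getTransaction().rollback();
            }
            throw e;
        }
    }
}
{code}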
If the invalidation message is always broadcast *after* the database commit, optimistic control attributes block attempts to write stale data, and although a few update failures may occur, it is guaranteed that the stale data will be removed within a finite period.
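As a rough illustration (same hypothetical Account entity), a write based on a stale cached copy then fails the version check rather than overwriting newer data:
{code:java}
// Sketch: with post-commit invalidation, a write against a stale L2 copy
// fails the optimistic version check instead of silently clobbering newer
// data, so the staleness is bounded.
import javax.persistence.EntityManager;
import javax.persistence.OptimisticLockException;

class StaleWriteGuard {
    void updateOwner(EntityManager em, Long id, String newOwner) {
        em.getTransaction().begin();
        try {
            Account a = em.find(Account.class, id); // may come from a stale L2 entry
            a.owner = newOwner;
            em.flush();                             // UPDATE ... WHERE version = ? - fails if stale
            em.getTransaction().commit();
        } catch (OptimisticLockException e) {
            // The cached copy was stale; roll back and let the caller retry
            // against fresh state once the invalidation has arrived.
            if (em.getTransaction().isActive()) {
                em.getTransaction().rollback();
            }
            throw e;
        }
    }
}
{code}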
> Pre-commit cache invalidation creates stale cache vulnerability
> ---------------------------------------------------------------
>
> Key: ISPN-5876
> URL: https://issues.jboss.org/browse/ISPN-5876
> Project: Infinispan
> Issue Type: Bug
> Components: Eviction
> Affects Versions: 5.2.7.Final
> Reporter: Stephen Fikes
> Assignee: Galder Zamarreño
>
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)