We're about to deploy our application using JBoss Cache (JBC). We have several systems
that work against two databases: a master DB that is always up to date, and a slave DB
that is synchronized less than 5 seconds behind the master via DB queue replication.
All systems use the same cache. We do not always use cache replication and rely on
invalidation quite a lot; in that case, a cache reloads the data from its system's DB
during the next request.
Now with JBC we might run into the following problem:
- A system changes the master DB and invalidates the corresponding cache entries.
- JBC multicasts the invalidation event to the cache cluster members.
- On the system using the slave DB, the cache entry is invalidated immediately, even
though the slave DB is not synchronized yet.
- On the system using the slave DB, somebody requests the value that has just been
invalidated. As the cache no longer has the data, it loads it from the slave DB -
which means the old version from before the update is loaded!
- The cache now contains stale data.
- The DB synchronization eventually happens, but the cache isn't notified and keeps
serving the outdated data.
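To make the race concrete, here is a minimal model of it in plain Java. All names are invented for illustration; the maps stand in for the real master DB, the lagging slave DB, and the slave system's cache region:

```java
import java.util.HashMap;
import java.util.Map;

// Toy reproduction of the stale-read race (hypothetical names,
// not JBC API). Maps stand in for the DBs and the cache.
class StaleReadDemo {
    static final Map<String, String> masterDb   = new HashMap<>();
    static final Map<String, String> slaveDb    = new HashMap<>();
    static final Map<String, String> slaveCache = new HashMap<>();

    // Cache-aside read on the slave system: on a miss, load from the slave DB.
    static String readOnSlave(String key) {
        return slaveCache.computeIfAbsent(key, slaveDb::get);
    }

    public static void main(String[] args) {
        // Initial state: everything agrees on "v1".
        masterDb.put("k", "v1"); slaveDb.put("k", "v1"); slaveCache.put("k", "v1");

        // 1. A system updates the master DB and invalidates the entry.
        masterDb.put("k", "v2");
        // 2. The invalidation is multicast and applied on the slave immediately.
        slaveCache.remove("k");
        // 3. A read arrives before DB replication: the stale value is re-cached.
        String seen = readOnSlave("k"); // loads "v1" from the lagging slave DB
        // 4. Replication catches up, but the cache keeps the stale entry.
        slaveDb.put("k", "v2");

        System.out.println("read=" + seen + " cached=" + slaveCache.get("k"));
        // prints: read=v1 cached=v1
    }
}
```

The cache ends up holding "v1" indefinitely even though both DBs now hold "v2" - exactly the situation described above.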
Is there a common solution to this kind of problem?
My idea was to create a JGroups protocol similar to DELAY (say, FIXED_DELAY) that sits
in the slave machines' JGroups stack and delays all invalidation events for some time to
compensate for the DB replication lag. But how do I know which events or messages to
delay, and where in the stack should I put the protocol?
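I don't have a working FIXED_DELAY protocol, but the core of the idea - holding back each invalidation on the slave until the replication lag has safely passed - can be sketched with plain java.util.concurrent, independent of whether it ends up in the JGroups stack or in a listener (class and method names here are made up; this is not JGroups or JBC API):

```java
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of delayed invalidation (hypothetical names, not JGroups API).
// Each invalidation for the slave's cache is applied only after a fixed
// delay chosen to exceed the worst-case DB replication lag.
class DelayedInvalidator {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final Map<String, String> cache;
    private final long delayMillis;

    DelayedInvalidator(Map<String, String> cache, long delayMillis) {
        this.cache = cache;
        this.delayMillis = delayMillis;
    }

    // Called when an invalidation event arrives from the cluster:
    // instead of removing the entry now, schedule the removal.
    void onInvalidate(String key) {
        scheduler.schedule(() -> cache.remove(key),
                           delayMillis, TimeUnit.MILLISECONDS);
    }

    void shutdown() throws InterruptedException {
        scheduler.shutdown();
        scheduler.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

One caveat of this approach: during the delay window the slave keeps serving the pre-update value, so it trades the permanent staleness described above for a bounded one. Reads that arrive after the delayed removal but before the slave DB has caught up could still re-cache stale data, which is why the delay has to cover the worst-case lag, not the average.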
Could an ExtendedTreeCacheListener with a delay in nodeRemove help?
Regards,
Kai
View the original post:
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4167359#...
Reply to the post:
http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&a...