After porting to 3.0.0.GA and putForExternalRead, it turned out we hadn't read the
fine print, which says nothing happens if the node is already there--the node
containing the data, not the data itself; we had misread the Javadocs.
Because we have more than one piece of data stored in a node, the node is often there even
though some data is not.
So, puts of data weren't happening when we wanted them to, and the cache was returning
nulls instead of data fetched to fill the node after a cache miss.
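To make the failure mode concrete, here is a minimal simulation--plain maps, not the JBoss Cache API, with made-up node paths--of one node holding several data items, using the node-level existence check that the Javadocs describe for putForExternalRead:

```java
import java.util.HashMap;
import java.util.Map;

public class PferSimulation {
    // node path -> (key -> value): one cache node holding several data items
    static final Map<String, Map<String, Object>> cache = new HashMap<>();

    // Simulates putForExternalRead as documented: a no-op if the *node* already
    // exists, regardless of whether this particular key is present in it.
    static void putForExternalRead(String node, String key, Object value) {
        if (cache.containsKey(node)) {
            return; // node is there -> silently do nothing
        }
        cache.computeIfAbsent(node, n -> new HashMap<>()).put(key, value);
    }

    static Object get(String node, String key) {
        Map<String, Object> data = cache.get(node);
        return data == null ? null : data.get(key);
    }

    public static void main(String[] args) {
        putForExternalRead("/customers/42", "name", "Alice");          // creates the node
        putForExternalRead("/customers/42", "email", "a@example.com"); // no-op: node exists
        System.out.println(get("/customers/42", "name"));  // Alice
        System.out.println(get("/customers/42", "email")); // null -- the fill didn't fill
    }
}
```

The second call is exactly the cache fill that silently fails for us: the node survived, one of its keys didn't, and the fill is skipped.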
We realize this is by design and documented as such, but the design puzzles us. The put
should succeed if the node is there but the key for the data being put is missing, and
should succeed if the node and key are there but the value in the node for that key is
null.
We are, after all, trying to do a cache fill after a cache miss. A fill that doesn't
fill isn't very useful.
Furthermore, if all of the data in a node ages out of the cache due to LRU, is the data
removed from the cache and the node left in place, or is the node also removed? Because if
the node is not removed, then PFER wouldn't work in that case either (which makes the
design even more puzzling to us).
JBoss support's suggestion was that we mimic PFER ourselves--which was what we had
originally asked for help with--but when it got to the part about suspending
transactions ourselves, that's when we said no: we're using JPA, and the JPA spec says
we'll go to JPA jail if we try to muck with transactions at all while being managed by
the container. So that's not an option.
For now, we're hoping changing the isolation level to READ_COMMITTED from the default
helps. If not, we'll have to lengthen the timeout.
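For the record, the change we're trying looks roughly like this in a 3.x-style configuration file (element and attribute names as we understand them from our own config; treat this as a sketch, not a verified schema):

```xml
<jbosscache xmlns="urn:jboss:jbosscache-core:config:3.0">
   <!-- Default isolation is REPEATABLE_READ; trying READ_COMMITTED first.
        If that doesn't help, the next step is raising lockAcquisitionTimeout. -->
   <locking isolationLevel="READ_COMMITTED" lockAcquisitionTimeout="10000"/>
</jbosscache>
```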
Ideally JBoss Cache would have a method with a better name than
putFastForCacheFillFromExternalReadAfterCacheMissEvenIfNodeIsPresent, but with those
semantics.
The behavior of PFFCFFERACMEINIP would be remarkably similar to the behavior of PFER,
except for modifying the first bullet in the Javadocs:
- Only goes through if the node specified does not exist, or exists but does not have the
specified key for the value being put, or exists and has the specified key for the value
being put but the value is null.
- Force asynchronous mode for replication to prevent any blocking; invalidation does
not take place.
- 0ms lock timeout to prevent any blocking here either. If the lock is not acquired, this
method is a no-op, and swallows the timeout exception.
- Ongoing transactions are suspended before this call, so failures here will not affect
any ongoing transactions.
- Errors and exceptions are 'silent' - logged at a much lower level than normal,
and this method does not throw exceptions.
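In the same map-of-maps simulation (again, not the real API; the method name is our invention), the modified first bullet amounts to one extra check before the put:

```java
import java.util.HashMap;
import java.util.Map;

public class KeyLevelPfer {
    static final Map<String, Map<String, Object>> cache = new HashMap<>();

    // The put goes through if the node is absent, if the node lacks the key,
    // or if the key is present but mapped to null; otherwise it is a no-op.
    // (data.get(key) == null covers both "key absent" and "value is null".)
    static void putFastForCacheFill(String node, String key, Object value) {
        Map<String, Object> data = cache.get(node);
        if (data == null || data.get(key) == null) {
            cache.computeIfAbsent(node, n -> new HashMap<>()).put(key, value);
        }
    }

    static Object get(String node, String key) {
        Map<String, Object> data = cache.get(node);
        return data == null ? null : data.get(key);
    }

    public static void main(String[] args) {
        putFastForCacheFill("/customers/42", "name", "Alice");
        putFastForCacheFill("/customers/42", "email", "a@example.com"); // fills the missing key
        System.out.println(get("/customers/42", "email")); // a@example.com
        putFastForCacheFill("/customers/42", "email", "b@example.com"); // no-op: real value cached
        System.out.println(get("/customers/42", "email")); // still a@example.com
    }
}
```

With that one change, a fill after a miss always fills, and a value that is genuinely cached is never overwritten.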
However, it occurs to us that the folks at
hibernate.org seem plenty satisfied with the
current behavior of PFER, and their use case is basically the same as our use case, which
gets us to wondering why they're happy and we're not.
Is our problem due to storing multiple items per node by key? If so, we could change to
storing each item in a separate node. That seemed expensive (a lot of maps), but maybe
that's how this is supposed to work?
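If item-per-node is the intended layout, the change is to push the data key into the node path, so the node-level existence check ends up guarding exactly one value. Sketched in the same simulation (paths made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class ItemPerNode {
    static final Map<String, Map<String, Object>> cache = new HashMap<>();

    // Same node-level PFER semantics as before: no-op if the node exists.
    static void putForExternalRead(String node, String key, Object value) {
        if (!cache.containsKey(node)) {
            cache.computeIfAbsent(node, n -> new HashMap<>()).put(key, value);
        }
    }

    static Object get(String node, String key) {
        Map<String, Object> data = cache.get(node);
        return data == null ? null : data.get(key);
    }

    public static void main(String[] args) {
        // One node per item, a single well-known key per node:
        putForExternalRead("/customers/42/name", "value", "Alice");
        putForExternalRead("/customers/42/email", "value", "a@example.com"); // distinct node -> goes through
        System.out.println(get("/customers/42/email", "value")); // a@example.com
    }
}
```

With one item per node, "node exists" and "value is already cached" collapse into the same condition, which may be why the node-level check doesn't bite the Hibernate use case.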
If so, we have a follow-up question--currently when we invalidate data locally (to force a
cache miss and cache fill on the next access to that data), we just clear the data in a
node, but leave the node in place. Are we supposed to invalidate by deleting the node
itself? If so, how does *that* work when multiple threads are trying to access the same
node? Won't readers back up behind a write lock, thereby re-introducing the very
problem we were trying to solve with PFER in the first place? (Puts due to cache misses on
reads timing out because of contention for write locks.)
View the original post:
http://www.jboss.org/index.html?module=bb&op=viewtopic&p=4220791#...