Although I'm not familiar enough with the code to understand every implication of the
proposed implementation, I think the idea is sound.
I encountered blocking reads in our application which needed to be addressed. In the end I
constructed a local cache in front of the distributed cache. The local cache is updated
from the distributed cache in a fashion that never blocks readers. (Optimistic locking was
not a perfect fit here; I might revise that in the future if I get the time.)
The blocking I encountered was caused not only by write-locked nodes but also by
synchronized blocks. The local cache solved those issues, though, and I now get very
performant concurrent reads from the cache layers.
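To give a rough idea of what I mean (all class and method names below are made up for illustration, not our actual code), a minimal sketch in Java could look like this: readers go straight to a ConcurrentHashMap and are never blocked, while a refresh path pulls updated values from the distributed layer.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for the distributed cache; the real one would
// be JBoss Cache or similar. This interface is an assumption for the sketch.
interface DistributedCache {
    String get(String key);
}

// Local cache layered in front of the distributed cache. Readers only
// touch the ConcurrentHashMap, so they are never blocked by writers
// refreshing entries from the distributed layer.
class LocalCacheLayer {
    private final Map<String, String> local = new ConcurrentHashMap<>();
    private final DistributedCache backend;

    LocalCacheLayer(DistributedCache backend) {
        this.backend = backend;
    }

    // Lock-free read path: a plain ConcurrentHashMap lookup.
    String get(String key) {
        return local.get(key);
    }

    // Refresh path, e.g. driven by invalidation or replication events
    // from the distributed cache; put() swaps the entry atomically
    // without blocking concurrent readers.
    void refresh(String key) {
        String value = backend.get(key);
        if (value != null) {
            local.put(key, value);
        } else {
            local.remove(key);
        }
    }
}

public class Demo {
    public static void main(String[] args) {
        DistributedCache backend = key -> "value-of-" + key;
        LocalCacheLayer cache = new LocalCacheLayer(backend);

        cache.refresh("a");
        System.out.println(cache.get("a")); // prints "value-of-a"
    }
}
```

The point of the design is simply that the read path never takes a lock; all the coordination cost is paid on the refresh path.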
If I haven't gotten this all backwards, this is a bit similar to what you propose, which
means you would in fact be solving a real problem that at least we had to deal with. Given
the typical application areas for a cache, fast, non-blocking read access is definitely
crucial, and I would love to see it implemented in the core cache instead of in a
tacked-on solution like mine. =)