[infinispan-issues] [JBoss JIRA] Commented: (ISPN-650) DefaultCacheManager.getCache(...) should block until newly created cache is started
Paul Ferraro (JIRA)
jira-events at lists.jboss.org
Thu Sep 16 00:40:28 EDT 2010
[ https://jira.jboss.org/browse/ISPN-650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12551172#action_12551172 ]
Paul Ferraro commented on ISPN-650:
-----------------------------------
It appears this change has an unintended consequence: referencing a newly started cache from within a @CacheStarted listener now deadlocks. The listener notification is fired from within Cache.start(), so the listener's call to event.getCacheManager().getCache(event.getCacheName()) blocks forever.
Some tests in the AS testsuite leverage this mechanism to add cache listeners to a newly started cache, specifically for on-demand caches, e.g. named entity cache regions. I can work around this, but it's worth noting, since this worked prior to this jira.
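For illustration, a listener along these lines (the class name and the registration call are my own assumptions, not taken from the AS testsuite) now deadlocks on the getCache(...) call:

import org.infinispan.Cache;
import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachemanagerlistener.annotation.CacheStarted;
import org.infinispan.notifications.cachemanagerlistener.event.CacheStartedEvent;

@Listener
public class RegionListener {

   @CacheStarted
   public void cacheStarted(CacheStartedEvent event) {
      // The notification is fired from within Cache.start(), so the cache is not
      // yet marked as started. Since this change, getCache(...) blocks until the
      // cache is started - which, from this thread, is forever.
      Cache<?, ?> cache = event.getCacheManager().getCache(event.getCacheName());
      // ... register per-cache listeners on 'cache' ...
   }
}

The listener is registered on the cache manager (e.g. manager.addListener(new RegionListener())) before the on-demand cache is requested.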
> DefaultCacheManager.getCache(...) should block until newly created cache is started
> -----------------------------------------------------------------------------------
>
> Key: ISPN-650
> URL: https://jira.jboss.org/browse/ISPN-650
> Project: Infinispan
> Issue Type: Feature Request
> Components: Core API
> Affects Versions: 4.2.0.ALPHA1
> Reporter: Paul Ferraro
> Assignee: Manik Surtani
> Priority: Minor
> Fix For: 4.2.0.ALPHA2, 4.2.0.BETA1, 4.2.0.Final
>
>
> Currently, DefaultCacheManager stores its caches in a concurrent map. When a call to getCache(...) is made for a cache that does not yet exist, the cache is created, put into the map (via putIfAbsent()) and then started. Consequently, another thread concurrently calling getCache(...) with the same cache name may receive a cache that is not yet ready for use, leading to unexpected behavior.
> Ideally, calls to getCache(...) should block if the requested cache is newly created but not yet started. Requests for an already started cache should not block.
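> The race can be seen in a simplified paraphrase of the current logic (not the actual DefaultCacheManager source; createCache(...) stands in for however the manager builds the cache):
>
> // Thread A and thread B both call getCache("foo") concurrently.
> Cache<Object, Object> getCache(String name) {
>    Cache<Object, Object> cache = caches.get(name);
>    if (cache == null) {
>       Cache<Object, Object> created = createCache(name);
>       Cache<Object, Object> existing = caches.putIfAbsent(name, created);
>       if (existing == null) {
>          // From this point on, other threads can already see 'created' in the map...
>          created.start();
>          return created;
>       }
>       cache = existing;
>    }
>    // ...so thread B may return here before thread A's start() has completed.
>    return cache;
> }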
> A possible implementation might involve storing the cache alongside a volatile single-use thread gate (e.g. new CountDownLatch(1)) in the concurrent map. The algorithm might look like this (a rough sketch follows the list):
> 1. Lookup the map entry (i.e. cache + gate) using the cache name
> 2. If the map entry exists, but no gate is present, return the cache.
> 3. If the map entry exists, and a gate is present, wait on the gate (ideally with a timeout) and return the cache.
> 4. If the map entry does not exist, create the cache and put it into the map (if absent) with a new thread gate.
> 4a. If the put was not successful (i.e. an entry already existed), goto 1.
> 5. Start the cache - if start fails, stop the cache and remove the map entry (threads waiting on its gate will time out, oh well)
> 6. Open the gate
> 7. Remove the gate from the map entry
> 8. Return the cache
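> As a rough sketch of the above, assuming java.util.concurrent primitives (the CacheEntry holder, CacheRegistry and createCache(...) are illustrative names, not Infinispan API):
>
> import java.util.concurrent.*;
> import org.infinispan.Cache;
>
> abstract class CacheRegistry {
>
>    static class CacheEntry {
>       final Cache<Object, Object> cache;
>       volatile CountDownLatch gate;   // non-null until the cache has started
>       CacheEntry(Cache<Object, Object> cache, CountDownLatch gate) { this.cache = cache; this.gate = gate; }
>    }
>
>    final ConcurrentMap<String, CacheEntry> caches = new ConcurrentHashMap<String, CacheEntry>();
>
>    Cache<Object, Object> getCache(String name) throws InterruptedException, TimeoutException {
>       while (true) {
>          CacheEntry entry = caches.get(name);                              // 1. lookup
>          if (entry != null) {
>             CountDownLatch gate = entry.gate;
>             if (gate == null) return entry.cache;                          // 2. already started
>             if (!gate.await(60, TimeUnit.SECONDS))                         // 3. wait for start
>                throw new TimeoutException("timed out waiting for " + name);
>             return entry.cache;
>          }
>          CacheEntry created = new CacheEntry(createCache(name), new CountDownLatch(1));
>          if (caches.putIfAbsent(name, created) != null) continue;          // 4/4a. lost the race, retry
>          try {
>             created.cache.start();                                         // 5. start the cache
>          } catch (RuntimeException e) {
>             created.cache.stop();
>             caches.remove(name, created);
>             throw e;                                                       // waiters on the gate will time out
>          }
>          created.gate.countDown();                                         // 6. open the gate
>          created.gate = null;                                              // 7. remove the gate
>          return created.cache;                                             // 8. return the cache
>       }
>    }
>
>    // Placeholder for however the manager actually constructs a (not yet started) cache.
>    protected abstract Cache<Object, Object> createCache(String name);
> }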
> A horridly generic version of the above can be found in the HA-JDBC source code:
> http://ha-jdbc.svn.sourceforge.net/viewvc/ha-jdbc/trunk/src/main/java/net/sf/hajdbc/util/concurrent/Registry.java?revision=2399&view=markup
> and an example demonstrating use of a Registry with a MapRegistryStoreFactory can be found here:
> http://ha-jdbc.svn.sourceforge.net/viewvc/ha-jdbc/trunk/src/main/java/net/sf/hajdbc/sql/Driver.java?revision=2414&view=markup