[https://issues.jboss.org/browse/ISPN-7224?page=com.atlassian.jira.plugin....]
Sebastian Łaskawiec commented on ISPN-7224:
-------------------------------------------
I guess you are referring to [this piece of the code|https://github.com/infinispan/infinispan/blob/master/spring/spring4/...]:
{code}
@Override
public <T> T get(Object key, Callable<T> valueLoader) {
   return cacheImplementation.get(key, valueLoader);
}
{code}
This in turn invokes [this|https://github.com/infinispan/infinispan/blob/master/spring/spring4/...]:
{code}
@Override
public <T> T get(Object key, Callable<T> valueLoader) {
   return (T) nativeCache.computeIfAbsent(key, keyToBeInserted -> {
      try {
         return valueLoader.call(); // loading the value in the caller's thread
      } catch (Exception e) {
         throw ValueRetrievalExceptionResolver.throwValueRetrievalException(key, valueLoader, e);
      }
   });
}
{code}
From the [Spring Cache documentation|http://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/cache/Cache.html#get-java.lang.Object-java.util.concurrent.Callable-] we can learn that:
{quote}
Return the value to which this cache maps the specified key, obtaining that value from
valueLoader if necessary. This method provides a simple substitute for the conventional
"if cached, return; otherwise create, cache and return" pattern.
If possible, implementations should ensure that the loading operation is synchronized so
that the specified valueLoader is only called once in case of concurrent access on the
same key.
{quote}
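The conventional "if cached, return; otherwise create, cache and return" pattern that the Javadoc describes can be sketched roughly as follows. This is a hypothetical helper for illustration only, not Infinispan or Spring code; note that the check-then-act sequence is not atomic, which is exactly why the Javadoc asks implementations to synchronize the loading if possible:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Callable;

public class GetOrLoad {
   // Hypothetical sketch of the pattern Cache#get(key, valueLoader) replaces.
   // The get/put sequence is not atomic: under concurrency, two threads may
   // both see a miss and both invoke the loader for the same key.
   static <K, V> V getOrLoad(Map<K, V> cache, K key, Callable<V> loader) throws Exception {
      V value = cache.get(key);       // check
      if (value == null) {
         value = loader.call();       // load (may run more than once concurrently)
         cache.put(key, value);       // act
      }
      return value;
   }

   public static void main(String[] args) throws Exception {
      Map<String, String> cache = new HashMap<>();
      int[] loaderCalls = {0};
      Callable<String> loader = () -> { loaderCalls[0]++; return "value"; };
      System.out.println(getOrLoad(cache, "k", loader)); // miss: loads
      System.out.println(getOrLoad(cache, "k", loader)); // hit: cached
      System.out.println(loaderCalls[0]);                // 1 in this single-threaded run
   }
}
```

In a single thread the loader runs once, but nothing here prevents duplicate loads across threads.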
So Spring suggests locking but doesn't enforce it (it is still up to the
implementation). And of course you are absolutely right that
[computeIfAbsent|https://docs.oracle.com/javase/8/docs/api/java/util/Map.h...] is not atomic:
{quote}
The default implementation makes no guarantees about synchronization or atomicity
properties of this method. Any implementation providing atomicity guarantees must override
this method and document its concurrency properties. In particular, all implementations of
subinterface ConcurrentMap must document whether the function is applied once atomically
only if the value is not present.
{quote}
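For contrast, {{java.util.concurrent.ConcurrentHashMap}} is a {{ConcurrentMap}} implementation that does override {{computeIfAbsent}} atomically: its Javadoc guarantees the mapping function is applied at most once per absent key. A minimal sketch using only the standard JDK:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicComputeIfAbsent {
   public static void main(String[] args) {
      ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();
      AtomicInteger invocations = new AtomicInteger();

      // ConcurrentHashMap#computeIfAbsent is atomic: the mapping function
      // runs at most once for an absent key, even under contention, and
      // the second call below sees the existing mapping and skips it.
      map.computeIfAbsent("k", k -> { invocations.incrementAndGet(); return "v"; });
      map.computeIfAbsent("k", k -> { invocations.incrementAndGet(); return "v"; });

      System.out.println(map.get("k"));      // v
      System.out.println(invocations.get()); // 1
   }
}
```

Whether the {{nativeCache}} in the snippet above provides such a guarantee depends on which map implementation backs it, which is precisely the point of the quoted caveat.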
To sum it up: I believe we are aligned with the Spring Cache contract, since it doesn't actually enforce synchronization. In our implementation we decided against locking in favor of performance. We would need to lock on the key itself; merely blocking the caller's thread is not sufficient, because the same cache may be accessed through other integrations (such as CDI) at the same time.
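To illustrate the trade-off being discussed, a per-key locking variant could look roughly like the sketch below. This is a hypothetical illustration, not Infinispan code, and the class and field names are made up; every miss serializes all callers for that key while the loader runs, which is the cost the integration avoids:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;

public class PerKeyLockingCache<K, V> {
   private final ConcurrentHashMap<K, V> store = new ConcurrentHashMap<>();
   private final ConcurrentHashMap<K, Object> locks = new ConcurrentHashMap<>();

   // Hypothetical per-key locking: all threads missing on the same key
   // contend on one monitor, so the valueLoader runs at most once per key,
   // at the price of blocking every concurrent caller for that key.
   public V get(K key, Callable<V> valueLoader) throws Exception {
      V value = store.get(key);
      if (value != null) {
         return value;
      }
      Object lock = locks.computeIfAbsent(key, k -> new Object());
      synchronized (lock) {
         value = store.get(key); // re-check under the lock
         if (value == null) {
            value = valueLoader.call();
            store.put(key, value);
         }
      }
      return value;
   }

   public static void main(String[] args) throws Exception {
      PerKeyLockingCache<String, String> cache = new PerKeyLockingCache<>();
      System.out.println(cache.get("k", () -> "loaded-once"));
      System.out.println(cache.get("k", () -> "never-called")); // served from store
   }
}
```

Even this sketch only covers a single JVM; coordinating with other integrations accessing the same underlying cache would require locking inside the cache itself, not in the adapter.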
Support synchronous get in Spring's cache abstraction
-----------------------------------------------------
Key: ISPN-7224
URL:
https://issues.jboss.org/browse/ISPN-7224
Project: Infinispan
Issue Type: Feature Request
Components: Spring Integration
Reporter: Stéphane Nicoll
Assignee: Sebastian Łaskawiec
Priority: Critical
Fix For: 9.0.0.Beta1, 9.0.0.Final
Spring Framework 4.3 has introduced a read-through option; see https://jira.spring.io/browse/SPR-9254 for more details. In practice this would require you to compile against 4.3 and implement the additional method.
The code is meant to be backward compatible with previous versions, as long as you guard the new exception in an inner class; see [this implementation for an example|https://github.com/hazelcast/hazelcast/blob/37ba79c4a8d35617c5f6a...]
Let me know if I can help.
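The inner-class guard mentioned above relies on the JVM initializing nested classes lazily: if the Spring-4.3-only exception type is referenced only inside a nested class, merely loading the outer adapter class on an older Spring version never resolves it. The sketch below demonstrates that lazy-initialization mechanism using only the JDK, with a static-initializer flag standing in for the real Spring dependency (class names are made up for illustration):

```java
public class LazyNestedLoading {
   // In the real pattern, the reference to the Spring-4.3-only exception
   // type would live only inside the nested class below. The JVM runs a
   // class's static initializer on first active use, not when the outer
   // class loads, so older classpaths never trip over the missing type.
   static boolean nestedInitialized = false;

   static final class Guarded {
      static { nestedInitialized = true; } // runs only on first use of Guarded
      static RuntimeException resolve(Object key, Throwable cause) {
         return new IllegalStateException("Value for key " + key + " could not be loaded", cause);
      }
   }

   public static void main(String[] args) {
      System.out.println(nestedInitialized); // false: Guarded untouched so far
      RuntimeException e = Guarded.resolve("k", new Exception("boom"));
      System.out.println(nestedInitialized); // true: first use initialized it
      System.out.println(e.getMessage());
   }
}
```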
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)