[JBoss JIRA] (ISPN-7224) Support synchronous get in Spring's cache abstraction
by Mike Noordermeer (JIRA)
[ https://issues.jboss.org/browse/ISPN-7224?page=com.atlassian.jira.plugin.... ]
Mike Noordermeer edited comment on ISPN-7224 at 4/3/17 11:20 AM:
-----------------------------------------------------------------
It would solve my issue, as we mainly use local (non-replicating/invalidating) caches. Also, I wouldn't want global locking, so I think any locks should only apply to the local node when possible. In our case, we are retrieving and parsing a rather large object from a remote URL and caching it (including passivation). The URL is unique per user, and users may perform multiple requests at the same time. In that case, I simply want the second request to wait a bit for the first to finish; if the supplier is invoked twice, nothing will really break.
I think a best-effort implementation is all that is needed, and users locking or reusing the caches behind the Spring cache abstraction are asking for trouble. An atomicity guarantee may be too much, especially in replicated scenarios, but that's not a problem.
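For context, a minimal sketch of the consumer-side pattern described above, assuming Spring 4.3's {{sync}} attribute on {{@Cacheable}}; the service, cache name and {{Document}} type are hypothetical:
{code}
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

// Hypothetical service illustrating the per-user remote fetch described above.
// With sync = true, Spring asks the Cache implementation to ensure the loader
// runs only once per key under concurrent access (if possible).
@Service
public class UserDocumentService {

   @Cacheable(cacheNames = "userDocuments", key = "#userUrl", sync = true)
   public Document fetchAndParse(String userUrl) {
      // Expensive call: download and parse a large per-user document.
      return parse(download(userUrl));
   }

   private byte[] download(String userUrl) { /* HTTP call omitted */ return new byte[0]; }

   private Document parse(byte[] raw) { /* parsing omitted */ return new Document(); }

   public static class Document { }
}
{code}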
> Support synchronous get in Spring's cache abstraction
> -----------------------------------------------------
>
> Key: ISPN-7224
> URL: https://issues.jboss.org/browse/ISPN-7224
> Project: Infinispan
> Issue Type: Feature Request
> Components: Spring Integration
> Reporter: Stéphane Nicoll
> Assignee: Sebastian Łaskawiec
> Priority: Critical
> Fix For: 9.0.0.Beta1, 9.0.0.Final
>
>
> Spring Framework 4.3 has introduced a read-through option; see https://jira.spring.io/browse/SPR-9254 for more details. In practice, this would require compiling against 4.3 and implementing the additional method.
> The code is meant to stay backward compatible with previous versions, as long as you guard the new exception in an inner class; see [this implementation for an example|https://github.com/hazelcast/hazelcast/blob/37ba79c4a8d35617c5f6a...]
> Let me know if I can help.
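To illustrate the guard mentioned above, a rough sketch of the pattern (not the actual Hazelcast or Infinispan code; the class names are hypothetical): the Spring 4.3 {{ValueRetrievalException}} is only referenced from a nested class, so nothing forces it to load on older Spring versions.
{code}
import java.util.concurrent.Callable;
import org.springframework.cache.Cache;

// Sketch of the backward-compatibility guard: the outer class never references
// the Spring 4.3 ValueRetrievalException type directly, so it still loads and
// links against older Spring versions that lack get(Object, Callable).
public class GuardedCacheAdapter {

   public <T> T get(Object key, Callable<T> valueLoader) {
      try {
         // look the value up in the underlying cache, or load it:
         return valueLoader.call();
      } catch (Exception e) {
         throw ValueRetrievalExceptionResolver.resolve(key, valueLoader, e);
      }
   }

   // Only initialized when a load actually fails, which can only happen when
   // Spring 4.3+ is calling get(Object, Callable) in the first place.
   private static final class ValueRetrievalExceptionResolver {
      static RuntimeException resolve(Object key, Callable<?> loader, Throwable cause) {
         return new Cache.ValueRetrievalException(key, loader, cause);
      }
   }
}
{code}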
[JBoss JIRA] (ISPN-7224) Support synchronous get in Spring's cache abstraction
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-7224?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec commented on ISPN-7224:
-------------------------------------------
Hmmm, it's not as easy as you think, guys... so let me try to explain my concerns a bit more.
The {{Cache}} instance on which the Spring cache support is based can be reused in many different places. Nothing prevents a user from grabbing that instance and locking the entire container for some reason; in that case {{Cache#get(Object, Callable<T>)}} would block forever. The second aspect is asynchronous replication: imagine your cache is clustered and you supply object {{A}} on one node and object {{B}} on another at the same time. In that case the clustered code will also invoke the supplier twice.
The only thing we _could_ do is provide a "best effort" that synchronizes on the key by locking it (with the ability to disable this feature in configuration). This still doesn't guarantee that your supplier won't be invoked twice, and to me it's hardly any better than {{computeIfAbsent}}. Would it solve your problem, [~miken123]?
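Purely to illustrate what such a node-local "best effort" could look like (this is not Infinispan code; the helper and its names are hypothetical), a per-key lock around the load:
{code}
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical helper illustrating "best effort": loads for the same key are
// serialized on the local node only; another node can still invoke the loader.
public class BestEffortKeyLock {

   private final ConcurrentMap<Object, ReentrantLock> locks = new ConcurrentHashMap<>();

   public <T> T loadSynchronized(Object key, ConcurrentMap<Object, T> cache, Callable<T> loader) throws Exception {
      T existing = cache.get(key);
      if (existing != null) {
         return existing;
      }
      ReentrantLock lock = locks.computeIfAbsent(key, k -> new ReentrantLock());
      lock.lock();
      try {
         // Re-check after acquiring the lock: another local thread may have loaded it.
         T value = cache.get(key);
         if (value == null) {
            value = loader.call();
            cache.put(key, value);
         }
         return value;
      } finally {
         lock.unlock();
         // Note: lock entries are never removed here; a real implementation
         // would need some form of eviction or lock striping.
      }
   }
}
{code}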
[JBoss JIRA] (ISPN-7224) Support synchronous get in Spring's cache abstraction
by Stéphane Nicoll (JIRA)
[ https://issues.jboss.org/browse/ISPN-7224?page=com.atlassian.jira.plugin.... ]
Stéphane Nicoll commented on ISPN-7224:
---------------------------------------
IMO, if you can synchronize and you're not doing it, you are missing the point entirely. The reason we phrased it "if possible" is that a Cache implementation may not be able to support that feature, and we still need a way out for those. There's no way for us to know you're not enforcing it (since that's implementation-dependent), and we have no way to synchronize on a key at our level of abstraction.
So, please, implement the method with the full contract if you can.
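To make the contract concrete, a small self-contained sketch (independent of any particular {{Cache}} implementation) of what "the value loader is only called once in case of concurrent access on the same key" means in practice; it relies on {{ConcurrentHashMap#computeIfAbsent}}, which, unlike the {{Map}} default, is documented as atomic:
{code}
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Demonstrates the "loader called once per key" semantics the Spring contract
// asks for when possible, using ConcurrentHashMap#computeIfAbsent.
public class SyncGetContractDemo {
   public static void main(String[] args) throws Exception {
      ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
      AtomicInteger loaderCalls = new AtomicInteger();

      Callable<String> task = () -> cache.computeIfAbsent("user-42", key -> {
         loaderCalls.incrementAndGet();
         sleep(200); // simulate a slow remote fetch
         return "large parsed document";
      });

      ExecutorService pool = Executors.newFixedThreadPool(2);
      Future<String> first = pool.submit(task);
      Future<String> second = pool.submit(task);
      first.get();
      second.get();
      pool.shutdown();

      // Prints 1: the second caller blocked until the first finished loading.
      System.out.println("loader invocations: " + loaderCalls.get());
   }

   private static void sleep(long millis) {
      try { Thread.sleep(millis); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
   }
}
{code}
With a non-synchronized implementation, the printed count can be 2.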
[JBoss JIRA] (ISPN-7224) Support synchronous get in Spring's cache abstraction
by Mike Noordermeer (JIRA)
[ https://issues.jboss.org/browse/ISPN-7224?page=com.atlassian.jira.plugin.... ]
Mike Noordermeer commented on ISPN-7224:
----------------------------------------
It's true that it sort of adheres to the contract, but the whole sync flag becomes fairly useless if it is not atomic. If the compute function were fast, I wouldn't need caching; and if it isn't fast, I don't want to execute it multiple times.
The Spring contract does say "if possible", and I do think it is possible to lock the key (at least locally), so I'm not 100% sure it fully matches the contract.
I'll see if I can work around it by overriding a couple of classes; thanks for your reply.
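One possible shape of such a workaround, purely as a sketch (the decorator name is hypothetical and the synchronization is confined to a single JVM): wrap the Spring {{Cache}} and serialize {{get(Object, Callable)}} per key.
{code}
import java.util.concurrent.Callable;
import org.springframework.cache.Cache;

// Hypothetical decorator around an existing Spring Cache. It serializes
// value loading per key within this JVM; it adds no cluster-wide guarantee.
public class LocallySynchronizedCache implements Cache {

   private final Cache delegate;

   public LocallySynchronizedCache(Cache delegate) {
      this.delegate = delegate;
   }

   @Override
   public <T> T get(Object key, Callable<T> valueLoader) {
      // Intern a per-key monitor so two threads with equal keys share a lock
      // (a simple, if hacky, choice for a sketch).
      synchronized (("cache-lock:" + key).intern()) {
         ValueWrapper cached = delegate.get(key);
         if (cached != null) {
            @SuppressWarnings("unchecked")
            T value = (T) cached.get();
            return value;
         }
         try {
            T value = valueLoader.call();
            delegate.put(key, value);
            return value;
         } catch (Exception e) {
            throw new ValueRetrievalException(key, valueLoader, e);
         }
      }
   }

   // Remaining Cache methods simply delegate.
   @Override public String getName() { return delegate.getName(); }
   @Override public Object getNativeCache() { return delegate.getNativeCache(); }
   @Override public ValueWrapper get(Object key) { return delegate.get(key); }
   @Override public <T> T get(Object key, Class<T> type) { return delegate.get(key, type); }
   @Override public void put(Object key, Object value) { delegate.put(key, value); }
   @Override public ValueWrapper putIfAbsent(Object key, Object value) { return delegate.putIfAbsent(key, value); }
   @Override public void evict(Object key) { delegate.evict(key); }
   @Override public void clear() { delegate.clear(); }
}
{code}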
[JBoss JIRA] (ISPN-7224) Support synchronous get in Spring's cache abstraction
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-7224?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec commented on ISPN-7224:
-------------------------------------------
I guess you are referring to [this piece of code|https://github.com/infinispan/infinispan/blob/master/spring/spring4/...]:
{code}
@Override
public <T> T get(Object key, Callable<T> valueLoader) {
   return cacheImplementation.get(key, valueLoader);
}
{code}
This in turn invokes [this|https://github.com/infinispan/infinispan/blob/master/spring/spring4/...]:
{code}
@Override
public <T> T get(Object key, Callable<T> valueLoader) {
   return (T) nativeCache.computeIfAbsent(key, keyToBeInserted -> {
      try {
         return valueLoader.call(); //loading value in a caller's thread
      } catch (Exception e) {
         throw ValueRetrievalExceptionResolver.throwValueRetrievalException(key, valueLoader, e);
      }
   });
}
{code}
From the [Spring Cache documentation|http://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/cache/Cache.html#get-java.lang.Object-java.util.concurrent.Callable-] we can learn that:
{quote}
Return the value to which this cache maps the specified key, obtaining that value from valueLoader if necessary. This method provides a simple substitute for the conventional "if cached, return; otherwise create, cache and return" pattern.
If possible, implementations should ensure that the loading operation is synchronized so that the specified valueLoader is only called once in case of concurrent access on the same key.
{quote}
So Spring suggests locking but doesn't enforce it (it is still up to the implementation). And of course you are absolutely right that [computeIfAbsent|https://docs.oracle.com/javase/8/docs/api/java/util/Map.h...] gives no atomicity guarantee by default:
{quote}
The default implementation makes no guarantees about synchronization or atomicity properties of this method. Any implementation providing atomicity guarantees must override this method and document its concurrency properties. In particular, all implementations of subinterface ConcurrentMap must document whether the function is applied once atomically only if the value is not present.
{quote}
To sum it up: I believe we are aligned with the Spring Cache contract (since it doesn't strictly enforce synchronization). In our implementation we decided to avoid locking in favor of performance; we would need to lock the key, and blocking the caller's thread is not sufficient, because you may use different integrations (such as CDI) at the same time.
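To illustrate why a check-then-act {{computeIfAbsent}} (the shape of the {{Map}} default implementation quoted above) can invoke the loader twice, a minimal sketch of the race; this is not Infinispan code, just the default-style logic spelled out:
{code}
import java.util.Map;
import java.util.function.Function;

// Check-then-act shape of the Map#computeIfAbsent default implementation.
// Two threads can both observe a missing key, so both run the mapping
// function; one result then overwrites the other.
public class NonAtomicComputeIfAbsent {

   public static <K, V> V computeIfAbsentDefaultStyle(Map<K, V> map, K key, Function<K, V> loader) {
      V value = map.get(key);          // thread A and thread B may both see null here
      if (value == null) {
         value = loader.apply(key);    // ...so both invoke the (expensive) loader
         if (value != null) {
            map.put(key, value);       // last writer wins
         }
      }
      return value;
   }
}
{code}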