[JBoss JIRA] (ISPN-9459) Remove compat mode from the Memcached server
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-9459?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes updated ISPN-9459:
------------------------------------
Status: Open (was: New)
> Remove compat mode from the Memcached server
> --------------------------------------------
>
> Key: ISPN-9459
> URL: https://issues.jboss.org/browse/ISPN-9459
> Project: Infinispan
> Issue Type: Sub-task
> Components: Memcached
> Reporter: Gustavo Fernandes
> Assignee: Gustavo Fernandes
>
> The memcached server makes it hard to enable interoperability with other endpoints, as it stores keys as java.lang.String and treats byte[] values as opaque.
> It should respect the cache storage configuration and provide a way to specify the data type of the values that clients send and receive, so that they can be converted to/from the storage format.
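As a sketch of the kind of conversion the endpoint would need (plain Java with illustrative names only, not the real Infinispan API), a text client value could be transcoded to and from the stored byte[] like this:

```java
import java.nio.charset.StandardCharsets;

// Hypothetical transcoder: converts between the client-visible text form
// and the byte[] form the cache actually stores. Names are illustrative.
final class MemcachedValueTranscoder {
    // Client sends text; storage wants raw bytes.
    static byte[] toStorage(String clientValue) {
        return clientValue.getBytes(StandardCharsets.UTF_8);
    }

    // Storage holds raw bytes; client expects text back.
    static String fromStorage(byte[] storedValue) {
        return new String(storedValue, StandardCharsets.UTF_8);
    }
}
```

With such a hook in place, the same stored bytes stay readable from other endpoints that agree on the storage format.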
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
5 years, 8 months
[JBoss JIRA] (ISPN-5545) Make lazy near caching more selective
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-5545?page=com.atlassian.jira.plugin.... ]
William Burns edited comment on ISPN-5545 at 8/29/18 10:45 AM:
---------------------------------------------------------------
I wonder if we should convert this JIRA into something a little more generic regarding how to make near cache scale better.
I came to the same conclusion regarding Create for INVALIDATION. I think this is critical: it reduces the number of generated events at very low cost, and the user is completely unaffected. The only time this can't be used is if we implement tombstones in the near cache, since CREATE would have to invalidate those.
The second is a bit more interesting (being selective), because the user would have to specify what entries they want to cache and we would have to be able to send that to the server in some fashion (possibly causing class loading issues).
I thought of two additional ways to reduce near cache overhead:
1. When a client event queue is found to be full, instead of waiting forever we should probably add some sort of timeout. If the timeout fires, we throw out the event queue and just send a clear event to the client, so the client can essentially start over with its near cache. This could cause issues if the server is constantly under fire from modify/remove events, but for bursts it will at least let the server respond in a more timely fashion.
2. Possibly store the requested keys on the server side for a given client, so that INVALIDATION messages are sent only for keys that are actually modified, instead of all keys. This would require clearing out the cache during rehash, I would think, since you don't want to retain keys for non-owned entries (also, this shouldn't be used if the underlying storage is off heap) - we should make sure the retained key is the same one as in the container.
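Idea 2 above could be sketched roughly like this (plain Java with hypothetical names, not actual Infinispan code): the server remembers which keys each client has read, and routes an invalidation only to the clients that actually hold the modified key.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative sketch of server-side key tracking per client.
final class TrackedInvalidationRouter {
    private final Map<String, Set<String>> keysByClient = new HashMap<>();

    // Called when a client reads a key (and thus may near-cache it).
    void recordRead(String clientId, String key) {
        keysByClient.computeIfAbsent(clientId, c -> new HashSet<>()).add(key);
    }

    // Called on modify/remove: returns the clients to invalidate,
    // and drops the key from their tracked sets.
    List<String> clientsToInvalidate(String key) {
        List<String> targets = new ArrayList<>();
        keysByClient.forEach((client, keys) -> {
            if (keys.remove(key)) {
                targets.add(client);
            }
        });
        return targets;
    }

    // Called on rehash: forget keys this node no longer owns.
    void dropKeysNotOwned(Set<String> ownedKeys) {
        keysByClient.values().forEach(keys -> keys.retainAll(ownedKeys));
    }
}
```

The memory cost of the tracked sets is the trade-off against sending fewer events on the wire.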
> Make lazy near caching more selective
> -------------------------------------
>
> Key: ISPN-5545
> URL: https://issues.jboss.org/browse/ISPN-5545
> Project: Infinispan
> Issue Type: Enhancement
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
>
> In the current form, when lazy near caching is enabled, the server sends invalidation messages for any key that has been created, modified or removed.
> This is suboptimal for a couple of reasons:
> 1. First of all, a near cache might only be interested in receiving invalidation events for the keys that are currently stored in the near cache. If the near cache holds a small subset of the entire cache, such an option would vastly reduce the number of events sent to clients. So, there needs to be a way to narrow the events sent from the server to this subset of keys.
> 2. Lazy near caches do not care about created events. If an entry is present in the near cache, it has already been created, so the near cache is only interested in modified and removed events. There needs to be a way to narrow this down too.
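Point 2 could be sketched as a simple server-side filter (illustrative Java; the event names are hypothetical, not the Hot Rod protocol's):

```java
import java.util.EnumSet;
import java.util.Set;

// Sketch: a lazy near cache only needs MODIFIED and REMOVED events,
// so CREATED events can be dropped before they reach the client.
final class LazyNearCacheEventFilter {
    enum EventType { CREATED, MODIFIED, REMOVED }

    private static final Set<EventType> RELEVANT =
            EnumSet.of(EventType.MODIFIED, EventType.REMOVED);

    // Returns true if the event should be sent to a lazy near cache client.
    static boolean shouldSend(EventType type) {
        return RELEVANT.contains(type);
    }
}
```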
[JBoss JIRA] (ISPN-9463) Provide API to enlist the resource with TransactionManager
by Ramesh Reddy (JIRA)
Ramesh Reddy created ISPN-9463:
----------------------------------
Summary: Provide API to enlist the resource with TransactionManager
Key: ISPN-9463
URL: https://issues.jboss.org/browse/ISPN-9463
Project: Infinispan
Issue Type: Enhancement
Components: Transactions
Reporter: Ramesh Reddy
Currently, Infinispan automatically enlists in the transaction when a transaction is bound to the executing thread. However, in cases like Teiid, where remote access to Infinispan is wrapped with a resource adapter (RAR), the resource adapter performs the explicit enlisting and delisting. For this, Infinispan needs to provide an API such as
{code}
RemoteCacheManager.getXaResource()
{code}
so that the adapter can obtain the XA resource.
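The enlistment pattern the adapter would drive might look like the following (a stub sketch against the standard javax.transaction.xa API; getXaResource() itself is only proposed here, and the stub below only tracks start/end calls, it is not a real XA implementation):

```java
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

// Stub standing in for the XAResource the proposed
// RemoteCacheManager.getXaResource() would return. A resource adapter
// would pass it to Transaction.enlistResource()/delistResource().
final class StubCacheXAResource implements XAResource {
    boolean started;

    @Override public void start(Xid xid, int flags) { started = true; }
    @Override public void end(Xid xid, int flags) { started = false; }
    @Override public int prepare(Xid xid) { return XA_OK; }
    @Override public void commit(Xid xid, boolean onePhase) { }
    @Override public void rollback(Xid xid) { }
    @Override public void forget(Xid xid) { }
    @Override public Xid[] recover(int flag) { return new Xid[0]; }
    @Override public boolean isSameRM(XAResource other) { return other == this; }
    @Override public int getTransactionTimeout() { return 0; }
    @Override public boolean setTransactionTimeout(int seconds) { return false; }
}
```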
[JBoss JIRA] (ISPN-9095) NPE during server shutdown when using scattered cache
by Paul Ferraro (JIRA)
[ https://issues.jboss.org/browse/ISPN-9095?page=com.atlassian.jira.plugin.... ]
Paul Ferraro commented on ISPN-9095:
------------------------------------
[~rvansa] Please backport this fix to 9.3.x.
> NPE during server shutdown when using scattered cache
> -----------------------------------------------------
>
> Key: ISPN-9095
> URL: https://issues.jboss.org/browse/ISPN-9095
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.2.1.Final
> Reporter: Paul Ferraro
> Assignee: Radim Vansa
> Fix For: 9.4.0.Beta1
>
>
> We hit an NPE when running tests for RFE EAP7-867.
> EAP distribution was built from https://github.com/pferraro/wildfly/tree/scattered .
> Test description: positive stress test (no failover), 4-node EAP cluster; starting with 400 clients and raising the number of clients to 6000 by the end of the test.
> During clean server shutdown at the end of the test, the server logged an NPE and got stuck:
> {code}
> [JBossINF] [0m[31m07:55:57,643 ERROR [org.infinispan.scattered.impl.ScatteredStateConsumerImpl] (thread-200,ejb,dev214) ISPN000471: Failed processing values received from remote node during rebalance.: java.lang.NullPointerException
> [JBossINF] at org.infinispan.scattered.impl.ScatteredStateConsumerImpl.applyValues(ScatteredStateConsumerImpl.java:505)
> [JBossINF] at org.infinispan.scattered.impl.ScatteredStateConsumerImpl.lambda$getValuesAndApply$8(ScatteredStateConsumerImpl.java:475)
> [JBossINF] at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
> [JBossINF] at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
> [JBossINF] at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> [JBossINF] at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
> [JBossINF] at org.infinispan.remoting.transport.AbstractRequest.complete(AbstractRequest.java:66)
> [JBossINF] at org.infinispan.remoting.transport.impl.SingleTargetRequest.receiveResponse(SingleTargetRequest.java:56)
> [JBossINF] at org.infinispan.remoting.transport.impl.SingleTargetRequest.onResponse(SingleTargetRequest.java:35)
> [JBossINF] at org.infinispan.remoting.transport.impl.RequestRepository.addResponse(RequestRepository.java:53)
> [JBossINF] at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1304)
> [JBossINF] at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1207)
> [JBossINF] at org.infinispan.remoting.transport.jgroups.JGroupsTransport.access$200(JGroupsTransport.java:123)
> [JBossINF] at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.receive(JGroupsTransport.java:1342)
> [JBossINF] at org.jgroups.JChannel.up(JChannel.java:819)
> [JBossINF] at org.jgroups.fork.ForkProtocolStack.up(ForkProtocolStack.java:134)
> [JBossINF] at org.jgroups.stack.Protocol.up(Protocol.java:340)
> [JBossINF] at org.jgroups.protocols.FORK.up(FORK.java:134)
> [JBossINF] at org.jgroups.protocols.FRAG3.up(FRAG3.java:166)
> [JBossINF] at org.jgroups.protocols.FlowControl.up(FlowControl.java:343)
> [JBossINF] at org.jgroups.protocols.FlowControl.up(FlowControl.java:343)
> [JBossINF] at org.jgroups.protocols.pbcast.GMS.up(GMS.java:864)
> [JBossINF] at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:240)
> [JBossINF] at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1002)
> [JBossINF] at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:728)
> [JBossINF] at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:383)
> [JBossINF] at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:600)
> [JBossINF] at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:119)
> [JBossINF] at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:199)
> [JBossINF] at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:252)
> [JBossINF] at org.jgroups.protocols.MERGE3.up(MERGE3.java:276)
> [JBossINF] at org.jgroups.protocols.Discovery.up(Discovery.java:267)
> [JBossINF] at org.jgroups.protocols.TP.passMessageUp(TP.java:1248)
> [JBossINF] at org.jgroups.util.SubmitToThreadPool$SingleMessageHandler.run(SubmitToThreadPool.java:87)
> [JBossINF] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> [JBossINF] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> [JBossINF] at org.jboss.as.clustering.jgroups.ClassLoaderThreadFactory.lambda$newThread$0(ClassLoaderThreadFactory.java:52)
> [JBossINF] at java.lang.Thread.run(Thread.java:748)
> [JBossINF]
> {code}
> Scattered cache was configured with bias-lifespan="0".
> Server configuration:
> http://jenkins.hosts.mwqe.eng.bos.redhat.com/hudson/job/eap-7x-stress-ses...
> Server link:
> http://jenkins.hosts.mwqe.eng.bos.redhat.com/hudson/job/eap-7x-stress-ses...