[JBoss JIRA] (ISPN-4903) ServerFailureRetrySingleOwnerTest doesn't actually test client retry
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-4903?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes updated ISPN-4903:
------------------------------------
Fix Version/s: 8.2.0.Final
(was: 8.2.0.CR1)
> ServerFailureRetrySingleOwnerTest doesn't actually test client retry
> --------------------------------------------------------------------
>
> Key: ISPN-4903
> URL: https://issues.jboss.org/browse/ISPN-4903
> Project: Infinispan
> Issue Type: Bug
> Components: Server, Test Suite - Server
> Affects Versions: 7.0.0.CR2
> Reporter: Dan Berindei
> Fix For: 8.2.0.Final
>
> Attachments: ServerFailureRetrySingleOwnerTest.java
>
>
> With {{useSynchronization = true}} (the default, before ISPN-4166 is integrated), the {{SuspectException}} thrown by the listener is swallowed by the transaction manager and the client doesn't retry. The test doesn't pick that up because the exception is thrown _after_ the entry has been updated in the data container (a regular SuspectException would be thrown before).
> I changed the configuration to {{useSynchronization = false}}, but it didn't work because the {{SuspectException}} is wrapped in a {{CacheListenerException}}, so the client throws an exception instead of retrying. I also changed the test to use an interceptor instead of a listener, but then I got a {{ClassCastException}}:
> {noformat}
> Caused by: java.lang.ClassCastException: [B cannot be cast to org.infinispan.container.entries.CacheEntry
> at org.infinispan.cache.impl.CacheImpl.getCacheEntry(CacheImpl.java:424)
> at org.infinispan.cache.impl.CacheImpl.getCacheEntry(CacheImpl.java:429)
> at org.infinispan.server.hotrod.Decoder2x$.customReadKey(Decoder2x.scala:285)
> at org.infinispan.server.hotrod.HotRodDecoder.customDecodeKey(HotRodDecoder.scala:156)
> at org.infinispan.server.core.AbstractProtocolDecoder.org$infinispan$server$core$AbstractProtocolDecoder$$decodeKey(AbstractProtocolDecoder.scala:176)
> at org.infinispan.server.core.AbstractProtocolDecoder.decodeDispatch(AbstractProtocolDecoder.scala:71) ... 14 more
> {noformat}
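> For reference, a minimal sketch of the kind of failure-injecting listener described above (class and message are hypothetical, not the attached test's exact code). Because listener callbacks run after the entry has been committed to the data container, a swallowed exception still leaves the value updated:
> {code:java}
> import org.infinispan.notifications.Listener;
> import org.infinispan.notifications.cachelistener.annotation.CacheEntryCreated;
> import org.infinispan.notifications.cachelistener.event.CacheEntryCreatedEvent;
> import org.infinispan.remoting.transport.jgroups.SuspectException;
>
> // Throws from the post-commit callback, so the entry is already in the
> // data container by the time the exception propagates (or is swallowed).
> @Listener
> public class SuspectingListener {
>    @CacheEntryCreated
>    public void entryCreated(CacheEntryCreatedEvent<?, ?> event) {
>       if (!event.isPre()) {
>          throw new SuspectException("Simulated suspect exception");
>       }
>    }
> }
> {code}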
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-5083) Hot Rod decoder should use async Cache operations
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-5083?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes updated ISPN-5083:
------------------------------------
Fix Version/s: 8.2.0.Final
(was: 8.2.0.CR1)
> Hot Rod decoder should use async Cache operations
> -------------------------------------------------
>
> Key: ISPN-5083
> URL: https://issues.jboss.org/browse/ISPN-5083
> Project: Infinispan
> Issue Type: Enhancement
> Components: Remote Protocols
> Reporter: Galder Zamarreño
> Assignee: Gustavo Fernandes
> Fix For: 8.2.0.Final
>
>
> The Hot Rod decoder is currently tying up Netty threads by calling Infinispan's synchronous operations. Instead, it should call the async operations, convert the NotifyingFutures to Scala Futures, and write the reply when it arrives. This should increase performance, especially under heavy load.
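> A minimal sketch of the intended pattern, assuming a handler is handed the Netty context (the class, method, and one-byte status reply are illustrative, not the actual decoder code; the Scala-future conversion is elided and the listener is attached to the NotifyingFuture directly):
> {code:java}
> import java.util.concurrent.Future;
>
> import io.netty.buffer.Unpooled;
> import io.netty.channel.ChannelHandlerContext;
> import org.infinispan.Cache;
>
> final class AsyncPutSketch {
>    static final byte STATUS_SUCCESS = 0x00; // illustrative status code
>
>    // Submit the write asynchronously and only touch the channel again
>    // once the NotifyingFuture completes; the Netty thread is free in
>    // between instead of blocking on a sync cache.put().
>    void handlePut(ChannelHandlerContext ctx, Cache<byte[], byte[]> cache,
>                   byte[] key, byte[] value) {
>       cache.putAsync(key, value).attachListener((Future<byte[]> done) ->
>             ctx.writeAndFlush(Unpooled.wrappedBuffer(new byte[]{STATUS_SUCCESS})));
>    }
> }
> {code}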
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-5093) Granularity of remote event listener implementations doing the same job
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-5093?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes updated ISPN-5093:
------------------------------------
Fix Version/s: 8.2.0.Final
(was: 8.2.0.CR1)
> Granularity of remote event listener implementations doing the same job
> -----------------------------------------------------------------------
>
> Key: ISPN-5093
> URL: https://issues.jboss.org/browse/ISPN-5093
> Project: Infinispan
> Issue Type: Enhancement
> Components: Remote Protocols
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Fix For: 8.2.0.Final
>
>
> Currently, if N clients add the same listener to a cache, each doing the same job (e.g. keeping a near cache consistent), the result is N server-side cluster listeners, each potentially installed on a different node. If one of those nodes fails, all clients that had a listener registered on that node have to find a different node for it.
> The downside of this approach is that there are as many cluster listeners installed as clients that have added listeners (or have near caching enabled), which might not be very efficient. If a node goes down, all clients with cluster listeners there need to fail over to some other node.
> The advantage is the simplicity of deciding where to add the listener and where to fail over to.
> For this type of scenario, an alternative setup might be worth exploring:
> If all these client-side listeners are interested in exactly the same events, and the client ID were exposed via the RemoteCache API, a server-side cluster listener multiplexing between all these clients could potentially be built. In other words, instead of having N clients register N cluster listeners, the first client would register the cluster listener with a client listener ID, and further registrations with the same client listener ID would simply have their connections added to the existing cluster listener implementation.
> To maximise the efficiency of this solution, all clients (even those running in different JVMs) should, given the same client listener ID, agree on the node on which to add the listener. For a distributed cache, hashing on the cache name would work; for replicated caches, since there's no hashing available, the first node of the view could be used, as sketched below.
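> A sketch of that selection rule, written against the embedded API for illustration (class and method names are hypothetical; a real client would apply the same rule to its own topology view):
> {code:java}
> import java.util.List;
>
> import org.infinispan.AdvancedCache;
> import org.infinispan.remoting.transport.Address;
>
> final class ListenerNodePicker {
>    // Deterministic choice so that every client, given the same cache,
>    // lands the multiplexed cluster listener on the same node.
>    static Address pickListenerNode(AdvancedCache<?, ?> cache) {
>       if (cache.getCacheConfiguration().clustering().cacheMode().isDistributed()) {
>          // Distributed: route via the consistent hash, keyed on the cache name.
>          return cache.getDistributionManager().getPrimaryLocation(cache.getName());
>       }
>       // Replicated: no consistent hash to lean on; take the first view member.
>       List<Address> members = cache.getCacheManager().getMembers();
>       return members.get(0);
>    }
> }
> {code}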
> Since the logic to be executed server-side differs between the first node adding the client listener and the rest, synchronization would be needed to make sure that the first invocation creates the cluster listener and the others simply add their channel to it (see the registration sketch below).
> Failover is a bit trickier too, because if the node with the cluster listener goes down, all the clients have to fail over, which again exposes first-vs-rest logic.
> The advantages of this approach are the reduction in the number of cluster listeners and, potentially, the efficiency of having a single cluster listener implementation server-side.
> The disadvantages come from the server-side logic to add and fail over a cluster listener, which needs to take into account whether the listener is already present, and from the clients needing specific routing so that listeners with the same ID are added on the same node.
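> A sketch of the first-vs-rest registration logic (all names hypothetical), where {{computeIfAbsent}} provides the synchronization: the first registration for a client listener ID installs the single cluster listener, and later ones only add their channel:
> {code:java}
> import java.util.List;
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.ConcurrentMap;
> import java.util.concurrent.CopyOnWriteArrayList;
>
> import io.netty.channel.Channel;
>
> final class MultiplexedListenerRegistry {
>    static final class Multiplexer {
>       // Channels of every client sharing this listener ID; events from
>       // the single cluster listener would be fanned out to all of them.
>       final List<Channel> channels = new CopyOnWriteArrayList<>();
>    }
>
>    private final ConcurrentMap<String, Multiplexer> byListenerId =
>          new ConcurrentHashMap<>();
>
>    void register(String clientListenerId, Channel channel) {
>       Multiplexer mux = byListenerId.computeIfAbsent(clientListenerId, id -> {
>          installClusterListener(id); // hypothetical: one cluster listener per ID
>          return new Multiplexer();
>       });
>       mux.channels.add(channel);
>    }
>
>    private void installClusterListener(String id) {
>       // Placeholder: would call cache.addListener(...) exactly once per ID.
>    }
> }
> {code}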
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-5077) Custom remote events can be slightly inefficient
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-5077?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes updated ISPN-5077:
------------------------------------
Fix Version/s: 8.2.0.Final
(was: 8.2.0.CR1)
> Custom remote events can be slightly inefficient
> ------------------------------------------------
>
> Key: ISPN-5077
> URL: https://issues.jboss.org/browse/ISPN-5077
> Project: Infinispan
> Issue Type: Enhancement
> Components: hot, Remote Protocols
> Affects Versions: 7.0.2.Final
> Reporter: Galder Zamarreño
> Fix For: 8.2.0.Final
>
>
> Something we might want to improve for Hot Rod 3.0 protocol:
> [16:40] <galderz> i've been thinking further about converters, and I think i've found a slight mismatch between what converter means for embedded listeners vs remote listeners
> [16:40] <wburns> oh yeah?
> [16:40] <galderz> for embedded listeners, it essentially transforms what you see as `value`
> [16:41] <galderz> with the knowledge that key and metadata information will be shipped
> [16:41] <galderz> the way i mapped converter to remote listeners is that whatever the converter returns, we ship that, as is, to the client
> [16:41] <galderz> so, if a remote listener wants a custom event that includes key + value
> [16:41] <galderz> it needs to develop a converter impl that returns bytes containing key + value
> [16:41] <galderz> which is inefficient because you are passing around the key twice
> [16:42] <galderz> once as part of the event itself, and again inside the converted value
> [16:42] <galderz> inefficient from the POV of shipping stuff around from other nodes to where the cluster listener is located
> [16:44] <wburns> yeah makes sense
> [16:44] <galderz> not a major issue but not easy to fix without changing semantics or public protocol
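> To make the mismatch concrete, an illustrative converter (the class is hypothetical; {{CacheEventConverter}} is the interface remote listeners use): to deliver key + value in a custom event, it has to re-encode the key into the returned bytes even though the event envelope already carries it:
> {code:java}
> import java.nio.ByteBuffer;
>
> import org.infinispan.metadata.Metadata;
> import org.infinispan.notifications.cachelistener.filter.CacheEventConverter;
> import org.infinispan.notifications.cachelistener.filter.EventType;
>
> public class KeyValueConverter
>       implements CacheEventConverter<byte[], byte[], byte[]> {
>    @Override
>    public byte[] convert(byte[] key, byte[] oldValue, Metadata oldMetadata,
>                          byte[] newValue, Metadata newMetadata, EventType eventType) {
>       byte[] value = newValue == null ? new byte[0] : newValue;
>       // The key travels twice: once in the event itself, and again here
>       // inside the converted payload.
>       return ByteBuffer.allocate(8 + key.length + value.length)
>             .putInt(key.length).put(key)
>             .putInt(value.length).put(value)
>             .array();
>    }
> }
> {code}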
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-5076) Pessimistic transactions can lose their locks when the primary owner changes
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-5076?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes updated ISPN-5076:
------------------------------------
Fix Version/s: 8.2.0.Final
(was: 8.2.0.CR1)
> Pessimistic transactions can lose their locks when the primary owner changes
> ----------------------------------------------------------------------------
>
> Key: ISPN-5076
> URL: https://issues.jboss.org/browse/ISPN-5076
> Project: Infinispan
> Issue Type: Bug
> Components: Core, State Transfer
> Affects Versions: 7.0.2.Final, 7.1.0.Alpha1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Critical
> Labels: 7.0
> Fix For: 8.2.0.Final
>
>
> In a pessimistic cache, if a transaction {{T1}} has a {{put(k, v)}} operation and the primary owner of the key is the originator, the lock is acquired on the originator but is not replicated to the backup(s).
> If one of the backup owners becomes the primary owner, it will allow another transaction {{T2}} to lock (and update) key {{k}} before it receives the one-phase prepare command from the originator of {{T1}}.
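> A sketch of the scenario from an originator that is also the primary owner (names illustrative):
> {code:java}
> import javax.transaction.TransactionManager;
>
> import org.infinispan.Cache;
>
> final class PessimisticLockLoss {
>    // With pessimistic locking, T1's put() acquires the lock locally on
>    // the primary owner only; backups learn about it only via the
>    // one-phase prepare sent at commit time.
>    static void t1OnPrimaryOwner(Cache<String, String> cache, String k) throws Exception {
>       TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
>       tm.begin();
>       cache.put(k, "v1"); // lock held on this node only
>       // If the primary owner changes here, the backup-turned-primary can
>       // grant the lock on k to another transaction T2 before T1's
>       // one-phase prepare arrives.
>       tm.commit();
>    }
> }
> {code}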
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)