[JBoss JIRA] (ISPN-5093) Granularity of remote event listener implementations doing the same job
by Tristan Tarrant (Jira)
[ https://issues.redhat.com/browse/ISPN-5093?page=com.atlassian.jira.plugin... ]
Tristan Tarrant reassigned ISPN-5093:
-------------------------------------
Assignee: (was: Galder Zamarreño)
> Granularity of remote event listener implementations doing the same job
> -----------------------------------------------------------------------
>
> Key: ISPN-5093
> URL: https://issues.redhat.com/browse/ISPN-5093
> Project: Infinispan
> Issue Type: Enhancement
> Components: Remote Protocols
> Reporter: Galder Zamarreño
> Priority: Major
>
> Currently, if N clients add the same listener to a cache that does the same job, e.g. keeping a near cache consistent, this results in N server-side cluster listeners created, each potentially installed in different nodes. If one of those nodes fails, all clients that had a listener registered to that node will have to find a different node for this listener.
> The downside of this approach is that there are as many cluster listeners installed as there are clients that have added listeners (or have near caches enabled), which might not be very efficient. If a node goes down, all clients that have cluster listeners there need to fail over to some other node.
> The advantage of this approach is the simplicity of deciding where to add the listener and where to fail over to.
> For this type of scenario, an alternative set-up might be worth exploring:
> If all these client-side listeners are interested in exactly the same events, and the client ID were exposed via the RemoteCache API, a server-side cluster listener multiplexing between all these clients could potentially be built. In other words, instead of N clients registering N cluster listeners, the first client would register the cluster listener with a client listener ID, and if further registrations arrived with the same client listener ID, their connections would be added to the existing cluster listener implementation.
> To maximise the efficiency of this solution, all clients (even those running in different JVMs), given the same client listener ID, should agree on the node in which to add the listener. For a distributed cache, hashing on the cache name would work. For replicated caches, since there's no hashing available, the first node of the view could be used.
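The node-agreement rule above could be sketched as follows. This is a minimal illustration, not Infinispan API: the class and method names (`ListenerNodePicker`, `selectListenerNode`) and the plain-string node view are all hypothetical.

```java
import java.util.List;

// Sketch: deterministic listener-node selection that every client can compute
// independently, so all clients with the same view pick the same node.
public class ListenerNodePicker {
    // For a distributed cache, hash the cache name onto the node list;
    // for a replicated cache (no hashing available), use the first node of the view.
    static String selectListenerNode(List<String> view, String cacheName, boolean distributed) {
        if (distributed) {
            // floorMod keeps the index non-negative even for negative hash codes
            int idx = Math.floorMod(cacheName.hashCode(), view.size());
            return view.get(idx);
        }
        return view.get(0);
    }

    public static void main(String[] args) {
        List<String> view = List.of("node-a", "node-b", "node-c");
        // Two independent clients with the same view agree on the node.
        String first = selectListenerNode(view, "myCache", true);
        String second = selectListenerNode(view, "myCache", true);
        assert first.equals(second);
        // Replicated cache: always the first node of the view.
        assert selectListenerNode(view, "myCache", false).equals("node-a");
        System.out.println("chosen=" + first);
    }
}
```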
> Since the server-side logic differs between the first registration of the client listener and subsequent ones, synchronization would be needed to make sure that only the first invocation creates the cluster listener, and the others simply add their channel to it.
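The first-registration-creates, others-join synchronization could look roughly like this. Again a sketch only, assuming a registry keyed by client listener ID; none of these names are actual Infinispan internals, and channels are represented as plain strings for brevity.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the proposed server-side multiplexing: the first registration for a
// given client listener ID creates the cluster listener; later registrations
// with the same ID only attach their channel to the existing listener.
public class MultiplexedListenerRegistry {
    static final class ClusterListener {
        final Set<String> channels = ConcurrentHashMap.newKeySet();
    }

    private final Map<String, ClusterListener> listeners = new ConcurrentHashMap<>();

    // computeIfAbsent provides the required synchronization: exactly one caller
    // creates the ClusterListener; every caller then adds its channel to it.
    ClusterListener register(String clientListenerId, String channel) {
        ClusterListener l = listeners.computeIfAbsent(clientListenerId, id -> new ClusterListener());
        l.channels.add(channel);
        return l;
    }

    int channelCount(String clientListenerId) {
        ClusterListener l = listeners.get(clientListenerId);
        return l == null ? 0 : l.channels.size();
    }

    public static void main(String[] args) {
        MultiplexedListenerRegistry reg = new MultiplexedListenerRegistry();
        reg.register("near-cache", "client-1");      // first: creates the listener
        reg.register("near-cache", "client-2");      // second: joins the existing one
        assert reg.channelCount("near-cache") == 2;
        System.out.println(reg.channelCount("near-cache"));
    }
}
```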
> Failover is a bit trickier too, because if the node hosting the cluster listener goes down, all the clients have to fail over, which again exposes first-vs-the-rest logic.
> The advantages of this approach are the reduced number of cluster listeners and the potential efficiency of having a single cluster listener implementation server-side.
> The disadvantages come from the server-side logic to add/fail over a cluster listener, which needs to take into account whether the listener is already present. A further disadvantage is that clients need specific routing to add listeners to the same node.
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (ISPN-11132) Data not indexed during state transfer in server mode
by Gustavo Fernandes (Jira)
[ https://issues.redhat.com/browse/ISPN-11132?page=com.atlassian.jira.plugi... ]
Gustavo Fernandes updated ISPN-11132:
-------------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> Data not indexed during state transfer in server mode
> -----------------------------------------------------
>
> Key: ISPN-11132
> URL: https://issues.redhat.com/browse/ISPN-11132
> Project: Infinispan
> Issue Type: Bug
> Components: Remote Querying
> Affects Versions: 9.4.17.Final, 10.1.0.Final
> Reporter: Gustavo Fernandes
> Assignee: Nistor Adrian
> Priority: Major
> Fix For: 10.1.1.Final, 9.4.18.Final
>
>
> A REPL cache with index auto-config skips indexing data during state transfer.
> The order of events observed is:
> CACHE_STARTING LIFECYCLE -> ST HAPPENS -> DATA WRITTEN -> CACHE STARTED LIFECYCLE
> During the data writes, the {{ProtobufValueWrapperIndexingInterceptor}} is activated by Hibernate Search but skips indexing altogether, since the {{ProtobufValueWrapperSearchWorkCreator}} is not yet in use and the descriptor therefore cannot be extracted from the entities. That search work creator is only installed in the cacheStarted phase.