[JBoss JIRA] (ISPN-11624) Server patching tool
by Tristan Tarrant (Jira)
[ https://issues.redhat.com/browse/ISPN-11624?page=com.atlassian.jira.plugi... ]
Tristan Tarrant updated ISPN-11624:
-----------------------------------
Status: Open (was: New)
> Server patching tool
> --------------------
>
> Key: ISPN-11624
> URL: https://issues.redhat.com/browse/ISPN-11624
> Project: Infinispan
> Issue Type: Feature Request
> Components: CLI, Server
> Reporter: Tristan Tarrant
> Assignee: Tristan Tarrant
> Priority: Major
> Labels: supportability
> Fix For: 11.0.0.Dev05
>
>
> Add patching commands to the CLI:
> * *patch create* to create patch archives (zips)
> * *patch ls* to list the patches applied to a server
> * *patch describe* to describe the contents of a patch archive
> * *patch install* to install a patch on a server
> * *patch rollback* to roll back the server to the previous patch
> The patch zip should contain all the artifacts needed to upgrade from any number of source versions to a target version, together with a series of JSON patch descriptors containing instructions (ADD, REMOVE, REPLACE, UPGRADE) on how to apply those artifacts.
> The installation process will back up any artifacts that will be replaced/upgraded/removed so that they can be rolled back.
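> A descriptor could look something like this (purely illustrative; the field names, versions and structure are assumptions, not the actual format):
> {code}
> {
>   "source-version": "10.1.8.Final",
>   "target-version": "11.0.0.Final",
>   "operations": [
>     { "action": "UPGRADE", "path": "lib/infinispan-core.jar" },
>     { "action": "REMOVE", "path": "lib/obsolete-module.jar" }
>   ]
> }
> {code}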
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (ISPN-11647) DataSource support in the Server
by Tristan Tarrant (Jira)
Tristan Tarrant created ISPN-11647:
--------------------------------------
Summary: DataSource support in the Server
Key: ISPN-11647
URL: https://issues.redhat.com/browse/ISPN-11647
Project: Infinispan
Issue Type: Feature Request
Components: Server
Affects Versions: 10.1.1.Final
Reporter: Tristan Tarrant
Assignee: Tristan Tarrant
Fix For: 11.0.0.Dev04, 11.0.0.Final
The server needs to have the ability to manage datasources/connection pools to databases so that they can be shared by multiple cache stores/loaders.
We should use Agroal for this.
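As a sketch, the server configuration could expose such a pool along these lines (element and attribute names are hypothetical, not an actual schema):
{code}
<data-sources>
   <data-source name="postgres" jndi-name="jdbc/postgres">
      <connection-factory driver="org.postgresql.Driver"
                          url="jdbc:postgresql://localhost:5432/store"
                          username="ispn" password="secret"/>
      <connection-pool max-size="10" min-size="2"/>
   </data-source>
</data-sources>
{code}
A JDBC cache store or loader would then reference the pool by name instead of configuring its own connection settings.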
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (ISPN-9222) Custom clientListener filters without a need to deploy java code to Infinispan server
by Pedro Zapata Fernandez (Jira)
[ https://issues.redhat.com/browse/ISPN-9222?page=com.atlassian.jira.plugin... ]
Pedro Zapata Fernandez updated ISPN-9222:
-----------------------------------------
Fix Version/s: 12.0.0.Final
> Custom clientListener filters without a need to deploy java code to Infinispan server
> -------------------------------------------------------------------------------------
>
> Key: ISPN-9222
> URL: https://issues.redhat.com/browse/ISPN-9222
> Project: Infinispan
> Issue Type: Enhancement
> Components: Documentation, Hot Rod
> Affects Versions: 9.2.1.Final
> Reporter: Marek Posolda
> Assignee: Donald Naro
> Priority: Major
> Labels: redhat-summit-18
> Fix For: 12.0.0.Final
>
>
> Currently JDG has a way to register client listeners for remote Hot Rod events. There are also ways to filter the events, so that a client listener doesn't receive events it's not interested in. But filtering currently requires custom code (a CacheEventFilterFactory) to be available on the JDG server side, as described in https://access.redhat.com/documentation/en-us/red_hat_jboss_data_grid/7.2... .
> I was wondering if it's possible to have a custom filter which can filter on fields of custom objects without a need to deploy custom code to the Infinispan/JDG server, so that neither the object class nor the CacheEventFilterFactory is required on the JDG side. AFAIK the protobuf schema can be used to query custom objects on the JDG server side without having the code of the objects available there, so I was thinking about something similar.
> More details: let's assume that on the Hot Rod client side I have an entity like this:
> {code}
> public class UserEntity {
>     private String username;
>     private String email;
>     private String country;
> }
> {code}
> I would be able to create a client listener like this (I wouldn't need to deploy "protobuf-factory"; it would be available on JDG out of the box):
> {code}
> @org.infinispan.client.hotrod.annotation.ClientListener(filterFactoryName = "protobuf-factory")
> public class CustomLogListener {
> ...
> }
> {code}
> Then I would be able to register the client listener like this (just an example of how the filtering "pseudo-language" could look):
> Interested only in users from the Czech Republic:
> {code}
> remoteCache.addClientListener(listener, new String[] { "country.equals('cs')" }, null);
> {code}
> Interested only in users from the Czech Republic with emails ending in "@redhat.com":
> {code}
> remoteCache.addClientListener(listener, new String[] { "country.equals('cs') && email.endsWith('@redhat.com')" }, null);
> {code}
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (ISPN-5290) Better automatic merge for caches with enabled partition handling
by Pedro Zapata Fernandez (Jira)
[ https://issues.redhat.com/browse/ISPN-5290?page=com.atlassian.jira.plugin... ]
Pedro Zapata Fernandez updated ISPN-5290:
-----------------------------------------
Fix Version/s: 12.0.0.Final
> Better automatic merge for caches with enabled partition handling
> -----------------------------------------------------------------
>
> Key: ISPN-5290
> URL: https://issues.redhat.com/browse/ISPN-5290
> Project: Infinispan
> Issue Type: Feature Request
> Environment: JDG cluster with partitionHandling enabled
> Reporter: Wolf-Dieter Fink
> Assignee: Dan Berindei
> Priority: Major
> Labels: cluster, clustering, infinispan, partition_handling
> Fix For: 12.0.0.Final
>
>
> At the moment there is no detection of whether a node joining the cluster is one of the nodes known from the "last stable view".
> This has the drawback that the cluster remains in DEGRADED_MODE if some nodes are restarted during the split-brain.
> Assuming the cluster split is caused by a power failure of some nodes, the available nodes are DEGRADED because >= numOwners nodes are lost.
> If the failed nodes are restarted (say, an application using library mode in EAP), these instances are now identified as new nodes because their node IDs are different.
> When these nodes join the 'cluster', all the nodes remain degraded because the restarted instances are known as different nodes and not as the lost nodes, so the cluster will not heal and come back to AVAILABLE.
> There is a way to prevent some of these scenarios by using server hinting to ensure that at least one owner will survive.
> But there are other cases where it would be good to have a different strategy for getting the cluster back to AVAILABLE mode.
> During the split-brain there is no way to continue, as there is no possibility to know whether "the other" partition is gone or still alive but unreachable.
> For a shared persistence it might be possible, but synchronizing that with locking and version columns imposes a huge drawback on normal operation.
> If the node ID can be kept, I see the following enhancements:
> - with a shared persistence there should be no data lost; once all nodes are back in the cluster it can go AVAILABLE and reload the missing entries
> - for a 'side' cache the values are calculated or retrieved from other (slow) systems, so the cluster can go AVAILABLE and reload the entries
> - in other cases there might be a WARNING/ERROR that all members are back from the split; some data may be lost, and the cluster is set back to AVAILABLE automatically or manually
> It might be complicated to compute these modes, but a partition-handling configuration option could let the administrator decide which behaviour is appropriate for a cache,
> i.e.
> <partition-handling enabled="true" healing="HEALING.MODE"/>
> where the modes are:
> AVAILABLE_NO_WARNING: back to AVAILABLE after all nodes from the "last stable view" are back
> AVAILABLE_WARNING_DATALOST: ditto, but log a warning that some data may be lost
> WARNING_DATALOST: only log a warning and a hint on how to re-enable AVAILABLE manually
> NONE: same as the current behaviour (if necessary; WARNING_DATALOST may be similar or better)
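The proposed modes could be captured as a simple enum (a sketch only: the type name and the boolean flags encode one reading of the proposal above, not an actual Infinispan API):

```java
// Illustrative sketch of the healing modes proposed in this issue.
// The flags are assumptions about intent, not an existing Infinispan API.
public enum HealingMode {
    AVAILABLE_NO_WARNING(true, false),      // heal silently once the last stable view is complete
    AVAILABLE_WARNING_DATALOST(true, true), // heal automatically but warn about possible data loss
    WARNING_DATALOST(false, true),          // warn only; the administrator heals manually
    NONE(false, false);                     // current behaviour: stay DEGRADED

    private final boolean autoHeals;
    private final boolean logsWarning;

    HealingMode(boolean autoHeals, boolean logsWarning) {
        this.autoHeals = autoHeals;
        this.logsWarning = logsWarning;
    }

    public boolean autoHeals() { return autoHeals; }

    public boolean logsWarning() { return logsWarning; }
}
```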
--
This message was sent by Atlassian Jira
(v7.13.8#713008)