[JBoss JIRA] (ISPN-8440) Enable ConflictResolution on partition merge with PreferConsistencyStrategy
by Ryan Emerson (JIRA)
[ https://issues.jboss.org/browse/ISPN-8440?page=com.atlassian.jira.plugin.... ]
Ryan Emerson updated ISPN-8440:
-------------------------------
Summary: Enable ConflictResolution on partition merge with PreferConsistencyStrategy (was: Enabled ConflictResolution on partition merge with PreferConsistencyStrategy)
> Enable ConflictResolution on partition merge with PreferConsistencyStrategy
> ---------------------------------------------------------------------------
>
> Key: ISPN-8440
> URL: https://issues.jboss.org/browse/ISPN-8440
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core
> Affects Versions: 9.1.1.Final
> Reporter: Ryan Emerson
> Assignee: Ryan Emerson
> Fix For: 9.2.0.Final
>
>
> Currently, conflict resolution only occurs on a partition merge when the PreferAvailabilityStrategy is utilised (ALLOW_READ_WRITES). It should also occur when utilising the PreferConsistencyStrategy, as it's possible for cache entries to become conflicted before a split-brain has been detected.
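For reference, a sketch of how the consistent case might be configured declaratively once this is implemented, assuming the 9.2 partition-handling schema (attribute and policy names are assumptions, not confirmed syntax):

```xml
<distributed-cache name="myCache">
   <!-- DENY_READ_WRITES corresponds to the PreferConsistencyStrategy; the
        merge-policy attribute would trigger conflict resolution on merge -->
   <partition-handling when-split="DENY_READ_WRITES" merge-policy="PREFERRED_ALWAYS"/>
</distributed-cache>
```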
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8402) Prevent rebalance
by Emmanuel Bernard (JIRA)
[ https://issues.jboss.org/browse/ISPN-8402?page=com.atlassian.jira.plugin.... ]
Emmanuel Bernard commented on ISPN-8402:
----------------------------------------
We discussed the notion of a timeout with [~william.burns], but in the end we decided it was not necessary; or rather, that it is OpenShift's responsibility to bring back the targeted number of nodes.
I think I wrote it in the design document but the risk of losing data due to more nodes failing was deemed less dangerous than the risk of losing the whole minority partition.
> Prevent rebalance
> -----------------
>
> Key: ISPN-8402
> URL: https://issues.jboss.org/browse/ISPN-8402
> Project: Infinispan
> Issue Type: Feature Request
> Components: Cloud Integrations, Core, State Transfer
> Reporter: Sebastian Łaskawiec
> Assignee: Dan Berindei
>
> Both the Caching Service and the Shared Memory Service require a way to prevent state transfer until the cluster is larger than a "target" number of nodes.
> Note: a thing to consider during the design - we might want to have some timeout here. When we hit it, we might want to do the rebalance regardless of the number of nodes.
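The note above can be sketched as a small rebalance gate: state transfer is suppressed until the cluster reaches the target size, with an optional timeout after which rebalancing proceeds regardless. All names here are illustrative, not actual Infinispan API:

```java
// Hypothetical sketch of the rebalance gate described above; not real API.
public class RebalanceGate {
    private final int targetNodes;
    private final long timeoutMillis;      // <= 0 means "wait forever"
    private final long startMillis;

    public RebalanceGate(int targetNodes, long timeoutMillis, long startMillis) {
        this.targetNodes = targetNodes;
        this.timeoutMillis = timeoutMillis;
        this.startMillis = startMillis;
    }

    /** True if a rebalance should be allowed to start now. */
    public boolean allowRebalance(int currentNodes, long nowMillis) {
        if (currentNodes >= targetNodes)
            return true;
        // Timed out waiting for the target size: rebalance with what we have.
        return timeoutMillis > 0 && nowMillis - startMillis >= timeoutMillis;
    }
}
```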
[JBoss JIRA] (ISPN-8400) Adjust merge policies for JDG Online Services
by Ryan Emerson (JIRA)
[ https://issues.jboss.org/browse/ISPN-8400?page=com.atlassian.jira.plugin.... ]
Ryan Emerson commented on ISPN-8400:
------------------------------------
[~epbernard] Good point. I think that for a generic shared memory use case, where we don't know which of the user's data is more important, you're right: REMOVE_ALL is the better choice, as it will be more predictable and easier for the user to reason about.
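A REMOVE_ALL-style policy is easy to reason about precisely because it never guesses: any disagreement between partitions drops the entry. A minimal self-contained sketch (the interface below is a simplified stand-in for Infinispan's actual merge-policy contract):

```java
import java.util.List;

// Simplified stand-in for an entry-merge-policy: given the preferred entry
// (from the winning partition) and the conflicting values seen elsewhere,
// decide what survives the merge.
interface SimpleMergePolicy<V> {
    V merge(V preferredValue, List<V> otherValues);
}

public class RemoveAllPolicy<V> implements SimpleMergePolicy<V> {
    @Override
    public V merge(V preferredValue, List<V> otherValues) {
        // Any disagreement means the entry is dropped entirely: returning null
        // signals removal, which is predictable even though it discards data.
        for (V other : otherValues) {
            if (other == null || !other.equals(preferredValue))
                return null;
        }
        return preferredValue;
    }
}
```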
> Adjust merge policies for JDG Online Services
> ---------------------------------------------
>
> Key: ISPN-8400
> URL: https://issues.jboss.org/browse/ISPN-8400
> Project: Infinispan
> Issue Type: Feature Request
> Components: Cloud Integrations
> Reporter: Sebastian Łaskawiec
> Assignee: Ryan Emerson
>
> Both Shared Memory and Caching Service require custom merge policies.
> In the Caching Service we need to clear out all conflicted entries upon a split brain. The Shared Memory Service might be a little trickier and we might want to use a different strategy (but that needs to be investigated).
[JBoss JIRA] (ISPN-8449) Separate the marshalling of the remote QueryRequest/QueryReponse objects and the client marshaller
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-8449?page=com.atlassian.jira.plugin.... ]
Adrian Nistor updated ISPN-8449:
--------------------------------
Status: Open (was: New)
> Separate the marshalling of the remote QueryRequest/QueryReponse objects and the client marshaller
> --------------------------------------------------------------------------------------------------
>
> Key: ISPN-8449
> URL: https://issues.jboss.org/browse/ISPN-8449
> Project: Infinispan
> Issue Type: Enhancement
> Components: Remote Querying
> Reporter: Adrian Nistor
> Assignee: Adrian Nistor
>
> Remote query currently uses the client marshaller for marshalling the query request/response objects. This is currently hardcoded to work for the protobuf and jboss-marshalling cases, but if the client's marshaller is changed to something else, this will break.
> The solution would be to separate the marshalling: the query request/response objects should be marshalled using protobuf regardless of which client marshaller is configured. Only the payload of the response should be marshalled using the client marshaller.
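The proposed split can be illustrated with a toy envelope/payload pair: the envelope format is fixed (standing in for protobuf), and only the payload bytes go through the pluggable client marshaller. All names here are illustrative, not the actual query classes:

```java
// Toy illustration of the proposed split: a fixed-format envelope wraps payload
// bytes produced by whatever client marshaller happens to be configured.
public class EnvelopeDemo {
    interface ClientMarshaller {
        byte[] toBytes(String value);
    }

    /** Fixed-format envelope: 4-byte big-endian length followed by the payload. */
    static byte[] wrap(byte[] payload) {
        byte[] out = new byte[4 + payload.length];
        int n = payload.length;
        out[0] = (byte) (n >>> 24);
        out[1] = (byte) (n >>> 16);
        out[2] = (byte) (n >>> 8);
        out[3] = (byte) n;
        System.arraycopy(payload, 0, out, 4, n);
        return out;
    }

    static byte[] unwrap(byte[] framed) {
        int n = ((framed[0] & 0xff) << 24) | ((framed[1] & 0xff) << 16)
              | ((framed[2] & 0xff) << 8) | (framed[3] & 0xff);
        byte[] payload = new byte[n];
        System.arraycopy(framed, 4, payload, 0, n);
        return payload;
    }

    /** The envelope encoding never changes, regardless of the client marshaller. */
    static byte[] encodeResponse(String result, ClientMarshaller clientMarshaller) {
        return wrap(clientMarshaller.toBytes(result));
    }
}
```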
[JBoss JIRA] (ISPN-8379) Support configuration wildcards
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-8379?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-8379:
----------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> Support configuration wildcards
> -------------------------------
>
> Key: ISPN-8379
> URL: https://issues.jboss.org/browse/ISPN-8379
> Project: Infinispan
> Issue Type: Enhancement
> Components: Configuration
> Reporter: Tristan Tarrant
> Assignee: Tristan Tarrant
> Fix For: 9.2.0.Final, 9.2.0.Alpha2
>
>
> Allow defining configuration names containing wildcards so that they are implicitly used for caches whose names match. This would be particularly useful for using templates with JCache, which doesn't allow specifying additional configuration properties outside what is available in MutableConfiguration.
> Therefore, declaring a cache configuration such as:
> {code:xml}
> <invalidation-cache-configuration name="invalidation-*" />
> {code}
> and invoking:
> {code:java}
> cacheManager.getCache("invalidation-1");
> {code}
> would use the above configuration.
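The lookup behind this can be sketched as a simple glob match over configuration names. This is a minimal self-contained stand-in, not the actual implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

// Minimal sketch of wildcard configuration lookup: an exact name wins,
// otherwise the first template whose glob pattern matches is used.
public class ConfigLookup {
    private final Map<String, String> configs = new LinkedHashMap<>();

    public void define(String namePattern, String config) {
        configs.put(namePattern, config);
    }

    public String forCache(String cacheName) {
        String exact = configs.get(cacheName);
        if (exact != null)
            return exact;
        for (Map.Entry<String, String> e : configs.entrySet()) {
            // Translate the glob ('*' and '?') into a regex for matching.
            String regex = e.getKey().replace(".", "\\.")
                                     .replace("*", ".*")
                                     .replace("?", ".");
            if (Pattern.matches(regex, cacheName))
                return e.getValue();
        }
        return null;
    }
}
```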
[JBoss JIRA] (ISPN-8449) Separate the marshalling of the remote QueryRequest/QueryReponse objects and the client marshaller
by Adrian Nistor (JIRA)
Adrian Nistor created ISPN-8449:
-----------------------------------
Summary: Separate the marshalling of the remote QueryRequest/QueryReponse objects and the client marshaller
Key: ISPN-8449
URL: https://issues.jboss.org/browse/ISPN-8449
Project: Infinispan
Issue Type: Enhancement
Components: Remote Querying
Reporter: Adrian Nistor
Assignee: Adrian Nistor
Remote query currently uses the client marshaller for marshalling the query request/response objects. This is currently hardcoded to work for the protobuf and jboss-marshalling cases, but if the client's marshaller is changed to something else, this will break.
The solution would be to separate the marshalling: the query request/response objects should be marshalled using protobuf regardless of which client marshaller is configured. Only the payload of the response should be marshalled using the client marshaller.
[JBoss JIRA] (ISPN-8448) Retried prepare times out while partition is in degraded mode
by Dan Berindei (JIRA)
Dan Berindei created ISPN-8448:
----------------------------------
Summary: Retried prepare times out while partition is in degraded mode
Key: ISPN-8448
URL: https://issues.jboss.org/browse/ISPN-8448
Project: Infinispan
Issue Type: Bug
Components: Core
Affects Versions: 9.2.0.Alpha2, 9.1.2.Final, 8.2.8.Final, 9.0.3.Final, 8.1.9.Final
Reporter: Dan Berindei
Assignee: Dan Berindei
Fix For: 8.1.10.Final, 8.2.9.Final, 9.2.0.Beta1, 9.1.3.Final
Since ISPN-5046, prepare commands are retried if one of the prepare targets has left the cluster. However, when the cache enters degraded mode, the prepare targets still include the owners in other partitions, and the prepare command is retried again.
Each retry automatically waits for cache topology {{<command topology> + 1}}. But the second retry is not really triggered by a topology change, so the retry blocks for {{remoteTimeout}} milliseconds before failing with a {{TimeoutException}}.
This situation actually happens in {{OptimisticTxPartitionAndMergeDuringPrepareTest}}, but the tests do not fail because they don't wait for an {{AvailabilityException}} specifically: they just take 15+ seconds each.
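The blocking behaviour can be modelled with a toy topology waiter: the retry asks for topology {{<command topology> + 1}}, and if no new topology is ever installed (degraded mode), the wait can only end via the remote timeout. Names below are illustrative, not real Infinispan classes:

```java
// Toy model of the retry wait described above: a retried command waits for
// cache topology (commandTopology + 1); if the installed topology never
// advances, the wait can only end after the full remote timeout.
public class TopologyWait {
    static final class Result {
        final boolean timedOut;
        final long waitedMillis;
        Result(boolean timedOut, long waitedMillis) {
            this.timedOut = timedOut;
            this.waitedMillis = waitedMillis;
        }
    }

    /**
     * installedTopology: topology id currently installed on the node.
     * commandTopology:   topology id the failed command was sent in.
     */
    static Result awaitRetryTopology(int installedTopology, int commandTopology,
                                     long remoteTimeoutMillis) {
        int required = commandTopology + 1;
        if (installedTopology >= required)
            return new Result(false, 0);   // a topology change arrived; retry proceeds
        // Degraded mode: no new topology will arrive, so the retry blocks for
        // the full remote timeout before failing with a TimeoutException.
        return new Result(true, remoteTimeoutMillis);
    }
}
```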