[JBoss JIRA] (ISPN-6879) Calculate (and expose) minimum number of nodes for data in Infinispan
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-6879?page=com.atlassian.jira.plugin.... ]
William Burns commented on ISPN-6879:
-------------------------------------
{quote}
This one is actually easy - this is the default behavior of StatefulSets. The controller kills one node at a time and waits until the cluster stabilizes.
{quote}
This seems precisely what we would want then.
{quote}
So maybe there are better ways to tell it - "Here is our minimum number of nodes. Never, ever go below that."
{quote}
So is the idea that we can tell Kubernetes never to go below a certain number of nodes, because otherwise the remaining nodes would run into memory issues from storing so much data?
I am just trying to understand the purpose of this.
> Calculate (and expose) minimum number of nodes for data in Infinispan
> ---------------------------------------------------------------------
>
> Key: ISPN-6879
> URL: https://issues.jboss.org/browse/ISPN-6879
> Project: Infinispan
> Issue Type: Feature Request
> Components: Cloud Integrations, Server
> Reporter: Sebastian Łaskawiec
> Assignee: William Burns
>
> With Kubernetes autoscaling we need to be able to tell what is the minimum number of nodes necessary for hosting data (probably some sort of size + number-of-nodes estimation).
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
6 years, 6 months
[JBoss JIRA] (ISPN-6879) Calculate (and expose) minimum number of nodes for data in Infinispan
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6879?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec commented on ISPN-6879:
-------------------------------------------
{quote}
I am not certain of the benefit in this case. Even if you go from 100 nodes down to 10, you would have lost ~90% of your entries and there is no way to know which ones were lost and which ones were kept.
{quote}
hmmm that's actually an interesting use case. To be honest, I thought that if you scale gradually from 100 to 10 (killing one node at a time and waiting until the rebalance happens) you would preserve the data. I had this impression since Kubernetes sends SIGINT when killing a node and waits ~30 seconds for it to shut down. I noticed that we trigger a rebalance on each SIGINT (since WildFly catches SIGINT and shuts all services down).
So I was probably wrong ([~dan.berindei] could you please confirm this?), or the use case with numOwners = 1 doesn't make sense for this feature. We should always aim for numOwners >= 2.
{quote}
As long as numOwners < numNodes, I don't see the real benefit, as you would have to throttle the nodes going down to guarantee that data isn't lost. I personally don't see a user wanting to sit there for minutes just to shut down a subset of nodes. More than likely they would want to say "I want X nodes". Can we not (OpenShift or us) shut them down in an orderly fashion to do this instead? This seems much safer and wouldn't have as many user input errors.
{quote}
This one is actually easy - this is the default behavior of {{StatefulSets}}. The controller kills one node at a time and waits until the cluster stabilizes.
Let me add one more aspect here - scaling based on custom metrics will *probably* need to figure out the lowest number of nodes that can operate with a given dataset. Without it, we might see some weird behavior. So maybe there are better ways to tell it - "Here is our minimum number of nodes. Never, ever go below that."
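The "minimum number of nodes" the issue asks for can be approximated from the data size. A minimal back-of-the-envelope sketch, assuming uniform distribution of entries; all names here are illustrative, not an existing Infinispan API:

```java
// Rough estimate: each entry is stored numOwners times, so the cluster-wide
// footprint is about dataSizeBytes * numOwners; divide by what one node can
// safely hold. All names are hypothetical, not Infinispan APIs.
public class MinimumNodesEstimate {
    public static int minimumNodes(long dataSizeBytes, int numOwners, long perNodeCapacityBytes) {
        long totalFootprint = dataSizeBytes * numOwners;
        // Ceiling division; never report fewer nodes than copies of each entry.
        long nodes = (totalFootprint + perNodeCapacityBytes - 1) / perNodeCapacityBytes;
        return (int) Math.max(nodes, numOwners);
    }
}
```

For example, 100 GB of data with numOwners = 2 and 30 GB of usable capacity per node gives ceil(200 / 30) = 7 nodes as the floor below which the autoscaler should never go.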
[JBoss JIRA] (ISPN-8439) Support a GLOBAL flag for InternalCacheRegistry
by Tristan Tarrant (JIRA)
Tristan Tarrant created ISPN-8439:
-------------------------------------
Summary: Support a GLOBAL flag for InternalCacheRegistry
Key: ISPN-8439
URL: https://issues.jboss.org/browse/ISPN-8439
Project: Infinispan
Issue Type: Enhancement
Reporter: Tristan Tarrant
Assignee: Tristan Tarrant
InternalCacheRegistry should support a GLOBAL flag which automatically selects a cache type for global caches (local or replicated)
[JBoss JIRA] (ISPN-6879) Calculate (and expose) minimum number of nodes for data in Infinispan
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-6879?page=com.atlassian.jira.plugin.... ]
William Burns commented on ISPN-6879:
-------------------------------------
I am not certain of the benefit in this case. Even if you go from 100 nodes down to 10, you would have lost ~90% of your entries and there is no way to know which ones were lost and which ones were kept.
As long as numOwners < numNodes, I don't see the real benefit, as you would have to throttle the nodes going down to guarantee that data isn't lost. I personally don't see a user wanting to sit there for minutes just to shut down a subset of nodes. More than likely they would want to say "I want X nodes". Can we not (OpenShift or us) shut them down in an orderly fashion to do this instead? This seems much safer and wouldn't have as many user input errors.
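The scale of that loss is easy to check with a self-contained sketch (purely illustrative, not Infinispan code) of the probability that an entry survives an abrupt scale-down:

```java
// Probability that an entry survives killing (totalNodes - survivingNodes)
// nodes at once: at least one of its numOwners copies must sit on a survivor.
public class AbruptScaleDownLoss {
    public static double survivalProbability(int totalNodes, int survivingNodes, int numOwners) {
        int killed = totalNodes - survivingNodes;
        // P(all owners on killed nodes) = C(killed, numOwners) / C(totalNodes, numOwners)
        double allOwnersKilled = 1.0;
        for (int i = 0; i < numOwners; i++) {
            allOwnersKilled *= (double) (killed - i) / (totalNodes - i);
        }
        return 1.0 - allOwnersKilled;
    }

    public static void main(String[] args) {
        // Going from 100 nodes straight down to 10 in one step:
        System.out.printf("numOwners=1: %.1f%% survive%n", 100 * survivalProbability(100, 10, 1));
        System.out.printf("numOwners=2: %.1f%% survive%n", 100 * survivalProbability(100, 10, 2));
    }
}
```

With numOwners = 1 only 10% of entries survive (the ~90% loss above), and even numOwners = 2 only raises that to about 19%, so extra owners do not rescue an abrupt 100 to 10 step; only throttled, one-at-a-time shutdown does.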
[JBoss JIRA] (ISPN-8093) Remove operation for counters
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-8093?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo updated ISPN-8093:
------------------------------
Status: Open (was: New)
> Remove operation for counters
> -----------------------------
>
> Key: ISPN-8093
> URL: https://issues.jboss.org/browse/ISPN-8093
> Project: Infinispan
> Issue Type: Feature Request
> Components: Clustered Counter
> Reporter: Pedro Ruivo
> Assignee: Pedro Ruivo
>
> Allow counters to be removed. If an operation finds that the counter doesn't exist, it is recreated with its initial value and the operation is applied.
> If getValue is invoked and the counter doesn't exist, it returns the initial value without recreating the counter.
> New methods to the API:
> {code:java}
> CounterManager.removeCounter(String counterName) // removes the counter with this name
> StrongCounter.remove() // removes the counter represented by this instance
> WeakCounter.remove() // same as above
> {code}
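The recreate-on-miss semantics described above can be sketched with a toy, single-JVM registry. This is purely illustrative of the proposed behavior, not Infinispan's implementation:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

// Toy sketch: removing a counter deletes its state; the next mutating
// operation recreates it at the initial value, while getValue on a missing
// counter reports the initial value WITHOUT recreating it.
class ToyCounterRegistry {
    private final ConcurrentMap<String, AtomicLong> counters = new ConcurrentHashMap<>();
    private final long initialValue;

    ToyCounterRegistry(long initialValue) {
        this.initialValue = initialValue;
    }

    long incrementAndGet(String name) {
        // Mutating op: recreate at the initial value if absent, then apply.
        return counters.computeIfAbsent(name, n -> new AtomicLong(initialValue))
                       .incrementAndGet();
    }

    long getValue(String name) {
        // Read-only op: report the initial value without recreating.
        AtomicLong c = counters.get(name);
        return c == null ? initialValue : c.get();
    }

    void removeCounter(String name) {
        counters.remove(name);
    }
}
```

After removeCounter("c"), a getValue("c") reports the initial value but leaves nothing behind, whereas the next incrementAndGet("c") recreates the counter and applies the increment, matching the two cases in the description.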
[JBoss JIRA] (ISPN-8093) Remove operation for counters
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-8093?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo updated ISPN-8093:
------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/5532
[JBoss JIRA] (ISPN-8402) Prevent rebalance
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-8402?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec commented on ISPN-8402:
-------------------------------------------
//cc [~epbernard][~NadirX]
> Prevent rebalance
> -----------------
>
> Key: ISPN-8402
> URL: https://issues.jboss.org/browse/ISPN-8402
> Project: Infinispan
> Issue Type: Feature Request
> Components: Cloud Integrations, Core, State Transfer
> Reporter: Sebastian Łaskawiec
> Assignee: Dan Berindei
>
> Both the Caching Service and the Shared Memory Service require a way to prevent state transfer until the cluster reaches a "target" number of nodes.
> Note: a thing to consider during the design - we might want to have some timeout here. When we hit it, we might want to do the rebalance regardless of the number of nodes.
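The gate described above - hold rebalancing until a target cluster size, with a timeout escape hatch - could look roughly like this. The names are hypothetical, not an existing Infinispan class:

```java
import java.util.concurrent.TimeUnit;

// Toy sketch of the gating logic: suppress rebalancing until the view reaches
// a target size, but give up and allow it anyway once the timeout elapses.
class RebalanceGate {
    private final int targetSize;
    private final long timeoutNanos;
    private final long startNanos = System.nanoTime();

    RebalanceGate(int targetSize, long timeout, TimeUnit unit) {
        this.targetSize = targetSize;
        this.timeoutNanos = unit.toNanos(timeout);
    }

    /** Called on every cluster view change; true means rebalancing may proceed. */
    boolean shouldRebalance(int currentClusterSize) {
        boolean timedOut = System.nanoTime() - startNanos >= timeoutNanos;
        return currentClusterSize >= targetSize || timedOut;
    }
}
```

With a gate like this, a cluster scaling from 1 to 5 nodes does a single state transfer at node 5 instead of four intermediate rebalances, while the timeout guarantees that a cluster stuck below the target eventually redistributes data anyway.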