On 25 Sep 2017, at 12:37, Sebastian Laskawiec
<slaskawi(a)redhat.com> wrote:
On Mon, Sep 25, 2017 at 11:58 AM Galder Zamarreño <galder(a)redhat.com> wrote:
> On 22 Sep 2017, at 17:58, Sebastian Laskawiec <slaskawi(a)redhat.com> wrote:
>
>
>
> On Fri, Sep 22, 2017 at 5:05 PM Sanne Grinovero <sanne(a)infinispan.org> wrote:
> On 22 September 2017 at 13:49, Sebastian Laskawiec <slaskawi(a)redhat.com> wrote:
> > It's very tricky...
> >
> > Memory is adjusted automatically to the container size [1] (of course you
> > may override it by supplying Xmx or "-n" as parameters [2]). The safe limit
> > is roughly Xmx=Xms=50% of container capacity (unless you use off-heap, in
> > which case you can squeeze much more out of Infinispan).
> >
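A rough sketch of the 50% rule above (the variable names and the fallback value are illustrative; this is not the actual bootstrap script):

```shell
# Derive -Xmx/-Xms as 50% of the container memory limit.
# CONTAINER_BYTES would normally come from the cgroup v1 limit file
# (/sys/fs/cgroup/memory/memory.limit_in_bytes on images of that era);
# here it falls back to the 512 MB request for illustration.
CONTAINER_BYTES=${CONTAINER_BYTES:-$((512 * 1024 * 1024))}
HEAP_MB=$(( CONTAINER_BYTES / 2 / 1024 / 1024 ))   # 50% of the container
JAVA_OPTS="-Xms${HEAP_MB}m -Xmx${HEAP_MB}m"
echo "$JAVA_OPTS"
```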
> > Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in the
> > burstable memory category, so if there is additional memory on the node,
> > we'll get it. But if not, we won't go below 512 MB (and 500 mCPU).
>
> I hope that's a temporary choice for the work in progress?
>
> That doesn't sound acceptable for real-world requirements.
> Infinispan expects users to estimate how much memory they will need -
> which is hard enough - and then we should at least be able to start a
> cluster to address the specified need. Being able to rely on only 512 MB
> per node would require lots of nodes even for small data sets, leading
> to extreme resource waste, as each node would consume some non-negligible
> portion of memory just to run the thing.
>
> Hmmm, yeah - it's finished.
>
> I'm not exactly sure where the problem is. Is it the 512 MB RAM/500 mCPU? Or
> setting 50% of the container memory?
>
> If the former: if you set nothing, you will get the worst QoS, and Kubernetes
> will kill your container first whenever it runs out of resources (I really
> recommend reading [4] and watching [3]). If the latter: yeah, I guess we can
> tune it a little with off-heap, but, as my latest tests showed, if you enable
> the RocksDB Cache Store, allocating even 50% is too much (the container got
> killed by the OOM Killer). That's probably the reason why setting the MaxRAM
> JVM parameter sets Xmx to 25% (!!!) of the MaxRAM value. So even setting it
> to 50% means that we are taking a risk...
>
> So TBH, I see no silver bullet here and I'm open to suggestions. IMO, if you
> really know what you're doing, you should set Xmx yourself (this will turn
> off setting Xmx automatically by the bootstrap script) and possibly set
> limits (and adjust requests) in your Deployment Configuration (if you set
> both requests and limits you will have the best QoS).
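For reference, that requests/limits pairing would look roughly like this in the pod spec (the values are illustrative, not the template's defaults); setting requests equal to limits is what puts a pod in the Guaranteed QoS class:

```yaml
# Illustrative container resources stanza: requests == limits => Guaranteed
# QoS, so the pod is the last candidate for eviction or the OOM Killer.
resources:
  requests:
    memory: "1Gi"
    cpu: "500m"
  limits:
    memory: "1Gi"
    cpu: "500m"
```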
Let me put it this way:
I've just started an ephemeral Infinispan instance, I'm trying to load some
data, and it's running out of memory. What knobs/settings does the template
offer to make sure I have big enough Infinispan instance(s) to handle my data?
Unfortunately, calculating the number of instances based on input (e.g. "I
want to have 10 GB of space for my data, please calculate how many 1 GB
instances I need to create and adjust my app") is something that cannot be
done with templates. Templates are pretty simple and do not support any
calculations. You will probably need the Ansible Service Broker or the
Service Broker SDK to do it.
So, assuming you did the math on paper and you need 10 replicas of 1 GB each -
just type oc edit dc/<your_app>, modify the number of replicas, and increase
the memory request. That should do the trick. Alternatively, you can edit the
ConfigMap and turn eviction on (but it really depends on your use case).
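The "math on paper" is just a ceiling division; a sketch (dc/<your_app> stays a placeholder, and backup copies and overhead are ignored for simplicity):

```shell
# How many replicas for the example above: 10 GB of data on 1 GB nodes.
TOTAL_GB=10
PER_NODE_GB=1
REPLICAS=$(( (TOTAL_GB + PER_NODE_GB - 1) / PER_NODE_GB ))  # ceiling division
# oc scale is the non-interactive equivalent of editing replicas via oc edit
echo "oc scale dc/<your_app> --replicas=${REPLICAS}"
```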
BTW, the number of replicas is a parameter in the template [1]. I can also
expose the memory request if you want me to (in that case, just shoot me a
ticket: https://github.com/infinispan/infinispan-openshift-templates/issues).
And let me say it one more time - I'm open to suggestions (and pull requests)
if you think this is not the way it should be done.
I don't know how the overarching OpenShift caching or shared-memory services
will be exposed, but as an OpenShift user that wants to store data in
Infinispan, I should be able to specify how much (total) data I will put in
it, and optionally how many backups I want for the data, and OpenShift should
maybe provide some options on how to do this:
User: I want 2gb of data.
OpenShift: Assuming the default of 1 backup (2 copies of data), I can offer
you (assuming at least 25% overhead):
a) 2 nodes of 2gb
b) 4 nodes of 1gb
c) 8 nodes of 512mb
And the user decides...
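The menu above reduces to a small calculation; a sketch (note the listed options divide raw capacity only - the 25% overhead would add nodes on top):

```shell
# 2gb of data with 1 backup (2 copies), offered as node counts for a few
# node sizes; MB units keep the arithmetic integral.
DATA_MB=2048
COPIES=2
TOTAL_MB=$(( DATA_MB * COPIES ))
for NODE_MB in 2048 1024 512; do
  NODES=$(( (TOTAL_MB + NODE_MB - 1) / NODE_MB ))  # ceiling division
  echo "${NODES} nodes of ${NODE_MB} MB"
done
```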
Assuming those higher-level OpenShift services consume the Infinispan
OpenShift templates, and you try to implement a situation like the above,
where the user specifies the total amount of data and you decide what options
to offer them, then the template would need to expose the number of instances
(done already) and the memory for each of those instances (not there yet).
Still, I'll try to see if I can get my use case working with only 512mb per
node, and use the number of instances as a way to add more memory. However, I
feel that only exposing the number of instances is not enough...
Btw, this is something that needs to be agreed on and should be part of our Infinispan
OpenShift integration specification/plan.
Cheers,
[1]
https://github.com/infinispan/infinispan-openshift-templates/blob/master/...
(Don't reply with: make your data smaller)
Cheers,
>
>
> Thanks,
> Sanne
>
> >
> > Thanks,
> > Sebastian
> >
> > [1] https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjust...
> > [2] https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker...
> > [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4
> > [4] https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html
> >
> > On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarreño <galder(a)redhat.com> wrote:
> >>
> >> Hi Sebastian,
> >>
> >> How do you change memory settings for Infinispan started via service
> >> catalog?
> >>
> >> The memory settings seem defined in [1], but this is not one of the
> >> parameters supported.
> >>
> >> I guess we want this as parameter?
> >>
> >> Cheers,
> >>
> >> [1] https://github.com/infinispan/infinispan-openshift-templates/blob/master/...
> >> --
> >> Galder Zamarreño
> >> Infinispan, Red Hat
> >>
> >
> > _______________________________________________
> > infinispan-dev mailing list
> > infinispan-dev(a)lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
--
Galder Zamarreño
Infinispan, Red Hat