[infinispan-dev] Adjusting memory settings in template

Sebastian Laskawiec slaskawi at redhat.com
Fri Sep 22 11:58:13 EDT 2017


On Fri, Sep 22, 2017 at 5:05 PM Sanne Grinovero <sanne at infinispan.org>
wrote:

> On 22 September 2017 at 13:49, Sebastian Laskawiec <slaskawi at redhat.com>
> wrote:
> > It's very tricky...
> >
> > Memory is adjusted automatically to the container size [1] (of course you
> > may override it by supplying Xmx or "-n" as a parameter [2]). The safe
> > limit is roughly Xmx=Xms=50% of container capacity (unless you use
> > off-heap, in which case you can squeeze much, much more into Infinispan).
> >
> > Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in
> > the Burstable memory category, so if there is additional memory on the
> > node, we'll get it. But if not, we won't go below 512 MB (and 500 mCPU).
>
> I hope that's a temporary choice in a work in progress?
>
> That doesn't sound acceptable for real-world requirements.
> Infinispan expects users to estimate how much memory they will need -
> which is hard enough - and then we should at least be able to start a
> cluster sized for the specified need. Being able to rely on only 512 MB
> per node would require lots of nodes even for small data sets, leading
> to extreme resource waste, as each node would consume a non-negligible
> portion of memory just to run the thing.
>

Hmmm yeah - it's finished.

I'm not exactly sure where the problem is. Is it the 512 MB RAM / 500 mCPU?
Or setting Xmx to 50% of container memory?

If the former: if you set nothing at all, you get the worst QoS class, and
Kubernetes will kill your container first whenever the node runs out of
resources (I really recommend reading [4] and watching [3]). If the latter:
yes, I guess we can tune it a little with off-heap, but as my latest tests
showed, if you enable the RocksDB Cache Store, allocating even 50% is too
much (the container got killed by the OOM Killer). That's probably the
reason why setting the MaxRAM JVM parameter sets Xmx to only 25% (!!!) of
the MaxRAM value. So even setting it to 50% means we're taking a risk...
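To illustrate that 25% figure: on JDK 8 the default heap is derived as
MaxRAM divided by MaxRAMFraction, which defaults to 4. A quick sketch
(the 2 GiB container size here is just an example, not what the template
uses):

```shell
# Sketch of the JDK 8 default heap sizing: Xmx = MaxRAM / MaxRAMFraction,
# with MaxRAMFraction defaulting to 4 (i.e. 25% of the container memory).
MAX_RAM=$((2 * 1024 * 1024 * 1024))   # example: 2 GiB container limit
DEFAULT_XMX=$((MAX_RAM / 4))          # what the JVM would pick by default
echo "$DEFAULT_XMX"                   # 536870912 bytes = 512 MiB
```

You can check what your JVM actually picks with
`java -XX:+PrintFlagsFinal -version | grep MaxHeapSize`.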

So TBH, I see no silver bullet here and I'm open to suggestions. IMO, if
you really know what you're doing, you should set Xmx yourself (this turns
off the automatic Xmx calculation in the bootstrap script) and possibly set
limits (and adjust requests) in your Deployment Configuration (if you set
requests equal to limits, you get the best QoS class, Guaranteed).
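For reference, a minimal sketch of what that could look like in a
DeploymentConfig container spec. The values and the JAVA_OPTS env var are
illustrative assumptions (check the actual entrypoint script [2] for which
variable it honors), not what the template ships:

```yaml
# Hypothetical container spec fragment of a DeploymentConfig.
# requests == limits puts the pod in the Guaranteed QoS class;
# an explicit -Xmx/-Xms overrides the entrypoint's automatic sizing.
spec:
  containers:
  - name: infinispan-server
    resources:
      requests:
        memory: "2Gi"        # equal to limits -> Guaranteed QoS
        cpu: "1"
      limits:
        memory: "2Gi"
        cpu: "1"
    env:
    - name: JAVA_OPTS         # assumed variable name
      value: "-Xmx1g -Xms1g"  # ~50% of the container limit
```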


> Thanks,
> Sanne
>
> >
> > Thanks,
> > Sebastian
> >
> > [1]
> >
> https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory
> > [2]
> >
> https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308
> > [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4
> > [4]
> >
> https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html
> >
> > On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarreño <galder at redhat.com>
> wrote:
> >>
> >> Hi Sebastian,
> >>
> >> How do you change memory settings for Infinispan started via service
> >> catalog?
> >>
> >> The memory settings seem defined in [1], but this is not one of the
> >> parameters supported.
> >>
> >> I guess we want this as parameter?
> >>
> >> Cheers,
> >>
> >> [1]
> >>
> https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308
> >> --
> >> Galder Zamarreño
> >> Infinispan, Red Hat
> >>
> >
> > _______________________________________________
> > infinispan-dev mailing list
> > infinispan-dev at lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>