[infinispan-dev] Adjusting memory settings in template

Sanne Grinovero sanne at infinispan.org
Mon Sep 25 07:56:06 EDT 2017


On 22 September 2017 at 16:58, Sebastian Laskawiec <slaskawi at redhat.com> wrote:
>
>
> On Fri, Sep 22, 2017 at 5:05 PM Sanne Grinovero <sanne at infinispan.org>
> wrote:
>>
>> On 22 September 2017 at 13:49, Sebastian Laskawiec <slaskawi at redhat.com>
>> wrote:
>> > It's very tricky...
>> >
>> > Memory is adjusted automatically to the container size [1] (of course
>> > you may override it by supplying Xmx or "-n" as parameters [2]). The
>> > safe limit is roughly Xmx=Xms=50% of container capacity (unless you use
>> > off-heap storage, in which case you can squeeze much, much more out of
>> > Infinispan).
>> >
>> > Then there are Limits, Requests and QoS in Kubernetes [3][4]. We are in
>> > the Burstable memory category, so if there is additional memory on the
>> > node we'll get it; but if not, we won't go below 512 MB (and 500 mCPU).
>>
>> I hope that's a temporary choice while this is a work in progress?
>>
>> It doesn't sound acceptable for real-world requirements.
>> Infinispan expects users to estimate how much memory they will need -
>> which is hard enough - and then we should at least be able to start a
>> cluster to address the specified need. Being able to rely on only 512 MB
>> per node would require lots of nodes even for small data sets, leading
>> to extreme resource waste, as each node would consume a non-negligible
>> portion of memory just to run the thing.
>
>
> Hmmm, yeah - it's finished.
>
> I'm not exactly sure where the problem is. Is it the 512 MB RAM / 500 mCPU
> request? Or setting the heap to 50% of container memory?

If the orchestrator "might" give us more than 512 MB but this is not
guaranteed, we can't rely on it and have to assume we only have 512 MB.
I see no value in being handed a heap size which was not explicitly set;
extra available memory is not too bad to use as native memory (e.g.
buffering RocksDB IO operations), so you might as well not assign it to
the JVM heap - since we can't rely on it we won't make effective use of
it anyway.
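
To illustrate the point (just a throwaway Java sketch, nothing from our
codebase): the heap ceiling is frozen at -Xmx when the JVM starts, so any
memory the orchestrator grants beyond that is only ever reachable as native
memory - direct buffers, RocksDB's own block cache and so on - never as
extra heap.

    import java.nio.ByteBuffer;

    public class HeapVsNative {
        public static void main(String[] args) {
            // Fixed at startup by -Xmx; it never grows to follow the container.
            long heapCeiling = Runtime.getRuntime().maxMemory();
            System.out.printf("Heap ceiling (-Xmx): %d MB%n", heapCeiling >> 20);

            // Memory above the heap ceiling can only be consumed natively,
            // e.g. as direct buffers - it never becomes additional heap.
            ByteBuffer offHeap = ByteBuffer.allocateDirect(64 << 20); // 64 MB off-heap
            System.out.printf("Allocated %d MB off-heap; the heap ceiling is unchanged.%n",
                    offHeap.capacity() >> 20);
        }
    }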

Secondly, yes, we should make sure it's easy enough to request nodes
with more than 512 MB each, as Infinispan gets far more useful with
larger heaps. The ROI on 512 MB would make me want to use a different
technology!

>
> If the former and you set nothing, you will get the worst QoS and Kubernetes
> will kill your container first whenever the node runs out of resources (I
> really recommend reading [4] and watching [3]). If the latter, yeah, I guess
> we can tune it a little with off-heap but, as my latest tests showed, if
> you enable the RocksDB Cache Store, allocating even 50% is too much (the
> container got killed by the OOM Killer). That's probably the reason why
> setting the MaxRAM JVM parameter sets Xmx to 25% (!!!) of the MaxRAM value.
> So even setting it to 50% means we're taking a risk...
>
> So TBH, I see no silver bullet here and I'm open to suggestions. IMO, if
> you really know what you're doing, you should set Xmx yourself (this turns
> off the automatic Xmx calculation in the bootstrap script) and possibly set
> limits (and adjust requests) in your Deployment Configuration (if you set
> both requests and limits you get the best QoS class).

+1 Let's recommend this approach and discourage the automated sizing,
at least until we can implement some of the things Galder is also
suggesting. I'd just remove that option, as it's going to cause more
trouble than it's worth.
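
For the record, a minimal sketch of the kind of sanity check I have in
mind once Xmx is set explicitly (plain Java, assuming cgroup v1 and the
usual /sys/fs/cgroup path; this is not what docker-entrypoint.sh [2]
actually does, that one is a bash script): compare the configured heap
against the container limit and complain when it goes past the ~50% share
Sebastian mentioned.

    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class HeapSanityCheck {
        public static void main(String[] args) throws Exception {
            // cgroup v1 memory limit of the container, in bytes.
            long containerLimit = Long.parseLong(Files.readAllLines(
                    Paths.get("/sys/fs/cgroup/memory/memory.limit_in_bytes")).get(0).trim());
            long maxHeap = Runtime.getRuntime().maxMemory(); // effectively -Xmx

            if (maxHeap > containerLimit / 2) {
                System.err.printf("WARNING: -Xmx (%d MB) is above 50%% of the container "
                        + "limit (%d MB); expect the OOM Killer once native allocations "
                        + "(RocksDB, direct buffers, metaspace) kick in.%n",
                        maxHeap >> 20, containerLimit >> 20);
            }
        }
    }

Obviously whatever we ship would live in the entrypoint script, not in
Java; the point is only that the check is cheap.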

You are the OpenShift expert and I have no idea how this could be done :)
I'm just highlighting that Infinispan can't deal with a variable heap
size; having one would make right-size tuning far more complex for
users - heck, I wouldn't know how to do it myself.

+1 to Galder's suggestions; I particularly like the idea of creating
various templates specifically tuned for fixed heap values; for
example, we could create one for each of the common machine types on
popular cloud providers. I'm not suggesting a template for every single
machine type, but we could pick some reasonable configurations so that
we can help match the template to the physical machine. I guess this
doesn't translate directly into OpenShift resource limits, but that's
something you could figure out? After all, an OpenShift container has
to run on some cloud, so it would still help people to have a template
"suited" to each popular, actually existing machine type; see the rough
sketch of such fixed profiles below.
Incidentally, this approach would also produce helpful configuration
templates for people running on clouds directly.
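
Something like this is all I mean by "fixed profiles" - a handful of
pre-picked sizes instead of a dynamic calculation. The profile names and
numbers below are made up purely for illustration, not a proposal for the
actual templates:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class HeapProfiles {
        public static void main(String[] args) {
            // Hypothetical profile -> container size in MB; a few sizes, not one per machine type.
            Map<String, Long> containerMb = new LinkedHashMap<>();
            containerMb.put("small",  2048L);   // e.g. a 2 GB container
            containerMb.put("medium", 8192L);   // e.g. an 8 GB container
            containerMb.put("large", 32768L);   // e.g. a 32 GB container

            containerMb.forEach((profile, mb) ->
                // Keep the heap at ~50% of the container, leaving the rest for
                // metaspace, thread stacks, direct buffers and RocksDB.
                System.out.printf("%s: request/limit %d MB -> -Xms%dm -Xmx%dm%n",
                        profile, mb, mb / 2, mb / 2));
        }
    }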

Thanks,
Sanne

>
>>
>> Thanks,
>> Sanne
>>
>> >
>> > Thanks,
>> > Sebastian
>> >
>> > [1]
>> >
>> > https://github.com/jboss-dockerfiles/infinispan/tree/master/server#adjusting-memory
>> > [2]
>> >
>> > https://github.com/jboss-dockerfiles/infinispan/blob/master/server/docker-entrypoint.sh#L303-L308
>> > [3] https://www.youtube.com/watch?v=nWGkvrIPqJ4
>> > [4]
>> >
>> > https://docs.openshift.com/enterprise/3.2/dev_guide/compute_resources.html
>> >
>> > On Fri, Sep 22, 2017 at 2:33 PM Galder Zamarreño <galder at redhat.com>
>> > wrote:
>> >>
>> >> Hi Sebastian,
>> >>
>> >> How do you change the memory settings for Infinispan when it's started
>> >> via the Service Catalog?
>> >>
>> >> The memory settings seem to be defined in [1], but this is not one of
>> >> the supported parameters.
>> >>
>> >> I guess we want this as a parameter?
>> >>
>> >> Cheers,
>> >>
>> >> [1]
>> >>
>> >> https://github.com/infinispan/infinispan-openshift-templates/blob/master/templates/infinispan-ephemeral.json#L308
>> >> --
>> >> Galder Zamarreño
>> >> Infinispan, Red Hat
>> >>
>
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev


