Hi,

250MB is roughly the headroom the GC allows for the heap, and it grows and shrinks all the time based on the load.

But the lowest point was the same at both the start and the end.

And of course, the heap requirement increases while we do the compression, as we load the rows from the database. That includes both the metricIds to be compressed and the datapoints.

  -  Micke


On 10/11/2016 02:17 PM, Heiko W.Rupp wrote:

Hi,

On 11 Oct 2016, at 12:57, Michael Burman wrote:

(running for two hours now). It's running between 147MB and 400MB of memory.

That is a 250MB difference.
Can you try doing a full GC right before the compression starts
and right after it ends,
and measure the heap usage at those points
(before the start, and at the end before the GC),
so that we get an idea how much memory
the compression really takes?
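A minimal sketch of that measurement, assuming we can trigger the GC from inside the process: force a full GC and record the used heap right before the compression starts and right after it ends. Note that System.gc() is only a request, and the call sites around the compression job are placeholders, not the real API.

```java
// Sketch: measure used heap around a job, forcing a GC before each reading.
public class HeapProbe {
    // Used heap after requesting a full GC (the JVM may treat this as a hint only).
    static long usedHeapAfterGc() {
        System.gc();
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedHeapAfterGc();

        // ... the actual compression job would run here; a stand-in allocation
        // of roughly 64MB simulates the rows loaded during compression:
        byte[][] work = new byte[64][1024 * 1024];
        long during = Runtime.getRuntime().totalMemory()
                - Runtime.getRuntime().freeMemory();
        work = null; // drop the reference so the GC can reclaim it

        long after = usedHeapAfterGc();
        System.out.printf("before=%dMB during=%dMB after=%dMB%n",
                before >> 20, during >> 20, after >> 20);
    }
}
```

Comparing "during" against "before" and "after" would give the real footprint of the compression itself, separate from the normal sawtooth of the GC.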

I started a 0.16 h-services at ~10:35 and let it run until 11:13
(the first compression happened at 11),
then added 25 feeds, one every 2 minutes.

The compression at 1pm went well, but the graph certainly
shows a much higher heap demand during that time.

We could perhaps spread the compression out over more time, trading time for memory,
to get rid of this large memory-usage peak.
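The idea above could look something like this hedged sketch: instead of loading all metricIds and compressing them in one burst, process them in small batches with a pause between batches, so the peak heap holds only one batch at a time. The batch size, pause length, and compressBatch() are illustrative assumptions, not the real Hawkular API.

```java
import java.util.List;

// Sketch: throttled batch compression that trades elapsed time for a lower heap peak.
public class ThrottledCompressor {
    static final int BATCH_SIZE = 100; // assumed tuning knob
    static final long PAUSE_MS = 500;  // assumed tuning knob

    static int batchesRun = 0; // for illustration: counts processed batches

    // Placeholder for the real per-batch compression work.
    static void compressBatch(List<String> ids) {
        batchesRun++;
    }

    public static void compressAll(List<String> metricIds)
            throws InterruptedException {
        for (int i = 0; i < metricIds.size(); i += BATCH_SIZE) {
            List<String> batch =
                    metricIds.subList(i, Math.min(i + BATCH_SIZE, metricIds.size()));
            compressBatch(batch);
            if (i + BATCH_SIZE < metricIds.size()) {
                // Pause so the GC can reclaim this batch before the next one loads.
                Thread.sleep(PAUSE_MS);
            }
        }
    }
}
```

The trade-off is that the whole compression run takes longer, but the heap only ever holds one batch of rows instead of everything at once.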

After the next crash, I will try running with an external DB for Inventory to see how much that
contributes to the issue here.



_______________________________________________
hawkular-dev mailing list
hawkular-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/hawkular-dev