Hi,
On 11 Oct 2016, at 12:57, Michael Burman wrote:
(running for two hours now). It's running between 147MB and 400MB of memory.
That is a 250MB difference.
Can you try doing a full GC right before the compression starts
and right after it ends, and measure the heap usage at those points
(before the start, and at the end before the GC),
so that we get an idea of how much memory
the compression really takes?
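The measurement above could be sketched like this (all names here are hypothetical, and `simulateCompression` is just a stand-in for the real compression job): force a full GC before and after the run so the readings reflect live data only, and also sample the heap at the peak.

```java
// Sketch: measure heap usage around a compression run, with a forced
// full GC before and after so readings reflect live objects only.
public class HeapProbe {

    // Request a full GC, then return the currently used heap in bytes.
    // Note: System.gc() is only a hint; the JVM may honor it asynchronously.
    static long usedHeapAfterGc() {
        Runtime rt = Runtime.getRuntime();
        System.gc();
        try { Thread.sleep(200); } catch (InterruptedException ignored) { }
        return rt.totalMemory() - rt.freeMemory();
    }

    // Hypothetical stand-in workload allocating temporary buffers.
    static byte[][] simulateCompression() {
        byte[][] buffers = new byte[8][];
        for (int i = 0; i < buffers.length; i++) {
            buffers[i] = new byte[4 << 20]; // 4MB each
        }
        return buffers;
    }

    public static void main(String[] args) {
        long before = usedHeapAfterGc();       // heap right before the job
        byte[][] work = simulateCompression(); // keep a reference so it stays live
        long during = Runtime.getRuntime().totalMemory()
                    - Runtime.getRuntime().freeMemory();
        long after = usedHeapAfterGc();        // heap after the job, post-GC
        System.out.printf("before=%dMB peak~=%dMB after=%dMB (buffers=%d)%n",
                before >> 20, during >> 20, after >> 20, work.length);
    }
}
```

The difference between the "peak" and "before" readings would approximate the transient memory cost of the compression itself.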
I started a 0.16 h-services at ~10:35 and let it run until 11:13
(the first compression happened at 11:00),
then added 25 feeds, one every two minutes.
The compression at 1pm went well, but the graph clearly
shows a much increased heap requirement during that time.
![](cid:F4DB9C04-A766-4C72-B253-73030D4487AB@redhat.com "Bildschirmfoto 2016-10-11 um 13.13.16.png")
We could try spreading the compression out over a longer period,
trading time for memory,
to flatten this large heap usage peak.
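That "trade time for memory" idea could look roughly like the sketch below (purely illustrative; `Chunk`, `compressAll`, and the batch sizes are assumptions, not the actual h-services code): instead of compressing everything in one burst, process the data in small batches with pauses in between, so only one batch's worth of temporary buffers is live at any moment.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

public class ThrottledCompressor {

    // Hypothetical unit of compressible data.
    interface Chunk { byte[] compress(); }

    // Compress chunks in batches of batchSize, pausing between batches so
    // each batch's temporary buffers can be collected before the next starts.
    static List<byte[]> compressAll(List<Chunk> chunks, int batchSize, long pauseMs)
            throws InterruptedException {
        List<byte[]> out = new ArrayList<>();
        for (int i = 0; i < chunks.size(); i += batchSize) {
            int end = Math.min(i + batchSize, chunks.size());
            for (Chunk c : chunks.subList(i, end)) {
                out.add(c.compress());
            }
            if (end < chunks.size()) {
                TimeUnit.MILLISECONDS.sleep(pauseMs); // spread the work over time
            }
        }
        return out;
    }

    public static void main(String[] args) throws InterruptedException {
        List<Chunk> chunks = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            chunks.add(() -> new byte[1 << 20]); // 1MB dummy payloads
        }
        List<byte[]> results = compressAll(chunks, 2, 10);
        System.out.println("compressed " + results.size() + " chunks"); // prints "compressed 10 chunks"
    }
}
```

The total run takes longer, but the peak heap demand is bounded by the batch size rather than by the whole data set.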
After the next crash, I will try running with an external database for Inventory
to see how much it contributes
to the issue here.