On 30 Jul 2013, at 20:03, Shane Johnson <shjohnso@redhat.com> wrote:
> One option might be to use a fixed key set size and simply increment
> the value for each key by X every time it is written. Sort of like an
> object with a collection, where every time a nested object is added to
> the collection, the parent object is written to the cache.
In this example the aggregated objects should hold a foreign key to the aggregator
rather than being embedded in it; otherwise the parent object would grow indefinitely,
eventually causing OOMs.
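To make that concrete, here is a rough sketch of the two shapes (hypothetical
Order/OrderLine types of my own invention, plain Cache API):

import org.infinispan.Cache;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

class Order implements Serializable {
    final String id;
    final List<OrderLine> lines = new ArrayList<>(); // embedded children
    Order(String id) { this.id = id; }
}

class OrderLine implements Serializable {
    final String orderId; // foreign key back to the aggregator
    final String sku;
    OrderLine(String orderId, String sku) { this.orderId = orderId; this.sku = sku; }
}

class WritePatterns {
    // Growing-aggregate pattern Shane describes: the parent is rewritten,
    // slightly larger, on every addition, so the store must allocate a
    // bigger segment each time.
    static void addEmbedded(Cache<String, Order> cache, String orderId, String sku) {
        Order o = cache.get(orderId);
        o.lines.add(new OrderLine(orderId, sku));
        cache.put(orderId, o); // entry size keeps growing
    }

    // Foreign-key alternative: each child is a separate entry of roughly
    // constant size that only references its parent; no entry grows unboundedly.
    static void addByReference(Cache<String, OrderLine> cache, String orderId, String sku) {
        cache.put(orderId + ":" + sku, new OrderLine(orderId, sku));
    }
}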
But good point nevertheless: if an entry is rewritten with its size cycling through
1k, 2k, 3k, ..., Nk, then in the worst case each rewrite needs a segment larger than
any previously freed one, so none of the old segments can be reused. The total disk
capacity allocated for that single entry becomes 1k + 2k + ... + Nk = k*N*(N+1)/2,
while only Nk of it is live. So for storing 100MB of data (N = 100, 1MB increments)
you'd end up with a file of roughly 5GB. On top of that, memory consumption grows
proportionally, as we keep in-memory metadata about all the allocated segments on disk.
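A quick back-of-the-envelope check of that worst case (assuming, as above, that
none of the freed, smaller segments can be reused for the next, larger rewrite):

public class FragmentationEstimate {
    public static void main(String[] args) {
        int n = 100;                               // N rewrites, 1MB increments
        long allocatedMb = (long) n * (n + 1) / 2; // 1 + 2 + ... + N segments
        long liveMb = n;                           // only the latest value is live
        System.out.printf("live: %d MB, file: %d MB (%.1fx overhead)%n",
                liveMb, allocatedMb, (double) allocatedMb / liveMb);
        // prints: live: 100 MB, file: 5050 MB (50.5x overhead)
    }
}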
Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)