[
https://issues.jboss.org/browse/ISPN-8550?page=com.atlassian.jira.plugin....
]
William Burns edited comment on ISPN-8550 at 11/22/17 3:55 PM:
---------------------------------------------------------------
So I have been testing this, and it seems like an additional overhead of 16 bytes per
allocation is about correct.
I verified the numbers using valgrind (http://valgrind.org/), by the way. When I did simple
allocations, valgrind reported that my app requested entries aligned to 8 bytes and assumed
its default 8 bytes of bookkeeping overhead per block (which is not correct, as seen below).
In my test I am directly calling _OffHeapMemory.INSTANCE.allocate_, passing in 1000 for
the size (already a multiple of 8), and I tried allocating different numbers of entries.
I additionally allocated 100 extra objects before the 100_000, which puts us at (1000
entry bytes + 8 overhead bytes) * (100_000 + 100), which equals 100,900,800 bytes, or
about 96.23 MB. That would leave about 8 MB of overhead for the entire JVM, as seen in
the second row below.
8 bytes overhead
||\# Entries||Entries (MB)||Program Mem (MB)||Other (MB)||
|10,100|9.62|17.02|7.4|
|100,100|96.2|104.2|8.0|
|1,000,100|961.39|976.2|14.81|
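To double-check the arithmetic behind the "Entries (MB)" column, here is a small standalone sketch (not part of the test app; the class and method names are mine) that computes the expected total for a given per-block overhead guess:

```java
public class OverheadEstimate {

   // Total bytes we expect the allocator to consume for `count` blocks of
   // `entrySize` bytes each, assuming `overhead` bookkeeping bytes per block.
   static long expectedBytes(long entrySize, long overhead, long count) {
      return (entrySize + overhead) * count;
   }

   public static void main(String[] args) {
      // 100 warmup allocations + 100_000 test allocations, 8-byte overhead guess
      long bytes = expectedBytes(1000, 8, 100_100);
      System.out.println(bytes);                                 // 100900800
      System.out.printf("%.2f MB%n", bytes / (1024.0 * 1024.0)); // 96.23 MB
   }
}
```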
So it could be that 8 is not quite enough; from what I have read, most allocators vary
between 4 and 15 bytes per block. So I tried a couple more numbers:
9 bytes overhead
||\# Entries||Entries (MB)||Program Mem (MB)||Other (MB)||
|10,100|9.71|17.02|7.31|
|100,100|96.32|104.2|7.88|
|1,000,100|962.35|976.2|13.8|
9 is still not enough, so I went to the other extreme: 16 bytes.
16 bytes overhead
||\# Entries||Entries (MB)||Program Mem (MB)||Other (MB)||
|10,100|9.78|17.02|7.23|
|100,100|96.99|104.2|7.21|
|1,000,100|969.03|976.2|7.1|
So from this it looks like the overhead is about 16 bytes per allocation on my box. It
might actually be 15, though, so let's try that:
||\# Entries||Entries (MB)||Program Mem (MB)||Other (MB)||
|10,100|9.77|17.02|7.24|
|100,100|96.89|104.2|7.3|
|1,000,100|968.07|976.2|8.12|
So that is scaling the wrong way; the overhead seems to be somewhere between 15 and 16
bytes. In which case I would say to err on the side of 16 so we don't allocate too much.
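Given that, a minimal sketch of charging the estimate against a memory-based eviction budget might look like the following. The class, constant, and method names here are hypothetical, not existing Infinispan API:

```java
public class AllocationAccounting {

   // Estimated malloc bookkeeping cost per block, based on the measurements
   // above: the real value is somewhere between 15 and 16 bytes, so we round
   // up to 16 to avoid under-counting.
   static final long ESTIMATED_MALLOC_OVERHEAD = 16;

   // Size to charge against the eviction budget for one off-heap allocation.
   static long accountedSize(long requestedSize) {
      return requestedSize + ESTIMATED_MALLOC_OVERHEAD;
   }

   public static void main(String[] args) {
      System.out.println(accountedSize(1000)); // 1016
   }
}
```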
All my app does is the following; the sleeps are there so that valgrind shows a slightly
different graph.
{code}
public static void main(String[] args) throws InterruptedException {
   int allocationSize = 1000;
   int allocationCount = 1_000_000;
   // Warmup
   for (int i = 0; i < 100; i++) {
      OffHeapMemory.INSTANCE.allocate(allocationSize);
   }
   // Give it some time to flatten out
   Thread.sleep(10_000);
   for (int i = 0; i < allocationCount; ++i) {
      if (i % 10_000 == 0) {
         Thread.sleep(100);
      }
      OffHeapMemory.INSTANCE.allocate(allocationSize);
   }
}
{code}
Try to estimate malloc overhead and add to memory based eviction
----------------------------------------------------------------
Key: ISPN-8550
URL:
https://issues.jboss.org/browse/ISPN-8550
Project: Infinispan
Issue Type: Sub-task
Reporter: William Burns
Assignee: William Burns
We should try to also estimate malloc overhead. We could do something like Dan mentioned
at
https://github.com/infinispan/infinispan/pull/5590#pullrequestreview-7805...