> When you ask for 100 GB of mapped memory, Linux doesn't need to
> allocate any pages to it; you just change the virtual memory table of your
> process. (This must in turn use some memory, but the overhead is low.)
Got it! Thanks, Peter.
It's coming back to me (it's been a while since I last
studied the System V process subsystem internals).
From: Peter Lawrey [mailto:peter.lawrey@higherfrequencytrading.com]
Sent: Saturday, March 8, 2014 8:03 PM
To: ben.cotton(a)alumni.rutgers.edu
Cc: Ben Cotton; Justin P Dildy; Dmitry Gordeev
Subject: Re: [infinispan-dev] Infinispan embedded off-heap cache
When you ask for 100 GB of mapped memory, Linux doesn't need to allocate
any pages to it; you just change the virtual memory table of your process.
(This must in turn use some memory, but the overhead is low.)
e.g. even in plain NIO, you can create a 1 GB mapping to a new file. At
this point it doesn't need to allocate any memory or disk space, as you
haven't used any of it yet. Now you write to every 1 MB of this 1 GB file,
i.e. 1024 times. What happens? Well, it needs to allocate at least one page,
i.e. 4 KB, for each write (even if you write just a byte). So the file uses
4 KB * 1024, i.e. only 4 MB of memory and disk space. If you plan for this,
you can optimise the structure to use only the pages it needs as much as
possible.
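A minimal plain-NIO sketch of that walkthrough (my illustration, not Peter's
code; it assumes a 4 KB page size and uses an arbitrary temp file):

    import java.io.File;
    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class SparseMappingDemo {
        public static void main(String[] args) throws Exception {
            File file = File.createTempFile("sparse", ".dat");
            file.deleteOnExit();
            try (RandomAccessFile raf = new RandomAccessFile(file, "rw");
                 FileChannel channel = raf.getChannel()) {
                int size = 1 << 30; // map 1 GB of address space
                MappedByteBuffer map =
                        channel.map(FileChannel.MapMode.READ_WRITE, 0, size);
                // Write one byte every 1 MB, i.e. 1024 writes in total.
                for (int offset = 0; offset < size; offset += 1 << 20) {
                    map.put(offset, (byte) 1);
                }
                // Each write dirties one 4 KB page, so only about
                // 1024 * 4 KB = 4 MB of memory/disk is actually used.
                System.out.println("Mapped " + (size >> 20) + " MB, touched ~"
                        + ((1024L * 4096) >> 20) + " MB");
            }
        }
    }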
In the example I gave, each entry has up to 64 KB. This means each entry
uses 64 KB of *address space*, not memory, not disk space. So if you write
only 1 KB, it has to use one page, 4 KB, but not the whole 64 KB. i.e. once
you make the maximum size well over 4 KB, you may as well go for broke,
because it won't make much difference to the memory or disk space used.
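To put numbers on that (a hypothetical worked calculation, assuming the
usual 4 KB Linux page size):

    public class PageMath {
        public static void main(String[] args) {
            long pageSize = 4 << 10;    // typical Linux page size (assumption)
            long entrySpan = 64 << 10;  // 64 KB of address space per entry
            long written = 1 << 10;     // only 1 KB actually written
            long pagesTouched = (written + pageSize - 1) / pageSize; // = 1
            System.out.println(pagesTouched * pageSize + " bytes resident of a "
                    + entrySpan + "-byte entry span");
        }
    }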
BTW, this is how Linux/UNIX behaves; Windows is lame this way, and
overcommitting memory doesn't work so well there.
On 9 March 2014 11:52, <ben.cotton(a)alumni.rutgers.edu> wrote:
> How, you might wonder?
You got that right! Let me study this. Truth is, neither my first, second,
nor third read set off the light bulb in my head.
> This works because even though I only have 7.7 GB after the OS, I can
> create a SHM of 137 GB, because this only uses 21 MB of actual disk
> space/memory.
Still wondering how, Peter. LOL. I mean, I'm sure it is true, but I
have no insight into how it is true.
Again, let me study the code for this one.
From: Peter Lawrey [mailto:peter.lawrey@higherfrequencytrading.com]
Sent: Saturday, March 8, 2014 7:41 PM
To: ben.cotton(a)alumni.rutgers.edu
Cc: Ben Cotton; Justin P Dildy; Dmitry Gordeev;
ml-node+s980875n4028967h94(a)n3.nabble.com
Subject: Re: [infinispan-dev] Infinispan embedded off-heap cache
You might find this example interesting.
While SHM is not expandable, this is not as much of a problem as it might
seem. SHM uses virtual memory and leaves the OS to map it to real memory as
required. This means you can over-allocate extents at very low cost on
Linux.
https://github.com/OpenHFT/HugeCollections/blob/master/collections/src/test/java/net/openhft/collections/OSResizesMain.java
In the example above I create extents for an SHM which is much larger than
main memory, and it takes a fraction of a second to do this. How, you might
wonder? It prints
System memory= 7.7 GB, Size of map is 137.5 GB, disk used= 21MB
This works because even though I only have 7.7 GB after the OS, I can create
a SHM of 137 GB, because this only uses 21 MB of actual disk space/memory.
You can freely over-allocate the size on the basis that the system only uses
the resources it needs.
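For anyone who wants to see the principle without OpenHFT on the classpath,
here is a pure-JDK sketch of the same effect (my illustration, not the
OSResizesMain test itself; the size and temp-file name are arbitrary):

    import java.io.File;
    import java.io.RandomAccessFile;

    public class OverAllocateDemo {
        public static void main(String[] args) throws Exception {
            File file = File.createTempFile("shm-extents", ".dat");
            file.deleteOnExit();
            try (RandomAccessFile raf = new RandomAccessFile(file, "rw")) {
                raf.setLength(137L << 30); // claim 137 GiB of logical size
            }
            // On Linux the file is sparse: `ls -l` reports 137 GiB while
            // `du -h` shows near-zero disk usage until pages are dirtied.
            System.out.println("Logical size: " + (file.length() >> 30) + " GiB");
        }
    }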
On 8 March 2014 04:06, Ben Cotton <ben.cotton(a)alumni.rutgers.edu> wrote:
Thanks, Peter. The plan is for Dmitry and me to first extend
VanillaSharedHashMap and groom it to join ISPN 7 via their DataContainer
API bridge.
That ExtendedVSHM will be morphed into a fully interoperable JCACHE operand
that will
- initially be brokered by the ISPN 7 config (JSR107 <---->
VSHMExtendedDataContainer <----> VSHM)
- eventually, possibly, be rendered with ExtendedVSHM directly implementing
javax.cache.Cache (in addition to DataContainer)
On 03/07/2014 11:43 AM, Peter Lawrey wrote:
In the medium term I would see SHM supporting a DataContainer. If a Cache
were supported, I would do it as a layered class, so those who don't need
the functionality of a Cache don't incur an overhead.
On 8 Mar 2014 03:35, "Ben Cotton" <ben.cotton(a)alumni.rutgers.edu> wrote:
Thank you for this insight, Mircea ...
Ultimately ... I want the OpenHFT SHM off-heap operand to behave *exactly*
like a JCACHE ... amenable to being soundly/completely operated upon by
any/all parts of ISPN 7's impl of the JSR-107 API.
Musing openly: won't that (eventually) necessitate me physically
implementing javax.cache.Cache?
> Another way to do it is to have CacheImpl implement the DataContainer
> only, and then configure Infinispan's JCache implementation to use that
> custom DataContainer.
I see what you mean. OK, for sure, this sounds much simpler than what I
have put on my initial TODO list.
Question: will doing it this way in any manner suggest that my
JSR-107-specific operators are being transitively "brokered" by the ISPN
config onto my OpenHFT SHM operand? If possible, I want everything to be
direct -- no API bridge.
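For concreteness, here is roughly how I picture that wiring. A hypothetical
sketch only: VSHMExtendedDataContainer is the class we have yet to write,
and the dataContainer() builder calls are my recollection of the ISPN 7-era
configuration API, so treat the exact names as assumptions.

    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;

    public class WiringSketch {
        static Configuration sketch() {
            // VSHMExtendedDataContainer is hypothetical: our planned
            // SHM-backed impl of org.infinispan.container.DataContainer.
            // The builder methods below reflect the ISPN 7-era API as I
            // recall it; assumptions, not verified signatures.
            Configuration cfg = new ConfigurationBuilder()
                    .dataContainer()
                        .dataContainer(new VSHMExtendedDataContainer())
                    .build();
            // Infinispan's JCache (JSR-107) layer then operates on a cache
            // whose storage is this DataContainer -- no separate API bridge.
            return cfg;
        }
    }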
Thanks again, Mircea.
-Ben and Dmitry
Sent from my iPhone
On 03/07/2014 09:45 AM, Mircea Markus-2 [via Infinispan Developer List]
wrote:
Hi Ben,
In the diagram provided, the CacheImpl (your class) extends from both
javax.cache.Cache and org.infinispan.container.DataContainer.
The Cache and DataContainer interfaces are quite different, and I anticipate
a single class implementing both would be hard to follow and potentially not
very efficient.
Another way to do it is to have CacheImpl implement the DataContainer only,
and then configure Infinispan's JCache implementation to use that custom
DataContainer.
On Mar 3, 2014, at 3:46 PM, cotton-ben <[hidden email]> wrote:
Quick Update:
It is my understanding that Peter Lawrey will make available an OpenHFT HC
Alpha Release in Maven Central next weekend. At that time, Dmitry Gordeev
and I will take the OpenHFT dependency tag and proceed to build a branch of
Red Hat's ISPN 7 that will treat net.openhft.collections.SharedHashMap as a
Red Hat Infinispan 7 default impl of a fully JSR-107 interoperable off-heap
javax.cache.Cache ...
A diagram of this build effort can be found here:
https://raw.github.com/Cotton-Ben/OpenHFT/master/doc/AdaptingOpenHFT-SHM-as-JCACHE-Impl.jpg
...
The Red Hat view of this effort will be tracked here:
https://issues.jboss.org/browse/ISPN-871 ...
The code that defines the Impl will be here:
https://github.com/Cotton-Ben/infinispan ...
Cheers,
--
Mircea Markus
Infinispan lead (www.infinispan.org)