[infinispan-dev] L1 Data Container

Sanne Grinovero sanne at infinispan.org
Wed Jun 19 10:19:58 EDT 2013


On 19 June 2013 13:44, William Burns <mudokonman at gmail.com> wrote:
> All the L1 data for a DIST cache is stored in the same data container as the
> actual distributed data itself.  I wanted to propose breaking this out so
> there is a separate data container for the L1 cache as compared to the
> distributed data.
>
> I thought of a few quick benefits/drawbacks:
>
> Benefits:
> 1. L1 cache can be separately tuned - L1 maxEntries for example

-1!
I don't think that's actually a benefit from the point of view of a
user: as a user I only know that I have a certain amount of memory
available on each node, and that the application is going to use some
data far more often than other data.
The eviction strategy should be put in a position to make an optimal
choice about which entries - among all of them - are better kept in
memory vs. passivated.
I don't see a specific reason to "favour" keeping owned entries in
memory over L1 entries: an L1 entry might be very hot, and an owned
entry might be almost never read.
Considering that even serving a Get operation to another node (as
owner of the entry) makes the entry less likely to be passivated (it
counts as a "hit"), the current design naturally provides an optimal
balance for memory usage.
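
To make this concrete, here is a toy sketch - plain java.util, not
Infinispan code - of a single access-ordered container holding both
kinds of entries: the hot L1 entry survives eviction, the never-read
owned entry does not.

import java.util.LinkedHashMap;
import java.util.Map;

// Toy sketch only, not Infinispan code: one access-ordered (LRU-ish)
// container holding both owned and L1 entries. A remote Get served by
// the owner counts as an access just like a local read, so entries
// compete on how hot they are, not on whether they are owned or L1.
public class SharedContainerSketch {
   public static void main(String[] args) {
      final int maxEntries = 3;
      Map<String, String> container =
            new LinkedHashMap<String, String>(16, 0.75f, true) {
               @Override
               protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                  return size() > maxEntries;
               }
            };

      container.put("owned-cold", "owned here, but almost never read");
      container.put("owned-hot", "owned here, served to other nodes a lot");
      container.put("l1-hot", "L1 copy of a remote entry, read very often");

      // local reads and remote Gets both register as hits:
      container.get("l1-hot");
      container.get("owned-hot");

      // a new entry arrives and something has to go: the least
      // recently used entry is dropped, and it happens to be owned
      // data, not the hot L1 entry.
      container.put("owned-new", "just written");

      System.out.println(container.keySet());
      // -> [l1-hot, owned-hot, owned-new]
   }
}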

On the other hand, I don't see how - as a user - I could optimally
tune a separate container.

> 2. L1 values will not cause eviction of real data

-1
That's not a benefit, as I explained above: "real data" is not
necessarily more important than L1 data, especially if it's never
read.
Granted, I'm making some assumptions about the application having some
hot data and some less hot data, and not being able to take advantage
of node pinning or affinity strategies... but that is another way of
saying that I'm assuming the user needs L1: if it were possible to
apply these more advanced strategies I'd disable L1 altogether.
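
For context, the tuning surface today is a single container plus an
L1 on/off switch. Roughly, in programmatic configuration (written
from memory, so treat the exact method names as approximate):

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.eviction.EvictionStrategy;

public class CurrentTuningSketch {
   public static Configuration distConfig() {
      // a single maxEntries budget covers the whole data container,
      // owned entries and L1 entries alike; L1 itself is basically
      // just on/off plus a lifespan.
      return new ConfigurationBuilder()
            .clustering()
               .cacheMode(CacheMode.DIST_SYNC)
               .l1().enable().lifespan(60000)
            .eviction()
               .strategy(EvictionStrategy.LRU)
               .maxEntries(10000)
            .build();
   }
}

A separate L1 container would force the user to split that single
maxEntries budget in two, and I wouldn't know how to pick the split.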

> 3. Would make https://issues.jboss.org/browse/ISPN-3229 an easy fix
> 4. Could add a new DataContainer implementation specific to L1 with
> additional optimizations

Do you have an example of what you have in mind?
Considering you would still need to balance usage of the available
heap space across both containers, I suspect that would be quite hard.

> 5. Help with some concurrency issues with L1 without requiring wider locking
> (such as locking a key for an entire ClusteredGet rpc call) -
> https://issues.jboss.org/browse/ISPN-3197.

I don't understand this. L1 entries require the same level of
consistency as any other entry, so I suspect you would need to
replicate the same locking patterns; the downside is that you end up
duplicating the same logic.
Remember also that L1 has some similarities with entries still
"hanging around" after a state transfer, when they were previously
stored on this node: today these are considered L1-active entries. If
you change the storage, you would need to design a migration of state
from one container to the other; the migration itself might not be too
hard, but doing it while guaranteeing consistent locking is going to
be, I guess, as hard as the L1 problem is today.

>
> Drawbacks:
> 1. Would require, depending on configuration, an additional thread for
> eviction
> 2. Users upgrading could have double memory used up due to 2 data containers

This drawback specifically is to be considered very seriously. I don't
think people would be happy to buy and maintain a datacenter twice as
large as what they actually need.

Sanne

>
> Both?:
> 1. Additional configuration available
>    a. Add maxEntries just like the normal data container (use data container
> size if not configured?)
>    b. Eviction wakeup timer?  We could just reuse the task cleanup
> frequency?
>    c. Eviction strategy?  I would think the default data container's would
> be sufficient.
>
> I was wondering what you guys thought.
>
> Thanks,
>
>  - Will
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

