[infinispan-dev] L1 Data Container

Sanne Grinovero sanne at infinispan.org
Wed Jun 19 13:27:33 EDT 2013


On 19 June 2013 17:17, William Burns <mudokonman at gmail.com> wrote:
>
>
>
> On Wed, Jun 19, 2013 at 11:56 AM, Sanne Grinovero <sanne at infinispan.org>
> wrote:
>>
>> On 19 June 2013 16:44, cotton-ben <ben.cotton at alumni.rutgers.edu> wrote:
>> >
>> >>> At the opposite side, I don't see how - as a user - I could
>> >>> optimally tune a separate container.
>> >
>> >> I agree that is more difficult to configure; this was one of my points
>> >> as both a drawback and benefit.  It sounds like in general you don't
>> >> believe the benefits outweigh the drawbacks then.
>> >
>> > Hi William.  The benefits of your ambition to provide L1 capability
>> > enhancements -- for /certain/ users' completeness requirements --
>> > definitely outweigh the drawbacks. This is a FACT.
>>
>> I have to disagree ;-) It certainly is a fact that he's well intentioned
>> in making enhancements, but I don't think this strategy is proven to be
>> superior; I'm actually convinced of the opposite.
>>
>> We simply cannot assume that the "real data" and the L1-stored entries
>> will have the same level of hotness; it's actually very likely (since
>> you like stats) that the entries stored in L1 are frequently accessed,
>> unlike other entries which - as far as we know - could be large and
>> dormant for years.
>
> Actually this is only half true: we know that the values are hot on this
> node specifically.  Other nodes could be requesting the "cold" data quite
> frequently as well.

I see where you're coming from, but my point is the opposite: if other
nodes were requesting this data quite frequently, it wouldn't be
considered "cold". By using a single data container, the eviction
strategy automatically takes this into account as well. A hit is a hit
in all senses.
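
For what it's worth, here's a rough sketch of what I mean (untested and
written from memory, so the exact builder methods may differ between
versions): with a single data container there is one eviction policy, and
it covers owned entries and L1 entries alike.

    import org.infinispan.configuration.cache.CacheMode;
    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.eviction.EvictionStrategy;

    class SharedContainerSketch {
        static Configuration sharedContainerConfig() {
            ConfigurationBuilder builder = new ConfigurationBuilder();
            // Distributed cache with L1 enabled; owned entries and L1
            // entries live in the same container.
            builder.clustering()
                   .cacheMode(CacheMode.DIST_SYNC)
                   .l1().enable();
            // A single eviction policy: a hit on an L1 entry keeps it alive
            // exactly like a hit on an owned entry, so "a hit is a hit".
            builder.eviction()
                   .strategy(EvictionStrategy.LRU)
                   .maxEntries(100000);
            return builder.build();
        }
    }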

> This could lead to L1 values pushing out distributed
> data, leaving nodes holding L1-cached values that the owner itself no
> longer has.

That's just another excellent reason to keep a unified data container:
if a different node uses the value frequently, allow it to be cached
for read operations, even if the primary owner is passivating it.
Write operations are inherently safe as they have to go through the
owner and trigger entry activation as needed.

> And when the L1 cache value expires there will be no more
> backup (not including passivation).  This is a very odd situation though,
> since you can't do conditional operations then either.

Conditional operations would hit the owner, and by doing so trigger
loading. Not too odd, as it's the design today.
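
To be concrete, something along these lines (names made up for
illustration, and I'm assuming a DIST-mode cache obtained from your cache
manager): conditional operations are the usual ConcurrentMap-style methods
on the Cache, and they are evaluated on the primary owner against the
authoritative value, not against whatever copy sits in the local L1.

    import org.infinispan.Cache;

    class ConditionalOpsSketch {
        // "cache" is assumed to be a distributed (DIST) cache; how it is
        // obtained from the CacheManager is left out here.
        static void conditionalOps(Cache<String, String> cache) {
            // Succeeds only if the key is currently absent; the decision is
            // made on the owner, which loads/activates the entry as needed.
            String previous = cache.putIfAbsent("user:42", "v1");

            // Compare-and-swap update, again decided by the owner rather
            // than by the local L1 copy.
            boolean swapped = cache.replace("user:42", "v1", "v2");
        }
    }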


> And even with separate containers it is possible to have the L1
> discrepancy, but actually having a lower lifespan would help remedy this,
> since every once in a while the "hot" value would have to be retrieved
> again from the owning node.

Should be unnecessary: consistency is guaranteed no matter what
timeouts or lifespans are set.

>>
>>
>> Storing useless data in memory will force eviction of entries from
>> L1 which are hot by definition (as they wouldn't be in L1 otherwise -
>> as you pointed out, there likely is an expiry); that's a strategy which
>> actively strives towards less efficient storage.
>>
>> Also, the L1 timeout can be disabled, making for a very nice self-tuning
>> adaptive system.
>
> The timeout itself cannot be disabled.  Unless by "disable" you mean
> increasing the timeout so high that it will never be hit?  Is that a
> common use case?

Sure, why not? Any read-mostly system would benefit from it: think of
video streaming services, online stores (Amazon-like or app stores),
newspapers. Basically, most systems on which I'd suggest enabling L1.
BTW I think L1's lifespan accepts -1 as well, which should result in
immortal cache entries.
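
Something like this should do it (again untested and from memory, so
please double-check the exact method names and whether -1 is really
accepted):

    import org.infinispan.configuration.cache.CacheMode;
    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;

    class L1LifespanSketch {
        static Configuration l1WithoutExpiry() {
            ConfigurationBuilder builder = new ConfigurationBuilder();
            builder.clustering()
                   .cacheMode(CacheMode.DIST_SYNC)
                   // A negative lifespan should make L1 entries immortal;
                   // the default unit for lifespan(long) is milliseconds.
                   .l1().enable().lifespan(-1);
            return builder.build();
        }
    }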

Sanne

