[infinispan-dev] Time measurement and expiry

Galder Zamarreño galder at redhat.com
Thu Oct 20 03:47:15 EDT 2011


On Oct 19, 2011, at 2:13 PM, Dan Berindei wrote:

> On Tue, Oct 18, 2011 at 10:57 AM, Sanne Grinovero <sanne at infinispan.org> wrote:
>> The replies so far are very interesting, but at this stage I'd be more
>> interested in discussing the fundamental question:
>> 
>> Why are we willing to accept more cache misses && slower
>> performance just to fake slightly better precision in eviction?
>> 
> 
> I guess this is the main question: how much worse would the eviction
> precision be if we relied only on the periodic eviction thread?

We did have a periodic eviction thread back in the JBoss Cache days, but it caused us problems because we queued events to the cache in order to apply the eviction algorithms.

I don't see precision as being paramount in the eviction area, so this avenue could be revisited if a non-blocking, low-cost solution could be found.
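
If we did revisit it, I'd expect the periodic purge to look roughly like
the sketch below. This is a minimal sketch only, assuming a hypothetical
ExpiringEntry type and dataContainer map rather than the real Infinispan
DataContainer API; the point is that readers are never blocked and the
only recurring cost is a background scan.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Minimal sketch of a periodic, non-blocking expiry purge.
// ExpiringEntry and dataContainer are hypothetical stand-ins.
class PeriodicPurgeSketch {

    static final class ExpiringEntry {
        final Object value;
        final long expiryMillis;
        ExpiringEntry(Object value, long expiryMillis) {
            this.value = value;
            this.expiryMillis = expiryMillis;
        }
    }

    final ConcurrentMap<Object, ExpiringEntry> dataContainer =
            new ConcurrentHashMap<Object, ExpiringEntry>();
    final ScheduledExecutorService purger =
            Executors.newSingleThreadScheduledExecutor();

    void start(long intervalMillis) {
        purger.scheduleWithFixedDelay(new Runnable() {
            public void run() { purgeExpired(); }
        }, intervalMillis, intervalMillis, TimeUnit.MILLISECONDS);
    }

    // One pass over the container; readers are never blocked because
    // ConcurrentHashMap removals don't lock the whole map.
    void purgeExpired() {
        long now = System.currentTimeMillis();
        for (Map.Entry<Object, ExpiringEntry> e : dataContainer.entrySet()) {
            if (e.getValue().expiryMillis <= now) {
                // Conditional remove, so a concurrent put of a fresh
                // entry for the same key wins over the purge.
                dataContainer.remove(e.getKey(), e.getValue());
            }
        }
    }
}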

> 
> I would agree with you if we could guarantee constant precision in
> eviction, but with the current algorithm the eviction interval has to
> grow with the cache size, so we can't offer any guarantees. And that's
> even without considering CacheLoaders.
> 
> Elias' suggestion of using a heap for eviction is interesting, but
> adding entries to the heap would slow down puts. We could try a combined
> approach: every minute, scan the data container and add the entries
> that will expire in the next minute to a heap. Only puts that expire
> before the deadline would add the entry to the eviction heap. Then
> another eviction thread could evict entries from the heap at a higher
> frequency.
> 
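To make that concrete, here's a rough sketch of the combined approach.
All names are hypothetical (this isn't existing Infinispan code), and a
real implementation would need to re-check each entry against the
container before evicting it:

import java.util.Map;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.PriorityBlockingQueue;

// Sketch: a coarse scan (once a minute) feeds a min-heap with entries
// expiring before the next deadline, puts check the same deadline, and
// a fast eviction thread only has to drain the heap.
class HeapAssistedExpiry {

    static final long SCAN_INTERVAL_MS = 60 * 1000;

    static final class HeapEntry implements Comparable<HeapEntry> {
        final Object key;
        final long expiryMillis;
        HeapEntry(Object key, long expiryMillis) {
            this.key = key;
            this.expiryMillis = expiryMillis;
        }
        public int compareTo(HeapEntry o) {
            return expiryMillis < o.expiryMillis ? -1
                 : expiryMillis > o.expiryMillis ? 1 : 0;
        }
    }

    final PriorityBlockingQueue<HeapEntry> heap =
            new PriorityBlockingQueue<HeapEntry>();
    volatile long deadline; // end of the window covered by the heap

    // Runs once a minute: anything expiring before the new deadline
    // goes on the heap.
    void scan(Map<Object, Long> expiryByKey) {
        deadline = System.currentTimeMillis() + SCAN_INTERVAL_MS;
        for (Map.Entry<Object, Long> e : expiryByKey.entrySet()) {
            if (e.getValue() < deadline) {
                heap.offer(new HeapEntry(e.getKey(), e.getValue()));
            }
        }
    }

    // Called on put: only entries expiring inside the current window
    // pay the heap insertion cost.
    void onPut(Object key, long expiryMillis) {
        if (expiryMillis < deadline) {
            heap.offer(new HeapEntry(key, expiryMillis));
        }
    }

    // Called at a much higher frequency (e.g. every 100 ms) by the
    // eviction thread; duplicate heap entries are harmless.
    void drain(ConcurrentMap<Object, ?> container) {
        long now = System.currentTimeMillis();
        HeapEntry head;
        while ((head = heap.peek()) != null && head.expiryMillis <= now) {
            heap.poll();
            container.remove(head.key);
        }
    }
}

The attractive property is that only puts expiring inside the current
window pay the heap cost; everything else is covered by the coarse scan.
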
> I'm sure there are better ideas, but we'd have to prototype them and
> show that we can keep the staleness under an upper bound (say 1
> second) for reasonably large caches before changing the algorithm.
> 
>> Since we can't guarantee precision, I'd want to actually remove these
>> checks altogether: is there any use case requiring high precision in
>> the eviction process?
>> I hope not; I wouldn't suggest relying on Infinispan as a reliable clock.
>> 
> 
> Relying on (relatively) precise expiration is the kind of implicit
> assumption that people make without realizing they depend on it,
> until an upgrade breaks their application.
> 
> I'm curious if other cache impls offer any guarantees in this area.
> 
>> In the couple of cases where we should really keep this logic - likely
>> the CacheLoader - we can discuss how to optimize it. For example, I like
>> Dan's idea of introducing a Clock component; we could explore such
>> solutions for the remaining bits that still need a time source,
>> but I'd first want to remove the main bottleneck.
>> 
>> --Sanne
>> 
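Regarding the Clock component, something along these lines is what I'd
imagine. Just a sketch, none of this exists in Infinispan today; the
cached variant trades bounded staleness for a much cheaper read, which
seems acceptable for expiry given we can't guarantee precision anyway:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical Clock component: one injectable time source, easy to
// stub out in tests and to swap for a cheaper implementation.
interface Clock {
    long currentTimeMillis();
}

// Straightforward implementation: hits the system call every time.
class SystemClock implements Clock {
    public long currentTimeMillis() { return System.currentTimeMillis(); }
}

// Cached implementation: a background thread refreshes a volatile
// field, so callers pay only a volatile read, at the cost of staleness
// bounded by the configured resolution.
class CachedClock implements Clock {
    private volatile long now = System.currentTimeMillis();

    CachedClock(long resolutionMillis) {
        ScheduledExecutorService ticker =
                Executors.newSingleThreadScheduledExecutor();
        ticker.scheduleAtFixedRate(new Runnable() {
            public void run() { now = System.currentTimeMillis(); }
        }, resolutionMillis, resolutionMillis, TimeUnit.MILLISECONDS);
    }

    public long currentTimeMillis() { return now; }
}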
>> 
>> On 18 October 2011 07:57, Dan Berindei <dan.berindei at gmail.com> wrote:
>>> On Tue, Oct 18, 2011 at 1:32 AM, Mircea Markus <mircea.markus at jboss.com> wrote:
>>>> 
>>>> On 17 Oct 2011, at 14:13, Sanne Grinovero wrote:
>>>> 
>>>>>> Very interesting. I knew that on Windows currentTimeMillis() basically
>>>>>> just reads a volatile, because I got bitten by the 15-millisecond
>>>>>> accuracy issue before, so I thought it would always be very fast. I had
>>>>>> no idea that on Linux it would have the same performance as nanoTime().
>>>> Indeed very nice!
>>>> I can't find the part where the article says nanoTime has the same performance on modern Linux as currentTimeMillis. Are you sure?
>>> 
>>> Yeah, the article didn't talk about Linux but I found this article:
>>> http://blogs.oracle.com/ksrini/entry/we_take_java_performance_very
>>> There's also this JDK bug complaining about both being slow:
>>> http://bugs.sun.com/view_bug.do?bug_id=6876279 (the test output is in
>>> a weird format; ignore the first number, the second one is nanos/call).
>>> However, when I actually ran the code from the bug report, my timings
>>> were much better than those reported in the bug description:
>>> 
>>> java version "1.6.0_22"
>>> OpenJDK Runtime Environment (IcedTea6 1.10.3) (fedora-59.1.10.3.fc15-x86_64)
>>> OpenJDK 64-Bit Server VM (build 20.0-b11, mixed mode)
>>> Intel(R) Core(TM) i5 CPU       M 540  @ 2.53GHz
>>> 
>>> currentTimeMillis: 36ns
>>> nanoTime: 28ns
>>> 
>>> -server or -XX:AggressiveOpts don't seem to make a difference.
>>> 
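If anyone wants to repeat this on other machines, a trivial loop along
these lines should be enough. This is not the exact code from the bug
report, and the numbers should be read as rough nanos/call only:

public class TimeBench {
    public static void main(String[] args) {
        final int n = 10 * 1000 * 1000;
        // warm-up so the JIT compiles both loops before we measure
        benchMillis(n);
        benchNanos(n);
        System.out.println("currentTimeMillis: " + benchMillis(n) + " ns/call");
        System.out.println("nanoTime:          " + benchNanos(n) + " ns/call");
    }

    static long benchMillis(int n) {
        long sink = 0;
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) sink += System.currentTimeMillis();
        long elapsed = System.nanoTime() - start;
        if (sink == 42) System.out.println(); // defeat dead-code elimination
        return elapsed / n;
    }

    static long benchNanos(int n) {
        long sink = 0;
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) sink += System.nanoTime();
        long elapsed = System.nanoTime() - start;
        if (sink == 42) System.out.println(); // defeat dead-code elimination
        return elapsed / n;
    }
}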
>>> 
>>> I also ran the test on cluster10; the results are slightly worse,
>>> but not significantly so:
>>> 
>>> java version "1.6.0_17"
>>> OpenJDK Runtime Environment (IcedTea6 1.7.10) (rhel-1.39.b17.el6_0-x86_64)
>>> OpenJDK 64-Bit Server VM (build 14.0-b16, mixed mode)
>>> Intel(R) Xeon(R) CPU           E5640  @ 2.67GHz
>>> 
>>> currentTimeMillis: 40ns
>>> nanoTime: 35ns
>>> 
>>> 
>>> It would be interesting if we could run the test on all our machines
>>> and see how the timings vary by machine and OS.
>>> 
>>> 
>>> It seems we're not the only ones with this problem either. Oracle (the
>>> database) apparently calls gettimeofday() a lot, so RHEL includes some
>>> optimizations that remove the system call overhead and make it even
>>> faster (more like Windows, I presume, but I don't have a Windows
>>> machine on hand to confirm):
>>> http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_MRG/1.3/html/Realtime_Tuning_Guide/sect-Realtime_Tuning_Guide-General_System_Tuning-gettimeofday_speedup.html
>>> http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_MRG/1.3/html/Realtime_Tuning_Guide/sect-Realtime_Tuning_Guide-Realtime_Specific_Tuning-RT_Specific_gettimeofday_speedup.html

--
Galder Zamarreño
Sr. Software Engineer
Infinispan, JBoss Cache



