[infinispan-dev] Consolidating temporary per-key data

Radim Vansa rvansa at redhat.com
Tue Dec 1 10:13:07 EST 2015


On 12/01/2015 03:26 PM, Dan Berindei wrote:
> On Tue, Dec 1, 2015 at 11:00 AM, Radim Vansa <rvansa at redhat.com> wrote:
>> On 11/30/2015 08:17 PM, Dan Berindei wrote:
>>> The first problem that comes to mind is that context entries are also
>>> stored in a map, at least in transactional mode. So access through the
>>> context would only be faster in non-tx caches, in tx caches it would
>>> not add any benefits.
>> I admit that I was thinking more about non-tx caches; I would have to
>> re-check which maps are written in tx mode. Please, let's focus on non-tx.
>>
>> However, I probably don't fully understand your answer. I am not
>> talking so much about reads from the context, or even reads from these
>> concurrent maps (those can always be cached in the context somehow),
>> but about writes to them.
> It makes sense if you consider tx caches, because the only way to
> cache these in the context for a transaction is to store them in
> another hashmap (a la `AbstractCacheTransaction.lookedUpEntries`).
>
> I know you're more interested in non-tx caches, but you have the same
> problem with the non-tx commands that use a NonTxInvocationContext
> (e.g. PutMapCommand).

lookedUpEntries is something completely different from what I am talking 
about: although it is also addressed by key, it is *private to the 
command invocation*.

I was talking only about maps that are shared by *concurrent command 
invocations* (such as any map held in an interceptor).
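The distinction above can be illustrated with a rough sketch (hypothetical class names, not actual Infinispan internals): a per-invocation map in the context is only ever touched by one thread, while a map held by an interceptor is hit by every concurrent command and pays for synchronization on each write.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Private to one command invocation: a plain map held in the context,
// never touched by another thread (a la lookedUpEntries).
class InvocationContextSketch {
    final Map<Object, Object> lookedUpEntries = new HashMap<>();
}

// Shared by concurrent command invocations: a map held by an
// interceptor, where every write contends on the shared structure.
class InterceptorSketch {
    final ConcurrentMap<Object, Object> perKeyState = new ConcurrentHashMap<>();
}
```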

Radim

>
>>> I also have some trouble imagining how these temporary entries would
>>> be released, since locks, L1 requestors, L1 synchronizers, and write
>>> registrations all have their own rules for cleaning up.
>> Maybe I am oversimplifying that - all the components would modify the
>> shared record, and once the record becomes empty, it's removed. It's
>> just a logical OR on these collections.
> I think you'd need something more like a refcount, and to atomically
> release the entry when the count reaches 0. (We used to have something
> similar for the locks map.) It's certainly doable, but the challenge
> is to make it cheaper to maintain than the separate maps we have now.
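Dan's refcount suggestion could be sketched roughly like this (a minimal sketch with hypothetical names, not actual Infinispan code): acquire and release go through `ConcurrentHashMap.compute()`, so the count is updated under the map's per-bin lock and the mapping is removed atomically when the count drops to zero.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical consolidated per-key record; each component (locks,
// expiration, L1) holds a reference while it needs the record.
class PerKeyRecord {
    int refCount; // mutated only inside compute(), i.e. under the bin lock
}

class PerKeyRegistry {
    private final ConcurrentMap<Object, PerKeyRecord> records = new ConcurrentHashMap<>();

    // Acquire (or create) the record for a key, incrementing its refcount.
    PerKeyRecord acquire(Object key) {
        return records.compute(key, (k, rec) -> {
            if (rec == null) rec = new PerKeyRecord();
            rec.refCount++;
            return rec;
        });
    }

    // Release the record; returning null from compute() removes the
    // mapping atomically once the count reaches zero.
    void release(Object key) {
        records.compute(key, (k, rec) -> {
            if (rec == null) return null;
            return --rec.refCount == 0 ? null : rec;
        });
    }

    int size() {
        return records.size();
    }
}
```

The cost question Dan raises is visible here: every acquire/release is a `compute()` on a shared map, so this only pays off if it replaces several such operations on the separate maps.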
>
>>> Finally, I'm not sure how much this would help. I actually removed the
>>> write registration for everything except RemoveExpiredCommand when
>>> testing the HotRod server performance, but I didn't get any
>>> significant improvement on my machine. Which was kind of expected,
>>> since the benchmark doesn't seem to be CPU-bound, and JFR was showing
>>> it with < 1.5% of CPU.
>> Datapoint noted. But if there's an application using embedded Infinispan,
>> it's quite likely that the app is CPU-bound, and then Infinispan becomes
>> CPU-bound, too. We're not optimizing for benchmarks but for apps. Though
>> I can see that if the server is not CPU-bound, it won't make much difference.
>>
>> Thanks for your opinions, guys - if Will removes the expiration
>> registration, this idea is void (is anyone really using L1?). We'll see
>> how this ends up.
>>
>> Radim
>>
>>> Cheers
>>> Dan
>>>
>>>
>>> On Fri, Nov 27, 2015 at 11:28 AM, Radim Vansa <rvansa at redhat.com> wrote:
>>>> No thoughts at all? @wburns, could I have your view on this?
>>>>
>>>> Thanks
>>>>
>>>> Radim
>>>>
>>>> On 11/23/2015 04:26 PM, Radim Vansa wrote:
>>>>> Hi again,
>>>>>
>>>>> examining some flamegraphs, I've found out that the
>>>>> ExpirationInterceptor has recently been added, which registers ongoing
>>>>> writes in a hashmap. So at this point we have a map for locks, a map
>>>>> for writes used for expiration, another two key-addressed maps in
>>>>> L1ManagerImpl, one in L1NonTxInterceptor, and maybe other maps elsewhere.
>>>>>
>>>>> This makes me think that we could save map lookups and expensive writes
>>>>> by providing a *single map for temporary per-key data*. A reference to the
>>>>> entry could be stored in the context to save the lookups. An extreme
>>>>> case would be to put this into the DataContainer, but I think that
>>>>> would prove too tricky in practice.
>>>>>
>>>>> A downside would be the loss of encapsulation (any component could
>>>>> theoretically access e.g. locks), but I don't find that too dramatic.
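The consolidation idea — one shared record per key, removed once it is logically empty across all components — could be sketched like this (hypothetical fields standing in for the lock map, the expiration registration, and the L1 requestors; not actual Infinispan internals):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Consumer;

// One shared record replacing the separate per-key maps; it lives
// only while at least one component still needs it.
class TempRecord {
    Object lockOwner;                                 // stand-in for the lock map
    boolean writePending;                             // stand-in for expiration registration
    final Set<Object> l1Requestors = new HashSet<>(); // stand-in for L1 requestors

    // The "logical OR" over the components: empty means removable.
    boolean isEmpty() {
        return lockOwner == null && !writePending && l1Requestors.isEmpty();
    }
}

class TempRecordMap {
    final ConcurrentMap<Object, TempRecord> map = new ConcurrentHashMap<>();

    // Apply a mutation under the map's per-bin lock; the record is
    // removed atomically once it becomes empty.
    void update(Object key, Consumer<TempRecord> mutation) {
        map.compute(key, (k, rec) -> {
            if (rec == null) rec = new TempRecord();
            mutation.accept(rec);
            return rec.isEmpty() ? null : rec;
        });
    }
}
```

In this sketch the context could cache the `TempRecord` reference to skip repeated lookups, though mutations still have to go through `compute()` so the empty-check and removal stay atomic.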
>>>>>
>>>>> WDYT?
>>>>>
>>>>> Radim
>>>>>
>>>>
>>>> _______________________________________________
>>>> infinispan-dev mailing list
>>>> infinispan-dev at lists.jboss.org
>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>
>>


-- 
Radim Vansa <rvansa at redhat.com>
JBoss Performance Team
