[infinispan-dev] Need help

Sanne Grinovero sanne at infinispan.org
Mon Oct 7 06:43:43 EDT 2013


On 7 October 2013 11:12, Pedro Ruivo <pedro at infinispan.org> wrote:
>
>
> On 10/07/2013 12:30 AM, Sanne Grinovero wrote:
>>
>> On 6 October 2013 00:01, Pedro Ruivo <pedro at infinispan.org> wrote:
>>>
>>> Hi Sanne.
>>>
>>> Thanks for your comments. please see inline...
>>>
>>> Cheers,
>>> Pedro
>>>
>>>
>>> On 10/05/2013 09:15 PM, Sanne Grinovero wrote:
>>>>
>>>>
>>>> Hi Pedro,
>>>> looks like you're diving in some good fun :-)
>>>> BTW please keep the dev discussions on the mailing list, adding it.
>>>>
>>>> inline :
>>>>
>>>> On 4 October 2013 22:01, Pedro Ruivo <pedro at infinispan.org> wrote:
>>>>>
>>>>>
>>>>> Hi,
>>>>>
>>>>> Sanne, I need your expertise here. I'm afraid that the problem is in
>>>>> FileListOperations :(
>>>>> I think the FileListOperations implementation needs a transactional
>>>>> cache with strong consistency...
>>>>>
>>>>> I'm 99% sure that it is the origin of the java.lang.AssertionError: file
>>>>> XPTO does not exist. I found out that we have multiple threads adding and
>>>>> removing files from the list. In the scenario in [1] we see 2 threads
>>>>> loading the key from the cache loader: one thread adds a file and the
>>>>> other removes one. The thread that removes is the last one to commit, so
>>>>> the file list is updated to an old state. When it tries to update an
>>>>> index, I get the assertion error.
>>>>
>>>>
>>>>
>>>> Nice, looks like you're onto something.
>>>> I've never seen an AssertionError specifically; looks like you have a
>>>> new test. Could you share it?
>>>
>>>
>>>
>>> yes of course:
>>>
>>> https://github.com/pruivo/infinispan/blob/a4483d08b92d301350823c7fd42725c339a65c7b/query/src/test/java/org/infinispan/query/cacheloaders/CacheStoreTest.java
>>>
>>> so far, only the tests with eviction are failing...
>>
>>
>> Some of the failures you're seeing are caused by internal "assert"
>> keywords in Lucene's code, which have the purpose of verifying that the
>> "filesystem" is going to be synced properly.
>> These assertions don't apply when using our storage: we don't need
>> this sync to happen. In fact, if it weren't for the assertions,
>> the whole method would be a no-op, as it ultimately delegates all logic to
>> a method in the Infinispan Directory which is a no-op.
>>
>> In other words, these are misleading failures and we'd need to avoid
>> the TestNG "feature" of enabling assertions in this case.
>>
>> Still, even if the stacktrace is misleading, I agree on your diagnosis
>> below.
>>
>> Could you reproduce the problem without also involving the Query
>> framework?
>
>
> yes, I think I could
>
>
>> I'd hope that such a test could be independent and live solely in the
>> lucene-directory module; in practice, if you can only reproduce it with
>> the query module, it makes me suspect that we're actually debugging
>> a race condition in the initialization of the two services: a race
>> between the query initialization thread needing to check the index
>> state (so potentially triggering a load from the cachestore), and the
>> thread performing the cachestore preload.
>> (I see your test also fails without preload, but I'm just wondering if
>> that might be an additional complexity.)
>
>
> the assertion is triggered during the putData() method, but I believe that
> the same problem can happen on a cache restart without preload.

OK, I was just trying to figure out why you might have needed the test
to include the Query module.
Looking forward to a test needing the Directory only, then :-)


>>>> Let's step back a second and consider the Cache usage from the point
>>>> of view of FileListOperations.
>>>> Note that even if you have two threads writing at the same time, as
>>>> long as they are on the same node they will be adding/removing
>>>> elements from the same instance of a ConcurrentHashMap.
>>>> Since it's the same instance, it doesn't matter which thread will do
>>>> the put operation last: it will push the correct state.
>>>> (There are assumptions here, but we can forget about those for the
>>>> sake of this debugging: same node -> fine, as there is an external
>>>> lock and no other node is allowed to write at the same time.)
>>>>
>>>
>>> 100% agreed with you, but with a cache store we no longer ensure that 2 (or
>>> more) threads are pointing to the same instance of the ConcurrentHashSet.
>>>
>>> With eviction, the entries are removed from the in-memory container and
>>> persisted in the cache store. In the scenario I've described, 2 threads are
>>> trying to add/remove a file and the file list does not exist in-memory. So
>>> each thread will read from the cache store and deserialize the byte array.
>>> In the end, each thread will have a pointer to a different instance of the
>>> ConcurrentHashSet but with the same elements. And when this happens, we
>>> lose one of the operations.
>>
>>
>> I'm seeing more than a couple of different smelly behaviors interacting
>> here:
>>
>> 1## The single instance ConcurrentHashSet
>> I guess this could be re-thought as it's making some strong
>> assumptions, but considering this service can't be transactional I'd
>> rather explore other solutions first as I think the following changes
>> should be good enough.
>>
>> 2## Eviction not picking the right entry
>> This single key is literally read for each and every performed query,
>> and by all writes as well. Each write will write to this key.
>> Even with eviction being enabled on the cache, I would never expect
>> this key to actually be evicted!
>>
>>   # Action 1: Open an issue to investigate the eviction choice: the
>> strategy seems to be doing a very poor job (or maybe it's just that
>> maxEntries(10) is too low and makes LIRS degenerate into insane
>> choices).
>
>
> my bet is that it's because I'm using a small max entries value.
>
>
>>
>>   # Action 2: I think that for now we could disallow usage of eviction
>> on the metadata cache. I didn't have tests using it, as I wouldn't
>> recommend such a configuration: these entries are very hot and
>> very small. Viable to make it an illegal option?
>
>
> disabling eviction can solve the problem IMO, but during a restart, if
> preload is not enabled, then we may hit the same problem again.

Proposal:
let's be more aggressive on validation.
 - make preload mandatory (for the metadata only)
 - eviction not allowed (for the metadata only)

Note that I don't think we're putting much of a restriction on users;
we're more likely helping with good guidance on how these caches are
supposed to work.
I think it's a good thing to narrow down the possible configuration options
and make it clear what we expect people to use for this purpose.
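
To make the intent concrete, here is a rough, hypothetical sketch of the
kind of start-up validation I have in mind. It is not the actual Infinispan
configuration API: MetadataCacheValidator and the MetadataCacheSettings
accessors are placeholders for whatever the real configuration exposes.

// Hypothetical sketch only: MetadataCacheSettings and its accessors are
// placeholders, not the real Infinispan configuration API.
final class MetadataCacheValidator {

   interface MetadataCacheSettings {
      boolean evictionEnabled();  // assumed accessor
      boolean preloadEnabled();   // assumed accessor
      boolean hasCacheStore();    // assumed accessor
   }

   static void validate(String cacheName, MetadataCacheSettings cfg) {
      if (cfg.evictionEnabled()) {
         throw new IllegalStateException("Eviction is not allowed on the "
               + "Lucene Directory metadata cache '" + cacheName
               + "': the file list key is too hot to be evicted.");
      }
      if (cfg.hasCacheStore() && !cfg.preloadEnabled()) {
         throw new IllegalStateException("Preload must be enabled on the "
               + "metadata cache '" + cacheName + "' when a cache store is "
               + "configured, otherwise concurrent loads may resurrect a "
               + "stale file list after a restart.");
      }
   }
}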


>> 3## The CacheLoader loading the same entry multiple times in parallel
>> Kudos for finding out that there are situations in which we deal with
>> multiple different instances of ConcurrentHashSet! Still, I think that
>> Infinispan core is terribly wrong in this case:
>> from the client code POV a new CHS is created with a put-if-absent
>> atomic operation, and I will assume that core will check/load
>> any enabled cachestore as well.
>> To handle multiple GET operations in parallel, or even in parallel
>> with preload or the client's put-if-absent operation, I would *not*
>> expect Infinispan core to storm the CacheStore implementation with
>> multiple load operations on the same put: a lock should be held on the
>> potential key during such a load operation.
>
>
> I didn't understand what you mean here ^^. Infinispan only tries to load
> from a cache store once per operation (get, put, remove, etc.).
>
> However, I think we have a big window in which multiple operations can load
> the same key from the store. This happens because we only write to the data
> container after the operation ends.

I think that's a very important consideration.

Imagine a REST cache is built on Infinispan, using a filesystem-based
CacheStore, and we receive a million requests per second
for a key "X1", which doesn't exist.
Since each request is not going to find it in the data container, each
request triggers a CacheStore load.
Do you want to enqueue a million IO operations to the disk?
I'd much rather have some locking in memory, which will not only prevent
the disks from struggling but also allow the CPUs to context switch
to perform more useful tasks: being blocked on a lock is a good thing
in this case, rather than allowing "parallel processing of pointless
requests". That's just parallel energy burning, and potentially DOSing
yourself with regard to more interesting requests which could be handled
instead.

To translate it into an implementation design, you need to lock the
"X1" key before attempting to load it from the CacheStore, and when
the lock is acquired the data container needs to be re-checked as it
might have been loaded by a parallel thread.
Or alternatively, lock the key before attempting to read from the data
container: that's not too bad; it's a very brief, local-only lock.
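
To illustrate the first option, here is a minimal sketch of that
lock-then-re-check scheme, using plain java.util.concurrent primitives
rather than Infinispan's actual lock manager; the DataContainer and
CacheStore interfaces below are simplified stand-ins, not the real ones.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantLock;

// Simplified stand-ins, just enough for the sketch.
interface DataContainer { Object get(Object key); void put(Object key, Object value); }
interface CacheStore { Object load(Object key); }

final class LockingLoader {

   private final DataContainer container;
   private final CacheStore store;
   // One lock per key being loaded; a real implementation would likely
   // reuse the existing per-key lock manager or lock striping.
   private final ConcurrentMap<Object, ReentrantLock> loadLocks =
         new ConcurrentHashMap<Object, ReentrantLock>();

   LockingLoader(DataContainer container, CacheStore store) {
      this.container = container;
      this.store = store;
   }

   Object get(Object key) {
      Object value = container.get(key);
      if (value != null) {
         return value;   // fast path: in memory, no lock and no IO
      }
      ReentrantLock lock = new ReentrantLock();
      ReentrantLock existing = loadLocks.putIfAbsent(key, lock);
      if (existing != null) {
         lock = existing;
      }
      lock.lock();
      try {
         // Re-check after acquiring the lock: a parallel thread may have
         // already loaded the entry while we were waiting.
         value = container.get(key);
         if (value == null) {
            value = store.load(key);   // only one thread per key hits the store
            if (value != null) {
               container.put(key, value);
            }
         }
         return value;
      } finally {
         lock.unlock();
         loadLocks.remove(key);   // best-effort cleanup; good enough for a sketch
      }
   }
}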

Worth a different thread? Note that I don't know how Infinispan core is
implementing this today; I'm just understanding from you that it's not
as good as I'd expected.

This might also be the reason why some people have recorded very
bad performance when writing stress tests on the CacheStore, and of
course such a benchmark would highlight too much IO going on... but
it's not how fast we do the IO that is the problem if we can simply
avoid some of it.

>
>
>>
>> If your observation is right, this could also be one of the reasons
>> for so many complaints about CacheStore performance: under these
>> circumstances - which I'd guesstimate are quite common - we're doing
>> lots of unnecessary IO, potentially stressing the slower storage
>> devices. This could even have dramatic effects if there are frequent
>> requests for entries for which the returned value is null.
>>
>>   # Action 3: Investigate and open a JIRA about the missing locking on
>> CacheStore load operations.
>>
>> If this were resolved, we would still have a guarantee of a single
>> instance, am I correct?
>>
>
> cache stores acquire the read lock to load a key. I'm not sure if exclusively
> locking a key would help with anything.

See above: allowing parallelism on loads is not a good idea as it will
storm the CacheStore with redundant requests.


Sanne

