[JBoss JIRA] (ISPN-7324) DDAsyncInterceptor indirection slows down replicated reads
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-7324?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-7324:
------------------------------------
Implicit transactions benefited a lot from the {{GetKeyValueCommand}} check in {{invokeNext()}}, but explicit transactions were actually slowed down by the extra {{instanceof}} checks for {{PrepareCommand}} and {{CommitCommand}}. I will just inline {{DDAsyncInterceptor.visitCommand()}} instead, which has a much smaller effect (both for better and for worse).
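For context, the two dispatch strategies being compared can be sketched with hypothetical names (this is a minimal sketch, not the actual Infinispan code):

```java
// Hypothetical minimal model of the dispatch strategies discussed in this
// issue: the generic visitCommand() indirection vs. a special case in
// invokeNext() that calls the visit method directly for a hot command type.
interface Visitor {
    Object visitRead(ReadCommand cmd);
    Object visitWrite(WriteCommand cmd);
}

interface Command {
    Object acceptVisitor(Visitor v);   // double dispatch
}

class ReadCommand implements Command {
    public Object acceptVisitor(Visitor v) { return v.visitRead(this); }
}

class WriteCommand implements Command {
    public Object acceptVisitor(Visitor v) { return v.visitWrite(this); }
}

class Interceptor implements Visitor {
    public Object visitRead(ReadCommand cmd)   { return "read"; }
    public Object visitWrite(WriteCommand cmd) { return "write"; }

    // Generic path: every command type goes through acceptVisitor(), so the
    // call site inside this method sees many receiver types and can become
    // megamorphic under the JIT once puts, prepares, and commits mix in.
    public Object visitCommand(Command cmd) {
        return cmd.acceptVisitor(this);
    }

    // Special-cased path, as in the GetKeyValueCommand experiment: one
    // instanceof check keeps the hot read path monomorphic.
    public Object invokeNext(Command cmd) {
        if (cmd instanceof ReadCommand) {
            return visitRead((ReadCommand) cmd);
        }
        return visitCommand(cmd);
    }
}
```

The trade-off described in the comment follows from this shape: the extra {{instanceof}} checks speed up the special-cased command but add work to every other command's path.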
> DDAsyncInterceptor indirection slows down replicated reads
> ----------------------------------------------------------
>
> Key: ISPN-7324
> URL: https://issues.jboss.org/browse/ISPN-7324
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.0.0.Beta1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: performance
> Fix For: 9.0.0.Beta2
>
>
> Local reads are fast enough, but the additional interceptors and stage callbacks in (transactional) replicated mode seem to have a much bigger impact on the async interceptor stack than on the classic one.
> One thing that's different with the new interceptors is that {{invokeNext()}} doesn't call {{command.acceptVisitor(nextInterceptor)}} directly. Instead it calls {{nextInterceptor.visitCommand()}}, and the interceptor decides whether to use double-dispatch (by extending {{DDAsyncInterceptor}}) or another strategy.
> In theory this allows us to use simpler interceptors, e.g. having just the methods {{visitReadCommand()}}, {{visitWriteCommand()}}, and {{visitTxCommand()}}. {{CallInterceptor}} already calls {{command.perform()}} for each command. For now, however, most interceptors extend {{DDAsyncInterceptor}}, and tx replicated reads are slower than in 9.0.0.Alpha0.
> With transactions, the {{VisitableCommand.acceptVisitor()}} call site in {{DDAsyncInterceptor.visitCommand()}} is megamorphic (since the initial preload uses put, prepare, and commit). Adding a special check in {{invokeNext()}} to invoke {{command.acceptVisitor(nextInterceptor)}} directly didn't help, but adding a special check for {{GetKeyValueCommand}} made a big difference on my machine:
> |9.0.0.Alpha0 (CommandInterceptor)|4937351.255 ±(99.9%) 61665.164 ops/s|
> |9.0.0.Beta1 (AsyncInterceptor)|4387466.151 ±(99.9%) 78665.887 ops/s|
> |master before ISPN-6802 and ISPN-6803| 4247769.260 ±(99.9%) 133767.371 ops/s|
> |master| 4710798.986 ±(99.9%) 166062.177 ops/s|
> |master with GKVC special case| 5749357.895 ±(99.9%) 87338.878 ops/s|
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
7 years, 11 months
[JBoss JIRA] (ISPN-7328) Administration console - cache statuses in cache container page behave randomly
by Vladimir Blagojevic (JIRA)
[ https://issues.jboss.org/browse/ISPN-7328?page=com.atlassian.jira.plugin.... ]
Vladimir Blagojevic updated ISPN-7328:
--------------------------------------
Status: Open (was: New)
> Administration console - cache statuses in cache container page behave randomly
> -------------------------------------------------------------------------------
>
> Key: ISPN-7328
> URL: https://issues.jboss.org/browse/ISPN-7328
> Project: Infinispan
> Issue Type: Bug
> Components: Console
> Affects Versions: 9.0.0.Beta1
> Reporter: Jiří Holuša
> Assignee: Vladimir Blagojevic
> Priority: Minor
> Attachments: screenshot1.png
>
>
> Steps to reproduce: create a number of caches (in my case at least ~20), go to the cache container page, and refresh it several times. Sometimes some of the caches appear with a yellow warning status, see attached screenshot.
> This occurs quite randomly and only with more caches (and probably more servers). IMHO there is some kind of timeout issue where the console fails to retrieve the statuses of all caches in time.
> I think the best solution would be to show, while the cache status is still being retrieved, some kind of spinner instead of the "warning" icon, which would basically signal "I haven't got the status yet". This would also fix a bit of user-unfriendliness: when you go to a cache container, initially all the statuses are "warning" and then they change to "OK". This moment can take quite some time when there are more caches and can confuse users quite a bit.
[JBoss JIRA] (ISPN-7328) Administration console - cache statuses in cache container page behave randomly
by Vladimir Blagojevic (JIRA)
[ https://issues.jboss.org/browse/ISPN-7328?page=com.atlassian.jira.plugin.... ]
Vladimir Blagojevic updated ISPN-7328:
--------------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan-management-console/pull/176
> Administration console - cache statuses in cache container page behave randomly
> -------------------------------------------------------------------------------
>
> Key: ISPN-7328
> URL: https://issues.jboss.org/browse/ISPN-7328
> Project: Infinispan
> Issue Type: Bug
> Components: Console
> Affects Versions: 9.0.0.Beta1
> Reporter: Jiří Holuša
> Assignee: Vladimir Blagojevic
> Priority: Minor
> Attachments: screenshot1.png
>
>
> Steps to reproduce: create a number of caches (in my case at least ~20), go to the cache container page, and refresh it several times. Sometimes some of the caches appear with a yellow warning status, see attached screenshot.
> This occurs quite randomly and only with more caches (and probably more servers). IMHO there is some kind of timeout issue where the console fails to retrieve the statuses of all caches in time.
> I think the best solution would be to show, while the cache status is still being retrieved, some kind of spinner instead of the "warning" icon, which would basically signal "I haven't got the status yet". This would also fix a bit of user-unfriendliness: when you go to a cache container, initially all the statuses are "warning" and then they change to "OK". This moment can take quite some time when there are more caches and can confuse users quite a bit.
[JBoss JIRA] (ISPN-5876) Pre-commit cache invalidation creates stale cache vulnerability
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-5876?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-5876:
-----------------------------------------------
Petr Penicka <ppenicka(a)redhat.com> changed the Status of [bug 1273147|https://bugzilla.redhat.com/show_bug.cgi?id=1273147] from VERIFIED to CLOSED
> Pre-commit cache invalidation creates stale cache vulnerability
> ---------------------------------------------------------------
>
> Key: ISPN-5876
> URL: https://issues.jboss.org/browse/ISPN-5876
> Project: Infinispan
> Issue Type: Bug
> Components: Transactions
> Affects Versions: 5.2.7.Final
> Reporter: Stephen Fikes
> Assignee: Galder Zamarreño
> Fix For: 5.2.15.Final, 8.1.0.Beta1, 8.1.0.Final
>
>
> In a cluster where Infinispan serves as the level 2 cache for Hibernate (configured for invalidation), because invalidation requests for modified entities are sent *before* database commit, it is possible for nodes receiving the invalidation request to perform eviction and then (due to "local" read requests) reload the evicted entities prior to the time the database commit takes place in the server where the entity was modified.
> Consequently, other servers in the cluster may contain data that remains stale until a subsequent change in another server or until the entity times out from lack of use.
> It isn't easy to write a testcase for this - it required manual intervention to reproduce - but it can be seen with any entity class, cluster, etc. (at least using Oracle - results may vary with specific databases), so I've not attached a testcase. The issue can be seen/understood by code inspection (i.e. the timing of invalidation vs. database commit). That said, my test consisted of a two-node cluster, and I used Byteman rules to delay the database commit of a change to an entity (with an optimistic version property) long enough in "server 1" for eviction to complete and a subsequent re-read (by a worker thread on behalf of an EJB) to take place in "server 2". Following the re-read in "server 2", the database commit proceeds in "server 1", and "server 2" now has a stale copy of the entity in cache.
> One option is pessimistic locking, which blocks any read attempt until the DB commit completes. It is not feasible, however, for many applications to use pessimistic locking for all reads, as this can have a severe impact on concurrency - which is the reason for using optimistic version control in the first place. But due to the early timing of the invalidation broadcast (*before* database commit, while the data is not yet stale), optimistic locking is insufficient to guard against "permanently" stale data. We did see that some databases default to blocking repeatable reads even outside of transactions and without explicit lock requests; Oracle does not provide such a mode. So all reads would have to use pessimistic locks (which must be enclosed in explicit transactions - (b)locking reads are disallowed when autocommit=true in Oracle), and this could require significant effort (re-writes) to use pessimistic reads throughout - in addition to the performance issues this can introduce.
> If the broadcast of an invalidation message always occurs *after* database commit, optimistic control attributes are sufficient to block attempts to write stale data, and though a few failures may occur (as they would in a single server with multiple active threads), it is known that the stale data will be removed within some finite period.
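The timing window can be illustrated with a tiny self-contained sketch (hypothetical classes and method names, not the Infinispan or Hibernate API): invalidating before the commit lets a remote reload cache the pre-commit value, while invalidating after the commit does not.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical single-threaded model of the race described in this issue.
// "database" stands for the shared DB; "remoteCache" for the L2 cache on
// another cluster node that reloads on a cache miss.
class StaleCacheDemo {
    static Map<String, String> database = new HashMap<>();
    static Map<String, String> remoteCache = new HashMap<>();

    // The remote node misses in its cache and reloads from the database.
    static void remoteRead(String key) {
        remoteCache.put(key, database.get(key));
    }

    // Invalidation broadcast *before* commit: a remote reload in the window
    // between invalidation and commit caches the old value, which then
    // stays stale after the commit lands.
    static String preCommitInvalidation() {
        database.put("k", "old");
        remoteCache.put("k", "old");
        remoteCache.remove("k");      // 1. invalidate first...
        remoteRead("k");              // 2. ...remote node reloads "old"
        database.put("k", "new");     // 3. ...then the DB commit happens
        return remoteCache.get("k");  // remote cache is now permanently stale
    }

    // Invalidation broadcast *after* commit: any reload that follows the
    // invalidation can only see committed data.
    static String postCommitInvalidation() {
        database.put("k", "old");
        remoteCache.put("k", "old");
        database.put("k", "new");     // 1. DB commit first
        remoteCache.remove("k");      // 2. then invalidate
        remoteRead("k");              // 3. reload sees the committed value
        return remoteCache.get("k");
    }
}
```

In the pre-commit ordering the remote cache ends up holding "old" after the commit; in the post-commit ordering it holds "new", which is the behavior the reporter is asking for.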
[JBoss JIRA] (ISPN-5568) KeyAffinityService race condition on view change
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-5568?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-5568:
-----------------------------------------------
Petr Penicka <ppenicka(a)redhat.com> changed the Status of [bug 1233968|https://bugzilla.redhat.com/show_bug.cgi?id=1233968] from VERIFIED to CLOSED
> KeyAffinityService race condition on view change
> ------------------------------------------------
>
> Key: ISPN-5568
> URL: https://issues.jboss.org/browse/ISPN-5568
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 5.2.11.Final
> Reporter: Dennis Reed
> Assignee: Bartosz Baranowski
> Fix For: 8.0.0.Beta2, 5.2.14.Final, 7.2.4.Final
>
>
> KeyAffinityService#getKeyForAddress runs in a tight loop looking for keys:
> {noformat}
> queue = address2key.get(address)
> while (result == null)
> result = queue.poll()
> {noformat}
> KeyAffinityService#handleViewChange clears and resets the queue list on membership change:
> {noformat}
> address2key.clear()
> for each address
> map.put(address, new queue)
> {noformat}
> If a view change comes in after getKeyForAddress gets the queue, and the queue is empty, getKeyForAddress will spin forever polling the discarded queue while new keys are added to the replacement queue.
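The race and one way to avoid it can be sketched as follows (hypothetical names, not the actual Infinispan fix): instead of caching the queue reference across iterations, look the queue up from the map on every attempt so a queue swapped in by handleViewChange() is picked up.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ConcurrentMap;

// Hypothetical model of the KeyAffinityService queue lookup. The loops are
// bounded here (maxTries) so the sketch terminates; the real code loops
// until a key arrives.
class KeyQueues {
    final ConcurrentMap<String, Queue<Integer>> address2key = new ConcurrentHashMap<>();

    // Buggy shape from the report: caches the queue reference up front, so
    // after a view change it keeps polling a discarded queue.
    public Integer getKeyBuggy(String address, int maxTries) {
        Queue<Integer> queue = address2key.get(address);
        Integer result = null;
        for (int i = 0; i < maxTries && result == null; i++) {
            result = queue.poll();
        }
        return result;
    }

    // Fixed shape: re-reads the queue from the map on each attempt, so it
    // sees the replacement queue installed by handleViewChange().
    public Integer getKeyFixed(String address, int maxTries) {
        Integer result = null;
        for (int i = 0; i < maxTries && result == null; i++) {
            result = address2key.get(address).poll();
        }
        return result;
    }

    // Simulates KeyAffinityService#handleViewChange for one address:
    // replaces the queue with a fresh empty one.
    public void handleViewChange(String address) {
        address2key.put(address, new ConcurrentLinkedQueue<>());
    }
}
```

With the fixed shape, keys pushed into the new queue after a view change are still found; with the buggy shape, a thread that grabbed the old reference before the change never sees them.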