[JBoss JIRA] (ISPN-7318) Update cloud cache store integration tests
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-7318?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-7318:
----------------------------------
Status: Resolved (was: Pull Request Sent)
Fix Version/s: 8.0.2.Final
Resolution: Done
> Update cloud cache store integration tests
> ------------------------------------------
>
> Key: ISPN-7318
> URL: https://issues.jboss.org/browse/ISPN-7318
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 8.2.4.Final
> Reporter: Vojtech Juranek
> Assignee: Vojtech Juranek
> Fix For: 8.0.2.Final
>
>
> Cloud cache store integration tests don't run properly: they fail with AWS, and OpenStack is not supported at all. This should be fixed. Also, part of the tests are obsolete (e.g. testing of streaming support) and don't run at all - these need to be removed.
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
7 years, 4 months
[JBoss JIRA] (ISPN-7047) Cloud cache store can check if container exists too early
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-7047?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-7047:
----------------------------------
Status: Resolved (was: Pull Request Sent)
Fix Version/s: 8.0.2.Final
Resolution: Done
> Cloud cache store can check if container exists too early
> ---------------------------------------------------------
>
> Key: ISPN-7047
> URL: https://issues.jboss.org/browse/ISPN-7047
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 8.2.4.Final
> Reporter: Vojtech Juranek
> Assignee: Vojtech Juranek
> Fix For: 8.0.2.Final
>
>
> When the container doesn't exist and the cloud cache store creates a new one, it immediately checks whether the container was created. It can happen (observed several times on AWS S3) that the container was created, but the cache store failed with an exception saying it was not able to create the cache store, because it checked S3 too early.
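Since bucket creation on stores like S3 is eventually consistent, one way to avoid the premature check is to poll for the container with a bounded retry instead of checking once immediately after creation. The sketch below is a minimal illustration of that idea; `ContainerWait` and `waitForContainer` are hypothetical names, not part of the actual Infinispan cloud store code.

```java
import java.util.function.BooleanSupplier;

public class ContainerWait {
    /**
     * Polls {@code exists} up to {@code maxAttempts} times, sleeping
     * {@code delayMillis} between attempts. Returns true as soon as the
     * container becomes visible, false if it never appears.
     */
    public static boolean waitForContainer(BooleanSupplier exists,
                                           int maxAttempts,
                                           long delayMillis) throws InterruptedException {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if (exists.getAsBoolean()) {
                return true;
            }
            Thread.sleep(delayMillis);
        }
        return false;
    }
}
```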
[JBoss JIRA] (ISPN-7325) Shared transactional store warnings not applicable to local caches
by Ryan Emerson (JIRA)
Ryan Emerson created ISPN-7325:
----------------------------------
Summary: Shared transactional store warnings not applicable to local caches
Key: ISPN-7325
URL: https://issues.jboss.org/browse/ISPN-7325
Project: Infinispan
Issue Type: Bug
Components: Loaders and Stores
Affects Versions: 9.0.0.Beta1
Reporter: Ryan Emerson
Assignee: Ryan Emerson
Currently a CacheConfigurationException is thrown if a cache store is configured as transactional but not shared, for all cache modes. This exception shouldn't be thrown for a local cache.
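The intended rule can be summarized as: a transactional, non-shared store is only an error when the cache is clustered. The standalone check below is a sketch of that rule; the enum and the validation method are simplified stand-ins, not the real Infinispan configuration-builder code.

```java
public class StoreValidation {
    enum CacheMode { LOCAL, REPL_SYNC, DIST_SYNC }

    static void validate(CacheMode mode, boolean transactional, boolean shared) {
        // A transactional store that is not shared can diverge across nodes,
        // so reject it -- but only when the cache is actually clustered.
        if (transactional && !shared && mode != CacheMode.LOCAL) {
            throw new IllegalStateException(
                "Transactional stores must be shared in clustered caches");
        }
    }
}
```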
[JBoss JIRA] (ISPN-7324) DDAsyncInterceptor indirection slows down replicated reads
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-7324?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-7324:
-------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/4743
> DDAsyncInterceptor indirection slows down replicated reads
> ----------------------------------------------------------
>
> Key: ISPN-7324
> URL: https://issues.jboss.org/browse/ISPN-7324
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.0.0.Beta1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: performance
> Fix For: 9.0.0.Beta2
>
>
> Local reads are fast enough, but the additional interceptors and stage callbacks in (transactional) replicated mode seem to have a much bigger impact with the async interceptor stack than with the classic one.
> One thing that's different with the new interceptors is that {{invokeNext()}} doesn't call {{command.acceptVisitor(nextInterceptor)}} directly. Instead it calls {{nextInterceptor.visitCommand()}}, and the interceptor decides whether to use double-dispatch (by extending {{DDAsyncInterceptor}}) or another strategy.
> In theory this allows us to use simpler interceptors, e.g. having just the methods {{visitReadCommand()}}, {{visitWriteCommand()}}, and {{visitTxCommand()}}. {{CallInterceptor}} already calls {{command.perform()}} for each command. For now, however, most interceptors extend {{DDAsyncInterceptor}}, and tx replicated reads are slower than in 9.0.0.Alpha0.
> With transactions, the {{VisitableCommand.acceptVisitor()}} call site in {{DDAsyncInterceptor.visitCommand}} is megamorphic (since the initial preload uses put, prepare, and commit). Adding a special check in {{invokeNext()}} to invoke {{command.acceptVisitor(nextInterceptor)}} didn't help, but adding a special check for {{GetKeyValueCommand}} made a big difference on my machine:
> |9.0.0.Alpha0 (CommandInterceptor)|4937351.255 ±(99.9%) 61665.164 ops/s|
> |9.0.0.Beta1 (AsyncInterceptor)|4387466.151 ±(99.9%) 78665.887 ops/s|
> |master before ISPN-6802 and ISPN-6803| 4247769.260 ±(99.9%) 133767.371 ops/s|
> |master| 4710798.986 ±(99.9%) 166062.177 ops/s|
> |master with GKVC special case| 5749357.895 ±(99.9%) 87338.878 ops/s|
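The double-dispatch indirection and the {{GetKeyValueCommand}} fast path described above can be sketched as follows. The interfaces here are stripped-down stand-ins for Infinispan's {{VisitableCommand}}/{{DDAsyncInterceptor}}, not the real classes; the point is only the shape of the two call paths.

```java
public class DispatchSketch {
    interface Command { Object acceptVisitor(Visitor v); }
    interface Visitor {
        Object visitGet(GetKeyValueCommand cmd);
        Object visitPut(PutCommand cmd);
    }
    static final class GetKeyValueCommand implements Command {
        public Object acceptVisitor(Visitor v) { return v.visitGet(this); }
    }
    static final class PutCommand implements Command {
        public Object acceptVisitor(Visitor v) { return v.visitPut(this); }
    }
    static final Visitor INTERCEPTOR = new Visitor() {
        public Object visitGet(GetKeyValueCommand cmd) { return "get"; }
        public Object visitPut(PutCommand cmd) { return "put"; }
    };

    // Generic path: acceptVisitor() becomes a megamorphic virtual call when
    // many command types (put, prepare, commit, ...) flow through this site.
    static Object invokeNextGeneric(Command cmd) {
        return cmd.acceptVisitor(INTERCEPTOR);
    }

    // Special-cased path: an instanceof check keeps the hot read path
    // monomorphic, the trick reported to help in the benchmark above.
    static Object invokeNextSpecialCased(Command cmd) {
        if (cmd instanceof GetKeyValueCommand) {
            return INTERCEPTOR.visitGet((GetKeyValueCommand) cmd);
        }
        return cmd.acceptVisitor(INTERCEPTOR);
    }
}
```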
[JBoss JIRA] (ISPN-7322) Improve triangle algorithm: ordering by segment
by Radim Vansa (JIRA)
[ https://issues.jboss.org/browse/ISPN-7322?page=com.atlassian.jira.plugin.... ]
Radim Vansa commented on ISPN-7322:
-----------------------------------
[~belaban] Allocating memory is fast; you pay for it later in GC. Despite that, I've seen quite a substantial amount of time spent in marshalling (in flame graphs, so the GC overhead was not included), even though these were just byte arrays and strings. I can imagine that with complex objects the cost is higher; in any case, I wouldn't call the marshalling effect negligible.
> Improve triangle algorithm: ordering by segment
> -----------------------------------------------
>
> Key: ISPN-7322
> URL: https://issues.jboss.org/browse/ISPN-7322
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core
> Reporter: Pedro Ruivo
> Assignee: Pedro Ruivo
>
> The current triangle algorithm uses regular messages (FIFO ordered) between the primary owner and the backup owners of a key. While this ensures that the backup owners receive the stream of updates in the same order, it makes everything slower, since it doesn't allow different keys to be handled in parallel.
> "Triangle unordered" solves this problem by sending OOB messages (not ordered) between the primary and backup owners. To keep consistency, Infinispan introduces the TriangleOrderManager, which orders the updates based on the segment of the key.
> While it is not as precise as ordering per key, the segments are static; this removes complexity, avoids handling cluster topology changes and key addition/removal, and improves performance.
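The segment-based ordering can be sketched with per-segment sequence numbers: the primary stamps each update for a segment, and a backup applies an update only when it is the next expected one for that segment. This is an illustration of the idea behind TriangleOrderManager, assuming a simple counter-per-segment design, not its actual implementation.

```java
import java.util.concurrent.atomic.AtomicLongArray;

public class SegmentOrder {
    private final AtomicLongArray nextToSend;     // primary side
    private final AtomicLongArray nextToDeliver;  // backup side

    public SegmentOrder(int numSegments) {
        nextToSend = new AtomicLongArray(numSegments);
        nextToDeliver = new AtomicLongArray(numSegments);
    }

    /** Primary: assign the next sequence number for the key's segment. */
    public long stamp(int segment) {
        return nextToSend.getAndIncrement(segment);
    }

    /** Backup: returns true if this update is the next one expected for the
     *  segment and may be applied now; otherwise it must wait for earlier
     *  updates of the same segment (updates of other segments can proceed). */
    public boolean tryDeliver(int segment, long sequence) {
        return nextToDeliver.compareAndSet(segment, sequence, sequence + 1);
    }
}
```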
[JBoss JIRA] (ISPN-7324) DDAsyncInterceptor indirection slows down replicated reads
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-7324?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-7324:
-------------------------------
Status: Open (was: New)
> DDAsyncInterceptor indirection slows down replicated reads
> ----------------------------------------------------------
>
> Key: ISPN-7324
> URL: https://issues.jboss.org/browse/ISPN-7324
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.0.0.Beta1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: performance
> Fix For: 9.0.0.Beta2
>
>
> Local reads are fast enough, but the additional interceptors and stage callbacks in (transactional) replicated mode seem to have a much bigger impact with the async interceptor stack than with the classic one.
> One thing that's different with the new interceptors is that {{invokeNext()}} doesn't call {{command.acceptVisitor(nextInterceptor)}} directly. Instead it calls {{nextInterceptor.visitCommand()}}, and the interceptor decides whether to use double-dispatch (by extending {{DDAsyncInterceptor}}) or another strategy.
> In theory this allows us to use simpler interceptors, e.g. having just the methods {{visitReadCommand()}}, {{visitWriteCommand()}}, and {{visitTxCommand()}}. {{CallInterceptor}} already calls {{command.perform()}} for each command. For now, however, most interceptors extend {{DDAsyncInterceptor}}, and tx replicated reads are slower than in 9.0.0.Alpha0.
> With transactions, the {{VisitableCommand.acceptVisitor()}} call site in {{DDAsyncInterceptor.visitCommand}} is megamorphic (since the initial preload uses put, prepare, and commit). Adding a special check in {{invokeNext()}} to invoke {{command.acceptVisitor(nextInterceptor)}} didn't help, but adding a special check for {{GetKeyValueCommand}} made a big difference on my machine:
> |9.0.0.Alpha0 (CommandInterceptor)|4937351.255 ±(99.9%) 61665.164 ops/s|
> |9.0.0.Beta1 (AsyncInterceptor)|4387466.151 ±(99.9%) 78665.887 ops/s|
> |master before ISPN-6802 and ISPN-6803| 4247769.260 ±(99.9%) 133767.371 ops/s|
> |master| 4710798.986 ±(99.9%) 166062.177 ops/s|
> |master with GKVC special case| 5749357.895 ±(99.9%) 87338.878 ops/s|
[JBoss JIRA] (ISPN-7324) DDAsyncInterceptor indirection slows down replicated reads
by Dan Berindei (JIRA)
Dan Berindei created ISPN-7324:
----------------------------------
Summary: DDAsyncInterceptor indirection slows down replicated reads
Key: ISPN-7324
URL: https://issues.jboss.org/browse/ISPN-7324
Project: Infinispan
Issue Type: Bug
Components: Core
Affects Versions: 9.0.0.Beta1
Reporter: Dan Berindei
Assignee: Dan Berindei
Fix For: 9.0.0.Beta2
Local reads are fast enough, but the additional interceptors and stage callbacks in (transactional) replicated mode seem to have a much bigger impact with the async interceptor stack than with the classic one.
One thing that's different with the new interceptors is that {{invokeNext()}} doesn't call {{command.acceptVisitor(nextInterceptor)}} directly. Instead it calls {{nextInterceptor.visitCommand()}}, and the interceptor decides whether to use double-dispatch (by extending {{DDAsyncInterceptor}}) or another strategy.
In theory this allows us to use simpler interceptors, e.g. having just the methods {{visitReadCommand()}}, {{visitWriteCommand()}}, and {{visitTxCommand()}}. {{CallInterceptor}} already calls {{command.perform()}} for each command. For now, however, most interceptors extend {{DDAsyncInterceptor}}, and tx replicated reads are slower than in 9.0.0.Alpha0.
With transactions, the {{VisitableCommand.acceptVisitor()}} call site in {{DDAsyncInterceptor.visitCommand}} is megamorphic (since the initial preload uses put, prepare, and commit). Adding a special check in {{invokeNext()}} to invoke {{command.acceptVisitor(nextInterceptor)}} didn't help, but adding a special check for {{GetKeyValueCommand}} made a big difference on my machine:
|9.0.0.Alpha0 (CommandInterceptor)|4937351.255 ±(99.9%) 61665.164 ops/s|
|9.0.0.Beta1 (AsyncInterceptor)|4387466.151 ±(99.9%) 78665.887 ops/s|
|master before ISPN-6802 and ISPN-6803| 4247769.260 ±(99.9%) 133767.371 ops/s|
|master| 4710798.986 ±(99.9%) 166062.177 ops/s|
|master with GKVC special case| 5749357.895 ±(99.9%) 87338.878 ops/s|
[JBoss JIRA] (ISPN-7322) Improve triangle algorithm: ordering by segment
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/ISPN-7322?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on ISPN-7322:
--------------------------------
[~rvansa] No, a batch is delivered by *one* thread (same as a single message). In my experiments, full source ordering beats the overhead created by a high number of threads (context switching and lock contention). Also, if you use OOB messages, you won't reap the benefits of improvements such as JGRP-2143.
[~pruivo] Unmarshalling only has a higher cost if you need to allocate memory. Parsing into pre-created (or fixed) memory buffers should be fast.
> Improve triangle algorithm: ordering by segment
> -----------------------------------------------
>
> Key: ISPN-7322
> URL: https://issues.jboss.org/browse/ISPN-7322
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core
> Reporter: Pedro Ruivo
> Assignee: Pedro Ruivo
>
> The current triangle algorithm uses regular messages (FIFO ordered) between the primary owner and the backup owners of a key. While this ensures that the backup owners receive the stream of updates in the same order, it makes everything slower, since it doesn't allow different keys to be handled in parallel.
> "Triangle unordered" solves this problem by sending OOB messages (not ordered) between the primary and backup owners. To keep consistency, Infinispan introduces the TriangleOrderManager, which orders the updates based on the segment of the key.
> While it is not as precise as ordering per key, the segments are static; this removes complexity, avoids handling cluster topology changes and key addition/removal, and improves performance.
[JBoss JIRA] (ISPN-7322) Improve triangle algorithm: ordering by segment
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-7322?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo commented on ISPN-7322:
-----------------------------------
I'm not sure we want to deliver messages in a single thread. For large entries, unmarshalling the command has a huge impact. Delivering the updates concurrently and ordering them after they are unmarshalled seems to improve performance.
> Improve triangle algorithm: ordering by segment
> -----------------------------------------------
>
> Key: ISPN-7322
> URL: https://issues.jboss.org/browse/ISPN-7322
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core
> Reporter: Pedro Ruivo
> Assignee: Pedro Ruivo
>
> The current triangle algorithm uses regular messages (FIFO ordered) between the primary owner and the backup owners of a key. While this ensures that the backup owners receive the stream of updates in the same order, it makes everything slower, since it doesn't allow different keys to be handled in parallel.
> "Triangle unordered" solves this problem by sending OOB messages (not ordered) between the primary and backup owners. To keep consistency, Infinispan introduces the TriangleOrderManager, which orders the updates based on the segment of the key.
> While it is not as precise as ordering per key, the segments are static; this removes complexity, avoids handling cluster topology changes and key addition/removal, and improves performance.