[JBoss JIRA] (ISPN-7322) Improve triangle algorithm: ordering by segment
by Radim Vansa (JIRA)
[ https://issues.jboss.org/browse/ISPN-7322?page=com.atlassian.jira.plugin.... ]
Radim Vansa commented on ISPN-7322:
-----------------------------------
[~belaban] I disagree. Since we actually want to order only the writes to each key (irrespective of other keys), full source ordering is excessive: one hiccup (a lost or discarded message) delays all subsequent messages until the problem is handled.
About thread creation: do you mean that OOB messages received in one batch are delivered to the application separately (one message per thread)? Can we just switch this off and deliver the whole batch sequentially? The processing on the backup should be non-blocking (though I'm not sure whether anything blocking is left).
> Improve triangle algorithm: ordering by segment
> -----------------------------------------------
>
> Key: ISPN-7322
> URL: https://issues.jboss.org/browse/ISPN-7322
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core
> Reporter: Pedro Ruivo
> Assignee: Pedro Ruivo
>
> The current triangle algorithm uses regular (FIFO-ordered) messages between the primary owner and the backup owners of a key. While this ensures that the backup owners receive the stream of updates in the same order, it slows everything down because updates to different keys cannot be handled in parallel.
> "Triangle unordered" solves this problem by sending OOB (unordered) messages between the primary and the backups. To keep consistency, Infinispan introduces the TriangleOrderManager, which orders the updates based on the segment of the key.
> While this is not as precise as ordering per key, segments are static; this removes complexity, avoids handling cluster topology changes and key addition/removal, and improves performance.
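The idea can be illustrated with a rough sketch (this is not Infinispan's actual TriangleOrderManager code; all names below are invented): the primary stamps each update with a (segment, sequence) pair, and a backup owner applies an update only when it is the next one expected for that segment, buffering any that arrive early over the unordered transport.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical per-segment ordering on a backup owner. Updates may
// arrive in any order (OOB messages), but within a segment they are
// applied strictly by sequence number.
public class SegmentOrderSketch {
    private final long[] nextSeq;                        // next expected sequence per segment
    private final List<Map<Long, Runnable>> pending = new ArrayList<>();

    public SegmentOrderSketch(int numSegments) {
        nextSeq = new long[numSegments];
        for (int i = 0; i < numSegments; i++) {
            pending.add(new HashMap<>());                // seq -> buffered update
        }
    }

    // Called when an unordered update arrives on the backup.
    public synchronized void deliver(int segment, long seq, Runnable apply) {
        if (seq != nextSeq[segment]) {
            pending.get(segment).put(seq, apply);        // arrived early: buffer it
            return;
        }
        apply.run();                                     // in order: apply immediately
        nextSeq[segment]++;
        Runnable next;
        while ((next = pending.get(segment).remove(nextSeq[segment])) != null) {
            next.run();                                  // drain buffered successors
            nextSeq[segment]++;
        }
    }
}
```

Because segments are static, this structure never has to be rebuilt on topology changes or key insertion/removal, which is the simplification the description refers to.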
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
7 years, 4 months
[JBoss JIRA] (ISPN-7322) Improve triangle algorithm: ordering by segment
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/ISPN-7322?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on ISPN-7322:
--------------------------------
While this may temporarily fix the perf regression, I don't like adding OOB messages again, as they cause increased thread creation in the thread pool.
Let's revisit this decision when switching from MessageDispatcher to JChannel and using _message batches_ (MessageBatch). In TriCache, I use a separate thread pool (of size 1) to apply received BACKUPs, and performance is excellent.
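The approach described here can be sketched generically (class and method names below are invented, not TriCache's API): a dedicated single-threaded executor applies BACKUP operations in arrival order, so ordering is preserved without tying up receiver threads or growing the shared thread pool.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: all backup writes are funneled through one
// applier thread, which serializes them in arrival order.
public class BackupApplier {
    private final ExecutorService applier = Executors.newSingleThreadExecutor();

    // Receiver threads hand off the backup write and return immediately.
    public void onBackup(Runnable applyWrite) {
        applier.execute(applyWrite);
    }

    // Stop accepting new writes; already-submitted writes still run.
    public void stop() {
        applier.shutdown();
    }
}
```

The single applier thread gives total ordering of backup writes, which is stronger than the per-segment ordering discussed above but avoids OOB delivery entirely.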
[JBoss JIRA] (ISPN-7323) Reduce number of lambda allocations
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-7323?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-7323:
----------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> Reduce number of lambda allocations
> -----------------------------------
>
> Key: ISPN-7323
> URL: https://issues.jboss.org/browse/ISPN-7323
> Project: Infinispan
> Issue Type: Task
> Components: Core
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: performance
> Fix For: 9.0.0.Beta2
>
>
> With the move to non-blocking invocation, interceptors now use a lot of lambdas, sometimes in the form {{this::method}}. But the JVM doesn't cache this kind of lambda; it only caches non-capturing {{Class::method}} lambdas, so we end up creating lots of extra lambda instances.
> Of course, these are short-lived and quite cheap to collect, but they can still pollute the processor cache.
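A minimal demonstration of the difference (behavior as observed on OpenJDK/HotSpot; the language spec guarantees neither identity): a {{this::method}} reference captures {{this}}, so each evaluation allocates a fresh object, while a non-capturing lambda is cached at its invokedynamic call site.

```java
import java.util.function.Supplier;

public class LambdaCaching {
    private final int value = 42;

    private int get() { return value; }

    // Bound method reference: captures 'this', so on HotSpot each
    // call to capturing() allocates a new Supplier instance.
    public Supplier<Integer> capturing() { return this::get; }

    // Stateless lambda: the invokedynamic call site caches a single
    // instance, reused on every call.
    public static Supplier<Integer> nonCapturing() { return () -> 42; }
}
```

This is why rewriting hot-path {{this::method}} references as explicit calls (or hoisting them into fields) removes the per-invocation garbage.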
[JBoss JIRA] (ISPN-7323) Reduce number of lambda allocations
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-7323?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-7323:
-------------------------------
Status: Open (was: New)
[JBoss JIRA] (ISPN-7323) Reduce number of lambda allocations
by Dan Berindei (JIRA)
Dan Berindei created ISPN-7323:
----------------------------------
Summary: Reduce number of lambda allocations
Key: ISPN-7323
URL: https://issues.jboss.org/browse/ISPN-7323
Project: Infinispan
Issue Type: Task
Components: Core
Reporter: Dan Berindei
Assignee: Dan Berindei
Fix For: 9.0.0.Beta2
With the move to non-blocking invocation, interceptors now use a lot of lambdas, sometimes in the form {{this::method}}. But the JVM doesn't cache this kind of lambda; it only caches non-capturing {{Class::method}} lambdas, so we end up creating lots of extra lambda instances.
Of course, these are short-lived and quite cheap to collect, but they can still pollute the processor cache.
[JBoss JIRA] (ISPN-3690) Lower allocation cost of instances of org.infinispan.commands.read.GetKeyValueCommand
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-3690?page=com.atlassian.jira.plugin.... ]
Dan Berindei resolved ISPN-3690.
--------------------------------
Fix Version/s: 7.1.0.Final
Resolution: Done
> Lower allocation cost of instances of org.infinispan.commands.read.GetKeyValueCommand
> -------------------------------------------------------------------------------------
>
> Key: ISPN-3690
> URL: https://issues.jboss.org/browse/ISPN-3690
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core
> Reporter: Sanne Grinovero
> Priority: Minor
> Labels: performance
> Fix For: 7.1.0.Final
>
>
> Instances of {code}org.infinispan.commands.read.GetKeyValueCommand{code} have a high cost in terms of memory allocation.
> It would be great if we could reduce the runtime cost: in an app server test of just 25 minutes - which stresses many more systems than just Infinispan - the occasional get operations alone accumulated 43GB of memory over time.
> This puts a high load on the TLABs, skewing various other measurements and, among other things, making the default GC options unsuitable.
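For illustration only (these types are invented, not Infinispan's internals), the difference between allocating a command object per read and routing the read directly looks roughly like this; the per-call garbage is exactly what the figures above are measuring.

```java
public class GetCommandSketch {
    // Stand-in for the data container behind the command layer.
    public interface Cache { Object rawGet(Object key); }

    // Per-call style: one short-lived object allocated per read.
    static final class GetCommand {
        final Object key;
        GetCommand(Object key) { this.key = key; }
        Object perform(Cache c) { return c.rawGet(key); }
    }

    public static Object getAllocating(Cache c, Object key) {
        return new GetCommand(key).perform(c);   // allocates on every get
    }

    // Allocation-free style: pass the key straight through.
    public static Object getDirect(Cache c, Object key) {
        return c.rawGet(key);
    }
}
```

Both paths return the same result; only the second avoids filling the TLABs with throwaway command instances on the hot read path.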
[JBoss JIRA] (ISPN-3690) Lower allocation cost of instances of org.infinispan.commands.read.GetKeyValueCommand
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-3690?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-3690:
------------------------------------
This was actually implemented with ISPN-5032.
TBH I'm not very fond of the extra {{InternalEntryFactory}} reference in {{GetCacheEntryCommand}}. It's used to synchronize on the cache entry, but why does {{GetCacheEntryCommand}} need to copy the entry fields atomically when the rest of our code reads the value and the metadata separately, non-atomically?
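The trade-off questioned here can be sketched with made-up types: an atomic snapshot takes the entry's lock to copy value and metadata together, while separate getters allow a concurrent update to interleave between the two reads and yield a mismatched pair.

```java
public class EntrySnapshot {
    public static class Entry {
        private Object value;
        private long version;   // stand-in for the entry's metadata

        public synchronized void update(Object v, long ver) {
            value = v;
            version = ver;
        }

        // Atomic copy: value and metadata are guaranteed to match.
        public synchronized Object[] snapshot() {
            return new Object[] { value, version };
        }

        // Non-atomic reads: between these two calls, another thread
        // may run update(), so the caller can observe the new value
        // paired with the old version (or vice versa).
        public Object readValue() { return value; }
        public long readVersion() { return version; }
    }
}
```

Whether the stronger guarantee is worth the extra factory reference is exactly the open question in the comment: if every other read path tolerates the non-atomic pair, the one synchronized copy buys little.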