[JBoss JIRA] (ISPN-6804) Upgrade Aesh to 0.66.8
by Andrea Cosentino (JIRA)
Andrea Cosentino created ISPN-6804:
--------------------------------------
Summary: Upgrade Aesh to 0.66.8
Key: ISPN-6804
URL: https://issues.jboss.org/browse/ISPN-6804
Project: Infinispan
Issue Type: Component Upgrade
Reporter: Andrea Cosentino
Assignee: Andrea Cosentino
Priority: Minor
Fix For: 9.0.0.Alpha3
[JBoss JIRA] (ISPN-6803) Precompute a bitset for each flag
by Dan Berindei (JIRA)
Dan Berindei created ISPN-6803:
----------------------------------
Summary: Precompute a bitset for each flag
Key: ISPN-6803
URL: https://issues.jboss.org/browse/ISPN-6803
Project: Infinispan
Issue Type: Task
Components: Core
Affects Versions: 9.0.0.Alpha2
Reporter: Dan Berindei
Assignee: Dan Berindei
Fix For: 9.0.0.Alpha3
Commands now keep track of flags as "bitsets" that are actually {{long}} values.
However, flag checks still reference the {{Flag}} instances themselves, and because the ordinal is not a static field, HotSpot cannot optimize the mask computation away. We can avoid that by precomputing a bitset for each flag and making it {{static final}}.
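A minimal sketch of the proposed change (the holder class, the field names, and {{command.getFlagsBitSet()}} are illustrative assumptions, not the actual Infinispan API):
{code:java}
// Minimal sketch of the proposed change; the holder class, field names and
// command.getFlagsBitSet() are illustrative, not the actual Infinispan API.
public final class FlagBitSets {
   // Computed once at class-load time and stored in static final fields,
   // so HotSpot can treat the masks as constants in flag checks.
   public static final long CACHE_MODE_LOCAL = mask(Flag.CACHE_MODE_LOCAL);
   public static final long SKIP_LOCKING = mask(Flag.SKIP_LOCKING);

   private FlagBitSets() {
   }

   private static long mask(Flag flag) {
      return 1L << flag.ordinal();
   }
}

// Hot-path check: a bitwise AND against a constant...
boolean skipLocking = (command.getFlagsBitSet() & FlagBitSets.SKIP_LOCKING) != 0;

// ...instead of recomputing the mask from the enum instance on every call:
boolean skipLockingSlow =
      (command.getFlagsBitSet() & (1L << Flag.SKIP_LOCKING.ordinal())) != 0;
{code}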
[JBoss JIRA] (ISPN-6803) Precompute a bitset for each flag
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-6803?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-6803:
-------------------------------
Status: Open (was: New)
> Precompute a bitset for each flag
> ---------------------------------
>
> Key: ISPN-6803
> URL: https://issues.jboss.org/browse/ISPN-6803
> Project: Infinispan
> Issue Type: Task
> Components: Core
> Affects Versions: 9.0.0.Alpha2
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 9.0.0.Alpha3
>
>
> Commands now keep track of flags as "bitsets" that are actually {{long}} values.
> However, flag checks still reference the {{Flag}} instances themselves, and because the ordinal is not a static field, HotSpot cannot optimize the mask computation away. We can avoid that by precomputing a bitset for each flag and making it {{static final}}.
[JBoss JIRA] (ISPN-6802) Micro-optimizations for read operations
by Dan Berindei (JIRA)
Dan Berindei created ISPN-6802:
----------------------------------
Summary: Micro-optimizations for read operations
Key: ISPN-6802
URL: https://issues.jboss.org/browse/ISPN-6802
Project: Infinispan
Issue Type: Task
Components: Core
Affects Versions: 9.0.0.Alpha2
Reporter: Dan Berindei
Assignee: Dan Berindei
Fix For: 9.0.0.Alpha3
* L1 entries are written to the data container by L1TxInterceptor/L1NonTxInterceptor directly, so there is no reason to commit the context entries in EntryWrappingInterceptor or to clear the locks in the locking interceptors.
* ClearCommands can no longer be wrapped in PrepareCommands, so we can stop the state transfer in {{EntryWrappingInterceptor.visitClearCommand()}} instead of checking the type for each command.
* In transactional caches, a read operation without an explicit transaction queries the transaction manager for the current transaction twice, which usually means two thread-local lookups.
* IsMarshallableInterceptor shouldn't do anything unless there is an asynchronous store.
* Transactional remote get commands use NonTxInvocationInterceptor instead of SingleKeyNonTxInterceptor.
* Configuration attributes should be cached, as reading them requires accessing multiple instances plus a cast in {{Attribute.get()}} (see the sketch below).
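For the last point, a hypothetical sketch of the caching pattern; the interceptor class and its methods are illustrative, only {{Configuration}} and the attribute lookup come from the issue:
{code:java}
import org.infinispan.configuration.cache.Configuration;

// Hypothetical sketch of the caching pattern from the last bullet; the
// interceptor class and its methods are illustrative, not real Infinispan APIs.
public class ExampleInterceptor {
   // Cached once at startup: the attribute lookup and the cast inside
   // Attribute.get() happen here, not on every operation.
   private boolean l1Enabled;

   public void start(Configuration configuration) {
      l1Enabled = configuration.l1().enabled();
   }

   public Object visitRead(Object command) {
      if (l1Enabled) { // plain field read on the hot path
         // L1-specific handling would go here
      }
      return command;
   }
}
{code}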
[JBoss JIRA] (ISPN-6799) OOB thread pool fills with threads trying to send remote get responses
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-6799?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero commented on ISPN-6799:
---------------------------------------
Agreed, I regret the bundler part of my suggestion, as it's irrelevant. I should have stopped at the first part: what I meant is that several other improvements could be made in a more maintainable way once you have a clearer model that explicitly handles the "responses to be sent", rather than scattering that work across an unpredictable bunch of pooled threads.
> OOB thread pool fills with threads trying to send remote get responses
> ----------------------------------------------------------------------
>
> Key: ISPN-6799
> URL: https://issues.jboss.org/browse/ISPN-6799
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.0.0.Alpha2, 8.2.2.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 9.0.0.Alpha3
>
>
> Note: This is a scenario that happens in the stress tests, with 4 nodes in dist mode, and 200+ threads per node doing only reads. I have not been able to reproduce it locally, even with a much lower OOB thread pool size and UFC.max_credits.
> We don't use the {{NO_FC}} flag, so threads sending both requests and responses can block in UFC/MFC. Remote gets are executed directly on the OOB thread, so when we run out of credits for one node, the OOB pool can quickly become full with threads waiting to send a remote get response to that node.
> While we can't send responses to that node, we won't send credits to it, either, as credits are only sent *after* the message has been processed by the application. That means OOB threads on all nodes will start blocking, trying to send remote get responses to us.
> This is made worse by our staggering of remote gets. As remote get responses block, the stagger timeout kicks in and we send even more remote gets, making it even harder for the system to recover.
> UFC/MFC can send a {{CREDIT_REQUEST}} message to ask for more credits. The {{REPLENISH}} messages are handled on JGroups' internal thread pool, so they are not blocked. However, a {{CREDIT_REQUEST}} can be sent at most once every {{UFC.max_block_time}} ms, so it can't be relied on to provide enough credits. With the default settings, the throughput would be {{max_credits / max_block_time == 2mb / 0.5s == 4mb/s}}, which is very low compared to regular throughput.
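A back-of-the-envelope check of that figure, hard-coding the defaults quoted in the description:
{code:java}
// Back-of-the-envelope check of the 4mb/s figure above, hard-coding the
// defaults quoted in the description (2 MB max_credits, 0.5 s max_block_time).
public class CreditThroughput {
   public static void main(String[] args) {
      long maxCreditsBytes = 2_000_000;
      double maxBlockTimeSeconds = 0.5;
      double throughputMBps = (maxCreditsBytes / 1_000_000.0) / maxBlockTimeSeconds;
      System.out.println(throughputMBps + " MB/s"); // prints "4.0 MB/s"
   }
}
{code}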
[JBoss JIRA] (ISPN-6745) Locks are lost in pessimistic cache
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-6745?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo updated ISPN-6745:
------------------------------
Fix Version/s: 9.0.0.Alpha3
> Locks are lost in pessimistic cache
> -----------------------------------
>
> Key: ISPN-6745
> URL: https://issues.jboss.org/browse/ISPN-6745
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 7.2.3.Final
> Environment: JBoss DataGrid 6.5.0 (6.3.1.Final-redhat-1)
> 3 nodes in REPL_SYNC mode
> pessimistic locking
> read committed isolation
> Reporter: Eugene Scripnik
> Assignee: Pedro Ruivo
> Fix For: 9.0.0.Alpha3
>
> Attachments: InfinispanNodeFailureTest.java
>
>
> When you perform multiple TX write operations in one transaction (put, replace, lock, etc.) and one of the nodes goes down, there is a slight chance that some locks will be lost and acquired by another transaction before the current transaction ends.
> The client then ends up with two transactions holding the same lock on a pessimistic cache at the same time. Both transactions commit successfully at the end.
> I spent some time debugging the Infinispan code and found that PessimisticLockingInterceptor#releaseLocksOnFailureBeforePrepare releases all locks when an OutdatedTopologyException occurs on a remote node. But then StateTransferInterceptor#handleTxWriteCommand retries only the last command. This behavior produces an inconsistent state: all locks acquired before the last command are released, and any other transaction can acquire them.
> I am attaching a test which reproduces this problem.
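A minimal sketch of the client-side pattern that can hit this race, assuming a pessimistic transactional cache (the class name and keys are illustrative):
{code:java}
import javax.transaction.TransactionManager;
import org.infinispan.Cache;

// Minimal sketch of the client-side pattern that can hit this race, assuming
// a pessimistic transactional cache; class name and keys are illustrative.
public class LockLossSketch {
   static void run(Cache<String, String> cache) throws Exception {
      TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
      tm.begin();
      try {
         cache.put("k1", "v1"); // pessimistic lock acquired on k1
         cache.put("k2", "v2"); // if a node fails here, the resulting
                                // OutdatedTopologyException releases ALL locks,
                                // but only this last command is retried, so
                                // another tx can grab k1's lock in the meantime
         tm.commit();           // both transactions may commit successfully
      } catch (Exception e) {
         tm.rollback();
         throw e;
      }
   }
}
{code}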
[JBoss JIRA] (ISPN-6745) Locks are lost in pessimistic cache
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-6745?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo updated ISPN-6745:
------------------------------
Status: Open (was: New)
> Locks are lost in pessimistic cache
> -----------------------------------
>
> Key: ISPN-6745
> URL: https://issues.jboss.org/browse/ISPN-6745
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 7.2.3.Final
> Environment: JBoss DataGrid 6.5.0 (6.3.1.Final-redhat-1)
> 3 nodes in REPL_SYNC mode
> pessimistic locking
> read committed isolation
> Reporter: Eugene Scripnik
> Assignee: Pedro Ruivo
> Attachments: InfinispanNodeFailureTest.java
>
>
> When you perform multiple TX write operations in one transaction (put, replace, lock, etc.) and one of the nodes goes down, there is a slight chance that some locks will be lost and acquired by another transaction before the current transaction ends.
> The client then ends up with two transactions holding the same lock on a pessimistic cache at the same time. Both transactions commit successfully at the end.
> I spent some time debugging the Infinispan code and found that PessimisticLockingInterceptor#releaseLocksOnFailureBeforePrepare releases all locks when an OutdatedTopologyException occurs on a remote node. But then StateTransferInterceptor#handleTxWriteCommand retries only the last command. This behavior produces an inconsistent state: all locks acquired before the last command are released, and any other transaction can acquire them.
> I am attaching a test which reproduces this problem.