[JBoss JIRA] (ISPN-5975) Flush cache operation for server
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-5975?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-5975:
----------------------------------
Status: Resolved (was: Pull Request Sent)
Fix Version/s: 8.1.0.Final
(was: 9.0.0.Alpha2)
Resolution: Done
> Flush cache operation for server
> --------------------------------
>
> Key: ISPN-5975
> URL: https://issues.jboss.org/browse/ISPN-5975
> Project: Infinispan
> Issue Type: Feature Request
> Components: Server
> Reporter: Tristan Tarrant
> Assignee: Tristan Tarrant
> Fix For: 8.1.0.Final
>
>
> Implement a flush-cache operation, registered for every cache type, which: if passivation is enabled, evicts all entries to the store; otherwise, if a write-behind cache store is enabled, waits for the pending writes to complete; otherwise, does nothing.
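The dispatch described above can be sketched as follows. This is an illustrative outline only, not Infinispan server code; the names (`CacheFlusher`, `flushAction`, `FlushAction`) are hypothetical.

```java
// Hypothetical sketch of the flush-cache decision logic; names are
// illustrative and not part of the Infinispan API.
public class CacheFlusher {

    public enum FlushAction { EVICT_ALL_TO_STORE, AWAIT_WRITE_BEHIND, NO_OP }

    /**
     * Decides what a flush-cache operation should do for a given cache:
     * passivation-enabled caches evict everything to the store, write-behind
     * caches wait for queued writes to drain, and all others do nothing.
     */
    public static FlushAction flushAction(boolean passivationEnabled, boolean writeBehindEnabled) {
        if (passivationEnabled) {
            return FlushAction.EVICT_ALL_TO_STORE;
        }
        if (writeBehindEnabled) {
            return FlushAction.AWAIT_WRITE_BEHIND;
        }
        return FlushAction.NO_OP;
    }

    public static void main(String[] args) {
        System.out.println(flushAction(true, false));   // EVICT_ALL_TO_STORE
        System.out.println(flushAction(false, true));   // AWAIT_WRITE_BEHIND
        System.out.println(flushAction(false, false));  // NO_OP
    }
}
```

Note that passivation takes precedence when both flags are set, matching the "otherwise" ordering in the description.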
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-5883) Node can apply new topology after sending status response
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-5883?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-5883:
----------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> Node can apply new topology after sending status response
> ---------------------------------------------------------
>
> Key: ISPN-5883
> URL: https://issues.jboss.org/browse/ISPN-5883
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Test Suite - Core
> Affects Versions: 8.0.1.Final, 7.2.5.Final, 8.1.0.Alpha2, 9.0.0.Alpha1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Critical
> Labels: testsuite_stability
> Fix For: 9.0.0.Alpha2, 8.1.5.Final, 8.2.2.Final, 9.0.0.Final
>
>
> {{LocalTopologyManagerImpl}} is responsible for sending the {{ClusterTopologyControlCommand(GET_STATUS)}} response, and when it sends the response it doesn't check the current view id against the new coordinator's view id. If the old coordinator already sent a topology update before the merge, that topology update might be processed after sending the status response. The new coordinator will send a topology update with a topology id of {{max(status response topology ids) + 1}}. The node will then process the topology update from the old coordinator, but it will ignore the topology update from the new coordinator with the same topology id.
> This is especially common in the partition handling tests, e.g. the {{BasePessimisticTxPartitionAndMergeTest}} subclasses, because the test "injects" the JGroups view on each node serially, and the 4th node often sends the status response before it gets the new view.
> {noformat}
> 22:16:37,776 DEBUG (remote-thread-NodeD-p26-t6:[]) [LocalTopologyManagerImpl] Sending cluster status response for view 10
> // Topology from NodeC
> 22:16:37,778 DEBUG (transport-thread-NodeD-p28-t2:[]) [LocalTopologyManagerImpl] Updating local topology for cache pes-cache: CacheTopology{id=8, rebalanceId=3, currentCH=DefaultConsistentHash{ns=60, owners = (4)[NodeA-37631: 15+15, NodeB-47846: 15+15, NodeC-46467: 15+15, NodeD-30486: 15+15]}, pendingCH=null, unionCH=null, actualMembers=[NodeC-46467, NodeD-30486]}
> // Later, topology from NodeA
> 22:16:37,827 DEBUG (transport-thread-NodeD-p28-t1:[]) [LocalTopologyManagerImpl] Ignoring late consistent hash update for cache pes-cache, current topology is 8: CacheTopology{id=8, rebalanceId=3, currentCH=DefaultConsistentHash{ns=60, owners = (4)[NodeA-37631: 15+15, NodeB-47846: 15+15, NodeC-46467: 15+15, NodeD-30486: 15+15]}, pendingCH=null, unionCH=null, actualMembers=[NodeA-37631, NodeB-47846, NodeC-46467, NodeD-30486]}
> {noformat}
> As a solution, we can delay sending the status response until we have the same view as the coordinator (or a later one). We already check that the sender is the current coordinator before applying a topology update, so this guarantees that we don't apply other topology updates from the old coordinator. Since the status request is only sent after the new view has been installed, this will not introduce any delay in the vast majority of cases.
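The proposed fix amounts to a guarded wait on the local view id before answering GET_STATUS. A minimal sketch, assuming a simple monitor rather than Infinispan's actual `LocalTopologyManagerImpl` internals (all names here are hypothetical):

```java
import java.util.concurrent.TimeUnit;

// Illustrative sketch (not Infinispan code): block the GET_STATUS response
// until this node has installed the coordinator's view, so no stale topology
// update from the old coordinator can arrive after the response is sent.
public class StatusResponder {
    private final Object viewLock = new Object();
    private int currentViewId;

    /** Called when JGroups installs a new view on this node. */
    public void viewInstalled(int viewId) {
        synchronized (viewLock) {
            currentViewId = Math.max(currentViewId, viewId);
            viewLock.notifyAll();
        }
    }

    /**
     * Waits until the local view id reaches the coordinator's view id (or a
     * later one). Returns false if the view does not arrive within the
     * timeout, in which case the caller should not answer the status request.
     */
    public boolean awaitView(int coordinatorViewId, long timeoutMillis) {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
        synchronized (viewLock) {
            while (currentViewId < coordinatorViewId) {
                long remaining = TimeUnit.NANOSECONDS.toMillis(deadline - System.nanoTime());
                if (remaining <= 0) {
                    return false;
                }
                try {
                    viewLock.wait(remaining);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
            return true;
        }
    }

    public static void main(String[] args) {
        StatusResponder responder = new StatusResponder();
        responder.viewInstalled(10);
        // View 10 is installed, so a coordinator at view 10 gets an answer;
        // a request tagged with view 11 times out instead.
        System.out.println(responder.awaitView(10, 100));
        System.out.println(responder.awaitView(11, 50));
    }
}
```

Because the status request normally arrives after the view, the wait loop is a no-op in the common case and only kicks in for the race described above.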
--
[JBoss JIRA] (ISPN-6612) Rename internal Hibernate Search directory provider slot
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-6612?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-6612:
----------------------------------
Status: Resolved (was: Pull Request Sent)
Fix Version/s: 9.0.0.Final
Resolution: Done
> Rename internal Hibernate Search directory provider slot
> --------------------------------------------------------
>
> Key: ISPN-6612
> URL: https://issues.jboss.org/browse/ISPN-6612
> Project: Infinispan
> Issue Type: Feature Request
> Components: WildFly modules
> Affects Versions: 9.0.0.Alpha1
> Reporter: Gustavo Fernandes
> Assignee: Gustavo Fernandes
> Fix For: 9.0.0.Alpha2, 9.0.0.Final
>
>
> We distribute two modules for the Hibernate Search directory provider: one self-contained (with Infinispan dependencies) to be used together with the Hibernate Search bundled in WildFly, and another for usage with Infinispan Query.
> Currently both share the same naming convention, {{for-hibernate-search-x.y}}, which could lead to clashes.
> We should rename the Hibernate Search slot of the modules distributed with the zip, so that they are not inadvertently used outside Infinispan and do not override either the Hibernate Search inside WildFly or the modules distributed by Hibernate Search itself.
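For context, a JBoss Modules slot is declared in the module descriptor, so disambiguating the two distributions comes down to giving each a distinct `slot` value. A hypothetical sketch of a renamed descriptor (the actual slot name chosen by the Infinispan build may differ):

```xml
<!-- Hypothetical module.xml: a dedicated slot keeps this copy of the
     directory provider from shadowing WildFly's Hibernate Search modules. -->
<module xmlns="urn:jboss:module:1.3"
        name="org.hibernate.search.directory-provider"
        slot="for-infinispan-query-9.0">
    <dependencies>
        <module name="org.infinispan" slot="ispn-9.0"/>
    </dependencies>
</module>
```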
--
[JBoss JIRA] (ISPN-6550) Remote iterator does not work in compatibility mode
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-6550?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-6550:
----------------------------------
Status: Resolved (was: Pull Request Sent)
Fix Version/s: 9.0.0.Alpha2
8.2.2.Final
9.0.0.Final
Resolution: Done
> Remote iterator does not work in compatibility mode
> ---------------------------------------------------
>
> Key: ISPN-6550
> URL: https://issues.jboss.org/browse/ISPN-6550
> Project: Infinispan
> Issue Type: Bug
> Components: Server
> Reporter: Gustavo Fernandes
> Assignee: Gustavo Fernandes
> Fix For: 9.0.0.Alpha2, 8.2.2.Final, 9.0.0.Final
>
>
> There are two issues when trying to iterate caches configured with compatibility:
> 1) Since the client-side key tracker calculates segments based on byte[] keys, while routing on the server is done via Object, there is a mismatch between the segments calculated on the server and on the client, causing NPEs that prevent the data from the socket from being consumed correctly, sometimes resulting in:
> {code}
> org.infinispan.client.hotrod.exceptions.InvalidResponseException:: Invalid magic number. Expected 0xa1 and received 0x0
> at org.infinispan.client.hotrod.impl.protocol.Codec20.readMagic(Codec20.java:313)
> at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:115)
> at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56)
> at org.infinispan.client.hotrod.impl.operations.IterationEndOperation.execute(IterationEndOperation.java:34)
> at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.close(RemoteCloseableIterator.java:64)
> {code}
> 2) When the cache configuration has a different name than the cache, {{ClassCastException}} errors are thrown:
> {code}
> org.infinispan.client.hotrod.exceptions.HotRodClientException: java.lang.ClassCastException: java.lang.Integer cannot be cast to [B
> at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) ~[infinispan-client-hotrod-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT]
> {code}
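The root cause of issue 1 can be illustrated with a toy model. The hash functions below are deliberately simplified stand-ins (Infinispan actually uses MurmurHash3 and its consistent-hash segment math); the point is only that hashing the marshalled byte[] form and hashing the deserialized Object generally land in different segments:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Toy illustration (not Infinispan's real hashing) of the mismatch: the
// client segments keys by their serialized byte[] form, while a
// compatibility-mode server routes by the deserialized Object, so the two
// sides can compute different segments for the same logical key.
public class SegmentMismatch {
    static final int NUM_SEGMENTS = 60;

    /** Client view: segment derived from the marshalled byte[] key. */
    static int clientSegment(byte[] marshalledKey) {
        return Math.floorMod(Arrays.hashCode(marshalledKey), NUM_SEGMENTS);
    }

    /** Server view in compatibility mode: segment derived from the Object key. */
    static int serverSegment(Object key) {
        return Math.floorMod(key.hashCode(), NUM_SEGMENTS);
    }

    public static void main(String[] args) {
        Integer key = 42;
        byte[] marshalled = ByteBuffer.allocate(4).putInt(key).array();
        System.out.println("client segment: " + clientSegment(marshalled));
        System.out.println("server segment: " + serverSegment(key));
        // The two segment ids generally differ, which is what confused the
        // client-side key tracker during remote iteration.
    }
}
```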
--