[JBoss JIRA] (ISPN-9592) Lockdown query internals
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-9592?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-9592:
----------------------------------
Sprint: Sprint 10.0.0.Alpha1, Sprint 10.0.0.Alpha2, Sprint 10.0.0.Beta1 (was: Sprint 10.0.0.Alpha1, Sprint 10.0.0.Alpha2)
> Lockdown query internals
> ------------------------
>
> Key: ISPN-9592
> URL: https://issues.jboss.org/browse/ISPN-9592
> Project: Infinispan
> Issue Type: Task
> Reporter: Adrian Nistor
> Assignee: Adrian Nistor
> Priority: Major
>
> Many things in the query implementation are left public even though they are very prone to causing insidious bugs if accessed externally. They are not proper extension points and should be made package-local or provided strictly via SearchManagerImplementor (a sketch follows the list below):
> * Move the SearchManager.purge(Class) method to SearchManagerImplementor
> * DefaultSearchWorkCreator should be moved to org.infinispan.query.backend and made package-local
> * TransactionalEventTransactionContext is unused and can be removed; its non-tx counterpart (NoTransactionContext) no longer has to be public
> * All public methods in QueryInterceptor must be reviewed and made package-local ASAP
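> A minimal sketch of the intended direction for the first item, assuming SearchManagerImplementor extends the public SearchManager interface (method placement only, not the actual code):
> {code:java}
> // org.infinispan.query.spi.SearchManagerImplementor (internal SPI)
> public interface SearchManagerImplementor extends SearchManager {
>
>    // Moved here from the public SearchManager interface, so only
>    // internal callers can purge the index for a given entity type.
>    void purge(Class<?> entityType);
> }
> {code}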
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-9620) Rolling Upgrade Marshaller Changes
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-9620?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-9620:
----------------------------------
Sprint: Sprint 10.0.0.Alpha1, Sprint 10.0.0.Alpha2, Sprint 10.0.0.Beta1 (was: Sprint 10.0.0.Alpha1, Sprint 10.0.0.Alpha2)
> Rolling Upgrade Marshaller Changes
> ----------------------------------
>
> Key: ISPN-9620
> URL: https://issues.jboss.org/browse/ISPN-9620
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core, Loaders and Stores
> Affects Versions: 9.4.0.Final
> Reporter: Ryan Emerson
> Assignee: Ryan Emerson
> Priority: Major
> Fix For: 10.0.0.Beta1
>
>
> To allow compatibility between Infinispan versions, we need a marshalling implementation at both the cluster layer (internal node-to-node communication) and the persistence layer that is strictly defined yet allows for future changes. This is necessary to facilitate both rolling and start/stop upgrades. Protocol Buffers should be used as the wire/storage format, with ProtoStream providing the implementation.
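> A minimal sketch of what a ProtoStream-marshallable type could look like, assuming the annotation-based API; the class and field names here are illustrative:
> {code:java}
> import org.infinispan.protostream.annotations.ProtoField;
>
> // Hypothetical internal type. The Protocol Buffers schema derived from
> // these annotations gives the wire/storage format a strict definition.
> public class ExampleEvent {
>
>    @ProtoField(number = 1)
>    public String cacheName;
>
>    @ProtoField(number = 2)
>    public Long timestamp;
>
>    // Later versions can append fields with higher numbers without
>    // breaking older readers, which is what enables rolling upgrades.
> }
> {code}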
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-9699) Cluster member owning no data
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-9699?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-9699:
----------------------------------
Sprint: Sprint 10.0.0.Alpha1, Sprint 10.0.0.Alpha2, Sprint 10.0.0.Beta1 (was: Sprint 10.0.0.Alpha1, Sprint 10.0.0.Alpha2)
> Cluster member owning no data
> -----------------------------
>
> Key: ISPN-9699
> URL: https://issues.jboss.org/browse/ISPN-9699
> Project: Infinispan
> Issue Type: Feature Request
> Components: Core
> Affects Versions: 9.4.1.Final
> Reporter: Thomas Segismont
> Assignee: Katia Aresti
> Priority: Major
> Fix For: 10.0.0.Beta1, 9.4.5.Final
>
>
> Currently, you can set {{capacity-factor}} to zero on a cache if you don't want a node to own any of its segments (see the sketch after this list).
> If you want a node to own no data at all, you could set this property on all declared caches. But this wouldn't work for internal caches (locks, counters).
> It would be nice to have a global switch.
> This would be useful when your app needs to be a cluster member for discovery/membership but is deployed mostly for processing. When such nodes are added to or removed from the cluster:
> * there would be no data loss
> * the cluster would return to a healthy state faster, as no data would be moved around.
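> For reference, a minimal sketch of the per-cache setting as applied programmatically today; a global switch would make repeating this on every cache unnecessary:
> {code:java}
> import org.infinispan.configuration.cache.CacheMode;
> import org.infinispan.configuration.cache.ConfigurationBuilder;
>
> ConfigurationBuilder builder = new ConfigurationBuilder();
> // A capacity factor of 0 means this node is assigned no segments for
> // this cache: it holds no data but still participates in the cluster.
> builder.clustering().cacheMode(CacheMode.DIST_SYNC)
>        .hash().capacityFactor(0f);
> {code}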
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-9703) CI should test that distribution build works
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-9703?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-9703:
----------------------------------
Sprint: Sprint 10.0.0.Alpha1, Sprint 10.0.0.Alpha2, Sprint 10.0.0.Beta1 (was: Sprint 10.0.0.Alpha1, Sprint 10.0.0.Alpha2)
> CI should test that distribution build works
> --------------------------------------------
>
> Key: ISPN-9703
> URL: https://issues.jboss.org/browse/ISPN-9703
> Project: Infinispan
> Issue Type: Enhancement
> Affects Versions: 9.4.1.Final
> Reporter: Ryan Emerson
> Assignee: Ryan Emerson
> Priority: Major
> Fix For: 10.0.0.Beta1
>
>
> Currently the CI does not test whether the {{-Pdistribution}} profile builds successfully. Since we want master to always be releasable, we should test this on PRs.
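> For example, the PR build could run something along these lines (exact goals and flags to be decided):
> {noformat}
> mvn clean install -Pdistribution -DskipTests
> {noformat}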
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-9726) Document max clause property on boolean query
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-9726?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-9726:
----------------------------------
Sprint: Sprint 10.0.0.Alpha1, Sprint 10.0.0.Alpha2, Sprint 10.0.0.Beta1 (was: Sprint 10.0.0.Alpha1, Sprint 10.0.0.Alpha2)
> Document max clause property on boolean query
> ---------------------------------------------
>
> Key: ISPN-9726
> URL: https://issues.jboss.org/browse/ISPN-9726
> Project: Infinispan
> Issue Type: Task
> Components: Embedded Querying, Remote Querying
> Affects Versions: 10.0.0.Alpha1
> Reporter: Katia Aresti
> Assignee: Adrian Nistor
> Priority: Major
> Fix For: 10.0.0.Beta1, 9.4.5.Final
>
>
> Document the ability to override {{org.apache.lucene.search.BooleanQuery.maxClauseCount}} using the {{infinispan.query.lucene.max-boolean-clauses}} JVM system property.
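> For example, to raise the limit above Lucene's default of 1024 (the value below is illustrative; the property must be set before the query module initializes):
> {noformat}
> java -Dinfinispan.query.lucene.max-boolean-clauses=4096 ...
> {noformat}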
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-9801) ClusterTopologyManagerImpl hangs when restarting a node with FORK
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-9801?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-9801:
----------------------------------
Sprint: Sprint 10.0.0.Alpha2, Sprint 10.0.0.Beta1 (was: Sprint 10.0.0.Alpha2)
> ClusterTopologyManagerImpl hangs when restarting a node with FORK
> -----------------------------------------------------------------
>
> Key: ISPN-9801
> URL: https://issues.jboss.org/browse/ISPN-9801
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 10.0.0.Alpha1, 9.4.3.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Major
> Fix For: 10.0.0.Beta1, 9.4.5.Final
>
>
> When a server is restarted with {{kill -9}} or similar, both the old node and the new one can be in the JGroups view for a while. Normally this shouldn't be a problem, but sometimes the new node doesn't receive the {{HeartBeatCommand}} and the coordinator cannot process any new view updates.
> {noformat}
> 14:29:19,981 INFO (jgroups-12,Test-NodeA:[]) [CLUSTER] ISPN000094: Received new cluster view for channel FORKISPN: [Test-NodeA|4] (5) [Test-NodeA, Test-NodeB, Test-NodeC, Test-NodeD, Test-NodeE]
> 14:29:19,982 TRACE (transport-thread-Test-NodeA-p4-t14:[ViewHandling]) [ClusterTopologyManagerImpl] Updating cluster members for all the caches. New list is [Test-NodeA, Test-NodeB, Test-NodeC, Test-NodeD, Test-NodeE]
> 14:29:19,982 TRACE (transport-thread-Test-NodeA-p4-t14:[ViewHandling]) [JGroupsTransport] Test-NodeA sending request 9 to all: org.infinispan.topology.HeartBeatCommand@1163beb6
> 14:29:19,986 TRACE (jgroups-6,Test-NodeA:[]) [JGroupsTransport] Test-NodeA received response for request 9 from Test-NodeC: SuccessfulResponse(null)
> 14:29:19,987 TRACE (jgroups-9,Test-NodeA:[]) [JGroupsTransport] Test-NodeA received response for request 9 from Test-NodeD: SuccessfulResponse(null)
> 14:29:20,032 TRACE (jgroups-6,Test-NodeE:[]) [TCP_NIO2] Test-NodeE: received message batch of 1 messages from Test-NodeA
> 14:29:20,032 TRACE (jgroups-6,Test-NodeE:[]) [NAKACK2] Test-NodeE: message Test-NodeA::39 was added to queue (not yet server)
> 14:29:20,054 TRACE (jgroups-6,Test-NodeE:[]) [NAKACK2] Test-NodeE: received Test-NodeA#38
> 14:29:20,054 TRACE (jgroups-6,Test-NodeE:[]) [NAKACK2] Test-NodeE: delivering Test-NodeA#38
> # not actually delivered :)
> 14:29:20,054 TRACE (jgroups-6,Test-NodeE:[]) [MFC] Test-NodeA used 5 credits, 1999995 remaining
> 14:29:20,149 INFO (ForkThread-1,ForkChannelRestartTest:[]) [CLUSTER] ISPN000094: Received new cluster view for channel FORKISPN: [Test-NodeA|4] (5) [Test-NodeA, Test-NodeB, Test-NodeC, Test-NodeD, Test-NodeE]
> 14:29:21,119 DEBUG (testng-Test-1:[]) [ForkChannelRestartTest] Stopping channel Test-NodeB
> 14:29:23,319 INFO (VERIFY_SUSPECT.TimerThread-32,Test-NodeA:[]) [CLUSTER] ISPN000094: Received new cluster view for channel FORKISPN: [Test-NodeA|5] (4) [Test-NodeA, Test-NodeC, Test-NodeD, Test-NodeE]
> 14:29:23,320 TRACE (remote-thread-Test-NodeA-p2-t1:[]) [MultiTargetRequest] Target Test-NodeB of request 9 left the cluster view
> {noformat}
> So far, it looks like it's a JGroups bug similar to JGRP-2294, but we need to investigate further.
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-7624) Change CacheLoaderInterceptor to ignore in memory for certain operations when passivation is not enabled
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-7624?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-7624:
----------------------------------
Sprint: Sprint 9.4.0.CR1, Sprint 9.4.0.CR3, Sprint 10.0.0.Alpha1, Sprint 10.0.0.Alpha2, Sprint 9.4.0.Final, Sprint 10.0.0.Alpha0, Sprint 10.0.0.Beta1 (was: Sprint 9.4.0.CR1, Sprint 9.4.0.CR3, Sprint 10.0.0.Alpha1, Sprint 10.0.0.Alpha2, Sprint 9.4.0.Final, Sprint 10.0.0.Alpha0)
> Change CacheLoaderInterceptor to ignore in memory for certain operations when passivation is not enabled
> --------------------------------------------------------------------------------------------------------
>
> Key: ISPN-7624
> URL: https://issues.jboss.org/browse/ISPN-7624
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Reporter: William Burns
> Assignee: William Burns
> Priority: Major
>
> CacheLoaderInterceptor currently reads all in-memory entries and then the store for bulk operations. When passivation is not used, this can be changed to load only from the store, since without passivation the store is written through and thus contains every entry, including those also in memory.
> The only issue is if {{Flag.SKIP_CACHE_STORE}} was used on a write, since such entries never reach the store. In that case we could keep a boolean in the interceptor so that, once this flag has ever been used, we always read both. Very few users use this flag, so the change should improve performance a bit and entries should be seen more consistently.
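> A rough, self-contained sketch of that idea (illustrative names, not the actual interceptor code):
> {code:java}
> // Hypothetical distillation of the proposed CacheLoaderInterceptor logic.
> class BulkReadDecision {
>    private final boolean passivationEnabled;
>    // Flipped permanently once SKIP_CACHE_STORE is seen on a write,
>    // because such entries exist only in memory.
>    private volatile boolean skipCacheStoreUsed;
>
>    BulkReadDecision(boolean passivationEnabled) {
>       this.passivationEnabled = passivationEnabled;
>    }
>
>    void onWrite(boolean skipCacheStore) {
>       if (skipCacheStore) {
>          skipCacheStoreUsed = true;
>       }
>    }
>
>    boolean mustAlsoReadMemory() {
>       // With passivation, memory and store are disjoint, so both are needed.
>       // Without it the store is a superset of memory, unless some write
>       // ever bypassed the store.
>       return passivationEnabled || skipCacheStoreUsed;
>    }
> }
> {code}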
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-9257) ClusterTopologyManagerTest.testAbruptLeaveAfterGetStatus2[SCATTERED_SYNC, tx=false] random failures
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-9257?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-9257:
----------------------------------
Sprint: Sprint 9.3.0.Final, Sprint 9.4.0.Beta1, Sprint 9.4.0.CR1, Sprint 9.4.0.CR3, Sprint 10.0.0.Alpha1, Sprint 10.0.0.Alpha2, Sprint 9.4.0.Final, Sprint 10.0.0.Alpha0, Sprint 10.0.0.Beta1 (was: Sprint 9.3.0.Final, Sprint 9.4.0.Beta1, Sprint 9.4.0.CR1, Sprint 9.4.0.CR3, Sprint 10.0.0.Alpha1, Sprint 10.0.0.Alpha2, Sprint 9.4.0.Final, Sprint 10.0.0.Alpha0)
> ClusterTopologyManagerTest.testAbruptLeaveAfterGetStatus2[SCATTERED_SYNC, tx=false] random failures
> ---------------------------------------------------------------------------------------------------
>
> Key: ISPN-9257
> URL: https://issues.jboss.org/browse/ISPN-9257
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 9.3.0.CR1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Minor
> Labels: testsuite_stability
> Fix For: 9.4.5.Final
>
> Attachments: ISPN-8731_wrong_topology_2018-05-18_ClusterTopologyManagerTest-infinispan-core.log.gz
>
>
> The test kills the coordinator NodeA, then, while NodeB is trying to recover the caches, it also kills NodeC. It expects NodeB to start a rebalance with 2 nodes, which the test discards, in order to verify that NodeB can process the 1-node rebalance first:
> {noformat}
> 00:34:06,582 DEBUG (transport-thread-test-NodeB-p12-t6:[testCache]) [ClusterTopologyManagerTest] Discarding rebalance command CacheTopology{id=8, phase=TRANSITORY, rebalanceId=5, currentCH=ScatteredConsistentHash{ns=256, rebalanced=false, owners = (2)[test-NodeB-49590: 85, test-NodeC-58596: 85]}, pendingCH=ScatteredConsistentHash{ns=256, rebalanced=true, owners = (2)[test-NodeB-49590: 128, test-NodeC-58596: 128]}, unionCH=null, actualMembers=[test-NodeB-49590, test-NodeC-58596], persistentUUIDs=[6b96414e-15d8-4350-aa3c-4fb4fc34e888, d47dc4a9-2a95-4bb1-a83b-bb8a27c9999f]}
> 00:34:06,609 DEBUG (transport-thread-test-NodeB-p12-t2:[Topology-testCache]) [LocalTopologyManagerImpl] Updating local topology for cache testCache: CacheTopology{id=9, phase=TRANSITORY, rebalanceId=5, currentCH=ScatteredConsistentHash{ns=256, rebalanced=false, owners = (1)[test-NodeB-49590: 85]}, pendingCH=ScatteredConsistentHash{ns=256, rebalanced=false, owners = (1)[test-NodeB-49590: 128]}, unionCH=null, actualMembers=[test-NodeB-49590], persistentUUIDs=[6b96414e-15d8-4350-aa3c-4fb4fc34e888]}
> 00:34:06,609 DEBUG (transport-thread-test-NodeB-p12-t2:[Topology-testCache]) [LocalTopologyManagerImpl] Installing fake cache topology CacheTopology{id=8, phase=NO_REBALANCE, rebalanceId=4, currentCH=ScatteredConsistentHash{ns=256, rebalanced=false, owners = (1)[test-NodeB-49590: 85]}, pendingCH=null, unionCH=null, actualMembers=[test-NodeB-49590], persistentUUIDs=[6b96414e-15d8-4350-aa3c-4fb4fc34e888]} for cache testCache
> {noformat}
> Unfortunately {{PreferAvailabilityStrategy}} has changed a bit and the rebalance IDs don't always match the test's expectations, so the 1-node rebalance is discarded instead:
> {noformat}
> 09:46:10,530 DEBUG (transport-thread-Test-NodeB-p54539-t3:[testCache]) [Test] Discarding rebalance command CacheTopology{id=9, phase=TRANSITORY, rebalanceId=5, currentCH=ScatteredConsistentHash{ns=256, rebalanced=false, owners = (1)[Test-NodeB-62039: 85]}, pendingCH=ScatteredConsistentHash{ns=256, rebalanced=true, owners = (1)[Test-NodeB-62039: 256]}, unionCH=null, actualMembers=[Test-NodeB-62039], persistentUUIDs=[0ed7be74-4485-489b-baee-28c461c9e5de]}
> {noformat}
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-5997) JMX operation ClusterCacheStats.resetStatistics() not working
by Tristan Tarrant (Jira)
[ https://issues.jboss.org/browse/ISPN-5997?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-5997:
----------------------------------
Sprint: Sprint 9.4.0.Beta1, Sprint 9.4.0.CR1, Sprint 9.4.0.CR3, Sprint 10.0.0.Alpha1, Sprint 10.0.0.Alpha2, Sprint 9.4.0.Final, Sprint 10.0.0.Alpha0, Sprint 10.0.0.Beta1 (was: Sprint 9.4.0.Beta1, Sprint 9.4.0.CR1, Sprint 9.4.0.CR3, Sprint 10.0.0.Alpha1, Sprint 10.0.0.Alpha2, Sprint 9.4.0.Final, Sprint 10.0.0.Alpha0)
> JMX operation ClusterCacheStats.resetStatistics() not working
> -------------------------------------------------------------
>
> Key: ISPN-5997
> URL: https://issues.jboss.org/browse/ISPN-5997
> Project: Infinispan
> Issue Type: Bug
> Components: Server
> Affects Versions: 8.1.0.CR1
> Reporter: Jiří Holuša
> Assignee: Vladimir Blagojevic
> Priority: Major
>
> Environment: 2 servers started in domain mode (ISPN 8.1.0.CR1).
> I have a clustered cache and have inserted/read some entries (just to create some statistics).
> When connected via JMX to one of the servers (e.g. through JConsole, see https://issues.jboss.org/browse/ISPN-5983 ), I execute the resetStatistics() operation on the ClusterCacheStats MBean (object name: jboss.datagrid-infinispan:type=Cache,name="<cache-name>",manager="<cache-container-name>",component=ClusterCacheStats) and nothing happens; the cluster statistics are not reset.
> To actually reset the statistics, I have to connect to each server in the domain individually and call resetStatistics() on its Statistics MBean, which resets only that server's contribution to the cluster stats.
> I would expect ClusterCacheStats.resetStatistics() to do exactly the same thing for me: call Statistics.resetStatistics() on each server.
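> A client-side sketch of that expected invocation (host, port and object-name values are placeholders, and the service URL form depends on how the server exposes JMX):
> {code:java}
> import javax.management.MBeanServerConnection;
> import javax.management.ObjectName;
> import javax.management.remote.JMXConnector;
> import javax.management.remote.JMXConnectorFactory;
> import javax.management.remote.JMXServiceURL;
>
> JMXServiceURL url =
>       new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
> try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
>    MBeanServerConnection conn = connector.getMBeanServerConnection();
>    ObjectName clusterStats = new ObjectName(
>          "jboss.datagrid-infinispan:type=Cache,name=\"default\","
>          + "manager=\"clustered\",component=ClusterCacheStats");
>    // Expected behaviour: this fans out to Statistics.resetStatistics()
>    // on every server in the cluster.
>    conn.invoke(clusterStats, "resetStatistics", new Object[0], new String[0]);
> }
> {code}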
--
This message was sent by Atlassian Jira
(v7.12.1#712002)