[JBoss JIRA] (ISPN-5042) Remote gets caused by writes could be replicated only to the primary owner
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-5042?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-5042:
-------------------------------
Fix Version/s: 8.2.0.CR1
(was: 8.2.0.Beta1)
> Remote gets caused by writes could be replicated only to the primary owner
> --------------------------------------------------------------------------
>
> Key: ISPN-5042
> URL: https://issues.jboss.org/browse/ISPN-5042
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core, State Transfer
> Affects Versions: 7.1.0.Alpha1
> Reporter: Dan Berindei
> Priority: Minor
> Labels: 7.0
> Fix For: 8.2.0.CR1
>
>
> For write operations that need the previous value, a write-CH-only owner that doesn't have the key locally will attempt to retrieve it from the read-CH owners.
> Sending the remote get command to all the previous owners creates extra load on the cluster during state transfer, so it should be more efficient to send the remote get only to the primary owner. Even though the latency of some write operations will be higher, the average latency should improve.
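>
> A rough sketch of the idea in Java (not the actual patch); {{ConsistentHashView}} and its methods are hypothetical stand-ins for the real consistent-hash API:
> {code}
> import java.util.Collections;
> import java.util.List;
>
> // Hypothetical stand-in for the read CH visible to the interceptor doing the remote get.
> interface ConsistentHashView {
>    List<String> locateOwners(Object key);
>    String locatePrimaryOwner(Object key);
> }
>
> class RemoteGetTargetSelector {
>    /** Current behaviour: target every read-CH owner, multiplying load during state transfer. */
>    static List<String> allOwners(ConsistentHashView readCH, Object key) {
>       return readCH.locateOwners(key);
>    }
>
>    /** Proposed behaviour: target only the primary owner; other owners are contacted only on failure. */
>    static List<String> primaryOwnerOnly(ConsistentHashView readCH, Object key) {
>       return Collections.singletonList(readCH.locatePrimaryOwner(key));
>    }
> }
> {code}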
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-5083) Hot Rod decoder should use async Cache operations
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-5083?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-5083:
-------------------------------
Fix Version/s: 8.2.0.CR1
(was: 8.2.0.Beta1)
> Hot Rod decoder should use async Cache operations
> -------------------------------------------------
>
> Key: ISPN-5083
> URL: https://issues.jboss.org/browse/ISPN-5083
> Project: Infinispan
> Issue Type: Enhancement
> Components: Remote Protocols
> Reporter: Galder Zamarreño
> Assignee: Gustavo Fernandes
> Fix For: 8.2.0.CR1
>
>
> The Hot Rod decoder is currently tying up Netty threads by calling synchronous Infinispan cache operations. Instead, the decoder should call the async operations, convert the NotifyingFutures to Scala Futures, and write the reply once the result is received. This should improve performance, especially under heavy load.
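>
> A sketch of the general pattern in Java (the actual decoder is Scala), assuming Infinispan 8's CompletableFuture-based async API and a Netty ChannelHandlerContext; the {{encode*}} helpers are hypothetical placeholders for the Hot Rod response encoding:
> {code}
> import io.netty.channel.ChannelHandlerContext;
> import java.util.concurrent.CompletableFuture;
> import org.infinispan.AdvancedCache;
>
> class AsyncGetHandler {
>    void handleGet(ChannelHandlerContext ctx, AdvancedCache<byte[], byte[]> cache, byte[] key) {
>       // Start the cache operation without blocking the Netty event-loop thread...
>       CompletableFuture<byte[]> future = cache.getAsync(key);
>       // ...and write the reply from the completion callback instead.
>       future.whenComplete((value, throwable) -> {
>          if (throwable != null) {
>             ctx.writeAndFlush(encodeError(throwable));
>          } else {
>             ctx.writeAndFlush(encodeGetResponse(value));
>          }
>       });
>    }
>
>    // Hypothetical helpers standing in for the Hot Rod wire-format encoding.
>    private Object encodeGetResponse(byte[] value) { return value; }
>    private Object encodeError(Throwable t) { return t.getMessage(); }
> }
> {code}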
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-5075) org.infinispan.IllegalLifecycleStateException in query tests
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-5075?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-5075:
-------------------------------
Fix Version/s: 8.2.0.CR1
(was: 8.2.0.Beta1)
> org.infinispan.IllegalLifecycleStateException in query tests
> ------------------------------------------------------------
>
> Key: ISPN-5075
> URL: https://issues.jboss.org/browse/ISPN-5075
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Query
> Affects Versions: 7.1.0.Alpha1
> Reporter: Gustavo Fernandes
> Assignee: Gustavo Fernandes
> Fix For: 8.2.0.CR1
>
>
> Many query tests log an error after running:
> {code}
> Exception in thread "Hibernate Search: async deletion of index segments-1" org.infinispan.IllegalLifecycleStateException: ISPN000323: Cache 'LuceneIndexesMetadata' is in 'TERMINATED' state and so it does not accept new invocations. Either restart it or recreate the cache container.
> at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:89)
> at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:71)
> at org.infinispan.commands.AbstractVisitor.visitGetKeyValueCommand(AbstractVisitor.java:77)
> at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:44)
> at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:333)
> at org.infinispan.cache.impl.CacheImpl.get(CacheImpl.java:422)
> at org.infinispan.cache.impl.DecoratedCache.get(DecoratedCache.java:427)
> at org.infinispan.lucene.impl.FileListOperations.getFileList(FileListOperations.java:160)
> at org.infinispan.lucene.impl.FileListOperations.deleteFileName(FileListOperations.java:129)
> at org.infinispan.lucene.impl.DirectoryImplementor.deleteFile(DirectoryImplementor.java:64)
> at org.infinispan.lucene.impl.DirectoryLuceneV4$DeleteTask.run(DirectoryLuceneV4.java:195)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> These errors do not cause the tests to fail; they are due to the recently implemented async deletion of index segments. We need to check whether the executors are flushed correctly when the cache manager stops.
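>
> A hypothetical sketch (not the actual DirectoryLuceneV4 code) of the shutdown ordering to verify: the async-deletion executor should be drained before the Lucene index caches are stopped.
> {code}
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.TimeUnit;
>
> class AsyncDeleteLifecycle {
>    private final ExecutorService deleteExecutor;
>
>    AsyncDeleteLifecycle(ExecutorService deleteExecutor) {
>       this.deleteExecutor = deleteExecutor;
>    }
>
>    /** Must run before the LuceneIndexes* caches are stopped. */
>    void stop() throws InterruptedException {
>       deleteExecutor.shutdown();                        // stop accepting new delete tasks
>       if (!deleteExecutor.awaitTermination(30, TimeUnit.SECONDS)) {
>          deleteExecutor.shutdownNow();                  // give up on any pending deletions
>       }
>    }
> }
> {code}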
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-5179) Add distributed execution and map/reduce job statistics
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-5179?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-5179:
-------------------------------
Fix Version/s: 8.2.0.CR1
(was: 8.2.0.Beta1)
> Add distributed execution and map/reduce job statistics
> --------------------------------------------------------
>
> Key: ISPN-5179
> URL: https://issues.jboss.org/browse/ISPN-5179
> Project: Infinispan
> Issue Type: Feature Request
> Components: JMX, reporting and management
> Reporter: Vladimir Blagojevic
> Assignee: Vladimir Blagojevic
> Fix For: 8.2.0.CR1
>
>
> We should add DMR/JMX statistics for running distributed execution jobs as well as map/reduce jobs. The statistics should also include overview/total system statistics for previously executed jobs; we might store the statistics of individual executed jobs in some internal cache. However, the primary objective is to calculate and maintain dist.exec and map/reduce job statistics for the Infinispan admin console.
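>
> A purely illustrative sketch of what such a JMX surface could look like; the interface and attribute names are hypothetical, not an existing Infinispan API:
> {code}
> // Hypothetical MXBean exposing the statistics described above.
> public interface DistributedJobStatisticsMXBean {
>    int getRunningDistExecJobs();        // distributed execution jobs currently running
>    int getRunningMapReduceJobs();       // map/reduce jobs currently running
>    long getCompletedJobs();             // total jobs completed since startup
>    long getFailedJobs();                // total jobs that finished with an error
>    long getAverageJobDurationMillis();  // aggregate over previously executed jobs
> }
> {code}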
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-5151) DistributedSharedCacheTwoNodesMapReduceTest.testInvokeMapReduceOnAllKeys random failures
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-5151?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-5151:
-------------------------------
Fix Version/s: 8.2.0.CR1
(was: 8.2.0.Beta1)
> DistributedSharedCacheTwoNodesMapReduceTest.testInvokeMapReduceOnAllKeys random failures
> ----------------------------------------------------------------------------------------
>
> Key: ISPN-5151
> URL: https://issues.jboss.org/browse/ISPN-5151
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Test Suite - Core
> Affects Versions: 7.0.3.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Blocker
> Labels: testsuite_stability
> Fix For: 8.2.0.CR1
>
>
> The method {{invokeMapReduce()}} doesn't really invoke the M/R task; it only creates it, and execution only starts when the test method calls {{task.execute()}} explicitly. It shouldn't try to check the contents of the shared intermediary cache, because the intermediary cache may not exist yet - and it may accidentally create it with the wrong configuration. I get this error when I run only the {{testInvokeMapReduceOnAllKeys}} method:
> {noformat}
> 09:55:37,632 TRACE (testng-DistributedSharedCacheTwoNodesMapReduceTest:) [DefaultCacheManager] About to wire and start cache __tmpMapReduce
> 09:55:37,646 DEBUG (testng-DistributedSharedCacheTwoNodesMapReduceTest:) [MapReduceTask] Invoking CreateCacheCommand{cacheManager=null, cacheNameToCreate='__tmpMapReduce', cacheConfigurationName='__tmpMapReduce', start=true', size=2} across members [DistributedSharedCacheTwoNodesMapReduceTest-NodeA-19271, DistributedSharedCacheTwoNodesMapReduceTest-NodeB-10341]
> 10:32:56,324 ERROR (testng-DistributedSharedCacheTwoNodesMapReduceTest:) [UnitTestTestNGListener] Test testInvokeMapReduceOnAllKeys(org.infinispan.distexec.mapreduce.DistributedSharedCacheTwoNodesMapReduceTest) failed.
> org.infinispan.distexec.mapreduce.MapReduceException: Map phase failed
> at org.infinispan.distexec.mapreduce.MapReduceTask.executeMapPhase(MapReduceTask.java:607)
> at org.infinispan.distexec.mapreduce.MapReduceTask.executeHelper(MapReduceTask.java:473)
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:414)
> at org.infinispan.distexec.mapreduce.BaseWordCountMapReduceTest.testInvokeMapReduceOnAllKeys(BaseWordCountMapReduceTest.java:162)
> Caused by: org.infinispan.commons.CacheException: java.lang.NullPointerException
> at org.infinispan.distexec.mapreduce.MapReduceManagerImpl.mapAndCombineForDistributedReduction(MapReduceManagerImpl.java:105)
> at org.infinispan.distexec.mapreduce.MapReduceTask$MapTaskPart.invokeMapCombineLocally(MapReduceTask.java:1174)
> at org.infinispan.distexec.mapreduce.MapReduceTask$MapTaskPart.access$300(MapReduceTask.java:1101)
> at org.infinispan.distexec.mapreduce.MapReduceTask$MapTaskPart$1.call(MapReduceTask.java:1123)
> at org.infinispan.distexec.mapreduce.MapReduceTask$MapTaskPart$1.call(MapReduceTask.java:1119)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at org.infinispan.distexec.mapreduce.MapReduceManagerImpl.mapKeysToNodes(MapReduceManagerImpl.java:363)
> at org.infinispan.distexec.mapreduce.MapReduceManagerImpl.migrateIntermediateKeysAndValues(MapReduceManagerImpl.java:327)
> at org.infinispan.distexec.mapreduce.MapReduceManagerImpl.mapAndCombine(MapReduceManagerImpl.java:260)
> at org.infinispan.distexec.mapreduce.MapReduceManagerImpl.mapAndCombineForDistributedReduction(MapReduceManagerImpl.java:103)
> ... 10 more
> {noformat}
> Even if the check is moved after the M/R task is finished, it still wouldn't be correct, because the task only cleans up the shared intermediary cache asynchronously. So it needs to use {{eventually()}} to avoid errors like this:
> {noformat}
> 04:06:32,260 ERROR (testng-DistributedSharedCacheTwoNodesMapReduceTest:) [UnitTestTestNGListener] Test testInvokeMapReduceOnAllKeys(org.infinispan.distexec.mapreduce.DistributedSharedCacheTwoNodesMapReduceTest) failed.
> java.lang.AssertionError: Shared cache __tmpMapReduce is not empty. It has 5 keys/values: [ImmortalCacheEntry{key=IntermediateCompositeKey [taskId=88948a8b-2a8a-4c13-bc45-4dc3a9f6b0fb, key=is], value=org.infinispan.distexec.mapreduce.MapReduceManagerImpl$DeltaAwareList@21ae10d3}, ImmortalCacheEntry{key=IntermediateCompositeKey [taskId=88948a8b-2a8a-4c13-bc45-4dc3a9f6b0fb, key=JUDCon], value=org.infinispan.distexec.mapreduce.MapReduceManagerImpl$DeltaAwareList@108d6b51}, ImmortalCacheEntry{key=IntermediateCompositeKey [taskId=88948a8b-2a8a-4c13-bc45-4dc3a9f6b0fb, key=cool], value=org.infinispan.distexec.mapreduce.MapReduceManagerImpl$DeltaAwareList@77949e8f}, ImmortalCacheEntry{key=IntermediateCompositeKey [taskId=88948a8b-2a8a-4c13-bc45-4dc3a9f6b0fb, key=Infinispan], value=org.infinispan.distexec.mapreduce.MapReduceManagerImpl$DeltaAwareList@712a6071}, ImmortalCacheEntry{key=IntermediateCompositeKey [taskId=88948a8b-2a8a-4c13-bc45-4dc3a9f6b0fb, key=community], value=org.infinispan.distexec.mapreduce.MapReduceManagerImpl$DeltaAwareList@291bdf76}] expected:<0> but was:<5>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:555)
> at org.infinispan.distexec.mapreduce.DistributedSharedCacheTwoNodesMapReduceTest.invokeMapReduce(DistributedSharedCacheTwoNodesMapReduceTest.java:44)
> at org.infinispan.distexec.mapreduce.BaseWordCountMapReduceTest.testInvokeMapReduceOnAllKeys(BaseWordCountMapReduceTest.java:161)
> 04:06:32,579 TRACE (transport-thread-NodeA-p29577-t6:) [InvocationContextInterceptor] Invoked with command RemoveCommand{key=IntermediateCompositeKey [taskId=eb7da48a-5922-4671-9037-4077e209744c, key=RedHat], value=null, flags=null, valueMatcher=MATCH_ALWAYS} and InvocationContext [org.infinispan.context.SingleKeyNonTxInvocationContext@c0bbc61]
> {noformat}
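>
> A sketch of the polling-style check the test needs (in practice the test suite's own {{eventually()}} helper would be used; the timeout here is arbitrary):
> {code}
> import org.infinispan.Cache;
>
> final class MapReduceTestChecks {
>    /** Poll until the async cleanup of the shared __tmpMapReduce cache has finished. */
>    static void assertIntermediateCacheEventuallyEmpty(Cache<?, ?> tmpCache) throws InterruptedException {
>       long deadline = System.currentTimeMillis() + 30_000;
>       while (System.currentTimeMillis() < deadline) {
>          if (tmpCache.isEmpty()) {
>             return;
>          }
>          Thread.sleep(100);
>       }
>       throw new AssertionError("Shared cache still has " + tmpCache.size() + " keys/values");
>    }
> }
> {code}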
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-5093) Granularity of remote event listener implementations doing the same job
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-5093?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-5093:
-------------------------------
Fix Version/s: 8.2.0.CR1
(was: 8.2.0.Beta1)
> Granularity of remote event listener implementations doing the same job
> -----------------------------------------------------------------------
>
> Key: ISPN-5093
> URL: https://issues.jboss.org/browse/ISPN-5093
> Project: Infinispan
> Issue Type: Enhancement
> Components: Remote Protocols
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Fix For: 8.2.0.CR1
>
>
> Currently, if N clients add the same listener to a cache to do the same job, e.g. keeping a near cache consistent, N server-side cluster listeners are created, each potentially installed on a different node. If one of those nodes fails, all clients that had a listener registered on that node will have to find a different node for it.
> The downside of this approach is that there are as many cluster listeners installed as there are client listeners added (or clients with near caching enabled), which might not be very efficient. If a node goes down, all clients that have cluster listeners there need to fail over to some other node.
> The advantage is the simplicity of deciding where to add the listener and where to fail over to.
> For this type of scenario, an alternative setup might be worth exploring:
> If all these client-side listeners are interested in exactly the same events, and the client ID were exposed via the RemoteCache API, a server-side cluster listener multiplexing between all these clients could potentially be built. In other words, instead of having N clients register N cluster listeners, the first client would register the cluster listener with a client listener ID, and subsequent registrations with the same client listener ID would simply add their connections to the existing cluster listener implementation.
> To maximise the efficiency of this solution, all clients (even those running in different JVMs) using the same client listener ID should agree on the node to add the listener to. For a distributed cache, hashing on the cache name would work. For replicated caches, since there's no hashing available, the first node of the view could be used.
> Since the logic to be executed server-side differs between the first registration of the client listener and subsequent ones, synchronization would be needed to make sure that the first invocation creates the cluster listener and the others simply add their channel to it (see the sketch below).
> Failover is a bit trickier too, because if the node with the cluster listener goes down, all the clients have to fail over, which again exposes first-registration-vs-the-rest logic.
> The advantages of this approach are the reduced number of cluster listeners and the potential efficiency of having a single cluster listener implementation on the server side.
> The disadvantages come from the server-side logic to add and fail over a cluster listener, which needs to take into account whether the listener is already present, and from the clients needing specific routing so that listeners with the same ID are added on the same node.
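>
> An illustrative sketch of the registration logic described above; all names are hypothetical, and Channel merely stands in for whatever per-client handle the server keeps:
> {code}
> import io.netty.channel.Channel;
> import java.util.Set;
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.ConcurrentMap;
>
> class MultiplexedClusterListenerRegistry {
>    /** One shared server-side cluster listener per client listener ID. */
>    static final class SharedClusterListener {
>       final Set<Channel> subscribers = ConcurrentHashMap.newKeySet();
>       // ...the actual cluster listener installation would happen here...
>    }
>
>    private final ConcurrentMap<String, SharedClusterListener> listeners = new ConcurrentHashMap<>();
>
>    /** The first caller for a given ID creates the cluster listener; later callers only attach. */
>    void register(String clientListenerId, Channel clientChannel) {
>       listeners.computeIfAbsent(clientListenerId, id -> new SharedClusterListener())
>                .subscribers.add(clientChannel);
>    }
>
>    /** When a client disconnects, drop its channel; remove the listener once nobody is left. */
>    void unregister(String clientListenerId, Channel clientChannel) {
>       listeners.computeIfPresent(clientListenerId, (id, shared) -> {
>          shared.subscribers.remove(clientChannel);
>          return shared.subscribers.isEmpty() ? null : shared;
>       });
>    }
> }
> {code}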
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-5210) Support writing multiple modifications to cache stores in batch when using Transactions
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-5210?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-5210:
-------------------------------
Fix Version/s: 8.2.0.CR1
(was: 8.2.0.Beta1)
> Support writing multiple modifications to cache stores in batch when using Transactions
> ---------------------------------------------------------------------------------------
>
> Key: ISPN-5210
> URL: https://issues.jboss.org/browse/ISPN-5210
> Project: Infinispan
> Issue Type: Feature Request
> Components: Loaders and Stores
> Affects Versions: 6.0.2.Final, 7.1.0.Final
> Reporter: Richard Lucas
> Fix For: 8.2.0.CR1
>
>
> Currently writes to a cache store are performed individually for each modification to the cache.
> While this makes sense when using a non-tx cache it would be beneficial to support writing multiple modifications to a cache store in a single call when using a Tx cache.
> Currently all writes are performed in the commit phase by looping through the modifications in the Tx and writing each one in turn to the store.
> Instead, it would be useful to pass all modifications to the store in a single call (see the sketch after the list below), which would allow:
> a) Taking advantage of the batch-update support in underlying stores (JDBC, MongoDB, DynamoDB) for more efficient write-through.
> b) Allowing store implementations the chance to clean up if one or more updates in the batch fail (this is especially useful if the store does not support transactions and rollbacks, as the store implementation can at least try to clean up any partial updates it has performed, something which is not currently possible).
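>
> A hypothetical SPI sketch; Infinispan's actual CacheWriter interface is not claimed to look like this. It only illustrates handing a whole transaction's modifications to the store in one call so the store can use native batching and clean up on partial failure:
> {code}
> import java.util.List;
>
> // Hypothetical batch-capable store contract, not an existing Infinispan interface.
> interface BatchCacheWriter<K, V> {
>    final class Modification<K, V> {
>       final K key;
>       final V value;   // null means "remove"
>       Modification(K key, V value) { this.key = key; this.value = value; }
>    }
>
>    /**
>     * Apply all modifications of a committed transaction in one call, e.g. as a
>     * JDBC batch, and clean up partial updates if any of them fail.
>     */
>    void writeBatch(List<Modification<K, V>> modifications);
> }
> {code}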
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)