[JBoss JIRA] (ISPN-4144) Cleanly shutdown intermediate M/R cache
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-4144?page=com.atlassian.jira.plugin.... ]
William Burns resolved ISPN-4144.
---------------------------------
Resolution: Out of Date
Map/Reduce has been removed
> Cleanly shutdown intermediate M/R cache
> ---------------------------------------
>
> Key: ISPN-4144
> URL: https://issues.jboss.org/browse/ISPN-4144
> Project: Infinispan
> Issue Type: Feature Request
> Components: Core, Distributed Execution and Map/Reduce
> Reporter: Vladimir Blagojevic
> Assignee: Dan Berindei
>
> For intermediate per-task caches we simply remove the cache from the cache manager. This operation is cluster-wide, but it still triggers rebalancing, which in turn may produce log messages that raise false alarms for admins. Investigate whether calling clear before removing the cache from the cache manager, and/or disabling rebalancing for the intermediate cache, leads to a "cleaner" cache shutdown.
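The proposed shutdown ordering can be sketched with toy stand-ins (none of the types below are the real Infinispan API; this only illustrates the idea that clearing and disabling rebalancing before removal leaves nothing to rebalance):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the proposed shutdown ordering; these classes are
// hypothetical stand-ins, not the real Infinispan API.
public class IntermediateCacheShutdown {

    public static class ToyCache {
        public final Map<String, String> data = new HashMap<>();
        public void clear() { data.clear(); }
        public boolean isEmpty() { return data.isEmpty(); }
    }

    public static class ToyCacheManager {
        public final Map<String, ToyCache> caches = new HashMap<>();
        public final List<String> events = new ArrayList<>();
        public boolean rebalancingEnabled = true;

        public ToyCache getCache(String name) {
            return caches.computeIfAbsent(name, n -> new ToyCache());
        }

        public void removeCache(String name) {
            ToyCache cache = caches.remove(name);
            // In this toy model, removing a non-empty cache while
            // rebalancing is enabled produces the noisy event.
            if (rebalancingEnabled && cache != null && !cache.isEmpty()) {
                events.add("REBALANCE");
            }
            events.add("REMOVED " + name);
        }
    }

    /** Proposed ordering: disable rebalancing, clear, then remove. */
    public static List<String> cleanShutdown(ToyCacheManager cm, String name) {
        cm.rebalancingEnabled = false;
        cm.getCache(name).clear();
        cm.removeCache(name);
        return cm.events;
    }

    /** Populates an intermediate cache and shuts it down cleanly. */
    public static List<String> demo() {
        ToyCacheManager cm = new ToyCacheManager();
        cm.getCache("intermediate").data.put("k", "v");
        return cleanShutdown(cm, "intermediate");
    }

    public static void main(String[] args) {
        System.out.println(demo()); // no REBALANCE event is recorded
    }
}
```

With the clear-first ordering, the removal finds an empty cache and the noisy rebalance event never appears, which is the "cleaner" shutdown the issue asks to investigate.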
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-4020) Write a test for map/reduce with passivation enabled
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-4020?page=com.atlassian.jira.plugin.... ]
William Burns resolved ISPN-4020.
---------------------------------
Resolution: Out of Date
Map/Reduce has been removed
> Write a test for map/reduce with passivation enabled
> ----------------------------------------------------
>
> Key: ISPN-4020
> URL: https://issues.jboss.org/browse/ISPN-4020
> Project: Infinispan
> Issue Type: Task
> Components: Distributed Execution and Map/Reduce, Test Suite - Core
> Reporter: Dan Berindei
> Assignee: Vladimir Blagojevic
>
> If an entry is activated (i.e. removed from the store) during the mapping phase, between the iteration of the data container and the iteration of the cache store, it may be possible for that entry to be skipped. We need to write a test to confirm whether this is the case.
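The suspected race can be reproduced with a toy model, using plain collections as stand-ins for the data container and the cache store: an entry activated after the in-memory pass but before the store pass appears in neither iteration.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy reproduction of the suspected skip: plain sets stand in for the
// data container (memory) and the cache store (disk).
public class PassivationRaceSketch {

    public static List<String> mapPhaseWithActivationInBetween() {
        Set<String> container = new HashSet<>();         // in-memory entries
        Set<String> store = new HashSet<>(Set.of("k1")); // passivated entries

        List<String> seen = new ArrayList<>();

        // Phase 1: iterate the data container ("k1" is still passivated).
        seen.addAll(container);

        // Concurrent activation: "k1" moves from the store into memory.
        store.remove("k1");
        container.add("k1");

        // Phase 2: iterate the cache store; "k1" is no longer there.
        seen.addAll(store);

        return seen; // "k1" was visited by neither phase
    }

    public static void main(String[] args) {
        System.out.println(mapPhaseWithActivationInBetween()); // prints []
    }
}
```

A real test would have to interleave activation between the two iterations with a blocking interceptor or similar hook; the sketch only shows why the window exists.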
[JBoss JIRA] (ISPN-4151) [SimpleTwoNodesMapReduceTest, TopologyAwareTwoNodesMapReduceTest].testInvokeMapWithReduceExceptionPhaseInRemoteExecution fails randomly on Windows and Solaris with JDK7
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-4151?page=com.atlassian.jira.plugin.... ]
William Burns resolved ISPN-4151.
---------------------------------
Resolution: Out of Date
Map/Reduce has been removed
> [SimpleTwoNodesMapReduceTest,TopologyAwareTwoNodesMapReduceTest].testInvokeMapWithReduceExceptionPhaseInRemoteExecution fails randomly on Windows and Solaris with JDK7
> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-4151
> URL: https://issues.jboss.org/browse/ISPN-4151
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Execution and Map/Reduce
> Affects Versions: 6.0.2.Final
> Environment: Windows 2012 && JDK7
> Reporter: Vitalii Chepeliuk
> Labels: testsuite_stability
>
> org.testng.TestException:
> Method SimpleTwoNodesMapReduceTest.testInvokeMapWithReduceExceptionPhaseInRemoteExecution()[pri:0, instance:org.infinispan.distexec.mapreduce.SimpleTwoNodesMapReduceTest@7c1943b6] should have thrown an exception of class org.infinispan.commons.CacheException
> at org.testng.internal.Invoker.handleInvocationResults(Invoker.java:1518)
> at org.testng.internal.Invoker.invokeMethod(Invoker.java:764)
> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:907)
> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1237)
> at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> at org.testng.TestRunner.privateRun(TestRunner.java:767)
> at org.testng.TestRunner.run(TestRunner.java:617)
> at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
> at org.testng.SuiteRunner.access$000(SuiteRunner.java:37)
> at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:368)
> at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> * Jenkins
> ** Windows
> https://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/JDG/view/FUNC/job/e...
> https://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/JDG/view/FUNC/job/e...
> ** Solaris
> https://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/JDG/view/FUNC/job/e...
[JBoss JIRA] (ISPN-4334) MapReduceTaskLifecycleService shouldn't keep the list of found lifecycle implementations
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-4334?page=com.atlassian.jira.plugin.... ]
William Burns resolved ISPN-4334.
---------------------------------
Resolution: Out of Date
Map/Reduce has been removed
> MapReduceTaskLifecycleService shouldn't keep the list of found lifecycle implementations
> ----------------------------------------------------------------------------------------
>
> Key: ISPN-4334
> URL: https://issues.jboss.org/browse/ISPN-4334
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Execution and Map/Reduce, Remote Querying
> Reporter: Jakub Markos
>
> The issue is that this class https://github.com/infinispan/infinispan/blob/master/core/src/main/java/o... searches for the lifecycle classes using the current thread's classloader and then caches the result, so the returned list of implementations depends only on the first thread that creates the singleton instance of the service.
> You can replicate the bug from this branch:
> https://github.com/jmarkos/infinispan/tree/queries
> running
> {code}
> mvn clean verify -Dmaven.test.failure.ignore=true -DfailIfNoTests=false -U -Psuite.others -Dtest=RemoteQueryKeySetTest,ManualIndexingTest
> {code}
> from the server/integration/testsuite directory, results in an exception:
> {code}javax.management.MBeanException
> at org.infinispan.jmx.ResourceDMBean.invoke(ResourceDMBean.java:271)
> at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
> at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
> at org.jboss.as.jmx.PluggableMBeanServerImpl$TcclMBeanServer.invoke(PluggableMBeanServerImpl.java:527)
> at org.jboss.as.jmx.PluggableMBeanServerImpl.invoke(PluggableMBeanServerImpl.java:263)
> at org.jboss.remotingjmx.protocol.v2.ServerProxy$InvokeHandler.handle(ServerProxy.java:915)
> at org.jboss.remotingjmx.protocol.v2.ServerCommon$MessageReciever$1.run(ServerCommon.java:152)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:722)
> Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:601)
> at org.infinispan.jmx.ResourceDMBean.invoke(ResourceDMBean.java:269)
> ... 9 more
> Caused by: org.infinispan.commons.CacheException: java.util.concurrent.ExecutionException: java.lang.NullPointerException
> at org.infinispan.distexec.mapreduce.MapReduceTask.executeHelper(MapReduceTask.java:517)
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:427)
> at org.infinispan.query.impl.massindex.MapReduceMassIndexer.start(MapReduceMassIndexer.java:25)
> ... 14 more
> Caused by: java.util.concurrent.ExecutionException: java.lang.NullPointerException
> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
> at java.util.concurrent.FutureTask.get(FutureTask.java:111)
> at org.infinispan.distexec.mapreduce.MapReduceTask$TaskPart.get(MapReduceTask.java:1059)
> at org.infinispan.distexec.mapreduce.MapReduceTask.executeMapPhaseWithLocalReduction(MapReduceTask.java:677)
> at org.infinispan.distexec.mapreduce.MapReduceTask.executeHelper(MapReduceTask.java:510)
> ... 16 more
> Caused by: java.lang.NullPointerException
> at org.infinispan.query.impl.massindex.IndexingMapper.map(IndexingMapper.java:38)
> at org.infinispan.distexec.mapreduce.MapReduceManagerImpl$2.apply(MapReduceManagerImpl.java:207)
> at org.infinispan.distexec.mapreduce.MapReduceManagerImpl$2.apply(MapReduceManagerImpl.java:202)
> at org.infinispan.container.DefaultDataContainer$1.apply(DefaultDataContainer.java:393)
> at org.infinispan.container.DefaultDataContainer$1.apply(DefaultDataContainer.java:389)
> at org.infinispan.commons.util.concurrent.jdk8backported.ConcurrentParallelHashMapV8$1.apply(ConcurrentParallelHashMapV8.java:48)
> at org.infinispan.commons.util.concurrent.jdk8backported.EquivalentConcurrentHashMapV8$ForEachMappingTask.compute(EquivalentConcurrentHashMapV8.java:4894)
> at org.infinispan.commons.util.concurrent.jdk8backported.CountedCompleter.exec(CountedCompleter.java:681)
> at org.infinispan.commons.util.concurrent.jdk8backported.ForkJoinTask.doExec(ForkJoinTask.java:264)
> at org.infinispan.commons.util.concurrent.jdk8backported.ForkJoinTask.doInvoke(ForkJoinTask.java:360)
> at org.infinispan.commons.util.concurrent.jdk8backported.ForkJoinTask.invoke(ForkJoinTask.java:692)
> at org.infinispan.commons.util.concurrent.jdk8backported.EquivalentConcurrentHashMapV8.forEach(EquivalentConcurrentHashMapV8.java:3592)
> at org.infinispan.commons.util.concurrent.jdk8backported.ConcurrentParallelHashMapV8.forEach(ConcurrentParallelHashMapV8.java:44)
> at org.infinispan.container.DefaultDataContainer.executeTask(DefaultDataContainer.java:389)
> at org.infinispan.distexec.mapreduce.MapReduceManagerImpl.map(MapReduceManagerImpl.java:202)
> at org.infinispan.distexec.mapreduce.MapReduceManagerImpl.mapAndCombineForLocalReduction(MapReduceManagerImpl.java:87)
> at org.infinispan.distexec.mapreduce.MapReduceTask$MapTaskPart.invokeMapCombineLocallyForLocalReduction(MapReduceTask.java:1173)
> at org.infinispan.distexec.mapreduce.MapReduceTask$MapTaskPart.access$300(MapReduceTask.java:1112)
> at org.infinispan.distexec.mapreduce.MapReduceTask$MapTaskPart$2.call(MapReduceTask.java:1144)
> at org.infinispan.distexec.mapreduce.MapReduceTask$MapTaskPart$2.call(MapReduceTask.java:1140)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:722)
> at org.jboss.threads.JBossThread.run(JBossThread.java:122)
> {code}
> If you instead use -Dtest=RemoteQueryKeySetTest,ManualIndexingggggggggggggTest (to change the order of execution; JUnit probably orders tests by name length), it passes, because MapReduceTaskLifecycleService is created from a thread whose classloader sees the query module and can therefore load this class
> https://github.com/infinispan/infinispan/blob/master/query/src/main/java/... which properly initializes IndexingMapper and avoids the NPE.
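The order-dependence described above boils down to a singleton caching whatever the first caller's loader could see. A minimal stand-alone sketch (hypothetical names, not the real MapReduceTaskLifecycleService):

```java
import java.util.List;
import java.util.function.Supplier;

// Minimal sketch of the bug: a lazily-initialized singleton caches the
// service implementations visible to whichever thread touches it first.
// The Supplier stands in for a ServiceLoader lookup against the
// caller's context classloader.
public class LifecycleRegistrySketch {

    private static List<String> cached;

    /** Buggy lookup: the first caller's view is cached for everyone. */
    public static synchronized List<String> buggyLookup(Supplier<List<String>> loaderView) {
        if (cached == null) {
            cached = loaderView.get();
        }
        return cached;
    }

    /** Fixed lookup: resolve against the caller's view every time. */
    public static List<String> fixedLookup(Supplier<List<String>> loaderView) {
        return loaderView.get();
    }

    public static void main(String[] args) {
        // First caller's classloader cannot see the query module...
        List<String> first = buggyLookup(() -> List.of("core"));
        // ...so a later caller that could see it still gets the stale list.
        List<String> second = buggyLookup(() -> List.of("core", "query"));
        System.out.println(first + " " + second);
    }
}
```

The fix the title suggests is essentially the second method: don't keep the discovered list, resolve it per lookup (or merge results per classloader).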
> Thanks to Adrian Nistor for his help.
[JBoss JIRA] (ISPN-4318) Infinispan should collect statistics for M/R tasks
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-4318?page=com.atlassian.jira.plugin.... ]
William Burns commented on ISPN-4318:
-------------------------------------
We may want to port this over to distributed streams if possible.
> Infinispan should collect statistics for M/R tasks
> --------------------------------------------------
>
> Key: ISPN-4318
> URL: https://issues.jboss.org/browse/ISPN-4318
> Project: Infinispan
> Issue Type: Feature Request
> Components: Distributed Execution and Map/Reduce
> Affects Versions: 7.0.0.Alpha4
> Reporter: Alan Field
>
> Map/Reduce tasks should collect statistics during task execution that can be returned to the user to help them determine the optimal settings for the task. Here are some thoughts on useful statistics:
> - Final status: completed, failed, cancelled, etc.
> - Duration: either overall, per node, or per phase (map, reduce, combine, collate)
> - Number of nodes participating in the task
> - Keys in the intermediate cache
> - Keys in the result map
> Node-specific statistics:
> - Status of node: completed, failed, cancelled, etc.
> - Number of keys processed
> - Max size of collector
> Here are the built-in counters that are reported by Hadoop:
> https://www.inkling.com/read/hadoop-definitive-guide-tom-white-3rd/chapte...
>
>
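The statistics proposed above could be carried in a simple per-task holder. The sketch below is one hypothetical shape for such a holder; it is not an existing Infinispan class:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical holder for the statistics proposed above; not an
// existing Infinispan API.
public class MapReduceTaskStats {

    public enum Status { RUNNING, COMPLETED, FAILED, CANCELLED }

    public volatile Status finalStatus = Status.RUNNING;
    public volatile long durationMillis;
    public volatile int participatingNodes;
    public final LongAdder intermediateKeys = new LongAdder();
    public final LongAdder resultKeys = new LongAdder();

    /** Per-node breakdown, keyed by node name. */
    public static class NodeStats {
        public volatile Status status = Status.RUNNING;
        public final LongAdder keysProcessed = new LongAdder();
        public volatile long maxCollectorSize;
    }

    public final Map<String, NodeStats> perNode = new ConcurrentHashMap<>();

    public NodeStats node(String name) {
        return perNode.computeIfAbsent(name, n -> new NodeStats());
    }
}
```

LongAdder counters keep the per-node updates cheap under concurrent mapper threads; per-phase durations could be added as another small nested record.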
[JBoss JIRA] (ISPN-4173) SuspectExceptions thrown during MapReduceTask while removing the intermediate cache
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-4173?page=com.atlassian.jira.plugin.... ]
William Burns resolved ISPN-4173.
---------------------------------
Resolution: Out of Date
Map/Reduce has been removed
> SuspectExceptions thrown during MapReduceTask while removing the intermediate cache
> -----------------------------------------------------------------------------------
>
> Key: ISPN-4173
> URL: https://issues.jboss.org/browse/ISPN-4173
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Execution and Map/Reduce
> Affects Versions: 6.0.1.Final, 7.0.0.Alpha2
> Reporter: Alan Field
>
> While running the Map/Reduce benchmark with multiple value sizes, I have been seeing this error in the logs from Infinispan 6 and 7:
> {noformat}
> 16:13:51,325 ERROR [org.radargun.stages.MapReduceStage] (pool-1-thread-1) executeMapReduceTask() returned an exception
> org.infinispan.commons.CacheException: Error removing cache
> at org.infinispan.manager.DefaultCacheManager.removeCache(DefaultCacheManager.java:471)
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:353)
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:634)
> at org.radargun.cachewrappers.InfinispanMapReduce.executeMapReduceTask(InfinispanMapReduce.java:91)
> at org.radargun.cachewrappers.Infinispan51Wrapper.executeMapReduceTask(Infinispan51Wrapper.java:198)
> at org.radargun.stages.MapReduceStage.executeMapReduceTask(MapReduceStage.java:212)
> at org.radargun.stages.MapReduceStage.executeOnSlave(MapReduceStage.java:164)
> at org.radargun.Slave$2.run(Slave.java:103)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> Caused by: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from edg-perf06-46939, see cause for remote stack trace
> at org.infinispan.remoting.transport.AbstractTransport.checkResponse(AbstractTransport.java:41)
> at org.infinispan.remoting.transport.AbstractTransport.parseResponseAndAddToResponseList(AbstractTransport.java:66)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:547)
> at org.infinispan.manager.DefaultCacheManager.removeCache(DefaultCacheManager.java:463)
> ... 12 more
> Caused by: org.infinispan.commons.CacheException: Problems invoking command.
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:221)
> at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:460)
> at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:377)
> at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:247)
> at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:665)
> at org.jgroups.JChannel.up(JChannel.java:708)
> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1015)
> at org.jgroups.protocols.RSVP.up(RSVP.java:187)
> at org.jgroups.protocols.FRAG2.up(FRAG2.java:165)
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:370)
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:381)
> at org.jgroups.protocols.pbcast.GMS.up(GMS.java:1010)
> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:234)
> at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:390)
> at org.jgroups.protocols.pbcast.NAKACK2.handleMessage(NAKACK2.java:774)
> at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:570)
> at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:147)
> at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:184)
> at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:301)
> at org.jgroups.protocols.MERGE2.up(MERGE2.java:209)
> at org.jgroups.protocols.Discovery.up(Discovery.java:379)
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1370)
> at org.jgroups.protocols.TP$MyHandler.run(TP.java:1556)
> ... 3 more
> Caused by: java.lang.NullPointerException
> at org.infinispan.commands.RemoteCommandsFactory.fromStream(RemoteCommandsFactory.java:195)
> at org.infinispan.marshall.exts.ReplicableCommandExternalizer.fromStream(ReplicableCommandExternalizer.java:106)
> at org.infinispan.marshall.exts.CacheRpcCommandExternalizer.readObject(CacheRpcCommandExternalizer.java:147)
> at org.infinispan.marshall.exts.CacheRpcCommandExternalizer.readObject(CacheRpcCommandExternalizer.java:59)
> at org.infinispan.marshall.core.ExternalizerTable$ExternalizerAdapter.readObject(ExternalizerTable.java:389)
> at org.infinispan.marshall.core.ExternalizerTable.readObject(ExternalizerTable.java:205)
> at org.infinispan.marshall.core.JBossMarshaller$ExternalizerTableProxy.readObject(JBossMarshaller.java:152)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:355)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:213)
> at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:37)
> at org.infinispan.commons.marshall.jboss.AbstractJBossMarshaller.objectFromObjectStream(AbstractJBossMarshaller.java:136)
> at org.infinispan.marshall.core.VersionAwareMarshaller.objectFromByteBuffer(VersionAwareMarshaller.java:101)
> at org.infinispan.commons.marshall.AbstractDelegatingMarshaller.objectFromByteBuffer(AbstractDelegatingMarshaller.java:80)
> at org.infinispan.remoting.transport.jgroups.MarshallerAdapter.objectFromBuffer(MarshallerAdapter.java:28)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:206)
> ... 25 more
> {noformat}
> These exceptions occur during the execution of the MapReduceTask. I have also seen SuspectExceptions in these logs. This could be related to shutting down the intermediate cache (https://issues.jboss.org/browse/ISPN-4144), so I will check again once that is addressed. If so, the fix for ISPN-4144 will need to be applied to both versions.
[JBoss JIRA] (ISPN-4998) Infinispan 7 MapReduce giving inconsistent results
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-4998?page=com.atlassian.jira.plugin.... ]
William Burns resolved ISPN-4998.
---------------------------------
Resolution: Out of Date
Map/Reduce has been removed
> Infinispan 7 MapReduce giving inconsistent results
> --------------------------------------------------
>
> Key: ISPN-4998
> URL: https://issues.jboss.org/browse/ISPN-4998
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Execution and Map/Reduce
> Affects Versions: 7.0.0.Final
> Reporter: Vijay Bhoomireddy
> Priority: Blocker
>
> Hi,
> We are using Infinispan Map/Reduce to process datasets that are spread across the nodes in the cluster. We are seeing some surprising results with Infinispan 7. To provide context, our input data contains 10965 records. When Map/Reduce from Infinispan 6.0.2 is used, both the Mapper and the Reducer see all 10965 records and process them. However, with the same code and input data, Infinispan 7 M/R gives different results for different invocations of the program: one run produced 10902 records as output, and the next run produced 10872. Results are inconsistent across invocations.
> Not sure if this is an issue with version 7 of the framework. Any help would be greatly appreciated.
[JBoss JIRA] (ISPN-5108) Indexes (aka Filters) for MapReduce
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-5108?page=com.atlassian.jira.plugin.... ]
William Burns closed ISPN-5108.
-------------------------------
Resolution: Out of Date
Map/Reduce has been removed
> Indexes (aka Filters) for MapReduce
> -----------------------------------
>
> Key: ISPN-5108
> URL: https://issues.jboss.org/browse/ISPN-5108
> Project: Infinispan
> Issue Type: Feature Request
> Components: Distributed Execution and Map/Reduce
> Reporter: Guillermo GARCIA OCHOA
>
> We are using Infinispan in a multi-tenant environment. In our first implementation we had a single group of caches for all the tenants, and each object had a _'tenantId'_ (which we also used as part of each object's key).
> We had to abandon this approach due to the poor performance of our MapReduce tasks. The main problem is that each task iterates over every element in the "shared" cache when we only need to process the elements of tenant 'X'.
> To fix this issue we were forced to create caches for each tenant, and now the MapReduce performance is as good as it gets (Infinispan 7 improved performance a lot).
> The problem with our current approach is that it does not scale out: for each tenant we create several caches, which leads to the creation of thread pools and other resources on each node.
> *PROPOSED SOLUTION*
> Allow creating 'indexes' (aka 'filters') that point to a group of elements in the cache. The idea is to 'register' some indexes/filters on each cache and update them on every put. Then, when executing a MapReduce task, we can indicate the 'index'/'filter' so that the task executes over the referred entries only.
> This would help us in our use case, but it could also improve any MapReduce task executed over Infinispan if correctly 'tuned'. We are hoping to get your attention before we reach our scale-up limits :)
> Thanks in advance and happy holidays!
> (i) This is the main feature of Oracle Coherence to improve MapReduce-like tasks (more info [here|http://docs.oracle.com/cd/E18686_01/coh.37/e18692/querylang.htm#CEGG...])
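The index/filter idea can be sketched with plain collections: maintain a per-tenant key index on every put, then drive the task iteration from the index instead of the full cache. All names below are illustrative, not Infinispan API:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Toy sketch of the proposed per-tenant index: updated on every put,
// then used to visit only one tenant's entries. Illustrative only.
public class TenantIndexedCache {

    private final Map<String, String> cache = new HashMap<>();
    private final Map<String, Set<String>> tenantIndex = new HashMap<>();

    public void put(String tenantId, String key, String value) {
        cache.put(key, value);
        // Keep the index in sync on every write.
        tenantIndex.computeIfAbsent(tenantId, t -> new HashSet<>()).add(key);
    }

    /** Visit only the entries registered under one tenant's index. */
    public List<String> valuesForTenant(String tenantId) {
        return tenantIndex.getOrDefault(tenantId, Set.of()).stream()
                .map(cache::get)
                .sorted()
                .collect(Collectors.toList());
    }
}
```

A map phase driven by valuesForTenant touches only that tenant's entries instead of scanning the whole shared cache, which is exactly the saving the reporter is after.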
[JBoss JIRA] (ISPN-5533) M/R DeltaAwareList can add duplicate values because of topology changes
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-5533?page=com.atlassian.jira.plugin.... ]
William Burns resolved ISPN-5533.
---------------------------------
Resolution: Out of Date
> M/R DeltaAwareList can add duplicate values because of topology changes
> -----------------------------------------------------------------------
>
> Key: ISPN-5533
> URL: https://issues.jboss.org/browse/ISPN-5533
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Distributed Execution and Map/Reduce
> Affects Versions: 7.2.2.Final, 8.0.0.Alpha1
> Reporter: Dan Berindei
> Fix For: 9.0.0.Alpha1
>
>
> By default, the intermediate cache is non-transactional, so a topology change will cause write commands to be retried. Because a {{PutKeyValueCommand(K, DeltaAwareList)}} command is not idempotent, a retried command will append extra intermediate values to the list.
> The M/R framework tries to guard against this by waiting for all the nodes to initialize the intermediate cache before starting the reduce phase, but it cannot guard against nodes joining or leaving during the reduce phase.
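Why the retried write is a problem can be shown with plain collections: replaying an append-style delta duplicates values, while a delta keyed by a unique id per contribution stays idempotent. This is toy code, not the real DeltaAwareList:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy model of the retry problem: the same delta applied twice, as
// happens when a non-transactional write is retried after a topology
// change.
public class DeltaRetrySketch {

    /** Append-style delta: not idempotent, a retry duplicates values. */
    public static List<String> applyAppend(List<String> list, List<String> delta) {
        list.addAll(delta);
        return list;
    }

    /** Keyed delta: a unique id per contribution makes the apply idempotent. */
    public static Map<String, String> applyKeyed(Map<String, String> map,
                                                 Map<String, String> delta) {
        map.putAll(delta); // re-applying the same ids overwrites, not appends
        return map;
    }

    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        List<String> delta = List.of("v1", "v2");
        applyAppend(list, delta);
        applyAppend(list, delta); // simulated retry
        System.out.println(list.size()); // 4: duplicated intermediate values

        Map<String, String> map = new LinkedHashMap<>();
        Map<String, String> keyedDelta = Map.of("node1#0", "v1", "node1#1", "v2");
        applyKeyed(map, keyedDelta);
        applyKeyed(map, keyedDelta); // retry is harmless
        System.out.println(map.size()); // 2
    }
}
```

Tagging each intermediate value with an originator-plus-sequence id is one way to make the retried command idempotent without a transactional intermediate cache.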