[JBoss JIRA] (ISPN-2519) Test org.infinispan.loaders.decorators.BatchAsyncCacheStoreTest.indexWasStored fails randomly
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-2519?page=com.atlassian.jira.plugin.... ]
Pedro Ruivo edited comment on ISPN-2519 at 3/5/13 9:09 AM:
-----------------------------------------------------------
trace log of the test...
It looks like there is not enough time to write the data to disk. In the previous test, I see errors like:
{code}
ISPN000063: Exception while saving bucket Bucket{entries={k3=ImmortalCacheEntry{key=k3, value=ImmortalCacheValue {value=V1992}},
k4=ImmortalCacheEntry{key=k4, value=ImmortalCacheValue {value=V1993}},
k5=ImmortalCacheEntry{key=k5, value=ImmortalCacheValue {value=V1851}},
k6=ImmortalCacheEntry{key=k6, value=ImmortalCacheValue {value=V1852}},
k7=ImmortalCacheEntry{key=k7, value=ImmortalCacheValue {value=V1996}},
k8=ImmortalCacheEntry{key=k8, value=ImmortalCacheValue {value=V1854}},
k9=ImmortalCacheEntry{key=k9, value=ImmortalCacheValue {value=V1855}},
k0=ImmortalCacheEntry{key=k0, value=ImmortalCacheValue {value=V1989}},
k1=ImmortalCacheEntry{key=k1, value=ImmortalCacheValue {value=V1990}},
k2=ImmortalCacheEntry{key=k2, value=ImmortalCacheValue {value=V1991}}}, bucketId='3072'}
java.nio.channels.ClosedChannelException
{code}
and in this test, I see errors like:
{code}
Failure on key 'kXX' expected value: 'Vzzzz' actual value: 'Vyyyy'
{code}
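If the root cause is the verification racing the async flush, the usual stabilization is to poll until the store has caught up instead of asserting immediately after the writes. A minimal, generic sketch of such a helper (illustrative only, not Infinispan's actual test API):
{code}
// Illustrative helper: retry an assertion until the async store has had
// time to flush, instead of failing on the first stale read.
public static void assertEventually(java.util.concurrent.Callable<Boolean> condition,
                                    long timeoutMillis) throws Exception {
   long deadline = System.currentTimeMillis() + timeoutMillis;
   while (System.currentTimeMillis() < deadline) {
      if (condition.call())
         return;            // the expected value is now on disk
      Thread.sleep(50);     // give the async store thread time to write
   }
   throw new AssertionError("condition not satisfied within " + timeoutMillis + " ms");
}
{code}
The per-key check ("expected value ... actual value") would then run inside the condition rather than once, immediately after the write.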
was (Author: pruivo):
trace log of the test...
> Test org.infinispan.loaders.decorators.BatchAsyncCacheStoreTest.indexWasStored fails randomly
> ---------------------------------------------------------------------------------------------
>
> Key: ISPN-2519
> URL: https://issues.jboss.org/browse/ISPN-2519
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite
> Reporter: Anna Manukyan
> Assignee: Mircea Markus
> Labels: testsuite_stability
> Fix For: 5.3.0.Alpha1
>
> Attachments: trace-infinispan.log.gz
>
>
> The failure occurs when running the test suite for JDG.
> Failure log:
> https://jenkins.mw.lab.eng.bos.redhat.com/hudson/view/EDG6/view/EDG-REPOR...
> Another failure:
> http://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/edg-60-ispn-testsuite...
[JBoss JIRA] (ISPN-2888) ManagedOperation.name() is ignored when registering JMX operations
by Mircea Markus (JIRA)
Mircea Markus created ISPN-2888:
-----------------------------------
Summary: ManagedOperation.name() is ignored when registering JMX operations
Key: ISPN-2888
URL: https://issues.jboss.org/browse/ISPN-2888
Project: Infinispan
Issue Type: Bug
Components: JMX, reporting and management
Affects Versions: 5.2.2.Final
Reporter: Mircea Markus
Assignee: Mircea Markus
Fix For: 5.3.0.Alpha1, 5.3.0.Final
ManagedOperation.name() should denote the name of the managed operation (as opposed to the actual name of the annotated method). This is currently ignored when registering JMX operations.
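To illustrate the expected behaviour with a hedged sketch (the component and operation names below are made up):
{code}
import org.infinispan.jmx.annotations.MBean;
import org.infinispan.jmx.annotations.ManagedOperation;

@MBean(objectName = "ExampleComponent")
public class ExampleComponent {

   // Per this issue, the operation should be registered in JMX as "clear",
   // not under the Java method name "clearInternalState".
   @ManagedOperation(name = "clear", description = "Clears the internal state")
   public void clearInternalState() {
      // ...
   }
}
{code}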
[JBoss JIRA] (ISPN-2836) org.jgroups.TimeoutException after invoking MapCombineCommand in Map/Reduce task with 2 nodes
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/ISPN-2836?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on ISPN-2836:
--------------------------------
It looks like https://issues.jboss.org/browse/JGRP-1549 fixed the original issue (sender threads waiting on the send-queue, but the consumer de-queuing from a different send-queue).
I'm therefore going to close https://issues.jboss.org/browse/JGRP-1600. The workaround, until 3.3 is released, is to set use_send_queues to false. I don't want to backport JGRP-1549, as it involves too many changes.
Feel free to reopen JGRP-1600 if this is still a JGroups issue.
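For reference, the workaround would look roughly like this on the transport element of the JGroups stack (a sketch; all other TCP attributes are omitted here):
{code}
<!-- Disable send queues on the TCP transport until 3.3, per the comment above. -->
<TCP bind_port="7800"
     use_send_queues="false"/>
{code}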
> org.jgroups.TimeoutException after invoking MapCombineCommand in Map/Reduce task with 2 nodes
> ---------------------------------------------------------------------------------------------
>
> Key: ISPN-2836
> URL: https://issues.jboss.org/browse/ISPN-2836
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Execution and Map/Reduce
> Affects Versions: 5.2.1.Final
> Reporter: Alan Field
> Assignee: Vladimir Blagojevic
> Attachments: afield-tcp-521-final.txt, udp-edg-perf01.txt, udp-edg-perf02.txt
>
>
> Using RadarGun and two nodes to execute the example WordCount Map/Reduce job against a cache with ~550 keys, each with a 1MB value, produces a thread deadlock. The cache is distributed, with transactions disabled.
> The TCP transport deadlocks without throwing an exception. Disabling the send queue and setting UNICAST2.conn_expiry_timeout=0 prevents the deadlock, but the job does not complete; the nodes send "are-you-alive" messages back and forth, and I have seen the following exception:
> {noformat}
> 11:44:29,970 ERROR [org.jgroups.protocols.TCP] (OOB-98,default,edg-perf01-1907) failed sending message to edg-perf02-32536 (76 bytes): java.net.SocketException: Socket closed, cause: null
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:352)
> at org.radargun.cachewrappers.InfinispanMapReduceWrapper.executeMapReduceTask(InfinispanMapReduceWrapper.java:98)
> at org.radargun.stages.MapReduceStage.executeOnSlave(MapReduceStage.java:74)
> at org.radargun.Slave$2.run(Slave.java:103)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: java.util.concurrent.ExecutionException: org.infinispan.CacheException: org.jgroups.TimeoutException: timeout sending message to edg-perf02-32536
> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
> at java.util.concurrent.FutureTask.get(FutureTask.java:83)
> at org.infinispan.distexec.mapreduce.MapReduceTask$TaskPart.get(MapReduceTask.java:832)
> at org.infinispan.distexec.mapreduce.MapReduceTask.executeMapPhaseWithLocalReduction(MapReduceTask.java:477)
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:350)
> ... 9 more
> Caused by: org.infinispan.CacheException: org.jgroups.TimeoutException: timeout sending message to edg-perf02-32536
> at org.infinispan.util.Util.rewrapAsCacheException(Util.java:541)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:186)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:515)
> 11:44:29,978 ERROR [org.jgroups.protocols.TCP] (Timer-3,default,edg-perf01-1907) failed sending message to edg-perf02-32536 (60 bytes): java.net.SocketException: Socket closed, cause: null
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:175)
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:197)
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:254)
> at org.infinispan.remoting.rpc.RpcManagerImpl.access$000(RpcManagerImpl.java:80)
> at org.infinispan.remoting.rpc.RpcManagerImpl$1.call(RpcManagerImpl.java:288)
> ... 5 more
> Caused by: org.jgroups.TimeoutException: timeout sending message to edg-perf02-32536
> at org.jgroups.blocks.MessageDispatcher.sendMessage(MessageDispatcher.java:390)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:301)
> 11:44:29,979 ERROR [org.jgroups.protocols.TCP] (Timer-4,default,edg-perf01-1907) failed sending message to edg-perf02-32536 (63 bytes): java.net.SocketException: Socket closed, cause: null
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:179)
> ... 11 more
> {noformat}
> With the UDP transport, both threads are deadlocked. I will attach thread dumps from runs using the TCP and UDP transports.
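For context, the example WordCount job is along these lines with the 5.2 Map/Reduce API; this is a sketch (class names are illustrative), and execute() is the blocking call where the timeout above surfaces:
{code}
import java.util.Iterator;
import java.util.Map;
import org.infinispan.Cache;
import org.infinispan.distexec.mapreduce.Collector;
import org.infinispan.distexec.mapreduce.MapReduceTask;
import org.infinispan.distexec.mapreduce.Mapper;
import org.infinispan.distexec.mapreduce.Reducer;

public class WordCountSketch {

   // Emit a count of 1 for every word in every cached value.
   static class WordMapper implements Mapper<String, String, String, Integer> {
      @Override
      public void map(String key, String value, Collector<String, Integer> c) {
         for (String word : value.split("\\s+"))
            c.emit(word, 1);
      }
   }

   // Sum the per-word counts emitted by the mappers.
   static class WordReducer implements Reducer<String, Integer> {
      @Override
      public Integer reduce(String word, Iterator<Integer> counts) {
         int sum = 0;
         while (counts.hasNext())
            sum += counts.next();
         return sum;
      }
   }

   static Map<String, Integer> countWords(Cache<String, String> cache) {
      return new MapReduceTask<String, String, String, Integer>(cache)
            .mappedWith(new WordMapper())
            .reducedWith(new WordReducer())
            .execute(); // distributes the map phase; blocks until all nodes reply
   }
}
{code}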