[JBoss JIRA] (ISPN-5127) LocalEntryRetrieverWithStoreAsBinaryTest.testFilterWithStoreAsBinaryPartialKeys random failures
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-5127?page=com.atlassian.jira.plugin.... ]
Adrian Nistor updated ISPN-5127:
--------------------------------
Fix Version/s: 7.2.0.Beta1
(was: 7.2.0.Alpha1)
> LocalEntryRetrieverWithStoreAsBinaryTest.testFilterWithStoreAsBinaryPartialKeys random failures
> -----------------------------------------------------------------------------------------------
>
> Key: ISPN-5127
> URL: https://issues.jboss.org/browse/ISPN-5127
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 7.1.0.Alpha1, 7.0.3.Final
> Reporter: Dan Berindei
> Assignee: William Burns
> Priority: Blocker
> Labels: testsuite_stability
> Fix For: 7.2.0.Beta1
>
>
> Sometimes the filtered retriever doesn't return any entries:
> {noformat}
> 15:16:26,328 ERROR (testng-LocalEntryRetrieverWithStoreAsBinaryTest:) [UnitTestTestNGListener] Test testFilterWithStoreAsBinaryPartialKeys(org.infinispan.iteration.LocalEntryRetrieverWithStoreAsBinaryTest) failed.
> java.util.NoSuchElementException
> at org.infinispan.iteration.impl.LocalEntryRetriever$Itr.next(LocalEntryRetriever.java:486)
> at org.infinispan.iteration.impl.LocalEntryRetriever$Itr.next(LocalEntryRetriever.java:428)
> at org.infinispan.iteration.LocalEntryRetrieverWithStoreAsBinaryTest.testFilterWithStoreAsBinaryPartialKeys(LocalEntryRetrieverWithStoreAsBinaryTest.java:93)
> {noformat}
> http://ci.infinispan.org/viewLog.html?buildId=14964
> The test should also use custom key/value types, as {{String}} keys/values are not marshalled when {{storeAsBinary}} is enabled (see {{MarshalledValue.isTypeExcluded()}}).
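> A minimal sketch of the kind of custom key type the test could use instead of {{String}} (the class name and field are made up for illustration); any {{Serializable}} type not excluded by {{MarshalledValue.isTypeExcluded()}} would exercise the marshalling path:
> {code}
> import java.io.Serializable;
>
> // Hypothetical key type: unlike String it is not excluded from
> // marshalling, so storeAsBinary actually stores it in binary form.
> public class CustomKey implements Serializable {
>    private static final long serialVersionUID = 1L;
>    private final int id;
>
>    public CustomKey(int id) {
>       this.id = id;
>    }
>
>    @Override
>    public boolean equals(Object o) {
>       return o instanceof CustomKey && ((CustomKey) o).id == id;
>    }
>
>    @Override
>    public int hashCode() {
>       return id;
>    }
> }
> {code}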
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (ISPN-5093) Granularity of remote event listener implementations doing the same job
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-5093?page=com.atlassian.jira.plugin.... ]
Adrian Nistor updated ISPN-5093:
--------------------------------
Fix Version/s: 7.2.0.Beta1
(was: 7.2.0.Alpha1)
> Granularity of remote event listener implementations doing the same job
> -----------------------------------------------------------------------
>
> Key: ISPN-5093
> URL: https://issues.jboss.org/browse/ISPN-5093
> Project: Infinispan
> Issue Type: Enhancement
> Components: Remote Protocols
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Fix For: 7.2.0.Beta1
>
>
> Currently, if N clients add the same listener to a cache to do the same job, e.g. keeping a near cache consistent, N server-side cluster listeners are created, each potentially installed on a different node. If one of those nodes fails, all clients that had a listener registered on that node have to find a different node for it.
> The downside of this approach is that there are as many cluster listeners installed as there are clients that have added listeners (or have near caching enabled), which might not be very efficient. If a node goes down, all clients that have cluster listeners there need to fail over to some other node.
> The advantage of this approach is the simplicity of deciding where to add the listener and where to fail over to.
> For this type of scenario, an alternative setup might be worth exploring:
> If all these client-side listeners are interested in exactly the same events, and the client ID were exposed via the RemoteCache API, a server-side cluster listener multiplexing between all these clients could potentially be built. In other words, instead of N clients registering N cluster listeners, the first client would register the cluster listener with a client listener ID, and further registrations with the same client listener ID would simply have their connections added to the existing cluster listener implementation.
> To maximise the efficiency of this solution, all clients (even those running in different JVMs) should, given the same client listener ID, agree on the node on which to add the listener. For a distributed cache, hashing on the cache name would work. For replicated caches, since there is no hashing available, the first node of the view could be used.
> Since the logic to be executed server side differs between the first node adding the client listener and the others, synchronization would be needed to make sure that the first invocation creates the cluster listener and the others simply add their channel to it, as in the sketch below.
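> A rough sketch of that registration logic ({{ClusterListenerRegistry}}, {{ClusterListener}} and {{ClientChannel}} are hypothetical names, not existing Infinispan APIs); the putIfAbsent pattern gives exactly the required first-creates, rest-attach behaviour:
> {code}
> import java.util.List;
> import java.util.Map;
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.CopyOnWriteArrayList;
>
> // Hypothetical client connection abstraction.
> interface ClientChannel {
>    void send(Object event);
> }
>
> // One server-side cluster listener shared by all clients that
> // registered with the same client listener ID.
> class ClusterListener {
>    private final String clientListenerId;
>    private final List<ClientChannel> channels = new CopyOnWriteArrayList<ClientChannel>();
>
>    ClusterListener(String clientListenerId) {
>       this.clientListenerId = clientListenerId;
>    }
>
>    void addChannel(ClientChannel channel) {
>       channels.add(channel);
>    }
>
>    // Fan a single server-side event out to every registered client.
>    void onEvent(Object event) {
>       for (ClientChannel channel : channels) {
>          channel.send(event);
>       }
>    }
> }
>
> public class ClusterListenerRegistry {
>    private final Map<String, ClusterListener> listeners = new ConcurrentHashMap<String, ClusterListener>();
>
>    // The first registration for a client listener ID creates the
>    // cluster listener; subsequent ones only attach their channel.
>    public void register(String clientListenerId, ClientChannel channel) {
>       ClusterListener listener = listeners.get(clientListenerId);
>       if (listener == null) {
>          ClusterListener created = new ClusterListener(clientListenerId);
>          listener = listeners.putIfAbsent(clientListenerId, created);
>          if (listener == null) {
>             listener = created; // we won the race: this is the first registration
>          }
>       }
>       listener.addChannel(channel);
>    }
> }
> {code}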
> Failover is a bit trickier too, because if the node hosting the cluster listener goes down, all the clients have to fail over, which again requires first-versus-rest logic.
> The advantages of this approach are the reduced number of cluster listeners and the efficiency potentially gained from a single server-side cluster listener implementation.
> The disadvantages come from the server-side logic to add or fail over a cluster listener, which needs to take into account whether the listener is already present. A further disadvantage is that clients need specific routing so that listeners with the same ID are added on the same node.
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (ISPN-5163) A write operation with the SKIP_LOCKING flag can roll back the transaction
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-5163?page=com.atlassian.jira.plugin.... ]
Adrian Nistor updated ISPN-5163:
--------------------------------
Fix Version/s: 7.2.0.Beta1
(was: 7.2.0.Alpha1)
> A write operation with the SKIP_LOCKING flag can roll back the transaction
> --------------------------------------------------------------------------
>
> Key: ISPN-5163
> URL: https://issues.jboss.org/browse/ISPN-5163
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 7.0.3.Final, 7.1.0.Beta1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 7.2.0.Beta1
>
>
> When a write operation has the SKIP_LOCKING flag, it does not send a {{LockControlCommand}} to the primary owner, but it can send a {{ClusteredGetCommand}} with {{acquireRemoteLocks=true}} instead. The {{ClusteredGetCommand}} will then execute a {{LockControlCommand}} with the origin not set properly, and {{TxInterceptor}} will roll back the transaction because the originator ({{null}}) appears to have left the cluster.
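> For context, a minimal sketch of the kind of transactional write that can trigger this ({{withFlags}} and {{Flag.SKIP_LOCKING}} are real Infinispan APIs; the key, value and surrounding class are placeholders):
> {code}
> import javax.transaction.TransactionManager;
>
> import org.infinispan.AdvancedCache;
> import org.infinispan.Cache;
> import org.infinispan.context.Flag;
>
> public class SkipLockingWrite {
>    // A transactional write with SKIP_LOCKING: instead of a
>    // LockControlCommand, a ClusteredGetCommand with
>    // acquireRemoteLocks=true may be sent to fetch the remote value.
>    public static void write(Cache<String, String> cache) throws Exception {
>       AdvancedCache<String, String> advancedCache = cache.getAdvancedCache();
>       TransactionManager tm = advancedCache.getTransactionManager();
>       tm.begin();
>       try {
>          advancedCache.withFlags(Flag.SKIP_LOCKING).put("key", "value");
>          tm.commit();
>       } catch (Exception e) {
>          tm.rollback();
>          throw e;
>       }
>    }
> }
> {code}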
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (ISPN-5151) DistributedSharedCacheTwoNodesMapReduceTest.testInvokeMapReduceOnAllKeys random failures
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-5151?page=com.atlassian.jira.plugin.... ]
Adrian Nistor updated ISPN-5151:
--------------------------------
Fix Version/s: 7.2.0.Beta1
(was: 7.2.0.Alpha1)
> DistributedSharedCacheTwoNodesMapReduceTest.testInvokeMapReduceOnAllKeys random failures
> ----------------------------------------------------------------------------------------
>
> Key: ISPN-5151
> URL: https://issues.jboss.org/browse/ISPN-5151
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Test Suite - Core
> Affects Versions: 7.0.3.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Blocker
> Labels: testsuite_stability
> Fix For: 7.2.0.Beta1
>
>
> The method {{invokeMapReduce()}} doesn't really invoke the M/R task, it only creates it; execution starts only when the test method calls {{task.execute()}} explicitly (a generic sketch of this two-step API follows the log below). It shouldn't try to check the contents of the shared intermediary cache, because the intermediary cache may not exist yet - and it may accidentally create it with the wrong configuration. I get this error when I run only the {{testInvokeMapReduceOnAllKeys}} method:
> {noformat}
> 09:55:37,632 TRACE (testng-DistributedSharedCacheTwoNodesMapReduceTest:) [DefaultCacheManager] About to wire and start cache __tmpMapReduce
> 09:55:37,646 DEBUG (testng-DistributedSharedCacheTwoNodesMapReduceTest:) [MapReduceTask] Invoking CreateCacheCommand{cacheManager=null, cacheNameToCreate='__tmpMapReduce', cacheConfigurationName='__tmpMapReduce', start=true', size=2} across members [DistributedSharedCacheTwoNodesMapReduceTest-NodeA-19271, DistributedSharedCacheTwoNodesMapReduceTest-NodeB-10341]
> 10:32:56,324 ERROR (testng-DistributedSharedCacheTwoNodesMapReduceTest:) [UnitTestTestNGListener] Test testInvokeMapReduceOnAllKeys(org.infinispan.distexec.mapreduce.DistributedSharedCacheTwoNodesMapReduceTest) failed.
> org.infinispan.distexec.mapreduce.MapReduceException: Map phase failed
> at org.infinispan.distexec.mapreduce.MapReduceTask.executeMapPhase(MapReduceTask.java:607)
> at org.infinispan.distexec.mapreduce.MapReduceTask.executeHelper(MapReduceTask.java:473)
> at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:414)
> at org.infinispan.distexec.mapreduce.BaseWordCountMapReduceTest.testInvokeMapReduceOnAllKeys(BaseWordCountMapReduceTest.java:162)
> Caused by: org.infinispan.commons.CacheException: java.lang.NullPointerException
> at org.infinispan.distexec.mapreduce.MapReduceManagerImpl.mapAndCombineForDistributedReduction(MapReduceManagerImpl.java:105)
> at org.infinispan.distexec.mapreduce.MapReduceTask$MapTaskPart.invokeMapCombineLocally(MapReduceTask.java:1174)
> at org.infinispan.distexec.mapreduce.MapReduceTask$MapTaskPart.access$300(MapReduceTask.java:1101)
> at org.infinispan.distexec.mapreduce.MapReduceTask$MapTaskPart$1.call(MapReduceTask.java:1123)
> at org.infinispan.distexec.mapreduce.MapReduceTask$MapTaskPart$1.call(MapReduceTask.java:1119)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at org.infinispan.distexec.mapreduce.MapReduceManagerImpl.mapKeysToNodes(MapReduceManagerImpl.java:363)
> at org.infinispan.distexec.mapreduce.MapReduceManagerImpl.migrateIntermediateKeysAndValues(MapReduceManagerImpl.java:327)
> at org.infinispan.distexec.mapreduce.MapReduceManagerImpl.mapAndCombine(MapReduceManagerImpl.java:260)
> at org.infinispan.distexec.mapreduce.MapReduceManagerImpl.mapAndCombineForDistributedReduction(MapReduceManagerImpl.java:103)
> ... 10 more
> {noformat}
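> For reference, a generic word-count sketch of the create-then-execute split in the 7.x M/R API (not the actual test code; the mapper and reducer here are illustrative):
> {code}
> import java.util.Iterator;
> import java.util.Map;
> import java.util.StringTokenizer;
>
> import org.infinispan.Cache;
> import org.infinispan.distexec.mapreduce.Collector;
> import org.infinispan.distexec.mapreduce.MapReduceTask;
> import org.infinispan.distexec.mapreduce.Mapper;
> import org.infinispan.distexec.mapreduce.Reducer;
>
> public class WordCountExample {
>    // Creating the task does not run anything; the distributed
>    // execution (and intermediary cache creation) starts at execute().
>    public static Map<String, Integer> run(Cache<String, String> cache) {
>       MapReduceTask<String, String, String, Integer> task =
>             new MapReduceTask<String, String, String, Integer>(cache);
>       task.mappedWith(new WordMapper()).reducedWith(new WordReducer());
>       return task.execute(); // execution starts here
>    }
>
>    static class WordMapper implements Mapper<String, String, String, Integer> {
>       @Override
>       public void map(String key, String value, Collector<String, Integer> collector) {
>          StringTokenizer tokens = new StringTokenizer(value);
>          while (tokens.hasMoreTokens()) {
>             collector.emit(tokens.nextToken(), 1);
>          }
>       }
>    }
>
>    static class WordReducer implements Reducer<String, Integer> {
>       @Override
>       public Integer reduce(String key, Iterator<Integer> values) {
>          int sum = 0;
>          while (values.hasNext()) {
>             sum += values.next();
>          }
>          return sum;
>       }
>    }
> }
> {code}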
> Even if the check is moved after the M/R task has finished, it still wouldn't be correct, because the task only cleans up the shared intermediary cache asynchronously. So it needs to use {{eventually()}} (see the sketch after the log below) to avoid errors like this:
> {noformat}
> 04:06:32,260 ERROR (testng-DistributedSharedCacheTwoNodesMapReduceTest:) [UnitTestTestNGListener] Test testInvokeMapReduceOnAllKeys(org.infinispan.distexec.mapreduce.DistributedSharedCacheTwoNodesMapReduceTest) failed.
> java.lang.AssertionError: Shared cache __tmpMapReduce is not empty. It has 5 keys/values: [ImmortalCacheEntry{key=IntermediateCompositeKey [taskId=88948a8b-2a8a-4c13-bc45-4dc3a9f6b0fb, key=is], value=org.infinispan.distexec.mapreduce.MapReduceManagerImpl$DeltaAwareList@21ae10d3}, ImmortalCacheEntry{key=IntermediateCompositeKey [taskId=88948a8b-2a8a-4c13-bc45-4dc3a9f6b0fb, key=JUDCon], value=org.infinispan.distexec.mapreduce.MapReduceManagerImpl$DeltaAwareList@108d6b51}, ImmortalCacheEntry{key=IntermediateCompositeKey [taskId=88948a8b-2a8a-4c13-bc45-4dc3a9f6b0fb, key=cool], value=org.infinispan.distexec.mapreduce.MapReduceManagerImpl$DeltaAwareList@77949e8f}, ImmortalCacheEntry{key=IntermediateCompositeKey [taskId=88948a8b-2a8a-4c13-bc45-4dc3a9f6b0fb, key=Infinispan], value=org.infinispan.distexec.mapreduce.MapReduceManagerImpl$DeltaAwareList@712a6071}, ImmortalCacheEntry{key=IntermediateCompositeKey [taskId=88948a8b-2a8a-4c13-bc45-4dc3a9f6b0fb, key=community], value=org.infinispan.distexec.mapreduce.MapReduceManagerImpl$DeltaAwareList@291bdf76}] expected:<0> but was:<5>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:555)
> at org.infinispan.distexec.mapreduce.DistributedSharedCacheTwoNodesMapReduceTest.invokeMapReduce(DistributedSharedCacheTwoNodesMapReduceTest.java:44)
> at org.infinispan.distexec.mapreduce.BaseWordCountMapReduceTest.testInvokeMapReduceOnAllKeys(BaseWordCountMapReduceTest.java:161)
> 04:06:32,579 TRACE (transport-thread-NodeA-p29577-t6:) [InvocationContextInterceptor] Invoked with command RemoveCommand{key=IntermediateCompositeKey [taskId=eb7da48a-5922-4671-9037-4077e209744c, key=RedHat], value=null, flags=null, valueMatcher=MATCH_ALWAYS} and InvocationContext [org.infinispan.context.SingleKeyNonTxInvocationContext@c0bbc61]
> {noformat}
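> A sketch of the suggested fix, assuming the test has access to the test suite's {{eventually()}} helper (as in {{AbstractInfinispanTest}}) and that {{intermediaryCache}} stands for however the test obtains the shared cache:
> {code}
> // Poll instead of asserting immediately: the shared intermediary
> // cache is cleaned up asynchronously after task.execute() returns.
> eventually(new Condition() {
>    @Override
>    public boolean isSatisfied() throws Exception {
>       return intermediaryCache.isEmpty();
>    }
> });
> {code}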
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (ISPN-5244) Include stats enabled check for all container cache stats
by Vladimir Blagojevic (JIRA)
Vladimir Blagojevic created ISPN-5244:
-----------------------------------------
Summary: Include stats enabled check for all container cache stats
Key: ISPN-5244
URL: https://issues.jboss.org/browse/ISPN-5244
Project: Infinispan
Issue Type: Bug
Components: JMX, reporting and management
Reporter: Vladimir Blagojevic
Assignee: Vladimir Blagojevic
Priority: Minor
Fix For: 7.2.0.Beta1
In the container cache stats we sometimes blindly aggregate per-cache stats, which can lead to unusual results in specific situations. For example, we add numberOfEntries across all caches, but the numberOfEntries stat of a cache is initialized to -1. Adding such values can therefore yield a negative total numberOfEntries for the container stats, which will confuse users.
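A minimal sketch of the intended guard ({{CacheStats}}, {{isStatisticsEnabled()}} and {{getNumberOfEntries()}} are illustrative stand-ins for whatever the container stats code actually uses):
{code}
// Hypothetical aggregation: skip caches whose statistics are disabled,
// so the -1 sentinel never drags the container-wide total negative.
long aggregateNumberOfEntries(Iterable<CacheStats> perCacheStats) {
   long total = 0;
   for (CacheStats stats : perCacheStats) {
      if (!stats.isStatisticsEnabled()) {
         continue; // numberOfEntries would be the -1 sentinel here
      }
      total += stats.getNumberOfEntries();
   }
   return total;
}
{code}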
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (ISPN-5243) Configuration attribute holders
by Tristan Tarrant (JIRA)
Tristan Tarrant created ISPN-5243:
-------------------------------------
Summary: Configuration attribute holders
Key: ISPN-5243
URL: https://issues.jboss.org/browse/ISPN-5243
Project: Infinispan
Issue Type: Enhancement
Components: Configuration
Reporter: Tristan Tarrant
Assignee: Tristan Tarrant
Fix For: 7.2.0.Final
Configuration attributes are stored in plain fields, which makes it impossible to determine whether they have been set by the user or still hold their default values. The purpose of this issue is to introduce attribute wrappers that track modifications. We also want to be able to react to changes by adding attribute listeners; a possible shape is sketched below.
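A minimal sketch of what such a wrapper could look like (the names are illustrative, not the eventual API):
{code}
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical attribute holder: remembers whether the value was ever
// user-set and notifies listeners whenever it changes.
public class Attribute<T> {
   public interface Listener<T> {
      void attributeChanged(Attribute<T> attribute, T oldValue);
   }

   private final String name;
   private T value;
   private boolean modified;
   private final List<Listener<T>> listeners = new CopyOnWriteArrayList<Listener<T>>();

   public Attribute(String name, T defaultValue) {
      this.name = name;
      this.value = defaultValue;
   }

   public void set(T newValue) {
      T oldValue = value;
      value = newValue;
      modified = true; // user-set: no longer the default
      for (Listener<T> listener : listeners) {
         listener.attributeChanged(this, oldValue);
      }
   }

   public T get() { return value; }
   public String name() { return name; }
   public boolean isModified() { return modified; }

   public void addListener(Listener<T> listener) {
      listeners.add(listener);
   }
}
{code}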
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (ISPN-3224) RemoteCacheManager of the HotRod client is not able to connect to the server because of wrong IPv6 address parsing on pure IPv6 machines, and gets the wrong address on dual-stack machines
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-3224?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant closed ISPN-3224.
---------------------------------
Resolution: Out of Date
> RemoteCacheManager of the HotRod client is not able to connect to the server because of wrong IPv6 address parsing on pure IPv6 machines, and gets the wrong address on dual-stack machines
> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-3224
> URL: https://issues.jboss.org/browse/ISPN-3224
> Project: Infinispan
> Issue Type: Bug
> Components: Remote Protocols
> Affects Versions: 5.2.5.Final, 5.3.0.Final
> Reporter: Vitalii Chepeliuk
> Priority: Minor
>
> ######################## Run Hot Rod client with pure IPv6 #############################################
> The Hot Rod client fails when it tries to connect to the server; below is the exception from a pure IPv6 machine. The client does not really understand the IPv6 address it should connect to: "Could not connect to server: /0.0.10.60:52". Here there should be an IPv6 address, but it is trying to connect to what looks like some wrong IPv4 address. I used the complicated address 2620:52:0:105f:0:0:ffff:32%2:11222 as the host variable, and it is not specified in /etc/hosts.
> {code}
> public RemoteCacheManager(String host, int port, boolean start, ClassLoader classLoader) {
>    config = new ConfigurationProperties(host + ":" + port); <<< host=2620:52:0:105f:0:0:ffff:32%2 and port=11222
>    this.classLoader = classLoader;
>    if (start) start();
> }
> {code}
>
> Then, in the start() method:
> {code}
> @Override
> public void start() {
>    // Workaround for JDK6 NPE: http://bugs.sun.com/view_bug.do?bug_id=6427854
>    SysPropertyActions.setProperty("sun.nio.ch.bugLevel", "\"\"");
>    forceReturnValueDefault = config.getForceReturnValues();
>    codec = CodecFactory.getCodec(config.getProtocolVersion());
>    String factory = config.getTransportFactory();
>    transportFactory = (TransportFactory) getInstance(factory, classLoader);
>    Collection<SocketAddress> servers = config.getServerList(); <<< we get the list of servers here, but the getServerList() method should be improved, see below!
>
>    transportFactory.start(codec, config, servers, topologyId, classLoader); <<< and pass them to the transportFactory
>    if (marshaller == null) {
>       String marshallerName = config.getMarshaller();
>       setMarshaller((Marshaller) getInstance(marshallerName, classLoader));
>    }
>    if (asyncExecutorService == null) {
>       String asyncExecutorClass = config.getAsyncExecutorFactory();
>       ExecutorFactory executorFactory = (ExecutorFactory) getInstance(asyncExecutorClass, classLoader);
>       asyncExecutorService = executorFactory.getExecutor(config.getProperties());
>    }
>    synchronized (cacheName2RemoteCache) {
>       for (RemoteCacheHolder rcc : cacheName2RemoteCache.values()) {
>          startRemoteCache(rcc);
>       }
>    }
>    // Print version to help figure client version run
>    log.version(org.infinispan.Version.printVersion());
>    started = true;
> }
> {code}
>
> and "servers" variable contain the same IP address 2620:52:0:105f:0:0:ffff:32%2:11222
> {code}
> public Collection<SocketAddress> getServerList() {
>    Set<SocketAddress> addresses = new HashSet<SocketAddress>();
>    String servers = props.getProperty(SERVER_LIST, "127.0.0.1:" + DEFAULT_HOTROD_PORT); <<< got 2620:52:0:105f:0:0:ffff:32%2:11222
>    for (String server : servers.split(";")) {
>       String[] components = server.trim().split(":"); <<< the splitting goes wrong right here: the address is divided into 9 chunks
>       String host = components[0]; <<< the host becomes just the first chunk, 2620
>       int port = DEFAULT_HOTROD_PORT;
>       if (components.length > 1) port = Integer.parseInt(components[1]); <<< and the port becomes 52
>       addresses.add(new InetSocketAddress(host, port)); <<< so we pass the wrong parameters, InetSocketAddress("2620", 52), to this constructor
>    }
>    if (addresses.isEmpty()) throw new IllegalStateException("No Hot Rod servers specified!");
>    return addresses; <<< and here we end up with a strange IPv4 address such as 0.0.10.60:52
> }
> {code}
> and the exception is the following:
> {code}
> Caused by: org.infinispan.client.hotrod.exceptions.TransportException:: Could not connect to server: /0.0.10.60:52
> at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransport.<init>(TcpTransport.java:88)
> at org.infinispan.client.hotrod.impl.transport.tcp.TransportObjectFactory.makeObject(TransportObjectFactory.java:57)
> at org.infinispan.client.hotrod.impl.transport.tcp.TransportObjectFactory.makeObject(TransportObjectFactory.java:38)
> at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1220)
> at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.borrowTransportFromPool(TcpTransportFactory.java:271)
> {code}
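> For reference, a sketch of IPv6-aware host:port parsing, assuming the usual bracket convention for raw IPv6 literals ("[fe80::1]:11222"); this is only an illustration, not the actual Infinispan fix:
> {code}
> import java.net.InetSocketAddress;
>
> public class ServerAddressParser {
>    // Handles "[ipv6]:port", a raw IPv6 literal without a port,
>    // and plain "host:port"; splitting blindly on ':' cannot.
>    static InetSocketAddress parse(String server, int defaultPort) {
>       server = server.trim();
>       String host;
>       int port = defaultPort;
>       if (server.startsWith("[")) {
>          int closing = server.indexOf(']');
>          host = server.substring(1, closing);
>          if (closing + 1 < server.length() && server.charAt(closing + 1) == ':') {
>             port = Integer.parseInt(server.substring(closing + 2));
>          }
>       } else {
>          int lastColon = server.lastIndexOf(':');
>          // A single colon separates host and port; several colons
>          // without brackets mean a raw IPv6 address with no port.
>          if (lastColon >= 0 && server.indexOf(':') == lastColon) {
>             host = server.substring(0, lastColon);
>             port = Integer.parseInt(server.substring(lastColon + 1));
>          } else {
>             host = server;
>          }
>       }
>       return new InetSocketAddress(host, port);
>    }
> }
> {code}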
> ######################## Other issue: when the Hot Rod client connects in dual-stack mode #################
> The /etc/hosts file is
> ---------------------------------------------------------
> 127.0.0.1 myhost localhost.localdomain localhost
> ::1 myhost localhost6.localdomain6 localhost6
> ---------------------------------------------------------------------------
> Then the same problem occurs in the ConfigurationProperties.java getServerList() method: we add the address via <<<addresses.add(new InetSocketAddress(host, port));>>>
> so the InetSocketAddress constructor is called:
> {code}
> public InetSocketAddress(String hostname, int port) {
>    checkHost(hostname);
>    InetAddress addr = null;
>    String host = null;
>    try {
>       addr = InetAddress.getByName(hostname); <<< we should get the InetAddress for the hostname
>    } catch(UnknownHostException e) {
>       host = hostname;
>    }
>    holder = new InetSocketAddressHolder(host, addr, checkPort(port));
> }
> {code}
> but we have 2 (!) different inet addresses for the same hostname: one is 127.0.0.1 and the other is ::1, and if I run on IPv6 it should be ::1, not 127.0.0.1!
> And:
> {code}
> public static InetAddress getByName(String host)
>       throws UnknownHostException {
>    return InetAddress.getAllByName(host)[0]; <<< but here only the first address in the array is returned, so we always get 127.0.0.1
> }
> {code}
> and then another exception is thrown:
> {code}
> Caused by: org.infinispan.client.hotrod.exceptions.TransportException:: Could not connect to server: vchepQA/127.0.0.1:11222
> at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransport.<init>(TcpTransport.java:88)
> at org.infinispan.client.hotrod.impl.transport.tcp.TransportObjectFactory.makeObject(TransportObjectFactory.java:57)
> at org.infinispan.client.hotrod.impl.transport.tcp.TransportObjectFactory.makeObject(TransportObjectFactory.java:38)
> at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1220)
> at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.borrowTransportFromPool(TcpTransportFactory.java:271)
> ... 97 more
> {code}
> and I forgot to attach the trace log; just download it here: http://dropmefiles.com/en/H5wvu
> ##################################################INFINISPAN 5.3.0.FINAL#####################################################################
> org.infinispan.client.hotrod.configuration.ConfigurationBuilder has this method
> {code}
> @Override
> public ConfigurationBuilder addServers(String servers) {
>    for (String server : servers.split(";")) {
>       String[] components = server.trim().split(":");
>       String host = components[0];
>       int port = ConfigurationProperties.DEFAULT_HOTROD_PORT;
>       if (components.length > 1)
>          port = Integer.parseInt(components[1]);
>       this.addServer().host(host).port(port);
>    }
>    return this;
> }
> {code}
> And what if I pass in the <servers> argument something like addServers("[fe80::3e97:eff:fe19:3045]:11222;[fe80::3e97:eff:fe19:3046]:11322")? I don't think it is parsed correctly. We could use a <servers> argument like addServers("localhost6:11222; localhost6.localdomain6:11322") or other IPv6 hostnames,
> but I want to use IP addresses, not hostnames, and I don't want to change the /etc/hosts file just to map an IPv6 address to some dummy hostname.
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (ISPN-5198) FuturesTest.testCombineWithCompletionErrors random failures
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-5198?page=com.atlassian.jira.plugin.... ]
Adrian Nistor updated ISPN-5198:
--------------------------------
Status: Resolved (was: Pull Request Sent)
Fix Version/s: 7.2.0.Alpha1
Resolution: Done
Integrated in master. Thanks [~gustavonalle]!
> FuturesTest.testCombineWithCompletionErrors random failures
> -----------------------------------------------------------
>
> Key: ISPN-5198
> URL: https://issues.jboss.org/browse/ISPN-5198
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 7.1.0.Final
> Reporter: Gustavo Fernandes
> Assignee: Gustavo Fernandes
> Fix For: 7.2.0.Alpha1
>
>
> {code}
> [pri:0, instance:org.infinispan.commons.util.concurrent.FuturesTest@60e31054] should have thrown an exception of class java.util.concurrent.ExecutionException
> at org.testng.internal.Invoker.handleInvocationResults(Invoker.java:1512)
> at org.testng.internal.Invoker.invokeMethod(Invoker.java:754)
> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
> at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> at org.testng.TestRunner.privateRun(TestRunner.java:767)
> at org.testng.TestRunner.run(TestRunner.java:617)
> at org.testng.SuiteRunner.runTest(SuiteRunner.java:348)
> at org.testng.SuiteRunner.access$000(SuiteRunner.java:38)
> at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:382)
> at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> ------- Stdout: -------
> {code}
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)