[JBoss JIRA] (ISPN-2713) REBALANCE_START and REBALANCE_CONFIRM commands deadlock when RSVP.ack_on_delivery=true
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-2713?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-2713:
------------------------------------
With the fix for ISPN-2825, the thread sending the REBALANCE_START command won't hold the lock on the ClusterCacheStatus any more, and the REBALANCE_CONFIRM command will be able to proceed.
> REBALANCE_START and REBALANCE_CONFIRM commands deadlock when RSVP.ack_on_delivery=true
> --------------------------------------------------------------------------------------
>
> Key: ISPN-2713
> URL: https://issues.jboss.org/browse/ISPN-2713
> Project: Infinispan
> Issue Type: Bug
> Components: State transfer
> Affects Versions: 5.2.0.CR1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 5.3.0.Final
>
>
> When the coordinator sends a REBALANCE_START command, it holds a lock on the ClusterCacheStatus until it receives the responses from all the other members.
> If a node doesn't need to request any new state, it sends the rebalance confirmation to the coordinator on the same thread that received the REBALANCE_START command. The REBALANCE_CONFIRM command also needs to acquire the lock on the ClusterCacheStatus on the coordinator, but because it is sent asynchronously, it doesn't deadlock with the thread waiting for the REBALANCE_START responses on the coordinator.
> At least, that's what happens when {{RSVP.ack_on_delivery=false}} (the Infinispan default). When {{RSVP.ack_on_delivery=true}} (the JGroups default), the "asynchronous" REBALANCE_CONFIRM command becomes synchronous, and it generates a deadlock. The rebalance then fails after the RSVP timeout expires (10 seconds by default).
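The blocking pattern described above can be sketched in a few lines of plain Java (hypothetical class and member names, not the actual Infinispan code): the thread starting the rebalance waits for confirmations while holding the per-cache lock, and the handler for the confirmation needs that same lock, so a synchronous confirmation can never complete.
{code}
import java.util.concurrent.CountDownLatch;

// Minimal deadlock sketch using hypothetical names -- not the real
// ClusterCacheStatus code.
class ClusterStatusSketch {
   private final Object lock = new Object();
   private final CountDownLatch confirmations = new CountDownLatch(1);

   void startRebalance() throws InterruptedException {
      synchronized (lock) {
         // REBALANCE_START is sent here; we then wait for the confirmations
         // while still holding the lock.
         confirmations.await();
      }
   }

   // Handler for REBALANCE_CONFIRM on the coordinator: it needs the same lock
   // that startRebalance() is holding while it waits, so if the sender blocks
   // until this handler completes, neither side can make progress.
   void onRebalanceConfirm() {
      synchronized (lock) {
         confirmations.countDown();
      }
   }
}
{code}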
[JBoss JIRA] (ISPN-2402) Cache operations or transactions should never fail with SuspectException
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2402?page=com.atlassian.jira.plugin.... ]
Mircea Markus commented on ISPN-2402:
-------------------------------------
Changed the priority to Minor, as the user should catch CacheException anyway and react to it.
> Cache operations or transactions should never fail with SuspectException
> ------------------------------------------------------------------------
>
> Key: ISPN-2402
> URL: https://issues.jboss.org/browse/ISPN-2402
> Project: Infinispan
> Issue Type: Task
> Components: RPC, State transfer
> Affects Versions: 5.2.0.Beta2
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Minor
> Fix For: 5.3.0.Beta1
>
> Attachments: vrstt.log
>
>
> This is an extension of ISPN-1896 of sorts, but for all the cache operations that are visible to the user.
> After a node leaves, the other nodes that have sent commands to that node should either ignore SuspectExceptions or, if that is not possible, retry the operation (e.g. if they didn't get any response back).
> For example, VersionReplStateTransferTest quite often fails on my machine with a SuspectException, because the versioned prepare command expects a response from the coordinator and the coordinator has just left.
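Until this is handled inside Infinispan, a caller-side workaround could look roughly like the sketch below (illustration only, not the actual fix; the retry count is arbitrary and the SuspectException package should be checked against the Infinispan version in use):
{code}
import java.util.concurrent.Callable;

import org.infinispan.remoting.transport.jgroups.SuspectException;

// Caller-side retry sketch: simply retry an operation a few times when the
// target node has just left the cluster.
public class SuspectRetry {
   public static <T> T withRetry(Callable<T> operation, int maxAttempts) throws Exception {
      for (int i = 1; ; i++) {
         try {
            return operation.call();
         } catch (SuspectException e) {
            if (i >= maxAttempts) {
               throw e;   // give up after the last attempt
            }
            // the target was suspected or left; try again
         }
      }
   }
}
{code}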
[JBoss JIRA] (ISPN-2402) Cache operations or transactions should never fail with SuspectException
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2402?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-2402:
--------------------------------
Priority: Minor (was: Major)
> Cache operations or transactions should never fail with SuspectException
> ------------------------------------------------------------------------
>
> Key: ISPN-2402
> URL: https://issues.jboss.org/browse/ISPN-2402
> Project: Infinispan
> Issue Type: Task
> Components: RPC, State transfer
> Affects Versions: 5.2.0.Beta2
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Minor
> Fix For: 5.3.0.Beta1
>
> Attachments: vrstt.log
>
>
> This is an extension of ISPN-1896 of sorts, but for all the cache operations that are visible to the user.
> After a node leaves, the other nodes that have sent commands to that node should either ignore SuspectExceptions or, if that is not possible, retry the operation (e.g. if they didn't get any response back).
> For example, VersionReplStateTransferTest quite often fails on my machine with a SuspectException, because the versioned prepare command expects a response from the coordinator and the coordinator has just left.
[JBoss JIRA] (ISPN-2897) NPE in DefaultConsistentHash
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-2897?page=com.atlassian.jira.plugin.... ]
Adrian Nistor edited comment on ISPN-2897 at 3/6/13 11:17 AM:
--------------------------------------------------------------
The log indicates that node2 receives a REBALANCE_START with a null pendingCH, which is illegal.
EDIT: Please ignore my comment. I did not read the description... :)
was (Author: anistor):
The log indicates that node2 receives a REBALANCE_START with a null pendingCH, which is illegal.
> NPE in DefaultConsistentHash
> ----------------------------
>
> Key: ISPN-2897
> URL: https://issues.jboss.org/browse/ISPN-2897
> Project: Infinispan
> Issue Type: Bug
> Components: Distributed Cache
> Affects Versions: 5.2.3.Final
> Reporter: Michal Linhard
> Assignee: Dan Berindei
> Priority: Critical
> Labels: 5.2.x
> Fix For: 5.3.0.Alpha1, 5.3.0.Final
>
> Attachments: serverlogs.zip
>
>
> Happened in local testing on my laptop with 4 nodes JDG 6.1.0.CR1 (ISPN 5.2.3.Final)
> For some reason there's a rebalance with pendingCH=null and this is not handled well by DefaultConsistentHash.union:
> {code}
> 15:11:04,782 DEBUG [org.infinispan.topology.LocalTopologyManagerImpl] (OOB-5,shared=udp) Starting local rebalance for cache testCache, topology = CacheTopology{id=3, currentCH=DefaultConsistentHash{numSegments=40, numOwners=2, members=[node04/default]}, pendingCH=null}
> 15:11:04,782 WARN [org.infinispan.topology.CacheTopologyControlCommand] (OOB-5,shared=udp) ISPN000071: Caught exception when handling command CacheTopologyControlCommand{cache=testCache, type=REBALANCE_START, sender=node04/default, joinInfo=null, topologyId=3, currentCH=DefaultConsistentHash{numSegments=40, numOwners=2, members=[node04/default]}, pendingCH=null, throwable=null, viewId=4}: java.lang.NullPointerException
> at org.infinispan.distribution.ch.DefaultConsistentHash.union(DefaultConsistentHash.java:243) [infinispan-core-5.2.3.Final-redhat-1.jar:5.2.3.Final-redhat-1]
> at org.infinispan.distribution.ch.DefaultConsistentHashFactory.union(DefaultConsistentHashFactory.java:120) [infinispan-core-5.2.3.Final-redhat-1.jar:5.2.3.Final-redhat-1]
> at org.infinispan.distribution.ch.DefaultConsistentHashFactory.union(DefaultConsistentHashFactory.java:45) [infinispan-core-5.2.3.Final-redhat-1.jar:5.2.3.Final-redhat-1]
> at org.infinispan.topology.LocalTopologyManagerImpl.handleRebalance(LocalTopologyManagerImpl.java:228) [infinispan-core-5.2.3.Final-redhat-1.jar:5.2.3.Final-redhat-1]
> at org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:168) [infinispan-core-5.2.3.Final-redhat-1.jar:5.2.3.Final-redhat-1]
> at org.infinispan.topology.CacheTopologyControlCommand.perform(CacheTopologyControlCommand.java:137) [infinispan-core-5.2.3.Final-redhat-1.jar:5.2.3.Final-redhat-1]
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:253) [infinispan-core-5.2.3.Final-redhat-1.jar:5.2.3.Final-redhat-1]
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:220) [infinispan-core-5.2.3.Final-redhat-1.jar:5.2.3.Final-redhat-1]
> at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:484) [jgroups-3.2.7.Final-redhat-1.jar:3.2.7.Final-redhat-1]
> at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:391) [jgroups-3.2.7.Final-redhat-1.jar:3.2.7.Final-redhat-1]
> at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:249) [jgroups-3.2.7.Final-redhat-1.jar:3.2.7.Final-redhat-1]
> at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:598) [jgroups-3.2.7.Final-redhat-1.jar:3.2.7.Final-redhat-1]
> at org.jgroups.blocks.mux.MuxUpHandler.up(MuxUpHandler.java:130) [jgroups-3.2.7.Final-redhat-1.jar:3.2.7.Final-redhat-1]
> at org.jgroups.JChannel.up(JChannel.java:707) [jgroups-3.2.7.Final-redhat-1.jar:3.2.7.Final-redhat-1]
> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1020) [jgroups-3.2.7.Final-redhat-1.jar:3.2.7.Final-redhat-1]
> at org.jgroups.protocols.RSVP.up(RSVP.java:172) [jgroups-3.2.7.Final-redhat-1.jar:3.2.7.Final-redhat-1]
> at org.jgroups.protocols.FRAG2.up(FRAG2.java:181) [jgroups-3.2.7.Final-redhat-1.jar:3.2.7.Final-redhat-1]
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:400) [jgroups-3.2.7.Final-redhat-1.jar:3.2.7.Final-redhat-1]
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:418) [jgroups-3.2.7.Final-redhat-1.jar:3.2.7.Final-redhat-1]
> at org.jgroups.protocols.pbcast.GMS.up(GMS.java:896) [jgroups-3.2.7.Final-redhat-1.jar:3.2.7.Final-redhat-1]
> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245) [jgroups-3.2.7.Final-redhat-1.jar:3.2.7.Final-redhat-1]
> at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:453) [jgroups-3.2.7.Final-redhat-1.jar:3.2.7.Final-redhat-1]
> at org.jgroups.protocols.pbcast.NAKACK2.handleMessage(NAKACK2.java:721) [jgroups-3.2.7.Final-redhat-1.jar:3.2.7.Final-redhat-1]
> at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:574) [jgroups-3.2.7.Final-redhat-1.jar:3.2.7.Final-redhat-1]
> at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:187) [jgroups-3.2.7.Final-redhat-1.jar:3.2.7.Final-redhat-1]
> at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288) [jgroups-3.2.7.Final-redhat-1.jar:3.2.7.Final-redhat-1]
> at org.jgroups.protocols.MERGE2.up(MERGE2.java:205) [jgroups-3.2.7.Final-redhat-1.jar:3.2.7.Final-redhat-1]
> at org.jgroups.protocols.Discovery.up(Discovery.java:359) [jgroups-3.2.7.Final-redhat-1.jar:3.2.7.Final-redhat-1]
> at org.jgroups.protocols.TP$ProtocolAdapter.up(TP.java:2616) [jgroups-3.2.7.Final-redhat-1.jar:3.2.7.Final-redhat-1]
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1263) [jgroups-3.2.7.Final-redhat-1.jar:3.2.7.Final-redhat-1]
> at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1825) [jgroups-3.2.7.Final-redhat-1.jar:3.2.7.Final-redhat-1]
> at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1798) [jgroups-3.2.7.Final-redhat-1.jar:3.2.7.Final-redhat-1]
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_37]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_37]
> at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_37]
> {code}
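The immediate NPE could be avoided with a defensive guard along the lines of the sketch below (illustration only, not the actual patch; the real fix also needs to explain why a REBALANCE_START with a null pendingCH was sent at all, and the exact factory/union signatures may differ between versions):
{code}
import org.infinispan.distribution.ch.DefaultConsistentHash;
import org.infinispan.distribution.ch.DefaultConsistentHashFactory;

// Hypothetical caller-side guard: when the rebalance carries no pending CH,
// keep the current CH instead of passing null into DefaultConsistentHash.union().
public class UnionGuard {
   public static DefaultConsistentHash unionOrCurrent(DefaultConsistentHashFactory factory,
                                                      DefaultConsistentHash currentCH,
                                                      DefaultConsistentHash pendingCH) {
      if (pendingCH == null) {
         return currentCH;   // nothing to rebalance towards
      }
      return factory.union(currentCH, pendingCH);
   }
}
{code}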
[JBoss JIRA] (ISPN-2808) Make Infinispan use its own thread pool for sending messages in order to avoid thread deadlocks
by Radim Vansa (JIRA)
[ https://issues.jboss.org/browse/ISPN-2808?page=com.atlassian.jira.plugin.... ]
Radim Vansa commented on ISPN-2808:
-----------------------------------
The deadlock will eventually time out (after sync.replTimeout), and if the nodes that stopped communicating are not removed from the cluster, the operations may continue. A partial solution is provided in JGRP-1599, so that the node keeps receiving heartbeats. Alternatively, the FD protocols (failure detection) may be disabled - then the nodes are not removed.
> Make Infinispan use its own thread pool for sending messages in order to avoid thread deadlocks
> -----------------------------------------------------------------------------------------------
>
> Key: ISPN-2808
> URL: https://issues.jboss.org/browse/ISPN-2808
> Project: Infinispan
> Issue Type: Feature Request
> Reporter: Mircea Markus
> Assignee: Pedro Ruivo
> Fix For: 5.3.0.Beta1, 5.3.0.Final
>
>
> - when an OOB thread sends a sync request it blocks waiting on a sync in jgroups RequestCorrelator
> - it gets released by another OOB thread when the remote node responds
> Now, if all the OOB threads are blocked in sending, there is no OOB thread available to unblock them even if the responses from the remote nodes have already arrived - deadlock. In order to avoid this deadlock we can use a different thread pool for sending OOB messages.
> For a discussion around this please refer to: http://infinispan.markmail.org/search/#query:%20list%3Aorg.jboss.lists.in...
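The separate-pool idea could be sketched roughly as below (a hypothetical wrapper, not the actual Infinispan/JGroups implementation): blocking sync sends are handed off to a dedicated executor, so OOB threads stay free to deliver the responses that unblock the senders.
{code}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of the proposal: the OOB (receiver) thread submits the
// blocking sync send to a dedicated sender pool and returns immediately,
// instead of blocking in the RequestCorrelator itself.
public class SenderPool {
   private final ExecutorService senders = Executors.newFixedThreadPool(8);

   public <T> Future<T> sendSync(Callable<T> blockingRpc) {
      return senders.submit(blockingRpc);   // the wait happens on a sender thread
   }

   public void stop() {
      senders.shutdown();
   }
}
{code}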
[JBoss JIRA] (ISPN-2892) View installation loop when restarting cache on multiple nodes
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-2892?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-2892:
------------------------------------
Dennis, can you describe the scenario in more detail? Are we talking about restarting cache managers together with their JChannels, cache managers but not their JChannels, or just individual caches?
Normally, if CacheViewsManagerImpl.isRunning() returns false, it means that the current node is shutting down and surviving nodes will pick up another JGroups coordinator, which should restart the cache view installation (with a higher view id). Since that's not happening, I'm thinking that maybe the cache manager on the coordinator is shut down, but the JGroups channel keeps running (which we don't support AFAIK).
It's also odd that the remote node would throw a "Received cache view prepare request after the local node has already shut down" exception while it's joining, because the first thing CacheViewsManagerImpl.join() does is install the StateTransferManager as a listener, and the listener is not removed until the cache is stopped. I'm not sure what to make of that, but if it's a race condition I believe waiting for a few seconds between stop and start should work around the problem.
> View installation loop when restarting cache on multiple nodes
> --------------------------------------------------------------
>
> Key: ISPN-2892
> URL: https://issues.jboss.org/browse/ISPN-2892
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 5.1.7.Final
> Reporter: Dennis Reed
> Assignee: Mircea Markus
>
> Restarting a cache on multiple nodes at the same time can cause the following error:
> ERROR [org.infinispan.cacheviews.CacheViewsManagerImpl] (CacheViewInstaller-19,node1/web) ISPN000172: Failed to prepare view CacheView{viewId=18, members=[node2/web]} for cache default-host/test, rolling back to view CacheView{viewId=17, members=[node1/web, node2/web]}: java.util.concurrent.ExecutionException: org.infinispan.CacheException: java.lang.IllegalStateException: default-host/test: Received cache view prepare request after the local node has already shut down
> After the initial error, the following error began repeating every second for a few minutes until BaseStateTransferManagerImpl.waitForJoinToComplete() timed out and the cache failed to start:
> ERROR [org.infinispan.cacheviews.CacheViewsManagerImpl] (CacheViewInstaller-19,node1/web) ISPN000172: Failed to prepare view CacheView{viewId=21, members=[node2/web]} for cache default-host/test, rolling back to view CacheView{viewId=20, members=[]}: java.util.concurrent.ExecutionException: org.infinispan.CacheException: java.lang.IllegalStateException: Cannot prepare new view CacheView{viewId=21, members=[node2/web]} on cache default-host/test, we are currently preparing view CacheView{viewId=18, members=[node2/web]}
[JBoss JIRA] (ISPN-2504) WriteSkew check fails for entries which are inserted first time
by Pedro Ruivo (JIRA)
[ https://issues.jboss.org/browse/ISPN-2504?page=com.atlassian.jira.plugin.... ]
Work on ISPN-2504 started by Pedro Ruivo.
> WriteSkew check fails for entries which are inserted first time
> ---------------------------------------------------------------
>
> Key: ISPN-2504
> URL: https://issues.jboss.org/browse/ISPN-2504
> Project: Infinispan
> Issue Type: Bug
> Components: Locking and Concurrency
> Affects Versions: 5.2.0.Beta3
> Reporter: Radim Vansa
> Assignee: Pedro Ruivo
> Fix For: 5.3.0.Final
>
>
> If optimistic locking and write skew check are configured and there are two concurrent transactions performing
> {code}
> read(key) -> null
> write(key, value)
> {code}
> one of them should fail (if both read {{null}}). However, both transactions succeed in this case. The reason is that the {{VersionedPrepareCommand}} has a {{null}} version for the key (because the value read was null), but in {{WriteSkewHelper.performWriteSkewCheckAndReturnNewVersions}} there is
> {code}
> EntryVersion versionSeen = prepareCommand.getVersionsSeen().get(k);
> if (versionSeen != null) entry.setVersion(versionSeen);
> {code}
> Because the {{entry}} contains the version injected into the context from the {{dataContainer}} in {{EntryFactoryImpl.wrapInternalCacheEntryForPut}} later, during the {{VersionedPrepareCommand}} execution, and that version is not overwritten with the {{getVersionsSeen()}} value (as this is null), {{performWriteSkewCheck}} does not report this entry as changed.
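The intended behavior can be illustrated with a small sketch (hypothetical names, not the real WriteSkewHelper code): a {{null}} version seen means the transaction read a non-existent entry, so any version now present in the data container must count as a write skew.
{code}
// Hypothetical sketch, illustration only -- the names do not match the real
// WriteSkewHelper code.
public class WriteSkewSketch {
   public static boolean writeSkewDetected(Object versionSeenAtRead, Object versionInContainer) {
      if (versionSeenAtRead == null) {
         // The tx read null; if someone created the entry in the meantime,
         // the container now holds a version and the check must fail.
         return versionInContainer != null;
      }
      return !versionSeenAtRead.equals(versionInContainer);
   }
}
{code}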
[JBoss JIRA] (ISPN-2891) Calls to replace don't update cache until after tx commits
by Jim Crossley (JIRA)
[ https://issues.jboss.org/browse/ISPN-2891?page=com.atlassian.jira.plugin.... ]
Jim Crossley commented on ISPN-2891:
------------------------------------
Neither. This is all with local caches.
> Calls to replace don't update cache until after tx commits
> ----------------------------------------------------------
>
> Key: ISPN-2891
> URL: https://issues.jboss.org/browse/ISPN-2891
> Project: Infinispan
> Issue Type: Bug
> Components: Transactions
> Affects Versions: 5.2.1.Final
> Reporter: Jim Crossley
> Assignee: Galder Zamarreño
> Labels: 5.2.x
> Fix For: 5.3.0.Alpha1, 5.3.0.Final
>
> Attachments: bad.log, good.log, ugly.log
>
>
> Since upgrading our AS7.2 dependency in Immutant (transitively pulling in 5.2.1.Final), one of our integration tests has begun failing intermittently on our CI server. We've yet to see the failure in local runs, only on CI, so I suspect there's a race condition involved.
> The two tests (one for optimistic locking, the other for pessimistic) integrate an Infinispan cache (on which the Immutant cache is built) with HornetQ and XA transactions. A number of queue listeners respond to messages by attempting to increment a value in the cache. The failure occurs with both locking schemes, but much more often with optimistic.
> We've confirmed the failure on 5.2.2 as well.
> Attached you'll find three traces of the optimistic test: the good, the bad, and the ugly. All three correspond to this test: https://github.com/immutant/immutant/blob/31a2ef6222088ccb828898e9e3e4531...
> That way you can correlate the log messages prefixed with "JC:" in the traces to the code. Note in particular the last two lines in locking.clj: a logged message containing the count, and then an assertion of the count. Note that the "bad" trace was an actual failing test, but the "ugly" trace was a successful test, even though the trace clearly shows the count logged as 2, not 3. The Infinispan TRACE output clearly shows the value as 3, hence the ugliness of this test.
> It's important to understand that the "work" function occurs within an XA transaction. This means, as I understand it, that if three messages are published to "/queue/done", the cached count should equal 3. Line #44 in locking.clj will block until it receives 3 messages, after which the cached count should be 3.
> These tests always pass locally. They only ever fail on CI, which runs *very* slowly.
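The counting pattern in question boils down to a read-then-replace increment inside a JTA transaction; a minimal sketch, assuming a transactional Infinispan cache (the names and setup are illustrative, not taken from the Immutant test):
{code}
import javax.transaction.TransactionManager;

import org.infinispan.Cache;

// Minimal sketch of the increment-inside-a-transaction pattern discussed
// above, illustration only. After three committed increments the stored
// count is expected to be 3.
public class CounterSketch {
   public static void increment(Cache<String, Long> cache, String key) throws Exception {
      TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
      tm.begin();
      boolean success = false;
      try {
         Long current = cache.get(key);
         if (current == null) {
            cache.putIfAbsent(key, 1L);
         } else {
            // With optimistic locking a concurrent update makes this replace
            // (or the commit, via the write skew check) fail, and the caller retries.
            cache.replace(key, current, current + 1L);
         }
         success = true;
      } finally {
         if (success) {
            tm.commit();
         } else {
            tm.rollback();
         }
      }
   }
}
{code}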
[JBoss JIRA] (ISPN-2895) org.infinispan.lucene.InfinispanDirectoryIOTest.testReadWholeFile fails randomly
by Mircea Markus (JIRA)
[ https://issues.jboss.org/browse/ISPN-2895?page=com.atlassian.jira.plugin.... ]
Mircea Markus updated ISPN-2895:
--------------------------------
Fix Version/s: 5.3.0.Alpha1
> org.infinispan.lucene.InfinispanDirectoryIOTest.testReadWholeFile fails randomly
> --------------------------------------------------------------------------------
>
> Key: ISPN-2895
> URL: https://issues.jboss.org/browse/ISPN-2895
> Project: Infinispan
> Issue Type: Bug
> Components: Lucene Directory
> Affects Versions: 5.2.2.Final
> Reporter: Anna Manukyan
> Assignee: Sanne Grinovero
> Labels: testsuite_stability
> Fix For: 5.3.0.Alpha1
>
>
> The test fails randomly and the error message is:
> {code}
> java.lang.NullPointerException
> at org.infinispan.lucene.DirectoryIntegrityCheck.verifyDirectoryStructure(DirectoryIntegrityCheck.java:76)
> at org.infinispan.lucene.DirectoryIntegrityCheck.verifyDirectoryStructure(DirectoryIntegrityCheck.java:56)
> at org.infinispan.lucene.InfinispanDirectoryIOTest.verifyOnBuffer(InfinispanDirectoryIOTest.java:184)
> at org.infinispan.lucene.InfinispanDirectoryIOTest.testReadWholeFile(InfinispanDirectoryIOTest.java:153)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:601)
> at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:80)
> at org.testng.internal.Invoker.invokeMethod(Invoker.java:715)
> at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:907)
> at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1237)
> at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
> at org.testng.TestRunner.privateRun(TestRunner.java:767)
> at org.testng.TestRunner.run(TestRunner.java:617)
> at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
> at org.testng.SuiteRunner.access$000(SuiteRunner.java:37)
> at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:368)
> at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:722)
> {code}