[JBoss JIRA] (ISPN-8079) QueryInterceptor : Missing value during early rebalance in functional commands
by Radim Vansa (JIRA)
[ https://issues.jboss.org/browse/ISPN-8079?page=com.atlassian.jira.plugin.... ]
Radim Vansa resolved ISPN-8079.
-------------------------------
Resolution: Duplicate Issue
> QueryInterceptor : Missing value during early rebalance in functional commands
> ------------------------------------------------------------------------------
>
> Key: ISPN-8079
> URL: https://issues.jboss.org/browse/ISPN-8079
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 9.1.0.Final
> Reporter: Katia Aresti
> Assignee: Katia Aresti
>
> Functional commands, including ComputeCommand, calculate the value to be set or removed inside the lambda while the command is executing. If the previous value has to be removed for any reason, it has to be picked up from the context.
> In some rare cases, during topology changes, the value might not be in the context, so it can't be retrieved or removed. In that particular case, the method provided by ISPN-7990 has to be used to delete the key in every context.
> In this issue we need to add a reproducer for the problem and fix it.
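For illustration, the kind of user-level call that ends up as a ComputeCommand (a minimal sketch using the public {{ConcurrentMap}}-style API on an Infinispan {{Cache}}; the interceptor/context handling described above is internal and not shown):
{code}
import org.infinispan.Cache;

public class ComputeRemoveExample {
   // A remapping function that returns null asks for the entry's removal.
   // Internally, the command has to read the previous value from the
   // invocation context; during an early rebalance that context entry may
   // be missing, which is the bug described above.
   static void decrementOrRemove(Cache<String, Integer> cache, String key) {
      cache.compute(key, (k, v) -> (v == null || v <= 1) ? null : v - 1);
   }
}
{code}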
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-8177) OSGi integration tests ignored error messages
by Dan Berindei (JIRA)
Dan Berindei created ISPN-8177:
----------------------------------
Summary: OSGi integration tests ignored error messages
Key: ISPN-8177
URL: https://issues.jboss.org/browse/ISPN-8177
Project: Infinispan
Issue Type: Bug
Components: Build process
Affects Versions: 9.1.0.Final
Reporter: Dan Berindei
Priority: Critical
Fix For: 9.1.1.Final
This is logged at start:
{noformat}
Exception in thread "JMX Connector Thread [service:jmx:rmi://0.0.0.0:44444/jndi/rmi://0.0.0.0:1099/karaf-root]" java.lang.RuntimeException:
Port already in use: 44444;
You may have started two containers. If you need to start a second container or the default ports are already in use update the config file etc/org.apache.karaf.management.cfg and change the Registry Port and Server Port to unused ports
at org.apache.karaf.management.ConnectorServerFactory$1.run(ConnectorServerFactory.java:278)
{noformat}
And at the end:
{noformat}
[ERROR] There are test failures.
Please refer to /home/infinispan/workspace/Infinispan_master-5OURPVBKVS5PLGVSKVWLBFUZPLIO2DZSV3U7HWSZJBYGBGKYSFXQ/integrationtests/osgi/target/surefire-reports for the individual test results.
Please refer to dump files (if any exist) [date]-jvmRun[N].dump, [date].dumpstream and [date]-jvmRun[N].dumpstream.
The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
Command was /bin/sh -c cd /home/infinispan/workspace/Infinispan_master-5OURPVBKVS5PLGVSKVWLBFUZPLIO2DZSV3U7HWSZJBYGBGKYSFXQ/integrationtests/osgi && /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.141-1.b16.el7_3.x86_64/jre/bin/java -Xmx2G -jar /home/infinispan/workspace/Infinispan_master-5OURPVBKVS5PLGVSKVWLBFUZPLIO2DZSV3U7HWSZJBYGBGKYSFXQ/integrationtests/osgi/target/surefire/surefirebooter7771544774109157019.jar /home/infinispan/workspace/Infinispan_master-5OURPVBKVS5PLGVSKVWLBFUZPLIO2DZSV3U7HWSZJBYGBGKYSFXQ/integrationtests/osgi/target/surefire 2017-08-04T07-25-46_539-jvmRun1 surefire3570357305639890359tmp surefire_428610915757147431769tmp
Error occurred in starting fork, check output in log
Process Exit Code: 1
org.apache.maven.surefire.booter.SurefireBooterForkException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
Command was /bin/sh -c cd /home/infinispan/workspace/Infinispan_master-5OURPVBKVS5PLGVSKVWLBFUZPLIO2DZSV3U7HWSZJBYGBGKYSFXQ/integrationtests/osgi && /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.141-1.b16.el7_3.x86_64/jre/bin/java -Xmx2G -jar /home/infinispan/workspace/Infinispan_master-5OURPVBKVS5PLGVSKVWLBFUZPLIO2DZSV3U7HWSZJBYGBGKYSFXQ/integrationtests/osgi/target/surefire/surefirebooter7771544774109157019.jar /home/infinispan/workspace/Infinispan_master-5OURPVBKVS5PLGVSKVWLBFUZPLIO2DZSV3U7HWSZJBYGBGKYSFXQ/integrationtests/osgi/target/surefire 2017-08-04T07-25-46_539-jvmRun1 surefire3570357305639890359tmp surefire_428610915757147431769tmp
Error occurred in starting fork, check output in log
Process Exit Code: 1
at org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:679)
at org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:533)
at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:279)
at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:243)
at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1077)
at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:907)
at org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:785)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:863)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:199)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
{noformat}
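For reference, the ports the Karaf warning complains about are configured in {{etc/org.apache.karaf.management.cfg}}. A sketch of the relevant entries, assuming the standard Karaf property names; moving them to unused values avoids the clash when two containers run on the same host:
{noformat}
# etc/org.apache.karaf.management.cfg
rmiRegistryPort = 11099
rmiServerPort   = 44445
serviceUrl = service:jmx:rmi://0.0.0.0:${rmiServerPort}/jndi/rmi://0.0.0.0:${rmiRegistryPort}/karaf-root
{noformat}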
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-8027) Random failures in data changes in remote Hibernate cache strategies
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-8027?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño commented on ISPN-8027:
----------------------------------------
The problem is the result of the asynchronous put operation encountering an outdated topology. See [log snippet|https://gist.github.com/galderz/5707586a85776c4b31df23de1068f3fa].
As [~dan.berindei] said, since the put is asynchronous, there are no retries:
{code}
<dberindei> galderz: I don't see the originator receiving the OTE
either
> yeah, the remote node does not send back the originator
<dberindei> galderz: or maybe you're just not logging JGroupsTransport
messages?
> no, i'm logging everything
<dberindei> galderz: ok, that means "notifying the originator" is a
lie :)
> maybe that's the issue, that the remote node swallows it
<dberindei> galderz: I think it might be the FORCE_ASYNCHRONOUS
<dberindei> galderz: if the command is sent with
ResponseMode.ASYNCHRONOUS, the message won't include a request id
<dberindei> galderz: so the target node can't send a response back to
say the topology changed
<dberindei> galderz: the "notifying the originator" message is logged
when the BaseBlockingRunnable's response is set, but if the
command is async it won't actually send anything
{code}
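To make the failure mode concrete, a toy model (hypothetical names, not the actual {{JGroupsTransport}} code) of why a command sent with {{ResponseMode.ASYNCHRONOUS}} can never be retried:
{code}
import java.util.concurrent.CompletableFuture;

// Toy model: an ASYNCHRONOUS request carries no request id, so there is no
// pending future the target could complete with an OutdatedTopologyException,
// and the originator therefore never retries.
public class AsyncRpcModel {
   enum ResponseMode { SYNCHRONOUS, ASYNCHRONOUS }

   static CompletableFuture<Object> invoke(Object command, ResponseMode mode) {
      if (mode == ResponseMode.ASYNCHRONOUS) {
         send(command, 0);                                // no id: fire-and-forget
         return CompletableFuture.completedFuture(null);  // errors on the target are lost
      }
      long requestId = System.nanoTime();                 // stand-in for a real id generator
      send(command, requestId);
      return awaitResponse(requestId);                    // sync path can see the OTE and retry
   }

   static void send(Object command, long requestId) { /* network send elided */ }
   static CompletableFuture<Object> awaitResponse(long id) { return new CompletableFuture<>(); }
}
{code}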
> Random failures in data changes in remote Hibernate cache strategies
> --------------------------------------------------------------------
>
> Key: ISPN-8027
> URL: https://issues.jboss.org/browse/ISPN-8027
> Project: Infinispan
> Issue Type: Bug
> Components: Hibernate Cache
> Affects Versions: 9.1.0.Final
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Labels: testsuite_stability
>
> Randomly failing test on CI:
> {{org.infinispan.test.hibernate.cache.timestamp.TimestampsRegionImplTest.testEvict[JTA, INVALIDATION_SYNC,AccessType[transactional]]}}
> {code}
> java.lang.AssertionError: expected:<value1> but was:<null>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at org.infinispan.test.hibernate.cache.AbstractGeneralDataRegionTest.lambda$testEvict$4(AbstractGeneralDataRegionTest.java:146)
> at org.infinispan.test.hibernate.cache.AbstractGeneralDataRegionTest.withSessionFactoriesAndRegions(AbstractGeneralDataRegionTest.java:104)
> at org.infinispan.test.hibernate.cache.AbstractGeneralDataRegionTest.testEvict(AbstractGeneralDataRegionTest.java:117)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at org.hibernate.testing.junit4.ExtendedFrameworkMethod.invokeExplosively(ExtendedFrameworkMethod.java:45)
> at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.infinispan.test.hibernate.cache.util.InfinispanTestingSetup$1.evaluate(InfinispanTestingSetup.java:38)
> at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-8176) RemoteCacheStoreIT.testReadOnly random failures
by Gustavo Fernandes (JIRA)
Gustavo Fernandes created ISPN-8176:
---------------------------------------
Summary: RemoteCacheStoreIT.testReadOnly random failures
Key: ISPN-8176
URL: https://issues.jboss.org/browse/ISPN-8176
Project: Infinispan
Issue Type: Bug
Affects Versions: 9.1.0.Final
Reporter: Gustavo Fernandes
java.lang.AssertionError: expected null, but was:<v1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotNull(Assert.java:664)
at org.junit.Assert.assertNull(Assert.java:646)
at org.junit.Assert.assertNull(Assert.java:656)
at org.infinispan.server.test.cs.remote.RemoteCacheStoreIT.testReadOnly(RemoteCacheStoreIT.java:85)
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-8168) LiveRunningTest random failures
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-8168?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes commented on ISPN-8168:
-----------------------------------------
Commenting out the clone issue temporarily, another problem shows up: when the functional command does not have the previous value, it does a {{[delta.merge(null)|https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/atomic/impl/ApplyDelta.java#L60]}}, which results in an empty {{FileListCacheValue}} and thus corrupts the index.
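To illustrate (a simplified model, not the actual {{ApplyDelta}} code): merging a delta into a {{null}} base silently starts from an empty value, so everything previously in the file list is dropped:
{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Simplified model of the pitfall: merge(null) falls back to an empty base
// instead of failing fast or fetching the real previous value.
public class DeltaMergeModel {
   static Set<String> merge(Set<String> base, Set<String> added) {
      Set<String> result = base != null ? new HashSet<>(base)
                                        : new HashSet<>(); // "empty FileListCacheValue"
      result.addAll(added);
      return result;
   }

   public static void main(String[] args) {
      // Previous value missing from the context -> base is null -> the merged
      // list only contains the delta; the rest of the index's files are gone.
      System.out.println(merge(null, new HashSet<>(Arrays.asList("segments_1"))));
   }
}
{code}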
> LiveRunningTest random failures
> -------------------------------
>
> Key: ISPN-8168
> URL: https://issues.jboss.org/browse/ISPN-8168
> Project: Infinispan
> Issue Type: Bug
> Components: Lucene Directory
> Affects Versions: 9.1.0.Final
> Reporter: Gustavo Fernandes
> Assignee: Gustavo Fernandes
> Labels: testsuite_stability
> Attachments: trace.zip
>
>
> The test fails very often with
> {noformat}
> Caused by: org.apache.lucene.index.IndexNotFoundException: no segments* file found in InfinispanDirectory{indexName='emails'}: files: []
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:726)
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:683)
> {noformat}
> The cache entry that contains the list of files of the Lucene directory (FileListCacheValue) is for some reason empty, although the index is not. The missing FileListCacheValue makes the index reader think the index is empty, hence the error.
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-6940) Unavailable servers with Replication timeout exception
by Bogdan Sikora (JIRA)
[ https://issues.jboss.org/browse/ISPN-6940?page=com.atlassian.jira.plugin.... ]
Bogdan Sikora updated ISPN-6940:
--------------------------------
Tester: (was: Bogdan Sikora)
> Unavailable servers with Replication timeout exception
> ------------------------------------------------------
>
> Key: ISPN-6940
> URL: https://issues.jboss.org/browse/ISPN-6940
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 8.2.3.Final
> Reporter: Bogdan Sikora
> Priority: Critical
> Attachments: clusterbench.war
>
>
> Exception in log after every request
> {noformat}
> ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (default task-1) ISPN000136: Error executing command GetKeyValueCommand, writing keys []: org.infinispan.util.concurrent.TimeoutException: Replication timeout for jboss-eap-7.1-1
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.checkRsp(JGroupsTransport.java:801)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$invokeRemotelyAsync$1(JGroupsTransport.java:642)
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
> at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.staggeredProcessNext(CommandAwareRpcDispatcher.java:375)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.lambda$processCallsStaggered$3(CommandAwareRpcDispatcher.java:357)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> 09:46:25,427 ERROR [io.undertow.request] (default task-1) UT005023: Exception handling request to /clusterbench/jvmroute: org.infinispan.util.concurrent.TimeoutException: Replication timeout for jboss-eap-7.1-1
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.checkRsp(JGroupsTransport.java:801)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$invokeRemotelyAsync$1(JGroupsTransport.java:642)
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
> at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.staggeredProcessNext(CommandAwareRpcDispatcher.java:375)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.lambda$processCallsStaggered$3(CommandAwareRpcDispatcher.java:357)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> But it never disabled the whole server; for every request, EAP generates this pretty page with the replication timeout exception:
> {noformat}
> <html><head><title>ERROR</title><style>body {
> font-family: "Lucida Grande", "Lucida Sans Unicode", "Trebuchet MS", Helvetica, Arial, Verdana, sans-serif;
> margin: 5px;
> }
> .header {
> background-image: linear-gradient(bottom, rgb(153,151,153) 8%, rgb(199,199,199) 54%);
> background-image: -o-linear-gradient(bottom, rgb(153,151,153) 8%, rgb(199,199,199) 54%);
> background-image: -moz-linear-gradient(bottom, rgb(153,151,153) 8%, rgb(199,199,199) 54%);
> background-image: -webkit-linear-gradient(bottom, rgb(153,151,153) 8%, rgb(199,199,199) 54%);
> background-image: -ms-linear-gradient(bottom, rgb(153,151,153) 8%, rgb(199,199,199) 54%);
>
> background-image: -webkit-gradient(
> linear,
> left bottom,
> left top,
> color-stop(0.08, rgb(153,151,153)),
> color-stop(0.54, rgb(199,199,199))
> );
> color: black;
> padding: 2px;
> font-weight: normal;
> border: solid 1px;
> font-size: 170%;
> text-align: left;
> vertical-align: middle;
> height: 32px;
> }
> .error-div {
> display: inline-block; width: 32px; height: 32px; background: url('data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAABmJLR0QAAAAAAAD5Q7t/AAAACXBIWXMAAABIAAAASABGyWs+AAAACXZwQWcAAAAgAAAAIACH+pydAAAGGElEQVRYw8WXW2wcVxnHf+fM7NXrdXbdtZ3ipGqCEzvQKmlFoUggwkvvGBoaqVVV5Y0nkCoQPPCAhMQbQlzUh/IWHoh6QUVKZVLxUCGogKZRW9uJkyg0iVzZ2dhre9c7u7M7c87hYWc2M7t2bBASI306O/85+v7/7zLn24H/8yV2u/EcFNvwVNK2X0DKw1qpYaX1gCWlY0m5arRe8JQ6I2DmaVj/nwk4CxO2Zf1aw9dKo6N67+eOZHN7x0gViiSHBmlXa7hrazjLt1i+eKmxUi5bwpg/e8a8PA3X/msBr0MyJ+WvgFOffehY8v7HH5OW78HqGjgOtNrge2AnIJWAgQEYLqBsm3/NvKM/mZ1rYczpRa2/9x3w/iMBM1BCiHfuGRmZevClF9NJ5cP1m9B0+/aa3t/ZDNw3TtuSfPy737trK6sXtTGPPwOruxLwNhSEEHMHpqZGDz93wubKFdio3ZXUGNOHs2cIDh/g6pt/9G8uXLnlGPP5k1Dt5bN60m6lhXj34OTkxKET37SZm4d6oz/KgFRH7yOrBozbwlTWGf7ql6RZrWTcytrxaTj9Ro9OGb3JS/mz0nDxgYnnvmWbuUsdJ6FTY2JmImShqZA83N90UbOXOXDiG4nCcPFoTsqfbluCP8DenJQLx1/+7pBcXILaZl9qo2svFmalb48xkM9hxkf5+29+u9HSenIayn0ZyEr5i0NHpnKy7aGrtW6UehsLozRBRlRQEh3BwzLpWh3h++yfPJRLSvnzvhK8DkPamOl7n3nCUjcXtyXuK0X0WViaAFc9QtSNJe594uu2MubZGcjHBGTgqeFiQQml0K12nDgSpd4iyi4uRMci6Q8xJSWq7SGEZCg/qD14LCYgIcTzoxMHc6qygYk40aETKTFSdh2aANOWhbEsjJSYCK6C/V0LcH+9xvCB/TlbiBdCAXbQMJPZ/fvQjUbXEULAa6/1deyuh0dwNU+e7DSzEOhmi8xn9iI+ujgVE6ChlBwpYcqVjuIgA3IXBMYYtNbd30IIpJQI0ZEa+gPQnk9ieA8GSrESaGNyiWIBrVQntUHqdrq01iiluo0WilBK4fs+WutOr4S9YAyJoRzamMGYAClEve046ETyTr13EKCU6kZ+N4GNCxfwKpVOryQSuOUVhBCbsRIIuN1cWRkaSCfRrdaOTncijgltNPDn5xHZLIn7x3GNh4CVmADgcv3a9YmBY0cxm06nAQH31KlOao3BL5fxbtzAOE73hAt7JXb6RU7PGO44WK5Do1ZHw6WYAN+YM+X5S8dLXziWizVNvY5aWcFfWuq8Idsctb1Yr8DwmRwrUr1yfVMbcyYmwIWZ2mbdai9+gvfBAsaAbrUwrrtllLuJPibIGGQ2hUlIHLdlA+diTXgSqkKIt5b/OevL+8bwqlWU63anW3jyhaa4c9T2Hs0hrsL5EOCJQ/sov7/gS8Mb07AZEwDgav2D5aVy3eSziEL+Dllk0PSRcmcMqx6BUeGykINsksrttboPP4w2aVfACVhGiFevvft+I/XwJCad3DF6tc2MiD4zmRSZhyZY/OusI4V4JTqKYwIA6lr/2Gm6s5/+7UMv/egDmEyqL7VbRR/LViR6sikGvniYpffm2m6r/VFd65/0vqZ9R/tbsMeG+WKpMDL+lWMJ58Jl/NVqvOG26PAoDmCXhhg4epDl9+a9zUp1uQkPbvWfcMvZchbuEXAuk0kf2X/84YxpuDQv3kA5zW1fw9CswSyZyXFEJsmnf/m46bntuTY8+SxUtuLadri9CokxeEUK8WJhtJgaeeSIxG3RurWGv+Gg3RbaU5CwkOkU1lCWxFgBmbS4ff6qX7297rXh9Gn4/llobMcjtsFywACQ+zZMTsOP8vBIOpNS+bFiJj2cxx7MYg+k8ZwmrWqD9lqNjVvrTc9tWRtw/k345dtwFagDTmTdUUA2EDAYXffB2JPw6FH4ch7GUpCzIemD50J9A1Y+hH/8Cc4vdTq9Tud9D9dNOt+MajclyNL53xZmIhusmcBSQJLOd4UJnHqACzTppDy0kHizl/yuPRB5nggIM0A6IE4GuA34EQFtoBUQuwGm7kbwb+eaEEXmuV5dAAAAJXRFWHRjcmVhdGUtZGF0ZQAyMDA5LTExLTEwVDE5OjM4OjI0LTA3OjAwdDKp4gAAACV0RVh0ZGF0ZTpjcmVhdGUAMjAxMC0wMi0yMFQyMzoyNjoyNC0wNzowMC7DUNYAAAAldEVYdGRhdGU6bW9kaWZ5ADIwMTAtMDEtMTFUMDg6NTc6MzUtMDc6MDCruapPAAAAMnRFWHRMaWNlbnNlAGh0dHA6Ly9lbi53aWtpcGVkaWEub3JnL3dpa2kvUHVibGljX2RvbWFpbj/96s8AAAAldEVYdG1vZGlmeS1kYXRlADIwMDktMTEtMTBUMTk6Mzg6MjQtMDc6MDArg9/WAAAAGXRFWHRTb3VyY2UAVGFuZ28gSWNvbiBMaWJyYXJ5VM/tggAAADp0RVh0U291cmNlX1VSTABodHRwOi8vdGFuZ28uZnJlZWRlc2t0b3Aub3JnL1RhbmdvX0ljb25fTGlicmFyebzIrdYAAAAASUVORK5CYII=') left center no-repeat;
> }.error-text-div {
> display: inline-block; vertical-align: top; height: 32px;}.label { font-weight:bold; display: inline-block;}.value { display: inline-block;}</style></head><body><div class="header"><div class="error-div"></div><div class="error-text-div">Error processing request</div></div><div class="label">Context Path:</div><div class="value">/clusterbench</div><br/><div class="label">Servlet Path:</div><div class="value">/jvmroute</div><br/><div class="label">Path Info:</div><div class="value">null</div><br/><div class="label">Query String:</div><div class="value">null</div><br/><b>Stack Trace</b><br/>org.infinispan.util.concurrent.TimeoutException: Replication timeout for jboss-eap-7.1-1<br/>org.infinispan.remoting.transport.jgroups.JGroupsTransport.checkRsp(JGroupsTransport.java:801)<br/>org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$invokeRemotelyAsync$1(JGroupsTransport.java:642)<br/>java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)<br/>java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)<br/>java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)<br/>java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)<br/>org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.staggeredProcessNext(CommandAwareRpcDispatcher.java:375)<br/>org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.lambda$processCallsStaggered$3(CommandAwareRpcDispatcher.java:357)<br/>java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)<br/>java.util.concurrent.FutureTask.run(FutureTask.java:266)<br/>java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)<br/>java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)<br/>java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)<br/>java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)<br/>java.lang.Thread.run(Thread.java:745)<br/></body></html>
> {noformat}
> Reproducing
> # Unzip jboss-eap-7.1.0.DR2.zip
> # copy recursively (cp -R) so you have 3 jboss-eap folders (jboss-eap-7.1, jboss-eap-7.1-2, jboss-eap-7.1-3)
> # open configuration file standalone-ha.xml
> # add an offset to the server ports (management-http, management-https, ajp, http, https) to avoid conflicts (see the sketch after these steps)
> # add system properties with whatever names/values you want:
> {noformat}
> <system-properties>
> <property name="jboss.mod_cluster.jvmRoute" value="jboss-eap-7.1-3"/>
> <property name="jboss.node.name" value="jboss-eap-7.1-3"/>
> </system-properties>
> {noformat}
> # You should probably change the IP from localhost to prevent the firewall from misbehaving
> # copy clusterbench.war to standalone/deployments
> # start servers with standalone-ha.xml
> # curl {noformat} ${YOUR_SET_IP_ADDRESS}:${SERVER_PORT}/clusterbench/jvmroute {noformat}
> # one of the servers should return 500; if not, restart one of them
> I had the third server unable to respond; when I turned off the second one, the third came back to life. After starting the second one again, the third stopped responding again. Sometimes more than one server stops responding; I have had a scenario where the whole cluster stopped responding.
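A quick way to apply the port offsets from the steps above without editing the XML (a sketch, assuming the standard WildFly/EAP socket-binding system property):
{noformat}
# second instance: shift every socket binding by 100 (8080 -> 8180, 9990 -> 10090, ...)
jboss-eap-7.1-2/bin/standalone.sh -c standalone-ha.xml \
  -Djboss.socket.binding.port-offset=100 \
  -Djboss.node.name=jboss-eap-7.1-2
{noformat}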
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-7880) The HotRod server build fails if test keys already exist
by Radim Vansa (JIRA)
[ https://issues.jboss.org/browse/ISPN-7880?page=com.atlassian.jira.plugin.... ]
Radim Vansa commented on ISPN-7880:
-----------------------------------
Hi Dan, I am getting a warning since there are two antrun plugin declarations in {{infinispan-cachestore-remote}} - could you merge them into a single plugin declaration with just two executions?
> The HotRod server build fails if test keys already exist
> --------------------------------------------------------
>
> Key: ISPN-7880
> URL: https://issues.jboss.org/browse/ISPN-7880
> Project: Infinispan
> Issue Type: Bug
> Components: Build process
> Affects Versions: 9.0.1.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 9.1.0.Alpha1
>
>
> Reproducible by invoking `mvn install` twice (without `clean`):
> {noformat}
> [INFO] --- keytool-maven-plugin:1.5:generateKeyPair (password_server) @ infinispan-server-hotrod ---
> [INFO] /bin/sh -c cd /home/dan/Work/infinispan/server/hotrod && /home/dan/Tools/jdk1.8.0_121/jre/../bin/keytool -genkeypair -v -keystore /home/dan/Work/infinispan/server/hotrod/target/test-classes/password_server_keystore.jks -storepass secret -alias default -dname 'CN=HotRod_1,OU=Infinispan,O=JBoss,L=Red Hat,ST=World,C=WW' -keypass secret -validity 365 -keyalg RSA -keysize 2048
> [INFO] keytool error: java.lang.Exception: Key pair not generated, alias <default> already exists
> [INFO] java.lang.Exception: Key pair not generated, alias <default> already exists
> [INFO] at sun.security.tools.keytool.Main.doGenKeyPair(Main.java:1597)
> [INFO] at sun.security.tools.keytool.Main.doCommands(Main.java:966)
> [INFO] at sun.security.tools.keytool.Main.run(Main.java:343)
> [INFO] at sun.security.tools.keytool.Main.main(Main.java:336)
> {noformat}
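One possible guard (a sketch, not necessarily how it was actually fixed): delete the generated keystore before {{generateKeyPair}} runs, e.g. with an antrun execution bound to an earlier phase:
{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <id>delete-stale-keystores</id>
      <!-- run before keytool-maven-plugin:generateKeyPair -->
      <phase>generate-test-resources</phase>
      <goals><goal>run</goal></goals>
      <configuration>
        <target>
          <!-- a leftover keystore from a previous build makes keytool fail
               with "alias <default> already exists" -->
          <delete file="${project.build.testOutputDirectory}/password_server_keystore.jks"
                  quiet="true"/>
        </target>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}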
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-5083) Hot Rod decoder should use async Cache operations
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-5083?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes commented on ISPN-5083:
-----------------------------------------
[~rvansa] The recent changes to the server solve the "Hot Rod decoder is currently tying up Netty threads as a result of calling up to Infinispan sync operations" part; that's why I closed it.
But you are right, it's still not using the async cache operations. I'll reopen it and let [~galder.zamarreno] and [~rpwburns] comment on it.
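For context, the shape of the change being asked for, as a sketch with hypothetical encoder helpers (in 9.x the async operations return {{CompletableFuture}} rather than the NotifyingFutures mentioned in the description below):
{code}
import io.netty.channel.ChannelHandlerContext;
import org.infinispan.AdvancedCache;

// Sketch: complete the cache operation asynchronously and write the Hot Rod
// reply from the completion callback, instead of blocking the Netty
// event-loop thread on a synchronous cache call.
class AsyncGetHandler {
   void handleGet(ChannelHandlerContext ctx,
                  AdvancedCache<byte[], byte[]> cache, byte[] key) {
      cache.getAsync(key).whenComplete((value, t) -> {
         if (t != null) {
            ctx.writeAndFlush(encodeError(t));           // hypothetical helper
         } else {
            ctx.writeAndFlush(encodeGetResponse(value)); // hypothetical helper
         }
      });
   }

   private Object encodeError(Throwable t) { return t; }     // placeholder
   private Object encodeGetResponse(byte[] v) { return v; }  // placeholder
}
{code}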
> Hot Rod decoder should use async Cache operations
> -------------------------------------------------
>
> Key: ISPN-5083
> URL: https://issues.jboss.org/browse/ISPN-5083
> Project: Infinispan
> Issue Type: Enhancement
> Components: Remote Protocols
> Reporter: Galder Zamarreño
> Assignee: Gustavo Fernandes
> Fix For: 9.2.0.Final
>
>
> Hot Rod decoder is currently tying up Netty threads as a result of calling up to Infinispan sync operations. Instead, Hot Rod decoder should call up async operations, convert the Notifying Futures to Scala Futures, and write up the reply when it's received. This should increase performance, especially under heavy load.
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-5083) Hot Rod decoder should use async Cache operations
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-5083?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes reopened ISPN-5083:
-------------------------------------
> Hot Rod decoder should use async Cache operations
> -------------------------------------------------
>
> Key: ISPN-5083
> URL: https://issues.jboss.org/browse/ISPN-5083
> Project: Infinispan
> Issue Type: Enhancement
> Components: Remote Protocols
> Reporter: Galder Zamarreño
> Assignee: Gustavo Fernandes
> Fix For: 9.2.0.Final
>
>
> Hot Rod decoder is currently tying up Netty threads as a result of calling up to Infinispan sync operations. Instead, Hot Rod decoder should call up async operations, convert the Notifying Futures to Scala Futures, and write up the reply when it's received. This should increase performance, especially under heavy load.
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-8168) LiveRunningTest random failures
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-8168?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes commented on ISPN-8168:
-----------------------------------------
Digging a bit further: the entry is cloned during the functional command execution (probably introduced by ISPN-7029), and this clone sometimes returns a wrong result. We will probably need to tweak FileListCacheValue to ensure all the state is copied correctly.
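A possible shape of the fix, as a sketch (a simplified stand-in, assuming {{FileListCacheValue}} wraps a set of file names; not the actual class):
{code}
import java.util.HashSet;
import java.util.Set;

// Simplified stand-in for FileListCacheValue: the copy must capture the whole
// internal state atomically, otherwise a clone taken during a concurrent
// update can come out empty or partial.
class FileListValue {
   private final Set<String> files = new HashSet<>();

   synchronized void add(String fileName) { files.add(fileName); }

   synchronized FileListValue copy() {
      FileListValue clone = new FileListValue();
      clone.files.addAll(this.files); // snapshot taken under the lock
      return clone;
   }
}
{code}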
> LiveRunningTest random failures
> -------------------------------
>
> Key: ISPN-8168
> URL: https://issues.jboss.org/browse/ISPN-8168
> Project: Infinispan
> Issue Type: Bug
> Components: Lucene Directory
> Affects Versions: 9.1.0.Final
> Reporter: Gustavo Fernandes
> Assignee: Gustavo Fernandes
> Labels: testsuite_stability
> Attachments: trace.zip
>
>
> The test fails very often with
> {noformat}
> Caused by: org.apache.lucene.index.IndexNotFoundException: no segments* file found in InfinispanDirectory{indexName='emails'}: files: []
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:726)
> at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:683)
> {noformat}
> The cache entry that contains the list of files of the Lucene directory (FileListCacheValue) is for some reason empty, although the index is not. The missing FileListCacheValue makes the index reader think the index is empty, hence the error.
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)