[JBoss JIRA] (ISPN-11188) Deprecate ClientEvents.mkCachefailoverEvent
by Gustavo Fernandes (Jira)
[ https://issues.redhat.com/browse/ISPN-11188?page=com.atlassian.jira.plugi... ]
Gustavo Fernandes updated ISPN-11188:
-------------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> Deprecate ClientEvents.mkCachefailoverEvent
> -------------------------------------------
>
> Key: ISPN-11188
> URL: https://issues.redhat.com/browse/ISPN-11188
> Project: Infinispan
> Issue Type: Task
> Reporter: Nistor Adrian
> Assignee: Nistor Adrian
> Priority: Major
> Fix For: 10.1.2.Final, 11.0.0.Final
>
>
> This should have been an internal method. There is nothing a client can do with it except perhaps check that an event is a failover event via an == comparison, but that is not how it should be done: we have an event type enum that better serves that purpose.
> This can be removed in 11.
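The check the issue recommends can be sketched as follows. This is an illustration, not the actual client API: the enum below is a local stand-in for the Hot Rod client's {{ClientEvent.Type}}, and {{isFailover}} is a hypothetical helper. The point is to branch on the event's type rather than compare the event instance against the object returned by {{ClientEvents.mkCachefailoverEvent}}.

```java
// Local stand-in for org.infinispan.client.hotrod.event.ClientEvent.Type;
// in a real client you would call event.getType() on the received event.
enum Type { CLIENT_CACHE_ENTRY_CREATED, CLIENT_CACHE_ENTRY_MODIFIED,
            CLIENT_CACHE_ENTRY_REMOVED, CLIENT_CACHE_FAILOVER }

final class FailoverCheck {
    // Preferred: inspect the event type enum instead of comparing the event
    // instance against the constant from ClientEvents.mkCachefailoverEvent.
    static boolean isFailover(Type eventType) {
        return eventType == Type.CLIENT_CACHE_FAILOVER;
    }
}
```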
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
6 years, 2 months
[JBoss JIRA] (ISPN-11170) Infinispan directory does not work with pre-declared indexed entities when sharing user cache and data cache
by Gustavo Fernandes (Jira)
[ https://issues.redhat.com/browse/ISPN-11170?page=com.atlassian.jira.plugi... ]
Gustavo Fernandes updated ISPN-11170:
-------------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> Infinispan directory does not work with pre-declared indexed entities when sharing user cache and data cache
> ------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-11170
> URL: https://issues.redhat.com/browse/ISPN-11170
> Project: Infinispan
> Issue Type: Bug
> Components: Embedded Querying
> Affects Versions: 8.2.0.Final
> Reporter: Nistor Adrian
> Assignee: Nistor Adrian
> Priority: Major
> Fix For: 9.4.18.Final, 10.1.2.Final, 11.0.0.Final
>
>
> This config combination leads to a circular initialisation dependency of the Infinispan directory, which can lead to deadlock depending on the order in which the user cache and the Infinispan directory's internal caches are started.
> This problem of cache dependencies has always been known, and the LifecycleManager of the query module has code to mitigate it. Unfortunately, that code is wrong.
[JBoss JIRA] (ISPN-11205) DataSource support in the Server
by Tristan Tarrant (Jira)
Tristan Tarrant created ISPN-11205:
--------------------------------------
Summary: DataSource support in the Server
Key: ISPN-11205
URL: https://issues.redhat.com/browse/ISPN-11205
Project: Infinispan
Issue Type: Feature Request
Components: Server
Affects Versions: 10.1.1.Final
Reporter: Tristan Tarrant
Assignee: Tristan Tarrant
Fix For: 11.0.0.Alpha1, 11.0.0.Final
The server needs to have the ability to manage datasources/connection pools to databases so that they can be shared by multiple cache stores/loaders.
We should use Agroal for this.
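The sharing aspect can be sketched as a named registry, so several cache stores resolving the same datasource name get the same pool. Everything here is hypothetical: {{DataSourceRegistry}} is not an Infinispan class, and a plain {{Object}} stands in for the pooled datasource that Agroal's {{AgroalDataSource}} would actually provide.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch of a server-level registry of named datasources.
// The Object value stands in for a real connection pool (which Agroal
// would supply); two cache stores looking up the same name share one pool.
final class DataSourceRegistry {
    private final Map<String, Object> pools = new ConcurrentHashMap<>();
    private final Function<String, Object> poolFactory;

    DataSourceRegistry(Function<String, Object> poolFactory) {
        this.poolFactory = poolFactory;
    }

    // Create the pool on first lookup, reuse it on every later lookup.
    Object lookup(String name) {
        return pools.computeIfAbsent(name, poolFactory);
    }
}
```

Two stores configured with the same datasource name would then receive the same pool instance, while distinct names get independent pools.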
[JBoss JIRA] (ISPN-11179) server-runtime test suite okhttp thread leaks
by Dan Berindei (Jira)
[ https://issues.redhat.com/browse/ISPN-11179?page=com.atlassian.jira.plugi... ]
Dan Berindei updated ISPN-11179:
--------------------------------
Description:
{{Testcontainers}} connects to the Docker daemon using the REST API over the Unix socket at {{/var/run/docker.sock}} (using {{dockerjava}} and {{OkHttpClient}}).
Following container logs requires a long-running connection, and {{LogUtils.attachConsumer}} discards the stream from OkHttpClient/dockerjava, so the connection is never closed. Perhaps the Testcontainers authors assumed that the Docker daemon would kill the connection when the container is stopped, but that does not happen.
{noformat}
23:30:59,573 ERROR [TestSuiteProgress] Test failed: UNKNOWN.ThreadLeakChecker
org.infinispan.commons.test.ThreadLeakChecker$LeakException: Leaked thread: tc-okhttp-stream-513080861 << testng-ResilienceIT << UNKNOWN
at org.testcontainers.shaded.org.scalasbt.ipcsocket.UnixDomainSocketLibrary.read(Native Method) ~[testcontainers-1.12.4.jar:?]
at org.testcontainers.shaded.org.scalasbt.ipcsocket.UnixDomainSocket$UnixDomainSocketInputStream.doRead(UnixDomainSocket.java:149) ~[testcontainers-1.12.4.jar:?]
at org.testcontainers.shaded.org.scalasbt.ipcsocket.UnixDomainSocket$UnixDomainSocketInputStream.read(UnixDomainSocket.java:136) ~[testcontainers-1.12.4.jar:?]
at java.io.FilterInputStream.read(FilterInputStream.java:133) ~[?:?]
at org.testcontainers.dockerclient.transport.okhttp.UnixSocketFactory$1$1.read(UnixSocketFactory.java:46) ~[testcontainers-1.12.4.jar:?]
at org.testcontainers.shaded.okio.Okio$2.read(Okio.java:140) ~[testcontainers-1.12.4.jar:?]
at org.testcontainers.shaded.okio.AsyncTimeout$2.read(AsyncTimeout.java:237) ~[testcontainers-1.12.4.jar:?]
at org.testcontainers.shaded.okio.RealBufferedSource.request(RealBufferedSource.java:72) ~[testcontainers-1.12.4.jar:?]
at org.testcontainers.shaded.okio.RealBufferedSource.require(RealBufferedSource.java:65) ~[testcontainers-1.12.4.jar:?]
at org.testcontainers.shaded.okio.RealBufferedSource.readHexadecimalUnsignedLong(RealBufferedSource.java:307) ~[testcontainers-1.12.4.jar:?]
at org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.readChunkSize(Http1ExchangeCodec.java:492) ~[testcontainers-1.12.4.jar:?]
at org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.read(Http1ExchangeCodec.java:471) ~[testcontainers-1.12.4.jar:?]
at org.testcontainers.shaded.okhttp3.internal.connection.Exchange$ResponseBodySource.read(Exchange.java:286) ~[testcontainers-1.12.4.jar:?]
at org.testcontainers.shaded.okio.RealBufferedSource.exhausted(RealBufferedSource.java:61) ~[testcontainers-1.12.4.jar:?]
at org.testcontainers.dockerclient.transport.okhttp.OkHttpInvocationBuilder$FramedSink.accept(OkHttpInvocationBuilder.java:363) ~[testcontainers-1.12.4.jar:?]
at org.testcontainers.dockerclient.transport.okhttp.OkHttpInvocationBuilder$FramedSink.accept(OkHttpInvocationBuilder.java:352) ~[testcontainers-1.12.4.jar:?]
at org.testcontainers.dockerclient.transport.okhttp.OkHttpInvocationBuilder.lambda$executeAndStream$3(OkHttpInvocationBuilder.java:314) ~[testcontainers-1.12.4.jar:?]
at org.testcontainers.dockerclient.transport.okhttp.OkHttpInvocationBuilder$$Lambda$1863/0x0000000100fd5840.run(Unknown Source) ~[?:?]
at java.lang.Thread.run(Thread.java:834) ~[?:?]
Caused by: org.infinispan.commons.test.ThreadLeakChecker$LeakException: testng-ResilienceIT << UNKNOWN
at org.infinispan.commons.test.ThreadLeakChecker$ThreadInfoLocal.childValue(ThreadLeakChecker.java:107) ~[infinispan-commons-test-11.0.0-SNAPSHOT.jar:11.0.0-SNAPSHOT]
at org.infinispan.commons.test.ThreadLeakChecker$ThreadInfoLocal.childValue(ThreadLeakChecker.java:104) ~[infinispan-commons-test-11.0.0-SNAPSHOT.jar:11.0.0-SNAPSHOT]
at java.lang.ThreadLocal$ThreadLocalMap.<init>(ThreadLocal.java:411) ~[?:?]
at java.lang.ThreadLocal.createInheritedMap(ThreadLocal.java:276) ~[?:?]
at java.lang.Thread.<init>(Thread.java:450) ~[?:?]
at java.lang.Thread.<init>(Thread.java:709) ~[?:?]
at java.lang.Thread.<init>(Thread.java:630) ~[?:?]
at org.testcontainers.dockerclient.transport.okhttp.OkHttpInvocationBuilder.executeAndStream(OkHttpInvocationBuilder.java:319) ~[testcontainers-1.12.4.jar:?]
at org.testcontainers.dockerclient.transport.okhttp.OkHttpInvocationBuilder.executeAndStream(OkHttpInvocationBuilder.java:295) ~[testcontainers-1.12.4.jar:?]
at org.testcontainers.dockerclient.transport.okhttp.OkHttpInvocationBuilder.get(OkHttpInvocationBuilder.java:89) ~[testcontainers-1.12.4.jar:?]
at com.github.dockerjava.core.exec.LogContainerCmdExec.execute0(LogContainerCmdExec.java:42) ~[testcontainers-1.12.4.jar:?]
at com.github.dockerjava.core.exec.LogContainerCmdExec.execute0(LogContainerCmdExec.java:12) ~[testcontainers-1.12.4.jar:?]
at com.github.dockerjava.core.exec.AbstrAsyncDockerCmdExec.execute(AbstrAsyncDockerCmdExec.java:56) ~[testcontainers-1.12.4.jar:?]
at com.github.dockerjava.core.exec.AbstrAsyncDockerCmdExec.exec(AbstrAsyncDockerCmdExec.java:21) ~[testcontainers-1.12.4.jar:?]
at com.github.dockerjava.core.exec.AbstrAsyncDockerCmdExec.exec(AbstrAsyncDockerCmdExec.java:12) ~[testcontainers-1.12.4.jar:?]
at com.github.dockerjava.core.command.AbstrAsyncDockerCmd.exec(AbstrAsyncDockerCmd.java:21) ~[testcontainers-1.12.4.jar:?]
at org.testcontainers.utility.LogUtils.attachConsumer(LogUtils.java:99) ~[testcontainers-1.12.4.jar:?]
at org.testcontainers.utility.LogUtils.followOutput(LogUtils.java:36) ~[testcontainers-1.12.4.jar:?]
at org.testcontainers.utility.LogUtils.followOutput(LogUtils.java:51) ~[testcontainers-1.12.4.jar:?]
at org.testcontainers.containers.Container.followOutput(Container.java:391) ~[testcontainers-1.12.4.jar:?]
at java.util.ArrayList.forEach(ArrayList.java:1540) ~[?:?]
at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:412) ~[testcontainers-1.12.4.jar:?]
at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:317) ~[testcontainers-1.12.4.jar:?]
at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81) ~[duct-tape-1.0.8.jar:?]
at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:315) ~[testcontainers-1.12.4.jar:?]
at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:302) ~[testcontainers-1.12.4.jar:?]
at org.infinispan.server.test.ContainerInfinispanServerDriver.start(ContainerInfinispanServerDriver.java:146) ~[test-classes/:?]
at org.infinispan.server.test.InfinispanServerDriver.start(InfinispanServerDriver.java:109) ~[test-classes/:?]
at org.infinispan.server.test.InfinispanServerRule$1.evaluate(InfinispanServerRule.java:86) ~[test-classes/:?]
{noformat}
was:
{{Testcontainers}} connects to the Docker daemon using the REST API over the Unix socket at {{/var/run/docker.sock}} (using {{dockerjava}} and {{OkHttpClient}}).
Following container logs requires a long-running connection, and {{LogUtils.attachConsumer}} discards the stream from OkHttpClient/dockerjava, so the connection is never closed. Perhaps the Testcontainers authors assumed that the Docker daemon would kill the connection when the container is stopped, but that does not happen.
{noformat}
testng-ResilienceIT starting thread tc-okhttp-stream-1106493516
at org.testcontainers.dockerclient.transport.okhttp.OkHttpInvocationBuilder.executeAndStream(OkHttpInvocationBuilder.java:322)
at org.testcontainers.dockerclient.transport.okhttp.OkHttpInvocationBuilder.executeAndStream(OkHttpInvocationBuilder.java:295)
at org.testcontainers.dockerclient.transport.okhttp.OkHttpInvocationBuilder.get(OkHttpInvocationBuilder.java:89)
at com.github.dockerjava.core.exec.LogContainerCmdExec.execute0(LogContainerCmdExec.java:42)
at com.github.dockerjava.core.exec.LogContainerCmdExec.execute0(LogContainerCmdExec.java:12)
at com.github.dockerjava.core.exec.AbstrAsyncDockerCmdExec.execute(AbstrAsyncDockerCmdExec.java:56)
at com.github.dockerjava.core.exec.AbstrAsyncDockerCmdExec.exec(AbstrAsyncDockerCmdExec.java:21)
at com.github.dockerjava.core.exec.AbstrAsyncDockerCmdExec.exec(AbstrAsyncDockerCmdExec.java:12)
at com.github.dockerjava.core.command.AbstrAsyncDockerCmd.exec(AbstrAsyncDockerCmd.java:21)
at org.testcontainers.utility.LogUtils.attachConsumer(LogUtils.java:99)
at org.testcontainers.utility.LogUtils.followOutput(LogUtils.java:36)
at org.testcontainers.utility.LogUtils.followOutput(LogUtils.java:51)
at org.testcontainers.containers.Container.followOutput(Container.java:391)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1540)
at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:412)
at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:317)
at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:315)
at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:302)
at org.infinispan.server.test.ContainerInfinispanServerDriver.start(ContainerInfinispanServerDriver.java:146)
at org.infinispan.server.test.InfinispanServerDriver.start(InfinispanServerDriver.java:109)
at org.infinispan.server.test.InfinispanServerRule$1.evaluate(InfinispanServerRule.java:86)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
{noformat}
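The missing piece the description identifies is a handle to the streaming response, so it can be closed when the container stops. A minimal sketch of that idea, assuming nothing about the real Testcontainers internals (a plain {{InputStream}} stands in for OkHttp's response body, and {{LogFollower}} is a hypothetical name):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

// Hypothetical sketch: instead of discarding the log stream, keep it so the
// caller can close it when the container stops, letting the reader thread exit.
final class LogFollower implements AutoCloseable {
    private final InputStream stream;
    private boolean closed;

    LogFollower(InputStream stream) { this.stream = stream; }

    boolean isClosed() { return closed; }

    @Override
    public void close() {
        try {
            stream.close();  // without this, the reading thread blocks forever
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        closed = true;
    }
}
```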
> server-runtime test suite okhttp thread leaks
> ---------------------------------------------
>
> Key: ISPN-11179
> URL: https://issues.redhat.com/browse/ISPN-11179
> Project: Infinispan
> Issue Type: Bug
> Components: Server, Test Suite
> Affects Versions: 10.1.0.Final
> Reporter: Dan Berindei
> Priority: Major
> Labels: testsuite_stability
> Fix For: 10.1.2.Final
>
>
> {{Testcontainers}} connects to the Docker daemon using the REST API over the Unix socket at {{/var/run/docker.sock}} (using {{dockerjava}} and {{OkHttpClient}}).
> Following container logs requires a long-running connection, and {{LogUtils.attachConsumer}} discards the stream from OkHttpClient/dockerjava, so the connection is never closed. Perhaps the Testcontainers authors assumed that the Docker daemon would kill the connection when the container is stopped, but that does not happen.
> {noformat}
> 23:30:59,573 ERROR [TestSuiteProgress] Test failed: UNKNOWN.ThreadLeakChecker
> org.infinispan.commons.test.ThreadLeakChecker$LeakException: Leaked thread: tc-okhttp-stream-513080861 << testng-ResilienceIT << UNKNOWN
> at org.testcontainers.shaded.org.scalasbt.ipcsocket.UnixDomainSocketLibrary.read(Native Method) ~[testcontainers-1.12.4.jar:?]
> at org.testcontainers.shaded.org.scalasbt.ipcsocket.UnixDomainSocket$UnixDomainSocketInputStream.doRead(UnixDomainSocket.java:149) ~[testcontainers-1.12.4.jar:?]
> at org.testcontainers.shaded.org.scalasbt.ipcsocket.UnixDomainSocket$UnixDomainSocketInputStream.read(UnixDomainSocket.java:136) ~[testcontainers-1.12.4.jar:?]
> at java.io.FilterInputStream.read(FilterInputStream.java:133) ~[?:?]
> at org.testcontainers.dockerclient.transport.okhttp.UnixSocketFactory$1$1.read(UnixSocketFactory.java:46) ~[testcontainers-1.12.4.jar:?]
> at org.testcontainers.shaded.okio.Okio$2.read(Okio.java:140) ~[testcontainers-1.12.4.jar:?]
> at org.testcontainers.shaded.okio.AsyncTimeout$2.read(AsyncTimeout.java:237) ~[testcontainers-1.12.4.jar:?]
> at org.testcontainers.shaded.okio.RealBufferedSource.request(RealBufferedSource.java:72) ~[testcontainers-1.12.4.jar:?]
> at org.testcontainers.shaded.okio.RealBufferedSource.require(RealBufferedSource.java:65) ~[testcontainers-1.12.4.jar:?]
> at org.testcontainers.shaded.okio.RealBufferedSource.readHexadecimalUnsignedLong(RealBufferedSource.java:307) ~[testcontainers-1.12.4.jar:?]
> at org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.readChunkSize(Http1ExchangeCodec.java:492) ~[testcontainers-1.12.4.jar:?]
> at org.testcontainers.shaded.okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.read(Http1ExchangeCodec.java:471) ~[testcontainers-1.12.4.jar:?]
> at org.testcontainers.shaded.okhttp3.internal.connection.Exchange$ResponseBodySource.read(Exchange.java:286) ~[testcontainers-1.12.4.jar:?]
> at org.testcontainers.shaded.okio.RealBufferedSource.exhausted(RealBufferedSource.java:61) ~[testcontainers-1.12.4.jar:?]
> at org.testcontainers.dockerclient.transport.okhttp.OkHttpInvocationBuilder$FramedSink.accept(OkHttpInvocationBuilder.java:363) ~[testcontainers-1.12.4.jar:?]
> at org.testcontainers.dockerclient.transport.okhttp.OkHttpInvocationBuilder$FramedSink.accept(OkHttpInvocationBuilder.java:352) ~[testcontainers-1.12.4.jar:?]
> at org.testcontainers.dockerclient.transport.okhttp.OkHttpInvocationBuilder.lambda$executeAndStream$3(OkHttpInvocationBuilder.java:314) ~[testcontainers-1.12.4.jar:?]
> at org.testcontainers.dockerclient.transport.okhttp.OkHttpInvocationBuilder$$Lambda$1863/0x0000000100fd5840.run(Unknown Source) ~[?:?]
> at java.lang.Thread.run(Thread.java:834) ~[?:?]
> Caused by: org.infinispan.commons.test.ThreadLeakChecker$LeakException: testng-ResilienceIT << UNKNOWN
> at org.infinispan.commons.test.ThreadLeakChecker$ThreadInfoLocal.childValue(ThreadLeakChecker.java:107) ~[infinispan-commons-test-11.0.0-SNAPSHOT.jar:11.0.0-SNAPSHOT]
> at org.infinispan.commons.test.ThreadLeakChecker$ThreadInfoLocal.childValue(ThreadLeakChecker.java:104) ~[infinispan-commons-test-11.0.0-SNAPSHOT.jar:11.0.0-SNAPSHOT]
> at java.lang.ThreadLocal$ThreadLocalMap.<init>(ThreadLocal.java:411) ~[?:?]
> at java.lang.ThreadLocal.createInheritedMap(ThreadLocal.java:276) ~[?:?]
> at java.lang.Thread.<init>(Thread.java:450) ~[?:?]
> at java.lang.Thread.<init>(Thread.java:709) ~[?:?]
> at java.lang.Thread.<init>(Thread.java:630) ~[?:?]
> at org.testcontainers.dockerclient.transport.okhttp.OkHttpInvocationBuilder.executeAndStream(OkHttpInvocationBuilder.java:319) ~[testcontainers-1.12.4.jar:?]
> at org.testcontainers.dockerclient.transport.okhttp.OkHttpInvocationBuilder.executeAndStream(OkHttpInvocationBuilder.java:295) ~[testcontainers-1.12.4.jar:?]
> at org.testcontainers.dockerclient.transport.okhttp.OkHttpInvocationBuilder.get(OkHttpInvocationBuilder.java:89) ~[testcontainers-1.12.4.jar:?]
> at com.github.dockerjava.core.exec.LogContainerCmdExec.execute0(LogContainerCmdExec.java:42) ~[testcontainers-1.12.4.jar:?]
> at com.github.dockerjava.core.exec.LogContainerCmdExec.execute0(LogContainerCmdExec.java:12) ~[testcontainers-1.12.4.jar:?]
> at com.github.dockerjava.core.exec.AbstrAsyncDockerCmdExec.execute(AbstrAsyncDockerCmdExec.java:56) ~[testcontainers-1.12.4.jar:?]
> at com.github.dockerjava.core.exec.AbstrAsyncDockerCmdExec.exec(AbstrAsyncDockerCmdExec.java:21) ~[testcontainers-1.12.4.jar:?]
> at com.github.dockerjava.core.exec.AbstrAsyncDockerCmdExec.exec(AbstrAsyncDockerCmdExec.java:12) ~[testcontainers-1.12.4.jar:?]
> at com.github.dockerjava.core.command.AbstrAsyncDockerCmd.exec(AbstrAsyncDockerCmd.java:21) ~[testcontainers-1.12.4.jar:?]
> at org.testcontainers.utility.LogUtils.attachConsumer(LogUtils.java:99) ~[testcontainers-1.12.4.jar:?]
> at org.testcontainers.utility.LogUtils.followOutput(LogUtils.java:36) ~[testcontainers-1.12.4.jar:?]
> at org.testcontainers.utility.LogUtils.followOutput(LogUtils.java:51) ~[testcontainers-1.12.4.jar:?]
> at org.testcontainers.containers.Container.followOutput(Container.java:391) ~[testcontainers-1.12.4.jar:?]
> at java.util.ArrayList.forEach(ArrayList.java:1540) ~[?:?]
> at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:412) ~[testcontainers-1.12.4.jar:?]
> at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:317) ~[testcontainers-1.12.4.jar:?]
> at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81) ~[duct-tape-1.0.8.jar:?]
> at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:315) ~[testcontainers-1.12.4.jar:?]
> at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:302) ~[testcontainers-1.12.4.jar:?]
> at org.infinispan.server.test.ContainerInfinispanServerDriver.start(ContainerInfinispanServerDriver.java:146) ~[test-classes/:?]
> at org.infinispan.server.test.InfinispanServerDriver.start(InfinispanServerDriver.java:109) ~[test-classes/:?]
> at org.infinispan.server.test.InfinispanServerRule$1.evaluate(InfinispanServerRule.java:86) ~[test-classes/:?]
> {noformat}
[JBoss JIRA] (ISPN-11179) server-runtime test suite okhttp thread leaks
by Dan Berindei (Jira)
[ https://issues.redhat.com/browse/ISPN-11179?page=com.atlassian.jira.plugi... ]
Dan Berindei commented on ISPN-11179:
-------------------------------------
I created a Testcontainers issue for the leak:
https://github.com/testcontainers/testcontainers-java/issues/2276
> server-runtime test suite okhttp thread leaks
> ---------------------------------------------
>
> Key: ISPN-11179
> URL: https://issues.redhat.com/browse/ISPN-11179
> Project: Infinispan
> Issue Type: Bug
> Components: Server, Test Suite
> Affects Versions: 10.1.0.Final
> Reporter: Dan Berindei
> Priority: Major
> Labels: testsuite_stability
> Fix For: 10.1.2.Final
>
>
> {{Testcontainers}} connects to the Docker daemon using the REST API over the Unix socket at {{/var/run/docker.sock}} (using {{dockerjava}} and {{OkHttpClient}}).
> Following container logs requires a long-running connection, and {{LogUtils.attachConsumer}} discards the stream from OkHttpClient/dockerjava, so the connection is never closed. Perhaps the Testcontainers authors assumed that the Docker daemon would kill the connection when the container is stopped, but that does not happen.
> {noformat}
> testng-ResilienceIT starting thread tc-okhttp-stream-1106493516
> at org.testcontainers.dockerclient.transport.okhttp.OkHttpInvocationBuilder.executeAndStream(OkHttpInvocationBuilder.java:322)
> at org.testcontainers.dockerclient.transport.okhttp.OkHttpInvocationBuilder.executeAndStream(OkHttpInvocationBuilder.java:295)
> at org.testcontainers.dockerclient.transport.okhttp.OkHttpInvocationBuilder.get(OkHttpInvocationBuilder.java:89)
> at com.github.dockerjava.core.exec.LogContainerCmdExec.execute0(LogContainerCmdExec.java:42)
> at com.github.dockerjava.core.exec.LogContainerCmdExec.execute0(LogContainerCmdExec.java:12)
> at com.github.dockerjava.core.exec.AbstrAsyncDockerCmdExec.execute(AbstrAsyncDockerCmdExec.java:56)
> at com.github.dockerjava.core.exec.AbstrAsyncDockerCmdExec.exec(AbstrAsyncDockerCmdExec.java:21)
> at com.github.dockerjava.core.exec.AbstrAsyncDockerCmdExec.exec(AbstrAsyncDockerCmdExec.java:12)
> at com.github.dockerjava.core.command.AbstrAsyncDockerCmd.exec(AbstrAsyncDockerCmd.java:21)
> at org.testcontainers.utility.LogUtils.attachConsumer(LogUtils.java:99)
> at org.testcontainers.utility.LogUtils.followOutput(LogUtils.java:36)
> at org.testcontainers.utility.LogUtils.followOutput(LogUtils.java:51)
> at org.testcontainers.containers.Container.followOutput(Container.java:391)
> at java.base/java.util.ArrayList.forEach(ArrayList.java:1540)
> at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:412)
> at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:317)
> at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
> at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:315)
> at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:302)
> at org.infinispan.server.test.ContainerInfinispanServerDriver.start(ContainerInfinispanServerDriver.java:146)
> at org.infinispan.server.test.InfinispanServerDriver.start(InfinispanServerDriver.java:109)
> at org.infinispan.server.test.InfinispanServerRule$1.evaluate(InfinispanServerRule.java:86)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> {noformat}
[JBoss JIRA] (ISPN-11172) GracefulShutdownRestartIT fails
by Dan Berindei (Jira)
[ https://issues.redhat.com/browse/ISPN-11172?page=com.atlassian.jira.plugi... ]
Dan Berindei commented on ISPN-11172:
-------------------------------------
I'm also seeing a different failure locally:
{noformat}
23:30:48,775 ERROR [TestSuiteProgress] Test failed: GracefulShutdownRestartIT.testGracefulShutdownRestart
java.lang.IllegalStateException: Unable to create directory /tmp/infinispanTempFiles/org.infinispan.server.resilience.GracefulShutdownRestartIT/0/data
at org.infinispan.server.test.InfinispanServerDriver.createServerHierarchy(InfinispanServerDriver.java:139) ~[test-classes/:?]
at org.infinispan.server.test.ContainerInfinispanServerDriver.createContainer(ContainerInfinispanServerDriver.java:154) ~[test-classes/:?]
at org.infinispan.server.test.ContainerInfinispanServerDriver.restartCluster(ContainerInfinispanServerDriver.java:261) ~[test-classes/:?]
at org.infinispan.server.resilience.GracefulShutdownRestartIT.testGracefulShutdownRestart(GracefulShutdownRestartIT.java:55) ~[test-classes/:?]
{noformat}
> GracefulShutdownRestartIT fails
> -------------------------------
>
> Key: ISPN-11172
> URL: https://issues.redhat.com/browse/ISPN-11172
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 10.1.0.Final
> Reporter: Tristan Tarrant
> Assignee: Tristan Tarrant
> Priority: Major
>
> {noformat}
> Error Message
> Cluster did not shutdown within timeout
> Stacktrace
> java.lang.AssertionError: Cluster did not shutdown within timeout
> at org.infinispan.commons.util.Eventually.lambda$eventually$0(Eventually.java:33)
> at org.infinispan.commons.util.Eventually.eventually(Eventually.java:25)
> at org.infinispan.commons.util.Eventually.eventually(Eventually.java:33)
> at org.infinispan.server.resilience.GracefulShutdownRestartIT.testGracefulShutdownRestart(GracefulShutdownRestartIT.java:50)
> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.infinispan.server.test.InfinispanServerTestMethodRule$1.evaluate(InfinispanServerTestMethodRule.java:69)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.infinispan.server.test.InfinispanServerRule$1.evaluate(InfinispanServerRule.java:90)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.junit.runners.Suite.runChild(Suite.java:128)
> at org.junit.runners.Suite.runChild(Suite.java:27)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
> at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
> at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
> at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
> at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
> at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
> at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> Standard Output
> [OK: 131, KO: 0, SKIP: 0] Test starting: GracefulShutdownRestartIT.testGracefulShutdownRestart
> [0] STDOUT: 12:36:05,513 WARN (async-thread--p2-t6) [CONFIG] ISPN000564: Configured store 'SingleFileStore' is segmented and may use a large number of file descriptors
> [0] STDOUT: 12:36:05,514 WARN (async-thread--p2-t6) [CONFIG] ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> [0] STDOUT: 12:36:05,515 WARN (async-thread--p2-t6) [CONFIG] ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> [1] STDOUT: 12:36:05,558 WARN (remote-thread--p3-t2) [CONFIG] ISPN000564: Configured store 'SingleFileStore' is segmented and may use a large number of file descriptors
> [1] STDOUT: 12:36:05,560 WARN (remote-thread--p3-t2) [CONFIG] ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> [1] STDOUT: 12:36:05,562 WARN (remote-thread--p3-t2) [CONFIG] ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> [0] STDOUT: 12:36:05,771 WARN (jgroups-13,06978f4151c0-63390) [CONFIG] ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> [0] STDOUT: 12:36:05,776 WARN (jgroups-13,06978f4151c0-63390) [CONFIG] ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> [0] STDOUT: 12:36:05,883 INFO (transport-thread--p5-t7) [CLUSTER] [Context=C3F31F9D169B25D0C68FE5A26A6600FFA0E2E3F4]ISPN100002: Starting rebalance with members [63781b79183e-43839, 06978f4151c0-63390], phase READ_OLD_WRITE_ALL, topology id 2
> [0] STDOUT: 12:36:05,963 INFO (remote-thread--p3-t1) [CLUSTER] [Context=C3F31F9D169B25D0C68FE5A26A6600FFA0E2E3F4]ISPN100009: Advancing to rebalance phase READ_ALL_WRITE_ALL, topology id 3
> [0] STDOUT: 12:36:05,978 INFO (remote-thread--p3-t1) [CLUSTER] [Context=C3F31F9D169B25D0C68FE5A26A6600FFA0E2E3F4]ISPN100009: Advancing to rebalance phase READ_NEW_WRITE_ALL, topology id 4
> [0] STDOUT: 12:36:05,985 INFO (async-thread--p2-t17) [CLUSTER] [Context=C3F31F9D169B25D0C68FE5A26A6600FFA0E2E3F4]ISPN100010: Finished rebalance with members [63781b79183e-43839, 06978f4151c0-63390], topology id 5
> [0] STDERR: WARNING: An illegal reflective access operation has occurred
> [0] STDERR: WARNING: Illegal reflective access by protostream.com.google.protobuf.UnsafeUtil (file:/opt/infinispan/lib/protostream-4.3.1.Final.jar) to field java.nio.Buffer.address
> [0] STDERR: WARNING: Please consider reporting this to the maintainers of protostream.com.google.protobuf.UnsafeUtil
> [0] STDERR: WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
> [0] STDERR: WARNING: All illegal access operations will be denied in a future release
> [1] STDERR: WARNING: An illegal reflective access operation has occurred
> [1] STDERR: WARNING: Illegal reflective access by protostream.com.google.protobuf.UnsafeUtil (file:/opt/infinispan/lib/protostream-4.3.1.Final.jar) to field java.nio.Buffer.address
> [1] STDERR: WARNING: Please consider reporting this to the maintainers of protostream.com.google.protobuf.UnsafeUtil
> [1] STDERR: WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
> [1] STDERR: WARNING: All illegal access operations will be denied in a future release
> [0] STDOUT: 12:36:06,811 INFO (SINGLE_PORT-ServerIO-8-2) [CLUSTER] [Context=___protobuf_metadata]ISPN100008: Updating cache members list [63781b79183e-43839], topology id 6
> [0] STDOUT: 12:36:06,860 INFO (SINGLE_PORT-ServerIO-8-2) [CLUSTER] [Context=C3F31F9D169B25D0C68FE5A26A6600FFA0E2E3F4]ISPN100008: Updating cache members list [63781b79183e-43839], topology id 6
> [0] STDOUT: 12:36:06,898 INFO (SINGLE_PORT-ServerIO-8-2) [CLUSTER] [Context=memcachedCache]ISPN100008: Updating cache members list [63781b79183e-43839], topology id 6
> [1] STDOUT: 12:36:06,904 WARN (remote-thread--p3-t2) [CLUSTER] ISPN000071: Caught exception when handling command CacheTopologyControlCommand{cache=___script_cache, type=SHUTDOWN_PERFORM, sender=null, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, phase=null, actualMembers=null, throwable=null, viewId=0} java.lang.NullPointerException
> [1] STDOUT: at org.infinispan.topology.LocalTopologyManagerImpl.handleCacheShutdown(LocalTopologyManagerImpl.java:743)
> [1] STDOUT: at org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:189)
> [1] STDOUT: at org.infinispan.topology.CacheTopologyControlCommand.invokeAsync(CacheTopologyControlCommand.java:160)
> [1] STDOUT: at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler$ReplicableCommandRunner.run(GlobalInboundInvocationHandler.java:160)
> [1] STDOUT: at org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl$RunnableWrapper.run(BlockingTaskAwareExecutorServiceImpl.java:215)
> [1] STDOUT: at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> [1] STDOUT: at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> [1] STDOUT: at java.base/java.lang.Thread.run(Thread.java:834)
> {noformat}
[JBoss JIRA] (ISPN-11172) GracefulShutdownRestartIT fails
by Dan Berindei (Jira)
[ https://issues.redhat.com/browse/ISPN-11172?page=com.atlassian.jira.plugi... ]
Dan Berindei edited comment on ISPN-11172 at 1/21/20 5:08 PM:
--------------------------------------------------------------
I'm also seeing a different failure locally, when the data directory already exists and is readable:
{noformat}
23:30:48,775 ERROR [TestSuiteProgress] Test failed: GracefulShutdownRestartIT.testGracefulShutdownRestart
java.lang.IllegalStateException: Unable to create directory /tmp/infinispanTempFiles/org.infinispan.server.resilience.GracefulShutdownRestartIT/0/data
at org.infinispan.server.test.InfinispanServerDriver.createServerHierarchy(InfinispanServerDriver.java:139) ~[test-classes/:?]
at org.infinispan.server.test.ContainerInfinispanServerDriver.createContainer(ContainerInfinispanServerDriver.java:154) ~[test-classes/:?]
at org.infinispan.server.test.ContainerInfinispanServerDriver.restartCluster(ContainerInfinispanServerDriver.java:261) ~[test-classes/:?]
at org.infinispan.server.resilience.GracefulShutdownRestartIT.testGracefulShutdownRestart(GracefulShutdownRestartIT.java:55) ~[test-classes/:?]
{noformat}
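A minimal sketch of what could produce this symptom, assuming (hypothetically; the actual driver code may differ) that `InfinispanServerDriver.createServerHierarchy` checks the return value of `File.mkdirs()`: `mkdirs()` returns `false` both on a genuine failure and when the directory already exists, so a restart over an existing data directory throws spuriously, whereas `Files.createDirectories` is idempotent:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class DirCreation {
    // Naive approach: File.mkdirs() returns false on real failure AND
    // when the directory already exists, so this throws on restart.
    static void createNaive(File dir) {
        if (!dir.mkdirs()) {
            throw new IllegalStateException("Unable to create directory " + dir);
        }
    }

    // Files.createDirectories is idempotent: it succeeds silently if the
    // directory already exists, failing only on a genuine error
    // (e.g. a regular file occupying the path).
    static void createIdempotent(Path dir) throws IOException {
        Files.createDirectories(dir);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("ispn-demo");
        Path data = tmp.resolve("data");
        createIdempotent(data);          // first start: creates the directory
        createIdempotent(data);          // restart: still succeeds
        boolean naiveFailed = false;
        try {
            createNaive(data.toFile());  // mkdirs() returns false here
        } catch (IllegalStateException e) {
            naiveFailed = true;
        }
        System.out.println("idempotent ok; naive failed = " + naiveFailed);
    }
}
```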
was (Author: dan.berindei):
I'm also seeing a different failure locally:
{noformat}
23:30:48,775 ERROR [TestSuiteProgress] Test failed: GracefulShutdownRestartIT.testGracefulShutdownRestart
java.lang.IllegalStateException: Unable to create directory /tmp/infinispanTempFiles/org.infinispan.server.resilience.GracefulShutdownRestartIT/0/data
at org.infinispan.server.test.InfinispanServerDriver.createServerHierarchy(InfinispanServerDriver.java:139) ~[test-classes/:?]
at org.infinispan.server.test.ContainerInfinispanServerDriver.createContainer(ContainerInfinispanServerDriver.java:154) ~[test-classes/:?]
at org.infinispan.server.test.ContainerInfinispanServerDriver.restartCluster(ContainerInfinispanServerDriver.java:261) ~[test-classes/:?]
at org.infinispan.server.resilience.GracefulShutdownRestartIT.testGracefulShutdownRestart(GracefulShutdownRestartIT.java:55) ~[test-classes/:?]
{noformat}
> GracefulShutdownRestartIT fails
> -------------------------------
>
> Key: ISPN-11172
> URL: https://issues.redhat.com/browse/ISPN-11172
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 10.1.0.Final
> Reporter: Tristan Tarrant
> Assignee: Tristan Tarrant
> Priority: Major
>
> {noformat}
> Error Message
> Cluster did not shutdown within timeout
> Stacktrace
> java.lang.AssertionError: Cluster did not shutdown within timeout
> at org.infinispan.commons.util.Eventually.lambda$eventually$0(Eventually.java:33)
> at org.infinispan.commons.util.Eventually.eventually(Eventually.java:25)
> at org.infinispan.commons.util.Eventually.eventually(Eventually.java:33)
> at org.infinispan.server.resilience.GracefulShutdownRestartIT.testGracefulShutdownRestart(GracefulShutdownRestartIT.java:50)
> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.infinispan.server.test.InfinispanServerTestMethodRule$1.evaluate(InfinispanServerTestMethodRule.java:69)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.infinispan.server.test.InfinispanServerRule$1.evaluate(InfinispanServerRule.java:90)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.junit.runners.Suite.runChild(Suite.java:128)
> at org.junit.runners.Suite.runChild(Suite.java:27)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
> at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
> at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
> at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
> at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
> at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
> at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> Standard Output
> [OK: 131, KO: 0, SKIP: 0] Test starting: GracefulShutdownRestartIT.testGracefulShutdownRestart
> [0] STDOUT: 12:36:05,513 WARN (async-thread--p2-t6) [CONFIG] ISPN000564: Configured store 'SingleFileStore' is segmented and may use a large number of file descriptors
> [0] STDOUT: 12:36:05,514 WARN (async-thread--p2-t6) [CONFIG] ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> [0] STDOUT: 12:36:05,515 WARN (async-thread--p2-t6) [CONFIG] ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> [1] STDOUT: 12:36:05,558 WARN (remote-thread--p3-t2) [CONFIG] ISPN000564: Configured store 'SingleFileStore' is segmented and may use a large number of file descriptors
> [1] STDOUT: 12:36:05,560 WARN (remote-thread--p3-t2) [CONFIG] ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> [1] STDOUT: 12:36:05,562 WARN (remote-thread--p3-t2) [CONFIG] ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> [0] STDOUT: 12:36:05,771 WARN (jgroups-13,06978f4151c0-63390) [CONFIG] ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> [0] STDOUT: 12:36:05,776 WARN (jgroups-13,06978f4151c0-63390) [CONFIG] ISPN000149: Fetch persistent state and purge on startup are both disabled, cache may contain stale entries on startup
> [0] STDOUT: 12:36:05,883 INFO (transport-thread--p5-t7) [CLUSTER] [Context=C3F31F9D169B25D0C68FE5A26A6600FFA0E2E3F4]ISPN100002: Starting rebalance with members [63781b79183e-43839, 06978f4151c0-63390], phase READ_OLD_WRITE_ALL, topology id 2
> [0] STDOUT: 12:36:05,963 INFO (remote-thread--p3-t1) [CLUSTER] [Context=C3F31F9D169B25D0C68FE5A26A6600FFA0E2E3F4]ISPN100009: Advancing to rebalance phase READ_ALL_WRITE_ALL, topology id 3
> [0] STDOUT: 12:36:05,978 INFO (remote-thread--p3-t1) [CLUSTER] [Context=C3F31F9D169B25D0C68FE5A26A6600FFA0E2E3F4]ISPN100009: Advancing to rebalance phase READ_NEW_WRITE_ALL, topology id 4
> [0] STDOUT: 12:36:05,985 INFO (async-thread--p2-t17) [CLUSTER] [Context=C3F31F9D169B25D0C68FE5A26A6600FFA0E2E3F4]ISPN100010: Finished rebalance with members [63781b79183e-43839, 06978f4151c0-63390], topology id 5
> [0] STDERR: WARNING: An illegal reflective access operation has occurred
> [0] STDERR: WARNING: Illegal reflective access by protostream.com.google.protobuf.UnsafeUtil (file:/opt/infinispan/lib/protostream-4.3.1.Final.jar) to field java.nio.Buffer.address
> [0] STDERR: WARNING: Please consider reporting this to the maintainers of protostream.com.google.protobuf.UnsafeUtil
> [0] STDERR: WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
> [0] STDERR: WARNING: All illegal access operations will be denied in a future release
> [1] STDERR: WARNING: An illegal reflective access operation has occurred
> [1] STDERR: WARNING: Illegal reflective access by protostream.com.google.protobuf.UnsafeUtil (file:/opt/infinispan/lib/protostream-4.3.1.Final.jar) to field java.nio.Buffer.address
> [1] STDERR: WARNING: Please consider reporting this to the maintainers of protostream.com.google.protobuf.UnsafeUtil
> [1] STDERR: WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
> [1] STDERR: WARNING: All illegal access operations will be denied in a future release
> [0] STDOUT: 12:36:06,811 INFO (SINGLE_PORT-ServerIO-8-2) [CLUSTER] [Context=___protobuf_metadata]ISPN100008: Updating cache members list [63781b79183e-43839], topology id 6
> [0] STDOUT: 12:36:06,860 INFO (SINGLE_PORT-ServerIO-8-2) [CLUSTER] [Context=C3F31F9D169B25D0C68FE5A26A6600FFA0E2E3F4]ISPN100008: Updating cache members list [63781b79183e-43839], topology id 6
> [0] STDOUT: 12:36:06,898 INFO (SINGLE_PORT-ServerIO-8-2) [CLUSTER] [Context=memcachedCache]ISPN100008: Updating cache members list [63781b79183e-43839], topology id 6
> [1] STDOUT: 12:36:06,904 WARN (remote-thread--p3-t2) [CLUSTER] ISPN000071: Caught exception when handling command CacheTopologyControlCommand{cache=___script_cache, type=SHUTDOWN_PERFORM, sender=null, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, phase=null, actualMembers=null, throwable=null, viewId=0} java.lang.NullPointerException
> [1] STDOUT: at org.infinispan.topology.LocalTopologyManagerImpl.handleCacheShutdown(LocalTopologyManagerImpl.java:743)
> [1] STDOUT: at org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:189)
> [1] STDOUT: at org.infinispan.topology.CacheTopologyControlCommand.invokeAsync(CacheTopologyControlCommand.java:160)
> [1] STDOUT: at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler$ReplicableCommandRunner.run(GlobalInboundInvocationHandler.java:160)
> [1] STDOUT: at org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl$RunnableWrapper.run(BlockingTaskAwareExecutorServiceImpl.java:215)
> [1] STDOUT: at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> [1] STDOUT: at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> [1] STDOUT: at java.base/java.lang.Thread.run(Thread.java:834)
> {noformat}
[JBoss JIRA] (ISPN-11204) Response Collector addResponse value can be ignored
by Will Burns (Jira)
Will Burns created ISPN-11204:
---------------------------------
Summary: Response Collector addResponse value can be ignored
Key: ISPN-11204
URL: https://issues.redhat.com/browse/ISPN-11204
Project: Infinispan
Issue Type: Bug
Components: Core
Reporter: Will Burns
Assignee: Will Burns
Fix For: 11.0.0.Alpha1
A ResponseCollector can return a non-null value from `addResponse`, which signals that the result is complete. However, it is currently possible for one response to produce a non-null value and for a later response to return null, in which case the `finish` value is used instead. We should ensure that when a non-null value is returned from `addResponse`, that value is always the one used.
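To illustrate the intended contract, here is a simplified sketch (the `Collector` interface and `dispatch` loop below are stand-ins invented for this example, not Infinispan's actual `ResponseCollector` API or its invoker): the dispatch loop must treat the first non-null `addResponse` return as final and never fall through to `finish()` afterwards.

```java
import java.util.Arrays;
import java.util.List;

// Simplified stand-in for the collector contract: a non-null return
// from addResponse means "the result is ready, stop collecting".
interface Collector<T> {
    T addResponse(String sender, T response);
    T finish();
}

public class ResponseCollectorDemo {
    // A collector that completes as soon as any non-null response arrives.
    static class FirstNonNull implements Collector<String> {
        private String result;
        public String addResponse(String sender, String response) {
            if (result == null && response != null) {
                result = response;
            }
            return result; // non-null => collection is complete
        }
        public String finish() {
            return result; // fallback when no response completed us
        }
    }

    // Correct dispatch loop: the first non-null addResponse value wins;
    // finish() is consulted only if no response completed the collector.
    static <T> T dispatch(Collector<T> c, List<T> responses) {
        for (int i = 0; i < responses.size(); i++) {
            T v = c.addResponse("node" + i, responses.get(i));
            if (v != null) {
                return v; // honour early completion, ignore later responses
            }
        }
        return c.finish();
    }

    public static void main(String[] args) {
        // A null response arriving after the non-null one must not
        // cause the result to come from finish() instead.
        String result = dispatch(new FirstNonNull(),
                Arrays.asList(null, "value", null));
        System.out.println(result);
    }
}
```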