[Red Hat JIRA] (ISPN-12571) jcache/tck-runner-remote random failures starting server
by Dan Berindei (Jira)
Dan Berindei created ISPN-12571:
-----------------------------------
Summary: jcache/tck-runner-remote random failures starting server
Key: ISPN-12571
URL: https://issues.redhat.com/browse/ISPN-12571
Project: Infinispan
Issue Type: Bug
Components: Core, Server
Affects Versions: 12.0.0.Dev07
Reporter: Dan Berindei
Fix For: 12.0.0.Final
The {{jcache/tck-runner-remote}} module starts an Infinispan server with the default configuration. Sometimes, the server fails to start:
{noformat}
2020-12-10 08:33:49,367 INFO (main) [org.infinispan.CLUSTER] ISPN000079: Channel cluster local address is rhos-infinispan-ci-4-5381, physical addresses are [10.0.148.134:7800]
2020-12-10 08:33:49,416 INFO (main) [org.infinispan.CONTAINER] ISPN000390: Persisted state, version=12.0.0-SNAPSHOT timestamp=2020-12-10T13:33:49.414476Z
2020-12-10 08:33:49,737 INFO (main) [org.jboss.threads] JBoss Threads version 2.3.3.Final
2020-12-10 08:33:49,811 INFO (main) [org.infinispan.CONTAINER] ISPN000104: Using EmbeddedTransactionManager
2020-12-10 08:33:50,362 WARN (blocking-thread--p3-t1) [org.infinispan.encoding.impl.StorageConfigurationManager] ISPN000599: Configuration for cache 'ensure_mbeanserver_created_cache' does not define the encoding for keys or values. If you use operations that require data conversion or queries, you should configure the cache with a specific MediaType for keys or values.
2020-12-10 08:33:50,369 WARN (blocking-thread--p3-t2) [org.infinispan.encoding.impl.StorageConfigurationManager] ISPN000599: Configuration for cache 'SampleCache' does not define the encoding for keys or values. If you use operations that require data conversion or queries, you should configure the cache with a specific MediaType for keys or values.
2020-12-10 08:33:50,405 INFO (main) [org.infinispan.CLUSTER] ISPN000080: Disconnecting JGroups channel cluster
2020-12-10 08:33:50,417 INFO (main) [org.infinispan.CONTAINER] ISPN000390: Persisted state, version=12.0.0-SNAPSHOT timestamp=2020-12-10T13:33:50.416703Z
2020-12-10 08:33:50,422 FATAL (main) [org.infinispan.SERVER] ISPN080028: Infinispan Server failed to start java.util.concurrent.ExecutionException: org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.globalstate.GlobalConfigurationManager
at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
at org.infinispan.server.Bootstrap.runInternal(Bootstrap.java:158)
at org.infinispan.server.tool.Main.run(Main.java:98)
at org.infinispan.server.Bootstrap.main(Bootstrap.java:46)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.infinispan.server.loader.Loader.run(Loader.java:103)
at org.infinispan.server.loader.Loader.main(Loader.java:48)
Caused by: org.infinispan.manager.EmbeddedCacheManagerStartupException: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.globalstate.GlobalConfigurationManager
at org.infinispan.manager.DefaultCacheManager.internalStart(DefaultCacheManager.java:751)
at org.infinispan.manager.DefaultCacheManager.start(DefaultCacheManager.java:717)
at org.infinispan.server.SecurityActions.lambda$startCacheManager$1(SecurityActions.java:67)
at org.infinispan.security.Security.doPrivileged(Security.java:45)
at org.infinispan.server.SecurityActions.doPrivileged(SecurityActions.java:39)
at org.infinispan.server.SecurityActions.startCacheManager(SecurityActions.java:70)
at org.infinispan.server.Server.run(Server.java:346)
... 9 more
Caused by: org.infinispan.commons.CacheConfigurationException: Error starting component org.infinispan.globalstate.GlobalConfigurationManager
at org.infinispan.factories.impl.BasicComponentRegistryImpl.startWrapper(BasicComponentRegistryImpl.java:572)
at org.infinispan.factories.impl.BasicComponentRegistryImpl.access$700(BasicComponentRegistryImpl.java:30)
at org.infinispan.factories.impl.BasicComponentRegistryImpl$ComponentWrapper.running(BasicComponentRegistryImpl.java:787)
at org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:341)
at org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:237)
at org.infinispan.manager.DefaultCacheManager.internalStart(DefaultCacheManager.java:746)
... 15 more
Caused by: java.util.concurrent.CompletionException: org.infinispan.commons.CacheConfigurationException: ISPN000502: Error while persisting global configuration state
at org.infinispan.util.concurrent.CompletionStages.join(CompletionStages.java:81)
at org.infinispan.globalstate.impl.GlobalConfigurationManagerImpl.lambda$start$0(GlobalConfigurationManagerImpl.java:115)
at org.infinispan.cache.impl.EncoderCache.lambda$forEach$7(EncoderCache.java:762)
at java.base/java.util.concurrent.ConcurrentMap.forEach(ConcurrentMap.java:122)
at org.infinispan.cache.impl.AbstractDelegatingCache.forEach(AbstractDelegatingCache.java:479)
at org.infinispan.cache.impl.AbstractDelegatingCache.forEach(AbstractDelegatingCache.java:479)
at org.infinispan.cache.impl.AbstractDelegatingCache.forEach(AbstractDelegatingCache.java:479)
at org.infinispan.cache.impl.EncoderCache.forEach(EncoderCache.java:759)
at org.infinispan.globalstate.impl.GlobalConfigurationManagerImpl.start(GlobalConfigurationManagerImpl.java:106)
at org.infinispan.globalstate.impl.CorePackageImpl$2.start(CorePackageImpl.java:59)
at org.infinispan.globalstate.impl.CorePackageImpl$2.start(CorePackageImpl.java:48)
at org.infinispan.factories.impl.BasicComponentRegistryImpl.invokeStart(BasicComponentRegistryImpl.java:604)
at org.infinispan.factories.impl.BasicComponentRegistryImpl.doStartWrapper(BasicComponentRegistryImpl.java:595)
at org.infinispan.factories.impl.BasicComponentRegistryImpl.startWrapper(BasicComponentRegistryImpl.java:564)
... 20 more
Caused by: org.infinispan.commons.CacheConfigurationException: ISPN000502: Error while persisting global configuration state
at org.infinispan.globalstate.impl.OverlayLocalConfigurationStorage.persistConfigurations(OverlayLocalConfigurationStorage.java:156)
at org.infinispan.globalstate.impl.OverlayLocalConfigurationStorage.storeCaches(OverlayLocalConfigurationStorage.java:130)
at org.infinispan.globalstate.impl.OverlayLocalConfigurationStorage.lambda$createCache$2(OverlayLocalConfigurationStorage.java:79)
at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642)
at java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1982)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1486)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.infinispan.commons.CacheConfigurationException: ISPN000508: Cannot rename file /home/infinispan/workspace/Infinispan_PR-8921/jcache/tck-runner-remote/target/infinispan-server/server/data/caches9096240479751965790.tmp to /home/infinispan/workspace/Infinispan_PR-8921/jcache/tck-runner-remote/target/infinispan-server/server/data/caches.xml
at org.infinispan.globalstate.impl.OverlayLocalConfigurationStorage.persistConfigurations(OverlayLocalConfigurationStorage.java:153)
... 9 more
Caused by: java.nio.channels.OverlappingFileLockException
at java.base/sun.nio.ch.FileLockTable.checkList(FileLockTable.java:229)
at java.base/sun.nio.ch.FileLockTable.add(FileLockTable.java:123)
at java.base/sun.nio.ch.FileChannelImpl.lock(FileChannelImpl.java:1109)
at java.base/java.nio.channels.FileChannel.lock(FileChannel.java:1063)
at org.infinispan.commons.util.Util.renameTempFile(Util.java:1087)
at org.infinispan.globalstate.impl.OverlayLocalConfigurationStorage.persistConfigurations(OverlayLocalConfigurationStorage.java:151)
... 9 more
{noformat}
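The bottom-most cause is the JDK's per-JVM file-lock table: if any channel in the process already holds a lock on a region of a file, a second {{FileChannel.lock()}} on the same file throws {{OverlappingFileLockException}} immediately, without touching the OS. A minimal JDK-only sketch of that behavior (the class and method names here are mine, not from the Infinispan codebase) mirrors what two concurrent {{persistConfigurations()}} calls would hit inside {{Util.renameTempFile()}}:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FileLockDemo {
    // Two channels in the same JVM attempt to lock the same file, as two
    // threads persisting the global configuration concurrently would.
    static String demo() throws IOException {
        Path file = Files.createTempFile("caches", ".xml");
        try (FileChannel first = FileChannel.open(file, StandardOpenOption.WRITE);
             FileChannel second = FileChannel.open(file, StandardOpenOption.WRITE)) {
            try (FileLock held = first.lock()) {
                // The JVM-wide FileLockTable already records a lock on this
                // file, so this throws without ever asking the OS.
                second.lock();
                return "no conflict";
            } catch (OverlappingFileLockException e) {
                return "conflict";
            }
        } finally {
            Files.deleteIfExists(file);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo()); // prints "conflict"
    }
}
```

This suggests the random failures come from two callers reaching the rename-and-lock step for {{caches.xml}} at the same time within one server JVM.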
Probably unrelated, but still something to fix: because of the default configuration, the servers started by two builds running on the same network will form a cluster:
{noformat}
2020-12-10 08:33:49,359 INFO (main) [org.infinispan.CLUSTER] ISPN000094: Received new cluster view for channel cluster: [rhos-infinispan-ci-3-54013|1] (2) [rhos-infinispan-ci-3-54013, rhos-infinispan-ci-4-5381]
{noformat}
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[Red Hat JIRA] (ISPN-12568) Docs: Add help for alias command
by Pedro Ruivo (Jira)
[ https://issues.redhat.com/browse/ISPN-12568?page=com.atlassian.jira.plugi... ]
Pedro Ruivo updated ISPN-12568:
-------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> Docs: Add help for alias command
> --------------------------------
>
> Key: ISPN-12568
> URL: https://issues.redhat.com/browse/ISPN-12568
> Project: Infinispan
> Issue Type: Enhancement
> Components: Documentation
> Reporter: Donald Naro
> Assignee: Donald Naro
> Priority: Major
> Fix For: 12.0.0.Final, 11.0.9.Final
>
>
> Need help for creating command aliases.
>
> [disconnected]>
> alias clear clusters connect delete encoding help migrate quit shell uninstall version
> benchmark cluster config credentials echo export install patch run unalias user
> [disconnected]> help alias
> [disconnected]>
>
[Red Hat JIRA] (ISPN-12570) REST server stop hangs when channel is open
by Dan Berindei (Jira)
[ https://issues.redhat.com/browse/ISPN-12570?page=com.atlassian.jira.plugi... ]
Dan Berindei updated ISPN-12570:
--------------------------------
Description:
{{NettyTransport.stop()}} calls {{acceptedChannels.close()}} after shutting down both the {{masterGroup}} and the {{ioGroup}}. But if a channel is still open, closing it requires submitting a task to the channel's event loop, which is now shut down.
{{AbstractChannelHandlerContext.safeExecute()}} hides the rejection exception, but the {{ChannelPromise}} returned by {{channel.close()}} never completes, and the server doesn't stop.
{noformat}
java.util.concurrent.RejectedExecutionException: event executor terminated
at io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:926)
at io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:353)
at io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:346)
at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:828)
at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:818)
at io.netty.channel.AbstractChannelHandlerContext.safeExecute(AbstractChannelHandlerContext.java:989)
at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:608)
at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:472)
at io.netty.channel.DefaultChannelPipeline.close(DefaultChannelPipeline.java:957)
at io.netty.channel.AbstractChannel.close(AbstractChannel.java:232)
at io.netty.channel.group.DefaultChannelGroup.close(DefaultChannelGroup.java:342)
at io.netty.channel.group.DefaultChannelGroup.close(DefaultChannelGroup.java:221)
at org.infinispan.server.core.transport.NettyTransport.stop(NettyTransport.java:135)
at org.infinispan.server.core.AbstractProtocolServer.stop(AbstractProtocolServer.java:202)
at org.infinispan.rest.RestServer.stop(RestServer.java:100)
at org.infinispan.rest.helper.RestServerHelper.stop(RestServerHelper.java:86)
{noformat}
{noformat}
java.lang.RuntimeException: Test timed out after 300 seconds
at java.base@11.0.9/java.lang.Object.$$BlockHound$$_wait(Native Method)
at java.base@11.0.9/java.lang.Object.wait(Object.java)
at java.base@11.0.9/java.lang.Object.wait(Object.java:328)
at app//io.netty.util.concurrent.DefaultPromise.awaitUninterruptibly(DefaultPromise.java:274)
at app//io.netty.channel.group.DefaultChannelGroupFuture.awaitUninterruptibly(DefaultChannelGroupFuture.java:178)
at app//io.netty.channel.group.DefaultChannelGroupFuture.awaitUninterruptibly(DefaultChannelGroupFuture.java:41)
at app//org.infinispan.server.core.transport.NettyTransport.stop(NettyTransport.java:149)
at app//org.infinispan.server.core.AbstractProtocolServer.stop(AbstractProtocolServer.java:202)
at app//org.infinispan.rest.RestServer.stop(RestServer.java:100)
at app//org.infinispan.rest.helper.RestServerHelper.stop(RestServerHelper.java:86)
{noformat}
{{CacheV2ResourceTest.afterSuite()}} sometimes hangs this way. Initially I attributed the hanging to ISPN-12558, but it looks like a separate issue.
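The mechanism can be reproduced with the JDK alone: once an executor has terminated, submitting a task to it throws {{RejectedExecutionException}}, and if that exception is swallowed (as Netty's {{safeExecute()}} does) the work it was supposed to schedule simply never happens. A small analogue, assuming nothing beyond {{java.util.concurrent}} (the class name is mine, not Netty's):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.TimeUnit;

public class ShutdownOrderDemo {
    // Analogue of NettyTransport.stop(): the event loop is shut down first,
    // so the close task for a still-open channel can no longer be scheduled.
    static String closeAfterShutdown() throws InterruptedException {
        ExecutorService eventLoop = Executors.newSingleThreadExecutor();
        eventLoop.shutdown();
        eventLoop.awaitTermination(1, TimeUnit.SECONDS);
        try {
            eventLoop.execute(() -> { /* channel.close() would run here */ });
            return "close scheduled";
        } catch (RejectedExecutionException e) {
            // Netty's safeExecute() hides this exception, so the close
            // promise never completes and stop() waits forever.
            return "close rejected";
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(closeAfterShutdown()); // prints "close rejected"
    }
}
```

The fix direction implied by the description is an ordering change: close {{acceptedChannels}} while the event loops are still running, and only then shut down the groups.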
was:
{{NettyTransport.stop()}} calls {{acceptedChannels.close()}} after shutting down both the {{masterGroup}} and the {{ioGroup}}. But if a channel is still open, closing it requires submitting a task to the channel's event loop, which is now shut down.
{{AbstractChannelHandlerContext.safeExecute()}} hides the rejection exception, but the {{ChannelPromise}} returned by {{channel.close()}} never completes, and the server doesn't stop.
{noformat}
java.util.concurrent.RejectedExecutionException: event executor terminated
at io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:926)
at io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:353)
at io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:346)
at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:828)
at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:818)
at io.netty.channel.AbstractChannelHandlerContext.safeExecute(AbstractChannelHandlerContext.java:989)
at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:608)
at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:472)
at io.netty.channel.DefaultChannelPipeline.close(DefaultChannelPipeline.java:957)
at io.netty.channel.AbstractChannel.close(AbstractChannel.java:232)
at io.netty.channel.group.DefaultChannelGroup.close(DefaultChannelGroup.java:342)
at io.netty.channel.group.DefaultChannelGroup.close(DefaultChannelGroup.java:221)
at org.infinispan.server.core.transport.NettyTransport.stop(NettyTransport.java:135)
at org.infinispan.server.core.AbstractProtocolServer.stop(AbstractProtocolServer.java:202)
at org.infinispan.rest.RestServer.stop(RestServer.java:100)
at org.infinispan.rest.helper.RestServerHelper.stop(RestServerHelper.java:86)
{noformat}
{noformat}
at java.lang.Object.$$BlockHound$$_wait(Object.java:-2)
at java.lang.Object.wait(Object.java:-1)
at java.lang.Object.wait(Object.java:328)
at io.netty.util.concurrent.DefaultPromise.awaitUninterruptibly(DefaultPromise.java:264)
at io.netty.channel.group.DefaultChannelGroupFuture.awaitUninterruptibly(DefaultChannelGroupFuture.java:178)
at io.netty.channel.group.DefaultChannelGroupFuture.awaitUninterruptibly(DefaultChannelGroupFuture.java:41)
at org.infinispan.server.core.transport.NettyTransport.stop(NettyTransport.java:149)
at org.infinispan.server.core.AbstractProtocolServer.stop(AbstractProtocolServer.java:202)
at org.infinispan.rest.RestServer.stop(RestServer.java:100)
at org.infinispan.rest.helper.RestServerHelper.stop(RestServerHelper.java:86)
{noformat}
{{CacheV2ResourceTest.afterSuite()}} sometimes hangs this way. Initially I attributed the hanging to ISPN-12558, but it looks like a separate issue.
> REST server stop hangs when channel is open
> -------------------------------------------
>
> Key: ISPN-12570
> URL: https://issues.redhat.com/browse/ISPN-12570
> Project: Infinispan
> Issue Type: Bug
> Components: REST, Server
> Affects Versions: 12.0.0.Dev07
> Reporter: Dan Berindei
> Priority: Major
> Fix For: 12.0.0.CR1
>
>
> {{NettyTransport.stop()}} calls {{acceptedChannels.close()}} after shutting down both the {{masterGroup}} and the {{ioGroup}}. But if a channel is still open, closing it requires submitting a task to the channel's event loop, which is now shut down.
> {{AbstractChannelHandlerContext.safeExecute()}} hides the rejection exception, but the {{ChannelPromise}} returned by {{channel.close()}} never completes, and the server doesn't stop.
> {noformat}
> java.util.concurrent.RejectedExecutionException: event executor terminated
> at io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:926)
> at io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:353)
> at io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:346)
> at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:828)
> at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:818)
> at io.netty.channel.AbstractChannelHandlerContext.safeExecute(AbstractChannelHandlerContext.java:989)
> at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:608)
> at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:472)
> at io.netty.channel.DefaultChannelPipeline.close(DefaultChannelPipeline.java:957)
> at io.netty.channel.AbstractChannel.close(AbstractChannel.java:232)
> at io.netty.channel.group.DefaultChannelGroup.close(DefaultChannelGroup.java:342)
> at io.netty.channel.group.DefaultChannelGroup.close(DefaultChannelGroup.java:221)
> at org.infinispan.server.core.transport.NettyTransport.stop(NettyTransport.java:135)
> at org.infinispan.server.core.AbstractProtocolServer.stop(AbstractProtocolServer.java:202)
> at org.infinispan.rest.RestServer.stop(RestServer.java:100)
> at org.infinispan.rest.helper.RestServerHelper.stop(RestServerHelper.java:86)
> {noformat}
> {noformat}
> java.lang.RuntimeException: Test timed out after 300 seconds
> at java.base@11.0.9/java.lang.Object.$$BlockHound$$_wait(Native Method)
> at java.base@11.0.9/java.lang.Object.wait(Object.java)
> at java.base@11.0.9/java.lang.Object.wait(Object.java:328)
> at app//io.netty.util.concurrent.DefaultPromise.awaitUninterruptibly(DefaultPromise.java:274)
> at app//io.netty.channel.group.DefaultChannelGroupFuture.awaitUninterruptibly(DefaultChannelGroupFuture.java:178)
> at app//io.netty.channel.group.DefaultChannelGroupFuture.awaitUninterruptibly(DefaultChannelGroupFuture.java:41)
> at app//org.infinispan.server.core.transport.NettyTransport.stop(NettyTransport.java:149)
> at app//org.infinispan.server.core.AbstractProtocolServer.stop(AbstractProtocolServer.java:202)
> at app//org.infinispan.rest.RestServer.stop(RestServer.java:100)
> at app//org.infinispan.rest.helper.RestServerHelper.stop(RestServerHelper.java:86)
> {noformat}
> {{CacheV2ResourceTest.afterSuite()}} sometimes hangs this way. Initially I attributed the hanging to ISPN-12558, but it looks like a separate issue.