[Red Hat JIRA] (WFLY-14428) [Wildfly Artemis] Message mix if traffic load high
by terry liang (Jira)
[ https://issues.redhat.com/browse/WFLY-14428?page=com.atlassian.jira.plugi... ]
terry liang edited comment on WFLY-14428 at 2/13/21 8:53 AM:
-------------------------------------------------------------
Hi [~ehugonnet] and [~brian.stansberry], about the code: it uses the JMS client and sometimes gets the exception below, but once that exception occurs, we cannot receive messages any more.
{code:java}
2021-02-12 11:02:30,840 ERROR [org.hornetq.core.client] (Thread-2 (HornetQ-client-netty-threads-1811187701)) HQ214013: Failed to decode packet: java.lang.IllegalArgumentException: HQ119032: Invalid type: 0
at org.hornetq.core.protocol.core.impl.PacketDecoder.decode(PacketDecoder.java:447) [hornetq-core-client-2.4.7.Final.jar:]
at org.hornetq.core.protocol.ClientPacketDecoder.decode(ClientPacketDecoder.java:56) [hornetq-core-client-2.4.7.Final.jar:]
at org.hornetq.core.protocol.core.impl.RemotingConnectionImpl.bufferReceived(RemotingConnectionImpl.java:493) [hornetq-core-client-2.4.7.Final.jar:]
at org.hornetq.core.client.impl.ClientSessionFactoryImpl$DelegatingBufferHandler.bufferReceived(ClientSessionFactoryImpl.java:1712) [hornetq-core-client-2.4.7.Final.jar:]
at org.hornetq.core.remoting.impl.netty.HornetQChannelHandler.channelRead(HornetQChannelHandler.java:73) [hornetq-core-client-2.4.7.Final.jar:]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-all-4.1.29.Final.jar:4.1.29.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-all-4.1.29.Final.jar:4.1.29.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-all-4.1.29.Final.jar:4.1.29.Final]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) [netty-all-4.1.29.Final.jar:4.1.29.Final]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:297) [netty-all-4.1.29.Final.jar:4.1.29.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:413) [netty-all-4.1.29.Final.jar:4.1.29.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) [netty-all-4.1.29.Final.jar:4.1.29.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-all-4.1.29.Final.jar:4.1.29.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-all-4.1.29.Final.jar:4.1.29.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-all-4.1.29.Final.jar:4.1.29.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [netty-all-4.1.29.Final.jar:4.1.29.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-all-4.1.29.Final.jar:4.1.29.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-all-4.1.29.Final.jar:4.1.29.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-all-4.1.29.Final.jar:4.1.29.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-all-4.1.29.Final.jar:4.1.29.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:628) [netty-all-4.1.29.Final.jar:4.1.29.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:563) [netty-all-4.1.29.Final.jar:4.1.29.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480) [netty-all-4.1.29.Final.jar:4.1.29.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442) [netty-all-4.1.29.Final.jar:4.1.29.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884) [netty-all-4.1.29.Final.jar:4.1.29.Final]
at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_212]
{code}
What I have tried is building a package, hornetq-core-client-2.4.x-SNAPSHOT.jar, from the repo [https://github.com/hornetq/hornetq.git] (branch 2.4.x), and it is working now!
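The HQ119032 "Invalid type: 0" failure means the client read a packet whose leading type byte is not any known packet type, which is typically a symptom of a corrupted or misaligned frame on the wire (consistent with messages getting mixed under load). As a simplified, hypothetical illustration (this is not HornetQ's actual wire format, and the type constant is made up), a length-prefixed decoder rejects a frame whose type byte it does not recognize:

```java
import java.nio.ByteBuffer;

public class FrameDecoderDemo {
    // Hypothetical packet type constant; the real ones live in HornetQ's PacketImpl.
    static final byte TYPE_MESSAGE = 71;

    // Encode a frame as [length:int][type:byte][payload:bytes].
    static ByteBuffer encode(byte type, byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(4 + 1 + payload.length);
        buf.putInt(1 + payload.length);
        buf.put(type);
        buf.put(payload);
        buf.flip();
        return buf;
    }

    // Decode a frame, validating the type byte before trusting the rest.
    static byte[] decode(ByteBuffer buf) {
        int length = buf.getInt();
        byte type = buf.get();
        if (type != TYPE_MESSAGE) {
            // Mirrors HQ119032: an unrecognized type byte means the
            // stream is corrupted or the decoder lost frame alignment.
            throw new IllegalArgumentException("Invalid type: " + type);
        }
        byte[] payload = new byte[length - 1];
        buf.get(payload);
        return payload;
    }

    public static void main(String[] args) {
        // A well-formed frame round-trips cleanly.
        byte[] ok = decode(encode(TYPE_MESSAGE, "hello".getBytes()));
        System.out.println(new String(ok)); // prints "hello"

        // A frame whose type byte was clobbered to 0 is rejected,
        // the same failure mode seen in the log above.
        try {
            decode(encode((byte) 0, "hello".getBytes()));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // prints "Invalid type: 0"
        }
    }
}
```

Once a decoder has thrown on a misaligned stream, every subsequent read starts at the wrong offset, which would explain why no further messages can be received after the first failure.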
> [Wildfly Artemis] Message mix if traffic load high
> --------------------------------------------------
>
> Key: WFLY-14428
> URL: https://issues.redhat.com/browse/WFLY-14428
> Project: WildFly
> Issue Type: Bug
> Components: JMS
> Affects Versions: 16.0.0.Final
> Reporter: terry liang
> Assignee: Emmanuel Hugonnet
> Priority: Major
>
> Artemis mixes up messages when traffic load is high; for example, the JMSCorrelationID stays the same but the message body changes between sending and receiving.
--
This message was sent by Atlassian Jira
(v8.13.1#813001)
5 years, 2 months
[Red Hat JIRA] (WFCORE-4827) Errors Missing on Invalid Configuration
by Brian Stansberry (Jira)
[ https://issues.redhat.com/browse/WFCORE-4827?page=com.atlassian.jira.plug... ]
Brian Stansberry commented on WFCORE-4827:
------------------------------------------
[~spyrkob] It's not OK for a Host Controller to keep running if there is any error during boot. A broken HC can mess up the entire domain, so no, that can't be changed. (FWIW, I'd like to see the default behavior of a standalone server changed as well; allowing a process to continue in the presence of boot errors is not a good default, IMO. That's an RFE, though.)
What's the failure in a standalone server?
> Errors Missing on Invalid Configuration
> ---------------------------------------
>
> Key: WFCORE-4827
> URL: https://issues.redhat.com/browse/WFCORE-4827
> Project: WildFly Core
> Issue Type: Bug
> Components: Security
> Affects Versions: 11.0.0.Beta7
> Reporter: Darran Lofthouse
> Assignee: Richard Opalka
> Priority: Critical
> Labels: domain-mode
>
> [~ropalka] I believe this is caused by the MSC refactoring.
> Steps, in the default host.xml for domain mode.
> 1. Define the following security realm: -
> {noformat}
> <security-realms>
> <security-realm name="ldap_security_realm">
> <server-identities>
> <ssl>
> <keystore path="generated.keystore" relative-to="jboss.server.config.dir" keystore-password="password" alias="server" key-password="password" generate-self-signed-certificate-host="localhost"/>
> </ssl>
> </server-identities>
> <authentication>
> <ldap connection="testLdap" base-dn="dc=test,dc=sbc,dc=com" recursive="true">
> <username-filter attribute="samaccountname"/>
> </ldap>
> </authentication>
> </security-realm>
> {noformat}
> 2. Define the following outbound connection: -
> {noformat}
> <outbound-connections>
> <ldap name="testLdap" url="ldap://localhost:636" search-dn="CN=mxxxxxx,OU=GenericID,OU=testUsers,DC=testServices,DC=test,DC=com" search-credential="password" />
> </outbound-connections>
> {noformat}
> 3. Update the management interfaces to: -
> {noformat}
> <management-interfaces>
> <http-interface security-realm="ldap_security_realm">
> <http-upgrade enabled="true"/>
> <socket interface="management" port="${jboss.management.http.port:9990}"/>
> </http-interface>
> </management-interfaces>
> {noformat}
> The server fails to boot with just the following error: -
> {noformat}
> [Host Controller] 17:56:40,052 FATAL [org.jboss.as.host.controller] (Controller Boot Thread) WFLYHC0034: Host Controller boot has failed in an unrecoverable manner; exiting. See previous messages for details.
> {noformat}
> If the management interface is then updated to reference the ManagementRealm instead the error is now: -
> {noformat}
> [Host Controller] 18:01:48,595 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
> [Host Controller] ("host" => "master"),
> [Host Controller] ("core-service" => "management"),
> [Host Controller] ("security-realm" => "ldap_security_realm")
> [Host Controller] ]) - failure description: {
> [Host Controller] "WFLYCTL0412: Required services that are not installed:" => ["jboss.server.path.\"jboss.server.config.dir\""],
> [Host Controller] "WFLYCTL0180: Services with missing/unavailable dependencies" => ["org.wildfly.core.management.security.realm.ldap_security_realm.key-manager is missing [jboss.server.path.\"jboss.server.config.dir\"]"]
> [Host Controller] }
> {noformat}
> This error is expected as the realm defined in step 1 referenced an invalid path.
> I believe the error reporting should come from this method: -
> org.jboss.as.controller.ServiceVerificationHelper.execute(OperationContext, ModelNode)
> However, something seems to have changed with the MSC migration.
> This was recently encountered while debugging the bug report in https://issues.redhat.com/browse/WFCORE-4820; if you see the error "Multiple CallbackHandlerServices for the same mechanism (PLAIN)", that is covered by WFCORE-4820.
--
[Red Hat JIRA] (WFCORE-4827) Errors Missing on Invalid Configuration
by Brian Stansberry (Jira)
[ https://issues.redhat.com/browse/WFCORE-4827?page=com.atlassian.jira.plug... ]
Brian Stansberry updated WFCORE-4827:
-------------------------------------
Labels: domain-mode (was: )
--
[Red Hat JIRA] (WFLY-14421) Too many open file Descriptors in Wildfly-18
by Brian Stansberry (Jira)
[ https://issues.redhat.com/browse/WFLY-14421?page=com.atlassian.jira.plugi... ]
Brian Stansberry commented on WFLY-14421:
-----------------------------------------
I'm not sure, but assuming the JDK version you are using is the same, the likely reason is that JBoss Modules changed some of its resource loading to use NIO, e.g. java.nio.file.Files.newInputStream instead of new java.io.FileInputStream. The latter class has a finalize() implementation that calls close(); the InputStream returned from Files.newInputStream does not.
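In other words, the old behavior was an accident of FileInputStream's finalizer, not a guarantee; the only reliable pattern in application code is deterministic closing. A minimal sketch of the fix (the method and file names here are arbitrary):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class StreamCloseDemo {
    public static String readAll(Path path) throws IOException {
        // try-with-resources closes the stream deterministically, so
        // nothing depends on a finalizer (which the stream returned by
        // Files.newInputStream does not have) to release the descriptor.
        try (InputStream in = Files.newInputStream(path)) {
            StringBuilder sb = new StringBuilder();
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                sb.append(new String(buf, 0, n, StandardCharsets.UTF_8));
            }
            return sb.toString();
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.write(tmp, "hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(readAll(tmp)); // prints "hello"
        Files.delete(tmp);
    }
}
```

With this pattern the descriptor count stays flat regardless of which stream implementation the server libraries hand back.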
> Too many open file Descriptors in Wildfly-18
> --------------------------------------------
>
> Key: WFLY-14421
> URL: https://issues.redhat.com/browse/WFLY-14421
> Project: WildFly
> Issue Type: Task
> Reporter: Manas Panda
> Assignee: Brian Stansberry
> Priority: Major
>
> In the application code deployed in WildFly 10, if a FileInputStream is not closed programmatically (a miss in the code), the orphaned references to the opened file input streams are closed during the GC (G1GC) cycle, so the number of open file descriptors does not grow.
> However, after upgrading from WildFly 10 to WildFly 18, file input streams that are not closed programmatically are no longer closed during the GC (G1GC) cycle, so the number of open file descriptors keeps growing.
> We agree that the input streams have to be closed programmatically in the application code, which we have now done, but we would like to understand the reason behind this behavior change in WildFly 18.
--