[JBoss JIRA] (WFLY-9364) Closing an EJBClientContext sometimes hangs causing high CPU usage
by Marius Tantareanu (JIRA)
[ https://issues.jboss.org/browse/WFLY-9364?page=com.atlassian.jira.plugin.... ]
Marius Tantareanu commented on WFLY-9364:
-----------------------------------------
Our application is dependent on specific WildFly versions. Unfortunately upgrading WildFly is not possible for released versions of the application.
Moreover, from https://issues.jboss.org/browse/JBEAP-9401 I understand that the EJBClientContext.close() method is no longer available in WildFly 11. So we would have to change our client.
Any chance of fixing this for the 10.1 version?
> Closing an EJBClientContext sometimes hangs causing high CPU usage
> ------------------------------------------------------------------
>
> Key: WFLY-9364
> URL: https://issues.jboss.org/browse/WFLY-9364
> Project: WildFly
> Issue Type: Bug
> Affects Versions: 10.1.0.Final
> Environment: Server:
> - OS:
> Red Hat Enterprise Linux Server release 7.2 (Maipo)
> - JDK:
> java version "1.8.0_144"
> Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
> Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
> - WildFly:
> WildFly 10.1.0.Final
> Client:
> - OS:
> Red Hat Enterprise Linux Server release 7.2 (Maipo)
> - JDK:
> java version "1.8.0_144"
> Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
> Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
> Reporter: Marius Tantareanu
> Assignee: Jason Greene
> Attachments: echo-client.zip, output_rhel7.txt, output_win10.txt, simple-ear.ear, td_rhel7.txt, td_win10.txt
>
>
> EJBClientContext.close() sometimes hangs and causes high CPU usage. We have a WildFly client that uses the EJB client API to invoke some EJBs remotely.
> Basically the client executes the following actions in a loop:
> - set up an EJBClientContext programmatically
> - create a JNDI context
> - lookup an EJB and invoke some operations on it
> - close the JNDI context and the EJBClientContext
> After the client runs a few hundred iterations (the actual number of iterations varies quite a lot from one run to another), it blocks while invoking EJBClientContext.close(). One XNIO thread in the client app also constantly consumes one CPU core (a full thread dump from the client app is attached). I was only able to reproduce this when connecting over TLS (port 8443). When using the unsecured port (8080) the problem does not reproduce (or it reproduces much less frequently and I didn't run enough iterations to catch it).
> Once the client app enters this state, top shows something like (notice the CPU usage of thread 12512):
> >top -H -p 6463
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 12512 root 20 0 11.527g 222680 13788 R 99.3 0.7 141:10.99 java
> 6466 root 20 0 11.527g 222680 13788 S 0.3 0.7 0:07.05 java
> 6467 root 20 0 11.527g 222680 13788 S 0.3 0.7 0:06.97 java
> 6477 root 20 0 11.527g 222680 13788 S 0.3 0.7 0:04.24 java
> 6463 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 6464 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:03.46 java
> 6465 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:06.94 java
> 6468 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:07.02 java
> 6469 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:16.42 java
> 6470 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.01 java
> 6471 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.01 java
> 6472 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 6473 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:06.87 java
> 6474 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:06.70 java
> 6475 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:03.09 java
> 6476 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 12513 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 12514 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 12515 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 12516 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 12517 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 12518 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 12519 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 12520 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 12523 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 12524 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> 12597 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
> The thread that causes the CPU usage is the following:
> "Remoting "config-based-ejb-client-endpoint" I/O-1" #6025 daemon prio=5 os_prio=0 tid=0x00007f2f709d3000 nid=0x30e0 runnable [0x00007f2e7f9be000]
> java.lang.Thread.State: RUNNABLE
> at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
> at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
> at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
> at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
> - locked <0x00000005cc1844e8> (a sun.nio.ch.Util$3)
> - locked <0x00000005cc19f258> (a java.util.Collections$UnmodifiableSet)
> - locked <0x00000005cc184468> (a sun.nio.ch.EPollSelectorImpl)
> at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
> at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
> at org.xnio.nio.WorkerThread.run(WorkerThread.java:515)
> The client app main thread is blocked as below:
> "main" #1 prio=5 os_prio=0 tid=0x00007f2f70008800 nid=0x1940 in Object.wait() [0x00007f2f76350000]
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.jboss.remoting3.spi.AbstractHandleableCloseable.close(AbstractHandleableCloseable.java:190)
> - locked <0x00000005cc18de48> (a java.lang.Object)
> at org.jboss.ejb.client.remoting.ConnectionPool.safeClose(ConnectionPool.java:177)
> at org.jboss.ejb.client.remoting.ConnectionPool.release(ConnectionPool.java:104)
> - locked <0x00000005cbd369e0> (a org.jboss.ejb.client.remoting.ConnectionPool)
> at org.jboss.ejb.client.remoting.ConnectionPool$PooledConnection.close(ConnectionPool.java:198)
> at org.jboss.ejb.client.remoting.RemotingConnectionManager.safeClose(RemotingConnectionManager.java:65)
> - locked <0x00000005cc1a6840> (a java.util.Collections$SynchronizedRandomAccessList)
> at org.jboss.ejb.client.remoting.ConfigBasedEJBClientContextSelector$ContextCloseListener.contextClosed(ConfigBasedEJBClientContextSelector.java:220)
> at org.jboss.ejb.client.EJBClientContext.close(EJBClientContext.java:1305)
> - locked <0x00000005cc1a6888> (a org.jboss.ejb.client.EJBClientContext)
> at com.microfocus.echoclient.EchoClient.disconnect(EchoClient.java:66)
> at com.microfocus.echoclient.EchoClient.connectDisconnect(EchoClient.java:54)
> at com.microfocus.echoclient.EchoClient.main(EchoClient.java:36)
> The problem also reproduces on Windows (full thread dump of the client app is attached).
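The connect/invoke/disconnect cycle described in the report above can be sketched structurally as follows. The real EJB client and JNDI calls need a running WildFly server, so they are replaced here with stub steps; only the loop's control flow is shown, and the `Step` interface is illustrative, not part of the actual client.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Structural sketch of the reproduction loop from the report above.
// The actual client builds an EJBClientContext, creates a JNDI
// InitialContext, invokes the EJB, then closes both; those steps are
// stubbed out here so the loop can run standalone.
public class ConnectDisconnectLoop {

    interface Step { void run() throws Exception; }

    static int runLoop(int iterations, Step connect, Step invoke, Step disconnect) throws Exception {
        int completed = 0;
        for (int i = 0; i < iterations; i++) {
            connect.run();    // set up EJBClientContext + JNDI context
            invoke.run();     // look up the EJB and call its operations
            disconnect.run(); // close the JNDI context, then EJBClientContext.close()
                              // (the reported hang occurs inside this close)
            completed++;
        }
        return completed;
    }

    public static void main(String[] args) throws Exception {
        AtomicInteger steps = new AtomicInteger();
        int done = runLoop(300, steps::incrementAndGet, steps::incrementAndGet, steps::incrementAndGet);
        System.out.println(done + " iterations, " + steps.get() + " stub steps");
    }
}
```

Per the report, a real run of this loop against port 8443 blocks inside the disconnect step after a few hundred iterations.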
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
8 years, 7 months
[JBoss JIRA] (WFLY-9364) Closing an EJBClientContext sometimes hangs causing high CPU usage
by jaikiran pai (JIRA)
[ https://issues.jboss.org/browse/WFLY-9364?page=com.atlassian.jira.plugin.... ]
jaikiran pai commented on WFLY-9364:
------------------------------------
Any chance you can try this against the latest released 11.0.0.CR1 release and see if this is still an issue in there?
[JBoss JIRA] (WFLY-9365) Standalone JMS Client repeats throwing NPE when using Artemis Core API for JGroups Dynamic Discovery
by Masafumi Miura (JIRA)
[ https://issues.jboss.org/browse/WFLY-9365?page=com.atlassian.jira.plugin.... ]
Masafumi Miura updated WFLY-9365:
---------------------------------
Steps to Reproduce:
# Execute add-user.sh to create a JMS user
{code}
./bin/add-user.sh -a -u 'test' -p 'test@123' -g 'guest'
{code}
# Start WildFly 11.0.0.CR1 with standalone-full-ha.xml
# Execute jboss-cli to create a JMS Queue
{code}
jms-queue add --queue-address=testQueue --entries=queue/test,java:jboss/exported/jms/queue/testQueue
{code}
# Extract the attached reproducer
# Copy {{$JBOSS_HOME/bin/client/jboss-client.jar}} under the {{lib/}} directory
# Execute {{mvn initialize}} to install it in the local Maven repo
{code}
mvn initialize
{code}
# Execute a producer (or consumer)
{code}
mvn compile exec:java -Dexec.mainClass=com.redhat.jboss.support.example.client.JGroupsDiscoveryProducer -Djava.net.preferIPv4Stack=true -Djgroups.bind_addr=127.0.0.1 -Djava.util.logging.config.file=custom-logging.properties
{code}
{code}
mvn compile exec:java -Dexec.mainClass=com.redhat.jboss.support.example.client.JGroupsDiscoveryConsumer -Djava.net.preferIPv4Stack=true -Djgroups.bind_addr=127.0.0.1 -Djava.util.logging.config.file=custom-logging.properties
{code}
Then, you will see the NPE repeatedly.
was:
# Execute add-user.sh to create a JMS user
{code}
./bin/add-user.sh -a -u 'test' -p 'test@123' -g 'guest'
{code}
# Execute jboss-clie to create a JMS Queue
{code}
jms-queue add --queue-address=testQueue --entries=queue/test,java:jboss/exported/jms/queue/testQueue
{code}
# Extract the attached reproducer
# Copy {{$JBOSS_HOME/bin/client/jboss-client.jar}} under {{lib/}} directory
# Execute {{mvn initialize}} to install it in local maven repo
{code}
mvn initialize
{code}
# Execute a producer (or consumer)
{code}
mvn compile exec:java -Dexec.mainClass=com.redhat.jboss.support.example.client.JGroupsDiscoveryProducer -Djava.net.preferIPv4Stack=true -Djgroups.bind_addr=127.0.0.1 -Djava.util.logging.config.file=custom-logging.properties
{code}
{code}
mvn compile exec:java -Dexec.mainClass=com.redhat.jboss.support.example.client.JGroupsDiscoveryConsumer -Djava.net.preferIPv4Stack=true -Djgroups.bind_addr=127.0.0.1 -Djava.util.logging.config.file=custom-logging.properties
{code}
Then, you will see NPE repeatedly.
> Standalone JMS Client repeats throwing NPE when using Artemis Core API for JGroups Dynamic Discovery
> ----------------------------------------------------------------------------------------------------
>
> Key: WFLY-9365
> URL: https://issues.jboss.org/browse/WFLY-9365
> Project: WildFly
> Issue Type: Bug
> Components: JMS
> Affects Versions: 11.0.0.CR1
> Reporter: Masafumi Miura
> Assignee: Jeff Mesnil
> Attachments: artemis-core-api-client.zip
>
>
> When a standalone Java client uses the Artemis Core API for JGroups Dynamic Discovery, it repeatedly throws the following NPE:
> {code}
> SEVERE [null] JGRP000027: failed passing message up (org.jgroups.protocols.TP$SingleMessageHandler run)
> java.lang.NullPointerException
> at java.util.concurrent.LinkedBlockingDeque.offerLast(LinkedBlockingDeque.java:357)
> at java.util.concurrent.LinkedBlockingDeque.addLast(LinkedBlockingDeque.java:334)
> at java.util.concurrent.LinkedBlockingDeque.add(LinkedBlockingDeque.java:633)
> at org.apache.activemq.artemis.api.core.jgroups.JGroupsReceiver.receive(JGroupsReceiver.java:41)
> at org.apache.activemq.artemis.api.core.jgroups.JChannelWrapper$1.receive(JChannelWrapper.java:69)
> at org.jgroups.JChannel.invokeCallback(JChannel.java:816)
> at org.jgroups.JChannel.up(JChannel.java:741)
> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1030)
> at org.jgroups.protocols.FRAG2.up(FRAG2.java:165)
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:374)
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:390)
> at org.jgroups.protocols.pbcast.GMS.up(GMS.java:1037)
> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:234)
> at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:442)
> at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:649)
> at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:155)
> at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:200)
> at org.jgroups.protocols.FD.up(FD.java:260)
> at org.jgroups.protocols.MERGE3.up(MERGE3.java:292)
> at org.jgroups.protocols.Discovery.up(Discovery.java:296)
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1657)
> at org.jgroups.protocols.TP$SingleMessageHandler.run(TP.java:1872)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> The following IndexOutOfBoundsException also occurs sometimes (not continually, but sometimes frequently):
> {code}
> ERROR [org.apache.activemq.artemis.core.client] AMQ214010: Failed to receive datagram (org.apache.activemq.artemis.core.cluster.DiscoveryGroup$DiscoveryRunnable run)
> java.lang.IndexOutOfBoundsException: readerIndex(8) + length(1040187409) exceeds writerIndex(11): UnpooledHeapByteBuf(ridx: 8, widx: 11, cap: 11/11)
> at io.netty.buffer.AbstractByteBuf.checkReadableBytes0(AbstractByteBuf.java:1396)
> at io.netty.buffer.AbstractByteBuf.checkReadableBytes(AbstractByteBuf.java:1383)
> at io.netty.buffer.AbstractByteBuf.readBytes(AbstractByteBuf.java:850)
> at io.netty.buffer.AbstractByteBuf.readBytes(AbstractByteBuf.java:858)
> at io.netty.buffer.WrappedByteBuf.readBytes(WrappedByteBuf.java:649)
> at org.apache.activemq.artemis.core.buffers.impl.ChannelBufferWrapper.readSimpleStringInternal(ChannelBufferWrapper.java:93)
> at org.apache.activemq.artemis.core.buffers.impl.ChannelBufferWrapper.readStringInternal(ChannelBufferWrapper.java:114)
> at org.apache.activemq.artemis.core.buffers.impl.ChannelBufferWrapper.readString(ChannelBufferWrapper.java:99)
> at org.apache.activemq.artemis.core.cluster.DiscoveryGroup$DiscoveryRunnable.run(DiscoveryGroup.java:274)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> The standalone Java client calls the Artemis Core API to use JGroups Dynamic Discovery as follows. See the attached reproducer for details.
> {code}
> private static final String channelName = "ee";
> private static final String jgroupsConfigFile = "jgroups-custom-config.xml";
> String uri = "jgroups://" + channelName + "?file=" + jgroupsConfigFile + "&type=QUEUE_CF";
> ActiveMQConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(uri, "...");
> Queue queue = ActiveMQJMSClient.createQueue("testQueue");
> try (JMSContext jmsContext = cf.createContext("test", "test@123")) {
> // sending or receiving messages
> }
> {code}
[JBoss JIRA] (WFLY-9366) Standalone JMS Client repeats throwing NPE when using Artemis Core API for JGroups Dynamic Discovery
by Masafumi Miura (JIRA)
Masafumi Miura created WFLY-9366:
------------------------------------
Summary: Standalone JMS Client repeats throwing NPE when using Artemis Core API for JGroups Dynamic Discovery
Key: WFLY-9366
URL: https://issues.jboss.org/browse/WFLY-9366
Project: WildFly
Issue Type: Bug
Components: JMS
Affects Versions: 11.0.0.CR1
Reporter: Masafumi Miura
Assignee: Jeff Mesnil
Attachments: artemis-core-api-client.zip
When a standalone Java client uses the Artemis Core API for JGroups Dynamic Discovery, it repeatedly throws the following NPE:
{code}
SEVERE [null] JGRP000027: failed passing message up (org.jgroups.protocols.TP$SingleMessageHandler run)
java.lang.NullPointerException
at java.util.concurrent.LinkedBlockingDeque.offerLast(LinkedBlockingDeque.java:357)
at java.util.concurrent.LinkedBlockingDeque.addLast(LinkedBlockingDeque.java:334)
at java.util.concurrent.LinkedBlockingDeque.add(LinkedBlockingDeque.java:633)
at org.apache.activemq.artemis.api.core.jgroups.JGroupsReceiver.receive(JGroupsReceiver.java:41)
at org.apache.activemq.artemis.api.core.jgroups.JChannelWrapper$1.receive(JChannelWrapper.java:69)
at org.jgroups.JChannel.invokeCallback(JChannel.java:816)
at org.jgroups.JChannel.up(JChannel.java:741)
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1030)
at org.jgroups.protocols.FRAG2.up(FRAG2.java:165)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:374)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:390)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:1037)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:234)
at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:442)
at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:649)
at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:155)
at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:200)
at org.jgroups.protocols.FD.up(FD.java:260)
at org.jgroups.protocols.MERGE3.up(MERGE3.java:292)
at org.jgroups.protocols.Discovery.up(Discovery.java:296)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1657)
at org.jgroups.protocols.TP$SingleMessageHandler.run(TP.java:1872)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{code}
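The top of this trace is informative: LinkedBlockingDeque.offerLast throws NullPointerException when handed a null element, which suggests JGroupsReceiver.receive is passing a null message payload into the deque. A minimal stdlib illustration of that failure mode (this is not the Artemis code itself, just the deque behavior the trace ends in):

```java
import java.util.concurrent.LinkedBlockingDeque;

// LinkedBlockingDeque, like the other java.util.concurrent queues,
// rejects null elements: add(...) -> addLast(...) -> offerLast(...)
// throws NPE, matching the first three frames of the trace above.
public class NullElementDemo {

    static boolean rejectsNull(LinkedBlockingDeque<byte[]> deque) {
        try {
            deque.add(null);
            return false;
        } catch (NullPointerException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        LinkedBlockingDeque<byte[]> deque = new LinkedBlockingDeque<>();
        deque.add(new byte[] {1, 2, 3});  // a non-null payload is accepted
        System.out.println("rejects null: " + rejectsNull(deque));
    }
}
```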
The following IndexOutOfBoundsException also occurs sometimes (not continually, but sometimes frequently):
{code}
ERROR [org.apache.activemq.artemis.core.client] AMQ214010: Failed to receive datagram (org.apache.activemq.artemis.core.cluster.DiscoveryGroup$DiscoveryRunnable run)
java.lang.IndexOutOfBoundsException: readerIndex(8) + length(1040187409) exceeds writerIndex(11): UnpooledHeapByteBuf(ridx: 8, widx: 11, cap: 11/11)
at io.netty.buffer.AbstractByteBuf.checkReadableBytes0(AbstractByteBuf.java:1396)
at io.netty.buffer.AbstractByteBuf.checkReadableBytes(AbstractByteBuf.java:1383)
at io.netty.buffer.AbstractByteBuf.readBytes(AbstractByteBuf.java:850)
at io.netty.buffer.AbstractByteBuf.readBytes(AbstractByteBuf.java:858)
at io.netty.buffer.WrappedByteBuf.readBytes(WrappedByteBuf.java:649)
at org.apache.activemq.artemis.core.buffers.impl.ChannelBufferWrapper.readSimpleStringInternal(ChannelBufferWrapper.java:93)
at org.apache.activemq.artemis.core.buffers.impl.ChannelBufferWrapper.readStringInternal(ChannelBufferWrapper.java:114)
at org.apache.activemq.artemis.core.buffers.impl.ChannelBufferWrapper.readString(ChannelBufferWrapper.java:99)
at org.apache.activemq.artemis.core.cluster.DiscoveryGroup$DiscoveryRunnable.run(DiscoveryGroup.java:274)
at java.lang.Thread.run(Thread.java:748)
{code}
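The message "readerIndex(8) + length(1040187409) exceeds writerIndex(11)" suggests the client read a 4-byte length prefix from a datagram that is not in the expected wire format, yielding a garbage length (1040187409 is 0x3E000011) far larger than the bytes actually available. The same failure mode can be shown with a plain java.nio.ByteBuffer (an illustration, not the Netty/Artemis code):

```java
import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;

// Reading a length prefix from bytes that are not in the expected
// format yields a garbage length; trying to read that many bytes
// then fails a bounds check, as in the Netty trace above.
public class BogusLengthDemo {

    static int declaredLength(ByteBuffer datagram) {
        return datagram.getInt(); // interprets the next 4 bytes as a big-endian length
    }

    public static void main(String[] args) {
        ByteBuffer datagram = ByteBuffer.wrap(
                new byte[] {0x3E, 0x00, 0x00, 0x11, 1, 2, 3});
        int len = declaredLength(datagram);                   // 0x3E000011 = 1040187409
        System.out.println("declared length = " + len);
        System.out.println("bytes remaining = " + datagram.remaining()); // only 3 left
        try {
            byte[] payload = new byte[Math.min(len, 64)];     // cap the allocation for the demo
            datagram.get(payload);                            // asks for more bytes than remain
        } catch (BufferUnderflowException e) {
            System.out.println("bounds check failed, as in AbstractByteBuf.checkReadableBytes");
        }
    }
}
```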
The standalone Java client calls the Artemis Core API to use JGroups Dynamic Discovery as follows. See the attached reproducer for details.
{code}
private static final String channelName = "ee";
private static final String jgroupsConfigFile = "jgroups-custom-config.xml";
String uri = "jgroups://" + channelName + "?file=" + jgroupsConfigFile + "&type=QUEUE_CF";
ActiveMQConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(uri, "...");
Queue queue = ActiveMQJMSClient.createQueue("testQueue");
try (JMSContext jmsContext = cf.createContext("test", "test@123")) {
// sending or receiving messages
}
{code}
[JBoss JIRA] (WFLY-9365) Standalone JMS Client repeats throwing NPE when using Artemis Core API for JGroups Dynamic Discovery
by Masafumi Miura (JIRA)
[ https://issues.jboss.org/browse/WFLY-9365?page=com.atlassian.jira.plugin.... ]
Masafumi Miura updated WFLY-9365:
---------------------------------
Attachment: artemis-core-api-client.zip
[JBoss JIRA] (WFLY-9365) Standalone JMS Client repeats throwing NPE when using Artemis Core API for JGroups Dynamic Discovery
by Masafumi Miura (JIRA)
Masafumi Miura created WFLY-9365:
------------------------------------
Summary: Standalone JMS Client repeats throwing NPE when using Artemis Core API for JGroups Dynamic Discovery
Key: WFLY-9365
URL: https://issues.jboss.org/browse/WFLY-9365
Project: WildFly
Issue Type: Bug
Components: JMS
Affects Versions: 11.0.0.CR1
Reporter: Masafumi Miura
Assignee: Jeff Mesnil
When a standalone Java client uses Artemis Core API for JGroups Dynamic Discovery, it repeats throwing the following NPE:
{code}
SEVERE [null] JGRP000027: failed passing message up (org.jgroups.protocols.TP$SingleMessageHandler run)
java.lang.NullPointerException
at java.util.concurrent.LinkedBlockingDeque.offerLast(LinkedBlockingDeque.java:357)
at java.util.concurrent.LinkedBlockingDeque.addLast(LinkedBlockingDeque.java:334)
at java.util.concurrent.LinkedBlockingDeque.add(LinkedBlockingDeque.java:633)
at org.apache.activemq.artemis.api.core.jgroups.JGroupsReceiver.receive(JGroupsReceiver.java:41)
at org.apache.activemq.artemis.api.core.jgroups.JChannelWrapper$1.receive(JChannelWrapper.java:69)
at org.jgroups.JChannel.invokeCallback(JChannel.java:816)
at org.jgroups.JChannel.up(JChannel.java:741)
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1030)
at org.jgroups.protocols.FRAG2.up(FRAG2.java:165)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:374)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:390)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:1037)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:234)
at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:442)
at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:649)
at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:155)
at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:200)
at org.jgroups.protocols.FD.up(FD.java:260)
at org.jgroups.protocols.MERGE3.up(MERGE3.java:292)
at org.jgroups.protocols.Discovery.up(Discovery.java:296)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1657)
at org.jgroups.protocols.TP$SingleMessageHandler.run(TP.java:1872)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{code}
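For context, LinkedBlockingDeque (like the other java.util.concurrent collections) rejects null elements, so the frame at offerLast indicates a null was handed to JGroupsReceiver's deque; a discovery message with a null payload buffer is one plausible trigger (an assumption, not confirmed by the trace alone). A minimal stdlib sketch of the failing call:

```java
import java.util.concurrent.LinkedBlockingDeque;

public class NullElementDemo {
    public static void main(String[] args) {
        LinkedBlockingDeque<byte[]> deque = new LinkedBlockingDeque<>();
        byte[] payload = null; // e.g. a discovery message with no buffer (assumption)
        try {
            // add -> addLast -> offerLast, matching the frames in the trace;
            // offerLast throws NullPointerException for null elements.
            deque.add(payload);
        } catch (NullPointerException expected) {
            System.out.println("null element rejected");
        }
    }
}
```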
The following IndexOutOfBoundsException also occurs intermittently (not continuously, but at times frequently):
{code}
ERROR [org.apache.activemq.artemis.core.client] AMQ214010: Failed to receive datagram (org.apache.activemq.artemis.core.cluster.DiscoveryGroup$DiscoveryRunnable run)
java.lang.IndexOutOfBoundsException: readerIndex(8) + length(1040187409) exceeds writerIndex(11): UnpooledHeapByteBuf(ridx: 8, widx: 11, cap: 11/11)
at io.netty.buffer.AbstractByteBuf.checkReadableBytes0(AbstractByteBuf.java:1396)
at io.netty.buffer.AbstractByteBuf.checkReadableBytes(AbstractByteBuf.java:1383)
at io.netty.buffer.AbstractByteBuf.readBytes(AbstractByteBuf.java:850)
at io.netty.buffer.AbstractByteBuf.readBytes(AbstractByteBuf.java:858)
at io.netty.buffer.WrappedByteBuf.readBytes(WrappedByteBuf.java:649)
at org.apache.activemq.artemis.core.buffers.impl.ChannelBufferWrapper.readSimpleStringInternal(ChannelBufferWrapper.java:93)
at org.apache.activemq.artemis.core.buffers.impl.ChannelBufferWrapper.readStringInternal(ChannelBufferWrapper.java:114)
at org.apache.activemq.artemis.core.buffers.impl.ChannelBufferWrapper.readString(ChannelBufferWrapper.java:99)
at org.apache.activemq.artemis.core.cluster.DiscoveryGroup$DiscoveryRunnable.run(DiscoveryGroup.java:274)
at java.lang.Thread.run(Thread.java:748)
{code}
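The exception shows a frame whose declared string length (1040187409) wildly exceeds the bytes actually received (writerIndex 11), which suggests the discovery group decoded a datagram that was not an Artemis discovery packet; the Artemis reader does not validate the length prefix before reading, hence the IndexOutOfBoundsException. A stdlib sketch of the decoding problem, with a hypothetical validating reader (not the Artemis API):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class LengthPrefixDemo {
    // Hypothetical helper: read a length-prefixed string, validating the
    // declared length against the bytes actually available before reading.
    static String readPrefixedString(ByteBuffer buf) {
        int len = buf.getInt();
        if (len < 0 || len > buf.remaining()) {
            throw new IllegalArgumentException(
                "declared length " + len + " exceeds remaining " + buf.remaining());
        }
        byte[] data = new byte[len];
        buf.get(data);
        return new String(data, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // A well-formed frame: 4-byte length prefix followed by the payload.
        ByteBuffer ok = ByteBuffer.allocate(8)
                .putInt(4).put("node".getBytes(StandardCharsets.UTF_8));
        ok.flip();
        System.out.println(readPrefixedString(ok)); // prints "node"

        // A bogus frame like the one in the error: a huge declared length.
        ByteBuffer bad = ByteBuffer.allocate(8).putInt(1040187409).putInt(0);
        bad.flip();
        try {
            readPrefixedString(bad);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```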
The standalone Java client calls the Artemis Core API for JGroups Dynamic Discovery as follows. See the attached reproducer for details.
{code}
private static final String channelName = "ee";
private static final String jgroupsConfigFile = "jgroups-custom-config.xml";
String uri = "jgroups://" + channelName + "?file=" + jgroupsConfigFile + "&type=QUEUE_CF";
ActiveMQConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(uri, "...");
Queue queue = ActiveMQJMSClient.createQueue("testQueue");
try (JMSContext jmsContext = cf.createContext("test", "test@123")) {
    // sending or receiving messages
}
{code}
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (WFLY-9364) Closing an EJBClientContext sometimes hangs causing high CPU usage
by Marius Tantareanu (JIRA)
Marius Tantareanu created WFLY-9364:
---------------------------------------
Summary: Closing an EJBClientContext sometimes hangs causing high CPU usage
Key: WFLY-9364
URL: https://issues.jboss.org/browse/WFLY-9364
Project: WildFly
Issue Type: Bug
Affects Versions: 10.1.0.Final
Environment: Server:
- OS:
Red Hat Enterprise Linux Server release 7.2 (Maipo)
- JDK:
java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
- WildFly:
WildFly 10.1.0.Final
Client:
- OS:
Red Hat Enterprise Linux Server release 7.2 (Maipo)
- JDK:
java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
Reporter: Marius Tantareanu
Assignee: Jason Greene
Attachments: echo-client.zip, output_rhel7.txt, output_win10.txt, simple-ear.ear, td_rhel7.txt, td_win10.txt
EJBClientContext.close() sometimes hangs and causes high CPU usage. We have a WildFly client that uses the EJB client API to invoke some EJBs remotely.
Basically, the client executes the following actions in a loop:
- set up an EJBClientContext programmatically
- create a JNDI context
- look up an EJB and invoke some operations on it
- close the JNDI context and the EJBClientContext
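The JNDI side of the loop above can be sketched with the stdlib alone: the legacy WildFly EJB client resolves the "ejb:" URL scheme through the org.jboss.ejb.client.naming URL package prefix, while the connection parameters (host, port, TLS) live in the programmatically built EJBClientContext. The lookup name below is hypothetical; the actual setup is in the attached echo-client.zip.

```java
import java.util.Properties;
import javax.naming.Context;

public class LookupLoopSketch {
    // JNDI environment for the legacy "ejb:" URL scheme. Only the URL
    // package prefix goes here; connection details are supplied by the
    // EJBClientContext that the client creates programmatically.
    static Properties jndiEnv() {
        Properties env = new Properties();
        env.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");
        return env;
    }

    public static void main(String[] args) {
        Properties env = jndiEnv();
        // Hypothetical lookup name, following the ejb: naming convention:
        // ejb:<app>/<module>/<bean>!<fully-qualified-interface>
        String name = "ejb:simple-ear/ejb-module/EchoBean!com.example.Echo";
        System.out.println(env.get(Context.URL_PKG_PREFIXES) + " -> " + name);
    }
}
```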
After the client runs a few hundred iterations (the actual number varies quite a lot from one run to another), it blocks while invoking EJBClientContext.close(). In addition, one XNIO thread in the client app constantly consumes one CPU core (a full thread dump from the client app is attached). I was only able to reproduce this when connecting over TLS (port 8443). When using the insecure port (8080) the problem does not reproduce (or it reproduces much less frequently and I didn't run enough iterations to catch it).
Once the client app enters this state, top shows something like (notice the CPU usage of thread 12512):
{code}
> top -H -p 6463
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
12512 root 20 0 11.527g 222680 13788 R 99.3 0.7 141:10.99 java
6466 root 20 0 11.527g 222680 13788 S 0.3 0.7 0:07.05 java
6467 root 20 0 11.527g 222680 13788 S 0.3 0.7 0:06.97 java
6477 root 20 0 11.527g 222680 13788 S 0.3 0.7 0:04.24 java
6463 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
6464 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:03.46 java
6465 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:06.94 java
6468 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:07.02 java
6469 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:16.42 java
6470 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.01 java
6471 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.01 java
6472 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
6473 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:06.87 java
6474 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:06.70 java
6475 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:03.09 java
6476 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
12513 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
12514 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
12515 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
12516 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
12517 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
12518 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
12519 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
12520 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
12523 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
12524 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
12597 root 20 0 11.527g 222680 13788 S 0.0 0.7 0:00.00 java
{code}
The thread that causes the CPU usage is the following:
{code}
"Remoting "config-based-ejb-client-endpoint" I/O-1" #6025 daemon prio=5 os_prio=0 tid=0x00007f2f709d3000 nid=0x30e0 runnable [0x00007f2e7f9be000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x00000005cc1844e8> (a sun.nio.ch.Util$3)
- locked <0x00000005cc19f258> (a java.util.Collections$UnmodifiableSet)
- locked <0x00000005cc184468> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
at org.xnio.nio.WorkerThread.run(WorkerThread.java:515)
{code}
The client app main thread is blocked as below:
{code}
"main" #1 prio=5 os_prio=0 tid=0x00007f2f70008800 nid=0x1940 in Object.wait() [0x00007f2f76350000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at org.jboss.remoting3.spi.AbstractHandleableCloseable.close(AbstractHandleableCloseable.java:190)
- locked <0x00000005cc18de48> (a java.lang.Object)
at org.jboss.ejb.client.remoting.ConnectionPool.safeClose(ConnectionPool.java:177)
at org.jboss.ejb.client.remoting.ConnectionPool.release(ConnectionPool.java:104)
- locked <0x00000005cbd369e0> (a org.jboss.ejb.client.remoting.ConnectionPool)
at org.jboss.ejb.client.remoting.ConnectionPool$PooledConnection.close(ConnectionPool.java:198)
at org.jboss.ejb.client.remoting.RemotingConnectionManager.safeClose(RemotingConnectionManager.java:65)
- locked <0x00000005cc1a6840> (a java.util.Collections$SynchronizedRandomAccessList)
at org.jboss.ejb.client.remoting.ConfigBasedEJBClientContextSelector$ContextCloseListener.contextClosed(ConfigBasedEJBClientContextSelector.java:220)
at org.jboss.ejb.client.EJBClientContext.close(EJBClientContext.java:1305)
- locked <0x00000005cc1a6888> (a org.jboss.ejb.client.EJBClientContext)
at com.microfocus.echoclient.EchoClient.disconnect(EchoClient.java:66)
at com.microfocus.echoclient.EchoClient.connectDisconnect(EchoClient.java:54)
at com.microfocus.echoclient.EchoClient.main(EchoClient.java:36)
{code}
The problem also reproduces on Windows (full thread dump of the client app is attached).
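The main thread is parked in Object.wait() inside AbstractHandleableCloseable.close(), which only returns once the close-complete notification arrives from the I/O side; if the XNIO thread is stuck spinning in its selector loop, that notification never comes. The wait/notify shape involved, plus a bounded-wait variant (a defensive sketch, not part of the Remoting API), can be illustrated with the stdlib alone:

```java
import java.util.concurrent.TimeoutException;

public class BoundedCloseWait {
    private final Object closeLock = new Object();
    private boolean closed;

    // Mirrors the wait in AbstractHandleableCloseable.close(): block until
    // another thread marks the resource closed and notifies the monitor.
    // The real API waits without a timeout, which is why a stuck I/O thread
    // leaves close() hanging forever; this sketch bounds the wait instead.
    void awaitClose(long timeoutMillis) throws InterruptedException, TimeoutException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        synchronized (closeLock) {
            while (!closed) {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) {
                    throw new TimeoutException("close did not complete in time");
                }
                closeLock.wait(remaining);
            }
        }
    }

    // The counterpart the I/O thread would run once the channel is torn down.
    void markClosed() {
        synchronized (closeLock) {
            closed = true;
            closeLock.notifyAll();
        }
    }

    public static void main(String[] args) throws Exception {
        BoundedCloseWait c = new BoundedCloseWait();
        // Simulate the I/O thread completing the close shortly afterwards.
        new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) {}
            c.markClosed();
        }).start();
        c.awaitClose(5000);
        System.out.println("closed cleanly");
    }
}
```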
[JBoss JIRA] (AG-34) Reset connection attributes when returning to pool
by Luis Barreiro (JIRA)
[ https://issues.jboss.org/browse/AG-34?page=com.atlassian.jira.plugin.syst... ]
Luis Barreiro updated AG-34:
----------------------------
Summary: Reset connection attributes when returning to pool (was: Reset connection attributes on close)
> Reset connection attributes when returning to pool
> --------------------------------------------------
>
> Key: AG-34
> URL: https://issues.jboss.org/browse/AG-34
> Project: Agroal
> Issue Type: Bug
> Components: pool
> Affects Versions: 0.2
> Reporter: Luis Barreiro
> Assignee: Luis Barreiro
> Fix For: 0.3
>
>
> Connections that are acquired from the pool should have their attributes set according to the configuration.
>
> This is particularly important for transaction-related attributes (these are the ones addressable now, since the configuration supplies default values for them).
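The intended behavior can be sketched with plain JDBC types; the attribute names (auto-commit, transaction isolation) and the default values below are assumptions standing in for "the default values supplied in the configuration", and the Agroal pool internals are not shown:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;

public class ResetOnReturn {
    // Hypothetical configuration defaults (assumption: stand-ins for the
    // default values supplied in the Agroal configuration).
    static final boolean DEFAULT_AUTO_COMMIT = true;
    static final int DEFAULT_ISOLATION = Connection.TRANSACTION_READ_COMMITTED;

    // Invoked when a connection returns to the pool, so the next borrower
    // sees the configured defaults rather than whatever the previous user
    // left behind.
    static void resetOnReturn(Connection c) throws SQLException {
        if (c.getAutoCommit() != DEFAULT_AUTO_COMMIT) {
            c.setAutoCommit(DEFAULT_AUTO_COMMIT);
        }
        if (c.getTransactionIsolation() != DEFAULT_ISOLATION) {
            c.setTransactionIsolation(DEFAULT_ISOLATION);
        }
    }

    public static void main(String[] args) throws SQLException {
        // Stub Connection via dynamic proxy, tracking only the two attributes.
        final boolean[] auto = { false };  // previous user disabled auto-commit
        final int[] iso = { Connection.TRANSACTION_SERIALIZABLE };
        InvocationHandler h = (proxy, method, a) -> {
            switch (method.getName()) {
                case "getAutoCommit": return auto[0];
                case "setAutoCommit": auto[0] = (Boolean) a[0]; return null;
                case "getTransactionIsolation": return iso[0];
                case "setTransactionIsolation": iso[0] = (Integer) a[0]; return null;
                default: return null;
            }
        };
        Connection c = (Connection) Proxy.newProxyInstance(
            Connection.class.getClassLoader(), new Class<?>[] { Connection.class }, h);

        resetOnReturn(c);
        System.out.println("autoCommit=" + auto[0] + " isolation=" + iso[0]);
    }
}
```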