[JBoss JIRA] (WFLY-6686) JGroups ForkChannel can hold references to Hibernate SessionFactoryImpl which cause memory leak
by Paul Ferraro (JIRA)
[ https://issues.jboss.org/browse/WFLY-6686?page=com.atlassian.jira.plugin.... ]
Paul Ferraro updated WFLY-6686:
-------------------------------
Fix Version/s: 10.1.0.Final
> JGroups ForkChannel can hold references to Hibernate SessionFactoryImpl which cause memory leak
> -----------------------------------------------------------------------------------------------
>
> Key: WFLY-6686
> URL: https://issues.jboss.org/browse/WFLY-6686
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Environment: Using WF 10 build #2291 (Jun 6, 2016 3:40:44 PM)
> https://ci.jboss.org/hudson/job/WildFly-latest-master/2291/changes
> Reporter: Mathieu Lachance
> Assignee: Paul Ferraro
> Fix For: 10.1.0.Final
>
>
> We are using Hibernate second level cache through JPA configured as such:
> {code}
> <properties>
> <property name="hibernate.cache.use_query_cache" value="true"/>
> <property name="hibernate.cache.use_second_level_cache" value="true"/>
> <property name="hibernate.cache.region.factory_class" value="org.jboss.as.jpa.hibernate5.infinispan.SharedInfinispanRegionFactory"/>
> </properties>
> {code}
> After heap dump inspection, it seems that the JGroups ForkChannel identified by "hibernate" can hold a listener that holds the SessionFactoryImpl, which in turn holds the whole application classloader.
> When undeploying the application, this can lead to a classloader leak.
> I took a heap dump of this scenario and analysed it with Eclipse Memory Analyzer (MAT); here is the result:
> {code}
> Class Name | Ref. Objects | Shallow Heap | Ref. Shallow Heap | Retained Heap
> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> channel org.jgroups.JChannel @ 0xc0eac6a0 | 1 | 112 | 136 | 448
> '- channel_listeners java.util.concurrent.CopyOnWriteArraySet @ 0xc0ecd260 | 1 | 16 | 136 | 112
> '- al java.util.concurrent.CopyOnWriteArrayList @ 0xc0ecd270 | 1 | 24 | 136 | 96
> '- array java.lang.Object[2] @ 0xc0ecd2b8 | 1 | 24 | 136 | 24
> '- [0] org.jgroups.fork.ForkChannel @ 0xc103d250 | 1 | 120 | 136 | 1 328
> '- channel_listeners java.util.concurrent.CopyOnWriteArraySet @ 0xc103d350 | 1 | 16 | 136 | 968
> '- al java.util.concurrent.CopyOnWriteArrayList @ 0xc103d360 | 1 | 24 | 136 | 952
> '- array java.lang.Object[1] @ 0xc103d3a8 | 1 | 24 | 136 | 880
> '- [0] org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher @ 0xc103d1f0| 1 | 96 | 136 | 856
> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> {code}
> To remove that leak, I crudely patched the JGroups ForkChannel close() method as follows:
> {code}
> /** Closes the fork-channel, essentially setting its state to CLOSED. Note that - contrary to a regular channel -
>  * a closed fork-channel can be connected again: this means re-attaching the fork-channel to the main-channel */
> @Override
> public void close() {
>     ((ForkProtocolStack) prot_stack).remove(fork_channel_id);
>     if (state == State.CLOSED)
>         return;
>     disconnect(); // leave group if connected
>     prot_stack.destroy();
>     state = State.CLOSED;
>     notifyChannelClosed(this);
>     this.clearChannelListeners(); // <-- this is the line I added
> }
> {code}
> With that change in place, the memory leak is gone. I doubt this is an acceptable fix, but it does confirm my theory.
> I doubt that JGroups itself is really the culprit; I suspect the component managing JGroups is.
> Since I'm not an expert in this area, I've opened the issue against the WildFly project. Feel free to move it to the proper project.
> If you need any other information, let me know.
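The retention chain in the MAT dump above can be modeled with a short, self-contained sketch. The `Channel` class and listener objects here are hypothetical stand-ins for JChannel, ForkChannel, and the Infinispan dispatcher; this is not JGroups code, just a model of the reference structure:

```java
import java.util.concurrent.CopyOnWriteArraySet;

// Hypothetical model of the chain from the MAT dump:
// JChannel -> channel_listeners -> ForkChannel -> channel_listeners -> dispatcher (-> SessionFactoryImpl)
public class ListenerLeakModel {
    static class Channel {
        final CopyOnWriteArraySet<Object> listeners = new CopyOnWriteArraySet<>();
        void addListener(Object l) { listeners.add(l); }
        // Mirrors the patched close(): without clearing, the listener set
        // (and everything reachable from it) survives the close.
        void close(boolean clearListeners) {
            if (clearListeners) listeners.clear();
        }
    }

    public static boolean leaks(boolean clearOnClose) {
        Channel main = new Channel();
        Channel fork = new Channel();
        main.addListener(fork);           // fork channel registers on the main channel
        Object dispatcher = new Object(); // stands in for CommandAwareRpcDispatcher
        fork.addListener(dispatcher);     // dispatcher (holding the SessionFactory) on the fork
        fork.close(clearOnClose);
        // Still reachable from the long-lived main channel? Then the
        // SessionFactory (and the app classloader) would be retained.
        return !fork.listeners.isEmpty() && main.listeners.contains(fork);
    }

    public static void main(String[] args) {
        System.out.println("without clear: leaks=" + leaks(false));
        System.out.println("with clear:    leaks=" + leaks(true));
    }
}
```

This only shows why clearing the fork channel's listener set on close breaks the chain; the real fix also needs the main channel to drop its reference to the fork channel.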
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
9 years, 11 months
[JBoss JIRA] (WFLY-6685) Cache container enable statistics can lead to classloader leak
by Paul Ferraro (JIRA)
[ https://issues.jboss.org/browse/WFLY-6685?page=com.atlassian.jira.plugin.... ]
Paul Ferraro updated WFLY-6685:
-------------------------------
Fix Version/s: 10.1.0.Final
> Cache container enable statistics can lead to classloader leak
> --------------------------------------------------------------
>
> Key: WFLY-6685
> URL: https://issues.jboss.org/browse/WFLY-6685
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 10.0.0.Final
> Reporter: Mathieu Lachance
> Assignee: Paul Ferraro
> Fix For: 10.1.0.Final
>
>
> In standalone.xml, if we enable statistics such as:
> {code}
> <cache-container name="web" default-cache="repl" module="org.wildfly.clustering.web.infinispan" statistics-enabled="true">
> <transport lock-timeout="60000"/>
> <replicated-cache name="repl" statistics-enabled="true" mode="ASYNC">
> <locking isolation="READ_COMMITTED"/>
> <transaction locking="OPTIMISTIC" mode="BATCH"/>
> <state-transfer chunk-size="512" timeout="240000"/>
> </replicated-cache>
> </cache-container>
> {code}
> The following MBean:
> {code}
> jboss.infinispan:cluster=web,type=channel
> {code}
> eventually gets registered when deploying any application.
> When undeploying that application, the MBean is not unregistered and can thus leak pretty much anything.
> Also, if you deploy the very same application again right away, another MBean will get registered:
> {code}
> jboss.infinispan2:cluster=web,type=channel
> {code}
> Note: cache enable-statistics=true doesn't seem to leak anything.
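The lifecycle the subsystem needs can be sketched with the JDK's own JMX API. The ObjectName mirrors the leaked one above, but the Dummy MBean classes are placeholders, not Infinispan's actual classes:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Hypothetical sketch: an MBean registered on deploy must be unregistered
// on undeploy, otherwise the MBeanServer keeps it (and its classloader) alive.
public class ChannelMBeanLifecycle {
    public interface DummyMBean { int getCount(); }
    public static class Dummy implements DummyMBean {
        public int getCount() { return 0; }
    }

    // Returns true if the MBean is still registered after "undeploy".
    public static boolean deployAndUndeploy(boolean unregisterOnUndeploy) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("jboss.infinispan:cluster=web,type=channel");
        server.registerMBean(new Dummy(), name);           // happens on deploy
        if (unregisterOnUndeploy) {
            server.unregisterMBean(name);                  // must happen on undeploy
        }
        boolean stillRegistered = server.isRegistered(name);
        if (stillRegistered) server.unregisterMBean(name); // cleanup for this demo only
        return stillRegistered;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("leaked without unregister: " + deployAndUndeploy(false));
    }
}
```

The `jboss.infinispan2` name on redeploy is consistent with this: the stale registration forces the second deployment to pick a fresh domain.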
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
9 years, 11 months
[JBoss JIRA] (WFLY-6682) Upgrade Hibernate to 5.2.0
by Scott Marlow (JIRA)
[ https://issues.jboss.org/browse/WFLY-6682?page=com.atlassian.jira.plugin.... ]
Scott Marlow commented on WFLY-6682:
------------------------------------
Hi Frank, I'm not really sure when Hibernate 5.2.0 comes into WildFly. Some WildFly changes will be needed (the hibernate-entitymanager jar was merged into hibernate-core). Is there a Hibernate 5.2.0 change that you depend on being in WildFly?
[~gbadner] fyi ^
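For downstream users, the Maven side of that merge looks roughly like this (a hedged sketch; the artifact coordinates are the standard Hibernate ones, and the corresponding WildFly module changes are not covered here):

```xml
<!-- Before 5.2, two artifacts were needed: -->
<dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-core</artifactId>
</dependency>
<dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-entitymanager</artifactId>
</dependency>

<!-- From 5.2.0.Final, the entity-manager code lives in hibernate-core: -->
<dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-core</artifactId>
  <version>5.2.0.Final</version>
</dependency>
```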
> Upgrade Hibernate to 5.2.0
> --------------------------
>
> Key: WFLY-6682
> URL: https://issues.jboss.org/browse/WFLY-6682
> Project: WildFly
> Issue Type: Component Upgrade
> Components: JPA / Hibernate
> Affects Versions: 10.0.0.Final
> Reporter: Frank Langelage
> Assignee: Scott Marlow
> Priority: Critical
> Fix For: 10.1.0.Final
>
>
> Upgrade Hibernate to latest version 5.2.0.Final.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
9 years, 11 months
[JBoss JIRA] (WFLY-6688) NPE in Http2PriorityTree$Http2PriorityNode.addDependent
by Tomaz Cerar (JIRA)
[ https://issues.jboss.org/browse/WFLY-6688?page=com.atlassian.jira.plugin.... ]
Tomaz Cerar commented on WFLY-6688:
-----------------------------------
Could you manually upgrade undertow to 1.3.22.Final to see if it still happens?
> NPE in Http2PriorityTree$Http2PriorityNode.addDependent
> -------------------------------------------------------
>
> Key: WFLY-6688
> URL: https://issues.jboss.org/browse/WFLY-6688
> Project: WildFly
> Issue Type: Bug
> Components: Web (Undertow)
> Affects Versions: 10.0.0.Final
> Reporter: Juergen Zimmermann
> Assignee: Stuart Douglas
>
> I'm using Windows 8.1 with JDK 8u92, WildFly 10.0.0.Final (Undertow 1.3.15) and alpn-boot 8.1.8.v20160420. When accessing any JSF-based web page (via Chrome) I get the following stacktrace. The REST clients are not using HTTP/2 and work fine.
> {{ERROR [org.xnio.listener] XNIO001007: A channel event listener threw an exception: java.lang.NullPointerException
> at io.undertow.protocols.http2.Http2PriorityTree$Http2PriorityNode.addDependent(Http2PriorityTree.java:248)
> at io.undertow.protocols.http2.Http2PriorityTree$Http2PriorityNode.exclusive(Http2PriorityTree.java:258)
> at io.undertow.protocols.http2.Http2PriorityTree.registerStream(Http2PriorityTree.java:65)
> at io.undertow.protocols.http2.Http2Channel.createChannel(Http2Channel.java:310)
> at io.undertow.protocols.http2.Http2Channel.createChannel(Http2Channel.java:60)
> at io.undertow.server.protocol.framed.AbstractFramedChannel.receive(AbstractFramedChannel.java:433)
> at io.undertow.server.protocol.http2.Http2ReceiveListener.handleEvent(Http2ReceiveListener.java:103)
> at io.undertow.server.protocol.http2.Http2ReceiveListener.handleEvent(Http2ReceiveListener.java:56)
> at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
> at io.undertow.server.protocol.framed.AbstractFramedChannel$FrameReadListener.handleEvent(AbstractFramedChannel.java:872)
> at io.undertow.server.protocol.framed.AbstractFramedChannel$FrameReadListener.handleEvent(AbstractFramedChannel.java:853)
> at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
> at org.xnio.conduits.ReadReadyHandler$ChannelListenerHandler.readReady(ReadReadyHandler.java:66)
> at io.undertow.protocols.ssl.SslConduit$SslReadReadyHandler.readReady(SslConduit.java:1059)
> at io.undertow.protocols.ssl.SslConduit$1.run(SslConduit.java:229)
> at org.xnio.nio.WorkerThread.safeRun(WorkerThread.java:580)
> at org.xnio.nio.WorkerThread.run(WorkerThread.java:464)}}
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
9 years, 11 months
[JBoss JIRA] (WFLY-6688) NPE in Http2PriorityTree$Http2PriorityNode.addDependent
by Juergen Zimmermann (JIRA)
Juergen Zimmermann created WFLY-6688:
----------------------------------------
Summary: NPE in Http2PriorityTree$Http2PriorityNode.addDependent
Key: WFLY-6688
URL: https://issues.jboss.org/browse/WFLY-6688
Project: WildFly
Issue Type: Bug
Components: Web (Undertow)
Affects Versions: 10.0.0.Final
Reporter: Juergen Zimmermann
Assignee: Stuart Douglas
I'm using Windows 8.1 with JDK 8u92, WildFly 10.0.0.Final (Undertow 1.3.15) and alpn-boot 8.1.8.v20160420. When accessing any JSF-based web page (via Chrome) I get the following stacktrace. The REST clients are not using HTTP/2 and work fine.
{{ERROR [org.xnio.listener] XNIO001007: A channel event listener threw an exception: java.lang.NullPointerException
at io.undertow.protocols.http2.Http2PriorityTree$Http2PriorityNode.addDependent(Http2PriorityTree.java:248)
at io.undertow.protocols.http2.Http2PriorityTree$Http2PriorityNode.exclusive(Http2PriorityTree.java:258)
at io.undertow.protocols.http2.Http2PriorityTree.registerStream(Http2PriorityTree.java:65)
at io.undertow.protocols.http2.Http2Channel.createChannel(Http2Channel.java:310)
at io.undertow.protocols.http2.Http2Channel.createChannel(Http2Channel.java:60)
at io.undertow.server.protocol.framed.AbstractFramedChannel.receive(AbstractFramedChannel.java:433)
at io.undertow.server.protocol.http2.Http2ReceiveListener.handleEvent(Http2ReceiveListener.java:103)
at io.undertow.server.protocol.http2.Http2ReceiveListener.handleEvent(Http2ReceiveListener.java:56)
at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
at io.undertow.server.protocol.framed.AbstractFramedChannel$FrameReadListener.handleEvent(AbstractFramedChannel.java:872)
at io.undertow.server.protocol.framed.AbstractFramedChannel$FrameReadListener.handleEvent(AbstractFramedChannel.java:853)
at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
at org.xnio.conduits.ReadReadyHandler$ChannelListenerHandler.readReady(ReadReadyHandler.java:66)
at io.undertow.protocols.ssl.SslConduit$SslReadReadyHandler.readReady(SslConduit.java:1059)
at io.undertow.protocols.ssl.SslConduit$1.run(SslConduit.java:229)
at org.xnio.nio.WorkerThread.safeRun(WorkerThread.java:580)
at org.xnio.nio.WorkerThread.run(WorkerThread.java:464)}}
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
9 years, 11 months
[JBoss JIRA] (WFLY-6686) JGroups ForkChannel can hold references to Hibernate SessionFactoryImpl which cause memory leak
by Paul Ferraro (JIRA)
[ https://issues.jboss.org/browse/WFLY-6686?page=com.atlassian.jira.plugin.... ]
Paul Ferraro commented on WFLY-6686:
------------------------------------
We can workaround this in WildFly while we wait for a proper upstream fix.
> JGroups ForkChannel can hold references to Hibernate SessionFactoryImpl which cause memory leak
> -----------------------------------------------------------------------------------------------
>
> Key: WFLY-6686
> URL: https://issues.jboss.org/browse/WFLY-6686
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Environment: Using WF 10 build #2291 (Jun 6, 2016 3:40:44 PM)
> https://ci.jboss.org/hudson/job/WildFly-latest-master/2291/changes
> Reporter: Mathieu Lachance
> Assignee: Paul Ferraro
>
> We are using Hibernate second level cache through JPA configured as such:
> {code}
> <properties>
> <property name="hibernate.cache.use_query_cache" value="true"/>
> <property name="hibernate.cache.use_second_level_cache" value="true"/>
> <property name="hibernate.cache.region.factory_class" value="org.jboss.as.jpa.hibernate5.infinispan.SharedInfinispanRegionFactory"/>
> </properties>
> {code}
> After heap dump inspection, it seems that the JGroups ForkChannel identified by "hibernate" can hold a listener that holds the SessionFactoryImpl, which in turn holds the whole application classloader.
> When undeploying the application, this can lead to a classloader leak.
> I took a heap dump of this scenario and analysed it with Eclipse Memory Analyzer (MAT); here is the result:
> {code}
> Class Name | Ref. Objects | Shallow Heap | Ref. Shallow Heap | Retained Heap
> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> channel org.jgroups.JChannel @ 0xc0eac6a0 | 1 | 112 | 136 | 448
> '- channel_listeners java.util.concurrent.CopyOnWriteArraySet @ 0xc0ecd260 | 1 | 16 | 136 | 112
> '- al java.util.concurrent.CopyOnWriteArrayList @ 0xc0ecd270 | 1 | 24 | 136 | 96
> '- array java.lang.Object[2] @ 0xc0ecd2b8 | 1 | 24 | 136 | 24
> '- [0] org.jgroups.fork.ForkChannel @ 0xc103d250 | 1 | 120 | 136 | 1 328
> '- channel_listeners java.util.concurrent.CopyOnWriteArraySet @ 0xc103d350 | 1 | 16 | 136 | 968
> '- al java.util.concurrent.CopyOnWriteArrayList @ 0xc103d360 | 1 | 24 | 136 | 952
> '- array java.lang.Object[1] @ 0xc103d3a8 | 1 | 24 | 136 | 880
> '- [0] org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher @ 0xc103d1f0| 1 | 96 | 136 | 856
> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> {code}
> To remove that leak, I crudely patched the JGroups ForkChannel close() method as follows:
> {code}
> /** Closes the fork-channel, essentially setting its state to CLOSED. Note that - contrary to a regular channel -
>  * a closed fork-channel can be connected again: this means re-attaching the fork-channel to the main-channel */
> @Override
> public void close() {
>     ((ForkProtocolStack) prot_stack).remove(fork_channel_id);
>     if (state == State.CLOSED)
>         return;
>     disconnect(); // leave group if connected
>     prot_stack.destroy();
>     state = State.CLOSED;
>     notifyChannelClosed(this);
>     this.clearChannelListeners(); // <-- this is the line I added
> }
> {code}
> With that change in place, the memory leak is gone. I doubt this is an acceptable fix, but it does confirm my theory.
> I doubt that JGroups itself is really the culprit; I suspect the component managing JGroups is.
> Since I'm not an expert in this area, I've opened the issue against the WildFly project. Feel free to move it to the proper project.
> If you need any other information, let me know.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
9 years, 11 months
[JBoss JIRA] (WFLY-6599) JDBC_PING can't use a JNDI database connection because it is closed on shutdown
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/WFLY-6599?page=com.atlassian.jira.plugin.... ]
Radoslav Husar updated WFLY-6599:
---------------------------------
Issue Type: Bug (was: Feature Request)
> JDBC_PING can't use a JNDI database connection because it is closed on shutdown
> -------------------------------------------------------------------------------
>
> Key: WFLY-6599
> URL: https://issues.jboss.org/browse/WFLY-6599
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 10.0.0.Final
> Reporter: Matthew Casperson
> Assignee: Radoslav Husar
>
> If you configure the JDBC_PING protocol in JGroups to use a datasource provided by WildFly via the *datasource_jndi_name* setting, you'll get the following exception when WildFly is shut down:
> {code}
> [Server:main-server] 2016-05-10 11:05:45+1000 ERROR [[org.jgroups.protocols.JDBC_PING]] [[MSC service thread 1-4]] Could not open connection to database: java.sql.SQLException: javax.resource.ResourceException: IJ000470: You are trying to use a connection factory that has been shut down: java:/comp/env/jdbc/jgroups
> [Server:main-server] at org.jboss.jca.adapters.jdbc.WrapperDataSource.getConnection(WrapperDataSource.java:146)
> [Server:main-server] at org.jboss.as.connector.subsystems.datasources.WildFlyDataSource.getConnection(WildFlyDataSource.java:66)
> [Server:main-server] at org.jgroups.protocols.JDBC_PING.getConnection(JDBC_PING.java:348)
> [Server:main-server] at org.jgroups.protocols.JDBC_PING.delete(JDBC_PING.java:379)
> [Server:main-server] at org.jgroups.protocols.JDBC_PING.deleteSelf(JDBC_PING.java:395)
> [Server:main-server] at org.jgroups.protocols.JDBC_PING.stop(JDBC_PING.java:144)
> [Server:main-server] at org.jgroups.stack.ProtocolStack.stopStack(ProtocolStack.java:1015)
> [Server:main-server] at org.jgroups.JChannel.stopStack(JChannel.java:1002)
> [Server:main-server] at org.jgroups.JChannel.disconnect(JChannel.java:373)
> [Server:main-server] at org.wildfly.clustering.jgroups.spi.service.ChannelConnectorBuilder.stop(ChannelConnectorBuilder.java:103)
> [Server:main-server] at org.jboss.msc.service.ServiceControllerImpl$StopTask.stopService(ServiceControllerImpl.java:2056)
> [Server:main-server] at org.jboss.msc.service.ServiceControllerImpl$StopTask.run(ServiceControllerImpl.java:2017)
> [Server:main-server] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [Server:main-server] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [Server:main-server] at java.lang.Thread.run(Thread.java:745)
> [Server:main-server] Caused by: javax.resource.ResourceException: IJ000470: You are trying to use a connection factory that has been shut down: java:/comp/env/jdbc/jgroups
> [Server:main-server] at org.jboss.jca.core.connectionmanager.AbstractConnectionManager.allocateConnection(AbstractConnectionManager.java:735)
> [Server:main-server] at org.jboss.jca.adapters.jdbc.WrapperDataSource.getConnection(WrapperDataSource.java:138)
> [Server:main-server] ... 14 more
> {code}
> The solution is to configure the database connection directly, but then you seem to lose the ability to use features like database connection validation, reconnection, and the other settings provided by a WildFly datasource that improve the reliability of a database connection.
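For reference, the two configuration styles being compared look roughly like this (a hedged sketch of a JGroups protocol entry in the WildFly subsystem; the JNDI name and JDBC values are placeholders, not taken from the report):

```xml
<!-- Option A: container-managed datasource. Convenient (pooling, validation,
     reconnection), but the pool is shut down before JGroups disconnects,
     which produces the IJ000470 error above. -->
<protocol type="JDBC_PING">
    <property name="datasource_jndi_name">java:jboss/datasources/JGroupsDS</property>
</protocol>

<!-- Option B: direct JDBC connection. Unaffected by the shutdown ordering,
     but loses the datasource's reliability features. -->
<protocol type="JDBC_PING">
    <property name="connection_driver">org.postgresql.Driver</property>
    <property name="connection_url">jdbc:postgresql://dbhost:5432/jgroups</property>
    <property name="connection_username">jgroups</property>
    <property name="connection_password">secret</property>
</protocol>
```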
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
9 years, 11 months
[JBoss JIRA] (WFLY-6687) JDBC_PING can't use a JNDI database connection because it is closed on shutdown
by Radoslav Husar (JIRA)
Radoslav Husar created WFLY-6687:
------------------------------------
Summary: JDBC_PING can't use a JNDI database connection because it is closed on shutdown
Key: WFLY-6687
URL: https://issues.jboss.org/browse/WFLY-6687
Project: WildFly
Issue Type: Feature Request
Components: Clustering
Affects Versions: 10.0.0.Final
Reporter: Radoslav Husar
Assignee: Radoslav Husar
If you configure the JDBC_PING protocol in JGroups to use a datasource provided by WildFly via the *datasource_jndi_name* setting, you'll get the following exception when WildFly is shut down:
{code}
[Server:main-server] 2016-05-10 11:05:45+1000 ERROR [[org.jgroups.protocols.JDBC_PING]] [[MSC service thread 1-4]] Could not open connection to database: java.sql.SQLException: javax.resource.ResourceException: IJ000470: You are trying to use a connection factory that has been shut down: java:/comp/env/jdbc/jgroups
[Server:main-server] at org.jboss.jca.adapters.jdbc.WrapperDataSource.getConnection(WrapperDataSource.java:146)
[Server:main-server] at org.jboss.as.connector.subsystems.datasources.WildFlyDataSource.getConnection(WildFlyDataSource.java:66)
[Server:main-server] at org.jgroups.protocols.JDBC_PING.getConnection(JDBC_PING.java:348)
[Server:main-server] at org.jgroups.protocols.JDBC_PING.delete(JDBC_PING.java:379)
[Server:main-server] at org.jgroups.protocols.JDBC_PING.deleteSelf(JDBC_PING.java:395)
[Server:main-server] at org.jgroups.protocols.JDBC_PING.stop(JDBC_PING.java:144)
[Server:main-server] at org.jgroups.stack.ProtocolStack.stopStack(ProtocolStack.java:1015)
[Server:main-server] at org.jgroups.JChannel.stopStack(JChannel.java:1002)
[Server:main-server] at org.jgroups.JChannel.disconnect(JChannel.java:373)
[Server:main-server] at org.wildfly.clustering.jgroups.spi.service.ChannelConnectorBuilder.stop(ChannelConnectorBuilder.java:103)
[Server:main-server] at org.jboss.msc.service.ServiceControllerImpl$StopTask.stopService(ServiceControllerImpl.java:2056)
[Server:main-server] at org.jboss.msc.service.ServiceControllerImpl$StopTask.run(ServiceControllerImpl.java:2017)
[Server:main-server] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[Server:main-server] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[Server:main-server] at java.lang.Thread.run(Thread.java:745)
[Server:main-server] Caused by: javax.resource.ResourceException: IJ000470: You are trying to use a connection factory that has been shut down: java:/comp/env/jdbc/jgroups
[Server:main-server] at org.jboss.jca.core.connectionmanager.AbstractConnectionManager.allocateConnection(AbstractConnectionManager.java:735)
[Server:main-server] at org.jboss.jca.adapters.jdbc.WrapperDataSource.getConnection(WrapperDataSource.java:138)
[Server:main-server] ... 14 more
{code}
The solution is to configure the database connection directly, but then you seem to lose the ability to use features like database connection validation, reconnection, and the other settings provided by a WildFly datasource that improve the reliability of a database connection.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
9 years, 11 months
[JBoss JIRA] (WFLY-6685) Cache container enable statistics can lead to classloader leak
by Paul Ferraro (JIRA)
[ https://issues.jboss.org/browse/WFLY-6685?page=com.atlassian.jira.plugin.... ]
Paul Ferraro commented on WFLY-6685:
------------------------------------
We can work around this issue by preventing Infinispan from attempting to register mbeans for its channel.
MBeans for the channel are already registered by the JGroups subsystem.
> Cache container enable statistics can lead to classloader leak
> --------------------------------------------------------------
>
> Key: WFLY-6685
> URL: https://issues.jboss.org/browse/WFLY-6685
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 10.0.0.Final
> Reporter: Mathieu Lachance
> Assignee: Paul Ferraro
>
> In standalone.xml, if we enable statistics such as:
> {code}
> <cache-container name="web" default-cache="repl" module="org.wildfly.clustering.web.infinispan" statistics-enabled="true">
> <transport lock-timeout="60000"/>
> <replicated-cache name="repl" statistics-enabled="true" mode="ASYNC">
> <locking isolation="READ_COMMITTED"/>
> <transaction locking="OPTIMISTIC" mode="BATCH"/>
> <state-transfer chunk-size="512" timeout="240000"/>
> </replicated-cache>
> </cache-container>
> {code}
> The following MBean:
> {code}
> jboss.infinispan:cluster=web,type=channel
> {code}
> eventually gets registered when deploying any application.
> When undeploying that application, the MBean is not unregistered and can thus leak pretty much anything.
> Also, if you deploy the very same application again right away, another MBean will get registered:
> {code}
> jboss.infinispan2:cluster=web,type=channel
> {code}
> Note: cache enable-statistics=true doesn't seem to leak anything.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
9 years, 11 months
[JBoss JIRA] (WFLY-6402) EJBs accessible too early (spec violation)
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/WFLY-6402?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on WFLY-6402:
-----------------------------------------------
Brad Maxwell <bmaxwell(a)redhat.com> changed the Status of [bug 1310908|https://bugzilla.redhat.com/show_bug.cgi?id=1310908] from ASSIGNED to POST
> EJBs accessible too early (spec violation)
> ------------------------------------------
>
> Key: WFLY-6402
> URL: https://issues.jboss.org/browse/WFLY-6402
> Project: WildFly
> Issue Type: Bug
> Components: EJB
> Affects Versions: 10.0.0.Final
> Reporter: Brad Maxwell
> Assignee: Fedor Gavrilov
> Labels: downstream_dependency
> Attachments: auto-test-reproducer.zip
>
>
> {code}
> EJB 3.1 spec, section 4.8.1:
> "If the Startup annotation appears on the Singleton bean class or if the Singleton has been designated via the deployment descriptor as requiring eager initialization, the container must initialize the Singleton bean instance during the application startup sequence. The container must initialize all such startup-time Singletons before any external client requests (that is, client requests originating outside of the application) are delivered to any enterprise bean components in the application."
> {code}
> WildFly does not implement this correctly, and allows calls to other EJBs before a @Startup @Singleton finishes its @PostConstruct method.
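The ordering the spec requires can be illustrated with a small, self-contained model using plain java.util.concurrent. This is not EJB container code; the class and method names are hypothetical:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: external requests must block until every
// startup-time singleton has finished its @PostConstruct.
public class StartupGate {
    private final CountDownLatch started;
    private final AtomicBoolean initDone = new AtomicBoolean(false);

    public StartupGate(int startupSingletons) {
        this.started = new CountDownLatch(startupSingletons);
    }

    // Called by the "container" when one @Startup singleton's @PostConstruct returns.
    public void singletonInitialized() {
        initDone.set(true);
        started.countDown();
    }

    // Every external client request passes through this gate first.
    public boolean handleExternalRequest() throws InterruptedException {
        started.await();       // block until all startup singletons are up
        return initDone.get(); // true: the request only ran after init completed
    }

    public static void main(String[] args) throws Exception {
        StartupGate gate = new StartupGate(1);
        Thread client = new Thread(() -> {
            try {
                gate.handleExternalRequest();
                System.out.println("request served after startup completed");
            } catch (InterruptedException ignored) { }
        });
        client.start();
        gate.singletonInitialized(); // the @Startup singleton finishes @PostConstruct
        client.join();
    }
}
```

The reported bug corresponds to skipping the gate: requests are dispatched to beans while the latch is still counting down.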
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
9 years, 11 months