[JBoss JIRA] (WFCORE-1588) Register runtime attributes depending on the actual type of process/running mode.
by Jeff Mesnil (JIRA)
[ https://issues.jboss.org/browse/WFCORE-1588?page=com.atlassian.jira.plugi... ]
Jeff Mesnil updated WFCORE-1588:
--------------------------------
Description:
Following work done on WFCORE-1513, runtime attributes should also be actually registered if it makes sense (as determined by the ProcessType/RunningMode).
was:
Following work done on WFCORE-1513, runtime operations should also be actually registered if it makes sense (as determined by the ProcessType/RunningMode).
> Register runtime attributes depending on the actual type of process/running mode.
> ---------------------------------------------------------------------------------
>
> Key: WFCORE-1588
> URL: https://issues.jboss.org/browse/WFCORE-1588
> Project: WildFly Core
> Issue Type: Enhancement
> Components: Domain Management
> Affects Versions: 2.1.0.Final
> Reporter: Jeff Mesnil
> Assignee: Jeff Mesnil
>
> Following work done on WFCORE-1513, runtime attributes should also be actually registered if it makes sense (as determined by the ProcessType/RunningMode).
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (JBJCA-1321) Statement.cancel() is not invoked until the statement is completed
by Jesper Pedersen (JIRA)
[ https://issues.jboss.org/browse/JBJCA-1321?page=com.atlassian.jira.plugin... ]
Jesper Pedersen moved WFLY-6662 to JBJCA-1321:
----------------------------------------------
Project: IronJacamar (was: WildFly)
Key: JBJCA-1321 (was: WFLY-6662)
Workflow: classic default workflow (was: GIT Pull Request workflow)
Component/s: JDBC
(was: JCA)
Affects Version/s: 1.2.7.Final
WildFly/IronJacamar 1.3.4.Final
(was: 9.0.2.Final)
> Statement.cancel() is not invoked until the statement is completed
> ------------------------------------------------------------------
>
> Key: JBJCA-1321
> URL: https://issues.jboss.org/browse/JBJCA-1321
> Project: IronJacamar
> Issue Type: Bug
> Components: JDBC
> Affects Versions: 1.2.7.Final, WildFly/IronJacamar 1.3.4.Final
> Reporter: lorenzo benvenuti
> Assignee: Jesper Pedersen
>
> Hi,
> in our application we are using the {{Statement.cancel()}} method to stop long-running queries; in WildFly 9.0.2 this does not work, because the {{cancel()}} method is synchronized using a lock that is not released until the query finishes executing. In {{WrappedStatement}}:
> {code:java}
> public void cancel() throws SQLException
> {
>    if (doLocking)
>       lock();
>    try
>    {
>       /* ... */
> {code}
> It seems this behaviour changed in version 1.2.5.Final of ironjacamar-jdbc; in version 1.2.4.Final, {{WrappedStatement.cancel}} does not try to obtain the lock.
> Perhaps I'm missing something, but it seems strange that in order to cancel a statement you have to wait for its completion.
> Thank you,
> lorenzo
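The blocking behaviour described in the report can be modelled without a database. The following is a hypothetical sketch (names and structure are mine, not IronJacamar source): a worker thread holds the statement's lock for the duration of the "query", so a cancel() that first takes the same lock cannot do anything until the query has already completed.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical model of why taking the statement's execution lock inside
// cancel() defeats its purpose: the lock is held for the whole query, so
// cancel() cannot proceed until the query is already done.
public class CancelLockSketch {

    // Returns true only if "cancel" managed to take the lock mid-query.
    static boolean demo() throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();   // stands in for the statement's lock
        CountDownLatch running = new CountDownLatch(1);
        CountDownLatch finish = new CountDownLatch(1);

        Thread query = new Thread(() -> {
            lock.lock();                            // "execute()" holds the lock...
            try {
                running.countDown();
                finish.await();                     // ...until the query completes
            } catch (InterruptedException ignored) {
            } finally {
                lock.unlock();
            }
        });
        query.start();
        running.await();

        // A cancel() that locks first simply blocks here until the query ends.
        boolean acquired = lock.tryLock(200, TimeUnit.MILLISECONDS);
        if (acquired) {
            lock.unlock();
        }
        finish.countDown();                         // let the "query" finish
        query.join();
        return acquired;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("cancel got the lock mid-query: " + demo()); // prints false
    }
}
```

This is why a lock-free (or separately synchronized) cancel path, as in 1.2.4.Final, is what the reporter expects.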
[JBoss JIRA] (JBJCA-1321) Statement.cancel() is not invoked until the statement is completed
by Jesper Pedersen (JIRA)
[ https://issues.jboss.org/browse/JBJCA-1321?page=com.atlassian.jira.plugin... ]
Jesper Pedersen reassigned JBJCA-1321:
--------------------------------------
Fix Version/s: WildFly/IronJacamar 1.3.5.Final
1.2.8.Final
Priority: Critical (was: Major)
Assignee: Lin Gao (was: Jesper Pedersen)
> Statement.cancel() is not invoked until the statement is completed
> ------------------------------------------------------------------
>
> Key: JBJCA-1321
> URL: https://issues.jboss.org/browse/JBJCA-1321
> Project: IronJacamar
> Issue Type: Bug
> Components: JDBC
> Affects Versions: WildFly/IronJacamar 1.3.4.Final, 1.2.7.Final
> Reporter: lorenzo benvenuti
> Assignee: Lin Gao
> Priority: Critical
> Fix For: WildFly/IronJacamar 1.3.5.Final, 1.2.8.Final
>
>
> Hi,
> in our application we are using the {{Statement.cancel()}} method to stop long-running queries; in WildFly 9.0.2 this does not work, because the {{cancel()}} method is synchronized using a lock that is not released until the query finishes executing. In {{WrappedStatement}}:
> {code:java}
> public void cancel() throws SQLException
> {
>    if (doLocking)
>       lock();
>    try
>    {
>       /* ... */
> {code}
> It seems this behaviour changed in version 1.2.5.Final of ironjacamar-jdbc; in version 1.2.4.Final, {{WrappedStatement.cancel}} does not try to obtain the lock.
> Perhaps I'm missing something, but it seems strange that in order to cancel a statement you have to wait for its completion.
> Thank you,
> lorenzo
[JBoss JIRA] (WFLY-6680) :read-proxies-configuration and :read-proxies-info fail when there is no httpd
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/WFLY-6680?page=com.atlassian.jira.plugin.... ]
Radoslav Husar reassigned WFLY-6680:
------------------------------------
Assignee: Aaron Ogburn (was: Radoslav Husar)
> :read-proxies-configuration and :read-proxies-info fail when there is no httpd
> ------------------------------------------------------------------------------
>
> Key: WFLY-6680
> URL: https://issues.jboss.org/browse/WFLY-6680
> Project: WildFly
> Issue Type: Bug
> Affects Versions: 10.0.0.Final
> Environment: RHEL 6, EAP 6.1.0, mod_cluster-1.2.4-1.Final_redhat_1.ep6.el6.noarch
> Reporter: Kristina Clair
> Assignee: Aaron Ogburn
>
> When the modcluster subsystem is unable to connect to a proxy, the jboss-cli commands :read-proxies-configuration and :read-proxies-info fail with an unhelpful error.
> On both the domain controller and application host, :read-proxies-info and :read-proxies-configuration fail with the same error. This is the output from the application host:
> {noformat}
> [domain@localhost:9999 subsystem=modcluster] pwd
> /host=localhost/server=cluster2/subsystem=modcluster
> [domain@localhost:9999 subsystem=modcluster] :list-proxies
> {
>     "outcome" => "success",
>     "result" => [
>         "web02:8009",
>         "web01:8009"
>     ]
> }
> [domain@localhost:9999 subsystem=modcluster] :read-proxies-configuration
> {
>     "outcome" => "failed",
>     "result" => undefined,
>     "failure-description" => "JBAS014749: Operation handler failed: newValue is null",
>     "rolled-back" => true
> }
> [domain@localhost:9999 subsystem=modcluster] :read-proxies-info
> {
>     "outcome" => "failed",
>     "result" => undefined,
>     "failure-description" => "JBAS014749: Operation handler failed: newValue is null",
>     "rolled-back" => true
> }
> {noformat}
> In the above example, modcluster was not able to connect to the proxies due to an SSL misconfiguration in the modcluster subsystem in domain.xml.
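The "newValue is null" failure suggests the handler passes an absent proxy response straight into the management model. The actual WildFly handler code is not shown in this report; the following is a purely hypothetical sketch (all names invented) of the kind of defensive check implied: report an unreachable proxy explicitly instead of propagating a null value.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration, not WildFly source: a read handler should turn
// a missing (null) per-proxy response into a helpful failure message rather
// than letting the null surface as "Operation handler failed: newValue is null".
public class ProxyInfoSketch {

    static String readProxyConfiguration(String proxy, Map<String, String> responses) {
        String config = responses.get(proxy);          // null when the proxy never answered
        if (config == null) {
            return "proxy " + proxy + " is not reachable"; // explicit, actionable failure
        }
        return config;
    }

    public static void main(String[] args) {
        Map<String, String> responses = new HashMap<>();
        responses.put("web01:8009", "balancer config for web01");
        System.out.println(readProxyConfiguration("web01:8009", responses));
        System.out.println(readProxyConfiguration("web02:8009", responses)); // unreachable case
    }
}
```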
[JBoss JIRA] (WFLY-6693) JGroups ForkChannel can hold references to Hibernate SessionFactoryImpl which cause memory leak
by Paul Ferraro (JIRA)
Paul Ferraro created WFLY-6693:
----------------------------------
Summary: JGroups ForkChannel can hold references to Hibernate SessionFactoryImpl which cause memory leak
Key: WFLY-6693
URL: https://issues.jboss.org/browse/WFLY-6693
Project: WildFly
Issue Type: Bug
Components: Clustering
Environment: Using WF 10 build #2291 (Jun 6, 2016 3:40:44 PM)
https://ci.jboss.org/hudson/job/WildFly-latest-master/2291/changes
Reporter: Paul Ferraro
Assignee: Paul Ferraro
Priority: Blocker
Fix For: 10.1.0.Final
We are using the Hibernate second-level cache through JPA, configured as follows:
{code}
<properties>
    <property name="hibernate.cache.use_query_cache" value="true"/>
    <property name="hibernate.cache.use_second_level_cache" value="true"/>
    <property name="hibernate.cache.region.factory_class" value="org.jboss.as.jpa.hibernate5.infinispan.SharedInfinispanRegionFactory"/>
</properties>
{code}
After heap dump inspection, it seems that the JGroups ForkChannel identified by "hibernate" can hold a listener that holds the SessionFactoryImpl, which in turn holds the whole application classloader.
When the application is undeployed, this can lead to a classloader leak.
I took a heap dump of such a scenario and analysed it using Eclipse Memory Analyzer (MAT); here is the result:
{code}
Class Name | Ref. Objects | Shallow Heap | Ref. Shallow Heap | Retained Heap
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
channel org.jgroups.JChannel @ 0xc0eac6a0 | 1 | 112 | 136 | 448
'- channel_listeners java.util.concurrent.CopyOnWriteArraySet @ 0xc0ecd260 | 1 | 16 | 136 | 112
'- al java.util.concurrent.CopyOnWriteArrayList @ 0xc0ecd270 | 1 | 24 | 136 | 96
'- array java.lang.Object[2] @ 0xc0ecd2b8 | 1 | 24 | 136 | 24
'- [0] org.jgroups.fork.ForkChannel @ 0xc103d250 | 1 | 120 | 136 | 1 328
'- channel_listeners java.util.concurrent.CopyOnWriteArraySet @ 0xc103d350 | 1 | 16 | 136 | 968
'- al java.util.concurrent.CopyOnWriteArrayList @ 0xc103d360 | 1 | 24 | 136 | 952
'- array java.lang.Object[1] @ 0xc103d3a8 | 1 | 24 | 136 | 880
'- [0] org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher @ 0xc103d1f0| 1 | 96 | 136 | 856
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
{code}
To remove the leak, I crudely patched the JGroups ForkChannel close() method as follows:
{code}
/** Closes the fork-channel, essentially setting its state to CLOSED. Note that - contrary to a regular channel -
 * a closed fork-channel can be connected again: this means re-attaching the fork-channel to the main-channel */
@Override
public void close() {
    ((ForkProtocolStack) prot_stack).remove(fork_channel_id);
    if (state == State.CLOSED)
        return;
    disconnect(); // leave group if connected
    prot_stack.destroy();
    state = State.CLOSED;
    notifyChannelClosed(this);
    this.clearChannelListeners(); // <-- this is the line I added
}
{code}
With that change in place, the memory leak is gone. I highly doubt this is an acceptable fix, but it does confirm my theory.
I doubt that JGroups itself is really the culprit; my suspicion is that whatever manages the JGroups channels is to blame.
Since I'm not an expert in this area, I've opened the issue against the WildFly project. Feel free to move it to the proper project.
If you need any other information, let me know.
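The leak pattern in the report can be reduced to a minimal model. This is a hypothetical sketch (not JGroups or Infinispan code, all names invented): a long-lived channel keeps strong references to listeners registered by a deployment, so unless close() clears them, everything the listener references (here a stand-in for the SessionFactoryImpl and its classloader) stays reachable after undeploy.

```java
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;

// Hypothetical model of the leak: the channel outlives the deployment, and
// its listener set is the strong-reference path that keeps the deployment's
// object graph alive. Clearing listeners on close() (the reporter's patch)
// cuts that path and makes the graph eligible for GC.
class LongLivedChannel {
    private final Set<Object> listeners = new CopyOnWriteArraySet<>();

    void addListener(Object listener) { listeners.add(listener); }

    int listenerCount() { return listeners.size(); }

    void close() {
        listeners.clear(); // analogous to clearChannelListeners() in the patch
    }
}

public class LeakSketch {
    public static void main(String[] args) {
        LongLivedChannel channel = new LongLivedChannel();
        Object deploymentListener = new Object(); // stands in for the dispatcher/SessionFactory chain
        channel.addListener(deploymentListener);
        System.out.println(channel.listenerCount()); // 1: deployment graph still retained
        channel.close();
        System.out.println(channel.listenerCount()); // 0: graph now collectable
    }
}
```

The open design question in the issue is which component owns the channel's lifecycle and should perform this cleanup, which is why the reporter filed it against WildFly rather than JGroups.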
[JBoss JIRA] (WFLY-6692) Provide minimalistic profile for Artemis backup configuration
by Miroslav Novak (JIRA)
[ https://issues.jboss.org/browse/WFLY-6692?page=com.atlassian.jira.plugin.... ]
Miroslav Novak updated WFLY-6692:
---------------------------------
Description:
If Artemis is configured as a dedicated backup, no deployments (MDBs, EJBs, servlets) should be deployed to it. It should serve its sole purpose, which is to wait for the WildFly 10 server with Artemis configured as live to fail.
We should provide a minimalistic configuration for WildFly serving purely as an Artemis backup server.
was:
If Artemis is configured as dedicated backup no deployments like MDB,EJB,Servlets should be deployed to it. It should serve to its purpose which is to wait for Wildfly 10 with Artemis configured as live to fail.
We should provide minimalistic configuration for Wildfly serving purely as Artemis backup server.
> Provide minimalistic profile for Artemis backup configuration
> -------------------------------------------------------------
>
> Key: WFLY-6692
> URL: https://issues.jboss.org/browse/WFLY-6692
> Project: WildFly
> Issue Type: Feature Request
> Components: JMS
> Affects Versions: 10.0.0.Final
> Reporter: Miroslav Novak
> Assignee: Jeff Mesnil
>
> If Artemis is configured as a dedicated backup, no deployments (MDBs, EJBs, servlets) should be deployed to it. It should serve its sole purpose, which is to wait for the WildFly 10 server with Artemis configured as live to fail.
> We should provide a minimalistic configuration for WildFly serving purely as an Artemis backup server.