[JBoss JIRA] (JGRP-2068) Problems with JBOSS cluster
by Fouad Zoghlami (JIRA)
[ https://issues.jboss.org/browse/JGRP-2068?page=com.atlassian.jira.plugin.... ]
Fouad Zoghlami commented on JGRP-2068:
--------------------------------------
Could this be the same problem? The issue seems to lie with JGroups.
From the log files:
...
2015-12-09 16:56:11,132 FATAL CloserThread [org.jgroups.JChannel] local_addr is null; cannot connect
2015-12-09 16:56:11,132 ERROR CloserThread [org.jgroups.JChannel] failure reconnecting to channel, retrying
org.jgroups.ChannelException: local_addr is null
at org.jgroups.JChannel.startStack(JChannel.java:1631)
at org.jgroups.JChannel.connect(JChannel.java:366)
at org.jgroups.JChannel$CloserThread.run(JChannel.java:2046)
...
JGroups is not able to reconnect.
I found the following articles:
https://developer.jboss.org/message/8089
https://issues.jboss.org/browse/JGRP-1006
According to the second article, this was fixed in JGroups 2.6.11.
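The failing CloserThread above is essentially an unbounded reconnect loop ("failure reconnecting to channel, retrying" forever while local_addr stays null). As a rough illustration only, not JGroups code, here is a minimal sketch of the bounded retry-with-backoff pattern; all names (`ReconnectSketch`, `Connector`, `connectWithRetry`) are invented for this example.

```java
// Hypothetical sketch of a bounded reconnect loop. JGroups' actual
// CloserThread (2.6.x) behaves differently; this only illustrates the
// retry-with-limit pattern as an alternative to retrying forever.
public class ReconnectSketch {

    interface Connector {
        boolean tryConnect(); // returns true on success
    }

    // Attempts tryConnect() up to maxAttempts times, sleeping
    // backoffMillis between failures. Returns the attempt number that
    // succeeded, or -1 if all attempts failed or we were interrupted.
    static int connectWithRetry(Connector c, int maxAttempts, long backoffMillis) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (c.tryConnect()) {
                return attempt;
            }
            try {
                Thread.sleep(backoffMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return -1;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // Simulated connector that succeeds on the third attempt.
        int[] calls = {0};
        Connector flaky = () -> ++calls[0] >= 3;
        System.out.println(connectWithRetry(flaky, 5, 1)); // prints 3
    }
}
```

Giving up after a limit and surfacing the failure would at least avoid the endless ERROR loop seen in the log.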
> Problems with JBOSS cluster
> ---------------------------
>
> Key: JGRP-2068
> URL: https://issues.jboss.org/browse/JGRP-2068
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 2.6.10
> Reporter: Fouad Zoghlami
> Assignee: Bela Ban
> Attachments: the master.zip, the slave.zip
>
>
> We are using a JBoss cluster for the Process Server component. One node (the master) is used for Process Workplace, and a second node is used by an external application to start new process instances from scanner input.
> We recently encountered a problem where one of the two nodes started to misbehave. The logs suggest some kind of communication problem between the two nodes in the cluster. The cluster then appears to recover, but from that point on the process server on one of the two nodes starts failing when users try to send on work items. It looks as if the locking state is out of sync. In every case we have been able to fix the issue by restarting JBoss on both nodes and clearing the /tmp, /data and /work folders.
> Please see the attached log files.
> The issue happened around 2015-11-12 10:48:35. It also happened two weeks ago and a couple of months ago.
> Please assist in finding the cause of this behavior and how we can prevent it in the future.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (WFLY-6639) Upgrade IronJacamar from 1.3.3.Final to 1.3.4.Final
by Lin Gao (JIRA)
[ https://issues.jboss.org/browse/WFLY-6639?page=com.atlassian.jira.plugin.... ]
Lin Gao moved JBEAP-4681 to WFLY-6639:
--------------------------------------
Project: WildFly (was: JBoss Enterprise Application Platform)
Key: WFLY-6639 (was: JBEAP-4681)
Workflow: GIT Pull Request workflow (was: CDW with loose statuses v1)
Component/s: JCA
(was: JCA)
Target Release: (was: 7.0.z.GA)
Fix Version/s: (was: 7.0.1.GA)
> Upgrade IronJacamar from 1.3.3.Final to 1.3.4.Final
> ---------------------------------------------------
>
> Key: WFLY-6639
> URL: https://issues.jboss.org/browse/WFLY-6639
> Project: WildFly
> Issue Type: Component Upgrade
> Components: JCA
> Reporter: Lin Gao
> Assignee: Lin Gao
>
> If the password for any datasource is invalid, and there are multiple datasources defined, it is difficult to identify the problematic datasource from the console log.
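The fix described above amounts to naming the datasource in the error report. As a generic sketch only (the names `DataSourceCheck`, `DataSourceConfig`, and `validate` are invented for illustration; this is not the actual IronJacamar 1.3.4 change), the idea is:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: validate several datasource configs and report
// each failure together with the datasource's JNDI name, so the
// problematic one is identifiable from the log.
public class DataSourceCheck {

    static class DataSourceConfig {
        final String jndiName;
        final boolean passwordValid; // stands in for a real connection test
        DataSourceConfig(String jndiName, boolean passwordValid) {
            this.jndiName = jndiName;
            this.passwordValid = passwordValid;
        }
    }

    // Returns one message per failing datasource, naming it explicitly.
    static List<String> validate(List<DataSourceConfig> configs) {
        List<String> failures = new ArrayList<>();
        for (DataSourceConfig cfg : configs) {
            if (!cfg.passwordValid) {
                failures.add("Datasource '" + cfg.jndiName + "': invalid credentials");
            }
        }
        return failures;
    }

    public static void main(String[] args) {
        List<DataSourceConfig> configs = List.of(
            new DataSourceConfig("java:/GoodDS", true),
            new DataSourceConfig("java:/BadDS", false));
        validate(configs).forEach(System.out::println);
        // prints: Datasource 'java:/BadDS': invalid credentials
    }
}
```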
[JBoss JIRA] (WFLY-6173) Classes not unloaded after undeployment
by Martin Kouba (JIRA)
[ https://issues.jboss.org/browse/WFLY-6173?page=com.atlassian.jira.plugin.... ]
Martin Kouba edited comment on WFLY-6173 at 5/23/16 10:27 AM:
--------------------------------------------------------------
WFLY-6347 seems to be related. I've tried the reproducer on master branch and the {{BeanPropertiesCache}} is no longer a GC root.
was (Author: mkouba):
WFLY-6347 seems to be related. I've tried the reproducer on master branch and the {{BeanPropertiesCache}} is no longer a GC root. However, the leak is not fixed yet.
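A statically reachable cache acting as a GC root, like the {{BeanPropertiesCache}} mentioned above, pins its keys and values (and their defining classloaders) for the life of the JVM, which is exactly what blocks class unloading after undeploy. A minimal generic sketch of the pattern (class and field names invented for illustration; GC behavior itself is not demonstrated at runtime here):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.WeakHashMap;

// Hypothetical sketch of the leak pattern: a strongly-referenced static
// cache keyed by Class keeps the class (and its classloader) reachable
// forever, so undeployed classes can never be unloaded.
public class CacheLeakSketch {

    // Leaky: static strong references act as GC roots.
    static final Map<Class<?>, Object> STRONG_CACHE = new HashMap<>();

    // Safer: WeakHashMap holds its keys weakly, so a Class key no longer
    // referenced elsewhere does not keep its classloader alive (provided
    // the cached value does not itself reference the key's classloader).
    static final Map<Class<?>, Object> WEAK_CACHE = new WeakHashMap<>();

    static void remember(Map<Class<?>, Object> cache, Class<?> key, Object value) {
        cache.put(key, value);
    }

    public static void main(String[] args) {
        remember(STRONG_CACHE, String.class, "metadata");
        remember(WEAK_CACHE, String.class, "metadata");
        System.out.println(STRONG_CACHE.containsKey(String.class)); // true
        System.out.println(WEAK_CACHE.get(String.class));           // metadata
    }
}
```

A weak-keyed map is only part of the story; a container-level fix typically also clears such caches explicitly on undeploy.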
> Classes not unloaded after undeployment
> ---------------------------------------
>
> Key: WFLY-6173
> URL: https://issues.jboss.org/browse/WFLY-6173
> Project: WildFly
> Issue Type: Bug
> Components: CDI / Weld
> Affects Versions: 8.2.0.Final, 10.0.0.Final
> Reporter: Joey Wang
> Assignee: Martin Kouba
> Priority: Blocker
> Fix For: 10.1.0.Final
>
> Attachments: memory-leak.zip
>
>
> I deployed a small web application with a single JSF page and one managed bean, accessed the page, and then undeployed the application. Monitoring with Java VisualVM showed that the application's classes were never unloaded, and the '-XX:+TraceClassUnloading' JVM option confirmed this.
> Checking a heap dump, I found an instance for each enum item (the managed bean has one enum-typed field, which is always initialized when the managed bean is constructed) and one array instance containing these enum instances.
> Please refer to the attachment for the sample application. I started to investigate this classloader memory leak because hot redeployment of our real application consumed some memory each time, and after many redeployments the server ran out of memory.
[JBoss JIRA] (JGRP-2068) Problems with JBOSS cluster
by Fouad Zoghlami (JIRA)
[ https://issues.jboss.org/browse/JGRP-2068?page=com.atlassian.jira.plugin.... ]
Fouad Zoghlami reopened JGRP-2068:
----------------------------------
reopen
> Problems with JBOSS cluster
> ---------------------------
>
> Key: JGRP-2068
> URL: https://issues.jboss.org/browse/JGRP-2068
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 2.6.10
> Reporter: Fouad Zoghlami
> Assignee: Bela Ban
> Attachments: the master.zip, the slave.zip
>
>
> We are using a JBoss cluster for the Process Server component. One node (the master) is used for Process Workplace, and a second node is used by an external application to start new process instances from scanner input.
> We recently encountered a problem where one of the two nodes started to misbehave. The logs suggest some kind of communication problem between the two nodes in the cluster. The cluster then appears to recover, but from that point on the process server on one of the two nodes starts failing when users try to send on work items. It looks as if the locking state is out of sync. In every case we have been able to fix the issue by restarting JBoss on both nodes and clearing the /tmp, /data and /work folders.
> Please see the attached log files.
> The issue happened around 2015-11-12 10:48:35. It also happened two weeks ago and a couple of months ago.
> Please assist in finding the cause of this behavior and how we can prevent it in the future.
[JBoss JIRA] (JGRP-2068) Problems with JBOSS cluster
by Fouad Zoghlami (JIRA)
[ https://issues.jboss.org/browse/JGRP-2068?page=com.atlassian.jira.plugin.... ]
Fouad Zoghlami commented on JGRP-2068:
--------------------------------------
Hi Bela,
OK, but are you sure that this is a JBoss issue and not a JGroups problem?
Did you investigate the log files that I attached?
Kindest regards,
Fouad Zoghlami
> Problems with JBOSS cluster
> ---------------------------
>
> Key: JGRP-2068
> URL: https://issues.jboss.org/browse/JGRP-2068
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 2.6.10
> Reporter: Fouad Zoghlami
> Assignee: Bela Ban
> Attachments: the master.zip, the slave.zip
>
>
> We are using a JBoss cluster for the Process Server component. One node (the master) is used for Process Workplace, and a second node is used by an external application to start new process instances from scanner input.
> We recently encountered a problem where one of the two nodes started to misbehave. The logs suggest some kind of communication problem between the two nodes in the cluster. The cluster then appears to recover, but from that point on the process server on one of the two nodes starts failing when users try to send on work items. It looks as if the locking state is out of sync. In every case we have been able to fix the issue by restarting JBoss on both nodes and clearing the /tmp, /data and /work folders.
> Please see the attached log files.
> The issue happened around 2015-11-12 10:48:35. It also happened two weeks ago and a couple of months ago.
> Please assist in finding the cause of this behavior and how we can prevent it in the future.
[JBoss JIRA] (JGRP-2068) Problems with JBOSS cluster
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2068?page=com.atlassian.jira.plugin.... ]
Bela Ban closed JGRP-2068.
--------------------------
Resolution: Rejected
* Very old version of JGroups (2.6.x)
* JBoss issue, not JGroups problem
> Problems with JBOSS cluster
> ---------------------------
>
> Key: JGRP-2068
> URL: https://issues.jboss.org/browse/JGRP-2068
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 2.6.10
> Reporter: Fouad Zoghlami
> Assignee: Bela Ban
> Attachments: the master.zip, the slave.zip
>
>
> We are using a JBoss cluster for the Process Server component. One node (the master) is used for Process Workplace, and a second node is used by an external application to start new process instances from scanner input.
> We recently encountered a problem where one of the two nodes started to misbehave. The logs suggest some kind of communication problem between the two nodes in the cluster. The cluster then appears to recover, but from that point on the process server on one of the two nodes starts failing when users try to send on work items. It looks as if the locking state is out of sync. In every case we have been able to fix the issue by restarting JBoss on both nodes and clearing the /tmp, /data and /work folders.
> Please see the attached log files.
> The issue happened around 2015-11-12 10:48:35. It also happened two weeks ago and a couple of months ago.
> Please assist in finding the cause of this behavior and how we can prevent it in the future.
[JBoss JIRA] (WFLY-6636) WELD-001408: Unsatisfied dependencies on hot deploy of app using module-alias as dependency
by Tomas Hofman (JIRA)
[ https://issues.jboss.org/browse/WFLY-6636?page=com.atlassian.jira.plugin.... ]
Tomas Hofman reassigned WFLY-6636:
----------------------------------
Assignee: Tomas Hofman (was: Stuart Douglas)
> WELD-001408: Unsatisfied dependencies on hot deploy of app using module-alias as dependency
> -------------------------------------------------------------------------------------------
>
> Key: WFLY-6636
> URL: https://issues.jboss.org/browse/WFLY-6636
> Project: WildFly
> Issue Type: Bug
> Components: CDI / Weld
> Affects Versions: 10.0.0.Final
> Reporter: Brad Maxwell
> Assignee: Tomas Hofman
> Attachments: test.ear
>
>
> If a sub-deployment uses another sub-deployment's module-alias as a dependency, it deploys successfully the first time, but a hot deploy fails with the error below. Reproducer attached.
> {code}
> <?xml version="1.0" encoding="UTF-8"?>
> <jboss-deployment-structure xmlns="urn:jboss:deployment-structure:1.1">
>     <ear-subdeployments-isolated>true</ear-subdeployments-isolated>
>     <sub-deployment name="ejb1.jar">
>         <module-alias name="deployment.ejb1"/>
>     </sub-deployment>
>     <sub-deployment name="ejb2.jar">
>         <dependencies>
>             <!-- works:
>             <module name="deployment.test.ear.ejb1.jar" slot="main"/>
>             -->
>             <!-- fails with WELD-001408: Unsatisfied dependencies for type TestEJB1 with qualifiers @Default on redeploy / hot deploy -->
>             <module name="deployment.ejb1" slot="main"/>
>         </dependencies>
>     </sub-deployment>
> </jboss-deployment-structure>
> {code}
> {code}
> 10:43:42,140 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-8) MSC000001: Failed to start service jboss.deployment.unit."test.ear".WeldStartService: org.jboss.msc.service.StartException in service jboss.deployment.unit."test.ear".WeldStartService: Failed to start service
> at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1904)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.jboss.weld.exceptions.DeploymentException: WELD-001408: Unsatisfied dependencies for type TestEJB1 with qualifiers @Default
> at injection point [BackedAnnotatedField] @Inject private test.TestEJB2.test1
> at test.TestEJB2.test1(TestEJB2.java:0)
> at org.jboss.weld.bootstrap.Validator.validateInjectionPointForDeploymentProblems(Validator.java:359)
> at org.jboss.weld.bootstrap.Validator.validateInjectionPoint(Validator.java:281)
> at org.jboss.weld.bootstrap.Validator.validateGeneralBean(Validator.java:134)
> at org.jboss.weld.bootstrap.Validator.validateRIBean(Validator.java:155)
> at org.jboss.weld.bootstrap.Validator.validateBean(Validator.java:518)
> at org.jboss.weld.bootstrap.ConcurrentValidator$1.doWork(ConcurrentValidator.java:68)
> at org.jboss.weld.bootstrap.ConcurrentValidator$1.doWork(ConcurrentValidator.java:66)
> at org.jboss.weld.executor.IterativeWorkerTaskFactory$1.call(IterativeWorkerTaskFactory.java:63)
> at org.jboss.weld.executor.IterativeWorkerTaskFactory$1.call(IterativeWorkerTaskFactory.java:56)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> at org.jboss.threads.JBossThread.run(JBossThread.java:320)
> 10:43:42,144 ERROR [org.jboss.as.controller.management-operation] (DeploymentScanner-threads - 2) WFLYCTL0013: Operation ("full-replace-deployment") failed - address: ([]) - failure description: {"WFLYCTL0080: Failed services" => {"jboss.deployment.unit.\"test.ear\".WeldStartService" => "org.jboss.msc.service.StartException in service jboss.deployment.unit.\"test.ear\".WeldStartService: Failed to start service
> Caused by: org.jboss.weld.exceptions.DeploymentException: WELD-001408: Unsatisfied dependencies for type TestEJB1 with qualifiers @Default
> at injection point [BackedAnnotatedField] @Inject private test.TestEJB2.test1
> at test.TestEJB2.test1(TestEJB2.java:0)
> "}}
> {code}
[JBoss JIRA] (WFLY-6173) Classes not unloaded after undeployment
by Martin Kouba (JIRA)
[ https://issues.jboss.org/browse/WFLY-6173?page=com.atlassian.jira.plugin.... ]
Martin Kouba commented on WFLY-6173:
------------------------------------
WFLY-6347 seems to be related. I've tried the reproducer on master branch and the {{BeanPropertiesCache}} is no longer a GC root. However, the leak is not fixed yet.
> Classes not unloaded after undeployment
> ---------------------------------------
>
> Key: WFLY-6173
> URL: https://issues.jboss.org/browse/WFLY-6173
> Project: WildFly
> Issue Type: Bug
> Components: CDI / Weld
> Affects Versions: 8.2.0.Final, 10.0.0.Final
> Reporter: Joey Wang
> Assignee: Martin Kouba
> Priority: Blocker
> Fix For: 10.1.0.Final
>
> Attachments: memory-leak.zip
>
>
> I deployed a small web application with a single JSF page and one managed bean, accessed the page, and then undeployed the application. Monitoring with Java VisualVM showed that the application's classes were never unloaded, and the '-XX:+TraceClassUnloading' JVM option confirmed this.
> Checking a heap dump, I found an instance for each enum item (the managed bean has one enum-typed field, which is always initialized when the managed bean is constructed) and one array instance containing these enum instances.
> Please refer to the attachment for the sample application. I started to investigate this classloader memory leak because hot redeployment of our real application consumed some memory each time, and after many redeployments the server ran out of memory.
[JBoss JIRA] (DROOLS-1188) Move contents of kie-eap-modules repository directly into droolsjbpm-integration
by Petr Široký (JIRA)
Petr Široký created DROOLS-1188:
-----------------------------------
Summary: Move contents of kie-eap-modules repository directly into droolsjbpm-integration
Key: DROOLS-1188
URL: https://issues.jboss.org/browse/DROOLS-1188
Project: Drools
Issue Type: Enhancement
Components: build
Affects Versions: 6.4.0.Final
Reporter: Petr Široký
Assignee: Petr Široký
The KIE EAP integration currently resides in its own repository, https://github.com/jboss-integration/kie-eap-modules. This is due to historical reasons that no longer seem to apply (e.g. we no longer build skinny WARs; we just bundle the core engine jars as JBoss modules).
We should move the content directly into the droolsjbpm-integration repository, under a kie-eap-integration sub-directory. This would decrease the maintenance effort associated with having a large number of source repositories.
[JBoss JIRA] (JGRP-2068) Problems with JBOSS cluster
by Fouad Zoghlami (JIRA)
Fouad Zoghlami created JGRP-2068:
------------------------------------
Summary: Problems with JBOSS cluster
Key: JGRP-2068
URL: https://issues.jboss.org/browse/JGRP-2068
Project: JGroups
Issue Type: Bug
Affects Versions: 2.6.10
Reporter: Fouad Zoghlami
Assignee: Bela Ban
Attachments: the master.zip, the slave.zip
We are using a JBoss cluster for the Process Server component. One node (the master) is used for Process Workplace, and a second node is used by an external application to start new process instances from scanner input.
We recently encountered a problem where one of the two nodes started to misbehave. The logs suggest some kind of communication problem between the two nodes in the cluster. The cluster then appears to recover, but from that point on the process server on one of the two nodes starts failing when users try to send on work items. It looks as if the locking state is out of sync. In every case we have been able to fix the issue by restarting JBoss on both nodes and clearing the /tmp, /data and /work folders.
Please see the attached log files.
The issue happened around 2015-11-12 10:48:35. It also happened two weeks ago and a couple of months ago.
Please assist in finding the cause of this behavior and how we can prevent it in the future.