[JBoss JIRA] (MODCLUSTER-376) multiple workers with the same id following a tomcat crash/kill
by Jean-Frederic Clere (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-376?page=com.atlassian.jira.pl... ]
Jean-Frederic Clere commented on MODCLUSTER-376:
------------------------------------------------
[~mbabacek] While fixing MODCLUSTER-568 I found that this fix is missing upstream (see my PR).
> multiple workers with the same id following a tomcat crash/kill
> ---------------------------------------------------------------
>
> Key: MODCLUSTER-376
> URL: https://issues.jboss.org/browse/MODCLUSTER-376
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.0.10, 1.2.11.Final
> Environment: -JBoss Enterprise Web Server 1.0.2
> -mod_cluster 1.0.10.GA_CP04
> -Red Hat Enterprise Linux 5
> Reporter: Aaron Ogburn
> Assignee: Michal Karm Babacek
> Fix For: 1.2.13.Final
>
> Attachments: 131122.patch
>
>
> Following a kill or crash of tomcat, multiple workers are seen for a single id when a new tomcat node reconnects and reuses the id from the previously crashed worker.
> The ping check for the STATUS message finds the old, crashed worker first for that id and so pings the wrong destination. The ping fails, the load factor is never applied, and the persistent -1 load state leads to 503s.
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (MODCLUSTER-376) multiple workers with the same id following a tomcat crash/kill
by Jean-Frederic Clere (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-376?page=com.atlassian.jira.pl... ]
Jean-Frederic Clere updated MODCLUSTER-376:
-------------------------------------------
Labels: (was: missing_upstream)
[JBoss JIRA] (MODCLUSTER-501) mod_proxy_cluster crash when using BalancerMember
by Paolo Lutterotti (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-501?page=com.atlassian.jira.pl... ]
Paolo Lutterotti reopened MODCLUSTER-501:
-----------------------------------------
Hi,
I just retried with 1.3.5 and the 1.3.6 release candidate, but I am still experiencing a segfault when BalancerMember directives are present.
Any suggestions on what I can look into?
Thanks,
Paolo
> mod_proxy_cluster crash when using BalancerMember
> -------------------------------------------------
>
> Key: MODCLUSTER-501
> URL: https://issues.jboss.org/browse/MODCLUSTER-501
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.1.Final
> Environment: OS: CentOS Linux release 7.2.1511
> httpd 2.4.6
> mod_cluster 1.3.1.Final
> Reporter: Paolo Lutterotti
> Assignee: Michal Karm Babacek
> Fix For: 1.3.6.CR1
>
> Attachments: httpd.conf, modules.txt, vhost.conf, vhost.conf
>
>
> Hello,
> I'm experiencing an issue very similar to https://issues.jboss.org/browse/MODCLUSTER-356
> When I try to use a BalancerMember directive, I get a segmentation fault, caused at line https://github.com/modcluster/mod_cluster/blob/1.3.1.Final/native/mod_pro...
> I tried removing directives from the virtual host, but the issue persists as long as there is at least one ProxyPass using a balancer defined with BalancerMember. I am attaching a .conf file with all of the configuration involved in the issue, which reproduces the crash.
> If I add a check at line 2223, there are no more segmentation faults, but instead a 503 caused by: mod_proxy_cluster.c(2332): proxy: byrequests balancer FAILED
> Thanks,
> Paolo
[JBoss JIRA] (MODCLUSTER-568) mod_cluster can't proxy to uppercase hostnames
by Aaron Ogburn (JIRA)
Aaron Ogburn created MODCLUSTER-568:
---------------------------------------
Summary: mod_cluster can't proxy to uppercase hostnames
Key: MODCLUSTER-568
URL: https://issues.jboss.org/browse/MODCLUSTER-568
Project: mod_cluster
Issue Type: Bug
Components: Native (httpd modules)
Affects Versions: 1.2.13.Final
Reporter: Aaron Ogburn
Assignee: Jean-Frederic Clere
If you bind JBoss to an uppercase hostname, the node stays in a -1 load state and cannot be used for requests. This looks like a regression introduced by the fix for MODCLUSTER-376.
[JBoss JIRA] (MODCLUSTER-566) Exclusion list cannot be pre-populated in init()
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-566?page=com.atlassian.jira.pl... ]
Radoslav Husar updated MODCLUSTER-566:
--------------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> Exclusion list cannot be pre-populated in init()
> ------------------------------------------------
>
> Key: MODCLUSTER-566
> URL: https://issues.jboss.org/browse/MODCLUSTER-566
> Project: mod_cluster
> Issue Type: Bug
> Components: Core & Container Integration (Java)
> Affects Versions: 1.3.6.CR1
> Reporter: Radoslav Husar
> Assignee: Radoslav Husar
> Priority: Critical
> Fix For: 2.0.0.Alpha1, 1.3.6.Final
>
>
> The exclusion list is unnecessarily populated eagerly in init(). This does not work in containers such as WildFly, where services start asynchronously and virtual hosts and contexts can be added at any time.
[JBoss JIRA] (MODCLUSTER-567) Deprecate excludedContexts
by Paul Ferraro (JIRA)
Paul Ferraro created MODCLUSTER-567:
---------------------------------------
Summary: Deprecate excludedContexts
Key: MODCLUSTER-567
URL: https://issues.jboss.org/browse/MODCLUSTER-567
Project: mod_cluster
Issue Type: Task
Components: Core & Container Integration (Java)
Affects Versions: 1.3.6.CR1
Reporter: Paul Ferraro
Assignee: Jean-Frederic Clere
Fix For: 2.0.0.Alpha1
"Excluded contexts" is an artifact of poor deployment encapsulation. All contexts within a given engine should be registered with the load balancer. If there are applications that should not be load balanced, they should be deployed to a different engine.
[JBoss JIRA] (MODCLUSTER-566) Exclusion list cannot be pre-populated in init()
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-566?page=com.atlassian.jira.pl... ]
Radoslav Husar updated MODCLUSTER-566:
--------------------------------------
Workaround Description:
You need to specify every exclusion *per host* for *every* context, e.g.:
{{excludedContexts="localhost:ROOT,localhost:docs,localhost:manager,localhost:host-manager,localhost:examples,localhost2:ROOT,localhost2:docs,localhost2:manager,localhost2:host-manager,localhost2:examples"}}