[JBoss JIRA] (MODCLUSTER-449) Implement ramp-up when starting new nodes
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-449?page=com.atlassian.jira.pl... ]
Radoslav Husar updated MODCLUSTER-449:
--------------------------------------
Fix Version/s: 1.3.4.Final
(was: 1.3.3.Final)
> Implement ramp-up when starting new nodes
> -----------------------------------------
>
> Key: MODCLUSTER-449
> URL: https://issues.jboss.org/browse/MODCLUSTER-449
> Project: mod_cluster
> Issue Type: Feature Request
> Components: Core & Container Integration (Java)
> Reporter: Radoslav Husar
> Assignee: Radoslav Husar
> Priority: Critical
> Fix For: 1.3.4.Final
>
>
> IIUC this has been a problem since inception. The problem is that the initial load value stays in effect for load-balancing decisions until a new stat interval kicks in.
> This effect is mitigated by load decay over time, but until then a newly joined node can get overloaded right at startup.
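The ramp-up idea can be sketched as follows. This is a hypothetical illustration, not mod_cluster's actual API: the class name, the millisecond-based window, and the linear scaling are all assumptions made for the example.

```java
// Hypothetical sketch of a ramp-up factor for a newly joined node.
// Names and the linear schedule are illustrative, not mod_cluster's API.
public class RampUp {

    private final long joinTimeMillis;
    private final long rampUpMillis;

    public RampUp(long joinTimeMillis, long rampUpMillis) {
        this.joinTimeMillis = joinTimeMillis;
        this.rampUpMillis = rampUpMillis;
    }

    /**
     * Scales the node's advertised capacity from 0 up to its full value
     * over the ramp-up window, so the balancer ramps traffic up gradually
     * instead of sending full load to the node the moment it joins.
     */
    public int effectiveLoad(int fullLoad, long nowMillis) {
        long elapsed = nowMillis - joinTimeMillis;
        if (elapsed >= rampUpMillis) {
            return fullLoad;
        }
        if (elapsed <= 0) {
            return 0;
        }
        return (int) (fullLoad * elapsed / rampUpMillis);
    }
}
```

With such a factor in place, the stale initial load would matter less, because the balancer would see a small effective capacity for the node until the window elapses.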
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 7 months
[JBoss JIRA] (MODCLUSTER-452) mod_cluster with WildFly reports 0.0.0.0 to the balancer
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-452?page=com.atlassian.jira.pl... ]
Radoslav Husar updated MODCLUSTER-452:
--------------------------------------
Fix Version/s: 1.3.4.Final
(was: 1.3.3.Final)
> mod_cluster with WildFly reports 0.0.0.0 to the balancer
> --------------------------------------------------------
>
> Key: MODCLUSTER-452
> URL: https://issues.jboss.org/browse/MODCLUSTER-452
> Project: mod_cluster
> Issue Type: Bug
> Components: Core & Container Integration (Java)
> Affects Versions: 1.3.0.Final
> Reporter: Michal Karm Babacek
> Assignee: Radoslav Husar
> Fix For: 1.3.4.Final
>
>
> While I was working on [issue 138|https://github.com/modcluster/mod_cluster/issues/138] (MODCLUSTER-448), it came to my attention that one can't force EAP 6.4 Beta (jbossweb, mod_cluster 1.2.11) to send anything like {{ajp://0.0.0.0:8009}} to the balancer. If one sets the application server to bind to 0.0.0.0, mod_cluster core correctly guesses 127.0.0.1 and sends {{ajp://127.0.0.1:8009}} to the balancer.
> In contrast, WildFly goes only halfway: it reports 127.0.0.1 in the log:
> {noformat}
> [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000012: default-server connector will use /127.0.0.1
> {noformat}
> and yet it sends 0.0.0.0 to the balancer:
> {noformat}
> Node localhost (ajp://0.0.0.0:8009):
> {noformat}
> Note the {{localhost}} string in place of a properly generated UUID: MODCLUSTER-451.
> For historical context, see MODCLUSTER-91 and MODCLUSTER-168.
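The substitution the core side is expected to perform can be sketched like this. This is an illustrative snippet, not mod_cluster's actual code: a wildcard bind address such as 0.0.0.0 is not routable, so it must be replaced with a concrete address (the report says the core correctly falls back to 127.0.0.1) before being advertised to the balancer.

```java
import java.net.InetAddress;

// Illustrative sketch (not mod_cluster's actual code): never advertise a
// wildcard bind address to the balancer; substitute a concrete one.
public class ConnectorAddress {

    public static InetAddress resolve(InetAddress bindAddress) {
        if (bindAddress.isAnyLocalAddress()) {
            // 0.0.0.0 (or ::) means "listen on all interfaces"; advertise
            // the loopback address instead, matching the 127.0.0.1 guess
            // described in the report.
            return InetAddress.getLoopbackAddress();
        }
        return bindAddress;
    }
}
```

The bug, as reported, is that WildFly performs this resolution for the log message but not for the address actually sent to the balancer.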
--
[JBoss JIRA] (MODCLUSTER-501) mod_proxy_cluster crash when using BalancerMember
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-501?page=com.atlassian.jira.pl... ]
Radoslav Husar updated MODCLUSTER-501:
--------------------------------------
Fix Version/s: 1.3.4.Final
(was: 1.3.3.Final)
> mod_proxy_cluster crash when using BalancerMember
> -------------------------------------------------
>
> Key: MODCLUSTER-501
> URL: https://issues.jboss.org/browse/MODCLUSTER-501
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.1.Final
> Environment: OS: CentOS Linux release 7.2.1511
> httpd 2.4.6
> mod_cluster 1.3.1.Final
> Reporter: Paolo Lutterotti
> Assignee: Michal Karm Babacek
> Fix For: 1.3.4.Final
>
> Attachments: httpd.conf, modules.txt, vhost.conf, vhost.conf
>
>
> Hello,
> I'm experiencing an issue very similar to https://issues.jboss.org/browse/MODCLUSTER-356
> When I try to use a BalancerMember directive, I get a segmentation fault, caused at line https://github.com/modcluster/mod_cluster/blob/1.3.1.Final/native/mod_pro...
> I tried removing directives from the virtual host, but the issue persists as long as at least one ProxyPass uses a balancer defined with BalancerMember. I am attaching a .conf file with all of the configuration involved in the issue, which causes the crash.
> If I add a check at line 2223, the segmentation faults stop, but requests then fail with a 503 caused by mod_proxy_cluster.c(2332): proxy: byrequests balancer FAILED
> Thanks,
> Paolo
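A minimal sketch of the triggering configuration, per the report (the balancer name, port, and path below are illustrative; the attached vhost.conf is authoritative):

```apache
# Statically defined balancer alongside mod_proxy_cluster; per the report,
# routing through it crashes mod_proxy_cluster 1.3.1.Final.
<Proxy balancer://staticcluster>
    BalancerMember ajp://127.0.0.1:8009
</Proxy>
ProxyPass /static-app balancer://staticcluster/static-app
```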
--
[JBoss JIRA] (MODCLUSTER-505) ProxyErrorOverride=On causes workers in error state after 500 errors
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-505?page=com.atlassian.jira.pl... ]
Radoslav Husar updated MODCLUSTER-505:
--------------------------------------
Fix Version/s: 1.3.4.Final
(was: 1.3.3.Final)
> ProxyErrorOverride=On causes workers in error state after 500 errors
> ---------------------------------------------------------------------
>
> Key: MODCLUSTER-505
> URL: https://issues.jboss.org/browse/MODCLUSTER-505
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.2.Final, 1.2.12.Final
> Reporter: Aaron Ogburn
> Assignee: Jean-Frederic Clere
> Fix For: 1.2.14.Final, 1.3.4.Final
>
>
> When a VirtualHost uses ProxyPass to proxy traffic to the backend and ProxyErrorOverride to host custom error pages on the Apache httpd side, a 50x reply from the backend causes mod_proxy/mod_cluster to mark that worker as down, breaking session stickiness. This behaviour is very similar to configuring failonstatus=500.
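A minimal sketch of the configuration combination described above (hostnames, paths, and the error page location are illustrative, not taken from a reported configuration):

```apache
<VirtualHost *:80>
    # Serve custom error pages from httpd instead of the backend's own.
    ProxyErrorOverride On
    ErrorDocument 500 /errors/500.html

    # Per the report, a 50x from the backend now also puts the worker
    # into the error state, breaking stickiness, much like failonstatus=500.
    ProxyPass /app ajp://backend.example.com:8009/app
    ProxyPassReverse /app ajp://backend.example.com:8009/app
</VirtualHost>
```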
--