[JBoss JIRA] (MODCLUSTER-547) process_buff regression brought in by modcluster/mod_cluster/pull/223/
by Michal Karm Babacek (JIRA)
Michal Karm Babacek created MODCLUSTER-547:
----------------------------------------------
Summary: process_buff regression brought in by modcluster/mod_cluster/pull/223/
Key: MODCLUSTER-547
URL: https://issues.jboss.org/browse/MODCLUSTER-547
Project: mod_cluster
Issue Type: Bug
Components: Native (httpd modules)
Affects Versions: 1.3.4.Final
Environment: httpd 2.2.15 RHEL 6 x86_64
Reporter: Michal Karm Babacek
Assignee: Jean-Frederic Clere
Priority: Blocker
Perfectly ordinary workers are prevented from joining the balancer:
h3. Wrong
{code}
[debug] mod_manager.c(2340): manager_trans INFO (/)
[debug] mod_manager.c(3056): manager_handler INFO (/) processing: ""
[error] process_buff:
[warn] manager_handler INFO error: SYNTAX: Can't parse MCMP message. It might have contained illegal symbols or unknown elements.
{code}
h3. Good
{code}
[debug] mod_manager.c(2301): manager_trans INFO (/)
[debug] mod_manager.c(3017): manager_handler INFO (/) processing: ""
[debug] mod_manager.c(3067): manager_handler INFO OK
[debug] mod_manager.c(2301): manager_trans CONFIG (/)
[debug] mod_manager.c(3017): manager_handler CONFIG (/) processing: "JVMRoute=little-20-worker-4&Balancer=balancerXXX&Domain=XXXXX14&Host=192.168.122.204&Port=14&StickySessionForce=No&StickySessionRemove=Yes&Type=ajp"
[debug] mod_manager.c(3067): manager_handler CONFIG OK
{code}
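For reference, MCMP messages are plain HTTP requests with custom methods sent to the MCMP-enabled VirtualHost, so the failing exchange can be reproduced by hand. A minimal sketch, assuming the balancer listens on the address/port below (placeholders, not the actual test setup); the CONFIG body is taken from the "Good" log above:
{noformat}
# INFO is the message that process_buff fails to parse in the "Wrong" case
curl -X INFO http://192.168.122.204:8747/

# CONFIG carries the worker registration shown in the "Good" log
curl -X CONFIG http://192.168.122.204:8747/ \
     -d 'JVMRoute=little-20-worker-4&Balancer=balancerXXX&Host=192.168.122.204&Port=14&Type=ajp'
{noformat}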
--
This message was sent by Atlassian JIRA
(v7.2.2#72004)
[JBoss JIRA] (MODCLUSTER-546) Create tag & Release: 1.3.5.Final
by Michal Karm Babacek (JIRA)
Michal Karm Babacek created MODCLUSTER-546:
----------------------------------------------
Summary: Create tag & Release: 1.3.5.Final
Key: MODCLUSTER-546
URL: https://issues.jboss.org/browse/MODCLUSTER-546
Project: mod_cluster
Issue Type: Bug
Components: Core & Container Integration (Java), Native (httpd modules)
Affects Versions: 1.3.4.Final
Reporter: Michal Karm Babacek
Assignee: Jean-Frederic Clere
Priority: Critical
Please take a look at [GitHub|https://github.com/modcluster/mod_cluster/commit/1985eb96da493ef2d...].
* 1.3.4.Final identifies itself as 1.3.4.Final-SNAPSHOT
* We need 1.3.5.Final as both a Maven release *and* a native release
* We need 1.3.4.Final and 1.3.5.Final tags and Released Versions in Jira (a tagging sketch follows below)
* We need 1.3.6.Final (and 1.3.7.Final while we are at it) configured as Unreleased Versions in Jira
* We need to build and release the upstream binaries
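For the tagging item above, a minimal sketch of the commands involved (the remote name {{upstream}} is an assumption, not taken from the actual release process):
{noformat}
git tag -a 1.3.4.Final -m "mod_cluster 1.3.4.Final"
git tag -a 1.3.5.Final -m "mod_cluster 1.3.5.Final"
git push upstream 1.3.4.Final 1.3.5.Final
{noformat}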
--
This message was sent by Atlassian JIRA
(v7.2.2#72004)
[JBoss JIRA] (MODCLUSTER-545) Mod_cluster requires Advertise but Multicast interface is not available
by Bogdan Sikora (JIRA)
Bogdan Sikora created MODCLUSTER-545:
----------------------------------------
Summary: Mod_cluster requires Advertise but Multicast interface is not available
Key: MODCLUSTER-545
URL: https://issues.jboss.org/browse/MODCLUSTER-545
Project: mod_cluster
Issue Type: Bug
Affects Versions: 1.3.3.Final
Reporter: Bogdan Sikora
Assignee: Radoslav Husar
Priority: Minor
Attachments: standalone-ha.xml, standalone.xml
This error message probably has no impact on mod_cluster functionality, as the tests finished successfully even with this message in the worker's log. The balancer (standalone.xml) logs no error messages.
[^standalone-ha.xml] (worker)
[^standalone.xml] (balancer)
Issue:
{noformat}
2016-10-24 03:51:18,393 ERROR [org.wildfly.extension.mod_cluster] (ServerService Thread Pool -- 64) WFLYMODCLS0004: Mod_cluster requires Advertise but Multicast interface is not available
{noformat}
Environment:
Windows machine with a Teredo Tunneling Pseudo-Interface tunnel adapter
Scenario:
1. Start EAP (standalone-ha.xml) on an IPv6 address with prefix 2001 (Teredo Tunneling)
2. Look for the mod_cluster Advertise error
The error is logged from the ContainerEventHandlerService class:
{code}
// Read node to set configuration.
if (config.getAdvertise()) {
    // There should be a socket-binding.... Well no it needs an advertise socket :-(
    final SocketBinding binding = this.binding.getOptionalValue();
    if (binding != null) {
        config.setAdvertiseSocketAddress(binding.getMulticastSocketAddress());
        config.setAdvertiseInterface(binding.getSocketAddress().getAddress());
        if (!isMulticastEnabled(bindingManager.getValue().getDefaultInterfaceBinding().getNetworkInterfaces())) {
            ROOT_LOGGER.multicastInterfaceNotAvailable();
        }
    }
}
...
private boolean isMulticastEnabled(Collection<NetworkInterface> ifaces) {
    for (NetworkInterface iface : ifaces) {
        try {
            if (iface.isUp() && (iface.supportsMulticast() || iface.isLoopback())) {
                return true;
            }
        } catch (SocketException e) {
            // Ignore
        }
    }
    return false;
}
{code}
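Note that the {{isMulticastEnabled}} check above walks the interfaces of the *default* interface binding, not necessarily the interface that ends up being used for Advertise. A possible refinement would be to test the interface that owns the advertise address itself; a minimal sketch, assuming the surrounding class from the snippet above (the method name and wiring are hypothetical, not the actual WildFly code):
{code}
// Hypothetical variant: resolve the interface owning the advertise address and test that one.
// Uses java.net.InetAddress, java.net.NetworkInterface and java.net.SocketException.
private boolean isMulticastEnabledFor(InetAddress advertiseAddress) {
    try {
        NetworkInterface iface = NetworkInterface.getByInetAddress(advertiseAddress);
        return iface != null && iface.isUp()
                && (iface.supportsMulticast() || iface.isLoopback());
    } catch (SocketException e) {
        return false;
    }
}
{code}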
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (MODCLUSTER-544) BalancerMember directives don't work and cause SegFaults
by Michal Karm Babacek (JIRA)
Michal Karm Babacek created MODCLUSTER-544:
----------------------------------------------
Summary: BalancerMember directives don't work and cause SegFaults
Key: MODCLUSTER-544
URL: https://issues.jboss.org/browse/MODCLUSTER-544
Project: mod_cluster
Issue Type: Bug
Components: Native (httpd modules)
Affects Versions: 1.3.3.Final, 1.2.13.Final
Environment: RHEL (others definitely too)
Reporter: Michal Karm Babacek
Assignee: Jean-Frederic Clere
There has been an ongoing discussion about interoperability between BalancerMember and ProxyPass directives and mod_cluster. This is a follow up on MODCLUSTER-391 and especially MODCLUSTER-356.
h3. TL;DR
* BalancerMember directives don't work as expected (at all)
* they can be used to cause a SegFault in httpd
* If these directives are *supposed to work*, then I have a wrong configuration or it is a bug to be fixed
* If they are *not supposed to work* in conjunction with mod_cluster, then I should stop trying to test these and remove all ever-failing scenarios from the test suite
h3. Configuration and goal
* two web apps, [^clusterbench.war] and [^tses.war], both deployed on each of two tomcats
* one web app is in excluded contexts (it is [^tses.war])
* the other one ([^clusterbench]) is registered with mod_cluster balancer
* main server: {{\*:2080}}
* mod_cluster VirtualHost: {{\*:8747}}
* ProxyPass/BalancerMember VirtualHost: {{\*:2081}}
* I want to access [^clusterbench.war] via {{\*:8747}} and {{\*:2080}} (works (/)), and [^tses.war] via {{\*:2081}} (fails (x))
* see [^proxy_test.conf] for the BalancerMember configuration (taken from an httpd 2.2.26 test run; you must edit the Location access); a minimal sketch of such a VirtualHost follows right after this list
* see [^mod_cluster.conf] for mod_cluster configuration (taken from httpd 2.2.26 test run, as above)
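Since [^proxy_test.conf] is only attached, here is a minimal sketch of the kind of VirtualHost it describes, assuming the balancer name {{xqacluster}} seen in the error log below; the worker addresses and the AJP port are placeholders, not the actual test configuration:
{noformat}
<VirtualHost *:2081>
    <Proxy balancer://xqacluster>
        BalancerMember ajp://192.168.122.1:8009 route=worker1
        BalancerMember ajp://192.168.122.2:8009 route=worker2
    </Proxy>
    ProxyPass /tses balancer://xqacluster/tses stickysession=JSESSIONID
    ProxyPassReverse /tses balancer://xqacluster/tses
</VirtualHost>
{noformat}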
h3. Test
* (/) check that only [^clusterbench.war] is registered and everything is cool: [mod_cluster-manager console|https://gist.github.com/Karm/26015dabf446360b0e019da6c907bed5]
* (/) [^clusterbench.war] on mod_cluster VirtualHost works: {{curl http://192.168.122.172:8747/clusterbench/requestinfo}}
* (/) [^clusterbench.war] on main server also works: {{curl http://192.168.122.172:2080/clusterbench/requestinfo}} (it works due to MODCLUSTER-430)
* httpd 2.2.26 / mod_cluster 1.2.13.Final:
** (x) [^tses.war] on BalancerMember ProxyPass VirtualHost fails: {{curl http://192.168.122.172:2081/tses}} with: {noformat}mod_proxy_cluster.c(2374): proxy: byrequests balancer FAILED
proxy: CLUSTER: (balancer://xqacluster). All workers are in error state
{noformat} and it doesn't matter whether I configure the same balancer (qacluster) for both mod_cluster and the additional BalancerMember directives or whether I have two balancers (this case).
** (x) [^clusterbench.war] on BalancerMember ProxyPass VirtualHost sometimes works and sometimes causes SegFault {{curl http://192.168.122.172:2081/clusterbench/requestinfo}} (see below)
* httpd 2.4.23 / mod_cluster 1.3.3.Final:
** (x) [^tses.war] on BalancerMember ProxyPass VirtualHost fails with {{curl http://192.168.122.172:2081/tses}} SegFault, *always* (see below)
** (/) [^clusterbench.war] on BalancerMember ProxyPass VirtualHost works {{curl http://192.168.122.172:2081/clusterbench/requestinfo}}
h3. Intermittent and stable SegFaults
h4. httpd 2.2.26 / mod_cluster 1.2.13.Final (EWS 2.1.1)
With the aforementioned setup, roughly 50% of requests to {{curl http://192.168.122.172:2081/clusterbench/requestinfo}} on httpd 2.2.26 / mod_cluster 1.2.13.Final cause a SegFault; the rest pass fine and the web app is served.
*Offending line:* [mod_proxy_cluster.c:3843|https://github.com/modcluster/mod_cluster/blob/1...]
*Trace:*
{noformat}
#0 proxy_cluster_pre_request (worker=<optimized out>, balancer=<optimized out>, r=0x5555558be3e0, conf=0x5555558767d8, url=0x7fffffffdd40) at mod_proxy_cluster.c:3843
#1 0x00007ffff0cfe3d6 in proxy_run_pre_request (worker=worker@entry=0x7fffffffdd38, balancer=balancer@entry=0x7fffffffdd30, r=r@entry=0x5555558be3e0,
conf=conf@entry=0x5555558767d8, url=url@entry=0x7fffffffdd40) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/modules/proxy/mod_proxy.c:2428
#2 0x00007ffff0d01ef2 in ap_proxy_pre_request (worker=worker@entry=0x7fffffffdd38, balancer=balancer@entry=0x7fffffffdd30, r=r@entry=0x5555558be3e0,
conf=conf@entry=0x5555558767d8, url=url@entry=0x7fffffffdd40) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/modules/proxy/proxy_util.c:1512
#3 0x00007ffff0cfeabb in proxy_handler (r=0x5555558be3e0) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/modules/proxy/mod_proxy.c:952
#4 0x00005555555805e0 in ap_run_handler (r=0x5555558be3e0) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/server/config.c:157
#5 0x00005555555809a9 in ap_invoke_handler (r=r@entry=0x5555558be3e0) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/server/config.c:376
#6 0x000055555558dc58 in ap_process_request (r=r@entry=0x5555558be3e0) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/modules/http/http_request.c:282
#7 0x000055555558aff8 in ap_process_http_connection (c=0x5555558ae2f0) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/modules/http/http_core.c:190
#8 0x0000555555587010 in ap_run_process_connection (c=0x5555558ae2f0) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/server/connection.c:43
#9 0x00005555555873b0 in ap_process_connection (c=c@entry=0x5555558ae2f0, csd=<optimized out>) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/server/connection.c:190
#10 0x0000555555592b5b in child_main (child_num_arg=child_num_arg@entry=0) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/server/mpm/prefork/prefork.c:667
#11 0x0000555555592fae in make_child (s=0x5555557bf880, slot=0) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/server/mpm/prefork/prefork.c:712
#12 0x0000555555593b6e in ap_mpm_run (_pconf=_pconf@entry=0x5555557ba158, plog=<optimized out>, s=s@entry=0x5555557bf880)
at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/server/mpm/prefork/prefork.c:988
#13 0x000055555556b50e in main (argc=8, argv=0x7fffffffe268) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/server/main.c:753
{noformat}
h4. httpd 2.4.23 / mod_cluster 1.3.3.Final (JBCS 2.4.23)
With the aforementioned setup, it is *always* possible to SegFault httpd by accessing [^tses.war] on the BalancerMember ProxyPass VirtualHost: {{curl http://192.168.122.172:2081/tses}}.
*Offending line:* [mod_proxy_cluster.c:2230|https://github.com/modcluster/mod_cluster/blob/1...]
*Trace:*
{noformat}
#0 0x00007fffe61a598f in internal_find_best_byrequests (balancer=0x55555593ad38, conf=0x555555918dd8, r=0x5555559a6630, domain=0x0, failoverdomain=0,
vhost_table=0x5555559a5c98, context_table=0x5555559a5e00, node_table=0x5555559a6088) at mod_proxy_cluster.c:2230
#1 0x00007fffe61a90c8 in find_best_worker (balancer=0x55555593ad38, conf=0x555555918dd8, r=0x5555559a6630, domain=0x0, failoverdomain=0, vhost_table=0x5555559a5c98,
context_table=0x5555559a5e00, node_table=0x5555559a6088, recurse=1) at mod_proxy_cluster.c:3457
#2 0x00007fffe61a9f4d in proxy_cluster_pre_request (worker=0x7fffffffdb68, balancer=0x7fffffffdb60, r=0x5555559a6630, conf=0x555555918dd8, url=0x7fffffffdb70)
at mod_proxy_cluster.c:3825
#3 0x00007fffec2fd9a6 in proxy_run_pre_request (worker=worker@entry=0x7fffffffdb68, balancer=balancer@entry=0x7fffffffdb60, r=r@entry=0x5555559a6630,
conf=conf@entry=0x555555918dd8, url=url@entry=0x7fffffffdb70) at mod_proxy.c:2853
#4 0x00007fffec302652 in ap_proxy_pre_request (worker=worker@entry=0x7fffffffdb68, balancer=balancer@entry=0x7fffffffdb60, r=r@entry=0x5555559a6630,
conf=conf@entry=0x555555918dd8, url=url@entry=0x7fffffffdb70) at proxy_util.c:1956
#5 0x00007fffec2fe1dc in proxy_handler (r=0x5555559a6630) at mod_proxy.c:1108
#6 0x00005555555aeff0 in ap_run_handler (r=r@entry=0x5555559a6630) at config.c:170
#7 0x00005555555af539 in ap_invoke_handler (r=r@entry=0x5555559a6630) at config.c:434
#8 0x00005555555c5b2a in ap_process_async_request (r=0x5555559a6630) at http_request.c:410
#9 0x00005555555c5e04 in ap_process_request (r=r@entry=0x5555559a6630) at http_request.c:445
#10 0x00005555555c1ded in ap_process_http_sync_connection (c=0x555555950050) at http_core.c:210
#11 ap_process_http_connection (c=0x555555950050) at http_core.c:251
#12 0x00005555555b9470 in ap_run_process_connection (c=c@entry=0x555555950050) at connection.c:42
#13 0x00005555555b99c8 in ap_process_connection (c=c@entry=0x555555950050, csd=<optimized out>) at connection.c:226
#14 0x00007fffec513a30 in child_main (child_num_arg=child_num_arg@entry=0, child_bucket=child_bucket@entry=0) at prefork.c:723
#15 0x00007fffec513c70 in make_child (s=0x55555582d400, slot=slot@entry=0, bucket=bucket@entry=0) at prefork.c:767
#16 0x00007fffec51521d in prefork_run (_pconf=<optimized out>, plog=0x5555558313a8, s=0x55555582d400) at prefork.c:979
#17 0x0000555555592aae in ap_run_mpm (pconf=pconf@entry=0x555555804188, plog=0x5555558313a8, s=0x55555582d400) at mpm_common.c:94
#18 0x000055555558bb18 in main (argc=8, argv=0x7fffffffe1a8) at main.c:783
{noformat}
h3. About the test
This test has always been failing in one way or another: not serving the URL (HTTP 404) or returning "All workers are in error state" (HTTP 503). The SegFault has been slipping under the radar for some time, because the test ended on an assert earlier in the scenario, on the first HTTP 503.
We should clearly document which BalancerMember integration is supported and which is not. Furthermore, we must not SegFault even if the user tries to do something weird; we must log an error message instead.
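As an illustration of the requested behaviour, a generic defensive pattern for the pre_request path; this is only a sketch of the shape of the fix being asked for, not the actual code at the offending mod_proxy_cluster.c lines:
{code}
/* Illustrative only: log an error and return HTTP 503
 * instead of dereferencing a missing balancer/worker. */
if (worker == NULL || balancer == NULL) {
    ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r,
                  "proxy: CLUSTER: no usable worker for %s", r->uri);
    return HTTP_SERVICE_UNAVAILABLE;
}
{code}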
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (MODCLUSTER-543) BalancerMember directives don't work and cause SegFaults
by Michal Karm Babacek (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-543?page=com.atlassian.jira.pl... ]
Michal Karm Babacek updated MODCLUSTER-543:
-------------------------------------------
Attachment: mod_cluster.conf
proxy_test.conf
clusterbench.war
tses.war
> BalancerMember directives don't work and cause SegFaults
> ---------------------------------------------------------
>
> Key: MODCLUSTER-543
> URL: https://issues.jboss.org/browse/MODCLUSTER-543
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.3.Final, 1.2.13.Final
> Environment: RHEL (others definitely too)
> Reporter: Michal Karm Babacek
> Assignee: Jean-Frederic Clere
> Labels: balancerMember, proxy
> Attachments: clusterbench.war, mod_cluster.conf, proxy_test.conf, tses.war
>
>
> There has been an ongoing discussion about interoperability between BalancerMember and ProxyPass directives and mod_cluster. This is a follow up on MODCLUSTER-391 and especially MODCLUSTER-356.
> h3. TL;DR
> * BalancerMember directives don't work as expected (at all)
> * they can be used to cause a SegFault in httpd
> * If these directives are *supposed to work*, then I have a wrong configuration or it is a bug to be fixed
> * If they are *not supposed to work* in conjunction with mod_cluster, then I should stop trying to test these and remove all ever-failing scenarios from the test suite
> h3. Configuration and goal
> * two web apps, [^clusterbench.war] and [^tses.war], both deployed on each of two tomcats
> * one web app is in excluded contexts (it is [^tses.war])
> * the other one ([^clusterbench]) is registered with mod_cluster balancer
> * main server: {{\*:2080}}
> * mod_cluster VirtualHost: {{\*:8747}}
> * ProxyPass/BalancerMember VirtualHost: {{\*:2081}}
> * I want to access [^clusterbench.war] via {{\*:8747}} and {{\*:2080}} (works (/)), and [^tses.war] via {{\*:2081}} (fails (x))
> * see [^proxy_test.conf] for BalancerMember configuration (taken from httpd 2.2.26 test run, you must edit Location access)
> * see [^mod_cluster.conf] for mod_cluster configuration (taken from httpd 2.2.26 test run, as above)
> h3. Test
> * (/) check that only [^clusterbench.war] is registered and everything is cool: [mod_cluster-manager console|https://gist.github.com/Karm/26015dabf446360b0e019da6c907bed5]
> * (/) [^clusterbench.war] on mod_cluster VirtualHost works: {{curl http://192.168.122.172:8747/clusterbench/requestinfo}}
> * (/) [^clusterbench.war] on main server also works: {{curl http://192.168.122.172:2080/clusterbench/requestinfo}} (it works due to MODCLUSTER-430)
> * httpd 2.2.26 / mod_cluster 1.2.13.Final:
> ** (x) [^tses.war] on BalancerMember ProxyPass VirtualHost fails: {{curl http://192.168.122.172:2081/tses}} with: {noformat}mod_proxy_cluster.c(2374): proxy: byrequests balancer FAILED
> proxy: CLUSTER: (balancer://xqacluster). All workers are in error state
> {noformat} and it doesn't matter whether I configure the same balancer (qacluster) for both mod_cluster and the additional BalancerMember directives or whether I have two balancers (this case).
> ** (x) [^clusterbench.war] on BalancerMember ProxyPass VirtualHost sometimes works and sometimes causes SegFault {{curl http://192.168.122.172:2081/clusterbench/requestinfo}} (see below)
> * httpd 2.4.23 / mod_cluster 1.3.3.Final:
> ** (x) [^tses.war] on BalancerMember ProxyPass VirtualHost fails with {{curl http://192.168.122.172:2081/tses}} SegFault, *always* (see below)
> ** (/) [^clusterbench.war] on BalancerMember ProxyPass VirtualHost works {{curl http://192.168.122.172:2081/clusterbench/requestinfo}}
> h3. Intermittent and stable SegFaults
> h4. httpd 2.2.26 / mod_cluster 1.2.13.Final (EWS 2.1.1)
> With the aforementioned setup, roughly 50% of requests to {{curl http://192.168.122.172:2081/clusterbench/requestinfo}} on httpd 2.2.26 / mod_cluster 1.2.13.Final cause a SegFault; the rest pass fine and the web app is served.
> *Offending line:* [mod_proxy_cluster.c:3843|https://github.com/modcluster/mod_cluster/blob/1...]
> *Trace:*
> {noformat}
> #0 proxy_cluster_pre_request (worker=<optimized out>, balancer=<optimized out>, r=0x5555558be3e0, conf=0x5555558767d8, url=0x7fffffffdd40) at mod_proxy_cluster.c:3843
> #1 0x00007ffff0cfe3d6 in proxy_run_pre_request (worker=worker@entry=0x7fffffffdd38, balancer=balancer@entry=0x7fffffffdd30, r=r@entry=0x5555558be3e0,
> conf=conf@entry=0x5555558767d8, url=url@entry=0x7fffffffdd40) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/modules/proxy/mod_proxy.c:2428
> #2 0x00007ffff0d01ef2 in ap_proxy_pre_request (worker=worker@entry=0x7fffffffdd38, balancer=balancer@entry=0x7fffffffdd30, r=r@entry=0x5555558be3e0,
> conf=conf@entry=0x5555558767d8, url=url@entry=0x7fffffffdd40) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/modules/proxy/proxy_util.c:1512
> #3 0x00007ffff0cfeabb in proxy_handler (r=0x5555558be3e0) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/modules/proxy/mod_proxy.c:952
> #4 0x00005555555805e0 in ap_run_handler (r=0x5555558be3e0) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/server/config.c:157
> #5 0x00005555555809a9 in ap_invoke_handler (r=r@entry=0x5555558be3e0) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/server/config.c:376
> #6 0x000055555558dc58 in ap_process_request (r=r@entry=0x5555558be3e0) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/modules/http/http_request.c:282
> #7 0x000055555558aff8 in ap_process_http_connection (c=0x5555558ae2f0) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/modules/http/http_core.c:190
> #8 0x0000555555587010 in ap_run_process_connection (c=0x5555558ae2f0) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/server/connection.c:43
> #9 0x00005555555873b0 in ap_process_connection (c=c@entry=0x5555558ae2f0, csd=<optimized out>) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/server/connection.c:190
> #10 0x0000555555592b5b in child_main (child_num_arg=child_num_arg@entry=0) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/server/mpm/prefork/prefork.c:667
> #11 0x0000555555592fae in make_child (s=0x5555557bf880, slot=0) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/server/mpm/prefork/prefork.c:712
> #12 0x0000555555593b6e in ap_mpm_run (_pconf=_pconf@entry=0x5555557ba158, plog=<optimized out>, s=s@entry=0x5555557bf880)
> at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/server/mpm/prefork/prefork.c:988
> #13 0x000055555556b50e in main (argc=8, argv=0x7fffffffe268) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/server/main.c:753
> {noformat}
> h4. httpd 2.4.23 / mod_cluster 1.3.3.Final (JBCS 2.4.23)
> With the aforementioned setup, it is *always* possible to SegFault httpd by accessing [^tses.war] on the BalancerMember ProxyPass VirtualHost: {{curl http://192.168.122.172:2081/tses}}.
> *Offending line:* [mod_proxy_cluster.c:2230|https://github.com/modcluster/mod_cluster/blob/1...]
> *Trace:*
> {noformat}
> #0 0x00007fffe61a598f in internal_find_best_byrequests (balancer=0x55555593ad38, conf=0x555555918dd8, r=0x5555559a6630, domain=0x0, failoverdomain=0,
> vhost_table=0x5555559a5c98, context_table=0x5555559a5e00, node_table=0x5555559a6088) at mod_proxy_cluster.c:2230
> #1 0x00007fffe61a90c8 in find_best_worker (balancer=0x55555593ad38, conf=0x555555918dd8, r=0x5555559a6630, domain=0x0, failoverdomain=0, vhost_table=0x5555559a5c98,
> context_table=0x5555559a5e00, node_table=0x5555559a6088, recurse=1) at mod_proxy_cluster.c:3457
> #2 0x00007fffe61a9f4d in proxy_cluster_pre_request (worker=0x7fffffffdb68, balancer=0x7fffffffdb60, r=0x5555559a6630, conf=0x555555918dd8, url=0x7fffffffdb70)
> at mod_proxy_cluster.c:3825
> #3 0x00007fffec2fd9a6 in proxy_run_pre_request (worker=worker@entry=0x7fffffffdb68, balancer=balancer@entry=0x7fffffffdb60, r=r@entry=0x5555559a6630,
> conf=conf@entry=0x555555918dd8, url=url@entry=0x7fffffffdb70) at mod_proxy.c:2853
> #4 0x00007fffec302652 in ap_proxy_pre_request (worker=worker@entry=0x7fffffffdb68, balancer=balancer@entry=0x7fffffffdb60, r=r@entry=0x5555559a6630,
> conf=conf@entry=0x555555918dd8, url=url@entry=0x7fffffffdb70) at proxy_util.c:1956
> #5 0x00007fffec2fe1dc in proxy_handler (r=0x5555559a6630) at mod_proxy.c:1108
> #6 0x00005555555aeff0 in ap_run_handler (r=r@entry=0x5555559a6630) at config.c:170
> #7 0x00005555555af539 in ap_invoke_handler (r=r@entry=0x5555559a6630) at config.c:434
> #8 0x00005555555c5b2a in ap_process_async_request (r=0x5555559a6630) at http_request.c:410
> #9 0x00005555555c5e04 in ap_process_request (r=r@entry=0x5555559a6630) at http_request.c:445
> #10 0x00005555555c1ded in ap_process_http_sync_connection (c=0x555555950050) at http_core.c:210
> #11 ap_process_http_connection (c=0x555555950050) at http_core.c:251
> #12 0x00005555555b9470 in ap_run_process_connection (c=c@entry=0x555555950050) at connection.c:42
> #13 0x00005555555b99c8 in ap_process_connection (c=c@entry=0x555555950050, csd=<optimized out>) at connection.c:226
> #14 0x00007fffec513a30 in child_main (child_num_arg=child_num_arg@entry=0, child_bucket=child_bucket@entry=0) at prefork.c:723
> #15 0x00007fffec513c70 in make_child (s=0x55555582d400, slot=slot@entry=0, bucket=bucket@entry=0) at prefork.c:767
> #16 0x00007fffec51521d in prefork_run (_pconf=<optimized out>, plog=0x5555558313a8, s=0x55555582d400) at prefork.c:979
> #17 0x0000555555592aae in ap_run_mpm (pconf=pconf@entry=0x555555804188, plog=0x5555558313a8, s=0x55555582d400) at mpm_common.c:94
> #18 0x000055555558bb18 in main (argc=8, argv=0x7fffffffe1a8) at main.c:783
> {noformat}
> h3. About the test
> This test has always been failing in one way or another: not serving the URL (HTTP 404) or returning "All workers are in error state" (HTTP 503). The SegFault has been slipping under the radar for some time, because the test ended on an assert earlier in the scenario, on the first HTTP 503.
> We should clearly document which BalancerMember integration is supported and which is not. Furthermore, we must not SegFault even if the user tries to do something weird; we must log an error message instead.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (MODCLUSTER-543) BalancerMember directives don't work and cause SegFaults
by Michal Karm Babacek (JIRA)
Michal Karm Babacek created MODCLUSTER-543:
----------------------------------------------
Summary: BalancerMember directives don't work and cause SegFaults
Key: MODCLUSTER-543
URL: https://issues.jboss.org/browse/MODCLUSTER-543
Project: mod_cluster
Issue Type: Bug
Components: Native (httpd modules)
Affects Versions: 1.2.13.Final, 1.3.3.Final
Environment: RHEL (others definitely too)
Reporter: Michal Karm Babacek
Assignee: Jean-Frederic Clere
There has been an ongoing discussion about interoperability between BalancerMember and ProxyPass directives and mod_cluster. This is a follow up on MODCLUSTER-391 and especially MODCLUSTER-356.
h3. TL;DR
* BalancerMember directives don't work as expected (at all)
* they can be used to cause a SegFault in httpd
* If these directives are *supposed to work*, then I have a wrong configuration or it is a bug to be fixed
* If they are *not supposed to work* in conjunction with mod_cluster, then I should stop trying to test these and remove all ever-failing scenarios from the test suite
h3. Configuration and goal
* two web apps, [^clusterbench.war] and [^tses.war], both deployed on each of two tomcats
* one web app is in excluded contexts (it is [^tses.war])
* the other one ([^clusterbench]) is registered with mod_cluster balancer
* main server: {{\*:2080}}
* mod_cluster VirtualHost: {{\*:8747}}
* ProxyPass/BalancerMember VirtualHost: {{\*:2081}}
* I want to access [^clusterbench.war] via {{\*:8747}} and {{\*:2080}} (works (/)), and [^tses.war] via {{\*:2081}} (fails (x))
* see [^proxy_test.conf] for BalancerMember configuration (taken from httpd 2.2.26 test run, you must edit Location access)
* see [^mod_cluster.conf] for mod_cluster configuration (taken from httpd 2.2.26 test run, as above)
h3. Test
* (/) check that only [^clusterbench.war] is registered and everything is cool: [mod_cluster-manager console|https://gist.github.com/Karm/26015dabf446360b0e019da6c907bed5]
* (/) [^clusterbench.war] on mod_cluster VirtualHost works: {{curl http://192.168.122.172:8747/clusterbench/requestinfo}}
* (/) [^clusterbench.war] on main server also works: {{curl http://192.168.122.172:2080/clusterbench/requestinfo}} (it works due to MODCLUSTER-430)
* httpd 2.2.26 / mod_cluster 1.2.13.Final:
** (x) [^tses.war] on BalancerMember ProxyPass VirtualHost fails: {{curl http://192.168.122.172:2081/tses}} with: {noformat}mod_proxy_cluster.c(2374): proxy: byrequests balancer FAILED
proxy: CLUSTER: (balancer://xqacluster). All workers are in error state
{noformat} and it doesn't matter whether I configure the same balancer (qacluster) for both mod_cluster and the additional BalancerMember directives or whether I have two balancers (this case).
** (x) [^clusterbench.war] on BalancerMember ProxyPass VirtualHost sometimes works and sometimes causes SegFault {{curl http://192.168.122.172:2081/clusterbench/requestinfo}} (see below)
* httpd 2.4.23 / mod_cluster 1.3.3.Final:
** (x) [^tses.war] on BalancerMember ProxyPass VirtualHost fails with {{curl http://192.168.122.172:2081/tses}} SegFault, *always* (see below)
** (/) [^clusterbench.war] on BalancerMember ProxyPass VirtualHost works {{curl http://192.168.122.172:2081/clusterbench/requestinfo}}
h3. Intermittent and stable SegFaults
h4. httpd 2.2.26 / mod_cluster 1.2.13.Final (EWS 2.1.1)
With the aforementioned setup, roughly 50% of requests to {{curl http://192.168.122.172:2081/clusterbench/requestinfo}} on httpd 2.2.26 / mod_cluster 1.2.13.Final cause a SegFault; the rest pass fine and the web app is served.
*Offending line:* [mod_proxy_cluster.c:3843|https://github.com/modcluster/mod_cluster/blob/1...]
*Trace:*
{noformat}
#0 proxy_cluster_pre_request (worker=<optimized out>, balancer=<optimized out>, r=0x5555558be3e0, conf=0x5555558767d8, url=0x7fffffffdd40) at mod_proxy_cluster.c:3843
#1 0x00007ffff0cfe3d6 in proxy_run_pre_request (worker=worker@entry=0x7fffffffdd38, balancer=balancer@entry=0x7fffffffdd30, r=r@entry=0x5555558be3e0,
conf=conf@entry=0x5555558767d8, url=url@entry=0x7fffffffdd40) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/modules/proxy/mod_proxy.c:2428
#2 0x00007ffff0d01ef2 in ap_proxy_pre_request (worker=worker@entry=0x7fffffffdd38, balancer=balancer@entry=0x7fffffffdd30, r=r@entry=0x5555558be3e0,
conf=conf@entry=0x5555558767d8, url=url@entry=0x7fffffffdd40) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/modules/proxy/proxy_util.c:1512
#3 0x00007ffff0cfeabb in proxy_handler (r=0x5555558be3e0) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/modules/proxy/mod_proxy.c:952
#4 0x00005555555805e0 in ap_run_handler (r=0x5555558be3e0) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/server/config.c:157
#5 0x00005555555809a9 in ap_invoke_handler (r=r@entry=0x5555558be3e0) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/server/config.c:376
#6 0x000055555558dc58 in ap_process_request (r=r@entry=0x5555558be3e0) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/modules/http/http_request.c:282
#7 0x000055555558aff8 in ap_process_http_connection (c=0x5555558ae2f0) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/modules/http/http_core.c:190
#8 0x0000555555587010 in ap_run_process_connection (c=0x5555558ae2f0) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/server/connection.c:43
#9 0x00005555555873b0 in ap_process_connection (c=c@entry=0x5555558ae2f0, csd=<optimized out>) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/server/connection.c:190
#10 0x0000555555592b5b in child_main (child_num_arg=child_num_arg@entry=0) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/server/mpm/prefork/prefork.c:667
#11 0x0000555555592fae in make_child (s=0x5555557bf880, slot=0) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/server/mpm/prefork/prefork.c:712
#12 0x0000555555593b6e in ap_mpm_run (_pconf=_pconf@entry=0x5555557ba158, plog=<optimized out>, s=s@entry=0x5555557bf880)
at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/server/mpm/prefork/prefork.c:988
#13 0x000055555556b50e in main (argc=8, argv=0x7fffffffe268) at /builddir/build/BUILD/httpd-EWS_2.1.1.CR1/server/main.c:753
{noformat}
h4. httpd 2.4.23 / mod_cluster 1.3.3.Final (JBCS 2.4.23)
With the aforementioned setup, it is *always* possible to SegFault httpd by accessing [^tses.war] on the BalancerMember ProxyPass VirtualHost: {{curl http://192.168.122.172:2081/tses}}.
*Offending line:* [mod_proxy_cluster.c:2230|https://github.com/modcluster/mod_cluster/blob/1...]
*Trace:*
{noformat}
#0 0x00007fffe61a598f in internal_find_best_byrequests (balancer=0x55555593ad38, conf=0x555555918dd8, r=0x5555559a6630, domain=0x0, failoverdomain=0,
vhost_table=0x5555559a5c98, context_table=0x5555559a5e00, node_table=0x5555559a6088) at mod_proxy_cluster.c:2230
#1 0x00007fffe61a90c8 in find_best_worker (balancer=0x55555593ad38, conf=0x555555918dd8, r=0x5555559a6630, domain=0x0, failoverdomain=0, vhost_table=0x5555559a5c98,
context_table=0x5555559a5e00, node_table=0x5555559a6088, recurse=1) at mod_proxy_cluster.c:3457
#2 0x00007fffe61a9f4d in proxy_cluster_pre_request (worker=0x7fffffffdb68, balancer=0x7fffffffdb60, r=0x5555559a6630, conf=0x555555918dd8, url=0x7fffffffdb70)
at mod_proxy_cluster.c:3825
#3 0x00007fffec2fd9a6 in proxy_run_pre_request (worker=worker@entry=0x7fffffffdb68, balancer=balancer@entry=0x7fffffffdb60, r=r@entry=0x5555559a6630,
conf=conf@entry=0x555555918dd8, url=url@entry=0x7fffffffdb70) at mod_proxy.c:2853
#4 0x00007fffec302652 in ap_proxy_pre_request (worker=worker@entry=0x7fffffffdb68, balancer=balancer@entry=0x7fffffffdb60, r=r@entry=0x5555559a6630,
conf=conf@entry=0x555555918dd8, url=url@entry=0x7fffffffdb70) at proxy_util.c:1956
#5 0x00007fffec2fe1dc in proxy_handler (r=0x5555559a6630) at mod_proxy.c:1108
#6 0x00005555555aeff0 in ap_run_handler (r=r@entry=0x5555559a6630) at config.c:170
#7 0x00005555555af539 in ap_invoke_handler (r=r@entry=0x5555559a6630) at config.c:434
#8 0x00005555555c5b2a in ap_process_async_request (r=0x5555559a6630) at http_request.c:410
#9 0x00005555555c5e04 in ap_process_request (r=r@entry=0x5555559a6630) at http_request.c:445
#10 0x00005555555c1ded in ap_process_http_sync_connection (c=0x555555950050) at http_core.c:210
#11 ap_process_http_connection (c=0x555555950050) at http_core.c:251
#12 0x00005555555b9470 in ap_run_process_connection (c=c@entry=0x555555950050) at connection.c:42
#13 0x00005555555b99c8 in ap_process_connection (c=c@entry=0x555555950050, csd=<optimized out>) at connection.c:226
#14 0x00007fffec513a30 in child_main (child_num_arg=child_num_arg@entry=0, child_bucket=child_bucket@entry=0) at prefork.c:723
#15 0x00007fffec513c70 in make_child (s=0x55555582d400, slot=slot@entry=0, bucket=bucket@entry=0) at prefork.c:767
#16 0x00007fffec51521d in prefork_run (_pconf=<optimized out>, plog=0x5555558313a8, s=0x55555582d400) at prefork.c:979
#17 0x0000555555592aae in ap_run_mpm (pconf=pconf@entry=0x555555804188, plog=0x5555558313a8, s=0x55555582d400) at mpm_common.c:94
#18 0x000055555558bb18 in main (argc=8, argv=0x7fffffffe1a8) at main.c:783
{noformat}
h3. About the test
This test has always been failing in one way or another: not serving the URL (HTTP 404) or returning "All workers are in error state" (HTTP 503). The SegFault has been slipping under the radar for some time, because the test ended on an assert earlier in the scenario, on the first HTTP 503.
We should clearly document which BalancerMember integration is supported and which is not. Furthermore, we must not SegFault even if the user tries to do something weird; we must log an error message instead.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (MODCLUSTER-465) CMake: build expat, apr, apru, zlib, iconv, openssl, httpd, jk and mod_cluster
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-465?page=com.atlassian.jira.pl... ]
Radoslav Husar updated MODCLUSTER-465:
--------------------------------------
Issue Type: Task (was: Enhancement)
> CMake: build expat, apr, apru, zlib, iconv, openssl, httpd, jk and mod_cluster
> ------------------------------------------------------------------------------
>
> Key: MODCLUSTER-465
> URL: https://issues.jboss.org/browse/MODCLUSTER-465
> Project: mod_cluster
> Issue Type: Task
> Components: Native (httpd modules)
> Reporter: Michal Karm Babacek
> Assignee: Michal Karm Babacek
> Original Estimate: 24 weeks
> Remaining Estimate: 24 weeks
>
> This is a long-term pet project dedicated to translating Autotools to CMake. The first two target platforms are Windows and Fedora, x86_64. Various flavours of Solaris, HP-UX, FreeBSD and Mac will follow.
> I'll use this JIRA for tracking notes and progress. It is noteworthy that the current upstream expat, apr and httpd use CMake to generate MSVC projects, so translating those to generating Linux make files shouldn't be that nightmarish. One Autotools directive at a time...
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (MODCLUSTER-541) [Mod_cluster] ManagerBalancerName variable is lowercased
by Bogdan Sikora (JIRA)
Bogdan Sikora created MODCLUSTER-541:
----------------------------------------
Summary: [Mod_cluster] ManagerBalancerName variable is lowercased
Key: MODCLUSTER-541
URL: https://issues.jboss.org/browse/MODCLUSTER-541
Project: mod_cluster
Issue Type: Bug
Affects Versions: 1.3.3.Final
Reporter: Bogdan Sikora
Assignee: Michal Karm Babacek
Documentation
{noformat}
3.5.8. ManagerBalancerName
ManagerBalancerName: That is the name of balancer to use when the JBoss AS/JBossWeb/Tomcat doesn't provide a balancer name.
Default: mycluster
{noformat}
Issue:
Apache httpd (2.4.23-ER1) does not keep the uppercase letters in ManagerBalancerName and turns the whole name into lowercase. If the worker itself passes a balancer name, that name is used correctly, uppercase letters included, as the documentation suggests.
Reproduce:
1. Set up a balancer (httpd) with a worker (for example EAP 7; do not set the Balancer variable on the worker)
2. Set ManagerBalancerName to QA-bAlAnCeR in mod_cluster.conf
3. Start everything and access the mod_cluster status page
4. Look for the Balancer variable under your worker; it should keep its mixed case but does not
Workaround:
Set the balancer name in each worker; as the documentation says, it overrides the value set via ManagerBalancerName in mod_cluster.conf
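A minimal sketch of that workaround on an EAP 7 worker, assuming the modcluster subsystem in standalone-ha.xml (attribute placement may differ between versions, so treat this as a sketch rather than the verified configuration):
{noformat}
<!-- standalone-ha.xml, modcluster subsystem: pin the balancer name on the worker side -->
<mod-cluster-config advertise-socket="modcluster" balancer="QA-bAlAnCeR" connector="ajp"/>
{noformat}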
Verbose:
Httpd debug node join part
{noformat}
08:13:22.058 [INFO] RESPONSE: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<html><head>
<title>Mod_cluster Status</title>
</head><body>
<h1>mod_cluster/1.3.3.Final</h1><a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&refresh=10">Auto Refresh</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=DUMP&Range=ALL">show DUMP output</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=INFO&Range=ALL">show INFO output</a>
<h1> Node jboss-eap-7.1 (ajp://192.168.122.88:8009): </h1>
<a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=ENABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.1">Enable Contexts</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=DISABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.1">Disable Contexts</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=STOP-APP&Range=NODE&JVMRoute=jboss-eap-7.1">Stop Contexts</a><br/>
Balancer:qa-balancer,LBGroup: ,Flushpackets: Off,Flushwait: 10000,Ping: 10000000,Smax: 2,Ttl: 60000000,Status: OK,Elected: 0,Read: 0,Transferred: 0,Connected: 0,Load: 1
<h2> Virtual Host 1:</h2><h3>Contexts:</h3><pre>/clusterbench, Status: ENABLED Request: 0 <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=DISABLE-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.1&Alias=default-host&Context=/clusterbench">Disable</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=STOP-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.1&Alias=default-host&Context=/clusterbench">Stop</a>
</pre><h3>Aliases:</h3><pre>default-host
localhost
</pre></body></html>
{noformat}
{noformat}
[Thu Oct 13 08:12:04.337611 2016] [:debug] [pid 12216] mod_manager.c(3018): manager_handler CONFIG (/) processing: "JVMRoute=jboss-eap-7.1&Host=192.168.122.88&Maxattempts=1&Port=8009&StickySessionForce=No&Type=ajp&ping=10"
[Thu Oct 13 08:12:04.340202 2016] [:debug] [pid 12216] mod_manager.c(3068): manager_handler CONFIG OK
[Thu Oct 13 08:12:04.342416 2016] [:debug] [pid 12217] mod_manager.c(2302): manager_trans ENABLE-APP (/)
[Thu Oct 13 08:12:04.342499 2016] [authz_core:debug] [pid 12217] mod_authz_core.c(809): [client 192.168.122.88:54652] AH01626: authorization result of Require all granted: granted
[Thu Oct 13 08:12:04.342509 2016] [authz_core:debug] [pid 12217] mod_authz_core.c(809): [client 192.168.122.88:54652] AH01626: authorization result of <RequireAny>: granted
[Thu Oct 13 08:12:04.342576 2016] [:debug] [pid 12217] mod_proxy_cluster.c(1054): update_workers_node starting
[Thu Oct 13 08:12:04.343186 2016] [:debug] [pid 12217] mod_proxy_cluster.c(695): add_balancer_node: Create balancer balancer://qa-balancer
[Thu Oct 13 08:12:04.343243 2016] [:debug] [pid 12217] mod_proxy_cluster.c(293): Created: worker for ajp://192.168.122.88:8009
[Thu Oct 13 08:12:04.343259 2016] [proxy:debug] [pid 12217] proxy_util.c(1779): AH00925: initializing worker ajp://192.168.122.88 shared
[Thu Oct 13 08:12:04.343262 2016] [proxy:debug] [pid 12217] proxy_util.c(1821): AH00927: initializing worker ajp://192.168.122.88 local
[Thu Oct 13 08:12:04.343279 2016] [proxy:debug] [pid 12217] proxy_util.c(1872): AH00931: initialized single connection worker in child 12217 for (192.168.122.88)
[Thu Oct 13 08:12:04.343288 2016] [:debug] [pid 12217] mod_proxy_cluster.c(293): Created: worker for ajp://192.168.122.88:8009
[Thu Oct 13 08:12:04.343290 2016] [proxy:debug] [pid 12217] proxy_util.c(1774): AH00924: worker ajp://192.168.122.88 shared already initialized
[Thu Oct 13 08:12:04.343292 2016] [proxy:debug] [pid 12217] proxy_util.c(1821): AH00927: initializing worker ajp://192.168.122.88 local
[Thu Oct 13 08:12:04.343312 2016] [proxy:debug] [pid 12217] proxy_util.c(1872): AH00931: initialized single connection worker in child 12217 for (192.168.122.88)
[Thu Oct 13 08:12:04.343318 2016] [:debug] [pid 12217] mod_proxy_cluster.c(1066): update_workers_node done
{noformat}
Configuration (mod_cluster.conf)
{noformat}
<IfModule manager_module>
Listen 192.168.122.88:8747
LogLevel debug
<VirtualHost 192.168.122.88:8747>
ServerName localhost.localdomain:8747
<Directory />
Require all granted
</Directory>
KeepAliveTimeout 60
MaxKeepAliveRequests 0
ServerAdvertise on
AdvertiseFrequency 5
ManagerBalancerName QA-bAlAnCeR
AdvertiseGroup 224.0.5.88:23364
AdvertiseBindAddress 192.168.122.88:23364
EnableMCPMReceive
<Location /mcm>
SetHandler mod_cluster-manager
Require all granted
</Location>
</VirtualHost>
</IfModule>
{noformat}
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (MODCLUSTER-540) ManagerBalancerName variable is lowercased
by Bogdan Sikora (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-540?page=com.atlassian.jira.pl... ]
Bogdan Sikora updated MODCLUSTER-540:
-------------------------------------
Description:
Documentation
{noformat}
3.5.8. ManagerBalancerName
ManagerBalancerName: That is the name of balancer to use when the JBoss AS/JBossWeb/Tomcat doesn't provide a balancer name.
Default: mycluster
{noformat}
Issue:
Apache httpd (2.4.23-ER1) does not keep the uppercase letters in ManagerBalancerName and turns the whole name into lowercase. If the worker itself passes a balancer name, that name is used correctly, uppercase letters included, as the documentation suggests.
Reproduce:
1. Set up a balancer (httpd) with a worker (for example EAP 7; do not set the Balancer variable on the worker)
2. Set ManagerBalancerName to QA-bAlAnCeR in mod_cluster.conf
3. Start everything and access the mod_cluster status page
4. Look for the Balancer variable under your worker; it should keep its mixed case but does not
Workaround:
Set the balancer name in each worker; as the documentation says, it overrides the value set via ManagerBalancerName in mod_cluster.conf
Verbose:
Httpd debug node join part
{noformat}
08:13:22.058 [INFO] RESPONSE: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<html><head>
<title>Mod_cluster Status</title>
</head><body>
<h1>mod_cluster/1.3.3.Final</h1><a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&refresh=10">Auto Refresh</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=DUMP&Range=ALL">show DUMP output</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=INFO&Range=ALL">show INFO output</a>
<h1> Node jboss-eap-7.1 (ajp://192.168.122.88:8009): </h1>
<a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=ENABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.1">Enable Contexts</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=DISABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.1">Disable Contexts</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=STOP-APP&Range=NODE&JVMRoute=jboss-eap-7.1">Stop Contexts</a><br/>
Balancer:qa-balancer,LBGroup: ,Flushpackets: Off,Flushwait: 10000,Ping: 10000000,Smax: 2,Ttl: 60000000,Status: OK,Elected: 0,Read: 0,Transferred: 0,Connected: 0,Load: 1
<h2> Virtual Host 1:</h2><h3>Contexts:</h3><pre>/clusterbench, Status: ENABLED Request: 0 <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=DISABLE-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.1&Alias=default-host&Context=/clusterbench">Disable</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=STOP-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.1&Alias=default-host&Context=/clusterbench">Stop</a>
</pre><h3>Aliases:</h3><pre>default-host
localhost
</pre></body></html>
{noformat}
{noformat}
[Thu Oct 13 08:12:04.337611 2016] [:debug] [pid 12216] mod_manager.c(3018): manager_handler CONFIG (/) processing: "JVMRoute=jboss-eap-7.1&Host=192.168.122.88&Maxattempts=1&Port=8009&StickySessionForce=No&Type=ajp&ping=10"
[Thu Oct 13 08:12:04.340202 2016] [:debug] [pid 12216] mod_manager.c(3068): manager_handler CONFIG OK
[Thu Oct 13 08:12:04.342416 2016] [:debug] [pid 12217] mod_manager.c(2302): manager_trans ENABLE-APP (/)
[Thu Oct 13 08:12:04.342499 2016] [authz_core:debug] [pid 12217] mod_authz_core.c(809): [client 192.168.122.88:54652] AH01626: authorization result of Require all granted: granted
[Thu Oct 13 08:12:04.342509 2016] [authz_core:debug] [pid 12217] mod_authz_core.c(809): [client 192.168.122.88:54652] AH01626: authorization result of <RequireAny>: granted
[Thu Oct 13 08:12:04.342576 2016] [:debug] [pid 12217] mod_proxy_cluster.c(1054): update_workers_node starting
[Thu Oct 13 08:12:04.343186 2016] [:debug] [pid 12217] mod_proxy_cluster.c(695): add_balancer_node: Create balancer balancer://qa-balancer
[Thu Oct 13 08:12:04.343243 2016] [:debug] [pid 12217] mod_proxy_cluster.c(293): Created: worker for ajp://192.168.122.88:8009
[Thu Oct 13 08:12:04.343259 2016] [proxy:debug] [pid 12217] proxy_util.c(1779): AH00925: initializing worker ajp://192.168.122.88 shared
[Thu Oct 13 08:12:04.343262 2016] [proxy:debug] [pid 12217] proxy_util.c(1821): AH00927: initializing worker ajp://192.168.122.88 local
[Thu Oct 13 08:12:04.343279 2016] [proxy:debug] [pid 12217] proxy_util.c(1872): AH00931: initialized single connection worker in child 12217 for (192.168.122.88)
[Thu Oct 13 08:12:04.343288 2016] [:debug] [pid 12217] mod_proxy_cluster.c(293): Created: worker for ajp://192.168.122.88:8009
[Thu Oct 13 08:12:04.343290 2016] [proxy:debug] [pid 12217] proxy_util.c(1774): AH00924: worker ajp://192.168.122.88 shared already initialized
[Thu Oct 13 08:12:04.343292 2016] [proxy:debug] [pid 12217] proxy_util.c(1821): AH00927: initializing worker ajp://192.168.122.88 local
[Thu Oct 13 08:12:04.343312 2016] [proxy:debug] [pid 12217] proxy_util.c(1872): AH00931: initialized single connection worker in child 12217 for (192.168.122.88)
[Thu Oct 13 08:12:04.343318 2016] [:debug] [pid 12217] mod_proxy_cluster.c(1066): update_workers_node done
{noformat}
Configuration (mod_cluster.conf)
{noformat}
<IfModule manager_module>
Listen 192.168.122.88:8747
LogLevel debug
<VirtualHost 192.168.122.88:8747>
ServerName localhost.localdomain:8747
<Directory />
Require all granted
</Directory>
KeepAliveTimeout 60
MaxKeepAliveRequests 0
ServerAdvertise on
AdvertiseFrequency 5
ManagerBalancerName QA-bAlAnCeR
AdvertiseGroup 224.0.5.88:23364
AdvertiseBindAddress 192.168.122.88:23364
EnableMCPMReceive
<Location /mcm>
SetHandler mod_cluster-manager
Require all granted
</Location>
</VirtualHost>
</IfModule>
{noformat}
was:
Documentation
{noformat}
3.5.8. ManagerBalancerName
ManagerBalancerName: That is the name of balancer to use when the JBoss AS/JBossWeb/Tomcat doesn't provide a balancer name.
Default: mycluster
{noformat}
Issue:
Apache httpd (2.4.23-ER1) does not keep the uppercase letters in ManagerBalancerName and turns the whole name into lowercase. If the worker itself passes a balancer name, that name is used correctly, uppercase letters included, as the documentation suggests.
Reproduce:
1. Set up a balancer (httpd) with a worker (for example EAP 7; do not set the Balancer variable on the worker)
2. Set ManagerBalancerName to QA-bAlAnCeR in mod_cluster.conf
3. Start everything and access the mod_cluster status page
4. Look for the Balancer variable under your worker; it should keep its mixed case but does not
Workaround:
Set the balancer name in each worker; as the documentation says, it overrides the value set via ManagerBalancerName in mod_cluster.conf
Verbose:
Httpd debug node join part
{noformat}
08:13:22.058 [INFO] RESPONSE: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<html><head>
<title>Mod_cluster Status</title>
</head><body>
<h1>mod_cluster/1.3.3.Final</h1><a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&refresh=10">Auto Refresh</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=DUMP&Range=ALL">show DUMP output</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=INFO&Range=ALL">show INFO output</a>
<h1> Node jboss-eap-7.1 (ajp://192.168.122.88:8009): </h1>
<a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=ENABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.1">Enable Contexts</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=DISABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.1">Disable Contexts</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=STOP-APP&Range=NODE&JVMRoute=jboss-eap-7.1">Stop Contexts</a><br/>
Balancer:qa-balancert,LBGroup: ,Flushpackets: Off,Flushwait: 10000,Ping: 10000000,Smax: 2,Ttl: 60000000,Status: OK,Elected: 0,Read: 0,Transferred: 0,Connected: 0,Load: 1
<h2> Virtual Host 1:</h2><h3>Contexts:</h3><pre>/clusterbench, Status: ENABLED Request: 0 <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=DISABLE-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.1&Alias=default-host&Context=/clusterbench">Disable</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=STOP-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.1&Alias=default-host&Context=/clusterbench">Stop</a>
</pre><h3>Aliases:</h3><pre>default-host
localhost
</pre></body></html>
{noformat}
{noformat}
[Thu Oct 13 08:12:04.337611 2016] [:debug] [pid 12216] mod_manager.c(3018): manager_handler CONFIG (/) processing: "JVMRoute=jboss-eap-7.1&Host=192.168.122.88&Maxattempts=1&Port=8009&StickySessionForce=No&Type=ajp&ping=10"
[Thu Oct 13 08:12:04.340202 2016] [:debug] [pid 12216] mod_manager.c(3068): manager_handler CONFIG OK
[Thu Oct 13 08:12:04.342416 2016] [:debug] [pid 12217] mod_manager.c(2302): manager_trans ENABLE-APP (/)
[Thu Oct 13 08:12:04.342499 2016] [authz_core:debug] [pid 12217] mod_authz_core.c(809): [client 192.168.122.88:54652] AH01626: authorization result of Require all granted: granted
[Thu Oct 13 08:12:04.342509 2016] [authz_core:debug] [pid 12217] mod_authz_core.c(809): [client 192.168.122.88:54652] AH01626: authorization result of <RequireAny>: granted
[Thu Oct 13 08:12:04.342576 2016] [:debug] [pid 12217] mod_proxy_cluster.c(1054): update_workers_node starting
[Thu Oct 13 08:12:04.343186 2016] [:debug] [pid 12217] mod_proxy_cluster.c(695): add_balancer_node: Create balancer balancer://qa-balancer
[Thu Oct 13 08:12:04.343243 2016] [:debug] [pid 12217] mod_proxy_cluster.c(293): Created: worker for ajp://192.168.122.88:8009
[Thu Oct 13 08:12:04.343259 2016] [proxy:debug] [pid 12217] proxy_util.c(1779): AH00925: initializing worker ajp://192.168.122.88 shared
[Thu Oct 13 08:12:04.343262 2016] [proxy:debug] [pid 12217] proxy_util.c(1821): AH00927: initializing worker ajp://192.168.122.88 local
[Thu Oct 13 08:12:04.343279 2016] [proxy:debug] [pid 12217] proxy_util.c(1872): AH00931: initialized single connection worker in child 12217 for (192.168.122.88)
[Thu Oct 13 08:12:04.343288 2016] [:debug] [pid 12217] mod_proxy_cluster.c(293): Created: worker for ajp://192.168.122.88:8009
[Thu Oct 13 08:12:04.343290 2016] [proxy:debug] [pid 12217] proxy_util.c(1774): AH00924: worker ajp://192.168.122.88 shared already initialized
[Thu Oct 13 08:12:04.343292 2016] [proxy:debug] [pid 12217] proxy_util.c(1821): AH00927: initializing worker ajp://192.168.122.88 local
[Thu Oct 13 08:12:04.343312 2016] [proxy:debug] [pid 12217] proxy_util.c(1872): AH00931: initialized single connection worker in child 12217 for (192.168.122.88)
[Thu Oct 13 08:12:04.343318 2016] [:debug] [pid 12217] mod_proxy_cluster.c(1066): update_workers_node done
{noformat}
Configuration (mod_cluster.conf)
{noformat}
<IfModule manager_module>
Listen 192.168.122.88:8747
LogLevel debug
<VirtualHost 192.168.122.88:8747>
ServerName localhost.localdomain:8747
<Directory />
Require all granted
</Directory>
KeepAliveTimeout 60
MaxKeepAliveRequests 0
ServerAdvertise on
AdvertiseFrequency 5
ManagerBalancerName QA-bAlAnCeR
AdvertiseGroup 224.0.5.88:23364
AdvertiseBindAddress 192.168.122.88:23364
EnableMCPMReceive
<Location /mcm>
SetHandler mod_cluster-manager
Require all granted
</Location>
</VirtualHost>
</IfModule>
{noformat}
> ManagerBalancerName variable is lowercased
> ------------------------------------------
>
> Key: MODCLUSTER-540
> URL: https://issues.jboss.org/browse/MODCLUSTER-540
> Project: mod_cluster
> Issue Type: Bug
> Affects Versions: 1.3.3.Final
> Reporter: Bogdan Sikora
> Assignee: Michal Karm Babacek
>
> Documentation
> {noformat}
> 3.5.8. ManagerBalancerName
> ManagerBalancerName: That is the name of balancer to use when the JBoss AS/JBossWeb/Tomcat doesn't provide a balancer name.
> Default: mycluster
> {noformat}
> Issue:
> Apache httpd (2.4.23-ER1) does not keep the uppercase letters in ManagerBalancerName and turns the whole name into lowercase. If the worker itself passes a balancer name, that name is used correctly, uppercase letters included, as the documentation suggests.
> Reproduce:
> 1. Set up a balancer (httpd) with a worker (for example EAP 7; do not set the Balancer variable on the worker)
> 2. Set ManagerBalancerName to QA-bAlAnCeR in mod_cluster.conf
> 3. Start everything and access the mod_cluster status page
> 4. Look for the Balancer variable under your worker; it should keep its mixed case but does not
> Workaround:
> Set the balancer name in each worker; as the documentation says, it overrides the value set via ManagerBalancerName in mod_cluster.conf
> Verbose:
> Httpd debug node join part
> {noformat}
> 08:13:22.058 [INFO] RESPONSE: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
> <html><head>
> <title>Mod_cluster Status</title>
> </head><body>
> <h1>mod_cluster/1.3.3.Final</h1><a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&refresh=10">Auto Refresh</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=DUMP&Range=ALL">show DUMP output</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=INFO&Range=ALL">show INFO output</a>
> <h1> Node jboss-eap-7.1 (ajp://192.168.122.88:8009): </h1>
> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=ENABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.1">Enable Contexts</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=DISABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.1">Disable Contexts</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=STOP-APP&Range=NODE&JVMRoute=jboss-eap-7.1">Stop Contexts</a><br/>
> Balancer:qa-balancer,LBGroup: ,Flushpackets: Off,Flushwait: 10000,Ping: 10000000,Smax: 2,Ttl: 60000000,Status: OK,Elected: 0,Read: 0,Transferred: 0,Connected: 0,Load: 1
> <h2> Virtual Host 1:</h2><h3>Contexts:</h3><pre>/clusterbench, Status: ENABLED Request: 0 <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=DISABLE-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.1&Alias=default-host&Context=/clusterbench">Disable</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=STOP-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.1&Alias=default-host&Context=/clusterbench">Stop</a>
> </pre><h3>Aliases:</h3><pre>default-host
> localhost
> </pre></body></html>
> {noformat}
> {noformat}
> [Thu Oct 13 08:12:04.337611 2016] [:debug] [pid 12216] mod_manager.c(3018): manager_handler CONFIG (/) processing: "JVMRoute=jboss-eap-7.1&Host=192.168.122.88&Maxattempts=1&Port=8009&StickySessionForce=No&Type=ajp&ping=10"
> [Thu Oct 13 08:12:04.340202 2016] [:debug] [pid 12216] mod_manager.c(3068): manager_handler CONFIG OK
> [Thu Oct 13 08:12:04.342416 2016] [:debug] [pid 12217] mod_manager.c(2302): manager_trans ENABLE-APP (/)
> [Thu Oct 13 08:12:04.342499 2016] [authz_core:debug] [pid 12217] mod_authz_core.c(809): [client 192.168.122.88:54652] AH01626: authorization result of Require all granted: granted
> [Thu Oct 13 08:12:04.342509 2016] [authz_core:debug] [pid 12217] mod_authz_core.c(809): [client 192.168.122.88:54652] AH01626: authorization result of <RequireAny>: granted
> [Thu Oct 13 08:12:04.342576 2016] [:debug] [pid 12217] mod_proxy_cluster.c(1054): update_workers_node starting
> [Thu Oct 13 08:12:04.343186 2016] [:debug] [pid 12217] mod_proxy_cluster.c(695): add_balancer_node: Create balancer balancer://qa-balancer
> [Thu Oct 13 08:12:04.343243 2016] [:debug] [pid 12217] mod_proxy_cluster.c(293): Created: worker for ajp://192.168.122.88:8009
> [Thu Oct 13 08:12:04.343259 2016] [proxy:debug] [pid 12217] proxy_util.c(1779): AH00925: initializing worker ajp://192.168.122.88 shared
> [Thu Oct 13 08:12:04.343262 2016] [proxy:debug] [pid 12217] proxy_util.c(1821): AH00927: initializing worker ajp://192.168.122.88 local
> [Thu Oct 13 08:12:04.343279 2016] [proxy:debug] [pid 12217] proxy_util.c(1872): AH00931: initialized single connection worker in child 12217 for (192.168.122.88)
> [Thu Oct 13 08:12:04.343288 2016] [:debug] [pid 12217] mod_proxy_cluster.c(293): Created: worker for ajp://192.168.122.88:8009
> [Thu Oct 13 08:12:04.343290 2016] [proxy:debug] [pid 12217] proxy_util.c(1774): AH00924: worker ajp://192.168.122.88 shared already initialized
> [Thu Oct 13 08:12:04.343292 2016] [proxy:debug] [pid 12217] proxy_util.c(1821): AH00927: initializing worker ajp://192.168.122.88 local
> [Thu Oct 13 08:12:04.343312 2016] [proxy:debug] [pid 12217] proxy_util.c(1872): AH00931: initialized single connection worker in child 12217 for (192.168.122.88)
> [Thu Oct 13 08:12:04.343318 2016] [:debug] [pid 12217] mod_proxy_cluster.c(1066): update_workers_node done
> {noformat}
> Configuration (mod_cluster.conf)
> {noformat}
> <IfModule manager_module>
> Listen 192.168.122.88:8747
> LogLevel debug
> <VirtualHost 192.168.122.88:8747>
> ServerName localhost.localdomain:8747
> <Directory />
> Require all granted
> </Directory>
> KeepAliveTimeout 60
> MaxKeepAliveRequests 0
> ServerAdvertise on
> AdvertiseFrequency 5
> ManagerBalancerName QA-bAlAnCeR
> AdvertiseGroup 224.0.5.88:23364
> AdvertiseBindAddress 192.168.122.88:23364
> EnableMCPMReceive
> <Location /mcm>
> SetHandler mod_cluster-manager
> Require all granted
> </Location>
> </VirtualHost>
> </IfModule>
> {noformat}
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 4 months
[JBoss JIRA] (MODCLUSTER-540) ManagerBalancerName variable is lowercased
by Bogdan Sikora (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-540?page=com.atlassian.jira.pl... ]
Bogdan Sikora updated MODCLUSTER-540:
-------------------------------------
Description:
Documentation
{noformat}
3.5.8. ManagerBalancerName
ManagerBalancerName: That is the name of balancer to use when the JBoss AS/JBossWeb/Tomcat doesn't provide a balancer name.
Default: mycluster
{noformat}
Issue:
Apache httpd (2.4.23-ER1) does not preserve uppercase letters in ManagerBalancerName and folds the whole name to lowercase. If the worker passes its own balancer name, that name is used correctly, as the documentation suggests, even with uppercase letters.
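For illustration only (this is not code taken from mod_manager.c or proxy_util.c): httpd treats proxy balancer names case-insensitively, and a normalization along the following lines would produce exactly the folding seen below when the worker does not send its own Balancer value.
{noformat}
/* Hypothetical sketch of the suspected normalization; not the actual
 * mod_manager.c / proxy_util.c code. httpd provides ap_str_tolower()
 * (declared in httpd.h), which folds a string in place, so applying it
 * to the configured default balancer name turns "QA-bAlAnCeR" into
 * "qa-balancer", matching the status page output in this report. */
#include "httpd.h"   /* ap_str_tolower() */

static void normalize_default_balancer(char *name)
{
    if (name) {
        ap_str_tolower(name);   /* "QA-bAlAnCeR" -> "qa-balancer" */
    }
}
{noformat}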
Reproduce:
1. Set up balancer (httpd) with worker (for example EAP-7, do not set Balancer variable)
2. Set ManagerBalancerName to QA-bAlAnCeR in mod_cluster.conf
3. Start everything and access mod_cluster status page
4. Look for the Balancer variable under your worker; it should keep the configured mixed case (QA-bAlAnCeR) but is shown lowercased.
Workaround:
Set the balancer name in each worker; as the documentation says, it overrides the value set in mod_cluster.conf (ManagerBalancerName). See the CLI sketch below.
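A minimal sketch of that worker-side setting, assuming an EAP 7.0 / WildFly 10 standalone-ha profile where the modcluster subsystem exposes a mod-cluster-config=configuration resource with a balancer attribute (the resource path is an assumption; newer versions moved it, so check it with :read-resource first):
{noformat}
# jboss-cli.sh, run against the worker instance
/subsystem=modcluster/mod-cluster-config=configuration:write-attribute(name=balancer, value=QA-bAlAnCeR)
reload
{noformat}
After the reload the worker sends Balancer=QA-bAlAnCeR in its CONFIG message, the mixed case survives on the status page, and ManagerBalancerName is never consulted.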
Verbose:
Httpd debug node join part
{noformat}
08:13:22.058 [INFO] RESPONSE: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<html><head>
<title>Mod_cluster Status</title>
</head><body>
<h1>mod_cluster/1.3.3.Final</h1><a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&refresh=10">Auto Refresh</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=DUMP&Range=ALL">show DUMP output</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=INFO&Range=ALL">show INFO output</a>
<h1> Node jboss-eap-7.1 (ajp://192.168.122.88:8009): </h1>
<a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=ENABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.1">Enable Contexts</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=DISABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.1">Disable Contexts</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=STOP-APP&Range=NODE&JVMRoute=jboss-eap-7.1">Stop Contexts</a><br/>
Balancer:qa-balancert,LBGroup: ,Flushpackets: Off,Flushwait: 10000,Ping: 10000000,Smax: 2,Ttl: 60000000,Status: OK,Elected: 0,Read: 0,Transferred: 0,Connected: 0,Load: 1
<h2> Virtual Host 1:</h2><h3>Contexts:</h3><pre>/clusterbench, Status: ENABLED Request: 0 <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=DISABLE-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.1&Alias=default-host&Context=/clusterbench">Disable</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=STOP-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.1&Alias=default-host&Context=/clusterbench">Stop</a>
</pre><h3>Aliases:</h3><pre>default-host
localhost
</pre></body></html>
{noformat}
{noformat}
[Thu Oct 13 08:12:04.337611 2016] [:debug] [pid 12216] mod_manager.c(3018): manager_handler CONFIG (/) processing: "JVMRoute=jboss-eap-7.1&Host=192.168.122.88&Maxattempts=1&Port=8009&StickySessionForce=No&Type=ajp&ping=10"
[Thu Oct 13 08:12:04.340202 2016] [:debug] [pid 12216] mod_manager.c(3068): manager_handler CONFIG OK
[Thu Oct 13 08:12:04.342416 2016] [:debug] [pid 12217] mod_manager.c(2302): manager_trans ENABLE-APP (/)
[Thu Oct 13 08:12:04.342499 2016] [authz_core:debug] [pid 12217] mod_authz_core.c(809): [client 192.168.122.88:54652] AH01626: authorization result of Require all granted: granted
[Thu Oct 13 08:12:04.342509 2016] [authz_core:debug] [pid 12217] mod_authz_core.c(809): [client 192.168.122.88:54652] AH01626: authorization result of <RequireAny>: granted
[Thu Oct 13 08:12:04.342576 2016] [:debug] [pid 12217] mod_proxy_cluster.c(1054): update_workers_node starting
[Thu Oct 13 08:12:04.343186 2016] [:debug] [pid 12217] mod_proxy_cluster.c(695): add_balancer_node: Create balancer balancer://qa-balancer
[Thu Oct 13 08:12:04.343243 2016] [:debug] [pid 12217] mod_proxy_cluster.c(293): Created: worker for ajp://192.168.122.88:8009
[Thu Oct 13 08:12:04.343259 2016] [proxy:debug] [pid 12217] proxy_util.c(1779): AH00925: initializing worker ajp://192.168.122.88 shared
[Thu Oct 13 08:12:04.343262 2016] [proxy:debug] [pid 12217] proxy_util.c(1821): AH00927: initializing worker ajp://192.168.122.88 local
[Thu Oct 13 08:12:04.343279 2016] [proxy:debug] [pid 12217] proxy_util.c(1872): AH00931: initialized single connection worker in child 12217 for (192.168.122.88)
[Thu Oct 13 08:12:04.343288 2016] [:debug] [pid 12217] mod_proxy_cluster.c(293): Created: worker for ajp://192.168.122.88:8009
[Thu Oct 13 08:12:04.343290 2016] [proxy:debug] [pid 12217] proxy_util.c(1774): AH00924: worker ajp://192.168.122.88 shared already initialized
[Thu Oct 13 08:12:04.343292 2016] [proxy:debug] [pid 12217] proxy_util.c(1821): AH00927: initializing worker ajp://192.168.122.88 local
[Thu Oct 13 08:12:04.343312 2016] [proxy:debug] [pid 12217] proxy_util.c(1872): AH00931: initialized single connection worker in child 12217 for (192.168.122.88)
[Thu Oct 13 08:12:04.343318 2016] [:debug] [pid 12217] mod_proxy_cluster.c(1066): update_workers_node done
{noformat}
Configuration (mod_cluster.conf)
{noformat}
<IfModule manager_module>
Listen 192.168.122.88:8747
LogLevel debug
<VirtualHost 192.168.122.88:8747>
ServerName localhost.localdomain:8747
<Directory />
Require all granted
</Directory>
KeepAliveTimeout 60
MaxKeepAliveRequests 0
ServerAdvertise on
AdvertiseFrequency 5
ManagerBalancerName QA-bAlAnCeR
AdvertiseGroup 224.0.5.88:23364
AdvertiseBindAddress 192.168.122.88:23364
EnableMCPMReceive
<Location /mcm>
SetHandler mod_cluster-manager
Require all granted
</Location>
</VirtualHost>
</IfModule>
{noformat}
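To verify which balancer name actually took effect, the status page configured above can be queried directly; a quick sketch, assuming the mod_cluster-manager handler stays mounted at /mcm on 192.168.122.88:8747 as in this configuration:
{noformat}
# Fetch the mod_cluster-manager page and extract the Balancer field
curl -s http://192.168.122.88:8747/mcm | grep -o 'Balancer:[^,]*'
{noformat}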
was:
Documentation
{noformat}
3.5.8. ManagerBalancerName
ManagerBalancerName: That is the name of balancer to use when the JBoss AS/JBossWeb/Tomcat doesn't provide a balancer name.
Default: mycluster
{noformat}
Issue:
Apache httpd (2.4.23-ER1) does not preserve uppercase letters in ManagerBalancerName and folds the whole name to lowercase. If the worker passes its own balancer name, that name is used correctly, as the documentation suggests, even with uppercase letters.
Reproduce:
1. Set up balancer (httpd) with worker (for example EAP-7, do not set Balancer variable)
2. Set ManagerBalancerName to QA-bAlAnCeR in mod_cluster.conf
3. Start everything and access mod_cluster status page
4. Look for the Balancer variable under your worker; it should keep the configured mixed case (QA-bAlAnCeR) but is shown lowercased.
Workaround:
Set the balancer name in each worker; as the documentation says, it overrides the value set in mod_cluster.conf (ManagerBalancerName).
Verbose:
Httpd debug node join part
{noformat}
08:13:22.058 [INFO] RESPONSE: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<html><head>
<title>Mod_cluster Status</title>
</head><body>
<h1>mod_cluster/1.3.3.Final</h1><a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&refresh=10">Auto Refresh</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=DUMP&Range=ALL">show DUMP output</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=INFO&Range=ALL">show INFO output</a>
<h1> Node jboss-eap-7.1 (ajp://192.168.122.88:8009): </h1>
<a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=ENABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.1">Enable Contexts</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=DISABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.1">Disable Contexts</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=STOP-APP&Range=NODE&JVMRoute=jboss-eap-7.1">Stop Contexts</a><br/>
*_+Balancer:qa-balancert+_*,LBGroup: ,Flushpackets: Off,Flushwait: 10000,Ping: 10000000,Smax: 2,Ttl: 60000000,Status: OK,Elected: 0,Read: 0,Transferred: 0,Connected: 0,Load: 1
<h2> Virtual Host 1:</h2><h3>Contexts:</h3><pre>/clusterbench, Status: ENABLED Request: 0 <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=DISABLE-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.1&Alias=default-host&Context=/clusterbench">Disable</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=STOP-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.1&Alias=default-host&Context=/clusterbench">Stop</a>
</pre><h3>Aliases:</h3><pre>default-host
localhost
</pre></body></html>
{noformat}
{noformat}
[Thu Oct 13 08:12:04.337611 2016] [:debug] [pid 12216] mod_manager.c(3018): manager_handler CONFIG (/) processing: "JVMRoute=jboss-eap-7.1&Host=192.168.122.88&Maxattempts=1&Port=8009&StickySessionForce=No&Type=ajp&ping=10"
[Thu Oct 13 08:12:04.340202 2016] [:debug] [pid 12216] mod_manager.c(3068): manager_handler CONFIG OK
[Thu Oct 13 08:12:04.342416 2016] [:debug] [pid 12217] mod_manager.c(2302): manager_trans ENABLE-APP (/)
[Thu Oct 13 08:12:04.342499 2016] [authz_core:debug] [pid 12217] mod_authz_core.c(809): [client 192.168.122.88:54652] AH01626: authorization result of Require all granted: granted
[Thu Oct 13 08:12:04.342509 2016] [authz_core:debug] [pid 12217] mod_authz_core.c(809): [client 192.168.122.88:54652] AH01626: authorization result of <RequireAny>: granted
[Thu Oct 13 08:12:04.342576 2016] [:debug] [pid 12217] mod_proxy_cluster.c(1054): update_workers_node starting
[Thu Oct 13 08:12:04.343186 2016] [:debug] [pid 12217] mod_proxy_cluster.c(695): add_balancer_node: Create balancer balancer://qa-balancer
[Thu Oct 13 08:12:04.343243 2016] [:debug] [pid 12217] mod_proxy_cluster.c(293): Created: worker for ajp://192.168.122.88:8009
[Thu Oct 13 08:12:04.343259 2016] [proxy:debug] [pid 12217] proxy_util.c(1779): AH00925: initializing worker ajp://192.168.122.88 shared
[Thu Oct 13 08:12:04.343262 2016] [proxy:debug] [pid 12217] proxy_util.c(1821): AH00927: initializing worker ajp://192.168.122.88 local
[Thu Oct 13 08:12:04.343279 2016] [proxy:debug] [pid 12217] proxy_util.c(1872): AH00931: initialized single connection worker in child 12217 for (192.168.122.88)
[Thu Oct 13 08:12:04.343288 2016] [:debug] [pid 12217] mod_proxy_cluster.c(293): Created: worker for ajp://192.168.122.88:8009
[Thu Oct 13 08:12:04.343290 2016] [proxy:debug] [pid 12217] proxy_util.c(1774): AH00924: worker ajp://192.168.122.88 shared already initialized
[Thu Oct 13 08:12:04.343292 2016] [proxy:debug] [pid 12217] proxy_util.c(1821): AH00927: initializing worker ajp://192.168.122.88 local
[Thu Oct 13 08:12:04.343312 2016] [proxy:debug] [pid 12217] proxy_util.c(1872): AH00931: initialized single connection worker in child 12217 for (192.168.122.88)
[Thu Oct 13 08:12:04.343318 2016] [:debug] [pid 12217] mod_proxy_cluster.c(1066): update_workers_node done
{noformat}
Configuration (mod_cluster.conf)
{noformat}
<IfModule manager_module>
Listen 192.168.122.88:8747
LogLevel debug
<VirtualHost 192.168.122.88:8747>
ServerName localhost.localdomain:8747
<Directory />
Require all granted
</Directory>
KeepAliveTimeout 60
MaxKeepAliveRequests 0
ServerAdvertise on
AdvertiseFrequency 5
ManagerBalancerName QA-bAlAnCeR
AdvertiseGroup 224.0.5.88:23364
AdvertiseBindAddress 192.168.122.88:23364
EnableMCPMReceive
<Location /mcm>
SetHandler mod_cluster-manager
Require all granted
</Location>
</VirtualHost>
</IfModule>
{noformat}
> ManagerBalancerName variable is lowercased
> ------------------------------------------
>
> Key: MODCLUSTER-540
> URL: https://issues.jboss.org/browse/MODCLUSTER-540
> Project: mod_cluster
> Issue Type: Bug
> Affects Versions: 1.3.3.Final
> Reporter: Bogdan Sikora
> Assignee: Michal Karm Babacek
>
> Documentation
> {noformat}
> 3.5.8. ManagerBalancerName
> ManagerBalancerName: That is the name of balancer to use when the JBoss AS/JBossWeb/Tomcat doesn't provide a balancer name.
> Default: mycluster
> {noformat}
> Issue:
> Apache httpd (2.4.23-ER1) does not preserve uppercase letters in ManagerBalancerName and folds the whole name to lowercase. If the worker passes its own balancer name, that name is used correctly, as the documentation suggests, even with uppercase letters.
> Reproduce:
> 1. Set up balancer (httpd) with worker (for example EAP-7, do not set Balancer variable)
> 2. Set ManagerBalancerName to QA-bAlAnCeR in mod_cluster.conf
> 3. Start everything and access mod_cluster status page
> 4. Look for the Balancer variable under your worker; it should keep the configured mixed case (QA-bAlAnCeR) but is shown lowercased.
> Workaround:
> Set the balancer name in each worker; as the documentation says, it overrides the value set in mod_cluster.conf (ManagerBalancerName).
> Verbose:
> Httpd debug node join part
> {noformat}
> 08:13:22.058 [INFO] RESPONSE: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
> <html><head>
> <title>Mod_cluster Status</title>
> </head><body>
> <h1>mod_cluster/1.3.3.Final</h1><a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&refresh=10">Auto Refresh</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=DUMP&Range=ALL">show DUMP output</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=INFO&Range=ALL">show INFO output</a>
> <h1> Node jboss-eap-7.1 (ajp://192.168.122.88:8009): </h1>
> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=ENABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.1">Enable Contexts</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=DISABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.1">Disable Contexts</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=STOP-APP&Range=NODE&JVMRoute=jboss-eap-7.1">Stop Contexts</a><br/>
> Balancer:qa-balancert,LBGroup: ,Flushpackets: Off,Flushwait: 10000,Ping: 10000000,Smax: 2,Ttl: 60000000,Status: OK,Elected: 0,Read: 0,Transferred: 0,Connected: 0,Load: 1
> <h2> Virtual Host 1:</h2><h3>Contexts:</h3><pre>/clusterbench, Status: ENABLED Request: 0 <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=DISABLE-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.1&Alias=default-host&Context=/clusterbench">Disable</a> <a href="/mcm?nonce=2cfb1542-913e-11e6-9142-9fb646370036&Cmd=STOP-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.1&Alias=default-host&Context=/clusterbench">Stop</a>
> </pre><h3>Aliases:</h3><pre>default-host
> localhost
> </pre></body></html>
> {noformat}
> {noformat}
> [Thu Oct 13 08:12:04.337611 2016] [:debug] [pid 12216] mod_manager.c(3018): manager_handler CONFIG (/) processing: "JVMRoute=jboss-eap-7.1&Host=192.168.122.88&Maxattempts=1&Port=8009&StickySessionForce=No&Type=ajp&ping=10"
> [Thu Oct 13 08:12:04.340202 2016] [:debug] [pid 12216] mod_manager.c(3068): manager_handler CONFIG OK
> [Thu Oct 13 08:12:04.342416 2016] [:debug] [pid 12217] mod_manager.c(2302): manager_trans ENABLE-APP (/)
> [Thu Oct 13 08:12:04.342499 2016] [authz_core:debug] [pid 12217] mod_authz_core.c(809): [client 192.168.122.88:54652] AH01626: authorization result of Require all granted: granted
> [Thu Oct 13 08:12:04.342509 2016] [authz_core:debug] [pid 12217] mod_authz_core.c(809): [client 192.168.122.88:54652] AH01626: authorization result of <RequireAny>: granted
> [Thu Oct 13 08:12:04.342576 2016] [:debug] [pid 12217] mod_proxy_cluster.c(1054): update_workers_node starting
> [Thu Oct 13 08:12:04.343186 2016] [:debug] [pid 12217] mod_proxy_cluster.c(695): add_balancer_node: Create balancer balancer://qa-balancer
> [Thu Oct 13 08:12:04.343243 2016] [:debug] [pid 12217] mod_proxy_cluster.c(293): Created: worker for ajp://192.168.122.88:8009
> [Thu Oct 13 08:12:04.343259 2016] [proxy:debug] [pid 12217] proxy_util.c(1779): AH00925: initializing worker ajp://192.168.122.88 shared
> [Thu Oct 13 08:12:04.343262 2016] [proxy:debug] [pid 12217] proxy_util.c(1821): AH00927: initializing worker ajp://192.168.122.88 local
> [Thu Oct 13 08:12:04.343279 2016] [proxy:debug] [pid 12217] proxy_util.c(1872): AH00931: initialized single connection worker in child 12217 for (192.168.122.88)
> [Thu Oct 13 08:12:04.343288 2016] [:debug] [pid 12217] mod_proxy_cluster.c(293): Created: worker for ajp://192.168.122.88:8009
> [Thu Oct 13 08:12:04.343290 2016] [proxy:debug] [pid 12217] proxy_util.c(1774): AH00924: worker ajp://192.168.122.88 shared already initialized
> [Thu Oct 13 08:12:04.343292 2016] [proxy:debug] [pid 12217] proxy_util.c(1821): AH00927: initializing worker ajp://192.168.122.88 local
> [Thu Oct 13 08:12:04.343312 2016] [proxy:debug] [pid 12217] proxy_util.c(1872): AH00931: initialized single connection worker in child 12217 for (192.168.122.88)
> [Thu Oct 13 08:12:04.343318 2016] [:debug] [pid 12217] mod_proxy_cluster.c(1066): update_workers_node done
> {noformat}
> Configuration (mod_cluster.conf)
> {noformat}
> <IfModule manager_module>
> Listen 192.168.122.88:8747
> LogLevel debug
> <VirtualHost 192.168.122.88:8747>
> ServerName localhost.localdomain:8747
> <Directory />
> Require all granted
> </Directory>
> KeepAliveTimeout 60
> MaxKeepAliveRequests 0
> ServerAdvertise on
> AdvertiseFrequency 5
> ManagerBalancerName QA-bAlAnCeR
> AdvertiseGroup 224.0.5.88:23364
> AdvertiseBindAddress 192.168.122.88:23364
> EnableMCPMReceive
> <Location /mcm>
> SetHandler mod_cluster-manager
> Require all granted
> </Location>
> </VirtualHost>
> </IfModule>
> {noformat}
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 4 months