From issues at jboss.org Thu Oct 13 08:49:01 2016
From: Bogdan Sikora (JIRA)
To: mod_cluster-issues at lists.jboss.org
Subject: [mod_cluster-issues] [JBoss JIRA] (MODCLUSTER-541) [Mod_cluster] ManagerBalancerName variable is lowercased
Date: Thu, 13 Oct 2016 08:49:01 -0400

Bogdan Sikora created MODCLUSTER-541:
----------------------------------------

             Summary: [Mod_cluster] ManagerBalancerName variable is lowercased
                 Key: MODCLUSTER-541
                 URL: https://issues.jboss.org/browse/MODCLUSTER-541
             Project: mod_cluster
          Issue Type: Bug
    Affects Versions: 1.3.3.Final
            Reporter: Bogdan Sikora
            Assignee: Michal Karm Babacek

Documentation:
{noformat}
3.5.8. ManagerBalancerName
ManagerBalancerName: That is the name of balancer to use when the JBoss AS/JBossWeb/Tomcat doesn't provide a balancer name.
Default: mycluster
{noformat}

Issue:
Apache httpd (2.4.23-ER1) does not preserve uppercase letters in ManagerBalancerName; it turns the whole name to lowercase. If the worker itself passes a balancer name, that name is used correctly, uppercase letters included, as the documentation suggests.

Reproduce:
1. Set up a balancer (httpd) with a worker (for example EAP 7; do not set the Balancer variable on the worker)
2. Set ManagerBalancerName to QA-bAlAnCeR in mod_cluster.conf
3. Start everything and access the mod_cluster status page
4. Look at the Balancer value under your worker; it should read QA-bAlAnCeR, but it is lowercased to qa-balancer

Workaround:
Set the balancer name on each worker; as the documentation says, it overrides the value set via ManagerBalancerName in mod_cluster.conf.

Verbose: httpd debug output of the node join
{noformat}
08:13:22.058 [INFO] RESPONSE: Mod_cluster Status

mod_cluster/1.3.3.Final

Auto Refresh show DUMP output show INFO output

Node jboss-eap-7.1 (ajp://192.168.122.88:8009):

Enable Contexts Disable Contexts Stop Contexts
Balancer: qa-balancer,LBGroup: ,Flushpackets: Off,Flushwait: 10000,Ping: 10000000,Smax: 2,Ttl: 60000000,Status: OK,Elected: 0,Read: 0,Transferred: 0,Connected: 0,Load: 1

Virtual Host 1:

Contexts:

/clusterbench, Status: ENABLED Request: 0 Disable Stop

Aliases:

default-host
localhost
{noformat}

{noformat}
[Thu Oct 13 08:12:04.337611 2016] [:debug] [pid 12216] mod_manager.c(3018): manager_handler CONFIG (/) processing: "JVMRoute=jboss-eap-7.1&Host=192.168.122.88&Maxattempts=1&Port=8009&StickySessionForce=No&Type=ajp&ping=10"
[Thu Oct 13 08:12:04.340202 2016] [:debug] [pid 12216] mod_manager.c(3068): manager_handler CONFIG OK
[Thu Oct 13 08:12:04.342416 2016] [:debug] [pid 12217] mod_manager.c(2302): manager_trans ENABLE-APP (/)
[Thu Oct 13 08:12:04.342499 2016] [authz_core:debug] [pid 12217] mod_authz_core.c(809): [client 192.168.122.88:54652] AH01626: authorization result of Require all granted: granted
[Thu Oct 13 08:12:04.342509 2016] [authz_core:debug] [pid 12217] mod_authz_core.c(809): [client 192.168.122.88:54652] AH01626: authorization result of : granted
[Thu Oct 13 08:12:04.342576 2016] [:debug] [pid 12217] mod_proxy_cluster.c(1054): update_workers_node starting
[Thu Oct 13 08:12:04.343186 2016] [:debug] [pid 12217] mod_proxy_cluster.c(695): add_balancer_node: Create balancer balancer://qa-balancer
[Thu Oct 13 08:12:04.343243 2016] [:debug] [pid 12217] mod_proxy_cluster.c(293): Created: worker for ajp://192.168.122.88:8009
[Thu Oct 13 08:12:04.343259 2016] [proxy:debug] [pid 12217] proxy_util.c(1779): AH00925: initializing worker ajp://192.168.122.88 shared
[Thu Oct 13 08:12:04.343262 2016] [proxy:debug] [pid 12217] proxy_util.c(1821): AH00927: initializing worker ajp://192.168.122.88 local
[Thu Oct 13 08:12:04.343279 2016] [proxy:debug] [pid 12217] proxy_util.c(1872): AH00931: initialized single connection worker in child 12217 for (192.168.122.88)
[Thu Oct 13 08:12:04.343288 2016] [:debug] [pid 12217] mod_proxy_cluster.c(293): Created: worker for ajp://192.168.122.88:8009
[Thu Oct 13 08:12:04.343290 2016] [proxy:debug] [pid 12217] proxy_util.c(1774): AH00924: worker ajp://192.168.122.88 shared already initialized
[Thu Oct 13 08:12:04.343292 2016] [proxy:debug] [pid 12217] proxy_util.c(1821): AH00927: initializing worker ajp://192.168.122.88 local
[Thu Oct 13 08:12:04.343312 2016] [proxy:debug] [pid 12217] proxy_util.c(1872): AH00931: initialized single connection worker in child 12217 for (192.168.122.88)
[Thu Oct 13 08:12:04.343318 2016] [:debug] [pid 12217] mod_proxy_cluster.c(1066): update_workers_node done
{noformat}

Configuration (mod_cluster.conf):
{noformat}
Listen 192.168.122.88:8747
LogLevel debug
ServerName localhost.localdomain:8747
Require all granted
KeepAliveTimeout 60
MaxKeepAliveRequests 0
ServerAdvertise on
AdvertiseFrequency 5
ManagerBalancerName QA-bAlAnCeR
AdvertiseGroup 224.0.5.88:23364
AdvertiseBindAddress 192.168.122.88:23364
EnableMCPMReceive
SetHandler mod_cluster-manager
Require all granted
{noformat}

--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
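For reference, the workaround above (setting the balancer name on the worker side) can be sketched for an EAP 7 worker via the modcluster subsystem's balancer attribute in standalone-ha.xml. This is an illustrative fragment, not taken from the reporter's setup; the subsystem xmlns version and the other attributes shown may differ between EAP releases:

```xml
<!-- Hypothetical standalone-ha.xml fragment: the balancer attribute on
     mod-cluster-config is sent to httpd with the worker's CONFIG message
     and overrides ManagerBalancerName, so the mixed case is preserved. -->
<subsystem xmlns="urn:jboss:domain:modcluster:2.0">
    <mod-cluster-config advertise-socket="modcluster"
                        balancer="QA-bAlAnCeR"
                        connector="ajp">
        <dynamic-load-provider>
            <load-metric type="busyness"/>
        </dynamic-load-provider>
    </mod-cluster-config>
</subsystem>
```

The same attribute can presumably be set through the management CLI, along the lines of `/subsystem=modcluster/mod-cluster-config=configuration:write-attribute(name=balancer, value=QA-bAlAnCeR)` followed by a reload; the exact resource path depends on the EAP version.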