[JBoss JIRA] (MODCLUSTER-341) REMOVE-APP with 2 Aliases removes only the first one
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-341?page=com.atlassian.jira.pl... ]
Radoslav Husar closed MODCLUSTER-341.
-------------------------------------
Fix Version/s: 1.2.4.Final
Resolution: Out of Date
Never mind! It is fixed in 1.2.4.Final :-)
I wonder if we could provide a community download...
> REMOVE-APP with 2 Aliases removes only the first one
> ----------------------------------------------------
>
> Key: MODCLUSTER-341
> URL: https://issues.jboss.org/browse/MODCLUSTER-341
> Project: mod_cluster
> Issue Type: Feature Request
> Affects Versions: 1.2.0.Final
> Reporter: Radoslav Husar
> Assignee: Jean-Frederic Clere
> Fix For: 1.2.4.Final
>
>
> Seems like only 1 Alias is removed if the application has been deployed with 2 different Aliases.
> This results in virtual host not being removed.
> Uncovered when testing with Undertow, since it has default-host and localhost defined in the stock config.
> {noformat}
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(1667): manager_trans INFO (/)
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2323): manager_handler INFO (/) processing: ""
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2366): manager_handler INFO OK
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(1667): manager_trans CONFIG (/)
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2323): manager_handler CONFIG (/) processing: "JVMRoute=5d08300e-37a2-390c-8197-142a84543d60&Host=127.0.0.1&Maxattempts=1&Port=8009&Reversed=true&StickySessionForce=No&Type=ajp&ping=10"
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2366): manager_handler CONFIG OK
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(1667): manager_trans ENABLE-APP (/)
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2323): manager_handler ENABLE-APP (/) processing: "JVMRoute=5d08300e-37a2-390c-8197-142a84543d60&Alias=default-host%2Clocalhost&Context=%2Fclusterbench-ee6-web"
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2366): manager_handler ENABLE-APP OK
> [Wed May 22 13:58:19 2013] [debug] mod_manager.c(1667): manager_trans REMOVE-APP (/)
> [Wed May 22 13:58:19 2013] [debug] mod_manager.c(2323): manager_handler REMOVE-APP (/) processing: "JVMRoute=5d08300e-37a2-390c-8197-142a84543d60&Alias=default-host%2Clocalhost&Context=%2Fclusterbench-ee6-web"
> [Wed May 22 13:58:19 2013] [debug] mod_manager.c(2366): manager_handler REMOVE-APP OK
> {noformat}
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (MODCLUSTER-339) "proxy: DNS lookup failure" with IPv6 on Solaris
by Michal Babacek (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-339?page=com.atlassian.jira.pl... ]
Michal Babacek commented on MODCLUSTER-339:
-------------------------------------------
h3. Thinking aloud
I do not understand why we should put the zone there at all. What should httpd, as a server, do with it?
I tried to look up some httpd tests with IPv6, and found only this one, which does not use a zone id:
[httpd-2.2.23/srclib/apr/test/testsock.c:314|https://gist.github.com/Karm/5642351#file-testsock-c-L314]
Furthermore, I examined the functions in {{httpd-2.2.23/srclib/apr/network_io/unix/sockaddr.c}} leading to the {{getaddrinfo(hostname, servname, &hints, &ai_list);}} call.
The Solaris POSIX mumbo-jumbo reveals a nice doc for [getaddrinfo()|http://docs.oracle.com/cd/E23823_01/html/816-5170/getaddrin...]:
{quote}
The {{nodename}} can also be an IPv6 zone-id in the form:
{code}
<address>%<zone-id>
{code}
The address is the literal IPv6 link-local address or host name of the destination. The zone-id is the interface ID of the IPv6 link used to send the packet. The zone-id can either be a numeric value, indicating a literal zone value, or an interface name such as hme0.
{quote}
OK, so we should be able to put %num there. Still, why should httpd be interested in the worker's interface zone id? It is not going to bind to it...
I guess there is even room for a nasty error: given that the zone id takes priority over the actual address, httpd might try to use a specific interface just because it was handed an unnecessary zone id... Dunno :-(
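For illustration, the {{<address>%<zone-id>}} form quoted above splits cleanly on the first {{%}} (a hypothetical Python helper, nothing from the actual sources):

```python
def split_zone(nodename):
    """Split an IPv6 nodename of the form <address>%<zone-id>.

    Returns (address, zone_id); zone_id is None when no zone is present.
    """
    address, sep, zone = nodename.partition("%")
    return (address, zone if sep else None)

print(split_zone("2620:52:0:105f:0:0:ffff:50%2"))  # numeric zone, as in the failing log
print(split_zone("fe80::1%hme0"))                  # the zone may also be an interface name
print(split_zone("2620:52:0:105f::ffff:60"))       # no zone at all
```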
h3. Toss % out
How about stripping the %num from the CONFIG message on the native side? As I stated above, it's IMHO useless there anyhow.
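What that stripping might look like, sketched in Python for brevity (a hypothetical helper; the real change would of course live in the native C side):

```python
def strip_zone(host):
    """Drop a %zone-id suffix from an IPv6 host literal, keeping any brackets."""
    bracketed = host.startswith("[") and host.endswith("]")
    inner = host[1:-1] if bracketed else host
    inner = inner.split("%", 1)[0]  # discard the zone id, if any
    return "[" + inner + "]" if bracketed else inner

print(strip_zone("[2620:52:0:102f:221:5eff:fe96:8180%666]"))  # [2620:52:0:102f:221:5eff:fe96:8180]
print(strip_zone("2620:52:0:105f:0:0:ffff:50%2"))             # 2620:52:0:105f:0:0:ffff:50
```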
{code:title=RHEL with zone %666|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
[Fri May 24 06:44:25 2013] [debug] mod_proxy_cluster.c(655): add_balancer_node: Create balancer balancer://qacluster
[Fri May 24 06:44:25 2013] [debug] mod_proxy_cluster.c(426): Created: worker for ajp://[2620:52:0:102f:221:5eff:fe96:8180%666]:8009
[Fri May 24 06:44:25 2013] [debug] mod_proxy_cluster.c(549): proxy: initialized single connection worker 1 in child 10070 for (2620:52:0:102f:221:5eff:fe96:8180%666)
[Fri May 24 06:44:25 2013] [debug] mod_proxy_cluster.c(601): Created: worker for ajp://[2620:52:0:102f:221:5eff:fe96:8180%666]:8009 1 (status): 129
[Fri May 24 06:44:25 2013] [debug] mod_proxy_cluster.c(1025): update_workers_node done
[Fri May 24 06:44:25 2013] [debug] mod_proxy_cluster.c(1010): update_workers_node starting
[Fri May 24 06:44:25 2013] [debug] mod_proxy_cluster.c(1025): update_workers_node done
{code}
OK, so RHEL can handle it, but Solaris can't. On the other hand:
{code:title=RHEL without any zone in the message|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
[Fri May 24 06:37:47 2013] [debug] mod_proxy_cluster.c(426): Created: worker for ajp://[2620:52:0:102f:221:5eff:fe96:8180]:8009
[Fri May 24 06:37:47 2013] [debug] mod_proxy_cluster.c(549): proxy: initialized single connection worker 1 in child 9967 for (2620:52:0:102f:221:5eff:fe96:8180)
[Fri May 24 06:37:47 2013] [debug] mod_proxy_cluster.c(601): Created: worker for ajp://[2620:52:0:102f:221:5eff:fe96:8180]:8009 1 (status): 129
{code}
Omitting the zone from the CONFIG message seems to do no harm.
Solaris is up and running :-)
{code:title=SOLARIS without any zone in the message|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
[Fri May 24 08:25:15 2013] [debug] mod_manager.c(1923): manager_trans CONFIG (/)
[Fri May 24 08:25:15 2013] [debug] mod_manager.c(2598): manager_handler CONFIG (/) processing: "JVMRoute=FakeNode&Host=%5B2620%3A52%3A0%3A105f%3A%3Affff%3A60%5D&Maxattempts=1&Port=8009&Type=ajp&ping=100\r\n"
[Fri May 24 08:25:15 2013] [debug] mod_manager.c(2647): manager_handler CONFIG OK
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(1010): update_workers_node starting
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(655): add_balancer_node: Create balancer balancer://qacluster
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(1010): update_workers_node starting
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(655): add_balancer_node: Create balancer balancer://qacluster
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(426): Created: worker for ajp://[2620:52:0:105f::ffff:60]:8009
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(532): proxy: initialized worker 1 in child 19207 for (2620:52:0:105f::ffff:60) min=0 max=25 smax=25
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(601): Created: worker for ajp://[2620:52:0:105f::ffff:60]:8009 1 (status): 1
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(1025): update_workers_node done
[Fri May 24 08:25:15 2013] [debug] proxy_util.c(2011): proxy: ajp: has acquired connection for (2620:52:0:105f::ffff:60)
[Fri May 24 08:25:15 2013] [debug] proxy_util.c(2067): proxy: connecting ajp://[2620:52:0:105f::ffff:60]:8009/ to 2620:52:0:105f::ffff:60:8009
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(426): Created: worker for ajp://[2620:52:0:105f::ffff:60]:8009
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(532): proxy: initialized worker 1 in child 19208 for (2620:52:0:105f::ffff:60) min=0 max=25 smax=25
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(601): Created: worker for ajp://[2620:52:0:105f::ffff:60]:8009 1 (status): 1
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(1025): update_workers_node done
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(1010): update_workers_node starting
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(1025): update_workers_node done
[Fri May 24 08:25:15 2013] [debug] proxy_util.c(2193): proxy: connected / to 2620:52:0:105f::ffff:60:8009
[Fri May 24 08:25:15 2013] [debug] proxy_util.c(2444): proxy: ajp: fam 26 socket created to connect to 2620:52:0:105f::ffff:60
{code}
Without *%something* in the Host attribute of the CONFIG message, there is no nasty *DNS lookup failure* and everything seems to be cool (not yet thoroughly tested, though).
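Decoding the Host attribute from the CONFIG body in the log above (standard library only) confirms that the worker announced a plain bracketed literal with no zone:

```python
from urllib.parse import parse_qs

# The CONFIG body exactly as it appears in the Solaris log above.
body = ("JVMRoute=FakeNode&Host=%5B2620%3A52%3A0%3A105f%3A%3Affff%3A60%5D"
        "&Maxattempts=1&Port=8009&Type=ajp&ping=100")

print(parse_qs(body)["Host"][0])  # [2620:52:0:105f::ffff:60]
```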
The aforementioned log was produced with this fake message:
{code}
{ echo "CONFIG / HTTP/1.0"; echo "Content-length: 108"; echo ""; echo "JVMRoute=FakeNode&Host=%5B2620%3A52%3A0%3A105f%3A%3Affff%3A60%5D&Maxattempts=1&Port=8009&Type=ajp&ping=100"; sleep 1; } | telnet 2620:52:0:105f::ffff:60 6666
{code}
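The same fake CONFIG can be assembled from Python with the Content-Length computed rather than hard-coded (a sketch; the address and MCPM port are the test values from above):

```python
import socket

# CONFIG body exactly as in the telnet one-liner above.
BODY = ("JVMRoute=FakeNode&Host=%5B2620%3A52%3A0%3A105f%3A%3Affff%3A60%5D"
        "&Maxattempts=1&Port=8009&Type=ajp&ping=100")

def build_config(body):
    """Assemble a raw MCMP CONFIG request with a computed Content-Length."""
    return ("CONFIG / HTTP/1.0\r\n"
            "Content-Length: %d\r\n"
            "\r\n%s" % (len(body), body)).encode("ascii")

def send_config(host, port, body=BODY):
    """Fire the request at an MCMP listener and return the raw reply."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(build_config(body))
        return sock.recv(4096)

print(build_config(BODY).decode("ascii"))
```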
What do you think about it?
> "proxy: DNS lookup failure" with IPv6 on Solaris
> ------------------------------------------------
>
> Key: MODCLUSTER-339
> URL: https://issues.jboss.org/browse/MODCLUSTER-339
> Project: mod_cluster
> Issue Type: Bug
> Affects Versions: 1.2.3.Final, 1.2.4.Final
> Environment: Solaris 10 x86, Solaris 11 x86, Solaris 11 SPARC
> Reporter: Michal Babacek
> Assignee: Jean-Frederic Clere
> Priority: Critical
> Labels: ipv6
> Fix For: 1.2.5.Final
>
> Attachments: access_log-mod_cluster, error_log-mod_cluster, error_log-mod_cluster-RHEL, error_log-proxypass, http.conf
>
>
> h2. Failure with mod_cluster
> Having the following setting:
> {code:title=mod_cluster.conf|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
> LoadModule slotmem_module modules/mod_slotmem.so
> LoadModule manager_module modules/mod_manager.so
> LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
> LoadModule advertise_module modules/mod_advertise.so
> MemManagerFile "/tmp/mod_cluster-eap6/jboss-ews-2.0/var/cache/mod_cluster"
> ServerName [2620:52:0:105f::ffff:50]:2080
> <IfModule manager_module>
> Listen [2620:52:0:105f::ffff:50]:6666
> LogLevel debug
> <VirtualHost [2620:52:0:105f::ffff:50]:6666>
> ServerName [2620:52:0:105f::ffff:50]:6666
> <Directory />
> Order deny,allow
> Deny from all
> Allow from all
> </Directory>
> KeepAliveTimeout 60
> MaxKeepAliveRequests 0
> ServerAdvertise on
> AdvertiseFrequency 5
> ManagerBalancerName qacluster
> AdvertiseGroup [ff01::7]:23964
> EnableMCPMReceive
> <Location /mcm>
> SetHandler mod_cluster-manager
> Order deny,allow
> Deny from all
> Allow from all
> </Location>
> </VirtualHost>
> </IfModule>
> {code}
> I get a weird {{proxy: DNS lookup failure}} as soon as the worker sends {{CONFIG}}:
> {code:title=access_log|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
> 2620:52:0:105f::ffff:50 - - [16/May/2013:08:37:24 -0400] "INFO / HTTP/1.1" 200 - "-" "ClusterListener/1.0"
> 2620:52:0:105f::ffff:50 - - [16/May/2013:08:37:24 -0400] "CONFIG / HTTP/1.1" 200 - "-" "ClusterListener/1.0"
> 2620:52:0:105f::ffff:50 - - [16/May/2013:08:37:24 -0400] "STATUS / HTTP/1.1" 200 64 "-" "ClusterListener/1.0"
> ...
> {code}
> {code:title=error_log|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
> ...
> [Thu May 16 08:37:24 2013] [debug] mod_manager.c(1923): manager_trans INFO (/)
> [Thu May 16 08:37:24 2013] [debug] mod_manager.c(2598): manager_handler INFO (/) processing: ""
> [Thu May 16 08:37:24 2013] [debug] mod_manager.c(2647): manager_handler INFO OK
> [Thu May 16 08:37:24 2013] [debug] mod_manager.c(1923): manager_trans CONFIG (/)
> [Thu May 16 08:37:24 2013] [debug] mod_manager.c(2598): manager_handler CONFIG (/) processing: "JVMRoute=jboss-eap-6.1&Host=%5B2620%3A52%3A0%3A105f%3A0%3A0%3Affff%3A50%252%5D&Maxattempts=1&Port=8009&StickySessionForce=No&Type=ajp&ping=10"
> [Thu May 16 08:37:24 2013] [debug] mod_manager.c(2647): manager_handler CONFIG OK
> [Thu May 16 08:37:24 2013] [debug] mod_manager.c(1923): manager_trans STATUS (/)
> [Thu May 16 08:37:24 2013] [debug] mod_manager.c(2598): manager_handler STATUS (/) processing: "JVMRoute=jboss-eap-6.1&Load=100"
> [Thu May 16 08:37:24 2013] [debug] mod_manager.c(1638): Processing STATUS
> [Thu May 16 08:37:24 2013] [debug] mod_proxy_cluster.c(655): add_balancer_node: Create balancer balancer://qacluster
> [Thu May 16 08:37:24 2013] [debug] mod_proxy_cluster.c(426): Created: worker for ajp://[2620:52:0:105f:0:0:ffff:50%2]:8009
> [Thu May 16 08:37:24 2013] [debug] mod_proxy_cluster.c(532): proxy: initialized worker 1 in child 16847 for (2620:52:0:105f:0:0:ffff:50%2) min=0 max=25 smax=25
> [Thu May 16 08:37:24 2013] [debug] mod_proxy_cluster.c(601): Created: worker for ajp://[2620:52:0:105f:0:0:ffff:50%2]:8009 1 (status): 1
> [Thu May 16 08:37:24 2013] [debug] proxy_util.c(2011): proxy: ajp: has acquired connection for (2620:52:0:105f:0:0:ffff:50%2)
> [Thu May 16 08:37:24 2013] [debug] proxy_util.c(2067): proxy: connecting ajp://[2620:52:0:105f:0:0:ffff:50%2]:8009/ to 2620:52:0:105f:0:0:ffff:50%2:8009
> [Thu May 16 08:37:24 2013] [error] [client 2620:52:0:105f::ffff:50] proxy: DNS lookup failure for: 2620:52:0:105f:0:0:ffff:50%2 returned by /
> [Thu May 16 08:37:24 2013] [debug] proxy_util.c(2029): proxy: ajp: has released connection for (2620:52:0:105f:0:0:ffff:50%2)
> ...
> {code}
> An attempt to access the mod_cluster manager console yields an unpleasant {{NOTOK}} (obviously, all workers are in an error state...):
> {code}
> curl -g [2620:52:0:105f::ffff:50]:6666/mcm
> ...
> <h1> Node jboss-eap-6.1 (ajp://[2620:52:0:105f:0:0:ffff:50%2]:8009): </h1>
> <a href="/mcm?nonce=1bddeeea-f9d5-eeea-ee5b-e3a5e3c0a965&Cmd=ENABLE-APP&Range=NODE&JVMRoute=jboss-eap-6.1">Enable Contexts</a> <a href="/mcm?nonce=1bddeeea-f9d5-eeea-ee5b-e3a5e3c0a965&Cmd=DISABLE-APP&Range=NODE&JVMRoute=jboss-eap-6.1">Disable Contexts</a><br/>
> Balancer: qacluster,LBGroup: ,Flushpackets: Off,Flushwait: 10000,Ping: 10000000,Smax: 26,Ttl: 60000000,Status: NOTOK,Elected: 0,Read: 0,Transferred: 0,Connected: 0,Load: -1
> {code}
> You might take a look at the logs: [^error_log-mod_cluster], [^access_log-mod_cluster].
> h2. ProxyPass itself works just fine
> On the other hand, when I tried this:
> {code:title=proxypass.conf|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
> LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
> ServerName [2620:52:0:105f::ffff:50]:2080
> Listen [2620:52:0:105f::ffff:50]:6666
> LogLevel debug
> <VirtualHost [2620:52:0:105f::ffff:50]:6666>
> ServerName [2620:52:0:105f::ffff:50]:6666
> ProxyPass / ajp://[2620:52:0:105f:0:0:ffff:50]:8009/
> ProxyPassReverse / ajp://[2620:52:0:105f:0:0:ffff:50]:8009/
> <Directory />
> Order deny,allow
> Deny from all
> Allow from all
> </Directory>
> </VirtualHost>
> {code}
> I received:
> {code:title=error_log|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
> ...
> [Thu May 16 08:29:00 2013] [debug] mod_proxy_ajp.c(45): proxy: AJP: canonicalising URL //[2620:52:0:105f:0:0:ffff:50]:8009/
> [Thu May 16 08:29:00 2013] [debug] proxy_util.c(1506): [client 2620:52:0:105f::ffff:50] proxy: ajp: found worker ajp://[2620:52:0:105f:0:0:ffff:50]:8009/ for ajp://[2620:52:0:105f:0:0:ffff:50]:8009/
> [Thu May 16 08:29:00 2013] [debug] mod_proxy.c(1020): Running scheme ajp handler (attempt 0)
> [Thu May 16 08:29:00 2013] [debug] mod_proxy_http.c(1963): proxy: HTTP: declining URL ajp://[2620:52:0:105f:0:0:ffff:50]:8009/
> [Thu May 16 08:29:00 2013] [debug] mod_proxy_ajp.c(681): proxy: AJP: serving URL ajp://[2620:52:0:105f:0:0:ffff:50]:8009/
> [Thu May 16 08:29:00 2013] [debug] proxy_util.c(2011): proxy: AJP: has acquired connection for (2620:52:0:105f:0:0:ffff:50)
> [Thu May 16 08:29:00 2013] [debug] proxy_util.c(2067): proxy: connecting ajp://[2620:52:0:105f:0:0:ffff:50]:8009/ to 2620:52:0:105f:0:0:ffff:50:8009
> [Thu May 16 08:29:00 2013] [debug] proxy_util.c(2193): proxy: connected / to 2620:52:0:105f:0:0:ffff:50:8009
> [Thu May 16 08:29:00 2013] [debug] proxy_util.c(2444): proxy: AJP: fam 26 socket created to connect to 2620:52:0:105f:0:0:ffff:50
> ...
> {code}
> And from the client's side, it worked:
> {{curl -g [2620:52:0:105f::ffff:50]:6666/}} brought me the worker's home page (EAP's welcome page in this case).
> Take a look at the whole log [^error_log-proxypass].
> *Note:* The [^http.conf] was the same both for mod_cluster and for proxypass test.
[JBoss JIRA] (MODCLUSTER-341) REMOVE-APP with 2 Aliases removes only the first one
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-341?page=com.atlassian.jira.pl... ]
Radoslav Husar commented on MODCLUSTER-341:
-------------------------------------------
Right, but it's the latest on the community website :-(
http://www.jboss.org/mod_cluster/downloads/1-2-0-Final
I will update it and try...
Speaking of which, I bumped the version in the natives in master:
https://github.com/modcluster/mod_cluster/pull/19
> REMOVE-APP with 2 Aliases removes only the first one
> ----------------------------------------------------
>
> Key: MODCLUSTER-341
> URL: https://issues.jboss.org/browse/MODCLUSTER-341
> Project: mod_cluster
> Issue Type: Feature Request
> Affects Versions: 1.2.0.Final
> Reporter: Radoslav Husar
> Assignee: Jean-Frederic Clere
>
> Seems like only 1 Alias is removed if the application has been deployed with 2 different Aliases.
> This results in virtual host not being removed.
> Uncovered when testing with Undertow, since it has default-host and localhost defined in the stock config.
> {noformat}
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(1667): manager_trans INFO (/)
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2323): manager_handler INFO (/) processing: ""
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2366): manager_handler INFO OK
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(1667): manager_trans CONFIG (/)
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2323): manager_handler CONFIG (/) processing: "JVMRoute=5d08300e-37a2-390c-8197-142a84543d60&Host=127.0.0.1&Maxattempts=1&Port=8009&Reversed=true&StickySessionForce=No&Type=ajp&ping=10"
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2366): manager_handler CONFIG OK
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(1667): manager_trans ENABLE-APP (/)
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2323): manager_handler ENABLE-APP (/) processing: "JVMRoute=5d08300e-37a2-390c-8197-142a84543d60&Alias=default-host%2Clocalhost&Context=%2Fclusterbench-ee6-web"
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2366): manager_handler ENABLE-APP OK
> [Wed May 22 13:58:19 2013] [debug] mod_manager.c(1667): manager_trans REMOVE-APP (/)
> [Wed May 22 13:58:19 2013] [debug] mod_manager.c(2323): manager_handler REMOVE-APP (/) processing: "JVMRoute=5d08300e-37a2-390c-8197-142a84543d60&Alias=default-host%2Clocalhost&Context=%2Fclusterbench-ee6-web"
> [Wed May 22 13:58:19 2013] [debug] mod_manager.c(2366): manager_handler REMOVE-APP OK
> {noformat}
[JBoss JIRA] (MODCLUSTER-342) Cannot start apache
by Jose Giner (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-342?page=com.atlassian.jira.pl... ]
Jose Giner commented on MODCLUSTER-342:
---------------------------------------
Hi,
We modified http.conf like this:
...
#LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule slotmem_module modules/mod_slotmem.so
LoadModule manager_module modules/mod_manager.so
LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
LoadModule advertise_module modules/mod_advertise.so
...
#Virtual hosts
#Include conf/extra/httpd-vhosts.conf
Listen 192.168.34.19:10001
MemManagerFile /var/cache/httpd
<VirtualHost 192.168.34.19:10001>
<Directory />
Order deny,allow
Allow from all
</Directory>
KeepAliveTimeout 60
MaxKeepAliveRequests 0
ManagerBalancerName other-server-group
AdvertiseFrequency 5
#This directive allows you to view mod_cluster status at URL http://192.168.34.19:10001/mod_cluster-manager
<Location /mod_cluster-manager>
SetHandler mod_cluster-manager
Order deny,allow
Allow from all
</Location>
</VirtualHost>
Regards,
> Cannot start apache
> -------------------
>
> Key: MODCLUSTER-342
> URL: https://issues.jboss.org/browse/MODCLUSTER-342
> Project: mod_cluster
> Issue Type: Feature Request
> Affects Versions: 1.1.3.Final
> Environment: LPAR on AIX 7 (httpd 2.2.14)
> Reporter: Jose Giner
> Assignee: Jean-Frederic Clere
>
> Hi,
> We cannot start apache.
> The error_log shows:
> [Fri May 24 12:30:23 2013] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
> [Fri May 24 12:30:23 2013] [notice] Digest: generating secret for digest authentication ...
> [Fri May 24 12:30:23 2013] [notice] Digest: done
> [Fri May 24 12:30:24 2013] [emerg] create_mem_host failed
> Configuration Failed
> Regards,
[JBoss JIRA] (MODCLUSTER-342) Cannot start apache
by Jose Giner (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-342?page=com.atlassian.jira.pl... ]
Jose Giner commented on MODCLUSTER-342:
---------------------------------------
Hi,
We modified http.conf like this:
#LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule slotmem_module modules/mod_slotmem.so
LoadModule manager_module modules/mod_manager.so
LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
LoadModule advertise_module modules/mod_advertise.so
# Virtual hosts
#Include conf/extra/httpd-vhosts.conf
Listen 192.168.34.19:10001
MemManagerFile /var/cache/httpd
<VirtualHost 192.168.34.19:10001>
<Directory />
Order deny,allow
Allow from all
</Directory>
KeepAliveTimeout 60
MaxKeepAliveRequests 0
ManagerBalancerName other-server-group
AdvertiseFrequency 5
#This directive allows you to view mod_cluster status at URL http://192.168.34.19:10001/mod_cluster-manager
<Location /mod_cluster-manager>
SetHandler mod_cluster-manager
Order deny,allow
Allow from all
</Location>
</VirtualHost>
root(/var/cache)# ls -l
total 0
drwx------ 2 root system 256 May 24 13:39 httpd
-rw-r--r-- 1 root system 0 May 24 13:39 httpd.context.contexts.lock
-rw-r--r-- 1 root system 0 May 24 13:39 httpd.host.hosts
-rw-r--r-- 1 root system 0 May 24 13:39 httpd.host.hosts.lock
-rw-r--r-- 1 root system 0 May 24 13:39 httpd.node.nodes.lock
root(/usr/IBMAHS/modules)#
-rwxr-xr-x 1 root system 52400 May 02 16:24 mod_slotmem.so
-rwxr-xr-x 1 root system 294788 May 02 16:27 mod_proxy.so.original
-rwxr-xr-x 1 root system 156953 May 02 16:27 mod_proxy_ajp.so.original
-rwxr-xr-x 1 root system 151231 May 02 16:28 mod_proxy_ajp.so
-rwxr-xr-x 1 root system 229349 May 03 08:42 mod_manager.so
-rwxr-xr-x 1 root system 136392 May 03 08:45 mod_proxy_cluster.so
-rwxr-xr-x 1 root system 61036 May 08 09:39 mod_advertise.so
-rwxr-xr-x 1 root system 304323 May 24 11:53 mod_proxy.so
Regards,
> Cannot start apache
> -------------------
>
> Key: MODCLUSTER-342
> URL: https://issues.jboss.org/browse/MODCLUSTER-342
> Project: mod_cluster
> Issue Type: Feature Request
> Affects Versions: 1.1.3.Final
> Environment: LPAR on AIX 7 (httpd 2.2.14)
> Reporter: Jose Giner
> Assignee: Jean-Frederic Clere
>
> Hi,
> We cannot start apache.
> The error_log shows:
> [Fri May 24 12:30:23 2013] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
> [Fri May 24 12:30:23 2013] [notice] Digest: generating secret for digest authentication ...
> [Fri May 24 12:30:23 2013] [notice] Digest: done
> [Fri May 24 12:30:24 2013] [emerg] create_mem_host failed
> Configuration Failed
> Regards,
[JBoss JIRA] (MODCLUSTER-342) Cannot start apache
by Jose Giner (JIRA)
Jose Giner created MODCLUSTER-342:
-------------------------------------
Summary: Cannot start apache
Key: MODCLUSTER-342
URL: https://issues.jboss.org/browse/MODCLUSTER-342
Project: mod_cluster
Issue Type: Feature Request
Affects Versions: 1.1.3.Final
Environment: LPAR on AIX 7 (httpd 2.2.14)
Reporter: Jose Giner
Assignee: Jean-Frederic Clere
Hi,
We cannot start apache.
The error_log shows:
[Fri May 24 12:30:23 2013] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
[Fri May 24 12:30:23 2013] [notice] Digest: generating secret for digest authentication ...
[Fri May 24 12:30:23 2013] [notice] Digest: done
[Fri May 24 12:30:24 2013] [emerg] create_mem_host failed
Configuration Failed
Regards,
[JBoss JIRA] (MODCLUSTER-341) REMOVE-APP with 2 Aliases removes only the first one
by Jean-Frederic Clere (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-341?page=com.atlassian.jira.pl... ]
Jean-Frederic Clere edited comment on MODCLUSTER-341 at 5/24/13 2:43 AM:
-------------------------------------------------------------------------
mod_cluster 1.2.0.Final is an old version; could you try with a newer one?
It looks OK for me (jboss-as-7.2.x and mod_cluster-1.2.4.Final).
was (Author: jfclere):
mod_cluster/1.2.0.Final that is an old version could you try with a newer one.
> REMOVE-APP with 2 Aliases removes only the first one
> ----------------------------------------------------
>
> Key: MODCLUSTER-341
> URL: https://issues.jboss.org/browse/MODCLUSTER-341
> Project: mod_cluster
> Issue Type: Feature Request
> Affects Versions: 1.2.0.Final
> Reporter: Radoslav Husar
> Assignee: Jean-Frederic Clere
>
> Seems like only 1 Alias is removed if the application has been deployed with 2 different Aliases.
> This results in virtual host not being removed.
> Uncovered when testing with Undertow, since it has default-host and localhost defined in the stock config.
> {noformat}
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(1667): manager_trans INFO (/)
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2323): manager_handler INFO (/) processing: ""
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2366): manager_handler INFO OK
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(1667): manager_trans CONFIG (/)
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2323): manager_handler CONFIG (/) processing: "JVMRoute=5d08300e-37a2-390c-8197-142a84543d60&Host=127.0.0.1&Maxattempts=1&Port=8009&Reversed=true&StickySessionForce=No&Type=ajp&ping=10"
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2366): manager_handler CONFIG OK
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(1667): manager_trans ENABLE-APP (/)
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2323): manager_handler ENABLE-APP (/) processing: "JVMRoute=5d08300e-37a2-390c-8197-142a84543d60&Alias=default-host%2Clocalhost&Context=%2Fclusterbench-ee6-web"
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2366): manager_handler ENABLE-APP OK
> [Wed May 22 13:58:19 2013] [debug] mod_manager.c(1667): manager_trans REMOVE-APP (/)
> [Wed May 22 13:58:19 2013] [debug] mod_manager.c(2323): manager_handler REMOVE-APP (/) processing: "JVMRoute=5d08300e-37a2-390c-8197-142a84543d60&Alias=default-host%2Clocalhost&Context=%2Fclusterbench-ee6-web"
> [Wed May 22 13:58:19 2013] [debug] mod_manager.c(2366): manager_handler REMOVE-APP OK
> {noformat}
[JBoss JIRA] (MODCLUSTER-341) REMOVE-APP with 2 Aliases removes only the first one
by Jean-Frederic Clere (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-341?page=com.atlassian.jira.pl... ]
Jean-Frederic Clere commented on MODCLUSTER-341:
------------------------------------------------
mod_cluster 1.2.0.Final is an old version; could you try with a newer one?
> REMOVE-APP with 2 Aliases removes only the first one
> ----------------------------------------------------
>
> Key: MODCLUSTER-341
> URL: https://issues.jboss.org/browse/MODCLUSTER-341
> Project: mod_cluster
> Issue Type: Feature Request
> Affects Versions: 1.2.0.Final
> Reporter: Radoslav Husar
> Assignee: Jean-Frederic Clere
>
> Seems like only 1 Alias is removed if the application has been deployed with 2 different Aliases.
> This results in virtual host not being removed.
> Uncovered when testing with Undertow, since it has default-host and localhost defined in the stock config.
> {noformat}
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(1667): manager_trans INFO (/)
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2323): manager_handler INFO (/) processing: ""
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2366): manager_handler INFO OK
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(1667): manager_trans CONFIG (/)
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2323): manager_handler CONFIG (/) processing: "JVMRoute=5d08300e-37a2-390c-8197-142a84543d60&Host=127.0.0.1&Maxattempts=1&Port=8009&Reversed=true&StickySessionForce=No&Type=ajp&ping=10"
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2366): manager_handler CONFIG OK
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(1667): manager_trans ENABLE-APP (/)
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2323): manager_handler ENABLE-APP (/) processing: "JVMRoute=5d08300e-37a2-390c-8197-142a84543d60&Alias=default-host%2Clocalhost&Context=%2Fclusterbench-ee6-web"
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2366): manager_handler ENABLE-APP OK
> [Wed May 22 13:58:19 2013] [debug] mod_manager.c(1667): manager_trans REMOVE-APP (/)
> [Wed May 22 13:58:19 2013] [debug] mod_manager.c(2323): manager_handler REMOVE-APP (/) processing: "JVMRoute=5d08300e-37a2-390c-8197-142a84543d60&Alias=default-host%2Clocalhost&Context=%2Fclusterbench-ee6-web"
> [Wed May 22 13:58:19 2013] [debug] mod_manager.c(2366): manager_handler REMOVE-APP OK
> {noformat}
[JBoss JIRA] (MODCLUSTER-341) REMOVE-APP with 2 Aliases removes only the first one
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-341?page=com.atlassian.jira.pl... ]
Radoslav Husar commented on MODCLUSTER-341:
-------------------------------------------
Not noticeable on shutdown, because "REMOVE-APP (/*)" is called...
> REMOVE-APP with 2 Aliases removes only the first one
> ----------------------------------------------------
>
> Key: MODCLUSTER-341
> URL: https://issues.jboss.org/browse/MODCLUSTER-341
> Project: mod_cluster
> Issue Type: Feature Request
> Affects Versions: 1.2.0.Final
> Reporter: Radoslav Husar
> Assignee: Jean-Frederic Clere
>
> Seems like only 1 Alias is removed if the application has been deployed with 2 different Aliases.
> This results in virtual host not being removed.
> Uncovered when testing with Undertow, since it has default-host and localhost defined in the stock config.
> {noformat}
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(1667): manager_trans INFO (/)
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2323): manager_handler INFO (/) processing: ""
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2366): manager_handler INFO OK
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(1667): manager_trans CONFIG (/)
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2323): manager_handler CONFIG (/) processing: "JVMRoute=5d08300e-37a2-390c-8197-142a84543d60&Host=127.0.0.1&Maxattempts=1&Port=8009&Reversed=true&StickySessionForce=No&Type=ajp&ping=10"
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2366): manager_handler CONFIG OK
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(1667): manager_trans ENABLE-APP (/)
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2323): manager_handler ENABLE-APP (/) processing: "JVMRoute=5d08300e-37a2-390c-8197-142a84543d60&Alias=default-host%2Clocalhost&Context=%2Fclusterbench-ee6-web"
> [Wed May 22 13:57:57 2013] [debug] mod_manager.c(2366): manager_handler ENABLE-APP OK
> [Wed May 22 13:58:19 2013] [debug] mod_manager.c(1667): manager_trans REMOVE-APP (/)
> [Wed May 22 13:58:19 2013] [debug] mod_manager.c(2323): manager_handler REMOVE-APP (/) processing: "JVMRoute=5d08300e-37a2-390c-8197-142a84543d60&Alias=default-host%2Clocalhost&Context=%2Fclusterbench-ee6-web"
> [Wed May 22 13:58:19 2013] [debug] mod_manager.c(2366): manager_handler REMOVE-APP OK
> {noformat}