[https://issues.jboss.org/browse/MODCLUSTER-339?page=com.atlassian.jira.pl...]
Michal Babacek commented on MODCLUSTER-339:
-------------------------------------------
h3. Thinking aloud
I do not understand why we should put the zone there at all. What should httpd, as a server,
do with it?
I tried to look up httpd tests involving IPv6, and I found only this one, which does not use
a zone id:
[httpd-2.2.23/srclib/apr/test/testsock.c:314|https://gist.github.com/Karm/5642351#file-testsock-c-L314]
Furthermore, I examined the functions in
{{httpd-2.2.23/srclib/apr/network_io/unix/sockaddr.c}} that lead to the
{{getaddrinfo(hostname, servname, &hints, &ai_list);}} call.
Digging through the Solaris POSIX mumbo-jumbo reveals a nice doc for
[getaddrinfo()|http://docs.oracle.com/cd/E23823_01/html/816-5170/getaddrin...]
{quote}
The {{nodename}} can also be an IPv6 zone-id in the form:
{code}
<address>%<zone-id>
{code}
The address is the literal IPv6 link-local address or host name of the destination. The
zone-id is the interface ID of the IPv6 link used to send the packet. The zone-id can
either be a numeric value, indicating a literal zone value, or an interface name such as
hme0.
{quote}
OK, so we should be able to put {{%num}} there; still, why should httpd be interested in the
worker's interface zone id? It is not going to bind to it...
I suspect there is even room for a nasty error: since the zone id takes priority over the
actual address, httpd might try to use a specific interface just because it was handed an
unnecessary zone id... Dunno :-(
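For the record, the platform difference can be poked at directly with {{getaddrinfo()}}. A minimal C sketch, assuming nothing about mod_cluster itself (the {{parse_literal}} helper is made up for illustration); whether the zoned form parses is exactly the RHEL-vs-Solaris difference in question:

```c
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>

/* Hypothetical helper: ask getaddrinfo to parse a literal IPv6 host.
 * Returns 0 on success, otherwise the getaddrinfo error code. */
static int parse_literal(const char *host)
{
    struct addrinfo hints, *res = NULL;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET6;
    hints.ai_flags  = AI_NUMERICHOST;   /* literal only, no DNS lookup */
    int rc = getaddrinfo(host, "8009", &hints, &res);
    if (res)
        freeaddrinfo(res);
    return rc;
}

int main(void)
{
    /* A plain literal parses everywhere. */
    printf("plain: %d\n", parse_literal("2620:52:0:102f:221:5eff:fe96:8180"));
    /* Whether the zoned form parses is platform-dependent. */
    int rc = parse_literal("2620:52:0:102f:221:5eff:fe96:8180%666");
    printf("zoned: %s\n", rc == 0 ? "accepted" : gai_strerror(rc));
    return 0;
}
```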
h3. Toss % out
How about stripping the {{%num}} from the CONFIG message on the native side? As I stated
above, it is IMHO useless there anyhow.
{code:title=RHEL with zone %666|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
[Fri May 24 06:44:25 2013] [debug] mod_proxy_cluster.c(655): add_balancer_node: Create
balancer balancer://qacluster
[Fri May 24 06:44:25 2013] [debug] mod_proxy_cluster.c(426): Created: worker for
ajp://[2620:52:0:102f:221:5eff:fe96:8180%666]:8009
[Fri May 24 06:44:25 2013] [debug] mod_proxy_cluster.c(549): proxy: initialized single
connection worker 1 in child 10070 for (2620:52:0:102f:221:5eff:fe96:8180%666)
[Fri May 24 06:44:25 2013] [debug] mod_proxy_cluster.c(601): Created: worker for
ajp://[2620:52:0:102f:221:5eff:fe96:8180%666]:8009 1 (status): 129
[Fri May 24 06:44:25 2013] [debug] mod_proxy_cluster.c(1025): update_workers_node done
[Fri May 24 06:44:25 2013] [debug] mod_proxy_cluster.c(1010): update_workers_node
starting
[Fri May 24 06:44:25 2013] [debug] mod_proxy_cluster.c(1025): update_workers_node done
{code}
OK, RHEL can handle it, but Solaris cannot. On the other hand:
{code:title=RHEL without any zone in the message|borderStyle=solid|borderColor=#ccc|
titleBGColor=#F7D6C1}
[Fri May 24 06:37:47 2013] [debug] mod_proxy_cluster.c(426): Created: worker for
ajp://[2620:52:0:102f:221:5eff:fe96:8180]:8009
[Fri May 24 06:37:47 2013] [debug] mod_proxy_cluster.c(549): proxy: initialized single
connection worker 1 in child 9967 for (2620:52:0:102f:221:5eff:fe96:8180)
[Fri May 24 06:37:47 2013] [debug] mod_proxy_cluster.c(601): Created: worker for
ajp://[2620:52:0:102f:221:5eff:fe96:8180]:8009 1 (status): 129
{code}
Omitting the zone from the CONFIG message seems to do no harm.
Solaris is up and running :-)
{code:title=SOLARIS without any zone in the message|borderStyle=solid|borderColor=#ccc|
titleBGColor=#F7D6C1}
[Fri May 24 08:25:15 2013] [debug] mod_manager.c(1923): manager_trans CONFIG (/)
[Fri May 24 08:25:15 2013] [debug] mod_manager.c(2598): manager_handler CONFIG (/)
processing:
"JVMRoute=FakeNode&Host=%5B2620%3A52%3A0%3A105f%3A%3Affff%3A60%5D&Maxattempts=1&Port=8009&Type=ajp&ping=100\r\n"
[Fri May 24 08:25:15 2013] [debug] mod_manager.c(2647): manager_handler CONFIG OK
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(1010): update_workers_node
starting
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(655): add_balancer_node: Create
balancer balancer://qacluster
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(1010): update_workers_node
starting
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(655): add_balancer_node: Create
balancer balancer://qacluster
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(426): Created: worker for
ajp://[2620:52:0:105f::ffff:60]:8009
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(532): proxy: initialized worker 1
in child 19207 for (2620:52:0:105f::ffff:60) min=0 max=25 smax=25
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(601): Created: worker for
ajp://[2620:52:0:105f::ffff:60]:8009 1 (status): 1
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(1025): update_workers_node done
[Fri May 24 08:25:15 2013] [debug] proxy_util.c(2011): proxy: ajp: has acquired connection
for (2620:52:0:105f::ffff:60)
[Fri May 24 08:25:15 2013] [debug] proxy_util.c(2067): proxy: connecting
ajp://[2620:52:0:105f::ffff:60]:8009/ to 2620:52:0:105f::ffff:60:8009
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(426): Created: worker for
ajp://[2620:52:0:105f::ffff:60]:8009
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(532): proxy: initialized worker 1
in child 19208 for (2620:52:0:105f::ffff:60) min=0 max=25 smax=25
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(601): Created: worker for
ajp://[2620:52:0:105f::ffff:60]:8009 1 (status): 1
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(1025): update_workers_node done
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(1010): update_workers_node
starting
[Fri May 24 08:25:15 2013] [debug] mod_proxy_cluster.c(1025): update_workers_node done
[Fri May 24 08:25:15 2013] [debug] proxy_util.c(2193): proxy: connected / to
2620:52:0:105f::ffff:60:8009
[Fri May 24 08:25:15 2013] [debug] proxy_util.c(2444): proxy: ajp: fam 26 socket created
to connect to 2620:52:0:105f::ffff:60
{code}
Without *%something* in the Host attribute of the CONFIG message, there is no nasty *DNS
lookup failure* and everything seems to be fine (not yet thoroughly tested, though).
The aforementioned log was produced with this fake message:
{code}
{ echo "CONFIG / HTTP/1.0"; \
  echo "Content-length: 108"; \
  echo ""; \
  echo "JVMRoute=FakeNode&Host=%5B2620%3A52%3A0%3A105f%3A%3Affff%3A60%5D&Maxattempts=1&Port=8009&Type=ajp&ping=100"; \
  sleep 1; } | telnet 2620:52:0:105f::ffff:60 6666
{code}
What do you think about it?
"proxy: DNS lookup failure" with IPv6 on Solaris
------------------------------------------------
Key: MODCLUSTER-339
URL:
https://issues.jboss.org/browse/MODCLUSTER-339
Project: mod_cluster
Issue Type: Bug
Affects Versions: 1.2.3.Final, 1.2.4.Final
Environment: Solaris 10 x86, Solaris 11 x86, Solaris 11 SPARC
Reporter: Michal Babacek
Assignee: Jean-Frederic Clere
Priority: Critical
Labels: ipv6
Fix For: 1.2.5.Final
Attachments: access_log-mod_cluster, error_log-mod_cluster,
error_log-mod_cluster-RHEL, error_log-proxypass, http.conf
h2. Failure with mod_cluster
Having the following setting:
{code:title=mod_cluster.conf|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
LoadModule slotmem_module modules/mod_slotmem.so
LoadModule manager_module modules/mod_manager.so
LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
LoadModule advertise_module modules/mod_advertise.so
MemManagerFile "/tmp/mod_cluster-eap6/jboss-ews-2.0/var/cache/mod_cluster"
ServerName [2620:52:0:105f::ffff:50]:2080
<IfModule manager_module>
Listen [2620:52:0:105f::ffff:50]:6666
LogLevel debug
<VirtualHost [2620:52:0:105f::ffff:50]:6666>
ServerName [2620:52:0:105f::ffff:50]:6666
<Directory />
Order deny,allow
Deny from all
Allow from all
</Directory>
KeepAliveTimeout 60
MaxKeepAliveRequests 0
ServerAdvertise on
AdvertiseFrequency 5
ManagerBalancerName qacluster
AdvertiseGroup [ff01::7]:23964
EnableMCPMReceive
<Location /mcm>
SetHandler mod_cluster-manager
Order deny,allow
Deny from all
Allow from all
</Location>
</VirtualHost>
</IfModule>
{code}
I get a weird {{proxy: DNS lookup failure}} as soon as the worker sends {{CONFIG}}:
{code:title=access_log|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
2620:52:0:105f::ffff:50 - - [16/May/2013:08:37:24 -0400] "INFO / HTTP/1.1" 200
- "-" "ClusterListener/1.0"
2620:52:0:105f::ffff:50 - - [16/May/2013:08:37:24 -0400] "CONFIG / HTTP/1.1"
200 - "-" "ClusterListener/1.0"
2620:52:0:105f::ffff:50 - - [16/May/2013:08:37:24 -0400] "STATUS / HTTP/1.1"
200 64 "-" "ClusterListener/1.0"
...
{code}
{code:title=error_log|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
...
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(1923): manager_trans INFO (/)
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(2598): manager_handler INFO (/)
processing: ""
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(2647): manager_handler INFO OK
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(1923): manager_trans CONFIG (/)
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(2598): manager_handler CONFIG (/)
processing:
"JVMRoute=jboss-eap-6.1&Host=%5B2620%3A52%3A0%3A105f%3A0%3A0%3Affff%3A50%252%5D&Maxattempts=1&Port=8009&StickySessionForce=No&Type=ajp&ping=10"
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(2647): manager_handler CONFIG OK
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(1923): manager_trans STATUS (/)
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(2598): manager_handler STATUS (/)
processing: "JVMRoute=jboss-eap-6.1&Load=100"
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(1638): Processing STATUS
[Thu May 16 08:37:24 2013] [debug] mod_proxy_cluster.c(655): add_balancer_node: Create
balancer balancer://qacluster
[Thu May 16 08:37:24 2013] [debug] mod_proxy_cluster.c(426): Created: worker for
ajp://[2620:52:0:105f:0:0:ffff:50%2]:8009
[Thu May 16 08:37:24 2013] [debug] mod_proxy_cluster.c(532): proxy: initialized worker 1
in child 16847 for (2620:52:0:105f:0:0:ffff:50%2) min=0 max=25 smax=25
[Thu May 16 08:37:24 2013] [debug] mod_proxy_cluster.c(601): Created: worker for
ajp://[2620:52:0:105f:0:0:ffff:50%2]:8009 1 (status): 1
[Thu May 16 08:37:24 2013] [debug] proxy_util.c(2011): proxy: ajp: has acquired
connection for (2620:52:0:105f:0:0:ffff:50%2)
[Thu May 16 08:37:24 2013] [debug] proxy_util.c(2067): proxy: connecting
ajp://[2620:52:0:105f:0:0:ffff:50%2]:8009/ to 2620:52:0:105f:0:0:ffff:50%2:8009
[Thu May 16 08:37:24 2013] [error] [client 2620:52:0:105f::ffff:50] proxy: DNS lookup
failure for: 2620:52:0:105f:0:0:ffff:50%2 returned by /
[Thu May 16 08:37:24 2013] [debug] proxy_util.c(2029): proxy: ajp: has released
connection for (2620:52:0:105f:0:0:ffff:50%2)
...
{code}
An attempt to access the mod_cluster manager console yields an unpleasant {{NOTOK}} status
(obviously, since all workers are in the error state):
{code}
curl -g [2620:52:0:105f::ffff:50]:6666/mcm
...
<h1> Node jboss-eap-6.1 (ajp://[2620:52:0:105f:0:0:ffff:50%2]:8009): </h1>
<a
href="/mcm?nonce=1bddeeea-f9d5-eeea-ee5b-e3a5e3c0a965&Cmd=ENABLE-APP&Range=NODE&JVMRoute=jboss-eap-6.1">Enable
Contexts</a> <a
href="/mcm?nonce=1bddeeea-f9d5-eeea-ee5b-e3a5e3c0a965&Cmd=DISABLE-APP&Range=NODE&JVMRoute=jboss-eap-6.1">Disable
Contexts</a><br/>
Balancer: qacluster,LBGroup: ,Flushpackets: Off,Flushwait: 10000,Ping: 10000000,Smax:
26,Ttl: 60000000,Status: NOTOK,Elected: 0,Read: 0,Transferred: 0,Connected: 0,Load: -1
{code}
You might take a look at the logs: [^error_log-mod_cluster], [^access_log-mod_cluster].
h2. ProxyPass itself works just fine
On the other hand, when I tried this:
{code:title=proxypass.conf|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
ServerName [2620:52:0:105f::ffff:50]:2080
Listen [2620:52:0:105f::ffff:50]:6666
LogLevel debug
<VirtualHost [2620:52:0:105f::ffff:50]:6666>
ServerName [2620:52:0:105f::ffff:50]:6666
ProxyPass / ajp://[2620:52:0:105f:0:0:ffff:50]:8009/
ProxyPassReverse / ajp://[2620:52:0:105f:0:0:ffff:50]:8009/
<Directory />
Order deny,allow
Deny from all
Allow from all
</Directory>
</VirtualHost>
{code}
I received:
{code:title=error_log|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
...
[Thu May 16 08:29:00 2013] [debug] mod_proxy_ajp.c(45): proxy: AJP: canonicalising URL
//[2620:52:0:105f:0:0:ffff:50]:8009/
[Thu May 16 08:29:00 2013] [debug] proxy_util.c(1506): [client 2620:52:0:105f::ffff:50]
proxy: ajp: found worker ajp://[2620:52:0:105f:0:0:ffff:50]:8009/ for
ajp://[2620:52:0:105f:0:0:ffff:50]:8009/
[Thu May 16 08:29:00 2013] [debug] mod_proxy.c(1020): Running scheme ajp handler (attempt
0)
[Thu May 16 08:29:00 2013] [debug] mod_proxy_http.c(1963): proxy: HTTP: declining URL
ajp://[2620:52:0:105f:0:0:ffff:50]:8009/
[Thu May 16 08:29:00 2013] [debug] mod_proxy_ajp.c(681): proxy: AJP: serving URL
ajp://[2620:52:0:105f:0:0:ffff:50]:8009/
[Thu May 16 08:29:00 2013] [debug] proxy_util.c(2011): proxy: AJP: has acquired
connection for (2620:52:0:105f:0:0:ffff:50)
[Thu May 16 08:29:00 2013] [debug] proxy_util.c(2067): proxy: connecting
ajp://[2620:52:0:105f:0:0:ffff:50]:8009/ to 2620:52:0:105f:0:0:ffff:50:8009
[Thu May 16 08:29:00 2013] [debug] proxy_util.c(2193): proxy: connected / to
2620:52:0:105f:0:0:ffff:50:8009
[Thu May 16 08:29:00 2013] [debug] proxy_util.c(2444): proxy: AJP: fam 26 socket created
to connect to 2620:52:0:105f:0:0:ffff:50
...
{code}
And from the client's side it worked:
{{curl -g [2620:52:0:105f::ffff:50]:6666/}} brought up the worker's home page
(EAP's welcome page in this case).
Take a look at the whole log [^error_log-proxypass].
*Note:* The [^http.conf] was the same both for mod_cluster and for proxypass test.