[
https://issues.jboss.org/browse/MODCLUSTER-339?page=com.atlassian.jira.pl...
]
Michal Babacek commented on MODCLUSTER-339:
-------------------------------------------
[~jfclere] I have been investigating further and you might find these notes useful:
h4. IPv6 works if we remove % and zone id
The "fix", or rather a workaround, in
[/pull/20/|https://github.com/modcluster/mod_cluster/pull/20/] really made IPv6 work on
Solaris 11 SPARC64. I tested with the attached [^mod_manager.so] (built from
[/pull/20/|https://github.com/modcluster/mod_cluster/pull/20/] sources for sparc64, *apxs*
from httpd-2.2.23). Here is the debug log from the successful test: [^error_log_pull20].
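For the record, the workaround boils down to cutting the address off at the first {{%}} before it is handed to the resolver. A minimal sketch of that idea in plain C, my paraphrase rather than the actual pull/20 diff:

```c
#include <assert.h>
#include <string.h>

/* Strip an IPv6 zone id in place: "fe80::1%2" becomes "fe80::1".
 * Hypothetical helper illustrating the pull/20 workaround idea;
 * this is NOT the actual mod_cluster code. */
static void strip_zone_id(char *host)
{
    char *pct = strchr(host, '%');
    if (pct != NULL)
        *pct = '\0';
}
```

Calling {{strip_zone_id}} on {{2620:52:0:105f:0:0:ffff:50%2}} leaves {{2620:52:0:105f:0:0:ffff:50}}, which then resolves without trouble.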
h4. Actual apr_sockaddr_info_get source code
I wondered what the actual difference between Solaris's and Fedora's
{{apr_sockaddr_info_get}} is, but I was bewildered by all the macros. So I ran the
preprocessor to compare the actual C code that ends up being compiled on Fedora and on
Solaris.
{noformat}
/tmp/native/httpd/httpd-2.2.23/srclib/apr
gcc -E -P -g -Wall -Wmissing-prototypes -Wstrict-prototypes -Wmissing-declarations -m64
-DSSL_EXPERIMENTAL -DSSL_ENGINE -DHAVE_CONFIG_H -DSOLARIS2=11 -D_POSIX_PTHREAD_SEMANTICS
-D_REENTRANT -I./include -I/tmp/native/httpd/httpd-2.2.23/srclib/apr/include/arch/unix
-I./include/arch/unix -I/tmp/native/httpd/httpd-2.2.23/srclib/apr/include/arch/unix
-I/tmp/native/httpd/httpd-2.2.23/srclib/apr/include -o network_io/unix/sockaddr.lo -c
network_io/unix/sockaddr.c
{noformat}
One may find resulting files attached as [^sockaddr.lo_fedora18_x86_64],
[^sockaddr.lo_solaris11_sparc64].
I took a look at differences in
* {{static apr_status_t find_addresses(apr_sockaddr_t **sa, const char *hostname,
apr_int32_t family, apr_port_t port, apr_int32_t flags, apr_pool_t *p)}}
* {{call_resolver(apr_sockaddr_t **sa, const char *hostname, apr_int32_t family,
apr_port_t port, apr_int32_t flags, apr_pool_t *p)}}
but it all boils down to the system's:
{{getaddrinfo(hostname, servname, &hints, &ai_list);}}
that, as far as I was able to look up, [supports the %zoneid
syntax|http://docs.oracle.com/cd/E23823_01/html/816-5170/getaddrinfo-3soc....].
So I can't really see how {{apr_sockaddr_info_get}} could fail us; there is not much
code in it:
Solaris 11 SPARC64:
{code}
apr_status_t apr_sockaddr_info_get(apr_sockaddr_t **sa,
                                   const char *hostname,
                                   apr_int32_t family, apr_port_t port,
                                   apr_int32_t flags, apr_pool_t *p)
{
    apr_int32_t masked;
    *sa = 0L;
    if ((masked = flags & (0x01 | 0x02))) {
        if (!hostname ||
            family != 0 ||
            masked == (0x01 | 0x02)) {
            return 22;
        }
    }
    return find_addresses(sa, hostname, family, port, flags, p);
}
{code}
the only difference from the Fedora build being on line 7: {{*sa = ((void *)0);}} (just a different expansion of {{NULL}}).
uh...
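To rule the system resolver itself in or out, one could compile a tiny standalone check around {{getaddrinfo}} (my own sketch, not taken from any of the attached sources) and run it on both boxes, feeding it the address with and without the {{%2}} zone id:

```c
#include <assert.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

/* Return 0 when getaddrinfo() accepts the numeric host string,
 * otherwise the getaddrinfo() error code.  This mirrors the call
 * that apr_sockaddr_info_get() ultimately boils down to. */
static int numeric_lookup(const char *host)
{
    struct addrinfo hints;
    struct addrinfo *ai = NULL;
    int rc;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags    = AI_NUMERICHOST;  /* parse only, no DNS traffic */

    rc = getaddrinfo(host, "8009", &hints, &ai);
    if (rc == 0)
        freeaddrinfo(ai);
    return rc;
}
```

Comparing {{numeric_lookup("2620:52:0:105f:0:0:ffff:50")}} against {{numeric_lookup("2620:52:0:105f:0:0:ffff:50%2")}} on Fedora and on Solaris would show whether the %zoneid divergence lives in libc or somewhere in APR. (I have not run this exact program on the SPARC box; it is only a suggestion for narrowing the problem down.)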
"proxy: DNS lookup failure" with IPv6 on Solaris
------------------------------------------------
Key: MODCLUSTER-339
URL:
https://issues.jboss.org/browse/MODCLUSTER-339
Project: mod_cluster
Issue Type: Bug
Affects Versions: 1.2.3.Final, 1.2.4.Final
Environment: Solaris 10 x86, Solaris 11 x86, Solaris 11 SPARC
Reporter: Michal Babacek
Assignee: Jean-Frederic Clere
Priority: Critical
Labels: ipv6
Fix For: 1.2.5.Final
Attachments: access_log-mod_cluster, error_log-mod_cluster,
error_log-mod_cluster-RHEL, error_log-proxypass, error_log_pull20, http.conf,
mod_manager.so, sockaddr.lo_fedora18_x86_64, sockaddr.lo_solaris11_sparc64
h2. Failure with mod_cluster
Having the following setting:
{code:title=mod_cluster.conf|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
LoadModule slotmem_module modules/mod_slotmem.so
LoadModule manager_module modules/mod_manager.so
LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
LoadModule advertise_module modules/mod_advertise.so
MemManagerFile "/tmp/mod_cluster-eap6/jboss-ews-2.0/var/cache/mod_cluster"
ServerName [2620:52:0:105f::ffff:50]:2080
<IfModule manager_module>
  Listen [2620:52:0:105f::ffff:50]:6666
  LogLevel debug

  <VirtualHost [2620:52:0:105f::ffff:50]:6666>
    ServerName [2620:52:0:105f::ffff:50]:6666

    <Directory />
      Order deny,allow
      Deny from all
      Allow from all
    </Directory>

    KeepAliveTimeout 60
    MaxKeepAliveRequests 0
    ServerAdvertise on
    AdvertiseFrequency 5
    ManagerBalancerName qacluster
    AdvertiseGroup [ff01::7]:23964
    EnableMCPMReceive

    <Location /mcm>
      SetHandler mod_cluster-manager
      Order deny,allow
      Deny from all
      Allow from all
    </Location>
  </VirtualHost>
</IfModule>
{code}
I get a weird {{proxy: DNS lookup failure}} as soon as the worker sends {{CONFIG}}:
{code:title=access_log|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
2620:52:0:105f::ffff:50 - - [16/May/2013:08:37:24 -0400] "INFO / HTTP/1.1" 200
- "-" "ClusterListener/1.0"
2620:52:0:105f::ffff:50 - - [16/May/2013:08:37:24 -0400] "CONFIG / HTTP/1.1"
200 - "-" "ClusterListener/1.0"
2620:52:0:105f::ffff:50 - - [16/May/2013:08:37:24 -0400] "STATUS / HTTP/1.1"
200 64 "-" "ClusterListener/1.0"
...
{code}
{code:title=error_log|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
...
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(1923): manager_trans INFO (/)
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(2598): manager_handler INFO (/)
processing: ""
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(2647): manager_handler INFO OK
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(1923): manager_trans CONFIG (/)
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(2598): manager_handler CONFIG (/)
processing:
"JVMRoute=jboss-eap-6.1&Host=%5B2620%3A52%3A0%3A105f%3A0%3A0%3Affff%3A50%252%5D&Maxattempts=1&Port=8009&StickySessionForce=No&Type=ajp&ping=10"
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(2647): manager_handler CONFIG OK
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(1923): manager_trans STATUS (/)
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(2598): manager_handler STATUS (/)
processing: "JVMRoute=jboss-eap-6.1&Load=100"
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(1638): Processing STATUS
[Thu May 16 08:37:24 2013] [debug] mod_proxy_cluster.c(655): add_balancer_node: Create
balancer balancer://qacluster
[Thu May 16 08:37:24 2013] [debug] mod_proxy_cluster.c(426): Created: worker for
ajp://[2620:52:0:105f:0:0:ffff:50%2]:8009
[Thu May 16 08:37:24 2013] [debug] mod_proxy_cluster.c(532): proxy: initialized worker 1
in child 16847 for (2620:52:0:105f:0:0:ffff:50%2) min=0 max=25 smax=25
[Thu May 16 08:37:24 2013] [debug] mod_proxy_cluster.c(601): Created: worker for
ajp://[2620:52:0:105f:0:0:ffff:50%2]:8009 1 (status): 1
[Thu May 16 08:37:24 2013] [debug] proxy_util.c(2011): proxy: ajp: has acquired
connection for (2620:52:0:105f:0:0:ffff:50%2)
[Thu May 16 08:37:24 2013] [debug] proxy_util.c(2067): proxy: connecting
ajp://[2620:52:0:105f:0:0:ffff:50%2]:8009/ to 2620:52:0:105f:0:0:ffff:50%2:8009
[Thu May 16 08:37:24 2013] [error] [client 2620:52:0:105f::ffff:50] proxy: DNS lookup
failure for: 2620:52:0:105f:0:0:ffff:50%2 returned by /
[Thu May 16 08:37:24 2013] [debug] proxy_util.c(2029): proxy: ajp: has released
connection for (2620:52:0:105f:0:0:ffff:50%2)
...
{code}
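For reference, percent-decoding the {{Host}} parameter from the CONFIG request above shows exactly where the trailing {{%2}} zone id comes from: the {{%25}} decodes to a literal {{%}}. A quick standalone sketch of the decoding, illustrative only (mod_manager has its own decoder):

```c
#include <assert.h>
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/* Decode %XX escapes from src into dst (dst must be large enough).
 * Illustrative only; NOT the mod_manager implementation. */
static void percent_decode(const char *src, char *dst)
{
    while (*src) {
        if (src[0] == '%' && isxdigit((unsigned char)src[1])
                          && isxdigit((unsigned char)src[2])) {
            char hex[3] = { src[1], src[2], '\0' };
            *dst++ = (char)strtol(hex, NULL, 16);
            src += 3;
        } else {
            *dst++ = *src++;
        }
    }
    *dst = '\0';
}
```

Decoding {{%5B2620%3A52%3A0%3A105f%3A0%3A0%3Affff%3A50%252%5D}} yields {{[2620:52:0:105f:0:0:ffff:50%2]}}, so the worker really is registered with the zone id in its hostname, and that is the string the proxy later tries to resolve.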
An attempt to access the mod_cluster manager console yields an unpleasant {{NOTOK}}
(obviously, since all workers are in the error state...):
{code}
curl -g [2620:52:0:105f::ffff:50]:6666/mcm
...
<h1> Node jboss-eap-6.1 (ajp://[2620:52:0:105f:0:0:ffff:50%2]:8009): </h1>
<a
href="/mcm?nonce=1bddeeea-f9d5-eeea-ee5b-e3a5e3c0a965&Cmd=ENABLE-APP&Range=NODE&JVMRoute=jboss-eap-6.1">Enable
Contexts</a> <a
href="/mcm?nonce=1bddeeea-f9d5-eeea-ee5b-e3a5e3c0a965&Cmd=DISABLE-APP&Range=NODE&JVMRoute=jboss-eap-6.1">Disable
Contexts</a><br/>
Balancer: qacluster,LBGroup: ,Flushpackets: Off,Flushwait: 10000,Ping: 10000000,Smax:
26,Ttl: 60000000,Status: NOTOK,Elected: 0,Read: 0,Transferred: 0,Connected: 0,Load: -1
{code}
You might take a look at the logs: [^error_log-mod_cluster], [^access_log-mod_cluster].
h2. ProxyPass itself works just fine
On the other hand, when I tried this:
{code:title=proxypass.conf|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
ServerName [2620:52:0:105f::ffff:50]:2080
Listen [2620:52:0:105f::ffff:50]:6666
LogLevel debug
<VirtualHost [2620:52:0:105f::ffff:50]:6666>
  ServerName [2620:52:0:105f::ffff:50]:6666
  ProxyPass / ajp://[2620:52:0:105f:0:0:ffff:50]:8009/
  ProxyPassReverse / ajp://[2620:52:0:105f:0:0:ffff:50]:8009/

  <Directory />
    Order deny,allow
    Deny from all
    Allow from all
  </Directory>
</VirtualHost>
{code}
I received:
{code:title=error_log|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
...
[Thu May 16 08:29:00 2013] [debug] mod_proxy_ajp.c(45): proxy: AJP: canonicalising URL
//[2620:52:0:105f:0:0:ffff:50]:8009/
[Thu May 16 08:29:00 2013] [debug] proxy_util.c(1506): [client 2620:52:0:105f::ffff:50]
proxy: ajp: found worker ajp://[2620:52:0:105f:0:0:ffff:50]:8009/ for
ajp://[2620:52:0:105f:0:0:ffff:50]:8009/
[Thu May 16 08:29:00 2013] [debug] mod_proxy.c(1020): Running scheme ajp handler (attempt
0)
[Thu May 16 08:29:00 2013] [debug] mod_proxy_http.c(1963): proxy: HTTP: declining URL
ajp://[2620:52:0:105f:0:0:ffff:50]:8009/
[Thu May 16 08:29:00 2013] [debug] mod_proxy_ajp.c(681): proxy: AJP: serving URL
ajp://[2620:52:0:105f:0:0:ffff:50]:8009/
[Thu May 16 08:29:00 2013] [debug] proxy_util.c(2011): proxy: AJP: has acquired
connection for (2620:52:0:105f:0:0:ffff:50)
[Thu May 16 08:29:00 2013] [debug] proxy_util.c(2067): proxy: connecting
ajp://[2620:52:0:105f:0:0:ffff:50]:8009/ to 2620:52:0:105f:0:0:ffff:50:8009
[Thu May 16 08:29:00 2013] [debug] proxy_util.c(2193): proxy: connected / to
2620:52:0:105f:0:0:ffff:50:8009
[Thu May 16 08:29:00 2013] [debug] proxy_util.c(2444): proxy: AJP: fam 26 socket created
to connect to 2620:52:0:105f:0:0:ffff:50
...
{code}
And from the client's side, it worked:
{{curl -g [2620:52:0:105f::ffff:50]:6666/}} brought me the worker's home page
(EAP's welcome in this case).
Take a look at the whole log [^error_log-proxypass].
*Note:* The [^http.conf] was the same both for mod_cluster and for proxypass test.