[JBoss JIRA] Created: (MODCLUSTER-91) Connector bind address of 0.0.0.0 propagated to proxy
by Brian Stansberry (JIRA)
Connector bind address of 0.0.0.0 propagated to proxy
-----------------------------------------------------
Key: MODCLUSTER-91
URL: https://jira.jboss.org/jira/browse/MODCLUSTER-91
Project: mod_cluster
Issue Type: Bug
Affects Versions: 1.0.1.GA
Reporter: Brian Stansberry
Assignee: Jean-Frederic Clere
Marek Goldmann wrote:
> I've encountered a strange error. When I bind the JBoss instance to the
> 0.0.0.0 address instead of a fixed ethernet address, the node gets
> registered in mod_cluster and shows up in mod_cluster-manager, but every
> request to the registered contexts returns a 503 error.
>
> httpd error log:
>
> [Fri Aug 07 03:21:05 2009] [error] (111)Connection refused: proxy:
> ajp: attempt to connect to 0.0.0.0:8009 (0.0.0.0) failed
> [Fri Aug 07 03:21:05 2009] [error] ap_proxy_connect_backend disabling
> worker for (0.0.0.0)
> [Fri Aug 07 03:21:15 2009] [error] proxy: ajp: disabled connection for
> (0.0.0.0)
> [Fri Aug 07 03:21:25 2009] [error] proxy: ajp: disabled connection for
> (0.0.0.0)
>
> This looks like a bug to me, because many administrators bind
> JBoss to 0.0.0.0.
The Java side needs to understand that 0.0.0.0 is useless as a client address and send something useful instead. The trick is deciding what counts as useful.
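One way the Java side could pick "something useful" can be sketched as follows. This is a hypothetical illustration, not mod_cluster code: the class name `ConnectorAddress` and method `resolveAdvertised` are made up. When the bind address is the wildcard, it advertises the first non-loopback interface address instead, falling back to whatever the local host name resolves to.

```java
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.util.Collections;

public class ConnectorAddress {
    /** Returns a concrete address to advertise to the proxy instead of the wildcard. */
    static InetAddress resolveAdvertised(InetAddress bound) throws Exception {
        if (!bound.isAnyLocalAddress()) {
            return bound; // a real bind address is already useful to the proxy
        }
        // Wildcard bind: pick the first non-loopback address of any interface.
        for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            for (InetAddress addr : Collections.list(nic.getInetAddresses())) {
                if (!addr.isLoopbackAddress()) {
                    return addr;
                }
            }
        }
        // Last resort: whatever the local host name resolves to.
        return InetAddress.getLocalHost();
    }

    public static void main(String[] args) throws Exception {
        InetAddress wildcard = InetAddress.getByName("0.0.0.0");
        System.out.println("advertise " + resolveAdvertised(wildcard).getHostAddress());
    }
}
```

The remaining open question (which interface to prefer on a multi-homed host) is exactly the "deciding what's useful" part; a heuristic like the above can still pick an interface the proxy cannot reach.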
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: https://jira.jboss.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (MODCLUSTER-311) mod_manager doesn't handle multiple virtualhosts per node
by Simone Gotti (JIRA)
Simone Gotti created MODCLUSTER-311:
---------------------------------------
Summary: mod_manager doesn't handle multiple virtualhosts per node
Key: MODCLUSTER-311
URL: https://issues.jboss.org/browse/MODCLUSTER-311
Project: mod_cluster
Issue Type: Bug
Affects Versions: 1.2.1.Final
Environment: RedHat EL 6.2, httpd-2.2.15-15.el6
Reporter: Simone Gotti
Assignee: Jean-Frederic Clere
Hi,
I was experimenting with mod_cluster and JBoss AS 7.1 configured with multiple virtualhosts.
My simple test consisted of a single node (AS instance) with 2 virtualhosts (site01 and site02) and 2 applications, each deployed on one of the two vhosts.
I noticed that mod_manager was inserting the aliases of the 2 JBoss vhosts into the same virtualhost (same vhost id):
{noformat}
balancer: [1] Name: balancer01 Sticky: 1 [JSESSIONID]/[jsessionid] remove: 0 force: 0 Timeout: 0 maxAttempts: 1
node: [1:1],Balancer: balancer01,JVMRoute: bf3c1d57-ed66-38b4-838d-0cba532b6737,LBGroup: [],Host: 192.168.122.21,Port: 8259,Type: ajp,flushpackets: 0,flushwait: 10,ping: 10,smax: 1,ttl: 60,timeout: 0
host: 1 [site01] vhost: 1 node: 1
host: 2 [site02] vhost: 1 node: 1
context: 1 [/context01] vhost: 1 node: 1 status: 1
context: 2 [/context02] vhost: 1 node: 1 status: 1
{noformat}
Now, looking at the mod_manager.c code, I noticed that inside process_appl_cmd, if the first alias name doesn't exist in the hoststatsmem table (I assume the aliases always come in order and the first one provided in the ENABLE-APP MCMP command is always the JBoss vhost default-name), a new host is created with a fixed vhost id of 1 (as the comment says):
host = read_host(hoststatsmem, &hostinfo);
if (host == NULL) {
int vid = 1; /* XXX: That is not really the right value, but that works most time */
I tried to fix this by calculating the first available vhost id (see the first part of the patch attached below).
From my tests this seems to work (I tried deploying and undeploying various apps on different hosts and contexts). This also means that the logic inside mod_proxy_cluster looks right and correctly chooses the right balancer (and sends the request to the backend only if the requested context inside the requested vhost is defined).
{noformat}
balancer: [1] Name: balancer01 Sticky: 1 [JSESSIONID]/[jsessionid] remove: 0 force: 0 Timeout: 0 maxAttempts: 1
node: [1:1],Balancer: balancer01,JVMRoute: bf3c1d57-ed66-38b4-838d-0cba532b6737,LBGroup: [],Host: 192.168.122.21,Port: 8259,Type: ajp,flushpackets: 0,flushwait: 10,ping: 10,smax: 1,ttl: 60,timeout: 0
host: 1 [site02] vhost: 1 node: 1
host: 2 [site01] vhost: 2 node: 1
context: 1 [/context01] vhost: 2 node: 1 status: 1
context: 2 [/context02] vhost: 1 node: 1 status: 1
{noformat}
Then I tried adding some aliases to the JBoss virtualhosts. ENABLE worked, but during REMOVE only the vhost default-name (the first Alias in the MCMP command) was removed, leaving the other aliases, and thus the vhost, in place (which caused problems on a later ENABLE, as it created another virtualhost for the first alias only).
On ENABLE:
{noformat}
balancer: [1] Name: balancer01 Sticky: 1 [JSESSIONID]/[jsessionid] remove: 0 force: 0 Timeout: 0 maxAttempts: 1
node: [1:1],Balancer: balancer01,JVMRoute: bf3c1d57-ed66-38b4-838d-0cba532b6737,LBGroup: [],Host: 192.168.122.21,Port: 8259,Type: ajp,flushpackets: 0,flushwait: 10,ping: 10,smax: 1,ttl: 60,timeout: 0
host: 1 [site01] vhost: 1 node: 1
host: 2 [site01alias01] vhost: 1 node: 1
host: 3 [site02] vhost: 2 node: 1
context: 1 [/context01] vhost: 1 node: 1 status: 1
context: 2 [/context02] vhost: 2 node: 1 status: 1
{noformat}
On REMOVE:
{noformat}
balancer: [1] Name: balancer01 Sticky: 1 [JSESSIONID]/[jsessionid] remove: 0 force: 0 Timeout: 0 maxAttempts: 1
node: [1:1],Balancer: balancer01,JVMRoute: bf3c1d57-ed66-38b4-838d-0cba532b6737,LBGroup: [],Host: 192.168.122.21,Port: 8259,Type: ajp,flushpackets: 0,flushwait: 10,ping: 10,smax: 1,ttl: 60,timeout: 0
host: 2 [site01alias01] vhost: 1 node: 1
host: 3 [site02] vhost: 2 node: 1
context: 2 [/context02] vhost: 2 node: 1 status: 1
{noformat}
To fix this, again inside process_appl_cmd, I noticed that only the first host was being removed, so I modified the code to remove all the hosts of that node with that vhost id.
This is the patch I made trying to fix this:
{noformat}
Index: mod_manager.c
===================================================================
--- mod_manager.c (revision 840)
+++ mod_manager.c (working copy)
@@ -1341,10 +1341,26 @@
hostinfo.id = 0;
host = read_host(hoststatsmem, &hostinfo);
if (host == NULL) {
- int vid = 1; /* XXX: That is not really the right value, but that works most time */
+
/* If REMOVE ignores it */
if (status == REMOVE)
return NULL;
+
+ /* Find the first available vhost id */
+ /* XXX: This can be racy if another request from the same node comes in the middle */
+ int vid = 1;
+ int size = loc_get_max_size_host();
+ int *id = apr_palloc(r->pool, sizeof(int) * size);
+ size = get_ids_used_host(hoststatsmem, id);
+ for (i=0; i<size; i++) {
+ hostinfo_t *ou;
+ if (get_host(hoststatsmem, &ou, id[i]) != APR_SUCCESS)
+ continue;
+
+ if(ou->vhost == vid && ou->node == node->mess.id)
+ vid++;
+ }
+
/* If the Host doesn't exist yet create it */
if (insert_update_hosts(hoststatsmem, vhost->host, node->mess.id, vid) != APR_SUCCESS) {
*errtype = TYPEMEM;
@@ -1384,7 +1400,18 @@
}
if (i==size) {
hostinfo.id = host->id;
- remove_host(hoststatsmem, &hostinfo);
+
+ int size = loc_get_max_size_host();
+ int *id = apr_palloc(r->pool, sizeof(int) * size);
+ size = get_ids_used_host(hoststatsmem, id);
+ for (i=0; i<size; i++) {
+ hostinfo_t *ou;
+
+ if (get_host(hoststatsmem, &ou, id[i]) != APR_SUCCESS)
+ continue;
+ if(ou->vhost == host->vhost && ou->node == node->mess.id)
+ remove_host(hoststatsmem, ou);
+ }
}
} else if (status == STOPPED) {
/* insert_update_contexts in fact makes that vhost->context corresponds only to the first context... */
{noformat}
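The id-allocation idea in the first hunk can be illustrated outside of the shared-memory tables that the real C code has to work with. This is an illustrative sketch with made-up names (`VhostId`, `firstFreeVhostId`), not mod_cluster code; collecting the node's used ids into a set first makes the scan independent of the order in which host entries are returned.

```java
import java.util.Set;
import java.util.TreeSet;

public class VhostId {
    /** First vhost id not yet used by the given node. */
    static int firstFreeVhostId(Set<Integer> usedByNode) {
        int vid = 1;
        while (usedByNode.contains(vid)) {
            vid++; // occupied, try the next id
        }
        return vid;
    }

    public static void main(String[] args) {
        // Node already uses vhost ids 1, 2 and 4; the first free id is 3.
        Set<Integer> used = new TreeSet<>(Set.of(1, 2, 4));
        System.out.println(firstFreeVhostId(used)); // prints 3
    }
}
```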
As discussed on the forum, some concurrency problems may occur during ENABLE. This can probably cause trouble only if the same node issues multiple concurrent ENABLE-APP commands (I don't know whether that can happen on the AS side).
[JBoss JIRA] (MODCLUSTER-314) mod_cluster: HTTP 404 on node shutdown with pure IPv6 setup
by Michal Babacek (JIRA)
Michal Babacek created MODCLUSTER-314:
-----------------------------------------
Summary: mod_cluster: HTTP 404 on node shutdown with pure IPv6 setup
Key: MODCLUSTER-314
URL: https://issues.jboss.org/browse/MODCLUSTER-314
Project: mod_cluster
Issue Type: Bug
Affects Versions: 1.2.1.Final
Environment: RHEL 6 x86_64, pure IPv6, *Apache/2.2.21* (Unix) *mod_cluster/1.2.1.Final*
Reporter: Michal Babacek
Assignee: Jean-Frederic Clere
Priority: Critical
As a follow up on
* [JBPAPP-9195] mod_cluster: HTTP 503 on node shutdown with pure IPv6 setup
I have tried this mod_cluster + httpd bundle featuring *Apache/2.2.21* (Unix) *mod_cluster/1.2.1.Final* (unlike in [JBPAPP-9195] where we used Apache/2.2.17 (Unix) DAV/2 mod_cluster/1.2.1.Beta2)
* [mod_cluster-1.2.1.Final-linux2-x64.tar.gz|http://hudson.qa.jboss.com/huds...]
The result is surprising: very frequent HTTP 404 errors on node shutdown.
h3. Http client
I have a curl client issuing requests to [2620:52:0:105f::ffff:c]:8888/SessionTest/hostname periodically, with a 1 s delay. Note that there is always a new session for each request (no JSESSIONID stuff anywhere). There are two nodes that I switch off and on randomly, always giving the starting one enough time to come up safely.
{noformat}
Wed May 30 17:00:13 EDT 2012 [2620:52:0:105f::ffff:c]:8888 0
+++ No errors in meanwhile +++
Wed May 30 17:05:24 EDT 2012 [2620:52:0:105f::ffff:c]:8888 0
Wed May 30 17:05:25 EDT 2012 404 Not Found The requested URL /SessionTest/hostname was not found on this server.
+++ HTTP 404 errors keep showing up every second +++
Wed May 30 17:05:58 EDT 2012 404 Not Found The requested URL /SessionTest/hostname was not found on this server.
Wed May 30 17:05:59 EDT 2012 [2620:52:0:105f::ffff:c]:8888 0
+++ No errors in meanwhile +++
Wed May 30 17:06:03 EDT 2012 [2620:52:0:105f::ffff:c]:8888 0
Wed May 30 17:06:04 EDT 2012 404 Not Found The requested URL /SessionTest/hostname was not found on this server.
+++ HTTP 404 errors keep showing up every second +++
Wed May 30 17:06:08 EDT 2012 404 Not Found The requested URL /SessionTest/hostname was not found on this server.
Wed May 30 17:06:09 EDT 2012 [2620:52:0:105f::ffff:c]:8888 0
+++ No errors in meanwhile +++
Wed May 30 17:06:25 EDT 2012 [2620:52:0:105f::ffff:c]:8888 0
{noformat}
Please note the time stamps marking the HTTP 404 errors; we will match them against the attached debug logs.
h4. IO error
(i) *Note:* At *17:05:24* node vmg36 was switched off and vmg35 (up and running by that time) was supposed to take over. What actually happened on *vmg35* was the undermentioned *IO error sending command CONFIG to proxy* exception at *17:05:29*, which is 5 seconds after vmg36's shutdown. Hmm... was httpd somehow too busy to accept the command?
h3. Worker nodes
The configuration is exactly the same as in [JBPAPP-9195], I just swapped the balancer. If you take a look at the attached
* node-vmg35-Ctrl+C-log.zip
* node-vmg36-Ctrl+C-log.zip
you may observe the shutdown time stamps ( *^C* ) as well as several exceptions:
*vmg35, IP:2620:52:0:105f:0:0:ffff:c, JvmRoute:f49689d6-cdbb-3015-a642-f8200ea456ff*
* 17:04:26,550 WARN [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] Problems unmarshalling remote command from byte buffer: java.lang.NullPointerException
* 17:05:29,133 INFO [org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler] (ContainerBackgroundProcessor[StandardEngine[jboss.web]]) IO error sending command CONFIG to proxy
2620:52:0:105f:0:0:ffff:c/2620:52:0:105f:0:0:ffff:c:8888: java.net.SocketTimeoutException: Read timed out
*vmg36, IP:2620:52:0:105f::ffff:0, JvmRoute:dc7bd552-a020-3d08-acee-4ae3e0f178a8*
* 17:03:36,275 WARN [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] Problems unmarshalling remote command from byte buffer: java.lang.NullPointerException
h3. Httpd
I have attached *error_log_report.zip*, which I am about to investigate; I have not yet managed to see what went wrong.
The promising reading probably lies between *17:05:24* and *17:05:29*, through to the glitch at *17:05:58* and *17:05:59*.
To be continued...
[JBoss JIRA] (MODCLUSTER-309) mod_proxy_cluster not checking all available balancers (but only the first one) for an available route
by Simone Gotti (JIRA)
Simone Gotti created MODCLUSTER-309:
---------------------------------------
Summary: mod_proxy_cluster not checking all available balancers (but only the first one) for an available route
Key: MODCLUSTER-309
URL: https://issues.jboss.org/browse/MODCLUSTER-309
Project: mod_cluster
Issue Type: Bug
Affects Versions: 1.2.1.Final
Environment: RedHat EL 6.2, httpd-2.2.15-15.el6
Reporter: Simone Gotti
Assignee: Jean-Frederic Clere
I have an environment with two or more balancers.
I want all the balancers to be available for all the virtualhosts (and maybe to filter them using UseAlias), so I'm not configuring any ProxyPass directives but letting mod_cluster create the balancers.
During a simple test I noticed that session stickiness was not working for some requests.
Enabling debug logging, I noticed that mod_proxy_cluster:get_route_balancer failed to find a route, so the worker was recalculated:
[Tue May 15 16:39:46 2012] [debug] mod_proxy_cluster.c(2617): proxy_cluster_trans for 0 (null) (null) uri: /context01/jsp01.jsp args: (null) unparsed_uri: /context01/jsp01.jsp
[Tue May 15 16:39:46 2012] [debug] mod_proxy_cluster.c(2441): cluster: balancer://balancer01 Found value 7tTdLpqWIZjDLcyBrn25tCc9.eb5376bd-c45b-38d1-97e0-c16b97f471d1 for stickysession JSESSIONID|jsessionid
[Tue May 15 16:39:46 2012] [debug] mod_proxy_cluster.c(2441): cluster: balancer://balancer01 Found value 7tTdLpqWIZjDLcyBrn25tCc9.eb5376bd-c45b-38d1-97e0-c16b97f471d1 for stickysession JSESSIONID|jsessionid
[Tue May 15 16:39:46 2012] [debug] mod_proxy_cluster.c(2675): proxy_cluster_trans using balancer02 uri: proxy:balancer://balancer02/context01/jsp01.jsp
[Tue May 15 16:39:46 2012] [debug] mod_proxy_cluster.c(2708): proxy_cluster_canon url: //balancer02/context01/jsp01.jsp
[Tue May 15 16:39:46 2012] [debug] mod_proxy_cluster.c(3140): proxy_cluster_pre_request: url balancer://balancer02/context01/jsp01.jsp
[Tue May 15 16:39:46 2012] [debug] mod_proxy_cluster.c(2880): cluster:No route found
[Tue May 15 16:39:46 2012] [debug] mod_proxy_cluster.c(1854): proxy: Entering byrequests for CLUSTER (balancer://balancer02)
[Tue May 15 16:39:46 2012] [debug] mod_proxy_cluster.c(1972): proxy: byrequests balancer DONE (ajp://192.168.122.22:8359)
[Tue May 15 16:39:46 2012] [debug] mod_proxy_cluster.c(3368): proxy_cluster_pre_request: balancer (balancer://balancer02) worker (ajp://192.168.122.22:8359) rewritten to ajp://192.168.122.22:8359/context01/jsp01.jsp
Looking at the code, it appears that mod_proxy_cluster checks only the first balancer; if it does not find a valid route there, it gives up without trying the other available balancers.
With this patch it now checks all the available balancers:
Index: mod_proxy_cluster/mod_proxy_cluster.c
===================================================================
--- mod_proxy_cluster/mod_proxy_cluster.c (revision 838)
+++ mod_proxy_cluster/mod_proxy_cluster.c (working copy)
@@ -2453,7 +2453,7 @@
/* Nice we have a route, but make sure we have to serve it */
int *nodes = find_node_context_host(r, balancer, route, use_alias, vhost_table, context_table);
if (nodes == NULL)
- return NULL; /* we can't serve context/host for the request */
+ continue; /* we can't serve context/host for the request with this balancer*/
}
if (route && *route) {
char *domain = NULL;
And this is the log after this possible fix:
[Tue May 15 16:38:36 2012] [debug] mod_proxy_cluster.c(2617): proxy_cluster_trans for 0 (null) (null) uri: /context01/ args: (null) unparsed_uri: /context01/
[Tue May 15 16:38:36 2012] [debug] mod_proxy_cluster.c(2441): cluster: balancer://balancer02 Found value Frjih6gBZzDUg+RFgeUvKJfy.9cedb7e1-5c20-3da7-bd17-9d0bc99b49d3 for stickysession JSESSIONID|jsessionid
[Tue May 15 16:38:36 2012] [debug] mod_proxy_cluster.c(2441): cluster: balancer://balancer01 Found value Frjih6gBZzDUg+RFgeUvKJfy.9cedb7e1-5c20-3da7-bd17-9d0bc99b49d3 for stickysession JSESSIONID|jsessionid
[Tue May 15 16:38:36 2012] [debug] mod_proxy_cluster.c(2461): cluster: Found route 9cedb7e1-5c20-3da7-bd17-9d0bc99b49d3
[Tue May 15 16:38:36 2012] [debug] mod_proxy_cluster.c(2377): find_nodedomain: finding node for 9cedb7e1-5c20-3da7-bd17-9d0bc99b49d3: balancer01
[Tue May 15 16:38:36 2012] [debug] mod_proxy_cluster.c(2470): cluster: Found balancer balancer01 for 9cedb7e1-5c20-3da7-bd17-9d0bc99b49d3
[Tue May 15 16:38:36 2012] [debug] mod_proxy_cluster.c(2675): proxy_cluster_trans using balancer01 uri: proxy:balancer://balancer01/context01/
[Tue May 15 16:38:36 2012] [debug] mod_proxy_cluster.c(2708): proxy_cluster_canon url: //balancer01/context01/
[Tue May 15 16:38:36 2012] [debug] mod_proxy_cluster.c(3140): proxy_cluster_pre_request: url balancer://balancer01/context01/
[Tue May 15 16:38:36 2012] [debug] mod_proxy_cluster.c(2876): cluster: Using route 9cedb7e1-5c20-3da7-bd17-9d0bc99b49d3
[Tue May 15 16:38:36 2012] [debug] mod_proxy_cluster.c(3368): proxy_cluster_pre_request: balancer (balancer://balancer01) worker (ajp://192.168.122.22:8259) rewritten to ajp://192.168.122.22:8259/context01/
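The behavioral difference the one-line patch makes (continue instead of return NULL) can be shown with a minimal sketch. All names here (`RouteLookup`, `findBalancer`, the route/context maps) are hypothetical, not the mod_proxy_cluster data structures: the point is only that a balancer which knows the sticky route but cannot serve the requested context must not end the search.

```java
import java.util.List;
import java.util.Map;

public class RouteLookup {
    record Balancer(String name, Map<String, List<String>> routeToContexts) {}

    /** Returns the first balancer that both knows the route AND serves the context. */
    static String findBalancer(List<Balancer> balancers, String route, String context) {
        for (Balancer b : balancers) {
            List<String> contexts = b.routeToContexts().get(route);
            if (contexts == null) {
                continue; // this balancer doesn't know the route at all
            }
            if (!contexts.contains(context)) {
                continue; // patched behavior: try the next balancer instead of giving up
            }
            return b.name();
        }
        return null; // no balancer can serve route+context; stickiness is lost
    }

    public static void main(String[] args) {
        // balancer02 knows the route but serves a different context;
        // only balancer01 can actually serve /context01 for this route.
        List<Balancer> balancers = List.of(
            new Balancer("balancer02", Map.of("route-x", List.of("/other"))),
            new Balancer("balancer01", Map.of("route-x", List.of("/context01"))));
        System.out.println(findBalancer(balancers, "route-x", "/context01")); // balancer01
    }
}
```

With the pre-patch behavior (returning on the first mismatch), the example above would yield no route and the worker would be recalculated, which is exactly the broken stickiness seen in the first log.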
[JBoss JIRA] Created: (MODCLUSTER-253) ROOT in excludedContexts doesn't work
by Jean-Frederic Clere (JIRA)
ROOT in excludedContexts doesn't work
-------------------------------------
Key: MODCLUSTER-253
URL: https://issues.jboss.org/browse/MODCLUSTER-253
Project: mod_cluster
Issue Type: Bug
Affects Versions: 1.1.3.Final
Environment: 5.1.x
Reporter: Jean-Frederic Clere
Assignee: Jean-Frederic Clere
Fix For: 1.1.4.Final
Just do:
<property name="excludedContexts">ROOT,admin-console,invoker,jbossws,jmx-console,juddi,web-console</property>
and see:
[Thu Sep 01 10:12:07 2011] [debug] mod_manager.c(2296): manager_handler ENABLE-APP (/) processing: "JVMRoute=4e6189af-0502-3305-8ff3-fad7fee8b516&Alias=localhost&Context=%2F"
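The log suggests a naming mismatch: the excluded list says ROOT, but the MCMP command carries the context as %2F ("/"). A hypothetical sketch of the normalization the exclusion check would need (class and method names are made up, not mod_cluster's API) is:

```java
import java.util.Set;

public class ExcludedContexts {
    /** Maps the container's "ROOT" naming to the "/" path used on the wire. */
    static String normalize(String context) {
        String c = context.trim();
        if (c.equalsIgnoreCase("ROOT") || c.isEmpty()) {
            return "/";
        }
        return c.startsWith("/") ? c : "/" + c;
    }

    static boolean isExcluded(Set<String> excluded, String context) {
        String wanted = normalize(context);
        return excluded.stream().map(ExcludedContexts::normalize).anyMatch(wanted::equals);
    }

    public static void main(String[] args) {
        Set<String> excluded = Set.of("ROOT", "admin-console", "jmx-console");
        System.out.println(isExcluded(excluded, "/"));            // true: ROOT matches "/"
        System.out.println(isExcluded(excluded, "/jmx-console")); // true
        System.out.println(isExcluded(excluded, "/myapp"));       // false
    }
}
```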
[JBoss JIRA] (MODCLUSTER-307) Issues using mod_cluster with Tomcat 6 and Executor thread pools
by Aaron Ogburn (JIRA)
Aaron Ogburn created MODCLUSTER-307:
---------------------------------------
Summary: Issues using mod_cluster with Tomcat 6 and Executor thread pools
Key: MODCLUSTER-307
URL: https://issues.jboss.org/browse/MODCLUSTER-307
Project: mod_cluster
Issue Type: Bug
Affects Versions: 1.2.1.Final
Environment: -Tomcat 6
-mod_cluster 1.2
Reporter: Aaron Ogburn
Assignee: Jean-Frederic Clere
Priority: Minor
mod_cluster cannot properly determine connector pool statistics from Tomcat 6 when using an Executor thread pool. It fails with the following exception:
ERROR ContainerBackgroundProcessor[StandardEngine[Catalina]] org.apache.catalina.core.ContainerBase - Exception invoking periodic operation:
java.lang.IllegalStateException
at org.jboss.modcluster.container.catalina.CatalinaEngine.getProxyConnector(CatalinaEngine.java:153)
at org.jboss.modcluster.ModClusterService.connectionEstablished(ModClusterService.java:306)
at org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler.status(DefaultMCMPHandler.java:372)
at org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler.status(DefaultMCMPHandler.java:347)
at org.jboss.modcluster.ModClusterService.status(ModClusterService.java:460)
at org.jboss.modcluster.container.catalina.CatalinaEventHandlerAdapter.lifecycleEvent(CatalinaEventHandlerAdapter.java:239)
at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
at org.apache.catalina.core.ContainerBase.backgroundProcess(ContainerBase.java:1385)
at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1649)
at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.run(ContainerBase.java:1638)
at java.lang.Thread.run(Thread.java:619)
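When a connector delegates to a shared Executor, its internal pool is unused, so pool statistics have to be read from the executor itself. The following is only an illustrative sketch of that idea, not the mod_cluster fix (`ExecutorStats`, `PoolStats`, and `read` are made-up names); it relies on `java.util.concurrent.ThreadPoolExecutor`, which exposes the needed numbers directly.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ExecutorStats {
    record PoolStats(int maxThreads, int busyThreads) {}

    /** Reads capacity/usage from the executor when its concrete type allows it. */
    static PoolStats read(Object maybeExecutor) {
        if (maybeExecutor instanceof ThreadPoolExecutor tpe) {
            return new PoolStats(tpe.getMaximumPoolSize(), tpe.getActiveCount());
        }
        // Unknown executor type: report "no capacity information" instead of failing.
        return new PoolStats(-1, 0);
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            2, 8, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        PoolStats stats = read(pool);
        System.out.println(stats.maxThreads() + " max, " + stats.busyThreads() + " busy");
        pool.shutdown();
    }
}
```

Degrading to a "no information" result, rather than throwing as in the stack trace above, keeps the periodic STATUS processing alive even when the pool type is unrecognized.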