[JBoss JIRA] (MODCLUSTER-536) List of open files grows steadily during load test through mod_cluster
by Michal Karm Babacek (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-536?page=com.atlassian.jira.pl... ]
Michal Karm Babacek updated MODCLUSTER-536:
-------------------------------------------
Forum Reference: https://developer.jboss.org/message/962154, https://developer.jboss.org/message/962243
> List of open files grows steadily during load test through mod_cluster
> ----------------------------------------------------------------------
>
> Key: MODCLUSTER-536
> URL: https://issues.jboss.org/browse/MODCLUSTER-536
> Project: mod_cluster
> Issue Type: Bug
> Components: Core & Container Integration (Java)
> Affects Versions: 1.3.1.Final
> Environment: WildFly 10.0.0.Final
> mod_cluster-1.3.1.Final-linux2-x64-ssl
> CentOS7 (virtualbox)
> Reporter: Wayne Wang
> Assignee: Michal Karm Babacek
> Attachments: error_log, httpd-mpm.conf, httpd.conf, server.log, standalone-full-ha-snippet.xml
>
>
> I was able to configure the WildFly 10 mod_cluster subsystem to work with Apache mod_cluster (1.3.1). However, during a load test, requests sent through the web server eventually caused an error in the WildFly instance, and errors also appeared in the Apache web server log.
> The error in the WildFly instance is "java.net.SocketException: Too many open files". When I used the command lsof -u <user> | grep TCP | wc -l, I could see the count grow steadily until the WildFly instance reported the error. This happened when I sent requests through the web server.
> However, when I sent requests directly to the WildFly instance (the app server), the count did not grow, and the app server could handle a much heavier load without this issue.
> The issue does not appear until many rounds of load tests have been executed through the web server. After a restart of the web server, everything works fine until many rounds of load tests are run again.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 4 months
[JBoss JIRA] (MODCLUSTER-536) List of open files grows steadily during load test through mod_cluster
by Michal Karm Babacek (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-536?page=com.atlassian.jira.pl... ]
Michal Karm Babacek reassigned MODCLUSTER-536:
----------------------------------------------
Assignee: Michal Karm Babacek (was: Jean-Frederic Clere)
> List of open files grows steadily during load test through mod_cluster
> ----------------------------------------------------------------------
>
> Key: MODCLUSTER-536
> URL: https://issues.jboss.org/browse/MODCLUSTER-536
> Project: mod_cluster
> Issue Type: Bug
> Components: Core & Container Integration (Java)
> Affects Versions: 1.3.1.Final
> Environment: WildFly 10.0.0.Final
> mod_cluster-1.3.1.Final-linux2-x64-ssl
> CentOS7 (virtualbox)
> Reporter: Wayne Wang
> Assignee: Michal Karm Babacek
> Attachments: error_log, httpd-mpm.conf, httpd.conf, server.log, standalone-full-ha-snippet.xml
>
>
> I was able to configure the WildFly 10 mod_cluster subsystem to work with Apache mod_cluster (1.3.1). However, during a load test, requests sent through the web server eventually caused an error in the WildFly instance, and errors also appeared in the Apache web server log.
> The error in the WildFly instance is "java.net.SocketException: Too many open files". When I used the command lsof -u <user> | grep TCP | wc -l, I could see the count grow steadily until the WildFly instance reported the error. This happened when I sent requests through the web server.
> However, when I sent requests directly to the WildFly instance (the app server), the count did not grow, and the app server could handle a much heavier load without this issue.
> The issue does not appear until many rounds of load tests have been executed through the web server. After a restart of the web server, everything works fine until many rounds of load tests are run again.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 4 months
[JBoss JIRA] (MODCLUSTER-536) List of open files grows steadily during load test through mod_cluster
by Wayne Wang (JIRA)
Wayne Wang created MODCLUSTER-536:
-------------------------------------
Summary: List of open files grows steadily during load test through mod_cluster
Key: MODCLUSTER-536
URL: https://issues.jboss.org/browse/MODCLUSTER-536
Project: mod_cluster
Issue Type: Bug
Components: Core & Container Integration (Java)
Affects Versions: 1.3.1.Final
Environment: WildFly 10.0.0.Final
mod_cluster-1.3.1.Final-linux2-x64-ssl
CentOS7 (virtualbox)
Reporter: Wayne Wang
Assignee: Jean-Frederic Clere
Attachments: error_log, httpd-mpm.conf, httpd.conf, server.log, standalone-full-ha-snippet.xml
I was able to configure the WildFly 10 mod_cluster subsystem to work with Apache mod_cluster (1.3.1). However, during a load test, requests sent through the web server eventually caused an error in the WildFly instance, and errors also appeared in the Apache web server log.
The error in the WildFly instance is "java.net.SocketException: Too many open files". When I used the command lsof -u <user> | grep TCP | wc -l, I could see the count grow steadily until the WildFly instance reported the error. This happened when I sent requests through the web server.
However, when I sent requests directly to the WildFly instance (the app server), the count did not grow, and the app server could handle a much heavier load without this issue.
The issue does not appear until many rounds of load tests have been executed through the web server. After a restart of the web server, everything works fine until many rounds of load tests are run again.
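The growth described above can also be watched from inside the JVM rather than with lsof. A minimal sketch, assuming a Linux /proc filesystem (the class name and approach are illustrative, not part of mod_cluster):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

// Counts this JVM's open socket file descriptors by reading /proc/self/fd.
// This is the number that climbs until the JVM throws
// "java.net.SocketException: Too many open files".
public class FdWatch {
    static long openSocketFds() throws IOException {
        try (Stream<Path> fds = Files.list(Paths.get("/proc/self/fd"))) {
            return fds.filter(p -> {
                try {
                    // Socket fds are symlinks of the form "socket:[inode]".
                    return Files.readSymbolicLink(p).toString().startsWith("socket:");
                } catch (IOException e) {
                    return false; // fd closed while we were iterating
                }
            }).count();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("open socket fds: " + openSocketFds());
    }
}
```

Polling this periodically during the load test would show whether the leaked descriptors belong to the WildFly JVM itself or to the httpd side of the connection.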
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 4 months
[JBoss JIRA] (MODCLUSTER-534) Httpd Camel case balancer name not found
by Michal Karm Babacek (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-534?page=com.atlassian.jira.pl... ]
Work on MODCLUSTER-534 started by Michal Karm Babacek.
------------------------------------------------------
> Httpd Camel case balancer name not found
> ----------------------------------------
>
> Key: MODCLUSTER-534
> URL: https://issues.jboss.org/browse/MODCLUSTER-534
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.1.Final
> Reporter: Bogdan Sikora
> Assignee: Michal Karm Babacek
>
> Apache with a camel-case balancer name sees the workers correctly:
> {noformat}
> <html><head>
> <title>Mod_cluster Status</title>
> </head><body>
> <h1>mod_cluster/1.3.1.Final</h1><a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&refresh=10">Auto Refresh</a> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=DUMP&Range=ALL">show DUMP output</a> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=INFO&Range=ALL">show INFO output</a>
> <h1> Node jboss-eap-7.0 (ajp://10.16.92.87:8009): </h1>
> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=ENABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.0">Enable Contexts</a> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=DISABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.0">Disable Contexts</a> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=STOP-APP&Range=NODE&JVMRoute=jboss-eap-7.0">Stop Contexts</a><br/>
> Balancer: QA-bAlAnCeR,LBGroup: ,Flushpackets: Off,Flushwait: 10000,Ping: 10000000,Smax: 2,Ttl: 60000000,Status: OK,Elected: 0,Read: 0,Transferred: 0,Connected: 0,Load: 44
> <h2> Virtual Host 1:</h2><h3>Contexts:</h3><pre>/clusterbench, Status: ENABLED Request: 0 <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=DISABLE-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.0&Alias=default-host&Context=/clusterbench">Disable</a> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=STOP-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.0&Alias=default-host&Context=/clusterbench">Stop</a>
> </pre><h3>Aliases:</h3><pre>default-host
> localhost
> </pre><h1> Node jboss-eap-7.0-2 (ajp://10.16.92.87:8110): </h1>
> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=ENABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.0-2">Enable Contexts</a> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=DISABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.0-2">Disable Contexts</a> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=STOP-APP&Range=NODE&JVMRoute=jboss-eap-7.0-2">Stop Contexts</a><br/>
> Balancer: QA-bAlAnCeR,LBGroup: ,Flushpackets: Off,Flushwait: 10000,Ping: 10000000,Smax: 2,Ttl: 60000000,Status: OK,Elected: 0,Read: 0,Transferred: 0,Connected: 0,Load: 36
> <h2> Virtual Host 1:</h2><h3>Contexts:</h3><pre>/clusterbench, Status: ENABLED Request: 0 <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=DISABLE-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.0-2&Alias=default-host&Context=/clusterbench">Disable</a> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=STOP-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.0-2&Alias=default-host&Context=/clusterbench">Stop</a>
> </pre><h3>Aliases:</h3><pre>default-host
> localhost
> </pre></body></html>
> {noformat}
> but a request that should be routed to the workers ends with a 404:
> {noformat}
> 07:31:15.536 [INFO] Verifying URL: http://10.16.92.87:2080/clusterbench/jvmroute for response code 200 and content to: contain ""
> Aug 23, 2016 7:31:15 AM com.gargoylesoftware.htmlunit.WebClient printContentIfNecessary
> INFO: statusCode=[404] contentType=[text/html]
> Aug 23, 2016 7:31:15 AM com.gargoylesoftware.htmlunit.WebClient printContentIfNecessary
> INFO: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
> <html><head>
> <title>404 Not Found</title>
> </head><body>
> <h1>Not Found</h1>
> <p>The requested URL /clusterbench/jvmroute was not found on this server.</p>
> <hr>
> <address>Apache/2.4.6 (Red Hat) Server at 10.16.92.87 Port 2080</address>
> </body></html>
> {noformat}
> This is part of the balancer debug log, taken from another run:
> {noformat}
> [Tue Aug 23 08:41:47.468099 2016] [authz_core:debug] [pid 17958] mod_authz_core.c(809): [client 127.0.0.1:54932] AH01626: authorization result of Require all granted: granted
> [Tue Aug 23 08:41:47.468117 2016] [authz_core:debug] [pid 17958] mod_authz_core.c(809): [client 127.0.0.1:54932] AH01626: authorization result of <RequireAny>: granted
> [Tue Aug 23 08:41:47.474707 2016] [:debug] [pid 17959] mod_proxy_cluster.c(2105): get_context_host_balancer: balancer balancer://QA-bAlAnCeR not found
> [Tue Aug 23 08:41:47.474727 2016] [:debug] [pid 17959] mod_proxy_cluster.c(2105): get_context_host_balancer: balancer balancer://QA-bAlAnCeR not found
> [Tue Aug 23 08:41:47.474767 2016] [authz_core:debug] [pid 17959] mod_authz_core.c(809): [client 127.0.0.1:49024] AH01626: authorization result of Require all granted: granted
> [Tue Aug 23 08:41:47.474770 2016] [authz_core:debug] [pid 17959] mod_authz_core.c(809): [client 127.0.0.1:49024] AH01626: authorization result of <RequireAny>: granted
> [Tue Aug 23 08:41:47.474809 2016] [core:info] [pid 17959] [client 127.0.0.1:49024] AH00128: File does not exist: /opt/jbcs-httpd24-2.4/httpd/www/html/clusterbench/jvmroute
> [Tue Aug 23 08:41:48.979620 2016] [:debug] [pid 17960] mod_proxy_cluster.c(2105): get_context_host_balancer: balancer balancer://QA-bAlAnCeR not found
> [Tue Aug 23 08:41:48.979638 2016] [:debug] [pid 17960] mod_proxy_cluster.c(2105): get_context_host_balancer: balancer balancer://QA-bAlAnCeR not found
> {noformat}
> [Full debug log|https://da.gd/57CQ]
> [Test log to debug log|http://pastebin.test.redhat.com/405176]
> (posted on the Red Hat pastebin because the Fedora paste bin was flagging me as spam)
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 4 months
[JBoss JIRA] (MODCLUSTER-383) Session draining broken: requests counting broken on load-balancer
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-383?page=com.atlassian.jira.pl... ]
RH Bugzilla Integration commented on MODCLUSTER-383:
----------------------------------------------------
Jean-frederic Clere <jclere(a)redhat.com> changed the Status of [bug 1018705|https://bugzilla.redhat.com/show_bug.cgi?id=1018705] from ASSIGNED to NEW
> Session draining broken: requests counting broken on load-balancer
> ------------------------------------------------------------------
>
> Key: MODCLUSTER-383
> URL: https://issues.jboss.org/browse/MODCLUSTER-383
> Project: mod_cluster
> Issue Type: Bug
> Affects Versions: 1.3.0.Alpha1
> Environment: Fedora 20, 64 bit, httpd 2.4.6 + mod_cluster master (21ceed3c219fc3ad743b361cafd1097ebac19dfe)
> Reporter: Radoslav Husar
> Assignee: Jean-Frederic Clere
> Priority: Critical
> Fix For: 1.3.0.Final
>
>
> The request counting is broken. It looks like a synchronization problem with dirty cached reads.
> Steps to reproduce:
> # start AS with some context
> # start LB
> # start 2 or more load driver threads
> # the pending-request count for that context climbs above 2 and keeps slowly increasing
> On the AS it manifests as:
> {noformat}
> 19:44:14,160 WARN [org.jboss.modcluster] (MSC service thread 1-7) MODCLUSTER000022: Failed to drain 57 remaining pending requests from default-host:/clusterbench within 10.0 seconds
> {noformat}
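The drift described above is consistent with an unsynchronized counter shared between request threads. An illustrative Java sketch (not the actual mod_cluster code) of how a plain counter loses updates under concurrent increment/decrement while an atomic one drains back to zero:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RequestCounter {
    static int plain = 0;                                    // racy: unsynchronized
    static final AtomicInteger atomic = new AtomicInteger(); // safe

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                // Each pair models one request arriving and completing.
                plain++; plain--;                            // non-atomic read-modify-write
                atomic.incrementAndGet(); atomic.decrementAndGet();
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // The plain counter is often non-zero after all requests finished,
        // which is exactly a "pending requests" count that never drains.
        System.out.println("plain counter after drain: " + plain);
        System.out.println("atomic counter after drain: " + atomic.get()); // 0
    }
}
```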
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 4 months
[JBoss JIRA] (MODCLUSTER-535) Httpd Camel case balancer name not found
by Bogdan Sikora (JIRA)
Bogdan Sikora created MODCLUSTER-535:
----------------------------------------
Summary: Httpd Camel case balancer name not found
Key: MODCLUSTER-535
URL: https://issues.jboss.org/browse/MODCLUSTER-535
Project: mod_cluster
Issue Type: Bug
Components: Native (httpd modules)
Affects Versions: 1.3.1.Final
Reporter: Bogdan Sikora
Assignee: Michal Karm Babacek
Apache with a camel-case balancer name sees the workers correctly:
{noformat}
<html><head>
<title>Mod_cluster Status</title>
</head><body>
<h1>mod_cluster/1.3.1.Final</h1><a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&refresh=10">Auto Refresh</a> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=DUMP&Range=ALL">show DUMP output</a> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=INFO&Range=ALL">show INFO output</a>
<h1> Node jboss-eap-7.0 (ajp://10.16.92.87:8009): </h1>
<a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=ENABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.0">Enable Contexts</a> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=DISABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.0">Disable Contexts</a> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=STOP-APP&Range=NODE&JVMRoute=jboss-eap-7.0">Stop Contexts</a><br/>
Balancer: QA-bAlAnCeR,LBGroup: ,Flushpackets: Off,Flushwait: 10000,Ping: 10000000,Smax: 2,Ttl: 60000000,Status: OK,Elected: 0,Read: 0,Transferred: 0,Connected: 0,Load: 44
<h2> Virtual Host 1:</h2><h3>Contexts:</h3><pre>/clusterbench, Status: ENABLED Request: 0 <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=DISABLE-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.0&Alias=default-host&Context=/clusterbench">Disable</a> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=STOP-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.0&Alias=default-host&Context=/clusterbench">Stop</a>
</pre><h3>Aliases:</h3><pre>default-host
localhost
</pre><h1> Node jboss-eap-7.0-2 (ajp://10.16.92.87:8110): </h1>
<a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=ENABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.0-2">Enable Contexts</a> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=DISABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.0-2">Disable Contexts</a> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=STOP-APP&Range=NODE&JVMRoute=jboss-eap-7.0-2">Stop Contexts</a><br/>
Balancer: QA-bAlAnCeR,LBGroup: ,Flushpackets: Off,Flushwait: 10000,Ping: 10000000,Smax: 2,Ttl: 60000000,Status: OK,Elected: 0,Read: 0,Transferred: 0,Connected: 0,Load: 36
<h2> Virtual Host 1:</h2><h3>Contexts:</h3><pre>/clusterbench, Status: ENABLED Request: 0 <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=DISABLE-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.0-2&Alias=default-host&Context=/clusterbench">Disable</a> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=STOP-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.0-2&Alias=default-host&Context=/clusterbench">Stop</a>
</pre><h3>Aliases:</h3><pre>default-host
localhost
</pre></body></html>
{noformat}
but a request that should be routed to the workers ends with a 404:
{noformat}
07:31:15.536 [INFO] Verifying URL: http://10.16.92.87:2080/clusterbench/jvmroute for response code 200 and content to: contain ""
Aug 23, 2016 7:31:15 AM com.gargoylesoftware.htmlunit.WebClient printContentIfNecessary
INFO: statusCode=[404] contentType=[text/html]
Aug 23, 2016 7:31:15 AM com.gargoylesoftware.htmlunit.WebClient printContentIfNecessary
INFO: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>404 Not Found</title>
</head><body>
<h1>Not Found</h1>
<p>The requested URL /clusterbench/jvmroute was not found on this server.</p>
<hr>
<address>Apache/2.4.6 (Red Hat) Server at 10.16.92.87 Port 2080</address>
</body></html>
{noformat}
This is part of the balancer debug log, taken from another run:
{noformat}
[Tue Aug 23 08:41:47.468099 2016] [authz_core:debug] [pid 17958] mod_authz_core.c(809): [client 127.0.0.1:54932] AH01626: authorization result of Require all granted: granted
[Tue Aug 23 08:41:47.468117 2016] [authz_core:debug] [pid 17958] mod_authz_core.c(809): [client 127.0.0.1:54932] AH01626: authorization result of <RequireAny>: granted
[Tue Aug 23 08:41:47.474707 2016] [:debug] [pid 17959] mod_proxy_cluster.c(2105): get_context_host_balancer: balancer balancer://QA-bAlAnCeR not found
[Tue Aug 23 08:41:47.474727 2016] [:debug] [pid 17959] mod_proxy_cluster.c(2105): get_context_host_balancer: balancer balancer://QA-bAlAnCeR not found
[Tue Aug 23 08:41:47.474767 2016] [authz_core:debug] [pid 17959] mod_authz_core.c(809): [client 127.0.0.1:49024] AH01626: authorization result of Require all granted: granted
[Tue Aug 23 08:41:47.474770 2016] [authz_core:debug] [pid 17959] mod_authz_core.c(809): [client 127.0.0.1:49024] AH01626: authorization result of <RequireAny>: granted
[Tue Aug 23 08:41:47.474809 2016] [core:info] [pid 17959] [client 127.0.0.1:49024] AH00128: File does not exist: /opt/jbcs-httpd24-2.4/httpd/www/html/clusterbench/jvmroute
[Tue Aug 23 08:41:48.979620 2016] [:debug] [pid 17960] mod_proxy_cluster.c(2105): get_context_host_balancer: balancer balancer://QA-bAlAnCeR not found
[Tue Aug 23 08:41:48.979638 2016] [:debug] [pid 17960] mod_proxy_cluster.c(2105): get_context_host_balancer: balancer balancer://QA-bAlAnCeR not found
{noformat}
[Full debug log|https://da.gd/57CQ]
[Test log to debug log|http://pastebin.test.redhat.com/405176]
(posted on the Red Hat pastebin because the Fedora paste bin was flagging me as spam)
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 4 months
[JBoss JIRA] (MODCLUSTER-534) Httpd Camel case balancer name not found
by Bogdan Sikora (JIRA)
Bogdan Sikora created MODCLUSTER-534:
----------------------------------------
Summary: Httpd Camel case balancer name not found
Key: MODCLUSTER-534
URL: https://issues.jboss.org/browse/MODCLUSTER-534
Project: mod_cluster
Issue Type: Bug
Components: Native (httpd modules)
Affects Versions: 1.3.1.Final
Reporter: Bogdan Sikora
Assignee: Michal Karm Babacek
Apache with a camel-case balancer name sees the workers correctly:
{noformat}
<html><head>
<title>Mod_cluster Status</title>
</head><body>
<h1>mod_cluster/1.3.1.Final</h1><a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&refresh=10">Auto Refresh</a> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=DUMP&Range=ALL">show DUMP output</a> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=INFO&Range=ALL">show INFO output</a>
<h1> Node jboss-eap-7.0 (ajp://10.16.92.87:8009): </h1>
<a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=ENABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.0">Enable Contexts</a> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=DISABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.0">Disable Contexts</a> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=STOP-APP&Range=NODE&JVMRoute=jboss-eap-7.0">Stop Contexts</a><br/>
Balancer: QA-bAlAnCeR,LBGroup: ,Flushpackets: Off,Flushwait: 10000,Ping: 10000000,Smax: 2,Ttl: 60000000,Status: OK,Elected: 0,Read: 0,Transferred: 0,Connected: 0,Load: 44
<h2> Virtual Host 1:</h2><h3>Contexts:</h3><pre>/clusterbench, Status: ENABLED Request: 0 <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=DISABLE-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.0&Alias=default-host&Context=/clusterbench">Disable</a> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=STOP-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.0&Alias=default-host&Context=/clusterbench">Stop</a>
</pre><h3>Aliases:</h3><pre>default-host
localhost
</pre><h1> Node jboss-eap-7.0-2 (ajp://10.16.92.87:8110): </h1>
<a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=ENABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.0-2">Enable Contexts</a> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=DISABLE-APP&Range=NODE&JVMRoute=jboss-eap-7.0-2">Disable Contexts</a> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=STOP-APP&Range=NODE&JVMRoute=jboss-eap-7.0-2">Stop Contexts</a><br/>
Balancer: QA-bAlAnCeR,LBGroup: ,Flushpackets: Off,Flushwait: 10000,Ping: 10000000,Smax: 2,Ttl: 60000000,Status: OK,Elected: 0,Read: 0,Transferred: 0,Connected: 0,Load: 36
<h2> Virtual Host 1:</h2><h3>Contexts:</h3><pre>/clusterbench, Status: ENABLED Request: 0 <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=DISABLE-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.0-2&Alias=default-host&Context=/clusterbench">Disable</a> <a href="/mcm?nonce=dbf6a14d-f4e6-46fc-88a7-a1851d9fd74e&Cmd=STOP-APP&Range=CONTEXT&JVMRoute=jboss-eap-7.0-2&Alias=default-host&Context=/clusterbench">Stop</a>
</pre><h3>Aliases:</h3><pre>default-host
localhost
</pre></body></html>
{noformat}
but a request that should be routed to the workers ends with a 404:
{noformat}
07:31:15.536 [INFO] Verifying URL: http://10.16.92.87:2080/clusterbench/jvmroute for response code 200 and content to: contain ""
Aug 23, 2016 7:31:15 AM com.gargoylesoftware.htmlunit.WebClient printContentIfNecessary
INFO: statusCode=[404] contentType=[text/html]
Aug 23, 2016 7:31:15 AM com.gargoylesoftware.htmlunit.WebClient printContentIfNecessary
INFO: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>404 Not Found</title>
</head><body>
<h1>Not Found</h1>
<p>The requested URL /clusterbench/jvmroute was not found on this server.</p>
<hr>
<address>Apache/2.4.6 (Red Hat) Server at 10.16.92.87 Port 2080</address>
</body></html>
{noformat}
This is part of the balancer debug log, taken from another run:
{noformat}
[Tue Aug 23 08:41:47.468099 2016] [authz_core:debug] [pid 17958] mod_authz_core.c(809): [client 127.0.0.1:54932] AH01626: authorization result of Require all granted: granted
[Tue Aug 23 08:41:47.468117 2016] [authz_core:debug] [pid 17958] mod_authz_core.c(809): [client 127.0.0.1:54932] AH01626: authorization result of <RequireAny>: granted
[Tue Aug 23 08:41:47.474707 2016] [:debug] [pid 17959] mod_proxy_cluster.c(2105): get_context_host_balancer: balancer balancer://QA-bAlAnCeR not found
[Tue Aug 23 08:41:47.474727 2016] [:debug] [pid 17959] mod_proxy_cluster.c(2105): get_context_host_balancer: balancer balancer://QA-bAlAnCeR not found
[Tue Aug 23 08:41:47.474767 2016] [authz_core:debug] [pid 17959] mod_authz_core.c(809): [client 127.0.0.1:49024] AH01626: authorization result of Require all granted: granted
[Tue Aug 23 08:41:47.474770 2016] [authz_core:debug] [pid 17959] mod_authz_core.c(809): [client 127.0.0.1:49024] AH01626: authorization result of <RequireAny>: granted
[Tue Aug 23 08:41:47.474809 2016] [core:info] [pid 17959] [client 127.0.0.1:49024] AH00128: File does not exist: /opt/jbcs-httpd24-2.4/httpd/www/html/clusterbench/jvmroute
[Tue Aug 23 08:41:48.979620 2016] [:debug] [pid 17960] mod_proxy_cluster.c(2105): get_context_host_balancer: balancer balancer://QA-bAlAnCeR not found
[Tue Aug 23 08:41:48.979638 2016] [:debug] [pid 17960] mod_proxy_cluster.c(2105): get_context_host_balancer: balancer balancer://QA-bAlAnCeR not found
{noformat}
[Full debug log|https://da.gd/57CQ]
[Test log to debug log|http://pastebin.test.redhat.com/405176]
(posted on the Red Hat pastebin because the Fedora paste bin was flagging me as spam)
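The repeated "balancer balancer://QA-bAlAnCeR not found" lines in the debug log point at a case-sensitive name lookup; one plausible reading (an assumption, not confirmed by this report) is that httpd stores proxy balancer names in lower case while mod_proxy_cluster compares against the mixed-case name the worker registered. A tiny illustration of the mismatch:

```java
// Hypothetical illustration of the lookup failure; class and method names
// are invented for this sketch and do not exist in mod_cluster.
public class BalancerLookup {
    static boolean foundCaseSensitive(String stored, String requested) {
        return stored.equals(requested);
    }

    static boolean foundCaseInsensitive(String stored, String requested) {
        return stored.equalsIgnoreCase(requested);
    }

    public static void main(String[] args) {
        String stored = "balancer://qa-balancer";     // name as assumed to be stored
        String requested = "balancer://QA-bAlAnCeR";  // name as configured
        System.out.println(foundCaseSensitive(stored, requested));   // false -> "not found"
        System.out.println(foundCaseInsensitive(stored, requested)); // true
    }
}
```

If this reading is correct, configuring an all-lowercase balancer name on the worker side should sidestep the mismatch until the native module is fixed.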
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 4 months
[JBoss JIRA] (MODCLUSTER-522) Memory leak in processing MCMP, wrong apr pool used for allocation
by Michal Karm Babacek (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-522?page=com.atlassian.jira.pl... ]
Work on MODCLUSTER-522 stopped by Michal Karm Babacek.
------------------------------------------------------
> Memory leak in processing MCMP, wrong apr pool used for allocation
> ------------------------------------------------------------------
>
> Key: MODCLUSTER-522
> URL: https://issues.jboss.org/browse/MODCLUSTER-522
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.1.Final, 2.0.0.Alpha1
> Environment: Solaris 10 x86_64, RHEL 6, Fedora 24, httpd 2.4.6, httpd 2.4.20
> Reporter: Michal Karm Babacek
> Assignee: Michal Karm Babacek
> Priority: Critical
> Attachments: mod_cluster-mem.jpg, mod_cluster-mem1.jpg
>
>
> There seems to be a wrong APR pool used for processing certain MCMP commands. We should use short-lifespan pools for immediate processing and server-lifetime pools only for truly persistent configuration. In the current state, with 20+ Tomcat workers (1 alias and 1 context each) and virtually no client requests, we could see a slow but steady growth of heap-allocated memory.
> TODO: Investigate the offending logic and make sure we are not using long-lived pools for immediate processing.
> Originally discovered by: [~j_sykora] and [~jmsantuci]
> Illustrative memory overview - with constant number of Tomcats:
>
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 4 months
[JBoss JIRA] (MODCLUSTER-466) mod_cluster undersizes the connection pool on httpd 2.2
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-466?page=com.atlassian.jira.pl... ]
RH Bugzilla Integration commented on MODCLUSTER-466:
----------------------------------------------------
Michal Karm Babacek <mbabacek(a)redhat.com> changed the Status of [bug 1256607|https://bugzilla.redhat.com/show_bug.cgi?id=1256607] from ASSIGNED to VERIFIED
> mod_cluster undersizes the connection pool on httpd 2.2
> -------------------------------------------------------
>
> Key: MODCLUSTER-466
> URL: https://issues.jboss.org/browse/MODCLUSTER-466
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Reporter: Aaron Ogburn
> Assignee: Jean-Frederic Clere
> Fix For: 1.3.2.Final, 1.2.13.Final
>
>
> If all threads in an httpd child worker process are saturated with long-running requests, all connections in the pool are exhausted, likely leaving none available for additional pings, which then fail with errors like:
> [error] (70007)The timeout specified has expired: proxy: ajp: failed to acquire connection for ...
> The documentation suggests the connection pool is sized to ThreadsPerChild+1 to avoid this, but on httpd 2.2 it appears to be just ThreadsPerChild.
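The sizing argument can be sketched with a semaphore standing in for the connection pool (illustrative only, not httpd code; class and method names are invented):

```java
import java.util.concurrent.Semaphore;

public class PoolSizing {
    // Returns whether a ping can still get a connection when `busyWorkers`
    // threads each hold one connection for a long-running request.
    static boolean pingGetsConnection(int poolSize, int busyWorkers) {
        Semaphore pool = new Semaphore(poolSize);
        pool.acquireUninterruptibly(busyWorkers); // all worker threads busy
        return pool.tryAcquire();                 // the ping's attempt
    }

    public static void main(String[] args) {
        int threadsPerChild = 4;
        // Pool sized to ThreadsPerChild: the ping starves and times out.
        System.out.println(pingGetsConnection(threadsPerChild, threadsPerChild));     // false
        // Pool sized to ThreadsPerChild + 1: one connection remains for the ping.
        System.out.println(pingGetsConnection(threadsPerChild + 1, threadsPerChild)); // true
    }
}
```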
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 4 months