[JBoss JIRA] (MODCLUSTER-407) worker-timeout can cause httpd thread stalls
by Aaron Ogburn (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-407?page=com.atlassian.jira.pl... ]
Aaron Ogburn updated MODCLUSTER-407:
------------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/modcluster/mod_cluster/pull/76
Instead of relying on balancer->timeout to determine recursion, I added an additional parameter to find_best_worker that indicates whether to recurse. That's a bit simpler, and I'm not seeing any thread stalls with that logic.
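For illustration, here is a minimal sketch of that approach, assuming httpd 2.2-style mod_proxy structures; the real find_best_worker in mod_proxy_cluster takes more parameters, and pick_worker here is a hypothetical stand-in for the actual single-pass selection logic:

#include "httpd.h"
#include "mod_proxy.h"

/* Hypothetical helper: one pass over the balancer's workers, returning
 * NULL when every worker is in error state. The real selection logic
 * is considerably more involved. */
static proxy_worker *pick_worker(proxy_balancer *balancer, request_rec *r)
{
    proxy_worker *worker = (proxy_worker *)balancer->workers->elts;
    int i;
    (void)r; /* real selection would also consult the request */
    for (i = 0; i < balancer->workers->nelts; i++, worker++) {
        if (!(worker->s->status & PROXY_WORKER_IN_ERROR))
            return worker;
    }
    return NULL;
}

/* Sketch of the fix: an explicit recurse parameter controls the
 * sleep-and-retry loop instead of inferring it from the shared
 * balancer->timeout field. */
static proxy_worker *find_best_worker(proxy_balancer *balancer,
                                      request_rec *r, int recurse)
{
    proxy_worker *candidate = pick_worker(balancer, r);

    if (candidate == NULL && recurse && balancer->timeout > 0) {
        apr_interval_time_t timeout = balancer->timeout;
        apr_interval_time_t step = timeout / 100, waited = 0;

        while (candidate == NULL && waited < timeout) {
            apr_sleep(step);
            /* The nested call passes recurse = 0, so it can never sleep
             * and recurse again, no matter what other threads are doing
             * to balancer->timeout concurrently. */
            candidate = find_best_worker(balancer, r, 0);
            waited += step;
        }
    }
    return candidate;
}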
> worker-timeout can cause httpd thread stalls
> --------------------------------------------
>
> Key: MODCLUSTER-407
> URL: https://issues.jboss.org/browse/MODCLUSTER-407
> Project: mod_cluster
> Issue Type: Bug
> Affects Versions: 1.2.8.Final
> Reporter: Aaron Ogburn
> Assignee: Jean-Frederic Clere
>
> Setting a mod_cluster worker-timeout can stall requests and threads on the httpd side when requests arrive while workers are in a down state. A stack trace of a stalled thread looks like the following (recursive loops through mod_proxy_cluster from frame #160 down to frame #2):
> #0 0x00007ff8eb547533 in select () from /lib64/libc.so.6
> #1 0x00007ff8eba39185 in apr_sleep () from /usr/lib64/libapr-1.so.0
> #2 0x00007ff8e84be0d1 in ?? () from /etc/httpd/modules/mod_proxy_cluster.so
> ...
> #160 0x00007ff8e84beb9f in ?? () from /etc/httpd/modules/mod_proxy_cluster.so
> #161 0x00007ff8e88d2116 in proxy_run_pre_request () from /etc/httpd/modules/mod_proxy.so
> #162 0x00007ff8e88d9186 in ap_proxy_pre_request () from /etc/httpd/modules/mod_proxy.so
> #163 0x00007ff8e88d63c2 in ?? () from /etc/httpd/modules/mod_proxy.so
[JBoss JIRA] (MODCLUSTER-407) worker-timeout can cause httpd thread stalls
by Aaron Ogburn (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-407?page=com.atlassian.jira.pl... ]
Aaron Ogburn commented on MODCLUSTER-407:
-----------------------------------------
Fortunately, there are only two places in mod_proxy_cluster that call apr_sleep: proxy_cluster_watchdog_func and find_best_worker.
proxy_cluster_watchdog_func isn't relevant here since it is a background/periodic function and not request related. find_best_worker, on the other hand, is indeed called by pre_request, and it is recursive, so it is the only function that meets all the criteria for the culprit: recursive, called by pre_request, and calling apr_sleep. The problem spot is also reached only when nodes are in an error state (which we have when killing nodes), so find_best_worker certainly looks like the culprit.
For further clarification, I added debug messages to find_best_worker indicating when it starts and ends, and when it enters and exits a recursive loop. After stopping the request load, the debug logging shows threads continuing to start new recursive loops long after incoming requests stopped. Relying on balancer->timeout alone to decide whether to recurse is bad logic, since multiple threads can get through the check at once. Then, once one thread finishes its recursive loop, it restores balancer->timeout, which can trip other threads into another recursive loop. Once a few threads get caught in recursive loops like that, they can keep each other stuck by continually resetting balancer->timeout and tripping one another back into the recursive loop.
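To make that race concrete, here is a simplified sketch of the problematic pattern, again with httpd 2.2-style structures and the hypothetical pick_worker helper from the sketch above; the actual mod_proxy_cluster source differs in detail:

#include "httpd.h"
#include "mod_proxy.h"

/* Same hypothetical single-pass selection helper as in the earlier sketch. */
static proxy_worker *pick_worker(proxy_balancer *balancer, request_rec *r);

/* Sketch of the racy pattern: re-entry into the sleep-and-retry loop is
 * gated only by the shared balancer->timeout field. */
static proxy_worker *find_best_worker(proxy_balancer *balancer, request_rec *r)
{
    proxy_worker *candidate = pick_worker(balancer, r);

    if (candidate == NULL && balancer->timeout > 0) {
        /* Several threads can pass the check above before any of them
         * zeroes the field below. */
        apr_interval_time_t timeout = balancer->timeout;
        apr_interval_time_t step = timeout / 100, waited = 0;

        balancer->timeout = 0;  /* intended to keep other threads out */
        while (candidate == NULL && waited < timeout) {
            apr_sleep(step);
            candidate = find_best_worker(balancer, r);  /* recurses */
            waited += step;
        }
        /* Restoring the field re-arms the check for threads still inside
         * their own loop: their nested calls now recurse another level,
         * and a handful of threads can keep tripping one another back in
         * indefinitely, producing deep recursive stacks like the one in
         * the issue description. */
        balancer->timeout = timeout;
    }
    return candidate;
}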
[JBoss JIRA] (MODCLUSTER-407) worker-timeout can cause httpd thread stalls
by Aaron Ogburn (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-407?page=com.atlassian.jira.pl... ]
Aaron Ogburn updated MODCLUSTER-407:
------------------------------------
Affects Version/s: 1.2.8.Final
[JBoss JIRA] (MODCLUSTER-407) worker-timeout can cause httpd thread stalls
by Aaron Ogburn (JIRA)
Aaron Ogburn created MODCLUSTER-407:
---------------------------------------
Summary: worker-timeout can cause httpd thread stalls
Key: MODCLUSTER-407
URL: https://issues.jboss.org/browse/MODCLUSTER-407
Project: mod_cluster
Issue Type: Bug
Reporter: Aaron Ogburn
Assignee: Jean-Frederic Clere
Setting a mod_cluster worker-timeout can stall requests and threads on the httpd side when requests arrive while workers are in a down state. A stack trace of a stalled thread looks like the following (recursive loops through mod_proxy_cluster from frame #160 down to frame #2):
#0 0x00007ff8eb547533 in select () from /lib64/libc.so.6
#1 0x00007ff8eba39185 in apr_sleep () from /usr/lib64/libapr-1.so.0
#2 0x00007ff8e84be0d1 in ?? () from /etc/httpd/modules/mod_proxy_cluster.so
...
#160 0x00007ff8e84beb9f in ?? () from /etc/httpd/modules/mod_proxy_cluster.so
#161 0x00007ff8e88d2116 in proxy_run_pre_request () from /etc/httpd/modules/mod_proxy.so
#162 0x00007ff8e88d9186 in ap_proxy_pre_request () from /etc/httpd/modules/mod_proxy.so
#163 0x00007ff8e88d63c2 in ?? () from /etc/httpd/modules/mod_proxy.so
[JBoss JIRA] (MODCLUSTER-372) Number of registered contexts negatively affects mod_cluster performance
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-372?page=com.atlassian.jira.pl... ]
RH Bugzilla Integration commented on MODCLUSTER-372:
----------------------------------------------------
Michal Babacek <mbabacek(a)redhat.com> changed the Status of [bug 1030965|https://bugzilla.redhat.com/show_bug.cgi?id=1030965] from ON_QA to VERIFIED
> Number of registered contexts negatively affects mod_cluster performance
> ------------------------------------------------------------------------
>
> Key: MODCLUSTER-372
> URL: https://issues.jboss.org/browse/MODCLUSTER-372
> Project: mod_cluster
> Issue Type: Bug
> Affects Versions: 1.2.4.Final, 1.2.6.Final
> Environment: RHEL6, other platforms are to be confirmed.
> Reporter: Michal Babacek
> Assignee: Jean-Frederic Clere
> Labels: mod_cluster, performace
> Fix For: 1.3.0.Final, 1.2.8.Final
>
> Attachments: 4-workers-1-context-balancer-cpu.png, 4-workers-61-context-accessing-1-balancer-cpu.png, 9-workers-1-context-balancer-cpu.png, 9-workers-31-context-accessing-1-balancer-cpu.png, 9-workers-31-context-round-robin-balancer-cpu.png, callgrind.zip, httpd.conf, performance-chart-1.2.8-flags.png, performance-chart-BZ1079156.png, performance-test-httpd.conf, performance-test-mod_cluster.conf
>
>
> There is a performance concern regarding CPU usage on the Apache HTTP Server with mod_cluster set up as a load balancer. Perf tests revealed that the major variable affecting CPU usage is the overall number of registered contexts. Notably, these contexts don't need to be accessed at all; it's enough that they are present.
> The first test, [9 workers, 31 contexts, round robin|https://issues.jboss.org/browse/MODCLUSTER-372#9workers,31contexts,...], depicts the behavior when all contexts are accessed in a round-robin fashion. Note the CPU usage. If we compare it to the [9 workers, 1 context|https://issues.jboss.org/browse/MODCLUSTER-372#9workers,1context] test, it becomes apparent that there is something wrong with the CPU usage. As is clear from the [9 workers, 31 contexts, access 1|https://issues.jboss.org/browse/MODCLUSTER-372#9workers,31contexts,acce...] test, accessing only one of these contexts doesn't help much.
> The last two tests, [4 workers, 61 contexts, accessing 1|https://issues.jboss.org/browse/MODCLUSTER-372#4workers,61contexts,acce...] and [4 workers, 1 context|https://issues.jboss.org/browse/MODCLUSTER-372#4workers,1context], confirm the results; the environments differ only in the number of nodes and contexts.
> [^httpd.conf] is attached; stay tuned for some profiler outputs...
> h3. 9 workers, 31 contexts, round robin
> !9-workers-31-context-round-robin-balancer-cpu.png|thumbnail!
> ||Nodes||Sessions||Ses.Active||Samples||samples/s||mean resp. ms||max resp. ms||conn. errors||valid samples||%||
> |9|150|150|8843|147.4|15|60|0|8843|100%|
> |9|250|250|14422|240.3|37|215|0|14422|100%|
> |9|350|350|17551|292.5|196|443|0|17551|100%|
> |9|450|450|21303|355.0|269|603|0|21303|100%|
> |9|550|550|24818|413.6|325|686|0|24818|100%|
> |9|650|650|26804|446.7|469|800|0|26804|100%|
> |9|750|750|29930|498.8|507|985|0|29930|100%|
> |9|850|850|30665|511.0|665|1185|0|30665|100%|
> |9|950|950|34421|573.6|647|1316|0|34421|100%|
> |9|1050|1050|35067|584.4|800|1487|0|35067|100%|
> |9|1150|1150|36024|600.4|925|1612|0|36024|100%|
> |9|1250|1250|36635|610.5|1030|1815|0|36635|100%|
> |9|1350|1350|38395|639.9|1096|1942|0|38395|100%|
> |9|1450|1450|39713|661.8|1170|2466|0|39713|100%|
> |9|1550|1550|39455|657.5|1392|2340|0|39455|100%|
> |9|1650|1650|39849|664.1|1465|4240|0|39849|100%|
> |9|1750|1750|42435|707.2|1454|6511|0|42435|100%|
> |9|1850|1850|44714|745.2|1498|4866|0|44714|100%|
> |9|1950|1911|46072|767.8|1554|43016|0|46072|100%|
> |9|2050|1911|44496|741.5|1617|4953|39|44457|99%|
> |9|2150|1911|43715|728.5|1632|5348|125|43590|99%|
> |9|2250|1911|41112|685.1|1764|9800|155|40957|99%|
> h3. 9 workers, 31 contexts, accessing only one of them
> !9-workers-31-context-accessing-1-balancer-cpu.png|thumbnail!
> ||Nodes||Sessions||Ses.Active||Samples||samples/s||mean resp. ms||max resp. ms||conn. errors||valid samples||%||
> |9|150|150|8954|149.2|4|42|0|8954|100%|
> |9|250|250|14897|248.3|7|33|0|14897|100%|
> |9|350|350|20784|346.4|8|68|0|20784|100%|
> |9|450|450|26748|445.8|8|69|0|26748|100%|
> |9|550|550|32553|542.5|11|92|0|32553|100%|
> |9|650|650|38558|642.6|10|60|0|38558|100%|
> |9|750|750|43571|726.1|29|329|0|43571|100%|
> |9|850|850|46133|768.8|99|494|0|46133|100%|
> |9|950|950|50854|847.5|120|501|0|50854|100%|
> |9|1050|1050|54451|907.4|154|584|0|54451|100%|
> |9|1150|1150|59961|999.3|138|674|0|59961|100%|
> |9|1250|1250|62567|1,042.6|198|675|0|62567|100%|
> |9|1350|1350|61939|1,032.2|301|799|0|61939|100%|
> |9|1450|1450|67920|1,131.9|276|844|0|67920|100%|
> |9|1550|1550|73151|1,219.1|261|861|0|73151|100%|
> |9|1650|1650|73937|1,232.2|332|955|0|73937|100%|
> |9|1750|1750|73516|1,225.2|423|1046|0|73516|100%|
> |9|1850|1850|72556|1,209.1|515|1264|0|72556|100%|
> |9|1950|1911|78613|1,310.1|454|50273|0|78613|100%|
> |9|2050|1911|80141|1,335.6|431|1225|39|80102|99%|
> |9|2150|1911|76979|1,282.9|490|1338|127|76852|99%|
> |9|2250|1911|78048|1,300.7|464|1305|136|77912|99%|
> h3. 9 workers, 1 context
> !9-workers-1-context-balancer-cpu.png|thumbnail!
> ||Nodes||Sessions||Ses.Active||Samples||samples/s||mean resp. ms||max resp. ms||conn. errors||valid samples||%||
> |9|150|150|8965|149.4|2|12|0|8965|100%|
> |9|250|250|14965|249.4|2|22|0|14965|100%|
> |9|350|350|20950|349.1|2|23|0|20950|100%|
> |9|450|450|26941|449.0|2|26|0|26941|100%|
> |9|550|550|32937|548.9|1|26|0|32937|100%|
> |9|650|650|38900|648.3|1|19|0|38900|100%|
> |9|750|750|44918|748.6|1|11|0|44918|100%|
> |9|850|850|50902|848.3|2|22|0|50902|100%|
> |9|950|950|56878|947.9|1|14|0|56878|100%|
> |9|1050|1050|62874|1,047.8|2|12|0|62874|100%|
> |9|1150|1150|68845|1,147.3|2|99|0|68845|100%|
> |9|1250|1250|74851|1,247.4|2|103|0|74851|100%|
> |9|1350|1350|80826|1,347.0|2|100|0|80826|100%|
> |9|1450|1450|86806|1,446.7|2|19|0|86806|100%|
> |9|1550|1550|92817|1,546.8|2|52|0|92817|100%|
> |9|1650|1650|98774|1,646.1|2|18|0|98774|100%|
> |9|1750|1750|104755|1,745.8|2|18|0|104755|100%|
> |9|1850|1850|110734|1,845.4|2|20|0|110734|100%|
> |9|1950|1910|113419|1,890.2|9|41855|0|113419|100%|
> |9|2050|1911|114437|1,907.1|2|77962|39|114397|99%|
> |9|2150|1911|114481|1,907.9|2|15|128|114353|99%|
> |9|2250|1911|114545|1,908.9|2|24|144|114401|99%|
> h3. 4 workers, 61 contexts, accessing only one of them
> !4-workers-61-context-accessing-1-balancer-cpu.png|thumbnail!
> ||Nodes||Sessions||Ses.Active||Samples||samples/s||mean resp. ms||max resp. ms||conn. errors||valid samples||%||
> |4|500|500|29796|496.6|6|52|0|29796|100%|
> |4|650|650|38706|645.0|7|149|0|38706|100%|
> |4|800|800|47585|793.0|8|129|0|47585|100%|
> |4|950|950|54467|907.7|43|377|0|54467|100%|
> |4|1100|1100|62500|1,041.6|54|396|0|62500|100%|
> |4|1250|1250|69446|1,157.3|81|512|0|69446|100%|
> |4|1400|1400|76217|1,270.2|97|517|0|76217|100%|
> |4|1550|1550|80216|1,336.8|152|810|0|80216|100%|
> |4|1700|1700|80797|1,346.5|271|864|0|80797|100%|
> |4|1850|1850|94172|1,569.3|182|822|0|94172|100%|
> |4|2000|1916|91014|1,516.8|253|48650|0|91014|100%|
> |4|2150|1916|95852|1,597.4|205|848|83|95769|99%|
> h3. 4 workers, 1 context
> !4-workers-1-context-balancer-cpu.png|thumbnail!
> ||Nodes||Sessions||Ses.Active||Samples||samples/s||mean resp. ms||max resp. ms||conn. errors||valid samples||%||
> |4|500|500|29922|498.7|2|39|0|29922|100%|
> |4|650|650|38923|648.7|1|39|0|38923|100%|
> |4|800|800|47916|798.5|1|17|0|47916|100%|
> |4|950|950|56896|948.2|1|17|0|56896|100%|
> |4|1100|1100|65889|1,098.1|1|115|0|65889|100%|
> |4|1250|1250|74874|1,247.8|1|101|0|74874|100%|
> |4|1400|1400|83818|1,396.8|1|17|0|83818|100%|
> |4|1550|1550|92830|1,547.0|1|17|0|92830|100%|
> |4|1700|1700|101805|1,696.6|1|11|0|101805|100%|
> |4|1850|1850|110785|1,846.3|1|11|0|110785|100%|
> |4|2000|1916|113747|1,895.6|10|53108|0|113747|100%|
> |4|2150|1916|114825|1,913.6|1|24|83|114742|99%|
[JBoss JIRA] (MODCLUSTER-372) Number of registered contexts negatively affects mod_cluster performance
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-372?page=com.atlassian.jira.pl... ]
RH Bugzilla Integration commented on MODCLUSTER-372:
----------------------------------------------------
Michal Babacek <mbabacek(a)redhat.com> changed the Status of [bug 1030965|https://bugzilla.redhat.com/show_bug.cgi?id=1030965] from MODIFIED to ON_QA
[JBoss JIRA] (MODCLUSTER-406) mod_rewrite PT is broken again
by Aaron Ogburn (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-406?page=com.atlassian.jira.pl... ]
Aaron Ogburn commented on MODCLUSTER-406:
-----------------------------------------
Actually, I see this is caused by and fixed by the same thing as MODCLUSTER-335 :). Closing.
> mod_rewrite PT is broken again
> ------------------------------
>
> Key: MODCLUSTER-406
> URL: https://issues.jboss.org/browse/MODCLUSTER-406
> Project: mod_cluster
> Issue Type: Bug
> Affects Versions: 1.2.4.Final
> Reporter: Aaron Ogburn
> Assignee: Jean-Frederic Clere
> Fix For: 1.2.5.Final
>
>
> RewriteRule ^/(.*)$ balancer://mycluster/myapp/$1 [L,PT]
> This doesn't work; it fails with a 400 Bad Request, and httpd complains about an invalid URI. This was previously fixed by MODCLUSTER-213, and it looks like it was broken again in 1.2.4.Final+ by MODCLUSTER-328.
[JBoss JIRA] (MODCLUSTER-406) mod_rewrite PT is broken again
by Aaron Ogburn (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-406?page=com.atlassian.jira.pl... ]
Aaron Ogburn closed MODCLUSTER-406.
-----------------------------------
Fix Version/s: 1.2.5.Final
Resolution: Duplicate Issue
[JBoss JIRA] (MODCLUSTER-406) mod_rewrite PT is broken again
by Aaron Ogburn (JIRA)
Aaron Ogburn created MODCLUSTER-406:
---------------------------------------
Summary: mod_rewrite PT is broken again
Key: MODCLUSTER-406
URL: https://issues.jboss.org/browse/MODCLUSTER-406
Project: mod_cluster
Issue Type: Bug
Affects Versions: 1.2.8.Final, 1.3.0.Final, 1.2.4.Final
Reporter: Aaron Ogburn
Assignee: Jean-Frederic Clere
RewriteRule ^/(.*)$ balancer://mycluster/myapp/$1 [L,PT]
This doesn't work; it fails with a 400 Bad Request, and httpd complains about an invalid URI. This was previously fixed by MODCLUSTER-213, and it looks like it was broken again in 1.2.4.Final+ by MODCLUSTER-328.