[JBoss JIRA] (MODCLUSTER-91) Connector bind address of 0.0.0.0 propagated to proxy
by Jean-Frederic Clere (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-91?page=com.atlassian.jira.plu... ]
Jean-Frederic Clere resolved MODCLUSTER-91.
-------------------------------------------
Resolution: Done
> Connector bind address of 0.0.0.0 propagated to proxy
> -----------------------------------------------------
>
> Key: MODCLUSTER-91
> URL: https://issues.jboss.org/browse/MODCLUSTER-91
> Project: mod_cluster
> Issue Type: Bug
> Affects Versions: 1.0.1.GA
> Reporter: Brian Stansberry
> Assignee: Paul Ferraro
> Fix For: 1.1.0.Beta1
>
>
> Marek Goldmann wrote:
> > I've encountered a strange error. When I bind a JBoss instance to the
> > 0.0.0.0 address instead of a fixed Ethernet address, the node gets
> > registered in mod_cluster and shows up in mod_cluster-manager, but every
> > request to its registered contexts throws a 503 error.
> >
> > httpd error log:
> >
> > [Fri Aug 07 03:21:05 2009] [error] (111)Connection refused: proxy:
> > ajp: attempt to connect to 0.0.0.0:8009 (0.0.0.0) failed
> > [Fri Aug 07 03:21:05 2009] [error] ap_proxy_connect_backend disabling
> > worker for (0.0.0.0)
> > [Fri Aug 07 03:21:15 2009] [error] proxy: ajp: disabled connection for
> > (0.0.0.0)
> > [Fri Aug 07 03:21:25 2009] [error] proxy: ajp: disabled connection for
> > (0.0.0.0)
> >
> > This looks like a bug to me, because many administrators bind
> > JBoss to 0.0.0.0.
> The Java side needs to understand that 0.0.0.0 is useless as a client address and send something useful instead. The trick is deciding what is useful.
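> A minimal sketch of the Java-side check involved (an illustration with assumed names, not the actual mod_cluster fix; the real code must also deal with multi-homed hosts, which is exactly the "deciding what's useful" problem):
> {code:java}
> import java.net.InetAddress;
> import java.net.UnknownHostException;
> 
> public final class ConnectorAddress {
>     // Hypothetical helper: pick the address to advertise to the proxy.
>     static InetAddress advertisedAddress(InetAddress bind) throws UnknownHostException {
>         // isAnyLocalAddress() is true for the wildcard 0.0.0.0 (and ::),
>         // which a proxy cannot connect back to.
>         if (bind == null || bind.isAnyLocalAddress()) {
>             // Naive fallback; a multi-homed host may need smarter selection.
>             return InetAddress.getLocalHost();
>         }
>         return bind;
>     }
> }
> {code}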
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (MODCLUSTER-383) Session draining broken: requests counting broken on load-balancer
by Jean-Frederic Clere (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-383?page=com.atlassian.jira.pl... ]
Jean-Frederic Clere commented on MODCLUSTER-383:
------------------------------------------------
All of those have the same issue, but they don't prevent WildFly from stopping; additionally, mod_proxy_balancer has the same problem.
It would be nice to open a new JIRA for this one.
> Session draining broken: requests counting broken on load-balancer
> ------------------------------------------------------------------
>
> Key: MODCLUSTER-383
> URL: https://issues.jboss.org/browse/MODCLUSTER-383
> Project: mod_cluster
> Issue Type: Bug
> Affects Versions: 1.3.0.Alpha1
> Environment: Fedora 20, 64 bit, httpd 2.4.6 + mod_cluster master (21ceed3c219fc3ad743b361cafd1097ebac19dfe)
> Reporter: Radoslav Husar
> Assignee: Jean-Frederic Clere
> Priority: Critical
> Fix For: 1.3.0.Alpha2
>
>
> The request counting is broken; it looks like a synchronization problem with dirty cached reads (see the sketch after the log excerpt below).
> Steps to reproduce:
> # start AS with some context
> # start LB
> # start 2 or more load driver threads
> # the number of requests reported for that context climbs above 2 and keeps slowly increasing
> On the AS it manifests as:
> {noformat}
> 19:44:14,160 WARN [org.jboss.modcluster] (MSC service thread 1-7) MODCLUSTER000022: Failed to drain 57 remaining pending requests from default-host:/clusterbench within 10.0 seconds
> {noformat}
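> For illustration, a counter that avoids the stale reads described above (a sketch only; the balancer's real counters live in httpd shared memory, not Java):
> {code:java}
> import java.util.concurrent.atomic.AtomicInteger;
> 
> public final class RequestCounter {
>     // A plain non-volatile int here could be read stale by the draining
>     // thread, matching the "dirty cached reads" symptom.
>     private final AtomicInteger pending = new AtomicInteger();
> 
>     void requestStarted()  { pending.incrementAndGet(); }
>     void requestFinished() { pending.decrementAndGet(); }
> 
>     // Read by the session-draining loop; always observes the latest value.
>     int pendingRequests() { return pending.get(); }
> }
> {code}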
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (MODCLUSTER-372) Number of registered contexts negatively affects mod_cluster performance
by Jean-Frederic Clere (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-372?page=com.atlassian.jira.pl... ]
Jean-Frederic Clere resolved MODCLUSTER-372.
--------------------------------------------
Resolution: Done
Fixed by https://github.com/modcluster/mod_cluster/commit/e1913af57443d5946922e709...
> Number of registered contexts negatively affects mod_cluster performance
> ------------------------------------------------------------------------
>
> Key: MODCLUSTER-372
> URL: https://issues.jboss.org/browse/MODCLUSTER-372
> Project: mod_cluster
> Issue Type: Bug
> Affects Versions: 1.2.4.Final, 1.2.6.Final
> Environment: RHEL6, other platforms are to be confirmed.
> Reporter: Michal Babacek
> Assignee: Jean-Frederic Clere
> Labels: mod_cluster, performance
> Fix For: 1.3.0.Alpha2
>
> Attachments: 4-workers-1-context-balancer-cpu.png, 4-workers-61-context-accessing-1-balancer-cpu.png, 9-workers-1-context-balancer-cpu.png, 9-workers-31-context-accessing-1-balancer-cpu.png, 9-workers-31-context-round-robin-balancer-cpu.png, callgrind.zip, httpd.conf
>
>
> There is a performance concern regarding CPU usage on the Apache HTTP Server with mod_cluster set up as a load balancer. Perf tests revealed that the major variable affecting CPU usage is the overall number of registered contexts. Notably, these contexts don't need to be accessed at all; it's enough that they are present.
> The first test [9 workers, 31 contexts, round robin|https://issues.jboss.org/browse/MODCLUSTER-372#9workers,31contexts,...] depicts a behavior where all contexts are accessed in round-robin fashion. Note the CPU usage. If we compare it to the [9 workers, 1 context|https://issues.jboss.org/browse/MODCLUSTER-372#9workers,1context] test, it becomes apparent that something is wrong with the CPU usage. As the [9 workers, 31 contexts, access 1|https://issues.jboss.org/browse/MODCLUSTER-372#9workers,31contexts,acce...] test makes clear, accessing only one of these contexts doesn't help much.
> The last two tests, [4 workers, 61 contexts, accessing 1|https://issues.jboss.org/browse/MODCLUSTER-372#4workers,61contexts,acce...] and [4 workers, 1 context|https://issues.jboss.org/browse/MODCLUSTER-372#4workers,1context], confirm the results; the environment differs only in the number of nodes and contexts.
> [^httpd.conf] attached, stay tuned for some profiler outputs...
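> For intuition about why merely registering contexts costs CPU: if the balancer scans every registered context on every request, the per-request cost grows linearly with the context count even when only one context is ever hit. A hypothetical contrast (Java for brevity; the actual mod_cluster tables are C structures in httpd shared memory):
> {code:java}
> import java.util.List;
> import java.util.Map;
> 
> final class ContextLookup {
>     // O(n) per request: cost scales with every registered context,
>     // whether or not it is ever accessed.
>     static String findByScan(List<String> contexts, String path) {
>         for (String c : contexts) {
>             if (path.startsWith(c)) {
>                 return c;
>             }
>         }
>         return null;
>     }
> 
>     // O(1) per request for exact matches: the registration count no
>     // longer dominates the per-request cost.
>     static String findByMap(Map<String, String> pathToContext, String path) {
>         return pathToContext.get(path);
>     }
> }
> {code}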
> h3. 9 workers, 31 contexts, round robin
> !9-workers-31-context-round-robin-balancer-cpu.png|thumbnail!
> ||Nodes||Sessions||Ses.Active||Samples||samples/s||mean resp. ms||max resp. ms||conn. errors||valid samples||%||
> |9|150|150|8843|147.4|15|60|0|8843|100%|
> |9|250|250|14422|240.3|37|215|0|14422|100%|
> |9|350|350|17551|292.5|196|443|0|17551|100%|
> |9|450|450|21303|355.0|269|603|0|21303|100%|
> |9|550|550|24818|413.6|325|686|0|24818|100%|
> |9|650|650|26804|446.7|469|800|0|26804|100%|
> |9|750|750|29930|498.8|507|985|0|29930|100%|
> |9|850|850|30665|511.0|665|1185|0|30665|100%|
> |9|950|950|34421|573.6|647|1316|0|34421|100%|
> |9|1050|1050|35067|584.4|800|1487|0|35067|100%|
> |9|1150|1150|36024|600.4|925|1612|0|36024|100%|
> |9|1250|1250|36635|610.5|1030|1815|0|36635|100%|
> |9|1350|1350|38395|639.9|1096|1942|0|38395|100%|
> |9|1450|1450|39713|661.8|1170|2466|0|39713|100%|
> |9|1550|1550|39455|657.5|1392|2340|0|39455|100%|
> |9|1650|1650|39849|664.1|1465|4240|0|39849|100%|
> |9|1750|1750|42435|707.2|1454|6511|0|42435|100%|
> |9|1850|1850|44714|745.2|1498|4866|0|44714|100%|
> |9|1950|1911|46072|767.8|1554|43016|0|46072|100%|
> |9|2050|1911|44496|741.5|1617|4953|39|44457|99%|
> |9|2150|1911|43715|728.5|1632|5348|125|43590|99%|
> |9|2250|1911|41112|685.1|1764|9800|155|40957|99%|
> h3. 9 workers, 31 contexts, accessing only one of them
> !9-workers-31-context-accessing-1-balancer-cpu.png|thumbnail!
> ||Nodes||Sessions||Ses.Active||Samples||samples/s||mean resp. ms||max resp. ms||conn. errors||valid samples||%||
> |9|150|150|8954|149.2|4|42|0|8954|100%|
> |9|250|250|14897|248.3|7|33|0|14897|100%|
> |9|350|350|20784|346.4|8|68|0|20784|100%|
> |9|450|450|26748|445.8|8|69|0|26748|100%|
> |9|550|550|32553|542.5|11|92|0|32553|100%|
> |9|650|650|38558|642.6|10|60|0|38558|100%|
> |9|750|750|43571|726.1|29|329|0|43571|100%|
> |9|850|850|46133|768.8|99|494|0|46133|100%|
> |9|950|950|50854|847.5|120|501|0|50854|100%|
> |9|1050|1050|54451|907.4|154|584|0|54451|100%|
> |9|1150|1150|59961|999.3|138|674|0|59961|100%|
> |9|1250|1250|62567|1,042.6|198|675|0|62567|100%|
> |9|1350|1350|61939|1,032.2|301|799|0|61939|100%|
> |9|1450|1450|67920|1,131.9|276|844|0|67920|100%|
> |9|1550|1550|73151|1,219.1|261|861|0|73151|100%|
> |9|1650|1650|73937|1,232.2|332|955|0|73937|100%|
> |9|1750|1750|73516|1,225.2|423|1046|0|73516|100%|
> |9|1850|1850|72556|1,209.1|515|1264|0|72556|100%|
> |9|1950|1911|78613|1,310.1|454|50273|0|78613|100%|
> |9|2050|1911|80141|1,335.6|431|1225|39|80102|99%|
> |9|2150|1911|76979|1,282.9|490|1338|127|76852|99%|
> |9|2250|1911|78048|1,300.7|464|1305|136|77912|99%|
> h3. 9 workers, 1 context
> !9-workers-1-context-balancer-cpu.png|thumbnail!
> ||Nodes||Sessions||Ses.Active||Samples||samples/s||mean resp. ms||max resp. ms||conn. errors||valid samples||%||
> |9|150|150|8965|149.4|2|12|0|8965|100%|
> |9|250|250|14965|249.4|2|22|0|14965|100%|
> |9|350|350|20950|349.1|2|23|0|20950|100%|
> |9|450|450|26941|449.0|2|26|0|26941|100%|
> |9|550|550|32937|548.9|1|26|0|32937|100%|
> |9|650|650|38900|648.3|1|19|0|38900|100%|
> |9|750|750|44918|748.6|1|11|0|44918|100%|
> |9|850|850|50902|848.3|2|22|0|50902|100%|
> |9|950|950|56878|947.9|1|14|0|56878|100%|
> |9|1050|1050|62874|1,047.8|2|12|0|62874|100%|
> |9|1150|1150|68845|1,147.3|2|99|0|68845|100%|
> |9|1250|1250|74851|1,247.4|2|103|0|74851|100%|
> |9|1350|1350|80826|1,347.0|2|100|0|80826|100%|
> |9|1450|1450|86806|1,446.7|2|19|0|86806|100%|
> |9|1550|1550|92817|1,546.8|2|52|0|92817|100%|
> |9|1650|1650|98774|1,646.1|2|18|0|98774|100%|
> |9|1750|1750|104755|1,745.8|2|18|0|104755|100%|
> |9|1850|1850|110734|1,845.4|2|20|0|110734|100%|
> |9|1950|1910|113419|1,890.2|9|41855|0|113419|100%|
> |9|2050|1911|114437|1,907.1|2|77962|39|114397|99%|
> |9|2150|1911|114481|1,907.9|2|15|128|114353|99%|
> |9|2250|1911|114545|1,908.9|2|24|144|114401|99%|
> h3. 4 workers, 61 contexts, accessing only one of them
> !4-workers-61-context-accessing-1-balancer-cpu.png|thumbnail!
> ||Nodes||Sessions||Ses.Active||Samples||samples/s||mean resp. ms||max resp. ms||conn. errors||valid samples||%||
> |4|500|500|29796|496.6|6|52|0|29796|100%|
> |4|650|650|38706|645.0|7|149|0|38706|100%|
> |4|800|800|47585|793.0|8|129|0|47585|100%|
> |4|950|950|54467|907.7|43|377|0|54467|100%|
> |4|1100|1100|62500|1,041.6|54|396|0|62500|100%|
> |4|1250|1250|69446|1,157.3|81|512|0|69446|100%|
> |4|1400|1400|76217|1,270.2|97|517|0|76217|100%|
> |4|1550|1550|80216|1,336.8|152|810|0|80216|100%|
> |4|1700|1700|80797|1,346.5|271|864|0|80797|100%|
> |4|1850|1850|94172|1,569.3|182|822|0|94172|100%|
> |4|2000|1916|91014|1,516.8|253|48650|0|91014|100%|
> |4|2150|1916|95852|1,597.4|205|848|83|95769|99%|
> h3. 4 workers, 1 context
> !4-workers-1-context-balancer-cpu.png|thumbnail!
> ||Nodes||Sessions||Ses.Active||Samples||samples/s||mean resp. ms||max resp. ms||conn. errors||valid samples||%||
> |4|500|500|29922|498.7|2|39|0|29922|100%|
> |4|650|650|38923|648.7|1|39|0|38923|100%|
> |4|800|800|47916|798.5|1|17|0|47916|100%|
> |4|950|950|56896|948.2|1|17|0|56896|100%|
> |4|1100|1100|65889|1,098.1|1|115|0|65889|100%|
> |4|1250|1250|74874|1,247.8|1|101|0|74874|100%|
> |4|1400|1400|83818|1,396.8|1|17|0|83818|100%|
> |4|1550|1550|92830|1,547.0|1|17|0|92830|100%|
> |4|1700|1700|101805|1,696.6|1|11|0|101805|100%|
> |4|1850|1850|110785|1,846.3|1|11|0|110785|100%|
> |4|2000|1916|113747|1,895.6|10|53108|0|113747|100%|
> |4|2150|1916|114825|1,913.6|1|24|83|114742|99%|
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (MODCLUSTER-372) Number of registered contexts negatively affects mod_cluster performance
by Jean-Frederic Clere (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-372?page=com.atlassian.jira.pl... ]
Jean-Frederic Clere updated MODCLUSTER-372:
-------------------------------------------
Fix Version/s: (was: 1.2.8.Final)
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (MODCLUSTER-384) mod_cluster with Undertow throws java.lang.IllegalArgumentException on IPv6 system
by Michal Babacek (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-384?page=com.atlassian.jira.pl... ]
Michal Babacek closed MODCLUSTER-384.
-------------------------------------
I'm closing this "issue".
The whole situation was an outcome of two distinct test environment errors:
* There were unexpected balancers in the network, to which various other nodes with the same jvmRoutes had connected.
* Due to a stupid bug in copying the deployment resources, this path: {{deployments/clusterbench.war/clusterbench.war}} could not have produced anything but HTTP 404.
End Of File
> mod_cluster with Undertow throws java.lang.IllegalArgumentException on IPv6 system
> ----------------------------------------------------------------------------------
>
> Key: MODCLUSTER-384
> URL: https://issues.jboss.org/browse/MODCLUSTER-384
> Project: mod_cluster
> Issue Type: Bug
> Affects Versions: 1.3.0.Alpha2
> Environment: both Oracle JDK7 and OpenJDK7, RHEL6, both pure-IPv6 and dualstack
> Reporter: Michal Babacek
> Assignee: Michal Babacek
> Attachments: error_log.zip, jboss-eap-8.0-2.server.log.zip, jboss-eap-8.0.server.log.zip
>
>
> Guys, something is amiss with MCMP parsing and/or Undertow integration on IPv6 systems.
> There is this test:
> # configure and start *balancer*:httpd, *worker1*:jboss-eap-8.0, *worker2*:jboss-eap-8.0-2 (ignore this weird `jboss-eap-8`, it's just WildFly 8.0.0.Final-SNAPSHOT)
> # verify that application context is accessible via balancer
> # make a request and remember which worker processed it
> # commence a clean shutdown on that worker
> # make another request and make sure the other worker takes care of it
> # start that worker stopped in step 4.
> # wait till it's present on the mod_cluster manager console
> # stop that other worker that handled the request in step 5.
> # make a request and verify that someone is gonna take care of it
> The aforementioned test {color:green}passes{color} with exactly the same bits in an IPv4 environment, with no problems whatsoever.
> On an IPv6 system, the setup collapses with the following in the server log (server logs for both workers attached: [^jboss-eap-8.0.server.log.zip], [^jboss-eap-8.0-2.server.log.zip]):
> {noformat}
> 2014-01-30 09:00:41,279 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending CONFIG command to 2620:52:0:105f:0:0:ffff:6d/2620:52:0:105f:0:0:ffff:6d:8847, configuration will be reset: MEM: Old node still exist
> 2014-01-30 09:00:41,286 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending CONFIG command to 2620:52:0:105f:0:0:ffff:23/2620:52:0:105f:0:0:ffff:23:8847, configuration will be reset: MEM: Old node still exist
> 2014-01-30 09:00:51,308 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending CONFIG command to 2620:52:0:105f:0:0:ffff:2b/2620:52:0:105f:0:0:ffff:2b:8847, configuration will be reset: MEM: Old node still exist
> 2014-01-30 09:01:01,332 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending CONFIG command to 2620:52:0:105f:0:0:ffff:1/2620:52:0:105f:0:0:ffff:1:8847, configuration will be reset: MEM: Old node still exist
> 2014-01-30 09:01:01,338 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending STATUS command to 2620:52:0:105f:0:0:ffff:6d/2620:52:0:105f:0:0:ffff:6d:8847, configuration will be reset: MEM: Can't read node
> 2014-01-30 09:01:01,341 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending STATUS command to 2620:52:0:105f:0:0:ffff:23/2620:52:0:105f:0:0:ffff:23:8847, configuration will be reset: MEM: Can't read node
> 2014-01-30 09:01:11,350 ERROR [org.wildfly.mod_cluster.undertow.UndertowEventHandlerAdapter] (UndertowEventHandlerAdapter - 1) Node: [1],Name: jboss-eap-8.0-2,Balancer: qacluster,LBGroup: ,Host: [2620:52:0:105f:0:0:ffff:6d],Port: 8110,Type: ajp,Flushpackets: Off,Flushwait: 10,Ping: 10,Smax: 1,Ttl: 60,Elected: 8,Read: 544,Transfered: 0,Connected: 0,Load: 100
> Node: [3],Name: REMOVED,Balancer: qacluster,LBGroup: ,Host: [2620:52:0:105f:0:0:ffff:6d],Port: 8009,Type: ajp,Flushpackets: Off,Flushwait: 10,Ping: 10,Smax: 1,Ttl: 60,Elected: 5,Read: 340,Transfered: 0,Connected: 0,Load: 100
> Vhost: [2:1:1], Alias: localhost
> Vhost: [2:1:2], Alias: default-host
> Vhost: [3:1:3], Alias: default-host
> Vhost: [3:1:4], Alias: localhost
> Vhost: [1:1:5], Alias: default-host
> Vhost: [1:1:6], Alias: localhost
> Context: [2:1:1], Context: /clusterbench, Status: ENABLED
> Context: [3:1:2], Context: /clusterbench, Status: ENABLED
> Context: [1:1:3], Context: /clusterbench, Status: ENABLED
> : java.lang.IllegalArgumentException: Node: [1],Name: jboss-eap-8.0-2,Balancer: qacluster,LBGroup: ,Host: [2620:52:0:105f:0:0:ffff:6d],Port: 8110,Type: ajp,Flushpackets: Off,Flushwait: 10,Ping: 10,Smax: 1,Ttl: 60,Elected: 8,Read: 544,Transfered: 0,Connected: 0,Load: 100
> Node: [3],Name: REMOVED,Balancer: qacluster,LBGroup: ,Host: [2620:52:0:105f:0:0:ffff:6d],Port: 8009,Type: ajp,Flushpackets: Off,Flushwait: 10,Ping: 10,Smax: 1,Ttl: 60,Elected: 5,Read: 340,Transfered: 0,Connected: 0,Load: 100
> Vhost: [2:1:1], Alias: localhost
> Vhost: [2:1:2], Alias: default-host
> Vhost: [3:1:3], Alias: default-host
> Vhost: [3:1:4], Alias: localhost
> Vhost: [1:1:5], Alias: default-host
> Vhost: [1:1:6], Alias: localhost
> Context: [2:1:1], Context: /clusterbench, Status: ENABLED
> Context: [3:1:2], Context: /clusterbench, Status: ENABLED
> Context: [1:1:3], Context: /clusterbench, Status: ENABLED
> at org.jboss.modcluster.mcmp.impl.DefaultMCMPResponseParser.parseInfoResponse(DefaultMCMPResponseParser.java:96) [mod_cluster-core-1.3.0.Alpha2-SNAPSHOT.jar:1.3.0.Alpha2-SNAPSHOT]
> at org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler.status(DefaultMCMPHandler.java:381) [mod_cluster-core-1.3.0.Alpha2-SNAPSHOT.jar:1.3.0.Alpha2-SNAPSHOT]
> at org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler.status(DefaultMCMPHandler.java:350) [mod_cluster-core-1.3.0.Alpha2-SNAPSHOT.jar:1.3.0.Alpha2-SNAPSHOT]
> at org.jboss.modcluster.ModClusterService.status(ModClusterService.java:458) [mod_cluster-core-1.3.0.Alpha2-SNAPSHOT.jar:1.3.0.Alpha2-SNAPSHOT]
> at org.wildfly.mod_cluster.undertow.UndertowEventHandlerAdapter.run(UndertowEventHandlerAdapter.java:160) [wildfly-mod_cluster-undertow-8.0.0.Final-SNAPSHOT.jar:8.0.0.Final-SNAPSHOT]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [rt.jar:1.7.0_45]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) [rt.jar:1.7.0_45]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178) [rt.jar:1.7.0_45]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [rt.jar:1.7.0_45]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_45]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_45]
> at java.lang.Thread.run(Thread.java:744) [rt.jar:1.7.0_45]
> at org.jboss.threads.JBossThread.run(JBossThread.java:122) [jboss-threads-2.1.1.Final.jar:2.1.1.Final]
> {noformat}
> WDYT?
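> For reference, this failure pattern is the classic host:port ambiguity with IPv6 literals: splitting on the first ':' mangles addresses like {{[2620:52:0:105f::ffff:6d]:8110}}. A bracket-aware sketch (a hypothetical helper, not the actual DefaultMCMPResponseParser code):
> {code:java}
> import java.net.InetSocketAddress;
> 
> public final class HostPortParser {
>     // Split on the *last* ':' and honor IPv6 brackets; splitting on
>     // the first ':' breaks for any IPv6 literal.
>     static InetSocketAddress parse(String value) {
>         int colon = value.lastIndexOf(':');
>         if (colon < 0) {
>             throw new IllegalArgumentException(value);
>         }
>         String host = value.substring(0, colon);
>         int port = Integer.parseInt(value.substring(colon + 1));
>         if (host.startsWith("[") && host.endsWith("]")) {
>             // Strip the surrounding brackets from an IPv6 literal.
>             host = host.substring(1, host.length() - 1);
>         }
>         return InetSocketAddress.createUnresolved(host, port);
>     }
> }
> {code}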
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (MODCLUSTER-384) mod_cluster with Undertow throws java.lang.IllegalArgumentException on IPv6 system
by Michal Babacek (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-384?page=com.atlassian.jira.pl... ]
Michal Babacek resolved MODCLUSTER-384.
---------------------------------------
Resolution: Rejected
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira