[JBoss JIRA] (MODCLUSTER-384) mod_cluster with Undertow throws java.lang.IllegalArgumentException on IPv6 system
by Michal Babacek (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-384?page=com.atlassian.jira.pl... ]
Michal Babacek updated MODCLUSTER-384:
--------------------------------------
Attachment: error_log.zip
Attaching the Apache HTTP Server [^error_log.zip] for a better understanding of the test. Note that the balancer is running mod_cluster 1.2.6; however, as I stated in the original description, this same setup runs just fine in an IPv4 environment.
> mod_cluster with Undertow throws java.lang.IllegalArgumentException on IPv6 system
> ----------------------------------------------------------------------------------
>
> Key: MODCLUSTER-384
> URL: https://issues.jboss.org/browse/MODCLUSTER-384
> Project: mod_cluster
> Issue Type: Bug
> Affects Versions: 1.3.0.Alpha2
> Environment: both Oracle JDK7 and OpenJDK7, RHEL6, both pure-IPv6 and dualstack
> Reporter: Michal Babacek
> Assignee: Radoslav Husar
> Attachments: error_log.zip, jboss-eap-8.0-2.server.log.zip, jboss-eap-8.0.server.log.zip
>
>
> Guys, something is amiss with MCMP parsing and/or Undertow integration on IPv6 systems.
> The test goes as follows:
> # configure and start the *balancer*: httpd, *worker1*: jboss-eap-8.0, *worker2*: jboss-eap-8.0-2 (ignore the odd `jboss-eap-8` naming; it is just WildFly 8.0.0.Final-SNAPSHOT)
> # verify that the application context is accessible via the balancer
> # make a request and remember which worker processed it
> # commence a clean shutdown of that worker
> # make another request and make sure the other worker handles it
> # start the worker that was stopped in step 4
> # wait until it is present in the mod_cluster manager console
> # stop the other worker, i.e. the one that handled the request in step 5
> # make a request and verify that it is still handled
> The aforementioned test {color:green}passes{color} with exactly the same bits in an IPv4 environment, with no problems whatsoever.
> On an IPv6 system, the setup collapses with the following in the server log (server logs for both workers are attached: [^jboss-eap-8.0.server.log.zip], [^jboss-eap-8.0-2.server.log.zip]):
> {noformat}
> 2014-01-30 09:00:41,279 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending CONFIG command to 2620:52:0:105f:0:0:ffff:6d/2620:52:0:105f:0:0:ffff:6d:8847, configuration will be reset: MEM: Old node still exist
> 2014-01-30 09:00:41,286 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending CONFIG command to 2620:52:0:105f:0:0:ffff:23/2620:52:0:105f:0:0:ffff:23:8847, configuration will be reset: MEM: Old node still exist
> 2014-01-30 09:00:51,308 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending CONFIG command to 2620:52:0:105f:0:0:ffff:2b/2620:52:0:105f:0:0:ffff:2b:8847, configuration will be reset: MEM: Old node still exist
> 2014-01-30 09:01:01,332 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending CONFIG command to 2620:52:0:105f:0:0:ffff:1/2620:52:0:105f:0:0:ffff:1:8847, configuration will be reset: MEM: Old node still exist
> 2014-01-30 09:01:01,338 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending STATUS command to 2620:52:0:105f:0:0:ffff:6d/2620:52:0:105f:0:0:ffff:6d:8847, configuration will be reset: MEM: Can't read node
> 2014-01-30 09:01:01,341 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending STATUS command to 2620:52:0:105f:0:0:ffff:23/2620:52:0:105f:0:0:ffff:23:8847, configuration will be reset: MEM: Can't read node
> 2014-01-30 09:01:11,350 ERROR [org.wildfly.mod_cluster.undertow.UndertowEventHandlerAdapter] (UndertowEventHandlerAdapter - 1) Node: [1],Name: jboss-eap-8.0-2,Balancer: qacluster,LBGroup: ,Host: [2620:52:0:105f:0:0:ffff:6d],Port: 8110,Type: ajp,Flushpackets: Off,Flushwait: 10,Ping: 10,Smax: 1,Ttl: 60,Elected: 8,Read: 544,Transfered: 0,Connected: 0,Load: 100
> Node: [3],Name: REMOVED,Balancer: qacluster,LBGroup: ,Host: [2620:52:0:105f:0:0:ffff:6d],Port: 8009,Type: ajp,Flushpackets: Off,Flushwait: 10,Ping: 10,Smax: 1,Ttl: 60,Elected: 5,Read: 340,Transfered: 0,Connected: 0,Load: 100
> Vhost: [2:1:1], Alias: localhost
> Vhost: [2:1:2], Alias: default-host
> Vhost: [3:1:3], Alias: default-host
> Vhost: [3:1:4], Alias: localhost
> Vhost: [1:1:5], Alias: default-host
> Vhost: [1:1:6], Alias: localhost
> Context: [2:1:1], Context: /clusterbench, Status: ENABLED
> Context: [3:1:2], Context: /clusterbench, Status: ENABLED
> Context: [1:1:3], Context: /clusterbench, Status: ENABLED
> : java.lang.IllegalArgumentException: Node: [1],Name: jboss-eap-8.0-2,Balancer: qacluster,LBGroup: ,Host: [2620:52:0:105f:0:0:ffff:6d],Port: 8110,Type: ajp,Flushpackets: Off,Flushwait: 10,Ping: 10,Smax: 1,Ttl: 60,Elected: 8,Read: 544,Transfered: 0,Connected: 0,Load: 100
> Node: [3],Name: REMOVED,Balancer: qacluster,LBGroup: ,Host: [2620:52:0:105f:0:0:ffff:6d],Port: 8009,Type: ajp,Flushpackets: Off,Flushwait: 10,Ping: 10,Smax: 1,Ttl: 60,Elected: 5,Read: 340,Transfered: 0,Connected: 0,Load: 100
> Vhost: [2:1:1], Alias: localhost
> Vhost: [2:1:2], Alias: default-host
> Vhost: [3:1:3], Alias: default-host
> Vhost: [3:1:4], Alias: localhost
> Vhost: [1:1:5], Alias: default-host
> Vhost: [1:1:6], Alias: localhost
> Context: [2:1:1], Context: /clusterbench, Status: ENABLED
> Context: [3:1:2], Context: /clusterbench, Status: ENABLED
> Context: [1:1:3], Context: /clusterbench, Status: ENABLED
> at org.jboss.modcluster.mcmp.impl.DefaultMCMPResponseParser.parseInfoResponse(DefaultMCMPResponseParser.java:96) [mod_cluster-core-1.3.0.Alpha2-SNAPSHOT.jar:1.3.0.Alpha2-SNAPSHOT]
> at org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler.status(DefaultMCMPHandler.java:381) [mod_cluster-core-1.3.0.Alpha2-SNAPSHOT.jar:1.3.0.Alpha2-SNAPSHOT]
> at org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler.status(DefaultMCMPHandler.java:350) [mod_cluster-core-1.3.0.Alpha2-SNAPSHOT.jar:1.3.0.Alpha2-SNAPSHOT]
> at org.jboss.modcluster.ModClusterService.status(ModClusterService.java:458) [mod_cluster-core-1.3.0.Alpha2-SNAPSHOT.jar:1.3.0.Alpha2-SNAPSHOT]
> at org.wildfly.mod_cluster.undertow.UndertowEventHandlerAdapter.run(UndertowEventHandlerAdapter.java:160) [wildfly-mod_cluster-undertow-8.0.0.Final-SNAPSHOT.jar:8.0.0.Final-SNAPSHOT]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [rt.jar:1.7.0_45]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) [rt.jar:1.7.0_45]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178) [rt.jar:1.7.0_45]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [rt.jar:1.7.0_45]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_45]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_45]
> at java.lang.Thread.run(Thread.java:744) [rt.jar:1.7.0_45]
> at org.jboss.threads.JBossThread.run(JBossThread.java:122) [jboss-threads-2.1.1.Final.jar:2.1.1.Final]
> {noformat}
> WDYT?
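For readers trying to pin down where an IllegalArgumentException like the one above can come from, here is a minimal, hypothetical Java sketch (it is not the mod_cluster parser itself) of the usual IPv6 pitfall: splitting a host:port string on ':' works for IPv4 addresses and hostnames, but misreads an unbracketed IPv6 literal such as the ones in the MODCLUSTER000042 lines, whereas a bracket-aware parse handles the bracketed `Host: [2620:...]` form that appears in the INFO response.
{noformat}
// Illustration only -- a hedged sketch, not the mod_cluster source code.
import java.net.InetSocketAddress;

public class HostPortParsing {

    // Naive parse: split on the first ':' -- fine for IPv4 addresses and hostnames only.
    static InetSocketAddress naiveParse(String hostPort) {
        int colon = hostPort.indexOf(':');
        String host = hostPort.substring(0, colon);
        // For "2620:52:...:8009" this tries to parse "52:0:...:8009" as a port number.
        int port = Integer.parseInt(hostPort.substring(colon + 1));
        return InetSocketAddress.createUnresolved(host, port);
    }

    // Bracket-aware parse: "[...]" is the host literal, the port follows the closing bracket.
    static InetSocketAddress bracketAwareParse(String hostPort) {
        String host;
        int portStart;
        if (hostPort.startsWith("[")) {
            int end = hostPort.indexOf(']');
            host = hostPort.substring(1, end);
            portStart = end + 2; // skip "]:"
        } else {
            int colon = hostPort.lastIndexOf(':');
            host = hostPort.substring(0, colon);
            portStart = colon + 1;
        }
        return InetSocketAddress.createUnresolved(host, Integer.parseInt(hostPort.substring(portStart)));
    }

    public static void main(String[] args) {
        // Bracketed form, as printed in the INFO response above -- parses cleanly.
        System.out.println(bracketAwareParse("[2620:52:0:105f:0:0:ffff:6d]:8009"));
        try {
            // Unbracketed form, as printed in the MODCLUSTER000042 lines -- the naive split fails.
            naiveParse("2620:52:0:105f:0:0:ffff:6d:8009");
        } catch (RuntimeException e) {
            System.out.println("naive parse fails: " + e);
        }
    }
}
{noformat}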
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (MODCLUSTER-384) mod_cluster with Undertow throws java.lang.IllegalArgumentException on IPv6 system
by Michal Babacek (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-384?page=com.atlassian.jira.pl... ]
Michal Babacek updated MODCLUSTER-384:
--------------------------------------
Attachment: jboss-eap-8.0-2.server.log.zip
jboss-eap-8.0.server.log.zip
> mod_cluster with Undertow throws java.lang.IllegalArgumentException on IPv6 system
> ----------------------------------------------------------------------------------
>
> Key: MODCLUSTER-384
> URL: https://issues.jboss.org/browse/MODCLUSTER-384
> Project: mod_cluster
> Issue Type: Bug
> Affects Versions: 1.3.0.Alpha2
> Environment: both Oracle JDK7 and OpenJDK7, RHEL6, both pure-IPv6 and dualstack
> Reporter: Michal Babacek
> Assignee: Radoslav Husar
> Attachments: jboss-eap-8.0-2.server.log.zip, jboss-eap-8.0.server.log.zip
>
>
> Guys, something is amiss with MCMP parsing and/or Undertow integration on IPv6 systems.
> The test goes as follows:
> # configure and start the *balancer*: httpd, *worker1*: jboss-eap-8.0, *worker2*: jboss-eap-8.0-2 (ignore the odd `jboss-eap-8` naming; it is just WildFly 8.0.0.Final-SNAPSHOT)
> # verify that the application context is accessible via the balancer
> # make a request and remember which worker processed it
> # commence a clean shutdown of that worker
> # make another request and make sure the other worker handles it
> # start the worker that was stopped in step 4
> # wait until it is present in the mod_cluster manager console
> # stop the other worker, i.e. the one that handled the request in step 5
> # make a request and verify that it is still handled
> The aforementioned test {color:green}passes{color} with exactly the same bits in an IPv4 environment, with no problems whatsoever.
> On an IPv6 system, the setup collapses with the following in the server log (server logs for both workers are attached: [^jboss-eap-8.0.server.log.zip], [^jboss-eap-8.0-2.server.log.zip]):
> {noformat}
> 2014-01-30 09:00:41,279 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending CONFIG command to 2620:52:0:105f:0:0:ffff:6d/2620:52:0:105f:0:0:ffff:6d:8847, configuration will be reset: MEM: Old node still exist
> 2014-01-30 09:00:41,286 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending CONFIG command to 2620:52:0:105f:0:0:ffff:23/2620:52:0:105f:0:0:ffff:23:8847, configuration will be reset: MEM: Old node still exist
> 2014-01-30 09:00:51,308 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending CONFIG command to 2620:52:0:105f:0:0:ffff:2b/2620:52:0:105f:0:0:ffff:2b:8847, configuration will be reset: MEM: Old node still exist
> 2014-01-30 09:01:01,332 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending CONFIG command to 2620:52:0:105f:0:0:ffff:1/2620:52:0:105f:0:0:ffff:1:8847, configuration will be reset: MEM: Old node still exist
> 2014-01-30 09:01:01,338 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending STATUS command to 2620:52:0:105f:0:0:ffff:6d/2620:52:0:105f:0:0:ffff:6d:8847, configuration will be reset: MEM: Can't read node
> 2014-01-30 09:01:01,341 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending STATUS command to 2620:52:0:105f:0:0:ffff:23/2620:52:0:105f:0:0:ffff:23:8847, configuration will be reset: MEM: Can't read node
> 2014-01-30 09:01:11,350 ERROR [org.wildfly.mod_cluster.undertow.UndertowEventHandlerAdapter] (UndertowEventHandlerAdapter - 1) Node: [1],Name: jboss-eap-8.0-2,Balancer: qacluster,LBGroup: ,Host: [2620:52:0:105f:0:0:ffff:6d],Port: 8110,Type: ajp,Flushpackets: Off,Flushwait: 10,Ping: 10,Smax: 1,Ttl: 60,Elected: 8,Read: 544,Transfered: 0,Connected: 0,Load: 100
> Node: [3],Name: REMOVED,Balancer: qacluster,LBGroup: ,Host: [2620:52:0:105f:0:0:ffff:6d],Port: 8009,Type: ajp,Flushpackets: Off,Flushwait: 10,Ping: 10,Smax: 1,Ttl: 60,Elected: 5,Read: 340,Transfered: 0,Connected: 0,Load: 100
> Vhost: [2:1:1], Alias: localhost
> Vhost: [2:1:2], Alias: default-host
> Vhost: [3:1:3], Alias: default-host
> Vhost: [3:1:4], Alias: localhost
> Vhost: [1:1:5], Alias: default-host
> Vhost: [1:1:6], Alias: localhost
> Context: [2:1:1], Context: /clusterbench, Status: ENABLED
> Context: [3:1:2], Context: /clusterbench, Status: ENABLED
> Context: [1:1:3], Context: /clusterbench, Status: ENABLED
> : java.lang.IllegalArgumentException: Node: [1],Name: jboss-eap-8.0-2,Balancer: qacluster,LBGroup: ,Host: [2620:52:0:105f:0:0:ffff:6d],Port: 8110,Type: ajp,Flushpackets: Off,Flushwait: 10,Ping: 10,Smax: 1,Ttl: 60,Elected: 8,Read: 544,Transfered: 0,Connected: 0,Load: 100
> Node: [3],Name: REMOVED,Balancer: qacluster,LBGroup: ,Host: [2620:52:0:105f:0:0:ffff:6d],Port: 8009,Type: ajp,Flushpackets: Off,Flushwait: 10,Ping: 10,Smax: 1,Ttl: 60,Elected: 5,Read: 340,Transfered: 0,Connected: 0,Load: 100
> Vhost: [2:1:1], Alias: localhost
> Vhost: [2:1:2], Alias: default-host
> Vhost: [3:1:3], Alias: default-host
> Vhost: [3:1:4], Alias: localhost
> Vhost: [1:1:5], Alias: default-host
> Vhost: [1:1:6], Alias: localhost
> Context: [2:1:1], Context: /clusterbench, Status: ENABLED
> Context: [3:1:2], Context: /clusterbench, Status: ENABLED
> Context: [1:1:3], Context: /clusterbench, Status: ENABLED
> at org.jboss.modcluster.mcmp.impl.DefaultMCMPResponseParser.parseInfoResponse(DefaultMCMPResponseParser.java:96) [mod_cluster-core-1.3.0.Alpha2-SNAPSHOT.jar:1.3.0.Alpha2-SNAPSHOT]
> at org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler.status(DefaultMCMPHandler.java:381) [mod_cluster-core-1.3.0.Alpha2-SNAPSHOT.jar:1.3.0.Alpha2-SNAPSHOT]
> at org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler.status(DefaultMCMPHandler.java:350) [mod_cluster-core-1.3.0.Alpha2-SNAPSHOT.jar:1.3.0.Alpha2-SNAPSHOT]
> at org.jboss.modcluster.ModClusterService.status(ModClusterService.java:458) [mod_cluster-core-1.3.0.Alpha2-SNAPSHOT.jar:1.3.0.Alpha2-SNAPSHOT]
> at org.wildfly.mod_cluster.undertow.UndertowEventHandlerAdapter.run(UndertowEventHandlerAdapter.java:160) [wildfly-mod_cluster-undertow-8.0.0.Final-SNAPSHOT.jar:8.0.0.Final-SNAPSHOT]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [rt.jar:1.7.0_45]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) [rt.jar:1.7.0_45]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178) [rt.jar:1.7.0_45]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [rt.jar:1.7.0_45]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_45]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_45]
> at java.lang.Thread.run(Thread.java:744) [rt.jar:1.7.0_45]
> at org.jboss.threads.JBossThread.run(JBossThread.java:122) [jboss-threads-2.1.1.Final.jar:2.1.1.Final]
> {noformat}
> WDYT?
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (MODCLUSTER-384) mod_cluster with Undertow throws java.lang.IllegalArgumentException on IPv6 system
by Michal Babacek (JIRA)
Michal Babacek created MODCLUSTER-384:
-----------------------------------------
Summary: mod_cluster with Undertow throws java.lang.IllegalArgumentException on IPv6 system
Key: MODCLUSTER-384
URL: https://issues.jboss.org/browse/MODCLUSTER-384
Project: mod_cluster
Issue Type: Bug
Affects Versions: 1.3.0.Alpha2
Environment: both Oracle JDK7 and OpenJDK7, RHEL6, both pure-IPv6 and dualstack
Reporter: Michal Babacek
Assignee: Radoslav Husar
Guys, something is amiss with MCMP parsing and/or Undertow integration on IPv6 systems.
The test goes as follows:
# configure and start the *balancer*: httpd, *worker1*: jboss-eap-8.0, *worker2*: jboss-eap-8.0-2 (ignore the odd `jboss-eap-8` naming; it is just WildFly 8.0.0.Final-SNAPSHOT)
# verify that the application context is accessible via the balancer
# make a request and remember which worker processed it
# commence a clean shutdown of that worker
# make another request and make sure the other worker handles it
# start the worker that was stopped in step 4
# wait until it is present in the mod_cluster manager console
# stop the other worker, i.e. the one that handled the request in step 5
# make a request and verify that it is still handled
The aforementioned test {color:green}passes{color} with exactly the same bits in an IPv4 environment, with no problems whatsoever.
On an IPv6 system, the setup collapses with the following in the server log (server logs for both workers are attached: [^jboss-eap-8.0.server.log.zip], [^jboss-eap-8.0-2.server.log.zip]):
{noformat}
2014-01-30 09:00:41,279 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending CONFIG command to 2620:52:0:105f:0:0:ffff:6d/2620:52:0:105f:0:0:ffff:6d:8847, configuration will be reset: MEM: Old node still exist
2014-01-30 09:00:41,286 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending CONFIG command to 2620:52:0:105f:0:0:ffff:23/2620:52:0:105f:0:0:ffff:23:8847, configuration will be reset: MEM: Old node still exist
2014-01-30 09:00:51,308 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending CONFIG command to 2620:52:0:105f:0:0:ffff:2b/2620:52:0:105f:0:0:ffff:2b:8847, configuration will be reset: MEM: Old node still exist
2014-01-30 09:01:01,332 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending CONFIG command to 2620:52:0:105f:0:0:ffff:1/2620:52:0:105f:0:0:ffff:1:8847, configuration will be reset: MEM: Old node still exist
2014-01-30 09:01:01,338 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending STATUS command to 2620:52:0:105f:0:0:ffff:6d/2620:52:0:105f:0:0:ffff:6d:8847, configuration will be reset: MEM: Can't read node
2014-01-30 09:01:01,341 ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending STATUS command to 2620:52:0:105f:0:0:ffff:23/2620:52:0:105f:0:0:ffff:23:8847, configuration will be reset: MEM: Can't read node
2014-01-30 09:01:11,350 ERROR [org.wildfly.mod_cluster.undertow.UndertowEventHandlerAdapter] (UndertowEventHandlerAdapter - 1) Node: [1],Name: jboss-eap-8.0-2,Balancer: qacluster,LBGroup: ,Host: [2620:52:0:105f:0:0:ffff:6d],Port: 8110,Type: ajp,Flushpackets: Off,Flushwait: 10,Ping: 10,Smax: 1,Ttl: 60,Elected: 8,Read: 544,Transfered: 0,Connected: 0,Load: 100
Node: [3],Name: REMOVED,Balancer: qacluster,LBGroup: ,Host: [2620:52:0:105f:0:0:ffff:6d],Port: 8009,Type: ajp,Flushpackets: Off,Flushwait: 10,Ping: 10,Smax: 1,Ttl: 60,Elected: 5,Read: 340,Transfered: 0,Connected: 0,Load: 100
Vhost: [2:1:1], Alias: localhost
Vhost: [2:1:2], Alias: default-host
Vhost: [3:1:3], Alias: default-host
Vhost: [3:1:4], Alias: localhost
Vhost: [1:1:5], Alias: default-host
Vhost: [1:1:6], Alias: localhost
Context: [2:1:1], Context: /clusterbench, Status: ENABLED
Context: [3:1:2], Context: /clusterbench, Status: ENABLED
Context: [1:1:3], Context: /clusterbench, Status: ENABLED
: java.lang.IllegalArgumentException: Node: [1],Name: jboss-eap-8.0-2,Balancer: qacluster,LBGroup: ,Host: [2620:52:0:105f:0:0:ffff:6d],Port: 8110,Type: ajp,Flushpackets: Off,Flushwait: 10,Ping: 10,Smax: 1,Ttl: 60,Elected: 8,Read: 544,Transfered: 0,Connected: 0,Load: 100
Node: [3],Name: REMOVED,Balancer: qacluster,LBGroup: ,Host: [2620:52:0:105f:0:0:ffff:6d],Port: 8009,Type: ajp,Flushpackets: Off,Flushwait: 10,Ping: 10,Smax: 1,Ttl: 60,Elected: 5,Read: 340,Transfered: 0,Connected: 0,Load: 100
Vhost: [2:1:1], Alias: localhost
Vhost: [2:1:2], Alias: default-host
Vhost: [3:1:3], Alias: default-host
Vhost: [3:1:4], Alias: localhost
Vhost: [1:1:5], Alias: default-host
Vhost: [1:1:6], Alias: localhost
Context: [2:1:1], Context: /clusterbench, Status: ENABLED
Context: [3:1:2], Context: /clusterbench, Status: ENABLED
Context: [1:1:3], Context: /clusterbench, Status: ENABLED
at org.jboss.modcluster.mcmp.impl.DefaultMCMPResponseParser.parseInfoResponse(DefaultMCMPResponseParser.java:96) [mod_cluster-core-1.3.0.Alpha2-SNAPSHOT.jar:1.3.0.Alpha2-SNAPSHOT]
at org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler.status(DefaultMCMPHandler.java:381) [mod_cluster-core-1.3.0.Alpha2-SNAPSHOT.jar:1.3.0.Alpha2-SNAPSHOT]
at org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler.status(DefaultMCMPHandler.java:350) [mod_cluster-core-1.3.0.Alpha2-SNAPSHOT.jar:1.3.0.Alpha2-SNAPSHOT]
at org.jboss.modcluster.ModClusterService.status(ModClusterService.java:458) [mod_cluster-core-1.3.0.Alpha2-SNAPSHOT.jar:1.3.0.Alpha2-SNAPSHOT]
at org.wildfly.mod_cluster.undertow.UndertowEventHandlerAdapter.run(UndertowEventHandlerAdapter.java:160) [wildfly-mod_cluster-undertow-8.0.0.Final-SNAPSHOT.jar:8.0.0.Final-SNAPSHOT]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [rt.jar:1.7.0_45]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) [rt.jar:1.7.0_45]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178) [rt.jar:1.7.0_45]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [rt.jar:1.7.0_45]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_45]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_45]
at java.lang.Thread.run(Thread.java:744) [rt.jar:1.7.0_45]
at org.jboss.threads.JBossThread.run(JBossThread.java:122) [jboss-threads-2.1.1.Final.jar:2.1.1.Final]
{noformat}
WDYT?
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (MODCLUSTER-383) Session draining broken: requests counting broken on load-balancer
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-383?page=com.atlassian.jira.pl... ]
Radoslav Husar updated MODCLUSTER-383:
--------------------------------------
Environment: Fedora 20, 64 bit, httpd 2.4.6 + mod_cluster master (21ceed3c219fc3ad743b361cafd1097ebac19dfe)
Updated the environment field, but since the BZ is about Solaris, I would think multiple platforms are affected here.
> Session draining broken: requests counting broken on load-balancer
> ------------------------------------------------------------------
>
> Key: MODCLUSTER-383
> URL: https://issues.jboss.org/browse/MODCLUSTER-383
> Project: mod_cluster
> Issue Type: Bug
> Affects Versions: 1.3.0.Alpha1
> Environment: Fedora 20, 64 bit, httpd 2.4.6 + mod_cluster master (21ceed3c219fc3ad743b361cafd1097ebac19dfe)
> Reporter: Radoslav Husar
> Assignee: Jean-Frederic Clere
> Priority: Critical
> Fix For: 1.3.0.Alpha2
>
>
> The request counting is broken. It looks like a synchronization problem with dirty cached reads.
> Steps to reproduce:
> # start the AS with some context deployed
> # start the LB
> # start 2 or more load driver threads
> # the number of requests reported for that context climbs above 2 and keeps slowly increasing
> On the AS it manifests as:
> {noformat}
> 19:44:14,160 WARN [org.jboss.modcluster] (MSC service thread 1-7) MODCLUSTER000022: Failed to drain 57 remaining pending requests from default-host:/clusterbench within 10.0 seconds
> {noformat}
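To make the failure mode concrete, here is a minimal sketch (not the actual mod_cluster implementation) of a drain-with-timeout loop: if the balancer-reported pending-request count is over-counted and never returns to zero, the loop always hits the timeout and logs something like the MODCLUSTER000022 warning above. The 100 ms poll interval below is an assumption.
{noformat}
import java.util.concurrent.TimeUnit;
import java.util.function.IntSupplier;

public class DrainSketch {

    // Polls the balancer-reported pending-request count for a context until it
    // reaches zero or the timeout expires. Returns true if it drained in time.
    static boolean drainRequests(IntSupplier pendingRequests, long timeout, TimeUnit unit)
            throws InterruptedException {
        long deadline = System.nanoTime() + unit.toNanos(timeout);
        int remaining;
        while ((remaining = pendingRequests.getAsInt()) > 0) {
            if (System.nanoTime() >= deadline) {
                System.out.printf("Failed to drain %d remaining pending requests within %d %s%n",
                        remaining, timeout, unit);
                return false;
            }
            Thread.sleep(100); // poll interval; the real value is an assumption here
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        // If the balancer's counter loses decrements under load, it never reaches
        // zero and the drain always times out, exactly as in the warning above.
        IntSupplier stuckCounter = () -> 57;
        drainRequests(stuckCounter, 2, TimeUnit.SECONDS);
    }
}
{noformat}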
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (MODCLUSTER-383) Session draining broken: requests counting broken on load-balancer
by Jean-Frederic Clere (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-383?page=com.atlassian.jira.pl... ]
Jean-Frederic Clere commented on MODCLUSTER-383:
------------------------------------------------
Which version of httpd, and on which platform?
> Session draining broken: requests counting broken on load-balancer
> ------------------------------------------------------------------
>
> Key: MODCLUSTER-383
> URL: https://issues.jboss.org/browse/MODCLUSTER-383
> Project: mod_cluster
> Issue Type: Bug
> Affects Versions: 1.3.0.Alpha1
> Reporter: Radoslav Husar
> Assignee: Jean-Frederic Clere
> Priority: Critical
> Fix For: 1.3.0.Alpha2
>
>
> The request counting is broken. It looks like a synchronization problem with dirty cached reads.
> Steps to reproduce:
> # start the AS with some context deployed
> # start the LB
> # start 2 or more load driver threads
> # the number of requests reported for that context climbs above 2 and keeps slowly increasing
> On the AS it manifests as:
> {noformat}
> 19:44:14,160 WARN [org.jboss.modcluster] (MSC service thread 1-7) MODCLUSTER000022: Failed to drain 57 remaining pending requests from default-host:/clusterbench within 10.0 seconds
> {noformat}
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (MODCLUSTER-383) Session draining broken: requests counting broken on load-balancer
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-383?page=com.atlassian.jira.pl... ]
RH Bugzilla Integration updated MODCLUSTER-383:
-----------------------------------------------
Bugzilla Update: Perform
Bugzilla References: https://bugzilla.redhat.com/show_bug.cgi?id=1018705
> Session draining broken: requests counting broken on load-balancer
> ------------------------------------------------------------------
>
> Key: MODCLUSTER-383
> URL: https://issues.jboss.org/browse/MODCLUSTER-383
> Project: mod_cluster
> Issue Type: Bug
> Affects Versions: 1.3.0.Alpha1
> Reporter: Radoslav Husar
> Assignee: Jean-Frederic Clere
> Priority: Critical
> Fix For: 1.3.0.Alpha2
>
>
> The request counting is broken. It looks like a synchronization problem with dirty cached reads.
> Steps to reproduce:
> # start the AS with some context deployed
> # start the LB
> # start 2 or more load driver threads
> # the number of requests reported for that context climbs above 2 and keeps slowly increasing
> On the AS it manifests as:
> {noformat}
> 19:44:14,160 WARN [org.jboss.modcluster] (MSC service thread 1-7) MODCLUSTER000022: Failed to drain 57 remaining pending requests from default-host:/clusterbench within 10.0 seconds
> {noformat}
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (MODCLUSTER-383) Session draining broken: requests counting broken on load-balancer
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-383?page=com.atlassian.jira.pl... ]
Radoslav Husar updated MODCLUSTER-383:
--------------------------------------
Bugzilla Update: (was: Perform)
> Session draining broken: requests counting broken on load-balancer
> ------------------------------------------------------------------
>
> Key: MODCLUSTER-383
> URL: https://issues.jboss.org/browse/MODCLUSTER-383
> Project: mod_cluster
> Issue Type: Bug
> Affects Versions: 1.3.0.Alpha1
> Reporter: Radoslav Husar
> Assignee: Jean-Frederic Clere
> Priority: Critical
> Fix For: 1.3.0.Alpha2
>
>
> The request counting is broken. It looks like a synchronization problem with dirty cached reads.
> Steps to reproduce:
> # start the AS with some context deployed
> # start the LB
> # start 2 or more load driver threads
> # the number of requests reported for that context climbs above 2 and keeps slowly increasing
> On the AS it manifests as:
> {noformat}
> 19:44:14,160 WARN [org.jboss.modcluster] (MSC service thread 1-7) MODCLUSTER000022: Failed to drain 57 remaining pending requests from default-host:/clusterbench within 10.0 seconds
> {noformat}
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (MODCLUSTER-383) Session draining broken: requests counting broken on load-balancer
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-383?page=com.atlassian.jira.pl... ]
Radoslav Husar updated MODCLUSTER-383:
--------------------------------------
Workaround Description: Disable the context manually and wait for the desired time; do not rely on session draining.
Workaround: Workaround Exists
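For illustration, a hedged sketch of applying that workaround programmatically over JMX: mod_cluster's management interface exposes a disableContext(host, path) style operation, but the exact JMX ObjectName below is an assumption and will differ per installation, so verify it against the running server before relying on this.
{noformat}
// Hedged sketch: disable the context on the balancer, then wait a fixed interval
// instead of relying on the (broken) pending-request draining.
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ManualDrainWorkaround {
    public static void main(String[] args) throws Exception {
        // Assumed JMX endpoint and MBean name -- adjust for the actual installation.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection server = connector.getMBeanServerConnection();
            ObjectName modCluster = new ObjectName("jboss.mod_cluster:service=ModClusterService"); // hypothetical name

            // Ask the balancer to stop routing new sessions to this context...
            server.invoke(modCluster, "disableContext",
                    new Object[] { "default-host", "/clusterbench" },
                    new String[] { String.class.getName(), String.class.getName() });

            // ...then simply wait a generous, fixed interval before shutting the worker down.
            Thread.sleep(30_000);
        }
    }
}
{noformat}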
> Session draining broken: requests counting broken on load-balancer
> ------------------------------------------------------------------
>
> Key: MODCLUSTER-383
> URL: https://issues.jboss.org/browse/MODCLUSTER-383
> Project: mod_cluster
> Issue Type: Bug
> Affects Versions: 1.3.0.Alpha1
> Reporter: Radoslav Husar
> Assignee: Jean-Frederic Clere
> Priority: Critical
> Fix For: 1.3.0.Alpha2
>
>
> The request counting is broken. It looks like a synchronization problem with dirty cached reads.
> Steps to reproduce:
> # start the AS with some context deployed
> # start the LB
> # start 2 or more load driver threads
> # the number of requests reported for that context climbs above 2 and keeps slowly increasing
> On the AS it manifests as:
> {noformat}
> 19:44:14,160 WARN [org.jboss.modcluster] (MSC service thread 1-7) MODCLUSTER000022: Failed to drain 57 remaining pending requests from default-host:/clusterbench within 10.0 seconds
> {noformat}
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (MODCLUSTER-383) Session draining broken: requests counting broken on load-balancer
by Radoslav Husar (JIRA)
Radoslav Husar created MODCLUSTER-383:
-----------------------------------------
Summary: Session draining broken: requests counting broken on load-balancer
Key: MODCLUSTER-383
URL: https://issues.jboss.org/browse/MODCLUSTER-383
Project: mod_cluster
Issue Type: Bug
Affects Versions: 1.3.0.Alpha1
Reporter: Radoslav Husar
Assignee: Jean-Frederic Clere
Priority: Critical
Fix For: 1.3.0.Alpha2
The request counting is broken. It looks like a synchronization problem with dirty cached reads.
Steps to reproduce:
# start the AS with some context deployed
# start the LB
# start 2 or more load driver threads
# the number of requests reported for that context climbs above 2 and keeps slowly increasing
On the AS it manifests as:
{noformat}
19:44:14,160 WARN [org.jboss.modcluster] (MSC service thread 1-7) MODCLUSTER000022: Failed to drain 57 remaining pending requests from default-host:/clusterbench within 10.0 seconds
{noformat}
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira