[JBoss JIRA] (MODCLUSTER-588) HTTP/2.0: Wildfly worker's interoperability with Apache HTTP Server mod_cluster
by Michal Karm Babacek (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-588?page=com.atlassian.jira.pl... ]
Michal Karm Babacek updated MODCLUSTER-588:
-------------------------------------------
Description:
Two configurations are discussed in this JIRA: the first appears to work (/) with some concerns, while the second does not work at all (x).
h1. 1. (/) "ALPN hack", Oracle JDK8
Hello, when you run Apache HTTP Server 2.4.23 with this [mod_cluster.conf|https://gist.github.com/Karm/541fba87030c4ee380f0c6872fe...] and EAP 7.1 with this [standalone-ha.xml|https://gist.github.com/Karm/e84ca6199f6bf3e7906d8ec632...] and start both servers, the worker correctly contacts the balancer and is able to serve requests via HTTP/2.0. However, I am mildly concerned about the irregular messages I see during the periodic 5 s {{httpd <-> eap}} liveness verification ({{LBstatusRecalTime}}) using the OPTIONS method, and about the connector's exceptions while sending MCMP messages. I haven't let it run for days, but the long-term impact is my main motivation for posting this. I split the log into corresponding blocks to illustrate the situation on the balancer side and on the worker side.
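For context, these MCMP messages are plain-text HTTP-style requests with custom methods. A minimal sketch of the rough wire shape of the worker's STATUS heartbeat (field names follow the MCMP protocol; the jvmRoute and load values are illustrative, and the real DefaultMCMPHandler adds further headers):

```java
// Sketch only: the rough wire shape of the MCMP STATUS heartbeat a
// mod_cluster worker sends to the balancer. Values are illustrative.
public class McmpStatusSketch {

    static String buildStatus(String jvmRoute, int load) {
        // MCMP parameters travel as a form-encoded body.
        String body = "JVMRoute=" + jvmRoute + "&Load=" + load;
        // MCMP uses custom HTTP-style methods (STATUS, CONFIG, ENABLE-APP, ...).
        return "STATUS / HTTP/1.1\r\n"
                + "Content-Length: " + body.length() + "\r\n"
                + "Content-Type: application/x-www-form-urlencoded\r\n"
                + "\r\n"
                + body;
    }

    public static void main(String[] args) {
        System.out.println(buildStatus("jboss-eap-7.1", 99));
    }
}
```

Every such exchange goes over the worker's HTTPS connector here, which is why the TLS-level exceptions below show up around these messages.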
h2. Start
Both servers started, no client requests whatsoever. The first block is the very first communication between servers.
h4. Worker Block 1
{code}05:33:02,544 DEBUG [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000009: Sending STATUS for default-server
05:33:02,695 DEBUG [io.undertow.request] (default I/O-2) Using ALPN provider JDK8AlpnProvider for connector at /192.168.122.172:8443{code}
h4. Balancer Block 1
[Full block|https://gist.github.com/Karm/6ff0c53b70ce28d73da3f828bb1605be]
h4. Worker Block 2
No exception, no log.
h4. Balancer Block 2
[Full block|https://gist.github.com/Karm/7a5d4f5e667b9ee86e446fc73f8d1ec1]
h4. Worker Block 3
{code}05:33:23,144 DEBUG [io.undertow.request] (default I/O-4) UT005013: An IOException occurred: java.nio.channels.ClosedChannelException
at io.undertow.protocols.ssl.SslConduit.doWrap(SslConduit.java:844)
at io.undertow.protocols.ssl.SslConduit.doHandshake(SslConduit.java:647)
at io.undertow.protocols.ssl.SslConduit.access$900(SslConduit.java:63)
at io.undertow.protocols.ssl.SslConduit$SslReadReadyHandler.readReady(SslConduit.java:1098)
at org.xnio.nio.NioSocketConduit.handleReady(NioSocketConduit.java:89)
at org.xnio.nio.WorkerThread.run(WorkerThread.java:567){code}
h4. Balancer Block 3
[Full block|https://gist.github.com/Karm/cfda838d94a1806e7e7e087cf412dc6d]
h4. Worker Block 4
No exception, no log.
h4. Balancer Block 4
[Full block|https://gist.github.com/Karm/7f0af3c5f443c22f90ea38a57a8f044a]
h4. Worker Block 5
No exception, no log.
h4. Balancer Block 5
[Full block|https://gist.github.com/Karm/36434f6493aee9e6f30a1bb8a88734b3]
h4. Worker Block 6
No exception, no log.
h4. Balancer Block 6
[Full block|https://gist.github.com/Karm/cf877cb9670a66d7ea36e19a6867b61b]
h4. Worker Block 7
No exception, no log.
h4. Balancer Block 7
[Full block|https://gist.github.com/Karm/612901b85f62d1a71bf9e4caf7a1e537]
h4. Worker Block 8
No exception, no log.
h4. Balancer Block 8
[Full block|https://gist.github.com/Karm/83dd6b29cf066241613d9a2fbbc9751a]
h4. Worker Block 9
No exception, no log.
h4. Balancer Block 9
[Full block|https://gist.github.com/Karm/02f3e5a33f1c7e5e314eda897e8df80a]
h4. Worker Block 10
{code}05:33:58,238 DEBUG [io.undertow.request.io] (default I/O-2) UT005013: An IOException occurred: java.io.IOException: Connection reset by peer{code}
[Full block|https://gist.github.com/Karm/380150a2b76399cdd68f033fb50c21b6]
h4. Balancer Block 10
[Full block|https://gist.github.com/Karm/5f84ae53bdaa725472a3dd152072cab7]
h4. Worker Block 11
{code}05:34:02,808 DEBUG [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000009: Sending STATUS for default-server
05:34:03,002 DEBUG [io.undertow.request] (default I/O-4) UT005013: An IOException occurred: java.nio.channels.ClosedChannelException
at io.undertow.protocols.ssl.SslConduit.doWrap(SslConduit.java:844)
at io.undertow.protocols.ssl.SslConduit.doHandshake(SslConduit.java:647)
at io.undertow.protocols.ssl.SslConduit.access$900(SslConduit.java:63)
at io.undertow.protocols.ssl.SslConduit$SslReadReadyHandler.readReady(SslConduit.java:1098)
at org.xnio.nio.NioSocketConduit.handleReady(NioSocketConduit.java:89)
at org.xnio.nio.WorkerThread.run(WorkerThread.java:567){code}
h4. Balancer Block 11
[Full block|https://gist.github.com/Karm/90ec31c8cfee9b204d91dbef5be2b3a3]
h4. Worker Block 12
No exception, no log.
h4. Balancer Block 12
[Full block|https://gist.github.com/Karm/33b79c1f7b3dc4655b88648e141cf242]
h4. Worker Block 13
{code}05:34:13,242 DEBUG [io.undertow.request] (default I/O-4) UT005013: An IOException occurred: java.nio.channels.ClosedChannelException
at io.undertow.protocols.ssl.SslConduit.doWrap(SslConduit.java:844)
at io.undertow.protocols.ssl.SslConduit.doHandshake(SslConduit.java:647)
at io.undertow.protocols.ssl.SslConduit.access$900(SslConduit.java:63)
at io.undertow.protocols.ssl.SslConduit$SslReadReadyHandler.readReady(SslConduit.java:1098)
at org.xnio.nio.NioSocketConduit.handleReady(NioSocketConduit.java:89)
at org.xnio.nio.WorkerThread.run(WorkerThread.java:567){code}
h4. Balancer Block 13
[Full block|https://gist.github.com/Karm/1f25ef8f913368fb91e0795ee1b8de42]
h4. Worker Block 14
{code}05:34:18,230 DEBUG [io.undertow.request.io] (default I/O-2) UT005013: An IOException occurred: java.io.IOException: Connection reset by peer{code}
[Full block|https://gist.github.com/Karm/45dcb2efa03683656f4e59aabcdab1d0]
h4. Balancer Block 14
[Full block|https://gist.github.com/Karm/87b7daf8b07732b50b0740bb84e79d6e]
h2. Clients
The worker is correctly registered on the balancer:{code}mod_cluster/1.3.5.Final
Node jboss-eap-7.1 (https://192.168.122.172:8443):
Balancer: qacluster,LBGroup: ,Flushpackets: Off,Flushwait: 10000,Ping: 10000000,Smax: 2,Ttl: 60000000,Status: OK,Elected: 0,Read: 0,Transferred: 0,Connected: 0,Load: 99
Virtual Host 1:
Contexts:
/clusterbench, Status: ENABLED Request: 0 Disable Stop
/, Status: ENABLED Request: 0 Disable Stop
Aliases:
default-host
localhost{code}
h3. Curl (x)
Not related to EAP; this is NSS on Fedora 25 using a cipher suite blacklisted by RFC 7540: {{tls cipher ECDHE-RSA-AES256-SHA blacklisted by rfc7540}}
* {{curl --http2 https://rhel7GAx86-64:2080/clusterbench/session --insecure -i -vvv}}, [curl log|https://gist.github.com/Karm/64d45635f5d81d586d556b4915410ddc]
* Worker - no log; it was never actually called by the balancer, since the request failed earlier in the handshake
* Balancer - tells curl to go away, [handshake log|https://gist.github.com/Karm/7eff0a338a4ecb9060a97e758891b02a]
h3. Chrome (/)
* Chrome 58.0.3029.110 (64-bit) correctly displays the worker's web app at {{https://rhel7GAx86-64:2080/clusterbench/session}} via HTTP/2.0.
* Worker - web app served, [log|https://gist.github.com/Karm/00581ebd382f3f0b9148eff78b08ab50]
* Balancer - O.K., [log|https://gist.github.com/Karm/19221581ed374d928ca21f374d8ef028]
h3. Firefox (/)
* Firefox 53.0.3 (64-bit) correctly displays the worker's web app at {{https://rhel7GAx86-64:2080/clusterbench/session}} via HTTP/2.0.
* Worker - O.K., [log|https://gist.github.com/Karm/74a917b68392ca585d0b1a90c9d2e883]
* Balancer - O.K., [log|https://gist.github.com/Karm/14a7a6c070da3981022e1a06aa5bf434]
h1. 2. (x) ALPN via Wildfly OpenSSL + OpenSSL 1.0.2h + Oracle JDK8
With the same Apache HTTP Server config as in the case above, i.e. [mod_cluster.conf|https://gist.github.com/Karm/541fba87030c4ee380f0c6872fe...], and EAP 7.1 configured for Wildfly OpenSSL, [standalone-ha.xml_openssl|https://gist.github.com/Karm/fb1923d0b711a73aac...], the whole setup falls apart at the very beginning: the worker contacts the balancer but is then unable to parse the balancer's response. (!) *Noteworthy:* if you put another EAP, configured as an Undertow mod_cluster balancer and also using Wildfly OpenSSL, in front of this worker, it works.
h3. Balancer
Everything appears fine, {{mod_manager.c(3104): manager_handler INFO OK}}; see [the log snippet|https://gist.github.com/Karm/5904cca0235bc3ad4f31c53252e3430a].
h3. Worker
It falls flat on its face while processing the balancer's response: {code}06:16:24,031 ERROR [org.jboss.mod_cluster.undertow] (UndertowEventHandlerAdapter - 1) null: java.lang.NumberFormatException: null
at java.lang.Integer.parseInt(Integer.java:542)
at org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler.sendRequest(DefaultMCMPHandler.java:702)
at org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler.status(DefaultMCMPHandler.java:387)
at org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler.status(DefaultMCMPHandler.java:365)
at org.jboss.modcluster.ModClusterService.status(ModClusterService.java:454)
at org.wildfly.mod_cluster.undertow.UndertowEventHandlerAdapter.run(UndertowEventHandlerAdapter.java:169)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at org.jboss.threads.JBossThread.run(JBossThread.java:320){code}
See the [full log|https://gist.github.com/Karm/4f991b1cfd6d7700ef9fc27604fb3007].
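My reading of the trace (not a confirmed root cause): {{NumberFormatException: null}} is the signature of {{Integer.parseInt}} being handed a {{null}} string, i.e. a numeric field the handler expects in the balancer's response is simply absent. A minimal standalone illustration of that failure mode, with hypothetical method names:

```java
import java.util.HashMap;
import java.util.Map;

// Illustration only (hypothetical names, not the actual DefaultMCMPHandler
// code): if a numeric field expected in the balancer's MCMP response is
// absent, the lookup yields null and Integer.parseInt(null) throws
// NumberFormatException with the message "null" - the same signature as
// the trace above.
public class McmpParseSketch {

    static int readNumericField(Map<String, String> response, String key) {
        // Deliberately no null check, to mirror the suspected failure mode.
        return Integer.parseInt(response.get(key));
    }

    public static void main(String[] args) {
        Map<String, String> ok = new HashMap<>();
        ok.put("Load", "99");
        System.out.println(readNumericField(ok, "Load"));

        try {
            readNumericField(new HashMap<>(), "Load"); // field missing
        } catch (NumberFormatException e) {
            System.out.println(e); // java.lang.NumberFormatException: null
        }
    }
}
```

If that reading is right, the interesting question is why the response arriving over the OpenSSL-backed connector is missing (or mangles) that field when the plain JDK8 ALPN path delivers it intact.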
h3. Result
No worker is registered; the integration is broken. The same worker setup works with another Wildfly (EAP) Undertow mod_cluster balancer.
WDYT guys, [~rhusar], [~swd847], [~jfclere]?
> HTTP/2.0: Wildfly worker's interoperability with Apache HTTP Server mod_cluster
> -------------------------------------------------------------------------------
>
> Key: MODCLUSTER-588
> URL: https://issues.jboss.org/browse/MODCLUSTER-588
> Project: mod_cluster
> Issue Type: Bug
> Components: Core & Container Integration (Java)
> Affects Versions: 1.3.6.Final
> Environment: RHEL 7.3, OpenSSL 1.0.2h from JBCS
> Reporter: Michal Karm Babacek
> Assignee: Radoslav Husar
> Priority: Critical
>
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
05:34:03,002 DEBUG [io.undertow.request] (default I/O-4) UT005013: An IOException occurred: java.nio.channels.ClosedChannelException
at io.undertow.protocols.ssl.SslConduit.doWrap(SslConduit.java:844)
at io.undertow.protocols.ssl.SslConduit.doHandshake(SslConduit.java:647)
at io.undertow.protocols.ssl.SslConduit.access$900(SslConduit.java:63)
at io.undertow.protocols.ssl.SslConduit$SslReadReadyHandler.readReady(SslConduit.java:1098)
at org.xnio.nio.NioSocketConduit.handleReady(NioSocketConduit.java:89)
at org.xnio.nio.WorkerThread.run(WorkerThread.java:567){code}
h4. Balancer Block 11
[Full block|https://gist.github.com/Karm/90ec31c8cfee9b204d91dbef5be2b3a3]
h4. Worker Block 12
No exception, no log.
h4. Balancer Block 12
[Full block|https://gist.github.com/Karm/33b79c1f7b3dc4655b88648e141cf242]
h4. Worker Block 13
{code}05:34:13,242 DEBUG [io.undertow.request] (default I/O-4) UT005013: An IOException occurred: java.nio.channels.ClosedChannelException
at io.undertow.protocols.ssl.SslConduit.doWrap(SslConduit.java:844)
at io.undertow.protocols.ssl.SslConduit.doHandshake(SslConduit.java:647)
at io.undertow.protocols.ssl.SslConduit.access$900(SslConduit.java:63)
at io.undertow.protocols.ssl.SslConduit$SslReadReadyHandler.readReady(SslConduit.java:1098)
at org.xnio.nio.NioSocketConduit.handleReady(NioSocketConduit.java:89)
at org.xnio.nio.WorkerThread.run(WorkerThread.java:567){code}
h4. Balancer Block 13
[Full block|https://gist.github.com/Karm/1f25ef8f913368fb91e0795ee1b8de42]
h4. Worker Block 14
{code}05:34:18,230 DEBUG [io.undertow.request.io] (default I/O-2) UT005013: An IOException occurred: java.io.IOException: Connection reset by peer{code}
[Full block|https://gist.github.com/Karm/45dcb2efa03683656f4e59aabcdab1d0]
h4. Balancer Block 14
[Full block|https://gist.github.com/Karm/87b7daf8b07732b50b0740bb84e79d6e]
h2. Clients
The worker is correctly registered:{code}mod_cluster/1.3.5.Final
Node jboss-eap-7.1 (https://192.168.122.172:8443):
Balancer: qacluster,LBGroup: ,Flushpackets: Off,Flushwait: 10000,Ping: 10000000,Smax: 2,Ttl: 60000000,Status: OK,Elected: 0,Read: 0,Transferred: 0,Connected: 0,Load: 99
Virtual Host 1:
Contexts:
/clusterbench, Status: ENABLED Request: 0 Disable Stop
/, Status: ENABLED Request: 0 Disable Stop
Aliases:
default-host
localhost{code}
h3. Curl (x)
Not related to EAP; this is Fedora 25 NSS shenanigans: curl negotiates a cipher suite blacklisted for HTTP/2: {{tls cipher ECDHE-RSA-AES256-SHA blacklisted by rfc7540}}
* {{curl --http2 https://rhel7GAx86-64:2080/clusterbench/session --insecure -i -vvv}}, [curl log|https://gist.github.com/Karm/64d45635f5d81d586d556b4915410ddc]
* Worker - no log; it was never actually called by the balancer because the request failed earlier
* Balancer - tells curl to go away, [handshake log|https://gist.github.com/Karm/7eff0a338a4ecb9060a97e758891b02a]
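The curl failure above comes from RFC 7540's cipher-suite black list: HTTP/2 forbids the non-AEAD TLS 1.2 suites, and {{ECDHE-RSA-AES256-SHA}} (a CBC/SHA-1 suite) is on that list, so the balancer rightly refuses it. A minimal sketch of the check, using a small hand-picked excerpt of the black list in OpenSSL-style naming (not the full Appendix A list):

```java
import java.util.Set;

public class Http2CipherCheckDemo {
    // Tiny excerpt of the RFC 7540 Appendix A black list (the full list
    // covers all non-AEAD TLS 1.2 suites); OpenSSL-style spellings.
    static final Set<String> BLACKLISTED = Set.of(
            "ECDHE-RSA-AES256-SHA",
            "ECDHE-RSA-AES128-SHA",
            "AES128-SHA",
            "AES256-SHA");

    static boolean allowedForHttp2(String cipher) {
        return !BLACKLISTED.contains(cipher);
    }

    public static void main(String[] args) {
        // The suite curl negotiated in the failing handshake:
        System.out.println(allowedForHttp2("ECDHE-RSA-AES256-SHA"));        // false
        // The mandatory HTTP/2 suite (TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256):
        System.out.println(allowedForHttp2("ECDHE-RSA-AES128-GCM-SHA256")); // true
    }
}
```

Forcing curl (or NSS) to offer an AEAD suite such as {{ECDHE-RSA-AES128-GCM-SHA256}} should make the handshake pass.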
h3. Chrome (/)
* Chrome 58.0.3029.110 (64-bit) correctly displays the worker's web app on {{https://rhel7GAx86-64:2080/clusterbench/session}} via HTTP/2.0.
* Worker - web app served, [log|https://gist.github.com/Karm/00581ebd382f3f0b9148eff78b08ab50]
* Balancer - O.K., [log|https://gist.github.com/Karm/19221581ed374d928ca21f374d8ef028]
h3. Firefox (/)
* Firefox 53.0.3 (64-bit) correctly displays the worker's web app on {{https://rhel7GAx86-64:2080/clusterbench/session}} via HTTP/2.0.
* Worker - O.K., [log|https://gist.github.com/Karm/74a917b68392ca585d0b1a90c9d2e883]
* Balancer - O.K., [log|https://gist.github.com/Karm/14a7a6c070da3981022e1a06aa5bf434]
h1. 2. (x) ALPN via Wildfly OpenSSL + OpenSSL 1.0.2h + Oracle JDK8
With the same Apache HTTP Server config as in the aforementioned case, i.e. [mod_cluster.conf|https://gist.github.com/Karm/541fba87030c4ee380f0c6872fe...], and EAP 7.1 configured for Wildfly OpenSSL: [standalone-ha.xml_openssl|https://gist.github.com/Karm/fb1923d0b711a73aac...], the whole setup falls apart at the very beginning: the worker contacts the balancer and is then unable to parse the balancer's response. (!) *Noteworthy:* If you put another EAP, configured as an Undertow mod_cluster balancer and also using Wildfly OpenSSL, in front of this worker, it works.
h3. Balancer
Everything appears cool, {{mod_manager.c(3104): manager_handler INFO OK}}, see [the log snippet|https://gist.github.com/Karm/5904cca0235bc3ad4f31c53252e3430a].
h3. Worker
Falls flat on its face while processing the balancer's response: {code}06:16:24,031 ERROR [org.jboss.mod_cluster.undertow] (UndertowEventHandlerAdapter - 1) null: java.lang.NumberFormatException: null
at java.lang.Integer.parseInt(Integer.java:542)
at org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler.sendRequest(DefaultMCMPHandler.java:702)
at org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler.status(DefaultMCMPHandler.java:387)
at org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler.status(DefaultMCMPHandler.java:365)
at org.jboss.modcluster.ModClusterService.status(ModClusterService.java:454)
at org.wildfly.mod_cluster.undertow.UndertowEventHandlerAdapter.run(UndertowEventHandlerAdapter.java:169)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at org.jboss.threads.JBossThread.run(JBossThread.java:320){code}, see [full log|https://gist.github.com/Karm/4f991b1cfd6d7700ef9fc27604fb3007]
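The {{NumberFormatException: null}} message pattern is what {{Integer.parseInt}} produces when handed a {{null}} string, which suggests {{DefaultMCMPHandler.sendRequest}} expected a numeric field in the MCMP response that was simply absent over the OpenSSL channel. A self-contained sketch of that failure mode; the {{contentLength}} helper and header map are hypothetical illustrations, not the actual mod_cluster code:

```java
import java.util.HashMap;
import java.util.Map;

public class McmpParseDemo {
    // Parse a numeric header; when the header is absent, Map.get returns
    // null and Integer.parseInt(null) throws NumberFormatException whose
    // message is literally "null" -- the same message seen in the worker log.
    static int contentLength(Map<String, String> headers) {
        return Integer.parseInt(headers.get("Content-Length"));
    }

    public static void main(String[] args) {
        Map<String, String> ok = new HashMap<>();
        ok.put("Content-Length", "42");
        System.out.println(contentLength(ok)); // 42

        Map<String, String> broken = new HashMap<>(); // header missing
        try {
            contentLength(broken);
        } catch (NumberFormatException e) {
            // prints: NumberFormatException: null
            System.out.println("NumberFormatException: " + e.getMessage());
        }
    }
}
```

If that reading is right, the interesting question is why the balancer's response loses the expected field (or arrives truncated) only on the Wildfly OpenSSL path.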
h3. Result
No worker is registered; the integration is broken. The same worker setup works with another Wildfly (EAP) Undertow mod_cluster balancer.
WDYT guys, [~rhusar], [~swd847], [~jfclere]?
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
7 years, 6 months
[JBoss JIRA] (MODCLUSTER-585) mod_cluster excluded-contexts doesn't exclude slash prefixed /contexts; should perform normalization
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-585?page=com.atlassian.jira.pl... ]
Radoslav Husar updated MODCLUSTER-585:
--------------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/modcluster/mod_cluster/pull/261
> mod_cluster excluded-contexts doesn't exclude slash prefixed /contexts; should perform normalization
> ----------------------------------------------------------------------------------------------------
>
> Key: MODCLUSTER-585
> URL: https://issues.jboss.org/browse/MODCLUSTER-585
> Project: mod_cluster
> Issue Type: Bug
> Components: Core & Container Integration (Java)
> Affects Versions: 2.0.0.Alpha1, 1.3.6.Final
> Reporter: Radoslav Husar
> Assignee: Radoslav Husar
>
> Test instructions for WildFly:
> {noformat}
> [rhusar@syrah wildfly-11.0.0.Beta1-SNAPSHOT]$ ./bin/jboss-cli.sh -c
> [standalone@localhost:9990 /] /subsystem=modcluster/mod-cluster-config=configuration:write-attribute(name=excluded-contexts,value="default-host:/clusterbench")
> {"outcome" => "success"}
> [standalone@localhost:9990 /] :reload
> {
> "outcome" => "success",
> "result" => undefined
> }
> [standalone@localhost:9990 /] /subsystem=modcluster/:read-proxies-info
> {
> "outcome" => "success",
> "result" => [
> "localhost:9090",
> "Node: [1],Name: node1,Balancer: mycluster,LBGroup: ,Host: 127.0.0.1,Port: 8009,Type: ajp,Flushpackets: Off,Flushwait: 10,Ping: 10,Smax: 26,Ttl: 60,Elected: 0,Read: 0,Transfered: 0,Connected: 0,Load: 73
> Vhost: [1:1:1], Alias: localhost
> Vhost: [1:1:2], Alias: default-host
> Context: [1:1:1], Context: /wildfly-services, Status: ENABLED
> Context: [1:1:2], Context: /clusterbench-passivating, Status: ENABLED
> Context: [1:1:3], Context: /tmp, Status: ENABLED
> Context: [1:1:4], Context: /, Status: ENABLED
> Context: [1:1:5], Context: /clusterbench, Status: ENABLED
> "
> ]
> }
> [standalone@localhost:9990 /] /subsystem=modcluster/mod-cluster-config=configuration:write-attribute(name=excluded-contexts,value="default-host:clusterbench")
> {
> "outcome" => "success",
> "response-headers" => {
> "operation-requires-reload" => true,
> "process-state" => "reload-required"
> }
> }
> [standalone@localhost:9990 /] :reload
> {
> "outcome" => "success",
> "result" => undefined
> }
> [standalone@localhost:9990 /] /subsystem=modcluster/:read-proxies-info
> {
> "outcome" => "success",
> "result" => [
> "localhost:9090",
> "Node: [1],Name: node1,Balancer: mycluster,LBGroup: ,Host: 127.0.0.1,Port: 8009,Type: ajp,Flushpackets: Off,Flushwait: 10,Ping: 10,Smax: 26,Ttl: 60,Elected: 0,Read: 0,Transfered: 0,Connected: 0,Load: 69
> Vhost: [1:1:1], Alias: localhost
> Vhost: [1:1:2], Alias: default-host
> Context: [1:1:2], Context: /tmp, Status: ENABLED
> Context: [1:1:3], Context: /clusterbench-passivating, Status: ENABLED
> Context: [1:1:4], Context: /wildfly-services, Status: ENABLED
> Context: [1:1:5], Context: /, Status: ENABLED
> "
> ]
> }
> {noformat}
--
[JBoss JIRA] (MODCLUSTER-585) mod_cluster excluded-contexts doesn't exclude slash prefixed /contexts; should perform normalization
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-585?page=com.atlassian.jira.pl... ]
Radoslav Husar updated MODCLUSTER-585:
--------------------------------------
Priority: Major (was: Minor)
> mod_cluster excluded-contexts doesn't exclude slash prefixed /contexts; should perform normalization
> ----------------------------------------------------------------------------------------------------
>
> Key: MODCLUSTER-585
> URL: https://issues.jboss.org/browse/MODCLUSTER-585
> Project: mod_cluster
> Issue Type: Bug
> Components: Core & Container Integration (Java)
> Affects Versions: 2.0.0.Alpha1, 1.3.6.Final
> Reporter: Radoslav Husar
> Assignee: Radoslav Husar
>
--
[JBoss JIRA] (MODCLUSTER-587) Docs: Undertow hosts root location is exposed(by default) to mod_cluster load balancer
by Paul Ferraro (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-587?page=com.atlassian.jira.pl... ]
Paul Ferraro commented on MODCLUSTER-587:
-----------------------------------------
The root location would only need to be removed if a web application wants to use the root context. This is true whether mod_cluster is used or not.
The purpose of the "welcome" content is to verify that web requests work. Exposing the "welcome" content to the load balancer does the same, but for load balanced web requests.
So, in summary, I disagree with the premise of this JIRA.
> Docs: Undertow hosts root location is exposed(by default) to mod_cluster load balancer
> --------------------------------------------------------------------------------------
>
> Key: MODCLUSTER-587
> URL: https://issues.jboss.org/browse/MODCLUSTER-587
> Project: mod_cluster
> Issue Type: Bug
> Components: Documentation & Demos
> Reporter: Bogdan Sikora
> Assignee: Michal Karm Babacek
>
> Undertow can be configured to serve static content per host via the location resource. This location resource is exposed to the mod_cluster balancer.
> {noformat}
> "context" => {
> "/custom_location" => {
> "requests" => 0,
> "status" => "enabled"
> },
> {noformat}
> However, all EAP standalone profiles have the root location enabled by default. This root location is then exposed to the balancer (the application root "/" is registered).
> {noformat}
> "context" => {
> "/" => {
> "requests" => 0,
> "status" => "enabled"
> },
> {noformat}
> The root application matches any request, and therefore the mod_cluster balancer is unable to route requests correctly.
> To make mod_cluster work, one must delete the root location from the worker nodes:
> {noformat}
> /subsystem=undertow/server=default-server/host=default-host/location=\/:remove()
> {noformat}
> or exclude the ROOT context:
> {noformat}
> /subsystem=modcluster/mod-cluster-config=configuration:write-attribute(name=excluded-contexts,value="ROOT")
> {noformat}
> *_+Proposal:+_*
> Exclude ROOT context by default
--
[JBoss JIRA] (MODCLUSTER-587) Docs: Undertow hosts root location is exposed(by default) to mod_cluster load balancer
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-587?page=com.atlassian.jira.pl... ]
Radoslav Husar updated MODCLUSTER-587:
--------------------------------------
Summary: Docs: Undertow hosts root location is exposed(by default) to mod_cluster load balancer (was: Undertow hosts root location is exposed(by default) to mod_cluster load balancer)
> Docs: Undertow hosts root location is exposed(by default) to mod_cluster load balancer
> --------------------------------------------------------------------------------------
>
> Key: MODCLUSTER-587
> URL: https://issues.jboss.org/browse/MODCLUSTER-587
> Project: mod_cluster
> Issue Type: Bug
> Components: Documentation & Demos
> Reporter: Bogdan Sikora
> Assignee: Michal Karm Babacek
>
--
[JBoss JIRA] (MODCLUSTER-587) Undertow hosts root location is exposed(by default) to mod_cluster load balancer
by Bogdan Sikora (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-587?page=com.atlassian.jira.pl... ]
Bogdan Sikora updated MODCLUSTER-587:
-------------------------------------
Priority: Major (was: Critical)
> Undertow hosts root location is exposed(by default) to mod_cluster load balancer
> --------------------------------------------------------------------------------
>
> Key: MODCLUSTER-587
> URL: https://issues.jboss.org/browse/MODCLUSTER-587
> Project: mod_cluster
> Issue Type: Bug
> Components: Documentation & Demos
> Reporter: Bogdan Sikora
> Assignee: Michal Karm Babacek
>
--
[JBoss JIRA] (MODCLUSTER-587) Undertow hosts root location is exposed(by default) to mod_cluster load balancer
by Bogdan Sikora (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-587?page=com.atlassian.jira.pl... ]
Bogdan Sikora reassigned MODCLUSTER-587:
----------------------------------------
Assignee: Michal Karm Babacek (was: Radoslav Husar)
> Undertow hosts root location is exposed(by default) to mod_cluster load balancer
> --------------------------------------------------------------------------------
>
> Key: MODCLUSTER-587
> URL: https://issues.jboss.org/browse/MODCLUSTER-587
> Project: mod_cluster
> Issue Type: Bug
> Components: Documentation & Demos
> Reporter: Bogdan Sikora
> Assignee: Michal Karm Babacek
> Priority: Critical
>
--
[JBoss JIRA] (MODCLUSTER-587) Undertow hosts root location is exposed(by default) to mod_cluster load balancer
by Bogdan Sikora (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-587?page=com.atlassian.jira.pl... ]
Bogdan Sikora moved JBEAP-11385 to MODCLUSTER-587:
--------------------------------------------------
Project: mod_cluster (was: JBoss Enterprise Application Platform)
Key: MODCLUSTER-587 (was: JBEAP-11385)
Workflow: GIT Pull Request workflow (was: CDW with loose statuses v1)
Component/s: Documentation & Demos
(was: mod_cluster)
Affects Version/s: (was: 7.1.0.DR19)
> Undertow hosts root location is exposed(by default) to mod_cluster load balancer
> --------------------------------------------------------------------------------
>
> Key: MODCLUSTER-587
> URL: https://issues.jboss.org/browse/MODCLUSTER-587
> Project: mod_cluster
> Issue Type: Bug
> Components: Documentation & Demos
> Reporter: Bogdan Sikora
> Assignee: Radoslav Husar
> Priority: Critical
>
--
[JBoss JIRA] (MODCLUSTER-575) Create a Load SPI module
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-575?page=com.atlassian.jira.pl... ]
Radoslav Husar updated MODCLUSTER-575:
--------------------------------------
Description: Currently, implementers of a custom LoadMetric need to pull in the entire core module. A Load SPI module could be created which, along with the Container SPI module, would be sufficient for metric implementers to import. (was: Currently, implementers of a custom LoadMetric need to pull in entire core module.)
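For context on what a metric implementer actually needs: only the small metric contract, not the whole core module. A self-contained sketch of a custom metric against a local stand-in interface (the real SPI lives in {{org.jboss.modcluster.load.metric}}; the interface below is a simplified approximation, not the actual mod_cluster signature):

```java
public class HeapLoadMetricDemo {
    // Local stand-in for the mod_cluster LoadMetric SPI; the real
    // interface is richer (weight, capacity, container context).
    interface LoadMetric {
        double getLoad() throws Exception; // 0.0 (idle) .. 1.0 (saturated)
        int getWeight();
    }

    // Example metric: fraction of the JVM heap currently in use.
    static class HeapLoadMetric implements LoadMetric {
        @Override
        public double getLoad() {
            Runtime rt = Runtime.getRuntime();
            double used = rt.totalMemory() - rt.freeMemory();
            return used / rt.maxMemory();
        }

        @Override
        public int getWeight() {
            return 1;
        }
    }

    public static void main(String[] args) throws Exception {
        LoadMetric metric = new HeapLoadMetric();
        double load = metric.getLoad();
        // Load is a normalized fraction, usable by the balancer's
        // load-decay algorithm without any core-module types.
        System.out.println(load >= 0.0 && load <= 1.0); // true
    }
}
```

A dedicated Load SPI module containing just such a contract would let implementers depend on it plus the Container SPI and nothing else.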
> Create a Load SPI module
> ------------------------
>
> Key: MODCLUSTER-575
> URL: https://issues.jboss.org/browse/MODCLUSTER-575
> Project: mod_cluster
> Issue Type: Feature Request
> Components: Core & Container Integration (Java)
> Affects Versions: 1.3.6.Final
> Reporter: Radoslav Husar
> Assignee: Radoslav Husar
> Fix For: 2.0.0.Alpha1
>
>
> Currently, implementers of a custom LoadMetric need to pull in the entire core module. A Load SPI module could be created which, along with the Container SPI module, would be sufficient for metric implementers to import.
--