[JBoss JIRA] (MODCLUSTER-536) List of open files grows steadily during load test through mod_cluster
by Michal Karm (Jira)
[ https://issues.jboss.org/browse/MODCLUSTER-536?page=com.atlassian.jira.pl... ]
Michal Karm reassigned MODCLUSTER-536:
--------------------------------------
Assignee: Jean-Frederic Clere (was: Michal Karm)
> List of open files grows steadily during load test through mod_cluster
> ----------------------------------------------------------------------
>
> Key: MODCLUSTER-536
> URL: https://issues.jboss.org/browse/MODCLUSTER-536
> Project: mod_cluster
> Issue Type: Bug
> Components: Core & Container Integration (Java)
> Affects Versions: 1.3.1.Final
> Environment: WildFly 10.0.0.Final
> mod_cluster-1.3.1.Final-linux2-x64-ssl
> CentOS 7 (VirtualBox)
> Reporter: wayne tech
> Assignee: Jean-Frederic Clere
> Priority: Major
> Attachments: error_log, httpd-mpm.conf, httpd.conf, server.log, standalone-full-ha-snippet.xml
>
>
> I was able to configure the WildFly 10 mod_cluster subsystem to work with Apache mod_cluster (1.3.1). However, during a load test routed through the web server, the WildFly instance eventually failed, and errors also appeared in the Apache web server's error log.
> The visible error on the WildFly instance is {{java.net.SocketException: Too many open files}}. Using {{lsof -u <user> | grep TCP | wc -l}}, I could watch the number of open sockets grow steadily until the WildFly instance reported the error. This happened only while requests were sent through the web server.
> When I sent the requests directly to the WildFly instance (the app server), the number did not grow, and the app server could take a much heavier load without this issue.
> The issue did not happen until many rounds of load tests were executed through the web server. If I restart the web server, everything works fine until I execute many rounds of load tests again.
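> When diagnosing this, it can also help to compare the per-process descriptor count against the process's limit (a sketch; the PID placeholder is illustrative):
> {code}
> # file-descriptor ceiling for the current shell/user
> ulimit -n
> # count descriptors currently held by the WildFly process
> ls /proc/<wildfly-pid>/fd | wc -l
> {code}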
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (MODCLUSTER-536) List of open files grows steadily during load test through mod_cluster
by Michal Karm (Jira)
[ https://issues.jboss.org/browse/MODCLUSTER-536?page=com.atlassian.jira.pl... ]
Michal Karm commented on MODCLUSTER-536:
----------------------------------------
Leaving the topic.
If [~jfclere] is interested, he might find someone to look into it or close it.
> List of open files grows steadily during load test through mod_cluster
> ----------------------------------------------------------------------
>
> Key: MODCLUSTER-536
> URL: https://issues.jboss.org/browse/MODCLUSTER-536
> Project: mod_cluster
> Issue Type: Bug
> Components: Core & Container Integration (Java)
> Affects Versions: 1.3.1.Final
> Environment: WildFly 10.0.0.Final
> mod_cluster-1.3.1.Final-linux2-x64-ssl
> CentOS 7 (VirtualBox)
> Reporter: wayne tech
> Assignee: Jean-Frederic Clere
> Priority: Major
> Attachments: error_log, httpd-mpm.conf, httpd.conf, server.log, standalone-full-ha-snippet.xml
>
>
> I was able to configure the WildFly 10 mod_cluster subsystem to work with Apache mod_cluster (1.3.1). However, during a load test routed through the web server, the WildFly instance eventually failed, and errors also appeared in the Apache web server's error log.
> The visible error on the WildFly instance is {{java.net.SocketException: Too many open files}}. Using {{lsof -u <user> | grep TCP | wc -l}}, I could watch the number of open sockets grow steadily until the WildFly instance reported the error. This happened only while requests were sent through the web server.
> When I sent the requests directly to the WildFly instance (the app server), the number did not grow, and the app server could take a much heavier load without this issue.
> The issue did not happen until many rounds of load tests were executed through the web server. If I restart the web server, everything works fine until I execute many rounds of load tests again.
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (MODCLUSTER-538) A proper man page instead of Readme file and Selinux policy file in Fedora
by Michal Karm (Jira)
[ https://issues.jboss.org/browse/MODCLUSTER-538?page=com.atlassian.jira.pl... ]
Michal Karm commented on MODCLUSTER-538:
----------------------------------------
Leaving the topic.
If [~jfclere] is interested, he might find someone to look into it or close it.
> A proper man page instead of Readme file and Selinux policy file in Fedora
> --------------------------------------------------------------------------
>
> Key: MODCLUSTER-538
> URL: https://issues.jboss.org/browse/MODCLUSTER-538
> Project: mod_cluster
> Issue Type: Enhancement
> Components: Native (httpd modules)
> Affects Versions: 1.3.3.Final
> Environment: Fedora 24, Fedora 25, Fedora Rawhide
> Reporter: Michal Karm
> Assignee: Jean-Frederic Clere
> Priority: Minor
>
> The current documentation in the RPM distribution is a Markdown file, {{/usr/share/doc/mod_cluster/README}}. SELinux must be configured manually according to the documentation in {{/etc/httpd/conf.d/mod_cluster.conf}}. It would be much better if the SELinux policy were set up during installation by loading a policy file, and it would also be great if typing {{man mod_cluster}} actually brought up a proper man page.
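> Building and loading a binary policy module at install time could look roughly like this (a sketch; the {{mod_cluster.te}} policy source is hypothetical):
> {code}
> # compile a (hypothetical) mod_cluster.te policy source into a loadable module
> checkmodule -M -m -o mod_cluster.mod mod_cluster.te
> semodule_package -o mod_cluster.pp -m mod_cluster.mod
> # load it into the running policy, e.g. from the RPM scriptlets
> semodule -i mod_cluster.pp
> {code}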
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (MODCLUSTER-538) A proper man page instead of Readme file and Selinux policy file in Fedora
by Michal Karm (Jira)
[ https://issues.jboss.org/browse/MODCLUSTER-538?page=com.atlassian.jira.pl... ]
Michal Karm reassigned MODCLUSTER-538:
--------------------------------------
Assignee: Jean-Frederic Clere (was: Michal Karm)
> A proper man page instead of Readme file and Selinux policy file in Fedora
> --------------------------------------------------------------------------
>
> Key: MODCLUSTER-538
> URL: https://issues.jboss.org/browse/MODCLUSTER-538
> Project: mod_cluster
> Issue Type: Enhancement
> Components: Native (httpd modules)
> Affects Versions: 1.3.3.Final
> Environment: Fedora 24, Fedora 25, Fedora Rawhide
> Reporter: Michal Karm
> Assignee: Jean-Frederic Clere
> Priority: Minor
>
> The current documentation in the RPM distribution is a Markdown file, {{/usr/share/doc/mod_cluster/README}}. SELinux must be configured manually according to the documentation in {{/etc/httpd/conf.d/mod_cluster.conf}}. It would be much better if the SELinux policy were set up during installation by loading a policy file, and it would also be great if typing {{man mod_cluster}} actually brought up a proper man page.
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (MODCLUSTER-565) Expand mod_cluster manager console to output JSON data about worker nodes
by Michal Karm (Jira)
[ https://issues.jboss.org/browse/MODCLUSTER-565?page=com.atlassian.jira.pl... ]
Michal Karm commented on MODCLUSTER-565:
----------------------------------------
Leaving the topic.
If [~jfclere] is interested, he might find someone to look into it or close it.
> Expand mod_cluster manager console to output JSON data about worker nodes
> -------------------------------------------------------------------------
>
> Key: MODCLUSTER-565
> URL: https://issues.jboss.org/browse/MODCLUSTER-565
> Project: mod_cluster
> Issue Type: Feature Request
> Components: Native (httpd modules)
> Affects Versions: 2.0.0.Alpha1
> Reporter: Michal Karm
> Assignee: Jean-Frederic Clere
> Priority: Major
>
> mod_proxy_cluster needs a module that would provide a comprehensive command-and-control visualization of the events going on within the balancer and on the worker nodes. The visualization would also serve as a demonstration for presentation purposes (replacing the current Swing app from ~2009).
> The new module would most likely work with https://d3js.org/ and generate an SVG/JS page on a newly added httpd handler. For this new module to work, we need JSON data from mod_cluster manager, which currently emits only HTML/text/XML.
> h3. Task one: mod_cluster manager JSON data
> [~bsikora] researched suitable JavaScript libraries and would like the JSON output to resemble the following:
> {quote}
> I find the WildFly-style Undertow mod_cluster proxy filter CLI output the most suitable:
> {code}
> "balancer" => {"mycluster" => {
> "max-attempts" => 1,
> "sticky-session" => true,
> "sticky-session-cookie" => "JSESSIONID",
> "sticky-session-force" => false,
> "sticky-session-path" => undefined,
> "sticky-session-remove" => false,
> "wait-worker" => 0,
> "load-balancing-group" => undefined,
> "node" => {
> "jboss-eap-7.1-1" => {
> "aliases" => [
> "default-host",
> "localhost"
> ],
> "cache-connections" => 40,
> "elected" => 0,
> "flush-packets" => false,
> "load" => 0,
> "load-balancing-group" => undefined,
> "max-connections" => 40,
> "open-connections" => 0,
> "ping" => 10,
> "queue-new-requests" => true,
> "read" => 0L,
> "request-queue-size" => 1000,
> "status" => "NODE_HOT_STANDBY",
> "timeout" => 0,
> "ttl" => 60L,
> "uri" => "ajp://192.168.122.88:8009/?#",
> "written" => 0L,
> "context" => {"/clusterbench" => {
> "requests" => 0,
> "status" => "enabled"
> }}
> },
> {code}
> So, the JSON for my JavaScript renderer could look like:
> {code}
> {
>     "balancers": [{
>         "name": "mycluster",
>         "max-attempts": 1,
>         "sticky-session": true,
>         "sticky-session-cookie": "JSESSIONID",
>         "sticky-session-force": false,
>         "sticky-session-remove": false,
>         "wait-worker": 0,
>         "workers": [
>             {
>                 "name": "jboss-eap-7.1-1",
>                 "aliases": ["default-host", "localhost"],
>                 "cache-connections": 40,
>                 "elected": 0,
>                 "flush-packets": false,
>                 "load": 0,
>                 "requests": 350
>             }, {
>                 "name": "jboss-eap-7.1-2",
>                 "aliases": ["default-host", "localhost"],
>                 "cache-connections": 40,
>                 "elected": 0,
>                 "flush-packets": false,
>                 "load": 0,
>                 "requests": 350
>             }
>         ]
>     }]
> }
> {code}
> {quote}
> h3. Task two: mod_cluster gui console
> Write a separate module that could optionally be loaded alongside mod_manager to provide the JavaScript/SVG GUI console, and/or consider deploying the console as a web page in httpd's web-serving directory.
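> For task one, a new httpd handler emitting the JSON could be hooked in roughly as follows (a minimal sketch against the httpd 2.4 module API; the module name, handler name, and static stub payload are illustrative, and a real module would serialize mod_cluster's balancer/node tables from shared memory instead):
> {code}
> #include <string.h>
> #include "httpd.h"
> #include "http_config.h"
> #include "http_protocol.h"
> #include "ap_config.h"
>
> /* Respond to GET requests mapped to the (hypothetical) handler name. */
> static int cluster_json_handler(request_rec *r)
> {
>     if (strcmp(r->handler, "cluster-json-handler"))
>         return DECLINED;
>     if (r->method_number != M_GET)
>         return HTTP_METHOD_NOT_ALLOWED;
>
>     ap_set_content_type(r, "application/json");
>     /* Stub payload; the real module would walk the node/balancer tables. */
>     ap_rputs("{\"balancers\": []}", r);
>     return OK;
> }
>
> static void register_hooks(apr_pool_t *pool)
> {
>     ap_hook_handler(cluster_json_handler, NULL, NULL, APR_HOOK_MIDDLE);
> }
>
> module AP_MODULE_DECLARE_DATA cluster_json_module = {
>     STANDARD20_MODULE_STUFF,
>     NULL, NULL, NULL, NULL, NULL,
>     register_hooks
> };
> {code}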
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (MODCLUSTER-565) Expand mod_cluster manager console to output JSON data about worker nodes
by Michal Karm (Jira)
[ https://issues.jboss.org/browse/MODCLUSTER-565?page=com.atlassian.jira.pl... ]
Michal Karm reassigned MODCLUSTER-565:
--------------------------------------
Assignee: Jean-Frederic Clere (was: Michal Karm)
> Expand mod_cluster manager console to output JSON data about worker nodes
> -------------------------------------------------------------------------
>
> Key: MODCLUSTER-565
> URL: https://issues.jboss.org/browse/MODCLUSTER-565
> Project: mod_cluster
> Issue Type: Feature Request
> Components: Native (httpd modules)
> Affects Versions: 2.0.0.Alpha1
> Reporter: Michal Karm
> Assignee: Jean-Frederic Clere
> Priority: Major
>
> mod_proxy_cluster needs a module that would provide a comprehensive command-and-control visualization of the events going on within the balancer and on the worker nodes. The visualization would also serve as a demonstration for presentation purposes (replacing the current Swing app from ~2009).
> The new module would most likely work with https://d3js.org/ and generate an SVG/JS page on a newly added httpd handler. For this new module to work, we need JSON data from mod_cluster manager, which currently emits only HTML/text/XML.
> h3. Task one: mod_cluster manager JSON data
> [~bsikora] researched suitable JavaScript libraries and would like the JSON output to resemble the following:
> {quote}
> I find the WildFly-style Undertow mod_cluster proxy filter CLI output the most suitable:
> {code}
> "balancer" => {"mycluster" => {
> "max-attempts" => 1,
> "sticky-session" => true,
> "sticky-session-cookie" => "JSESSIONID",
> "sticky-session-force" => false,
> "sticky-session-path" => undefined,
> "sticky-session-remove" => false,
> "wait-worker" => 0,
> "load-balancing-group" => undefined,
> "node" => {
> "jboss-eap-7.1-1" => {
> "aliases" => [
> "default-host",
> "localhost"
> ],
> "cache-connections" => 40,
> "elected" => 0,
> "flush-packets" => false,
> "load" => 0,
> "load-balancing-group" => undefined,
> "max-connections" => 40,
> "open-connections" => 0,
> "ping" => 10,
> "queue-new-requests" => true,
> "read" => 0L,
> "request-queue-size" => 1000,
> "status" => "NODE_HOT_STANDBY",
> "timeout" => 0,
> "ttl" => 60L,
> "uri" => "ajp://192.168.122.88:8009/?#",
> "written" => 0L,
> "context" => {"/clusterbench" => {
> "requests" => 0,
> "status" => "enabled"
> }}
> },
> {code}
> So, the JSON for my JavaScript renderer could look like:
> {code}
> {
>     "balancers": [{
>         "name": "mycluster",
>         "max-attempts": 1,
>         "sticky-session": true,
>         "sticky-session-cookie": "JSESSIONID",
>         "sticky-session-force": false,
>         "sticky-session-remove": false,
>         "wait-worker": 0,
>         "workers": [
>             {
>                 "name": "jboss-eap-7.1-1",
>                 "aliases": ["default-host", "localhost"],
>                 "cache-connections": 40,
>                 "elected": 0,
>                 "flush-packets": false,
>                 "load": 0,
>                 "requests": 350
>             }, {
>                 "name": "jboss-eap-7.1-2",
>                 "aliases": ["default-host", "localhost"],
>                 "cache-connections": 40,
>                 "elected": 0,
>                 "flush-packets": false,
>                 "load": 0,
>                 "requests": 350
>             }
>         ]
>     }]
> }
> {code}
> {quote}
> h3. Task two: mod_cluster gui console
> Write a separate module that could optionally be loaded alongside mod_manager to provide the JavaScript/SVG GUI console, and/or consider deploying the console as a web page in httpd's web-serving directory.
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (MODCLUSTER-570) Multiple invalid nodes created on graceful httpd restart
by Michal Karm (Jira)
[ https://issues.jboss.org/browse/MODCLUSTER-570?page=com.atlassian.jira.pl... ]
Michal Karm commented on MODCLUSTER-570:
----------------------------------------
Leaving the topic.
If [~jfclere] is interested, he might find someone to look into it again.
> Multiple invalid nodes created on graceful httpd restart
> --------------------------------------------------------
>
> Key: MODCLUSTER-570
> URL: https://issues.jboss.org/browse/MODCLUSTER-570
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.5.Final, 1.3.6.Final
> Environment: httpd 2.4.25
> Alpine Linux 3.5
> Reporter: Antoine Cotten
> Assignee: Jean-Frederic Clere
> Priority: Minor
> Attachments: mod_cluster_after_graceful.png, mod_cluster_before_graceful.png
>
>
> mod_cluster creates a number of invalid nodes after a graceful restart of Apache httpd ({{SIGUSR1}}, {{SIGWINCH}}). The issue happens with or without backend servers in the pool.
> httpd logs:
> {code}
> [mpm_event:notice] [pid 1:tid 140313388616520] AH00493: SIGUSR1 received. Doing graceful restart
> [:notice] [pid 1:tid 140313388616520] Advertise initialized for process 1
> [mpm_event:notice] [pid 1:tid 140313388616520] AH00489: Apache/2.4.25 (Unix) mod_cluster/1.3.5.Final configured -- resuming normal operations
> [core:notice] [pid 1:tid 140313388616520] AH00094: Command line: 'httpd -D FOREGROUND'
> [:notice] [pid 116:tid 140313388616520] Created: worker for ://: failed: Unable to parse URL
> [:notice] [pid 116:tid 140313388616520] Created: worker for ://: failed: Unable to parse URL
> [:notice] [pid 116:tid 140313388616520] Created: worker for ://: failed: Unable to parse URL
> # ... trimmed 48 lines ...
> [:notice] [pid 145:tid 140313388616520] Created: worker for ://: failed: Unable to parse URL
> [:notice] [pid 145:tid 140313388616520] Created: worker for ://: failed: Unable to parse URL
> [:notice] [pid 145:tid 140313388616520] Created: worker for ://: failed: Unable to parse URL
> [:notice] [pid 145:tid 140313388616520] Created: worker for ://:\x9d\x7f failed: Unable to parse URL
> [:notice] [pid 145:tid 140313388616520] add_balancer_node: balancer safe-name (balancer://4:6666\r\nX-Manager-Url: /c8dae916-7642-4deb-b841-2f3435613749\r\nX-Manager-Protocol: http\r\nX-Manager-Host: 172.17.0.4\r\n\r\n) too long
> {code}
> Application server logs (tomcat8):
> {code}
> Feb 01, 2017 11:34:25 AM org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler sendRequest
> ERROR: MODCLUSTER000042: Error MEM sending STATUS command to 172.17.0.4/172.17.0.4:6666, configuration will be reset: MEM: Can't read node with "java1-depl1-1234" JVMRoute
> Feb 01, 2017 11:34:35 AM org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor processChildren
> SEVERE: Exception invoking periodic operation:
> java.lang.IllegalArgumentException: Node: [1],Name: ,Balancer: ,LBGroup: ,Host: ,Port: ,Type: ,Flushpackets: Off,Flushwait: 0,Ping: 0,Smax: 0,Ttl: 0,Elected: 0,Read: 0,Transfered: 0,Connected: 0,Load: 0
> Node: [2],Name: ,Balancer: ,LBGroup: ,Host: ,Port: ,Type: ,Flushpackets: Off,Flushwait: 0,Ping: 0,Smax: 0,Ttl: 0,Elected: 0,Read: 0,Transfered: 0,Connected: 0,Load: 0
> Node: [3],Name: ,Balancer: ,LBGroup: ,Host: ,Port: ,Type: ,Flushpackets: Off,Flushwait: 0,Ping: 0,Smax: 0,Ttl: 0,Elected: 0,Read: 0,Transfered: 0,Connected: 0,Load: 0
> # ... trimmed lines ...
> Node: [18],Name: ,Balancer: ,LBGroup: ,Host: ,Port: ,Type: ,Flushpackets: Off,Flushwait: 0,Ping: 0,Smax: 0,Ttl: 0,Elected: 0,Read: 0,Transfered: 0,Connected: 0,Load: 0
> Node: [19],Name: ,Balancer: ,LBGroup: ,Host: ,Port: ,Type: ,Flushpackets: Off,Flushwait: 0,Ping: 0,Smax: 0,Ttl: 0,Elected: 0,Read: 0,Transfered: 0,Connected: 0,Load: 0
> Vhost: [0:0:1], Alias:
> Vhost: [0:0:2], Alias:
> Vhost: [0:0:3], Alias:
> # ... trimmed lines ...
> Vhost: [0:0:19], Alias:
> Vhost: [0:0:20], Alias:
> Context: [0:0:1], Context: , Status: REMOVED
> Context: [0:0:2], Context: , Status: REMOVED
> Context: [0:0:3], Context: , Status: REMOVED
> # ... trimmed lines ...
> Context: [0:0:97], Context: , Status: REMOVED
> Context: [0:0:98], Context: , Status: REMOVED
> Context: [0:0:99], Context: , Status: REMOVED
> Context: [0:0:1], Context: , Status: REMOVED
> at org.jboss.modcluster.mcmp.impl.DefaultMCMPResponseParser.parseInfoResponse(DefaultMCMPResponseParser.java:96)
> at org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler.status(DefaultMCMPHandler.java:396)
> at org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler.status(DefaultMCMPHandler.java:365)
> at org.jboss.modcluster.ModClusterService.status(ModClusterService.java:457)
> at org.jboss.modcluster.container.catalina.CatalinaEventHandlerAdapter.lifecycleEvent(CatalinaEventHandlerAdapter.java:252)
> at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:117)
> at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:90)
> at org.apache.catalina.core.ContainerBase.backgroundProcess(ContainerBase.java:1374)
> at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1546)
> at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.run(ContainerBase.java:1524)
> at java.lang.Thread.run(Thread.java:745)
> Feb 01, 2017 11:35:05 AM org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler sendRequest
> {code}
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (MODCLUSTER-570) Multiple invalid nodes created on graceful httpd restart
by Michal Karm (Jira)
[ https://issues.jboss.org/browse/MODCLUSTER-570?page=com.atlassian.jira.pl... ]
Michal Karm reassigned MODCLUSTER-570:
--------------------------------------
Assignee: Jean-Frederic Clere (was: Michal Karm)
> Multiple invalid nodes created on graceful httpd restart
> --------------------------------------------------------
>
> Key: MODCLUSTER-570
> URL: https://issues.jboss.org/browse/MODCLUSTER-570
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.5.Final, 1.3.6.Final
> Environment: httpd 2.4.25
> Alpine Linux 3.5
> Reporter: Antoine Cotten
> Assignee: Jean-Frederic Clere
> Priority: Minor
> Attachments: mod_cluster_after_graceful.png, mod_cluster_before_graceful.png
>
>
> mod_cluster creates a number of invalid nodes after a graceful restart of Apache httpd ({{SIGUSR1}}, {{SIGWINCH}}). The issue happens with or without backend servers in the pool.
> httpd logs:
> {code}
> [mpm_event:notice] [pid 1:tid 140313388616520] AH00493: SIGUSR1 received. Doing graceful restart
> [:notice] [pid 1:tid 140313388616520] Advertise initialized for process 1
> [mpm_event:notice] [pid 1:tid 140313388616520] AH00489: Apache/2.4.25 (Unix) mod_cluster/1.3.5.Final configured -- resuming normal operations
> [core:notice] [pid 1:tid 140313388616520] AH00094: Command line: 'httpd -D FOREGROUND'
> [:notice] [pid 116:tid 140313388616520] Created: worker for ://: failed: Unable to parse URL
> [:notice] [pid 116:tid 140313388616520] Created: worker for ://: failed: Unable to parse URL
> [:notice] [pid 116:tid 140313388616520] Created: worker for ://: failed: Unable to parse URL
> # ... trimmed 48 lines ...
> [:notice] [pid 145:tid 140313388616520] Created: worker for ://: failed: Unable to parse URL
> [:notice] [pid 145:tid 140313388616520] Created: worker for ://: failed: Unable to parse URL
> [:notice] [pid 145:tid 140313388616520] Created: worker for ://: failed: Unable to parse URL
> [:notice] [pid 145:tid 140313388616520] Created: worker for ://:\x9d\x7f failed: Unable to parse URL
> [:notice] [pid 145:tid 140313388616520] add_balancer_node: balancer safe-name (balancer://4:6666\r\nX-Manager-Url: /c8dae916-7642-4deb-b841-2f3435613749\r\nX-Manager-Protocol: http\r\nX-Manager-Host: 172.17.0.4\r\n\r\n) too long
> {code}
> Application server logs (tomcat8):
> {code}
> Feb 01, 2017 11:34:25 AM org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler sendRequest
> ERROR: MODCLUSTER000042: Error MEM sending STATUS command to 172.17.0.4/172.17.0.4:6666, configuration will be reset: MEM: Can't read node with "java1-depl1-1234" JVMRoute
> Feb 01, 2017 11:34:35 AM org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor processChildren
> SEVERE: Exception invoking periodic operation:
> java.lang.IllegalArgumentException: Node: [1],Name: ,Balancer: ,LBGroup: ,Host: ,Port: ,Type: ,Flushpackets: Off,Flushwait: 0,Ping: 0,Smax: 0,Ttl: 0,Elected: 0,Read: 0,Transfered: 0,Connected: 0,Load: 0
> Node: [2],Name: ,Balancer: ,LBGroup: ,Host: ,Port: ,Type: ,Flushpackets: Off,Flushwait: 0,Ping: 0,Smax: 0,Ttl: 0,Elected: 0,Read: 0,Transfered: 0,Connected: 0,Load: 0
> Node: [3],Name: ,Balancer: ,LBGroup: ,Host: ,Port: ,Type: ,Flushpackets: Off,Flushwait: 0,Ping: 0,Smax: 0,Ttl: 0,Elected: 0,Read: 0,Transfered: 0,Connected: 0,Load: 0
> # ... trimmed lines ...
> Node: [18],Name: ,Balancer: ,LBGroup: ,Host: ,Port: ,Type: ,Flushpackets: Off,Flushwait: 0,Ping: 0,Smax: 0,Ttl: 0,Elected: 0,Read: 0,Transfered: 0,Connected: 0,Load: 0
> Node: [19],Name: ,Balancer: ,LBGroup: ,Host: ,Port: ,Type: ,Flushpackets: Off,Flushwait: 0,Ping: 0,Smax: 0,Ttl: 0,Elected: 0,Read: 0,Transfered: 0,Connected: 0,Load: 0
> Vhost: [0:0:1], Alias:
> Vhost: [0:0:2], Alias:
> Vhost: [0:0:3], Alias:
> # ... trimmed lines ...
> Vhost: [0:0:19], Alias:
> Vhost: [0:0:20], Alias:
> Context: [0:0:1], Context: , Status: REMOVED
> Context: [0:0:2], Context: , Status: REMOVED
> Context: [0:0:3], Context: , Status: REMOVED
> # ... trimmed lines ...
> Context: [0:0:97], Context: , Status: REMOVED
> Context: [0:0:98], Context: , Status: REMOVED
> Context: [0:0:99], Context: , Status: REMOVED
> Context: [0:0:1], Context: , Status: REMOVED
> at org.jboss.modcluster.mcmp.impl.DefaultMCMPResponseParser.parseInfoResponse(DefaultMCMPResponseParser.java:96)
> at org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler.status(DefaultMCMPHandler.java:396)
> at org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler.status(DefaultMCMPHandler.java:365)
> at org.jboss.modcluster.ModClusterService.status(ModClusterService.java:457)
> at org.jboss.modcluster.container.catalina.CatalinaEventHandlerAdapter.lifecycleEvent(CatalinaEventHandlerAdapter.java:252)
> at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:117)
> at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:90)
> at org.apache.catalina.core.ContainerBase.backgroundProcess(ContainerBase.java:1374)
> at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1546)
> at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.run(ContainerBase.java:1524)
> at java.lang.Thread.run(Thread.java:745)
> Feb 01, 2017 11:35:05 AM org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler sendRequest
> {code}
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (MODCLUSTER-587) Docs: Undertow hosts root location is exposed(by default) to mod_cluster load balancer
by Michal Karm (Jira)
[ https://issues.jboss.org/browse/MODCLUSTER-587?page=com.atlassian.jira.pl... ]
Michal Karm closed MODCLUSTER-587.
----------------------------------
Resolution: Out of Date
Closing as out of date.
> Docs: Undertow hosts root location is exposed(by default) to mod_cluster load balancer
> --------------------------------------------------------------------------------------
>
> Key: MODCLUSTER-587
> URL: https://issues.jboss.org/browse/MODCLUSTER-587
> Project: mod_cluster
> Issue Type: Bug
> Components: Documentation & Demos
> Reporter: Bogdan Sikora
> Assignee: Michal Karm
> Priority: Major
>
> Undertow can be configured to serve static content per host via the location resource. This location resource is exposed to the mod_cluster balancer.
> {noformat}
> "context" => {
> "/custom_location" => {
> "requests" => 0,
> "status" => "enabled"
> },
> {noformat}
> However, all EAP standalone profiles have the root location enabled by default. This root location is then exposed to the balancer (the application root "/" is registered).
> {noformat}
> "context" => {
> "/" => {
> "requests" => 0,
> "status" => "enabled"
> },
> {noformat}
> The root application matches any application call, so the mod_cluster balancer is unable to route requests correctly.
> To make mod_cluster work, one must delete the root location from the worker nodes:
> {noformat}
> /subsystem=undertow/server=default-server/host=default-host/location=\/:remove()
> {noformat}
> or exclude the ROOT context:
> {noformat}
> /subsystem=modcluster/mod-cluster-config=configuration:write-attribute(name=excluded-contexts,value="ROOT")
> {noformat}
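> Equivalently, in {{standalone.xml}} (a sketch; the subsystem namespace version depends on the server release):
> {noformat}
> <subsystem xmlns="urn:jboss:domain:modcluster:2.0">
>     <mod-cluster-config advertise-socket="modcluster" excluded-contexts="ROOT" connector="ajp"/>
> </subsystem>
> {noformat}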
> *_+Proposal:+_*
> Exclude the ROOT context by default.
--
This message was sent by Atlassian Jira
(v7.12.1#712002)