[JBoss JIRA] (MODCLUSTER-536) List of open files grows steadily during load test through mod_cluster
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-536?page=com.atlassian.jira.pl... ]
Radoslav Husar updated MODCLUSTER-536:
--------------------------------------
Comment: was deleted
(was: Hi Radoslav,
In mod_cluster 1.3.1.Final, the mod_cluster-manager status page reports wrong information. For details, please see the following thread:
https://developer.jboss.org/thread/271283?start=0&tstart=0
Thanks.
Leo)
> List of open files grows steadily during load test through mod_cluster
> ----------------------------------------------------------------------
>
> Key: MODCLUSTER-536
> URL: https://issues.jboss.org/browse/MODCLUSTER-536
> Project: mod_cluster
> Issue Type: Bug
> Components: Core & Container Integration (Java)
> Affects Versions: 1.3.1.Final
> Environment: WildFly 10.0.0.Final
> mod_cluster-1.3.1.Final-linux2-x64-ssl
> CentOS7 (virtualbox)
> Reporter: Wayne Wang
> Assignee: Michal Karm Babacek
> Attachments: error_log, httpd-mpm.conf, httpd.conf, server.log, standalone-full-ha-snippet.xml
>
>
> I was able to configure the WildFly 10 mod_cluster subsystem to work with Apache mod_cluster (1.3.1). However, during a load test I found that requests routed through the web server eventually caused an error in the WildFly instance, and I also saw errors in the Apache web server's error log.
> The visible error in the WildFly instance is "java.net.SocketException: Too many open files". Using the command lsof -u <user> | grep TCP | wc -l, I could watch the number grow steadily until the WildFly instance reported the error. This happened only when I sent requests through the web server.
> However, when I sent requests to the WildFly instance (app server) directly, the number did not grow, and the app server could take a much heavier load without this issue.
> The issue does not appear until many rounds of load tests have been executed through the web server. If I restart the web server, everything works fine until I run many rounds of load tests again.
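The descriptor growth described above can be watched from the shell. A minimal sketch (the PID is a placeholder; here the script inspects its own shell, so substitute the httpd or WildFly process ID, and note that lsof may need installing):

```shell
# Sketch: watch file-descriptor usage of a process during a load test.
pid=$$                                      # placeholder: use the httpd/WildFly PID
ls /proc/"$pid"/fd | wc -l                  # total open descriptors for that process
lsof -p "$pid" 2>/dev/null | grep -c TCP    # TCP sockets only, if lsof is available
ulimit -n                                   # per-process limit behind "Too many open files"
```

Running the first line repeatedly during the test (e.g. under `watch`) makes a leak visible as a monotonically growing count.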
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 6 months
[JBoss JIRA] (MODCLUSTER-536) List of open files grows steadily during load test through mod_cluster
by Leo HE (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-536?page=com.atlassian.jira.pl... ]
Leo HE commented on MODCLUSTER-536:
-----------------------------------
Hi Jean,
In mod_cluster 1.3.1.Final, the mod_cluster-manager status page reports wrong information. For details, please see the following thread:
https://developer.jboss.org/thread/271283?start=0&tstart=0
Thanks.
Leo
--
[JBoss JIRA] (MODCLUSTER-536) List of open files grows steadily during load test through mod_cluster
by Leo HE (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-536?page=com.atlassian.jira.pl... ]
Leo HE commented on MODCLUSTER-536:
-----------------------------------
Hi Radoslav,
In mod_cluster 1.3.1.Final, the mod_cluster-manager status page reports wrong information. For details, please see the following thread:
https://developer.jboss.org/thread/271283?start=0&tstart=0
Thanks.
Leo
--
[JBoss JIRA] (MODCLUSTER-438) WebSocket support for mod_cluster
by Michal Karm Babacek (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-438?page=com.atlassian.jira.pl... ]
Michal Karm Babacek commented on MODCLUSTER-438:
------------------------------------------------
[~gfleury] Thank you. The current version of httpd/mod_cluster supports WebSockets via proxy_wstunnel. Does nginx/mod_cluster do anything differently? E.g. what happens when a worker dies? Does the balancer try to re-negotiate the handshake with a different worker, or is it up to the client to do so?
We arrived at the conclusion that there is no point in making WebSocket connections "HA": when a worker dies, the communication is broken and the client has to initiate it again; the balancer will then give the client another, healthy worker.
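For context, the httpd side of this amounts to loading mod_proxy_wstunnel next to mod_proxy. A minimal sketch (module paths and the hostname are illustrative; with mod_cluster the workers register themselves dynamically, so the static ProxyPass line only shows what plain mod_proxy would need):

```apache
# Sketch, not taken from the attached configs: enable WebSocket tunnelling.
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so

# Plain mod_proxy would map a WebSocket endpoint statically, e.g.:
ProxyPass "/chat" "ws://worker1.example.com:8080/chat"
```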
> WebSocket support for mod_cluster
> ---------------------------------
>
> Key: MODCLUSTER-438
> URL: https://issues.jboss.org/browse/MODCLUSTER-438
> Project: mod_cluster
> Issue Type: Feature Request
> Components: Native (httpd modules)
> Affects Versions: 1.3.2.Final
> Reporter: Michal Karm Babacek
> Assignee: Jean-Frederic Clere
> Fix For: Awaiting Volunteers
>
>
> * take a look at mod_proxy_wstunnel
> * mod_cluster should be able to load balance WebSocket connections to worker nodes
> * consider high-availability with respect to WebSockets
> Additional details TBD...
--
[JBoss JIRA] (MODCLUSTER-449) Implement ramp-up when starting new nodes
by Bogdan Sikora (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-449?page=com.atlassian.jira.pl... ]
Bogdan Sikora edited comment on MODCLUSTER-449 at 9/8/16 8:25 AM:
------------------------------------------------------------------
Undertow balancer with two nodes (load 10 each); midway through the test, a third node (load 90) connects.
!undertowRamp-up.jpg|thumbnail!
was (Author: bsikora):
Undertow balancer with 2 (Load 10) nodes and in the middle third (Load 90) node
!undertowRamp-up.jpg|thumbnail!
> Implement ramp-up when starting new nodes
> -----------------------------------------
>
> Key: MODCLUSTER-449
> URL: https://issues.jboss.org/browse/MODCLUSTER-449
> Project: mod_cluster
> Issue Type: Feature Request
> Components: Core & Container Integration (Java)
> Affects Versions: 1.2.0.Final, 1.3.0.Final
> Reporter: Radoslav Husar
> Assignee: Radoslav Husar
> Priority: Critical
> Fix For: 2.0.0.Alpha1
>
> Attachments: httpdRamp-up.jpg, undertowRamp-up.jpg
>
>
> IIUC this has been a problem since inception. The problem is that a node's initial load stays in effect for load-balancing decisions until a new stat interval kicks in.
> This effect is mitigated by load decay over time, but in the window right after a new node joins, it can get overloaded on startup.
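The decay mechanism described above can be sketched as follows. This is a simplified model, not mod_cluster's actual code: a node's balance factor is a weighted average of its load history, with older readings weighted down by a decay factor, so a pessimistic initial load fades over successive stat intervals:

```python
# Sketch (assumed model, not mod_cluster source): decaying weighted load history.
def balance_factor(history, decay=2.0):
    """history is newest-first; each older reading weighs 1/decay as much."""
    num = sum(load / (decay ** i) for i, load in enumerate(history))
    den = sum(1.0 / (decay ** i) for i in range(len(history)))
    return num / den

# A node joins with a high initial load (90) among idle readings (10):
history = [90]                 # newest first
for fresh in (10, 10, 10):     # three stat intervals pass
    history.insert(0, fresh)
print(round(balance_factor(history), 1))  # → 15.3
```

The initial 90 still skews the factor for several intervals, which is exactly the startup window this issue wants to smooth with an explicit ramp-up.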
--
[JBoss JIRA] (MODCLUSTER-449) Implement ramp-up when starting new nodes
by Bogdan Sikora (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-449?page=com.atlassian.jira.pl... ]
Bogdan Sikora edited comment on MODCLUSTER-449 at 9/8/16 8:25 AM:
------------------------------------------------------------------
Httpd balancer with two nodes (load 10 each); midway through the test, a third node (load 90) connects.
!httpdRamp-up.jpg|thumbnail!
was (Author: bsikora):
Httpd balancer with 2 (Load 10) nodes and in the middle third (Load 90) node
--
[JBoss JIRA] (MODCLUSTER-449) Implement ramp-up when starting new nodes
by Bogdan Sikora (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-449?page=com.atlassian.jira.pl... ]
Bogdan Sikora edited comment on MODCLUSTER-449 at 9/8/16 8:25 AM:
------------------------------------------------------------------
Undertow balancer with two nodes (load 10 each); midway through the test, a third node (load 90) connects.
!undertowRamp-up.jpg|thumbnail!
was (Author: bsikora):
Undertow balancer with 2 (Load 10) nodes and in the middle third (Load 90) node
--