[JBoss JIRA] (MODCLUSTER-721) Setting smax results in very small max connection pool on mod_cluster
by Radoslav Husar (Jira)
[ https://issues.redhat.com/browse/MODCLUSTER-721?page=com.atlassian.jira.p... ]
Radoslav Husar updated MODCLUSTER-721:
--------------------------------------
Fix Version/s: 2.0.0.Alpha1
> Setting smax results in very small max connection pool on mod_cluster
> ---------------------------------------------------------------------
>
> Key: MODCLUSTER-721
> URL: https://issues.redhat.com/browse/MODCLUSTER-721
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.13.Final
> Reporter: Aaron Ogburn
> Assignee: Jean-Frederic Clere
> Priority: Major
> Fix For: 2.0.0.Alpha1, 1.3.14.Final
>
>
> mod_cluster's connection pool is now sized differently in httpd 2.4 compared to httpd 2.2. If smax is set, mod_cluster now sets the pool max to smax+1:
> {code}
> if (worker->s->hmax < node->mess.smax)
>     worker->s->hmax = node->mess.smax + 1;
> {code}
> Previously, the max would be ThreadsPerChild+1. Now, if someone sets smax=1 and moves an equivalent config from httpd 2.2 to httpd 2.4, they see severe responsiveness issues because the worker fails to acquire a connection with just 3 concurrent requests:
> {code}
> [Tue Apr 21 10:02:08.995604 2020] [proxy:debug] [pid 27148:tid 139854005438432] proxy_util.c(1981): AH00927: initializing worker http://127.0.0.1:8080 local
> [Tue Apr 21 10:02:08.995616 2020] [proxy:debug] [pid 27148:tid 139854005438432] proxy_util.c(2016): AH00930: initialized pool in child 27148 for () min=0 max=2 smax=1
> ...
> [Tue Apr 21 10:20:41.123443 2020] [:debug] [pid 27147:tid 139853093992192] mod_proxy_cluster.c(2479): proxy: byrequests balancer DONE (http:///127.0.0.1:8080)
> [Tue Apr 21 10:20:41.123456 2020] [proxy:debug] [pid 27147:tid 139853093992192] proxy_util.c(1919): AH00924: worker http:///127.0.0.1:8080 shared already initialized
> [Tue Apr 21 10:20:41.123460 2020] [proxy:debug] [pid 27147:tid 139853093992192] proxy_util.c(1976): AH00926: worker http:///127.0.0.1:8080 local already initialized
> [Tue Apr 21 10:20:41.123464 2020] [proxy:debug] [pid 27147:tid 139853093992192] mod_proxy.c(1254): [client 127.0.0.1:40760] AH01143: Running scheme balancer handler (attempt 0)
> [Tue Apr 21 10:20:41.125726 2020] [proxy:error] [pid 27147:tid 139853093992192] (70007)The timeout specified has expired: AH00941: HTTP: failed to acquire connection for ()
> {code}
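For illustration, here is a minimal, self-contained C sketch (with toy structs standing in for proxy_worker_shared, and a hypothetical helper, not the actual patch) of the httpd 2.2-style sizing the report asks for: hmax follows ThreadsPerChild so every server thread can still obtain a backend connection, while smax only caps how many idle connections are kept alive and never shrinks the hard maximum:
{code}
#include <stdio.h>

/* Toy stand-in for the shared worker state (assumption for this sketch). */
struct shared { int hmax; int smax; };

/* Hypothetical sizing helper: derive hmax from ThreadsPerChild, as httpd 2.2
 * effectively did, and treat smax purely as an idle-connection cap. */
static void size_pool(struct shared *s, int threads_per_child, int cfg_smax)
{
    s->hmax = threads_per_child + 1;          /* room for every thread */
    if (cfg_smax > 0 && cfg_smax < s->hmax)
        s->smax = cfg_smax;                   /* smax bounds idle connections only */
    else
        s->smax = s->hmax;
}

int main(void)
{
    struct shared s;
    size_pool(&s, 25, 1);                     /* ThreadsPerChild=25, smax=1 */
    printf("hmax=%d smax=%d\n", s.hmax, s.smax); /* prints: hmax=26 smax=1 */
    return 0;
}
{code}
With this sizing, smax=1 still trims idle connections aggressively, but the pool can grow to serve all 25 threads, so 3 concurrent requests no longer exhaust it.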
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (MODCLUSTER-721) Setting smax results in very small max connection pool on mod_cluster
by Radoslav Husar (Jira)
[ https://issues.redhat.com/browse/MODCLUSTER-721?page=com.atlassian.jira.p... ]
Radoslav Husar updated MODCLUSTER-721:
--------------------------------------
Git Pull Request: https://github.com/modcluster/mod_proxy_cluster/pull/27, https://github.com/modcluster/mod_cluster/pull/457 (was: https://github.com/modcluster/mod_proxy_cluster/pull/27)
> Setting smax results in very small max connection pool on mod_cluster
> ---------------------------------------------------------------------
>
> Key: MODCLUSTER-721
> URL: https://issues.redhat.com/browse/MODCLUSTER-721
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.13.Final
> Reporter: Aaron Ogburn
> Assignee: Jean-Frederic Clere
> Priority: Major
> Fix For: 2.0.0.Alpha1, 1.3.14.Final
>
>
> mod_cluster's connection pool is now sized differently in httpd 2.4 compared to httpd 2.2. If smax is set, mod_cluster now sets the pool max to smax+1:
> {code}
> if (worker->s->hmax < node->mess.smax)
>     worker->s->hmax = node->mess.smax + 1;
> {code}
> Previously, the max would be ThreadsPerChild+1. Now, if someone sets smax=1 and moves an equivalent config from httpd 2.2 to httpd 2.4, they see severe responsiveness issues because the worker fails to acquire a connection with just 3 concurrent requests:
> {code}
> [Tue Apr 21 10:02:08.995604 2020] [proxy:debug] [pid 27148:tid 139854005438432] proxy_util.c(1981): AH00927: initializing worker http://127.0.0.1:8080 local
> [Tue Apr 21 10:02:08.995616 2020] [proxy:debug] [pid 27148:tid 139854005438432] proxy_util.c(2016): AH00930: initialized pool in child 27148 for () min=0 max=2 smax=1
> ...
> [Tue Apr 21 10:20:41.123443 2020] [:debug] [pid 27147:tid 139853093992192] mod_proxy_cluster.c(2479): proxy: byrequests balancer DONE (http:///127.0.0.1:8080)
> [Tue Apr 21 10:20:41.123456 2020] [proxy:debug] [pid 27147:tid 139853093992192] proxy_util.c(1919): AH00924: worker http:///127.0.0.1:8080 shared already initialized
> [Tue Apr 21 10:20:41.123460 2020] [proxy:debug] [pid 27147:tid 139853093992192] proxy_util.c(1976): AH00926: worker http:///127.0.0.1:8080 local already initialized
> [Tue Apr 21 10:20:41.123464 2020] [proxy:debug] [pid 27147:tid 139853093992192] mod_proxy.c(1254): [client 127.0.0.1:40760] AH01143: Running scheme balancer handler (attempt 0)
> [Tue Apr 21 10:20:41.125726 2020] [proxy:error] [pid 27147:tid 139853093992192] (70007)The timeout specified has expired: AH00941: HTTP: failed to acquire connection for ()
> {code}
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (MODCLUSTER-721) Setting smax results in very small max connection pool on mod_cluster
by Radoslav Husar (Jira)
[ https://issues.redhat.com/browse/MODCLUSTER-721?page=com.atlassian.jira.p... ]
Radoslav Husar updated MODCLUSTER-721:
--------------------------------------
Fix Version/s: 1.3.14.Final
> Setting smax results in very small max connection pool on mod_cluster
> ---------------------------------------------------------------------
>
> Key: MODCLUSTER-721
> URL: https://issues.redhat.com/browse/MODCLUSTER-721
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.13.Final
> Reporter: Aaron Ogburn
> Assignee: Jean-Frederic Clere
> Priority: Major
> Fix For: 1.3.14.Final
>
>
> mod_cluster's connection pool is now sized differently in httpd 2.4 compared to httpd 2.2. If smax is set, mod_cluster now sets the pool max to smax+1:
> {code}
> if (worker->s->hmax < node->mess.smax)
>     worker->s->hmax = node->mess.smax + 1;
> {code}
> Previously, the max would be ThreadsPerChild+1. Now, if someone sets smax=1 and moves an equivalent config from httpd 2.2 to httpd 2.4, they see severe responsiveness issues because the worker fails to acquire a connection with just 3 concurrent requests:
> {code}
> [Tue Apr 21 10:02:08.995604 2020] [proxy:debug] [pid 27148:tid 139854005438432] proxy_util.c(1981): AH00927: initializing worker http://127.0.0.1:8080 local
> [Tue Apr 21 10:02:08.995616 2020] [proxy:debug] [pid 27148:tid 139854005438432] proxy_util.c(2016): AH00930: initialized pool in child 27148 for () min=0 max=2 smax=1
> ...
> [Tue Apr 21 10:20:41.123443 2020] [:debug] [pid 27147:tid 139853093992192] mod_proxy_cluster.c(2479): proxy: byrequests balancer DONE (http:///127.0.0.1:8080)
> [Tue Apr 21 10:20:41.123456 2020] [proxy:debug] [pid 27147:tid 139853093992192] proxy_util.c(1919): AH00924: worker http:///127.0.0.1:8080 shared already initialized
> [Tue Apr 21 10:20:41.123460 2020] [proxy:debug] [pid 27147:tid 139853093992192] proxy_util.c(1976): AH00926: worker http:///127.0.0.1:8080 local already initialized
> [Tue Apr 21 10:20:41.123464 2020] [proxy:debug] [pid 27147:tid 139853093992192] mod_proxy.c(1254): [client 127.0.0.1:40760] AH01143: Running scheme balancer handler (attempt 0)
> [Tue Apr 21 10:20:41.125726 2020] [proxy:error] [pid 27147:tid 139853093992192] (70007)The timeout specified has expired: AH00941: HTTP: failed to acquire connection for ()
> {code}
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (MODCLUSTER-721) Setting smax results in very small max connection pool on mod_cluster
by Jean-Frederic Clere (Jira)
[ https://issues.redhat.com/browse/MODCLUSTER-721?page=com.atlassian.jira.p... ]
Jean-Frederic Clere updated MODCLUSTER-721:
-------------------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> Setting smax results in very small max connection pool on mod_cluster
> ---------------------------------------------------------------------
>
> Key: MODCLUSTER-721
> URL: https://issues.redhat.com/browse/MODCLUSTER-721
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.13.Final
> Reporter: Aaron Ogburn
> Assignee: Jean-Frederic Clere
> Priority: Major
>
> mod_cluster's connection pool is now sized differently in httpd 2.4 compared to httpd 2.2. If smax is set, mod_cluster now sets the pool max to smax+1:
> {code}
> if (worker->s->hmax < node->mess.smax)
>     worker->s->hmax = node->mess.smax + 1;
> {code}
> Previously, the max would be ThreadsPerChild+1. Now, if someone sets smax=1 and moves an equivalent config from httpd 2.2 to httpd 2.4, they see severe responsiveness issues because the worker fails to acquire a connection with just 3 concurrent requests:
> {code}
> [Tue Apr 21 10:02:08.995604 2020] [proxy:debug] [pid 27148:tid 139854005438432] proxy_util.c(1981): AH00927: initializing worker http://127.0.0.1:8080 local
> [Tue Apr 21 10:02:08.995616 2020] [proxy:debug] [pid 27148:tid 139854005438432] proxy_util.c(2016): AH00930: initialized pool in child 27148 for () min=0 max=2 smax=1
> ...
> [Tue Apr 21 10:20:41.123443 2020] [:debug] [pid 27147:tid 139853093992192] mod_proxy_cluster.c(2479): proxy: byrequests balancer DONE (http:///127.0.0.1:8080)
> [Tue Apr 21 10:20:41.123456 2020] [proxy:debug] [pid 27147:tid 139853093992192] proxy_util.c(1919): AH00924: worker http:///127.0.0.1:8080 shared already initialized
> [Tue Apr 21 10:20:41.123460 2020] [proxy:debug] [pid 27147:tid 139853093992192] proxy_util.c(1976): AH00926: worker http:///127.0.0.1:8080 local already initialized
> [Tue Apr 21 10:20:41.123464 2020] [proxy:debug] [pid 27147:tid 139853093992192] mod_proxy.c(1254): [client 127.0.0.1:40760] AH01143: Running scheme balancer handler (attempt 0)
> [Tue Apr 21 10:20:41.125726 2020] [proxy:error] [pid 27147:tid 139853093992192] (70007)The timeout specified has expired: AH00941: HTTP: failed to acquire connection for ()
> {code}
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (MODCLUSTER-522) Memory leak in processing MCMP, wrong apr pool used for allocation
by Jean-Frederic Clere (Jira)
[ https://issues.redhat.com/browse/MODCLUSTER-522?page=com.atlassian.jira.p... ]
Jean-Frederic Clere edited comment on MODCLUSTER-522 at 5/6/20 3:43 AM:
------------------------------------------------------------------------
[~sekhach] don't use the old 1.3.3; use one of the recent tags from the GitHub repo and build from the sources.
Otherwise, you have 2 ways to fix the problem (see the sketch after this list):
- remove the -Wunused-parameter flag (https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html)
- fix the old broken code (just remove void *mconfig and the parameter when it is called).
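For illustration, a hedged sketch of the second option. Rather than removing the parameters outright, a common alternative is to keep the (cmd_parms *, void *, int) signature that httpd's AP_INIT_FLAG expects and explicitly discard the unused parameters; the handler name matches the compiler output quoted later in this thread, while the flag variable and body are assumptions, not the actual fix:
{code}
#include "httpd.h"
#include "http_config.h"

/* Assumed module state flag, for this sketch only. */
static int deterministic_failover = 0;

/* Keeps the signature AP_INIT_FLAG requires, but explicitly discards the
 * parameters the handler does not use so -Wunused-parameter stays quiet. */
static const char *cmd_proxy_cluster_deterministic_failover(cmd_parms *parms,
                                                            void *mconfig,
                                                            int on)
{
    (void)parms;
    (void)mconfig;
    deterministic_failover = on;
    return NULL; /* NULL signals "no configuration error" to httpd */
}
{code}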
was (Author: jfclere):
[~sekhach] don't use the old 1.3.x; use one of the recent tags from the GitHub repo and build from the sources.
Otherwise, you have 2 ways to fix the problem:
- remove the -Wunused-parameter flag (https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html)
- fix the old broken code (just remove void *mconfig and the parameter when it is called).
> Memory leak in processing MCMP, wrong apr pool used for allocation
> ------------------------------------------------------------------
>
> Key: MODCLUSTER-522
> URL: https://issues.redhat.com/browse/MODCLUSTER-522
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.1.Final, 2.0.0.Alpha1
> Environment: Solaris 10 x86_64, RHEL 6, Fedora 24, httpd 2.4.6, httpd 2.4.20
> Reporter: Michal Karm
> Assignee: Jean-Frederic Clere
> Priority: Critical
> Fix For: 1.3.5.Final
>
> Attachments: mod_cluster-mem.jpg, mod_cluster-mem1.jpg, screenshot-1.png
>
>
> The wrong APR pool seems to be used when processing certain MCMP commands. We should use short-lifespan pools for immediate processing and server-lifetime pools only for truly persistent configuration. In the current state, with 20+ Tomcat workers (1 alias and 1 context each) and virtually no client requests, we observed a slow but steady growth of heap-allocated memory.
> TODO: Investigate the offending logic and make sure we aren't using long-lived pools for immediate processing.
> Originally discovered by: [~j_sykora] and [~jmsantuci]
> Illustrative memory overview - with a constant number of Tomcats:
>
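As an illustration of the pool discipline the description calls for, here is a hedged sketch (the function and names are hypothetical, not mod_cluster's actual code): per-command scratch allocations come from a short-lived subpool that is destroyed as soon as the MCMP command has been processed, so only genuinely persistent configuration touches the server-lifetime pool:
{code}
#include <apr_pools.h>
#include <apr_strings.h>

/* Hypothetical handler sketch, not the actual mod_cluster code. */
static void process_mcmp_command(apr_pool_t *server_pool, const char *body)
{
    apr_pool_t *scratch;

    /* Short-lived subpool for per-message parsing and temporaries. */
    if (apr_pool_create(&scratch, server_pool) != APR_SUCCESS)
        return;

    /* Parse `body`, building temporary structures from `scratch`. */
    char *copy = apr_pstrdup(scratch, body);
    (void)copy; /* placeholder for real parsing work */

    /* Memory is reclaimed here, per command, instead of accumulating
     * in the server-lifetime pool until httpd exits. */
    apr_pool_destroy(scratch);
}
{code}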
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (MODCLUSTER-522) Memory leak in processing MCMP, wrong apr pool used for allocation
by Jean-Frederic Clere (Jira)
[ https://issues.redhat.com/browse/MODCLUSTER-522?page=com.atlassian.jira.p... ]
Jean-Frederic Clere commented on MODCLUSTER-522:
------------------------------------------------
[~sekhach] don't use the old 1.3.x; use one of the recent tags from the GitHub repo and build from the sources.
Otherwise, you have 2 ways to fix the problem:
- remove the -Wunused-parameter flag (https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html)
- fix the old broken code (just remove void *mconfig and the parameter when it is called).
> Memory leak in processing MCMP, wrong apr pool used for allocation
> ------------------------------------------------------------------
>
> Key: MODCLUSTER-522
> URL: https://issues.redhat.com/browse/MODCLUSTER-522
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.1.Final, 2.0.0.Alpha1
> Environment: Solaris 10 x86_64, RHEL 6, Fedora 24, httpd 2.4.6, httpd 2.4.20
> Reporter: Michal Karm
> Assignee: Jean-Frederic Clere
> Priority: Critical
> Fix For: 1.3.5.Final
>
> Attachments: mod_cluster-mem.jpg, mod_cluster-mem1.jpg, screenshot-1.png
>
>
> The wrong APR pool seems to be used when processing certain MCMP commands. We should use short-lifespan pools for immediate processing and server-lifetime pools only for truly persistent configuration. In the current state, with 20+ Tomcat workers (1 alias and 1 context each) and virtually no client requests, we observed a slow but steady growth of heap-allocated memory.
> TODO: Investigate the offending logic and make sure we aren't using long-lived pools for immediate processing.
> Originally discovered by: [~j_sykora] and [~jmsantuci]
> Illustrative memory overview - with a constant number of Tomcats:
>
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (MODCLUSTER-522) Memory leak in processing MCMP, wrong apr pool used for allocation
by chandra sekhar (Jira)
[ https://issues.redhat.com/browse/MODCLUSTER-522?page=com.atlassian.jira.p... ]
chandra sekhar edited comment on MODCLUSTER-522 at 5/5/20 3:30 AM:
-------------------------------------------------------------------
Dear Karm, I am getting a compilation error while trying to build the RPM for RHEL 7 from a Fedora system, as below:
[wildfly@as-vip build]$ uname -a
Linux as-vip 4.10.16-200.fc25.x86_64 #1 SMP Mon May 15 15:19:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[wildfly@as-vip build]$
Please help us get the latest build for RHEL 7. Thanks.
The error is as shown in the "screenshot-1" image:
/home/wildfly/modclusterbuild/mod_proxy_cluster-master/native/mod_proxy_cluster/mod_proxy_cluster.c: In function ‘cmd_proxy_cluster_deterministic_failover’:
/home/wildfly/modclusterbuild/mod_proxy_cluster-master/native/mod_proxy_cluster/mod_proxy_cluster.c:3523:72: warning: unused parameter ‘parms’ [-Wunused-parameter]
static const char *cmd_proxy_cluster_deterministic_failover(cmd_parms *parms, void *mconfig, int on)
^~~~~
/home/wildfly/modclusterbuild/mod_proxy_cluster-master/native/mod_proxy_cluster/mod_proxy_cluster.c:3523:85: warning: unused parameter ‘mconfig’ [-Wunused-parameter]
static const char *cmd_proxy_cluster_deterministic_failover(cmd_parms *parms, void *mconfig, int on)
^~~~~~~
mod_proxy_cluster/CMakeFiles/mod_proxy_cluster.dir/build.make:62: recipe for target 'mod_proxy_cluster/CMakeFiles/mod_proxy_cluster.dir/mod_proxy_cluster.c.o' failed
make[2]: *** [mod_proxy_cluster/CMakeFiles/mod_proxy_cluster.dir/mod_proxy_cluster.c.o] Error 1
CMakeFiles/Makefile2:85: recipe for target 'mod_proxy_cluster/CMakeFiles/mod_proxy_cluster.dir/all' failed
make[1]: *** [mod_proxy_cluster/CMakeFiles/mod_proxy_cluster.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
[wildfly@as-vip build]$
was (Author: sekhach):
Dear Karm, I am getting a compilation error while trying to build the RPM for RHEL 7 from a Fedora system, as below:
[wildfly@as-vip build]$ uname -a
Linux as-vip 4.10.16-200.fc25.x86_64 #1 SMP Mon May 15 15:19:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[wildfly@as-vip build]$
Please help us get the latest build for RHEL 7. Thanks.
> Memory leak in processing MCMP, wrong apr pool used for allocation
> ------------------------------------------------------------------
>
> Key: MODCLUSTER-522
> URL: https://issues.redhat.com/browse/MODCLUSTER-522
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.1.Final, 2.0.0.Alpha1
> Environment: Solaris 10 x86_64, RHEL 6, Fedora 24, httpd 2.4.6, httpd 2.4.20
> Reporter: Michal Karm
> Assignee: Jean-Frederic Clere
> Priority: Critical
> Fix For: 1.3.5.Final
>
> Attachments: mod_cluster-mem.jpg, mod_cluster-mem1.jpg, screenshot-1.png
>
>
> The wrong APR pool seems to be used when processing certain MCMP commands. We should use short-lifespan pools for immediate processing and server-lifetime pools only for truly persistent configuration. In the current state, with 20+ Tomcat workers (1 alias and 1 context each) and virtually no client requests, we observed a slow but steady growth of heap-allocated memory.
> TODO: Investigate the offending logic and make sure we aren't using long-lived pools for immediate processing.
> Originally discovered by: [~j_sykora] and [~jmsantuci]
> Illustrative memory overview - with a constant number of Tomcats:
>
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (MODCLUSTER-522) Memory leak in processing MCMP, wrong apr pool used for allocation
by chandra sekhar (Jira)
[ https://issues.redhat.com/browse/MODCLUSTER-522?page=com.atlassian.jira.p... ]
chandra sekhar commented on MODCLUSTER-522:
-------------------------------------------
Dear Karm, I am getting a compilation error while trying to build the RPM for RHEL 7 from a Fedora system, as below:
[wildfly@as-vip build]$ uname -a
Linux as-vip 4.10.16-200.fc25.x86_64 #1 SMP Mon May 15 15:19:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[wildfly@as-vip build]$
Please help us get the latest build for RHEL 7. Thanks.
> Memory leak in processing MCMP, wrong apr pool used for allocation
> ------------------------------------------------------------------
>
> Key: MODCLUSTER-522
> URL: https://issues.redhat.com/browse/MODCLUSTER-522
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.1.Final, 2.0.0.Alpha1
> Environment: Solaris 10 x86_64, RHEL 6, Fedora 24, httpd 2.4.6, httpd 2.4.20
> Reporter: Michal Karm
> Assignee: Jean-Frederic Clere
> Priority: Critical
> Fix For: 1.3.5.Final
>
> Attachments: mod_cluster-mem.jpg, mod_cluster-mem1.jpg, screenshot-1.png
>
>
> The wrong APR pool seems to be used when processing certain MCMP commands. We should use short-lifespan pools for immediate processing and server-lifetime pools only for truly persistent configuration. In the current state, with 20+ Tomcat workers (1 alias and 1 context each) and virtually no client requests, we observed a slow but steady growth of heap-allocated memory.
> TODO: Investigate the offending logic and make sure we aren't using long-lived pools for immediate processing.
> Originally discovered by: [~j_sykora] and [~jmsantuci]
> Illustrative memory overview - with a constant number of Tomcats:
>
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (MODCLUSTER-522) Memory leak in processing MCMP, wrong apr pool used for allocation
by chandra sekhar (Jira)
[ https://issues.redhat.com/browse/MODCLUSTER-522?page=com.atlassian.jira.p... ]
chandra sekhar updated MODCLUSTER-522:
--------------------------------------
Attachment: screenshot-1.png
> Memory leak in processing MCMP, wrong apr pool used for allocation
> ------------------------------------------------------------------
>
> Key: MODCLUSTER-522
> URL: https://issues.redhat.com/browse/MODCLUSTER-522
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.1.Final, 2.0.0.Alpha1
> Environment: Solaris 10 x86_64, RHEL 6, Fedora 24, httpd 2.4.6, httpd 2.4.20
> Reporter: Michal Karm
> Assignee: Jean-Frederic Clere
> Priority: Critical
> Fix For: 1.3.5.Final
>
> Attachments: mod_cluster-mem.jpg, mod_cluster-mem1.jpg, screenshot-1.png
>
>
> The wrong APR pool seems to be used when processing certain MCMP commands. We should use short-lifespan pools for immediate processing and server-lifetime pools only for truly persistent configuration. In the current state, with 20+ Tomcat workers (1 alias and 1 context each) and virtually no client requests, we observed a slow but steady growth of heap-allocated memory.
> TODO: Investigate the offending logic and make sure we aren't using long-lived pools for immediate processing.
> Originally discovered by: [~j_sykora] and [~jmsantuci]
> Illustrative memory overview - with a constant number of Tomcats:
>
--
This message was sent by Atlassian Jira
(v7.13.8#713008)