[JBoss JIRA] (MODCLUSTER-711) Using "connectorPort" property fails if multiple services are configured in Tomcat
by Radoslav Husar (Jira)
[ https://issues.redhat.com/browse/MODCLUSTER-711?page=com.atlassian.jira.p... ]
Radoslav Husar updated MODCLUSTER-711:
--------------------------------------
Status: Pull Request Sent (was: Coding In Progress)
Git Pull Request: https://github.com/modcluster/mod_cluster/pull/459, https://github.com/modcluster/mod_cluster/pull/460
> Using "connectorPort" property fails if multiple services are configured in Tomcat
> ----------------------------------------------------------------------------------
>
> Key: MODCLUSTER-711
> URL: https://issues.redhat.com/browse/MODCLUSTER-711
> Project: mod_cluster
> Issue Type: Bug
> Components: Core & Container Integration (Java)
> Affects Versions: 1.4.0.Final
> Reporter: Tomas Briceno Fernandez
> Assignee: Radoslav Husar
> Priority: Major
> Fix For: 2.0.0.Alpha1, 1.4.2.Final
>
>
> If the Tomcat server configuration has several <service> elements and the mod_cluster listener is configured with *connectorPort* (the same most likely applies to *connectorAddress*), the configuration fails with these messages:
> {code}
> 06-Feb-2020 16:11:17.596 INFO [ContainerBackgroundProcessor[StandardEngine[TestEngine]]] org.jboss.modcluster.ModClusterService.connectionEstablished MODCLUSTER000012: TestEngine connector will use /127.0.0.1
> 06-Feb-2020 16:11:17.598 INFO [ContainerBackgroundProcessor[StandardEngine[TestEngine]]] org.jboss.modcluster.ModClusterService.establishJvmRoute MODCLUSTER000011: TestEngine will use 7bb39e02-96c0-3f8f-9fab-d464ad729cfe as jvm-route
> 06-Feb-2020 16:11:17.598 SEVERE [ContainerBackgroundProcessor[StandardEngine[TestEngine]]] org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren Exception invoking periodic operation:
> java.lang.RuntimeException: MODCLUSTER000047: No configured connector matches specified host:port (*:8081)! Ensure connectorPort and/or connectorAddress are configured.
>     at org.jboss.modcluster.container.tomcat.ConfigurableProxyConnectorProvider.createProxyConnector(ConfigurableProxyConnectorProvider.java:89)
>     at org.jboss.modcluster.container.tomcat.TomcatEngine.getProxyConnector(TomcatEngine.java:140)
>     at org.jboss.modcluster.ModClusterService.connectionEstablished(ModClusterService.java:267)
>     at org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler.status(DefaultMCMPHandler.java:341)
>     at org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler.status(DefaultMCMPHandler.java:315)
>     at org.jboss.modcluster.ModClusterService.status(ModClusterService.java:388)
>     at org.jboss.modcluster.container.tomcat.TomcatEventHandlerAdapter.lifecycleEvent(TomcatEventHandlerAdapter.java:229)
>     at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:123)
>     at org.apache.catalina.core.ContainerBase.backgroundProcess(ContainerBase.java:1174)
>     at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1396)
>     at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.run(ContainerBase.java:1368)
>     at java.lang.Thread.run(Thread.java:748)
> {code}
> My own code inspection suggests this is caused by this loop:
> {code:title=org.jboss.modcluster.ModClusterService}
> @Override
> public void connectionEstablished(InetAddress localAddress) {
>     for (Engine engine : this.server.getEngines()) {
>         // Throws a RuntimeException if no connector in this engine matches
>         // the configured connectorPort/connectorAddress
>         Connector connector = engine.getProxyConnector();
>         InetAddress address = connector.getAddress();
>         // Set connector address
>         if ((address == null) || address.isAnyLocalAddress()) {
>             connector.setAddress(localAddress);
>             ModClusterLogger.LOGGER.detectConnectorAddress(engine, localAddress);
>         }
>         this.establishJvmRoute(engine);
>     }
>     this.established = true;
> }
> {code}
> The problem here is that the invocation of *engine.getProxyConnector()* checks that one and only one of the connectors in the engine matches the port configured by *connectorPort*. If more than one service is configured, there will be multiple engines, and this code applies that check to all of them. That is, for this method to complete successfully, a matching connector would have to exist in every engine, which normally will not be the case.
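> For illustration, here is a minimal sketch of how the loop could skip engines that have no matching connector instead of failing the whole server. This is a hypothetical sketch only, not necessarily what the linked pull requests implement:
> {code:title=Hypothetical sketch (not the actual fix)}
> @Override
> public void connectionEstablished(InetAddress localAddress) {
>     for (Engine engine : this.server.getEngines()) {
>         Connector connector;
>         try {
>             connector = engine.getProxyConnector();
>         } catch (RuntimeException e) {
>             // This engine (i.e. another <Service>) has no connector matching
>             // connectorPort/connectorAddress, so skip it rather than fail
>             continue;
>         }
>         InetAddress address = connector.getAddress();
>         // Set connector address
>         if ((address == null) || address.isAnyLocalAddress()) {
>             connector.setAddress(localAddress);
>             ModClusterLogger.LOGGER.detectConnectorAddress(engine, localAddress);
>         }
>         this.establishJvmRoute(engine);
>     }
>     this.established = true;
> }
> {code}
> An alternative is to scope the listener to a single <Service> so that only that service's engine is processed at all; see MODCLUSTER-720.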
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (MODCLUSTER-721) Setting smax results in very small max connection pool on mod_cluster
by Jean-Frederic Clere (Jira)
[ https://issues.redhat.com/browse/MODCLUSTER-721?page=com.atlassian.jira.p... ]
Jean-Frederic Clere updated MODCLUSTER-721:
-------------------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/modcluster/mod_proxy_cluster/pull/27
> Setting smax results in very small max connection pool on mod_cluster
> ---------------------------------------------------------------------
>
> Key: MODCLUSTER-721
> URL: https://issues.redhat.com/browse/MODCLUSTER-721
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.13.Final
> Reporter: Aaron Ogburn
> Assignee: Jean-Frederic Clere
> Priority: Major
>
> mod_cluster's connection pool is sized differently in httpd 2.4 compared to httpd 2.2. If smax is set, mod_cluster now sets the pool max to smax+1:
> {code}
> /* If the configured hmax is below smax, raise hmax to smax + 1;
>    with smax=1 this caps the pool at two connections per child */
> if (worker->s->hmax < node->mess.smax)
>     worker->s->hmax = node->mess.smax + 1;
> {code}
> Previously, the max would be ThreadsPerChild+1. Now, if someone sets smax=1, hmax becomes smax+1=2, so each child process can hold at most two backend connections. Moving an equivalent config from httpd 2.2 to httpd 2.4 then causes severe responsiveness issues, because httpd already fails to acquire a connection with 3 concurrent requests:
> {code}
> [Tue Apr 21 10:02:08.995604 2020] [proxy:debug] [pid 27148:tid 139854005438432] proxy_util.c(1981): AH00927: initializing worker http://127.0.0.1:8080 local
> [Tue Apr 21 10:02:08.995616 2020] [proxy:debug] [pid 27148:tid 139854005438432] proxy_util.c(2016): AH00930: initialized pool in child 27148 for () min=0 max=2 smax=1
> ...
> [Tue Apr 21 10:20:41.123443 2020] [:debug] [pid 27147:tid 139853093992192] mod_proxy_cluster.c(2479): proxy: byrequests balancer DONE (http:///127.0.0.1:8080)
> [Tue Apr 21 10:20:41.123456 2020] [proxy:debug] [pid 27147:tid 139853093992192] proxy_util.c(1919): AH00924: worker http:///127.0.0.1:8080 shared already initialized
> [Tue Apr 21 10:20:41.123460 2020] [proxy:debug] [pid 27147:tid 139853093992192] proxy_util.c(1976): AH00926: worker http:///127.0.0.1:8080 local already initialized
> [Tue Apr 21 10:20:41.123464 2020] [proxy:debug] [pid 27147:tid 139853093992192] mod_proxy.c(1254): [client 127.0.0.1:40760] AH01143: Running scheme balancer handler (attempt 0)
> [Tue Apr 21 10:20:41.125726 2020] [proxy:error] [pid 27147:tid 139853093992192] (70007)The timeout specified has expired: AH00941: HTTP: failed to acquire connection for ()
> {code}
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (MODCLUSTER-721) Setting smax results in very small max connection pool on mod_cluster
by Radoslav Husar (Jira)
[ https://issues.redhat.com/browse/MODCLUSTER-721?page=com.atlassian.jira.p... ]
Radoslav Husar reassigned MODCLUSTER-721:
-----------------------------------------
Assignee: Jean-Frederic Clere (was: Radoslav Husar)
> Setting smax results in very small max connection pool on mod_cluster
> ---------------------------------------------------------------------
>
> Key: MODCLUSTER-721
> URL: https://issues.redhat.com/browse/MODCLUSTER-721
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.13.Final
> Reporter: Aaron Ogburn
> Assignee: Jean-Frederic Clere
> Priority: Major
>
> mod_cluster's connection pool is sized differently in httpd 2.4 compared to httpd 2.2. If smax is set, mod_cluster now sets the pool max to smax+1:
> {code}
> /* If the configured hmax is below smax, raise hmax to smax + 1;
>    with smax=1 this caps the pool at two connections per child */
> if (worker->s->hmax < node->mess.smax)
>     worker->s->hmax = node->mess.smax + 1;
> {code}
> Previously, the max would be ThreadsPerChild+1. Now, if someone sets smax=1, hmax becomes smax+1=2, so each child process can hold at most two backend connections. Moving an equivalent config from httpd 2.2 to httpd 2.4 then causes severe responsiveness issues, because httpd already fails to acquire a connection with 3 concurrent requests:
> {code}
> [Tue Apr 21 10:02:08.995604 2020] [proxy:debug] [pid 27148:tid 139854005438432] proxy_util.c(1981): AH00927: initializing worker http://127.0.0.1:8080 local
> [Tue Apr 21 10:02:08.995616 2020] [proxy:debug] [pid 27148:tid 139854005438432] proxy_util.c(2016): AH00930: initialized pool in child 27148 for () min=0 max=2 smax=1
> ...
> [Tue Apr 21 10:20:41.123443 2020] [:debug] [pid 27147:tid 139853093992192] mod_proxy_cluster.c(2479): proxy: byrequests balancer DONE (http:///127.0.0.1:8080)
> [Tue Apr 21 10:20:41.123456 2020] [proxy:debug] [pid 27147:tid 139853093992192] proxy_util.c(1919): AH00924: worker http:///127.0.0.1:8080 shared already initialized
> [Tue Apr 21 10:20:41.123460 2020] [proxy:debug] [pid 27147:tid 139853093992192] proxy_util.c(1976): AH00926: worker http:///127.0.0.1:8080 local already initialized
> [Tue Apr 21 10:20:41.123464 2020] [proxy:debug] [pid 27147:tid 139853093992192] mod_proxy.c(1254): [client 127.0.0.1:40760] AH01143: Running scheme balancer handler (attempt 0)
> [Tue Apr 21 10:20:41.125726 2020] [proxy:error] [pid 27147:tid 139853093992192] (70007)The timeout specified has expired: AH00941: HTTP: failed to acquire connection for ()
> {code}
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (MODCLUSTER-721) Setting smax results in very small max connection pool on mod_cluster
by Aaron Ogburn (Jira)
Aaron Ogburn created MODCLUSTER-721:
---------------------------------------
Summary: Setting smax results in very small max connection pool on mod_cluster
Key: MODCLUSTER-721
URL: https://issues.redhat.com/browse/MODCLUSTER-721
Project: mod_cluster
Issue Type: Bug
Components: Native (httpd modules)
Affects Versions: 1.3.13.Final
Reporter: Aaron Ogburn
Assignee: Radoslav Husar
mod_cluster's connection pool is sized differently in httpd 2.4 compared to httpd 2.2. If smax is set, mod_cluster now sets the pool max to smax+1:
{code}
/* If the configured hmax is below smax, raise hmax to smax + 1;
   with smax=1 this caps the pool at two connections per child */
if (worker->s->hmax < node->mess.smax)
    worker->s->hmax = node->mess.smax + 1;
{code}
Previously, the max would be ThreadsPerChild+1. Now, if someone sets smax=1, hmax becomes smax+1=2, so each child process can hold at most two backend connections. Moving an equivalent config from httpd 2.2 to httpd 2.4 then causes severe responsiveness issues, because httpd already fails to acquire a connection with 3 concurrent requests:
{code}
[Tue Apr 21 10:02:08.995604 2020] [proxy:debug] [pid 27148:tid 139854005438432] proxy_util.c(1981): AH00927: initializing worker http://127.0.0.1:8080 local
[Tue Apr 21 10:02:08.995616 2020] [proxy:debug] [pid 27148:tid 139854005438432] proxy_util.c(2016): AH00930: initialized pool in child 27148 for () min=0 max=2 smax=1
...
[Tue Apr 21 10:20:41.123443 2020] [:debug] [pid 27147:tid 139853093992192] mod_proxy_cluster.c(2479): proxy: byrequests balancer DONE (http:///127.0.0.1:8080)
[Tue Apr 21 10:20:41.123456 2020] [proxy:debug] [pid 27147:tid 139853093992192] proxy_util.c(1919): AH00924: worker http:///127.0.0.1:8080 shared already initialized
[Tue Apr 21 10:20:41.123460 2020] [proxy:debug] [pid 27147:tid 139853093992192] proxy_util.c(1976): AH00926: worker http:///127.0.0.1:8080 local already initialized
[Tue Apr 21 10:20:41.123464 2020] [proxy:debug] [pid 27147:tid 139853093992192] mod_proxy.c(1254): [client 127.0.0.1:40760] AH01143: Running scheme balancer handler (attempt 0)
[Tue Apr 21 10:20:41.125726 2020] [proxy:error] [pid 27147:tid 139853093992192] (70007)The timeout specified has expired: AH00941: HTTP: failed to acquire connection for ()
{code}
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (MODCLUSTER-720) Allow mod_cluster listener to register at <Service> level rather than <Server> level
by Radoslav Husar (Jira)
[ https://issues.redhat.com/browse/MODCLUSTER-720?page=com.atlassian.jira.p... ]
Radoslav Husar updated MODCLUSTER-720:
--------------------------------------
Description: The container event handler needs to handle the sequence of events that is sent when the listener is registered at the <Service> level, as opposed to only handling the chain of events at the <Server> level. The intention is to register only the service that the listener is registered on; a service stub that filters for that single service is therefore passed around.
> Allow mod_cluster listener to register at <Service> level rather than <Server> level
> ------------------------------------------------------------------------------------
>
> Key: MODCLUSTER-720
> URL: https://issues.redhat.com/browse/MODCLUSTER-720
> Project: mod_cluster
> Issue Type: Feature Request
> Components: Core & Container Integration (Java)
> Reporter: Radoslav Husar
> Assignee: Radoslav Husar
> Priority: Critical
> Fix For: 2.0.0.Alpha1, 1.4.2.Final
>
>
> The container event handler needs to handle the sequence of events that is sent when the listener is registered at the <Service> level, as opposed to only handling the chain of events at the <Server> level. The intention is to register only the service that the listener is registered on; a service stub that filters for that single service is therefore passed around.
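> For illustration, assuming a container SPI Server interface whose only method used here is getEngines() (as in the MODCLUSTER-711 snippet), such a filtering stub might look roughly like this; all names are illustrative, and any remaining Server methods would delegate to the wrapped instance:
> {code:title=Hypothetical sketch of a service-filtering stub}
> // Hypothetical wrapper: exposes only the engine of the <Service> on which
> // the listener is registered, hiding all other services from mod_cluster
> public class SingleServiceServer implements Server {
>     private final Server delegate;
>     private final Engine engine;
>
>     public SingleServiceServer(Server delegate, Engine engine) {
>         this.delegate = delegate;
>         this.engine = engine;
>     }
>
>     @Override
>     public Iterable<Engine> getEngines() {
>         // Only the owning service's engine is visible to event handling
>         return java.util.Collections.singleton(this.engine);
>     }
> }
> {code}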
--
This message was sent by Atlassian Jira
(v7.13.8#713008)
[JBoss JIRA] (MODCLUSTER-718) mod_cluster does not properly disable session stickiness
by Radoslav Husar (Jira)
[ https://issues.redhat.com/browse/MODCLUSTER-718?page=com.atlassian.jira.p... ]
Radoslav Husar commented on MODCLUSTER-718:
-------------------------------------------
[~jfclere] Please amend the Fix Version if incorrect.
> mod_cluster does not properly disable session stickiness
> --------------------------------------------------------
>
> Key: MODCLUSTER-718
> URL: https://issues.redhat.com/browse/MODCLUSTER-718
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 2.0.0.Alpha1, 1.3.12.Final
> Reporter: Aaron Ogburn
> Assignee: Jean-Frederic Clere
> Priority: Major
> Fix For: 2.0.0.Alpha1, 1.3.13.Final
>
>
> Disable sticky sessions in JBoss's mod-cluster-config:
> {code}
> <mod-cluster-config advertise-socket="modcluster" proxies="proxy1" sticky-session="false" connector="ajp">
> {code}
> But httpd/mod_cluster still maintains stickiness regardless.
--
This message was sent by Atlassian Jira
(v7.13.8#713008)