[JBoss JIRA] (MODCLUSTER-567) Deprecate excludedContexts
by Jean-Frederic Clere (Jira)
[ https://issues.jboss.org/browse/MODCLUSTER-567?page=com.atlassian.jira.pl... ]
Jean-Frederic Clere reassigned MODCLUSTER-567:
----------------------------------------------
Assignee: Radoslav Husar (was: Jean-Frederic Clere)
> Deprecate excludedContexts
> --------------------------
>
> Key: MODCLUSTER-567
> URL: https://issues.jboss.org/browse/MODCLUSTER-567
> Project: mod_cluster
> Issue Type: Task
> Components: Core & Container Integration (Java)
> Affects Versions: 1.3.6.CR1
> Reporter: Paul Ferraro
> Assignee: Radoslav Husar
> Priority: Major
> Fix For: 2.0.0.Alpha1
>
>
> "Excluded contexts" is an artifact of poor deployment encapsulation. All contexts within a given engine should be registered with the load balancer. If there are applications that should not be load balanced, they should be deployed to a different engine.
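> A minimal sketch of the recommended alternative, assuming Tomcat; service names, ports, and appBase directories are illustrative, not taken from the issue:
> {code}
> <!-- server.xml: two services/engines; a mod_cluster listener attached to the
>      first engine registers every context of that engine with the balancer -->
> <Service name="Catalina">
>   <Connector port="8080" protocol="HTTP/1.1"/>
>   <Engine name="Catalina" defaultHost="localhost">
>     <Host name="localhost" appBase="webapps"/>
>   </Engine>
> </Service>
> <Service name="Internal">
>   <Connector port="8081" protocol="HTTP/1.1"/>
>   <Engine name="Internal" defaultHost="localhost">
>     <!-- contexts deployed here are never registered, hence never balanced -->
>     <Host name="localhost" appBase="webapps-internal"/>
>   </Engine>
> </Service>
> {code}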
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (MODCLUSTER-570) Multiple invalid nodes created on graceful httpd restart
by Jean-Frederic Clere (Jira)
[ https://issues.jboss.org/browse/MODCLUSTER-570?page=com.atlassian.jira.pl... ]
Jean-Frederic Clere reassigned MODCLUSTER-570:
----------------------------------------------
Assignee: Michal Karm (was: George Zaronikas)
> Multiple invalid nodes created on graceful httpd restart
> --------------------------------------------------------
>
> Key: MODCLUSTER-570
> URL: https://issues.jboss.org/browse/MODCLUSTER-570
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.5.Final, 1.3.6.Final
> Environment: httpd 2.4.25
> Alpine Linux 3.5
> Reporter: Antoine Cotten
> Assignee: Michal Karm
> Priority: Minor
> Attachments: mod_cluster_after_graceful.png, mod_cluster_before_graceful.png
>
>
> mod_cluster creates a bunch of invalid nodes after a graceful restart of Apache httpd ({{SIGUSR1}}, {{SIGWINCH}}). The issue happens with or without backend servers in the pool.
> httpd logs:
> {code}
> [mpm_event:notice] [pid 1:tid 140313388616520] AH00493: SIGUSR1 received. Doing graceful restart
> [:notice] [pid 1:tid 140313388616520] Advertise initialized for process 1
> [mpm_event:notice] [pid 1:tid 140313388616520] AH00489: Apache/2.4.25 (Unix) mod_cluster/1.3.5.Final configured -- resuming normal operations
> [core:notice] [pid 1:tid 140313388616520] AH00094: Command line: 'httpd -D FOREGROUND'
> [:notice] [pid 116:tid 140313388616520] Created: worker for ://: failed: Unable to parse URL
> [:notice] [pid 116:tid 140313388616520] Created: worker for ://: failed: Unable to parse URL
> [:notice] [pid 116:tid 140313388616520] Created: worker for ://: failed: Unable to parse URL
> # ... trimmed 48 lines ...
> [:notice] [pid 145:tid 140313388616520] Created: worker for ://: failed: Unable to parse URL
> [:notice] [pid 145:tid 140313388616520] Created: worker for ://: failed: Unable to parse URL
> [:notice] [pid 145:tid 140313388616520] Created: worker for ://: failed: Unable to parse URL
> [:notice] [pid 145:tid 140313388616520] Created: worker for ://:\x9d\x7f failed: Unable to parse URL
> [:notice] [pid 145:tid 140313388616520] add_balancer_node: balancer safe-name (balancer://4:6666\r\nX-Manager-Url: /c8dae916-7642-4deb-b841-2f3435613749\r\nX-Manager-Protocol: http\r\nX-Manager-Host: 172.17.0.4\r\n\r\n) too long
> {code}
> Application server logs (tomcat8):
> {code}
> Feb 01, 2017 11:34:25 AM org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler sendRequest
> ERROR: MODCLUSTER000042: Error MEM sending STATUS command to 172.17.0.4/172.17.0.4:6666, configuration will be reset: MEM: Can't read node with "java1-depl1-1234" JVMRoute
> Feb 01, 2017 11:34:35 AM org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor processChildren
> SEVERE: Exception invoking periodic operation:
> java.lang.IllegalArgumentException: Node: [1],Name: ,Balancer: ,LBGroup: ,Host: ,Port: ,Type: ,Flushpackets: Off,Flushwait: 0,Ping: 0,Smax: 0,Ttl: 0,Elected: 0,Read: 0,Transfered: 0,Connected: 0,Load: 0
> Node: [2],Name: ,Balancer: ,LBGroup: ,Host: ,Port: ,Type: ,Flushpackets: Off,Flushwait: 0,Ping: 0,Smax: 0,Ttl: 0,Elected: 0,Read: 0,Transfered: 0,Connected: 0,Load: 0
> Node: [3],Name: ,Balancer: ,LBGroup: ,Host: ,Port: ,Type: ,Flushpackets: Off,Flushwait: 0,Ping: 0,Smax: 0,Ttl: 0,Elected: 0,Read: 0,Transfered: 0,Connected: 0,Load: 0
> # ... trimmed lines ...
> Node: [18],Name: ,Balancer: ,LBGroup: ,Host: ,Port: ,Type: ,Flushpackets: Off,Flushwait: 0,Ping: 0,Smax: 0,Ttl: 0,Elected: 0,Read: 0,Transfered: 0,Connected: 0,Load: 0
> Node: [19],Name: ,Balancer: ,LBGroup: ,Host: ,Port: ,Type: ,Flushpackets: Off,Flushwait: 0,Ping: 0,Smax: 0,Ttl: 0,Elected: 0,Read: 0,Transfered: 0,Connected: 0,Load: 0
> Vhost: [0:0:1], Alias:
> Vhost: [0:0:2], Alias:
> Vhost: [0:0:3], Alias:
> # ... trimmed lines ...
> Vhost: [0:0:19], Alias:
> Vhost: [0:0:20], Alias:
> Context: [0:0:1], Context: , Status: REMOVED
> Context: [0:0:2], Context: , Status: REMOVED
> Context: [0:0:3], Context: , Status: REMOVED
> # ... trimmed lines ...
> Context: [0:0:97], Context: , Status: REMOVED
> Context: [0:0:98], Context: , Status: REMOVED
> Context: [0:0:99], Context: , Status: REMOVED
> Context: [0:0:1], Context: , Status: REMOVED
> at org.jboss.modcluster.mcmp.impl.DefaultMCMPResponseParser.parseInfoResponse(DefaultMCMPResponseParser.java:96)
> at org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler.status(DefaultMCMPHandler.java:396)
> at org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler.status(DefaultMCMPHandler.java:365)
> at org.jboss.modcluster.ModClusterService.status(ModClusterService.java:457)
> at org.jboss.modcluster.container.catalina.CatalinaEventHandlerAdapter.lifecycleEvent(CatalinaEventHandlerAdapter.java:252)
> at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:117)
> at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:90)
> at org.apache.catalina.core.ContainerBase.backgroundProcess(ContainerBase.java:1374)
> at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1546)
> at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.run(ContainerBase.java:1524)
> at java.lang.Thread.run(Thread.java:745)
> Feb 01, 2017 11:35:05 AM org.jboss.modcluster.mcmp.impl.DefaultMCMPHandler sendRequest
> {code}
--
[JBoss JIRA] (MODCLUSTER-573) Excluded contexts on http://modcluster.io/documentation/
by Jean-Frederic Clere (Jira)
[ https://issues.jboss.org/browse/MODCLUSTER-573?page=com.atlassian.jira.pl... ]
Jean-Frederic Clere commented on MODCLUSTER-573:
------------------------------------------------
https://docs.modcluster.io/ is the new link.
> Excluded contexts on http://modcluster.io/documentation/
> --------------------------------------------------------
>
> Key: MODCLUSTER-573
> URL: https://issues.jboss.org/browse/MODCLUSTER-573
> Project: mod_cluster
> Issue Type: Bug
> Components: Documentation & Demos
> Reporter: Michal Karm
> Assignee: Radoslav Husar
> Priority: Major
>
> [~bsikora] reported:
> {quote}
> http://modcluster.io/documentation/
> Excluded contexts
> ||Tomcat attribute||AS7/WildFly attribute||WildFly Default||Tomcat/AS7 Default||Location||Scope||
> |excludedContexts|excluded-contexts|*None*|ROOT, admin-console, invoker, jbossws, jmx-console, juddi, web-console|Worker|Worker|
>
> List of contexts to exclude from httpd registration, of the form: *host1*:*context1*,*host2*:*context2*,*host3*:*context3*. If no host is indicated, it is assumed to be the default host of the server (e.g. localhost). "ROOT" indicates the root context. Using the default configuration, this property can be manipulated via the jboss.mod_cluster.excludedContexts system property.
>
> IMHO conflict with JBEAP-5058
> Cheers,
> Bogdan Sikora
> {quote}
--
[JBoss JIRA] (MODCLUSTER-577) ProxyPass with subdirectory
by Jean-Frederic Clere (Jira)
[ https://issues.jboss.org/browse/MODCLUSTER-577?page=com.atlassian.jira.pl... ]
Jean-Frederic Clere closed MODCLUSTER-577.
------------------------------------------
Resolution: Rejected
httpd uses the contexts it receives from WildFly to route requests; you can't mix ProxyPassMatch with mod_cluster.
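In other words, a context that should bypass the balancer must not be registered by the worker in the first place. A minimal sketch on the WildFly side, assuming the modcluster subsystem (attribute names per WildFly 10.x; values illustrative):

{code}
<!-- standalone.xml: keep the websocket context out of MCMP registration,
     so mod_proxy_cluster never claims requests for it -->
<subsystem xmlns="urn:jboss:domain:modcluster:2.0">
    <mod-cluster-config advertise-socket="modcluster"
                        excluded-contexts="websocket"
                        connector="ajp"/>
</subsystem>
{code}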
> ProxyPass with subdirectory
> ---------------------------
>
> Key: MODCLUSTER-577
> URL: https://issues.jboss.org/browse/MODCLUSTER-577
> Project: mod_cluster
> Issue Type: Feature Request
> Components: Core & Container Integration (Java)
> Affects Versions: 1.3.1.Final
> Environment: apache 2.4 + mod_proxy 1.3.1 + wildfly 10.1
> Reporter: Juliano Carlos da Silva
> Assignee: Jean-Frederic Clere
> Priority: Major
>
> In Apache:
> {code}
> Listen 0.0.0.0:6666
> <VirtualHost *:6666>
>     ManagerBalancerName mycluster
>     ServerAdvertise On http://XXX.XXX.XXX.XXX:6666
>     AdvertiseFrequency 5
>     AllowDisplay On
>     EnableMCPMReceive
> </VirtualHost>
> ProxyPassMatch ^/apps/websocket/ ws://10.77.1.150:8080
> ProxyPassReverse /apps/websocket/ http://xxxxxxxx/apps/websocket/
> <LocationMatch ^/apps/((?!websocket).*)$>
>     #ProxyPass ajp://XXX.XXX.XXX.XXX:8009
>     ProxyPass balancer://mycluster
> </LocationMatch>
> {code}
> If you take the LocationMatch and replace balancer:// with ajp://, it works; but with balancer:// every request is sent through the balancer, ignoring my ProxyPassMatch. With ajp://, or after removing mod_cluster and using mod_proxy_balancer, it works even with balancer://mycluster.
> I think this is something in the core of mod_cluster.
--
[JBoss JIRA] (MODCLUSTER-578) mod_proxy_cluster terminates HTTP/2 and talks HTTP/1.1 (https) to WildFly/Tomcat workers
by Jean-Frederic Clere (Jira)
[ https://issues.jboss.org/browse/MODCLUSTER-578?page=com.atlassian.jira.pl... ]
Jean-Frederic Clere closed MODCLUSTER-578.
------------------------------------------
Resolution: Rejected
See JBCS-327
> mod_proxy_cluster terminates HTTP/2 and talks HTTP/1.1 (https) to WildFly/Tomcat workers
> ----------------------------------------------------------------------------------------
>
> Key: MODCLUSTER-578
> URL: https://issues.jboss.org/browse/MODCLUSTER-578
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.6.Final, 1.3.8.Final
> Reporter: Michal Karm
> Assignee: Jean-Frederic Clere
> Priority: Critical
>
> Despite having H2 enabled on the Undertow https connector, Apache HTTP Server with mod_proxy_cluster terminates H2, i.e.
> * client <--> httpd communication is H2
> * direct client <--> worker communication is H2
> * but when the client is served by a worker via httpd, HTTP/1.1 is used between httpd and the workers: client <--H2--> httpd <--HTTP/1.1--> worker
> * from the client's point of view, H2 is used, but in fact it is used only between the client and the balancer, not all the way to the worker
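> For contrast, a minimal sketch of what end-to-end H2 looks like with plain mod_proxy_http2 (no balancing), which mod_proxy_cluster does not offer; the backend address and context are illustrative:
> {code}
> LoadModule http2_module       modules/mod_http2.so
> LoadModule proxy_http2_module modules/mod_proxy_http2.so
> # Negotiate HTTP/2 with clients
> Protocols h2 http/1.1
> # The h2:// scheme asks mod_proxy_http2 to speak HTTP/2 over TLS to the worker
> ProxyPass "/clusterbench" "h2://192.168.122.172:8443/clusterbench"
> {code}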
> h3. From Wildfly Undertow access log:
> Accessed through the httpd balancer:
> {code}
> 192.168.122.172 - "GET /clusterbench/requestinfo HTTP/1.1" 200 1399
> 192.168.122.172 - "GET /clusterbench/requestinfo HTTP/1.1" 200 1399
> 192.168.122.172 - "GET /clusterbench/requestinfo HTTP/1.1" 200 1399
> 192.168.122.172 - "GET /clusterbench/requestinfo HTTP/1.1" 200 1399
> {code}
> Balancer checking the worker's availability:
> {code}
> 192.168.122.172 - "OPTIONS * HTTP/1.0" 200 -
> 192.168.122.172 - "OPTIONS * HTTP/1.0" 200 -
> 192.168.122.172 - "OPTIONS * HTTP/1.0" 200 -
> 192.168.122.172 - "OPTIONS * HTTP/1.0" 200 -
> {code}
> Accessed directly via browser, the httpd balancer is skipped:
> {code}
> 192.168.122.1 - "GET /clusterbench/requestinfo HTTP/2.0" 200 920
> 192.168.122.1 - "GET /clusterbench/requestinfo HTTP/2.0" 200 920
> {code}
> h3. Configuration
> h4. conf.modules.d/00-proxy.conf
> {code}
> LoadModule proxy_module modules/mod_proxy.so
> LoadModule proxy_connect_module modules/mod_proxy_connect.so
> LoadModule proxy_express_module modules/mod_proxy_express.so
> LoadModule proxy_fdpass_module modules/mod_proxy_fdpass.so
> LoadModule proxy_http_module modules/mod_proxy_http.so
> LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so
> LoadModule proxy_hcheck_module modules/mod_proxy_hcheck.so
> LoadModule proxy_http2_module modules/mod_proxy_http2.so
> {code}
> h4. conf.d/mod_cluster.conf
> {code}
> LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
> LoadModule cluster_slotmem_module modules/mod_cluster_slotmem.so
> LoadModule manager_module modules/mod_manager.so
> LoadModule advertise_module modules/mod_advertise.so
> LoadModule http2_module modules/mod_http2.so
> MemManagerFile /tmp/mod_cluster-eapx/jbcs-httpd24-2.4/httpd/cache/mod_cluster
> ServerName rhel7GAx86-64:2080
> SSLEngine on
> SSLProtocol All -SSLv2 -SSLv3
> SSLCipherSuite "HIGH MEDIUM !LOW"
> SSLProxyCipherSuite "HIGH MEDIUM !LOW"
> SSLProxyCheckPeerCN Off
> SSLProxyCheckPeerName Off
> SSLHonorCipherOrder On
> SSLCertificateFile /opt/noe-tests/resources/ssl/proper/server.crt
> SSLCertificateKeyFile /opt/noe-tests/resources/ssl/proper/server.key
> SSLCACertificateFile /opt/noe-tests/resources/ssl/proper/myca.crt
> SSLVerifyClient optional
> SSLProxyVerify optional
> SSLProxyEngine On
> SSLVerifyDepth 10
> SSLProxyVerifyDepth 10
> SSLProxyMachineCertificateFile /opt/noe-tests/resources/ssl/proper/client.pem
> SSLProxyCACertificateFile /opt/noe-tests/resources/ssl/proper/myca.crt
> SSLProxyProtocol All -SSLv2 -SSLv3
> EnableOptions
> LogLevel debug
> <IfModule manager_module>
> Listen 192.168.122.172:8747
> <VirtualHost 192.168.122.172:8747>
> <Directory />
> Require all granted
> </Directory>
> ServerAdvertise on
> EnableMCPMReceive
> <Location /mcm>
> SetHandler mod_cluster-manager
> Require all granted
> </Location>
> AdvertiseGroup 224.0.5.172:62844
> AdvertiseBindAddress 192.168.122.172:62844
> SSLEngine on
> SSLProtocol All -SSLv2 -SSLv3
> SSLCipherSuite "HIGH MEDIUM !LOW"
> SSLProxyCipherSuite "HIGH MEDIUM !LOW"
> SSLProxyCheckPeerCN Off
> SSLProxyCheckPeerName Off
> SSLHonorCipherOrder On
> SSLCertificateFile /opt/noe-tests/resources/ssl/proper/server.crt
> SSLCertificateKeyFile /opt/noe-tests/resources/ssl/proper/server.key
> SSLCACertificateFile /opt/noe-tests/resources/ssl/proper/myca.crt
> SSLVerifyClient optional
> SSLProxyVerify optional
> SSLProxyEngine On
> SSLVerifyDepth 10
> SSLProxyVerifyDepth 10
> SSLProxyMachineCertificateFile /opt/noe-tests/resources/ssl/proper/client.pem
> SSLProxyCACertificateFile /opt/noe-tests/resources/ssl/proper/myca.crt
> SSLProxyProtocol All -SSLv2 -SSLv3
> Protocols h2
> ProtocolsHonorOrder on
> KeepAliveTimeout 60
> MaxKeepAliveRequests 0
> ServerAdvertise on
> AdvertiseFrequency 5
> ManagerBalancerName qacluster
> </VirtualHost>
> </IfModule>
> {code}
> h3. Mod_cluster subsystem
> MCMP uses HTTP/1.1 (https) because, at the moment, one cannot make it use the wildfly-openssl provider: JBEAP-9688
--
[JBoss JIRA] (MODCLUSTER-580) EnableWsTunnel enables only ws communication
by Jean-Frederic Clere (Jira)
[ https://issues.jboss.org/browse/MODCLUSTER-580?page=com.atlassian.jira.pl... ]
Jean-Frederic Clere commented on MODCLUSTER-580:
------------------------------------------------
https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxypass look for upgrade.
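A minimal sketch of the `upgrade` worker parameter the link points to, available on ProxyPass in recent httpd 2.4.x releases; the backend address and context are taken from the report below, and this is an illustration of the documented parameter, not a tested fix:

{code}
# Keep the worker on http:// so normal requests are proxied as HTTP,
# and let the same worker tunnel WebSocket Upgrade requests
ProxyPass "/clusterbench" "http://10.19.70.244:8080/clusterbench" upgrade=websocket
{code}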
> EnableWsTunnel enables only ws communication
> --------------------------------------------
>
> Key: MODCLUSTER-580
> URL: https://issues.jboss.org/browse/MODCLUSTER-580
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.5.Final
> Reporter: Bogdan Sikora
> Assignee: Jean-Frederic Clere
> Priority: Major
>
> WebSocket configuration for Apache httpd as balancer (EnableWsTunnel) enables only WS communication, whereas Undertow as balancer enables both HTTP and WS.
> {noformat}
> # mod_proxy_balancer should be disabled when mod_cluster is used
> LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
> LoadModule cluster_slotmem_module modules/mod_cluster_slotmem.so
> LoadModule manager_module modules/mod_manager.so
> LoadModule advertise_module modules/mod_advertise.so
> MemManagerFile /mnt/hudson_workspace/mod_cluster/jbcs-httpd24-2.4/httpd/cache/mod_cluster
> ServerName dev89:2080
> EnableWsTunnel
> LogLevel warn
> <IfModule manager_module>
> Listen 10.19.70.244:8747
> <VirtualHost 10.19.70.244:8747>
> <Directory />
> Require all granted
> </Directory>
> ServerAdvertise on
> EnableMCPMReceive
> <Location /mcm>
> SetHandler mod_cluster-manager
> Require all granted
> </Location>
> AdvertiseGroup 224.0.5.244:55918
> AdvertiseBindAddress 10.19.70.244:55918
> KeepAliveTimeout 60
> MaxKeepAliveRequests 0
> ServerAdvertise on
> AdvertiseFrequency 5
> ManagerBalancerName qacluster
> </VirtualHost>
> </IfModule>
> {noformat}
> Worker joins with
> {noformat}
> <h1> Node jboss-eap-7.1 (ws://10.19.70.244:8080): </h1>
> {noformat}
> and all HTTP communication ends with
> {noformat}
> <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
> <html><head>
> <title>500 Internal Server Error</title>
> </head><body>
> <h1>Internal Server Error</h1>
> <p>The server encountered an internal error or
> misconfiguration and was unable to complete
> your request.</p>
> <p>Please contact the server administrator at
> Administrator@localhost to inform them of the time this error occurred,
> and the actions you performed just before this error.</p>
> <p>More information about this error may be available
> in the server error log.</p>
> <hr>
> <address>Apache/2.4.23 (Red Hat) Server at 10.19.70.244 Port 2080</address>
> </body></html>
> {noformat}
> and the log message:
> {noformat}
> [Sat Apr 08 16:21:29.335633 2017] [proxy:warn] [pid 12680] [client 10.19.70.244:55922] AH01144: No protocol handler was valid for the URL /clusterbench/jvmroute. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule.
> {noformat}
--