[JBoss JIRA] (MODCLUSTER-580) EnableWsTunnel enables only ws communication
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-580?page=com.atlassian.jira.pl... ]
Radoslav Husar updated MODCLUSTER-580:
--------------------------------------
Component/s: Native (httpd modules)
> EnableWsTunnel enables only ws communication
> --------------------------------------------
>
> Key: MODCLUSTER-580
> URL: https://issues.jboss.org/browse/MODCLUSTER-580
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.5.Final
> Reporter: Bogdan Sikora
> Assignee: Jean-Frederic Clere
>
> The WebSocket configuration for Apache httpd (EnableWsTunnel) as a balancer enables only ws communication, whereas Undertow as a balancer enables both http and ws.
> {noformat}
> # mod_proxy_balancer should be disabled when mod_cluster is used
> LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
> LoadModule cluster_slotmem_module modules/mod_cluster_slotmem.so
> LoadModule manager_module modules/mod_manager.so
> LoadModule advertise_module modules/mod_advertise.so
> MemManagerFile /mnt/hudson_workspace/mod_cluster/jbcs-httpd24-2.4/httpd/cache/mod_cluster
> ServerName dev89:2080
> EnableWsTunnel
> LogLevel warn
> <IfModule manager_module>
> Listen 10.19.70.244:8747
> <VirtualHost 10.19.70.244:8747>
> <Directory />
> Require all granted
> </Directory>
> ServerAdvertise on
> EnableMCPMReceive
> <Location /mcm>
> SetHandler mod_cluster-manager
> Require all granted
> </Location>
> AdvertiseGroup 224.0.5.244:55918
> AdvertiseBindAddress 10.19.70.244:55918
> KeepAliveTimeout 60
> MaxKeepAliveRequests 0
> ServerAdvertise on
> AdvertiseFrequency 5
> ManagerBalancerName qacluster
> </VirtualHost>
> </IfModule>
> {noformat}
> Worker joins with
> {noformat}
> <h1> Node jboss-eap-7.1 (ws://10.19.70.244:8080): </h1>
> {noformat}
> and all http communication ends with
> {noformat}
> <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
> <html><head>
> <title>500 Internal Server Error</title>
> </head><body>
> <h1>Internal Server Error</h1>
> <p>The server encountered an internal error or
> misconfiguration and was unable to complete
> your request.</p>
> <p>Please contact the server administrator at
> Administrator@localhost to inform them of the time this error occurred,
> and the actions you performed just before this error.</p>
> <p>More information about this error may be available
> in the server error log.</p>
> <hr>
> <address>Apache/2.4.23 (Red Hat) Server at 10.19.70.244 Port 2080</address>
> </body></html>
> {noformat}
> and the error log shows
> {noformat}
> [Sat Apr 08 16:21:29.335633 2017] [proxy:warn] [pid 12680] [client 10.19.70.244:55922] AH01144: No protocol handler was valid for the URL /clusterbench/jvmroute. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule.
> {noformat}
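> The AH01144 hint points at the generic proxy submodules; as a minimal sketch, these are the modules that would need to be loaded alongside mod_proxy_cluster so that both http:// and ws:// backends have a protocol handler (stock module paths, adjust to the local layout):
> {noformat}
> LoadModule proxy_module modules/mod_proxy.so
> # handles http:// and https:// backends
> LoadModule proxy_http_module modules/mod_proxy_http.so
> # handles ws:// and wss:// backends created by EnableWsTunnel
> LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so
> {noformat}
> This only rules out a missing-submodule cause; the node still registers itself as ws://10.19.70.244:8080, which is the behaviour reported here.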
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (MODCLUSTER-425) The timeout specified has expired: proxy: AJP: cping/cpong failed, ajp_ilink_receive() can't receive header
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-425?page=com.atlassian.jira.pl... ]
Radoslav Husar updated MODCLUSTER-425:
--------------------------------------
Component/s: Native (httpd modules)
> The timeout specified has expired: proxy: AJP: cping/cpong failed, ajp_ilink_receive() can't receive header
> ----------------------------------------------------------------------------------------------------------
>
> Key: MODCLUSTER-425
> URL: https://issues.jboss.org/browse/MODCLUSTER-425
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.2.6.Final, 1.2.9.Final
> Environment: Confirmed RHEL6 x86_64
> Reporter: Michal Karm Babacek
> Assignee: Michal Karm Babacek
>
> A mod_cluster balancer used in front of GateIn Portal, after a certain warm-up time, starts to show:
> {code}
> [debug] proxy_util.c(2200): proxy: connected / to perf13:8009
> [debug] mod_proxy_cluster.c(1384): proxy_cluster_try_pingpong: connected to backend
> [debug] mod_proxy_cluster.c(1108): ajp_cping_cpong: Done
> [debug] proxy_util.c(2036): proxy: ajp: has released connection for (perf13)
> [debug] proxy_util.c(2018): proxy: ajp: has acquired connection for (perf12)
> [debug] proxy_util.c(2074): proxy: connecting ajp://perf12:8009/ to perf12:8009
> [debug] proxy_util.c(2200): proxy: connected / to perf12:8009
> [debug] proxy_util.c(2451): proxy: ajp: fam 2 socket created to connect to perf12
> [debug] mod_proxy_cluster.c(1384): proxy_cluster_try_pingpong: connected to backend
> [debug] mod_proxy_cluster.c(1108): ajp_cping_cpong: Done
> [debug] proxy_util.c(2036): proxy: ajp: has released connection for (perf12)
> [error] (70007)The timeout specified has expired: ajp_ilink_receive() can't receive header
> [error] ajp_handle_cping_cpong: ajp_ilink_receive failed
> [error] (70007)The timeout specified has expired: proxy: AJP: cping/cpong failed to 10.16.88.190:8009 (perf12)
> [debug] proxy_util.c(2018): proxy: ajp: has acquired connection for (perf12)
> [debug] proxy_util.c(2074): proxy: connecting ajp://perf12:8009/ to perf12:8009
> [debug] proxy_util.c(2200): proxy: connected / to perf12:8009
> [debug] mod_proxy_cluster.c(1384): proxy_cluster_try_pingpong: connected to backend
> [debug] mod_proxy_cluster.c(1108): ajp_cping_cpong: Done
> [debug] proxy_util.c(2036): proxy: ajp: has released connection for (perf12)
> [debug] proxy_util.c(2018): proxy: ajp: has acquired connection for (perf13)
> [debug] proxy_util.c(2074): proxy: connecting ajp://perf13:8009/ to perf13:8009
> [debug] proxy_util.c(2200): proxy: connected / to perf13:8009
> [debug] mod_proxy_cluster.c(1384): proxy_cluster_try_pingpong: connected to backend
> [debug] mod_proxy_cluster.c(1108): ajp_cping_cpong: Done
> [debug] proxy_util.c(2036): proxy: ajp: has released connection for (perf13)
> [error] (70007)The timeout specified has expired: ajp_ilink_receive() can't receive header
> [error] ajp_handle_cping_cpong: ajp_ilink_receive failed
> [error] (70007)The timeout specified has expired: proxy: AJP: cping/cpong failed to 10.16.88.190:8009 (perf12)
> [error] (70007)The timeout specified has expired: ajp_ilink_receive() can't receive header
> [error] ajp_handle_cping_cpong: ajp_ilink_receive failed
> [error] (70007)The timeout specified has expired: proxy: AJP: cping/cpong failed to 10.16.88.190:8009 (perf12)
> [error] (70007)The timeout specified has expired: ajp_ilink_receive() can't receive header
> [error] ajp_handle_cping_cpong: ajp_ilink_receive failed
> [error] (70007)The timeout specified has expired: proxy: AJP: cping/cpong failed to 10.16.88.190:8009 (perf12)
> [error] (70007)The timeout specified has expired: ajp_ilink_receive() can't receive header
> [error] ajp_handle_cping_cpong: ajp_ilink_receive failed
> [error] (70007)The timeout specified has expired: proxy: AJP: cping/cpong failed to 10.16.88.190:8009 (perf12)
> [error] (70007)The timeout specified has expired: ajp_ilink_receive() can't receive header
> [error] ajp_handle_cping_cpong: ajp_ilink_receive failed
> [error] (70007)The timeout specified has expired: proxy: AJP: cping/cpong failed to 10.16.88.190:8009 (perf12)
> [error] (70007)The timeout specified has expired: ajp_ilink_receive() can't receive header
> {code}
> Investigation is ongoing...
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (MODCLUSTER-391) mod_cluster and mod_proxy integration
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-391?page=com.atlassian.jira.pl... ]
Radoslav Husar updated MODCLUSTER-391:
--------------------------------------
Labels: (was: native_libraries)
> mod_cluster and mod_proxy integration
> -------------------------------------
>
> Key: MODCLUSTER-391
> URL: https://issues.jboss.org/browse/MODCLUSTER-391
> Project: mod_cluster
> Issue Type: Bug
> Affects Versions: 1.2.6.Final
> Environment: All platforms we build mod_cluster for.
> Reporter: Michal Karm Babacek
> Assignee: Jean-Frederic Clere
> Attachments: error_log, mod_cluster.conf, mod_proxy.conf, standalone-ha.xml
>
>
> This Jira encapsulates all concerns regarding mod_cluster / mod_proxy integration. For instance, while basic {{ProxyPass}} settings work just fine, e.g. serving files under {{/static}} from the Apache HTTP Server itself:
> {code}
> ProxyPassMatch ^/static/ !
> ProxyPass / balancer://qacluster stickysession=JSESSIONID|jsessionid nofailover=on
> ProxyPassReverse / balancer://qacluster
> ProxyPreserveHost on
> {code}
> there are more complex setups, involving {{BalancerMember}} configurations, that do not work as expected. In the following example, the {{/clusterbench}} application was to be managed dynamically by mod_cluster while, at the same time, in a different VirtualHost, the {{/tses}} application was handled by a manually configured mod_proxy balancer.
> Attached [^mod_cluster.conf], [^mod_proxy.conf], [^standalone-ha.xml] (modcluster subsystem element only) and [^error_log].
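> For reference, a minimal sketch of the kind of manually configured balancer the attached [^mod_proxy.conf] sets up (host, port and route below are illustrative, not taken from the attachment):
> {code}
> <Proxy balancer://tsescluster>
>     BalancerMember http://10.16.88.19:8080 route=jboss-eap-1
> </Proxy>
> ProxyPass /tses balancer://tsescluster stickysession=JSESSIONID|jsessionid
> ProxyPassReverse /tses balancer://tsescluster
> {code}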
> The aforementioned setup resulted in:
> |HTTP 200|(From worker)|http://10.16.88.19:8847/clusterbench/requestinfo/|OK|(/)|
> |HTTP 404|(From httpd)|http://10.16.88.19:8847/tses/session.jsp|Expected fail|(/)|
> |HTTP 503|(From httpd)|http://10.16.88.19:2182/tses/session.jsp|Unexpected fail|(x)|
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (MODCLUSTER-391) mod_cluster and mod_proxy integration
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-391?page=com.atlassian.jira.pl... ]
Radoslav Husar updated MODCLUSTER-391:
--------------------------------------
Component/s: Native (httpd modules)
> mod_cluster and mod_proxy integration
> -------------------------------------
>
> Key: MODCLUSTER-391
> URL: https://issues.jboss.org/browse/MODCLUSTER-391
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.2.6.Final
> Environment: All platforms we build mod_cluster for.
> Reporter: Michal Karm Babacek
> Assignee: Jean-Frederic Clere
> Attachments: error_log, mod_cluster.conf, mod_proxy.conf, standalone-ha.xml
>
>
> This Jira encapsulates all concerns regarding mod_cluster / mod_proxy integration. For instance, while basic {{ProxyPass}} settings work just fine, e.g. serving files under {{/static}} from the Apache HTTP Server itself:
> {code}
> ProxyPassMatch ^/static/ !
> ProxyPass / balancer://qacluster stickysession=JSESSIONID|jsessionid nofailover=on
> ProxyPassReverse / balancer://qacluster
> ProxyPreserveHost on
> {code}
> there are more complex setups, involving {{BalancerMember}} configurations, that do not work as expected. In the following example, the {{/clusterbench}} application was to be managed dynamically by mod_cluster while, at the same time, in a different VirtualHost, the {{/tses}} application was handled by a manually configured mod_proxy balancer.
> Attached [^mod_cluster.conf], [^mod_proxy.conf], [^standalone-ha.xml] (modcluster subsystem element only) and [^error_log].
> The aforementioned setup resulted in:
> |HTTP 200|(From worker)|http://10.16.88.19:8847/clusterbench/requestinfo/|OK|(/)|
> |HTTP 404|(From httpd)|http://10.16.88.19:8847/tses/session.jsp|Expected fail|(/)|
> |HTTP 503|(From httpd)|http://10.16.88.19:2182/tses/session.jsp|Unexpected fail|(x)|
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (MODCLUSTER-401) EnableOptions and SSL configuration
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-401?page=com.atlassian.jira.pl... ]
Radoslav Husar updated MODCLUSTER-401:
--------------------------------------
Component/s: Native (httpd modules)
> EnableOptions and SSL configuration
> -----------------------------------
>
> Key: MODCLUSTER-401
> URL: https://issues.jboss.org/browse/MODCLUSTER-401
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.2.8.Final
> Environment: HP-UX Apache HTTP Server 2.2.15, RHEL Apache HTTP Server 2.2.22, perhaps platform independent...
> Reporter: Michal Karm Babacek
> Assignee: Jean-Frederic Clere
> Fix For: 1.2.14.Final
>
>
> As a follow-up to MODCLUSTER-400 and a documentation effort for the *EnableOptions* logic, I tried to add {{EnableOptions}} to the configuration so as to allow a "cping/cpong" emulation of the well-known AJP feature.
> With the following {{mod_cluster.conf / httpd.conf}} (standalone-ha.xml being the same as in MODCLUSTER-400's description):
> {code}
> +++
> Listen 10.16.92.191:2081
> +++
> MemManagerFile "/hell/workspace/hpws22/apache/cache/mod_cluster"
> ServerName 10.16.92.191:2081
> <IfModule manager_module>
> Listen 10.16.92.191:8745
> LogLevel debug
> <VirtualHost 10.16.92.191:8745>
> ServerName 10.16.92.191:8745
> <Directory />
> Order deny,allow
> Deny from all
> Allow from all
> </Directory>
> KeepAliveTimeout 60
> MaxKeepAliveRequests 0
> ServerAdvertise on
> AdvertiseFrequency 5
> ManagerBalancerName qacluster
> AdvertiseGroup 224.0.3.47:23364
> EnableOptions
> EnableMCPMReceive
> SSLEngine on
> SSLProtocol all -SSLv2 -SSLv3
> SSLCipherSuite "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS"
> SSLHonorCipherOrder on
> SSLCertificateFile /vault/server.crt
> SSLCertificateKeyFile /vault/server.key
> SSLCACertificateFile /vault/myca.crt
> SSLProxyEngine On
> SSLVerifyDepth 10
> <Location /mcm>
> SetHandler mod_cluster-manager
> Order deny,allow
> Deny from all
> Allow from all
> </Location>
> </VirtualHost>
> </IfModule>
> {code}
> one gets this [^hp-ux_error_log-EnableOptions.zip] log:
> {code}
> [debug] mod_proxy_cluster.c(1223): http_cping_cpong: received HTTP/1.1 200 OK
> [debug] mod_proxy_cluster.c(1223): http_cping_cpong: received Server: Apache-Coyote/1.1
> [debug] mod_proxy_cluster.c(1223): http_cping_cpong: received Allow: GET, HEAD, POST, PUT, DELETE, TRACE, OPTIONS
> [debug] mod_proxy_cluster.c(1223): http_cping_cpong: received Content-Length: 0
> [debug] mod_proxy_cluster.c(1223): http_cping_cpong: received Date: Fri, 02 May 2014 17:22:46 GMT
> [debug] mod_proxy_cluster.c(1223): http_cping_cpong: received Connection: close
> [debug] mod_proxy_cluster.c(1239): http_cping_cpong: Done
> [debug] proxy_util.c(2047): proxy: https: has released connection for (10.16.92.191)
> [debug] mod_manager.c(2666): manager_handler STATUS OK
> [debug] proxy_util.c(2029): proxy: https: has acquired connection for (10.16.92.191)
> [debug] proxy_util.c(2085): proxy: connecting https://10.16.92.191:8645/ to 10.16.92.191:8645
> [debug] proxy_util.c(2211): proxy: connected / to 10.16.92.191:8645
> [debug] proxy_util.c(2462): proxy: https: fam 2 socket created to connect to 10.16.92.191
> [debug] mod_proxy_cluster.c(1384): proxy_cluster_try_pingpong: connected to backend
> [error] [client 10.16.92.191] SSL Proxy requested for 10.16.92.191:2081 but not enabled [Hint: SSLProxyEngine]
> [error] proxy: https: failed to enable ssl support for 10.16.92.191:8645 (10.16.92.191)
> [debug] proxy_util.c(2047): proxy: https: has released connection for (10.16.92.191)
> {code}
> Why is the JBoss EAP residing on {{10.16.92.191:8645}} requesting an SSL proxy for the virtual host {{10.16.92.191:2081}}? The result is {{Status: NOTOK}} in the mod_cluster manager console.
> I tried to remove that {{10.16.92.191:2081}}, so that {{10.16.92.191:8745}} is the only one ([^hp-ux_error_log-EnableOptions-single-vhost.zip]):
> {code}
> - Listen 10.16.92.191:2081
> - ServerName 10.16.92.191:2081
> {code}
> The result is an odd attempt to request a proxy for the box's actual hostname and port 80, on which *nothing* (per netstat) is even listening:
> {code}
> [debug] mod_proxy_cluster.c(1223): http_cping_cpong: received HTTP/1.1 200 OK
> [debug] mod_proxy_cluster.c(1223): http_cping_cpong: received Server: Apache-Coyote/1.1
> [debug] mod_proxy_cluster.c(1223): http_cping_cpong: received Allow: GET, HEAD, POST, PUT, DELETE, TRACE, OPTIONS
> [debug] mod_proxy_cluster.c(1223): http_cping_cpong: received Content-Length: 0
> [debug] mod_proxy_cluster.c(1223): http_cping_cpong: received Date: Fri, 02 May 2014 17:39:33 GMT
> [debug] mod_proxy_cluster.c(1223): http_cping_cpong: received Connection: close
> [debug] mod_proxy_cluster.c(1239): http_cping_cpong: Done
> [debug] proxy_util.c(2047): proxy: https: has released connection for (10.16.92.191)
> [debug] mod_manager.c(2666): manager_handler STATUS OK
> [debug] proxy_util.c(2029): proxy: https: has acquired connection for (10.16.92.191)
> [debug] proxy_util.c(2085): proxy: connecting https://10.16.92.191:8645/ to 10.16.92.191:8645
> [debug] proxy_util.c(2211): proxy: connected / to 10.16.92.191:8645
> [debug] proxy_util.c(2462): proxy: https: fam 2 socket created to connect to 10.16.92.191
> [debug] mod_proxy_cluster.c(1384): proxy_cluster_try_pingpong: connected to backend
> [error] [client 10.16.92.191] SSL Proxy requested for eap-perf-hpux-03.mw.lab.eng.bos.redhat.com:80 but not enabled [Hint: SSLProxyEngine]
> [error] proxy: https: failed to enable ssl support for 10.16.92.191:8645 (10.16.92.191)
> [debug] proxy_util.c(2047): proxy: https: has released connection for (10.16.92.191)
> {code}
> I tried adding {{RequestHeader set Front-End-Https "On"}} to the configuration, without any luck.
> Finally, I replicated the SSL configuration *outside* the VirtualHost:
> {code}
> MemManagerFile "/hell/workspace/hpws22/apache/cache/mod_cluster"
> Listen 10.16.92.191:2081
> ServerName 10.16.92.191:2081
> SSLEngine on
> SSLProtocol all -SSLv2 -SSLv3
> SSLCipherSuite "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS"
> SSLHonorCipherOrder on
> SSLCertificateFile /vault/server.crt
> SSLCertificateKeyFile /vault/server.key
> SSLCACertificateFile /vault/myca.crt
> SSLProxyEngine On
> SSLVerifyDepth 10
> <IfModule manager_module>
> +++ the same as above +++
> </IfModule>
> {code}
> This configuration fixed the aforementioned {{failed to enable ssl support}} *and* actually helped to work around MODCLUSTER-400 (log: [^hp-ux_error_log-EnableOptions-SSL_everywhere.zip]):
> {code}
> Fri, May 2, 2014 02:23:44 PM Request URI: /clusterbench/requestinfo
> Headers: {host=10.16.92.191:8645, user-agent=curl/7.30.0, accept=*/*, cookie=JSESSIONID=2hC9ax9LGYDvQZtH0RXdBimf.jboss-eap-6.3-2, x-forwarded-for=10.16.92.191, x-forwarded-host=10.16.92.191:8745, x-forwarded-server=10.16.92.191, connection=Keep-Alive}
> Host header: 10.16.92.191:8645
> Character encoding: null
> JVM route: jboss-eap-6.3-2
> Session ID: 2hC9ax9LGYDvQZtH0RXdBimf.jboss-eap-6.3-2
> Session isNew: false
> Fri, May 2, 2014 02:23:47 PM Request URI: /clusterbench/requestinfo
> Headers: {host=10.16.92.191:8645, user-agent=curl/7.30.0, accept=*/*, cookie=JSESSIONID=2hC9ax9LGYDvQZtH0RXdBimf.jboss-eap-6.3-2, x-forwarded-for=10.16.92.191, x-forwarded-host=10.16.92.191:8745, x-forwarded-server=10.16.92.191, connection=Keep-Alive}
> Host header: 10.16.92.191:8645
> Character encoding: null
> JVM route: jboss-eap-6.3-2
> Session ID: 2hC9ax9LGYDvQZtH0RXdBimf.jboss-eap-6.3-2
> Session isNew: false
> -- stop jboss-eap-6.3-2 -- (the same behavior with jvm kill) --
> Fri, May 2, 2014 02:23:50 PM Request URI: /clusterbench/requestinfo
> Headers: {host=10.16.92.191:8544, user-agent=curl/7.30.0, accept=*/*, cookie=JSESSIONID=2hC9ax9LGYDvQZtH0RXdBimf.jboss-eap-6.3-2, x-forwarded-for=10.16.92.191, x-forwarded-host=10.16.92.191:8745, x-forwarded-server=10.16.92.191, connection=Keep-Alive}
> Host header: 10.16.92.191:8544
> Character encoding: null
> JVM route: jboss-eap-6.3
> Session ID: 2hC9ax9LGYDvQZtH0RXdBimf.jboss-eap-6.3
> Session isNew: false
> Fri, May 2, 2014 02:23:53 PM Request URI: /clusterbench/requestinfo
> Headers: {host=10.16.92.191:8544, user-agent=curl/7.30.0, accept=*/*, cookie=JSESSIONID=2hC9ax9LGYDvQZtH0RXdBimf.jboss-eap-6.3, x-forwarded-for=10.16.92.191, x-forwarded-host=10.16.92.191:8745, x-forwarded-server=10.16.92.191, connection=Keep-Alive}
> Host header: 10.16.92.191:8544
> Character encoding: null
> JVM route: jboss-eap-6.3
> Session ID: 2hC9ax9LGYDvQZtH0RXdBimf.jboss-eap-6.3
> Session isNew: false
> Fri, May 2, 2014 02:23:56 PM Request URI: /clusterbench/requestinfo
> Headers: {host=10.16.92.191:8544, user-agent=curl/7.30.0, accept=*/*, cookie=JSESSIONID=2hC9ax9LGYDvQZtH0RXdBimf.jboss-eap-6.3, x-forwarded-for=10.16.92.191, x-forwarded-host=10.16.92.191:8745, x-forwarded-server=10.16.92.191, connection=Keep-Alive}
> Host header: 10.16.92.191:8544
> Character encoding: null
> JVM route: jboss-eap-6.3
> Session ID: 2hC9ax9LGYDvQZtH0RXdBimf.jboss-eap-6.3
> Session isNew: false
> {code}
> Why isn't {{10.16.92.191:8745}} alone enough? Is this a configuration error or a ProxyPass/SSL integration bug?
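> If only the proxy-side directives need to live at server scope, the duplicated block above might reduce to something like the following (a sketch to test, not a confirmed answer; file paths as in the original configuration):
> {code}
> Listen 10.16.92.191:2081
> ServerName 10.16.92.191:2081
> # only the outgoing (proxy) SSL settings at server scope
> SSLProxyEngine On
> SSLProxyVerify require
> SSLProxyVerifyDepth 10
> SSLProxyCACertificateFile /vault/myca.crt
> {code}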
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (MODCLUSTER-410) OPTIONS call not returning available methods
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-410?page=com.atlassian.jira.pl... ]
Radoslav Husar updated MODCLUSTER-410:
--------------------------------------
Component/s: Native (httpd modules)
> OPTIONS call not returning available methods
> --------------------------------------------
>
> Key: MODCLUSTER-410
> URL: https://issues.jboss.org/browse/MODCLUSTER-410
> Project: mod_cluster
> Issue Type: Bug
> Components: Native (httpd modules)
> Affects Versions: 1.3.2.Final, 1.2.12.Final
> Reporter: Roman Jurkov
> Assignee: Jean-Frederic Clere
> Priority: Minor
>
> The OPTIONS call returns just the standard registered methods; this is because mod_cluster doesn't register its methods through the httpd API {{ap_method_register()}}.
> By using the Apache API we get validation for free, plus faster checks against {{method_number}} instead of comparing the method name with {{strcmp()}}.
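> A hypothetical sketch of the suggested approach (illustrative names, not the actual mod_manager code): register the mod_cluster management verbs via {{ap_method_register()}} in a pre_config hook and compare {{r->method_number}} integers in the handler instead of calling {{strcmp()}} on method names.
> {code}
> #include "httpd.h"
> #include "http_config.h"
> #include "http_protocol.h"
>
> static int mcmp_config_m, mcmp_status_m, mcmp_ping_m;
>
> /* pre_config hook: make httpd aware of the custom methods */
> static int mcmp_pre_config(apr_pool_t *pconf, apr_pool_t *plog, apr_pool_t *ptemp)
> {
>     mcmp_config_m = ap_method_register(pconf, "CONFIG");
>     mcmp_status_m = ap_method_register(pconf, "STATUS");
>     mcmp_ping_m   = ap_method_register(pconf, "PING");
>     return OK;
> }
>
> static int mcmp_handler(request_rec *r)
> {
>     /* integer comparison instead of strcmp(r->method, "CONFIG") */
>     if (r->method_number == mcmp_config_m) {
>         /* ... handle CONFIG ... */
>         return OK;
>     }
>     return DECLINED;
> }
> {code}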
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)