[JBoss JIRA] (MODCLUSTER-339) "proxy: DNS lookup failure" with IPv6 on Solaris
by Michal Babacek (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-339?page=com.atlassian.jira.pl... ]
Michal Babacek updated MODCLUSTER-339:
--------------------------------------
Attachment: http.conf
error_log-proxypass
error_log-mod_cluster
access_log-mod_cluster
> "proxy: DNS lookup failure" with IPv6 on Solaris
> ------------------------------------------------
>
> Key: MODCLUSTER-339
> URL: https://issues.jboss.org/browse/MODCLUSTER-339
> Project: mod_cluster
> Issue Type: Bug
> Affects Versions: 1.2.3.Final, 1.2.4.Final
> Environment: Solaris 10 x86, Solaris 11 x86, Solaris 11 SPARC
> Reporter: Michal Babacek
> Assignee: Jean-Frederic Clere
> Priority: Critical
> Labels: ipv6
> Fix For: 1.2.5.Final
>
> Attachments: access_log-mod_cluster, error_log-mod_cluster, error_log-proxypass, http.conf
>
>
> h2. Failure with mod_cluster
> Having the following setting:
> {code:title=mod_cluster.conf|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
> LoadModule slotmem_module modules/mod_slotmem.so
> LoadModule manager_module modules/mod_manager.so
> LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
> LoadModule advertise_module modules/mod_advertise.so
> MemManagerFile "/tmp/mod_cluster-eap6/jboss-ews-2.0/var/cache/mod_cluster"
> ServerName [2620:52:0:105f::ffff:50]:2080
> <IfModule manager_module>
> Listen [2620:52:0:105f::ffff:50]:6666
> LogLevel debug
> <VirtualHost [2620:52:0:105f::ffff:50]:6666>
> ServerName [2620:52:0:105f::ffff:50]:6666
> <Directory />
> Order deny,allow
> Deny from all
> Allow from all
> </Directory>
> KeepAliveTimeout 60
> MaxKeepAliveRequests 0
> ServerAdvertise on
> AdvertiseFrequency 5
> ManagerBalancerName qacluster
> AdvertiseGroup [ff01::7]:23964
> EnableMCPMReceive
> <Location /mcm>
> SetHandler mod_cluster-manager
> Order deny,allow
> Deny from all
> Allow from all
> </Location>
> </VirtualHost>
> </IfModule>
> {code}
> I get a weird {{proxy: DNS lookup failure}} as soon as the worker sends {{CONFIG}}:
> {code:title=access_log|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
> 2620:52:0:105f::ffff:50 - - [16/May/2013:08:37:24 -0400] "INFO / HTTP/1.1" 200 - "-" "ClusterListener/1.0"
> 2620:52:0:105f::ffff:50 - - [16/May/2013:08:37:24 -0400] "CONFIG / HTTP/1.1" 200 - "-" "ClusterListener/1.0"
> 2620:52:0:105f::ffff:50 - - [16/May/2013:08:37:24 -0400] "STATUS / HTTP/1.1" 200 64 "-" "ClusterListener/1.0"
> ...
> {code}
> {code:title=error_log|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
> ...
> [Thu May 16 08:37:24 2013] [debug] mod_manager.c(1923): manager_trans INFO (/)
> [Thu May 16 08:37:24 2013] [debug] mod_manager.c(2598): manager_handler INFO (/) processing: ""
> [Thu May 16 08:37:24 2013] [debug] mod_manager.c(2647): manager_handler INFO OK
> [Thu May 16 08:37:24 2013] [debug] mod_manager.c(1923): manager_trans CONFIG (/)
> [Thu May 16 08:37:24 2013] [debug] mod_manager.c(2598): manager_handler CONFIG (/) processing: "JVMRoute=jboss-eap-6.1&Host=%5B2620%3A52%3A0%3A105f%3A0%3A0%3Affff%3A50%252%5D&Maxattempts=1&Port=8009&StickySessionForce=No&Type=ajp&ping=10"
> [Thu May 16 08:37:24 2013] [debug] mod_manager.c(2647): manager_handler CONFIG OK
> [Thu May 16 08:37:24 2013] [debug] mod_manager.c(1923): manager_trans STATUS (/)
> [Thu May 16 08:37:24 2013] [debug] mod_manager.c(2598): manager_handler STATUS (/) processing: "JVMRoute=jboss-eap-6.1&Load=100"
> [Thu May 16 08:37:24 2013] [debug] mod_manager.c(1638): Processing STATUS
> [Thu May 16 08:37:24 2013] [debug] mod_proxy_cluster.c(655): add_balancer_node: Create balancer balancer://qacluster
> [Thu May 16 08:37:24 2013] [debug] mod_proxy_cluster.c(426): Created: worker for ajp://[2620:52:0:105f:0:0:ffff:50%2]:8009
> [Thu May 16 08:37:24 2013] [debug] mod_proxy_cluster.c(532): proxy: initialized worker 1 in child 16847 for (2620:52:0:105f:0:0:ffff:50%2) min=0 max=25 smax=25
> [Thu May 16 08:37:24 2013] [debug] mod_proxy_cluster.c(601): Created: worker for ajp://[2620:52:0:105f:0:0:ffff:50%2]:8009 1 (status): 1
> [Thu May 16 08:37:24 2013] [debug] proxy_util.c(2011): proxy: ajp: has acquired connection for (2620:52:0:105f:0:0:ffff:50%2)
> [Thu May 16 08:37:24 2013] [debug] proxy_util.c(2067): proxy: connecting ajp://[2620:52:0:105f:0:0:ffff:50%2]:8009/ to 2620:52:0:105f:0:0:ffff:50%2:8009
> [Thu May 16 08:37:24 2013] [error] [client 2620:52:0:105f::ffff:50] proxy: DNS lookup failure for: 2620:52:0:105f:0:0:ffff:50%2 returned by /
> [Thu May 16 08:37:24 2013] [debug] proxy_util.c(2029): proxy: ajp: has released connection for (2620:52:0:105f:0:0:ffff:50%2)
> ...
> {code}
> An attempt to access the mod_cluster manager console yields an unpleasant {{NOTOK}} (obviously, all workers are in an error state...):
> {code}
> curl -g [2620:52:0:105f::ffff:50]:6666/mcm
> ...
> <h1> Node jboss-eap-6.1 (ajp://[2620:52:0:105f:0:0:ffff:50%2]:8009): </h1>
> <a href="/mcm?nonce=1bddeeea-f9d5-eeea-ee5b-e3a5e3c0a965&Cmd=ENABLE-APP&Range=NODE&JVMRoute=jboss-eap-6.1">Enable Contexts</a> <a href="/mcm?nonce=1bddeeea-f9d5-eeea-ee5b-e3a5e3c0a965&Cmd=DISABLE-APP&Range=NODE&JVMRoute=jboss-eap-6.1">Disable Contexts</a><br/>
> Balancer: qacluster,LBGroup: ,Flushpackets: Off,Flushwait: 10000,Ping: 10000000,Smax: 26,Ttl: 60000000,Status: NOTOK,Elected: 0,Read: 0,Transferred: 0,Connected: 0,Load: -1
> {code}
> You might take a look at the logs: [^error_log-mod_cluster], [^access_log-mod_cluster].
> h2. ProxyPass itself works just fine
> On the other hand, when I tried the following:
> {code:title=proxypass.conf|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
> LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
> ServerName [2620:52:0:105f::ffff:50]:2080
> Listen [2620:52:0:105f::ffff:50]:6666
> LogLevel debug
> <VirtualHost [2620:52:0:105f::ffff:50]:6666>
> ServerName [2620:52:0:105f::ffff:50]:6666
> ProxyPass / ajp://[2620:52:0:105f:0:0:ffff:50]:8009/
> ProxyPassReverse / ajp://[2620:52:0:105f:0:0:ffff:50]:8009/
> <Directory />
> Order deny,allow
> Deny from all
> Allow from all
> </Directory>
> </VirtualHost>
> {code}
> I received:
> {code:title=error_log|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
> ...
> [Thu May 16 08:29:00 2013] [debug] mod_proxy_ajp.c(45): proxy: AJP: canonicalising URL //[2620:52:0:105f:0:0:ffff:50]:8009/
> [Thu May 16 08:29:00 2013] [debug] proxy_util.c(1506): [client 2620:52:0:105f::ffff:50] proxy: ajp: found worker ajp://[2620:52:0:105f:0:0:ffff:50]:8009/ for ajp://[2620:52:0:105f:0:0:ffff:50]:8009/
> [Thu May 16 08:29:00 2013] [debug] mod_proxy.c(1020): Running scheme ajp handler (attempt 0)
> [Thu May 16 08:29:00 2013] [debug] mod_proxy_http.c(1963): proxy: HTTP: declining URL ajp://[2620:52:0:105f:0:0:ffff:50]:8009/
> [Thu May 16 08:29:00 2013] [debug] mod_proxy_ajp.c(681): proxy: AJP: serving URL ajp://[2620:52:0:105f:0:0:ffff:50]:8009/
> [Thu May 16 08:29:00 2013] [debug] proxy_util.c(2011): proxy: AJP: has acquired connection for (2620:52:0:105f:0:0:ffff:50)
> [Thu May 16 08:29:00 2013] [debug] proxy_util.c(2067): proxy: connecting ajp://[2620:52:0:105f:0:0:ffff:50]:8009/ to 2620:52:0:105f:0:0:ffff:50:8009
> [Thu May 16 08:29:00 2013] [debug] proxy_util.c(2193): proxy: connected / to 2620:52:0:105f:0:0:ffff:50:8009
> [Thu May 16 08:29:00 2013] [debug] proxy_util.c(2444): proxy: AJP: fam 26 socket created to connect to 2620:52:0:105f:0:0:ffff:50
> ...
> {code}
> And from the client's side, it worked:
> {{curl -g [2620:52:0:105f::ffff:50]:6666/}} brought me the worker's home page (EAP's welcome in this case).
> Take a look at the whole log [^error_log-proxypass].
> *Note:* The [^http.conf] was the same for both the mod_cluster and the ProxyPass tests.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (MODCLUSTER-339) "proxy: DNS lookup failure" with IPv6 on Solaris
by Michal Babacek (JIRA)
Michal Babacek created MODCLUSTER-339:
-----------------------------------------
Summary: "proxy: DNS lookup failure" with IPv6 on Solaris
Key: MODCLUSTER-339
URL: https://issues.jboss.org/browse/MODCLUSTER-339
Project: mod_cluster
Issue Type: Bug
Affects Versions: 1.2.4.Final, 1.2.3.Final
Environment: Solaris 10 x86, Solaris 11 x86, Solaris 11 SPARC
Reporter: Michal Babacek
Assignee: Jean-Frederic Clere
Priority: Critical
Fix For: 1.2.5.Final
h2. Failure with mod_cluster
Having the following setting:
{code:title=mod_cluster.conf|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
LoadModule slotmem_module modules/mod_slotmem.so
LoadModule manager_module modules/mod_manager.so
LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
LoadModule advertise_module modules/mod_advertise.so
MemManagerFile "/tmp/mod_cluster-eap6/jboss-ews-2.0/var/cache/mod_cluster"
ServerName [2620:52:0:105f::ffff:50]:2080
<IfModule manager_module>
Listen [2620:52:0:105f::ffff:50]:6666
LogLevel debug
<VirtualHost [2620:52:0:105f::ffff:50]:6666>
ServerName [2620:52:0:105f::ffff:50]:6666
<Directory />
Order deny,allow
Deny from all
Allow from all
</Directory>
KeepAliveTimeout 60
MaxKeepAliveRequests 0
ServerAdvertise on
AdvertiseFrequency 5
ManagerBalancerName qacluster
AdvertiseGroup [ff01::7]:23964
EnableMCPMReceive
<Location /mcm>
SetHandler mod_cluster-manager
Order deny,allow
Deny from all
Allow from all
</Location>
</VirtualHost>
</IfModule>
{code}
I get a weird {{proxy: DNS lookup failure}} as soon as the worker sends {{CONFIG}}:
{code:title=access_log|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
2620:52:0:105f::ffff:50 - - [16/May/2013:08:37:24 -0400] "INFO / HTTP/1.1" 200 - "-" "ClusterListener/1.0"
2620:52:0:105f::ffff:50 - - [16/May/2013:08:37:24 -0400] "CONFIG / HTTP/1.1" 200 - "-" "ClusterListener/1.0"
2620:52:0:105f::ffff:50 - - [16/May/2013:08:37:24 -0400] "STATUS / HTTP/1.1" 200 64 "-" "ClusterListener/1.0"
...
{code}
{code:title=error_log|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
...
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(1923): manager_trans INFO (/)
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(2598): manager_handler INFO (/) processing: ""
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(2647): manager_handler INFO OK
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(1923): manager_trans CONFIG (/)
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(2598): manager_handler CONFIG (/) processing: "JVMRoute=jboss-eap-6.1&Host=%5B2620%3A52%3A0%3A105f%3A0%3A0%3Affff%3A50%252%5D&Maxattempts=1&Port=8009&StickySessionForce=No&Type=ajp&ping=10"
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(2647): manager_handler CONFIG OK
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(1923): manager_trans STATUS (/)
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(2598): manager_handler STATUS (/) processing: "JVMRoute=jboss-eap-6.1&Load=100"
[Thu May 16 08:37:24 2013] [debug] mod_manager.c(1638): Processing STATUS
[Thu May 16 08:37:24 2013] [debug] mod_proxy_cluster.c(655): add_balancer_node: Create balancer balancer://qacluster
[Thu May 16 08:37:24 2013] [debug] mod_proxy_cluster.c(426): Created: worker for ajp://[2620:52:0:105f:0:0:ffff:50%2]:8009
[Thu May 16 08:37:24 2013] [debug] mod_proxy_cluster.c(532): proxy: initialized worker 1 in child 16847 for (2620:52:0:105f:0:0:ffff:50%2) min=0 max=25 smax=25
[Thu May 16 08:37:24 2013] [debug] mod_proxy_cluster.c(601): Created: worker for ajp://[2620:52:0:105f:0:0:ffff:50%2]:8009 1 (status): 1
[Thu May 16 08:37:24 2013] [debug] proxy_util.c(2011): proxy: ajp: has acquired connection for (2620:52:0:105f:0:0:ffff:50%2)
[Thu May 16 08:37:24 2013] [debug] proxy_util.c(2067): proxy: connecting ajp://[2620:52:0:105f:0:0:ffff:50%2]:8009/ to 2620:52:0:105f:0:0:ffff:50%2:8009
[Thu May 16 08:37:24 2013] [error] [client 2620:52:0:105f::ffff:50] proxy: DNS lookup failure for: 2620:52:0:105f:0:0:ffff:50%2 returned by /
[Thu May 16 08:37:24 2013] [debug] proxy_util.c(2029): proxy: ajp: has released connection for (2620:52:0:105f:0:0:ffff:50%2)
...
{code}
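For reference, percent-decoding the {{Host}} parameter from the CONFIG body above shows where the trailing {{%2}} in the failing lookup comes from: it is a zone (interface) index that the worker embedded in its address. A minimal sketch using only the JDK (the class name is illustrative):

```java
import java.net.URLDecoder;

public class DecodeHost {
    public static void main(String[] args) throws Exception {
        // Host parameter exactly as logged in the CONFIG request body
        String hostParam = "%5B2620%3A52%3A0%3A105f%3A0%3A0%3Affff%3A50%252%5D";

        // Single decoding pass: %5B -> [, %3A -> :, %25 -> %, %5D -> ]
        String decoded = URLDecoder.decode(hostParam, "UTF-8");

        // Prints: [2620:52:0:105f:0:0:ffff:50%2]
        // The trailing %2 (a zone index) becomes part of the worker host
        // name, which the proxy then hands to the resolver as-is.
        System.out.println(decoded);
    }
}
```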
An attempt to access the mod_cluster manager console yields an unpleasant {{NOTOK}} (obviously, all workers are in an error state...):
{code}
curl -g [2620:52:0:105f::ffff:50]:6666/mcm
...
<h1> Node jboss-eap-6.1 (ajp://[2620:52:0:105f:0:0:ffff:50%2]:8009): </h1>
<a href="/mcm?nonce=1bddeeea-f9d5-eeea-ee5b-e3a5e3c0a965&Cmd=ENABLE-APP&Range=NODE&JVMRoute=jboss-eap-6.1">Enable Contexts</a> <a href="/mcm?nonce=1bddeeea-f9d5-eeea-ee5b-e3a5e3c0a965&Cmd=DISABLE-APP&Range=NODE&JVMRoute=jboss-eap-6.1">Disable Contexts</a><br/>
Balancer: qacluster,LBGroup: ,Flushpackets: Off,Flushwait: 10000,Ping: 10000000,Smax: 26,Ttl: 60000000,Status: NOTOK,Elected: 0,Read: 0,Transferred: 0,Connected: 0,Load: -1
{code}
You might take a look at the logs: [^error_log-mod_cluster], [^access_log-mod_cluster].
h2. ProxyPass itself works just fine
On the other hand, when I tried the following:
{code:title=proxypass.conf|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
ServerName [2620:52:0:105f::ffff:50]:2080
Listen [2620:52:0:105f::ffff:50]:6666
LogLevel debug
<VirtualHost [2620:52:0:105f::ffff:50]:6666>
ServerName [2620:52:0:105f::ffff:50]:6666
ProxyPass / ajp://[2620:52:0:105f:0:0:ffff:50]:8009/
ProxyPassReverse / ajp://[2620:52:0:105f:0:0:ffff:50]:8009/
<Directory />
Order deny,allow
Deny from all
Allow from all
</Directory>
</VirtualHost>
{code}
I received:
{code:title=error_log|borderStyle=solid|borderColor=#ccc| titleBGColor=#F7D6C1}
...
[Thu May 16 08:29:00 2013] [debug] mod_proxy_ajp.c(45): proxy: AJP: canonicalising URL //[2620:52:0:105f:0:0:ffff:50]:8009/
[Thu May 16 08:29:00 2013] [debug] proxy_util.c(1506): [client 2620:52:0:105f::ffff:50] proxy: ajp: found worker ajp://[2620:52:0:105f:0:0:ffff:50]:8009/ for ajp://[2620:52:0:105f:0:0:ffff:50]:8009/
[Thu May 16 08:29:00 2013] [debug] mod_proxy.c(1020): Running scheme ajp handler (attempt 0)
[Thu May 16 08:29:00 2013] [debug] mod_proxy_http.c(1963): proxy: HTTP: declining URL ajp://[2620:52:0:105f:0:0:ffff:50]:8009/
[Thu May 16 08:29:00 2013] [debug] mod_proxy_ajp.c(681): proxy: AJP: serving URL ajp://[2620:52:0:105f:0:0:ffff:50]:8009/
[Thu May 16 08:29:00 2013] [debug] proxy_util.c(2011): proxy: AJP: has acquired connection for (2620:52:0:105f:0:0:ffff:50)
[Thu May 16 08:29:00 2013] [debug] proxy_util.c(2067): proxy: connecting ajp://[2620:52:0:105f:0:0:ffff:50]:8009/ to 2620:52:0:105f:0:0:ffff:50:8009
[Thu May 16 08:29:00 2013] [debug] proxy_util.c(2193): proxy: connected / to 2620:52:0:105f:0:0:ffff:50:8009
[Thu May 16 08:29:00 2013] [debug] proxy_util.c(2444): proxy: AJP: fam 26 socket created to connect to 2620:52:0:105f:0:0:ffff:50
...
{code}
And from the client's side, it worked:
{{curl -g [2620:52:0:105f::ffff:50]:6666/}} brought me the worker's home page (EAP's welcome in this case).
Take a look at the whole log [^error_log-proxypass].
*Note:* The [^http.conf] was the same for both the mod_cluster and the ProxyPass tests.
[JBoss JIRA] (MODCLUSTER-338) Advertize adds a message digest even if security key is not configured
by Radoslav Husar (JIRA)
Radoslav Husar created MODCLUSTER-338:
-----------------------------------------
Summary: Advertize adds a message digest even if security key is not configured
Key: MODCLUSTER-338
URL: https://issues.jboss.org/browse/MODCLUSTER-338
Project: mod_cluster
Issue Type: Bug
Affects Versions: 1.2.4.Final
Reporter: Radoslav Husar
Assignee: Radoslav Husar
Fix For: 1.3.0.Alpha1
As Wireshark shows, the message digest is always included in the advertise message even if the advertise security key is not configured.
This would not be such a problem if the salt actually used were not random bits from memory.
This renders the digest completely useless since it can never be verified.
[JBoss JIRA] (MODCLUSTER-337) If HTTPd sends a digest, require digest matching
by Radoslav Husar (JIRA)
Radoslav Husar created MODCLUSTER-337:
-----------------------------------------
Summary: If HTTPd sends a digest, require digest matching
Key: MODCLUSTER-337
URL: https://issues.jboss.org/browse/MODCLUSTER-337
Project: mod_cluster
Issue Type: Bug
Affects Versions: 1.2.4.Final
Reporter: Radoslav Husar
Assignee: Radoslav Husar
Currently, the mod_cluster server side verifies the message digest only when it has an explicitly set advertise security key. However, if httpd is configured to use a security key, we should require a digest check.
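The requested behavior can be sketched as a single predicate (a hedged illustration; the class and method names are made up for this example and are not mod_cluster code):

```java
// Illustrative sketch of the requested policy: a digest sent by httpd must
// always be checked, regardless of whether a local key is configured.
public class AdvertisePolicy {
    public static boolean accept(boolean httpdSentDigest, boolean digestMatches) {
        // No digest in the message: nothing to verify.
        // Digest present: it must match, even when no local key was set.
        return !httpdSentDigest || digestMatches;
    }
}
```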
[JBoss JIRA] (MODCLUSTER-336) configurable STATUS interval
by Aaron Ogburn (JIRA)
Aaron Ogburn created MODCLUSTER-336:
---------------------------------------
Summary: configurable STATUS interval
Key: MODCLUSTER-336
URL: https://issues.jboss.org/browse/MODCLUSTER-336
Project: mod_cluster
Issue Type: Enhancement
Affects Versions: 1.2.4.Final
Reporter: Aaron Ogburn
Assignee: Jean-Frederic Clere
The frequency of STATUS MCMPs is tied to the backgroundProcessorDelay of the web subsystem's engine, and there is currently no way to change it.
While there is no way to change that backgroundProcessorDelay, could we add a configurable delay within mod_cluster instead? Maybe something like this, wrapping the web engine background processor's calls into mod_cluster:
{code:java}
if (currentTime > lastStatus + delay) {
    // handle STATUS
    lastStatus = currentTime;
} else {
    // do nothing
}
{code}
Thus the frequency of the background processor calls is unchanged, but the frequency with which they can actually result in a STATUS MCMP through mod_cluster becomes configurable.
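A runnable version of the sketch above ({{StatusGate}} and {{shouldSendStatus}} are illustrative names, not proposed mod_cluster API): the background processor keeps firing at its usual rate, and the gate decides whether a given tick may emit a STATUS MCMP.

```java
// Self-contained illustration of the gating idea; names are made up here.
public class StatusGate {
    private final long delayMillis;   // configurable STATUS interval
    private long lastStatusMillis;    // time of the last STATUS actually sent

    public StatusGate(long delayMillis) {
        this.delayMillis = delayMillis;
    }

    // Called on every background-processor tick; returns true only when
    // enough time has elapsed for this tick to produce a STATUS MCMP.
    public boolean shouldSendStatus(long currentTimeMillis) {
        if (currentTimeMillis > lastStatusMillis + delayMillis) {
            lastStatusMillis = currentTimeMillis;  // handle STATUS
            return true;
        }
        return false;  // do nothing this tick
    }

    public static void main(String[] args) {
        StatusGate gate = new StatusGate(10_000); // 10 s STATUS interval
        // Ticks every second; only ticks past the interval pass the gate.
        for (long t = 1_000; t <= 30_000; t += 1_000) {
            if (gate.shouldSendStatus(t)) {
                System.out.println("STATUS at " + t + " ms");
            }
        }
    }
}
```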
[JBoss JIRA] (MODCLUSTER-336) configurable STATUS interval
by Aaron Ogburn (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-336?page=com.atlassian.jira.pl... ]
Aaron Ogburn updated MODCLUSTER-336:
------------------------------------
Description:
The frequency of STATUS MCMPs is tied to the backgroundProcessorDelay of the web subsystem's engine, and there is currently no way to change it.
While there is no way to change that backgroundProcessorDelay, could we add a configurable delay within mod_cluster instead? Maybe something like this, wrapping the web engine background processor's calls into mod_cluster:
{code:java}
if (currentTime > lastStatus + delay) {
    // handle STATUS
    lastStatus = currentTime;
} else {
    // do nothing
}
{code}
Thus the frequency of the background processor calls is unchanged, but the frequency with which they can actually result in a STATUS MCMP through mod_cluster becomes configurable.
> configurable STATUS interval
> ----------------------------
>
> Key: MODCLUSTER-336
> URL: https://issues.jboss.org/browse/MODCLUSTER-336
> Project: mod_cluster
> Issue Type: Enhancement
> Affects Versions: 1.2.4.Final
> Reporter: Aaron Ogburn
> Assignee: Jean-Frederic Clere
>
> The frequency of STATUS MCMPs is tied to the backgroundProcessorDelay of the web subsystem's engine, and there is currently no way to change it.
> While there is no way to change that backgroundProcessorDelay, could we add a configurable delay within mod_cluster instead? Maybe something like this, wrapping the web engine background processor's calls into mod_cluster:
> {code:java}
> if (currentTime > lastStatus + delay) {
>     // handle STATUS
>     lastStatus = currentTime;
> } else {
>     // do nothing
> }
> {code}
> Thus the frequency of the background processor calls is unchanged, but the frequency with which they can actually result in a STATUS MCMP through mod_cluster becomes configurable.
[JBoss JIRA] (MODCLUSTER-305) ProxyPass can break StickySession
by Michal Babacek (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-305?page=com.atlassian.jira.pl... ]
Michal Babacek commented on MODCLUSTER-305:
-------------------------------------------
Might be related to investigation on [JBQA-7899]...
> ProxyPass can break StickySession
> ---------------------------------
>
> Key: MODCLUSTER-305
> URL: https://issues.jboss.org/browse/MODCLUSTER-305
> Project: mod_cluster
> Issue Type: Bug
> Affects Versions: 1.2.0.Final
> Reporter: Yves Peter
> Assignee: Jean-Frederic Clere
> Labels: proxy
> Fix For: 1.2.1.Final
>
>
> Adding a ProxyPass directive to httpd.conf breaks StickySession for all contexts. Example ProxyPass:
> {code}
> ProxyPass /activevos_appA balancer://avos_appA/activevos stickysession=JSESSIONID|jsessionid nofailover=On
> ProxyPassReverse /activevos_appA balancer://avos_appA/activevos
> {code}
> StickySession only breaks if the ProxyPass points to a balancer that does not exist at that moment (avos_appA in the example) and the parameter "stickysession=JSESSIONID|jsessionid" is used.