[JBoss JIRA] (MODCLUSTER-178) Domains: non sticky requests should be contained within a given domain
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-178?page=com.atlassian.jira.pl... ]
Radoslav Husar commented on MODCLUSTER-178:
-------------------------------------------
{quote}first we stick to a node, then in case of a failure of that node we failover to another node within same domain, and only in case of a second node failure, we will send request to Domain2.{quote}
If I am reading this right, what you just described is *sticky* sessions, and that is how they already work today.
What this Jira is about is non-sticky sessions (i.e. sessions that do not stick to a particular node); at the moment they do not stick to a domain either. Implementing and enabling this feature would keep sessions sticky to the domain only.
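To make the distinction concrete, here is a minimal sketch (illustration only, not mod_cluster code; the stickyDomain flag mirrors the hypothetical flag proposed in the description below): a non-sticky request keeps no node affinity, but the balancer prefers nodes from the domain it already chose and leaves that domain only when it has no usable node left.
{code}
import java.util.List;
import java.util.Optional;

// Hypothetical sketch, not mod_cluster code. It only illustrates the idea that a
// non-sticky request could still be pinned to a previously chosen domain, falling
// back to another domain only when the preferred domain has no usable nodes.
public class DomainStickyChooser {

    static class Node {
        final String name;
        final String domain;
        final boolean up;
        Node(String name, String domain, boolean up) {
            this.name = name; this.domain = domain; this.up = up;
        }
    }

    /**
     * Picks a node for a non-sticky request. If stickyDomain (a hypothetical flag
     * mirroring the proposal in this issue) is set and preferredDomain still has an
     * up node, stay inside that domain; otherwise pick any up node in any domain.
     */
    static Optional<Node> choose(List<Node> nodes, boolean stickyDomain, String preferredDomain) {
        if (stickyDomain && preferredDomain != null) {
            Optional<Node> inDomain = nodes.stream()
                    .filter(n -> n.up && n.domain.equals(preferredDomain))
                    .findFirst();
            if (inDomain.isPresent()) {
                return inDomain;
            }
        }
        // Fall back to any available node in any domain.
        return nodes.stream().filter(n -> n.up).findFirst();
    }

    public static void main(String[] args) {
        List<Node> nodes = List.of(
                new Node("node1", "Domain1", false),
                new Node("node2", "Domain1", true),
                new Node("node3", "Domain2", true));
        // A non-sticky request previously routed to Domain1 stays in Domain1 (node2).
        System.out.println(choose(nodes, true, "Domain1").map(n -> n.name).orElse("none"));
    }
}
{code}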
> Domains: non sticky requests should be contained within a given domain
> ----------------------------------------------------------------------
>
> Key: MODCLUSTER-178
> URL: https://issues.jboss.org/browse/MODCLUSTER-178
> Project: mod_cluster
> Issue Type: Feature Request
> Reporter: Bela Ban
> Assignee: Jean-Frederic Clere
> Attachments: cluster-small.png, cluster.jpg
>
>
> If we have a couple of domains, then non-sticky requests can go to *any* server in *any* domain.
> It would be nice if we could at least make the domain sticky, so e.g. non-sticky requests always pick the same domain, as long as there are servers in that domain.
> Maybe this should be a new flag, e.g. stickyDomain. If stickySession is false, then we'd check stickyDomain.
--
This message was sent by Atlassian JIRA
(v6.2.3#6260)
[JBoss JIRA] (MODCLUSTER-178) Domains: non sticky requests should be contained within a given domain
by Roman Jurkov (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-178?page=com.atlassian.jira.pl... ]
Roman Jurkov commented on MODCLUSTER-178:
-----------------------------------------
This ticket could be simple, but it can become tricky if we want to expand the functionality.
Use case one is sticking to a node (which, as we already know, is supported).
The second use case is sticking to a domain (which is what this ticket describes).
The third use case (which I think would be the tricky one) is supporting a combination of both:
!cluster-small.png!
First we stick to a node; if that node fails we fail over to another node within the same domain, and only on a second node failure do we send the request to Domain2 (a rough ordering sketch follows below).
Do you think we would ever want to implement the third use case?
I was looking into implementing this one.
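A minimal sketch of that ordering, under assumed names and with no relation to the actual mod_proxy_cluster node selection code:
{code}
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch only: one way to express the "third use case" as an
// ordered failover list. Node names and domains are illustrative.
public class CombinedFailoverOrder {

    static class Node {
        final String name;
        final String domain;
        Node(String name, String domain) { this.name = name; this.domain = domain; }
    }

    /**
     * Builds the order in which nodes would be tried: the sticky node first,
     * then the remaining nodes of its domain, then nodes from other domains.
     */
    static List<Node> failoverOrder(Node stickyNode, List<Node> allNodes) {
        List<Node> order = new ArrayList<>();
        order.add(stickyNode);
        for (Node n : allNodes) {            // same-domain nodes next
            if (!n.name.equals(stickyNode.name) && n.domain.equals(stickyNode.domain)) {
                order.add(n);
            }
        }
        for (Node n : allNodes) {            // other domains only as a last resort
            if (!n.domain.equals(stickyNode.domain)) {
                order.add(n);
            }
        }
        return order;
    }

    public static void main(String[] args) {
        List<Node> nodes = List.of(
                new Node("node1", "Domain1"), new Node("node2", "Domain1"),
                new Node("node3", "Domain2"), new Node("node4", "Domain2"));
        // Prints: node1 (Domain1), node2 (Domain1), node3 (Domain2), node4 (Domain2)
        failoverOrder(nodes.get(0), nodes).forEach(n -> System.out.println(n.name + " (" + n.domain + ")"));
    }
}
{code}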
> Domains: non sticky requests should be contained within a given domain
> ----------------------------------------------------------------------
>
> Key: MODCLUSTER-178
> URL: https://issues.jboss.org/browse/MODCLUSTER-178
> Project: mod_cluster
> Issue Type: Feature Request
> Reporter: Bela Ban
> Assignee: Jean-Frederic Clere
> Attachments: cluster-small.png, cluster.jpg
>
>
> If we have a couple of domains, then non-sticky requests can go to *any* server in *any* domain.
> It would be nice if we could at least make the domain sticky, so e.g. non-sticky requests always pick the same domain, as long as there are servers in that domain.
> Maybe this should be a new flag, e.g. stickyDomain. If stickySession is false, then we'd check stickyDomain.
--
This message was sent by Atlassian JIRA
(v6.2.3#6260)
[JBoss JIRA] (MODCLUSTER-178) Domains: non sticky requests should be contained within a given domain
by Roman Jurkov (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-178?page=com.atlassian.jira.pl... ]
Roman Jurkov updated MODCLUSTER-178:
------------------------------------
Attachment: cluster-small.png
> Domains: non sticky requests should be contained within a given domain
> ----------------------------------------------------------------------
>
> Key: MODCLUSTER-178
> URL: https://issues.jboss.org/browse/MODCLUSTER-178
> Project: mod_cluster
> Issue Type: Feature Request
> Reporter: Bela Ban
> Assignee: Jean-Frederic Clere
> Attachments: cluster-small.png, cluster.jpg
>
>
> If we have a couple of domains, then non-sticky requests can go to *any* server in *any* domain.
> It would be nice if we could at least make the domain sticky, so e.g. non-sticky requests always pick the same domain, as long as there are servers in that domain.
> Maybe this should be a new flag, e.g. stickyDomain. If stickySession is false, then we'd check stickyDomain.
--
This message was sent by Atlassian JIRA
(v6.2.3#6260)
[JBoss JIRA] (MODCLUSTER-391) mod_cluster and mod_proxy integration
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-391?page=com.atlassian.jira.pl... ]
RH Bugzilla Integration commented on MODCLUSTER-391:
----------------------------------------------------
Paul Gier <pgier(a)redhat.com> changed the Status of [bug 987259|https://bugzilla.redhat.com/show_bug.cgi?id=987259] from MODIFIED to ON_QA
> mod_cluster and mod_proxy integration
> -------------------------------------
>
> Key: MODCLUSTER-391
> URL: https://issues.jboss.org/browse/MODCLUSTER-391
> Project: mod_cluster
> Issue Type: Bug
> Affects Versions: 1.2.6.Final
> Environment: All platforms we build mod_cluster for.
> Reporter: Michal Babacek
> Assignee: Jean-Frederic Clere
> Labels: native_libraries
> Fix For: 1.3.1.Final, 1.2.10.Final
>
> Attachments: error_log, mod_cluster.conf, mod_proxy.conf, standalone-ha.xml
>
>
> This Jira encapsulates all concerns regarding mod_cluster and mod_proxy integration. For instance, while basic {{ProxyPass}} settings work just fine, e.g. serving some files under {{/static}} from the Apache HTTP Server itself:
> {code}
> ProxyPassMatch ^/static/ !
> ProxyPass / balancer://qacluster stickysession=JSESSIONID|jsessionid nofailover=on
> ProxyPassReverse / balancer://qacluster
> ProxyPreserveHost on
> {code}
> there are more complex setups, involving {{BalancerMember}} configurations, that do not work as expected. In the following example, the goal was to have the {{/clusterbench}} application managed dynamically by mod_cluster while, in a different VirtualHost, the {{/tses}} application was handled by a manually configured mod_proxy balancer.
> Attached: [^mod_cluster.conf], [^mod_proxy.conf], [^standalone-ha.xml] (modcluster subsystem element only) and [^error_log].
> The aforementioned setup resulted in:
> |HTTP 200|(From worker)|http://10.16.88.19:8847/clusterbench/requestinfo/|OK|(/)|
> |HTTP 404|(From httpd)|http://10.16.88.19:8847/tses/session.jsp|Expected fail|(/)|
> |HTTP 503|(From httpd)|http://10.16.88.19:2182/tses/session.jsp|Unexpected fail|(x)|
--
This message was sent by Atlassian JIRA
(v6.2.3#6260)
[JBoss JIRA] (MODCLUSTER-407) worker-timeout can cause httpd thread stalls
by Aaron Ogburn (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-407?page=com.atlassian.jira.pl... ]
Aaron Ogburn updated MODCLUSTER-407:
------------------------------------
Steps to Reproduce:
1) Configure JBoss with worker-timeout="1" in the modcluster subsystem
2) Start httpd and JBoss. Run httpd on a multicore system (4+ cores).
3) Confirm JBoss is reachable through httpd/mod_cluster then kill JBoss so the mod_cluster worker-timeout retry logic is used
4) Load up httpd with highly concurrent request traffic for JBoss for some time.
Then check for stalled requests/threads. Each request should finish within ~1 second, but once stalled it can take minutes. You can check access logs with %T for response times once requests complete, use pstack to inspect the threads, or use the mod_status page (it will show many threads in the W state, with the seconds since their requests started continually growing). A hypothetical load-check sketch follows below.
was:
1) Configure JBoss with worker-timeout="1" in the modcluster subsystem
2) Start httpd and JBoss
3) Confirm JBoss is reachable through httpd/mod_cluster then kill JBoss so the mod_cluster worker-timeout retry logic is used
4) Load up httpd with requests for JBoss (a couple seconds holding refresh in a browser even will do the trick)
Then check for stalled requests/threads. Each request should finish within ~1 second, but once stalled it can take minutes. You can check access logs with %T for response times once requests complete, use pstack to inspect the threads, or use the mod_status page (it will show many threads in the W state, with the seconds since their requests started continually growing).
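For step 4, a minimal hypothetical load/check harness along the following lines can stand in for a full load generator; the URL, concurrency, and the 5-second threshold are placeholder assumptions, not part of the original report:
{code}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical load/check harness, not part of the original reproducer.
// The URL, concurrency and threshold below are placeholders.
public class StallCheck {

    public static void main(String[] args) throws Exception {
        String url = args.length > 0 ? args[0] : "http://localhost/clusterbench/";  // placeholder front-end URL
        int concurrency = 64;            // "highly concurrent" traffic per step 4
        int requestsPerWorkerThread = 50;

        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();

        // Each task hammers the front end and records its worst response time.
        Callable<Long> task = () -> {
            long worstMs = 0;
            for (int i = 0; i < requestsPerWorkerThread; i++) {
                long start = System.nanoTime();
                try {
                    HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();
                    client.send(req, HttpResponse.BodyHandlers.discarding());
                } catch (Exception ignored) {
                    // 503s and connection errors are expected while the worker is down
                }
                worstMs = Math.max(worstMs, (System.nanoTime() - start) / 1_000_000);
            }
            return worstMs;
        };

        ExecutorService pool = Executors.newFixedThreadPool(concurrency);
        List<Future<Long>> results = new ArrayList<>();
        for (int i = 0; i < concurrency; i++) {
            results.add(pool.submit(task));
        }

        for (Future<Long> f : results) {
            long ms = f.get();
            // With worker-timeout="1" every request should finish in roughly a second;
            // multi-minute times are the stall this issue describes.
            System.out.println("worst response time: " + ms + " ms" + (ms > 5000 ? "  <-- possible stall" : ""));
        }
        pool.shutdown();
    }
}
{code}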
> worker-timeout can cause httpd thread stalls
> --------------------------------------------
>
> Key: MODCLUSTER-407
> URL: https://issues.jboss.org/browse/MODCLUSTER-407
> Project: mod_cluster
> Issue Type: Bug
> Affects Versions: 1.2.8.Final
> Reporter: Aaron Ogburn
> Assignee: Jean-Frederic Clere
> Fix For: 1.3.1.Final, 1.2.9.Final
>
>
> Setting a modcluster worker-timeout can stall requests and threads on the httpd side when requests arrive while workers are in a down state. A stack trace of the problem thread looks like the following (it loops recursively through mod_proxy_cluster from frame #160 down to #2):
> #0 0x00007ff8eb547533 in select () from /lib64/libc.so.6
> #1 0x00007ff8eba39185 in apr_sleep () from /usr/lib64/libapr-1.so.0
> #2 0x00007ff8e84be0d1 in ?? () from /etc/httpd/modules/mod_proxy_cluster.so
> ...
> #160 0x00007ff8e84beb9f in ?? () from /etc/httpd/modules/mod_proxy_cluster.so
> #161 0x00007ff8e88d2116 in proxy_run_pre_request () from /etc/httpd/modules/mod_proxy.so
> #162 0x00007ff8e88d9186 in ap_proxy_pre_request () from /etc/httpd/modules/mod_proxy.so
> #163 0x00007ff8e88d63c2 in ?? () from /etc/httpd/modules/mod_proxy.so
--
This message was sent by Atlassian JIRA
(v6.2.3#6260)
[JBoss JIRA] (MODCLUSTER-376) multiple workers with the same id following a tomcat crash/kill
by Jean-Frederic Clere (JIRA)
[ https://issues.jboss.org/browse/MODCLUSTER-376?page=com.atlassian.jira.pl... ]
Jean-Frederic Clere edited comment on MODCLUSTER-376 at 6/3/14 8:22 AM:
------------------------------------------------------------------------
Patch used at the customer... But NOT merged upstream...
was (Author: jfclere):
Patch used at the customer... But merged upstream...
> multiple workers with the same id following a tomcat crash/kill
> ---------------------------------------------------------------
>
> Key: MODCLUSTER-376
> URL: https://issues.jboss.org/browse/MODCLUSTER-376
> Project: mod_cluster
> Issue Type: Bug
> Affects Versions: 1.0.10
> Environment: -JBoss Enterprise Web Server 1.0.2
> -mod_cluster 1.0.10.GA_CP04
> -Red Hat Enterprise Linux 5
> Reporter: Aaron Ogburn
> Assignee: Jean-Frederic Clere
> Attachments: 131122.patch
>
>
> Following a kill or crash of Tomcat, multiple workers are seen for a single id when a new Tomcat node reconnects and reuses the id of the previously crashed worker.
> The STATUS ping check finds the old crashed worker first for that id and therefore pings the wrong destination. The ping fails, the load factor is not applied, and the persistent -1 load state leads to 503s.
--
This message was sent by Atlassian JIRA
(v6.2.3#6260)