[
https://issues.jboss.org/browse/MODCLUSTER-269?page=com.atlassian.jira.pl...
]
Michal Babacek updated MODCLUSTER-269:
--------------------------------------
Description:
With this setting on the httpd side:
{code:title=conf/httpd.conf|borderStyle=solid|borderColor=#ccc|titleBGColor=#F7D6C1}
#Enable mod_cluster manager
<Location /mcm>
SetHandler mod_cluster-manager
Order deny,allow
Deny from all
#My machine..., NOT matching the worker node
Allow from 10.34.3.
</Location>
{code}
{code:title=conf.d/modcluster.conf|borderStyle=solid|borderColor=#ccc|titleBGColor=#F7D6C1}
Listen 8080
Listen 6666
LogLevel debug
<VirtualHost perf08:6666>
ServerName perf08
KeepAlive Off
KeepAliveTimeout 60
MaxKeepAliveRequests 1
ManagerBalancerName qacluster
AdvertiseGroup 224.0.1.105:23364
ServerAdvertise On
AdvertiseFrequency 5
</VirtualHost>
{code}
and this on AS7 (worker) side:
{code:lang=xml|title=standalone-ha.xml|borderStyle=solid|borderColor=#ccc|titleBGColor=#F7D6C1}
<subsystem xmlns="urn:jboss:domain:modcluster:1.0">
<mod-cluster-config proxy-list="perf08:8080"/>
</subsystem>
{code}
we get the following behavior:
# I can access the mod_cluster-manager web console from 10.34.3., but it is not possible from the worker node (10.16.88.). Correct.
# Worker node (10.16.88.) *{color:red}is able to register{color}* itself with my *perf08* balancer on both *perf08:6666* and *perf08:8080*.
# Even if I use some arbitrary AJP port on the worker side, it self-configures with the balancer (sending "JVMRoute=perf04node&Host=perf04&Port=9989&Type=ajp" as part of a CONFIG MCMP message).
# The worker registers the / context with my balancer, so filtering contexts is unlikely to be a convenient way of preventing undesired workers from connecting.
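For context, MCMP commands are plain HTTP-style requests with custom methods; a sketch of what the CONFIG message above could look like on the wire (field values taken from this report, framing per the MCMP protocol, not captured traffic):
{code:title=CONFIG MCMP message (sketch)|borderStyle=solid|borderColor=#ccc|titleBGColor=#F7D6C1}
CONFIG / HTTP/1.1
Host: perf08:8080
Content-Type: application/x-www-form-urlencoded
Content-Length: 50

JVMRoute=perf04node&Host=perf04&Port=9989&Type=ajp
{code}
Any client that can reach a listening port can send such a request, which is exactly the exposure described here.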
This raises a question:
How does one actually prevent unknown, rogue worker nodes from registering a malicious context with my publicly exposed balancer, if I do not want to use certificates for worker authentication (e.g. for performance reasons)?
The question arose during an EC2-related talk with [~akostadinov]...
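One mitigation to explore (a hedged sketch, not verified against 1.1.3.Final): use httpd 2.2 host-based access control inside the MCMP-receiving VirtualHost, so only the trusted worker subnet may send MCMP commands. Since workers apparently register on any listening port, the same stanza would be needed in every vhost that accepts MCMP, and it still does not authenticate individual workers within the allowed subnet.
{code:title=conf.d/modcluster.conf (access-control sketch)|borderStyle=solid|borderColor=#ccc|titleBGColor=#F7D6C1}
<VirtualHost perf08:6666>
  ServerName perf08
  ManagerBalancerName qacluster
  ServerAdvertise On
  <Location />
    Order deny,allow
    Deny from all
    # Only the known worker subnet may talk MCMP (CONFIG, ENABLE-APP, ...)
    Allow from 10.16.88.
  </Location>
</VirtualHost>
{code}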
Workers register with balancer on any port it is listening on
-------------------------------------------------------------
Key: MODCLUSTER-269
URL:
https://issues.jboss.org/browse/MODCLUSTER-269
Project: mod_cluster
Issue Type: Bug
Affects Versions: 1.1.3.Final
Environment: Mod_cluster 1.1.3.Final, x86_64
Reporter: Michal Babacek
Assignee: Michal Babacek
Labels: eap51, eap6, ews, mod_cluster
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.jboss.org/secure/ContactAdministrators!default.jspa
For more information on JIRA, see:
http://www.atlassian.com/software/jira