[mod_cluster-dev] Powering down a worker

Bela Ban bban at redhat.com
Wed Jun 9 09:40:44 EDT 2010



jean-frederic clere wrote:
> On 06/09/2010 02:43 PM, Bela Ban wrote:
>> I have the scenario where I run httpd/mod-cluster on an EC2 instance and
>> a few workers on different EC2 instances.
>>
>> When I "terminate" a worker instance (using the EC2 GUI), apparently the
>> virtual instance is terminated *ungracefully*, i.e. similar to just
>> pulling the power plug. This means that the shutdown scripts (in
>> /etc/rc0.d) are not run, and the open sockets (e.g. to mod-cluster) are
>> not closed, so mod-cluster won't remove the worker.
>>
>> When I look at mod_cluster_manager, it continues listing the killed
>> worker in OK state.
>
> With CR2? Well, at least for 5 seconds, normally.
>
>> My questions:
>>
>> * I recall that, unlike mod-jk, mod-cluster doesn't have
>> cping/cpong, or any other heartbeating mechanism. Is this correct?
>
> It has heartbeating logic, so it should detect the dead node.

OK, I was mistaken: I can see the cping/cpong requests in error_log...
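
A quick way to confirm the heartbeat traffic is to count the entries in
the proxy's error log, e.g. (the log path is an assumption for a stock
httpd install, adjust to your setup):

    # count cping entries in the proxy's error log
    grep -ci "cping" /etc/httpd/logs/error_log

If the count keeps growing while the worker is up, the heartbeat is
running.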


>> So would mod-cluster detect a worker's unreachability, e.g. when
>> I pull the plug on the switch connecting the worker to mod-cluster?
>
> Yep, otherwise there is a bug somewhere. It would be interesting to set
> the log level to debug in the httpd conf file and mail the corresponding
> error log. (Or open a JIRA and attach the error_log file there.)
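
For reference, a minimal sketch of the debug setup jean-frederic
suggests; LogLevel and ErrorLog are standard httpd directives, the log
path is an assumption:

    # httpd.conf: turn on verbose logging, including mod_cluster's
    # heartbeat and status messages, and send it to the error log
    LogLevel debug
    ErrorLog logs/error_log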


OK, I verified this actually works: mod_cluster_manager no longer shows
the terminated instance.
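
For completeness, the worker table can also be polled from the command
line, assuming the manager handler is mapped at /mod_cluster_manager on
the proxy (host, port and path below are assumptions from a typical
setup):

    # fetch the current worker listing from the proxy
    curl http://proxy.example.com/mod_cluster_manager

The terminated worker should drop out of the listing after a few
seconds, in line with the ~5 seconds mentioned above.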

-- 
Bela Ban
Lead JGroups / Clustering Team
JBoss

