[mod_cluster-dev] Powering down a worker

Bela Ban bban at redhat.com
Thu Jun 10 03:35:41 EDT 2010


OK, works as designed!
thx

Paul Ferraro wrote:
> On Wed, 2010-06-09 at 14:43 +0200, Bela Ban wrote:
>   
>> I have the scenario where I run httpd/mod-cluster on an EC2 instance and 
>> a few workers on different EC2 instances.
>>
>> When I "terminate" a worker instance (using the EC2 GUI), the
>> virtual instance is apparently terminated *ungracefully*, i.e.
>> similar to just pulling the power plug. This means that the shutdown
>> scripts (in /etc/rc0.d) are not run, and the open sockets (e.g. to
>> mod-cluster) are not closed, so mod-cluster won't remove the worker.
>>
>> When I look at mod_cluster_manager, it continues listing the killed 
>> worker in OK state.
>>
>> My questions:
>>
>>     * I recall that, unlike mod-jk, mod-cluster doesn't have
>>       cping/cpong or any other heartbeating mechanism. Is this
>>       correct? Would mod-cluster detect a worker's unreachability,
>>       e.g. when I pull the plug on the switch connecting the worker
>>       to mod-cluster?
>>     * I thought that the workers detect when a member has crashed and
>>       the cluster master then notifies the proxy. So when we have
>>       workers {A,B,C}, and C crashes ungracefully, wouldn't A notify
>>       the proxy of C's death, so the proxy can remove C?
>>     
>
> If you are indeed using HAModClusterService, you should see the
> following INFO message in the server.log of the master node, following
> the log messages regarding jgroups group membership change:
> Removing jvm route [...] from proxy [...] on behalf of crashed
> member: ...
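The view-diff logic Paul describes can be sketched roughly as follows. This is a minimal, self-contained illustration of how a master node could derive crashed members from two successive group views; the class and method names here are made up for the example and are not the actual HAModClusterService or JGroups code.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch: on a group membership change, members present in
// the old view but absent from the new one are treated as crashed, and
// the master removes their jvm routes from the proxy on their behalf.
public class CrashedMemberSketch {

    // Members in oldView that no longer appear in newView.
    static Set<String> crashedMembers(List<String> oldView, List<String> newView) {
        Set<String> crashed = new HashSet<>(oldView);
        crashed.removeAll(newView);
        return crashed;
    }

    public static void main(String[] args) {
        List<String> oldView = Arrays.asList("A", "B", "C");
        List<String> newView = Arrays.asList("A", "B"); // C crashed ungracefully
        for (String member : crashedMembers(oldView, newView)) {
            // Loosely mirrors the INFO message the master logs.
            System.out.println("Removing jvm route of crashed member: " + member);
        }
    }
}
```

In the real setup the views come from JGroups membership-change notifications on the master node, which is why the INFO message appears right after the group membership change is logged.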

-- 
Bela Ban
Lead JGroups / Clustering Team
JBoss


