[infinispan-dev] Question on connection pooling in RemoteCacheManager

Mircea Markus mircea.markus at jboss.com
Tue Nov 9 11:22:47 EST 2010


Adding infinispan-dev in CC as these might be of interest for others as well.

On 9 Nov 2010, at 16:03, Richard Achmatowicz wrote:
> Hi Mircea
> 
> Thanks for the clarification on connection pooling.
> 
> One other question about HotRod concerning intelligence. My understanding is that RemoteCacheManager
> can be aware of:
> - the topology of the cluster it is interacting with, by way of piggybacking topology information onto responses
> - the host in a cluster a (k,v) pair should reside on by way of doing its own hashing, and so can correctly
> choose the server-side server module  it should talk to directly
both true
> 
> What wasn't clear to me was how these features were activated. The impression I got was that they are
> activated implicitly depending on the type of cache involved:
In the case of the Java HotRod client they are activated by default. The reason for having these intelligence levels is clients written in other languages: e.g. someone writing a client in C++ against a static ISPN cluster doesn't need it to be topology-aware, so there is no point in implementing the whole protocol, only a subset of it.
> - if a client is using a replicated cache, then cluster-topology information together with load balancing will be used
> automatically to determine which server-side server module to talk to when the client interacts with the cache
yes. The load balancing policy can be specified through the infinispan.client.hotrod.request_balancing_strategy config property. It defaults to round-robin.
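For example, something along these lines (just a sketch; the server_list value and the balancing strategy class name are illustrative, so check them against the javadoc of the version you are testing):

import java.util.Properties;
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

public class BalancingExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // the servers the client knows about at startup (host:port pairs)
        props.setProperty("infinispan.client.hotrod.server_list",
                "nodeX:11222;nodeY:11222;nodeZ:11222");
        // optional: override the request balancing strategy (defaults to round-robin)
        props.setProperty("infinispan.client.hotrod.request_balancing_strategy",
                "org.infinispan.client.hotrod.impl.transport.tcp.RoundRobinBalancingStrategy");
        RemoteCacheManager rcm = new RemoteCacheManager(props);
        RemoteCache<String, String> cache = rcm.getCache();
        // for a replicated (or invalidation) cache the request goes to whichever
        // server the balancing strategy picks next
        cache.put("k", "v");
        rcm.stop();
    }
}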
> - if a client is using a distributed cache, then load balancing will be ignored and hashing will be used to determine
> which server-side server module to talk to when the client interacts with the cache
yes
> - if a client is using an invalidate cache ????
same as replicated.
> - all of this happens with no prior configuration of the RemoteCacheManager required
yes.
> - a RemoteCacheManager can thus be both (i) using load balancing to determine which server-side peer to interact with
> *and* (ii) using hashing  to determine which server-side peer to interact with, depending on the caches it has handed out and
> the configurations of those caches
yes
> Is this correct?

All of it! 
Cheers,
Mircea
> 
> On 11/09/2010 06:25 AM, Mircea Markus wrote:
>> Hi Richard,
>> 
>> On 8 Nov 2010, at 22:52, Richard Achmatowicz wrote:
>> 
>>> Hi Mircea
>>> 
>>> I'm looking at a javadoc for RemoteCacheManager and some of the description on the connection pooling section
>>> is a little unclear for me.
>>> 
>>> The way I understand it, TCP connections are established between a RemoteCacheManager instance and
>>> a server-side HotRod server module instance, in order to pass requests executed on the client side to
>>> the server side for processing and return the results.
>>> The originators of the requests may be the RemoteCacheManager
>>> itself, or client threads performing operations on caches they received from the RemoteCacheManager.
>> yes.
>>> So, if we have one client thread CT1 on the client-side, accessing a cache C provided by its RemoteCacheManager,
>>> RCM, and there are three nodes X,Y,Z on the  server side, then we have at least three TCP connections held by the
>>> RemoteCacheManager, one for each node on the server-side.
>> yes.
>>> I'm trying to understand your definitions of the terms maxActive, maxTotal and maxIdle from the javadoc for
>>> RemoteCacheManager. I'm going to look at the definitions one by one:
>>> 
>>> 1. maxActive - controls the maximum number of connections per server that are allocated (checked out to
>>> client threads, or idle in the pool) at one time.
>>> - when you say per server, do you mean per X, Y or Z HotRod server modules on the server side?
>> yes.
>>> - when is a connection allocated? When a client CT1 calls RCM.getCache()? When a client CT1
>>> performs an operation on a cache it has obtained? (i.e. are the TCP connections allocated per cache instance
>>> or per operation?)
>> Connections are pooled and reused between operations on RemoteCaches. All remote named caches (i.e. as obtained through RCM.getCache(name)) share the same connection pool. When a method on a RemoteCache is called (e.g. get(k)) a connection
>> is taken from the pool, used for communicating with the servers and then returned to the pool. Under load, the pool might be empty (all connections are borrowed) when a RemoteCache asks for a connection:
>> the behaviour in this case is specified by whenExhaustedAction (by default the caller blocks). A short sketch further down in this reply pulls these settings together.
>>> - when is a connection released? When an operation is completed? Is this the same as becoming idle?
>> A connection is released by the connection eviction thread (see timeBetweenEvictionRunsMillis):
>> - if minEvictableIdleTimeMillis is reached for that connection
>> - if the tcp connection breaks
>>> - does this mean that cache instances can not concurrently use the same TCP connection? Or does this
>>> mean that cache operations, even from the same cache, cannot use the same TCP connection?
>> there's no relation between cache and connection. All remote caches borrow connections from the same connection pool instance, which is one per RCM.
>>> 2. maxTotal - sets a limit on the number of persistent connections that can be in circulation within the combined
>>> set of servers
>>> - seems clear
>>> 
>>> 3. maxIdle - controls the maximum number of idle persistent connections per server at any one time
>>> - seems clear
>>> 
>>> 4. whenExhaustedAction - specifies what happens when asking for a connection from a server's pool
>>> and that pool is exhausted
>>> - again, under what circumstances do we "ask for a connection from a server's pool"?
>>> - "the pool is exhausted" mean that maxActive has been reached, right?
>> that + all connections are in-use at that moment.
>>> Any help with this appreciated. Trying to figure out what I need to test with this.
>> Hope it is more clear now, I'll also update the javadoc with this.
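>> For reference, a rough sketch of how these pool settings can be passed to the RemoteCacheManager (property names as in the javadoc section quoted above; the whenExhaustedAction value follows the commons-pool encoding of 0=exception, 1=block, 2=grow, so double-check it against your version):
>> 
>> import java.util.Properties;
>> import org.infinispan.client.hotrod.RemoteCache;
>> import org.infinispan.client.hotrod.RemoteCacheManager;
>> 
>> public class PoolConfigExample {
>>     public static void main(String[] args) {
>>         Properties props = new Properties();
>>         props.setProperty("infinispan.client.hotrod.server_list",
>>                 "nodeX:11222;nodeY:11222;nodeZ:11222");
>>         // per-server cap on connections (checked out to threads + idle in the pool)
>>         props.setProperty("maxActive", "10");
>>         // cap on connections across all servers combined
>>         props.setProperty("maxTotal", "30");
>>         // maximum idle connections kept per server
>>         props.setProperty("maxIdle", "5");
>>         // when a server's pool is exhausted, block the caller (the default)
>>         props.setProperty("whenExhaustedAction", "1");
>>         // eviction thread: how often it runs, and how long a connection may sit
>>         // idle before it becomes eligible for eviction
>>         props.setProperty("timeBetweenEvictionRunsMillis", "120000");
>>         props.setProperty("minEvictableIdleTimeMillis", "300000");
>> 
>>         RemoteCacheManager rcm = new RemoteCacheManager(props);
>>         RemoteCache<String, String> cache = rcm.getCache();
>>         // each operation borrows a connection from the shared pool and returns it when done
>>         cache.put("k", "v");
>>         cache.get("k");
>>         rcm.stop();
>>     }
>> }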
>>> Richard
