[infinispan-dev] Remoting package refactor

Sanne Grinovero sanne at infinispan.org
Fri Nov 7 07:43:08 EST 2014


I think our priority should be to get rid of the need for threadpools
- not their configuration options.

If there is a real need for threadpools, then you have to provide full
configuration options, as you simply don't know how they're going to be
used or what kind of hardware they're going to run on.

Sounds like yet another reason to see if we should split the
configuration into two areas:
 - high level simple configuration (what you need to set to get started)
 - expert tuning (what you'll need when production time comes)

Also, some of the most recent users on the Hibernate forums are puzzled
about how to tune Infinispan when they're deploying several applications
using it in the same container. I'm educating them on FORK, but we
should be able to go beyond that: allow containers and platform
developers to share threadpools across CacheManagers. In such a case
you'd want Infinispan to use a service for threadpool management, and
allow people to inject a custom component for it.
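
To make that concrete, here's a rough sketch of the kind of service I
mean; ThreadPoolService and its implementation below are hypothetical,
not an existing Infinispan API:

// Hypothetical container-provided service handing out shared executors by
// name; a CacheManager would look pools up here instead of creating its own.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

interface ThreadPoolService {
   ExecutorService getPool(String name);
}

// Trivial implementation the container could share across all CacheManagers:
class SharedThreadPoolService implements ThreadPoolService {
   private final Map<String, ExecutorService> pools = new ConcurrentHashMap<>();

   @Override
   public ExecutorService getPool(String name) {
      return pools.computeIfAbsent(name, n -> Executors.newFixedThreadPool(8));
   }
}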

Sanne


On 7 November 2014 12:21, Bela Ban <bban at redhat.com> wrote:
> Hi Radim,
>
> no I haven't. However, you can replace the thread pools used by JGroups
> and use custom pools.
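>
> Roughly along these lines with the 3.x API (setter names from memory,
> so double-check against the TP javadoc for your version):
>
> import java.util.concurrent.Executors;
>
> import org.jgroups.JChannel;
> import org.jgroups.protocols.TP;
>
> public class CustomPools {
>    public static void main(String[] args) throws Exception {
>       JChannel ch = new JChannel("udp.xml");
>       TP transport = ch.getProtocolStack().getTransport();
>       // swap the regular and OOB pools for application-provided ones
>       transport.setDefaultThreadPool(Executors.newFixedThreadPool(8));
>       transport.setOOBThreadPool(Executors.newFixedThreadPool(8));
>       ch.connect("demo-cluster");
>    }
> }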
>
> I like another idea better: inject Byteman code at runtime that keeps
> track of this, and *other useful stats as well*.
>
> It would be very useful if we could ship a package to a customer that
> is injected into their running system, grabs all the vital stats we
> need for a few minutes, then removes itself again; those stats are then
> sent to us as a ZIP file.
> The good thing about Byteman is that it can remove itself without a
> trace, i.e. there's no overhead before / after running Byteman.
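>
> A Byteman rule would just call into a plain Java helper from its DO
> clause; a minimal sketch of such a stats-grabbing helper (class and
> method names are made up for illustration):
>
> import java.util.Map;
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.atomic.LongAdder;
>
> // A rule like "DO org.example.StatsHelper.record(\"rpc.dispatch\")" would
> // bump a counter here; dump() prints everything collected, and unloading
> // the rules afterwards leaves no trace in the running system.
> public class StatsHelper {
>    private static final Map<String, LongAdder> COUNTERS = new ConcurrentHashMap<>();
>
>    public static void record(String key) {
>       COUNTERS.computeIfAbsent(key, k -> new LongAdder()).increment();
>    }
>
>    public static void dump() {
>       COUNTERS.forEach((k, v) -> System.out.println(k + " = " + v.sum()));
>    }
> }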
>
>
> On 07/11/14 09:31, Radim Vansa wrote:
>> Btw., have you ever considered checking whether a thread returns to the
>> pool reasonably often? Some of the other data grids do this, though
>> there's not much you can do beyond printing out stack traces (but
>> you can at least report to management that some node seems to be broken).
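>>
>> For illustration, a very rough sketch of such a check (just wrapping
>> tasks and scanning periodically, nothing product-specific; names are
>> made up):
>>
>> import java.util.Map;
>> import java.util.concurrent.ConcurrentHashMap;
>> import java.util.concurrent.Executors;
>> import java.util.concurrent.ScheduledExecutorService;
>> import java.util.concurrent.TimeUnit;
>>
>> // Wraps submitted tasks so a periodic checker can spot worker threads that
>> // have not returned to the pool within a threshold and dump their stacks.
>> public class PoolWatchdog {
>>    private final Map<Thread, Long> busySince = new ConcurrentHashMap<>();
>>
>>    public Runnable wrap(Runnable task) {
>>       return () -> {
>>          busySince.put(Thread.currentThread(), System.nanoTime());
>>          try {
>>             task.run();
>>          } finally {
>>             busySince.remove(Thread.currentThread());
>>          }
>>       };
>>    }
>>
>>    public void startChecker(long maxBusyMillis) {
>>       ScheduledExecutorService checker = Executors.newSingleThreadScheduledExecutor();
>>       checker.scheduleAtFixedRate(() -> {
>>          long now = System.nanoTime();
>>          busySince.forEach((thread, since) -> {
>>             if ((now - since) / 1_000_000 > maxBusyMillis) {
>>                System.err.println(thread.getName() + " busy for too long:");
>>                for (StackTraceElement e : thread.getStackTrace())
>>                   System.err.println("   at " + e);
>>             }
>>          });
>>       }, maxBusyMillis, maxBusyMillis, TimeUnit.MILLISECONDS);
>>    }
>> }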
>>
>> Radim
>>
>> On 11/07/2014 08:35 AM, Bela Ban wrote:
>>> That's exactly what I suggested: with no config you get a shared global
>>> thread pool for all caches.
>>>
>>> Those caches which need a separate pool can get one via configuration
>>> (and of course also programmatically).
>>>
>>> On 06/11/14 20:31, Tristan Tarrant wrote:
>>>> My opinion is that we should aim for less configuration, i.e.
>>>> threadpools should mostly have sensible defaults and be shared by
>>>> default unless there are extremely good reasons for not doing so.
>>>>
>>>> Tristan
>>>>
>>>> On 06/11/14 19:40, Radim Vansa wrote:
>>>>> I second the opinion that any threadpools should be shared by default.
>>>>> There are users who have hundreds or thousands of caches, and having a
>>>>> separate threadpool for each of them could easily drain resources. And
>>>>> sharing resources is the purpose of threadpools, right?
>>>>>
>>>>> Radim
>>>>>
>>>>> On 11/06/2014 04:37 PM, Bela Ban wrote:
>>>>>> #1 I would by default have 1 thread pool shared by all caches
>>>>>> #2 This global thread pool should be configurable, perhaps in the
>>>>>> <global> section?
>>>>>> #3 Each cache by default uses the global thread pool
>>>>>> #4 A cache can define its own thread pool; it would then use this one
>>>>>> and not the global thread pool
>>>>>>
>>>>>> I think this gives you a mix of ease of use and flexibility in
>>>>>> configuring a pool per cache if needed.
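>>>>>>
>>>>>> Roughly, the lookup would be (names are just for illustration):
>>>>>>
>>>>>> import java.util.Map;
>>>>>> import java.util.concurrent.ConcurrentHashMap;
>>>>>> import java.util.concurrent.ExecutorService;
>>>>>> import java.util.concurrent.Executors;
>>>>>>
>>>>>> // #1/#3: one global pool shared by default; #4: a cache that defines
>>>>>> // its own pool uses that one instead.
>>>>>> public class PoolRegistry {
>>>>>>    private final ExecutorService globalPool = Executors.newFixedThreadPool(16);
>>>>>>    private final Map<String, ExecutorService> perCache = new ConcurrentHashMap<>();
>>>>>>
>>>>>>    public void definePoolFor(String cacheName, int maxThreads) {
>>>>>>       perCache.put(cacheName, Executors.newFixedThreadPool(maxThreads));
>>>>>>    }
>>>>>>
>>>>>>    public ExecutorService poolFor(String cacheName) {
>>>>>>       return perCache.getOrDefault(cacheName, globalPool);
>>>>>>    }
>>>>>> }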
>>>>>>
>>>>>> On 06/11/14 16:23, Pedro Ruivo wrote:
>>>>>>> On 11/06/2014 03:01 PM, Bela Ban wrote:
>>>>>>>> On 06/11/14 15:36, Pedro Ruivo wrote:
>>>>>>>>> * added a single-threaded remote executor service. This will handle the
>>>>>>>>> commands that must be delivered in FIFO order. Previously, they were
>>>>>>>>> handled by JGroups incoming threads; with the new executor service, each
>>>>>>>>> cache can process its own FIFO commands concurrently.
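>>>>>>>>>
>>>>>>>>> Roughly the idea (names made up): one single-threaded executor per cache,
>>>>>>>>> so order is kept within a cache while different caches run in parallel:
>>>>>>>>>
>>>>>>>>> import java.util.Map;
>>>>>>>>> import java.util.concurrent.ConcurrentHashMap;
>>>>>>>>> import java.util.concurrent.ExecutorService;
>>>>>>>>> import java.util.concurrent.Executors;
>>>>>>>>>
>>>>>>>>> public class PerCacheFifoDispatcher {
>>>>>>>>>    private final Map<String, ExecutorService> executors = new ConcurrentHashMap<>();
>>>>>>>>>
>>>>>>>>>    // commands for the same cache stay FIFO; other caches are not blocked
>>>>>>>>>    public void dispatch(String cacheName, Runnable command) {
>>>>>>>>>       executors.computeIfAbsent(cacheName,
>>>>>>>>>             n -> Executors.newSingleThreadExecutor()).execute(command);
>>>>>>>>>    }
>>>>>>>>> }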
>>>>>>>> +1000. This allows multiple updates from the same sender but to
>>>>>>>> different caches to be executed in parallel, and will speed things up.
>>>>>>>>
>>>>>>>> Do you intend to share a thread pool between the invocation handlers of
>>>>>>>> the various caches, or do they each have their own thread pool? Or is
>>>>>>>> this configurable?
>>>>>>>>
>>>>>>> That is a question that crossed my mind and I don't have any idea what
>>>>>>> would be best. So, for now, I will leave the thread pool shared between
>>>>>>> the handlers.
>>>>>>>
>>>>>>> I never thought of making it configurable, but maybe that is the best
>>>>>>> option. And maybe it should be possible to have a different max-threads
>>>>>>> value per cache. For example:
>>>>>>>
>>>>>>> * all caches using this remote executor will share the same instance
>>>>>>> <remote-executor name="shared" shared="true" max-threads="4" .../>
>>>>>>>
>>>>>>> * all caches using this remote executor will create their own thread
>>>>>>> pool with max-threads equal to 1
>>>>>>> <remote-executor name="low-throughput-cache" shared="false"
>>>>>>> max-threads="1" .../>
>>>>>>>
>>>>>>> * all caches using this remote executor will create their own thread
>>>>>>> pool with max-threads equal to 1000
>>>>>>> <remote-executor name="high-throughput-cache" shared="false"
>>>>>>> max-threads="1000" .../>
>>>>>>>
>>>>>>> is this what you have in mind? comments?
>>>>>>>
>>>>>>> Cheers,
>>>>>>> Pedro
>>
>>
>
> --
> Bela Ban, JGroups lead (http://www.jgroups.org)
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

