[infinispan-dev] Remoting package refactor

Dan Berindei dan.berindei at gmail.com
Sat Nov 8 05:12:35 EST 2014


I don't think we'll ever get to the point where we don't need *any* thread
pools in Infinispan :)

OTOH I also want to reduce the number of thread pools and thread pool
configurations, so I'd rather not add per-cache thread pools until we see a
clear need for them.

In particular, I don't think we need a single-thread pool per cache for
non-OOB commands; they can be executed on the global remote executor thread
pool just like total order messages. The same way we maintain FIFO order
per cache + key for total order commands, we can maintain FIFO order per
cache for non-OOB commands.
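
Roughly the shape I have in mind, as a sketch only (SerialExecutor is a
made-up name; there would be one instance per cache, all delegating to the
shared remote executor, so commands for one cache keep their FIFO order
while commands for different caches run in parallel):

import java.util.ArrayDeque;
import java.util.concurrent.Executor;

// Sketch: serializes the tasks of a single cache on top of the shared pool.
final class SerialExecutor implements Executor {
    private final ArrayDeque<Runnable> tasks = new ArrayDeque<>();
    private final Executor sharedPool; // the global remote executor
    private Runnable active;

    SerialExecutor(Executor sharedPool) {
        this.sharedPool = sharedPool;
    }

    @Override
    public synchronized void execute(Runnable command) {
        // Queue the command; when it finishes, hand the next queued command
        // for this cache back to the shared pool.
        tasks.offer(() -> {
            try {
                command.run();
            } finally {
                scheduleNext();
            }
        });
        if (active == null) {
            scheduleNext();
        }
    }

    private synchronized void scheduleNext() {
        active = tasks.poll();
        if (active != null) {
            sharedPool.execute(active);
        }
    }
}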

I'm not sure it will lead to better performance than executing the commands
directly on the JGroups threads: we gain parallel execution of commands for
different caches, but we lose parallel execution of commands from different
senders. But I guess it's worth trying.

Further comments inline...

On Fri, Nov 7, 2014 at 2:43 PM, Sanne Grinovero <sanne at infinispan.org>
wrote:

> I think our priority should be to get rid of the need for threadpools
> - not their configuration options.
>
> If there is a real need for threadpools, then you have to provide full
> configuration options as you simply don't know how it's going to be
> used nor on what kind of hardware it's going to be run.


> Sounds like yet another reason to see if we should split the
> configuration in two areas:
>  - high level simple configuration (what you need to set to get started)
>  - expert tuning (what you'll need when production time comes)
>

I hope you don't mean to say that most Infinispan users will give up before
production time comes, so they will never need to learn the expert tuning
configuration :)

I'd rather remove a configuration option than tell the users that they're
not smart enough to use it.


>
> Also, some of the most recent users on the Hibernate forums are puzzled
> about how to tune Infinispan when they're deploying several applications
> using it in the same container. I'm educating them on FORK, but we should
> be able to go beyond that: allow containers and platform developers to
> share threadpools across CacheManagers. In such a case you'd want
> Infinispan to use a service for threadpool management, and allow people
> to inject a custom component for it.
>

I'm not sure what kind of service you have in mind here. We allow the
injection of each executor in the programmatic configuration, so you can
already share thread pools between cache managers. The server XML
configuration also allows you to reuse a thread pool for all the
cache-containers.
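
To illustrate what I mean (just a sketch, and I'm writing the builder calls
from memory: treat asyncTransportExecutor()/factory() as assumptions and
check the exact API of your Infinispan version):

import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.executors.ExecutorFactory;
import org.infinispan.manager.DefaultCacheManager;

public class SharedPoolExample {
    // One pool, shared by every cache manager configured with this factory.
    static final ExecutorService SHARED = Executors.newFixedThreadPool(8);

    public static class SharedPoolFactory implements ExecutorFactory {
        @Override
        public ExecutorService getExecutor(Properties props) {
            return SHARED;
        }
    }

    public static void main(String[] args) throws Exception {
        GlobalConfigurationBuilder gcb1 = new GlobalConfigurationBuilder();
        // Assumed builder call: plug the shared pool into the transport executor.
        gcb1.asyncTransportExecutor().factory(new SharedPoolFactory());

        GlobalConfigurationBuilder gcb2 = new GlobalConfigurationBuilder();
        gcb2.asyncTransportExecutor().factory(new SharedPoolFactory());

        // Both cache managers now hand their async transport work to SHARED.
        DefaultCacheManager cm1 = new DefaultCacheManager(gcb1.build());
        DefaultCacheManager cm2 = new DefaultCacheManager(gcb2.build());
    }
}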



>
> Sanne
>
>
> On 7 November 2014 12:21, Bela Ban <bban at redhat.com> wrote:
> > Hi Radim,
> >
> > no I haven't. However, you can replace the thread pools used by JGroups
> > and use custom pools.
> >
> > I like another idea better: inject Byteman code at runtime that keeps
> > track of this, and *other useful stats as well*.
> >
> > It would be very useful for support if we could ship a package to a
> > customer that is injected into their running system, grabs all the
> > vital stats we need for a few minutes, then removes itself again; the
> > stats are then sent to us as a ZIP file.
> > The good thing about Byteman is that it can remove itself without a
> > trace; i.e. there's no overhead before / after running Byteman.
> >
> >
> > On 07/11/14 09:31, Radim Vansa wrote:
> >> Btw., have you ever considered checking whether a thread returns to
> >> the pool reasonably often? Some of the other datagrids use this, though
> >> there's not much you can do to react to that beyond printing out stack
> >> traces (but you can at least report to management that some node seems
> >> to be broken).
> >>
> >> Radim
> >>
> >> On 11/07/2014 08:35 AM, Bela Ban wrote:
> >>> That's exactly what I suggested. With no config, you get a shared
> >>> global thread pool for all caches.
> >>>
> >>> Those caches which need a separate pool can do that via configuration
> >>> (and of course also programmatically)
> >>>
> >>> On 06/11/14 20:31, Tristan Tarrant wrote:
> >>>> My opinion is that we should aim for less configuration, i.e.
> >>>> threadpools should mostly have sensible defaults and be shared by
> >>>> default unless there are extremely good reasons for not doing so.
> >>>>
> >>>> Tristan
> >>>>
> >>>> On 06/11/14 19:40, Radim Vansa wrote:
> >>>>> I second the opinion that any threadpools should be shared by
> >>>>> default. There are users who have hundreds or thousands of caches
> >>>>> and having a separate threadpool for each of them could easily drain
> >>>>> resources. And sharing resources is the purpose of threadpools,
> >>>>> right?
> >>>>>
> >>>>> Radim
> >>>>>
> >>>>> On 11/06/2014 04:37 PM, Bela Ban wrote:
> >>>>>> #1 I would by default have 1 thread pool shared by all caches
> >>>>>> #2 This global thread pool should be configurable, perhaps in the
> >>>>>> <global> section?
> >>>>>> #3 Each cache by default uses the global thread pool
> >>>>>> #4 A cache can define its own thread pool; then it would use this
> >>>>>> one and not the global thread pool
> >>>>>>
> >>>>>> I think this gives you a mixture between ease of use and
> >>>>>> flexibility in configuring a pool per cache if needed
> >>>>>>
> >>>>>> On 06/11/14 16:23, Pedro Ruivo wrote:
> >>>>>>> On 11/06/2014 03:01 PM, Bela Ban wrote:
> >>>>>>>> On 06/11/14 15:36, Pedro Ruivo wrote:
> >>>>>>>>> * added a single thread remote executor service. This will handle
> >>>>>>>>> the FIFO-delivered commands. Previously, they were handled by
> >>>>>>>>> JGroups incoming threads; with a new executor service, each cache
> >>>>>>>>> can process its own FIFO commands concurrently.
> >>>>>>>> +1000. This allows multiple updates from the same sender but to
> >>>>>>>> different caches to be executed in parallel, and will speed things
> >>>>>>>> up.
> >>>>>>>>
> >>>>>>>> Do you intend to share a thread pool between the invocation
> >>>>>>>> handlers of the various caches, or do they each have their own
> >>>>>>>> thread pool? Or is this configurable?
> >>>>>>>>
> >>>>>>> That is a question that crossed my mind and I don't have any idea
> >>>>>>> what would be best. So, for now, I will leave the thread pool
> >>>>>>> shared between the handlers.
> >>>>>>>
> >>>>>>> I never thought to make it configurable, but maybe that is the best
> >>>>>>> option. And maybe it should be possible to have a different
> >>>>>>> max-threads size per cache. For example:
> >>>>>>>
> >>>>>>> * all caches using this remote executor will share the same
> >>>>>>> instance
> >>>>>>> <remote-executor name="shared" shared="true" max-threads="4" .../>
> >>>>>>>
> >>>>>>> * all caches using this remote executor will create their own
> >>>>>>> thread pool with max-threads equal to 1
> >>>>>>> <remote-executor name="low-throughput-cache" shared="false"
> >>>>>>> max-threads="1" .../>
> >>>>>>>
> >>>>>>> * all caches using this remote executor will create their own
> >>>>>>> thread pool with max-threads equal to 1000
> >>>>>>> <remote-executor name="high-throughput-cache" shared="false"
> >>>>>>> max-threads="1000" .../>
> >>>>>>>
> >>>>>>> Is this what you have in mind? Comments?
> >>>>>>>
> >>>>>>> Cheers,
> >>>>>>> Pedro
> >>
> >>
> >
> > --
> > Bela Ban, JGroups lead (http://www.jgroups.org)