[infinispan-dev] Remoting package refactor

Dan Berindei dan.berindei at gmail.com
Thu Nov 13 07:59:08 EST 2014


Radim, I knew as well that the 1.7 ForkJoinPool isn't really optimized for
blocking tasks, but the ManagedBlocker interface mentioned in [3] seems to
be intended precisely for that.

Re: commonPool(), we can (and should) still create our own ForkJoinPool
instead of using the global one.
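
To make that concrete, a rough sketch (names made up, following the example
in the ForkJoinPool Javadoc): a blocking section running on our own pool
would be wrapped in a ManagedBlocker so the pool can compensate with a
spare worker while the thread is blocked.

   import java.util.concurrent.ForkJoinPool;
   import java.util.concurrent.locks.ReentrantLock;

   // Pattern from the ForkJoinPool Javadoc: tells the pool we are about
   // to block so it can activate a spare worker in the meantime.
   class LockBlocker implements ForkJoinPool.ManagedBlocker {
      private final ReentrantLock lock;
      private boolean acquired;

      LockBlocker(ReentrantLock lock) { this.lock = lock; }

      public boolean block() {        // may block; true = no more blocking needed
         if (!acquired)
            lock.lock();
         return true;
      }

      public boolean isReleasable() { // true if we can proceed without blocking
         return acquired || (acquired = lock.tryLock());
      }
   }

   // Usage from a task running on our own pool (e.g. new ForkJoinPool(n)):
   //    ForkJoinPool.managedBlock(new LockBlocker(lock));
   //    try { /* guarded work */ } finally { lock.unlock(); }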

Cheers
Dan


On Thu, Nov 13, 2014 at 9:57 AM, Radim Vansa <rvansa at redhat.com> wrote:

> F/J tasks should not acquire any locks (or, generally, block) during
> their execution. At least according to JavaDocs. Are we ready for that?
>
> Btw., I really don't like the fact that the commonPool() cannot be
> properly shut down. This leads to thread-local variables leaking when the
> component using the F/J pool is undeployed (the classloader cannot be
> GCed and you end up with an OOME in PermGen space).
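>
> (For illustration only, with made-up names: a dedicated pool can be torn
> down from the component's stop hook, so its workers and their
> thread-locals go away with the deployment.)
>
>    import java.util.concurrent.ForkJoinPool;
>    import java.util.concurrent.TimeUnit;
>
>    public class RemoteExecutorComponent {
>       // created at start() instead of relying on ForkJoinPool.commonPool()
>       private final ForkJoinPool pool =
>             new ForkJoinPool(Runtime.getRuntime().availableProcessors());
>
>       public void stop() throws InterruptedException {
>          pool.shutdown();                             // stop accepting tasks
>          pool.awaitTermination(10, TimeUnit.SECONDS); // let workers exit
>       }
>    }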
>
> Radim
>
> On 11/13/2014 08:28 AM, Galder Zamarreño wrote:
> > @Pedro, did you consider using a ForkJoinPool instead?
> >
> > Traditional JDK pools are known to be very hard to configure and get
> > “right”. Fork/join pools are being used as the default thread pools in
> > other libraries, vastly reducing the amount of configuration needed.
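> >
> > (A rough illustration of the difference in configuration surface; these
> > are made-up numbers, not our actual settings:)
> >
> >    import java.util.concurrent.*;
> >
> >    public class PoolConfigComparison {
> >       public static void main(String[] args) {
> >          // classic pool: half a dozen knobs to get "right"
> >          ExecutorService classic = new ThreadPoolExecutor(
> >                8, 32,                                      // core / max threads
> >                60, TimeUnit.SECONDS,                       // idle keep-alive
> >                new LinkedBlockingQueue<Runnable>(1000),    // bounded queue
> >                new ThreadPoolExecutor.CallerRunsPolicy()); // rejection policy
> >
> >          // fork/join pool: essentially one knob
> >          ForkJoinPool forkJoin = new ForkJoinPool(
> >                Runtime.getRuntime().availableProcessors());
> >
> >          classic.shutdown();
> >          forkJoin.shutdown();
> >       }
> >    }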
> >
> > Jessitron has published some interesting blog posts on the advantages of
> > a traditional ExecutorService vs. Fork/Join pools and vice versa; see [2]
> > and [3]. She also did a talk on it, see [4].
> >
> > Cheers,
> >
> > p.s. I’ve not studied your use case in depth to decide whether F/J would
> > suit it better, but it’s certainly worth a look now that we’re on Java 7.
> >
> > [1] https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ForkJoinPool.html
> > [2] http://blog.jessitron.com/2014/01/choosing-executorservice.html
> > [3] http://blog.jessitron.com/2014/02/scala-global-executioncontext-makes.html
> > [4] https://www.youtube.com/watch?v=yhguOt863nw
> >
> > On 07 Nov 2014, at 09:31, Radim Vansa <rvansa at redhat.com> wrote:
> >
> >> Btw., have you ever considered checking whether a thread returns to the
> >> pool reasonably often? Some of the other data grids do this, though
> >> there's not much you can do about it beyond printing out stack traces
> >> (but you can at least report to management that some node seems to be
> >> broken).
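> >>
> >> (For illustration only, nothing we have today: a crude version of such
> >> a check could wrap submitted tasks and have a monitor dump the stack of
> >> any thread that has not returned to the pool within a threshold.)
> >>
> >>    import java.util.Map;
> >>    import java.util.concurrent.ConcurrentHashMap;
> >>    import java.util.concurrent.Executors;
> >>    import java.util.concurrent.TimeUnit;
> >>
> >>    public class PoolWatchdog {
> >>       private final Map<Thread, Long> started = new ConcurrentHashMap<Thread, Long>();
> >>       private final long thresholdMillis;
> >>
> >>       public PoolWatchdog(long thresholdMillis) {
> >>          this.thresholdMillis = thresholdMillis;
> >>          Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(new Runnable() {
> >>             public void run() {
> >>                long now = System.currentTimeMillis();
> >>                for (Map.Entry<Thread, Long> e : started.entrySet()) {
> >>                   if (now - e.getValue() > PoolWatchdog.this.thresholdMillis) {
> >>                      // could also raise a JMX notification instead of just logging
> >>                      System.err.println("Thread stuck: " + e.getKey().getName());
> >>                      for (StackTraceElement ste : e.getKey().getStackTrace())
> >>                         System.err.println("   at " + ste);
> >>                   }
> >>                }
> >>             }
> >>          }, 1, 1, TimeUnit.SECONDS);
> >>       }
> >>
> >>       // wrap every task handed to the pool with this
> >>       public Runnable wrap(final Runnable task) {
> >>          return new Runnable() {
> >>             public void run() {
> >>                started.put(Thread.currentThread(), System.currentTimeMillis());
> >>                try {
> >>                   task.run();
> >>                } finally {
> >>                   started.remove(Thread.currentThread());
> >>                }
> >>             }
> >>          };
> >>       }
> >>    }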
> >>
> >> Radim
> >>
> >> On 11/07/2014 08:35 AM, Bela Ban wrote:
> >>> That's exactly what I suggested: no config gives you a shared global
> >>> thread pool for all caches.
> >>>
> >>> Those caches which need a separate pool can get one via configuration
> >>> (and of course also programmatically).
> >>>
> >>> On 06/11/14 20:31, Tristan Tarrant wrote:
> >>>> My opinion is that we should aim for less configuration, i.e.
> >>>> threadpools should mostly have sensible defaults and be shared by
> >>>> default unless there are extremely good reasons for not doing so.
> >>>>
> >>>> Tristan
> >>>>
> >>>> On 06/11/14 19:40, Radim Vansa wrote:
> >>>>> I second the opinion that any thread pools should be shared by
> >>>>> default. There are users who have hundreds or thousands of caches,
> >>>>> and having a separate thread pool for each of them could easily drain
> >>>>> resources. And sharing resources is the purpose of thread pools,
> >>>>> right?
> >>>>>
> >>>>> Radim
> >>>>>
> >>>>> On 11/06/2014 04:37 PM, Bela Ban wrote:
> >>>>>> #1 I would by default have one thread pool shared by all caches
> >>>>>> #2 This global thread pool should be configurable, perhaps in the
> >>>>>> <global> section?
> >>>>>> #3 Each cache by default uses the global thread pool
> >>>>>> #4 A cache can define its own thread pool, in which case it would
> >>>>>> use that one and not the global thread pool
> >>>>>>
> >>>>>> I think this gives you a mixture of ease of use and flexibility in
> >>>>>> configuring a pool per cache if needed
> >>>>>>
> >>>>>> On 06/11/14 16:23, Pedro Ruivo wrote:
> >>>>>>> On 11/06/2014 03:01 PM, Bela Ban wrote:
> >>>>>>>> On 06/11/14 15:36, Pedro Ruivo wrote:
> >>>>>>>>> * added a single-threaded remote executor service. This will
> >>>>>>>>> handle the FIFO-delivered commands. Previously, they were handled
> >>>>>>>>> by the JGroups incoming threads; with the new executor service,
> >>>>>>>>> each cache can process its own FIFO commands concurrently.
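> >>>>>>>>>
> >>>>>>>>> (Roughly the idea, as an illustrative sketch only, not the actual
> >>>>>>>>> code: one single-threaded executor per cache keeps that cache's
> >>>>>>>>> commands in FIFO order while different caches are processed in
> >>>>>>>>> parallel by different threads.)
> >>>>>>>>>
> >>>>>>>>>    import java.util.concurrent.ConcurrentHashMap;
> >>>>>>>>>    import java.util.concurrent.ExecutorService;
> >>>>>>>>>    import java.util.concurrent.Executors;
> >>>>>>>>>
> >>>>>>>>>    public class PerCacheFifoDispatcher {
> >>>>>>>>>       private final ConcurrentHashMap<String, ExecutorService> executors =
> >>>>>>>>>             new ConcurrentHashMap<String, ExecutorService>();
> >>>>>>>>>
> >>>>>>>>>       public void dispatch(String cacheName, Runnable command) {
> >>>>>>>>>          ExecutorService executor = executors.get(cacheName);
> >>>>>>>>>          if (executor == null) {
> >>>>>>>>>             ExecutorService created = Executors.newSingleThreadExecutor();
> >>>>>>>>>             executor = executors.putIfAbsent(cacheName, created);
> >>>>>>>>>             if (executor == null) {
> >>>>>>>>>                executor = created;
> >>>>>>>>>             } else {
> >>>>>>>>>                created.shutdown(); // lost the race, discard the extra pool
> >>>>>>>>>             }
> >>>>>>>>>          }
> >>>>>>>>>          // FIFO within a cache, parallel across caches
> >>>>>>>>>          executor.execute(command);
> >>>>>>>>>       }
> >>>>>>>>>    }
> >>>>>>>>>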
> >>>>>>>> +1000. This allows multiple updates from the same sender but to
> >>>>>>>> different caches to be executed in parallel, and will speed things
> >>>>>>>> up.
> >>>>>>>>
> >>>>>>>> Do you intend to share a thread pool between the invocation
> >>>>>>>> handlers of the various caches, or do they each have their own
> >>>>>>>> thread pool? Or is this configurable?
> >>>>>>>>
> >>>>>>> That is a question that crossed my mind, and I don't have any idea
> >>>>>>> what would be best. So, for now, I will leave the thread pool
> >>>>>>> shared between the handlers.
> >>>>>>>
> >>>>>>> I never thought of making it configurable, but maybe that is the
> >>>>>>> best option. And maybe it should be possible to have a different
> >>>>>>> max-threads size per cache. For example:
> >>>>>>>
> >>>>>>> * all caches using this remote executor will share the same
> >>>>>>> instance:
> >>>>>>> <remote-executor name="shared" shared="true" max-threads="4" .../>
> >>>>>>>
> >>>>>>> * each cache using this remote executor will create its own thread
> >>>>>>> pool with max-threads equal to 1:
> >>>>>>> <remote-executor name="low-throughput-cache" shared="false"
> >>>>>>> max-threads="1" .../>
> >>>>>>>
> >>>>>>> * each cache using this remote executor will create its own thread
> >>>>>>> pool with max-threads equal to 1000:
> >>>>>>> <remote-executor name="high-throughput-cache" shared="false"
> >>>>>>> max-threads="1000" .../>
> >>>>>>>
> >>>>>>> Is this what you have in mind? Comments?
> >>>>>>>
> >>>>>>> Cheers,
> >>>>>>> Pedro
> >>
> >> --
> >> Radim Vansa <rvansa at redhat.com>
> >> JBoss DataGrid QA
> >>
> >
> > --
> > Galder Zamarreño
> > galder at redhat.com
> > twitter.com/galderz
> >
> >
>
>
> --
> Radim Vansa <rvansa at redhat.com>
> JBoss DataGrid QA
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>