[wildfly-dev] Pooling EJB Session Beans per default
Andrig Miller
anmiller at redhat.com
Tue Aug 5 15:54:22 EDT 2014
----- Original Message -----
> From: "Bill Burke" <bburke at redhat.com>
> To: "Jason Greene" <jason.greene at redhat.com>
> Cc: "Andrig Miller" <anmiller at redhat.com>, wildfly-dev at lists.jboss.org
> Sent: Tuesday, August 5, 2014 9:49:29 AM
> Subject: Re: [wildfly-dev] Pooling EJB Session Beans per default
>
>
>
> On 8/5/2014 10:51 AM, Jason Greene wrote:
> >
> > On Aug 5, 2014, at 9:32 AM, Bill Burke <bburke at redhat.com> wrote:
> >
> >>
> >>
> >> On 8/5/2014 10:15 AM, Jason Greene wrote:
> >>>> Those silly benchmarks are indeed silly. Any workload that
> >>>> doesn't actually do anything with the requests is not very
> >>>> helpful. You can be really fast on them, and be really slow on
> >>>> a workload that actually does something. We have already seen
> >>>> this in one case recently.
> >>>
> >>> It’s all about precision, and limiting variables. I know it's fun
> >>> to criticize micro benchmarks (obviously micro is indeed
> >>> relative because a great deal of processing goes on even when
> >>> you have EJB and HTTP requests that don’t do much), however
> >>> micro benchmarks, provided they are measured correctly, are
> >>> quick to run and make it easy to tell what you have to fix.
> >>> Large gigantic apps on the other hand are difficult to analyze,
> >>> and do not necessarily represent activity that a differently
> >>> architected app would show. To use an example, if you have
> >>> pattern matching code, and you want it to be fast, it’s much
> >>> easier (and more accurate) to benchmark the pattern matching
> >>> code than it is to benchmark a full EE app that happens to use
> >>> pattern matching in a few places.
> >>>
> >>> Arguing against targeted benchmarks is like arguing that you
> >>> should never write a test case, because only real production
> >>> load will show problems… To some extent that's true, but that
> >>> doesn’t mean that the problems you catch in simulations won’t
> >>> affect production users.
> >>>
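Jason's pattern-matching example can be sketched concretely. The harness below is a hypothetical illustration of a targeted micro-benchmark (the pattern, class name, and workload are invented for this sketch; a real measurement would use a harness like JMH to handle JIT warm-up properly):

```java
import java.util.regex.Pattern;

// Minimal sketch of a targeted benchmark: time the pattern-matching
// code in isolation instead of a full EE app that happens to use it.
public class PatternBench {
    static final Pattern EMAIL = Pattern.compile("[\\w.]+@[\\w.]+");

    // Runs the matcher `iterations` times and returns elapsed nanoseconds.
    static long timeMatches(String input, int iterations) {
        long hits = 0;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            if (EMAIL.matcher(input).matches()) hits++;
        }
        long elapsed = System.nanoTime() - start;
        if (hits < 0) throw new AssertionError(); // consume hits so the loop is not dead code
        return elapsed;
    }

    public static void main(String[] args) {
        timeMatches("user@example.com", 10_000); // crude warm-up so the JIT compiles the hot path
        long nanos = timeMatches("user@example.com", 100_000);
        System.out.println("approx " + nanos / 100_000 + " ns/op");
    }
}
```

Such a benchmark is quick to run and points directly at the code to fix, which is the argument being made above.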
> >>
> >> I'm more worried about focus rather than the actual benchmarks
> >> considering how resource strapped every project seems to be. I'd
> >> like to see a discussion started on what are the most important
> >> and *ripe* areas for optimization and then have the performance
> >> team validate and make recommendations on any priority list that
> >> this discussion generates.
> >
> > The pooling concerns (aside from the occasional expensive post
> > constructs) actually came from a large app/benchmark that the perf
> > team was testing, and they were seeing an optimized strict max
> > pool outperform no pooling at all, and it wasn’t post construct.
> > Their theory is/was GC pressure, because this benchmark/app spends
> > a lot of time in GC, and they see higher GC activity with no
> > pooling. It’s possible it could be an indirect effect though: the
> > fact that strict max pool acts as a throttle might prevent the
> > system from degrading once it can no longer keep up with the
> > benchmark load.
> >
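The throttling theory above hinges on the pool capping concurrency as a side effect. A hypothetical sketch of a strict max pool (this is not the actual WildFly implementation; the class and method names are invented for illustration):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Sketch of a strict max pool: the semaphore caps how many bean
// instances are checked out at once, so the pool also acts as a
// throttle on concurrent requests, not just an instance cache.
class StrictMaxPool<T> {
    private final Semaphore permits;
    private final BlockingQueue<T> idle;
    private final Supplier<T> factory;

    StrictMaxPool(int maxSize, Supplier<T> factory) {
        this.permits = new Semaphore(maxSize);
        this.idle = new ArrayBlockingQueue<>(maxSize);
        this.factory = factory;
    }

    // Blocks up to the timeout when maxSize instances are already out.
    T borrow(long timeout, TimeUnit unit) throws InterruptedException {
        if (!permits.tryAcquire(timeout, unit)) {
            throw new IllegalStateException("pool exhausted");
        }
        T instance = idle.poll();                            // reuse an idle instance...
        return instance != null ? instance : factory.get();  // ...or create a new one
    }

    void release(T instance) {
        idle.offer(instance);
        permits.release();
    }
}
```

Under overload, callers park at the semaphore instead of piling up allocation and GC work, which could by itself explain the benchmark difference.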
>
> It's a horrible theory. :) How many EJB instances of a given type are
> created per request? Generally only one: one instance of one object
> of one type! My $5 bet is that if you went into EJB code and started
> counting how many object allocations were made per request, you'd
> lose count very quickly. Better yet, run a single remote EJB request
> through a perf tool and let it count the number of allocations for
> you. It will be greater than 1. :)
>
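Counting allocations the way Bill suggests does not even require an external perf tool: HotSpot exposes a per-thread allocation counter through `com.sun.management.ThreadMXBean` (present on OpenJDK/Oracle JDK; the workload below is an invented stand-in for a request path):

```java
import java.lang.management.ManagementFactory;

// Sketch: measure bytes allocated by a piece of work on the current
// thread, using HotSpot's per-thread allocation counter. Returns -1
// if the JVM does not support thread allocation measurement.
public class AllocCounter {
    static long allocatedBy(Runnable work) {
        com.sun.management.ThreadMXBean mx =
            (com.sun.management.ThreadMXBean) ManagementFactory.getThreadMXBean();
        long tid = Thread.currentThread().getId();
        long before = mx.getThreadAllocatedBytes(tid);
        work.run();
        long after = mx.getThreadAllocatedBytes(tid);
        return (before < 0 || after < 0) ? -1 : after - before;
    }

    public static void main(String[] args) {
        // Even a trivial "request" allocates far more than one object.
        long bytes = allocatedBy(() -> {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 100; i++) sb.append(i);
            sb.toString();
        });
        System.out.println("bytes allocated: " + bytes);
    }
}
```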
> Maybe the StrictMaxPool has an effect on performance because it
> creates a global synchronization bottleneck. Throughput is lower, and
> you end up with fewer concurrent per-request objects being allocated
> and GC'd.
>
The number per request, while relevant, is only part of the story. The
number of concurrent requests happening in the server dictates the
object allocation rate. Given enough concurrency, even a very small
number of object allocations per request can create an object
allocation rate that can no longer be sustained.
Andy
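The multiplication Andy describes can be made concrete with a back-of-envelope calculation. The figures below are purely illustrative, not measurements from the benchmark discussed in this thread:

```java
// Sketch of the allocation-rate arithmetic: a small per-request cost
// multiplied by high concurrency becomes a large sustained rate.
public class AllocRate {
    static long bytesPerSecond(long allocationsPerRequest,
                               long bytesPerAllocation,
                               long requestsPerSecond) {
        return allocationsPerRequest * bytesPerAllocation * requestsPerSecond;
    }

    public static void main(String[] args) {
        // e.g. 200 small allocations of ~64 bytes per request at 50k req/s
        long rate = bytesPerSecond(200, 64, 50_000);
        System.out.println(rate / (1024 * 1024) + " MiB/s"); // ~610 MiB/s
    }
}
```

At that illustrative rate the young generation fills hundreds of times per minute, which is where the observed GC pressure would come from.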
>
> > Another possibility to look into is that I see we do:
> > interceptorContext.setContextData(new HashMap<String,
> > Object>());
> >
> > *AND*
> >
> > private Map<Object, Object> instanceData = new HashMap<Object,
> > Object>();
> >
> >
> > Aside from the unnecessary overhead, since JDK 7, HashMap
> > construction uses the murmur hashing algorithm, which requires
> > random number generation under a global lock. There were plans
> > to optimize this, but the fix might not be in JDK 7 yet. In a
> > few places we use alternative map implementations to work around
> > the issue.
> >
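One common way to avoid the per-invocation HashMap cost Jason points out is to create the map lazily, so invocations that never touch context data pay nothing. A hypothetical sketch (not the actual WildFly code; the class and accessor names are invented):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: defer the HashMap allocation until an interceptor actually
// stores something, instead of eagerly allocating one per invocation.
class LazyContextData {
    private Map<Object, Object> instanceData; // null until first use

    Map<Object, Object> getInstanceData() {
        Map<Object, Object> m = instanceData;
        if (m == null) {
            m = new HashMap<>();
            instanceData = m;
        }
        return m;
    }

    boolean isAllocated() {
        return instanceData != null;
    }
}
```

For the common case where no interceptor reads or writes context data, this removes both the allocation and the hash-seed initialization from the hot path entirely.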
>
>
>
> IMO, it is more likely that most projects haven't gone through the
> level of refactoring that performance-focused projects like Undertow
> have gone through. I'll put another $5 bet down that only the WildFly
> family of projects under you and DML have gone through any real
> performance analysis and optimization. Just think of all the
> crappiness that is happening in the security layer alone!
>
> Also, any optimizations projects have done are probably focused on
> speed and throughput, which generally comes at the expense of memory.
>
> --
> Bill Burke
> JBoss, a division of Red Hat
> http://bill.burkecentral.com
>