On Aug 6, 2014, at 10:18 AM, John O'Hara <johara@redhat.com> wrote:
On 08/06/2014 03:47 PM, Andrig Miller wrote:
>
> ----- Original Message -----
>
>> From: "Bill Burke" <bburke(a)redhat.com>
>>
>> To: "Andrig Miller"
>> <anmiller(a)redhat.com>
>>
>> Cc:
>> wildfly-dev(a)lists.jboss.org, "Jason Greene"
<jason.greene(a)redhat.com>
>>
>> Sent: Tuesday, August 5, 2014 4:36:11 PM
>> Subject: Re: [wildfly-dev] Pooling EJB Session Beans per default
>>
>>
>>
>> On 8/5/2014 3:54 PM, Andrig Miller wrote:
>>
>>>> It's a horrible theory. :) How many EJB instances of a given type are
>>>> created per request? Generally only 1. One instance of one object of one
>>>> type! My $5 bet is that if you went into EJB code and started counting
>>>> how many object allocations were made per request, you'd lose count very
>>>> quickly. Better yet, run a single remote EJB request through a perf
>>>> tool and let it count the number of allocations for you. It will be
>>>> greater than 1. :)
>>>>
>>>> Maybe the StrictMaxPool has an effect on performance because it creates
>>>> a global synchronization bottleneck. Throughput is lower and you end up
>>>> having fewer concurrent per-request objects being allocated and GC'd.
>>>>
>>>>
>>> The number per request, while relevant, is only part of the story.
>>> The number of concurrent requests happening in the server
>>> dictates the object allocation rate. Given enough concurrency,
>>> even a very small number of object allocations per request can
>>> create an object allocation rate that can no longer be sustained.
>>>
>>>
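To make that scaling argument concrete, here is a rough back-of-the-envelope sketch in Java. The concurrency figure matches the benchmark quoted later in this thread; the response time and per-request allocation count are assumptions chosen only for illustration, not measurements:

    // Back-of-the-envelope allocation-rate estimate. The inputs are illustrative
    // assumptions, not benchmark measurements.
    public final class AllocationRateSketch {
        public static void main(String[] args) {
            int concurrentRequests = 28_600;   // concurrency level from the benchmark below
            double avgResponseSeconds = 0.5;   // assumed average response time
            int allocationsPerRequest = 20;    // assumed "very small" per-request count

            // Little's law: throughput ~= concurrency / response time
            double requestsPerSecond = concurrentRequests / avgResponseSeconds;
            double allocationsPerSecond = requestsPerSecond * allocationsPerRequest;

            // Prints roughly 1.1M allocations/s from "only" 20 allocations per request.
            System.out.printf("~%.1fM allocations/s%n", allocationsPerSecond / 1_000_000);
        }
    }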
>> I'm saying that the number of concurrent requests might not dictate
>> object allocation rate. There are probably a number of allocations that
>> happen after the EJB instance is obtained, e.g. interception chains,
>> contexts, etc. If StrictMaxPool blocks until a new instance is
>> available, then there would be fewer allocations per request as blocking
>> threads would be serialized.
>>
>> Whoever is investigating StrictMaxPool, or EJB pooling in general,
>> should stop. It's pointless.
>>
>>
> Ah, no, it's not pointless. We have a new non-blocking implementation of
> StrictMaxPool, and it's upstream in WildFly 9 and will be in EAP 6.4. It has
> helped us increase our throughput and reduce response times a lot!
>
> Andy
>
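For readers following along, a minimal sketch of what a non-blocking, strict-max style pool can look like; this is illustrative only, not the actual WildFly StrictMaxPool source, and the class and method names are invented:

    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.Semaphore;
    import java.util.function.Supplier;

    // Illustrative non-blocking bounded instance pool (not WildFly code).
    final class NonBlockingBoundedPool<T> {
        private final Queue<T> idle = new ConcurrentLinkedQueue<>();
        private final Semaphore permits;   // caps the total number of live instances
        private final Supplier<T> factory;

        NonBlockingBoundedPool(int strictMaxSize, Supplier<T> factory) {
            this.permits = new Semaphore(strictMaxSize);
            this.factory = factory;
        }

        // Returns an instance without ever blocking, or null if the strict max is reached.
        T acquire() {
            T instance = idle.poll();
            if (instance != null) {
                return instance;
            }
            // tryAcquire never parks the caller; a blocking pool would call
            // permits.acquire() here and serialize waiting threads instead.
            return permits.tryAcquire() ? factory.get() : null;
        }

        void release(T instance) {
            idle.offer(instance);   // the permit stays with the instance for its lifetime
        }
    }

The point of the sketch is the tryAcquire(): a caller that finds the pool exhausted gets an immediate answer rather than parking on a shared lock, which is where a blocking pool can turn into the kind of global synchronization bottleneck discussed above.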
Some contextual numbers around what Andy is describing. These are results
from one of our benchmarks.

Average response times (28,600 concurrent users):

                              Pooled    Non-pooled
  Remote EJB invocations      0.114s    2.094s
  WS invocations              0.105s    0.332s
  HTTP web app invocations
    HttpCallTypeA             0.090s    5.589s
    HttpCallTypeB             0.042s    2.510s
    HttpCallTypeC             0.116s    7.267s
I love data, thanks. :) Have you by chance taken object histogram samples for these two?
It would be useful to see how strong the correlation is, and whether a usage pattern shows
up in the benchmark that leads to the non-pooled implementation creating massive numbers
of objects (like the loop scenario I mentioned).
--
Jason T. Greene
WildFly Lead / JBoss EAP Platform Architect
JBoss, a division of Red Hat
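In case it helps anyone reproduce the kind of histogram sampling Jason is asking about: on a HotSpot JVM the simplest route is jcmd <pid> GC.class_histogram, and the same command can also be driven programmatically through the platform DiagnosticCommand MBean. A minimal sketch, assuming HotSpot (the class name below is made up for the example):

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    // Captures a class histogram of the current JVM, equivalent to running
    // `jcmd <pid> GC.class_histogram`. HotSpot-specific; not a WildFly API.
    public final class HistogramSample {
        public static void main(String[] args) throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            ObjectName diag = new ObjectName("com.sun.management:type=DiagnosticCommand");
            String histogram = (String) server.invoke(
                    diag,
                    "gcClassHistogram",                       // maps to GC.class_histogram
                    new Object[] { new String[0] },           // no extra jcmd arguments
                    new String[] { String[].class.getName() });
            System.out.println(histogram);
        }
    }

Sampling the histogram a few times under load for both the pooled and non-pooled runs and diffing the top entries should show whether a particular class dominates in the non-pooled case.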