On 06.08.2014 at 17:18, John O'Hara <> wrote:

On 08/06/2014 03:47 PM, Andrig Miller wrote:
----- Original Message -----
From: "Bill Burke" <>
To: "Andrig Miller" <>
Cc: "Jason Greene" <>
Sent: Tuesday, August 5, 2014 4:36:11 PM
Subject: Re: [wildfly-dev] Pooling EJB Session Beans per default

On 8/5/2014 3:54 PM, Andrig Miller wrote:
It's a horrible theory. :)  How many EJB instances of a given type are
created per request?  Generally only 1.  1 instance of one object
type!  My $5 bet is that if you went into EJB code and started counting
how many object allocations were made per request, you'd lose
quickly.   Better yet, run a single remote EJB request through a profiling
tool and let it count the number of allocations for you.  It will be
greater than 1.  :)

Maybe the StrictMaxPool has an effect on performance because it is
a global synchronization bottleneck.  Throughput is lower, so you
end up with fewer concurrent per-request objects being allocated.

The number per request, while relevant, is only part of the story.
 The number of concurrent requests happening in the server
dictates the object allocation rate.  Given enough concurrency,
even a very small number of object allocations per request can
create an object allocation rate that can no longer be sustained.

I'm saying that the number of concurrent requests might not dictate the
object allocation rate.  There are probably a number of allocations that
happen after the EJB instance is obtained, i.e. interception chains,
contexts, etc.   If StrictMaxPool blocks until a new instance is
available, then there would be fewer allocations per request, as
threads would be serialized.
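For illustration, the blocking behavior described here can be sketched as a bounded pool guarded by a blocking queue. This is a hypothetical simplification, not WildFly's actual StrictMaxPool code: once the strict maximum of instances is checked out, further threads wait in acquire(), serializing the rest of the request path.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Hypothetical sketch of a *blocking* bounded instance pool, in the
// spirit of the StrictMaxPool behavior discussed above (names are
// illustrative, not WildFly's real classes).
public class BlockingPool<T> {
    private final BlockingQueue<T> idle;

    public BlockingPool(int max, Supplier<T> factory) {
        idle = new ArrayBlockingQueue<>(max);
        for (int i = 0; i < max; i++) {
            idle.add(factory.get());   // pre-create up to the strict maximum
        }
    }

    // Blocks until an instance is free: with max instances checked out,
    // additional threads park here, so at most max requests run the
    // post-acquire allocation path concurrently.
    public T acquire() throws InterruptedException {
        return idle.take();
    }

    public void release(T t) {
        idle.add(t);   // hand the instance back; wakes one waiter
    }
}
```

The side effect Bill points at: the same queue that caps instances also caps concurrency, so per-request allocations downstream of acquire() are throttled too.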

Whoever is investigating StrictMaxPool, or EJB pooling in general, should
stop.  It's pointless.

Ah, no, it's not pointless.  We have a new non-blocking implementation of StrictMaxPool; it's upstream in WildFly 9 and will be in EAP 6.4.  It has helped us increase our throughput and reduce response times a lot!
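A non-blocking variant can be sketched like this. Again a hypothetical illustration, not the actual WildFly implementation: a lock-free idle queue plus an atomic counter for the cap, so acquire() never parks a thread on a shared lock, and an exhausted pool is reported to the caller instead of blocking.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

// Hypothetical sketch of a *non-blocking* bounded instance pool
// (illustrative names, not WildFly's real classes).
public class NonBlockingPool<T> {
    private final ConcurrentLinkedQueue<T> idle = new ConcurrentLinkedQueue<>();
    private final AtomicInteger created = new AtomicInteger();
    private final int max;
    private final Supplier<T> factory;

    public NonBlockingPool(int max, Supplier<T> factory) {
        this.max = max;
        this.factory = factory;
    }

    // Reuse an idle instance if one exists; otherwise create a new one
    // if we are under the cap. Returns null when the pool is exhausted,
    // so no thread ever blocks on a shared lock.
    public T acquire() {
        T t = idle.poll();
        if (t != null) return t;
        for (;;) {
            int n = created.get();
            if (n >= max) return null;                // exhausted; caller decides
            if (created.compareAndSet(n, n + 1)) {
                return factory.get();                 // create outside any lock
            }
        }
    }

    public void release(T t) {
        idle.offer(t);   // lock-free return to the idle queue
    }
}
```

Under contention the CAS loop retries instead of parking threads, which is where the throughput and response-time gains against a globally synchronized pool would come from.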

Some contextual numbers around what Andy is describing. These are results from one of our benchmarks:

Average response times (28,600 concurrent users):

Remote EJB invocations        0.114s   2.094s
WS invocations                0.105s   0.332s
HTTP web app invocations:
                              0.090s   5.589s
    HttpCallTypeB             0.042s   2.510s
    HttpCallTypeC             0.116s   7.267s

The only difference between these two sets of numbers is EJB pooling.

I guess these are just averages… do you have graphs of response time as a function of time during the load test?
And how much time did the JVM spend doing GC in each test?
Dynatrace has a nice integration with Jenkins for generating such reports.