On 06.08.2014 at 17:18, John O'Hara <johara@redhat.com> wrote:

On 08/06/2014 03:47 PM, Andrig Miller wrote:
----- Original Message -----
From: "Bill Burke" <bburke@redhat.com>
To: "Andrig Miller" <anmiller@redhat.com>
Cc: wildfly-dev@lists.jboss.org, "Jason Greene" <jason.greene@redhat.com>
Sent: Tuesday, August 5, 2014 4:36:11 PM
Subject: Re: [wildfly-dev] Pooling EJB Session Beans per default



On 8/5/2014 3:54 PM, Andrig Miller wrote:
It's a horrible theory. :)  How many EJB instances of a given type are created per request?  Generally only 1.  One instance of one object of one type!  My $5 bet is that if you went into EJB code and started counting how many object allocations were made per request, you'd lose count very quickly.  Better yet, run a single remote EJB request through a perf tool and let it count the number of allocations for you.  It will be greater than 1.  :)

Maybe the StrictMaxPool has an effect on performance because it creates a global synchronization bottleneck.  Throughput is less, and you end up having fewer concurrent per-request objects being allocated and GC'd.

The number per request, while relevant, is only part of the story.  The number of concurrent requests happening in the server dictates the object allocation rate.  Given enough concurrency, even a very small number of object allocations per request can create an object allocation rate that can no longer be sustained.
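The arithmetic behind that claim can be sketched quickly; the figures below are illustrative assumptions, not measurements from any benchmark:

```java
// Rough illustration of how concurrency multiplies per-request allocations.
// All numbers here are hypothetical.
public class AllocationRate {
    public static void main(String[] args) {
        long requestsPerSecond = 50_000;  // assumed aggregate server throughput
        long allocationsPerRequest = 20;  // even a modest per-request count
        long bytesPerAllocation = 64;     // assumed average small-object size

        long bytesPerSecond = requestsPerSecond * allocationsPerRequest * bytesPerAllocation;
        // 50,000 * 20 * 64 = 64,000,000 bytes/s, i.e. roughly 61 MB/s of garbage
        System.out.println("Allocation rate: " + (bytesPerSecond / (1024 * 1024)) + " MB/s");
    }
}
```

At those assumed rates the young generation fills continuously, so minor GC frequency, not per-request cost, becomes the limiting factor.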

I'm saying that the number of concurrent requests might not dictate object allocation rate.  There are probably a number of allocations that happen after the EJB instance is obtained, i.e. interception chains, contexts, etc.  If StrictMaxPool blocks until a new instance is available, then there would be fewer allocations per request, as blocking threads would be serialized.
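The blocking behavior being described can be sketched as follows; this is a simplified illustration of a strict-max pool built on a bounded queue, not WildFly's actual implementation:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Simplified sketch of a blocking strict-max pool (illustration only).
class BlockingStrictPool<T> {
    private final BlockingQueue<T> instances;

    BlockingStrictPool(int maxSize, Supplier<T> factory) {
        instances = new ArrayBlockingQueue<>(maxSize);
        for (int i = 0; i < maxSize; i++) {
            instances.add(factory.get());
        }
    }

    T acquire() throws InterruptedException {
        // Blocks when every instance is checked out.  Blocked callers are
        // serialized, which also throttles whatever per-request allocations
        // (interceptor chains, contexts) happen after acquisition.
        return instances.take();
    }

    void release(T instance) {
        instances.offer(instance);
    }
}
```

The global queue here is also the synchronization bottleneck mentioned earlier: every acquire and release contends on the same lock.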

Whoever is investigating StrictMaxPool, or EJB pooling in general, should stop.  It's pointless.

Ah, no, it's not pointless.  We have a new non-blocking implementation of StrictMaxPool; it's upstream in WildFly 9, and will be in EAP 6.4.  It has helped us increase our throughput and reduce response times a lot!
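For readers unfamiliar with the distinction, a non-blocking bounded pool can be sketched with a lock-free queue and an atomic counter.  This is a hypothetical illustration of the general technique, not WildFly's actual StrictMaxPool code:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

// Hypothetical sketch of a non-blocking bounded instance pool.
class NonBlockingPool<T> {
    private final ConcurrentLinkedQueue<T> idle = new ConcurrentLinkedQueue<>();
    private final AtomicInteger created = new AtomicInteger();
    private final int maxSize;
    private final Supplier<T> factory;

    NonBlockingPool(int maxSize, Supplier<T> factory) {
        this.maxSize = maxSize;
        this.factory = factory;
    }

    // Never blocks: returns null when the pool is exhausted, so the caller
    // can fail fast instead of queueing on a lock.
    T acquire() {
        T instance = idle.poll();
        if (instance != null) {
            return instance;
        }
        while (true) {
            int count = created.get();
            if (count >= maxSize) {
                return null;  // strict max reached
            }
            if (created.compareAndSet(count, count + 1)) {
                return factory.get();  // lazily create up to maxSize instances
            }
        }
    }

    void release(T instance) {
        idle.offer(instance);
    }
}
```

The point of the design is that no thread ever parks on a shared monitor: contended acquires retry a CAS loop instead of serializing behind a lock.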

Andy
Some contextual numbers around what Andy is describing.  These are results from one of our benchmarks:

Average response times (28,600 concurrent users):

                              Pooled    Non-pooled
    Remote EJB invocations    0.114s    2.094s
    WS invocations            0.105s    0.332s
    HTTP web app invocations:
      HttpCallTypeA           0.090s    5.589s
      HttpCallTypeB           0.042s    2.510s
      HttpCallTypeC           0.116s    7.267s

The only difference between these two sets of numbers is EJB pooling.

I guess these are just average numbers… do you have graphs of response time as a function of elapsed time during the load test?
And how much time did the JVM spend doing GC in both tests?
Dynatrace has a nice integration with Jenkins for generating such reports.