Fwd: Pooling EJB Session Beans per default
by Radoslaw Rodak
Begin forwarded message:
> From: Radoslaw Rodak <rodakr(a)gmx.ch>
> Subject: Re: [wildfly-dev] Pooling EJB Session Beans per default
> Date: 6 August 2014 19:06:01 CEST
> To: Andrig Miller <anmiller(a)redhat.com>
>
>
> On 06.08.2014 at 16:50, Andrig Miller <anmiller(a)redhat.com> wrote:
>
>>
>>
>> ----- Original Message -----
>>> From: "Radoslaw Rodak" <rodakr(a)gmx.ch>
>>> To: wildfly-dev(a)lists.jboss.org
>>> Sent: Tuesday, August 5, 2014 6:51:03 PM
>>> Subject: Re: [wildfly-dev] Pooling EJB Session Beans per default
>>>
>>>
>>> On 06.08.2014 at 00:36, Bill Burke <bburke(a)redhat.com> wrote:
>>>
>>>>
>>>>
>>>> On 8/5/2014 3:54 PM, Andrig Miller wrote:
>>>>>> It's a horrible theory. :) How many EJB instances of a given type
>>>>>> are created per request? Generally only 1: one instance of one
>>>>>> object of one type! My $5 bet is that if you went into the EJB code
>>>>>> and started counting how many object allocations were made per
>>>>>> request, you'd lose count very quickly. Better yet, run a single
>>>>>> remote EJB request through a perf tool and let it count the number
>>>>>> of allocations for you. It will be greater than 1. :)
>>>>>>
>>>>>> Maybe the StrictMaxPool has an effect on performance because it
>>>>>> creates a global synchronization bottleneck. Throughput is lower and
>>>>>> you end up having fewer concurrent per-request objects being
>>>>>> allocated and GC'd.
>>>>>>
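To make the synchronization argument concrete, here is a minimal sketch of a strict-max-style pool built on a plain semaphore. It is not WildFly's StrictMaxPool code; the class and method names are invented for illustration. Acquisition blocks once all instances are checked out, so at most `maxSize` requests reach the allocation-heavy part of an invocation at any one time.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Illustrative only: a strict-max-style instance pool, not WildFly's StrictMaxPool.
public class BoundedInstancePool<T> {
    private final Semaphore permits;              // caps concurrently checked-out instances
    private final Deque<T> idle = new ArrayDeque<>();
    private final Supplier<T> factory;

    public BoundedInstancePool(int maxSize, Supplier<T> factory) {
        this.permits = new Semaphore(maxSize);
        this.factory = factory;
    }

    public T acquire() throws InterruptedException {
        permits.acquire();                        // blocks when maxSize instances are in use
        synchronized (idle) {                     // the shared lock every request contends on
            T instance = idle.poll();
            return instance != null ? instance : factory.get();
        }
    }

    public void release(T instance) {
        synchronized (idle) {
            idle.push(instance);
        }
        permits.release();
    }
}
```

Every request funnels through the same semaphore and shared lock, which is the global bottleneck described above; a side effect is that blocked threads have not yet allocated their per-request objects, so the allocation rate the GC sees goes down.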
>>>>>
>>>>> The number per request, while relevant, is only part of the story.
>>>>> The number of concurrent requests happening in the server
>>>>> dictates the object allocation rate. Given enough concurrency,
>>>>> even a very small number of object allocations per request can
>>>>> create an object allocation rate that can no longer be sustained.
>>>>>
>>>>
>>>> I'm saying that the number of concurrent requests might not dictate
>>>> the object allocation rate. There are probably a number of
>>>> allocations that happen after the EJB instance is obtained, i.e.
>>>> interceptor chains, contexts, etc. If StrictMaxPool blocks until a
>>>> new instance is available, then there would be fewer allocations per
>>>> request, as blocking threads would be serialized.
>>>>
>>>
>>> Scenario 1)
>>> ------------------
>>> Let's say we have a pool of 100 stateless EJBs and a constant load of
>>> 50 requests per second, processed by 50 EJBs from the pool within one
>>> second.
>>> After 1000 seconds, how many new EJB instances will have been created
>>> with a pool? Answer: 0 new EJBs, worst case 100 EJBs in the pool… Of
>>> course overall object allocation is much higher, since one EJB call
>>> leads to many objects beyond the EJB itself, but… let's look at the
>>> situation without a pool.
>>>
>>> 50 requests/s * 1000 seconds = worst case 50,000 EJB instances on the
>>> Java heap, where 1 EJB might hold many objects… as long as garbage
>>> collection has not been triggered… which sounds to me like filling the
>>> JVM heap faster and having more frequent GCs, probably depending on
>>> the GC strategy.
>>>
>>> Scenario 2)
>>> ------------------
>>> Same as before, the load is still 50 requests per second, BUT the EJB
>>> method call takes 10 s.
>>> After 10 s we have 500 EJB instances in flight without a pool; every
>>> further second creates 50 more instances while the finished ones stay
>>> on the heap until GC reclaims them, so the allocated count keeps
>>> climbing… after some time very bad performance… full GC… and maybe
>>> OutOfMemory…
>>>
>>> So… the performance advantage could also turn into a disadvantage :-)
>>>
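For readers who want to play with the numbers, here is a small back-of-the-envelope sketch of the two scenarios. The request rate, call duration and pool size are simply the values assumed in the mail above, and the model only counts created instances; it ignores when GC actually reclaims anything.

```java
// Back-of-the-envelope model of the two scenarios above (illustrative only).
public class PoolScenarios {
    public static void main(String[] args) {
        int requestRate = 50;   // requests per second (assumed above)
        int poolSize = 100;     // stateless EJBs in the pool (assumed above)

        // Scenario 1: calls complete within a second.
        int seconds1 = 1000;
        long withoutPool1 = (long) requestRate * seconds1;  // one new instance per request
        System.out.printf("Scenario 1: pool allocates at most %d instances; "
                + "no pool allocates %d instances awaiting GC%n", poolSize, withoutPool1);

        // Scenario 2: each call takes 10 seconds, so instances pile up.
        int callDuration = 10;  // seconds per call (assumed above)
        for (int t = 10; t <= 30; t += 10) {
            long created = (long) requestRate * t;
            long inFlight = (long) requestRate * callDuration;  // steady-state concurrent calls
            System.out.printf("t=%ds: %d instances created, %d still in flight, "
                    + "%d finished but on the heap until GC runs%n",
                    t, created, inFlight, created - inFlight);
        }
    }
}
```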
>>>
>>>> Whoever is investigating StrictMaxPool, or EJB pooling in general,
>>>> should stop. It's pointless.
>>>
>>> Agreed, pools are outdated… but something like a WorkManager for min
>>> and max threads, or even better one that always keeps no fewer than X
>>> idle threads, would be useful :-)
>>>
>>> Radek
>>>
>>
>> The scenarios above are what is outdated. Fifty requests per second isn't any load at all! We have hundreds of thousands of clients that we have to scale to, and a lot more than 50 requests per second.
>>
>> Andy
>
> It’s not about the size of the load; it’s about the behavior with and without a pool under load over a period of time, with its side effects :-)
> The mere number of clients doesn’t matter so much… what matters is the number of concurrent requests from the clients per unit of time…
>
>
>>
>>>
>>>
>>>
>>> _______________________________________________
>>> wildfly-dev mailing list
>>> wildfly-dev(a)lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/wildfly-dev
>>>
>
AppClientXml - Post Code Split
by Darran Lofthouse
After forward porting some schema changes from EAP to WildFly, it has
become apparent that it is a little cumbersome to work with AppClientXml,
as its implementation depends on the schema definitions in wildfly-core
and yet it lives in wildfly.
The first point I realise is that the root element parsed by
AppClientXml is 'server'; however, it only accepts a subset of the
elements defined as supported by the 'server' element. Could it make
sense, for version 3 of the schema onward, to have a new root element
'client'?
Secondly, this could open up the option of having the client element
defined in a schema in wildfly that just references the types that are
in wildfly-core. wildfly would then contain the parsing code for
'client', and the parsing code for the referenced types would be in
wildfly-core, accessed through an agreed API that we maintain for
compatibility.
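As a rough illustration of that second idea, the sketch below shows what such a 'client' schema might look like. The namespaces, file name and type names are invented for the example; they are not an actual WildFly or wildfly-core schema.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical sketch: a wildfly-owned appclient schema that imports shared
     types from a wildfly-core schema instead of redefining them. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:core="urn:jboss:domain:core-types:3.0"
           targetNamespace="urn:jboss:domain:appclient:3.0"
           xmlns="urn:jboss:domain:appclient:3.0"
           elementFormDefault="qualified">

    <!-- Types maintained in wildfly-core; namespace and location are illustrative. -->
    <xs:import namespace="urn:jboss:domain:core-types:3.0"
               schemaLocation="wildfly-core-types_3_0.xsd"/>

    <!-- New root element proposed for version 3 onward. -->
    <xs:element name="client" type="clientType"/>

    <xs:complexType name="clientType">
        <xs:sequence>
            <!-- Only the subset of 'server' content the app client actually supports. -->
            <xs:element name="extensions" type="core:extensionsType" minOccurs="0"/>
            <xs:element name="paths" type="core:pathsType" minOccurs="0"/>
            <xs:element name="system-properties" type="core:propertiesType" minOccurs="0"/>
            <xs:element name="profile" type="profileType" minOccurs="0"/>
        </xs:sequence>
    </xs:complexType>

    <xs:complexType name="profileType">
        <xs:sequence>
            <xs:any namespace="##other" processContents="lax"
                    minOccurs="0" maxOccurs="unbounded"/>
        </xs:sequence>
    </xs:complexType>
</xs:schema>
```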
Regards,
Darran Lofthouse.
MDBs in JBoss EAP 6.x/Wildfly 8.x
by Hamed Hatami
Hi,
I use MDBs in my project under JBoss EAP 6.2/WildFly 8.1 and changed my
MOM from HornetQ to IBM WebSphere MQ with a resource adapter, but when
JBoss starts up the connection count grows until my MOM crashes and we
have to restart JBoss after 4 or 5 hours. What is happening?
Regards,
Hamed Hatami.
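For context on the kind of configuration involved here, this is a sketch of the EAP 6.x / WildFly 8.x ejb3 subsystem settings that point MDBs at a non-default resource adapter and cap concurrent MDB instances. The resource adapter name, pool size and timeout are placeholder values, the schema version in the namespace varies by release, and the connection pool and activation-spec settings on the resource adapter side matter just as much for the number of connections opened to the MOM.

```xml
<!-- Sketch of a standalone-full.xml fragment (values are placeholders). -->
<subsystem xmlns="urn:jboss:domain:ejb3:2.0">
    <mdb>
        <!-- Point MDBs at the IBM MQ resource adapter instead of hornetq-ra. -->
        <resource-adapter-ref resource-adapter-name="wmq.jmsra.rar"/>
        <bean-instance-pool-ref pool-name="mdb-strict-max-pool"/>
    </mdb>
    <pools>
        <bean-instance-pools>
            <!-- Caps concurrent MDB instances delivering messages. -->
            <strict-max-pool name="mdb-strict-max-pool" max-pool-size="16"
                             instance-acquisition-timeout="5"
                             instance-acquisition-timeout-unit="MINUTES"/>
        </bean-instance-pools>
    </pools>
</subsystem>
```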