deployment information
by Claudio Miranda
Hi, for any deployed jar, the deployment resource shows:
/deployment=mysql-connector-java-5.1.26-bin.jar:read-resource(include-runtime=true,include-aliases=true,include-defaults=true,recursive=true)
{
"outcome" => "success",
"result" => {
"content" => [{"hash" => bytes {
0x22, 0x53, 0xb6, 0xad, 0x12, 0x0d, 0x95, 0x46,
0xe4, 0x84, 0xe3, 0x3b, 0x54, 0x66, 0xb4, 0xdd,
0xa9, 0x02, 0xa8, 0xfd
}}],
"enabled" => true,
"name" => "mysql-connector-java-5.1.26-bin.jar",
"persistent" => true,
"runtime-name" => "mysql-connector-java-5.1.26-bin.jar",
"status" => "OK",
"subdeployment" => undefined,
"subsystem" => undefined
}
}
I would like to add more information: the timestamp of the deployment
(probably the timestamp of the content file on the filesystem), the
size, and the hash as stored in the data/content directory.
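If it helps to relate the pieces: the hash bytes in the output above are the SHA-1 digest of the deployment content, and as far as I can tell the same value is what names the file under data/content. A standalone sketch (class and method names are mine, not WildFly code) of computing the hash, size, and timestamp from a content file:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ContentHash {

    // Compute the SHA-1 digest of raw bytes and render it as lowercase hex,
    // the same format as the hash shown in the management model.
    public static String sha1Hex(byte[] data) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-1");
            StringBuilder hex = new StringBuilder();
            for (byte b : digest.digest(data)) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-1 is always available
        }
    }

    public static void main(String[] args) throws IOException {
        // Hash a deployment archive; timestamp and size come from the same file.
        Path content = Path.of(args[0]);
        System.out.println("hash: " + sha1Hex(Files.readAllBytes(content)));
        System.out.println("size: " + Files.size(content));
        System.out.println("last-modified: " + Files.getLastModifiedTime(content));
    }
}
```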
I tried looking into the wildfly-core projects (host-controller,
deployment-scanner, deployment-repository, wildfly-controller), but
was unable to find the code that outputs this information to jboss-cli.
I know it uses the code below to request deployment information, but
which project/class is invoked for the "deployment" command?
final ModelNode op =
    Util.getEmptyOperation(READ_CHILDREN_RESOURCES_OPERATION, new ModelNode());
op.get(CHILD_TYPE).set(DEPLOYMENT);
ModelNode response;
try {
    response = controllerClient.execute(op);
} catch (IOException e) {
    // ...
}
Kind regards
--
Claudio Miranda
claudio(a)claudius.com.br
http://www.claudius.com.br
10 years, 3 months
Domain Overview design
by Liz Clayton
Hi,
I'm sketching out some ideas for the Domain Overview screen. I'd like to find a visualization that makes it easier to scan the page to determine server availability, and possibly alerts.
Given that the domain could be large, the visualization needs to scale. I started by looking at heatmap visualizations, which worked pretty well, although I didn't feel they helped in describing the overall relationships of servers, server groups, and hosts. So I decided to break the heatmap into individual (stacked) heatmaps, ordered by server group. My hope is that this helps to define the groupings.
I posted the current design proposal at:
https://community.jboss.org/wiki/DomainOverview070114pdf
It would be great to get feedback on the designs. Some questions I have are:
- Is it difficult/easy to understand that the boxes, in the server groupings, are intended to represent servers?
- Should the servers be laid out in the visualization by level of availability/status (as illustrated), or by some other ordering (A-Z, Z-A...)?
- Is it difficult/easy to understand that when a box is a different color, that it is indicating its availability status?
- What do you expect to be the relationship between (Availability) Status and Alerts? Would “x” alerts equate to a change in availability status, or can they function independently? For example: Could you have an error on a server and it still be “available?”
Thanks,
Liz
Is JMX Needed in Core?
by Darran Lofthouse
Working with the split repo, I am questioning whether JMX is really
needed in core.
Whilst most distributions would include it, I am not convinced it is a
subsystem all must have.
Regards,
Darran Lofthouse.
Enabling Hibernate Search to all Hibernate/JPA users
by Sanne Grinovero
Currently users wishing for Search super-powers in their Hibernate
applications need to enable the module explicitly.
This seems to be rather annoying, and we have questions like this one:
- https://community.jboss.org/thread/243346
I see no reason not to enable it by default, following the same
activation rules as other Hibernate dependencies: it is very
conservative about not auto-enabling itself when not needed.
May I open such a feature request? Happy to try the patching myself.
Sanne
Pooling EJB Session Beans per default
by Ralph Soika
Hi,
I want to discuss the topic of Session Bean Pooling in WildFly. I know
that there was a discussion in the past to disable pooling of EJB
Session Beans per default in WildFly.
I understand the argument that pooling a session bean is not faster
than creating the bean from scratch each time a method is called. From
the perspective of an application server developer this is a clear and
easy decision. But from the view of an application developer this breaks
one of the main concepts of session beans: pooling.
As an application developer I assume my bean is pooled and that I can
use one of the life-cycle annotations to control it. This is a basic
concept for all kinds of beans. At first I thought it could be a
compromise to pool only those beans which have a life-cycle annotation,
but this isn't a solution.
Knowing that my bean will be pooled allows me, as a component developer,
to use the pool as a caching mechanism. For example, time-intensive
routines can cache results in an instance variable to be used the next
time a method is called. This isn't a bad practice, and it can increase
the performance of my component depending on the pool settings.
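The caching pattern I mean can be sketched in plain Java (class and names are illustrative; in a real deployment the class would be annotated @Stateless, and the container's guarantee that a pooled instance is never called concurrently is what makes the unsynchronized field safe):

```java
// In a real application this would be a @Stateless bean; shown as plain
// Java so the caching idea stands on its own (names are illustrative).
public class ReportBean {

    // Instance field reused across calls to the *same* pooled instance.
    // The EJB container never invokes one instance concurrently, so no locking.
    private int[] expensiveTable;

    public int lookup(int i) {
        if (expensiveTable == null) {
            expensiveTable = buildTable(); // paid once per pooled instance
        }
        return expensiveTable[i];
    }

    private int[] buildTable() {
        // Stand-in for a time-intensive computation.
        int[] t = new int[100];
        for (int i = 0; i < t.length; i++) {
            t[i] = i * i;
        }
        return t;
    }
}
```

Without pooling, each invocation may land on a fresh instance and the table is rebuilt every time; with a pool, the cost is amortized over the instance's lifetime.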
So my suggestion is to also pool stateless session EJBs in the future. I
guess the specification imposes no obligation to pool beans, so there is
nothing wrong with not pooling them. And again, I don't want to
criticize. But in the end, not pooling will decrease the performance of
WildFly: not the container itself, but the applications running in it.
It took me a long time to figure out why my application was a little
slower in WildFly than in GlassFish, until I recognized the missing
pooling. I can activate pooling and everything is fine. But I guess some
other application developers will only see that their application is
slower in WildFly than on other application servers.
And this will affect their decision. That is the argument for activating
the pool per default.
best regards
Ralph
--
*Imixs*...extends the way people work together
We are an open source company, read more at: www.imixs.org
<http://www.imixs.org>
------------------------------------------------------------------------
Imixs Software Solutions GmbH
Agnes-Pockels-Bogen 1, 80992 München
*Web:* www.imixs.com <http://www.imixs.com>
*Office:* +49 (0)89-452136 16 *Mobil:* +49-177-4128245
Registergericht: Amtsgericht Muenchen, HRB 136045
Geschaeftsfuehrer: Gaby Heinle u. Ralph Soika
Fwd: Pooling EJB Session Beans per default
by Radoslaw Rodak
Begin forwarded message:
> From: Radoslaw Rodak <rodakr(a)gmx.ch>
> Subject: Re: [wildfly-dev] Pooling EJB Session Beans per default
> Date: 6 August 2014 19:20:07 CEST
> To: Bill Burke <bburke(a)redhat.com>
>
>
> On 06.08.2014 at 17:30, Bill Burke <bburke(a)redhat.com> wrote:
>
>>
>>
>> On 8/6/2014 10:50 AM, Andrig Miller wrote:
>>>
>>>
>>> ----- Original Message -----
>>>> From: "Radoslaw Rodak" <rodakr(a)gmx.ch>
>>>> To: wildfly-dev(a)lists.jboss.org
>>>> Sent: Tuesday, August 5, 2014 6:51:03 PM
>>>> Subject: Re: [wildfly-dev] Pooling EJB Session Beans per default
>>>>
>>>>
>>>>> On 06.08.2014 at 00:36, Bill Burke <bburke(a)redhat.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On 8/5/2014 3:54 PM, Andrig Miller wrote:
>>>>>>> It's a horrible theory. :) How many EJB instances of a given type
>>>>>>> are
>>>>>>> created per request? Generally only 1. 1 instance of one object
>>>>>>> of
>>>>>>> one
>>>>>>> type! My $5 bet is that if you went into EJB code and started
>>>>>>> counting
>>>>>>> how many object allocations were made per request, you'd lose
>>>>>>> count
>>>>>>> very
>>>>>>> quickly. Better yet, run a single remote EJB request through a
>>>>>>> perf
>>>>>>> tool and let it count the number of allocations for you. It will
>>>>>>> be
>>>>>>> greater than 1. :)
>>>>>>>
>>>>>>> Maybe the StrictMaxPool has an effect on performance because it
>>>>>>> creates
>>>>>>> a global synchronization bottleneck. Throughput is less and you
>>>>>>> end
>>>>>>> up
>>>>>>> having less concurrent per-request objects being allocated and
>>>>>>> GC'd.
>>>>>>>
>>>>>>
>>>>>> The number per request, while relevant is only part of the story.
>>>>>> The number of concurrent requests happening in the server
>>>>>> dictates the object allocation rate. Given enough concurrency,
>>>>>> even a very small number of object allocations per request can
>>>>>> create an object allocation rate that can no longer be sustained.
>>>>>>
>>>>>
>>>>> I'm saying that the number of concurrent requests might not dictate
>>>>> object allocation rate. There are probably a number of allocations
>>>>> that
>>>>> happen after the EJB instance is obtained. i.e. interception
>>>>> chains,
>>>>> contexts, etc. If StrictMaxPool blocks until a new instance is
>>>>> available, then there would be less allocations per request as
>>>>> blocking
>>>>> threads would be serialized.
>>>>>
>>>>
>>>> Scenario 1)
>>>> ------------------
>>>> Let's say we have a pool of 100 stateless EJBs and a constant load of
>>>> 50 requests per second, processed each second by 50 EJBs from the
>>>> pool.
>>>> After 1000 seconds, how many new EJB instances will have been created
>>>> with a pool? Answer: 0 new EJBs, worst case 100 EJBs in the pool. Of
>>>> course overall object allocation is much higher, as 1 EJB call leads
>>>> to many objects, but let's look at the situation without a pool.
>>>>
>>>> 50 requests/s * 1000 seconds = worst case 50'000 EJB instances on the
>>>> Java heap, where 1 EJB might have many objects, as long as garbage
>>>> collection was not triggered. That sounds to me like filling the JVM
>>>> heap faster and triggering GC more often, depending on the GC
>>>> strategy.
>>>>
>>>> Scenario 2)
>>>> ------------------
>>>> Same as before: the load is still 50 requests per second, BUT an EJB
>>>> method call takes 10s.
>>>> After 10s we have 500 EJB instances without a pool; after 11s, 550
>>>> allocated and only 50 finished; after 12s, 600; and so on. After some
>>>> time: very bad performance, full GC, and maybe OutOfMemory.
>>>>
>>>> So the performance advantage could also turn into a disadvantage :-)
>>>>
>>>>
>>>>> Whoever is investigating StrictMaxPool, or EJB pooling in general
>>>>> should
>>>>> stop. It's pointless.
>>>>
>>>> Agreed, pools are outdated… but something like a WorkManager for
>>>> min/max threads, or even better, always keeping no fewer than X idle
>>>> threads, would be useful :-)
>>>>
>>>> Radek
>>>>
>>>
>>> The scenarios above are what is outdated. Fifty requests per second isn't any load at all! We have hundreds of thousands of clients that we have to scale to, and lots more than 50 requests per second.
>>>
>> What you mean to say is that you need to scale to hundreds of thousands
>> of clients on meaningless no-op benchmarks. :) I do know that the old
>> SpecJ Java EE benchmarks artificially made EJB pooling important, as
>> process-intensive calculation results were cached in these instances.
>> But real-world apps don't use this feature/anti-pattern.
>>
>> Also, however crappy it was, I did implement an EJB container at one time
>> in my career. :) I know for a fact that there are a number of
>> per-request internal support objects that need to be allocated. Let's
>> count:
>>
>> * The argument array (for reflection)
>> * Each argument of the method call
>> * The response object
>> * Interceptor context object
>> * The interceptor context attribute map
>> * EJBContext
>> * Subject, Principal, role mappings
>> * Transaction context
>> * The message object(s) specific to the remote EJB protocol
>>
>> Starts to add up huh? I'm probably missing a bunch more. Throw in
>> interaction with JPA and you end up with even more per-request objects
>> being allocated. You still believe pooling one EJB instance matters?
>>
>> --
>> Bill Burke
>> JBoss, a division of Red Hat
>> http://bill.burkecentral.com
>> _______________________________________________
>> wildfly-dev mailing list
>> wildfly-dev(a)lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/wildfly-dev
>
> Totally agree!
> The point is, a pool provides not only pooling but also a throttling function.
> If you remove the pool you still need throttling; with throttling you might have better performance over time than without it, where you can run out of resources...
>
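The throttling point above can be sketched with a plain java.util.concurrent.Semaphore (an illustration of the idea, not WildFly's actual implementation; all names are mine):

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Caps concurrent invocations the way a strict-max pool does, but without
// recycling bean instances: excess callers block until a permit frees up.
public class InvocationThrottle {

    private final Semaphore permits;

    public InvocationThrottle(int maxConcurrent) {
        this.permits = new Semaphore(maxConcurrent);
    }

    public <T> T invoke(Supplier<T> call) {
        permits.acquireUninterruptibly(); // blocks once maxConcurrent calls are in flight
        try {
            return call.get();
        } finally {
            permits.release();
        }
    }
}
```

This keeps the resource-protection role of the pool even if instance reuse is dropped.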
Fwd: Pooling EJB Session Beans per default
by Radoslaw Rodak
Begin forwarded message:
> From: Radoslaw Rodak <rodakr(a)gmx.ch>
> Subject: Re: [wildfly-dev] Pooling EJB Session Beans per default
> Date: 6 August 2014 19:14:18 CEST
> To: Jason Greene <jason.greene(a)redhat.com>
>
>
> On 06.08.2014 at 17:13, Jason Greene <jason.greene(a)redhat.com> wrote:
>
>>
>> On Aug 5, 2014, at 7:51 PM, Radoslaw Rodak <rodakr(a)gmx.ch> wrote:
>>
>>>
>>> On 06.08.2014 at 00:36, Bill Burke <bburke(a)redhat.com> wrote:
>>>
>>>>
>>>>
>>>> On 8/5/2014 3:54 PM, Andrig Miller wrote:
>>>>>> It's a horrible theory. :) How many EJB instances of a given type are
>>>>>> created per request? Generally only 1. 1 instance of one object of
>>>>>> one
>>>>>> type! My $5 bet is that if you went into EJB code and started
>>>>>> counting
>>>>>> how many object allocations were made per request, you'd lose count
>>>>>> very
>>>>>> quickly. Better yet, run a single remote EJB request through a perf
>>>>>> tool and let it count the number of allocations for you. It will be
>>>>>> greater than 1. :)
>>>>>>
>>>>>> Maybe the StrictMaxPool has an effect on performance because it
>>>>>> creates
>>>>>> a global synchronization bottleneck. Throughput is less and you end
>>>>>> up
>>>>>> having less concurrent per-request objects being allocated and GC'd.
>>>>>>
>>>>>
>>>>> The number per request, while relevant is only part of the story. The number of concurrent requests happening in the server dictates the object allocation rate. Given enough concurrency, even a very small number of object allocations per request can create an object allocation rate that can no longer be sustained.
>>>>>
>>>>
>>>> I'm saying that the number of concurrent requests might not dictate
>>>> object allocation rate. There are probably a number of allocations that
>>>> happen after the EJB instance is obtained. i.e. interception chains,
>>>> contexts, etc. If StrictMaxPool blocks until a new instance is
>>>> available, then there would be less allocations per request as blocking
>>>> threads would be serialized.
>>>>
>>>
>>> Scenario 1)
>>> ------------------
>>> Let's say we have a pool of 100 stateless EJBs and a constant load of 50 requests per second, processed each second by 50 EJBs from the pool.
>>> After 1000 seconds, how many new EJB instances will have been created with a pool? Answer: 0 new EJBs, worst case 100 EJBs in the pool. Of course overall object allocation is much higher, as 1 EJB call leads to many objects, but let's look at the situation without a pool.
>>>
>>> 50 requests/s * 1000 seconds = worst case 50'000 EJB instances on the Java heap, where 1 EJB might have many objects, as long as garbage collection was not triggered. That sounds to me like filling the JVM heap faster and triggering GC more often, depending on the GC strategy.
>>
>> If you think about a single Java EE request invocation that processes data with one EJB in the call, there are typically hundreds of temporary objects created (perhaps even thousands when you are pulling back many rows of data from JPA). Aside from the container API requirements (the container has to create a string for every HTTP header name and value, which can easily be 20+ objects), just writing plain Java code that does things like substring creates temporary objects. Now, I don't have an exact object instance count for SLSB creation, but glancing at the code it looks like ~6 objects. So we are talking about a very small percentage of the object space, probably around 1-2%.
>>
>> On the other hand, the percentage could be high if you have an EJB method that doesn't do much (e.g. just returns a constant) and you call it in a big loop as part of a request. Then you could get 6 * N object churn, which could very well end up being a high percentage (for a large enough value of N).
>>
>
> This is exactly the point I was trying to make!
>
>
>
>> --
>> Jason T. Greene
>> WildFly Lead / JBoss EAP Platform Architect
>> JBoss, a division of Red Hat
>>
>