Embedding a WF instance in the CLI
by Brian Stansberry
Moving a thread to the dev list.
This is about some prototyping I've been doing on weekends 'cause I'm
bored with my regular tasks. I've been playing with direct local
administration of a WF installation via the CLI without requiring a
socket-based connection. The general use case is initial setup-type
activities where the user doesn't want to have to launch a WF server or
HC process and potentially have it be visible on the network.
https://issues.jboss.org/browse/WFLY-3288 is one use case; another is a
desire some folks have expressed to be able to do configuration without
first having to edit any XML to avoid port conflicts on 9990 or 9999.
This isn't a major initiative or big priority or anything at this point.
Just something I find interesting and perhaps you will too.
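To make the idea concrete, here is a rough sketch of the wiring an embed-server style command implies, based on the org.jboss.as.embedded API discussed further down in this thread. The factory signature and the client accessor shown here are assumptions, not the prototype's actual code:

import org.jboss.as.controller.client.ModelControllerClient;
import org.jboss.as.embedded.EmbeddedServerFactory;
import org.jboss.as.embedded.StandaloneServer;
import org.jboss.dmr.ModelNode;

// Sketch only: start an embedded standalone server and talk to it through an
// in-VM ModelControllerClient instead of a socket-based connection.
public class EmbedServerSketch {
    public static void main(String[] args) throws Exception {
        String jbossHome = System.getProperty("jboss.home.dir");
        // Factory signature is an assumption; module path handling is one of
        // the open questions discussed below.
        StandaloneServer server = EmbeddedServerFactory.create(jbossHome, null);
        server.start();
        try {
            ModelControllerClient client = server.getModelControllerClient(); // assumed accessor
            ModelNode op = new ModelNode();
            op.get("operation").set("read-resource");
            op.get("address").setEmptyList();
            System.out.println(client.execute(op));
        } finally {
            server.stop();
        }
    }
}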
On 5/14/14, 8:54 AM, Alexey Loubyansky wrote:
> Neat :) Yes, figuring out the module path is biting everywhere.
> For file system path command line arguments there is a specialized
> FileSystemPathArgument.
>
Thanks; I'll switch to that.
>
> On 05/13/2014 10:54 PM, Brian Stansberry wrote:
>> Copying Heiko Braun as he expressed some interest in the topic.
>>
>> BTW, I played with this a bit more last weekend and was able to start an
>> embedded server inside the CLI easily enough. See [1] for very raw
>> prototype stuff. You can run bin/jboss-cli.sh (no -c) and then
>>
>> [disconnected/] embed-server
>>
>> There are a couple issues I see, besides the HC stuff I mentioned in my
>> last message.
>>
>> 1) If the CLI is started in a non-modular environment via java -jar
>> bin/client/jboss-cli-client.jar, we'd have to shade jboss-modules into
>> the jar. And then the embed-server command would need params specifying
>> the location of JBOSS_HOME, possibly module path etc. But it could embed
>> a server installed in any accessible filesystem location.
>>
>> But what I did at [1] is based on bin/jboss-cli.sh, where the CLI is
>> running from a WF dist in a modular environment and the embedded server
>> modules are coming from the CLI's own module path. It would be more
>> effort to support embedding a server based on some other module path.
>> Maybe it's no big deal; maybe it's really hard. :)
>>
>> 2) The console logging from the embedded server goes to stdout mixed in
>> with the CLI output. Maybe that's good, maybe it's bad.
>>
>> [1] https://github.com/bstansberry/wildfly/tree/cli-embed
>>
>> On 4/28/14, 10:04 AM, Brian Stansberry wrote:
>>> I was poking around at this for an hour or so over the weekend.
>>>
>>> The standalone case seems pretty straightforward. Seems the existing
>>> embedded server API could work readily enough. The
>>> org.jboss.as.embedded.StandaloneServer interface already provides a
>>> ModelControllerClient.
>>>
>>> The domain case is much harder, as the CLI wants a HostController, not a
>>> ProcessController. I'd really like this to use an in-VM client, not a
>>> remote one, so I don't like having the CLI embed a PC with the HC as
>>> an external process. My thoughts of the morning are to allow inverting
>>> the HC/PC relationship for this kind of usage. That is, remove
>>> controlling the HC lifecycle from the charge of the PC component. CLI
>>> launches HC, and then the HC creates an in-process PC-ish component (not
>>> a separate process) to manage the server lifecycles. There could be all
>>> sorts of problems with that; it's just the thought for the morning.
>>>
>>> On 4/25/14, 11:49 AM, Alexey Loubyansky wrote:
>>>> Embedding the AS is the best starting point to achieve that! And more
>>>> fun, I agree :)
>>>>
>>>> On 04/25/2014 06:28 PM, Darran Lofthouse wrote:
>>>>> And to think my reason for opening the Jira was just for a common
>>>>> way to
>>>>> mask password inputs where java.io.Console is not available ;-)
>>>>>
>>>>> On 25/04/14 17:09, Brian Stansberry wrote:
>>>>>> On 4/25/14, 10:40 AM, Alexey Loubyansky wrote:
>>>>>>> Wow! Indeed :)
>>>>>>>
>>>>>>> There could be an embedded scope - true, i.e. commands available
>>>>>>> only in this mode, like add-user, module mgmt related stuff, etc.
>>>>>>
>>>>>> Those commands wouldn't need to be only in that mode though. The
>>>>>> implementation of all of them would be based in the server; the
>>>>>> "client"
>>>>>> aspect of the CLI would just use the management interface. The
>>>>>> difference between an embedded mode and what we have now would
>>>>>> just be
>>>>>> in how the "client" side gets its ModelControllerClient -- what we
>>>>>> have
>>>>>> now vs starting an embedded server and getting some sort of in-vm
>>>>>> client.
>>>>>>
>>>>>>> But it would still mean the server/controller would have to actually
>>>>>>> provide implementations of that functionality and expose it to the
>>>>>>> management tools like the CLI in the embedded mode.
>>>>>>
>>>>>> Yep.
>>>>>>
>>>>>>> I like this idea as a concept - direct local management. W/o any
>>>>>>> remote
>>>>>>> connect/re-connect/disconnect burden.
>>>>>>>
>>>>>>> Extending the CLI with custom modules is on the list too. It's
>>>>>>> probably
>>>>>>> easier to implement at this point.
>>>>>>>
>>>>>>
>>>>>> Likely so, but maybe less fun. ;) I copied you on a PRD-related
>>>>>> thread
>>>>>> where I briefly get into this general area too.
>>>>>>
>>>>>>> Alexey
>>>>>>>
>>>>>>> On 04/25/2014 05:00 PM, Brian Stansberry wrote:
>>>>>>>> Hi Alexey,
>>>>>>>>
>>>>>>>> Wanted to point the discussion on this JIRA out to you as it gets
>>>>>>>> into
>>>>>>>> some fairly fundamental brainstorming that you may find
>>>>>>>> interesting.
>>>>>>>>
>>>>>>>>
>>>>>>>> -------- Original Message --------
>>>>>>>> Subject: [JBoss JIRA] (WFLY-3288) Update add-user to use AESH or
>>>>>>>> move it
>>>>>>>> into the CLI
>>>>>>>> Date: Fri, 25 Apr 2014 09:44:35 -0400 (EDT)
>>>>>>>> From: Darran Lofthouse (JIRA) <issues(a)jboss.org>
>>>>>>>> To: brian.stansberry(a)redhat.com
>>>>>>>>
>>>>>>>>
>>>>>>>> [ https://issues.jboss.org/browse/WFLY-3288?page=com.atlassian.jira.plugin.... ]
>>>>>>>>
>>>>>>>> Darran Lofthouse commented on WFLY-3288:
>>>>>>>> ----------------------------------------
>>>>>>>>
>>>>>>>> That could be very interesting. I won't go into too much detail in
>>>>>>>> this Jira as it is not directly related. Shortly I am switching to
>>>>>>>> the SSL related tasks we have outstanding, including the out of the
>>>>>>>> box enablement we talked about in Brno - managing an embedded
>>>>>>>> instance could be useful there as well to get it all op based.
>>>>>>>>
>>>>>>>> I can see this task may end up coming back my way combined with the
>>>>>>>> other stuff ;-)
>>>>>>>>
>>>>>>>>> Update add-user to use AESH or move it into the CLI
>>>>>>>>> ---------------------------------------------------
>>>>>>>>>
>>>>>>>>> Key: WFLY-3288
>>>>>>>>> URL: https://issues.jboss.org/browse/WFLY-3288
>>>>>>>>> Project: WildFly
>>>>>>>>> Issue Type: Feature Request
>>>>>>>>> Security Level: Public(Everyone can see)
>>>>>>>>> Components: Domain Management, Scripts
>>>>>>>>> Reporter: Darran Lofthouse
>>>>>>>>> Fix For: Awaiting Volunteers
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Within the add-user utility it is difficult to handle situations
>>>>>>>>> where we do not have access to a java.io.Console, which is the
>>>>>>>>> easiest way to handle password reading without an echo to the user,
>>>>>>>>> e.g. in Cygwin. Switching to AESH would allow us to use the
>>>>>>>>> implementation there to handle this.
>>>>>>>>> Alternatively it may actually make sense to make add-user a special
>>>>>>>>> mode of the CLI; we may at some point want to switch to runtime
>>>>>>>>> operations being executed on the server, so porting to the CLI could
>>>>>>>>> be the first step to make this possible.
>>>>>>>>> Overall this is going to require further discussion, so the comments
>>>>>>>>> here are just a starting point.
>>>>>>>>
>>>>>>>> --
>>>>>>>> This message is automatically generated by JIRA.
>>>>>>>> If you think it was sent incorrectly, please contact your JIRA
>>>>>>>> administrators
>>>>>>>> For more information on JIRA, see:
>>>>>>>> http://www.atlassian.com/software/jira
--
Brian Stansberry
Senior Principal Software Engineer
JBoss by Red Hat
Proposal to add notifications to WildFly management model and API
by Jeff Mesnil
# Add Notification support to WildFly Management
Tracked by https://issues.jboss.org/browse/WFLY-266
Use Cases
---------
Notifications are a useful mechanism to observe management changes on WildFly servers.
They allow an administrator to be informed of changes outside of their own actions (e.g. a server has been killed, a new application has been deployed, etc.)
Currently WildFly lacks notifications, and users that were depending on JMX notifications in previous versions have no similar feature to use.
The most expected use cases for WildFly notifications are:
- enhance UX for Web console. Using notifications, the Web console could notify the users of changes outside its own actions.
- replacement for JMX notifications. Users that were listening for JMX notifications to observe management changes would have a similar feature using WildFly's own notifications
- integration with JMX. Notifications emitted by WildFly could be converted and made available using JMX notifications (including notifications for mbean registered/unregistered)
Part 1: Notification Definition
-------------------------------
A resource will define the notifications it emits. These definitions will be added alongside the attributes and operations definitions on a resource:
{
    "description" => "A manageable resource",
    "attributes" => {
        ...
    },
    "operations" => {
        ...
    },
    "notifications" => {
        "resource-added" => {
            ...
        }
    },
    "children" => {
        ...
    }
}
The description of a notification will be composed of:
* type - String - the type of notification (resource-added, server-stopped, etc.)
* description - String - i18ned description of the notification
* access-constraints - the RBAC access constraints that control who can receive the notifications
* data-type - ModelType or complex structure - optional - only present if the notification will have a data value. data-type will detail the structure of the data value, enumerating the value's fields and the type of their value
The read-resource-description operation will be enhanced with a notifications parameter (boolean) to include the notification descriptions (the default value is false, same as for the operations parameter).
The ManagementResourceRegistration interface will be enhanced to register a notification definition with registerNotification(NotificationDefinition notification). The NotificationDefinition interface corresponds to the detyped representation of a notification and comes with a builder API.
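For illustration, registration on a resource might look roughly like the sketch below. registerNotification(NotificationDefinition) is from the proposal above; the builder entry point and the description-resolver argument are assumptions, not final API:

import org.jboss.as.controller.descriptions.ResourceDescriptionResolver;
import org.jboss.as.controller.registry.ManagementResourceRegistration;

// Sketch only: how a subsystem could declare a notification it emits.
public class ServerNotifications {

    public static final String SERVER_RESTARTED_NOTIFICATION = "server-restarted";

    public static void register(ManagementResourceRegistration registration,
                                ResourceDescriptionResolver resolver) {
        // Builder shape is assumed; the proposal only states that a builder API exists.
        NotificationDefinition serverRestarted = NotificationDefinition.Builder
                .create(SERVER_RESTARTED_NOTIFICATION, resolver)
                .build();
        registration.registerNotification(serverRestarted);
    }
}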
Part 2: Emitting a notification
-------------------------------
A notification can be emitted in any OperationStepHandler using the OperationContext.emit(Notification notification) method:
public void execute(OperationContext context, ModelNode operation) throws OperationFailedException {
    // perform some actions
    ...
    context.emit(new Notification(SERVER_RESTARTED_NOTIFICATION, address, ROOT_LOGGER.serverHasBeenRestarted()));
    context.stepCompleted();
}
The notification is *not* emitted (i.e. delivered to interested parties) when OperationContext.emit() is called. It is emitted at the end of the operation step only if it is successful. A call to OperationContext.emit() will have no effect if the operation is rolled back.
Notification emission is done asynchronously using the server thread pool and does not block the execution of the operation that triggered the notification: having zero or many notification handlers must have no impact on the execution of the operation.
A Notification is a simple Java class that represents the notification. It is composed of:
* type - String - the notification type
* address - PathAddress - the address of the resource that emits the notification
* message - String - the i18ned description of the message
* timestamp - long - the timestamp of the notification. It is set when the Notification object is created.
* data - ModelNode - optional - a detyped representation of data associated with the notification. If a notification includes a data field, its definition must describe it (in its data-type parameter).
If RBAC is enabled, the notification access-constraints will be checked to ensure that the handler has the required privileges to receive the notification. Notifications will potentially contain critical information (e.g. if a security-credential attribute is updated, the notification will contain its old and new values) and must be constrained accordingly.
Part 3: Global Resource Notifications
-------------------------------------
In the same way that some operations are available for any resource (e.g. add, remove, read-resource-description), some notifications will be added to any resource of the WildFly management model:
* resource-added - when a resource is added, it emits a resource-added notification
* resource-removed - when a resource is removed, it emits a resource-removed notification
* attribute-value-written - when a write-attribute operation is performed successfully on a resource, it emits an attribute-value-written notification. The notification's data field contains the following information:
  * name - String - the name of the attribute
  * old-value - the detyped representation of the previous value of the attribute
  * new-value - the detyped representation of the new value
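For example, the detyped data carried by an attribute-value-written notification could be built with org.jboss.dmr.ModelNode as sketched below; the attribute name and values are made up:

import org.jboss.dmr.ModelNode;

// Sketch: the data payload fields listed above for attribute-value-written.
public class AttributeValueWrittenData {
    public static void main(String[] args) {
        ModelNode data = new ModelNode();
        data.get("name").set("max-threads"); // name of the written attribute
        data.get("old-value").set(10);       // previous value
        data.get("new-value").set(20);       // new value
        System.out.println(data);
        // prints something like {"name" => "max-threads", "old-value" => 10, "new-value" => 20}
    }
}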
Part 4: Notification Handlers
-----------------------------
Any interested parties can receive notifications by registering a NotificationHandler using the ModelController.getNotificationSupport().registerNotificationHandler(source, handler, filter) method.
The source is a path address to handle notifications emitted by resources at this address.
The NotificationHandler is an interface with a single handleNotification(Notification notification) method.
The NotificationFilter is an interface with a single isNotificationEnabled(Notification notification) method to filter out uninteresting notifications.
There is a similar unregister method to unregister a (handler, filter) pair.
To be useful, the source path address will have to accept wildcards for the address' values:
* /subsystem=messaging/hornetq-server=* to receive notifications emitted by any hornetq-server resources
* /subsystem=messaging/hornetq-server=*/jms-queue=* to receive notifications emitted by any jms-queue on any hornetq-server resources
Wildcards for address' keys or key/value pairs are not allowed (/subsystem=messaging/*=*/jms-queue=* and /subsystem=messaging/*/jms-queue=* are not valid).
This notion of wildcard for the resource addresses should be made to match current usage (e.g. in the CLI).
The main reason for the wildcard is the resource-added/resource-removed notifications. I find it more intuitive to have the notifications at the same resource level as their corresponding add/remove operations. However, until the resource is created, there is no way to register a notification listener on it without using a wildcard.
If that proves problematic, we could change this approach with two alternatives:
* have a single well-known resource emit the notifications for all resources (that's the JMX approach). A likely candidate would be /core-service=management
* the resource-added/-removed notifications can be emitted by the resource parents (but it only fixes the issue for the last leaf of the address tree…)
I still have questions about RBAC enforcements and it is possible that the registration of a handler will have to be done with additional metadata identifying the user roles wrt RBAC...
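Putting Part 4 together, here is a minimal sketch of a handler registered with a wildcard source address. NotificationHandler and the filter interface come from this proposal, and the Notification accessor names are assumptions based on the field list in Part 2:

import org.jboss.as.controller.ModelController;
import org.jboss.as.controller.PathAddress;
import org.jboss.as.controller.PathElement;

// Sketch: log every notification from any hornetq-server resource,
// filtering out resource-added to keep the output small.
public class LoggingNotificationHandler implements NotificationHandler, NotificationFilter {

    @Override
    public void handleNotification(Notification notification) {
        // getType()/getSource() are assumed accessors for the type/address fields
        System.out.println(notification.getType() + " from " + notification.getSource());
    }

    @Override
    public boolean isNotificationEnabled(Notification notification) {
        return !"resource-added".equals(notification.getType());
    }

    public static void register(ModelController controller) {
        // /subsystem=messaging/hornetq-server=* (wildcard value)
        PathAddress source = PathAddress.pathAddress(
                PathElement.pathElement("subsystem", "messaging"),
                PathElement.pathElement("hornetq-server"));
        LoggingNotificationHandler handler = new LoggingNotificationHandler();
        controller.getNotificationSupport().registerNotificationHandler(source, handler, handler);
    }
}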
Part 5: Domain Notifications
----------------------------
Notifications are also intended to work in domain mode. In particular, they will be used to observe server state.
The following notifications will be emitted by resources at /host=XXX/server-config=YYY (i.e. the resource to start/stop/etc. a server):
* server-started
* server-stopped
* server-restarted
* server-destroyed
* server-killed
Part 6: Integration with local JMX
----------------------------------
The jmx subsystem will be updated to leverage the WildFly notifications and expose them as MBean notifications in our jmx facade for the management model:
* the WildFly notification description will be converted to MBeanNotificationInfo and added to the MBeanInfo
* when a JMX notification listener is added to an ObjectName, a WildFly NotificationHandler will be added to the path address corresponding to the ObjectName.
* depending on the user feedback, we may provide a hack to convert some WildFly notifications to their well-known JMX equivalent notifications (e.g. resource-added => jmx.mbean.registered).
In a first step, integration will be limited to local use of JMX. Remoting will not be supported.
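A sketch of what the conversion step in the jmx subsystem could look like. The WildFly Notification accessors are assumed from the field list in Part 2; the javax.management side is the standard JMX API:

import java.util.concurrent.atomic.AtomicLong;
import javax.management.ObjectName;

// Sketch: adapt a WildFly management Notification into a JMX notification.
public class JmxNotificationAdapter {

    private final AtomicLong sequence = new AtomicLong();

    public javax.management.Notification convert(Notification wildflyNotification, ObjectName source) {
        javax.management.Notification jmxNotification = new javax.management.Notification(
                wildflyNotification.getType(),      // e.g. "attribute-value-written"
                source,                             // ObjectName mapped from the PathAddress
                sequence.incrementAndGet(),
                wildflyNotification.getTimestamp(),
                wildflyNotification.getMessage());
        if (wildflyNotification.getData() != null) {
            // keep the detyped data around as the JMX user data payload
            jmxNotification.setUserData(wildflyNotification.getData().toString());
        }
        return jmxNotification;
    }
}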
Part 7: Integration with Remote Management API
-----------------------------------------------
We will enhance the remote management native API to register/unregister a notification handler from the ModelControllerClient:
    void registerNotificationHandler(ModelNode resourceAddress, NotificationClientHandler handler, NotificationClientFilter filter);
The client contract will have to take into account reconnection when the server is reloaded (possibly by caching the handler & filter and registering them again after reconnecting to the server...)
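A sketch of the client-side caching idea mentioned above, assuming the registerNotificationHandler method proposed here on ModelControllerClient; the wrapper class and its replay hook are hypothetical:

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import org.jboss.as.controller.client.ModelControllerClient;
import org.jboss.dmr.ModelNode;

// Sketch: cache (address, handler, filter) triples so they can be replayed
// after the client reconnects to a reloaded server.
public class ReconnectingNotificationRegistry {

    private static class Registration {
        final ModelNode address;
        final NotificationClientHandler handler;
        final NotificationClientFilter filter;
        Registration(ModelNode address, NotificationClientHandler handler, NotificationClientFilter filter) {
            this.address = address;
            this.handler = handler;
            this.filter = filter;
        }
    }

    private final List<Registration> registrations = new CopyOnWriteArrayList<>();

    public void register(ModelControllerClient client, ModelNode address,
                         NotificationClientHandler handler, NotificationClientFilter filter) {
        client.registerNotificationHandler(address, handler, filter); // proposed API
        registrations.add(new Registration(address, handler, filter));
    }

    // called after the client re-establishes its connection
    public void replay(ModelControllerClient freshClient) {
        for (Registration r : registrations) {
            freshClient.registerNotificationHandler(r.address, r.handler, r.filter);
        }
    }
}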
The Management HTTP API will also be enhanced to support notifications in its REST API.
A neat addition will be to provide a browser-specific way to push notifications to the browser (e.g. using Server-Sent Events or Web Sockets).
=> the Web Console is the recipient for this feature and will have their say in how they prefer to consume notifications
Part 8: Integration with Remote JMX
-----------------------------------
Once the WildFly Management API supports notifications (for both native and HTTP), we can add remote JMX support (if there is any user interest in it).
Part 9: Web Console UX improvement
----------------------------------
Once the Management HTTP API supports notifications, the Web console can leverage it to improve its UX.
This is a task that touches different parts of the app server (mainly in wildfly-core though) and I intend to split it into different JIRA issues (approx. one for each part) that can be merged one after the other instead of one big huge commit.
What do you think?
jeff
--
Jeff Mesnil
JBoss, a division of Red Hat
http://jmesnil.net/
Automatically resolving expressions in the CLI
by Edward Wertz
I'm looking into whether it's possible to automatically resolve expressions when executing operations and commands in the CLI.
From my understanding, there are two variations of the problem.
* Operations are server-side processes that are accessed via ':' in the CLI and, currently, the CLI presents the results returned as-is to the users. ex: ':read-resource'
* Commands are processes that get manipulated by the CLI before getting presented to users. ex: 'ls'
I've been experimenting with adding arguments to the CLI commands, like 'ls --resolve-expressions', and have gotten it working for the standalone and domain side of things. However, I can't control the scope of the argument, so it's available in situations that cannot accurately resolve expressions, like the 'profile=full' section of the domain tree. The results wouldn't be reliable.
The same problem would apply to adding parameters to the server-side operations. The scope of the operations themselves can be controlled, but not their parameters. An execution like ':read-resource(recursive=true,resolve-expressions=true)' can't resolve expressions unless it's used against an actual server or host, but the operation is available almost everywhere. Again, the results wouldn't be reliable.
I'm wondering if anyone can suggest a way to attack this problem? There is already a ':resolve-expression(expression=___)' operation, so users can somewhat laboriously get the runtime values they want, but I can't figure out a way to integrate the values into the existing framework successfully. Other than creating entirely new operations and commands, like 'ls-resolve' and ':read-resource-resolve', which seems like an unsustainable way to solve the problem.
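For reference, here is a sketch of the two-step dance the CLI would have to automate client-side: read an attribute, then ask an actual server or host to resolve the expression. The attribute, expression and connection details are placeholders:

import org.jboss.as.controller.client.ModelControllerClient;
import org.jboss.dmr.ModelNode;

// Sketch: manual resolution via the existing :resolve-expression operation.
public class ResolveExample {
    public static void main(String[] args) throws Exception {
        // host/port are placeholders; adjust to your setup
        try (ModelControllerClient client = ModelControllerClient.Factory.create("localhost", 9990)) {

            ModelNode read = new ModelNode();
            read.get("operation").set("read-attribute");
            read.get("address").add("interface", "public");
            read.get("name").set("inet-address");
            String expression = client.execute(read).get("result").asString();
            // e.g. "${jboss.bind.address:127.0.0.1}"

            ModelNode resolve = new ModelNode();
            resolve.get("operation").set("resolve-expression");
            resolve.get("address").setEmptyList();
            resolve.get("expression").set(expression);
            System.out.println(client.execute(resolve).get("result").asString());
        }
    }
}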
Thanks,
Joe Wertz
Wildfly provisioning tools and packaging format
by Stuart Douglas
Hi all,
So now that the core split has happened, I want to start work on the new
tooling for creating Wildfly feature packs.
At the moment the build is using a simple maven plugin that I created,
that will take an existing server (e.g. the core server), layer some
extra modules over the top of it, build the config files, and perform
any other build tasks that are required. It can also turn a 'thin'
server into a traditional fat server.
This is going to change to having two separate tools, the build tool and
the provisioning tool. The build tool will be used to create Wildfly
feature packs. A feature pack is kinda similar to what is already
produced, but with some major differences:
- It does not contain the contents of any feature packs it was built on.
For example, at the moment the result of web-build also contains the
server core. The web-build feature pack will only contain modules
provided by the web-build pack.
- It is not a server, in that it cannot just be unzipped and run; the
provisioning tool must be used first to create a runnable server from a
set of feature packs.
Once the build tool has created the feature packs, it is then up to the
provisioning tool to use them to assemble a working server. The
provisioning tool will be written as a library, with multiple front ends.
At the very least we will provide a standalone version and a maven plugin.
The provisioning tool takes a server descriptor, and uses that to
download all the relevant feature packs and assemble them into a server.
This process will give the user a lot of flexibility over how the server
is configured, including:
- The ability to specify only the subsystems they are after, and a cut
down server with just these subsystems and their dependencies will be
installed.
- The ability to override versions, e.g. to provision a server with an
updated version of Resteasy.
- The ability to install deployments into the server by specifying the
deployment's GAV.
- The ability to customise the default config (not sure how this will
work yet. A yuck solution would be xslt, but no one likes that. A nicer
solution could be some kind of CLI script that is run on first boot).
This provisioning tool will also be used to build our server for our
traditional distribution and test suite. Basically as part of the build
process the maven plugin will be run to provision a server from the
constituent feature packs.
The feature pack layout will look like below:
------
versions.properties
wildfly-pack.xml
modules/
    com/acme/mymodule
        module.xml
        ...
repository (optional)
    com/acme/myartifact/my-artifact.jar
    ...
configuration
    standalone.xml
    standalone-full.xml
    domain.xml
    ...
content
    bin/README.txt
    bin/LICENSE.txt
    ...
------
The contents of these files and directories are as follows:
versions.properties - properties file with the format G:A(:C)=V, e.g.
org.jboss.resteasy:resteasy-jaxrs=3.0.0.Final
wildfly-pack.xml - This contains all additional pack metadata:
    pack name: The name of the feature pack, must not contain spaces
    pack description: Self explanatory
    packaging version: This is inferred from the schema
    required tool version: The minimum version of the provisioning tool
        that is required to handle this pack
    permissions: A section to set unix file permissions
    version overrides: A section that allows for specific overrides of
        versions in the base system
    dependencies: Information on the feature packs this pack depends on
modules:
    Similar to the modules dir we have today, with some exceptions:
    - only artifact references are used, and these artifacts just refer to
      group and artifact, without the version number. See the modules.xml
      files in the current build for an example.
repository:
    Contains maven artifacts in the maven repository layout. This allows
    for the creation of 'offline' feature packs, where the pack
    does not need access to an external maven repository. This is not
    required, and in most cases will not be used.
    I am not 100% sure if we actually want this.
configuration:
    contains configuration template files, e.g. standalone.xml template
content:
    anything in this directory will be copied directly into the server
Comments? It is expected that work will be started on this tooling very
shortly to replace the current build plugin. I am going to create a
wildfly-build-tools repository to hold these new plugins.
Stuart
deployment information
by Claudio Miranda
Hi, for any deployed jar, the deployment resource shows:
/deployment=mysql-connector-java-5.1.26-bin.jar:read-resource(include-runtime=true,include-aliases=true,include-defaults=true,recursive=true)
{
    "outcome" => "success",
    "result" => {
        "content" => [{"hash" => bytes {
            0x22, 0x53, 0xb6, 0xad, 0x12, 0x0d, 0x95, 0x46,
            0xe4, 0x84, 0xe3, 0x3b, 0x54, 0x66, 0xb4, 0xdd,
            0xa9, 0x02, 0xa8, 0xfd
        }}],
        "enabled" => true,
        "name" => "mysql-connector-java-5.1.26-bin.jar",
        "persistent" => true,
        "runtime-name" => "mysql-connector-java-5.1.26-bin.jar",
        "status" => "OK",
        "subdeployment" => undefined,
        "subsystem" => undefined
    }
}
I would like to add more information: the timestamp of the deployment
(probably the timestamp of the content file on the filesystem), the size,
and the hash as found in the data/content directory.
I tried to look into the wildfly-core projects (host-controller,
deployment-scanner, deployment-repository, wildfly-controller), but
was unable to find the code that outputs the information to jboss-cli.
I know it uses the code below to request deployment
information, but what is the project/class invoked for the
"deployment" command?
final ModelNode op = Util.getEmptyOperation(READ_CHILDREN_RESOURCES_OPERATION, new ModelNode());
op.get(CHILD_TYPE).set(DEPLOYMENT);
ModelNode response;
try {
    response = controllerClient.execute(op);
Kind regards
--
Claudio Miranda
claudio(a)claudius.com.br
http://www.claudius.com.br
Domain Overview design
by Liz Clayton
Hi,
I'm sketching out some ideas for the Domain Overview screen. I'd like to find a visualization that makes it easier to scan the page to determine server availability, and possibly alerts.
Given that the domain could be large, the visualization needs to scale. I started by looking at heatmap visualizations, which worked pretty well, although I didn't feel like they helped in describing the overall relationships of servers, server groups and hosts... So I decided to break the heat maps into individual (stacked) heatmaps, ordered by server group. My hope is that this helps to define groupings and such.
I posted the current design proposal at:
https://community.jboss.org/wiki/DomainOverview070114pdf
It would be great to get feedback on the designs. Some questions I have are:
- Is it difficult/easy to understand that the boxes, in the server groupings, are intended to represent servers?
- Should the servers be laid out in the visualization by level of availability/status (as illustrated), or by some other ordering (A-Z, Z-A...)?
- Is it difficult/easy to understand that when a box is a different color, that it is indicating its availability status?
- What do you expect to be the relationship between (Availability) Status and Alerts? Would “x” alerts equate to a change in availability status, or can they function independently? For example: Could you have an error on a server and it still be “available?”
Thanks,
Liz
Is JMX Needed in Core?
by Darran Lofthouse
Working with the split repo, I'm just questioning if JMX is really needed in
core.
Whilst most distributions would include it, I am not convinced it is a
subsystem all must have.
Regards,
Darran Lofthouse.
Pooling EJB Session Beans per default
by Ralph Soika
Hi,
I want to discuss the topic of Session Bean Pooling in WildFly. I know
that there was a discussion in the past to disable pooling of EJB
Session Beans per default in WildFly.
I understand when you argue that pooling a session bean is not faster
than creating the bean from scratch each time a method is called. From
the perspective of an application server developer this is a clear and
easy decision. But from the view of an application developer this breaks
one of the main concepts of session beans - the pooling.
As an application developer I assume my bean is pooled and I can use one
of the life-cycle annotations to control my bean. This is a basic
concept for all kinds of beans. At first I thought it could be a
compromise to pool only those beans which have a life-cycle annotation.
But this isn't a solution.
Knowing that my bean will be pooled allows me - as a component developer
- to use this as a caching mechanism. For example, time-intensive
routines can cache results in an instance variable to be used the next
time a method is called. This isn't a bad practice and can increase the
performance of my component, depending on the pool settings.
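A minimal sketch of the pattern described here - an expensive setup done once per pooled instance and reused across calls; the bean and the cached component are made up for illustration:

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.ejb.Stateless;

// Sketch: a stateless bean that relies on pooling to reuse an expensive,
// internally cached component across invocations of the same instance.
@Stateless
public class ReportGenerator {

    // built once per pooled instance, reused for every call routed to it
    private ExpensiveTemplateEngine engine;

    @PostConstruct
    void init() {
        engine = ExpensiveTemplateEngine.load(); // time-intensive setup
    }

    public String render(String reportId) {
        return engine.render(reportId);
    }

    @PreDestroy
    void cleanup() {
        engine.close();
    }

    // stand-in for some expensive-to-build component
    static class ExpensiveTemplateEngine {
        static ExpensiveTemplateEngine load() { return new ExpensiveTemplateEngine(); }
        String render(String reportId) { return "report:" + reportId; }
        void close() { }
    }
}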
So my suggestion is to also pool stateless session EJBs in the future. I
guess from the specification there is no duty to pool beans, so there is
nothing wrong with not pooling beans. And again, I don't want to
criticize. But in the end, not pooling will decrease the performance of
WildFly - not the container itself, but the applications running in WildFly.
It took me a long time to figure out why my application was a little
bit slower in WildFly than in GlassFish, until I recognized the missing
pooling. I can activate pooling and everything is cool. But I guess some
other application developers will only see that their application is
slower in WildFly than on other application servers.
And this will affect their decision. That is the argument for activating
the pool per default.
best regards
Ralph
--
*Imixs*...extends the way people work together
We are an open source company, read more at: www.imixs.org
<http://www.imixs.org>
------------------------------------------------------------------------
Imixs Software Solutions GmbH
Agnes-Pockels-Bogen 1, 80992 München
*Web:* www.imixs.com <http://www.imixs.com>
*Office:* +49 (0)89-452136 16 *Mobil:* +49-177-4128245
Registergericht: Amtsgericht Muenchen, HRB 136045
Geschaeftsfuehrer: Gaby Heinle u. Ralph Soika
We started seeing test failure in PassivationTestCase.testPassivationMaxSize() which has passivation max-size=1 and repeated calls to two separate beans...
by Scott Marlow
We started to see what looks like a JPA extended persistence context
related error. [1] is the server.log that shows the exception (see the
last one near the bottom) that shouldn't be happening on WildFly master.
Also, there are some marshalling errors that I didn't see on brontes
(I'm wondering if there is a concurrency error between the bean
invocation and passivation/activation when Hibernate throws the
"java.lang.IllegalStateException: Cannot serialize a session while
connected" error during marshalling as if bean is active).
I am able to recreate the failure locally with a modification to the
PassivationTestCase.testPassivationMaxSize() [2] to repeatedly
alternate between calls to remote1 + remote2 beans.
I don't have this nailed down to the actual cause but it seems like a
race condition between passivation/activation and bean invocation (imo).
Scott
[1] https://www.dropbox.com/s/277pwvxv53dp8vk/server.zip contains the
results from more than one test run. If you look at the server.log, you
probably should go to the end and see the last "javax.ejb.EJBException:
WFLYJPA0030: Found extended persistence context in SFSB invocation call
stack but that cannot be used" error
[2] unit test change to loop repeatedly until failure occurs
https://github.com/scottmarlow/wildfly/tree/passivationxpcissue
TODO Comments
by Darran Lofthouse
Just a random idea.
Can we block merging pull requests if they contain a TODO comment that
doesn't reference a Jira issue?
The views in GitHub make it easy to see if a TODO is involved, so it's quite
simple to double check - and if no Jira is justified, maybe the TODO
isn't either.
Regards,
Darran Lofthouse.