WFLY-2422 or simplifying the remote outbound connection configuration for server to server communication
by Wolf-Dieter Fink
I am starting a request to simplify the configuration for "in EE
application" clients, to get rid of the extra cluster configuration, and
to avoid repeating properties many times.
Also, the client should not need any knowledge of the server topology;
there is no need to know how many servers there are or whether they are
clustered.
The starting point in EAP 6 / WildFly 8 is an application configuration like this:
https://github.com/wildfly/quickstart/blob/master/ejb-multi-server/app-ma...
and a server side configuration like this:
<subsystem xmlns="urn:jboss:domain:remoting:3.0">
    <endpoint worker="default"/>
    <http-connector name="http-remoting-connector" connector-ref="default" security-realm="ApplicationRealm"/>
    <outbound-connections>
        <remote-outbound-connection name="remote-ejb-connection-1"
                                    outbound-socket-binding-ref="remote-ejb-1"
                                    username="quickuser1"
                                    security-realm="ejb-security-realm-1"
                                    protocol="http-remoting">
            <properties>
                <property name="SASL_POLICY_NOANONYMOUS" value="false"/>
                <property name="SSL_ENABLED" value="false"/>
            </properties>
        </remote-outbound-connection>
        <remote-outbound-connection name="remote-ejb-connection-2"
                                    outbound-socket-binding-ref="remote-ejb-2"
                                    username="quickuser2"
                                    security-realm="ejb-security-realm-2"
                                    protocol="http-remoting">
            <properties>
                <property name="SASL_POLICY_NOANONYMOUS" value="false"/>
                <property name="SSL_ENABLED" value="false"/>
            </properties>
        </remote-outbound-connection>
    </outbound-connections>
</subsystem>
Tomasz did some refactoring (WF9) to use a profile from the application
perspective. The configuration is like this:
jboss-ejb-client.xml:
<client-context>
    <profile name="main-app"/>
</client-context>
server profile:
<remote connector-ref="http-remoting-connector" thread-pool-name="default">
    <profiles>
        <profile name="main-app">
            <remoting-ejb-receiver name="AppOneA" outbound-connection-ref="remote-ejb-connection-1"/>
            <remoting-ejb-receiver name="AppTwoA" outbound-connection-ref="remote-ejb-connection-2"/>
        </profile>
    </profiles>
</remote>
....
<subsystem xmlns="urn:jboss:domain:remoting:3.0">
    <outbound-connections>
        <remote-outbound-connection name="remote-ejb-connection-1"
                                    outbound-socket-binding-ref="remote-ejb-1"
                                    username="quickuser1"
                                    security-realm="ejb-security-realm-1"
                                    protocol="http-remoting">
            <properties>
                <property name="SASL_POLICY_NOANONYMOUS" value="false"/>
                <property name="SSL_ENABLED" value="false"/>
            </properties>
        </remote-outbound-connection>
        <remote-outbound-connection name="remote-ejb-connection-2"
                                    outbound-socket-binding-ref="remote-ejb-2"
                                    username="quickuser2"
                                    security-realm="ejb-security-realm-2"
                                    protocol="http-remoting">
            <properties>
                <property name="SASL_POLICY_NOANONYMOUS" value="false"/>
                <property name="SSL_ENABLED" value="false"/>
            </properties>
        </remote-outbound-connection>
    </outbound-connections>
</subsystem>
With the current implementation there are some issues and concerns/enhancements:
- the profile does not work with clusters
- it is not possible to have multiple profiles
- the properties/user must still be repeated
From my point of view:
- a cluster needs to have the same property configuration, and different
users make no sense either. It might work, but at least the cluster view
will use the same user
- a similar group of servers for the same application should not have
different properties/users, as this would be error prone
- the configuration should be as small and intuitive as possible
My initial idea was to have a jboss-ejb-client.xml which references the
'applications' to connect to, similar to profiles.
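On the client side that could look roughly like the following (the element names here are purely illustrative, not an existing schema):
<client-context>
    <applications>
        <application name="App1"/>
    </applications>
</client-context>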
The server side would look as follows (don't worry about the exact XML
elements or names):
<subsystem xmlns="urn:jboss:domain:remoting:3.0">
    <outbound-connections>
        <profile name="App1" username="quickuser1" security-realm="ejb-security-realm-1" protocol="http-remoting">
            <properties>
                <property name="SASL_POLICY_NOANONYMOUS" value="false"/>
                <property name="SSL_ENABLED" value="false"/>
            </properties>
            <outbound-sockets>remote-ejb-1,remote-ejb-2</outbound-sockets> <!-- repeated elements seem better -->
        </profile>
        <remote-outbound-connection name="remote-ejb-connection-X"
                                    outbound-socket-binding-ref="remote-ejb-X"
                                    username="quickuser2"
                                    security-realm="ejb-security-realm-2"
                                    protocol="http-remoting">
            <properties>
                <property name="SASL_POLICY_NOANONYMOUS" value="false"/>
                <property name="SSL_ENABLED" value="false"/>
            </properties>
        </remote-outbound-connection>
    </outbound-connections>
</subsystem>
In this case the profile uses the user/security settings and properties
for all of its connections, and for the cluster as well. This makes it
necessary to have the same configuration for all servers in the profile group.
Another option I thought about is to use the user/properties in
<profile> as defaults and allow an inner remote-outbound-connection
element, or a reference to a remote-outbound-connection, to override
them, but I'm not sure whether this is needed.
We (Tomasz Adamski and I) discussed this and, technically, there is no
problem with either approach.
But ...
I know that all the ejb-client code is subject to change. To avoid
incompatible changes in every version, and unnecessary work if the code
is changed again before it is ever used, I think this needs to be
discussed with others first.
cheers
Wolf
Dropping legacy XSD schemas & their parsers
by Tomaž Cerar
Hi folks,
we discussed at the team meeting in Brno dropping support for old legacy
host controllers when running in mixed domain mode (having a DC of a
newer version managing older-version HCs).
We also discussed dropping old XSD schemas & parsers, as it would help us
clean up and simplify code in many subsystems; there are cases where we
support and maintain 5 or more different versions of a parser. For
example, the web subsystem currently has 8, infinispan 7, ejb & jacorb have 6, ...
We still have parsers that were shipped back in 7.0.0 and became
obsolete in later 7.0.x releases.
Given that we decided to drop support for running mixed domain mode with
host controllers older than 7.3.0 (EAP 6.2), as tracked by
https://issues.jboss.org/browse/WFLY-3564
I would also like to suggest that we do the same for XML schemas & parsers.
*What is the downside?*
Automatic upgrading from JBoss AS 7.1.x / EAP < 6.2 using the same
standalone.xml won't work anymore.
Users would need to upgrade to WildFly 8.x first and from there to 9 or 10
(depending on when we drop this).
Because of the replacement of the web subsystem with undertow and the
introduction of a few other subsystems (io, SM), this already doesn't
work for 7.x --> 8+, but we do have plans for how to improve that.
So, are there any objections to this?
--
Tomaž
Management Parser Versioning
by Darran Lofthouse
Working with the parsers for the core config has become increasingly
cryptic; we are now at the point where we have three different major
versions which diverge and converge as we work on them. The most recent
changes have resulted in large sections of the config converging for 1.x
and 3.x, leaving 2.x independent.
So that I can add references to Elytron I am starting to add support for
version 4.
One thing that I have learned is that each major version tends to belong
to one branch of the codebase; all changes to that version happen on
that branch first:
1.x - Maintained only for EAP
2.x - WildFly 8.x branch
3.x - WildFly Core master branch
I would expect that if further changes are made to core for WildFly 9
releases we will end up with a 1.x branch of core, and the 4.x version of
the schema will be owned by the master branch.
To make things less cryptic I am proposing that, until we find a better
solution, for all subsequent major schema versions we just fork the
parser and all related classes.
This will simplify the code being modified for upstream development.
Forward-porting parsing changes will also become a simple copy and paste.
With the current cryptic approach I think almost every engineer that has
worked in-depth in this area (and I am finding it really hard to think
of exceptions) has introduced at least one bug, and I don't think the
test coverage is high enough to protect against this.
Regards,
Darran Lofthouse.
Embedded Arquillian Container
by Lukas Fryc
Hey guys,
just wondering whether wildfly-arquillian-container-embedded was
discontinued with the split for 9.x:
https://github.com/wildfly/wildfly-arquillian/blob/master/pom.xml#L96
While working on re-enablement, I found out that even though the Arquillian
adapter now depends on wildfly-core/embedded, particularly
on EmbeddedServerFactory, this class has a counterpart in
wildfly/embedded as well.
The question is: should the embedded Arquillian container still be
available for 9.x?
If yes, I can continue and provide a PR; I will just need a bit of guidance
on which EmbeddedServerFactory it should actually use (if that matters).
Cheers,
~ Lukas
--
Lukas Fryc
AeroGear Core Developer
Red Hat
Management operation for legacy subsystem migration
by Jeff Mesnil
With WildFly 9 and 10, we will have new subsystems that will replace some older subsystems (called legacy subsystems below).
We have to deal with migrating these subsystems:
* migrate from web (JBoss Web) to undertow
* migrate from messaging (HornetQ) to messaging-activemq (with Apache ActiveMQ Artemis)
* migrate from jacorb to iiop-openjdk
These 3 tasks are about providing a management operation to perform one-time migration (i.e. the migration is an operation performed by the server on its management model).
I have started to look at this from the messaging perspective.
To constrain this task, I have added some requirements:
1. the legacy subsystem must be an empty shell and have no runtime
=> in WildFly 10, /subsystem=messaging only exposes its management model; there is no runtime (the HornetQ server library is not included)
2. the server must be in admin-only mode during migration
=> the server is not serving any client during migration.
=> the migration deals only with the server management model by creating the model for the new subsystem based on the legacy subsystem's model
3. Data are not moved during this migration operation
=> moving messages from HornetQ to ActiveMQ destinations is not performed during this management migration.
=> we already have processes (such as using JMS bridges) to move messages from one messaging provider to another
Having these three requirements simplifies the migration task and sounds reasonable.
Do you foresee any issues with having them?
Given these requirements, the legacy subsystem would need to expose a :migrate operation (at the root of the subsystem) to perform the actual migration of the management model.
Its pseudo code would be something like:
* check the server is in admin-only mode
* check that the new subsystem is not already defined (i.e. it has not defined any child resource)
* :describe the legacy subsystem model
* transform the legacy subsystem description to the new subsystem
=> if everything is successful
* create a composite operation to add the new messaging-activemq extension and all the transformed :add operations
* report the composite operation outcome to the user
=> else
* report the error(s) to the user
It is possible that the legacy subsystem cannot be fully migrated (e.g. if it defines an attribute that has no equivalent in the new subsystem). In that case, the :migrate operation reports the error(s) to the user.
The user can then change the legacy subsystem model to remove the problematic resources/attributes and invoke :migrate again.
For the messaging subsystem, I expect that it will not be possible to fully migrate the replication configuration of the legacy subsystem to the new subsystem (the configuration has changed significantly between HornetQ and ActiveMQ; some configuration will be incompatible).
In that case, I'd expect the user to migrate to the new messaging-activemq subsystem by discarding the legacy subsystem's replication configuration, invoking :migrate, and then configuring replication for the new subsystem.
In my proof of concept, the :migrate operation has a dry-run boolean attribute. If set to true, the operation will not run the composite operation. It will instead return to the user the list of operations that will be executed when the :migrate operation is actually performed.
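For illustration, with the server started in admin-only mode, invoking the operation from the CLI would look roughly like this (the dry-run attribute name comes from my proof of concept and may still change):
/subsystem=messaging:migrate(dry-run=true)
/subsystem=messaging:migrate
The first invocation only lists the operations that would be executed; the second one performs the actual migration.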
I have talked to Tomek, who is in charge of the iiop migration, and he has an additional requirement to emulate the legacy jacorb subsystem with the new iiop-openjdk subsystem. I do not have this requirement for the messaging subsystem, so I have not given it much thought...
The same goes for the web -> undertow migration.
It's also important to note that this operation to migrate the management model of a legacy subsystem to a new one is only one step of the whole migration story.
For messaging, the workflow to upgrade a WFLY 9 server to WFLY 10 is made up of several other steps (and I may have forgotten some):
* install the new server
* copy the old configuration (with the legacy messaging subsystem)
* start the new server in admin-only mode
* invoke /subsystem=messaging:migrate
=> rinse and repeat by tweaking the legacy subsystem until the migration is successful
* if migration of data can be done offline, do it now (the server is in admin-only mode, so it's ok)
* reload the server to return to running mode with the new messaging subsystem
* if the migration of data must be done online, it can be done now
(e.g. create a new JMS bridge from the old running WFLY9/messaging server to this new WFLY10/messaging-activemq server)
* if everything is fine, invoke /subsystem=messaging:remove to remove the legacy subsystem model.
Any comments, criticism, feedback?
--
Jeff Mesnil
JBoss, a division of Red Hat
http://jmesnil.net/
WFCORE-665
by Brian Stansberry
Hi Jay,
Re: https://issues.jboss.org/browse/WFCORE-665
I wanted to discuss this here, just because it's a potential change in
longstanding behavior, so it could use more visibility than JIRA
comments or pull requests get.
I think the simple solution is just to invert the priority between
groupVM and hostVM in the call to the JVMElement constructor at
https://github.com/wildfly/wildfly-core/blob/master/host-controller/src/m...
There's one other analogous situation, with system properties, and there
the host model value takes precedence over server-group. See
ManagedServerOperationsFactory.getAllSystemProperties.
I want those two to be consistent, as I want all such precedence things
to be consistent. (The "path" and "interface" resources are the other
precedence things, but there "server-group" is not a factor.)
The argument for host overriding server-group is better to me than the
opposite, so if we're picking one to be consistent with, I pick the
former. One of the basic ideas of domain.xml versus host.xml is that
domain.xml sets the base config and then specific items can be overridden
at the host level. So having server-group take precedence is unintuitive.
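To illustrate with a simplified, hypothetical example: suppose domain.xml
defines a jvm for a server group and host.xml defines a jvm with the same
name (names and sizes here are placeholders):
domain.xml:
<server-group name="main-server-group" profile="full">
    <jvm name="default">
        <heap size="512m" max-size="1g"/>
    </jvm>
    <socket-binding-group ref="full-sockets"/>
</server-group>
host.xml:
<jvms>
    <jvm name="default">
        <heap size="64m" max-size="2g"/>
    </jvm>
</jvms>
With the inversion I'm suggesting, a server in main-server-group would end
up with the host-level heap settings, matching how system properties are
already handled.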
I don't like the approach in your patch because it makes the rules more
complex. Whether host takes precedence over server-group depends on
whether there's a jvm config at the server level with a name that
matches one of the host level configs. That's too complicated.
Cheers,
--
Brian Stansberry
Senior Principal Software Engineer
JBoss by Red Hat
CLI 'module add/remove' Command Discussion
by Edward Wertz
I discovered that the CLI 'module' command has been in need of a discussion for a while. There are some issues with the command, and lingering enhancement requests, that can't be cleared up unless a future direction is clearly established. Since I can't seem to find any existing discussion on the subject, I wanted to throw it out here.
Issues:
* The main issue is that the current command simply doesn't interact with the server to do anything. It relies on file system access and creates or removes module directories and resources on its own; the server can then reload and find them. Simple, but effective (see the example after this list).
* Another issue is that the command is disabled for domain mode. Interestingly, since the command can be used in a disconnected state, it can still be used to manipulate domain modules assuming the server's module path remains the default at 'JBOSS_HOME/modules'. A module can be added or removed while the CLI is disconnected, then it can connect and reload the servers to complete the add or remove.
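For reference, the current file-system-based usage looks roughly like this (the module name, resource path, and dependency list below are just placeholders):
module add --name=org.example.mylib --resources=/path/to/mylib.jar --dependencies=javax.api
module remove --name=org.example.mylib
In both cases the command only creates or deletes module directories and resources under JBOSS_HOME/modules, which the server picks up on reload.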
Refactoring Thoughts:
To bring it in line with most other commands, and allow it to be expanded, the process should probably happen server-side rather than CLI-side. I'm envisioning something akin to the deploy functionality: for the time being refactoring a simple add/remove, but later expanding it to be far more functional. However, I haven't had the chance yet to look into how, or whether, the server maintains or manipulates the modules and paths anywhere, or if the value is simply passed into the jboss-modules system. I don't believe there's a clear API for the modules system at the moment, so this puts me somewhat in the dark as to whether what I'm thinking about is even possible.
Of course, everything is possible. With enough work.
Concerns:
* My main concern with needing a connection is that the current functionality would suffer. Right now the module command, being independent from the server, is enabled while the CLI is disconnected. I think that functionality is somewhat valuable, since a standalone server reload does not actually remove a module: once added and the server reloaded, a module is active until a full shutdown and restart. From what I understand it's much easier to add classes to a JVM than to remove them, so this is expected. If this isn't addressed, a standalone server would have to be started, then the module removed, then shut down and restarted. While not excessive, it is a little annoying.
* Also, as pointed out earlier, the command can currently be used on domain servers if certain criteria are met. If a connection is required the command would become completely unusable for domain servers unless the functionality is expanded during the refactoring.
** Note: The first concern doesn't seem to apply to domain servers, since the servers can be completely restarted from the host controller.
Thoughts and input are encouraged, whether on the technical aspects of a refactor or simply on using the command.
Joe Wertz
WildFly NoSQL integration prototype
by Scott Marlow
Are you interested in allowing access to NoSQL databases from WildFly
application deployments? This email is about an integration effort to
allow WildFly applications to use NoSQL. Feedback is welcome on this
effort, as well as help in improving [1]. Some basic unit tests have
already been added that show a session bean reading/writing MongoDB [2] +
Cassandra [3] databases. In order for the tests to pass, the local
machine must already be running a MongoDB or Cassandra database.
1. Things that currently (seem to be) working in the prototype:
* During WildFly startup, MongoDB/Cassandra databases are connected to
based on settings in their respective subsystems. See the configuration
example [4].
* Applications can access native MongoDB/Cassandra objects that
represent database connections (with internal native connection
pooling). See the @Resource example [2][3]; a rough sketch follows below.
We will see how the requirements evolve going forward and whether
@Resource is the right way and/or whether other annotations are needed.
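As a sketch of what this looks like from application code (the JNDI lookup name and the database/collection names below are placeholders; the actual binding comes from the subsystem configuration [4]):
import javax.annotation.Resource;
import javax.ejb.Stateless;

import org.bson.Document;

import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;

@Stateless
public class CommentBean {

    // Hypothetical JNDI name; the real one is whatever the mongodb
    // subsystem binds the client under.
    @Resource(lookup = "java:jboss/mongodb/test")
    private MongoClient mongoClient;

    public void addComment(String author, String text) {
        // The injected MongoClient is the driver's own pooled client;
        // the application should not close it.
        MongoCollection<Document> comments =
                mongoClient.getDatabase("testdb").getCollection("comments");
        comments.insertOne(new Document("author", author).append("text", text));
    }
}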
2. Currently not working in the prototype:
* Multiple hosts/ports cannot be specified yet for target database.
* Protection against applications closing pooled connections.
* NoSQL drivers currently may create threads from EE application threads,
which could leak ClassLoaders/AccessControlContexts. One solution might
be to contribute a patch that allows WildFly to do the thread creation
in some fashion for the NoSQL drivers.
* We have not (yet) tried using (Java) security manager support with the
NoSQL driver clients.
* Additional NoSQL connection attributes need to be added to the NoSQL
subsystems.
* Native NoSQL class objects are bound to JNDI currently (e.g.
MongoClient). We might want to bind wrapper or proxy objects so that we
can extend the NoSQL classes or in some cases, prevent certain actions
(e.g. prevent calls to MongoClient.close()). Perhaps we will end up
with a mixed approach, where we could extend the NoSQL driver if that is
the only way to manage it, or contribute a listener patch for WildFly to
get control during certain events (e.g. for ignoring close of pooled
database connections).
* The prototype currently gives all (WildFly) deployments access to the
Cassandra/MongoDB driver module classloaders. This is likely to change
but not yet sure to what.
3. The Weld (CDI) project is also looking at NoSQL enhancements, as is
the Narayana project. There is also the Hibernate OGM project, which is
pushing on JPA integration and will also help contribute changes to the
NoSQL drivers that are needed for WildFly integration (e.g. introducing an
alternative way for NoSQL drivers to manage thread creation for background
task execution).
4. We will need a place to track issues for NoSQL integration. If the
NoSQL integration changes are merged directly into WildFly, perhaps we
could have a nosql category under https://issues.jboss.org/browse/WFLY.
5. You can view outstanding issues in the MongoDB server [5] and Java
driver [6] to get a feel for problems that others have run into (just like
you would with WildFly). You can view outstanding issues in the
Cassandra server [7] and Java driver [8] to get a feel for problems as well.
6. Infinispan [9] integration in WildFly is still going strong.
Infinispan is still the backbone of WildFly clustering and also
available for applications to use as a datasource.
7. The standalone.xml settings [4] will soon change (I would like to
eliminate the "name=default", add more attributes, and get the multiple
host/ports wired in).
8. If the NoSQL unit tests do stay in the WildFly repo, they will need
to be disabled by default, as most WildFly developers will not have a
NoSQL database running. Speaking of which, we need to wire the unit
tests to update the standalone.xml to contain the MongoDB/Cassandra
subsystem settings [4].
9. What versions of NoSQL databases will work with the WildFly NoSQL
integration? At this point, we will only work with one version of each
NoSQL database that we integrate with. Because we are likely to need
some changes in the NoSQL client drivers, we will work with the upstream
communities to ensure the NoSQL driver code can run on an EE container
thread without causing leaks. First we have to identify the changes
that we need (e.g. find some actual leaks that I only suspect will
happen at this point and propose some changes). The Hibernate OGM team
is going to help with the driver patches (thanks Hibernate OGM team! :-)
10. Going forward, how can WildFly extend the NoSQL (client driver
side) capabilities to improve the different application life cycles
through development, test, production?
Scott
[1] https://github.com/scottmarlow/wildfly/tree/nosql-dev
[2]
https://github.com/scottmarlow/wildfly/blob/nosql-dev/testsuite/compat/sr...
[3]
https://github.com/scottmarlow/wildfly/blob/nosql-dev/testsuite/compat/sr...
[4] https://gist.github.com/scottmarlow/b8196bdc56431bb171c8
[5] https://jira.mongodb.org/browse/SERVER
[6] https://jira.mongodb.org/browse/JAVA
[7] https://issues.apache.org/jira/browse/CASSANDRA
[8] https://datastax-oss.atlassian.net/browse/JAVA
[9] http://infinispan.org