WFLY-2422 or simplifying the remote outbound connection configuration for server to server communication
by Wolf-Dieter Fink
I am starting a request to simplify the configuration for "in EE
application" clients, to get rid of the extra cluster configuration, and
to stop repeating the same properties many times.
Also, the client should not need any knowledge of the server topology;
there is no need to know how many servers there are or whether they are
clustered.
The starting point in EAP6/WF8 is an application configuration like this:
https://github.com/wildfly/quickstart/blob/master/ejb-multi-server/app-ma...
and a server side configuration like this:
<subsystem xmlns="urn:jboss:domain:remoting:3.0">
<endpoint worker="default"/>
<http-connector name="http-remoting-connector"
connector-ref="default" security-realm="ApplicationRealm"/>
<outbound-connections>
<remote-outbound-connection
name="remote-ejb-connection-1"
outbound-socket-binding-ref="remote-ejb-1" username="quickuser1"
security-realm="ejb-security-realm-1" protocol="http-remoting">
<properties>
<property name="SASL_POLICY_NOANONYMOUS"
value="false"/>
<property name="SSL_ENABLED" value="false"/>
</properties>
</remote-outbound-connection>
<remote-outbound-connection
name="remote-ejb-connection-2"
outbound-socket-binding-ref="remote-ejb-2" username="quickuser2"
security-realm="ejb-security-realm-2" protocol="http-remoting">
<properties>
<property name="SASL_POLICY_NOANONYMOUS"
value="false"/>
<property name="SSL_ENABLED" value="false"/>
</properties>
</remote-outbound-connection>
</outbound-connections>
</subsystem>
Tomasz did some refactoring (WF9) to use a profile from the application
perspective. The configuration is like this:
jboss-ejb-client.xml
<client-context>
<profile name="main-app"/>
</client-context>
server profile:
<remote connector-ref="http-remoting-connector"
thread-pool-name="default">
<profiles>
<profile name="main-app">
<remoting-ejb-receiver name="AppOneA"
outbound-connection-ref="remote-ejb-connection-1"/>
<remoting-ejb-receiver name="AppTwoA"
outbound-connection-ref="remote-ejb-connection-2"/>
</profile>
</profiles>
</remote>
....
<subsystem xmlns="urn:jboss:domain:remoting:3.0">
<outbound-connections>
<remote-outbound-connection
name="remote-ejb-connection-1"
outbound-socket-binding-ref="remote-ejb-1" username="quickuser1"
security-realm="ejb-security-realm-1" protocol="http-remoting">
<properties>
<property name="SASL_POLICY_NOANONYMOUS"
value="false"/>
<property name="SSL_ENABLED" value="false"/>
</properties>
</remote-outbound-connection>
<remote-outbound-connection
name="remote-ejb-connection-2"
outbound-socket-binding-ref="remote-ejb-2" username="quickuser2"
security-realm="ejb-security-realm-2" protocol="http-remoting">
<properties>
<property name="SASL_POLICY_NOANONYMOUS"
value="false"/>
<property name="SSL_ENABLED" value="false"/>
</properties>
</remote-outbound-connection>
</outbound-connections>
</subsystem>
With the current implementation there are some issues and possible
enhancements:
- a profile does not work with clusters
- it is not possible to have multiple profiles
- the properties/user must still be repeated
From my point of view:
- a cluster needs to have the same property configuration; different
users also make no sense. They might work, but at least the cluster view
will use the same user
- a group of similar servers for the same application should not have
different properties/users, as this is error prone
- the configuration should be as small and intuitive as possible
My initial idea was to have a jboss-ejb-client.xml which references
'applications' to connect to, similar to profiles.
The server side would look as follows (don't worry about the exact XML
elements or names):
<subsystem xmlns="urn:jboss:domain:remoting:3.0">
<outbound-connections>
<profile name="App1" username="quickuser1"
security-realm="ejb-security-realm-1" protocol="http-remoting">
<properties>
<property name="SASL_POLICY_NOANONYMOUS"
value="false"/>
<property name="SSL_ENABLED" value="false"/>
</properties>
<outbound-sockets>remote-ejb-1,remote-ejb-2</outbound-sockets> <!--
repeated elements seem better -->
</profile>
<remote-outbound-connection
name="remote-ejb-connection-X"
outbound-socket-binding-ref="remote-ejb-X" username="quickuser2"
security-realm="ejb-security-realm-2" protocol="http-remoting">
<properties>
<property name="SASL_POLICY_NOANONYMOUS"
value="false"/>
<property name="SSL_ENABLED" value="false"/>
</properties>
</remote-outbound-connection>
</outbound-connections>
</subsystem>
In this case the profile uses the user/security and properties for all
connections, and for the cluster as well. Here it is necessary to have
the same configuration for all the servers in the profile group.
Another option I thought about is to use the user/properties in
<profile> as defaults and allow an inner remote-outbound-connection
element, or a reference to a remote-outbound-connection, to override
them, but I'm not sure whether this is needed.
We (Tomasz Adamski and I) discussed this, and technically there is no
problem with either approach.
But ...
I know that all the ejb-client code is subject to change. To avoid
incompatible changes in every version, and unnecessary work if the code
is changed again before it is ever used,
I think this will need to be discussed with others.
cheers
Wolf
Recursive management resource removes
by Brian Stansberry
tl;dr
We're going to start ensuring that any time a resource is removed, all
its children are properly removed too.
This will let users remove an entire profile from the domain config in
one op, which is important now that we make it easier to clone an entire
profile or include one profile in another. These features should mean
people will be more likely to perform major configuration surgery via
the management API instead of XML editing, and surgeons need to 'remove'
large chunks.
Long version
Just an FYI on pending changes in how 'remove' management operations are
handled in WildFly 10.
First, any resource that represents persistent configuration must expose
a no-arg 'remove' operation. This was an informal requirement before;
now it's formal.
Second, you'd think that if a resource is removed via a 'remove' op,
we'd *always* ensure that all child resources are properly removed as
well. "Properly" means the necessary runtime changes are made (e.g.
services removed), not just that the resource is dropped from the
configuration model.
But this isn't the case. There are a couple of resources that explicitly
reject the 'remove' request if their children haven't been removed
first. There are others where the Stage.RUNTIME logic handling the
removal of the child assumes the parent model is still present, which is
an invalid assumption if the parent is removed in the same op. And there
*may* be some where the child resource is just dropped, ignoring the
need to clean up child services.
With https://github.com/wildfly/wildfly-core/pull/880 and
https://github.com/wildfly/wildfly/pull/7734 I'm changing this. Now the
base handler class for 'remove' ops (AbstractRemoveStepHandler) will by
default add steps to recursively remove any child resources before
executing a step to remove the target resource.
If for some reason you don't want a step added to remove a particular
child, override the
AbstractRemoveStepHandler.removeChildRecursively(PathElement child)
method. For example, see [1]. This is expected to be quite an unusual
thing to do.
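For anyone who hasn't used that hook before, here is a minimal sketch of an override, assuming the hook returns a boolean that controls whether a remove step is added for the given child; the "runtime-only" child key is purely hypothetical.

import org.jboss.as.controller.AbstractRemoveStepHandler;
import org.jboss.as.controller.PathElement;

/**
 * Sketch only: keeps the default recursive-remove behaviour for every child
 * except a hypothetical "runtime-only" child type that is cleaned up elsewhere.
 */
public class ExampleRemoveHandler extends AbstractRemoveStepHandler {

    @Override
    protected boolean removeChildRecursively(PathElement child) {
        // Returning false skips adding an automatic remove step for this child.
        return !"runtime-only".equals(child.getKey());
    }
}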
If your resource doesn't use AbstractRemoveStepHandler for implementing
'remove':
1) Why not?
2) You're responsible for implementing the same semantic.
Note there are a couple of remove handler impls that have implemented
similar logic in a custom manner. Once a core release containing
WFCORE-808 is in full WildFly, that custom logic can be dropped.
[1]
https://github.com/bstansberry/wildfly/commit/7a046d7b99eeb1a73b53253786a...
[2]
https://github.com/bstansberry/wildfly/commit/e64a1bd83b83e20f5f3964aece8...
--
Brian Stansberry
Senior Principal Software Engineer
JBoss by Red Hat
Including Keycloak client adapters in WildFly 10
by Stian Thorgersen
Keycloak provides an adapter, including a WildFly extension, to make it easier to add authentication to Java EE applications with Keycloak.
It includes a few modules: currently 8 Keycloak-specific modules and one third-party module, net.iharder.base64.
As the WildFly extension includes a deployment processor that configures the authentication method as well as the dependencies for a deployment, it's easy to add authentication to a Java EE application. All you need to do is specify it in standalone.xml, for example:
...
<secure-deployment name="mywar.war">
<realm>myrealm</realm>
<realm-public-key>MIIBIjAN...</realm-public-key>
<auth-server-url>http://localhost:8081/auth</auth-server-url>
<ssl-required>EXTERNAL</ssl-required>
<resource>mywar</resource>
<credential name="secret">675356d8-2b6b-4602-a74f-7079e0555885</credential>
</secure-deployment>
...
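For context, the "deployment processor that adds dependencies" above refers to the WildFly deployer SPI. The sketch below is not the Keycloak adapter's actual code; it is a minimal, hypothetical DeploymentUnitProcessor that attaches one adapter module to a deployment, with the class name and module name invented for illustration.

import org.jboss.as.server.deployment.Attachments;
import org.jboss.as.server.deployment.DeploymentPhaseContext;
import org.jboss.as.server.deployment.DeploymentUnit;
import org.jboss.as.server.deployment.DeploymentUnitProcessingException;
import org.jboss.as.server.deployment.DeploymentUnitProcessor;
import org.jboss.as.server.deployment.module.ModuleDependency;
import org.jboss.as.server.deployment.module.ModuleSpecification;
import org.jboss.modules.Module;
import org.jboss.modules.ModuleIdentifier;

// Hypothetical processor: adds an adapter module to a secured deployment.
public class ExampleAdapterDependencyProcessor implements DeploymentUnitProcessor {

    @Override
    public void deploy(DeploymentPhaseContext phaseContext) throws DeploymentUnitProcessingException {
        DeploymentUnit unit = phaseContext.getDeploymentUnit();
        ModuleSpecification moduleSpec = unit.getAttachment(Attachments.MODULE_SPECIFICATION);
        // The module name below is an assumption for illustration only.
        moduleSpec.addSystemDependency(new ModuleDependency(
                Module.getBootModuleLoader(),
                ModuleIdentifier.create("org.keycloak.keycloak-adapter-core"),
                false /* optional */, false /* export */,
                false /* importServices */, false /* userSpecified */));
    }

    @Override
    public void undeploy(DeploymentUnit unit) {
        // nothing to clean up in this sketch
    }
}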
I'd like to explore if we can add this extension and the required modules directly to WildFly 10, rather than require users to add it themselves.
WildFly start-up-as-service script depends on console log to determine start result
by Chao Wang
Hi all,
The WildFly start-up-as-service scripts wildfly-init-redhat.sh and
wildfly-init-debian.sh currently depend on grepping the console log for
the key message 'WFLYSRV0025:' to determine whether the service start
was successful. The log message indication is accurate; however, it's
not very robust, since users can always remove the console handler from
the logging subsystem. I have opened a WFCORE enhancement jira,
https://issues.jboss.org/browse/WFCORE-747, for it.
For the moment, I have tried three options, but none of them is perfect
to implement:
1. Stay with the exact log message, but have users define their JBoss
log file, such as $JBOSS_HOME/standalone/log/server.log for standalone
and $JBOSS_HOME/domain/log/host-controller.log for domain, instead of
searching the console log. This is more like another workaround, since
it is still volatile whenever we update the log message in a future
release (EAP has 'JBAS015874:').
2. Use the service pid. This is not precise, because a long start-up can
still crash in the last second. The script needs to wait a suitable
number of seconds before checking that the pid exists, and even then it
cannot avoid a false success in rare cases just before the timeout.
3. Use read-attribute server-state through a CLI connection, as I did in
the pull request on the jira. This was declined because authentication
may be required before connecting, and in that case an unencrypted
password in the configuration files is not advisable (see the sketch
below).
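For completeness, here is a minimal sketch of what the check in option 3 amounts to, written against the native management client API rather than the CLI script itself; the host, port, and the assumption that no authentication is needed are exactly the weak points that led to the idea being declined.

import org.jboss.as.controller.client.ModelControllerClient;
import org.jboss.dmr.ModelNode;

public class ServerStateCheck {

    public static void main(String[] args) throws Exception {
        // Assumes the default management port and no authentication required.
        try (ModelControllerClient client =
                     ModelControllerClient.Factory.create("localhost", 9990)) {
            ModelNode op = new ModelNode();
            op.get("operation").set("read-attribute");
            op.get("address").setEmptyList();   // root resource
            op.get("name").set("server-state");
            ModelNode result = client.execute(op);
            boolean started = "success".equals(result.get("outcome").asString())
                    && "running".equals(result.get("result").asString());
            System.exit(started ? 0 : 1);       // 0 = started, 1 = not (yet) started
        }
    }
}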
Therefore, I would like to hear your opinions on these options. Any
other suggestion is certainly welcome, by mail or on the jira.
Best regards,
Chao
WildFly 10.0.0.Alpha5: NPE in org.hibernate.cache.internal.CacheDataDescriptionImpl.decode() ?
by Frank Langelage
After upgrading WildFly to the latest Alpha5, which includes the
Hibernate ORM update to 5.0.0.CR2, the persistence unit inside my EAR
cannot be started anymore.
11.07. 12:17:54,698 INFO [org.jboss.as.jpa#run] WFLYJPA0010: Starting Persistence Unit (phase 1 of 2) Service 'maj2e-langfr-dev.ear/ejb-entity.jar#maj2e-langfr-dev'
11.07. 12:17:54,954 INFO [org.hibernate.jpa.internal.util.LogHelper#logPersistenceUnitInformation] HHH000204: Processing PersistenceUnitInfo [
    name: maj2e-langfr-dev
    ...]
11.07. 12:17:55,546 INFO [org.hibernate.Version#logVersion] HHH000412: Hibernate Core {5.0.0.CR2}
11.07. 12:17:55,552 INFO [org.hibernate.cfg.Environment#<clinit>] HHH000206: hibernate.properties not found
11.07. 12:17:55,562 INFO [org.hibernate.cfg.Environment#buildBytecodeProvider] HHH000021: Bytecode provider name : javassist
11.07. 12:17:55,701 INFO [org.hibernate.orm.deprecation#<init>] HHH90000001: Found usage of deprecated setting for specifying Scanner [hibernate.ejb.resource_scanner]; use [hibernate.archive.scanner] instead
11.07. 12:17:55,761 INFO [org.hibernate.annotations.common.Version#<clinit>] HCANN000001: Hibernate Commons Annotations {5.0.0.Final}
11.07. 12:17:55,877 INFO [org.jboss.weld.deployer#deploy] WFLYWELD0003: Processing weld deployment ejb-session-core.jar
11.07. 12:17:55,940 INFO [org.jboss.weld.deployer#deploy] WFLYWELD0006: Starting Services for CDI deployment: maj2e-langfr-dev.ear
11.07. 12:17:56,447 INFO [org.jboss.weld.Version#<clinit>] WELD-000900: 2.3.0 (Beta2)
11.07. 12:18:14,049 INFO [org.jboss.weld.deployer#start] WFLYWELD0009: Starting weld service for deployment maj2e-langfr-dev.ear
[GC (Allocation Failure) [PSYoungGen: 475136K->58002K(573440K)] 560180K->143054K(1974272K), 0.4947124 secs] [Times: user=0.80 sys=0.04, real=0.49 secs]
11.07. 12:18:18,028 INFO [org.jboss.as.jpa#run] WFLYJPA0010: Starting Persistence Unit (phase 2 of 2) Service 'maj2e-langfr-dev.ear/ejb-entity.jar#maj2e-langfr-dev'
11.07. 12:18:19,201 INFO [org.hibernate.dialect.Dialect#<init>] HHH000400: Using dialect: org.hibernate.dialect.Oracle10gDialect
11.07. 12:18:19,521 INFO [org.hibernate.envers.boot.internal.EnversServiceImpl#configure] Envers integration enabled? : true
[GC (Allocation Failure) [PSYoungGen: 537234K->50463K(581632K)] 622286K->135516K(1982464K), 0.4342550 secs] [Times: user=0.80 sys=0.00, real=0.43 secs]
11.07. 12:18:36,361 ERROR [org.jboss.msc.service.fail#failed] MSC000001: Failed to start service jboss.persistenceunit."maj2e-langfr-dev.ear/ejb-entity.jar#maj2e-langfr-dev": org.jboss.msc.service.StartException in service jboss.persistenceunit."maj2e-langfr-dev.ear/ejb-entity.jar#maj2e-langfr-dev": javax.persistence.PersistenceException: [PersistenceUnit: maj2e-langfr-dev] Unable to build Hibernate SessionFactory
    at org.jboss.as.jpa.service.PersistenceUnitServiceImpl$1$1.run(PersistenceUnitServiceImpl.java:172)
    at org.jboss.as.jpa.service.PersistenceUnitServiceImpl$1$1.run(PersistenceUnitServiceImpl.java:117)
    at org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:666)
    at org.jboss.as.jpa.service.PersistenceUnitServiceImpl$1.run(PersistenceUnitServiceImpl.java:182)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
    at org.jboss.threads.JBossThread.run(JBossThread.java:320)
Caused by: javax.persistence.PersistenceException: [PersistenceUnit: maj2e-langfr-dev] Unable to build Hibernate SessionFactory
    at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.persistenceException(EntityManagerFactoryBuilderImpl.java:877)
    at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:805)
    at org.jboss.as.jpa.hibernate5.TwoPhaseBootstrapImpl.build(TwoPhaseBootstrapImpl.java:44)
    at org.jboss.as.jpa.service.PersistenceUnitServiceImpl$1$1.run(PersistenceUnitServiceImpl.java:154)
    ... 7 more
Caused by: java.lang.NullPointerException
    at org.hibernate.cache.internal.CacheDataDescriptionImpl.decode(CacheDataDescriptionImpl.java:77)
    at org.hibernate.internal.SessionFactoryImpl.determineEntityRegionAccessStrategy(SessionFactoryImpl.java:628)
    at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:330)
    at org.hibernate.boot.internal.SessionFactoryBuilderImpl.build(SessionFactoryBuilderImpl.java:444)
    at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:802)
    ... 9 more
My persistence.xml:
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence
http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd" version="2.1">
<persistence-unit name="@MBI_DBNAME@" transaction-type="JTA">
<jta-data-source>java:jboss/datasources/@MBI_DBNAME@</jta-data-source>
<shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>
<validation-mode>NONE</validation-mode>
<properties>
<property name="hibernate.dialect"
value="org.hibernate.dialect.@HIBERNATE_DIALECT@"/>
<!--
used values are
"org.hibernate.dialect.Oracle10gDialect"
"org.hibernate.dialect.InformixDialect"
-->
<property name="hibernate.show_sql" value="@SHOW_SQL@"/>
<property name="hibernate.format_sql" value="@SHOW_SQL@"/>
<property name="hibernate.use_sql_comments"
value="@SHOW_SQL@"/>
</properties>
</persistence-unit>
</persistence>
WildFly Core 2.0.0.Alpha8 Released
by James R. Perkins
But wait a minute, what happened to Alpha7? Long story short, I messed
it up :)
--
James R. Perkins
JBoss by Red Hat
Batch Subsystem Changes
by James R. Perkins
Hello All,
The past couple of weeks I've been working on what is basically a redo
of the batch subsystem. Almost the entire management model is changing,
hopefully making it more user-friendly.
In WildFly 8 and WildFly 9 the model looked like the following:
{
"job-repository-type" => "in-memory",
"job-repository" => {"jdbc" => {"jndi-name" => undefined}},
"thread-factory" => undefined,
"thread-pool" => {"batch" => {
"keepalive-time" => {
"time" => 30L,
"unit" => "SECONDS"
},
"max-threads" => 10,
"name" => "batch",
"thread-factory" => undefined
}}
}
The job-repository-type could be either jdbc or in-memory. The jndi-name
attribute on the single job-repository=jdbc resource could either be
undefined, indicating that the default data-source should be used, or
the JNDI name used to look up the data-source, with no validation being
done until the user actually tries to deploy a batch deployment.
The thread-pool and thread-factory are the same as other resources that
use the thread "subsystem" shared resources.
As you can see, it's not very intuitive and, to say the least, somewhat
clumsy. Only a single job-repository could be defined, which isn't great
for multiple deployments.
In WildFly 10 the model, at least currently, will look like:
{
"default-job-repository" => "default",
"in-memory-job-repository" => {"default" => {}},
"jdbc-job-repository" => {"jdbc" => {"data-source" => "ExampleDS"}},
"thread-factory" => undefined,
"thread-pool" => {"batch" => {
"active-count" => 0,
"completed-task-count" => 0L,
"current-thread-count" => 0,
"keepalive-time" => {
"time" => 30L,
"unit" => "SECONDS"
},
"largest-thread-count" => 0,
"max-threads" => 10,
"name" => "batch",
"queue-size" => 0,
"rejected-count" => 0,
"task-count" => 0L,
"thread-factory" => undefined
}}
}
The default-job-repository will be an attribute similar to the previous
job-repository attribute, the difference being that you can reference
any named in-memory-job-repository or jdbc-job-repository. You can have
any number of in-memory or JDBC job repositories.
The data-source attribute value on a jdbc-job-repository resource will
use the org.wildfly.data-source capability [1]. The name of the
data-source is used instead of the JNDI name, which is a much cleaner
approach.
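As a rough illustration of how the new model might be driven, here is a sketch that adds a JDBC repository and points the default at it; the resource and attribute names are taken from the model dump above, while the subsystem name (which may still change to batch-jberet) and the rest of the wiring are assumptions.

import org.jboss.as.controller.client.ModelControllerClient;
import org.jboss.dmr.ModelNode;

public class BatchRepositoryExample {

    public static void main(String[] args) throws Exception {
        try (ModelControllerClient client =
                     ModelControllerClient.Factory.create("localhost", 9990)) {
            // Add a JDBC job repository named "jdbc" backed by the ExampleDS data-source.
            ModelNode addRepo = new ModelNode();
            addRepo.get("operation").set("add");
            addRepo.get("address").add("subsystem", "batch").add("jdbc-job-repository", "jdbc");
            addRepo.get("data-source").set("ExampleDS");
            client.execute(addRepo);

            // Make it the default repository for batch deployments.
            ModelNode setDefault = new ModelNode();
            setDefault.get("operation").set("write-attribute");
            setDefault.get("address").add("subsystem", "batch");
            setDefault.get("name").set("default-job-repository");
            setDefault.get("value").set("jdbc");
            client.execute(setDefault);
        }
    }
}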
The thread-factory may be removed and the thread-pool may be changed to
use attribute groups (once I figure out how to use them :)).
As part of this I considered changing the name from batch to
batch-jberet. The main concern I had with this was the web console, but
I seem to have broken that anyway with the changes to the model. Does
anyone have opinions on a name change to batch-jberet?
Also parsing an old configuration may have some issues if the user was
using a JDBC job repository. I've currently not found a good way to find
a data-source resource name based on a JNDI name. I'm not sure if we
should just fail when adding a legacy JDBC job repository. Any
suggestions here would be helpful.
Any comments or concerns in general are welcome. This is our chance to
get it right this time.
[1]: https://github.com/wildfly/wildfly/pull/7682
--
James R. Perkins
JBoss by Red Hat
Dropping legacy XSD schemas & their parsers
by Tomaž Cerar
Hi folks,
at the team meeting in Brno we discussed dropping support for old legacy
host controllers
when running in mixed domain mode (having a DC of a newer version
managing older-version HCs).
We also discussed dropping old XSD schemas & parsers, as it would help
us clean up and simplify code
in many subsystems, where in some cases we support and maintain 5 or
more different
versions of a parser. For example, currently the web subsystem has 8,
infinispan 7, ejb & jacorb 6, ...
We still have parsers that were shipped back in 7.0.0 and became
obsolete in later 7.0.x releases.
Given that we decided to drop support for running mixed domain mode with
host controllers
older than 7.3.0 (EAP 6.2), as tracked by
https://issues.jboss.org/browse/WFLY-3564,
I would also like to suggest that we do the same for XML schemas & parsers.
*What is the downside?*
Automatic upgrading from a JBoss AS 7.1.x / EAP < 6.2 version using the
same standalone.xml won't work anymore.
Users would need to upgrade to WildFly 8.x and from there to 9 or 10
(depending on when we drop this).
Because of the replacement of the web subsystem with undertow and the
introduction of a few other subsystems (io, SM),
this already doesn't work for 7.x --> 8+, but we do have plans for how
to improve that.
So, are there any objections to this?
--
Tomaž