On the WildFly Elytron PasswordFactory API
by David M. Lloyd
The JDK's cryptography/security architecture includes facilities for
handling many kinds of cryptographic key materials, but it does not
include one to handle text passwords.
Text passwords are handled in a very wide variety of formats and used in
a variety of ways, especially when you add challenge/response algorithms
and legacy systems into the mix. Pursuant to that, there is a new API
inside of WildFly Elytron for the purpose of handling passwords and
translating them between various useful formats.
At present this API is designed to be similar to and consistent with the
JDK key handling APIs.
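For comparison, the JDK key handling pattern being mirrored, in which a
factory turns an algorithm-specific spec into a key object, looks like
this (this is standard JDK PBKDF2 code, not the Elytron API):

```java
import java.security.spec.KeySpec;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class JcaPatternDemo {
    public static void main(String[] args) throws Exception {
        // An algorithm-specific spec describes the raw material...
        KeySpec spec = new PBEKeySpec("p4ssw0rd".toCharArray(),
                new byte[] { 1, 2, 3, 4, 5, 6, 7, 8 }, // salt (demo value)
                10_000,  // iteration count
                160);    // key length in bits
        // ...and a factory for the algorithm turns it into a key object.
        SecretKeyFactory skf = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        SecretKey key = skf.generateSecret(spec);
        System.out.println(key.getAlgorithm() + ": "
                + key.getEncoded().length + " bytes");
    }
}
```

The PasswordFactory/PasswordSpec pairing below follows the same
factory-plus-spec shape.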
So I'll dive right in to examples of usage, based on the use cases that
have been identified so far:
Example: Importing and verifying a passwd file password
-------------------------------------------------------
PasswordFactory pf = PasswordFactory.getInstance("crypt");
// Get a Password for a crypt string
// (passwdChars is the crypt string read from the passwd file, as a char[])
PasswordSpec spec = new CryptStringPasswordSpec(passwdChars);
Password password = pf.generatePassword(spec);
// Now we can verify a guess against it
if (! pf.verify(password, "mygu3ss".toCharArray())) {
    throw new AuthenticationException("Wrong password");
}
Example: Importing and exporting a clear password
-------------------------------------------------
PasswordFactory pf = PasswordFactory.getInstance("clear");
// Import
PasswordSpec spec = new ClearPasswordSpec("p4ssw0rd".toCharArray());
Password password = pf.generatePassword(spec);
// Verify
boolean ok = pf.verify(password, "p4ssw0rd".toCharArray());
// Is it clear?
boolean isClear = pf.convertibleToKeySpec(password, ClearPasswordSpec.class);
assert password instanceof TwoWayPassword;
assert ! (password instanceof OneWayPassword);
// Export again
ClearPasswordSpec clearSpec = pf.getKeySpec(password, ClearPasswordSpec.class);
System.out.printf("The password is: %s%n",
    new String(clearSpec.getEncodedPassword()));
Example: Encrypting a new password
----------------------------------
PasswordFactory pf = PasswordFactory.getInstance("sha1crypt");
// The API is not yet established, but it will possibly be similar to this:
???? parameters = new
???SHA1CryptPasswordParameterSpec("p4ssw0rd".toCharArray());
Password encrypted = pf.generatePassword(parameters);
assert encrypted instanceof SHA1CryptPassword;
If anyone has other use cases they feel need to be covered, or questions
or comments about the API, speak up.
--
- DML
10 years, 6 months
Design Proposal: Build split and provisioning
by Stuart Douglas
This design proposal covers the interrelated tasks of splitting up the
build, and also creating a build/provisioning system that will make it
easy for end users to consume WildFly. Apologies for the length, but it
is a complex topic. The first part explains what we are trying to
achieve; the second part covers how we are planning to actually
implement it.
The WildFly code base is over a million lines of Java and has a test
suite that generally takes close to two hours to run in its entirety.
This makes the project very unwieldy, and the large size and slow test
suite make development painful.
To deal with this issue we are going to split the WildFly code base into
smaller discrete repositories. The planned split is as follows:
- Core: just the WF core
- Arquillian: the arquillian adaptors
- Servlet: a WF distribution with just Undertow, and some basic EE
functionality such as naming
- EE: All the core EE related functionality, EJB's, messaging etc
- Clustering: The core clustering functionality
- Console: The management console
- Dist: brings all the pieces together, and allows us to run all tests
against a full server
Note that this list is in no way final, and is open to debate. We will
most likely want to split up the EE component at some point, possibly
along some kind of web profile/full profile type split.
Each of these repos will build a feature pack, which will contain the
following:
- Feature specification / description
- Core version requirements (e.g. WF10)
- Dependency info on other features (e.g. RestEASY X requires CDI 1.1)
- module.xml files for all required modules that are not provided by
other features
- References to Maven GAVs for jars (possibly a level of indirection
here, module.xml may just contain the group and artifact, and the
version may be in a version.properties file to allow it to be easily
overridden)
- Default configuration snippet, subsystem snippets are packaged in the
subsystem jars, templates that combine them into config files are part
of the feature pack.
- Misc files (e.g. xsds) with indication of where on path to place them
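As a rough illustration only (every element name here is hypothetical,
since the pack format is not yet defined), a feature pack descriptor
along the lines of the list above might look something like:

```xml
<feature-pack name="wildfly-ee" version="9.0.0">
    <!-- core version requirement -->
    <requires core="10"/>
    <!-- dependency on another feature -->
    <dependency feature="cdi" version="1.1"/>
    <!-- modules reference Maven GAVs; the version could live in
         version.properties so it is easy to override -->
    <module name="org.jboss.resteasy.resteasy-jaxrs">
        <artifact groupId="org.jboss.resteasy" artifactId="resteasy-jaxrs"/>
    </module>
    <!-- template that combines subsystem config snippets -->
    <config-template path="configuration/standalone.xml"/>
    <!-- misc files with their target location -->
    <misc-file src="schema/jboss-ee.xsd" target="docs/schema"/>
</feature-pack>
```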
Note that a feature pack is not a complete server; it cannot simply be
extracted and run. It first needs to be assembled into a server by the
provisioning tool. The feature packs also contain only references to
the Maven GAVs of the required jars; they do not contain the actual
jars (which should make them very lightweight).
Feature packs will be assembled by the WF build tool, which is just a
Maven plugin that will replace our existing hacky collection of Ant
scripts.
Actual server instances will be assembled by the provisioning tool,
which will be implemented as a library with several different front
ends, including a maven plugin and a CLI (possibly integrated into our
existing CLI). In general the provisioning tool will be able to
provision three different types of servers:
- A traditional server with all jar files in the distribution
- A server that uses maven coordinates in module.xml files, with all
artifacts downloaded as part of the provisioning process
- As above, but with artifacts being lazily loaded as needed (not
recommended for production, but I think this may be useful from a
developer point of view)
The provisioning tool will work from an XML descriptor that describes
the server that is to be built. In general this information will include:
- GAV of the feature packs to use
- Filtering information if not all features from a pack are required
(e.g. just give me JAX-RS from the EE pack; in this case the only
modules/subsystems installed from the pack will be those that JAX-RS
requires).
- Version overrides (e.g. give me RESTEasy 3.0.10 instead of 3.0.8),
which will allow community users to easily upgrade individual components.
- Configuration changes that are required (e.g. some way to add a
datasource to the assembled server). The actual form this will take
still needs to be decided. Note that this needs to work at both the
user level (a user adding a datasource) and the feature pack level
(e.g. the JON feature pack adding a required data source).
- GAV of deployments to install in the server. This should allow a
server complete with deployments and the necessary config to be
assembled and be immediately ready to be put into service.
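Again purely as an illustration (the descriptor format is still to be
designed, so every element name here is an assumption), a provisioning
descriptor covering the items above might read:

```xml
<server-provisioning>
    <!-- GAV of the feature pack(s) to use -->
    <feature-pack groupId="org.wildfly" artifactId="wildfly-ee-feature-pack"
                  version="9.0.0">
        <!-- only install JAX-RS and whatever it requires -->
        <filter include="jaxrs"/>
        <!-- version override, e.g. a newer RESTEasy -->
        <version-override groupId="org.jboss.resteasy"
                          artifactId="resteasy-jaxrs" version="3.0.10.Final"/>
    </feature-pack>
    <!-- configuration change: add a datasource to the assembled server -->
    <config-change script="add-datasource.cli"/>
    <!-- deployment to install in the server, by GAV -->
    <deployment groupId="com.example" artifactId="shop" version="1.2"/>
</server-provisioning>
```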
Note that if you just want a full WildFly install you should be able to
provision it with a single line in the provisioning file, by specifying
the dist feature pack. We will still provide our traditional download,
which will be built by the provisioning tool as part of our build process.
The provisioning tool will also be able to upgrade servers, which
basically consists of provisioning a new modules directory. Rollback is
provided by provisioning from an earlier version of the provisioning
file. When a server is provisioned the tool will make a backup copy of
the file used, so it should always be possible to examine the
provisioning file that was used to build the current server config.
Note that when an update is performed on an existing server, the config
will not be updated, unless the update adds an additional config file,
in which case the new config file will be generated (existing config,
however, will not be touched).
Note that as a result of this split we will need to do much more
frequent releases of the individual feature packs, to allow the most
recent code to be integrated into dist.
Implementation Plan
The above changes are obviously a big job, and will not happen
overnight. They are also highly likely to conflict with other changes,
so maintaining a long running branch that gets rebased is not a
practical option. Instead the plan is to perform the split in
incremental changes. The basic steps are listed below, some of which can
be performed in parallel.
1) Using the initial implementation of my build plugin (in my
wildfly-build-plugin branch) we split up the server along the lines
above. The code will all stay in the same repo, however the plugin will
be used to build all the individual pieces, which are then assembled as
part of the final build process. Note that the plugin in its current
form does both the build and provision steps, and the pack format it
produces is far from the final pack format that we will want to use.
2) Split up the test suite into modules based on the features that they
test. This will result in several smaller modules in place of a single
large one, which should also be a usability improvement as individual
tests will be faster to run, and run times for all tests in a module
should be more manageable.
3) Split the core into its own module.
4) Split everything else into its own module. As part of this step we
need to make sure we still have the ability to run all tests against the
full server, as well as against the cut down feature pack version of the
server.
5) Focus on the build and provisioning tool, to implement all the
features above, and to finalize the WF pack format.
I think that just about covers it. There are still lots of nitty-gritty
details that need to be worked out, but I think this covers all the
main aspects of the design. We are planning on starting work on this
basically immediately, as we want to get this implemented as early in
the WF9 cycle as possible.
Stuart
Subsystems
by Florian Pirchner
Hi,
I have a question: are subsystems in WildFly 8 based on the OSGi
subsystem specification?
It seems that OSGi was removed from the kernel and can be added as an
add-on; is that right?
Thanks, Florian
Re: [wildfly-dev] Design Proposal: Build split and provisioning
by Tomaž Cerar
I already have some work done for 3)...
Sent from my phone
From: Stuart Douglas
Sent: 11.6.2014 17:58
To: Stuart Douglas
Cc: Wildfly Dev mailing list
Subject: Re: [wildfly-dev] Design Proposal: Build split and provisioning
Something that I did not cover was how to actually do the split in
terms of preserving history. We have a few options:
1) Just copy the files into a clean repo. There is no history in the
repo, but you could always check the existing wildfly repo if you really
need it.
2) Copy the complete WF repo and then delete the parts that are not
going to be part of the new repo. This leaves complete history, but
means that checkouts will be larger than they should be.
3) Use git-filter-branch to create a new repo with just the history of
the relevant files. We still have a small checkout size, but the history
is still in the repo.
I think we should go with option 3.
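For option 3, a sketch of the git-filter-branch approach, demonstrated
on a scratch repo (the directory names are hypothetical stand-ins; for
the real split you would start from a fresh clone of wildfly and list
the actual paths that move to other repos):

```shell
# Demonstration on a scratch repo standing in for the full WildFly tree.
tmp=$(mktemp -d) && cd "$tmp"
git init -q split-demo && cd split-demo
git config user.email demo@example.com && git config user.name demo
mkdir core testsuite
echo core > core/pom.xml && echo tests > testsuite/pom.xml
git add . && git commit -qm "initial layout"

# Rewrite every commit, dropping the paths that move to other repos.
# Only commits that touch the remaining files survive (--prune-empty).
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch --prune-empty \
  --index-filter 'git rm -r --cached --ignore-unmatch testsuite' \
  -- --all

git ls-tree -r HEAD --name-only   # only core/pom.xml remains
```

This keeps the checkout small while preserving the history of the
retained files, which is exactly the trade-off option 3 describes.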
Stuart
Stuart Douglas wrote:
> This design proposal covers the inter related tasks of splitting up the
> build, and also creating a build/provisioning system that will make it
> easy for end users to consume Wildfly.
> [rest of the original proposal snipped; see the full post above]
_______________________________________________
wildfly-dev mailing list
wildfly-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/wildfly-dev
New security sub-project: WildFly Elytron
by David M. Lloyd
WildFly Elytron [1] is a new WildFly sub-project which will completely
replace the combination of PicketBox and JAAS as the WildFly client and
server security mechanism.
An "elytron" (ĕl´·ĭ·trŏn, plural "elytra") is the hard, protective
casing over a wing of certain flying insects (e.g. beetles).
Here is a high-level project summary:
WildFly Elytron does presently, or will, satisfy the following goals:
• Establish and clearly define terminology around WildFly's security
concepts
• Provide support for secure server-side authentication mechanisms (i.e.
eliminating the historical "send the password everywhere" style of
authentication and forwarding) supporting HTTP [2], SASL [3] (including
SASL+GSSAPI [4]), and TLS [5] connection types, as well as supporting
other authentication protocols in the future without change (such as
RADIUS [6], GSS [7], EAP [8])
• Provide a simple means to support multiple security associations per
security context (one per authentication system, including local and
remote application servers, remote databases, remote LDAP, etc.)
• Provide support for password credential types using the standard JCE
archetypal API structure (including but not limited to plain, UNIX
DES/MD5/SHA crypt types, bcrypt, mechanism-specific pre-hashed
passwords, etc.)
• Provide SPIs to support all of the above, such that consumers such as
Undertow, JBoss SASL, HornetQ etc. can use them directly with a minimum
of integration overhead
• Provide SPIs to support and maintain security contexts
• Integrate seamlessly with PicketLink IDM and Keycloak projects
• Provide SPIs to integrate with IDM systems (such as PicketLink) as
well as simple/local user stores (such as KeyStores or plain files, and
possibly also simple JDBC and/or LDAP backends as well)
• Provide SPIs to support name rewriting and realm selection based on
arbitrary, pluggable criteria
• Provide a Remoting-based connection-bound authentication service to
establish or forward authentication between systems
• Provide SPIs to allow all Remoting-based protocols to reuse/share
security contexts (EJB, JNDI, etc.)
• Integrate seamlessly with Kerberos authentication schemes for all
authentication mechanisms (including inbound and outbound identity
propagation for all currently supporting protocols)
• Provide improved integration with EE standards (JACC and JASPIC)
The following are presently non- or anti-goals:
• Any provision to support JAAS Subject as a security context (due to
performance and correctness concerns)†
• Any provision to support JAAS LoginContext (due to tight integration
with Subject)
• Any provision to maintain API compatibility with PicketBox (this is
not presently an established requirement and thus would add undue
implementation complexity, if it is indeed even possible)
• Replicate Kerberos-style ticket-based credential forwarding (just use
Kerberos in this case)
† You may note that this is in contrast with a previous post to the AS 7
list [9] in which I advocated simply unifying on Subject. Subsequent
research uncovered a number of performance and implementation weaknesses
in JAAS that have since convinced the security team that we should no
longer be relying on it.
Most of the discussion on this project happens in the #wildfly-dev+
(note the plus sign) channel on FreeNode IRC. At some point in the
near-ish future I will hopefully also have some (open-source)
presentation materials about the architecture.
Questions and comments welcome; feel free to peruse the code and comment
in GitHub as well.
References/links:
[1] https://github.com/wildfly-security/wildfly-elytron
[2] http://tools.ietf.org/html/rfc2616
[3] http://tools.ietf.org/html/rfc4422
[4] http://tools.ietf.org/html/rfc4752
[5] http://tools.ietf.org/html/rfc5246
[6] http://tools.ietf.org/html/rfc2865 and
http://tools.ietf.org/html/rfc2866
[7] http://tools.ietf.org/html/rfc2743 and related
[8] http://tools.ietf.org/html/rfc3748
[9] http://lists.jboss.org/pipermail/jboss-as7-dev/2013-February/007730.html
--
- DML
WildFly 9 Naming Rework (Design+Impl Discussion)
by Eduardo Martins
Last year I’ve been gathering the pain points with our current JNDI and
@Resource injection related code. The complaints I’ve noted (meetings,
user forum, mailing list, etc.) are mostly:
- too much code needed to do simple things, such as binding a JNDI
entry, and very low code reuse
- Naming related APIs that are not only easy to misuse, i.e. very error
prone, but also promote multiple ways to do the same thing
- not as slim or performant as it could and should be
Also, new functionality is needed/desired, most relevantly:
- the ability to use the Naming subsystem configuration to add bindings
to the scoped EE namespaces java:comp, java:module and java:app
- access to bindings in the scoped EE namespaces even without an EE
component in context, for instance Persistence Units targeting the
default datasource at java:comp/DefaultDatasource
With all of the above in mind, I started reworking Naming/EE for WFLY
9, and that work is ready to be presented and reviewed.
I created a Wiki page to document the design and APIs, which should
later evolve into the definitive guide for WildFly subsystem developers
wrt JNDI and @Resource. Check it out at https://docs.jboss.org/author/display/WFLY9/WildFly+9+JNDI+Implementation
A fully working PoC, which passes our testsuites, is already available
at https://github.com/emmartins/wildfly/tree/wfly9-naming-rework-v3
Possible further design/impl enhancements:
Is there really a good reason not to merge all the global naming stores
into a single “java:” one, and simplify (a lot) the logic to compute
which store a JNDI name is relative to?
- java:
- java:jboss
- java:jboss/exported
- java:global
- shared java:comp
- shared java:module
- shared java:app
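The single-store idea above would reduce that computation to a
longest-prefix match; a minimal sketch (class and method names are
mine, not from the PoC):

```java
import java.util.Arrays;
import java.util.List;

public class StoreResolver {
    // Namespace prefixes, longest first, mirroring the stores listed
    // above; "shared" variants omitted for brevity.
    static final List<String> PREFIXES = Arrays.asList(
            "java:jboss/exported", "java:jboss", "java:global",
            "java:comp", "java:module", "java:app", "java:");

    static String storeFor(String name) {
        for (String p : PREFIXES) {
            // "java:" has no trailing separator; the others match on "/"
            String prefix = p.endsWith(":") ? p : p + "/";
            if (name.equals(p) || name.startsWith(prefix)) {
                return p;
            }
        }
        throw new IllegalArgumentException("Not a java: name: " + name);
    }

    public static void main(String[] args) {
        System.out.println(storeFor("java:jboss/exported/foo")); // java:jboss/exported
        System.out.println(storeFor("java:comp/env/jdbc/ds"));   // java:comp
    }
}
```

A real implementation would of course also have to deal with
context-relative names and the shared vs. scoped namespace distinction.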
Since there is now a complete java: namespace always present, we could
avoid multiple binds of the same resource unless required by the spec
or wanted for remote access, e.g. java:jboss/ORB and java:comp/ORB.
Don’t manage binds made through Context#bind() (JNDI API); the
module/app binder would be responsible for both binding and unbinding,
as expected elsewhere when using the standard JNDI API. Besides
simplifying our writable naming store logic, this would make 3rd-party
libs usable in WildFly without modifications or special APIs being
exposed. Note that this applies to global namespaces only; the scoped
java:app, java:module and java:comp namespaces are read-only when
accessed through the JNDI API.
Remove the unofficial(?) policy that defines JNDI names relative to
java:, and use only the EE (xml & annotations) standard policy, which
defines that all such names are relative to java:comp/env.
—E
PS: the shared PoC is not complete wrt new API usage; it just includes
showcases for each feature.
WildFly Bootstrap(ish)
by James R. Perkins
For the wildfly-maven-plugin I've written a simple class to launch a
process that starts WildFly. It also has a thin wrapper around the
deployment builder to ease the deployment process.
I've heard we've been asked a few times about possibly creating a Gradle
plugin. As I understand it you can't use a maven plugin with Gradle. I'm
considering creating a separate bootstrap(ish) type of project to simple
launch WildFly from Java. Would anyone else find this useful? Or does
anyone have any objections to this?
--
James R. Perkins
JBoss by Red Hat
getRequestURI returns welcome file instead of original request
by arjan tijms
Hi,
I noticed there's a difference in behaviour between JBossWeb/Tomcat and
Undertow with respect to welcome files.
Given a request to / and a welcome file set to /index
JBossWeb will return "/" when HttpServletRequest#getRequestURI is called,
and "/index" when HttpServletRequest#getServletPath is called.
Undertow will return "/index" in both cases.
It's clear what happens by looking at ServletInitialHandler#handleRequest,
which does a full rewrite for welcome files:

exchange.setRelativePath(exchange.getRelativePath() + info.getRewriteLocation());
exchange.setRequestURI(exchange.getRequestURI() + info.getRewriteLocation());
exchange.setRequestPath(exchange.getRequestPath() + info.getRewriteLocation());
The Servlet spec (10.10) does seem to justify this somewhat by saying the
following:
"The container may send the request to the welcome resource with a forward,
a redirect, or a container specific mechanism that is indistinguishable
from a direct request."
However, the JavaDoc for HttpServletRequest#getRequestURI doesn't seem to
allow this.
In any case, it's a nasty difference that breaks various things.
Wonder what the general opinion is about this. Was it a conscious decision
to do a full rewrite in Undertow, or was it something that slipped through?
Kind regards,
Arjan