EJB Remote client design
by David M. Lloyd
Okay, so after a bunch of discussion with various personages, it looks
like we have the remote EJB client design shaken out. Please read
through a couple of times to make sure you have a clear understanding,
to avoid questions that are already answered here.
Client API
----------
We'll introduce a concept of an "EJB invocation receiver" or "EJB
receiver" for short. The job of this fellow is to receive invocations
on behalf of a connection to a peer or a cluster or locally. So for the
Remoting transport, there is a 1:1 correspondence between connections
and EJB receivers.
Multiple EJB receivers can be collected up into a single "EJB client
context", each with a "preference" level. Same-VM receivers will
typically have a higher preference than remote receivers.
An EJB remote proxy (and by extension, its Handle) identifies its EJB by
the combination of application name, module name, "distinct" name, and
bean name. All of the equals()-type operations specified by EJB are
implemented in terms of this basic level of equivalence (though some
types have additional criteria, such as SFSBs also using the session
ID). There is no notion of a specific server address or URI in a proxy
or handle.
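As a sketch of what that identity tuple might look like: a simple value class whose equality deliberately excludes any server address. The name EJBIdentifier and the normalization of the optional distinct name are invented here, not the actual client API.

```java
import java.util.Objects;

// Invented sketch of the identity tuple a remote proxy would carry; note
// there is deliberately no server address or URI in it.
final class EJBIdentifier {
    private final String appName;
    private final String moduleName;
    private final String distinctName;   // optional; normalized to ""
    private final String beanName;

    EJBIdentifier(String appName, String moduleName,
                  String distinctName, String beanName) {
        this.appName = appName;
        this.moduleName = moduleName;
        this.distinctName = distinctName == null ? "" : distinctName;
        this.beanName = beanName;
    }

    // equals()-type operations specified by EJB reduce to this equivalence
    // (SFSB proxies would additionally compare the session ID).
    @Override public boolean equals(Object o) {
        if (!(o instanceof EJBIdentifier)) return false;
        EJBIdentifier other = (EJBIdentifier) o;
        return appName.equals(other.appName)
            && moduleName.equals(other.moduleName)
            && distinctName.equals(other.distinctName)
            && beanName.equals(other.beanName);
    }

    @Override public int hashCode() {
        return Objects.hash(appName, moduleName, distinctName, beanName);
    }
}
```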
The "distinct" name is an _optional_ name which can be associated with a
deployment (by deployment descriptor, and possibly also by
configuration) in the event that two different deployments which are
visible from one client have the same application and module name.
(Yes, I know you can't have two deployments with the same app+module
name in one server; however, a client can "see" more than one server at
once and may need to distinguish, especially if the user does not
control the name of the deployment.)
Each EJB receiver has an obligation to keep the EJB client context
updated with the list of module identities that it can access. When a
proxy is invoked upon, the EJB client context uses this table to select
the destination. This means that for the Remoting protocol for example,
the server sends back messages informing the client of changes in
deployment status (which should be relatively infrequent) as well as
sending an initial summary of what modules are available.
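The routing described above can be sketched roughly as follows. All names are invented; the point is just that receivers report their reachable modules to the context, and the context routes an invocation to the highest-preference receiver advertising the target module.

```java
import java.util.*;

// Invented sketch: receivers keep the client context updated with the
// modules they can reach; invocations route to the highest-preference
// receiver that currently advertises the target module.
final class ModuleId {
    final String app, module, distinct;
    ModuleId(String app, String module, String distinct) {
        this.app = app; this.module = module; this.distinct = distinct;
    }
    @Override public boolean equals(Object o) {
        if (!(o instanceof ModuleId)) return false;
        ModuleId m = (ModuleId) o;
        return app.equals(m.app) && module.equals(m.module)
            && distinct.equals(m.distinct);
    }
    @Override public int hashCode() { return Objects.hash(app, module, distinct); }
}

final class Receiver {
    final String name;
    final int preference;                      // same-VM > remote
    final Set<ModuleId> modules = new HashSet<>();
    Receiver(String name, int preference) {
        this.name = name; this.preference = preference;
    }
}

final class EJBClientContext {
    private final List<Receiver> receivers = new ArrayList<>();

    void register(Receiver r) { receivers.add(r); }

    // A receiver calls this when the server reports a deployment change.
    void moduleAvailable(Receiver r, ModuleId id) { r.modules.add(id); }

    // Route an invocation: highest-preference receiver that has the module.
    Receiver selectReceiver(ModuleId target) {
        return receivers.stream()
            .filter(r -> r.modules.contains(target))
            .max(Comparator.comparingInt((Receiver r) -> r.preference))
            .orElseThrow(() -> new IllegalStateException("no receiver for module"));
    }
}
```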
Most client interceptors are associated with both an EJB client context
and a receiver type. This is because things like transactions,
security, etc. will vary in their implementation based upon the protocol
in use.
We _may_ support protocol-independent interceptors, however we'll have
to evaluate use cases first. If we do then most likely they'd work with
a protocol-dependent counterpart; for example, a general security
interceptor might attach additional principal information to the
invocation in a standard spot, which the protocol-specific interceptor
might then publish to the server (or not).
An invocation on a remote proxy will use whatever EJB client context is
current for the calling thread. Thus remote proxies and handles can be
passed from one context to another (directly or via cloning or
serialization) without any special action being taken. If no EJB client
context is available, remote invocation will fail immediately.
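The per-thread association part could be as simple as a ThreadLocal lookup at invocation time; the proxy itself holds no context. This is an invented sketch (the class name and Object-typed context are placeholders):

```java
// Invented sketch: invocation code asks a ThreadLocal for whichever
// context is current, and fails immediately if none is set.
final class CurrentEJBClientContext {
    private static final ThreadLocal<Object> CURRENT = new ThreadLocal<>();

    static void set(Object context) { CURRENT.set(context); }

    static Object require() {
        Object ctx = CURRENT.get();
        if (ctx == null) {
            throw new IllegalStateException(
                "No EJB client context associated with this thread");
        }
        return ctx;
    }
}
```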
Remote Async Invocation
-----------------------
Any Session Bean remote interface method which returns a Future will
always be treated as asynchronous by the client. Methods which return
void but are visibly (to the client) marked with the appropriate
annotation will be treated as asynchronous. Other methods which return
void will not be automatically treated as asynchronous, though the
server protocol allows for a message to come back to inform the client
that the invocation will proceed asynchronously.
If the client knows that a method is asynchronous or wants to call a
void method asynchronously, it may use a static API method to acquire an
asynchronous "view" of that interface so that all calls to void methods
proceed asynchronously:
myProxy.theVoidMethod();            // called sync unless the server unblocks
EJB.async(myProxy).theVoidMethod(); // called async always
This cannot be solved any other way as the client might not have access
to the EJB metadata which specifies whether the method is asynchronous.
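If we go the static-method route, the async "view" could be built with a dynamic proxy. Here is a rough sketch under that assumption; AsyncView, Demo, and the local executor are invented for illustration (the real implementation would dispatch through the client context, not a local pool):

```java
import java.lang.reflect.*;
import java.util.concurrent.*;

// Invented sketch of what EJB.async(proxy) could do: wrap the interface
// in a dynamic proxy that dispatches void methods to an executor instead
// of blocking the caller.
final class AsyncView {
    private static final ExecutorService POOL =
        Executors.newCachedThreadPool(r -> {
            Thread t = new Thread(r);
            t.setDaemon(true);
            return t;
        });

    @SuppressWarnings("unchecked")
    static <T> T async(T target, Class<T> view) {
        return (T) Proxy.newProxyInstance(
            view.getClassLoader(), new Class<?>[] { view },
            (p, method, args) -> {
                if (method.getReturnType() == void.class) {
                    POOL.submit(() -> {
                        try { method.invoke(target, args); }
                        catch (Exception ignored) { }
                    });
                    return null;                    // caller does not block
                }
                return method.invoke(target, args); // non-void stays sync
            });
    }
}

// Illustrative interface/implementation for trying the view out.
interface Demo { void fire(); int answer(); }

final class DemoImpl implements Demo {
    volatile boolean fired;
    public void fire() { fired = true; }
    public int answer() { return 42; }
}
```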
Client JNDI
-----------
Client JNDI is just a simple in-memory JNDI implementation. We will not
automatically bind anything to it at all, except in two possible cases:
1. A simple configuration file which describes what to bind, or
2. A configuration instructing a binding list to be fetched from somewhere.
Clients running in an AS7 instance or in an AS7 application client
container will be using our server JNDI implementation which supports
injection, not the client JNDI implementation, which is reserved for
truly standalone clients.
Server Implementation
---------------------
The EE subsystem already contains most of the required APIs for handling
remote invocations. Any remotely accessible ComponentView will be
registered with the remote invocation service along with its
app/module/distinctName identification.
A special in-VM EJB receiver will also exist and will be similarly
updated with the current available component views and their modules.
Management configurations will be introduced for the purpose of
establishing and maintaining Remoting connections to remote servers.
EJB client contexts should be created for each deployment, and a
deployment descriptor should be made available to any EE deployment for
the purpose of associating Remoting connections (and probably clusters
etc.) with the EJB client context for that EE module. This allows each
deployment to specify what remote servers it can "see", while still
keeping the network configuration business in the central management model.
All deployments on the server which have an EJB client context will have
the in-VM receiver added to them automatically, so that they can access
all EJBs in the server via their remote interfaces locally.
Summary
-------
Okay I think that's it for the first draft. Please give feedback on
anything that I might have missed or anything that's just outright wrong
and needs to be changed. In particular I'm currently assuming that IIOP
exists in a world outside of other invocation forms, but that might not
actually be desirable.
--
- DML
13 years, 3 months
security APIs/SPIs really need a redesign
by Bill Burke
I'll try to write a blog about this too, but the security APIs/SPIs
really need a rethink. Originally, the whole security-domain concept and
the Tomcat Realm centered around passwords or an X509Certificate (for
client cert). Passwords alone basically suck for security. We use a
soft or hard token for our VPN; why wouldn't we use something similar
for JBoss-deployed applications?
There's all different kinds of information that needs to be stored in a
security-domain now:
- passwords
- hashed passwords
- secret-keys (for TOTP, soft-tokens)
- remembering nonces (Digest and OAuth come to mind here)
- remembering request and access tokens (think OAuth)
- URLs (Think OpenID)
- KeyPairs when you're dealing with digital signatures or client
certificate authentication
- JPG images. Think of Bank of America, which shows you a secret image
when you log in so that you know somebody isn't spoofing their site.
- Client IP addresses for when you want to tie a user to a client IP
Our legacy APIs/SPIs worked nicely because everything was password
based, so the security-domain could also do authentication: extract the
username/password from the HTTP request (or remote EJB request) and just
check it against the password storage. Now, though, there's a growing
set of protocols that need access to the HTTP request itself, especially
if the request is digitally signed in some manner, and the line between
the protocol and the security-domain starts to really blur.
Another huge problem with our security SPIs is that LoginModules are
stateless. There's really no way, other than hacks, to point one at
specific storage so it can do things like remember nonces, temporary
secrets or certificates, or previous IP address connections.
Yet another issue that I think may come up is dual authentication
mechanisms for the same resources (URLs). A regular user may query the
site via traditional authentication, vs. a 3rd-party consumer which uses
OAuth. With our current WAR/web.xml model, you can only use one or the
other.
The final problem I'm currently seeing is that it's hard to re-use the
storage capabilities of our security plugins (.properties, LDAP, JDBC,
etc.). What we currently have is a mish-mash of weird, hard-to-extend
class hierarchies with no clear line between the storage of information,
the algorithm being used, and the process of authentication and
authorization. If we're going to support more complex models, we need
to create better SPIs here.
So what to do?
#1 I suggest defining a Security Storage SPI: something
key/value/values based that is listable, i.e. something like:
interface SecurityStore {
    public Object get(String key);
    public List<Object> list(String key);
    public void put(String key, Object value);
    public void put(String key, List<Object> value);
}
A key would look like a URL path, i.e.:
/users
/users/bill
/users/bill/private-key
/users/bill/public-key
/users/bill/password
/users/bill/totp-key
/applications/myapp
/applications/myapp/roles
/applications/myapp/roles/admin
/applications/myapp/roles/admin/users
/applications/myapp/roles/admin/users/bill
The security store could then be mapped to a properties file, XML file,
LDAP storage, JDBC, etc.
Whether or not we use an existing thing here, i.e. Infinispan, JCR, or
whatever, is irrelevant, but we need a simple generic storage mechanism
to give ultimate flexibility to security extension developers. Some
suggestions on what to use for this mechanism would be greatly appreciated.
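To make the SPI concrete, here's a minimal in-memory sketch of the interface above with the URL-style keys. The listing semantics (values of the direct children of a key path) are my assumption; the proposal doesn't pin them down.

```java
import java.util.*;

// Minimal in-memory sketch of the proposed SecurityStore SPI. Listing
// returns the values stored under the direct children of a key path
// (an assumption, for illustration only).
interface SecurityStore {
    Object get(String key);
    List<Object> list(String key);
    void put(String key, Object value);
    void put(String key, List<Object> value);
}

final class InMemorySecurityStore implements SecurityStore {
    private final SortedMap<String, Object> entries = new TreeMap<>();

    public Object get(String key) { return entries.get(key); }

    public List<Object> list(String key) {
        String prefix = key.endsWith("/") ? key : key + "/";
        List<Object> values = new ArrayList<>();
        for (Map.Entry<String, Object> e : entries.tailMap(prefix).entrySet()) {
            if (!e.getKey().startsWith(prefix)) break;        // past the subtree
            if (e.getKey().indexOf('/', prefix.length()) < 0) // direct child only
                values.add(e.getValue());
        }
        return values;
    }

    public void put(String key, Object value) { entries.put(key, value); }

    public void put(String key, List<Object> value) { entries.put(key, value); }
}
```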
#2 Deprecate JAAS and write our own SPIs/APIs.
#3 Decide where authentication happens. Does it happen within a Tomcat
Valve, with persistent security information queried directly from the
SecurityStore? Do we have a security-domain and delegate to it for
authentication? (In this case, the security-domain would need access to
the request object.) I think I prefer full delegation to a
SecurityDomain, as storage, the authentication mechanism, and
configuration of the authentication mechanism pretty much go hand in hand.
#4 We need to make it fairly easy to develop security extensions.
#5 Try to support legacy deployment options with the new model.
#6 Going along with #3, I really like the idea of adding an
<auth-method> of JBOSS, or JBOSS-SECURITY-DOMAIN, so that authentication
is handled fully by a JBoss subsystem.
--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
security metadata
by Bill Burke
I want to talk about where app developers want to store security
metadata, how, and what the format is.
I've already discussed a bit of the types of information that needs to
be stored:
- username/password
- keypairs
- JPG images
- TOTP keys
- nonces
- Tokens
Where do people store this information?
- 3rd Party IDP
- 3rd party directory services (LDAP, ActiveDirectory)
- config files within an app-deployment (WAR, EAR)
- config files outside an app-deployment
- a database
What does the metadata look like?
- JBoss defined schemas
- Externally defined schemas (SAML, XACML, custom)
How do they manage this metadata? Do our larger customers want to use
non-JBoss identity management solutions? Would they use something we
provided?
--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com
Re: [jboss-as7-dev] Remoting issues
by Darran Lofthouse
On 09/23/2011 01:20 PM, Kabir Khan wrote:
>
> On 22 Sep 2011, at 21:41, Brian Stansberry wrote:
>
>> Real quick reply:
>>
>> 1) IMO only having a single remoting socket for both management and
>> other uses is A Real Good Thing™. But if there are situations that
>> prevent that, I don't think it's terrible.
> Since the socket sets up the security, we might want different ones for management vs user stuff? Darran how does this fit in with your plans?
I was talking to David about this one the other night. It actually goes
beyond a split for management stuff vs. user stuff, as each deployment
could be backed by its own different user repository with different
capabilities - these different capabilities then affect the mechanisms
that can be used for the authentication step.
The idea David is thinking of is to make the realm selection before the
authentication actually begins; that way the authentication can be based
on the capabilities of the selected realm.
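As a toy sketch of that realm-first flow (realm names, capability strings, and mechanism names are all invented for illustration): the realm is selected first, and its capabilities then determine which authentication mechanisms may even be offered.

```java
import java.util.*;

// Toy sketch: pick the realm first, then the mechanism. A realm's
// capabilities determine which mechanisms can be offered.
final class Realm {
    final String name;
    final Set<String> capabilities;   // e.g. "plain-password", "digest"

    Realm(String name, Set<String> capabilities) {
        this.name = name;
        this.capabilities = capabilities;
    }

    // Offer only the mechanisms this realm can actually back.
    List<String> supportedMechanisms() {
        List<String> mechs = new ArrayList<>();
        if (capabilities.contains("plain-password")) mechs.add("PLAIN");
        if (capabilities.contains("digest")) mechs.add("DIGEST-MD5");
        return mechs;
    }
}

final class RealmSelector {
    private final Map<String, Realm> realmsByApp = new HashMap<>();

    void associate(String app, Realm realm) { realmsByApp.put(app, realm); }

    // Selection happens before authentication begins.
    Realm select(String app) {
        Realm r = realmsByApp.get(app);
        if (r == null) throw new IllegalStateException("no realm for " + app);
        return r;
    }
}
```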
Regardless of opinions on sharing the management interface publicly,
this problem would need to be solved anyway to cope with different
realms for different apps, so I don't think management is a specific
concern.
Clients using the connection would then need to treat it as an
authenticated connection, or a connection with an identity - if they
want to operate on an EJB as one user and perform domain management
tasks as another user, they would need two connections to the server.
>> 2) We need to continue to support AS 7.0-style configs. That to me means
>> for cases where that style config is used, we create a separate endpoint.
>>
>> 3) For a domain mode server, we can't force people to add a remoting
>> subsystem in their domain.xml profile. We talked about having the
>> HostController generate one, but that will result in the server having a
>> profile that does not match what was configured in domain.xml --
>> suddenly a bonus subsystem appears. That is no good.
>>
>> 4) For a domain mode server if the HostController is going to configure
>> the server to set up a native management interface that points to the
>> remoting subsystem in the profile, it needs to be told how to do so. One
>> solution, is, in domain.xml:
>>
>> <server-group name="main-server-group" profile="default">
>> <native-management remoting-connector="management"/>
>> </server-group>
>>
>> In summary, I think reusing the remoting subsystem endpoint is great and
>> we should have our standard configs set up that way but there are cases
>> where things aren't going to be configured that way.
>>
>> On 9/22/11 2:19 PM, Kabir Khan wrote:
>>> I'm trying to understand the issues in remoting subsystem vs the management usage a bit better before I dig into this. We have 3 ways remoting is set up:
>>>
>>> 1) Standalone server
>>> a) Endpoint is set up when installing the subsystem
>>> b) Management is set up and creates a new stream server and channel open listener for ("management") with the endpoint from a) injected
>>>
>>> 2) Host controller
>>> a) Endpoint is set up by the bootstrap
>>> b) Bootstrap sets up the management stream server and channel open listeners for (using endpoint from a) injected
>>> -"management" - i.e. traffic on the management address
>>> -"server" - i.e. traffic from a server
>>> -"domain" - if it is the master, to listen to traffic from slaves
>>> c) If it is a slave it connects to the master on the "domain" channel
>>>
>>> 3) Domain mode server
>>> a) Endpoint is set up when installing the subsystem
>>> b) No management stream server is created
>>> c) A channel is opened to the HC using the endpoint from a) on the "server" channel.
>>>
>>> So, I think the issue is that the core depends on stuff set up by a subsystem? A problem in 3 is that if there is no remoting subsystem no endpoint is created, so communication with HC will not start and we will not get the subsystem config from the HC.
>>>
>>> Something doesn't feel quite right but I'm not sure what, so I'm throwing out some ideas.
>>>
>>> The remoting subsystem is quite basic at the moment and the code to set up new connectors is commented out.
>>>
>>> So maybe we should stick with what we have for HC:
>>> <management>
>>> <security-realms>
>>> SNIP
>>> </security-realms>
>>> <management-interfaces>
>>> <native-interface interface="management" port="9999" />
>>> <http-interface interface="management" port="9990"/>
>>> </management-interfaces>
>>> </management>
>>>
>>> But for the standalone server case do something like
>>>
>>> <management>
>>> <management-interfaces>
>>> <native-channel name="server" />
>>> <http-interface interface="management" port="9990"/>
>>> </management-interfaces>
>>> </management>
>>>
>>> <subsystem xmlns="urn:jboss:domain:remoting:1.0">
>>> <connector socket-binding="remote-management">
>>> <security-stuff/>
>>> <channel name="server" type="management"/>
>>> </connector>
>>> <connector socket-binding="user">
>>> <security-stuff/>
>>> <channel name="jndi" type="jndi"/>
>>> <channel name="ejb" type="ejb"/>
>>> </connector>
>>> </subsystem>
>>>
>>> Or maybe everything all goes over one socket so
>>> <subsystem xmlns="urn:jboss:domain:remoting:1.0">
>>> <connector socket-binding="remote-management">
>>> <security-stuff/>
>>> <channel name="server" type="management"/>
>>> <channel name="jndi" type="jndi"/>
>>> <channel name="ejb" type="ejb"/>
>>> </connector>
>>> </subsystem>
>>>
>>> I'm not clear on the security side of this but we now have the http side of it securing itself in one way and the native channel in another, so maybe this is better
>>>
>>> <management>
>>> <management-interfaces>
>>> <native-channel name="server" />
>>> <http-interface interface="management" port="9990">
>>> <security-realms>
>>> SNIP
>>> </security-realms>
>>> </http-interface>
>>> </management-interfaces>
>>> </management>
>>>
>>>
>>> Or the alternative for the domain mode server is to use two endpoints, one for management installed by the core, and one for other stuff installed by the remoting subsystem.
>>>
>>>
>>> _______________________________________________
>>> jboss-as7-dev mailing list
>>> jboss-as7-dev(a)lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/jboss-as7-dev
>>
>>
>> --
>> Brian Stansberry
>> Principal Software Engineer
>> JBoss by Red Hat
>
Fwd: Remoting issues
by Kabir Khan
Using my right account to send to list this time
Begin forwarded message:
> From: Kabir Khan <kabir.khan(a)jboss.com>
> Subject: Re: [jboss-as7-dev] Remoting issues
> Date: 23 September 2011 13:18:23 GMT+01:00
> To: Brian Stansberry <brian.stansberry(a)redhat.com>, Darran Lofthouse <darran.lofthouse(a)jboss.com>
> Cc: "jboss-as7-dev(a)lists.jboss.org Development" <jboss-as7-dev(a)lists.jboss.org>
>
> So:
>
> 1) HC continues to function the same way as it does at the moment.
> 2) For standalone servers we use a remoting-connector attribute to say we want to use a connector from the remoting subsystem; if we use the interface and address, we open the endpoint ourselves.
> 3) For domain mode servers we use the native management stuff to say we want to use the endpoint from the remoting subsystem. This is slightly different from your
>> <server-group name="main-server-group" profile="default">
>> <native-management remoting-connector="management"/>
>> </server-group>
>
> since as far as I know domain mode servers don't actually open a management server connection; they connect to the HC. So something like
> <server-group name="main-server-group" profile="default">
> <native-management remoting-endpoint="endpoint"/>
> </server-group>
>
> Since the channels are named I think they would need to be configured as well under the sockets as mentioned already:
>>> <subsystem xmlns="urn:jboss:domain:remoting:1.0">
>>> <connector socket-binding="remote-management">
>>> <security-stuff/>
>>> <channel name="server" type="management"/>
>>> </connector>
>>> <connector socket-binding="user">
>>> <security-stuff/>
>>> <channel name="jndi" type="jndi"/>
>>> <channel name="ejb" type="ejb"/>
>>> </connector>
>>> </subsystem>
>
>
> Security becomes a bit strange, as it is currently configured as follows, which applies to both of the configured management interfaces:
> <management>
> <security-realms>
> SNIP
> </security-realms>
> <management-interfaces>
> <native-interface interface="management" port="9999" />
> <http-interface interface="management" port="9990"/>
> </management-interfaces>
> </management>
>
> Since security is configured at connector level I'm not sure what to do when we have:
> <management>
> <security-realms>
> SNIP
> </security-realms>
> <management-interfaces>
> <native-remoting-interface remoting-connector="management"/>
> <http-interface interface="management" port="9990"/>
> </management-interfaces>
> </management>
> In any case the native one gets whatever the remoting subsystem sets up. So I guess we either leave it as it is and only apply the security settings from here to the http-interface, or we allow the security stuff to be used in the individual management interface entries, so we could have:
> <management>
> <management-interfaces>
> <native-interface remoting-connector="management"/>
> <http-interface interface="management" port="9990">
> <security-realms>
> SNIP
> </security-realms>
> </http-interface>
> </management-interfaces>
> </management>
>
>
> On 22 Sep 2011, at 21:41, Brian Stansberry wrote:
>
>> Real quick reply:
>>
>> 1) IMO only having a single remoting socket for both management and
>> other uses is A Real Good Thing™. But if there are situations that
>> prevent that, I don't think it's terrible.
>>
>> 2) We need to continue to support AS 7.0-style configs. That to me means
>> for cases where that style config is used, we create a separate endpoint.
>>
>> 3) For a domain mode server, we can't force people to add a remoting
>> subsystem in their domain.xml profile. We talked about having the
>> HostController generate one, but that will result in the server having a
>> profile that does not match what was configured in domain.xml --
>> suddenly a bonus subsystem appears. That is no good.
>>
>> 4) For a domain mode server if the HostController is going to configure
>> the server to set up a native management interface that points to the
>> remoting subsystem in the profile, it needs to be told how to do so. One
>> solution, is, in domain.xml:
>>
>> <server-group name="main-server-group" profile="default">
>> <native-management remoting-connector="management"/>
>> </server-group>
>>
>> In summary, I think reusing the remoting subsystem endpoint is great and
>> we should have our standard configs set up that way but there are cases
>> where things aren't going to be configured that way.
>>
>> On 9/22/11 2:19 PM, Kabir Khan wrote:
>>> I'm trying to understand the issues in remoting subsystem vs the management usage a bit better before I dig into this. We have 3 ways remoting is set up:
>>>
>>> 1) Standalone server
>>> a) Endpoint is set up when installing the subsystem
>>> b) Management is set up and creates a new stream server and channel open listener for ("management") with the endpoint from a) injected
>>>
>>> 2) Host controller
>>> a) Endpoint is set up by the bootstrap
>>> b) Bootstrap sets up the management stream server and channel open listeners for (using endpoint from a) injected
>>> -"management" - i.e. traffic on the management address
>>> -"server" - i.e. traffic from a server
>>> -"domain" - if it is the master, to listen to traffic from slaves
>>> c) If it is a slave it connects to the master on the "domain" channel
>>>
>>> 3) Domain mode server
>>> a) Endpoint is set up when installing the subsystem
>>> b) No management stream server is created
>>> c) A channel is opened to the HC using the endpoint from a) on the "server" channel.
>>>
>>> So, I think the issue is that the core depends on stuff set up by a subsystem? A problem in 3 is that if there is no remoting subsystem no endpoint is created, so communication with HC will not start and we will not get the subsystem config from the HC.
>>>
>>> Something doesn't feel quite right but I'm not sure what, so I'm throwing out some ideas.
>>>
>>> The remoting subsystem is quite basic at the moment and the code to set up new connectors is commented out.
>>>
>>> So maybe we should stick with what we have for HC:
>>> <management>
>>> <security-realms>
>>> SNIP
>>> </security-realms>
>>> <management-interfaces>
>>> <native-interface interface="management" port="9999" />
>>> <http-interface interface="management" port="9990"/>
>>> </management-interfaces>
>>> </management>
>>>
>>> But for the standalone server case do something like
>>>
>>> <management>
>>> <management-interfaces>
>>> <native-channel name="server" />
>>> <http-interface interface="management" port="9990">
>>> </management-interfaces>
>>> </management>
>>>
>>> <subsystem xmlns="urn:jboss:domain:remoting:1.0"/>
>>> <connector socket-binding="remote-management">
>>> <security-stuff><security-stuff/>
>>> <channel name="server" type="management"/>
>>> </connector>
>>> <connector socket-binding="user">
>>> <security-stuff><security-stuff/>
>>> <channel name="jndi" type="jndi"/>
>>> <channel name="jndi" type="ejb"/>
>>> </connector>
>>> </subsystem>
>>>
>>> Or maybe everything all goes over one socket so
>>> <subsystem xmlns="urn:jboss:domain:remoting:1.0"/>
>>> <connector socket-binding="remote-management">
>>> <security-stuff><security-stuff/>
>>> <channel name="server" type="management"/>
>>> <channel name="jndi" type="jndi"/>
>>> <channel name="jndi" type="ejb"/>
>>> </connector>
>>> </subsystem>
>>>
>>> I'm not clear on the security side of this but we now have the http side of it securing itself in one way and the native channel in another, so maybe this is better
>>>
>>> <management>
>>>   <management-interfaces>
>>>     <native-channel name="server"/>
>>>     <http-interface interface="management" port="9990">
>>>       <security-realms>
>>>         SNIP
>>>       </security-realms>
>>>     </http-interface>
>>>   </management-interfaces>
>>> </management>
>>>
>>>
>>> Or the alternative for the domain mode server is to use two endpoints: one for management, installed by the core, and one for other stuff, installed by the remoting subsystem.
>>>
>>>
>>> _______________________________________________
>>> jboss-as7-dev mailing list
>>> jboss-as7-dev(a)lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/jboss-as7-dev
>>
>>
>> --
>> Brian Stansberry
>> Principal Software Engineer
>> JBoss by Red Hat
>
Jira version inconsistency
by Thomas Diesler
Hi Jas,
currently there is some confusion, with the Jira versions not being in
sync with what is in GitHub. Alpha1 has been tagged, but issues are still
getting resolved against that Jira version. What should be done? Create
an Alpha2 in Jira?
cheers
-thomas
--
xxxxxxxxxxxxxxxxxxxxxxxxxxxx
Thomas Diesler
JBoss OSGi Lead
JBoss, a division of Red Hat
xxxxxxxxxxxxxxxxxxxxxxxxxxxx
Remoting issues
by Kabir Khan
I'm trying to understand the issues in the remoting subsystem vs. the management usage a bit better before I dig into this. We have three ways remoting is set up:
1) Standalone server
a) Endpoint is set up when installing the subsystem
b) Management is set up and creates a new stream server and a channel open listener for "management", with the endpoint from a) injected
2) Host controller
a) Endpoint is set up by the bootstrap
b) Bootstrap sets up the management stream server and channel open listeners (with the endpoint from a) injected) for:
-"management" - i.e. traffic on the management address
-"server" - i.e. traffic from a server
-"domain" - if it is the master, to listen to traffic from slaves
c) If it is a slave it connects to the master on the "domain" channel
3) Domain mode server
a) Endpoint is set up when installing the subsystem
b) No management stream server is created
c) A channel is opened to the HC using the endpoint from a) on the "server" channel.
So I think the issue is that the core depends on services set up by a subsystem. One problem in case 3 is that if there is no remoting subsystem, no endpoint is created, so communication with the HC will not start and we will not get the subsystem config from the HC.
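The case-3 problem can be sketched as a tiny toy service registry (all names here are hypothetical illustrations, not the real MSC API): the server-to-HC channel needs an endpoint that only the remoting subsystem installs, so with no subsystem configured the connection can never be made.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the bootstrap-ordering problem described above.
// "services" stands in for the real service container; names are made up.
public class EndpointBootstrapSketch {

    static final Map<String, Object> services = new HashMap<>();

    // In mode 3a, it is the remoting subsystem that creates the endpoint.
    public static void installRemotingSubsystem() {
        services.put("endpoint", "remoting-endpoint");
    }

    // Mode 3c: the "server" channel to the HC needs that endpoint...
    public static String openServerChannel() {
        Object endpoint = services.get("endpoint");
        if (endpoint == null) {
            // ...so without the subsystem there is no endpoint, and the
            // server never connects back to the HC to fetch its config.
            return "cannot connect to HC: no endpoint";
        }
        return "server channel open via " + endpoint;
    }

    public static void main(String[] args) {
        System.out.println(openServerChannel()); // fails: subsystem not booted
        installRemotingSubsystem();
        System.out.println(openServerChannel()); // works once endpoint exists
    }
}
```

The chicken-and-egg is visible in the order of the two calls in main: the channel can only be opened after the subsystem has run, but the subsystem config is what the channel was supposed to fetch.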
Something doesn't feel quite right but I'm not sure what, so I'm throwing out some ideas.
The remoting subsystem is quite basic at the moment and the code to set up new connectors is commented out.
So maybe we should stick with what we have for HC:
<management>
  <security-realms>
    SNIP
  </security-realms>
  <management-interfaces>
    <native-interface interface="management" port="9999"/>
    <http-interface interface="management" port="9990"/>
  </management-interfaces>
</management>
But for the standalone server case do something like
<management>
  <management-interfaces>
    <native-channel name="server"/>
    <http-interface interface="management" port="9990"/>
  </management-interfaces>
</management>
<subsystem xmlns="urn:jboss:domain:remoting:1.0">
  <connector socket-binding="remote-management">
    <security-stuff/>
    <channel name="server" type="management"/>
  </connector>
  <connector socket-binding="user">
    <security-stuff/>
    <channel name="jndi" type="jndi"/>
    <channel name="ejb" type="ejb"/>
  </connector>
</subsystem>
Or maybe everything goes over one socket, so:
<subsystem xmlns="urn:jboss:domain:remoting:1.0">
  <connector socket-binding="remote-management">
    <security-stuff/>
    <channel name="server" type="management"/>
    <channel name="jndi" type="jndi"/>
    <channel name="ejb" type="ejb"/>
  </connector>
</subsystem>
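The one-socket layout above implies that a single connector demultiplexes inbound channels by type, with each service registering its own open listener. A hedged sketch of that dispatch (plain Java, hypothetical names, not the real Remoting API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Toy model of one connector routing opened channels to per-type listeners,
// as in the single-socket config above. Names are illustrative only.
public class ChannelDispatchSketch {

    private final Map<String, Consumer<String>> openListeners = new HashMap<>();

    // A subsystem registers a handler for one channel type.
    public void registerChannelType(String type, Consumer<String> listener) {
        openListeners.put(type, listener);
    }

    // Called when a peer opens a channel of the given type on the shared socket.
    public String channelOpened(String type) {
        Consumer<String> listener = openListeners.get(type);
        if (listener == null) {
            return "refused: no service for channel type " + type;
        }
        listener.accept(type);
        return "accepted: " + type;
    }

    public static void main(String[] args) {
        ChannelDispatchSketch connector = new ChannelDispatchSketch();
        connector.registerChannelType("management",
                t -> System.out.println("management handler got " + t));
        connector.registerChannelType("jndi",
                t -> System.out.println("jndi handler got " + t));
        System.out.println(connector.channelOpened("management"));
        System.out.println(connector.channelOpened("ejb")); // not registered
    }
}
```

The design question in the email then reduces to who populates this registry (core vs. subsystem) and where the security wrapping sits around the shared socket.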
I'm not clear on the security side of this, but we now have the http side securing itself in one way and the native channel in another, so maybe this is better:
<management>
  <management-interfaces>
    <native-channel name="server"/>
    <http-interface interface="management" port="9990">
      <security-realms>
        SNIP
      </security-realms>
    </http-interface>
  </management-interfaces>
</management>
Or the alternative for the domain mode server is to use two endpoints: one for management, installed by the core, and one for other stuff, installed by the remoting subsystem.
JMXSubsystemTestCase failure in upstream
by Jaikiran Pai
I am seeing this test failure in upstream since yesterday. Is it just my
setup or is anyone else seeing this too:
-------------------------------------------------------------------------------
Test set: org.jboss.as.jmx.JMXSubsystemTestCase
-------------------------------------------------------------------------------
Tests run: 12, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.747 sec <<< FAILURE!
testInstallIntoController(org.jboss.as.jmx.JMXSubsystemTestCase)  Time elapsed: 0.029 sec  <<< ERROR!
java.io.IOException: Failed to retrieve RMIServer stub: javax.naming.CommunicationException [Root exception is java.rmi.NoSuchObjectException: no such object in table]
    at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:338)
    at javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:248)
    at org.jboss.as.jmx.JMXSubsystemTestCase.testInstallIntoController(JMXSubsystemTestCase.java:204)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
    at org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
    at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:53)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:123)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:104)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:164)
    at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:110)
    at org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:172)
    at org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcessWhenForked(SurefireStarter.java:104)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:70)
Caused by: javax.naming.CommunicationException [Root exception is java.rmi.NoSuchObjectException: no such object in table]
    at com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:101)
    at com.sun.jndi.toolkit.url.GenericURLContext.lookup(GenericURLContext.java:185)
    at javax.naming.InitialContext.lookup(InitialContext.java:392)
    at javax.management.remote.rmi.RMIConnector.findRMIServerJNDI(RMIConnector.java:1886)
    at javax.management.remote.rmi.RMIConnector.findRMIServer(RMIConnector.java:1856)
    at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:257)
    ... 33 more
Caused by: java.rmi.NoSuchObjectException: no such object in table
    at sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:255)
    at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:233)
    at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:359)
    at sun.rmi.registry.RegistryImpl_Stub.lookup(Unknown Source)
    at com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:97)
    ... 38 more
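For context: an RMI "no such object in table" during the JNDI stub lookup often indicates that the RMI registry still holds a stub for a connector server that is no longer exported, e.g. a registry left over from an earlier test run. A minimal self-contained JMX-over-RMI round trip using only JDK APIs (the port number is an arbitrary choice for illustration), showing the start/connect/stop order that avoids the stale-stub situation:

```java
import java.rmi.registry.LocateRegistry;
import javax.management.MBeanServer;
import javax.management.MBeanServerFactory;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

public class JmxRmiRoundTrip {
    public static void main(String[] args) throws Exception {
        int port = 11090; // arbitrary port for this sketch
        // Registry must exist before the connector server binds its stub.
        LocateRegistry.createRegistry(port);
        MBeanServer mbs = MBeanServerFactory.createMBeanServer();
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:" + port + "/jmxrmi");
        JMXConnectorServer server =
                JMXConnectorServerFactory.newJMXConnectorServer(url, null, mbs);
        server.start(); // exports the RMIServer stub into the registry
        try (JMXConnector client = JMXConnectorFactory.connect(url)) {
            // The client-side lookup that fails in the stack trace above.
            System.out.println("default domain: "
                    + client.getMBeanServerConnection().getDefaultDomain());
        } finally {
            server.stop(); // unexport before the JVM (or next test) reuses the port
        }
    }
}
```

If a second test reuses the registry while the first test's connector server has already been stopped, the stale stub produces exactly the NoSuchObjectException seen here.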
-Jaikiran