[Design of Security on JBoss] - Re: SecurityContext
by anil.saldhana@jboss.com
Thanks Scott for the feedback!
anonymous wrote :
| We need to make the SecurityContext an interface (it already essentially is), and define some utility methods for accessing key Subject info so that consumers don't have to know how the Subject/SubjectInfo maintain info for things like:
|
| - String getUsername(SubjectInfo s)
| - Principal getUserPrincipal(SubjectInfo s)
|
No issue with that.
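To make it concrete, a rough sketch of the interface could look like the following (everything beyond the two accessors Scott listed is an assumption on my part, not a settled API):
| import java.security.Principal;
| import java.util.Map;
|
| // Placeholder for the real SubjectInfo type discussed above
| interface SubjectInfo {}
|
| // Sketch only -- not the actual JBoss SPI
| public interface SecurityContext
| {
|    // Access to the authenticated subject information
|    SubjectInfo getSubjectInfo();
|
|    // Utility accessors so consumers need not know how the
|    // Subject/SubjectInfo maintain this information
|    String getUsername(SubjectInfo s);
|    Principal getUserPrincipal(SubjectInfo s);
|
|    // Generic context map (e.g. for RunAsIdentity and Roles)
|    Map<String, Object> getData();
| }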
anonymous wrote : We also want to move away from requiring a thread local as an aspect of the spi. In general the call context metadata should contain the SecurityContext, and security aspects would access this. For some apis like JACC we may still need integration via thread locals.
|
We have to have just one setup. JACC is just the installation of a policy and a JACC authorization module, so the call metadata approach is fine.
anonymous wrote : I'm not seeing how the run-as identity and roles fits into the SecurityContext. How does it?
RunAsIdentity is not applicable to every JEMS project; it is more of a JEE aspect. Both RunAsIdentity (RAI) and Roles will be keyed in the context map inside the SecurityContext (SC).
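Purely for illustration (the key names and the getData() accessor here are assumptions, not an agreed contract), the keying could look like this:
| import java.util.Map;
|
| // Illustrative only: how RunAsIdentity and Roles could be keyed into the
| // SecurityContext's context map. Key names are hypothetical; the real SPI
| // would define the constants.
| public class SecurityContextKeys
| {
|    public static final String RUN_AS_IDENTITY = "RUN_AS_IDENTITY";
|    public static final String ROLES = "ROLES";
|
|    public static void associate(Map<String, Object> contextMap,
|                                 Object runAsIdentity, Object roles)
|    {
|       contextMap.put(RUN_AS_IDENTITY, runAsIdentity);
|       contextMap.put(ROLES, roles);
|    }
| }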
anonymous wrote : How would we implement the jsr196 authentication?
An additional isValid method on the AuthenticationManager interface, with the parameters being the wrapper request/response types for the various message layers.
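Roughly along these lines (the wrapper types are placeholder names, and the sketch is not the final signature):
| import java.security.Principal;
|
| // "MessageRequest"/"MessageResponse" are stand-ins for the layer-specific
| // request/response wrappers (e.g. around HttpServletRequest/Response);
| // the names are hypothetical.
| interface MessageRequest {}
| interface MessageResponse {}
|
| public interface AuthenticationManager
| {
|    // Existing style of credential-based validation
|    boolean isValid(Principal principal, Object credential);
|
|    // Proposed JSR-196 style validation over the layer's request/response wrappers
|    boolean isValid(MessageRequest request, MessageResponse response);
| }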
anonymous wrote : - Validate incoming calls that have no authentication info. This could be a trusted call based on internal run-as, external run-as with trust assertion, external run-as with trust based on known hosts, external run-as based on trusted transport cert.
The intra-VM case will be easier, but the inter-VM association will be complex. I have not thought about it yet.
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=3991391#3991391
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=3991391
18 years, 1 month
[Design of JBoss ESB] - Configuration - normalization
by kurt.stam@jboss.com
Hi guys,
Currently we have two config files for each ESB node (server): jbossesb-listener.xml and jbossesb-gateway.xml.
1. Firstly, this can result in a LOT of files once you have more than a few nodes.
2. Secondly, there is a lot of duplication across these files.
I'm proposing to normalize the configuration. Basically I want to front the collection of these files with one master file from which we can generate the config files for each ESB node. Later we can offer a bottom-up approach too, if we think it is needed.
The point is that the user can debug the configuration by just looking at one file. For this file we can build an XSD (which is not possible for the individual files at the moment). The format of this file will facilitate the generation of wizards, as they can build on organized information. Also, we can easily generate a global deployment picture from this file.
Here is an example XML:
| <jbossesb>
|   <hosts>
|     <host name="filebank" dnsName="localhost"/>
|     <host name="jmsbank" dnsName="localhost"/>
|     <host name="loanbroker" dnsName="localhost"/>
|     <host name="jms-provider" dnsName="localhost"/>
|   </hosts>
|   <servers>
|     <server name="fileBankServer" host="filebank" appserver="jboss-4.0.3SP1">
|       <properties>
|         <property name="java.naming.provider.url" value="localhost:1099"/>
|         <property name="java.naming.factory.initial" value="org.jnp.interfaces.NamingContextFactory"/>
|         <property name="java.naming.factory.url.pkgs" value="org.jboss.naming:org.jnp.interfaces"/>
|       </properties>
|     </server>
|     <server name="inhouse-jboss-4.0.4" host="jms-provider" appserver="jboss-4.0.4">
|       <properties>
|         <property name="java.naming.provider.url" value="localhost:1099"/>
|         <property name="java.naming.factory.initial" value="org.jnp.interfaces.NamingContextFactory"/>
|         <property name="java.naming.factory.url.pkgs" value="org.jboss.naming:org.jnp.interfaces"/>
|       </properties>
|     </server>
|     <server name="loanbroker-listener" host="loanbroker" appserver="jbossesb"/>
|   </servers>
|   <buses>
|     <bus name="bank-jms-bus" server="fileBankServer" resourceType="QUEUE" userName="" password=""/>
|     <bus name="bank-ftp-bus" server="inhouse-jboss-4.0.4" resourceType="FTP" userName="kurt" password="secret"/>
|   </buses>
|   <services>
|     <service name="filebank-gateway" category="gateway" server="loanbroker-listener"
|              description="This listener picks up files deposited by the fileBank"
|              class="org.jboss.soa.esb.FileBankGateway">
|       <listeners>
|         <listener description="A File-Based listener" bus="bank-ftp-bus"/>
|         <listener description="A JMS-Based listener" bus="bank-jms-bus"/>
|       </listeners>
|       <actions>
|         <action name="TestDefaultRouteAction" process="route" class="org.jboss.soa.esb.actions.CbrProxyAction"
|                 service-category="MessageRouting" service-name="ContentBasedRoutingService"/>
|       </actions>
|     </service>
|   </services>
| </jbossesb>
|
Yes, I posted this XML before, but this time I replaced 'channel' with 'bus', as Mark seems to like that better :). I'm sure the Brits amongst us will want to spell buses with three s's.
Note that I'm proposing this for GA. After GA our configuration may change again (however, this file may stay the same, and we could just change the XSLT script that breaks it up).
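To make the "break it up" part concrete, the split could be as simple as running one stylesheet per generated file over the master document. A rough sketch (the stylesheet and file names are assumptions on my part, not something we have yet):
| import javax.xml.transform.Transformer;
| import javax.xml.transform.TransformerFactory;
| import javax.xml.transform.stream.StreamResult;
| import javax.xml.transform.stream.StreamSource;
|
| // Sketch: generate one node's listener config from the master file via XSLT
| public class ConfigSplitter
| {
|    public static void main(String[] args) throws Exception
|    {
|       TransformerFactory tf = TransformerFactory.newInstance();
|       // Hypothetical stylesheet that extracts one server's listener config
|       Transformer t = tf.newTransformer(new StreamSource("jbossesb-listener.xsl"));
|       // Tell the stylesheet which server's configuration to emit
|       t.setParameter("server", "loanbroker-listener");
|       t.transform(new StreamSource("jbossesb-master.xml"),
|                   new StreamResult("jbossesb-listener.xml"));
|    }
| }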
--Kurt
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=3991387#3991387
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=3991387
18 years, 1 month
[Design of JBossCache] - Re: Locking tree before setting initial state
by vblagojevic@jboss.com
Here is a transcript of a conversation I had with Brian regarding the algorithm details of JBCACHE-315:
Replying privately, but I think we should take this to jbcache-dev or the forum. This is complex.
Vladimir Blagojevic wrote:
> Hey Brian,
>
> I thought a bit more about the locking algorithm and I would like to
> bounce it off you. If you recall we agreed on our phone call that we
> have to go through the steps of:
>
> a) set a flag or something that will block any ACTIVE transactions
> from proceeding (i.e. entering prepare() phase of 2PC)
>
> We then revised this by saying that in fact "any ACTIVE transactions"
> should rather be "any ACTIVE locally initiated transactions". We also
> agreed that we can do this by using a latch in TxInterceptor.
>
>
> b) wait for completion of any transactions that are beyond ACTIVE.
>
>
> We thought that this was a great idea but soon realized that
> transactions can still deadlock. You said: "For example, a locally
> initiated transaction is holding a lock on some node and you have a
> remote prepare that comes in. The remote tx won't be able to acquire the
> lock. At some point we have to deal with that. Whoever sent that prepare
> call isn't going to proceed - the sender will block on that synchronous
> call. So on the remote node the prepare is not going to be progressing."
To be even more specific:
On the remote node (i.e. the one we're working on) the JG up-handler thread will be blocked while the prepare() call waits to acquire a lock. That thread will block until there is a lock timeout. This will occur whether we are using REPL_ASYNC or REPL_SYNC. One effect of this is that no other JG messages will be received until there is a timeout. Note also that *I think* that having another thread roll back the tx associated with the prepare call will not cause the JG up-handler thread to unblock!!
If REPL_SYNC, on the node that originated the GTX, the client thread that committed the tx will be blocking waiting for a response to the prepare() call.
>
>
> I have another proposal. If we already have to introduce a latch, why
> not introduce it in a "better" location? So the proposal is to introduce
> our latch in InvocationContextInterceptor rather than in
> TxInterceptor.
> InvocationContextInterceptor is always the first interceptor in the chain.
> By introducing a latch here we can inspect a call, determine its
> origin and transactional status, and block transactions before they
> grab any locks.
Can this be done in the TxInterceptor? I.e. isn't it always before any LockInterceptor? I would think it would be. I expect Manik would put up a fuss about doing tx-related stuff outside TxInterceptor; the whole reason it was added in 1.3.0 was to encapsulate stuff that was previously spread around other interceptors.
> If a transaction
> originates locally and has not been registered in the
> TransactionTable (i.e. it has not yet performed any operation), block it
> on the latch before it has a chance to acquire any locks.
+1. No reason to let a tx that hasn't acquired any locks go through and cause trouble.
> Then we look at the table and roll back any local transactions that
> have not yet gone to prepare, i.e. transactions that our latch missed.
> If any rolled-back transaction retries it will be caught by
> our latch :) All other transactions we let go through. Start a timer
> and give beyond-prepare transactions enough time to finish.
>
> So in pseudocode, algorithm executed on each node:
>
> receive block call
> flip a latch in InvocationContextInterceptor and block any subsequent
> local transactions
> rollback local transactions if not yet in prepare phase and start
> timer T (allow some time for beyond prepare transactions to finish)
> if lock still exists at integration node after T expires rollback our
> local transaction
> flip latch back and allow transactions to proceed
> return from block ok
>
>
> flush blocks all down threads (thus no prepare will go through
> although local transactions will proceed on each node)
>
> Proceed with algorithm on state provider:
>
> receive getState
> grab a lock on the integration point using the LockUtil.breakLock variant,
> possibly rolling back some local transactions
> read state and do state transfer with the state receiver
>
> when state transfer is done, prepare messages will hit the cluster and
> state will be consistent
> no matter what happens with the global transactions
>
>
The concern I have with this is that we give up one of the key goals -- not rolling back a tx if it's not hurting anything. Here we assume that an ACTIVE locally originated tx is going to cause a problem by blocking a remote prepare() call. So we roll back the tx. Actually the odds of a remote prepare() call being blocked are pretty low.
How about this:
1) receive block call
2) flip a latch in TxInterceptor (I'm assuming it will work putting it here instead of InvocationContextInterceptor). This latch is used at 2 or 3 different control points to block any threads that:
a) are not associated with a GTX (i.e. non-transactional reads/writes from the local server)
b) are associated with a GTX, but not yet in TransactionTable (your idea above)
c) are associated with a locally originated GTX and are about to enter the beforeCompletion phase (i.e. the original idea of preventing the tx from proceeding to make a prepare() call).
3) Loop through the GTXs in the TransactionTable. Create a little object for each GTX and throw it in a map. The object is a state machine that uses the JTA Transaction status, the elapsed time and whether the tx is locally or remotely originated to govern its state transitions. Keep looping through the TransactionTable, creating more of these objects if more GTXs appear; for each GTX, update the object with the current Transaction status, then read the object's state. The object's state tells you whether you need to roll back the tx, etc. (see the sketch after this list).
4) If the state machine is for a *remotely initiated* GTX that's in ACTIVE status, after some elapsed time its state will tell you that it's likely held up by a lock conflict with a locally originated tx. At that point we have a choice.
a) roll back all locally originated tx's that are ACTIVE or PREPARED. Con: indiscriminately breaks transactions. Con: if the tx has already entered beforeCompletion() we don't know whether it's in the prepare() call or later. We can only roll it back during beforeCompletion(); otherwise we introduce a heuristic.
b) roll back the remotely originated tx. Pro: doesn't indiscriminately break transactions. Con: I *think* this rollback won't unblock the JG up-handler thread.
5) We'd need to work out all the state transitions; i.e. what conditions lead to tx rollback.
6) flip latch back and allow transactions to proceed
7) return from block ok
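Very roughly, steps 2-4 above could be shaped like this. Everything here is an illustrative stand-in (the gate, the polling loop, the timeout policy, and the use of plain Objects for GTX keys); the real TxInterceptor/TransactionTable/GlobalTransaction code will differ:
| import java.util.HashMap;
| import java.util.Map;
| import javax.transaction.Status;
| import javax.transaction.Transaction;
|
| // Sketch of the block phase: a flippable gate plus per-GTX state tracking
| public class BlockPhaseSketch
| {
|    // Step 2: the "latch" that new/non-registered/about-to-prepare threads
|    // would have to pass through in TxInterceptor
|    private boolean blocked = false;
|
|    public synchronized void gate() throws InterruptedException
|    {
|       while (blocked)
|          wait();                        // hold the caller until the flush is over
|    }
|
|    public synchronized void flipLatch(boolean block)
|    {
|       blocked = block;
|       if (!block)
|          notifyAll();                   // step 6: let blocked threads proceed
|    }
|
|    // Steps 3/4: track each GTX with a little state object and decide rollbacks
|    public void monitor(Map<Object, Transaction> transactionTable) throws Exception
|    {
|       Map<Object, TxTracker> trackers = new HashMap<Object, TxTracker>();
|       boolean allDone = false;
|       while (!allDone)
|       {
|          allDone = true;
|          for (Map.Entry<Object, Transaction> e : transactionTable.entrySet())
|          {
|             TxTracker tracker = trackers.get(e.getKey());
|             if (tracker == null)
|             {
|                tracker = new TxTracker();
|                trackers.put(e.getKey(), tracker);
|             }
|             int status = e.getValue().getStatus();
|             if (tracker.shouldRollBack(status))
|                e.getValue().rollback();
|             if (status != Status.STATUS_COMMITTED && status != Status.STATUS_ROLLEDBACK)
|                allDone = false;
|          }
|          Thread.sleep(50);              // crude polling; real code would be smarter
|       }
|    }
|
|    // Placeholder for the per-GTX state machine of step 3. A real version would
|    // also consider whether the GTX is locally or remotely originated.
|    static class TxTracker
|    {
|       private final long start = System.currentTimeMillis();
|
|       boolean shouldRollBack(int status)
|       {
|          // Hypothetical policy: roll back anything still ACTIVE after 10 seconds
|          return status == Status.STATUS_ACTIVE
|                && System.currentTimeMillis() - start > 10000;
|       }
|    }
| }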
>
>
>
> So in summary the goal of the first part of the algorithm is to allow
> transactions beyond prepare to finish and prevent any local
> transactions from hitting the cluster and becoming global. That leaves
> us dealing only with local transactions at the state provider in the
> second part of the algorithm. In the second part we deal just with the
> state provider. We grab the lock at the integration point, possibly roll
> back any local transactions there, do state transfer and let the prepares
> hit the cluster, thus preserving state consistency and disturbing the
> least number of global transactions.
>
>
> It seems like if we had that blockDone JGroups callback then things
> would be nicer. The algorithm executed on each node:
>
>
> receive block call
> flip a latch in InvocationContextInterceptor and block any subsequent
> local transactions
> rollback local transactions if not yet in prepare phase and start
> timer T
> if lock still exists at integration node after T expires rollback our
> local transaction
> return from block ok
>
>
> flush blocks all down threads (thus no prepare will go through
> although local transactions will proceed on each node)
>
> Proceed with algorithm on state receiver and provider:
>
> do state transfer
>
> Proceed with algorithm executed on each node:
>
> flip latch back and allow transactions to proceed
>
>
Here's a question for you about FLUSH: When a service returns from block() or sends blockOK, does the channel immediately block? Is there coordination across the cluster?
My concern:
Node A doesn't have much going on, quickly returns from block(), so its channel is blocked.
Node B takes a little longer; it has some txs whose completion requires sending messages to A. Those messages don't get through because A's channel is blocked.
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=3991386#3991386
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=3991386
18 years, 1 month
[Design of JBossCache] - Re: Locking tree before setting initial state
by vblagojevic@jboss.com
Following up on the discussion we had on jbosscache-dev. In summary, Brian found a fundamental problem in FLUSH that needed to be resolved before attacking JBCACHE-315. The problem is described below. FLUSH was retrofitted in JGroups 2.4 final to include the solution Brian describes in the last paragraph.
Brian said on jbosscache-dev:
We have a problem in that the FLUSH protocol makes the decision to shut off the ability to pass messages down the channel independently at each node. The protocol doesn't include anything at the JGroups level to readily support coordination between nodes as to when to shut off down messages. But, JBC needs coordination since it needs to make RPC calls around the cluster (e.g. commit()) as part of how it handles FLUSH.
Basically, when the FLUSH protocol on a node receives a message telling it to START_FLUSH, it calls block() on the JBC instance. JBC does what it needs to do, then returns from block(). Following the return from
block() the FLUSH protocol in that channel then begins blocking any further down() messages.
The problem is as follows: a 2-node REPL_SYNC cluster, A and B, where A is just starting up and thus initiates a FLUSH:
1) JBC on B has a tx in progress, just starting the 2PC. It sends out the prepare().
2) A sends out a START_FLUSH message.
3) A gets START_FLUSH, calls block() on JBC.
4) JBC on A is new, doesn't have much going on, very quickly returns from block(). A will no longer pass *down* any messages below FLUSH.
5) A gets the prepare() (no problem, FLUSH doesn't block up messages, just down messages.)
6) A executes the prepare(), but can't send the response to B because FLUSH is blocking the channel.
7) B gets the START_FLUSH, calls block() on JBC.
8) JBC B doesn't immediately return from block() as it is giving the
prepare() some time to complete (to avoid an unnecessary tx rollback). But
prepare() won't complete because A's channel is blocking the RPC response!! Eventually JBC B's block() impl will have to roll back the tx.
Basically you have a race condition between block() calls and
prepare() calls, and there can be different winners on different nodes.
A solution we discussed, rejected, and then came back to this evening (please read FLUSH.txt to understand the change we're discussing):
The channel does not block down messages when block() returns. Rather, it just sends out a FLUSH_OK message (see FLUSH.txt). It shouldn't initiate any new cluster activity (e.g. a prepare()) after sending FLUSH_OK, but it can respond to RPC calls. When it gets a FLUSH_OK from all the other members, it then blocks down messages and multicasts a FLUSH_COMPLETED to the cluster.
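Just to illustrate the hand-shake, a toy model of that coordination follows. The member bookkeeping, String addresses and multicast() placeholder are simplified stand-ins for the real JGroups FLUSH implementation:
| import java.util.HashSet;
| import java.util.Set;
|
| // Simplified model of the revised FLUSH hand-shake: down messages are only
| // blocked once FLUSH_OK has been seen from every member.
| public class FlushCoordinatorSketch
| {
|    private final Set<String> members;
|    private final Set<String> flushOkReceived = new HashSet<String>();
|    private boolean downBlocked = false;
|
|    public FlushCoordinatorSketch(Set<String> members)
|    {
|       this.members = members;
|    }
|
|    // Called when the application returns from block(): announce FLUSH_OK,
|    // but keep the channel open so we can still answer RPCs (e.g. prepare()).
|    public synchronized void onBlockReturned(String self)
|    {
|       multicast("FLUSH_OK from " + self);
|       onFlushOk(self);
|    }
|
|    // Called for each FLUSH_OK received from the cluster (including our own).
|    public synchronized void onFlushOk(String member)
|    {
|       flushOkReceived.add(member);
|       if (flushOkReceived.containsAll(members) && !downBlocked)
|       {
|          downBlocked = true;                    // now block down messages
|          multicast("FLUSH_COMPLETED");
|       }
|    }
|
|    public synchronized boolean isDownBlocked()
|    {
|       return downBlocked;
|    }
|
|    private void multicast(String msg)
|    {
|       // Placeholder: a real implementation would send a JGroups message here
|       System.out.println("multicast: " + msg);
|    }
| }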
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=3991384#3991384
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=3991384
18 years, 1 month