[Design of Management Features on JBoss] - problems using secured remote EJB interface to Profile Service
by ips
https://jira.jboss.org/jira/browse/JOPR-263 is preventing us from using the secured remote EJB interface from the jbas5 plugin running within an Enterprise Jopr Agent. Note, we are able to use the non-secured non-EJB remote interface without any issues, but this doesn't do us much good, since this interface will be disabled in EAP5 anyway.
The issue stems from the following existing code in the EMS library (a JMX client library, which is used by the jbas5 plugin for remote JMX calls), which was added as a workaround for https://jira.jboss.org/jira/browse/JOPR-9:
   SecurityAssociation.clear();
   SecurityAssociation.setPrincipal(new SimplePrincipal(principal));
   SecurityAssociation.setCredential(credential);
This code is called every time a JMX invocation is made via EMS, in order to ensure the principal and credential, which are stored in ThreadLocals, have the correct values for the current thread. This is necessary, since a single Jopr Agent can be used to manage multiple JBAS instances, each with different JNP usernames/passwords. The problem is that the above code appears to have the side effect of resetting the JBoss-Security SecurityContext for the current thread to null, which causes subsequent calls to the EJB Profile Service proxies to fail with "javax.ejb.EJBAccessException: Caller unauthorized" exceptions.
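The interaction can be modeled with plain ThreadLocals in a self-contained toy (none of the actual JBoss classes are used here, and setCredentialsPreservingContext is just one hypothetical shape a fix could take, namely saving the security context before the per-invocation clear() and restoring it afterwards):

```java
// Toy model of the JOPR-263 interaction: plain ThreadLocals stand in for
// JBoss's SecurityAssociation principal/credential state and for the
// JBoss-Security SecurityContext. Not the real APIs.
public class SecurityContextDemo {
    // stands in for the SecurityAssociation principal ThreadLocal
    static final ThreadLocal<String> principal = new ThreadLocal<>();
    // stands in for the JBoss-Security SecurityContext ThreadLocal
    static final ThreadLocal<String> securityContext = new ThreadLocal<>();

    // models the EMS per-invocation reset, including the unwanted side
    // effect of wiping the security context for the current thread
    static void setCredentials(String p) {
        securityContext.remove(); // the side effect described above
        principal.set(p);
    }

    // hypothetical workaround: save and restore around the reset
    static void setCredentialsPreservingContext(String p) {
        String saved = securityContext.get();
        setCredentials(p);
        securityContext.set(saved);
    }

    public static void main(String[] args) {
        securityContext.set("ejb-login");          // set up by the EJB login
        setCredentials("jnp-user");                // EMS per-call reset
        System.out.println(securityContext.get()); // prints: null -> "Caller unauthorized"

        securityContext.set("ejb-login");
        setCredentialsPreservingContext("jnp-user");
        System.out.println(securityContext.get()); // prints: ejb-login
    }
}
```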
I've written a simple test client that demonstrates the issue:
https://svn.jboss.org/repos/jopr/trunk/etc/jbas5-ejb-client/
How can we fix JOPR-263 without reintroducing JOPR-9?
Thanks,
Ian
View the original post : http://www.jboss.org/index.html?module=bb&op=viewtopic&p=4242234#4242234
Reply to the post : http://www.jboss.org/index.html?module=bb&op=posting&mode=reply&p=4242234
16 years, 9 months
[Design the new POJO MicroContainer] - Re: Parallel deployments
by kabir.khan@jboss.com
"david.lloyd(a)jboss.com" wrote : From a performance perspective, there actually isn't a lot of difference between blocking when the queue is full vs. executing directly - it's the difference between N possible executing threads and N-1 possible executing threads, if you think about it.
Hmm, maybe you're right. I'll implement some under-the-hood swapping of the thread pool.
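For reference, the "execute directly when the queue is full" behavior maps onto the JDK's ThreadPoolExecutor with CallerRunsPolicy; a minimal sketch (not necessarily how the MC pool is or will be wired):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DeploymentPool {
    public static ThreadPoolExecutor create(int threads, int queueCapacity) {
        return new ThreadPoolExecutor(
                threads, threads,
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(queueCapacity),
                // When the bounded queue is full, the submitting thread runs
                // the task itself: the "executing directly" alternative, which
                // keeps roughly N threads working instead of blocking the
                // submitter with no work to do.
                new ThreadPoolExecutor.CallerRunsPolicy());
    }
}
```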
I have marked the HAPartition and CacheManager beans as asynchronous, and am seeing some improvements.
Original
---------
36.631
33.447
33.21
32.914
32.840
33.638
33.78
33.157
Asynchronous
---------------
30.432
31.166
31.13
31.68
30.788
31.467
30.455
30.459
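For what it's worth, the averages of the runs above work out to roughly 33.70 vs 30.95 (units as reported, presumably seconds of boot time), about an 8% improvement; a quick arithmetic check:

```java
// Averages the two sets of timings listed above.
public class BootTimeAverages {
    public static void main(String[] args) {
        double[] original = {36.631, 33.447, 33.21, 32.914, 32.840, 33.638, 33.78, 33.157};
        double[] async = {30.432, 31.166, 31.13, 31.68, 30.788, 31.467, 30.455, 30.459};
        System.out.printf("original mean: %.3f%n", mean(original)); // 33.702
        System.out.printf("async mean:    %.3f%n", mean(async));    // 30.947
        System.out.printf("improvement:   %.1f%%%n",
                100 * (mean(original) - mean(async)) / mean(original)); // 8.2%
    }

    static double mean(double[] xs) {
        double sum = 0;
        for (double x : xs) sum += x;
        return sum / xs.length;
    }
}
```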
I need to talk a bit more with Brian about how to test this with actual deployments using the caches, since the caches are not started until actually used.
View the original post : http://www.jboss.org/index.html?module=bb&op=viewtopic&p=4242221#4242221
Reply to the post : http://www.jboss.org/index.html?module=bb&op=posting&mode=reply&p=4242221
16 years, 9 months
[Design the new POJO MicroContainer] - Re: Parallel deployments
by smarlow@redhat.com
If we are worried about not having enough threads to start the server, couldn't we require that the pool be configured with enough threads to boot the server? This could be documented via a comment in the XML file.
A related concern is how we will deal with several deployment threads trying to synchronize on the same object locks. I assume this is where additional tuning will come into play (reducing object lock contention through code modifications). One possible solution is that we could manually designate which thread (e.g. "cluster partition init") is to be used on a per-component basis. Before diving into solving this potential issue, we need to evaluate how expensive object lock contention will be (to avoid premature optimization). But it doesn't hurt to consider that we might need to do something in this area. If we were to asynchronously deploy in named threads, that might require a separate thread pool.
Another concern is redeployment in a production app server deployment. We need to be clear in the thread pool XML file (via comments) about which JBoss components use the system thread pools (there is a long-running task pool and a short-running task pool). If a customer sizes the pools based only on application needs, then redeployment on a loaded system could run short of the threads needed to redeploy. Even if the pool is increased to accommodate the number of threads needed for deployment, applications can easily consume those as well on a loaded system. I'm wondering if we need to introduce throttling between the users of the system thread pool, as a way to carve it up between different types of tasks. Logically, this would create sub-pools of threads, with perhaps some borrowing of threads between sub-pools. I'm not sure whether this is over-engineering something that could instead be handled via configuration and operational considerations (e.g. only hot-deploying during idle time).
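The sub-pool idea could be sketched with plain java.util.concurrent (the class name and sizing here are hypothetical, not an existing JBoss API): a shared system pool carved up by per-task-type semaphores, so that, for example, deployment tasks always keep a reserved share of concurrency.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;

// One logical "sub-pool" over a shared system thread pool. Each task type
// gets its own ThrottledPool with its own permit budget, so no single task
// type can consume every thread in the shared pool.
public class ThrottledPool {
    private final ExecutorService shared;
    private final Semaphore permits;

    public ThrottledPool(ExecutorService shared, int maxConcurrent) {
        this.shared = shared;
        this.permits = new Semaphore(maxConcurrent);
    }

    // Blocks the submitter when this sub-pool's share is exhausted, rather
    // than letting its tasks starve the other users of the shared pool.
    public Future<?> submit(Runnable task) throws InterruptedException {
        permits.acquire();
        return shared.submit(() -> {
            try {
                task.run();
            } finally {
                permits.release(); // free this sub-pool's slot when done
            }
        });
    }
}
```

Borrowing between sub-pools would need more machinery (e.g. a fallback acquire on another sub-pool's semaphore), which is where the over-engineering question above comes in.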
View the original post : http://www.jboss.org/index.html?module=bb&op=viewtopic&p=4242211#4242211
Reply to the post : http://www.jboss.org/index.html?module=bb&op=posting&mode=reply&p=4242211
16 years, 9 months