[Design the new POJO MicroContainer] - Re: JBMICROCONT-181 & ManagedDeployments
by scott.stark@jboss.org
I'm looking at reusing the existing ManagedObjectFactory, ManagedObjectBuilder, and ManagedObjectPopulator classes as the basis for ManagedDeployment/ManagedComponent. I am thinking the latter are just extensions of ManagedObject, with the aggregate and component notions added.
A couple of issues:
1) The AbstractManagedObjectFactory.buildManagedObject(Class) method always produces a ManagedObject. If an attachment class does not have a ManagementObject annotation, I don't think we should produce one, as a loop over all DeploymentUnit attachments is going to yield too many ManagedObjects.
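A minimal sketch of the proposed check, using a stand-in ManagementObject annotation and a placeholder return value rather than the real MC types:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Stand-in for the real management annotation; names here are illustrative only.
@Retention(RetentionPolicy.RUNTIME)
@interface ManagementObject {}

@ManagementObject
class AnnotatedAttachment {}

class PlainAttachment {}

public class ManagedObjectFactorySketch {
    /**
     * Proposed behavior: return null (build nothing) for attachment classes
     * that do not carry the ManagementObject annotation, so a loop over all
     * DeploymentUnit attachments does not flood us with ManagedObjects.
     */
    static Object buildManagedObject(Class<?> attachmentClass) {
        if (!attachmentClass.isAnnotationPresent(ManagementObject.class))
            return null; // skip unannotated attachments
        return new Object(); // placeholder for real ManagedObject construction
    }

    public static void main(String[] args) {
        assert buildManagedObject(AnnotatedAttachment.class) != null;
        assert buildManagedObject(PlainAttachment.class) == null;
        System.out.println("ok");
    }
}
```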
2) We can't adequately describe the ManagedProperty constraints (allowed values, min/max) via annotations. It seems we need something like a ManagedPropertyConstraintsPopulator:
| interface ManagedPropertyConstraintsPopulator
| {
|    /** Populate the ManagedProperty fields with constraint-related info */
|    void populateManagedProperty(Class attachmentClass, PropertyInfo info, Fields fields);
| }
|
and this would be registered with the ManagedObjectFactory similar to how ManagedObjectBuilders are.
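A rough sketch of that registration, with minimal stand-ins for PropertyInfo and Fields and a registry keyed by attachment class, mirroring how ManagedObjectBuilders are registered (class and method names are invented for illustration):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal stand-ins; the real types live in the MC codebase.
class PropertyInfo {}
class Fields {}

interface ManagedPropertyConstraintsPopulator {
    /** Populate the ManagedProperty fields with constraint-related info. */
    void populateManagedProperty(Class<?> attachmentClass, PropertyInfo info, Fields fields);
}

public class PopulatorRegistrySketch {
    // Populators keyed by attachment class, similar to the ManagedObjectBuilder registry.
    private final Map<Class<?>, ManagedPropertyConstraintsPopulator> populators =
        new ConcurrentHashMap<>();

    public void setConstraintsPopulator(Class<?> attachmentClass,
                                        ManagedPropertyConstraintsPopulator populator) {
        populators.put(attachmentClass, populator);
    }

    /** Called while building the ManagedObject for attachmentClass; returns true if applied. */
    public boolean applyConstraints(Class<?> attachmentClass, PropertyInfo info, Fields fields) {
        ManagedPropertyConstraintsPopulator p = populators.get(attachmentClass);
        if (p == null)
            return false; // no constraints registered for this attachment type
        p.populateManagedProperty(attachmentClass, info, fields);
        return true;
    }

    public static void main(String[] args) {
        PopulatorRegistrySketch factory = new PopulatorRegistrySketch();
        boolean[] called = {false};
        factory.setConstraintsPopulator(String.class, (c, i, f) -> called[0] = true);
        assert factory.applyConstraints(String.class, new PropertyInfo(), new Fields());
        assert called[0];
        assert !factory.applyConstraints(Integer.class, new PropertyInfo(), new Fields());
        System.out.println("ok");
    }
}
```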
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4068099#4068099
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4068099
17 years, 5 months
[Design of JCA on JBoss] - Re: JCA AOP-MC Integration
by vickyk
Kabir
I get the following error when I am running the test cases:
| 11377 DEBUG [NoTxUnitTestCase] ==== setUp org.jboss.test.jca.notx.test.NoTxUnitTestCase ====
| 11377 DEBUG [KernelFactory] Starting JBoss Kernel construction...
| 11380 DEBUG [KernelFactory] Completed JBoss Kernel construction. Duration: 3 milliseconds
| 11380 DEBUG [NoTxUnitTestCase] Deploying file:/home/vicky/workspace/JBOSS%20JCA/trunk/core/output/org/jboss/test/jca/notx/test/NoTxUnitTestCase.xml
| 11558 DEBUG [Aspect] Bound aspect TraceAspect; deployed:true
| 11597 DEBUG [Aspect] Bound aspect ExceptionAspect; deployed:true
| 11632 DEBUG [Aspect] Bound aspect SecurityAspect; deployed:true
| 11674 DEBUG [Aspect] Bound aspect PoolAspect; deployed:true
| 11713 DEBUG [Aspect] Bound aspect ListenerAspect; deployed:true
| 11873 DEBUG [GenericBeanAspectFactory] Creating advice 0000-ihb4eo-f4ma5t47-1-f4ma5t48-2
| 12287 DEBUG [ConnectionManagerFactory] Created connectionFactory=org.jboss.test.jca.rar.support.TestConnectionFactoryImpl@15f4a7f for MCF=org.jboss.test.jca.rar.support.TestManagedConnectionFactory@113e9fd
| 12288 DEBUG [NoTxUnitTestCase] ==== Starting testNoTx ====
| 12291 TRACE [TraceAdvice] BEFORE: [advisedMethod=public abstract java.lang.Object javax.resource.spi.ConnectionManager.allocateConnection(javax.resource.spi.ManagedConnectionFactory,javax.resource.spi.ConnectionRequestInfo) throws javax.resource.ResourceException, unadvisedMethod=public abstract java.lang.Object javax.resource.spi.ConnectionManager.allocateConnection(javax.resource.spi.ManagedConnectionFactory,javax.resource.spi.ConnectionRequestInfo) throws javax.resource.ResourceException, metadata=null, targetObject=java.lang.Object@12462b3, arguments=[Ljava.lang.Object;@1cbfa42]
| 12293 TRACE [TraceAdvice] AFTER: [advisedMethod=public abstract java.lang.Object javax.resource.spi.ConnectionManager.allocateConnection(javax.resource.spi.ManagedConnectionFactory,javax.resource.spi.ConnectionRequestInfo) throws javax.resource.ResourceException, unadvisedMethod=public abstract java.lang.Object javax.resource.spi.ConnectionManager.allocateConnection(javax.resource.spi.ManagedConnectionFactory,javax.resource.spi.ConnectionRequestInfo) throws javax.resource.ResourceException, metadata=null, targetObject=java.lang.Object@12462b3, arguments=[Ljava.lang.Object;@1cbfa42]
| java.lang.IllegalArgumentException: Wrong target. class java.lang.Object for public abstract java.lang.Object javax.resource.spi.ConnectionManager.allocateConnection(javax.resource.spi.ManagedConnectionFactory,javax.resource.spi.ConnectionRequestInfo) throws javax.resource.ResourceException
| at org.jboss.aop.joinpoint.MethodInvocation.handleErrors(MethodInvocation.java:150)
| at org.jboss.aop.joinpoint.MethodInvocation.invokeTarget(MethodInvocation.java:125)
| at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:110)
| at org.jboss.jca.plugins.cm.TraceAdvice.invoke(TraceAdvice.java:42)
| at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:101)
| at AOPContainerProxy$0.allocateConnection(AOPContainerProxy$0.java)
| at org.jboss.test.jca.rar.support.TestConnectionFactoryImpl.getConnection(TestConnectionFactoryImpl.java:57)
| at org.jboss.test.jca.notx.test.NoTxUnitTestCase.testNoTx(NoTxUnitTestCase.java:47)
| at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
| at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
| at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
| at java.lang.reflect.Method.invoke(Method.java:585)
| at junit.framework.TestCase.runTest(TestCase.java:154)
| at junit.framework.TestCase.runBare(TestCase.java:127)
| at junit.framework.TestResult$1.protect(TestResult.java:106)
| at junit.framework.TestResult.runProtected(TestResult.java:124)
| at junit.framework.TestResult.run(TestResult.java:109)
| at junit.framework.TestCase.run(TestCase.java:118)
| at junit.framework.TestSuite.runTest(TestSuite.java:208)
| at junit.framework.TestSuite.run(TestSuite.java:203)
| at junit.extensions.TestDecorator.basicRun(TestDecorator.java:22)
| at junit.extensions.TestSetup$1.protect(TestSetup.java:19)
| at junit.framework.TestResult.runProtected(TestResult.java:124)
| at junit.extensions.TestSetup.run(TestSetup.java:23)
| at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
| at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
| at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
| at java.lang.reflect.Method.invoke(Method.java:585)
| at org.apache.maven.surefire.junit.JUnitTestSet.execute(JUnitTestSet.java:213)
| at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:138)
| at org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:125)
| at org.apache.maven.surefire.Surefire.run(Surefire.java:132)
| at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
| at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
| at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
| at java.lang.reflect.Method.invoke(Method.java:585)
| at org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:290)
| at org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:818)
| 12295 DEBUG [NoTxUnitTestCase] testNoTx took 7ms
| 12295 DEBUG [NoTxUnitTestCase] ==== Stopping testNoTx ====
| 12301 DEBUG [NoTxUnitTestCase] ==== tornDown org.jboss.test.jca.notx.test.NoTxUnitTestCase ====
| 12301 DEBUG [NoTxUnitTestCase] Undeploying [file:/home/vicky/workspace/JBOSS%20JCA/trunk/core/output/org/jboss/test/jca/notx/test/NoTxUnitTestCase.xml]
| 12306 DEBUG [Aspect] Unbinding aspect ListenerAspect
| 12309 DEBUG [Aspect] Unbinding aspect PoolAspect
| 12312 DEBUG [Aspect] Unbinding aspect SecurityAspect
| 12314 DEBUG [Aspect] Unbinding aspect ExceptionAspect
| 12316 DEBUG [Aspect] Unbinding aspect TraceAspect
|
I have noticed that I don't get the error when I run this from the Windows box; I have been getting the error when doing development on the Fedora box :(
Any clue about what could be happening?
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4068096#4068096
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4068096
17 years, 5 months
[Design of JBossCache] - FLUSH and state transfer during high invocation load
by vblagojevic@jboss.com
Hi,
We are moving internal thread discussion to the forum so everyone can contribute and discuss.
Discovery:
While clearing up the remaining unit test failures and getting ready for the 2.0 GA release, we noticed a transient StateTransferConcurrencyTest#testConcurrentUseSync failure. This discovery has implications beyond this test and concerns concurrent region activation (state transfer) under high invocation load. testConcurrentUseSync does a bunch of synchronous cache invocations on five cache instances while one of the caches activates regions on itself.
Problem description:
Let's say we have five cache instances A, B, C, D, and E. A is the instance that does region.activate() for regions under which B, C, D, and E do synchronous put invocations. These invocations and region.activate() are concurrent. Here is what happens. At the moment FLUSH is started at A (in order to do the state transfer triggered by region.activate), one of the members (B, C, D, and/or E) invokes cache.put. Let's say D. This cache put invocation, say CP, coming from D goes down D's stack before the channel gets blocked at D. CP arrives at A and goes up the stack at A. At that moment FLUSH has proceeded and is already blocking down on all channels. The invocation response for CP never returns from A since the channels are blocked. After careful observation we have concluded that until CP returns from A (its response blocked down due to flush), any subsequent mcast messages from D will not arrive at the FLUSH protocol level at A. This is not a JGroups bug but a valid property related to FIFO message delivery. Unfortunately this includes STOP_FLUSH_OK messages, and thus FLUSH cannot complete gracefully at A until CP returns. Finally, we run into timeouts, the test starts barfing TimeoutExceptions, and fails.
Solution:
We have to augment FLUSH slightly. Have a look at the semantics of FLUSH at the link below. Currently we have one semaphore B (depicted in the diagram as "wait on FLUSH.down()") that is activated once each channel receives FLUSH_OK messages from all channels. We will introduce another semaphore A that will block down *only* non-JGroups threads, i.e. application threads. Here are the details:
Semaphore A: when a channel gets the START_FLUSH message, do not allow user/application threads to call channel.down(). Upon switching on semaphore A, the JGroups thread that percolated up START_FLUSH travels up to the application level and carries the BLOCK event/callback. This JGroups thread can then do any necessary cleanup and can even send messages back down the stack, because semaphore B has not been activated yet.
Semaphore B: keeps the current semantics. When a channel finishes the first round of FLUSH (FLUSH_OK), do not allow any threads to call channel.down().
So how does this work for the problem above? The FLUSH_OK round is strictly guaranteed to be sent after any application messages, so all synchronous calls have to unwind and return, because FLUSH_OK will flush them. In the problem description above, CP is guaranteed to unwind. The solution we had before relied on the good will of the application not to send any more messages after it receives the BLOCK event. If the application disobeyed, like JBC did, we had a race condition between the application message and FLUSH_OK. If the application message was sent after FLUSH_OK, we get the problem from above.
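A hypothetical sketch of the two-gate scheme (names invented; the real change lives inside the JGroups FLUSH protocol, not in application code):

```java
public class FlushGateSketch {
    // "Semaphore A": after START_FLUSH, block only application threads.
    private volatile boolean appThreadsBlocked = false;
    // "Semaphore B": after the FLUSH_OK round completes, block every down() call.
    private volatile boolean allThreadsBlocked = false;

    void onStartFlush() { appThreadsBlocked = true; }
    void onAllFlushOk() { allThreadsBlocked = true; }
    void onStopFlush()  { appThreadsBlocked = false; allThreadsBlocked = false; }

    /**
     * Whether a channel.down() call may proceed. The JGroups thread carrying
     * the BLOCK event may still send between START_FLUSH and the FLUSH_OK
     * round; application threads may not.
     */
    boolean mayGoDown(boolean isJGroupsThread) {
        if (allThreadsBlocked)
            return false;                     // semaphore B: nobody sends
        if (appThreadsBlocked && !isJGroupsThread)
            return false;                     // semaphore A: app threads held
        return true;
    }

    public static void main(String[] args) {
        FlushGateSketch gate = new FlushGateSketch();
        assert gate.mayGoDown(false);   // before flush: app threads send freely
        gate.onStartFlush();
        assert !gate.mayGoDown(false);  // semaphore A blocks app threads...
        assert gate.mayGoDown(true);    // ...but the JGroups thread can still send
        gate.onAllFlushOk();
        assert !gate.mayGoDown(true);   // semaphore B blocks everyone
        gate.onStopFlush();
        assert gate.mayGoDown(false);   // flush over: everyone sends again
        System.out.println("ok");
    }
}
```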
As a side note, we could have solved this in JBC by having JBC follow the FLUSH BLOCK semantics - however, that solution looked prohibitively expensive performance-wise.
Currently we have only semaphore B. We originally had B in the place of A but have since moved it to B so we can have a BLOCK notification mechanism that allows a solution of JBCACHE-315. This proposal with two semaphores still leaves room to solve JBCACHE-315, because the BLOCK event travels up the stack on a JGroups thread when START_FLUSH is received. So any work has to be done on that thread, and the channel can potentially send messages down the stack before semaphore B kicks in.
Brian had concerns about the implications of semaphore A for a solution to JBCACHE-315, and rightly so. The JBCACHE-315 solution involves the following:
1) JGroups thread that percolated up START_FLUSH, travels up
to application level and invokes block() callback.
2) JBC block() impl involves
a) setting some flag/latch to prevent new tx's accessing the cache and thus causing unreplicated state changes. (We should deal with non-tx write calls as well.)
b) Monitoring existing tx's, giving them a chance to complete.
c) Rolling back tx's that don't complete.
2b and 2c involve letting application threads send messages (PREPARE/COMMIT/ROLLBACK RPCs). If JGroups is going to be blocking those messages on semaphore A, we're stuck.
So we also have to think of a solution for JBCACHE-315 concurrently, so to speak. Maybe the algorithm for JBCACHE-315 can be simplified somehow. What if we simply turn the latch on to prevent new txs from accessing the cache and send a rollback message on the JGroups thread for all txs that are in progress? The cost is more rollbacks, but we get strict consistency and simplicity.
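The simplified proposal could look roughly like this (illustrative names; in reality the rollback would be an RPC sent on the JGroups thread carrying the BLOCK event, which semaphore A lets through):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

public class BlockLatchSketch {
    private final AtomicBoolean blocked = new AtomicBoolean(false);
    private final Set<String> inProgressTxs = ConcurrentHashMap.newKeySet();
    private final Set<String> rolledBack = ConcurrentHashMap.newKeySet();

    /** New txs are rejected once the BLOCK callback has fired. */
    public boolean beginTx(String txId) {
        if (blocked.get())
            return false;
        inProgressTxs.add(txId);
        return true;
    }

    /** BLOCK callback: latch on, then roll back everything still in flight. */
    public void block() {
        blocked.set(true);
        for (String tx : inProgressTxs)
            rolledBack.add(tx); // stand-in for sending a rollback RPC
        inProgressTxs.clear();
    }

    public boolean wasRolledBack(String txId) { return rolledBack.contains(txId); }

    public static void main(String[] args) {
        BlockLatchSketch cache = new BlockLatchSketch();
        assert cache.beginTx("tx1");
        cache.block();
        assert !cache.beginTx("tx2");      // new txs rejected after BLOCK
        assert cache.wasRolledBack("tx1"); // in-flight tx rolled back
        System.out.println("ok");
    }
}
```

The trade-off is exactly as stated above: any tx in flight at BLOCK time pays the rollback cost, but no unreplicated state change can slip past the flush.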
Let's hear your suggestions!
http://wiki.jboss.org/wiki/attach?page=JGroupsFLUSH%2Fflushdiagram-state-...
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4068012#4068012
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4068012
17 years, 5 months