[JBoss JIRA] (JASSIST-205) "VerifyError: Inconsistent stackmap frames" from PowerMock (using Javassist 3.18)
by Scott Marlow (JIRA)
[ https://issues.jboss.org/browse/JASSIST-205?page=com.atlassian.jira.plugi... ]
Scott Marlow commented on JASSIST-205:
--------------------------------------
This doesn't appear to be a dead-code issue, but I'll take a look at the generated code just to be sure. The attached powermockdemoDEBUG.txt is the output from trying to build the demo and includes some debug prints.
> "VerifyError: Inconsistent stackmap frames" from PowerMock (using Javassist 3.18)
> ---------------------------------------------------------------------------------
>
> Key: JASSIST-205
> URL: https://issues.jboss.org/browse/JASSIST-205
> Project: Javassist
> Issue Type: Bug
> Affects Versions: 3.18.0-GA
> Environment: jdk1.7.0_21, Win8amd64
> Reporter: Ryan Kenney
> Assignee: Shigeru Chiba
> Attachments: PowerMockDemo.zip, powermockdemoDEBUG.txt
>
>
> Apologies if this is a duplicate of JASSIST-204. I didn't delve into the actual Javassist APIs used; I'm simply seeing an error in my PowerMock usage. Fortunately, this ticket provides a very simple test case for reproducibility.
> I was prompted to open a Javassist ticket by the following PowerMock ticket: https://code.google.com/p/powermock/issues/detail?id=355
> I'm attaching a very simple Maven project with a unit test to demonstrate the problem.
> Here are the guts of the failing unit test:
> {code}
> @RunWith(PowerMockRunner.class)
> @PrepareForTest( {MyClassUnderTest.class} )
> public class PowerMockTest {
>
>     /**************************************************************************
>      * Demonstrates an "Inconsistent stackmap frames" exception that results
>      * from PowerMock 1.5 and JDK 7.
>      *************************************************************************/
>     @Test
>     public void testWaitForExitMockMonitors() throws InterruptedException {
>         resetAll();
>
>         Object mockMonitor = createStrictMock(Object.class);
>         MyClassUnderTest myClass = new MyClassUnderTest(mockMonitor);
>
>         mockMonitor.wait(0);
>         mockMonitor.notifyAll();
>
>         replayAll();
>
>         myClass.run();
>
>         verifyAll();
>     }
>
>     public static class MyClassUnderTest {
>         private Object m_monitor;
>
>         public MyClassUnderTest(Object monitor) {
>             m_monitor = monitor;
>         }
>
>         public void run() throws InterruptedException {
>             m_monitor.wait(0);
>             m_monitor.notifyAll();
>         }
>     }
> }
> {code}
> And here is my error:
> {code}
> java.lang.VerifyError: Inconsistent stackmap frames at branch target 283 in method com.scea.dart.cmd.targetcontrol.target.process.PowerMockTest.testWaitForExitMockMonitors()V at offset 274
> at java.lang.Class.getDeclaredMethods0(Native Method)
> at java.lang.Class.privateGetDeclaredMethods(Class.java:2451)
> at java.lang.Class.privateGetPublicMethods(Class.java:2571)
> at java.lang.Class.getMethods(Class.java:1429)
> at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.getTestMethods(PowerMockJUnit44RunnerDelegateImpl.java:95)
> at org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.<init>(PowerMockJUnit44RunnerDelegateImpl.java:71)
> at org.powermock.modules.junit4.internal.impl.PowerMockJUnit49RunnerDelegateImpl.<init>(PowerMockJUnit49RunnerDelegateImpl.java:29)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
> at org.powermock.modules.junit4.common.internal.impl.JUnit4TestSuiteChunkerImpl.createDelegatorFromClassloader(JUnit4TestSuiteChunkerImpl.java:143)
> at org.powermock.modules.junit4.common.internal.impl.JUnit4TestSuiteChunkerImpl.createDelegatorFromClassloader(JUnit4TestSuiteChunkerImpl.java:39)
> at org.powermock.tests.utils.impl.AbstractTestSuiteChunkerImpl.createTestDelegators(AbstractTestSuiteChunkerImpl.java:217)
> at org.powermock.modules.junit4.common.internal.impl.JUnit4TestSuiteChunkerImpl.<init>(JUnit4TestSuiteChunkerImpl.java:59)
> at org.powermock.modules.junit4.common.internal.impl.AbstractCommonPowerMockRunner.<init>(AbstractCommonPowerMockRunner.java:32)
> at org.powermock.modules.junit4.PowerMockRunner.<init>(PowerMockRunner.java:33)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
> at org.junit.internal.builders.AnnotatedBuilder.buildRunner(AnnotatedBuilder.java:31)
> at org.junit.internal.builders.AnnotatedBuilder.runnerForClass(AnnotatedBuilder.java:24)
> at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:57)
> at org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:29)
> at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:57)
> at org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:24)
> at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.<init>(JUnit4TestReference.java:33)
> at org.eclipse.jdt.internal.junit4.runner.JUnit4TestClassReference.<init>(JUnit4TestClassReference.java:25)
> at org.eclipse.jdt.internal.junit4.runner.JUnit4TestLoader.createTest(JUnit4TestLoader.java:48)
> at org.eclipse.jdt.internal.junit4.runner.JUnit4TestLoader.loadTests(JUnit4TestLoader.java:38)
> at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:452)
> at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> {code}
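A side note, not from the ticket itself: wait(long) and notifyAll() are declared final on java.lang.Object, which is presumably why PowerMock has to rewrite the test class's bytecode via Javassist rather than subclass a mock - and that rewriting is where the StackMapTable the verifier rejects comes from. A quick reflective check of the final modifiers (class name is illustrative):

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

// Shows that Object.wait(long) and Object.notifyAll() carry the final
// modifier, so mocking them requires load-time bytecode instrumentation
// of the *caller* instead of plain subclassing.
class FinalMethodCheck {
    static boolean isFinal(String name, Class<?>... params) throws NoSuchMethodException {
        Method m = Object.class.getDeclaredMethod(name, params);
        return Modifier.isFinal(m.getModifiers());
    }
}
```

Both checks return true on any conforming JDK, including the JDK 7 build from the report.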
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (WFLY-490) Domain Management Role Based Access Control
by Gopi Chand Uppala (JIRA)
[ https://issues.jboss.org/browse/WFLY-490?page=com.atlassian.jira.plugin.s... ]
Gopi Chand Uppala commented on WFLY-490:
----------------------------------------
Brian, do you know which version of EAP will include this feature, and when can we expect that release?
> Domain Management Role Based Access Control
> -------------------------------------------
>
> Key: WFLY-490
> URL: https://issues.jboss.org/browse/WFLY-490
> Project: WildFly
> Issue Type: Feature Request
> Components: Domain Management, Security
> Reporter: Darran Lofthouse
> Assignee: Darran Lofthouse
> Priority: Blocker
> Labels: Authorization
> Fix For: 8.0.0.CR1
>
>
> Implement some coarse permissions for domain operations, possibly broken down by subsystem, profile, server, and server-group - maybe read / write / execute.
> Also consider confidentiality requirements per exchange, e.g. metrics can be read over http, but https must be used to add a new server.
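The coarse permission model described in the request could be sketched roughly as follows; all names here (AccessControl, Action, the resource strings) are illustrative stand-ins, not actual WildFly API:

```java
import java.util.Arrays;
import java.util.EnumSet;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: per-resource read/write/execute grants, plus the
// transport-confidentiality rule from the request (reads may go over
// http, state-changing operations require https).
enum Action { READ, WRITE, EXECUTE }

class AccessControl {
    // e.g. "profile/full-ha" -> {READ}, "server-group/main" -> {READ, WRITE}
    private final Map<String, EnumSet<Action>> grants = new HashMap<>();

    void grant(String resource, Action... actions) {
        grants.computeIfAbsent(resource, k -> EnumSet.noneOf(Action.class))
              .addAll(Arrays.asList(actions));
    }

    boolean isAllowed(String resource, Action action, boolean secureTransport) {
        EnumSet<Action> set = grants.get(resource);
        if (set == null || !set.contains(action))
            return false;
        // confidentiality: anything other than a read needs https
        return action == Action.READ || secureTransport;
    }
}
```

With such a table, a grant of READ+WRITE on "server-group/main" would permit reads over plain http but reject a write arriving over an insecure connection.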
[JBoss JIRA] (JGRP-1613) FORK: cactus stacks
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-1613?page=com.atlassian.jira.plugin.... ]
Bela Ban edited comment on JGRP-1613 at 8/1/13 7:16 AM:
--------------------------------------------------------
OK, based on a conversation with Sanne at Red Hat Summit, we decided to narrow the scope.
One of the main reasons not to implement a cactus-stack-like architecture as in the picture above was that - since grafts could be dynamically created (as a matter of fact, Sanne would need a graft to be created *per deployed application*) and deleted - not all nodes would have all grafts. This means that GMS would have to maintain something like the dreaded service views (google for Multiplexer service views, e.g. [1]): if we have view \{A,B,C,D\}, but only nodes B and C have a certain graft created, then the *service view* for that graft would be \{B,C\}.
Another reason was that a number of protocols (discovery, failure detection, merge) would have to be rewritten to become singletons, and it turned out that this is unlikely to be genericizable: FD_ALL, for example, maintains mappings between logical addresses (which are per channel) and physical addresses, so it is tied to a channel and would have to be rewritten to deal only with physical addresses. I'll keep this task for a later day...
So the new, limited functionality should be:
* FORK can only be the top protocol in a stack (or towards the top of the stack)
* An app calls createChannel(), which takes a string that has to be unique. We create a fork-channel, which is a subclass of JChannel and has a ref to the main channel
* The createChannel() method initially carries a list of instantiated protocols. Later we might also accept XML snippets
* The close() or disconnect() method on the fork-channel does *not* close or disconnect the main channel, only the fork-channel
* In other words, a fork-channel is a very lightweight channel, and we might create hundreds of them without a big penalty
* That string is used to mux/demux messages to the app
* A header including that id is added to each message so we know how to mux/demux
* There is a *main channel*, and the dynamically created channels refer to it; e.g. their lifetime is less than or equal to that of the main channel
* When the main channel is closed, sending messages on the fork-channel throws an exception
* The view and address of the fork-channels are the same as those of the main channel
[1] https://community.jboss.org/wiki/MigrationFromMultiplexerToSharedTransport
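The mux/demux design in the bullets above could be illustrated roughly like this; MainChannel, ForkChannel and Message here are self-contained stand-ins for illustration, not the JGroups API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Each message carries a header with the fork id so the main channel
// knows which fork-channel to demux it to.
class Message {
    final String forkId;
    final String payload;
    Message(String forkId, String payload) { this.forkId = forkId; this.payload = payload; }
}

class MainChannel {
    private final Map<String, ForkChannel> forks = new ConcurrentHashMap<>();
    private volatile boolean closed;

    // createChannel() takes a string that has to be unique per fork-channel
    ForkChannel createChannel(String id) {
        ForkChannel fc = new ForkChannel(this, id);
        if (forks.putIfAbsent(id, fc) != null)
            throw new IllegalArgumentException("fork id already in use: " + id);
        return fc;
    }

    // once the main channel is closed, fork-channel sends must fail
    void send(Message msg) {
        if (closed) throw new IllegalStateException("main channel is closed");
        ForkChannel target = forks.get(msg.forkId);   // demux by header id
        if (target != null) target.deliver(msg.payload);
    }

    void close() { closed = true; }
    void remove(String id) { forks.remove(id); }
}

class ForkChannel {
    private final MainChannel main;   // ref to the main channel
    private final String id;
    final List<String> received = new ArrayList<>();

    ForkChannel(MainChannel main, String id) { this.main = main; this.id = id; }

    void send(String payload) { main.send(new Message(id, payload)); }
    void deliver(String payload) { received.add(payload); }

    // closing the fork-channel does *not* close the main channel
    void close() { main.remove(id); }
}
```

A fork-channel here is just an id plus a reference to the main channel, which is what makes creating hundreds of them cheap.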
was (Author: belaban):
OK, based on a conversation with Sanne at Red Hat Summit, we decided to narrow the scope.
One of the main reasons not to implement a cactus stack like architecture like in the picture above was that - as grafts could be dynamically created (as a matter of fact, Sanne would need a graft to be created *per deployed application*) and deleted - not all nodes would have all grafts. This means that GMS would have to maintain something like the dreaded service views (google for Multiplexer service views, e.g. [1]). This means that if we have view \{A,B,C,D\}, but only nodes B and C have a certain graft created, then the *service view* for that graft would be \{B,C\}.
The new functionality should be:
* FORK can only be the top protocol in a stack (or towards the top of the stack)
* An app gets a call createChannel() which takes a string that has to be unique. We create a fork-channel, which is a subclass of JChannel and has a ref to the main channel
* The createChannel() method initially carries a list of instantiated protocols. Later we might also accept XML snippets
* The close() or disconnect() method on the fork-channel does *not* close or disconnect the main channel, only the fork-channel
* In other words, a fork-channel is a very light weight channel, and we might create hundreds of them without a big penalty
* That string is used to mux/demux messages to the app
* A header is added to each message including that max-id so we know how to mux/demux
* There is a *main channel* and the dynamically created channels refer to it, e.g. their lifetime is less than or equal to the main channel
* When the main channel is closed, sending of messages on the fork-channel throws an exception
* The view and address of the fork-channels is the same as that of the main channel
[1] https://community.jboss.org/wiki/MigrationFromMultiplexerToSharedTransport
> FORK: cactus stacks
> -------------------
>
> Key: JGRP-1613
> URL: https://issues.jboss.org/browse/JGRP-1613
> Project: JGroups
> Issue Type: Feature Request
> Reporter: Bela Ban
> Assignee: Bela Ban
> Fix For: 3.4
>
> Attachments: IMAG0129.jpg
>
>
> Introduce cactus stacks where we can have multiple different stacks grafted onto the same base stack.
> The problem today is that different applications need different functionality (protocol stack configs) in the AS. For example, we have the default stack used by AS. Then, Hibernate Search wants to use distributed locking (CENTRAL_LOCK) and counting (COUNTER). The total order stack wants to use TOA/SEQUENCER and so on.
> Cactus stacks add the ability to:
> * Provide custom (partial) stacks that are grafted onto a base stack
> * Add/remove stacks at runtime
> See the attached picture for details.
[JBoss JIRA] (JGRP-1613) FORK: cactus stacks
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-1613?page=com.atlassian.jira.plugin.... ]
Bela Ban edited comment on JGRP-1613 at 8/1/13 7:09 AM:
--------------------------------------------------------
OK, based on a conversation with Sanne at Red Hat Summit, we decided to narrow the scope.
One of the main reasons not to implement a cactus-stack-like architecture as in the picture above was that - since grafts could be dynamically created (as a matter of fact, Sanne would need a graft to be created *per deployed application*) and deleted - not all nodes would have all grafts. This means that GMS would have to maintain something like the dreaded service views (google for Multiplexer service views, e.g. [1]): if we have view \{A,B,C,D\}, but only nodes B and C have a certain graft created, then the *service view* for that graft would be \{B,C\}.
The new functionality should be:
* FORK can only be the top protocol in a stack (or towards the top of the stack)
* An app calls createChannel(), which takes a string that has to be unique. We create a fork-channel, which is a subclass of JChannel and has a ref to the main channel
* The createChannel() method initially carries a list of instantiated protocols. Later we might also accept XML snippets
* The close() or disconnect() method on the fork-channel does *not* close or disconnect the main channel, only the fork-channel
* In other words, a fork-channel is a very lightweight channel, and we might create hundreds of them without a big penalty
* That string is used to mux/demux messages to the app
* A header including that id is added to each message so we know how to mux/demux
* There is a *main channel*, and the dynamically created channels refer to it; e.g. their lifetime is less than or equal to that of the main channel
* When the main channel is closed, sending messages on the fork-channel throws an exception
* The view and address of the fork-channels are the same as those of the main channel
[1] https://community.jboss.org/wiki/MigrationFromMultiplexerToSharedTransport
was (Author: belaban):
OK, based on a conversation with Sanne at Red Hat Summit, we decided to narrow the scope. One of the main reasons was that - as grafts could be dynamically created (as a matter of fact, Sanne would need a graft to be created *per deployed application*) and deleted - not all nodes would have all grafts. This means that GMS would have to maintain something like the dreaded service views (google for Multiplexer service views, e.g. [1]). This means that if we have view \{A,B,C,D\}, but only nodes B and C have a certain graft created, then the *service view* for that graft would be \{B,C\}.
The new functionality should be:
* FORK can only be the top protocol in a stack (or towards the top of the stack)
* An app gets a call createChannel() which takes a string that has to be unique. We create a fork-channel, which is a subclass of JChannel and has a ref to the main channel
* The createChannel() method initially carries a list of instantiated protocols. Later we might also accept XML snippets
* The close() or disconnect() method on the fork-channel does *not* close or disconnect the main channel, only the fork-channel
* In other words, a fork-channel is a very light weight channel, and we might create hundreds of them without a big penalty
* That string is used to mux/demux messages to the app
* A header is added to each message including that max-id so we know how to mux/demux
* There is a *main channel* and the dynamically created channels refer to it, e.g. their lifetime is less than or equal to the main channel
* When the main channel is closed, sending of messages on the fork-channel throws an exception
* The view and address of the fork-channels is the same as that of the main channel
[1] https://community.jboss.org/wiki/MigrationFromMultiplexerToSharedTransport
> FORK: cactus stacks
> -------------------
>
> Key: JGRP-1613
> URL: https://issues.jboss.org/browse/JGRP-1613
> Project: JGroups
> Issue Type: Feature Request
> Reporter: Bela Ban
> Assignee: Bela Ban
> Fix For: 3.4
>
> Attachments: IMAG0129.jpg
>
>
> Introduce cactus stacks where we can have multiple different stacks grafted onto the same base stack.
> The problem today is that different applications need different functionality (protocol stack configs) in the AS. For example, we have the default stack used by AS. Then, Hibernate Search wants to use distributed locking (CENTRAL_LOCK) and counting (COUNTER). The total order stack wants to use TOA/SEQUENCER and so on.
> Cactus stacks add the ability to:
> * Provide custom (partial) stacks that are grafted onto a base stack
> * Add/remove stacks at runtime
> See the attached picture for details.