Is ByteMan in the maven repository somewhere? I'd like to try it out
to test parallel deployments in MC
On 11 Jun 2009, at 15:08, Andrew Dinn wrote:
Galder Zamarreno wrote:
> Andrew, did you get around to comparing ByteMan with JBoss AOP?
Yes, I had a closer look at AOP.
> IMO, both seem to do the same, instrument classes to add some
> behaviour here or there. ByteMan simply seems to use load time
> weaving style whereas JBoss AOP had two modes.
Yes, I think you can use AOP to inject code in all the same places
that Byteman can. Byteman does make local variables available for
use in the injected code, which I don't think AOP can do, but
that's not the most important feature of either.
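As an illustration of the local variable access, a rule like the
following can test and trace a method-local value (a minimal sketch:
the class, method and variable names are hypothetical, and binding
local variables requires the target class to be compiled with debug
info):

```
# hypothetical example: com.example.Cache, get(String) and the local
# variable 'entry' are illustrative names, not real application code
RULE trace cache miss
CLASS com.example.Cache
METHOD get(String)
AFTER WRITE $entry
IF $entry == null
DO traceln("cache miss for key " + $1)
ENDRULE
```

Here $1 binds the first method parameter and $entry the local
variable written in the method body.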
> So, why should I use ByteMan over JBoss AOP if JBoss AOP is already
> integrated with AS? Ease of use? Memory consumption? Speed?
> Functionality?
Well, firstly, when it comes to testing via fault injection Byteman
is a lot simpler and easier to use than AOP (not that I am thereby
knocking AOP). You don't have to define any classes or compile any
code in order to use Byteman. The injected side effects can simply
be written into the rule script as Java code or built-in helper
method calls. Mostly when you are testing code you want to tweak the
current behaviour, and that doesn't normally require writing a lot
of code. Usually, most of the behaviour you need to change is simple
to script using a few basic Java operations and/or the public API of
the classes you are testing.
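For example, a fault-injection rule can be written entirely in the
script, with the side effect expressed as plain Java (a sketch with
hypothetical class and method names):

```
# hypothetical example: class and method names are illustrative
RULE inject commit failure
CLASS com.example.TxManager
METHOD commit
AT ENTRY
IF true
DO throw new RuntimeException("byteman injected failure")
ENDRULE
```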
However, sometimes the desired side effects include operations which
are not simple to script, and Byteman provides help in that area too.
The extra features (beyond calls to Java operations or application
APIs) that Byteman provides out of the box are exposed via the
Helper class. This class implements the default built-in methods
available for use in scripts. These built-ins allow you to write
simple, clear rules which do many of the things that are needed
during testing.
The default Helper provides 3 distinct sets of operations for:
coordinating the timing of independent threads; creating and
managing rule state from one rule triggering to the next; and
generating output to trace progress of a test. I specifically
implemented the first set to help enforce normal and abnormal
timings during XTS crash recovery testing and they have been very
useful when it comes to ensuring that tests run with both expected
and unexpected interleavings. The second set allows scripts to
succinctly express quite complex scenarios. The third set is just a
basic way of dumping trace output to files.
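By way of illustration, the thread-coordination and tracing built-ins
might be combined like this (a sketch: the classes, methods and the
"recovery-done" wait key are hypothetical names):

```
# hypothetical example: classes, methods and the wait key are illustrative
RULE worker waits for recovery
CLASS com.example.Worker
METHOD run
AT ENTRY
IF TRUE
DO traceln("worker waiting"),
   waitFor("recovery-done")
ENDRULE

RULE recovery wakes workers
CLASS com.example.RecoveryManager
METHOD complete
AT EXIT
IF TRUE
DO traceln("signalling workers"),
   signalWake("recovery-done", true)
ENDRULE
```

The first rule parks any worker thread until the second rule fires,
which is the kind of interleaving control used in the XTS crash
recovery tests.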
Now you _could_ implement the same functionality as a library to be
called from AOP injected code. In fact its easy to do so because
it's just the public API of a single class that I defined. But I
don't think the resulting AOP-based code would be as quick to write,
test and change, nor as clear and easy for others to read and
follow. The rules I used in the TS tests are few in number, concise
and express directly how they modify the behaviour of the
application code. Anyone who understands the application can easily
follow how these rules implement the desired test scenario.
The same concern for clarity, simplicity and flexibility led me to
provide support for redefinition of the helper class. I envisage
that when testing a specific application there will be the need to
perform common operations not contained in the set I provided. As
with XTS these operations will be required in order to set up,
maintain and monitor test conditions across multiple triggerings and
in different tests. By defining a helper class to encapsulate those
operations you can still employ small, simple and clear rule sets to
define the test scenarios.
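A custom helper is just a class whose public methods become available
as rule built-ins (it can also extend the default Helper to keep the
standard ones). A minimal sketch, with hypothetical names; state is
kept in a static field because a fresh helper instance is created per
rule triggering:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical test helper: TxTestHelper and its method names are
// illustrative, not part of Byteman. State lives in a static map so
// it survives across rule triggerings, since a new helper instance
// is created for each triggering.
public class TxTestHelper {
    private static final Map<String, Integer> attempts = new ConcurrentHashMap<>();

    // record one attempt for the given transaction id, returning the new count
    public int countAttempt(String txId) {
        return attempts.merge(txId, 1, Integer::sum);
    }

    // true once the given transaction id has been seen at least n times
    public boolean attemptedAtLeast(String txId, int n) {
        return attempts.getOrDefault(txId, 0) >= n;
    }
}
```

A rule would then select it with a HELPER line, e.g.
HELPER com.example.TxTestHelper, and its condition could read
IF countAttempt($1) >= 3 while the rule body stays short and clear.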
As regards performance, I don't yet know how well Byteman performs.
I have not yet had a chance to test the performance of either the
trigger injection code or the rule type checker/compiler. The former
is very efficient in that it avoids _any_ work if a loaded class
does not match a rule in the current rule set, i.e. the target class
and target method do not hash to values mentioned in the rule base.
If they do match, then a very simple bytecode scan filters out
methods which do not contain a location matching the rule
specification (e.g. there is no read of a field called name).
If a method does match a rule's class, method and location then the
transformer code still only performs a relatively simple
modification of the bytecode. It injects a single call to the rule
engine at each trigger location and provides a catch block for the
potential exceptions thrown by that call. This involves a single
walk through the bytecode for each matched rule. It's not quite so
simple as just throwing in a few LOADS and an INVOKE_STATIC call
since doing the exception handling correctly requires identifying
enough of the control flow to ensure that synchronized blocks are
exited cleanly. But I still expect this transformation to be very
fast (if it is not then of course we could probably switch to using
AOP to do the job, but I don't suppose that is likely to do a lot
better ;-)
The type checker is pretty simple also, especially if the rules are
kept compact. The overhead of type checking (and of compilation to
bytecode) is determined by the complexity of the rule body, since
both involve little beyond a simple walk of the rule parse tree
(another reason to define custom helpers for test-specific
operations). Once again I expect them to be very fast. Type checking
is done at first execution so the cost is not incurred unless and
until a rule is triggered. By default rules are run interpreted. I
have found with my tests that rules are triggered only once or twice
and I envisage that this will be common, since setting up and tracing
test scenarios usually requires a small number of tweaks at very
specific points in the code. However, rule compilation, i.e.
translation of the rule body to invokable bytecode, has now been
implemented, so it should help in cases where rules are triggered
frequently. The initial crop of obvious bugs has been fixed and
I'll be happy to get my teeth into the harder ones as they turn up
(so please break it :-).
regards,
Andrew Dinn
-----------
_______________________________________________
jboss-development mailing list
jboss-development@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/jboss-development