"alex.loubyansky(a)jboss.com" wrote : The time results for each number of
repetitions are consistent in differences, i.e. for each number of repetitions the winner
is consistent with more or less the same difference.
I don't think that is true, looking at the numbers you posted.
fast jaxb is clearly faster at the binding,
but slower at building the model (looking at the numbers for one repetition).
e.g. If you look at the last 900 reps (the 1000-rep case minus the 100-rep case) you get:
fastjaxb = 808ms (1057 - 249), avg 0.89ms per rep
xb = 1116ms (1402 - 286), avg 1.24ms per rep
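The subtraction above isolates the steady-state binding cost by removing the one-off setup (model building / code generation) that dominates the first 100 reps. A quick sketch of that arithmetic, using the totals from Alex's benchmark:

```java
// Steady-state averages over the last 900 reps, from the totals (in ms)
// for the 100-rep and 1000-rep runs posted above.
public class SteadyState {
    public static void main(String[] args) {
        int reps = 1000 - 100;          // the last 900 repetitions
        int fastJaxb = 1057 - 249;      // 808 ms
        int xb = 1402 - 286;            // 1116 ms
        System.out.printf("fastjaxb avg: %.2f ms%n", (double) fastJaxb / reps); // ~0.89
        System.out.printf("xb avg:       %.2f ms%n", (double) xb / reps);       // ~1.24
    }
}
```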
The slow part of building the model will obviously be the code generation. I imagine this would get even slower as it becomes more feature-rich?
If the code generation was moved to compile time (like xjc), then having
already-generated classes will obviously outperform the reflection used by xb.
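To illustrate the reflection-vs-generated-code distinction, here is a hypothetical sketch (all names invented, not taken from either codebase): a runtime binder looks up and invokes a setter reflectively per element, while a pre-generated binder makes a plain compiled call that the JIT can inline.

```java
import java.lang.reflect.Method;

public class BindingStyles {
    public static class Bean {
        private String name;
        public void setName(String n) { this.name = n; }
        public String getName() { return name; }
    }

    // Reflection-based binding: resolve the setter by name at runtime,
    // roughly what a reflective binder does for each element it sees.
    static void bindReflectively(Object target, String prop, String value) throws Exception {
        Method m = target.getClass().getMethod(
            "set" + Character.toUpperCase(prop.charAt(0)) + prop.substring(1),
            String.class);
        m.invoke(target, value);
    }

    // Pre-generated binding: a direct call, fully resolved at compile time.
    static void bindGenerated(Bean target, String value) {
        target.setName(value);
    }

    public static void main(String[] args) throws Exception {
        Bean a = new Bean();
        bindReflectively(a, "name", "foo");
        Bean b = new Bean();
        bindGenerated(b, "bar");
        System.out.println(a.getName() + " " + b.getName()); // foo bar
    }
}
```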
"david.lloyd(a)jboss.com" wrote : I knew he should have used StAX :-)
StAX has nothing to do with the binding. It will probably be faster overall,
but it just replaces the SAX xml parsing, something that has to be done in both cases.
That is 33% of the time, versus 67% for binding, according to the numbers Alex posted above.
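For reference, a minimal StAX pull-parsing sketch (assuming the `javax.xml.stream` API that ships with the JDK; the XML shape and attribute names here are made up). Note it only covers the parsing step: populating the model from these events, i.e. the binding, is separate work either way.

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class StaxSketch {
    // Pull-parse the document and return the name attribute of the
    // first <mbean> element, or null if none is found.
    static String firstMbeanName(String xml) throws Exception {
        XMLStreamReader r = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        try {
            while (r.hasNext()) {
                if (r.next() == XMLStreamConstants.START_ELEMENT
                        && "mbean".equals(r.getLocalName())) {
                    return r.getAttributeValue(null, "name");
                }
            }
            return null;
        } finally {
            r.close();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(firstMbeanName("<server><mbean name=\"A\"/></server>"));
    }
}
```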
If Alex's numbers are typical (never trust a simple benchmark ;-), then
a precompiled fast jaxb implementation might reclaim ~30% (1-808/1116) of that 67%
binding time. i.e. ~20% (.3 * 67%) of the overall time.
Which would mean the overall xml startup would be 2 seconds instead of 2.4 seconds.
Not exactly a huge improvement. :-(
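Spelling out that estimate, under the same assumptions (binding is ~67% of the total, and the 808ms vs 1116ms steady-state numbers hold):

```java
// Back-of-the-envelope check of the startup estimate above.
public class Estimate {
    public static void main(String[] args) {
        double bindingShare = 0.67;                   // binding's share of total time
        double bindingCut = 1.0 - 808.0 / 1116.0;     // ~0.28 faster binding
        double overallCut = bindingShare * bindingCut; // ~0.18 of overall time
        double startup = 2.4 * (1.0 - overallCut);     // ~2.0 s instead of 2.4 s
        System.out.printf("overall cut ~%.0f%%, startup ~%.1f s%n",
                overallCut * 100, startup);
    }
}
```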