On 8/5/2014 10:32 AM, Bill Burke wrote:
On 8/5/2014 10:15 AM, Jason Greene wrote:
>> Those silly benchmarks are indeed silly. Any workload that doesn't actually
>> do anything with the requests is not very helpful. You can be really fast on
>> them and be really slow on a workload that actually does something. We have
>> already seen this in one case recently.
>
> It’s all about precision and limiting variables. I know it’s fun to criticize
> micro benchmarks (obviously "micro" is relative, since a great deal of
> processing goes on even in EJB and HTTP requests that don’t do much), but
> micro benchmarks, provided they are measured correctly, are quick to run and
> make it easy to tell what you have to fix. Large, gigantic apps, on the other
> hand, are difficult to analyze and do not necessarily represent the activity
> that a differently architected app would show. To use an example, if you have
> pattern matching code and you want it to be fast, it’s much easier (and more
> accurate) to benchmark the pattern matching code than it is to benchmark a
> full EE app that happens to use pattern matching in a few places.
>
> Arguing against targeted benchmarks is like arguing that you should never
> write a test case because only real production load will show problems… To
> some extent that’s true, but that doesn’t mean that the problems you catch
> in simulations won’t affect production users.
>
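To make the pattern-matching example above concrete: a targeted benchmark of just
that code might look something like the following JMH sketch. This is only an
illustration of the idea; JMH as the harness, the class name, the regex, and the
sample input are all assumptions of this sketch, not anything prescribed in the
thread.

    import java.util.concurrent.TimeUnit;
    import java.util.regex.Pattern;

    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.BenchmarkMode;
    import org.openjdk.jmh.annotations.Fork;
    import org.openjdk.jmh.annotations.Measurement;
    import org.openjdk.jmh.annotations.Mode;
    import org.openjdk.jmh.annotations.OutputTimeUnit;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.State;
    import org.openjdk.jmh.annotations.Warmup;

    /*
     * Hypothetical example: measures only the pattern-matching step, isolated
     * from everything else an EE request would do. The regex and sample input
     * are made up purely for illustration.
     */
    @State(Scope.Benchmark)
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    @Warmup(iterations = 5)
    @Measurement(iterations = 10)
    @Fork(1)
    public class PatternMatchBenchmark {

        private static final String REGEX = "^/app/orders/\\d+$"; // illustrative route pattern
        private static final Pattern COMPILED = Pattern.compile(REGEX);

        private final String input = "/app/orders/123456";

        // Recompiles the regex on every call -- the kind of hidden cost a
        // full-app benchmark tends to bury in request-handling noise.
        @Benchmark
        public boolean matchesRecompile() {
            return input.matches(REGEX);
        }

        // Reuses a precompiled Pattern; comparing the two methods shows
        // exactly where the time goes and what to fix.
        @Benchmark
        public boolean matchesPrecompiled() {
            return COMPILED.matcher(input).matches();
        }
    }

Running the two methods side by side makes the cost of recompiling the pattern
visible immediately, which is exactly the kind of signal that is quick to obtain
from a targeted benchmark and hard to pull out of a full EE app.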
I'm more worried about focus than about the actual benchmarks, considering how
resource-strapped every project seems to be. I'd like to see a discussion
started on which areas are the most important and *ripe* for optimization, and
then have the performance team validate and make recommendations on any
priority list that this discussion generates.
Furthermore, if we put effort into a particularly ripe area for optimization
and no popular benchmark exists to showcase that work, then we should create
one, open source it, and promote it.
--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com