On Aug 5, 2014, at 9:32 AM, Bill Burke <bburke@redhat.com> wrote:

> On 8/5/2014 10:15 AM, Jason Greene wrote:
>>> Those silly benchmarks are indeed silly. Any workload that doesn't
>>> actually do anything with the requests is not very helpful. You can be
>>> really fast on them and really slow on a workload that actually does
>>> something. We have already seen this in one case recently.
>>
>> It's all about precision and limiting variables. I know it's fun to
>> criticize micro benchmarks (micro is of course relative, because a great
>> deal of processing goes on even in EJB and HTTP requests that don't do
>> much); however, micro benchmarks, provided they are measured correctly,
>> are quick to run and make it easy to tell what you have to fix. Large,
>> gigantic apps, on the other hand, are difficult to analyze and do not
>> necessarily represent the activity a differently architected app would
>> show. As an example, if you have pattern-matching code and you want it
>> to be fast, it's much easier (and more accurate) to benchmark the
>> pattern-matching code than to benchmark a full EE app that happens to
>> use pattern matching in a few places.
>>
>> Arguing against targeted benchmarks is like arguing that you should
>> never write a test case, because only real production load will show
>> problems… To some extent that's true, but that doesn't mean the
>> problems you catch in simulations won't affect production users.
>>
> I'm more worried about focus than about the actual benchmarks, considering
> how resource-strapped every project seems to be. I'd like to see a
> discussion started on what the most important and *ripe* areas for
> optimization are, and then have the performance team validate and make
> recommendations on any priority list that this discussion generates.
The pooling concerns (aside from the occasional expensive post-constructs) actually came
from a large app/benchmark that the perf team was testing: they were seeing an optimized
strict max pool outperform no pooling at all, and it wasn't post-construct cost. Their
theory is/was GC pressure, because this benchmark/app spends a lot of time in GC, and they
see higher GC activity with no pooling. It's possible the difference is indirect, though:
the fact that a strict max pool acts as a throttle might prevent the system from degrading
once it can no longer keep up with the benchmark load.
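To illustrate both effects at once, here's a minimal sketch (hypothetical names, not WildFly's actual pool implementation) of how a strict max pool both reuses instances, easing GC pressure, and caps concurrency, acting as a throttle:

```java
import java.util.ArrayDeque;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Hypothetical sketch of a strict-max pool: the semaphore caps how many
// instances can be in use at once (the throttle), and the idle deque
// recycles instances instead of allocating fresh ones (less GC churn).
class StrictMaxPool<T> {
    private final Semaphore permits;
    private final ArrayDeque<T> idle = new ArrayDeque<>();
    private final Supplier<T> factory;

    StrictMaxPool(int max, Supplier<T> factory) {
        this.permits = new Semaphore(max);
        this.factory = factory;
    }

    T acquire(long timeout, TimeUnit unit) throws InterruptedException {
        // Blocks when 'max' instances are already checked out; under
        // overload this back-pressure keeps the system from taking on
        // more work than it can finish.
        if (!permits.tryAcquire(timeout, unit)) {
            throw new IllegalStateException("pool exhausted");
        }
        synchronized (idle) {
            T t = idle.poll();
            return t != null ? t : factory.get(); // reuse before allocating
        }
    }

    void release(T instance) {
        synchronized (idle) {
            idle.push(instance); // return the instance for reuse
        }
        permits.release();
    }
}
```

With no pooling, every request pays a fresh allocation (and later collection); with the strict cap, an overloaded system queues at acquire() instead of degrading, which could account for the benchmark gap independent of GC.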
Another possibility to look into is that I see we do:
interceptorContext.setContextData(new HashMap<String, Object>());
*AND*
private Map<Object, Object> instanceData = new HashMap<Object, Object>();
Aside from the unnecessary allocation overhead, since JDK 7, HashMap construction uses the
murmur hash algorithm, which requires generating a random seed under global locking. There
were plans to optimize this, but the fix might not be in JDK 7 yet. In a few places we use
alternative map implementations to work around the issue.
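One cheap mitigation, sketched below with made-up names (not our actual code), is to defer the HashMap construction until something is actually stored, so requests that never touch context data skip the allocation and the seed-generation cost entirely:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: lazily initialize the per-invocation map so the
// common case (no context data written) never constructs a HashMap.
class LazyContextData {
    private Map<String, Object> contextData; // null until first write

    Map<String, Object> getContextData() {
        // Read path: report "empty" without allocating anything.
        return contextData == null
                ? Collections.<String, Object>emptyMap()
                : contextData;
    }

    void putContextData(String key, Object value) {
        if (contextData == null) {
            // Allocated at most once, and only when actually needed.
            contextData = new HashMap<String, Object>();
        }
        contextData.put(key, value);
    }
}
```

The returned empty map is immutable, so callers that need to mutate must go through putContextData; that trade-off is what makes the lazy path safe.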
--
Jason T. Greene
WildFly Lead / JBoss EAP Platform Architect
JBoss, a division of Red Hat