Section 6.5.5 of the spec (second bullet list) is written to allow the
container to optimize these cases. I don't see that this is something
that we need an SPI for.
On Fri, Feb 26, 2010 at 4:20 PM, Mark Struberg <struberg(a)yahoo.de> wrote:
Hi folks!
After doing lots of profiling in OpenWebBeans in the last few days, I really feel that we
should introduce some common caching mechanism in the Context interface.
Let's start from the beginning.
1.) The first hotspot I found while profiling is that each EL expression goes all the way down
and resolves a Bean<T> for the injection point (which is, btw, even more expensive if we cannot
find anything, because it's e.g. an i18n resource EL of JSF which will only be resolved later in
the EL chain). This should be cacheable pretty well inside the container already, so there is
nothing to do on the SPI side.
2.) Another hotspot, much harder to solve, are the MethodHandlers of our bean proxies.
Whenever you inject a NormalScoped bean, a proxy (aka contextual reference) gets injected
instead. And for each method invocation on that proxy we need to go deep into the container,
first looking up the right Context and then asking that Context for the contextual instance.
This happens really often and is not a cheap operation.
Additionally, always having to go to the center of our application will basically serialize
all operations through this one bottleneck. Thus I fear this approach will not scale well on
systems with lots of CPUs.
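In CDI API terms, every single call on such a proxy currently boils down to something like the
following (only a sketch to show the path; the class and method names here are made up):

import java.lang.reflect.Method;

import javax.enterprise.context.spi.Context;
import javax.enterprise.context.spi.CreationalContext;
import javax.enterprise.inject.spi.Bean;
import javax.enterprise.inject.spi.BeanManager;

public final class SlowPath {

    // what each and every proxied method call effectively has to do today
    public static <T> Object invokeThroughContext(BeanManager beanManager, Bean<T> bean,
            Method method, Object[] arguments) throws Exception {
        Context context = beanManager.getContext(bean.getScope());            // find the active Context
        CreationalContext<T> cc = beanManager.createCreationalContext(bean);  // may create the instance
        T instance = context.get(bean, cc);                                   // ask the Context every single time
        return method.invoke(instance, arguments);
    }

    private SlowPath() {
    }
}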
Here is my suggestion. I will try to explain it, starting from the simplest scenario and working
up to the most complicated one:
A) Let's look at an @ApplicationScoped contextual reference. The injected proxy instance at an
injection point of that bean could simply remember the contextual instance once it has been
resolved for the first time (and resolve it again if it got lost through serialization):
import java.lang.reflect.Method;
import javassist.util.proxy.MethodHandler;   // assuming javassist proxies here
import javax.enterprise.inject.spi.Bean;

public class AppScopedBeanMethodHandler<T> implements MethodHandler {

    private transient volatile T theInstance = null;
    private Bean<T> bean;

    public Object invoke(Object instance, Method method, Method proceed, Object[] arguments)
            throws Exception {
        if (theInstance == null) {
            // resolve the contextual instance of 'bean' from the ApplicationScoped Context
            // and remember it in theInstance for all further invocations on this proxy
        }
        return method.invoke(theInstance, arguments);
    }
}
(Sidenote: I'm not 100% sure we really need to make it volatile, because in the worst case - if
another core's L1 cache doesn't see the variable being set - the instance gets resolved at most
once per core. And not making it volatile would even remove the cache write-back load. Not sure
if it pays off, though.)
B) It gets a bit more complicated for a @SessionScoped bean, because sessions get created and
closed, and multiple of them may exist at the same time. We would still like to cache the
resolved contextual instance in the MethodHandler for this scope, so we have to invalidate this
cache somehow! Since an active invalidation is not really practicable, we could simply ask the
Context for the sessionId. If the current sessionId is still the same as the one we saw when we
cached the instance, we can safely use the cached instance; otherwise we have to ask the
container to resolve the bean again. Since multiple sessions may be active at the same time, we
should use a ThreadLocal for the cache (effectively caching one contextual instance per thread).
This is necessary because e.g. a contextual reference of a @SessionScoped bean injected into an
@ApplicationScoped bean may get called from n threads in parallel, and each thread may resolve
to a different session!
We may need a bit of synchronization here, but it is far better to do this in n proxies (which
don't block each other) than in the central context (which would block all other beans of that
context too).
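Roughly, such a handler could look like the following sketch. currentSessionId() and
resolveFromContext() are placeholders for the real Context/container lookups (they don't exist
as such), and I'm assuming the same MethodHandler interface as in the @ApplicationScoped example
above:

import java.lang.reflect.Method;
import javassist.util.proxy.MethodHandler;
import javax.enterprise.inject.spi.Bean;

public abstract class SessionScopedBeanMethodHandler<T> implements MethodHandler {

    private Bean<T> bean;

    // one cache slot per thread, because parallel requests may belong to different sessions
    private transient ThreadLocal<CachedInstance<T>> cache = new ThreadLocal<CachedInstance<T>>();

    public Object invoke(Object instance, Method method, Method proceed, Object[] arguments)
            throws Exception {
        String sessionId = currentSessionId();               // placeholder: would ask the Context
        CachedInstance<T> cached = cache.get();
        if (cached == null || !cached.sessionId.equals(sessionId)) {
            T resolved = resolveFromContext(bean);           // placeholder: the normal Context lookup
            cached = new CachedInstance<T>(sessionId, resolved);
            cache.set(cached);
        }
        return method.invoke(cached.instance, arguments);
    }

    // placeholders for the real container lookups
    protected abstract String currentSessionId();
    protected abstract T resolveFromContext(Bean<T> bean);

    private static class CachedInstance<T> {
        private final String sessionId;
        private final T instance;

        CachedInstance(String sessionId, T instance) {
            this.sessionId = sessionId;
            this.instance = instance;
        }
    }
}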
C) A MethodHandler for a @RequestScoped bean is not much more complex. A concatenation of the
thread id and System.nanoTime() should do the trick as a unique requestId (need to check how
expensive nanoTime() is in practice). Oh, and please don't use the sessionId here, since it may
theoretically get invalidated in the middle of a request.
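E.g. something as simple as this, computed once when the RequestContext gets started (the class
name is made up, just to show the idea):

public final class RequestIds {

    // assumed to be called exactly once, when the RequestContext gets started;
    // the Context would keep the result and hand it to the method handlers for comparison
    public static String newRequestId() {
        return Thread.currentThread().getId() + "-" + System.nanoTime();
    }

    private RequestIds() {
    }
}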
The SPI extension:
Up to this point, all of those tricks can be done internally in OpenWebBeans. But the
optimizations would not be applicable to 3rd-party Context implementations such as my Context
for @javax.faces.bean.ViewScoped.
This would require extending the Context SPI to hand over an optional unique identifier for the
current context situation. If a Context makes such an id available, the
NormalScopedBeanMethodHandler could use it to provide pretty decent caching out of the box.
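Just to make the idea concrete, the extension could hypothetically look like this (name and
method are only illustrative, nothing like this exists yet):

import javax.enterprise.context.spi.Context;

public interface IdentifiableContext extends Context {

    /**
     * @return a token that uniquely identifies the currently active context instance
     *         (sessionId, viewId, requestId, ...), or null if the Context cannot
     *         provide one - caching is then simply skipped.
     */
    Object getCurrentContextId();
}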
Imo the strongest part of this scenario is not just that it reduces the work needed to resolve
the bean instances, but that it heavily de-centralizes that burden, and thus would scale vastly
better than always having to go through the central instance storage.
So, can it really be that easy? I somehow have the feeling that I forgot something, wdyt?
Candidates to think about for sure are synchronization, multi-tier classloader scenarios
and garbage collection issues.
oki, blame me, call me nuts or give me pet names, any comment is welcome ;)
txs and LieGrue,
strub