On 09/16/2011 05:15 AM, Carlo de Wolf wrote:
I'll illustrate how inverting the pool complements the big thread
pool.
On 09/16/2011 08:35 AM, Stuart Douglas wrote:
> What about if we had 1 underlying thread pool that provides all the threads, and then
> subsystems use a custom executor that provides a 'view' of this underlying thread
> pool.
+1. We need this to guard the total number of threads.
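For illustration, a rough sketch of what such a 'view' could look like
(class name and wiring are hypothetical): a Semaphore caps how many
threads of the shared pool one subsystem can occupy at a time.

import java.util.concurrent.Executor;
import java.util.concurrent.Semaphore;

// Hypothetical sketch of a capped "view" of a shared pool: at most
// maxThreads tasks from this view run on the backing pool at once,
// further submissions wait for a permit.
public class CappedExecutorView implements Executor {

    private final Executor sharedPool;
    private final Semaphore permits;

    public CappedExecutorView(Executor sharedPool, int maxThreads) {
        this.sharedPool = sharedPool;
        this.permits = new Semaphore(maxThreads);
    }

    public void execute(final Runnable task) {
        permits.acquireUninterruptibly();
        try {
            sharedPool.execute(new Runnable() {
                public void run() {
                    try {
                        task.run();
                    } finally {
                        permits.release();
                    }
                }
            });
        } catch (RuntimeException e) {
            permits.release(); // submission rejected, give the permit back
            throw e;
        }
    }
}

Each subsystem would then be handed such a view instead of its own
dedicated pool, so the global thread count stays bounded by the one
backing pool.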
> So for instance we have the main thread pool that has a max of 25 threads; the
> EJB remote invocation pool could then be given an executor that can use a maximum
> of 10 threads from the pool.
If we just stick to EJB, this means that those 10 (or more) threads
become an extremely precious resource. Any time one of these threads
blocks, we're wasting precious cycles. So instead of blocking until
resources become available, we should only pick up a request that can
proceed without blocking. Furthermore, we should prioritize jobs that
require less work; this is called 'maximum throughput scheduling'.
Note that there are more criteria (like max wait time, queue size, etc.)
that need to be worked in, but those should be expressible in a formula
that drives a 'priority queue'.
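Very roughly, and purely as a sketch (class names and the cost numbers
are made up), such a pool could be built on a priority queue ordered by
estimated cost:

import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThroughputSchedulingSketch {

    // A request tagged with an estimated cost (e.g. a rolling average of
    // past execution times for the target bean method).
    static class CostedTask implements Runnable {
        final Runnable delegate;
        final long estimatedCost;

        CostedTask(Runnable delegate, long estimatedCost) {
            this.delegate = delegate;
            this.estimatedCost = estimatedCost;
        }

        public void run() { delegate.run(); }
    }

    public static void main(String[] args) {
        // The cheapest queued request is picked up first.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS,
                new PriorityBlockingQueue<Runnable>(16, new Comparator<Runnable>() {
                    public int compare(Runnable a, Runnable b) {
                        long ca = ((CostedTask) a).estimatedCost;
                        long cb = ((CostedTask) b).estimatedCost;
                        return ca < cb ? -1 : (ca > cb ? 1 : 0);
                    }
                }));

        // Only hand CostedTasks to execute(); submit() would wrap them in a
        // FutureTask and break the comparator's cast.
        pool.execute(new CostedTask(new Runnable() {
            public void run() { System.out.println("expensive recalc"); }
        }, 900));
        pool.execute(new CostedTask(new Runnable() {
            public void run() { System.out.println("cheap lookup"); }
        }, 5));

        pool.shutdown();
    }
}

In practice the comparator would fold in the other criteria (time
already spent waiting, current queue size, ...) rather than estimated
cost alone.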
The application developer should have a feel for which beans, or even
bean methods, are expensive and to what degree. For example, the
developer may want to scale the application in such a way that only N
instances of a certain bean execute on a single cloud node/standalone
installation/clustered installation. I have done this in clustered
applications, both on the web tier and for session beans. It gives the
app deployer/installer a way to make better use of the hardware
(because they know that the most expensive requests will be throttled
separately).
As another simple example, imagine an admin function "recalc data",
invoked in a bean that reads a certain subset of the database
(thousands of rows) and recalculates something that couldn't be done in
a stored procedure. This function isn't invoked very often, but when it
is, each app server instance can only handle a maximum of 4 concurrent
users executing it without seriously reducing performance for other
(lightweight) users of the system. In this simple example, each app
server instance can easily handle many more lightweight user requests
(e.g. 20 times the number of CPUs) without pegging the CPU.
Without the ability to throttle the machines based on per-session-bean
thread limits, users can still use the max number of threads to prevent
too many requests from executing, but they don't have the flexibility to
handle the above. Basically, they are forced to handle this
operationally (training users to coordinate when expensive functions are
performed).
Is there room to introduce per bean throttling or thread limits?
Perhaps this is what someone already had in mind?
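Today this can be hand-rolled per bean with an interceptor, e.g.
something like the following sketch (the interceptor name is made up,
and the limit is hard-coded to 4 to match the example above):

import java.util.concurrent.Semaphore;

import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;

// Hypothetical sketch: at most 4 invocations of the intercepted bean run
// concurrently per server instance; further callers wait for a permit.
public class MaxConcurrencyInterceptor {

    private static final Semaphore PERMITS = new Semaphore(4);

    @AroundInvoke
    public Object throttle(InvocationContext ctx) throws Exception {
        PERMITS.acquire();
        try {
            return ctx.proceed();
        } finally {
            PERMITS.release();
        }
    }
}

The expensive bean would be annotated with
@Interceptors(MaxConcurrencyInterceptor.class). Of course this blocks
the invoking thread, which is exactly the kind of blocking discussed
above, so a container-level limit that can queue or reject the request
before a pool thread is consumed would be much nicer.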
> This would also allow you to give these executors a priority, so
> for example you could give the web subsystem thread pool a higher priority to make sure
> that web requests always get handled first.
We should not put a maximum on a 'subsystem' pool, because that would
not allow it to fully utilize the total pool if the others are idle.
Better to leave a minimum on hot standby.
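Building on the earlier view sketch: instead of a hard per-subsystem
cap, each view could hold a small reserved minimum and otherwise borrow
from a shared overflow budget (again only a hypothetical sketch):

import java.util.concurrent.Executor;
import java.util.concurrent.Semaphore;

// Hypothetical sketch of "minimum on hot standby": a subsystem first uses
// its reserved permits, then borrows from the shared overflow budget; there
// is no hard per-subsystem maximum, only the global limit.
public class ReservedExecutorView implements Executor {

    private final Executor sharedPool;
    private final Semaphore reserved;  // guaranteed to this subsystem
    private final Semaphore overflow;  // shared by all subsystems

    public ReservedExecutorView(Executor sharedPool, int reservedThreads,
                                Semaphore overflow) {
        this.sharedPool = sharedPool;
        this.reserved = new Semaphore(reservedThreads);
        this.overflow = overflow;
    }

    public void execute(final Runnable task) {
        final Semaphore used;
        if (reserved.tryAcquire()) {
            used = reserved;
        } else {
            overflow.acquireUninterruptibly();
            used = overflow;
        }
        try {
            sharedPool.execute(new Runnable() {
                public void run() {
                    try {
                        task.run();
                    } finally {
                        used.release();
                    }
                }
            });
        } catch (RuntimeException e) {
            used.release();
            throw e;
        }
    }
}

The overflow Semaphore would be sized to the total max minus the sum of
the reserved minimums and shared by all views, so an idle subsystem's
unused share is available to the busy ones while each still keeps its
minimum on hot standby.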
Carlo
> I'm not really sure how well this would work, but I just thought I would put it
> out there.
>
> Stuart
>
> On 15/09/2011, at 3:53 AM, Jason T. Greene wrote:
>
>> Moving to a new thread.
>>
>> The big problem we run into with this is that almost every application
>> of a thread pool that we have needs to be highly tailored to its usage
>> to get optimal performance. So we end up with quite a few
>> different pools and it becomes difficult to impose a server-wide limit.
>>
>> There are, however, some potential strategies we could take, although I am
>> unsure how effective they would be overall:
>>
>> 1. Sharing idle threads between pools
>> 2. Force everything to go through a special blocking thread factory via
>> instrumentation of java.lang.Thread. Any attempt to allocate over the
>> max would lead to thread reclamation attempts and finally blocking until
>> a timeout is reached.
>> 3. Some kind of auto-tuning weighting model. If the max total threads is
>> N, force all thread pools to use a percentage of N, potentially based on
>> each pool's current config value divided by the combined total.
>>
>> One thing I wonder, though, is whether cloud providers are "barking up the
>> wrong tree". It seems a better limitation of an application is raw CPU
>> clock time and max memory usage. How an application splits that time into
>> threads doesn't really affect the scalability of the physical server; it's
>> all virtual process performance (who cares if someone wastes time context
>> switching?).
>>
>> On 9/14/11 10:39 AM, Scott Stark wrote:
>>> The other big cross-cutting concern is controlling the total number of
>>> threads in use by the application server. When running in a
>>> constrained environment that uses something like the pam_limits module to
>>> control how many processes (== java threads) a user can have, it is
>>> difficult to know what the server's max thread usage is right now.
>>>
>> --
>> Jason T. Greene
>> JBoss AS Lead / EAP Platform Architect
>> JBoss, a division of Red Hat