[jboss-user] [Performance Tuning] - Re: Diff response times at different times

PeterJ do-not-reply at jboss.com
Mon Oct 5 15:00:28 EDT 2009


No need to apologize about your Excel; I am not that good at it either.

The full GC pause time is only around 1 second, so GC is not the cause of the response time discrepancies you are seeing. In total you are spending about 53 seconds in GC (72 when not setting the young gen size), which is a little high (about 1/8th of your run time), but not too bad.

I think the max heap size is due to the permgen size setting. Between the heap and the permgen you can allocate around 1700MB. You could do a run with -XX:+PrintHeapAtGC to see what your perm gen requirements are and set it accordingly.
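
For example, on JBoss AS you would typically put something like this in JAVA_OPTS (run.conf, or run.bat on Windows) - the sizes here are only placeholders for illustration, adjust them to what PrintHeapAtGC actually reports:

  JAVA_OPTS="-Xms1024m -Xmx1024m -XX:MaxPermSize=256m -XX:+PrintHeapAtGC"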

Other than that, you might want to look at your CPU usage. Is the kernel time high? If so, then that could indicate contention issues between your threads.

"Think time" is the delay that you place into your load test script between requests. Some load tests simulate real environments where the users have to "think"  between the time they are shown a page and when they submit the request. For example, after entering a certain page back the script might wait 20 seconds before sending the next request on the assumption that it takes the average user 20 seconds to fill in the page before making the request. This the question about your SLA is very apropos - if it is 500 logged in users then added the think time would place less of a burden on the system and lower the response time. Usually you will want to under guess the think time. In other words, when I mentioned 20 second think time in the earlier example, that was probably because most users take 40 seconds or longer.  Of course, the other way to do that is assume that only, say, 1/4 or 1/5 of the users will have simultaneous requests. Then you can try a 100 or 150 user run to simulate that (though I would run 200 users just to be safe.)

Have you tried 100, 200, ... 400 users? Or going 50, 100, ... 450, 500? If so, have you plotted the response times for each such run? If there is a dramatic change between two runs, that could pinpoint your saturation level, in which case you might want to reduce the number of HTTP threads to match it. For example, if with 300 users 90% of your responses are within 15 seconds, then you could set 300 threads with a 200 request wait queue; a queued request then waits roughly one 15 second batch before being serviced, so the overall response time should be around 30 seconds. The idea here is to not overload the system (more threads is not always better, and adding more threads to an overloaded system is a well-known performance anti-pattern).

I found out the hard way, when first doing performance testing many years ago, that doing the full run with the max number of users as the first run is the wrong way to go about this. You need to start small and steadily increase the workload, noting at what point the response times start to change drastically - that is your saturation point. Then you have to find out why you are saturated at that point, fix that issue, see the response times go back in line, and then continue.
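
On JBoss the place to cap this is the maxThreads attribute on the HTTP connector in server.xml (and, if I remember right, acceptCount controls the backlog of waiting connections), but the principle is easier to see in plain Java - a bounded pool plus a bounded queue, so extra work waits instead of adding more concurrent requests to an already saturated server. This is just an illustration of the idea, not the actual connector code:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolExample {
    public static void main(String[] args) throws Exception {
        // 300 worker threads, and at most 200 requests waiting in the queue;
        // anything beyond that is rejected rather than dragging everyone down
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                300, 300, 60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>(200));

        for (int i = 0; i < 500; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(15000);   // stand-in for a 15 second request
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
    }
}

With 500 tasks and a 15 second "request", the first 300 finish at about 15 seconds and the 200 queued ones at about 30 seconds, which is where the 30 second estimate above comes from.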

View the original post : http://www.jboss.org/index.html?module=bb&op=viewtopic&p=4258721#4258721

Reply to the post : http://www.jboss.org/index.html?module=bb&op=posting&mode=reply&p=4258721




More information about the jboss-user mailing list