<html><head><meta http-equiv="Content-Type" content="text/html; charset=windows-1252"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;"><br><div><div>On 06.08.2014, at 19:48, Jason Greene <<a href="mailto:jason.greene@redhat.com">jason.greene@redhat.com</a>> wrote:</div><br class="Apple-interchange-newline"><blockquote type="cite"><br>On Aug 6, 2014, at 12:21 PM, Andrig Miller <<a href="mailto:anmiller@redhat.com">anmiller@redhat.com</a>> wrote:<br><br><blockquote type="cite"><br><br>----- Original Message -----<br><blockquote type="cite">From: "Jason Greene" <<a href="mailto:jason.greene@redhat.com">jason.greene@redhat.com</a>><br>To: "Andrig Miller" <<a href="mailto:anmiller@redhat.com">anmiller@redhat.com</a>><br>Cc: "Bill Burke" <<a href="mailto:bburke@redhat.com">bburke@redhat.com</a>>, <a href="mailto:wildfly-dev@lists.jboss.org">wildfly-dev@lists.jboss.org</a><br>Sent: Wednesday, August 6, 2014 11:08:02 AM<br>Subject: Re: [wildfly-dev] Pooling EJB Session Beans per default<br><br><br>On Aug 6, 2014, at 10:49 AM, Andrig Miller <<a href="mailto:anmiller@redhat.com">anmiller@redhat.com</a>><br>wrote:<br><br><blockquote type="cite"><br><br>----- Original Message -----<br><blockquote type="cite">From: "Bill Burke" <<a href="mailto:bburke@redhat.com">bburke@redhat.com</a>><br>To: <a href="mailto:wildfly-dev@lists.jboss.org">wildfly-dev@lists.jboss.org</a><br>Sent: Wednesday, August 6, 2014 9:30:06 AM<br>Subject: Re: [wildfly-dev] Pooling EJB Session Beans per default<br><br></blockquote><br></blockquote></blockquote></blockquote><br><br><blockquote type="cite"><blockquote type="cite"><blockquote type="cite">This conversation is a perfect example of misinformation that<br>causes us performance and scalability problems within our code<br>bases.<br></blockquote><br>It’s just a surprising result. 
The pool saves a few allocations, but<br>it also has the cost of concurrent access, which can trigger<br>blocking, additional barriers, and busy looping on CAS. You also<br>still have object churn in the underlying pool data structures that<br>occurs per invocation, since every invocation is a check-out and a<br>check-in (requiring a new node object instance), and if the semaphore<br>blocks you have additional allocation for the entry in the wait<br>queue. Factor in the remaining allocation savings relative to the<br>other allocations that are required for the invocation, and it<br>should be a very small percentage. For that very small percentage to<br>lead to a several-times difference in performance hints, to me, at<br>other factors being involved.<br><br></blockquote><br>All logically thought through. At a 15% lower transaction rate than we are doing now, we saw 4 gigabytes per second of object allocation. We, with Sanne doing most of the work, managed to get that down to 3 gigabytes per second (I would have loved to get it to 2). Much of that was Hibernate allocations, and of course that was with pooling on. We have not spent the time to pinpoint the exact differences, memory and otherwise, between having pooling on vs. off. Our priority has been to continue to scale the workload and fix any problems we see as a result. We have managed to increase the transaction rate another 15% in the last couple of months, but still have another 17+% to go on a single JVM before we start looking at two JVMs for the testing. <br><br>Once we get to our goal, I would love to put this on our list of tasks, so we can get the specific facts; instead of talking theory, we will know exactly what can and cannot be done, and whether no pooling could ever match pooling.<br></blockquote><br>Fair enough, and I certainly didn’t mean to imply that such work should be done by your team; I was just speaking generally. 
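(Editor's aside: the pool mechanics described in the quoted text above, a semaphore capping the pool size, a fresh queue node allocated on every check-in, and a wait-queue entry allocated when the semaphore blocks, can be sketched roughly as follows. This is a simplified, hypothetical illustration of that cost model, not WildFly's actual pool implementation; the class and method names are invented for the sketch.)

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Hypothetical sketch of a strict-max instance pool with the costs Jason
// describes: every check-out/check-in pair touches the semaphore (a CAS,
// possibly a blocking wait with an allocated wait node), and the backing
// ConcurrentLinkedQueue allocates a new internal node per offer().
final class SketchPool<T> {
    private final ConcurrentLinkedQueue<T> idle = new ConcurrentLinkedQueue<>();
    private final Semaphore permits;   // caps concurrent check-outs
    private final Supplier<T> factory; // creates a new instance on a pool miss

    SketchPool(int maxSize, Supplier<T> factory) {
        this.permits = new Semaphore(maxSize);
        this.factory = factory;
    }

    T checkOut() {
        permits.acquireUninterruptibly(); // CAS loop; may enqueue a wait node
        T instance = idle.poll();         // unlinks (and garbages) a queue node
        return instance != null ? instance : factory.get();
    }

    void checkIn(T instance) {
        idle.offer(instance);             // allocates a fresh queue node
        permits.release();                // another CAS; may wake a waiter
    }
}
```

So even on the fast path, each invocation pays for two semaphore CAS operations plus the queue node allocated by `offer()`, which is the per-invocation churn the quoted paragraph refers to.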
In any case, what I really, really would like for us to achieve is a default implementation that performs generally well on all usage patterns, with no tuning required. Since we know that initialization can be costly for some applications' use of SLSBs, such an implementation will definitely require a form of pooling. <br><br>I suspect that a thread-local-based design, with the pooling tied to worker threads, will give us this. Alternatively, a shared pool that is auto-tuned to match might be worth looking into. <br><br>If there is anyone lurking who wishes to contribute in this area, speak up, and I’ll work with you on it. As doge would say, “Such Fun. Much Glory” :) <br></blockquote><div><br></div><div>I would expect something better than this :-)))</div><div><br></div><div><a href="http://de.slideshare.net/jambay/weblogic-server-work-managers-and-overload-protection">http://de.slideshare.net/jambay/weblogic-server-work-managers-and-overload-protection</a></div><div><br></div><br><blockquote type="cite"><br>--<br>Jason T. Greene<br>WildFly Lead / JBoss EAP Platform Architect<br>JBoss, a division of Red Hat<br><br><br>_______________________________________________<br>wildfly-dev mailing list<br><a href="mailto:wildfly-dev@lists.jboss.org">wildfly-dev@lists.jboss.org</a><br>https://lists.jboss.org/mailman/listinfo/wildfly-dev<br></blockquote></div><br></body></html>