[rules-users] Shadow facts (was JRules\Drools benchmarking)

Mark Proctor mproctor at codehaus.org
Thu May 15 12:21:59 EDT 2008


Hehl, Thomas wrote:
>
> I did some reading about shadow facts and was thinking about turning 
> them off, but it seems that if I use stateless sessions 
> (session.execute()), shadow facts are irrelevant. Is this true?
>
No; shadow facts are only irrelevant if you turn on sequential mode.
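
For example, a minimal sketch against the Drools 4 API ('pkg' and
'facts' are placeholders, exception handling omitted):

    RuleBaseConfiguration conf = new RuleBaseConfiguration();
    conf.setSequential( true );   // sequential mode works only with
                                  // stateless sessions; facts cannot
                                  // change mid-execution, so shadow
                                  // proxies are unnecessary
    RuleBase ruleBase = RuleBaseFactory.newRuleBase( conf );
    ruleBase.addPackage( pkg );   // a previously compiled package
    StatelessSession session = ruleBase.newStatelessSession();
    session.execute( facts );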

Mark
>
>  
>
> ------------------------------------------------------------------------
>
> *From:* rules-users-bounces at lists.jboss.org 
> [mailto:rules-users-bounces at lists.jboss.org] *On Behalf Of *Edson Tirelli
> *Sent:* Thursday, May 15, 2008 10:00 AM
> *To:* Rules Users List
> *Subject:* Re: [rules-users] JRules\Drools benchmarking...
>
>  
>
>
>    It seems you are using a good strategy for your tests. Still, it 
> is difficult to explain why one is slower than the other without 
> seeing the actual test code, because all the engines have stronger 
> and weaker spots. Just to mention one example, some engines (not 
> talking specifically about Drools and JRules, but about engines in 
> general) implement faster alpha evaluation, others implement faster 
> beta (join) evaluation, others implement good optimizations for 
> not(), while others may focus on eval(), etc. It goes so far that, 
> when comparing two engines, one may perform better on hardware with a 
> bigger L2 cache while the other performs better on hardware with a 
> smaller one.
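>
>    (For reference, a hypothetical DRL fragment with made-up Person 
> and Account facts, just to illustrate the terms:)
>
>        rule "alpha vs beta constraints"
>        when
>            $p : Person( age > 18 )              // alpha: tests one fact
>            Account( owner == $p, balance < 0 )  // beta: joins two facts
>        then
>            // ...
>        end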
>
>    So, the best I can do without looking at the actual tests is to 
> give you some tips:
>
> 1. First of all, are you using Drools 4.0.7? It is very important that 
> you use this version over the previous ones.
>
> 2. Are you using stateful or stateless sessions? If you are using 
> stateful sessions, are you calling dispose() after using the session? 
> If not, you are inflating your memory and certainly causing the 
> engine to run slower over time (see the sketch after this list).
>
> 3. Are you sharing the rulebase among multiple requests? The Drools 
> rulebase is designed to be shared, and the compilation process is 
> eager and pretty heavy compared to session creation. So it pays off 
> to create the rulebase once and share it among requests (sketch 
> below).
>
> 4. Did you disable shadow facts? Test cases usually use a really 
> small fact base, so they would not be much affected by shadow facts; 
> still, disabling them improves performance, but it requires some best 
> practices to be followed (sketch below).
>
> 5. Do your rules follow best practices (similar to SQL writing best 
> practices)? That is, do you write the most constraining patterns 
> first, the most constraining restrictions first, etc.? Do you write 
> patterns in the same order across rules to maximize node sharing? I 
> guess you do, but it is worth mentioning anyway (DRL sketch below).
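>
>    A minimal sketch for tip 2, assuming the Drools 4 stateful API:
>
>        StatefulSession session = ruleBase.newStatefulSession();
>        try {
>            session.insert( fact );  // 'fact' is a placeholder object
>            session.fireAllRules();
>        } finally {
>            // releases the working memory; skipping this inflates
>            // memory across requests
>            session.dispose();
>        }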
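>
>    A sketch for tip 3, building once and sharing (Drools 4 API; the 
> DRL path is hypothetical, exception handling omitted):
>
>        // compile once, eagerly (heavy)...
>        PackageBuilder builder = new PackageBuilder();
>        builder.addPackageFromDrl( new InputStreamReader(
>                getClass().getResourceAsStream( "/rules.drl" ) ) );
>        RuleBase ruleBase = RuleBaseFactory.newRuleBase();
>        ruleBase.addPackage( builder.getPackage() );
>        // ...then keep 'ruleBase' in a single shared place (singleton,
>        // JNDI, etc.) and only create sessions per request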
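>
>    A sketch for tip 4, assuming the Drools 4 RuleBaseConfiguration 
> API:
>
>        RuleBaseConfiguration conf = new RuleBaseConfiguration();
>        // safe only if facts are never modified behind the engine's back
>        conf.setShadowProxy( false );
>        RuleBase ruleBase = RuleBaseFactory.newRuleBase( conf );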
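>
>    And a DRL sketch for tip 5, with hypothetical Customer/Order 
> facts: both rules open with the same, most constraining pattern, in 
> the same order, so the engine can share the corresponding network 
> nodes:
>
>        rule "Gold customer discount"
>        when
>            $c : Customer( status == "GOLD" )  // most constraining first
>            $o : Order( customer == $c, total > 1000 )
>        then
>            $o.setDiscount( 0.10 );
>        end
>
>        rule "Gold customer free shipping"
>        when
>            $c : Customer( status == "GOLD" )  // same pattern, same order
>            $o : Order( customer == $c )
>        then
>            $o.setFreeShipping( true );
>        end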
>
>    Anyway, just some tips.
>
>    Regarding the JRules blog, I know it, but I will make a bet with 
> you. Download the Manners benchmark to your machine, make sure the 
> rules are the correct ones (not cheated ones), run the test on both 
> engines and share the results. I will pay you a beer if you get 
> results similar to those published in the blog. :)
>    My point is not that we are faster (which I know we are) or that 
> they are faster. My point is that performance benchmarks for rule 
> engines are a really tricky matter, with lots of variables involved, 
> which makes every test case configuration unique. Try to reproduce it 
> in a different environment and you will get different performance 
> ratios between the engines.
>
>    That is why our recommendation is always to do what you are doing: 
> try your own use case. Now, whatever you are trying, I'm sure it is 
> possible to optimize it if we see the test case, but is it worth it? 
> Or does the performance as it is already meet your requirements?
>
>    Cheers,
>        Edson
>
> PS: I'm serious about the beer... ;) run and share the results with us...
>
> 2008/5/15 mmquelo massi <mmquelo at gmail.com>:
>
> You are right...
>
> I have to tell you what I have done...
>
> I did not define a "stand-alone" benchmark like the "Manners" one.
>
> I benchmarked a real J2EE application.
>
> I have JRules deployed via a resource adapter, and Drools deployed 
> as plain JAR libraries plus the JBRMS.
>
> JRules uses a "BRES" module, which does the same trick the JBRMS does.
>
> Both of them are deployed on the same AS, at the same time, on the 
> same machine (my laptop: Core 2 Duo 1.66 GHz, 2 GB).
>
> Using the inversion of control pattern, I found a way to "switch the 
> rule engine" at run-time, so I can easily choose between Drools and 
> JRules.
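>
> (Just to illustrate the idea, a hypothetical sketch; all the names 
> are made up:)
>
>     public interface RuleService {
>         Reply process( Request request );
>     }
>     // a Drools-backed and a JRules-backed implementation both exist,
>     // and the container injects whichever one is configured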
>
> Of course they have two separate rule repositories, but both persist 
> the rules in the same DB, which is Derby.
>
> The J2EE application I benchmarked sends a request object to the 
> current rule engine and gets back a reply from it. I simply measured 
> the elapsed time between the request and the reply generation, using 
> Drools first and then JRules.
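>
> (Roughly like this, reusing the hypothetical RuleService from above:)
>
>     long start = System.nanoTime();
>     Reply reply = ruleService.process( request );
>     long elapsedMillis = ( System.nanoTime() - start ) / 1000000L;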
>
> I did the measurements tens of times.
>
> Both rule engines implement the same rules, and the Drools rules 
> (which I implemented personally) are at least as optimized as the 
> JRules ones. The JRules version of the rules contains a lot of 
> "eval(...)" blocks; in the Drools version I did not use "eval()" at 
> all, just pattern matching.
>
> If you want, I can send you more specific documentation, but I hope 
> this explanation is enough to show that the measurements I took are 
> not that bad.
>
> In any case, I noticed that, after a warm-up phase, the Drools engine 
> returns its reply about 3 times slower than the JRules engine.
>
> The link I sent shows something related to this: it reports the 
> Manners execution times using Drools and JRules. As you can see, the 
> difference is a 1.5x factor... so I was wrong, Drools is not that 
> slow. In any case, it seems to be slower than JRules.
>
> Look at this:
>
>
> http://blogs.ilog.com/brms/wp-content/uploads/2007/10/jrules-perf-manners.png
>
> Massimiliano
>
>
>
>
> On 5/15/08, Edson Tirelli <tirelli at post.com> wrote:
> >    The old recurring performance evaluation question... :)
> >
> >    You know that an explanation can only be made after having 
> > looked at the tests used in the benchmark, the actual rules used by 
> > both products, hardware specs, etc... so I am not quite sure what 
> > answer you want.
> >
> >    For instance, there are a lot of people who think exactly the 
> > contrary. Just one example:
> > http://blog.athico.com/2007/08/drools-vs-jrules-performance-and-future.html
> >
> >    My preferred answer is still:
> >
> > "In 99% of the applications, the bottleneck is IO: databases, 
> network, etc.
> > So, test your use case with both products, make sure it performs well
> > enough, add to your analysis the products feature set, expressiveness 
> power,
> > product flexibility, cost, professionals availability, support 
> quality, etc,
> > and choose the one that best fits you."
> >
> >    That is because I'm sure that, whatever your rules are, in 
> > whatever product you try them, they can be further optimized by 
> > having a product expert look into them. But what is the point?
> >
> >    Cheers,
> >       Edson
> >
> >
> >
> > 2008/5/14 mmquelo massi <mmquelo at gmail.com>:
> >
> >>
> >> Hi everybody,
> >>
> >> I did a benchmark on Drools\Jrules.
> >>
> >> I found out that Drools is about 2.5-3 times slower than JRules.
> >>
> >> How come?
> >>
> >> The results I got are quite similar to the ones in:
> >>
> >>
> >> 
> >> http://images.google.com/imgres?imgurl=http://blogs.ilog.com/brms/wp-content/uploads/2007/10/jrules-perf-manners.png&imgrefurl=http://blogs.ilog.com/brms/category/jrules/&h=516&w=722&sz=19&hl=it&start=1&um=1&tbnid=YBqwC0nwaSLxwM:&tbnh=100&tbnw=140&prev=/images%3Fq%3Dbrms%2Bbencmark%26um%3D1%26hl%3Dit
> >>
> >> Any explanations?
> >>
> >> Thank you.
> >>
> >> Bye
> >>
> >> Massi
> >>
> >
> >
> > --
> > Edson Tirelli
> > JBoss Drools Core Development
> > Office: +55 11 3529-6000
> > Mobile: +55 11 9287-5646
> > JBoss, a division of Red Hat @ www.jboss.com
> >
>
>
>
>
> -- 
> Edson Tirelli
> JBoss Drools Core Development
> Office: +55 11 3529-6000
> Mobile: +55 11 9287-5646
> JBoss, a division of Red Hat @ www.jboss.com
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> rules-users mailing list
> rules-users at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/rules-users
>   
