You are right...
I have to tell you what I have done...
I did not define a "stand-alone" benchmark like the "Manners" one.
I benchmarked a real J2EE application.
I have JRules deployed with a resource adapter, and Drools deployed
as plain JAR libraries plus JBRMS.
JRules uses a "BRES" module which does the same trick JBRMS does.
Both of them are deployed on the same application server, at the same
time, on the same machine (my laptop: Core 2 Duo 1.66 GHz, 2 GB RAM).
Using the inversion of control pattern I found a way to "switch the
rule engine" at run-time, so I can easily choose between Drools and
JRules as the engine to use.
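Roughly, the switching mechanism looks like this (a minimal sketch;
RuleEngine, Request, Reply and the holder class are simplified names,
not my actual classes):

    // Common interface both engines are wrapped behind.
    interface RuleEngine {
        Reply execute(Request request);
    }

    // The IoC container injects either the Drools or the JRules
    // implementation; swapping the injected bean switches the engine
    // at run-time without touching the rest of the application.
    class RuleEngineHolder {
        private RuleEngine engine;

        public void setEngine(RuleEngine engine) {
            this.engine = engine;
        }

        public Reply process(Request request) {
            return engine.execute(request);
        }
    }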
Of course they have two separate rule repositories, but both of them
persist the rules to the same database, which is Derby.
The J2EE application I benchmarked sends a request object to the
current rule engine and gets back a reply from it. I measured the
elapsed time between the request and the generation of the reply,
using Drools first and then JRules.
I repeated the measurements tens of times.
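The measurement itself is nothing fancy, roughly like this (again a
simplified sketch, reusing the hypothetical types from above):

    // Sends the same kind of request repeatedly and records the
    // elapsed time between the request and the reply.
    static long[] measure(RuleEngine engine, int runs) {
        long[] samples = new long[runs];
        for (int i = 0; i < runs; i++) {
            Request request = new Request(); // same test payload
            long start = System.nanoTime();
            Reply reply = engine.execute(request);
            samples[i] = System.nanoTime() - start;
        }
        return samples;
    }

I then compare the per-engine averages of those samples.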
Both rule engines implement the same rules, and the Drools rules
(which I implemented personally) are at least as optimized as the
JRules ones. The JRules version of the rules contains a lot of
"eval(...)" blocks; in the Drools version I did not use "eval()" at
all, but did plain pattern matching instead.
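Just to make that concrete, here are the two styles as DRL source held
in Java strings (the Customer fact and its "age" field are invented
for the example, not taken from my real rule set). An eval() is an
opaque boolean test the engine cannot analyze, while a field
constraint inside the pattern is visible to the engine and can be
optimized:

    class RuleStyles {
        // eval() style, as used heavily in the JRules rule set:
        // the test is hidden inside an opaque expression.
        static final String WITH_EVAL =
            "rule \"adult (eval style)\"\n" +
            "when\n" +
            "    $c : Customer()\n" +
            "    eval( $c.getAge() >= 18 )\n" +
            "then\n" +
            "    System.out.println($c);\n" +
            "end";

        // Pattern-matching style, as in my Drools version:
        // the constraint is part of the pattern itself.
        static final String WITH_PATTERN =
            "rule \"adult (pattern style)\"\n" +
            "when\n" +
            "    $c : Customer( age >= 18 )\n" +
            "then\n" +
            "    System.out.println($c);\n" +
            "end";
    }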
If you want, I can send you more specific documentation, but I hope
this explanation is enough to show you that the measurements I have
done are not that bad.
In any case, I noticed that after a warm-up phase the Drools engine
gives a reply back about 3 times slower than the JRules engine.
The link I sent shows something related to this: it reports the
Manners execution times using Drools and JRules. As you can see, the
difference there is a 1.5x factor... so I was wrong, Drools is not
that slow. In any case it seems to be slower than JRules.
Look at this:
The old recurring performance evaluation question... :)
You know that an explanation can only be given after having looked at
the tests used in the benchmark, the actual rules used by both
products, the hardware specs, etc... so I'm not quite sure what answer
you want.
For instance, there are a lot of people who think exactly the contrary.
Just one example:
http://blog.athico.com/2007/08/drools-vs-jrules-performance-and-future.html
My preferred answer is still:
"In 99% of the applications, the bottleneck is IO: databases, network, etc.
So, test your use case with both products, make sure it performs well
enough, add to your analysis each product's feature set, expressive power,
product flexibility, cost, professionals availability, support quality, etc,
and choose the one that best fits you."
That is because I'm sure that, whatever your rules are, and in whatever
product you try them, they can be further optimized by having a product
expert look into them. But what is the point?
Cheers,
Edson
2008/5/14 mmquelo massi <mmquelo(a)gmail.com>:
>
> Hi everybody,
>
> I did a benchmark comparing Drools and JRules.
>
> I found out that Drools is about 2.5-3 times slower than JRules.
>
> How come?
>
> The results I got are quite similar to the ones in:
>
>
>
http://images.google.com/imgres?imgurl=http://blogs.ilog.com/brms/wp-cont...
>
> Any explanations?
>
> Thank you.
>
> Bye
>
> Massi
>
> _______________________________________________
> rules-users mailing list
> rules-users(a)lists.jboss.org
>
https://lists.jboss.org/mailman/listinfo/rules-users
>
>
--
Edson Tirelli
JBoss Drools Core Development
Office: +55 11 3529-6000
Mobile: +55 11 9287-5646
JBoss, a division of Red Hat @
www.jboss.com