Those “benchmarks” were developed in the mid-80s; they have no relevance to judging
the performance of something like Drools 6.0. For instance, they would not benefit in any
way from the lazy algorithm provided in 6.0, as they use a “context” root object, which is
no longer needed.
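To illustrate what I mean (names invented here, just a sketch of the pattern): in a
Manners-style ruleset every rule joins against a single shared Context fact, and each
rule's consequence advances the context state to enable the next phase. The firing order
is driven entirely by those explicit state changes, so a lazy algorithm has nothing to
defer.

import org.kie.api.KieServices;
import org.kie.api.runtime.KieSession;

public class ContextPatternDemo {
    // Hypothetical, Manners-style DRL: every rule is guarded by one shared
    // Context fact, and each consequence advances Context.state to enable
    // the next phase. Sequencing comes from these explicit state changes,
    // not from the engine's evaluation strategy.
    private static final String DRL =
        "package demo\n" +
        "declare Context\n  state : String\nend\n" +
        "rule \"boot\"\nwhen\nthen\n" +
        "  insert( new Context( \"PHASE_1\" ) );\nend\n" +
        "rule \"phase1\"\nwhen\n" +
        "  $c : Context( state == \"PHASE_1\" )\nthen\n" +
        "  modify( $c ) { setState( \"PHASE_2\" ) }\nend\n";

    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();
        ks.newKieBuilder(ks.newKieFileSystem()
            .write("src/main/resources/context.drl", DRL)).buildAll();
        KieSession session = ks.newKieContainer(
            ks.getRepository().getDefaultReleaseId()).newKieSession();
        System.out.println("rules fired: " + session.fireAllRules());
        session.dispose();
    }
}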
If those “benchmarks” aren’t running right now, it’s because they rely on a very specific
ordering of the matches for a given rule, one based on the OPS5 conflict resolution
strategy. We don’t support that strategy, as it’s too complex for general use.
What you have to ask is: even if they do run, what does that tell you? What are you
learning about the engine, about where it performs well and where it doesn’t? We don’t
learn any specific details from those benchmarks.
Imho you should probably look into developing a set of micro benchmarks that each stress
a specific part of the engine, so that from each benchmark we learn specific details about
the engine’s behaviour for that configuration.
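For example, here's a rough sketch of one such micro benchmark, using JMH against the
Drools 6 KIE API (the package, rule, and fact names are invented for illustration); it
stresses nothing but a single join, with a fresh session per invocation:

import org.kie.api.KieBase;
import org.kie.api.KieServices;
import org.kie.api.definition.type.FactType;
import org.kie.api.runtime.KieSession;
import org.openjdk.jmh.annotations.*;

@State(Scope.Thread)
public class JoinBench {
    // One rule with a single join: this exercises the engine's join
    // evaluation and nothing else. Fact types are declared in DRL and
    // created via the FactType API, so no separate model jar is needed.
    private static final String DRL =
        "package bench\n" +
        "declare A\n  id : int\nend\n" +
        "declare B\n  id : int\nend\n" +
        "rule \"join\"\nwhen\n  A( $id : id )\n  B( id == $id )\nthen\nend\n";

    private KieBase kieBase;
    private FactType aType;
    private FactType bType;
    private KieSession session;

    @Setup(Level.Trial)
    public void buildKieBase() {
        KieServices ks = KieServices.Factory.get();
        ks.newKieBuilder(ks.newKieFileSystem()
            .write("src/main/resources/join.drl", DRL)).buildAll();
        kieBase = ks.newKieContainer(
            ks.getRepository().getDefaultReleaseId()).getKieBase();
        aType = kieBase.getFactType("bench", "A");
        bType = kieBase.getFactType("bench", "B");
    }

    @Setup(Level.Invocation)
    public void newSession() { session = kieBase.newKieSession(); }

    @TearDown(Level.Invocation)
    public void disposeSession() { session.dispose(); }

    @Benchmark
    public int insertAndFire() throws Exception {
        for (int i = 0; i < 1_000; i++) {
            Object a = aType.newInstance();
            aType.set(a, "id", i);
            session.insert(a);
            Object b = bType.newInstance();
            bType.set(b, "id", i);
            session.insert(b);
        }
        // Return the count so JMH can't dead-code-eliminate the work.
        return session.fireAllRules();
    }
}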
For 6.0 I’d be very interested to try to prove how, when, and by how much the lazy
algorithm can help; it’s not something we’ve had time to figure out ourselves yet.
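A first stab at that could be as simple as timing the same workload under each
implementation. This is only a sketch, and it assumes the drools.ruleEngine system
property (values “phreak” / “reteoo”) selects the algorithm in 6.0, so the same class run
twice, once per setting, gives a first comparison:

import org.kie.api.KieServices;
import org.kie.api.runtime.KieSession;

public class TimedRun {
    // ASSUMPTION: run twice with -Ddrools.ruleEngine=phreak and
    // -Ddrools.ruleEngine=reteoo to compare the two implementations.
    private static final String DRL =
        "rule \"touch\"\nwhen\n  Integer( intValue > 0 )\nthen\nend\n";

    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();
        ks.newKieBuilder(ks.newKieFileSystem()
            .write("src/main/resources/timed.drl", DRL)).buildAll();
        KieSession session = ks.newKieContainer(
            ks.getRepository().getDefaultReleaseId()).newKieSession();

        long start = System.nanoTime();
        for (int i = 1; i <= 100_000; i++) {
            session.insert(i); // replace with the real benchmark workload
        }
        int fired = session.fireAllRules();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println(System.getProperty("drools.ruleEngine", "default")
            + ": fired " + fired + " rules in " + elapsedMs + " ms");
        session.dispose();
    }
}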
Mark
On 3 Jan 2014, at 22:00, Tom DC <neosniperkiller(a)gmail.com> wrote:
Hello,
For my master thesis I'm benchmarking different rule engines using the micro
benchmarks Miss Manners, Waltz, and WaltzDB.
In the previous version of Drools Expert (5.5.0) the results were
surprisingly fast!
So when I saw the release of the latest Drools Expert version, 6, I was
wondering if there would be a difference in execution time as well.
So I created a new project in NetBeans, added the old benchmark code, and modified the
imports according to the example projects (a benchmark project is included in the latest
Drools release). I also checked for possible differences between the old code and the
new, but didn't find any.
When I now run the Miss Manners benchmark, it seems that Drools is unable to solve even
the easiest problem, with 16 visitors. It keeps executing the same rule over and over
without moving on to the next one when it becomes available.
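One way to see exactly which rule is looping is to attach an agenda event listener before
calling fireAllRules(); a minimal sketch using the standard Drools 6 event API:

import org.kie.api.event.rule.AfterMatchFiredEvent;
import org.kie.api.event.rule.DefaultAgendaEventListener;
import org.kie.api.runtime.KieSession;

public final class FiringLogger {
    // Logs the name of every rule that fires, so a rule stuck in a loop
    // shows up immediately in the output. Attach before fireAllRules().
    public static void attach(KieSession session) {
        session.addEventListener(new DefaultAgendaEventListener() {
            @Override
            public void afterMatchFired(AfterMatchFiredEvent event) {
                System.out.println("fired: "
                    + event.getMatch().getRule().getName());
            }
        });
    }
}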
To see if this was a problem in my customized ruleset, I also ran the example benchmark,
but noticed that it could not run Miss Manners anymore either.
Did somebody already figure out what is wrong with this benchmark?
Thx
Tom DC