A few more questions based on my requirements.
I want to load all rules from the database once per day (during daily
job startup) and apply them against a huge amount of data. Here, once the
rules are loaded, there is no further need for dynamic addition/removal
during that execution. But if I add any new rules to the database, the
rule engine should be able to pick those up during the next execution.
(The intention here is to later provide a custom editor for the user to
manage the rules; since Drools' BRMS does not seem to handle all of our
use case conditions, I've concluded to have our own custom editor.)
1. Can I have the rule flow as database-driven?
I.e., can I store the rule flow content for each rule as a text column in
the database and create the rule flow file during each day's execution?
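For example, here is a rough sketch of what I have in mind (the table and
column names are just placeholders, and I am assuming Drools 4's
PackageBuilder, whose addRuleFlow(Reader) appears to accept the rule-flow
XML directly, so no physical .rf file would be needed on disk):

    import java.io.StringReader;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    import org.drools.RuleBase;
    import org.drools.RuleBaseFactory;
    import org.drools.compiler.PackageBuilder;

    public RuleBase loadRuleBase( Connection con ) throws Exception {
        // Read every rule (DRL text) and its rule-flow XML once at job startup.
        Statement st = con.createStatement();
        ResultSet rs = st.executeQuery( "SELECT drl_text, flow_xml FROM rule_table" );

        // Assumes all rules declare the same DRL package.
        PackageBuilder builder = new PackageBuilder();
        while ( rs.next() ) {
            builder.addPackageFromDrl( new StringReader( rs.getString( "drl_text" ) ) );
            String flowXml = rs.getString( "flow_xml" );
            if ( flowXml != null ) {
                builder.addRuleFlow( new StringReader( flowXml ) );
            }
        }
        rs.close();
        st.close();

        // Built once per daily execution; new rows are picked up on the next run.
        RuleBase ruleBase = RuleBaseFactory.newRuleBase();
        ruleBase.addPackage( builder.getPackage() );
        return ruleBase;
    }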
2. My requirement is to run 1200 use cases (rules) against 40-50 million
records each day, so I am also really concerned about execution time. Can
I do any optimization in the rule engine for faster execution?
Is a rule engine still a good option in this case?
I appreciate your help.
Edson Tirelli wrote:
Right now, the only way is to work with different rule bases and
memories. Even using agenda-groups or rule-flow, rules are still being
eagerly evaluated, as this is how standard Rete works.
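A rough sketch of the "different rule bases" approach against the Drools 4
API (the split into two DRL sources is just an assumption for
illustration; imports as in the sketch above):

    // Each RuleBase gets only a subset of the rules, so facts asserted into
    // one working memory are never propagated through the other network.
    PackageBuilder builderA = new PackageBuilder();
    builderA.addPackageFromDrl( new StringReader( groupADrl ) ); // placeholder DRL
    RuleBase ruleBaseA = RuleBaseFactory.newRuleBase();
    ruleBaseA.addPackage( builderA.getPackage() );

    PackageBuilder builderB = new PackageBuilder();
    builderB.addPackageFromDrl( new StringReader( groupBDrl ) ); // placeholder DRL
    RuleBase ruleBaseB = RuleBaseFactory.newRuleBase();
    ruleBaseB.addPackage( builderB.getPackage() );

    StatefulSession memoryA = ruleBaseA.newStatefulSession();
    StatefulSession memoryB = ruleBaseB.newStatefulSession();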
The problem of creating and canceling too many activations is a known
problem, and the only way around it right now is sequential mode, but
sequential mode has some restrictions on what you can do. For instance,
you must work with a stateless working memory and cannot modify/retract
facts in your rules, but it will give you a big performance gain.
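For reference, enabling sequential mode with the Drools 4 API looks
roughly like this (batchOfRecords is a placeholder for the day's facts):

    RuleBaseConfiguration conf = new RuleBaseConfiguration();
    conf.setSequential( true );

    RuleBase ruleBase = RuleBaseFactory.newRuleBase( conf );
    ruleBase.addPackage( builder.getPackage() ); // builder as in the sketches above

    // Sequential mode requires a stateless session; rules must not
    // modify/retract facts, but the activation churn is avoided.
    StatelessSession session = ruleBase.newStatelessSession();
    session.execute( batchOfRecords );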
We are evaluating the possibility of creating physical network
partitions for the next version, but that will still require some R&D.
2007/8/14, Yuri <ydewit@gmail.com>:
> Dr. Gernot Starke <gs@gernotstarke.de> writes:
> > can you detail your problem a little?
> I basically need to find perfect matches between two different sets of
> objects. If perfect matches are not found, I then create bulks of objects
> that are used in combination with the individual ones to find bulk
> matches. If no matches are found, I need then to categorize the breaks
> (around 10 different categorizations) and report them.
> The matching criteria between two objects are specific enough to be fast.
> Once I get into breaks, which basically means removing some criteria
> components, the possible combinations increase exponentially. Bulking
> just compounds the problem by adding more matchable/breakable facts into
> memory.
> My bulking logic (I didn't have collect when I started with 3.0) starts
> by looking for two different objects with the same bulking criteria (this
> is my potential cross product, since Drools would produce C!/(N!(C-N)!)
> combinations).
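> For a sense of scale, using my own 15K-per-side figure: with C = 15,000
> facts and bulks of N = 2, C!/(N!(C-N)!) = 15000 * 14999 / 2, which is
> roughly 1.1e8 potential pairs.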
> Then, once the bulk for a given criteria is created, I have a second rule
> that expands or contracts the bulks as new facts are asserted, causing
> many side effects.
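> Just to illustrate the shape of it (Trade, bulkKey and Bulk are made-up
> names, not my real model), a collect-based version of the bulking rule in
> 4.0 DRL might look like:
>
>     package sample.bulking;
>
>     import java.util.ArrayList;
>     import sample.model.Trade;   // made-up fact class
>     import sample.model.Bulk;    // made-up holder class
>
>     rule "bulk per criteria"
>     when
>         Trade( $key : bulkKey )
>         // Gather every fact sharing the criteria in one pass instead of
>         // building pairs and expanding/contracting them incrementally.
>         $items : ArrayList( size > 1 )
>             from collect( Trade( bulkKey == $key ) )
>     then
>         insert( new Bulk( $key, $items ) );
>     end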
> What I am basically seeing is that asserting a fact that would, for
> instance, be a perfect match causes many of the bulking and breaking rule
> activations to be created and then cancelled. Considering that I am
> talking about tens or hundreds of thousands of facts, I thought that if I
> could stage the activation creations I would increase processing speed.
> With 15K objects on each side I have been seeing something like 1
> assertion per second.
> I am aware that this could be a cross product somewhere, but I have
> already reviewed the rules many, many times, so now I am looking for
> other alternatives.
Software Engineer - JBoss Rules Core Developer
Office: +55 11 3529-6000
Mobile: +55 11 9287-5646
JBoss, a division of Red Hat @ www.jboss.com