Thanks Edson for the clarification.
1. Actually, I want to manage the rule flow from the database.
Do I have to store the whole .rfm file content in the database as a text
column? If so: I was looking at a sample .rfm file, and its content does not
look simple. Editing the .rfm content in the database (adding/removing nodes,
rule groups, etc.) seems too complicated and very error prone; editing an
.rfm does not look feasible without the Eclipse graphical editor. If that's
the case, how can I make the rule flow database driven? Alternatively,
whenever I want to edit the rule flow, I could do it in the Eclipse GUI and
then update the content in the database, but then I wouldn't call it
database driven. Please advise.
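One pattern that keeps the Eclipse editor in the loop is a sketch like the
following (class, method, and path names are invented for illustration, and
the database read is stood in for by a plain String rather than real JDBC):
author the .rfm graphically, store its XML verbatim in a text/CLOB column,
and have the daily job write it back out to a working file before the
rulebase is built.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: the .rfm content is authored in the Eclipse graphical editor and
// stored verbatim in a text/CLOB column. At daily job startup, the content
// is read back and written to a working file that the rulebase build loads.
public class RuleflowMaterializer {

    // rfmXmlFromDb stands in for the result of something like
    // "SELECT content FROM ruleflow WHERE id = ?" (hypothetical schema).
    public static Path materialize(String rfmXmlFromDb, Path workDir, String name)
            throws IOException {
        Files.createDirectories(workDir);
        Path rfmFile = workDir.resolve(name + ".rfm");
        Files.write(rfmFile, rfmXmlFromDb.getBytes(StandardCharsets.UTF_8));
        return rfmFile;
    }

    public static void main(String[] args) throws IOException {
        String xml = "<process id=\"daily-flow\"/>";  // placeholder content
        Path f = materialize(xml, Path.of("build", "ruleflows"), "daily-flow");
        System.out.println("wrote " + f);
    }
}
```

This way the file is never hand-edited: the Eclipse editor remains the only
authoring tool, and the database is just the system of record.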
Edson Tirelli-3 wrote:
Trying to answer:
1. not sure what you mean. You design a rule flow as a sequence of steps
(groups) in the same way you would do when designing a workflow. When you
write your rules, you associate those rules to groups in that flow. The
ruleflow design itself is an XML file. So, the general answer is: you can
store a ruleflow in a database. Rebuilding the rulebase is fine, and
ruleflows are dynamic, so you can add them to existing rulebases. Kris can
confirm this to you.
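In DRL, associating a rule with a step of the flow is a one-line rule
attribute. A minimal sketch (the rule name, group name, and fact model here
are invented for illustration):

```
rule "validate new customer"
    // fires only while the "validation" group is active in the ruleflow
    ruleflow-group "validation"
when
    $c : Customer( status == "NEW" )
then
    System.out.println( "validating " + $c );
end
```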
2. Yes, there are several optimizations you can do to the engine and to the
rules, in a similar way you would for a database. Which optimizations, and
how to apply them, can only be answered by looking at your use case and fine
tuning it. Regarding volumes, we have users running thousands of rules and
millions of facts without problems, but, as you know, the bigger the size,
the more careful you need to be with the engine configuration and the
quality of your rules.
I'm trying to get a user to write a blog post on the sizing of the
application they have right now. I don't want to disclose the information
before they allow me to, but their specific solution does not have as many
rules as yours, although it processes a similar volume of facts in a 2-hour
window. They run on a server with just a few cores, but use a considerable
amount of memory.
Regarding your question about whether a production rules engine fits your
use case: the Rete algorithm, as a general rule, provides increasing
benefits as the rule base grows, because the more rules you have, the more
you benefit from optimizations like node sharing and other techniques.
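Node sharing is easy to see in DRL. In a sketch like the following (invented
fact model), both rules begin with the identical constraint, so Rete builds
that node once and shares it between them instead of evaluating it twice:

```
rule "flag large trade"
when
    $t : Trade( amount > 1000000 )
then
    $t.setFlagged( true );
end

rule "large foreign trade needs approval"
when
    // same first constraint as above => shared node in the Rete network
    $t : Trade( amount > 1000000, country != "BR" )
then
    $t.setNeedsApproval( true );
end
```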
If you go ahead, I suggest you contact Red Hat. They can help with your
solution design, training (if necessary) as well as provide support for
development and production.
2008/11/4 techy <techluver007(a)gmail.com>
> Thanks Edson.
> A few more questions based on my requirements.
> I want to load all rules from the database once every day (during daily
> job startup) and apply them against a huge amount of data. Once the rules
> are loaded, there is no further need for dynamic addition/removal during
> that run. But if I add any new rules to the database, the rule engine
> should be able to pick those up during the next execution. (The intention
> here is to provide a custom editor later on for the user to manage the
> rules; since Drools' BRMS does not seem to handle all of our use-case
> conditions, I've concluded we need a custom BRMS.)
> 1. Can I have the rule flow be database driven?
> I.e., can I store the rule flow content for each rule as a text column in
> the database and create the rule flow file during each day's execution?
> 2. My requirement is to run 1200 use cases (rules) against 40-50 million
> records each day, so I'm really concerned about execution time. Can I
> apply any optimization in the rule engine for faster execution?
> Is a rule engine still a good option in this case?
> Appreciate your help.
> Edson Tirelli-3 wrote:
> > Yuri,
> > Right now, the only way is to work with different rule bases and working
> > memories. Even using agenda-groups or rule-flow, rules are still
> > eagerly evaluated, as this is how standard Rete works.
> > The problem of creating and canceling too many activations is a
> > problem, and the only way around it right now is sequential mode, but
> > sequential mode has some restrictions on what you can do. For instance,
> > you must work with a stateless working memory and cannot modify/retract
> > facts in your rules, but sequential mode will give you big
> > performance boosts.
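For reference, sequential mode is a rulebase configuration switch. A sketch
of the Drools 4-era knob (settable as a system property or through
RuleBaseConfiguration; verify the exact name against your version's
documentation):

```
# Enable sequential mode for a stateless, one-pass execution
drools.sequential = true
```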
> > We are evaluating the possibility of creating physical network
> > partitions for the next version, but that still requires some R&D.
> > Edson
> > 2007/8/14, Yuri <ydewit(a)gmail.com>:
> >> Dr. Gernot Starke <gs <at> gernotstarke.de> writes:
> >> > can you detail your problem a little?
> >> I basically need to find perfect matches between two different sets of
> >> objects. If perfect matches are not found, I then create bulks of
> >> objects that are then used in combination with the individual ones to
> >> find bulk matches. If no matches are found, I then need to categorize
> >> the breaks (around 10 different categorizations) and report them.
> >> The matching criteria between two objects is specific enough. Once I
> >> get into a break, which basically means removing some criteria
> >> components, the possible combinations increase exponentially. Bulking
> >> just compounds the problem by adding more matchable/breakable facts
> >> into memory.
> >> My bulking logic (I didn't have collect when I started with 3.0)
> >> starts a bulk by looking for two different objects with the same
> >> bulking criteria (this is the first potential cross product, since
> >> Drools would produce C!/(N!(C-N)!) combinations). Then, once the bulk
> >> for a given criteria is created, I have a second rule that expands or
> >> contracts the bulks as new facts are asserted, causing many different
> >> side effects.
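The combination count mentioned above is easy to check with a small helper
(plain Java; the class and method names are invented for illustration), and
it shows why pairing facts by a shared bulking criteria explodes so quickly:

```java
// Counts the k-element combinations of an n-element set: n! / (k! * (n-k)!).
public class Combinations {

    public static long choose(long n, long k) {
        if (k < 0 || k > n) return 0;
        k = Math.min(k, n - k);                // exploit symmetry C(n,k)=C(n,n-k)
        long result = 1;
        for (long i = 1; i <= k; i++) {
            result = result * (n - k + i) / i; // stays integral at each step
        }
        return result;
    }

    public static void main(String[] args) {
        // Candidate pairs among 15,000 facts sharing one bulking criteria:
        System.out.println(choose(15_000, 2)); // 112,492,500
    }
}
```

At 15K objects per side, even the pairwise case yields over a hundred
million candidate combinations, which is consistent with the slowdown
described here.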
> >> What I am basically seeing is that asserting a fact that would, for
> >> instance, be a perfect match causes many of the bulking and breaking
> >> rule activations to be created and then cancelled. Considering that I
> >> am talking about tens to hundreds of thousands of facts, I thought that
> >> if I could stage the activation creations I would increase processing
> >> speed.
> >> With 15K objects on each side I have been seeing something like 1
> >> assertion per second.
> >> I am aware that this could be a cross product somewhere, but I have
> >> revised the rules many, many times, so now I am looking for other
> >> alternatives.
> >> _______________________________________________
> >> rules-users mailing list
> >> rules-users(a)lists.jboss.org
> >> https://lists.jboss.org/mailman/listinfo/rules-users
> > --
> > Edson Tirelli
> > Software Engineer - JBoss Rules Core Developer
> > Office: +55 11 3529-6000
> > Mobile: +55 11 9287-5646
> > JBoss, a division of Red Hat @ www.jboss.com