Edson Tirelli-3 wrote:
>
> Trying to answer:
>
> 1. Not sure what you mean. You design a rule flow as a sequence of steps
> (groups) in the same way you would when designing a workflow. When you
> write your rules, you associate them with the groups in that flow. The
> ruleflow design itself is an XML file, so the general answer is: yes, you
> can store a ruleflow in a database. Rebuilding the rulebase is fine, and
> AFAIK ruleflows are dynamic, so you can add them to existing rulebases.
> Kris can confirm this for you.
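>
> For illustration, this is roughly what adding a DB-stored ruleflow to a
> rulebase looks like (an untested sketch against the 4.x API; check the
> exact method names for your version, and treat the variable names as
> placeholders):
>
>     import java.io.StringReader;
>     import org.drools.RuleBase;
>     import org.drools.RuleBaseFactory;
>     import org.drools.compiler.PackageBuilder;
>
>     // drlText and ruleflowXml are the text columns read from your DB
>     PackageBuilder builder = new PackageBuilder();
>     builder.addPackageFromDrl( new StringReader( drlText ) );
>     // the ruleflow XML can be fed straight from a string, no file needed
>     builder.addRuleFlow( new StringReader( ruleflowXml ) );
>
>     RuleBase ruleBase = RuleBaseFactory.newRuleBase();
>     ruleBase.addPackage( builder.getPackage() );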
>
> 2. Yes, there are several optimizations you can apply to the engine and
> to your rules, much as you would tune a database. Which optimizations,
> and how to apply them, can only be answered by looking at your use case
> and fine tuning it. Regarding volumes, we have users running thousands of
> rules against millions of facts without problems, but as you know, the
> bigger the size, the more careful you need to be about the engine
> configuration and the quality of your rules.
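>
> As a concrete example of the kind of engine configuration I mean (a
> sketch only; whether these settings apply depends on your rules, and
> property availability depends on your version):
>
>     import org.drools.RuleBase;
>     import org.drools.RuleBaseConfiguration;
>     import org.drools.RuleBaseFactory;
>
>     RuleBaseConfiguration conf = new RuleBaseConfiguration();
>     // sequential mode: a big win for one-pass, stateless batch runs
>     conf.setSequential( true );
>     // shadow proxies can be turned off if rules never modify facts
>     conf.setShadowProxy( false );
>     RuleBase ruleBase = RuleBaseFactory.newRuleBase( conf );
>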
> I'm trying to get a user to write a blog post on the sizing of the
> application they have right now. I don't want to disclose the information
> before they allow me to, but their solution has fewer rules than yours
> while processing a similar volume of facts in 2-hour windows. They run on
> a server with just a few cores, but use a considerable amount of memory.
>
> Regarding your question about whether a production rule engine fits your
> use case: as a general rule, the Rete algorithm provides increasing
> benefits as the rule base grows, because the more rules you have, the
> more you benefit from optimizations like node sharing and other
> techniques.
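>
> To give a hypothetical illustration of node sharing: when two rules test
> the identical constraint, Rete compiles it into a single shared node that
> is evaluated once per fact, no matter how many rules use it. A sketch
> (Trade and Account are made-up types):
>
>     // both rules share the alpha node for "amount > 1000000"
>     String drl =
>         "rule \"high value trade\"\n" +
>         "when\n" +
>         "    $t : Trade( amount > 1000000 )\n" +
>         "then\n" +
>         "    System.out.println( $t );\n" +
>         "end\n" +
>         "rule \"high value trade, young account\"\n" +
>         "when\n" +
>         "    $t : Trade( amount > 1000000, $owner : owner )\n" +
>         "    Account( owner == $owner, age < 30 )\n" +
>         "then\n" +
>         "    System.out.println( $t );\n" +
>         "end\n";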
>
> If you go ahead, I suggest you contact Red Hat. They can help with your
> solution design and training (if necessary), as well as provide support
> for development and production.
>
> []s
> Edson
>
2008/11/4 techy <techluver007@gmail.com>
>
>>
>> Thanks Edson.
>> A few more questions based on my requirements.
>>
>> I want to load all rules from the database once every day (during daily
>> job startup) and apply them against a huge amount of data. Once the
>> rules are loaded, there is no further need for dynamic addition/removal
>> during that execution. But if I add new rules to the database, the rule
>> engine should be able to pick them up during the next execution. (The
>> intention is to later provide a custom editor for the user to manage the
>> rules; since Drools' BRMS does not seem to handle all of our use case
>> conditions, I've concluded we need a custom BRMS.)
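>>
>> For illustration, the daily startup I have in mind is roughly this
>> (table and column names are placeholders; I'm assuming the 4.x
>> PackageBuilder API and that all rules share one package name):
>>
>>     import java.io.StringReader;
>>     import java.sql.*;
>>     import org.drools.RuleBase;
>>     import org.drools.RuleBaseFactory;
>>     import org.drools.compiler.PackageBuilder;
>>
>>     // rebuild the rulebase once per daily run from DB-stored DRL
>>     Connection con = DriverManager.getConnection( jdbcUrl );
>>     Statement st = con.createStatement();
>>     ResultSet rs = st.executeQuery(
>>         "SELECT drl_text FROM rules WHERE active = 1" );
>>     PackageBuilder builder = new PackageBuilder();
>>     while ( rs.next() ) {
>>         builder.addPackageFromDrl(
>>             new StringReader( rs.getString( "drl_text" ) ) );
>>     }
>>     rs.close(); st.close(); con.close();
>>     RuleBase ruleBase = RuleBaseFactory.newRuleBase();
>>     ruleBase.addPackage( builder.getPackage() );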
>>
>> 1. Can the rule flow be database driven? I.e., can I store the rule flow
>> content for each rule as a text column in the database and create the
>> rule flow file during each day's execution?
>>
>> 2. My requirement is to run 1200 use cases (rules) against 40-50 million
>> records each day, so I'm really concerned about execution time. Can I do
>> any optimization in the rule engine for faster execution? Is a rule
>> engine still a good option in this case?
>>
>> appreciate your help.
>>
>>
>>
>> Edson Tirelli-3 wrote:
>> >
>> > Yuri,
>> >
>> > Right now, the only way is to work with different rule bases and
>> > working memories. Even using agenda-groups or rule-flow, rules are
>> > still eagerly evaluated, as this is how standard Rete works.
>> > Creating and canceling too many activations is a known problem, and
>> > the only way around it right now is sequential mode. Sequential mode
>> > has some restrictions on what you can do: you must work with a
>> > stateless working memory and cannot modify/retract facts in your
>> > rules. In exchange, it will give you a big performance boost.
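>> >
>> > A minimal sketch of what that looks like in practice (4.x API;
>> > "facts" stands for whatever collection your batch produces):
>> >
>> >     RuleBaseConfiguration conf = new RuleBaseConfiguration();
>> >     conf.setSequential( true );
>> >     RuleBase ruleBase = RuleBaseFactory.newRuleBase( conf );
>> >     ruleBase.addPackage( pkg );  // your compiled package, as usual
>> >
>> >     // sequential mode requires a *stateless* session: facts go in
>> >     // once, rules fire once, and the RHS must not modify/retract
>> >     StatelessSession session = ruleBase.newStatelessSession();
>> >     session.execute( facts );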
>> >
>> > We are evaluating the possibility of creating physical network
>> > partitions for the next version, but that will still require some R&D.
>> >
>> > []s
>> > Edson
>> >
>> > 2007/8/14, Yuri <ydewit@gmail.com>:
>> >>
>> >> Dr. Gernot Starke <gs <at> gernotstarke.de> writes:
>> >> > can you detail your problem a little?
>> >>
>> >> I basically need to find perfect matches between two different sets
>> >> of objects. If perfect matches are not found, I then create bulks of
>> >> objects that are then used in combination with the individual ones to
>> >> find bulk matches. If no matches are found, I then need to categorize
>> >> the breaks (around 10 different categorizations) and report them.
>> >>
>> >> The matching criteria between two objects are specific enough to be
>> >> fast. Once I get into breaks, which basically means removing some
>> >> criteria components, the possible combinations increase exponentially.
>> >> Bulking just compounds the problem by adding more matchable/breakable
>> >> facts into memory.
>> >>
>> >> My bulking logic (I didn't have collect when I started with 3.0)
>> >> starts a bulk by looking for two different objects with the same
>> >> bulking criteria (this is my first potential cross product, since
>> >> Drools would produce C!/(N!(C-N)!) combinations). Then, once the bulk
>> >> for a given criteria is created, I have a second rule that expands or
>> >> contracts the bulks as new facts are asserted, causing many different
>> >> side effects.
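>> >>
>> >> For what it's worth, now that collect exists, the bulk-start rule I
>> >> am considering would look roughly like this (Item and Bulk are
>> >> made-up stand-ins for my fact model, and the DRL would also need the
>> >> matching imports):
>> >>
>> >>     String bulkRule =
>> >>         "rule \"start bulk for a criteria\"\n" +
>> >>         "when\n" +
>> >>         "    Item( $c : criteria )\n" +
>> >>         "    $items : ArrayList( size >= 2 )\n" +
>> >>         "        from collect( Item( criteria == $c ) )\n" +
>> >>         "    not Bulk( criteria == $c )\n" +
>> >>         "then\n" +
>> >>         "    insert( new Bulk( $c, $items ) );\n" +
>> >>         "end\n";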
>> >>
>> >> What I am basically seeing is that asserting a fact that would, for
>> >> instance, be a perfect match causes many of the bulking and breaking
>> >> rule activations to be created and then cancelled. Considering that I
>> >> am talking about tens or hundreds of thousands of facts, I thought
>> >> that if I could stage the activation creation I would increase
>> >> processing speed.
>> >>
>> >> With 15K objects on each side I have been seeing something like 1
>> >> assertion per
>> >> second.
>> >>
>> >> I am aware that this could be a cross product somewhere, but I have
>> >> already revised the rules many times, so now I am looking for other
>> >> alternatives.
>> >>
>> >>
>> >>
>> >
>> >
>> >
>> > --
>> > Edson Tirelli
>> > Software Engineer - JBoss Rules Core Developer
>> > Office: +55 11 3529-6000
>> > Mobile: +55 11 9287-5646
>> > JBoss, a division of Red Hat @ www.jboss.com
>> >
>> >
>>
>>
>
>
>
> --
> Edson Tirelli
> JBoss Drools Core Development
> JBoss, a division of Red Hat @ www.jboss.com
>