[rules-users] Working memory batch insert performance

Wolfgang Laun wolfgang.laun at gmail.com
Tue Dec 20 09:06:49 EST 2011


Frequently, slow insertion is an indication of excessive joins
resulting from unlucky pattern combinations. Only recently there was a
thread where things improved after a very simple reordering of
patterns, constraining the combinations early on.
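As an illustration (a sketch with hypothetical fact types Customer and Order, not taken from the thread), moving the selective constraint into the first pattern reduces the partial matches the engine has to build during insertion:

```drl
// Joins every Order against every Customer and filters late:
rule "unconstrained join"
when
    $c : Customer()
    $o : Order( customerId == $c.id, amount > 1000 )
then
    // handle match
end

// Same logic, but the selective pattern comes first, so far
// fewer Customer/Order combinations reach the join node:
rule "constrained early"
when
    $o : Order( amount > 1000 )
    $c : Customer( id == $o.customerId )
then
    // handle match
end
```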

We'd have to see your rules for more detailed hints.
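On question 2 below: with the Drools 5.x command API, a batch can be expressed as a single BatchExecution command run against a stateless session. This is only a sketch using the 5.x class names (CommandFactory, StatelessKnowledgeSession); check the exact signatures against your Drools version:

```java
import java.util.ArrayList;
import java.util.List;

import org.drools.KnowledgeBase;
import org.drools.command.Command;
import org.drools.command.CommandFactory;
import org.drools.runtime.StatelessKnowledgeSession;

public class BatchLoad {

    // Insert all facts in one command, then fire the rules once.
    public static void run(KnowledgeBase kbase, List<Object> facts) {
        StatelessKnowledgeSession session = kbase.newStatelessKnowledgeSession();

        List<Command> commands = new ArrayList<Command>();
        commands.add(CommandFactory.newInsertElements(facts));
        commands.add(CommandFactory.newFireAllRules());

        session.execute(CommandFactory.newBatchExecution(commands));
    }
}
```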

-W

On 20/12/2011, Zhuo Li <milanello1998 at gmail.com> wrote:
> Hi, folks,
>
>
>
> I recently did a benchmark on Drools 5.1.2 and noticed that inserting data
> into a stateful session is very time-consuming. It took me about 30 minutes
> to insert 10,000 data rows on a JVM with a 512M heap. Hence I have to keep
> inserting data rows as I receive them and keep them in working memory,
> rather than loading them in a batch at a given time. This is not friendly
> for disaster recovery, and I have two questions here to see if anybody
> has any thoughts:
>
>
>
> 1.       Is there a better way to improve the performance of data inserts
> into a stateful session?
>
> 2.       I noticed that there is a method called BatchExecution() for a
> stateless session. I did not get a chance to test it yet, but is this a
> better way to load data in a batch and then run the rules?
>
>
>
> My requirement is that I need to load a batch of data once at the end of the
> day, and then run the rules to separate matched data from unmatched data. I
> have a 3-hour processing window to complete this loading and matching
> process, and the data I need to load is about 1 to 2 million rows. My JVM
> heap size can be set up to 1024 M.
>
>
>
> Best regards
>
> Abe
>
>
>
>



More information about the rules-users mailing list