Hi,
As a Drools newbie, I'm grappling with the above question; any help much
appreciated. To elaborate:
- We have a largish dataset (~50GB+) stored in an RDBMS (Oracle).
- We're considering using Drools to implement business rules (e.g. for
data validation constraints and derivations).
- The issue is how to give the rulebase efficient & scalable access to
the db. It could need access to the whole dataset, since the business
rules can touch any of the tables.
We've done an artificial pilot with a much smaller dataset, simply by
syncing the entire db into the rulebase. Users like it because they can
read the rules directly (using a DSL). Before we go any further, however,
we need to find a scaling strategy.
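For concreteness, the pilot boils down to something like the following
(we're on the current KIE API; the Customer fact class, the session name,
the query and the connection details are just placeholders for
illustration):

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

// Illustrative fact class; the real model has one POJO per table we care about.
class Customer {
    private final long id;
    private final String name;
    private final BigDecimal creditLimit;

    Customer(long id, String name, BigDecimal creditLimit) {
        this.id = id;
        this.name = name;
        this.creditLimit = creditLimit;
    }

    public long getId() { return id; }
    public String getName() { return name; }
    public BigDecimal getCreditLimit() { return creditLimit; }
}

public class PilotLoader {
    public static void main(String[] args) throws Exception {
        KieServices ks = KieServices.Factory.get();
        KieContainer container = ks.getKieClasspathContainer();
        KieSession session = container.newKieSession("validationSession"); // name from kmodule.xml

        // "Sync the entire db into the rulebase": every row becomes a fact.
        try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT id, name, credit_limit FROM customers")) {
            while (rs.next()) {
                session.insert(new Customer(rs.getLong("id"),
                                            rs.getString("name"),
                                            rs.getBigDecimal("credit_limit")));
            }
        }

        session.fireAllRules(); // validation / derivation rules run over the loaded facts
        session.dispose();
    }
}

Obviously that full sync is exactly the bit that won't scale to the real
dataset.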
We were thinking about some kind of caching strategy: conceptually, a
cache miss in the rulebase would cause the relevant data to be loaded
from the db on demand. However, we've no idea whether that's a practical
option, or whether there's something better.
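To make that concrete, the sort of thing we were imagining is a
load-on-miss lookup that the rules see as a global, so the first time a
rule needs a row that isn't yet in working memory it gets fetched from
Oracle and inserted as a fact. Very rough sketch, re-using the
illustrative Customer class from above (we've no idea yet whether
inserting into the session from inside a lookup like this plays nicely
with the engine):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.sql.DataSource;
import org.kie.api.runtime.KieSession;

// Load-on-miss lookup. Registered with the session as a global, e.g.
//   session.setGlobal("customers", new CustomerLoader(session, dataSource));
// and declared in the DRL as
//   global CustomerLoader customers;
// so a rule consequence (or the application) can call customers.get(id).
public class CustomerLoader {
    private final Map<Long, Customer> cache = new ConcurrentHashMap<>();
    private final KieSession session;
    private final DataSource dataSource;

    public CustomerLoader(KieSession session, DataSource dataSource) {
        this.session = session;
        this.dataSource = dataSource;
    }

    public Customer get(long id) {
        return cache.computeIfAbsent(id, key -> {
            Customer c = fetch(key);   // cache miss: go to the database
            if (c != null) {
                session.insert(c);     // make the row visible to the rules
            }
            return c;
        });
    }

    private Customer fetch(long id) {
        String sql = "SELECT id, name, credit_limit FROM customers WHERE id = ?";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next()
                        ? new Customer(rs.getLong("id"), rs.getString("name"),
                                       rs.getBigDecimal("credit_limit"))
                        : null;
            }
        } catch (SQLException e) {
            throw new RuntimeException("failed to load customer " + id, e);
        }
    }
}

That's the conceptual picture; whether it's actually practical with the
engine and working memory is exactly what we're unsure about.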
Hope that makes sense; any help much appreciated. Oh, and btw, thanks
for a great piece of software!
- Scoot.