We have a batch application that needs to pump roughly 2 billion records through a relatively small ruleset (say a couple hundred rules). My thought is to do something with Drools and Hadoop, but I am wondering whether anyone else has handled a dataset this large. I have watched the video on Drools with large data and it has some good ideas, but I would like to hear about any other experiences.
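For reference, the shape I have in mind is roughly the following sketch, a plain-Java stand-in rather than actual Drools API: records are pulled in fixed-size batches so memory stays bounded regardless of total volume, and each batch is run through the full rule set. The `Predicate` "rules" here are hypothetical placeholders for what would be DRL rules evaluated in a stateless session.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Predicate;

/** Hypothetical sketch: stream records in fixed-size batches and apply a
 *  small rule set to each batch, keeping memory flat no matter how many
 *  records the source holds. Predicates stand in for real rules. */
public class BatchRuleRunner {

    public static long run(Iterator<Integer> records,
                           List<Predicate<Integer>> rules,
                           int batchSize) {
        long matches = 0;
        List<Integer> batch = new ArrayList<>(batchSize);
        while (records.hasNext()) {
            batch.add(records.next());
            // Flush when the batch is full or the source is exhausted.
            if (batch.size() == batchSize || !records.hasNext()) {
                for (Integer record : batch) {
                    for (Predicate<Integer> rule : rules) {
                        if (rule.test(record)) {
                            matches++;
                        }
                    }
                }
                batch.clear(); // discard working state between batches
            }
        }
        return matches;
    }
}
```

In a Hadoop setup, each mapper would own one slice of the input and run this loop independently, which is what makes the two-billion-record total tractable.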
Sent from the Drools: User forum mailing list archive at Nabble.com