Dear all,
I'm designing a system that operates on data collected from software
installations deployed to customers. Every day, a batch of data comes in
from every customer. Over time, we want to build a knowledge base whose
rules fire, for example, on certain patterns in that data. (Yes, I think
a rule engine is the right tool for this task.)
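For concreteness, here is a minimal, self-contained sketch of the kind of
rule I mean (Java plus a DRL string; the DailyReport fact type, its fields,
and the threshold are all invented for illustration, and I use the
KieHelper utility to build the knowledge base from the string):

import org.kie.api.KieBase;
import org.kie.api.definition.type.FactType;
import org.kie.api.io.ResourceType;
import org.kie.api.runtime.KieSession;
import org.kie.internal.utils.KieHelper;

public class RuleSketch {

    // Fact type and rule are made up; real facts would come from the
    // daily customer batches.
    static final String DRL =
          "package sketch\n"
        + "declare DailyReport\n"
        + "    customerId : String\n"
        + "    errorCount : int\n"
        + "end\n"
        + "rule \"high error count\"\n"
        + "when\n"
        + "    $r : DailyReport( errorCount > 100 )\n"
        + "then\n"
        + "    System.out.println(\"fired for \" + $r.getCustomerId());\n"
        + "end\n";

    public static void main(String[] args) throws Exception {
        KieBase kieBase = new KieHelper()
                .addContent(DRL, ResourceType.DRL).build();

        // Instantiate the DRL-declared type via the FactType API.
        FactType type = kieBase.getFactType("sketch", "DailyReport");
        Object report = type.newInstance();
        type.set(report, "customerId", "cust-42");
        type.set(report, "errorCount", 150);

        KieSession session = kieBase.newKieSession();
        session.insert(report);
        session.fireAllRules(); // prints: fired for cust-42
        session.dispose();
    }
}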
The fact base will become large: 365 days x 1000 customers x 100 or so
facts per customer and day, i.e., on the order of 36 million facts per
year. What are "reasonable" sizes of fact bases that Drools can cope with?
The kind of data and the expected rules might allow us to restrict the
data to slices: only data from one customer, only data from a few
customers in a given time period, etc. That sounds like OLAP: the data is
a cube with dimensions customer and time. To (re-)evaluate the rules for
a specific purpose, we would slice the cube, build the Drools fact base
from the slice, and fire the corresponding rule set; a sketch of that
flow follows below.
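The per-evaluation flow I picture looks roughly like this. The facts
table, its columns, and the Measurement fact class are placeholders, and
I assume a plain JDBC connection to whatever store we end up with:

import java.sql.Connection;
import java.sql.Date;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.time.LocalDate;

import org.kie.api.KieBase;
import org.kie.api.runtime.KieSession;

public class SliceEvaluator {

    /** Placeholder fact shape: one measurement per customer and day. */
    public static class Measurement {
        public final String customerId;
        public final LocalDate day;
        public final String metric;
        public final double value;

        public Measurement(String customerId, LocalDate day,
                           String metric, double value) {
            this.customerId = customerId;
            this.day = day;
            this.metric = metric;
            this.value = value;
        }
    }

    private final KieBase kieBase; // built once, shared across slices
    private final Connection db;   // plain JDBC for the sketch

    public SliceEvaluator(KieBase kieBase, Connection db) {
        this.kieBase = kieBase;
        this.db = db;
    }

    /** Evaluate the rules against one slice: one customer, one window. */
    public void evaluateSlice(String customerId, LocalDate from,
                              LocalDate to) throws Exception {
        // Fresh, short-lived session per slice, so working memory only
        // ever holds the slice, never the whole cube.
        KieSession session = kieBase.newKieSession();
        try (PreparedStatement ps = db.prepareStatement(
                "SELECT day, metric, value FROM facts "
              + "WHERE customer_id = ? AND day BETWEEN ? AND ?")) {
            ps.setString(1, customerId);
            ps.setDate(2, Date.valueOf(from));
            ps.setDate(3, Date.valueOf(to));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    session.insert(new Measurement(
                            customerId,
                            rs.getDate("day").toLocalDate(),
                            rs.getString("metric"),
                            rs.getDouble("value")));
                }
            }
            session.fireAllRules();
        } finally {
            session.dispose();
        }
    }
}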
Do you have suggestions on how to organize the data? Which database
technology to use?
Does what I say make any sense at all?
Best, Ulrich