Just to be clear, the "fixed" was only referring to my nick on gmail ... ;-)
Well, fixed now :-)
The benchmark (part of a JUnit test) looks like this:
protected void init() {
    KnowledgeBuilderConfiguration bConf = KnowledgeBuilderFactory.newKnowledgeBuilderConfiguration();
    KnowledgeBuilder builder = KnowledgeBuilderFactory.newKnowledgeBuilder(bConf);
    builder.add(...);
    //STREAM mode is required for CEP over entry points
    KnowledgeBaseConfiguration kbConf = KnowledgeBaseFactory.newKnowledgeBaseConfiguration();
    kbConf.setOption( EventProcessingOption.STREAM );
    //todo - turn this on when https://issues.jboss.org/browse/JBRULES-2845 is fixed
    //kbConf.setOption( MultithreadEvaluationOption.YES );
    kb = KnowledgeBaseFactory.newKnowledgeBase(kbConf);
    kb.addKnowledgePackages(builder.getKnowledgePackages());
    //pseudo clock lets the test control time explicitly
    KnowledgeSessionConfiguration sConf = KnowledgeBaseFactory.newKnowledgeSessionConfiguration();
    sConf.setOption(ClockTypeOption.get("pseudo"));
    Environment env = EnvironmentFactory.newEnvironment();
    s = kb.newStatefulKnowledgeSession(sConf, env);
    clock = (SessionPseudoClock) s.getSessionClock();
    backupStream = s.getWorkingMemoryEntryPoint("Backup Stream");
}
That is followed by a test method that looks like this:
public void testLargeBatches() {
    Node node1 = new Node("node 1");
    node1.setId(1L);
    s.insert(node1);
    s.fireAllRules();
    clock.advanceTime(1L, TimeUnit.DAYS);
    s.fireAllRules();

    int batchSize = 40000;
    int daysToRun = 1;
    ArrayList<Backup> batch = new ArrayList<Backup>(batchSize);
    ArrayList<Long> nodeIds = new ArrayList<Long>();
    nodeIds.add(1L);
    DateTime engineTime;
    long start;
    long accumulated = 0;
    System.err.println("days: " + daysToRun + ". batch: " + batchSize + ". rules: " + ruleFile);
    for (int i = 0; i < daysToRun; i++) {
        engineTime = new DateTime(clock.getCurrentTime(), DateTimeZone.UTC);
        batch.clear();
        //generate a day's worth of random backup events
        BackupGenerationUtil.fillBackupBatch(batch, nodeIds, engineTime.minusHours(23).minusMinutes(59).minusSeconds(59), engineTime.minusSeconds(1), batchSize);
        start = System.currentTimeMillis();
        //feed the batch in
        for (Backup b : batch) {
            backupStream.insert(b);
        }
        //process the batch, then advance a day and fire again
        s.fireAllRules();
        clock.advanceTime(1L, TimeUnit.DAYS);
        s.fireAllRules();
        accumulated += (System.currentTimeMillis() - start);
    }
    long averageBatchProcessingTime = (long) (accumulated / (double) daysToRun);
    double perBackupEventTime = ((double) accumulated) / (batchSize * daysToRun);
    int eventsPerSecond = (int) (1000D / perBackupEventTime);
    System.err.println("average batch processing time is " + averageBatchProcessingTime + " which is " + perBackupEventTime + "/event or " + eventsPerSecond + " events/second");
}
Running this with or without the 2nd "trimming" rule makes a huge difference. On my machine I can run the non-trimming version in batches of 300K at ~2500 events/second using ~1GB RAM. With the trimming rule I can't get anywhere near that: the above value (40K) is already enough to slow it down to ~1800 events/second, and I don't understand what's so "heavy" about my attempt to save memory.
Any help/thoughts/clues would be most welcome.

2010/12/23 Mauricio Salatino <salaboy@gmail.com>:
hehe yes.. the same happens to "me" :)

2010/12/23 Wolfgang Laun <wolfgang.laun@gmail.com>:
(OT: your nick "me" is confusing other gmail users. I see your mail as if I had sent it to myself. I was almost binning it.)
The delay you see may depend on the way you run the test with 40K events. Please describe the sequence of insertions and calls to fire...(), and the session setup (pseudo-clock, etc.).
-W

2010/12/23 me <radai.rosenblatt@gmail.com>:
Hi.
I'm trying to demo Drools Fusion for a system that processes backup events.
I have the following two CEP rules:
rule "Backup Not Succeeded For At Least 3 Days"
@ruleId(1)
when
Node($id : id)
not ( Backup(clientId == $id, $state: state == BackupStateEnum.FINISHED) over window:time( 3d ) from entry-point "Backup Stream" )
then
//nothing for now
end
rule "Prune Previous Successful Backups"
@ruleId(2)
when
$prevBackup : Backup($id : clientId, state == BackupStateEnum.FINISHED) from entry-point "Backup Stream"
$newerBackup : Backup(clientId == $id, state == BackupStateEnum.FINISHED, this after $prevBackup) from entry-point "Backup Stream"
then
drools.retract($prevBackup);
end
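(Not shown above: for the sliding window and the "after" operator to work, Backup also has to be declared as an event in the DRL, roughly like this:)

declare Backup
    @role( event )
end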
Rule #2 is my attempt to cut down on memory usage. The problem is that rule #2 slows down the processing very considerably (2 orders of magnitude slower when simulating batches of 40K events/day).
What would be my next step in trying to find the reason for it? My intuition tells me that with 2 events in the system at any given time instead of 120K (single node, 40K events/day, 3-day window) I should have seen either a speedup or reduced memory consumption (if not both), and yet the results I'm seeing are the exact opposite.
Is there anything I'm missing? I could try adding some sort of LastSuccessfulBackup "marker" object into main memory and updating it using events from the backup stream, but then how would I express rule #1?
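(For reference, the kind of thing I have in mind for maintaining such a marker; a rough, untested sketch that assumes Backup exposes a timestamp property:)

declare LastSuccessfulBackup
    clientId : long
    lastTime : long
end

rule "Create Last-Successful-Backup Marker"
when
    Backup($id : clientId, $t : timestamp, state == BackupStateEnum.FINISHED) from entry-point "Backup Stream"
    not LastSuccessfulBackup(clientId == $id)
then
    //declared types get a generated constructor over all fields
    insert(new LastSuccessfulBackup($id, $t));
end

rule "Update Last-Successful-Backup Marker"
when
    Backup($id : clientId, $t : timestamp, state == BackupStateEnum.FINISHED) from entry-point "Backup Stream"
    $marker : LastSuccessfulBackup(clientId == $id, lastTime < $t)
then
    modify($marker) { setLastTime($t) }
end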
Any help/clues/suggestions would be greatly appreciated,
radai.
--
- CTO @ http://www.plugtree.com
- MyJourney @ http://salaboy.wordpress.com
- Co-Founder @ http://www.jbug.com.ar
- Salatino "Salaboy" Mauricio -
_______________________________________________
rules-users mailing list
rules-users@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/rules-users