In Drools v5.x, I see that the conflict resolver strategy was configurable. I
have also read a few different Drools documentation sources that discuss
varieties of "complex" conflict resolution strategies. One source,
http://legacy.drools.codehaus.org/Conflict+Resolution, describes a tiered
implementation named CompositeConflictResolver.
We have been experimenting with upgrading from Drools v5.6.0.Final to a v6.x
version, and we noticed a fairly significant performance degradation /(a)/.
Digging into some rule logging, we found that our rule load order was
interacting badly with conflict resolution: the "wrong" rule activations were
being chosen first on the agenda, and this was causing a lot of
unnecessary/redundant "movement" within the Rete network.
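For concreteness, here is a minimal sketch of the situation, built with the
v6 KieHelper utility. The two rules are hypothetical stand-ins (nothing like
our real ones); with no salience set, which activation fires first comes down
entirely to the conflict resolver:

    import org.kie.api.io.ResourceType;
    import org.kie.api.runtime.KieSession;
    import org.kie.internal.utils.KieHelper;

    public class LoadOrderDemo {

        // Two hypothetical rules with the same (default) salience, both
        // activated by the same fact.
        private static final String DRL =
            "rule \"first-loaded\"\n" +
            "when String()\n" +
            "then System.out.println(\"first-loaded fired\");\n" +
            "end\n" +
            "rule \"second-loaded\"\n" +
            "when String()\n" +
            "then System.out.println(\"second-loaded fired\");\n" +
            "end\n";

        public static void main(String[] args) {
            KieSession ksession = new KieHelper()
                    .addContent(DRL, ResourceType.DRL)
                    .build()
                    .newKieSession();
            ksession.insert("trigger");
            // Both rules activate on the same fact; with equal salience, the
            // order in which they fire is decided by the conflict resolver.
            ksession.fireAllRules();
            ksession.dispose();
        }
    }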
After reading through the documentation on conflict resolution strategies and
noting that the resolver was configurable in Drools v5.x, I started thinking
more about the impact of the conflict resolution strategy on performance.
Digging deeper, I believe Drools v5.6.0.Final uses
`org.drools.core.conflict.DepthConflictResolver` as the default resolver
(which, interestingly, is not the CompositeConflictResolver mentioned above).
In Drools v6.x (around v6.2.x, I believe) with Phreak enabled,
`org.drools.core.conflict.PhreakConflictResolver` is used as the default
resolver.
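For reference, this is roughly how I understand the resolver could be swapped
in v5.x via the internal RuleBaseConfiguration. I am going from my reading of
the 5.x source here, so please treat the exact class and method names below
as assumptions on my part:

    import org.drools.RuleBase;
    import org.drools.RuleBaseConfiguration;
    import org.drools.RuleBaseFactory;
    import org.drools.conflict.SalienceConflictResolver;

    public class ResolverConfigSketch {
        public static void main(String[] args) {
            // Override the default (DepthConflictResolver) with another
            // strategy from the org.drools.conflict package.
            RuleBaseConfiguration conf = new RuleBaseConfiguration();
            conf.setConflictResolver(SalienceConflictResolver.getInstance());
            RuleBase ruleBase = RuleBaseFactory.newRuleBase(conf);
        }
    }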
This raises a few questions:
/1)/ With Phreak enabled, it looks like the conflict resolver is *not*
configurable anymore. I believe this is the case based on these lines:
https://github.com/droolsjbpm/drools/blob/master/drools-core/src/main/jav...
Why was this configuration option removed?
/2)/ The PhreakConflictResolver does not appear to do anything very
sophisticated: I gather that it respects salience first, then falls back to
rule load order (see the sketch after this question). I found this around
these lines:
https://github.com/droolsjbpm/drools/blob/master/drools-core/src/main/jav...
Why was this implementation chosen? Is it discussed or documented anywhere?
Was it determined that it performs better than the alternatives, or that
there is no significant difference either way? There are about 4 conflict
resolver implementations in the org.drools.core.conflict package, whereas in
Drools v5.6.x I counted 10-11 of them in the similarly purposed
org.drools.conflict package.
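To make sure I am reading it right, here is my understanding of what
PhreakConflictResolver effectively computes, restated as a plain Comparator
over a hypothetical activation view (the type and accessor names below are
mine, not the Drools API):

    import java.util.Comparator;

    // Hypothetical view of an activation; not the Drools Activation interface.
    interface ActivationView {
        int getSalience();
        int getRuleLoadOrder(); // assumed: position of the rule in load order
    }

    class SalienceThenLoadOrder implements Comparator<ActivationView> {
        @Override
        public int compare(ActivationView a, ActivationView b) {
            // Higher salience fires first...
            int bySalience = Integer.compare(b.getSalience(), a.getSalience());
            if (bySalience != 0) {
                return bySalience;
            }
            // ...then the earlier-loaded rule wins the tie.
            return Integer.compare(a.getRuleLoadOrder(), b.getRuleLoadOrder());
        }
    }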
/3)/ (related to /2)/) Why was the default conflict resolution strategy
changed from the DepthConflictResolver used in v5.6.x to
PhreakConflictResolver? Based on the source code, I do not think they are
equivalent; however, I cannot say I fully understand all of the semantics of
the `Activation#getActivationNumber` used by the DepthConflictResolver. My
best guess is sketched below.
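If my reading is right (and this is an assumption I would like confirmed),
`getActivationNumber()` is a monotonically increasing counter assigned as
activations are created. That would flip the tie-breaker relative to the
previous sketch: after salience, the *newest* activation fires first, giving
the depth-first / LIFO behavior the name suggests:

    import java.util.Comparator;

    // Same hypothetical view as before, with the creation counter added.
    interface DepthActivationView {
        int getSalience();
        long getActivationNumber(); // assumed: increases with each new activation
    }

    class DepthLikeResolver implements Comparator<DepthActivationView> {
        @Override
        public int compare(DepthActivationView a, DepthActivationView b) {
            int bySalience = Integer.compare(b.getSalience(), a.getSalience());
            if (bySalience != 0) {
                return bySalience; // higher salience still wins outright
            }
            // Newer activation (higher number) fires first: depth-first / LIFO.
            return Long.compare(b.getActivationNumber(), a.getActivationNumber());
        }
    }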
I know that the perf issue /(a)/ noted above could be addressed by using
salience, etc. I want to avoid salience where possible, as it leads to more
fragile, less declarative systems, and in any case we were able to fix the
perf issue by simply changing the rule load order. So my question is not how
to deal with this particular perf issue; rather, I am asking about Drools'
choice of conflict resolution strategy in v6, per points /1-3/ above.
In the blog post
http://blog.athico.com/2013/11/rip-rete-time-to-get-phreaky.html,
the sentence
/"A simple heuristic, based on the rule most likely to result in firings, is
used to select the next rule for evaluation; this delays the evaluation and
firing of the other rules."/
is, I think, the only mention of the topic of conflict resolution. I
understand the parts about linked and unlinked rules; however, it is when the
agenda is populated by multiple activations with the same salience that
things get interesting from the point of view of the topic discussed here.
I would appreciate any feedback on this.