So it's either discard the messages with a repeated time stamp, or fix
the time stamp? If you can assume a constant channel delay, you can
correct the time stamp on arrival.
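A minimal sketch of that correction, assuming a constant channel delay
(the delay value and method names here are illustrative assumptions, not
anything from the vehicle spec):

```java
public class TimestampRepair {

    // Assumed constant channel delay; the real value would have to be
    // measured for the actual link.
    static final long CHANNEL_DELAY_MS = 150;

    // Derive a replacement end time from the message's arrival time,
    // assuming every message is delayed by the same fixed amount.
    static long repairedEndTime(long arrivalTimeMs) {
        return arrivalTimeMs - CHANNEL_DELAY_MS;
    }
}
```
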
I'm not sure whether either one should be done by a rule in the regular
processing set; it's not really part of the fact handling logic. And if the
vehicle code is eventually fixed, you won't have to change the rules.
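If the cleanup is done outside the rule base, a pre-filter along these
lines might work (a sketch only; `Fact` here is a stand-in for the real
fact class, and the threshold of five duplicates is taken from the rule
below):

```java
import java.util.*;

public class FactFilter {

    // Stand-in for the real fact class; only endTime matters here.
    static class Fact {
        final long endTime;
        Fact(long endTime) { this.endTime = endTime; }
    }

    // Drop facts whose endTime occurs five or more times, before
    // anything is inserted into the session. Two passes: count each
    // endTime, then keep only facts below the threshold.
    static List<Fact> filterInvalid(List<Fact> facts) {
        Map<Long, Integer> counts = new HashMap<>();
        for (Fact f : facts) {
            counts.merge(f.endTime, 1, Integer::sum);
        }
        List<Fact> kept = new ArrayList<>();
        for (Fact f : facts) {
            if (counts.get(f.endTime) < 5) {
                kept.add(f);
            }
        }
        return kept;
    }
}
```

This keeps the rule set untouched, so nothing has to change when the
vehicle code is eventually fixed.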
-W
On 25/02/2012, Benjamin Bennett <benbennett(a)gmail.com> wrote:
It isn't duplicates but corrupted data coming off a vehicle. There is a bug
in the vehicle code that isn't zeroing out memory or updating a time value
during the reset procedure, which causes a bunch of facts to have the same
end time stamp. We cannot really change the vehicle code because it was
certified by the government, and it will be a while before another version
is certified.
On Sat, Feb 25, 2012 at 1:00 AM, Wolfgang Laun
<wolfgang.laun(a)gmail.com> wrote:
> Would you please define unambiguously what constitutes an "invalid time"?
> From your code it would appear that one time has to occur fivefold or
> more often to be "invalid".
>
> In case any duplicates should be removed, a simple high-priority rule
> is sufficient:
>
> rule killDuplicate
>     salience 1000
> when
>     $f1 : Fact()
>     $f2 : Fact( this != $f1, endTime == $f1.endTime )
> then
>     retract( $f2 );
> end
>
> -W
>
>
>
>
> On 24/02/2012, Benjamin Bennett <benbennett(a)gmail.com> wrote:
> > Trying to figure out if it can be done in a rule, almost some sort of
> > pre-rule that runs before the other rules are triggered.
> >
> > The current rule I have is
> >
> > rule "RemoveInvalidEndTimestamps"
> >     salience 100
> > when
> >     $factN : Fact()
> >     $factsToRemove : ArrayList( size >= 5 )
> >         from collect( Fact( endTime == $factN.endTime ) )
> > then
> >     List newFactsToRemove = new ArrayList();
> >     newFactsToRemove.addAll( $factsToRemove );
> >     for ( Fact n : newFactsToRemove ) {
> >         retract( n );
> >     }
> > end
> >
> > I am using a cloud-based process. I could sort the facts and stream
> > them in. Just in a few test cases there are many facts with invalid
> > times, which kills the speed.
> > From the log I think the rule fires for each collection of size >= 5,
> > which means it is triggered for sizes 5, 6, 7, etc.
> >
> > Just wondering if there is a way to collect up all these invalid
> > times and remove them before any other rules run.
> >
> > I was just going to write some Java code to filter the facts before
> > feeding them into Drools, but I find the rule syntax much easier to
> > read for the non-software developers in my group.
> >
> >
> > --
> > Thanks,
> >
> > Benjamin Bennett
> >
> > <benbennett(a)gmail.com>
> >
> _______________________________________________
> rules-users mailing list
> rules-users(a)lists.jboss.org
>
https://lists.jboss.org/mailman/listinfo/rules-users
>
--
Sincerely,
Benjamin Bennett
314.246.0645
benbennett(a)gmail.com
"For a successful technology, reality must take precedence over public
relations, for Nature cannot be fooled."
Richard Feynman