Jesse White commented on DROOLS-4805:
-------------------------------------
I think the problem is related to the fact that our events are not strictly ordered in time.
I have a test case that reproduces the problem here:
https://gist.github.com/j-white/7adee3e5c628f012ea83cd69695b93f9
(if the value of delta == 0, then the events are effectively ordered, otherwise they are
random)
With out-of-order events, {{org.drools.core.time.impl.TrackableTimeJobFactoryManager#addTimerJobInstance}} is called many times while {{#removeTimerJobInstance}} is never called, leading to unbounded growth of the {{timerInstances}} map.
With ordered events, {{#addTimerJobInstance}} is only called once.
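The ordered-vs-random distinction above can be sketched in plain Java (a minimal illustration, not the linked gist; the class name, 1-second spacing, and seed are all hypothetical): each timestamp is jittered by a uniform offset in [-delta, +delta], so delta == 0 yields a strictly ordered stream and a large delta yields out-of-order events.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class EventOrdering {

    // Generate `count` event timestamps nominally spaced 1000 ms apart,
    // each shifted by a random offset in [-delta, +delta] milliseconds.
    static List<Long> timestamps(int count, int delta, long seed) {
        Random random = new Random(seed);
        List<Long> ts = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            // nextInt(2 * delta + 1) is uniform on [0, 2 * delta],
            // so the jitter is uniform on [-delta, +delta].
            long jitter = random.nextInt(2 * delta + 1) - delta;
            ts.add(i * 1000L + jitter);
        }
        return ts;
    }

    // True iff the timestamps are non-decreasing, i.e. the stream is ordered.
    static boolean isOrdered(List<Long> ts) {
        for (int i = 1; i < ts.size(); i++) {
            if (ts.get(i) < ts.get(i - 1)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isOrdered(timestamps(100, 0, 42L)));    // delta == 0: always ordered
        System.out.println(isOrdered(timestamps(100, 5000, 42L))); // large delta: almost certainly out of order
    }
}
```

Feeding a stream of the second kind into a STREAM-mode session is what appears to trigger the add-without-remove behavior described above.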
Memory leak in STREAM mode (# PointInTimeTriggers growing)
----------------------------------------------------------
Key: DROOLS-4805
URL:
https://issues.jboss.org/browse/DROOLS-4805
Project: Drools
Issue Type: Bug
Affects Versions: 7.29.0.Final
Reporter: Jesse White
Assignee: Mario Fusco
Priority: Major
Attachments: image-2019-11-25-10-31-07-281.png
When running our engine in STREAM mode, we notice a number of Drools internal object
references increasing over time even though the fact count stays relatively constant.
jmap -histo <pid> shows:
{noformat}
 num     #instances       #bytes  class name
----------------------------------------------
   1:      17478466   1258449552  java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask
   2:      17475174    699006960  org.drools.core.time.impl.JDKTimerService$JDKJobHandle
   3:      17762331    568394592  java.util.concurrent.ConcurrentHashMap$Node
   4:      17475174    559205568  org.drools.core.rule.SlidingTimeWindow$BehaviorJobContext
   5:      17475174    559205568  org.drools.core.time.impl.DefaultTimerJobInstance
   6:       8596776    551411680  [C
   7:      18898675    453568200  java.lang.Long
   8:      17475174    419404176  org.drools.core.time.SelfRemovalJobContext
   9:      17475174    279602784  org.drools.core.time.SelfRemovalJob
  10:      17475174    279602784  org.drools.core.time.impl.PointInTimeTrigger
  11:       9263698    222328752  java.lang.String
  12:       4584671    146709472  java.util.HashMap$Node
  13:        246658    138804176  [B
{noformat}
These numbers continue to increase over time until the system OOMs.
The graph for the number of facts in the session looks like:
!image-2019-11-25-10-31-07-281.png|thumbnail!
The engine is fairly busy - we're constantly adding and removing facts.
Our rules are available here:
https://github.com/OpenNMS/opennms/blob/opennms-25.0.0-1/opennms-base-ass...
--
This message was sent by Atlassian Jira
(v7.13.8#713008)