[JBoss JIRA] (WFLY-10031) Scripts throws "illegal reflective access" warning on JDK9
by Marek Kopecký (JIRA)
[ https://issues.jboss.org/browse/WFLY-10031?page=com.atlassian.jira.plugin... ]
Marek Kopecký commented on WFLY-10031:
--------------------------------------
[~dmlloyd]: CLI prints this with {{--illegal-access=deny}}:
{noformat}
[mkopecky@dhcp-10-40-4-239 bin]$ ./jboss-cli.sh -c
java.lang.ExceptionInInitializerError: Unable to make field java.lang.ThreadLocal$ThreadLocalMap java.lang.Thread.threadLocals accessible: module java.base does not "opens java.lang" to unnamed module @c8b96ec
[mkopecky@dhcp-10-40-4-239 bin]$
{noformat}
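As a stopgap until the log manager and CLI are fixed, the package being reflected into can be opened explicitly instead of relying on the default relaxed mode. A sketch (the exact way each script picks up JVM options may differ, so treat this as illustrative; the `--add-opens` value matches the `java.base does not "opens java.lang"` error above):

```shell
# Open java.lang to the unnamed module before launching the CLI, so the
# reflective access to java.lang.Thread.threadLocals no longer trips
# strong encapsulation even under --illegal-access=deny.
JAVA_OPTS="--add-opens=java.base/java.lang=ALL-UNNAMED" ./jboss-cli.sh -c
```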
> Scripts throws "illegal reflective access" warning on JDK9
> ----------------------------------------------------------
>
> Key: WFLY-10031
> URL: https://issues.jboss.org/browse/WFLY-10031
> Project: WildFly
> Issue Type: Bug
> Components: Scripts, Security
> Reporter: Marek Kopecký
> Assignee: James Perkins
> Priority: Blocker
>
> *Description of the issue:*
> Scripts throw an "illegal reflective access" warning on JDK 9/10/11.
> *How reproducible:*
> Always with JDK 9, 10, and 11
> *Steps to Reproduce:*
> * ./jboss-cli.sh "echo test"
> * ./add-user.sh -u test4 -p Test123* -s
> * ./appclient.sh -v
> * ./wsconsume.sh
> * ./wsprovide.sh
> * ./domain.sh
> * ./standalone.sh
> *Actual results:*
> {noformat}
> [hudson@rhel7-large-9887 bin]$ java --version
> java 9.0.4
> Java(TM) SE Runtime Environment (build 9.0.4+11)
> Java HotSpot(TM) 64-Bit Server VM (build 9.0.4+11, mixed mode)
> [hudson@rhel7-large-9887 bin]$
> {noformat}
> {noformat}
> [hudson@rhel7-large-9887 bin]$ ./jboss-cli.sh "echo test"
> WARNING: An illegal reflective access operation has occurred
> WARNING: Illegal reflective access by org.jboss.logmanager.LogManager$2 (jar:file:/home/hudson/hudson_workspace/workspace/early-testing-scripts-unix/9774dccf/wildfly/modules/system/layers/base/org/jboss/logmanager/main/jboss-logmanager-2.0.9.Final.jar!/) to constructor java.util.logging.Level$KnownLevel(java.util.logging.Level)
> WARNING: Please consider reporting this to the maintainers of org.jboss.logmanager.LogManager$2
> WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
> WARNING: All illegal access operations will be denied in a future release
> test
> [hudson@rhel7-large-9887 bin]$
> {noformat}
> {noformat}
> [hudson@rhel7-large-9887 bin]$ ./add-user.sh -u test4 -p Test123* -s
> WARNING: An illegal reflective access operation has occurred
> WARNING: Illegal reflective access by org.jboss.logmanager.LogManager$2 (jar:file:/home/hudson/hudson_workspace/workspace/early-testing-scripts-unix/9774dccf/wildfly/modules/system/layers/base/org/jboss/logmanager/main/jboss-logmanager-2.0.9.Final.jar!/) to constructor java.util.logging.Level$KnownLevel(java.util.logging.Level)
> WARNING: Please consider reporting this to the maintainers of org.jboss.logmanager.LogManager$2
> WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
> WARNING: All illegal access operations will be denied in a future release
> [hudson@rhel7-large-9887 bin]$ cat ../standalone/configuration/mgmt-users.properties | tail -n 1
> test4=a95aa9d159b7afe0cc9d3795061551ad
> [hudson@rhel7-large-9887 bin]$
> {noformat}
> {noformat}
> [hudson@rhel7-large-9887 bin]$ ./appclient.sh -v
> WARNING: An illegal reflective access operation has occurred
> WARNING: Illegal reflective access by org.jboss.logmanager.LogManager$2 (jar:file:/home/hudson/hudson_workspace/workspace/early-testing-scripts-unix/9774dccf/wildfly/modules/system/layers/base/org/jboss/logmanager/main/jboss-logmanager-2.0.9.Final.jar!/) to constructor java.util.logging.Level$KnownLevel(java.util.logging.Level)
> WARNING: Please consider reporting this to the maintainers of org.jboss.logmanager.LogManager$2
> WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
> WARNING: All illegal access operations will be denied in a future release
> 06:11:52,205 INFO [org.jboss.modules] (main) JBoss Modules version 1.7.0.Final
> WildFly Full 13.0.0.Alpha1-SNAPSHOT (WildFly Core 4.0.0.Final)
> [hudson@rhel7-large-9887 bin]$
> {noformat}
> *Expected results:*
> No warnings
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
8 years, 1 month
[JBoss JIRA] (WFLY-10068) [Artemis upgrade] Artemis creates and use NODE_MANAGER table even though HA is not configured
by Francesco Nigro (JIRA)
[ https://issues.jboss.org/browse/WFLY-10068?page=com.atlassian.jira.plugin... ]
Francesco Nigro edited comment on WFLY-10068 at 3/22/18 6:14 AM:
-----------------------------------------------------------------
[~mnovak] Yes, so that's the explanation: the file-based and JDBC-based HA (with shared store) work very similarly, and indeed the file-based version does the same thing, i.e.:
- it tries to create the lock file even if HA is not configured
- it tries to acquire the live lock on it
But with a subtle difference: the lock acquisition timeout is infinite and not configurable (i.e. DEFAULT_JOURNAL_LOCK_ACQUISITION_TIMEOUT = -1), and the file lock is usually held exclusively by a single process: in both cases no exception is raised.
The JDBC version instead uses a configurable lock timeout with a default of 60 seconds, which makes the same issue evident.
I see 3 ways to deal with this issue:
# verify that the same thing happens with the file-based version and fix both by checking whether any HA is configured on the broker instance [longer to fix but the optimal solution]
# make the JDBC lock timeout = -1 and no longer configurable [simpler but suboptimal]
# when no JDBC HA is configured, use the file-based version instead [very very dirty]
I will ask [~ataylor] how to check the HA config and which option is better to implement, but we probably need to raise the issue for the file-based version too.
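The behavioral difference described above (an infinite, non-configurable file-lock wait versus a bounded JDBC lock timeout that surfaces as "timed out waiting for lock") can be sketched with plain `java.util.concurrent` locks. This is a hypothetical illustration, not Artemis code; all names are invented:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class LockTimeoutSketch {

    // Simulates a node trying to become live. A negative timeout mirrors the
    // file-based default (DEFAULT_JOURNAL_LOCK_ACQUISITION_TIMEOUT = -1):
    // wait forever, so no exception is ever raised. A positive timeout mirrors
    // the JDBC node manager: a bounded wait whose failure becomes visible.
    public static boolean tryBecomeLive(ReentrantLock liveLock, long timeoutMillis)
            throws InterruptedException {
        if (timeoutMillis < 0) {
            liveLock.lockInterruptibly(); // blocks indefinitely
            return true;
        }
        return liveLock.tryLock(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        ReentrantLock liveLock = new ReentrantLock();
        liveLock.lock(); // another "server" already owns the shared live lock

        Thread secondNode = new Thread(() -> {
            try {
                // With a bounded timeout, the second node fails fast instead
                // of hanging: this is the failure mode seen in the issue.
                boolean acquired = tryBecomeLive(liveLock, 100);
                System.out.println("second node acquired lock: " + acquired);
            } catch (InterruptedException ignored) {
            }
        });
        secondNode.start();
        secondNode.join();
    }
}
```

With the infinite (-1) timeout both nodes would simply block forever on the shared lock, which is why the file-based version hides the same underlying contention.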
> [Artemis upgrade] Artemis creates and use NODE_MANAGER table even though HA is not configured
> ---------------------------------------------------------------------------------------------
>
> Key: WFLY-10068
> URL: https://issues.jboss.org/browse/WFLY-10068
> Project: WildFly
> Issue Type: Bug
> Components: JMS
> Reporter: Miroslav Novak
> Assignee: Jeff Mesnil
> Priority: Blocker
> Attachments: standalone-full-ha.xml
>
>
> If Artemis is not configured to use HA, it should not create and use the journal-node-manager-store-table table, which is normally used by a live/backup pair.
> The problem is that if 2 servers are started and use the same database, the administrator logically does not set journal-node-manager-store-table, since HA is not used. However, the moment both servers are started they use the default values, so both start to use the same table (the default is NODE_MANAGER), and one of them fails with:
> {code}
> 09:20:03,246 INFO [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 74) AMQ221034: Waiting 60000 milliseconds to obtain live lock
> 09:21:03,517 ERROR [org.apache.activemq.artemis.core.server] (ServerService Thread Pool -- 74) AMQ224000: Failure in initialisation: java.lang.Exception: timed out waiting for lock
> at org.apache.activemq.artemis.core.server.impl.jdbc.JdbcNodeManager.lock(JdbcNodeManager.java:240) [artemis-server-2.5.0.jar:2.5.0]
> at org.apache.activemq.artemis.core.server.impl.jdbc.JdbcNodeManager.startLiveNode(JdbcNodeManager.java:346) [artemis-server-2.5.0.jar:2.5.0]
> at org.apache.activemq.artemis.core.server.impl.LiveOnlyActivation.run(LiveOnlyActivation.java:65) [artemis-server-2.5.0.jar:2.5.0]
> at org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.internalStart(ActiveMQServerImpl.java:522) [artemis-server-2.5.0.jar:2.5.0]
> at org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.start(ActiveMQServerImpl.java:461) [artemis-server-2.5.0.jar:2.5.0]
> at org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.start(JMSServerManagerImpl.java:376) [artemis-jms-server-2.5.0.jar:2.5.0]
> at org.wildfly.extension.messaging.activemq.jms.JMSService.doStart(JMSService.java:205) [wildfly-messaging-activemq-13.0.0.Alpha1-SNAPSHOT.jar:13.0.0.Alpha1-SNAPSHOT]
> at org.wildfly.extension.messaging.activemq.jms.JMSService.access$000(JMSService.java:64) [wildfly-messaging-activemq-13.0.0.Alpha1-SNAPSHOT.jar:13.0.0.Alpha1-SNAPSHOT]
> at org.wildfly.extension.messaging.activemq.jms.JMSService$1.run(JMSService.java:99) [wildfly-messaging-activemq-13.0.0.Alpha1-SNAPSHOT.jar:13.0.0.Alpha1-SNAPSHOT]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_131]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0_131]
> at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35) [jboss-threads-2.3.1.Final.jar:2.3.1.Final]
> at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1985) [jboss-threads-2.3.1.Final.jar:2.3.1.Final]
> at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1487) [jboss-threads-2.3.1.Final.jar:2.3.1.Final]
> at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1378) [jboss-threads-2.3.1.Final.jar:2.3.1.Final]
> at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_131]
> at org.jboss.threads.JBossThread.run(JBossThread.java:485) [jboss-threads-2.3.1.Final.jar:2.3.1.Final]
> 09:21:03,520 ERROR [org.jboss.msc.service.fail] (ServerService Thread Pool -- 74) MSC000001: Failed to start service jboss.messaging-activemq.default.jms.manager: org.jboss.msc.service.StartException in service jboss.messaging-activemq.default.jms.manager: java.lang.Exception: timed out waiting for lock
> at org.wildfly.extension.messaging.activemq.jms.JMSService.lambda$doStart$0(JMSService.java:141) [wildfly-messaging-activemq-13.0.0.Alpha1-SNAPSHOT.jar:13.0.0.Alpha1-SNAPSHOT]
> at org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.callActivationFailureListeners(ActiveMQServerImpl.java:1908) [artemis-server-2.5.0.jar:2.5.0]
> at org.apache.activemq.artemis.core.server.impl.LiveOnlyActivation.run(LiveOnlyActivation.java:82) [artemis-server-2.5.0.jar:2.5.0]
> at org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.internalStart(ActiveMQServerImpl.java:522) [artemis-server-2.5.0.jar:2.5.0]
> at org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.start(ActiveMQServerImpl.java:461) [artemis-server-2.5.0.jar:2.5.0]
> at org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.start(JMSServerManagerImpl.java:376) [artemis-jms-server-2.5.0.jar:2.5.0]
> at org.wildfly.extension.messaging.activemq.jms.JMSService.doStart(JMSService.java:205) [wildfly-messaging-activemq-13.0.0.Alpha1-SNAPSHOT.jar:13.0.0.Alpha1-SNAPSHOT]
> at org.wildfly.extension.messaging.activemq.jms.JMSService.access$000(JMSService.java:64) [wildfly-messaging-activemq-13.0.0.Alpha1-SNAPSHOT.jar:13.0.0.Alpha1-SNAPSHOT]
> at org.wildfly.extension.messaging.activemq.jms.JMSService$1.run(JMSService.java:99) [wildfly-messaging-activemq-13.0.0.Alpha1-SNAPSHOT.jar:13.0.0.Alpha1-SNAPSHOT]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_131]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0_131]
> at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35) [jboss-threads-2.3.1.Final.jar:2.3.1.Final]
> at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1985) [jboss-threads-2.3.1.Final.jar:2.3.1.Final]
> at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1487) [jboss-threads-2.3.1.Final.jar:2.3.1.Final]
> at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1378) [jboss-threads-2.3.1.Final.jar:2.3.1.Final]
> at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_131]
> at org.jboss.threads.JBossThread.run(JBossThread.java:485) [jboss-threads-2.3.1.Final.jar:2.3.1.Final]
> Caused by: java.lang.Exception: timed out waiting for lock
> at org.apache.activemq.artemis.core.server.impl.jdbc.JdbcNodeManager.lock(JdbcNodeManager.java:240) [artemis-server-2.5.0.jar:2.5.0]
> at org.apache.activemq.artemis.core.server.impl.jdbc.JdbcNodeManager.startLiveNode(JdbcNodeManager.java:346) [artemis-server-2.5.0.jar:2.5.0]
> at org.apache.activemq.artemis.core.server.impl.LiveOnlyActivation.run(LiveOnlyActivation.java:65) [artemis-server-2.5.0.jar:2.5.0]
> ... 14 more
> {code}
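Until the underlying behavior is fixed, one possible workaround is to give each non-HA server its own node-manager table so the two live-only instances do not contend for the same lock row. A sketch using the WildFly management CLI (the attribute name is taken from this issue; the resource path and table names are illustrative and depend on the actual configuration):

```shell
# On server A:
/subsystem=messaging-activemq/server=default:write-attribute(name=journal-node-manager-store-table, value=NODE_MANAGER_A)
# On server B:
/subsystem=messaging-activemq/server=default:write-attribute(name=journal-node-manager-store-table, value=NODE_MANAGER_B)
# Reload for the change to take effect:
:reload
```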
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
8 years, 1 month
[JBoss JIRA] (WFLY-10072) Stale session data on failover
by Romain Pelisse (JIRA)
[ https://issues.jboss.org/browse/WFLY-10072?page=com.atlassian.jira.plugin... ]
Romain Pelisse moved JBEAP-6225 to WFLY-10072:
----------------------------------------------
Project: WildFly (was: JBoss Enterprise Application Platform)
Key: WFLY-10072 (was: JBEAP-6225)
Workflow: GIT Pull Request workflow (was: CDW with loose statuses v1)
Component/s: Clustering
(was: Clustering)
Target Release: (was: 7.2.0.GA)
Affects Version/s: (was: 7.1.0.DR5)
> Stale session data on failover
> ------------------------------
>
> Key: WFLY-10072
> URL: https://issues.jboss.org/browse/WFLY-10072
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Reporter: Daniel Čihák
> Assignee: Paul Ferraro
> Priority: Minor
>
> Occurred on the client. Affected scenarios:
> eap-7x-failover-ejb-ejbservlet-shutdown-dist-async
> eap-7x-failover-http-granular-shutdown-repl-sync
> eap-7x-failover-http-session-jvmkill-dist-sync
> eap-7x-failover-http-session-jvmkill-repl-sync
> eap-7x-failover-http-session-shutdown-repl-sync-haproxy
> eap-7x-failover-http-session-undeploy-dist-sync
> Client log stacktrace:
> {code}
> 2016/09/21 17:28:51:294 EDT [WARN ][Runner - 134] HOST dev220.mw.lab.eng.bos.redhat.com:rootProcess:c - Error sampling data: <org.jboss.smartfrog.loaddriver.RequestProcessingException: Stale session data received. Expected 42, received 41, Runner: 134>
> org.jboss.smartfrog.loaddriver.RequestProcessingException: Stale session data received. Expected 42, received 41, Runner: 134
> at org.jboss.smartfrog.loaddriver.http.AbstractSerialNumberValidatorFactoryImpl$SerialNumberValidator.processRequest(AbstractSerialNumberValidatorFactoryImpl.java:133)
> at org.jboss.smartfrog.loaddriver.CompoundRequestProcessorFactoryImpl$CompoundRequestProcessor.processRequest(CompoundRequestProcessorFactoryImpl.java:52)
> at org.jboss.smartfrog.loaddriver.Runner.run(Runner.java:103)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Link to client log:
> http://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/eap-7x-failover-http-...
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
8 years, 1 month
[JBoss JIRA] (DROOLS-2409) Memory Leak in sliding window
by Mario Fusco (JIRA)
[ https://issues.jboss.org/browse/DROOLS-2409?page=com.atlassian.jira.plugi... ]
Mario Fusco edited comment on DROOLS-2409 at 3/22/18 5:57 AM:
--------------------------------------------------------------
I gave your reproducer a run and I see your point. The problem there is that inserting events in such a tight loop overwhelms the engine, never giving it a chance to get any work done. In fact, in this case not only is the expiration mechanism not working, but the accumulate rule is not firing at all, and your reproducer generates output like the following:
{code}
3500000 inserted into list of 2702077
3510000 inserted into list of 2712077
3520000 inserted into list of 2722077
3530000 inserted into list of 2732077
3540000 inserted into list of 2742077
3550000 inserted into list of 2752077
3560000 inserted into list of 2762077
3570000 inserted into list of 2772077
3580000 inserted into list of 2782077
3590000 inserted into list of 2792077
3600000 inserted into list of 2802077
3610000 inserted into list of 2812077
{code}
However, I believe that your use case is quite unrealistic. I cannot imagine any real-world application that keeps inserting events forever in a tight, all-in-memory for loop as in your example. More commonly those events will come from an event bus or a similar data source, which at best will be an order of magnitude slower than your for loop. I expect that Drools will work without any problem in that situation. Indeed, just adding a 1ms sleep every 20 inserted events:
{code}
if (i % 20 == 0) {
    Thread.sleep(1L);
}
{code}
gives the engine enough time to do its job, producing the following output:
{code}
3500000 inserted into list of 20584
zzZZzz: 49.04286510590858
zzZZzz: 49.040446927374305
zzZZzz: 49.04486607142857
3510000 inserted into list of 15737
zzZZzz: 49.578387458006716
zzZZzz: 49.5359217877095
zzZZzz: 49.540670391061454
3520000 inserted into list of 11409
3530000 inserted into list of 21409
zzZZzz: 49.26767676767677
zzZZzz: 49.34880410022779
zzZZzz: 49.35626423690205
3540000 inserted into list of 16670
zzZZzz: 49.279316338354576
zzZZzz: 49.29466357308585
zzZZzz: 49.29895591647332
3550000 inserted into list of 12099
3560000 inserted into list of 22099
zzZZzz: 49.537780269058295
zzZZzz: 49.60458612975391
zzZZzz: 49.60173184357542
3570000 inserted into list of 18351
zzZZzz: 49.64232062780269
zzZZzz: 49.62682885811985
zzZZzz: 49.62618287698079
3580000 inserted into list of 15067
zzZZzz: 49.538758389261744
zzZZzz: 49.51251396648045
zzZZzz: 49.503407821229054
3590000 inserted into list of 12190
3600000 inserted into list of 22190
zzZZzz: 49.426733622975796
zzZZzz: 49.446553672316384
zzZZzz: 49.44441309255079
3610000 inserted into list of 19332
{code}
As you can see, in this case the engine now has time to fire the rule, and by doing so it also evicts the expired objects from the working memory, as expected.
I'm not saying that we don't have a problem here. I'll keep investigating your test case to check whether I can make this work more reliably even in this extreme scenario. Any further feedback is welcome.
> Memory Leak in sliding window
> -----------------------------
>
> Key: DROOLS-2409
> URL: https://issues.jboss.org/browse/DROOLS-2409
> Project: Drools
> Issue Type: Feature Request
> Components: core engine
> Affects Versions: 7.6.0.Final
> Environment: Mac with Java 1.8 using Spring Boot (see attached demo project)
> Reporter: Mike Baranski
> Assignee: Mario Fusco
> Priority: Critical
> Attachments: complete.tar.gz
>
>
> See attached project.
> `gradle test` causes an out of memory exception on my machine after ~2.6 million events. Events should be released for GC after 1 second, but that does not appear to be happening.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
8 years, 1 month