Sorry, here are the error logs:
19:28:18,508 ERROR [stderr] (Incoming-1,ee,a1-dev-ksa049c) Exception in thread "INT-6,ee,a1-dev-ksa049c" java.lang.OutOfMemoryError: GC overhead limit exceeded
19:28:23,544 ERROR [stderr] (Incoming-1,ee,a1-dev-ksa049c) Exception in thread "Incoming-1,ee,a1-dev-ksa049c" java.lang.OutOfMemoryError: GC overhead limit exceeded
[interleaved stderr output: further "java.lang.OutOfMemoryError: GC overhead limit exceeded" reports from other threads, names garbled]
19:28:28,099 ERROR [org.jgroups.protocols.UNICAST3] (OOB-50,ee,a1-dev-ksa049c) JGRP000039: a1-dev-ksa049c: failed to deliver OOB message [dst: a1-dev-ksa049c, src: a1-dev-ksa049b (4 headers), size=8602 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER]: java.lang.OutOfMemoryError: GC overhead limit exceeded
19:28:33,187 ERROR [org.jgroups.protocols.UNICAST3] (OOB-58,ee,a1-dev-ksa049c) JGRP000039: a1-dev-ksa049c: failed to deliver OOB message [dst: a1-dev-ksa049c, src: a1-dev-ksa049b (4 headers), size=7 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER]: java.lang.OutOfMemoryError: GC overhead limit exceeded
19:28:01,882 WARN [org.jgroups.protocols.pbcast.NAKACK2] (INT-8,ee,a1-dev-ksa049c) JGRP000041: a1-dev-ksa049c: message a1-dev-ksa049c::50545 not found in retransmission table
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "pool-8-thread-1"
19:29:06,194 ERROR [org.jgroups.util.TimeScheduler3] (Timer runner-1,ee,a1-dev-ksa049c) failed submitting task to thread pool: java.lang.OutOfMemoryError: GC overhead limit exceeded
19:29:11,650 ERROR [org.jgroups.util.TimeScheduler3] (Timer-7,ee,a1-dev-ksa049c) failed executing task NAKACK2: RetransmitTask (interval=1000 ms): java.lang.OutOfMemoryError: GC overhead limit exceeded
19:29:04,656 ERROR [org.jgroups.protocols.UNICAST3] (OOB-47,ee,a1-dev-ksa049c) JGRP000039: a1-dev-ksa049c: failed to deliver OOB message [dst: a1-dev-ksa049c, src: a1-dev-ksa049a (4 headers), size=8503 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER]: java.lang.OutOfMemoryError: GC overhead limit exceeded
19:29:17,055 ERROR [stderr] (INT-7,ee,a1-dev-ksa049c) Exception in thread "pool-8-thread-1" Exception in thread "INT-7,ee,a1-dev-ksa049c" java.lang.OutOfMemoryError: GC overhead limit exceeded
19:29:19,497 ERROR [org.jgroups.protocols.TCP] (TransferQueueBundler,ee,a1-dev-ksa049c) JGRP000034: a1-dev-ksa049c: failure sending message to a1-dev-ksa049b: java.lang.NullPointerException
19:29:27,964 ERROR [org.jgroups.util.TimeScheduler3] (Timer-7,ee,a1-dev-ksa049c) failed executing task UNICAST3: RetransmitTask (interval=500 ms): java.lang.OutOfMemoryError: GC overhead limit exceeded
19:29:40,151 ERROR [org.jgroups.util.TimeScheduler3] (Timer-7,ee,a1-dev-ksa049c) failed executing task FD: Monitor (timeout=3000ms): java.lang.OutOfMemoryError: GC overhead limit exceeded
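For anyone digging into this, here is a rough sketch of the standard JDK commands for checking GC pressure and heap contents on an affected node; the process lookup below is only illustrative and may need adjusting:

    # find the WildFly/Keycloak server process (illustrative; adjust the pattern to your setup)
    PID=$(pgrep -f 'jboss.*standalone' | head -n 1)
    # GC utilisation sampled every 5 seconds; an old generation stuck near 100% with a
    # constantly climbing FGC count matches "GC overhead limit exceeded"
    jstat -gcutil "$PID" 5000
    # histogram of live objects, to see which classes (e.g. cached sessions) dominate the heap
    jmap -histo:live "$PID" | head -n 30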
On Tue, Nov 22, 2016 at 1:45 PM, robinfernandes . <robin1233(a)gmail.com>
wrote:
Hi Guys,
I added the eviction policy to the standalone-ha.xml and it looks like this:
<subsystem xmlns="urn:jboss:domain:infinispan:4.0">
    <cache-container name="keycloak" jndi-name="infinispan/Keycloak">
        <transport lock-timeout="60000"/>
        <invalidation-cache name="realms" mode="SYNC"/>
        <invalidation-cache name="users" mode="SYNC">
            <eviction max-entries="10000" strategy="LRU"/>
        </invalidation-cache>
        <distributed-cache name="sessions" mode="SYNC" owners="3"/>
        <distributed-cache name="offlineSessions" mode="SYNC" owners="3"/>
        <distributed-cache name="loginFailures" mode="SYNC" owners="1"/>
        <replicated-cache name="work" mode="SYNC"/>
        <local-cache name="realmVersions">
            <transaction mode="BATCH" locking="PESSIMISTIC"/>
            <eviction max-entries="10000" strategy="LRU"/>
        </local-cache>
    </cache-container>
</subsystem>
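To double-check that the eviction settings are actually picked up at runtime, the configuration can be read back over the management CLI; this is only a sketch, and the exact resource path may differ between WildFly versions:

    # read the keycloak cache-container back from a running node (sketch; path may vary by version)
    bin/jboss-cli.sh --connect \
        --command="/subsystem=infinispan/cache-container=keycloak:read-resource(recursive=true)"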
I ran some tests doing concurrent logins by spawning multiple threads, and the Keycloak node still went down once there were around ~170K active sessions. I took a thread dump and a heap dump as well, if that is helpful. We have -Xms set to 512m and -Xmx set to 2048m.
Would you recommend a higher heap size as well?
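For reference, the heap is set via JAVA_OPTS in bin/standalone.conf; a minimal sketch of the current values and one possible next step (the 4096m figure is only an illustration, not something recommended in this thread):

    # bin/standalone.conf -- sketch only; exact file and values depend on the installation
    # current heap settings on our nodes:
    #   JAVA_OPTS="-Xms512m -Xmx2048m"
    # possible next run: larger heap, plus an automatic heap dump if the OOM happens again
    JAVA_OPTS="-Xms2048m -Xmx4096m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/tmp"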
Thanks,
Robin
On Wed, Oct 12, 2016 at 10:03 AM, Stian Thorgersen <sthorger(a)redhat.com>
wrote:
> Could be https://issues.jboss.org/browse/KEYCLOAK-3202; if so, it's not
> fixed in 1.9.8. There's a workaround though: you can set <eviction
> max-entries="10000" strategy="LRU"/> for the realmVersions cache. Also,
> make sure you have a sane max-entries on the users cache.
>
> On 11 October 2016 at 15:33, Bill Burke <bburke(a)redhat.com> wrote:
>
> > I believe we fixed some cache leakage problems sometime between 1.9.1
> > and 1.9.8. You'll have to search JIRA. I strongly suggest you upgrade
> > to 1.9.8. We made a huge number of stability, performance, and bug fixes
> > between 1.9.1 and 1.9.8 to get Keycloak ready for product. RH-SSO is
> > based on Keycloak 1.9.8.
> >
> >
> > On 10/10/16 11:21 AM, robinfernandes . wrote:
> > > Hi,
> > >
> > > We are using Keycloak 1.9.2.Final and have a cluster with an HAProxy and
> > > 3 Keycloak nodes behind it.
> > > For the first time in about 4-6 months we received "Java heap space"
> > > OutOfMemoryError errors and the nodes just went down.
> > > We had around 100k users as well as 35k active connections at the time.
> > > We have around 512MB of heap space assigned.
> > >
> > > I am not able to reproduce it after restarting the nodes.
> > >
> > > Is there any reason why this could happen?
> > >
> > >