[jboss-jira] [JBoss JIRA] (JGRP-2396) increasing networkdata, cpu and heap
Rob van der Boom (Jira)
issues at jboss.org
Tue Nov 12 03:10:00 EST 2019
[ https://issues.jboss.org/browse/JGRP-2396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13811078#comment-13811078 ]
Rob van der Boom commented on JGRP-2396:
----------------------------------------
OK, thanks. Sorry to bother you; maybe you are right and our expectations about CPU and memory usage are simply different. We also have a Hazelcast cluster environment, and it uses a factor of 10 less memory and CPU while holding more in-memory entities (both in types and in numbers).
We do now see in all our graphs that memory grows more slowly, and that G1 has been consuming much less CPU since we upgraded from Keycloak 5 to 7 two weeks ago.
I think it is best to raise our alarm thresholds (CPU 1.5 instead of 1) and let the system run for a longer time, to see whether things stay (more) stable.
If it is still increasing, we will indeed use a profiler to get more details.
Let's put it on hold for the moment then; sorry for taking up your time.
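If profiling is picked up later, a minimal sketch of how comparable snapshots could be captured from the Keycloak JVM before each pod restart (the pid and file names are placeholders; it assumes the JDK's jmap and jstack tools are available in the pod):

```shell
#!/bin/sh
# Sketch only: take a heap dump and a thread dump from a running JVM,
# so two dumps captured a few days apart can be compared in a profiler.
# jmap and jstack ship with the JDK; the pid below is a placeholder.
capture_jvm_dumps() {
    pid="$1"
    stamp="$(date +%Y%m%d-%H%M%S)"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        # Print the commands instead of running them (no live JVM needed).
        echo "jmap -dump:live,format=b,file=heap-${stamp}.hprof ${pid}"
        echo "jstack -l ${pid} > jstack-${stamp}.txt"
    else
        jmap -dump:live,format=b,file="heap-${stamp}.hprof" "$pid"
        jstack -l "$pid" > "jstack-${stamp}.txt"
    fi
}

# Example (dry run with a placeholder pid):
DRY_RUN=1 capture_jvm_dumps 12345
```

Diffing the two `.hprof` files in a heap analyzer would show which object types account for the growth between restarts.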
> increasing networkdata, cpu and heap
> ------------------------------------
>
> Key: JGRP-2396
> URL: https://issues.jboss.org/browse/JGRP-2396
> Project: JGroups
> Issue Type: Bug
> Affects Versions: 4.0.19
> Reporter: Rob van der Boom
> Assignee: Bela Ban
> Priority: Major
> Attachments: Schermafbeelding 2019-11-08 om 15.43.04.png, Schermafbeelding 2019-11-08 om 15.44.52.png, Schermafbeelding 2019-11-08 om 16.00.48.png, jstack-production-pod0.dump, standalone-ha.xml
>
>
> Hey,
> We have a Keycloak (SSO) setup, version 7.0.1, running in Kubernetes on AWS.
> It is built on WildFly 17, Infinispan 9.4 and JGroups 4.0.19.
> We have 3 pods running in standalone-ha mode, with the caches configured as distributed across all 3 nodes (so equivalent to replicated).
> ISSUE:
> We see slow growth in network statistics, heap and CPU, while the number of (cached) sessions in Keycloak remains almost stable.
> The CPU growth is caused by the TransferQueueBundler ("TQ bundler") thread, which also explains the growth in network data. It looks like this is causing a memory leak as well.
> Every 5 days we have to restart the pods, and then everything resets to a very low level, including the heap, while all sessions are still valid and cached.
> The only issue I could find that may be related to this is:
> https://issues.jboss.org/browse/JGRP-2382?jql=project%20%3D%20JGRP%20AND%20text%20~%20leak
> Could this be the same issue, and does it also cause increasing network traffic and CPU (that is why we have to restart; the heap still has plenty of space left)?
> And if so, what is the status of that issue, since for us this is a major one?
> We also had this issue in Keycloak 5 (WildFly 15); that is why we upgraded to the latest available version.
--
This message was sent by Atlassian Jira
(v7.13.8#713008)