[jboss-jira] [JBoss JIRA] (JGRP-2396) increasing networkdata, cpu and heap

Rob van der Boom (Jira) issues at jboss.org
Fri Nov 8 10:09:00 EST 2019


    [ https://issues.jboss.org/browse/JGRP-2396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13810160#comment-13810160 ] 

Rob van der Boom edited comment on JGRP-2396 at 11/8/19 10:08 AM:
------------------------------------------------------------------

ok... 
Dump will follow Monday, as I have to be careful that there is enough disk space for the large dump file on the pod...

- As you can see, the thread count (JVM) is very stable: always a flat line, at any time.
- Memory grows to a certain level, then G1 kicks in. I guess this is the local cache; this cycle repeats at any time, also during the night when nothing happens. I do not know why it also grows at night and is brought down by G1 again, the same as during the daytime when there is lots of user activity. Nevertheless, I do not see anything worrying here.
- The CPU grows above 1.0 CPU towards 1.5 (Kubernetes).
- The Kubernetes pods show endless growth of CPU and network stats. This is only 3 days; it keeps growing the same way until performance drops so much that we have to restart.
- See standalone-ha.xml for the configuration (KUBE_PING and other settings).
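For context, the bundler implementation suspected in this issue can be switched via a transport property in the WildFly JGroups subsystem. A minimal sketch of the relevant fragment of standalone-ha.xml, assuming a stock TCP stack (the stack name, socket-binding name, and namespace version are placeholders, not the reporter's actual configuration):

```xml
<stack name="tcp">
    <transport type="TCP" socket-binding="jgroups-tcp">
        <!-- Diagnostic only: switch away from the default
             transfer-queue bundler to rule the TQ bundler in or out.
             Valid values in JGroups 4.x include "transfer-queue",
             "sender-sends" and "no-bundler". -->
        <property name="bundler_type">no-bundler</property>
    </transport>
    <!-- remaining protocols (KUBE_PING etc.) unchanged -->
</stack>
```

If CPU and network growth stop after such a change, that would point at the bundler; if they continue, the cause lies elsewhere in the stack.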

Thanks.



> increasing networkdata, cpu and heap
> ------------------------------------
>
>                 Key: JGRP-2396
>                 URL: https://issues.jboss.org/browse/JGRP-2396
>             Project: JGroups
>          Issue Type: Bug
>    Affects Versions: 4.0.19
>            Reporter: Rob van der Boom
>            Assignee: Bela Ban
>            Priority: Major
>         Attachments: Schermafbeelding 2019-11-08 om 15.43.04.png, Schermafbeelding 2019-11-08 om 15.44.52.png, Schermafbeelding 2019-11-08 om 16.00.48.png, standalone-ha.xml
>
>
> Hey,
> we have a Keycloak (SSO) setup, version 7.0.1, running in Kubernetes on AWS.
> It is built on WildFly 17, Infinispan 9.4 and JGroups 4.0.19.
> We have 3 pods running in standalone-ha with the cache set up in distributed mode (all 3 nodes, so equivalent to replication).
> ISSUE:
> We see slow growth in network statistics, heap and CPU, while the number of (cached) sessions in Keycloak remains almost stable.
> The CPU growth is caused by the TQ bundler, which explains the network data growth. It looks like this is also causing a memory leak.
> Every 5 days we have to restart the pods, and then everything resets to a very low level, including the heap. This while all sessions are still valid and cached.
> The only issue I could find that may be related to this is:
> https://issues.jboss.org/browse/JGRP-2382?jql=project%20%3D%20JGRP%20AND%20text%20~%20leak
> Could this be the same issue, and does it also cause increasing network and CPU usage (since that is why we have to restart; the heap has plenty of space left!)?
> And if so, what is the status of that issue, since for us it is a major one?
> We also had this issue already in Keycloak 5 (WildFly 15); that is why we upgraded to the latest available version.



--
This message was sent by Atlassian Jira
(v7.13.8#713008)

