[keycloak-user] [keycloak-dev] HA mode with JDBC_PING shows warning in the logs after migration to 4.8.3 from 3.4.3

abhishek raghav abhi.raghav007 at gmail.com
Tue May 7 05:17:54 EDT 2019


Hi Sebastian,

Thanks for the response. I got it working with a clue in your last answer.
I am using docker based container orchestration framework which when I am
scaling down the service is actually force killing the docker containers
(something like kill -9), instead of a graceful shut down. So then, I
changed my docker-entrypoint (I am using my own docker-entrypoint and not
relying on default one which comes with keycloak image) and added *exec*
just before the start command. something like this as below -

exec /opt/jboss/keycloak/bin/standalone.sh --server-config
standalone-ha.xml -Djboss.bind.address=${privateaddress}.....
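
For completeness, the relevant part of my entrypoint now looks roughly like
this (just a sketch; the variable names are from my setup):

    #!/bin/bash
    # docker-entrypoint.sh (sketch). "exec" replaces this shell with the
    # Keycloak process, so PID 1 receives the SIGTERM sent on scale-down
    # and the server can shut down gracefully instead of being force-killed.
    set -e

    # ... environment setup, config templating, etc. ...

    exec /opt/jboss/keycloak/bin/standalone.sh \
      --server-config standalone-ha.xml \
      -Djboss.bind.address="${privateaddress}" \
      "$@"    # remaining -D options elided above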

After adding *exec*, I noticed that keycloak shuts down gracefully when I
scale down the nodes in the cluster, and the respective entries are also
cleared from the JGROUPSPING table, just as you mentioned.

So it looks like there should not be any more stale entries unless the
cluster crashes a lot.
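
If entries do pile up again after hard crashes, my plan is a manual purge
before a full cluster restart - a blunt sketch (assuming PostgreSQL, the
default JGROUPSPING table name, and a hypothetical $KEYCLOAK_DB_URL; only
safe while no node is running, since it also removes live members from
discovery):

    # purge ALL discovery rows; run only with the whole cluster stopped
    psql "$KEYCLOAK_DB_URL" -c 'DELETE FROM JGROUPSPING;'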

Thanks a lot for your support.

- Best Regards
  Abhishek

On Mon, May 6, 2019 at 12:48 PM Sebastian Laskawiec <slaskawi at redhat.com>
wrote:

> Adding +Bela Ban <bban at redhat.com>, just in case :)
>
> Currently, JDBC_PING extends FILE_PING, which has some properties that
> work similarly to `clear_table_on_view_change`:
> - remove_old_coords_on_view_change - if true, on a view change the new
> coordinator removes files from old coordinators
> - remove_all_data_on_view_change - if true, on a view change the new
> coordinator removes all data except its own
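>
> If you want to experiment with them, something along these lines should
> work in an offline CLI session (just a sketch - I am assuming your
> discovery protocol lives in a stack named "tcp"; adjust to your
> configuration):
>
>     cat > /tmp/ping-cleanup.cli <<'EOF'
>     embed-server --server-config=standalone-ha.xml --std-out=echo
>     /subsystem=jgroups/stack=tcp/protocol=JDBC_PING/property=remove_old_coords_on_view_change:add(value=true)
>     /subsystem=jgroups/stack=tcp/protocol=JDBC_PING/property=remove_all_data_on_view_change:add(value=true)
>     stop-embedded-server
>     EOF
>     /opt/jboss/keycloak/bin/jboss-cli.sh --file=/tmp/ping-cleanup.cli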
>
> It's also worth mentioning that the coordinator clears the table when
> shutting down (more specifically, in `JDBC_PING#stop`). So unless your
> cluster crashes a lot (by crashing I mean being killed with `kill -9`,
> for example), you should be fine.
>
> Thanks,
> Seb
>
> On Mon, Apr 29, 2019 at 9:44 AM abhishek raghav <abhi.raghav007 at gmail.com>
> wrote:
>
>> Thanks Sebastian.
>>
>> I tried running the same setup with Keycloak 5.0.0 and did not see any of
>> the errors I reported in my first email. This was definitely a Wildfly
>> issue and not a Keycloak one.
>>
>> Regarding my 2nd question, i.e. support for the
>> "clear_table_on_view_change" property: I see that JGroups has removed
>> support for it. So if the JGROUPSPING table has a lot of stale entries
>> while Keycloak is booting, each Keycloak node will try to JOIN with all
>> the entries already present in the JGROUPSPING table, and the service
>> will take longer to start. If that takes more than 300s, Keycloak does
>> not start and reports a timeout error.
>> This scenario is quite likely in cloud environments, since Keycloak nodes
>> can start on any available host/IP and the number of nodes is not fixed.
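>>
>> One partial mitigation I am experimenting with is raising WildFly's 300s
>> boot timeout so a node survives a slow JOIN (the system property is
>> standard WildFly; the value of 600 here is arbitrary):
>>
>>     exec /opt/jboss/keycloak/bin/standalone.sh \
>>       --server-config standalone-ha.xml \
>>       -Djboss.as.management.blocking.timeout=600    # seconds, default 300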
>>
>> Can you suggest any workaround to fix this?
>>
>> *- Best Regards*
>>    Abhishek Raghav
>>
>> On Fri, Apr 26, 2019 at 6:11 PM Sebastian Laskawiec <slaskawi at redhat.com>
>> wrote:
>>
>>> There were a bunch of fixes to JGroups a while ago, including changes in
>>> JDBC_PING.
>>>
>>> Could you please rerun your setup with Keycloak >= 5.0.0? I believe some
>>> of the issues (or maybe even all of them) should be fixed.
>>>
>>> On Thu, Apr 25, 2019 at 7:19 PM abhishek raghav <
>>> abhi.raghav007 at gmail.com> wrote:
>>>
>>>> Hi
>>>>
>>>> After migrating the keycloak HA configuration from 3.4.3.Final to
>>>> 4.8.3.Final, I am seeing some WARNINGs on one of the keycloak nodes
>>>> immediately after keycloak is started with 2 nodes. This occurs every
>>>> time the cluster is scaled up, or whenever infinispan tries to update
>>>> the cluster member list.
>>>> I am using JDBC_PING to achieve clustering in keycloak, set up roughly
>>>> as sketched below.
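>>>>
>>>> (A sketch of the CLI script I run at image build time; the stack name
>>>> "tcp", the MPING removal, and the KeycloakDS datasource reflect my
>>>> setup and may differ in yours:)
>>>>
>>>>     cat > /tmp/jdbc-ping.cli <<'EOF'
>>>>     embed-server --server-config=standalone-ha.xml --std-out=echo
>>>>     # replace the default multicast discovery with JDBC_PING at the
>>>>     # front of the stack, backed by the Keycloak datasource
>>>>     /subsystem=jgroups/stack=tcp/protocol=MPING:remove()
>>>>     /subsystem=jgroups/stack=tcp/protocol=JDBC_PING:add(add-index=0)
>>>>     /subsystem=jgroups/stack=tcp/protocol=JDBC_PING/property=datasource_jndi_name:add(value=java:jboss/datasources/KeycloakDS)
>>>>     stop-embedded-server
>>>>     EOF
>>>>     /opt/jboss/keycloak/bin/jboss-cli.sh --file=/tmp/jdbc-ping.cli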
>>>>
>>>> Below is the stacktrace -
>>>>
>>>> 2019-04-24 12:20:43,687 WARN
>>>> [org.infinispan.topology.ClusterTopologyManagerImpl]
>>>> (transport-thread--p18-t2) [dcidqdcosagent08] KEYCLOAK DEV 1.5.RC
>>>> ISPN000197: Error updating cluster member list:
>>>> org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out
>>>> waiting for responses for request 1 from dcidqdcosagent02
>>>>     at org.infinispan.remoting.transport.impl.MultiTargetRequest.onTimeout(MultiTargetRequest.java:167)
>>>>     at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:87)
>>>>     at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:22)
>>>>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>>>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>>>>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>>>>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>     at java.lang.Thread.run(Thread.java:748)
>>>>     Suppressed: org.infinispan.util.logging.TraceException
>>>>         at org.infinispan.remoting.transport.Transport.invokeRemotely(Transport.java:75)
>>>>         at org.infinispan.topology.ClusterTopologyManagerImpl.confirmMembersAvailable(ClusterTopologyManagerImpl.java:525)
>>>>         at org.infinispan.topology.ClusterTopologyManagerImpl.updateCacheMembers(ClusterTopologyManagerImpl.java:508)
>>>>
>>>> After searching, I did not find anyone reporting this error against
>>>> keycloak, but there is a similar bug reported against WildFly 14,
>>>> categorized there as a blocker, and already fixed in WildFly 15:
>>>> https://issues.jboss.org/browse/WFLY-10736?attachmentViewMode=list
>>>>
>>>> Since keycloak 4.8 is also based on WildFly 14, these WARNINGs could be
>>>> caused by that blocker in WildFly 14.
>>>>
>>>> What should I do to get rid of this error? Is this really a problem in
>>>> keycloak 4.8.3.Final? Did anyone notice any such issue while running
>>>> keycloak 4.8.3 in HA mode?
>>>> Is there a workaround to fix this?
>>>>
>>>>
>>>> One more thing we noticed concerns a property of the JDBC_PING protocol
>>>> that we use in our 3.4.3 setup, "clear_table_on_view_change", which is
>>>> no longer supported in 4.8. As a result, the JGROUPSPING table fills up
>>>> with a lot of stale entries. Is there a workaround in 4.8 as well to
>>>> clear the table after a view change?
>>>>
>>>> Thanks
>>>> Abhishek
>>>> _______________________________________________
>>>> keycloak-dev mailing list
>>>> keycloak-dev at lists.jboss.org
>>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev
>>>>
>>>

