Guess what: our JDBC_PING configuration not working with 4.4.0.Final is exactly what I am
currently working on.
Thanks Thomas!
Best regards,
Sebastian
Mit freundlichen Grüßen / Best regards
Dr.-Ing. Sebastian Schuster
Engineering and Support (INST/ESY1)
Bosch Software Innovations GmbH | Ullsteinstr. 128 | 12109 Berlin | GERMANY |
www.bosch-si.com
Tel. +49 30 726112-485 | Fax +49 30 726112-100 |
Sebastian.Schuster@bosch-si.com
Registered office: Berlin, Register court: Amtsgericht Charlottenburg; HRB 148411 B
Chairman of the supervisory board: Dr.-Ing. Thorsten Lücke; Managing directors: Dr. Stefan Ferber,
Michael Hahn
From: Thomas Darimont <thomas.darimont(a)googlemail.com>
Sent: Wednesday, 12 September 2018 11:41
To: Schuster Sebastian (INST-CSS/BSV-OS) <Sebastian.Schuster(a)bosch-si.com>
Cc: Sebastian Laskawiec <slaskawi(a)redhat.com>; keycloak-dev
<keycloak-dev(a)lists.jboss.org>; Bela Ban <bban(a)redhat.com>; Radoslav Husar
<rhusar(a)redhat.com>; Tarrant, Tristan <ttarrant(a)redhat.com>; Paul Ferraro
<paul.ferraro(a)redhat.com>
Subject: Re: [keycloak-dev] Clustering configuration
Hi all,
while you are mentioning JDBC_PING: it seems that the configuration changed somewhat
between 4.2.1 and 4.4.0, probably due to the WildFly/Infinispan upgrade. The Keycloak Helm
chart for Kubernetes just stumbled over this while trying to upgrade to 4.4.0.Final.
See:
https://github.com/helm/charts/pull/7650#issuecomment-420174274
BTW, another configuration option would be to use TCP unicast with a set of initial hosts
for discovery. This is useful in scenarios that involve multiple network segments which
don't support UDP multicast, and where JDBC_PING cannot be used (for whatever
reason).
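A minimal sketch of such a stack in the WildFly jgroups subsystem might look like the
following. This is only an illustration: the stack name, host names, and ports are
placeholders, not taken from any shipped configuration.

```xml
<!-- Sketch only: TCP unicast discovery via a static list of initial hosts.
     Host names and ports below are placeholders. -->
<stack name="tcpping">
    <transport type="TCP" socket-binding="jgroups-tcp"/>
    <protocol type="org.jgroups.protocols.TCPPING">
        <!-- comma-separated list of well-known cluster members -->
        <property name="initial_hosts">node1[7600],node2[7600]</property>
        <property name="port_range">0</property>
    </protocol>
    <!-- remaining protocols (MERGE3, FD_SOCK, ...) as in the default tcp stack -->
</stack>
```

The list of initial hosts can then be fed in via a system property or environment
variable per deployment.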
Cheers,
Thomas
Schuster Sebastian (INST-CSS/BSV-OS)
<Sebastian.Schuster@bosch-si.com>
wrote on Wed., 12 Sep. 2018, 11:25:
Hi Sebastian! :)
What about just using JDBC_PING as a default? It works in any environment and does not
add an additional dependency, since Keycloak does not do much without a DB anyway.
The only problem we are currently having is that graceful shutdown does not work, because
the dependency of the JGroups subsystem on the DB is not detected correctly.
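For reference, a JDBC_PING stack along these lines could look roughly as follows in
standalone-ha.xml. This is a sketch only: the datasource JNDI name is an assumption based
on Keycloak's default KeycloakDS, and the non-discovery protocols are elided.

```xml
<!-- Sketch: discovery via the shared Keycloak database.
     "KeycloakDS" is assumed here; use whatever datasource the server defines. -->
<stack name="tcp">
    <transport type="TCP" socket-binding="jgroups-tcp"/>
    <protocol type="JDBC_PING">
        <property name="datasource_jndi_name">java:jboss/datasources/KeycloakDS</property>
    </protocol>
    <!-- remaining protocols (MERGE3, FD_SOCK, ...) as in the default tcp stack -->
</stack>
```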
Best regards,
Sebastian
-----Original Message-----
From: keycloak-dev-bounces@lists.jboss.org
<keycloak-dev-bounces@lists.jboss.org>
On Behalf Of Sebastian Laskawiec
Sent: Wednesday, 12 September 2018 10:00
To: keycloak-dev <keycloak-dev@lists.jboss.org>
Cc: Bela Ban <bban@redhat.com>; Radoslav Husar <rhusar@redhat.com>;
Tarrant, Tristan <ttarrant@redhat.com>; Paul Ferraro <paul.ferraro@redhat.com>
Subject: [keycloak-dev] Clustering configuration
Hey guys,
During our weekly sync meeting, Stian asked me to look into different options for
clustering in the Keycloak server. This topic has become quite hot in the context of our
Docker image (see the proposed community contributions [1][2][3]). Since we are based on
WF 13, which uses JGroups 4.0.11 and has KUBE_PING in its modules, we have a couple of
options for how to do it.
Before discussing different implementations, let me quickly go through the
requirements:
- We need a configuration stack that works for on-prem and cloud deployments with
OpenShift as our primary target.
- The configuration should be automatic where possible. E.g. if we detect that
Keycloak is running in a container, we should use the proper discovery protocol.
- There needs to be a way to override the discovery protocol manually.
With those requirements in mind, we have a couple of implementation options on the
table:
1. Add more stacks to the configuration, e.g. openshift, azure or gcp. Then we use the
standard `-Djboss.default.jgroups.stack=<stack>` configuration switch.
2. Provide more standalone-*.xml configuration files, e.g.
standalone-ha.xml (for on-prem) or standalone-cloud.xml.
3. Add protocols dynamically using CLI. A similar approach to what we did for the Data
Grid Cache Service [4].
4. Use MULTI_PING protocols [5][6], with multiple discovery protocols on the same stack.
This will include MPING (for multicasting), KUBE_PING (if we can access Kubernetes API),
DNS_PING (if Pods are governed by a Service).
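To illustrate option #4, a plain-JGroups sketch of such a combined stack might look like
this. The protocol names and attributes below are from memory and would need to be checked
against the manual [5]; per the manual, MULTI_PING has to sit above the discovery protocols
it aggregates, which in JGroups XML (listed bottom-up) means it appears after them. The
DNS query and the Kubernetes protocol class are placeholders for this sketch.

```xml
<!-- Sketch only: one TCP stack whose discovery data is aggregated by MULTI_PING
     from several discovery protocols. Attribute values are placeholders. -->
<config xmlns="urn:org:jgroups">
    <TCP bind_port="7600"/>
    <MPING/>                                                     <!-- multicast discovery -->
    <dns.DNS_PING dns_query="keycloak.myns.svc.cluster.local"/>  <!-- if Pods are behind a Service -->
    <kubernetes.KUBE_PING/>                                      <!-- if the Kubernetes API is reachable -->
    <MULTI_PING/>              <!-- sits above the discovery protocols, merges their responses -->
    <MERGE3/>
    <!-- FD_SOCK, FD_ALL, VERIFY_SUSPECT, pbcast.NAKACK2, ... as usual -->
</config>
```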
Options #1 and #2 are somewhat similar to what we did for Infinispan [7]. It works quite
well, but the configuration grows quickly and most of the protocols (apart from
discovery) are duplicated. On the other hand, having separate configuration pieces for
each use case is very flexible. Bear in mind that AWS cuts TCP connections, so using
FD_SOCK might lead to false suspicions; on GCP, for instance, FD_SOCK works quite
nicely. The CLI option (#3) is also very flexible and should probably be implemented only
in our Docker image. This follows the convention we already started with different
CLI files for different DBs [8]. Option #4 is brand new (implemented in JGroups 4.0.8; we
have 4.0.11, as you probably recall). It has been specifically designed for this kind of
use case, where we want to gather discovery data from multiple places. This way, we
should end up with two stacks in the standalone-ha.xml file: UDP and TCP.
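As an illustration of option #3, the Docker image could run a jboss-cli script along these
lines at startup. This is a sketch under assumptions: the operation and attribute names are
from memory and must be verified against the WF 13 jgroups subsystem model, and
"KeycloakDS" is an assumed datasource name.

```
# Sketch only: swap the default discovery protocol for JDBC_PING via jboss-cli.
# Operation/attribute names need verifying against the WF 13 jgroups subsystem.
embed-server --server-config=standalone-ha.xml
batch
/subsystem=jgroups/stack=tcp/protocol=MPING:remove
/subsystem=jgroups/stack=tcp/protocol=JDBC_PING:add(add-index=0)
/subsystem=jgroups/stack=tcp/protocol=JDBC_PING:write-attribute(name=properties, \
    value={datasource_jndi_name="java:jboss/datasources/KeycloakDS"})
run-batch
stop-embedded-server
```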
I honestly have to say that my heart is with option #4. However, as far as I know, it
hasn't been battle-tested, and we might get some surprises. The other options are not
as elegant as option #4, but they are already used in other projects. They are much safer
options, but they will add some maintenance burden on our shoulders.
What would you suggest, guys? What do you think about all this? @Rado, @Paul, @Tristan - do
you have any plans regarding this piece in WildFly or Infinispan?
Thanks,
Sebastian
[1]
https://github.com/jboss-dockerfiles/keycloak/pull/96
[2]
https://github.com/jboss-dockerfiles/keycloak/pull/100
[3]
https://github.com/jboss-dockerfiles/keycloak/pull/116
[4]
https://github.com/jboss-container-images/datagrid-7-image/blob/datagrid-...
[5]
http://www.jgroups.org/manual4/index.html#_multi_ping
[6]
https://issues.jboss.org/browse/JGRP-2224
[7]
https://github.com/infinispan/infinispan/tree/master/server/integration/j...
[8]
https://github.com/jboss-dockerfiles/keycloak/tree/master/server/tools/cl...
_______________________________________________
keycloak-dev mailing list
keycloak-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/keycloak-dev