Incorporate Keycloak login into React-based SPAs (and ideally Cordova-based mobile apps as well)
by Göttlich, Thomas
Hi, we're currently evaluating Keycloak for our systems that use React-based SPAs as well as servlet/JavaEE-based applications.
Additionally, we're planning to add Cordova-based mobile apps for iOS and Android, hence the addition in the title, though incorporating Keycloak into our React-based SPAs has priority.
For the servlet-based applications this works quite well using KeycloakOIDCFilter.
However, the question is how we'd add our SPAs to that.
As far as I understand it, Keycloak doesn't provide an authorization API, for good reasons.
Thus, when a user needs to log in, they're redirected to Keycloak's login page and then back to the application.
According to our SPA devs that would mean leaving the SPA and restarting it later, potentially losing any data that has already been loaded or entered, especially if the user needs to log in again.
As an example, think of an email client where the user starts to write an email, gets distracted, and after returning to the application the SSO session has timed out and a re-login is required.
Losing the email in the process wouldn't be something our SPA devs would accept.
Hence the question: how would one go about this, i.e. how would one allow the SPA to display the login page without having to reload or restart the SPA itself?
I'm no expert here, but I'd guess we could use an iframe or a browser window (popup/tab/new window) to redirect the user to Keycloak, and after a successful login redirect the user to a page that tells the browser or SPA that the iframe or window can be closed and that the user may continue using the SPA.
Would that be a viable way to do it? How are you doing it?
Thanks in advance,
Thomas
7 years, 7 months
Admin Client cannot access keycloak
by Denny Israel
Hi,
I am trying to use the Java keycloak-admin-client to access my Keycloak server.
Dependencies:
compile group: 'org.jboss.resteasy', name: 'resteasy-jackson-provider', version: '3.1.2.Final'
compile group: 'org.jboss.resteasy', name: 'resteasy-multipart-provider', version: '3.1.2.Final'
compile group: 'org.jboss.resteasy', name: 'resteasy-client', version: '3.1.2.Final'
compile group: 'org.keycloak', name: 'keycloak-admin-client', version: '3.1.0.Final'
When I use the client to get the server info I get this exception:
Exception in thread "main" javax.ws.rs.client.ResponseProcessingException:
javax.ws.rs.ProcessingException:
org.codehaus.jackson.map.exc.UnrecognizedPropertyException: Unrecognized
field "access_token" (Class
org.keycloak.representations.AccessTokenResponse), not marked as ignorable
at [Source: org.apache.http.conn.EofSensorInputStream@68e5eea7; line: 1,
column: 18] (through reference chain:
org.keycloak.representations.AccessTokenResponse["access_token"])
Here is my code:
import org.jboss.resteasy.client.jaxrs.ResteasyClientBuilder;
import org.keycloak.admin.client.Keycloak;
import org.keycloak.admin.client.KeycloakBuilder;

Keycloak kc = KeycloakBuilder.builder()
        .serverUrl("http://<mykeycloak>/auth")   // Keycloak base URL
        .realm("master")
        .username("admin")
        .password("admin")
        .clientId("admin-cli")
        .resteasyClient(new ResteasyClientBuilder().connectionPoolSize(10).build())
        .build();

System.out.println(kc.serverInfo().getInfo());
What am I doing wrong?
7 years, 7 months
Rebalancing problem while adding a new node to a domain
by Elnaz razmi
Hello,
please help me with this problem:
We chose to install Keycloak in domain mode in our company. We have a load balancer and three slave nodes. It works properly with two active nodes, but when we start the third node so it connects to the load balancer, the load balancer doesn't rebalance onto the new node. It just says that the node is registered, but it doesn't show these lines, which we do see when the other nodes connect:
[org.infinispan.CLUSTER] (remote-thread--p8-t45) ISPN000310: Starting
cluster-wide rebalance for cache work, topology CacheTopology{id=3,
rebalanceId=2, currentCH=ReplicatedConsistentHash{ns = 60, owners =
(2)[master:server-one-master: 30, srvca61-site232:server-threeslave: 30]},
pendingCH=ReplicatedConsistentHash{ns = 60, owners =
(3)[master:server-one-master: 20, srvca61-site232:server-threeslave: 20,
srvca61-site231:server-twoslave: 20]}, unionCH=null,
actualMembers=[master:server-one-master, srvca61-site232:server-threeslave,
srvca61-site231:server-twoslave]}
[org.infinispan.CLUSTER] (remote-thread--p8-t44) ISPN000310: Starting
cluster-wide rebalance for cache loginFailures, topology
CacheTopology{id=3, rebalanceId=2, currentCH=DefaultConsistentHash{ns=80,
owners = (2)[master:server-one-master: 40+0,
srvca61-site232:server-threeslave: 40+0]},
pendingCH=DefaultConsistentHash{ns=80, owners =
(3)[master:server-one-master: 27+0, srvca61-site232:server-threeslave:
27+0, srvca61-site231:server-twoslave: 26+0]}, unionCH=null,
actualMembers=[master:server-one-master, srvca61-site232:server-threeslave,
srvca61-site231:server-twoslave]}
[org.infinispan.CLUSTER] (remote-thread--p8-t42) ISPN000310: Starting
cluster-wide rebalance for cache authorization, topology
CacheTopology{id=3, rebalanceId=2, currentCH=DefaultConsistentHash{ns=80,
owners = (2)[master:server-one-master: 40+0,
srvca61-site232:server-threeslave: 40+0]},
pendingCH=DefaultConsistentHash{ns=80, owners =
(3)[master:server-one-master: 27+0, srvca61-site232:server-threeslave:
27+0, srvca61-site231:server-twoslave: 26+0]}, unionCH=null,
actualMembers=[master:server-one-master, srvca61-site232:server-threeslave,
srvca61-site231:server-twoslave]}
[org.infinispan.CLUSTER] (remote-thread--p8-t39) ISPN000310: Starting
cluster-wide rebalance for cache sessions, topology CacheTopology{id=3,
rebalanceId=2, currentCH=DefaultConsistentHash{ns=80, owners =
(2)[master:server-one-master: 40+0, srvca61-site232:server-threeslave:
40+0]}, pendingCH=DefaultConsistentHash{ns=80, owners =
(3)[master:server-one-master: 27+0, srvca61-site232:server-threeslave:
27+0, srvca61-site231:server-twoslave: 26+0]}, unionCH=null,
actualMembers=[master:server-one-master, srvca61-site232:server-threeslave,
srvca61-site231:server-twoslave]}
[org.infinispan.CLUSTER] (remote-thread--p8-t43) ISPN000310: Starting
cluster-wide rebalance for cache offlineSessions, topology
CacheTopology{id=3, rebalanceId=2, currentCH=DefaultConsistentHash{ns=80,
owners = (2)[master:server-one-master: 40+0,
srvca61-site232:server-threeslave: 40+0]},
pendingCH=DefaultConsistentHash{ns=80, owners =
(3)[master:server-one-master: 27+0, srvca61-site232:server-threeslave:
27+0, srvca61-site231:server-twoslave: 26+0]}, unionCH=null,
actualMembers=[master:server-one-master, srvca61-site232:server-threeslave,
srvca61-site231:server-twoslave]}
[org.infinispan.CLUSTER] (remote-thread--p8-t42) ISPN000336: Finished
cluster-wide rebalance for cache offlineSessions, topology id = 3
[org.infinispan.CLUSTER] (remote-thread--p8-t42) ISPN000336: Finished
cluster-wide rebalance for cache authorization, topology id = 3
[org.infinispan.CLUSTER] (remote-thread--p8-t42) ISPN000336: Finished
cluster-wide rebalance for cache loginFailures, topology id = 3
[org.infinispan.CLUSTER] (remote-thread--p8-t45) ISPN000336: Finished
cluster-wide rebalance for cache work, topology id = 3
[org.infinispan.CLUSTER] (remote-thread--p8-t45) ISPN000336: Finished
cluster-wide rebalance for cache sessions, topology id = 3
7 years, 7 months
Implicit Flow with the Spring Boot adapter
by Jonathan D'Andries
We have a scenario in which the application does not have access to the Keycloak server, but the user does. In this case, the user is on our internal corporate network along with the Keycloak server, while the application lives on the public Internet. We can send the user from the public application to Keycloak to log in, but the application cannot communicate back with Keycloak to verify the token coming back when the user returns. It is my understanding that "Implicit Flow" should allow for this scenario:
https://keycloak.gitbooks.io/documentation/securing_apps/topics/oidc/oidc...
But I cannot figure out how to implement this with the Spring Boot adapter. It seems to me that the adapter should have a way to decode and validate the JWT locally (making sure the short-lived access token has not expired), then trust the token as implicitly granted and proceed to set a session cookie with a separate timeout configured in the Keycloak admin console. Is this available in Keycloak somewhere and I am just missing it? Or perhaps you have another suggestion for how to do this?
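To make it concrete, something along these lines is what I mean by validating the token locally - just a rough sketch, assuming keycloak-core is on the classpath and the realm's RSA public key has been copied out of the admin console (the exact checks RSATokenVerifier performs may differ between Keycloak versions):

import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.spec.X509EncodedKeySpec;
import java.util.Base64;

import org.keycloak.RSATokenVerifier;
import org.keycloak.representations.AccessToken;

public class LocalTokenCheck {

    // Realm public key as shown in the admin console (Realm Settings -> Keys),
    // without the PEM header/footer lines; placeholder value only.
    private static final String REALM_PUBLIC_KEY = "MIIBIjANBgkq...";

    // Issuer URL of the realm, e.g. https://<keycloak-host>/auth/realms/demo
    private static final String REALM_URL = "https://keycloak.example.com/auth/realms/demo";

    public static AccessToken verify(String tokenString) throws Exception {
        byte[] der = Base64.getDecoder().decode(REALM_PUBLIC_KEY);
        PublicKey publicKey = KeyFactory.getInstance("RSA")
                .generatePublic(new X509EncodedKeySpec(der));

        // Checks the RS256 signature against the realm key plus the issuer and
        // expiry, all locally, without calling back to the Keycloak server.
        return RSATokenVerifier.verifyToken(tokenString, publicKey, REALM_URL);
    }
}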
Note that I recognize implicit flow is inherently flawed because it passes the access token to the user agent (vulnerable to man-in-the-middle type leaks). Still, it's part of the OIDC spec, and it seems the security concerns can be somewhat mitigated with a short expiration on the access token and a configurable expiration of the resulting client session in Keycloak.
Suggestions?
Thanks,
Jonathan
--
Jonathan D'Andries
http://www.linkedin.com/in/jonathandandries/
7 years, 7 months
admin cli - add composite roles to client role
by Kevin Hirschmann
Hello,
Can someone please tell me how to use the admin CLI to add a client role to another client role as a composite? In the docs I could only find a way to add client roles to realm roles, but that isn't what I need.
call kcadm.bat add-roles -r demo --rname TTest --cclientid myapp --rolename change-color
(this works if TTest is a realm role)
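For comparison, this is roughly what I'm trying to achieve expressed through the Java admin client (only a sketch; "myapp", "parent-role" and "change-color" are example names, and I'm looking for the kcadm equivalent of the addComposites call):

import java.util.Collections;
import java.util.List;

import org.keycloak.admin.client.Keycloak;
import org.keycloak.admin.client.KeycloakBuilder;
import org.keycloak.admin.client.resource.RealmResource;
import org.keycloak.representations.idm.ClientRepresentation;
import org.keycloak.representations.idm.RoleRepresentation;

public class AddClientRoleComposite {

    public static void main(String[] args) {
        Keycloak kc = KeycloakBuilder.builder()
                .serverUrl("http://localhost:8080/auth")
                .realm("master")
                .username("admin")
                .password("admin")
                .clientId("admin-cli")
                .build();

        RealmResource realm = kc.realm("demo");

        // Resolve the internal id of the client that owns the roles.
        List<ClientRepresentation> clients = realm.clients().findByClientId("myapp");
        String clientUuid = clients.get(0).getId();

        // The client role that should be added as a composite ...
        RoleRepresentation changeColor = realm.clients().get(clientUuid)
                .roles().get("change-color").toRepresentation();

        // ... and the client role that should receive it.
        realm.clients().get(clientUuid).roles().get("parent-role")
                .addComposites(Collections.singletonList(changeColor));
    }
}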
Thanks for your help.
Kevin Hirschmann
HUEBINET Informationsmanagement GmbH & Co. KG
Telefon: +49 (0) 261 / 5 00 86 - 17
Telefax: +49 (0) 261 / 5 00 86 - 29
E-Mail: kevin.hirschmann@huebinet.de
Internet: www.huebinet.de
HUEBINET Informationsmanagement GmbH & Co. KG
An der Königsbach 8
56075 Koblenz
7 years, 7 months
Verify custom registration field
by Liam Maruff
Hi there,
I have customised a registration form to include a custom field called
'Organisation'. How can I verify that the value provided by the user for
this field is appropriate and, if it isn't, reject the user's registration
and display an error message?
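From what I can tell so far, the place for this kind of check would be a custom FormAction added to the registration flow. Is something like the sketch below the intended approach? (Only the validate() step is shown; the remaining FormAction methods and a matching FormActionFactory would still be needed, and the form field name is just what my customised template happens to use.)

import java.util.ArrayList;
import java.util.List;

import javax.ws.rs.core.MultivaluedMap;

import org.keycloak.authentication.ValidationContext;
import org.keycloak.models.utils.FormMessage;

// Sketch of the validate() step of a custom registration FormAction; a real
// implementation would implement org.keycloak.authentication.FormAction.
public class OrganisationValidation {

    public void validate(ValidationContext context) {
        MultivaluedMap<String, String> formData =
                context.getHttpRequest().getDecodedFormParameters();
        String organisation = formData.getFirst("user.attributes.organisation");

        List<FormMessage> errors = new ArrayList<>();
        if (organisation == null || organisation.trim().isEmpty()) {
            // "invalidOrganisationMessage" would be a key in the theme's message bundle.
            errors.add(new FormMessage("user.attributes.organisation",
                    "invalidOrganisationMessage"));
        }

        if (errors.isEmpty()) {
            context.success();                          // registration continues
        } else {
            context.validationError(formData, errors);  // form is shown again with the error
        }
    }
}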
Regards,
Liam M
7 years, 7 months