Sure, it's about dynamic registration of application cluster nodes to
Keycloak, so that once an admin wants to send a global admin event (like
pushing a new notBefore for the realm), it is sent to all cluster nodes.
I realized we discussed just the idea on the ML, but I did not send a
summary after it was done. Sorry for that.
So changes:
* In AdapterConfig, there are new options "register-node-at-startup"
(true/false) and "register-node-period" (interval in seconds), which let
people specify that the application registers itself to Keycloak at
startup and re-registers after the specified interval. Registration is
disabled by default, as it is useful just for clustered deployments.
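For illustration, the two options would sit in the adapter's keycloak.json next to the usual settings; the realm/resource values below are just placeholders:

```json
{
  "realm": "demo",
  "resource": "myapp",
  "credentials": { "secret": "..." },
  "register-node-at-startup": true,
  "register-node-period": 60
}
```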
* On the adapters side, there is a new component
NodesRegistrationManagement, which is used to send those "registration"
requests. The request is actually not sent at startup of the application
but at the first request, so that NodesRegistrationManagement has a
fully resolved KeycloakDeployment to send the request against. Sending
at the first request also works well with multi-tenancy (it allows the
registration request to be sent for each realm resolved by
KeycloakConfigResolver in any request). At shutdown (or application
undeployment) it sends an "unregistration" request to Keycloak to inform
it that the particular cluster node is down. Requests are authenticated
by the client credentials of the particular application (the feature is
not supported for public clients for now).
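The "register lazily on first request, then periodically" idea can be sketched like this; this is a simplified illustration, not the real NodesRegistrationManagement (which also builds and sends the authenticated HTTP request to Keycloak):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: decide on each request whether this node still
// needs to (re-)register itself for the resolved realm/deployment.
class LazyNodeRegistration {
    private final Map<String, Long> lastRegistration = new ConcurrentHashMap<>();
    private final long periodMillis;

    LazyNodeRegistration(long periodSeconds) {
        this.periodMillis = periodSeconds * 1000;
    }

    // Called on every request; returns true only when a registration
    // request should actually be sent to Keycloak.
    boolean tryRegister(String realm, long nowMillis) {
        Long last = lastRegistration.get(realm);
        if (last != null && nowMillis - last < periodMillis) {
            return false; // registered recently, nothing to do
        }
        lastRegistration.put(realm, nowMillis);
        // the real code would POST to Keycloak's ClientsManagementService here
        return true;
    }

    // On shutdown/undeploy the real code also sends "unregistration" requests.
    void unregisterAll() {
        lastRegistration.clear();
    }
}
```

Because the check happens per resolved realm, a multi-tenant deployment naturally registers once per realm that KeycloakConfigResolver returns.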
* On the Keycloak side, there is ClientsManagementService, which handles
those registration/unregistration requests and registers/unregisters
nodes on the Keycloak side.
* Model changes on ApplicationModel to support this are these new methods:
Map<String, Integer> getRegisteredNodes();
void registerNode(String nodeHost, int registrationTime);
void unregisterNode(String nodeHost);
int getNodeReRegistrationTimeout();
void setNodeReRegistrationTimeout(int timeout);
The node re-registration timeout is the interval within which a node
needs to re-register itself in Keycloak, otherwise Keycloak will
unregister it.
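A minimal in-memory sketch of those ApplicationModel methods, with a hypothetical isStale helper to show how the re-registration timeout is meant to be interpreted (the real implementation lives in the Keycloak model layer):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch only: node host -> last registration time (in seconds).
class NodeRegistry {
    private final Map<String, Integer> registeredNodes = new LinkedHashMap<>();
    private int nodeReRegistrationTimeout; // seconds

    NodeRegistry(int timeout) {
        this.nodeReRegistrationTimeout = timeout;
    }

    Map<String, Integer> getRegisteredNodes() {
        return registeredNodes;
    }

    void registerNode(String nodeHost, int registrationTime) {
        // re-registration simply refreshes the stored timestamp
        registeredNodes.put(nodeHost, registrationTime);
    }

    void unregisterNode(String nodeHost) {
        registeredNodes.remove(nodeHost);
    }

    int getNodeReRegistrationTimeout() {
        return nodeReRegistrationTimeout;
    }

    void setNodeReRegistrationTimeout(int timeout) {
        this.nodeReRegistrationTimeout = timeout;
    }

    // Hypothetical helper: a node is stale if it has not re-registered
    // within the timeout, so Keycloak may unregister it.
    boolean isStale(String nodeHost, int currentTime) {
        Integer last = registeredNodes.get(nodeHost);
        return last != null && nodeReRegistrationTimeout > 0
                && currentTime - last > nodeReRegistrationTimeout;
    }
}
```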
* Once a new admin request for admin events (like push notBefore or
logoutAll on a realm or application) is sent to Keycloak, it is
propagated to all registered cluster nodes. ResourceAdminManager was
refactored to support this. Note that this applies just to "global"
events like push notBefore. Normal user logouts are still sent just to a
single node (the node where the particular HTTP session invalidation
should happen).
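The fan-out for a global event is conceptually just "build one URL per registered node and send the request to each"; a simplified sketch (the real logic, including the actual HTTP calls, is in the refactored ResourceAdminManager):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: map registered node base URLs to k_push_not_before endpoints.
class AdminEventFanout {
    static List<String> pushNotBeforeUrls(List<String> registeredNodeBaseUrls) {
        List<String> urls = new ArrayList<>();
        for (String base : registeredNodeBaseUrls) {
            urls.add(base + "/k_push_not_before"); // one request per cluster node
        }
        return urls;
    }
}
```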
Example of usage:
* An application is deployed on 2 cluster nodes, "http://node1:8080/myapp"
and "http://node2:8080/myapp". Those 2 nodes are started and registered
to Keycloak at startup.
* The admin wants to push notBefore for the "myapp" application.
Keycloak will handle it by sending requests to
"http://node1:8080/myapp/k_push_not_before" and
"http://node2:8080/myapp/k_push_not_before".
* When node1 shuts down, it sends an unregistration request to Keycloak.
The next time the admin wants to push notBefore, it will be sent just to
node2 at "http://node2:8080/myapp/k_push_not_before".
* In the Keycloak admin console, the admin can test the availability
(ping) of registered cluster nodes (a new preAuth request type
"k_test_available" was added to PreAuthActionsHandler; it doesn't do
anything, just replies with 204). The admin can also manually unregister
stale nodes in the admin console if, for instance, automatic
unregistration fails.
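The availability check is intentionally trivial; as a sketch, the handler only needs to recognize the path and answer 204 No Content (hypothetical simplification of what PreAuthActionsHandler does for this action):

```java
// Sketch: answer 204 for the "k_test_available" preAuth action so the
// admin console can ping a node; anything else is not handled here.
class PreAuthPing {
    static int handle(String path) {
        if (path.endsWith("/k_test_available")) {
            return 204; // node is alive; no response body needed
        }
        return -1; // not a preAuth action this sketch handles
    }
}
```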
In the end, there is a fair amount of changes, but they affect just
clustering and everything should be backwards compatible. Existing 1.0.X
applications will still work after an update to 1.1.X without needing to
change anything in their configuration.
Marek
On 28.10.2014 23:48, Bill Burke wrote:
Did you send out an email explaining what ClientsManagementService
is?
Also what changes you made to the adapters? I'd like to review it.