<div class="moz-cite-prefix">Then the record in DB will remain
locked and needs to be fixed manually. Actually the same behaviour
like liquibase. The possibilities to repair from this state is:<br>
- Run keycloak with system property
"-Dkeycloak.dblock.forceUnlock=true" . Then Keycloak will release
the existing lock at startup and acquire new lock. The warning is
written to server.log that this property should be used carefully
just to repair DB<br>
- Manually delete lock record from DATABASECHANGELOGLOCK table (or
"dblock" collection in mongo)<br>

The other possibility is that after the timeout, node2 assumes the
current lock has timed out, forcefully releases the existing lock and
replaces it with its own. However, I didn't do it this way as it's
potentially dangerous - there is some chance that 2 nodes run
migration or import at the same time and the DB ends up in an
inconsistent state. Or is that an acceptable risk?
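
To illustrate the danger, here is a sketch of that rejected approach
(hypothetical code, not what's in the PR - the column names are the
standard Liquibase ones, everything else is made up for illustration).
Even though the steal itself can be a single atomic UPDATE, a holder
that is merely slow (e.g. importing a big realm file) still believes
it owns the lock, so after a steal two nodes can run migration/import
concurrently:

  import java.sql.Connection;
  import java.sql.PreparedStatement;
  import java.sql.SQLException;
  import java.sql.Timestamp;

  public class LockStealSketch {

      // Try to take over a lock whose LOCKGRANTED timestamp is older
      // than timeoutMillis. The single UPDATE is atomic, so two
      // stealers cannot both win - but a slow, still-alive holder
      // keeps working regardless, which is exactly the dangerous case.
      public static boolean stealLockIfExpired(Connection conn,
              String nodeId, long timeoutMillis) throws SQLException {
          long now = System.currentTimeMillis();
          String sql = "UPDATE DATABASECHANGELOGLOCK "
                  + "SET LOCKEDBY = ?, LOCKGRANTED = ? "
                  + "WHERE ID = 1 AND LOCKED = ? AND LOCKGRANTED < ?";
          try (PreparedStatement ps = conn.prepareStatement(sql)) {
              ps.setString(1, nodeId);
              ps.setTimestamp(2, new Timestamp(now));
              ps.setBoolean(3, true);
              ps.setTimestamp(4, new Timestamp(now - timeoutMillis));
              return ps.executeUpdate() == 1; // true = lock stolen
          }
      }
  }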

Marek

On 07/03/16 19:50, Stian Thorgersen wrote:
> 900 seconds is probably ok, but what happens if the node holding the
> lock dies?
<div class="gmail_extra"><br>
<div class="gmail_quote">On 7 March 2016 at 11:03, Marek Posolda
<span dir="ltr"><<a moz-do-not-send="true"
href="mailto:mposolda@redhat.com" target="_blank">mposolda@redhat.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">Send PR
with added support for $subject .<br>
<a moz-do-not-send="true"
href="https://github.com/keycloak/keycloak/pull/2332"
rel="noreferrer" target="_blank">https://github.com/keycloak/keycloak/pull/2332</a>
.<br>
<br>
Few details:<br>
>> - Added DBLockProvider, which handles acquiring and releasing the
>> DB lock. When the lock is acquired, cluster node2 needs to wait
>> until node1 releases the lock (see the sketch at the end of this
>> list).
>>
>> - The lock is acquired at startup for migrating the model (both
>> model-specific and generic migration), for importing realms and for
>> adding the initial admin user. So these tasks are always done by
>> just one node at a time.
>>
>> - The lock is implemented at the DB level, so it works even if the
>> Infinispan cluster is not correctly configured. For JPA, I've added
>> an implementation which reuses the Liquibase DB locking, with a
>> bugfix for the issue that prevented the builtin Liquibase lock from
>> working correctly. I've added an implementation for Mongo too.
>>
>> - Added DBLockTest, which simulates 20 threads racing to acquire
>> the lock concurrently. It's passing with all databases.
>>
>> - The default timeout for acquiring the lock is 900 seconds and the
>> lock recheck time is 2 seconds. So if node2 is not able to acquire
>> the lock within 900 seconds, it fails to start. It's possible to
>> change this in keycloak-server.json. Is 900 seconds too much? I was
>> thinking about the case when there is some large realm file being
>> imported at startup.
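>>
>> For example, the timeouts could be overridden with something
>> roughly like this in keycloak-server.json (illustrative only - the
>> key names here are assumptions, the real ones are in the PR):
>>
>>   "dblock": {
>>       "jpa": {
>>           "lockWaitTimeoutSeconds": 900,
>>           "lockRecheckTimeSeconds": 2
>>       }
>>   }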
>>
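>> And here is the sketch mentioned in the first point - roughly how
>> the startup tasks are guarded (minimal and hypothetical: the method
>> and helper names are just for illustration, the real interface is
>> DBLockProvider in the PR):
>>
>>   DBLockProvider lock = session.getProvider(DBLockProvider.class);
>>   lock.waitForLock();       // blocks until node1 releases,
>>                             // rechecking every 2s, up to 900s
>>   try {
>>       migrateModel();       // model-specific and generic migration
>>       importRealms();       // realm files imported at startup
>>       addInitialAdminUser();
>>   } finally {
>>       lock.releaseLock();   // release even if migration fails
>>   }
>>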
>> Marek
>> _______________________________________________
>> keycloak-dev mailing list
>> keycloak-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/keycloak-dev