<div dir="ltr"><div>I actually think the chance of someone killing it during upgrade is relatively high. It could be they forgot to include bind address or used wrong server config. It could be migration takes longer than they expect. We shouldn&#39;t require users to manually unlock.<br></div><div><br></div><div>The lock should be done in association with the transaction. JPA provides pessimistic locks so you can do:</div><div><br></div><div><div>DatabaseLockEntity lock = em.find(DatabaseLockEntity.class, &quot;lock&quot;, LockModeType.PESSIMISTIC_WRITE);</div></div><div><br></div><div>That will work for all databases (except Mongo of course). If the process dies the transaction will timeout and it&#39;s safe to run again at that point because no chances would have been committed to the db.</div><div><br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On 8 March 2016 at 09:22, Marek Posolda <span dir="ltr">&lt;<a href="mailto:mposolda@redhat.com" target="_blank">mposolda@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  
    
  
  <div bgcolor="#FFFFFF" text="#000000"><span class="">
On 08/03/16 06:48, Stian Thorgersen wrote:

What about obtaining a database lock on a table/column? That would automatically be freed if the transaction dies.
You mean something like "give me a lock for table XY until the end of the transaction"? I doubt there is a universal solution for something like this that will reliably work with all the databases we need to support :/ Otherwise I guess liquibase would already use it too?
Currently it works in such a way that the lock is obtained by updating a column in the database. Something similar to "UPDATE DATABASECHANGELOGLOCK SET LOCKED=true WHERE ID=1". Note that there is always a single record in this table with ID=1. Something similar is done for Mongo too.
The lock is released in a "finally" block if something fails. The only way the DB can remain locked is if someone force-kills the process (like with the "kill -9" command, in which case finally blocks are not called) or if the network connection between the server and the DB is lost. The chance of this is very low IMO, and we have an option to manually recover from it.
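In rough terms, assuming plain JDBC and the table above, the flow looks something like this (class and method names are just for illustration; the actual implementation reuses liquibase's lock handling):

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Illustrative sketch only; assumes auto-commit is disabled on the connection.
    class UpdateBasedDbLock {

        // Returns true if this node won the lock (flipped LOCKED from false to true).
        static boolean tryAcquire(Connection conn) throws SQLException {
            try (Statement st = conn.createStatement()) {
                int updated = st.executeUpdate(
                        "UPDATE DATABASECHANGELOGLOCK SET LOCKED = TRUE"
                        + " WHERE ID = 1 AND LOCKED = FALSE");
                conn.commit();
                return updated == 1;
            }
        }

        static void release(Connection conn) throws SQLException {
            try (Statement st = conn.createStatement()) {
                st.executeUpdate("UPDATE DATABASECHANGELOGLOCK SET LOCKED = FALSE WHERE ID = 1");
                conn.commit();
            }
        }

        static void runLocked(Connection conn, Runnable work) throws SQLException, InterruptedException {
            while (!tryAcquire(conn)) {
                Thread.sleep(2000); // another node holds the lock; recheck shortly
            }
            try {
                work.run();
            } finally {
                release(conn); // not reached on kill -9, which is the failure mode above
            }
        }
    }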
Marek
    <blockquote type="cite">
      <div dir="ltr">
        <div><br>
        </div>
        <div>-1 To having a timeout, I agree it&#39;s dangerous and could
          leave the DB inconsistent so we shouldn&#39;t do it</div>
      </div>
      <div class="gmail_extra"><br>
        <div class="gmail_quote">On 7 March 2016 at 21:59, Marek Posolda
          <span dir="ltr">&lt;<a href="mailto:mposolda@redhat.com" target="_blank">mposolda@redhat.com</a>&gt;</span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div bgcolor="#FFFFFF" text="#000000">
Then the record in the DB will remain locked and needs to be fixed manually. Actually the same behaviour as liquibase. The possibilities to repair from this state are:
- Run Keycloak with the system property "-Dkeycloak.dblock.forceUnlock=true" (see the example command after this list). Then Keycloak will release the existing lock at startup and acquire a new lock. A warning is written to server.log that this property should be used carefully, just to repair the DB.
- Manually delete the lock record from the DATABASECHANGELOGLOCK table (or the "dblock" collection in Mongo).
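For illustration, assuming the standard standalone distribution, the property can be passed on startup like this (the exact script depends on how the server is started):

    bin/standalone.sh -Dkeycloak.dblock.forceUnlock=true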
The other possibility is that after the timeout, node2 would assume the current lock has timed out, forcefully release the existing lock and replace it with its own lock. However, I didn't do it this way as it's potentially dangerous - there is some chance that 2 nodes run migration or import at the same time and the DB will end up in an inconsistent state. Or is that an acceptable risk?

Marek
On 07/03/16 19:50, Stian Thorgersen wrote:
900 seconds is probably ok, but what happens if the node holding the lock dies?

On 7 March 2016 at 11:03, Marek Posolda <mposolda@redhat.com> wrote:
                        <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Send PR with added
                          support for $subject .<br>
                          <a href="https://github.com/keycloak/keycloak/pull/2332" rel="noreferrer" target="_blank">https://github.com/keycloak/keycloak/pull/2332</a>
                          .<br>
                          <br>
A few details:

- Added DBLockProvider, which handles acquiring and releasing the DB lock. When the lock is acquired, cluster node2 needs to wait until node1 releases the lock (a rough sketch of such an interface follows after this list).

- The lock is acquired at startup for migrating the model (both model-specific and generic migration), importing realms and adding the initial admin user. So this can always be done by just one node at a time.

- The lock is implemented at the DB level, so it works even if the infinispan cluster is not correctly configured. For JPA I've added an implementation that reuses liquibase DB locking, with a bugfix for the issue that prevented the built-in liquibase lock from working correctly. I've added an implementation for Mongo too.

- Added DBLockTest, which simulates 20 threads racing to acquire the lock concurrently. It's passing with all databases.

- The default timeout for acquiring the lock is 900 seconds and the lock recheck interval is 2 seconds. So if node2 is not able to acquire the lock within 900 seconds, it fails to start. There is the possibility to change this in keycloak-server.json. Is 900 seconds too much? I was thinking about the case when there is some large realm file being imported at startup.
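For illustration, here is a rough sketch of the shape of such a provider and how startup code could use it; the names are assumptions and may not match the actual DBLockProvider interface in the PR:

    // Illustrative only; the real DBLockProvider in the PR may differ.
    interface DbLock {
        // Blocks until the DB-level lock is acquired, rechecking every couple of
        // seconds, or fails if the configured wait timeout (e.g. 900 seconds) is exceeded.
        void waitForLock();

        // Releases the lock so the next waiting node can proceed.
        void releaseLock();
    }

    class StartupTasks {
        static void runExclusively(DbLock lock, Runnable migrationAndImport) {
            lock.waitForLock();
            try {
                // Model migration, realm import and initial admin user creation
                // happen here, on exactly one node at a time.
                migrationAndImport.run();
            } finally {
                // Released in finally, so an ordinary failure never leaves the DB locked.
                lock.releaseLock();
            }
        }
    }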
Marek

_______________________________________________
keycloak-dev mailing list
keycloak-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/keycloak-dev