Gah! Intended for the list not the individual.
---------- Forwarded message ----------
From: James Green <james.mk.green(a)gmail.com>
Date: 3 August 2017 at 10:40
Subject: Re: [keycloak-user] Clean Install with MySQL - Keycloak restarts
itself due to liquibase errors - Docker Swarm environment
To: Marko Strukelj <mstrukel(a)redhat.com>
I'm aware of the Liquibase error; the bit I'm struggling with is how that
can happen on an empty database. I might try to launch the database
separately: it could be a race condition between initialising the two
applications (clutching at straws).
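If the race theory holds, the usual fix is to make Keycloak wait until MySQL is actually accepting connections before it starts. A minimal sketch of such a startup gate is below; host/port/timing values are assumptions for illustration, not anything from the actual deployment, and the `probe` parameter exists only so the loop can be exercised without a real server:

```python
import socket
import time


def wait_for_tcp(host, port, timeout=60.0, interval=2.0, probe=None):
    """Block until host:port accepts a TCP connection, or raise TimeoutError.

    `probe` is injectable for testing; by default it attempts a real
    TCP connection to (host, port).
    """
    def default_probe():
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            return False

    check = probe or default_probe
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    raise TimeoutError(f"{host}:{port} not reachable after {timeout}s")
```

In a container you would run something like `wait_for_tcp("mysql", 3306)` (service name assumed) in the entrypoint before launching the server, so the migration never runs against a half-initialised database.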
On 3 August 2017 at 10:36, Marko Strukelj <mstrukel(a)redhat.com> wrote:
There's your error in line 237 of the first log:
08:53:26,263 ERROR [org.keycloak.connections.jpa.updater.liquibase.conn.DefaultLiquibaseConnectionProvider]
(ServerService Thread Pool -- 52) Change Set
META-INF/jpa-changelog-1.7.0.xml::1.7.0::bburke@redhat.com failed.
Error: Table 'KEYCLOAK_GROUP' already exists [Failed SQL: CREATE TABLE
keycloak.KEYCLOAK_GROUP (ID VARCHAR(36) NOT NULL, NAME VARCHAR(255) NULL,
PARENT_GROUP VARCHAR(36) NULL, REALM_ID VARCHAR(36) NULL)]:
liquibase.exception.DatabaseException: Table 'KEYCLOAK_GROUP' already
exists [Failed SQL: CREATE TABLE keycloak.KEYCLOAK_GROUP (ID VARCHAR(36)
NOT NULL, NAME VARCHAR(255) NULL, PARENT_GROUP VARCHAR(36) NULL,
REALM_ID VARCHAR(36) NULL)]
The question now is why that table exists already if you started with an
empty database.
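One way a table can pre-exist in a "fresh" database is two migrators racing against the same schema: two service replicas starting at once, or a restarted container overlapping the first attempt. A hedged compose sketch of the mitigations (this is not the poster's actual file; service and image names are assumptions, and note that `docker stack deploy` ignores `depends_on`, so a healthcheck alone does not order startup):

```yaml
version: "3.3"
services:
  mysql:
    image: mysql:latest
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 3s
      retries: 10
  keycloak:
    image: jboss/keycloak:latest   # assumed image name
    deploy:
      # Two replicas migrating one schema could also explain a
      # table that "already exists" -- keep a single replica.
      replicas: 1
```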
On Thu, Aug 3, 2017 at 11:22 AM, James Green <james.mk.green(a)gmail.com>
wrote:
> Unsure what I'm doing wrong here. Circumstance: we've spotted KeyCloak,
> have reason to be interested, and so are deploying an instance into our
> test environment, which happens to be Docker Swarm.
>
> Problem: the KeyCloak service is being restarted by Docker, presumably due
> to a crash. The logs indicate it gets part-way through a Liquibase script,
> then fails.
>
> Here's the docker-compose.yml file that we are using for deployment
> purposes:
>
>
https://gist.github.com/jmkgreen/b79f95c3eca2eac3fb66c66d12017f07
>
> Here's the log from MySQL:
>
>
https://gist.github.com/jmkgreen/75b99fe98cf1d16a99895e78dae47cce
>
> Here's an initial log from KeyCloak:
>
>
https://gist.github.com/jmkgreen/96285800949b5c4f62c31caa3eba27ef
>
> Here's a further log from KeyCloak once Docker has decided it needed to
> be restarted:
>
>
https://gist.github.com/jmkgreen/2051ab14e470d1d46dabcfdd519d5c42
>
> As you can see, the MySQL server starts and configures itself from
> scratch, since no data is already present. All looks good. KeyCloak
> eventually connects to MySQL and begins using Liquibase to roll through
> its schema transitions, but crashes (how?), so the container as a whole
> exits and Docker restarts it, over and over.
>
> FWIW I earlier created a StackOverflow post which has us at a _different_
> liquibase change but also failing:
>
>
https://stackoverflow.com/questions/45466482/keycloak-will-not-start-due-to-liquibase-changelog-error?noredirect=1#comment77894983_45466482
>
> What I've posted in the Gists above occurred after I shut everything down
> and wiped the MySQL data directory of its contents in full.
>
> An aside: we have multiple projects working within Swarm using stack
> deployments with externally managed networks (as recommended by Docker)
> and GlusterFS volumes, without issue. In this particular case the only
> tangible difference is the use of the latest MySQL version, which other
> projects may not be using. We also have no experience of WildFly-based
> software.
>
> Any ideas what I've done wrong?
>
> Thanks,
>
> James
> _______________________________________________
> keycloak-user mailing list
> keycloak-user(a)lists.jboss.org
>
https://lists.jboss.org/mailman/listinfo/keycloak-user
>