[keycloak-user] Clean Install with MySQL - Keycloak restarts itself due to liquibase errors - Docker Swarm environment

John Bartko john.bartko at drillinginfo.com
Thu Aug 3 08:00:22 EDT 2017


I *think* that the timeout referred to by this error:


WFLYCTL0348: Timeout after [300] seconds


can be increased by setting -Djboss.as.management.blocking.timeout=### in the Java options.
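
For reference, on a WildFly-based distribution the usual place for that is bin/standalone.conf; here's a minimal sketch (the 600 is only an example value, and if you run the official Docker image it may provide its own mechanism for extending the Java options instead):

    # bin/standalone.conf - give blocking management operations 10 minutes instead of 5
    JAVA_OPTS="$JAVA_OPTS -Djboss.as.management.blocking.timeout=600"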

I suspect that when a Liquibase transaction is abruptly stopped like that, subsequent attempts to use the same database can result in "table already exists" errors.
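
If you want to verify that theory, a quick check (just a sketch - the host, user and schema names here are guesses, and it assumes Liquibase's standard bookkeeping tables) is to compare what Liquibase recorded against the tables that actually exist:

    # What does Liquibase think it applied, and is a stale lock left over from the killed run?
    mysql -h mysql -u keycloak -p keycloak -e "
      SELECT ID, AUTHOR, DATEEXECUTED FROM DATABASECHANGELOG ORDER BY DATEEXECUTED DESC LIMIT 10;
      SELECT * FROM DATABASECHANGELOGLOCK;"

If a table like KEYCLOAK_GROUP exists but its change set never made it into DATABASECHANGELOG, the migration was cut off mid-flight; on a clean install the simplest recovery is to drop the schema and start again with the larger timeouts.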

When performing a 1.9.x -> 2.5.x schema update, I found that the following transaction timeout also needed to be increased beyond its default value of 300 seconds:

/subsystem=transactions:write-attribute(name=default-timeout,value=###)
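
Against a running server you can apply that with the CLI and then reload; this is only a sketch, assuming the stock /opt/jboss/keycloak layout of the official image and 900 as an example value:

    # Raise the JTA transaction timeout to 900 seconds, then reload to be safe
    /opt/jboss/keycloak/bin/jboss-cli.sh --connect \
      --command="/subsystem=transactions:write-attribute(name=default-timeout,value=900)"
    /opt/jboss/keycloak/bin/jboss-cli.sh --connect --command=":reload"

For a container you would normally bake the same change into standalone.xml (or run a CLI script at image build time) so it survives restarts.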

Hope that helps,
- John Bartko
________________________________
From: keycloak-user-bounces at lists.jboss.org <keycloak-user-bounces at lists.jboss.org> on behalf of James Green <james.mk.green at gmail.com>
Sent: Thursday, August 3, 2017 6:01:46 AM
To: Marko Strukelj
Cc: keycloak-user
Subject: Re: [keycloak-user] Clean Install with MySQL - Keycloak restarts itself due to liquibase errors - Docker Swarm environment

Looks like a timeout causes an initial crash, then the Liquibase crashes begin:

https://gist.github.com/jmkgreen/4a474f1b97d8cbea5bf77a6f475ec78c

I'm unsure what is actually being timed out, though - there is mention of an
HTTP interface, but is that a repercussion of something deeper?

Thanks,

James


On 3 August 2017 at 11:01, Marko Strukelj <mstrukel at redhat.com> wrote:

> Hmm, grasping at straws, I would try a previous version of Keycloak to
> rule out the possibility of a regression, then I would try a different
> version of MySQL, then I would try a locally running instance of Keycloak
> against a containerised MySQL ...
>
> On Thu, Aug 3, 2017 at 11:36 AM, Marko Strukelj <mstrukel at redhat.com>
> wrote:
>
>> There's your error in line 237 of the first log:
>>
>> 08:53:26,263 ERROR
>> [org.keycloak.connections.jpa.updater.liquibase.conn.DefaultLiquibaseConnectionProvider]
>> (ServerService Thread Pool -- 52) Change Set
>> META-INF/jpa-changelog-1.7.0.xml::1.7.0::bburke at redhat.com failed.
>> Error: Table 'KEYCLOAK_GROUP' already exists
>> [Failed SQL: CREATE TABLE keycloak.KEYCLOAK_GROUP (ID VARCHAR(36) NOT NULL,
>> NAME VARCHAR(255) NULL, PARENT_GROUP VARCHAR(36) NULL, REALM_ID VARCHAR(36) NULL)]:
>> liquibase.exception.DatabaseException: Table 'KEYCLOAK_GROUP' already exists
>> [Failed SQL: CREATE TABLE keycloak.KEYCLOAK_GROUP (ID VARCHAR(36) NOT NULL,
>> NAME VARCHAR(255) NULL, PARENT_GROUP VARCHAR(36) NULL, REALM_ID VARCHAR(36) NULL)]
>>
>> The question now is why that table exists already if you started with an
>> empty database.
>>
>> On Thu, Aug 3, 2017 at 11:22 AM, James Green <james.mk.green at gmail.com>
>> wrote:
>>
>>> Unsure what I'm doing wrong here. Circumstance: we've spotted Keycloak and
>>> have reason to be interested, so we are deploying an instance into our test
>>> environment, which happens to be Docker Swarm.
>>>
>>> Problem: The Keycloak service is being restarted by Docker, presumably due
>>> to a crash. The logs indicate it gets so far through a Liquibase script and
>>> then fails.
>>>
>>> Here's the docker-compose.yml file that we are using for deployment
>>> purposes:
>>>
>>> https://gist.github.com/jmkgreen/b79f95c3eca2eac3fb66c66d12017f07
>>>
>>> Here's the log from MySQL:
>>>
>>> https://gist.github.com/jmkgreen/75b99fe98cf1d16a99895e78dae47cce
>>>
>>> Here's an initial log from Keycloak:
>>>
>>> https://gist.github.com/jmkgreen/96285800949b5c4f62c31caa3eba27ef
>>>
>>> Here's a further log from Keycloak once Docker has decided it needed to be
>>> restarted:
>>>
>>> https://gist.github.com/jmkgreen/2051ab14e470d1d46dabcfdd519d5c42
>>>
>>> As you can see, the MySQL server starts and is configured, since there is
>>> no data already present. All looks good. Keycloak eventually connects to
>>> MySQL and begins using Liquibase to roll through the schema changes, but
>>> crashes (how?), so the container as a whole crashes, forcing Docker to
>>> restart it, which simply happens over and over.
>>>
>>> FWIW I earlier created a Stack Overflow post which has us failing at a
>>> _different_ Liquibase change set:
>>>
>>> https://stackoverflow.com/questions/45466482/keycloak-will-not-start-due-to-liquibase-changelog-error?noredirect=1#comment77894983_45466482
>>>
>>> What I've posted in the Gists above occurred after I shut everything down
>>> and wiped the MySQL data directory of its contents in full.
>>>
>>> An aside - we have multiple projects working within Swarm using stack
>>> deployments with externally managed networks (as recommended by Docker) and
>>> GlusterFS volumes, without issue. In this particular case the only tangible
>>> difference is the use of the latest MySQL version, which other projects may
>>> not be using. We also do not have experience of WildFly-based software.
>>>
>>> Any ideas what I've done wrong?
>>>
>>> Thanks,
>>>
>>> James
>>> _______________________________________________
>>> keycloak-user mailing list
>>> keycloak-user at lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/keycloak-user
>>>
>>
>>
>
_______________________________________________
keycloak-user mailing list
keycloak-user at lists.jboss.org
https://lists.jboss.org/mailman/listinfo/keycloak-user

