I'm using the docker image jboss/keycloak-mysql:3.2.1.Final with an empty
db, and the error is reproducible with both mariadb:10.2 and mariadb:10.1.
The last time I played with Keycloak 2.x, it worked fine with mariadb:10.1.
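
For reference, this is roughly how the two containers are wired up (a
minimal sketch, not my exact commands; the MYSQL_* variables are the
official mariadb image's, and the "mysql" link alias is what the
keycloak-mysql image's README uses, as far as I recall):

  docker run -d --name mariadb \
    -e MYSQL_ROOT_PASSWORD=root \
    -e MYSQL_DATABASE=keycloak \
    -e MYSQL_USER=keycloak \
    -e MYSQL_PASSWORD=keycloak \
    mariadb:10.2

  # keycloak discovers the db through the link's environment variables
  docker run -d --name keycloak --link mariadb:mysql \
    jboss/keycloak-mysql:3.2.1.Final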
I've tried all the 3.x Final docker images with mariadb:10.2 and the error
was the same. With Keycloak 2.5.5.Final (and the 10.2 db) the error was:
OK, I have identified a problem and a workaround: jboss/keycloak-mysql:latest
works against mysql:5.5 but not against mysql:5.6 or mysql:5.7.

Here's the log working against mysql:5.5 - note the time taken to
initialise the database:
[ ... ]
13:37:38,210 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 57) WFLYCLINF0002: Started realmRevisions cache from keycloak container
13:37:38,219 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 57) WFLYCLINF0002: Started userRevisions cache from keycloak container
13:37:38,224 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 57) WFLYCLINF0002: Started authorizationRevisions cache from keycloak container
13:37:42,640 INFO [org.keycloak.connections.jpa.updater.liquibase.LiquibaseJpaUpdaterProvider] (ServerService Thread Pool -- 57) Initializing database schema. Using changelog META-INF/jpa-changelog-master.xml
13:41:13,725 INFO [org.hibernate.jpa.internal.util.LogHelper] (ServerService Thread Pool -- 57) HHH000204: Processing PersistenceUnitInfo [
    name: keycloak-default
    ...]
13:41:13,781 INFO [org.hibernate.Version] (ServerService Thread Pool -- 57) HHH000412: Hibernate Core {5.0.7.Final}
13:41:13,782 INFO [org.hibernate.cfg.Environment] (ServerService Thread Pool -- 57) HHH000206: hibernate.properties not found
[ ... ]
That's roughly three and a half minutes (13:37:42 to 13:41:13) just to
initialise the schema against an empty database!
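
For now the workaround on my side is to pin the database image in the
compose file, something like this (a sketch; the service name and
credentials are illustrative):

  services:
    mysql:
      image: mysql:5.5  # 5.6/5.7 trip the 300s management timeout here
      environment:
        MYSQL_ROOT_PASSWORD: root
        MYSQL_DATABASE: keycloak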
On 3 August 2017 at 13:00, John Bartko <john.bartko(a)drillinginfo.com> wrote:
> I *think* that the timeout referred to by this error:
>
> WFLYCTL0348: Timeout after [300] seconds
>
> can be increased by specifying -Djboss.as.management.blocking.timeout=###
> in the java options.
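>
> For example, via the container environment in a compose file (an
> untested sketch; note that JAVA_OPTS set this way replaces the image's
> defaults, so carry those over as well, and 900 is just an example):
>
>   environment:
>     JAVA_OPTS: >-
>       -Xms64m -Xmx512m
>       -Djboss.as.management.blocking.timeout=900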
>
> I suspect that when a liquibase transaction gets abruptly stopped like
> that, subsequent attempts to use the same database could result in
> "table already exists" errors.
>
> When performing a 1.9.x -> 2.5.x schema update, I found the following
> transaction timeout also needed to be increased beyond its default
> value of 300:
>
> /subsystem=transactions:write-attribute(name=default-timeout,value=###)
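>
> For example, against a running server (a sketch; assumes the standard
> WildFly layout and a management user, and 900 is just an illustrative
> value):
>
>   $JBOSS_HOME/bin/jboss-cli.sh --connect \
>     --command="/subsystem=transactions:write-attribute(name=default-timeout,value=900)"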
>
>
> Hope that helps,
> - John Bartko
> ------------------------------
> *From:* keycloak-user-bounces(a)lists.jboss.org
> <keycloak-user-bounces(a)lists.jboss.org> on behalf of James Green
> <james.mk.green(a)gmail.com>
> *Sent:* Thursday, August 3, 2017 6:01:46 AM
> *To:* Marko Strukelj
> *Cc:* keycloak-user
> *Subject:* Re: [keycloak-user] Clean Install with MySQL - Keycloak
> restarts itself due to liquibase errors - Docker Swarm environment
>
> Looks like a timeout causes an initial crash, then the liquibase
> crashes begin:
>
> https://gist.github.com/jmkgreen/4a474f1b97d8cbea5bf77a6f475ec78c
>
> Unsure what is actually happening that gets timed out, though - there
> is mention of an HTTP interface, but is that a repercussion of
> something deeper?
>
> Thanks,
>
> James
>
>
> On 3 August 2017 at 11:01, Marko Strukelj <mstrukel(a)redhat.com> wrote:
>
> > Hmm, grasping at straws: I would try a previous version of Keycloak
> > to rule out the possibility of a regression, then I would try a
> > different version of MySQL, then I would try a locally running
> > instance of Keycloak against a containerised MySQL ...
> >
> > On Thu, Aug 3, 2017 at 11:36 AM, Marko Strukelj
> > <mstrukel(a)redhat.com> wrote:
> >
> > > There's your error in line 237 of the first log:
> > >
> > > 08:53:26,263 ERROR [org.keycloak.connections.jpa.updater.liquibase.conn.DefaultLiquibaseConnectionProvider]
> > > (ServerService Thread Pool -- 52) Change Set
> > > META-INF/jpa-changelog-1.7.0.xml::1.7.0::bburke@redhat.com failed.
> > > Error: Table 'KEYCLOAK_GROUP' already exists [Failed SQL: CREATE TABLE
> > > keycloak.KEYCLOAK_GROUP (ID VARCHAR(36) NOT NULL, NAME VARCHAR(255) NULL,
> > > PARENT_GROUP VARCHAR(36) NULL, REALM_ID VARCHAR(36) NULL)]:
> > > liquibase.exception.DatabaseException: Table 'KEYCLOAK_GROUP' already
> > > exists [Failed SQL: CREATE TABLE keycloak.KEYCLOAK_GROUP (ID VARCHAR(36)
> > > NOT NULL, NAME VARCHAR(255) NULL, PARENT_GROUP VARCHAR(36) NULL,
> > > REALM_ID VARCHAR(36) NULL)]
> > >
> > > The question now is why that table exists already if you started
> > > with an empty database.
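> > >
> > > One way to rule out leftovers is to drop and recreate the schema by
> > > hand before Keycloak's first start (a sketch; adjust the credentials
> > > to your setup):
> > >
> > >   mysql -u root -p -e "DROP DATABASE IF EXISTS keycloak; CREATE DATABASE keycloak;"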
> > >
> > > On Thu, Aug 3, 2017 at 11:22 AM, James Green
> > > <james.mk.green(a)gmail.com> wrote:
> > >
> > > > Unsure what I'm doing wrong here. Circumstance: we've spotted
> > > > KeyCloak, have reason to be interested, so are deploying an
> > > > instance into our test environment which happens to be Docker
> > > > Swarm.
> > > > environment which happens to be Docker Swarm.
> > > >
> > > > Problem: the KeyCloak service is being restarted by Docker,
> > > > presumably due to a crash. The logs indicate it gets partway
> > > > through a liquibase script and then fails.
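> > > >
> > > > (The restart loop shows up with something like "docker service ps
> > > > keycloak" - the service name here is illustrative - which lists
> > > > each failed task and its exit state.)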
> > > >
> > > > Here's the docker-compose.yml file that we are using for
> > > > deployment purposes:
> > > >
> > > > https://gist.github.com/jmkgreen/b79f95c3eca2eac3fb66c66d12017f07
> > > >
> > > > Here's the log from MySQL:
> > > >
> > > > https://gist.github.com/jmkgreen/75b99fe98cf1d16a99895e78dae47cce
> > > >
> > > > Here's an initial log from KeyCloak:
> > > >
> > > > https://gist.github.com/jmkgreen/96285800949b5c4f62c31caa3eba27ef
> > > >
> > > > Here's a further log from KeyCloak once Docker has decided it
> > > > needed to be restarted:
> > > >
> > > > https://gist.github.com/jmkgreen/2051ab14e470d1d46dabcfdd519d5c42
> > > >
> > > > As you can see, the MySQL server starts and is configured due to
> > > > there being no data already present. All looks good. KeyCloak
> > > > eventually connects to MySQL and begins using Liquibase to roll
> > > > through transitions, but crashes (how?), and thus the container
> > > > overall crashes, forcing Docker to restart it, which merely
> > > > happens over and over.
> > > >
> > > > FWIW I earlier created a StackOverflow post which has us at a
> > > > _different_ liquibase change but also failing:
> > > >
> > > > https://stackoverflow.com/questions/45466482/keycloak-will-not-start-due-to-liquibase-changelog-error?noredirect=1#comment77894983_45466482
> > > >
> > > > What I've posted in the Gists above occurred after I shut
> > > > everything down and wiped the MySQL data directory of its
> > > > contents in full.
> > > >
> > > > An aside - we have multiple projects working within Swarm using
> > > > stack deployments with externally managed networks (as
> > > > recommended by Docker) and GlusterFS volumes, without issue. In
> > > > this particular case the only tangible difference is the use of
> > > > the latest MySQL version, which other projects may not be using.
> > > > We also have no experience of WildFly-based software.
> > > >
> > > > Any ideas what I've done wrong?
> > > >
> > > > Thanks,
> > > >
> > > > James
_______________________________________________
keycloak-user mailing list
keycloak-user(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/keycloak-user