[keycloak-dev] Using export and import for backup and restore

Stian Thorgersen sthorger at redhat.com
Tue Oct 1 04:10:05 EDT 2019


With regards to backup it's not just realm config that needs backup. There
are clients, roles, groups, users, etc.. It does take a lot of users before
import/export via JSON before it will take a very long time (hours?!?).

It is just not a good idea. Will not scale. And would require downtime of
the SSO service during backup.

Now to my question - shouldn't the db be installed by an operator in the
first place? Shouldn't the db operator do the backup?

On Tue, 1 Oct 2019, 09:40 Peter Braun, <pbraun at redhat.com> wrote:

> That's another option we consider. We could use Volume Snapshots [1] and
>> just backup the whole Postgresql data directory. This seems to be the
>> fastest option and I believe, the Integreately Team tried that before
>
>
> For Postgres we run pg_dump on the container and then back up the
> resulting file (by uploading it to S3). But we also back up PV data and
> Kubernetes resources.
>
> In the Operator use case, all modifications should be made through the
>> Operator. That means, we could implement a simple Mutex and prevent the
>> Operator from modifying anything until the backup is complete.
>>
>
> Thats a good point. But we still expose the Admin UI. I can see use cases
> where you want the Operator to install Keycloak and set up the Realms but
> then let users self manage their Realm in the Admin UI. But even in that
> case, the Operator could just block access during export/import (by
> removing the Route or redirecting it to a 'Backup in Progress' page).
>
> If there are some guarantees (are there any?) about the structure of the
>> JSON
>> backup, we could use it for complicated migration process
>>
>
> Thats my main concern. I suppose there is no Schema the output could be
> validated against? I don't know how much the Keycloak DB schema changes but
> i reckon that unless this is a supported feature and there is some metadata
> (like schema version etc.) it's hard to use it for a reliable backup.
>
> > * It's very very slow
>
> This can be a problem. If you run regular backups this could eat into your
> availability quite a bit.
>
>
>
> On Tue, Oct 1, 2019 at 8:19 AM Sebastian Laskawiec <slaskawi at redhat.com>
> wrote:
>
>> On Mon, Sep 30, 2019 at 3:31 PM Stian Thorgersen <sthorger at redhat.com>
>> wrote:
>>
>> > Export/import using JSON has a few significant disadvantages:
>> >
>> > * It's very very slow
>> > * It can not provide a consistent snapshot unless all writes are stopped
>> > during the export
>> >
>>
>> I believe any of those two would be a problem.
>>
>> In the Operator use case, all modifications should be made through the
>> Operator. That means, we could implement a simple Mutex and prevent the
>> Operator from modifying anything until the backup is complete.
>>
>>
>> >
>> > With that regards I very much doubt it would be the ideal solution.
>> >
>>
>> The more I'm thinking about it, the more I'm convinced that's using
>> export/import gives us the most flexibility.
>>
>> At first, we could simplify basic configuration of a Realm (and other CRDs
>> in the future as well). We would expose only basic settings and if anyone
>> wants to configure every small detail, he would need to prepare a full
>> JSON
>> and restore it - just like restoring a backup - the same mechanism. If
>> there are some guarantees (are there any?) about the structure of the JSON
>> backup, we could use it for complicated migration process - like
>> Integreately, where they need to migrate off the old Keycloak Operator to
>> a
>> new one. Also, the structure remains of the JSON file remains compatible
>> across Keycloak/RHSSO versions, we could use it for migrating our
>> customers
>> from Keycloak to RHSSO. Finally, this solution doesn't tie us up to a
>> particular database and its version. During an upgrade, we can wipe the
>> database up and just restore Keycloak from a JSON file.
>>
>>
>> >
>> > One question though can't backup be performed at the DB level, and then
>> be
>> > a requirement on the DB operator rather than the Keycloak operator?
>> >
>>
>> That's another option we consider. We could use Volume Snapshots [1] and
>> just backup the whole Postgresql data directory. This seems to be the
>> fastest option and I believe, the Integreately Team tried that before
>> (Peter, David - perhaps you could tell us more about it). However, it ties
>> us up to Postgresql in a specific version (as far as I know there are no
>> guarantees about migrating the data directory between Postgresql
>> versions).
>> My intuition tells me, this will be a problem in long-term.
>>
>> [1]
>>
>> https://kubernetes.io/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/
>>
>>
>> >
>> > On Mon, 30 Sep 2019 at 15:22, Sebastian Laskawiec <slaskawi at redhat.com>
>> > wrote:
>> >
>> >> Hey,
>> >>
>> >> In the next few days we'll be looking into implementing backup and
>> restore
>> >> functionality for the Keycloak Operator. One of the options we are
>> >> considering, is using an export/import functionality. An Operator could
>> >> export all realms into a JSON file and put it somewhere in a Persistent
>> >> Volume.
>> >>
>> >> I was wondering, what do you think about this approach? Are there any
>> >> guarantees around export/import functionality (especially with the
>> regards
>> >> to its format)? Also, would it work for exporting JSON file from
>> Keycloak
>> >> and importing it to RHSSO?
>> >>
>> >> Thanks,
>> >> Sebastian
>> >> _______________________________________________
>> >> keycloak-dev mailing list
>> >> keycloak-dev at lists.jboss.org
>> >> https://lists.jboss.org/mailman/listinfo/keycloak-dev
>> >>
>> >
>> _______________________________________________
>> keycloak-dev mailing list
>> keycloak-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/keycloak-dev
>>
>


More information about the keycloak-dev mailing list