[keycloak-user] Recommendations for protecting REST service with bearer token and basic auth
Juraci Paixão Kröhling
juraci at kroehling.de
Wed Nov 19 04:33:42 EST 2014
On 11/19/2014 09:33 AM, Stian Thorgersen wrote:
> ----- Original Message -----
>> From: "Juraci Paixão Kröhling" <juraci at kroehling.de>
>> To: keycloak-user at lists.jboss.org
>> Sent: Tuesday, November 18, 2014 4:36:11 PM
>> Subject: Re: [keycloak-user] Recommendations for protecting REST
>> service with bearer token and basic auth
>
> To obtain an access token, I'd still need to talk with the Auth
> server and then, based on the response (i.e., synchronously), send a
> request with a bearer token to the service. This is not viable when
> the client sends several (thousands of) requests to the service.
>
>> Why does the shell script have to talk to the auth server for
>> every request? It should cache the token, not the user's
>> credentials.
I have the strong feeling that I'm missing something very fundamental
here, so I'd be very glad if you could correct me where I'm wrong.

I've had more time to think about this, and you are right: caching the
tokens is pretty much the only solution that could make this work. But
still, it's not optimal. As I see it, if I have one "business" script,
I need two extra scripts.

First script:
- User runs the bash script for the first time.
- The script generates a URL for the user to open in a browser.
- The user copies the code shown by the Keycloak server to the local
  file system.
- The script exchanges this code for a refresh token and stores the
  refresh token on the local system (sketched below).
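
Something like the following is what I have in mind for that last
step. The endpoint path, realm, client id, redirect URI and file
locations are all assumptions about my setup, and the token endpoint
path may well differ between Keycloak versions:

  #!/bin/bash
  # First script: exchange the code the user saved for tokens.
  KC_BASE="https://keycloak.example.com/auth"
  REALM="demo"
  CLIENT_ID="bash-client"
  REDIRECT_URI="http://localhost/callback"

  # The code the user copied from the browser into this file.
  CODE=$(cat /var/lib/myapp/code.txt)

  curl -s "$KC_BASE/realms/$REALM/protocol/openid-connect/token" \
       -d "grant_type=authorization_code" \
       -d "code=$CODE" \
       -d "client_id=$CLIENT_ID" \
       -d "redirect_uri=$REDIRECT_URI" \
       > /var/lib/myapp/tokens.json
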
Second script:
- Runs every N minutes, where N is smaller than the access token
  lifespan in minutes, refreshing the access and refresh tokens and
  writing them somewhere on the local file system (see below).
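
Roughly, with the same assumed endpoint, client and file locations:

  #!/bin/bash
  # Second script: cron job that refreshes the cached tokens. Run it
  # every N minutes, with N smaller than the access token lifespan.
  KC_BASE="https://keycloak.example.com/auth"
  REALM="demo"
  CLIENT_ID="bash-client"
  TOKENS="/var/lib/myapp/tokens.json"

  REFRESH_TOKEN=$(python -c \
      'import json,sys; print(json.load(sys.stdin)["refresh_token"])' \
      < "$TOKENS")

  # Write to a temporary file first, then rename it into place.
  curl -s "$KC_BASE/realms/$REALM/protocol/openid-connect/token" \
       -d "grant_type=refresh_token" \
       -d "refresh_token=$REFRESH_TOKEN" \
       -d "client_id=$CLIENT_ID" \
       > "$TOKENS.tmp" && mv "$TOKENS.tmp" "$TOKENS"
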
Third script (my business):
- Reads the token from the local file system and calls the backend
  (see the sketch below).

The third script might still fail if it tries to read the token while
the second script is writing it, but this is manageable.
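
The business script then only attaches whatever access token is
currently cached; something along these lines, with the backend URL
made up:

  #!/bin/bash
  # Third script: read the cached access token and call the backend.
  TOKENS="/var/lib/myapp/tokens.json"
  ACCESS_TOKEN=$(python -c \
      'import json,sys; print(json.load(sys.stdin)["access_token"])' \
      < "$TOKENS")

  curl -s -H "Authorization: Bearer $ACCESS_TOKEN" \
       "https://backend.example.com/api/whatever"

The read-while-writing race also largely disappears if the second
script writes to a temporary file and renames it into place, as the
rename is atomic within the same file system.
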
Now, how do I handle a multi-host scenario? The user should hopefully
not need to execute the first script on each host. For that, some sort
of token propagation among the hosts would need to exist. In other
words, during peak time, when I spawn more worker hosts, the new hosts
would only need to execute scripts 2 and 3.

What I'm proposing is something like this:
- The user accesses the "my account" page.
- Copies a token/code/secret/key to somewhere on the system.
- The bash script reads it and uses it for each backend request,
  knowing that it will fail only when the user actively revokes this
  key from their account (roughly as sketched below).

Thus, there is no need for the first or second script, and no need for
any token propagation. I can just spawn new hosts at any time, as the
key could be stored in the host's kickstart file.
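
To be clear, I don't know of Keycloak offering such a key today; this
is only a sketch of what the host side could look like if it existed,
with the file path, backend URL and the way the key is presented all
made up:

  #!/bin/bash
  # Hypothetical: a long-lived, revocable key copied from the account
  # page and dropped onto the host by its kickstart file.
  API_KEY=$(cat /etc/myapp/backend.key)

  # How the key would be presented to the backend is an open question;
  # a bearer-style header is just the simplest to picture.
  curl -s -H "Authorization: Bearer $API_KEY" \
       "https://backend.example.com/api/whatever"
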
>
> That's without mentioning the difficulties in parsing tokens via a
> shell script.
>
>> Why does the shell script have to parse the token? Does it not
>> just pass it on to the REST services it invokes?
>
My bad: not "parsing tokens" but "parsing JSON" (when getting the
access token out of a refresh token response).
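
It's solvable, of course; jq or a python one-liner does the job, it's
just one more thing every host needs:

  # With jq:
  ACCESS_TOKEN=$(jq -r '.access_token' < /var/lib/myapp/tokens.json)

  # Or with whatever python is already installed:
  ACCESS_TOKEN=$(python -c \
      'import json,sys; print(json.load(sys.stdin)["access_token"])' \
      < /var/lib/myapp/tokens.json)
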
- Juca.