From mposolda at redhat.com Wed Mar 1 06:02:24 2017
From: mposolda at redhat.com (Marek Posolda)
Date: Wed, 1 Mar 2017 12:02:24 +0100
Subject: [keycloak-dev] Arquillian testsuite: realm import per class now
Message-ID: <3e343b79-6ac5-fac2-ebf0-1e8d67c269e3@redhat.com>

So testsuite-arquillian is now using the realm import per class, similarly to the old testsuite. There is also just one adminClient and one testingClient per class now. This was identified as one of the two major bottlenecks (the second was phantomjs, which was changed to htmlUnit earlier). With both changes, running the arquillian testsuite takes 10 minutes instead of 36 on my laptop.

Quite a lot of changes were needed to achieve this, as many test methods relied on the fact that the realm is freshly imported and didn't clean up after themselves. I may not have fixed all the tests, especially those which are not executed during the default build (eg. cluster tests). If you find that your test is broken, you can do either of these:

- Fix the test to clean up after itself, so that no realm reimport is needed after every test. This is the preferred way :)

- Add this to your test class:

    @Override
    protected boolean isImportAfterEachMethod() {
        return true;
    }

  This is a fallback to the previous behaviour and will cause the realm import after each test method, as before. Hopefully this option can be removed after some time, once all tests are fixed :)

For now, I needed to use it for all adapter tests (added into AbstractAdapterTest), as some adapter tests were still failing and I don't have much time to investigate further.

Created https://issues.jboss.org/browse/KEYCLOAK-4517 .
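[Editor's note] The per-class vs. per-method import decision can be pictured with a small self-contained sketch. These stand-in classes are hypothetical, not the real testsuite base classes; only the `isImportAfterEachMethod()` hook name comes from the message above:

```java
// Sketch only: stand-ins illustrating how the isImportAfterEachMethod()
// hook chooses between import-per-class and the legacy import-per-method.
abstract class AbstractKeycloakTestSketch {
    // New default: the realm is imported once per class, not per method.
    protected boolean isImportAfterEachMethod() {
        return false;
    }

    // Called by the harness after each test method.
    String afterMethod() {
        return isImportAfterEachMethod() ? "re-import realm" : "keep realm";
    }
}

// A test that has not been cleaned up yet opts back into the old behaviour.
class LegacyTestSketch extends AbstractKeycloakTestSketch {
    @Override
    protected boolean isImportAfterEachMethod() {
        return true;
    }
}

public class Main {
    public static void main(String[] args) {
        System.out.println(new AbstractKeycloakTestSketch() {}.afterMethod()); // keep realm
        System.out.println(new LegacyTestSketch().afterMethod());              // re-import realm
    }
}
```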
Marek

From bburke at redhat.com Wed Mar 1 18:29:41 2017
From: bburke at redhat.com (Bill Burke)
Date: Wed, 1 Mar 2017 18:29:41 -0500
Subject: [keycloak-dev] min-time-between-jwks-requests Problems when running tests
Message-ID: <0eb86c71-7fe6-6474-b361-f384a90d4473@redhat.com>

Ok, I just spent 1.5 days on debugging a problem and I was ready to throw my laptop out of the window, I was getting so frustrated.

#1 I copied code from the arquillian adapter tests to deploy my own servlet. When running in IntelliJ, all logging messages by the servlet and OIDC adapters were eaten and never displayed.

#2 If you have a @Deployment it deploys it in @BeforeClass and only once for all tests run in the class.

#3 I recreate/destroy my realms for every test.

#4 The default "min-time-between-jwks-requests" is 10 seconds... Because my servlet doesn't get redeployed per test, the 1st test would set up the cache for the realm key for the servlet. The 2nd test would run; because the realms were recreated, there is a different key, but the "min-time-between-jwks-requests" was 10 seconds, so it wasn't updating the key and my logins would fail. This was extremely frustrating to debug because of #1 and because it only happened if I was running all tests in the class.

The workaround is to set "min-time-between-jwks-requests" to zero in your adapter configuration. This is an FYI just in case somebody else runs into this. Took me a while to figure out.

From sblanc at redhat.com Thu Mar 2 01:28:36 2017
From: sblanc at redhat.com (Sebastien Blanc)
Date: Thu, 02 Mar 2017 06:28:36 +0000
Subject: [keycloak-dev] min-time-between-jwks-requests Problems when running tests
In-Reply-To: <0eb86c71-7fe6-6474-b361-f384a90d4473@redhat.com>
References: <0eb86c71-7fe6-6474-b361-f384a90d4473@redhat.com>
Message-ID:

Yes, I hit the same issue when I wrote tests for the nodejs adapter; it also took me a while before I discovered it was because of "min-time-between-jwks-requests" :/

Le jeu. 2 mars 2017 à
01:07, Bill Burke a écrit :

> Ok, I just spent 1.5 days on debugging a problem and I was ready to
> throw my laptop out of the window, I was getting so frustrated.
>
> #1 I copied code from the arquillian adapter tests to deploy my own
> servlet. When running in IntelliJ, all logging messages by the servlet
> and OIDC adapters were eaten and never displayed.
>
> #2 If you have a @Deployment it deploys it in @BeforeClass and only once
> for all tests run in the class
>
> #3 I recreate/destroy my realms for every test
>
> #4 The default "min-time-between-jwks-requests" is 10 seconds... Because
> my servlet doesn't get redeployed per test, the 1st test would set up
> the cache for the realm key for the servlet. The 2nd test would run;
> because the realms were recreated, there is a different key, but the
> "min-time-between-jwks-requests" was 10 seconds, so it wasn't updating the
> key and my logins would fail. This was extremely frustrating to debug
> because of #1 and because it only happened if I was running all tests in
> the class.
>
> The workaround is to set "min-time-between-jwks-requests" to zero in
> your adapter configuration. This is an FYI just in case somebody else
> runs into this. Took me a while to figure out.
>
> _______________________________________________
> keycloak-dev mailing list
> keycloak-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/keycloak-dev
>

From mposolda at redhat.com Thu Mar 2 04:08:04 2017
From: mposolda at redhat.com (Marek Posolda)
Date: Thu, 2 Mar 2017 10:08:04 +0100
Subject: [keycloak-dev] min-time-between-jwks-requests Problems when running tests
In-Reply-To: <0eb86c71-7fe6-6474-b361-f384a90d4473@redhat.com>
References: <0eb86c71-7fe6-6474-b361-f384a90d4473@redhat.com>
Message-ID:

On 02/03/17 00:29, Bill Burke wrote:
> Ok, I just spent 1.5 days on debugging a problem and I was ready to
> throw my laptop out of the window, I was getting so frustrated.
>
> #1 I copied code from the arquillian adapter tests to deploy my own
> servlet. When running in IntelliJ, all logging messages by the servlet
> and OIDC adapters were eaten and never displayed.

Keycloak logging is disabled in testsuite/integration-arquillian/tests/base/src/test/resources/log4j.properties . AFAIK it's disabled just because running the whole testsuite produces very big logs, which caused issues with Travis.

I hope it's possible to fix that and have Keycloak logging enabled when running from the IDE, but still keep it disabled when running from the command line with the "mvn" command. Will try to look into it. Created: https://issues.jboss.org/browse/KEYCLOAK-4520

>
> #2 If you have a @Deployment it deploys it in @BeforeClass and only once
> for all tests run in the class
>
> #3 I recreate/destroy my realms for every test
>
> #4 The default "min-time-between-jwks-requests" is 10 seconds... Because
> my servlet doesn't get redeployed per test, the 1st test would set up
> the cache for the realm key for the servlet. The 2nd test would run;
> because the realms were recreated, there is a different key, but the
> "min-time-between-jwks-requests" was 10 seconds, so it wasn't updating the
> key and my logins would fail. This was extremely frustrating to debug
> because of #1 and because it only happened if I was running all tests in
> the class.
>
> The workaround is to set "min-time-between-jwks-requests" to zero in
> your adapter configuration. This is an FYI just in case somebody else
> runs into this. Took me a while to figure out.

Another possibility is to put private/public keys into your realm JSON. Then there are always the same keys and the same "kid", and the application doesn't need to re-download them.

FYI, with my latest changes, there is no realm reimport for every test for most of the tests (see the other thread I sent yesterday). But unfortunately this is not yet the case for the adapter tests (subclasses of AbstractAdapterTest)...
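[Editor's note] The workaround discussed in this thread lives in the adapter's keycloak.json. A rough sketch, where `min-time-between-jwks-requests` is the real adapter option being discussed and every other value (realm, resource, URL) is a made-up placeholder:

```json
{
  "realm": "test-realm",
  "resource": "my-servlet",
  "auth-server-url": "http://localhost:8080/auth",
  "public-client": true,
  "min-time-between-jwks-requests": 0
}
```

With the value at 0, the adapter is allowed to re-fetch the realm's JWKS on every unknown "kid", so recreating the realm (and thus its keys) between tests no longer breaks logins.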
Marek

> _______________________________________________
> keycloak-dev mailing list
> keycloak-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/keycloak-dev

From mposolda at redhat.com Thu Mar 2 06:26:13 2017
From: mposolda at redhat.com (Marek Posolda)
Date: Thu, 2 Mar 2017 12:26:13 +0100
Subject: [keycloak-dev] min-time-between-jwks-requests Problems when running tests
In-Reply-To:
References: <0eb86c71-7fe6-6474-b361-f384a90d4473@redhat.com>
Message-ID: <1ed3f0c4-ed21-0673-76ee-b15a1f9f0647@redhat.com>

On 02/03/17 10:08, Marek Posolda wrote:
> On 02/03/17 00:29, Bill Burke wrote:
>> Ok, I just spent 1.5 days on debugging a problem and I was ready to
>> throw my laptop out of the window, I was getting so frustrated.
>>
>> #1 I copied code from the arquillian adapter tests to deploy my own
>> servlet. When running in IntelliJ, all logging messages by the servlet
>> and OIDC adapters were eaten and never displayed.
> Keycloak logging is disabled in
> testsuite/integration-arquillian/tests/base/src/test/resources/log4j.properties
> . AFAIK it's disabled just because running the whole testsuite produces very
> big logs, which caused issues with Travis.
>
> I hope it's possible to fix that and have Keycloak logging enabled when
> running from the IDE, but still keep it disabled when running from the
> command line with the "mvn" command. Will try to look into it. Created:
> https://issues.jboss.org/browse/KEYCLOAK-4520

Fixed now. Logging for both server and adapters is enabled now when running tests from the IDE.

Marek

>
>> #2 If you have a @Deployment it deploys it in @BeforeClass and only once
>> for all tests run in the class
>>
>> #3 I recreate/destroy my realms for every test
>>
>> #4 The default "min-time-between-jwks-requests" is 10 seconds... Because
>> my servlet doesn't get redeployed per test, the 1st test would set up
>> the cache for the realm key for the servlet. The 2nd test would run;
>> because the realms were recreated, there is a different key, but the
>> "min-time-between-jwks-requests" was 10 seconds, so it wasn't updating the
>> key and my logins would fail. This was extremely frustrating to debug
>> because of #1 and because it only happened if I was running all tests in
>> the class.
>>
>> The workaround is to set "min-time-between-jwks-requests" to zero in
>> your adapter configuration. This is an FYI just in case somebody else
>> runs into this. Took me a while to figure out.
> Another possibility is to put private/public keys into your realm JSON.
> Then there are always the same keys and the same "kid", and the application
> doesn't need to re-download them.
>
> FYI, with my latest changes, there is no realm reimport for every test
> for most of the tests (see the other thread I sent yesterday). But
> unfortunately this is not yet the case for the adapter tests (subclasses of
> AbstractAdapterTest)...
>
> Marek
>> _______________________________________________
>> keycloak-dev mailing list
>> keycloak-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/keycloak-dev
>
> _______________________________________________
> keycloak-dev mailing list
> keycloak-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/keycloak-dev

From martin.hardselius at gmail.com Thu Mar 2 08:29:50 2017
From: martin.hardselius at gmail.com (Martin Hardselius)
Date: Thu, 02 Mar 2017 13:29:50 +0000
Subject: [keycloak-dev] Bug / un-intended behaviour in jpa RealmAdapter?
Message-ID:

The below piece of code seems faulty to me. Shouldn't it pass model.getName() as the second arg to getIdentityProviderMapperByName instead?
org.keycloak.models.jpa.RealmAdapter

@Override
public IdentityProviderMapperModel addIdentityProviderMapper(IdentityProviderMapperModel model) {
    if (getIdentityProviderMapperByName(model.getIdentityProviderAlias(), model.getIdentityProviderMapper()) != null) {
        throw new RuntimeException("identity provider mapper name must be unique per identity provider");
    }
    // ...
}

Martin

From mposolda at redhat.com Thu Mar 2 09:44:24 2017
From: mposolda at redhat.com (Marek Posolda)
Date: Thu, 2 Mar 2017 15:44:24 +0100
Subject: [keycloak-dev] Client adapters backwards compatibility
Message-ID: <97bb090d-5b16-91cc-2d6b-16b4c73cec70@redhat.com>

It looks that we should support the latest Keycloak server with older versions of Keycloak adapters.

So for some corner scenarios, I wonder if we should add the switch to the ClientModel and admin console like "Adapter version". This switch will be available for both OIDC and SAML clients, but will be useful just for the clients which use the Keycloak adapter. It will be useful to specify the version of the Keycloak client adapter which a particular client application is using. WDYT?

The reason why I fell into this is a reported RHSSO bug.

Long story short: When the Keycloak SAML 1.9.8 adapter is used with "isPassive=true", then the Keycloak 2.5.4 server returns it the valid error response. However the 1.9.8 adapter has a bug https://issues.jboss.org/browse/KEYCLOAK-4264 and it throws an NPE when it receives such a response.

With the SAML 1.9.8 adapter + 1.9.8 server, the Keycloak server returned an invalid error response, however the 1.9.8 adapter was able to handle this invalid response without throwing any exception.

By adding the switch to the ClientModel, we de facto allow the adapter to say: "Please return me the broken response, because I am not able to handle the valid response."

Note that this is a bug in the adapter, so it will be better to ask customers to rather upgrade their SAML adapters to the newest version. On the other hand, we claim to support backwards compatibility.
So should we add the switch or not? WDYT?

Marek

From mposolda at redhat.com Thu Mar 2 09:53:34 2017
From: mposolda at redhat.com (Marek Posolda)
Date: Thu, 2 Mar 2017 15:53:34 +0100
Subject: [keycloak-dev] Bug / un-intended behaviour in jpa RealmAdapter?
In-Reply-To:
References:
Message-ID: <76e573a6-d4d3-acf7-5fb7-cc89614be69b@redhat.com>

Yes, it is a bug. Thanks! So right now it's possible to add more identityProvider mappers with the same name into a single identityProvider.

Could you please create a JIRA for it? Or, even better, also send a PR? It would also be good to add a unit test into IdentityProviderTest checking that creating more mappers with the same name fails once the fix is added. Just in case you are willing to contribute that :)

Thanks,
Marek

On 02/03/17 14:29, Martin Hardselius wrote:
> The below piece of code seems faulty to me. Shouldn't it pass
> model.getName() as the second arg to getIdentityProviderMapperByName
> instead?
>
> org.keycloak.models.jpa.RealmAdapter
>
> @Override
> public IdentityProviderMapperModel
> addIdentityProviderMapper(IdentityProviderMapperModel model) {
> if (getIdentityProviderMapperByName(model.getIdentityProviderAlias(),
> model.getIdentityProviderMapper()) != null) {
> throw new RuntimeException("identity provider mapper name must be
> unique per identity provider");
> }
> // ...
> }
>
> Martin
> _______________________________________________
> keycloak-dev mailing list
> keycloak-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/keycloak-dev

From martin.hardselius at gmail.com Thu Mar 2 09:58:51 2017
From: martin.hardselius at gmail.com (Martin Hardselius)
Date: Thu, 02 Mar 2017 14:58:51 +0000
Subject: [keycloak-dev] Bug / un-intended behaviour in jpa RealmAdapter?
In-Reply-To: <76e573a6-d4d3-acf7-5fb7-cc89614be69b@redhat.com>
References: <76e573a6-d4d3-acf7-5fb7-cc89614be69b@redhat.com>
Message-ID:

Yep. I'll JIRA it and send a PR.

On Thu, 2 Mar 2017 at 15:53 Marek Posolda wrote:

> Yes, it is a bug. Thanks!
>
> So now it's possible to add more identityProvider mappers with the same name
> into a single identityProvider.
>
> Could you please create a JIRA for it? Or, even better, also send a PR? It
> would also be good to add a unit test into IdentityProviderTest checking that
> creating more mappers with the same name fails once the fix is added. Just in
> case you are willing to contribute that :)
>
> Thanks,
> Marek
>
> On 02/03/17 14:29, Martin Hardselius wrote:
>
> The below piece of code seems faulty to me. Shouldn't it pass
> model.getName() as the second arg to getIdentityProviderMapperByName
> instead?
>
> org.keycloak.models.jpa.RealmAdapter
>
> @Override
> public IdentityProviderMapperModel
> addIdentityProviderMapper(IdentityProviderMapperModel model) {
> if (getIdentityProviderMapperByName(model.getIdentityProviderAlias(),
> model.getIdentityProviderMapper()) != null) {
> throw new RuntimeException("identity provider mapper name must be
> unique per identity provider");
> }
> // ...
> }
>
> Martin
> _______________________________________________
> keycloak-dev mailing list
> keycloak-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/keycloak-dev

From ssilvert at redhat.com Thu Mar 2 13:12:51 2017
From: ssilvert at redhat.com (Stan Silvert)
Date: Thu, 2 Mar 2017 13:12:51 -0500
Subject: [keycloak-dev] Client adapters backwards compatibility
In-Reply-To: <97bb090d-5b16-91cc-2d6b-16b4c73cec70@redhat.com>
References: <97bb090d-5b16-91cc-2d6b-16b4c73cec70@redhat.com>
Message-ID: <6b68b11e-16a1-f34b-f502-08fb9405face@redhat.com>

The way a protocol usually implements this is not for the server to keep track of versions. Rather, the client simply transmits its version as part of the protocol. Then the server knows what it is dealing with and acts accordingly.

Also, this has the advantage of allowing automatic auditing of client versions without manually setting things up from the server side.
On 3/2/2017 9:44 AM, Marek Posolda wrote: > It looks that we should support latest Keycloak server with older > versions of Keycloak adapters. > > So for some corner scenarios, I wonder if we should add the switch to > the ClientModel and admin console like "Adapter version" . This switch > will be available for both OIDC and SAML clients, but will be useful > just for the clients, which uses Keycloak adapter. It will be useful to > specify the version of Keycloak client adapter, which particular client > application is using. WDYT? > > The reason why I felt into this is a reported RHSSO bug. > > Long-story short: When Keycloak SAML 1.9.8 adapter is used with > "isPassive=true", then Keycloak 2.5.4 server returns him the valid error > response. However 1.9.8 adapter has a bug > https://issues.jboss.org/browse/KEYCLOAK-4264 and it throws NPE when it > receives such response. > > With SAML 1.9.8 adapter + 1.9.8 server, the Keycloak server returned > invalid error response, however 1.9.8 adapter was able to handle this > invalid response without throwing any exception. > > > By adding the switch to the ClientModel, we defacto allow adapter to > say: "Please return me broken response, because I am not able to handle > valid response." > > Note that this is bug in adapter, so it will be better to ask customers > to rather upgrade their SAML adapters to newest version. On the other > hand, we claim to support backwards compatibility. > > So should we add the switch or not? WDYT? 
> > Marek > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From bburke at redhat.com Thu Mar 2 14:28:07 2017 From: bburke at redhat.com (Bill Burke) Date: Thu, 2 Mar 2017 14:28:07 -0500 Subject: [keycloak-dev] Client adapters backwards compatibility In-Reply-To: <97bb090d-5b16-91cc-2d6b-16b4c73cec70@redhat.com> References: <97bb090d-5b16-91cc-2d6b-16b4c73cec70@redhat.com> Message-ID: <1fba2fd6-ec0e-a275-2f72-4049d62f5f3a@redhat.com> Add switch IMO. It should have a select box that defaults to "latest". On 3/2/17 9:44 AM, Marek Posolda wrote: > It looks that we should support latest Keycloak server with older > versions of Keycloak adapters. > > So for some corner scenarios, I wonder if we should add the switch to > the ClientModel and admin console like "Adapter version" . This switch > will be available for both OIDC and SAML clients, but will be useful > just for the clients, which uses Keycloak adapter. It will be useful to > specify the version of Keycloak client adapter, which particular client > application is using. WDYT? > > The reason why I felt into this is a reported RHSSO bug. > > Long-story short: When Keycloak SAML 1.9.8 adapter is used with > "isPassive=true", then Keycloak 2.5.4 server returns him the valid error > response. However 1.9.8 adapter has a bug > https://issues.jboss.org/browse/KEYCLOAK-4264 and it throws NPE when it > receives such response. > > With SAML 1.9.8 adapter + 1.9.8 server, the Keycloak server returned > invalid error response, however 1.9.8 adapter was able to handle this > invalid response without throwing any exception. > > > By adding the switch to the ClientModel, we defacto allow adapter to > say: "Please return me broken response, because I am not able to handle > valid response." 
>
> Note that this is a bug in the adapter, so it will be better to ask customers
> to rather upgrade their SAML adapters to the newest version. On the other
> hand, we claim to support backwards compatibility.
>
> So should we add the switch or not? WDYT?
>
> Marek
>
> _______________________________________________
> keycloak-dev mailing list
> keycloak-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/keycloak-dev

From bburke at redhat.com Thu Mar 2 14:29:13 2017
From: bburke at redhat.com (Bill Burke)
Date: Thu, 2 Mar 2017 14:29:13 -0500
Subject: [keycloak-dev] Client adapters backwards compatibility
In-Reply-To: <6b68b11e-16a1-f34b-f502-08fb9405face@redhat.com>
References: <97bb090d-5b16-91cc-2d6b-16b4c73cec70@redhat.com> <6b68b11e-16a1-f34b-f502-08fb9405face@redhat.com>
Message-ID: <20fb72a1-303b-21e0-0784-a3664184b56a@redhat.com>

Yeah, Stan is right, but 2.5.x does not have this switch, so you wouldn't be able to tell the difference between 1.9.8 and 2.5.x.

On 3/2/17 1:12 PM, Stan Silvert wrote:
> The way a protocol usually implements this is not for the server to keep
> track of versions. Rather, the client simply transmits its version as
> part of the protocol. Then the server knows what it is dealing with and
> acts accordingly.
>
> Also, this has the advantage of allowing automatic auditing of client
> versions without manually setting things up from the server side.
>
> On 3/2/2017 9:44 AM, Marek Posolda wrote:
>> It looks that we should support the latest Keycloak server with older
>> versions of Keycloak adapters.
>>
>> So for some corner scenarios, I wonder if we should add the switch to
>> the ClientModel and admin console like "Adapter version". This switch
>> will be available for both OIDC and SAML clients, but will be useful
>> just for the clients which use the Keycloak adapter. It will be useful to
>> specify the version of the Keycloak client adapter which a particular client
>> application is using. WDYT?
>> >> The reason why I felt into this is a reported RHSSO bug. >> >> Long-story short: When Keycloak SAML 1.9.8 adapter is used with >> "isPassive=true", then Keycloak 2.5.4 server returns him the valid error >> response. However 1.9.8 adapter has a bug >> https://issues.jboss.org/browse/KEYCLOAK-4264 and it throws NPE when it >> receives such response. >> >> With SAML 1.9.8 adapter + 1.9.8 server, the Keycloak server returned >> invalid error response, however 1.9.8 adapter was able to handle this >> invalid response without throwing any exception. >> >> >> By adding the switch to the ClientModel, we defacto allow adapter to >> say: "Please return me broken response, because I am not able to handle >> valid response." >> >> Note that this is bug in adapter, so it will be better to ask customers >> to rather upgrade their SAML adapters to newest version. On the other >> hand, we claim to support backwards compatibility. >> >> So should we add the switch or not? WDYT? >> >> Marek >> >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From hmlnarik at redhat.com Fri Mar 3 03:15:15 2017 From: hmlnarik at redhat.com (Hynek Mlnarik) Date: Fri, 3 Mar 2017 09:15:15 +0100 Subject: [keycloak-dev] Client adapters backwards compatibility In-Reply-To: <1fba2fd6-ec0e-a275-2f72-4049d62f5f3a@redhat.com> References: <97bb090d-5b16-91cc-2d6b-16b4c73cec70@redhat.com> <1fba2fd6-ec0e-a275-2f72-4049d62f5f3a@redhat.com> Message-ID: Determination of client version from client message would not work for IdP-initiated SSO (there is no client message to determine version from), so +1. On Thu, Mar 2, 2017 at 8:28 PM, Bill Burke wrote: > Add switch IMO. It should have a select box that defaults to "latest". 
> > > On 3/2/17 9:44 AM, Marek Posolda wrote: >> It looks that we should support latest Keycloak server with older >> versions of Keycloak adapters. >> >> So for some corner scenarios, I wonder if we should add the switch to >> the ClientModel and admin console like "Adapter version" . This switch >> will be available for both OIDC and SAML clients, but will be useful >> just for the clients, which uses Keycloak adapter. It will be useful to >> specify the version of Keycloak client adapter, which particular client >> application is using. WDYT? >> >> The reason why I felt into this is a reported RHSSO bug. >> >> Long-story short: When Keycloak SAML 1.9.8 adapter is used with >> "isPassive=true", then Keycloak 2.5.4 server returns him the valid error >> response. However 1.9.8 adapter has a bug >> https://issues.jboss.org/browse/KEYCLOAK-4264 and it throws NPE when it >> receives such response. >> >> With SAML 1.9.8 adapter + 1.9.8 server, the Keycloak server returned >> invalid error response, however 1.9.8 adapter was able to handle this >> invalid response without throwing any exception. >> >> >> By adding the switch to the ClientModel, we defacto allow adapter to >> say: "Please return me broken response, because I am not able to handle >> valid response." >> >> Note that this is bug in adapter, so it will be better to ask customers >> to rather upgrade their SAML adapters to newest version. On the other >> hand, we claim to support backwards compatibility. >> >> So should we add the switch or not? WDYT? 
>> Marek
>>
>> _______________________________________________
>> keycloak-dev mailing list
>> keycloak-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/keycloak-dev
>
> _______________________________________________
> keycloak-dev mailing list
> keycloak-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/keycloak-dev

--
--Hynek

From mposolda at redhat.com Fri Mar 3 06:56:19 2017
From: mposolda at redhat.com (Marek Posolda)
Date: Fri, 3 Mar 2017 12:56:19 +0100
Subject: [keycloak-dev] Client adapters backwards compatibility
In-Reply-To:
References: <97bb090d-5b16-91cc-2d6b-16b4c73cec70@redhat.com> <1fba2fd6-ec0e-a275-2f72-4049d62f5f3a@redhat.com>
Message-ID: <15a735ca-5449-9849-aba5-8aaf699ab19b@redhat.com>

Ah yes. I was thinking about the client message vs. the switch, but it seems that the switch would be cleaner, then.

Thanks all for the feedback!
Marek

On 03/03/17 09:15, Hynek Mlnarik wrote:
> Determination of client version from client message would not work for
> IdP-initiated SSO (there is no client message to determine version
> from), so +1.
>
> On Thu, Mar 2, 2017 at 8:28 PM, Bill Burke wrote:
>> Add switch IMO. It should have a select box that defaults to "latest".
>>
>>
>> On 3/2/17 9:44 AM, Marek Posolda wrote:
>>> It looks that we should support the latest Keycloak server with older
>>> versions of Keycloak adapters.
>>>
>>> So for some corner scenarios, I wonder if we should add the switch to
>>> the ClientModel and admin console like "Adapter version". This switch
>>> will be available for both OIDC and SAML clients, but will be useful
>>> just for the clients which use the Keycloak adapter. It will be useful to
>>> specify the version of the Keycloak client adapter which a particular client
>>> application is using. WDYT?
>>>
>>> The reason why I fell into this is a reported RHSSO bug.
>>> >>> Long-story short: When Keycloak SAML 1.9.8 adapter is used with >>> "isPassive=true", then Keycloak 2.5.4 server returns him the valid error >>> response. However 1.9.8 adapter has a bug >>> https://issues.jboss.org/browse/KEYCLOAK-4264 and it throws NPE when it >>> receives such response. >>> >>> With SAML 1.9.8 adapter + 1.9.8 server, the Keycloak server returned >>> invalid error response, however 1.9.8 adapter was able to handle this >>> invalid response without throwing any exception. >>> >>> >>> By adding the switch to the ClientModel, we defacto allow adapter to >>> say: "Please return me broken response, because I am not able to handle >>> valid response." >>> >>> Note that this is bug in adapter, so it will be better to ask customers >>> to rather upgrade their SAML adapters to newest version. On the other >>> hand, we claim to support backwards compatibility. >>> >>> So should we add the switch or not? WDYT? >>> >>> Marek >>> >>> _______________________________________________ >>> keycloak-dev mailing list >>> keycloak-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > > From vmuzikar at redhat.com Fri Mar 3 07:23:42 2017 From: vmuzikar at redhat.com (Vaclav Muzikar) Date: Fri, 3 Mar 2017 13:23:42 +0100 Subject: [keycloak-dev] Client adapters backwards compatibility In-Reply-To: <15a735ca-5449-9849-aba5-8aaf699ab19b@redhat.com> References: <97bb090d-5b16-91cc-2d6b-16b4c73cec70@redhat.com> <1fba2fd6-ec0e-a275-2f72-4049d62f5f3a@redhat.com> <15a735ca-5449-9849-aba5-8aaf699ab19b@redhat.com> Message-ID: By the way, how many previous adapter versions do we need to support (i.e. test)? I thought only the previous major release (like now - 1.9.8 adapter with 2.5.x server). So, do we really need to have this switch permanently? 
Who knows, maybe with the next major SSO version the current 2.5.x adapters will work flawlessly. :) V. On Fri, Mar 3, 2017 at 12:56 PM, Marek Posolda wrote: > Ah yes. I was thinking about the client message vs. switch, but it seems > that switch be cleaner then. > > Thanks all for the feedback! > Marek > > On 03/03/17 09:15, Hynek Mlnarik wrote: > > Determination of client version from client message would not work for > > IdP-initiated SSO (there is no client message to determine version > > from), so +1. > > > > On Thu, Mar 2, 2017 at 8:28 PM, Bill Burke wrote: > >> Add switch IMO. It should have a select box that defaults to "latest". > >> > >> > >> On 3/2/17 9:44 AM, Marek Posolda wrote: > >>> It looks that we should support latest Keycloak server with older > >>> versions of Keycloak adapters. > >>> > >>> So for some corner scenarios, I wonder if we should add the switch to > >>> the ClientModel and admin console like "Adapter version" . This switch > >>> will be available for both OIDC and SAML clients, but will be useful > >>> just for the clients, which uses Keycloak adapter. It will be useful to > >>> specify the version of Keycloak client adapter, which particular client > >>> application is using. WDYT? > >>> > >>> The reason why I felt into this is a reported RHSSO bug. > >>> > >>> Long-story short: When Keycloak SAML 1.9.8 adapter is used with > >>> "isPassive=true", then Keycloak 2.5.4 server returns him the valid > error > >>> response. However 1.9.8 adapter has a bug > >>> https://issues.jboss.org/browse/KEYCLOAK-4264 and it throws NPE when > it > >>> receives such response. > >>> > >>> With SAML 1.9.8 adapter + 1.9.8 server, the Keycloak server returned > >>> invalid error response, however 1.9.8 adapter was able to handle this > >>> invalid response without throwing any exception. 
>
>>> By adding the switch to the ClientModel, we de facto allow the adapter to
>>> say: "Please return me the broken response, because I am not able to handle
>>> the valid response."
>>>
>>> Note that this is a bug in the adapter, so it will be better to ask customers
>>> to rather upgrade their SAML adapters to the newest version. On the other
>>> hand, we claim to support backwards compatibility.
>>>
>>> So should we add the switch or not? WDYT?
>>>
>>> Marek
>>>
>>> _______________________________________________
>>> keycloak-dev mailing list
>>> keycloak-dev at lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev
>> _______________________________________________
>> keycloak-dev mailing list
>> keycloak-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/keycloak-dev
>
>
> _______________________________________________
> keycloak-dev mailing list
> keycloak-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/keycloak-dev
>

--
Václav Muzikář
Quality Engineer
Keycloak / Red Hat Single Sign-On
Red Hat Czech s.r.o.

From ssilvert at redhat.com Fri Mar 3 09:11:11 2017
From: ssilvert at redhat.com (Stan Silvert)
Date: Fri, 3 Mar 2017 09:11:11 -0500
Subject: [keycloak-dev] Client adapters backwards compatibility
In-Reply-To:
References: <97bb090d-5b16-91cc-2d6b-16b4c73cec70@redhat.com> <1fba2fd6-ec0e-a275-2f72-4049d62f5f3a@redhat.com>
Message-ID:

On 3/3/2017 3:15 AM, Hynek Mlnarik wrote:
> Determination of client version from client message would not work for
> IdP-initiated SSO (there is no client message to determine version
> from), so +1.
I don't understand this. I don't know exactly how we implement IdP-initiated SSO, but if you are talking to a client then, by definition, you are exchanging messages. In every protocol I can remember, part of the handshake includes transmitting the protocol version. If you don't do this, it leads to problems.
With the switch, somebody has to manually manage the protocol version of potentially thousands of clients. If somebody sets the switch wrong, the client can't communicate. Out of the plethora of possible issues, how long will it take before somebody realizes that the version switch is wrong? Also, you might have the switch set incorrectly, but the client still seems to communicate fine because it doesn't run into unexpected messages. But you never know that the client is really using the wrong version. So what happens when there is a serious security problem in a version and you need to upgrade or disable certain clients? Then you don't know for sure what version each client is running. For the security reason alone, you need to know for sure what software the client is running. If the client always tells you its version, the server ALWAYS knows what to do. Otherwise, it's hit or miss. Granted, I haven't been "into" protocols for many, many years. But this seems like fundamental stuff. Feel free to talk me down. > > On Thu, Mar 2, 2017 at 8:28 PM, Bill Burke wrote: >> Add switch IMO. It should have a select box that defaults to "latest". >> >> >> On 3/2/17 9:44 AM, Marek Posolda wrote: >>> It looks that we should support latest Keycloak server with older >>> versions of Keycloak adapters. >>> >>> So for some corner scenarios, I wonder if we should add the switch to >>> the ClientModel and admin console like "Adapter version" . This switch >>> will be available for both OIDC and SAML clients, but will be useful >>> just for the clients, which uses Keycloak adapter. It will be useful to >>> specify the version of Keycloak client adapter, which particular client >>> application is using. WDYT? >>> >>> The reason why I felt into this is a reported RHSSO bug. >>> >>> Long-story short: When Keycloak SAML 1.9.8 adapter is used with >>> "isPassive=true", then Keycloak 2.5.4 server returns him the valid error >>> response. 
However 1.9.8 adapter has a bug >>> https://issues.jboss.org/browse/KEYCLOAK-4264 and it throws NPE when it >>> receives such response. >>> >>> With SAML 1.9.8 adapter + 1.9.8 server, the Keycloak server returned >>> invalid error response, however 1.9.8 adapter was able to handle this >>> invalid response without throwing any exception. >>> >>> >>> By adding the switch to the ClientModel, we defacto allow adapter to >>> say: "Please return me broken response, because I am not able to handle >>> valid response." >>> >>> Note that this is bug in adapter, so it will be better to ask customers >>> to rather upgrade their SAML adapters to newest version. On the other >>> hand, we claim to support backwards compatibility. >>> >>> So should we add the switch or not? WDYT? >>> >>> Marek >>> >>> _______________________________________________ >>> keycloak-dev mailing list >>> keycloak-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > > From bburke at redhat.com Fri Mar 3 09:35:01 2017 From: bburke at redhat.com (Bill Burke) Date: Fri, 3 Mar 2017 09:35:01 -0500 Subject: [keycloak-dev] Client adapters backwards compatibility In-Reply-To: References: <97bb090d-5b16-91cc-2d6b-16b4c73cec70@redhat.com> <1fba2fd6-ec0e-a275-2f72-4049d62f5f3a@redhat.com> Message-ID: <4eb27639-e847-eef1-cb4c-ec829bdb1f28@redhat.com> I agree with you in principle stan, but there are other issues: * For OIDC it is a get request with simple query parameters. There is no request object you can hide the version information in and you'd have to add another query param. * As Hynek said, for SAML IDP initiated SSO, there is no client request. User logs in then assertion is sent to client. * Keycloak 1.x and 2.x are already out the door and don't submit a version. 
2.5.x fixes the problem of the OP, 1.x doesn't so there is no way to tell the difference. * Client templates can be used to manage large sets of clients. On 3/3/17 9:11 AM, Stan Silvert wrote: > On 3/3/2017 3:15 AM, Hynek Mlnarik wrote: >> Determination of client version from client message would not work for >> IdP-initiated SSO (there is no client message to determine version >> from), so +1. > I don't understand this. I don't know exactly how we implement > Idp-initiated SSO, but if you are talking to a client then, by > definition, you are exchanging messages. In every protocol I can > remember, part of the handshake includes transmitting the protocol version. > > If you don't do this, it leads to problems. With the switch, somebody > has to manually manage the protocol version of potentially thousands of > clients. If somebody sets the switch wrong, the client can't > communicate. Out of the plethora of possible issues, how long will it > take before somebody realizes that the version switch is wrong? > > Also, you might have the switch set incorrectly, but the client still > seems to communicate fine because it doesn't run into unexpected > messages. But you never know that the client is really using the wrong > version. So what happens when there is a serious security problem in a > version and you need to upgrade or disable certain clients? Then you > don't know for sure what version each client is running. > > For the security reason alone, you need to know for sure what software > the client is running. If the client always tells you its version, the > server ALWAYS knows what to do. Otherwise, it's hit or miss. > > Granted, I haven't been "into" protocols for many, many years. But this > seems like fundamental stuff. Feel free to talk me down. >> On Thu, Mar 2, 2017 at 8:28 PM, Bill Burke wrote: >>> Add switch IMO. It should have a select box that defaults to "latest". 
>>> >>> >>> On 3/2/17 9:44 AM, Marek Posolda wrote: >>>> It looks that we should support latest Keycloak server with older >>>> versions of Keycloak adapters. >>>> >>>> So for some corner scenarios, I wonder if we should add the switch to >>>> the ClientModel and admin console like "Adapter version" . This switch >>>> will be available for both OIDC and SAML clients, but will be useful >>>> just for the clients, which uses Keycloak adapter. It will be useful to >>>> specify the version of Keycloak client adapter, which particular client >>>> application is using. WDYT? >>>> >>>> The reason why I felt into this is a reported RHSSO bug. >>>> >>>> Long-story short: When Keycloak SAML 1.9.8 adapter is used with >>>> "isPassive=true", then Keycloak 2.5.4 server returns him the valid error >>>> response. However 1.9.8 adapter has a bug >>>> https://issues.jboss.org/browse/KEYCLOAK-4264 and it throws NPE when it >>>> receives such response. >>>> >>>> With SAML 1.9.8 adapter + 1.9.8 server, the Keycloak server returned >>>> invalid error response, however 1.9.8 adapter was able to handle this >>>> invalid response without throwing any exception. >>>> >>>> >>>> By adding the switch to the ClientModel, we defacto allow adapter to >>>> say: "Please return me broken response, because I am not able to handle >>>> valid response." >>>> >>>> Note that this is bug in adapter, so it will be better to ask customers >>>> to rather upgrade their SAML adapters to newest version. On the other >>>> hand, we claim to support backwards compatibility. >>>> >>>> So should we add the switch or not? WDYT? 
>>>> >>>> Marek From hmlnarik at redhat.com Fri Mar 3 09:52:22 2017 From: hmlnarik at redhat.com (Hynek Mlnarik) Date: Fri, 3 Mar 2017 15:52:22 +0100 Subject: [keycloak-dev] Client adapters backwards compatibility In-Reply-To: References: <97bb090d-5b16-91cc-2d6b-16b4c73cec70@redhat.com> <1fba2fd6-ec0e-a275-2f72-4049d62f5f3a@redhat.com> Message-ID: IdP-initiated SSO [1] is part of the SAML 2.0 standard and basically sends a login response from the server (IdP) to the client (SP) without the client requesting it. Even though there is no handshake, the client has to handle it properly. In this particular case, the client never sends any message to the server, so the server needs to be "told" the version of the client explicitly. --Hynek [1] http://docs.oasis-open.org/security/saml/Post2.0/sstc-saml-tech-overview-2.0-cd-02.html#5.1.4.IdP-Initiated SSO: POST Binding|outline On Fri, Mar 3, 2017 at 3:11 PM, Stan Silvert wrote: > On 3/3/2017 3:15 AM, Hynek Mlnarik wrote: >> Determination of client version from client message would not work for >> IdP-initiated SSO (there is no client message to determine version >> from), so +1. > I don't understand this. I don't know exactly how we implement > Idp-initiated SSO, but if you are talking to a client then, by > definition, you are exchanging messages. In every protocol I can > remember, part of the handshake includes transmitting the protocol version. > > If you don't do this, it leads to problems. 
With the switch, somebody > has to manually manage the protocol version of potentially thousands of > clients. If somebody sets the switch wrong, the client can't > communicate. Out of the plethora of possible issues, how long will it > take before somebody realizes that the version switch is wrong? > > Also, you might have the switch set incorrectly, but the client still > seems to communicate fine because it doesn't run into unexpected > messages. But you never know that the client is really using the wrong > version. So what happens when there is a serious security problem in a > version and you need to upgrade or disable certain clients? Then you > don't know for sure what version each client is running. > > For the security reason alone, you need to know for sure what software > the client is running. If the client always tells you its version, the > server ALWAYS knows what to do. Otherwise, it's hit or miss. > > Granted, I haven't been "into" protocols for many, many years. But this > seems like fundamental stuff. Feel free to talk me down. >> >> On Thu, Mar 2, 2017 at 8:28 PM, Bill Burke wrote: >>> Add switch IMO. It should have a select box that defaults to "latest". >>> >>> >>> On 3/2/17 9:44 AM, Marek Posolda wrote: >>>> It looks that we should support latest Keycloak server with older >>>> versions of Keycloak adapters. >>>> >>>> So for some corner scenarios, I wonder if we should add the switch to >>>> the ClientModel and admin console like "Adapter version" . This switch >>>> will be available for both OIDC and SAML clients, but will be useful >>>> just for the clients, which uses Keycloak adapter. It will be useful to >>>> specify the version of Keycloak client adapter, which particular client >>>> application is using. WDYT? >>>> >>>> The reason why I felt into this is a reported RHSSO bug. 
>>>> >>>> Long-story short: When Keycloak SAML 1.9.8 adapter is used with >>>> "isPassive=true", then Keycloak 2.5.4 server returns him the valid error >>>> response. However 1.9.8 adapter has a bug >>>> https://issues.jboss.org/browse/KEYCLOAK-4264 and it throws NPE when it >>>> receives such response. >>>> >>>> With SAML 1.9.8 adapter + 1.9.8 server, the Keycloak server returned >>>> invalid error response, however 1.9.8 adapter was able to handle this >>>> invalid response without throwing any exception. >>>> >>>> >>>> By adding the switch to the ClientModel, we defacto allow adapter to >>>> say: "Please return me broken response, because I am not able to handle >>>> valid response." >>>> >>>> Note that this is bug in adapter, so it will be better to ask customers >>>> to rather upgrade their SAML adapters to newest version. On the other >>>> hand, we claim to support backwards compatibility. >>>> >>>> So should we add the switch or not? WDYT? >>>> >>>> Marek >>>> >>>> _______________________________________________ >>>> keycloak-dev mailing list >>>> keycloak-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> _______________________________________________ >>> keycloak-dev mailing list >>> keycloak-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev -- --Hynek From ssilvert at redhat.com Fri Mar 3 10:29:28 2017 From: ssilvert at redhat.com (Stan Silvert) Date: Fri, 3 Mar 2017 10:29:28 -0500 Subject: [keycloak-dev] Client adapters backwards compatibility In-Reply-To: <4eb27639-e847-eef1-cb4c-ec829bdb1f28@redhat.com> References: <97bb090d-5b16-91cc-2d6b-16b4c73cec70@redhat.com> <1fba2fd6-ec0e-a275-2f72-4049d62f5f3a@redhat.com> <4eb27639-e847-eef1-cb4c-ec829bdb1f28@redhat.com> Message-ID: 
<88a9046d-2642-b3b2-c934-77aba61188af@redhat.com> Versioning is always a pain. But it's much better for us to manage the pain than leave it up to customers. On 3/3/2017 9:35 AM, Bill Burke wrote: > I agree with you in principle stan, but there are other issues: > > * For OIDC it is a get request with simple query parameters. There is no > request object you can hide the version information in and you'd have to > add another query param. I don't think you need to hide the version. Just add another query param. Then if version is not present, you at least know it's an old client. > > * As Hynek said, for SAML IDP initiated SSO, there is no client > request. User logs in then assertion is sent to client. Would it make sense to send client version as part of the login? At some point, the client has to send a message to the server. You can always piggyback the version onto whatever message is sent. > > * Keycloak 1.x and 2.x are already out the door and don't submit a > version. 2.5.x fixes the problem of the OP, 1.x doesn't so there is no > way to tell the difference. Yea, I raised this issue in Brno last year. It's not too late to fix the problem going forward though. > > * Client templates can be used to manage large sets of clients. That might help, but it's not really a solution. > > > > On 3/3/17 9:11 AM, Stan Silvert wrote: >> On 3/3/2017 3:15 AM, Hynek Mlnarik wrote: >>> Determination of client version from client message would not work for >>> IdP-initiated SSO (there is no client message to determine version >>> from), so +1. >> I don't understand this. I don't know exactly how we implement >> Idp-initiated SSO, but if you are talking to a client then, by >> definition, you are exchanging messages. In every protocol I can >> remember, part of the handshake includes transmitting the protocol version. >> >> If you don't do this, it leads to problems. With the switch, somebody >> has to manually manage the protocol version of potentially thousands of >> clients. 
If somebody sets the switch wrong, the client can't >> communicate. Out of the plethora of possible issues, how long will it >> take before somebody realizes that the version switch is wrong? >> >> Also, you might have the switch set incorrectly, but the client still >> seems to communicate fine because it doesn't run into unexpected >> messages. But you never know that the client is really using the wrong >> version. So what happens when there is a serious security problem in a >> version and you need to upgrade or disable certain clients? Then you >> don't know for sure what version each client is running. >> >> For the security reason alone, you need to know for sure what software >> the client is running. If the client always tells you its version, the >> server ALWAYS knows what to do. Otherwise, it's hit or miss. >> >> Granted, I haven't been "into" protocols for many, many years. But this >> seems like fundamental stuff. Feel free to talk me down. >>> On Thu, Mar 2, 2017 at 8:28 PM, Bill Burke wrote: >>>> Add switch IMO. It should have a select box that defaults to "latest". >>>> >>>> >>>> On 3/2/17 9:44 AM, Marek Posolda wrote: >>>>> It looks that we should support latest Keycloak server with older >>>>> versions of Keycloak adapters. >>>>> >>>>> So for some corner scenarios, I wonder if we should add the switch to >>>>> the ClientModel and admin console like "Adapter version" . This switch >>>>> will be available for both OIDC and SAML clients, but will be useful >>>>> just for the clients, which uses Keycloak adapter. It will be useful to >>>>> specify the version of Keycloak client adapter, which particular client >>>>> application is using. WDYT? >>>>> >>>>> The reason why I felt into this is a reported RHSSO bug. >>>>> >>>>> Long-story short: When Keycloak SAML 1.9.8 adapter is used with >>>>> "isPassive=true", then Keycloak 2.5.4 server returns him the valid error >>>>> response. 
However 1.9.8 adapter has a bug >>>>> https://issues.jboss.org/browse/KEYCLOAK-4264 and it throws NPE when it >>>>> receives such response. >>>>> >>>>> With SAML 1.9.8 adapter + 1.9.8 server, the Keycloak server returned >>>>> invalid error response, however 1.9.8 adapter was able to handle this >>>>> invalid response without throwing any exception. >>>>> >>>>> >>>>> By adding the switch to the ClientModel, we defacto allow adapter to >>>>> say: "Please return me broken response, because I am not able to handle >>>>> valid response." >>>>> >>>>> Note that this is bug in adapter, so it will be better to ask customers >>>>> to rather upgrade their SAML adapters to newest version. On the other >>>>> hand, we claim to support backwards compatibility. >>>>> >>>>> So should we add the switch or not? WDYT? >>>>> >>>>> Marek >>>>> >>>>> _______________________________________________ >>>>> keycloak-dev mailing list >>>>> keycloak-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>>> _______________________________________________ >>>> keycloak-dev mailing list >>>> keycloak-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From mposolda at redhat.com Fri Mar 3 11:34:12 2017 From: mposolda at redhat.com (Marek Posolda) Date: Fri, 3 Mar 2017 17:34:12 +0100 Subject: [keycloak-dev] Client adapters backwards compatibility In-Reply-To: <88a9046d-2642-b3b2-c934-77aba61188af@redhat.com> References: <97bb090d-5b16-91cc-2d6b-16b4c73cec70@redhat.com> <1fba2fd6-ec0e-a275-2f72-4049d62f5f3a@redhat.com> <4eb27639-e847-eef1-cb4c-ec829bdb1f28@redhat.com> 
<88a9046d-2642-b3b2-c934-77aba61188af@redhat.com> Message-ID: On 03/03/17 16:29, Stan Silvert wrote: > Versioning is always a pain. But it's much better for us to manage the > pain than leave it up to customers. > > On 3/3/2017 9:35 AM, Bill Burke wrote: >> I agree with you in principle stan, but there are other issues: >> >> * For OIDC it is a get request with simple query parameters. There is no >> request object you can hide the version information in and you'd have to >> add another query param. > I don't think you need to hide the version. Just add another query > param. Then if version is not present, you at least know it's an old > client. Another query param for OIDC should work, and some similar extension for SAML should work too. The backwards compatibility is supported just for Keycloak adapters, so we can "extend" the protocols here IMO. Maybe it's even not too late to fix this in 2.x :) But still, for 1.x the version will never be present. So you still won't be able to differentiate between a 1.9.8 adapter (where we need to send the "broken" response) and an external SAML adapter (where we need to send the valid response). External OIDC and SAML adapters will obviously never send us the version, as that would be a Keycloak-specific extension. >> * As Hynek said, for SAML IDP initiated SSO, there is no client >> request. User logs in then assertion is sent to client. > Would it make sense to send client version as part of the login? > > At some point, the client has to send a message to the server. You can > always piggyback the version onto whatever message is sent. Maybe the possibility here is to send the message at deployment time (ServletContextListener?). But that has some other potential issues (eg. the application being deployed at a time when the KC server is not yet running, which would mean an error etc.). 
There are requests from adapters to the server to download keys, but I'm not sure whether to mix the version into those (and keys can still be hardcoded in the adapter config, so this one is not reliable either). In summary, I am also more keen on the switch. Note that we're talking about the version of Keycloak (RHSSO), not the version of the protocol (OIDC or SAML). So in practice, the Keycloak server should always send the correct response according to the protocol. The only exception I can see is some bug in an old adapter which is not able to handle correct responses according to the OIDC or SAML protocols. Which is exactly the case of this particular RHSSO bug. People should rather be told to update their adapters, and the switch will be a backup just in case updating adapters is too much pain for them. In case there is a security issue in the adapter, customers should be told to update their adapters anyway. Marek >> * Keycloak 1.x and 2.x are already out the door and don't submit a >> version. 2.5.x fixes the problem of the OP, 1.x doesn't so there is no >> way to tell the difference. > Yea, I raised this issue in Brno last year. It's not too late to fix > the problem going forward though. >> * Client templates can be used to manage large sets of clients. > That might help, but it's not really a solution. >> >> >> On 3/3/17 9:11 AM, Stan Silvert wrote: >>> On 3/3/2017 3:15 AM, Hynek Mlnarik wrote: >>>> Determination of client version from client message would not work for >>>> IdP-initiated SSO (there is no client message to determine version >>>> from), so +1. >>> I don't understand this. I don't know exactly how we implement >>> Idp-initiated SSO, but if you are talking to a client then, by >>> definition, you are exchanging messages. In every protocol I can >>> remember, part of the handshake includes transmitting the protocol version. >>> >>> If you don't do this, it leads to problems. 
With the switch, somebody >>> has to manually manage the protocol version of potentially thousands of >>> clients. If somebody sets the switch wrong, the client can't >>> communicate. Out of the plethora of possible issues, how long will it >>> take before somebody realizes that the version switch is wrong? >>> >>> Also, you might have the switch set incorrectly, but the client still >>> seems to communicate fine because it doesn't run into unexpected >>> messages. But you never know that the client is really using the wrong >>> version. So what happens when there is a serious security problem in a >>> version and you need to upgrade or disable certain clients? Then you >>> don't know for sure what version each client is running. >>> >>> For the security reason alone, you need to know for sure what software >>> the client is running. If the client always tells you its version, the >>> server ALWAYS knows what to do. Otherwise, it's hit or miss. >>> >>> Granted, I haven't been "into" protocols for many, many years. But this >>> seems like fundamental stuff. Feel free to talk me down. >>>> On Thu, Mar 2, 2017 at 8:28 PM, Bill Burke wrote: >>>>> Add switch IMO. It should have a select box that defaults to "latest". >>>>> >>>>> >>>>> On 3/2/17 9:44 AM, Marek Posolda wrote: >>>>>> It looks that we should support latest Keycloak server with older >>>>>> versions of Keycloak adapters. >>>>>> >>>>>> So for some corner scenarios, I wonder if we should add the switch to >>>>>> the ClientModel and admin console like "Adapter version" . This switch >>>>>> will be available for both OIDC and SAML clients, but will be useful >>>>>> just for the clients, which uses Keycloak adapter. It will be useful to >>>>>> specify the version of Keycloak client adapter, which particular client >>>>>> application is using. WDYT? >>>>>> >>>>>> The reason why I felt into this is a reported RHSSO bug. 
>>>>>> >>>>>> Long-story short: When Keycloak SAML 1.9.8 adapter is used with >>>>>> "isPassive=true", then Keycloak 2.5.4 server returns him the valid error >>>>>> response. However 1.9.8 adapter has a bug >>>>>> https://issues.jboss.org/browse/KEYCLOAK-4264 and it throws NPE when it >>>>>> receives such response. >>>>>> >>>>>> With SAML 1.9.8 adapter + 1.9.8 server, the Keycloak server returned >>>>>> invalid error response, however 1.9.8 adapter was able to handle this >>>>>> invalid response without throwing any exception. >>>>>> >>>>>> >>>>>> By adding the switch to the ClientModel, we defacto allow adapter to >>>>>> say: "Please return me broken response, because I am not able to handle >>>>>> valid response." >>>>>> >>>>>> Note that this is bug in adapter, so it will be better to ask customers >>>>>> to rather upgrade their SAML adapters to newest version. On the other >>>>>> hand, we claim to support backwards compatibility. >>>>>> >>>>>> So should we add the switch or not? WDYT? 
>>>>>> >>>>>> Marek >>>>>> >>>>>> _______________________________________________ >>>>>> keycloak-dev mailing list >>>>>> keycloak-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>>>> _______________________________________________ >>>>> keycloak-dev mailing list >>>>> keycloak-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> _______________________________________________ >>> keycloak-dev mailing list >>> keycloak-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From ssilvert at redhat.com Fri Mar 3 11:45:36 2017 From: ssilvert at redhat.com (Stan Silvert) Date: Fri, 3 Mar 2017 11:45:36 -0500 Subject: [keycloak-dev] Client adapters backwards compatibility In-Reply-To: References: <97bb090d-5b16-91cc-2d6b-16b4c73cec70@redhat.com> <1fba2fd6-ec0e-a275-2f72-4049d62f5f3a@redhat.com> <4eb27639-e847-eef1-cb4c-ec829bdb1f28@redhat.com> <88a9046d-2642-b3b2-c934-77aba61188af@redhat.com> Message-ID: I really feel strongly that we should add the version to adapters going forward. Even if it's only used for auditing and support purposes, there is tremendous value in knowing for sure that the client is running version X of the adapter. On 3/3/2017 11:34 AM, Marek Posolda wrote: > On 03/03/17 16:29, Stan Silvert wrote: >> Versioning is always a pain. But it's much better for us to manage the >> pain than leave it up to customers. >> >> On 3/3/2017 9:35 AM, Bill Burke wrote: >>> I agree with you in principle stan, but there are other issues: >>> >>> * For OIDC it is a get request with simple query parameters. 
There >>> is no >>> request object you can hide the version information in and you'd >>> have to >>> add another query param. >> I don't think you need to hide the version. Just add another query >> param. Then if version is not present, you at least know it's an old >> client. > Another query param for OIDC should work and some similar extension > for SAML should work too though. The backwards compatibility is > supported just for Keycloak adapters, so we can "extend" the protocols > here IMO. Maybe it's even not too late to fix this in 2.x :) > > But still, for 1.x the version won't never be present. So you still > won't be able to differentiate between 1.9.8 adapter (where we need to > send "broken" response) and between external SAML adapter (where we > need to send valid response). As external OIDC and SAML adapters will > obviously never send us the version as that would be Keycloak specific > extension. > >>> * As Hynek said, for SAML IDP initiated SSO, there is no client >>> request. User logs in then assertion is sent to client. >> Would it make sense to send client version as part of the login? >> >> At some point, the client has to send a message to the server. You can >> always piggyback the version onto whatever message is sent. > Maybe the possibility here is to send the message at the deployment > time (ServletContextListener?). But that has some other potential > issues (eg. application deployed at the time when KC server is not yet > running, which would mean error etc..). There are request from > adapters to server to download keys, but not sure whether to mix with > the version (and keys can still be hardcoded in the adapter config, so > this one is not reliable too). > > In summary, I am also more keen for the switch. > > Note that we're talking about version of Keycloak (RHSSO) but not > about version of protocol (OIDC or SAML). So in practice, Keycloak > server should always send the correct response according to the > protocol. 
Only exception, I can see, is some bug in the old adapter, > which is not able to handle correct responses according to OIDC or > SAML protocols. Which is exactly the case of this particular RHSSO > bug. People should be told to rather update their adapters and the > switch will be a backup just in case that updating adapters is too > much pain for them. In case there is security issue in the adapter, > customers should be told to update their adapters anyway. > > Marek >>> * Keycloak 1.x and 2.x are already out the door and don't submit a >>> version. 2.5.x fixes the problem of the OP, 1.x doesn't so there is no >>> way to tell the difference. >> Yea, I raised this issue in Brno last year. It's not too late to fix >> the problem going forward though. >>> * Client templates can be used to manage large sets of clients. >> That might help, but it's not really a solution. >>> >>> >>> On 3/3/17 9:11 AM, Stan Silvert wrote: >>>> On 3/3/2017 3:15 AM, Hynek Mlnarik wrote: >>>>> Determination of client version from client message would not work >>>>> for >>>>> IdP-initiated SSO (there is no client message to determine version >>>>> from), so +1. >>>> I don't understand this. I don't know exactly how we implement >>>> Idp-initiated SSO, but if you are talking to a client then, by >>>> definition, you are exchanging messages. In every protocol I can >>>> remember, part of the handshake includes transmitting the protocol >>>> version. >>>> >>>> If you don't do this, it leads to problems. With the switch, somebody >>>> has to manually manage the protocol version of potentially >>>> thousands of >>>> clients. If somebody sets the switch wrong, the client can't >>>> communicate. Out of the plethora of possible issues, how long will it >>>> take before somebody realizes that the version switch is wrong? >>>> >>>> Also, you might have the switch set incorrectly, but the client still >>>> seems to communicate fine because it doesn't run into unexpected >>>> messages. 
But you never know that the client is really using the >>>> wrong >>>> version. So what happens when there is a serious security problem >>>> in a >>>> version and you need to upgrade or disable certain clients? Then you >>>> don't know for sure what version each client is running. >>>> >>>> For the security reason alone, you need to know for sure what software >>>> the client is running. If the client always tells you its version, >>>> the >>>> server ALWAYS knows what to do. Otherwise, it's hit or miss. >>>> >>>> Granted, I haven't been "into" protocols for many, many years. But >>>> this >>>> seems like fundamental stuff. Feel free to talk me down. >>>>> On Thu, Mar 2, 2017 at 8:28 PM, Bill Burke wrote: >>>>>> Add switch IMO. It should have a select box that defaults to >>>>>> "latest". >>>>>> >>>>>> >>>>>> On 3/2/17 9:44 AM, Marek Posolda wrote: >>>>>>> It looks that we should support latest Keycloak server with older >>>>>>> versions of Keycloak adapters. >>>>>>> >>>>>>> So for some corner scenarios, I wonder if we should add the >>>>>>> switch to >>>>>>> the ClientModel and admin console like "Adapter version" . This >>>>>>> switch >>>>>>> will be available for both OIDC and SAML clients, but will be >>>>>>> useful >>>>>>> just for the clients, which uses Keycloak adapter. It will be >>>>>>> useful to >>>>>>> specify the version of Keycloak client adapter, which particular >>>>>>> client >>>>>>> application is using. WDYT? >>>>>>> >>>>>>> The reason why I felt into this is a reported RHSSO bug. >>>>>>> >>>>>>> Long-story short: When Keycloak SAML 1.9.8 adapter is used with >>>>>>> "isPassive=true", then Keycloak 2.5.4 server returns him the >>>>>>> valid error >>>>>>> response. However 1.9.8 adapter has a bug >>>>>>> https://issues.jboss.org/browse/KEYCLOAK-4264 and it throws NPE >>>>>>> when it >>>>>>> receives such response. 
>>>>>>> >>>>>>> With the SAML 1.9.8 adapter + 1.9.8 server, the Keycloak server >>>>>>> returned an >>>>>>> invalid error response; however, the 1.9.8 adapter was able to handle >>>>>>> this >>>>>>> invalid response without throwing any exception. >>>>>>> >>>>>>> >>>>>>> By adding the switch to the ClientModel, we de facto allow the >>>>>>> adapter to >>>>>>> say: "Please return me a broken response, because I am not able to >>>>>>> handle a >>>>>>> valid response." >>>>>>> >>>>>>> Note that this is a bug in the adapter, so it would be better to ask >>>>>>> customers >>>>>>> to upgrade their SAML adapters to the newest version instead. On the >>>>>>> other >>>>>>> hand, we claim to support backwards compatibility. >>>>>>> >>>>>>> So should we add the switch or not? WDYT? >>>>>>> >>>>>>> Marek >>>>>>> >>>>>>> _______________________________________________ >>>>>>> keycloak-dev mailing list >>>>>>> keycloak-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>>>>> _______________________________________________ >>>>>> keycloak-dev mailing list >>>>>> keycloak-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>>> _______________________________________________ >>>> keycloak-dev mailing list >>>> keycloak-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> _______________________________________________ >>> keycloak-dev mailing list >>> keycloak-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > > From bburke at redhat.com Fri Mar 3 14:11:20 2017 From: bburke at redhat.com (Bill Burke) Date: Fri, 3 Mar 2017 14:11:20 -0500 Subject: [keycloak-dev] Client adapters backwards compatibility In-Reply-To: <88a9046d-2642-b3b2-c934-77aba61188af@redhat.com> References:
<97bb090d-5b16-91cc-2d6b-16b4c73cec70@redhat.com> <1fba2fd6-ec0e-a275-2f72-4049d62f5f3a@redhat.com> <4eb27639-e847-eef1-cb4c-ec829bdb1f28@redhat.com> <88a9046d-2642-b3b2-c934-77aba61188af@redhat.com> Message-ID: On 3/3/17 10:29 AM, Stan Silvert wrote: > Versioning is always a pain. But it's much better for us to manage the > pain than leave it up to customers. > > On 3/3/2017 9:35 AM, Bill Burke wrote: >> I agree with you in principle, Stan, but there are other issues: >> >> * For OIDC it is a GET request with simple query parameters. There is no >> request object you can hide the version information in, and you'd have to >> add another query param. > I don't think you need to hide the version. Just add another query > param. Then if the version is not present, you at least know it's an old > client. >> * As Hynek said, for SAML IDP-initiated SSO, there is no client >> request. The user logs in, then the assertion is sent to the client. > Would it make sense to send the client version as part of the login? > > At some point, the client has to send a message to the server. You can > always piggyback the version onto whatever message is sent. Untrue. With SAML IDP-initiated SSO, the client does not have to send a message to the server, ever. Also, this isn't a communication protocol, it's an authentication protocol, so once the protocol is finished, there's no more piggybacking that can be done. >> * Keycloak 1.x and 2.x are already out the door and don't submit a >> version. 2.5.x fixes the problem of the OP, 1.x doesn't so there is no >> way to tell the difference. > Yea, I raised this issue in Brno last year. It's not too late to fix > the problem going forward though. We can and should propagate the version, but it still doesn't solve the problem of a 1.9.x adapter talking to Keycloak 2.5.x, nor does it solve IDP-initiated SSO issues.
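The "just add another query param" idea from the exchange above can be sketched roughly as follows. This is a hypothetical illustration only: the parameter name `adapter_version` and the class names are invented for the example and were not part of Keycloak's OIDC requests at the time of this thread.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: an adapter appends its version as an extra query
// parameter on the OIDC authorization request, and the server treats a
// missing parameter as a pre-versioning (1.x/2.x) adapter.
public class AdapterVersionParam {

    // Build the OIDC authorization request URL with the extra version parameter.
    public static String buildAuthUrl(String base, String clientId,
                                      String redirectUri, String adapterVersion) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("response_type", "code");
        params.put("client_id", clientId);
        params.put("redirect_uri", redirectUri);
        if (adapterVersion != null) {
            params.put("adapter_version", adapterVersion); // hypothetical parameter
        }
        StringBuilder sb = new StringBuilder(base).append('?');
        boolean first = true;
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (!first) sb.append('&');
            sb.append(e.getKey()).append('=')
              .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
            first = false;
        }
        return sb.toString();
    }

    // Server side: an absent parameter at least identifies an old client,
    // matching Stan's "if version is not present, you at least know it's
    // an old client" fallback.
    public static String classifyAdapter(String adapterVersionParam) {
        return adapterVersionParam == null ? "legacy" : adapterVersionParam;
    }
}
```

As Bill points out, this only helps where the client actually sends a request; it does nothing for SAML IDP-initiated SSO.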
Bill From ssilvert at redhat.com Fri Mar 3 14:37:38 2017 From: ssilvert at redhat.com (Stan Silvert) Date: Fri, 3 Mar 2017 14:37:38 -0500 Subject: [keycloak-dev] Client adapters backwards compatibility In-Reply-To: References: <97bb090d-5b16-91cc-2d6b-16b4c73cec70@redhat.com> <1fba2fd6-ec0e-a275-2f72-4049d62f5f3a@redhat.com> <4eb27639-e847-eef1-cb4c-ec829bdb1f28@redhat.com> <88a9046d-2642-b3b2-c934-77aba61188af@redhat.com> Message-ID: <0c3f481e-363a-05ca-c4fe-fcb4e47a699a@redhat.com> On 3/3/2017 2:11 PM, Bill Burke wrote: > > On 3/3/17 10:29 AM, Stan Silvert wrote: >> Versioning is always a pain. But it's much better for us to manage the >> pain than leave it up to customers. >> >> On 3/3/2017 9:35 AM, Bill Burke wrote: >>> I agree with you in principle stan, but there are other issues: >>> >>> * For OIDC it is a get request with simple query parameters. There is no >>> request object you can hide the version information in and you'd have to >>> add another query param. >> I don't think you need to hide the version. Just add another query >> param. Then if version is not present, you at least know it's an old >> client. >>> * As Hynek said, for SAML IDP initiated SSO, there is no client >>> request. User logs in then assertion is sent to client. >> Would it make sense to send client version as part of the login? >> >> At some point, the client has to send a message to the server. You can >> always piggyback the version onto whatever message is sent. > Untrue. With SAML IDP initiated SSO, the client does not have to send a > message to the server, ever. Also, this isn't a communication protocol, > its an authentication protocol, so once the protocol is finished, > there's no more piggybacking that can be done. > >>> * Keycloak 1.x and 2.x are already out the door and don't submit a >>> version. 2.5.x fixes the problem of the OP, 1.x doesn't so there is no >>> way to tell the difference. >> Yea, I raised this issue in Brno last year. 
It's not too late to fix >> the problem going forward though. > We can and should propagate version, but it still doesn't solve the > problem of a 1.9.x adapter talking to Keycoak 2.5.x, nor does it solve > idp initiated SSO issues. I'm on board with that. Let the client send a version whenever possible. > > Bill > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From jblashka at redhat.com Fri Mar 3 15:37:29 2017 From: jblashka at redhat.com (Jared Blashka) Date: Fri, 3 Mar 2017 15:37:29 -0500 Subject: [keycloak-dev] Client adapters backwards compatibility In-Reply-To: <0c3f481e-363a-05ca-c4fe-fcb4e47a699a@redhat.com> References: <97bb090d-5b16-91cc-2d6b-16b4c73cec70@redhat.com> <1fba2fd6-ec0e-a275-2f72-4049d62f5f3a@redhat.com> <4eb27639-e847-eef1-cb4c-ec829bdb1f28@redhat.com> <88a9046d-2642-b3b2-c934-77aba61188af@redhat.com> <0c3f481e-363a-05ca-c4fe-fcb4e47a699a@redhat.com> Message-ID: I'm not on the Keycloak dev team (obviously) but can't you backport a patch to RH-SSO 7.0 to fix the broken adapter behavior rather than building in a very specialized flag that will be present in every single SAML client configuration going forward where it's completely unnecessary a large majority of the time? It doesn't make sense to me to build a workaround into the IDP code just to accommodate one specific version of one specific client library. If this was any other client library with this problem I feel like the response would be that the client library needs to be updated rather than maintaining a workaround in the IDP. If RH-SSO 7.0 is still a supported product then it should still be eligible for receiving updates. Jared On Fri, Mar 3, 2017 at 2:37 PM, Stan Silvert wrote: > On 3/3/2017 2:11 PM, Bill Burke wrote: > > > > On 3/3/17 10:29 AM, Stan Silvert wrote: > >> Versioning is always a pain. 
But it's much better for us to manage the > >> pain than leave it up to customers. > >> > >> On 3/3/2017 9:35 AM, Bill Burke wrote: > >>> I agree with you in principle stan, but there are other issues: > >>> > >>> * For OIDC it is a get request with simple query parameters. There is > no > >>> request object you can hide the version information in and you'd have > to > >>> add another query param. > >> I don't think you need to hide the version. Just add another query > >> param. Then if version is not present, you at least know it's an old > >> client. > >>> * As Hynek said, for SAML IDP initiated SSO, there is no client > >>> request. User logs in then assertion is sent to client. > >> Would it make sense to send client version as part of the login? > >> > >> At some point, the client has to send a message to the server. You can > >> always piggyback the version onto whatever message is sent. > > Untrue. With SAML IDP initiated SSO, the client does not have to send a > > message to the server, ever. Also, this isn't a communication protocol, > > its an authentication protocol, so once the protocol is finished, > > there's no more piggybacking that can be done. > > > >>> * Keycloak 1.x and 2.x are already out the door and don't submit a > >>> version. 2.5.x fixes the problem of the OP, 1.x doesn't so there is no > >>> way to tell the difference. > >> Yea, I raised this issue in Brno last year. It's not too late to fix > >> the problem going forward though. > > We can and should propagate version, but it still doesn't solve the > > problem of a 1.9.x adapter talking to Keycoak 2.5.x, nor does it solve > > idp initiated SSO issues. > I'm on board with that. Let the client send a version whenever possible. 
> > > Bill > > _______________________________________________ > > keycloak-dev mailing list > > keycloak-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From sthorger at redhat.com Mon Mar 6 06:51:35 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Mon, 6 Mar 2017 12:51:35 +0100 Subject: [keycloak-dev] Client adapters backwards compatibility In-Reply-To: References: <97bb090d-5b16-91cc-2d6b-16b4c73cec70@redhat.com> <1fba2fd6-ec0e-a275-2f72-4049d62f5f3a@redhat.com> <4eb27639-e847-eef1-cb4c-ec829bdb1f28@redhat.com> <88a9046d-2642-b3b2-c934-77aba61188af@redhat.com> <0c3f481e-363a-05ca-c4fe-fcb4e47a699a@redhat.com> Message-ID: +1 This boils down to mistakes in the protocol implementations. I really don't like either option: making the client send the version in the protocol, or having an option in the admin console. If really needed, I'd go for the option in the admin console. For one, adding a version doesn't work; secondly, even if it did, it's not necessary to send it the way other protocols do. All clients need to be registered with RHSSO, and the point when they are registered allows giving the metadata about them. Other protocols don't have this knowledge and hence need to send the metadata as part of a handshake. In reality we should support rolling upgrades, not backwards compatibility of adapters (at least not when the adapter is broken according to the spec). That means we should support upgrading the server first, adapters second. In these cases it would probably just be simplest to require upgrading both server and adapter at the same time.
On 3 March 2017 at 21:37, Jared Blashka wrote: > I'm not on the Keycloak dev team (obviously) but can't you backport a patch > to RH-SSO 7.0 to fix the broken adapter behavior rather than building in a > very specialized flag that will be present in every single SAML client > configuration going forward where it's completely unnecessary a large > majority of the time? > > It doesn't make sense to me to build a workaround into the IDP code just to > accommodate one specific version of one specific client library. If this > was any other client library with this problem I feel like the response > would be that the client library needs to be updated rather than > maintaining a workaround in the IDP. If RH-SSO 7.0 is still a supported > product then it should still be eligible for receiving updates. > > Jared > > On Fri, Mar 3, 2017 at 2:37 PM, Stan Silvert wrote: > > > On 3/3/2017 2:11 PM, Bill Burke wrote: > > > > > > On 3/3/17 10:29 AM, Stan Silvert wrote: > > >> Versioning is always a pain. But it's much better for us to manage > the > > >> pain than leave it up to customers. > > >> > > >> On 3/3/2017 9:35 AM, Bill Burke wrote: > > >>> I agree with you in principle stan, but there are other issues: > > >>> > > >>> * For OIDC it is a get request with simple query parameters. There is > > no > > >>> request object you can hide the version information in and you'd have > > to > > >>> add another query param. > > >> I don't think you need to hide the version. Just add another query > > >> param. Then if version is not present, you at least know it's an old > > >> client. > > >>> * As Hynek said, for SAML IDP initiated SSO, there is no client > > >>> request. User logs in then assertion is sent to client. > > >> Would it make sense to send client version as part of the login? > > >> > > >> At some point, the client has to send a message to the server. You > can > > >> always piggyback the version onto whatever message is sent. > > > Untrue. 
With SAML IDP-initiated SSO, the client does not have to send > a > > > message to the server, ever. Also, this isn't a communication > protocol, > > > it's an authentication protocol, so once the protocol is finished, > > > there's no more piggybacking that can be done. > > > > > >>> * Keycloak 1.x and 2.x are already out the door and don't submit a > > >>> version. 2.5.x fixes the problem of the OP, 1.x doesn't so there is > no > > >>> way to tell the difference. > > >> Yea, I raised this issue in Brno last year. It's not too late to fix > > >> the problem going forward though. > > > We can and should propagate the version, but it still doesn't solve the > > > problem of a 1.9.x adapter talking to Keycloak 2.5.x, nor does it solve > > > IDP-initiated SSO issues. > > I'm on board with that. Let the client send a version whenever possible. > > > > > > Bill > > > _______________________________________________ > > > keycloak-dev mailing list > > > keycloak-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > > _______________________________________________ > > keycloak-dev mailing list > > keycloak-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From sthorger at redhat.com Mon Mar 6 07:07:02 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Mon, 6 Mar 2017 13:07:02 +0100 Subject: [keycloak-dev] min-time-between-jwks-requests Problems when running tests In-Reply-To: <1ed3f0c4-ed21-0673-76ee-b15a1f9f0647@redhat.com> References: <0eb86c71-7fe6-6474-b361-f384a90d4473@redhat.com> <1ed3f0c4-ed21-0673-76ee-b15a1f9f0647@redhat.com> Message-ID: Is this maybe something we should improve in the adapter in the first place? A blind "only allow one request every 10 seconds" seems a bit too forceful.
Would it not be better to allow X number of failed attempts within some window? On 2 March 2017 at 12:26, Marek Posolda wrote: > On 02/03/17 10:08, Marek Posolda wrote: > > On 02/03/17 00:29, Bill Burke wrote: > >> Ok, I just spent 1.5 days on debugging a problem and I was ready to > >> throw my laptop out of the window I was getting so frustrated. > >> > >> #1 I copied code from the arquillian adapter tests to deploy my own > >> servlet. When running in IntelliJ, all logging messages by the servlet > >> and OIDC adapters were eaten and never displayed. > > Keycloak logging is disabled in > > testsuite/integration-arquillian/tests/base/src/test/resources/log4j.properties > > . AFAIK it's disabled just because running the whole testsuite produces very > > big logs, which caused issues with Travis. > > > > I hope it's possible to fix that and have Keycloak logging enabled when > > running from the IDE, but still keep it disabled when running from the command > > line with the "mvn" command. Will try to look into it. Created: > > https://issues.jboss.org/browse/KEYCLOAK-4520 > Fixed now. Logging for both server and adapters is enabled now when > running tests from the IDE. > > Marek > > > >> #2 If you have a @Deployment it deploys it in @BeforeClass and only once > >> for all tests run in the class > >> > >> #3 I recreate/destroy my realms for every test > >> > >> #4 The default "min-time-between-jwks-requests" is 10 seconds... Because > >> my servlet doesn't get redeployed per test, the 1st test would set up > >> the cache for the realm key for the servlet. The 2nd test would run, > >> because the realms were recreated, there is a different key, but the > >> min-time-between-jwks-requests was 10 seconds so it wasn't updating the > >> key and my logins would fail. This was extremely frustrating to debug > >> because of #1 and because it only happened if I was running all tests in > >> the class.
> >> > >> The workaround is to set "min-time-between-jwks-requests" to zero in > >> your adapter configuration. This is an FYI just in case somebody else > >> runs into this. Took me a while to figure out. > > Another possibility is to put the private/public keys into your realm JSON. > > Then there are always the same keys and the same "kid", and the application doesn't > > need to re-download them. > > > > FYI, with my latest changes, there is no realm reimport for every test > > for most of the tests (see the other thread I sent yesterday). But > > unfortunately this is not yet the case for Adapter tests (subclasses of > > AbstractAdapterTest)... > > > > Marek > >> _______________________________________________ > >> keycloak-dev mailing list > >> keycloak-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > > _______________________________________________ > > keycloak-dev mailing list > > keycloak-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From mposolda at redhat.com Mon Mar 6 09:27:02 2017 From: mposolda at redhat.com (Marek Posolda) Date: Mon, 6 Mar 2017 15:27:02 +0100 Subject: [keycloak-dev] min-time-between-jwks-requests Problems when running tests In-Reply-To: References: <0eb86c71-7fe6-6474-b361-f384a90d4473@redhat.com> <1ed3f0c4-ed21-0673-76ee-b15a1f9f0647@redhat.com> Message-ID: <0f2a246b-a053-afaa-d76a-5ab3ff18545f@redhat.com> Will it be useful for other scenarios besides automated tests? I don't see why someone would re-import a realm every 10 seconds in a real environment. Even the tests can be easily fixed by putting the keys into the JSON representations. And once we fix the adapter tests to not require realm re-import after every method, even that won't be needed.
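For reference, the workaround quoted above is a single setting in the adapter's keycloak.json. A minimal sketch follows; the realm, URL, and resource values are placeholders, and the interval is in seconds (the default being 10):

```json
{
  "realm": "demo",
  "auth-server-url": "http://localhost:8180/auth",
  "resource": "my-servlet",
  "public-client": true,
  "min-time-between-jwks-requests": 0
}
```

Setting it to zero disables the throttling entirely, which is acceptable in a testsuite but gives up the rate-limiting protection discussed later in the thread.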
Marek On 06/03/17 13:07, Stian Thorgersen wrote: > Is this maybe something we should improve in the adapter in the first > place? A blind only allow one request every 10 seconds seems a bit to > forceful. Would it not be better to allow X number of failed attempts > within some window? > > On 2 March 2017 at 12:26, Marek Posolda > wrote: > > On 02/03/17 10:08, Marek Posolda wrote: > > On 02/03/17 00:29, Bill Burke wrote: > >> Ok, I just spent 1.5 days on debugging a problem and I was ready to > >> throw my Laptop out of the window I was getting so frustrated. > >> > >> #1 I copied code from the arquillian adapter tests to deploy my own > >> servlet. When running in IntelliJ, all logging messages by the > servlet > >> and OIDC adapters were eaten and never displayed. > > Keycloak logging disabled in > > > testsuite/integration-arquillian/tests/base/src/test/resources/log4j.properties > > . AFAIK it's disabled just because running whole testsuite > produces very > > big logs, which caused issues with travis. > > > > I hope it's possible to fix that and have Keycloak logging > enabled when > > running from IDE, but still keep it disabled when running from > command > > line with "mvn" command. Will try to look into it. Created : > > https://issues.jboss.org/browse/KEYCLOAK-4520 > > Fixed now. Logging for both server and adapters is enabled now when > running test from IDE. > > Marek > > > >> #2 If you have a @Deployment it deploys it in @BeforeClass and > only once > >> for all tests run in the class > >> > >> #3 I recreate/destroy my realms for every test > >> > >> #4 The default "min-time-between-jwks-requests" is 10 > seconds...Because > >> my servlet doesn't get redeployed per test, the 1st test would > set up > >> the cache for the realm key for the servlet. 
The 2nd test would > run, > >> because the realms were recreated, there is a different key, > but the > >> min-time-between-jwkds-requests was 10 seconds so it wasn't > updating the > >> key and my logins would fail. This was extermely frustrating > to debug > >> because of #1 and because it only happened if I was running all > tests in > >> the class. > >> > >> The workaround is to set "min-time-between-jwks-requests" to > zero in > >> your adapter configuration. This is an FYI just in case > somebody else > >> runs into this. Took me awhile to figure out. > > Another possibility is to put private/public keys into your > realm JSON. > > Then there is always same keys and same "kid" and application > doesn't > > need to re-download it. > > > > FYI. with my latest changes, there is no realm reimport for > every test > > for most of the tests (see other thread I sent yesterday). But > > unfortunately this is not yet the case for Adapter tests > (subclasses of > > AbstractAdapterTest)... > > > > Marek > >> _______________________________________________ > >> keycloak-dev mailing list > >> keycloak-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > > > _______________________________________________ > > keycloak-dev mailing list > > keycloak-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > From sthorger at redhat.com Mon Mar 6 09:38:02 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Mon, 6 Mar 2017 15:38:02 +0100 Subject: [keycloak-dev] min-time-between-jwks-requests Problems when running tests In-Reply-To: <0f2a246b-a053-afaa-d76a-5ab3ff18545f@redhat.com> References: <0eb86c71-7fe6-6474-b361-f384a90d4473@redhat.com> <1ed3f0c4-ed21-0673-76ee-b15a1f9f0647@redhat.com> 
<0f2a246b-a053-afaa-d76a-5ab3ff18545f@redhat.com> Message-ID: I'm not 100% sure, but thinking that there are cases where it could cause issues. For example if a service gets bad requests from a client, say every 5 seconds, it won't be able to fetch new proper keys. I'm probably overthinking this though. On 6 March 2017 at 15:27, Marek Posolda wrote: > Will it be useful for other scenarios besides automated tests? I am not > seeing why someone would re-import realm every 10 seconds in real > environment? > > Even the tests can be easily fixed by put the keys into JSON reps. And > once we fix the adapter tests to not require realm re-import after every > method, even that won't be needed. > > Marek > > > On 06/03/17 13:07, Stian Thorgersen wrote: > > Is this maybe something we should improve in the adapter in the first > place? A blind only allow one request every 10 seconds seems a bit to > forceful. Would it not be better to allow X number of failed attempts > within some window? > > On 2 March 2017 at 12:26, Marek Posolda wrote: > >> On 02/03/17 10:08, Marek Posolda wrote: >> > On 02/03/17 00:29, Bill Burke wrote: >> >> Ok, I just spent 1.5 days on debugging a problem and I was ready to >> >> throw my Laptop out of the window I was getting so frustrated. >> >> >> >> #1 I copied code from the arquillian adapter tests to deploy my own >> >> servlet. When running in IntelliJ, all logging messages by the servlet >> >> and OIDC adapters were eaten and never displayed. >> > Keycloak logging disabled in >> > testsuite/integration-arquillian/tests/base/src/test/ >> resources/log4j.properties >> > . AFAIK it's disabled just because running whole testsuite produces very >> > big logs, which caused issues with travis. >> > >> > I hope it's possible to fix that and have Keycloak logging enabled when >> > running from IDE, but still keep it disabled when running from command >> > line with "mvn" command. Will try to look into it. 
Created : >> > https://issues.jboss.org/browse/KEYCLOAK-4520 >> Fixed now. Logging for both server and adapters is enabled now when >> running test from IDE. >> >> Marek >> > >> >> #2 If you have a @Deployment it deploys it in @BeforeClass and only >> once >> >> for all tests run in the class >> >> >> >> #3 I recreate/destroy my realms for every test >> >> >> >> #4 The default "min-time-between-jwks-requests" is 10 >> seconds...Because >> >> my servlet doesn't get redeployed per test, the 1st test would set up >> >> the cache for the realm key for the servlet. The 2nd test would run, >> >> because the realms were recreated, there is a different key, but the >> >> min-time-between-jwkds-requests was 10 seconds so it wasn't updating >> the >> >> key and my logins would fail. This was extermely frustrating to debug >> >> because of #1 and because it only happened if I was running all tests >> in >> >> the class. >> >> >> >> The workaround is to set "min-time-between-jwks-requests" to zero in >> >> your adapter configuration. This is an FYI just in case somebody else >> >> runs into this. Took me awhile to figure out. >> > Another possibility is to put private/public keys into your realm JSON. >> > Then there is always same keys and same "kid" and application doesn't >> > need to re-download it. >> > >> > FYI. with my latest changes, there is no realm reimport for every test >> > for most of the tests (see other thread I sent yesterday). But >> > unfortunately this is not yet the case for Adapter tests (subclasses of >> > AbstractAdapterTest)... 
>> > >> > Marek >> >> _______________________________________________ >> >> keycloak-dev mailing list >> >> keycloak-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> > >> > _______________________________________________ >> > keycloak-dev mailing list >> > keycloak-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> > > From mposolda at redhat.com Mon Mar 6 10:11:53 2017 From: mposolda at redhat.com (Marek Posolda) Date: Mon, 6 Mar 2017 16:11:53 +0100 Subject: [keycloak-dev] min-time-between-jwks-requests Problems when running tests In-Reply-To: References: <0eb86c71-7fe6-6474-b361-f384a90d4473@redhat.com> <1ed3f0c4-ed21-0673-76ee-b15a1f9f0647@redhat.com> <0f2a246b-a053-afaa-d76a-5ab3ff18545f@redhat.com> Message-ID: On 06/03/17 15:38, Stian Thorgersen wrote: > I'm not 100% sure, but thinking that there are cases where it could > cause issues. For example if a service gets bad requests from a > client, say every 5 seconds, it won't be able to fetch new proper > keys. I'm probably overthinking this though. It should be able to download the new proper keys in this particular scenario though. The adapter just downloads the proper keys when it sees an unknown kid. So the scenario will be like: - A bad request comes to the adapter with the kid "bad-kid". - The adapter will try to download new keys because it doesn't know the "bad-kid" kid. It will download the "good-kid" key. - The adapter saves the key for "good-kid" and then rejects the request from the "bad-kid" client. - A proper request coming to the adapter with "good-kid" will immediately see the "good-kid" key, as it was already downloaded thanks to the bad client :) - Another request from "bad-kid" coming after 5 seconds will be rejected due to the 10 seconds interval.
- Yet another bad request after an additional 5 seconds will try to download the keys again; a request to the server will then be sent and it will re-download "good-kid". However, one request per 10 seconds shouldn't be sufficient to mount a DoS. Good clients aren't blocked anyhow, and at the same time there is no DoS from bad clients. Marek > On 6 March 2017 at 15:27, Marek Posolda > wrote: > > Will it be useful for other scenarios besides automated tests? I > am not seeing why someone would re-import realm every 10 seconds > in real environment? > > Even the tests can be easily fixed by put the keys into JSON reps. > And once we fix the adapter tests to not require realm re-import > after every method, even that won't be needed. > > Marek > > > On 06/03/17 13:07, Stian Thorgersen wrote: >> Is this maybe something we should improve in the adapter in the >> first place? A blind only allow one request every 10 seconds >> seems a bit to forceful. Would it not be better to allow X number >> of failed attempts within some window? >> >> On 2 March 2017 at 12:26, Marek Posolda > > wrote: >> >> On 02/03/17 10:08, Marek Posolda wrote: >> > On 02/03/17 00:29, Bill Burke wrote: >> >> Ok, I just spent 1.5 days on debugging a problem and I was >> ready to >> >> throw my Laptop out of the window I was getting so frustrated. >> >> >> >> #1 I copied code from the arquillian adapter tests to >> deploy my own >> >> servlet. When running in IntelliJ, all logging messages >> by the servlet
Will try to look into it. Created : >> > https://issues.jboss.org/browse/KEYCLOAK-4520 >> >> Fixed now. Logging for both server and adapters is enabled >> now when >> running test from IDE. >> >> Marek >> > >> >> #2 If you have a @Deployment it deploys it in @BeforeClass >> and only once >> >> for all tests run in the class >> >> >> >> #3 I recreate/destroy my realms for every test >> >> >> >> #4 The default "min-time-between-jwks-requests" is 10 >> seconds...Because >> >> my servlet doesn't get redeployed per test, the 1st test >> would set up >> >> the cache for the realm key for the servlet. The 2nd test >> would run, >> >> because the realms were recreated, there is a different >> key, but the >> >> min-time-between-jwkds-requests was 10 seconds so it >> wasn't updating the >> >> key and my logins would fail. This was extermely >> frustrating to debug >> >> because of #1 and because it only happened if I was >> running all tests in >> >> the class. >> >> >> >> The workaround is to set "min-time-between-jwks-requests" >> to zero in >> >> your adapter configuration. This is an FYI just in case >> somebody else >> >> runs into this. Took me awhile to figure out. >> > Another possibility is to put private/public keys into your >> realm JSON. >> > Then there is always same keys and same "kid" and >> application doesn't >> > need to re-download it. >> > >> > FYI. with my latest changes, there is no realm reimport for >> every test >> > for most of the tests (see other thread I sent yesterday). But >> > unfortunately this is not yet the case for Adapter tests >> (subclasses of >> > AbstractAdapterTest)... 
>> > >> > Marek >> >> _______________________________________________ >> >> keycloak-dev mailing list >> >> keycloak-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> > >> > _______________________________________________ >> > keycloak-dev mailing list >> > keycloak-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> > > From ritesh.garg at outlook.com Mon Mar 6 20:25:15 2017 From: ritesh.garg at outlook.com (Ritesh Garg) Date: Tue, 7 Mar 2017 01:25:15 +0000 Subject: [keycloak-dev] Keycloak Impersonation feature | KEYCLOAK-4219 In-Reply-To: References: , Message-ID: Apologies for the late reply. My use case was similar to what Gribov mentioned. I agree, the offline token approach is the right one to achieve that. Feel free to reject the issue in Jira. Thank you. ________________________________ From: Konstantin Gribov Sent: Wednesday, January 25, 2017 8:20 AM To: stian at redhat.com Cc: Ritesh Garg; keycloak-dev at lists.jboss.org Subject: Re: [keycloak-dev] Keycloak Impersonation feature | KEYCLOAK-4219 Yes, I totally agree. I'm not the OP, though. Maybe he has some other use case where he thinks he needs impersonation. I've just explained a workaround I have in a use case where I *could* think impersonation would be the way to go, because it's hard to integrate offline tokens in my case. But I'm totally aware that such a feature shouldn't be implemented because of its security implications. On Wed, 25 Jan 2017 at 15:31, Stian Thorgersen wrote: On 25 January 2017 at 13:24, Konstantin Gribov wrote: Hello, Stian. I have a use case where an app should be able to work on behalf of some user for periodic tasks. But I'm not sure that such a case should be implemented using impersonation.
It's too coarse-grained and makes the attack surface much larger, since it allows getting an access token for any scope, including offline, for the impersonated user. Currently I use usernames (which are unique in my case because of my LDAP constraints on users' `cn`s) and a little service to obtain all required info (like email, groups/roles) for periodic tasks via the Keycloak Admin API, using a service user for this activity. I have to make this service a confidential client because bearer-only clients lack support for service accounts (so I hope that KEYCLOAK-4156 will come soon). Another way to do something on behalf of some user would be an offline token, which is intended for such use cases, but it can't be easily integrated into our system. Offline tokens are the way your use-case should be solved and we won't add another approach. Allowing an application to impersonate any arbitrary user is just plain crazy. It's actually a really good reason we should not support impersonation through an API. On Wed, 25 Jan 2017 at 15:03, Stian Thorgersen >: Please have patience rather than repeat yourself. I don't really need 3 emails about the same thing in my mailbox as I have loads of email to get through in a day! I don't really see the use-case for this. Impersonation is specifically for a user to impersonate another user. As such there has to be a front-end application, as users don't go around manually obtaining tokens to invoke backend services. On 23 January 2017 at 15:30, Ritesh Garg > wrote: > Hello, > > > Any thoughts on this? > > > Thanks, > > Ritesh > > ________________________________ > From: Ritesh Garg > > Sent: Thursday, January 19, 2017 9:47 AM > To: keycloak-dev at lists.jboss.org > Subject: Keycloak Impersonation feature | KEYCLOAK-4219 > > Hello everyone, > > As of now, Keycloak supports impersonation by an admin user at the front > end application level.
However, if someone is using JWT token-based API > security, there is no existing way for an admin user to get a JWT token "on behalf" of > another user. > > I understand and agree with Stian Thorgersen that this is not just adding > the return of a JWT token to the current impersonation endpoint. But I > believe that if Keycloak supports impersonation, we should support it for API > security as well, not just front-end applications. > > If we decide to incorporate it, one implementation approach could be to > introduce an impersonation grant type which would perform client and admin > user authentication before granting a token on behalf of the user it is > requested for. Please let me know if this sounds completely absurd to you > guys. > > Thoughts? > > Thanks, > Ritesh Garg > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > _______________________________________________ keycloak-dev mailing list keycloak-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/keycloak-dev -- Best regards, Konstantin Gribov -- Best regards, Konstantin Gribov From sthorger at redhat.com Tue Mar 7 02:54:23 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Tue, 7 Mar 2017 08:54:23 +0100 Subject: [keycloak-dev] min-time-between-jwks-requests Problems when running tests In-Reply-To: References: <0eb86c71-7fe6-6474-b361-f384a90d4473@redhat.com> <1ed3f0c4-ed21-0673-76ee-b15a1f9f0647@redhat.com> <0f2a246b-a053-afaa-d76a-5ab3ff18545f@redhat.com> Message-ID: Of course - didn't consider that it's a "refresh the list" request and not a "fetch a specific key" request. On 6 March 2017 at 16:11, Marek Posolda wrote: > On 06/03/17 15:38, Stian Thorgersen wrote: > > I'm not 100% sure, but thinking that there are cases where it could cause > issues.
For example, if a service gets bad requests from a client, say every > 5 seconds, it won't be able to fetch new proper keys. I'm probably > overthinking this though. > > It should be able to download new proper keys in this particular scenario > though. The adapter just downloads the proper keys when it sees an unknown KID. > > So the scenario will be like: > - Bad request to the adapter with the kid "bad-kid" . > - The adapter will try to download new keys because it doesn't know the "bad-kid" > kid. It will download the "good-kid" key > - The adapter saves the key for "good-kid" and then it rejects the request > from the "bad-kid" client > - A proper request coming to the adapter with "good-kid" will immediately > see the "good-kid" key, as it was already downloaded thanks to the bad client :) > - Another request from "bad-kid" coming after 5 seconds will be rejected > due to the 10-second interval. > - Yet another bad request after an additional 5 seconds will try to download > keys again; a request to the server will be sent then and it will > re-download "good-kid". However, one request per 10 seconds shouldn't be > enough to mount a DoS. > > Good clients aren't blocked at all and, at the same time, there is no DoS > from bad clients. > > Marek > > > > On 6 March 2017 at 15:27, Marek Posolda wrote: > >> Will it be useful for other scenarios besides automated tests? I am not >> seeing why someone would re-import a realm every 10 seconds in a real >> environment? >> >> Even the tests can be easily fixed by putting the keys into the JSON reps. And >> once we fix the adapter tests to not require realm re-import after every >> method, even that won't be needed. >> >> Marek >> >> >> On 06/03/17 13:07, Stian Thorgersen wrote: >> >> Is this maybe something we should improve in the adapter in the first >> place? A blind "only allow one request every 10 seconds" seems a bit too >> forceful. Would it not be better to allow X number of failed attempts >> within some window?
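The refresh behaviour Marek walks through above can be sketched roughly as follows. This is a simplified illustrative model, not the actual adapter code: `PublicKeyCache`, `downloadKeys`, and the explicit clock parameter are invented names for the sketch.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Simplified model of the adapter-side behaviour described above: keys are
// re-downloaded only when an unknown KID shows up, and at most once per
// minTimeBetweenJwksRequestsMs, so a bad client cannot hammer the server.
public class PublicKeyCache {
    private final Map<String, String> keysByKid = new HashMap<>();
    private final long minTimeBetweenJwksRequestsMs;
    private final Supplier<Map<String, String>> downloadKeys; // stands in for the JWKS endpoint
    private Long lastRequestTimeMs; // null until the first download

    public PublicKeyCache(long minTimeBetweenJwksRequestsMs,
                          Supplier<Map<String, String>> downloadKeys) {
        this.minTimeBetweenJwksRequestsMs = minTimeBetweenJwksRequestsMs;
        this.downloadKeys = downloadKeys;
    }

    /** Returns the key for the KID, or null if unknown (the request gets rejected). */
    public String resolve(String kid, long nowMs) {
        String key = keysByKid.get(kid);
        if (key != null) {
            return key; // known KID: served from the cache, no server round-trip
        }
        // Unknown KID: refresh the whole key list, but only if the interval has passed.
        if (lastRequestTimeMs == null
                || nowMs - lastRequestTimeMs >= minTimeBetweenJwksRequestsMs) {
            lastRequestTimeMs = nowMs;
            keysByKid.putAll(downloadKeys.get());
        }
        return keysByKid.get(kid); // still null for a truly unknown KID
    }
}
```

Walking Marek's scenario through this sketch: the first "bad-kid" request triggers one download (which caches "good-kid"), the later "good-kid" request is served from the cache with no round-trip, and further "bad-kid" requests inside the interval trigger no download at all.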
>> >> On 2 March 2017 at 12:26, Marek Posolda wrote: >> >>> On 02/03/17 10:08, Marek Posolda wrote: >>> > On 02/03/17 00:29, Bill Burke wrote: >>> >> Ok, I just spent 1.5 days on debugging a problem and I was ready to >>> >> throw my laptop out of the window, I was getting so frustrated. >>> >> >>> >> #1 I copied code from the arquillian adapter tests to deploy my own >>> >> servlet. When running in IntelliJ, all logging messages by the >>> servlet >>> >> and OIDC adapters were eaten and never displayed. >>> > Keycloak logging is disabled in >>> > testsuite/integration-arquillian/tests/base/src/test/resources/log4j.properties >>> > . AFAIK it's disabled just because running the whole testsuite produces >>> very >>> > big logs, which caused issues with Travis. >>> > >>> > I hope it's possible to fix that and have Keycloak logging enabled when >>> > running from the IDE, but still keep it disabled when running from the command >>> > line with the "mvn" command. Will try to look into it. Created : >>> > https://issues.jboss.org/browse/KEYCLOAK-4520 >>> Fixed now. Logging for both server and adapters is enabled now when >>> running tests from the IDE. >>> >>> Marek >>> > >>> >> #2 If you have a @Deployment it deploys it in @BeforeClass and only >>> once >>> >> for all tests run in the class >>> >> >>> >> #3 I recreate/destroy my realms for every test >>> >> >>> >> #4 The default "min-time-between-jwks-requests" is 10 >>> seconds...Because >>> >> my servlet doesn't get redeployed per test, the 1st test would set up >>> >> the cache for the realm key for the servlet. The 2nd test would run, >>> >> because the realms were recreated, there is a different key, but the >>> >> min-time-between-jwks-requests was 10 seconds so it wasn't updating >>> the >>> >> key and my logins would fail. This was extremely frustrating to debug >>> >> because of #1 and because it only happened if I was running all tests >>> in >>> >> the class.
>>> >> >>> >> The workaround is to set "min-time-between-jwks-requests" to zero in >>> >> your adapter configuration. This is an FYI just in case somebody else >>> >> runs into this. Took me a while to figure out. >>> > Another possibility is to put private/public keys into your realm JSON. >>> > Then there are always the same keys and the same "kid", and the application doesn't >>> > need to re-download them. >>> > >>> > FYI. with my latest changes, there is no realm reimport for every test >>> > for most of the tests (see the other thread I sent yesterday). But >>> > unfortunately this is not yet the case for Adapter tests (subclasses of >>> > AbstractAdapterTest)... >>> > >>> > Marek >>> >> _______________________________________________ >>> >> keycloak-dev mailing list >>> >> keycloak-dev at lists.jboss.org >>> >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> > >>> > _______________________________________________ >>> > keycloak-dev mailing list >>> > keycloak-dev at lists.jboss.org >>> > https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> >>> >>> _______________________________________________ >>> keycloak-dev mailing list >>> keycloak-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> >> >> >> > > From sthorger at redhat.com Tue Mar 7 05:54:18 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Tue, 7 Mar 2017 11:54:18 +0100 Subject: [keycloak-dev] Arquillian testsuite: realm import per class now In-Reply-To: <3e343b79-6ac5-fac2-ebf0-1e8d67c269e3@redhat.com> References: <3e343b79-6ac5-fac2-ebf0-1e8d67c269e3@redhat.com> Message-ID: Awesome! Thanks Marek :) On 1 March 2017 at 12:02, Marek Posolda wrote: > So testsuite-arquillian is now using the realm import per class > similarly to the old testsuite. Also there is just one adminClient and > one testingClient per class now. > > This was identified as one of the two major bottlenecks (the second was > phantomjs, which was changed to HtmlUnit earlier).
With both > changes, running the arquillian testsuite takes 10 minutes instead of 36 > on my laptop. > > There were quite a lot of changes needed to achieve this, as many test > methods relied on the fact that the realm is imported and didn't clean up > after themselves. > > I may not fix all the tests, especially not those which are not > executed during the default build (e.g. cluster tests). If you > find your test is broken you can do either of the following: > - Fix the test to clean up after itself without needing a realm reimport > after every test. This is the preferred way :) > - Add this to your test class: > > @Override > protected boolean isImportAfterEachMethod() { > return true; > } > > This is a fallback to the previous behaviour and will cause the import after > each test method as before. Hopefully this option can be removed > after some time once all tests are fixed :) > > For now, I needed to use it for all adapter tests (added into > AbstractAdapterTest) as some adapter tests were still failing and I > don't have much time to investigate further.. Created > https://issues.jboss.org/browse/KEYCLOAK-4517 . > > Marek > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From pkboucher801 at gmail.com Tue Mar 7 16:22:56 2017 From: pkboucher801 at gmail.com (Peter K. Boucher) Date: Tue, 7 Mar 2017 16:22:56 -0500 Subject: [keycloak-dev] Zero-knowledge proof of password? Message-ID: <003201d29788$fd3808a0$f7a819e0$@gmail.com> Suppose you don't want your passwords transmitted in the clear after SSL is terminated by a proxy. Has anyone developed a secure way for the client to prove they have the password, rather than transmitting it in the body of a post? From bburke at redhat.com Tue Mar 7 18:05:44 2017 From: bburke at redhat.com (Bill Burke) Date: Tue, 7 Mar 2017 18:05:44 -0500 Subject: [keycloak-dev] Zero-knowledge proof of password?
In-Reply-To: <003201d29788$fd3808a0$f7a819e0$@gmail.com> References: <003201d29788$fd3808a0$f7a819e0$@gmail.com> Message-ID: <9fd94583-7b50-e123-1c8d-2d8a659e084f@redhat.com> What does that even mean? Keycloak's SSL mode can forbid non SSL connections. FYI, OIDC requires SSL. On 3/7/17 4:22 PM, Peter K. Boucher wrote: > Suppose you don't want your passwords transmitted in the clear after SSL is > terminated by a proxy. > > > > Has anyone developed a secure way for the client to prove they have the > password, rather than transmitting it in the body of a post? > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From mtrue at redhat.com Tue Mar 7 20:31:55 2017 From: mtrue at redhat.com (Mark True) Date: Tue, 7 Mar 2017 20:31:55 -0500 Subject: [keycloak-dev] Zero-knowledge proof of password? In-Reply-To: <9fd94583-7b50-e123-1c8d-2d8a659e084f@redhat.com> References: <003201d29788$fd3808a0$f7a819e0$@gmail.com> <9fd94583-7b50-e123-1c8d-2d8a659e084f@redhat.com> Message-ID: I think the closest people have come to what you describe are things like FreeOTP or the RSA Firewall fobs. These provide one way passwords that are based on "what you know" and do not require of transmitting a permanent password over cleartext. Hope this helps! On Tue, Mar 7, 2017 at 6:05 PM, Bill Burke wrote: > What does that even mean? Keycloak's SSL mode can forbid non SSL > connections. FYI, OIDC requires SSL. > > > On 3/7/17 4:22 PM, Peter K. Boucher wrote: > > Suppose you don't want your passwords transmitted in the clear after SSL > is > > terminated by a proxy. > > > > > > > > Has anyone developed a secure way for the client to prove they have the > > password, rather than transmitting it in the body of a post? 
> > > > _______________________________________________ > > keycloak-dev mailing list > > keycloak-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From mtrue at redhat.com Tue Mar 7 20:31:55 2017 From: mtrue at redhat.com (Mark True) Date: Tue, 7 Mar 2017 20:31:55 -0500 Subject: [keycloak-dev] Zero-knowledge proof of password? In-Reply-To: <9fd94583-7b50-e123-1c8d-2d8a659e084f@redhat.com> References: <003201d29788$fd3808a0$f7a819e0$@gmail.com> <9fd94583-7b50-e123-1c8d-2d8a659e084f@redhat.com> Message-ID: I think the closest people have come to what you describe are things like FreeOTP or the RSA Firewall fobs. These provide one-time passwords that are based on "what you know" and do not require transmitting a permanent password in cleartext. Hope this helps! On Tue, Mar 7, 2017 at 6:05 PM, Bill Burke wrote: > What does that even mean? Keycloak's SSL mode can forbid non-SSL > connections. FYI, OIDC requires SSL. > > > On 3/7/17 4:22 PM, Peter K. Boucher wrote: > > Suppose you don't want your passwords transmitted in the clear after SSL > is > > terminated by a proxy. > > > > > > > > Has anyone developed a secure way for the client to prove they have the > > password, rather than transmitting it in the body of a post?
>> > >> > _______________________________________________ >> > keycloak-dev mailing list >> > keycloak-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> > > From pkboucher801 at gmail.com Wed Mar 8 08:33:33 2017 From: pkboucher801 at gmail.com (Peter K. Boucher) Date: Wed, 8 Mar 2017 08:33:33 -0500 Subject: [keycloak-dev] Zero-knowledge proof of password? In-Reply-To: <9fd94583-7b50-e123-1c8d-2d8a659e084f@redhat.com> References: <003201d29788$fd3808a0$f7a819e0$@gmail.com> <9fd94583-7b50-e123-1c8d-2d8a659e084f@redhat.com> Message-ID: <003f01d29810$95580cc0$c0082640$@gmail.com> Sorry, I should have described our scenario more thoroughly. We have one of these at the border of our VPC: https://en.wikipedia.org/wiki/TLS_termination_proxy We can accept the risk of data being transmitted in the clear inside the VPC, but we would prefer that passwords not be transmitted in the clear. It's an old problem. NTLM also used a proof of the password rather than transmitting the password for similar reasons. We could force that TLS be used inside the VPC between the TLS termination proxy and Keycloak, but even then, the passwords are decrypted and then re-encrypted. We are considering trying to use something like the client-side hashing described here: https://github.com/dxa4481/clientHashing The question for this group was related to whether anyone has already developed anything along these lines for use with Keycloak. Thanks! -----Original Message----- From: keycloak-dev-bounces at lists.jboss.org [mailto:keycloak-dev-bounces at lists.jboss.org] On Behalf Of Bill Burke Sent: Tuesday, March 7, 2017 6:06 PM To: keycloak-dev at lists.jboss.org Subject: Re: [keycloak-dev] Zero-knowledge proof of password? What does that even mean? 
Keycloak's SSL mode can forbid non-SSL connections. FYI, OIDC requires SSL. On 3/7/17 4:22 PM, Peter K. Boucher wrote: > Suppose you don't want your passwords transmitted in the clear after SSL is > terminated by a proxy. > > > > Has anyone developed a secure way for the client to prove they have the > password, rather than transmitting it in the body of a post? > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev _______________________________________________ keycloak-dev mailing list keycloak-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/keycloak-dev From nielsbne at gmail.com Wed Mar 8 18:45:14 2017 From: nielsbne at gmail.com (Niels Bertram) Date: Thu, 9 Mar 2017 09:45:14 +1000 Subject: [keycloak-dev] Zero-knowledge proof of password? In-Reply-To: <003f01d29810$95580cc0$c0082640$@gmail.com> References: <003201d29788$fd3808a0$f7a819e0$@gmail.com> <9fd94583-7b50-e123-1c8d-2d8a659e084f@redhat.com> <003f01d29810$95580cc0$c0082640$@gmail.com> Message-ID: <1D155D4D-2BB8-4696-8B85-D65320DA04D5@gmail.com> Hi Peter, your security is only ever as good as the weakest link. Given you transmit the password using SSL up to your VPC, why would you need to "strengthen" (obfuscate rather) the password from there to the keycloak socket? From what I have seen there are 2 ways to proxy a message, 1) to tunnel the SSL or 2) re-encrypt it in the proxy. Maybe 1) is an option for you as this setup would not decrypt your message ... although this comes with other drawbacks. I am intrigued as to what exactly you are trying to achieve by modifying the messages on the way through a proxy. Any chance you could elaborate on your security requirement? > On 8 Mar. 2017, at 23:33, Peter K. Boucher wrote: > > Sorry, I should have described our scenario more thoroughly.
> > We have one of these at the border of our VPC: > https://en.wikipedia.org/wiki/TLS_termination_proxy > > We can accept the risk of data being transmitted in the clear inside the > VPC, but we would prefer that passwords not be transmitted in the clear. > > It's an old problem. NTLM also used a proof of the password rather than > transmitting the password for similar reasons. > > We could force that TLS be used inside the VPC between the TLS termination > proxy and Keycloak, but even then, the passwords are decrypted and then > re-encrypted. > > We are considering trying to use something like the client-side hashing > described here: https://github.com/dxa4481/clientHashing > > The question for this group was related to whether anyone has already > developed anything along these lines for use with Keycloak. > > Thanks! > > > -----Original Message----- > From: keycloak-dev-bounces at lists.jboss.org > [mailto:keycloak-dev-bounces at lists.jboss.org] On Behalf Of Bill Burke > Sent: Tuesday, March 7, 2017 6:06 PM > To: keycloak-dev at lists.jboss.org > Subject: Re: [keycloak-dev] Zero-knowledge proof of password? > > What does that even mean? Keycloak's SSL mode can forbid non SSL > connections. FYI, OIDC requires SSL. > > >> On 3/7/17 4:22 PM, Peter K. Boucher wrote: >> Suppose you don't want your passwords transmitted in the clear after SSL > is >> terminated by a proxy. >> >> >> >> Has anyone developed a secure way for the client to prove they have the >> password, rather than transmitting it in the body of a post? 
>> >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From pkboucher801 at gmail.com Wed Mar 8 19:18:40 2017 From: pkboucher801 at gmail.com (Peter K. Boucher) Date: Wed, 8 Mar 2017 19:18:40 -0500 Subject: [keycloak-dev] Zero-knowledge proof of password? In-Reply-To: <1D155D4D-2BB8-4696-8B85-D65320DA04D5@gmail.com> References: <003201d29788$fd3808a0$f7a819e0$@gmail.com> <9fd94583-7b50-e123-1c8d-2d8a659e084f@redhat.com> <003f01d29810$95580cc0$c0082640$@gmail.com> <1D155D4D-2BB8-4696-8B85-D65320DA04D5@gmail.com> Message-ID: <92BC5EFA-564B-4321-899B-9642080AE45B@gmail.com> We had a pen test finding saying that the password should be protected all the way from the client to the keycloak server. Regards, Peter K. Boucher > On Mar 8, 2017, at 6:45 PM, Niels Bertram wrote: > > Hi Peter, your security is only ever as good as the weakest link. Given you transmit the password using SSL up to your VPC why would you need to "strengthen" (obfuscate rather) the password from there to the keycloak socket? From what I have seen there are 2 ways to proxy a message, 1) to tunnel the SSL or 2) reencrypt it in the proxy. Maybe 1) is an option for you as this setup would not decrypt your message ... although this comes with other drawbacks. I am intrigued as to what exactly you are trying to achieve by modifying the messages on the way though a proxy. Any chance you could elaborate on your security requirement? > >> On 8 Mar. 2017, at 23:33, Peter K. Boucher wrote: >> >> Sorry, I should have described our scenario more thoroughly. 
>> >> We have one of these at the border of our VPC: >> https://en.wikipedia.org/wiki/TLS_termination_proxy >> >> We can accept the risk of data being transmitted in the clear inside the >> VPC, but we would prefer that passwords not be transmitted in the clear. >> >> It's an old problem. NTLM also used a proof of the password rather than >> transmitting the password for similar reasons. >> >> We could force that TLS be used inside the VPC between the TLS termination >> proxy and Keycloak, but even then, the passwords are decrypted and then >> re-encrypted. >> >> We are considering trying to use something like the client-side hashing >> described here: https://github.com/dxa4481/clientHashing >> >> The question for this group was related to whether anyone has already >> developed anything along these lines for use with Keycloak. >> >> Thanks! >> >> >> -----Original Message----- >> From: keycloak-dev-bounces at lists.jboss.org >> [mailto:keycloak-dev-bounces at lists.jboss.org] On Behalf Of Bill Burke >> Sent: Tuesday, March 7, 2017 6:06 PM >> To: keycloak-dev at lists.jboss.org >> Subject: Re: [keycloak-dev] Zero-knowledge proof of password? >> >> What does that even mean? Keycloak's SSL mode can forbid non SSL >> connections. FYI, OIDC requires SSL. >> >> >>> On 3/7/17 4:22 PM, Peter K. Boucher wrote: >>> Suppose you don't want your passwords transmitted in the clear after SSL >> is >>> terminated by a proxy. >>> >>> >>> >>> Has anyone developed a secure way for the client to prove they have the >>> password, rather than transmitting it in the body of a post? 
>>> >>> _______________________________________________ >>> keycloak-dev mailing list >>> keycloak-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev From bburke at redhat.com Wed Mar 8 20:45:41 2017 From: bburke at redhat.com (Bill Burke) Date: Wed, 8 Mar 2017 20:45:41 -0500 Subject: [keycloak-dev] Zero-knowledge proof of password? In-Reply-To: <1D155D4D-2BB8-4696-8B85-D65320DA04D5@gmail.com> References: <003201d29788$fd3808a0$f7a819e0$@gmail.com> <9fd94583-7b50-e123-1c8d-2d8a659e084f@redhat.com> <003f01d29810$95580cc0$c0082640$@gmail.com> <1D155D4D-2BB8-4696-8B85-D65320DA04D5@gmail.com> Message-ID: <0cce22b6-369e-a0c7-e736-850adce16cf0@redhat.com> So, you want to create the hash in the browser or proxy, then transmit this to Keycloak. Keycloak compares the hash to the precalculated hash it has stored? I don't see how this is any more secure. You're still passing the credential (the hash) in clear text. BTW, I think other issues that make things more complex with client hashing is if * You need to bump up the number of hashing iterations. (recommended value changes every 5 years or so) * Change the hashing algorithm. (SHA-1 was just broken). On 3/8/17 6:45 PM, Niels Bertram wrote: > Hi Peter, your security is only ever as good as the weakest link. Given you transmit the password using SSL up to your VPC why would you need to "strengthen" (obfuscate rather) the password from there to the keycloak socket? From what I have seen there are 2 ways to proxy a message, 1) to tunnel the SSL or 2) reencrypt it in the proxy. Maybe 1) is an option for you as this setup would not decrypt your message ... 
although this comes with other drawbacks. I am intrigued as to what exactly you are trying to achieve by modifying the messages on the way though a proxy. Any chance you could elaborate on your security requirement? > >> On 8 Mar. 2017, at 23:33, Peter K. Boucher wrote: >> >> Sorry, I should have described our scenario more thoroughly. >> >> We have one of these at the border of our VPC: >> https://en.wikipedia.org/wiki/TLS_termination_proxy >> >> We can accept the risk of data being transmitted in the clear inside the >> VPC, but we would prefer that passwords not be transmitted in the clear. >> >> It's an old problem. NTLM also used a proof of the password rather than >> transmitting the password for similar reasons. >> >> We could force that TLS be used inside the VPC between the TLS termination >> proxy and Keycloak, but even then, the passwords are decrypted and then >> re-encrypted. >> >> We are considering trying to use something like the client-side hashing >> described here: https://github.com/dxa4481/clientHashing >> >> The question for this group was related to whether anyone has already >> developed anything along these lines for use with Keycloak. >> >> Thanks! >> >> >> -----Original Message----- >> From: keycloak-dev-bounces at lists.jboss.org >> [mailto:keycloak-dev-bounces at lists.jboss.org] On Behalf Of Bill Burke >> Sent: Tuesday, March 7, 2017 6:06 PM >> To: keycloak-dev at lists.jboss.org >> Subject: Re: [keycloak-dev] Zero-knowledge proof of password? >> >> What does that even mean? Keycloak's SSL mode can forbid non SSL >> connections. FYI, OIDC requires SSL. >> >> >>> On 3/7/17 4:22 PM, Peter K. Boucher wrote: >>> Suppose you don't want your passwords transmitted in the clear after SSL >> is >>> terminated by a proxy. >>> >>> >>> >>> Has anyone developed a secure way for the client to prove they have the >>> password, rather than transmitting it in the body of a post? 
>>> >>> _______________________________________________ >>> keycloak-dev mailing list >>> keycloak-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From sthorger at redhat.com Thu Mar 9 06:28:09 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Thu, 9 Mar 2017 12:28:09 +0100 Subject: [keycloak-dev] Use default methods in Provider and ProviderFactory Message-ID: The life-cycle methods on providers and provider factories (init, postInit, close) are frequently not used, but providers have to add empty methods. To reduce the amount of boilerplate in a provider I propose changing the following to have empty default methods: ProviderFactory: * init * postInit * close Provider: * close From sthorger at redhat.com Thu Mar 9 07:37:12 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Thu, 9 Mar 2017 13:37:12 +0100 Subject: [keycloak-dev] Keycloak 3.0.0.CR1 approaching Message-ID: Keycloak 3.0.0.CR1 is scheduled to be released on the 15th March. Please have things ready for end of Monday 13th. From pkboucher801 at gmail.com Thu Mar 9 08:14:03 2017 From: pkboucher801 at gmail.com (Peter K. Boucher) Date: Thu, 9 Mar 2017 08:14:03 -0500 Subject: [keycloak-dev] Zero-knowledge proof of password? 
In-Reply-To: <0cce22b6-369e-a0c7-e736-850adce16cf0@redhat.com> References: <003201d29788$fd3808a0$f7a819e0$@gmail.com> <9fd94583-7b50-e123-1c8d-2d8a659e084f@redhat.com> <003f01d29810$95580cc0$c0082640$@gmail.com> <1D155D4D-2BB8-4696-8B85-D65320DA04D5@gmail.com> <0cce22b6-369e-a0c7-e736-850adce16cf0@redhat.com> Message-ID: <000301d298d7$06242b70$126c8250$@gmail.com> I think if I were going to tweak it myself, I would do something patterned after what NTLM did: Server generates pseudo-random nonce and sends it with the ID of the hash-algorithm it used when storing the password: Server ----(hash algorithm, salt, nonce)----> Client Client hashes password with specified algorithm and salt. Client generates pseudo-random IV and encrypts the specified nonce, using the output of the hash as the key, and sends the IV and the encrypted nonce to the Server: Client ----(IV, AES block-encrypted nonce with hash as key)----> Server Server uses stored hash and specified IV to decrypt nonce, and compares nonce to what was sent to the Client. This way, the password is never transmitted at all, but this challenge-response protocol serves to prove that the Client knows the password. Anyway, I think my main question was answered that no one has done such a proof-based protocol with keycloak so far, right? -----Original Message----- From: keycloak-dev-bounces at lists.jboss.org [mailto:keycloak-dev-bounces at lists.jboss.org] On Behalf Of Bill Burke Sent: Wednesday, March 8, 2017 8:46 PM To: keycloak-dev at lists.jboss.org Subject: Re: [keycloak-dev] Zero-knowledge proof of password? So, you want to create the hash in the browser or proxy, then transmit this to Keycloak. Keycloak compares the hash to the precalculated hash it has stored? I don't see how this is any more secure. You're still passing the credential (the hash) in clear text. BTW, I think other issues that make things more complex with client hashing is if * You need to bump up the number of hashing iterations. 
(recommended value changes every 5 years or so) * Change the hashing algorithm. (SHA-1 was just broken). On 3/8/17 6:45 PM, Niels Bertram wrote: > Hi Peter, your security is only ever as good as the weakest link. Given you transmit the password using SSL up to your VPC why would you need to "strengthen" (obfuscate rather) the password from there to the keycloak socket? From what I have seen there are 2 ways to proxy a message, 1) to tunnel the SSL or 2) reencrypt it in the proxy. Maybe 1) is an option for you as this setup would not decrypt your message ... although this comes with other drawbacks. I am intrigued as to what exactly you are trying to achieve by modifying the messages on the way though a proxy. Any chance you could elaborate on your security requirement? > >> On 8 Mar. 2017, at 23:33, Peter K. Boucher wrote: >> >> Sorry, I should have described our scenario more thoroughly. >> >> We have one of these at the border of our VPC: >> https://en.wikipedia.org/wiki/TLS_termination_proxy >> >> We can accept the risk of data being transmitted in the clear inside the >> VPC, but we would prefer that passwords not be transmitted in the clear. >> >> It's an old problem. NTLM also used a proof of the password rather than >> transmitting the password for similar reasons. >> >> We could force that TLS be used inside the VPC between the TLS termination >> proxy and Keycloak, but even then, the passwords are decrypted and then >> re-encrypted. >> >> We are considering trying to use something like the client-side hashing >> described here: https://github.com/dxa4481/clientHashing >> >> The question for this group was related to whether anyone has already >> developed anything along these lines for use with Keycloak. >> >> Thanks! 
>> >> >> -----Original Message----- >> From: keycloak-dev-bounces at lists.jboss.org >> [mailto:keycloak-dev-bounces at lists.jboss.org] On Behalf Of Bill Burke >> Sent: Tuesday, March 7, 2017 6:06 PM >> To: keycloak-dev at lists.jboss.org >> Subject: Re: [keycloak-dev] Zero-knowledge proof of password? >> >> What does that even mean? Keycloak's SSL mode can forbid non SSL >> connections. FYI, OIDC requires SSL. >> >> >>> On 3/7/17 4:22 PM, Peter K. Boucher wrote: >>> Suppose you don't want your passwords transmitted in the clear after SSL >> is >>> terminated by a proxy. >>> >>> >>> >>> Has anyone developed a secure way for the client to prove they have the >>> password, rather than transmitting it in the body of a post? >>> >>> _______________________________________________ >>> keycloak-dev mailing list >>> keycloak-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev _______________________________________________ keycloak-dev mailing list keycloak-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/keycloak-dev From sthorger at redhat.com Thu Mar 9 08:26:39 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Thu, 9 Mar 2017 14:26:39 +0100 Subject: [keycloak-dev] Zero-knowledge proof of password? 
In-Reply-To: <000301d298d7$06242b70$126c8250$@gmail.com> References: <003201d29788$fd3808a0$f7a819e0$@gmail.com> <9fd94583-7b50-e123-1c8d-2d8a659e084f@redhat.com> <003f01d29810$95580cc0$c0082640$@gmail.com> <1D155D4D-2BB8-4696-8B85-D65320DA04D5@gmail.com> <0cce22b6-369e-a0c7-e736-850adce16cf0@redhat.com> <000301d298d7$06242b70$126c8250$@gmail.com> Message-ID: Or just use SSL all the way? That'll protect tokens as well. On 9 March 2017 at 14:14, Peter K. Boucher wrote: > I think if I were going to tweak it myself, I would do something patterned > after what NTLM did: > > Server generates pseudo-random nonce and sends it with the ID of the > hash-algorithm it used when storing the password: > Server ----(hash algorithm, salt, nonce)----> Client > > Client hashes password with specified algorithm and salt. > Client generates pseudo-random IV and encrypts the specified nonce, using > the output of the hash as the key, and sends the IV and the encrypted nonce > to the Server: > Client ----(IV, AES block-encrypted nonce with hash as key)----> Server > > Server uses stored hash and specified IV to decrypt nonce, and compares > nonce to what was sent to the Client. > > This way, the password is never transmitted at all, but this > challenge-response protocol serves to prove that the Client knows the > password. > > Anyway, I think my main question was answered that no one has done such a > proof-based protocol with keycloak so far, right? > > -----Original Message----- > From: keycloak-dev-bounces at lists.jboss.org > [mailto:keycloak-dev-bounces at lists.jboss.org] On Behalf Of Bill Burke > Sent: Wednesday, March 8, 2017 8:46 PM > To: keycloak-dev at lists.jboss.org > Subject: Re: [keycloak-dev] Zero-knowledge proof of password? > > So, you want to create the hash in the browser or proxy, then transmit > this to Keycloak. Keycloak compares the hash to the precalculated hash > it has stored? I don't see how this is any more secure. 
You're still > passing the credential (the hash) in clear text. > > BTW, I think other issues that make things more complex with client > hashing is if > > * You need to bump up the number of hashing iterations. (recommended > value changes every 5 years or so) > > * Change the hashing algorithm. (SHA-1 was just broken). > > > > On 3/8/17 6:45 PM, Niels Bertram wrote: > > Hi Peter, your security is only ever as good as the weakest link. Given > you transmit the password using SSL up to your VPC why would you need to > "strengthen" (obfuscate rather) the password from there to the keycloak > socket? From what I have seen there are 2 ways to proxy a message, 1) to > tunnel the SSL or 2) reencrypt it in the proxy. Maybe 1) is an option for > you as this setup would not decrypt your message ... although this comes > with other drawbacks. I am intrigued as to what exactly you are trying to > achieve by modifying the messages on the way though a proxy. Any chance you > could elaborate on your security requirement? > > > >> On 8 Mar. 2017, at 23:33, Peter K. Boucher > wrote: > >> > >> Sorry, I should have described our scenario more thoroughly. > >> > >> We have one of these at the border of our VPC: > >> https://en.wikipedia.org/wiki/TLS_termination_proxy > >> > >> We can accept the risk of data being transmitted in the clear inside the > >> VPC, but we would prefer that passwords not be transmitted in the clear. > >> > >> It's an old problem. NTLM also used a proof of the password rather than > >> transmitting the password for similar reasons. > >> > >> We could force that TLS be used inside the VPC between the TLS > termination > >> proxy and Keycloak, but even then, the passwords are decrypted and then > >> re-encrypted. 
> >> > >> We are considering trying to use something like the client-side hashing > >> described here: https://github.com/dxa4481/clientHashing > >> > >> The question for this group was related to whether anyone has already > >> developed anything along these lines for use with Keycloak. > >> > >> Thanks! > >> > >> > >> -----Original Message----- > >> From: keycloak-dev-bounces at lists.jboss.org > >> [mailto:keycloak-dev-bounces at lists.jboss.org] On Behalf Of Bill Burke > >> Sent: Tuesday, March 7, 2017 6:06 PM > >> To: keycloak-dev at lists.jboss.org > >> Subject: Re: [keycloak-dev] Zero-knowledge proof of password? > >> > >> What does that even mean? Keycloak's SSL mode can forbid non SSL > >> connections. FYI, OIDC requires SSL. > >> > >> > >>> On 3/7/17 4:22 PM, Peter K. Boucher wrote: > >>> Suppose you don't want your passwords transmitted in the clear after > SSL > >> is > >>> terminated by a proxy. > >>> > >>> > >>> > >>> Has anyone developed a secure way for the client to prove they have the > >>> password, rather than transmitting it in the body of a post? 
> >>> > >>> _______________________________________________ > >>> keycloak-dev mailing list > >>> keycloak-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev > >> _______________________________________________ > >> keycloak-dev mailing list > >> keycloak-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > >> > >> _______________________________________________ > >> keycloak-dev mailing list > >> keycloak-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > > _______________________________________________ > > keycloak-dev mailing list > > keycloak-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From bburke at redhat.com Thu Mar 9 10:10:37 2017 From: bburke at redhat.com (Bill Burke) Date: Thu, 9 Mar 2017 10:10:37 -0500 Subject: [keycloak-dev] Zero-knowledge proof of password? In-Reply-To: <000301d298d7$06242b70$126c8250$@gmail.com> References: <003201d29788$fd3808a0$f7a819e0$@gmail.com> <9fd94583-7b50-e123-1c8d-2d8a659e084f@redhat.com> <003f01d29810$95580cc0$c0082640$@gmail.com> <1D155D4D-2BB8-4696-8B85-D65320DA04D5@gmail.com> <0cce22b6-369e-a0c7-e736-850adce16cf0@redhat.com> <000301d298d7$06242b70$126c8250$@gmail.com> Message-ID: <6d253df4-293b-639b-7469-de27861b465c@redhat.com> I think HTTP Digest was written for non-TLS connections and works similarly. FYI, this also requires the client provide a username prior to authentication as you need to know the salt, algorithm, and number of hash iterations that were used to hash the password for that particular user. 
To prevent attackers from guessing usernames, the client should always be provided with this information whether or not the username exists. I think you could definitely implement something here. Would be a nice feature for Keycloak. On 3/9/17 8:14 AM, Peter K. Boucher wrote: > I think if I were going to tweak it myself, I would do something patterned > after what NTLM did: > > Server generates pseudo-random nonce and sends it with the ID of the > hash-algorithm it used when storing the password: > Server ----(hash algorithm, salt, nonce)----> Client > > Client hashes password with specified algorithm and salt. > Client generates pseudo-random IV and encrypts the specified nonce, using > the output of the hash as the key, and sends the IV and the encrypted nonce > to the Server: > Client ----(IV, AES block-encrypted nonce with hash as key)----> Server > > Server uses stored hash and specified IV to decrypt nonce, and compares > nonce to what was sent to the Client. > > This way, the password is never transmitted at all, but this > challenge-response protocol serves to prove that the Client knows the > password. > > Anyway, I think my main question was answered that no one has done such a > proof-based protocol with keycloak so far, right? > > -----Original Message----- > From: keycloak-dev-bounces at lists.jboss.org > [mailto:keycloak-dev-bounces at lists.jboss.org] On Behalf Of Bill Burke > Sent: Wednesday, March 8, 2017 8:46 PM > To: keycloak-dev at lists.jboss.org > Subject: Re: [keycloak-dev] Zero-knowledge proof of password? > > So, you want to create the hash in the browser or proxy, then transmit > this to Keycloak. Keycloak compares the hash to the precalculated hash > it has stored? I don't see how this is any more secure. You're still > passing the credential (the hash) in clear text. > > BTW, I think other issues that make things more complex with client > hashing is if > > * You need to bump up the number of hashing iterations. 
(recommended > value changes every 5 years or so) > > * Change the hashing algorithm. (SHA-1 was just broken). > > > > On 3/8/17 6:45 PM, Niels Bertram wrote: >> Hi Peter, your security is only ever as good as the weakest link. Given > you transmit the password using SSL up to your VPC why would you need to > "strengthen" (obfuscate rather) the password from there to the keycloak > socket? From what I have seen there are 2 ways to proxy a message, 1) to > tunnel the SSL or 2) reencrypt it in the proxy. Maybe 1) is an option for > you as this setup would not decrypt your message ... although this comes > with other drawbacks. I am intrigued as to what exactly you are trying to > achieve by modifying the messages on the way though a proxy. Any chance you > could elaborate on your security requirement? >>> On 8 Mar. 2017, at 23:33, Peter K. Boucher > wrote: >>> Sorry, I should have described our scenario more thoroughly. >>> >>> We have one of these at the border of our VPC: >>> https://en.wikipedia.org/wiki/TLS_termination_proxy >>> >>> We can accept the risk of data being transmitted in the clear inside the >>> VPC, but we would prefer that passwords not be transmitted in the clear. >>> >>> It's an old problem. NTLM also used a proof of the password rather than >>> transmitting the password for similar reasons. >>> >>> We could force that TLS be used inside the VPC between the TLS > termination >>> proxy and Keycloak, but even then, the passwords are decrypted and then >>> re-encrypted. >>> >>> We are considering trying to use something like the client-side hashing >>> described here: https://github.com/dxa4481/clientHashing >>> >>> The question for this group was related to whether anyone has already >>> developed anything along these lines for use with Keycloak. >>> >>> Thanks! 
>>> >>> >>> -----Original Message----- >>> From: keycloak-dev-bounces at lists.jboss.org >>> [mailto:keycloak-dev-bounces at lists.jboss.org] On Behalf Of Bill Burke >>> Sent: Tuesday, March 7, 2017 6:06 PM >>> To: keycloak-dev at lists.jboss.org >>> Subject: Re: [keycloak-dev] Zero-knowledge proof of password? >>> >>> What does that even mean? Keycloak's SSL mode can forbid non SSL >>> connections. FYI, OIDC requires SSL. >>> >>> >>>> On 3/7/17 4:22 PM, Peter K. Boucher wrote: >>>> Suppose you don't want your passwords transmitted in the clear after SSL >>> is >>>> terminated by a proxy. >>>> >>>> >>>> >>>> Has anyone developed a secure way for the client to prove they have the >>>> password, rather than transmitting it in the body of a post? >>>> >>>> _______________________________________________ >>>> keycloak-dev mailing list >>>> keycloak-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> _______________________________________________ >>> keycloak-dev mailing list >>> keycloak-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> >>> _______________________________________________ >>> keycloak-dev mailing list >>> keycloak-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From miboe60 at hotmail.com Thu Mar 9 14:23:56 2017 From: miboe60 at hotmail.com (Michael B) Date: Thu, 9 Mar 2017 19:23:56 +0000 Subject: [keycloak-dev] Email locale Message-ID: I do not understand the logic behind the locale of the mails. 
Example scenarios:

- Mail verification => mail is sent based on the locale of the registration screen => this is clear
- Forgot password => mail is sent based on the locale of the 'forgot password' screen. => this is NOT clear
- Execute actions mail sent by an administrator => mail is sent based on the locale of the user to which the mail is sent => this is clear

Is the language of the 'forgot password' mail intended behaviour? If so, could you briefly explain to me the logic of the locale selection? Is it a different selection per mail, or does it somehow depend on the user that is logged in and the user to whom the mail is sent? thanks, Michael From thomas.darimont at googlemail.com Thu Mar 9 17:46:49 2017 From: thomas.darimont at googlemail.com (Thomas Darimont) Date: Thu, 9 Mar 2017 23:46:49 +0100 Subject: [keycloak-dev] Notify clients on client configuration changes in Keycloak Message-ID: Hello group, I have a service which is registered as an OIDC client with service accounts enabled. If the service obtains an access_token with the client_credentials grant, it contains the service account roles assigned to that client at the moment the token was issued. The service now uses the access_token to make calls to other services. As long as the access_token is valid the service reuses the access_token. If one now changes the service account role configuration of the client in Keycloak, the new roles are NOT visible to the service until it obtains a new access_token with the new role assignment - which can take a while depending on the configured token lifetime. It would be helpful if Keycloak could notify clients (perhaps via Webhook?) about client configuration changes (roles, mappers, scopes, etc.) - services could then take suitable action e.g. obtain a new access_token. What do you think?
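A minimal sketch of the "suitable action" mentioned above — discard the cached token and obtain a fresh access_token when a call is rejected. The class name and the injected token supplier are hypothetical illustrations, not Keycloak adapter API:

```java
import java.util.function.Function;
import java.util.function.Supplier;

// Sketch only: hypothetical names, not Keycloak adapter API.
class RetryingTokenClient {

    private final Supplier<String> tokenSource; // e.g. a client_credentials token request
    private String cachedToken;

    RetryingTokenClient(Supplier<String> tokenSource) {
        this.tokenSource = tokenSource;
    }

    // Invokes the remote call with a bearer token. On 403 the cached token is
    // discarded and the call is retried once with a freshly issued token,
    // which carries the current service-account role assignment.
    int call(Function<String, Integer> remoteCall) {
        if (cachedToken == null) {
            cachedToken = tokenSource.get();
        }
        int status = remoteCall.apply(cachedToken);
        if (status == 403) {
            cachedToken = tokenSource.get();
            status = remoteCall.apply(cachedToken);
        }
        return status;
    }
}
```

In practice the supplier would perform a client_credentials token request against the Keycloak token endpoint; the point is only that the service can converge on the new role assignment without any push notification.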
Cheers, Thomas From sthorger at redhat.com Fri Mar 10 01:13:55 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Fri, 10 Mar 2017 07:13:55 +0100 Subject: [keycloak-dev] KEYCLOAK-4523 SPI implementation In-Reply-To: References: Message-ID: [Moving to dev list] Currently PasswordPolicy.HASH_ALGORITHM_DEFAULT is used as the default provider when not specified for a realm. Maybe it would actually be better to have the default set in standalone.xml to make it configurable. Same could be done for the hashing intervals and make the default a configuration option on each provider separately. The default hashing intervals should most likely be lower for pbkdf2-sha256 and pbkdf2-sha512 to make them comparably expensive. If we do that I'd like to see new installations use pbkdf2-sha256 by default (and whatever hash interval matches 20K with pbkdf2), while upgraded installations remain with pbkdf2 and 20K until manually changed in standalone.xml or in realm password policy. On 9 March 2017 at 18:36, Adam Kaplan wrote: > I noticed the ID for the original PasswordHashProvider > (Pbkdf2PasswordHashProvider) was hard-coded in several places. > > 1. Should I add an SPI definition to default-server-subsys-config. > properties? > 2. Does calling getProvider(Class.class) on a KeycloakSession return the > default provider? > > On Thu, Mar 9, 2017 at 12:15 PM, Adam Kaplan wrote: > >> I'd agree with 4 being overkill - I just listed what was available in >> the JRE. >> >> I started down the path of implementing - feature branch is here: >> https://github.com/adambkaplan/keycloak/tree/feature/KEYCLOAK-4523 >> >> On Thu, Mar 9, 2017 at 8:24 AM, Stian Thorgersen >> wrote: >> >>> Search for usage of the class PasswordHashProvider >>> >>> On 9 March 2017 at 12:54, Ori Doolman wrote: >>> >>>> From this discussion I understand that for all realm users, current >>>> password hashing algorithm is using SHA1 before the hashed password is >>>> saved to the DB.
>>>> >>>> Can you please point me to the place in the code where this hashing >>>> occurs ? >>>> >>>> Thanks. >>>> >>>> >>>> -----Original Message----- >>>> From: keycloak-user-bounces at lists.jboss.org [mailto: >>>> keycloak-user-bounces at lists.jboss.org] On Behalf Of Bruno Oliveira >>>> Sent: ??? ? 06 ??? 2017 14:08 >>>> To: stian at redhat.com; Adam Kaplan >>>> Cc: keycloak-user >>>> Subject: Re: [keycloak-user] Submitted Feature: More Secure >>>> PassowrdHashProviders >>>> >>>> On Mon, Mar 6, 2017 at 8:37 AM Stian Thorgersen >>>> wrote: >>>> >>>> > 4 new providers is surely a bit overkill? Isn't 256 and 512 more than >>>> > sufficient? >>>> > >>>> >>>> +1 >>>> >>>> >>>> > >>>> > On 2 March 2017 at 15:28, Adam Kaplan wrote: >>>> > >>>> > This is now in the jboss JIRA: >>>> > https://issues.jboss.org/browse/KEYCLOAK-4523 >>>> > >>>> > I intend to work on it over the next week or two and submit a PR. >>>> > >>>> > On Thu, Mar 2, 2017 at 4:39 AM, Bruno Oliveira >>>> > wrote: >>>> > >>>> > > Hi Adam and John, I understand your concern. Although, collisions >>>> > > are not practical for key derivation functions. There's a long >>>> > > discussion about this subject here[1]. >>>> > > >>>> > > Anyways, you can file a Jira as a feature request. If you feel like >>>> > > you would like to attach a PR, better. >>>> > > >>>> > > [1] - http://comments.gmane.org/gmane.comp.security.phc/973 >>>> > > >>>> > > On Wed, Mar 1, 2017 at 3:33 PM John D. Ament >>>> > > >>>> > > wrote: >>>> > > >>>> > >> I deal with similarly concerned customer bases. I would be happy >>>> > >> to see some of these algorithms added. +1 >>>> > >> >>>> > >> On Wed, Mar 1, 2017 at 12:56 PM Adam Kaplan >>>> wrote: >>>> > >> >>>> > >> > My company has a client whose security prerequisites require us >>>> > >> > to >>>> > store >>>> > >> > passwords using SHA-2 or better for the hash (SHA-512 ideal). 
>>>> > >> > We're >>>> > >> looking >>>> > >> > to migrate our user management functions to Keycloak, and I >>>> > >> > noticed >>>> > that >>>> > >> > hashing with SHA-1 is only provider out of the box. >>>> > >> > >>>> > >> > I propose adding the following providers (and will be happy to >>>> > >> > contribute!), using the hash functions available in the Java 8 >>>> > >> > runtime >>>> > >> > environment: >>>> > >> > >>>> > >> > 1. PBKDF2WithHmacSHA224 >>>> > >> > 2. PBKDF2WithHmacSHA256 >>>> > >> > 3. PBKDF2WithHmacSHA384 >>>> > >> > 4. PBKDF2WithHmacSHA512 >>>> > >> > >>>> > >> > I also propose marking the current Pbkdf2PasswordHashProvider as >>>> > >> > deprecated, now that a real SHA-1 hash collision has been >>>> > >> > published by Google Security. >>>> > >> > >>>> > >> > -- >>>> > >> > *Adam Kaplan* >>>> > >> > Senior Engineer >>>> > >> > findyr >>>> > >>>> > >> > m 914.924.5186 | e >>>> > >>>> > >>>> > >> > akaplan at findyr.com >>>> > >> > WeWork c/o Findyr | 1460 Broadway | New York, NY 10036 >>>> > >> > _______________________________________________ >>>> > >> > keycloak-user mailing list >>>> > >> > keycloak-user at lists.jboss.org >>>> > >> > https://lists.jboss.org/mailman/listinfo/keycloak-user >>>> > >> > >>>> > >> _______________________________________________ >>>> > >> keycloak-user mailing list >>>> > >> keycloak-user at lists.jboss.org >>>> > >> https://lists.jboss.org/mailman/listinfo/keycloak-user >>>> > >> >>>> > > >>>> > >>>> > >>>> > >>>> > -- >>>> > >>>> > >>>> > *Adam Kaplan* >>>> > Senior Engineer >>>> > findyr >>>> > >>>> > m 914.924.5186 | e akaplan at findyr.com >>>> > >>>> > >>>> > WeWork c/o Findyr | 1460 Broadway | New York, NY 10036 >>>> > _______________________________________________ >>>> > keycloak-user mailing list >>>> > keycloak-user at lists.jboss.org >>>> > https://lists.jboss.org/mailman/listinfo/keycloak-user >>>> >
>>>> > >>>> _______________________________________________ >>>> keycloak-user mailing list >>>> keycloak-user at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/keycloak-user >>>> This message and the information contained herein is proprietary and >>>> confidential and subject to the Amdocs policy statement, >>>> >>>> you may review at http://www.amdocs.com/email_disclaimer.asp >>>> >>> >>> >> >> >> -- >> *Adam Kaplan* >> Senior Engineer >> findyr >> m 914.924.5186 | e akaplan at findyr.com >> WeWork c/o Findyr | 1460 Broadway | New York, NY 10036 >> > > > > -- > *Adam Kaplan* > Senior Engineer > findyr > m 914.924.5186 | e akaplan at findyr.com > WeWork c/o Findyr | 1460 Broadway | New York, NY 10036 > From sthorger at redhat.com Fri Mar 10 01:27:59 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Fri, 10 Mar 2017 07:27:59 +0100 Subject: [keycloak-dev] Email locale In-Reply-To: References: Message-ID: This is a question for the user mailing list. You can find the logic here though: https://github.com/keycloak/keycloak/blob/master/services/src/main/java/org/keycloak/services/util/LocaleHelper.java On 9 March 2017 at 20:23, Michael B wrote: > I do not understand the logic behind the locale of the mails. Example > scenario's: > > - Mail verification => mail is sent based on the locale of the > registration screen => this is clear > > - Forgot password => mail is sent based on the locale of the 'forgot > password' screen. => this is NOT clear > > - Execute actions mail sent by an administrator => mail is sent based on > the locale of the user to which the mail is sent => this is clear > > > Is the language of the 'forgot password' mail intended behaviour? If so, > could you briefly explain me the logic of the locale selection? Is it a > different selection per mail, or does it somehow depend on the user that is > logged in and the user to whom the mail is sent? 
> > > thanks, > > Michael > > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From sthorger at redhat.com Fri Mar 10 01:30:49 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Fri, 10 Mar 2017 07:30:49 +0100 Subject: [keycloak-dev] Notify clients on client configuration changes in Keycloak In-Reply-To: References: Message-ID: I'm not keen on that as it wouldn't be standards compliant. Could also require a lot of messages to a lot of clients when roles are modified. I think it can just be handled on the client side. If it gets a 403, get a new token and try again. On 9 March 2017 at 23:46, Thomas Darimont wrote: > Hello group, > > I have a service which is registered as an OIDC client with service > accounts enabled. > If the service obtained an access_token with client_credentials grant > it contains the service account roles assigned to that client at the moment > the token was issued. > > The service now uses the access_token to make calls to other services. > As long as the access_token is valid the service reuses the access_token. > > If one now changes the service account role configuration of the client in > Keycloak > the new roles are NOT visible to the service until it obtains a new > access_token with > the new role assignment - which can take a while depending on the > configured token lifetime. > > It would be helpful if Keycloak could notify clients (perhaps via Webhook?) > about client > configuration changes (roles, mappers, scopes, etc.) - services could then > take > suitable action e.g. obtain a new access_token. > > What do you think? 
> > Cheers, > Thomas > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From mposolda at redhat.com Fri Mar 10 03:20:42 2017 From: mposolda at redhat.com (Marek Posolda) Date: Fri, 10 Mar 2017 09:20:42 +0100 Subject: [keycloak-dev] Use default methods in Provider and ProviderFactory In-Reply-To: References: Message-ID: +1 On a slightly related note, I wonder if we can also improve our ProviderEvents? Registering a listener for an event, which usually looks like:

@Override
public void postInit(KeycloakSessionFactory factory) {
    factory.register(new ProviderEventListener() {
        @Override
        public void onEvent(ProviderEvent event) {
            if (event instanceof RealmModel.ClientCreationEvent) {
                RealmModel.ClientCreationEvent typedEvent = (RealmModel.ClientCreationEvent) event;
                doSomething(typedEvent);
            }
        }
    });
}

can be simplified and prettified to something like:

@ProviderEventListener
public void doSomething(RealmModel.ClientCreationEvent typedEvent, KeycloakSessionFactory factory) {
}

I think the impl will be pretty easy. At startup, the framework will just look up @ProviderEventListener annotated methods of registered KeycloakSessionFactory implementations. It can also detect the event type according to the first argument of the method and automatically register the listener for it. This will allow us to remove the method "postInit" altogether, as it's mostly used just for registering event listeners. Marek On 09/03/17 12:28, Stian Thorgersen wrote: > The life-cycle methods on providers and provider factories (init, postInit, > close) are frequently not used, but providers have to add empty methods.
To > reduce the amount of boilerplate in a provider I propose changing the > following to have empty default methods: > > ProviderFactory: > * init > * postInit > * close > > Provider: > * close > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From mposolda at redhat.com Fri Mar 10 03:23:57 2017 From: mposolda at redhat.com (Marek Posolda) Date: Fri, 10 Mar 2017 09:23:57 +0100 Subject: [keycloak-dev] Use default methods in Provider and ProviderFactory In-Reply-To: References: Message-ID: <36d2b122-1c1e-b634-f6ea-859e4e9c3801@redhat.com> Another related thing: Many of our providers (eg. DefaultJpaConnectionProviderFactory) are using the "lazyInit" pattern when some initialization is done at the time when ProviderFactory.create is called for the first time. Maybe we can add some support into the framework itself and directly add another empty method like "lazyInit" or "beforeFirstCreate" to the ProviderFactory interface? Marek On 10/03/17 09:20, Marek Posolda wrote: > +1 > > On a slightly related note, I wonder if we can also improve our > ProviderEvents? Registering listener for the event, which usually > looks like: > > @Override > public void postInit(KeycloakSessionFactory factory) { > factory.register(new ProviderEventListener() { > @Override > public void onEvent(ProviderEvent event) { > if (event instanceof RealmModel.ClientCreationEvent) { > RealmModel.ClientCreationEvent typedEvent = > (RealmModel.ClientCreationEvent) event; > doSomething(typedEvent); > } > } > }); > } > > > can be simplified and prettyfied to something like: > > > @ProviderEventListener > public void doSomething(RealmModel.ClientCreationEvent typedEvent, > KeycloakSessionFactory factory) { > > } > > I think the impl will be pretty easy. 
At startup, framework will just > lookup for @ProviderEventListener annotated methods of registered > KeycloakSessionFactory implementations. It can also detect the event > type according to first argument of method and automatically register > the listener for it. > > This will allow us to remove method "postInit" altogether as it's > mostly used just for registering event listeners though. > > Marek > > On 09/03/17 12:28, Stian Thorgersen wrote: >> The life-cycle methods on providers and provider factories (init, >> postInit, >> close) are frequently not used, but providers have to add empty >> methods. To >> reduce the amount of boilerplate in a provider I propose changing the >> following to have empty default methods: >> >> ProviderFactory: >> * init >> * postInit >> * close >> >> Provider: >> * close >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > > From mposolda at redhat.com Fri Mar 10 05:11:08 2017 From: mposolda at redhat.com (Marek Posolda) Date: Fri, 10 Mar 2017 11:11:08 +0100 Subject: [keycloak-dev] Zero-knowledge proof of password? In-Reply-To: <6d253df4-293b-639b-7469-de27861b465c@redhat.com> References: <003201d29788$fd3808a0$f7a819e0$@gmail.com> <9fd94583-7b50-e123-1c8d-2d8a659e084f@redhat.com> <003f01d29810$95580cc0$c0082640$@gmail.com> <1D155D4D-2BB8-4696-8B85-D65320DA04D5@gmail.com> <0cce22b6-369e-a0c7-e736-850adce16cf0@redhat.com> <000301d298d7$06242b70$126c8250$@gmail.com> <6d253df4-293b-639b-7469-de27861b465c@redhat.com> Message-ID: <64362891-4999-e417-e49d-72d9a9a19fe5@redhat.com> Kerberos is also similar to this. In fact, Kerberos was designed to provide secure communication over an insecure network. All the handshakes are done in a way that the sender usually encrypts the ticket/sessionKey with some secret known by the receiving party (eg. hash of the user password).
And yes, Kerberos also sends defacto just the username in the first request of the username-password verification handshake. Marek On 09/03/17 16:10, Bill Burke wrote: > I think HTTP Digest was written for non-TLS connections and works similarly. > > FYI, this also requires the client provide a username prior to > authentication as you need to know the salt, algorithm, and number of > hash iterations that were used to hash the password for that particular > user. To prevent attackers from guessing usernames, the client should > always be provided with this information whether or not the username exists. > > I think you could definitely implement something here. Would be a nice > feature for Keycloak. > > > On 3/9/17 8:14 AM, Peter K. Boucher wrote: >> I think if I were going to tweak it myself, I would do something patterned >> after what NTLM did: >> >> Server generates pseudo-random nonce and sends it with the ID of the >> hash-algorithm it used when storing the password: >> Server ----(hash algorithm, salt, nonce)----> Client >> >> Client hashes password with specified algorithm and salt. >> Client generates pseudo-random IV and encrypts the specified nonce, using >> the output of the hash as the key, and sends the IV and the encrypted nonce >> to the Server: >> Client ----(IV, AES block-encrypted nonce with hash as key)----> Server >> >> Server uses stored hash and specified IV to decrypt nonce, and compares >> nonce to what was sent to the Client. >> >> This way, the password is never transmitted at all, but this >> challenge-response protocol serves to prove that the Client knows the >> password. >> >> Anyway, I think my main question was answered that no one has done such a >> proof-based protocol with keycloak so far, right? 
>> >> -----Original Message----- >> From: keycloak-dev-bounces at lists.jboss.org >> [mailto:keycloak-dev-bounces at lists.jboss.org] On Behalf Of Bill Burke >> Sent: Wednesday, March 8, 2017 8:46 PM >> To: keycloak-dev at lists.jboss.org >> Subject: Re: [keycloak-dev] Zero-knowledge proof of password? >> >> So, you want to create the hash in the browser or proxy, then transmit >> this to Keycloak. Keycloak compares the hash to the precalculated hash >> it has stored? I don't see how this is any more secure. You're still >> passing the credential (the hash) in clear text. >> >> BTW, I think other issues that make things more complex with client >> hashing is if >> >> * You need to bump up the number of hashing iterations. (recommended >> value changes every 5 years or so) >> >> * Change the hashing algorithm. (SHA-1 was just broken). >> >> >> >> On 3/8/17 6:45 PM, Niels Bertram wrote: >>> Hi Peter, your security is only ever as good as the weakest link. Given >> you transmit the password using SSL up to your VPC why would you need to >> "strengthen" (obfuscate rather) the password from there to the keycloak >> socket? From what I have seen there are 2 ways to proxy a message, 1) to >> tunnel the SSL or 2) reencrypt it in the proxy. Maybe 1) is an option for >> you as this setup would not decrypt your message ... although this comes >> with other drawbacks. I am intrigued as to what exactly you are trying to >> achieve by modifying the messages on the way though a proxy. Any chance you >> could elaborate on your security requirement? >>>> On 8 Mar. 2017, at 23:33, Peter K. Boucher >> wrote: >>>> Sorry, I should have described our scenario more thoroughly. >>>> >>>> We have one of these at the border of our VPC: >>>> https://en.wikipedia.org/wiki/TLS_termination_proxy >>>> >>>> We can accept the risk of data being transmitted in the clear inside the >>>> VPC, but we would prefer that passwords not be transmitted in the clear. >>>> >>>> It's an old problem. 
NTLM also used a proof of the password rather than >>>> transmitting the password for similar reasons. >>>> >>>> We could force that TLS be used inside the VPC between the TLS >> termination >>>> proxy and Keycloak, but even then, the passwords are decrypted and then >>>> re-encrypted. >>>> >>>> We are considering trying to use something like the client-side hashing >>>> described here: https://github.com/dxa4481/clientHashing >>>> >>>> The question for this group was related to whether anyone has already >>>> developed anything along these lines for use with Keycloak. >>>> >>>> Thanks! >>>> >>>> >>>> -----Original Message----- >>>> From: keycloak-dev-bounces at lists.jboss.org >>>> [mailto:keycloak-dev-bounces at lists.jboss.org] On Behalf Of Bill Burke >>>> Sent: Tuesday, March 7, 2017 6:06 PM >>>> To: keycloak-dev at lists.jboss.org >>>> Subject: Re: [keycloak-dev] Zero-knowledge proof of password? >>>> >>>> What does that even mean? Keycloak's SSL mode can forbid non SSL >>>> connections. FYI, OIDC requires SSL. >>>> >>>> >>>>> On 3/7/17 4:22 PM, Peter K. Boucher wrote: >>>>> Suppose you don't want your passwords transmitted in the clear after SSL >>>> is >>>>> terminated by a proxy. >>>>> >>>>> >>>>> >>>>> Has anyone developed a secure way for the client to prove they have the >>>>> password, rather than transmitting it in the body of a post? 
>>>>> >>>>> _______________________________________________ >>>>> keycloak-dev mailing list >>>>> keycloak-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>>> _______________________________________________ >>>> keycloak-dev mailing list >>>> keycloak-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>>> >>>> _______________________________________________ >>>> keycloak-dev mailing list >>>> keycloak-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> _______________________________________________ >>> keycloak-dev mailing list >>> keycloak-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From mposolda at redhat.com Fri Mar 10 05:18:19 2017 From: mposolda at redhat.com (Marek Posolda) Date: Fri, 10 Mar 2017 11:18:19 +0100 Subject: [keycloak-dev] Zero-knowledge proof of password? In-Reply-To: <64362891-4999-e417-e49d-72d9a9a19fe5@redhat.com> References: <003201d29788$fd3808a0$f7a819e0$@gmail.com> <9fd94583-7b50-e123-1c8d-2d8a659e084f@redhat.com> <003f01d29810$95580cc0$c0082640$@gmail.com> <1D155D4D-2BB8-4696-8B85-D65320DA04D5@gmail.com> <0cce22b6-369e-a0c7-e736-850adce16cf0@redhat.com> <000301d298d7$06242b70$126c8250$@gmail.com> <6d253df4-293b-639b-7469-de27861b465c@redhat.com> <64362891-4999-e417-e49d-72d9a9a19fe5@redhat.com> Message-ID: I wonder if it's possible to do the whole handshake in 1 request instead of 2 requests, which you would need if you send username in the first request. 
Something along the lines of: - User enters username+password in the browser - Browser does some preliminary hashing of the password (eg. hashes it with 10K iterations) and sends this hash to the server - Server receives the 10K-iterations hashed password and adds another 10K iterations to it. Then it compares the final 20K hash with the 20K hash from the DB and checks whether they match. This would allow everything to be done in a single request: the password is not sent over the network in cleartext, and the 20K hash itself is not sent over the network either, which would be bad as it would exactly match the hash from the DB. Not sure if it's doable in practice, just an idea :) Marek On 10/03/17 11:11, Marek Posolda wrote: > Kerberos is also similar to this. In fact Kerberos was designed to > provide secure communication over insecure network. All the handshakes > are done in a way that sender usually encrypts the ticket/sessionKey by > some secret known by the receiving party (eg. hash of the user > password). And yes, Kerberos also sends defacto just the username in the > first request of the username-password verification handshake. > > > Marek > > On 09/03/17 16:10, Bill Burke wrote: >> I think HTTP Digest was written for non-TLS connections and works similarly. >> >> FYI, this also requires the client provide a username prior to >> authentication as you need to know the salt, algorithm, and number of >> hash iterations that were used to hash the password for that particular >> user. To prevent attackers from guessing usernames, the client should >> always be provided with this information whether or not the username exists. >> >> I think you could definitely implement something here. Would be a nice >> feature for Keycloak. >> >> >> On 3/9/17 8:14 AM, Peter K.
Boucher wrote: >>> I think if I were going to tweak it myself, I would do something patterned >>> after what NTLM did: >>> >>> Server generates pseudo-random nonce and sends it with the ID of the >>> hash-algorithm it used when storing the password: >>> Server ----(hash algorithm, salt, nonce)----> Client >>> >>> Client hashes password with specified algorithm and salt. >>> Client generates pseudo-random IV and encrypts the specified nonce, using >>> the output of the hash as the key, and sends the IV and the encrypted nonce >>> to the Server: >>> Client ----(IV, AES block-encrypted nonce with hash as key)----> Server >>> >>> Server uses stored hash and specified IV to decrypt nonce, and compares >>> nonce to what was sent to the Client. >>> >>> This way, the password is never transmitted at all, but this >>> challenge-response protocol serves to prove that the Client knows the >>> password. >>> >>> Anyway, I think my main question was answered that no one has done such a >>> proof-based protocol with keycloak so far, right? >>> >>> -----Original Message----- >>> From: keycloak-dev-bounces at lists.jboss.org >>> [mailto:keycloak-dev-bounces at lists.jboss.org] On Behalf Of Bill Burke >>> Sent: Wednesday, March 8, 2017 8:46 PM >>> To: keycloak-dev at lists.jboss.org >>> Subject: Re: [keycloak-dev] Zero-knowledge proof of password? >>> >>> So, you want to create the hash in the browser or proxy, then transmit >>> this to Keycloak. Keycloak compares the hash to the precalculated hash >>> it has stored? I don't see how this is any more secure. You're still >>> passing the credential (the hash) in clear text. >>> >>> BTW, I think other issues that make things more complex with client >>> hashing is if >>> >>> * You need to bump up the number of hashing iterations. (recommended >>> value changes every 5 years or so) >>> >>> * Change the hashing algorithm. (SHA-1 was just broken). 
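The single-request scheme Marek sketches above (browser performs the first 10K iterations, server adds the remaining 10K before comparing) only works if the stored hash is a plainly iterated digest, because hash chains of that form compose: 10K client rounds followed by 10K server rounds equal 20K rounds. PBKDF2-style KDFs do not compose this way, so a real implementation could not split them like this. A minimal sketch under that iterated-digest assumption, with hypothetical names, glossing over the salt-distribution problem Marek raises in his follow-up:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

public class SplitHashSketch {

    // Repeatedly hash: H(H(...H(input)...)), 'rounds' times.
    static byte[] iterate(byte[] input, int rounds) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] out = input;
        for (int i = 0; i < rounds; i++) {
            out = md.digest(out);
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        byte[] saltedPassword = "s3cret|salt123".getBytes(StandardCharsets.UTF_8);

        // What the server stored at registration: 20K iterations total.
        byte[] stored = iterate(saltedPassword, 20000);

        // The browser sends only the 10K-iteration intermediate value...
        byte[] clientValue = iterate(saltedPassword, 10000);

        // ...and the server adds the remaining 10K iterations and compares.
        byte[] serverValue = iterate(clientValue, 10000);
        System.out.println(Arrays.equals(serverValue, stored)); // true
    }
}
```

One caveat worth stating: the 10K intermediate value is itself a password-equivalent credential for this server, since anyone who captures it can replay it, so it still needs protection in transit even though neither the cleartext password nor the stored 20K hash ever crosses the wire.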
>>> >>> >>> >>> On 3/8/17 6:45 PM, Niels Bertram wrote: >>>> Hi Peter, your security is only ever as good as the weakest link. Given >>> you transmit the password using SSL up to your VPC why would you need to >>> "strengthen" (obfuscate rather) the password from there to the keycloak >>> socket? From what I have seen there are 2 ways to proxy a message, 1) to >>> tunnel the SSL or 2) reencrypt it in the proxy. Maybe 1) is an option for >>> you as this setup would not decrypt your message ... although this comes >>> with other drawbacks. I am intrigued as to what exactly you are trying to >>> achieve by modifying the messages on the way though a proxy. Any chance you >>> could elaborate on your security requirement? >>>>> On 8 Mar. 2017, at 23:33, Peter K. Boucher >>> wrote: >>>>> Sorry, I should have described our scenario more thoroughly. >>>>> >>>>> We have one of these at the border of our VPC: >>>>> https://en.wikipedia.org/wiki/TLS_termination_proxy >>>>> >>>>> We can accept the risk of data being transmitted in the clear inside the >>>>> VPC, but we would prefer that passwords not be transmitted in the clear. >>>>> >>>>> It's an old problem. NTLM also used a proof of the password rather than >>>>> transmitting the password for similar reasons. >>>>> >>>>> We could force that TLS be used inside the VPC between the TLS >>> termination >>>>> proxy and Keycloak, but even then, the passwords are decrypted and then >>>>> re-encrypted. >>>>> >>>>> We are considering trying to use something like the client-side hashing >>>>> described here: https://github.com/dxa4481/clientHashing >>>>> >>>>> The question for this group was related to whether anyone has already >>>>> developed anything along these lines for use with Keycloak. >>>>> >>>>> Thanks! 
>>>>> >>>>> >>>>> -----Original Message----- >>>>> From: keycloak-dev-bounces at lists.jboss.org >>>>> [mailto:keycloak-dev-bounces at lists.jboss.org] On Behalf Of Bill Burke >>>>> Sent: Tuesday, March 7, 2017 6:06 PM >>>>> To: keycloak-dev at lists.jboss.org >>>>> Subject: Re: [keycloak-dev] Zero-knowledge proof of password? >>>>> >>>>> What does that even mean? Keycloak's SSL mode can forbid non SSL >>>>> connections. FYI, OIDC requires SSL. >>>>> >>>>> >>>>>> On 3/7/17 4:22 PM, Peter K. Boucher wrote: >>>>>> Suppose you don't want your passwords transmitted in the clear after SSL >>>>> is >>>>>> terminated by a proxy. >>>>>> >>>>>> >>>>>> >>>>>> Has anyone developed a secure way for the client to prove they have the >>>>>> password, rather than transmitting it in the body of a post? >>>>>> >>>>>> _______________________________________________ >>>>>> keycloak-dev mailing list >>>>>> keycloak-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>>>> _______________________________________________ >>>>> keycloak-dev mailing list >>>>> keycloak-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>>>> >>>>> _______________________________________________ >>>>> keycloak-dev mailing list >>>>> keycloak-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>>> _______________________________________________ >>>> keycloak-dev mailing list >>>> keycloak-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> _______________________________________________ >>> keycloak-dev mailing list >>> keycloak-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at 
lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From mposolda at redhat.com Fri Mar 10 05:21:14 2017 From: mposolda at redhat.com (Marek Posolda) Date: Fri, 10 Mar 2017 11:21:14 +0100 Subject: [keycloak-dev] Zero-knowledge proof of password? In-Reply-To: References: <003201d29788$fd3808a0$f7a819e0$@gmail.com> <9fd94583-7b50-e123-1c8d-2d8a659e084f@redhat.com> <003f01d29810$95580cc0$c0082640$@gmail.com> <1D155D4D-2BB8-4696-8B85-D65320DA04D5@gmail.com> <0cce22b6-369e-a0c7-e736-850adce16cf0@redhat.com> <000301d298d7$06242b70$126c8250$@gmail.com> <6d253df4-293b-639b-7469-de27861b465c@redhat.com> <64362891-4999-e417-e49d-72d9a9a19fe5@redhat.com> Message-ID: On 10/03/17 11:18, Marek Posolda wrote: > I wonder if it's possible to do the whole handshake in 1 request instead > of 2 requests, which you would need if you send username in the first > request. > > Something along the lines of: > - User enters username+password in the browser > - Browser do some preliminary hashing of the password (eg. hashes it > with 10K iterations) and send this hash to the server > - Server will receive the 10K-iterations hashed password and add another > 10K iterations to it. Then it will compare the final 20K hash with the > 20K hash from the DB and checks if it match. > > This will allow that everything is done in single request, password is > not sent over the network in cleartext and also there is not the 20K > hash sent over the network, which won't be good as it will exactly match > the hash from DB. Not sure if it's doable in practice, just an idea :) ah, browser doesn't have the password salt, so it won't be able to do first 10K iterations... Marek > > Marek > > On 10/03/17 11:11, Marek Posolda wrote: >> Kerberos is also similar to this. In fact Kerberos was designed to >> provide secure communication over insecure network. 
All the handshakes >> are done in a way that sender usually encrypts the ticket/sessionKey by >> some secret known by the receiving party (eg. hash of the user >> password). And yes, Kerberos also sends defacto just the username in the >> first request of the username-password verification handshake. >> >> >> Marek >> >> On 09/03/17 16:10, Bill Burke wrote: >>> I think HTTP Digest was written for non-TLS connections and works similarly. >>> >>> FYI, this also requires the client provide a username prior to >>> authentication as you need to know the salt, algorithm, and number of >>> hash iterations that were used to hash the password for that particular >>> user. To prevent attackers from guessing usernames, the client should >>> always be provided with this information whether or not the username exists. >>> >>> I think you could definitely implement something here. Would be a nice >>> feature for Keycloak. >>> >>> >>> On 3/9/17 8:14 AM, Peter K. Boucher wrote: >>>> I think if I were going to tweak it myself, I would do something patterned >>>> after what NTLM did: >>>> >>>> Server generates pseudo-random nonce and sends it with the ID of the >>>> hash-algorithm it used when storing the password: >>>> Server ----(hash algorithm, salt, nonce)----> Client >>>> >>>> Client hashes password with specified algorithm and salt. >>>> Client generates pseudo-random IV and encrypts the specified nonce, using >>>> the output of the hash as the key, and sends the IV and the encrypted nonce >>>> to the Server: >>>> Client ----(IV, AES block-encrypted nonce with hash as key)----> Server >>>> >>>> Server uses stored hash and specified IV to decrypt nonce, and compares >>>> nonce to what was sent to the Client. >>>> >>>> This way, the password is never transmitted at all, but this >>>> challenge-response protocol serves to prove that the Client knows the >>>> password. 
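Bill's caveat quoted above, that the server must hand out salt, algorithm, and iteration count even for usernames that do not exist, is commonly handled by deriving a deterministic fake salt from the username and a server-side secret, so repeated probes for the same unknown username always get the same plausible-looking answer. A sketch of that idea; the secret, class, and method names are all hypothetical, not anything Keycloak provides:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class FakeSaltSketch {

    // Hypothetical server-side secret; in practice this would live in the
    // realm configuration or a keystore, never in source code.
    static final byte[] SERVER_SECRET =
            "realm-enumeration-secret".getBytes(StandardCharsets.UTF_8);

    // Return the user's real salt if the account exists; otherwise derive
    // a stable fake salt from the username, so an attacker cannot tell
    // real accounts from nonexistent ones by probing.
    static byte[] saltFor(String username, byte[] realSaltOrNull) throws Exception {
        if (realSaltOrNull != null) {
            return realSaltOrNull;
        }
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(SERVER_SECRET, "HmacSHA256"));
        byte[] digest = mac.doFinal(username.getBytes(StandardCharsets.UTF_8));
        return Arrays.copyOf(digest, 16); // truncate to the real salt length
    }

    public static void main(String[] args) throws Exception {
        // Unknown user: the fake salt is stable across probes.
        System.out.println(Arrays.equals(saltFor("ghost", null), saltFor("ghost", null))); // true
    }
}
```

The advertised algorithm and iteration count for unknown users would simply be the realm defaults, so the whole parameter response is indistinguishable from a real account's.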
>>>> >>>> Anyway, I think my main question was answered that no one has done such a >>>> proof-based protocol with keycloak so far, right? >>>> >>>> -----Original Message----- >>>> From: keycloak-dev-bounces at lists.jboss.org >>>> [mailto:keycloak-dev-bounces at lists.jboss.org] On Behalf Of Bill Burke >>>> Sent: Wednesday, March 8, 2017 8:46 PM >>>> To: keycloak-dev at lists.jboss.org >>>> Subject: Re: [keycloak-dev] Zero-knowledge proof of password? >>>> >>>> So, you want to create the hash in the browser or proxy, then transmit >>>> this to Keycloak. Keycloak compares the hash to the precalculated hash >>>> it has stored? I don't see how this is any more secure. You're still >>>> passing the credential (the hash) in clear text. >>>> >>>> BTW, I think other issues that make things more complex with client >>>> hashing is if >>>> >>>> * You need to bump up the number of hashing iterations. (recommended >>>> value changes every 5 years or so) >>>> >>>> * Change the hashing algorithm. (SHA-1 was just broken). >>>> >>>> >>>> >>>> On 3/8/17 6:45 PM, Niels Bertram wrote: >>>>> Hi Peter, your security is only ever as good as the weakest link. Given >>>> you transmit the password using SSL up to your VPC why would you need to >>>> "strengthen" (obfuscate rather) the password from there to the keycloak >>>> socket? From what I have seen there are 2 ways to proxy a message, 1) to >>>> tunnel the SSL or 2) reencrypt it in the proxy. Maybe 1) is an option for >>>> you as this setup would not decrypt your message ... although this comes >>>> with other drawbacks. I am intrigued as to what exactly you are trying to >>>> achieve by modifying the messages on the way though a proxy. Any chance you >>>> could elaborate on your security requirement? >>>>>> On 8 Mar. 2017, at 23:33, Peter K. Boucher >>>> wrote: >>>>>> Sorry, I should have described our scenario more thoroughly. 
>>>>>> >>>>>> We have one of these at the border of our VPC: >>>>>> https://en.wikipedia.org/wiki/TLS_termination_proxy >>>>>> >>>>>> We can accept the risk of data being transmitted in the clear inside the >>>>>> VPC, but we would prefer that passwords not be transmitted in the clear. >>>>>> >>>>>> It's an old problem. NTLM also used a proof of the password rather than >>>>>> transmitting the password for similar reasons. >>>>>> >>>>>> We could force that TLS be used inside the VPC between the TLS >>>> termination >>>>>> proxy and Keycloak, but even then, the passwords are decrypted and then >>>>>> re-encrypted. >>>>>> >>>>>> We are considering trying to use something like the client-side hashing >>>>>> described here: https://github.com/dxa4481/clientHashing >>>>>> >>>>>> The question for this group was related to whether anyone has already >>>>>> developed anything along these lines for use with Keycloak. >>>>>> >>>>>> Thanks! >>>>>> >>>>>> >>>>>> -----Original Message----- >>>>>> From: keycloak-dev-bounces at lists.jboss.org >>>>>> [mailto:keycloak-dev-bounces at lists.jboss.org] On Behalf Of Bill Burke >>>>>> Sent: Tuesday, March 7, 2017 6:06 PM >>>>>> To: keycloak-dev at lists.jboss.org >>>>>> Subject: Re: [keycloak-dev] Zero-knowledge proof of password? >>>>>> >>>>>> What does that even mean? Keycloak's SSL mode can forbid non SSL >>>>>> connections. FYI, OIDC requires SSL. >>>>>> >>>>>> >>>>>>> On 3/7/17 4:22 PM, Peter K. Boucher wrote: >>>>>>> Suppose you don't want your passwords transmitted in the clear after SSL >>>>>> is >>>>>>> terminated by a proxy. >>>>>>> >>>>>>> >>>>>>> >>>>>>> Has anyone developed a secure way for the client to prove they have the >>>>>>> password, rather than transmitting it in the body of a post? 
>>>>>>> >>>>>>> _______________________________________________ >>>>>>> keycloak-dev mailing list >>>>>>> keycloak-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>>>>> _______________________________________________ >>>>>> keycloak-dev mailing list >>>>>> keycloak-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>>>>> >>>>>> _______________________________________________ >>>>>> keycloak-dev mailing list >>>>>> keycloak-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>>>> _______________________________________________ >>>>> keycloak-dev mailing list >>>>> keycloak-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>>> _______________________________________________ >>>> keycloak-dev mailing list >>>> keycloak-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>>> >>> _______________________________________________ >>> keycloak-dev mailing list >>> keycloak-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From sthorger at redhat.com Fri Mar 10 05:32:51 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Fri, 10 Mar 2017 11:32:51 +0100 Subject: [keycloak-dev] Zero-knowledge proof of password? 
In-Reply-To: References: <003201d29788$fd3808a0$f7a819e0$@gmail.com> <9fd94583-7b50-e123-1c8d-2d8a659e084f@redhat.com> <003f01d29810$95580cc0$c0082640$@gmail.com> <1D155D4D-2BB8-4696-8B85-D65320DA04D5@gmail.com> <0cce22b6-369e-a0c7-e736-850adce16cf0@redhat.com> <000301d298d7$06242b70$126c8250$@gmail.com> <6d253df4-293b-639b-7469-de27861b465c@redhat.com> <64362891-4999-e417-e49d-72d9a9a19fe5@redhat.com> Message-ID: I really don't see the need for this when both OIDC and SAML will require secure networks in either case. On 10 March 2017 at 11:21, Marek Posolda wrote: > On 10/03/17 11:18, Marek Posolda wrote: > > I wonder if it's possible to do the whole handshake in 1 request instead > > of 2 requests, which you would need if you send username in the first > > request. > > > > Something along the lines of: > > - User enters username+password in the browser > > - Browser do some preliminary hashing of the password (eg. hashes it > > with 10K iterations) and send this hash to the server > > - Server will receive the 10K-iterations hashed password and add another > > 10K iterations to it. Then it will compare the final 20K hash with the > > 20K hash from the DB and checks if it match. > > > > This will allow that everything is done in single request, password is > > not sent over the network in cleartext and also there is not the 20K > > hash sent over the network, which won't be good as it will exactly match > > the hash from DB. Not sure if it's doable in practice, just an idea :) > ah, browser doesn't have the password salt, so it won't be able to do > first 10K iterations... > > Marek > > > > Marek > > > > On 10/03/17 11:11, Marek Posolda wrote: > >> Kerberos is also similar to this. In fact Kerberos was designed to > >> provide secure communication over insecure network. All the handshakes > >> are done in a way that sender usually encrypts the ticket/sessionKey by > >> some secret known by the receiving party (eg. hash of the user > >> password). 
And yes, Kerberos also sends defacto just the username in the > >> first request of the username-password verification handshake. > >> > >> > >> Marek > >> > >> On 09/03/17 16:10, Bill Burke wrote: > >>> I think HTTP Digest was written for non-TLS connections and works > similarly. > >>> > >>> FYI, this also requires the client provide a username prior to > >>> authentication as you need to know the salt, algorithm, and number of > >>> hash iterations that were used to hash the password for that particular > >>> user. To prevent attackers from guessing usernames, the client should > >>> always be provided with this information whether or not the username > exists. > >>> > >>> I think you could definitely implement something here. Would be a nice > >>> feature for Keycloak. > >>> > >>> > >>> On 3/9/17 8:14 AM, Peter K. Boucher wrote: > >>>> I think if I were going to tweak it myself, I would do something > patterned > >>>> after what NTLM did: > >>>> > >>>> Server generates pseudo-random nonce and sends it with the ID of the > >>>> hash-algorithm it used when storing the password: > >>>> Server ----(hash algorithm, salt, nonce)----> Client > >>>> > >>>> Client hashes password with specified algorithm and salt. > >>>> Client generates pseudo-random IV and encrypts the specified nonce, > using > >>>> the output of the hash as the key, and sends the IV and the encrypted > nonce > >>>> to the Server: > >>>> Client ----(IV, AES block-encrypted nonce with hash as key)----> > Server > >>>> > >>>> Server uses stored hash and specified IV to decrypt nonce, and > compares > >>>> nonce to what was sent to the Client. > >>>> > >>>> This way, the password is never transmitted at all, but this > >>>> challenge-response protocol serves to prove that the Client knows the > >>>> password. > >>>> > >>>> Anyway, I think my main question was answered that no one has done > such a > >>>> proof-based protocol with keycloak so far, right? 
> >>>> > >>>> -----Original Message----- > >>>> From: keycloak-dev-bounces at lists.jboss.org > >>>> [mailto:keycloak-dev-bounces at lists.jboss.org] On Behalf Of Bill Burke > >>>> Sent: Wednesday, March 8, 2017 8:46 PM > >>>> To: keycloak-dev at lists.jboss.org > >>>> Subject: Re: [keycloak-dev] Zero-knowledge proof of password? > >>>> > >>>> So, you want to create the hash in the browser or proxy, then transmit > >>>> this to Keycloak. Keycloak compares the hash to the precalculated > hash > >>>> it has stored? I don't see how this is any more secure. You're still > >>>> passing the credential (the hash) in clear text. > >>>> > >>>> BTW, I think other issues that make things more complex with client > >>>> hashing is if > >>>> > >>>> * You need to bump up the number of hashing iterations. (recommended > >>>> value changes every 5 years or so) > >>>> > >>>> * Change the hashing algorithm. (SHA-1 was just broken). > >>>> > >>>> > >>>> > >>>> On 3/8/17 6:45 PM, Niels Bertram wrote: > >>>>> Hi Peter, your security is only ever as good as the weakest link. > Given > >>>> you transmit the password using SSL up to your VPC why would you need > to > >>>> "strengthen" (obfuscate rather) the password from there to the > keycloak > >>>> socket? From what I have seen there are 2 ways to proxy a message, 1) > to > >>>> tunnel the SSL or 2) reencrypt it in the proxy. Maybe 1) is an option > for > >>>> you as this setup would not decrypt your message ... although this > comes > >>>> with other drawbacks. I am intrigued as to what exactly you are > trying to > >>>> achieve by modifying the messages on the way though a proxy. Any > chance you > >>>> could elaborate on your security requirement? > >>>>>> On 8 Mar. 2017, at 23:33, Peter K. Boucher > >>>> wrote: > >>>>>> Sorry, I should have described our scenario more thoroughly. 
> >>>>>> > >>>>>> We have one of these at the border of our VPC: > >>>>>> https://en.wikipedia.org/wiki/TLS_termination_proxy > >>>>>> > >>>>>> We can accept the risk of data being transmitted in the clear > inside the > >>>>>> VPC, but we would prefer that passwords not be transmitted in the > clear. > >>>>>> > >>>>>> It's an old problem. NTLM also used a proof of the password rather > than > >>>>>> transmitting the password for similar reasons. > >>>>>> > >>>>>> We could force that TLS be used inside the VPC between the TLS > >>>> termination > >>>>>> proxy and Keycloak, but even then, the passwords are decrypted and > then > >>>>>> re-encrypted. > >>>>>> > >>>>>> We are considering trying to use something like the client-side > hashing > >>>>>> described here: https://github.com/dxa4481/clientHashing > >>>>>> > >>>>>> The question for this group was related to whether anyone has > already > >>>>>> developed anything along these lines for use with Keycloak. > >>>>>> > >>>>>> Thanks! > >>>>>> > >>>>>> > >>>>>> -----Original Message----- > >>>>>> From: keycloak-dev-bounces at lists.jboss.org > >>>>>> [mailto:keycloak-dev-bounces at lists.jboss.org] On Behalf Of Bill > Burke > >>>>>> Sent: Tuesday, March 7, 2017 6:06 PM > >>>>>> To: keycloak-dev at lists.jboss.org > >>>>>> Subject: Re: [keycloak-dev] Zero-knowledge proof of password? > >>>>>> > >>>>>> What does that even mean? Keycloak's SSL mode can forbid non SSL > >>>>>> connections. FYI, OIDC requires SSL. > >>>>>> > >>>>>> > >>>>>>> On 3/7/17 4:22 PM, Peter K. Boucher wrote: > >>>>>>> Suppose you don't want your passwords transmitted in the clear > after SSL > >>>>>> is > >>>>>>> terminated by a proxy. > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> Has anyone developed a secure way for the client to prove they > have the > >>>>>>> password, rather than transmitting it in the body of a post? 
> >>>>>>> > >>>>>>> _______________________________________________ > >>>>>>> keycloak-dev mailing list > >>>>>>> keycloak-dev at lists.jboss.org > >>>>>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev > >>>>>> _______________________________________________ > >>>>>> keycloak-dev mailing list > >>>>>> keycloak-dev at lists.jboss.org > >>>>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev > >>>>>> > >>>>>> _______________________________________________ > >>>>>> keycloak-dev mailing list > >>>>>> keycloak-dev at lists.jboss.org > >>>>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev > >>>>> _______________________________________________ > >>>>> keycloak-dev mailing list > >>>>> keycloak-dev at lists.jboss.org > >>>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev > >>>> _______________________________________________ > >>>> keycloak-dev mailing list > >>>> keycloak-dev at lists.jboss.org > >>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev > >>>> > >>> _______________________________________________ > >>> keycloak-dev mailing list > >>> keycloak-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev > >> _______________________________________________ > >> keycloak-dev mailing list > >> keycloak-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > > _______________________________________________ > > keycloak-dev mailing list > > keycloak-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From nielsbne at gmail.com Fri Mar 10 05:39:10 2017 From: nielsbne at gmail.com (Niels Bertram) Date: Fri, 10 Mar 2017 20:39:10 +1000 Subject: [keycloak-dev] Zero-knowledge proof of password? 
In-Reply-To: References: <003201d29788$fd3808a0$f7a819e0$@gmail.com> <9fd94583-7b50-e123-1c8d-2d8a659e084f@redhat.com> <003f01d29810$95580cc0$c0082640$@gmail.com> <1D155D4D-2BB8-4696-8B85-D65320DA04D5@gmail.com> <0cce22b6-369e-a0c7-e736-850adce16cf0@redhat.com> <000301d298d7$06242b70$126c8250$@gmail.com> <6d253df4-293b-639b-7469-de27861b465c@redhat.com> <64362891-4999-e417-e49d-72d9a9a19fe5@redhat.com> Message-ID: For a normal webpage or app I can't see it either. But there might be some use for it when you want to do authentication from pin pad devices or like where you cannot guarantee what will be between the input device and the auth terminal. But even then you are limited by getting the iterations, salt etc securily to the terminal. On Fri, Mar 10, 2017 at 8:32 PM, Stian Thorgersen wrote: > I really don't see the need for this when both OIDC and SAML will require > secure networks in either case. > > On 10 March 2017 at 11:21, Marek Posolda wrote: > > > On 10/03/17 11:18, Marek Posolda wrote: > > > I wonder if it's possible to do the whole handshake in 1 request > instead > > > of 2 requests, which you would need if you send username in the first > > > request. > > > > > > Something along the lines of: > > > - User enters username+password in the browser > > > - Browser do some preliminary hashing of the password (eg. hashes it > > > with 10K iterations) and send this hash to the server > > > - Server will receive the 10K-iterations hashed password and add > another > > > 10K iterations to it. Then it will compare the final 20K hash with the > > > 20K hash from the DB and checks if it match. > > > > > > This will allow that everything is done in single request, password is > > > not sent over the network in cleartext and also there is not the 20K > > > hash sent over the network, which won't be good as it will exactly > match > > > the hash from DB. 
> > > Not sure if it's doable in practice, just an idea :)
> > ah, the browser doesn't have the password salt, so it won't be able to do
> > the first 10K iterations...
> >
> > Marek
> > >
> > > Marek
> > >
> > > On 10/03/17 11:11, Marek Posolda wrote:
> > >> Kerberos is also similar to this. In fact Kerberos was designed to
> > >> provide secure communication over an insecure network. All the handshakes
> > >> are done in a way that the sender usually encrypts the ticket/sessionKey
> > >> with some secret known by the receiving party (eg. a hash of the user's
> > >> password). And yes, Kerberos also sends de facto just the username in the
> > >> first request of the username-password verification handshake.
> > >>
> > >>
> > >> Marek
> > >>
> > >> On 09/03/17 16:10, Bill Burke wrote:
> > >>> I think HTTP Digest was written for non-TLS connections and works
> > >>> similarly.
> > >>>
> > >>> FYI, this also requires the client to provide a username prior to
> > >>> authentication, as you need to know the salt, algorithm, and number of
> > >>> hash iterations that were used to hash the password for that particular
> > >>> user. To prevent attackers from guessing usernames, the client should
> > >>> always be provided with this information whether or not the username
> > >>> exists.
> > >>>
> > >>> I think you could definitely implement something here. Would be a nice
> > >>> feature for Keycloak.
> > >>>
> > >>>
> > >>> On 3/9/17 8:14 AM, Peter K. Boucher wrote:
> > >>>> I think if I were going to tweak it myself, I would do something
> > >>>> patterned after what NTLM did:
> > >>>>
> > >>>> Server generates a pseudo-random nonce and sends it with the ID of the
> > >>>> hash algorithm it used when storing the password:
> > >>>> Server ----(hash algorithm, salt, nonce)----> Client
> > >>>>
> > >>>> Client hashes the password with the specified algorithm and salt.
> > >>>> Client generates a pseudo-random IV and encrypts the specified nonce,
> > >>>> using the output of the hash as the key, and sends the IV and the
> > >>>> encrypted nonce to the Server:
> > >>>> Client ----(IV, AES block-encrypted nonce with hash as key)----> Server
> > >>>>
> > >>>> Server uses the stored hash and the specified IV to decrypt the nonce,
> > >>>> and compares the nonce to what was sent to the Client.
> > >>>>
> > >>>> This way, the password is never transmitted at all, but this
> > >>>> challenge-response protocol serves to prove that the Client knows the
> > >>>> password.
> > >>>>
> > >>>> Anyway, I think my main question was answered that no one has done
> > >>>> such a proof-based protocol with keycloak so far, right?
> > >>>>
> > >>>> -----Original Message-----
> > >>>> From: keycloak-dev-bounces at lists.jboss.org
> > >>>> [mailto:keycloak-dev-bounces at lists.jboss.org] On Behalf Of Bill Burke
> > >>>> Sent: Wednesday, March 8, 2017 8:46 PM
> > >>>> To: keycloak-dev at lists.jboss.org
> > >>>> Subject: Re: [keycloak-dev] Zero-knowledge proof of password?
> > >>>>
> > >>>> So, you want to create the hash in the browser or proxy, then transmit
> > >>>> this to Keycloak, and Keycloak compares the hash to the precalculated
> > >>>> hash it has stored? I don't see how this is any more secure. You're
> > >>>> still passing the credential (the hash) in clear text.
> > >>>>
> > >>>> BTW, I think other issues that make things more complex with client
> > >>>> hashing are if:
> > >>>>
> > >>>> * You need to bump up the number of hashing iterations (the recommended
> > >>>> value changes every 5 years or so)
> > >>>>
> > >>>> * You change the hashing algorithm (SHA-1 was just broken).
> > >>>>
> > >>>>
> > >>>> On 3/8/17 6:45 PM, Niels Bertram wrote:
> > >>>>> Hi Peter, your security is only ever as good as the weakest link.
> > >>>>> Given you transmit the password using SSL up to your VPC, why would
> > >>>>> you need to "strengthen" (obfuscate, rather) the password from there
> > >>>>> to the keycloak socket? From what I have seen there are 2 ways to
> > >>>>> proxy a message: 1) tunnel the SSL, or 2) re-encrypt it in the proxy.
> > >>>>> Maybe 1) is an option for you, as this setup would not decrypt your
> > >>>>> message ... although this comes with other drawbacks. I am intrigued
> > >>>>> as to what exactly you are trying to achieve by modifying the
> > >>>>> messages on the way through a proxy. Any chance you could elaborate
> > >>>>> on your security requirement?
> > >>>>>> On 8 Mar. 2017, at 23:33, Peter K. Boucher <pkboucher801 at gmail.com>
> > >>>>>> wrote:
> > >>>>>> Sorry, I should have described our scenario more thoroughly.
> > >>>>>>
> > >>>>>> We have one of these at the border of our VPC:
> > >>>>>> https://en.wikipedia.org/wiki/TLS_termination_proxy
> > >>>>>>
> > >>>>>> We can accept the risk of data being transmitted in the clear inside
> > >>>>>> the VPC, but we would prefer that passwords not be transmitted in the
> > >>>>>> clear.
> > >>>>>>
> > >>>>>> It's an old problem. NTLM also used a proof of the password rather
> > >>>>>> than transmitting the password, for similar reasons.
> > >>>>>>
> > >>>>>> We could force that TLS be used inside the VPC between the TLS
> > >>>>>> termination proxy and Keycloak, but even then, the passwords are
> > >>>>>> decrypted and then re-encrypted.
> > >>>>>>
> > >>>>>> We are considering trying to use something like the client-side
> > >>>>>> hashing described here: https://github.com/dxa4481/clientHashing
> > >>>>>>
> > >>>>>> The question for this group was related to whether anyone has already
> > >>>>>> developed anything along these lines for use with Keycloak.
> > >>>>>>
> > >>>>>> Thanks!
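Peter's NTLM-style exchange above (nonce challenge, password hash used as a key to prove knowledge) can be sketched as a toy model. This is not Keycloak code, and the AES-encryption-of-the-nonce step is deliberately swapped for an HMAC over the nonce to keep the sketch free of third-party crypto libraries; the proof idea is the same in spirit (only a party knowing the password hash can produce a valid response). Bill's objection still applies: the stored hash becomes a password-equivalent secret.

```python
import hashlib
import hmac
import os

def derive_key(password: str, salt: bytes, iterations: int) -> bytes:
    # Stand-in for the server's stored password hash (PBKDF2 here).
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

class Server:
    def __init__(self, password: str):
        self.salt = os.urandom(16)
        self.iterations = 10_000
        self.stored_hash = derive_key(password, self.salt, self.iterations)
        self.nonce = b""

    def challenge(self):
        # Sent in the clear: hash parameters plus a fresh nonce.
        self.nonce = os.urandom(16)
        return ("pbkdf2-sha256", self.salt, self.iterations, self.nonce)

    def check(self, response: bytes) -> bool:
        # Server recomputes the proof from its stored hash and compares.
        expected = hmac.new(self.stored_hash, self.nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

def client_respond(password: str, challenge) -> bytes:
    # Client re-derives the hash and proves knowledge of it over the nonce
    # (HMAC instead of Peter's AES step, as noted in the lead-in).
    _alg, salt, iterations, nonce = challenge
    key = derive_key(password, salt, iterations)
    return hmac.new(key, nonce, hashlib.sha256).digest()

server = Server("s3cret")
assert server.check(client_respond("s3cret", server.challenge()))
assert not server.check(client_respond("wrong", server.challenge()))
```

The password itself never crosses the wire; only the salt, iteration count, nonce, and the keyed proof do — which is also why, per Bill's earlier point, the server must answer the parameter request even for unknown usernames.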
> > >>>>>>
> > >>>>>> -----Original Message-----
> > >>>>>> From: keycloak-dev-bounces at lists.jboss.org
> > >>>>>> [mailto:keycloak-dev-bounces at lists.jboss.org] On Behalf Of Bill Burke
> > >>>>>> Sent: Tuesday, March 7, 2017 6:06 PM
> > >>>>>> To: keycloak-dev at lists.jboss.org
> > >>>>>> Subject: Re: [keycloak-dev] Zero-knowledge proof of password?
> > >>>>>>
> > >>>>>> What does that even mean? Keycloak's SSL mode can forbid non-SSL
> > >>>>>> connections. FYI, OIDC requires SSL.
> > >>>>>>
> > >>>>>>> On 3/7/17 4:22 PM, Peter K. Boucher wrote:
> > >>>>>>> Suppose you don't want your passwords transmitted in the clear
> > >>>>>>> after SSL is terminated by a proxy.
> > >>>>>>>
> > >>>>>>> Has anyone developed a secure way for the client to prove they have
> > >>>>>>> the password, rather than transmitting it in the body of a post?

From sthorger at redhat.com Fri Mar 10 05:42:30 2017
From: sthorger at redhat.com (Stian Thorgersen)
Date: Fri, 10 Mar 2017 11:42:30 +0100
Subject: [keycloak-dev] Zero-knowledge proof of password?
In-Reply-To: References: <003201d29788$fd3808a0$f7a819e0$@gmail.com> <9fd94583-7b50-e123-1c8d-2d8a659e084f@redhat.com> <003f01d29810$95580cc0$c0082640$@gmail.com> <1D155D4D-2BB8-4696-8B85-D65320DA04D5@gmail.com> <0cce22b6-369e-a0c7-e736-850adce16cf0@redhat.com> <000301d298d7$06242b70$126c8250$@gmail.com> <6d253df4-293b-639b-7469-de27861b465c@redhat.com> <64362891-4999-e417-e49d-72d9a9a19fe5@redhat.com> Message-ID:

If you're talking about some non-web based client, it would still need to
use SSL, as otherwise tokens are still sent in clear text and can be
exploited. So I don't see the use-case for it there either.

On 10 March 2017 at 11:39, Niels Bertram wrote:

> For a normal webpage or app I can't see it either. But there might be some
> use for it when you want to do authentication from pin pad devices or
> similar, where you cannot guarantee what sits between the input device and
> the auth terminal. But even then you are limited by getting the iterations,
> salt etc. securely to the terminal.
From psilva at redhat.com Fri Mar 10 10:13:46 2017
From: psilva at redhat.com (Pedro Igor Silva)
Date: Fri, 10 Mar 2017 12:13:46 -0300
Subject: [keycloak-dev] Arquillian tests using graphene are broken ?
Message-ID:

Has there been any change recently to the arquillian testsuite?

Regards.
Pedro Igor

From sblanc at redhat.com Fri Mar 10 11:50:03 2017
From: sblanc at redhat.com (Sebastien Blanc)
Date: Fri, 10 Mar 2017 17:50:03 +0100
Subject: [keycloak-dev] Feedback about our BOMs
Message-ID:

Hi,

One of the requirements to get added to the start.spring.io website is to
have BOMs, and that is what we did. But now they are reviewing our request
and I got this remark:

"The version.keycloak version in your bom doesn't look right to me. If you
import a bom of version A.B.C it makes no sense to ask for D.E.F (a
dependency may have been added/removed in that version). I'd rather
hard-code the version in each dependency (that will be updated by the
release process the same way as the property anyway). Also, that bom is a
child of your main pom, which is usually a bad idea. I can see that you have
a repositories definition there that is going to pollute the Maven build.
Worse, you inherit from the dependency management of the whole
infrastructure (including Jackson, log4j and a bunch of 3rd party
libraries). We can't accept a bom that does that, as it conflicts with
Spring Boot's dependency management."

Does that all make sense to you?
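For reference, the reviewer's three points (no version-property indirection, no parent inheritance, no repositories definition) translate into a standalone BOM shaped roughly like this — a sketch of the pattern only, not the actual Keycloak pom; the artifact id and version shown are illustrative.

```xml
<!-- Sketch of a self-contained BOM per the start.spring.io feedback:
     no <parent>, no <repositories>, versions hard-coded per dependency. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.keycloak.bom</groupId>
  <artifactId>keycloak-adapter-bom</artifactId>
  <version>3.0.0.Final</version> <!-- illustrative version -->
  <packaging>pom</packaging>
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>org.keycloak</groupId>
        <artifactId>keycloak-adapter-core</artifactId>
        <version>3.0.0.Final</version> <!-- same release as the BOM itself -->
      </dependency>
      <!-- one entry per published artifact; no 3rd-party dependency
           management, so consumers like Spring Boot are not affected -->
    </dependencies>
  </dependencyManagement>
</project>
```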
TBH I'm not a BOM expert, but it looks like it makes sense (at least the
part about not using the keycloak parent pom).

From francis.pouatcha at adorsys.com Fri Mar 10 12:56:35 2017
From: francis.pouatcha at adorsys.com (Francis Pouatcha)
Date: Fri, 10 Mar 2017 18:56:35 +0100
Subject: [keycloak-dev] Zero-knowledge proof of password?
In-Reply-To: References: <003201d29788$fd3808a0$f7a819e0$@gmail.com> <9fd94583-7b50-e123-1c8d-2d8a659e084f@redhat.com> <003f01d29810$95580cc0$c0082640$@gmail.com> <1D155D4D-2BB8-4696-8B85-D65320DA04D5@gmail.com> <0cce22b6-369e-a0c7-e736-850adce16cf0@redhat.com> <000301d298d7$06242b70$126c8250$@gmail.com> <6d253df4-293b-639b-7469-de27861b465c@redhat.com> <64362891-4999-e417-e49d-72d9a9a19fe5@redhat.com> Message-ID:

Again, this is a growing concern. See:
https://en.wikipedia.org/wiki/TLS_termination_proxy. There are many
situations where the producer and consumer of some data do not have control
over the SSL termination. We will definitely have to work on solutions for
encrypting or hashing some critical form fields before sending them over to
the consuming server.

Francis Pouatcha
Open https://github.com/adorsys

On Fri, Mar 10, 2017 at 11:42 AM, Stian Thorgersen wrote:

> If you're talking about some non-web based client, it would still need to
> use SSL, as otherwise tokens are still sent in clear text and can be
> exploited. So I don't see the use-case for it there either.
From mposolda at redhat.com Fri Mar 10 15:21:39 2017
From: mposolda at redhat.com (Marek Posolda)
Date: Fri, 10 Mar 2017 21:21:39 +0100
Subject: [keycloak-dev] Arquillian tests using graphene are broken ?
In-Reply-To: References: Message-ID:

Hey Pedro,

This [1] and this [2] — do you see either of those issues? For [1], the
workaround can be to use a different browser (eg. -Dbrowser=phantomjs). For
[2], the workaround is import per method, as described in the email.

[1] http://lists.jboss.org/pipermail/keycloak-dev/2017-February/008859.html
[2] http://lists.jboss.org/pipermail/keycloak-dev/2017-March/008902.html

Marek

On 10/03/17 16:13, Pedro Igor Silva wrote:
> Has there been any change recently to the arquillian testsuite?
>
> Regards.
> Pedro Igor > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From bburke at redhat.com Sat Mar 11 08:18:09 2017 From: bburke at redhat.com (Bill Burke) Date: Sat, 11 Mar 2017 08:18:09 -0500 Subject: [keycloak-dev] fine-grain admin permissions with Authz Message-ID: <559ff331-4767-30dc-f758-83b10d761289@redhat.com> I'm looking into how we could implement fine-grain admin permissions with Pedro's Authz service, i.e. fix our long-standing bug that manage-users allows people to grant themselves admin roles. I want to do an exercise of how certain things can be modeled, specifically user role mappings. Some things we want to be able to do: * admin can only assign specific roles to users * admin can only assign specific roles to users of a specific group The entire realm would be an Authz resource server. There's already a client (resource server) for the realm "realm-management". - A Scope of "user-role-mapping" would be defined. These resources would be defined and would have the "user-role-mapping" scope attached to them: * "Users" resource. This resource represents all users in the system * A resource is created per role * A resource is created per group Now, when managing roles for a user, we need to ask two questions: 1. Can the admin manage role mappings for this user? 2. Can the admin manage role mappings for this role? For the first question, let's map the current behavior of Keycloak onto the Authz service. * A scope-based permission would be created for the "Users" resource with a scope of "user-role-mapping" and a role policy of role "manage-users". When role mapping happens, the operation would make an entitlement request for "Users" with a scope of "user-role-mapping". This would pass by default because of the default permission defined above. Now what about the case where we only want an admin to be able to manage roles for a specific group? 
In this case we define a resource for the Group Foo. The Group Foo would be attached to the "user-role-mapping" scope. Then the realm admin would define a scope-based permission for the Group Foo resource and "user-role-mapping". For example, there might be a "foo-admin" role. The scope permission could grant the permission if the admin has the "foo-admin" role. So, if the "Users"->"user-role-mapping" evaluation fails, the role mapping operation would then cycle through each Group of the user being managed and see if "Group Foo"->"user-role-mapping" evaluates correctly. That's only half of a solution to our problem. We also want to control what roles an admin is allowed to manage. In this case we would have a resource defined for each role in the system. A scope-based permission would be created for the role's resource and the "user-role-mapping" scope. For example, let's say we wanted to say that only admins with the "admin-role-mapper" role can assign admin roles like "manage-users" or "manage-realm". For the "manage-realm" role resource, we would define a scope-based permission for "user-role-mapping" with a role policy of "admin-role-mapper". So, let's put this all together. The role mapping operation would do these steps: 1. Can the admin manage role mappings for this user? 1.1 Evaluate that the admin can access the "user-role-mapping" scope for the "Users" resource. If success, go to 2. 1.2 For each group of the user being managed, evaluate that the admin can access the "user-role-mapping" scope for that Group. If success, go to 2. 1.3 Fail the role mapping operation. 2. Is the admin allowed to assign the specific role? 2.1 Evaluate that the admin can access the "user-role-mapping" scope for the role's resource. From sthorger at redhat.com Mon Mar 13 03:35:42 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Mon, 13 Mar 2017 08:35:42 +0100 Subject: [keycloak-dev] Feedback about our BOMs In-Reply-To: References: Message-ID: Makes sense. 
Here's fix: https://github.com/keycloak/keycloak/pull/3942 On 10 March 2017 at 17:50, Sebastien Blanc wrote: > Hi, > > One of the requirement to get added on the start.spring.io website is to > have BOMs and that is what we did. But now they are reviewing our request > and I got this as remark : > > " > The version.keycloak version in your bom doesn't look right to me. If you > import a bom of version A.B.C it makes no sense to ask for D.E.F. (a > dependency may have been added/remove in that version). I'd rather > hard-code the version in each dependency (that will be updated by the > release process the same way as the property anyway). Also, that bom is a > child of your main pom which is usually a bad idea. I can see that you have > a repositories definition there that is going to pollute the Maven build. > Worse, you inherit from the dependency management of the whole > infrastructure (including Jackson, log4j and a bunch of 3rd party > libraries). We can't accept a bom that does that as it conflicts with > Spring Boot's dependency management. > " > > Does that make all sense to you ? TBH I'm not a BOM expert but looks like > it make sense (at least for not using the keycloak parent pom) > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From sthorger at redhat.com Mon Mar 13 06:03:30 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Mon, 13 Mar 2017 11:03:30 +0100 Subject: [keycloak-dev] fine-grain admin permissions with Authz In-Reply-To: <559ff331-4767-30dc-f758-83b10d761289@redhat.com> References: <559ff331-4767-30dc-f758-83b10d761289@redhat.com> Message-ID: The JIRA for this is https://issues.jboss.org/browse/KEYCLOAK-3444 We should provide this level of fine-grained permissions only to admins within the realm. That will simplify both the model as well as the UI for it. Admins in the master realm should be considered "super users" and as such there is no need to be able to finely control what they can access. I still want to see the master realm eventually going away and instead leveraging identity brokering as a way to allow admins from one realm to admin other realms. Not sure that's something we can introduce in a minor update of RH-SSO though and that would probably have to wait for RH-SSO 8. The benefit here is that permissions are managed within a single realm and there is no restriction that both the master realm and the other realm have to be on the same server. With some automatic "linking" of realms and a bit of clever UI work we should be able to make it as easy to use as today's master realm. 
Within a realm it should be possible to define a subset of the users, clients and roles that a specific admin can view and/or manage. It should leverage the authorization services. Would be great if it's done in such a way that it would be possible to customize it with your own policies. On 11 March 2017 at 14:18, Bill Burke wrote: > I'm looking into how we could implement fine-grain admin permissions > with Pedro's Authz service, i.e. fix our long standing bug that > manage-users allows people to grant themselves admin roles. I want to > do an exercise of how certain things can be modeled, specific user role > mappings. > > Some things we want to be able to do > > * admin can only assign specific roles to users > > * admin can only assign specific roles to users of a specific group > > > The entire realm would be a Authz resource server. There's already a > client (resource server) for the realm "realm-management". > > - A Scope of "user-role-mapping" would be defined. > > These resources would be defined and would have the "user-role-mapping" > scope attached to them. > > * "Users" resource. This resource represents all users in the system > * A resource is created per role > * A resource is created per group > > Now, when managing roles for a user, we need to ask two questions: > > 1. Can the admin manage role mappings for this user? > 2. Can the admin manage role mappings for this role? > > For the first question, let's map the current behavior of Keycloak onto > the Authz service. > > * A scoped-base permission would be created for the "Users" resource > with a scope of "user-role-mapping" and a role policy of role > "manage-users". > > When role mapping happens, the operation would make an entitlement > request for "Users" with a scope of "user-role-mapping". This would > pass by default because of the default permission defined above. Now > what about the case where we only want an admin to be able to manage > roles for a specific group? 
In this case we define a resource for the > Group Foo. The Group Foo would be attached to the "user-role-mapping" > scope. Then the realm admin would define a scope-based permission for > the Group Foo resource and "user-role-mapping". For example, there > might be a "foo-admin" role. The scope permission could grant the > permission if the admin has the "foo-admin" role. > > So, if the "Users"->"user-role-mapping" evaluation fails, the role > mapping operation would then cycle through each Group of the user being > managed and see if "Group Foo"->"user-role-mapping" evaluates correctly. > > That's only half of a solution to our problem. We also want to control > what roles an admin is allowed to manage. In this case we would have a > resource defined for each role in the system. A scoped-based permission > would be created for the role's resource and the "user-role-mapping" > scope. For example, let's say we wanted to say that only admins with > the "admin-role-mapper" role can assign admin roles like "manage-users" > or "manage-realm". For the "manage-realm" role resource, we would > define a scoped-based permission for "user-role-mapping" with a role > policy of "admin-role-mapper". > > So, let's put this all together. The role mapping operation would do > these steps: > > 1. Can the admin manage role mappings for this user? > 1.1 Evaluate that admin can access "user-role-mapping" scope for "Users" > resource. If success, goto 2. > 1.2 For each group of the user being managed, evaluate that the admin > can access "user-role-mapping" scope for that Group. If success goto 2 > 1.3 Fail the role mapping operation > 2. Is the admin allowed to assign the specific role? > 2.1 Evaluate that the admin can access the "user-role-mapping" scope for > the role's resource. 
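The two-phase evaluation quoted above can be sketched in code. This is a hypothetical illustration only: the `canAccess` predicate stands in for an Authz entitlement request ("can this admin access scope X on resource Y?"), and none of the class or method names below are the real Keycloak API.

```java
import java.util.List;
import java.util.function.BiPredicate;

// Sketch of the two-phase check from steps 1.1-2.1 above.
public class RoleMappingCheck {

    static final String SCOPE = "user-role-mapping";

    // Phase 1: may the admin manage role mappings for this user at all?
    static boolean canManageUser(BiPredicate<String, String> canAccess,
                                 List<String> groupsOfUser) {
        // 1.1: the realm-wide "Users" resource (default manage-users permission)
        if (canAccess.test("Users", SCOPE)) {
            return true;
        }
        // 1.2: fall back to the per-group resources of the managed user;
        // 1.3: if none match, the caller fails the role mapping operation
        return groupsOfUser.stream().anyMatch(g -> canAccess.test(g, SCOPE));
    }

    // Phase 2 (step 2.1): may the admin assign this specific role?
    static boolean canAssignRole(BiPredicate<String, String> canAccess,
                                 String roleResource) {
        return canAccess.test(roleResource, SCOPE);
    }

    static boolean roleMappingAllowed(BiPredicate<String, String> canAccess,
                                      List<String> groupsOfUser,
                                      String roleResource) {
        return canManageUser(canAccess, groupsOfUser)
                && canAssignRole(canAccess, roleResource);
    }
}
```

For example, an admin holding only "foo-admin" would pass phase 1 through the "Group Foo" resource, but would still need a permission on the role's own resource to pass phase 2.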
> > > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From marc.savy at redhat.com Mon Mar 13 07:09:07 2017 From: marc.savy at redhat.com (Marc Savy) Date: Mon, 13 Mar 2017 11:09:07 +0000 Subject: [keycloak-dev] Zero-knowledge proof of password? In-Reply-To: References: <003201d29788$fd3808a0$f7a819e0$@gmail.com> <9fd94583-7b50-e123-1c8d-2d8a659e084f@redhat.com> <003f01d29810$95580cc0$c0082640$@gmail.com> <1D155D4D-2BB8-4696-8B85-D65320DA04D5@gmail.com> <0cce22b6-369e-a0c7-e736-850adce16cf0@redhat.com> <000301d298d7$06242b70$126c8250$@gmail.com> <6d253df4-293b-639b-7469-de27861b465c@redhat.com> <64362891-4999-e417-e49d-72d9a9a19fe5@redhat.com> Message-ID: This sounds like a candidate for HTTP Signatures - i.e. strictly a verifiability and integrity issue rather than confidentiality. The likes of Amazon use this approach AIUI. See: https://github.com/tomitribe/http-signatures-java This would enable you to continue with TLS termination but retain verifiability, non-repudiation, etc for interested parties. The downside, of course, is that the involved parties need to have keys to perform the verification - however, it's non-invasive in that it doesn't break anything for disinterested parties. NB: I'm not claiming that KC should be doing this - perhaps it should be capable of it, but I think that's still open for discussion. On 10 March 2017 at 10:42, Stian Thorgersen wrote: > If you're talking about some non-web based client it would still need to > use SSL as otherwise tokens are still sent in clear text and can be > exploited. So I don't see the use-case for it there either. > > On 10 March 2017 at 11:39, Niels Bertram wrote: > >> For a normal webpage or app I can't see it either. 
But there might be some >> use for it when you want to do authentication from pin pad devices or like >> where you cannot guarantee what will be between the input device and the >> auth terminal. But even then you are limited by getting the iterations, >> salt etc securily to the terminal. >> >> On Fri, Mar 10, 2017 at 8:32 PM, Stian Thorgersen >> wrote: >> >>> I really don't see the need for this when both OIDC and SAML will require >>> secure networks in either case. >>> >>> On 10 March 2017 at 11:21, Marek Posolda wrote: >>> >>> > On 10/03/17 11:18, Marek Posolda wrote: >>> > > I wonder if it's possible to do the whole handshake in 1 request >>> instead >>> > > of 2 requests, which you would need if you send username in the first >>> > > request. >>> > > >>> > > Something along the lines of: >>> > > - User enters username+password in the browser >>> > > - Browser do some preliminary hashing of the password (eg. hashes it >>> > > with 10K iterations) and send this hash to the server >>> > > - Server will receive the 10K-iterations hashed password and add >>> another >>> > > 10K iterations to it. Then it will compare the final 20K hash with the >>> > > 20K hash from the DB and checks if it match. >>> > > >>> > > This will allow that everything is done in single request, password is >>> > > not sent over the network in cleartext and also there is not the 20K >>> > > hash sent over the network, which won't be good as it will exactly >>> match >>> > > the hash from DB. Not sure if it's doable in practice, just an idea :) >>> > ah, browser doesn't have the password salt, so it won't be able to do >>> > first 10K iterations... >>> > >>> > Marek >>> > > >>> > > Marek >>> > > >>> > > On 10/03/17 11:11, Marek Posolda wrote: >>> > >> Kerberos is also similar to this. In fact Kerberos was designed to >>> > >> provide secure communication over insecure network. 
All the >>> handshakes >>> > >> are done in a way that sender usually encrypts the ticket/sessionKey >>> by >>> > >> some secret known by the receiving party (eg. hash of the user >>> > >> password). And yes, Kerberos also sends defacto just the username in >>> the >>> > >> first request of the username-password verification handshake. >>> > >> >>> > >> >>> > >> Marek >>> > >> >>> > >> On 09/03/17 16:10, Bill Burke wrote: >>> > >>> I think HTTP Digest was written for non-TLS connections and works >>> > similarly. >>> > >>> >>> > >>> FYI, this also requires the client provide a username prior to >>> > >>> authentication as you need to know the salt, algorithm, and number >>> of >>> > >>> hash iterations that were used to hash the password for that >>> particular >>> > >>> user. To prevent attackers from guessing usernames, the client >>> should >>> > >>> always be provided with this information whether or not the username >>> > exists. >>> > >>> >>> > >>> I think you could definitely implement something here. Would be a >>> nice >>> > >>> feature for Keycloak. >>> > >>> >>> > >>> >>> > >>> On 3/9/17 8:14 AM, Peter K. Boucher wrote: >>> > >>>> I think if I were going to tweak it myself, I would do something >>> > patterned >>> > >>>> after what NTLM did: >>> > >>>> >>> > >>>> Server generates pseudo-random nonce and sends it with the ID of >>> the >>> > >>>> hash-algorithm it used when storing the password: >>> > >>>> Server ----(hash algorithm, salt, nonce)----> Client >>> > >>>> >>> > >>>> Client hashes password with specified algorithm and salt. 
>>> > >>>> Client generates pseudo-random IV and encrypts the specified nonce, >>> > using >>> > >>>> the output of the hash as the key, and sends the IV and the >>> encrypted >>> > nonce >>> > >>>> to the Server: >>> > >>>> Client ----(IV, AES block-encrypted nonce with hash as key)----> >>> > Server >>> > >>>> >>> > >>>> Server uses stored hash and specified IV to decrypt nonce, and >>> > compares >>> > >>>> nonce to what was sent to the Client. >>> > >>>> >>> > >>>> This way, the password is never transmitted at all, but this >>> > >>>> challenge-response protocol serves to prove that the Client knows >>> the >>> > >>>> password. >>> > >>>> >>> > >>>> Anyway, I think my main question was answered that no one has done >>> > such a >>> > >>>> proof-based protocol with keycloak so far, right? >>> > >>>> >>> > >>>> -----Original Message----- >>> > >>>> From: keycloak-dev-bounces at lists.jboss.org >>> > >>>> [mailto:keycloak-dev-bounces at lists.jboss.org] On Behalf Of Bill >>> Burke >>> > >>>> Sent: Wednesday, March 8, 2017 8:46 PM >>> > >>>> To: keycloak-dev at lists.jboss.org >>> > >>>> Subject: Re: [keycloak-dev] Zero-knowledge proof of password? >>> > >>>> >>> > >>>> So, you want to create the hash in the browser or proxy, then >>> transmit >>> > >>>> this to Keycloak. Keycloak compares the hash to the precalculated >>> > hash >>> > >>>> it has stored? I don't see how this is any more secure. You're >>> still >>> > >>>> passing the credential (the hash) in clear text. >>> > >>>> >>> > >>>> BTW, I think other issues that make things more complex with client >>> > >>>> hashing is if >>> > >>>> >>> > >>>> * You need to bump up the number of hashing iterations. >>> (recommended >>> > >>>> value changes every 5 years or so) >>> > >>>> >>> > >>>> * Change the hashing algorithm. (SHA-1 was just broken). >>> > >>>> >>> > >>>> >>> > >>>> >>> > >>>> On 3/8/17 6:45 PM, Niels Bertram wrote: >>> > >>>>> Hi Peter, your security is only ever as good as the weakest link. 
>>> > Given >>> > >>>> you transmit the password using SSL up to your VPC why would you >>> need >>> > to >>> > >>>> "strengthen" (obfuscate rather) the password from there to the >>> > keycloak >>> > >>>> socket? From what I have seen there are 2 ways to proxy a message, >>> 1) >>> > to >>> > >>>> tunnel the SSL or 2) reencrypt it in the proxy. Maybe 1) is an >>> option >>> > for >>> > >>>> you as this setup would not decrypt your message ... although this >>> > comes >>> > >>>> with other drawbacks. I am intrigued as to what exactly you are >>> > trying to >>> > >>>> achieve by modifying the messages on the way though a proxy. Any >>> > chance you >>> > >>>> could elaborate on your security requirement? >>> > >>>>>> On 8 Mar. 2017, at 23:33, Peter K. Boucher < >>> pkboucher801 at gmail.com> >>> > >>>> wrote: >>> > >>>>>> Sorry, I should have described our scenario more thoroughly. >>> > >>>>>> >>> > >>>>>> We have one of these at the border of our VPC: >>> > >>>>>> https://en.wikipedia.org/wiki/TLS_termination_proxy >>> > >>>>>> >>> > >>>>>> We can accept the risk of data being transmitted in the clear >>> > inside the >>> > >>>>>> VPC, but we would prefer that passwords not be transmitted in the >>> > clear. >>> > >>>>>> >>> > >>>>>> It's an old problem. NTLM also used a proof of the password >>> rather >>> > than >>> > >>>>>> transmitting the password for similar reasons. >>> > >>>>>> >>> > >>>>>> We could force that TLS be used inside the VPC between the TLS >>> > >>>> termination >>> > >>>>>> proxy and Keycloak, but even then, the passwords are decrypted >>> and >>> > then >>> > >>>>>> re-encrypted. >>> > >>>>>> >>> > >>>>>> We are considering trying to use something like the client-side >>> > hashing >>> > >>>>>> described here: https://github.com/dxa4481/clientHashing >>> > >>>>>> >>> > >>>>>> The question for this group was related to whether anyone has >>> > already >>> > >>>>>> developed anything along these lines for use with Keycloak. 
>>> > >>>>>> >>> > >>>>>> Thanks! >>> > >>>>>> >>> > >>>>>> >>> > >>>>>> -----Original Message----- >>> > >>>>>> From: keycloak-dev-bounces at lists.jboss.org >>> > >>>>>> [mailto:keycloak-dev-bounces at lists.jboss.org] On Behalf Of Bill >>> > Burke >>> > >>>>>> Sent: Tuesday, March 7, 2017 6:06 PM >>> > >>>>>> To: keycloak-dev at lists.jboss.org >>> > >>>>>> Subject: Re: [keycloak-dev] Zero-knowledge proof of password? >>> > >>>>>> >>> > >>>>>> What does that even mean? Keycloak's SSL mode can forbid non SSL >>> > >>>>>> connections. FYI, OIDC requires SSL. >>> > >>>>>> >>> > >>>>>> >>> > >>>>>>> On 3/7/17 4:22 PM, Peter K. Boucher wrote: >>> > >>>>>>> Suppose you don't want your passwords transmitted in the clear >>> > after SSL >>> > >>>>>> is >>> > >>>>>>> terminated by a proxy. >>> > >>>>>>> >>> > >>>>>>> >>> > >>>>>>> >>> > >>>>>>> Has anyone developed a secure way for the client to prove they >>> > have the >>> > >>>>>>> password, rather than transmitting it in the body of a post? 
>>> > >>>>>>> >>> > >>>>>>> _______________________________________________ >>> > >>>>>>> keycloak-dev mailing list >>> > >>>>>>> keycloak-dev at lists.jboss.org >>> > >>>>>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> > >>>>>> _______________________________________________ >>> > >>>>>> keycloak-dev mailing list >>> > >>>>>> keycloak-dev at lists.jboss.org >>> > >>>>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> > >>>>>> >>> > >>>>>> _______________________________________________ >>> > >>>>>> keycloak-dev mailing list >>> > >>>>>> keycloak-dev at lists.jboss.org >>> > >>>>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> > >>>>> _______________________________________________ >>> > >>>>> keycloak-dev mailing list >>> > >>>>> keycloak-dev at lists.jboss.org >>> > >>>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> > >>>> _______________________________________________ >>> > >>>> keycloak-dev mailing list >>> > >>>> keycloak-dev at lists.jboss.org >>> > >>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> > >>>> >>> > >>> _______________________________________________ >>> > >>> keycloak-dev mailing list >>> > >>> keycloak-dev at lists.jboss.org >>> > >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> > >> _______________________________________________ >>> > >> keycloak-dev mailing list >>> > >> keycloak-dev at lists.jboss.org >>> > >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> > > >>> > > _______________________________________________ >>> > > keycloak-dev mailing list >>> > > keycloak-dev at lists.jboss.org >>> > > https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> > >>> > >>> > _______________________________________________ >>> > keycloak-dev mailing list >>> > keycloak-dev at lists.jboss.org >>> > https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> > >>> _______________________________________________ >>> keycloak-dev mailing list >>> keycloak-dev at 
lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> >> >> > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From psilva at redhat.com Mon Mar 13 09:43:40 2017 From: psilva at redhat.com (Pedro Igor Silva) Date: Mon, 13 Mar 2017 10:43:40 -0300 Subject: [keycloak-dev] fine-grain admin permissions with Authz In-Reply-To: <559ff331-4767-30dc-f758-83b10d761289@redhat.com> References: <559ff331-4767-30dc-f758-83b10d761289@redhat.com> Message-ID: On Sat, Mar 11, 2017 at 10:18 AM, Bill Burke wrote: > I'm looking into how we could implement fine-grain admin permissions > with Pedro's Authz service, i.e. fix our long standing bug that > manage-users allows people to grant themselves admin roles. I want to > do an exercise of how certain things can be modeled, specific user role > mappings. > Some things we want to be able to do > > * admin can only assign specific roles to users > > * admin can only assign specific roles to users of a specific group > > > The entire realm would be a Authz resource server. There's already a > client (resource server) for the realm "realm-management". > > - A Scope of "user-role-mapping" would be defined. > > These resources would be defined and would have the "user-role-mapping" > scope attached to them. > > * "Users" resource. This resource represents all users in the system * A resource is created per role > * A resource is created per group > You could also create different resources for each user. It is worth mentioning that our permissioning model supports typed resources. For instance, you can define a typed resource that represents all users (your "Users" resources) and assign some general policies that must be applied to all users. 
When you create a user you can also create a resource representing that particular user with the same type as the typed resource (we call a resource instance). That means that all policies associated with the typed resource are also enforced for any other resource with the same type. In addition, you can also define specific policies for a resource instance (e.g.: a resource representing a specific user) to enforce additional policies for a user. The same applies for roles, groups or any other resource you are protecting. > > Now, when managing roles for a user, we need to ask two questions: > > 1. Can the admin manage role mappings for this user? > 2. Can the admin manage role mappings for this role? > > For the first question, let's map the current behavior of Keycloak onto > the Authz service. > > * A scoped-base permission would be created for the "Users" resource > with a scope of "user-role-mapping" and a role policy of role > "manage-users". > > When role mapping happens, the operation would make an entitlement > request for "Users" with a scope of "user-role-mapping". This would > pass by default because of the default permission defined above. Now > what about the case where we only want an admin to be able to manage > roles for a specific group? 
In this case we would have a > resource defined for each role in the system. A scoped-based permission > would be created for the role's resource and the "user-role-mapping" > scope. For example, let's say we wanted to say that only admins with > the "admin-role-mapper" role can assign admin roles like "manage-users" > or "manage-realm". For the "manage-realm" role resource, we would > define a scoped-based permission for "user-role-mapping" with a role > policy of "admin-role-mapper". > > So, let's put this all together. The role mapping operation would do > these steps: > > 1. Can the admin manage role mappings for this user? > 1.1 Evaluate that admin can access "user-role-mapping" scope for "Users" > resource. If success, goto 2. > 1.2 For each group of the user being managed, evaluate that the admin > can access "user-role-mapping" scope for that Group. If success goto 2 > 1.3 Fail the role mapping operation > 2. Is the admin allowed to assign the specific role? > 2.1 Evaluate that the admin can access the "user-role-mapping" scope for > the role's resource. > Are you already implementing things ? Do you want me to look at these changes or work together with you on them ? (As you may have noticed, there is an API that we use internally to actually evaluate policies given a set of permissions.) > > > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From psilva at redhat.com Mon Mar 13 10:38:08 2017 From: psilva at redhat.com (Pedro Igor Silva) Date: Mon, 13 Mar 2017 11:38:08 -0300 Subject: [keycloak-dev] Arquillian tests using graphene are broken ? In-Reply-To: References: Message-ID: Thanks Marek, missed that email. Everything looks fine now ... On Fri, Mar 10, 2017 at 5:21 PM, Marek Posolda wrote: > Hey Pedro, > > This [1] and this [2] . Do you see any issues? For [1], the workaround can > be to use different browser (eg. 
-Dbrowser=phantomjs). For [2], the > workaround is import per method as described in the email. > > [1] http://lists.jboss.org/pipermail/keycloak-dev/2017-February/ > 008859.html > [2] http://lists.jboss.org/pipermail/keycloak-dev/2017-March/008902.html > > Marek > > > On 10/03/17 16:13, Pedro Igor Silva wrote: > >> Do have any change recently to arquillian testsuite ? >> >> Regards. >> Pedro Igor >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> > > > From bburke at redhat.com Mon Mar 13 11:15:43 2017 From: bburke at redhat.com (Bill Burke) Date: Mon, 13 Mar 2017 11:15:43 -0400 Subject: [keycloak-dev] fine-grain admin permissions with Authz In-Reply-To: References: <559ff331-4767-30dc-f758-83b10d761289@redhat.com> Message-ID: On 3/13/17 9:43 AM, Pedro Igor Silva wrote: > > Are you already implementing things ? Do you want me to look at these > changes or work together with you on them ? > > (As you may have noticed, there is an API that we use internally to > actually evaluate policies given a set of permissions.) Haven't implemented anything, just researching how it could be done. The biggest issue right now that I'm having is that I don't understand how to do things programmatically yet (i.e. set up resources, set up scopes, set up permissions, set up policies). I don't understand how the UI translates to the JPA entity model, and there seems to be a lot of setup data hidden by generic Map objects. It's also really confusing how the admin REST interface translates from the UI to the model. It's also really bizarre to me that the things represented in the Admin Console UI are not represented in the data model. I.e. I have no idea how a "Scoped-Permission" in the admin console maps to a JSON representation, the REST API, nor how that JSON representation is mapped to the model. 
Bill From psilva at redhat.com Mon Mar 13 11:49:03 2017 From: psilva at redhat.com (Pedro Igor Silva) Date: Mon, 13 Mar 2017 12:49:03 -0300 Subject: [keycloak-dev] fine-grain admin permissions with Authz In-Reply-To: References: <559ff331-4767-30dc-f758-83b10d761289@redhat.com> Message-ID: On Mon, Mar 13, 2017 at 12:15 PM, Bill Burke wrote: > > > On 3/13/17 9:43 AM, Pedro Igor Silva wrote: > > > Are you already implementing things ? Do you want me to look at these > changes or work together with you on them ? > > (As you may have noticed, there is an API that we use internally to > actually evaluate policies given a set of permissions.) > > Haven't implemented anything just researching how it could be done. The > biggest issue right now that I'm having is that I don't understand how to > do things programatically yet (i.e. set up resources, set up scopes, set up > permissions, set up policies). I don't understand how the UI translates to > the JPA entity model and there seems to be a lot of set up data hidden by > generic Map objects. Its also really confusing how the admin REST > interface translates from the UI to the model. Its also really bizarre > to me that the things represented in the Admin Console UI are not > represented in the data model. i.e. I have no idea how a > "Scoped-Permission" in the admin console maps to a JSON representation, the > REST API, nor how that JSON representation is mapped to the model. > The usage of Map objects is similar to what we have with identity brokering. We need to be flexible in order to support different types of policies without requiring changes to the model. Regarding representation, both permission and policy are represented as policies. What differentiates a permission from a policy is its type. Policies defined as "scope" or "resource" (as you have in the UI) are the ones you can use to actually create permissions to resources and scopes. 
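To make the distinction above concrete, a scope-type policy (i.e. a permission) might serialize roughly as sketched below. This is illustrative only: the exact field names and the JSON-encoded string values inside "config" are assumptions about the representation, not documented API.

```json
{
  "name": "user-role-mapping for Group Foo",
  "type": "scope",
  "logic": "POSITIVE",
  "decisionStrategy": "UNANIMOUS",
  "config": {
    "resources": "[\"Group Foo\"]",
    "scopes": "[\"user-role-mapping\"]",
    "applyPolicies": "[\"foo-admin role policy\"]"
  }
}
```

A plain policy would carry a different "type" (e.g. "role") and hold its condition data in "config" instead of resource/scope references; only the "scope" and "resource" types act as permissions.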
Everything you need is available from an AuthorizationProvider instance, which can be obtained as follows: KeycloakSession.getProvider(AuthorizationProvider.class); From an instance of that class you can access methods to evaluate permissions or obtain references to the different storages we use to actually manage resources, policies, scopes and resource servers. I can include the AuthZ API as a topic to our meeting about authz services next week. Regarding scope-based permissions and the use cases you pointed out, I think we may have to change how we are handling typed resources. Today, we know a resource is a general/typed resource because the owner of the resource is the resource server itself, and resource instances are usually owned by a user, someone other than the resource server. Need to take a look at this one and see how it impacts the use cases you pointed out ... > > > Bill > From bburke at redhat.com Mon Mar 13 13:10:07 2017 From: bburke at redhat.com (Bill Burke) Date: Mon, 13 Mar 2017 13:10:07 -0400 Subject: [keycloak-dev] fine-grain admin permissions with Authz In-Reply-To: References: <559ff331-4767-30dc-f758-83b10d761289@redhat.com> Message-ID: On 3/13/17 11:49 AM, Pedro Igor Silva wrote: > > On Mon, Mar 13, 2017 at 12:15 PM, Bill Burke > > wrote: > > > > On 3/13/17 9:43 AM, Pedro Igor Silva wrote: >> >> Are you already implementing things ? Do you want me to look at >> these changes or work together with you on them ? >> >> (As you may have noticed, there is an API that we use internally >> to actually evaluate policies given a set of permissions.) > Haven't implemented anything just researching how it could be > done. The biggest issue right now that I'm having is that I don't > understand how to do things programatically yet (i.e. set up > resources, set up scopes, set up permissions, set up policies). 
I > don't understand how the UI translates to the JPA entity model and > there seems to be a lot of set up data hidden by generic Map > objects. Its also really confusing how the admin REST interface > translates from the UI to the model. Its also really bizarre to > me that the things represented in the Admin Console UI are not > represented in the data model. i.e. I have no idea how a > "Scoped-Permission" in the admin console maps to a JSON > representation, the REST API, nor how that JSON representation is > mapped to the model. > > > The usage of Map objects is similar to what we have with identity > brokering. We need to be flexible in order to support different types > of policies without requiring changes to model. > > Regarding representation, both permission and policy are represented > as policies. What differs a permission from a policy is the type. > Policies defined as "scope" or "resource" (as you have in UI) are the > ones you can use to actually create permissions to resources and scopes. > This is the really confusing part. There is no documentation on how to set up a Scoped or Resource permission through the REST API. The AuthzClient only allows you to create resources AFAICT. I'm currently tracing through admin-console permission UI to find which REST endpoint to use, and how that REST endpoint converts JSON into actual API calls. > Everything you need is available from an AuthorizationProvider > instance, which can be obtained as follows: > > KeycloakSession.getProvider(AuthorizationProvider.class); > > From an instance of that class you can access methods to evaluate > permissions or obtain references to the different storages we use to > actually manage resources, policies, scopes and resource servers. I > can include the AuthZ API as a topic to our meeting about authz > services next week. 
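As an illustration of the entry point quoted above, server-side extension code could look roughly like this. Only the KeycloakSession.getProvider(AuthorizationProvider.class) call is taken from the thread; the store accessor names below are assumptions about the SPI, and this is a fragment, not a runnable program.

```java
// Fragment of hypothetical server-side extension code; `session` is a
// KeycloakSession. Accessor names are assumptions based on Pedro's
// description of "storages" for resources, policies, scopes and
// resource servers.
AuthorizationProvider authz = session.getProvider(AuthorizationProvider.class);

StoreFactory storeFactory = authz.getStoreFactory();
ResourceStore resourceStore = storeFactory.getResourceStore();           // resources
ScopeStore scopeStore = storeFactory.getScopeStore();                    // scopes
PolicyStore policyStore = storeFactory.getPolicyStore();                 // policies and permissions
ResourceServerStore serverStore = storeFactory.getResourceServerStore(); // resource servers
```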
> I don't have everything I need as you overload concepts (permission and policy), and the magic that must be done to store these definitions doesn't make sense to me yet. > Regarding scope-based permissions and the use cases you pointed out, I > think we may have to change how we are handling typed resources. > Today, we know a resource is a general/typed resource because the > owner of the resource is the resource server itself, and resource > instances are usually owned by an user, someone different than the > resource server. Need to take a look on this one and see how it impact > the use cases you pointed out ... I have no idea what you're talking about here. Am I missing something fundamental here? From the docs, examples, and the admin console UI, I thought Authz was about defining rules (policies) to determine if somebody is allowed to perform a specific action (scope) on a certain thing (resource). Bill From psilva at redhat.com Mon Mar 13 13:27:44 2017 From: psilva at redhat.com (Pedro Igor Silva) Date: Mon, 13 Mar 2017 14:27:44 -0300 Subject: [keycloak-dev] fine-grain admin permissions with Authz In-Reply-To: References: <559ff331-4767-30dc-f758-83b10d761289@redhat.com> Message-ID: On Mon, Mar 13, 2017 at 2:10 PM, Bill Burke wrote: > > > On 3/13/17 11:49 AM, Pedro Igor Silva wrote: > > > On Mon, Mar 13, 2017 at 12:15 PM, Bill Burke wrote: > >> >> >> On 3/13/17 9:43 AM, Pedro Igor Silva wrote: >> >> >> Are you already implementing things ? Do you want me to look at these >> changes or work together with you on them ? >> >> (As you may have noticed, there is an API that we use internally to >> actually evaluate policies given a set of permissions.) >> >> Haven't implemented anything just researching how it could be done. The >> biggest issue right now that I'm having is that I don't understand how to >> do things programatically yet (i.e. set up resources, set up scopes, set up >> permissions, set up policies). 
I don't understand how the UI translates to >> the JPA entity model and there seems to be a lot of set up data hidden by >> generic Map objects. Its also really confusing how the admin REST >> interface translates from the UI to the model. Its also really bizarre >> to me that the things represented in the Admin Console UI are not >> represented in the data model. i.e. I have no idea how a >> "Scoped-Permission" in the admin console maps to a JSON representation, the >> REST API, nor how that JSON representation is mapped to the model. >> > > The usage of Map objects is similar to what we have with identity > brokering. We need to be flexible in order to support different types of > policies without requiring changes to model. > > > Regarding representation, both permission and policy are represented as > policies. What differs a permission from a policy is the type. Policies > defined as "scope" or "resource" (as you have in UI) are the ones you can > use to actually create permissions to resources and scopes. > > This is the really confusing part. There is no documentation on how to > set up a Scoped or Resource permission through the REST API. The > AuthzClient only allows you to create resources AFAICT. I'm currently > tracing through admin-console permission UI to find which REST endpoint to > use, and how that REST endpoint converts JSON into actual API calls. > The confusion is because we have this JIRA https://issues.jboss.org/browse/KEYCLOAK-3135. Currently, we don't have a nice REST API for policy management. That explains why we lack docs as well. You can use KC Admin Client (that is what we use in tests) but that will change once we introduce a specific API for this purpose. 
> > Everything you need is available from an AuthorizationProvider instance, > which can be obtained as follows: > > KeycloakSession.getProvider(AuthorizationProvider.class); > > From an instance of that class you can access methods to evaluate > permissions or obtain references to the different storages we use to > actually manage resources, policies, scopes and resource servers. I can > include the AuthZ API as a topic to our meeting about authz services next > week. > > I don't have everything I need as you overload concepts (permission and > policy) and it doesn't make sense yet the magic that must be done to store > these definitions. > I don't think I've overloaded concepts. The concept of a policy is much broader than that of a permission. A policy can be seen as a condition that must be satisfied in order to grant/deny access to something (resource or scope). You can reuse policies across different permissions (you probably want that instead of replicating the same condition on similar policies), policies can be combined to form more complex and fine-grained conditions (see aggregated policies), etc. On the other hand, permissions are what you actually use to say: "OK, here is the thing I want to protect with these policies". Initially, we didn't have the "permission" concept at all, and that was preventing us from being more flexible and supporting more functionality. What magic are you talking about? > >> Regarding scope-based permissions and the use cases you pointed out, I >> think we may have to change how we are handling typed resources. Today, we >> know a resource is a general/typed resource because the owner of the >> resource is the resource server itself, and resource instances are usually >> owned by an user, someone different than the resource server. Need to take >> a look on this one and see how it impact the use cases you pointed out ... >> >> I have no idea what you're talking about here. Am I missing something >> fundamental here? 
From the docs, examples, and the admin console UI, I > thought Authz was about defining rules (policies) to determine if somebody > is allowed to perform a specific action (scope) on a certain thing > (resource). > There are a few things more than just "are you allowed to do this" :) You may get to a decision in different/configured ways. Glad that you are now looking into this. What I meant is that we kind of support a parent/child relationship between resources with typed resources [1]. Just thinking out loud about what we may (or not) need to use authz internally. [1] https://keycloak.gitbooks.io/authorization-services-guide/topics/permission/typed-resource-permission.html > > Bill > From psilva at redhat.com Mon Mar 13 13:42:35 2017 From: psilva at redhat.com (Pedro Igor Silva) Date: Mon, 13 Mar 2017 14:42:35 -0300 Subject: [keycloak-dev] fine-grain admin permissions with Authz In-Reply-To: References: <559ff331-4767-30dc-f758-83b10d761289@redhat.com> Message-ID: On Mon, Mar 13, 2017 at 7:03 AM, Stian Thorgersen wrote: > The JIRA for this is https://issues.jboss.org/browse/KEYCLOAK-3444 > > We should provide this level of fine-grained permissions only to admins > within the realm. That will simplify both the model as well as the UI for > it. Admins in the master realm should be considered "super users" and as > such there is no need to be able to finely control what they can access. > > I still want to see the master realm eventually going away and instead > leveraging identity brokering as a way to allow admins from one realm to > admin other realms. Not sure that's something we can introduce in a minor > update of RH-SSO though and that would probably have to wait for RH-SSO 8. > The benefit here is that permissions are managed within a single realm and > there is no restriction that both the master realm and the other realm have > to be on the same server. 
With some automatic "linking" of realms and a bit > of clever UI work we should be able to make it as easy to use as todays > master realm. > > Within a realm it should be possible to define a subset of the users, > clients and roles that a specific admin can view and/or manage. It should > leverage the authorization services. Would be great if it's done in such a > way that it would be possible to customize it with your own policies. > I think we can achieve that already. Although, depending on how many users a realm may have, we may need to revisit UI controls in order to improve usability. I'm going to try to prepare something for our next meeting in this regard. > > On 11 March 2017 at 14:18, Bill Burke wrote: > > > I'm looking into how we could implement fine-grain admin permissions > > with Pedro's Authz service, i.e. fix our long standing bug that > > manage-users allows people to grant themselves admin roles. I want to > > do an exercise of how certain things can be modeled, specific user role > > mappings. > > > > Some things we want to be able to do > > > > * admin can only assign specific roles to users > > > > * admin can only assign specific roles to users of a specific group > > > > > > The entire realm would be a Authz resource server. There's already a > > client (resource server) for the realm "realm-management". > > > > - A Scope of "user-role-mapping" would be defined. > > > > These resources would be defined and would have the "user-role-mapping" > > scope attached to them. > > > > * "Users" resource. This resource represents all users in the system > > * A resource is created per role > > * A resource is created per group > > > > Now, when managing roles for a user, we need to ask two questions: > > > > 1. Can the admin manage role mappings for this user? > > 2. Can the admin manage role mappings for this role? > > > > For the first question, let's map the current behavior of Keycloak onto > > the Authz service. 
> > > > * A scoped-base permission would be created for the "Users" resource > > with a scope of "user-role-mapping" and a role policy of role > > "manage-users". > > > > When role mapping happens, the operation would make an entitlement > > request for "Users" with a scope of "user-role-mapping". This would > > pass by default because of the default permission defined above. Now > > what about the case where we only want an admin to be able to manage > > roles for a specific group? In this case we define a resource for the > > Group Foo. The Group Foo would be attached to the "user-role-mapping" > > scope. Then the realm admin would define a scope-based permission for > > the Group Foo resource and "user-role-mapping". For example, there > > might be a "foo-admin" role. The scope permission could grant the > > permission if the admin has the "foo-admin" role. > > > > So, if the "Users"->"user-role-mapping" evaluation fails, the role > > mapping operation would then cycle through each Group of the user being > > managed and see if "Group Foo"->"user-role-mapping" evaluates correctly. > > > > That's only half of a solution to our problem. We also want to control > > what roles an admin is allowed to manage. In this case we would have a > > resource defined for each role in the system. A scoped-based permission > > would be created for the role's resource and the "user-role-mapping" > > scope. For example, let's say we wanted to say that only admins with > > the "admin-role-mapper" role can assign admin roles like "manage-users" > > or "manage-realm". For the "manage-realm" role resource, we would > > define a scoped-based permission for "user-role-mapping" with a role > > policy of "admin-role-mapper". > > > > So, let's put this all together. The role mapping operation would do > > these steps: > > > > 1. Can the admin manage role mappings for this user? > > 1.1 Evaluate that admin can access "user-role-mapping" scope for "Users" > > resource. If success, goto 2. 
> > 1.2 For each group of the user being managed, evaluate that the admin > > can access "user-role-mapping" scope for that Group. If success goto 2 > > 1.3 Fail the role mapping operation > > 2. Is the admin allowed to assign the specific role? > > 2.1 Evaluate that the admin can access the "user-role-mapping" scope for > > the role's resource. > > > > > > > > > > _______________________________________________ > > keycloak-dev mailing list > > keycloak-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From bburke at redhat.com Mon Mar 13 15:20:02 2017 From: bburke at redhat.com (Bill Burke) Date: Mon, 13 Mar 2017 15:20:02 -0400 Subject: [keycloak-dev] fine-grain admin permissions with Authz In-Reply-To: References: <559ff331-4767-30dc-f758-83b10d761289@redhat.com> Message-ID: <5746d979-fc15-796d-ae01-3f002d77308e@redhat.com> On 3/13/17 1:27 PM, Pedro Igor Silva wrote: > On Mon, Mar 13, 2017 at 2:10 PM, Bill Burke > wrote: > > > > On 3/13/17 11:49 AM, Pedro Igor Silva wrote: >> >> On Mon, Mar 13, 2017 at 12:15 PM, Bill Burke > > wrote: >> >> >> >> On 3/13/17 9:43 AM, Pedro Igor Silva wrote: >>> >>> Are you already implementing things ? Do you want me to look >>> at these changes or work together with you on them ? >>> >>> (As you may have noticed, there is an API that we use >>> internally to actually evaluate policies given a set of >>> permissions.) >> Haven't implemented anything just researching how it could be >> done. The biggest issue right now that I'm having is that I >> don't understand how to do things programatically yet (i.e. >> set up resources, set up scopes, set up permissions, set up >> policies). I don't understand how the UI translates to the >> JPA entity model and there seems to be a lot of set up data >> hidden by generic Map objects. 
Its also really confusing how >> the admin REST interface translates from the UI to the >> model. Its also really bizarre to me that the things >> represented in the Admin Console UI are not represented in >> the data model. i.e. I have no idea how a >> "Scoped-Permission" in the admin console maps to a JSON >> representation, the REST API, nor how that JSON >> representation is mapped to the model. >> >> >> The usage of Map objects is similar to what we have with identity >> brokering. We need to be flexible in order to support different >> types of policies without requiring changes to model. >> > >> Regarding representation, both permission and policy are >> represented as policies. What differs a permission from a policy >> is the type. Policies defined as "scope" or "resource" (as you >> have in UI) are the ones you can use to actually create >> permissions to resources and scopes. >> > This is the really confusing part. There is no documentation on > how to set up a Scoped or Resource permission through the REST > API. The AuthzClient only allows you to create resources AFAICT. > I'm currently tracing through admin-console permission UI to find > which REST endpoint to use, and how that REST endpoint converts > JSON into actual API calls. > > > The confusion is because we have this JIRA > https://issues.jboss.org/browse/KEYCLOAK-3135. Currently, we don't > have a nice REST API for policy management. That explains why we lack > docs as well. You can use KC Admin Client (that is what we use in > tests) but that will change once we introduce a specific API for this > purpose. 
> > > >> Everything you need is available from an AuthorizationProvider >> instance, which can be obtained as follows: >> >> KeycloakSession.getProvider(AuthorizationProvider.class); >> >> From an instance of that class you can access methods to evaluate >> permissions or obtain references to the different storages we use >> to actually manage resources, policies, scopes and resource >> servers. I can include the AuthZ API as a topic to our meeting >> about authz services next week. >> > I don't have everything I need as you overload concepts > (permission and policy) and it doesn't make sense yet the magic > that must be done to store these definitions. > > > I don't think I've overloaded concepts. The concept of a policy is > much more broader than with a permission. A policy can be seen as a > condition that must be satisfied in order to grant/deny access to > something (resource or scope). You can reuse policies across different > permissions (you probably want that instead of replicating the same > condition on similar policies), policies can be combined to form more > complex and fine-grained conditions (see aggregated policies), etc. I'm sorry. I sounded too critical and I didn't mean to be or sound like I was attacking you. I really like what you are doing and am excited about it...but...You do overload concepts because the admin console has the concept of a Permission and there's no Permission object in the REST API, the model api, or the data model. Permission looked like a join table between Resource and Policy or Resource, Scope, and Policy. From your docs and admin console it looks like you have 4 concepts: * The thing (Resource) * The action (scope) * The condition (policy) * The permission (join of resource, scope, and policy). > > On the other hand, permissions are what you actually use to say: "OK, > here is the thing I want to protect with these policies". 
Initially, > we didn't have the "permission" concept at all and that was blocking > us to be more flexible and support more functionalities. > Yeah, Keycloak didn't have the concept of "Resource", "Scope", or "Policy". Role was really just a way to categorize users. Is that what you mean? > What magic you are talking about ? > > >> Regarding scope-based permissions and the use cases you pointed >> out, I think we may have to change how we are handling typed >> resources. Today, we know a resource is a general/typed resource >> because the owner of the resource is the resource server itself, >> and resource instances are usually owned by an user, someone >> different than the resource server. Need to take a look on this >> one and see how it impact the use cases you pointed out ... > I have no idea what you're talking about here. Am I missing > something fundamental here? From the docs, examples, and the > admin console UI, I thought Authz was about defining rules > (policies) to determine if somebody is allowed to perform a > specific action (scope) on a certain thing (resource). > > > There are a few things more than just "are you allowed to do this" :) > You may get to a decision in different/configured ways. Glad that you > are now looking into this. > What is the more? I understand you can get to a decision in different ways. That's what is really cool about your stuff...but what exactly are you "deciding" on? Again, I thought we were deciding what actions somebody can perform on a resource. Some other related questions: * What is a "resource permission" supposed to be? Is it a "catch all" for that resource? * What if you have multiple "scoped permissions" defined for the same resource and scope? Do they all have to be true? Or just one? > What I meant is that we kind of support a parent/child relationship > between resources with typed resources [1]. Just thinking loud about > what we may (or not) need to use authz internally. 
> > [1] > https://keycloak.gitbooks.io/authorization-services-guide/topics/permission/typed-resource-permission.html That looks interesting, but there's currently no way to say: For this resource type and these scopes, apply these policies. The "scope" part is missing. Bill From psilva at redhat.com Mon Mar 13 16:34:16 2017 From: psilva at redhat.com (Pedro Igor Silva) Date: Mon, 13 Mar 2017 17:34:16 -0300 Subject: [keycloak-dev] fine-grain admin permissions with Authz In-Reply-To: <5746d979-fc15-796d-ae01-3f002d77308e@redhat.com> References: <559ff331-4767-30dc-f758-83b10d761289@redhat.com> <5746d979-fc15-796d-ae01-3f002d77308e@redhat.com> Message-ID: On Mon, Mar 13, 2017 at 4:20 PM, Bill Burke wrote: > > > On 3/13/17 1:27 PM, Pedro Igor Silva wrote: > > On Mon, Mar 13, 2017 at 2:10 PM, Bill Burke wrote: > >> >> >> On 3/13/17 11:49 AM, Pedro Igor Silva wrote: >> >> >> On Mon, Mar 13, 2017 at 12:15 PM, Bill Burke wrote: >> >>> >>> >>> On 3/13/17 9:43 AM, Pedro Igor Silva wrote: >>> >>> >>> Are you already implementing things ? Do you want me to look at these >>> changes or work together with you on them ? >>> >>> (As you may have noticed, there is an API that we use internally to >>> actually evaluate policies given a set of permissions.) >>> >>> Haven't implemented anything just researching how it could be done. The >>> biggest issue right now that I'm having is that I don't understand how to >>> do things programatically yet (i.e. set up resources, set up scopes, set up >>> permissions, set up policies). I don't understand how the UI translates to >>> the JPA entity model and there seems to be a lot of set up data hidden by >>> generic Map objects. Its also really confusing how the admin REST >>> interface translates from the UI to the model. Its also really bizarre >>> to me that the things represented in the Admin Console UI are not >>> represented in the data model. i.e. 
I have no idea how a >>> "Scoped-Permission" in the admin console maps to a JSON representation, the >>> REST API, nor how that JSON representation is mapped to the model. >>> >> >> The usage of Map objects is similar to what we have with identity >> brokering. We need to be flexible in order to support different types of >> policies without requiring changes to model. >> >> >> Regarding representation, both permission and policy are represented as >> policies. What differs a permission from a policy is the type. Policies >> defined as "scope" or "resource" (as you have in UI) are the ones you can >> use to actually create permissions to resources and scopes. >> >> This is the really confusing part. There is no documentation on how to >> set up a Scoped or Resource permission through the REST API. The >> AuthzClient only allows you to create resources AFAICT. I'm currently >> tracing through admin-console permission UI to find which REST endpoint to >> use, and how that REST endpoint converts JSON into actual API calls. >> > > The confusion is because we have this JIRA https://issues.jboss.org/ > browse/KEYCLOAK-3135. Currently, we don't have a nice REST API for policy > management. That explains why we lack docs as well. You can use KC Admin > Client (that is what we use in tests) but that will change once we > introduce a specific API for this purpose. > > >> >> >> Everything you need is available from an AuthorizationProvider instance, >> which can be obtained as follows: >> >> KeycloakSession.getProvider(AuthorizationProvider.class); >> >> From an instance of that class you can access methods to evaluate >> permissions or obtain references to the different storages we use to >> actually manage resources, policies, scopes and resource servers. I can >> include the AuthZ API as a topic to our meeting about authz services next >> week. 
>> >> I don't have everything I need as you overload concepts (permission and >> policy) and it doesn't make sense yet the magic that must be done to store >> these definitions. >> > > I don't think I've overloaded concepts. The concept of a policy is much > more broader than with a permission. A policy can be seen as a condition > that must be satisfied in order to grant/deny access to something (resource > or scope). You can reuse policies across different permissions (you > probably want that instead of replicating the same condition on similar > policies), policies can be combined to form more complex and fine-grained > conditions (see aggregated policies), etc. > > > I'm sorry. I sounded too critical and I didn't mean to be or sound like I > was attacking you. I really like what you are doing and am excited about > it...but...You do overload concepts because the admin console has the > concept of a Permission and there's no Permission object in the REST API, > the model api, or the data model. Permission looked like a join table > between Resource and Policy or Resource, Scope, and Policy. From your docs > and admin console it looks like you have 4 concepts: > > * The thing (Resource) > * The action (scope) > * The condition (policy) > * The permission (join of resource, scope, and policy). > > > On the other hand, permissions are what you actually use to say: "OK, here > is the thing I want to protect with these policies". Initially, we didn't > have the "permission" concept at all and that was blocking us to be more > flexible and support more functionalities. > > Yeah, Keycloak didn't have the concept of "Resource", "Scope", or > "Policy". Role was really just a way to catagorize users. Is that what > you mean? > Don't worry, I know what you mean. You usually have a point when you criticize something... From the bottom up: I did not mean role and KC role mapping. By "initially", I meant when I started AuthZ Services... 
Regarding your point, that is exactly what we want to fix/provide with that JIRA for "Remote Policy Management": not only provide a REST API on top of what we have, but make it easier to manage policies for a resource server. We are definitely going to have a Permission object in the REST API. Regarding the model and data model, we can discuss that in a separate thread. This is an implementation detail that we can easily solve. > > > What magic are you talking about ? > > >> >> >> Regarding scope-based permissions and the use cases you pointed out, I >> think we may have to change how we are handling typed resources. Today, we >> know a resource is a general/typed resource because the owner of the >> resource is the resource server itself, and resource instances are usually >> owned by a user, someone other than the resource server. Need to take >> a look at this one and see how it impacts the use cases you pointed out ... >> >> I have no idea what you're talking about here. Am I missing something >> fundamental here? From the docs, examples, and the admin console UI, I >> thought Authz was about defining rules (policies) to determine if somebody >> is allowed to perform a specific action (scope) on a certain thing >> (resource). >> > > There are a few things more than just "are you allowed to do this" :) You > may get to a decision in different/configured ways. Glad that you are now > looking into this. > > What is the "more"? I understand you can get to a decision in different > ways. That's what is really cool about your stuff...but what exactly are > you "deciding" on? Again, I thought we were deciding what actions > somebody can perform on a resource. > By "more", I mean that a decision really depends on how you define your policies. That "typed resource" thing I mentioned is one way to define common permissions on a "parent" resource and have them also applied to resource instances sharing the same type.
There is also the possibility to override permissions defined on a typed resource for a given resource instance, the possibility to specify more fine-grained policies for scopes associated with a resource even if that resource has a permission (resource-based permission) associated with it, etc. But in a nutshell, yes, we are basically determining which actions somebody can perform on a resource. > > Some other related questions: > * What is a "resource permission" supposed to be? Is it a "catch all" for > that resource? > Yes. A resource permission should be used when you want to define policies for a given resource. In this case, if you get a PERMIT you'll have access to all scopes associated with the resource as well. > * What if you have multiple "scoped permissions" defined for the same > resource and scope. Do they all have to be true? Or just one? > For scope-based permissions, what our evaluation engine does is look for PERMITs. So, if you have asked for all permissions a user has (e.g. using the entitlement API) you'll get the scopes whose permissions evaluated to a PERMIT. > > > > What I meant is that we kind of support a parent/child relationship > between resources with typed resources [1]. Just thinking aloud about what > we may (or not) need to use authz internally. > > [1] https://keycloak.gitbooks.io/authorization-services-guide/topics/permission/typed-resource-permission.html > > > That looks interesting, but there's currently no way to say: for this > resource type and these scopes, apply these policies. The "scope" part is > missing. > Right now, this is achieved by checking the scopes associated with the resource that we consider to be the parent. Like I mentioned, we assume that the parent/typed resource is the one whose owner is the resource server itself. I have a question about what you are planning to do.
Are you planning to put our policy enforcer on top of the Keycloak Admin API, or are you going to use the AuthZ API internally to enforce those decisions ? From a UI perspective, are you planning to re-use the UIs we already have in that "Authorization" tab or provide a specific set of UIs for defining permissions to KC resources ? > > Bill >

From bburke at redhat.com Mon Mar 13 17:07:39 2017 From: bburke at redhat.com (Bill Burke) Date: Mon, 13 Mar 2017 17:07:39 -0400 Subject: [keycloak-dev] next-gen Keycloak proxy Message-ID:

Keycloak Proxy was written a few years ago to secure apps that can't use an adapter provided by us. While Keycloak Proxy works (mostly?), we've been pushing people to Apache + mod-auth-mellon or mod-auth-openidc for non-Java apps. I predict that relying on Apache to proxy and secure apps that can't use our adapters is going to quickly become an issue for us. We already have a need to write extensions to mod-auth-*, specifically to support Pedro's Authz work (which is really nice BTW!). We could also do tighter integration to make the configuration experience more user-friendly. The problem is we have zero expertise in this area and none of us are C/C++ developers (I haven't coded in C/C++ since 1999 when I was at Iona). This brings me to what would be the next generation of the Keycloak Proxy. The first thing I'd like to improve is that configuration would happen within the admin console. This configuration could be made much simpler, as whatever protocol configuration is needed could be hard-coded and pre-configured. Mappers would focus on mapping values to HTTP headers. Beyond configuration, things become more interesting and complex, and there are multiple factors in deciding the authentication protocol, proxy design, and provisioning: * Can/Should one Keycloak Proxy virtual-host and proxy multiple apps in the same instance? One thing stopping this is SSL.
If Keycloak Proxy is handling SSL, then there is no possibility of virtual hosting. If the load balancer is handling SSL, then this is a possibility. * Keycloak Proxy currently needs an HttpSession, as it stores authentication information (JWS access token and Refresh Token) there so it can forward it to the application. We'd have to either shrink the needed information so it could be stored in a cookie, or replicate sessions. The latter would have the same issues with cross DC. * Should we collocate Keycloak Proxy with the Keycloak runtime? That is, should Keycloak Proxy have direct access to UserSession, ClientSession, and other model interfaces? The benefit of this is that you could have a really optimized auth protocol; you'd still have to bounce the browser to set up cookies directly, but everything else could be handled through the ClientSession object and there would be no need to generate or store tokens. * Collocation is even nicer if virtual hosting could be done and there would be no configuration needed for the proxy. It would just be configured as a Keycloak instance and pull which apps it would need to proxy from the database.

From bburke at redhat.com Mon Mar 13 17:38:33 2017 From: bburke at redhat.com (Bill Burke) Date: Mon, 13 Mar 2017 17:38:33 -0400 Subject: [keycloak-dev] fine-grain admin permissions with Authz In-Reply-To: References: <559ff331-4767-30dc-f758-83b10d761289@redhat.com> <5746d979-fc15-796d-ae01-3f002d77308e@redhat.com> Message-ID: <3dee10d8-d848-dc82-ec53-6a917c4664a7@redhat.com>

On 3/13/17 4:34 PM, Pedro Igor Silva wrote: > On Mon, Mar 13, 2017 at 4:20 PM, Bill Burke > wrote: > > > > On 3/13/17 1:27 PM, Pedro Igor Silva wrote: >> On Mon, Mar 13, 2017 at 2:10 PM, Bill Burke > > wrote: >> >> >> >> On 3/13/17 11:49 AM, Pedro Igor Silva wrote: >>> >>> On Mon, Mar 13, 2017 at 12:15 PM, Bill Burke >>> > wrote: >>> >>> >>> >>> On 3/13/17 9:43 AM, Pedro Igor Silva wrote: >>>> >>>> Are you already implementing things ?
Do you want me to >>>> look at these changes or work together with you on them ? >>>> >>>> (As you may have noticed, there is an API that we use >>>> internally to actually evaluate policies given a set of >>>> permissions.) >>> Haven't implemented anything just researching how it >>> could be done. The biggest issue right now that I'm >>> having is that I don't understand how to do things >>> programatically yet (i.e. set up resources, set up >>> scopes, set up permissions, set up policies). I don't >>> understand how the UI translates to the JPA entity model >>> and there seems to be a lot of set up data hidden by >>> generic Map objects. Its also really confusing how the >>> admin REST interface translates from the UI to the >>> model. Its also really bizarre to me that the things >>> represented in the Admin Console UI are not represented >>> in the data model. i.e. I have no idea how a >>> "Scoped-Permission" in the admin console maps to a JSON >>> representation, the REST API, nor how that JSON >>> representation is mapped to the model. >>> >>> >>> The usage of Map objects is similar to what we have with >>> identity brokering. We need to be flexible in order to >>> support different types of policies without requiring >>> changes to model. >>> >> >>> Regarding representation, both permission and policy are >>> represented as policies. What differs a permission from a >>> policy is the type. Policies defined as "scope" or >>> "resource" (as you have in UI) are the ones you can use to >>> actually create permissions to resources and scopes. >>> >> This is the really confusing part. There is no documentation >> on how to set up a Scoped or Resource permission through the >> REST API. The AuthzClient only allows you to create >> resources AFAICT. I'm currently tracing through >> admin-console permission UI to find which REST endpoint to >> use, and how that REST endpoint converts JSON into actual API >> calls. 
>> >> >> The confusion is because we have this JIRA >> https://issues.jboss.org/browse/KEYCLOAK-3135 >> . Currently, we >> don't have a nice REST API for policy management. That explains >> why we lack docs as well. You can use KC Admin Client (that is >> what we use in tests) but that will change once we introduce a >> specific API for this purpose. >> >> >> >>> Everything you need is available from an >>> AuthorizationProvider instance, which can be obtained as >>> follows: >>> >>> KeycloakSession.getProvider(AuthorizationProvider.class); >>> >>> From an instance of that class you can access methods to >>> evaluate permissions or obtain references to the different >>> storages we use to actually manage resources, policies, >>> scopes and resource servers. I can include the AuthZ API as >>> a topic to our meeting about authz services next week. >>> >> I don't have everything I need as you overload concepts >> (permission and policy) and it doesn't make sense yet the >> magic that must be done to store these definitions. >> >> >> I don't think I've overloaded concepts. The concept of a policy >> is much more broader than with a permission. A policy can be seen >> as a condition that must be satisfied in order to grant/deny >> access to something (resource or scope). You can reuse policies >> across different permissions (you probably want that instead of >> replicating the same condition on similar policies), policies can >> be combined to form more complex and fine-grained conditions (see >> aggregated policies), etc. > > I'm sorry. I sounded too critical and I didn't mean to be or > sound like I was attacking you. I really like what you are doing > and am excited about it...but...You do overload concepts because > the admin console has the concept of a Permission and there's no > Permission object in the REST API, the model api, or the data > model. Permission looked like a join table between Resource and > Policy or Resource, Scope, and Policy. 
From your docs and admin > console it looks like you have 4 concepts: > > * The thing (Resource) > * The action (scope) > * The condition (policy) > * The permission (join of resource, scope, and policy). > >> >> On the other hand, permissions are what you actually use to say: >> "OK, here is the thing I want to protect with these policies". >> Initially, we didn't have the "permission" concept at all and >> that was blocking us to be more flexible and support more >> functionalities. >> > Yeah, Keycloak didn't have the concept of "Resource", "Scope", or > "Policy". Role was really just a way to catagorize users. Is > that what you mean? > > > Don't worry, I know what you mean. You usually have a point when you > criticize something ...... > > From the bottom up. I did not mean role and KC role mapping. By > "initially", I meant when I started AuthZ Services .... > > Regarding your point, that is exactly what we want to fix/provide with > that JIRA for "Remote Policy Management". Not only provide a REST API > on top of what we have, but make easier to manage policies for a > resource server. We definitely going to have a Permission object in > the REST API. Regarding the model and data model, we can discuss that > in a separated thread. This is an implementation detail that we can > easily solve. > > > >> What magic you are talking about ? >> >> >> >>> Regarding scope-based permissions and the use cases you >>> pointed out, I think we may have to change how we are >>> handling typed resources. Today, we know a resource is a >>> general/typed resource because the owner of the resource is >>> the resource server itself, and resource instances are >>> usually owned by an user, someone different than the >>> resource server. Need to take a look on this one and see how >>> it impact the use cases you pointed out ... >> I have no idea what you're talking about here. Am I missing >> something fundamental here? 
From the docs, examples, and the >> admin console UI, I thought Authz was about defining rules >> (policies) to determine if somebody is allowed to perform a >> specific action (scope) on a certain thing (resource). >> >> >> There are a few things more than just "are you allowed to do >> this" :) You may get to a decision in different/configured ways. >> Glad that you are now looking into this. >> > What is the more? I understand you can get to a decision in > different ways. That's what is really cool about your stuff...but > what exactly are you "deciding" on? Again I thought we were > deciding what actions can somebody perform on a resource. > > > By more, I mean that a decision really depends on how you define your > policies. Where that "typed resource" thing I mentioned is one way to > define common permissions to a "parent" resource and have them also > applied to resource instances sharing the same type. There is also the > possibility to override permissions defined on a typed resource for a > given resource instance, the possibility to specify more fine-grained > policies to scopes associated with a resource even if that resource > has a permission (resource-based permission) associated with it, etc. > > But in a nutshell, yes we are basically determining which actions can > somebody perform on a resource. > > > Some other related questions: > * What is a "resource permission" supposed to be? Is it a "catch > all" for that resource? > > > Yes. A resource permission should be used when you want to define > policies for a given resource. In this case, if you get a PERMIT > you'll have access to all scopes associated with the resource as well. > > * What if you have multiple "scoped permissions" defined for the > same resource and scope. Do they all have to be true? Or just one? > > > For scope-based permissions, what our evaluation engine does is look > for PERMITs. 
So, if you have asked for all permissions a user has > (e.g.: using entitlement api) you'll get the scopes whose permissions > evaluated to a PERMIT. So, if you have a set of permissions (scope-based and resource-based), it is a logical OR over all these permissions when evaluating. > > > >> What I meant is that we kind of support a parent/child >> relationship between resources with typed resources [1]. Just >> thinking aloud about what we may (or not) need to use authz >> internally. >> >> [1] >> https://keycloak.gitbooks.io/authorization-services-guide/topics/permission/typed-resource-permission.html >> > > That looks interesting, but there's currently no way to say: For > this resource type and these scopes, apply these policies. The > "scope" part is missing. > > Right now, this is achieved by checking the scopes associated with the > resource that we consider to be the parent. Like I mentioned, we > assume that the parent/typed resource is the one whose owner is the > resource server itself. > > I have a question about what you are planning to do. Are you planning > to put our policy enforcer on top of Keycloak Admin API or are you > going to use the AuthZ API internally to enforce those decisions ? > Not sure what you mean by that question, but I'll try and answer.... I want to integrate your stuff with the Keycloak Admin REST API *and* the admin console. REST API first, though: I want to use the local-java-API equivalent of the REST entitlement request. See the example I described in the original post. It seems like Authz can handle this. For the admin console UI, I was going to make a general entitlement request to obtain all permissions for a particular admin, then render the UI appropriately. I.e., if the admin can't update the realm, they don't see those screens. Go to the original post on this email thread to see how I explained user role mappings and what things we want to offer there.
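The evaluation semantics described in this exchange — a resource-level PERMIT acting as a "catch all" for all of the resource's scopes, and scope grants being a logical OR over PERMITs — can be sketched as a toy model. The class and method names below are hypothetical; this is not Keycloak's actual evaluation engine:

```java
import java.util.*;

// Toy model of the PERMIT-or semantics discussed above.
public class ToyEvaluator {
    public enum Decision { PERMIT, DENY }

    // A scope is granted if any of its scope-based permissions evaluates
    // to PERMIT, or if any resource-level permission evaluated to PERMIT.
    public static Set<String> grantedScopes(
            Set<String> allScopes,
            List<Decision> resourceDecisions,
            Map<String, List<Decision>> scopeDecisions) {
        if (resourceDecisions.contains(Decision.PERMIT)) {
            return allScopes; // resource permission is a "catch all"
        }
        Set<String> granted = new HashSet<>();
        for (Map.Entry<String, List<Decision>> e : scopeDecisions.entrySet()) {
            if (e.getValue().contains(Decision.PERMIT)) {
                granted.add(e.getKey()); // logical OR: one PERMIT suffices
            }
        }
        return granted;
    }
}
```

So multiple "scoped permissions" on the same resource/scope do not all have to pass; a single PERMIT grants the scope, matching Bill's "logical OR" reading.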
> From a UI perspective, are you planning to re-use the UIs we already > have in that "Authorization" tab or provide a specific set of UIs for > defining permissions to KC resources ? My initial thought is that the Authz UI would not be re-used. I think we need something more user friendly to navigate between all the resources we are going to have. What do you think? Bill From psilva at redhat.com Mon Mar 13 17:54:39 2017 From: psilva at redhat.com (Pedro Igor Silva) Date: Mon, 13 Mar 2017 18:54:39 -0300 Subject: [keycloak-dev] fine-grain admin permissions with Authz In-Reply-To: <3dee10d8-d848-dc82-ec53-6a917c4664a7@redhat.com> References: <559ff331-4767-30dc-f758-83b10d761289@redhat.com> <5746d979-fc15-796d-ae01-3f002d77308e@redhat.com> <3dee10d8-d848-dc82-ec53-6a917c4664a7@redhat.com> Message-ID: On Mon, Mar 13, 2017 at 6:38 PM, Bill Burke wrote: > > > On 3/13/17 4:34 PM, Pedro Igor Silva wrote: > > On Mon, Mar 13, 2017 at 4:20 PM, Bill Burke wrote: > >> >> >> On 3/13/17 1:27 PM, Pedro Igor Silva wrote: >> >> On Mon, Mar 13, 2017 at 2:10 PM, Bill Burke wrote: >> >>> >>> >>> On 3/13/17 11:49 AM, Pedro Igor Silva wrote: >>> >>> >>> On Mon, Mar 13, 2017 at 12:15 PM, Bill Burke wrote: >>> >>>> >>>> >>>> On 3/13/17 9:43 AM, Pedro Igor Silva wrote: >>>> >>>> >>>> Are you already implementing things ? Do you want me to look at these >>>> changes or work together with you on them ? >>>> >>>> (As you may have noticed, there is an API that we use internally to >>>> actually evaluate policies given a set of permissions.) >>>> >>>> Haven't implemented anything just researching how it could be done. >>>> The biggest issue right now that I'm having is that I don't understand how >>>> to do things programatically yet (i.e. set up resources, set up scopes, set >>>> up permissions, set up policies). I don't understand how the UI translates >>>> to the JPA entity model and there seems to be a lot of set up data hidden >>>> by generic Map objects. 
Its also really confusing how the admin REST >>>> interface translates from the UI to the model. Its also really bizarre >>>> to me that the things represented in the Admin Console UI are not >>>> represented in the data model. i.e. I have no idea how a >>>> "Scoped-Permission" in the admin console maps to a JSON representation, the >>>> REST API, nor how that JSON representation is mapped to the model. >>>> >>> >>> The usage of Map objects is similar to what we have with identity >>> brokering. We need to be flexible in order to support different types of >>> policies without requiring changes to model. >>> >>> >>> Regarding representation, both permission and policy are represented as >>> policies. What differs a permission from a policy is the type. Policies >>> defined as "scope" or "resource" (as you have in UI) are the ones you can >>> use to actually create permissions to resources and scopes. >>> >>> This is the really confusing part. There is no documentation on how to >>> set up a Scoped or Resource permission through the REST API. The >>> AuthzClient only allows you to create resources AFAICT. I'm currently >>> tracing through admin-console permission UI to find which REST endpoint to >>> use, and how that REST endpoint converts JSON into actual API calls. >>> >> >> The confusion is because we have this JIRA https://issues.jboss.org/ >> browse/KEYCLOAK-3135. Currently, we don't have a nice REST API for >> policy management. That explains why we lack docs as well. You can use KC >> Admin Client (that is what we use in tests) but that will change once we >> introduce a specific API for this purpose. 
>> >> >>> >>> >>> Everything you need is available from an AuthorizationProvider instance, >>> which can be obtained as follows: >>> >>> KeycloakSession.getProvider(AuthorizationProvider.class); >>> >>> From an instance of that class you can access methods to evaluate >>> permissions or obtain references to the different storages we use to >>> actually manage resources, policies, scopes and resource servers. I can >>> include the AuthZ API as a topic to our meeting about authz services next >>> week. >>> >>> I don't have everything I need as you overload concepts (permission and >>> policy) and it doesn't make sense yet the magic that must be done to store >>> these definitions. >>> >> >> I don't think I've overloaded concepts. The concept of a policy is much >> more broader than with a permission. A policy can be seen as a condition >> that must be satisfied in order to grant/deny access to something (resource >> or scope). You can reuse policies across different permissions (you >> probably want that instead of replicating the same condition on similar >> policies), policies can be combined to form more complex and fine-grained >> conditions (see aggregated policies), etc. >> >> >> I'm sorry. I sounded too critical and I didn't mean to be or sound like >> I was attacking you. I really like what you are doing and am excited about >> it...but...You do overload concepts because the admin console has the >> concept of a Permission and there's no Permission object in the REST API, >> the model api, or the data model. Permission looked like a join table >> between Resource and Policy or Resource, Scope, and Policy. From your docs >> and admin console it looks like you have 4 concepts: >> >> * The thing (Resource) >> * The action (scope) >> * The condition (policy) >> * The permission (join of resource, scope, and policy). >> >> >> On the other hand, permissions are what you actually use to say: "OK, >> here is the thing I want to protect with these policies". 
Initially, we >> didn't have the "permission" concept at all and that was blocking us to be >> more flexible and support more functionalities. >> >> Yeah, Keycloak didn't have the concept of "Resource", "Scope", or >> "Policy". Role was really just a way to catagorize users. Is that what >> you mean? >> > > Don't worry, I know what you mean. You usually have a point when you > criticize something ...... > > From the bottom up. I did not mean role and KC role mapping. By > "initially", I meant when I started AuthZ Services .... > > Regarding your point, that is exactly what we want to fix/provide with > that JIRA for "Remote Policy Management". Not only provide a REST API on > top of what we have, but make easier to manage policies for a resource > server. We definitely going to have a Permission object in the REST API. > Regarding the model and data model, we can discuss that in a separated > thread. This is an implementation detail that we can easily solve. > > >> >> >> What magic you are talking about ? >> >> >>> >>> >>> Regarding scope-based permissions and the use cases you pointed out, I >>> think we may have to change how we are handling typed resources. Today, we >>> know a resource is a general/typed resource because the owner of the >>> resource is the resource server itself, and resource instances are usually >>> owned by an user, someone different than the resource server. Need to take >>> a look on this one and see how it impact the use cases you pointed out ... >>> >>> I have no idea what you're talking about here. Am I missing something >>> fundamental here? From the docs, examples, and the admin console UI, I >>> thought Authz was about defining rules (policies) to determine if somebody >>> is allowed to perform a specific action (scope) on a certain thing >>> (resource). >>> >> >> There are a few things more than just "are you allowed to do this" :) You >> may get to a decision in different/configured ways. 
Glad that you are now >> looking into this. >> >> What is the more? I understand you can get to a decision in different >> ways. That's what is really cool about your stuff...but what exactly are >> you "deciding" on? Again I thought we were deciding what actions can >> somebody perform on a resource. >> > > By more, I mean that a decision really depends on how you define your > policies. Where that "typed resource" thing I mentioned is one way to > define common permissions to a "parent" resource and have them also applied > to resource instances sharing the same type. There is also the possibility > to override permissions defined on a typed resource for a given resource > instance, the possibility to specify more fine-grained policies to scopes > associated with a resource even if that resource has a permission > (resource-based permission) associated with it, etc. > > But in a nutshell, yes we are basically determining which actions can > somebody perform on a resource. > > >> >> Some other related questions: >> * What is a "resource permission" supposed to be? Is it a "catch all" >> for that resource? >> > > Yes. A resource permission should be used when you want to define policies > for a given resource. In this case, if you get a PERMIT you'll have access > to all scopes associated with the resource as well. > > >> * What if you have multiple "scoped permissions" defined for the same >> resource and scope. Do they all have to be true? Or just one? >> > > For scope-based permissions, what our evaluation engine does is look for > PERMITs. So, if you have asked for all permissions an user has (e.g.: using > entitlement api) you'll get the scopes whose permissions evaluated to a > PERMIT. > > > So, if you have a set of permissions (scoped-base and resource base) it is > a logical OR on all these permissions when evaluating. > > > > >> >> >> >> What I meant is that we kind of support a parent/child relationship >> between resources with typed resources [1]. 
Just thinking aloud about what >> we may (or not) need to use authz internally. >> >> [1] https://keycloak.gitbooks.io/authorization-services-guide/topics/permission/typed-resource-permission.html >> >> >> That looks interesting, but there's currently no way to say: For this >> resource type and these scopes, apply these policies. The "scope" part is >> missing. >> > > Right now, this is achieved by checking the scopes associated with the > resource that we consider to be the parent. Like I mentioned, we assume > that the parent/typed resource is the one whose owner is the resource > server itself. > > I have a question about what you are planning to do. Are you planning to > put our policy enforcer on top of Keycloak Admin API or are you going to > use the AuthZ API internally to enforce those decisions ? > > Not sure what you mean by that question but I'll try and answer.... I > want to integrate your stuff with Keycloak Admin REST API *and* the admin > console. REST API first though: Want to use the local-java-API equivalent > to the REST entitlement request. See the example I described in the > original post. It seems like Authz can handle this. For admin console UI, > I was going to make a general entitlement request to obtain all permissions > for a particular admin, then render the UI appropriately. i.e. if the > admin can't update the realm, doesn't see those screens. > By "local-java-API" you mean use that AuthorizationProvider I mentioned earlier ? That is what I meant by using the AuthZ API (the Java API we use internally in authz services). Regarding the admin console, the only thing to keep in mind is how many permissions you are going to get from the server. We also support sending the resources/scopes you want to access along with an entitlement request. That can help to perform incremental authorization instead of obtaining everything at once from the server.
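The incremental-authorization idea — ask only for the resources the client needs right now rather than everything up front — can be illustrated with a small sketch. The class and method names are hypothetical; this is not Keycloak's entitlement API, just the shape of the idea:

```java
import java.util.*;
import java.util.stream.Collectors;

// Toy illustration of incremental authorization: the client can request
// entitlements for a subset of resources instead of all of them.
public class EntitlementSketch {
    // All scopes the server would grant this user, keyed by resource name.
    private final Map<String, Set<String>> grantedByResource;

    public EntitlementSketch(Map<String, Set<String>> grantedByResource) {
        this.grantedByResource = grantedByResource;
    }

    // null means a "general" request: fetch everything at once.
    public Map<String, Set<String>> entitlements(Set<String> requestedResources) {
        if (requestedResources == null) {
            return grantedByResource;
        }
        // Incremental: return only the permissions for the requested resources.
        return grantedByResource.entrySet().stream()
                .filter(e -> requestedResources.contains(e.getKey()))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }
}
```

For a large realm (hundreds of roles, thousands of groups), the per-resource variant keeps the initial admin-console load small.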
> > Go to the original post on this email thread to see how I explained user > role mappings and what things we want to offer there. > > From a UI perspective, are you planning to re-use the UIs we already have > in that "Authorization" tab or provide a specific set of UIs for defining > permissions to KC resources ? > > My initial thought is that the Authz UI would not be re-used. I think we > need something more user friendly to navigate between all the resources we > are going to have. What do you think? > It makes sense to provide separate UIs, although we can also try to re-use the authz UIs and see how it looks. Not sure usability will be that bad. > > Bill > >

From bburke at redhat.com Mon Mar 13 18:03:41 2017 From: bburke at redhat.com (Bill Burke) Date: Mon, 13 Mar 2017 18:03:41 -0400 Subject: [keycloak-dev] fine-grain admin permissions with Authz In-Reply-To: References: <559ff331-4767-30dc-f758-83b10d761289@redhat.com> <5746d979-fc15-796d-ae01-3f002d77308e@redhat.com> <3dee10d8-d848-dc82-ec53-6a917c4664a7@redhat.com> Message-ID:

On 3/13/17 5:54 PM, Pedro Igor Silva wrote: > > By "local-java-API" you mean use that AuthorizationProvider I > mentioned earlier ? That is what I meant by using the AuthZ API (the > Java API we use internally in authz services). > Yes, I think that's the one. > Regarding the admin console, the only thing to keep in mind is how > much permissions you are going to get from the server. We also support > sending along a entitlement request the resource/scopes you want to > access. That can help to perform incremental authorization instead of > obtaining everything once from the server. Yeah, I saw that. Not sure what the best way would be. Just think of the admin console main screen. If you can only manage users of a specific group, there's a bunch of menu items that wouldn't show up. > > Go to the original post on this email thread to see how I > explained user role mappings and what things we want to offer there.
> >> From a UI perspective, are you planning to re-use the UIs we >> already have in that "Authorization" tab or provide a specific >> set of UIs for defining permissions to KC resources ? > My initial thought is that the Authz UI would not be re-used. I > think we need something more user friendly to navigate between all > the resources we are going to have. What do you think? > > > It makes sense to provide separated UIs. Although we can also try to > re-use authz UIs and see how it looks like. Not sure if usability will > be so bad. Really depends on the realm and what approach we want to take. Do we want one place where all permissions are decided? Then something like a tree view would be needed to navigate all the clients, roles, groups and other manageable resources a realm might have. One of our customers has hundreds of roles and thousands of groups and hundreds of thousands of users. Another approach is to have an Authz tab on each thing we want to have fine grain permissions on. I.e. the role page would have a "Management permissions" tab. Bill

From psilva at redhat.com Mon Mar 13 18:30:47 2017 From: psilva at redhat.com (Pedro Igor Silva) Date: Mon, 13 Mar 2017 19:30:47 -0300 Subject: [keycloak-dev] fine-grain admin permissions with Authz In-Reply-To: References: <559ff331-4767-30dc-f758-83b10d761289@redhat.com> <5746d979-fc15-796d-ae01-3f002d77308e@redhat.com> <3dee10d8-d848-dc82-ec53-6a917c4664a7@redhat.com> Message-ID:

On Mon, Mar 13, 2017 at 7:03 PM, Bill Burke wrote: > > > On 3/13/17 5:54 PM, Pedro Igor Silva wrote: > > > By "local-java-API" you mean use that AuthorizationProvider I mentioned > earlier ? That is what I meant by using the AuthZ API (the Java API we use > internally in authz services). > > Yes, I think that's the one. > > Regarding the admin console, the only thing to keep in mind is how much > permissions you are going to get from the server. We also support sending > along a entitlement request the resource/scopes you want to
That > can help to perform incremental authorization instead of obtaining > everything once from the server. > > Yeah, I saw that. Not sure what the best way would be. Just think of the > admin console main screen. If you can only manage users of a specific > group there's a bunch of menu items that wouldn't show up. >

For the main screen, I think we can have a single resource with scopes representing each link on the menu. Then we can ask for "give me permissions for Main Page" and have only the set of permissions required to load the main page. Another thing I was holding off on implementing is a way to configure which resources can be obtained as a result of a general entitlement request (without any specific resource and scope). So you can have more control over the initial load of permissions and use incremental authorization for the rest on a per-resource basis.

> > > > >> >> Go to the original post on this email thread to see how I explained user >> role mappings and what things we want to offer there. >> >> From a UI perspective, are you planning to re-use the UIs we already have >> in that "Authorization" tab or provide a specific set of UIs for defining >> permissions to KC resources ? >> >> My initial thought is that the Authz UI would not be re-used. I think we >> need something more user friendly to navigate between all the resources we >> are going to have. What do you think? >> > > It makes sense to provide separate UIs. Although we can also try to > re-use authz UIs and see how it looks. Not sure if usability will be > so bad. > > > Really depends on the realm and what approach we want to take. Do we want > one place where all permissions are decided? Then something like a tree > view would be needed to navigate all the clients, roles, groups and other > manageable resources a realm might have. One of our customers has hundreds > of roles and thousands of groups and hundreds of thousands of users.
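Pedro's incremental approach — ask the server only for the permissions the current screen needs, instead of everything at once — could be sketched like this. This is a hedged illustration: the `permissions`/`resource_set_name` request shape follows the authorization-services entitlement API of that Keycloak era, but the realm, client id and the "Main Page"/"view-users"/"view-clients" names are invented for the example.

```python
import json

# Hypothetical entitlement endpoint; realm and client id are placeholders.
ENTITLEMENT_URL = "http://localhost:8080/auth/realms/{realm}/authz/entitlement/{client_id}"

def entitlement_request(resource_name, scopes):
    """Build a request body limited to one resource's scopes, instead of
    asking for every permission the user has on the server."""
    return {
        "permissions": [
            {"resource_set_name": resource_name, "scopes": list(scopes)}
        ]
    }

# Ask only for what the admin console main screen needs (names are made up).
body = entitlement_request("Main Page", ["view-users", "view-clients"])

# This body would be POSTed with an "Authorization: Bearer <access_token>"
# header; repeating such limited requests per screen is the incremental
# authorization described in the thread.
print(json.dumps(body, indent=2))
```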
> Another approach is to have an Authz tab on each thing we want to have fine > grain permissions on. I.e. the role page would have a "Management > permissions" tab. >

Yeah, considering that use case a tree would also suffer from usability/performance issues. On the other hand, a specific tab on each resource looks more "bullet proof" but brings usability issues. I think the AuthZ UI can handle this as well depending on how we decide to do things. For instance, suppose we are creating a role. At that point we also create a "Role A Permission" with a default policy (probably granting things to superuser only). Later, if you want to change the permission/policies for this role you can go to the AuthZ UIs and perform a partial search on the permission list for "Role A", which will bring you the "Role A Permission". Something to try before implementing more things, I think ...

> > Bill >

From marc.boorshtein at tremolosecurity.com Mon Mar 13 21:05:59 2017 From: marc.boorshtein at tremolosecurity.com (Marc Boorshtein) Date: Tue, 14 Mar 2017 01:05:59 +0000 Subject: [keycloak-dev] next-gen Keycloak proxy In-Reply-To: References: Message-ID: > > > > * Can/Should one Keycloak Proxy virtual host and proxy multiple apps in > same instance? One thing stopping this is SSL. If Keycloak Proxy is > handling SSL, then there is no possibility of virtual hosting. If the > load balancer is handling SSL, then this is a possibility. > > You can have multiple virtual hosts with the TLS endpoint being KC. We do it with OpenUnison, and Apache lets you do it, I think, with TLS 1.2 and Apache 2.4 (I have a customer that's doing that right now so I know it works). So long as the cert has multiple Subject Alternative Names or is a wildcard it should work. > * Keycloak Proxy currently needs an HttpSession as it stores > authentication information (JWS access token and Refresh Token) there so > it can forward it to the application.
We'd have to either shrink needed > information so it could be stored in a cookie, or replication sessions. > THe latter of which would have the same issues with cross DC. > OpenUnison originally took the "everything in a cookie" approach, the cookie quickly got too big to be effective and we had to switch to maintaining a backend session. I know I've brought this up before, but I'd like to offer up OpenUnison as a starting point: https://github.com/tremolosecurity/openunison. OU probably has 70%-80% of what you are looking for. It already has the reverse proxy code built in, written in Java, supports extensibility via multiple mechanisms, an authorization subsystem that can easily be extended to support an external az service and we have an extensible last mile system for legacy apps that don't support openid connect for apache, .net and Java. We also have multiple production deployments (including public safety applications). >From a corporate standpoint we're already Red Hat partners at multiple levels. We're sponsoring Summit this year again and I'll be doing a session on OpenShift identity management and compliance. Thanks -- Marc Boorshtein CTO Tremolo Security marc.boorshtein at tremolosecurity.com (703) 828-4902 Twitter - @mlbiam / @tremolosecurity From bburke at redhat.com Mon Mar 13 23:19:29 2017 From: bburke at redhat.com (Bill Burke) Date: Mon, 13 Mar 2017 23:19:29 -0400 Subject: [keycloak-dev] next-gen Keycloak proxy In-Reply-To: References: Message-ID: On 3/13/17 9:05 PM, Marc Boorshtein wrote: > > > > * Can/Should one Keycloak Proxy virtual host and proxy multiple > apps in > same instance? One thing stopping this is SSL. If Keycloak Proxy is > handling SSL, then there is no possibility of virtual hosting. If the > load balancer is handling SSL, then this is a possibility. > > > You can have multiple virtual hosts with the TLS endpoint being KC. 
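Marc's point above about Subject Alternative Names can be illustrated with a toy hostname matcher. This is a deliberate simplification of the real TLS identity rules (RFC 6125 only allows the wildcard as the leftmost label, among other constraints), and all hostnames are made up:

```python
def hostname_matches(hostname, dns_sans):
    """Toy check: does hostname match any DNS SAN entry?  A '*' label
    matches exactly one label (simplified from the real TLS rules)."""
    host = hostname.lower().split(".")
    for san in dns_sans:
        labels = san.lower().split(".")
        if len(labels) == len(host) and all(
            l == "*" or l == h for l, h in zip(labels, host)
        ):
            return True
    return False

# One cert carrying several SANs can cover several proxied virtual hosts.
sans = ["app1.example.com", "app2.example.com", "*.apps.example.com"]
assert hostname_matches("app1.example.com", sans)
assert hostname_matches("foo.apps.example.com", sans)
assert not hostname_matches("a.b.apps.example.com", sans)  # wildcard covers one label
assert not hostname_matches("other.example.org", sans)
```

This is why a single TLS-terminating proxy can front many virtual hosts as long as every proxied hostname appears in the certificate's SAN list or is covered by a wildcard.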
> We do ti with OpenUnison and apache lets you do it I think with TLS > 1.2 and apache 2.4 (I have a customer thats doing that right now so I > know it works). So long as the cert has multiple Subject Alternative > Names or is a wildcard it should work. Didn't know that, I'll have to try it out. I thought the browser only validated by looking at the CN. Thanks for that. > > * Keycloak Proxy currently needs an HttpSession as it stores > authentication information (JWS access token and Refresh Token) > there so > it can forward it to the application. We'd have to either shrink > needed > information so it could be stored in a cookie, or replication > sessions. > THe latter of which would have the same issues with cross DC. > > > OpenUnison originally took the "everything in a cookie" approach, the > cookie quickly got too big to be effective and we had to switch to > maintaining a backend session. We already have a cookie option with our Java saml/oidc adapters that some users prefer. Not everybody is trying to solve the worlds problems with their identity tokens. > > I know I've brought this up before, but I'd like to offer up > OpenUnison as a starting point: > https://github.com/tremolosecurity/openunison. OU probably has 70%-80% > of what you are looking for. It already has the reverse proxy code > built in, written in Java, supports extensibility via multiple > mechanisms, an authorization subsystem that can easily be extended to > support an external az service and we have an extensible last mile > system for legacy apps that don't support openid connect for apache, > .net and Java. We also have multiple production deployments > (including public safety applications). > > From a corporate standpoint we're already Red Hat partners at multiple > levels. We're sponsoring Summit this year again and I'll be doing a > session on OpenShift identity management and compliance. > So nice of you to hijack the thread to promote your own product. Not very professional. 
It's a bit hypocritical of me to say this as I've done it myself in the past and received a lot of crap for it. Now that it's being done to my project I can see why people get upset over it. This isn't the first time you've done this. If you do it again, we'll remove you from the list. I really don't give a shit if you're a partner or not. Cheers, Bill

From marc.boorshtein at tremolosecurity.com Tue Mar 14 04:21:58 2017 From: marc.boorshtein at tremolosecurity.com (Marc Boorshtein) Date: Tue, 14 Mar 2017 08:21:58 +0000 Subject: [keycloak-dev] next-gen Keycloak proxy In-Reply-To: References: Message-ID: > > > So nice of you to hijack the thread to promote your own product. Not very > professional. It's a bit hypocritical of me to say this as I've done it > myself in the past and received a lot of crap for it. Now that it's being > done to my project I can see why people get upset over it. This isn't the > first time you've done this. If you do it again, we'll remove you from the > list. I really don't give a shit if you're a partner or not. > > > I'm sorry, I don't see how this is "hijacking" a thread. I'm not telling a KC user "hey, use my project instead because XYZ". I'm providing another open source option you could use as a starting point (and I did say "starting point"). Maybe you find it does a large chunk of what you are looking for and decide to fork it, maybe you look at it and get some ideas, maybe you find that you want to use some different approaches because you have different ideas as to how this problem should be solved. If I remember correctly in your last thread on this topic you explicitly asked for the community's help to build this out because you didn't have time. It's no different than when someone has said to me on other lists "we do it XYZ way, you should take a look" and I do, because if someone else solved the problem I'm trying to solve why wouldn't I look at their solution? That's not hijacking, that's open source.
I brought up being a partner as working with components not built by Red Hat was something you called out in both threads as being an issue with mod_auth_oidc. I also do my best to be transparent because I don't want people to think I'm trying to sell them something in every discussion. My company name and title are in every email I send and I don't use a generic email account so there's no question. I could go on, but I don't wish to continue to clog up this list with a non-technical discussion. My apologies if you or anyone else on this list was offended, it was not my intention. Thanks -- Marc Boorshtein CTO Tremolo Security marc.boorshtein at tremolosecurity.com (703) 828-4902 Twitter - @mlbiam / @tremolosecurity

From sthorger at redhat.com Tue Mar 14 04:51:30 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Tue, 14 Mar 2017 09:51:30 +0100 Subject: [keycloak-dev] Documentation moved to a single repository Message-ID: All documentation has been moved to a single repository: https://github.com/keycloak/keycloak-documentation

From sthorger at redhat.com Tue Mar 14 05:13:55 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Tue, 14 Mar 2017 10:13:55 +0100 Subject: [keycloak-dev] Profile SPI Message-ID: At the moment there is no single point to define validation for a user. Even worse, for the account management console and admin console it's not even possible to define validation for custom attributes. Also, as there is no defined list of attributes for a user, the mapping of user attributes is error prone. I'd like to introduce a Profile SPI to help with this. It would have methods to:

* Validate users during creation and updates
* List defined attributes on a user

There would be a built-in provider that would delegate to ProfileAttribute SPI. ProfileAttribute SPI would allow defining configurable providers for single user attributes.
I'm also considering adding a separate Validation SPI, so a ProfileAttribute provider could delegate validation to a separate validator. Users could also implement their own Profile provider to do whatever they want. I'd like to aim to make the SPI a supported SPI. First pass would focus purely on validation. Second pass would focus on using the attribute metadata to do things like: * Have dropdown boxes in mappers to select user attribute instead of copy/pasting the name * Have additional built-in attributes on registration form, update profile form and account management console that can be enabled/disabled by defining the Profile. I'm not suggesting a huge amount here and it will be limited to a few sensible attributes. Defining more complex things like address would still be done through extending the forms. From martin.hardselius at gmail.com Tue Mar 14 05:40:17 2017 From: martin.hardselius at gmail.com (Martin Hardselius) Date: Tue, 14 Mar 2017 09:40:17 +0000 Subject: [keycloak-dev] Profile SPI In-Reply-To: References: Message-ID: +1 OIDC standard claims seem like a great set of attributes to start with. Perhaps out of scope, but unique attribute values per user would be really nice. A generic way to add more identifiers to users. On Tue, 14 Mar 2017 at 10:14 Stian Thorgersen wrote: > At the moment there is no single point to define validation for a user. > Even worse for the account management console and admin console it's not > even possible to define validation for custom attributes. > > Also, as there is no defined list of attributes for a user there the > mapping of user attributes is error prone. > > I'd like to introduce a Profile SPI to help with this. It would have > methods to: > > * Validate users during creation and updates > * List defined attributes on a user > > There would be a built-in provider that would delegate to ProfileAttribute > SPI. ProfileAttribute SPI would allow defining configurable providers for > single user attributes. 
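The built-in provider delegating to per-attribute validators, described above, might look roughly like this sketch. Every name here (Profile, ProfileAttribute, the example attributes and regexes) is invented for illustration — none of this is existing Keycloak API, and the real SPI did not exist yet at the time of this thread:

```python
import re

class ProfileAttribute:
    """Hypothetical definition of one user attribute with its validator."""
    def __init__(self, name, required=False, validator=None):
        self.name = name
        self.required = required
        self.validator = validator or (lambda v: True)

class Profile:
    """Hypothetical profile: a fixed list of attribute definitions."""
    def __init__(self, attributes):
        self.attributes = {a.name: a for a in attributes}

    def validate(self, user):
        """Check a user dict on create/update; return (attribute, error) pairs."""
        errors = []
        for attr in self.attributes.values():
            value = user.get(attr.name)
            if value is None:
                if attr.required:
                    errors.append((attr.name, "missing"))
            elif not attr.validator(value):
                errors.append((attr.name, "invalid"))
        return errors

# Example profile with two made-up attributes and naive regex validators.
profile = Profile([
    ProfileAttribute("email", required=True,
                     validator=lambda v: re.match(r"[^@]+@[^@]+$", v) is not None),
    ProfileAttribute("phoneNumber",
                     validator=lambda v: re.match(r"\+?[0-9 ]{7,15}$", v) is not None),
])

assert profile.validate({"email": "user@example.com"}) == []
assert profile.validate({}) == [("email", "missing")]
```

The same registry of attribute definitions could later feed the second-pass ideas in the email: mapper dropdowns and enabling/disabling built-in form fields.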
I'm also considering adding a separate Validation > SPI, so a ProfileAttribute provider could delegate validation to a separate > validator. > > Users could also implement their own Profile provider to do whatever they > want. I'd like to aim to make the SPI a supported SPI. > > First pass would focus purely on validation. Second pass would focus on > using the attribute metadata to do things like: > > * Have dropdown boxes in mappers to select user attribute instead of > copy/pasting the name > * Have additional built-in attributes on registration form, update profile > form and account management console that can be enabled/disabled by > defining the Profile. I'm not suggesting a huge amount here and it will be > limited to a few sensible attributes. Defining more complex things like > address would still be done through extending the forms. > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From sthorger at redhat.com Tue Mar 14 05:59:39 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Tue, 14 Mar 2017 10:59:39 +0100 Subject: [keycloak-dev] Profile SPI In-Reply-To: References: Message-ID: On 14 March 2017 at 10:40, Martin Hardselius wrote: > +1 > > OIDC standard claims seem like a great set of attributes to start with. > > Perhaps out of scope, but unique attribute values per user would be really > nice. A generic way to add more identifiers to users. > That's far from trivial to add since it can't be guaranteed without a constraint on the DB. > > On Tue, 14 Mar 2017 at 10:14 Stian Thorgersen wrote: > >> At the moment there is no single point to define validation for a user. >> Even worse for the account management console and admin console it's not >> even possible to define validation for custom attributes. >> >> Also, as there is no defined list of attributes for a user there the >> mapping of user attributes is error prone. 
>> >> I'd like to introduce a Profile SPI to help with this. It would have >> methods to: >> >> * Validate users during creation and updates >> * List defined attributes on a user >> >> There would be a built-in provider that would delegate to ProfileAttribute >> SPI. ProfileAttribute SPI would allow defining configurable providers for >> single user attributes. I'm also considering adding a separate Validation >> SPI, so a ProfileAttribute provider could delegate validation to a >> separate >> validator. >> >> Users could also implement their own Profile provider to do whatever they >> want. I'd like to aim to make the SPI a supported SPI. >> >> First pass would focus purely on validation. Second pass would focus on >> using the attribute metadata to do things like: >> >> * Have dropdown boxes in mappers to select user attribute instead of >> copy/pasting the name >> * Have additional built-in attributes on registration form, update profile >> form and account management console that can be enabled/disabled by >> defining the Profile. I'm not suggesting a huge amount here and it will be >> limited to a few sensible attributes. Defining more complex things like >> address would still be done through extending the forms. >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >

From mposolda at redhat.com Tue Mar 14 06:53:41 2017 From: mposolda at redhat.com (Marek Posolda) Date: Tue, 14 Mar 2017 11:53:41 +0100 Subject: [keycloak-dev] Improve back-button and refreshes in authenticators? Message-ID: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> When working on login sessions, I wonder if we want to improve browser back-button and browser refreshes. In short, I can see 3 basic options:

1) Keep the same as now and rely on the header "Cache-Control: no-store, must-revalidate, max-age=0".
This works fine and users never see an outdated form and never submit an outdated form twice. However, the usability sucks a bit IMO. When you press the back-button after a POST request, you see the ugly browser page "Web page has expired". And if you press F5 on this, you will see the unfriendly Keycloak error page "Error was occured. Please login again through your application" because of the invalid code.

2) Use the pattern of POST followed by a redirect to GET. Since we will have the loginSession with the ID in the cookie, the GET request can be sent to a URL without any special query parameter. Something like "http://localhost:8180/auth/realms/master/login-actions/authenticate" . This will allow us, at every stage of authentication, to let the user press the back-button and always be redirected to the first step of the flow. When he refreshes the page, it will re-send just the GET request and always bring him to the current execution.

This looks most user-friendly. But there is an issue with performance, as we will need to follow up every POST request with one additional GET request.

3) Don't do anything special regarding back-button or refresh. But in case the page is refreshed AND the POST with an invalid (already used) code is re-submitted, we won't display the ugly "Error was occured." page, but will just redirect to the current step of the flow.

Example:
a) User was redirected from the application to the OIDC AuthorizationEndpoint request. Login page is shown.
b) User confirmed an invalid username and password with a POST request. Login form with the error "Invalid password" is shown.
c) User confirmed a valid username and password with a POST request. TOTP page is shown.
d) User presses the back-button. Now he will see the page with the username/password form again.
e) User presses F5. The POST request will be re-sent, but it will use the previous "code", which is outdated now. So in this case, we will redirect to the current execution and the TOTP form will be shown.
No re-submission of the username/password form will happen.

In case 3, the username/password form will be shown again, but the user won't be able to resubmit it.

In short: with 2 and 3, users will never see the browser page "Web page is expired" or the Keycloak "Error occured. Go back to the application" page. With 2, an additional GET request is needed. With 3, the back-button may show authentication forms which the user already successfully confirmed, but he won't be able to re-submit them. Is that bad regarding usability? To me, it looks better than showing "Web page is expired".

So my preference is 3, 2, 1. WDYT? Any other options?

Marek

From mposolda at redhat.com Tue Mar 14 07:21:44 2017 From: mposolda at redhat.com (Marek Posolda) Date: Tue, 14 Mar 2017 12:21:44 +0100 Subject: [keycloak-dev] Profile SPI In-Reply-To: References: Message-ID: <9bb32606-543d-29c7-8799-596ab7d1806b@redhat.com> Few things:

- It will be good to have some OOTB support for multivalued attributes. You will be able to define whether an attribute is multivalued, and then in registration/account pages users will see something like we have in the admin console for "redirect uris" or "web origins" in the client detail page.

- Besides validation, it may be useful to add some "actions" when an attribute is changed. For example, if a user changes their email, there will be an optional action which will switch "emailVerified" to false and put the "VerifyEmail" required action on them. When they change their mobile number, it will send them an SMS and they will need to confirm it somehow (perhaps again through a required action), etc.

- It will probably be useful to allow the admin to skip validation (and actions) for certain attributes. Maybe validators could have an option like "Skip admin" or something like that? Or should we always skip the validations for the admin?

Marek

On 14/03/17 10:13, Stian Thorgersen wrote: > At the moment there is no single point to define validation for a user.
> Even worse for the account management console and admin console it's not > even possible to define validation for custom attributes. > > Also, as there is no defined list of attributes for a user there the > mapping of user attributes is error prone. > > I'd like to introduce a Profile SPI to help with this. It would have > methods to: > > * Validate users during creation and updates > * List defined attributes on a user > > There would be a built-in provider that would delegate to ProfileAttribute > SPI. ProfileAttribute SPI would allow defining configurable providers for > single user attributes. I'm also considering adding a separate Validation > SPI, so a ProfileAttribute provider could delegate validation to a separate > validator. > > Users could also implement their own Profile provider to do whatever they > want. I'd like to aim to make the SPI a supported SPI. > > First pass would focus purely on validation. Second pass would focus on > using the attribute metadata to do things like: > > * Have dropdown boxes in mappers to select user attribute instead of > copy/pasting the name > * Have additional built-in attributes on registration form, update profile > form and account management console that can be enabled/disabled by > defining the Profile. I'm not suggesting a huge amount here and it will be > limited to a few sensible attributes. Defining more complex things like > address would still be done through extending the forms. > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From bburke at redhat.com Tue Mar 14 08:50:57 2017 From: bburke at redhat.com (Bill Burke) Date: Tue, 14 Mar 2017 08:50:57 -0400 Subject: [keycloak-dev] Improve back-button and refreshes in authenticators? In-Reply-To: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> Message-ID: I've got an idea. 
What about:

* keep the code in the URL
* additionally add a "current-code" cookie

If code in the URL doesn't match the cookie, then redirect to the URL of the current-code.

On 3/14/17 6:53 AM, Marek Posolda wrote: > When working on login sessions, I wonder if we want to improve browser > back-button and browser refreshes. > > In shortcut, I can see 3 basic options: > > 1) Keep same like now and rely on header "Cache-Control: no-store, > must-revalidate, max-age=0" . This works fine and users never saw > outdated form and never submit outdated form 2 times. However the > usability sucks a bit IMO. When you press back-button after POST > request, you can see the ugly browser page "Web page has expired" . And > if you press F5 on this, you will see the unfriendly Keycloak error page > "Error was occured. Please login again through your application" because > of invalid code. > > 2) Use the pattern with POST followed by the redirect to GET. Since we > will have loginSession with the ID in the cookie, the GET request can be > sent to the URL without any special query parameter. Something like > "http://localhost:8180/auth/realms/master/login-actions/authenticate" . > This will allow us that in every stage of authentication, user can press > back-button and will be always redirected to the first step of the flow. > When he refreshes the page, it will re-send just the GET request and > always brings him to the current execution. > > This looks most user-friendly. But there is the issue with performance > though. As we will need to followup every POST request with one > additional GET request. > > 3) Don't do anything special regarding back-button or refresh. But in > case that page is refreshed AND the post with invalid (already used) > code will be re-submitted, we won't display the ugly page "Error was > occured.", but we will just redirect to current step of the flow. > > Example: > a) User was redirected from the application to OIDC > AuthorizationEndpoint request.
Login page is shown > b) User confirmed invalid username and password with POST request. Login > form with error page "Invalid password" is shown > c) User confirmed valid username and password with POST request. TOTP > page is shown. > d) User press back-button. Now he will see again the page with > username/password form. > e) User press F5. The POST request will be re-sent, but it will use > previous "code", which is outdated now. So in this case, we will > redirect to the current execution and TOTP form will be shown. No > re-submission of username/password form will happen. > > In case 3, the username/password form will be shown again, but user > won't be able to resubmit it. > > In shortcut: With 2 and 3, users will never see the browser page "Web > page is expired" or Keycloak "Error occured. Go back to the > application". With 2, there is additional GET request needed. With 3, > the back-button may show the authentication forms, which user already > successfully confirmed, but he won't be able to re-submit them. Is it > bad regarding usability? To me, it looks better than showing "Web page > is expired". > > So my preference is 3,2,1. WDYT? Any other options? > > Marek > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From mposolda at redhat.com Tue Mar 14 09:20:25 2017 From: mposolda at redhat.com (Marek Posolda) Date: Tue, 14 Mar 2017 14:20:25 +0100 Subject: [keycloak-dev] next-gen Keycloak proxy In-Reply-To: References: Message-ID: <4be4c218-27f7-9946-4c19-c2082c5b780a@redhat.com> On 13/03/17 22:07, Bill Burke wrote: > Keycloak Proxy was written a few years ago to secure apps that can't use > an adapter provided by us. While Keycloak Proxy works (? mostly?) > ,we've been pushing people to Apache + mod-auth-mellon or > mod-auth-openidc for non-Java apps. 
I predict that relying on Apache > to proxy and secure apps that can't use our adapters is going to quickly > become an issue for us. We already have a need to write extensions to > mod-auth-*, specifically to support Pedro's Authz work (which is really > nice BTW!). We could also do tighter integration to make the > configuration experience more user-friendly. The problem is we have > zero expertise in this area and none of us are C/C++ developers (I > haven't coded in C/C++ since 1999 when I was at Iona). > > This brings me to what would be the next generation of the Keycloak > Proxy. The first thing I'd like to improve is that configuration would > happen within the admin console. This configuration could be made much > simpler as whatever protocol configuration that would be needed could be > hard-coded and pre-configured. Mappers would focus on mapping values > to HTTP headers. > > Beyond configuration, things become more interesting and complex and > their are multiple factors in deciding the authentication protocol, > proxy design, and provisioning: > > * Can/Should one Keycloak Proxy virtual host and proxy multiple apps in > same instance? One thing stopping this is SSL. If Keycloak Proxy is > handling SSL, then there is no possibility of virtual hosting. If the > load balancer is handling SSL, then this is a possibility. > > * Keycloak Proxy currently needs an HttpSession as it stores > authentication information (JWS access token and Refresh Token) there so > it can forward it to the application. We'd have to either shrink needed > information so it could be stored in a cookie, or replication sessions. > THe latter of which would have the same issues with cross DC. > > * Should we collocate Keycloak proxy with Keycloak runtime? That is, > should Keycloak Proxy have direct access to UserSession, CLientSession, > and other model interfaces? 
The benefits of this are that you could > have a really optimized auth protocol, you'd still have to bounce the > browser to set up cookies directly, but everything else could be handled > through the ClientSession object and there would be no need to generate > or store tokens.

+1

I personally never tried Keycloak Proxy, but it's intended for applications which don't understand OIDC or SAML, right? So we don't need another layer of a separate KeycloakProxy server, which needs to communicate through OIDC with the Keycloak auth-server itself; we can maybe just have another login protocol implementation like a "Keycloak protocol" or something? Once the user is successfully authenticated, Keycloak will just programmatically create the token, add some headers (KEYCLOAK_IDENTITY etc) and forward the request to the application.

Another advantage is that we won't need anything special for the replication and cross-dc. As long as all the state is cached in the userSession, Keycloak can just read the cached token from it and forward it to the application. We will need a solution for cross-dc of userSessions anyway, but this will be able to just leverage it.

Marek

> > * Collocation is even nicer if virtual hosting could be done and there > would be no configuration needed for the proxy. It would just be > configured as a Keycloak instance and pull which apps in would need to > proxy from the database. > > > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev

From bburke at redhat.com Tue Mar 14 09:21:44 2017 From: bburke at redhat.com (Bill Burke) Date: Tue, 14 Mar 2017 09:21:44 -0400 Subject: [keycloak-dev] next-gen Keycloak proxy In-Reply-To: References: Message-ID: <84fa72ac-2996-3895-b705-d83f87b17a32@redhat.com> On 3/14/17 4:21 AM, Marc Boorshtein wrote: > > > So nice of you to hijack the thread to promote your own product. > Not very professional.
Its a bit hypocritical of me to say this > as I've done it myself in the past and received a lot of crap for > it. Now that its being done to my project I can see why people > get upset over it. This isn't the first time you've done this. If > you do it again, we'll remove you from the list. I really don't > give a shit if you're a partner or not. > > > > I'm sorry, I don't see how this is "hijacking" a thread. I'm not > telling a KC user "hey, use my project instead because XYZ". I'm > providing another open source option you could use as a starting point > (and I did say "starting point"). Maybe you find it does a large > chunk of what you are looking for and decide to fork it, maybe you > look at it and get some ideas, maybe you find that you want to use > some different approaches because you have different ideas as to how > this problem should be solved. If I remember correctly in your last > thread on this topic you explicitly asked for the community's help to > build this out because you didn't have time. Its no different then > when someone has said to me on other lists "we do it XYZ way, you > should take a look" and I do because if someone else solved the > problem I'm trying to solve why wouldn't I look at their solution?. > Thats not hijacking, thats open source. > > I brought up being a partner as working with components not built by > Red Hat was something you called out in both threads as being an issue > with mod_auth_oidc. I also do my best to be transparent because I > don't want people to think I'm trying to sell them something in every > discussion. My company name and title is in every email I send and I > don't use a generic email account so there's no question. > > I could go on, but I don't wish to continue to clog up this list with > a non-technical discussion. My apologies if you or anyone else on > this list was offended, it was not my intention. 
> I also don't want to drag this out, but I know the difference between hijacking and collaborating. I've been doing open source development full time for 16 years straight now on many popular open source projects with tons of contributors and users. So please don't lecture me on what open source is or isn't. From bburke at redhat.com Tue Mar 14 09:30:52 2017 From: bburke at redhat.com (Bill Burke) Date: Tue, 14 Mar 2017 09:30:52 -0400 Subject: [keycloak-dev] next-gen Keycloak proxy In-Reply-To: <4be4c218-27f7-9946-4c19-c2082c5b780a@redhat.com> References: <4be4c218-27f7-9946-4c19-c2082c5b780a@redhat.com> Message-ID: On 3/14/17 9:20 AM, Marek Posolda wrote: > On 13/03/17 22:07, Bill Burke wrote: >> Keycloak Proxy was written a few years ago to secure apps that can't use >> an adapter provided by us. While Keycloak Proxy works ("mostly"), >> we've been pushing people to Apache + mod-auth-mellon or >> mod-auth-openidc for non-Java apps. I predict that relying on Apache >> to proxy and secure apps that can't use our adapters is going to quickly >> become an issue for us. We already have a need to write extensions to >> mod-auth-*, specifically to support Pedro's Authz work (which is really >> nice BTW!). We could also do tighter integration to make the >> configuration experience more user-friendly. The problem is we have >> zero expertise in this area and none of us are C/C++ developers (I >> haven't coded in C/C++ since 1999 when I was at Iona). >> >> This brings me to what would be the next generation of the Keycloak >> Proxy. The first thing I'd like to improve is that configuration would >> happen within the admin console. This configuration could be made much >> simpler as whatever protocol configuration would be needed could be >> hard-coded and pre-configured. Mappers would focus on mapping values >> to HTTP headers.
>> >> Beyond configuration, things become more interesting and complex and >> there are multiple factors in deciding the authentication protocol, >> proxy design, and provisioning: >> >> * Can/Should one Keycloak Proxy virtual host and proxy multiple apps in >> same instance? One thing stopping this is SSL. If Keycloak Proxy is >> handling SSL, then there is no possibility of virtual hosting. If the >> load balancer is handling SSL, then this is a possibility. >> >> * Keycloak Proxy currently needs an HttpSession as it stores >> authentication information (JWS access token and Refresh Token) there so >> it can forward it to the application. We'd have to either shrink needed >> information so it could be stored in a cookie, or replicate sessions. >> The latter would have the same issues with cross DC. >> >> * Should we collocate Keycloak proxy with Keycloak runtime? That is, >> should Keycloak Proxy have direct access to UserSession, ClientSession, >> and other model interfaces? The benefits of this are that you could >> have a really optimized auth protocol, you'd still have to bounce the >> browser to set up cookies directly, but everything else could be handled >> through the ClientSession object and there would be no need to generate >> or store tokens. > +1 > > I personally never tried Keycloak Proxy, but it's intended for > applications which don't understand OIDC or SAML, right? So we don't > need another layer of separate KeycloakProxy server, which needs to > communicate through OIDC with Keycloak auth-server itself, but we can > maybe just have another login protocol implementation like "Keycloak > protocol" or something? Once the user is successfully authenticated, > Keycloak will just programmatically create the token and add some headers > (KEYCLOAK_IDENTITY etc) and forward the request to the application. > > Another advantage is that we won't need anything special for the > replication and cross-dc.
As long as all the state is cached in > userSession, Keycloak can just read the cached token from it and > forward to the application. We will need a solution for cross-dc of > userSessions anyway, but this will be able to just leverage it. The downside is that you potentially have a lot more nodes joining the cluster, and also the memory footprint and size of the proxy. Keycloak server is a 125M+ distro and currently takes up minimally 300M+ of actual RAM. I'm not sure if that's something we should take into account or not. I also don't know if users want one proxy that virtual-hosts a bunch of apps or not. I'm not keen on having multiple options for deploying the proxy. More work for us and more work for our users to figure out what to do. I would rather have one way that we can push people towards as the recommended and preferred way. Bill From bburke at redhat.com Tue Mar 14 09:52:01 2017 From: bburke at redhat.com (Bill Burke) Date: Tue, 14 Mar 2017 09:52:01 -0400 Subject: [keycloak-dev] Profile SPI In-Reply-To: References: Message-ID: This would have to be a completely optional SPI. There are a lot of Keycloak users that are just mapping data from LDAP into our tokens and don't care about validation. On 3/14/17 5:13 AM, Stian Thorgersen wrote: > At the moment there is no single point to define validation for a user. > Even worse, for the account management console and admin console it's not > even possible to define validation for custom attributes. > > Also, as there is no defined list of attributes for a user, the > mapping of user attributes is error prone. > > I'd like to introduce a Profile SPI to help with this. It would have > methods to: > > * Validate users during creation and updates > * List defined attributes on a user > > There would be a built-in provider that would delegate to ProfileAttribute > SPI. ProfileAttribute SPI would allow defining configurable providers for > single user attributes.
I'm also considering adding a separate Validation > SPI, so a ProfileAttribute provider could delegate validation to a separate > validator. > > Users could also implement their own Profile provider to do whatever they > want. I'd like to aim to make the SPI a supported SPI. > > First pass would focus purely on validation. Second pass would focus on > using the attribute metadata to do things like: > > * Have dropdown boxes in mappers to select user attribute instead of > copy/pasting the name > * Have additional built-in attributes on registration form, update profile > form and account management console that can be enabled/disabled by > defining the Profile. I'm not suggesting a huge amount here and it will be > limited to a few sensible attributes. Defining more complex things like > address would still be done through extending the forms. > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From Paul.Waite at digital.homeoffice.gov.uk Tue Mar 14 10:05:49 2017 From: Paul.Waite at digital.homeoffice.gov.uk (Paul Waite) Date: Tue, 14 Mar 2017 14:05:49 +0000 Subject: [keycloak-dev] Login theme templates: getting the two-letter code for the page's current language Message-ID: <16A66CC2-1652-4977-B3F2-5D15942055CC@digital.homeoffice.gov.uk> I'm a front-end web developer working on a Keycloak login theme for the UK Home Office. The root template.ftl file in the base theme does not include a lang attribute on its <html> tag. I'm trying to add one, as it's required by the W3C's Web Content Accessibility Guidelines (WCAG): - https://www.w3.org/TR/WCAG20/#meaning-doc-lang-id - https://www.w3.org/TR/WCAG20-TECHS/H57.html and useful for screen readers: - https://www.paciellogroup.com/blog/2016/06/using-the-html-lang-attribute/ The value of the attribute should be the ISO 639 code for the main language (e.g.
English, Italian) that the page is written in. I tried getting this from the .locale template variable, but at least on the standalone server (2.5.4), this was always set to en_GB, even when internationalization was enabled and the default language was set to a different language (I tried with Italian). I can't see anywhere else to access the language code for the page's current language. My current workaround is to loop through locale.supported (if locale is defined), and if a supported locale's label matches locale.current, grab the first two characters of the kc_locale query string parameter in the supported locale's URL:

<#assign LANG_CODE = "en">
<#if .locale??>
  <#assign LANG_CODE = .locale>
</#if>
<#if locale??>
  <#list locale.supported>
    <#items as supportedLocale>
      <#if supportedLocale.label == locale.current>
        <#if supportedLocale.url?contains("?kc_locale=")>
          <#assign LANG_CODE = supportedLocale.url?keep_after("?kc_locale=")[0..1]>
        </#if>
        <#if supportedLocale.url?contains("&kc_locale=")>
          <#assign LANG_CODE = supportedLocale.url?keep_after("&kc_locale=")[0..1]>
        </#if>
      </#if>
    </#items>
  </#list>
</#if>

Obviously this depends on no two locales sharing the same label, and on the first two characters of kc_locale being sufficient. It would be really useful if the language code for the page's current language were available in a template variable, and if this were used to populate the lang attribute on the HTML tag in the root login template. Paul Waite Associate Transform 60 Great Portland Street London W1W 7RT Mobile: +447764 752508 Email: paul.waite at transformuk.com Web: www.transformUK.com Follow us on Twitter @TransformUK Please ensure that any communication with Home Office Digital is via an official account ending with digital.homeoffice.gov.uk or homeoffice.gsi.gov.uk. This email and any files transmitted with it are private and intended solely for the use of the individual or entity to whom they are addressed.
If you have received this email in error please return it to the address it came from telling them it is not for you and then delete it from your system. Communications via the digital.homeoffice.gov.uk domain may be automatically logged, monitored and/or recorded for legal purposes. This email message has been swept for computer viruses. From tech at psynd.net Tue Mar 14 13:04:12 2017 From: tech at psynd.net (Tech) Date: Tue, 14 Mar 2017 18:04:12 +0100 Subject: [keycloak-dev] Force Token Authentication Method Message-ID: Dear experts, we are integrating an application, Moodle, that apparently has an openIdConnect plugin that is already working with Azure (we tested already). Changing the IDP from Azure to Keycloak, we get the following error: "Error in OpenID Connect: Code not valid"

line 54 of /auth/oidc/classes/utils.php: moodle_exception thrown
line 252 of /auth/oidc/classes/oidcclient.php: call to auth_oidc\utils::process_json_response()
line 197 of /auth/oidc/classes/loginflow/authcode.php: call to auth_oidc\oidcclient->tokenrequest()
line 85 of /auth/oidc/classes/loginflow/authcode.php: call to auth_oidc\loginflow\authcode->handleauthresponse()
line 105 of /auth/oidc/auth.php: call to auth_oidc\loginflow\authcode->handleredirect()
line 29 of /auth/oidc/index.php: call to auth_plugin_oidc->handleredirect()

Where the Code has the following format: "hZvVPC6iqBAZk9sXNbGGFa4hyHSdfLvsQ8adtGXS1dI8789b5e7-2d4f-4336-9896-981621969138" We opened the .well-known and we have: "token_endpoint_auth_methods_supported": ["private_key_jwt", "client_secret_basic", "client_secret_post"]. Checking online https://github.com/Microsoft/o365-moodle/issues/200 we found the identical stack trace; the other person resolved the issue by changing the Token Authentication Method to client_secret_post, but from the .well-known we saw that it's already among the accepted auth methods for our Keycloak. Do you have any advice?
Thanks From mark.pardijs at topicus.nl Tue Mar 14 13:21:33 2017 From: mark.pardijs at topicus.nl (Mark Pardijs) Date: Tue, 14 Mar 2017 17:21:33 +0000 Subject: [keycloak-dev] Deploying provider with dependencies In-Reply-To: <1485123376.9077.1.camel@cargosoft.ru> References: <1484736842.3591.1.camel@cargosoft.ru> <1484770247.18494.1.camel@cargosoft.ru> <1484774410.18494.3.camel@cargosoft.ru> <0cd22167-1915-1509-2b77-fddcf009d477@redhat.com> <1485123376.9077.1.camel@cargosoft.ru> Message-ID: I'm trying to add a custom authenticator to Keycloak, and I'm also confused on this topic. First, I copied my jar into the keycloak/providers directory; this works. Then, I started to depend on a third party library, so I needed to add this third party jar in my providers directory. Since it's not very convenient to copy all third party libs, I tried some other ways but I'm lost now. I could build an ear file with my custom jar and third party libs in it, and place it in the deployments directory, but then the service module loader fails with the following message: 17:13:31,485 WARN [org.jboss.modules] (ServerService Thread Pool -- 50) Failed to define class nl.MyAuthenticatorFactory in Module "deployment.keycloak-authenticator-ear-1.0-SNAPSHOT.ear.keycloak-authenticat-1.0-SNAPSHOT.jar:main" from Service Module Loader: java.lang.NoClassDefFoundError: Failed to link nl/MyAuthenticatorFactory (Module "deployment.keycloak-authenticator-ear-1.0-SNAPSHOT.ear.keycloak-authenticator-1.0-SNAPSHOT.jar:main" from Service Module Loader): org/keycloak/authentication/AuthenticatorFactory. When I put the ear in the providers dir, it's not picked up at all. I think I'm missing something in my ear file; I read something about a jboss-deployment-structure.xml but can't find a good example of using this in this situation. So, hopefully you can help me with these questions: 1. Isn't it possible to depend on a third party lib that's already on Keycloak's default classpath?
I'm depending on the commons-lang lib, though, which is also on Keycloak's default classpath, but when using my provider I still get a ClassNotFound error. 2. The documentation is mentioning two ways to deploy a provider: using the /providers dir or using the deployments dir; which one is recommended when? 3. What's the expected structure of an ear file for use in the deployments directory? Currently I have the same as Dmitry mentioned below. On 22 Jan 2017, at 23:16, Dmitry Telegin > wrote: Tried with 2.5.0.Final - the EAR doesn't get recursed unless there's an application.xml with all the internal JARs explicitly declared as modules. Could it have been some special jboss-deployment-structure.xml in your case? Either way, it's a very subtle issue that deserves being documented IMHO. Cheers, Dmitry On Thu, 19/01/2017 at 09:56 -0500, Bill Burke wrote: I'm pretty sure you can just remove the application.xml too. Then the ear will be recursed. I tried this out when I first wrote the deployer. On 1/18/17 4:20 PM, Dmitry Telegin wrote: I've finally solved this. There are two points to pay attention to: - maven-ear-plugin by default generates an almost empty application.xml. In order for a subdeployment to be recognized, it must be mentioned in application.xml as a module. Both and worked for me; - in JBoss/WildFly, only top-level jboss-deployment-structure.xml files are recognized. Thus, this file should be moved from the JAR to the EAR and tweaked accordingly. This is discussed in detail here: http://stackoverflow.com/questions/26859092/jboss-deployment-structure-xml-does-not-loads-the-dependencies-in-my-ear-project Again, it would be nice to have a complete working example to demonstrate this approach. I think I could (one day) update my BeerCloak example to cover this new deployment technique. Dmitry On Wed, 18/01/2017 at 23:10 +0300, Dmitry Telegin wrote: Stian, I've tried to package my provider JAR into an EAR, but that didn't work :( The layout is the following:

foo-0.1-SNAPSHOT.ear
+- foo-provider-0.1-SNAPSHOT.jar
+- META-INF
   +- application.xml

When I put the JAR into the deployments subdir, it is deployed successfully, initialization code is called etc. But when I drop the EAR in the same subdir, there is only a successful deployment message. The provider doesn't get initialized; it seems like the deployer doesn't recurse into EAR contents. I tried to place the JAR into the "lib" subdir inside the EAR; this didn't work either. The EAR is generated by maven-ear-plugin with the standard settings. Am I missing something? Sorry for bugging you, but unfortunately there is not much said in the docs about deploying providers from inside EARs. A working example would be helpful as well. Dmitry On Wed, 18/01/2017 at 12:34 +0100, Stian Thorgersen wrote: You have two options: deploy as a module, which requires adding modules for all dependencies, or use the new deploy-as-a-JEE-archive approach, which also supports hot deployment. Check out the server developer guide for more details. On 18 January 2017 at 11:54, Dmitry Telegin https://lists.jboss.org/mailman/listinfo/keycloak-dev _______________________________________________ keycloak-dev mailing list keycloak-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/keycloak-dev From mposolda at redhat.com Tue Mar 14 16:49:53 2017 From: mposolda at redhat.com (Marek Posolda) Date: Tue, 14 Mar 2017 21:49:53 +0100 Subject: [keycloak-dev] Improve back-button and refreshes in authenticators? In-Reply-To: References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> Message-ID: <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> Thanks, that looks similar to my (3) though. Besides that, I wonder if we should save just the ID of the loginSession in the cookie and keep the "current-code" inside the loginSession (infinispan) similarly to how it is now?
I am thinking about the case when a potential attacker tricks Keycloak by manually sending a request which just uses the same code in the cookie and in the URL. Keycloak will then always treat this request as valid, because the code in the URL and in the cookie will always match. Couldn't that be an issue? Marek On 14/03/17 13:50, Bill Burke wrote: > I've got an idea. What about > > * keep the code in the URL > > * Additionally add a "current-code" cookie > > If code in the URL doesn't match the cookie, then redirect to the URL of > the current-code. > > > On 3/14/17 6:53 AM, Marek Posolda wrote: >> When working on login sessions, I wonder if we want to improve browser >> back-button and browser refreshes. >> >> In shortcut, I can see 3 basic options: >> >> 1) Keep same like now and rely on header "Cache-Control: no-store, >> must-revalidate, max-age=0" . This works fine and users never saw >> outdated form and never submit outdated form 2 times. However the >> usability sucks a bit IMO. When you press back-button after POST >> request, you can see the ugly browser page "Web page has expired" . And >> if you press F5 on this, you will see the unfriendly Keycloak error page >> "Error was occured. Please login again through your application" because >> of invalid code. >> >> 2) Use the pattern with POST followed by the redirect to GET. Since we >> will have loginSession with the ID in the cookie, the GET request can be >> sent to the URL without any special query parameter. Something like >> "http://localhost:8180/auth/realms/master/login-actions/authenticate" . >> This will allow us that in every stage of authentication, user can press >> back-button and will be always redirected to the first step of the flow.
As we will need to followup every POST request with one >> additional GET request. >> >> 3) Don't do anything special regarding back-button or refresh. But in >> case that page is refreshed AND the post with invalid (already used) >> code will be re-submitted, we won't display the ugly page "Error was >> occured.", but we will just redirect to current step of the flow. >> >> Example: >> a) User was redirected from the application to OIDC >> AuthorizationEndpoint request. Login page is shown >> b) User confirmed invalid username and password with POST request. Login >> form with error page "Invalid password" is shown >> c) User confirmed valid username and password with POST request. TOTP >> page is shown. >> d) User press back-button. Now he will see again the page with >> username/password form. >> e) User press F5. The POST request will be re-sent, but it will use >> previous "code", which is outdated now. So in this case, we will >> redirect to the current execution and TOTP form will be shown. No >> re-submission of username/password form will happen. >> >> In case 3, the username/password form will be shown again, but user >> won't be able to resubmit it. >> >> In shortcut: With 2 and 3, users will never see the browser page "Web >> page is expired" or Keycloak "Error occured. Go back to the >> application". With 2, there is additional GET request needed. With 3, >> the back-button may show the authentication forms, which user already >> successfully confirmed, but he won't be able to re-submit them. Is it >> bad regarding usability? To me, it looks better than showing "Web page >> is expired". >> >> So my preference is 3,2,1. WDYT? Any other options? 
>> >> Marek >> >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From bburke at redhat.com Tue Mar 14 18:32:45 2017 From: bburke at redhat.com (Bill Burke) Date: Tue, 14 Mar 2017 18:32:45 -0400 Subject: [keycloak-dev] token service Message-ID: There seems to be momentum building around token services, particularly features around: * Token downgrades. Reducing the scope of an access token when delegating to a separate less trusted service. For example, you have a token with admin privileges and you want to remove those privileges before re-using the token against another service. * Token exchanges. Ability to convert a foreign token to and from a Keycloak one. For example, if you want to trust tokens issued by some proprietary IBM IDM. * Trusting tokens from other Keycloak domains. (Although I think this can fall under token exchanges). * Token revalidation (I think we have this). There are some specs around this that Pedro pointed me to: [1]https://tools.ietf.org/html/draft-richer-oauth-chain-00 [2]https://tools.ietf.org/html/draft-campbell-oauth-sts-01 I think they are either missing things we need or too complex for our needs. * Token downgrades, or token redelegation/chaining I don't want to require apps to know the exact scope they have to downgrade to if they want to reduce the scope when interacting with another service. Let's provide an additional extension to [1] and supply a "client" parameter in which the clientId of the redelegation you want to perform is used. The token returned would be a union of the access token's scope and the configured scope of the target client. * Token exchanges For [2] Keycloak just doesn't have all the concepts that are spoken about here.
I also don't think the spec is good enough. Converting tokens would be handled by a Token Exchange SPI. A provider would be configured per realm and implemented on top of the ComponentModel SPI. Each of these provider instances would handle converting from an external token to a realm token and/or from a realm token to an external token. There will also be a REST endpoint on the realm to convert from external to Keycloak and a separate REST endpoint for converting from Keycloak to an external token. From external to Keycloak: This would be a form POST to /token/convert-from with these additional form parameters:

"token" - REQUIRED. String rep of the token.
"provider" - REQUIRED. Id of the transformer registered in the realm for the token type.
"requested-token-type" - OPTIONAL. "id", "access", "offline", or "refresh". Default is "access".
"scope" - OPTIONAL. Same as the OAuth scope.

This operation is analogous to the code-to-token flow. Here we are creating a token tailored to the authenticated client. So all scope configurations and mappers that the client has are applied. This means that the client must be registered as an OIDC client. The SPI would look something like this:

interface TokenExchangeFromProvider extends Provider {
    Transformer parse(ClientModel client, Map formParameters);

    interface Transformer {
        UserModel getUser();
        IDToken convert(IDToken idToken);
        AccessToken convert(AccessToken accessToken);
    }
}

The getUser() method returns a user that was authenticated from the external token. The convert() methods just give the provider the flexibility to do further transformations on the returned token.
The runtime would do something like this:

ClientModel authenticatedClient = ...;
ComponentModel model = realm.getComponent(formParams.get("provider"));
TokenExchangeFromProvider provider = session.getProvider(TokenExchangeFromProvider.class, model);
Transformer transformer = provider.parse(authenticatedClient, formParams);
UserModel user = transformer.getUser();
if (formParams.get("requested-token-type").equals("access")) {
    AccessToken accessToken = generateAccessToken(authenticatedClient, user, ...);
    accessToken = transformer.convert(accessToken);
}

Something similar would be done for converting a Keycloak token to an external token. This would be a form POST to /token/convert-to with these additional form parameters:

"token" - REQUIRED. String rep of the token.
"provider" - REQUIRED. Id of the transformer registered in the realm for the token type.

interface TokenExchangeToProvider extends Provider {
    ResponseBuilder parse(ClientModel client, Map formParameters);
}

Since we're crafting something for an external token system, we give the provider complete autonomy in crafting the HTTP response to this operation. From bburke at redhat.com Tue Mar 14 18:47:13 2017 From: bburke at redhat.com (Bill Burke) Date: Tue, 14 Mar 2017 18:47:13 -0400 Subject: [keycloak-dev] Improve back-button and refreshes in authenticators? In-Reply-To: <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> Message-ID: <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> Ya, similar to #3, my thought is if you combine a cookie with code-in-url, you have a solution for back-button and refresh and there are no special headers you have to specify. We used to do #2, but a lot of people, specifically jboss.org guys, complained about it. On 3/14/17 4:49 PM, Marek Posolda wrote: > Thanks, that looks similar to my (3) though.
> > Besides that I wonder if we should save just the ID of loginSession in > the cookie and the "current-code" keep inside the loginSession > (infinispan) similarly like it is now? > > I am thinking about the case when potential attacker tricks Keycloak > by manually sending the request, which will just use same code in the > cookie and in the URL. Keycloak will then always treat this request as > valid due the code in the URL and in cookie will always match. > Couldn't that be an issue? > > Marek > > On 14/03/17 13:50, Bill Burke wrote: >> I've got an idea. What about >> >> * keep the code in the URL >> >> * Additionally add a "current-code" cookie >> >> If code in the URL doesn't match the cookie, then redirect to the URL of >> the current-code. >> >> >> On 3/14/17 6:53 AM, Marek Posolda wrote: >>> When working on login sessions, I wonder if we want to improve browser >>> back-button and browser refreshes. >>> >>> In shortcut, I can see 3 basic options: >>> >>> 1) Keep same like now and rely on header "Cache-Control: no-store, >>> must-revalidate, max-age=0" . This works fine and users never saw >>> outdated form and never submit outdated form 2 times. However the >>> usability sucks a bit IMO. When you press back-button after POST >>> request, you can see the ugly browser page "Web page has expired" . And >>> if you press F5 on this, you will see the unfriendly Keycloak error >>> page >>> "Error was occured. Please login again through your application" >>> because >>> of invalid code. >>> >>> 2) Use the pattern with POST followed by the redirect to GET. Since we >>> will have loginSession with the ID in the cookie, the GET request >>> can be >>> sent to the URL without any special query parameter. Something like >>> "http://localhost:8180/auth/realms/master/login-actions/authenticate" . >>> This will allow us that in every stage of authentication, user can >>> press >>> back-button and will be always redirected to the first step of the >>> flow. 
>>> When he refreshes the page, it will re-send just the GET request and >>> always brings him to the current execution. >>> >>> This looks most user-friendly. But there is the issue with performance >>> though. As we will need to followup every POST request with one >>> additional GET request. >>> >>> 3) Don't do anything special regarding back-button or refresh. But in >>> case that page is refreshed AND the post with invalid (already used) >>> code will be re-submitted, we won't display the ugly page "Error was >>> occured.", but we will just redirect to current step of the flow. >>> >>> Example: >>> a) User was redirected from the application to OIDC >>> AuthorizationEndpoint request. Login page is shown >>> b) User confirmed invalid username and password with POST request. >>> Login >>> form with error page "Invalid password" is shown >>> c) User confirmed valid username and password with POST request. TOTP >>> page is shown. >>> d) User press back-button. Now he will see again the page with >>> username/password form. >>> e) User press F5. The POST request will be re-sent, but it will use >>> previous "code", which is outdated now. So in this case, we will >>> redirect to the current execution and TOTP form will be shown. No >>> re-submission of username/password form will happen. >>> >>> In case 3, the username/password form will be shown again, but user >>> won't be able to resubmit it. >>> >>> In shortcut: With 2 and 3, users will never see the browser page "Web >>> page is expired" or Keycloak "Error occured. Go back to the >>> application". With 2, there is additional GET request needed. With 3, >>> the back-button may show the authentication forms, which user already >>> successfully confirmed, but he won't be able to re-submit them. Is it >>> bad regarding usability? To me, it looks better than showing "Web page >>> is expired". >>> >>> So my preference is 3,2,1. WDYT? Any other options? 
>>> >>> Marek >>> >>> _______________________________________________ >>> keycloak-dev mailing list >>> keycloak-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > > From Sebastian.Schuster at bosch-si.com Wed Mar 15 03:30:48 2017 From: Sebastian.Schuster at bosch-si.com (Schuster Sebastian (INST/ESY1)) Date: Wed, 15 Mar 2017 07:30:48 +0000 Subject: [keycloak-dev] token service In-Reply-To: References: Message-ID: <41120688F6FCE840B172FD409300D9E8283903F2@BE6PW2EXD01.bosch-si.com> Have you seen the current version of [2] at https://tools.ietf.org/html/draft-ietf-oauth-token-exchange-07 ? This one looks very different and probably better... Regards, Sebastian -----Original Message----- From: keycloak-dev-bounces at lists.jboss.org [mailto:keycloak-dev-bounces at lists.jboss.org] On Behalf Of Bill Burke Sent: Tuesday, 14 March 2017 23:33 To: keycloak-dev Subject: [keycloak-dev] token service There seems to be momentum building around token services, particularly features around: * Token downgrades. Reducing the scope of an access token when delegating to a separate less trusted service. For example, you have a token with admin privileges and you want to remove those privileges before re-using the token against another service. * Token exchanges. Ability to convert a foreign token to and from a Keycloak one. For example, if you want to trust tokens issued by some proprietary IBM IDM. * Trusting tokens from other Keycloak domains. (Although I think this can fall under token exchanges). * Token revalidation (I think we have this).
There are some specs around this that Pedro pointed me to: [1] https://tools.ietf.org/html/draft-richer-oauth-chain-00 [2] https://tools.ietf.org/html/draft-campbell-oauth-sts-01 I think they are either missing things we need or too complex for our needs. * Token downgrades, or token redelegation/chaining I don't want to require apps to know the exact scope they have to downgrade to if they want to reduce the scope when interacting with another service. Let's provide an additional extension to [1] and supply a "client" parameter in which the clientId of the redelegation you want to perform is used. The token returned would be a union of the access token's scope and the configured scope of the target client. * Token exchanges For [2] Keycloak just doesn't have all the concepts that are spoken about here. I also don't think the spec is good enough. Converting tokens would be handled by a Token Exchange SPI. A provider would be configured per realm and implemented on top of the ComponentModel SPI. Each of these provider instances would handle either converting from an external token to a realm token and/or from a realm token to an external token. There will also be a REST endpoint on the realm to convert from external to Keycloak and a separate REST endpoint for converting from Keycloak to an external token. From external to Keycloak: This would be a form POST to /token/convert-from with these additional form parameters

"token" - REQUIRED. string rep of the token
"provider" - REQUIRED. id of the transformer registered in the realm for the token type
"requested-token-type" - OPTIONAL. "id", "access", "offline", or "refresh". Default is "access".
"scope" - OPTIONAL. Same as oauth scope.

This operation is analogous to the code-to-token flow. Here we are creating a token tailored to the authenticated client, so all scope configurations and mappers that the client has are applied. This means that the client must be registered as an OIDC client.
The SPI would look something like this:

interface TokenExchangeFromProvider extends Provider {

    Transformer parse(ClientModel client, Map formParameters);

    interface Transformer {
        UserModel getUser();
        IDToken convert(IDToken idToken);
        AccessToken convert(AccessToken accessToken);
    }
}

The getUser() method returns a user that was authenticated from the external token. The convert() methods just give the provider the flexibility to do further transformations on the returned token. The runtime would do something like this:

ClientModel authenticatedClient = ...;
ComponentModel model = realm.getComponent(formParams.get("provider"));
TokenExchangeFromProvider provider = session.getProvider(TokenExchangeFromProvider.class, model);
Transformer transformer = provider.parse(authenticatedClient, formParams);
UserModel user = transformer.getUser();
if (formParams.get("requested-token-type").equals("access")) {
    AccessToken accessToken = generateAccessToken(authenticatedClient, user, ...);
    accessToken = transformer.convert(accessToken);
}

Something similar would be done for converting a Keycloak token to an external token: This would be a form POST to /token/convert-to with these additional form parameters

"token" - REQUIRED. string rep of the token
"provider" - REQUIRED. id of the transformer registered in the realm for the token type

interface TokenExchangeToProvider extends Provider {

    ResponseBuilder parse(ClientModel client, Map formParameters);
}

Since we're crafting something for an external token system, we give the provider complete autonomy in crafting the HTTP response to this operation.
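As a minimal illustration of what a client call to the proposed /token/convert-from endpoint could look like, here is a sketch that builds the form POST described above. The endpoint path and parameter names are taken from this proposal (which may still change); the base URL, the provider id ("ibm-idm"), and the token value are made-up placeholders.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a client-side request to the proposed /token/convert-from
// endpoint. Parameter names follow the proposal in this thread; the
// provider id "ibm-idm" is a made-up example.
public class ConvertFromRequest {

    // Encode parameters as application/x-www-form-urlencoded.
    static String formBody(Map<String, String> params) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (sb.length() > 0) sb.append('&');
            sb.append(URLEncoder.encode(e.getKey(), StandardCharsets.UTF_8))
              .append('=')
              .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
        }
        return sb.toString();
    }

    static HttpRequest convertFrom(String baseUrl, String externalToken) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("token", externalToken);           // REQUIRED: the external token
        params.put("provider", "ibm-idm");            // REQUIRED: transformer id (made up)
        params.put("requested-token-type", "access"); // OPTIONAL, default "access"
        params.put("scope", "profile");               // OPTIONAL, same as oauth scope
        return HttpRequest.newBuilder(URI.create(baseUrl + "/token/convert-from"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(formBody(params)))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = convertFrom("http://localhost:8080/auth/realms/demo", "eyJhbGc...");
        System.out.println(req.method() + " " + req.uri());
    }
}
```

Sending the request (HttpClient.newHttpClient().send(...)) would then return the Keycloak-issued token, assuming the endpoint ends up being implemented as proposed.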
_______________________________________________ keycloak-dev mailing list keycloak-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/keycloak-dev From sthorger at redhat.com Wed Mar 15 03:33:31 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Wed, 15 Mar 2017 08:33:31 +0100 Subject: [keycloak-dev] Russian translation review Message-ID: Anyone capable of reviewing the PR for the Russian translations: https://github.com/keycloak/keycloak/pull/3898 From sthorger at redhat.com Wed Mar 15 03:35:42 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Wed, 15 Mar 2017 08:35:42 +0100 Subject: [keycloak-dev] Keycloak 2.5.5.Final Released Message-ID: Keycloak 2.5.5.Final is out. There's nothing much except a handful of bug fixes, but it's still worth upgrading. To download the release go to the Keycloak homepage. Highlights - A few bug fixes The full list of resolved issues is available in JIRA. Upgrading Before you upgrade remember to back up your database and check the migration guide. From sts at ono.at Wed Mar 15 06:22:49 2017 From: sts at ono.at (Stefan Schlesinger) Date: Wed, 15 Mar 2017 11:22:49 +0100 Subject: [keycloak-dev] Keycloak High-Availability / Database Message-ID: Hello Folks! I tried to set up a Keycloak HA cluster with Percona XtraDB/Galera as the HA database backend, and it looks like it's not currently supported, at least not by the database schema Keycloak uses and the default Galera settings. Galera requires (or, for performance, recommends) that all tables be defined with a primary key field, and when I tried to add a role to a group I got the following error: ERROR [io.undertow.request] (default task-14) UT005023: Exception handling request to /auth/admin/realms/vault/groups/c2a04652-a322-1111-18ea-b2145bab2222/role-mappings/realm: org.jboss.resteasy.spi.UnhandledException: org.keycloak.models.ModelException: org.hibernate.exception.GenericJDBCException: could not execute statement ...
Caused by: org.keycloak.models.ModelException: org.hibernate.exception.GenericJDBCException: could not execute statement ... Caused by: org.hibernate.exception.GenericJDBCException: could not execute statement ... Caused by: java.sql.SQLException: Percona-XtraDB-Cluster prohibits use of DML command on a table (keycloak.ADMIN_EVENT_ENTITY) without an explicit primary key with pxc_strict_mode = ENFORCING or MASTER Looking through the database schema, the following tables don't have a primary key defined:

ADMIN_EVENT_ENTITY
COMPOSITE_ROLE*
CREDENTIAL_ATTRIBUTE
DATABASECHANGELOG
FED_CREDENTIAL_ATTRIBUTE
REALM_ENABLED_EVENT_TYPES*
REALM_EVENTS_LISTENERS*
REALM_SUPPORTED_LOCALES*
REDIRECT_URIS*
WEB_ORIGINS*

Tables marked with an asterisk don't even have an ID field; the rest actually have an ID field (with a UUID 'primary key'), which I think could easily be declared as the primary key. Looking at the Percona documentation[1][2], the limitation to only support tables with primary keys was lifted in more recent versions with the introduction of wsrep_certify_nonPK. However, it's still generally a best practice to have explicit PKs. If you don't define a PK, Galera will use an implicit hidden 6-byte PK for InnoDB tables, taking up space that you can't use for querying. InnoDB is very much optimized towards PK lookups. Also, I'd need to change pxc_strict_mode from ENFORCING to PERMISSIVE, but that might have other side effects, as it relaxes other validations as well. Any experiences? Also, would it be possible to add primary keys in a bugfix version? Best, Stefan.
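For the tables above that already carry a UUID ID column, the fix being asked about would amount to DDL along these lines. This is a sketch only, not an official migration: whether each ID column is really unique must be verified first, the starred tables (which have no ID column) would need a key over their existing columns instead, and DATABASECHANGELOG is Liquibase's own bookkeeping table, so it is left out here.

```java
import java.util.List;

// Sketch: generate the ALTER TABLE statements for the Keycloak tables
// listed in the thread that already have a UUID ID column. This only
// builds the DDL strings; running them against a real schema is up to
// the operator and needs verification that ID is unique in each table.
public class MissingPrimaryKeys {

    static final List<String> TABLES_WITH_ID = List.of(
            "ADMIN_EVENT_ENTITY",
            "CREDENTIAL_ATTRIBUTE",
            "FED_CREDENTIAL_ATTRIBUTE");

    static String addPrimaryKey(String table) {
        return "ALTER TABLE " + table + " ADD PRIMARY KEY (ID);";
    }

    public static void main(String[] args) {
        TABLES_WITH_ID.forEach(t -> System.out.println(addPrimaryKey(t)));
    }
}
```

With explicit PKs in place, pxc_strict_mode = ENFORCING would no longer reject DML on those tables.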
[1] - https://www.percona.com/doc/percona-xtradb-cluster/5.7/features/pxc-strict-mode.html#tables-without-primary-keys [2] - https://www.percona.com/doc/percona-xtradb-cluster/5.7/wsrep-system-index.html#wsrep_certify_nonPK From psilva at redhat.com Wed Mar 15 07:24:12 2017 From: psilva at redhat.com (Pedro Igor Silva) Date: Wed, 15 Mar 2017 08:24:12 -0300 Subject: [keycloak-dev] token service In-Reply-To: References: Message-ID: On Tue, Mar 14, 2017 at 7:32 PM, Bill Burke wrote: > There seems to be momentum building around token services, in particular > features around: > > * Token downgrades. Reducing the scope of an access token when > delegating to a separate, less trusted service. For example, you have a > token with admin privileges and you want to remove those privileges > before re-using the token against another service. > > * Token exchanges. Ability to convert a foreign token to and from a > Keycloak one. For example, if you want to trust tokens issued by some > proprietary IBM IDM. > > * Trusting tokens from other Keycloak domains. (Although I think this > can fall under token exchanges.) > > * Token revalidation (I think we have this). > > > There are some specs around this that Pedro pointed me to: > > [1]https://tools.ietf.org/html/draft-richer-oauth-chain-00 > [2]https://tools.ietf.org/html/draft-campbell-oauth-sts-01 > > > I think they are either missing things we need or too complex for > our needs. > > * Token downgrades, or token redelegation/chaining > > I don't want to require apps to know the exact scope they have to > downgrade to if they want to reduce the scope when interacting with > another service. Let's provide an additional extension to [1] and > supply a "client" parameter in which the clientId of the redelegation > you want to perform is used. The token returned would be a union of the > access token's scope and the configured scope of the target client.
> > * Token exchanges > For [2] Keycloak just doesn't have all the concepts that are spoken > about here. I also don't think the spec is good enough. Converting > tokens would be handled by a Token Exchange SPI. A provider would be > configured per realm and implemented on top of the ComponentModel SPI. > Each of these provider instances would handle either converting from an > external token to a realm token and/or from a realm token to an external > token. There will also be a REST endpoint on the realm to convert from > external to Keycloak and a separate REST endpoint for converting from > Keycloak to an external token. > > From external to Keycloak: > This would be a form POST to /token/convert-from with these additional > form parameters > > "token" - REQUIRED. string rep of the token > "provider" - REQUIRED. id of the transformer registered in the realm for the > token type > Why do you need to send "provider" ? If you are already sending "requested_token_type", the provider is implicit, right ? Or if you don't send "requested_token_type" you can allow users to specify the default token type to be issued. > "requested-token-type" - OPTIONAL. "id", "access", "offline", or > I think we have a great opportunity here to also support use cases where you need to exchange a SAML assertion for an OAuth2/OIDC token. There is a specific standard for that, but I think an STS would be perfect for this. I remember some people asking for this and I think it will be a great win for enterprise use cases and legacy systems.
The SPI would look > something like this: > > interface TokenExchangeFromProvider extends Provider { > > Transformer parse(ClientModel client, Map > formParameters); > > interface Transformer { > > UserModel getUser(); > IDToken convert(IDToken idToken); > AccessToken convert(AccessToken accessToken); > } > } > > The getUser() method returns a user that was authenticated from the > external token. The convert() methods just give the provider the > flexibility to do further transformations on the returned token. > > The runtime would do something like this: > > ClientModel authenticatedClient = ...; > ComponentModel model = realm.getComponent(formParams.get("provider")); > TokenExchangeFromProvider provider = > session.getProvider(TokenExchangeFromProvider.class, model); > Transformer transformer = provider.parse(authenticatedClient, formParams); > UserModel user = transformer.getUser(); > if (formParams.get("requested-token-type").equals("access")) { > AccessToken accessToken = generateAccessToken(authenticatedClient, > user, ...); > accessToken = transformer.convert(accessToken); > } > > Something similar would be done for converting a Keycloak token to an > external token: > > This would be a form POST to /token/convert-to with these additional > form parameters > > "token" - REQUIRED. string rep of the token > "provider" - REQUIRED. id of the transformer registered in the realm for the > token type > Same comments as for the "external to Keycloak" exchange. In this particular case, I think we can also provide some integration with the identity brokering functionality. For instance, suppose I have a KC token and I want to obtain a Facebook AT. I know we have ways to do that today, but I think that using an STS is much neater. In this case, we also don't need to send the provider; we can make this completely configurable from the admin console. E.g.: associate token types with OOTB/custom identity providers.
Maybe we can even define a Token Exchange Identity Provider, which can be configured to integrate with an external STS. > > interface TokenExchangeToProvider extends Provider { > > ResponseBuilder parse(ClientModel client, Map > formParameters); > } > > Since we're crafting something for an external token system, we give the > provider complete autonomy in crafting the HTTP response to this operation. > Not sure you remember, but when we were discussing SAML in the early days of Keycloak, I mentioned an API for a Security Token Service that we had in PL for years (I think Stefan did it), plus a simplified version of this API in PL JEE/IDM. One of the things that I liked most in PL is that the STS is the backbone for the IdP. What I mean is, the IdP doesn't care about how a token is issued/revoked/validated/renewed, but delegates this to the STS, which is basically responsible for implementing the communication/protocol between the involved parties. > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From psilva at redhat.com Wed Mar 15 07:27:59 2017 From: psilva at redhat.com (Pedro Igor Silva) Date: Wed, 15 Mar 2017 08:27:59 -0300 Subject: [keycloak-dev] token service In-Reply-To: <41120688F6FCE840B172FD409300D9E8283903F2@BE6PW2EXD01.bosch-si.com> References: <41120688F6FCE840B172FD409300D9E8283903F2@BE6PW2EXD01.bosch-si.com> Message-ID: +1. That is the most recent version of token exchange. Sorry Bill, just got those from my bookmarks. On Wed, Mar 15, 2017 at 4:30 AM, Schuster Sebastian (INST/ESY1) < Sebastian.Schuster at bosch-si.com> wrote: > Have you seen the current version of [2] at https://tools.ietf.org/html/ > draft-ietf-oauth-token-exchange-07 ? This one looks very different and > probably better...
> > Regards, > Sebastian From bburke at redhat.com Wed Mar 15 10:14:41 2017 From: bburke at redhat.com (Bill Burke) Date: Wed, 15 Mar 2017 10:14:41 -0400 Subject: [keycloak-dev] Russian translation review In-Reply-To: References: Message-ID: <41cf40cd-2161-88ca-1974-77566dde5d42@redhat.com> I hear Paul Manafort and Mike Flynn are available now and could do it. Sorry, couldn't resist...
:) On 3/15/17 3:33 AM, Stian Thorgersen wrote: > Anyone capable of reviewing PR for the Russian translations: > https://github.com/keycloak/keycloak/pull/3898 > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From irum at redhat.com Wed Mar 15 11:01:15 2017 From: irum at redhat.com (Ilya Rum) Date: Wed, 15 Mar 2017 16:01:15 +0100 Subject: [keycloak-dev] Russian translation review In-Reply-To: <41cf40cd-2161-88ca-1974-77566dde5d42@redhat.com> References: <41cf40cd-2161-88ca-1974-77566dde5d42@redhat.com> Message-ID: I can also do that; being Russian might help. Probably. I hope? On Wed, Mar 15, 2017 at 3:14 PM, Bill Burke wrote: > I hear Paul Manafort and Mike Flynn are available now and could do it. > Sorry, couldn't resist... :) > > > On 3/15/17 3:33 AM, Stian Thorgersen wrote: > > Anyone capable of reviewing PR for the Russian translations: > > https://github.com/keycloak/keycloak/pull/3898 > > _______________________________________________ > > keycloak-dev mailing list > > keycloak-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From bburke at redhat.com Wed Mar 15 11:01:47 2017 From: bburke at redhat.com (Bill Burke) Date: Wed, 15 Mar 2017 11:01:47 -0400 Subject: [keycloak-dev] token service In-Reply-To: References: Message-ID: <04daf66c-986f-08d9-6299-500cb60997fb@redhat.com> On 3/15/17 7:24 AM, Pedro Igor Silva wrote: > > From external to Keycloak: > This would be a form POST to /token/convert-from with these additional > form parameters > > "token" - REQUIRED. string rep of the token > > > > > "provider" - REQUIRED. id of the transformer registered in the realm for the > token type > > > Why do you need to send "provider" ?
If you are already sending > "requested_token_type", the provider is implicit, right ? > > Or if you don't send "requested_token_type" you can allow users to > specify the default token type to be issued. provider matches up with a Keycloak ComponentModel, which matches up to a Keycloak ProviderFactory that knows how to do the translation. We need some way of finding the service that can handle the translation of the external token. The requested_token_type in my proposal is the category of token you want, not its actual type. > "requested-token-type" - OPTIONAL. "id", "access", "offline", or > > > I think we have a great opportunity here to also support use cases > where you need to exchange a SAML assertion for an OAuth2/OIDC token. > There is a specific standard for that, but I think an STS would be > perfect for this. > > I remember some people asking for this and I think it will be a great > win for enterprise use cases and legacy systems. I still think we should have a separation between external and internal exchanges. > > "refresh". Default is "access". > "scope" - OPTIONAL. Same as oauth scope. > > > This operation is analogous to the code to token flow. Here we are > creating a token tailored to the authenticated client. So all scope > configurations and mappers that the client has are applied. This means > that the client must be registered as an OIDC client. The SPI > would look > something like this: > > interface TokenExchangeFromProvider extends Provider { > > Transformer parse(ClientModel client, Map > formParameters); > > interface Transformer { > > UserModel getUser(); > IDToken convert(IDToken idToken); > AccessToken convert(AccessToken accessToken); > } > } > > The getUser() method returns a user that was authenticated from the > external token. The convert() methods just give the provider the > flexibility to do further transformations on the returned token.
> > The runtime would do something like this: > > ClientModel authenticatedClient = ...; > ComponentModel model = realm.getComponent(formParams.get("provider")); > TokenExchangeFromProvider provider = > session.getProvider(TokenExchangeFromProvider.class, model); > Transformer transformer = provider.parse(formParams); > UserModel user = transformer.getUser(); > if (formParam.get("requested-token-type").equals("access")) { > AccessToken accessToken = generateAccessToken(authenticatedClient, > user, ...); > accessToken = transformer.convert(accessToken). > } > > Something similar would be done for converting a Keycloak token to an > external token: > > This would be a form POST to /token/convert-to with these additional > form parameters > > "token" - REQUIRED. string rep of the token > "provider" - REQUIRED. id of transformer register in the realm for the > token type > > > Same comments I did to "external to Keycloak" exchange. In this > particular case, I think we can also provide some integration with > identity brokering functionality. > > For instance, suppose I have a KC token and I want to obtain a > Facebook AT. I know we have ways to do that today, but I think that > using a STS is much more neat. In this case, we also don't need to > send the provider, but we can make this completely configurable from > admin console. E.g.: associate token types with a OOTB/custom identity > providers. Maybe we can even define a Token Exchange Identity > Provider, which can be configured to integrate with an external STS. An STS could be used to convert a Facebook AT into a Keycloak one, but not vice versa. For Facebook, Google etc. a browser protocol is required in many cases to obtain the external token. With Identity Brokering you are delegating authentication to another IDP. Keycloak doesn't know how the user will be authenticated. For instance, with Google a user may require authentication via SMS. 
FYI, this is why the "Client Initiated Account Linking" protocol was just implemented. There's also a lot of brokers that can only do logout via a browser protocol. I see what you mean though. We already have the beginnings of an STS that is spread out in different classes and services. > > > interface TokenExchangeToProvider extends Provider { > > ResponseBuilder parse(ClientModel client, Map > formParameters); > } > > Since we're crafting something for an external token system, we > give the > provider complete autonomy in crafting the HTTP response to this > operation. > > > Not sure you remember. But when we were discussing SAML on the earlier > days of Keycloak, I mentioned a API for Security Token Service that we > had in PL for years (I think Stefan did it). Plus a simplified version > of this API in PL JEE/IDM. One of the things that I liked most in PL > is that the STS is the backbone for the IdP. What I mean is, the IdP > don't care about how a token is issued/revoked/validated/renewed, but > delegate this to STS. Being responsible to basically implement the > communication/protocol between the involved parties. We already have this separation. Authentication is independent of protocol and we have a common data model that isn't protocol specific. We also do not map from a brokered SAML assertion to a client's OIDC access token. The brokered SAML assertion is mapped into a common data model which is then mapped to the OIDC access token. I see what you are saying though. Token Exchange should be done in conjunction with refactoring and rewriting the Identity Brokering SPI as the two are related. This also probably has an effect on Client configuration as I could see an STS-only based client that is just interacting with the STS. 
Bill From bburke at redhat.com Wed Mar 15 11:04:50 2017 From: bburke at redhat.com (Bill Burke) Date: Wed, 15 Mar 2017 11:04:50 -0400 Subject: [keycloak-dev] Russian translation review In-Reply-To: References: <41cf40cd-2161-88ca-1974-77566dde5d42@redhat.com> Message-ID: <6dcfa4d5-6629-53dd-bc59-b14687b5787e@redhat.com> For those of you unaware of USA politics, I was making a joke when I mentioned Paul Manafort and Mike Flynn. On 3/15/17 11:01 AM, Ilya Rum wrote: > I can also do that, being russian might help. Probably. I hope? > > On Wed, Mar 15, 2017 at 3:14 PM, Bill Burke > wrote: > > I hear Paul Manafort and Mike Flynn are available now and could do it. > Sorry, couldn't resist... :) > > > On 3/15/17 3:33 AM, Stian Thorgersen wrote: > > Anyone capable of reviewing PR for the Russian translations: > > https://github.com/keycloak/keycloak/pull/3898 > > > _______________________________________________ > > keycloak-dev mailing list > > keycloak-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > From mark.pardijs at topicus.nl Wed Mar 15 11:16:47 2017 From: mark.pardijs at topicus.nl (Mark Pardijs) Date: Wed, 15 Mar 2017 15:16:47 +0000 Subject: [keycloak-dev] Deploying provider with dependencies In-Reply-To: References: <1484736842.3591.1.camel@cargosoft.ru> <1484770247.18494.1.camel@cargosoft.ru> <1484774410.18494.3.camel@cargosoft.ru> <0cd22167-1915-1509-2b77-fddcf009d477@redhat.com> <1485123376.9077.1.camel@cargosoft.ru> Message-ID: I somehow sort of solved this, but still am curious if this is the way to go. The way I understand it now is: the providers dir is a non-hot deploy method which uses the same classpath as keycloak. The deployments dir is the hot-deployment method where you have to provide the libs on classpath or depend on them via modules. 
The way I configured it now is the following: - The maven-jar-plugin, which builds a manifest file [1]. When I package my third-party libs in the EAR I get ClassNotFound errors, so I have to depend on modules already defined in Keycloak. - The maven-ear-plugin, which also builds the proper application.xml [2]. Is this the way to go?

[1]
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <version>2.6</version>
  <configuration>
    <archive>
      <manifestEntries>
        <Dependencies>org.keycloak.keycloak-server-spi-private,org.keycloak.keycloak-services,org.apache.commons.lang</Dependencies>
      </manifestEntries>
    </archive>
  </configuration>
</plugin>

[2]
<plugin>
  <artifactId>maven-ear-plugin</artifactId>
  <version>2.9.1</version>
  <configuration>
    <modules>
      <jarModule>
        <groupId>myGroupId</groupId>
        <artifactId>myArtifactId</artifactId>
        <includeInApplicationXml>true</includeInApplicationXml>
      </jarModule>
      <jarModule>
        <groupId>myGroupId2</groupId>
        <artifactId>myArtifactId2</artifactId>
        <includeInApplicationXml>true</includeInApplicationXml>
      </jarModule>
    </modules>
  </configuration>
</plugin>

On 14 Mar 2017, at 18:21, Mark Pardijs > wrote: I'm trying to add a custom authenticator to Keycloak, and I'm also confused on this topic. First, I copied my jar into the keycloak/providers directory; this works. Then I started to depend on a third-party library, so I needed to add this third-party jar to my providers directory. Since it's not very convenient to copy all third-party libs I tried some other ways, but I'm lost now. I could build an EAR file with my custom jar and third-party libs in it, and place it in the deployments directory, but then the service module loader fails with the following message: 17:13:31,485 WARN [org.jboss.modules] (ServerService Thread Pool -- 50) Failed to define class nl.MyAuthenticatorFactory in Module "deployment.keycloak-authenticator-ear-1.0-SNAPSHOT.ear.keycloak-authenticat-1.0-SNAPSHOT.jar:main" from Service Module Loader: java.lang.NoClassDefFoundError: Failed to link nl/MyAuthenticatorFactory (Module "deployment.keycloak-authenticator-ear-1.0-SNAPSHOT.ear.keycloak-authenticator-1.0-SNAPSHOT.jar:main" from Service Module Loader): org/keycloak/authentication/AuthenticatorFactory. When I put the EAR in the providers dir, it's not picked up at all. I think I'm missing something in my EAR file; I read something about a jboss-deployment-structure.xml but can't find a good example of using this in this situation.
So, hopefully you can help me with these quesitons: 1. Isn?t it possible to depend on a third party lib that?s already on the keycloak?s default classpath? I?m depending on commons-lang lib, thought which is also in keycloak?s default classpath, but still when using my provider I get a ClassNotFound error 2. The documentation is mentioning two ways to deploy a provider: using the /providers dir or using the deployments dir, which one is recommended when? 3. What?s the expected structure of an ear file for use in the deployments directory? Currently I have the same as Dmitry mentioned below Op 22 jan. 2017, om 23:16 heeft Dmitry Telegin > het volgende geschreven: Tried with 2.5.0.Final - the EAR doesn't get recursed unless there's an application.xml with all the internal JARs explicitly declared as modules. Could it have been some special jboss-deployment-structure.xml in your case? Either way, it's a very subtle issue that deserves being documented IMHO. Cheers, Dmitry ? Thu, 19/01/2017 ? 09:56 -0500, Bill Burke ?????: I'm pretty sure you can just remove the application.xml too. Then the ear will be recursed. I tried this out when I first wrote the deployer. On 1/18/17 4:20 PM, Dmitry Telegin wrote: I've finally solved this. There are two moments to pay attention to: - maven-ear-plugin by default generates almost empty application.xml. In order for subdeployment to be recognized, it must be mentioned in application.xml as a module. Both and worked for me; - in JBoss/WildFly, only top-level jboss-deployment-structure.xml files are recognized. Thus, this file should be moved from a JAR to an EAR and tweaked accordingly. This is discussed in detail here: http://stackoverflow.com/questions/26859092/jboss-deployment-struct ure- xml-does-not-loads-the-dependencies-in-my-ear-project Again, it would be nice to have a complete working example to demonstrate this approach. I think I could (one day) update my BeerCloak example to cover this new deployment technique. Dmitry ? 
On Wed, 18/01/2017 at 23:10 +0300, Dmitry Telegin wrote:
Stian, I've tried to package my provider JAR into an EAR, but that didn't work :( The layout is the following:

foo-0.1-SNAPSHOT.ear
+- foo-provider-0.1-SNAPSHOT.jar
+- META-INF
   +- application.xml

When I put the JAR into the deployments subdir, it is deployed successfully, initialization code is called etc. But when I drop the EAR in the same subdir, there is only a successful deployment message. The provider doesn't get initialized; it seems the deployer doesn't recurse into EAR contents. I tried to place the JAR into the "lib" subdir inside the EAR; this didn't work either. The EAR is generated by maven-ear-plugin with the standard settings. Am I missing something? Sorry for bugging you, but unfortunately there is not much said in the docs about deploying providers from inside EARs. A working example would be helpful as well. Dmitry

On Wed, 18/01/2017 at 12:34 +0100, Stian Thorgersen wrote:
You have two options: deploy as a module, which requires adding modules for all dependencies, or use the new deploy-as-a-JEE-archive approach, which also supports hot deployment. Check out the server developer guide for more details.

On 18 January 2017 at 11:54, Dmitry Telegin wrote:
Hi, It's easy to imagine a provider that would integrate a third-party library which, together with transitive dependencies, might result in dozens of JARs. A real-world example: an OpenID 2.0 login protocol implementation using openid4java, which in its turn pulls in another 10 JARs. What are the deployment options for configurations like that? Is it really necessary to install each and every dependency as a WildFly module? This could become a PITA if there are a lot of deps. Could it be a single, self-sufficient artifact just to be put into the deployments subdir? If yes, what type of artifact should it be (EAR maybe)?
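For reference, the descriptor Dmitry describes earlier in the thread — an application.xml that explicitly declares the provider JAR as a module so the deployer recurses into the EAR — might look like the sketch below. The exact module elements used in the thread were lost from the archive, so the <java> module type here is an assumption:

```xml
<!-- Hypothetical application.xml for the EAR layout above.
     The <java> module element is an assumption; the thread notes that
     two module types worked, but the tags were eaten by the archiver. -->
<application xmlns="http://java.sun.com/xml/ns/javaee" version="6">
  <module>
    <java>foo-provider-0.1-SNAPSHOT.jar</java>
  </module>
</application>
```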
Thx, Dmitry
_______________________________________________
keycloak-dev mailing list
keycloak-dev at lists.jboss.org
https://lists.jboss.org/mailman/listinfo/keycloak-dev

From psilva at redhat.com Wed Mar 15 12:40:35 2017
From: psilva at redhat.com (Pedro Igor Silva)
Date: Wed, 15 Mar 2017 13:40:35 -0300
Subject: [keycloak-dev] token service
In-Reply-To: <04daf66c-986f-08d9-6299-500cb60997fb@redhat.com>
References: <04daf66c-986f-08d9-6299-500cb60997fb@redhat.com>
Message-ID:

On Wed, Mar 15, 2017 at 12:01 PM, Bill Burke wrote:
> On 3/15/17 7:24 AM, Pedro Igor Silva wrote:
>> From external to Keycloak:
>> This would be a form POST to /token/convert-from with these additional form parameters
>>
>> "token" - REQUIRED. string rep of the token
>> "provider" - REQUIRED. id of transformer registered in the realm for the token type
>
> Why do you need to send "provider" ? If you are already sending "requested_token_type", the provider is implicit, right ?
>
> Or if you don't send "requested_token_type" you can allow users to specify the default token type to be issued.
> provider matches up with a Keycloak ComponentModel which matches up to a Keycloak ProviderFactory which knows how to do the translation. Need some way of finding the service that can handle the translation of the external token.
>
> The requested_token_type in my proposal is the category of token you want, not its actual type.

I understand that. Being the token I want, I can still provide ways to configure their corresponding providers or a default (in case requested_token_type is not provided) in the admin console. It seems to be just a 1:1 mapping between a token type and the corresponding provider. I think clients of the token exchange API should not be aware of providers in KC, but about the token types they use/need.

>> "requested-token-type" - OPTIONAL. "id", "access", "offline", or
>
> I think we have a great opportunity here to also support use cases where you need to exchange a SAML assertion with an OAuth2/OIDC token. There is a specific standard for that, but I think an STS would be perfect for this.
>
> I remember some people asking for this and I think it will be a great win for enterprise use cases and legacy systems.
>
> I still think we should have a separation between external and internal exchanges.

That should be fine. But are you planning to support the exchange of SAML assertions for OIDC/OAuth2 tokens ?

>> "refresh". Default is "access".
>> "scope" - OPTIONAL. Same as oauth scope.
>>
>> This operation is analogous to the code-to-token flow. Here we are creating a token tailored to the authenticated client. So all scope configurations and mappers that the client has are applied. This means that the client must be registered as an OIDC client.
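The request proposed above (a form POST to /token/convert-from with "token", "provider" and "requested-token-type" parameters) is just a standard URL-encoded form. A small sketch of building such a request body in plain JDK Java — the parameter values are illustrative, not real identifiers:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class TokenConvertRequest {
    // Encode form parameters as application/x-www-form-urlencoded,
    // preserving insertion order.
    static String formBody(Map<String, String> params) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (sb.length() > 0) sb.append('&');
            sb.append(URLEncoder.encode(e.getKey(), StandardCharsets.UTF_8))
              .append('=')
              .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("token", "eyJhbGciOi...");          // REQUIRED: string rep of the external token
        params.put("provider", "my-transformer");      // REQUIRED: transformer id (illustrative value)
        params.put("requested-token-type", "access");  // OPTIONAL: defaults to "access" per the proposal
        System.out.println(formBody(params));
    }
}
```

This body would then be POSTed to the proposed /token/convert-from endpoint by an authenticated OIDC client.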
>> The SPI would look something like this:
>>
>> interface TokenExchangeFromProvider extends Provider {
>>     Transformer parse(ClientModel client, Map formParameters);
>>
>>     interface Transformer {
>>         UserModel getUser();
>>         IDToken convert(IDToken idToken);
>>         AccessToken convert(AccessToken accessToken);
>>     }
>> }
>>
>> The getUser() method returns a user that was authenticated from the external token. The convert() methods just give the provider the flexibility to do further transformations on the returned token.
>>
>> The runtime would do something like this:
>>
>> ClientModel authenticatedClient = ...;
>> ComponentModel model = realm.getComponent(formParams.get("provider"));
>> TokenExchangeFromProvider provider = session.getProvider(TokenExchangeFromProvider.class, model);
>> Transformer transformer = provider.parse(formParams);
>> UserModel user = transformer.getUser();
>> if (formParam.get("requested-token-type").equals("access")) {
>>     AccessToken accessToken = generateAccessToken(authenticatedClient, user, ...);
>>     accessToken = transformer.convert(accessToken);
>> }
>>
>> Something similar would be done for converting a Keycloak token to an external token:
>>
>> This would be a form POST to /token/convert-to with these additional form parameters
>>
>> "token" - REQUIRED. string rep of the token
>> "provider" - REQUIRED. id of transformer registered in the realm for the token type
>
> Same comments I made about the "external to Keycloak" exchange. In this particular case, I think we can also provide some integration with identity brokering functionality.
>
> For instance, suppose I have a KC token and I want to obtain a Facebook AT. I know we have ways to do that today, but I think that using an STS is much neater. In this case, we also don't need to send the provider, but we can make this completely configurable from the admin console. E.g.: associate token types with OOTB/custom identity providers.
Maybe we can > even define a Token Exchange Identity Provider, which can be configured to > integrate with an external STS. > > An STS could be used to convert a Facebook AT into a Keycloak one, but not > vice versa. For Facebook, Google etc. a browser protocol is required in > many cases to obtain the external token. With Identity Brokering you are > delegating authentication to another IDP. Keycloak doesn't know how the > user will be authenticated. For instance, with Google a user may require > authentication via SMS. FYI, this is why the "Client Initiated Account > Linking" protocol was just implemented. There are also a lot of brokers that > can only do logout via a browser protocol. > > I see what you mean though. We already have the beginnings of an STS that > is spread out in different classes and services. > Yeah. I understand Facebook or Google don't allow token exchanges. But we do provide an endpoint to obtain the token previously stored for an identity provider. As you noticed, my point is that we could use the token exchange for that and centralize such services in a single place. And nothing stops us from supporting integration with external STSs by just setting things up in the admin console, just like we do for brokering. For instance, suppose I have a business partner that supports OAuth2 Token Exchanges and I need to access services protected by this partner's domain.
One of the things that I liked most in PL is that the > STS is the backbone for the IdP. What I mean is, the IdP doesn't care about > how a token is issued/revoked/validated/renewed, but delegates this to the > STS, which is responsible for basically implementing the communication/protocol > between the involved parties. > > > We already have this separation. Authentication is independent of > protocol and we have a common data model that isn't protocol specific. We > also do not map from a brokered SAML assertion to a client's OIDC access > token. The brokered SAML assertion is mapped into a common data model > which is then mapped to the OIDC access token. > > I see what you are saying though. Token Exchange should be done in > conjunction with refactoring and rewriting the Identity Brokering SPI as > the two are related. This also probably has an effect on Client > configuration as I could see an STS-only based client that is just > interacting with the STS. > > > Bill

From sthorger at redhat.com Thu Mar 16 06:19:46 2017
From: sthorger at redhat.com (Stian Thorgersen)
Date: Thu, 16 Mar 2017 11:19:46 +0100
Subject: [keycloak-dev] next-gen Keycloak proxy
In-Reply-To: References: <4be4c218-27f7-9946-4c19-c2082c5b780a@redhat.com>
Message-ID:

The Keycloak proxy shouldn't be tied directly to the database or caches. It should ideally be stateless and ideally there's no need for sticky sessions. It should be capable of running collocated with the Keycloak Server for simplicity, but it should also be possible to run it in a separate process. If it's done as an additional subsystem, that allows easily configuring a Keycloak server to be IdP, IdP+Proxy or just Proxy. Further, it should leverage OpenID Connect rather than us coming up with a new separate protocol. My reasoning behind this is simple:
* Please let's not invent another security protocol! That's a lot of work and a whole new vulnerability vector to deal with.
* There will be tons more requests to a proxy than there are to the server. Latency overhead will also be much more important.

On 14 March 2017 at 14:30, Bill Burke wrote:
> On 3/14/17 9:20 AM, Marek Posolda wrote:
> > On 13/03/17 22:07, Bill Burke wrote:
> >> Keycloak Proxy was written a few years ago to secure apps that can't use an adapter provided by us. While Keycloak Proxy works ("mostly"), we've been pushing people to Apache + mod-auth-mellon or mod-auth-openidc for non-Java apps. I predict that relying on Apache to proxy and secure apps that can't use our adapters is going to quickly become an issue for us. We already have a need to write extensions to mod-auth-*, specifically to support Pedro's Authz work (which is really nice BTW!). We could also do tighter integration to make the configuration experience more user-friendly. The problem is we have zero expertise in this area and none of us are C/C++ developers (I haven't coded in C/C++ since 1999 when I was at Iona).
> >>
> >> This brings me to what would be the next generation of the Keycloak Proxy. The first thing I'd like to improve is that configuration would happen within the admin console. This configuration could be made much simpler as whatever protocol configuration would be needed could be hard-coded and pre-configured. Mappers would focus on mapping values to HTTP headers.
> >>
> >> Beyond configuration, things become more interesting and complex and there are multiple factors in deciding the authentication protocol, proxy design, and provisioning:
> >>
> >> * Can/Should one Keycloak Proxy virtual-host and proxy multiple apps in the same instance? One thing stopping this is SSL. If Keycloak Proxy is handling SSL, then there is no possibility of virtual hosting. If the load balancer is handling SSL, then this is a possibility.
> >>
> >> * Keycloak Proxy currently needs an HttpSession as it stores authentication information (JWS access token and Refresh Token) there so it can forward it to the application. We'd have to either shrink the needed information so it could be stored in a cookie, or replicate sessions. The latter would have the same issues with cross-DC.
> >>
> >> * Should we collocate Keycloak Proxy with the Keycloak runtime? That is, should Keycloak Proxy have direct access to UserSession, ClientSession, and other model interfaces? The benefits of this are that you could have a really optimized auth protocol; you'd still have to bounce the browser to set up cookies directly, but everything else could be handled through the ClientSession object and there would be no need to generate or store tokens.
> > +1
> >
> > I personally never tried Keycloak Proxy, but it's intended for the applications which don't understand OIDC or SAML, right? So we don't need another layer of separate KeycloakProxy server, which needs to communicate through OIDC with the Keycloak auth-server itself, but we can maybe just have another login protocol implementation like "Keycloak protocol" or something? Once the user is successfully authenticated, Keycloak will just programmatically create a token, add some headers (KEYCLOAK_IDENTITY etc) and forward the request to the application.
> >
> > Another advantage is that we won't need anything special for replication and cross-dc. As long as all the state is cached in the userSession, Keycloak can just read the cached token from it and forward to the application. We will need the solution for cross-dc of userSessions anyway, but this will be able to just leverage it.
>
> The downside is that you potentially have a lot more nodes joining the cluster and also the memory footprint and size of the proxy.
> The Keycloak server is a 125M+ distro and currently takes up minimally 300M+ of actual RAM. I'm not sure if that's something we should take into account or not. I also don't know if users want one proxy that virtual-hosts a bunch of apps or not. I'm not keen on having multiple options for deploying the proxy. More work for us and more work for our users to figure out what to do. I would rather have one way that we can push people towards as the recommended and preferred way.
>
> Bill
> _______________________________________________
> keycloak-dev mailing list
> keycloak-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/keycloak-dev

From sthorger at redhat.com Thu Mar 16 06:34:40 2017
From: sthorger at redhat.com (Stian Thorgersen)
Date: Thu, 16 Mar 2017 11:34:40 +0100
Subject: [keycloak-dev] Profile SPI
In-Reply-To: <9bb32606-543d-29c7-8799-596ab7d1806b@redhat.com>
References: <9bb32606-543d-29c7-8799-596ab7d1806b@redhat.com>
Message-ID:

On 14 March 2017 at 12:21, Marek Posolda wrote:
> Few things:
> - It will be good to have some OOTB support for multivalued attributes. You will be able to define if an attribute is multivalued, and then in registration/account pages users will see something like we have in the admin console for "redirect uris" or "web origins" in the client detail page.

Any multi-valued attributes would have to be done through a custom extension to the account management console and login pages. The built-in properties we'll provide just don't need multi values (first name, dob, etc..)

> - Besides validation, it may be useful to add some "actions" when an attribute is changed? For example if a user changes email, there will be an optional action which will switch "emailVerified" to false and put the "VerifyEmail" required action on him. When he changes his mobile number, it will send him an SMS and he will need to confirm it somehow (perhaps again through a required action), etc.

Yes, not quite sure how to do that though.
There are a few built-in providers we'd like (make email not verified if changed, etc.), but users should also be able to add their own.

> - It will probably be useful to allow the admin to skip validation (and actions) for certain attributes. Maybe validators could have an option like "Skip admin" or something like that? Or should we always skip the validations for the admin?

Dunno - why should an admin be allowed to bypass validation? Validation is there to make sure the details in the DB are accurate.

>
> Marek
>
> On 14/03/17 10:13, Stian Thorgersen wrote:
>> At the moment there is no single point to define validation for a user. Even worse, for the account management console and admin console it's not even possible to define validation for custom attributes.
>>
>> Also, as there is no defined list of attributes for a user, the mapping of user attributes is error-prone.
>>
>> I'd like to introduce a Profile SPI to help with this. It would have methods to:
>>
>> * Validate users during creation and updates
>> * List defined attributes on a user
>>
>> There would be a built-in provider that would delegate to a ProfileAttribute SPI. The ProfileAttribute SPI would allow defining configurable providers for single user attributes. I'm also considering adding a separate Validation SPI, so a ProfileAttribute provider could delegate validation to a separate validator.
>>
>> Users could also implement their own Profile provider to do whatever they want. I'd like to aim to make the SPI a supported SPI.
>>
>> First pass would focus purely on validation. Second pass would focus on using the attribute metadata to do things like:
>>
>> * Have dropdown boxes in mappers to select user attribute instead of copy/pasting the name
>> * Have additional built-in attributes on registration form, update profile form and account management console that can be enabled/disabled by defining the Profile.
I'm not suggesting a huge amount here and it will be >> limited to a few sensible attributes. Defining more complex things like >> address would still be done through extending the forms. >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> > > > From sthorger at redhat.com Thu Mar 16 06:35:10 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Thu, 16 Mar 2017 11:35:10 +0100 Subject: [keycloak-dev] Profile SPI In-Reply-To: References: Message-ID: Actually it doesn't even come into play in that situation. Validation would only be invoked if user or admin changes the profile directly through Keycloak. On 14 March 2017 at 14:52, Bill Burke wrote: > This would have to be a completely optional SPI. There are a lot of > keycloak users that are just mapping data from LDAP into our tokens and > don't care about validation. > > > > On 3/14/17 5:13 AM, Stian Thorgersen wrote: > > At the moment there is no single point to define validation for a user. > > Even worse for the account management console and admin console it's not > > even possible to define validation for custom attributes. > > > > Also, as there is no defined list of attributes for a user there the > > mapping of user attributes is error prone. > > > > I'd like to introduce a Profile SPI to help with this. It would have > > methods to: > > > > * Validate users during creation and updates > > * List defined attributes on a user > > > > There would be a built-in provider that would delegate to > ProfileAttribute > > SPI. ProfileAttribute SPI would allow defining configurable providers for > > single user attributes. I'm also considering adding a separate Validation > > SPI, so a ProfileAttribute provider could delegate validation to a > separate > > validator. > > > > Users could also implement their own Profile provider to do whatever they > > want. 
I'd like to aim to make the SPI a supported SPI. > > > > First pass would focus purely on validation. Second pass would focus on > > using the attribute metadata to do things like: > > > > * Have dropdown boxes in mappers to select user attribute instead of > > copy/pasting the name > > * Have additional built-in attributes on registration form, update > profile > > form and account management console that can be enabled/disabled by > > defining the Profile. I'm not suggesting a huge amount here and it will > be > > limited to a few sensible attributes. Defining more complex things like > > address would still be done through extending the forms. > > _______________________________________________ > > keycloak-dev mailing list > > keycloak-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From sthorger at redhat.com Thu Mar 16 06:36:28 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Thu, 16 Mar 2017 11:36:28 +0100 Subject: [keycloak-dev] Profile SPI In-Reply-To: References: Message-ID: For a new realm the default should be: * Allow built-in attributes (similar to what we have now), but don't allow any additional attributes For migrated realms the default should be: * Verify built-in attributes as we do now and allow any custom attributes On 16 March 2017 at 11:35, Stian Thorgersen wrote: > Actually it doesn't even come into play in that situation. Validation > would only be invoked if user or admin changes the profile directly through > Keycloak. > > On 14 March 2017 at 14:52, Bill Burke wrote: > >> This would have to be a completely optional SPI. There are a lot of >> keycloak users that are just mapping data from LDAP into our tokens and >> don't care about validation. 
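The Profile SPI sketched in this thread (validate users on create/update, list the attributes a profile defines) could look roughly like the following. All names here are illustrative assumptions for discussion, not an actual Keycloak API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

// Hypothetical shape of the proposed Profile SPI -- names are illustrative.
interface ProfileProvider {
    // Validate a user's attributes on create/update; returns error messages.
    List<String> validate(Map<String, String> attributes);

    // The attributes this profile defines.
    List<String> definedAttributes();
}

// A trivial built-in-style provider that checks one attribute.
class SimpleProfileProvider implements ProfileProvider {
    public List<String> validate(Map<String, String> attributes) {
        List<String> errors = new ArrayList<>();
        String email = attributes.get("email");
        if (email == null || !email.contains("@")) {
            errors.add("invalid email");
        }
        return errors;
    }

    public List<String> definedAttributes() {
        return Arrays.asList("firstName", "lastName", "email");
    }
}

public class ProfileSpiSketch {
    public static void main(String[] args) {
        ProfileProvider p = new SimpleProfileProvider();
        System.out.println(p.validate(Map.of("email", "no-at-sign"))); // [invalid email]
        System.out.println(p.validate(Map.of("email", "a@b.com")));    // []
    }
}
```

Per the thread, validation would only be invoked when the user or admin changes the profile directly through Keycloak, so a provider like this would not interfere with LDAP-mapped attributes.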
>> >> >> >> On 3/14/17 5:13 AM, Stian Thorgersen wrote: >> > At the moment there is no single point to define validation for a user. >> > Even worse for the account management console and admin console it's not >> > even possible to define validation for custom attributes. >> > >> > Also, as there is no defined list of attributes for a user there the >> > mapping of user attributes is error prone. >> > >> > I'd like to introduce a Profile SPI to help with this. It would have >> > methods to: >> > >> > * Validate users during creation and updates >> > * List defined attributes on a user >> > >> > There would be a built-in provider that would delegate to >> ProfileAttribute >> > SPI. ProfileAttribute SPI would allow defining configurable providers >> for >> > single user attributes. I'm also considering adding a separate >> Validation >> > SPI, so a ProfileAttribute provider could delegate validation to a >> separate >> > validator. >> > >> > Users could also implement their own Profile provider to do whatever >> they >> > want. I'd like to aim to make the SPI a supported SPI. >> > >> > First pass would focus purely on validation. Second pass would focus on >> > using the attribute metadata to do things like: >> > >> > * Have dropdown boxes in mappers to select user attribute instead of >> > copy/pasting the name >> > * Have additional built-in attributes on registration form, update >> profile >> > form and account management console that can be enabled/disabled by >> > defining the Profile. I'm not suggesting a huge amount here and it will >> be >> > limited to a few sensible attributes. Defining more complex things like >> > address would still be done through extending the forms. 
>> > _______________________________________________ >> > keycloak-dev mailing list >> > keycloak-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> > > From sthorger at redhat.com Thu Mar 16 06:44:08 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Thu, 16 Mar 2017 11:44:08 +0100 Subject: [keycloak-dev] Improve back-button and refreshes in authenticators? In-Reply-To: <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> Message-ID: I like option #3, but what about adding a hidden field on the form that contains the step in the flow. That way we can easily find out if the form is a post for the current step or not. If it's not then we simply ignore the post and return the current step again? That would work for back/forward and refresh. On 14 March 2017 at 23:47, Bill Burke wrote: > Ya, similar to #3, my thought is if you combine a cookie with > code-in-url, you have a solution for backbutton and refresh and there's > no special headers you have to specify. We used to do #2, but lot of > people, specifically jboss.org guys, complained about it. > > > On 3/14/17 4:49 PM, Marek Posolda wrote: > > Thanks, that looks similar to my (3) though. > > > > Besides that I wonder if we should save just the ID of loginSession in > > the cookie and the "current-code" keep inside the loginSession > > (infinispan) similarly like it is now? > > > > I am thinking about the case when potential attacker tricks Keycloak > > by manually sending the request, which will just use same code in the > > cookie and in the URL. 
Keycloak will then always treat this request as > > valid due the code in the URL and in cookie will always match. > > Couldn't that be an issue? > > > > Marek > > > > On 14/03/17 13:50, Bill Burke wrote: > >> I've got an idea. What about > >> > >> * keep the code in the URL > >> > >> * Additionally add a "current-code" cookie > >> > >> If code in the URL doesn't match the cookie, then redirect to the URL of > >> the current-code. > >> > >> > >> On 3/14/17 6:53 AM, Marek Posolda wrote: > >>> When working on login sessions, I wonder if we want to improve browser > >>> back-button and browser refreshes. > >>> > >>> In shortcut, I can see 3 basic options: > >>> > >>> 1) Keep same like now and rely on header "Cache-Control: no-store, > >>> must-revalidate, max-age=0" . This works fine and users never saw > >>> outdated form and never submit outdated form 2 times. However the > >>> usability sucks a bit IMO. When you press back-button after POST > >>> request, you can see the ugly browser page "Web page has expired" . And > >>> if you press F5 on this, you will see the unfriendly Keycloak error > >>> page > >>> "Error was occured. Please login again through your application" > >>> because > >>> of invalid code. > >>> > >>> 2) Use the pattern with POST followed by the redirect to GET. Since we > >>> will have loginSession with the ID in the cookie, the GET request > >>> can be > >>> sent to the URL without any special query parameter. Something like > >>> "http://localhost:8180/auth/realms/master/login-actions/authenticate" > . > >>> This will allow us that in every stage of authentication, user can > >>> press > >>> back-button and will be always redirected to the first step of the > >>> flow. > >>> When he refreshes the page, it will re-send just the GET request and > >>> always brings him to the current execution. > >>> > >>> This looks most user-friendly. But there is the issue with performance > >>> though. 
As we will need to followup every POST request with one > >>> additional GET request. > >>> > >>> 3) Don't do anything special regarding back-button or refresh. But in > >>> case that page is refreshed AND the post with invalid (already used) > >>> code will be re-submitted, we won't display the ugly page "Error was > >>> occured.", but we will just redirect to current step of the flow. > >>> > >>> Example: > >>> a) User was redirected from the application to OIDC > >>> AuthorizationEndpoint request. Login page is shown > >>> b) User confirmed invalid username and password with POST request. > >>> Login > >>> form with error page "Invalid password" is shown > >>> c) User confirmed valid username and password with POST request. TOTP > >>> page is shown. > >>> d) User press back-button. Now he will see again the page with > >>> username/password form. > >>> e) User press F5. The POST request will be re-sent, but it will use > >>> previous "code", which is outdated now. So in this case, we will > >>> redirect to the current execution and TOTP form will be shown. No > >>> re-submission of username/password form will happen. > >>> > >>> In case 3, the username/password form will be shown again, but user > >>> won't be able to resubmit it. > >>> > >>> In shortcut: With 2 and 3, users will never see the browser page "Web > >>> page is expired" or Keycloak "Error occured. Go back to the > >>> application". With 2, there is additional GET request needed. With 3, > >>> the back-button may show the authentication forms, which user already > >>> successfully confirmed, but he won't be able to re-submit them. Is it > >>> bad regarding usability? To me, it looks better than showing "Web page > >>> is expired". > >>> > >>> So my preference is 3,2,1. WDYT? Any other options? 
> >>> > >>> Marek > >>> > >>> _______________________________________________ > >>> keycloak-dev mailing list > >>> keycloak-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev > >> _______________________________________________ > >> keycloak-dev mailing list > >> keycloak-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From mposolda at redhat.com Thu Mar 16 07:56:19 2017 From: mposolda at redhat.com (Marek Posolda) Date: Thu, 16 Mar 2017 12:56:19 +0100 Subject: [keycloak-dev] Improve back-button and refreshes in authenticators? In-Reply-To: References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> Message-ID: <81b90531-ecd5-4a15-9c3d-2ccc9c9317b5@redhat.com> We should be able to detect that with the code in the URL too. So the question is, if hidden field has any advantage over keeping the code in URL? Thing is, that browsers always treat every POST request as unique even if it goes to the same URL. So for example, even if code is not in the URL and then the user submits username/password form with incorrect password 5 times, then he needs to press browser back-button 5 times to return back to initial AuthorizationEndpoint request. Even if every POST request had same URL without code like "http://host/auth/realms/master/authenticate" . Only real advantage I see is, that code in hidden field is maybe a bit safer. Not sure if worth the effort, I will try to investigate how much effort it is to put the code to the hidden field instead of URL. Marek On 16/03/17 11:44, Stian Thorgersen wrote: > I like option #3, but what about adding a hidden field on the form > that contains the step in the flow. 
That way we can easily find out if > the form is a post for the current step or not. If it's not then we > simply ignore the post and return the current step again? That would > work for back/forward and refresh. > > On 14 March 2017 at 23:47, Bill Burke > wrote: > > Ya, similar to #3, my thought is if you combine a cookie with > code-in-url, you have a solution for backbutton and refresh and > there's > no special headers you have to specify. We used to do #2, but lot of > people, specifically jboss.org guys, complained > about it. > > > On 3/14/17 4:49 PM, Marek Posolda wrote: > > Thanks, that looks similar to my (3) though. > > > > Besides that I wonder if we should save just the ID of > loginSession in > > the cookie and the "current-code" keep inside the loginSession > > (infinispan) similarly like it is now? > > > > I am thinking about the case when potential attacker tricks Keycloak > > by manually sending the request, which will just use same code > in the > > cookie and in the URL. Keycloak will then always treat this > request as > > valid due the code in the URL and in cookie will always match. > > Couldn't that be an issue? > > > > Marek > > > > On 14/03/17 13:50, Bill Burke wrote: > >> I've got an idea. What about > >> > >> * keep the code in the URL > >> > >> * Additionally add a "current-code" cookie > >> > >> If code in the URL doesn't match the cookie, then redirect to > the URL of > >> the current-code. > >> > >> > >> On 3/14/17 6:53 AM, Marek Posolda wrote: > >>> When working on login sessions, I wonder if we want to improve > browser > >>> back-button and browser refreshes. > >>> > >>> In shortcut, I can see 3 basic options: > >>> > >>> 1) Keep same like now and rely on header "Cache-Control: no-store, > >>> must-revalidate, max-age=0" . This works fine and users never saw > >>> outdated form and never submit outdated form 2 times. However the > >>> usability sucks a bit IMO. 
When you press back-button after POST > >>> request, you can see the ugly browser page "Web page has > expired" . And > >>> if you press F5 on this, you will see the unfriendly Keycloak > error > >>> page > >>> "Error was occured. Please login again through your application" > >>> because > >>> of invalid code. > >>> > >>> 2) Use the pattern with POST followed by the redirect to GET. > Since we > >>> will have loginSession with the ID in the cookie, the GET request > >>> can be > >>> sent to the URL without any special query parameter. Something > like > >>> > "http://localhost:8180/auth/realms/master/login-actions/authenticate > " > . > >>> This will allow us that in every stage of authentication, user can > >>> press > >>> back-button and will be always redirected to the first step of the > >>> flow. > >>> When he refreshes the page, it will re-send just the GET > request and > >>> always brings him to the current execution. > >>> > >>> This looks most user-friendly. But there is the issue with > performance > >>> though. As we will need to followup every POST request with one > >>> additional GET request. > >>> > >>> 3) Don't do anything special regarding back-button or refresh. > But in > >>> case that page is refreshed AND the post with invalid (already > used) > >>> code will be re-submitted, we won't display the ugly page > "Error was > >>> occured.", but we will just redirect to current step of the flow. > >>> > >>> Example: > >>> a) User was redirected from the application to OIDC > >>> AuthorizationEndpoint request. Login page is shown > >>> b) User confirmed invalid username and password with POST request. > >>> Login > >>> form with error page "Invalid password" is shown > >>> c) User confirmed valid username and password with POST > request. TOTP > >>> page is shown. > >>> d) User press back-button. Now he will see again the page with > >>> username/password form. > >>> e) User press F5. 
The POST request will be re-sent, but it > will use > >>> previous "code", which is outdated now. So in this case, we will > >>> redirect to the current execution and TOTP form will be shown. No > >>> re-submission of username/password form will happen. > >>> > >>> In case 3, the username/password form will be shown again, but > user > >>> won't be able to resubmit it. > >>> > >>> In shortcut: With 2 and 3, users will never see the browser > page "Web > >>> page is expired" or Keycloak "Error occured. Go back to the > >>> application". With 2, there is additional GET request needed. > With 3, > >>> the back-button may show the authentication forms, which user > already > >>> successfully confirmed, but he won't be able to re-submit > them. Is it > >>> bad regarding usability? To me, it looks better than showing > "Web page > >>> is expired". > >>> > >>> So my preference is 3,2,1. WDYT? Any other options? > >>> > >>> Marek > >>> > >>> _______________________________________________ > >>> keycloak-dev mailing list > >>> keycloak-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev > > >> _______________________________________________ > >> keycloak-dev mailing list > >> keycloak-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > > > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > From mposolda at redhat.com Thu Mar 16 08:08:44 2017 From: mposolda at redhat.com (Marek Posolda) Date: Thu, 16 Mar 2017 13:08:44 +0100 Subject: [keycloak-dev] Profile SPI In-Reply-To: References: <9bb32606-543d-29c7-8799-596ab7d1806b@redhat.com> Message-ID: <12aba796-2550-6d31-0d26-3b70223ca7c2@redhat.com> On 16/03/17 11:34, Stian Thorgersen wrote: > > > On 14 March 2017 at 12:21, Marek Posolda > wrote: > > Few things: > - It will be good to have some OOTB support for multivalued > attributes. 
You will be able to define if attribute is multivalued > and then in registration/account pages, users will see something > like we have in admin console for "redirect uris" or "web origins" > in client detail page. > > > Any multi valued attributes would have to be done through custom > extension to the account management console and login pages. The > built-in properties we'll provide just don't need multi values (first > name, dob, etc..) I guess it is not big priority, but IMO it will be useful to have some built-in support for multivalued attributes. I know some people from community asked for it. You can have more mobile phones, you can have more children etc :) We can require them to always override theme if they need multivalued support in UI, but if we can have something OOTB it would be nice IMO. > > > - Besides validation, it may be useful to add some "actions" when > attribute is changed? For example if user changes email, there > will be the optional action, which will switch "emailVerified" to > false and put the "VerifyEmail" required action on him. When he > changes mobile number, it will send him SMS and he will need to > confirm it somehow (perhaps again through required action), etc. > > > Yes, not quite sure how to do that though. There's a few built-in > providers we'd like (make email not verified if changed, etc.), but > users should also be able add their own. > > > - It will be probably useful to allow admin to skip validation > (and actions) for certain attributes. Maybe validators could have > an option like "Skip admin" or something like that? Or should we > always skip the validations for admin? > > > Dunno - why should an admin be allowed to bypass validation? > Validation is there to make sure the details in the DB is accurate. Not sure. Currently the fields like firstName/lastName/email are mandatory for users, but not for admins. I can think of some corner cases. 
For example: User was registered through Twitter identityProvider and hence he doesn't have email filled. Now admin won't be able to edit the user unless he fills some fake email. Marek > > > > Marek > > > > On 14/03/17 10:13, Stian Thorgersen wrote: > > At the moment there is no single point to define validation > for a user. > Even worse for the account management console and admin > console it's not > even possible to define validation for custom attributes. > > Also, as there is no defined list of attributes for a user > there the > mapping of user attributes is error prone. > > I'd like to introduce a Profile SPI to help with this. It > would have > methods to: > > * Validate users during creation and updates > * List defined attributes on a user > > There would be a built-in provider that would delegate to > ProfileAttribute > SPI. ProfileAttribute SPI would allow defining configurable > providers for > single user attributes. I'm also considering adding a separate > Validation > SPI, so a ProfileAttribute provider could delegate validation > to a separate > validator. > > Users could also implement their own Profile provider to do > whatever they > want. I'd like to aim to make the SPI a supported SPI. > > First pass would focus purely on validation. Second pass would > focus on > using the attribute metadata to do things like: > > * Have dropdown boxes in mappers to select user attribute > instead of > copy/pasting the name > * Have additional built-in attributes on registration form, > update profile > form and account management console that can be > enabled/disabled by > defining the Profile. I'm not suggesting a huge amount here > and it will be > limited to a few sensible attributes. Defining more complex > things like > address would still be done through extending the forms. 
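The Profile SPI proposal above might translate into an SPI shape along these lines. This is purely illustrative under the assumptions of the proposal (validate on create/update, list defined attributes); the interface and class names are hypothetical, not anything that exists in Keycloak:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative sketch of what a Profile SPI could look like.
// These interfaces and names are hypothetical, not the actual proposal's code.
interface ProfileProvider {
    /** Validate a user's attributes on creation/update; return error messages. */
    List<String> validate(Map<String, String> attributes);

    /** List the attribute names this profile defines for a user. */
    List<String> definedAttributes();
}

// A hypothetical built-in provider with one simple rule, standing in for
// the "built-in provider that delegates to ProfileAttribute SPI" idea.
class SimpleProfileProvider implements ProfileProvider {
    @Override
    public List<String> validate(Map<String, String> attributes) {
        List<String> errors = new ArrayList<>();
        String email = attributes.get("email");
        // Example rule: email is required and must contain '@'
        if (email == null || !email.contains("@")) {
            errors.add("invalid email");
        }
        return errors;
    }

    @Override
    public List<String> definedAttributes() {
        return List.of("firstName", "lastName", "email");
    }
}
```

A custom provider could replace SimpleProfileProvider entirely, which matches the "users could implement their own Profile provider" point; the admin-bypass question above would then reduce to whether the caller invokes validate() at all.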
> _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > > > From sthorger at redhat.com Thu Mar 16 09:35:10 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Thu, 16 Mar 2017 14:35:10 +0100 Subject: [keycloak-dev] Profile SPI In-Reply-To: <12aba796-2550-6d31-0d26-3b70223ca7c2@redhat.com> References: <9bb32606-543d-29c7-8799-596ab7d1806b@redhat.com> <12aba796-2550-6d31-0d26-3b70223ca7c2@redhat.com> Message-ID: On 16 March 2017 at 13:08, Marek Posolda wrote: > On 16/03/17 11:34, Stian Thorgersen wrote: > > > > On 14 March 2017 at 12:21, Marek Posolda wrote: > >> Few things: >> - It will be good to have some OOTB support for multivalued attributes. >> You will be able to define if attribute is multivalued and then in >> registration/account pages, users will see something like we have in admin >> console for "redirect uris" or "web origins" in client detail page. >> > > Any multi valued attributes would have to be done through custom extension > to the account management console and login pages. The built-in properties > we'll provide just don't need multi values (first name, dob, etc..) > > I guess it is not big priority, but IMO it will be useful to have some > built-in support for multivalued attributes. I know some people from > community asked for it. You can have more mobile phones, you can have more > children etc :) > Sure we can have built-in support for multiple values - but those would require custom forms IMO. > > We can require them to always override theme if they need multivalued > support in UI, but if we can have something OOTB it would be nice IMO. > > > >> >> - Besides validation, it may be useful to add some "actions" when >> attribute is changed? For example if user changes email, there will be the >> optional action, which will switch "emailVerified" to false and put the >> "VerifyEmail" required action on him. 
When he changes mobile number, it >> will send him SMS and he will need to confirm it somehow (perhaps again >> through required action), etc. >> > > Yes, not quite sure how to do that though. There's a few built-in > providers we'd like (make email not verified if changed, etc.), but users > should also be able add their own. > > >> >> - It will be probably useful to allow admin to skip validation (and >> actions) for certain attributes. Maybe validators could have an option like >> "Skip admin" or something like that? Or should we always skip the >> validations for admin? > > > Dunno - why should an admin be allowed to bypass validation? Validation is > there to make sure the details in the DB is accurate. > > Not sure. Currently the fields like firstName/lastName/email are mandatory > for users, but not for admins. I can think of some corner cases. > True - maybe just have an option on admin console/endpoints to skip validation? > > For example: User was registered through Twitter identityProvider and > hence he doesn't have email filled. Now admin won't be able to edit the > user unless he fills some fake email. > I agree that would be annoying for the admin. In that case it would be nice to detect "missing properties" and popup the update profile form for the user to require the user to fill in missing properties. > > > Marek > > > >> >> >> Marek >> >> >> >> On 14/03/17 10:13, Stian Thorgersen wrote: >> >>> At the moment there is no single point to define validation for a user. >>> Even worse for the account management console and admin console it's not >>> even possible to define validation for custom attributes. >>> >>> Also, as there is no defined list of attributes for a user there the >>> mapping of user attributes is error prone. >>> >>> I'd like to introduce a Profile SPI to help with this. 
It would have >>> methods to: >>> >>> * Validate users during creation and updates >>> * List defined attributes on a user >>> >>> There would be a built-in provider that would delegate to >>> ProfileAttribute >>> SPI. ProfileAttribute SPI would allow defining configurable providers for >>> single user attributes. I'm also considering adding a separate Validation >>> SPI, so a ProfileAttribute provider could delegate validation to a >>> separate >>> validator. >>> >>> Users could also implement their own Profile provider to do whatever they >>> want. I'd like to aim to make the SPI a supported SPI. >>> >>> First pass would focus purely on validation. Second pass would focus on >>> using the attribute metadata to do things like: >>> >>> * Have dropdown boxes in mappers to select user attribute instead of >>> copy/pasting the name >>> * Have additional built-in attributes on registration form, update >>> profile >>> form and account management console that can be enabled/disabled by >>> defining the Profile. I'm not suggesting a huge amount here and it will >>> be >>> limited to a few sensible attributes. Defining more complex things like >>> address would still be done through extending the forms. >>> _______________________________________________ >>> keycloak-dev mailing list >>> keycloak-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> >> >> >> > > From sthorger at redhat.com Thu Mar 16 09:43:44 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Thu, 16 Mar 2017 14:43:44 +0100 Subject: [keycloak-dev] Improve back-button and refreshes in authenticators? In-Reply-To: <81b90531-ecd5-4a15-9c3d-2ccc9c9317b5@redhat.com> References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> <81b90531-ecd5-4a15-9c3d-2ccc9c9317b5@redhat.com> Message-ID: We should get rid of the code for sure. 
Maybe we can add the ID of the current step in the flow instead though. Before we discuss implementation though, let's figure out what the ideal user experience would be then figure out how to implement it. What about: * Refresh just works * Back button will display a nicer page, something like "Page has expired. To restart the login process click here. To continue the login process click here.". Or Back button could just go to the start of the flow always. * Resubmitting forms will just display the page above * No need to do redirects. Redirects is bad for performance, but also has twice the response time which is not good from a usability perspective On 16 March 2017 at 12:56, Marek Posolda wrote: > We should be able to detect that with the code in the URL too. So the > question is, if hidden field has any advantage over keeping the code in URL? > > Thing is, that browsers always treat every POST request as unique even if > it goes to the same URL. So for example, even if code is not in the URL and > then the user submits username/password form with incorrect password 5 > times, then he needs to press browser back-button 5 times to return back to > initial AuthorizationEndpoint request. Even if every POST request had same > URL without code like "http://host/auth/realms/master/authenticate" > . > > Only real advantage I see is, that code in hidden field is maybe a bit > safer. Not sure if worth the effort, I will try to investigate how much > effort it is to put the code to the hidden field instead of URL. > > Marek > > > > On 16/03/17 11:44, Stian Thorgersen wrote: > > I like option #3, but what about adding a hidden field on the form that > contains the step in the flow. That way we can easily find out if the form > is a post for the current step or not. If it's not then we simply ignore > the post and return the current step again? That would work for > back/forward and refresh. 
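The hidden-field idea quoted above — ignore a POST that was rendered for a different step and just return the current step again — could be sketched like this. All names are hypothetical and this is only an illustration of the check, not actual Keycloak form handling:

```java
// Sketch of the hidden-field variant: each login form embeds the id of
// the flow step it was rendered for; a POST carrying any other step id
// is ignored and the current step is simply re-rendered.
// Hypothetical names, not the real Keycloak API.
public class StepFieldCheck {

    /** true if the POSTed hidden "step" field matches the current step. */
    public static boolean isForCurrentStep(String postedStep, String currentStep) {
        return postedStep != null && postedStep.equals(currentStep);
    }

    /** The hidden input a login form template would embed. */
    public static String hiddenField(String currentStep) {
        return "<input type=\"hidden\" name=\"step\" value=\"" + currentStep + "\"/>";
    }
}
```

Bill's objection later in the thread applies to exactly this sketch: with no code in the URL, a GET cannot be distinguished as "render current step" vs. "refresh after POST", so the hidden field alone only covers the POST side.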
> > On 14 March 2017 at 23:47, Bill Burke wrote: > >> Ya, similar to #3, my thought is if you combine a cookie with >> code-in-url, you have a solution for backbutton and refresh and there's >> no special headers you have to specify. We used to do #2, but lot of >> people, specifically jboss.org guys, complained about it. >> >> >> On 3/14/17 4:49 PM, Marek Posolda wrote: >> > Thanks, that looks similar to my (3) though. >> > >> > Besides that I wonder if we should save just the ID of loginSession in >> > the cookie and the "current-code" keep inside the loginSession >> > (infinispan) similarly like it is now? >> > >> > I am thinking about the case when potential attacker tricks Keycloak >> > by manually sending the request, which will just use same code in the >> > cookie and in the URL. Keycloak will then always treat this request as >> > valid due the code in the URL and in cookie will always match. >> > Couldn't that be an issue? >> > >> > Marek >> > >> > On 14/03/17 13:50, Bill Burke wrote: >> >> I've got an idea. What about >> >> >> >> * keep the code in the URL >> >> >> >> * Additionally add a "current-code" cookie >> >> >> >> If code in the URL doesn't match the cookie, then redirect to the URL >> of >> >> the current-code. >> >> >> >> >> >> On 3/14/17 6:53 AM, Marek Posolda wrote: >> >>> When working on login sessions, I wonder if we want to improve browser >> >>> back-button and browser refreshes. >> >>> >> >>> In shortcut, I can see 3 basic options: >> >>> >> >>> 1) Keep same like now and rely on header "Cache-Control: no-store, >> >>> must-revalidate, max-age=0" . This works fine and users never saw >> >>> outdated form and never submit outdated form 2 times. However the >> >>> usability sucks a bit IMO. When you press back-button after POST >> >>> request, you can see the ugly browser page "Web page has expired" . >> And >> >>> if you press F5 on this, you will see the unfriendly Keycloak error >> >>> page >> >>> "Error was occured. 
Please login again through your application" >> >>> because >> >>> of invalid code. >> >>> >> >>> 2) Use the pattern with POST followed by the redirect to GET. Since we >> >>> will have loginSession with the ID in the cookie, the GET request >> >>> can be >> >>> sent to the URL without any special query parameter. Something like >> >>> "http://localhost:8180/auth/realms/master/login-actions/authenticate" >> . >> >>> This will allow us that in every stage of authentication, user can >> >>> press >> >>> back-button and will be always redirected to the first step of the >> >>> flow. >> >>> When he refreshes the page, it will re-send just the GET request and >> >>> always brings him to the current execution. >> >>> >> >>> This looks most user-friendly. But there is the issue with performance >> >>> though. As we will need to followup every POST request with one >> >>> additional GET request. >> >>> >> >>> 3) Don't do anything special regarding back-button or refresh. But in >> >>> case that page is refreshed AND the post with invalid (already used) >> >>> code will be re-submitted, we won't display the ugly page "Error was >> >>> occured.", but we will just redirect to current step of the flow. >> >>> >> >>> Example: >> >>> a) User was redirected from the application to OIDC >> >>> AuthorizationEndpoint request. Login page is shown >> >>> b) User confirmed invalid username and password with POST request. >> >>> Login >> >>> form with error page "Invalid password" is shown >> >>> c) User confirmed valid username and password with POST request. TOTP >> >>> page is shown. >> >>> d) User press back-button. Now he will see again the page with >> >>> username/password form. >> >>> e) User press F5. The POST request will be re-sent, but it will use >> >>> previous "code", which is outdated now. So in this case, we will >> >>> redirect to the current execution and TOTP form will be shown. No >> >>> re-submission of username/password form will happen. 
>> >>> >> >>> In case 3, the username/password form will be shown again, but user >> >>> won't be able to resubmit it. >> >>> >> >>> In shortcut: With 2 and 3, users will never see the browser page "Web >> >>> page is expired" or Keycloak "Error occured. Go back to the >> >>> application". With 2, there is additional GET request needed. With 3, >> >>> the back-button may show the authentication forms, which user already >> >>> successfully confirmed, but he won't be able to re-submit them. Is it >> >>> bad regarding usability? To me, it looks better than showing "Web page >> >>> is expired". >> >>> >> >>> So my preference is 3,2,1. WDYT? Any other options? >> >>> >> >>> Marek >> >>> >> >>> _______________________________________________ >> >>> keycloak-dev mailing list >> >>> keycloak-dev at lists.jboss.org >> >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> _______________________________________________ >> >> keycloak-dev mailing list >> >> keycloak-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> > >> > >> >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> > > > From bburke at redhat.com Thu Mar 16 10:27:34 2017 From: bburke at redhat.com (Bill Burke) Date: Thu, 16 Mar 2017 10:27:34 -0400 Subject: [keycloak-dev] Improve back-button and refreshes in authenticators? In-Reply-To: References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> <81b90531-ecd5-4a15-9c3d-2ccc9c9317b5@redhat.com> Message-ID: <7d195e6a-cb45-1565-5f72-2c5412def258@redhat.com> * Hidden field in a form is not a good approach. Its very brittle and will not work in every situation. So huge -1 there. * browser back button is not required to resubmit the HTTP request as the page can be rendered from cache. 
Therefore you couldn't have a "Page Expired" page displayed when the back button is pressed without setting the header "Cache-Control: no-store, must-revalidate, max-age=0" * Furthermore, without some type of code/information within the URL, you also wouldn't know if somebody clicked the back button or not or whether this was a page refresh or some other GET request. * Hitting Refresh button also has similar issues when removing the code from the URL. Hitting refresh on a form POST resends the POSTed data. Without some kind of code in the URL, the runtime has no idea if this is a Refresh Button re-POST, or if the browser is POSTing to the current authenticator. The way it works now is you have two types of requests which can be GET or POST. /authenticate?code={code} and /authenticate?code={code}&execution={execution} The first request corresponds to the Authenticator.authenticate() method. It basically means you want to display the currrent authenticator in the flow. The 2nd request means that you processing a challenge sent by the current Authenticator and corresponds to the Authenticator.action() method. Without this type of distinction, it is really hard (impossible) to figure out how to route and process the request. On 3/16/17 9:43 AM, Stian Thorgersen wrote: > We should get rid of the code for sure. Maybe we can add the ID of the > current step in the flow instead though. > > Before we discuss implementation though, let's figure out what the > ideal user experience would be then figure out how to implement it. > What about: > > * Refresh just works > * Back button will display a nicer page, something like "Page has > expired. To restart the login process click here. To continue the > login process click here.". Or Back button could just go to the start > of the flow always. > * Resubmitting forms will just display the page above > * No need to do redirects. 
Redirects is bad for performance, but also > has twice the response time which is not good from a usability perspective > > On 16 March 2017 at 12:56, Marek Posolda > wrote: > > We should be able to detect that with the code in the URL too. So > the question is, if hidden field has any advantage over keeping > the code in URL? > > > Thing is, that browsers always treat every POST request as unique > even if it goes to the same URL. So for example, even if code is > not in the URL and then the user submits username/password form > with incorrect password 5 times, then he needs to press browser > back-button 5 times to return back to initial > AuthorizationEndpoint request. Even if every POST request had same > URL without code like > "http://host/auth/realms/master/authenticate" > . > > Only real advantage I see is, that code in hidden field is maybe a > bit safer. Not sure if worth the effort, I will try to investigate > how much effort it is to put the code to the hidden field instead > of URL. > > Marek > > > > On 16/03/17 11:44, Stian Thorgersen wrote: >> I like option #3, but what about adding a hidden field on the >> form that contains the step in the flow. That way we can easily >> find out if the form is a post for the current step or not. If >> it's not then we simply ignore the post and return the current >> step again? That would work for back/forward and refresh. >> >> On 14 March 2017 at 23:47, Bill Burke > > wrote: >> >> Ya, similar to #3, my thought is if you combine a cookie with >> code-in-url, you have a solution for backbutton and refresh >> and there's >> no special headers you have to specify. We used to do #2, but >> lot of >> people, specifically jboss.org guys, >> complained about it. >> >> >> On 3/14/17 4:49 PM, Marek Posolda wrote: >> > Thanks, that looks similar to my (3) though. 
>> > >> > Besides that I wonder if we should save just the ID of >> loginSession in >> > the cookie and the "current-code" keep inside the loginSession >> > (infinispan) similarly like it is now? >> > >> > I am thinking about the case when potential attacker tricks >> Keycloak >> > by manually sending the request, which will just use same >> code in the >> > cookie and in the URL. Keycloak will then always treat this >> request as >> > valid due the code in the URL and in cookie will always match. >> > Couldn't that be an issue? >> > >> > Marek >> > >> > On 14/03/17 13:50, Bill Burke wrote: >> >> I've got an idea. What about >> >> >> >> * keep the code in the URL >> >> >> >> * Additionally add a "current-code" cookie >> >> >> >> If code in the URL doesn't match the cookie, then redirect >> to the URL of >> >> the current-code. >> >> >> >> >> >> On 3/14/17 6:53 AM, Marek Posolda wrote: >> >>> When working on login sessions, I wonder if we want to >> improve browser >> >>> back-button and browser refreshes. >> >>> >> >>> In shortcut, I can see 3 basic options: >> >>> >> >>> 1) Keep same like now and rely on header "Cache-Control: >> no-store, >> >>> must-revalidate, max-age=0" . This works fine and users >> never saw >> >>> outdated form and never submit outdated form 2 times. >> However the >> >>> usability sucks a bit IMO. When you press back-button >> after POST >> >>> request, you can see the ugly browser page "Web page has >> expired" . And >> >>> if you press F5 on this, you will see the unfriendly >> Keycloak error >> >>> page >> >>> "Error was occured. Please login again through your >> application" >> >>> because >> >>> of invalid code. >> >>> >> >>> 2) Use the pattern with POST followed by the redirect to >> GET. Since we >> >>> will have loginSession with the ID in the cookie, the GET >> request >> >>> can be >> >>> sent to the URL without any special query parameter. 
>> Something like >> >>> >> "http://localhost:8180/auth/realms/master/login-actions/authenticate >> " >> . >> >>> This will allow us that in every stage of authentication, >> user can >> >>> press >> >>> back-button and will be always redirected to the first >> step of the >> >>> flow. >> >>> When he refreshes the page, it will re-send just the GET >> request and >> >>> always brings him to the current execution. >> >>> >> >>> This looks most user-friendly. But there is the issue >> with performance >> >>> though. As we will need to followup every POST request >> with one >> >>> additional GET request. >> >>> >> >>> 3) Don't do anything special regarding back-button or >> refresh. But in >> >>> case that page is refreshed AND the post with invalid >> (already used) >> >>> code will be re-submitted, we won't display the ugly page >> "Error was >> >>> occured.", but we will just redirect to current step of >> the flow. >> >>> >> >>> Example: >> >>> a) User was redirected from the application to OIDC >> >>> AuthorizationEndpoint request. Login page is shown >> >>> b) User confirmed invalid username and password with POST >> request. >> >>> Login >> >>> form with error page "Invalid password" is shown >> >>> c) User confirmed valid username and password with POST >> request. TOTP >> >>> page is shown. >> >>> d) User press back-button. Now he will see again the page >> with >> >>> username/password form. >> >>> e) User press F5. The POST request will be re-sent, but >> it will use >> >>> previous "code", which is outdated now. So in this case, >> we will >> >>> redirect to the current execution and TOTP form will be >> shown. No >> >>> re-submission of username/password form will happen. >> >>> >> >>> In case 3, the username/password form will be shown >> again, but user >> >>> won't be able to resubmit it. >> >>> >> >>> In shortcut: With 2 and 3, users will never see the >> browser page "Web >> >>> page is expired" or Keycloak "Error occured. 
Go back to the >> >>> application". With 2, there is additional GET request >> needed. With 3, >> >>> the back-button may show the authentication forms, which >> user already >> >>> successfully confirmed, but he won't be able to re-submit >> them. Is it >> >>> bad regarding usability? To me, it looks better than >> showing "Web page >> >>> is expired". >> >>> >> >>> So my preference is 3,2,1. WDYT? Any other options? >> >>> >> >>> Marek >> >>> >> >>> _______________________________________________ >> >>> keycloak-dev mailing list >> >>> keycloak-dev at lists.jboss.org >> >> >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> >> _______________________________________________ >> >> keycloak-dev mailing list >> >> keycloak-dev at lists.jboss.org >> >> >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> > >> > >> >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> >> > > From mposolda at redhat.com Thu Mar 16 10:33:23 2017 From: mposolda at redhat.com (Marek Posolda) Date: Thu, 16 Mar 2017 15:33:23 +0100 Subject: [keycloak-dev] Improve back-button and refreshes in authenticators? In-Reply-To: References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> <81b90531-ecd5-4a15-9c3d-2ccc9c9317b5@redhat.com> Message-ID: On 16/03/17 14:43, Stian Thorgersen wrote: > We should get rid of the code for sure. Maybe we can add the ID of the > current step in the flow instead though. > > Before we discuss implementation though, let's figure out what the > ideal user experience would be then figure out how to implement it. > What about: > > * Refresh just works > * Back button will display a nicer page, something like "Page has > expired. To restart the login process click here. To continue the > login process click here.". 
Yeah, that would be nice for UX. The problem is that the back/forward buttons don't automatically refresh the page, so we would need some JavaScript tricks. Maybe something like this: http://stackoverflow.com/questions/9046184/reload-the-site-when-reached-via-browsers-back-button > Or Back button could just go to the start of the flow always. That's possible without JavaScript if you do a POST followed by a redirect, which is basically what I meant with #2. Regarding UX, I personally like your previous proposal better. If a user is deep in the flow, having confirmed many authenticator forms, and accidentally clicks the back button, he would need to re-authenticate through all the authenticators again. Not so great for usability, though? > * Resubmitting forms will just display the page above If we do either of your previous proposals, the user will never see the forms he already submitted. For example, if he submitted username/password and is now on the TOTP page, then after clicking "back" he lands either on the "Page has expired" page or at the start of the flow. That will usually be the username/password form, but since the flow starts from scratch, it won't be a form resubmission but a new submit. Marek > * No need to do redirects. Redirects is bad for performance, but also > has twice the response time which is not good from a usability perspective > > On 16 March 2017 at 12:56, Marek Posolda > wrote: > > We should be able to detect that with the code in the URL too. So > the question is, if hidden field has any advantage over keeping > the code in URL? > > > Thing is, that browsers always treat every POST request as unique > even if it goes to the same URL. So for example, even if code is > not in the URL and then the user submits username/password form > with incorrect password 5 times, then he needs to press browser > back-button 5 times to return back to initial > AuthorizationEndpoint request. Even if every POST request had same > URL without code like > "http://host/auth/realms/master/authenticate" > .
> > Only real advantage I see is, that code in hidden field is maybe a > bit safer. Not sure if worth the effort, I will try to investigate > how much effort it is to put the code to the hidden field instead > of URL. > > Marek > > > > On 16/03/17 11:44, Stian Thorgersen wrote: >> I like option #3, but what about adding a hidden field on the >> form that contains the step in the flow. That way we can easily >> find out if the form is a post for the current step or not. If >> it's not then we simply ignore the post and return the current >> step again? That would work for back/forward and refresh. >> >> On 14 March 2017 at 23:47, Bill Burke > > wrote: >> >> Ya, similar to #3, my thought is if you combine a cookie with >> code-in-url, you have a solution for backbutton and refresh >> and there's >> no special headers you have to specify. We used to do #2, but >> lot of >> people, specifically jboss.org guys, >> complained about it. >> >> >> On 3/14/17 4:49 PM, Marek Posolda wrote: >> > Thanks, that looks similar to my (3) though. >> > >> > Besides that I wonder if we should save just the ID of >> loginSession in >> > the cookie and the "current-code" keep inside the loginSession >> > (infinispan) similarly like it is now? >> > >> > I am thinking about the case when potential attacker tricks >> Keycloak >> > by manually sending the request, which will just use same >> code in the >> > cookie and in the URL. Keycloak will then always treat this >> request as >> > valid due the code in the URL and in cookie will always match. >> > Couldn't that be an issue? >> > >> > Marek >> > >> > On 14/03/17 13:50, Bill Burke wrote: >> >> I've got an idea. What about >> >> >> >> * keep the code in the URL >> >> >> >> * Additionally add a "current-code" cookie >> >> >> >> If code in the URL doesn't match the cookie, then redirect >> to the URL of >> >> the current-code. 
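Bill's cookie idea quoted above boils down to a small comparison: process the request only when the `code` query parameter matches the `current-code` cookie, otherwise redirect to the URL built from the cookie's value. A minimal sketch, with illustrative names rather than Keycloak's real classes:

```java
import java.util.Objects;

// Sketch of the "current-code" cookie idea from this thread: a request whose
// URL code does not match the cookie is treated as stale (back button or
// refresh) and redirected to the URL of the current code.
public class StaleRequestCheck {

    // Returns null when the request is current and should be processed;
    // otherwise returns the URL the browser should be redirected to.
    static String redirectTarget(String baseUrl, String urlCode, String cookieCode) {
        if (Objects.equals(urlCode, cookieCode)) {
            return null; // codes match: this is the current step of the flow
        }
        return baseUrl + "?code=" + cookieCode; // stale request: jump to current step
    }

    public static void main(String[] args) {
        String base = "http://localhost:8180/auth/realms/master/login-actions/authenticate";
        System.out.println(redirectTarget(base, "abc", "abc")); // prints: null
        System.out.println(redirectTarget(base, "old", "new")); // prints the redirect URL
    }
}
```

Marek's objection in this thread still applies to such a check: if an attacker can place the same value in both the URL and the cookie, the comparison alone proves nothing, so the code would still need to be validated against server-side state.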
>> >> >> >> >> >> On 3/14/17 6:53 AM, Marek Posolda wrote: >> >>> When working on login sessions, I wonder if we want to >> improve browser >> >>> back-button and browser refreshes. >> >>> >> >>> In shortcut, I can see 3 basic options: >> >>> >> >>> 1) Keep same like now and rely on header "Cache-Control: >> no-store, >> >>> must-revalidate, max-age=0" . This works fine and users >> never saw >> >>> outdated form and never submit outdated form 2 times. >> However the >> >>> usability sucks a bit IMO. When you press back-button >> after POST >> >>> request, you can see the ugly browser page "Web page has >> expired" . And >> >>> if you press F5 on this, you will see the unfriendly >> Keycloak error >> >>> page >> >>> "Error was occured. Please login again through your >> application" >> >>> because >> >>> of invalid code. >> >>> >> >>> 2) Use the pattern with POST followed by the redirect to >> GET. Since we >> >>> will have loginSession with the ID in the cookie, the GET >> request >> >>> can be >> >>> sent to the URL without any special query parameter. >> Something like >> >>> >> "http://localhost:8180/auth/realms/master/login-actions/authenticate >> " >> . >> >>> This will allow us that in every stage of authentication, >> user can >> >>> press >> >>> back-button and will be always redirected to the first >> step of the >> >>> flow. >> >>> When he refreshes the page, it will re-send just the GET >> request and >> >>> always brings him to the current execution. >> >>> >> >>> This looks most user-friendly. But there is the issue >> with performance >> >>> though. As we will need to followup every POST request >> with one >> >>> additional GET request. >> >>> >> >>> 3) Don't do anything special regarding back-button or >> refresh. 
But in >> >>> case that page is refreshed AND the post with invalid >> (already used) >> >>> code will be re-submitted, we won't display the ugly page >> "Error was >> >>> occured.", but we will just redirect to current step of >> the flow. >> >>> >> >>> Example: >> >>> a) User was redirected from the application to OIDC >> >>> AuthorizationEndpoint request. Login page is shown >> >>> b) User confirmed invalid username and password with POST >> request. >> >>> Login >> >>> form with error page "Invalid password" is shown >> >>> c) User confirmed valid username and password with POST >> request. TOTP >> >>> page is shown. >> >>> d) User press back-button. Now he will see again the page >> with >> >>> username/password form. >> >>> e) User press F5. The POST request will be re-sent, but >> it will use >> >>> previous "code", which is outdated now. So in this case, >> we will >> >>> redirect to the current execution and TOTP form will be >> shown. No >> >>> re-submission of username/password form will happen. >> >>> >> >>> In case 3, the username/password form will be shown >> again, but user >> >>> won't be able to resubmit it. >> >>> >> >>> In shortcut: With 2 and 3, users will never see the >> browser page "Web >> >>> page is expired" or Keycloak "Error occured. Go back to the >> >>> application". With 2, there is additional GET request >> needed. With 3, >> >>> the back-button may show the authentication forms, which >> user already >> >>> successfully confirmed, but he won't be able to re-submit >> them. Is it >> >>> bad regarding usability? To me, it looks better than >> showing "Web page >> >>> is expired". >> >>> >> >>> So my preference is 3,2,1. WDYT? Any other options? 
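Option 2 quoted above is the classic POST-redirect-GET pattern. As a rough sketch with hypothetical names (the real login-actions endpoint differs): every successful form POST is answered with "303 See Other" pointing at a stable GET URL, so a refresh repeats only the harmless GET and never re-submits credentials.

```java
// Sketch of option 2 (POST-redirect-GET): answer every form POST with a 303
// redirect to a stable flow URL that carries no code parameter. The login
// session id would travel in a cookie, so one URL serves every flow step.
public class PostRedirectGet {

    static final class HttpReply {
        final int status;
        final String location;
        HttpReply(int status, String location) {
            this.status = status;
            this.location = location;
        }
    }

    // Hypothetical URL shape modeled on the one quoted in the thread.
    static HttpReply answerPost(String realm) {
        return new HttpReply(303,
                "http://localhost:8180/auth/realms/" + realm + "/login-actions/authenticate");
    }

    public static void main(String[] args) {
        HttpReply r = answerPost("master");
        System.out.println(r.status + " " + r.location);
    }
}
```

The performance cost raised in the thread is visible here: each POST now triggers one extra round trip for the follow-up GET.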
>> >>> >> >>> Marek >> >>> >> >>> _______________________________________________ >> >>> keycloak-dev mailing list >> >>> keycloak-dev at lists.jboss.org >> >> >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> >> _______________________________________________ >> >> keycloak-dev mailing list >> >> keycloak-dev at lists.jboss.org >> >> >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> > >> > >> >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> >> > > From mposolda at redhat.com Thu Mar 16 10:50:45 2017 From: mposolda at redhat.com (Marek Posolda) Date: Thu, 16 Mar 2017 15:50:45 +0100 Subject: [keycloak-dev] Improve back-button and refreshes in authenticators? In-Reply-To: <7d195e6a-cb45-1565-5f72-2c5412def258@redhat.com> References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> <81b90531-ecd5-4a15-9c3d-2ccc9c9317b5@redhat.com> <7d195e6a-cb45-1565-5f72-2c5412def258@redhat.com> Message-ID: <68cc026b-0b5d-323b-4c6c-85b2414ca052@redhat.com> On 16/03/17 15:27, Bill Burke wrote: > * Hidden field in a form is not a good approach. Its very brittle and > will not work in every situation. So huge -1 there. > > * browser back button is not required to resubmit the HTTP request as > the page can be rendered from cache. Therefore you couldn't have a > "Page Expired" page displayed when the back button is pressed without > setting the header "Cache-Control: no-store, must-revalidate, max-age=0" Maybe we can do some javascript stuff like this: http://stackoverflow.com/questions/9046184/reload-the-site-when-reached-via-browsers-back-button But that would mean that we will need to inject some common javascript stuff into every HTML form displayed by authentication SPI. Could we rely on that? 
> > * Furthermore, without some type of code/information within the URL, > you also wouldn't know if somebody clicked the back button or not or > whether this was a page refresh or some other GET request. Once we have the cookie with the loginSession ID, we can look up the loginSession. The loginSession will contain the last code (just as clientSession does now) and the last authenticator. Then we just need to compare the code from the loginSession with the code from the request. If it matches, we are good. If it doesn't match, it's likely a refresh of some previous page, and in that case we can just redirect to the last authenticator. I am not sure if we also need to track all codes so that we are able to distinguish between an "expired" code and a "false" code, which was never valid and was possibly used by some attacker for CSRF. Maybe we can sign codes with HMAC, so we can verify whether a code is "expired" or "false" without needing to track the list of last codes. Marek > > * Hitting Refresh button also has similar issues when removing the > code from the URL. Hitting refresh on a form POST resends the POSTed > data. Without some kind of code in the URL, the runtime has no idea > if this is a Refresh Button re-POST, or if the browser is POSTing to > the current authenticator. > > The way it works now is you have two types of requests which can be > GET or POST. > > /authenticate?code={code} > > and > > /authenticate?code={code}&execution={execution} > > The first request corresponds to the Authenticator.authenticate() > method. It basically means you want to display the current > authenticator in the flow. The 2nd request means that you are processing > a challenge sent by the current Authenticator and corresponds to the > Authenticator.action() method. Without this type of distinction, it > is really hard (impossible) to figure out how to route and process the > request. > > > On 3/16/17 9:43 AM, Stian Thorgersen wrote: >> We should get rid of the code for sure.
Maybe we can add the ID of >> the current step in the flow instead though. >> >> Before we discuss implementation though, let's figure out what the >> ideal user experience would be then figure out how to implement it. >> What about: >> >> * Refresh just works >> * Back button will display a nicer page, something like "Page has >> expired. To restart the login process click here. To continue the >> login process click here.". Or Back button could just go to the start >> of the flow always. >> * Resubmitting forms will just display the page above >> * No need to do redirects. Redirects is bad for performance, but also >> has twice the response time which is not good from a usability >> perspective >> >> On 16 March 2017 at 12:56, Marek Posolda > > wrote: >> >> We should be able to detect that with the code in the URL too. So >> the question is, if hidden field has any advantage over keeping >> the code in URL? >> >> >> Thing is, that browsers always treat every POST request as unique >> even if it goes to the same URL. So for example, even if code is >> not in the URL and then the user submits username/password form >> with incorrect password 5 times, then he needs to press browser >> back-button 5 times to return back to initial >> AuthorizationEndpoint request. Even if every POST request had >> same URL without code like >> "http://host/auth/realms/master/authenticate" >> . >> >> Only real advantage I see is, that code in hidden field is maybe >> a bit safer. Not sure if worth the effort, I will try to >> investigate how much effort it is to put the code to the hidden >> field instead of URL. >> >> Marek >> >> >> >> On 16/03/17 11:44, Stian Thorgersen wrote: >>> I like option #3, but what about adding a hidden field on the >>> form that contains the step in the flow. That way we can easily >>> find out if the form is a post for the current step or not. If >>> it's not then we simply ignore the post and return the current >>> step again? 
That would work for back/forward and refresh. >>> >>> On 14 March 2017 at 23:47, Bill Burke >> > wrote: >>> >>> Ya, similar to #3, my thought is if you combine a cookie with >>> code-in-url, you have a solution for backbutton and refresh >>> and there's >>> no special headers you have to specify. We used to do #2, >>> but lot of >>> people, specifically jboss.org guys, >>> complained about it. >>> >>> >>> On 3/14/17 4:49 PM, Marek Posolda wrote: >>> > Thanks, that looks similar to my (3) though. >>> > >>> > Besides that I wonder if we should save just the ID of >>> loginSession in >>> > the cookie and the "current-code" keep inside the loginSession >>> > (infinispan) similarly like it is now? >>> > >>> > I am thinking about the case when potential attacker >>> tricks Keycloak >>> > by manually sending the request, which will just use same >>> code in the >>> > cookie and in the URL. Keycloak will then always treat >>> this request as >>> > valid due the code in the URL and in cookie will always match. >>> > Couldn't that be an issue? >>> > >>> > Marek >>> > >>> > On 14/03/17 13:50, Bill Burke wrote: >>> >> I've got an idea. What about >>> >> >>> >> * keep the code in the URL >>> >> >>> >> * Additionally add a "current-code" cookie >>> >> >>> >> If code in the URL doesn't match the cookie, then >>> redirect to the URL of >>> >> the current-code. >>> >> >>> >> >>> >> On 3/14/17 6:53 AM, Marek Posolda wrote: >>> >>> When working on login sessions, I wonder if we want to >>> improve browser >>> >>> back-button and browser refreshes. >>> >>> >>> >>> In shortcut, I can see 3 basic options: >>> >>> >>> >>> 1) Keep same like now and rely on header "Cache-Control: >>> no-store, >>> >>> must-revalidate, max-age=0" . This works fine and users >>> never saw >>> >>> outdated form and never submit outdated form 2 times. >>> However the >>> >>> usability sucks a bit IMO. 
When you press back-button >>> after POST >>> >>> request, you can see the ugly browser page "Web page has >>> expired" . And >>> >>> if you press F5 on this, you will see the unfriendly >>> Keycloak error >>> >>> page >>> >>> "Error was occured. Please login again through your >>> application" >>> >>> because >>> >>> of invalid code. >>> >>> >>> >>> 2) Use the pattern with POST followed by the redirect to >>> GET. Since we >>> >>> will have loginSession with the ID in the cookie, the >>> GET request >>> >>> can be >>> >>> sent to the URL without any special query parameter. >>> Something like >>> >>> >>> "http://localhost:8180/auth/realms/master/login-actions/authenticate >>> " >>> . >>> >>> This will allow us that in every stage of >>> authentication, user can >>> >>> press >>> >>> back-button and will be always redirected to the first >>> step of the >>> >>> flow. >>> >>> When he refreshes the page, it will re-send just the GET >>> request and >>> >>> always brings him to the current execution. >>> >>> >>> >>> This looks most user-friendly. But there is the issue >>> with performance >>> >>> though. As we will need to followup every POST request >>> with one >>> >>> additional GET request. >>> >>> >>> >>> 3) Don't do anything special regarding back-button or >>> refresh. But in >>> >>> case that page is refreshed AND the post with invalid >>> (already used) >>> >>> code will be re-submitted, we won't display the ugly >>> page "Error was >>> >>> occured.", but we will just redirect to current step of >>> the flow. >>> >>> >>> >>> Example: >>> >>> a) User was redirected from the application to OIDC >>> >>> AuthorizationEndpoint request. Login page is shown >>> >>> b) User confirmed invalid username and password with >>> POST request. >>> >>> Login >>> >>> form with error page "Invalid password" is shown >>> >>> c) User confirmed valid username and password with POST >>> request. TOTP >>> >>> page is shown. >>> >>> d) User press back-button. 
Now he will see again the >>> page with >>> >>> username/password form. >>> >>> e) User press F5. The POST request will be re-sent, but >>> it will use >>> >>> previous "code", which is outdated now. So in this case, >>> we will >>> >>> redirect to the current execution and TOTP form will be >>> shown. No >>> >>> re-submission of username/password form will happen. >>> >>> >>> >>> In case 3, the username/password form will be shown >>> again, but user >>> >>> won't be able to resubmit it. >>> >>> >>> >>> In shortcut: With 2 and 3, users will never see the >>> browser page "Web >>> >>> page is expired" or Keycloak "Error occured. Go back to the >>> >>> application". With 2, there is additional GET request >>> needed. With 3, >>> >>> the back-button may show the authentication forms, which >>> user already >>> >>> successfully confirmed, but he won't be able to >>> re-submit them. Is it >>> >>> bad regarding usability? To me, it looks better than >>> showing "Web page >>> >>> is expired". >>> >>> >>> >>> So my preference is 3,2,1. WDYT? Any other options? 
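Marek's suggestion earlier in this message, to sign codes with an HMAC so the server can tell a merely expired code from a forged one without tracking past codes, could look roughly like this. The key handling and names are assumptions for illustration, not Keycloak code:

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch of HMAC-signed one-time codes: a code whose signature verifies but
// is not the current one is "expired"; a code whose signature fails was
// never issued by us ("false", possibly a CSRF attempt).
public class SignedCode {

    // Illustrative fixed key; a real server would use a per-realm secret.
    private static final byte[] KEY = "per-realm-secret".getBytes(StandardCharsets.UTF_8);

    static String sign(String code) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(KEY, "HmacSHA256"));
            byte[] sig = mac.doFinal(code.getBytes(StandardCharsets.UTF_8));
            return code + "." + Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    static boolean wasIssuedByUs(String signedCode) {
        int dot = signedCode.lastIndexOf('.');
        if (dot < 0) return false;
        byte[] expected = sign(signedCode.substring(0, dot)).getBytes(StandardCharsets.UTF_8);
        // constant-time comparison to avoid leaking signature bytes
        return MessageDigest.isEqual(expected, signedCode.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        String issued = sign("execution-42");
        System.out.println(wasIssuedByUs(issued));             // prints: true
        System.out.println(wasIssuedByUs("execution-42.xyz")); // prints: false
    }
}
```

A code that passes `wasIssuedByUs` but differs from the loginSession's current code can safely be treated as expired and redirected to the current step; one that fails can be rejected outright.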
>>> >>> >>> >>> Marek >>> >>> >>> >>> _______________________________________________ >>> >>> keycloak-dev mailing list >>> >>> keycloak-dev at lists.jboss.org >>> >>> >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> >>> >> _______________________________________________ >>> >> keycloak-dev mailing list >>> >> keycloak-dev at lists.jboss.org >>> >>> >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> >>> > >>> > >>> >>> _______________________________________________ >>> keycloak-dev mailing list >>> keycloak-dev at lists.jboss.org >>> >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> >>> >>> >> >> > From mposolda at redhat.com Thu Mar 16 10:58:06 2017 From: mposolda at redhat.com (Marek Posolda) Date: Thu, 16 Mar 2017 15:58:06 +0100 Subject: [keycloak-dev] Profile SPI In-Reply-To: References: <9bb32606-543d-29c7-8799-596ab7d1806b@redhat.com> <12aba796-2550-6d31-0d26-3b70223ca7c2@redhat.com> Message-ID: On 16/03/17 14:35, Stian Thorgersen wrote: > > > On 16 March 2017 at 13:08, Marek Posolda > wrote: > > On 16/03/17 11:34, Stian Thorgersen wrote: >> >> >> On 14 March 2017 at 12:21, Marek Posolda > > wrote: >> >> Few things: >> - It will be good to have some OOTB support for multivalued >> attributes. You will be able to define if attribute is >> multivalued and then in registration/account pages, users >> will see something like we have in admin console for >> "redirect uris" or "web origins" in client detail page. >> >> >> Any multi valued attributes would have to be done through custom >> extension to the account management console and login pages. The >> built-in properties we'll provide just don't need multi values >> (first name, dob, etc..) > I guess it is not big priority, but IMO it will be useful to have > some built-in support for multivalued attributes. I know some > people from community asked for it. 
You can have more mobile > phones, you can have more children, etc. :) > > Sure we can have built-in support for multiple values - but those > would require custom forms IMO. We already have that at the model level and at the mappers level. The UI is the only missing piece, and that is what I was thinking the Profile SPI could help with. Anyway, it all depends on priorities. Maybe we can just await feedback from the community/customers and add it later. Not sure... > > > We can require them to always override the theme if they need > multivalued support in the UI, but if we can have something OOTB it > would be nice IMO. >> >> >> - Besides validation, it may be useful to add some "actions" >> when an attribute is changed? For example if user changes email, >> there will be the optional action, which will switch >> "emailVerified" to false and put the "VerifyEmail" required >> action on him. When he changes mobile number, it will send >> him SMS and he will need to confirm it somehow (perhaps again >> through required action), etc. >> >> >> Yes, not quite sure how to do that though. There's a few built-in >> providers we'd like (make email not verified if changed, etc.), >> but users should also be able to add their own. >> >> >> - It will probably be useful to allow admins to skip >> validation (and actions) for certain attributes. Maybe >> validators could have an option like "Skip admin" or >> something like that? Or should we always skip the validations >> for admin? >> >> >> Dunno - why should an admin be allowed to bypass validation? >> Validation is there to make sure the details in the DB are accurate. > Not sure. Currently the fields like firstName/lastName/email are > mandatory for users, but not for admins. I can think of some > corner cases. > > > True - maybe just have an option on admin console/endpoints to skip > validation? +1 > > > For example: a user was registered through the Twitter identity provider > and hence doesn't have an email filled in.
Now admin won't be able to > edit the user unless he fills some fake email. > > > I agree that would be annoying for the admin. > > In that case it would be nice to detect "missing properties" and popup > the update profile form for the user to require the user to fill in > missing properties. Yeah, requiredActions already have "evaluateTriggers", which is triggered at every login. So doing this will be quite easy though. Marek > > > > Marek > >> >> >> Marek >> >> >> >> On 14/03/17 10:13, Stian Thorgersen wrote: >> >> At the moment there is no single point to define >> validation for a user. >> Even worse for the account management console and admin >> console it's not >> even possible to define validation for custom attributes. >> >> Also, as there is no defined list of attributes for a >> user there the >> mapping of user attributes is error prone. >> >> I'd like to introduce a Profile SPI to help with this. It >> would have >> methods to: >> >> * Validate users during creation and updates >> * List defined attributes on a user >> >> There would be a built-in provider that would delegate to >> ProfileAttribute >> SPI. ProfileAttribute SPI would allow defining >> configurable providers for >> single user attributes. I'm also considering adding a >> separate Validation >> SPI, so a ProfileAttribute provider could delegate >> validation to a separate >> validator. >> >> Users could also implement their own Profile provider to >> do whatever they >> want. I'd like to aim to make the SPI a supported SPI. >> >> First pass would focus purely on validation. Second pass >> would focus on >> using the attribute metadata to do things like: >> >> * Have dropdown boxes in mappers to select user attribute >> instead of >> copy/pasting the name >> * Have additional built-in attributes on registration >> form, update profile >> form and account management console that can be >> enabled/disabled by >> defining the Profile. 
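The "Skip admin" switch discussed in this thread could hang off a per-attribute validator roughly like the sketch below. The interface shape is purely hypothetical, since the Profile SPI did not exist in this form at the time of writing; it only illustrates the proposed behavior of letting the admin console bypass validation, e.g. for a broker-created user with no email.

```java
import java.util.regex.Pattern;

// Sketch of a per-attribute validator with a "skip for admin" option:
// regular users must pass validation, while the admin console/endpoints
// may bypass it so an admin can still edit an incomplete user.
public class EmailAttributeValidator {

    private static final Pattern EMAIL = Pattern.compile("^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$");

    private final boolean skipForAdmin;

    EmailAttributeValidator(boolean skipForAdmin) {
        this.skipForAdmin = skipForAdmin;
    }

    boolean validate(String value, boolean editedByAdmin) {
        if (editedByAdmin && skipForAdmin) {
            return true; // admin bypass, as discussed in the thread
        }
        return value != null && EMAIL.matcher(value).matches();
    }

    public static void main(String[] args) {
        EmailAttributeValidator v = new EmailAttributeValidator(true);
        System.out.println(v.validate("user@example.com", false)); // prints: true
        System.out.println(v.validate(null, true));                // prints: true
        System.out.println(v.validate("not-an-email", false));     // prints: false
    }
}
```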
I'm not suggesting a huge amount >> here and it will be >> limited to a few sensible attributes. Defining more >> complex things like >> address would still be done through extending the forms. >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> >> >> >> > > From bburke at redhat.com Thu Mar 16 11:05:20 2017 From: bburke at redhat.com (Bill Burke) Date: Thu, 16 Mar 2017 11:05:20 -0400 Subject: [keycloak-dev] next-gen Keycloak proxy In-Reply-To: References: <4be4c218-27f7-9946-4c19-c2082c5b780a@redhat.com> Message-ID: <4af2f2e1-3ac2-c2e8-d4c4-aa3e9fa807c0@redhat.com> On 3/16/17 6:19 AM, Stian Thorgersen wrote: > The Keycloak proxy shouldn't be tied directly to the database or > caches. It should ideally be stateless and ideally there's no need for > sticky sessions. > Please stop making broad blanket statements and back up your response, otherwise I'm just going to ignore you. If the proxy implements pure OIDC it has to store at least the refresh token and the access token. Plus I foresee us wanting to provide more complex proxy features which will require storing more and more state. So the proxy needs sessions, which means many users will want it to be fault tolerant, which in turn means that the proxy will require distributed sessions. > It should be capable of running collocated with the Keycloak Server > for simplicity, but also should be possible to run in separate > process. If it's done as an additional subsystem that allows easily > configuring a Keycloak server to be IdP, IdP+Proxy or just Proxy. > > Further, it should leverage OpenID Connect rather than us coming up > with a new separate protocol. > > My reasoning behind this is simple: > > * Please let's not invent another security protocol! That's a lot of > work and a whole new vulnerability vector to deal with. > * There will be tons more requests to a proxy than there are to the > server.
Latency overhead will also be much more important. > It wouldn't be a brand new protocol, just an optimized subset of OIDC. For example, you wouldn't have to do a code-to-token request, nor would you have to execute refresh token requests. It would also make things like revocation and backchannel logout much easier, nicer, more efficient, and more robust. I just see huge advantages with this approach: simpler provisioning, simpler configuration, a really nice user experience overall, and possibly some optimizations. What I'm looking for is disadvantages to this approach; the ones I currently see are:

1) Larger memory footprint
2) More database connections, although these connections should become idle after boot.
3) Possible extra distributed session replication, as the User/ClientSession needs to be visible on both the auth server and the proxy.
4) Possible headache of too many nodes in a cluster, although a proxy is supposed to be able to handle proxying multiple apps and multiple instances of that app.

What's good about 2-4 is that we can bench this stuff and learn the limits. From what Stuart Douglas tells me, Undertow is really, really competitive with just about every web server (Apache, Jetty, etc.), and usually better. Where Java in general is not as good as its competitors is in SSL/TLS. I don't know how much worse it is. Bill From bburke at redhat.com Thu Mar 16 13:37:21 2017 From: bburke at redhat.com (Bill Burke) Date: Thu, 16 Mar 2017 13:37:21 -0400 Subject: [keycloak-dev] Improve back-button and refreshes in authenticators?
In-Reply-To: <68cc026b-0b5d-323b-4c6c-85b2414ca052@redhat.com> References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> <81b90531-ecd5-4a15-9c3d-2ccc9c9317b5@redhat.com> <7d195e6a-cb45-1565-5f72-2c5412def258@redhat.com> <68cc026b-0b5d-323b-4c6c-85b2414ca052@redhat.com> Message-ID: <7d85a3a6-d247-2234-d8eb-648ad66fc6f6@redhat.com> On 3/16/17 10:50 AM, Marek Posolda wrote: > On 16/03/17 15:27, Bill Burke wrote: >> * Hidden field in a form is not a good approach. Its very brittle >> and will not work in every situation. So huge -1 there. >> >> * browser back button is not required to resubmit the HTTP request as >> the page can be rendered from cache. Therefore you couldn't have a >> "Page Expired" page displayed when the back button is pressed without >> setting the header "Cache-Control: no-store, must-revalidate, max-age=0" > Maybe we can do some javascript stuff like this: > http://stackoverflow.com/questions/9046184/reload-the-site-when-reached-via-browsers-back-button > > > But that would mean that we will need to inject some common javascript > stuff into every HTML form displayed by authentication SPI. Could we > rely on that? I don't think this is a good approach as Authenticator develoeprs would have to do the same thing. >> >> * Furthermore, without some type of code/information within the URL, >> you also wouldn't know if somebody clicked the back button or not or >> whether this was a page refresh or some other GET request. > Once we have the cookie with loginSessionID, we can lookup the > loginSession. And loginSession will contain last code (same like > clientSession now) and last authenticator. Then we just need to > compare the code from the loginSession with the code from request. If > it matches, we are good. If it doesn't match, it's likely the refresh > of some previous page and in that case, we can just redirect to last > authenticator. 
This is the current behavior, but instead of using a cookie, the "code" is stored in the URL. With only a cookie, though, and no URL information, you won't know the difference between a Back Button and a Page Refresh for GET requests. For POST requests, you won't be able to tell the difference between a Back Button, a Page Refresh, or whether the POST is targeted at an actual Authenticator. The more I think about it, things should probably stay the way they currently are, with improvements to the user experience. I think we can support what Stian suggested with the current implementation. > Not sure if we also need to track all codes, so we are able to > distinguish between the "expired" code and the "false" code, > which was never valid and was possibly used by some attacker for CSRF. > Maybe we can sign codes with HMAC, so we can verify if it is an "expired" > or "false" code without needing to track the list of last codes. This has been done in the past. Then it was switched to using the same code throughout the whole flow, and then Stian switched it to changing the code throughout the flow. I don't know if he uses a hash or not. Bill From sthorger at redhat.com Thu Mar 16 15:08:03 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Thu, 16 Mar 2017 20:08:03 +0100 Subject: [keycloak-dev] Keycloak 3.0.0.CR1 released Message-ID: Keycloak 3.0.0.CR1 is released. Even though we've been busy wrapping up Keycloak 2.5, we've managed to include quite a few new features. To download the release go to the Keycloak homepage. This release is the first that comes without Mongo support.

Highlights

- *No import option for LDAP* - This option allows consuming users from LDAP without importing them into the Keycloak database
- *Initiate linking of identity provider from application* - In the past adding additional identity brokering accounts could only be done through the account management console.
Now this can be done from your application
- *Hide identity provider* - It's now possible to hide an identity provider from the login page
- *Jetty 9.4* - Thanks to reneploetz we now have support for Jetty 9.4
- *Swedish translations* - Thanks to Viktor Kostov for adding Swedish translations
- *Checksums for downloads* - The website now has md5 checksums for all downloads
- *BOMs* - We've added BOMs for adapters as well as Server SPIs

The full list of resolved issues is available in JIRA.

Upgrading

Before you upgrade, remember to back up your database and check the migration guide.

From sthorger at redhat.com Fri Mar 17 03:42:08 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Fri, 17 Mar 2017 08:42:08 +0100 Subject: [keycloak-dev] next-gen Keycloak proxy In-Reply-To: <4af2f2e1-3ac2-c2e8-d4c4-aa3e9fa807c0@redhat.com> References: <4be4c218-27f7-9946-4c19-c2082c5b780a@redhat.com> <4af2f2e1-3ac2-c2e8-d4c4-aa3e9fa807c0@redhat.com> Message-ID: On 16 March 2017 at 16:05, Bill Burke wrote: > > > On 3/16/17 6:19 AM, Stian Thorgersen wrote: > > The Keycloak proxy shouldn't be tied directly to the database or caches. > It should ideally be stateless and ideally there's no need for sticky > sessions. > > Please stop making broad blanket statements and back up your response, > otherwise I'm just going to ignore you. > ??? > > If the proxy implements pure OIDC it has to store at least the refresh token > and the access token. Plus I foresee us wanting to provide more complex proxy > features which will require storing more and more state. So the proxy > needs sessions, which means many users will want it to be fault tolerant, > which in turn means that the proxy will require distributed sessions. > > > It should be capable of running collocated with the Keycloak Server for > simplicity, but also should be possible to run in separate process. If it's > done as an additional subsystem that allows easily configuring a Keycloak > server to be IdP, IdP+Proxy or just Proxy.
> > Further, it should leverage OpenID Connect rather than us coming up with a new separate protocol.
> >
> > My reasoning behind this is simple:
> >
> > * Please let's not invent another security protocol! That's a lot of work and a whole new vulnerability vector to deal with.
> > * There will be tons more requests to a proxy than there are to the server. Latency overhead will also be much more important.
>
> It wouldn't be a brand new protocol, just an optimized subset of OIDC. For example, you wouldn't have to do a code to token request nor would you have to execute refresh token requests. It would also make things like revocation and backchannel logout much easier, nicer, more efficient, and more robust.
>
> I just see huge advantages with this approach: simpler provisioning, simpler configuration, a real nice user experience overall, and possibly some optimizations. What I'm looking for is disadvantages to this approach, which I currently see as:
>
> 1) Larger memory footprint
> 2) More database connections, although these connections should become idle after boot.
> 3) Possible extra distributed session replication as the User/ClientSession needs to be visible on both the auth server and the proxy.
> 4) Possible headache of too many nodes in a cluster, although a proxy is supposed to be able to handle proxying multiple apps and multiple instances of that app.
>
> What's good about 2-4 is that we can bench this stuff and learn the limits. From what Stuart Douglas tells me, Undertow is really really competitive with just about every web server (Apache, Jetty, etc.) (usually better). Where Java in general is not as good as its competitors is in SSL/TLS. I don't know how much worse it is.
>
> Bill
>
> Bill

From sthorger at redhat.com Fri Mar 17 03:44:40 2017
From: sthorger at redhat.com (Stian Thorgersen)
Date: Fri, 17 Mar 2017 08:44:40 +0100
Subject: [keycloak-dev] Improve back-button and refreshes in authenticators?
In-Reply-To: <7d85a3a6-d247-2234-d8eb-648ad66fc6f6@redhat.com> References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> <81b90531-ecd5-4a15-9c3d-2ccc9c9317b5@redhat.com> <7d195e6a-cb45-1565-5f72-2c5412def258@redhat.com> <68cc026b-0b5d-323b-4c6c-85b2414ca052@redhat.com> <7d85a3a6-d247-2234-d8eb-648ad66fc6f6@redhat.com> Message-ID: Can we please get back to discussing what the best user experience is first. Then we can discuss implementations? On 16 March 2017 at 18:37, Bill Burke wrote: > > > On 3/16/17 10:50 AM, Marek Posolda wrote: > > On 16/03/17 15:27, Bill Burke wrote: > > * Hidden field in a form is not a good approach. Its very brittle and > will not work in every situation. So huge -1 there. > > * browser back button is not required to resubmit the HTTP request as the > page can be rendered from cache. Therefore you couldn't have a "Page > Expired" page displayed when the back button is pressed without setting the > header "Cache-Control: no-store, must-revalidate, max-age=0" > > Maybe we can do some javascript stuff like this: http://stackoverflow.com/ > questions/9046184/reload-the-site-when-reached-via-browsers-back-button > > But that would mean that we will need to inject some common javascript > stuff into every HTML form displayed by authentication SPI. Could we rely > on that? > > I don't think this is a good approach as Authenticator develoeprs would > have to do the same thing. > > > > * Furthermore, without some type of code/information within the URL, you > also wouldn't know if somebody clicked the back button or not or whether > this was a page refresh or some other GET request. > > Once we have the cookie with loginSessionID, we can lookup the > loginSession. And loginSession will contain last code (same like > clientSession now) and last authenticator. 
Then we just need to compare the > code from the loginSession with the code from request. If it matches, we > are good. If it doesn't match, it's likely the refresh of some previous > page and in that case, we can just redirect to last authenticator. > > This is the current behavior, but instead of using a cookie, the "code" is > stored in the URL. > > With only a cookie though and no URL information, you won't know the > different between a Back Button and a Page Refresh for GET requests. For > POST requests, you won't be able to tell the differencee between a Back > Button, Page Refresh, or whether the POST is targeted to an actual > Authenticator. > > The more I think about it, things should probably stay the way it > currently is, with improvements on user experience. I think we can support > what Stian suggested with the current implementation. > > > Not sure if we also need to track all codes, so we are able to distinct > between the "expired" code, and between the "false" code, which was never > valid and was possibly used by some attacker for CSRF. Maybe we can sign > codes with HMAC, so we can verify if it is "expired" or "false" code > without need to track the list of last codes. > > > This has been done in the past. Then it was switched to using the same > code throughout the whole flow, then Stian switched it to changing the code > throughout the flow. I don't know if he uses a hash or not. > > Bill > From sthorger at redhat.com Fri Mar 17 03:45:52 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Fri, 17 Mar 2017 08:45:52 +0100 Subject: [keycloak-dev] Improve back-button and refreshes in authenticators? 
In-Reply-To: 
References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> <81b90531-ecd5-4a15-9c3d-2ccc9c9317b5@redhat.com> <7d195e6a-cb45-1565-5f72-2c5412def258@redhat.com> <68cc026b-0b5d-323b-4c6c-85b2414ca052@redhat.com> <7d85a3a6-d247-2234-d8eb-648ad66fc6f6@redhat.com>
Message-ID: 

I repeat:

Before we discuss implementation though, let's figure out what the ideal user experience would be, then figure out how to implement it. What about:

* Refresh just works
* Back button will display a nicer page, something like "Page has expired. To restart the login process click here. To continue the login process click here.". Or Back button could just go to the start of the flow always.
* Resubmitting forms will just display the page above
* No need to do redirects. Redirects are bad for performance, and they also double the response time, which is not good from a usability perspective

Is this the optimal user experience? Or should we do something else?

On 17 March 2017 at 08:44, Stian Thorgersen wrote:

> Can we please get back to discussing what the best user experience is first. Then we can discuss implementations?
>
> On 16 March 2017 at 18:37, Bill Burke wrote:
>
>> On 3/16/17 10:50 AM, Marek Posolda wrote:
>> On 16/03/17 15:27, Bill Burke wrote:
>>
>> * Hidden field in a form is not a good approach. It's very brittle and will not work in every situation. So huge -1 there.
>>
>> * browser back button is not required to resubmit the HTTP request as the page can be rendered from cache.
Therefore you couldn't have a "Page Expired" page displayed when the back button is pressed without setting the header "Cache-Control: no-store, must-revalidate, max-age=0"

>> Maybe we can do some javascript stuff like this: http://stackoverflow.com/questions/9046184/reload-the-site-when-reached-via-browsers-back-button
>>
>> But that would mean that we will need to inject some common javascript stuff into every HTML form displayed by the authentication SPI. Could we rely on that?
>>
>> I don't think this is a good approach as Authenticator developers would have to do the same thing.
>>
>> * Furthermore, without some type of code/information within the URL, you also wouldn't know if somebody clicked the back button or not, or whether this was a page refresh or some other GET request.
>>
>> Once we have the cookie with loginSessionID, we can look up the loginSession. And loginSession will contain the last code (same as clientSession now) and last authenticator. Then we just need to compare the code from the loginSession with the code from the request. If it matches, we are good. If it doesn't match, it's likely the refresh of some previous page and in that case, we can just redirect to the last authenticator.
>>
>> This is the current behavior, but instead of using a cookie, the "code" is stored in the URL.
>>
>> With only a cookie though and no URL information, you won't know the difference between a Back Button and a Page Refresh for GET requests. For POST requests, you won't be able to tell the difference between a Back Button, Page Refresh, or whether the POST is targeted to an actual Authenticator.
>>
>> The more I think about it, things should probably stay the way they currently are, with improvements on user experience. I think we can support what Stian suggested with the current implementation.
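The Cache-Control requirement Bill mentions can be made concrete. The sketch below is purely illustrative — it uses the JDK's built-in HttpServer rather than anything from Keycloak, and the path and form body are invented — but it shows the header being attached to a rendered login form so the browser revalidates instead of serving the page from its back/forward cache:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Illustrative sketch only (not Keycloak code): serve a login form with
// the Cache-Control header discussed in the thread, so pressing Back
// forces a revalidation instead of re-rendering the stale form from cache.
public final class NoStoreFormServer {

    static final String CACHE_CONTROL = "no-store, must-revalidate, max-age=0";

    public static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/login", exchange -> {
            byte[] body = "<form method='post'>...</form>".getBytes(StandardCharsets.UTF_8);
            // The header that makes "Page Expired" handling possible on Back.
            exchange.getResponseHeaders().set("Cache-Control", CACHE_CONTROL);
            exchange.getResponseHeaders().set("Content-Type", "text/html; charset=utf-8");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

The trade-off, as noted in the thread, is that with `no-store` the browser must re-request the page on back/forward navigation, which is exactly what lets the server decide to show a "Page Expired" page instead.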
>> >> >> Not sure if we also need to track all codes, so we are able to distinct >> between the "expired" code, and between the "false" code, which was never >> valid and was possibly used by some attacker for CSRF. Maybe we can sign >> codes with HMAC, so we can verify if it is "expired" or "false" code >> without need to track the list of last codes. >> >> >> This has been done in the past. Then it was switched to using the same >> code throughout the whole flow, then Stian switched it to changing the code >> throughout the flow. I don't know if he uses a hash or not. >> >> Bill >> > > From mposolda at redhat.com Fri Mar 17 04:22:09 2017 From: mposolda at redhat.com (Marek Posolda) Date: Fri, 17 Mar 2017 09:22:09 +0100 Subject: [keycloak-dev] Improve back-button and refreshes in authenticators? In-Reply-To: References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> <81b90531-ecd5-4a15-9c3d-2ccc9c9317b5@redhat.com> <7d195e6a-cb45-1565-5f72-2c5412def258@redhat.com> <68cc026b-0b5d-323b-4c6c-85b2414ca052@redhat.com> <7d85a3a6-d247-2234-d8eb-648ad66fc6f6@redhat.com> Message-ID: Ok, for now just ignoring the browser limitations and fact that back/forward doesn't refresh the page automatically for POST requests :) On 17/03/17 08:45, Stian Thorgersen wrote: > I repeat: > > Before we discuss implementation though, let's figure out what the > ideal user experience would be then figure out how to implement it. > What about: > > * Refresh just works > * Back button will display a nicer page, something like "Page has > expired. To restart the login process click here. To continue the > login process click here.". Yeah, that will be nice for UXP. > Or Back button could just go to the start of the flow always. Regarding UXP, I personally like your previous proposal better. 
If the user is deep in the flow after confirming many authenticator forms and he accidentally clicks the back button, he will need to re-authenticate in all authenticators again. Not so great for usability though?

> * Resubmitting forms will just display the page above

If we do any of your previous proposals, the user will never see the forms which he already submitted? For example if he submitted username/password and is now on the TOTP page, then after clicking "back" he will be either on the "Page has expired" page or at the start of the flow. The start of the flow will usually be the username/password form, but the flow started from scratch, so it won't be resubmitting a form, but a new submit?

Anyway yes, if some of the previous forms is re-submitted, we can display the "page is expired" page.

Marek

> * No need to do redirects. Redirects are bad for performance, but also have twice the response time which is not good from a usability perspective
>
> Is this the optimal user experience? Or should we do something else?
>
> On 17 March 2017 at 08:44, Stian Thorgersen wrote:
>
> Can we please get back to discussing what the best user experience is first. Then we can discuss implementations?
>
> On 16 March 2017 at 18:37, Bill Burke wrote:
>
> On 3/16/17 10:50 AM, Marek Posolda wrote:
>> On 16/03/17 15:27, Bill Burke wrote:
>>> * Hidden field in a form is not a good approach. It's very brittle and will not work in every situation. So huge -1 there.
>>>
>>> * browser back button is not required to resubmit the HTTP request as the page can be rendered from cache.
Therefore >>> you couldn't have a "Page Expired" page displayed when the >>> back button is pressed without setting the header >>> "Cache-Control: no-store, must-revalidate, max-age=0" >> Maybe we can do some javascript stuff like this: >> http://stackoverflow.com/questions/9046184/reload-the-site-when-reached-via-browsers-back-button >> >> >> But that would mean that we will need to inject some common >> javascript stuff into every HTML form displayed by >> authentication SPI. Could we rely on that? > I don't think this is a good approach as Authenticator > develoeprs would have to do the same thing. > > >>> >>> * Furthermore, without some type of code/information within >>> the URL, you also wouldn't know if somebody clicked the back >>> button or not or whether this was a page refresh or some >>> other GET request. >> Once we have the cookie with loginSessionID, we can lookup >> the loginSession. And loginSession will contain last code >> (same like clientSession now) and last authenticator. Then we >> just need to compare the code from the loginSession with the >> code from request. If it matches, we are good. If it doesn't >> match, it's likely the refresh of some previous page and in >> that case, we can just redirect to last authenticator. >> > This is the current behavior, but instead of using a cookie, > the "code" is stored in the URL. > > With only a cookie though and no URL information, you won't > know the different between a Back Button and a Page Refresh > for GET requests. For POST requests, you won't be able to > tell the differencee between a Back Button, Page Refresh, or > whether the POST is targeted to an actual Authenticator. > > The more I think about it, things should probably stay the way > it currently is, with improvements on user experience. I > think we can support what Stian suggested with the current > implementation. 
> > >> Not sure if we also need to track all codes, so we are able >> to distinct between the "expired" code, and between the >> "false" code, which was never valid and was possibly used by >> some attacker for CSRF. Maybe we can sign codes with HMAC, so >> we can verify if it is "expired" or "false" code without need >> to track the list of last codes. > > This has been done in the past. Then it was switched to using > the same code throughout the whole flow, then Stian switched > it to changing the code throughout the flow. I don't know if > he uses a hash or not. > > Bill > > > From sthorger at redhat.com Fri Mar 17 04:40:19 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Fri, 17 Mar 2017 09:40:19 +0100 Subject: [keycloak-dev] Improve back-button and refreshes in authenticators? In-Reply-To: References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> <81b90531-ecd5-4a15-9c3d-2ccc9c9317b5@redhat.com> <7d195e6a-cb45-1565-5f72-2c5412def258@redhat.com> <68cc026b-0b5d-323b-4c6c-85b2414ca052@redhat.com> <7d85a3a6-d247-2234-d8eb-648ad66fc6f6@redhat.com> Message-ID: On 17 March 2017 at 09:22, Marek Posolda wrote: > Ok, for now just ignoring the browser limitations and fact that > back/forward doesn't refresh the page automatically for POST requests :) > > On 17/03/17 08:45, Stian Thorgersen wrote: > > I repeat: > > Before we discuss implementation though, let's figure out what the ideal > user experience would be then figure out how to implement it. What about: > > * Refresh just works > > * Back button will display a nicer page, something like "Page has expired. > To restart the login process click here. To continue the login process > click here.". > > > Yeah, that will be nice for UXP. > > Or Back button could just go to the start of the flow always. > > > Regarding UXP, I personally like your previous proposal better. 
If user > is deep after confirm many authenticator forms and he accidentally > clicks back-button, he will need to re-authenticate in all > authenticators again. Not so great for usability though? > > True - giving the user the option to choose is probably best. > * Resubmitting forms will just display the page above > > If we do any of your previous proposal, user will never see the forms, > > which he already submitted? For example if he submitted > username/password and now is on TOTP page, then after click "back" he will be > either on the "Page has expired" or start of the flow. The start of the flow usually > will be username/password form, but flow started from scratch, so it > won't be resubmitting form, but new submit? > > Anyway yes, if some of previous forms is re-submitted, we can display "page is expired" page. > > I'm not quite following. Is it possible to prevent the back buttons from "re-submitting" forms at all? If so that's ideal as you then don't get the ugly message from the browser that the form is expired. > > Marek > > * No need to do redirects. Redirects is bad for performance, but also has > twice the response time which is not good from a usability perspective > > Is this the optimal user experience? Or should we do something else? > > On 17 March 2017 at 08:44, Stian Thorgersen wrote: > >> Can we please get back to discussing what the best user experience is >> first. Then we can discuss implementations? >> >> On 16 March 2017 at 18:37, Bill Burke wrote: >> >>> >>> >>> On 3/16/17 10:50 AM, Marek Posolda wrote: >>> >>> On 16/03/17 15:27, Bill Burke wrote: >>> >>> * Hidden field in a form is not a good approach. Its very brittle and >>> will not work in every situation. So huge -1 there. >>> >>> * browser back button is not required to resubmit the HTTP request as >>> the page can be rendered from cache. 
Therefore you couldn't have a "Page >>> Expired" page displayed when the back button is pressed without setting the >>> header "Cache-Control: no-store, must-revalidate, max-age=0" >>> >>> Maybe we can do some javascript stuff like this: >>> http://stackoverflow.com/questions/9046184/reload-the-site-w >>> hen-reached-via-browsers-back-button >>> >>> But that would mean that we will need to inject some common javascript >>> stuff into every HTML form displayed by authentication SPI. Could we rely >>> on that? >>> >>> I don't think this is a good approach as Authenticator develoeprs would >>> have to do the same thing. >>> >>> >>> >>> * Furthermore, without some type of code/information within the URL, you >>> also wouldn't know if somebody clicked the back button or not or whether >>> this was a page refresh or some other GET request. >>> >>> Once we have the cookie with loginSessionID, we can lookup the >>> loginSession. And loginSession will contain last code (same like >>> clientSession now) and last authenticator. Then we just need to compare the >>> code from the loginSession with the code from request. If it matches, we >>> are good. If it doesn't match, it's likely the refresh of some previous >>> page and in that case, we can just redirect to last authenticator. >>> >>> This is the current behavior, but instead of using a cookie, the "code" >>> is stored in the URL. >>> >>> With only a cookie though and no URL information, you won't know the >>> different between a Back Button and a Page Refresh for GET requests. For >>> POST requests, you won't be able to tell the differencee between a Back >>> Button, Page Refresh, or whether the POST is targeted to an actual >>> Authenticator. >>> >>> The more I think about it, things should probably stay the way it >>> currently is, with improvements on user experience. I think we can support >>> what Stian suggested with the current implementation. 
>>> >>> Not sure if we also need to track all codes, so we are able to distinguish between the "expired" code and the "false" code, which was never valid and was possibly used by some attacker for CSRF. Maybe we can sign codes with HMAC, so we can verify if it is an "expired" or "false" code without needing to track the list of last codes.
>>>
>>> This has been done in the past. Then it was switched to using the same code throughout the whole flow, then Stian switched it to changing the code throughout the flow. I don't know if he uses a hash or not.
>>>
>>> Bill

From sthorger at redhat.com Fri Mar 17 05:01:11 2017
From: sthorger at redhat.com (Stian Thorgersen)
Date: Fri, 17 Mar 2017 10:01:11 +0100
Subject: [keycloak-dev] next-gen Keycloak proxy
In-Reply-To: <4af2f2e1-3ac2-c2e8-d4c4-aa3e9fa807c0@redhat.com>
References: <4be4c218-27f7-9946-4c19-c2082c5b780a@redhat.com> <4af2f2e1-3ac2-c2e8-d4c4-aa3e9fa807c0@redhat.com>
Message-ID: 

In summary I'm more open towards your approach, but still have some concerns around it. More inline.

On 16 March 2017 at 16:05, Bill Burke wrote:

> On 3/16/17 6:19 AM, Stian Thorgersen wrote:
> > The Keycloak proxy shouldn't be tied directly to the database or caches. It should ideally be stateless and ideally there's no need for sticky sessions.
>
> Please stop making broad blanket statements and back up your response otherwise I'm just going to ignore you.
>
> If the proxy implements pure OIDC it has to minimally store the refresh token and access token. Plus I foresee us wanting to provide more complex proxy features which will require storing more and more state. So, the proxy needs sessions, which means many users will want this to be fault tolerant, which means that the proxy will require distributed sessions.

Can't the tokens just be stored in a cookie? That would make it fully stateless and no need for sticky sessions.
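The cookie idea can be made concrete. The sketch below is illustrative only — the class and method names are invented, and this is not code from Keycloak or its proxy — but it shows the basic shape: encrypt the access/refresh token pair with AES-GCM into a single cookie-safe value, so the proxy itself keeps no per-user server-side state:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;

// Hypothetical sketch: serialize the token pair into one encrypted,
// cookie-safe value so any proxy node can decode it without sticky sessions.
public final class StatelessTokenCookie {

    private static final int GCM_TAG_BITS = 128;
    private static final int IV_BYTES = 12;

    // Encrypt "accessToken\nrefreshToken" with AES-GCM and base64url-encode it;
    // the random IV is prepended so decoding is self-contained.
    public static String encode(SecretKey key, String accessToken, String refreshToken) throws Exception {
        byte[] iv = new byte[IV_BYTES];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
        byte[] ct = cipher.doFinal((accessToken + "\n" + refreshToken).getBytes(StandardCharsets.UTF_8));
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(out);
    }

    // Reverse of encode(); returns {accessToken, refreshToken}.
    public static String[] decode(SecretKey key, String cookieValue) throws Exception {
        byte[] in = Base64.getUrlDecoder().decode(cookieValue);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, in, 0, IV_BYTES));
        byte[] pt = cipher.doFinal(in, IV_BYTES, in.length - IV_BYTES);
        return new String(pt, StandardCharsets.UTF_8).split("\n", 2);
    }

    public static SecretKey newKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        return kg.generateKey();
    }
}
```

One practical caveat that likely feeds into the trade-off being debated here: browsers only guarantee roughly 4KB per cookie, and access tokens are JWTs that can grow past that with many roles or claims, so the value may need to be split across cookies or reduced to a token reference.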
I guess it comes down to what is more costly: refresh token requests, or having a distributed "session" cache (which we already have).

> > It should be capable of running collocated with the Keycloak Server for simplicity, but also should be possible to run in a separate process. If it's done as an additional subsystem that allows easily configuring a Keycloak server to be IdP, IdP+Proxy or just Proxy.
> >
> > Further, it should leverage OpenID Connect rather than us coming up with a new separate protocol.
> >
> > My reasoning behind this is simple:
> >
> > * Please let's not invent another security protocol! That's a lot of work and a whole new vulnerability vector to deal with.
> > * There will be tons more requests to a proxy than there are to the server. Latency overhead will also be much more important.
>
> It wouldn't be a brand new protocol, just an optimized subset of OIDC. For example, you wouldn't have to do a code to token request nor would you have to execute refresh token requests. It would also make things like revocation and backchannel logout much easier, nicer, more efficient, and more robust.

I like removing the code to token request and refresh token requests. However, doesn't the revocation and backchannel logout mechanism have to be made simpler and more robust for "external apps" as well? Wouldn't it be better to solve this problem in general and make it available to external apps and not just our "embedded" proxy?

> I just see huge advantages with this approach: simpler provisioning, simpler configuration, a real nice user experience overall, and possibly some optimizations. What I'm looking for is disadvantages to this approach, which I currently see as:
>
> 1) Larger memory footprint
> 2) More database connections, although these connections should become idle after boot.
> 3) Possible extra distributed session replication as the User/ClientSession needs to be visible on both the auth server and the proxy.
> 4) Possible headache of too many nodes in a cluster, although a proxy is supposed to be able to handle proxying multiple apps and multiple instances of that app.

I would think it would make it even harder to scale to really big loads. There will already be limits on a Keycloak cluster due to invalidation messages and even more so the sessions. If we add even more nodes and load to the same cluster that just makes the matter even worse. There are also significantly more requests to applications than there are to the KC server. That's why it seems safer to keep it separate.

It depends on what and how much of the db + cache we're talking about. If it's just user sessions, then that can probably be handled with distributed sessions.

> What's good about 2-4 is that we can bench this stuff and learn the limits. From what Stuart Douglas tells me, Undertow is really really competitive with just about every web server (Apache, Jetty, etc.) (usually better). Where Java in general is not as good as its competitors is in SSL/TLS. I don't know how much worse it is.

Undertow sure. I don't think that's where the limitation is going to be. Rather the limitation would most likely be in the Infinispan cluster.

> Bill
>
> Bill

From sthorger at redhat.com Fri Mar 17 05:47:39 2017
From: sthorger at redhat.com (Stian Thorgersen)
Date: Fri, 17 Mar 2017 10:47:39 +0100
Subject: [keycloak-dev] New Account Management Console and Account REST api
Message-ID: 

As we've discussed a few times now the plan is to do a brand new account management console. Instead of old school forms it will be all modern using HTML5, AngularJS and REST endpoints.

The JIRA for this work is: https://issues.jboss.org/browse/KEYCLOAK-1250

We were hoping to get some help from the professional UXP folks for this, but it looks like that may take some time.
In the meantime the plan is to base it on the following template:

https://rawgit.com/andresgalante/kc-user/master/layout-alt-fixed.html#

Also, we'll try to use some newer things from PatternFly patterns to improve the screens.

First pass will have the same functionality and behavior as the old account management console. Second pass will be to improve the usability (pages like linking, sessions and history are not very nice).

We will deprecate the old FreeMarker/forms way of doing things, but keep it around so it doesn't break what people are already doing. This can be removed in the future (probably RHSSO 8.0?).

We'll also need to provide full REST endpoints for the account management console. I'll work on that, while Stan works on the UI.

As the account management console will be a pure HTML5 and JS app anyone can completely replace it with a theme. They can also customize it a lot. We'll also need to make sure it's easy to add additional pages/sections.

Rather than just add to AccountService I'm going to rename that to DeprecatedAccountFormService, remove all REST from there, and add a new AccountService that only does REST. All features available through forms at the moment will be available as a REST API, with the exception of account linking, which will be done through Bill's work that was introduced in 3.0 that allows applications to initiate the account linking.

From mposolda at redhat.com Fri Mar 17 06:12:46 2017
From: mposolda at redhat.com (Marek Posolda)
Date: Fri, 17 Mar 2017 11:12:46 +0100
Subject: [keycloak-dev] Improve back-button and refreshes in authenticators?
In-Reply-To: References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> <81b90531-ecd5-4a15-9c3d-2ccc9c9317b5@redhat.com> <7d195e6a-cb45-1565-5f72-2c5412def258@redhat.com> <68cc026b-0b5d-323b-4c6c-85b2414ca052@redhat.com> <7d85a3a6-d247-2234-d8eb-648ad66fc6f6@redhat.com> Message-ID: <273638a2-663c-5b08-d977-97d5721eb21b@redhat.com> On 17/03/17 09:40, Stian Thorgersen wrote: > > > On 17 March 2017 at 09:22, Marek Posolda > wrote: > > Ok, for now just ignoring the browser limitations and fact that > back/forward doesn't refresh the page automatically for POST > requests :) > > On 17/03/17 08:45, Stian Thorgersen wrote: >> I repeat: >> >> Before we discuss implementation though, let's figure out what >> the ideal user experience would be then figure out how to >> implement it. What about: >> >> * Refresh just works >> * Back button will display a nicer page, something like "Page has >> expired. To restart the login process click here. To continue the >> login process click here.". > > Yeah, that will be nice for UXP. > >> Or Back button could just go to the start of the flow always. > > Regarding UXP, I personally like your previous proposal better. If user > is deep after confirm many authenticator forms and he accidentally > clicks back-button, he will need to re-authenticate in all > authenticators again. Not so great for usability though? > > > True - giving the user the option to choose is probably best. > >> * Resubmitting forms will just display the page above > If we do any of your previous proposal, user will never see the > forms, > > which he already submitted? For example if he submitted > username/password and now is on TOTP page, then after click "back" he will be > either on the "Page has expired" or start of the flow. 
The start of the flow will usually be the username/password form, but the flow started from scratch, so it won't be resubmitting a form, but a new submit?

Anyway yes, if some of the previous forms is re-submitted, we can display the "page is expired" page.

> I'm not quite following. Is it possible to prevent the back buttons
> from "re-submitting" forms at all? If so that's ideal as you then
> don't get the ugly message from the browser that the form is expired.

Yes, as long as we don't send any "Cache-Control" header, the browser back/forward buttons don't resubmit forms and don't re-send any requests. So, following up on the example above:

1) User successfully authenticates on the username/password form and is on the TOTP page.
2) User presses the browser "back" button. Now he will see the username/password form again.
3) User tries to re-submit the username/password form OR presses the browser "refresh" button. In both cases, we will show our nice "Page has expired. To restart the login process click here. To continue the login process click here."

Are we in agreement that this is the ideal user experience? If yes, we can achieve it quite easily without the need for JavaScript hacks or hidden form fields.

Marek

> Marek
>
>> * No need to do redirects. Redirects are bad for performance, but also have twice the response time which is not good from a usability perspective
>>
>> Is this the optimal user experience? Or should we do something else?
>>
>> On 17 March 2017 at 08:44, Stian Thorgersen wrote:
>>
>> Can we please get back to discussing what the best user experience is first. Then we can discuss implementations?
>>
>> On 16 March 2017 at 18:37, Bill Burke wrote:
>>
>> On 3/16/17 10:50 AM, Marek Posolda wrote:
>>> On 16/03/17 15:27, Bill Burke wrote:
>>>> * Hidden field in a form is not a good approach. It's very brittle and will not work in every situation. So huge -1 there.
>>>> >>>> * browser back button is not required to resubmit the >>>> HTTP request as the page can be rendered from cache. >>>> Therefore you couldn't have a "Page Expired" page >>>> displayed when the back button is pressed without >>>> setting the header "Cache-Control: no-store, >>>> must-revalidate, max-age=0" >>> Maybe we can do some javascript stuff like this: >>> http://stackoverflow.com/questions/9046184/reload-the-site-when-reached-via-browsers-back-button >>> >>> >>> But that would mean that we will need to inject some >>> common javascript stuff into every HTML form displayed >>> by authentication SPI. Could we rely on that? >> I don't think this is a good approach as Authenticator >> develoeprs would have to do the same thing. >> >> >>>> >>>> * Furthermore, without some type of code/information >>>> within the URL, you also wouldn't know if somebody >>>> clicked the back button or not or whether this was a >>>> page refresh or some other GET request. >>> Once we have the cookie with loginSessionID, we can >>> lookup the loginSession. And loginSession will contain >>> last code (same like clientSession now) and last >>> authenticator. Then we just need to compare the code >>> from the loginSession with the code from request. If it >>> matches, we are good. If it doesn't match, it's likely >>> the refresh of some previous page and in that case, we >>> can just redirect to last authenticator. >>> >> This is the current behavior, but instead of using a >> cookie, the "code" is stored in the URL. >> >> With only a cookie though and no URL information, you >> won't know the different between a Back Button and a Page >> Refresh for GET requests. For POST requests, you won't >> be able to tell the differencee between a Back Button, >> Page Refresh, or whether the POST is targeted to an >> actual Authenticator. >> >> The more I think about it, things should probably stay >> the way it currently is, with improvements on user >> experience. 
I think we can support what Stian suggested >> with the current implementation. >> >> >>> Not sure if we also need to track all codes, so we are >>> able to distinct between the "expired" code, and between >>> the "false" code, which was never valid and was possibly >>> used by some attacker for CSRF. Maybe we can sign codes >>> with HMAC, so we can verify if it is "expired" or >>> "false" code without need to track the list of last codes. >> >> This has been done in the past. Then it was switched to >> using the same code throughout the whole flow, then Stian >> switched it to changing the code throughout the flow. I >> don't know if he uses a hash or not. >> >> Bill >> >> >> > > From ssilvert at redhat.com Fri Mar 17 07:54:30 2017 From: ssilvert at redhat.com (Stan Silvert) Date: Fri, 17 Mar 2017 07:54:30 -0400 Subject: [keycloak-dev] New Account Management Console and Account REST api In-Reply-To: References: Message-ID: <44573b50-db26-ce45-81c0-eb5cc3bbc7e1@redhat.com> On 3/17/2017 5:47 AM, Stian Thorgersen wrote: > As we've discussed a few times now the plan is to do a brand new account > management console. Instead of old school forms it will be all modern using > HTML5, AngularJS and REST endpoints. One thing. That should be "Angular", not "AngularJS". Just to educate everyone, here is what's going on in Angular-land: AngularJS is the old framework we used for the admin console. Angular is the new framework we will use for the account management console. Most of you know the new framework as Angular2 or ng-2, but the powers that be want to just call it "Angular". This framework is completely rewritten and really has no relation to AngularJS, except they both come from Google and both have "Angular" in the name. To avoid confusion, I'm going to call it "Angular2" for the foreseeable future.
> > The JIRA for this work is: > https://issues.jboss.org/browse/KEYCLOAK-1250 > > We where hoping to get some help from the professional UXP folks for this, > but it looks like that may take some time. In the mean time the plan is to > base it on the following template: > > https://rawgit.com/andresgalante/kc-user/master/layout-alt-fixed.html# > > Also, we'll try to use some newer things from PatternFly patterns to > improve the screens. > > First pass will have the same functionality and behavior as the old account > management console. Second pass will be to improve the usability (pages > like linking, sessions and history are not very nice). > > We will deprecate the old FreeMarker/forms way of doing things, but keep it > around so it doesn't break what people are already doing. This can be > removed in the future (probably RHSSO 8.0?). > > We'll also need to provide full rest endpoints for the account management > console. I'll work on that, while Stan works on the UI. > > As the account management console will be a pure HTML5 and JS app anyone > can completely replace it with a theme. They can also customize it a lot. > We'll also need to make sure it's easy to add additional pages/sections. > > Rather than just add to AccountService I'm going to rename that > to DeprecatedAccountFormService remove all REST from there and add a new > AccountService that only does REST. All features available through forms at > the moment will be available as REST API, with the exception of account > linking which will be done through Bills work that was introduced in 3.0 > that allows applications to initiate the account linking. 
> _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From sthorger at redhat.com Fri Mar 17 08:08:28 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Fri, 17 Mar 2017 13:08:28 +0100 Subject: [keycloak-dev] New Account Management Console and Account REST api In-Reply-To: <44573b50-db26-ce45-81c0-eb5cc3bbc7e1@redhat.com> References: <44573b50-db26-ce45-81c0-eb5cc3bbc7e1@redhat.com> Message-ID: I'm going to call it "YetAnotherJsFramework" ;) On 17 March 2017 at 12:54, Stan Silvert wrote: > On 3/17/2017 5:47 AM, Stian Thorgersen wrote: > > As we've discussed a few times now the plan is to do a brand new account > > management console. Instead of old school forms it will be all modern > using > > HTML5, AngularJS and REST endpoints. > One thing. That should be "Angular", not "AngularJS". Just to > educate everyone, here is what's going on in Angular-land: > > AngularJS is the old framework we used for the admin console. > Angular is the new framework we will use for the account management > console. > > Most of you know the new framework as Angular2 or ng-2, but the powers > that be want to just call it "Angular". This framework is completely > rewritten and really has no relation to AngularJS, except they both come > from Google and both have "Angular" in the name. > > To avoid confusion, I'm going to call it "Angualr2" for the foreseeable > future. > > > > The JIRA for this work is: > > https://issues.jboss.org/browse/KEYCLOAK-1250 > > > > We where hoping to get some help from the professional UXP folks for > this, > > but it looks like that may take some time. In the mean time the plan is > to > > base it on the following template: > > > > https://rawgit.com/andresgalante/kc-user/master/layout-alt-fixed.html# > > > > Also, we'll try to use some newer things from PatternFly patterns to > > improve the screens. 
> > > > First pass will have the same functionality and behavior as the old > account > > management console. Second pass will be to improve the usability (pages > > like linking, sessions and history are not very nice). > > > > We will deprecate the old FreeMarker/forms way of doing things, but keep > it > > around so it doesn't break what people are already doing. This can be > > removed in the future (probably RHSSO 8.0?). > > > > We'll also need to provide full rest endpoints for the account management > > console. I'll work on that, while Stan works on the UI. > > > > As the account management console will be a pure HTML5 and JS app anyone > > can completely replace it with a theme. They can also customize it a lot. > > We'll also need to make sure it's easy to add additional pages/sections. > > > > Rather than just add to AccountService I'm going to rename that > > to DeprecatedAccountFormService remove all REST from there and add a new > > AccountService that only does REST. All features available through forms > at > > the moment will be available as REST API, with the exception of account > > linking which will be done through Bills work that was introduced in 3.0 > > that allows applications to initiate the account linking. > > _______________________________________________ > > keycloak-dev mailing list > > keycloak-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From sthorger at redhat.com Fri Mar 17 08:09:12 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Fri, 17 Mar 2017 13:09:12 +0100 Subject: [keycloak-dev] New Account Management Console and Account REST api In-Reply-To: References: <44573b50-db26-ce45-81c0-eb5cc3bbc7e1@redhat.com> Message-ID: Had another idea. 
We could quite easily make it possible to configure the "account management url" for a realm. That would let folks redirect to external account management console if they want to completely override it. On 17 March 2017 at 13:08, Stian Thorgersen wrote: > I'm going to call it "YetAnotherJsFramework" ;) > > On 17 March 2017 at 12:54, Stan Silvert wrote: > >> On 3/17/2017 5:47 AM, Stian Thorgersen wrote: >> > As we've discussed a few times now the plan is to do a brand new account >> > management console. Instead of old school forms it will be all modern >> using >> > HTML5, AngularJS and REST endpoints. >> One thing. That should be "Angular", not "AngularJS". Just to >> educate everyone, here is what's going on in Angular-land: >> >> AngularJS is the old framework we used for the admin console. >> Angular is the new framework we will use for the account management >> console. >> >> Most of you know the new framework as Angular2 or ng-2, but the powers >> that be want to just call it "Angular". This framework is completely >> rewritten and really has no relation to AngularJS, except they both come >> from Google and both have "Angular" in the name. >> >> To avoid confusion, I'm going to call it "Angualr2" for the foreseeable >> future. >> > >> > The JIRA for this work is: >> > https://issues.jboss.org/browse/KEYCLOAK-1250 >> > >> > We where hoping to get some help from the professional UXP folks for >> this, >> > but it looks like that may take some time. In the mean time the plan is >> to >> > base it on the following template: >> > >> > https://rawgit.com/andresgalante/kc-user/master/layout-alt-fixed.html# >> > >> > Also, we'll try to use some newer things from PatternFly patterns to >> > improve the screens. >> > >> > First pass will have the same functionality and behavior as the old >> account >> > management console. Second pass will be to improve the usability (pages >> > like linking, sessions and history are not very nice). 
>> > >> > We will deprecate the old FreeMarker/forms way of doing things, but >> keep it >> > around so it doesn't break what people are already doing. This can be >> > removed in the future (probably RHSSO 8.0?). >> > >> > We'll also need to provide full rest endpoints for the account >> management >> > console. I'll work on that, while Stan works on the UI. >> > >> > As the account management console will be a pure HTML5 and JS app anyone >> > can completely replace it with a theme. They can also customize it a >> lot. >> > We'll also need to make sure it's easy to add additional pages/sections. >> > >> > Rather than just add to AccountService I'm going to rename that >> > to DeprecatedAccountFormService remove all REST from there and add a new >> > AccountService that only does REST. All features available through >> forms at >> > the moment will be available as REST API, with the exception of account >> > linking which will be done through Bills work that was introduced in 3.0 >> > that allows applications to initiate the account linking. >> > _______________________________________________ >> > keycloak-dev mailing list >> > keycloak-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> > > From ssilvert at redhat.com Fri Mar 17 08:11:05 2017 From: ssilvert at redhat.com (Stan Silvert) Date: Fri, 17 Mar 2017 08:11:05 -0400 Subject: [keycloak-dev] New Account Management Console and Account REST api In-Reply-To: References: <44573b50-db26-ce45-81c0-eb5cc3bbc7e1@redhat.com> Message-ID: <274fb708-93e1-f9fb-bf4b-a3c40b1c6472@redhat.com> On 3/17/2017 8:08 AM, Stian Thorgersen wrote: > I'm going to call it "YetAnotherJsFramework" ;) :-) And btw, I didn't mean to hijack the thread. I'd rather be getting feedback about the console itself. 
> > On 17 March 2017 at 12:54, Stan Silvert > wrote: > > On 3/17/2017 5:47 AM, Stian Thorgersen wrote: > > As we've discussed a few times now the plan is to do a brand new > account > > management console. Instead of old school forms it will be all > modern using > > HTML5, AngularJS and REST endpoints. > One thing. That should be "Angular", not "AngularJS". Just to > educate everyone, here is what's going on in Angular-land: > > AngularJS is the old framework we used for the admin console. > Angular is the new framework we will use for the account > management console. > > Most of you know the new framework as Angular2 or ng-2, but the powers > that be want to just call it "Angular". This framework is completely > rewritten and really has no relation to AngularJS, except they > both come > from Google and both have "Angular" in the name. > > To avoid confusion, I'm going to call it "Angualr2" for the > foreseeable > future. > > > > The JIRA for this work is: > > https://issues.jboss.org/browse/KEYCLOAK-1250 > > > > > We where hoping to get some help from the professional UXP folks > for this, > > but it looks like that may take some time. In the mean time the > plan is to > > base it on the following template: > > > > > https://rawgit.com/andresgalante/kc-user/master/layout-alt-fixed.html# > > > > > Also, we'll try to use some newer things from PatternFly patterns to > > improve the screens. > > > > First pass will have the same functionality and behavior as the > old account > > management console. Second pass will be to improve the usability > (pages > > like linking, sessions and history are not very nice). > > > > We will deprecate the old FreeMarker/forms way of doing things, > but keep it > > around so it doesn't break what people are already doing. This > can be > > removed in the future (probably RHSSO 8.0?). > > > > We'll also need to provide full rest endpoints for the account > management > > console. I'll work on that, while Stan works on the UI. 
> > > > As the account management console will be a pure HTML5 and JS > app anyone > > can completely replace it with a theme. They can also customize > it a lot. > > We'll also need to make sure it's easy to add additional > pages/sections. > > > > Rather than just add to AccountService I'm going to rename that > > to DeprecatedAccountFormService remove all REST from there and > add a new > > AccountService that only does REST. All features available > through forms at > > the moment will be available as REST API, with the exception of > account > > linking which will be done through Bills work that was > introduced in 3.0 > > that allows applications to initiate the account linking. > > _______________________________________________ > > keycloak-dev mailing list > > keycloak-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > From ssilvert at redhat.com Fri Mar 17 08:17:14 2017 From: ssilvert at redhat.com (Stan Silvert) Date: Fri, 17 Mar 2017 08:17:14 -0400 Subject: [keycloak-dev] New Account Management Console and Account REST api In-Reply-To: References: <44573b50-db26-ce45-81c0-eb5cc3bbc7e1@redhat.com> Message-ID: <8c3419ff-9167-8dfc-2166-601dde0f8868@redhat.com> On 3/17/2017 8:09 AM, Stian Thorgersen wrote: > Had another idea. We could quite easily make it possible to configure > the "account management url" for a realm. That would let folks > redirect to external account management console if they want to > completely override it. That would also mean that our own account management console could be served from anywhere or even installed locally on the client machine. 
> > On 17 March 2017 at 13:08, Stian Thorgersen > wrote: > > I'm going to call it "YetAnotherJsFramework" ;) > > On 17 March 2017 at 12:54, Stan Silvert > wrote: > > On 3/17/2017 5:47 AM, Stian Thorgersen wrote: > > As we've discussed a few times now the plan is to do a brand > new account > > management console. Instead of old school forms it will be > all modern using > > HTML5, AngularJS and REST endpoints. > One thing. That should be "Angular", not "AngularJS". Just to > educate everyone, here is what's going on in Angular-land: > > AngularJS is the old framework we used for the admin console. > Angular is the new framework we will use for the account > management console. > > Most of you know the new framework as Angular2 or ng-2, but > the powers > that be want to just call it "Angular". This framework is > completely > rewritten and really has no relation to AngularJS, except they > both come > from Google and both have "Angular" in the name. > > To avoid confusion, I'm going to call it "Angualr2" for the > foreseeable > future. > > > > The JIRA for this work is: > > https://issues.jboss.org/browse/KEYCLOAK-1250 > > > > > We where hoping to get some help from the professional UXP > folks for this, > > but it looks like that may take some time. In the mean time > the plan is to > > base it on the following template: > > > > > https://rawgit.com/andresgalante/kc-user/master/layout-alt-fixed.html# > > > > > Also, we'll try to use some newer things from PatternFly > patterns to > > improve the screens. > > > > First pass will have the same functionality and behavior as > the old account > > management console. Second pass will be to improve the > usability (pages > > like linking, sessions and history are not very nice). > > > > We will deprecate the old FreeMarker/forms way of doing > things, but keep it > > around so it doesn't break what people are already doing. > This can be > > removed in the future (probably RHSSO 8.0?). 
> > > > We'll also need to provide full rest endpoints for the > account management > > console. I'll work on that, while Stan works on the UI. > > > > As the account management console will be a pure HTML5 and > JS app anyone > > can completely replace it with a theme. They can also > customize it a lot. > > We'll also need to make sure it's easy to add additional > pages/sections. > > > > Rather than just add to AccountService I'm going to rename that > > to DeprecatedAccountFormService remove all REST from there > and add a new > > AccountService that only does REST. All features available > through forms at > > the moment will be available as REST API, with the exception > of account > > linking which will be done through Bills work that was > introduced in 3.0 > > that allows applications to initiate the account linking. > > _______________________________________________ > > keycloak-dev mailing list > > keycloak-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > > From bburke at redhat.com Fri Mar 17 10:08:31 2017 From: bburke at redhat.com (Bill Burke) Date: Fri, 17 Mar 2017 10:08:31 -0400 Subject: [keycloak-dev] next-gen Keycloak proxy In-Reply-To: References: <4be4c218-27f7-9946-4c19-c2082c5b780a@redhat.com> <4af2f2e1-3ac2-c2e8-d4c4-aa3e9fa807c0@redhat.com> Message-ID: <5b3f7d34-36ab-caec-1ad3-6e5d5ec325e4@redhat.com> On 3/17/17 5:01 AM, Stian Thorgersen wrote: > In summary I'm more open towards your approach, but still have some > concerns around it. More inline. > > On 16 March 2017 at 16:05, Bill Burke > wrote: > > > > On 3/16/17 6:19 AM, Stian Thorgersen wrote: >> The Keycloak proxy shouldn't be tied directly to the database or >> caches. It should ideally be stateless and ideally there's no >> need for sticky sessions. 
>> > Please stop making broad blanket statements and back up your > reponse otherwise I'm just going to ignore you. > > If the proxy implements pure OIDC it has to minimally store > refresh token and access token. Plus I foresee us wanting to > provide more complex proxy features which will require storing > more an more state. So, the proxy needs sessions which means many > users will want this to be fault tolerant, which means that the > proxy will require distributed sessions. > > > Can't the tokens just be stored in a cookie? That would make it fully > stateless and no need for sticky sessions. > > I guess it comes down to what is more costly refresh token requests or > having a distributed "session" cache (which we already have). I'm worried about cookie size constraints. I'll do some measurements. This issue is orthogonal to the other issues though I think. > > >> It should be capable of running collocated with the Keycloak >> Server for simplicity, but also should be possible to run in >> separate process. If it's done as an additional subsystem that >> allows easily configuring a Keycloak server to be IdP, IdP+Proxy >> or just Proxy. >> > > > >> Further, it should leverage OpenID Connect rather than us coming >> up with a new separate protocol. >> >> My reasoning behind this is simple: >> >> * Please let's not invent another security protocol! That's a lot >> of work and a whole new vulnerability vector to deal with. >> * There will be tons more requests to a proxy than there are to >> the server. Latency overhead will also be much more important. >> > It wouldn't be a brand new protocol, just an optimized subset of > OIDC. For example, you wouldn't have to do a code to token > request nor would you have to execute refresh token requests. It > would also make things like revocation and backchannel logout much > easier, nicer, more efficient, and more robust. > > > I like removing the code to token request and refresh token requests. 
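Bill says above he will do some measurements on the cookie-size concern. A quick back-of-the-envelope check might look like the sketch below; the cookie names and token sizes are made-up stand-ins, and the 4096-byte figure comes from RFC 6265, which only requires user agents to support at least 4096 bytes per cookie (most browsers enforce roughly that as a hard cap):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Rough estimate of whether an access/refresh token pair fits within the
// common per-cookie budget. Token payload sizes here are arbitrary
// placeholders for measurement, not real Keycloak token sizes.
public class CookieSizeCheck {

    static final int PER_COOKIE_LIMIT = 4096;

    // Size on the wire of "name=value", ignoring attributes like Path/Secure.
    static int cookieBytes(String name, String value) {
        return (name + "=" + value).getBytes(StandardCharsets.UTF_8).length;
    }

    public static void main(String[] args) {
        // Stand-in tokens: base64url of 1536- and 512-byte opaque payloads,
        // roughly the shape of a signed JWT access/refresh token pair.
        String access = Base64.getUrlEncoder().withoutPadding().encodeToString(new byte[1536]);
        String refresh = Base64.getUrlEncoder().withoutPadding().encodeToString(new byte[512]);

        System.out.println("access cookie:  " + cookieBytes("KC_ACCESS", access) + " bytes");
        System.out.println("refresh cookie: " + cookieBytes("KC_REFRESH", refresh) + " bytes");
        System.out.println("access fits: " + (cookieBytes("KC_ACCESS", access) <= PER_COOKIE_LIMIT));
    }
}
```

Real token sizes grow with the number of roles and mappers in the realm, so any such measurement would need to use representative tokens rather than fixed payloads.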
> However, doesn't the revocation and backchannel logout mechanism have > to be made simpler and more robust for "external apps" as well? > Wouldn't it be better to solve this problem in general and make it > available to external apps and not just our "embedded" proxy. Client nodes currently register themselves with auth server on demand so that they can receive revocation and backchannel logout events. The auth server sends a message to each and every node when these events happen. A proxy that has access to UserSession cache doesn't have to do any of these things. This is the "simpler" and "more efficient" argument. I forgot the "more robust" argument I had. > > I Just see huge advantages with this approach: simpler > provisioning, simpler configuration, a real nice user experience > overall, and possibly some optimizations. What looking for is > disadvantages to this approach which I currently see are: > > 1) Larger memory footprint > > 2) More database connections, although these connections should > become idle after boot. > 3) Possible extra distributed session replication as the > User/ClientSession needs to be visible on both the auth server and > the proxy. > 4) Possible headache of too many nodes in a cluster, although a > proxy is supposed to be able to handle proxing multiple apps and > multiple instances of that app. > > > I would think it would make it even harder to scale to really big > loads. There will already be limits on a Keycloak cluster due to > invalidation messages and even more so the sessions. If we add even > more nodes and load to the same cluster that just makes the matter > even worse. There's also significantly more requests to applications > than there is for KC server. That's why it seems safer to keep it > separate. > Configuring a proxy in the admin console is a good thing right? 
If that is an assumption, then the proxy needs to receive realm invalidation events so that it can refresh the client view (config settings, mappers, etc.). > It depends on what and how much of the db + cache we're talking about. > Is it just user sessions then that can probably be handled with > distributed sessions. Realm info doesn't hit the db much, but user store will be hit. Hmm...didn't think of the user store hit. Something like LDAP would be hit by the auth server and each proxy for each login session. That's a downer... If a proxy could proxy all apps, then maybe there is a way to maintain sticky sessions between the auth server and the proxy so they share the same node/cache for the same session. Still a huge negative though as things just got a lot more complex. Maybe we could just hook up the proxy to the realm store and realm cache? I prefer this idea as then proxy setup isn't much different than auth server setup. And all the configuration sync logic is already in place as the proxy would receive realm invalidation events. Bill From sthorger at redhat.com Fri Mar 17 10:08:37 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Fri, 17 Mar 2017 15:08:37 +0100 Subject: [keycloak-dev] Improve back-button and refreshes in authenticators?
In-Reply-To: <273638a2-663c-5b08-d977-97d5721eb21b@redhat.com> References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> <81b90531-ecd5-4a15-9c3d-2ccc9c9317b5@redhat.com> <7d195e6a-cb45-1565-5f72-2c5412def258@redhat.com> <68cc026b-0b5d-323b-4c6c-85b2414ca052@redhat.com> <7d85a3a6-d247-2234-d8eb-648ad66fc6f6@redhat.com> <273638a2-663c-5b08-d977-97d5721eb21b@redhat.com> Message-ID: On 17 March 2017 at 11:12, Marek Posolda wrote: > On 17/03/17 09:40, Stian Thorgersen wrote: > > > > On 17 March 2017 at 09:22, Marek Posolda wrote: > >> Ok, for now just ignoring the browser limitations and fact that >> back/forward doesn't refresh the page automatically for POST requests :) >> >> On 17/03/17 08:45, Stian Thorgersen wrote: >> >> I repeat: >> >> Before we discuss implementation though, let's figure out what the ideal >> user experience would be then figure out how to implement it. What about: >> >> * Refresh just works >> >> * Back button will display a nicer page, something like "Page has >> expired. To restart the login process click here. To continue the login >> process click here.". >> >> >> Yeah, that will be nice for UXP. >> >> Or Back button could just go to the start of the flow always. >> >> >> Regarding UXP, I personally like your previous proposal better. If user >> is deep after confirm many authenticator forms and he accidentally >> clicks back-button, he will need to re-authenticate in all >> authenticators again. Not so great for usability though? >> >> > True - giving the user the option to choose is probably best. > > >> * Resubmitting forms will just display the page above >> >> If we do any of your previous proposal, user will never see the forms, >> >> which he already submitted? For example if he submitted >> username/password and now is on TOTP page, then after click "back" he will be >> either on the "Page has expired" or start of the flow. 
The start of the flow usually >> will be username/password form, but flow started from scratch, so it >> won't be resubmitting form, but new submit? >> >> Anyway yes, if some of previous forms is re-submitted, we can display "page is expired" page. >> >> I'm not quite following. Is it possible to prevent the back buttons from > "re-submitting" forms at all? If so that's ideal as you then don't get the > ugly message from the browser that the form is expired. > > Yes, as long as we don't send any "Cache-control" header, then browser > back/forward buttons doesn't resubmit forms and doesn't re-send any > requests. > > So follow-up on the example above > 1) User successfully authenticated on username/password form and he is on > TOTP page. > 2) User press browser "back" button. Now he will see again the > username/password form > 3) User will try to re-submit the username/password form OR he press > browser "refresh" button. In both cases, we will show our nice "Page has > expired. To restart the login process click here. To continue the login > process click here." > > Are we in agreement that this is ideal user experience? > Not quite. Clicking back shouldn't show the form again. It should rather just show the page expired message and ask user if they want to restart or continue. By the way Google's login flows are really nice. Much better than ours. > > If yes, we can achieve that quite easily without need of javascript hacks > or hidden form fields though. > > Marek > > > >> Marek >> >> * No need to do redirects. Redirects is bad for performance, but also has >> twice the response time which is not good from a usability perspective >> >> Is this the optimal user experience? Or should we do something else? >> >> On 17 March 2017 at 08:44, Stian Thorgersen wrote: >> >>> Can we please get back to discussing what the best user experience is >>> first. Then we can discuss implementations? 
>>> >>> On 16 March 2017 at 18:37, Bill Burke wrote: >>> >>>> >>>> >>>> On 3/16/17 10:50 AM, Marek Posolda wrote: >>>> >>>> On 16/03/17 15:27, Bill Burke wrote: >>>> >>>> * Hidden field in a form is not a good approach. Its very brittle and >>>> will not work in every situation. So huge -1 there. >>>> >>>> * browser back button is not required to resubmit the HTTP request as >>>> the page can be rendered from cache. Therefore you couldn't have a "Page >>>> Expired" page displayed when the back button is pressed without setting the >>>> header "Cache-Control: no-store, must-revalidate, max-age=0" >>>> >>>> Maybe we can do some javascript stuff like this: >>>> http://stackoverflow.com/questions/9046184/reload-the-site-w >>>> hen-reached-via-browsers-back-button >>>> >>>> But that would mean that we will need to inject some common javascript >>>> stuff into every HTML form displayed by authentication SPI. Could we rely >>>> on that? >>>> >>>> I don't think this is a good approach as Authenticator develoeprs would >>>> have to do the same thing. >>>> >>>> >>>> >>>> * Furthermore, without some type of code/information within the URL, >>>> you also wouldn't know if somebody clicked the back button or not or >>>> whether this was a page refresh or some other GET request. >>>> >>>> Once we have the cookie with loginSessionID, we can lookup the >>>> loginSession. And loginSession will contain last code (same like >>>> clientSession now) and last authenticator. Then we just need to compare the >>>> code from the loginSession with the code from request. If it matches, we >>>> are good. If it doesn't match, it's likely the refresh of some previous >>>> page and in that case, we can just redirect to last authenticator. >>>> >>>> This is the current behavior, but instead of using a cookie, the "code" >>>> is stored in the URL. >>>> >>>> With only a cookie though and no URL information, you won't know the >>>> different between a Back Button and a Page Refresh for GET requests. 
For >>>> POST requests, you won't be able to tell the differencee between a Back >>>> Button, Page Refresh, or whether the POST is targeted to an actual >>>> Authenticator. >>>> >>>> The more I think about it, things should probably stay the way it >>>> currently is, with improvements on user experience. I think we can support >>>> what Stian suggested with the current implementation. >>>> >>>> >>>> Not sure if we also need to track all codes, so we are able to distinct >>>> between the "expired" code, and between the "false" code, which was never >>>> valid and was possibly used by some attacker for CSRF. Maybe we can sign >>>> codes with HMAC, so we can verify if it is "expired" or "false" code >>>> without need to track the list of last codes. >>>> >>>> >>>> This has been done in the past. Then it was switched to using the same >>>> code throughout the whole flow, then Stian switched it to changing the code >>>> throughout the flow. I don't know if he uses a hash or not. >>>> >>>> Bill >>>> >>> >>> >> >> > > From sthorger at redhat.com Fri Mar 17 10:18:44 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Fri, 17 Mar 2017 15:18:44 +0100 Subject: [keycloak-dev] Improve back-button and refreshes in authenticators? In-Reply-To: References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> <81b90531-ecd5-4a15-9c3d-2ccc9c9317b5@redhat.com> <7d195e6a-cb45-1565-5f72-2c5412def258@redhat.com> <68cc026b-0b5d-323b-4c6c-85b2414ca052@redhat.com> <7d85a3a6-d247-2234-d8eb-648ad66fc6f6@redhat.com> <273638a2-663c-5b08-d977-97d5721eb21b@redhat.com> Message-ID: The ideal behavior IMO is: 1. Back button always just shows the Page expired message, with option to continue or restart 2. Refresh always works 3. No redirect required on POSTs If we can't have all 3 (and it sounds like we can't) I'd probably say #3 is the lowest priority. 
I think the option to "restart" the flow is important as the user in most cases would click back because they've done something wrong. So what Bill suggested would work I think: * Have "Cache-Control: no-store, must-revalidate, max-age=0" * I guess we need the redirect after a POST to GET the next step * Have the execution ID in the flow as well as the current execution in the authentication session. If the requested execution ID is not equal to the one in the authentication session, display the "Page expired" page What I don't quite get though is why the redirect after POST prevents the "form is expired do you want to resubmit" or whatever the message is? On 17 March 2017 at 15:08, Stian Thorgersen wrote: > > > On 17 March 2017 at 11:12, Marek Posolda wrote: > >> On 17/03/17 09:40, Stian Thorgersen wrote: >> >> >> >> On 17 March 2017 at 09:22, Marek Posolda wrote: >> >>> Ok, for now just ignoring the browser limitations and fact that >>> back/forward doesn't refresh the page automatically for POST requests :) >>> >>> On 17/03/17 08:45, Stian Thorgersen wrote: >>> >>> I repeat: >>> >>> Before we discuss implementation though, let's figure out what the ideal >>> user experience would be then figure out how to implement it. What about: >>> >>> * Refresh just works >>> >>> * Back button will display a nicer page, something like "Page has >>> expired. To restart the login process click here. To continue the login >>> process click here.". >>> >>> >>> Yeah, that will be nice for UXP. >>> >>> Or Back button could just go to the start of the flow always. >>> >>> >>> Regarding UXP, I personally like your previous proposal better. If user >>> is deep after confirm many authenticator forms and he accidentally >>> clicks back-button, he will need to re-authenticate in all >>> authenticators again. Not so great for usability though? >>> >>> >> True - giving the user the option to choose is probably best.
>> >> >>> * Resubmitting forms will just display the page above >>> >>> If we do any of your previous proposal, user will never see the forms, >>> >>> which he already submitted? For example if he submitted >>> username/password and now is on TOTP page, then after click "back" he will be >>> either on the "Page has expired" or start of the flow. The start of the flow usually >>> will be username/password form, but flow started from scratch, so it >>> won't be resubmitting form, but new submit? >>> >>> Anyway yes, if some of previous forms is re-submitted, we can display "page is expired" page. >>> >>> I'm not quite following. Is it possible to prevent the back buttons from >> "re-submitting" forms at all? If so that's ideal as you then don't get the >> ugly message from the browser that the form is expired. >> >> Yes, as long as we don't send any "Cache-control" header, then browser >> back/forward buttons doesn't resubmit forms and doesn't re-send any >> requests. >> >> So follow-up on the example above >> 1) User successfully authenticated on username/password form and he is on >> TOTP page. >> 2) User press browser "back" button. Now he will see again the >> username/password form >> 3) User will try to re-submit the username/password form OR he press >> browser "refresh" button. In both cases, we will show our nice "Page has >> expired. To restart the login process click here. To continue the login >> process click here." >> >> Are we in agreement that this is ideal user experience? >> > > Not quite. Clicking back shouldn't show the form again. It should rather > just show the page expired message and ask user if they want to restart or > continue. > > By the way Google's login flows are really nice. Much better than ours. > > >> >> If yes, we can achieve that quite easily without need of javascript hacks >> or hidden form fields though. >> >> Marek >> >> >> >>> Marek >>> >>> * No need to do redirects. 
Redirects is bad for performance, but also >>> has twice the response time which is not good from a usability perspective >>> >>> Is this the optimal user experience? Or should we do something else? >>> >>> On 17 March 2017 at 08:44, Stian Thorgersen wrote: >>> >>>> Can we please get back to discussing what the best user experience is >>>> first. Then we can discuss implementations? >>>> >>>> On 16 March 2017 at 18:37, Bill Burke wrote: >>>> >>>>> >>>>> >>>>> On 3/16/17 10:50 AM, Marek Posolda wrote: >>>>> >>>>> On 16/03/17 15:27, Bill Burke wrote: >>>>> >>>>> * Hidden field in a form is not a good approach. Its very brittle and >>>>> will not work in every situation. So huge -1 there. >>>>> >>>>> * browser back button is not required to resubmit the HTTP request as >>>>> the page can be rendered from cache. Therefore you couldn't have a "Page >>>>> Expired" page displayed when the back button is pressed without setting the >>>>> header "Cache-Control: no-store, must-revalidate, max-age=0" >>>>> >>>>> Maybe we can do some javascript stuff like this: >>>>> http://stackoverflow.com/questions/9046184/reload-the-site-w >>>>> hen-reached-via-browsers-back-button >>>>> >>>>> But that would mean that we will need to inject some common javascript >>>>> stuff into every HTML form displayed by authentication SPI. Could we rely >>>>> on that? >>>>> >>>>> I don't think this is a good approach as Authenticator develoeprs >>>>> would have to do the same thing. >>>>> >>>>> >>>>> >>>>> * Furthermore, without some type of code/information within the URL, >>>>> you also wouldn't know if somebody clicked the back button or not or >>>>> whether this was a page refresh or some other GET request. >>>>> >>>>> Once we have the cookie with loginSessionID, we can lookup the >>>>> loginSession. And loginSession will contain last code (same like >>>>> clientSession now) and last authenticator. Then we just need to compare the >>>>> code from the loginSession with the code from request. 
If it matches, we >>>>> are good. If it doesn't match, it's likely the refresh of some previous >>>>> page and in that case, we can just redirect to last authenticator. >>>>> >>>>> This is the current behavior, but instead of using a cookie, the >>>>> "code" is stored in the URL. >>>>> >>>>> With only a cookie though and no URL information, you won't know the >>>>> different between a Back Button and a Page Refresh for GET requests. For >>>>> POST requests, you won't be able to tell the differencee between a Back >>>>> Button, Page Refresh, or whether the POST is targeted to an actual >>>>> Authenticator. >>>>> >>>>> The more I think about it, things should probably stay the way it >>>>> currently is, with improvements on user experience. I think we can support >>>>> what Stian suggested with the current implementation. >>>>> >>>>> >>>>> Not sure if we also need to track all codes, so we are able to >>>>> distinct between the "expired" code, and between the "false" code, which >>>>> was never valid and was possibly used by some attacker for CSRF. Maybe we >>>>> can sign codes with HMAC, so we can verify if it is "expired" or "false" >>>>> code without need to track the list of last codes. >>>>> >>>>> >>>>> This has been done in the past. Then it was switched to using the >>>>> same code throughout the whole flow, then Stian switched it to changing the >>>>> code throughout the flow. I don't know if he uses a hash or not. >>>>> >>>>> Bill >>>>> >>>> >>>> >>> >>> >> >> > From bruno at abstractj.org Fri Mar 17 10:18:57 2017 From: bruno at abstractj.org (Bruno Oliveira) Date: Fri, 17 Mar 2017 14:18:57 +0000 Subject: [keycloak-dev] Test SMTP settings for realm configuration Message-ID: Good morning, Today if there's something wrong with SMTP settings, Keycloak admin will only notice it when users start to see error messages at the "Forgot password" form and complain or read the logs. I would like to add a button to test the SMTP connection, like we do for LDAP. 
For that I created the following Jira: https://issues.jboss.org/browse/KEYCLOAK-4604 Does it make sense? From bburke at redhat.com Fri Mar 17 11:21:26 2017 From: bburke at redhat.com (Bill Burke) Date: Fri, 17 Mar 2017 11:21:26 -0400 Subject: [keycloak-dev] Improve back-button and refreshes in authenticators? In-Reply-To: References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> <81b90531-ecd5-4a15-9c3d-2ccc9c9317b5@redhat.com> <7d195e6a-cb45-1565-5f72-2c5412def258@redhat.com> <68cc026b-0b5d-323b-4c6c-85b2414ca052@redhat.com> <7d85a3a6-d247-2234-d8eb-648ad66fc6f6@redhat.com> Message-ID: The best user experience would be that the user can click the back/forward/refresh buttons as they wanted and things would work as they would expect, i.e. if you're on the OTP page, you could click the back button and go to the username/password page and re-enter username password. I didn't implement support for this approach as its really freakin hard to get right. The real question is, do we want to support going backwards or forwards in a flow? If we don't, then there are considerations and limitations on what we can do for the user experience which I was trying to get at before. Specifically: * Is it possible to determine the difference between the back, forward, or refresh button event? That is the question I struggled the most with when implementing auth flow processing. I'd say we set the bar low and minimally try to provide this experience: * If back, forward, or refresh button is pushed, show the "Page expired" page you suggested before Stian. Anything different than that will be dependent on the limitations of the browser and our auth flow implementation. There's also another type of user experience you aren't considering that should be involved with the discussion. Specifically the user experience of writing an Authenticator. 
You want the Authenticator development to be simple...you also want to make sure that its implemented in a way that developers can't make mistakes and introduce security holes by accident. All this stuff is tied together which makes the problem really complex. On 3/17/17 3:45 AM, Stian Thorgersen wrote: > I repeat: > > Before we discuss implementation though, let's figure out what the > ideal user experience would be then figure out how to implement it. > What about: > > * Refresh just works > * Back button will display a nicer page, something like "Page has > expired. To restart the login process click here. To continue the > login process click here.". Or Back button could just go to the start > of the flow always. > * Resubmitting forms will just display the page above > * No need to do redirects. Redirects is bad for performance, but also > has twice the response time which is not good from a usability perspective > > Is this the optimal user experience? Or should we do something else? > > On 17 March 2017 at 08:44, Stian Thorgersen > wrote: > > Can we please get back to discussing what the best user experience > is first. Then we can discuss implementations? > > On 16 March 2017 at 18:37, Bill Burke > wrote: > > > > On 3/16/17 10:50 AM, Marek Posolda wrote: >> On 16/03/17 15:27, Bill Burke wrote: >>> * Hidden field in a form is not a good approach. Its very >>> brittle and will not work in every situation. So huge -1 there. >>> >>> * browser back button is not required to resubmit the HTTP >>> request as the page can be rendered from cache. 
Therefore >>> you couldn't have a "Page Expired" page displayed when the >>> back button is pressed without setting the header >>> "Cache-Control: no-store, must-revalidate, max-age=0" >> Maybe we can do some javascript stuff like this: >> http://stackoverflow.com/questions/9046184/reload-the-site-when-reached-via-browsers-back-button >> >> >> But that would mean that we will need to inject some common >> javascript stuff into every HTML form displayed by >> authentication SPI. Could we rely on that? > I don't think this is a good approach as Authenticator > develoeprs would have to do the same thing. > > >>> >>> * Furthermore, without some type of code/information within >>> the URL, you also wouldn't know if somebody clicked the back >>> button or not or whether this was a page refresh or some >>> other GET request. >> Once we have the cookie with loginSessionID, we can lookup >> the loginSession. And loginSession will contain last code >> (same like clientSession now) and last authenticator. Then we >> just need to compare the code from the loginSession with the >> code from request. If it matches, we are good. If it doesn't >> match, it's likely the refresh of some previous page and in >> that case, we can just redirect to last authenticator. >> > This is the current behavior, but instead of using a cookie, > the "code" is stored in the URL. > > With only a cookie though and no URL information, you won't > know the different between a Back Button and a Page Refresh > for GET requests. For POST requests, you won't be able to > tell the differencee between a Back Button, Page Refresh, or > whether the POST is targeted to an actual Authenticator. > > The more I think about it, things should probably stay the way > it currently is, with improvements on user experience. I > think we can support what Stian suggested with the current > implementation. 
> > >> Not sure if we also need to track all codes, so we are able >> to distinct between the "expired" code, and between the >> "false" code, which was never valid and was possibly used by >> some attacker for CSRF. Maybe we can sign codes with HMAC, so >> we can verify if it is "expired" or "false" code without need >> to track the list of last codes. > > This has been done in the past. Then it was switched to using > the same code throughout the whole flow, then Stian switched > it to changing the code throughout the flow. I don't know if > he uses a hash or not. > > Bill > > > From tair.sabirgaliev at gmail.com Fri Mar 17 11:25:47 2017 From: tair.sabirgaliev at gmail.com (Tair Sabirgaliev) Date: Fri, 17 Mar 2017 08:25:47 -0700 Subject: [keycloak-dev] New Account Management Console and Account REST api In-Reply-To: <8c3419ff-9167-8dfc-2166-601dde0f8868@redhat.com> References: <44573b50-db26-ce45-81c0-eb5cc3bbc7e1@redhat.com> <8c3419ff-9167-8dfc-2166-601dde0f8868@redhat.com> Message-ID: +1 for Angular2, this will make maintenance and customisation easier. The framework has become very popular and is close to the "JavaEE mindset". On 17 March 2017 at 18:19:23, Stan Silvert (ssilvert at redhat.com) wrote: On 3/17/2017 8:09 AM, Stian Thorgersen wrote: > Had another idea. We could quite easily make it possible to configure > the "account management url" for a realm. That would let folks > redirect to external account management console if they want to > completely override it. That would also mean that our own account management console could be served from anywhere or even installed locally on the client machine.
Instead of old school forms it will be > all modern using > > HTML5, AngularJS and REST endpoints. > One thing. That should be "Angular", not "AngularJS". Just to > educate everyone, here is what's going on in Angular-land: > > AngularJS is the old framework we used for the admin console. > Angular is the new framework we will use for the account > management console. > > Most of you know the new framework as Angular2 or ng-2, but > the powers > that be want to just call it "Angular". This framework is > completely > rewritten and really has no relation to AngularJS, except they > both come > from Google and both have "Angular" in the name. > > To avoid confusion, I'm going to call it "Angular2" for the > foreseeable > future. > > > > The JIRA for this work is: > > https://issues.jboss.org/browse/KEYCLOAK-1250 > > > > > We were hoping to get some help from the professional UXP > folks for this, > > but it looks like that may take some time. In the mean time > the plan is to > > base it on the following template: > > > > > https://rawgit.com/andresgalante/kc-user/master/layout-alt-fixed.html# > > > > > Also, we'll try to use some newer things from PatternFly > patterns to > > improve the screens. > > > > First pass will have the same functionality and behavior as > the old account > > management console. Second pass will be to improve the > usability (pages > > like linking, sessions and history are not very nice). > > > > We will deprecate the old FreeMarker/forms way of doing > things, but keep it > > around so it doesn't break what people are already doing. > This can be > > removed in the future (probably RHSSO 8.0?). > > > > We'll also need to provide full rest endpoints for the > account management > > console. I'll work on that, while Stan works on the UI. > > > > As the account management console will be a pure HTML5 and > JS app anyone > > can completely replace it with a theme. They can also > customize it a lot.
> > We'll also need to make sure it's easy to add additional > pages/sections. > > > > Rather than just add to AccountService I'm going to rename that > > to DeprecatedAccountFormService remove all REST from there > and add a new > > AccountService that only does REST. All features available > through forms at > > the moment will be available as REST API, with the exception > of account > > linking which will be done through Bills work that was > introduced in 3.0 > > that allows applications to initiate the account linking. > > _______________________________________________ > > keycloak-dev mailing list > > keycloak-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > > _______________________________________________ keycloak-dev mailing list keycloak-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/keycloak-dev From thomas.darimont at googlemail.com Sat Mar 18 19:30:55 2017 From: thomas.darimont at googlemail.com (Thomas Darimont) Date: Sun, 19 Mar 2017 00:30:55 +0100 Subject: [keycloak-dev] How to migrate all credentials stored in Keycloak to a new encoding algorithm? Message-ID: Hello group, Sorry - for the long read but the following contains a proposal with a general solution for the problem. TLDR; section at the end. If you have been using Keycloak for a while, you probably have a number of users in the system, whose passwords are encoded by the default Pbkdf2PasswordHashProvider which currently uses the PBKDF2WithHmacSHA1 algorithm. To change the algorithm, one could implement a custom password encoding via Keycloak's PasswordHashProvider SPI. That works for user credential updates or newly created users, but what about the potentially large number of credentials of already existing users who are not active at the moment?
If you need to ensure that user credentials are encoded and stored with the new algorithm, then you have to migrate all user credentials to the new algorithm. Storing and verifying stored passwords usually involves a single step of hashing in each direction: once stored as a hash, each try to enter the password is verified using the same hash function and comparing the hashes. If you have a collection of stored password hashes and the hash function must be changed, the only possibility (apart from re-initializing all password hashes) is to apply the second hash function to the existing hashes and remember to hash the entered passwords twice, too. That's why it is unavoidable to remember which hash function was used to create the first hash of each password. If this information can be reconstructed, the sequence of hash functions to apply to a clear text password to produce a comparable hash can be reapplied. If the hashes match, the given password can then be hashed with the new hash function and stored as the new hash value, effectively migrating the password to use the new hash function. That's what I propose below. The following describes an incremental method for credential updates, verification and migration.

* Incremental Credential Migration

Imagine that you have two different credential encoding algorithms:

hash_old(input, ...) - The current encoding algorithm
hash_new(input, ...) - The new encoding algorithm

We now want to update all stored credentials to use the hash_new encoding algorithm. In order to achieve this the following two steps need to be performed.

1. Incrementally encode existing credentials

In this step the existing credentials are encoded with the new encoding algorithm hash_new and stored as the new credential value with additional metadata (old encoding, new encoding) annotated with a "migration_required" marker. This marker is later used to detect credentials which need migration during credential validation.
Note that since we encode the already encoded credential value we do not need to know the plain text of the credential to perform the encoding. Encoding all credentials will probably take some time and CPU resources, depending on the number of credentials and the used encoding function configuration. Therefore it makes sense to perform this step incrementally and in parallel to the credential validation described in Step 2. This is possible because the newly encoded credential values are annotated with a "migration_required" marker and all other credentials will be handled by their associated encoding algorithm. Eventually all credentials will be encoded with the new encoding algorithm.

Pseudo-Code: encode credentials with new encoding

for (CredentialModel credential : passwordCredentials) {
  // checks if given credential should be migrated, e.g. uses hash_old
  if (isCredentialMigrationRequired(credential)) {
    metadata = credential.getConfig();
    // credential.value: the original password encoded with hash_old
    newValue = hash_new(credential.value, credential.salt, ...);
    metadata = updateMetadata(metadata, "hash_new", "migration_required");
    updateCredential(credential, newValue, metadata);
  }
}

2. Credential Validation and Migration

In this step the provided password is verified by comparing the stored password hash against the hash computed from the sequential application of the hash functions hash_old and hash_new.

2.1 Credential Validation

For credentials marked with "migration_required", compare the stored credential hash value with the result of hash_new(hash_old(password, ...), ...). For all other credentials the associated credential encoding algorithm is used. Note that credential validation for non-migrated credentials is more expensive due to the multiple hash functions being applied in sequence. If the hashes match, we know that the given password was valid and the actual credential migration can be performed.
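Step 1 can be sketched in runnable form; here hash_old/hash_new are stand-in PBKDF2 configurations (SHA-1 vs SHA-256 with illustrative iteration counts), not Keycloak's actual providers:

```python
# Runnable sketch of Step 1 (illustrative only): re-encode a stored hash
# with the new function without ever seeing the plaintext password.
import hashlib
import os

def hash_old(value: bytes, salt: bytes) -> bytes:
    # stand-in for the current PBKDF2WithHmacSHA1 encoding
    return hashlib.pbkdf2_hmac("sha1", value, salt, 20000)

def hash_new(value: bytes, salt: bytes) -> bytes:
    # stand-in for the new encoding algorithm
    return hashlib.pbkdf2_hmac("sha256", value, salt, 27500)

def reencode(credential: dict) -> dict:
    """Wrap the stored hash_old value with hash_new; no plaintext needed."""
    if credential.get("algorithm") == "hash_old":
        credential["value"] = hash_new(credential["value"], credential["salt"])
        credential["algorithm"] = "hash_new(hash_old)"
        credential["migration_required"] = True
    return credential

salt = os.urandom(16)
cred = {"value": hash_old(b"s3cret", salt), "salt": salt, "algorithm": "hash_old"}
cred = reencode(cred)
print(cred["algorithm"], cred["migration_required"])
```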
2.2 Credential Migration

After successful validation of a credential tagged with a "migration_required" marker, the given password is encoded with the new hash function via hash_new(password). The credential is now stored with the new hash value and updated metadata with the "migration_required" marker removed. This concludes the migration of the credential. After the migration the hash_new(...) function is sufficient to verify the credential.

Pseudo-Code: validate and migrate credential

boolean verify(String rawPassword, CredentialModel cred) {
  if (isMarkedForMigration(cred)) {
    // Step 2.1 Validate credential by encoding the rawPassword
    // with the hash_old and then hash_new algorithm.
    if (hash_new(hash_old(rawPassword, cred), cred) == cred.value) {
      // Step 2.2 Perform the credential migration
      migrateCredential(cred, hash_new(rawPassword, cred));
      return true;
    }
  } else {
    // verify credential with hash_new(...) OR hash_old(...)
  }
  return false;
}

TLDR: Conclusion

The proposed approach supports migration of credentials to a new encoding algorithm in a two step process. First the existing credential value, hashed with the old hash function, is hashed again with the new hash function. The resulting hash is then stored in the credential annotated with a migration marker. To verify a given password against the stored credential hash, the same sequence of hash functions is applied to the password and the resulting hash value is then compared against the stored hash. If the hash matches, the actual credential migration is performed by hashing the given password again but this time only with the new hash function. The resulting hash is then stored with the credential without the migration marker. The main benefit of this method is that one can migrate existing credential encoding mechanisms to new ones without having to keep old credentials hashed with potentially insecure algorithms around.
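The validate-and-migrate step in runnable form, with hash_old/hash_new as stand-in PBKDF2 configurations (illustrative only, not Keycloak's implementation):

```python
# Runnable sketch of Steps 2.1/2.2 (illustrative only): hash_old/hash_new
# are stand-in PBKDF2 configurations, not Keycloak's actual providers.
import hashlib
import hmac
import os

def hash_old(value: bytes, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha1", value, salt, 20000)

def hash_new(value: bytes, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", value, salt, 27500)

def verify_and_migrate(raw: bytes, cred: dict) -> bool:
    salt = cred["salt"]
    if cred.get("migration_required"):
        # 2.1: validate by chaining the old and new hash over the raw password
        if hmac.compare_digest(hash_new(hash_old(raw, salt), salt), cred["value"]):
            # 2.2: re-store as a plain hash_new of the password, drop the marker
            cred["value"] = hash_new(raw, salt)
            cred["migration_required"] = False
            return True
        return False
    # already migrated: a single hash_new check is sufficient
    return hmac.compare_digest(hash_new(raw, salt), cred["value"])

salt = os.urandom(16)
cred = {"salt": salt, "migration_required": True,
        "value": hash_new(hash_old(b"s3cret", salt), salt)}
ok_wrong = verify_and_migrate(b"wrong", cred)   # fails, credential untouched
ok_right = verify_and_migrate(b"s3cret", cred)  # validates and migrates
ok_again = verify_and_migrate(b"s3cret", cred)  # now checked via hash_new alone
print(ok_wrong, ok_right, ok_again)
```

Note the constant-time comparison (hmac.compare_digest); a plain == on hashes can leak timing information.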
The method can incrementally update the credentials by using markers on the stored credentials to steer credential validation. It comes with the cost of potentially more CPU intensive credential validation for non-migrated credentials that need to be verified and migrated. Given the continuous progression in the fields of security and cryptography it is only a matter of time that one needs to change a credential encoding mechanism in order to comply with the latest recommended security standards. Therefore I think this incremental credential migration would be a valuable feature to add to the Keycloak System. What do you guys think? Cheers, Thomas From sthorger at redhat.com Mon Mar 20 03:49:07 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Mon, 20 Mar 2017 08:49:07 +0100 Subject: [keycloak-dev] next-gen Keycloak proxy In-Reply-To: <5b3f7d34-36ab-caec-1ad3-6e5d5ec325e4@redhat.com> References: <4be4c218-27f7-9946-4c19-c2082c5b780a@redhat.com> <4af2f2e1-3ac2-c2e8-d4c4-aa3e9fa807c0@redhat.com> <5b3f7d34-36ab-caec-1ad3-6e5d5ec325e4@redhat.com> Message-ID: There might be a blocker for having the proxy so tightly integrated. If it's directly accessing the db and caches that would require it to be co-located and also co-owned with the Keycloak server itself. What if one department wants to setup the proxy to use with the KC server hosted by another department? Or it could even be a separate company setting up the proxy to the company hosting the KC server? On 17 March 2017 at 15:08, Bill Burke wrote: > > > On 3/17/17 5:01 AM, Stian Thorgersen wrote: > > In summary I'm more open towards your approach, but still have some > concerns around it. More inline. > > On 16 March 2017 at 16:05, Bill Burke wrote: > >> >> >> On 3/16/17 6:19 AM, Stian Thorgersen wrote: >> >> The Keycloak proxy shouldn't be tied directly to the database or caches. >> It should ideally be stateless and ideally there's no need for sticky >> sessions. 
>> >> Please stop making broad blanket statements and back up your response otherwise I'm just going to ignore you. >> >> If the proxy implements pure OIDC it has to minimally store refresh token and access token. Plus I foresee us wanting to provide more complex proxy features which will require storing more and more state. So, the proxy needs sessions which means many users will want this to be fault tolerant, which means that the proxy will require distributed sessions. >> > > Can't the tokens just be stored in a cookie? That would make it fully > stateless and no need for sticky sessions. > > I guess it comes down to what is more costly refresh token requests or > having a distributed "session" cache (which we already have). > > I'm worried about cookie size constraints. I'll do some measurements. > This issue is orthogonal to the other issues though I think. > Isn't it just the token and refresh token that would need to be saved in the cookie? > > > > > >> >> >> It should be capable of running collocated with the Keycloak Server for >> simplicity, but also should be possible to run in separate process. If it's >> done as an additional subsystem that allows easily configuring a Keycloak >> server to be IdP, IdP+Proxy or just Proxy. >> >> >> >> >> Further, it should leverage OpenID Connect rather than us coming up with >> a new separate protocol. >> >> My reasoning behind this is simple: >> >> * Please let's not invent another security protocol! That's a lot of work >> and a whole new vulnerability vector to deal with. >> * There will be tons more requests to a proxy than there are to the >> server. Latency overhead will also be much more important. >> >> It wouldn't be a brand new protocol, just an optimized subset of OIDC. >> For example, you wouldn't have to do a code to token request nor would you >> have to execute refresh token requests.
It would also make things like >> revocation and backchannel logout much easier, nicer, more efficient, and >> more robust. >> > > I like removing the code to token request and refresh token requests. > However, doesn't the revocation and backchannel logout mechanism have to be > made simpler and more robust for "external apps" as well? Wouldn't it be > better to solve this problem in general and make it available to external > apps and not just our "embedded" proxy. > > > Client nodes currently register themselves with auth server on demand so > that they can receive revocation and backchannel logout events. The auth > server sends a message to each and every node when these events happen. A > proxy that has access to UserSession cache doesn't have to do any of these > things. This is the "simpler" and "more efficient" argument. I forgot the > "more robust" argument I had. > > > > > >> >> I Just see huge advantages with this approach: simpler provisioning, >> simpler configuration, a real nice user experience overall, and possibly >> some optimizations. What looking for is disadvantages to this approach >> which I currently see are: >> >> 1) Larger memory footprint >> > 2) More database connections, although these connections should become >> idle after boot. >> 3) Possible extra distributed session replication as the >> User/ClientSession needs to be visible on both the auth server and the >> proxy. >> 4) Possible headache of too many nodes in a cluster, although a proxy is >> supposed to be able to handle proxing multiple apps and multiple instances >> of that app. >> > > I would think it would make it even harder to scale to really big loads. > There will already be limits on a Keycloak cluster due to invalidation > messages and even more so the sessions. If we add even more nodes and load > to the same cluster that just makes the matter even worse. There's also > significantly more requests to applications than there is for KC server. 
> That's why it seems safer to keep it separate. > > > Configuring a proxy in the admin console is a good thing right? If that > is an assumption, then the proxy needs to receive realm invalidation events > so that it can refresh the client view (config settings, mappers, etc.). > Not sure to be honest. I wonder if a file based config like the current KC proxy is actually simpler to manage. You can still have centralized authz due to the authorization services. I was more thinking about just a simple JSON to describe the "app" that would then sort out registering the client at the Keycloak server using the dynamic client registration services. > > > > > It depends on what and how much of the db + cache we're talking about. Is > it just user sessions then that can probably be handled with distributed > sessions. > > realm info doesn't hit the db much, but user store will be hit. > Hmm...didn't think of the user store hit. Something like LDAP would be hit > by the auth server and each proxy for each login session. That's a > downer... If a proxy could proxy all apps, then maybe there is a way to > maintain sticky sessions between the auth server and the proxy so they > shared the same node/cache for the same session. Still a huge negative > though as things just got a lot more complex. > As long as the proxy is on the same subnet that would be possible, but what if it's in a separate cluster? > > Maybe we could just hook up the proxy to the realm store and realm cache? > I prefer this idea as then proxy setup isn't much different than auth > server setup. And all the configuration sync logic is already in place as > the proxy would receive realm invalidation events. > > Bill > From sthorger at redhat.com Mon Mar 20 03:54:07 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Mon, 20 Mar 2017 08:54:07 +0100 Subject: [keycloak-dev] Improve back-button and refreshes in authenticators? 
In-Reply-To: References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> <81b90531-ecd5-4a15-9c3d-2ccc9c9317b5@redhat.com> <7d195e6a-cb45-1565-5f72-2c5412def258@redhat.com> <68cc026b-0b5d-323b-4c6c-85b2414ca052@redhat.com> <7d85a3a6-d247-2234-d8eb-648ad66fc6f6@redhat.com> Message-ID: Let's go with the "page expired" page then, but we wouldn't need it for the refresh page would we? That can just re-display the same thing again. +1 I kinda thought about writing authenticators, but you're right that should be made as simple as possible and we should ideally handle all of this outside the authenticators themselves. Something that would be impossible if users could step backwards/forwards in the flow as you'd have to have "roll back" events or something so authenticators could undo stuff if needed. On 17 March 2017 at 16:21, Bill Burke wrote: > The best user experience would be that the user can click the > back/forward/refresh buttons as they wanted and things would work as they > would expect, i.e. if you're on the OTP page, you could click the back > button and go to the username/password page and re-enter username > password. I didn't implement support for this approach as it's really > freakin hard to get right. > The real question is, do we want to support going backwards or forwards in > a flow? If we don't, then there are considerations and limitations on what > we can do for the user experience, which I was trying to get at before. > Specifically: > > * Is it possible to determine the difference between the back, forward, or > refresh button event? > > That is the question I struggled the most with when implementing auth flow > processing. > > I'd say we set the bar low and minimally try to provide this experience: > > * If the back, forward, or refresh button is pushed, show the "Page expired" > page you suggested before, Stian.
> > Anything different than that will be dependent on the limitations of the > browser and our auth flow implementation. > > There's also another type of user experience you aren't considering that > should be involved with the discussion. Specifically the user experience > of writing an Authenticator. You want the Authenticator development to be > simple...you also want to make sure that it's implemented in a way that > developers can't make mistakes and introduce security holes by accident. > All this stuff is tied together which makes the problem really complex. > > > On 3/17/17 3:45 AM, Stian Thorgersen wrote: > > I repeat: > > Before we discuss implementation though, let's figure out what the ideal > user experience would be then figure out how to implement it. What about: > > * Refresh just works > * Back button will display a nicer page, something like "Page has expired. > To restart the login process click here. To continue the login process > click here.". Or Back button could just go to the start of the flow always. > * Resubmitting forms will just display the page above > * No need to do redirects. Redirects are bad for performance, and they also > double the response time, which is not good from a usability perspective > > Is this the optimal user experience? Or should we do something else? > > On 17 March 2017 at 08:44, Stian Thorgersen wrote: > >> Can we please get back to discussing what the best user experience is >> first. Then we can discuss implementations? >> >> On 16 March 2017 at 18:37, Bill Burke wrote: >> >>> >>> >>> On 3/16/17 10:50 AM, Marek Posolda wrote: >>> >>> On 16/03/17 15:27, Bill Burke wrote: >>> >>> * Hidden field in a form is not a good approach. It's very brittle and >>> will not work in every situation. So huge -1 there. >>> >>> * browser back button is not required to resubmit the HTTP request as >>> the page can be rendered from cache.
Therefore you couldn't have a "Page >>> Expired" page displayed when the back button is pressed without setting the >>> header "Cache-Control: no-store, must-revalidate, max-age=0" >>> >>> Maybe we can do some javascript stuff like this: >>> http://stackoverflow.com/questions/9046184/reload-the-site-when-reached-via-browsers-back-button >>> >>> But that would mean that we will need to inject some common javascript >>> stuff into every HTML form displayed by the authentication SPI. Could we rely >>> on that? >>> >>> I don't think this is a good approach as Authenticator developers would >>> have to do the same thing. >>> >>> >>> >>> * Furthermore, without some type of code/information within the URL, you >>> also wouldn't know if somebody clicked the back button or not or whether >>> this was a page refresh or some other GET request. >>> >>> Once we have the cookie with loginSessionID, we can look up the >>> loginSession. And loginSession will contain the last code (same as >>> clientSession now) and the last authenticator. Then we just need to compare the >>> code from the loginSession with the code from the request. If it matches, we >>> are good. If it doesn't match, it's likely the refresh of some previous >>> page and in that case, we can just redirect to the last authenticator. >>> >>> This is the current behavior, but instead of using a cookie, the "code" >>> is stored in the URL. >>> >>> With only a cookie though and no URL information, you won't know the >>> difference between a Back Button and a Page Refresh for GET requests. For >>> POST requests, you won't be able to tell the difference between a Back >>> Button, Page Refresh, or whether the POST is targeted to an actual >>> Authenticator. >>> >>> The more I think about it, things should probably stay the way it >>> currently is, with improvements on user experience. I think we can support >>> what Stian suggested with the current implementation.
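The loginSession code comparison described above can be sketched as a tiny decision function. All names here are illustrative, not Keycloak's actual API:

```java
// Sketch of the code-comparison idea from this thread: the login session
// keeps the last issued "code"; a request carrying the current code targets
// the current step, while a stale code is treated as a back/refresh of an
// earlier page. Hypothetical names, not Keycloak's real classes.
class LoginCodeCheckSketch {

    enum Action { PROCESS_CURRENT_STEP, REDIRECT_TO_LAST_AUTHENTICATOR }

    static Action classify(String lastIssuedCode, String codeFromRequest) {
        if (lastIssuedCode.equals(codeFromRequest)) {
            // Code matches the one stored in the login session: "we are good".
            return Action.PROCESS_CURRENT_STEP;
        }
        // Stale code: likely a refresh/back of a previous page,
        // so redirect to the last authenticator.
        return Action.REDIRECT_TO_LAST_AUTHENTICATOR;
    }

    public static void main(String[] args) {
        System.out.println(classify("code-42", "code-42"));
        System.out.println(classify("code-42", "code-17"));
    }
}
```

A matching code means the request targets the current step; anything else is handled as a replay of an earlier page.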
>>> Not sure if we also need to track all codes, so we are able to distinguish >>> between the "expired" code and the "false" code, which was never >>> valid and was possibly used by some attacker for CSRF. Maybe we can sign >>> codes with HMAC, so we can verify if it is an "expired" or "false" code >>> without the need to track the list of last codes. >>> >>> >>> This has been done in the past. Then it was switched to using the same >>> code throughout the whole flow, then Stian switched it to changing the code >>> throughout the flow. I don't know if he uses a hash or not. >>> >>> Bill >>> >> >> > >

From sthorger at redhat.com Mon Mar 20 03:58:22 2017
From: sthorger at redhat.com (Stian Thorgersen)
Date: Mon, 20 Mar 2017 08:58:22 +0100
Subject: [keycloak-dev] How to migrate all credentials stored in Keycloak to a new encoding algorithm?
In-Reply-To: References: Message-ID:

We already do this to some degree. If the default hashing algorithm is changed, the user's credentials are updated on the next login. Can you summarize your post please? I'm not sure what you are trying to achieve beyond what we already do.

On 19 March 2017 at 00:30, Thomas Darimont wrote:
> Hello group,
>
> Sorry for the long read, but the following contains a proposal with a general solution for the problem.
>
> TLDR; section at the end.
>
> If you have been using Keycloak for a while, you probably have a number of users in the system whose passwords are encoded by the default Pbkdf2PasswordHashProvider, which currently uses the PBKDF2WithHmacSHA1 algorithm.
>
> To change the algorithm, one could implement a custom password encoding via Keycloak's PasswordHashProvider SPI. That works for user credential updates or newly created users, but what about the potentially large number of credentials of already existing users who are not active at the moment?
> If you need to ensure that user credentials are encoded and stored with the new algorithm, then you have to migrate all user credentials to the new algorithm.
>
> Storing and verifying stored passwords usually involves a single step of hashing in each direction: once stored as a hash, each try to enter the password is verified using the same hash function and comparing the hashes. If you have a collection of stored password hashes and the hash function must be changed, the only possibility (apart from re-initializing all password hashes) is to apply the second hash function to the existing hashes and remember to hash the entered passwords twice, too. That's why it is unavoidable to remember which hash function was used to create the first hash of each password. If this information can be reconstructed, the sequence of hash functions to apply to a clear text password to produce a comparable hash can be reapplied. If the hashes match, the given password can then be hashed with the new hash function and stored as the new hash value, effectively migrating the password to use the new hash function. That's what I propose below.
>
> The following describes an incremental method for credential updates, verification and migration.
>
> * Incremental Credential Migration
>
> Imagine that you have two different credential encoding algorithms:
>
> hash_old(input, ...) - The current encoding algorithm
> hash_new(input, ...) - The new encoding algorithm
>
> We now want to update all stored credentials to use the hash_new encoding algorithm. In order to achieve this the following two steps need to be performed.
>
> 1. Incrementally encode existing credentials
>
> In this step the existing credentials are encoded with the new encoding algorithm hash_new and stored as the new credential value with additional metadata (old encoding, new encoding) annotated with a "migration_required"
marker. > This marker is later used to detect credentials which need migration during credential validation. Note that since we encode the already encoded credential value, we do not need to know the plain text of the credential to perform the encoding.
>
> Encoding all credentials will probably take some time and CPU resources, depending on the number of credentials and the used encoding function configuration. Therefore it makes sense to perform this step incrementally and in parallel to the credential validation described in Step 2. This is possible because the newly encoded credential values are annotated with a "migration_required" marker and all other credentials will be handled by their associated encoding algorithm.
>
> Eventually all credentials will be encoded with the new encoding algorithm.
>
> Pseudo-Code: encode credentials with new encoding
>
> for (CredentialModel credential : passwordCredentials) {
>     // checks if the given credential should be migrated, i.e. still uses hash_old
>     if (isCredentialMigrationRequired(credential)) {
>         metadata = credential.getConfig();
>         // credential.value: the original password encoded with hash_old
>         newValue = hash_new(credential.value, credential.salt, ...);
>         metadata = updateMetadata(metadata, "hash_new", "migration_required");
>         updateCredential(credential, newValue, metadata);
>     }
> }
>
> 2. Credential Validation and Migration
>
> In this step the provided password is verified by comparing the stored password hash against the hash computed from the sequential application of the hash functions hash_old and hash_new.
>
> 2.1 Credential Validation
>
> For credentials marked with "migration_required", compare the stored credential hash value with the result of hash_new(hash_old(password, ...), ...). For all other credentials the associated credential encoding algorithm is used.
> Note that credential validation for non-migrated credentials is more expensive due to the multiple hash functions being applied in sequence.
>
> If the hashes match, we know that the given password was valid and the actual credential migration can be performed.
>
> 2.2 Credential Migration
>
> After successful validation of a credential tagged with a "migration_required" marker, the given password is encoded with the new hash function via hash_new(password). The credential is now stored with the new hash value and updated metadata with the "migration_required" marker removed.
>
> This concludes the migration of the credential. After the migration the hash_new(...) function is sufficient to verify the credential.
>
> Pseudo-Code: validate and migrate credential
>
> boolean verify(String rawPassword, CredentialModel cred) {
>
>     if (isMarkedForMigration(cred)) {
>         // Step 2.1: validate the credential by encoding the rawPassword
>         // with the hash_old and then the hash_new algorithm.
>         if (hash_new(hash_old(rawPassword, cred), cred).equals(cred.value)) {
>             // Step 2.2: perform the credential migration
>             migrateCredential(cred, hash_new(rawPassword, cred));
>             return true;
>         }
>     } else {
>         // verify credential with hash_new(...) OR hash_old(...)
>     }
>     return false;
> }
>
> TLDR: Conclusion
>
> The proposed approach supports migration of credentials to a new encoding algorithm in a two-step process. First the existing credential value, hashed with the old hash function, is hashed again with the new hash function. The resulting hash is then stored in the credential annotated with a migration marker.
>
> To verify a given password against the stored credential hash, the same sequence of hash functions is applied to the password and the resulting hash value is then compared against the stored hash.
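To make the two steps above concrete, here is a small self-contained sketch. This is not Keycloak code: hash_old/hash_new are stand-ins implemented as plain unsalted SHA-1/SHA-256, and local variables simulate the stored credential value and the "migration_required" marker; a real implementation would go through the PasswordHashProvider SPI and persist the marker as credential metadata.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

// Illustrative sketch of the proposed two-step credential migration.
class CredentialMigrationSketch {

    // Stand-in for the legacy encoding algorithm (hash_old in the proposal).
    static String hashOld(String input) throws Exception {
        return HexFormat.of().formatHex(
                MessageDigest.getInstance("SHA-1").digest(input.getBytes(StandardCharsets.UTF_8)));
    }

    // Stand-in for the new encoding algorithm (hash_new in the proposal).
    static String hashNew(String input) throws Exception {
        return HexFormat.of().formatHex(
                MessageDigest.getInstance("SHA-256").digest(input.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        String password = "s3cret";

        // Before migration the store holds hash_old(password).
        String storedValue = hashOld(password);

        // Step 1: re-encode the stored hash with hash_new and set the
        // "migration_required" marker -- no plaintext password is needed.
        storedValue = hashNew(storedValue);
        boolean migrationRequired = true;

        // Step 2.1: verify a login attempt against the doubly-hashed value.
        boolean valid = migrationRequired
                ? hashNew(hashOld(password)).equals(storedValue)
                : hashNew(password).equals(storedValue);

        // Step 2.2: on success, store hash_new(password) and drop the marker.
        if (valid && migrationRequired) {
            storedValue = hashNew(password);
            migrationRequired = false;
        }

        // From now on a single application of hash_new is enough to verify.
        System.out.println(valid && hashNew(password).equals(storedValue));
    }
}
```

Running it prints true: the doubly-hashed value verifies the login attempt, after which the store holds a plain hash_new(password) with the marker cleared.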
> If the hash matches, the actual credential migration is performed by hashing the given password again, but this time only with the new hash function. The resulting hash is then stored with the credential without the migration marker.
>
> The main benefit of this method is that one can migrate existing credential encoding mechanisms to new ones without having to keep old credentials hashed with potentially insecure algorithms around. The method can incrementally update the credentials by using markers on the stored credentials to steer credential validation. It comes with the cost of potentially more CPU-intensive credential validation for non-migrated credentials that need to be verified and migrated.
>
> Given the continuous progression in the fields of security and cryptography, it is only a matter of time before one needs to change a credential encoding mechanism in order to comply with the latest recommended security standards.
>
> Therefore I think this incremental credential migration would be a valuable feature to add to the Keycloak system.
>
> What do you guys think?
>
> Cheers,
> Thomas
> _______________________________________________
> keycloak-dev mailing list
> keycloak-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/keycloak-dev

From mark.pardijs at topicus.nl Mon Mar 20 04:43:19 2017
From: mark.pardijs at topicus.nl (Mark Pardijs)
Date: Mon, 20 Mar 2017 08:43:19 +0000
Subject: [keycloak-dev] Deploying provider with dependencies
In-Reply-To: References: <1484736842.3591.1.camel@cargosoft.ru> <1484770247.18494.1.camel@cargosoft.ru> <1484774410.18494.3.camel@cargosoft.ru> <0cd22167-1915-1509-2b77-fddcf009d477@redhat.com> <1485123376.9077.1.camel@cargosoft.ru>
Message-ID: <2E14FF2E-D8C8-4174-B13B-23020984D1A6@topicus.nl>

@Bill Burke, any thoughts on this? Thought I might bump this since it's a reply to an old thread, so maybe it wasn't noticed.

On 15 Mar 2017, at 16:16, Mark Pardijs wrote:

I somehow sort of solved this, but am still curious if this is the way to go. The way I understand it now is: the providers dir is a non-hot-deploy method which uses the same classpath as Keycloak. The deployments dir is the hot-deployment method where you have to provide the libs on the classpath or depend on them via modules. The way I configured it now is the following:

- The maven-jar-plugin, which builds a manifest file [1]. When I package my third party libs in the ear I get ClassNotFound errors, so I have to depend on modules defined already in Keycloak
- The maven-ear-plugin, which also builds the proper application.xml [2]

Is this the way to go?

[1]

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <version>2.6</version>
  <configuration>
    <archive>
      <manifestEntries>
        <Dependencies>org.keycloak.keycloak-server-spi-private,org.keycloak.keycloak-services,org.apache.commons.lang</Dependencies>
      </manifestEntries>
    </archive>
  </configuration>
</plugin>

[2]

<plugin>
  <artifactId>maven-ear-plugin</artifactId>
  <version>2.9.1</version>
  <configuration>
    <modules>
      <jarModule>
        <groupId>myGroupId</groupId>
        <artifactId>myArtifactId</artifactId>
        <includeInApplicationXml>true</includeInApplicationXml>
      </jarModule>
      <jarModule>
        <groupId>myGroupId2</groupId>
        <artifactId>myArtifactId2</artifactId>
        <includeInApplicationXml>true</includeInApplicationXml>
      </jarModule>
    </modules>
  </configuration>
</plugin>

On 14 Mar 2017, at 18:21, Mark Pardijs wrote:

I'm trying to add a custom authenticator to Keycloak, and I'm also confused on this topic. First, I copied my jar into the keycloak/providers directory; this works. Then I started to depend on a third party library, so I needed to add this third party jar to my providers directory. Since it's not very convenient to copy all third party libs I tried some other ways, but I'm lost now.
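For illustration, a minimal EAR layout along these lines might look as follows. Artifact and module names are placeholders; the module entry in application.xml and a top-level jboss-deployment-structure.xml are the two pieces this thread ends up identifying as necessary:

```xml
<!-- Hypothetical layout:
  my-provider.ear
  +- my-provider.jar            (the provider itself)
  +- META-INF/application.xml
  +- META-INF/jboss-deployment-structure.xml
-->

<!-- META-INF/application.xml: declare the provider JAR as a module -->
<application xmlns="http://java.sun.com/xml/ns/javaee" version="6">
  <module>
    <java>my-provider.jar</java>
  </module>
</application>

<!-- META-INF/jboss-deployment-structure.xml: depend on Keycloak's modules
     instead of bundling those classes in the EAR -->
<jboss-deployment-structure>
  <deployment>
    <dependencies>
      <module name="org.keycloak.keycloak-services"/>
      <module name="org.keycloak.keycloak-server-spi-private"/>
    </dependencies>
  </deployment>
</jboss-deployment-structure>
```

The exact module list depends on which Keycloak SPIs the provider uses.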
I could build an ear file with my custom jar and third party libs in it, and place it in the deployments directory, but then the service module loader fails with the following message:

17:13:31,485 WARN [org.jboss.modules] (ServerService Thread Pool -- 50) Failed to define class nl.MyAuthenticatorFactory in Module "deployment.keycloak-authenticator-ear-1.0-SNAPSHOT.ear.keycloak-authenticat-1.0-SNAPSHOT.jar:main" from Service Module Loader: java.lang.NoClassDefFoundError: Failed to link nl/MyAuthenticatorFactory (Module "deployment.keycloak-authenticator-ear-1.0-SNAPSHOT.ear.keycloak-authenticator-1.0-SNAPSHOT.jar:main" from Service Module Loader): org/keycloak/authentication/AuthenticatorFactory.

When I put the ear in the providers dir, it's not picked up at all. I think I'm missing something in my ear file; I read something about a jboss-deployment-structure.xml but can't find a good example of using this in this situation. So, hopefully you can help me with these questions:

1. Isn't it possible to depend on a third party lib that's already on Keycloak's default classpath? I'm depending on the commons-lang lib, which I thought is also on Keycloak's default classpath, but still when using my provider I get a ClassNotFound error
2. The documentation mentions two ways to deploy a provider: using the /providers dir or using the deployments dir; which one is recommended when?
3. What's the expected structure of an ear file for use in the deployments directory? Currently I have the same as Dmitry mentioned below

On 22 Jan 2017, at 23:16, Dmitry Telegin wrote:

Tried with 2.5.0.Final - the EAR doesn't get recursed unless there's an application.xml with all the internal JARs explicitly declared as modules. Could it have been some special jboss-deployment-structure.xml in your case? Either way, it's a very subtle issue that deserves being documented IMHO.

Cheers,
Dmitry

On Thu, 19/01/2017 at 09:56 -0500, Bill Burke wrote:

I'm pretty sure you can just remove the application.xml too. Then the ear will be recursed. I tried this out when I first wrote the deployer.

On 1/18/17 4:20 PM, Dmitry Telegin wrote:

I've finally solved this. There are two points to pay attention to:

- maven-ear-plugin by default generates an almost empty application.xml. In order for a subdeployment to be recognized, it must be mentioned in application.xml as a module. Both and worked for me;
- in JBoss/WildFly, only top-level jboss-deployment-structure.xml files are recognized. Thus, this file should be moved from the JAR to the EAR and tweaked accordingly. This is discussed in detail here: http://stackoverflow.com/questions/26859092/jboss-deployment-structure-xml-does-not-loads-the-dependencies-in-my-ear-project

Again, it would be nice to have a complete working example to demonstrate this approach. I think I could (one day) update my BeerCloak example to cover this new deployment technique.

Dmitry

On Wed, 18/01/2017 at 23:10 +0300, Dmitry Telegin wrote:

Stian, I've tried to package my provider JAR into an EAR, but that didn't work :( The layout is the following:

foo-0.1-SNAPSHOT.ear
+- foo-provider-0.1-SNAPSHOT.jar
+- META-INF
   +- application.xml

When I put the JAR into the deployments subdir, it is deployed successfully, initialization code is called etc. But when I drop the EAR in the same subdir, there is only a successful deployment message. The provider doesn't get initialized; it seems the deployer doesn't recurse into the EAR contents. Tried to place the JAR into the "lib" subdir inside the EAR; this didn't work either. The EAR is generated by maven-ear-plugin with the standard settings. Am I missing something? Sorry for bugging you, but unfortunately there is not much said in the docs about deploying providers from inside EARs. A working example would be helpful as well.

Dmitry

On Wed, 18/01/2017 at 12:34 +0100, Stian Thorgersen wrote:

You have two options: deploy as a module, which requires adding modules for all dependencies, or use the new deploy-as-a-JEE-archive approach, which also supports hot deployment. Check out the server developer guide for more details.

On 18 January 2017 at 11:54, Dmitry Telegin wrote:

Hi, It's easy to imagine a provider that would integrate a third party library which, together with transitive dependencies, might result in dozens of JARs. A real-world example: an OpenID 2.0 login protocol implementation using openid4java, which in its turn pulls in another 10 JARs. What are the deployment options for configurations like that? Is it really necessary to install each and every dependency as a WildFly module? This could become a PITA if there are a lot of deps. Could it be a single, self-sufficient artifact just to be put into the deployments subdir? If yes, what type of artifact should it be (an EAR maybe)?

Thx,
Dmitry

_______________________________________________
keycloak-dev mailing list
keycloak-dev at lists.jboss.org
https://lists.jboss.org/mailman/listinfo/keycloak-dev
_______________________________________________ keycloak-dev mailing list
keycloak-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/keycloak-dev

From mposolda at redhat.com Mon Mar 20 04:53:30 2017
From: mposolda at redhat.com (Marek Posolda)
Date: Mon, 20 Mar 2017 09:53:30 +0100
Subject: [keycloak-dev] Improve back-button and refreshes in authenticators?
In-Reply-To: References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> <81b90531-ecd5-4a15-9c3d-2ccc9c9317b5@redhat.com> <7d195e6a-cb45-1565-5f72-2c5412def258@redhat.com> <68cc026b-0b5d-323b-4c6c-85b2414ca052@redhat.com> <7d85a3a6-d247-2234-d8eb-648ad66fc6f6@redhat.com>
Message-ID:

On 20/03/17 08:54, Stian Thorgersen wrote:
> Let's go with the "page expired" page then, but we wouldn't need it
> for the refresh page would we? That can just re-display the same thing
> again.
IMO if you are on the TOTP page and you click "refresh", you should just receive the TOTP page.

However for the case that you are on the TOTP page and you clicked "back", you are on our "page expired" page. Now if you click "Refresh" you should still be on the "page expired" page IMO. If the user wants to go to the last step, he can click on the link, which we provide on our "page expired" page, and then he will go to the last step and the TOTP page is shown. I think it will be challenging to do it any other way as there is no easy way to track whether a request was triggered with the back, forward or refresh button.

>
> +1 I kinda thought about writing authenticators, but you're right that
> should be made as simple as possible and we should ideally handle all
> of this outside the authenticators themselves. Something that would be
> impossible if users could step backwards/forwards in the flow as you'd
> have to have "roll back" events or something so authenticators could
> undo stuff if needed.
Another possibility is to track the complete history of the loginSession.
Hence clicking back-button will revert also the loginSession to the step before last authenticator and all the changes to the state of loginSession (notes, attributes authenticatedUser etc) will be reverted to that stage. However this looks like quite challenging and not optimal for memory footprint... I guess not an option? Marek > > On 17 March 2017 at 16:21, Bill Burke > wrote: > > The best user experience would be that the user can click the > back/forward/refresh buttons as they wanted and things would work > as they would expect, i.e. if you're on the OTP page, you could > click the back button and go to the username/password page and > re-enter username password. I didn't implement support for this > approach as its really freakin hard to get right. > > The real question is, do we want to support going backwards or > forwards in a flow? If we don't, then there are considerations > and limitations on what we can do for the user experience which I > was trying to get at before. Specifically: > > * Is it possible to determine the difference between the back, > forward, or refresh button event? > > That is the question I struggled the most with when implementing > auth flow processing. > > I'd say we set the bar low and minimally try to provide this > experience: > > * If back, forward, or refresh button is pushed, show the "Page > expired" page you suggested before Stian. > > Anything different than that will be dependent on the limitations > of the browser and our auth flow implementation. > > There's also another type of user experience you aren't > considering that should be involved with the discussion. > Specifically the user experience of writing an Authenticator. You > want the Authenticator development to be simple...you also want to > make sure that its implemented in a way that developers can't make > mistakes and introduce security holes by accident. All this stuff > is tied together which makes the problem really complex. 
> > > On 3/17/17 3:45 AM, Stian Thorgersen wrote: >> I repeat: >> >> Before we discuss implementation though, let's figure out what >> the ideal user experience would be then figure out how to >> implement it. What about: >> >> * Refresh just works >> * Back button will display a nicer page, something like "Page has >> expired. To restart the login process click here. To continue the >> login process click here.". Or Back button could just go to the >> start of the flow always. >> * Resubmitting forms will just display the page above >> * No need to do redirects. Redirects is bad for performance, but >> also has twice the response time which is not good from a >> usability perspective >> >> Is this the optimal user experience? Or should we do something else? >> >> On 17 March 2017 at 08:44, Stian Thorgersen > > wrote: >> >> Can we please get back to discussing what the best user >> experience is first. Then we can discuss implementations? >> >> On 16 March 2017 at 18:37, Bill Burke > > wrote: >> >> >> >> On 3/16/17 10:50 AM, Marek Posolda wrote: >>> On 16/03/17 15:27, Bill Burke wrote: >>>> * Hidden field in a form is not a good approach. Its >>>> very brittle and will not work in every situation. So >>>> huge -1 there. >>>> >>>> * browser back button is not required to resubmit the >>>> HTTP request as the page can be rendered from cache. >>>> Therefore you couldn't have a "Page Expired" page >>>> displayed when the back button is pressed without >>>> setting the header "Cache-Control: no-store, >>>> must-revalidate, max-age=0" >>> Maybe we can do some javascript stuff like this: >>> http://stackoverflow.com/questions/9046184/reload-the-site-when-reached-via-browsers-back-button >>> >>> >>> But that would mean that we will need to inject some >>> common javascript stuff into every HTML form displayed >>> by authentication SPI. Could we rely on that? >> I don't think this is a good approach as Authenticator >> develoeprs would have to do the same thing. 
>> >> >>>> >>>> * Furthermore, without some type of code/information >>>> within the URL, you also wouldn't know if somebody >>>> clicked the back button or not or whether this was a >>>> page refresh or some other GET request. >>> Once we have the cookie with loginSessionID, we can >>> lookup the loginSession. And loginSession will contain >>> last code (same like clientSession now) and last >>> authenticator. Then we just need to compare the code >>> from the loginSession with the code from request. If it >>> matches, we are good. If it doesn't match, it's likely >>> the refresh of some previous page and in that case, we >>> can just redirect to last authenticator. >>> >> This is the current behavior, but instead of using a >> cookie, the "code" is stored in the URL. >> >> With only a cookie though and no URL information, you >> won't know the different between a Back Button and a Page >> Refresh for GET requests. For POST requests, you won't >> be able to tell the differencee between a Back Button, >> Page Refresh, or whether the POST is targeted to an >> actual Authenticator. >> >> The more I think about it, things should probably stay >> the way it currently is, with improvements on user >> experience. I think we can support what Stian suggested >> with the current implementation. >> >> >>> Not sure if we also need to track all codes, so we are >>> able to distinct between the "expired" code, and between >>> the "false" code, which was never valid and was possibly >>> used by some attacker for CSRF. Maybe we can sign codes >>> with HMAC, so we can verify if it is "expired" or >>> "false" code without need to track the list of last codes. >> >> This has been done in the past. Then it was switched to >> using the same code throughout the whole flow, then Stian >> switched it to changing the code throughout the flow. I >> don't know if he uses a hash or not. 
>> >> Bill >> >> >> > > From sthorger at redhat.com Mon Mar 20 09:35:31 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Mon, 20 Mar 2017 14:35:31 +0100 Subject: [keycloak-dev] Improve back-button and refreshes in authenticators? In-Reply-To: References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> <81b90531-ecd5-4a15-9c3d-2ccc9c9317b5@redhat.com> <7d195e6a-cb45-1565-5f72-2c5412def258@redhat.com> <68cc026b-0b5d-323b-4c6c-85b2414ca052@redhat.com> <7d85a3a6-d247-2234-d8eb-648ad66fc6f6@redhat.com> Message-ID: On 20 March 2017 at 09:53, Marek Posolda wrote: > On 20/03/17 08:54, Stian Thorgersen wrote: > > Let's go with the "page expired" page then, but we wouldn't need it for > the refresh page would we? That can just re-display the same thing again. > > IMO if you are on TOTP page and you click "refresh", you should just > receive the TOTP page. > > However for the case that you are on TOTP page and you clicked "back", you > are on our "page expired" page. Now you click "Refresh" you should be still > on "page expired" page IMO. If user wants to go to last step, he can click > on the link, which we provide on our "page expired" page and then he will > go to last step and TOTP page is shown. I think it will be challenging to > do it other way as there is no easy way to track if request was triggered > with back, forward or refresh button. > Yes, refresh should always just show the same again. > > > > +1 I kinda thought about writing authenticators, but you're right that > should be made as simple as possible and we should ideally handle all of > this outside the authenticators themselves. Something that would be > impossible if users could step backwards/forwards in the flow as you'd have > to have "roll back" events or something so authenticators could undo stuff > if needed. > > Another posibility is to track the complete history of loginSession. 
Hence > clicking the back button will also revert the loginSession to the step before > the last authenticator, and all the changes to the state of the loginSession (notes, > attributes, authenticatedUser, etc.) will be reverted to that stage. However, > this looks quite challenging and not optimal for memory footprint... I > guess not an option? > There could also be changes outside of the login session that we can't track. > > > Marek > > > On 17 March 2017 at 16:21, Bill Burke wrote: >> The best user experience would be that the user can click the >> back/forward/refresh buttons as they want and things would work as they >> would expect, i.e. if you're on the OTP page, you could click the back >> button, go to the username/password page, and re-enter the username and >> password. I didn't implement support for this approach as it's really >> freakin' hard to get right. >> The real question is, do we want to support going backwards or forwards >> in a flow? If we don't, then there are considerations and limitations on >> what we can do for the user experience, which I was trying to get at >> before. Specifically: >> >> * Is it possible to determine the difference between the back, forward, >> or refresh button event? >> >> That is the question I struggled the most with when implementing auth >> flow processing. >> >> I'd say we set the bar low and minimally try to provide this experience: >> >> * If the back, forward, or refresh button is pushed, show the "Page expired" >> page you suggested before, Stian. >> >> Anything different from that will be dependent on the limitations of the >> browser and our auth flow implementation. >> >> There's also another type of user experience you aren't considering that >> should be involved in the discussion. Specifically, the user experience >> of writing an Authenticator.
You want the Authenticator development to be >> simple...you also want to make sure that it's implemented in a way that >> developers can't make mistakes and introduce security holes by accident. >> All this stuff is tied together, which makes the problem really complex. >> >> >> On 3/17/17 3:45 AM, Stian Thorgersen wrote: >> >> I repeat: >> >> Before we discuss implementation though, let's figure out what the ideal >> user experience would be, then figure out how to implement it. What about: >> >> * Refresh just works >> * Back button will display a nicer page, something like "Page has >> expired. To restart the login process click here. To continue the login >> process click here.". Or the Back button could just go to the start of the flow >> always. >> * Resubmitting forms will just display the page above >> * No need to do redirects. Redirects are bad for performance and also double >> the response time, which is not good from a usability perspective >> >> Is this the optimal user experience? Or should we do something else? >> >> On 17 March 2017 at 08:44, Stian Thorgersen wrote: >> >>> Can we please get back to discussing what the best user experience is >>> first. Then we can discuss implementations? >>> >>> On 16 March 2017 at 18:37, Bill Burke wrote: >>> >>>> >>>> >>>> On 3/16/17 10:50 AM, Marek Posolda wrote: >>>> >>>> On 16/03/17 15:27, Bill Burke wrote: >>>> >>>> * A hidden field in a form is not a good approach. It's very brittle and >>>> will not work in every situation. So huge -1 there. >>>> >>>> * The browser back button is not required to resubmit the HTTP request, as >>>> the page can be rendered from cache.
Therefore you couldn't have a "Page >>>> Expired" page displayed when the back button is pressed without setting the >>>> header "Cache-Control: no-store, must-revalidate, max-age=0". >>>> >>>> Maybe we can do some javascript stuff like this: >>>> http://stackoverflow.com/questions/9046184/reload-the-site-when-reached-via-browsers-back-button >>>> >>>> But that would mean that we would need to inject some common javascript >>>> stuff into every HTML form displayed by the authentication SPI. Could we rely >>>> on that? >>>> >>>> I don't think this is a good approach, as Authenticator developers would >>>> have to do the same thing. >>>> >>>> >>>> >>>> * Furthermore, without some type of code/information within the URL, >>>> you also wouldn't know if somebody clicked the back button or not, or >>>> whether this was a page refresh or some other GET request. >>>> >>>> Once we have the cookie with loginSessionID, we can look up the >>>> loginSession. And loginSession will contain the last code (same as >>>> clientSession now) and the last authenticator. Then we just need to compare the >>>> code from the loginSession with the code from the request. If it matches, we >>>> are good. If it doesn't match, it's likely the refresh of some previous >>>> page and in that case, we can just redirect to the last authenticator. >>>> >>>> This is the current behavior, but instead of using a cookie, the "code" >>>> is stored in the URL. >>>> >>>> With only a cookie though and no URL information, you won't know the >>>> difference between a Back Button and a Page Refresh for GET requests. For >>>> POST requests, you won't be able to tell the difference between a Back >>>> Button, Page Refresh, or whether the POST is targeted to an actual >>>> Authenticator. >>>> >>>> The more I think about it, things should probably stay the way they >>>> currently are, with improvements on user experience. I think we can support >>>> what Stian suggested with the current implementation.
>>>> >>>> >>>> Not sure if we also need to track all codes, so we are able to distinguish >>>> between the "expired" code and the "false" code, which was never >>>> valid and was possibly used by some attacker for CSRF. Maybe we can sign >>>> codes with HMAC, so we can verify if it is an "expired" or "false" code >>>> without needing to track the list of last codes. >>>> >>>> >>>> This has been done in the past. Then it was switched to using the same >>>> code throughout the whole flow, then Stian switched it to changing the code >>>> throughout the flow. I don't know if he uses a hash or not. >>>> >>>> Bill >>>> >>> >>> >> >> > > From thomas.darimont at googlemail.com Mon Mar 20 09:37:30 2017 From: thomas.darimont at googlemail.com (Thomas Darimont) Date: Mon, 20 Mar 2017 14:37:30 +0100 Subject: [keycloak-dev] How to migrate all credentials stored in Keycloak to a new encoding algorithm? In-Reply-To: References: Message-ID: Hello Stian, I know that Keycloak currently performs an ad-hoc migration of user credentials - but this requires user interaction. Users who don't log in won't get their credentials updated. The idea described in the long post essentially boils down to the following: Migrate all existing user credentials incrementally, as a batch job in the background, while remembering the credential encoding algorithm previously used, to be able to perform an ad-hoc credential validation and migration on the next user login. Validation via: new_encoding(old_encoding(rawPassword)) == storedHash, if the hashes match: update the credential via new_encoding(rawPassword) With that in place one could comply with new credential encoding requirements for all users without requiring any user interaction while still being able to verify given credentials. Cheers, Thomas 2017-03-20 8:58 GMT+01:00 Stian Thorgersen : > We already do this to some degree. If the default hashing algorithm is > changed, the users' credentials are updated on the next login.
> > Can you summarize your post please? I'm not sure what you are trying to > achieve beyond what we already do. > > On 19 March 2017 at 00:30, Thomas Darimont wrote: > >> Hello group, >> >> >> Sorry - for the long read but the following contains a proposal with a >> >> general solution for the problem. >> >> >> TLDR; section at the end. >> >> >> If you have been using Keycloak for a while, you probably have a number of >> users in the >> >> system, whose passwords are encoded by the default >> Pbkdf2PasswordHashProvider which >> >> currently uses the PBKDF2WithHmacSHA1 algorithm. >> >> To change the algorithm, one could implement a custom password encoding >> via >> Keycloak's >> >> PasswordHashProvider SPI. That works for user credential updates or newly >> created users, >> >> but what about the potentially large number of credentials of already >> existing users >> >> who are not active at the moment? >> >> >> If you need to ensure that user credentials are encoded and stored with >> >> the new algorithm, then you have to migrate all user credentials to the >> new >> algorithm. >> >> >> Storing and verifying stored passwords usually involves a single step of >> hashing in each direction: >> >> once stored as a hash, each attempt to enter the password is verified using >> the >> same hash function and >> >> comparing the hashes. If you have a collection of stored password hashes >> and the hash function must >> >> be changed, the only possibility (apart from re-initializing all password >> hashes) is to apply the >> >> second hash function to the existing hashes and remember to hash the >> entered passwords twice, too. >> >> That's why it is unavoidable to remember which hash function was used to >> create the first hash of >> >> each password. If this information can be reconstructed, the sequence of >> hash functions to apply to >> >> a clear text password to produce a comparable hash can be reapplied.
If >> the >> hashes match, the given >> >> password can then be hashed with the new hash function and stored as the >> new hash value, effectively >> >> migrating the password to use the new hash function. That's what I propose >> below. >> >> >> The following describes an incremental method for credential updates, >> verification and migration. >> >> * Incremental Credential Migration >> >> >> Imagine that you have two different credential encoding algorithms: >> >> hash_old(input, ...) - The current encoding algorithm >> >> hash_new(input, ...) - The new encoding algorithm >> >> >> We now want to update all stored credentials to use the hash_new encoding >> algorithm. >> >> In order to achieve this, the following two steps need to be performed. >> >> >> 1. Incrementally encode existing credentials >> >> In this step the existing credentials are encoded with the new encoding >> algorithm hash_new >> >> and stored as the new credential value with additional metadata (old >> encoding, new encoding) >> >> annotated with a "migration_required" marker. >> >> This marker is later used to detect credentials which need migration >> during credential validation. >> >> Note that since we encode the already encoded credential value, we do not >> need to know the plain >> >> text of the credential to perform the encoding. >> >> Encoding all credentials will probably take some time and CPU >> resources, depending on the number of credentials and the used encoding >> function configuration. >> >> Therefore it makes sense to perform this step incrementally and in >> parallel >> to the credential validation described in Step 2. This is possible because >> the newly encoded credential values >> >> are annotated with a "migration_required" marker and all other >> credentials >> will be handled by their associated encoding algorithm. >> >> >> Eventually all credentials will be encoded with the new encoding >> algorithm.
>> >> >> Pseudo-Code: encode credentials with new encoding >> >> >> for (CredentialModel credential: passwordCredentials) { >> >> // checks if the given credential should be migrated, e.g. uses hash_old >> >> if (isCredentialMigrationRequired(credential)) { >> >> metadata = credential.getConfig(); >> >> // credential.value: the original password encoded with hash_old >> >> newValue = hash_new(credential.value, credential.salt, ...); >> >> metadata = updateMetadata(metadata, "hash_new", "migration_required") >> >> updateCredential(credential, newValue, metadata) >> >> } >> >> } >> >> >> 2. Credential Validation and Migration >> >> In this step the provided password is verified by comparing the stored >> password hash against the >> >> hash computed from the sequential application of the hash functions >> hash_old and hash_new. >> >> >> 2.1 Credential Validation >> >> For credentials marked with "migration_required", compare the stored >> credential hash value with the result of hash_new(hash_old(password,... >> ),...). >> >> For all other credentials the associated credential encoding algorithm is >> used. >> >> >> Note that credential validation for non-migrated credentials is more >> expensive due to the multiple >> >> hash functions being applied in sequence. >> >> >> If the hashes match, we know that the given password was valid and the >> actual credential migration can be performed. >> >> >> 2.2 Credential Migration >> >> After successful validation of a credential tagged with a >> "migration_required" marker, the given >> >> password is encoded with the new hash function via hash_new(password). The >> credential is now stored with the new hash value and updated metadata with >> the "migration_required" marker removed. >> >> >> This concludes the migration of the credential. After the migration the >> hash_new(...) function is >> >> sufficient to verify the credential.
>> >> >> Pseudo-Code: validate and migrate credential >> >> >> boolean verify(String rawPassword, CredentialModel cred) { >> >> >> >> if (isMarkedForMigration(cred)){ >> >> // Step 2.1 Validate credential by encoding the rawPassword >> >> // with the hash_old and then hash_new algorithm. >> >> if (hash_new(hash_old(rawPassword, cred), cred) == cred.value) { >> >> >> // Step 2.2 Perform the credential migration >> >> migrateCredential(cred, hash_new(rawPassword, cred)); >> >> return true; >> >> } >> >> } else { >> >> // verify credential with hash_new(...) OR hash_old(...) >> >> } >> >> return false; >> >> } >> >> >> TLDR: Conclusion >> >> >> The proposed approach supports migration of credentials to a new encoding >> algorithm in a two step process. >> >> First the existing credential value, hashed with the old hash function, is >> hashed again with the new hash >> >> function. The resulting hash is then stored in the credential annotated >> with a migration marker. >> >> >> To verify a given password against the stored credential hash, the same >> sequence of hash functions is applied to the >> >> password and the resulting hash value is then compared against the stored >> hash. >> >> If the hash matches, the actual credential migration is performed by >> hashing the given password again but >> >> this time only with the new hash function. >> >> The resulting hash is then stored with the credential without the >> migration >> marker. >> >> >> The main benefit of this method is that one can migrate existing >> credential >> encoding mechanisms to new >> >> ones without having to keep old credentials hashed with potentially >> insecure algorithms around. >> >> The method can incrementally update the credentials by using markers on >> the >> stored credentials to >> >> steer credential validation. >> >> It comes with the cost of potentially more CPU intensive credential >> validation for non-migrated >> >> credentials that need to be verified and migrated. 
>> >> >> Given the continuous progression in the fields of security and >> cryptography >> it is only a matter of time >> >> before one needs to change a credential encoding mechanism in order to >> comply >> with the latest recommended >> >> security standards. >> >> >> Therefore I think this incremental credential migration would be a >> valuable >> feature to add to >> >> the Keycloak system. >> >> >> What do you guys think? >> >> >> Cheers, >> >> Thomas >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > From sthorger at redhat.com Mon Mar 20 09:47:41 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Mon, 20 Mar 2017 14:47:41 +0100 Subject: [keycloak-dev] How to migrate all credentials stored in Keycloak to a new encoding algorithm? In-Reply-To: References: Message-ID: Ok, I figured that might have been what you were saying. That would probably be a nice addition, but it would be a fairly complex thing to add. Say you have 10 million users in the db; starting a background task to rehash all the credentials would make it churn away for days and grind all other requests to a halt. So you'd need a background scheduling mechanism that is capable of only scheduling when there are free resources. Then there's also clustering to deal with. On 20 March 2017 at 14:37, Thomas Darimont wrote: > Hello Stian, > > I know that Keycloak currently performs an ad-hoc migration of user > credentials - but this requires user interaction. > Users who don't log in won't get their credentials updated. > > The idea described in the long post essentially boils down to the > following: > > Migrate all existing user credentials incrementally, as a batch job in the > background, while remembering the credential encoding algorithm previously > used, to be able to perform an ad-hoc credential validation and migration on > the next user login.
> > Validation via: > new_encoding(old_encoding(rawPassword)) == storedHash, > if the hashes match: > update the credential via new_encoding(rawPassword) > > With that in place one could comply with new credential encoding > requirements for all users > without requiring any user interaction while still being able to verify > given credentials. > > Cheers, > Thomas > > 2017-03-20 8:58 GMT+01:00 Stian Thorgersen : >> We already do this to some degree. If the default hashing algorithm is >> changed, the users' credentials are updated on the next login. >> >> Can you summarize your post please? I'm not sure what you are trying to >> achieve beyond what we already do. >> >> On 19 March 2017 at 00:30, Thomas Darimont wrote: >> >>> [snip: full original proposal, quoted verbatim] >> >> > From bburke at redhat.com Mon Mar 20 12:46:15 2017 From: bburke at redhat.com (Bill Burke) Date: Mon, 20 Mar 2017 12:46:15 -0400 Subject: [keycloak-dev] How to migrate all credentials stored in Keycloak to a new encoding algorithm? In-Reply-To: References: Message-ID: 100 users with recommended hashing iterations (20000) took 9 seconds on my laptop. This is just raw encoding with no DB updates included.
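Bill's measurement can be reproduced with a raw-encoding sketch like this. This is hypothetical benchmark code, not Keycloak's provider: the 512-bit derived-key length is an assumption, and the timing depends entirely on hardware.

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class HashBench {
    // PBKDF2WithHmacSHA1, matching the algorithm named in the thread.
    // The 512-bit derived-key length is an assumption for this sketch.
    public static byte[] encode(char[] password, byte[] salt, int iterations) {
        try {
            PBEKeySpec spec = new PBEKeySpec(password, salt, iterations, 512);
            return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1")
                    .generateSecret(spec).getEncoded();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        byte[] salt = {1, 2, 3, 4, 5, 6, 7, 8};
        long start = System.nanoTime();
        int users = 100;
        for (int i = 0; i < users; i++) {
            // Raw encoding only, no DB work, as in the measurement above.
            encode(("password" + i).toCharArray(), salt, 20_000);
        }
        long ms = (System.nanoTime() - start) / 1_000_000;
        System.out.println(users + " hashes took " + ms + " ms");
    }
}
```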
https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html " 9,223,372,036,854,775,808 SHA1 computations" And, I think, but am not certain, this is for one SHA1 iteration. Are people overreacting to this, specifically for password hash storage? On 3/18/17 7:30 PM, Thomas Darimont wrote: > Hello group, > > > Sorry - for the long read but the following contains a proposal with a > > general solution for the problem. > > > TLDR; section at the end. > > > If you have been using Keycloak for a while, you probably have a number of > users in the > > system, whose passwords are encoded by the default > Pbkdf2PasswordHashProvider which > > currently uses the PBKDF2WithHmacSHA1 algorithm. > > To change the algorithm, one could implement a custom password encoding via > Keycloak?s > > PasswordHashProvider SPI. That works for user credential updates or newly > created users, > > but what about the potentially large number of credentials of already > existing users > > who are not active at the moment? > > > If you need to ensure that user credentials are encoded and stored with > > the new algorithm, then you have to migrate all user credentials to the new > algorithm. > > > Storing and verifying stored passwords usually involves a single step of > hashing in each direction: > > once stored as a hash, each try to enter the password is verified using the > same hash function and > > comparing the hashes. If you have a collection of stored password hashes > and the hash function must > > be changed, the only possibility (apart from re-initializing all password > hashes) is to apply the > > second hash function to the existing hashes and remember to hash the > entered passwords twice, too. > > That?s why it is unavoidable to remember which hash function was used to > create the first hash of > > each password. 
If this information can be reconstructed, the sequence of > hash functions to apply to > > a clear text password to produce a comparable hash can be reapplied. If the > hashes match, the given > > password can then be hashed with the new hash function and stored as the > new hash value, effectively > > migrating the password to use the new hash function. That?s what I propose > below. > > > The following describes an incremental method for credential updates, > verification and migration. > > * Incremental Credential Migration > > > Imagine that you have two different credential encoding algorithms: > > hash_old(input, ...) - The current encoding algorithm > > hash_new(input, ...) - The new encoding algorithm > > > We now want to update all stored credentials to use the hash_new encoding > algorithm. > > In order to achieve this the following two steps need to be performed. > > > 1. Incrementally encode existing credentials > > In this step the existing credentials are encoded with the new encoding > algorithm hash_new > > and stored as the new credential value with additional metadata (old > encoding, new encoding) > > annotated with a ?migration_required? marker. > > This marker is later used to detect credentials which needs migration > during credential validation. > > Note that since we encode the already encoded credential value we do not > need to know the plain > > text of the credential to perform the encoding. > > The encoding all credentials will probably take some time and CPU > resources, depending on the number of credentials and the used encoding > function configuration. > > Therefore it makes sense to perform this step incrementally and in parallel > to the credential validation described in Step 2. This is possible because > the newly encoded credential values > > are annotated with a ?migration_required? marker and all other credentials > will be handled by their associated encoding algorithm. 
> > > Eventually all credentials will be encoded with the new encoding algorithm. > > > Pseudo-Code: encode credentials with new encoding > > > for (CredentialModel credential: passwordCredentials) { > > // checks if given credential should be migrated, e.g. uses hash_old > > if (isCredentialMigrationRequired(credential)) { > > metadata = credential.getConfig(); > > // credential.value: the original password encoded with hash_old > > newValue = hash_new(credential.value, credential.salt, ?); > > metadata = updateMetadata(metadata, ?hash_new?, ?migration_required?) > > updateCredential (credential, newValue, metadata) > > } > > } > > > 2. Credential Validation and Migration > > In this step the provided password is verified by comparing the stored > password hash against the > > hash computed from the sequential application of the hash functions > hash_old and hash_new. > > > 2.1 Credential Validation > > For credentials marked with ?migration_required?, compare the stored > credential hash value with the result of hash_new(hash_old(password,... > ),...). > > For all other credentials the associated credential encoding algorithm is > used. > > > Note that credential validation for non-migrated credentials are more > expensive due to the multiple > > hash functions being applied in sequence. > > > If the hashes match, we know that the given password was valid and the > actual credential migration can be performed. > > > 2.2 Credential Migration > > After successful validation of a credential tagged with a > ?migration_required? marker, the given > > password is encoded with the new hash function via hash_new(password). The > credential is now stored with the new hash value and updated metadata with > the ?migration_required? marker removed. > > > This concludes the migration of the credential. After the migration the > hash_new(...) function is > > sufficient to verify the credential. 
> > > Pseudo-Code: validate and migrate credential > > > boolean verify(String rawPassword, CredentialModel cred) { > > > > if (isMarkedForMigration(cred)) { > > // Step 2.1: validate the credential by encoding the rawPassword > > // with the hash_old and then the hash_new algorithm. > > if (hash_new(hash_old(rawPassword, cred), cred).equals(cred.value)) { > > > // Step 2.2: perform the credential migration > > migrateCredential(cred, hash_new(rawPassword, cred)); > > return true; > > } > > } else { > > // verify credential with hash_new(...) OR hash_old(...) > > } > > return false; > > } > > > TL;DR: Conclusion > > > The proposed approach supports migration of credentials to a new encoding > algorithm in a two-step process. > > First the existing credential value, hashed with the old hash function, is > hashed again with the new hash > > function. The resulting hash is then stored in the credential, annotated > with a migration marker. > > > To verify a given password against the stored credential hash, the same > sequence of hash functions is applied to the > > password and the resulting hash value is then compared against the stored > hash. > > If the hash matches, the actual credential migration is performed by > hashing the given password again, but > > this time only with the new hash function. > > The resulting hash is then stored with the credential without the migration > marker. > > > The main benefit of this method is that one can migrate existing credential > encoding mechanisms to new > > ones without having to keep old credentials hashed with potentially > insecure algorithms around. > > The method can incrementally update the credentials by using markers on the > stored credentials to > > steer credential validation. > > It comes with the cost of potentially more CPU-intensive credential > validation for non-migrated > > credentials that need to be verified and migrated. 
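The two pseudo-code fragments above can be condensed into a runnable sketch. This is an illustrative Python translation under assumed stand-ins (SHA-1 for hash_old, PBKDF2 for hash_new, a plain dict in place of CredentialModel); it is not Keycloak's actual credential SPI:

```python
import hashlib

def hash_old(value: bytes, salt: bytes) -> bytes:
    # Stand-in for the legacy encoding algorithm (e.g. an outdated hash).
    return hashlib.sha1(salt + value).digest()

def hash_new(value: bytes, salt: bytes) -> bytes:
    # Stand-in for the new, stronger encoding algorithm.
    return hashlib.pbkdf2_hmac("sha256", value, salt, 1000)

# Step 1: batch re-encoding -- wrap the stored hash_old value with hash_new.
# The plain-text password is NOT needed for this step.
def migrate_stored(cred: dict) -> None:
    if cred["algorithm"] == "hash_old":
        cred["value"] = hash_new(cred["value"], cred["salt"])
        cred["algorithm"] = "hash_new(hash_old)"
        cred["migration_required"] = True

# Step 2: validation; on success, finish the migration by storing
# hash_new(password) directly and dropping the marker.
def verify(raw_password: str, cred: dict) -> bool:
    pw = raw_password.encode()
    if cred.get("migration_required"):
        # 2.1: apply the same sequence of hash functions to the password.
        candidate = hash_new(hash_old(pw, cred["salt"]), cred["salt"])
        if candidate == cred["value"]:
            # 2.2: password proven valid -> re-hash with hash_new only.
            cred["value"] = hash_new(pw, cred["salt"])
            cred["algorithm"] = "hash_new"
            cred["migration_required"] = False
            return True
        return False
    return hash_new(pw, cred["salt"]) == cred["value"]

salt = b"per-user-salt"
cred = {"value": hash_old(b"s3cret", salt), "salt": salt, "algorithm": "hash_old"}
migrate_stored(cred)                    # offline step, no password needed
assert verify("s3cret", cred)           # validates and migrates
assert cred["algorithm"] == "hash_new"  # hash_new alone now suffices
assert not verify("wrong", cred)
```

The sketch shows the key property of the proposal: the insecure hash_old digest never needs to be kept once Step 1 has wrapped it, and the full migration completes lazily at the user's next successful login.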
> > > Given the continuous progression in the fields of security and cryptography, > it is only a matter of time > > before one needs to change a credential encoding mechanism in order to comply > with the latest recommended > > security standards. > > > Therefore I think this incremental credential migration would be a valuable > feature to add to > > the Keycloak system. > > > What do you guys think? > > > Cheers, > > Thomas > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From bburke at redhat.com Mon Mar 20 13:04:04 2017 From: bburke at redhat.com (Bill Burke) Date: Mon, 20 Mar 2017 13:04:04 -0400 Subject: [keycloak-dev] Improve back-button and refreshes in authenticators? In-Reply-To: References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> <81b90531-ecd5-4a15-9c3d-2ccc9c9317b5@redhat.com> <7d195e6a-cb45-1565-5f72-2c5412def258@redhat.com> <68cc026b-0b5d-323b-4c6c-85b2414ca052@redhat.com> <7d85a3a6-d247-2234-d8eb-648ad66fc6f6@redhat.com> Message-ID: On 3/20/17 3:54 AM, Stian Thorgersen wrote: > Let's go with the "page expired" page then, The current behavior is "sticky" in that any back button or page refresh just renders the current Authenticator in the flow. What sucks with that is that you have to have a cancel button on each page in order to restart the flow. > but we wouldn't need it for the refresh page would we? That can just > re-display the same thing again. > Depends on the implementation of flow processing. This was what I was trying to explain before. Remember that refresh re-executes the request to the original URL. If you set cache control headers, the back button will also re-execute the request. 
With current impl, code is in the URL, so with a refresh or back button re-execution the runtime knows the request is a bad one, so it just finds your current place in the flow and re-renders it. Without any code in the URL, the runtime would not know whether your POST was an old POST triggered by a browser button, or if you were posting to the correct and current Authenticator. > +1 I kinda thought about writing authenticators, but you're right that > it should be made as simple as possible and we should ideally handle all > of this outside the authenticators themselves. Something that would be > impossible if users could step backwards/forwards in the flow as you'd > have to have "roll back" events or something so authenticators could > undo stuff if needed. You could have a stack and push and pop on it as you went forwards and backwards. > > On 17 March 2017 at 16:21, Bill Burke > wrote: > > The best user experience would be that the user can click the > back/forward/refresh buttons as they wanted and things would work > as they would expect, i.e. if you're on the OTP page, you could > click the back button and go to the username/password page and > re-enter username/password. I didn't implement support for this > approach as it's really freakin hard to get right. > > The real question is, do we want to support going backwards or > forwards in a flow? If we don't, then there are considerations > and limitations on what we can do for the user experience which I > was trying to get at before. Specifically: > > * Is it possible to determine the difference between the back, > forward, or refresh button event? > > That is the question I struggled the most with when implementing > auth flow processing. > > I'd say we set the bar low and minimally try to provide this > experience: > > * If the back, forward, or refresh button is pushed, show the "Page > expired" page you suggested before, Stian. 
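The code-in-the-URL detection described above can be sketched as follows. This is a hypothetical illustration (names like AuthSession and handle_action are invented; a real implementation would use Keycloak's server-side login session, not a plain object): each rendered page embeds the session's current code in its URLs, and a submission whose code no longer matches is a stale back-button or refresh re-execution, so the flow re-renders instead of processing the action.

```python
import secrets

class AuthSession:
    """Hypothetical server-side login session tracking the flow position."""
    def __init__(self):
        self.step = 0
        self.code = secrets.token_urlsafe(8)  # code embedded in rendered URLs

    def advance(self) -> None:
        # Moving to the next authenticator invalidates all previously
        # rendered URLs: their embedded code no longer matches.
        self.step += 1
        self.code = secrets.token_urlsafe(8)

def handle_action(session: AuthSession, code_from_url: str) -> str:
    if code_from_url != session.code:
        # Stale submission (back button / refresh of an old page):
        # don't process the action, re-render the current step or a
        # "page expired" page instead.
        return "page-expired"
    session.advance()
    return "processed"

s = AuthSession()
first_code = s.code
assert handle_action(s, first_code) == "processed"    # fresh submit
assert handle_action(s, first_code) == "page-expired" # back-button resubmit
```

Without the code in the URL (cookie only), the two branches above would be indistinguishable, which is Bill's point about stale POSTs.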
> > Anything different than that will be dependent on the limitations > of the browser and our auth flow implementation. > > There's also another type of user experience you aren't > considering that should be involved with the discussion. > Specifically the user experience of writing an Authenticator. You > want the Authenticator development to be simple...you also want to > make sure that it's implemented in a way that developers can't make > mistakes and introduce security holes by accident. All this stuff > is tied together which makes the problem really complex. > > > On 3/17/17 3:45 AM, Stian Thorgersen wrote: >> I repeat: >> >> Before we discuss implementation though, let's figure out what >> the ideal user experience would be then figure out how to >> implement it. What about: >> >> * Refresh just works >> * Back button will display a nicer page, something like "Page has >> expired. To restart the login process click here. To continue the >> login process click here.". Or Back button could just go to the >> start of the flow always. >> * Resubmitting forms will just display the page above >> * No need to do redirects. Redirects are bad for performance, and >> also double the response time, which is not good from a >> usability perspective >> >> Is this the optimal user experience? Or should we do something else? >> >> On 17 March 2017 at 08:44, Stian Thorgersen > > wrote: >> >> Can we please get back to discussing what the best user >> experience is first. Then we can discuss implementations? >> >> On 16 March 2017 at 18:37, Bill Burke > > wrote: >> >> >> >> On 3/16/17 10:50 AM, Marek Posolda wrote: >>> On 16/03/17 15:27, Bill Burke wrote: >>>> * Hidden field in a form is not a good approach. It's >>>> very brittle and will not work in every situation. So >>>> huge -1 there. >>>> >>>> * browser back button is not required to resubmit the >>>> HTTP request as the page can be rendered from cache. 
>>>> Therefore you couldn't have a "Page Expired" page >>>> displayed when the back button is pressed without >>>> setting the header "Cache-Control: no-store, >>>> must-revalidate, max-age=0" >>> Maybe we can do some javascript stuff like this: >>> http://stackoverflow.com/questions/9046184/reload-the-site-when-reached-via-browsers-back-button >>> >>> >>> But that would mean that we will need to inject some >>> common javascript stuff into every HTML form displayed >>> by the authentication SPI. Could we rely on that? >> I don't think this is a good approach as Authenticator >> developers would have to do the same thing. >> >> >>>> >>>> * Furthermore, without some type of code/information >>>> within the URL, you also wouldn't know if somebody >>>> clicked the back button or not or whether this was a >>>> page refresh or some other GET request. >>> Once we have the cookie with loginSessionID, we can >>> look up the loginSession. And loginSession will contain >>> the last code (same as clientSession now) and last >>> authenticator. Then we just need to compare the code >>> from the loginSession with the code from the request. If it >>> matches, we are good. If it doesn't match, it's likely >>> the refresh of some previous page and in that case, we >>> can just redirect to the last authenticator. >>> >> This is the current behavior, but instead of using a >> cookie, the "code" is stored in the URL. >> >> With only a cookie though and no URL information, you >> won't know the difference between a Back Button and a Page >> Refresh for GET requests. For POST requests, you won't >> be able to tell the difference between a Back Button, >> Page Refresh, or whether the POST is targeted to an >> actual Authenticator. >> >> The more I think about it, things should probably stay >> the way it currently is, with improvements on user >> experience. I think we can support what Stian suggested >> with the current implementation. 
>> >> >>> Not sure if we also need to track all codes, so we are >>> able to distinct between the "expired" code, and between >>> the "false" code, which was never valid and was possibly >>> used by some attacker for CSRF. Maybe we can sign codes >>> with HMAC, so we can verify if it is "expired" or >>> "false" code without need to track the list of last codes. >> >> This has been done in the past. Then it was switched to >> using the same code throughout the whole flow, then Stian >> switched it to changing the code throughout the flow. I >> don't know if he uses a hash or not. >> >> Bill >> >> >> > > From sthorger at redhat.com Tue Mar 21 02:28:26 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Tue, 21 Mar 2017 07:28:26 +0100 Subject: [keycloak-dev] Improve back-button and refreshes in authenticators? In-Reply-To: References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> <81b90531-ecd5-4a15-9c3d-2ccc9c9317b5@redhat.com> <7d195e6a-cb45-1565-5f72-2c5412def258@redhat.com> <68cc026b-0b5d-323b-4c6c-85b2414ca052@redhat.com> <7d85a3a6-d247-2234-d8eb-648ad66fc6f6@redhat.com> Message-ID: On 20 March 2017 at 18:04, Bill Burke wrote: > > > On 3/20/17 3:54 AM, Stian Thorgersen wrote: > > Let's go with the "page expired" page then, > > The current behavior is "sticky" in that any back button or page refresh > just renders the current Authenticator in the flow. What sucks with that > is that you have to have a cancel button on each page in order to restart > the flow. > > > but we wouldn't need it for the refresh page would we? That can just > re-display the same thing again. > > > Depends on the implementation of flow processing. This was what I was > trying to explain before. > > Remember that refresh re-executes the request to original URL. IF you set > cache control headers, back button will also re-execute the request. 
With > current impl, code is in the URL, so with a refresh or back button > re-execution the runtime knows the request is a bad one, so it just finds > your current place in the flow and re-renders it. Without a any code in > the URL, the runtime would not know whether your POST was an old POST > triggered by a browser button, or if you were posting to the correct and > current Authenticator. > So we have to have something in the URL that's clear to me now. We should just add a step number or something rather than the code we have now (the codes are expensive to create). > > > > +1 I kinda thought about writing authenticators, but you're right that > should be made as simple as possible and we should ideally handle all of > this outside the authenticators themselves. Something that would be > impossible if users could step backwards/forwards in the flow as you'd have > to have "roll back" events or something so authenticators could undo stuff > if needed. > > You could have a stack and push and pop on it as you went forwards and > backwards. > That would only work if there are no changes outside authentication session, but we already have things like required actions that change the user directly. We would also need some option on authenticators and required actions whether or not they can be replayed or something like that. Sounds complicated... > > > > > On 17 March 2017 at 16:21, Bill Burke wrote: > >> The best user experience would be that the user can click the >> back/forward/refresh buttons as they wanted and things would work as they >> would expect, i.e. if you're on the OTP page, you could click the back >> button and go to the username/password page and re-enter username >> password. I didn't implement support for this approach as its really >> freakin hard to get right. >> The real question is, do we want to support going backwards or forwards >> in a flow? 
If we don't, then there are considerations and limitations on >> what we can do for the user experience which I was trying to get at >> before. Specifically: >> >> * Is it possible to determine the difference between the back, forward, >> or refresh button event? >> >> That is the question I struggled the most with when implementing auth >> flow processing. >> >> I'd say we set the bar low and minimally try to provide this experience: >> >> * If back, forward, or refresh button is pushed, show the "Page expired" >> page you suggested before Stian. >> >> Anything different than that will be dependent on the limitations of the >> browser and our auth flow implementation. >> >> There's also another type of user experience you aren't considering that >> should be involved with the discussion. Specifically the user experience >> of writing an Authenticator. You want the Authenticator development to be >> simple...you also want to make sure that its implemented in a way that >> developers can't make mistakes and introduce security holes by accident. >> All this stuff is tied together which makes the problem really complex. >> >> >> On 3/17/17 3:45 AM, Stian Thorgersen wrote: >> >> I repeat: >> >> Before we discuss implementation though, let's figure out what the ideal >> user experience would be then figure out how to implement it. What about: >> >> * Refresh just works >> * Back button will display a nicer page, something like "Page has >> expired. To restart the login process click here. To continue the login >> process click here.". Or Back button could just go to the start of the flow >> always. >> * Resubmitting forms will just display the page above >> * No need to do redirects. Redirects is bad for performance, but also has >> twice the response time which is not good from a usability perspective >> >> Is this the optimal user experience? Or should we do something else? 
>> >> On 17 March 2017 at 08:44, Stian Thorgersen wrote: >> >>> Can we please get back to discussing what the best user experience is >>> first. Then we can discuss implementations? >>> >>> On 16 March 2017 at 18:37, Bill Burke wrote: >>> >>>> >>>> >>>> On 3/16/17 10:50 AM, Marek Posolda wrote: >>>> >>>> On 16/03/17 15:27, Bill Burke wrote: >>>> >>>> * Hidden field in a form is not a good approach. Its very brittle and >>>> will not work in every situation. So huge -1 there. >>>> >>>> * browser back button is not required to resubmit the HTTP request as >>>> the page can be rendered from cache. Therefore you couldn't have a "Page >>>> Expired" page displayed when the back button is pressed without setting the >>>> header "Cache-Control: no-store, must-revalidate, max-age=0" >>>> >>>> Maybe we can do some javascript stuff like this: >>>> http://stackoverflow.com/questions/9046184/reload-the-site-w >>>> hen-reached-via-browsers-back-button >>>> >>>> But that would mean that we will need to inject some common javascript >>>> stuff into every HTML form displayed by authentication SPI. Could we rely >>>> on that? >>>> >>>> I don't think this is a good approach as Authenticator develoeprs would >>>> have to do the same thing. >>>> >>>> >>>> >>>> * Furthermore, without some type of code/information within the URL, >>>> you also wouldn't know if somebody clicked the back button or not or >>>> whether this was a page refresh or some other GET request. >>>> >>>> Once we have the cookie with loginSessionID, we can lookup the >>>> loginSession. And loginSession will contain last code (same like >>>> clientSession now) and last authenticator. Then we just need to compare the >>>> code from the loginSession with the code from request. If it matches, we >>>> are good. If it doesn't match, it's likely the refresh of some previous >>>> page and in that case, we can just redirect to last authenticator. 
>>>> >>>> This is the current behavior, but instead of using a cookie, the "code" >>>> is stored in the URL. >>>> >>>> With only a cookie though and no URL information, you won't know the >>>> different between a Back Button and a Page Refresh for GET requests. For >>>> POST requests, you won't be able to tell the differencee between a Back >>>> Button, Page Refresh, or whether the POST is targeted to an actual >>>> Authenticator. >>>> >>>> The more I think about it, things should probably stay the way it >>>> currently is, with improvements on user experience. I think we can support >>>> what Stian suggested with the current implementation. >>>> >>>> >>>> Not sure if we also need to track all codes, so we are able to distinct >>>> between the "expired" code, and between the "false" code, which was never >>>> valid and was possibly used by some attacker for CSRF. Maybe we can sign >>>> codes with HMAC, so we can verify if it is "expired" or "false" code >>>> without need to track the list of last codes. >>>> >>>> >>>> This has been done in the past. Then it was switched to using the same >>>> code throughout the whole flow, then Stian switched it to changing the code >>>> throughout the flow. I don't know if he uses a hash or not. >>>> >>>> Bill >>>> >>> >>> >> >> > > From velias at redhat.com Tue Mar 21 05:21:57 2017 From: velias at redhat.com (Vlastimil Elias) Date: Tue, 21 Mar 2017 10:21:57 +0100 Subject: [keycloak-dev] Improve back-button and refreshes in authenticators? 
In-Reply-To: References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> <81b90531-ecd5-4a15-9c3d-2ccc9c9317b5@redhat.com> <7d195e6a-cb45-1565-5f72-2c5412def258@redhat.com> <68cc026b-0b5d-323b-4c6c-85b2414ca052@redhat.com> <7d85a3a6-d247-2234-d8eb-648ad66fc6f6@redhat.com> <273638a2-663c-5b08-d977-97d5721eb21b@redhat.com> Message-ID: Hi, On 17.3.2017 15:18, Stian Thorgersen wrote: > The ideal behavior IMO is: > > 1. Back button always just shows the Page expired message, with option to > continue or restart > 2. Refresh always works > 3. No redirect required on POSTs My fear is how to explain the "restart" action (on the Expired page) in a way the common user understands. Login flows are typically part of some broader user flow started in the client app; users typically do not perceive them as a separate thing. Also, where exactly will the user be pointed after "restart"? E.g. what does "restart" mean when the user went through the registration form (initiated via either the /register OIDC endpoint or the "Register" button on the Login page)? Does this mean that he loses everything he entered into the registration form? This may be really painful for more complicated registration forms (e.g. the one we have for RHD) if the user only used the back button to correct something entered into the form. Another example of a flow one of our users actually used: 1. User came to the login page from some app 2. Used a social login link (e.g. Github) 3. The social account was not linked, so he got a form to fill in the other info necessary to create a new account 4. User used the back button to get back to our login page, then used the "Register" button there. He got some error (I'm not sure in which version of Keycloak, maybe it is resolved now, but we do not want a regression). How will this work in the behavior you proposed? 
I believe these kinds of cases (switching between "login" and "register" types of flow) should work correctly and show a usable Register form without any error page/message. > If we can't have all 3 (and it sounds like we can't) I'd probably say #3 is > the lowest priority. Agree, but there must be at most one browser redirect after a POST (we complained when one historical implementation used more redirects after one POST) due to user response time. Vl. > I think the option to "restart" the flow is important > as the user in most cases would click back because they've done something > wrong. > > So what Bill suggested would work I think: > > * Have "Cache-Control: no-store, must-revalidate, max-age=0" > * I guess we need the redirect after a POST to get to the next step > * Have the execution ID in the flow as well as the current execution in the > authentication session. If the requested execution ID is not equal to the one > in the authentication session, display the "Page expired" page > > What I don't quite get though is why does the redirect after POST prevent > the "form is expired, do you want to resubmit" or whatever the message is? > > > > On 17 March 2017 at 15:08, Stian Thorgersen wrote: > >> >> On 17 March 2017 at 11:12, Marek Posolda wrote: >> >>> On 17/03/17 09:40, Stian Thorgersen wrote: >>> >>> >>> >>> On 17 March 2017 at 09:22, Marek Posolda wrote: >>> >>>> Ok, for now just ignoring the browser limitations and the fact that >>>> back/forward doesn't refresh the page automatically for POST requests :) >>>> >>>> On 17/03/17 08:45, Stian Thorgersen wrote: >>>> >>>> I repeat: >>>> >>>> Before we discuss implementation though, let's figure out what the ideal >>>> user experience would be then figure out how to implement it. What about: >>>> >>>> * Refresh just works >>>> >>>> * Back button will display a nicer page, something like "Page has >>>> expired. To restart the login process click here. To continue the login >>>> process click here.". 
>>>> >>>> >>>> Yeah, that will be nice for UXP. >>>> >>>> Or Back button could just go to the start of the flow always. >>>> >>>> >>>> Regarding UXP, I personally like your previous proposal better. If user >>>> is deep after confirm many authenticator forms and he accidentally >>>> clicks back-button, he will need to re-authenticate in all >>>> authenticators again. Not so great for usability though? >>>> >>>> >>> True - giving the user the option to choose is probably best. >>> >>> >>>> * Resubmitting forms will just display the page above >>>> >>>> If we do any of your previous proposal, user will never see the forms, >>>> >>>> which he already submitted? For example if he submitted >>>> username/password and now is on TOTP page, then after click "back" he will be >>>> either on the "Page has expired" or start of the flow. The start of the flow usually >>>> will be username/password form, but flow started from scratch, so it >>>> won't be resubmitting form, but new submit? >>>> >>>> Anyway yes, if some of previous forms is re-submitted, we can display "page is expired" page. >>>> >>>> I'm not quite following. Is it possible to prevent the back buttons from >>> "re-submitting" forms at all? If so that's ideal as you then don't get the >>> ugly message from the browser that the form is expired. >>> >>> Yes, as long as we don't send any "Cache-control" header, then browser >>> back/forward buttons doesn't resubmit forms and doesn't re-send any >>> requests. >>> >>> So follow-up on the example above >>> 1) User successfully authenticated on username/password form and he is on >>> TOTP page. >>> 2) User press browser "back" button. Now he will see again the >>> username/password form >>> 3) User will try to re-submit the username/password form OR he press >>> browser "refresh" button. In both cases, we will show our nice "Page has >>> expired. To restart the login process click here. To continue the login >>> process click here." 
>>> >>> Are we in agreement that this is ideal user experience? >>> >> Not quite. Clicking back shouldn't show the form again. It should rather >> just show the page expired message and ask user if they want to restart or >> continue. >> >> By the way Google's login flows are really nice. Much better than ours. >> >> >>> If yes, we can achieve that quite easily without need of javascript hacks >>> or hidden form fields though. >>> >>> Marek >>> >>> >>> >>>> Marek >>>> >>>> * No need to do redirects. Redirects is bad for performance, but also >>>> has twice the response time which is not good from a usability perspective >>>> >>>> Is this the optimal user experience? Or should we do something else? >>>> >>>> On 17 March 2017 at 08:44, Stian Thorgersen wrote: >>>> >>>>> Can we please get back to discussing what the best user experience is >>>>> first. Then we can discuss implementations? >>>>> >>>>> On 16 March 2017 at 18:37, Bill Burke wrote: >>>>> >>>>>> >>>>>> On 3/16/17 10:50 AM, Marek Posolda wrote: >>>>>> >>>>>> On 16/03/17 15:27, Bill Burke wrote: >>>>>> >>>>>> * Hidden field in a form is not a good approach. Its very brittle and >>>>>> will not work in every situation. So huge -1 there. >>>>>> >>>>>> * browser back button is not required to resubmit the HTTP request as >>>>>> the page can be rendered from cache. Therefore you couldn't have a "Page >>>>>> Expired" page displayed when the back button is pressed without setting the >>>>>> header "Cache-Control: no-store, must-revalidate, max-age=0" >>>>>> >>>>>> Maybe we can do some javascript stuff like this: >>>>>> http://stackoverflow.com/questions/9046184/reload-the-site-w >>>>>> hen-reached-via-browsers-back-button >>>>>> >>>>>> But that would mean that we will need to inject some common javascript >>>>>> stuff into every HTML form displayed by authentication SPI. Could we rely >>>>>> on that? 
>>>>>> >>>>>> I don't think this is a good approach as Authenticator develoeprs >>>>>> would have to do the same thing. >>>>>> >>>>>> >>>>>> >>>>>> * Furthermore, without some type of code/information within the URL, >>>>>> you also wouldn't know if somebody clicked the back button or not or >>>>>> whether this was a page refresh or some other GET request. >>>>>> >>>>>> Once we have the cookie with loginSessionID, we can lookup the >>>>>> loginSession. And loginSession will contain last code (same like >>>>>> clientSession now) and last authenticator. Then we just need to compare the >>>>>> code from the loginSession with the code from request. If it matches, we >>>>>> are good. If it doesn't match, it's likely the refresh of some previous >>>>>> page and in that case, we can just redirect to last authenticator. >>>>>> >>>>>> This is the current behavior, but instead of using a cookie, the >>>>>> "code" is stored in the URL. >>>>>> >>>>>> With only a cookie though and no URL information, you won't know the >>>>>> different between a Back Button and a Page Refresh for GET requests. For >>>>>> POST requests, you won't be able to tell the differencee between a Back >>>>>> Button, Page Refresh, or whether the POST is targeted to an actual >>>>>> Authenticator. >>>>>> >>>>>> The more I think about it, things should probably stay the way it >>>>>> currently is, with improvements on user experience. I think we can support >>>>>> what Stian suggested with the current implementation. >>>>>> >>>>>> >>>>>> Not sure if we also need to track all codes, so we are able to >>>>>> distinct between the "expired" code, and between the "false" code, which >>>>>> was never valid and was possibly used by some attacker for CSRF. Maybe we >>>>>> can sign codes with HMAC, so we can verify if it is "expired" or "false" >>>>>> code without need to track the list of last codes. >>>>>> >>>>>> >>>>>> This has been done in the past. 
Then it was switched to using the >>>>>> same code throughout the whole flow, then Stian switched it to changing the >>>>>> code throughout the flow. I don't know if he uses a hash or not. >>>>>> >>>>>> Bill >>>>>> >>>>> >>>> >>> > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev -- Vlastimil Elias Principal Software Engineer Red Hat Developer | Engineering From bburke at redhat.com Tue Mar 21 09:13:51 2017 From: bburke at redhat.com (Bill Burke) Date: Tue, 21 Mar 2017 09:13:51 -0400 Subject: [keycloak-dev] Improve back-button and refreshes in authenticators? In-Reply-To: References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> <81b90531-ecd5-4a15-9c3d-2ccc9c9317b5@redhat.com> <7d195e6a-cb45-1565-5f72-2c5412def258@redhat.com> <68cc026b-0b5d-323b-4c6c-85b2414ca052@redhat.com> <7d85a3a6-d247-2234-d8eb-648ad66fc6f6@redhat.com> Message-ID: <99a22b94-9d8a-aa5c-07d8-03500ed7dd6c@redhat.com> On 3/21/17 2:28 AM, Stian Thorgersen wrote: > > > On 20 March 2017 at 18:04, Bill Burke > wrote: > > > > On 3/20/17 3:54 AM, Stian Thorgersen wrote: >> Let's go with the "page expired" page then, > The current behavior is "sticky" in that any back button or page > refresh just renders the current Authenticator in the flow. What > sucks with that is that you have to have a cancel button on each > page in order to restart the flow. > > >> but we wouldn't need it for the refresh page would we? That can >> just re-display the same thing again. >> > > Depends on the implementation of flow processing. This was what I > was trying to explain before. > > Remember that refresh re-executes the request to original URL. IF > you set cache control headers, back button will also re-execute > the request. 
With current impl, code is in the URL, so with a > refresh or back button re-execution the runtime knows the request > is a bad one, so it just finds your current place in the flow and > re-renders it. Without a any code in the URL, the runtime would > not know whether your POST was an old POST triggered by a browser > button, or if you were posting to the correct and current > Authenticator. > > > So we have to have something in the URL that's clear to me now. We > should just add a step number or something rather than the code we > have now (the codes are expensive to create). To detect refresh button, you'd need to know the previous action too I think. We have two things right now: * code - this corresponds to the client session code * execution-code - this is the Authenticator's component id. This is only in the URL if you are executing an action on the Authenticator, i.e. posting username/password. I think we should keep generating the code + hash, but only do it when the login state transitions, i.e. from authentication->required-actions->code to token->logged in ...AND...make sure that code != code-to-token code :). We should also make sure that you cannot go backwards in a flow. i.e., once Authentication is successful, you can't go back to the Authentication state. Once code-to-token happens, you can go back to required actions, etc. Just to be safe, we could store the flow code both in a cookie and in the url which would guard against CSRF attacks, although I don't know if it would be possible to do a CSRF attack in an authentication flow or what it would buy the attacker. > > > >> +1 I kinda thought about writing authenticators, but you're right >> that should be made as simple as possible and we should ideally >> handle all of this outside the authenticators themselves. 
>> Something that would be impossible if users could step >> backwards/forwards in the flow as you'd have to have "roll back" >> events or something so authenticators could undo stuff if needed. > You could have a stack and push and pop on it as you went forwards > and backwards. > > > That would only work if there are no changes outside authentication > session, but we already have things like required actions that change > the user directly. We would also need some option on authenticators > and required actions whether or not they can be replayed or something > like that. Sounds complicated... Yeah, I don't want to implement it. I just worry we'd get something wrong and create a security hole. Bill From mposolda at redhat.com Tue Mar 21 09:50:39 2017 From: mposolda at redhat.com (Marek Posolda) Date: Tue, 21 Mar 2017 14:50:39 +0100 Subject: [keycloak-dev] Improve back-button and refreshes in authenticators? In-Reply-To: <99a22b94-9d8a-aa5c-07d8-03500ed7dd6c@redhat.com> References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> <81b90531-ecd5-4a15-9c3d-2ccc9c9317b5@redhat.com> <7d195e6a-cb45-1565-5f72-2c5412def258@redhat.com> <68cc026b-0b5d-323b-4c6c-85b2414ca052@redhat.com> <7d85a3a6-d247-2234-d8eb-648ad66fc6f6@redhat.com> <99a22b94-9d8a-aa5c-07d8-03500ed7dd6c@redhat.com> Message-ID: <6517cbaa-19f6-0dc3-7b44-cc57ae3fb9ca@redhat.com> On 21/03/17 14:13, Bill Burke wrote: > > > > On 3/21/17 2:28 AM, Stian Thorgersen wrote: >> >> >> On 20 March 2017 at 18:04, Bill Burke > > wrote: >> >> >> >> On 3/20/17 3:54 AM, Stian Thorgersen wrote: >>> Let's go with the "page expired" page then, >> The current behavior is "sticky" in that any back button or page >> refresh just renders the current Authenticator in the flow. What >> sucks with that is that you have to have a cancel button on each >> page in order to restart the flow. 
>> >> >>> but we wouldn't need it for the refresh page would we? That can >>> just re-display the same thing again. >>> >> >> Depends on the implementation of flow processing. This was what >> I was trying to explain before. >> >> Remember that refresh re-executes the request to original URL. >> IF you set cache control headers, back button will also >> re-execute the request. With current impl, code is in the URL, >> so with a refresh or back button re-execution the runtime knows >> the request is a bad one, so it just finds your current place in >> the flow and re-renders it. Without a any code in the URL, the >> runtime would not know whether your POST was an old POST >> triggered by a browser button, or if you were posting to the >> correct and current Authenticator. >> >> >> So we have to have something in the URL that's clear to me now. We >> should just add a step number or something rather than the code we >> have now (the codes are expensive to create). > > To detect refresh button, you'd need to know the previous action too I > think. > > We have two things right now: > > * code - this corresponds to the client session code > * execution-code - this is the Authenticator's component id. This is > only in the URL if you are executing an action on the Authenticator, > i.e. posting username/password. > > I think we should keep generating the code + hash, but only do it when > the login state transitions, i.e. from > authentication->required-actions->code to token->logged in > ...AND...make sure that code != code-to-token code :). We should also > make sure that you cannot go backwards in a flow. i.e., once > Authentication is successful, you can't go back to the Authentication > state. Once code-to-token happens, you can go back to required > actions, etc. 
> > Just to be safe, we could store the flow code both in a cookie and in > the url which would guard against CSRF attacks, although I don't know > if it would be possible to do a CSRF attack in an authentication flow > or what it would buy the attacker. What I have right now in my cross-dc prototype branch is that "code" is used just for the action requests. Code is still single use and it's compared with the code from the login session (same as we have in master). So every action is also single use and cannot be replayed. I have the "execution" parameter used in both action and non-action requests. That's better, as refreshing the page doesn't need to change the execution in the URL, and the back button will always take you one authenticator back and show the "page expired" page if the execution doesn't match the last used execution from the login session. Marek > > > > >> >> >> >>> +1 I kinda thought about writing authenticators, but you're >>> right that should be made as simple as possible and we should >>> ideally handle all of this outside the authenticators >>> themselves. Something that would be impossible if users could >>> step backwards/forwards in the flow as you'd have to have "roll >>> back" events or something so authenticators could undo stuff if >>> needed. >> You could have a stack and push and pop on it as you went >> forwards and backwards. >> >> >> That would only work if there are no changes outside authentication >> session, but we already have things like required actions that change >> the user directly. We would also need some option on authenticators >> and required actions whether or not they can be replayed or something >> like that. Sounds complicated... > > Yeah, I don't want to implement it. I just worry we'd get something > wrong and create a security hole.
> > Bill From bburke at redhat.com Tue Mar 21 09:56:21 2017 From: bburke at redhat.com (Bill Burke) Date: Tue, 21 Mar 2017 09:56:21 -0400 Subject: [keycloak-dev] Improve back-button and refreshes in authenticators? In-Reply-To: <6517cbaa-19f6-0dc3-7b44-cc57ae3fb9ca@redhat.com> References: <714a288e-fec9-eccf-f5e9-134970daf7b8@redhat.com> <85aba1da-64bc-bba6-3272-c165de28d4e1@redhat.com> <800c31f4-ad7d-fe05-e4d3-00e3ac003260@redhat.com> <81b90531-ecd5-4a15-9c3d-2ccc9c9317b5@redhat.com> <7d195e6a-cb45-1565-5f72-2c5412def258@redhat.com> <68cc026b-0b5d-323b-4c6c-85b2414ca052@redhat.com> <7d85a3a6-d247-2234-d8eb-648ad66fc6f6@redhat.com> <99a22b94-9d8a-aa5c-07d8-03500ed7dd6c@redhat.com> <6517cbaa-19f6-0dc3-7b44-cc57ae3fb9ca@redhat.com> Message-ID: <2b309b1d-b8ad-6c95-e1da-c91aff41b439@redhat.com> On 3/21/17 9:50 AM, Marek Posolda wrote: > On 21/03/17 14:13, Bill Burke wrote: >> >> >> >> On 3/21/17 2:28 AM, Stian Thorgersen wrote: >>> >>> >>> On 20 March 2017 at 18:04, Bill Burke >> > wrote: >>> >>> >>> >>> On 3/20/17 3:54 AM, Stian Thorgersen wrote: >>>> Let's go with the "page expired" page then, >>> The current behavior is "sticky" in that any back button or page >>> refresh just renders the current Authenticator in the flow. >>> What sucks with that is that you have to have a cancel button on >>> each page in order to restart the flow. >>> >>> >>>> but we wouldn't need it for the refresh page would we? That can >>>> just re-display the same thing again. >>>> >>> >>> Depends on the implementation of flow processing. This was what >>> I was trying to explain before. >>> >>> Remember that refresh re-executes the request to original URL. >>> IF you set cache control headers, back button will also >>> re-execute the request. With current impl, code is in the URL, >>> so with a refresh or back button re-execution the runtime knows >>> the request is a bad one, so it just finds your current place in >>> the flow and re-renders it. 
Without a any code in the URL, the >>> runtime would not know whether your POST was an old POST >>> triggered by a browser button, or if you were posting to the >>> correct and current Authenticator. >>> >>> >>> So we have to have something in the URL that's clear to me now. We >>> should just add a step number or something rather than the code we >>> have now (the codes are expensive to create). >> >> To detect refresh button, you'd need to know the previous action too >> I think. >> >> We have two things right now: >> >> * code - this corresponds to the client session code >> * execution-code - this is the Authenticator's component id. This is >> only in the URL if you are executing an action on the Authenticator, >> i.e. posting username/password. >> >> I think we should keep generating the code + hash, but only do it >> when the login state transitions, i.e. from >> authentication->required-actions->code to token->logged in >> ...AND...make sure that code != code-to-token code :). We should >> also make sure that you cannot go backwards in a flow. i.e., once >> Authentication is successful, you can't go back to the Authentication >> state. Once code-to-token happens, you can go back to required >> actions, etc. >> >> Just to be safe, we could store the flow code both in a cookie and in >> the url which would guard against CSRF attacks, although I don't know >> if it would be possible to do a CSRF attack in an authentication flow >> or what it would buy the attacker. > What I have right now in my cross-dc prototype branch is, that "code" > is used just for the action requests. Code is still single use and > it's compared with the code from login session (same like we have in > master). So every action is also single use and cannot be replayed. > > I have the "execution" parameter used in both action and non-action > requests. 
That's better as refreshing page doesn't need to change > execution in the URL and back-button will always turn you one > authenticator back and it shows the "page expired" page if execution > doesn't match the last used execution from login session. Ah, cool! I hate it I didn't find that solution when implementing this stuff. Bill From bburke at redhat.com Tue Mar 21 10:58:21 2017 From: bburke at redhat.com (Bill Burke) Date: Tue, 21 Mar 2017 10:58:21 -0400 Subject: [keycloak-dev] bugs and limitations in alternative flows Message-ID: User just came across this bug, (well I haven't tested it is a bug but pretty sure it is): Inside the Browser flow we have Username Password Form 2SV - sub flow required OTP execution - alternative SMS execution - alternative Neither OTP or SMS challenge is returned and both are just skipped. Another problem is that if we fixed the above problem there is no code that handles the case where both alternatives are not configured. Finally, there is a limitation if all of this was fixed, what to do if both of these Authenticators are not configured? How is the required action formed and executed? From bburke at redhat.com Tue Mar 21 12:25:36 2017 From: bburke at redhat.com (Bill Burke) Date: Tue, 21 Mar 2017 12:25:36 -0400 Subject: [keycloak-dev] JWS sizes Message-ID: <8367424c-0f69-451a-9f24-383f123d8cbe@redhat.com> FYI, Signature for RSA-Sha-256 for JWS is 172 bytes. The Header of the JWS is minimally 20 extra bytes. Can be more depending on additional headers (kid, typ, cty). Wanted to state these numbers as they effect if we want to use a cookie to store session information instead of within a ClientSessionModel on the auth server, or HttpSession on clients/apps. Supposedly cookie storage is limited to 4k per domain, so we're immediately starting 200 bytes (5%) in the hole. 
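[As a side note for readers of the archive: the 172-byte figure matches the base64url expansion of a raw RSA signature, which is as long as the key modulus and grows to 4 * ceil(bytes / 3) characters when encoded with padding. The key sizes below are illustrative assumptions, not something stated in this thread, and this is a back-of-the-envelope sketch rather than Keycloak code:]

```java
import java.util.Base64;

public class JwsSignatureSize {
    public static void main(String[] args) {
        // A raw RSA signature is as long as the key modulus; base64url
        // with padding grows it to 4 * ceil(bytes / 3) characters.
        for (int keyBits : new int[]{1024, 2048, 4096}) {
            int sigBytes = keyBits / 8;
            int encodedLen = Base64.getUrlEncoder()
                    .encodeToString(new byte[sigBytes]).length();
            System.out.println(keyBits + "-bit key -> " + sigBytes
                    + "-byte signature -> " + encodedLen + " chars in the JWS");
        }
    }
}
```

[So 172 characters corresponds to a 1024-bit key; a 2048-bit key would cost 344 characters of a ~4k cookie budget before the header and payload are even counted.]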
Bill From mposolda at redhat.com Tue Mar 21 13:57:37 2017 From: mposolda at redhat.com (Marek Posolda) Date: Tue, 21 Mar 2017 18:57:37 +0100 Subject: [keycloak-dev] JWS sizes In-Reply-To: <8367424c-0f69-451a-9f24-383f123d8cbe@redhat.com> References: <8367424c-0f69-451a-9f24-383f123d8cbe@redhat.com> Message-ID: <4241622c-7839-d6a4-2f8e-05451cece311@redhat.com> I guess we're not going to support cookie storage anyway, but if yes (in theory) isn't it sufficient to go with Hmac-SHA256 based signature? It would be Keycloak server itself, which both creates and verifies cookie, so perhaps not a need for bigger and less performant RSA? Which reminds that we can probably save some performance points by using HMAC for refresh tokens too? Since it's the Keycloak itself which signs and verifies it and from the adapter perspective, refresh token is just an opaque string. Marek On 21/03/17 17:25, Bill Burke wrote: > FYI, > > Signature for RSA-Sha-256 for JWS is 172 bytes. The Header of the JWS > is minimally 20 extra bytes. Can be more depending on additional > headers (kid, typ, cty). Wanted to state these numbers as they effect > if we want to use a cookie to store session information instead of > within a ClientSessionModel on the auth server, or HttpSession on > clients/apps. Supposedly cookie storage is limited to 4k per domain, so > we're immediately starting 200 bytes (5%) in the hole. > > Bill > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From bburke at redhat.com Tue Mar 21 17:10:14 2017 From: bburke at redhat.com (Bill Burke) Date: Tue, 21 Mar 2017 17:10:14 -0400 Subject: [keycloak-dev] initial fine-grain admin permissions Message-ID: Here's what we want to be able to manage for fine-grain admin permissions for the 1st iteration. If you think we need more, let me know, but I want to keep this list as small as possible. 
User management

* Admin can only apply certain roles to a user
* Admin can view users of a specific group
* Admin can manage users of a specific group (creds, role mappings, etc)

Group Management

* Admin can only manage a specific group
* Admin can only apply certain roles to a group
* Admin can only manage attributes of a specific group
* Admin can control group membership (add/remove members)

Client management:

* Admin can only manage a specific client.
* Admin can manage only configuration for a specific client and not scope mappings or mappers. We have this distinction so that rogues can't expand the scope of the client beyond what it is allowed to.
* Service accounts can manage the configuration of the client by default?

From sthorger at redhat.com Wed Mar 22 03:43:40 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Wed, 22 Mar 2017 08:43:40 +0100 Subject: [keycloak-dev] JWS sizes In-Reply-To: <4241622c-7839-d6a4-2f8e-05451cece311@redhat.com> References: <8367424c-0f69-451a-9f24-383f123d8cbe@redhat.com> <4241622c-7839-d6a4-2f8e-05451cece311@redhat.com> Message-ID: It's even worse: there are cases where cookie storage is limited to 2k per domain. Some reverse proxies have that as the default apparently. On 21 March 2017 at 18:57, Marek Posolda wrote: > I guess we're not going to support cookie storage anyway, but if yes (in > theory) isn't it sufficient to go with Hmac-SHA256 based signature? It > would be Keycloak server itself, which both creates and verifies cookie, > so perhaps not a need for bigger and less performant RSA? > > Which reminds that we can probably save some performance points by using > HMAC for refresh tokens too? Since it's the Keycloak itself which signs > and verifies it and from the adapter perspective, refresh token is just > an opaque string. >
The Header of the JWS > > is minimally 20 extra bytes. Can be more depending on additional > > headers (kid, typ, cty). Wanted to state these numbers as they effect > > if we want to use a cookie to store session information instead of > > within a ClientSessionModel on the auth server, or HttpSession on > > clients/apps. Supposedly cookie storage is limited to 4k per domain, so > > we're immediately starting 200 bytes (5%) in the hole. > > > > Bill > > > > _______________________________________________ > > keycloak-dev mailing list > > keycloak-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From sthorger at redhat.com Wed Mar 22 03:49:45 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Wed, 22 Mar 2017 08:49:45 +0100 Subject: [keycloak-dev] Authorization code and refresh tokens in cross DC Message-ID: One problem with authorization codes and refresh tokens in cross DC setups is the fact that they are supposed to be one-time only. We could have an option on a realm to set if they are one-time or not (we already do for refresh tokens). If they are one-time we have a separate synchronously replicated cache to handle. Question though would it be acceptable that they are not one-time? It does imposed a certain risk as it wouldn't be possible to detect if they are leaked. For example imagine someone somehow manages to sniff all codes sent to an application (say it's a public application since that doesn't require credentials). They can then exchange the code for tokens alongside the real application. As the codes are not one-time this goes undetected. If codes on the other hand are undetected the realm application will either be the only one to exchange the code, or it will notice something is not right. 
This issue may be resolved by the introduction of Proof Key for Code Exchange [1] though. For refresh tokens, sure, it would be nice to be able to detect leakage, but one-time semantics are fairly brittle: HTTP requests are not transactional, so you can easily end up losing a response, which would render the old refresh token useless and the new one lost. [1] https://tools.ietf.org/html/rfc7636 From mposolda at redhat.com Wed Mar 22 04:12:09 2017 From: mposolda at redhat.com (Marek Posolda) Date: Wed, 22 Mar 2017 09:12:09 +0100 Subject: [keycloak-dev] JWS sizes In-Reply-To: References: <8367424c-0f69-451a-9f24-383f123d8cbe@redhat.com> <4241622c-7839-d6a4-2f8e-05451cece311@redhat.com> Message-ID: <041ea915-d224-d2c7-c4d7-67142120ecf4@redhat.com> On 22/03/17 08:43, Stian Thorgersen wrote: > It's even worse there's cases where cookie storage is limited to 2k > per domain. Some reverse proxies have that as the default apparently. > > On 21 March 2017 at 18:57, Marek Posolda > wrote: > > I guess we're not going to support cookie storage anyway, but if > yes (in > theory) isn't it sufficient to go with Hmac-SHA256 based signature? It > would be Keycloak server itself, which both creates and verifies > cookie, > so perhaps not a need for bigger and less performant RSA? > > Which reminds that we can probably save some performance points by > using > HMAC for refresh tokens too? Since it's the Keycloak itself which > signs > and verifies it and from the adapter perspective, refresh token is > just > an opaque string. > > > +1 Good point! Can you JIRA it and set fix version to 3.3 please? Created https://issues.jboss.org/browse/KEYCLOAK-4622 for refresh tokens. Also created https://issues.jboss.org/browse/KEYCLOAK-4623 for client registration tokens, which I think is a similar case. The performance here is not as critical, but still, I think the fix would be pretty easy and worth doing IMO.
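[To put numbers on the HMAC suggestion above: an HMAC-SHA-256 tag is always 32 bytes, i.e. 43 base64url characters without padding, regardless of key size, versus 172+ characters for the RSA signatures discussed earlier. A minimal sketch using the JDK's Mac API; the all-zero key is a placeholder, not how Keycloak manages realm secrets:]

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class HmacSignatureSize {
    public static void main(String[] args) throws Exception {
        // Placeholder secret: in practice this would be a realm-local key,
        // since the same server both signs and verifies the token.
        SecretKeySpec key = new SecretKeySpec(new byte[32], "HmacSHA256");
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        byte[] tag = mac.doFinal("header.payload".getBytes(StandardCharsets.UTF_8));
        String encoded = Base64.getUrlEncoder().withoutPadding().encodeToString(tag);
        // HMAC-SHA-256 always yields a 32-byte tag -> 43 base64url chars
        System.out.println(encoded.length() + " chars");
    }
}
```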
Marek > > > Marek > > On 21/03/17 17:25, Bill Burke wrote: > > FYI, > > > > Signature for RSA-Sha-256 for JWS is 172 bytes. The Header of > the JWS > > is minimally 20 extra bytes. Can be more depending on additional > > headers (kid, typ, cty). Wanted to state these numbers as they > effect > > if we want to use a cookie to store session information instead of > > within a ClientSessionModel on the auth server, or HttpSession on > > clients/apps. Supposedly cookie storage is limited to 4k per > domain, so > > we're immediately starting 200 bytes (5%) in the hole. > > > > Bill > > > > _______________________________________________ > > keycloak-dev mailing list > > keycloak-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > From sthorger at redhat.com Wed Mar 22 04:42:22 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Wed, 22 Mar 2017 09:42:22 +0100 Subject: [keycloak-dev] JWS sizes In-Reply-To: <041ea915-d224-d2c7-c4d7-67142120ecf4@redhat.com> References: <8367424c-0f69-451a-9f24-383f123d8cbe@redhat.com> <4241622c-7839-d6a4-2f8e-05451cece311@redhat.com> <041ea915-d224-d2c7-c4d7-67142120ecf4@redhat.com> Message-ID: We also need to make sure action tokens use HMAC On 22 March 2017 at 09:12, Marek Posolda wrote: > On 22/03/17 08:43, Stian Thorgersen wrote: > > It's even worse there's cases where cookie storage is limited to 2k per > domain. Some reverse proxies have that as the default apparently. > > On 21 March 2017 at 18:57, Marek Posolda wrote: > >> I guess we're not going to support cookie storage anyway, but if yes (in >> theory) isn't it sufficient to go with Hmac-SHA256 based signature? It >> would be Keycloak server itself, which both creates and verifies cookie, >> so perhaps not a need for bigger and less performant RSA? 
>> >> Which reminds that we can probably save some performance points by using >> HMAC for refresh tokens too? Since it's the Keycloak itself which signs >> and verifies it and from the adapter perspective, refresh token is just >> an opaque string. >> > > +1 Good point! Can you JIRA it and set fix version to 3.3 please? > > Created https://issues.jboss.org/browse/KEYCLOAK-4622 for refresh tokens. > > Also created https://issues.jboss.org/browse/KEYCLOAK-4623 for client > registration tokens, which I think is a similar case. The performance here > is not so critical though, but still, I think the fix would be pretty-easy > and worth to do it IMO. > > Marek > > > >> >> Marek >> >> On 21/03/17 17:25, Bill Burke wrote: >> > FYI, >> > >> > Signature for RSA-Sha-256 for JWS is 172 bytes. The Header of the JWS >> > is minimally 20 extra bytes. Can be more depending on additional >> > headers (kid, typ, cty). Wanted to state these numbers as they effect >> > if we want to use a cookie to store session information instead of >> > within a ClientSessionModel on the auth server, or HttpSession on >> > clients/apps. Supposedly cookie storage is limited to 4k per domain, so >> > we're immediately starting 200 bytes (5%) in the hole. 
>> > we're immediately starting 200 bytes (5%) in the hole.
>> > >> > Bill From mposolda at redhat.com Wed Mar 22 05:49:44 2017 From: mposolda at redhat.com (Marek Posolda) Date: Wed, 22 Mar 2017 10:49:44 +0100 Subject: [keycloak-dev] Authorization code and refresh tokens in cross DC In-Reply-To: References: Message-ID: <3e5a4a95-29f9-3222-6c8d-e6ae1442e535@redhat.com> On 22/03/17 08:49, Stian Thorgersen wrote: > One problem with authorization codes and refresh tokens in cross DC setups > is the fact that they are supposed to be one-time only. > > We could have an option on a realm to set if they are one-time or not (we > already do for refresh tokens). If they are one-time we have a separate > synchronously replicated cache to handle. > > Question though would it be acceptable that they are not one-time? It does > imposed a certain risk as it wouldn't be possible to detect if they are > leaked. The OAuth2 spec requires that the code MUST be one-time. It also mentions that the authorization server SHOULD revoke the previously issued tokens if there is an attempt to use a code a second time. Which we do, as OIDC certification had tests for both these scenarios - currently we revoke the clientSession if there is an attempt to use the code twice. If we stop supporting the revocation, the OIDC test will be just "yellow" because of "should", so we are still OIDC certified. But allowing multiple uses of the same code will make us "red" due to "must". On the other hand, refresh tokens are not required to be one-time per the specification. The code is supposed to be more vulnerable as it's in the URL and hence can occur in the browser's history. Maybe we need 2 separate caches for both?
The refresh token cache can be asynchronous by default, as refresh tokens are not mandated to be single-use. And the code cache should likely be synchronous by default. > > For example imagine someone somehow manages to sniff all codes sent to an > application (say it's a public application since that doesn't require > credentials). They can then exchange the code for tokens alongside the real > application. As the codes are not one-time this goes undetected. If codes > on the other hand are undetected the realm application will either be the > only one to exchange the code, or it will notice something is not right. > This issue may be resolved by the introduction of Proof Key for Code > Exchange [1] though. Unfortunately, Proof Key for Code Exchange requires support on the adapter side, so we can't rely on it, as we don't know whether a 3rd party OIDC/OAuth adapter supports it. For our own adapters, we can have an optimized solution for cross-DC anyway. As long as both the "code" and "refreshToken" are JWTs which contain the jvmRoute, the adapter can set a cookie with that route to ensure the load balancer routes the request to the correct node. But the 3rd party adapters are the challenge... Marek > > For refresh tokens sure it would be nice to be able to detect leakage, but > one-time is fairly brittle as HTTP requests are not transactional so you > can easily end up with losing a response which would render the old > refresh token useless and the new one lost.
> > [1] https://tools.ietf.org/html/rfc7636 > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From mposolda at redhat.com Wed Mar 22 09:37:47 2017 From: mposolda at redhat.com (Marek Posolda) Date: Wed, 22 Mar 2017 14:37:47 +0100 Subject: [keycloak-dev] initial fine-grain admin permissions In-Reply-To: References: Message-ID: On 21/03/17 22:10, Bill Burke wrote: > Here's what we want to be able to manage for fine-grain admin > permissions for the 1st iteration. If you think we need more, let me > know, but I want to keep this list as small as possible. > > User management > > * Admin can only apply certain roles to a user > * Admin can view users of a specific group > * Admin can manage users of a specific group (creds, role mappings, etc) Maybe also: * Admin can only apply roles/groups, which he himself has AFAIK currently we have issues that user with "manage-users" role can assign any role to himself and hence gain permission to everything. > > Group Management > > * Admin can only manage a specific group > * Admin can only apply certain roles to a group > * Admin can only manage attributes of a specific group > * Admin can control group membership (add/remove members) > > Client management: > > * Admin can only manage a specific client. > * Admin can manage only configuration for a specific client and not > scope mappings or mappers. We have this distinction so that rogues > can't expand the scope of the client beyond what it is allowed to. +1 Especially stuff like hardcoded-role protocol mapper is quite tricky stuff. If admin can add it to any client, he can retrieve token with permission to edit anything. Maybe just some "safe" protocol mapper implementations should be whitelisted? Or have authorization policy integration to doublecheck that user is really member of particular role and not just rely on token roles? 
Marek > * Service accounts can manage the configuration of the client by default? > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From bburke at redhat.com Wed Mar 22 10:16:49 2017 From: bburke at redhat.com (Bill Burke) Date: Wed, 22 Mar 2017 10:16:49 -0400 Subject: [keycloak-dev] Authorization code and refresh tokens in cross DC In-Reply-To: References: Message-ID: I know I suggested this as a possible solutionn, but I was thinking about this more last night. Optimizations of session replications have the following limitations each requiring additional data be available in the session. This isn't limited to OIDC. * code to token - one time usage * refresh token - one time usage as well as revocation via logout or other mechanism * client initiated linking - new feature. Needs to know that a particular client is logged in. It also verifies hash based on client session code. * logout - minimally needs list of clients to push logout event to. Maybe also client specific session information? * broker logout - need to know what broker logged in the user so that logout event can be pushed to parent IDP. Also need to know broker's session id. * step-up authentication - we'll have to know how the user was previously authenticated in the session. * no-import broker login - we will need to store external user info in a session. Is there any more? On 3/22/17 3:49 AM, Stian Thorgersen wrote: > One problem with authorization codes and refresh tokens in cross DC setups > is the fact that they are supposed to be one-time only. > > We could have an option on a realm to set if they are one-time or not (we > already do for refresh tokens). If they are one-time we have a separate > synchronously replicated cache to handle. > > Question though would it be acceptable that they are not one-time? 
It does > imposed a certain risk as it wouldn't be possible to detect if they are > leaked. > > For example imagine someone somehow manages to sniff all codes sent to an > application (say it's a public application since that doesn't require > credentials). They can then exchange the code for tokens alongside the real > application. As the codes are not one-time this goes undetected. If codes > on the other hand are undetected the realm application will either be the > only one to exchange the code, or it will notice something is not right. > This issue may be resolved by the introduction of Proof Key for Code > Exchange [1] though. > > For refresh tokens sure it would be nice to be able to detect leakage, but > one-time is fairly brittle as HTTP requests are not transactional so you > can easily end up with loosing a response which would render the old > refresh token useless and the new one lost. > > [1] https://tools.ietf.org/html/rfc7636 > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From tech at psynd.net Wed Mar 22 11:34:24 2017 From: tech at psynd.net (Tech) Date: Wed, 22 Mar 2017 16:34:24 +0100 Subject: [keycloak-dev] How to increase the log level of Keycloak Message-ID: <11bac04e-8280-62c7-a6f8-46a347276c82@psynd.net> Dear experts, we are working with Keycloak 2.5.1 and we configured a OIDC resource. While we are connecting from outside, Keycloak returns a 400 (detected from our application), but we don't see this error reflected into the console and we are not able to increase the log level of Keycloak. We checked some parameters into /standalone/configuration/standalone.xml, we changed the console-handler to "FINEST", but we still don't get any needed log. Could you please advise? Thanks! 
From bburke at redhat.com Wed Mar 22 12:15:58 2017 From: bburke at redhat.com (Bill Burke) Date: Wed, 22 Mar 2017 12:15:58 -0400 Subject: [keycloak-dev] initial fine-grain admin permissions In-Reply-To: References: Message-ID: <693aaed0-9571-8d2f-9a6b-f16a8ddde367@redhat.com> On 3/22/17 9:37 AM, Marek Posolda wrote: > On 21/03/17 22:10, Bill Burke wrote: >> Here's what we want to be able to manage for fine-grain admin >> permissions for the 1st iteration. If you think we need more, let me >> know, but I want to keep this list as small as possible. >> >> User management >> >> * Admin can only apply certain roles to a user >> * Admin can view users of a specific group >> * Admin can manage users of a specific group (creds, role >> mappings, etc) > Maybe also: > * Admin can only apply roles/groups, which he himself has > > AFAIK currently we have issues that user with "manage-users" role can > assign any role to himself and hence gain permission to everything. > This falls under the category of "Admin can only apply certain roles to a user". We're talking implementation detail here, but the way I envision it will work is each role can define policies on how it is allowed to be assigned. For example: the "manage-realm" role can only be assigned if the user has the "admin" role. Also, any policy will be defined using the Authz service. Bill From thomas.darimont at googlemail.com Wed Mar 22 13:48:04 2017 From: thomas.darimont at googlemail.com (Thomas Darimont) Date: Wed, 22 Mar 2017 18:48:04 +0100 Subject: [keycloak-dev] How to increase the log level of Keycloak In-Reply-To: <11bac04e-8280-62c7-a6f8-46a347276c82@psynd.net> References: <11bac04e-8280-62c7-a6f8-46a347276c82@psynd.net> Message-ID: Hello, a better place to ask this is the keycloak-user mailing list (http://lists.jboss.org/pipermail/keycloak-user/) as this is just for dev discussions. Do you really use the standalone.xml or rather the standalone-ha.xml? 
Anyway, for standalone.sh:

cd $KEYCLOAK_HOME
bin/jboss-cli.sh
embed-server --server-config=standalone.xml --std-out=echo
/subsystem=logging/console-handler=CONSOLE:change-log-level(level=ALL)
/subsystem=logging/root-logger=ROOT/:write-attribute(name=level,value=DEBUG)

This should give you debug output. Cheers, Thomas

2017-03-22 16:34 GMT+01:00 Tech : > Dear experts, > > we are working with Keycloak 2.5.1 and we configured an OIDC resource. > > While we are connecting from outside, Keycloak returns a 400 (detected > by our application), but we don't see this error reflected in the > console and we are not able to increase the log level of Keycloak. > > We checked some parameters in > /standalone/configuration/standalone.xml, we changed the console-handler > to "FINEST", but we still don't get the logs we need. > > Could you please advise? > > Thanks! > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev >

From sthorger at redhat.com Wed Mar 22 15:09:44 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Wed, 22 Mar 2017 20:09:44 +0100 Subject: [keycloak-dev] Keycloak 3.0.0.Final is released Message-ID:

Keycloak 3.0.0.Final is released. To download the release go to the Keycloak homepage. The full list of resolved issues is available in JIRA. Upgrading: before you upgrade, remember to back up your database and check the migration guide.

From Shankar_Bhaskaran at infosys.com Wed Mar 22 15:59:29 2017 From: Shankar_Bhaskaran at infosys.com (Shankar_Bhaskaran) Date: Wed, 22 Mar 2017 19:59:29 +0000 Subject: [keycloak-dev] Question on securing teiid with keycloak Message-ID: <588cb4688a854e9e80ebdc52e3aa1743@CHNSHLMBX33.ad.infosys.com>

Hi, I am trying to secure Teiid JDBC and OData using Keycloak. I have a security domain defined as follows, but it doesn't seem to work for direct access (JDBC from SQuirreL).
If I remove org.keycloak.adapters.jboss.KeycloakLoginModule it seem to work fine. Regards, Shankar From bburke at redhat.com Wed Mar 22 16:04:43 2017 From: bburke at redhat.com (Bill Burke) Date: Wed, 22 Mar 2017 16:04:43 -0400 Subject: [keycloak-dev] ResourceFactory SPI for AuthZ service Message-ID: <590bfdb0-299e-08b2-e824-9ad7d4cedff8@redhat.com> I want to use AuthZ service to implement fine-grain admin console permissions. To do this, I foresee that I'll have to define resources that correspond one to one to objects in the Keycloak domain model. Specifically roles, groups, and clients. There are a few problems with this approach: * Some deployments of keycloak have tens of thousands of roles and groups or hundreds of clients * Synchronizing an AuthZ resource that represents a role, group, etc. must be done. i.e. when role/group/client is removed or renamed. * I'd like for policies to be able to have the real object that the resource represents when evaluating policies I want to suggest something similar that we've done with User Storage SPI in that links to AuthZ resources are a "smart" id. "f:" + providerId + ":" + resource id When evaluating policies the engine would navigate to a provider that could load an instance of the Resource interface. This way I could represent a role or group as an AuthZ resource without creating a resource in the Authz datamodel. Am I making sense? Bill From psilva at redhat.com Wed Mar 22 17:08:55 2017 From: psilva at redhat.com (Pedro Igor Silva) Date: Wed, 22 Mar 2017 18:08:55 -0300 Subject: [keycloak-dev] ResourceFactory SPI for AuthZ service In-Reply-To: <590bfdb0-299e-08b2-e824-9ad7d4cedff8@redhat.com> References: <590bfdb0-299e-08b2-e824-9ad7d4cedff8@redhat.com> Message-ID: I see. That makes sense. It would save a lot of work and can also be useful for people looking to hook their own resources without necessarily creating them. Regards. 
Pedro Igor On Wed, Mar 22, 2017 at 5:04 PM, Bill Burke wrote: > I want to use AuthZ service to implement fine-grain admin console > permissions. To do this, I foresee that I'll have to define resources > that correspond one to one to objects in the Keycloak domain model. > Specifically roles, groups, and clients. There are a few problems with > this approach: > > * Some deployments of keycloak have tens of thousands of roles and > groups or hundreds of clients > * Synchronizing an AuthZ resource that represents a role, group, etc. > must be done. i.e. when role/group/client is removed or renamed. > * I'd like for policies to be able to have the real object that the > resource represents when evaluating policies > > I want to suggest something similar that we've done with User Storage > SPI in that links to AuthZ resources are a "smart" id. > > "f:" + providerId + ":" + resource id > > When evaluating policies the engine would navigate to a provider that > could load an instance of the Resource interface. This way I could > represent a role or group as an AuthZ resource without creating a > resource in the Authz datamodel. Am I making sense? > > Bill > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From psilva at redhat.com Wed Mar 22 17:45:09 2017 From: psilva at redhat.com (Pedro Igor Silva) Date: Wed, 22 Mar 2017 18:45:09 -0300 Subject: [keycloak-dev] ResourceFactory SPI for AuthZ service In-Reply-To: References: <590bfdb0-299e-08b2-e824-9ad7d4cedff8@redhat.com> Message-ID: Btw, are you already looking this or do you want me to write it down ? On Wed, Mar 22, 2017 at 6:08 PM, Pedro Igor Silva wrote: > I see. That makes sense. It would save a lot of work and can also be > useful for people looking to hook their own resources without necessarily > creating them. > > Regards. 
> Pedro Igor > > On Wed, Mar 22, 2017 at 5:04 PM, Bill Burke wrote: > >> I want to use AuthZ service to implement fine-grain admin console >> permissions. To do this, I foresee that I'll have to define resources >> that correspond one to one to objects in the Keycloak domain model. >> Specifically roles, groups, and clients. There are a few problems with >> this approach: >> >> * Some deployments of keycloak have tens of thousands of roles and >> groups or hundreds of clients >> * Synchronizing an AuthZ resource that represents a role, group, etc. >> must be done. i.e. when role/group/client is removed or renamed. >> * I'd like for policies to be able to have the real object that the >> resource represents when evaluating policies >> >> I want to suggest something similar that we've done with User Storage >> SPI in that links to AuthZ resources are a "smart" id. >> >> "f:" + providerId + ":" + resource id >> >> When evaluating policies the engine would navigate to a provider that >> could load an instance of the Resource interface. This way I could >> represent a role or group as an AuthZ resource without creating a >> resource in the Authz datamodel. Am I making sense? >> >> Bill >> >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> > > From bburke at redhat.com Wed Mar 22 18:52:47 2017 From: bburke at redhat.com (Bill Burke) Date: Wed, 22 Mar 2017 18:52:47 -0400 Subject: [keycloak-dev] ResourceFactory SPI for AuthZ service In-Reply-To: References: <590bfdb0-299e-08b2-e824-9ad7d4cedff8@redhat.com> Message-ID: <4fe550c8-7cf8-64ea-978e-b2001aaec2fd@redhat.com> I need it to move forward. You or me. I don't care. On 3/22/17 5:45 PM, Pedro Igor Silva wrote: > Btw, are you already looking this or do you want me to write it down ? > > On Wed, Mar 22, 2017 at 6:08 PM, Pedro Igor Silva > wrote: > > I see. That makes sense. 
It would save a lot of work and can also > be useful for people looking to hook their own resources without > necessarily creating them. > > Regards. > Pedro Igor > > On Wed, Mar 22, 2017 at 5:04 PM, Bill Burke > wrote: > > I want to use AuthZ service to implement fine-grain admin console > permissions. To do this, I foresee that I'll have to define > resources > that correspond one to one to objects in the Keycloak domain > model. > Specifically roles, groups, and clients. There are a few > problems with > this approach: > > * Some deployments of keycloak have tens of thousands of > roles and > groups or hundreds of clients > * Synchronizing an AuthZ resource that represents a role, > group, etc. > must be done. i.e. when role/group/client is removed or > renamed. > * I'd like for policies to be able to have the real object > that the > resource represents when evaluating policies > > I want to suggest something similar that we've done with User > Storage > SPI in that links to AuthZ resources are a "smart" id. > > "f:" + providerId + ":" + resource id > > When evaluating policies the engine would navigate to a > provider that > could load an instance of the Resource interface. This way I could > represent a role or group as an AuthZ resource without creating a > resource in the Authz datamodel. Am I making sense? > > Bill > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > >

From psilva at redhat.com Wed Mar 22 19:09:36 2017 From: psilva at redhat.com (Pedro Igor Silva) Date: Wed, 22 Mar 2017 20:09:36 -0300 Subject: [keycloak-dev] Elytron Adapter Message-ID:

Hello, I'm starting to finish up some tests with the new Elytron Adapters for OIDC and SAML. The idea is to push the adapter as soon as I prepare the arquillian testsuite to run against a container using Elytron. Until now, I was using integration tests to test these adapters.
But for now on, I'll be running all arquillian tests as suggested by Stian. Results are pretty good so far, there is a single test failure right now (org.keycloak.testsuite.adapter.servlet.AbstractDemoServletsAdapterTest#historyOfAccessResourceTest) which I need to figure out what is going on. We are going to have a specific profile to test Elytron adapters. This profile is configured to run a WFLY 11 SNAPSHOT. I've already discussed this topic with Stian and the idea is to create a baseline for Elytron adapters as well start preparing Keycloak for Wildfly 11 and the new security infrastructure provided by Elytron. This *does not* mean that we are replacing undertow adapters. But just preparing our adapters code base to the next WFLY release (and EAP 7). I'll probably send a PR on friday or early next week. Please, let me know if you have any questions about this work. Regards. Pedro Igor From psilva at redhat.com Wed Mar 22 19:11:03 2017 From: psilva at redhat.com (Pedro Igor Silva) Date: Wed, 22 Mar 2017 20:11:03 -0300 Subject: [keycloak-dev] ResourceFactory SPI for AuthZ service In-Reply-To: <4fe550c8-7cf8-64ea-978e-b2001aaec2fd@redhat.com> References: <590bfdb0-299e-08b2-e824-9ad7d4cedff8@redhat.com> <4fe550c8-7cf8-64ea-978e-b2001aaec2fd@redhat.com> Message-ID: I'll be busy this week and probably next week preparing a PR with Elytron adapters. Just sent an email about it. If you can wait until there ... Regards. Pedro Igor On Wed, Mar 22, 2017 at 7:52 PM, Bill Burke wrote: > I need it to move forward. You or me. I don't care. > > On 3/22/17 5:45 PM, Pedro Igor Silva wrote: > > Btw, are you already looking this or do you want me to write it down ? > > On Wed, Mar 22, 2017 at 6:08 PM, Pedro Igor Silva > wrote: > >> I see. That makes sense. It would save a lot of work and can also be >> useful for people looking to hook their own resources without necessarily >> creating them. >> >> Regards. 
>> Pedro Igor >> >> On Wed, Mar 22, 2017 at 5:04 PM, Bill Burke wrote: >> >>> I want to use AuthZ service to implement fine-grain admin console >>> permissions. To do this, I foresee that I'll have to define resources >>> that correspond one to one to objects in the Keycloak domain model. >>> Specifically roles, groups, and clients. There are a few problems with >>> this approach: >>> >>> * Some deployments of keycloak have tens of thousands of roles and >>> groups or hundreds of clients >>> * Synchronizing an AuthZ resource that represents a role, group, etc. >>> must be done. i.e. when role/group/client is removed or renamed. >>> * I'd like for policies to be able to have the real object that the >>> resource represents when evaluating policies >>> >>> I want to suggest something similar that we've done with User Storage >>> SPI in that links to AuthZ resources are a "smart" id. >>> >>> "f:" + providerId + ":" + resource id >>> >>> When evaluating policies the engine would navigate to a provider that >>> could load an instance of the Resource interface. This way I could >>> represent a role or group as an AuthZ resource without creating a >>> resource in the Authz datamodel. Am I making sense? >>> >>> Bill >>> >>> _______________________________________________ >>> keycloak-dev mailing list >>> keycloak-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> >> >> > > From dsbenghe at gmail.com Wed Mar 22 20:52:34 2017 From: dsbenghe at gmail.com (Dumitru Sbenghe) Date: Thu, 23 Mar 2017 11:52:34 +1100 Subject: [keycloak-dev] initial fine-grain admin permissions In-Reply-To: References: Message-ID: What about Identity providers * Admin can only manage a specific identity provider? On Wed, Mar 22, 2017 at 8:10 AM, Bill Burke wrote: > Here's what we want to be able to manage for fine-grain admin > permissions for the 1st iteration. If you think we need more, let me > know, but I want to keep this list as small as possible. 
> > User management > > * Admin can only apply certain roles to a user > * Admin can view users of a specific group > * Admin can manage users of a specific group (creds, role mappings, etc) > > Group Management > > * Admin can only manage a specific group > * Admin can only apply certain roles to a group > * Admin can only manage attributes of a specific group > * Admin can control group membership (add/remove members) > > Client management: > > * Admin can only manage a specific client. > * Admin can manage only configuration for a specific client and not > scope mappings or mappers. We have this distinction so that rogues > can't expand the scope of the client beyond what it is allowed to. > * Service accounts can manage the configuration of the client by default? > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From psilva at redhat.com Wed Mar 22 21:07:49 2017 From: psilva at redhat.com (Pedro Igor Silva) Date: Wed, 22 Mar 2017 22:07:49 -0300 Subject: [keycloak-dev] Elytron Adapter In-Reply-To: References: Message-ID: I'm experiencing some failures with tests in org.keycloak.testsuite.adapter.example. For instance, WildflyJSConsoleExampleAdapterTest. If i deploy js-console example everything works fine. Is this a known issue and I can safely ignore ? Even when running against master (no elytron adapter changes) the test is failing. If a known issue, which tests should I care most to make sure Elytron Adapter is functional ? Regards. Pedro Igor On Wed, Mar 22, 2017 at 8:09 PM, Pedro Igor Silva wrote: > Hello, > > I'm starting to finish up some tests with the new Elytron Adapters for > OIDC and SAML. The idea is push the adapter as soon as I prepare arquillian > testsuite to run against a container using Elytron. > > Until now, I was using integration tests to test these adapters. 
But for > now on, I'll be running all arquillian tests as suggested by Stian. Results > are pretty good so far, there is a single test failure right now > (org.keycloak.testsuite.adapter.servlet.AbstractDemoServletsAdapterTest#historyOfAccessResourceTest) > which I need to figure out what is going on. > > We are going to have a specific profile to test Elytron adapters. This > profile is configured to run a WFLY 11 SNAPSHOT. > > I've already discussed this topic with Stian and the idea is to create a > baseline for Elytron adapters as well start preparing Keycloak for Wildfly > 11 and the new security infrastructure provided by Elytron. > > This *does not* mean that we are replacing undertow adapters. But just > preparing our adapters code base to the next WFLY release (and EAP 7). > > I'll probably send a PR on friday or early next week. > > Please, let me know if you have any questions about this work. > > Regards. > Pedro Igor >

From sthorger at redhat.com Thu Mar 23 01:54:04 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Thu, 23 Mar 2017 06:54:04 +0100 Subject: [keycloak-dev] initial fine-grain admin permissions In-Reply-To: References: Message-ID:

That seems to cover the use-cases I had in mind. I'd also like to highlight what Marek pointed out around protocol mappers. That was found as one of the issues with dynamic client registration that we had to tackle. Basically it could have been used for privilege escalation through the client registration services. We solved that by introducing the client registration policies. Maybe we need different policies applied to different admins. How would this be encoded? Rather than having lists of "admin can access this, admin can access that", would it not be better to have some role or group where a member of that role/group can access a set of users, a set of roles, a set of clients, etc.?
On 21 March 2017 at 22:10, Bill Burke wrote: > Here's what we want to be able to manage for fine-grain admin > permissions for the 1st iteration. If you think we need more, let me > know, but I want to keep this list as small as possible. > > User management > > * Admin can only apply certain roles to a user > * Admin can view users of a specific group > * Admin can manage users of a specific group (creds, role mappings, etc) > > Group Management > > * Admin can only manage a specific group > * Admin can only apply certain roles to a group > * Admin can only manage attributes of a specific group > * Admin can control group membership (add/remove members) > > Client management: > > * Admin can only manage a specific client. > * Admin can manage only configuration for a specific client and not > scope mappings or mappers. We have this distinction so that rogues > can't expand the scope of the client beyond what it is allowed to. > * Service accounts can manage the configuration of the client by default? > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From sthorger at redhat.com Thu Mar 23 02:24:30 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Thu, 23 Mar 2017 07:24:30 +0100 Subject: [keycloak-dev] initial fine-grain admin permissions In-Reply-To: References: Message-ID: Is that really needed? Managing identity providers is managing realm configuration. It's also very complicated due to fact that it's importing users into the realm and the permissions those users receive can also be configured through mappers. On 23 March 2017 at 01:52, Dumitru Sbenghe wrote: > What about > > Identity providers > * Admin can only manage a specific identity provider? > > On Wed, Mar 22, 2017 at 8:10 AM, Bill Burke wrote: > > > Here's what we want to be able to manage for fine-grain admin > > permissions for the 1st iteration. 
If you think we need more, let me > > know, but I want to keep this list as small as possible. > > > > User management > > > > * Admin can only apply certain roles to a user > > * Admin can view users of a specific group > > * Admin can manage users of a specific group (creds, role mappings, > etc) > > > > Group Management > > > > * Admin can only manage a specific group > > * Admin can only apply certain roles to a group > > * Admin can only manage attributes of a specific group > > * Admin can control group membership (add/remove members) > > > > Client management: > > > > * Admin can only manage a specific client. > > * Admin can manage only configuration for a specific client and not > > scope mappings or mappers. We have this distinction so that rogues > > can't expand the scope of the client beyond what it is allowed to. > > * Service accounts can manage the configuration of the client by > default? > > > > > > _______________________________________________ > > keycloak-dev mailing list > > keycloak-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From mhajas at redhat.com Thu Mar 23 05:53:19 2017 From: mhajas at redhat.com (Michal Hajas) Date: Thu, 23 Mar 2017 10:53:19 +0100 Subject: [keycloak-dev] Elytron Adapter In-Reply-To: References: Message-ID: Hi Pedro, What failures are you experiencing? Is it the failure of test historyOfAccessResourceTest or is it something else? I remember this test was unstable, but I believe I fixed it already as I haven't experienced any failures for a while. In case it is something else, there is PR [1] for a small bug I introduced during conflict merge, so maybe it will solve your issue. Anyway, please send me some more details about this, I will investigate whether it is a testsuite issue or something else. 
Michal [1] https://github.com/keycloak/keycloak/pull/3960 On Thu, Mar 23, 2017 at 2:07 AM, Pedro Igor Silva wrote: > I'm experiencing some failures with tests in > org.keycloak.testsuite.adapter.example. For > instance, WildflyJSConsoleExampleAdapterTest. If i deploy js-console > example everything works fine. > > Is this a known issue and I can safely ignore ? Even when running against > master (no elytron adapter changes) the test is failing. > > If a known issue, which tests should I care most to make sure Elytron > Adapter is functional ? > > Regards. > Pedro Igor > > On Wed, Mar 22, 2017 at 8:09 PM, Pedro Igor Silva > wrote: > > > Hello, > > > > I'm starting to finish up some tests with the new Elytron Adapters for > > OIDC and SAML. The idea is push the adapter as soon as I prepare > arquillian > > testsuite to run against a container using Elytron. > > > > Until now, I was using integration tests to test these adapters. But for > > now on, I'll be running all arquillian tests as suggested by Stian. > Results > > are pretty good so far, there is a single test failure right now > > (org.keycloak.testsuite.adapter.servlet.AbstractDemoServletsAdapterTes > t#historyOfAccessResourceTest) > > which I need to figure out what is going on. > > > > We are going to have a specific profile to test Elytron adapters. This > > profile is configured to run a WFLY 11 SNAPSHOT. > > > > I've already discussed this topic with Stian and the idea is to create a > > baseline for Elytron adapters as well start preparing Keycloak for > Wildfly > > 11 and the new security infrastructure provided by Elytron. > > > > This *does not* mean that we are replacing undertow adapters. But just > > preparing our adapters code base to the next WFLY release (and EAP 7). > > > > I'll probably send a PR on friday or early next week. > > > > Please, let me know if you have any questions about this work. > > > > Regards. 
> > Pedro Igor > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev >

From psilva at redhat.com Thu Mar 23 06:44:47 2017 From: psilva at redhat.com (Pedro Igor Silva) Date: Thu, 23 Mar 2017 07:44:47 -0300 Subject: [keycloak-dev] Elytron Adapter In-Reply-To: References: Message-ID:

What about WildflyJSConsoleExampleAdapterTest ? Am I missing some property on the command line ? Last time I ran into some issues, and passing -Dbrowser=phantomjs did the trick for some tests using JS-based applications. Didn't test it with the mentioned test though ... Thanks.

On Thu, Mar 23, 2017 at 6:53 AM, Michal Hajas wrote: > Hi Pedro, > > What failures are you experiencing? Is it the failure of test > historyOfAccessResourceTest or is it something else? I remember this test > was unstable, but I believe I fixed it already as I haven't experienced any > failures for a while. > > In case it is something else, there is PR [1] for a small bug I introduced > during conflict merge, so maybe it will solve your issue. > > Anyway, please send me some more details about this, I will investigate > whether it is a testsuite issue or something else. > > Michal > > [1] https://github.com/keycloak/keycloak/pull/3960 > > On Thu, Mar 23, 2017 at 2:07 AM, Pedro Igor Silva > wrote: > >> I'm experiencing some failures with tests in >> org.keycloak.testsuite.adapter.example. For >> instance, WildflyJSConsoleExampleAdapterTest. If i deploy js-console >> example everything works fine. >> >> Is this a known issue and I can safely ignore ? Even when running against >> master (no elytron adapter changes) the test is failing. >> >> If a known issue, which tests should I care most to make sure Elytron >> Adapter is functional ? >> >> Regards.
>> Pedro Igor >> >> On Wed, Mar 22, 2017 at 8:09 PM, Pedro Igor Silva >> wrote: >> >> > Hello, >> > >> > I'm starting to finish up some tests with the new Elytron Adapters for >> > OIDC and SAML. The idea is push the adapter as soon as I prepare >> arquillian >> > testsuite to run against a container using Elytron. >> > >> > Until now, I was using integration tests to test these adapters. But for >> > now on, I'll be running all arquillian tests as suggested by Stian. >> Results >> > are pretty good so far, there is a single test failure right now >> > (org.keycloak.testsuite.adapter.servlet.AbstractDemoServlets >> AdapterTest#historyOfAccessResourceTest) >> > which I need to figure out what is going on. >> > >> > We are going to have a specific profile to test Elytron adapters. This >> > profile is configured to run a WFLY 11 SNAPSHOT. >> > >> > I've already discussed this topic with Stian and the idea is to create a >> > baseline for Elytron adapters as well start preparing Keycloak for >> Wildfly >> > 11 and the new security infrastructure provided by Elytron. >> > >> > This *does not* mean that we are replacing undertow adapters. But just >> > preparing our adapters code base to the next WFLY release (and EAP 7). >> > >> > I'll probably send a PR on friday or early next week. >> > >> > Please, let me know if you have any questions about this work. >> > >> > Regards. 
>> > Pedro Igor >> > >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> > > From psilva at redhat.com Thu Mar 23 07:19:39 2017 From: psilva at redhat.com (Pedro Igor Silva) Date: Thu, 23 Mar 2017 08:19:39 -0300 Subject: [keycloak-dev] ResourceFactory SPI for AuthZ service In-Reply-To: References: <590bfdb0-299e-08b2-e824-9ad7d4cedff8@redhat.com> <4fe550c8-7cf8-64ea-978e-b2001aaec2fd@redhat.com> Message-ID: Bill, I was thinking about this and would like your thoughts about an idea. In order to enforce policies to a resource, you don't necessarily need to create a protected resource in AuthZ Services. As you already noticed, when sending an authorization request (or using the AuthZ API internally) you can send the resource(s) and scope(s) you want to access. What if instead of creating resources for each thing you want to protect (groups, roles, clients, etc) we just associate them with a specific protected resource in AuthZ Service ? For instance, one of the use cases you proposed is: Admin can only apply certain roles to a user We would have a protected resource called "Special Role" which is associated with a permission/policies that must be satisfied in order to allow an administrator to grant a specific role to an user. That way, before granting a role, you would check the resource associated with the role and ask the policy evaluation engine for permissions. I think that would work if "to a user" means any user, otherwise we would need to pass additional contextual data along with an authorization request, like we previously discussed. What do you think ? On Wed, Mar 22, 2017 at 8:11 PM, Pedro Igor Silva wrote: > I'll be busy this week and probably next week preparing a PR with Elytron > adapters. Just sent an email about it. > > If you can wait until there ... > > Regards. 
> Pedro Igor > > On Wed, Mar 22, 2017 at 7:52 PM, Bill Burke wrote: > >> I need it to move forward. You or me. I don't care. >> >> On 3/22/17 5:45 PM, Pedro Igor Silva wrote: >> >> Btw, are you already looking this or do you want me to write it down ? >> >> On Wed, Mar 22, 2017 at 6:08 PM, Pedro Igor Silva >> wrote: >> >>> I see. That makes sense. It would save a lot of work and can also be >>> useful for people looking to hook their own resources without necessarily >>> creating them. >>> >>> Regards. >>> Pedro Igor >>> >>> On Wed, Mar 22, 2017 at 5:04 PM, Bill Burke wrote: >>> >>>> I want to use AuthZ service to implement fine-grain admin console >>>> permissions. To do this, I foresee that I'll have to define resources >>>> that correspond one to one to objects in the Keycloak domain model. >>>> Specifically roles, groups, and clients. There are a few problems with >>>> this approach: >>>> >>>> * Some deployments of keycloak have tens of thousands of roles and >>>> groups or hundreds of clients >>>> * Synchronizing an AuthZ resource that represents a role, group, etc. >>>> must be done. i.e. when role/group/client is removed or renamed. >>>> * I'd like for policies to be able to have the real object that the >>>> resource represents when evaluating policies >>>> >>>> I want to suggest something similar that we've done with User Storage >>>> SPI in that links to AuthZ resources are a "smart" id. >>>> >>>> "f:" + providerId + ":" + resource id >>>> >>>> When evaluating policies the engine would navigate to a provider that >>>> could load an instance of the Resource interface. This way I could >>>> represent a role or group as an AuthZ resource without creating a >>>> resource in the Authz datamodel. Am I making sense? 
>>>> >>>> Bill >>>> >>>> _______________________________________________ >>>> keycloak-dev mailing list >>>> keycloak-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>>> >>> >>> >> >> >

From thomas_connolly at yahoo.com Thu Mar 23 08:28:06 2017 From: thomas_connolly at yahoo.com (Thomas Connolly) Date: Thu, 23 Mar 2017 12:28:06 +0000 (UTC) Subject: [keycloak-dev] New Account Management Console and Account REST api References: <965131964.2044922.1490272086588.ref@mail.yahoo.com> Message-ID: <965131964.2044922.1490272086588@mail.yahoo.com>

Hi All, Could this UI and API be put on a separate port please? Regards, Tom.

----------------------------------- Message: 1 Date: Fri, 17 Mar 2017 08:25:47 -0700 From: Tair Sabirgaliev Subject: Re: [keycloak-dev] New Account Management Console and Account REST api To: Stan Silvert , stian at redhat.com Cc: keycloak-dev Message-ID: Content-Type: text/plain; charset=UTF-8

+1 for Angular2, this will make maintenance and customisation easier. The framework is becoming very popular and close to the "JavaEE mindset".

On 17 March 2017 at 18:19:23, Stan Silvert (ssilvert at redhat.com) wrote: On 3/17/2017 8:09 AM, Stian Thorgersen wrote: > Had another idea. We could quite easily make it possible to configure > the "account management url" for a realm. That would let folks > redirect to an external account management console if they want to > completely override it. That would also mean that our own account management console could be served from anywhere or even installed locally on the client machine. > > On 17 March 2017 at 13:08, Stian Thorgersen > wrote: > > I'm going to call it "YetAnotherJsFramework" ;) > > On 17 March 2017 at 12:54, Stan Silvert > wrote: > > On 3/17/2017 5:47 AM, Stian Thorgersen wrote: > > As we've discussed a few times now the plan is to do a brand > new account > > management console. Instead of old school forms it will be > all modern using > > HTML5, AngularJS and REST endpoints.
> One thing. That should be "Angular", not "AngularJS". Just to educate everyone, here is what's going on in Angular-land:
>
> AngularJS is the old framework we used for the admin console. Angular is the new framework we will use for the account management console.
>
> Most of you know the new framework as Angular2 or ng-2, but the powers that be want to just call it "Angular". This framework is completely rewritten and really has no relation to AngularJS, except they both come from Google and both have "Angular" in the name.
>
> To avoid confusion, I'm going to call it "Angular2" for the foreseeable future.
>
> > The JIRA for this work is:
> > https://issues.jboss.org/browse/KEYCLOAK-1250
> >
> > We were hoping to get some help from the professional UXP folks for this, but it looks like that may take some time. In the meantime the plan is to base it on the following template:
> >
> > https://rawgit.com/andresgalante/kc-user/master/layout-alt-fixed.html#
> >
> > Also, we'll try to use some newer things from PatternFly patterns to improve the screens.
> >
> > First pass will have the same functionality and behavior as the old account management console. Second pass will be to improve the usability (pages like linking, sessions and history are not very nice).
> >
> > We will deprecate the old FreeMarker/forms way of doing things, but keep it around so it doesn't break what people are already doing. This can be removed in the future (probably RHSSO 8.0?).
> >
> > We'll also need to provide full rest endpoints for the account management console. I'll work on that, while Stan works on the UI.
> >
> > As the account management console will be a pure HTML5 and JS app anyone can completely replace it with a theme. They can also customize it a lot. We'll also need to make sure it's easy to add additional pages/sections.
> > Rather than just add to AccountService I'm going to rename that to DeprecatedAccountFormService, remove all REST from there, and add a new AccountService that only does REST. All features available through forms at the moment will be available as REST API, with the exception of account linking, which will be done through Bill's work that was introduced in 3.0 that allows applications to initiate the account linking.
> > _______________________________________________
> > keycloak-dev mailing list
> > keycloak-dev at lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/keycloak-dev
>
> _______________________________________________
> keycloak-dev mailing list
> keycloak-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/keycloak-dev

_______________________________________________
keycloak-dev mailing list
keycloak-dev at lists.jboss.org
https://lists.jboss.org/mailman/listinfo/keycloak-dev

From bburke at redhat.com Thu Mar 23 10:02:27 2017
From: bburke at redhat.com (Bill Burke)
Date: Thu, 23 Mar 2017 10:02:27 -0400
Subject: [keycloak-dev] initial fine-grain admin permissions
In-Reply-To: References:
Message-ID: <992e7b9a-2a27-dbd8-c06b-add1646c46ca@redhat.com>

I'm just trying to define the initial operations we want to cover and how fine-grained we might want to go. When I wrote "Admin can" below, I do not discuss how the admin is defined. I'll get into what the default policies will be later on as I learn the limitations of the Authz service. I want to implement as you describe: not defining permissions PER ADMIN, but based on group membership and role mappings of the admin user.

On 3/23/17 1:54 AM, Stian Thorgersen wrote:
> That seems to cover the use-cases I had in mind. I'd also like to highlight what Marek pointed out around protocol mappers. That was found as one of the issues with dynamic client registration that we had to tackle.
Basically it could have been used for privilege escalation through the client registration services. We solved that by introducing the client registration policies. Maybe we need different policies applied to different admins.
>
> How would this be encoded? Rather than having lists of "admin can access this, admin can access that", would it not be better to have some role or group where a member of that role/group can access a set of users, a set of roles, a set of clients, etc.?
>
> On 21 March 2017 at 22:10, Bill Burke wrote:
>
> Here's what we want to be able to manage for fine-grain admin permissions for the 1st iteration. If you think we need more, let me know, but I want to keep this list as small as possible.
>
> User management
>
> * Admin can only apply certain roles to a user
> * Admin can view users of a specific group
> * Admin can manage users of a specific group (creds, role mappings, etc)
>
> Group Management
>
> * Admin can only manage a specific group
> * Admin can only apply certain roles to a group
> * Admin can only manage attributes of a specific group
> * Admin can control group membership (add/remove members)
>
> Client management:
>
> * Admin can only manage a specific client.
> * Admin can manage only configuration for a specific client and not scope mappings or mappers. We have this distinction so that rogues can't expand the scope of the client beyond what it is allowed to.
> * Service accounts can manage the configuration of the client by default?
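Stian's question above (encoding admin rights as membership in a role/group that maps to a set of users, roles, and clients, rather than per-admin permission lists) can be made concrete with a small sketch. This is purely illustrative: the class and method names are made up and are not Keycloak API.

```java
import java.util.Set;

// Illustrative only: a hypothetical encoding of the idea that an admin's
// rights come from a role/group mapping to a set of grantable roles and a
// set of manageable groups, instead of a permission list stored per admin.
public class AdminPermissionSketch {

    private final Set<String> grantableRoles; // roles this admin may apply to users
    private final Set<String> managedGroups;  // groups whose members this admin may manage

    public AdminPermissionSketch(Set<String> grantableRoles, Set<String> managedGroups) {
        this.grantableRoles = grantableRoles;
        this.managedGroups = managedGroups;
    }

    // "Admin can only apply certain roles to a user"
    public boolean canGrantRole(String role) {
        return grantableRoles.contains(role);
    }

    // "Admin can view/manage users of a specific group"
    public boolean canManageUsersOf(String group) {
        return managedGroups.contains(group);
    }
}
```

With this shape, a hypothetical "sales-admin" group would map to one shared permission object, so adding an admin is just adding a group member rather than copying permission lists.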
>
> _______________________________________________
> keycloak-dev mailing list
> keycloak-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/keycloak-dev

From mhajas at redhat.com Thu Mar 23 10:17:12 2017
From: mhajas at redhat.com (Michal Hajas)
Date: Thu, 23 Mar 2017 15:17:12 +0100
Subject: [keycloak-dev] Elytron Adapter
In-Reply-To: References: Message-ID:

I am not sure whether I understand you; WildflyJSConsoleExampleAdapterTest is a whole class, not one specific test method.

If you want to run WildflyJSConsoleExampleAdapterTest you need to run these:

1. navigate to the integration-arquillian testsuite and run: mvn clean install -Papp-server-wildfly [-Pauth-server-wildfly] -DskipTests
2. navigate to tests/other and run: mvn clean install -Papp-server-wildfly [-Pauth-server-wildfly] -Dtest=*JSConsoleExampleAdapterTest [-Dbrowser=firefox | phantomjs | ...]

Hope I didn't forget anything.

At the moment, I am not sure whether the adapter tests are working correctly in upstream; I haven't run them for a while, but as soon as I have time, after finishing 7.1 stuff, I will look at the adapter tests in the community.

On Thu, Mar 23, 2017 at 11:44 AM, Pedro Igor Silva wrote:
> What about WildflyJSConsoleExampleAdapterTest? Am I missing some property on the command line?
>
> Last time I ran into some issues and passing -Dbrowser=phantomjs did the trick for some tests using JS-based applications. Didn't test it with the mentioned test though ...
>
> Thanks.
>
> On Thu, Mar 23, 2017 at 6:53 AM, Michal Hajas wrote:
>
>> Hi Pedro,
>>
>> What failures are you experiencing? Is it the failure of the test historyOfAccessResourceTest or is it something else? I remember this test was unstable, but I believe I fixed it already as I haven't experienced any failures for a while.
>>
>> In case it is something else, there is PR [1] for a small bug I introduced during conflict merge, so maybe it will solve your issue.
>> Anyway, please send me some more details about this; I will investigate whether it is a testsuite issue or something else.
>>
>> Michal
>>
>> [1] https://github.com/keycloak/keycloak/pull/3960
>>
>> On Thu, Mar 23, 2017 at 2:07 AM, Pedro Igor Silva wrote:
>>
>>> I'm experiencing some failures with tests in org.keycloak.testsuite.adapter.example. For instance, WildflyJSConsoleExampleAdapterTest. If I deploy the js-console example, everything works fine.
>>>
>>> Is this a known issue that I can safely ignore? Even when running against master (no elytron adapter changes) the test is failing.
>>>
>>> If it is a known issue, which tests should I care about most to make sure the Elytron Adapter is functional?
>>>
>>> Regards.
>>> Pedro Igor
>>>
>>> On Wed, Mar 22, 2017 at 8:09 PM, Pedro Igor Silva wrote:
>>>
>>> > Hello,
>>> >
>>> > I'm starting to finish up some tests with the new Elytron Adapters for OIDC and SAML. The idea is to push the adapter as soon as I prepare the arquillian testsuite to run against a container using Elytron.
>>> >
>>> > Until now, I was using integration tests to test these adapters. But from now on, I'll be running all arquillian tests as suggested by Stian. Results are pretty good so far; there is a single test failure right now (org.keycloak.testsuite.adapter.servlet.AbstractDemoServletsAdapterTest#historyOfAccessResourceTest) which I need to figure out what is going on.
>>> >
>>> > We are going to have a specific profile to test Elytron adapters. This profile is configured to run a WFLY 11 SNAPSHOT.
>>> >
>>> > I've already discussed this topic with Stian and the idea is to create a baseline for Elytron adapters as well as start preparing Keycloak for Wildfly 11 and the new security infrastructure provided by Elytron.
>>> >
>>> > This *does not* mean that we are replacing undertow adapters.
But just preparing our adapter code base for the next WFLY release (and EAP 7).
>>> >
>>> > I'll probably send a PR on Friday or early next week.
>>> >
>>> > Please, let me know if you have any questions about this work.
>>> >
>>> > Regards.
>>> > Pedro Igor
>>> >
>>> _______________________________________________
>>> keycloak-dev mailing list
>>> keycloak-dev at lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev

From mposolda at redhat.com Thu Mar 23 11:35:29 2017
From: mposolda at redhat.com (Marek Posolda)
Date: Thu, 23 Mar 2017 16:35:29 +0100
Subject: [keycloak-dev] User-managed permissions
Message-ID: <57998e07-a098-d20f-ae2e-eb0671fce980@redhat.com>

I was wondering about the use-case where users manage permissions to their own objects. It seems that proper support for this can be very challenging because of the amount of DB space.

For example: I have 1000 documents and I have 1000 users. I want to be able to define fine-grained permissions, for example that user "john" is able to see document-1 and document-2, but not document-3, etc. So I can end up with up to:

count of users * number of documents = 1000 users * 1000 documents = 1000000 permission records in the DB

When authorization scopes (actions) come into play and I want to specify that "john" is able just to "read" document-1 while "alice" is able to "read", "update" and "comment" on document-1, I may end up with 5 million objects in the DB (assuming I have 5 actions).

We can do something like dividing documents into "groups" and granting the permission just per group. But Google, for example, allows grouping things (you can put more photos into one photo album and share the whole album with user "john"), but also defining fine-grained permissions (sharing just a single photo with user "john").

My estimate is that using JPA to store such data is likely not
And I bet that Google is really using something different :-) Maybe we need to restore Mongo or some other similar DB type for manage this stuff? Or is it something where the "Nearby policy evaluation" can help and permissions data would rather need to be saved by the application itself? Marek From sts at ono.at Thu Mar 23 11:39:34 2017 From: sts at ono.at (Stefan Schlesinger) Date: Thu, 23 Mar 2017 16:39:34 +0100 Subject: [keycloak-dev] Keycloak High-Availability / Database In-Reply-To: References: Message-ID: <45A5F86B-53EF-4374-B82D-884202BFB64F@ono.at> Any thoughts? Shall I rather create a bug for this issue? Best, Stefan. > On 15 Mar 2017, at 11:22, Stefan Schlesinger wrote: > > Hello Folks! > > I tried to setup a Keycloak HA cluster with Percona XtraDB/Galera as HA database backend and it looks like its not currently supported, at least by the database schema Keycloak uses and the default Galera settings. Galera requires or recommends (performance) all table schemas to be defined with a primary key field and when I tried to add a role to a group I got the following error: > > ERROR [io.undertow.request] (default task-14) UT005023: Exception handling request to /auth/admin/realms/vault/groups/c2a04652-a322-1111-18ea-b2145bab2222/role-mappings/realm: org.jboss.resteasy.spi.UnhandledException: org.keycloak.models.ModelException: org.hibernate.exception.GenericJDBCException: could not execute statement > ... > Caused by: org.keycloak.models.ModelException: org.hibernate.exception.GenericJDBCException: could not execute statement > ... > Caused by: org.hibernate.exception.GenericJDBCException: could not execute statement > ... 
> Caused by: java.sql.SQLException: Percona-XtraDB-Cluster prohibits use of DML command on a table (keycloak.ADMIN_EVENT_ENTITY) without an explicit primary key with pxc_strict_mode = ENFORCING or MASTER
>
> Looking through the database schema, the following tables don't have a primary key defined:
>
> ADMIN_EVENT_ENTITY
> COMPOSITE_ROLE*
> CREDENTIAL_ATTRIBUTE
> DATABASECHANGELOG
> FED_CREDENTIAL_ATTRIBUTE
> REALM_ENABLED_EVENT_TYPES*
> REALM_EVENTS_LISTENERS*
> REALM_SUPPORTED_LOCALES*
> REDIRECT_URIS*
> WEB_ORIGINS*
>
> Tables marked with an asterisk don't even have an ID field; the rest of the tables do have an ID field (with a UUID 'primary key'), which I think could easily be declared as the primary key.
>
> Looking at the Percona documentation[1][2], the limitation to only support tables with primary keys was lifted in more recent versions with the introduction of wsrep_certify_nonPK. However, it's still generally a best practice to have explicit PKs. If you don't define a PK, Galera will use an implicit hidden 6-byte PK for InnoDB tables, taking up space that you can't use for querying. InnoDB is very much optimized towards PK lookups.
>
> Also, I'd need to set pxc_strict_mode from ENFORCING to PERMISSIVE, but that might have other side effects, as it relaxes other validations as well. Any experiences?
>
> Also, would it be possible to add primary keys in a bugfix version?
>
> Best,
>
> Stefan.
>
> [1] - https://www.percona.com/doc/percona-xtradb-cluster/5.7/features/pxc-strict-mode.html#tables-without-primary-keys
> [2] - https://www.percona.com/doc/percona-xtradb-cluster/5.7/wsrep-system-index.html#wsrep_certify_nonPK

From ssilvert at redhat.com Thu Mar 23 13:00:56 2017
From: ssilvert at redhat.com (Stan Silvert)
Date: Thu, 23 Mar 2017 13:00:56 -0400
Subject: [keycloak-dev] New Account Management Console and Account REST api
In-Reply-To: <965131964.2044922.1490272086588@mail.yahoo.com>
References: <965131964.2044922.1490272086588.ref@mail.yahoo.com> <965131964.2044922.1490272086588@mail.yahoo.com>
Message-ID:

On 3/23/2017 8:28 AM, Thomas Connolly wrote:
> Hi All
> Could this UI and API be put on a separate port please?
It's still very early in development, but you will probably have the option of putting it on a different port and even a different server. Of course, the default will be to still run it as you do today.

But I'm interested in your use case. Why do you need it on a different port?
> Regards, Tom.
> -----------------------------------
>
> Message: 1
> Date: Fri, 17 Mar 2017 08:25:47 -0700
> From: Tair Sabirgaliev
> Subject: Re: [keycloak-dev] New Account Management Console and Account REST api
> To: Stan Silvert , stian at redhat.com
> Cc: keycloak-dev
> Content-Type: text/plain; charset=UTF-8
>
> +1 for Angular2, this will make maintenance and customisation easier. The framework has become very popular and is close to the "Java EE mindset".
>
> On 17 March 2017 at 18:19:23, Stan Silvert (ssilvert at redhat.com) wrote:
>
> On 3/17/2017 8:09 AM, Stian Thorgersen wrote:
>> Had another idea. We could quite easily make it possible to configure the "account management url" for a realm. That would let folks redirect to an external account management console if they want to completely override it.
> That would also mean that our own account management console could be served from anywhere or even installed locally on the client machine.
>> On 17 March 2017 at 13:08, Stian Thorgersen wrote:
>>
>> I'm going to call it "YetAnotherJsFramework" ;)
>>
>> On 17 March 2017 at 12:54, Stan Silvert wrote:
>>
>> On 3/17/2017 5:47 AM, Stian Thorgersen wrote:
>>> As we've discussed a few times now the plan is to do a brand new account management console. Instead of old school forms it will be all modern using HTML5, AngularJS and REST endpoints.
>> One thing. That should be "Angular", not "AngularJS". Just to educate everyone, here is what's going on in Angular-land:
>>
>> AngularJS is the old framework we used for the admin console. Angular is the new framework we will use for the account management console.
>>
>> Most of you know the new framework as Angular2 or ng-2, but the powers that be want to just call it "Angular". This framework is completely rewritten and really has no relation to AngularJS, except they both come from Google and both have "Angular" in the name.
>>
>> To avoid confusion, I'm going to call it "Angular2" for the foreseeable future.
>>> The JIRA for this work is:
>>> https://issues.jboss.org/browse/KEYCLOAK-1250
>>
>>> We were hoping to get some help from the professional UXP folks for this, but it looks like that may take some time. In the meantime the plan is to base it on the following template:
>>>
>>> https://rawgit.com/andresgalante/kc-user/master/layout-alt-fixed.html#
>>
>>> Also, we'll try to use some newer things from PatternFly patterns to improve the screens.
>>>
>>> First pass will have the same functionality and behavior as the old account management console. Second pass will be to improve the usability (pages like linking, sessions and history are not very nice).
>>>
>>> We will deprecate the old FreeMarker/forms way of doing things, but keep it around so it doesn't break what people are already doing. This can be removed in the future (probably RHSSO 8.0?).
>>>
>>> We'll also need to provide full rest endpoints for the account management console. I'll work on that, while Stan works on the UI.
>>>
>>> As the account management console will be a pure HTML5 and JS app anyone can completely replace it with a theme. They can also customize it a lot. We'll also need to make sure it's easy to add additional pages/sections.
>>> Rather than just add to AccountService I'm going to rename that to DeprecatedAccountFormService, remove all REST from there, and add a new AccountService that only does REST. All features available through forms at the moment will be available as REST API, with the exception of account linking, which will be done through Bill's work that was introduced in 3.0 that allows applications to initiate the account linking.
>>> _______________________________________________
>>> keycloak-dev mailing list
>>> keycloak-dev at lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/keycloak-dev
>>
>> _______________________________________________
>> keycloak-dev mailing list
>> keycloak-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/keycloak-dev
>
> _______________________________________________
> keycloak-dev mailing list
> keycloak-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/keycloak-dev

From hmlnarik at redhat.com Thu Mar 23 13:11:53 2017
From: hmlnarik at redhat.com (Hynek Mlnarik)
Date: Thu, 23 Mar 2017 18:11:53 +0100
Subject: [keycloak-dev] Keycloak High-Availability / Database
In-Reply-To: <45A5F86B-53EF-4374-B82D-884202BFB64F@ono.at>
References: <45A5F86B-53EF-4374-B82D-884202BFB64F@ono.at>
Message-ID: <0a512c33-fb4c-bce5-fd43-521a0c3381f0@redhat.com>

Could you please file a JIRA issue?
This seems reasonable to fix even though the standard InnoDB engine for MariaDB/MySQL works well.

Thanks

--Hynek

On 03/23/2017 04:39 PM, Stefan Schlesinger wrote:
> Any thoughts? Shall I rather create a bug for this issue?
>
> Best, Stefan.
>
>> On 15 Mar 2017, at 11:22, Stefan Schlesinger wrote:
>>
>> Hello Folks!
>>
>> I tried to set up a Keycloak HA cluster with Percona XtraDB/Galera as the HA database backend and it looks like it's not currently supported, at least not by the database schema Keycloak uses and the default Galera settings. Galera requires (or recommends, for performance) that all tables be defined with a primary key, and when I tried to add a role to a group I got the following error:
>>
>> ERROR [io.undertow.request] (default task-14) UT005023: Exception handling request to /auth/admin/realms/vault/groups/c2a04652-a322-1111-18ea-b2145bab2222/role-mappings/realm: org.jboss.resteasy.spi.UnhandledException: org.keycloak.models.ModelException: org.hibernate.exception.GenericJDBCException: could not execute statement
>> ...
>> Caused by: org.keycloak.models.ModelException: org.hibernate.exception.GenericJDBCException: could not execute statement
>> ...
>> Caused by: org.hibernate.exception.GenericJDBCException: could not execute statement
>> ...
>> Caused by: java.sql.SQLException: Percona-XtraDB-Cluster prohibits use of DML command on a table (keycloak.ADMIN_EVENT_ENTITY) without an explicit primary key with pxc_strict_mode = ENFORCING or MASTER
>>
>> Looking through the database schema, the following tables don't have a primary key defined:
>>
>> ADMIN_EVENT_ENTITY
>> COMPOSITE_ROLE*
>> CREDENTIAL_ATTRIBUTE
>> DATABASECHANGELOG
>> FED_CREDENTIAL_ATTRIBUTE
>> REALM_ENABLED_EVENT_TYPES*
>> REALM_EVENTS_LISTENERS*
>> REALM_SUPPORTED_LOCALES*
>> REDIRECT_URIS*
>> WEB_ORIGINS*
>>
>> Tables marked with an asterisk don't even have an ID field; the rest of the tables do have an ID field (with a UUID 'primary key'), which I think could easily be declared as the primary key.
>>
>> Looking at the Percona documentation[1][2], the limitation to only support tables with primary keys was lifted in more recent versions with the introduction of wsrep_certify_nonPK. However, it's still generally a best practice to have explicit PKs. If you don't define a PK, Galera will use an implicit hidden 6-byte PK for InnoDB tables, taking up space that you can't use for querying. InnoDB is very much optimized towards PK lookups.
>>
>> Also, I'd need to set pxc_strict_mode from ENFORCING to PERMISSIVE, but that might have other side effects, as it relaxes other validations as well. Any experiences?
>>
>> Also, would it be possible to add primary keys in a bugfix version?
>>
>> Best,
>>
>> Stefan.
>>
>> [1] - https://www.percona.com/doc/percona-xtradb-cluster/5.7/features/pxc-strict-mode.html#tables-without-primary-keys
>> [2] - https://www.percona.com/doc/percona-xtradb-cluster/5.7/wsrep-system-index.html#wsrep_certify_nonPK
>
> _______________________________________________
> keycloak-dev mailing list
> keycloak-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/keycloak-dev

From bburke at redhat.com Thu Mar 23 14:22:20 2017
From: bburke at redhat.com (Bill Burke)
Date: Thu, 23 Mar 2017 14:22:20 -0400
Subject: [keycloak-dev] User-managed permissions
In-Reply-To: <57998e07-a098-d20f-ae2e-eb0671fce980@redhat.com>
References: <57998e07-a098-d20f-ae2e-eb0671fce980@redhat.com>
Message-ID: <2258b00a-baa6-7f1d-fc9d-a6910b282b53@redhat.com>

Are you sure this is too hard for an RDBMS to manage? Prove first that an RDBMS can't handle it.

On 3/23/17 11:35 AM, Marek Posolda wrote:
> I was wondering about the use-case where users manage permissions to their own objects. It seems that proper support for this can be very challenging because of the amount of DB space.
>
> For example: I have 1000 documents and I have 1000 users. I want to be able to define fine-grained permissions and be able to define that user "john" is able to see document-1 and document-2, but not document-3, etc. So I can end up with up to:
>
> count of users * number of documents = 1000 users * 1000 documents = 1000000 permission records in the DB
>
> When authorization scopes (actions) come into play and I want to specify that "john" is able just to "read" document-1 while "alice" is able to "read", "update" and "comment" on document-1, I may end up with 5 million objects in the DB (assuming I have 5 actions).
>
> We can do something like dividing documents into "groups" and granting the permission just per group.
But Google, for example, allows grouping things (you can put more photos into one photo album and share the whole album with user "john"), but also defining fine-grained permissions (sharing just a single photo with user "john").
>
> My estimate is that using JPA to store such data is likely not feasible. And I bet that Google is really using something different :-)
>
> Maybe we need to restore Mongo or some other similar DB type to manage this stuff? Or is it something where the "Nearby policy evaluation" can help, and the permissions data would rather need to be saved by the application itself?
>
> Marek
>
> _______________________________________________
> keycloak-dev mailing list
> keycloak-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/keycloak-dev

From bburke at redhat.com Thu Mar 23 14:27:41 2017
From: bburke at redhat.com (Bill Burke)
Date: Thu, 23 Mar 2017 14:27:41 -0400
Subject: [keycloak-dev] Keycloak High-Availability / Database
In-Reply-To: <45A5F86B-53EF-4374-B82D-884202BFB64F@ono.at>
References: <45A5F86B-53EF-4374-B82D-884202BFB64F@ono.at>
Message-ID:

These are join tables. Don't most RDBMS schemas have join tables? I don't see us putting effort into this. The community will have to do it.

On 3/23/17 11:39 AM, Stefan Schlesinger wrote:
> Any thoughts? Shall I rather create a bug for this issue?
>
> Best, Stefan.
>
>> On 15 Mar 2017, at 11:22, Stefan Schlesinger wrote:
>>
>> Hello Folks!
>>
>> I tried to set up a Keycloak HA cluster with Percona XtraDB/Galera as the HA database backend and it looks like it's not currently supported, at least not by the database schema Keycloak uses and the default Galera settings.
Galera requires (or recommends, for performance) that all tables be defined with a primary key, and when I tried to add a role to a group I got the following error:
>>
>> ERROR [io.undertow.request] (default task-14) UT005023: Exception handling request to /auth/admin/realms/vault/groups/c2a04652-a322-1111-18ea-b2145bab2222/role-mappings/realm: org.jboss.resteasy.spi.UnhandledException: org.keycloak.models.ModelException: org.hibernate.exception.GenericJDBCException: could not execute statement
>> ...
>> Caused by: org.keycloak.models.ModelException: org.hibernate.exception.GenericJDBCException: could not execute statement
>> ...
>> Caused by: org.hibernate.exception.GenericJDBCException: could not execute statement
>> ...
>> Caused by: java.sql.SQLException: Percona-XtraDB-Cluster prohibits use of DML command on a table (keycloak.ADMIN_EVENT_ENTITY) without an explicit primary key with pxc_strict_mode = ENFORCING or MASTER
>>
>> Looking through the database schema, the following tables don't have a primary key defined:
>>
>> ADMIN_EVENT_ENTITY
>> COMPOSITE_ROLE*
>> CREDENTIAL_ATTRIBUTE
>> DATABASECHANGELOG
>> FED_CREDENTIAL_ATTRIBUTE
>> REALM_ENABLED_EVENT_TYPES*
>> REALM_EVENTS_LISTENERS*
>> REALM_SUPPORTED_LOCALES*
>> REDIRECT_URIS*
>> WEB_ORIGINS*
>>
>> Tables marked with an asterisk don't even have an ID field; the rest of the tables do have an ID field (with a UUID 'primary key'), which I think could easily be declared as the primary key.
>>
>> Looking at the Percona documentation[1][2], the limitation to only support tables with primary keys was lifted in more recent versions with the introduction of wsrep_certify_nonPK. However, it's still generally a best practice to have explicit PKs. If you don't define a PK, Galera will use an implicit hidden 6-byte PK for InnoDB tables, taking up space that you can't use for querying. InnoDB is very much optimized towards PK lookups.
>>
>> Also, I'd need to set pxc_strict_mode from ENFORCING to PERMISSIVE, but that might have other side effects, as it relaxes other validations as well. Any experiences?
>>
>> Also, would it be possible to add primary keys in a bugfix version?
>>
>> Best,
>>
>> Stefan.
>>
>> [1] - https://www.percona.com/doc/percona-xtradb-cluster/5.7/features/pxc-strict-mode.html#tables-without-primary-keys
>> [2] - https://www.percona.com/doc/percona-xtradb-cluster/5.7/wsrep-system-index.html#wsrep_certify_nonPK
>
> _______________________________________________
> keycloak-dev mailing list
> keycloak-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/keycloak-dev

From psilva at redhat.com Thu Mar 23 14:29:01 2017
From: psilva at redhat.com (Pedro Igor Silva)
Date: Thu, 23 Mar 2017 15:29:01 -0300
Subject: [keycloak-dev] Elytron Adapter
In-Reply-To: References: Message-ID:

Yeah, sure, all the methods on that class were failing. Will try using your commands. Didn't try with the firefox/phantomjs parameters (although I am using phantomjs to run the photoz authz tests).

Thanks.

On Thu, Mar 23, 2017 at 11:17 AM, Michal Hajas wrote:
> I am not sure whether I understand you; WildflyJSConsoleExampleAdapterTest is a whole class, not one specific test method.
>
> If you want to run WildflyJSConsoleExampleAdapterTest you need to run these:
>
> 1. navigate to the integration-arquillian testsuite and run: mvn clean install -Papp-server-wildfly [-Pauth-server-wildfly] -DskipTests
> 2. navigate to tests/other and run: mvn clean install -Papp-server-wildfly [-Pauth-server-wildfly] -Dtest=*JSConsoleExampleAdapterTest [-Dbrowser=firefox | phantomjs | ...]
>
> Hope I didn't forget anything.
>
> At the moment, I am not sure whether the adapter tests are working correctly in upstream; I haven't run them for a while, but as soon as I have time, after finishing 7.1 stuff, I will look at the adapter tests in the community.
>
> On Thu, Mar 23, 2017 at 11:44 AM, Pedro Igor Silva wrote:
> > What about WildflyJSConsoleExampleAdapterTest? Am I missing some property on the command line?
> >
> > Last time I ran into some issues and passing -Dbrowser=phantomjs did the trick for some tests using JS-based applications. Didn't test it with the mentioned test though ...
> >
> > Thanks.
> >
> > On Thu, Mar 23, 2017 at 6:53 AM, Michal Hajas wrote:
> >
> >> Hi Pedro,
> >>
> >> What failures are you experiencing? Is it the failure of the test historyOfAccessResourceTest or is it something else? I remember this test was unstable, but I believe I fixed it already as I haven't experienced any failures for a while.
> >>
> >> In case it is something else, there is PR [1] for a small bug I introduced during conflict merge, so maybe it will solve your issue.
> >>
> >> Anyway, please send me some more details about this; I will investigate whether it is a testsuite issue or something else.
> >>
> >> Michal
> >>
> >> [1] https://github.com/keycloak/keycloak/pull/3960
> >>
> >> On Thu, Mar 23, 2017 at 2:07 AM, Pedro Igor Silva wrote:
> >>
> >>> I'm experiencing some failures with tests in org.keycloak.testsuite.adapter.example. For instance, WildflyJSConsoleExampleAdapterTest. If I deploy the js-console example, everything works fine.
> >>>
> >>> Is this a known issue that I can safely ignore? Even when running against master (no elytron adapter changes) the test is failing.
> >>>
> >>> If it is a known issue, which tests should I care about most to make sure the Elytron Adapter is functional?
> >>>
> >>> Regards.
> >>> Pedro Igor
> >>>
> >>> On Wed, Mar 22, 2017 at 8:09 PM, Pedro Igor Silva wrote:
> >>>
> >>> > Hello,
> >>> >
> >>> > I'm starting to finish up some tests with the new Elytron Adapters for OIDC and SAML.
The idea is to push the adapter as soon as I prepare the > >>> arquillian > >>> testsuite to run against a container using Elytron. > >>> > > >>> > Until now, I was using integration tests to test these adapters. But > >>> from > >>> > now on, I'll be running all arquillian tests as suggested by Stian. > >>> Results > >>> > are pretty good so far; there is a single test failure right now > >>> > (org.keycloak.testsuite.adapter.servlet.AbstractDemoServletsAdapterTest#historyOfAccessResourceTest) > >>> > for which I need to figure out what is going on. > >>> > > >>> > We are going to have a specific profile to test Elytron adapters. > This > >>> > profile is configured to run a WFLY 11 SNAPSHOT. > >>> > > >>> > I've already discussed this topic with Stian and the idea is to > create > >>> a > >>> > baseline for Elytron adapters as well as to start preparing Keycloak for > >>> Wildfly > >>> > 11 and the new security infrastructure provided by Elytron. > >>> > > >>> > This *does not* mean that we are replacing undertow adapters. It just > >>> > means preparing our adapters code base for the next WFLY release (and EAP > 7). > >>> > > >>> > I'll probably send a PR on Friday or early next week. > >>> > > >>> > Please, let me know if you have any questions about this work. > >>> > > >>> > Regards.
> >>> > Pedro Igor > > >>> _______________________________________________ > >>> keycloak-dev mailing list > >>> keycloak-dev at lists.jboss.org > >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev > >>> > >> > >> > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From psilva at redhat.com Thu Mar 23 14:43:44 2017 From: psilva at redhat.com (Pedro Igor Silva) Date: Thu, 23 Mar 2017 15:43:44 -0300 Subject: [keycloak-dev] User-managed permissions In-Reply-To: <57998e07-a098-d20f-ae2e-eb0671fce980@redhat.com> References: <57998e07-a098-d20f-ae2e-eb0671fce980@redhat.com> Message-ID: I think a database still makes sense. What we put on top of that is the tricky part. AFAIK, that is what most of these social networks do: some of them use a database (e.g. MySQL) but also various other things on top of it to avoid unnecessary hits. Usually these social networks are OK with eventual inconsistency. In our case, I'm not sure if we want to allow that. From a database perspective there are some tricks like partitioning data, in addition to a good, smart cache layer on top of it. But yes, that is not something trivial to do considering the amount of data we could be managing. Hope we can start doing some prototypes soon and see how it goes. On Thu, Mar 23, 2017 at 12:35 PM, Marek Posolda wrote: > I was wondering about the use case where users manage permissions to > their own objects. It seems that proper support for this can be very > challenging in terms of DB space. > > For example: I have 1000 documents and I have 1000 users. I want to be > able to define fine-grained permissions and be able to define that user > "john" is able to see document-1 and document-2, but not document-3 etc.
> So I can end up with as many as: > > count of users * number of documents = 1000 users * 1000 documents = > 1000000 permission records in DB > > When authorization scopes (actions) come into play and I want to specify > that "john" is able just to "read" document-1 while "alice" is able to > "read", "update" and "comment" on document-1, I may end up with 5 > million objects in DB (assuming I have 5 actions). > > We can do something like divide documents into "groups" and grant the > permission just per group. But for example Google allows grouping things > (you can put more photos into one photoalbum and share a whole photoalbum > with user "john"), but also defining fine-grained permissions (sharing just a > single photo with user "john"). > > My estimation is that using JPA to save such data is likely not > feasible. And I bet that Google is really using something different :-) > > Maybe we need to restore Mongo or some other similar DB type to manage > this stuff? Or is it something where the "Nearby policy evaluation" can > help, and the permissions data would rather need to be saved by the > application itself? > > Marek > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From hmlnarik at redhat.com Thu Mar 23 16:59:15 2017 From: hmlnarik at redhat.com (Hynek Mlnarik) Date: Thu, 23 Mar 2017 21:59:15 +0100 Subject: [keycloak-dev] Keycloak High-Availability / Database In-Reply-To: References: <45A5F86B-53EF-4374-B82D-884202BFB64F@ono.at> Message-ID: Even in join tables we should declare a (composite) primary key on the fields. --Hynek On Thu, Mar 23, 2017 at 7:27 PM, Bill Burke wrote: > These are join tables. Don't most RDBMS schemas have join tables? I > don't see us putting effort into this. The community will have to do it. > > > On 3/23/17 11:39 AM, Stefan Schlesinger wrote: >> Any thoughts? Shall I rather create a bug for this issue? >> >> Best, Stefan.
>> >> >>> On 15 Mar 2017, at 11:22, Stefan Schlesinger wrote: >>> >>> Hello Folks! >>> >>> I tried to set up a Keycloak HA cluster with Percona XtraDB/Galera as HA database backend and it looks like it's not currently supported, at least by the database schema Keycloak uses and the default Galera settings. Galera requires or recommends (for performance) that all table schemas be defined with a primary key field, and when I tried to add a role to a group I got the following error: >>> >>> ERROR [io.undertow.request] (default task-14) UT005023: Exception handling request to /auth/admin/realms/vault/groups/c2a04652-a322-1111-18ea-b2145bab2222/role-mappings/realm: org.jboss.resteasy.spi.UnhandledException: org.keycloak.models.ModelException: org.hibernate.exception.GenericJDBCException: could not execute statement >>> ... >>> Caused by: org.keycloak.models.ModelException: org.hibernate.exception.GenericJDBCException: could not execute statement >>> ... >>> Caused by: org.hibernate.exception.GenericJDBCException: could not execute statement >>> ... >>> Caused by: java.sql.SQLException: Percona-XtraDB-Cluster prohibits use of DML command on a table (keycloak.ADMIN_EVENT_ENTITY) without an explicit primary key with pxc_strict_mode = ENFORCING or MASTER >>> >>> Looking through the database schema, the following tables don't have a primary key defined: >>> >>> ADMIN_EVENT_ENTITY >>> COMPOSITE_ROLE* >>> CREDENTIAL_ATTRIBUTE >>> DATABASECHANGELOG >>> FED_CREDENTIAL_ATTRIBUTE >>> REALM_ENABLED_EVENT_TYPES* >>> REALM_EVENTS_LISTENERS* >>> REALM_SUPPORTED_LOCALES* >>> REDIRECT_URIS* >>> WEB_ORIGINS* >>> >>> Tables marked with an asterisk don't even have an ID field; the rest of the tables actually have an ID field (with a UUID 'primary key'), which I think could easily be defined as the primary key.
>>> >>> Looking at the Percona documentation[1][2], the limitation to only support tables with primary keys was lifted in more recent versions with the introduction of wsrep_certify_nonPK. However, it's still generally a best practice to have explicit PKs. If you don't define a PK, Galera will use an implicit hidden 6-byte PK for InnoDB tables, taking up space that you can't use for querying. InnoDB is very much optimized towards PK lookups. >>> >>> Also I'd need to set pxc_strict_mode from ENFORCING to PERMISSIVE, but that might have other side effects, as it relaxes other validations as well. Any experiences? >>> >>> Also, would it be possible to add primary keys in a bugfix version? >>> >>> Best, >>> >>> Stefan. >>> >>> [1] - https://www.percona.com/doc/percona-xtradb-cluster/5.7/features/pxc-strict-mode.html#tables-without-primary-keys >>> [2] - https://www.percona.com/doc/percona-xtradb-cluster/5.7/wsrep-system-index.html#wsrep_certify_nonPK >> >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev -- --Hynek From srossillo at smartling.com Thu Mar 23 17:24:13 2017 From: srossillo at smartling.com (Scott Rossillo) Date: Thu, 23 Mar 2017 17:24:13 -0400 Subject: [keycloak-dev] Keycloak High-Availability / Database In-Reply-To: References: <45A5F86B-53EF-4374-B82D-884202BFB64F@ono.at> Message-ID: <05F37858-CB75-46F2-A869-8C67AF81659F@smartling.com> The primary key on a join table could logically be defined as a composite index on its columns. Whether or not the index helps or hinders performance is another question. Scott Rossillo Smartling | Senior Software Engineer srossillo at smartling.com > On Mar 23, 2017, at 2:27 PM, Bill Burke wrote: > > These are join tables.
Don't most RDBMS schemas have join tables? I > don't see us putting effort into this. The community will have to do it. > > > On 3/23/17 11:39 AM, Stefan Schlesinger wrote: >> Any thoughts? Shall I rather create a bug for this issue? >> >> Best, Stefan. >> >> >>> On 15 Mar 2017, at 11:22, Stefan Schlesinger wrote: >>> >>> Hello Folks! >>> >>> I tried to set up a Keycloak HA cluster with Percona XtraDB/Galera as HA database backend and it looks like it's not currently supported, at least by the database schema Keycloak uses and the default Galera settings. Galera requires or recommends (for performance) that all table schemas be defined with a primary key field, and when I tried to add a role to a group I got the following error: >>> >>> ERROR [io.undertow.request] (default task-14) UT005023: Exception handling request to /auth/admin/realms/vault/groups/c2a04652-a322-1111-18ea-b2145bab2222/role-mappings/realm: org.jboss.resteasy.spi.UnhandledException: org.keycloak.models.ModelException: org.hibernate.exception.GenericJDBCException: could not execute statement >>> ... >>> Caused by: org.keycloak.models.ModelException: org.hibernate.exception.GenericJDBCException: could not execute statement >>> ... >>> Caused by: org.hibernate.exception.GenericJDBCException: could not execute statement >>> ...
>>> Caused by: java.sql.SQLException: Percona-XtraDB-Cluster prohibits use of DML command on a table (keycloak.ADMIN_EVENT_ENTITY) without an explicit primary key with pxc_strict_mode = ENFORCING or MASTER >>> >>> Looking through the database schema, the following tables don't have a primary key defined: >>> >>> ADMIN_EVENT_ENTITY >>> COMPOSITE_ROLE* >>> CREDENTIAL_ATTRIBUTE >>> DATABASECHANGELOG >>> FED_CREDENTIAL_ATTRIBUTE >>> REALM_ENABLED_EVENT_TYPES* >>> REALM_EVENTS_LISTENERS* >>> REALM_SUPPORTED_LOCALES* >>> REDIRECT_URIS* >>> WEB_ORIGINS* >>> >>> Tables marked with an asterisk don't even have an ID field; the rest of the tables actually have an ID field (with a UUID 'primary key'), which I think could easily be defined as the primary key. >>> >>> Looking at the Percona documentation[1][2], the limitation to only support tables with primary keys was lifted in more recent versions with the introduction of wsrep_certify_nonPK. However, it's still generally a best practice to have explicit PKs. If you don't define a PK, Galera will use an implicit hidden 6-byte PK for InnoDB tables, taking up space that you can't use for querying. InnoDB is very much optimized towards PK lookups. >>> >>> Also I'd need to set pxc_strict_mode from ENFORCING to PERMISSIVE, but that might have other side effects, as it relaxes other validations as well. Any experiences? >>> >>> Also, would it be possible to add primary keys in a bugfix version? >>> >>> Best, >>> >>> Stefan.
>>> >>> [1] - https://www.percona.com/doc/percona-xtradb-cluster/5.7/features/pxc-strict-mode.html#tables-without-primary-keys >>> [2] - https://www.percona.com/doc/percona-xtradb-cluster/5.7/wsrep-system-index.html#wsrep_certify_nonPK >> >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From thomas_connolly at yahoo.com Thu Mar 23 19:02:50 2017 From: thomas_connolly at yahoo.com (Thomas Connolly) Date: Thu, 23 Mar 2017 23:02:50 +0000 (UTC) Subject: [keycloak-dev] New Account Management Console and Account REST api References: <1051566681.2538511.1490310170200.ref@mail.yahoo.com> Message-ID: <1051566681.2538511.1490310170200@mail.yahoo.com> Hi Stian, Our scenario is that we do not want to expose the admin UI externally. This opens the system to an external exploit. At the moment we have two options: 1) block via a rule on the load balancer port / (partial) path, or 2) change/hack the KeycloakSessionServletFilter to block external requests. Note we had to implement 2 as the company policies for the LB didn't allow path-based rules. The issue has been raised previously: https://issues.jboss.org/browse/KEYCLOAK-2944 Regards, Tom Connolly Message: 5 Date: Thu, 23 Mar 2017 13:00:56 -0400 From: Stan Silvert Subject: Re: [keycloak-dev] New Account Management Console and Account REST api To: keycloak-dev at lists.jboss.org Message-ID: Content-Type: text/plain; charset=utf-8; format=flowed On 3/23/2017 8:28 AM, Thomas Connolly wrote: > Hi All > Could this UI and API be put on a separate port please? It's still very early in development, but you will probably have the option of putting it on a different port and even a different server.
Of course, the default will be to still run it as you do today. But I'm interested in your use case. Why do you need it on a different port? > Regards, Tom. ----------------------------------- > > Message: 1 Date: Fri, 17 Mar 2017 08:25:47 -0700 > From: Tair Sabirgaliev > Subject: Re: [keycloak-dev] New Account Management Console and Account > REST api > To: Stan Silvert , stian at redhat.com > Cc: keycloak-dev > Message-ID: > Content-Type: text/plain; charset=UTF-8 > > +1 for Angular2, this will make maintenance and customisation easier. > The framework has become very popular and is close to the "JavaEE mindset". > > On 17 March 2017 at 18:19:23, Stan Silvert (ssilvert at redhat.com) wrote: > > On 3/17/2017 8:09 AM, Stian Thorgersen wrote: >> Had another idea. We could quite easily make it possible to configure >> the "account management url" for a realm. That would let folks >> redirect to an external account management console if they want to >> completely override it. > That would also mean that our own account management console could be > served from anywhere or even installed locally on the client machine. >> On 17 March 2017 at 13:08, Stian Thorgersen > > wrote: >> >> I'm going to call it "YetAnotherJsFramework" ;) >> >> On 17 March 2017 at 12:54, Stan Silvert > > wrote: >> >> On 3/17/2017 5:47 AM, Stian Thorgersen wrote: >>> As we've discussed a few times now, the plan is to do a brand >> new account >>> management console. Instead of old school forms it will be >> all modern using >>> HTML5, AngularJS and REST endpoints. >> One thing. That should be "Angular", not "AngularJS". Just to >> educate everyone, here is what's going on in Angular-land: >> >> AngularJS is the old framework we used for the admin console. >> Angular is the new framework we will use for the account >> management console. >> >> Most of you know the new framework as Angular2 or ng-2, but >> the powers >> that be want to just call it "Angular".
This framework is >> completely >> rewritten and really has no relation to AngularJS, except they >> both come >> from Google and both have "Angular" in the name. >> >> To avoid confusion, I'm going to call it "Angular2" for the >> foreseeable >> future. >>> The JIRA for this work is: >>> https://issues.jboss.org/browse/KEYCLOAK-1250 >> >>> We were hoping to get some help from the professional UXP >> folks for this, >>> but it looks like that may take some time. In the meantime >> the plan is to >>> base it on the following template: >>> >>> >> https://rawgit.com/andresgalante/kc-user/master/layout-alt-fixed.html# >> >>> Also, we'll try to use some newer things from PatternFly >> patterns to >>> improve the screens. >>> >>> The first pass will have the same functionality and behavior as >> the old account >>> management console. The second pass will be to improve the >> usability (pages >>> like linking, sessions and history are not very nice). >>> >>> We will deprecate the old FreeMarker/forms way of doing >> things, but keep it >>> around so it doesn't break what people are already doing. >> This can be >>> removed in the future (probably RHSSO 8.0?). >>> >>> We'll also need to provide full REST endpoints for the >> account management >>> console. I'll work on that, while Stan works on the UI. >>> >>> As the account management console will be a pure HTML5 and >> JS app, anyone >>> can completely replace it with a theme. They can also >> customize it a lot. >>> We'll also need to make sure it's easy to add additional >> pages/sections. >>> Rather than just add to AccountService, I'm going to rename that >>> to DeprecatedAccountFormService, remove all REST from there, >> and add a new >>> AccountService that only does REST.
All features available >> through forms at >>> the moment will be available as a REST API, with the exception >> of account >>> linking, which will be done through Bill's work that was >> introduced in 3.0 >>> that allows applications to initiate the account linking. >>> _______________________________________________ >>> keycloak-dev mailing list >>> keycloak-dev at lists.jboss.org >> >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> >> >> > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From sthorger at redhat.com Fri Mar 24 01:10:30 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Fri, 24 Mar 2017 06:10:30 +0100 Subject: [keycloak-dev] New Account Management Console and Account REST api In-Reply-To: <1051566681.2538511.1490310170200@mail.yahoo.com> References: <1051566681.2538511.1490310170200.ref@mail.yahoo.com> <1051566681.2538511.1490310170200@mail.yahoo.com> Message-ID: We're not talking about the admin UI at all here. This is only around account management, and it makes no sense to expose that on a different port, as it should be accessible by end users. With regards to KEYCLOAK-2944 and the admin console/endpoints, that makes perfect sense. The problem is that it may be very hard to implement, but we should probably look into it at least. On 24 March 2017 at 00:02, Thomas Connolly wrote: > Hi Stian > Our scenario is that we do not want to expose the admin UI externally. This
> At the moment we have two options,1) Block, via a rule on the load > balancer port / (partial) path2) Change / hack the > KeycloakSessionServletFilter to block external requests > Note we had to implement 2 as the company policies for the LB didn't allow > path based rules.The issue has been raised previously...https://issues. > jboss.org/browse/KEYCLOAK-2944 > > RegardsTom Connolly > Message: 5 > Date: Thu, 23 Mar 2017 13:00:56 -0400 > From: Stan Silvert > Subject: Re: [keycloak-dev] New Account Management Console and Account > REST api > To: keycloak-dev at lists.jboss.org > Message-ID: > Content-Type: text/plain; charset=utf-8; format=flowed > > On 3/23/2017 8:28 AM, Thomas Connolly wrote: > > Hi All > > Could this UI and API be put on a separate port please? > It's still very early in development, but you will probably have the > option of putting it on a different port and even a different server. > Of course, the default will be to sill run it as you do today. > > But I'm interested in your use case. Why do you need it on a different > port? > > > RegardsTom.----------------------------------- > > > > Message: 1Date: Fri, 17 Mar 2017 08:25:47 -0700 > > From: Tair Sabirgaliev > > Subject: Re: [keycloak-dev] New Account Management Console and Account > > REST api > > To: Stan Silvert , stian at redhat.com > > Cc: keycloak-dev > > Message-ID: > > > > > Content-Type: text/plain; charset=UTF-8 > > > > +1 for Angular2, this will make maintenance and customisation easier. > > The framework becomes very popular and close to ?JavaEE mindset?. > > > > On 17 March 2017 at 18:19:23, Stan Silvert (ssilvert at redhat.com) wrote: > > > > On 3/17/2017 8:09 AM, Stian Thorgersen wrote: > >> Had another idea. We could quite easily make it possible to configure > >> the "account management url" for a realm. That would let folks > >> redirect to external account management console if they want to > >> completely override it. 
> > That would also mean that our own account management console could be > > served from anywhere or even installed locally on the client machine. > >> On 17 March 2017 at 13:08, Stian Thorgersen >> > wrote: > >> > >> I'm going to call it "YetAnotherJsFramework" ;) > >> > >> On 17 March 2017 at 12:54, Stan Silvert >> > wrote: > >> > >> On 3/17/2017 5:47 AM, Stian Thorgersen wrote: > >>> As we've discussed a few times now, the plan is to do a brand > >> new account > >>> management console. Instead of old school forms it will be > >> all modern using > >>> HTML5, AngularJS and REST endpoints. > >> One thing. That should be "Angular", not "AngularJS". Just to > >> educate everyone, here is what's going on in Angular-land: > >> > >> AngularJS is the old framework we used for the admin console. > >> Angular is the new framework we will use for the account > >> management console. > >> > >> Most of you know the new framework as Angular2 or ng-2, but > >> the powers > >> that be want to just call it "Angular". This framework is > >> completely > >> rewritten and really has no relation to AngularJS, except they > >> both come > >> from Google and both have "Angular" in the name. > >> > >> To avoid confusion, I'm going to call it "Angular2" for the > >> foreseeable > >> future. > >>> The JIRA for this work is: > >>> https://issues.jboss.org/browse/KEYCLOAK-1250 > >> > >>> We were hoping to get some help from the professional UXP > >> folks for this, > >>> but it looks like that may take some time. In the meantime > >> the plan is to > >>> base it on the following template: > >>> > >>> > >> https://rawgit.com/andresgalante/kc-user/master/layout-alt-fixed.html# > >> > > >>> Also, we'll try to use some newer things from PatternFly > >> patterns to > >>> improve the screens. > >>> > >>> The first pass will have the same functionality and behavior as > >> the old account > >>> management console.
The second pass will be to improve the > >> usability (pages > >>> like linking, sessions and history are not very nice). > >>> > >>> We will deprecate the old FreeMarker/forms way of doing > >> things, but keep it > >>> around so it doesn't break what people are already doing. > >> This can be > >>> removed in the future (probably RHSSO 8.0?). > >>> > >>> We'll also need to provide full REST endpoints for the > >> account management > >>> console. I'll work on that, while Stan works on the UI. > >>> > >>> As the account management console will be a pure HTML5 and > >> JS app, anyone > >>> can completely replace it with a theme. They can also > >> customize it a lot. > >>> We'll also need to make sure it's easy to add additional > >> pages/sections. > >>> Rather than just add to AccountService, I'm going to rename that > >>> to DeprecatedAccountFormService, remove all REST from there, > >> and add a new > >>> AccountService that only does REST. All features available > >> through forms at > >>> the moment will be available as a REST API, with the exception > >> of account > >>> linking, which will be done through Bill's work that was > >> introduced in 3.0 > >>> that allows applications to initiate the account linking.
> >>> _______________________________________________ > >>> keycloak-dev mailing list > >>> keycloak-dev at lists.jboss.org > >> > >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev > >> > >> > >> _______________________________________________ > >> keycloak-dev mailing list > >> keycloak-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > >> > >> > >> > >> > > _______________________________________________ > > keycloak-dev mailing list > > keycloak-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > _______________________________________________ > > keycloak-dev mailing list > > keycloak-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From sthorger at redhat.com Fri Mar 24 04:40:48 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Fri, 24 Mar 2017 09:40:48 +0100 Subject: [keycloak-dev] Product (RHSSO) profile available in community Message-ID: Currently the (minor) changes between Keycloak and RHSSO are done in a separate internal repository. This causes a lot of merge conflicts and also makes it much harder to maintain these differences. The plan is to do all the changes using Maven profiles in the Keycloak repository. The exception will be POM manipulation changes (changing versions to '-redhat' versions). I've started the work and at the moment you can build something similar to RHSSO by running: mvn clean install -Pdistribution -Dproduct The trick is '-Dproduct'. At the moment the profile does the following changes: * Add RHSSO theme * Set RHSSO build properties (names, etc.) More to come ;) From bburke at redhat.com Sat Mar 25 12:43:52 2017 From: bburke at redhat.com (Bill Burke) Date: Sat, 25 Mar 2017 12:43:52 -0400 Subject: [keycloak-dev] logout social providers?
Message-ID: If a user logs in through Facebook or links to Facebook in the account service, should we log out of Facebook when the user logs out? My thinking is that we should; otherwise that machine will keep Facebook logged in. Bill From bburke at redhat.com Sat Mar 25 12:53:51 2017 From: bburke at redhat.com (Bill Burke) Date: Sat, 25 Mar 2017 12:53:51 -0400 Subject: [keycloak-dev] logout social providers? In-Reply-To: References: Message-ID: <3cd5fc40-7a1e-6adc-8ebe-b1d9ba11df5b@redhat.com> Actually it's just account linking that is affected. If you log in through Facebook, you will log out of Facebook. I assume we want a logout to happen for linked accounts too. On 3/25/17 12:43 PM, Bill Burke wrote: > If a user logs in through Facebook or links to Facebook in the account > service, should we log out of Facebook when the user logs out? My > thinking is that we should; otherwise that machine will keep Facebook > logged in. > > Bill > From dtrunk90 at gmail.com Sat Mar 25 14:00:48 2017 From: dtrunk90 at gmail.com (Danny Trunk) Date: Sat, 25 Mar 2017 19:00:48 +0100 Subject: [keycloak-dev] Password Hashing in custom User Storage Provider Message-ID: Hi, when implementing my own User Storage Provider I've noticed that the password has to be stored raw in my database, as no Password Hash Provider is getting triggered. The User Storage Provider is based on the JPA example located here: https://github.com/keycloak/keycloak/tree/master/examples/providers/user-storage-jpa When adding some logging to the isValid method of the provider to see what the contents of password and cred.getValue() are, I can see that password (the one from the database) is hashed whereas cred.getValue() isn't. That's why they mismatch and the user sees an invalid credentials error message. Do I have to call every PasswordHashProvider myself in this method (as I could have multiple algorithms in my database without any information about which algorithm was used)?
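For illustration, the kind of dispatch I'd otherwise have to hand-write looks roughly like this (a Python sketch of the idea only, not the actual Keycloak SPI; the two storage formats shown here are hypothetical examples):

```python
import binascii
import hashlib
import hmac

def verify_password(raw_password: str, stored: str) -> bool:
    """Guess the hashing algorithm from the stored value's format and
    verify accordingly, since the rows carry no algorithm column."""
    if stored.startswith("pbkdf2$"):
        # hypothetical format: pbkdf2$<iterations>$<hex salt>$<hex hash>
        _, iterations, salt_hex, expected = stored.split("$")
        derived = hashlib.pbkdf2_hmac(
            "sha256",
            raw_password.encode("utf-8"),
            binascii.unhexlify(salt_hex),
            int(iterations),
        )
        return hmac.compare_digest(binascii.hexlify(derived).decode(), expected)
    if len(stored) == 64:
        # bare 64-char hex digest: assume a legacy unsalted SHA-256 row
        digest = hashlib.sha256(raw_password.encode("utf-8")).hexdigest()
        return hmac.compare_digest(digest, stored)
    return False  # unknown format
```

This is exactly the per-algorithm branching I'd expect the Password Hash Provider SPI to take care of instead of the storage provider.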
I guess that's not the intended behaviour of the Password Hash Providers?! From bburke at redhat.com Sun Mar 26 11:06:48 2017 From: bburke at redhat.com (Bill Burke) Date: Sun, 26 Mar 2017 11:06:48 -0400 Subject: [keycloak-dev] [authz] REST and Java API need work Message-ID: The Authorization component of Keycloak is really cool and has a strong core base of functionality. I think it needs another iteration though, especially around the REST interface and Java API. The REST interface is just too complex for anybody to use. I'll give some examples: * To create a permission, you must create a PolicyRepresentation. Policy and Permission are overloaded and it's unclear how to use the REST API to create concepts that exist in the admin console. * To apply resources and scopes to a permission definition, you have to store a stringified JSON array into a regular JSON map. * In the Java API, Policy and Permission are also overloaded. In the data model, policy and permission are also overloaded. This makes it really unclear how to create a permission vs. just a plain policy. Suggestion: * Create a PermissionDefinitionRepresentation and pull core config options (scopes, applied policies, resources) into actual fields rather than into a generic config map. * Leverage the ComponentModel API to store non-core configuration, i.e. policy-type-specific information. It supports multi-valued hash maps and also has utilities in the admin console for rendering this configuration data. * Create a PermissionDefinition interface in the storage API Bill From bburke at redhat.com Sun Mar 26 11:44:56 2017 From: bburke at redhat.com (Bill Burke) Date: Sun, 26 Mar 2017 11:44:56 -0400 Subject: [keycloak-dev] [authz] introducing security holes by mistake Message-ID: In Authz you can define a permission that applies only to a scope, only to a resource, or only to a specific resource type. These are different ways to define default behaviors.
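To make that layering concrete, here is a sketch of how several independent definitions can all apply to one request (Python for illustration only; the matching rules and data model are my reading of the above, not the actual evaluation code):

```python
def applicable_permissions(permissions, resource, resource_type, scope):
    """Collect every permission definition that applies to one request.
    A definition may target an exact resource-scope pair, a resource,
    a resource type, or a scope alone (hypothetical data model)."""
    hits = []
    for p in permissions:
        if p.get("resource") == resource and p.get("scope") == scope:
            hits.append(p)  # exact resource-scope pair
        elif p.get("resource") == resource and p.get("scope") is None:
            hits.append(p)  # resource-only default
        elif p.get("type") == resource_type and p.get("resource") is None:
            hits.append(p)  # resource-type default
        elif p.get("scope") == scope and p.get("resource") is None and p.get("type") is None:
            hits.append(p)  # scope-only default
    return hits

# four separate definitions, and every one of them silently applies
# to a single "read document-1" request:
perms = [
    {"name": "doc1-read", "resource": "doc1", "scope": "read"},
    {"name": "doc1-any-scope", "resource": "doc1", "scope": None},
    {"name": "all-documents", "type": "document", "resource": None, "scope": None},
    {"name": "read-anything", "scope": "read", "resource": None, "type": None},
]
print([p["name"] for p in applicable_permissions(perms, "doc1", "document", "read")])
```

An admin editing any one of these four definitions has no obvious hint that the other three also govern the same request, which is the kind of mistake described below.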
Also, you can define multiple permission definitions for the same scope, resource, or combination of both. You can do this multiple times. This brings me to the point of this email. Isn't it too easy to screw things up? Somebody could add a more constrained permission that overrides default behavior and not realize that they've screwed things up. Somebody could add an additional permission for a resource and not realize there was already an existing one. It all seems very error-prone. I'm wondering if we should constrain this a bit more. For instance, what if each resource-scope pair is unique? That is, you can't define more than one permission for each resource-scope pair. This goes for resource-only, resource-type-only, and scope-only permission definitions too. That way, when somebody goes to define a permission, they see all policies that are currently applied. I also think that any resource-scope permission definition should automatically inherit from permissions defined in resource-type, resource, or pure-scope permission definitions. You should see these inherited permissions in the "Applied Policies" when you create the resource-scope permission. Then admins can remove the inherited permissions they want to. Bill From takashi.norimatsu.ws at hitachi.com Mon Mar 27 02:25:37 2017 From: takashi.norimatsu.ws at hitachi.com (=?iso-2022-jp?B?GyRCPmg+Pk40O1YbKEIgLyBOT1JJTUFUU1UbJEIhJBsoQlRBS0FTSEk=?=) Date: Mon, 27 Mar 2017 06:25:37 +0000 Subject: [keycloak-dev] typos in manual Message-ID: <831D472326678942A9B4BB933AAA103D25F8FF79@GSjpTK1DCembx01.service.hitachi.net> Dear all, I've been working on applying Keycloak to systems with an emphasis on high security. Along the way, I've found some typos in the RH-SSO and Keycloak manuals, and an erroneous description in the RH-SSO and Keycloak UI, as follows. I'm not sure whether it is appropriate to post such issues to this dev mailing list. If not, please tell me.
1) On 3.19.7 Compromised Access Codes of the Server Administration Guide for Keycloak 3.0.0 and before, we'd like to use "Authorization Codes" instead of "Access Codes". The same applies to 17.8 Compromised Access Code of the Server Administration Guide for RH-SSO 7.1beta and before. 2) On 3.14.3 Session and Token Timeouts for Keycloak 3.0.0 and before, we'd like to use "Authorization Code Flow in OIDC" instead of "Authentication Code Flow in OIDC". The same applies to 13.3 Session and Token Timeouts of the Server Administration Guide for RH-SSO 7.1beta and before. 3) On "Security Defences" of "Realm Settings" for Keycloak 3.0.0 and before, the description of the tooltip for "Content-Security-Policy" is the same as for "X-Frame-Options". However, CSP is a different mechanism from X-Frame-Options according to https://www.w3.org/TR/CSP/, so we should consider a different description. For example, "Default value prevents pages from accessing non-origin resources (click label for more information)". Regards. Takashi Norimatsu From sthorger at redhat.com Mon Mar 27 02:56:22 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Mon, 27 Mar 2017 08:56:22 +0200 Subject: [keycloak-dev] User-managed permissions In-Reply-To: References: <57998e07-a098-d20f-ae2e-eb0671fce980@redhat.com> Message-ID: We do need to start stress testing this stuff and make sure it can hold up to the load it needs to. That will have to be done prior to making authz services supported. On 23 March 2017 at 19:43, Pedro Igor Silva wrote: > I think a database still makes sense. What we put on top of that is the > tricky part. > > AFAIK, that is what most of these social networks do, some of them use a > database (e.g.: MySQL) but also different other things on top of it > to avoid unnecessary hits. Usually these social networks are OK with > eventual inconsistency. In our case, I'm not sure if we want to allow that. > > > From a database perspective there are some tricks like partitioning data.
> In addition, a good and smart cache layer on top of it would help. > > But yes, that is not something trivial to do considering the amount of data > we can be managing. Hope we can start doing some prototypes soon and see > how it goes. > > On Thu, Mar 23, 2017 at 12:35 PM, Marek Posolda > wrote: > > > I was wondering about the use-case when users manage permissions to > > their own objects. It seems that proper support for this can be very > > challenging because of the amount of DB space required. > > > > For example: I have 1000 documents and I have 1000 users. I want to be > > able to define fine-grained permissions and be able to define that user > > "john" is able to see document-1 and document-2, but not document-3 etc. > > So I can end up with up to: > > > > count of users * number of documents = 1000 users * 1000 documents = > > 1000000 permission records in DB > > > > When authorization scopes (actions) come into play and I want to specify > > that "john" is able just to "read" document-1 while "alice" is able to > > "read", "update" and "comment" on document-1, I may end up with 5 > > million objects in DB (assuming I have 5 actions). > > > > We can do something like dividing documents into "groups" and granting the > > permission just per group. But for example Google allows you to group things > > (you can put more photos into one photoalbum and share the whole photoalbum > > with user "john"), but also to define a fine-grained permission (share just a > > single photo with user "john"). > > > > My estimation is that using JPA to save such data is likely not > > feasible. And I bet that Google is really using something different :-) > > > > Maybe we need to restore Mongo or some other similar DB type to manage > > this stuff? Or is it something where the "Nearby policy evaluation" can > > help and permissions data would rather need to be saved by the > > application itself?
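Marek's back-of-the-envelope estimate above can be reproduced directly; a trivial sketch using the numbers from the example:

```java
// Worst-case count of per-user permission records, as estimated in the
// message above: one record per (user, resource, action) combination.
public class PermissionSizing {

    public static long worstCaseRecords(long users, long resources, long actions) {
        return users * resources * actions;
    }

    public static void main(String[] args) {
        // 1000 users x 1000 documents = 1,000,000 records
        System.out.println(worstCaseRecords(1000, 1000, 1));
        // with 5 actions per document: 5,000,000 records
        System.out.println(worstCaseRecords(1000, 1000, 5));
    }
}
```

The multiplicative growth is the whole point of the concern: every new action type multiplies the worst-case row count again.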
> > > > Marek > > > > _______________________________________________ > > keycloak-dev mailing list > > keycloak-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From mposolda at redhat.com Mon Mar 27 03:11:42 2017 From: mposolda at redhat.com (Marek Posolda) Date: Mon, 27 Mar 2017 09:11:42 +0200 Subject: [keycloak-dev] logout social providers? In-Reply-To: <3cd5fc40-7a1e-6adc-8ebe-b1d9ba11df5b@redhat.com> References: <3cd5fc40-7a1e-6adc-8ebe-b1d9ba11df5b@redhat.com> Message-ID: <4edacbaa-9d7a-55bd-e63a-8bc5cfe830f1@redhat.com> IMO the logout of the child broker should be propagated to a parent broker logout only in the case that the parent broker was actively authenticated because of the child broker. In other words, when I click "Sign In with Facebook" on the Keycloak login screen, but I am already authenticated to Facebook (hence no Facebook login screen is displayed), then logout from KC shouldn't log me out from Facebook, IMO. However, I don't know if it's possible to detect this. In the case that Keycloak is used as the parent broker, we have "auth_time" as a claim in the token, so we can decide if the parent Keycloak broker was actively authenticated because of our request. Not sure if Facebook, Google, Twitter and other OIDC providers have something like this. Also not even sure if Facebook (and other social providers) allow you to log out of their session from the "child" app... Marek On 25/03/17 17:53, Bill Burke wrote: > Actually it's just account linking that is affected. If you log in > through Facebook, you will log out of Facebook. I assume we want a > logout to happen to linked accounts too. > > > On 3/25/17 12:43 PM, Bill Burke wrote: >> If a user logs in through Facebook or links to Facebook in the account >> service, should we log out of Facebook when the user logs out?
My >> thinking is that we should; otherwise that machine will keep Facebook >> logged in. >> >> Bill >> > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From psilva at redhat.com Mon Mar 27 07:01:42 2017 From: psilva at redhat.com (Pedro Igor Silva) Date: Mon, 27 Mar 2017 08:01:42 -0300 Subject: [keycloak-dev] [authz] REST and Java API need work In-Reply-To: References: Message-ID: On Sun, Mar 26, 2017 at 12:06 PM, Bill Burke wrote: > Authorization component of Keycloak is really cool and has a strong core > base of functionality. I think it needs another iteration though > especially around the REST interface and Java API. > > The REST interface is just too complex for anybody to use. I'll give > some examples: > > * To create a permission, you must create a PolicyRepresentation. > Policy and Permission are overloaded and it's unclear how to use the REST > API to create concepts that exist in the admin console. > * To apply resources and scopes to a permission definition, you have to > store a stringified JSON array into a regular JSON map. > > * In the Java API, Policy and Permission are also overloaded. In the data model > policy and permission are also overloaded. This makes it really unclear > how to create a permission vs. just a plain policy. > > > Suggestion: > > * Create a PermissionDefinitionRepresentation and pull core config > options (scopes, applied policies, resources) into actual fields rather > than in a generic config map. > As we already discussed in a previous thread, policy management via REST API is a TODO and we have a JIRA for this. Will work on it this week. > > * Leverage the ComponentModel API to store non-core configuration, i.e. > policy type specific information. It supports multi-valued hash maps > and also has utilities in the admin console for rendering this configuration > data. > +1. Yeah, I really missed this capability.
I will review this part of the code and check how the component model works. > > * Create a PermissionDefinition interface in storage API > I'm not willing to change the model now... But we can change the API to start introducing this. What do you say? > > Bill > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From mposolda at redhat.com Mon Mar 27 07:02:44 2017 From: mposolda at redhat.com (Marek Posolda) Date: Mon, 27 Mar 2017 13:02:44 +0200 Subject: [keycloak-dev] Authentication sessions prototype Message-ID: <141abad0-f228-75f5-1539-1a0d60210bba@redhat.com> We started on the work for cross-dc support. One of the initial steps for this is to improve the current "sessions" cache to avoid unnecessary communication between data-centers. Currently a ClientSessionModel is created at the start of the authentication, and every step in the authentication flow means some writes to the ClientSessionModel. So the idea is to create a separate provider and a separate "authentication session", which will be used just during the authentication time. The advantage is that authentication usually doesn't take a lot of time and can be tracked with a browser sticky session. So a typical deployment will be able to rely on sticky sessions and won't need the authentication sessions to be replicated across different data centers. I have a prototype already working in the branch [1]. What I did so far is: - Created a separate provider AuthenticationSessionProvider and a separate AuthenticationSessionModel - During the start of authentication (at the time the request from the OIDC or SAML application is sent to Keycloak), the AuthenticationSession is created instead of the old ClientSession. For now, there is a cookie with the authentication-session-id created. This one is used to track sticky sessions - AuthenticationSession is used for the time of authentication, requiredActions and consents.
The UserSession is now created after the consent is confirmed (before redirecting to the OIDC/SAML application). Some minor changes were needed in the authentication SPI, requiredActions SPI and forms SPI to use AuthenticationSession instead of ClientSession and to not use UserSession. - For now, UserSession still tracks the list of clientSessions of the authenticated clients. But those authenticated client-sessions are now saved just as an attachment of the userSession entity, so there is just a single Infinispan entity for the userSession and no additional entities for clientSessions. This is just another step. Hopefully we will be able to get rid of "clientSession" entirely and keep just a list of the client IDs in the user session. This would require some additional refactoring as we currently have some data in clientSession which is used during refresh and during logout. But this will be done later though (eg. ensure that roles and protocolMappers will be available in the refreshToken. Maybe support for OIDC logout on the adapter side similar to what we have for SAML, as currently we track the HttpSession ID as a note in clientSession and this one is needed to log out the HttpSession on the adapter side etc) - There are some improvements done around the back / forward / refresh buttons. We discussed this in another thread. For now, the aim is to never display the Keycloak page with "We're sorry. An error occurred and please login through your application" but rather display the more friendly "Page is expired" with links to the start of the authenticationFlow and/or to the last step. Any more tricky functionality (tracking history with real "rollback" of some authentication / requiredAction / registration actions etc) is beyond the scope of this, so I am likely not going to do anything related to it. - I have the most important flows working (login, registration, required actions, consents, reset password). There are still many TODOs and non-working flows (eg. brokering) and also many failing tests. But hopefully in 1-2 weeks I will be able to have this more stable and send a PR for it. - In the branch, I have also cherry-picked some initial work by Hynek on "action tokens". This is used in the reset password flow. I think that Hynek will send a separate email about this later with more details. [1] https://github.com/mposolda/keycloak/tree/cross-dc2 Marek From psilva at redhat.com Mon Mar 27 07:23:20 2017 From: psilva at redhat.com (Pedro Igor Silva) Date: Mon, 27 Mar 2017 08:23:20 -0300 Subject: [keycloak-dev] [authz] introducing security holes by mistake In-Reply-To: References: Message-ID: On Sun, Mar 26, 2017 at 12:44 PM, Bill Burke wrote: > In Authz you can define a permission that applies only to a scope, only > to a resource, or only to a specific resource type. These are different > ways to define default behaviors. Also, you can define multiple > permission definitions for the same scope, resource, or combination of > both. You can do this multiple times. > > This brings me to the point of this email. Isn't it too easy to screw > things up? Somebody could add a more constrained permission that > overrides default behavior and not realize that they've screwed things > up. Somebody could add an additional permission for a resource and not > realize there was already an existing one. It all seems very error prone. > I'm wondering if we should constrain this a bit more. For instance, > what if each resource-scope pair is unique? That is, you can't define > more than one permission for each resource-scope pair. This goes for > resource-only, resource-type-only, and scope-only permission definitions > too. That way, when somebody goes to define a permission, they see all > policies that are currently applied. > I understand your point. But I would prefer to avoid such constraints. If you want to have multiple permissions for the same resource-scope pair you should be able to do it.
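The uniqueness constraint Bill proposes above (at most one permission definition per resource-scope pair) amounts to a duplicate check at creation time; a minimal sketch with hypothetical names, unrelated to Keycloak's real model API:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the uniqueness constraint discussed in this thread: reject a
// second permission definition for the same resource-scope pair.
public class PermissionUniquenessCheck {

    private final Set<String> definedPairs = new HashSet<>();

    /** Returns true if registered, false if the pair already has a permission definition. */
    public boolean register(String resourceId, String scopeId) {
        return definedPairs.add(resourceId + "#" + scopeId);
    }

    public static void main(String[] args) {
        PermissionUniquenessCheck check = new PermissionUniquenessCheck();
        System.out.println(check.register("document-1", "read"));  // true
        System.out.println(check.register("document-1", "read"));  // false: duplicate pair
    }
}
```

The same check extends to resource-only, resource-type-only, and scope-only definitions by using a placeholder for the missing half of the pair.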
I think there are other ways to address this, like improving the UI to show a "Permission Graph", versioning of AuthZ settings, etc. I tried to address this issue in some way when you click "Show Details" when listing resources. There you can check out all permissions applied to a resource. The same for scopes ... The evaluation tool also helps to check for situations like that. From there you can see all permissions/policies evaluated for a resource/scope. > > I also think that any resource-scope permission definition should > automatically inherit from permissions defined in resource-type, > resource, or pure-scope permission definitions. You should see these > inherited permissions in the "Applied Policies" when you create the > resource-scope permission. Then admins can remove the inherited > permissions they want to. > The "Show Details" on resource listing does provide you something similar, doesn't it? I mean, from there you may check all permissions that are associated with a resource (even if not associated directly, like when using a scope-based permission). > > Bill > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From grossws at gmail.com Mon Mar 27 07:27:51 2017 From: grossws at gmail.com (Konstantin Gribov) Date: Mon, 27 Mar 2017 11:27:51 +0000 Subject: [keycloak-dev] logout social providers? In-Reply-To: <4edacbaa-9d7a-55bd-e63a-8bc5cfe830f1@redhat.com> References: <3cd5fc40-7a1e-6adc-8ebe-b1d9ba11df5b@redhat.com> <4edacbaa-9d7a-55bd-e63a-8bc5cfe830f1@redhat.com> Message-ID: +1 to Marek; if you logged in to Keycloak through an identity provider like FB/Google/GitHub/whatever, the user would be greatly annoyed by being logged out of FB (and all applications which used that login and don't go through Keycloak) just because the user logged out of some Keycloak-integrated application. Mon, 27 Mar 2017 at
10:13, Marek Posolda : > IMO the logout of child broker should be propagated to parent broker > logout just in case, that parent broker was actively authenticated > because of child broker. > > In other words, when I click to "Sign In with Facebook" on Keycloak > login screen, but I am already authenticated to Facebook (hence no > Facebook login screen is displayed), then logout from KC shouldn't > logout me from Facebook IMO. > > However I don't know if it's possible to detect this. In case that > Keycloak is used as parent broker, we have "auth_time" as a claim in the > token, so we can decide if parent Keycloak broker was actively > authenticated because of our request. Not sure if Facebook, Google, > Twitter and others OIDC providers have something like this. Also not > even sure if Facebook (and other social providers) allow you to logout > their session from the "child" app... > > Marek > > On 25/03/17 17:53, Bill Burke wrote: > > Actually its just account linking that is effected. If you log in > > through Facebook, you will log out of facebook. I assume we want a > > logout to happen to linked accounts too. > > > > > > On 3/25/17 12:43 PM, Bill Burke wrote: > >> If a user logs in through Facebook or links to Facebook in the account > >> service, should we logout the Facebook when the user logs out? My > >> thinking is that we should otherwise that machine will keep facebook > >> logged in. 
> >> >> Bill >> >> > _______________________________________________ > > keycloak-dev mailing list > > keycloak-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > -- Best regards, Konstantin Gribov From tech at psynd.net Mon Mar 27 07:44:25 2017 From: tech at psynd.net (Tech) Date: Mon, 27 Mar 2017 13:44:25 +0200 Subject: [keycloak-dev] OpenID Connect: userkey instead of username Message-ID: <14b6e0d9-c1d1-8abd-441d-5abcf1c23222@psynd.net> Dear experts, we are working with an application that implements OIDC via a plugin. What we detected is that when we run the authentication for a local user present in Keycloak, the remote username appearing in the application is Keycloak's userKey instead of Keycloak's username. Is there anything that we should do to retrieve the username instead? Thanks! From hmlnarik at redhat.com Mon Mar 27 08:11:45 2017 From: hmlnarik at redhat.com (Hynek Mlnarik) Date: Mon, 27 Mar 2017 14:11:45 +0200 Subject: [keycloak-dev] Action tokens Message-ID: <72878714-952f-496c-a252-f57e9785ffd7@redhat.com> Following up on the e-mail sent earlier today by Marek, I'm sending info on action tokens. An action token is a concept intended as a time-boxed ticket for a bearer to perform a single operation like reset password. They will be used for one-time actions that can be potentially delayed or executed outside of the current authentication flow. The idea is to implement them as signed JWT tokens where the allowed operation will be specified in the token type field. Action tokens will support expiration definable per action (different expiration for e.g. verify e-mail and reset password, or customizable expiration when sent from the admin interface).
JWT supports both signing and custom fields that can be used by the operation to supply additional arguments and to prevent reuse of the token once the operation has already been performed. Initially it seemed that a distributed cache would be needed to prevent reusing the token a second time. After thinking it over, however, it turned out that currently all required cases can be covered by introducing a field like "last timestamp of the password change" into a reset password token; the field is checked and the operation is only allowed if the token value is equal to the one from the database. So far the initial implementation covers the token in the reset-password e-mail. A cache-independent version of action tokens is available here [1]. --Hynek [1] https://github.com/hmlnarik/keycloak/tree/mposolda--cross-dc2-replaced-hmlnariks-commits From sthorger at redhat.com Mon Mar 27 09:48:48 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Mon, 27 Mar 2017 15:48:48 +0200 Subject: [keycloak-dev] OpenID Connect: userkey instead of username In-Reply-To: <14b6e0d9-c1d1-8abd-441d-5abcf1c23222@psynd.net> References: <14b6e0d9-c1d1-8abd-441d-5abcf1c23222@psynd.net> Message-ID: Please ask on the user mailing list On 27 March 2017 at 13:44, Tech wrote: > Dear experts, > > we are working with an application that implements OIDC via a plugin. > > What we detected is that when we run the authentication for a local user > present in Keycloak, the remote username appearing in the application > is Keycloak's userKey instead of Keycloak's username. > > Is there anything that we should do to retrieve the username instead? > > Thanks!
> > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From bburke at redhat.com Mon Mar 27 10:24:37 2017 From: bburke at redhat.com (Bill Burke) Date: Mon, 27 Mar 2017 10:24:37 -0400 Subject: [keycloak-dev] logout social providers? In-Reply-To: References: <3cd5fc40-7a1e-6adc-8ebe-b1d9ba11df5b@redhat.com> <4edacbaa-9d7a-55bd-e63a-8bc5cfe830f1@redhat.com> Message-ID: Like Marek said, you can't really tell if Facebook was already logged in or not. IMO, it is better to annoy the user than the alternative of somebody taking over somebody's Facebook account because they stepped away from the computer. On 3/27/17 7:27 AM, Konstantin Gribov wrote: > +1 to Marek, if you logged in in keycloak through identity provider > like fb/google/github/whatever user'd be greatly annoyed by logging > him out from fb (and all applications which used that login that don't > go through keycloak) just because user logged out of some > keycloak-integrated application. > > Mon, 27 Mar 2017 at 10:13, Marek Posolda >: > > IMO the logout of child broker should be propagated to parent broker > logout just in case, that parent broker was actively authenticated > because of child broker. > > In other words, when I click to "Sign In with Facebook" on Keycloak > login screen, but I am already authenticated to Facebook (hence no > Facebook login screen is displayed), then logout from KC shouldn't > logout me from Facebook IMO. > > However I don't know if it's possible to detect this. In case that > Keycloak is used as parent broker, we have "auth_time" as a claim > in the > token, so we can decide if parent Keycloak broker was actively > authenticated because of our request. Not sure if Facebook, Google, > Twitter and others OIDC providers have something like this.
Also not > even sure if Facebook (and other social providers) allow you to logout > their session from the "child" app... > > Marek > > On 25/03/17 17:53, Bill Burke wrote: > > Actually its just account linking that is effected. If you log in > > through Facebook, you will log out of facebook. I assume we want a > > logout to happen to linked accounts too. > > > > > > On 3/25/17 12:43 PM, Bill Burke wrote: > >> If a user logs in through Facebook or links to Facebook in the > account > >> service, should we logout the Facebook when the user logs out? My > >> thinking is that we should otherwise that machine will keep > facebook > >> logged in. > >> > >> Bill > >> > > _______________________________________________ > > keycloak-dev mailing list > > keycloak-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > -- > > Best regards, > Konstantin Gribov > From bburke at redhat.com Mon Mar 27 10:38:28 2017 From: bburke at redhat.com (Bill Burke) Date: Mon, 27 Mar 2017 10:38:28 -0400 Subject: [keycloak-dev] Action tokens In-Reply-To: <72878714-952f-496c-a252-f57e9785ffd7@redhat.com> References: <72878714-952f-496c-a252-f57e9785ffd7@redhat.com> Message-ID: <53c4c187-377b-c59a-aa2a-9b349e25c069@redhat.com> * What if the password isn't stored in Keycloak? What if the password is stored in an external User Storage Provider? * Each action token type is going to have to maintain this timestamp thing * What about verify email or update profile? * reset-password isn't actually resetting the password. It runs an authentication flow. This flow could ask for additional information (i.e. "mother's name, birthday, etc.") It could also reset multiple credential types beyond password. * Aren't you just replacing a dependency on one type of replication (user session) with another (database)?
* Aren't action tokens supposed to be independent of User sessions anyways? * How can somebody continue with the login flow with an action token? Aren't you still going to have to obtain the user session? I like the idea of action tokens mainly because they can be independent of a User Session. I just don't think it solves/helps with anything cross-DC related. On 3/27/17 8:11 AM, Hynek Mlnarik wrote: > Following up e-mail sent earlier today by Marek, I'm sending info on action tokens. > > Action token is a concept intended as a time-boxed ticket for a bearer to perform a single operation like reset password. They will be used for one-time actions that can be potentially delayed or executed outside of current authentication flow. > > The idea is to implement them as signed JWT tokens where the allowed operation will be specified in token type field. Action tokens will support expiration definable per action (different expiration for e.g. verify e-mail and reset password, or customizable expiration when sent from admin interface). JWT allows both signing and supports custom fields that can be used by the operation to supply additional arguments and to implement prevention of reusing the token once the operation would be performed already. > > Initially it seemed that a distributed cache would be needed to prevent reusing the token for the second time. After thinking it over however it turned out that currently all required cases can be prevented by introducing a field like "last timestamp of the password change" into a reset password token that is checked and operation is only allowed if the token value is equal to the one from database. > > So far the initial implementation covers token in reset-password e-mail. Cache-independent version of action tokens is available here [1]. 
> > --Hynek > > [1] https://github.com/hmlnarik/keycloak/tree/mposolda--cross-dc2-replaced-hmlnariks-commits > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev From mposolda at redhat.com Mon Mar 27 11:53:23 2017 From: mposolda at redhat.com (Marek Posolda) Date: Mon, 27 Mar 2017 17:53:23 +0200 Subject: [keycloak-dev] logout social providers? In-Reply-To: References: <3cd5fc40-7a1e-6adc-8ebe-b1d9ba11df5b@redhat.com> <4edacbaa-9d7a-55bd-e63a-8bc5cfe830f1@redhat.com> Message-ID: Both options are quite bad IMO. But it's likely better to prefer security over usability... Btw. it looks like currently we never propagate a Keycloak logout to Facebook. And it's probably the same for other social networks. It looks like FB has some means to propagate logout, and it's even able to distinguish between the cases when the user was already logged in or not. There is this for the JavaScript API [1] and some possibilities here, which may work for server-side apps too [2]. But it looks like you need the Facebook accessToken to log out. [1] https://developers.facebook.com/docs/reference/javascript/FB.logout [2] http://stackoverflow.com/questions/2764436/facebook-oauth-logout Marek On 27/03/17 16:24, Bill Burke wrote: > > Like marek said, you can't really tell if facebook was already logged > in or not. IMO, it is better to annoy the user than the alternative > of somebody taking over somebody's Facebook account because they > stepped away from the computer. > > > On 3/27/17 7:27 AM, Konstantin Gribov wrote: >> +1 to Marek, if you logged in in keycloak through identity provider >> like fb/google/github/whatever user'd be greatly annoyed by logging >> him out from fb (and all applications which used that login that >> don't go through keycloak) just because user logged out of some >> keycloak-integrated application. >> >> Mon, 27 Mar 2017 at
10:13, Marek Posolda > >: >> >> IMO the logout of child broker should be propagated to parent broker >> logout just in case, that parent broker was actively authenticated >> because of child broker. >> >> In other words, when I click to "Sign In with Facebook" on Keycloak >> login screen, but I am already authenticated to Facebook (hence no >> Facebook login screen is displayed), then logout from KC shouldn't >> logout me from Facebook IMO. >> >> However I don't know if it's possible to detect this. In case that >> Keycloak is used as parent broker, we have "auth_time" as a claim >> in the >> token, so we can decide if parent Keycloak broker was actively >> authenticated because of our request. Not sure if Facebook, Google, >> Twitter and others OIDC providers have something like this. Also not >> even sure if Facebook (and other social providers) allow you to >> logout >> their session from the "child" app... >> >> Marek >> >> On 25/03/17 17:53, Bill Burke wrote: >> > Actually its just account linking that is effected. If you log in >> > through Facebook, you will log out of facebook. I assume we want a >> > logout to happen to linked accounts too. >> > >> > >> > On 3/25/17 12:43 PM, Bill Burke wrote: >> >> If a user logs in through Facebook or links to Facebook in the >> account >> >> service, should we logout the Facebook when the user logs out? My >> >> thinking is that we should otherwise that machine will keep >> facebook >> >> logged in. 
>> >> >> >> Bill >> >> >> > _______________________________________________ >> > keycloak-dev mailing list >> > keycloak-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> -- >> >> Best regards, >> Konstantin Gribov >> > From sven.thoms at gmail.com Mon Mar 27 12:52:12 2017 From: sven.thoms at gmail.com (Sven Thoms) Date: Mon, 27 Mar 2017 18:52:12 +0200 Subject: [keycloak-dev] Entitlement API, role based Policy, forbidden Message-ID: I have users in my realm that I have assigned realm roles to: realm roles: Master, Apprentice one such user is test_user roles: uma_authorization, Apprentice When I enable authorization on a client and 1. add a resource besides the default resource to it, say "Second Resource" 2. under Policies - Roles add a role-based policy referencing the realm role Apprentice that my user belongs to Using the test user's access_token obtained from the realm token endpoint: curl -X POST \ -H "Content-Type: application/x-www-form-urlencoded" \ -d "client_id=admin-cli&username=test_user&password=password&grant_type=password" \ https://mykeycloak.domain/auth/realms/myrealm/protocol/openid-connect/token and checking the entitlement API response for the client's id, using the bearer access token of the user as well as the payload for the Second Resource, I always get status code forbidden: curl -v -X POST \ -H "Content-Type: application/json" \ -H 'Authorization: bearer userbearerrertoken' \ -d '{"permissions":[{"resource_set_name":"Second Resource"}]}' \ https://mykeycloak.domain/auth/realms/myrealm/authz/entitlement/my_client_id For the Default Resource, all is fine and I get back an RPT. Am I missing something regarding the user's needed roles?
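Building the entitlement request body programmatically avoids quoting mistakes in hand-written JSON; the helper below is purely illustrative and not a Keycloak client API (real code should use a JSON library rather than string concatenation):

```java
// Builds the JSON payload for an entitlement request asking for permissions
// on a single named resource set, matching the shape used in the curl example.
public class EntitlementPayload {

    public static String forResource(String resourceSetName) {
        return "{\"permissions\":[{\"resource_set_name\":\"" + resourceSetName + "\"}]}";
    }

    public static void main(String[] args) {
        System.out.println(forResource("Second Resource"));
    }
}
```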
According to the documentation, the role-level permission for the Second Resource should lead to the user being authorized to access the second resource if any realm role in a role-based permission for a resource holds. I am using Keycloak 2.5.1. From sthorger at redhat.com Tue Mar 28 03:29:35 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Tue, 28 Mar 2017 09:29:35 +0200 Subject: [keycloak-dev] Action tokens In-Reply-To: <53c4c187-377b-c59a-aa2a-9b349e25c069@redhat.com> References: <72878714-952f-496c-a252-f57e9785ffd7@redhat.com> <53c4c187-377b-c59a-aa2a-9b349e25c069@redhat.com> Message-ID: The main idea was to not require action tokens to be one-time, but rather to invalidate them when the user's state has changed. That way it doesn't matter if the user clicks on the link once or twice; all links will still work until the user changes something. Also, there would be no need to replicate anything as it's just leveraging data that is already there. Maybe it's not possible with the user storage SPI and custom reset password flows though. On 27 March 2017 at 16:38, Bill Burke wrote: > * what if the password isn't stored in Keycloak? What if the password > is stored in an external User Storage Provider? > Can't you get the last update time for the credentials in an external user storage provider? > > * Each action token type is going to have to maintain this timestamp thing > > * What about verify email or update profile? > Verify email can allow verifying the email as long as the token hasn't expired and the email address hasn't changed. It doesn't have to be a one-time action. Update profile would benefit from having a last updated attribute on the user model. Would that screw up the user storage provider SPI? If we had that, the token would be permitted as long as the user hasn't been updated since the token was created. > > * reset-password isn't actually resetting the password. It runs an > authentication flow. This flow could ask for additional information > (i.e.
"mother's name, birthday, etc.") It could also reset multiple > credential types beyond password. > > * aren't you just replacing dependency on one type of replication (user > session) with another (database)? > > * Aren't action tokens supposed to be independent of User sessions anyways? > > * How can somebody continue with the login flow with an action token? > Aren't you still going to have to obtain the user session? > In that case they will open the link in the same browser and the authentication session will be there. > > I like the idea of action tokens mainly because they can be independent > of a User Session. I just don't think it solves/helps with anything > cross-DC related. > It would at least for some things, for example verify email is simple to do without any need to replicate anything. > > > On 3/27/17 8:11 AM, Hynek Mlnarik wrote: > > Following up e-mail sent earlier today by Marek, I'm sending info on > action tokens. > > > > Action token is a concept intended as a time-boxed ticket for a bearer > to perform a single operation like reset password. They will be used for > one-time actions that can be potentially delayed or executed outside of > current authentication flow. > > > > The idea is to implement them as signed JWT tokens where the allowed > operation will be specified in token type field. Action tokens will support > expiration definable per action (different expiration for e.g. verify > e-mail and reset password, or customizable expiration when sent from admin > interface). JWT allows both signing and supports custom fields that can be > used by the operation to supply additional arguments and to implement > prevention of reusing the token once the operation would be performed > already. > > > > Initially it seemed that a distributed cache would be needed to prevent > reusing the token for the second time. 
After thinking it over however it > turned out that currently all required cases can be prevented by > introducing a field like "last timestamp of the password change" into a > reset password token that is checked and operation is only allowed if the > token value is equal to the one from database. > > > > So far the initial implementation covers token in reset-password e-mail. > Cache-independent version of action tokens is available here [1]. > > > > --Hynek > > > > [1] https://github.com/hmlnarik/keycloak/tree/mposolda--cross- > dc2-replaced-hmlnariks-commits > > _______________________________________________ > > keycloak-dev mailing list > > keycloak-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From hmlnarik at redhat.com Tue Mar 28 05:56:17 2017 From: hmlnarik at redhat.com (Hynek Mlnarik) Date: Tue, 28 Mar 2017 11:56:17 +0200 Subject: [keycloak-dev] Action tokens In-Reply-To: References: <72878714-952f-496c-a252-f57e9785ffd7@redhat.com> <53c4c187-377b-c59a-aa2a-9b349e25c069@redhat.com> Message-ID: <261cf04a-cb3c-7073-d9ea-f3f0d30024a4@redhat.com> On 03/28/2017 09:29 AM, Stian Thorgersen wrote: > The main idea was to not require action tokens to be one-time, but rather > invalidate when the users state has changed. That way it doesn't matter if > user clicks on the link once or twice all links will still work until the > user changes something. Also, there would be no need to replicate anything > as it's just leveraging data that is already there. Maybe it's not possible > with user storage SPI and custom reset password flows though. I have apparently expressed the one-time support of action tokens in the original e-mail too strongly. What was meant is that action tokens provide means to achieve one-timeness. 
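[Editor's note: the signed-token design described above — the allowed operation in a type field, per-action expiration, and extra claims carried along for a later single-use check — can be sketched as follows. This is a plain-Python, stdlib-HMAC illustration, not Keycloak's actual implementation; all claim names, the token format, and the key handling are illustrative assumptions:]

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"realm-hmac-key"  # stand-in for the realm's signing key

def b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def create_action_token(user_id: str, token_type: str, lifespan_s: int, **extra) -> str:
    # "typ" carries the allowed operation, "exp" the per-action expiration;
    # extra claims (e.g. a password-change timestamp) ride along so the
    # operation can implement its own single-use check.
    claims = {"sub": user_id, "typ": token_type,
              "exp": int(time.time()) + lifespan_s, **extra}
    body = b64(json.dumps(claims, sort_keys=True).encode())
    sig = b64(hmac.new(SECRET, body.encode(), hashlib.sha256).digest())
    return f"{body}.{sig}"

def verify_action_token(token: str, expected_type: str) -> dict:
    body, sig = token.split(".")
    good = b64(hmac.new(SECRET, body.encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, good):
        raise ValueError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))
    if claims["typ"] != expected_type:
        raise ValueError("wrong token type")
    if claims["exp"] < time.time():
        raise ValueError("expired")
    return claims

t = create_action_token("user-1", "reset-password", 3600, pwd_changed_at=1234)
print(verify_action_token(t, "reset-password")["sub"])
```

Note that, exactly as the thread says, nothing here makes the token one-time by itself; the pwd_changed_at claim only becomes a single-use mechanism once the operation compares it against stored state.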
However the action tokens just by themselves do not guarantee one-time use. If required, it is necessary to implement a single-use check in the operation as outlined in the first thread in this e-mail. E.g. for e-mail verification, this check is not needed, while for password reset it is. > On 27 March 2017 at 16:38, Bill Burke wrote: > >> * what if the password isn't stored in Keycloak? What if the password >> is stored in an external User Storage Provider? >> > > Can't you get the last update time for the credentials in an external user > storage provider? > > >> >> * Each action token type is going to have to maintain this timestamp thing Only those action tokens that are dedicated for single use - and those anyway need to contain something (be it timestamp, random bytes, ...) that the single-use check has to verify to prevent reuse of the token. >> * What about verify email or update profile? > > Verify email can allow verifying the email as long as the token hasn't > expired and the email address hasn't changed. It doesn't have to be a > one-time action. > > Update profile would have benefited from having a last updated attribute on > the user model. Would that screw up user storage provider SPI? If we had > that the token would be permitted as long as the user hasn't been updated > since the token was created. > >> >> * reset-password isn't actually resetting the password. It runs an >> authentication flow. This flow could ask for additional information >> (i.e. "mother's name, birthday, etc.") It could also reset multiple >> credential types beyond password. Absolutely, that's a good point. A reset password action token is actually a permission to start a reset-password flow for the given user. If user storage provider SPI supported last updated attribute on user model (does it?), starting the flow would be allowed only if this attribute on the user (e.g. last updated timestamp) matches the one from the token.
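[Editor's note: the check described here — a token stays valid only while a stored attribute, such as the last password-change timestamp, still matches the value captured at issue time — can be illustrated with a small sketch. The in-memory user store and field names are hypothetical; the real check would go against the user model or user storage provider:]

```python
# Hypothetical in-memory user store standing in for the database /
# user storage provider.
users = {"alice": {"pwd_changed_at": 1000}}

def issue_reset_token(user_id: str) -> dict:
    # Capture the current password-change timestamp inside the token.
    return {"sub": user_id, "pwd_changed_at": users[user_id]["pwd_changed_at"]}

def token_still_valid(token: dict) -> bool:
    # Valid only while the stored timestamp still matches the one in the
    # token, i.e. until the password is actually reset.
    return users[token["sub"]]["pwd_changed_at"] == token["pwd_changed_at"]

tok = issue_reset_token("alice")
print(token_still_valid(tok))            # True: nothing has changed yet
users["alice"]["pwd_changed_at"] = 2000  # the password reset happens
print(token_still_valid(tok))            # False: the token is spent
```

The point of the thread holds in the sketch: replication is only needed when the user's data actually changes, not per issued token.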
If that were not possible, I believe the same behaviour can be achieved by introducing a generic attribute on user model that would change once action token is used and flow finishes. Hence replication would only take place on data change. >> * aren't you just replacing dependency on one type of replication (user >> session) with another (database)? Not replacing, rather removing the need to replicate more data than necessary. Database replication will have to happen on any user change regardless of mechanism allowing that change, and here it is this mechanism that is being improved. >> * Aren't action tokens supposed to be independent of User sessions anyways? >> * How can somebody continue with the login flow with an action token? >> Aren't you still going to have to obtain the user session? Not have to, and yes, I can make use of it to continue in the session in progress. > In that case they will open the link in the same browser and the > authentication session will be there. > > >> >> I like the idea of action tokens mainly because they can be independent >> of a User Session. I just don't think it solves/helps with anything >> cross-DC related. >> > > It would at least for some things, for example verify email is simple to do > without any need to replicate anything. > > >> >> >> On 3/27/17 8:11 AM, Hynek Mlnarik wrote: >>> Following up e-mail sent earlier today by Marek, I'm sending info on >> action tokens. >>> >>> Action token is a concept intended as a time-boxed ticket for a bearer >> to perform a single operation like reset password. They will be used for >> one-time actions that can be potentially delayed or executed outside of >> current authentication flow. >>> >>> The idea is to implement them as signed JWT tokens where the allowed >> operation will be specified in token type field. Action tokens will support >> expiration definable per action (different expiration for e.g. 
verify >> e-mail and reset password, or customizable expiration when sent from admin >> interface). JWT allows both signing and supports custom fields that can be >> used by the operation to supply additional arguments and to implement >> prevention of reusing the token once the operation would be performed >> already. >>> >>> Initially it seemed that a distributed cache would be needed to prevent >> reusing the token for the second time. After thinking it over however it >> turned out that currently all required cases can be prevented by >> introducing a field like "last timestamp of the password change" into a >> reset password token that is checked and operation is only allowed if the >> token value is equal to the one from database. >>> >>> So far the initial implementation covers token in reset-password e-mail. >> Cache-independent version of action tokens is available here [1]. >>> >>> --Hynek >>> >>> [1] https://github.com/hmlnarik/keycloak/tree/mposolda--cross- >> dc2-replaced-hmlnariks-commits >>> _______________________________________________ >>> keycloak-dev mailing list >>> keycloak-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From bburke at redhat.com Tue Mar 28 09:13:04 2017 From: bburke at redhat.com (Bill Burke) Date: Tue, 28 Mar 2017 09:13:04 -0400 Subject: [keycloak-dev] Action tokens In-Reply-To: References: <72878714-952f-496c-a252-f57e9785ffd7@redhat.com> <53c4c187-377b-c59a-aa2a-9b349e25c069@redhat.com> Message-ID: <89af37a0-402a-5279-c852-b9f4d120dc8d@redhat.com> On 3/28/17 3:29 AM, Stian Thorgersen wrote: > The main idea was to not require action tokens to be 
one-time, but > rather invalidate when the users state has changed. That way it > doesn't matter if user clicks on the link once or twice all links will > still work until the user changes something. Also, there would be no > need to replicate anything as it's just leveraging data that is > already there. Maybe it's not possible with user storage SPI and > custom reset password flows though. > > On 27 March 2017 at 16:38, Bill Burke > wrote: > > * what if the password isn't stored in Keycloak? What if the password > is stored in an external User Storage Provider? > > > Can't you get the last update time for the credentials in an external > user storage provider? Maybe. Maybe not. Who knows? We won't. > > * Each action token type is going to have to maintain this > timestamp thing > > * What about verify email or update profile? > > > Verify email can allow verifying the email as long as the token hasn't > expired and the email address hasn't changed. It doesn't have to be a > one-time action. > > Update profile would have benefited on having a last updated attribute > on the user model. Would that screw up user storage provider SPI? If > we had that the token would be permitted as long as the user hasn't > been updated since the token was created. Some storage providers are either read-only or can only write a specific set of attributes. FYI: We already store user consent and account links no matter which provider is providing user info. > > * reset-password isn't actually reseting the password. It runs an > authentication flow. This flow could ask for additional information > (i.e. "mother's name, birthday, etc.") It could also reset multiple > credential types beyond password. > > * aren't you just replacing dependency on one type of replication > (user > session) with another (database)? > > * Aren't action tokens supposed to be independent of User sessions > anyways? > > * How can somebody continue with the login flow with an action token? 
> Aren't you still going to have to obtain the user session? > > > In that case they will open the link in the same browser and the > authentication session will be there. You still need some sort of session for reset-credential flow. > > I like the idea of action tokens mainly because they can be > independent > of a User Session. I just don't think it solves/helps with anything > cross-DC related. > > > It would at least for some things, for example verify email is simple > to do without any need to replicate anything. We have custom required actions. The SPI will now have to provide metadata on whether the action requires the update timestamp or not. Bill From bburke at redhat.com Tue Mar 28 09:25:32 2017 From: bburke at redhat.com (Bill Burke) Date: Tue, 28 Mar 2017 09:25:32 -0400 Subject: [keycloak-dev] Action tokens In-Reply-To: <261cf04a-cb3c-7073-d9ea-f3f0d30024a4@redhat.com> References: <72878714-952f-496c-a252-f57e9785ffd7@redhat.com> <53c4c187-377b-c59a-aa2a-9b349e25c069@redhat.com> <261cf04a-cb3c-7073-d9ea-f3f0d30024a4@redhat.com> Message-ID: <9edd9f56-5267-314e-958f-1e73104e4400@redhat.com> IMO, action tokens should be implemented correctly, as a feature, not as an optimization to support cross-DC. This means support for one time use policies, etc. On 3/28/17 5:56 AM, Hynek Mlnarik wrote: > >>> * Aren't action tokens supposed to be independent of User sessions >>> anyways? >>> * How can somebody continue with the login flow with an action token? >>> Aren't you still going to have to obtain the user session? > > Not have to, and yes, I can make use of it to continue in the session > in progress. I'm saying do you have to/should you verify that the action token originated from a specific session in order to continue the session? I don't know, just asking. These are all things you have to take into account and figure out how to easily hide or provide through the Authentication/Required Action SPI too. 
Bill From hmlnarik at redhat.com Tue Mar 28 09:46:27 2017 From: hmlnarik at redhat.com (Hynek Mlnarik) Date: Tue, 28 Mar 2017 15:46:27 +0200 Subject: [keycloak-dev] Action tokens In-Reply-To: <9edd9f56-5267-314e-958f-1e73104e4400@redhat.com> References: <72878714-952f-496c-a252-f57e9785ffd7@redhat.com> <53c4c187-377b-c59a-aa2a-9b349e25c069@redhat.com> <261cf04a-cb3c-7073-d9ea-f3f0d30024a4@redhat.com> <9edd9f56-5267-314e-958f-1e73104e4400@redhat.com> Message-ID: On Tue, Mar 28, 2017 at 3:25 PM, Bill Burke wrote: > IMO, action tokens should be implemented correctly, as a feature, not as an > optimization to support cross-DC. This means support for one time use > policies, etc. Okay, it seems that support for single use should be implemented as a service and then used by action tokens. So this can be implemented as a cache that would be shared across the cluster / DCs with as little information as possible. Preliminary implementation exists in [1], I'll plug that into current code. [1] https://github.com/keycloak/keycloak/pull/3918 > On 3/28/17 5:56 AM, Hynek Mlnarik wrote: >> >> >>>> * Aren't action tokens supposed to be independent of User sessions >>>> anyways? >>>> * How can somebody continue with the login flow with an action token? >>>> Aren't you still going to have to obtain the user session? >> >> >> Not have to, and yes, I can make use of it to continue in the session in >> progress. > > > I'm saying do you have to/should you verify that the action token originated > from a specific session in order to continue the session? I don't know, > just asking. These are all things you have to take into account and figure > out how to easily hide or provide through the Authentication/Required Action > SPI too. I don't think I have to (for instance expiration of the action token to reset password can be e.g. 2 days - much longer than that of a session). 
But I think that we should support the case when the user is in the middle of the flow and is asked to verify their e-mail - here we should continue with the next step in the flow. --Hynek From tair.sabirgaliev at gmail.com Tue Mar 28 14:31:38 2017 From: tair.sabirgaliev at gmail.com (Tair Sabirgaliev) Date: Tue, 28 Mar 2017 14:31:38 -0400 Subject: [keycloak-dev] Keycloak in Thoughtworks Technology Radar Message-ID: Hi Keycloak Devs! Keycloak is mentioned in Thoughtworks Technology Radar! Here is the link: https://assets.thoughtworks.com/assets/technology-radar-vol-16-en.pdf Just wanted to say thank you once again for this great project! Being in the "Assess" category of the TechRadar is one more sign of the project's success! From a.nekrasov at ftc.ru Wed Mar 29 03:11:26 2017 From: a.nekrasov at ftc.ru (Nekrasov Aleksandr) Date: Wed, 29 Mar 2017 07:11:26 +0000 Subject: [keycloak-dev] logout social providers? In-Reply-To: References: <3cd5fc40-7a1e-6adc-8ebe-b1d9ba11df5b@redhat.com> <4edacbaa-9d7a-55bd-e63a-8bc5cfe830f1@redhat.com> Message-ID: <9e5c6898e0b641e6a2831fb92a60660a@nut-mbx-1.win.ftc.ru> As a user scenario: the user logs in twice, first as a Keycloak user and the second time as an FB user to provide auth to Keycloak. And I, as that user, remember that I'm logged into FB. If Keycloak logout were synced with the social login, I would be surprised to see the login page again when opening FB. Also, I can be working with a Keycloak-secured app and FB at the same time (e.g. surfing news on FB), and it would be bad if the FB site logged me out when I log out from Keycloak. -----Original Message----- From: keycloak-dev-bounces at lists.jboss.org [mailto:keycloak-dev-bounces at lists.jboss.org] On Behalf Of Marek Posolda Sent: Monday, March 27, 2017 10:53 PM To: Bill Burke; Konstantin Gribov; keycloak-dev Subject: Re: [keycloak-dev] logout social providers? Both options are quite bad imo. But likely better to prefer security over usability... Btw.
It looks that currently we never propagate Keycloak logout to Facebook. And probably it's the same for other social networks. It looks that FB has some possibility to propagate logout and it's even able to distinguish the cases when the user was already logged in or not. There is this for the Javascript API [1] and some possibilities here, which may work for server-side apps too [2] . But it looks that you need a Facebook accessToken to logout. [1] https://developers.facebook.com/docs/reference/javascript/FB.logout [2] http://stackoverflow.com/questions/2764436/facebook-oauth-logout Marek On 27/03/17 16:24, Bill Burke wrote: > > Like marek said, you can't really tell if facebook was already logged > in or not. IMO, it is better to annoy the user than the alternative > of somebody taking over somebody's Facebook account because they > stepped away from the computer. > > > On 3/27/17 7:27 AM, Konstantin Gribov wrote: >> +1 to Marek, if you logged in in keycloak through identity provider >> like fb/google/github/whatever user'd be greatly annoyed by logging >> him out from fb (and all applications which used that login that >> don't go through keycloak) just because user logged out of some >> keycloak-integrated application. >> >> On Mon, 27 Mar 2017 at 10:13, Marek Posolda wrote: >> >> IMO the logout of child broker should be propagated to parent broker >> logout just in case, that parent broker was actively authenticated >> because of child broker. >> >> In other words, when I click to "Sign In with Facebook" on Keycloak >> login screen, but I am already authenticated to Facebook (hence no >> Facebook login screen is displayed), then logout from KC shouldn't >> logout me from Facebook IMO. >> >> However I don't know if it's possible to detect this. In case that >> Keycloak is used as parent broker, we have "auth_time" as a claim >> in the >> token, so we can decide if parent Keycloak broker was actively >> authenticated because of our request.
Not sure if Facebook, Google, >> Twitter and others OIDC providers have something like this. Also not >> even sure if Facebook (and other social providers) allow you to >> logout >> their session from the "child" app... >> >> Marek >> >> On 25/03/17 17:53, Bill Burke wrote: >> > Actually its just account linking that is effected. If you log in >> > through Facebook, you will log out of facebook. I assume we want a >> > logout to happen to linked accounts too. >> > >> > >> > On 3/25/17 12:43 PM, Bill Burke wrote: >> >> If a user logs in through Facebook or links to Facebook in the >> account >> >> service, should we logout the Facebook when the user logs out? My >> >> thinking is that we should otherwise that machine will keep >> facebook >> >> logged in. >> >> >> >> Bill >> >> >> > _______________________________________________ >> > keycloak-dev mailing list >> > keycloak-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> -- >> >> Best regards, >> Konstantin Gribov >> > _______________________________________________ keycloak-dev mailing list keycloak-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/keycloak-dev From thomas.darimont at googlemail.com Thu Mar 30 06:43:36 2017 From: thomas.darimont at googlemail.com (Thomas Darimont) Date: Thu, 30 Mar 2017 12:43:36 +0200 Subject: [keycloak-dev] Some questions from a Keycloak talk Message-ID: Hi group, yesterday I gave a talk about Keycloak at the Javaland conference in Germany. The talk was well attended (~100) and I got a lot of questions at the end. Some of the things people asked for were: Q1: Will Keycloak support JWT with EC signature? Q2: How to integrate Keycloak login forms or use custom login components in Single Page Applications? 
Q3: Will the Spring Boot Adapter make use of the Spring Security Adapter instead of using the Servlet Container specific implementations? Q4: Is there a reserved path for custom REST-Resources to avoid clashes with Keycloak REST-Resources in new releases? Q5: Is there documentation of all exposed Resource paths in Keycloak (apart from the REST API Docs)? Q6: Are there some guidelines for protecting a Keycloak server? Q7: The RH-SSO commercial offering states that it is based on the Open Source Community Edition of Keycloak and that one can get patches and support. Will those patches (e.g. for security vulnerabilities) also end up in the Community Edition? In addition to those questions, some people asked for a list of services using Keycloak. Since not many people say that they are using Keycloak, I found a nice way to find some Keycloak installations with a simple google search, just try: inurl:auth inurl:realms inurl:protocol Cheers, Thomas From sblanc at redhat.com Thu Mar 30 07:47:14 2017 From: sblanc at redhat.com (Sebastien Blanc) Date: Thu, 30 Mar 2017 13:47:14 +0200 Subject: [keycloak-dev] Some questions from a Keycloak talk In-Reply-To: References: Message-ID: On Thu, Mar 30, 2017 at 12:43 PM, Thomas Darimont < thomas.darimont at googlemail.com> wrote: > Hi group, > > yesterday I gave a talk about Keycloak at the Javaland conference in > Germany. > The talk was well attended (~100) and I got a lot of questions at the end. > Congrats! I also saw some nice tweets about your talk. Let me just answer the question about Spring Boot / Security > > Some of the things people asked for were: > Q1: Will Keycloak support JWT with EC signature? > > Q2: How to integrate Keycloak login forms or use custom login components > in Single Page Applications? > > Q3: Will the Spring Boot Adapter make use of the Spring Security Adapter > instead of > using the Servlet Container specific implementations?
> For now no, we had this discussion already some time ago, but we have a pretty fair amount of users who use Spring Boot without Spring Security, so it makes sense to keep it separated. But in the next release there will be some small enhancements to make the combination of the 2 smoother. > > Q4: Is there a reserved path for custom REST-Resources to avoid > clashes with Keycloak REST-Resources in new releases? > > Q5: Is there documentation of all exposed Resource paths in Keycloak > (apart from the REST API Docs)? > > Q6: Are there some guidelines for protecting a Keycloak server? > > Q7: The RH-SSO commercial offering states that it is based on the Open > Source > Community Edition of Keycloak and that one can get patches and support. > Will those patches (e.g. for security vulnerabilities) also end up in the > Community Edition? > > In addition to those questions, some people asked for a list of services > using Keycloak. > It's an easy one but one big user is ... Red Hat ;) > > Since not many people talk about that they are using Keycloak > I found a nice way to find some Keycloak installations with a simple > google search, just try: > > inurl:auth inurl:realms inurl:protocol > > Cheers, > Thomas > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From sthorger at redhat.com Thu Mar 30 08:24:29 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Thu, 30 Mar 2017 14:24:29 +0200 Subject: [keycloak-dev] Some questions from a Keycloak talk In-Reply-To: References: Message-ID: On 30 March 2017 at 12:43, Thomas Darimont wrote: > Hi group, > > yesterday I gave a talk about Keycloak at the Javaland conference in > Germany. > The talk was well attended (~100) and I got a lot of questions at the end. > > Some of the things people asked for were: > Q1: Will Keycloak support JWT with EC signature?
> We'd like to eventually, but currently this is in the backlog of features to add. > > Q2: How to integrate Keycloak login forms or use custom login components > in Single Page Applications? > Don't is the simple answer, use a redirect. It's possible to embed with an iframe, but awkward and has security implications is the slightly longer answer. > > Q3: Will the Spring Boot Adapter make use of the Spring Security Adapter > instead of > using the Servlet Container specific implementations? > > Q4: Is there a reserved path for custom REST-Resources to avoid > clashes with Keycloak REST-Resources in new releases? > Good question. No, there isn't. > > Q5: Is there documentation of all exposed Resource paths in Keycloak > (apart from the REST API Docs)? > No > > Q6: Are there some guidelines for protecting a Keycloak server? > Yes, somewhere in the admin guide (it's the last chapter if I remember correctly) > > Q7: The RH-SSO commercial offering states that it is based on the Open > Source > Community Edition of Keycloak and that one can get patches and support. > Will those patches (e.g. for security vulnerabilities) also end up in the > Community Edition? > Yes, but there are key differences here. In RH-SSO we can issue security patches and allow customers to patch the current installation before anything is made public. Only after customers have had a chance to patch will we provide the fix in community, and in most cases (unless it's very bad) you will also have to wait and upgrade to the next release, as we don't in general do micro releases in community. > > In addition to those questions, some people asked for a list of services > using Keycloak. > > Since not many people talk about that they are using Keycloak > I found a nice way to find some Keycloak installations with a simple > google search, just try: > > inurl:auth inurl:realms inurl:protocol > Looks like our robots.txt isn't stopping all indexing in Google for some reason. That's not good.
In any case that list doesn't show all users of Keycloak, as there are plenty I know about that are not revealed by that search. We don't distribute a list of customers of RH-SSO, nor do we go around announcing who uses Keycloak either. > > Cheers, > Thomas > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From bruno at abstractj.org Thu Mar 30 15:04:03 2017 From: bruno at abstractj.org (Bruno Oliveira) Date: Thu, 30 Mar 2017 19:04:03 +0000 Subject: [keycloak-dev] Keycloak builders Message-ID: Hi, While writing tests for the quickstarts, we started to create some code which I believe overlaps with what ClientBuilder, RealmBuilder, and others do today[1]. I would like to expose these builders to make people's lives easier. There are some options: 1. Move the builders available here[2] to keycloak-core. 2. Move it to keycloak-test-helper 3. Do nothing and duplicate code that matters into keycloak-test-helper I know that doing 1 or 2 is just silly if you think about quickstarts. But at the same time, others can benefit from a more fluent API to programmatically create users, realms... Thoughts? [1] - https://github.com/keycloak/keycloak/blob/master/misc/keycloak-test-helper/src/main/java/org/keycloak/helper/TestsHelper.java [2] - https://github.com/abstractj/keycloak/tree/119435ac76c17d3a66590df0f87365f64e3395cd/testsuite/integration-arquillian/tests/base/src/test/java/org/keycloak/testsuite/util From Gideon.Caranzo at gemalto.com Thu Mar 30 17:54:11 2017 From: Gideon.Caranzo at gemalto.com (Caranzo Gideon) Date: Thu, 30 Mar 2017 21:54:11 +0000 Subject: [keycloak-dev] OIDC Secret Encryption Message-ID: Hi, Is there a way to extend Keycloak so that we can encrypt the OIDC secret? Or is there an existing feature for this?
Thanks, Gideon ________________________________ This message and any attachments are intended solely for the addressees and may contain confidential information. Any unauthorized use or disclosure, either whole or partial, is prohibited. E-mails are susceptible to alteration. Our company shall not be liable for the message if altered, changed or falsified. If you are not the intended recipient of this message, please delete it and notify the sender. Although all reasonable efforts have been made to keep this transmission free from viruses, the sender will not be liable for damages caused by a transmitted virus. From mposolda at redhat.com Fri Mar 31 03:17:19 2017 From: mposolda at redhat.com (Marek Posolda) Date: Fri, 31 Mar 2017 09:17:19 +0200 Subject: [keycloak-dev] Action tokens In-Reply-To: References: <72878714-952f-496c-a252-f57e9785ffd7@redhat.com> <53c4c187-377b-c59a-aa2a-9b349e25c069@redhat.com> <261cf04a-cb3c-7073-d9ea-f3f0d30024a4@redhat.com> <9edd9f56-5267-314e-958f-1e73104e4400@redhat.com> Message-ID: <3d780a48-472a-10e2-542d-4d8c89447279@redhat.com> I was thinking whether we could have a variant of the interactive email verification flows ("interactive" means those not triggered by an admin, but by the user himself during the authentication process) like this: - User triggers the flow (For example by clicking "Forgot password" on the login screen in case of reset-password. Other actions like identity-broker linking verification are triggered automatically during the authentication flow etc) - Browser displays "We just sent you an email with the generated code. Please type this code here: ". The input field will be displayed too. - Email doesn't contain any link. Just the generated code. The user needs to copy/paste it into the field in the browser and after submitting, the flow continues. Advantages: - No need to care about spam filters, as there is no link in the email - No need to care if it's the same or a different browser. The flow will always continue in the same browser - Cross-dc solved.
It would always be the same browser, so we just need to keep the code in the authentication session. No action-tokens or any cross-DC replication needed Does it suck from the usability perspective? For me personally not, as when I deal with some web page that sends me those verification emails, I usually just copy/paste the link into the browser instead of directly clicking on it (yes, because I don't know in which browser it will be opened and I usually want to continue in the same browser). We will still need action-tokens for the admin actions though. For the interactive actions, the admin will have the possibility to choose whether he wants action-tokens (with a link in the email etc) or this optimized flow. I can see this helping with spam and cross-DC performance, so IMO it makes sense for some deployments. Marek On 28/03/17 15:46, Hynek Mlnarik wrote: > On Tue, Mar 28, 2017 at 3:25 PM, Bill Burke wrote: >> IMO, action tokens should be implemented correctly, as a feature, not as an >> optimization to support cross-DC. This means support for one time use >> policies, etc. > Okay, it seems that support for single use should be implemented as a > service and then used by action tokens. > > So this can be implemented as a cache that would be shared across the > cluster / DCs with as little information as possible. Preliminary > implementation exists in [1], I'll plug that into current code. > > [1] https://github.com/keycloak/keycloak/pull/3918 > >> On 3/28/17 5:56 AM, Hynek Mlnarik wrote: >>> >>>>> * Aren't action tokens supposed to be independent of User sessions >>>>> anyways? >>>>> * How can somebody continue with the login flow with an action token? >>>>> Aren't you still going to have to obtain the user session? >>> >>> Not have to, and yes, I can make use of it to continue in the session in >>> progress. >> >> I'm saying do you have to/should you verify that the action token originated >> from a specific session in order to continue the session?
I don't know, >> just asking. These are all things you have to take into account and figure >> out how to easily hide or provide through the Authentication/Required Action >> SPI too. > I don't think I have to (for instance expiration of the action token > to reset password can be e.g. 2 days - much longer than that of a > session). But I think that we should support case when the user is in > the middle of the flow and is asked to verify their e-mail - here we > should continue with the next step in the flow. > > --Hynek > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev

From sthorger at redhat.com Fri Mar 31 03:48:14 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Fri, 31 Mar 2017 09:48:14 +0200 Subject: [keycloak-dev] Keycloak builders In-Reply-To: References: Message-ID:

The ideal place would probably be keycloak-core, but that would require some refactoring and adding missing things, and I'm also not sure all builders should be included.

On 30 March 2017 at 21:04, Bruno Oliveira wrote: > Hi, > > While writing tests for the quickstarts, we started to create some code > which I believe overlaps with the same thing ClientBuilder, > RealmBuilder...and other do today[1]. I would like to expose these builders > to make people's life easy. > > There are some options: > > 1. Move the builders available here[2] to keycloak-core. > 2. Move it to keycloak-test-helper > 3. Do nothing and duplicate code that matters into keycloak-test-helper > > I know that doing 1 or 2, is just silly if you think about quickstarts. But > at the same time, others can benefit from a more fluent API, to > programatically create users, realms... > > Thoughts?
> > [1] - > https://github.com/keycloak/keycloak/blob/master/misc/ > keycloak-test-helper/src/main/java/org/keycloak/helper/TestsHelper.java > [2] - > https://github.com/abstractj/keycloak/tree/119435ac76c17d3a66590df0f87365 > f64e3395cd/testsuite/integration-arquillian/tests/base/src/test/java/org/ > keycloak/testsuite/util > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > From sthorger at redhat.com Fri Mar 31 03:48:57 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Fri, 31 Mar 2017 09:48:57 +0200 Subject: [keycloak-dev] Keycloak builders In-Reply-To: References: Message-ID: Could even have a static method on ClientRepresentation#build/create or something so it's easier to find. On 31 March 2017 at 09:48, Stian Thorgersen wrote: > Ideal would probably be keycloak-core, but that would require some > refactoring, adding new missing things and also I'm not sure all builders > should be included. > > On 30 March 2017 at 21:04, Bruno Oliveira wrote: > >> Hi, >> >> While writing tests for the quickstarts, we started to create some code >> which I believe overlaps with the same thing ClientBuilder, >> RealmBuilder...and other do today[1]. I would like to expose these >> builders >> to make people's life easy. >> >> There are some options: >> >> 1. Move the builders available here[2] to keycloak-core. >> 2. Move it to keycloak-test-helper >> 3. Do nothing and duplicate code that matters into keycloak-test-helper >> >> I know that doing 1 or 2, is just silly if you think about quickstarts. >> But >> at the same time, others can benefit from a more fluent API, to >> programatically create users, realms... >> >> Thoughts? 
>> >> [1] - >> https://github.com/keycloak/keycloak/blob/master/misc/keyclo >> ak-test-helper/src/main/java/org/keycloak/helper/TestsHelper.java >> [2] - >> https://github.com/abstractj/keycloak/tree/119435ac76c17d3a6 >> 6590df0f87365f64e3395cd/testsuite/integration-arquillian/tests/base/src/ >> test/java/org/keycloak/testsuite/util >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> > > From sthorger at redhat.com Fri Mar 31 03:49:39 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Fri, 31 Mar 2017 09:49:39 +0200 Subject: [keycloak-dev] Keycloak builders In-Reply-To: References: Message-ID: An example of things that would need to be refactored: https://github.com/abstractj/keycloak/blob/119435ac76c17d3a66590df0f87365f64e3395cd/testsuite/integration-arquillian/tests/base/src/test/java/org/keycloak/testsuite/util/RealmBuilder.java#L82 On 31 March 2017 at 09:48, Stian Thorgersen wrote: > Could even have a static method on ClientRepresentation#build/create or > something so it's easier to find. > > On 31 March 2017 at 09:48, Stian Thorgersen wrote: > >> Ideal would probably be keycloak-core, but that would require some >> refactoring, adding new missing things and also I'm not sure all builders >> should be included. >> >> On 30 March 2017 at 21:04, Bruno Oliveira wrote: >> >>> Hi, >>> >>> While writing tests for the quickstarts, we started to create some code >>> which I believe overlaps with the same thing ClientBuilder, >>> RealmBuilder...and other do today[1]. I would like to expose these >>> builders >>> to make people's life easy. >>> >>> There are some options: >>> >>> 1. Move the builders available here[2] to keycloak-core. >>> 2. Move it to keycloak-test-helper >>> 3. Do nothing and duplicate code that matters into keycloak-test-helper >>> >>> I know that doing 1 or 2, is just silly if you think about quickstarts. 
>>> But >>> at the same time, others can benefit from a more fluent API, to >>> programatically create users, realms... >>> >>> Thoughts? >>> >>> [1] - >>> https://github.com/keycloak/keycloak/blob/master/misc/keyclo >>> ak-test-helper/src/main/java/org/keycloak/helper/TestsHelper.java >>> [2] - >>> https://github.com/abstractj/keycloak/tree/119435ac76c17d3a6 >>> 6590df0f87365f64e3395cd/testsuite/integration-arquillian/ >>> tests/base/src/test/java/org/keycloak/testsuite/util >>> _______________________________________________ >>> keycloak-dev mailing list >>> keycloak-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >>> >> >> > From sthorger at redhat.com Fri Mar 31 04:04:38 2017 From: sthorger at redhat.com (Stian Thorgersen) Date: Fri, 31 Mar 2017 10:04:38 +0200 Subject: [keycloak-dev] Action tokens In-Reply-To: <3d780a48-472a-10e2-542d-4d8c89447279@redhat.com> References: <72878714-952f-496c-a252-f57e9785ffd7@redhat.com> <53c4c187-377b-c59a-aa2a-9b349e25c069@redhat.com> <261cf04a-cb3c-7073-d9ea-f3f0d30024a4@redhat.com> <9edd9f56-5267-314e-958f-1e73104e4400@redhat.com> <3d780a48-472a-10e2-542d-4d8c89447279@redhat.com> Message-ID: I can see the benefits in cross-dc, but I'm not convinced due to the following disadvantages: * It would change the current behaviour * It would need different implementation if the action was triggered by a user or an admin * Users would see different behaviour depending on who triggered the action Personally I don't like the copy/pasting approaches and much prefer simply clicking on a link. On 31 March 2017 at 09:17, Marek Posolda wrote: > I was thinking if we can have the variant of the interactive email > verification flows ("interactive" means those not triggered by admin, > but by user himself during authentication process) like this: > > - User triggers the flow (For example by click "Forget password" on > login screen in case of reset-password. 
Other actions like > identity-broker linking verification are triggered automatically during > authentication flow etc) > - Browser displays "We just sent you an email with the generated code. > Please type this code here: ". The input field will be displayed too. > - Email doesn't contain any link. Just the generated code. User needs to > copy/paste it to the field in the browser and after submit, the flow > continues. > > Advantages: > - No need to care about spam filters. As no link in the email > - No need to care if it's same or different browser. Flow will always > continue in same browser > - Cross-dc solved. It would be always same browser, so we just need to > keep the code in authentication session. No action-tokens or any > cross-dc replication needed > > Does it sucks from the usability perspective? For me personally not, as > when I need to deal with some web-page, which sends me those > verification emails, I usually just copy/paste the link into the browser > instead of directly clicking on it (yes, because I don't know in which > browser it will be opened and I usually want to continue in the same > browser). > > We will still need action-tokens for the admin actions though. For the > interactive actions, admin will have possibility to choose if he wants > action-tokens (with link in the email etc) or this optimized flow. I can > see this can help with spam, cross-dc performance, so IMO makes sense > for some deployments. > > Marek > > > On 28/03/17 15:46, Hynek Mlnarik wrote: > > On Tue, Mar 28, 2017 at 3:25 PM, Bill Burke wrote: > >> IMO, action tokens should be implemented correctly, as a feature, not > as an > >> optimization to support cross-DC. This means support for one time use > >> policies, etc. > > Okay, it seems that support for single use should be implemented as a > > service and then used by action tokens. 
> > > > So this can be implemented as a cache that would be shared across the > > cluster / DCs with as little information as possible. Preliminary > > implementation exists in [1], I'll plug that into current code. > > > > [1] https://github.com/keycloak/keycloak/pull/3918 > > > >> On 3/28/17 5:56 AM, Hynek Mlnarik wrote: > >>> > >>>>> * Aren't action tokens supposed to be independent of User sessions > >>>>> anyways? > >>>>> * How can somebody continue with the login flow with an action token? > >>>>> Aren't you still going to have to obtain the user session? > >>> > >>> Not have to, and yes, I can make use of it to continue in the session > in > >>> progress. > >> > >> I'm saying do you have to/should you verify that the action token > originated > >> from a specific session in order to continue the session? I don't know, > >> just asking. These are all things you have to take into account and > figure > >> out how to easily hide or provide through the Authentication/Required > Action > >> SPI too. > > I don't think I have to (for instance expiration of the action token > > to reset password can be e.g. 2 days - much longer than that of a > > session). But I think that we should support case when the user is in > > the middle of the flow and is asked to verify their e-mail - here we > > should continue with the next step in the flow. 
> > > > --Hynek > > _______________________________________________ > > keycloak-dev mailing list > > keycloak-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev >

From bruno at abstractj.org Fri Mar 31 06:33:13 2017 From: bruno at abstractj.org (Bruno Oliveira) Date: Fri, 31 Mar 2017 10:33:13 +0000 Subject: [keycloak-dev] Keycloak builders In-Reply-To: References: Message-ID:

I realized that while giving it a try, so I'm thinking about how we would refactor it into core. Which classes would we like to see there?

* ClientBuilder.java
* CredentialBuilder.java
* ExecutionBuilder.java
* FederatedIdentityBuilder.java
* FlowBuilder.java
* GroupBuilder.java
* IdentityProviderBuilder.java
* RealmBuilder.java
* RoleBuilder.java
* RolesBuilder.java
* UserBuilder.java
* UserFederationProviderBuilder.java

For my selfish purposes I just need ClientBuilder, RealmBuilder, RoleBuilder and UserBuilder. But we may want to refactor more.

> Could even have a static method on ClientRepresentation#build/create or something so it's easier to find.

That would be really nice. I believe that RealmBuilder would be the tricky one to refactor, because it depends on EventListenerProviderFactory ( https://github.com/abstractj/keycloak/blob/fc9dbcf6cb1daa5e19bb3214012ed44154104cb0/testsuite/integration-arquillian/servers/auth-server/services/testsuite-providers/src/main/java/org/keycloak/testsuite/events/EventsListenerProviderFactory.java#L29-L29 ).
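As a rough illustration of the fluent style being discussed, here is a self-contained sketch. The representation and builder below are simplified stand-ins, not the real ClientRepresentation or the testsuite's ClientBuilder; the static `create` entry point mirrors the `ClientRepresentation#build/create` idea from the thread.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a keycloak-core representation; the real
// ClientRepresentation has many more fields.
class ClientRepresentation {
    String clientId;
    boolean directAccessGrantsEnabled;
    List<String> redirectUris = new ArrayList<>();
}

// A minimal fluent builder in the style of the testsuite's ClientBuilder.
class ClientBuilder {
    private final ClientRepresentation rep = new ClientRepresentation();

    // Static entry point, so the builder is easy to discover.
    static ClientBuilder create(String clientId) {
        ClientBuilder b = new ClientBuilder();
        b.rep.clientId = clientId;
        return b;
    }

    ClientBuilder directAccessGrants() {
        rep.directAccessGrantsEnabled = true;
        return this;
    }

    ClientBuilder redirectUri(String uri) {
        rep.redirectUris.add(uri);
        return this;
    }

    ClientRepresentation build() {
        return rep;
    }
}

public class BuilderSketch {
    public static void main(String[] args) {
        // Programmatically assemble a client in one readable chain.
        ClientRepresentation client = ClientBuilder.create("my-app")
                .directAccessGrants()
                .redirectUri("http://localhost:8080/app/*")
                .build();
        System.out.println(client.clientId + " with "
                + client.redirectUris.size() + " redirect URI(s)");
    }
}
```

The chain reads close to the test's intent, which is the main argument for exposing such builders beyond the testsuite.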
On Fri, Mar 31, 2017 at 4:49 AM Stian Thorgersen wrote: > An example of things that would need to be refactored: > > https://github.com/abstractj/keycloak/blob/119435ac76c17d3a66590df0f87365f64e3395cd/testsuite/integration-arquillian/tests/base/src/test/java/org/keycloak/testsuite/util/RealmBuilder.java#L82 > > On 31 March 2017 at 09:48, Stian Thorgersen wrote: > > Could even have a static method on ClientRepresentation#build/create or > something so it's easier to find. > > On 31 March 2017 at 09:48, Stian Thorgersen wrote: > > Ideal would probably be keycloak-core, but that would require some > refactoring, adding new missing things and also I'm not sure all builders > should be included. > > On 30 March 2017 at 21:04, Bruno Oliveira wrote: > > Hi, > > While writing tests for the quickstarts, we started to create some code > which I believe overlaps with the same thing ClientBuilder, > RealmBuilder...and other do today[1]. I would like to expose these builders > to make people's life easy. > > There are some options: > > 1. Move the builders available here[2] to keycloak-core. > 2. Move it to keycloak-test-helper > 3. Do nothing and duplicate code that matters into keycloak-test-helper > > I know that doing 1 or 2, is just silly if you think about quickstarts. But > at the same time, others can benefit from a more fluent API, to > programatically create users, realms... > > Thoughts? 
> > [1] - > > https://github.com/keycloak/keycloak/blob/master/misc/keycloak-test-helper/src/main/java/org/keycloak/helper/TestsHelper.java > [2] - > > https://github.com/abstractj/keycloak/tree/119435ac76c17d3a66590df0f87365f64e3395cd/testsuite/integration-arquillian/tests/base/src/test/java/org/keycloak/testsuite/util > _______________________________________________ > keycloak-dev mailing list > keycloak-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/keycloak-dev > > > > >

From bburke at redhat.com Fri Mar 31 09:11:57 2017 From: bburke at redhat.com (Bill Burke) Date: Fri, 31 Mar 2017 09:11:57 -0400 Subject: [keycloak-dev] Action tokens In-Reply-To: <3d780a48-472a-10e2-542d-4d8c89447279@redhat.com> References: <72878714-952f-496c-a252-f57e9785ffd7@redhat.com> <53c4c187-377b-c59a-aa2a-9b349e25c069@redhat.com> <261cf04a-cb3c-7073-d9ea-f3f0d30024a4@redhat.com> <9edd9f56-5267-314e-958f-1e73104e4400@redhat.com> <3d780a48-472a-10e2-542d-4d8c89447279@redhat.com> Message-ID: <721f53ec-c1ab-85a3-45cf-0f084b8f9df6@redhat.com>

I implemented it this way initially with the Auth SPI rewrite and was vetoed.

Personally I don't like the link because it opens up a new browser tab or window. Plus, with the code approach there's no way an attacker can spoof the URL and send people rogue emails. But the link approach is better if I'm on a mobile device.

On 3/31/17 3:17 AM, Marek Posolda wrote: > I was thinking if we can have the variant of the interactive email > verification flows ("interactive" means those not triggered by admin, > but by user himself during authentication process) like this: > > - User triggers the flow (For example by click "Forget password" on > login screen in case of reset-password. Other actions like > identity-broker linking verification are triggered automatically > during authentication flow etc) > - Browser displays "We just sent you an email with the generated code. > Please type this code here: ".
The input field will be displayed too. > - Email doesn't contain any link. Just the generated code. User needs > to copy/paste it to the field in the browser and after submit, the > flow continues. > > Advantages: > - No need to care about spam filters. As no link in the email > - No need to care if it's same or different browser. Flow will always > continue in same browser > - Cross-dc solved. It would be always same browser, so we just need to > keep the code in authentication session. No action-tokens or any > cross-dc replication needed > > Does it sucks from the usability perspective? For me personally not, > as when I need to deal with some web-page, which sends me those > verification emails, I usually just copy/paste the link into the > browser instead of directly clicking on it (yes, because I don't know > in which browser it will be opened and I usually want to continue in > the same browser). > > We will still need action-tokens for the admin actions though. For the > interactive actions, admin will have possibility to choose if he wants > action-tokens (with link in the email etc) or this optimized flow. I > can see this can help with spam, cross-dc performance, so IMO makes > sense for some deployments. > > Marek > > > On 28/03/17 15:46, Hynek Mlnarik wrote: >> On Tue, Mar 28, 2017 at 3:25 PM, Bill Burke wrote: >>> IMO, action tokens should be implemented correctly, as a feature, >>> not as an >>> optimization to support cross-DC. This means support for one time use >>> policies, etc. >> Okay, it seems that support for single use should be implemented as a >> service and then used by action tokens. >> >> So this can be implemented as a cache that would be shared across the >> cluster / DCs with as little information as possible. Preliminary >> implementation exists in [1], I'll plug that into current code. 
>> >> [1] https://github.com/keycloak/keycloak/pull/3918 >> >>> On 3/28/17 5:56 AM, Hynek Mlnarik wrote: >>>> >>>>>> * Aren't action tokens supposed to be independent of User sessions >>>>>> anyways? >>>>>> * How can somebody continue with the login flow with an action >>>>>> token? >>>>>> Aren't you still going to have to obtain the user session? >>>> >>>> Not have to, and yes, I can make use of it to continue in the >>>> session in >>>> progress. >>> >>> I'm saying do you have to/should you verify that the action token >>> originated >>> from a specific session in order to continue the session? I don't >>> know, >>> just asking. These are all things you have to take into account and >>> figure >>> out how to easily hide or provide through the >>> Authentication/Required Action >>> SPI too. >> I don't think I have to (for instance expiration of the action token >> to reset password can be e.g. 2 days - much longer than that of a >> session). But I think that we should support case when the user is in >> the middle of the flow and is asked to verify their e-mail - here we >> should continue with the next step in the flow. >> >> --Hynek >> _______________________________________________ >> keycloak-dev mailing list >> keycloak-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/keycloak-dev > > From mposolda at redhat.com Fri Mar 31 11:46:29 2017 From: mposolda at redhat.com (Marek Posolda) Date: Fri, 31 Mar 2017 17:46:29 +0200 Subject: [keycloak-dev] Action tokens In-Reply-To: <721f53ec-c1ab-85a3-45cf-0f084b8f9df6@redhat.com> References: <72878714-952f-496c-a252-f57e9785ffd7@redhat.com> <53c4c187-377b-c59a-aa2a-9b349e25c069@redhat.com> <261cf04a-cb3c-7073-d9ea-f3f0d30024a4@redhat.com> <9edd9f56-5267-314e-958f-1e73104e4400@redhat.com> <3d780a48-472a-10e2-542d-4d8c89447279@redhat.com> <721f53ec-c1ab-85a3-45cf-0f084b8f9df6@redhat.com> Message-ID: <7e8a0041-45c1-5d18-8d9f-25fa22b38109@redhat.com> Ah, yes. Copy-paste would suck on mobile. 
It seems we need to support action-tokens anyway, but I was thinking about the "copy-paste" approach as an option the admin could enable if he wants to optimize cross-DC performance and avoid issues with spam filters. Hopefully it would be possible to have an abstraction that would allow authenticators to easily use either approach. But I'm not sure it makes sense, as usability on mobile looks like a blocker...

Marek

On 31/03/17 15:11, Bill Burke wrote: > I implemented it this way initially with the Auth SPI rewrite and was > vetoed. > > Personally I dont like the link because it opens up a new browser tab > or window. Plus with the code approach there's no way an attacker > can spoof the url and send people rogue emails. But, the link > approach is better if I'm on a mobile device. > > > On 3/31/17 3:17 AM, Marek Posolda wrote: >> I was thinking if we can have the variant of the interactive email >> verification flows ("interactive" means those not triggered by admin, >> but by user himself during authentication process) like this: >> >> - User triggers the flow (For example by click "Forget password" on >> login screen in case of reset-password. Other actions like >> identity-broker linking verification are triggered automatically >> during authentication flow etc) >> - Browser displays "We just sent you an email with the generated >> code. Please type this code here: ". The input field will be >> displayed too. >> - Email doesn't contain any link. Just the generated code. User needs >> to copy/paste it to the field in the browser and after submit, the >> flow continues. >> >> Advantages: >> - No need to care about spam filters. As no link in the email >> - No need to care if it's same or different browser. Flow will always >> continue in same browser >> - Cross-dc solved. It would be always same browser, so we just need >> to keep the code in authentication session.
No action-tokens or any >> cross-dc replication needed >> >> Does it sucks from the usability perspective? For me personally not, >> as when I need to deal with some web-page, which sends me those >> verification emails, I usually just copy/paste the link into the >> browser instead of directly clicking on it (yes, because I don't know >> in which browser it will be opened and I usually want to continue in >> the same browser). >> >> We will still need action-tokens for the admin actions though. For >> the interactive actions, admin will have possibility to choose if he >> wants action-tokens (with link in the email etc) or this optimized >> flow. I can see this can help with spam, cross-dc performance, so IMO >> makes sense for some deployments. >> >> Marek >> >> >> On 28/03/17 15:46, Hynek Mlnarik wrote: >>> On Tue, Mar 28, 2017 at 3:25 PM, Bill Burke wrote: >>>> IMO, action tokens should be implemented correctly, as a feature, >>>> not as an >>>> optimization to support cross-DC. This means support for one time use >>>> policies, etc. >>> Okay, it seems that support for single use should be implemented as a >>> service and then used by action tokens. >>> >>> So this can be implemented as a cache that would be shared across the >>> cluster / DCs with as little information as possible. Preliminary >>> implementation exists in [1], I'll plug that into current code. >>> >>> [1] https://github.com/keycloak/keycloak/pull/3918 >>> >>>> On 3/28/17 5:56 AM, Hynek Mlnarik wrote: >>>>> >>>>>>> * Aren't action tokens supposed to be independent of User sessions >>>>>>> anyways? >>>>>>> * How can somebody continue with the login flow with an action >>>>>>> token? >>>>>>> Aren't you still going to have to obtain the user session? >>>>> >>>>> Not have to, and yes, I can make use of it to continue in the >>>>> session in >>>>> progress. 
>>>> >>>> I'm saying do you have to/should you verify that the action token >>>> originated >>>> from a specific session in order to continue the session? I don't >>>> know, >>>> just asking. These are all things you have to take into account >>>> and figure >>>> out how to easily hide or provide through the >>>> Authentication/Required Action >>>> SPI too. >>> I don't think I have to (for instance expiration of the action token >>> to reset password can be e.g. 2 days - much longer than that of a >>> session). But I think that we should support case when the user is in >>> the middle of the flow and is asked to verify their e-mail - here we >>> should continue with the next step in the flow. >>> >>> --Hynek >>> _______________________________________________ >>> keycloak-dev mailing list >>> keycloak-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/keycloak-dev >> >> >
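The copy/paste flow discussed in this thread - generate a short code, keep it in the authentication session, and compare it when the user submits - can be sketched roughly as follows. The AuthSession stand-in and method names are illustrative only, not Keycloak's actual Authentication SPI; the point is that the code never leaves the session, so no action-token or cross-DC replication is involved.

```java
import java.security.SecureRandom;
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for the authentication-session notes where the
// code would be kept. Because the flow always continues in the same
// browser/session, nothing needs to be replicated across DCs.
class AuthSession {
    private final Map<String, String> notes = new HashMap<>();
    void setNote(String k, String v) { notes.put(k, v); }
    String getNote(String k) { return notes.get(k); }
}

public class EmailCodeFlow {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Generate a short numeric code and store it in the session; in a
    // real flow the code would be sent by email, never shown on screen.
    static String startVerification(AuthSession session) {
        String code = String.format("%06d", RANDOM.nextInt(1_000_000));
        session.setNote("email_code", code);
        return code;
    }

    // Called when the user submits the code copied from the email.
    static boolean verify(AuthSession session, String submitted) {
        String expected = session.getNote("email_code");
        return expected != null && expected.equals(submitted);
    }

    public static void main(String[] args) {
        AuthSession session = new AuthSession();
        String code = startVerification(session);
        System.out.println("verified: " + verify(session, code));
    }
}
```

A one-time-use policy, as raised earlier in the thread, could be layered on by clearing the note after a successful or failed comparison.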