[security-dev] managing OTP
Pedro Igor Silva
psilva at redhat.com
Mon Aug 12 19:21:04 EDT 2013
Wow, that is a lot of experience :) Thanks, we definitely need to take all this knowledge into account.
I understand your thoughts about the store federation issue you mentioned. Actually, I think that is how most IDM solutions work: everything is pushed to and pulled from a single store, with built-in connectors for each repository.
----- Original Message -----
From: "Boleslaw Dawidowicz" <bdawidow at redhat.com>
To: "Pedro Igor Silva" <psilva at redhat.com>
Cc: "Anil Saldhana" <Anil.Saldhana at redhat.com>, security-dev at lists.jboss.org
Sent: Monday, August 12, 2013 7:55:46 PM
Subject: Re: [security-dev] managing OTP
Sadly we never did it fully for IDM alone - only for the portal as a whole. We were also not measuring particular IDM operations - rather optimising for portal usage. Additionally we tuned a lot with 4 levels of cache altogether - Hibernate, identity store, IDM API, portal API - plus some additional workarounds for corner cases. Not really something to compare against :/
I think for a web application an acceptable page load time is around 2s, and 4-5s is probably the worst case scenario nowadays. For certain rare administrative tasks you can accept 10-20s. For the first page load after app boot it can be a bit more, as you need to populate the cache; still, even for an enterprise class app with a huge store, 8s for the first load is a max IMO. The IDM layer needs to make this achievable, taking into account that the web app will add a bit of its own overhead on top. If you test with the proposed large store and are safely below those numbers, I think it is enough initially.
Just to give an example of why I suggested such a number of users: IDM 1.x has a single table for attributes related to any identity type. With 1M users and 20 attributes each you end up with 20M rows. Plus groups… You can then easily identify any wrong join you make, even just playing with a simple CRUD demo app.
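To make the numbers concrete, here is a rough sketch of what such a shared attribute table looks like when mapped with JPA. The entity and column names are invented for illustration only - they are not the actual IDM 1.x (or 2.5) schema - but the shape is the same: one row per (identity, attribute) pair, so 1M users times 20 attributes really is 20M rows in a single table.

    import java.util.List;
    import javax.persistence.*;

    // Illustrative only: one attribute table shared by every identity type.
    @Entity
    @Table(name = "IDENTITY_OBJECT")
    class IdentityObject {
        @Id Long id;
        String name;                      // login name, group name, ...
        String type;                      // USER, GROUP, ROLE, ...

        @OneToMany(mappedBy = "owner", fetch = FetchType.LAZY)
        List<IdentityAttribute> attributes;
    }

    @Entity
    @Table(name = "IDENTITY_ATTRIBUTE")   // grows to (identities * attributes) rows
    class IdentityAttribute {
        @Id Long id;
        @ManyToOne(fetch = FetchType.LAZY) IdentityObject owner;
        @Column(name = "ATTR_NAME") String name;
        @Column(name = "ATTR_VALUE") String value;
        // Without an index on (owner, ATTR_NAME) or (ATTR_NAME, ATTR_VALUE),
        // every attribute filter turns into a scan over the whole table.
    }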
A few things that I recall about the most important operations to check:
- User creation - checking uniqueness can be costly, especially with the common requirement to check email uniqueness separately from username uniqueness.
- Authentication
- Authorisation check of a given user based on group/role membership
- Queries with attribute filters and pagination - typical for a management console
- Queries based on role membership (plus filters and pagination) - same
Those are tricky with a more complex schema but still achievable with some adjustments and SQL/HQL profiling; a sketch of the kind of query I mean is below.
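Something like this - written against the illustrative entities above, not against the real PicketLink mapping - is what I would put under the profiler first:

    import java.util.List;
    import javax.persistence.EntityManager;
    import javax.persistence.TypedQuery;

    class UserQueries {
        // Paginated lookup of users filtered by a single attribute value, the way
        // a management console lists users. With 20M attribute rows this join is
        // where a missing index or an accidental cartesian product shows up.
        static List<IdentityObject> findByAttribute(EntityManager em, String attrName,
                                                    String attrValue, int offset, int pageSize) {
            TypedQuery<IdentityObject> q = em.createQuery(
                "select o from IdentityObject o join o.attributes a "
                    + "where o.type = 'USER' and a.name = :n and a.value = :v "
                    + "order by o.name",
                IdentityObject.class);
            return q.setParameter("n", attrName)
                    .setParameter("v", attrValue)
                    .setFirstResult(offset)      // pagination
                    .setMaxResults(pageSize)
                    .getResultList();
        }
    }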
Most of the fun is really with LDAP, and even more so if you mix it with DB. A few things are easier to achieve in an efficient manner - for example attribute filters are fast by design, not like joins. Some are not doable at all - like a paginated query based on membership and filtered by attribute value. Some LDAP servers don't support all the controls useful in such scenarios. The real fun, however, is when you have part of the data in DB and part in LDAP and need to merge it. This is a dead end - you can still work around and optimize for a few scenarios, but nothing more.
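To illustrate the control-support point: server-side paging over LDAP is normally done with the RFC 2696 paged results control, and whether it is honoured depends entirely on the server. A minimal JNDI sketch (the host, base DN and filter are placeholders):

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingEnumeration;
    import javax.naming.directory.SearchControls;
    import javax.naming.directory.SearchResult;
    import javax.naming.ldap.*;

    class PagedLdapSearch {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://localhost:389");   // placeholder test server

            LdapContext ctx = new InitialLdapContext(env, null);
            SearchControls sc = new SearchControls();
            sc.setSearchScope(SearchControls.SUBTREE_SCOPE);

            int pageSize = 500;
            byte[] cookie = null;
            do {
                // Ask for the next page; CRITICAL so an unsupported control fails loudly.
                ctx.setRequestControls(new Control[] {
                    new PagedResultsControl(pageSize, cookie, Control.CRITICAL) });

                NamingEnumeration<SearchResult> results = ctx.search(
                    "ou=People,dc=example,dc=com",        // placeholder base DN
                    "(objectClass=inetOrgPerson)", sc);
                while (results.hasMore()) {
                    results.next();                       // process the entry
                }

                // The response control carries the cookie for the next page.
                cookie = null;
                Control[] controls = ctx.getResponseControls();
                if (controls != null) {
                    for (Control c : controls) {
                        if (c instanceof PagedResultsResponseControl) {
                            cookie = ((PagedResultsResponseControl) c).getCookie();
                        }
                    }
                }
            } while (cookie != null && cookie.length > 0);
            ctx.close();
        }
    }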
I wrote it a few times here already - if I were implementing it again I wouldn't do identity store federation at all. If the LDAP store is not enough and you need to keep additional data in a DB - just sync everything there. This is doable in an efficient manner using the LDAP changelog feature, which records the delta of data changes since the last sync.
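A rough sketch of that changelog idea, assuming a server that exposes a retro changelog under cn=changelog (OpenDJ/389 DS style - MSAD would use DirSync/USN instead, so the DN and attribute names here are assumptions):

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingEnumeration;
    import javax.naming.directory.Attributes;
    import javax.naming.directory.SearchControls;
    import javax.naming.directory.SearchResult;
    import javax.naming.ldap.InitialLdapContext;
    import javax.naming.ldap.LdapContext;

    class ChangelogSync {
        // Pull every change recorded after the last processed changeNumber and
        // replay it into the DB. lastChangeNumber would normally be persisted
        // alongside the synced data so that restarts resume where they left off.
        static void syncSince(LdapContext ctx, long lastChangeNumber) throws Exception {
            SearchControls sc = new SearchControls();
            sc.setSearchScope(SearchControls.ONELEVEL_SCOPE);

            String filter = "(changeNumber>=" + (lastChangeNumber + 1) + ")";
            NamingEnumeration<SearchResult> changes = ctx.search("cn=changelog", filter, sc);

            while (changes.hasMore()) {
                Attributes entry = changes.next().getAttributes();
                String targetDn = (String) entry.get("targetDN").get();
                String changeType = (String) entry.get("changeType").get(); // add/modify/delete
                // Apply the delta to the DB copy here (insert/update/delete the
                // row for targetDn); only changed entries are touched, never the
                // whole tree.
            }
        }

        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://localhost:389");   // placeholder test server
            syncSince(new InitialLdapContext(env, null), 0L);
        }
    }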
This is another suggestion from me :) Don't underestimate possible issues with LDAP store performance. You can do exactly the same as with the DB - it is even simpler… I was generating a huge LDAP store with a really dumb bash script writing into an ldif file to import. But this is another topic… and you should focus on optimising for the typical MSAD windows domain schema, as that is 95% of the LDAP stores around :)
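And if bash is not your thing, the same dumb LDIF generator fits in a few lines of Java as well (the suffix and attribute set are placeholders - adjust them to mimic whatever MSAD-like schema you want to test against):

    import java.io.IOException;
    import java.io.PrintWriter;

    // Writes a big LDIF file that can be imported offline with the directory
    // server's import tool, to get a realistically sized store for testing.
    class LdifGenerator {
        public static void main(String[] args) throws IOException {
            int users = 1_000_000;
            try (PrintWriter out = new PrintWriter("users.ldif")) {
                for (int i = 0; i < users; i++) {
                    out.println("dn: uid=user" + i + ",ou=People,dc=example,dc=com");
                    out.println("objectClass: inetOrgPerson");
                    out.println("uid: user" + i);
                    out.println("cn: User " + i);
                    out.println("sn: Surname" + i);
                    out.println("mail: user" + i + "@example.com");
                    out.println("userPassword: secret" + i);
                    out.println();   // blank line separates entries
                }
            }
        }
    }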
On Aug 12, 2013, at 11:43 PM, Pedro Igor Silva <psilva at redhat.com> wrote:
> Hi Bolek,
>
> Do you have any reference where I can find the PL-IDM 1.x performance metrics and numbers?
>
> Wondering if we can use them as a goal when running 2.5 tests.
>
> Thanks.
> Pedro Igor
>
> ----- Original Message -----
> From: "Boleslaw Dawidowicz" <bdawidow at redhat.com>
> To: "Pedro Igor Silva" <psilva at redhat.com>
> Cc: "Anil Saldhana" <Anil.Saldhana at redhat.com>, security-dev at lists.jboss.org
> Sent: Monday, August 12, 2013 1:18:44 PM
> Subject: Re: [security-dev] managing OTP
>
>
> On Aug 12, 2013, at 3:38 PM, Pedro Igor Silva <psilva at redhat.com> wrote:
>
>> ----- Original Message -----
>>> From: "Anil Saldhana" <Anil.Saldhana at redhat.com>
>>> To: security-dev at lists.jboss.org
>>> Sent: Monday, August 12, 2013 10:23:07 AM
>>> Subject: Re: [security-dev] managing OTP
>>>
>>> On 08/12/2013 08:20 AM, Bill Burke wrote:
>>>>
>>>> On 8/12/2013 6:19 AM, Pedro Igor Silva wrote:
>>>>> ----- Original Message -----
>>>>>> From: "Bill Burke" <bburke at redhat.com>
>>>>>> To: security-dev at lists.jboss.org
>>>>>> Sent: Sunday, August 11, 2013 8:58:27 AM
>>>>>> Subject: [security-dev] managing OTP
>>>>>>
>>>>>> There are a few issues with managing credentials. The first is that there
>>>>>> is no way to remove a credential. Removal is essential for TOTP, as you may
>>>>>> end up with a lost or obsolete device.
>>>>>>
>>>>>> https://issues.jboss.org/browse/PLINK-236
>>>>>>
>>>>> I missed that too and discussed it with Shane a long time ago. The
>>>>> idea is to keep a history of all of an account's credentials.
>>>>>
>>>> The reason for this is?
>>>>
>>>>> If a device becomes obsolete, you just set an expiration date.
>>>>>
>>>> It's not just TOTP, the same goes for passwords. Every time a user loses a
>>>> password, two new obsolete ones are added to the database: a temporary
>>>> one, then the changed password. Maybe not such a big deal with a few
>>>> users, but when you get to tens or hundreds of thousands of users, won't
>>>> this become a problem?
>>> There will be thousands of users for PicketLink IDM. As Bolek can
>>> attest, PL 1.x IDM had that usage.
>>> Pedro, let's review this password/credential issue.
>>>
>>
>> Let's do this.
>
> Please don't think thousands. It will show nothing :) It is a good exercise to feed your storage with 1M users and 200k groups and/or roles, define memberships, etc. Set at least 20 attributes for each user. Perform some operations - like the credential setting mentioned above. Make sure to perform various types of queries. This is the level at which your SSD, 8GB of RAM and quad core CPU can't fake it for you anymore.
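>
> To make that concrete, here is a minimal population sketch using the PicketLink 2.5 basic model as far as I remember it (the package names, the attribute call and the credential call below are assumptions from memory, not the definitive API):
>
>     import org.picketlink.idm.IdentityManager;
>     import org.picketlink.idm.credential.Password;
>     import org.picketlink.idm.model.Attribute;
>     import org.picketlink.idm.model.basic.User;
>
>     class StorePopulator {
>         // Feed the store with enough data that indexes, joins and caches actually matter.
>         static void populate(IdentityManager identityManager) {
>             for (int i = 0; i < 1_000_000; i++) {
>                 User user = new User("user" + i);
>                 user.setEmail("user" + i + "@example.com");
>                 identityManager.add(user);
>
>                 for (int a = 0; a < 20; a++) {
>                     user.setAttribute(new Attribute<String>("attr" + a, "value-" + i + "-" + a));
>                 }
>                 identityManager.update(user);
>
>                 identityManager.updateCredential(user, new Password("secret" + i));
>             }
>         }
>     }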
>
> Relatively quick and simple to set up on your laptop, but quite good for identifying weak spots in the storage implementation. It is also a fairly realistic scenario. An enterprise class customer can easily have that number of external users. A bigger organisation will have a crazy amount of groups in LDAP, and you may need to sync all of them into the DB. Also, with such a flexible schema, developers will abuse it by mapping all possible permission types to a big set of roles - even if you could do it more efficiently in a different way. Sadly you cannot ban wrong usage of your model ;)
>
> As you are currently discussing JDBC storage, I would suggest implementing such tests early. The outcome can easily lead you to a complete redesign of your storage implementation. It also matters to measure the time of certain operations and see how they scale with the number of users. This can also help to identify leaks like the one pointed out by Bill. My personal assumption is that with an implementation as flexible as the one you have with JPA, you won't be able to maintain scalability easily. It is better to verify. It took me and Marek a lot of time and careful fine tuning to make it perform for JPP, and it didn't happen without tradeoffs.
>
> This is the PL IDM 1.x experience to share :)
>
>>
>>>>>> The 2nd is that for TOTP, you will want to check every device on a
>>>>>> credential validation rather than just one:
>>>>>>
>>>>>> https://issues.jboss.org/browse/PLINK-237
>>>>>>
>>>>>> Our own VPN allows me to set up multiple tokens. I have one on my
>>>>>> iPhone and one on my iPad just in case I lose one or the other. Our VPN
>>>>>> allows me to use either to log in.
>>>>>>
>>>>> Isn't it a valid option to iterate over the user's devices and try each one?
>>>>>
>>>> Sure, this is why this is an enhancement.
>>>>
>>>
>>> _______________________________________________
>>> security-dev mailing list
>>> security-dev at lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/security-dev
>>>
>> _______________________________________________
>> security-dev mailing list
>> security-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/security-dev
>