stian at redhat.com
Fri Feb 21 09:13:39 EST 2014
I agree that there are headaches involved in a distributed cache, but they will always be an issue if you have a cache.
You're going to need some mechanism to invalidate entries in the cache whenever there's an update to the db. Infinispan provides various mechanisms to expire unused items, and it also has multiple clustering modes, of which the most interesting to us would be invalidation, not replication. In invalidation mode the actual data isn't sent over the network, so it should be less risky with regards to security.
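For reference, an invalidation-mode cache in Infinispan (sketched here in the WildFly subsystem XML style) could look roughly like this -- the container and cache names are made up for illustration, and the exact schema depends on the Infinispan/WildFly version:

```xml
<cache-container name="keycloak">
    <!-- clustered transport; in invalidation mode only keys travel on the wire, not values -->
    <transport lock-timeout="60000"/>
    <invalidation-cache name="realms" mode="SYNC"/>
    <invalidation-cache name="users" mode="SYNC"/>
</cache-container>
```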
I would also hope that Infinispan supports OpenShift, or plans to soon.
I agree that a non-JPA cache is the better option, but it may prove to be a fair amount of work, and possibly error-prone. I've done this in the past and it was a real PITA to write and maintain.
From your previous mail on the OpenID Connect spec and codes, it sounds like it's fine for us to encode/encrypt the full access code entry into the code itself, as we already have a short lifespan on these. That would mean we'd not need any clustering for that, which would be great. The same goes for SocialManager: we can encode/encrypt everything into the state variable for OAuth2-enabled providers, leaving only Twitter as a problem (we could use a cookie in that case).
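To make the idea concrete, here is a minimal sketch of encrypting a whole access code entry into the opaque code handed to the client, so the server keeps no state for it. This is an illustration only, not Keycloak's actual implementation -- the class and method names are made up, and AES-GCM is just one reasonable choice (its auth tag gives integrity as well as confidentiality):

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Hypothetical sketch: pack the access code entry (as JSON) into the code itself
// by encrypting it with a key private to the auth server.
public class SelfContainedCode {
    private static final SecureRandom RANDOM = new SecureRandom();

    /** Generate a fresh AES key private to the auth server. */
    static SecretKey newKey() {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            return kg.generateKey();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    /** Encrypt the entry into a URL-safe opaque code (IV prepended to ciphertext). */
    static String encode(SecretKey key, String entryJson) {
        try {
            byte[] iv = new byte[12];                 // 96-bit GCM nonce
            RANDOM.nextBytes(iv);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ct = c.doFinal(entryJson.getBytes(StandardCharsets.UTF_8));
            byte[] out = new byte[iv.length + ct.length];
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ct, 0, out, iv.length, ct.length);
            return Base64.getUrlEncoder().withoutPadding().encodeToString(out);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    /** Decrypt the code; the GCM tag check rejects tampered codes for free. */
    static String decode(SecretKey key, String code) {
        try {
            byte[] in = Base64.getUrlDecoder().decode(code);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, in, 0, 12));
            byte[] pt = c.doFinal(in, 12, in.length - 12);
            return new String(pt, StandardCharsets.UTF_8);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException("invalid or tampered code", e);
        }
    }
}
```

The expiry check would just be a field inside the JSON entry, validated after decryption; since the lifespan is short, losing the key on restart only invalidates in-flight codes.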
With regards to huge deployments, I think the dream figure for LiveOak is millions of developers. For those numbers I think sharding is the only viable option. AFAIK databases don't cope well with millions of entries in a table, especially not with joins.
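Sharding here would only need a deterministic mapping from realm to shard, since realms are naturally independent. A toy sketch of what I mean (nothing like this exists today; names are made up for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// Hypothetical sketch of routing realms to database shards by hashing the realm id.
public class ShardRouter {
    private final int shardCount;

    public ShardRouter(int shardCount) {
        if (shardCount <= 0) throw new IllegalArgumentException("shardCount must be > 0");
        this.shardCount = shardCount;
    }

    /** Deterministically map a realm id to a shard index in [0, shardCount). */
    public int shardFor(String realmId) {
        CRC32 crc = new CRC32();
        crc.update(realmId.getBytes(StandardCharsets.UTF_8));
        return (int) (crc.getValue() % shardCount);   // CRC32 value is non-negative
    }
}
```

A fixed modulus like this does mean resharding moves most keys when shardCount changes; consistent hashing would avoid that, at the cost of more machinery.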
----- Original Message -----
> From: "Bill Burke" <bburke at redhat.com>
> To: "Stian Thorgersen" <stian at redhat.com>
> Cc: keycloak-dev at lists.jboss.org
> Sent: Friday, 21 February, 2014 1:41:52 PM
> Subject: Re: [keycloak-dev] caching
> I don't want to require a distributed cache. It may make securing,
> configuring, and deploying this cache much more complicated than we
> want it to be. Especially in an environment like OpenShift. Don't you
> think? Plus there are things that can be stored in a non-JPA cache, i.e.
> you can pre-calculate role mappings, scope mappings, and access grants.
> Then all you have to do is marshal the tokens into JSON and sign them.
> As far as huge deployments go, define huge? Even a realm with 1
> million users would probably be around 10 GB, which is very easy to
> handle for most modern databases.
> On 2/21/2014 4:14 AM, Stian Thorgersen wrote:
> > Initially I think using Infinispan as a 2nd level cache for JPA would be
> > the way to go. It provides all of this stuff with the minimum fuss. Later,
> > if we can't tune it enough, we could use Infinispan directly.
> > For really huge deployments we'd probably need support for sharding as
> > well.
> > ----- Original Message -----
> >> From: "Bill Burke" <bburke at redhat.com>
> >> To: keycloak-dev at lists.jboss.org
> >> Sent: Thursday, 20 February, 2014 8:54:01 PM
> >> Subject: [keycloak-dev] caching
> >> What's been brewing around in my mind for a while is optimization of the
> >> token service. There's no reason everything couldn't be cached in
> >> memory for each token service deployed. Even millions of users could be
> >> cached. Memory is cheap.
> >> The cache should be local only and only the Token Service should use it.
> >> The admin console, or any other update operation, would cause invalidation
> >> of each cache on each machine by sending invalidation messages. These
> >> invalidation messages would be REST invocations secured by Keycloak of
> >> course! If we wanted to put in any guarantees, we could back these
> >> invalidation messages with HornetQ or something.
> >> --
> >> Bill Burke
> >> JBoss, a division of Red Hat
> >> http://bill.burkecentral.com
> >> _______________________________________________
> >> keycloak-dev mailing list
> >> keycloak-dev at lists.jboss.org
> >> https://lists.jboss.org/mailman/listinfo/keycloak-dev
> Bill Burke
> JBoss, a division of Red Hat