On Nov 12, 2012, at 1:08 AM, Shane Bryzak <sbryzak(a)redhat.com> wrote:
On 11/09/2012 07:57 AM, Boleslaw Dawidowicz wrote:
> On Nov 6, 2012, at 11:10 PM, Shane Bryzak <sbryzak(a)redhat.com> wrote:
>
>> On 11/07/2012 07:35 AM, Boleslaw Dawidowicz wrote:
>>> Also +1. It looks really good.
>>>
>>> I assume the strategy for handling operations when two stores are configured
>>> falls within the IdentityManager implementation. I wonder if we should make
>>> this part more flexible in some way. I'm not really thinking of the
>>> IdentityStoreRepository kind of design I had in 1.x, as that is probably a bit
>>> too much. However, it should at least be easy to extend DefaultIdentityManager
>>> to customize how specific operations are handled. Or we could have something
>>> like a GenericIdentityManager for that purpose.
>>
>> This feature (which I've been referring to as partitioning) is supported by
>> requiring each configured IdentityStore to provide metadata as to which
>> features are supported, via the getFeatureSet() method:
>>
>> Set<Feature> getFeatureSet();
>>
>> Feature is an enum defining all currently supported identity management
>> operations:
>>
>> public enum Feature { createUser, readUser, updateUser, deleteUser,
>> createGroup, readGroup, updateGroup, deleteGroup,
>> createRole, readRole, updateRole, deleteRole,
>> createMembership, readMembership, updateMembership, deleteMembership,
>> validateCredential, updateCredential,
>> all }
>>
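A store advertising its capabilities through getFeatureSet() might look like the following minimal sketch. The class names and the EnumSet-based implementation are assumptions for illustration, not the actual PicketLink code, and the SPI types are reduced to the parts discussed above so the example is self-contained:

```java
import java.util.EnumSet;
import java.util.Set;

public class FeatureSetSketch {

    // Reduced copy of the Feature enum quoted above.
    enum Feature { createUser, readUser, updateUser, deleteUser,
                   createGroup, readGroup, updateGroup, deleteGroup, all }

    // Reduced IdentityStore: only the metadata method is modelled here.
    interface IdentityStore {
        Set<Feature> getFeatureSet();
    }

    // Hypothetical store that only handles user-related operations.
    static class UserOnlyStore implements IdentityStore {
        @Override
        public Set<Feature> getFeatureSet() {
            // createUser..deleteUser are contiguous in this reduced enum.
            return EnumSet.range(Feature.createUser, Feature.deleteUser);
        }
    }

    public static void main(String[] args) {
        IdentityStore store = new UserOnlyStore();
        System.out.println(store.getFeatureSet().contains(Feature.createUser));  // true
        System.out.println(store.getFeatureSet().contains(Feature.createGroup)); // false
    }
}
```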
>> When an IdentityManager method is invoked, the correct IdentityStore for the
>> required operation is selected based on its supported feature set. Here's an
>> example of this, in the createUser() method:
>>
>>
>> @Override
>> public User createUser(String name) {
>> User user = new SimpleUser(name);
>> IdentityStore store = getStoreForFeature(Feature.createUser);
>> store.createUser(getContextFactory().getContext(store), user);
>> return user;
>> }
>>
>> The getStoreForFeature() method iterates through the configured stores and
>> returns the one that supports the specified Feature, in this example
>> Feature.createUser. This way we can configure multiple stores, with one of
>> them providing user-related operations, another providing group and role
>> operations, and so forth.
>>
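A plausible shape for that selection logic, assuming getStoreForFeature() simply scans the configured stores in order and treats Feature.all as a wildcard (the field names and exception choice are assumptions, not the actual implementation):

```java
import java.util.ArrayList;
import java.util.EnumSet;
import java.util.List;
import java.util.Set;

public class StoreSelectionSketch {

    // Reduced stand-ins for the SPI types discussed in this thread.
    enum Feature { createUser, createGroup, all }

    interface IdentityStore {
        Set<Feature> getFeatureSet();
    }

    // Stores in configuration order; the first match wins.
    private final List<IdentityStore> configuredStores = new ArrayList<>();

    public StoreSelectionSketch(List<IdentityStore> stores) {
        configuredStores.addAll(stores);
    }

    // Return the first store supporting the feature (Feature.all matches everything).
    IdentityStore getStoreForFeature(Feature feature) {
        for (IdentityStore store : configuredStores) {
            Set<Feature> features = store.getFeatureSet();
            if (features.contains(feature) || features.contains(Feature.all)) {
                return store;
            }
        }
        throw new UnsupportedOperationException("No store supports " + feature);
    }

    public static void main(String[] args) {
        IdentityStore userStore = () -> EnumSet.of(Feature.createUser);
        IdentityStore groupStore = () -> EnumSet.of(Feature.createGroup);
        StoreSelectionSketch mgr =
                new StoreSelectionSketch(List.of(userStore, groupStore));
        System.out.println(mgr.getStoreForFeature(Feature.createUser) == userStore);   // true
        System.out.println(mgr.getStoreForFeature(Feature.createGroup) == groupStore); // true
    }
}
```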
>
> This will still not cover all use cases.
>
> You may have a few stores that support credential validation and implement
> some kind of strategy like in PAM. I'm not saying that the SPI should be more
> complex - no, I think it strikes the right balance between complexity and
> features. I'm just pointing out that people will need to extend
> IdentityManager to introduce more complex multi-store behaviours. The
> implementation should be done with that in mind: avoid private methods and so
> on.
It should be relatively simple to extend the default IdentityManager and override this
behaviour if the developer wants something else.
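As a sketch of the kind of override being discussed, a subclass could give credential validation PAM-style "sufficient" semantics, where the first store that accepts the credential wins. All class and method names below are illustrative stand-ins, not the actual DefaultIdentityManager:

```java
import java.util.List;

public class PamStyleSketch {

    // Minimal stand-in for the SPI: validation returns success or failure.
    interface IdentityStore {
        boolean validateCredential(String user, String password);
    }

    // Base manager delegates to a single store, mirroring default behaviour.
    static class DefaultManager {
        protected final List<IdentityStore> stores;
        DefaultManager(List<IdentityStore> stores) { this.stores = stores; }

        boolean validateCredential(String user, String password) {
            return stores.get(0).validateCredential(user, password);
        }
    }

    // Subclass: try each capable store in turn; first success wins.
    static class PamStyleManager extends DefaultManager {
        PamStyleManager(List<IdentityStore> stores) { super(stores); }

        @Override
        boolean validateCredential(String user, String password) {
            for (IdentityStore store : stores) {
                if (store.validateCredential(user, password)) {
                    return true;
                }
            }
            return false;
        }
    }

    public static void main(String[] args) {
        IdentityStore ldap = (u, p) -> false;            // always rejects
        IdentityStore db = (u, p) -> "secret".equals(p); // accepts one password
        PamStyleManager mgr = new PamStyleManager(List.of(ldap, db));
        System.out.println(mgr.validateCredential("jdoe", "secret")); // true
        System.out.println(mgr.validateCredential("jdoe", "wrong"));  // false
    }
}
```

This is only possible if the default implementation keeps such methods overridable, which is exactly the "avoid private methods" point above.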
>
>>>
>>> Looking forward to seeing your ideas around the design of the event handling
>>> part. I think it will be critical to making it truly pluggable and extensible.
>>
>> This feature was quite simple. I've opted to go with the "class as an event"
>> model, where each event type is represented by a POJO containing the relevant
>> event state. For example, the following event is raised when a new user is
>> created:
>>
>> public class UserCreatedEvent extends AbstractBaseEvent {
>> private User user;
>>
>> public UserCreatedEvent(User user) {
>> this.user = user;
>> }
>>
>> public User getUser() {
>> return user;
>> }
>> }
>>
>> In this example, the event class contains a reference to the User that was
>> created. In addition, each event class should extend AbstractBaseEvent, which
>> also provides an event context:
>>
>> public abstract class AbstractBaseEvent {
>> private EventContext context = new EventContext();
>>
>> public EventContext getContext() {
>> return context;
>> }
>> }
>>
>> The EventContext provides a general purpose mechanism for passing arbitrary
>> state:
>>
>> public class EventContext {
>> private Map<String,Object> context;
>>
>> public Object getValue(String name) {
>> return context != null ? context.get(name) : null;
>> }
>>
>> public void setValue(String name, Object value) {
>> if (context == null) {
>> context = new HashMap<String,Object>();
>> }
>> context.put(name, value);
>> }
>>
>> public boolean contains(String name) {
>> return context != null && context.containsKey(name);
>> }
>>
>> public boolean isEmpty() {
>> return context == null || context.isEmpty();
>> }
>> }
>>
>> It is via the EventContext that we can pass IdentityStore
>> implementation-specific state (for example, the actual entity bean instance
>> that was persisted to the database in the case of a JPA-backed IdentityStore)
>> or any additional state that might be relevant to the event.
>>
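Putting the pieces above together, attaching store-specific state to an event might look like this. The context key name and the String user stand-in are made-up examples; the EventContext and AbstractBaseEvent bodies are trimmed copies of the code quoted above:

```java
import java.util.HashMap;
import java.util.Map;

public class EventContextSketch {

    // EventContext as quoted above, trimmed to the methods used here.
    static class EventContext {
        private Map<String, Object> context;

        public Object getValue(String name) {
            return context != null ? context.get(name) : null;
        }

        public void setValue(String name, Object value) {
            if (context == null) {
                context = new HashMap<>();
            }
            context.put(name, value);
        }
    }

    abstract static class AbstractBaseEvent {
        private final EventContext context = new EventContext();
        public EventContext getContext() { return context; }
    }

    static class UserCreatedEvent extends AbstractBaseEvent {
        private final String userName; // stand-in for the User reference
        UserCreatedEvent(String userName) { this.userName = userName; }
        public String getUserName() { return userName; }
    }

    public static void main(String[] args) {
        UserCreatedEvent event = new UserCreatedEvent("jdoe");
        // A JPA-backed store could attach the persisted entity here;
        // the key "USER_ENTITY" is a hypothetical example.
        event.getContext().setValue("USER_ENTITY", "persisted entity");
        System.out.println(event.getContext().getValue("USER_ENTITY")); // persisted entity
    }
}
```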
>> As for bridging the event itself, the IdentityStoreInvocationContext provides
>> access to the EventBridge via the getEventBridge() method:
>>
>> EventBridge getEventBridge();
>>
>> This is an extremely simple interface declaring just a single method:
>>
>> public interface EventBridge {
>> void raiseEvent(Object event);
>> }
>>
>> The idea here is that you can provide an EventBridge implementation tailored
>> to the environment that you're running in. In an EE6 environment this is a
>> piece of cake, as we just pass the event straight through to the CDI event bus.
>>
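A sketch of how small such a bridge can be. In EE6 the delegate would simply hand the event to the CDI event bus (BeanManager.fireEvent); here a list stands in for the bus so the example runs without a container, and the interface is reproduced so the sketch is self-contained:

```java
import java.util.ArrayList;
import java.util.List;

public class EventBridgeSketch {

    // The SPI interface from this thread, reproduced for the example.
    interface EventBridge {
        void raiseEvent(Object event);
    }

    public static void main(String[] args) {
        List<Object> bus = new ArrayList<>();
        // Stand-in for the CDI event bus; in EE6 this would be
        // event -> beanManager.fireEvent(event).
        EventBridge bridge = bus::add;
        bridge.raiseEvent("UserCreatedEvent for jdoe");
        System.out.println(bus.size()); // 1
    }
}
```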
>> That pretty much sums up event handling.
>
> Looks pretty simple indeed. What I had in mind was something with pre and post
> operation events, but I guess that if needed those would fit this design,
> right? It seems more about defining valid event types and raising points.
>
> I would like to be able to have access to the full context - not just the
> single store. The use case is, for example, synchronizing or propagating
> objects from one identity store into another when a specific operation occurs.
Pre and post operation events will fit this design easily, yes. Propagation
should be possible by creating a 'wrapper' IdentityStore that synchronizes
identity state from one of the wrapped stores into another (I wouldn't base this
on the event API). I'm not quite convinced that synchronization is such a useful
thing though - could you enlighten me with a use case where this would be
required?
A simple use case is where you have a store with username/password and some
basic group information. You want to include it, but don't need to query it all
the time. This matches minimal LDAP integration in some scenarios. What you can
do is perform authentication against this store and do an initial sync of some
attributes and group memberships into the DB. Then, for single attributes, it
can still be desirable to propagate new values back to the original store. Using
events to propagate new values, plus a wrapper identity store for sync during
auth, can be a more efficient approach than setting up a whole identity store
federation.
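That scenario could be sketched as a wrapper store that authenticates against the source store and, on success, copies attributes into the local store. All names and the Map-backed demo stores below are assumptions for illustration, not actual PicketLink types:

```java
import java.util.HashMap;
import java.util.Map;

public class SyncOnAuthSketch {

    // Reduced store contract: credentials plus a flat attribute map per user.
    interface IdentityStore {
        boolean validateCredential(String user, String password);
        Map<String, String> getAttributes(String user);
        void setAttributes(String user, Map<String, String> attributes);
    }

    // Wrapper: authenticate against the source (e.g. LDAP) and sync
    // attributes into the local (e.g. DB) store on successful login.
    static class SyncingStore {
        private final IdentityStore source;
        private final IdentityStore local;

        SyncingStore(IdentityStore source, IdentityStore local) {
            this.source = source;
            this.local = local;
        }

        boolean validateCredential(String user, String password) {
            if (!source.validateCredential(user, password)) {
                return false;
            }
            local.setAttributes(user, source.getAttributes(user)); // initial sync
            return true;
        }
    }

    // Trivial in-memory store for the demo.
    static class MapStore implements IdentityStore {
        final Map<String, String> passwords = new HashMap<>();
        final Map<String, Map<String, String>> attrs = new HashMap<>();

        public boolean validateCredential(String u, String p) {
            return p != null && p.equals(passwords.get(u));
        }
        public Map<String, String> getAttributes(String u) {
            return attrs.getOrDefault(u, new HashMap<>());
        }
        public void setAttributes(String u, Map<String, String> a) {
            attrs.put(u, a);
        }
    }

    public static void main(String[] args) {
        MapStore ldap = new MapStore();
        ldap.passwords.put("jdoe", "secret");
        ldap.attrs.put("jdoe", Map.of("email", "jdoe@example.com"));

        MapStore db = new MapStore();
        SyncingStore store = new SyncingStore(ldap, db);

        System.out.println(store.validateCredential("jdoe", "secret")); // true
        System.out.println(db.getAttributes("jdoe").get("email"));      // jdoe@example.com
    }
}
```

Propagating later single-attribute changes back to the source store would then be a job for event observers, as suggested above.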