[keycloak-user] Replace use of Infinispan with User Sessions SPI ?
Bill Burke
bburke at redhat.com
Tue Dec 15 09:55:32 EST 2015
See Alan Field's response. He's being moderated and...I've forgotten
the moderator password. :)
On 12/14/2015 7:55 PM, Alan Field wrote:
> Hey Scott,
>
> ------------------------------------------------------------------------
>
> *From: *"Scott Rossillo" <srossillo at smartling.com>
> *To: *"Marek Posolda" <mposolda at redhat.com>, afield at redhat.com
> *Cc: *"keycloak-user" <keycloak-user at lists.jboss.org>, "Bill Burke"
> <bburke at redhat.com>
> *Sent: *Monday, December 14, 2015 6:31:30 PM
> *Subject: *Re: [keycloak-user] Replace use of Infinispan with User
> Sessions SPI ?
>
> There are two issues:
>
> 1. Infinispan relies on JGroups, which is difficult to configure
> correctly with the various ping techniques that aren’t UDP
> multicast. I can elaborate on each one that we tested, but it’s just
> generally complex to get right. That’s not to say it’s impossible,
> nor is it the biggest reason this is complicated on ECS or _insert
> container service here_; see #2 for that.
>
>
> The Infinispan server and JBoss EAP include a TCP-based stack in the
> configuration to run on EC2 that looks like this:
>
> <stack name="s3">
>     <transport type="TCP" socket-binding="jgroups-tcp"/>
>     <protocol type="S3_PING">
>         <property name="location">${jgroups.s3.bucket:}</property>
>         <property name="access_key">${jgroups.s3.access_key:}</property>
>         <property name="secret_access_key">${jgroups.s3.secret_access_key:}</property>
>         <property name="pre_signed_delete_url">${jgroups.s3.pre_signed_delete_url:}</property>
>         <property name="pre_signed_put_url">${jgroups.s3.pre_signed_put_url:}</property>
>         <property name="prefix">${jgroups.s3.prefix:}</property>
>     </protocol>
>     <protocol type="MERGE3"/>
>     <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
>     <protocol type="FD_ALL"/>
>     <protocol type="VERIFY_SUSPECT"/>
>     <protocol type="pbcast.NAKACK2">
>         <property name="use_mcast_xmit">false</property>
>     </protocol>
>     <protocol type="UNICAST3"/>
>     <protocol type="pbcast.STABLE"/>
>     <protocol type="pbcast.GMS"/>
>     <protocol type="MFC"/>
>     <protocol type="FRAG2"/>
> </stack>
>
>
> With this in the configuration file, you can start the server with the
> following system properties defined:
>
>
> bin/clustered.sh -Djboss.node.name=node0 \
>     -Djboss.socket.binding.port-offset=0 \
>     -Djboss.default.jgroups.stack=s3 \
>     -Djgroups.s3.bucket=<s3_bucket_name> \
>     -Djgroups.s3.access_key=<access_key> \
>     -Djgroups.s3.secret_access_key=<secret_access_key>
>
>
> This will start the server, and the nodes will write to a file in the
> S3 bucket so that they can discover each other. I do not see this
> stack defined in the configuration used by WildFly 9, but it should
> work there as well. It is also possible to use the JGroups Gossip
> Router for discovery, but it requires running a separate process that
> all of the nodes contact during the discovery phase.
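>
> For reference, running the router and pointing a stack at it would
> look roughly like this (an untested sketch; the router host and port
> below are placeholders):
>
> # start the Gossip Router as a separate process (JGroups 3.x)
> java -cp jgroups.jar org.jgroups.stack.GossipRouter -port 12001 -bindaddress 10.10.0.50
>
> <!-- then use TCPGOSSIP instead of S3_PING for discovery,
>      pointing at the router's host[port] -->
> <protocol type="TCPGOSSIP">
>     <property name="initial_hosts">${jgroups.gossip.hosts:10.10.0.50[12001]}</property>
> </protocol>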
>
>
>
> 2. It is difficult to do discovery correctly with JGroups and
> Docker. Non-privileged Docker instances - the default and recommended
> type - do not implicitly know their host’s IP. This causes a
> mismatch between what JGroups thinks the machine’s IP is and what
> it actually is when connecting to hosts on different machines. This
> is the main issue, and it’s not the fault of JGroups per se, but
> there’s no simple workaround.
>
> Take, for example, a simple 2-node cluster:
>
> Node 1 comes up on the docker0 interface of host A with the IP
> address 172.16.0.4. The host A IP is 10.10.0.100.
> Node 2 comes up on the docker0 interface of host B with the IP
> address 172.16.0.8. The host B IP is 10.10.0.108.
>
> The 172.16 network is not routable between hosts (by design). Docker
> does port forwarding for ports we wish to expose, so this works fine
> for HTTP/HTTPS but not for the cluster traffic.
>
> So Node 1 will advertise itself as having IP 172.16.0.4 while Node 2
> advertises 172.16.0.8. The two cannot talk to each other by default.
> However, using the hard-coded IPs and TCPPING, we can
> set external_addr on Node 1 to 10.10.0.100 and external_addr on Node
> 2 to 10.10.0.108, and set initial_hosts to 10.10.0.100, 10.10.0.108.
> This will cause the nodes to discover each other. However, they will
> not form a cluster. The nodes will reject the handshake, thinking
> they’re not actually 10.10.0.100 or 10.10.0.108, respectively.
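>
> For concreteness, the attempted workaround on Node 1 looks roughly
> like this (a sketch; socket bindings as in the s3 stack above, and
> 7600 is the usual jgroups-tcp port):
>
> <transport type="TCP" socket-binding="jgroups-tcp">
>     <property name="external_addr">10.10.0.100</property>
> </transport>
> <protocol type="TCPPING">
>     <property name="initial_hosts">10.10.0.100[7600],10.10.0.108[7600]</property>
>     <property name="port_range">0</property>
> </protocol>
>
> Node 2 is identical except external_addr is 10.10.0.108. As described
> above, discovery then succeeds but the handshake is still rejected.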
>
> I’d like to discuss this further, and I can share where we’ve gotten
> so far with workarounds, but it may be better to get into the
> weeds on another list.
>
> Let me know what you think.
>
> This issue is a little trickier, and I think we should probably move the
> discussion to the jgroups-users list, which you can subscribe to here
> [1]. Bela Ban may have some ideas about how to set the binding address
> or interface to get around this. The Fabric8 project is also using a
> JGroups discovery protocol that relies on Kubernetes, but I don't think
> ECS uses Kubernetes.
>
> Thanks,
> Alan
>
> [1] https://lists.sourceforge.net/lists/listinfo/javagroups-users
>
>
> Best,
> Scott
>
> Scott Rossillo
> Smartling | Senior Software Engineer
> srossillo at smartling.com <mailto:srossillo at smartling.com>
>
> Powered by Sigstr <http://www.sigstr.com/>
>
> On Dec 14, 2015, at 5:32 PM, Marek Posolda <mposolda at redhat.com
> <mailto:mposolda at redhat.com>> wrote:
>
> CCing Alan Field from RH Infinispan team and forwarding his
> question:
>
> I'd like to know which configuration files you are using and why it is
> harder to use with Amazon’s Docker service (ECS) or Beanstalk. I'd also
> be interested in how big a cluster you are using in AWS.
>
>
>
> On 14/12/15 22:24, Scott Rossillo wrote:
>
> AWS was why we didn’t use Infinispan to begin with. That
> and it’s even more complicated when you deploy using
> Amazon’s Docker service (ECS) or Beanstalk.
>
> It’s too bad Infinispan / JGroups are beasts when the out
> of the box configuration can’t be used. I’m planning to
> document this as we fix it, but I’d avoid S3_PING and use
> JDBC_PING. You already need JDBC for the Keycloak DB (unless
> you’re using Mongo), and it’s easier to test locally.
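>
> Something like this, for example (an untested sketch; the connection
> details are placeholders, and Postgres is just an example driver):
>
> <protocol type="JDBC_PING">
>     <property name="connection_url">jdbc:postgresql://db:5432/keycloak</property>
>     <property name="connection_username">keycloak</property>
>     <property name="connection_password">${jgroups.jdbc.password:}</property>
>     <property name="connection_driver">org.postgresql.Driver</property>
> </protocol>
>
> Each node writes a row for itself at startup, so discovery just
> queries the table that JDBC_PING creates.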
>
> TCPPING will bite you on AWS if Amazon decides to replace
> one of your instances (which it does occasionally with ECS or
> Beanstalk), since the replacement's IP won't be in initial_hosts.
>
> Best,
> Scott
>
> Scott Rossillo
> Smartling | Senior Software Engineer
> srossillo at smartling.com <mailto:srossillo at smartling.com>
>
> Powered by Sigstr <http://www.sigstr.com/>
>
> On Dec 14, 2015, at 10:59 AM, Marek Posolda
> <mposolda at redhat.com <mailto:mposolda at redhat.com>> wrote:
>
> On 14/12/15 16:55, Marek Posolda wrote:
>
> On 14/12/15 15:58, Bill Burke wrote:
>
> On 12/14/2015 5:01 AM, Niko Köbler wrote:
>
> Hi Marek,
>
> Am 14.12.2015 um 08:50 schrieb Marek
> Posolda <mposolda at redhat.com
> <mailto:mposolda at redhat.com>>:
>
> Btw. what's your motivation to not use Infinispan? If you're
> afraid of cluster communication, you don't need to worry much
> about it: if you run a single Keycloak through standalone.xml,
> Infinispan automatically works in LOCAL mode and there is no
> cluster communication at all.
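>
> For illustration, the cache-container in standalone.xml uses only
> local caches and looks something like this (from memory, so check
> your version):
>
> <cache-container name="keycloak" jndi-name="infinispan/Keycloak">
>     <local-cache name="realms"/>
>     <local-cache name="users"/>
>     <local-cache name="sessions"/>
>     <local-cache name="loginFailures"/>
> </cache-container>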
>
> My current customer is running his apps in AWS. As is well
> known, multicast is not available in cloud infrastructures.
> A WildFly/Infinispan cluster works pretty well with multicast
> without having to know too much about JGroups config. S3_PING
> seems to be a viable way to get a cluster running in AWS.
> But additionally, my customer doesn’t have any (deep)
> knowledge of JBoss infrastructure, so I’m looking for a way
> to run Keycloak in a cluster in AWS without the need to build
> up deeper knowledge of JGroups config, for example by getting
> rid of Infinispan. But I do understand all the concerns about
> doing this.
> I still have to test S3_PING to see if it works as easily as
> multicast. If yes, we can use it; if not… I don’t know yet.
> But this is getting off-topic for the Keycloak mailing list;
> it’s more related to pure WildFly/Infinispan.
>
> seems to me it would be much easier to get
> Infinispan working on AWS
> than to write and maintain an entire new caching
> mechanism and hope we
> don't refactor the cache SPI.
>
>
> +1
>
> I am sure Infinispan/JGroups can run in a non-multicast
> environment; you may just need to figure out exactly how
> to configure it. So I agree that this issue is more
> related to WildFly/Infinispan itself than to Keycloak.
>
> You may need to use JGroups protocols like TCP instead of
> the default UDP, and maybe TCPPING (this requires manually
> listing all your cluster nodes, but it's still a much better
> option IMO than rewriting the UserSession SPI).
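>
> A TCP-based stack with TCPPING might look something like this (a
> rough sketch; the IPs and ports are examples, and the rest of the
> protocol list follows the usual TCP defaults):
>
> <stack name="tcpping">
>     <transport type="TCP" socket-binding="jgroups-tcp"/>
>     <protocol type="TCPPING">
>         <property name="initial_hosts">10.0.0.1[7600],10.0.0.2[7600]</property>
>         <property name="port_range">0</property>
>     </protocol>
>     <protocol type="MERGE3"/>
>     <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
>     <protocol type="FD_ALL"/>
>     <protocol type="VERIFY_SUSPECT"/>
>     <protocol type="pbcast.NAKACK2">
>         <property name="use_mcast_xmit">false</property>
>     </protocol>
>     <protocol type="UNICAST3"/>
>     <protocol type="pbcast.STABLE"/>
>     <protocol type="pbcast.GMS"/>
>     <protocol type="MFC"/>
>     <protocol type="FRAG2"/>
> </stack>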
>
> Btw. if TCPPING or S3_PING is an issue, there is also AWS_PING
> (http://www.jgroups.org/manual-3.x/html/protlist.html#d0e5100),
> but it's not an official part of JGroups.
>
> Marek
>
>
> Marek
> _______________________________________________
> keycloak-user mailing list
> keycloak-user at lists.jboss.org
> <mailto:keycloak-user at lists.jboss.org>
> https://lists.jboss.org/mailman/listinfo/keycloak-user
>
--
Bill Burke
JBoss, a division of Red Hat
http://bill.burkecentral.com