<!DOCTYPE html>
<html>
<head>
<title></title>
</head>
<body><div>I haven't found any way around this other than turning on session affinity at the load balancer level.</div>
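<div> </div>
<div>(If it helps, with an AWS Classic ELB that can be done with duration-based cookie stickiness. The sketch below assumes the AWS CLI; the load balancer name and listener port are placeholders, not from this thread.)<br></div>
<pre>
# Sketch only: enable duration-based cookie stickiness on a Classic ELB.
# "my-keycloak-elb" and listener port 443 are placeholders for your own setup.
aws elb create-lb-cookie-stickiness-policy \
    --load-balancer-name my-keycloak-elb \
    --policy-name keycloak-sticky \
    --cookie-expiration-period 300

aws elb set-load-balancer-policies-of-listener \
    --load-balancer-name my-keycloak-elb \
    --load-balancer-port 443 \
    --policy-names keycloak-sticky
</pre>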
<div> </div>
<div> </div>
<div>On Wed, Feb 17, 2016, at 03:37 PM, Nicolás Pozo wrote:<br></div>
<blockquote type="cite"><div dir="ltr"><div>Hi,<br></div>
<div><span class="colour" style="color:rgb(80, 0, 80)">JDBC_PING did the Job and infinispan seems to be working now. But I have another issue. </span><br></div>
<div> </div>
<div><span class="colour" style="color:rgb(80, 0, 80)">I have 2 keycloak instances running behind a load balancer. When I get a token from server 1 and then load balancer sends requests to server 2 using this token, I get an error 401 because token is not valid. Is there any other missing configuration to sinchronize tokens? </span><br></div>
<div> </div>
<div><span class="colour" style="color:rgb(80, 0, 80)">Thanks,</span><br></div>
<div><div><span class="colour" style="color:rgb(80, 0, 80)">Nicolás.-<br></span></div>
<div><div> </div>
<div><div>2016-02-17 13:01 GMT-03:00 Aikeaguinea <span dir="ltr"><<a href="mailto:aikeaguinea@xsmail.com">aikeaguinea@xsmail.com</a>></span>:<br></div>
<blockquote style="margin-top:0px;margin-right:0px;margin-bottom:0px;margin-left:0.8ex;border-left-width:1px;border-left-color:rgb(204, 204, 204);border-left-style:solid;padding-left:1ex;"><div>Apologies to those reading my message in plaintext; apparently all the<br></div>
<div>
spaces come out as question marks. I've updated the message to use<br></div>
<div>
plaintext below.<br></div>
<div> </div>
<div>
--------------------------------------------------------------------------------------------------------------------------------------------<br></div>
<div><div><div> </div>
<div>I just got JGroups/Infinispan with JDBC_PING working from inside a Docker cluster in ECS on EC2. I use JDBC_PING rather than S3_PING, since I need a database anyway and didn't want to have to set up an S3 bucket just for this one purpose. Nicolás, if you're on AWS the default UDP transport for JGroups doesn't work because multicast isn't supported inside EC2, which may be your problem.<br></div>
<div> </div>
<div>Here are the configurations you'd need:<br></div>
<div> </div>
<div>1. The JGroups module has to reference the db module. So in jgroups-module.xml I have:<br></div>
<div> </div>
<pre>
<dependencies>
    <module name="javax.api"/>
    <module name="org.postgresql.jdbc"/>
</dependencies>
</pre>
<div> </div>
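<div>(If it helps locate that file: the snippet below is only a sketch, assuming the dependency is added to the JGroups module descriptor in a stock WildFly 10 layout; the path may differ in your distribution.)<br></div>
<pre>
# Assumed location of the JGroups module descriptor in a standard WildFly 10 layout;
# adjust if your distribution lays out modules differently.
MODULE_XML="$WILDFLY_HOME/modules/system/layers/base/org/jgroups/main/module.xml"
# Confirm the Postgres driver module is listed as a dependency:
grep -n "org.postgresql.jdbc" "$MODULE_XML"
</pre>
<div> </div>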
<div>2. The standalone-ha.xml has a JGroups subsystem (with TCP and JDBC_PING) that looks like the configuration below; I read certain variables from the environment, but may use the Wildfly vault tool for some of them. The external_addr property configurations are only needed if you're inside a Docker container, since Wildfly has to read the address of the EC2 instance hosting the container to register itself with JGroups. For the initialize_sql you can generally use the default, but for Postgres I needed a custom DDL because I needed the BYTEA data type which isn't in the default DDL.<br></div>
<div> </div>
<pre>
<subsystem xmlns="urn:jboss:domain:jgroups:4.0">
    <channels default="ee">
        <channel name="ee" stack="tcp"/>
    </channels>

    <stacks default="tcp">
        <stack name="tcp">
            <transport type="TCP" socket-binding="jgroups-tcp">
                <property name="external_addr">${env.EXTERNAL_HOST_IP}</property>
            </transport>

            <protocol type="JDBC_PING">
                <property name="connection_driver">org.postgresql.Driver</property>
                <property name="connection_url">jdbc:postgresql://${env.POSTGRES_TCP_ADDR}:${env.POSTGRES_TCP_PORT}/${env.POSTGRES_DATABASE}</property>
                <property name="connection_username">${env.POSTGRES_USER}</property>
                <property name="connection_password">${env.POSTGRES_PASSWORD}</property>
                <property name="initialize_sql">
                    CREATE TABLE IF NOT EXISTS jgroupsping (
                        own_addr VARCHAR(200) NOT NULL,
                        cluster_name VARCHAR(200) NOT NULL,
                        ping_data BYTEA DEFAULT NULL,
                        PRIMARY KEY (own_addr, cluster_name)
                    )
                </property>
            </protocol>

            <protocol type="MERGE3"/>
            <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd">
                <property name="external_addr">${env.EXTERNAL_HOST_IP}</property>
            </protocol>

            <protocol type="FD"/>
            <protocol type="VERIFY_SUSPECT"/>
            <protocol type="pbcast.NAKACK2"/>
            <protocol type="UNICAST3"/>
            <protocol type="pbcast.STABLE"/>
            <protocol type="pbcast.GMS"/>
            <protocol type="MFC"/>
            <protocol type="FRAG2"/>
        </stack>
    </stacks>
</subsystem>
</pre>
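<div>(A quick way to check that discovery is working is to look at the jgroupsping table; the snippet below is only a sketch, assuming psql is available and the same POSTGRES_* variables as above. Once both nodes are up, the table should contain one row per member per cluster.)<br></div>
<pre>
# Sketch: inspect the JDBC_PING table to verify that both nodes registered.
# Assumes psql is installed and the same POSTGRES_* environment variables as above.
PGPASSWORD="$POSTGRES_PASSWORD" psql \
    -h "$POSTGRES_TCP_ADDR" -p "$POSTGRES_TCP_PORT" \
    -U "$POSTGRES_USER" -d "$POSTGRES_DATABASE" \
    -c "SELECT own_addr, cluster_name FROM jgroupsping;"
</pre>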
<div> </div>
<div>3. If you're in a Docker container, you have to expose the JGroups ports so they are visible from outside the container, so in standalone-ha.xml I have changed the socket bindings to the public interface:<br></div>
<div> </div>
<pre>
<socket-binding name="jgroups-tcp" interface="public" port="7600"/>
<socket-binding name="jgroups-tcp-fd" interface="public" port="57600"/>
</pre>
<div> </div>
<div>4. For Docker, the startup script needs to pass the EXTERNAL_HOST_IP variable. I have a wrapper start script that first queries the AWS instance metadata service for the host's private IP address:<br></div>
<div> </div>
<pre>
export EXTERNAL_HOST_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
exec $WILDFLY_HOME/bin/standalone.sh -c standalone-keycloak-ha.xml \
    -Djboss.node.name=$HOSTNAME -Djgroups.bind_addr=global -b $HOSTNAME
</pre>
<div> </div>
<div>> --------------------------------------------------------------------------------------------------------------------------------------------<br></div>
<div>> From: <<a href="mailto:keycloak-user-bounces@lists.jboss.org">keycloak-user-bounces@lists.jboss.org</a>><br></div>
<div>> Date: Wednesday, February 17, 2016 at 9:03 AM<br></div>
<div>> To: "<a href="mailto:keycloak-user@lists.jboss.org">keycloak-user@lists.jboss.org</a>" <<a href="mailto:keycloak-user@lists.jboss.org">keycloak-user@lists.jboss.org</a>><br></div>
<div>> Subject: [keycloak-user] Infinispan not working on HA environment with dockers.<br></div>
<div>><br></div>
<div>> Hello all,<br></div>
<div>> I'm trying to set up a Keycloak HA environment with Docker. I tried with the jboss/keycloak-ha-postgres:1.8.0.Final image.<br></div>
<div>><br></div>
<div>> I can't make Infinispan work when I run 2 instances of my Docker image. I get the following log on every node:<br></div>
<div>><br></div>
<div>> Received new cluster view for channel ejb: [f9032dc82244|0] (1) [f9032dc82244]<br></div>
<div>> Received new cluster view for channel hibernate: [f9032dc82244|0] (1) [f9032dc82244]<br></div>
<div>> Received new cluster view for channel keycloak: [f9032dc82244|0] (1) [f9032dc82244]<br></div>
<div>> Received new cluster view for channel web: [f9032dc82244|0] (1) [f9032dc82244]<br></div>
<div>> Channel hibernate local address is f9032dc82244, physical addresses are [127.0.0.1:55200]<br></div>
<div>> Channel keycloak local address is f9032dc82244, physical addresses are [127.0.0.1:55200]<br></div>
<div>> Channel ejb local address is f9032dc82244, physical addresses are [127.0.0.1:55200]<br></div>
<div>> Channel web local address is f9032dc82244, physical addresses are [127.0.0.1:55200]<br></div>
<div>> Received new cluster view for channel server: [f9032dc82244|0] (1) [f9032dc82244]<br></div>
<div>> Channel server local address is f9032dc82244, physical addresses are [127.0.0.1:55200]<br></div>
<div>><br></div>
<div>> This causes my user sessions not to be shared between instances, so it's not working properly.<br></div>
<div>><br></div>
<div>> When I run 2 instances of Keycloak without Docker, they work properly.<br></div>
<div>><br></div>
<div>> Am I missing something? Is there any extra configuration that I need to change?<br></div>
<div>><br></div>
<div>> Thanks,<br></div>
<div>> Nicolas.-<br></div>
<div>> --<br></div>
<div> <a href="http://www.fastmail.com">http://www.fastmail.com</a> - A fast, anti-spam email service.<br></div>
<div> </div>
</div>
</div>
<div>--<br></div>
<div>Aikeaguinea<br></div>
<div> <a href="mailto:aikeaguinea@xsmail.com">aikeaguinea@xsmail.com</a><br></div>
<div> <span><span class="colour" style="color:rgb(136, 136, 136)"><br> <br>
--<br> <a href="http://www.fastmail.com">http://www.fastmail.com</a> - Access your email from home and the web<br> </span></span></div>
</blockquote></div>
</div>
</div>
</div>
</blockquote><div> </div>
<div id="sig3995191"><div class="signature">--<br></div>
<div class="signature"> Aikeaguinea<br></div>
<div class="signature"> aikeaguinea@xsmail.com<br></div>
<div class="signature"> </div>
</div>
<div> </div>
<pre>
--
http://www.fastmail.com - Or how I learned to stop worrying and
love email again
</pre>
</body>
</html>