<div dir="ltr">Hi,<div><span style="color:rgb(80,0,80)">JDBC_PING did the Job and infinispan seems to be working now. But I have another issue. </span></div><div><font color="#500050"><br></font></div><div><font color="#500050">I have 2 keycloak instances running behind a load balancer. When I get a token from server 1 and then load balancer sends requests to server 2 using this token, I get an error 401 because token is not valid. Is there any other missing configuration to sinchronize tokens? </font></div><div><font color="#500050"><br></font></div><div><font color="#500050">Thanks,</font></div><div><font color="#500050">Nicolás.-<br></font><div class="gmail_extra"><br><div class="gmail_quote">2016-02-17 13:01 GMT-03:00 Aikeaguinea <span dir="ltr"><<a href="mailto:aikeaguinea@xsmail.com" target="_blank">aikeaguinea@xsmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Apologies to those reading my message in plaintext; apparently all the<br>
spaces come out as question marks. I've updated the message to use<br>
plaintext below.<br>
<br>
--------------------------------------------------------------------------------------------------------------------------------------------<br>
<div><div class="h5"><br>
I just got JGroups/Infinispan with JDBC_PING working from inside a
Docker cluster in ECS on EC2. I use JDBC_PING rather than S3_PING, since
I need a database anyway and didn't want to have to set up an S3 bucket
just for this one purpose. Nicolás, if you're on AWS, the default UDP
transport for JGroups doesn't work because multicast isn't supported
inside EC2, which may be your problem.

Here are the configurations you'd need:

1. The JGroups module has to reference the db module. So in
jgroups-module.xml I have:

<dependencies>
    <module name="javax.api"/>
    <module name="org.postgresql.jdbc"/>
</dependencies>
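
The org.postgresql.jdbc module it references is an ordinary WildFly
driver module. A minimal sketch of its module.xml, assuming the driver
jar sits next to it in the module's main/ directory (the jar file name
below is a placeholder, not necessarily the version you have):

<?xml version="1.0" encoding="UTF-8"?>
<!-- sketch of a driver module; the jar name is a placeholder -->
<module xmlns="urn:jboss:module:1.3" name="org.postgresql.jdbc">
    <resources>
        <resource-root path="postgresql-9.4.1207.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>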

2. The standalone-ha.xml has a JGroups subsystem (with TCP and
JDBC_PING) that looks like the configuration below; I read certain
variables from the environment, but may use the WildFly vault tool for
some of them. The external_addr property configurations are only needed
if you're inside a Docker container, since WildFly has to register the
address of the EC2 instance hosting the container with JGroups rather
than the container's internal address. For the initialize_sql you can
generally use the default, but for Postgres I needed a custom DDL
because I needed the BYTEA data type, which isn't in the default DDL.

<subsystem xmlns="urn:jboss:domain:jgroups:4.0">
    <channels default="ee">
        <channel name="ee" stack="tcp"/>
    </channels>

    <stacks default="tcp">
        <stack name="tcp">
            <transport type="TCP" socket-binding="jgroups-tcp">
                <property name="external_addr">${env.EXTERNAL_HOST_IP}</property>
            </transport>

            <protocol type="JDBC_PING">
                <property name="connection_driver">org.postgresql.Driver</property>
                <property name="connection_url">jdbc:postgresql://${env.POSTGRES_TCP_ADDR}:${env.POSTGRES_TCP_PORT}/${env.POSTGRES_DATABASE}</property>
                <property name="connection_username">${env.POSTGRES_USER}</property>
                <property name="connection_password">${env.POSTGRES_PASSWORD}</property>
                <property name="initialize_sql">
                    CREATE TABLE IF NOT EXISTS jgroupsping (
                        own_addr VARCHAR(200) NOT NULL,
                        cluster_name VARCHAR(200) NOT NULL,
                        ping_data BYTEA DEFAULT NULL,
                        PRIMARY KEY (own_addr, cluster_name)
                    )
                </property>
            </protocol>

            <protocol type="MERGE3"/>
            <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd">
                <property name="external_addr">${env.EXTERNAL_HOST_IP}</property>
            </protocol>

            <protocol type="FD"/>
            <protocol type="VERIFY_SUSPECT"/>
            <protocol type="pbcast.NAKACK2"/>
            <protocol type="UNICAST3"/>
            <protocol type="pbcast.STABLE"/>
            <protocol type="pbcast.GMS"/>
            <protocol type="MFC"/>
            <protocol type="FRAG2"/>
        </stack>
    </stacks>
</subsystem>
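
Once both nodes are up, you can sanity-check discovery directly in the
database, since each member writes a row for itself into that table. A
quick check, assuming the jgroupsping table from the DDL above:

-- one row per live node; a lone row after starting both
-- nodes means the second node never registered itself
SELECT own_addr, cluster_name FROM jgroupsping;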

3. If you're in a Docker container, you have to expose the JGroups ports
so they are visible from outside the container. So in the socket
bindings in standalone-ha.xml I changed them to the public interface:

<socket-binding name="jgroups-tcp" interface="public" port="7600"/>
<socket-binding name="jgroups-tcp-fd" interface="public" port="57600"/>

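How you publish those ports depends on your setup. As a hypothetical
example outside ECS, a plain docker run would need something like this
(the image name, addresses, and credentials are all made up):

# hypothetical values throughout; publish the JGroups ports so
# peers on other hosts can reach this node
docker run -d \
    -p 8080:8080 -p 7600:7600 -p 57600:57600 \
    -e EXTERNAL_HOST_IP=10.0.1.23 \
    -e POSTGRES_TCP_ADDR=db.internal -e POSTGRES_TCP_PORT=5432 \
    -e POSTGRES_DATABASE=keycloak -e POSTGRES_USER=keycloak \
    -e POSTGRES_PASSWORD=secret \
    myorg/keycloak-ha

In ECS the equivalent is the port mappings in the task definition.
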
4. For Docker, the startup script needs to pass the EXTERNAL_HOST_IP
variable. I have a wrapper start script that first queries the AWS
instance metadata service for the host's private IP address:

export EXTERNAL_HOST_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
exec $WILDFLY_HOME/bin/standalone.sh -c standalone-keycloak-ha.xml \
    -Djboss.node.name=$HOSTNAME -Djgroups.bind_addr=global -b $HOSTNAME

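Put together, the whole wrapper is only a few lines. A sketch, assuming
it runs as the container's entrypoint and that WILDFLY_HOME and HOSTNAME
are already set in the environment:

#!/bin/sh
# Sketch of the wrapper described above; the metadata URL is the
# standard EC2 instance metadata endpoint. Fail fast if the host IP
# can't be determined, since JGroups would register a bad address.
export EXTERNAL_HOST_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
if [ -z "$EXTERNAL_HOST_IP" ]; then
    echo "Could not read host IP from instance metadata" >&2
    exit 1
fi
exec $WILDFLY_HOME/bin/standalone.sh -c standalone-keycloak-ha.xml \
    -Djboss.node.name=$HOSTNAME -Djgroups.bind_addr=global -b $HOSTNAME
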
> --------------------------------------------------------------------------------------------------------------------------------------------
> From: <keycloak-user-bounces@lists.jboss.org>
> Date: Wednesday, February 17, 2016 at 9:03 AM
> To: "keycloak-user@lists.jboss.org" <keycloak-user@lists.jboss.org>
> Subject: [keycloak-user] Infinispan not working on HA environment with dockers.
>
> Hello all,
> I'm trying to set up a Keycloak HA environment with Docker. I tried the jboss/keycloak-ha-postgres:1.8.0.Final image.
>
> I can't make Infinispan work when I run 2 instances of my Docker image. I get the following log in every node:
>
> Received new cluster view for channel ejb: [f9032dc82244|0] (1) [f9032dc82244]
> Received new cluster view for channel hibernate: [f9032dc82244|0] (1) [f9032dc82244]
> Received new cluster view for channel keycloak: [f9032dc82244|0] (1) [f9032dc82244]
> Received new cluster view for channel web: [f9032dc82244|0] (1) [f9032dc82244]
> Channel hibernate local address is f9032dc82244, physical addresses are [127.0.0.1:55200]
> Channel keycloak local address is f9032dc82244, physical addresses are [127.0.0.1:55200]
> Channel ejb local address is f9032dc82244, physical addresses are [127.0.0.1:55200]
> Channel web local address is f9032dc82244, physical addresses are [127.0.0.1:55200]
> Received new cluster view for channel server: [f9032dc82244|0] (1) [f9032dc82244]
> Channel server local address is f9032dc82244, physical addresses are [127.0.0.1:55200]
>
> Because of this, user sessions are not shared between instances and it's not working properly.
>
> When I run 2 instances of Keycloak without Docker, they work properly.
>
> Am I missing something? Is there any extra configuration that I need to change?
>
> Thanks,
> Nicolas.-

--
Aikeaguinea
aikeaguinea@xsmail.com