What is the error event when the token is not valid? I'm guessing that
this is happening on the code-to-token exchange. If so, that may mean
that the clustered cache is still not set up correctly.
On 2/17/2016 4:53 PM, Aikeaguinea wrote:
I haven't found any way around this other than turning on session
affinity at the load balancer level.
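
If your load balancer supports it, that just means sticky sessions.
As an illustration only (HAProxy syntax; the backend name, node
addresses, and ports here are made up, not from this setup):

    backend keycloak
        balance roundrobin
        # each client gets a cookie pinning it to one Keycloak node
        cookie SRV insert indirect nocache
        server kc1 10.0.0.1:8080 check cookie kc1
        server kc2 10.0.0.2:8080 check cookie kc2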
On Wed, Feb 17, 2016, at 03:37 PM, Nicolás Pozo wrote:
> Hi,
> JDBC_PING did the job and Infinispan seems to be working now. But I
> have another issue.
> I have 2 Keycloak instances running behind a load balancer. When I
> get a token from server 1 and the load balancer then sends requests to
> server 2 using this token, I get a 401 error because the token is not
> valid. Is there any other missing configuration to synchronize tokens?
> Thanks,
> Nicolás.-
> 2016-02-17 13:01 GMT-03:00 Aikeaguinea <aikeaguinea@xsmail.com>:
>
> Apologies to those reading my message in plaintext; apparently
> all the
> spaces come out as question marks. I've updated the message to use
> plaintext below.
>
--------------------------------------------------------------------------------------------------------------------------------------------
> I just got JGroups/Infinispan with JDBC_PING working from inside a
> Docker cluster in ECS on EC2. I use JDBC_PING rather than S3_PING,
> since I need a database anyway and didn't want to have to set up an
> S3 bucket just for this one purpose. Nicolás, if you're on AWS, the
> default UDP transport for JGroups doesn't work because multicast
> isn't supported inside EC2, which may be your problem.
> Here are the configurations you'd need:
> 1. The JGroups module has to reference the db module. So in
> jgroups-module.xml I have:
>
>     <dependencies>
>         <module name="javax.api"/>
>         <module name="org.postgresql.jdbc"/>
>     </dependencies>
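>
> For reference, the full module.xml for JGroups would then look
> roughly like this (the resource path and jar version are assumptions
> based on a stock WildFly layout, not something from my setup):
>
>     <?xml version="1.0" encoding="UTF-8"?>
>     <!-- e.g. modules/system/layers/base/org/jgroups/main/module.xml -->
>     <module xmlns="urn:jboss:module:1.3" name="org.jgroups">
>         <resources>
>             <!-- use whichever jgroups jar ships with your WildFly -->
>             <resource-root path="jgroups-3.6.6.Final.jar"/>
>         </resources>
>         <dependencies>
>             <module name="javax.api"/>
>             <module name="org.postgresql.jdbc"/>
>         </dependencies>
>     </module>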
> 2. The standalone-ha.xml has a JGroups subsystem (with TCP and
> JDBC_PING) that looks like the configuration below; I read certain
> variables from the environment, but may use the WildFly vault tool
> for some of them. The external_addr property configurations are only
> needed if you're inside a Docker container, since WildFly has to
> read the address of the EC2 instance hosting the container to
> register itself with JGroups. For the initialize_sql you can
> generally use the default, but for Postgres I needed a custom DDL
> because I needed the BYTEA data type, which isn't in the default DDL.
>     <subsystem xmlns="urn:jboss:domain:jgroups:4.0">
>         <channels default="ee">
>             <channel name="ee" stack="tcp"/>
>         </channels>
>         <stacks default="tcp">
>             <stack name="tcp">
>                 <transport type="TCP" socket-binding="jgroups-tcp">
>                     <property name="external_addr">${env.EXTERNAL_HOST_IP}</property>
>                 </transport>
>                 <protocol type="JDBC_PING">
>                     <property name="connection_driver">org.postgresql.Driver</property>
>                     <property name="connection_url">jdbc:postgresql://${env.POSTGRES_TCP_ADDR}:${env.POSTGRES_TCP_PORT}/${env.POSTGRES_DATABASE}</property>
>                     <property name="connection_username">${env.POSTGRES_USER}</property>
>                     <property name="connection_password">${env.POSTGRES_PASSWORD}</property>
>                     <property name="initialize_sql">
>                         CREATE TABLE IF NOT EXISTS jgroupsping (
>                             own_addr VARCHAR(200) NOT NULL,
>                             cluster_name VARCHAR(200) NOT NULL,
>                             ping_data BYTEA DEFAULT NULL,
>                             PRIMARY KEY (own_addr, cluster_name)
>                         )
>                     </property>
>                 </protocol>
>                 <protocol type="MERGE3"/>
>                 <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd">
>                     <property name="external_addr">${env.EXTERNAL_HOST_IP}</property>
>                 </protocol>
>                 <protocol type="FD"/>
>                 <protocol type="VERIFY_SUSPECT"/>
>                 <protocol type="pbcast.NAKACK2"/>
>                 <protocol type="UNICAST3"/>
>                 <protocol type="pbcast.STABLE"/>
>                 <protocol type="pbcast.GMS"/>
>                 <protocol type="MFC"/>
>                 <protocol type="FRAG2"/>
>             </stack>
>         </stacks>
>     </subsystem>
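>
> Once two or more nodes are up, a quick sanity check is to look at
> the table JDBC_PING maintains (table and columns as in the DDL
> above; the query itself is just an illustration):
>
>     -- one row per node and channel; two healthy Keycloak nodes
>     -- should show up with distinct own_addr values
>     SELECT own_addr, cluster_name FROM jgroupsping;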
> 3. If you're in a Docker container, you have to expose the JGroups
> ports so they are visible from outside the container, so in
> standalone-ha.xml I have changed the socket bindings to the public
> interface:
>
>     <socket-binding name="jgroups-tcp" interface="public" port="7600"/>
>     <socket-binding name="jgroups-tcp-fd" interface="public" port="57600"/>
> 4. For Docker, the startup script needs to pass the EXTERNAL_HOST_IP
> variable. I have a wrapper start script that first queries the AWS
> instance metadata service for the host's private IP address:
>
>     export EXTERNAL_HOST_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
>     exec $WILDFLY_HOME/bin/standalone.sh -c standalone-keycloak-ha.xml \
>         -Djboss.node.name=$HOSTNAME -Djgroups.bind_addr=global -b $HOSTNAME
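>
> To wire that wrapper script in, something like this hypothetical
> Dockerfile would do (the script name and path are assumptions, and
> it assumes start.sh is already executable in the build context):
>
>     FROM jboss/keycloak-ha-postgres:1.8.0.Final
>     # start.sh is the wrapper script from step 4
>     COPY start.sh /opt/jboss/start.sh
>     ENTRYPOINT ["/opt/jboss/start.sh"]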
> >
>
--------------------------------------------------------------------------------------------------------------------------------------------
> > From: <keycloak-user-bounces@lists.jboss.org>
> > Date: Wednesday, February 17, 2016 at 9:03 AM
> > To: "keycloak-user@lists.jboss.org" <keycloak-user@lists.jboss.org>
> > Subject: [keycloak-user] Infinispan not working in an HA
> > environment with Docker.
> >
> > Hello all,
> > I'm trying to set up a Keycloak HA environment with Docker. I
> > tried with the jboss/keycloak-ha-postgres:1.8.0.Final image.
> >
> > I can't make Infinispan work when I run 2 instances of my
> > Docker image. I get the following log lines on every node:
> >
> > Received new cluster view for channel ejb: [f9032dc82244|0] (1) [f9032dc82244]
> > Received new cluster view for channel hibernate: [f9032dc82244|0] (1) [f9032dc82244]
> > Received new cluster view for channel keycloak: [f9032dc82244|0] (1) [f9032dc82244]
> > Received new cluster view for channel web: [f9032dc82244|0] (1) [f9032dc82244]
> > Channel hibernate local address is f9032dc82244, physical addresses are [127.0.0.1:55200]
> > Channel keycloak local address is f9032dc82244, physical addresses are [127.0.0.1:55200]
> > Channel ejb local address is f9032dc82244, physical addresses are [127.0.0.1:55200]
> > Channel web local address is f9032dc82244, physical addresses are [127.0.0.1:55200]
> > Received new cluster view for channel server: [f9032dc82244|0] (1) [f9032dc82244]
> > Channel server local address is f9032dc82244, physical addresses are [127.0.0.1:55200]
> >
> > This means my user sessions are not shared between
> > instances, so the setup isn't working properly.
> >
> > When I run 2 instances of Keycloak without Docker, they work
> > properly.
> >
> > Am I missing something? Is there any extra configuration that I
> need to change?
> >
> > Thanks,
> > Nicolas.-
> --
> Aikeaguinea
> aikeaguinea@xsmail.com
>
>
>
--
Aikeaguinea
aikeaguinea@xsmail.com
_______________________________________________
keycloak-user mailing list
keycloak-user@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/keycloak-user