Thanks, the below is the exact post we were using as a reference.

 

Any other idea what might cause it, or what to search for in the logs or JMX?

 

 

From: Aikeaguinea [mailto:aikeaguinea@xsmail.com]
Sent: Tuesday, August 16, 2016 4:59 PM
To: Haim Vana <haimv@perfectomobile.com>; keycloak-user@lists.jboss.org
Subject: Re: [keycloak-user] KeyCloak HA on AWS EC2 with docker - cluster is up but login fails

 

Yes, this gets more complicated than your standard installation. AWS doesn't support UDP multicast inside EC2, and you also need to configure your Infinispan cache to work while you're running in Docker.

 

There was a thread on this list, "Using Keycloak in AWS EC2. What are people using? / Infinispan not working", where this was discussed; this is from that thread, describing how I got things working:

 

________________________________________________________

 

I just got JGroups/Infinispan with JDBC_PING working from inside a Docker cluster in ECS on EC2. I use JDBC_PING rather than S3_PING, since I need a database anyway and didn't want to have to set up an S3 bucket just for this one purpose. Nicolás, if you're on AWS, the default UDP transport for JGroups doesn't work because multicast isn't supported inside EC2, which may be your problem.

 

Here are the configurations you'd need:

 

1. The JGroups module has to reference the db module. So in jgroups-module.xml I have:

  <dependencies>
    <module name="javax.api"/>
    <module name="org.postgresql.jdbc"/>
  </dependencies>
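
A full sketch of that module descriptor may help orientation; the exact path and jar version vary by WildFly release, but on a typical install it lives at modules/system/layers/base/org/jgroups/main/module.xml and looks something like this (jar name here is illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.3" name="org.jgroups">
    <resources>
        <!-- jar version differs between WildFly releases -->
        <resource-root path="jgroups-3.6.8.Final.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="org.postgresql.jdbc"/>
    </dependencies>
</module>
```

The org.postgresql.jdbc module itself must of course also exist; it's the same driver module your datasource points at.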

 

2. The standalone-ha.xml has a JGroups subsystem (with TCP and JDBC_PING) that looks like the configuration below; I read certain variables from the environment, but may use the WildFly vault tool for some of them. The external_addr property configurations are only needed if you're inside a Docker container, since WildFly has to read the address of the EC2 instance hosting the container to register itself with JGroups. For the initialize_sql you can generally use the default, but for Postgres I needed a custom DDL because I needed the BYTEA data type, which isn't in the default DDL.

 

<subsystem xmlns="urn:jboss:domain:jgroups:4.0">
      <channels default="ee">
        <channel name="ee" stack="tcp"/>
      </channels>

      <stacks default="tcp">
        <stack name="tcp">
          <transport type="TCP" socket-binding="jgroups-tcp">
            <property name="external_addr">${env.EXTERNAL_HOST_IP}</property>
          </transport>

          <protocol type="JDBC_PING">
            <property name="connection_driver">org.postgresql.Driver</property>
            <property name="connection_url">jdbc:postgresql://${env.POSTGRES_TCP_ADDR}:${env.POSTGRES_TCP_PORT}/${env.POSTGRES_DATABASE}</property>
            <property name="connection_username">${env.POSTGRES_USER}</property>
            <property name="connection_password">${env.POSTGRES_PASSWORD}</property>
            <property name="initialize_sql">
              CREATE TABLE IF NOT EXISTS jgroupsping (
                own_addr VARCHAR(200) NOT NULL,
                cluster_name VARCHAR(200) NOT NULL,
                ping_data BYTEA DEFAULT NULL,
                PRIMARY KEY (own_addr, cluster_name)
              )
            </property>
          </protocol>

          <protocol type="MERGE3"/>
          <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd">
            <property name="external_addr">${env.EXTERNAL_HOST_IP}</property>
          </protocol>
          <protocol type="FD"/>
          <protocol type="VERIFY_SUSPECT"/>
          <protocol type="pbcast.NAKACK2"/>
          <protocol type="UNICAST3"/>
          <protocol type="pbcast.STABLE"/>
          <protocol type="pbcast.GMS"/>
          <protocol type="MFC"/>
          <protocol type="FRAG2"/>
        </stack>
      </stacks>
    </subsystem>
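
In case it helps to see what JDBC_PING actually does with that table: each node inserts a row for itself at startup and selects the rows for its cluster name to discover peers. Here is a toy sketch of that behavior in Python, using the stdlib sqlite3 as a stand-in for Postgres (BYTEA becomes BLOB; register/discover are illustrative helpers, not JGroups API):

```python
import sqlite3

# Same table JDBC_PING creates via initialize_sql, with BYTEA -> BLOB for sqlite.
DDL = """
CREATE TABLE IF NOT EXISTS jgroupsping (
    own_addr     VARCHAR(200) NOT NULL,
    cluster_name VARCHAR(200) NOT NULL,
    ping_data    BLOB DEFAULT NULL,
    PRIMARY KEY (own_addr, cluster_name)
)
"""

def register(conn, own_addr, cluster_name, ping_data=b""):
    # A node writes (or refreshes) its own row when it joins the cluster.
    conn.execute(
        "INSERT OR REPLACE INTO jgroupsping VALUES (?, ?, ?)",
        (own_addr, cluster_name, ping_data),
    )

def discover(conn, cluster_name):
    # Peer discovery is just a SELECT over the shared table.
    rows = conn.execute(
        "SELECT own_addr FROM jgroupsping WHERE cluster_name = ?",
        (cluster_name,),
    )
    return sorted(addr for (addr,) in rows)

conn = sqlite3.connect(":memory:")
conn.execute(DDL)
register(conn, "6dbce1e2a05a", "ee")
register(conn, "75f2b2e98cfd", "ee")
print(discover(conn, "ee"))  # -> ['6dbce1e2a05a', '75f2b2e98cfd']
```

Against the real database, a plain SELECT own_addr, cluster_name FROM jgroupsping should likewise show one row per node once the cluster has formed, which is a quick way to verify that each node managed to register itself.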

 

3. If you're in a Docker container, you have to expose the JGroups ports so they are visible from outside the container, so in standalone-ha.xml I changed the socket bindings to the public interface:

      <socket-binding name="jgroups-tcp" interface="public" port="7600"/>
      <socket-binding name="jgroups-tcp-fd" interface="public" port="57600"/>

 

4. For Docker, the startup script needs to pass the EXTERNAL_HOST_IP variable. I have a wrapper start script that first queries the AWS instance metadata service at 169.254.169.254 for the host's private IP address:

# Ask the EC2 instance metadata service for the host's private IP,
# then hand control to WildFly with the HA profile.
export EXTERNAL_HOST_IP=$(curl -s 169.254.169.254/latest/meta-data/local-ipv4)
exec $WILDFLY_HOME/bin/standalone.sh -c standalone-keycloak-ha.xml \
    -Djboss.node.name=$HOSTNAME -Djgroups.bind_addr=global -b $HOSTNAME

 

 

On Tue, Aug 16, 2016, at 09:01 AM, Haim Vana wrote:

Hi,

 

We are trying to set up Keycloak 1.9.3 with HA on AWS EC2 with Docker. The cluster comes up without errors, but login fails with the error below:

 

WARN [org.keycloak.events] (default task-10) type=LOGIN_ERROR, realmId=master, clientId=null, userId=null, ipAddress=172.30.200.171, error=invalid_code

 

We followed this post (http://lists.jboss.org/pipermail/keycloak-user/2016-February/004940.html) but used S3_PING instead of JDBC_PING.

 

It seems that the nodes detect each other:

 

INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-2,ee,6dbce1e2a05a) ISPN000094: Received new cluster view for channel keycloak: [6dbce1e2a05a|1] (2) [6dbce1e2a05a, 75f2b2e98cfd]

 

We suspect that the nodes don't communicate with each other. When we queried the JBoss MBean "jboss.as.expr:subsystem=jgroups,channel=ee", the result was:

jgroups,channel=ee = [6dbce1e2a05a|1] (2) [6dbce1e2a05a, 75f2b2e98cfd]
jgroups,channel=ee  receivedMessages = 0
jgroups,channel=ee  sentMessages = 0

 

And for the second node:

jgroups,channel=ee = [6dbce1e2a05a|1] (2) [6dbce1e2a05a, 75f2b2e98cfd]
jgroups,channel=ee  receivedMessages = 0
jgroups,channel=ee  sentMessages = 5

 

 

We also verified that the TCP ports 57600 and 7600 are open.

 

Any idea what might cause it?

 

 

Here is the relevant standalone-ha.xml configuration; the startup command is below:

 

<subsystem xmlns="urn:jboss:domain:jgroups:4.0">
            <channels default="ee">
                <channel name="ee" stack="tcp"/>
            </channels>
            <stacks>
                <stack name="udp">
                    <transport type="UDP" socket-binding="jgroups-udp"/>
                    <protocol type="PING"/>
                    <protocol type="MERGE3"/>
                    <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
                    <protocol type="FD_ALL"/>
                    <protocol type="VERIFY_SUSPECT"/>
                    <protocol type="pbcast.NAKACK2"/>
                    <protocol type="UNICAST3"/>
                    <protocol type="pbcast.STABLE"/>
                    <protocol type="pbcast.GMS"/>
                    <protocol type="UFC"/>
                    <protocol type="MFC"/>
                    <protocol type="FRAG2"/>
                </stack>
                <stack name="tcp">
                    <transport type="TCP" socket-binding="jgroups-tcp">
                        <property name="external_addr">200.129.4.189</property>
                    </transport>
                    <protocol type="S3_PING">
                        <property name="access_key">AAAAAAAAAAAAAA</property>
                        <property name="secret_access_key">BBBBBBBBBBBBBB</property>
                        <property name="location">CCCCCCCCCCCCCCCCCCCC</property>
                    </protocol>
                    <protocol type="MERGE3"/>
                    <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd">
                        <property name="external_addr">200.129.4.189</property>
                    </protocol>
                    <protocol type="FD"/>
                    <protocol type="VERIFY_SUSPECT"/>
                    <protocol type="pbcast.NAKACK2"/>
                    <protocol type="UNICAST3"/>
                    <protocol type="pbcast.STABLE"/>
                    <protocol type="pbcast.GMS"/>
                    <protocol type="MFC"/>
                    <protocol type="FRAG2"/>
                </stack>
            </stacks>
        </subsystem>

 

 

        <socket-binding name="jgroups-tcp" interface="public" port="7600"/>

        <socket-binding name="jgroups-tcp-fd" interface="public" port="57600"/>

 

And we start the server using the command below ($INTERNAL_HOST_IP is the container's internal IP address):

standalone.sh -c=standalone-ha.xml -b=$INTERNAL_HOST_IP -bmanagement=$INTERNAL_HOST_IP -bprivate=$INTERNAL_HOST_IP

 

 

Any help will be appreciated.

 

 

Thanks,

Haim.

 

 

The information contained in this message is proprietary to the sender, protected from disclosure, and may be privileged. The information is intended to be conveyed only to the designated recipient(s) of the message. If the reader of this message is not the intended recipient, you are hereby notified that any dissemination, use, distribution or copying of this communication is strictly prohibited and may be unlawful. If you have received this communication in error, please notify us immediately by replying to the message and deleting it from your computer. Thank you.

_______________________________________________

keycloak-user mailing list

keycloak-user@lists.jboss.org

https://lists.jboss.org/mailman/listinfo/keycloak-user

 

--

  Aikeaguinea

  aikeaguinea@xsmail.com

 

 

 
-- 
http://www.fastmail.com - Same, same, but different...