Hi Marek,

Yes, we have both nodes listed in the TCPPING configuration on both servers. It looks like this (x.x.x.x and y.y.y.y being the servers' public IPs):

        <subsystem xmlns="urn:jboss:domain:jgroups:2.0" default-stack="tcp">
            <stack name="udp">
                <transport type="UDP" socket-binding="jgroups-udp"/>
                <protocol type="PING"/>
                <protocol type="MERGE3"/>
                <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
                <protocol type="FD_ALL"/>
                <protocol type="VERIFY_SUSPECT"/>
                <protocol type="pbcast.NAKACK2"/>
                <protocol type="UNICAST3"/>
                <protocol type="pbcast.STABLE"/>
                <protocol type="pbcast.GMS"/>
                <protocol type="UFC"/>
                <protocol type="MFC"/>
                <protocol type="FRAG2"/>
                <protocol type="RSVP"/>
            </stack>
            <stack name="tcp">
                <transport type="TCP" socket-binding="jgroups-tcp"/>
                <protocol type="TCPPING">
                     <property name="initial_hosts">x.x.x.x[7600],y.y.y.y[7600]</property>
                     <property name="num_initial_members">2</property>
                     <property name="port_range">0</property>
                     <property name="timeout">2000</property>
                </protocol>
                <!--<protocol type="MPING" socket-binding="jgroups-mping"/>-->
                <protocol type="MERGE2"/>
                <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
                <protocol type="FD"/>
                <protocol type="VERIFY_SUSPECT"/>
                <protocol type="pbcast.NAKACK2"/>
                <protocol type="UNICAST3"/>
                <protocol type="pbcast.STABLE"/>
                <protocol type="pbcast.GMS"/>
                <protocol type="MFC"/>
                <protocol type="FRAG2"/>
                <protocol type="RSVP"/>
            </stack>
        </subsystem>
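
For reference, the [7600] in initial_hosts is meant to match the port of the jgroups-tcp socket binding referenced by the TCP transport above. As a sketch (this is the usual standalone-ha.xml default, not a copy of our exact file), that binding and its FD_SOCK counterpart look like:

        <socket-binding name="jgroups-tcp" port="7600"/>
        <socket-binding name="jgroups-tcp-fd" port="57600"/>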


We assign node names when starting the servers by passing the following parameters:
./standalone.sh -c standalone-ha.xml -b x.x.x.x -Djboss.node.name=node1
./standalone.sh -c standalone-ha.xml -b y.y.y.y -Djboss.node.name=node2
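
(For clarity, -b here is just shorthand for the jboss.bind.address system property, so the same commands written out fully would be, as a sketch:

./standalone.sh -c standalone-ha.xml -Djboss.bind.address=x.x.x.x -Djboss.node.name=node1
./standalone.sh -c standalone-ha.xml -Djboss.bind.address=y.y.y.y -Djboss.node.name=node2

Beyond that, we haven't passed any JGroups-specific bind address settings.)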

However, once we start the servers, each server seems to detect only itself and starts working independently.

In node1 it logs:
...
05:09:44,643 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-1) ISPN000078: Starting JGroups Channel
05:09:44,696 INFO  [stdout] (MSC service thread 1-1)
05:09:44,697 INFO  [stdout] (MSC service thread 1-1) -------------------------------------------------------------------
05:09:44,697 INFO  [stdout] (MSC service thread 1-1) GMS: address=node1/keycloak, cluster=keycloak, physical address=0.0.0.0:7600
05:09:44,697 INFO  [stdout] (MSC service thread 1-1) -------------------------------------------------------------------
05:09:46,779 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-1) ISPN000094: Received new cluster view: [node1/keycloak|0] (1) [node1/keycloak]
05:09:46,781 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-1) ISPN000079: Cache local address is node1/keycloak, physical addresses are [0.0.0.0:7600]
...

In node2 the log is the same, with node1 replaced by node2.

Also, if we perform an activity on node1, something similar to the following gets logged on that server, without any mention of node2 (and vice versa if the activity is done on node2):
...
05:09:57,280 DEBUG [org.infinispan.interceptors.InvalidationInterceptor] (MSC service thread 1-1) Cache [node1/keycloak] replicating InvalidateCommand{keys=[e2734aa6-e770-407a-b00a-1915105ea586]}
05:09:57,281 DEBUG [org.infinispan.interceptors.InvalidationInterceptor] (MSC service thread 1-1) Cache [node1/keycloak] replicating InvalidateCommand{keys=[ebb4d88a-a364-4ec1-bd7c-48572ac762af]}
05:09:57,283 DEBUG [org.infinispan.interceptors.InvalidationInterceptor] (MSC service thread 1-1) Cache [node1/keycloak] replicating InvalidateCommand{keys=[9658becc-34e4-4fd1-817c-46b3b9ad4c7f]}
...
 

Can you tell if we're doing something wrong here?


Thanks,
Lohitha.

On Thu, May 7, 2015 at 7:58 PM, Marek Posolda <mposolda@redhat.com> wrote:
Hi,

Once you start both nodes, do you see the message in server.log mentioned in the troubleshooting section: http://docs.jboss.org/keycloak/docs/1.2.0.CR1/userguide/html/clustering.html#d4e2750 ?

Also, I recall that for TCPPING you need to manually list all the cluster nodes in the TCPPING protocol's JGroups configuration. This needs to be done on both nodes, AFAIR. Do you have that configured?

Marek


On 7.5.2015 15:39, Lohitha Chiranjeewa wrote:
Hi,

We're trying to set up a clustered environment with two servers, and we need the Infinispan caches to work correctly.

We're using AWS servers for all our requirements, and AWS doesn't support multicast, so the UDP option is out for us. As per the JIRA ticket KEYCLOAK-979, we have tried TCP instead, but with no success: changes don't get synced between the servers.

To configure TCPPING, we've referred to both the KEYCLOAK-979 ticket and http://middlewaremagic.com/jboss/?p=2015. We have enabled TCP communication in our VPC and have all the necessary ports open on our servers.

We've followed the steps given at http://docs.jboss.org/keycloak/docs/1.2.0.Beta1/userguide/html/clustering.html to set up Infinispan and the related configs.

What could we be doing wrong here? Any configuration we're missing?


Thanks,
Lohitha.

