OK ... so the error appears to be related to its use of multicast (which
does not appear to propagate between nodes over the flannel overlay
network): when I pin the Keycloak instances to the same Kubernetes node,
everything suddenly works when a new pod is added ...
] (1) [keycloak-2268126783-gjr2g]
14:00:57,413 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-1) ISPN000094: Received new cluster view for channel server: [keycloak-2268126783-gjr2g|0] (1) [keycloak-2268126783-gjr2g]
14:00:57,414 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-2) ISPN000094: Received new cluster view for channel ejb: [keycloak-2268126783-gjr2g|0] (1) [keycloak-2268126783-gjr2g]
14:00:57,415 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-7) ISPN000094: Received new cluster view for channel hibernate: [keycloak-2268126783-gjr2g|0] (1) [keycloak-2268126783-gjr2g]
14:00:57,415 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-5) ISPN000094: Received new cluster view for channel keycloak: [keycloak-2268126783-gjr2g|0] (1) [keycloak-2268126783-gjr2g]
14:00:57,419 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-2) ISPN000079: Channel ejb local address is keycloak-2268126783-gjr2g, physical addres
Connection to 10.70.2.71 closed.
Placing the pods on different nodes results in multicast traffic that goes
unanswered:
14:14:52.676714 IP 10.10.69.4.32790 > 10.200.227.31.8200: Flags [.], ack 1,
win 553, options [nop,nop,TS val 600739168 ecr 600901632], length 0
14:14:52.676877 IP 10.200.227.31.8200 > 10.10.69.4.32790: Flags [.], ack 1,
win 235, options [nop,nop,TS val 600911648 ecr 600568905], length 0
14:14:59.557124 IP 10.10.69.4.45700 > 230.0.0.4.45700: UDP, length 92
14:14:59.557157 IP 10.10.69.4.45700 > 230.0.0.4.45700: UDP, length 92
14:15:02.628054 IP 10.200.227.31.8200 > 10.10.69.4.32790: Flags [.], ack 1,
win 235, options [nop,nop,TS val 600921600 ecr 600568905], length 0
14:15:02.628073 IP 10.10.69.4.32790 > 10.200.227.31.8200: Flags [.], ack 1,
win 553, options [nop,nop,TS val 600749119 ecr 600911648], length 0
14:15:02.692712 IP 10.10.69.4.32790 > 10.200.227.31.8200: Flags [.], ack 1,
win 553, options [nop,nop,TS val 600749184 ecr 600911648], length 0
14:15:02.692894 IP 10.200.227.31.8200 > 10.10.69.4.32790: Flags [.], ack 1,
win 235, options [nop,nop,TS val 600921664 ecr 600749119], length 0
14:15:07.636711 ARP, Request who-has 10.10.69.4 tell 10.10.69.1, length 28
14:15:07.636738 ARP, Reply 10.10.69.4 is-at 02:42:0a:0a:45:04, length 28
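
The UDP packets to 230.0.0.4:45700 look like MPING discovery traffic: if
I'm reading the stock standalone-ha.xml correctly, that multicast
address/port pair comes from the default jgroups-mping socket binding
(quoted from memory, so treat it as approximate):

<socket-binding name="jgroups-mping" interface="private" port="0"
        multicast-address="${jboss.default.multicast.address:230.0.0.4}"
        multicast-port="45700"/>
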
Perhaps I've misunderstood TCPGOSSIP? ... Does anyone know how to stop
using multicast, and where in the config this is set? ...
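
My best guess so far is that it's the MPING protocol in the "tcp" stack,
and that dropping it would leave TCPGOSSIP as the only discovery mechanism,
i.e. something like this (an untested sketch; everything else from the
config in my earlier mail below left unchanged):

<stack name="tcp">
    <transport type="TCP" socket-binding="jgroups-tcp"/>
    <protocol type="TCPGOSSIP">
        <property name="initial_hosts">${env.GOSSIP_ROUTER_HOST:127.0.0.1}[12001]</property>
    </protocol>
    <protocol type="MERGE3"/>
    <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
    <protocol type="FD"/>
    <protocol type="VERIFY_SUSPECT"/>
    <protocol type="pbcast.NAKACK2"/>
    <protocol type="UNICAST3"/>
    <protocol type="pbcast.STABLE"/>
    <protocol type="pbcast.GMS"/>
    <protocol type="MFC"/>
    <protocol type="FRAG2"/>
</stack>

Is that the right place to disable it, or am I looking at the wrong thing?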
Rohith
On Tue, Oct 11, 2016 at 2:42 PM, gambol <gambol99(a)gmail.com> wrote:
So, running tcpdump in the gossip router, I can see traffic from the
Keycloak instances when they boot:
13:23:09.679254 IP 10.10.13.2.57533 > 10.10.69.2.12001: Flags [F.], seq
3916063057, ack 3487985413, win 13442, options [nop,nop,TS val 597445725
ecr 597241454], length 0
13:23:09.679956 IP 10.10.69.2.12001 > 10.10.13.2.57533: Flags [F.], seq 1,
ack 1, win 210, options [nop,nop,TS val 597636171 ecr 597445725], length 0
13:23:09.680198 IP 10.10.69.2.12001 > 10.10.69.4.37303: Flags [P.], seq
3146214056:3146214091, ack 2740076567, win 210, options [nop,nop,TS val
597636171 ecr 597194069], length 35
13:23:09.680269 IP 10.10.69.4.37303 > 10.10.69.2.12001: Flags [.], ack 35,
win 13442, options [nop,nop,TS val 597636171 ecr 597636171], length 0
13:23:09.680550 IP 10.10.13.2.57533 > 10.10.69.2.12001: Flags [.], ack 2,
win 13442, options [nop,nop,TS val 597445726 ecr 597636171], length 0
Oddly enough, I see UDP from the Keycloak instance to the multicast
address:
13:37:54.224690 IP 10.10.69.4.45700 > 230.0.0.4.45700: UDP, length 92
E..x..@...Hj
....ee...8..DT.:......)%lp......keycloak-648398853-kfidy..
Nonetheless, the instances don't seem to be aware of one another ...
logging into one and then refreshing will eventually hit the other and fail
with an "unknown bearer" error; also, when I had this working before on
v1.9.0, bringing up or killing an instance would show membership events in
the console logs.
Rohith
On Tue, Oct 11, 2016 at 1:50 PM, gambol <gambol99(a)gmail.com> wrote:
> Hiya
>
> I'm running Keycloak inside Kubernetes and attempting to get clustering
> working. Multicast isn't available, so I'm trying to get the gossip
> protocol working.
>
> Version: 2.2.1-Final
>
> I've changed standalone-ha.xml as follows:
>
> <subsystem xmlns="urn:jboss:domain:jgroups:4.0">
>     <channels default="ee">
>         <channel name="ee" stack="tcp"/>
>     </channels>
>     <stacks>
>         <stack name="udp">
>             <transport type="UDP" socket-binding="jgroups-udp"/>
>             <protocol type="PING"/>
>             <protocol type="MERGE3"/>
>             <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
>             <protocol type="FD_ALL"/>
>             <protocol type="VERIFY_SUSPECT"/>
>             <protocol type="pbcast.NAKACK2"/>
>             <protocol type="UNICAST3"/>
>             <protocol type="pbcast.STABLE"/>
>             <protocol type="pbcast.GMS"/>
>             <protocol type="UFC"/>
>             <protocol type="MFC"/>
>             <protocol type="FRAG2"/>
>         </stack>
>         <stack name="tcp">
>             <transport type="TCP" socket-binding="jgroups-tcp"/>
>             <protocol type="TCPGOSSIP">
>                 <property name="initial_hosts">${env.GOSSIP_ROUTER_HOST:127.0.0.1}[12001]</property>
>             </protocol>
>             <protocol type="MPING" socket-binding="jgroups-mping"/>
>             <protocol type="MERGE3"/>
>             <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
>             <protocol type="FD"/>
>             <protocol type="VERIFY_SUSPECT"/>
>             <protocol type="pbcast.NAKACK2"/>
>             <protocol type="UNICAST3"/>
>             <protocol type="pbcast.STABLE"/>
>             <protocol type="pbcast.GMS"/>
>             <protocol type="MFC"/>
>             <protocol type="FRAG2"/>
>         </stack>
>     </stacks>
> </subsystem>
>
> The gossip router service we're using is jboss/jgroups-gossip.
>
> 12:40:53,114 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-8) ISPN000078: Starting JGroups channel ejb
> 12:40:53,114 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-6) ISPN000078: Starting JGroups channel server
> 12:40:53,114 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-1) ISPN000078: Starting JGroups channel hibernate
> 12:40:53,114 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-4) ISPN000078: Starting JGroups channel keycloak
> 12:40:53,114 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-7) ISPN000078: Starting JGroups channel web
> 12:40:53,216 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-8) ISPN000094: Received new cluster view for channel ejb: [keycloak-4062290770-sn7jb|0] (1) [keycloak-4062290770-sn7jb]
> 12:40:53,216 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-1) ISPN000094: Received new cluster view for channel hibernate: [keycloak-4062290770-sn7jb|0] (1) [keycloak-4062290770-sn7jb]
> 12:40:53,216 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-6) ISPN000094: Received new cluster view for channel server: [keycloak-4062290770-sn7jb|0] (1) [keycloak-4062290770-sn7jb]
> 12:40:53,217 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-4) ISPN000094: Received new cluster view for channel keycloak: [keycloak-4062290770-sn7jb|0] (1) [keycloak-4062290770-sn7jb]
> 12:40:53,216 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-7) ISPN000094: Received new cluster view for channel web: [keycloak-4062290770-sn7jb|0] (1) [keycloak-4062290770-sn7jb]
> 12:40:53,221 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-4) ISPN000079: Channel keycloak local address is keycloak-4062290770-sn7jb, physical addresses are [10.10.69.4:7600]
> 12:40:53,221 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-6) ISPN000079: Channel server local address is keycloak-4062290770-sn7jb, physical addresses are [10.10.69.4:7600]
> 12:40:53,221 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-1) ISPN000079: Channel hibernate local address is keycloak-4062290770-sn7jb, physical addresses are [10.10.69.4:7600]
> 12:40:53,221 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-7) ISPN000079: Channel web local address is keycloak-4062290770-sn7jb, physical addresses are [10.10.69.4:7600]
> 12:40:53,221 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-8) ISPN000079: Channel ejb local address is keycloak-4062290770-sn7jb, physical addresses are [10.10.69.4:7600]
> 12:40:53,314 INFO [org.infinispan.factories.GlobalComponentRegistry] (MSC service thread 1-6) ISPN000128: Infinispan version: Infinispan 'Mahou' 8.1.0.Final
> 12:40:53,314 INFO [org.infinispan.factories.GlobalComponentRegistry] (MSC service thread 1-4) ISPN000128: Infinispan version: Infinispan 'Mahou' 8.1.0.Final
> 12:40:53,314 INFO [org.infinispan.factories.GlobalComponentRegistry] (MSC service thread 1-8) ISPN000128: Infinispan version: Infinispan 'Mahou' 8.1.0.Final
> 12:40:53,314 INFO [org.infinispan.factories.GlobalComponentRegistry] (MSC service thread 1-1) ISPN000128: Infinispan version: Infinispan 'Mahou' 8.1.0.Final
> 12:40:55,110 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 60) WFLYCLINF0002: Started users cache from keycloak container
> 12:40:55,111 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 52) WFLYCLINF0002: Started realms cache from keycloak container
> 12:40:55,114 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 56) WFLYCLINF0002: Started sessions cache from keycloak container
> 12:40:55,113 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 54) WFLYCLINF0002: Started work cache from keycloak container
> 12:40:55,117 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 59) WFLYCLINF0002: Started offlineSessions cache from keycloak container
> 12:40:55,117 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 58) WFLYCLINF0002: Started authorization cache from keycloak container
> 12:40:55,115 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 55) WFLYCLINF0002: Started loginFailures cache from keycloak container
> 12:40:58,330 INFO [org.keycloak.services] (ServerService Thread Pool -- 58) KC-SERVICES0001: Loading config from standalone.xml or domain.xml
> 12:41:00,911 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 58) WFLYCLINF0002: Started userRevisions cache from keycloak container
> 12:41:00,921 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 58) WFLYCLINF0002: Started realmRevisions cache from keycloak container
>
> But no matter what I seem to change ... I can't get multiple pods to see
> each other in the cluster membership ... Note, I don't even see a
> reference to the gossip router itself in the logs, so I'm not entirely
> sure it's being used.
>
> Are there any working examples, or perhaps something obvious I'm missing?
>
> Rohith
>
>