[keycloak-user] Fwd: Keycloak 3.2.1 Final not working in cluster

Stian Thorgersen sthorger at redhat.com
Wed Nov 15 14:17:45 EST 2017


Try using a fresh install of Keycloak with no modifications at all and try
running:

# bin/standalone.sh --server-config=standalone-ha.xml -b <IP> -bprivate <IP>

On 15 November 2017 at 17:19, mahendra sonawale <mahson1 at gmail.com> wrote:

> Hello Stian,
>
> Thank you for your reply.
> I have gone through the links and reference links as well.
> Trying Keycloak cluster with multicast over private interface and on
> separate testing, messages are getting through with (McastReceiverTest,
> McastSenderTest)
> I have tried my best to set up the cluster environment accordingly, and
> as far as I can tell it matches the given guidelines.
> I would appreciate your help in identifying anything missing in the config.
> I have been stuck here for a couple of weeks :(
>
> I have shared our full setup in the mail trail.
>
> Thanks,
> Mahendra Sonawale.
>
>
> On Wed, Nov 15, 2017 at 8:25 PM, Stian Thorgersen <sthorger at redhat.com>
> wrote:
>
>> Did you check the docs? Specifically
>> http://www.keycloak.org/docs/latest/server_installation/index.html#multicast-network-setup
>>
>> On 15 November 2017 at 14:38, mahendra sonawale <mahson1 at gmail.com>
>> wrote:
>>
>>> Hello Cedric/Keycloak User comm,
>>>
>>> Sorry for getting back late on this. My set-up needs the Admin team's
>>> intervention to change the broadcast value, hence the delay in response.
>>>
>>> I got the /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts value changed to 0.
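>>> (For reference, a minimal sketch of how that change can be made and
>>> persisted; the commands assume a typical Linux host and root access:)
>>>
>>> ```shell
>>> # Allow responses to ICMP broadcast echo (JGroups discovery relies on this here)
>>> sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=0
>>>
>>> # Verify the running value (should print 0)
>>> cat /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
>>>
>>> # Persist the setting across reboots
>>> echo "net.ipv4.icmp_echo_ignore_broadcasts = 0" >> /etc/sysctl.conf
>>> ```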
>>>
>>> I also tested the multicast set-up with "McastReceiverTest" and
>>> "McastSenderTest", which works fine.
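>>> (For anyone following along, the multicast test was roughly the
>>> following; the jgroups jar name/path is a placeholder for whatever
>>> ships with your Keycloak distribution, and 230.0.0.4:5555 is just the
>>> address/port pair I tested with:)
>>>
>>> ```shell
>>> # On node 1: listen for multicast datagrams
>>> java -cp jgroups.jar org.jgroups.tests.McastReceiverTest \
>>>      -mcast_addr 230.0.0.4 -port 5555
>>>
>>> # On node 2: lines typed here should appear on the receiver
>>> java -cp jgroups.jar org.jgroups.tests.McastSenderTest \
>>>      -mcast_addr 230.0.0.4 -port 5555
>>> ```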
>>>
>>> But Keycloak is still NOT working in the cluster; I get logged out automatically.
>>>
>>> PFA the HA file which I am using in my configuration.
>>>
>>> IP addresses are dummy.
>>> Node 1 : 1.2.3.4
>>> Node 2 : 1.2.3.5
>>>
>>> This is all I have tried:
>>> 1) Start command: nohup ./bin/standalone.sh
>>> --server-config=standalone-ha.xml -b $HOSTNAME -u 230.0.0.4 &
>>> 2) Ran both nodes with the public as well as the private interface, but
>>> no luck.
>>> 3) I have a hardware load balancer where SSL terminates, so the domain
>>> will communicate with both nodes in round robin, and both nodes should be
>>>
>>> 4) PFB the HTTPD Conf
>>>
>>> -------------------------
>>> LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
>>> LoadModule remoteip_module modules/mod_remoteip.so
>>>
>>> ProxyPreserveHost On
>>> LimitRequestFieldSize 163840
>>> LimitRequestLine 163840
>>>
>>> #<VirtualHost _default_:80>
>>>  ServerName rapid.gi-de.com:443
>>>  ErrorLog /opt/keycloak/fiam_error_log
>>>  CustomLog /opt/keycloak/fiam_access_log combined
>>>  LogLevel warn
>>>
>>> RequestHeader set X-Forwarded-Proto "https"
>>>
>>> <Proxy https://abc.com/* >
>>>  RewriteEngine on
>>>  RewriteCond %{REQUEST_FILENAME} !-f
>>>  RewriteCond %{REQUEST_FILENAME} !-d
>>>  # do not rewrite css, js and images
>>>  RewriteCond %{REQUEST_URI} !\.(?:css|js|map|jpe?g|gif|png)$ [NC]
>>>  RewriteRule ^(.*)$ /auth [NC,L,QSA]
>>> #Options -Indexes FollowSymLinks
>>>  AllowOverride None
>>>  Order allow,deny
>>>  Allow from all
>>> </Proxy>
>>>
>>>
>>> ProxyPass /auth http://1.2.3.4:8080/auth
>>> ProxyPassReverse /auth http://1.2.3.4:8080/auth
>>>
>>> ------------------
>>> And on the 2nd node, only the ProxyPass/ProxyPassReverse IP address changes to 1.2.3.5.
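>>> One thing I have not tried yet: with plain round robin and no session
>>> affinity, requests can bounce between nodes mid-session, which might
>>> explain the logouts even independently of the cluster issue. Below is a
>>> sticky-session sketch with mod_proxy_balancer; the route names are
>>> placeholders, and the sticky cookie name depends on the Keycloak
>>> version (newer releases set AUTH_SESSION_ID; older ones rely on
>>> JSESSIONID with a jvmRoute configured on each node):
>>>
>>> ```apache
>>> LoadModule proxy_module modules/mod_proxy.so
>>> LoadModule proxy_http_module modules/mod_proxy_http.so
>>> LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
>>> LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so
>>>
>>> <Proxy balancer://keycloak>
>>>     BalancerMember http://1.2.3.4:8080 route=node1
>>>     BalancerMember http://1.2.3.5:8080 route=node2
>>>     ProxySet stickysession=AUTH_SESSION_ID
>>> </Proxy>
>>>
>>> ProxyPass /auth balancer://keycloak/auth
>>> ProxyPassReverse /auth balancer://keycloak/auth
>>> ```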
>>>
>>> 5) Server logs:
>>> 2017-11-15 14:03:06,255 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-2) ISPN000094: Received new cluster view for channel keycloak: [keycloak1.accounts.intern|0] (1) [keycloak1.accounts.intern]
>>> 2017-11-15 14:03:06,256 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-7) ISPN000094: Received new cluster view for channel hibernate: [keycloak1.accounts.intern|0] (1) [keycloak1.accounts.intern]
>>> 2017-11-15 14:03:06,259 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-8) ISPN000094: Received new cluster view for channel web: [keycloak1.accounts.intern|0] (1) [keycloak1.accounts.intern]
>>> 2017-11-15 14:03:06,263 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-1) ISPN000094: Received new cluster view for channel server: [keycloak1.accounts.intern|0] (1) [keycloak1.accounts.intern]
>>> 2017-11-15 14:03:06,263 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-7) ISPN000079: Channel hibernate local address is keycloak1.accounts.intern, physical addresses are [1.2.3.4:55200]
>>> 2017-11-15 14:03:06,264 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-1) ISPN000079: Channel server local address is keycloak1.accounts.intern, physical addresses are [1.2.3.4:55200]
>>> 2017-11-15 14:03:06,264 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-3) ISPN000094: Received new cluster view for channel ejb: [keycloak1.accounts.intern|0] (1) [keycloak1.accounts.intern]
>>> 2017-11-15 14:03:06,265 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-3) ISPN000079: Channel ejb local address is keycloak1.accounts.intern, physical addresses are [1.2.3.4:55200]
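>>> (Reading the views above: the "(1)" is the member count, so each
>>> channel currently sees only this node. A quick way to watch whether the
>>> second node ever joins; the log path assumes the default standalone
>>> layout, and the second hostname is assumed:)
>>>
>>> ```shell
>>> # A healthy two-node view would show a count of (2) and both hostnames, e.g.
>>> # [keycloak1.accounts.intern|1] (2) [keycloak1.accounts.intern, keycloak2.accounts.intern]
>>> grep "ISPN000094" standalone/log/server.log | tail -n 5
>>> ```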
>>>
>>>
>>>
>>>
>>>
>>> Thanks,
>>> Mahendra
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Thu, Nov 9, 2017 at 6:35 PM, Cédric Couralet <
>>> cedric.couralet at gmail.com>
>>> wrote:
>>>
>>> > 2017-11-09 12:34 GMT+01:00 mahendra sonawale <mahson1 at gmail.com>:
>>> > > (You can look for the value in
>>> > > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts, it should be 0)
>>> > >
>>> > > In our production linux env the value is 1 -- does that really
>>> > > have an effect? And would that be the only cause?
>>> > >
>>> >
>>> > Yes, it is important. At least for us, changing this value to 0 was
>>> > enough to have a working cluster.
>>> > As I understand it, the value 1 is a protection against DoS, but in
>>> > the case of Keycloak it prevents each node from discovering the
>>> > others. In a controlled environment (as recommended in the Keycloak
>>> > docs), I see no problem disabling that protection.
>>> >
>>> > I'm far from an expert, so maybe someone will have a better idea.
>>> >
>>>
>>> _______________________________________________
>>> keycloak-user mailing list
>>> keycloak-user at lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/keycloak-user
>>>
>>
>>
>


More information about the keycloak-user mailing list