[keycloak-user] Standalone-ha Keycloak and ISPN000476: Timed out waiting for responses for request

Rafael Weingärtner rafaelweingartner at gmail.com
Fri Oct 26 08:07:11 EDT 2018


That is what we will do for now.
Thanks for the help!

On Fri, Oct 26, 2018 at 9:02 AM Sebastian Laskawiec <slaskawi at redhat.com>
wrote:

> It's absolutely fine to stick with TCP. For small clusters (a
> finger-in-the-air guesstimate: < 10 nodes) you should see no performance
> difference.
>
> For some reason, UDP is very sensitive to the underlying OS configuration, so
> I would advise just sticking with TCP.
>
> On Thu, Oct 25, 2018 at 8:47 PM Rafael Weingärtner <
> rafaelweingartner at gmail.com> wrote:
>
>> Another discovery.
>> I changed the configuration:
>>
>>>         <subsystem xmlns="urn:jboss:domain:jgroups:6.0">
>>>             <channels default="ee">
>>>                 <channel name="ee" stack="udp" cluster="ejb"/>
>>>             </channels>
>>>
>>
>> to:
>>
>>>         <subsystem xmlns="urn:jboss:domain:jgroups:6.0">
>>>             <channels default="ee">
>>>                 <channel name="ee" stack="tcp" cluster="ejb"/>
>>>             </channels>
>>>
>>
>> This change fixed the problem. It seems that there is a networking problem
>> affecting UDP; however, I have not yet figured out its root cause.
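Since the root cause on the UDP side is still unknown, one thing worth checking independently of Keycloak is whether large UDP datagrams survive the path between the VMs: JGroups' UDP transport produces datagrams well above the Ethernet MTU and relies on IP fragmentation and reassembly, which is exactly what a misbehaving firewall or virtual switch can break. The probe below is my own sketch, not part of JGroups; it runs self-contained over loopback, and for a real test you would run the receiver half on one VM and point `TARGET` at it from the other:

```python
import socket

PORT = 45688            # the port seen in the JGroups logs, reused as a convenient constant
TARGET = "127.0.0.1"    # loopback for a self-contained run; use the peer VM's IP for a real test
PAYLOAD = b"x" * 20000  # far larger than a 1500-byte MTU, so the IP layer must fragment it

# Receiver: in a real test this half runs on the *other* host
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("", PORT))
recv.settimeout(5)

# Sender: one oversized datagram, similar to what the JGroups UDP transport emits
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(PAYLOAD, (TARGET, PORT))

# If a middlebox drops or truncates fragments, this times out or comes up short
data, addr = recv.recvfrom(65535)
print("sent", len(PAYLOAD), "bytes, received", len(data), "bytes, intact:", data == PAYLOAD)
```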
>>
>> On Wed, Oct 24, 2018 at 9:39 PM Rafael Weingärtner <
>> rafaelweingartner at gmail.com> wrote:
>>
>>> Just an update.
>>>
>>> It is interesting, at least to me, but version 4.0.0.Final of Keycloak
>>> works just fine in the same setup. It seems that something that changed
>>> in Keycloak 4.5.0 is somehow sensitive to something in this environment.
>>> I have not been able to pinpoint it though.
>>>
>>> On Wed, Oct 24, 2018 at 12:09 PM Rafael Weingärtner <
>>> rafaelweingartner at gmail.com> wrote:
>>>
>>>> Thanks for the prompt reply. To answer you:
>>>>
>>>>> - Very short timeout on FD_* protocols
>>>>>
>>>> It is the standard configuration, so this should not be a problem.
>>>>
>>>>> - Faulty NIC or incorrect network level configuration in the datacenter
>>>>> - Network packets are cut in transport. I've seen it with one of the
>>>>> hardware firewalls (I don't remember which one). Due to a faulty
>>>>> configuration, it was truncating packets to a certain length, causing
>>>>> other layers to fail miserably.
>>>>>
>>>>
>>>> Maybe? Well, both VMs are in the same broadcast domain, and there is no
>>>> firewall between them.
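Even with no firewall between the VMs, the hypervisor's virtual switch can still drop or truncate multicast or fragmented traffic. A few generic diagnostics that could narrow this down (the peer address is a placeholder, matching the redaction style used in the logs; standard Linux tooling is assumed):

```shell
# Look for an MTU mismatch: 1472-byte payload + 28 bytes of headers = a full
# 1500-byte frame; "-M do" forbids fragmentation, so a drop points at the path
ping -c 3 -M do -s 1472 <<IP-address-Keycloak2>>

# Confirm both NICs agree on the MTU
ip link show

# Watch the JGroups multicast discovery traffic on the wire during startup
tcpdump -n -i any host 230.0.0.4 and udp port 45688
```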
>>>>
>>>> Here are some log entries. There are some things that I do not understand
>>>> here. Why am I seeing multiple “keycloak-1: installing view” entries?
>>>> Shouldn't that only happen once? The same thing is happening with
>>>> “created cluster (first member). My view is [keycloak-1|0]”.
>>>>
>>>>
>>>>
>>>>>  2018-10-24 14:45:25,130 DEBUG [org.jgroups.protocols.UDP]
>>>>> (ServerService Thread Pool -- 50) socket information:
>>>>> mcast_addr=230.0.0.4:45688, bind_addr=/<<IP-address-Keycloak1>>, ttl=2
>>>>> sock: bound to <<IP-address-Keycloak1>>:55200, receive buffer
>>>>> size=212992, send buffer size=212992
>>>>> mcast_sock: bound to <<IP-address-Keycloak1>>:45688, send buffer
>>>>> size=212992, receive buffer size=212992
>>>>> 2018-10-24 14:45:25,130 DEBUG [org.jgroups.protocols.UDP]
>>>>> (ServerService Thread Pool -- 50) socket information:
>>>>> mcast_addr=230.0.0.4:45688, bind_addr=/<<IP-address-Keycloak1>>, ttl=2
>>>>> sock: bound to <<IP-address-Keycloak1>>:55200, receive buffer
>>>>> size=212992, send buffer size=212992
>>>>> mcast_sock: bound to <<IP-address-Keycloak1>>:45688, send buffer
>>>>> size=212992, receive buffer size=212992
>>>>> 2018-10-24 14:45:25,139 DEBUG [org.jgroups.protocols.pbcast.GMS]
>>>>> (ServerService Thread Pool -- 50) address=keycloak-1, cluster=ejb, physical
>>>>> address=<<IP-address-Keycloak1>>:55200
>>>>> 2018-10-24 14:45:25,139 DEBUG [org.jgroups.protocols.pbcast.GMS]
>>>>> (ServerService Thread Pool -- 50) address=keycloak-1, cluster=ejb, physical
>>>>> address=<<IP-address-Keycloak1>>:55200
>>>>> 2018-10-24 14:45:28,148 DEBUG [org.jgroups.protocols.pbcast.NAKACK2]
>>>>> (ServerService Thread Pool -- 50)
>>>>> [keycloak-1 setDigest()]
>>>>> existing digest:  []
>>>>> new digest:       keycloak-1: [0 (0)]
>>>>> resulting digest: keycloak-1: [0 (0)]
>>>>> 2018-10-24 14:45:28,148 DEBUG [org.jgroups.protocols.pbcast.NAKACK2]
>>>>> (ServerService Thread Pool -- 50)
>>>>> [keycloak-1 setDigest()]
>>>>> existing digest:  []
>>>>> new digest:       keycloak-1: [0 (0)]
>>>>> resulting digest: keycloak-1: [0 (0)]
>>>>> 2018-10-24 14:45:28,148 DEBUG [org.jgroups.protocols.pbcast.GMS]
>>>>> (ServerService Thread Pool -- 50) keycloak-1: installing view
>>>>> [keycloak-1|0] (1) [keycloak-1]
>>>>> 2018-10-24 14:45:28,148 DEBUG [org.jgroups.protocols.pbcast.GMS]
>>>>> (ServerService Thread Pool -- 50) keycloak-1: installing view
>>>>> [keycloak-1|0] (1) [keycloak-1]
>>>>> 2018-10-24 14:45:28,150 DEBUG [org.jgroups.protocols.pbcast.STABLE]
>>>>> (ServerService Thread Pool -- 50) resuming message garbage collection
>>>>> 2018-10-24 14:45:28,150 DEBUG [org.jgroups.protocols.pbcast.STABLE]
>>>>> (ServerService Thread Pool -- 50) resuming message garbage collection
>>>>> 2018-10-24 14:45:28,161 DEBUG [org.jgroups.protocols.pbcast.STABLE]
>>>>> (ServerService Thread Pool -- 50) resuming message garbage collection
>>>>> 2018-10-24 14:45:28,161 DEBUG [org.jgroups.protocols.pbcast.STABLE]
>>>>> (ServerService Thread Pool -- 50) resuming message garbage collection
>>>>> 2018-10-24 14:45:28,161 DEBUG [org.jgroups.protocols.pbcast.GMS]
>>>>> (ServerService Thread Pool -- 50) keycloak-1: created cluster (first
>>>>> member). My view is [keycloak-1|0], impl is
>>>>> org.jgroups.protocols.pbcast.CoordGmsImpl
>>>>> 2018-10-24 14:45:28,161 DEBUG [org.jgroups.protocols.pbcast.GMS]
>>>>> (ServerService Thread Pool -- 50) keycloak-1: created cluster (first
>>>>> member). My view is [keycloak-1|0], impl is
>>>>> org.jgroups.protocols.pbcast.CoordGmsImpl
>>>>> 2018-10-24 14:45:28,621 INFO
>>>>> [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service
>>>>> thread 1-3) ISPN000078: Starting JGroups channel ejb
>>>>> 2018-10-24 14:45:28,620 INFO
>>>>> [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service
>>>>> thread 1-1) ISPN000078: Starting JGroups channel ejb
>>>>> 2018-10-24 14:45:28,620 INFO
>>>>> [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service
>>>>> thread 1-5) ISPN000078: Starting JGroups channel ejb
>>>>> 2018-10-24 14:45:28,625 INFO  [org.infinispan.CLUSTER] (MSC service
>>>>> thread 1-3) ISPN000094: Received new cluster view for channel ejb:
>>>>> [keycloak-1|0] (1) [keycloak-1]
>>>>>
>>>>
>>>>
>>>> 2018-10-24 14:48:02,138 DEBUG [org.jgroups.protocols.pbcast.STABLE]
>>>>> (thread-9,ejb,keycloak-1) suspending message garbage collection
>>>>> 2018-10-24 14:48:02,138 DEBUG [org.jgroups.protocols.pbcast.STABLE]
>>>>> (thread-9,ejb,keycloak-1) suspending message garbage collection
>>>>> 2018-10-24 14:48:02,139 DEBUG [org.jgroups.protocols.pbcast.STABLE]
>>>>> (thread-9,ejb,keycloak-1) keycloak-1: resume task started,
>>>>> max_suspend_time=33000
>>>>> 2018-10-24 14:48:02,139 DEBUG [org.jgroups.protocols.pbcast.STABLE]
>>>>> (thread-9,ejb,keycloak-1) keycloak-1: resume task started,
>>>>> max_suspend_time=33000
>>>>> 2018-10-24 14:48:02,140 DEBUG [org.jgroups.protocols.pbcast.GMS]
>>>>> (thread-9,ejb,keycloak-1) keycloak-1: installing view [keycloak-1|1] (2)
>>>>> [keycloak-1, keycloak-2]
>>>>> 2018-10-24 14:48:02,140 DEBUG [org.jgroups.protocols.pbcast.GMS]
>>>>> (thread-9,ejb,keycloak-1) keycloak-1: installing view [keycloak-1|1] (2)
>>>>> [keycloak-1, keycloak-2]
>>>>> 2018-10-24 14:48:02,142 INFO  [org.infinispan.CLUSTER]
>>>>> (thread-9,ejb,keycloak-1) ISPN000094: Received new cluster view for channel
>>>>> ejb: [keycloak-1|1] (2) [keycloak-1, keycloak-2]
>>>>> 2018-10-24 14:48:02,143 DEBUG [org.jgroups.protocols.FD_SOCK] (FD_SOCK
>>>>> pinger-10,ejb,keycloak-1) keycloak-1: pingable_mbrs=[keycloak-1,
>>>>> keycloak-2], ping_dest=keycloak-2
>>>>> 2018-10-24 14:48:02,143 DEBUG [org.jgroups.protocols.FD_SOCK] (FD_SOCK
>>>>> pinger-10,ejb,keycloak-1) keycloak-1: pingable_mbrs=[keycloak-1,
>>>>> keycloak-2], ping_dest=keycloak-2
>>>>> 2018-10-24 14:48:02,148 INFO  [org.infinispan.CLUSTER]
>>>>> (thread-9,ejb,keycloak-1) ISPN100000: Node keycloak-2 joined the cluster
>>>>> 2018-10-24 14:48:02,149 INFO  [org.infinispan.CLUSTER]
>>>>> (thread-9,ejb,keycloak-1) ISPN000094: Received new cluster view for channel
>>>>> ejb: [keycloak-1|1] (2) [keycloak-1, keycloak-2]
>>>>> 2018-10-24 14:48:02,151 INFO  [org.infinispan.CLUSTER]
>>>>> (thread-9,ejb,keycloak-1) ISPN100000: Node keycloak-2 joined the cluster
>>>>> 2018-10-24 14:48:02,153 INFO  [org.infinispan.CLUSTER]
>>>>> (thread-9,ejb,keycloak-1) ISPN000094: Received new cluster view for channel
>>>>> ejb: [keycloak-1|1] (2) [keycloak-1, keycloak-2]
>>>>> 2018-10-24 14:48:02,154 INFO  [org.infinispan.CLUSTER]
>>>>> (thread-9,ejb,keycloak-1) ISPN100000: Node keycloak-2 joined the cluster
>>>>> 2018-10-24 14:48:02,154 INFO  [org.infinispan.CLUSTER]
>>>>> (thread-9,ejb,keycloak-1) ISPN000094: Received new cluster view for channel
>>>>> ejb: [keycloak-1|1] (2) [keycloak-1, keycloak-2]
>>>>> 2018-10-24 14:48:02,155 INFO  [org.infinispan.CLUSTER]
>>>>> (thread-9,ejb,keycloak-1) ISPN100000: Node keycloak-2 joined the cluster
>>>>>
>>>>
>>>> 2018-10-24 14:48:02,155 INFO  [org.infinispan.CLUSTER]
>>>>> (thread-9,ejb,keycloak-1) ISPN100000: Node keycloak-2 joined the cluster
>>>>> 2018-10-24 14:48:02,156 INFO  [org.infinispan.CLUSTER]
>>>>> (thread-9,ejb,keycloak-1) ISPN000094: Received new cluster view for channel
>>>>> ejb: [keycloak-1|1] (2) [keycloak-1, keycloak-2]
>>>>> 2018-10-24 14:48:02,156 INFO  [org.infinispan.CLUSTER]
>>>>> (thread-9,ejb,keycloak-1) ISPN100000: Node keycloak-2 joined the cluster
>>>>> 2018-10-24 14:48:02,194 DEBUG [org.jgroups.protocols.pbcast.STABLE]
>>>>> (thread-9,ejb,keycloak-1) resuming message garbage collection
>>>>> 2018-10-24 14:48:02,194 DEBUG [org.jgroups.protocols.pbcast.STABLE]
>>>>> (thread-9,ejb,keycloak-1) resuming message garbage collection
>>>>> 2018-10-24 14:48:03,229 INFO  [org.infinispan.CLUSTER]
>>>>> (remote-thread--p16-t2) ISPN000310: Starting cluster-wide rebalance for
>>>>> cache offlineSessions, topology CacheTopology{id=2,
>>>>> phase=READ_OLD_WRITE_ALL, rebalanceId=2,
>>>>> currentCH=DefaultConsistentHash{ns=256, owners = (1)[keycloak-1: 256+0]},
>>>>> pendingCH=DefaultConsistentHash{ns=256, owners = (2)[keycloak-1: 134+122,
>>>>> keycloak-2: 122+134]}, unionCH=null, actualMembers=[keycloak-1,
>>>>> keycloak-2], persistentUUIDs=[ac5d9ae6-7720-4fd7-8df0-b0727acaed48,
>>>>> 378015dd-8c56-44a7-ab7e-a93d706dd623]}
>>>>> 2018-10-24 14:48:03,231 INFO  [org.infinispan.CLUSTER]
>>>>> (remote-thread--p16-t2)
>>>>> [Context=offlineSessions][Scope=keycloak-1]ISPN100002: Started rebalance
>>>>> with topology id 2
>>>>> 2018-10-24 14:48:03,238 INFO  [org.infinispan.CLUSTER]
>>>>> (remote-thread--p16-t4) ISPN000310: Starting cluster-wide rebalance for
>>>>> cache sessions, topology CacheTopology{id=2, phase=READ_OLD_WRITE_ALL,
>>>>> rebalanceId=2, currentCH=DefaultConsistentHash{ns=256, owners =
>>>>> (1)[keycloak-1: 256+0]}, pendingCH=DefaultConsistentHash{ns=256, owners =
>>>>> (2)[keycloak-1: 134+122, keycloak-2: 122+134]}, unionCH=null,
>>>>> actualMembers=[keycloak-1, keycloak-2],
>>>>> persistentUUIDs=[ac5d9ae6-7720-4fd7-8df0-b0727acaed48,
>>>>> 378015dd-8c56-44a7-ab7e-a93d706dd623]}
>>>>> 2018-10-24 14:48:03,238 INFO  [org.infinispan.CLUSTER]
>>>>> (remote-thread--p16-t4) [Context=sessions][Scope=keycloak-1]ISPN100002:
>>>>> Started rebalance with topology id 2
>>>>> 2018-10-24 14:48:03,242 INFO  [org.infinispan.CLUSTER]
>>>>> (remote-thread--p12-t2) ISPN000310: Starting cluster-wide rebalance for
>>>>> cache client-mappings, topology CacheTopology{id=2,
>>>>> phase=READ_OLD_WRITE_ALL, rebalanceId=2,
>>>>> currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[keycloak-1:
>>>>> 256]}, pendingCH=ReplicatedConsistentHash{ns = 256, owners =
>>>>> (2)[keycloak-1: 134, keycloak-2: 122]}, unionCH=null,
>>>>> actualMembers=[keycloak-1, keycloak-2],
>>>>> persistentUUIDs=[f239ad45-df8a-4f17-a08b-6962b47ef073,
>>>>> 33572709-8cca-4daa-a2fd-eee2e3fd4c17]}
>>>>> 2018-10-24 14:48:03,243 INFO  [org.infinispan.CLUSTER]
>>>>> (remote-thread--p12-t2)
>>>>> [Context=client-mappings][Scope=keycloak-1]ISPN100002: Started rebalance
>>>>> with topology id 2
>>>>> 2018-10-24 14:48:03,247 INFO  [org.infinispan.CLUSTER]
>>>>> (remote-thread--p16-t7) ISPN000310: Starting cluster-wide rebalance for
>>>>> cache actionTokens, topology CacheTopology{id=2, phase=READ_OLD_WRITE_ALL,
>>>>> rebalanceId=2, currentCH=DefaultConsistentHash{ns=256, owners =
>>>>> (1)[keycloak-1: 256+0]}, pendingCH=DefaultConsistentHash{ns=256, owners =
>>>>> (2)[keycloak-1: 134+122, keycloak-2: 122+134]}, unionCH=null,
>>>>> actualMembers=[keycloak-1, keycloak-2],
>>>>> persistentUUIDs=[ac5d9ae6-7720-4fd7-8df0-b0727acaed48,
>>>>> 378015dd-8c56-44a7-ab7e-a93d706dd623]}
>>>>> 2018-10-24 14:48:03,247 INFO  [org.infinispan.CLUSTER]
>>>>> (remote-thread--p16-t7) [Context=actionTokens][Scope=keycloak-1]ISPN100002:
>>>>> Started rebalance with topology id 2
>>>>> 2018-10-24 14:48:03,242 INFO  [org.infinispan.CLUSTER]
>>>>> (remote-thread--p16-t3) ISPN000310: Starting cluster-wide rebalance for
>>>>> cache loginFailures, topology CacheTopology{id=2, phase=READ_OLD_WRITE_ALL,
>>>>> rebalanceId=2, currentCH=DefaultConsistentHash{ns=256, owners =
>>>>> (1)[keycloak-1: 256+0]}, pendingCH=DefaultConsistentHash{ns=256, owners =
>>>>> (2)[keycloak-1: 134+122, keycloak-2: 122+134]}, unionCH=null,
>>>>> actualMembers=[keycloak-1, keycloak-2],
>>>>> persistentUUIDs=[ac5d9ae6-7720-4fd7-8df0-b0727acaed48,
>>>>> 378015dd-8c56-44a7-ab7e-a93d706dd623]}
>>>>>
>>>>
>>>> 2018-10-24 14:48:03,271 INFO  [org.infinispan.CLUSTER]
>>>>> (transport-thread--p18-t2)
>>>>> [Context=offlineSessions][Scope=keycloak-1]ISPN100003: Node keycloak-1
>>>>> finished rebalance phase with topology id 2
>>>>> 2018-10-24 14:48:03,273 INFO  [org.infinispan.CLUSTER]
>>>>> (remote-thread--p16-t6) ISPN000310: Starting cluster-wide rebalance for
>>>>> cache offlineClientSessions, topology CacheTopology{id=2,
>>>>> phase=READ_OLD_WRITE_ALL, rebalanceId=2,
>>>>> currentCH=DefaultConsistentHash{ns=256, owners = (1)[keycloak-1: 256+0]},
>>>>> pendingCH=DefaultConsistentHash{ns=256, owners = (2)[keycloak-1: 134+122,
>>>>> keycloak-2: 122+134]}, unionCH=null, actualMembers=[keycloak-1,
>>>>> keycloak-2], persistentUUIDs=[ac5d9ae6-7720-4fd7-8df0-b0727acaed48,
>>>>> 378015dd-8c56-44a7-ab7e-a93d706dd623]}
>>>>> 2018-10-24 14:48:03,273 INFO  [org.infinispan.CLUSTER]
>>>>> (remote-thread--p16-t6)
>>>>> [Context=offlineClientSessions][Scope=keycloak-1]ISPN100002: Started
>>>>> rebalance with topology id 2
>>>>> 2018-10-24 14:48:03,278 INFO  [org.infinispan.CLUSTER]
>>>>> (transport-thread--p18-t3) [Context=sessions][Scope=keycloak-1]ISPN100003:
>>>>> Node keycloak-1 finished rebalance phase with topology id 2
>>>>> 2018-10-24 14:48:03,293 INFO  [org.infinispan.CLUSTER]
>>>>> (transport-thread--p18-t8)
>>>>> [Context=actionTokens][Scope=keycloak-1]ISPN100003: Node keycloak-1
>>>>> finished rebalance phase with topology id 2
>>>>> 2018-10-24 14:48:03,294 INFO  [org.infinispan.CLUSTER]
>>>>> (transport-thread--p18-t4)
>>>>> [Context=clientSessions][Scope=keycloak-1]ISPN100003: Node keycloak-1
>>>>> finished rebalance phase with topology id 2
>>>>> 2018-10-24 14:48:03,294 INFO  [org.infinispan.CLUSTER]
>>>>> (transport-thread--p18-t9)
>>>>> [Context=loginFailures][Scope=keycloak-1]ISPN100003: Node keycloak-1
>>>>> finished rebalance phase with topology id 2
>>>>> 2018-10-24 14:48:03,297 INFO  [org.infinispan.CLUSTER]
>>>>> (transport-thread--p18-t18)
>>>>> [Context=authenticationSessions][Scope=keycloak-1]ISPN100003: Node
>>>>> keycloak-1 finished rebalance phase with topology id 2
>>>>> 2018-10-24 14:48:03,297 INFO  [org.infinispan.CLUSTER]
>>>>> (transport-thread--p18-t18) [Context=work][Scope=keycloak-1]ISPN100003:
>>>>> Node keycloak-1 finished rebalance phase with topology id 2
>>>>> 2018-10-24 14:48:03,299 INFO  [org.infinispan.CLUSTER]
>>>>> (transport-thread--p18-t23)
>>>>> [Context=offlineClientSessions][Scope=keycloak-1]ISPN100003: Node
>>>>> keycloak-1 finished rebalance phase with topology id 2
>>>>> 2018-10-24 14:48:03,299 INFO  [org.infinispan.CLUSTER]
>>>>> (transport-thread--p20-t8)
>>>>> [Context=client-mappings][Scope=keycloak-1]ISPN100003: Node keycloak-1
>>>>> finished rebalance phase with topology id 2
>>>>> 2018-10-24 14:49:02,158 WARN
>>>>> [org.infinispan.topology.ClusterTopologyManagerImpl]
>>>>> (transport-thread--p19-t2) ISPN000197: Error updating cluster member list:
>>>>> org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out
>>>>> waiting for responses for request 1 from keycloak-2
>>>>>         at
>>>>> org.infinispan.remoting.transport.impl.MultiTargetRequest.onTimeout(MultiTargetRequest.java:167)
>>>>>         at
>>>>> org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:87)
>>>>>         at
>>>>> org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:22)
>>>>>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>>>>         at
>>>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>>>>>         at
>>>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>>>>>         at
>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>>>>         at
>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>>         at
>>>>>
>>>>
>>>>
>>>>
>>>> On Wed, Oct 24, 2018 at 10:29 AM Sebastian Laskawiec <
>>>> slaskawi at redhat.com> wrote:
>>>>
>>>>> The new view log messages look a bit weird. It seems that JGroups
>>>>> installed a new view every 1 ms, which is very unusual. There are three
>>>>> scenarios where this may happen (at least that I can think of off the
>>>>> top of my head):
>>>>> - Very short timeout on FD_* protocols
>>>>> - Faulty NIC or incorrect network level configuration in the datacenter
>>>>> - Network packets are cut in transport. I've seen it with one of the
>>>>> hardware firewalls (I don't remember which one). Due to a faulty
>>>>> configuration, it was truncating packets to a certain length, causing
>>>>> other layers to fail miserably.
>>>>>
>>>>> Perhaps you could set the logging level to DEBUG on the org.jgroups
>>>>> package and check if there's anything interesting there?
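For reference, this logging change can be made at runtime with jboss-cli, with no restart required (a sketch assuming the standard WildFly logging subsystem; if an org.jgroups logger already exists in your profile, use write-attribute instead of add):

```shell
# Enable DEBUG logging for JGroups at runtime
bin/jboss-cli.sh --connect <<'EOF'
/subsystem=logging/logger=org.jgroups:add(level=DEBUG)
EOF
```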
>>>>>
>>>>> On Wed, Oct 24, 2018 at 3:11 PM Rafael Weingärtner <
>>>>> rafaelweingartner at gmail.com> wrote:
>>>>>
>>>>>> Hello Keycloakers,
>>>>>> I am having some problems with Keycloak 4.5.0. I have set up two
>>>>>> nodes, and they see each other. I am using MPING (the default
>>>>>> configuration). The nodes are called “Keycloak-1” and “Keycloak-2”. In
>>>>>> front of these Keycloak nodes I have an HTTPD instance, which uses AJP
>>>>>> to connect to and load balance them.
>>>>>>
>>>>>> When the second server starts I can see:
>>>>>>
>>>>>> > 2018-10-24 12:35:02,277 INFO  [org.infinispan.CLUSTER] (MSC service
>>>>>> thread
>>>>>> > 1-5) ISPN000094: Received new cluster view for channel ejb:
>>>>>> [keycloak-1|1]
>>>>>> > (2) [keycloak-1, keycloak-2]
>>>>>> > 2018-10-24 12:35:02,277 INFO  [org.infinispan.CLUSTER] (MSC service
>>>>>> thread
>>>>>> > 1-2) ISPN000094: Received new cluster view for channel ejb:
>>>>>> [keycloak-1|1]
>>>>>> > (2) [keycloak-1, keycloak-2]
>>>>>> > 2018-10-24 12:35:02,278 INFO  [org.infinispan.CLUSTER] (MSC service
>>>>>> thread
>>>>>> > 1-3) ISPN000094: Received new cluster view for channel ejb:
>>>>>> [keycloak-1|1]
>>>>>> > (2) [keycloak-1, keycloak-2]
>>>>>> > 2018-10-24 12:35:02,279 INFO  [org.infinispan.CLUSTER] (MSC service
>>>>>> thread
>>>>>> > 1-8) ISPN000094: Received new cluster view for channel ejb:
>>>>>> [keycloak-1|1]
>>>>>> > (2) [keycloak-1, keycloak-2]
>>>>>> > 2018-10-24 12:35:02,280 INFO  [org.infinispan.CLUSTER] (MSC service
>>>>>> thread
>>>>>> > 1-7) ISPN000094: Received new cluster view for channel ejb:
>>>>>> [keycloak-1|1]
>>>>>> > (2) [keycloak-1, keycloak-2]
>>>>>> >
>>>>>>
>>>>>> So, they are seeing/reaching each other. The problem happens when I
>>>>>> try to log in. I then get the following:
>>>>>>
>>>>>> > MSC000001: Failed to start service
>>>>>> > org.wildfly.clustering.infinispan.cache.keycloak.loginFailures:
>>>>>> > org.jboss.msc.service.StartException in service
>>>>>> > org.wildfly.clustering.infinispan.cache.keycloak.loginFailures:
>>>>>> > org.infinispan.commons.CacheException: Unable to invoke method
>>>>>> public void
>>>>>> >
>>>>>> org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete()
>>>>>> > throws java.lang.Exception on object of type
>>>>>> StateTransferManagerImpl
>>>>>> >
>>>>>>
>>>>>> And errors like:
>>>>>>
>>>>>> > Suppressed: java.util.concurrent.ExecutionException:
>>>>>> > org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed
>>>>>> out
>>>>>> > waiting for responses for request 2 from keycloak-2
>>>>>> >
>>>>>>
>>>>>> 2018-10-24 12:55:48,990 ERROR
>>>>>> [org.jboss.as.controller.management-operation]
>>>>>> > (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed -
>>>>>> address: ([
>>>>>> >     ("subsystem" => "infinispan"),
>>>>>> >     ("cache-container" => "ejb"),
>>>>>> >     ("thread-pool" => "transport")
>>>>>> > ]) - failure description: {
>>>>>> >     "WFLYCTL0080: Failed services" =>
>>>>>> > {"org.wildfly.clustering.infinispan.cache.ejb.client-mappings" =>
>>>>>> > "org.infinispan.commons.CacheException: Unable to invoke method
>>>>>> public void
>>>>>> >
>>>>>> org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete()
>>>>>> > throws java.lang.Exception on object of type
>>>>>> StateTransferManagerImpl
>>>>>> >     Caused by: org.infinispan.commons.CacheException: Unable to
>>>>>> invoke
>>>>>> > method public void
>>>>>> >
>>>>>> org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete()
>>>>>> > throws java.lang.Exception on object of type
>>>>>> StateTransferManagerImpl
>>>>>> >     Caused by: org.infinispan.commons.CacheException: Initial state
>>>>>> > transfer timed out for cache client-mappings on keycloak-2"},
>>>>>> >     "WFLYCTL0288: One or more services were unable to start due to
>>>>>> one or
>>>>>> > more indirect dependencies not being available." => {
>>>>>> >         "Services that were unable to start:" =>
>>>>>> > ["org.wildfly.clustering.cache.group.ejb.client-mappings"],
>>>>>> >         "Services that may be the cause:" => [
>>>>>> >             "jboss.clustering.web.route.default-server",
>>>>>> >             "jboss.deployment.discovery.\"keycloak-server.war\"",
>>>>>> >             "jboss.ejb.association",
>>>>>> >             "jboss.ejb.remoting.connector.client-mappings",
>>>>>> >             "jboss.iiop-openjdk.poa-service.rootpoa",
>>>>>> >
>>>>>>
>>>>>>
>>>>>> I am quite puzzled; no ports are being blocked, and the system has no
>>>>>> load at all. Why would this timeout happen?
>>>>>>
>>>>>> --
>>>>>> Rafael Weingärtner
>>>>>> _______________________________________________
>>>>>> keycloak-user mailing list
>>>>>> keycloak-user at lists.jboss.org
>>>>>> https://lists.jboss.org/mailman/listinfo/keycloak-user
>>>>>
>>>>>
>>>>
>>>> --
>>>> Rafael Weingärtner
>>>>
>>>
>>>
>>> --
>>> Rafael Weingärtner
>>>
>>
>>
>> --
>> Rafael Weingärtner
>>
>

-- 
Rafael Weingärtner

