[keycloak-user] Keycloak configuration on AWS for a large scale
Ferdous Shibly
bdshibly at gmail.com
Thu Aug 15 12:34:07 EDT 2019
Hi,
I am trying to configure Keycloak 6.0.1 on AWS. I need to set up a
multi-AZ Keycloak cluster for almost seven million active users (growing
every day). Currently I am using standalone HA mode with PostgreSQL (RDS).
When I try to import all the users from LDAP, the cluster stops working.
Here are the errors I am getting:
2019-08-08 11:35:21,645 ERROR
[org.infinispan.interceptors.impl.InvocationContextInterceptor]
(timeout-thread--p11-t1) ISPN000136: Error executing command
PutKeyValueCommand, writing keys [task::ClearExpiredEvents]:
org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out
waiting for responses for request 76 from keycloak1
at org.infinispan.remoting.transport.impl.MultiTargetRequest.onTimeout(MultiTargetRequest.java:167)
at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:87)
at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:22)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-08-08 11:35:32,684 ERROR
[org.infinispan.interceptors.impl.InvocationContextInterceptor]
(timeout-thread--p11-t1) ISPN000136: Error executing command
RemoveCommand, writing keys
[task::ClearExpiredClientInitialAccessTokens]:
org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out
waiting for responses for request 81 from keycloak1
at org.infinispan.remoting.transport.impl.MultiTargetRequest.onTimeout(MultiTargetRequest.java:167)
at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:87)
at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:22)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-08-08 11:36:14,107 ERROR
[org.infinispan.interceptors.impl.InvocationContextInterceptor]
(timeout-thread--p11-t1) ISPN000136: Error executing command
RemoveExpiredCommand, writing keys
[5d57dd8d-79e0-4ab9-9d8b-60fc570ec8b2]:
org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out
waiting for responses for request 85 from keycloak1
at org.infinispan.remoting.transport.impl.SingleTargetRequest.onTimeout(SingleTargetRequest.java:65)
at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:87)
at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:22)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-08-08 11:56:34,720 FATAL [org.infinispan.CLUSTER]
(transport-thread--p13-t2) [Context=sessions] ISPN000313: Lost data
because of abrupt leavers [keycloak1]
2019-08-08 11:56:34,721 INFO [org.infinispan.CLUSTER]
(transport-thread--p13-t2) [Context=sessions] ISPN100008: Updating
cache members list [keycloak2], topology id 6
2019-08-08 11:56:34,739 FATAL [org.infinispan.CLUSTER]
(transport-thread--p13-t2) [Context=clientSessions] ISPN000313: Lost
data because of abrupt leavers [keycloak1]
2019-08-08 11:56:34,740 INFO [org.infinispan.CLUSTER]
(transport-thread--p13-t2) [Context=clientSessions] ISPN100008:
Updating cache members list [keycloak2], topology id 6
2019-08-08 11:56:34,744 WARN [org.infinispan.CLUSTER]
(transport-thread--p13-t2) [Context=work] ISPN000314: Lost at least
half of the stable members, possible split brain causing data
inconsistency. Current members are [keycloak2], lost members are
[keycloak1], stable members are [keycloak1, keycloak2]
Here is the JGroups configuration:
<subsystem xmlns="urn:jboss:domain:jgroups:6.0">
    <channels default="ee">
        <channel name="ee" stack="tcp" cluster="ejb"/>
    </channels>
    <stacks>
        <stack name="tcp">
            <transport type="TCP" socket-binding="jgroups-tcp"/>
            <jdbc-protocol type="JDBC_PING" data-source="KeycloakDS">
                <property name="initialize_sql">
                    CREATE TABLE IF NOT EXISTS jgroupsping (
                        own_addr VARCHAR(200) NOT NULL,
                        cluster_name VARCHAR(200) NOT NULL,
                        ping_data BYTEA DEFAULT NULL,
                        PRIMARY KEY (own_addr, cluster_name)
                    )
                </property>
            </jdbc-protocol>
            <protocol type="MERGE3"/>
            <protocol type="FD_SOCK"/>
            <protocol type="FD_ALL"/>
            <protocol type="VERIFY_SUSPECT"/>
            <protocol type="pbcast.NAKACK2"/>
            <protocol type="UNICAST3"/>
            <protocol type="pbcast.STABLE"/>
            <protocol type="pbcast.GMS">
                <property name="join_timeout">60000</property>
                <property name="print_local_addr">true</property>
                <property name="print_physical_addrs">true</property>
            </protocol>
            <protocol type="MFC"/>
            <protocol type="FRAG2"/>
        </stack>
    </stacks>
</subsystem>
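For context: the ISPN000313 "Lost data because of abrupt leavers" messages above suggest the distributed caches hold only a single copy of each entry, so data is lost whenever one node drops out. A commonly suggested mitigation is raising `owners` on the session caches in the infinispan subsystem of standalone-ha.xml so a second node keeps a backup copy. The fragment below is only a sketch of that idea; the exact cache names and the subsystem namespace version are assumptions and should be checked against the standalone-ha.xml shipped with Keycloak 6.0.1:

```xml
<!-- Hypothetical fragment: raise owners from the default 1 to 2 on the
     replicable session caches so one node leaving abruptly does not
     trigger ISPN000313 data loss. Namespace version is an assumption. -->
<subsystem xmlns="urn:jboss:domain:infinispan:8.0">
    <cache-container name="keycloak">
        <distributed-cache name="sessions" owners="2"/>
        <distributed-cache name="clientSessions" owners="2"/>
        <distributed-cache name="offlineSessions" owners="2"/>
        <distributed-cache name="offlineClientSessions" owners="2"/>
    </cache-container>
</subsystem>
```

Note that more owners means more replication traffic across AZs, so this trades write latency for resilience.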
Any help would be appreciated.
Cheers
Ferdous Shibly