On 27 Apr 2015, at 15:15, Stian Thorgersen <stian@redhat.com> wrote:
Can you JIRA this and we'll try to get it fixed for 1.2.0.CR1?
----- Original Message -----
> From: "Libor Krzyžanek" <lkrzyzan(a)redhat.com
<mailto:lkrzyzan@redhat.com>>
> To: "Stian Thorgersen" <stian(a)redhat.com
<mailto:stian@redhat.com>>
> Cc: "Marek Posolda" <mposolda(a)redhat.com
<mailto:mposolda@redhat.com>>, "keycloak-user"
<keycloak-user(a)lists.jboss.org <mailto:keycloak-user@lists.jboss.org>>
> Sent: Monday, 27 April, 2015 3:06:53 PM
> Subject: Re: [keycloak-user] Clustering on localhost with shared DB
>
> Yeah just tried:
> <invalidation-cache name="realms" mode="SYNC" start="EAGER"/>
> <invalidation-cache name="users" mode="SYNC" start="EAGER"/>
> <distributed-cache name="sessions" mode="SYNC" owners="2" segments="60" start="EAGER">
>     <state-transfer enabled="true"/>
> </distributed-cache>
> <distributed-cache name="loginFailures" mode="SYNC" owners="2" segments="60" start="EAGER">
>     <state-transfer enabled="true"/>
> </distributed-cache>
>
> Scenario:
> - node1 is up
> - I'm logged in on node1
> - starting node2
>
> I get on node1:
> 15:00:45,988 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,shared=udp) ISPN000094: Received new cluster view: [node1/keycloak|1] [node1/keycloak, node2/keycloak]
> 15:00:46,706 ERROR [org.infinispan.statetransfer.OutboundTransferTask] (transport-thread-18) Failed to send entries to node node2/keycloak : ISPN000217: Received exception from node2/keycloak, see cause for remote stack trace: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from node2/keycloak, see cause for remote stack trace
> at org.infinispan.remoting.transport.AbstractTransport.checkResponse(AbstractTransport.java:60)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:310)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:179)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:515)
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:173)
> at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:194)
> at org.infinispan.statetransfer.OutboundTransferTask.sendEntries(OutboundTransferTask.java:257)
> at org.infinispan.statetransfer.OutboundTransferTask.run(OutboundTransferTask.java:187)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_40]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0_40]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_40]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0_40]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_40]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_40]
> at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_40]
> Caused by: org.infinispan.CacheException: Problems invoking command.
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:230)
> at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:484)
> at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:391)
> at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:249)
> at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:600)
> at org.jgroups.blocks.mux.MuxUpHandler.up(MuxUpHandler.java:130)
> at org.jgroups.JChannel.up(JChannel.java:707)
> at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1025)
> at org.jgroups.protocols.RSVP.up(RSVP.java:172)
> at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
> at org.jgroups.protocols.FlowControl.up(FlowControl.java:400)
> at org.jgroups.protocols.pbcast.GMS.up(GMS.java:896)
> at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245)
> at org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:766)
> at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:420)
> at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:645)
> at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:147)
> at org.jgroups.protocols.FD.up(FD.java:253)
> at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)
> at org.jgroups.protocols.MERGE3.up(MERGE3.java:290)
> at org.jgroups.protocols.Discovery.up(Discovery.java:359)
> at org.jgroups.protocols.TP$ProtocolAdapter.up(TP.java:2607)
> at org.jgroups.protocols.TP.passMessageUp(TP.java:1260)
> at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1822)
> at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1795)
> ... 3 more
> Caused by: java.io.InvalidClassException: org.keycloak.models.sessions.infinispan.entities.ClientSessionEntity; Module load failed
> at org.jboss.marshalling.ModularClassResolver.resolveClass(ModularClassResolver.java:104)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadClassDescriptor(RiverUnmarshaller.java:948)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadNewObject(RiverUnmarshaller.java:1255)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:276)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)
> at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:41)
> at org.infinispan.container.entries.ImmortalCacheEntry$Externalizer.readObject(ImmortalCacheEntry.java:160)
> at org.infinispan.container.entries.ImmortalCacheEntry$Externalizer.readObject(ImmortalCacheEntry.java:150)
> at org.infinispan.marshall.jboss.ExternalizerTable$ExternalizerAdapter.readObject(ExternalizerTable.java:406)
> at org.infinispan.marshall.jboss.ExternalizerTable.readObject(ExternalizerTable.java:226)
> at org.infinispan.marshall.jboss.JBossMarshaller$ExternalizerTableProxy.readObject(JBossMarshaller.java:167)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:354)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)
> at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:41)
> at org.infinispan.marshall.exts.ArrayListExternalizer.readObject(ArrayListExternalizer.java:57)
> at org.infinispan.marshall.exts.ArrayListExternalizer.readObject(ArrayListExternalizer.java:45)
> at org.infinispan.marshall.jboss.ExternalizerTable$ExternalizerAdapter.readObject(ExternalizerTable.java:406)
> at org.infinispan.marshall.jboss.ExternalizerTable.readObject(ExternalizerTable.java:226)
> at org.infinispan.marshall.jboss.JBossMarshaller$ExternalizerTableProxy.readObject(JBossMarshaller.java:167)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:354)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)
> at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:41)
> at org.infinispan.statetransfer.StateChunk$Externalizer.readObject(StateChunk.java:111)
> at org.infinispan.statetransfer.StateChunk$Externalizer.readObject(StateChunk.java:88)
> at org.infinispan.marshall.jboss.ExternalizerTable$ExternalizerAdapter.readObject(ExternalizerTable.java:406)
> at org.infinispan.marshall.jboss.ExternalizerTable.readObject(ExternalizerTable.java:226)
> at org.infinispan.marshall.jboss.JBossMarshaller$ExternalizerTableProxy.readObject(JBossMarshaller.java:167)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:354)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)
> at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:41)
> at org.infinispan.marshall.exts.ArrayListExternalizer.readObject(ArrayListExternalizer.java:57)
> at org.infinispan.marshall.exts.ArrayListExternalizer.readObject(ArrayListExternalizer.java:45)
> at org.infinispan.marshall.jboss.ExternalizerTable$ExternalizerAdapter.readObject(ExternalizerTable.java:406)
> at org.infinispan.marshall.jboss.ExternalizerTable.readObject(ExternalizerTable.java:226)
> at org.infinispan.marshall.jboss.JBossMarshaller$ExternalizerTableProxy.readObject(JBossMarshaller.java:167)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:354)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)
> at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:41)
> at org.infinispan.marshall.exts.ReplicableCommandExternalizer.readParameters(ReplicableCommandExternalizer.java:130)
> at org.infinispan.marshall.exts.CacheRpcCommandExternalizer.readObject(CacheRpcCommandExternalizer.java:158)
> at org.infinispan.marshall.exts.CacheRpcCommandExternalizer.readObject(CacheRpcCommandExternalizer.java:73)
> at org.infinispan.marshall.jboss.ExternalizerTable$ExternalizerAdapter.readObject(ExternalizerTable.java:406)
> at org.infinispan.marshall.jboss.ExternalizerTable.readObject(ExternalizerTable.java:226)
> at org.infinispan.marshall.jboss.JBossMarshaller$ExternalizerTableProxy.readObject(JBossMarshaller.java:167)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:354)
> at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)
> at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:41)
> at org.infinispan.marshall.jboss.AbstractJBossMarshaller.objectFromObjectStream(AbstractJBossMarshaller.java:163)
> at org.infinispan.marshall.VersionAwareMarshaller.objectFromByteBuffer(VersionAwareMarshaller.java:121)
> at org.infinispan.marshall.AbstractDelegatingMarshaller.objectFromByteBuffer(AbstractDelegatingMarshaller.java:104)
> at org.infinispan.remoting.transport.jgroups.MarshallerAdapter.objectFromBuffer(MarshallerAdapter.java:50)
> at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:215)
> ... 28 more
> Caused by: org.jboss.modules.ModuleNotFoundException: deployment.auth-server.war:main
> at org.jboss.modules.ModuleLoader.loadModule(ModuleLoader.java:240) [jboss-modules.jar:1.3.6.Final-redhat-1]
> at org.jboss.marshalling.ModularClassResolver.resolveClass(ModularClassResolver.java:102)
> ... 79 more
>
>
>
>
> Very similar on node2 plus something like this:
> 15:01:46,574 ERROR [org.jboss.msc.service.fail] (ServerService Thread Pool -- 57) MSC000001: Failed to start service jboss.infinispan.keycloak.sessions: org.jboss.msc.service.StartException in service jboss.infinispan.keycloak.sessions: org.infinispan.CacheException: Unable to invoke method public void org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete() throws java.lang.InterruptedException on object of type StateTransferManagerImpl
> at org.jboss.as.clustering.msc.AsynchronousService$1.run(AsynchronousService.java:91) [jboss-as-clustering-common-7.5.0.Final-redhat-21.jar:7.5.0.Final-redhat-21]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_40]
> <skipped>
> Caused by: org.infinispan.CacheException: Initial state transfer timed out for cache sessions on node2/keycloak
> at org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete(StateTransferManagerImpl.java:216)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.8.0_40]
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) [rt.jar:1.8.0_40]
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_40]
> at java.lang.reflect.Method.invoke(Method.java:497) [rt.jar:1.8.0_40]
> at org.infinispan.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:203)
> ... 18 more
>
> 15:01:46,581 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) JBAS014612: Operation ("add") failed - address: ([
>     ("subsystem" => "infinispan"),
>     ("cache-container" => "keycloak"),
>     ("distributed-cache" => "sessions")
> ]) - failure description: {"JBAS014671: Failed services" => {"jboss.infinispan.keycloak.sessions" => "org.jboss.msc.service.StartException in service jboss.infinispan.keycloak.sessions: org.infinispan.CacheException: Unable to invoke method public void org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete() throws java.lang.InterruptedException on object of type StateTransferManagerImpl
> Caused by: org.infinispan.CacheException: Unable to invoke method public void org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete() throws java.lang.InterruptedException on object of type StateTransferManagerImpl
> Caused by: org.infinispan.CacheException: Initial state transfer timed out for cache sessions on node2/keycloak"}}
>
>
> Thanks,
>
> Libor Krzyžanek
> jboss.org Development Team
>
>> On 27 Apr 2015, at 15:02, Stian Thorgersen <stian@redhat.com> wrote:
>>
>>
>>
>> ----- Original Message -----
>>> From: "Libor Krzyžanek" <lkrzyzan(a)redhat.com
<mailto:lkrzyzan@redhat.com>>
>>> To: "Marek Posolda" <mposolda(a)redhat.com
<mailto:mposolda@redhat.com>>
>>> Cc: "keycloak-user" <keycloak-user(a)lists.jboss.org
<mailto:keycloak-user@lists.jboss.org>>
>>> Sent: Monday, 27 April, 2015 2:55:43 PM
>>> Subject: Re: [keycloak-user] Clustering on localhost with shared DB
>>>
>>> Hi,
>>> yeah, this helps a little bit:
>>> <invalidation-cache name="realms" mode="SYNC"/>
>>> <invalidation-cache name="users" mode="SYNC"/>
>>> <distributed-cache name="sessions" mode="SYNC" owners="2" segments="60">
>>>     <state-transfer enabled="true"/>
>>> </distributed-cache>
>>> <distributed-cache name="loginFailures" mode="SYNC" owners="2" segments="60">
>>>     <state-transfer enabled="true"/>
>>> </distributed-cache>
>>>
>>> When both caches on both nodes are up then syncing works fine.
>>> Also /sessions works OK.
>>>
>>> But I'm still facing issue no. 1.
>>>
>>> When a node is up, I see this in the logs:
>>>
>>> 14:51:19,088 INFO [org.jboss.as] (Controller Boot Thread) JBAS015874: JBoss EAP 6.4.0.GA (AS 7.5.0.Final-redhat-21) started in 18527ms - Started 242 of 347 services (141 services are lazy, passive or on-demand)
>>>
>>> Caches are initialised after the first hit, not after KC start.
>>
>> Have you tried putting start="EAGER" on both the cache-container and all caches in standalone.xml?
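(For reference, a sketch of roughly what that would look like in the infinispan subsystem of standalone-ha.xml, keeping the existing attributes of the "keycloak" container as they are; start="EAGER"/"LAZY" is the start-mode attribute the EAP 6 infinispan subsystem exposes on both the cache-container and the individual caches:)

<cache-container name="keycloak" start="EAGER"> <!-- keep the existing jndi-name/transport settings -->
    <invalidation-cache name="realms" mode="SYNC" start="EAGER"/>
    <invalidation-cache name="users" mode="SYNC" start="EAGER"/>
    <distributed-cache name="sessions" mode="SYNC" owners="2" segments="60" start="EAGER">
        <state-transfer enabled="true"/>
    </distributed-cache>
    <distributed-cache name="loginFailures" mode="SYNC" owners="2" segments="60" start="EAGER">
        <state-transfer enabled="true"/>
    </distributed-cache>
</cache-container>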
>>
>>>
>>> I'm talking about this in the log:
>>> 14:51:52,597 INFO [org.infinispan.jmx.CacheJmxRegistration] (http-/127.0.0.1:8080-1) ISPN000031: MBeans were successfully registered to the platform MBean server.
>>> 14:51:52,605 INFO [org.jboss.as.clustering.infinispan] (http-/127.0.0.1:8080-1) JBAS010281: Started users cache from keycloak container
>>> 14:51:52,710 INFO [org.infinispan.jmx.CacheJmxRegistration] (http-/127.0.0.1:8080-2) ISPN000031: MBeans were successfully registered to the platform MBean server.
>>> 14:51:52,815 INFO [org.jboss.as.clustering.infinispan] (http-/127.0.0.1:8080-2) JBAS010281: Started sessions cache from keycloak container
>>> 14:51:52,822 INFO [org.infinispan.jmx.CacheJmxRegistration] (http-/127.0.0.1:8080-2) ISPN000031: MBeans were successfully registered to the platform MBean server.
>>> 14:51:52,847 INFO [org.jboss.as.clustering.infinispan] (http-/127.0.0.1:8080-2) JBAS010281: Started loginFailures cache from keycloak container
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Libor Krzyžanek
>>> jboss.org Development Team
>>>
>>>
>>>
>>>
>>> On 27 Apr 2015, at 14:24, Marek Posolda <mposolda@redhat.com> wrote:
>>>
>>> On 27.4.2015 13:50, Libor Krzyžanek wrote:
>>>
>>>
>>> Hi,
>>> I now have an Apache web proxy with this configuration:
>>> <Proxy *>
>>>     Order allow,deny
>>>     Allow from all
>>> </Proxy>
>>> <Proxy balancer://app/>
>>>     BalancerMember http://localhost:8080 route=app02
>>>     BalancerMember http://localhost:8180 route=app03
>>>     ProxySet lbmethod=byrequests
>>> </Proxy>
>>> ProxyPass /balancer-manager !
>>> ProxyPass /server-status !
>>> ProxyPass /server-info !
>>> ProxyPass / balancer://app/
>>> ProxyPassReverse / balancer://app/
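(A balancer configuration like the one above assumes mod_proxy and its balancer pieces are loaded; a sketch of the typical LoadModule lines, with the caveat that the lbmethod/slotmem split only applies to httpd 2.4 and exact module paths vary by distribution:)

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
# httpd 2.4 only: the byrequests lb method and the shared-memory slots live in separate modules
LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so
LoadModule slotmem_shm_module modules/mod_slotmem_shm.so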
>>>
>>> It looks like it helped.
>>> When I have started both nodes and I can see that the caches on both nodes are started, everything is fine.
>>> Scenario: when I log in on node1, then stop node1, I'm redirected to node2 and I'm still logged in. Great!
>>>
>>> But I see two issues right now:
>>> 1. Caches are replicated to a newly started node too late.
>>> The scenario is:
>>> 1. start node1, log in.
>>> 2. start node2, wait until you see that node1 knows about the new node and node2 is fully started
>>> 3. kill node1.
>>>
>>> Then I'm redirected to the login page.
>>>
>>> This really only happens when no request hits the newly started node2. If I do a few reloads in the browser before I kill node1, then I see in the logs that those Infinispan caches are created on node2 and fully replicated.
>>>
>>> Is it related to start="EAGER"?
>>> Will it help if you use a config like this in standalone-ha.xml?
>>>
>>> <distributed-cache name="sessions" mode="SYNC" owners="2" segments="60">
>>>     <state-transfer enabled="true"/>
>>> </distributed-cache>
>>>
>>>
>>>
>>>
>>>
>>>
>>> 2. A weird thing happens on the /account/sessions page
>>> (http://localhost/auth/realms/cluster-test/account/sessions).
>>>
>>> I got:
>>>
>>> 13:30:50,291 ERROR [org.apache.catalina.core.ContainerBase.[jboss.web].[default-host].[/auth].[Keycloak REST Interface]] (http-/127.0.0.1:8080-2) JBWEB000236: Servlet.service() for servlet Keycloak REST Interface threw exception: java.lang.RuntimeException: request path: /auth/realms/cluster-test/account/sessions
>>> at org.keycloak.services.filters.KeycloakSessionServletFilter.doFilter(KeycloakSessionServletFilter.java:54) [keycloak-services-1.2.0.Beta1.jar:1.2.0.Beta1]
>>> at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:246) [jbossweb-7.5.7.Final-redhat-1.jar:7.5.7.Final-redhat-1]
>>> at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214) [jbossweb-7.5.7.Final-redhat-1.jar:7.5.7.Final-redhat-1]
>>> at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:231) [jbossweb-7.5.7.Final-redhat-1.jar:7.5.7.Final-redhat-1]
>>> at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:149) [jbossweb-7.5.7.Final-redhat-1.jar:7.5.7.Final-redhat-1]
>>> at org.jboss.modcluster.container.jbossweb.JBossWebContext$RequestListenerValve.event(JBossWebContext.java:91)
>>> at org.jboss.modcluster.container.jbossweb.JBossWebContext$RequestListenerValve.invoke(JBossWebContext.java:72)
>>> at org.jboss.as.jpa.interceptor.WebNonTxEmCloserValve.invoke(WebNonTxEmCloserValve.java:50) [jboss-as-jpa-7.5.0.Final-redhat-21.jar:7.5.0.Final-redhat-21]
>>> at org.jboss.as.jpa.interceptor.WebNonTxEmCloserValve.invoke(WebNonTxEmCloserValve.java:50) [jboss-as-jpa-7.5.0.Final-redhat-21.jar:7.5.0.Final-redhat-21]
>>> at org.jboss.as.web.security.SecurityContextAssociationValve.invoke(SecurityContextAssociationValve.java:169) [jboss-as-web-7.5.0.Final-redhat-21.jar:7.5.0.Final-redhat-21]
>>> at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:150) [jbossweb-7.5.7.Final-redhat-1.jar:7.5.7.Final-redhat-1]
>>> at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:97) [jbossweb-7.5.7.Final-redhat-1.jar:7.5.7.Final-redhat-1]
>>> at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:102) [jbossweb-7.5.7.Final-redhat-1.jar:7.5.7.Final-redhat-1]
>>> at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:344) [jbossweb-7.5.7.Final-redhat-1.jar:7.5.7.Final-redhat-1]
>>> at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:854) [jbossweb-7.5.7.Final-redhat-1.jar:7.5.7.Final-redhat-1]
>>> at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:653) [jbossweb-7.5.7.Final-redhat-1.jar:7.5.7.Final-redhat-1]
>>> at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:926) [jbossweb-7.5.7.Final-redhat-1.jar:7.5.7.Final-redhat-1]
>>> at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_40]
>>> Caused by: org.jboss.resteasy.spi.UnhandledException: java.lang.IllegalStateException: Cache mode should be DIST, rather than REPL_SYNC
>>> at org.jboss.resteasy.core.ExceptionHandler.handleApplicationException(ExceptionHandler.java:76) [resteasy-jaxrs-3.0.9.Final.jar:]
>>> at org.jboss.resteasy.core.ExceptionHandler.handleException(ExceptionHandler.java:212) [resteasy-jaxrs-3.0.9.Final.jar:]
>>> at org.jboss.resteasy.core.SynchronousDispatcher.writeException(SynchronousDispatcher.java:149) [resteasy-jaxrs-3.0.9.Final.jar:]
>>> at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:372) [resteasy-jaxrs-3.0.9.Final.jar:]
>>> at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:179) [resteasy-jaxrs-3.0.9.Final.jar:]
>>> at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:220) [resteasy-jaxrs-3.0.9.Final.jar:]
>>> at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56) [resteasy-jaxrs-3.0.9.Final.jar:]
>>> at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51) [resteasy-jaxrs-3.0.9.Final.jar:]
>>> at javax.servlet.http.HttpServlet.service(HttpServlet.java:847) [jboss-servlet-api_3.0_spec-1.0.2.Final-redhat-2.jar:1.0.2.Final-redhat-2]
>>> at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:295) [jbossweb-7.5.7.Final-redhat-1.jar:7.5.7.Final-redhat-1]
>>> at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214) [jbossweb-7.5.7.Final-redhat-1.jar:7.5.7.Final-redhat-1]
>>> at org.keycloak.services.filters.ClientConnectionFilter.doFilter(ClientConnectionFilter.java:41) [keycloak-services-1.2.0.Beta1.jar:1.2.0.Beta1]
>>> at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:246) [jbossweb-7.5.7.Final-redhat-1.jar:7.5.7.Final-redhat-1]
>>> at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:214) [jbossweb-7.5.7.Final-redhat-1.jar:7.5.7.Final-redhat-1]
>>> at org.keycloak.services.filters.KeycloakSessionServletFilter.doFilter(KeycloakSessionServletFilter.java:40) [keycloak-services-1.2.0.Beta1.jar:1.2.0.Beta1]
>>> ... 17 more
>>> Caused by: java.lang.IllegalStateException: Cache mode should be DIST, rather than REPL_SYNC
>>> at org.infinispan.distexec.mapreduce.MapReduceTask.ensureProperCacheState(MapReduceTask.java:685) [infinispan-core-5.2.11.Final-redhat-2.jar:5.2.11.Final-redhat-2]
>>> at org.infinispan.distexec.mapreduce.MapReduceTask.<init>(MapReduceTask.java:226) [infinispan-core-5.2.11.Final-redhat-2.jar:5.2.11.Final-redhat-2]
>>> at org.infinispan.distexec.mapreduce.MapReduceTask.<init>(MapReduceTask.java:190) [infinispan-core-5.2.11.Final-redhat-2.jar:5.2.11.Final-redhat-2]
>>> at org.keycloak.models.sessions.infinispan.InfinispanUserSessionProvider.getUserSessions(InfinispanUserSessionProvider.java:121) [keycloak-model-sessions-infinispan-1.2.0.Beta1.jar:1.2.0.Beta1]
>>> at org.keycloak.services.resources.AccountService.sessionsPage(AccountService.java:344) [keycloak-services-1.2.0.Beta1.jar:1.2.0.Beta1]
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.8.0_40]
>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) [rt.jar:1.8.0_40]
>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_40]
>>> at java.lang.reflect.Method.invoke(Method.java:497) [rt.jar:1.8.0_40]
>>> at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:137) [resteasy-jaxrs-3.0.9.Final.jar:]
>>> at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:296) [resteasy-jaxrs-3.0.9.Final.jar:]
>>> at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:250) [resteasy-jaxrs-3.0.9.Final.jar:]
>>> at org.jboss.resteasy.core.ResourceLocatorInvoker.invokeOnTargetObject(ResourceLocatorInvoker.java:140) [resteasy-jaxrs-3.0.9.Final.jar:]
>>> at org.jboss.resteasy.core.ResourceLocatorInvoker.invoke(ResourceLocatorInvoker.java:103) [resteasy-jaxrs-3.0.9.Final.jar:]
>>> at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:356) [resteasy-jaxrs-3.0.9.Final.jar:]
>>> ... 28 more
>>>
>>>
>>> I get the same error in the admin console
>>> (http://localhost/auth/admin/master/console/#/realms/cluster-test/sessions...)
>>>
>>> Strange... Are you using "distributed-cache" with mode "SYNC" on both cluster nodes?
>>>
>>> Marek
>>>
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Libor Krzyžanek
>>> jboss.org Development Team
>>>
>>>
>>>
>>>
>>> On 27 Apr 2015, at 09:05, Libor Krzyžanek <lkrzyzan@redhat.com> wrote:
>>>
>>> Hi Marek,
>>> you're right that I'm hitting localhost directly on different ports.
>>>
>>> I was thinking about cookies and the load balancer, so I checked the cookies, and they were sent on both ports.
>>>
>>> I'll set up a load balancer and see.
>>>
>>> Thanks,
>>>
>>> Libor Krzyžanek
>>> jboss.org Development Team
>>>
>>>
>>>
>>>
>>> On 24 Apr 2015, at 19:06, Marek Posolda <mposolda@redhat.com> wrote:
>>>
>>> Hi Libor,
>>>
>>> the config files look good (at least at first glance), but the question is whether you're using a load balancer?
>>>
>>> If you're not using a load balancer and you access the Keycloak servers directly on localhost:8080 and localhost:8180, the problem might just be that the browser cookie KEYCLOAK_IDENTITY is not shared between them, and hence going to localhost:8180 will not find the KEYCLOAK_IDENTITY cookie from localhost:8080 and will try to create a new session.
>>>
>>> You can check the admin console or account management and list the available user sessions on both nodes. If both cluster nodes have the same sessions, then replication of userSessions works fine and the only issue really is the cookie.
>>>
>>> I suspect that in production you will use a load balancer, so this issue won't happen.
>>>
>>> Marek
>>>
>>> On 24.4.2015 15:50, Libor Krzyžanek wrote:
>>>
>>>
>>> Attaching keycloak-server.json and standalone-ha.xml
>>>
>>> Thanks,
>>>
>>> Libor Krzyžanek
>>> jboss.org Development Team
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On 24 Apr 2015, at 15:36, Stian Thorgersen <stian@redhat.com> wrote:
>>>
>>> Can you attach your keycloak-server.json and standalone.xml?
>>>
>>> ----- Original Message -----
>>>
>>> From: "Libor Krzyžanek" <lkrzyzan@redhat.com>
>>> To: "keycloak-user" <keycloak-user@lists.jboss.org>
>>> Sent: Friday, 24 April, 2015 3:12:29 PM
>>> Subject: [keycloak-user] Clustering on localhost with shared DB
>>>
>>> Hi,
>>> I'm trying to achieve full user session replication, which means that when I'm logged in on node 1 and then hit node 2, I expect to be logged in, but I'm forced to log in again.
>>>
>>> I have:
>>> 1. two localhost nodes with JBoss EAP 6.4 + WAR installation
>>> 2. Postgres
>>> 3. EAP configured based on http://docs.jboss.org/keycloak/docs/1.2.0.Beta1/userguide/html/clustering...
>>>
>>> I tried either
>>> <distributed-cache name="sessions" mode="SYNC" owners="2"/>
>>> <distributed-cache name="loginFailures" mode="SYNC" owners="2"/>
>>> or
>>> <replicated-cache name="sessions" mode="SYNC"/>
>>> <replicated-cache name="loginFailures" mode="SYNC"/>
>>> but with the same result.
>>>
>>> I'm starting the nodes with:
>>> ./jb1/bin/standalone.sh --server-config=standalone-ha.xml -Djboss.node.name=node1
>>> ./jb2/bin/standalone.sh --server-config=standalone-ha.xml -Djboss.socket.binding.port-offset=100 -Djboss.node.name=node2
>>>
>>> Both jb1 and jb2 are identical and they know about each other (Received new cluster view: [node1/keycloak|1] [node1/keycloak, node2/keycloak]).
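(A quick way to confirm that both instances really joined the same cluster is to look for that cluster-view message in each server log; the jb1/jb2 paths follow the layout above and the default EAP standalone log location is assumed:)

grep "Received new cluster view" \
    jb1/standalone/log/server.log \
    jb2/standalone/log/server.log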
>>>
>>> How do you test clustering of KC please?
>>>
>>> Thanks,
>>>
>>> Libor Krzyžanek
>>> jboss.org Development Team
>>>
>>>
>>> _______________________________________________
>>> keycloak-user mailing list
>>> keycloak-user@lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/keycloak-user