[JBoss JIRA] (ISPN-6884) NullPointerException when performing Rolling Upgrade Procedure using Kubernetes
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6884?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec commented on ISPN-6884:
-------------------------------------------
No, when populating and reading values using Hot Rod, everything works fine.
Here is some more information about the versions I'm playing with:
Infinispan:
{code}
* c29524e - (HEAD -> ISPN-6847/Integrate_with_kubernetes) ISPN-6847 Kubernetes PING integration (2 days ago) <Sebastian Laskawiec>
* 46cb91b - (upstream/master, origin/master, origin/HEAD) ISPN-6745 Locks are lost in pessimistic cache (3 days ago) <Pedro Ruivo>
{code}
OpenShift:
{code}
$ oc version
oc v1.3.0-alpha.2
kubernetes v1.3.0-alpha.1-331-g0522e63
{code}
> NullPointerException when performing Rolling Upgrade Procedure using Kubernetes
> -------------------------------------------------------------------------------
>
> Key: ISPN-6884
> URL: https://issues.jboss.org/browse/ISPN-6884
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores, Server
> Affects Versions: 9.0.0.Alpha3
> Reporter: Sebastian Łaskawiec
> Assignee: Gustavo Fernandes
>
> During the [Rolling Upgrade Procedure|http://infinispan.org/docs/stable/user_guide/user_guide.html#st...] with compatibility caches on OpenShift I encountered a weird {{NullPointerException}}.
> Below are logs from the {{Source}} cluster:
> {code}
> 05:14:54,623 ERROR [org.infinispan.server.hotrod.HotRodEncoder] (HotRodServerWorker-9-8) ISPN005022: Exception writing response with messageId=59: java.lang.NullPointerException
> at org.infinispan.server.hotrod.Encoder2x$$anonfun$writeResponse$9.apply(Encoder2x.scala:353)
> at org.infinispan.server.hotrod.Encoder2x$$anonfun$writeResponse$9.apply(Encoder2x.scala:343)
> at scala.collection.immutable.List.foreach(List.scala:381)
> at org.infinispan.server.hotrod.Encoder2x$.writeResponse(Encoder2x.scala:343)
> at org.infinispan.server.hotrod.HotRodEncoder.encode(HotRodEncoder.scala:45)
> at io.netty.handler.codec.MessageToByteEncoder.write(MessageToByteEncoder.java:107)
> at io.netty.channel.ChannelHandlerInvokerUtil.invokeWriteNow(ChannelHandlerInvokerUtil.java:157)
> at io.netty.channel.DefaultChannelHandlerInvoker.invokeWrite(DefaultChannelHandlerInvoker.java:372)
> at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:391)
> at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:252)
> at io.netty.handler.logging.LoggingHandler.write(LoggingHandler.java:241)
> at io.netty.channel.ChannelHandlerInvokerUtil.invokeWriteNow(ChannelHandlerInvokerUtil.java:157)
> at io.netty.channel.DefaultChannelHandlerInvoker$WriteTask.run(DefaultChannelHandlerInvoker.java:496)
> at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:339)
> at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:279)
> at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
> at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> And from {{Destination}}:
> {code}
> 05:17:17,555 WARNING [io.netty.channel.DefaultChannelPipeline] (nioEventLoopGroup-7-2) An exception was thrown by a user handler's exceptionCaught() method:: java.lang.NullPointerException
> at org.jboss.resteasy.plugins.server.netty.RequestHandler.exceptionCaught(RequestHandler.java:91)
> at io.netty.channel.ChannelHandlerInvokerUtil.invokeExceptionCaughtNow(ChannelHandlerInvokerUtil.java:64)
> at io.netty.channel.DefaultChannelHandlerInvoker$5.run(DefaultChannelHandlerInvoker.java:117)
> at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:339)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:373)
> at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
> at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
> at java.lang.Thread.run(Thread.java:745)
> 05:17:17,556 WARNING [io.netty.channel.DefaultChannelPipeline] (nioEventLoopGroup-7-2) .. and the cause of the exceptionCaught() was:: java.io.IOException: Connection reset by peer
> at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
> at sun.nio.ch.IOUtil.read(IOUtil.java:192)
> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
> at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:288)
> at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1054)
> at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:245)
> at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:106)
> at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:527)
> at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:484)
> at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:398)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:370)
> at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
> at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
> at java.lang.Thread.run(Thread.java:745)
> {code}
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6884) NullPointerException when performing Rolling Upgrade Procedure using Kubernetes
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-6884?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes commented on ISPN-6884:
-----------------------------------------
Does it happen only in compat mode? Have you tried populating data using Hot Rod?
[JBoss JIRA] (ISPN-6673) Implement Rolling Upgrades with Kubernetes
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6673?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec commented on ISPN-6673:
-------------------------------------------
In library mode it is also possible to obtain a JMX connection. The Java process inside the container should have the following options (of course they might differ depending on the environment):
{code}
-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9010 -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false
{code}
An OpenShift developer needs to forward the JMX port to their own machine:
{code}
oc port-forward infinispan-simple-tutorials-kubernetes-2-rezpo 9010:9010
{code}
Then they can run JConsole (or another JMX tool):
{code}
./jconsole.sh 127.0.0.1:9010
{code}
> Implement Rolling Upgrades with Kubernetes
> ------------------------------------------
>
> Key: ISPN-6673
> URL: https://issues.jboss.org/browse/ISPN-6673
> Project: Infinispan
> Issue Type: Feature Request
> Components: Cloud Integrations
> Reporter: Sebastian Łaskawiec
> Assignee: Sebastian Łaskawiec
>
> There are 2 mechanisms which seem to do the same thing but are totally different:
> * [Kubernetes Rolling Update|http://kubernetes.io/docs/user-guide/rolling-updates/] - replaces Pods in a controllable fashion
> * [Infinispan Rolling Upgrade|http://infinispan.org/docs/stable/user_guide/user_guide.html#_Ro...] - a procedure for upgrading Infinispan or changing the configuration
> Kubernetes Rolling Updates can be used very easily for changing the configuration; however, if the changes are not runtime-compatible, one might lose data. A potential way to avoid this is to use a Cache Store. All other changes must be propagated using the Infinispan Rolling Upgrade procedure.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 5 months
[JBoss JIRA] (ISPN-6883) Remote Cache Store doesn't work properly in compatibility mode
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6883?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec edited comment on ISPN-6883 at 7/22/16 1:30 AM:
--------------------------------------------------------------------
Hey [~NadirX]! I assigned this one to you since you added most of the implementation to the Remote Cache Store. You will probably have an idea if this is a valid use case or not.
> Remote Cache Store doesn't work properly in compatibility mode
> --------------------------------------------------------------
>
> Key: ISPN-6883
> URL: https://issues.jboss.org/browse/ISPN-6883
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 9.0.0.Alpha3
> Reporter: Sebastian Łaskawiec
> Assignee: Tristan Tarrant
>
> Currently we can't use a Remote Cache Store for caches populated using REST in compatibility mode.
> The configuration for source cache looks like the following:
> {code}
> <distributed-cache name="default" mode="SYNC" segments="20" owners="2" remote-timeout="30000" start="EAGER">
> <locking acquire-timeout="30000" concurrency-level="1000" striping="false"/>
> <transaction mode="NONE"/>
> <compatibility enabled="true" />
> </distributed-cache>
> {code}
> Destination cache:
> {code}
> <distributed-cache name="default" mode="SYNC" segments="20" owners="2" remote-timeout="30000" start="EAGER">
> <locking acquire-timeout="30000" concurrency-level="1000" striping="false"/>
> <transaction mode="NONE"/>
> <compatibility enabled="true" />
> <remote-store cache="default" hotrod-wrapping="true" read-only="true">
> <remote-server outbound-socket-binding="remote-store-hotrod-server" />
> </remote-store>
> </distributed-cache>
> {code}
> With the configuration above, when performing the [Rolling Upgrade|http://infinispan.org/docs/stable/user_guide/user_guide.html#steps_2] procedure I get:
> {code}
> 03:38:43,025 ERROR [org.infinispan.interceptors.impl.InvocationContextInterceptor] (nioEventLoopGroup-7-1) ISPN000136: Error executing command GetCacheEntryCommand, writing keys []: java.lang.ClassCastException: java.lang.String cannot be cast to [B
> at org.infinispan.persistence.remote.wrapper.HotRodEntryMarshaller.objectToByteBuffer(HotRodEntryMarshaller.java:28)
> at org.infinispan.client.hotrod.impl.RemoteCacheImpl.obj2bytes(RemoteCacheImpl.java:494)
> at org.infinispan.client.hotrod.impl.RemoteCacheImpl.getWithMetadata(RemoteCacheImpl.java:208)
> at org.infinispan.persistence.remote.RemoteStore.load(RemoteStore.java:109)
> at org.infinispan.persistence.manager.PersistenceManagerImpl.loadFromAllStores(PersistenceManagerImpl.java:455)
> at org.infinispan.persistence.PersistenceUtil.loadAndCheckExpiration(PersistenceUtil.java:113)
> at org.infinispan.persistence.PersistenceUtil.lambda$loadAndStoreInDataContainer$0(PersistenceUtil.java:98)
> at org.infinispan.container.DefaultDataContainer.lambda$compute$3(DefaultDataContainer.java:325)
> at org.infinispan.commons.util.concurrent.jdk8backported.EquivalentConcurrentHashMapV8.compute(EquivalentConcurrentHashMapV8.java:1873)
> at org.infinispan.container.DefaultDataContainer.compute(DefaultDataContainer.java:324)
> at org.infinispan.persistence.PersistenceUtil.loadAndStoreInDataContainer(PersistenceUtil.java:91)
> at org.infinispan.interceptors.impl.CacheLoaderInterceptor.loadInContext(CacheLoaderInterceptor.java:352)
> at org.infinispan.interceptors.impl.CacheLoaderInterceptor.loadIfNeeded(CacheLoaderInterceptor.java:347)
> at org.infinispan.interceptors.impl.CacheLoaderInterceptor.visitDataCommand(CacheLoaderInterceptor.java:206)
> at org.infinispan.interceptors.impl.CacheLoaderInterceptor.visitGetCacheEntryCommand(CacheLoaderInterceptor.java:150)
> at org.infinispan.interceptors.impl.CacheLoaderInterceptor.visitGetCacheEntryCommand(CacheLoaderInterceptor.java:88)
> at org.infinispan.commands.read.GetCacheEntryCommand.acceptVisitor(GetCacheEntryCommand.java:40)
> at org.infinispan.interceptors.DDAsyncInterceptor.visitCommand(DDAsyncInterceptor.java:53)
> at org.infinispan.interceptors.impl.BaseAsyncInvocationContext.invokeInterceptorsSync(BaseAsyncInvocationContext.java:314)
> at org.infinispan.interceptors.impl.BaseAsyncInvocationContext.forkInvocationSync(BaseAsyncInvocationContext.java:98)
> at org.infinispan.interceptors.impl.BaseAsyncInvocationContext.invokeForkAndHandlerSync(BaseAsyncInvocationContext.java:474)
> at org.infinispan.interceptors.impl.BaseAsyncInvocationContext.afterVisit(BaseAsyncInvocationContext.java:463)
> at org.infinispan.interceptors.impl.BaseAsyncInvocationContext.invokeInterceptorsSync(BaseAsyncInvocationContext.java:329)
> at org.infinispan.interceptors.impl.BaseAsyncInvocationContext.invokeSync(BaseAsyncInvocationContext.java:282)
> at org.infinispan.interceptors.impl.AsyncInterceptorChainImpl.invoke(AsyncInterceptorChainImpl.java:236)
> at org.infinispan.cache.impl.CacheImpl.getCacheEntry(CacheImpl.java:433)
> at org.infinispan.cache.impl.CacheImpl.getCacheEntry(CacheImpl.java:439)
> at org.infinispan.cache.impl.AbstractDelegatingAdvancedCache.getCacheEntry(AbstractDelegatingAdvancedCache.java:216)
> at org.infinispan.rest.RestCacheManager.getInternalEntry(RestCacheManager.scala:58)
> at org.infinispan.rest.Server$$anonfun$getEntry$1.apply(Server.scala:90)
> at org.infinispan.rest.Server$$anonfun$getEntry$1.apply(Server.scala:89)
> at org.infinispan.rest.Server.protectCacheNotFound(Server.scala:498)
> at org.infinispan.rest.Server.getEntry(Server.scala:89)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:139)
> at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:295)
> at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:249)
> at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:236)
> at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:395)
> at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:202)
> at org.jboss.resteasy.plugins.server.netty.RequestDispatcher.service(RequestDispatcher.java:83)
> at org.jboss.resteasy.plugins.server.netty.RequestHandler.channelRead0(RequestHandler.java:53)
> at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
> at io.netty.channel.DefaultChannelHandlerInvoker$7.run(DefaultChannelHandlerInvoker.java:159)
> at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:339)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:373)
> at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
> at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
> at java.lang.Thread.run(Thread.java:745)
> {code}
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 5 months
[JBoss JIRA] (ISPN-6884) NullPointerException when performing Rolling Upgrade Procedure using Kubernetes
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6884?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec commented on ISPN-6884:
-------------------------------------------
Hey [~gustavonalle]! You're probably the most knowledgeable person in this area. This NPE does not look good... It might be related to https://issues.jboss.org/browse/ISPN-6883
[JBoss JIRA] (ISPN-6884) NullPointerException when performing Rolling Upgrade Procedure using Kubernetes
by Sebastian Łaskawiec (JIRA)
Sebastian Łaskawiec created ISPN-6884:
-----------------------------------------
Summary: NullPointerException when performing Rolling Upgrade Procedure using Kubernetes
Key: ISPN-6884
URL: https://issues.jboss.org/browse/ISPN-6884
Project: Infinispan
Issue Type: Bug
Components: Loaders and Stores, Server
Affects Versions: 9.0.0.Alpha3
Reporter: Sebastian Łaskawiec
Assignee: Gustavo Fernandes
During the [Rolling Upgrade Procedure|http://infinispan.org/docs/stable/user_guide/user_guide.html#st...] with compatibility caches on OpenShift I encountered a weird {{NullPointerException}}.
[JBoss JIRA] (ISPN-6673) Implement Rolling Upgrades with Kubernetes
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6673?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec edited comment on ISPN-6673 at 7/22/16 1:16 AM:
--------------------------------------------------------------------
WARNING - WORK IN PROGRESS NOTES
The procedure for OpenShift looks like the following:
# Start an OpenShift cluster using:
{code}
oc cluster up
{code}
Note that you are logged in as a developer.
# Create a new Infinispan cluster using the standard configuration. Later on I will use the REST interface for playing with data, so turn on compatibility mode:
{code}
oc new-app slaskawi/infinispan-experiments
{code}
# Note that you should always use labels for your clusters. I'll label my cluster {{cluster=cluster-1}}:
{code}
oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_LABELS=cluster=cluster-1
oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_NAMESPACE=myproject
oc label dc/infinispan-experiments cluster=cluster-1
{code}
# Scale up the deployment
{code}
oc scale dc/infinispan-experiments --replicas=3
{code}
# Create a route to the service
{code}
oc expose svc/infinispan-experiments
{code}
# Add some entries using REST
{code}
curl -X POST -H 'Content-type: text/plain' -d 'test' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
{code}
# Check if the entry is there
{code}
curl -X GET -H 'Content-type: text/plain' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
{code}
# Now we can spin up a new cluster. Again, it's very important to use labels to prevent those two clusters from joining together. The new Docker image with the Infinispan Hot Rod server needs to contain the following Remote Cache Store definition:
{code}
212,214d214
< <remote-store cache="default" hotrod-wrapping="false" read-only="true">
< <remote-server outbound-socket-binding="remote-store-hotrod-server" />
< </remote-store>
449,451c449
< <!-- If you have properly configured DNS, this could be a service name or even a Headless Service -->
< <!-- However DNS configuration with local cluster might be tricky -->
< <remote-destination host="172.30.14.112" port="11222"/>
{code}
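For completeness, the {{remote-store-hotrod-server}} binding referenced above lives in the socket-binding-group; a sketch matching the {{remote-destination}} line from the diff (host and port are the ones from my environment):
{code}
<outbound-socket-binding name="remote-store-hotrod-server">
    <remote-destination host="172.30.14.112" port="11222"/>
</outbound-socket-binding>
{code}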
# Spinning up the new cluster involves the following commands:
{code}
oc new-app slaskawi/infinispan-experiments-2
oc env dc/infinispan-experiments-2 OPENSHIFT_KUBE_PING_LABELS=cluster=cluster-2
oc env dc/infinispan-experiments-2 OPENSHIFT_KUBE_PING_NAMESPACE=myproject
oc label dc/infinispan-experiments-2 cluster=cluster-2
oc expose svc infinispan-experiments-2
{code}
# At this stage we have 2 clusters (the old one with selector {{cluster=cluster-1}} and the new one with selector {{cluster=cluster-2}}). Both should be up and running (check that with {{oc status -v}}). Cluster-2 has remote stores which point to Cluster-1.
# Switch all the clients to the {{cluster=cluster-2}}. Depending on your configuration you probably want to create a new Route (if your clients connect to the cluster using Routes) or modify the Service.
# Fetch all remaining keys from {{cluster=cluster-1}}
{code}
oc get pods --selector=deploymentconfig=infinispan-experiments-2
.. write down the Pod name; it is used in the next command
oc exec infinispan-experiments-2-3-pc7sg -- '/opt/jboss/infinispan-server/bin/ispn-cli.sh' '-c' '--controller=$(hostname -i):9990' '/subsystem=datagrid-infinispan/cache-container=clustered/distributed-cache=default:synchronize-data(migrator-name=hotrod)'
{code}
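After the data has been synchronized, the Rolling Upgrade procedure finishes by disconnecting the new cluster from the old one; a sketch, assuming the same Pod name and CLI path as above ({{disconnect-source}} is the counterpart of {{synchronize-data}} in the documented procedure):
{code}
oc exec infinispan-experiments-2-3-pc7sg -- '/opt/jboss/infinispan-server/bin/ispn-cli.sh' '-c' '--controller=$(hostname -i):9990' '/subsystem=datagrid-infinispan/cache-container=clustered/distributed-cache=default:disconnect-source(migrator-name=hotrod)'
{code}
Once disconnected, the old cluster can be scaled down and removed.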
was (Author: sebastian.laskawiec):
WARNING - WORK IN PROGRESS NOTES
The procedure for OpenShift looks like the following:
# Start OpenShift cluster using:
{code}
oc cluster up
{code}
Note that you are logged in as a developer.
# Create a new Infinispan cluster using the standard configuration. Later on I will use the REST interface for playing with data, so turn on compatibility mode:
{code}
oc new-app slaskawi/infinispan-experiments
{code}
# Note that you should always use labels for your clusters. I'll call mine cluster=cluster-1:
{code}
oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_LABELS=cluster=cluster-1
oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_NAMESPACE=myproject
oc label dc/infinispan-experiments cluster=cluster-1
{code}
# Scale up the deployment
{code}
oc scale dc/infinispan-experiments --replicas=3
{code}
# Create a route to the service
{code}
oc expose svc/infinispan-experiments
{code}
# Add some entries using REST
{code}
curl -X POST -H 'Content-type: text/plain' -d 'test' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
{code}
# Check if the entry is there
{code}
curl -X GET -H 'Content-type: text/plain' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
{code}
# Now we can spin up a new cluster. Again, it's very important to use labels to prevent those two clusters from joining together. The new Docker image with the Infinispan Hot Rod server needs to contain the following Remote Cache Store definition:
{code}
212,214d214
< <remote-store cache="default" hotrod-wrapping="true" read-only="true">
< <remote-server outbound-socket-binding="remote-store-hotrod-server" />
< </remote-store>
449,451c449
< <!-- If you have properly configured DNS, this could be a service name or even a Headless Service -->
< <!-- However DNS configuration with local cluster might be tricky -->
< <remote-destination host="172.30.14.112" port="11222"/>
{code}
# Spinning up the new cluster involves the following commands:
{code}
oc new-app slaskawi/infinispan-experiments-2
oc env dc/infinispan-experiments-2 OPENSHIFT_KUBE_PING_LABELS=cluster=cluster-2
oc env dc/infinispan-experiments-2 OPENSHIFT_KUBE_PING_NAMESPACE=myproject
oc label dc/infinispan-experiments-2 cluster=cluster-2
oc expose svc infinispan-experiments-2
{code}
# At this stage we have 2 clusters (the old one with selector {{cluster=cluster-1}} and the new one with selector {{cluster=cluster-2}}). Both should be up and running (check that with {{oc status -v}}). Cluster-2 has remote stores which point to Cluster-1.
> Implement Rolling Upgrades with Kubernetes
> ------------------------------------------
>
> Key: ISPN-6673
> URL: https://issues.jboss.org/browse/ISPN-6673
> Project: Infinispan
> Issue Type: Feature Request
> Components: Cloud Integrations
> Reporter: Sebastian Łaskawiec
> Assignee: Sebastian Łaskawiec
>
> There are 2 mechanisms which seem to do the same thing but are totally different:
> * [Kubernetes Rolling Update|http://kubernetes.io/docs/user-guide/rolling-updates/] - replaces Pods in a controllable fashion
> * [Infinispan Rolling Upgrade|http://infinispan.org/docs/stable/user_guide/user_guide.html#_Ro...] - a procedure for upgrading Infinispan or changing the configuration
> Kubernetes Rolling Updates can be used very easily for changing the configuration; however, if the changes are not runtime-compatible, one might lose data. A potential way to avoid this is to use a Cache Store. All other changes must be propagated using the Infinispan Rolling Upgrade procedure.
[JBoss JIRA] (ISPN-6883) Remote Cache Store doesn't work properly in compatibility mode
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6883?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec commented on ISPN-6883:
-------------------------------------------
Update: I think I found the problem - {{hotrod-wrapping}} in my case should be set to {{false}}. Perhaps we should update the docs and emphasize this?
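For reference, this is the destination cache's store definition from the description with the flag flipped as suggested:
{code}
<remote-store cache="default" hotrod-wrapping="false" read-only="true">
    <remote-server outbound-socket-binding="remote-store-hotrod-server" />
</remote-store>
{code}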
> Remote Cache Store doesn't work properly in compatibility mode
> ---------------------------------------------------------------
>
> Key: ISPN-6883
> URL: https://issues.jboss.org/browse/ISPN-6883
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 9.0.0.Alpha3
> Reporter: Sebastian Łaskawiec
> Assignee: Tristan Tarrant
>
> Currently we can't use Remote Cache Store for Caches populated using REST with compatibility mode.
> The configuration for source cache looks like the following:
> {code}
> <distributed-cache name="default" mode="SYNC" segments="20" owners="2" remote-timeout="30000" start="EAGER">
> <locking acquire-timeout="30000" concurrency-level="1000" striping="false"/>
> <transaction mode="NONE"/>
> <compatibility enabled="true" />
> </distributed-cache>
> {code}
> Destination cache:
> {code}
> <distributed-cache name="default" mode="SYNC" segments="20" owners="2" remote-timeout="30000" start="EAGER">
> <locking acquire-timeout="30000" concurrency-level="1000" striping="false"/>
> <transaction mode="NONE"/>
> <compatibility enabled="true" />
> <remote-store cache="default" hotrod-wrapping="true" read-only="true">
> <remote-server outbound-socket-binding="remote-store-hotrod-server" />
> </remote-store>
> </distributed-cache>
> {code}
> With the configuration above, when performing [Rolling Upgrade|http://infinispan.org/docs/stable/user_guide/user_guide.html#steps_2] procedure I get:
> {code}
> 03:38:43,025 ERROR [org.infinispan.interceptors.impl.InvocationContextInterceptor] (nioEventLoopGroup-7-1) ISPN000136: Error executing command GetCacheEntryCommand, writing keys []: java.lang.ClassCastException: java.lang.String cannot be cast to [B
> at org.infinispan.persistence.remote.wrapper.HotRodEntryMarshaller.objectToByteBuffer(HotRodEntryMarshaller.java:28)
> at org.infinispan.client.hotrod.impl.RemoteCacheImpl.obj2bytes(RemoteCacheImpl.java:494)
> at org.infinispan.client.hotrod.impl.RemoteCacheImpl.getWithMetadata(RemoteCacheImpl.java:208)
> at org.infinispan.persistence.remote.RemoteStore.load(RemoteStore.java:109)
> at org.infinispan.persistence.manager.PersistenceManagerImpl.loadFromAllStores(PersistenceManagerImpl.java:455)
> at org.infinispan.persistence.PersistenceUtil.loadAndCheckExpiration(PersistenceUtil.java:113)
> at org.infinispan.persistence.PersistenceUtil.lambda$loadAndStoreInDataContainer$0(PersistenceUtil.java:98)
> at org.infinispan.container.DefaultDataContainer.lambda$compute$3(DefaultDataContainer.java:325)
> at org.infinispan.commons.util.concurrent.jdk8backported.EquivalentConcurrentHashMapV8.compute(EquivalentConcurrentHashMapV8.java:1873)
> at org.infinispan.container.DefaultDataContainer.compute(DefaultDataContainer.java:324)
> at org.infinispan.persistence.PersistenceUtil.loadAndStoreInDataContainer(PersistenceUtil.java:91)
> at org.infinispan.interceptors.impl.CacheLoaderInterceptor.loadInContext(CacheLoaderInterceptor.java:352)
> at org.infinispan.interceptors.impl.CacheLoaderInterceptor.loadIfNeeded(CacheLoaderInterceptor.java:347)
> at org.infinispan.interceptors.impl.CacheLoaderInterceptor.visitDataCommand(CacheLoaderInterceptor.java:206)
> at org.infinispan.interceptors.impl.CacheLoaderInterceptor.visitGetCacheEntryCommand(CacheLoaderInterceptor.java:150)
> at org.infinispan.interceptors.impl.CacheLoaderInterceptor.visitGetCacheEntryCommand(CacheLoaderInterceptor.java:88)
> at org.infinispan.commands.read.GetCacheEntryCommand.acceptVisitor(GetCacheEntryCommand.java:40)
> at org.infinispan.interceptors.DDAsyncInterceptor.visitCommand(DDAsyncInterceptor.java:53)
> at org.infinispan.interceptors.impl.BaseAsyncInvocationContext.invokeInterceptorsSync(BaseAsyncInvocationContext.java:314)
> at org.infinispan.interceptors.impl.BaseAsyncInvocationContext.forkInvocationSync(BaseAsyncInvocationContext.java:98)
> at org.infinispan.interceptors.impl.BaseAsyncInvocationContext.invokeForkAndHandlerSync(BaseAsyncInvocationContext.java:474)
> at org.infinispan.interceptors.impl.BaseAsyncInvocationContext.afterVisit(BaseAsyncInvocationContext.java:463)
> at org.infinispan.interceptors.impl.BaseAsyncInvocationContext.invokeInterceptorsSync(BaseAsyncInvocationContext.java:329)
> at org.infinispan.interceptors.impl.BaseAsyncInvocationContext.invokeSync(BaseAsyncInvocationContext.java:282)
> at org.infinispan.interceptors.impl.AsyncInterceptorChainImpl.invoke(AsyncInterceptorChainImpl.java:236)
> at org.infinispan.cache.impl.CacheImpl.getCacheEntry(CacheImpl.java:433)
> at org.infinispan.cache.impl.CacheImpl.getCacheEntry(CacheImpl.java:439)
> at org.infinispan.cache.impl.AbstractDelegatingAdvancedCache.getCacheEntry(AbstractDelegatingAdvancedCache.java:216)
> at org.infinispan.rest.RestCacheManager.getInternalEntry(RestCacheManager.scala:58)
> at org.infinispan.rest.Server$$anonfun$getEntry$1.apply(Server.scala:90)
> at org.infinispan.rest.Server$$anonfun$getEntry$1.apply(Server.scala:89)
> at org.infinispan.rest.Server.protectCacheNotFound(Server.scala:498)
> at org.infinispan.rest.Server.getEntry(Server.scala:89)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:139)
> at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:295)
> at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:249)
> at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:236)
> at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:395)
> at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:202)
> at org.jboss.resteasy.plugins.server.netty.RequestDispatcher.service(RequestDispatcher.java:83)
> at org.jboss.resteasy.plugins.server.netty.RequestHandler.channelRead0(RequestHandler.java:53)
> at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
> at io.netty.channel.DefaultChannelHandlerInvoker$7.run(DefaultChannelHandlerInvoker.java:159)
> at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:339)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:373)
> at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
> at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
> at java.lang.Thread.run(Thread.java:745)
> {code}
[JBoss JIRA] (ISPN-6883) Remote Cache Store doesn't work properly in compatibility mode
by Sebastian Łaskawiec (JIRA)
Sebastian Łaskawiec created ISPN-6883:
-----------------------------------------
Summary: Remote Cache Store doesn't work properly in compatibility mode
Key: ISPN-6883
URL: https://issues.jboss.org/browse/ISPN-6883
Project: Infinispan
Issue Type: Bug
Components: Loaders and Stores
Affects Versions: 9.0.0.Alpha3
Reporter: Sebastian Łaskawiec
Assignee: Tristan Tarrant
Currently we can't use Remote Cache Store for Caches populated using REST with compatibility mode.
The configuration for source cache looks like the following:
{code}
<distributed-cache name="default" mode="SYNC" segments="20" owners="2" remote-timeout="30000" start="EAGER">
<locking acquire-timeout="30000" concurrency-level="1000" striping="false"/>
<transaction mode="NONE"/>
<compatibility enabled="true" />
</distributed-cache>
{code}
Destination cache:
{code}
<distributed-cache name="default" mode="SYNC" segments="20" owners="2" remote-timeout="30000" start="EAGER">
<locking acquire-timeout="30000" concurrency-level="1000" striping="false"/>
<transaction mode="NONE"/>
<compatibility enabled="true" />
<remote-store cache="default" hotrod-wrapping="true" read-only="true">
<remote-server outbound-socket-binding="remote-store-hotrod-server" />
</remote-store>
</distributed-cache>
{code}
With the configuration above, when performing [Rolling Upgrade|http://infinispan.org/docs/stable/user_guide/user_guide.html#steps_2] procedure I get:
{code}
03:38:43,025 ERROR [org.infinispan.interceptors.impl.InvocationContextInterceptor] (nioEventLoopGroup-7-1) ISPN000136: Error executing command GetCacheEntryCommand, writing keys []: java.lang.ClassCastException: java.lang.String cannot be cast to [B
at org.infinispan.persistence.remote.wrapper.HotRodEntryMarshaller.objectToByteBuffer(HotRodEntryMarshaller.java:28)
at org.infinispan.client.hotrod.impl.RemoteCacheImpl.obj2bytes(RemoteCacheImpl.java:494)
at org.infinispan.client.hotrod.impl.RemoteCacheImpl.getWithMetadata(RemoteCacheImpl.java:208)
at org.infinispan.persistence.remote.RemoteStore.load(RemoteStore.java:109)
at org.infinispan.persistence.manager.PersistenceManagerImpl.loadFromAllStores(PersistenceManagerImpl.java:455)
at org.infinispan.persistence.PersistenceUtil.loadAndCheckExpiration(PersistenceUtil.java:113)
at org.infinispan.persistence.PersistenceUtil.lambda$loadAndStoreInDataContainer$0(PersistenceUtil.java:98)
at org.infinispan.container.DefaultDataContainer.lambda$compute$3(DefaultDataContainer.java:325)
at org.infinispan.commons.util.concurrent.jdk8backported.EquivalentConcurrentHashMapV8.compute(EquivalentConcurrentHashMapV8.java:1873)
at org.infinispan.container.DefaultDataContainer.compute(DefaultDataContainer.java:324)
at org.infinispan.persistence.PersistenceUtil.loadAndStoreInDataContainer(PersistenceUtil.java:91)
at org.infinispan.interceptors.impl.CacheLoaderInterceptor.loadInContext(CacheLoaderInterceptor.java:352)
at org.infinispan.interceptors.impl.CacheLoaderInterceptor.loadIfNeeded(CacheLoaderInterceptor.java:347)
at org.infinispan.interceptors.impl.CacheLoaderInterceptor.visitDataCommand(CacheLoaderInterceptor.java:206)
at org.infinispan.interceptors.impl.CacheLoaderInterceptor.visitGetCacheEntryCommand(CacheLoaderInterceptor.java:150)
at org.infinispan.interceptors.impl.CacheLoaderInterceptor.visitGetCacheEntryCommand(CacheLoaderInterceptor.java:88)
at org.infinispan.commands.read.GetCacheEntryCommand.acceptVisitor(GetCacheEntryCommand.java:40)
at org.infinispan.interceptors.DDAsyncInterceptor.visitCommand(DDAsyncInterceptor.java:53)
at org.infinispan.interceptors.impl.BaseAsyncInvocationContext.invokeInterceptorsSync(BaseAsyncInvocationContext.java:314)
at org.infinispan.interceptors.impl.BaseAsyncInvocationContext.forkInvocationSync(BaseAsyncInvocationContext.java:98)
at org.infinispan.interceptors.impl.BaseAsyncInvocationContext.invokeForkAndHandlerSync(BaseAsyncInvocationContext.java:474)
at org.infinispan.interceptors.impl.BaseAsyncInvocationContext.afterVisit(BaseAsyncInvocationContext.java:463)
at org.infinispan.interceptors.impl.BaseAsyncInvocationContext.invokeInterceptorsSync(BaseAsyncInvocationContext.java:329)
at org.infinispan.interceptors.impl.BaseAsyncInvocationContext.invokeSync(BaseAsyncInvocationContext.java:282)
at org.infinispan.interceptors.impl.AsyncInterceptorChainImpl.invoke(AsyncInterceptorChainImpl.java:236)
at org.infinispan.cache.impl.CacheImpl.getCacheEntry(CacheImpl.java:433)
at org.infinispan.cache.impl.CacheImpl.getCacheEntry(CacheImpl.java:439)
at org.infinispan.cache.impl.AbstractDelegatingAdvancedCache.getCacheEntry(AbstractDelegatingAdvancedCache.java:216)
at org.infinispan.rest.RestCacheManager.getInternalEntry(RestCacheManager.scala:58)
at org.infinispan.rest.Server$$anonfun$getEntry$1.apply(Server.scala:90)
at org.infinispan.rest.Server$$anonfun$getEntry$1.apply(Server.scala:89)
at org.infinispan.rest.Server.protectCacheNotFound(Server.scala:498)
at org.infinispan.rest.Server.getEntry(Server.scala:89)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:139)
at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:295)
at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:249)
at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:236)
at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:395)
at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:202)
at org.jboss.resteasy.plugins.server.netty.RequestDispatcher.service(RequestDispatcher.java:83)
at org.jboss.resteasy.plugins.server.netty.RequestHandler.channelRead0(RequestHandler.java:53)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
at io.netty.channel.DefaultChannelHandlerInvoker$7.run(DefaultChannelHandlerInvoker.java:159)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:339)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:373)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
at java.lang.Thread.run(Thread.java:745)
{code}
[JBoss JIRA] (ISPN-6673) Implement Rolling Upgrades with Kubernetes
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6673?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec edited comment on ISPN-6673 at 7/21/16 9:35 AM:
--------------------------------------------------------------------
WARNING - WORK IN PROGRESS NOTES
The procedure for OpenShift looks like the following:
# Start OpenShift cluster using:
{code}
oc cluster up
{code}
Note that you are logged in as a developer.
# Create a new Infinispan cluster using the standard configuration. Later on I will use the REST interface for playing with data, so turn on compatibility mode:
{code}
oc new-app slaskawi/infinispan-experiments
{code}
# Note that you should always use labels for your clusters. I'll call mine cluster=cluster-1:
{code}
oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_LABELS=cluster=cluster-1
oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_NAMESPACE=myproject
oc label dc/infinispan-experiments cluster=cluster-1
{code}
# Scale up the deployment
{code}
oc scale dc/infinispan-experiments --replicas=3
{code}
# Create a route to the service
{code}
oc expose svc/infinispan-experiments
{code}
# Add some entries using REST
{code}
curl -X POST -H 'Content-type: text/plain' -d 'test' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
{code}
# Check if the entry is there
{code}
curl -X GET -H 'Content-type: text/plain' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
{code}
# Now we can spin up a new cluster. Again, it's very important to use labels to prevent those two clusters from joining together. The new Docker image with the Infinispan Hot Rod server needs to contain the following Remote Cache Store definition:
{code}
212,214d214
< <remote-store cache="default" hotrod-wrapping="true" read-only="true">
< <remote-server outbound-socket-binding="remote-store-hotrod-server" />
< </remote-store>
449,451c449
< <!-- If you have properly configured DNS, this could be a service name or even a Headless Service -->
< <!-- However DNS configuration with local cluster might be tricky -->
< <remote-destination host="172.30.14.112" port="11222"/>
{code}
# Spinning up the new cluster involves the following commands:
{code}
oc new-app slaskawi/infinispan-experiments-2
oc env dc/infinispan-experiments-2 OPENSHIFT_KUBE_PING_LABELS=cluster=cluster-2
oc env dc/infinispan-experiments-2 OPENSHIFT_KUBE_PING_NAMESPACE=myproject
oc label dc/infinispan-experiments-2 cluster=cluster-2
oc expose svc infinispan-experiments-2
{code}
# At this stage we have 2 clusters (the old one with selector {{cluster=cluster-1}} and the new one with selector {{cluster=cluster-2}}). Both should be up and running (check that with {{oc status -v}}). Cluster-2 has remote stores which point to Cluster-1.
was (Author: sebastian.laskawiec):
WARNING - WORK IN PROGRESS NOTES
The procedure for OpenShift looks like the following:
# Start OpenShift cluster using:
{code}
oc cluster up
{code}
Note that you are logged in as a developer.
# Create a new Infinispan cluster using the standard configuration:
{code}
oc new-app slaskawi/infinispan-experiments
{code}
# Note that you should always use labels for your clusters. I'll call mine cluster=cluster-1:
{code}
oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_LABELS=cluster=cluster-1
oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_NAMESPACE=myproject
oc label dc/infinispan-experiments cluster=cluster-1
{code}
# Scale up the deployment
{code}
oc scale dc/infinispan-experiments --replicas=3
{code}
# Create a route to the service
{code}
oc expose svc/infinispan-experiments
{code}
# Add some entries using REST
{code}
curl -X POST -H 'Content-type: text/plain' -d 'test' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
{code}
# Check if the entry is there
{code}
curl -X GET -H 'Content-type: text/plain' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
{code}
# Now we can spin up a new cluster. Again, it's very important to use labels to prevent those two clusters from joining together. The new Docker image with the Infinispan Hot Rod server needs to contain the following Remote Cache Store definition:
{code}
212,214d214
< <remote-store cache="default" hotrod-wrapping="true" read-only="true">
< <remote-server outbound-socket-binding="remote-store-hotrod-server" />
< </remote-store>
449,451c449
< <!-- If you have properly configured DNS, this could be a service name or even a Headless Service -->
< <!-- However DNS configuration with local cluster might be tricky -->
< <remote-destination host="172.30.14.112" port="11222"/>
{code}
# Spinning up the new cluster involves the following commands:
{code}
oc new-app slaskawi/infinispan-experiments-2
oc env dc/infinispan-experiments-2 OPENSHIFT_KUBE_PING_LABELS=cluster=cluster-2
oc env dc/infinispan-experiments-2 OPENSHIFT_KUBE_PING_NAMESPACE=myproject
oc label dc/infinispan-experiments-2 cluster=cluster-2
oc expose svc infinispan-experiments-2
{code}
# At this stage we have 2 clusters (the old one with selector {{cluster=cluster-1}} and the new one with selector {{cluster=cluster-2}}). Both should be up and running (check that with {{oc status -v}}). Cluster-2 has remote stores which point to Cluster-1.
> Implement Rolling Upgrades with Kubernetes
> ------------------------------------------
>
> Key: ISPN-6673
> URL: https://issues.jboss.org/browse/ISPN-6673
> Project: Infinispan
> Issue Type: Feature Request
> Components: Cloud Integrations
> Reporter: Sebastian Łaskawiec
> Assignee: Sebastian Łaskawiec
>
> There are 2 mechanisms which seem to do the same thing but are totally different:
> * [Kubernetes Rolling Update|http://kubernetes.io/docs/user-guide/rolling-updates/] - replaces Pods in a controllable fashion
> * [Infinispan Rolling Upgrade|http://infinispan.org/docs/stable/user_guide/user_guide.html#_Ro...] - a procedure for upgrading Infinispan or changing the configuration
> Kubernetes Rolling Updates can be used very easily for changing the configuration; however, if the changes are not runtime-compatible, one might lose data. A potential way to avoid this is to use a Cache Store. All other changes must be propagated using the Infinispan Rolling Upgrade procedure.