[JBoss JIRA] (ISPN-6882) Scala warning during REST server compilation
by Vojtech Juranek (JIRA)
Vojtech Juranek created ISPN-6882:
-------------------------------------
Summary: Scala warning during REST server compilation
Key: ISPN-6882
URL: https://issues.jboss.org/browse/ISPN-6882
Project: Infinispan
Issue Type: Bug
Components: Server
Affects Versions: 9.0.0.Alpha3
Reporter: Vojtech Juranek
Assignee: Vojtech Juranek
Priority: Trivial
During REST server compilation, the Scala compiler produces a couple of warnings:
{noformat}
[WARNING] /home/vjuranek/ws-idea/infinispan/server/rest/src/main/scala/org/infinispan/rest/RestCacheManager.scala:59: warning: abstract type V in type pattern org.infinispan.container.entries.InternalCacheEntry[String,V] is unchecked since it is eliminated by erasure
[INFO] case ice: InternalCacheEntry[String, V] => ice
[INFO] ^
[WARNING] /home/vjuranek/ws-idea/infinispan/server/rest/src/main/scala/org/infinispan/rest/RestCacheManager.scala:61: warning: abstract type V in type pattern org.infinispan.container.entries.MVCCEntry[String,V] is unchecked since it is eliminated by erasure
[INFO] case mvcc: MVCCEntry[String, V] => cache.getCacheEntry(key) // FIXME: horrible re-get to be fixed by ISPN-3460
[INFO] ^
[WARNING] /home/vjuranek/ws-idea/infinispan/server/rest/src/main/scala/org/infinispan/rest/Server.scala:101: warning: abstract type V in type pattern org.infinispan.container.entries.InternalCacheEntry[String,V] is unchecked since it is eliminated by erasure
[INFO] case ice: InternalCacheEntry[String, V] => {
[INFO] ^
[WARNING] /home/vjuranek/ws-idea/infinispan/server/rest/src/main/scala/org/infinispan/rest/Server.scala:312: warning: abstract type V in type pattern org.infinispan.container.entries.InternalCacheEntry[String,V] is unchecked since it is eliminated by erasure
[INFO] case ice: InternalCacheEntry[String, V] => {
[INFO] ^
[WARNING] /home/vjuranek/ws-idea/infinispan/server/rest/src/main/scala/org/infinispan/rest/Server.scala:360: warning: abstract type V in type pattern org.infinispan.container.entries.InternalCacheEntry[String,V] is unchecked since it is eliminated by erasure
[INFO] case ice: InternalCacheEntry[String, V] => {
[INFO] ^
[WARNING] /home/vjuranek/ws-idea/infinispan/server/rest/src/main/scala/org/infinispan/rest/Server.scala:445: warning: abstract type V in type pattern org.infinispan.container.entries.InternalCacheEntry[String,V] is unchecked since it is eliminated by erasure
[INFO] case ice: InternalCacheEntry[String, V] => {
[INFO] ^
[WARNING] 6 warnings found
{noformat}
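For context, the warning comes from matching on a generic type whose type arguments are erased by the JVM, so scalac can only check the raw class. Below is a minimal, self-contained Scala sketch of the flagged pattern and one common warning-free rewrite; the {{Entry}} class is only a stand-in for {{InternalCacheEntry}}, not the real Infinispan type, so the actual fix in {{RestCacheManager}} and {{Server}} may differ:
{code}
// Minimal stand-in for org.infinispan.container.entries.InternalCacheEntry (not the real class).
class Entry[K, V](val key: K, val value: V)

object ErasureDemo {
  // Triggers the unchecked warning: the type arguments are erased at runtime,
  // so the pattern can only check the raw Entry class, not String or V.
  def unwrapWarns[V](candidate: AnyRef): Option[Entry[String, V]] = candidate match {
    case e: Entry[String, V] => Some(e)
    case _                   => None
  }

  // Warning-free equivalent: match on what the JVM can actually verify (the erased class)
  // and make the unchecked step an explicit cast.
  def unwrapClean[V](candidate: AnyRef): Option[Entry[String, V]] = candidate match {
    case e: Entry[_, _] => Some(e.asInstanceOf[Entry[String, V]])
    case _              => None
  }
}
{code}
Annotating the erased type argument with {{@unchecked}} (e.g. {{case ice: InternalCacheEntry[String, V @unchecked]}}) is another common way to silence the warning without changing runtime behaviour.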
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6673) Implement Rolling Upgrades with Kubernetes
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6673?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec edited comment on ISPN-6673 at 7/21/16 8:56 AM:
--------------------------------------------------------------------
WARNING - WORK IN PROGRESS NOTES
The procedure for OpenShift looks like the following:
# Start OpenShift cluster using:
{code}
oc cluster up
{code}
Note that you are logged in as a developer.
# Create new Infinispan cluster using standard configuration:
{code}
oc new-app slaskawi/infinispan-experiments
{code}
# Note that you should always be using labels for your clusters. I'll label mine cluster=cluster-1
{code}
oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_LABELS=cluster=cluster-1
oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_NAMESPACE=myproject
oc label dc/infinispan-experiments cluster=cluster-1
{code}
# Scale up the deployment
{code}
oc scale dc/infinispan-experiments --replicas=3
{code}
# Create a route to the service
{code}
oc expose svc/infinispan-experiments
{code}
# Add some entries using REST
{code}
curl -X POST -H 'Content-type: text/plain' -d 'test' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
{code}
# Check if the entry is there
{code}
curl -X GET -H 'Content-type: text/plain' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
{code}
# Now we can spin up a new cluster. Again, it's very important to use labels, which prevents the two clusters from joining each other. The new Docker image with the Infinispan Hot Rod server needs to contain the following Remote Cache Store definition:
{code}
212,214d214
< <remote-store cache="default" hotrod-wrapping="true" read-only="true">
< <remote-server outbound-socket-binding="remote-store-hotrod-server" />
< </remote-store>
449,451c449
< <!-- If you have properly configured DNS, this could be a service name or even a Headless Service -->
< <!-- However DNS configuration with local cluster might be tricky -->
< <remote-destination host="172.30.14.112" port="11222"/>
{code}
# Spinning up the new cluster involves the following commands:
{code}
oc new-app slaskawi/infinispan-experiments-2
oc env dc/infinispan-experiments-2 OPENSHIFT_KUBE_PING_LABELS=cluster=cluster-2
oc env dc/infinispan-experiments-2 OPENSHIFT_KUBE_PING_NAMESPACE=myproject
oc label dc/infinispan-experiments-2 cluster=cluster-2
oc expose svc infinispan-experiments-2
{code}
# At this stage we have 2 clusters (the old one with selector {{cluster=cluster-1}} and the new one with selector {{cluster=cluster-2}}). Both should be up and running (check that with {{oc status -v}}). Cluster-2 has remote stores which point to Cluster-1.
was (Author: sebastian.laskawiec):
WARNING - WORK IN PROGRESS NOTES
The procedure for OpenShift looks like the following:
# Start OpenShift cluster using:
{code}
oc cluster up
{code}
Note that you are logged in as a developer.
# Create new Infinispan cluster using standard configuration:
{code}
oc new-app slaskawi/infinispan-experiments
{code}
# Note that you should always be using labels for your clusters. I'll label mine cluster=cluster-1
{code}
oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_LABELS=cluster=cluster-1
oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_NAMESPACE=myproject
oc label dc/infinispan-experiments cluster=cluster-1
{code}
# Scale up the deployment
{code}
oc scale dc/infinispan-experiments --replicas=3
{code}
# Create a route to the service
{code}
oc expose svc/infinispan-experiments
{code}
# Add some entries using REST
{code}
curl -X POST -H 'Content-type: text/plain' -d 'test' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
{code}
# Check if the entry is there
{code}
curl -X GET -H 'Content-type: text/plain' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
{code}
# Now we can spin up a new cluster. Again, it's very important to use labels, which prevents the two clusters from joining each other. The new Docker image with the Infinispan Hot Rod server needs to contain the following Remote Cache Store definition:
{code}
160a161,163
> <remote-store xmlns="urn:infinispan:config:store:remote:9.0" cache="default" hotrod-wrapping="true" ping-on-start="true" read-only="true">
> <remote-server host="infinispan-experiments-myproject.192.168.0.17.xip.io" port="11222" />
> </remote-store>
212,214d214
< <remote-store cache="default" hotrod-wrapping="true" read-only="true">
< <remote-server outbound-socket-binding="remote-store-hotrod-server" />
< </remote-store>
449,451c449
< <!-- If you have properly configured DNS, this could be a service name or even a Headless Service -->
< <!-- However DNS configuration with local cluster might be tricky -->
< <remote-destination host="172.30.14.112" port="11222"/>
---
> <remote-destination host="remote-host" port="11222"/>
{code}
# Spinning up the new cluster involves the following commands:
{code}
oc new-app slaskawi/infinispan-experiments-2
oc env dc/infinispan-experiments-2 OPENSHIFT_KUBE_PING_LABELS=cluster=cluster-2
oc env dc/infinispan-experiments-2 OPENSHIFT_KUBE_PING_NAMESPACE=myproject
oc label dc/infinispan-experiments-2 cluster=cluster-2
{code}
> Implement Rolling Upgrades with Kubernetes
> ------------------------------------------
>
> Key: ISPN-6673
> URL: https://issues.jboss.org/browse/ISPN-6673
> Project: Infinispan
> Issue Type: Feature Request
> Components: Cloud Integrations
> Reporter: Sebastian Łaskawiec
> Assignee: Sebastian Łaskawiec
>
> There are 2 mechanisms which seem to do the same thing but are totally different:
> * [Kubernetes Rolling Update|http://kubernetes.io/docs/user-guide/rolling-updates/] - replaces Pods in a controllable fashion
> * [Infinispan Rolling Upgrade|http://infinispan.org/docs/stable/user_guide/user_guide.html#_Ro...] - a procedure for upgrading Infinispan or changing the configuration
> Kubernetes Rolling Updates can be used very easily for changing the configuration; however, if the changes are not runtime-compatible, one might lose data. A potential way to avoid this is to use a Cache Store. All other changes must be propagated using the Infinispan Rolling Upgrade procedure.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6673) Implement Rolling Upgrades with Kubernetes
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6673?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec edited comment on ISPN-6673 at 7/21/16 8:43 AM:
--------------------------------------------------------------------
WARNING - WORK IN PROGRESS NOTES
The procedure for OpenShift looks like the following:
# Start OpenShift cluster using:
{code}
oc cluster up
{code}
Note that you are logged in as a developer.
# Create new Infinispan cluster using standard configuration:
{code}
oc new-app slaskawi/infinispan-experiments
{code}
# Note that you should always be using labels for your clusters. I'll label mine cluster=cluster-1
{code}
oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_LABELS=cluster=cluster-1
oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_NAMESPACE=myproject
oc label dc/infinispan-experiments cluster=cluster-1
{code}
# Scale up the deployment
{code}
oc scale dc/infinispan-experiments --replicas=3
{code}
# Create a route to the service
{code}
oc expose svc/infinispan-experiments
{code}
# Add some entries using REST
{code}
curl -X POST -H 'Content-type: text/plain' -d 'test' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
{code}
# Check if the entry is there
{code}
curl -X GET -H 'Content-type: text/plain' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
{code}
# Now we can spin up a new cluster. Again, it's very important to use labels, which prevents the two clusters from joining each other. The new Docker image with the Infinispan Hot Rod server needs to contain the following Remote Cache Store definition:
{code}
160a161,163
> <remote-store xmlns="urn:infinispan:config:store:remote:9.0" cache="default" hotrod-wrapping="true" ping-on-start="true" read-only="true">
> <remote-server host="infinispan-experiments-myproject.192.168.0.17.xip.io" port="11222" />
> </remote-store>
212,214d214
< <remote-store cache="default" hotrod-wrapping="true" read-only="true">
< <remote-server outbound-socket-binding="remote-store-hotrod-server" />
< </remote-store>
449,451c449
< <!-- If you have properly configured DNS, this could be a service name or even a Headless Service -->
< <!-- However DNS configuration with local cluster might be tricky -->
< <remote-destination host="172.30.14.112" port="11222"/>
---
> <remote-destination host="remote-host" port="11222"/>
{code}
# Spinning up the new cluster involves the following commands:
{code}
oc new-app slaskawi/infinispan-experiments-2
oc env dc/infinispan-experiments-2 OPENSHIFT_KUBE_PING_LABELS=cluster=cluster-2
oc env dc/infinispan-experiments-2 OPENSHIFT_KUBE_PING_NAMESPACE=myproject
oc label dc/infinispan-experiments-2 cluster=cluster-2
{code}
was (Author: sebastian.laskawiec):
WARNING - WORK IN PROGRESS NOTES
The procedure for OpenShift looks like the following:
# Start OpenShift cluster using:
{code}
oc cluster up
{code}
Note that you are logged in as a developer.
# Create new Infinispan cluster using standard configuration:
{code}
oc new-app slaskawi/infinispan-experiments
{code}
# Note that you should always be using labels for your clusters. I'll label mine cluster=cluster-1
{code}
oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_LABELS=cluster=cluster-1
oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_NAMESPACE=myproject
oc label dc/infinispan-experiments cluster=cluster-1
{code}
# Scale up the deployment
{code}
oc scale dc/infinispan-experiments --replicas=3
{code}
# Create a route to the service
{code}
oc expose svc/infinispan-experiments
{code}
# Add some entries using REST
{code}
curl -X POST -H 'Content-type: text/plain' -d 'test' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
{code}
# Check if the entry is there
{code}
curl -X GET -H 'Content-type: text/plain' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
{code}
# Now we can spin up a new cluster. Again, it's very important to use labels, which prevents the two clusters from joining each other. The new Docker image with the Infinispan Hot Rod server needs to contain the following Remote Cache Store definition:
{code}
diff --git a/server/infinispan-server-9.0.0-SNAPSHOT/standalone/configuration/cloud.xml b/server/infinispan-server-9.0.0-SNAPSHOT/standalone/configuration/cloud.xml
index 6cb865d..8623850 100644
--- a/server/infinispan-server-9.0.0-SNAPSHOT/standalone/configuration/cloud.xml
+++ b/server/infinispan-server-9.0.0-SNAPSHOT/standalone/configuration/cloud.xml
@@ -158,6 +158,12 @@
</subsystem>
<subsystem xmlns="urn:infinispan:server:core:9.0" default-cache-container="clustered">
<cache-container name="clustered" default-cache="default" statistics="true">
+ <persistence passivation="false">
+ <remote-store xmlns="urn:infinispan:config:store:remote:9.0"
+ cache="default" hotrod-wrapping="true" ping-on-start="true" read-only="true">
+ <remote-server host="infinispan-experiments-myproject.192.168.0.17.xip.io" port="11222" />
+ </remote-store>
+ </persistence>
<transport lock-timeout="60000"/>
<global-state/>
<distributed-cache-configuration name="transactional" mode="SYNC" start="EAGER">
{code}
# Spinning up the new cluster involves the following commands:
{code}
oc new-app slaskawi/infinispan-experiments-2
oc env dc/infinispan-experiments-2 OPENSHIFT_KUBE_PING_LABELS=cluster=cluster-2
oc env dc/infinispan-experiments-2 OPENSHIFT_KUBE_PING_NAMESPACE=myproject
oc label dc/infinispan-experiments-2 cluster=cluster-2
{code}
> Implement Rolling Upgrades with Kubernetes
> ------------------------------------------
>
> Key: ISPN-6673
> URL: https://issues.jboss.org/browse/ISPN-6673
> Project: Infinispan
> Issue Type: Feature Request
> Components: Cloud Integrations
> Reporter: Sebastian Łaskawiec
> Assignee: Sebastian Łaskawiec
>
> There are 2 mechanisms which seem to do the same thing but are totally different:
> * [Kubernetes Rolling Update|http://kubernetes.io/docs/user-guide/rolling-updates/] - replaces Pods in a controllable fashion
> * [Infinispan Rolling Upgrade|http://infinispan.org/docs/stable/user_guide/user_guide.html#_Ro...] - a procedure for upgrading Infinispan or changing the configuration
> Kubernetes Rolling Updates can be used very easily for changing the configuration; however, if the changes are not runtime-compatible, one might lose data. A potential way to avoid this is to use a Cache Store. All other changes must be propagated using the Infinispan Rolling Upgrade procedure.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6673) Implement Rolling Upgrades with Kubernetes
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6673?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec edited comment on ISPN-6673 at 7/21/16 8:31 AM:
--------------------------------------------------------------------
WARNING - WORK IN PROGRESS NOTES
The procedure for OpenShift looks like the following:
# Start OpenShift cluster using:
{code}
oc cluster up
{code}
Note that you are logged in as a developer.
# Create new Infinispan cluster using standard configuration:
{code}
oc new-app slaskawi/infinispan-experiments
{code}
# Note that you should always be using labels for your clusters. I'll label mine cluster=cluster-1
{code}
oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_LABELS=cluster=cluster-1
oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_NAMESPACE=myproject
oc label dc/infinispan-experiments cluster=cluster-1
{code}
# Scale up the deployment
{code}
oc scale dc/infinispan-experiments --replicas=3
{code}
# Create a route to the service
{code}
oc expose svc/infinispan-experiments
{code}
# Add some entries using REST
{code}
curl -X POST -H 'Content-type: text/plain' -d 'test' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
{code}
# Check if the entry is there
{code}
curl -X GET -H 'Content-type: text/plain' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
{code}
# Now we can spin up a new cluster. Again, it's very important to use labels, which prevents the two clusters from joining each other. The new Docker image with the Infinispan Hot Rod server needs to contain the following Remote Cache Store definition:
{code}
diff --git a/server/infinispan-server-9.0.0-SNAPSHOT/standalone/configuration/cloud.xml b/server/infinispan-server-9.0.0-SNAPSHOT/standalone/configuration/cloud.xml
index 6cb865d..8623850 100644
--- a/server/infinispan-server-9.0.0-SNAPSHOT/standalone/configuration/cloud.xml
+++ b/server/infinispan-server-9.0.0-SNAPSHOT/standalone/configuration/cloud.xml
@@ -158,6 +158,12 @@
</subsystem>
<subsystem xmlns="urn:infinispan:server:core:9.0" default-cache-container="clustered">
<cache-container name="clustered" default-cache="default" statistics="true">
+ <persistence passivation="false">
+ <remote-store xmlns="urn:infinispan:config:store:remote:9.0"
+ cache="default" hotrod-wrapping="true" ping-on-start="true" read-only="true">
+ <remote-server host="infinispan-experiments-myproject.192.168.0.17.xip.io" port="11222" />
+ </remote-store>
+ </persistence>
<transport lock-timeout="60000"/>
<global-state/>
<distributed-cache-configuration name="transactional" mode="SYNC" start="EAGER">
{code}
# Spinning up the new cluster involves the following commands:
{code}
oc new-app slaskawi/infinispan-experiments-2
oc env dc/infinispan-experiments-2 OPENSHIFT_KUBE_PING_LABELS=cluster=cluster-2
oc env dc/infinispan-experiments-2 OPENSHIFT_KUBE_PING_NAMESPACE=myproject
oc label dc/infinispan-experiments-2 cluster=cluster-2
{code}
was (Author: sebastian.laskawiec):
WARNING - WORK IN PROGRESS NOTES
The procedure for OpenShift looks like the following:
# Start OpenShift cluster using:
{code}
oc cluster up
{code}
Note that you are logged in as a developer.
# Create new Infinispan cluster using standard configuration:
{code}
oc new-app slaskawi/infinispan-experiments
{code}
# Note that you should always be using labels for your clusters. I'll label mine cluster=cluster-1
{code}
oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_LABELS=cluster=cluster-1
oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_NAMESPACE=myproject
oc label dc/infinispan-experiments cluster=cluster-1
{code}
# Scale up the deployment
{code}
oc scale dc/infinispan-experiments --replicas=3
{code}
# Create a route to the service
{code}
oc expose svc/infinispan-experiments
{code}
# Add some entries using REST
{code}
curl -X POST -H 'Content-type: text/plain' -d 'test' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
{code}
# Check if the entry is there
{code}
curl -X GET -H 'Content-type: text/plain' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
{code}
# Now we can spin up a new cluster. Again, it's very important to use labels, which prevents the two clusters from joining each other. The new Docker image with the Infinispan Hot Rod server needs to contain the following Remote Cache Store definition:
{code}
diff --git a/server/infinispan-server-9.0.0-SNAPSHOT/standalone/configuration/cloud.xml b/server/infinispan-server-9.0.0-SNAPSHOT/standalone/configuration/cloud.xml
index 6cb865d..8623850 100644
--- a/server/infinispan-server-9.0.0-SNAPSHOT/standalone/configuration/cloud.xml
+++ b/server/infinispan-server-9.0.0-SNAPSHOT/standalone/configuration/cloud.xml
@@ -158,6 +158,12 @@
</subsystem>
<subsystem xmlns="urn:infinispan:server:core:9.0" default-cache-container="clustered">
<cache-container name="clustered" default-cache="default" statistics="true">
+ <persistence passivation="false">
+ <remote-store xmlns="urn:infinispan:config:store:remote:9.0"
+ cache="default" hotrod-wrapping="true" ping-on-start="true" read-only="true">
+ <remote-server host="infinispan-experiments-myproject.192.168.0.17.xip.io" port="11222" />
+ </remote-store>
+ </persistence>
<transport lock-timeout="60000"/>
<global-state/>
<distributed-cache-configuration name="transactional" mode="SYNC" start="EAGER">
{code}
> Implement Rolling Upgrades with Kubernetes
> ------------------------------------------
>
> Key: ISPN-6673
> URL: https://issues.jboss.org/browse/ISPN-6673
> Project: Infinispan
> Issue Type: Feature Request
> Components: Cloud Integrations
> Reporter: Sebastian Łaskawiec
> Assignee: Sebastian Łaskawiec
>
> There are 2 mechanisms which seem to do the same thing but are totally different:
> * [Kubernetes Rolling Update|http://kubernetes.io/docs/user-guide/rolling-updates/] - replaces Pods in a controllable fashion
> * [Infinispan Rolling Upgrade|http://infinispan.org/docs/stable/user_guide/user_guide.html#_Ro...] - a procedure for upgrading Infinispan or changing the configuration
> Kubernetes Rolling Updates can be used very easily for changing the configuration; however, if the changes are not runtime-compatible, one might lose data. A potential way to avoid this is to use a Cache Store. All other changes must be propagated using the Infinispan Rolling Upgrade procedure.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6881) Error when accessing REST root path
by Vojtech Juranek (JIRA)
[ https://issues.jboss.org/browse/ISPN-6881?page=com.atlassian.jira.plugin.... ]
Vojtech Juranek updated ISPN-6881:
----------------------------------
Status: Open (was: New)
> Error when accessing REST root path
> -----------------------------------
>
> Key: ISPN-6881
> URL: https://issues.jboss.org/browse/ISPN-6881
> Project: Infinispan
> Issue Type: Bug
> Components: Server
> Reporter: Vojtech Juranek
> Assignee: Vojtech Juranek
> Priority: Minor
>
> When accessing the REST root path (http://localhost:8080/rest/), it results in a server error (followed by a NumberFormatException):
> {noformat}
> ESC[0mESC[31m12:27:19,104 ERROR [org.jboss.resteasy.resteasy_jaxrs.i18n] (nioEventLoopGroup-7-1) RESTEASY002010: Failed to execute: javax.ws.rs.NotFoundException: RESTEASY003210: Could not find resource for full path: http://localhost:8080/rest
> at org.jboss.resteasy.core.registry.SegmentNode.match(SegmentNode.java:114)
> at org.jboss.resteasy.core.registry.RootNode.match(RootNode.java:43)
> at org.jboss.resteasy.core.registry.RootClassNode.match(RootClassNode.java:48)
> at org.jboss.resteasy.core.ResourceMethodRegistry.getResourceInvoker(ResourceMethodRegistry.java:445)
> at org.jboss.resteasy.core.SynchronousDispatcher.getInvoker(SynchronousDispatcher.java:257)
> at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:194)
> at org.jboss.resteasy.plugins.server.netty.RequestDispatcher.service(RequestDispatcher.java:83)
> at org.jboss.resteasy.plugins.server.netty.RequestHandler.channelRead0(RequestHandler.java:53)
> at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
> at io.netty.channel.DefaultChannelHandlerInvoker$7.run(DefaultChannelHandlerInvoker.java:159)
> at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:339)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:373)
> at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
> at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
> at java.lang.Thread.run(Thread.java:745)
> ESC[0mESC[31m12:27:19,107 SEVERE [org.jboss.resteasy.plugins.server.netty.RequestHandler] (nioEventLoopGroup-7-1) Unexpected: org.jboss.resteasy.spi.UnhandledException: java.lang.NumberFormatException: null
> at org.jboss.resteasy.core.SynchronousDispatcher.writeException(SynchronousDispatcher.java:180)
> at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:199)
> at org.jboss.resteasy.plugins.server.netty.RequestDispatcher.service(RequestDispatcher.java:83)
> at org.jboss.resteasy.plugins.server.netty.RequestHandler.channelRead0(RequestHandler.java:53)
> at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
> at io.netty.channel.DefaultChannelHandlerInvoker$7.run(DefaultChannelHandlerInvoker.java:159)
> at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:339)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:373)
> at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
> at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NumberFormatException: null
> at java.lang.Long.parseLong(Long.java:552)
> at java.lang.Long.parseLong(Long.java:631)
> at org.infinispan.rest.logging.RestAccessLoggingHandler.filter(RestAccessLoggingHandler.java:36)
> at org.jboss.resteasy.core.ServerResponseWriter.executeFilters(ServerResponseWriter.java:121)
> at org.jboss.resteasy.core.ServerResponseWriter.writeNomapResponse(ServerResponseWriter.java:48)
> at org.jboss.resteasy.core.SynchronousDispatcher.writeException(SynchronousDispatcher.java:176)
> ... 11 more
> {noformat}
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6881) Error when accessing REST root path
by Vojtech Juranek (JIRA)
[ https://issues.jboss.org/browse/ISPN-6881?page=com.atlassian.jira.plugin.... ]
Vojtech Juranek updated ISPN-6881:
----------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/4464
> Error when accessing REST root path
> -----------------------------------
>
> Key: ISPN-6881
> URL: https://issues.jboss.org/browse/ISPN-6881
> Project: Infinispan
> Issue Type: Bug
> Components: Server
> Reporter: Vojtech Juranek
> Assignee: Vojtech Juranek
> Priority: Minor
>
> When accessing the REST root path (http://localhost:8080/rest/), it results in a server error (followed by a NumberFormatException):
> {noformat}
> ESC[0mESC[31m12:27:19,104 ERROR [org.jboss.resteasy.resteasy_jaxrs.i18n] (nioEventLoopGroup-7-1) RESTEASY002010: Failed to execute: javax.ws.rs.NotFoundException: RESTEASY003210: Could not find resource for full path: http://localhost:8080/rest
> at org.jboss.resteasy.core.registry.SegmentNode.match(SegmentNode.java:114)
> at org.jboss.resteasy.core.registry.RootNode.match(RootNode.java:43)
> at org.jboss.resteasy.core.registry.RootClassNode.match(RootClassNode.java:48)
> at org.jboss.resteasy.core.ResourceMethodRegistry.getResourceInvoker(ResourceMethodRegistry.java:445)
> at org.jboss.resteasy.core.SynchronousDispatcher.getInvoker(SynchronousDispatcher.java:257)
> at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:194)
> at org.jboss.resteasy.plugins.server.netty.RequestDispatcher.service(RequestDispatcher.java:83)
> at org.jboss.resteasy.plugins.server.netty.RequestHandler.channelRead0(RequestHandler.java:53)
> at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
> at io.netty.channel.DefaultChannelHandlerInvoker$7.run(DefaultChannelHandlerInvoker.java:159)
> at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:339)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:373)
> at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
> at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
> at java.lang.Thread.run(Thread.java:745)
> ESC[0mESC[31m12:27:19,107 SEVERE [org.jboss.resteasy.plugins.server.netty.RequestHandler] (nioEventLoopGroup-7-1) Unexpected: org.jboss.resteasy.spi.UnhandledException: java.lang.NumberFormatException: null
> at org.jboss.resteasy.core.SynchronousDispatcher.writeException(SynchronousDispatcher.java:180)
> at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:199)
> at org.jboss.resteasy.plugins.server.netty.RequestDispatcher.service(RequestDispatcher.java:83)
> at org.jboss.resteasy.plugins.server.netty.RequestHandler.channelRead0(RequestHandler.java:53)
> at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
> at io.netty.channel.DefaultChannelHandlerInvoker$7.run(DefaultChannelHandlerInvoker.java:159)
> at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:339)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:373)
> at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
> at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NumberFormatException: null
> at java.lang.Long.parseLong(Long.java:552)
> at java.lang.Long.parseLong(Long.java:631)
> at org.infinispan.rest.logging.RestAccessLoggingHandler.filter(RestAccessLoggingHandler.java:36)
> at org.jboss.resteasy.core.ServerResponseWriter.executeFilters(ServerResponseWriter.java:121)
> at org.jboss.resteasy.core.ServerResponseWriter.writeNomapResponse(ServerResponseWriter.java:48)
> at org.jboss.resteasy.core.SynchronousDispatcher.writeException(SynchronousDispatcher.java:176)
> ... 11 more
> {noformat}
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6881) Error when accessing REST root path
by Vojtech Juranek (JIRA)
Vojtech Juranek created ISPN-6881:
-------------------------------------
Summary: Error when accessing REST root path
Key: ISPN-6881
URL: https://issues.jboss.org/browse/ISPN-6881
Project: Infinispan
Issue Type: Bug
Components: Server
Reporter: Vojtech Juranek
Assignee: Vojtech Juranek
Priority: Minor
When accessing the REST root path (http://localhost:8080/rest/), it results in a server error (followed by a NumberFormatException):
{noformat}
ESC[0mESC[31m12:27:19,104 ERROR [org.jboss.resteasy.resteasy_jaxrs.i18n] (nioEventLoopGroup-7-1) RESTEASY002010: Failed to execute: javax.ws.rs.NotFoundException: RESTEASY003210: Could not find resource for full path: http://localhost:8080/rest
at org.jboss.resteasy.core.registry.SegmentNode.match(SegmentNode.java:114)
at org.jboss.resteasy.core.registry.RootNode.match(RootNode.java:43)
at org.jboss.resteasy.core.registry.RootClassNode.match(RootClassNode.java:48)
at org.jboss.resteasy.core.ResourceMethodRegistry.getResourceInvoker(ResourceMethodRegistry.java:445)
at org.jboss.resteasy.core.SynchronousDispatcher.getInvoker(SynchronousDispatcher.java:257)
at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:194)
at org.jboss.resteasy.plugins.server.netty.RequestDispatcher.service(RequestDispatcher.java:83)
at org.jboss.resteasy.plugins.server.netty.RequestHandler.channelRead0(RequestHandler.java:53)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
at io.netty.channel.DefaultChannelHandlerInvoker$7.run(DefaultChannelHandlerInvoker.java:159)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:339)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:373)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
at java.lang.Thread.run(Thread.java:745)
ESC[0mESC[31m12:27:19,107 SEVERE [org.jboss.resteasy.plugins.server.netty.RequestHandler] (nioEventLoopGroup-7-1) Unexpected: org.jboss.resteasy.spi.UnhandledException: java.lang.NumberFormatException: null
at org.jboss.resteasy.core.SynchronousDispatcher.writeException(SynchronousDispatcher.java:180)
at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:199)
at org.jboss.resteasy.plugins.server.netty.RequestDispatcher.service(RequestDispatcher.java:83)
at org.jboss.resteasy.plugins.server.netty.RequestHandler.channelRead0(RequestHandler.java:53)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
at io.netty.channel.DefaultChannelHandlerInvoker$7.run(DefaultChannelHandlerInvoker.java:159)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:339)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:373)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NumberFormatException: null
at java.lang.Long.parseLong(Long.java:552)
at java.lang.Long.parseLong(Long.java:631)
at org.infinispan.rest.logging.RestAccessLoggingHandler.filter(RestAccessLoggingHandler.java:36)
at org.jboss.resteasy.core.ServerResponseWriter.executeFilters(ServerResponseWriter.java:121)
at org.jboss.resteasy.core.ServerResponseWriter.writeNomapResponse(ServerResponseWriter.java:48)
at org.jboss.resteasy.core.SynchronousDispatcher.writeException(SynchronousDispatcher.java:176)
... 11 more
{noformat}
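Judging from the trace alone, the secondary {{NumberFormatException}} looks like the access-logging response filter reading back a request start-time property that was never stored for the unmatched root-path request, so {{Long.parseLong}} receives {{null}}. Below is a minimal Scala sketch of the kind of guard that would avoid it; the filter class and the property name are hypothetical illustrations, not the actual {{RestAccessLoggingHandler}} code:
{code}
import javax.ws.rs.container.{ContainerRequestContext, ContainerResponseContext, ContainerResponseFilter}

// Hypothetical illustration only; "rest.request.start" is an assumed property name.
class SafeAccessLoggingFilter extends ContainerResponseFilter {
  override def filter(request: ContainerRequestContext, response: ContainerResponseContext): Unit = {
    // Only log timing when a start timestamp was actually recorded for this request,
    // so an unmatched path (404) does not fail with a NumberFormatException.
    Option(request.getProperty("rest.request.start")).map(_.toString).foreach { start =>
      val durationMillis = System.currentTimeMillis() - start.toLong
      println(s"${request.getMethod} ${request.getUriInfo.getRequestUri} took $durationMillis ms")
    }
  }
}
{code}
The 404 for the root path itself would still need to be handled separately (e.g. by mapping it to a proper response instead of letting the dispatcher fail).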
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6673) Implement Rolling Upgrades with Kubernetes
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6673?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec edited comment on ISPN-6673 at 7/21/16 6:16 AM:
--------------------------------------------------------------------
WARNING - WORK IN PROGRESS NOTES
The procedure for OpenShift looks like the following:
# Start OpenShift cluster using:
{code}
oc cluster up
{code}
Note that you are logged in as a developer.
# Create new Infinispan cluster using standard configuration:
{code}
oc new-app slaskawi/infinispan-experiments
{code}
# Note that you should always be using labels for your clusters. I'll label mine cluster=cluster-1
{code}
oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_LABELS=cluster=cluster-1
oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_NAMESPACE=myproject
oc label dc/infinispan-experiments cluster=cluster-1
{code}
# Scale up the deployment
{code}
oc scale dc/infinispan-experiments --replicas=3
{code}
# Create a route to the service
{code}
oc expose svc/infinispan-experiments
{code}
# Add some entries using REST
{code}
curl -X POST -H 'Content-type: text/plain' -d 'test' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
{code}
# Check if the entry is there
{code}
curl -X GET -H 'Content-type: text/plain' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
{code}
# Now we can spin up a new cluster. Again, it's very important to use labels, which prevents the two clusters from joining each other. The new Docker image with the Infinispan Hot Rod server needs to contain the following Remote Cache Store definition:
{code}
diff --git a/server/infinispan-server-9.0.0-SNAPSHOT/standalone/configuration/cloud.xml b/server/infinispan-server-9.0.0-SNAPSHOT/standalone/configuration/cloud.xml
index 6cb865d..8623850 100644
--- a/server/infinispan-server-9.0.0-SNAPSHOT/standalone/configuration/cloud.xml
+++ b/server/infinispan-server-9.0.0-SNAPSHOT/standalone/configuration/cloud.xml
@@ -158,6 +158,12 @@
</subsystem>
<subsystem xmlns="urn:infinispan:server:core:9.0" default-cache-container="clustered">
<cache-container name="clustered" default-cache="default" statistics="true">
+ <persistence passivation="false">
+ <remote-store xmlns="urn:infinispan:config:store:remote:9.0"
+ cache="default" hotrod-wrapping="true" ping-on-start="true" read-only="true">
+ <remote-server host="infinispan-experiments-myproject.192.168.0.17.xip.io" port="11222" />
+ </remote-store>
+ </persistence>
<transport lock-timeout="60000"/>
<global-state/>
<distributed-cache-configuration name="transactional" mode="SYNC" start="EAGER">
{code}
was (Author: sebastian.laskawiec):
THESE ARE WORK IN PROGRESS NOTES
The procedure for OpenShift looks like the following:
# Start OpenShift cluster using:
{code}
oc cluster up
{code}
Note that you are logged in as a developer.
# Create new Infinispan cluster using standard configuration:
{code}
oc new-app slaskawi/infinispan-experiments
{code}
# Note that you should always be using labels for your clusters. I'll label mine cluster=cluster-1
{code}
oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_LABELS=cluster=cluster-1
oc label dc/infinispan-experiments cluster=cluster-1
{code}
# Scale up the deployment
{code}
oc scale dc/infinispan-experiments --replicas=3
{code}
# Create a route to the service
{code}
oc expose svc/infinispan-experiments
{code}
# Add some entries using REST
{code}
curl -X POST -H 'Content-type: text/xml' -d 'test' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
{code}
# Now we can spin up a new cluster. Again, it's very important to use labels, which prevents the two clusters from joining each other. In my case (just a POC) I will simply export the build configuration and import it back again. However, in a production environment you will probably have some changes in your Docker image.
{code}
oc export -o yaml dc/infinispan-experiments > dc_2.yaml
.. edit the name and labels ..
oc create -f dc_2.yaml
{code}
# Now let's verify if everything looks good:
{code}
oc status -v
In project My Project (myproject) on server https://192.168.0.17:8443
http://infinispan-experiments-myproject.192.168.0.17.xip.io to pod port 8080-tcp (svc/infinispan-experiments)
dc/infinispan-experiments deploys istag/infinispan-experiments:latest
deployment #3 deployed 7 minutes ago - 1 pod
deployment #2 deployed 8 minutes ago
deployment #1 deployed 38 minutes ago
dc/infinispan-experiments-2 deploys istag/infinispan-experiments:latest
deployment #1 running for 12 seconds - 1 pod
{code}
# Now we need to expose a service
{code}
oc expose dc/infinispan-experiments-2
{code}
# At this point we have 2 clusters (the old one with selector {{cluster=cluster-1}} and the new one with selector {{cluster=cluster-2}}). Depending on the client, we may or may not want to expose a route to {{svc/infinispan-experiments-2}}; a plain service might be enough.
# Since the [Infinispan Upgrade Procedure|http://infinispan.org/docs/stable/user_guide/user_guide.html#_R...] requires port {{4444}}, we need to edit the service in the source cluster and add it:
{code}
$ oc edit svc/infinispan-experiments
.. add port 4444 ..
{code}
# Get the IP address of the source cluster service ({{svc/infinispan-experiments}})
{code}
oc describe svc/infinispan-experiments
.. write down: IP: 172.30.66.108 ..
{code}
The connection string for the cluster will be {{jmx://172.30.66.108:4444/clustered/default}}
#
> Implement Rolling Upgrades with Kubernetes
> ------------------------------------------
>
> Key: ISPN-6673
> URL: https://issues.jboss.org/browse/ISPN-6673
> Project: Infinispan
> Issue Type: Feature Request
> Components: Cloud Integrations
> Reporter: Sebastian Łaskawiec
> Assignee: Sebastian Łaskawiec
>
> There are 2 mechanisms which seem to do the same thing but are totally different:
> * [Kubernetes Rolling Update|http://kubernetes.io/docs/user-guide/rolling-updates/] - replaces Pods in a controllable fashion
> * [Infinispan Rolling Upgrade|http://infinispan.org/docs/stable/user_guide/user_guide.html#_Ro...] - a procedure for upgrading Infinispan or changing the configuration
> Kubernetes Rolling Updates can be used very easily for changing the configuration; however, if the changes are not runtime-compatible, one might lose data. A potential way to avoid this is to use a Cache Store. All other changes must be propagated using the Infinispan Rolling Upgrade procedure.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6880) Netty ByteBuf leak in REST endpoint
by Vojtech Juranek (JIRA)
Vojtech Juranek created ISPN-6880:
-------------------------------------
Summary: Netty ByteBuf leak in REST endpoint
Key: ISPN-6880
URL: https://issues.jboss.org/browse/ISPN-6880
Project: Infinispan
Issue Type: Bug
Components: Server
Affects Versions: 9.0.0.Alpha3
Reporter: Vojtech Juranek
Priority: Critical
The REST endpoint leaks Netty {{ByteBuf}} instances:
{noformat}
[0m[31m11:25:34,059 SEVERE [io.netty.util.ResourceLeakDetector] (nioEventLoopGroup-2-14) LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.
WARNING: 10 leak records were discarded because the leak record count is limited to 4. Use system property io.netty.leakDetection.maxRecords to increase the limit.
Recent access records: 5
#5:
io.netty.buffer.AdvancedLeakAwareByteBuf.getBytes(AdvancedLeakAwareByteBuf.java:225)
io.netty.buffer.CompositeByteBuf.getBytes(CompositeByteBuf.java:805)
io.netty.buffer.CompositeByteBuf.getBytes(CompositeByteBuf.java:44)
io.netty.buffer.AbstractByteBuf.readBytes(AbstractByteBuf.java:805)
io.netty.buffer.CompositeByteBuf.readBytes(CompositeByteBuf.java:1742)
io.netty.buffer.WrappedCompositeByteBuf.readBytes(WrappedCompositeByteBuf.java:996)
io.netty.buffer.AdvancedLeakAwareCompositeByteBuf.readBytes(AdvancedLeakAwareCompositeByteBuf.java:448)
io.netty.buffer.AdvancedLeakAwareCompositeByteBuf.readBytes(AdvancedLeakAwareCompositeByteBuf.java:36)
io.netty.buffer.ByteBufInputStream.read(ByteBufInputStream.java:120)
java.io.InputStream.read(InputStream.java:101)
org.jboss.resteasy.util.ReadFromStream.readFromStream(ReadFromStream.java:30)
org.jboss.resteasy.plugins.providers.ByteArrayProvider.readFrom(ByteArrayProvider.java:35)
org.jboss.resteasy.plugins.providers.ByteArrayProvider.readFrom(ByteArrayProvider.java:23)
org.jboss.resteasy.core.interception.AbstractReaderInterceptorContext.readFrom(AbstractReaderInterceptorContext.java:61)
org.jboss.resteasy.core.interception.ServerReaderInterceptorContext.readFrom(ServerReaderInterceptorContext.java:60)
org.jboss.resteasy.core.interception.AbstractReaderInterceptorContext.proceed(AbstractReaderInterceptorContext.java:53)
org.jboss.resteasy.plugins.interceptors.encoding.GZIPDecodingInterceptor.aroundReadFrom(GZIPDecodingInterceptor.java:59)
org.jboss.resteasy.core.interception.AbstractReaderInterceptorContext.proceed(AbstractReaderInterceptorContext.java:55)
org.jboss.resteasy.core.MessageBodyParameterInjector.inject(MessageBodyParameterInjector.java:151)
org.jboss.resteasy.core.MethodInjectorImpl.injectArguments(MethodInjectorImpl.java:91)
org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:114)
org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:295)
org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:249)
org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:236)
org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:395)
org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:202)
org.jboss.resteasy.plugins.server.netty.RequestDispatcher.service(RequestDispatcher.java:83)
org.jboss.resteasy.plugins.server.netty.RequestHandler.channelRead0(RequestHandler.java:53)
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.DefaultChannelHandlerInvoker$7.run(DefaultChannelHandlerInvoker.java:159)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:339)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:373)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
java.lang.Thread.run(Thread.java:745)
#4:
io.netty.buffer.AdvancedLeakAwareByteBuf.release(AdvancedLeakAwareByteBuf.java:901)
io.netty.handler.codec.http.DefaultHttpContent.release(DefaultHttpContent.java:84)
io.netty.util.ReferenceCountUtil.release(ReferenceCountUtil.java:84)
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:91)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:154)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:354)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:145)
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:277)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:264)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:154)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:354)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:145)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:1078)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:117)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:527)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:484)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:398)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:370)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
java.lang.Thread.run(Thread.java:745)
#3:
io.netty.buffer.AdvancedLeakAwareByteBuf.slice(AdvancedLeakAwareByteBuf.java:75)
io.netty.buffer.CompositeByteBuf.addComponent0(CompositeByteBuf.java:203)
io.netty.buffer.CompositeByteBuf.addComponent(CompositeByteBuf.java:133)
io.netty.buffer.WrappedCompositeByteBuf.addComponent(WrappedCompositeByteBuf.java:467)
io.netty.buffer.AdvancedLeakAwareCompositeByteBuf.addComponent(AdvancedLeakAwareCompositeByteBuf.java:832)
io.netty.handler.codec.MessageAggregator.appendPartialContent(MessageAggregator.java:323)
io.netty.handler.codec.MessageAggregator.decode(MessageAggregator.java:287)
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:154)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:354)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:145)
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:277)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:264)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:154)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:354)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:145)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:1078)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:117)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:527)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:484)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:398)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:370)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
java.lang.Thread.run(Thread.java:745)
#2:
io.netty.buffer.AdvancedLeakAwareByteBuf.order(AdvancedLeakAwareByteBuf.java:65)
io.netty.buffer.CompositeByteBuf.addComponent0(CompositeByteBuf.java:203)
io.netty.buffer.CompositeByteBuf.addComponent(CompositeByteBuf.java:133)
io.netty.buffer.WrappedCompositeByteBuf.addComponent(WrappedCompositeByteBuf.java:467)
io.netty.buffer.AdvancedLeakAwareCompositeByteBuf.addComponent(AdvancedLeakAwareCompositeByteBuf.java:832)
io.netty.handler.codec.MessageAggregator.appendPartialContent(MessageAggregator.java:323)
io.netty.handler.codec.MessageAggregator.decode(MessageAggregator.java:287)
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:154)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:354)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:145)
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:277)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:264)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:154)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:354)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:145)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:1078)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:117)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:527)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:484)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:398)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:370)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
java.lang.Thread.run(Thread.java:745)
#1:
io.netty.buffer.AdvancedLeakAwareByteBuf.retain(AdvancedLeakAwareByteBuf.java:873)
io.netty.handler.codec.MessageAggregator.appendPartialContent(MessageAggregator.java:322)
io.netty.handler.codec.MessageAggregator.decode(MessageAggregator.java:287)
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:154)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:354)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:145)
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:277)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:264)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:154)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:354)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:145)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:1078)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:117)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:527)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:484)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:398)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:370)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
java.lang.Thread.run(Thread.java:745)
Created at:
io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:271)
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179)
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:170)
io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:131)
io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:73)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:105)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:527)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:484)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:398)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:370)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
java.lang.Thread.run(Thread.java:745)
[0m[31m11:25:34,059 SEVERE [io.netty.util.ResourceLeakDetector] (nioEventLoopGroup-2-14) LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records: 5
#5:
io.netty.buffer.AdvancedLeakAwareCompositeByteBuf.readBytes(AdvancedLeakAwareCompositeByteBuf.java:447)
io.netty.buffer.AdvancedLeakAwareCompositeByteBuf.readBytes(AdvancedLeakAwareCompositeByteBuf.java:36)
io.netty.buffer.ByteBufInputStream.read(ByteBufInputStream.java:120)
java.io.InputStream.read(InputStream.java:101)
org.jboss.resteasy.util.ReadFromStream.readFromStream(ReadFromStream.java:30)
org.jboss.resteasy.plugins.providers.ByteArrayProvider.readFrom(ByteArrayProvider.java:35)
org.jboss.resteasy.plugins.providers.ByteArrayProvider.readFrom(ByteArrayProvider.java:23)
org.jboss.resteasy.core.interception.AbstractReaderInterceptorContext.readFrom(AbstractReaderInterceptorContext.java:61)
org.jboss.resteasy.core.interception.ServerReaderInterceptorContext.readFrom(ServerReaderInterceptorContext.java:60)
org.jboss.resteasy.core.interception.AbstractReaderInterceptorContext.proceed(AbstractReaderInterceptorContext.java:53)
org.jboss.resteasy.plugins.interceptors.encoding.GZIPDecodingInterceptor.aroundReadFrom(GZIPDecodingInterceptor.java:59)
org.jboss.resteasy.core.interception.AbstractReaderInterceptorContext.proceed(AbstractReaderInterceptorContext.java:55)
org.jboss.resteasy.core.MessageBodyParameterInjector.inject(MessageBodyParameterInjector.java:151)
org.jboss.resteasy.core.MethodInjectorImpl.injectArguments(MethodInjectorImpl.java:91)
org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:114)
org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:295)
org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:249)
org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:236)
org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:395)
org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:202)
org.jboss.resteasy.plugins.server.netty.RequestDispatcher.service(RequestDispatcher.java:83)
org.jboss.resteasy.plugins.server.netty.RequestHandler.channelRead0(RequestHandler.java:53)
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.DefaultChannelHandlerInvoker$7.run(DefaultChannelHandlerInvoker.java:159)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:339)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:373)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
java.lang.Thread.run(Thread.java:745)
#4:
io.netty.buffer.AdvancedLeakAwareCompositeByteBuf.release(AdvancedLeakAwareCompositeByteBuf.java:961)
io.netty.handler.codec.http.HttpObjectAggregator$AggregatedFullHttpMessage.release(HttpObjectAggregator.java:317)
io.netty.util.ReferenceCountUtil.release(ReferenceCountUtil.java:84)
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:91)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:154)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:354)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:145)
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:154)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:354)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:145)
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:277)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:264)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:154)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:354)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:145)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:1078)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:117)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:527)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:484)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:398)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:370)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
java.lang.Thread.run(Thread.java:745)
#3:
io.netty.buffer.AdvancedLeakAwareCompositeByteBuf.retain(AdvancedLeakAwareCompositeByteBuf.java:933)
io.netty.buffer.AdvancedLeakAwareCompositeByteBuf.retain(AdvancedLeakAwareCompositeByteBuf.java:36)
org.jboss.resteasy.plugins.server.netty.RestEasyHttpRequestDecoder.decode(RestEasyHttpRequestDecoder.java:75)
org.jboss.resteasy.plugins.server.netty.RestEasyHttpRequestDecoder.decode(RestEasyHttpRequestDecoder.java:28)
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:154)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:354)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:145)
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:154)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:354)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:145)
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:277)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:264)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:154)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:354)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:145)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:1078)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:117)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:527)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:484)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:398)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:370)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
java.lang.Thread.run(Thread.java:745)
#2:
Hint: 'RestEasyHttpRequestDecoder#0' will handle the message from this point.
io.netty.buffer.AdvancedLeakAwareCompositeByteBuf.touch(AdvancedLeakAwareCompositeByteBuf.java:951)
io.netty.buffer.AdvancedLeakAwareCompositeByteBuf.touch(AdvancedLeakAwareCompositeByteBuf.java:36)
io.netty.handler.codec.http.HttpObjectAggregator$AggregatedFullHttpMessage.touch(HttpObjectAggregator.java:305)
io.netty.handler.codec.http.HttpObjectAggregator$AggregatedFullHttpRequest.touch(HttpObjectAggregator.java:402)
io.netty.handler.codec.http.HttpObjectAggregator$AggregatedFullHttpRequest.touch(HttpObjectAggregator.java:332)
io.netty.channel.DefaultChannelPipeline.touch(DefaultChannelPipeline.java:101)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:145)
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:154)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:354)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:145)
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:277)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:264)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:154)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:354)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:145)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:1078)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:117)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:527)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:484)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:398)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:370)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
java.lang.Thread.run(Thread.java:745)
#1:
io.netty.buffer.AdvancedLeakAwareCompositeByteBuf.addComponent(AdvancedLeakAwareCompositeByteBuf.java:831)
io.netty.handler.codec.MessageAggregator.appendPartialContent(MessageAggregator.java:323)
io.netty.handler.codec.MessageAggregator.decode(MessageAggregator.java:287)
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:154)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:354)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:145)
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:277)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:264)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:154)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:354)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:145)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:1078)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:117)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:527)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:484)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:398)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:370)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
java.lang.Thread.run(Thread.java:745)
Created at:
io.netty.buffer.AbstractByteBufAllocator.compositeDirectBuffer(AbstractByteBufAllocator.java:215)
io.netty.buffer.AbstractByteBufAllocator.compositeBuffer(AbstractByteBufAllocator.java:193)
io.netty.handler.codec.MessageAggregator.decode(MessageAggregator.java:251)
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:154)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:354)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:145)
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:277)
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:372)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:245)
io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:154)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:354)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:145)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:1078)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:117)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:527)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:484)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:398)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:370)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
java.lang.Thread.run(Thread.java:745)
{noformat}
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 5 months
[JBoss JIRA] (ISPN-6857) OutdatedTopologyException in clustered invalidation cache because StateTransferInterceptor not in the chain
by Marek Posolda (JIRA)
[ https://issues.jboss.org/browse/ISPN-6857?page=com.atlassian.jira.plugin.... ]
Marek Posolda commented on ISPN-6857:
-------------------------------------
Any idea when this can be fixed? It's an issue for us in Keycloak right now.
> OutdatedTopologyException in clustered invalidation cache because StateTransferInterceptor not in the chain
> -----------------------------------------------------------------------------------------------------------
>
> Key: ISPN-6857
> URL: https://issues.jboss.org/browse/ISPN-6857
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 8.1.0.Final
> Reporter: Marek Posolda
> Attachments: OutdatedTopologyExceptionReproducerTest.java
>
>
> I have the following setup:
> - 2 nodes in a cluster with mode INVALIDATION_SYNC. Non-transactional cache.
> - Node1 is started.
> - "cache.remove" is called for some key on node1. At the same time, node2 is starting, which causes a topology change.
> - The "cache.remove" call on node1 throws an OutdatedTopologyException.
> I found that the cause is that StateTransferInterceptor is not added to the InterceptorChain in INVALIDATION mode; it is only added in REPLICATION or DISTRIBUTED modes - https://github.com/infinispan/infinispan/blob/master/core/src/main/java/o...
> Indeed, when I manually added StateTransferInterceptor to my invalidation cache:
> {code}
> // register StateTransferInterceptor ahead of the non-transactional locking interceptor
> invalidationConfigBuilder.customInterceptors()
>       .addInterceptor()
>       .before(NonTransactionalLockingInterceptor.class)
>       .interceptorClass(StateTransferInterceptor.class);
> {code}
>
> I can see that the issue is gone, as the OutdatedTopologyException is caught and the command is retried with the new topology.
> I am attaching a Java unit test that reproduces the issue. When I run it on my laptop, it almost always reproduces the problem.
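> For reference, below is a minimal sketch (not the attached reproducer) of how the workaround above can be wired into a complete invalidation-cache configuration. The package names in the imports are assumptions for Infinispan 8.1.x and may need adjusting for other versions.
> {code}
> import org.infinispan.configuration.cache.CacheMode;
> import org.infinispan.configuration.cache.Configuration;
> import org.infinispan.configuration.cache.ConfigurationBuilder;
> import org.infinispan.interceptors.locking.NonTransactionalLockingInterceptor;
> import org.infinispan.statetransfer.StateTransferInterceptor;
>
> public class InvalidationCacheWorkaround {
>
>    // Builds a non-transactional INVALIDATION_SYNC configuration with
>    // StateTransferInterceptor registered manually, so that a command hitting an
>    // OutdatedTopologyException during a topology change is retried instead of failing.
>    public static Configuration build() {
>       ConfigurationBuilder builder = new ConfigurationBuilder();
>       builder.clustering().cacheMode(CacheMode.INVALIDATION_SYNC);
>       builder.customInterceptors()
>              .addInterceptor()
>                 .before(NonTransactionalLockingInterceptor.class)
>                 .interceptorClass(StateTransferInterceptor.class);
>       return builder.build();
>    }
> }
> {code}
> The resulting Configuration can then be registered with the cache manager as usual, e.g. via defineConfiguration(cacheName, configuration).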
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 5 months