[JBoss JIRA] (JGRP-2017) Optimize COMPRESS protocol by pooling / re-using byte array/buffer
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2017?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on JGRP-2017:
--------------------------------
Make sure we don't run into JGRP-867 again
> Optimize COMPRESS protocol by pooling / re-using byte array/buffer
> ------------------------------------------------------------------
>
> Key: JGRP-2017
> URL: https://issues.jboss.org/browse/JGRP-2017
> Project: JGroups
> Issue Type: Enhancement
> Affects Versions: 3.4.5, 3.6.7
> Environment: Wildfly 8.2.1.Final (JGroups 3.4.5)
> Reporter: Mathieu Lachance
> Assignee: Bela Ban
> Fix For: 3.6.8, 4.0
>
>
> I'm currently working to minimize our application's UDP bandwidth usage by enabling the COMPRESS protocol in Wildfly 8.2.1.Final.
> Enabling the COMPRESS protocol works well in that regard, reducing UDP traffic by approximately 7x, which is really neat.
> The thing is, by enabling compression we are trading increased CPU / memory usage for lower network bandwidth usage. In our case, we actually see a notable increase in CPU / memory usage due to increased garbage collector activity.
> When I had a look at the COMPRESS protocol implementation, I saw that both Deflater and Inflater instances are pooled and re-used (great), but the byte arrays are not (not great).
> This compression pattern is very similar to the GZip compression filters found in recent HTTP implementations. In the Netty project, for example, there are various compression protocols and/or utilities that are available and fine-tunable; they all re-use byte buffers and/or avoid byte array instantiation/copying as much as possible.
> Here are some notable classes of the Netty implementation:
> https://github.com/netty/netty/blob/4.1/codec-http/src/main/java/io/netty...
> https://github.com/netty/netty/blob/4.1/codec/src/main/java/io/netty/hand...
> https://github.com/netty/netty/blob/4.1/codec/src/main/java/io/netty/hand...
> https://github.com/netty/netty/blob/4.1/codec/src/main/java/io/netty/hand...
> I think re-implementing something similar in JGroups would yield great benefits for the COMPRESS protocol. Perhaps something could also be picked up directly from the JBoss XNIO and/or Undertow projects?
> Finally, could we also add ergonomics to automatically determine the best number of Deflater and Inflater instances by tracking how long callers wait to borrow one from the pool?
> Thanks,
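The "ergonomics" idea above could start with simply instrumenting the pool. A minimal sketch in plain Java, assuming a blocking-queue pool of {{Deflater}} instances; the class and method names are hypothetical, not JGroups API:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicLong;
import java.util.zip.Deflater;

// Hypothetical sketch: track how long callers wait to borrow a Deflater,
// so the pool size could be tuned (or grown) automatically.
public class InstrumentedDeflaterPool {
    private final BlockingQueue<Deflater> pool;
    private final AtomicLong totalWaitNanos = new AtomicLong();
    private final AtomicLong borrows = new AtomicLong();

    public InstrumentedDeflaterPool(int size) {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++)
            pool.add(new Deflater());
    }

    public Deflater borrow() throws InterruptedException {
        long start = System.nanoTime();
        Deflater d = pool.take();            // blocks when the pool is empty
        totalWaitNanos.addAndGet(System.nanoTime() - start);
        borrows.incrementAndGet();
        return d;
    }

    public void release(Deflater d) {
        d.reset();                           // make it safe for the next borrower
        pool.offer(d);
    }

    /** Average time spent waiting for a Deflater, in nanoseconds. */
    public long avgWaitNanos() {
        long n = borrows.get();
        return n == 0 ? 0 : totalWaitNanos.get() / n;
    }
}
```

A high average wait time would indicate the pool is undersized for the current load; a near-zero one, that it could shrink.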
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
10 years, 2 months
[JBoss JIRA] (JGRP-2017) Optimize COMPRESS protocol by pooling / re-using byte array/buffer
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2017?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on JGRP-2017:
--------------------------------
Hmm, there _is_ a superfluous copy in {{COMPRESS.down()}}: {{new_payload}} is unneeded, as {{compressed_payload}} was already created and can be set in the message directly.
However, the problem is that if we reuse a send buffer, we need to copy its contents into the message, because the message might get retransmitted later (e.g. by {{NAKACK2}}) and reusing the buffer would change its contents. Looks like we cannot avoid that one copy in either the down or the up direction...
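The unavoidable copy described above can at least be sized exactly: compress into a reusable scratch buffer, then copy only the compressed length into a fresh array that the message owns. A minimal sketch, assuming the scratch buffer is large enough for one deflate pass; names are illustrative, not the actual COMPRESS code:

```java
import java.util.Arrays;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.zip.Deflater;

// Hypothetical sketch: pooled Deflaters plus a reusable scratch buffer,
// with exactly one copy into a message-owned array.
public class PooledCompressor {
    private final BlockingQueue<Deflater> pool;
    // Reusable scratch buffer; assumed large enough for one deflate pass.
    private static final ThreadLocal<byte[]> SCRATCH =
        ThreadLocal.withInitial(() -> new byte[64 * 1024]);

    public PooledCompressor(int poolSize) {
        pool = new ArrayBlockingQueue<>(poolSize);
        for (int i = 0; i < poolSize; i++)
            pool.add(new Deflater());
    }

    public byte[] compress(byte[] payload) throws InterruptedException {
        Deflater deflater = pool.take();     // blocks if the pool is exhausted
        try {
            deflater.reset();
            deflater.setInput(payload);
            deflater.finish();
            byte[] scratch = SCRATCH.get();
            int len = deflater.deflate(scratch);
            // The one copy we cannot avoid: the scratch buffer will be
            // reused, so the message must own its compressed payload
            // (retransmission may read it long after this call returns).
            return Arrays.copyOf(scratch, len);
        } finally {
            pool.put(deflater);
        }
    }
}
```

The pooling removes the per-message {{Deflater}} and output-buffer allocations; only the exact-length result array is still allocated per message, which is what retransmission safety requires.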
[JBoss JIRA] (WFLY-6208) New warning on shutdown ISPN000197: Error updating cluster member list: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from node2, see cause for remote stack trace
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/WFLY-6208?page=com.atlassian.jira.plugin.... ]
Radoslav Husar resolved WFLY-6208.
----------------------------------
Fix Version/s: 10.1.0.Final
Resolution: Done
> New warning on shutdown ISPN000197: Error updating cluster member list: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from node2, see cause for remote stack trace
> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: WFLY-6208
> URL: https://issues.jboss.org/browse/WFLY-6208
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 10.1.0.Final
> Reporter: Radoslav Husar
> Assignee: Paul Ferraro
> Fix For: 10.1.0.Final
>
>
> This seems new, seen on master (2860f82551835f0b3b3e80a58f54f4f69fe9571a).
> {noformat}11:12:54,901 WARN [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread--p14-t8) ISPN000197: Error updating cluster member list: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from node2, see cause for remote stack trace
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.checkRsp(JGroupsTransport.java:757)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$invokeRemotelyAsync$0(JGroupsTransport.java:599)
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
> at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
> at org.infinispan.remoting.transport.jgroups.SingleResponseFuture.futureDone(SingleResponseFuture.java:30)
> at org.jgroups.blocks.Request.checkCompletion(Request.java:162)
> at org.jgroups.blocks.UnicastRequest.transportClosed(UnicastRequest.java:180)
> at org.jgroups.blocks.RequestCorrelator.stop(RequestCorrelator.java:257)
> at org.jgroups.blocks.MessageDispatcher.stop(MessageDispatcher.java:171)
> at org.jgroups.blocks.MessageDispatcher.channelDisconnected(MessageDispatcher.java:546)
> at org.jgroups.Channel.notifyChannelDisconnected(Channel.java:533)
> at org.jgroups.fork.ForkChannel.disconnect(ForkChannel.java:186)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.stop(JGroupsTransport.java:263)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:168)
> at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:887)
> at org.infinispan.factories.AbstractComponentRegistry.internalStop(AbstractComponentRegistry.java:692)
> at org.infinispan.factories.AbstractComponentRegistry.stop(AbstractComponentRegistry.java:570)
> at org.infinispan.factories.GlobalComponentRegistry.stop(GlobalComponentRegistry.java:263)
> at org.infinispan.manager.DefaultCacheManager.stop(DefaultCacheManager.java:689)
> at org.jboss.as.clustering.infinispan.subsystem.CacheContainerBuilder.stop(CacheContainerBuilder.java:120)
> at org.jboss.msc.service.ServiceControllerImpl$StopTask.stopService(ServiceControllerImpl.java:2056)
> at org.jboss.msc.service.ServiceControllerImpl$StopTask.run(ServiceControllerImpl.java:2017)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalStateException: transport was closed
> at org.jgroups.blocks.UnicastRequest.transportClosed(UnicastRequest.java:171)
> ... 22 more{noformat}
[JBoss JIRA] (WFLY-6208) New warning on shutdown ISPN000197: Error updating cluster member list: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from node2, see cause for remote stack trace
by Radoslav Husar (JIRA)
[ https://issues.jboss.org/browse/WFLY-6208?page=com.atlassian.jira.plugin.... ]
Radoslav Husar commented on WFLY-6208:
--------------------------------------
Fixed with the Infinispan 8.1.2.Final upgrade.
[JBoss JIRA] (DROOLS-1012) kie-ci-osgi Activator doesn't register the ClassLoaderResolver on the ServiceRegistry
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/DROOLS-1012?page=com.atlassian.jira.plugi... ]
RH Bugzilla Integration commented on DROOLS-1012:
-------------------------------------------------
Alessandro Lazarotti <alazarot(a)redhat.com> changed the Status of [bug 1293965|https://bugzilla.redhat.com/show_bug.cgi?id=1293965] from RELEASE_PENDING to CLOSED
> kie-ci-osgi Activator doesn't register the ClassLoaderResolver on the ServiceRegistry
> -------------------------------------------------------------------------------------
>
> Key: DROOLS-1012
> URL: https://issues.jboss.org/browse/DROOLS-1012
> Project: Drools
> Issue Type: Bug
> Reporter: Mario Fusco
> Assignee: Mario Fusco
> Fix For: 6.4.0.Beta1
>
> Attachments: brms-fuse-integration-master.tgz
>
>
> kie-ci-osgi Activator doesn't register the ClassLoaderResolver on the ServiceRegistry. This implies that when the KieContainer is created in an OSGi environment, it is not able to get an instance of the MavenClassLoaderResolver and therefore cannot create a ClassLoader including all the transitive dependencies of the kjar to be compiled.
> The attached project reproduces this issue. To run it, do a {{mvn clean install}} of the project and issue the following commands in Fuse:
> features:addurl mvn:com.redhat/brms-fuse-integration-features/0.0.1-SNAPSHOT/xml/features
> features:install brms-integration-bundle