[JBoss JIRA] (ISPN-4459) Memory leak in Hot Rod clients
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-4459?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero updated ISPN-4459:
----------------------------------
Summary: Memory leak in Hot Rod clients (was: Memory leak in HotRod clientz)
> Memory leak in Hot Rod clients
> ------------------------------
>
> Key: ISPN-4459
> URL: https://issues.jboss.org/browse/ISPN-4459
> Project: Infinispan
> Issue Type: Bug
> Security Level: Public (Everyone can see)
> Components: Remote Querying, Test Suite - Server
> Affects Versions: 7.0.0.Alpha4
> Reporter: Dan Berindei
> Assignee: Mircea Markus
> Labels: testsuite_stability
> Fix For: 7.0.0.Final
>
> Attachments: jprofiler_screenshot.png
>
>
> Even though the HotRod client tests stop their servers, each test instance keeps a reference to the servers it has started (both tests extending HotRodSingleNodeTest and those extending HotRodMultiNodeTest).
> In some tests, like the remote query tests, those servers' worker threads also keep references to huge thread-local {{io.netty.buffer.PoolThreadCache}}s (see attachment). Because of this, I'm sometimes seeing an OOM in the HotRod client suite. We should either clear the server references in the tests or clear the worker threads when shutting down the server.
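As a rough sketch of the first suggested fix, assuming TestNG (which the Infinispan suite uses) and a hypothetical {{hotRodServer}} field; this is not the actual test-suite code:
{code}
// A minimal sketch, not the actual Infinispan test code: stop the server in
// teardown, then drop the reference so the retained test instance no longer
// pins the server (and its Netty buffers) in memory.
import org.infinispan.server.hotrod.HotRodServer;
import org.testng.annotations.AfterClass;

public abstract class HotRodServerCleanupSketch {

   protected HotRodServer hotRodServer; // hypothetical field, started in setup

   @AfterClass(alwaysRun = true)
   protected void destroyServer() {
      try {
         if (hotRodServer != null) {
            hotRodServer.stop();
         }
      } finally {
         // TestNG keeps test instances alive for the whole suite run, so a
         // non-null field would keep the stopped server (and anything its
         // worker threads still reference) reachable.
         hotRodServer = null;
      }
   }
}
{code}
The second suggested fix would terminate the worker threads themselves, e.g. by shutting down the server's Netty event loop group (EventLoopGroup#shutdownGracefully()), so that their thread-local PoolThreadCache instances become collectable when the threads die.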
--
This message was sent by Atlassian JIRA
(v6.2.6#6264)
[JBoss JIRA] (ISPN-4459) Memory leak in Hot Rod clients
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-4459?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero updated ISPN-4459:
----------------------------------
Description:
Even though the HotRod client tests stop their servers, each test instance keeps a reference to the servers it has started (both tests extending HotRodSingleNodeTest and those extending HotRodMultiNodeTest).
In some tests, like the remote query tests, those servers' worker threads also keep references to huge thread-local {{io.netty.buffer.PoolThreadCache}}s (see attachment). Because of this, I'm sometimes seeing an OOM in the HotRod client suite. We should either clear the server references in the tests or clear the worker threads when shutting down the server.
was:
Even though the HotRod client tests stop their servers, each test instance keeps a reference to the servers it has started (both tests extending HotRodSingleNodeTest and those extending HotRodMultiNodeTest).
In some tests, like the remote query tests, those servers' worker threads also keep references to huge thread-local {{io.netty.buffer.PoolThreadCache}}s (see attachment). Because of this, I'm sometimes seeing an OOM in the HotRod client suite. We should either clear the server references in the tests or clear the worker threads when shutting down the server.
> Memory leak in Hot Rod clients
> ------------------------------
>
> Key: ISPN-4459
> URL: https://issues.jboss.org/browse/ISPN-4459
> Project: Infinispan
> Issue Type: Bug
> Security Level: Public (Everyone can see)
> Components: Remote Querying, Test Suite - Server
> Affects Versions: 7.0.0.Alpha4
> Reporter: Dan Berindei
> Assignee: Mircea Markus
> Labels: testsuite_stability
> Fix For: 7.0.0.Final
>
> Attachments: jprofiler_screenshot.png
>
>
> Even though the HotRod client tests stop their servers, each test instance keeps a reference to the servers it has started (both tests extending HotRodSingleNodeTest and those extending HotRodMultiNodeTest).
> In some tests, like the remote query tests, those servers' worker threads also keep references to huge thread-local {{io.netty.buffer.PoolThreadCache}}s (see attachment). Because of this, I'm sometimes seeing an OOM in the HotRod client suite. We should either clear the server references in the tests or clear the worker threads when shutting down the server.
--
This message was sent by Atlassian JIRA
(v6.2.6#6264)
[JBoss JIRA] (ISPN-4459) Memory leak in HotRod clientz
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-4459?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero updated ISPN-4459:
----------------------------------
Summary: Memory leak in HotRod clientz (was: Memory leak in HotRod client tests)
> Memory leak in HotRod clientz
> -----------------------------
>
> Key: ISPN-4459
> URL: https://issues.jboss.org/browse/ISPN-4459
> Project: Infinispan
> Issue Type: Bug
> Security Level: Public (Everyone can see)
> Components: Remote Querying, Test Suite - Server
> Affects Versions: 7.0.0.Alpha4
> Reporter: Dan Berindei
> Assignee: Mircea Markus
> Labels: testsuite_stability
> Fix For: 7.0.0.Final
>
> Attachments: jprofiler_screenshot.png
>
>
> Even though the HotRod client tests stop their servers, each test instance keeps a reference to the servers it has started (both tests extending HotRodSingleNodeTest and those extending HotRodMultiNodeTest).
> In some tests, like the remote query tests, those servers' worker threads also keep references to huge thread-local {{io.netty.buffer.PoolThreadCache}}s (see attachment). Because of this, I'm sometimes seeing an OOM in the HotRod client suite. We should either clear the server references in the tests or clear the worker threads when shutting down the server.
--
This message was sent by Atlassian JIRA
(v6.2.6#6264)
[JBoss JIRA] (ISPN-4459) Memory leak in HotRod client tests
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-4459?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero commented on ISPN-4459:
---------------------------------------
[~dan.berindei] why is this assigned to me? BTW I might have misunderstood the description (I didn't look at the code), but the way you describe it, it seems like a critical leak in the client, not just a problem with the tests.
> Memory leak in HotRod client tests
> ----------------------------------
>
> Key: ISPN-4459
> URL: https://issues.jboss.org/browse/ISPN-4459
> Project: Infinispan
> Issue Type: Bug
> Security Level: Public (Everyone can see)
> Components: Remote Querying, Test Suite - Server
> Affects Versions: 7.0.0.Alpha4
> Reporter: Dan Berindei
> Assignee: Sanne Grinovero
> Labels: testsuite_stability
> Fix For: 7.0.0.Final
>
> Attachments: jprofiler_screenshot.png
>
>
> Even though the HotRod client tests stop their servers, each test instance keeps a reference to the servers it has started (both tests extending HotRodSingleNodeTest and those extending HotRodMultiNodeTest).
> In some tests, like the remote query tests, those servers' worker threads also keep references to huge thread-local {{io.netty.buffer.PoolThreadCache}}s (see attachment). Because of this, I'm sometimes seeing an OOM in the HotRod client suite. We should either clear the server references in the tests or clear the worker threads when shutting down the server.
--
This message was sent by Atlassian JIRA
(v6.2.6#6264)
[JBoss JIRA] (ISPN-4459) Memory leak in HotRod client tests
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-4459?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero updated ISPN-4459:
----------------------------------
Assignee: Mircea Markus (was: Sanne Grinovero)
> Memory leak in HotRod client tests
> ----------------------------------
>
> Key: ISPN-4459
> URL: https://issues.jboss.org/browse/ISPN-4459
> Project: Infinispan
> Issue Type: Bug
> Security Level: Public (Everyone can see)
> Components: Remote Querying, Test Suite - Server
> Affects Versions: 7.0.0.Alpha4
> Reporter: Dan Berindei
> Assignee: Mircea Markus
> Labels: testsuite_stability
> Fix For: 7.0.0.Final
>
> Attachments: jprofiler_screenshot.png
>
>
> Even though the HotRod client tests stop their servers, each test instance keeps a reference to the servers it has started (both tests extending HotRodSingleNodeTest and those extending HotRodMultiNodeTest).
> In some tests, like the remote query tests, those servers' worker threads also keep references to huge thread-local {{io.netty.buffer.PoolThreadCache}}s (see attachment). Because of this, I'm sometimes seeing an OOM in the HotRod client suite. We should either clear the server references in the tests or clear the worker threads when shutting down the server.
--
This message was sent by Atlassian JIRA
(v6.2.6#6264)
[JBoss JIRA] (ISPN-4424) getCacheEntry is not safe
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-4424?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-4424:
-----------------------------------------------
Pedro Ruivo <pruivo(a)redhat.com> changed the Status of [bug 1110647|https://bugzilla.redhat.com/show_bug.cgi?id=1110647] from ASSIGNED to POST
> getCacheEntry is not safe
> -------------------------
>
> Key: ISPN-4424
> URL: https://issues.jboss.org/browse/ISPN-4424
> Project: Infinispan
> Issue Type: Bug
> Security Level: Public (Everyone can see)
> Components: Remote Protocols
> Affects Versions: 6.0.2.Final, 7.0.0.Alpha4
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Fix For: 7.0.0.Alpha5, 7.0.0.Final
>
>
> Versioned updates with a multi-threaded Hot Rod client result in inconsistency: some replaceWithVersion calls return true, ignoring a version update executed in another thread. Here's a log excerpt from a concurrency stress test:
> {noformat}
> 2014-06-20 16:16:56,798 INFO [PutFromNull] (pool-7-thread-10) count=462,prev=462,new=463
> 2014-06-20 16:16:56,820 INFO [PutFromNull] (pool-7-thread-9) count=463,prev=463,new=464
> 2014-06-20 16:16:56,831 INFO [PutFromNull] (pool-7-thread-2) count=464,prev=463,new=464
> 2014-06-20 16:16:56,845 INFO [PutFromNull] (pool-7-thread-9) count=465,prev=464,new=465
> {noformat}
> Here you can see two threads applying the same replacement, from 463 to 464.
> The issue appears to be the result of a race condition in the Hot Rod server's protocol decoder. When replaceIfUnmodified is received, the cache entry is retrieved to verify that the version on the server matches the version sent with the command. However, the retrieved cache entry is mutable, and its value can change midway through this operation as a result of another thread updating it. Please find below some log snippets showing this.
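To make the check-then-act window concrete, here is an illustrative sketch of the unsafe pattern; it is not the actual decoder code, and the signatures approximate the embedded-cache API:
{code}
import org.infinispan.AdvancedCache;
import org.infinispan.container.entries.CacheEntry;
import org.infinispan.container.versioning.EntryVersion;
import org.infinispan.container.versioning.InequalVersionComparisonResult;
import org.infinispan.metadata.Metadata;

// Sketch only: shows why a non-atomic version check plus write is racy.
public final class ReplaceIfUnmodifiedSketch {
   static boolean replaceIfUnmodified(AdvancedCache<byte[], byte[]> cache, byte[] key,
                                      byte[] newValue, Metadata newMetadata,
                                      EntryVersion clientVersion) {
      CacheEntry entry = cache.getCacheEntry(key); // the returned entry is mutable
      if (entry == null) {
         return false;
      }
      EntryVersion serverVersion = entry.getMetadata().version();
      if (serverVersion.compareTo(clientVersion) == InequalVersionComparisonResult.EQUAL) {
         // Race window: another thread can update the entry between the check
         // above and the put below, so two threads can both "succeed" against
         // the same original version (463 -> 464 twice in the log above).
         cache.put(key, newValue, newMetadata);
         return true;
      }
      return false;
   }
}
{code}
A safe variant would need the version comparison and the write to happen atomically, e.g. performing the conditional replace under the entry's lock, so that a concurrent update invalidates the check.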
--
This message was sent by Atlassian JIRA
(v6.2.6#6264)
[JBoss JIRA] (ISPN-4460) Map-Reduce: Mapper sometimes receives null value
by Rich DiCroce (JIRA)
Rich DiCroce created ISPN-4460:
----------------------------------
Summary: Map-Reduce: Mapper sometimes receives null value
Key: ISPN-4460
URL: https://issues.jboss.org/browse/ISPN-4460
Project: Infinispan
Issue Type: Bug
Security Level: Public (Everyone can see)
Affects Versions: 6.0.2.Final
Reporter: Rich DiCroce
Assignee: Mircea Markus
I have a Mapper with the following map method:
{code}
public void map(EndpointAddress key, EndpointInfo value, Collector<Address, Integer> collector) {
   // TODO debugging, remove this
   if (value == null) {
      System.out.println("value is null! WTF");
   }
   if (collector == null) {
      System.out.println("collector is null! OMGWTFBBQ");
   }
   // Note: still NPEs here when value is null; the checks above only log,
   // they don't return, so value.getConnectedGP() dereferences null anyway.
   collector.emit(value.getConnectedGP(), 1);
}
{code}
Null checks were added because I am sometimes seeing a NullPointerException on the last line. Console output is below. I cannot reliably reproduce this problem. It's clearly a race condition of some kind. The cache that is being queried has keys being added/removed all the time.
{noformat}
14:05:30,020 INFO [stdout] (transport-thread-18) value is null! WTF
14:05:30,022 ERROR [com.sg.song.nms.ispn.DataGatherer] (EJB default - 3) GP table column query failed: java.util.concurrent.ExecutionException: org.infinispan.commons.CacheException: java.util.concurrent.ExecutionException: java.lang.NullPointerException
at org.infinispan.distexec.mapreduce.MapReduceTask$MapReduceTaskFuture.get(MapReduceTask.java:762) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
at com.sg.song.nms.ispn.DataGatherer.queryCurrentGPStatistics(DataGatherer.java:116) [classes:]
at sun.reflect.GeneratedMethodAccessor41.invoke(Unknown Source) [:1.7.0_45]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_45]
at java.lang.reflect.Method.invoke(Method.java:606) [rt.jar:1.7.0_45]
at org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
at org.jboss.invocation.WeavedInterceptor.processInvocation(WeavedInterceptor.java:53)
at org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:61)
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:407)
at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:82) [wildfly-weld-8.1.0.Final.jar:8.1.0.Final]
at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:95) [wildfly-weld-8.1.0.Final.jar:8.1.0.Final]
at org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:61)
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
at org.jboss.invocation.WeavedInterceptor.processInvocation(WeavedInterceptor.java:53)
at org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:61)
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
at org.jboss.as.ejb3.tx.EjbBMTInterceptor.handleInvocation(EjbBMTInterceptor.java:104) [wildfly-ejb3-8.1.0.Final.jar:8.1.0.Final]
at org.jboss.as.ejb3.tx.BMTInterceptor.processInvocation(BMTInterceptor.java:56) [wildfly-ejb3-8.1.0.Final.jar:8.1.0.Final]
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:407)
at org.jboss.weld.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:55) [weld-core-impl-2.1.2.Final.jar:2014-01-09 09:23]
at org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:83) [wildfly-weld-8.1.0.Final.jar:8.1.0.Final]
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
at org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45) [wildfly-ee-8.1.0.Final.jar:8.1.0.Final]
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
at org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:21)
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
at org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:53)
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
at org.jboss.as.ejb3.component.singleton.SingletonComponentInstanceAssociationInterceptor.processInvocation(SingletonComponentInstanceAssociationInterceptor.java:52) [wildfly-ejb3-8.1.0.Final.jar:8.1.0.Final]
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
at org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41) [wildfly-ejb3-8.1.0.Final.jar:8.1.0.Final]
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
at org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:95) [wildfly-ejb3-8.1.0.Final.jar:8.1.0.Final]
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
at org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64) [wildfly-ejb3-8.1.0.Final.jar:8.1.0.Final]
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
at org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
at org.jboss.as.ejb3.component.interceptors.AdditionalSetupInterceptor.processInvocation(AdditionalSetupInterceptor.java:55) [wildfly-ejb3-8.1.0.Final.jar:8.1.0.Final]
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
at org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:64)
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
at org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:326)
at org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:448)
at org.jboss.invocation.AccessCheckingInterceptor.processInvocation(AccessCheckingInterceptor.java:61)
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
at org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:326)
at org.jboss.invocation.PrivilegedWithCombinerInterceptor.processInvocation(PrivilegedWithCombinerInterceptor.java:80)
at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
at org.jboss.as.ejb3.timerservice.TimedObjectInvokerImpl.callTimeout(TimedObjectInvokerImpl.java:104) [wildfly-ejb3-8.1.0.Final.jar:8.1.0.Final]
at org.jboss.as.ejb3.timerservice.task.CalendarTimerTask.callTimeout(CalendarTimerTask.java:61) [wildfly-ejb3-8.1.0.Final.jar:8.1.0.Final]
at org.jboss.as.ejb3.timerservice.task.TimerTask.run(TimerTask.java:168) [wildfly-ejb3-8.1.0.Final.jar:8.1.0.Final]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [rt.jar:1.7.0_45]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_45]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_45]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_45]
at java.lang.Thread.run(Thread.java:744) [rt.jar:1.7.0_45]
at org.jboss.threads.JBossThread.run(JBossThread.java:122)
Caused by: org.infinispan.commons.CacheException: java.util.concurrent.ExecutionException: java.lang.NullPointerException
at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:348) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:634) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
at org.infinispan.distexec.mapreduce.MapReduceTask$3.call(MapReduceTask.java:652) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
at org.infinispan.distexec.mapreduce.MapReduceTask$MapReduceTaskFuture.get(MapReduceTask.java:760) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
... 62 more
Caused by: java.util.concurrent.ExecutionException: java.lang.NullPointerException
at java.util.concurrent.FutureTask.report(FutureTask.java:122) [rt.jar:1.7.0_45]
at java.util.concurrent.FutureTask.get(FutureTask.java:188) [rt.jar:1.7.0_45]
at org.infinispan.distexec.mapreduce.MapReduceTask$TaskPart.get(MapReduceTask.java:845) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
at org.infinispan.distexec.mapreduce.MapReduceTask.executeMapPhase(MapReduceTask.java:439) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
at org.infinispan.distexec.mapreduce.MapReduceTask.execute(MapReduceTask.java:342) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
... 65 more
Caused by: java.lang.NullPointerException
at com.sgi.song.gp.protocol.SONGv1.cluster.query.RegistrationsByGPMapper.map(RegistrationsByGPMapper.java:26) [gp-ispn-shared-1.0.0-SNAPSHOT.jar:]
at com.sgi.song.gp.protocol.SONGv1.cluster.query.RegistrationsByGPMapper.map(RegistrationsByGPMapper.java:1) [gp-ispn-shared-1.0.0-SNAPSHOT.jar:]
at org.infinispan.distexec.mapreduce.MapReduceManagerImpl.map(MapReduceManagerImpl.java:181) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
at org.infinispan.distexec.mapreduce.MapReduceManagerImpl.mapAndCombineForDistributedReduction(MapReduceManagerImpl.java:96) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
at org.infinispan.distexec.mapreduce.MapReduceTask$MapTaskPart.invokeMapCombineLocally(MapReduceTask.java:967) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
at org.infinispan.distexec.mapreduce.MapReduceTask$MapTaskPart.access$200(MapReduceTask.java:894) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
at org.infinispan.distexec.mapreduce.MapReduceTask$MapTaskPart$1.call(MapReduceTask.java:916) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
at org.infinispan.distexec.mapreduce.MapReduceTask$MapTaskPart$1.call(MapReduceTask.java:912) [infinispan-core-6.0.2.Final.jar:6.0.2.Final]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_45]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [rt.jar:1.7.0_45]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_45]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_45]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_45]
at java.lang.Thread.run(Thread.java:744) [rt.jar:1.7.0_45]
{noformat}
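If the null values come from entries being removed concurrently with the map phase, a defensive workaround in the mapper itself would be to skip them. A hedged sketch follows: it reuses the reporter's own types (EndpointAddress, EndpointInfo, getConnectedGP, whose imports aren't shown here) and papers over the race rather than fixing it in Infinispan:
{code}
import org.infinispan.distexec.mapreduce.Collector;
import org.infinispan.distexec.mapreduce.Mapper;

// Sketch: same shape as the mapper above, but tolerant of entries whose
// value has disappeared mid-iteration.
public class NullTolerantMapper implements Mapper<EndpointAddress, EndpointInfo, Address, Integer> {
   @Override
   public void map(EndpointAddress key, EndpointInfo value, Collector<Address, Integer> collector) {
      if (value == null) {
         // The entry's value was likely removed concurrently with the map
         // phase; skip it instead of hitting the NPE in the trace above.
         return;
      }
      collector.emit(value.getConnectedGP(), 1);
   }
}
{code}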
--
This message was sent by Atlassian JIRA
(v6.2.6#6264)
[JBoss JIRA] (ISPN-4459) Memory leak in HotRod client tests
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-4459?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-4459:
-------------------------------
Attachment: jprofiler_screenshot.png
> Memory leak in HotRod client tests
> ----------------------------------
>
> Key: ISPN-4459
> URL: https://issues.jboss.org/browse/ISPN-4459
> Project: Infinispan
> Issue Type: Bug
> Security Level: Public (Everyone can see)
> Components: Remote Querying, Test Suite - Server
> Affects Versions: 7.0.0.Alpha4
> Reporter: Dan Berindei
> Assignee: Sanne Grinovero
> Labels: testsuite_stability
> Fix For: 7.0.0.Final
>
> Attachments: jprofiler_screenshot.png
>
>
> Even though the HotRod client tests stop their servers, each test instance keeps a reference to the servers it has started (both tests extending HotRodSingleNodeTest and those extending HotRodMultiNodeTest).
> In some tests, like the remote query tests, those servers' worker threads also keep references to huge thread-local {{io.netty.buffer.PoolThreadCache}}s (see attachment). Because of this, I'm sometimes seeing an OOM in the HotRod client suite. We should either clear the server references in the tests or clear the worker threads when shutting down the server.
--
This message was sent by Atlassian JIRA
(v6.2.6#6264)
[JBoss JIRA] (ISPN-4459) Memory leak in HotRod client tests
by Dan Berindei (JIRA)
Dan Berindei created ISPN-4459:
----------------------------------
Summary: Memory leak in HotRod client tests
Key: ISPN-4459
URL: https://issues.jboss.org/browse/ISPN-4459
Project: Infinispan
Issue Type: Bug
Security Level: Public (Everyone can see)
Components: Remote Querying, Test Suite - Server
Affects Versions: 7.0.0.Alpha4
Reporter: Dan Berindei
Assignee: Sanne Grinovero
Fix For: 7.0.0.Final
Even though the HotRod client tests stop their servers, each test instance keeps a reference to the servers it has started (both tests extending HotRodSingleNodeTest and those extending HotRodMultiNodeTest).
In some tests, like the remote query tests, those servers' worker threads also keep references to huge thread-local {{io.netty.buffer.PoolThreadCache}}s (see attachment). Because of this, I'm sometimes seeing an OOM in the HotRod client suite. We should either clear the server references in the tests or clear the worker threads when shutting down the server.
--
This message was sent by Atlassian JIRA
(v6.2.6#6264)
[JBoss JIRA] (ISPN-4424) getCacheEntry is not safe
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-4424?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-4424:
-----------------------------------------------
Martin Gencur <mgencur(a)redhat.com> changed the Status of [bug 1110647|https://bugzilla.redhat.com/show_bug.cgi?id=1110647] from ON_QA to ASSIGNED
> getCacheEntry is not safe
> -------------------------
>
> Key: ISPN-4424
> URL: https://issues.jboss.org/browse/ISPN-4424
> Project: Infinispan
> Issue Type: Bug
> Security Level: Public (Everyone can see)
> Components: Remote Protocols
> Affects Versions: 6.0.2.Final, 7.0.0.Alpha4
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Fix For: 7.0.0.Alpha5, 7.0.0.Final
>
>
> Versioned updates with a multi-threaded Hot Rod client result in inconsistency: some replaceWithVersion calls return true, ignoring a version update executed in another thread. Here's a log excerpt from a concurrency stress test:
> {noformat}
> 2014-06-20 16:16:56,798 INFO [PutFromNull] (pool-7-thread-10) count=462,prev=462,new=463
> 2014-06-20 16:16:56,820 INFO [PutFromNull] (pool-7-thread-9) count=463,prev=463,new=464
> 2014-06-20 16:16:56,831 INFO [PutFromNull] (pool-7-thread-2) count=464,prev=463,new=464
> 2014-06-20 16:16:56,845 INFO [PutFromNull] (pool-7-thread-9) count=465,prev=464,new=465
> {noformat}
> Here you can see two threads applying the same replacement, from 463 to 464.
> The issue appears to be the result of a race condition in the Hot Rod server's protocol decoder. When replaceIfUnmodified is received, the cache entry is retrieved to verify that the version on the server matches the version sent with the command. However, the retrieved cache entry is mutable, and its value can change midway through this operation as a result of another thread updating it. Please find below some log snippets showing this.
--
This message was sent by Atlassian JIRA
(v6.2.6#6264)