[infinispan-dev] Classloader leaks?

Dan Berindei dan.berindei at gmail.com
Thu Feb 23 02:47:04 EST 2017


On my system, `cat /proc/sys/fs/file-nr` reports 24838 used FDs, and
that goes up to 29030 when running the
infinispan-compatibility-mode-it tests. Since there are only 78 tests,
a leak is quite possible, but I wouldn't say 4K open files (or, more
likely, sockets) is really a deal-breaker.
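
If you want to track this from inside the test JVM rather than
system-wide, here's a minimal sketch (assuming a HotSpot/OpenJDK JVM
on Linux, where the OS bean can be cast to the com.sun.management
interface):

    import java.lang.management.ManagementFactory;
    import com.sun.management.UnixOperatingSystemMXBean;

    public class FdCount {
        public static void main(String[] args) {
            // The Unix-specific MX bean exposes per-process FD counters
            UnixOperatingSystemMXBean os = (UnixOperatingSystemMXBean)
                    ManagementFactory.getOperatingSystemMXBean();
            System.out.printf("FDs: %d open / %d max%n",
                    os.getOpenFileDescriptorCount(),
                    os.getMaxFileDescriptorCount());
        }
    }

Unlike /proc/sys/fs/file-nr, this counts only the test JVM's own
descriptors, so other processes don't add noise to the numbers.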

Cheers
Dan


On Thu, Feb 23, 2017 at 1:31 AM, Sanne Grinovero <sanne at infinispan.org> wrote:
> On 22 February 2017 at 21:20, Dennis Reed <dereed at redhat.com> wrote:
>> Are those actually 2 million *unique* descriptors?
>>
>> I've seen lsof output that listed many duplicates for the same file
>> descriptor (one for each thread?), making the list appear much larger
>> than it really was.
>
> Good point! You're right: I verified, and all instances of e.g. the
> jgroups jar were using the same FD, just a different thread id.
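>
> As a cross-check, /proc/<pid>/fd holds exactly one symlink per open
> descriptor, so counting its entries gives the unique FD count without
> lsof's per-thread duplication. A minimal sketch for the current JVM
> (assuming Linux):
>
>     import java.io.File;
>
>     public class UniqueFds {
>         public static void main(String[] args) {
>             // One entry per open descriptor, regardless of thread count
>             File[] fds = new File("/proc/self/fd").listFiles();
>             System.out.println("Unique open FDs: "
>                     + (fds == null ? "?" : fds.length));
>         }
>     }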
>
> This is the full error I'm getting when running the tests from the
> "infinispan-compatibility-mode-it" Maven module:
>
>
> java.lang.IllegalStateException: failed to create a child event loop
> at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:88)
> at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:58)
> at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:51)
> at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:87)
> at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:82)
> at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:63)
> at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:51)
> at org.jboss.resteasy.plugins.server.netty.NettyJaxrsServer.start(NettyJaxrsServer.java:239)
> at org.infinispan.rest.NettyRestServer.start(NettyRestServer.java:81)
> at org.infinispan.it.compatibility.CompatibilityCacheFactory.createRestCache(CompatibilityCacheFactory.java:199)
> at org.infinispan.it.compatibility.CompatibilityCacheFactory.createRestMemcachedCaches(CompatibilityCacheFactory.java:137)
> at org.infinispan.it.compatibility.CompatibilityCacheFactory.setup(CompatibilityCacheFactory.java:123)
> at org.infinispan.it.compatibility.ByteArrayKeyReplEmbeddedHotRodTest.setup(ByteArrayKeyReplEmbeddedHotRodTest.java:87)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
> at org.testng.internal.Invoker.invokeConfigurationMethod(Invoker.java:564)
> at org.testng.internal.Invoker.invokeConfigurations(Invoker.java:213)
> at org.testng.internal.Invoker.invokeConfigurations(Invoker.java:138)
> at org.testng.internal.TestMethodWorker.invokeBeforeClassMethods(TestMethodWorker.java:175)
> at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:107)
> at org.testng.TestRunner.privateRun(TestRunner.java:767)
> at org.testng.TestRunner.run(TestRunner.java:617)
> at org.testng.SuiteRunner.runTest(SuiteRunner.java:348)
> at org.testng.SuiteRunner.access$000(SuiteRunner.java:38)
> at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:382)
> at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: io.netty.channel.ChannelException: failed to open a new selector
> at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:157)
> at io.netty.channel.nio.NioEventLoop.<init>(NioEventLoop.java:148)
> at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:126)
> at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:36)
> at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:84)
> ... 32 more
> Caused by: java.io.IOException: Too many open files
> at sun.nio.ch.IOUtil.makePipe(Native Method)
> at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:65)
> at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
> at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:155)
> ... 36 more
>
> Now that I know which metrics to look at, I see that before running
> this specific module my system is consuming about 12K FDs.
> Occasionally consumption just stays around that level and the
> integration tests pass without any failure.
> Most of the time, however, when I run this module I see the FD
> consumption increase during the test run until it eventually fails
> with the above error. The last sample I could take before the failure
> was around 15.5K, which is not surprising as my limit is set to
> 16384, so I guess it would have attempted to grow further.
>
> Tomorrow I'll try with higher limits, to see whether I'm running with
> barely enough for this testsuite or whether consumption keeps
> growing. It still sounds like a leak, though, as the increase is
> quite significant.
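>
> To narrow down which test class leaks, one option is a TestNG
> listener that logs the FD count after every test method (a
> hypothetical sketch, under the same UnixOperatingSystemMXBean
> assumption as above; registered e.g. via TestNG's -listener flag):
>
>     import java.lang.management.ManagementFactory;
>     import com.sun.management.UnixOperatingSystemMXBean;
>     import org.testng.IInvokedMethod;
>     import org.testng.IInvokedMethodListener;
>     import org.testng.ITestResult;
>
>     public class FdLeakListener implements IInvokedMethodListener {
>         @Override
>         public void beforeInvocation(IInvokedMethod m, ITestResult r) {
>         }
>
>         @Override
>         public void afterInvocation(IInvokedMethod m, ITestResult r) {
>             UnixOperatingSystemMXBean os = (UnixOperatingSystemMXBean)
>                     ManagementFactory.getOperatingSystemMXBean();
>             // A count that only ever climbs across the methods of one
>             // class points at that class leaking descriptors
>             System.out.printf("[FD] %s -> %d open%n",
>                     m.getTestMethod().getMethodName(),
>                     os.getOpenFileDescriptorCount());
>         }
>     }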
>
> Thanks,
> Sanne
>
>>
>> -Dennis
>>
>>
>> On 02/22/2017 02:25 PM, Sanne Grinovero wrote:
>>> Hi all,
>>>
>>> our documentation suggests raising the file descriptor limit to about 16K:
>>> http://infinispan.org/docs/stable/contributing/contributing.html#running_and_writing_tests
>>>
>>> I've had this set up for years, yet I've been noticing errors such as:
>>>
>>> "Caused by: java.io.IOException: Too many open files"
>>>
>>> Today I decided to finally have a look, and I see that while running
>>> the testsuite, my system's file descriptor consumption rises
>>> continuously, up to more than 2 million.
>>> (When not running the suite, I'm consuming about 200K - and that
>>> includes IDEs and other FD-hungry applications like Chrome.)
>>>
>>> Sampling some of these file descriptors, it looks like they really
>>> are open files; jar files, to be more precise.
>>>
>>> What puzzles me is that for just one jar - jgroups, for example - I
>>> can count 7852 open instances of it, distributed among only a
>>> handful of processes.
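>>>
>>> (To see where the descriptors point, the /proc/self/fd symlinks can
>>> be resolved and grouped by target; a minimal sketch, assuming Linux:)
>>>
>>>     import java.nio.file.Files;
>>>     import java.nio.file.Path;
>>>     import java.nio.file.Paths;
>>>     import java.util.stream.Collectors;
>>>     import java.util.stream.Stream;
>>>
>>>     public class OpenFilesByTarget {
>>>         public static void main(String[] args) throws Exception {
>>>             try (Stream<Path> fds = Files.list(Paths.get("/proc/self/fd"))) {
>>>                 // Resolve each FD symlink to its target and count how
>>>                 // many descriptors point at the same file (e.g. a jar)
>>>                 fds.map(p -> {
>>>                         try {
>>>                             return Files.readSymbolicLink(p).toString();
>>>                         } catch (Exception e) {
>>>                             return "?";
>>>                         }
>>>                     })
>>>                    .collect(Collectors.groupingBy(t -> t, Collectors.counting()))
>>>                    .forEach((target, n) -> System.out.println(n + "\t" + target));
>>>             }
>>>         }
>>>     }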
>>>
>>> My guess is classloaders aren't being closed?
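>>>
>>> (For illustration: a URLClassLoader keeps an FD open on every jar on
>>> its classpath for as long as it lives; the jar path below is
>>> hypothetical.)
>>>
>>>     import java.net.URL;
>>>     import java.net.URLClassLoader;
>>>
>>>     public class LoaderFdDemo {
>>>         public static void main(String[] args) throws Exception {
>>>             // Hypothetical jar, purely for illustration
>>>             URL jar = new URL("file:/tmp/example.jar");
>>>             // The loader holds the jar open while it is alive
>>>             URLClassLoader loader = new URLClassLoader(new URL[] { jar });
>>>             // ... load classes, run code ...
>>>             // URLClassLoader implements Closeable since Java 7; a
>>>             // loader dropped without close() keeps the jar FD open
>>>             // (at least until it is garbage collected)
>>>             loader.close();
>>>         }
>>>     }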
>>>
>>> Also: why has nobody else noticed problems? Have you all
>>> reconfigured your systems for unlimited FDs?
>>>
>>> Thanks,
>>> Sanne
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

