[JBoss JIRA] (ISPN-1611) Hotrod server memory leak after enable idle timeout
by hs z (Created) (JIRA)
Hotrod server memory leak after enable idle timeout
---------------------------------------------------
Key: ISPN-1611
URL: https://issues.jboss.org/browse/ISPN-1611
Project: Infinispan
Issue Type: Bug
Components: Cache Server
Affects Versions: 5.1.0.CR1
Reporter: hs z
Assignee: Manik Surtani
I set --idle_timeout=15 as a startup parameter; since then, the thread count and memory usage grow continuously. I took a heap dump and found many Netty HashedWheelTimer instances consuming a large amount of memory.
The HashedWheelTimer javadoc says: "Do not create many instances. HashedWheelTimer creates a new thread whenever it is instantiated and started. Therefore, you should make sure to create only one instance and share it across your application. One of the common mistakes, that makes your application unresponsive, is to create a new instance in ChannelPipelineFactory, which results in the creation of a new thread for every connection." But Infinispan's NettyChannelPipelineFactory.scala contains "timer = new HashedWheelTimer", so the timer is being misused.
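For illustration, here is a minimal, self-contained sketch of the shared-timer pattern the javadoc asks for, with a JDK ScheduledExecutorService standing in for Netty's HashedWheelTimer so the example runs without Netty on the classpath. The class and method names are hypothetical, not Infinispan's actual code:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of the shared-timer pattern: one timer (and therefore one timer
// thread) for the whole server, instead of one per pipeline factory or
// connection, which is the leak reported above.
public class SharedTimerPipelineFactory {

    // Created exactly once for the whole application.
    private static final ScheduledExecutorService SHARED_TIMER =
            Executors.newSingleThreadScheduledExecutor();

    // Called for every new connection; returns the shared instance rather
    // than doing "new HashedWheelTimer()" each time.
    public static ScheduledExecutorService timerFor(Object channel) {
        return SHARED_TIMER;
    }

    public static void main(String[] args) {
        // Simulate 1000 connections each scheduling an idle-timeout check:
        // still only one timer thread exists.
        for (int i = 0; i < 1000; i++) {
            timerFor("conn-" + i).schedule(() -> { /* close idle channel */ },
                    15, TimeUnit.SECONDS);
        }
        System.out.println("same timer for all connections: "
                + (timerFor("a") == timerFor("b")));
        SHARED_TIMER.shutdownNow();
    }
}
```

The fix in the server would follow the same shape: hoist the timer out of the per-pipeline factory into a field created once and passed in.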
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.jboss.org/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-1621) SharedResourceMisuseDetector warning
by Michal Linhard (Created) (JIRA)
SharedResourceMisuseDetector warning
------------------------------------
Key: ISPN-1621
URL: https://issues.jboss.org/browse/ISPN-1621
Project: Infinispan
Issue Type: Bug
Components: Cache Server
Affects Versions: 5.1.0.CR1
Reporter: Michal Linhard
Assignee: Manik Surtani
Priority: Minor
This warning message can be seen when starting EDG:
{code}
node1:
09:07:45,110 WARNING [org.jboss.netty.util.internal.SharedResourceMisuseDetector] (MemcachedServerMaster-2 ([id: 0x4d815146, /10.16.90.106:11222])) You are creating too many HashedWheelTimer instances. HashedWheelTimer is a shared resource that must be reused across the application, so that only a few instances are created.
node2:
09:07:45,381 WARNING [org.jboss.netty.util.internal.SharedResourceMisuseDetector] (HotRodServerMaster-2 ([id: 0x35612600, /10.16.90.107:11222])) You are creating too many HashedWheelTimer instances. HashedWheelTimer is a shared resource that must be reused across the application, so that only a few instances are created.
node3:
09:07:45,526 WARNING [org.jboss.netty.util.internal.SharedResourceMisuseDetector] (HotRodServerMaster-2 ([id: 0x4263f6ea, /10.16.90.108:11222])) You are creating too many HashedWheelTimer instances. HashedWheelTimer is a shared resource that must be reused across the application, so that only a few instances are created.
node4:
09:07:48,125 WARNING [org.jboss.netty.util.internal.SharedResourceMisuseDetector] (HotRodServerMaster-2 ([id: 0x7d05e560, /10.16.90.109:11222])) You are creating too many HashedWheelTimer instances. HashedWheelTimer is a shared resource that must be reused across the application, so that only a few instances are created.
{code}
EDG build: http://hudson.qa.jboss.com/hudson/view/EDG6/view/EDG-QE/job/edg-60-build-...
hudson job with the issue: http://hudson.qa.jboss.com/hudson/view/EDG6/view/EDG-REPORTS-PERF/job/edg...
[JBoss JIRA] Created: (ISPN-1319) topology changes makes entire cluster inconsistent
by Jan Slezak (JIRA)
topology changes makes entire cluster inconsistent
--------------------------------------------------
Key: ISPN-1319
URL: https://issues.jboss.org/browse/ISPN-1319
Project: Infinispan
Issue Type: Bug
Components: Distributed Cache
Affects Versions: 5.0.0.FINAL
Environment: linux / devel environment / 1.6.0_26
Reporter: Jan Slezak
Assignee: Manik Surtani
Priority: Blocker
An invocation timeout exception in a replicated or distributed environment (the issue occurred in both) during a topology change on the producer node can leave the data in an inconsistent state on the other nodes (in my case, n+1 entities on some of the nodes). I tried this with many TM / Infinispan configurations in sync mode using DummyTransactionManagerLookup. The behavior is the same with invocation batching.
example xml:
{code}
<infinispan
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="urn:infinispan:config:5.0 http://www.infinispan.org/schemas/infinispan-config-5.0.xsd"
      xmlns="urn:infinispan:config:5.0">
   <global>
      <transport clusterName="ifprotocluster"/>
   </global>
   <default>
      <clustering mode="distribution">
         <l1 enabled="false"/>
         <hash numOwners="100" rehashRpcTimeout="120000"/>
         <sync/>
      </clustering>
      <transaction
            transactionManagerLookupClass="org.infinispan.transaction.lookup.DummyTransactionManagerLookup"
            syncRollbackPhase="true"
            syncCommitPhase="true"
            useEagerLocking="true"/>
   </default>
</infinispan>
{code}
[JBoss JIRA] (ISPN-1441) Better and more sensible executor configuration
by Galder Zamarreño (Created) (JIRA)
Better and more sensible executor configuration
-----------------------------------------------
Key: ISPN-1441
URL: https://issues.jboss.org/browse/ISPN-1441
Project: Infinispan
Issue Type: Enhancement
Components: Configuration
Reporter: Galder Zamarreño
Assignee: Galder Zamarreño
Fix For: 6.0.0.FINAL
Dan and I had a chat about ISPN-1396 and we both agreed that a better configuration and management approach is needed for Infinispan's executors:
- Firstly, out of the box in SE environments, Infinispan should configure its executors with newCachedThreadPool, because a cached pool provides better queuing performance than a fixed thread pool.
- In a managed environment (i.e. AS) this won't fly, which is why all executors need to be injectable. This should be possible once ISPN-1396 is in place.
- If we go with cached thread pools for SE environments, we no longer need any of the executor properties. Besides, these can confuse users (we know of at least one case where things went wrong due to bad configuration here). So the configuration would be limited to injecting executors: if you need specific executor settings, pass us the right executors. To aid these cases, we could provide executor builders with common executor configurations for managed environments (i.e. we could borrow settings from AS?)
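A rough sketch of what this could look like: a cached thread pool as the SE default, with managed environments injecting their own executor. The ExecutorConfig class and its constructors below are hypothetical, not an existing Infinispan API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

// Hypothetical sketch of the proposal above: no pool-size properties to
// misconfigure. SE users get a cached thread pool out of the box; a managed
// environment (e.g. AS) injects a pre-built executor instead.
public class ExecutorConfig {

    private final ExecutorService asyncExecutor;

    // SE default: cached thread pool.
    public ExecutorConfig() {
        this(Executors.newCachedThreadPool());
    }

    // Managed environments pass in whatever executor their container built.
    public ExecutorConfig(ExecutorService injected) {
        this.asyncExecutor = injected;
    }

    public ExecutorService asyncExecutor() {
        return asyncExecutor;
    }

    public static void main(String[] args) {
        ExecutorConfig cfg = new ExecutorConfig();
        // A cached thread pool starts with zero core threads and an
        // effectively unbounded maximum, growing and shrinking on demand.
        ThreadPoolExecutor pool = (ThreadPoolExecutor) cfg.asyncExecutor();
        System.out.println("core=" + pool.getCorePoolSize()
                + " max=" + pool.getMaximumPoolSize());
        pool.shutdown();
    }
}
```

The injected-constructor path is what the proposed executor builders would feed in a managed environment.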
[JBoss JIRA] (ISPN-1751) NPE in hotrod client
by Michal Linhard (JIRA)
Michal Linhard created ISPN-1751:
------------------------------------
Summary: NPE in hotrod client
Key: ISPN-1751
URL: https://issues.jboss.org/browse/ISPN-1751
Project: Infinispan
Issue Type: Bug
Affects Versions: 5.1.0.CR4
Reporter: Michal Linhard
Assignee: Manik Surtani
I've got this one in two recent stress test runs:
{code}
java.lang.NullPointerException
at sun.nio.ch.Util.atBugLevel(Util.java:448)
at sun.nio.ch.SelectorImpl.<init>(SelectorImpl.java:40)
at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:47)
at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:18)
at sun.nio.ch.Util.getTemporarySelector(Util.java:245)
at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:92)
at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransport.<init>(TcpTransport.java:80)
at org.infinispan.client.hotrod.impl.transport.tcp.TransportObjectFactory.makeObject(TransportObjectFactory.java:57)
at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1179)
at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.borrowTransportFromPool(TcpTransportFactory.java:250)
at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.getTransport(TcpTransportFactory.java:143)
at org.jboss.qa.edg.adapter.HotRodAdapter$AdapterTcpTransportFactory.getTransport(HotRodAdapter.java:104)
at org.infinispan.client.hotrod.RemoteCacheManager.ping(RemoteCacheManager.java:529)
at org.infinispan.client.hotrod.RemoteCacheManager.createRemoteCache(RemoteCacheManager.java:511)
at org.infinispan.client.hotrod.RemoteCacheManager.getCache(RemoteCacheManager.java:433)
at org.infinispan.client.hotrod.RemoteCacheManager.getCache(RemoteCacheManager.java:429)
at org.jboss.qa.edg.adapter.HotRodAdapter.getCache(HotRodAdapter.java:216)
at org.jboss.smartfrog.edg.loaddriver.DriverNodeImpl$ClientThread.init(DriverNodeImpl.java:104)
at org.jboss.smartfrog.edg.loaddriver.DriverNodeImpl$ClientThread.run(DriverNodeImpl.java:317)
{code}
[JBoss JIRA] (ISPN-1801) Enable virtual nodes in the default configuration
by Mircea Markus (JIRA)
Mircea Markus created ISPN-1801:
-----------------------------------
Summary: Enable virtual nodes in the default configuration
Key: ISPN-1801
URL: https://issues.jboss.org/browse/ISPN-1801
Project: Infinispan
Issue Type: Feature Request
Components: Distributed Cache
Affects Versions: 5.1.0.FINAL
Reporter: Mircea Markus
Assignee: Manik Surtani
Fix For: 5.1.1.CR1, 5.1.1.FINAL
ATM the default value for virtualNodes is 1. This means that the wheel share each node gets can be very uneven in small clusters (up to 15 nodes).
Increasing this value even to a small number (10-30) would significantly improve each node's share of the wheel and the chances of a well-balanced data distribution across the cluster.
Here are some suggestions from an email from Dan:
<snip>
I've been working on a test to search for an optimal default value here:
https://github.com/danberindei/infinispan/commit/983c0328dc40be9609fcabb7...
I'm measuring both the number of keys for which a node is primary
owner and the number of keys for which it is one of the owners
compared to the ideal distribution (K/N keys on each node). The former
tells us how much more work the node could be expected to do, the
latter how much memory the node is likely to need.
I'm only running 10000 loops, so the max figure is not the absolute
maximum. But it's certainly bigger than the 0.9999 percentile.
The full results are here:
http://fpaste.org/cI1r/
The uniformity of the distribution goes up with the number of virtual
nodes but down with the number of physical nodes. I think we should go
with a default of 48 nodes (or 50 if you prefer decimal). With 32
nodes, there's only a 0.1% chance that a node will hold more than 1.35
* K/N keys, and a 0.1% chance that the node will be primary owner for
more than 1.5 * K/N keys.
We could go higher, but we run the risk of node addresses
colliding on the hash wheel. According to the formula on the Birthday
Paradox page (http://en.wikipedia.org/wiki/Birthday_problem), we only
need 2072 addresses on our 2^31 hash wheel to get a 0.1% chance of
collision. That means 21 nodes * 96 virtual nodes, 32 nodes * 64
virtual nodes or 43 nodes * 48 virtual nodes.
</snip>
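The birthday-paradox figure quoted above is easy to check with the standard approximation p ≈ 1 - exp(-n(n-1)/2d) for n addresses on a wheel of d slots. A quick sketch (class and method names are illustrative only):

```java
// Quick check of the numbers quoted above: the probability of an address
// collision for n addresses on a 2^31-slot hash wheel, using the standard
// birthday-problem approximation p ≈ 1 - exp(-n(n-1) / (2d)).
public class WheelCollision {

    static final double WHEEL_SLOTS = Math.pow(2, 31);

    static double collisionProbability(long addresses) {
        double pairs = (double) addresses * (addresses - 1) / 2.0;
        return 1.0 - Math.exp(-pairs / WHEEL_SLOTS);
    }

    public static void main(String[] args) {
        // ~2072 addresses give a ~0.1% chance of at least one collision.
        System.out.printf("p(2072)  = %.4f%%%n", 100 * collisionProbability(2072));
        // The node * virtualNode products quoted above all land near that limit:
        System.out.printf("p(21*96) = %.4f%%%n", 100 * collisionProbability(21L * 96));
        System.out.printf("p(32*64) = %.4f%%%n", 100 * collisionProbability(32L * 64));
        System.out.printf("p(43*48) = %.4f%%%n", 100 * collisionProbability(43L * 48));
    }
}
```

All three node/virtual-node combinations stay at or just under the 0.1% collision bound, which is why 48 virtual nodes is the suggested ceiling for the default.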