[JBoss JIRA] (ISPN-1834) Improve the way custom objects are injected into ExtendedModuleCommandFactory impls
by Galder Zamarreño (JIRA)
Galder Zamarreño created ISPN-1834:
--------------------------------------
Summary: Improve the way custom objects are injected into ExtendedModuleCommandFactory impls
Key: ISPN-1834
URL: https://issues.jboss.org/browse/ISPN-1834
Project: Infinispan
Issue Type: Enhancement
Reporter: Galder Zamarreño
Assignee: Manik Surtani
Fix For: 6.0.0.FINAL
Retrieving the ExtendedModuleCommandFactory associated with a cache manager is a PITA right now; you have to do:
{code}
GlobalComponentRegistry globalCr = cache.getComponentRegistry().getGlobalComponentRegistry();
// TODO: This is a hack, make it easier to retrieve in Infinispan!
return (CacheCommandFactory) ((Map) globalCr.getComponent("org.infinispan.modules.command.factories"))
      .values().iterator().next();
{code}
Provide a cleaner way of initialising cache command factories with custom objects that the factory can plug into the remote commands. Example: evict-all in 2LC, where commands need to know the cache region (a Hibernate construct) on which to operate.
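To make the request concrete, here is a hypothetical sketch of a friendlier lookup (the ModuleCommandFactories helper and its factoryFor method are invented for this example; they are not actual Infinispan API):
{code}
import java.util.Map;

import org.infinispan.commands.module.ExtendedModuleCommandFactory;

// Hypothetical helper the registry could expose: resolve a module's command
// factory directly by module name, replacing the cast-and-iterate hack above.
public final class ModuleCommandFactories {

   private final Map<String, ExtendedModuleCommandFactory> factoriesByModule;

   public ModuleCommandFactories(Map<String, ExtendedModuleCommandFactory> factoriesByModule) {
      this.factoriesByModule = factoriesByModule;
   }

   public ExtendedModuleCommandFactory factoryFor(String moduleName) {
      ExtendedModuleCommandFactory factory = factoriesByModule.get(moduleName);
      if (factory == null)
         throw new IllegalArgumentException("No command factory registered for module: " + moduleName);
      return factory;
   }
}
{code}
A module integration (e.g. the Hibernate 2LC case above) could then obtain its factory in one call and hand it the custom objects, such as the cache region, before any remote command executes.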
[JBoss JIRA] (ISPN-1611) Hotrod server memory leak after enabling idle timeout
by hs z (Created) (JIRA)
Hotrod server memory leak after enabling idle timeout
-----------------------------------------------------
Key: ISPN-1611
URL: https://issues.jboss.org/browse/ISPN-1611
Project: Infinispan
Issue Type: Bug
Components: Cache Server
Affects Versions: 5.1.0.CR1
Reporter: hs z
Assignee: Manik Surtani
I set --idle_timeout=15 as a parameter; the thread count and memory usage grow continuously. I took a heap dump and found that many Netty HashedWheelTimer instances were consuming a large amount of memory.
The HashedWheelTimer javadoc says: "Do not create many instances. HashedWheelTimer creates a new thread whenever it is instantiated and started. Therefore, you should make sure to create only one instance and share it across your application. One of the common mistakes, that makes your application unresponsive, is to create a new instance in ChannelPipelineFactory, which results in the creation of a new thread for every connection." However, Infinispan's NettyChannelPipelineFactory.scala contains "timer = new HashedWheelTimer", which is exactly this misuse.
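For reference, a minimal sketch of the pattern the javadoc recommends, written against the Netty 3 API (the class name and structure here are illustrative, not Infinispan's actual server code, which is in Scala): one shared HashedWheelTimer for the whole server, reused by every pipeline.
{code}
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.handler.timeout.IdleStateHandler;
import org.jboss.netty.util.HashedWheelTimer;
import org.jboss.netty.util.Timer;

public class SharedTimerPipelineFactory implements ChannelPipelineFactory {

   // One timer (and therefore one worker thread) for the whole server,
   // instead of one per pipeline factory instance.
   private static final Timer TIMER = new HashedWheelTimer();

   private final int idleTimeoutSeconds;

   public SharedTimerPipelineFactory(int idleTimeoutSeconds) {
      this.idleTimeoutSeconds = idleTimeoutSeconds;
   }

   @Override
   public ChannelPipeline getPipeline() throws Exception {
      ChannelPipeline pipeline = Channels.pipeline();
      // The idle handler borrows the shared timer; no new thread is created here.
      pipeline.addLast("idleHandler", new IdleStateHandler(TIMER, 0, 0, idleTimeoutSeconds));
      // ... protocol decoder, encoder and request handler would follow ...
      return pipeline;
   }
}
{code}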
[JBoss JIRA] (ISPN-1621) SharedResourceMisuseDetector warning
by Michal Linhard (Created) (JIRA)
SharedResourceMisuseDetector warning
------------------------------------
Key: ISPN-1621
URL: https://issues.jboss.org/browse/ISPN-1621
Project: Infinispan
Issue Type: Bug
Components: Cache Server
Affects Versions: 5.1.0.CR1
Reporter: Michal Linhard
Assignee: Manik Surtani
Priority: Minor
This warning message can be seen when starting EDG:
{code}
node1:
09:07:45,110 WARNING [org.jboss.netty.util.internal.SharedResourceMisuseDetector] (MemcachedServerMaster-2 ([id: 0x4d815146, /10.16.90.106:11222])) You are creating too many HashedWheelTimer instances. HashedWheelTimer is a shared resource that must be reused across the application, so that only a few instances are created.
node2:
09:07:45,381 WARNING [org.jboss.netty.util.internal.SharedResourceMisuseDetector] (HotRodServerMaster-2 ([id: 0x35612600, /10.16.90.107:11222])) You are creating too many HashedWheelTimer instances. HashedWheelTimer is a shared resource that must be reused across the application, so that only a few instances are created.
node3:
09:07:45,526 WARNING [org.jboss.netty.util.internal.SharedResourceMisuseDetector] (HotRodServerMaster-2 ([id: 0x4263f6ea, /10.16.90.108:11222])) You are creating too many HashedWheelTimer instances. HashedWheelTimer is a shared resource that must be reused across the application, so that only a few instances are created.
node4:
09:07:48,125 WARNING [org.jboss.netty.util.internal.SharedResourceMisuseDetector] (HotRodServerMaster-2 ([id: 0x7d05e560, /10.16.90.109:11222])) You are creating too many HashedWheelTimer instances. HashedWheelTimer is a shared resource that must be reused across the application, so that only a few instances are created.
{code}
EDG build: http://hudson.qa.jboss.com/hudson/view/EDG6/view/EDG-QE/job/edg-60-build-...
hudson job with the issue: http://hudson.qa.jboss.com/hudson/view/EDG6/view/EDG-REPORTS-PERF/job/edg...
[JBoss JIRA] (ISPN-1872) Coordinator hangs when cache is loaded to it and l1cache enabled in cluster
by Matt Davis (JIRA)
Matt Davis created ISPN-1872:
--------------------------------
Summary: Coordinator hangs when cache is loaded to it and l1cache enabled in cluster
Key: ISPN-1872
URL: https://issues.jboss.org/browse/ISPN-1872
Project: Infinispan
Issue Type: Bug
Components: Distributed Cache
Affects Versions: 5.1.1.FINAL
Reporter: Matt Davis
Assignee: Manik Surtani
Priority: Blocker
I scaled from 3 nodes to 4 nodes and ran into this issue with both 5.1.1 and trunk (a 5.2.0 snapshot from 2.18.12).
I altered the slider in the GUI demo to allow for 1,000,000 cache entries. If I generate the cache entries on the coordinator node, the following exception occurs:
{code}
2012-02-15 12:40:49,633 ERROR [InvocationContextInterceptor] (pool-1-thread-1) ISPN000136: Execution error
org.infinispan.util.concurrent.TimeoutException: Replication timeout for muskrat-626
    at org.infinispan.remoting.transport.AbstractTransport.parseResponseAndAddToResponseList(AbstractTransport.java:99)
    at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:461)
    at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:148)
    at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:169)
    at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:219)
    at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:206)
    at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:201)
    at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:197)
    at org.infinispan.interceptors.DistributionInterceptor.handleWriteCommand(DistributionInterceptor.java:494)
    at org.infinispan.interceptors.DistributionInterceptor.visitPutMapCommand(DistributionInterceptor.java:285)
    at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:66)
    at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:116)
    at org.infinispan.interceptors.EntryWrappingInterceptor.invokeNextAndApplyChanges(EntryWrappingInterceptor.java:199)
    at org.infinispan.interceptors.EntryWrappingInterceptor.visitPutMapCommand(EntryWrappingInterceptor.java:160)
    at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:66)
    at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:116)
    at org.infinispan.interceptors.locking.NonTransactionalLockingInterceptor.visitPutMapCommand(NonTransactionalLockingInterceptor.java:84)
    at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:66)
    at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:116)
    at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:130)
    at org.infinispan.commands.AbstractVisitor.visitPutMapCommand(AbstractVisitor.java:77)
    at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:66)
    at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:116)
    at org.infinispan.interceptors.StateTransferLockInterceptor.handleWithRetries(StateTransferLockInterceptor.java:207)
    at org.infinispan.interceptors.StateTransferLockInterceptor.handleWriteCommand(StateTransferLockInterceptor.java:180)
    at org.infinispan.interceptors.StateTransferLockInterceptor.visitPutMapCommand(StateTransferLockInterceptor.java:171)
    at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:66)
    at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:116)
    at org.infinispan.interceptors.CacheMgmtInterceptor.visitPutMapCommand(CacheMgmtInterceptor.java:110)
    at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:66)
    at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:116)
    at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:130)
    at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:89)
    at org.infinispan.commands.AbstractVisitor.visitPutMapCommand(AbstractVisitor.java:77)
    at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:66)
    at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:345)
    at org.infinispan.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:941)
    at org.infinispan.CacheImpl.putAll(CacheImpl.java:678)
    at org.infinispan.CacheImpl.putAll(CacheImpl.java:671)
    at org.infinispan.CacheSupport.putAll(CacheSupport.java:66)
    at org.infinispan.demo.InfinispanDemo$7$1.run(InfinispanDemo.java:251)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:679)
{code}
At that point the GUI on the coordinator becomes unresponsive. I attached jconsole to the 4 nodes and forced a System.gc(). The coordinator node sits at 62MB of heap after GC, while the other 3 nodes sit around 280MB; cache distribution has not succeeded on this node. If I kill one of the other nodes, the coordinator instantly becomes responsive. In the final state the coordinator ends up with 1/5 of the load, while the other 2 nodes each hold about 2/5 of the load.
The problem only occurs when l1cache is enabled or the data is generated on the coordinator node, and it only shows up when scaling from 3 to 4 nodes.
Here is the original cache configuration for all 3 nodes:
{code}
<infinispan
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="urn:infinispan:config:5.2 http://www.infinispan.org/schemas/infinispan-config-5.2.xsd"
      xmlns="urn:infinispan:config:5.2">
   <global>
      <transport clusterName="demoCluster"/>
      <globalJmxStatistics enabled="true"/>
   </global>
   <default>
      <jmxStatistics enabled="true"/>
      <clustering mode="distribution">
         <l1 enabled="true" lifespan="60000"/>
         <hash numOwners="2" rehashRpcTimeout="120000"/>
         <sync/>
      </clustering>
   </default>
</infinispan>
{code}
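For comparison, here is a rough programmatic equivalent of the XML above using the Infinispan 5.x fluent configuration builders (an approximate sketch written from my recollection of the 5.x builder API, not a verified drop-in; it omits the rehashRpcTimeout attribute):
{code}
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfiguration;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class DemoClusterConfig {

   public static DefaultCacheManager createCacheManager() {
      GlobalConfiguration global = new GlobalConfigurationBuilder()
            .transport().defaultTransport().clusterName("demoCluster")
            .globalJmxStatistics().enable()
            .build();
      Configuration config = new ConfigurationBuilder()
            .jmxStatistics().enable()
            .clustering().cacheMode(CacheMode.DIST_SYNC)  // distribution + <sync/>
            .l1().enable().lifespan(60000)                // <l1 enabled="true" lifespan="60000"/>
            .hash().numOwners(2)                          // <hash numOwners="2"/>
            .build();
      return new DefaultCacheManager(global, config);
   }
}
{code}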
[JBoss JIRA] Created: (ISPN-1319) topology changes make entire cluster inconsistent
by Jan Slezak (JIRA)
topology changes make entire cluster inconsistent
-------------------------------------------------
Key: ISPN-1319
URL: https://issues.jboss.org/browse/ISPN-1319
Project: Infinispan
Issue Type: Bug
Components: Distributed Cache
Affects Versions: 5.0.0.FINAL
Environment: linux / devel environment / 1.6.0_26
Reporter: Jan Slezak
Assignee: Manik Surtani
Priority: Blocker
A timeout exception is thrown in a replicated or distributed environment (the issue occurred in both) during a topology change on the producer node; after that, the data may end up in an inconsistent state on the other nodes (in my case, n+1 entities on some of the nodes). I tried this with many TM/ISPN configurations in sync mode using DummyTransactionManagerLookup. The behavior is the same with invocation batching ...
Example XML:
{code}
<infinispan
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="urn:infinispan:config:5.0 http://www.infinispan.org/schemas/infinispan-config-5.0.xsd"
      xmlns="urn:infinispan:config:5.0">
   <global>
      <transport clusterName="ifprotocluster"/>
   </global>
   <default>
      <clustering mode="distribution">
         <l1 enabled="false"/>
         <hash numOwners="100" rehashRpcTimeout="120000"/>
         <sync/>
      </clustering>
      <transaction
            transactionManagerLookupClass="org.infinispan.transaction.lookup.DummyTransactionManagerLookup"
            syncRollbackPhase="true"
            syncCommitPhase="true"
            useEagerLocking="true"/>
   </default>
</infinispan>
{code}
[JBoss JIRA] (ISPN-1441) Better and more sensible executor configuration
by Galder Zamarreño (Created) (JIRA)
Better and more sensible executor configuration
-----------------------------------------------
Key: ISPN-1441
URL: https://issues.jboss.org/browse/ISPN-1441
Project: Infinispan
Issue Type: Enhancement
Components: Configuration
Reporter: Galder Zamarreño
Assignee: Galder Zamarreño
Fix For: 6.0.0.FINAL
Dan and I had a chat about ISPN-1396 and we both agreed that a better configuration and management approach is needed for Infinispan's executors:
- Firstly, out of the box in SE environments, Infinispan should configure its executors with newCachedThreadPool, because cached thread pools provide better queuing performance than fixed thread pools.
- In a managed environment (i.e. AS) this won't fly, which is why all executors need to be injectable. This should be possible once ISPN-1396 is in place.
- So, if we go for cached thread pools in SE environments, we no longer need any of the executor properties. Besides, these can be confusing for the user (we know of at least one case where things went wrong due to bad configuration here). The configuration would then be limited to injecting executors: if you need specific executor settings, pass us the right executors. To aid these cases, we could provide executor builders with common executor configurations for managed environments (i.e. we could borrow settings from AS?), as sketched below.
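To make that direction concrete, here is a hypothetical sketch (the ExecutorConfig class and its method names are invented for this example, not actual or proposed Infinispan API): the SE default is a cached thread pool with no sizing properties to misconfigure, while a managed environment injects its own fully configured executor.
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public final class ExecutorConfig {

   private final ExecutorService asyncExecutor;

   private ExecutorConfig(ExecutorService asyncExecutor) {
      this.asyncExecutor = asyncExecutor;
   }

   // SE default: a cached thread pool, so no pool-sizing properties exist to get wrong.
   public static ExecutorConfig standalone() {
      return new ExecutorConfig(Executors.newCachedThreadPool());
   }

   // Managed environment (e.g. AS): the container passes in its own executor,
   // keeping all sizing and thread-factory policy under its control.
   public static ExecutorConfig injected(ExecutorService managedExecutor) {
      return new ExecutorConfig(managedExecutor);
   }

   public ExecutorService asyncExecutor() {
      return asyncExecutor;
   }
}
{code}
A managed environment would call ExecutorConfig.injected(containerExecutor) with whatever executor the container already manages, which is where the proposed executor builders with common managed-environment settings could help.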