[JBoss JIRA] Created: (ISPN-1369) Make c3p0 an optional dependency of cachestore-jdbc
by Paul Ferraro (JIRA)
Make c3p0 an optional dependency of cachestore-jdbc
--------------------------------------------------
Key: ISPN-1369
URL: https://issues.jboss.org/browse/ISPN-1369
Project: Infinispan
Issue Type: Task
Components: Loaders and Stores
Affects Versions: 5.0.0.FINAL
Reporter: Paul Ferraro
Assignee: Manik Surtani
Since c3p0 is only used for pooled connection factories, it should not be a required runtime dependency. Specifically, AS7 would rather not pull in c3p0 since AS7 will always use this cache store in conjunction with a managed connection factory. To make c3p0 optional, c3p0 classes should only be imported by the PooledConnectionFactory itself. Currently, there are c3p0 import statements within org.infinispan.loaders.jdbc.logging.Log which prevent c3p0 from being "optional".
N.B. To indicate an optional dependency in Maven:
<dependency>
   <groupId>c3p0</groupId>
   <artifactId>c3p0</artifactId>
   <version>...</version>
   <optional>true</optional>
</dependency>
This will prevent c3p0 from being treated as a transitive dependency of org.infinispan:infinispan-cachestore-jdbc.
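For illustration, a minimal sketch (class and method names are hypothetical, not the actual cache store code) of confining every c3p0 import to the pooled connection factory, so that org.infinispan.loaders.jdbc.logging.Log only ever deals with plain strings:
{code:java}
// Hypothetical sketch: keep all c3p0 imports inside the pooled factory so the
// dependency stays optional at runtime (names are illustrative).
import java.sql.Connection;
import java.sql.SQLException;
import com.mchange.v2.c3p0.ComboPooledDataSource; // the only c3p0 import in the module

public class PooledConnectionFactorySketch {
   private ComboPooledDataSource pool;

   public void start(String driverClass, String url, String user, String password) {
      try {
         pool = new ComboPooledDataSource();
         pool.setDriverClass(driverClass);
         pool.setJdbcUrl(url);
         pool.setUser(user);
         pool.setPassword(password);
      } catch (Exception e) {
         // Pass plain strings to the logging layer instead of c3p0 types, so the
         // Log interface needs no c3p0 imports.
         throw new IllegalStateException("Could not start c3p0 pool for " + url, e);
      }
   }

   public Connection getConnection() throws SQLException {
      return pool.getConnection();
   }

   public void stop() {
      if (pool != null) pool.close();
   }
}
{code}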
[JBoss JIRA] Created: (ISPN-1115) Fine-grained AtomicMaps
by Manik Surtani (JIRA)
Fine-grained AtomicMaps
-----------------------
Key: ISPN-1115
URL: https://issues.jboss.org/browse/ISPN-1115
Project: Infinispan
Issue Type: Feature Request
Components: Core API
Affects Versions: 5.0.0.FINAL
Reporter: Manik Surtani
Assignee: Manik Surtani
Fix For: 5.1.0.BETA1, 5.1.0.Final
Atomic Maps are locked by acquiring a single lock on the entire map, and this causes concurrency issues for certain use cases. This JIRA is to allow concurrent modifications of keys within the AtomicMap, provided the keys do not overlap.
The design is as follows:
* {{AtomicMapLookup}} acquires a write lock (WL) on the {{AtomicMap}}'s key (AMK) when creating or removing an AtomicMap
* Modifications to the AtomicMap (which go through the {{AtomicMapProxy}}) do _not_ acquire a WL on AMK. Instead:
* {{AdvancedCache}} exposes a new API, {{applyDelta(K deltaAwareValueKey, Delta delta, Object... locksToAcquire)}}
* {{AtomicMapProxy}} makes changes by calling {{applyDelta}} and passing in the key within the map that is being modified, along with the delta to apply.
* The implementation could offer lock pooling to prevent a large number of locks being created for AtomicMaps with a large number of entries
This can then be used by other {{Delta}}/{{DeltaAware}} types in the future as well, perhaps JSON documents, etc.
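For illustration, a rough sketch of how the proxy could route a single-key modification through {{applyDelta}}. The {{Delta}} and cache types below are simplified stand-ins, not the real Infinispan interfaces:
{code:java}
// Illustrative sketch only; simplified stand-ins for the proposed API.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

interface Delta { void mergeInto(Map<Object, Object> state); }

// A delta that touches a single key inside the AtomicMap.
class PutDelta implements Delta {
   final Object key, value;
   PutDelta(Object key, Object value) { this.key = key; this.value = value; }
   public void mergeInto(Map<Object, Object> state) { state.put(key, value); }
}

// Hypothetical proxy: instead of write-locking the AtomicMap's own key (AMK),
// it ships a delta plus the fine-grained lock(s) to acquire.
class AtomicMapProxySketch {
   private final FakeAdvancedCache cache;
   private final Object atomicMapKey; // AMK

   AtomicMapProxySketch(FakeAdvancedCache cache, Object atomicMapKey) {
      this.cache = cache;
      this.atomicMapKey = atomicMapKey;
   }

   void put(Object key, Object value) {
      // Only 'key' inside the map is locked, so writers to different keys of
      // the same AtomicMap can proceed concurrently.
      cache.applyDelta(atomicMapKey, new PutDelta(key, value), key);
   }
}

// Minimal stand-in for the proposed AdvancedCache.applyDelta(...) method.
class FakeAdvancedCache {
   private final Map<Object, Map<Object, Object>> store = new ConcurrentHashMap<>();

   void applyDelta(Object deltaAwareValueKey, Delta delta, Object... locksToAcquire) {
      // A real implementation would acquire locks on 'locksToAcquire' (possibly
      // from a lock pool) rather than on deltaAwareValueKey itself.
      Map<Object, Object> state =
            store.computeIfAbsent(deltaAwareValueKey, k -> new ConcurrentHashMap<>());
      delta.mergeInto(state);
   }
}
{code}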
[JBoss JIRA] Created: (ISPN-1408) Reduce Hot Rod topology information memory consumption
by Galder Zamarreño (JIRA)
Reduce Hot Rod topology information memory consumption
------------------------------------------------------
Key: ISPN-1408
URL: https://issues.jboss.org/browse/ISPN-1408
Project: Infinispan
Issue Type: Enhancement
Components: Cache Server
Reporter: Galder Zamarreño
Assignee: Galder Zamarreño
Fix For: 5.2.0.FINAL
Hot Rod server storage of topology views should be trimmed. Currently it stores:
||key||value||
|view|view_id, [server_endpoint, cluster_addr, [cache_name:hash_id]]|
This results in storing far too much information, and particularly when virtual nodes are enabled, this explodes in size. It affects both Hot Rod memory consumption and network activity, since these ids and the server endpoint information need to be sent back to clients. For example, with 1000 virtual nodes, 8 nodes and 10 caches, and taking into account that each hash id is a 4-byte integer, that results in ~300 KB of data.
On top of that, this is stored in a single key which means that concurrent startups can be problematic.
There are a couple of things that can be done to reduce the size of this:
# Get clients to calculate hash ids themselves. This will require the configured number of virtual nodes to be sent back to clients, and also ISPN-1404, so that both the client and the server can hash node positions on the same source.
# Piggyback on JGroups' view id to avoid Hot Rod having to maintain its own view id.
The end result is:
||key||value||
|cluster|server_endpoint|
This reduces the memory consumption and avoids concurrent startup issues.
This would require a new version of the Hot Rod protocol, potentially a 1.1 version rather than 2.0.
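As a sketch of point 1 above (assuming the endpoint-string seed proposed in ISPN-1404; the hash function below is a placeholder, not the one Infinispan actually uses), a client could derive the hash wheel positions locally instead of receiving them from the server:
{code:java}
// Illustrative sketch: the client rebuilds hash wheel positions from the server
// endpoint and the configured number of virtual nodes.
import java.nio.charset.StandardCharsets;

public class ClientHashIdSketch {
   // Stand-in for whatever hash function both client and server agree on.
   static int hash(byte[] payload) {
      int h = 17;
      for (byte b : payload) h = 31 * h + b;
      return h;
   }

   static int[] positionsFor(String host, int port, int numVirtualNodes) {
      int[] positions = new int[numVirtualNodes];
      for (int v = 0; v < numVirtualNodes; v++) {
         String seed = host + ":" + port + ":" + v; // format proposed in ISPN-1404
         positions[v] = hash(seed.getBytes(StandardCharsets.UTF_8));
      }
      return positions;
   }

   public static void main(String[] args) {
      // With 1000 virtual nodes, 8 nodes and 10 caches the server would otherwise
      // have to ship ~8 * 1000 * 10 * 4-byte ids (~300 KB) to every client.
      int[] positions = positionsFor("192.168.0.1", 11222, 1000);
      System.out.println("first position: " + positions[0]);
   }
}
{code}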
[JBoss JIRA] Created: (ISPN-1404) Make hash id seed for a node's position in the hash wheel configurable
by Galder Zamarreño (JIRA)
Make hash id seed for a node's position in the hash wheel configurable
----------------------------------------------------------------------
Key: ISPN-1404
URL: https://issues.jboss.org/browse/ISPN-1404
Project: Infinispan
Issue Type: Enhancement
Components: Cache Server, Configuration, Distributed Cache
Reporter: Galder Zamarreño
Assignee: Galder Zamarreño
Fix For: 5.2.0.FINAL
In distributed caches, make hash seeds configurable. Right now, the seed is:
- Without virtual nodes -> org.infinispan.remoting.transport.jgroups.JGroupsAddress
- With virtual nodes -> org.infinispan.distribution.ch.VirtualAddress
To avoid Hot Rod servers having to keep hash ids in server-side memory, it would be helpful to make this configurable so that both the server and the client can hash on the same seed to find a node's position in the hash wheel. For example, a Hot Rod server would provide an implementation where the source would be:
- Without virtual nodes -> String UTF-8 byte[] based on: '<node_host>:<node_port>'
- With virtual nodes -> String UTF-8 byte[] based on: '<node_host>:<node_port>:<virtual_node_index>'
This would be doable with some kind of configurable SPI class. By default, Infinispan would use a class based on the Java classes above, and Hot Rod would plug in a different implementation that returns the byte[] in the format spelled out above.
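A rough sketch of what such an SPI could look like (interface and class names are purely illustrative, not an actual Infinispan API):
{code:java}
// Hypothetical SPI: a pluggable source of the bytes that are hashed to place a
// node (or virtual node) on the hash wheel.
import java.nio.charset.StandardCharsets;

interface HashSeedSource {
   byte[] seedFor(String host, int port, Integer virtualNodeIndex);
}

// Default behaviour today, shown only in spirit: hash the address object itself.
class AddressBasedSeed implements HashSeedSource {
   public byte[] seedFor(String host, int port, Integer virtualNodeIndex) {
      Object address = host + "/" + port; // stand-in for JGroupsAddress / VirtualAddress
      return address.toString().getBytes(StandardCharsets.UTF_8);
   }
}

// Hot Rod implementation: a stable string both client and server can rebuild.
class EndpointStringSeed implements HashSeedSource {
   public byte[] seedFor(String host, int port, Integer virtualNodeIndex) {
      String seed = virtualNodeIndex == null
            ? host + ":" + port
            : host + ":" + port + ":" + virtualNodeIndex;
      return seed.getBytes(StandardCharsets.UTF_8);
   }
}
{code}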
[JBoss JIRA] Created: (ISPN-439) Define and implement configuration file backward and forward compatibility policy
by Vladimir Blagojevic (JIRA)
Define and implement configuration file backward and forward compatibility policy
---------------------------------------------------------------------------------
Key: ISPN-439
URL: https://jira.jboss.org/jira/browse/ISPN-439
Project: Infinispan
Issue Type: Task
Affects Versions: 4.1.0.BETA1
Reporter: Vladimir Blagojevic
Assignee: Manik Surtani
Fix For: 4.1.0.CR1
Backward compatibility:
Process any configuration file from a previous minor release within the same major version. For example, Infinispan release 4.2 should process a configuration file produced for version 4.1 without any configuration file changes or other adjustments. For configuration options present in 4.2 but not in 4.1, assume default values. However, Infinispan version 5.0 is not required to process configuration files from previous major versions, i.e. 4.0...4.x.
Forward compatibility:
Do not process configuration files from any later version. For example, Infinispan release 4.1, given a configuration file from any succeeding version (i.e. 4.2...4.x, 5.x), should simply fail outright with a proper error message.
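A minimal sketch of the version check this policy implies (a hypothetical helper, not actual Infinispan code):
{code:java}
// Accept config files from the same major version with an equal or older minor
// version; reject anything newer or from a different major version.
public class ConfigVersionPolicySketch {
   static void checkCompatibility(int runtimeMajor, int runtimeMinor,
                                  int configMajor, int configMinor) {
      if (configMajor != runtimeMajor) {
         throw new IllegalArgumentException(
               "Configuration file targets major version " + configMajor
               + "; this release only reads " + runtimeMajor + ".x files");
      }
      if (configMinor > runtimeMinor) {
         throw new IllegalArgumentException(
               "Configuration file targets " + configMajor + "." + configMinor
               + ", which is newer than this release ("
               + runtimeMajor + "." + runtimeMinor + ")");
      }
      // Same major, equal or older minor: missing options fall back to defaults.
   }

   public static void main(String[] args) {
      checkCompatibility(4, 2, 4, 1); // OK: a 4.2 release reads a 4.1 file
      try {
         checkCompatibility(4, 1, 4, 2); // forward compatibility: rejected
      } catch (IllegalArgumentException expected) {
         System.out.println(expected.getMessage());
      }
   }
}
{code}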
[JBoss JIRA] Created: (ISPN-1375) Tx lock should not be locked for writing as long as the latch is open during stress
by Jan Slezak (JIRA)
Tx lock should not be locked for writing as long as the latch is open during stress
-----------------------------------------------------------------------------------
Key: ISPN-1375
URL: https://issues.jboss.org/browse/ISPN-1375
Project: Infinispan
Issue Type: Bug
Components: Distributed Cache
Affects Versions: 5.0.0.FINAL
Environment: Windows 2008 Server / cluster / 40 nodes
Reporter: Jan Slezak
Assignee: Manik Surtani
Priority: Critical
2011-09-05 12:37:17,846 ERROR local -> dist buckets| RpcManagerImpl| ISPN000073: Unexpected error while replicating
java.lang.IllegalStateException: Tx lock should not be locked for writing as long as the latch is open
at org.infinispan.distribution.TransactionLoggerImpl.acquireLockForTx(TransactionLoggerImpl.java:303)
at org.infinispan.distribution.TransactionLoggerImpl.beforeCommand(TransactionLoggerImpl.java:185)
at org.infinispan.interceptors.DistTxInterceptor.visitPutKeyValueCommand(DistTxInterceptor.java:125)
at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
at org.infinispan.interceptors.CacheMgmtInterceptor.visitPutKeyValueCommand(CacheMgmtInterceptor.java:114)
at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:104)
at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:64)
at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)
at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
at org.infinispan.interceptors.BatchingInterceptor.handleDefault(BatchingInterceptor.java:61)
at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)
at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:274)
at org.infinispan.commands.remote.BaseRpcInvokingCommand.processVisitableCommand(BaseRpcInvokingCommand.java:71)
at org.infinispan.commands.remote.SingleRpcCommand.perform(SingleRpcCommand.java:62)
at org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:181)
at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithWaitForBlocks(InboundInvocationHandlerImpl.java:195)
at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithRetry(InboundInvocationHandlerImpl.java:309)
at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:167)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:165)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:144)
at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:577)
at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:488)
at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:364)
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:770)
at org.jgroups.JChannel.up(JChannel.java:1484)
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1074)
at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER.up(STREAMING_STATE_TRANSFER.java:263)
at org.jgroups.protocols.FRAG2.up(FRAG2.java:189)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:908)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:246)
at org.jgroups.protocols.UNICAST.handleDataReceived(UNICAST.java:575)
at org.jgroups.protocols.UNICAST.up(UNICAST.java:294)
at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:703)
at org.jgroups.protocols.BARRIER.up(BARRIER.java:99)
at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:177)
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:275)
at org.jgroups.protocols.MERGE2.up(MERGE2.java:209)
at org.jgroups.protocols.Discovery.up(Discovery.java:293)
at org.jgroups.protocols.PING.up(PING.java:69)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1109)
at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1665)
at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1647)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
2011-09-05 12:37:17,924 ERROR local -> dist buckets| tionContextInterceptor| ISPN000136: Execution error
org.infinispan.CacheException: java.lang.IllegalStateException: Tx lock should not be locked for writing as long as the latch is open
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:145)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:156)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:265)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:252)
at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:248)
at org.infinispan.interceptors.DistributionInterceptor.handleWriteCommand(DistributionInterceptor.java:432)
at org.infinispan.interceptors.DistributionInterceptor.visitPutKeyValueCommand(DistributionInterceptor.java:214)
at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
at org.infinispan.interceptors.LockingInterceptor.visitPutKeyValueCommand(LockingInterceptor.java:294)
at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:133)
at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)
at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
at org.infinispan.interceptors.TxInterceptor.enlistWriteAndInvokeNext(TxInterceptor.java:214)
at org.infinispan.interceptors.TxInterceptor.visitPutKeyValueCommand(TxInterceptor.java:162)
at org.infinispan.interceptors.DistTxInterceptor.visitPutKeyValueCommand(DistTxInterceptor.java:129)
at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
at org.infinispan.interceptors.CacheMgmtInterceptor.visitPutKeyValueCommand(CacheMgmtInterceptor.java:114)
at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:104)
at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:64)
at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)
at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
at org.infinispan.interceptors.BatchingInterceptor.handleDefault(BatchingInterceptor.java:77)
at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)
at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:274)
at org.infinispan.CacheImpl.put(CacheImpl.java:515)
at org.infinispan.CacheSupport.put(CacheSupport.java:51)
at eu.ysoft.cache.cluster.bucket.IfspnBucketManager$BucketSynchronizer.run(IfspnBucketManager.java:223)
Caused by: java.lang.IllegalStateException: Tx lock should not be locked for writing as long as the latch is open
at org.infinispan.distribution.TransactionLoggerImpl.acquireLockForTx(TransactionLoggerImpl.java:303)
at org.infinispan.distribution.TransactionLoggerImpl.beforeCommand(TransactionLoggerImpl.java:185)
at org.infinispan.interceptors.DistTxInterceptor.visitPutKeyValueCommand(DistTxInterceptor.java:125)
at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
at org.infinispan.interceptors.CacheMgmtInterceptor.visitPutKeyValueCommand(CacheMgmtInterceptor.java:114)
at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:104)
at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:64)
at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)
at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
at org.infinispan.interceptors.BatchingInterceptor.handleDefault(BatchingInterceptor.java:61)
at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)
at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:274)
at org.infinispan.commands.remote.BaseRpcInvokingCommand.processVisitableCommand(BaseRpcInvokingCommand.java:71)
at org.infinispan.commands.remote.SingleRpcCommand.perform(SingleRpcCommand.java:62)
at org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:181)
at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithWaitForBlocks(InboundInvocationHandlerImpl.java:195)
at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithRetry(InboundInvocationHandlerImpl.java:309)
at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:167)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:165)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:144)
at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:577)
at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:488)
at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:364)
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:770)
at org.jgroups.JChannel.up(JChannel.java:1484)
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1074)
at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER.up(STREAMING_STATE_TRANSFER.java:263)
at org.jgroups.protocols.FRAG2.up(FRAG2.java:189)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:908)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:246)
at org.jgroups.protocols.UNICAST.handleDataReceived(UNICAST.java:575)
at org.jgroups.protocols.UNICAST.up(UNICAST.java:294)
at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:703)
at org.jgroups.protocols.BARRIER.up(BARRIER.java:99)
at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:177)
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:275)
at org.jgroups.protocols.MERGE2.up(MERGE2.java:209)
at org.jgroups.protocols.Discovery.up(Discovery.java:293)
at org.jgroups.protocols.PING.up(PING.java:69)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1109)
at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1665)
at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1647)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)