[JBoss JIRA] Created: (ISPN-639) IllegalStateException when tx context is propagated
by Mircea Markus (JIRA)
IllegalStateException when tx context is propagated
---------------------------------------------------
Key: ISPN-639
URL: https://jira.jboss.org/browse/ISPN-639
Project: Infinispan
Issue Type: Feature Request
Affects Versions: 4.1.0.Final
Reporter: Mircea Markus
Assignee: Mircea Markus
Fix For: 4.2.0.BETA1, 4.2.0.CR1, 4.2.0.Final
The exception below is thrown by a remote node. It is caused by ctx.isInTxScope() being evaluated even for remote contexts, which should not happen. The problem only manifests when an exception is thrown:
catch (Throwable t) {
   if (ctx.isInTxScope()) {
      TxInvocationContext txContext = (TxInvocationContext) ctx;
      if (txContext.isValidRunningTx()) {
         txContext.getRunningTransaction().setRollbackOnly();
      }
   }
   throw t;
}
The condition needs to be changed to: ctx.isInTxScope() && ctx.isOriginLocal()
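A minimal sketch of the corrected guard. The Ctx interface and RollbackGuard class below are simplified stand-ins, not the real Infinispan InvocationContext API; the point is only that the extra isOriginLocal() check keeps remote contexts away from getRunningTransaction(), which is what throws the IllegalStateException:

```java
public class RollbackGuard {
    // Hypothetical stand-in for Infinispan's InvocationContext.
    interface Ctx {
        boolean isInTxScope();
        boolean isOriginLocal();
    }

    // Returns true only when it is safe to mark the tx rollback-only:
    // the context must be transactional AND locally originated.
    static boolean shouldMarkRollbackOnly(Ctx ctx) {
        return ctx.isInTxScope() && ctx.isOriginLocal();
    }

    public static void main(String[] args) {
        Ctx remoteTx = new Ctx() {
            public boolean isInTxScope() { return true; }
            public boolean isOriginLocal() { return false; }
        };
        Ctx localTx = new Ctx() {
            public boolean isInTxScope() { return true; }
            public boolean isOriginLocal() { return true; }
        };
        System.out.println(shouldMarkRollbackOnly(remoteTx)); // remote: skip rollback marking
        System.out.println(shouldMarkRollbackOnly(localTx));  // local: mark rollback-only
    }
}
```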
2010-09-09 16:22:27,402 ERROR [org.infinispan.remoting.rpc.RpcManagerImpl] (RMI TCP Connection(13)-127.0.0.3) unexpected error while replicating: java.lang.IllegalStateException: this method can only be called for locally originated transactions!
at org.infinispan.context.impl.RemoteTxInvocationContext.getRunningTransaction(RemoteTxInvocationContext.java:28) [:4.1.0.FINAL]
at org.infinispan.context.impl.AbstractTxInvocationContext.isValidRunningTx(AbstractTxInvocationContext.java:32) [:4.1.0.FINAL]
at org.infinispan.interceptors.CallInterceptor.handleDefault(CallInterceptor.java:77) [:4.1.0.FINAL]
at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:57) [:4.1.0.FINAL]
at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:76) [:4.1.0.FINAL]
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118) [:4.1.0.FINAL]
at org.infinispan.interceptors.ReplicationInterceptor.handleCrudMethod(ReplicationInterceptor.java:107) [:4.1.0.FINAL]
at org.infinispan.interceptors.ReplicationInterceptor.visitPutKeyValueCommand(ReplicationInterceptor.java:78) [:4.1.0.FINAL]
at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:76) [:4.1.0.FINAL]
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118) [:4.1.0.FINAL]
at org.infinispan.interceptors.LockingInterceptor.visitPutKeyValueCommand(LockingInterceptor.java:198) [:4.1.0.FINAL]
at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:76) [:4.1.0.FINAL]
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118) [:4.1.0.FINAL]
at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:132) [:4.1.0.FINAL]
at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:57) [:4.1.0.FINAL]
at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:76) [:4.1.0.FINAL]
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118) [:4.1.0.FINAL]
at org.infinispan.interceptors.TxInterceptor.visitPrepareCommand(TxInterceptor.java:82) [:4.1.0.FINAL]
at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:120) [:4.1.0.FINAL]
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118) [:4.1.0.FINAL]
at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:132) [:4.1.0.FINAL]
at org.infinispan.commands.AbstractVisitor.visitPrepareCommand(AbstractVisitor.java:105) [:4.1.0.FINAL]
at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:120) [:4.1.0.FINAL]
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118) [:4.1.0.FINAL]
at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:57) [:4.1.0.FINAL]
at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:38) [:4.1.0.FINAL]
at org.infinispan.commands.AbstractVisitor.visitPrepareCommand(AbstractVisitor.java:105) [:4.1.0.FINAL]
at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:120) [:4.1.0.FINAL]
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118) [:4.1.0.FINAL]
at org.infinispan.interceptors.BatchingInterceptor.handleDefault(BatchingInterceptor.java:60) [:4.1.0.FINAL]
at org.infinispan.commands.AbstractVisitor.visitPrepareCommand(AbstractVisitor.java:105) [:4.1.0.FINAL]
at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:120) [:4.1.0.FINAL]
at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:273) [:4.1.0.FINAL]
at org.infinispan.commands.tx.PrepareCommand.perform(PrepareCommand.java:111) [:4.1.0.FINAL]
at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:76) [:4.1.0.FINAL]
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:176) [:4.1.0.FINAL]
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:148) [:4.1.0.FINAL]
at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:575) [:2.10.0.GA]
at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:486) [:2.10.0.GA]
at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:362) [:2.10.0.GA]
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:771) [:2.10.0.GA]
at org.jgroups.blocks.mux.MuxUpHandler.up(MuxUpHandler.java:136) [:2.10.0.GA]
at org.jgroups.JChannel.up(JChannel.java:1453) [:2.10.0.GA]
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:887) [:2.10.0.GA]
at org.jgroups.protocols.pbcast.FLUSH.up(FLUSH.java:483) [:2.10.0.GA]
at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER.up(STREAMING_STATE_TRANSFER.java:265) [:2.10.0.GA]
at org.jgroups.protocols.FRAG2.up(FRAG2.java:188) [:2.10.0.GA]
at org.jgroups.protocols.FC.up(FC.java:494) [:2.10.0.GA]
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:888) [:2.10.0.GA]
at org.jgroups.protocols.VIEW_SYNC.up(VIEW_SYNC.java:171) [:2.10.0.GA]
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:234) [:2.10.0.GA]
at org.jgroups.protocols.UNICAST.up(UNICAST.java:309) [:2.10.0.GA]
at org.jgroups.protocols.pbcast.NAKACK.handleMessage(NAKACK.java:813) [:2.10.0.GA]
at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:671) [:2.10.0.GA]
at org.jgroups.protocols.BARRIER.up(BARRIER.java:120) [:2.10.0.GA]
at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:132) [:2.10.0.GA]
at org.jgroups.protocols.FD.up(FD.java:266) [:2.10.0.GA]
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:270) [:2.10.0.GA]
at org.jgroups.protocols.MERGE2.up(MERGE2.java:210) [:2.10.0.GA]
at org.jgroups.protocols.Discovery.up(Discovery.java:281) [:2.10.0.GA]
at org.jgroups.protocols.PING.up(PING.java:67) [:2.10.0.GA]
at org.jgroups.stack.Protocol.up(Protocol.java:371) [:2.10.0.GA]
at org.jgroups.protocols.TP.passMessageUp(TP.java:1009) [:2.10.0.GA]
at org.jgroups.protocols.TP.access$100(TP.java:56) [:2.10.0.GA]
at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1549) [:2.10.0.GA]
at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1531) [:2.10.0.GA]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) [:1.6.0_18]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) [:1.6.0_18]
at java.lang.Thread.run(Thread.java:636) [:1.6.0_18]
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: https://jira.jboss.org/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] Created: (ISPN-650) DefaultCacheManager.getCache(...) should block until a newly created cache is started
by Paul Ferraro (JIRA)
DefaultCacheManager.getCache(...) should block until a newly created cache is started
-------------------------------------------------------------------------------------
Key: ISPN-650
URL: https://jira.jboss.org/browse/ISPN-650
Project: Infinispan
Issue Type: Feature Request
Components: Core API
Affects Versions: 4.2.0.ALPHA1
Reporter: Paul Ferraro
Assignee: Manik Surtani
Priority: Minor
Currently, DefaultCacheManager stores its caches in a concurrent map. When a call to getCache(...) is made for a cache that does not yet exist, the cache is created, put into the map (via putIfAbsent()) and then started. Consequently, a concurrent thread calling getCache(...) with the same cache name may end up with a cache that is not yet ready for use, leading to unexpected behavior.
Ideally, calls to getCache(...) should block if the requested cache is newly created, but not yet started. Requests for an already started cache should not block.
A possible implementation might involve storing the cache alongside a volatile single-use thread gate (e.g. new CountDownLatch(1)) in the concurrent map. The algorithm might look like:
1. Lookup the map entry (i.e. cache + gate) using the cache name
2. If the map entry exists, but no gate is present, return the cache.
3. If the map entry exists, and a gate is present, wait on the gate (ideally with a timeout) and return the cache.
4. If the map entry does not exist, create the cache and put it into the map (if absent) with a new thread gate.
4a. If the put was not successful (i.e. an entry already existed), goto 1.
5. Start the cache - if start fails, stop the cache and remove the map entry (threads waiting on its gate will timeout, oh well)
6. Open the gate
7. Remove the gate from the map entry
8. Return the cache
A horridly generic version of the above can be found in the HA-JDBC source code:
http://ha-jdbc.svn.sourceforge.net/viewvc/ha-jdbc/trunk/src/main/java/net...
and an example demonstrating use of a Registry with a MapRegistryStoreFactory can be found here:
http://ha-jdbc.svn.sourceforge.net/viewvc/ha-jdbc/trunk/src/main/java/net...
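The numbered steps above can be sketched as follows. This is a self-contained illustration under stated assumptions: CacheRegistry, Entry, createCache and startCache are hypothetical names, and a plain Object stands in for the real org.infinispan.Cache:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class CacheRegistry {
    static final class Entry {
        final Object cache;            // stand-in for the real cache
        volatile CountDownLatch gate;  // non-null until the cache is started
        Entry(Object cache) { this.cache = cache; this.gate = new CountDownLatch(1); }
    }

    private final ConcurrentMap<String, Entry> caches = new ConcurrentHashMap<>();

    public Object getCache(String name) throws InterruptedException {
        while (true) {
            Entry existing = caches.get(name);            // step 1: look up the entry
            if (existing != null) {
                CountDownLatch gate = existing.gate;
                if (gate != null) {
                    gate.await(30, TimeUnit.SECONDS);     // step 3: wait for start to finish
                }
                return existing.cache;                    // step 2: started, just return it
            }
            Entry fresh = new Entry(createCache(name));   // step 4: create cache + gate
            if (caches.putIfAbsent(name, fresh) != null) {
                continue;                                 // step 4a: lost the race, goto 1
            }
            try {
                startCache(fresh.cache);                  // step 5: start the cache
            } catch (RuntimeException e) {
                caches.remove(name);                      // waiters on the gate will time out
                throw e;
            }
            fresh.gate.countDown();                       // step 6: open the gate
            fresh.gate = null;                            // step 7: remove the gate
            return fresh.cache;                           // step 8: return the cache
        }
    }

    protected Object createCache(String name) { return new Object(); }
    protected void startCache(Object cache) { /* cache.start() in the real impl */ }

    public static void main(String[] args) throws InterruptedException {
        CacheRegistry r = new CacheRegistry();
        Object a = r.getCache("users");
        Object b = r.getCache("users");
        System.out.println(a == b); // both calls see the same started instance
    }
}
```

Note that a reader racing with steps 6-7 either sees the already-opened latch (await returns immediately) or sees a null gate, so both interleavings are safe.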
[JBoss JIRA] Created: (ISPN-615) Eager locking and key affinity optimisation
by Mircea Markus (JIRA)
Eager locking and key affinity optimisation
-------------------------------------------
Key: ISPN-615
URL: https://jira.jboss.org/browse/ISPN-615
Project: Infinispan
Issue Type: Feature Request
Affects Versions: 4.1.0.CR3
Reporter: Mircea Markus
Assignee: Manik Surtani
Fix For: 4.2.0.BETA1, 4.2.0.Final
The optimisation is for cache.lock() not to perform remote locks on ALL data owners, but only on the main data owner.
This way, if session affinity is used to enforce key locality, then cache.lock() would only acquire the lock within the same JVM - i.e. very good performance without losing eager locking's semantics. If the cluster changes, and the key is rehashed to a different node, then eager locking would do an RPC - but for many clusters topology changes are infrequent.
Consistency during node failures: suppose K is owned by node A and was locked by a tx originating on node B. If A fails, we can invalidate the transaction on B so that it rolls back.
Another interesting race condition Sanne raised is with re-hashing: "it needs to be atomic to know who is the owner and acquire the lock, or the owner might be moved and you're locking on the wrong node (only)". I think this is not related to this optimisation in particular, but stands for eager locking in general.
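To illustrate the change in RPC targets, here is a toy sketch. Everything in it is hypothetical (locateOwners is a stand-in for Infinispan's consistent hash, node names are made up); it only shows that the proposed optimisation shrinks the lock RPC target set from all owners to the primary owner:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class EagerLockSketch {
    // Toy consistent hash: the first returned member is the primary owner.
    static List<String> locateOwners(String key, List<String> members, int numOwners) {
        int idx = Math.abs(key.hashCode()) % members.size();
        List<String> owners = new ArrayList<>();
        for (int i = 0; i < numOwners; i++) {
            owners.add(members.get((idx + i) % members.size()));
        }
        return owners;
    }

    // Current behaviour: the lock command is RPC'd to ALL data owners.
    static List<String> lockTargetsAllOwners(String key, List<String> members) {
        return locateOwners(key, members, 2);
    }

    // Proposed optimisation: only the main data owner receives the lock RPC;
    // with session affinity that node is usually the local one, so no RPC at all.
    static List<String> lockTargetsPrimaryOnly(String key, List<String> members) {
        return locateOwners(key, members, 2).subList(0, 1);
    }

    public static void main(String[] args) {
        List<String> cluster = Arrays.asList("A", "B", "C");
        System.out.println(lockTargetsAllOwners("k1", cluster).size());   // all owners
        System.out.println(lockTargetsPrimaryOnly("k1", cluster).size()); // primary only
    }
}
```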