[infinispan-dev] Issue about propagation of the RollbackCommand in Infinispan 5.2.0
Sebastiano Peluso
peluso at gsd.inesc-id.pt
Fri Aug 17 08:16:39 EDT 2012
Hi all,
I have a question about the propagation of the RollbackCommand in
Infinispan 5.2.0 when I use the Optimistic locking scheme and the
Distribution clustering mode.
In particular, I have noticed that a RollbackCommand for a
transaction T is propagated to a set of nodes S even though T's
coordinator has never sent, and will never send, a PrepareCommand to
the nodes in S.
Let me try to clarify the issue with the following example.
Suppose you have a transaction T executing on node N0, and T writes
keys k0, k1, k2, ..., km (m+1 keys) before reaching the prepare phase.
In addition, suppose node Ni, with i=0,...,m, is ki's primary owner. If,
at prepare time, during the lock acquisition on the local node N0 (see
the visitPrepareCommand method in the OptimisticLockingInterceptor
class), T fails to acquire the lock on k0, an exception is thrown (e.g.
a TimeoutException) and T will be rolled back. In this case, when T
starts the rollback phase, it seems to me that a RollbackCommand is
multicast to all nodes Nj, with j=1,...,d, such that kj is sorted
before k0 during the local lock acquisition (see the acquireAllLocks
method in OptimisticLockingInterceptor), because:
- the shouldInvokeRemoteCommand method on the TxInvocationContext
returns true (see the BaseRpcInterceptor class);
- getAffectedKeys on the TxInvocationContext returns the set {k1,...,
kd} (see visitRollbackCommand in the DistributionInterceptor class).
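For what it's worth, here is a tiny self-contained sketch of the behaviour
I am describing. The class and method names are mine, not the actual
Infinispan code, and the lock order is just an example standing in for
whatever order acquireAllLocks uses:

```java
import java.util.*;

// A hypothetical, much-simplified model of the behaviour described above.
// Names are illustrative only, not the real Infinispan 5.2 API.
public class RollbackPropagationSketch {

    /**
     * Simulates the local lock acquisition: locks are taken following
     * lockOrder, and each key is registered in the transaction context
     * ("affected") once its lock is held. The acquisition fails
     * (e.g. with a TimeoutException) on failingKey.
     */
    static Set<String> affectedKeysAfterFailure(List<String> lockOrder, String failingKey) {
        Set<String> affected = new LinkedHashSet<>();
        for (String k : lockOrder) {
            if (k.equals(failingKey)) break; // lock timeout here -> T rolls back
            affected.add(k);                 // lock held, key registered as affected
        }
        return affected;
    }

    /**
     * Simulates computing the recipients of the RollbackCommand: the
     * primary owners of the affected keys, even though no PrepareCommand
     * was ever sent to them.
     */
    static Set<String> rollbackRecipients(Set<String> affected, Map<String, String> primaryOwner) {
        Set<String> recipients = new LinkedHashSet<>();
        for (String k : affected) {
            recipients.add(primaryOwner.get(k));
        }
        return recipients;
    }

    public static void main(String[] args) {
        // Locks are acquired in some deterministic order, here chosen so
        // that k1 and k2 sort before the failing key k0.
        List<String> lockOrder = List.of("k1", "k2", "k0", "k3");
        Map<String, String> primaryOwner =
                Map.of("k0", "N0", "k1", "N1", "k2", "N2", "k3", "N3");

        Set<String> affected = affectedKeysAfterFailure(lockOrder, "k0");
        // Only k1 and k2 were ever locked, yet their owners N1 and N2 would
        // receive a RollbackCommand although they never saw a PrepareCommand.
        System.out.println(rollbackRecipients(affected, primaryOwner)); // prints [N1, N2]
    }
}
```

In this toy run the rollback is addressed to N1 and N2, which never
received (and never would have received) a prepare.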
Is this correct?
If I'm not mistaken, what is the design choice behind this implementation?
Thank you.
Best regards,
Sebastiano Peluso
--
Sebastiano Peluso, PhD student
Department of Computer, Control and Management Engineering
Sapienza University of Rome
Via Ariosto 25, 00185, Rome, Italy
Tel. +39 06 77274108
Webpage http://www.dis.uniroma1.it/~peluso
Distributed Systems Group, INESC-ID
Instituto Superior Técnico
Rua Alves Redol 9, 1000-029, Lisbon, Portugal
Webpage http://www.gsd.inesc-id.pt/~peluso