[infinispan-dev] [hibernate-dev] Resetting Lucene lock at Directory initialization

Sanne Grinovero sanne.grinovero at gmail.com
Mon Oct 19 07:22:30 EDT 2009


Sorry I'll try to explain myself better, I think there's some
confusion about what my problem is.

The javadoc for LockFactory.clearLock - which is the interface we
have to implement - describes an explicit force-cleanup:
/**
   * Attempt to clear (forcefully unlock and remove) the
   * specified lock.  Only call this at a time when you are
   * certain this lock is no longer in use.
   * @param name name of the lock to be cleared.
   */
  public void clearLock(String name) throws IOException {

So yes, I would agree in avoiding all "automagics" here and just
removing the lock, but when IndexWriter opens the Directory in
"create" mode it does:

[...]
if (create) {
  // Clear the write lock in case it's leftover:
  directory.clearLock(WRITE_LOCK_NAME);
}
Lock writeLock = directory.makeLock(WRITE_LOCK_NAME);
if (!writeLock.obtain(writeLockTimeout)) // obtain write lock
  throw new LockObtainFailedException("Index locked for write: " + writeLock);
this.writeLock = writeLock; // save it
[...]

basically "stealing" lock ownership from any existing running
process, and then applying changes to the index using the stolen
lock.
Apparently this was working fine in Lucene's filesystem-based
Directory, but it would fail on Infinispan as we are using
transactions: the lock being ignored was meant to prevent concurrent
access to the same keys, and that concurrent access is now guaranteed
to happen. And I'm happy for this to be illegal, as the result would
really be unpredictable :-)

My understanding of the IndexWriter code is that it uses this
clearLock to make sure it can start even after a previous crash, so
I'd like to implement the same functionality, but I need to detect
whether the left-over lock really is a left-over and not a working
lock from another node / IndexWriter instance. If the index is "live"
it's fine for this IndexWriter to re-create it (making it empty), but
it should still coordinate nicely with the other nodes.
IMHO an IndexWriter wanting to do a cleanup should block until it
properly gets the lock; as we take an eager lock on
writeLock.obtain(writeLockTimeout), my implementation of clearLock()
could be a no-op, provided we can distinguish a crash-leftover lock
from an in-use lock.
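To make the intent concrete, here's a minimal sketch of that idea in plain Java, with a ConcurrentMap standing in for the Infinispan cache and a Set of live owners standing in for the cluster view; the class and method names are mine, not Lucene or Infinispan API:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentMap;

// Sketch only: clearLock() removes a lock solely when its recorded
// owner is no longer alive, so a lock held by a live owner is left
// alone (effectively a no-op) and obtain() waits for it instead of
// stealing it.
class SketchLockFactory {
    private final ConcurrentMap<String, String> locks; // lock name -> owner address
    private final Set<String> liveOwners;              // stand-in for the cluster view

    SketchLockFactory(ConcurrentMap<String, String> locks, Set<String> liveOwners) {
        this.locks = locks;
        this.liveOwners = liveOwners;
    }

    // Only clear a crash-leftover lock; never touch an in-use one.
    void clearLock(String name) {
        String owner = locks.get(name);
        if (owner != null && !liveOwners.contains(owner)) {
            // conditional remove: only if still mapped to the dead owner
            locks.remove(name, owner);
        }
    }

    // Block (poll) until the lock is free or the timeout expires.
    boolean obtain(String name, String self, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (locks.putIfAbsent(name, self) != null) {
            if (System.currentTimeMillis() >= deadline) return false;
            Thread.sleep(10);
        }
        return true;
    }
}
```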

Manik, your idea is very interesting, but this lock is not shared:
there's just one owner. I could store the single lock owner as you
suggest, or is there some simpler way for the one-owner case? I
understood that I can't use Infinispan's eager locks, as this
ownership spans multiple transactions; am I right on this? It would
be quite useful if I could keep owning the lock even after the
transaction commits, or have a separate TX running for the lock's
lifecycle, like a LockManager transaction, also because I expect
Infinispan to clear these locks automatically in case the lock owner
crashes/disconnects.
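Something like the scheme you describe, sketched for the one-owner case (again with a ConcurrentMap standing in for the Infinispan cache and a Set standing in for CacheManager.getMembers(); names are illustrative only):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentMap;

// Sketch of the putIfAbsent / getMembers / replace recipe for a
// single-owner lock. Not Infinispan API: the cache is modelled as a
// ConcurrentMap, whose putIfAbsent/replace/remove carry the same
// atomic semantics.
class OwnerLock {
    static final String LOCK_KEY = "sharedlock";
    private final ConcurrentMap<String, String> cache; // stand-in for the cache
    private final Set<String> members;                 // stand-in for getMembers()

    OwnerLock(ConcurrentMap<String, String> cache, Set<String> members) {
        this.cache = cache;
        this.members = members;
    }

    boolean tryAcquire(String self) {
        // Steps 1 + 2: record our address as owner only if none exists yet.
        String current = cache.putIfAbsent(LOCK_KEY, self);
        if (current == null || current.equals(self)) return true;
        // Step 3: an owner is recorded but has left the cluster - a stale
        // lock. Take it over atomically so two nodes can't both succeed.
        if (!members.contains(current)) {
            return cache.replace(LOCK_KEY, current, self);
        }
        return false;
    }

    void release(String self) {
        cache.remove(LOCK_KEY, self); // conditional: only if we still own it
    }
}
```

The replace() in step 3 is what prevents two nodes from both taking over the same stale lock: only one of the concurrent replace calls can see the old owner value.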

thanks for all comments,
Sanne


2009/10/19 Manik Surtani <manik at jboss.org>:
>
> On 19 Oct 2009, at 08:16, Emmanuel Bernard wrote:
>
>> On the Lucene side, it seems to me that manually asking for a lock
>> clear is cleaner / safer than this automagic approach.
>
> Yeah, I agree with Emmanuel - a more explicit form would work better IMO.
>  Perhaps what you could do is something like this:
>
> 1)  Create an entry, name "sharedlock", value "address of current lock
> owner".
> 2)  Any time a node needs a lock, it adds its address to the "sharedlock" entry
> only if it doesn't exist (putIfAbsent)
> 3)  If the entry exists, check if the address is still in the cluster
> (check using CacheManager.getMembers()).  If the address doesn't exist
> (stale lock) remove and overwrite (using replace() to prevent concurrent
> overwrites)
>
> WDYT?
>
> - Manik
>
> --
> Manik Surtani
> manik at jboss.org
> Lead, Infinispan
> Lead, JBoss Cache
> http://www.infinispan.org
> http://www.jbosscache.org
>
>
>
>
>
