Some minor improvements to Hibernate Search
by Amin Mohammed-Coleman
Hi All
I have been looking at the Hibernate Search codebase and I am very keen to
help out. I have noticed some small changes I would like to propose (very
small) and I hope I don't offend anyone by mentioning them.
1) Remove cyclic reference in JmsBackEndQueueProcessor and
JmsBackEndQueueProcessorFactory. It seems as though the factory creates a
processor and the processor depends on the factory. The processor only
needs the queue connection factory and the JMS queue, which I think should
be passed to the processor instead of passing the factory. The object
being created should not know about its factory.
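A minimal sketch of the suggested decoupling, using invented stand-in interfaces for the JMS types (these are not the actual Hibernate Search or JMS classes):

```java
// Sketch: the processor receives only the collaborators it actually uses,
// not the factory that built it. ConnectionFactory and Queue are
// hypothetical stand-ins for the javax.jms types.
interface ConnectionFactory { }
interface Queue { }

class JmsBackendQueueProcessor {
    private final ConnectionFactory connectionFactory;
    private final Queue jmsQueue;

    // The processor no longer holds a reference back to its factory.
    JmsBackendQueueProcessor(ConnectionFactory connectionFactory, Queue jmsQueue) {
        this.connectionFactory = connectionFactory;
        this.jmsQueue = jmsQueue;
    }
}

class JmsBackendQueueProcessorFactory {
    private final ConnectionFactory connectionFactory;
    private final Queue jmsQueue;

    JmsBackendQueueProcessorFactory(ConnectionFactory connectionFactory, Queue jmsQueue) {
        this.connectionFactory = connectionFactory;
        this.jmsQueue = jmsQueue;
    }

    // Creation now flows one way only: factory -> processor.
    JmsBackendQueueProcessor createProcessor() {
        return new JmsBackendQueueProcessor(connectionFactory, jmsQueue);
    }
}
```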
2) Create a convenience method (not sure where) that enables a user to get
the Lucene document via the FullTextSession. So for example the
method would look something like:
fullTextSession.getDocument(Class<?> clazz, Serializable id);
Under the hood it would delegate the work to the directory providers and
close the index readers.
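The shape of that delegation could look roughly like the following sketch. All names here (DocumentReader, DirectoryProviders, openReader) are invented stand-ins, not real Hibernate Search or Lucene API; the real implementation would resolve the entity's DirectoryProvider and open a Lucene IndexReader against it:

```java
import java.io.Serializable;

// Hypothetical sketch of fullTextSession.getDocument(clazz, id):
// delegate to the directory providers and always close the reader.
class FullTextSessionSketch {
    // Stand-in for org.apache.lucene.document.Document.
    interface Document { }

    // Stand-in for an IndexReader that must be closed after use.
    interface DocumentReader extends AutoCloseable {
        Document document(Class<?> clazz, Serializable id);
        @Override default void close() { }
    }

    // Stand-in for the directory-provider lookup per entity class.
    interface DirectoryProviders {
        DocumentReader openReader(Class<?> clazz);
    }

    private final DirectoryProviders providers;

    FullTextSessionSketch(DirectoryProviders providers) {
        this.providers = providers;
    }

    // The proposed convenience method: open a reader for the entity's
    // index, fetch the document, and guarantee the reader is closed.
    Document getDocument(Class<?> clazz, Serializable id) {
        try (DocumentReader reader = providers.openReader(clazz)) {
            return reader.document(clazz, id);
        }
    }
}
```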
3) Provide integration with GigaSpaces which I had started but not
completed.
4) Integration with Spring and maybe Spring Integration.
Sorry if my mail is brief however I would be happy to discuss any of the
points further.
Kind Regards
Amin (amin-mc on the forums)
14 years, 5 months
about HSEARCH-367 : Support only one kind of Similarity per index
by Sanne Grinovero
I'm looking into this as it's blocking way too much other stuff;
it's not hard to implement, but it has me puzzled about usability.
Basically I'm planning to throw an exception when different entities
are defining a different Similarity while sharing the same index.
This case is obvious:
@Entity @Similarity(ImplA.class) @Indexed(name="oneindex")
A
@Entity @Similarity(ImplB.class) @Indexed(name="oneindex")
B
so I'll throw an exception.
What about this case:
@Entity @Similarity(ImplA.class) @Indexed(name="oneindex")
A
@Entity @Indexed(name="oneindex")
B
?
This would be fine for me, and I would use the ImplA Similarity for index
"oneindex" (applied to both entities A and B), but it seems bad
not to warn about the inconsistency.
I think some confusion can arise: imagine the situation in which I'm
adding the similarity definition to entity A; I wouldn't expect it to
also take effect on entity B.
The confusion arises IMHO from the fact that in Hibernate Search
there's a notion of "index configuration", so there's an index
"entity" (as a concept) to speak about,
while the definition of the properties of this "index" is scattered
across the @Indexed entities and the configuration properties.
Sharding and IndexWriter configuration settings, for example,
are only exposed in the configuration file.
The similarity is actually a property of one index, so arguably the
annotation shouldn't exist and it should be configured like other index
settings; still, I agree it feels natural to declare it on an entity,
so I'm not proposing to remove it.
I'm afraid there's a bit of inconsistency, but I've no clear idea
about how to solve it: I'm just pointing out what is IMHO a weakness
we should think about.
What should I do in the case above? Log a warning? Just let it
pass? Throw an exception, demanding that all entities be annotated
the same way?
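For the strict option, the check could be as simple as the following sketch (my assumption of how it could work, not the actual Hibernate Search code): while binding entities, remember which Similarity each index ended up with and fail fast on a conflict, while an entity without an explicit @Similarity inherits whatever the index already has:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a per-index Similarity consistency check (invented names).
class SimilarityConsistencyCheck {
    private final Map<String, Class<?>> similarityPerIndex = new HashMap<>();

    // declaredSimilarity may be null when the entity has no @Similarity.
    void bindEntity(String indexName, Class<?> declaredSimilarity) {
        if (declaredSimilarity == null) {
            return; // second case above: fall back to the index-level choice
        }
        Class<?> existing = similarityPerIndex.putIfAbsent(indexName, declaredSimilarity);
        if (existing != null && !existing.equals(declaredSimilarity)) {
            throw new IllegalStateException(
                "Entities sharing index '" + indexName + "' declare conflicting Similarity: "
                + existing.getName() + " vs " + declaredSimilarity.getName());
        }
    }

    Class<?> similarityFor(String indexName) {
        return similarityPerIndex.get(indexName);
    }
}
```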
Regards,
Sanne
14 years, 6 months
[Fwd: [redhat.com #1341720] [Fwd: Re: Unable to checkout core/trunk/core]]
by Steve Ebersole
Paul, anyone else able to validate Graeme's questions?
-------- Forwarded Message --------
From: Graeme Gillies ...
Hi,
I had a close look at the setup on anonsvn.jboss.org for both HTTP and
HTTPS and I can not see any differences.
I did some tests using the command the user specified which was
svn co
http://anonsvn.jboss.org/repos/hibernate/core/branches/Branch_3_3/core
-r HEAD --depth=infinity --force
From systems both inside and external to the Red Hat network, it
seemed to work fine.
What does pique my interest is that he mentioned it worked OK over HTTPS
but HTTP is giving him grief. This leads me to believe that the problem
may actually be a web proxy between the user and their internet
connection. Typically web proxies will connect HTTPS traffic straight
through (for obvious reasons) but sometimes they can have problems with
SVN if they are configured to not allow some HTTP request methods.
Can we please get the user to confirm that they have no proxy server
(squid, Microsoft ISA, etc.) between themselves and the internet at
large, and possibly even try the operation from a place where they
are sure they have a direct internet connection (like home) and see
how it goes.
Let us know what they come back with.
Regards,
Graeme
--
Steve Ebersole <steve(a)hibernate.org>
Hibernate.org
14 years, 6 months
Comment patch - cascading performance
by Yves Galante
Hi,
Could someone look at HHH-3860 and comment on the patch attached in
JIRA? It is an update of an old patch, "HHH-2272 Serious performance
problems when saving large amounts".
This patch optimizes cascading operations by caching the parent-child
relation in a temporary map. When a child needs its parent, it looks
it up in this map; by the end of the cascading operation the map is
empty again.
Currently the parent is found by iterating over every entity in the
session.
eventSource.getPersistenceContext().addChildParent(child, parent);
action.cascade(eventSource, child, entityName, anything, isCascadeDeleteEnabled);
eventSource.getPersistenceContext().removeChildParent(child);
Cascading 5510 objects took 30906 ms in my test before this patch.
With the patch the same test case took 4905 ms.
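The idea behind the patch can be sketched as follows (illustrative names, not the actual HHH-3860 code): instead of scanning every entity in the session to find a child's parent, record the parent in a temporary map for the duration of the cascade and remove it afterwards.

```java
import java.util.IdentityHashMap;
import java.util.Map;

// Temporary child -> parent cache used only while a cascade is running.
class ChildParentCache {
    // Identity semantics, as the session tracks entity instances.
    private final Map<Object, Object> parentByChild = new IdentityHashMap<>();

    void addChildParent(Object child, Object parent) {
        parentByChild.put(child, parent);
    }

    Object getParent(Object child) {
        return parentByChild.get(child); // O(1) instead of a session-wide scan
    }

    void removeChildParent(Object child) {
        parentByChild.remove(child);
    }

    boolean isEmpty() {
        return parentByChild.isEmpty(); // should hold after the cascade ends
    }
}
```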
Thanks
Yves
14 years, 6 months
Re: [hibernate-dev] JPA2 locking
by Scott Marlow
On 10/16/2009 03:40 PM, Emmanuel Bernard wrote:
> When I discussed that with Gavin, I believe this idea is that you can
> implement the optimistic locking in the following way:
> - when locking an object read the version number (or if already
> loaded keep this one - not sure about that detail)
> - when flushing or right before commit, read the version number again
> from the database and compare.
> If they are different => exception
>
> A provider may but is not forced to acquire the lock
>
> Note that today we implement Optimistic in a pessimistic way (i.e. we
> acquire the DB lock right away).
>
> So there are three levels really
> no lock => we check versions upon UPDATE operations
> optimistic => we check versions on reads as well and verify consistency
> pessimistic => we lock at the DB level.
Currently, the Hibernate EM depends on Hibernate core for locking (as it
should). I have a few questions about how to achieve the above locking
with Hibernate core and about what changes are needed.
The JPA 2 locking operations that we need support for are:
OPTIMISTIC (equal to READ) - should read the version initially and
confirm that it hasn't changed at transaction commit time. We should
throw OptimisticLockException if the version has changed. I think that
we need a new LockMode for this (similar to LockMode.READ). For
extended persistence context (meaning that the duration is beyond the
end of transaction), I think that we use the entity value from the
extended persistence context as is but should still confirm that it
hasn't changed at commit time (optimistically assume that it hasn't
changed initially).
OPTIMISTIC_FORCE_INCREMENT (equal to WRITE) - should read the version
initially. At transaction commit time, confirm that the version hasn't
changed as we increment it via update. We should throw
OptimisticLockException if the version has changed. I think that we
need a new LockMode for this (similar to LockMode.READ and
LockMode.FORCE). Same rules as above for extended persistence context.
PESSIMISTIC_WRITE - Should obtain a database write lock on the entity.
Hibernate LockMode.UPGRADE could be used for this on dialects that
support it. For dialects that don't support LockMode.UPGRADE, a
PessimisticLockException should be thrown.
PESSIMISTIC_READ - Should obtain a shared database read lock on the
entity (for the duration of the database transaction). How should we
support this? The JPA 2 specification allows the PESSIMISTIC_WRITE
behavior to be used.
PESSIMISTIC_FORCE_INCREMENT - Same as PESSIMISTIC_READ but with an
increment version at transaction commit time (even if entity isn't
updated). I think that we need a new LockMode for this. We need a way
to throw an exception if not supported.
For pessimistic locks, only lock element collections and relationships
owned by the entity, if property javax.persistence.lock.scope is set to
"PessimisticLockScope.EXTENDED".
Assuming we do the above, we need to release-note that READ/WRITE locks
are obtained in an optimistic manner, which is a change from our JPA 1
support.
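The OPTIMISTIC behaviour described above boils down to a version comparison at commit, which the following toy simulation illustrates (this is not Hibernate code; names are invented): remember the version read when the lock was requested, re-read it right before commit, and fail if they differ.

```java
// Toy model of the JPA 2 OPTIMISTIC lock check.
class OptimisticVersionCheck {
    static class OptimisticLockException extends RuntimeException {
        OptimisticLockException(String msg) { super(msg); }
    }

    // Version value observed when the lock was requested.
    private final long versionAtLock;

    OptimisticVersionCheck(long versionAtLock) {
        this.versionAtLock = versionAtLock;
    }

    // currentVersion stands for the value re-read from the database
    // right before commit.
    void verifyAtCommit(long currentVersion) {
        if (currentVersion != versionAtLock) {
            throw new OptimisticLockException(
                "version changed: expected " + versionAtLock
                + " but found " + currentVersion);
        }
    }
}
```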
Comments?
Any volunteers willing to help with the JPA 2 implementation (coding,
testing, moral support) are welcome to join in. :-)
Scott
14 years, 6 months
Re: [hibernate-dev] [infinispan-dev] atomic operations for Lucene's LockManager on Infinispan
by Manik Surtani
On 22 Oct 2009, at 16:01, Mircea Markus wrote:
>
> On Oct 21, 2009, at 9:50 PM, Sanne Grinovero wrote:
>
>> Hello,
>> I've spoken with Manik in IRC about this, so wanted to share this,
>> especially because he mentioned to ask someone to help me.
> I've applied the patch and reproduced the failure. Looking into it
> right now.
To confirm, you only have this failure when using the cache in a
clustered mode? I.e., using it in LOCAL mode works fine?
Cheers
--
Manik Surtani
manik(a)jboss.org
Lead, Infinispan
Lead, JBoss Cache
http://www.infinispan.org
http://www.jbosscache.org
14 years, 6 months
atomic operations for Lucene's LockManager on Infinispan
by Sanne Grinovero
Hello,
I've spoken with Manik on IRC about this, so I wanted to share it here,
especially because he suggested asking someone to help me.
I've been busy writing a lock stress test for our implementation of
Lucene's Lock and LockFactory, and ran into some trouble using
cache.putIfAbsent(Object key, Object value);
it appears not to behave atomically as it should.
I've confirmed the test works when mocking the cache with a plain
ConcurrentHashMap, so the next step for me is having someone
with better knowledge of Infinispan core look into the code; I
might have a configuration problem.
My test is attached to ISPN-227, here are some instructions:
the test to run is
org.infinispan.lucene.InfinispanLockFactoryStressTest, which creates
and uses several org.infinispan.lucene.InfinispanLock.
mvn test -Dtest=org.infinispan.lucene.InfinispanLockFactoryStressTest
-Dbind.address=127.0.0.1 -Djava.net.preferIPv4Stack=true
The test defines three different cache factories; only one is
uncommented, so please edit the code to try a different ConcurrentMap
implementation and see what happens.
Some status will be sent to system.out.
* MultiNodeTestCacheFactory emulates different nodes sharing state,
with each node having n threads (using core Infinispan's
MultipleCacheManagersTest)
* ConcurrentHashMapCacheTestFactory uses Java's ConcurrentHashMap
* LocalISPNCacheTestFactory (using
TestCacheManagerFactory.createLocalCacheManager(false))
This is not intended to be committed for now, just to find out what's
wrong. Also, this is not the Lock implementation as we need it, but
this step should be fixed first.
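The property the stress test relies on can be shown in isolation: putIfAbsent must let exactly one of many competing threads win the "lock" key. This standalone check uses a ConcurrentHashMap, i.e. the reference behaviour the Infinispan cache should match; it is not the ISPN-227 test itself.

```java
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal atomicity check: with a correct ConcurrentMap, exactly one
// thread's putIfAbsent("lock", ...) returns null (i.e. wins the lock).
class PutIfAbsentCheck {
    static int countWinners(ConcurrentMap<String, String> cache, int threads) {
        AtomicInteger winners = new AtomicInteger();
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                // a null result means the key was absent and we acquired it
                if (cache.putIfAbsent("lock", "owner") == null) {
                    winners.incrementAndGet();
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            try {
                t.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return winners.get();
    }
}
```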
thanks,
Sanne
14 years, 6 months
Resetting Lucene lock at Directory initialization
by Sanne Grinovero
Hello,
Lucene's default LockFactory implementation does a sort of "lock
cleanup" at index creation: if it detects a lock on the index at
startup, it is cleared.
Łukasz translated the exact same semantics to the Infinispan Directory;
the current implementation inserts a "lock marker" at a conventional
key, as if Infinispan were a filesystem.
So what is done in this case is to just delete the value at this
key, if any, at startup (to be precise: in lockFactory.clearLock()).
But in this situation I would need to "steal" the lock from another
node, if it exists. IMHO this Lucene approach doesn't consider
concurrent initializations of the FSDirectory.
So my doubts:
1) Is there some API in Infinispan capable of invalidating an existing
lock on a key when another node is still holding it (and would the
other node then fail?)
2) Does it make sense at all? It looks like bad practice to steal stuff.
I am considering writing this lock using SKIP_CACHE_STORE, in which
case I could assume that if one exists, there's a good reason not to
delete the lock, as other nodes are running and using the index. If
all nodes go down, the lock doesn't exist, as it was never
stored.
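A toy model of that proposal (invented names; the real version would use Infinispan's Flag.SKIP_CACHE_STORE): the lock key lives only in the in-memory cache and never reaches the cache store, so after a full shutdown no stale lock survives and clearLock() can be a no-op.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of a "never persisted" index lock with a no-op clearLock().
class SkipStoreLockSketch {
    // In-memory only; a cache store is deliberately never written.
    private final ConcurrentMap<String, String> inMemory = new ConcurrentHashMap<>();

    // Acquire the index lock; returns true only for the first owner.
    boolean obtain(String lockKey, String owner) {
        return inMemory.putIfAbsent(lockKey, owner) == null;
    }

    void release(String lockKey) {
        inMemory.remove(lockKey);
    }

    // Proposed semantics: never steal a lock held by a live node.
    void clearLock(String lockKey) {
        // intentional no-op
    }
}
```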
So my proposal is to make lockFactory.clearLock() a no-op, and use
SKIP_CACHE_STORE when the lock is created.
When an IndexWriter re-creates an index (basically emptying an
existing one) it first calls clearLock(), then tries to acquire a new
lock, so it looks like this should be safe.
WDYT? This concept of SKIP_CACHE_STORE is totally new to me; maybe I
just misunderstood its usage.
Regards,
Sanne
14 years, 6 months