[Hibernate-JIRA] Created: (HSEARCH-618) BridgeFactory should pass along the field type to the field bridge constructor (if an appropriate constructor exists)
by I D (JIRA)
BridgeFactory should pass along the field type to the field bridge constructor (if an appropriate constructor exists)
---------------------------------------------------------------------------------------------------------------------
Key: HSEARCH-618
URL: http://opensource.atlassian.com/projects/hibernate/browse/HSEARCH-618
Project: Hibernate Search
Issue Type: Improvement
Affects Versions: 3.3.0.Beta2
Environment: Hibernate Core 3.6.0
Reporter: I D
Priority: Minor
Currently, only the built-in EnumBridge gets access to the field type via its constructor.
That's a shame: the field type is very useful information, may be necessary for certain kinds of bridges, and is readily available to the BridgeFactory.
Here's one use case:
Suppose I want to create a bridge named EnumOrdinalBridge, similar to EnumBridge but saving the constants' ordinals to the index (instead of their names). This bridge would need to know the enum type, so currently the only way to implement it is to make it a ParameterizedBridge and pass the enum's fully qualified class name via the "params" attribute of the @FieldBridge annotation.
Here is how an invocation of such a bridge would look:
{code}
public enum Gender {MALE,FEMALE}
{code}
{code}
@Field(index = Index.UN_TOKENIZED)
@FieldBridge(impl = EnumOrdinalBridge.class, params = {@Parameter(name="class", value="package.name.Gender")})
public Gender getGender() {
    return gender;
}
{code}
Note the problematic use of the fully qualified class name as a string literal - this is neither portable nor refactoring-proof. What if I want to rename the enum, move it to another package, or simply misspell the qualified class name? All of these lead to runtime (rather than compile-time) errors. Also, repetition's bad (m'kay?), so there's no reason to explicitly pass information to the bridge when that information is already implicitly available to the BridgeFactory.
I suggest using reflection in BridgeFactory to detect whether the FieldBridge being constructed has a constructor taking a single argument of type Class. If so, that constructor should be used (instead of the no-args constructor currently used for all non-built-in field bridges), with the field's type passed as its argument.
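A minimal sketch of what such a check could look like (illustrative only - the helper name is made up and this is not the actual BridgeFactory code):
{code}
import java.lang.reflect.Constructor;

import org.hibernate.search.bridge.FieldBridge;

// Illustrative helper, not the actual BridgeFactory implementation: prefer a
// single-argument Class constructor when the bridge declares one, otherwise
// fall back to the no-args constructor used today.
public class FieldBridgeInstantiator {

    public static FieldBridge instantiate(Class<? extends FieldBridge> impl, Class<?> fieldType)
            throws Exception {
        try {
            Constructor<? extends FieldBridge> withType = impl.getConstructor(Class.class);
            // The bridge declares a (Class) constructor: pass the field type along.
            return withType.newInstance(fieldType);
        }
        catch (NoSuchMethodException e) {
            // No such constructor: keep the current no-args behavior.
            return impl.getConstructor().newInstance();
        }
    }
}
{code}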
With this behavior implemented, the @FieldBridge declaration above can be reduced to the following, and we gain portability and compile-time safety:
{code}
public enum Gender {MALE,FEMALE}
{code}
{code}
@Field(index = Index.UN_TOKENIZED)
@FieldBridge(impl = EnumOrdinalBridge.class)
public Gender getGender() {
    return gender;
}
{code}
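For completeness, here is one possible shape for such an EnumOrdinalBridge under the proposed behavior (the class itself is hypothetical; it is sketched here as a TwoWayStringBridge):
{code}
import org.hibernate.search.bridge.TwoWayStringBridge;

// Hypothetical bridge illustrating the proposal: the BridgeFactory injects the
// annotated property's type through the constructor, so no string parameter
// carrying the class name is needed.
public class EnumOrdinalBridge implements TwoWayStringBridge {

    private final Class<?> enumType;

    public EnumOrdinalBridge(Class<?> enumType) {
        this.enumType = enumType;
    }

    public String objectToString(Object object) {
        // Index the constant's ordinal instead of its name.
        return object == null ? null : String.valueOf(((Enum<?>) object).ordinal());
    }

    public Object stringToObject(String stringValue) {
        return stringValue == null ? null : enumType.getEnumConstants()[Integer.parseInt(stringValue)];
    }
}
{code}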
[Hibernate-JIRA] Created: (HSEARCH-573) PerDPQueueProcessor forces release of lock even if not held - causes corrupt index
by Christian Köberl (JIRA)
PerDPQueueProcessor forces release of lock even if not held - causes corrupt index
----------------------------------------------------------------------------------
Key: HSEARCH-573
URL: http://opensource.atlassian.com/projects/hibernate/browse/HSEARCH-573
Project: Hibernate Search
Issue Type: Bug
Affects Versions: 3.2.1, 3.2.0.Final, 3.2.0.CR1
Environment: Hibernate 3.5.0-Final, Oracle 10g
Reporter: Christian Köberl
Occurs: when an indexed entity is modified while another thread is rebuilding the index for that entity.
Consequences: Hibernate Search calls "workspace.forceLockRelease();" in the catch block of PerDPQueueProcessor#run, so the lock held by the batch indexer is forcefully released. The next entity indexing operation then writes to the index while the batch indexer is still writing to it, corrupting the index.
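A minimal sketch of the guard this implies (names and signatures are simplified and hypothetical, not the actual PerDPQueueProcessor code): only force the lock release when this processor actually obtained the IndexWriter; if opening the writer failed because the lock is held elsewhere, leave the lock alone.
{code}
// Hypothetical sketch only - simplified names/signatures, not the real code.
boolean writerObtained = false;
try {
    IndexWriter writer = workspace.getIndexWriter(); // actual signature may differ
    writerObtained = true;
    // ... apply the queued Lucene works ...
}
catch (Throwable t) {
    log.error("Unexpected error in Lucene Backend", t);
    if (writerObtained) {
        // Safe: this thread held the write lock.
        workspace.forceLockRelease();
    }
    // Otherwise the lock belongs to another writer (e.g. the batch indexer)
    // and must not be released.
}
{code}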
h2. First Exception - triggers forceLockRelease
{noformat}
ERROR| Unexpected error in Lucene Backend: | at org.hibernate.search.backend.impl.lucene.PerDPQueueProcessor.run(PerDPQueueProcessor.java:118)
org.hibernate.search.SearchException: Unable to open IndexWriter
at org.hibernate.search.backend.Workspace.getIndexWriter(Workspace.java:159)
at org.hibernate.search.backend.impl.lucene.PerDPQueueProcessor.run(PerDPQueueProcessor.java:103)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:432)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:284)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:678)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:703)
at java.lang.Thread.run(Thread.java:811)
Caused by:
org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: SimpleFSLock@C:\Temp\turntableLuceneIndex\LogEntry\lucene-74da319434c1dd9f133d63245791e1b4-write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:85)
at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:1538)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1067)
at org.hibernate.search.backend.Workspace.getIndexWriter(Workspace.java:152)
... 7 more
WARN | going to force release of the IndexWriter lock | at org.hibernate.search.backend.Workspace.forceLockRelease(Workspace.java:221)
{noformat}
h2. Second Exception - index is destroyed
{noformat}
ERROR| Exception occurred org.hibernate.search.SearchException: Unable to add to Lucene index: class com.poi.egh.turntable.vehicle.domain.vehicle.Vehicle#200
Primary Failure:
Entity com.poi.egh.turntable.vehicle.domain.vehicle.Vehicle Id 200 Work Type org.hibernate.search.backend.AddLuceneWork
| at org.hibernate.search.exception.impl.LogErrorHandler.logError(LogErrorHandler.java:83)
org.hibernate.search.SearchException: Unable to add to Lucene index: class com.poi.egh.turntable.vehicle.domain.vehicle.Vehicle#200
at org.hibernate.search.backend.impl.lucene.works.AddWorkDelegate.performWork(AddWorkDelegate.java:81)
at org.hibernate.search.backend.impl.lucene.PerDPQueueProcessor.run(PerDPQueueProcessor.java:106)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:432)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:284)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:678)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:703)
at java.lang.Thread.run(Thread.java:811)
Caused by:
java.io.IOException: Cannot overwrite: C:\temp\turntableLuceneIndex\vehicle\_5.fdt
at org.apache.lucene.store.FSDirectory.initOutput(FSDirectory.java:362)
at org.apache.lucene.store.SimpleFSDirectory.createOutput(SimpleFSDirectory.java:58)
at org.apache.lucene.index.FieldsWriter.<init>(FieldsWriter.java:61)
at org.apache.lucene.index.StoredFieldsWriter.initFieldsWriter(StoredFieldsWriter.java:66)
at org.apache.lucene.index.StoredFieldsWriter.finishDocument(StoredFieldsWriter.java:144)
at org.apache.lucene.index.StoredFieldsWriter$PerDoc.finish(StoredFieldsWriter.java:190)
at org.apache.lucene.index.DocumentsWriter$WaitQueue.writeDocument(DocumentsWriter.java:1466)
at org.apache.lucene.index.DocumentsWriter$WaitQueue.add(DocumentsWriter.java:1485)
at org.apache.lucene.index.DocumentsWriter.finishDocument(DocumentsWriter.java:1089)
at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:775)
at org.apache.lucene.index.DocumentsWriter.addDocument(DocumentsWriter.java:750)
at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:2454)
at org.hibernate.search.backend.impl.lucene.works.AddWorkDelegate.performWork(AddWorkDelegate.java:77)
... 7 more
{noformat}
[Hibernate-JIRA] Created: (HSEARCH-390) HibernateSearchResourceLoader uses default charset for reading resources
by Ivan Holub (JIRA)
HibernateSearchResourceLoader uses default charset for reading resources
------------------------------------------------------------------------
Key: HSEARCH-390
URL: http://opensource.atlassian.com/projects/hibernate/browse/HSEARCH-390
Project: Hibernate Search
Issue Type: Bug
Components: analyzer
Affects Versions: 3.1.1.GA
Reporter: Ivan Holub
HibernateSearchResourceLoader uses the platform default charset for reading resources, so stop word files in other encodings are read incorrectly and stop words for other languages do not work.
{code}
@AnalyzerDef(name="ru",
    tokenizer=@TokenizerDef(factory=StandardTokenizerFactory.class),
    filters={
        @TokenFilterDef(factory=StandardFilterFactory.class),
        @TokenFilterDef(factory=LowerCaseFilterFactory.class),
        @TokenFilterDef(factory=StopFilterFactory.class,
            params=@Parameter(name="words",
                value="stopwords/stopwords_ru.txt")),
        @TokenFilterDef(factory=SnowballPorterFilterFactory.class,
            params=@Parameter(name="language",
                value="Russian"))
    })
{code}
stopwords/stopwords_ru.txt is a UTF-8 file.
To work around the problem I constructed the Analyzer in a separate class, without using @AnalyzerDef.
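For reference, a minimal sketch of such a workaround (class name is illustrative; the resource name is the one from this report): load the stop word file yourself with an explicit UTF-8 reader instead of relying on the platform default charset, then pass the resulting set to whatever stop filter or analyzer constructor your Lucene version provides.
{code}
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.HashSet;
import java.util.Set;

// Illustrative workaround: read the stop word resource with an explicit
// UTF-8 charset. The class name is hypothetical.
public class RussianStopWords {

    public static Set<String> load() throws Exception {
        Set<String> stopWords = new HashSet<String>();
        InputStream in = RussianStopWords.class.getClassLoader()
                .getResourceAsStream("stopwords/stopwords_ru.txt");
        BufferedReader reader = new BufferedReader(new InputStreamReader(in, "UTF-8"));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                line = line.trim();
                // Skip blank lines and comments (assumed comment marker).
                if (line.length() > 0 && !line.startsWith("#")) {
                    stopWords.add(line);
                }
            }
        }
        finally {
            reader.close();
        }
        return stopWords;
    }
}
{code}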