Hibernate SVN: r15643 - search/trunk/doc/reference/en/modules.
by hibernate-commits@lists.jboss.org
Author: hardy.ferentschik
Date: 2008-12-02 13:16:46 -0500 (Tue, 02 Dec 2008)
New Revision: 15643
Modified:
search/trunk/doc/reference/en/modules/mapping.xml
Log:
added documentation for AnalyzerDiscriminator
Modified: search/trunk/doc/reference/en/modules/mapping.xml
===================================================================
--- search/trunk/doc/reference/en/modules/mapping.xml 2008-12-02 16:33:47 UTC (rev 15642)
+++ search/trunk/doc/reference/en/modules/mapping.xml 2008-12-02 18:16:46 UTC (rev 15643)
@@ -810,6 +810,99 @@
your IDE to see the implementations available.</para>
</section>
+ <section>
+ <title>Analyzer discriminator (experimental)</title>
+
+ <para>So far all the different ways to specify an analyzer were
+ static. However, there are use cases where it is useful to select an
+ analyzer depending on the current state of the entity to be indexed,
+ for example in multilingual applications. For a BlogEntry class,
+ for example, the analyzer could depend on the language property of
+ the entry. Depending on this property the correct stemmer can then be
+ chosen to index the actual text.</para>
+
+ <para>To enable this dynamic analyzer selection, Hibernate Search
+ introduces the <classname>AnalyzerDiscriminator</classname>
+ annotation. The following example demonstrates the usage of this
+ annotation:</para>
+
+ <para><example>
+ <title>Usage of @AnalyzerDiscriminator in order to select an
+ analyzer depending on the entity state</title>
+
+ <programlisting>@Entity
+@Indexed
+@AnalyzerDefs({
+ @AnalyzerDef(name = "en",
+ tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
+ filters = {
+ @TokenFilterDef(factory = LowerCaseFilterFactory.class),
+ @TokenFilterDef(factory = EnglishPorterFilterFactory.class)
+ }),
+ @AnalyzerDef(name = "de",
+ tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
+ filters = {
+ @TokenFilterDef(factory = LowerCaseFilterFactory.class),
+ @TokenFilterDef(factory = GermanStemFilterFactory.class)
+ })
+})
+public class BlogEntry {
+
+ @Id
+ @GeneratedValue
+ @DocumentId
+ private Integer id;
+
+ @Field
+ @AnalyzerDiscriminator(impl = LanguageDiscriminator.class)
+ private String language;
+
+ @Field
+ private String text;
+
+ private Set<BlogEntry> references;
+
+ // standard getter/setter
+ ...
+}</programlisting>
+
+ <programlisting>public class LanguageDiscriminator implements Discriminator {
+
+ public String getAnalyzerDefinitionName(Object value, Object entity, String field) {
+ if ( value == null || !( entity instanceof BlogEntry ) ) {
+ return null;
+ }
+ return (String) value;
+ }
+}</programlisting>
+ </example>The prerequisite for using
+ <classname>@AnalyzerDiscriminator</classname> is that all analyzers
+ which are going to be used are predefined via
+ <classname>@AnalyzerDef</classname> definitions. If this is the case,
+ one can place the <classname>@AnalyzerDiscriminator</classname>
+ annotation either on the class or on a specific property of the entity
+ for which to dynamically select an analyzer. Via the
+ <literal>impl</literal> parameter of the
+ <classname>AnalyzerDiscriminator</classname> you specify a concrete
+ implementation of the <classname>Discriminator</classname> interface.
+ It is up to you to provide an implementation of this interface. The
+ only method you have to implement is
+ <methodname>getAnalyzerDefinitionName()</methodname>, which gets called
+ for each field added to the Lucene document. The entity which is
+ getting indexed is also passed to the interface method at each call.
+ The <literal>value</literal> parameter is only set if the
+ <classname>AnalyzerDiscriminator</classname> is placed on property
+ level instead of class level. In this case the value represents the
+ current value of this property.</para>
+
+ <para>The implementation of the interface has to return the name of
+ an existing analyzer definition if the analyzer should be set
+ dynamically, or <literal>null</literal> if the default analyzer should
+ be applied. The given example assumes that the language property is
+ either 'de' or 'en'.</para>
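As a standalone sketch of the contract described above, a slightly more defensive discriminator could fall back to the default analyzer for any language without a matching <classname>@AnalyzerDef</classname>. The `Discriminator` interface is declared locally here only so the snippet compiles without the Hibernate Search jar, and `SafeLanguageDiscriminator` is a hypothetical name, not part of the framework:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Local declaration mirroring the documented contract; in a real application
// this interface is provided by Hibernate Search itself.
interface Discriminator {
    String getAnalyzerDefinitionName(Object value, Object entity, String field);
}

class SafeLanguageDiscriminator implements Discriminator {
    // Only languages with a matching @AnalyzerDef ("en", "de") are honored.
    private static final Set<String> KNOWN =
            new HashSet<String>(Arrays.asList("en", "de"));

    public String getAnalyzerDefinitionName(Object value, Object entity, String field) {
        if (value instanceof String && KNOWN.contains(value)) {
            return (String) value; // matches an @AnalyzerDef name
        }
        return null; // null selects the default analyzer
    }
}
```

This way an unexpected language code (say 'fr') degrades gracefully instead of referencing a non-existent analyzer definition.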
+ </section>
+
<section id="analyzer-retrievinganalyzer">
<title>Retrieving an analyzer</title>
Hibernate SVN: r15642 - search/trunk/doc/reference/en/modules.
by hibernate-commits@lists.jboss.org
Author: hardy.ferentschik
Date: 2008-12-02 11:33:47 -0500 (Tue, 02 Dec 2008)
New Revision: 15642
Modified:
search/trunk/doc/reference/en/modules/batchindex.xml
search/trunk/doc/reference/en/modules/lucene-native.xml
search/trunk/doc/reference/en/modules/optimize.xml
Log:
HSEARCH-303
Modified: search/trunk/doc/reference/en/modules/batchindex.xml
===================================================================
--- search/trunk/doc/reference/en/modules/batchindex.xml 2008-12-02 15:11:04 UTC (rev 15641)
+++ search/trunk/doc/reference/en/modules/batchindex.xml 2008-12-02 16:33:47 UTC (rev 15642)
@@ -22,8 +22,8 @@
~ 51 Franklin Street, Fifth Floor
~ Boston, MA 02110-1301 USA
-->
-
-<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd">
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN"
+"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd">
<chapter id="search-batchindex">
<!-- $Id$ -->
@@ -32,37 +32,36 @@
<section id="search-batchindex-indexing">
<title>Indexing</title>
- <para>It is sometimes useful to index an object even if this object is not
- inserted nor updated to the database. This is especially true when you
- want to build your index for the first time. You can achieve that goal
- using the <classname>FullTextSession</classname>.</para>
+ <para>It is sometimes useful to index an entity even if this entity is not
+ inserted into or updated in the database. This is for example the case when you
+ want to build your index for the first time.
+ <classname>FullTextSession</classname>.<methodname>index()</methodname>
+ allows you to do so.</para>
- <programlisting>FullTextSession fullTextSession = Search.getFullTextSession(session);
+ <example>
+ <title>Indexing an entity via
+ <methodname>FullTextSession.index()</methodname></title>
+
+ <programlisting>FullTextSession fullTextSession = Search.getFullTextSession(session);
Transaction tx = fullTextSession.beginTransaction();
for (Customer customer : customers) {
<emphasis role="bold">fullTextSession.index(customer);</emphasis>
}
tx.commit(); //index are written at commit time </programlisting>
+ </example>
<para>For maximum efficiency, Hibernate Search batches index operations
- and executes them at commit time (Note: you don't need to use
- <classname>org.hibernate.Transaction</classname> in a JTA
- environment).</para>
+ and executes them at commit time. If you expect to index a lot of data,
+ however, you need to be careful about memory consumption since all
+ documents are kept in a queue until the transaction commit. You can
+ potentially face an <classname>OutOfMemoryException</classname>. To avoid
+ this exception, you can use
+ <methodname>fullTextSession.flushToIndexes()</methodname>. Every time
+ <methodname>fullTextSession.flushToIndexes()</methodname> is called (or if
+ the transaction is committed), the batch queue is processed (freeing
+ memory) and all index changes are applied. Be aware that once flushed,
+ changes cannot be rolled back.</para>
- <para>If you expect to index a lot of data, you need to be careful about
- memory consumption: since all documents are kept in a queue until the
- transaction commit, you can potentially face an
- <classname>OutOfMemoryException</classname>.</para>
-
- <para>To avoid that, you can use
- <methodname>fullTextSession.flushToIndexes()</methodname>: all index
- operations are queued until
- <methodname>fullTextSession.flushToIndexes()</methodname> is called. Every
- time <methodname>fullTextSession.flushToIndexes()</methodname> is called
- (or if the transaction is committed), the queue is processed (freeing
- memory) and emptied. Be aware that changes made before a flush cannot be
- rollbacked. </para>
-
<note>
<para><literal>hibernate.search.worker.batch_size</literal> has been
deprecated in favor of this explicit API which provides better
@@ -70,26 +69,43 @@
</note>
<para>Other parameters which also can affect indexing time and memory
- consumption are
- <literal>hibernate.search.[default|<indexname>].indexwriter.batch.max_buffered_docs</literal>
- ,
- <literal>hibernate.search.[default|<indexname>].indexwriter.batch.max_field_length</literal>
- ,
- <literal>hibernate.search.[default|<indexname>].indexwriter.batch.max_merge_docs</literal>
- ,
- <literal>hibernate.search.[default|<indexname>].indexwriter.batch.merge_factor</literal>
- ,
- <literal>hibernate.search.[default|<indexname>].indexwriter.batch.ram_buffer_size</literal>
- and
- <literal>hibernate.search.[default|<indexname>].indexwriter.batch.term_index_interval</literal>
- . These parameters are Lucene specific and Hibernate Search is just
+ consumption are:</para>
+
+ <itemizedlist>
+ <listitem>
+ <literal>hibernate.search.[default|<indexname>].indexwriter.[batch|transaction].max_buffered_docs</literal>
+ </listitem>
+
+ <listitem>
+ <literal>hibernate.search.[default|<indexname>].indexwriter.[batch|transaction].max_field_length</literal>
+ </listitem>
+
+ <listitem>
+ <literal>hibernate.search.[default|<indexname>].indexwriter.[batch|transaction].max_merge_docs</literal>
+ </listitem>
+
+ <listitem>
+ <literal>hibernate.search.[default|<indexname>].indexwriter.[batch|transaction].merge_factor</literal>
+ </listitem>
+
+ <listitem>
+ <literal>hibernate.search.[default|<indexname>].indexwriter.[batch|transaction].ram_buffer_size</literal>
+ </listitem>
+
+ <listitem>
+ <literal>hibernate.search.[default|<indexname>].indexwriter.[batch|transaction].term_index_interval</literal>
+ </listitem>
+ </itemizedlist>
+
+ <para>These parameters are Lucene specific and Hibernate Search is just
passing these parameters through - see <xref
linkend="lucene-indexing-performance" /> for more details.</para>
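As an illustrative sketch, the batch-scoped variants of these parameters could be set like so in `hibernate.properties` (the values and the `Customer` index name below are made-up examples, not recommendations):

```properties
# Illustrative values only - tune for your data set and heap size.
hibernate.search.default.indexwriter.batch.max_buffered_docs = 100
hibernate.search.default.indexwriter.batch.merge_factor = 20
hibernate.search.default.indexwriter.batch.ram_buffer_size = 64
# Transaction-scoped variant, overridden for one specific index:
hibernate.search.Customer.indexwriter.transaction.max_buffered_docs = 10
```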
- <para>Here is an especially efficient way to index a given class (useful
- for index (re)initialization):</para>
+ <example>
+ <title>Efficiently indexing a given class (useful for index
+ (re)initialization)</title>
- <programlisting>fullTextSession.setFlushMode(FlushMode.MANUAL);
+ <programlisting>fullTextSession.setFlushMode(FlushMode.MANUAL);
fullTextSession.setCacheMode(CacheMode.IGNORE);
transaction = fullTextSession.beginTransaction();
//Scrollable results will avoid loading too many objects in memory
@@ -106,9 +122,10 @@
}
}
transaction.commit();</programlisting>
+ </example>
- <para>Try to use a batch size that guaranty that your application will not
- run out of memory.</para>
+ <para>Try to use a batch size that guarantees that your application will
+ not run out of memory.</para>
</section>
<section>
@@ -116,29 +133,38 @@
<para>It is equally possible to remove an entity or all entities of a
given type from a Lucene index without the need to physically remove them
- from the database. This operation is named purging and is done through the
- <classname>FullTextSession</classname>.</para>
+ from the database. This operation is named purging and is also done
+ through the <classname>FullTextSession</classname>.</para>
- <programlisting>FullTextSession fullTextSession = Search.getFullTextSession(session);
+ <example>
+ <title>Purging a specific instance of an entity from the index</title>
+
+ <programlisting>FullTextSession fullTextSession = Search.getFullTextSession(session);
Transaction tx = fullTextSession.beginTransaction();
for (Customer customer : customers) {
<emphasis role="bold">fullTextSession.purge( Customer.class, customer.getId() );</emphasis>
}
tx.commit(); //index are written at commit time </programlisting>
+ </example>
<para>Purging will remove the entity with the given id from the Lucene
index but will not touch the database.</para>
<para>If you need to remove all entities of a given type, you can use the
- <methodname>purgeAll</methodname> method. This operation remove all entities of the type passed
- as a parameter as well as all its subtypes.</para>
+ <methodname>purgeAll</methodname> method. This operation removes all
+ entities of the type passed as a parameter as well as all its
+ subtypes.</para>
- <programlisting>FullTextSession fullTextSession = Search.getFullTextSession(session);
+ <example>
+ <title>Purging all instances of an entity from the index</title>
+
+ <programlisting>FullTextSession fullTextSession = Search.getFullTextSession(session);
Transaction tx = fullTextSession.beginTransaction();
<emphasis role="bold">fullTextSession.purgeAll( Customer.class );</emphasis>
//optionally optimize the index
//fullTextSession.getSearchFactory().optimize( Customer.class );
tx.commit(); //index are written at commit time </programlisting>
+ </example>
<para>It is recommended to optimize the index after such an
operation.</para>
@@ -150,4 +176,4 @@
well.</para>
</note>
</section>
-</chapter>
\ No newline at end of file
+</chapter>
Modified: search/trunk/doc/reference/en/modules/lucene-native.xml
===================================================================
--- search/trunk/doc/reference/en/modules/lucene-native.xml 2008-12-02 15:11:04 UTC (rev 15641)
+++ search/trunk/doc/reference/en/modules/lucene-native.xml 2008-12-02 16:33:47 UTC (rev 15642)
@@ -22,8 +22,8 @@
~ 51 Franklin Street, Fifth Floor
~ Boston, MA 02110-1301 USA
-->
-
-<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd">
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN"
+"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd">
<chapter id="search-lucene-native">
<!-- $Id$ -->
@@ -37,8 +37,12 @@
way to access Lucene natively. The <classname>SearchFactory</classname>
can be accessed from a <classname>FullTextSession</classname>:</para>
- <programlisting>FullTextSession fullTextSession = Search.getFullTextSession(regularSession);
+ <example>
+ <title>Accessing the <classname>SearchFactory</classname></title>
+
+ <programlisting>FullTextSession fullTextSession = Search.getFullTextSession(regularSession);
SearchFactory searchFactory = fullTextSession.getSearchFactory();</programlisting>
+ </example>
</section>
<section>
@@ -51,12 +55,16 @@
<classname>DirectoryProvider</classname>s per indexed class. One directory
provider can be shared amongst several indexed classes if the classes
share the same underlying index directory. While usually not the case, a
- given entity can have several <classname>DirectoryProvider</classname>s is
+ given entity can have several <classname>DirectoryProvider</classname>s if
the index is sharded (see <xref
linkend="search-configuration-directory-sharding" />).</para>
- <programlisting>DirectoryProvider[] provider = searchFactory.getDirectoryProviders(Order.class);
+ <example>
+ <title>Accessing the Lucene <classname>Directory</classname></title>
+
+ <programlisting>DirectoryProvider[] provider = searchFactory.getDirectoryProviders(Order.class);
org.apache.lucene.store.Directory directory = provider[0].getDirectory();</programlisting>
+ </example>
<para>In this example, directory points to the lucene index storing
<classname>Order</classname>s information. Note that the obtained Lucene
@@ -68,11 +76,14 @@
<title>Using an IndexReader</title>
<para>Queries in Lucene are executed on an <literal>IndexReader</literal>.
- Hibernate Search caches such index readers to maximize performances. Your
- code can access such cached / shared resources. You will just have to
- follow some "good citizen" rules.</para>
+ Hibernate Search caches all index readers to maximize performance. Your
+ code can access these cached resources, but you have to follow some "good
+ citizen" rules.</para>
- <programlisting>DirectoryProvider orderProvider = searchFactory.getDirectoryProviders(Order.class)[0];
+ <example>
+ <title>Accessing an <classname>IndexReader</classname></title>
+
+ <programlisting>DirectoryProvider orderProvider = searchFactory.getDirectoryProviders(Order.class)[0];
DirectoryProvider clientProvider = searchFactory.getDirectoryProviders(Client.class)[0];
ReaderProvider readerProvider = searchFactory.getReaderProvider();
@@ -84,24 +95,26 @@
finally {
readerProvider.closeReader(reader);
}</programlisting>
+ </example>
<para>The ReaderProvider (described in <xref
linkend="search-architecture-readerstrategy" />), will open an IndexReader
- on top of the index(es) referenced by the directory providers. This
- IndexReader being shared amongst several clients, you must adhere to the
- following rules:</para>
+ on top of the index(es) referenced by the directory providers. Because
+ this <classname>IndexReader</classname> is shared amongst several clients,
+ you must adhere to the following rules:</para>
<itemizedlist>
<listitem>
<para>Never call indexReader.close(), but always call
- readerProvider.closeReader(reader); (a finally block is the best
- area).</para>
+ readerProvider.closeReader(reader), preferably in a finally
+ block.</para>
</listitem>
<listitem>
- <para>This indexReader can't be used for modification operations
- (you would get an exception). If you want to use a read/write index reader,
- open one from the Lucene Directory object.</para>
+ <para>Don't use this <classname>IndexReader</classname> for
+ modification operations (you would get an exception). If you want to
+ use a read/write index reader, open one from the Lucene Directory
+ object.</para>
</listitem>
</itemizedlist>
@@ -156,10 +169,10 @@
</row>
<row>
- <entry align="left">queryNorm(q) </entry>
+ <entry align="left">queryNorm(q)</entry>
<entry>Normalizing factor used to make scores between queries
- comparable. </entry>
+ comparable.</entry>
</row>
<row>
@@ -178,7 +191,7 @@
</tgroup>
</informaltable>It is beyond the scope of this manual to explain this
formula in more detail. Please refer to
- <classname>Similarity</classname>'s Javadocs for more information. </para>
+ <classname>Similarity</classname>'s Javadocs for more information.</para>
<para>Hibernate Search provides two ways to modify Lucene's similarity
calculation. First you can set the default similarity by specifying the
@@ -196,6 +209,6 @@
term appears in a document. Documents with a single occurrence of the term
should be scored the same as documents with multiple occurrences. In this
case your custom implementation of the method <methodname>tf(float
- freq)</methodname> should return 1.0. </para>
+ freq)</methodname> should return 1.0.</para>
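The effect of such an override can be illustrated with a self-contained sketch. `DefaultTf` below merely stands in for Lucene's `DefaultSimilarity`, whose `tf(freq)` is `sqrt(freq)`; it exists only so the snippet compiles without the Lucene jar. In a real application you would extend `org.apache.lucene.search.DefaultSimilarity` instead and register the subclass as described above:

```java
// Stand-in for Lucene's DefaultSimilarity term-frequency factor,
// where tf(freq) = sqrt(freq). Local class for illustration only.
class DefaultTf {
    public float tf(float freq) {
        return (float) Math.sqrt(freq);
    }
}

// A custom factor that ignores how often a term occurs: a document with a
// single occurrence of the term scores the same as one with many.
class FrequencyIgnoringTf extends DefaultTf {
    @Override
    public float tf(float freq) {
        return freq > 0f ? 1.0f : 0.0f;
    }
}
```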
</section>
-</chapter>
\ No newline at end of file
+</chapter>
Modified: search/trunk/doc/reference/en/modules/optimize.xml
===================================================================
--- search/trunk/doc/reference/en/modules/optimize.xml 2008-12-02 15:11:04 UTC (rev 15641)
+++ search/trunk/doc/reference/en/modules/optimize.xml 2008-12-02 16:33:47 UTC (rev 15642)
@@ -22,23 +22,23 @@
~ 51 Franklin Street, Fifth Floor
~ Boston, MA 02110-1301 USA
-->
-
-<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd">
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN"
+"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd">
<chapter id="search-optimize">
<!-- $Id$ -->
<title>Index Optimization</title>
<para>From time to time, the Lucene index needs to be optimized. The process
- is essentially a defragmentation: until the optimization occurs deleted
- documents are just marked as such, no physical deletion is applied; the
- optimization can also adjust the number of files in the Lucene
- Directory.</para>
+ is essentially a defragmentation. Until an optimization is triggered, Lucene
+ only marks deleted documents as such; no physical deletions are applied.
+ During the optimization process the deletions will be applied, which also
+ affects the number of files in the Lucene Directory.</para>
- <para>The optimization speeds up searches but in no way speeds up indexation
- (update). During an optimization, searches can be performed (but will most
- likely be slowed down), and all index updates will be stopped. Prefer
- optimizing:</para>
+ <para>Optimizing the Lucene index speeds up searches but has no effect on
+ the indexing (update) performance. During an optimization, searches can be
+ performed, but will most likely be slowed down. All index updates will be
+ stopped. It is recommended to schedule optimization:</para>
<itemizedlist>
<listitem>
@@ -46,40 +46,42 @@
</listitem>
<listitem>
- <para>after a lot of index modifications (doing so before will not speed
- up the indexation process)</para>
+ <para>after a lot of index modifications</para>
</listitem>
</itemizedlist>
<section>
<title>Automatic optimization</title>
- <para>Hibernate Search can optimize automatically an index after:</para>
+ <para>Hibernate Search can automatically optimize an index after:</para>
<itemizedlist>
<listitem>
- <para>a certain amount of operations have been applied (insertion,
- deletion)</para>
+ <para>a certain amount of operations (insertion, deletion)</para>
</listitem>
<listitem>
- <para>or a certain amout of transactions have been applied</para>
+ <para>or a certain amount of transactions</para>
</listitem>
</itemizedlist>
- <para>The configuration can be global or defined at the index
- level:</para>
+ <para>The configuration for automatic index optimization can be defined on
+ a global level or per index:</para>
- <programlisting>hibernate.search.default.optimizer.operation_limit.max = 1000
+ <example>
+ <title>Defining automatic optimization parameters</title>
+
+ <programlisting>hibernate.search.default.optimizer.operation_limit.max = 1000
hibernate.search.default.optimizer.transaction_limit.max = 100
hibernate.search.Animal.optimizer.transaction_limit.max = 50</programlisting>
+ </example>
<para>An optimization will be triggered for the <literal>Animal</literal>
index as soon as either:</para>
<itemizedlist>
<listitem>
- <para>the number of addition and deletion reaches 1000</para>
+ <para>the number of additions and deletions reaches 1000</para>
</listitem>
<listitem>
@@ -100,22 +102,25 @@
<para>You can programmatically optimize (defragment) a Lucene index from
Hibernate Search through the <classname>SearchFactory</classname>:</para>
- <programlisting>searchFactory.optimize(Order.class);</programlisting>
+ <example>
+ <title>Programmatic index optimization</title>
- <programlisting>searchFactory.optimize();</programlisting>
+ <programlisting>FullTextSession fullTextSession = Search.getFullTextSession(regularSession);
+SearchFactory searchFactory = fullTextSession.getSearchFactory();
+searchFactory.optimize(Order.class);
+// or
+searchFactory.optimize();</programlisting>
+ </example>
+
<para>The first example optimizes the Lucene index holding
<classname>Order</classname>s; the second optimizes all indexes.</para>
- <para>The <classname>SearchFactory</classname> can be accessed from a
- <classname>FullTextSession</classname>:</para>
-
- <programlisting>FullTextSession fullTextSession = Search.getFullTextSession(regularSession);
-SearchFactory searchFactory = fullTextSession.getSearchFactory();</programlisting>
-
- <para>Note that <literal>searchFactory.optimize()</literal> has no effect
- on a JMS backend. You must apply the optimize operation on the Master
- node.</para>
+ <note>
+ <para><literal>searchFactory.optimize()</literal> has no effect on a JMS
+ backend. You must apply the optimize operation on the Master
+ node.</para>
+ </note>
</section>
<section>
@@ -151,4 +156,4 @@
</itemizedlist> See <xref linkend="lucene-indexing-performance" /> for
more details.</para>
</section>
-</chapter>
\ No newline at end of file
+</chapter>
Hibernate SVN: r15641 - in branches/Branch_3_2/HibernateExt/tools/src: test/org/hibernate/tool/test/jdbc2cfg and 1 other directory.
by hibernate-commits@lists.jboss.org
Author: anthonyHib
Date: 2008-12-02 10:11:04 -0500 (Tue, 02 Dec 2008)
New Revision: 15641
Modified:
branches/Branch_3_2/HibernateExt/tools/src/java/org/hibernate/tool/hbm2x/pojo/EntityPOJOClass.java
branches/Branch_3_2/HibernateExt/tools/src/test/org/hibernate/tool/test/jdbc2cfg/OneToOneTest.java
Log:
HBX-524 : jpa fix
Modified: branches/Branch_3_2/HibernateExt/tools/src/java/org/hibernate/tool/hbm2x/pojo/EntityPOJOClass.java
===================================================================
--- branches/Branch_3_2/HibernateExt/tools/src/java/org/hibernate/tool/hbm2x/pojo/EntityPOJOClass.java 2008-12-02 15:00:08 UTC (rev 15640)
+++ branches/Branch_3_2/HibernateExt/tools/src/java/org/hibernate/tool/hbm2x/pojo/EntityPOJOClass.java 2008-12-02 15:11:04 UTC (rev 15641)
@@ -22,6 +22,7 @@
import org.hibernate.mapping.OneToMany;
import org.hibernate.mapping.OneToOne;
import org.hibernate.mapping.PersistentClass;
+import org.hibernate.mapping.PrimaryKey;
import org.hibernate.mapping.Property;
import org.hibernate.mapping.RootClass;
import org.hibernate.mapping.Selectable;
@@ -32,6 +33,7 @@
import org.hibernate.mapping.UniqueKey;
import org.hibernate.mapping.Value;
import org.hibernate.tool.hbm2x.Cfg2JavaTool;
+import org.hibernate.type.ForeignKeyDirection;
import org.hibernate.util.JoinedIterator;
import org.hibernate.util.StringHelper;
@@ -447,16 +449,43 @@
return buffer.toString();
}
+ public boolean isSharedPkBasedOneToOne(OneToOne oneToOne){
+ Iterator joinColumnsIt = oneToOne.getColumnIterator();
+ Set joinColumns = new HashSet();
+ while ( joinColumnsIt.hasNext() ) {
+ joinColumns.add( joinColumnsIt.next() );
+ }
+
+ if ( joinColumns.size() == 0 )
+ return false;
+
+ Iterator<Column> idColumnsIt = getIdentifierProperty().getColumnIterator();
+ while ( idColumnsIt.hasNext() ) {
+ if (!joinColumns.contains(idColumnsIt.next()) )
+ return false;
+ }
+
+ return true;
+ }
+
public String generateOneToOneAnnotation(Property property, Configuration cfg) {
+ OneToOne oneToOne = (OneToOne)property.getValue();
+
+ boolean pkIsAlsoFk = isSharedPkBasedOneToOne(oneToOne);
+
AnnotationBuilder ab = AnnotationBuilder.createAnnotation( importType("javax.persistence.OneToOne") )
.addAttribute( "cascade", getCascadeTypes(property))
.addAttribute( "fetch", getFetchType(property));
- OneToOne oneToOne = (OneToOne)property.getValue();
- if (oneToOne.isConstrained())
+
+ if ( oneToOne.getForeignKeyType().equals(ForeignKeyDirection.FOREIGN_KEY_TO_PARENT) ){
ab.addQuotedAttribute("mappedBy", getOneToOneMappedBy(cfg, oneToOne));
+ }
+
StringBuffer buffer = new StringBuffer(ab.getResult());
buffer.append(getHibernateCascadeTypeAnnotation(property));
- if (!oneToOne.isConstrained()){
+
+ if ( pkIsAlsoFk && oneToOne.getForeignKeyType().equals(ForeignKeyDirection.FOREIGN_KEY_FROM_PARENT) ){
AnnotationBuilder ab1 = AnnotationBuilder.createAnnotation( importType("javax.persistence.PrimaryKeyJoinColumn") );
buffer.append(ab1.getResult());
}
@@ -691,14 +720,20 @@
joinColumns.add( joinColumnsIt.next() );
}
PersistentClass pc = cfg.getClassMapping( oneToOne.getReferencedEntityName() );
+ String referencedPropertyName = oneToOne.getReferencedPropertyName();
+ if ( referencedPropertyName != null )
+ return referencedPropertyName;
+
Iterator properties = pc.getPropertyClosureIterator();
//TODO we should check the table too
boolean isOtherSide = false;
mappedBy = "unresolved";
+
+
while ( ! isOtherSide && properties.hasNext() ) {
Property oneProperty = (Property) properties.next();
Value manyValue = oneProperty.getValue();
- if ( manyValue != null && manyValue instanceof OneToOne ) {
+ if ( manyValue != null && ( manyValue instanceof OneToOne || manyValue instanceof ManyToOne ) ) {
if ( joinColumns.size() == manyValue.getColumnSpan() ) {
isOtherSide = true;
Iterator it = manyValue.getColumnIterator();
Modified: branches/Branch_3_2/HibernateExt/tools/src/test/org/hibernate/tool/test/jdbc2cfg/OneToOneTest.java
===================================================================
--- branches/Branch_3_2/HibernateExt/tools/src/test/org/hibernate/tool/test/jdbc2cfg/OneToOneTest.java 2008-12-02 15:00:08 UTC (rev 15640)
+++ branches/Branch_3_2/HibernateExt/tools/src/test/org/hibernate/tool/test/jdbc2cfg/OneToOneTest.java 2008-12-02 15:11:04 UTC (rev 15641)
@@ -246,7 +246,7 @@
TestHelper.compile(
getOutputDir(), getOutputDir(), TestHelper.visitAllFiles( getOutputDir(), list ), "1.5",
TestHelper.buildClasspath( jars )
- );
+ );
URL[] urls = new URL[] { getOutputDir().toURL() };
ClassLoader oldLoader = Thread.currentThread().getContextClassLoader();
URLClassLoader ucl = new URLClassLoader(urls, oldLoader );
@@ -299,6 +299,7 @@
"create table ADDRESS_PERSON ( address_id integer not null, name varchar(50), primary key (address_id), constraint address_person foreign key (address_id) references PERSON)",
"create table MULTI_PERSON ( person_id integer not null, person_compid integer not null, name varchar(50), primary key (person_id, person_compid) )",
"create table ADDRESS_MULTI_PERSON ( address_id integer not null, address_compid integer not null, name varchar(50), primary key (address_id, address_compid), constraint address_multi_person foreign key (address_id, address_compid) references MULTI_PERSON)",
+
};
}
@@ -308,7 +309,6 @@
"drop table PERSON",
"drop table ADDRESS_MULTI_PERSON",
"drop table MULTI_PERSON",
-
};
}
Hibernate SVN: r15640 - in validator/trunk: validation-api/src/main/java/javax/validation and 1 other directory.
by hibernate-commits@lists.jboss.org
Author: epbernard
Date: 2008-12-02 10:00:08 -0500 (Tue, 02 Dec 2008)
New Revision: 15640
Modified:
validator/trunk/hibernate-validator/src/main/java/org/hibernate/validation/impl/ConstraintViolationImpl.java
validator/trunk/validation-api/src/main/java/javax/validation/ConstraintViolation.java
Log:
BVAL-73 remove ConstraintViolation.getBeanClass
Modified: validator/trunk/hibernate-validator/src/main/java/org/hibernate/validation/impl/ConstraintViolationImpl.java
===================================================================
--- validator/trunk/hibernate-validator/src/main/java/org/hibernate/validation/impl/ConstraintViolationImpl.java 2008-12-02 14:49:20 UTC (rev 15639)
+++ validator/trunk/hibernate-validator/src/main/java/org/hibernate/validation/impl/ConstraintViolationImpl.java 2008-12-02 15:00:08 UTC (rev 15640)
@@ -78,13 +78,6 @@
/**
* {@inheritDoc}
*/
- public Class<T> getBeanClass() {
- return beanClass;
- }
-
- /**
- * {@inheritDoc}
- */
public Object getInvalidValue() {
return value;
}
Modified: validator/trunk/validation-api/src/main/java/javax/validation/ConstraintViolation.java
===================================================================
--- validator/trunk/validation-api/src/main/java/javax/validation/ConstraintViolation.java 2008-12-02 14:49:20 UTC (rev 15639)
+++ validator/trunk/validation-api/src/main/java/javax/validation/ConstraintViolation.java 2008-12-02 15:00:08 UTC (rev 15640)
@@ -59,13 +59,7 @@
*/
String getPropertyPath();
-
/**
- * @return the type of interface or class being validated.
- */
- Class<T> getBeanClass();
-
- /**
* @return the value failing to pass the constraint.
*/
Object getInvalidValue();
Hibernate SVN: r15639 - in search/trunk: doc/quickstart and 4 other directories.
by hibernate-commits@lists.jboss.org
Author: hardy.ferentschik
Date: 2008-12-02 09:49:20 -0500 (Tue, 02 Dec 2008)
New Revision: 15639
Modified:
search/trunk/build.xml
search/trunk/changelog.txt
search/trunk/doc/quickstart/pom.xml
search/trunk/doc/quickstart/src/main/resources/archetype-resources/pom.xml
search/trunk/doc/reference/en/master.xml
search/trunk/doc/reference/en/modules/getting-started.xml
search/trunk/pom.xml
search/trunk/readme.txt
search/trunk/src/java/org/hibernate/search/Version.java
Log:
changed the version number from CR1 to GA
Modified: search/trunk/build.xml
===================================================================
--- search/trunk/build.xml 2008-12-02 14:47:55 UTC (rev 15638)
+++ search/trunk/build.xml 2008-12-02 14:49:20 UTC (rev 15639)
@@ -18,7 +18,7 @@
<!-- Name of project and version, used to create filenames -->
<property name="Name" value="Hibernate Search"/>
<property name="name" value="hibernate-search"/>
- <property name="version" value="3.1.0.CR1"/>
+ <property name="version" value="3.1.0.GA"/>
<property name="javadoc.packagenames" value="org.hibernate.search.*"/>
<property name="copy.test" value="true"/>
<property name="javac.source" value="1.5"/>
Modified: search/trunk/changelog.txt
===================================================================
--- search/trunk/changelog.txt 2008-12-02 14:47:55 UTC (rev 15638)
+++ search/trunk/changelog.txt 2008-12-02 14:49:20 UTC (rev 15639)
@@ -1,6 +1,9 @@
Hibernate Search Changelog
==========================
+3.1.0.GA (4-12-2008)
+------------------------
+
3.1.0.CR1 (17-10-2008)
------------------------
Modified: search/trunk/doc/quickstart/pom.xml
===================================================================
--- search/trunk/doc/quickstart/pom.xml 2008-12-02 14:47:55 UTC (rev 15638)
+++ search/trunk/doc/quickstart/pom.xml 2008-12-02 14:49:20 UTC (rev 15639)
@@ -3,5 +3,5 @@
<groupId>org.hibernate</groupId>
<artifactId>hibernate-search-quickstart</artifactId>
<packaging>jar</packaging>
- <version>3.1.0.CR1</version>
+ <version>3.1.0.GA</version>
</project>
Modified: search/trunk/doc/quickstart/src/main/resources/archetype-resources/pom.xml
===================================================================
--- search/trunk/doc/quickstart/src/main/resources/archetype-resources/pom.xml 2008-12-02 14:47:55 UTC (rev 15638)
+++ search/trunk/doc/quickstart/src/main/resources/archetype-resources/pom.xml 2008-12-02 14:49:20 UTC (rev 15639)
@@ -11,7 +11,7 @@
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-search</artifactId>
- <version>3.1.0.CR1</version>
+ <version>3.1.0.GA</version>
</dependency>
<dependency>
<groupId>cglib</groupId>
Modified: search/trunk/doc/reference/en/master.xml
===================================================================
--- search/trunk/doc/reference/en/master.xml 2008-12-02 14:47:55 UTC (rev 15638)
+++ search/trunk/doc/reference/en/master.xml 2008-12-02 14:49:20 UTC (rev 15639)
@@ -25,7 +25,7 @@
-->
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN"
"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
-<!ENTITY versionNumber "3.1.0.CR1">
+<!ENTITY versionNumber "3.1.0.GA">
<!ENTITY copyrightYear "2004">
<!ENTITY copyrightHolder "Red Hat Middleware, LLC.">
]>
Modified: search/trunk/doc/reference/en/modules/getting-started.xml
===================================================================
--- search/trunk/doc/reference/en/modules/getting-started.xml 2008-12-02 14:47:55 UTC (rev 15638)
+++ search/trunk/doc/reference/en/modules/getting-started.xml 2008-12-02 14:49:20 UTC (rev 15639)
@@ -119,7 +119,7 @@
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-search</artifactId>
- <version>3.1.0.CR1</version>
+ <version>3.1.0.GA</version>
</dependency>
<dependency>
<groupId>org.hibernate</groupId>
@@ -562,7 +562,7 @@
<para><programlisting>mvn archetype:create \
-DarchetypeGroupId=org.hibernate \
-DarchetypeArtifactId=hibernate-search-quickstart \
- -DarchetypeVersion=3.1.0.CR1 \
+ -DarchetypeVersion=3.1.0.GA \
-DgroupId=my.company -DartifactId=quickstart</programlisting>Using the
maven project you can execute the examples, inspect the file system based
index and search and retrieve a list of managed objects. Just run
Modified: search/trunk/pom.xml
===================================================================
--- search/trunk/pom.xml 2008-12-02 14:47:55 UTC (rev 15638)
+++ search/trunk/pom.xml 2008-12-02 14:49:20 UTC (rev 15639)
@@ -4,7 +4,7 @@
<modelVersion>4.0.0</modelVersion>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-search</artifactId>
- <version>3.1.0.CR1</version>
+ <version>3.1.0.GA</version>
<name>Hibernate Search</name>
<description>Hibernate Search</description>
<url>http://search.hibernate.org</url>
Modified: search/trunk/readme.txt
===================================================================
--- search/trunk/readme.txt 2008-12-02 14:47:55 UTC (rev 15638)
+++ search/trunk/readme.txt 2008-12-02 14:49:20 UTC (rev 15639)
@@ -1,6 +1,6 @@
Hibernate Search
==================================================
-Version: 3.1.0.CR1, 17.11.2008
+Version: 3.1.0.GA, 4.12.2008
Description
-----------
Modified: search/trunk/src/java/org/hibernate/search/Version.java
===================================================================
--- search/trunk/src/java/org/hibernate/search/Version.java 2008-12-02 14:47:55 UTC (rev 15638)
+++ search/trunk/src/java/org/hibernate/search/Version.java 2008-12-02 14:49:20 UTC (rev 15639)
@@ -12,7 +12,7 @@
* @author Emmanuel Bernard
*/
public class Version {
- public static final String VERSION = "3.1.0.CR1";
+ public static final String VERSION = "3.1.0.GA";
private static final Logger log = LoggerFactory.make();
Hibernate SVN: r15638 - in validator/trunk: hibernate-validator/src/main/java/org/hibernate/validation/impl and 3 other directories.
by hibernate-commits@lists.jboss.org
Author: epbernard
Date: 2008-12-02 09:47:55 -0500 (Tue, 02 Dec 2008)
New Revision: 15638
Modified:
validator/trunk/hibernate-validator/src/main/java/org/hibernate/validation/engine/ValidatorImpl.java
validator/trunk/hibernate-validator/src/main/java/org/hibernate/validation/impl/ConstraintViolationImpl.java
validator/trunk/hibernate-validator/src/test/java/org/hibernate/validation/bootstrap/ValidationTest.java
validator/trunk/hibernate-validator/src/test/java/org/hibernate/validation/engine/ValidatorImplTest.java
validator/trunk/validation-api/src/main/java/javax/validation/ConstraintViolation.java
Log:
BVAL-76 add ConstraintViolation getRawMessage() and rename getMessage() to getInterpolatedMessage()
Modified: validator/trunk/hibernate-validator/src/main/java/org/hibernate/validation/engine/ValidatorImpl.java
===================================================================
--- validator/trunk/hibernate-validator/src/main/java/org/hibernate/validation/engine/ValidatorImpl.java 2008-12-02 14:28:28 UTC (rev 15637)
+++ validator/trunk/hibernate-validator/src/main/java/org/hibernate/validation/engine/ValidatorImpl.java 2008-12-02 14:47:55 UTC (rev 15638)
@@ -164,13 +164,15 @@
if ( !constraintDescriptor.getConstraintImplementation().isValid( value, contextImpl ) ) {
for ( ConstraintContextImpl.ErrorMessage error : contextImpl.getErrorMessages() ) {
- String message = messageResolver.interpolate(
- error.getMessage(),
+ final String message = error.getMessage();
+ String interpolatedMessage = messageResolver.interpolate(
+ message,
constraintDescriptor,
leafBeanInstance
);
ConstraintViolationImpl<T> failingConstraintViolation = new ConstraintViolationImpl<T>(
message,
+ interpolatedMessage,
context.getRootBean(),
metaDataProvider.getBeanClass(),
leafBeanInstance,
@@ -299,13 +301,15 @@
if ( !wrapper.descriptor.getConstraintImplementation().isValid( wrapper.value, contextImpl ) ) {
for ( ConstraintContextImpl.ErrorMessage error : contextImpl.getErrorMessages() ) {
- String message = messageResolver.interpolate(
- error.getMessage(),
+ final String message = error.getMessage();
+ String interpolatedMessage = messageResolver.interpolate(
+ message,
wrapper.descriptor,
wrapper.value
);
ConstraintViolationImpl<T> failingConstraintViolation = new ConstraintViolationImpl<T>(
message,
+ interpolatedMessage,
object,
beanType,
object,
@@ -366,13 +370,15 @@
ConstraintContextImpl contextImpl = new ConstraintContextImpl(constraintDescriptor);
if ( !constraintDescriptor.getConstraintImplementation().isValid( object, contextImpl ) ) {
for ( ConstraintContextImpl.ErrorMessage error : contextImpl.getErrorMessages() ) {
- String message = messageResolver.interpolate(
- error.getMessage(),
+ final String message = error.getMessage();
+ String interpolatedMessage = messageResolver.interpolate(
+ message,
constraintDescriptor,
object
);
ConstraintViolationImpl<T> failingConstraintViolation = new ConstraintViolationImpl<T>(
message,
+ interpolatedMessage,
null,
null,
null,
Modified: validator/trunk/hibernate-validator/src/main/java/org/hibernate/validation/impl/ConstraintViolationImpl.java
===================================================================
--- validator/trunk/hibernate-validator/src/main/java/org/hibernate/validation/impl/ConstraintViolationImpl.java 2008-12-02 14:28:28 UTC (rev 15637)
+++ validator/trunk/hibernate-validator/src/main/java/org/hibernate/validation/impl/ConstraintViolationImpl.java 2008-12-02 14:47:55 UTC (rev 15638)
@@ -27,7 +27,7 @@
* @author Hardy Ferentschik
*/
public class ConstraintViolationImpl<T> implements ConstraintViolation<T> {
- private String message;
+ private String interpolatedMessage;
private T rootBean;
private Class<T> beanClass;
private Object value;
@@ -35,11 +35,14 @@
private HashSet<String> groups;
private Object leafBeanInstance;
private final ConstraintDescriptor constraintDescriptor;
+ private String rawMessage;
- public ConstraintViolationImpl(String message, T rootBean, Class<T> beanClass, Object leafBeanInstance, Object value,
+ public ConstraintViolationImpl(String rawMessage, String interpolatedMessage, T rootBean, Class<T> beanClass,
+ Object leafBeanInstance, Object value,
String propertyPath, String group, ConstraintDescriptor constraintDescriptor) {
- this.message = message;
+ this.rawMessage = rawMessage;
+ this.interpolatedMessage = interpolatedMessage;
this.rootBean = rootBean;
this.beanClass = beanClass;
this.value = value;
@@ -53,10 +56,14 @@
/**
* {@inheritDoc}
*/
- public String getMessage() {
- return message;
+ public String getInterpolatedMessage() {
+ return interpolatedMessage;
}
+ public String getRawMessage() {
+ return rawMessage;
+ }
+
/**
* {@inheritDoc}
*/
@@ -118,7 +125,7 @@
if ( beanClass != null ? !beanClass.equals( that.beanClass ) : that.beanClass != null ) {
return false;
}
- if ( message != null ? !message.equals( that.message ) : that.message != null ) {
+ if ( interpolatedMessage != null ? !interpolatedMessage.equals( that.interpolatedMessage ) : that.interpolatedMessage != null ) {
return false;
}
if ( propertyPath != null ? !propertyPath.equals( that.propertyPath ) : that.propertyPath != null ) {
@@ -136,7 +143,7 @@
@Override
public int hashCode() {
- int result = message != null ? message.hashCode() : 0;
+ int result = interpolatedMessage != null ? interpolatedMessage.hashCode() : 0;
result = 31 * result + ( rootBean != null ? rootBean.hashCode() : 0 );
result = 31 * result + ( beanClass != null ? beanClass.hashCode() : 0 );
result = 31 * result + ( value != null ? value.hashCode() : 0 );
Modified: validator/trunk/hibernate-validator/src/test/java/org/hibernate/validation/bootstrap/ValidationTest.java
===================================================================
--- validator/trunk/hibernate-validator/src/test/java/org/hibernate/validation/bootstrap/ValidationTest.java 2008-12-02 14:28:28 UTC (rev 15637)
+++ validator/trunk/hibernate-validator/src/test/java/org/hibernate/validation/bootstrap/ValidationTest.java 2008-12-02 14:47:55 UTC (rev 15638)
@@ -108,7 +108,7 @@
Set<ConstraintViolation<Customer>> constraintViolations = validator.validate( customer );
assertEquals( "Wrong number of constraints", 1, constraintViolations.size() );
ConstraintViolation<Customer> constraintViolation = constraintViolations.iterator().next();
- assertEquals( "Wrong message", "may not be null", constraintViolation.getMessage() );
+ assertEquals( "Wrong message", "may not be null", constraintViolation.getInterpolatedMessage() );
//FIXME nothing guarantees that a builder can be reused
// now we modify the builder, get a new factory and validator and try again
@@ -128,7 +128,7 @@
constraintViolations = validator.validate( customer );
assertEquals( "Wrong number of constraints", 1, constraintViolations.size() );
constraintViolation = constraintViolations.iterator().next();
- assertEquals( "Wrong message", "my custom message", constraintViolation.getMessage() );
+ assertEquals( "Wrong message", "my custom message", constraintViolation.getInterpolatedMessage() );
}
@Test
@@ -146,7 +146,7 @@
Set<ConstraintViolation<Customer>> constraintViolations = validator.validate( customer );
assertEquals( "Wrong number of constraints", 1, constraintViolations.size() );
ConstraintViolation<Customer> constraintViolation = constraintViolations.iterator().next();
- assertEquals( "Wrong message", "may not be null", constraintViolation.getMessage() );
+ assertEquals( "Wrong message", "may not be null", constraintViolation.getInterpolatedMessage() );
//FIXME nothing guarantees that a builder can be reused
// now we modify the builder, get a new factory and validator and try again
Modified: validator/trunk/hibernate-validator/src/test/java/org/hibernate/validation/engine/ValidatorImplTest.java
===================================================================
--- validator/trunk/hibernate-validator/src/test/java/org/hibernate/validation/engine/ValidatorImplTest.java 2008-12-02 14:28:28 UTC (rev 15637)
+++ validator/trunk/hibernate-validator/src/test/java/org/hibernate/validation/engine/ValidatorImplTest.java 2008-12-02 14:47:55 UTC (rev 15638)
@@ -132,7 +132,7 @@
constraintViolations = validator.validate( book, "first", "second", "last" );
ConstraintViolation constraintViolation = constraintViolations.iterator().next();
assertEquals( "Wrong number of constraints", 1, constraintViolations.size() );
- assertEquals( "Wrong message", "may not be empty", constraintViolation.getMessage() );
+ assertEquals( "Wrong message", "may not be empty", constraintViolation.getInterpolatedMessage() );
assertEquals( "Wrong bean class", Book.class, constraintViolation.getBeanClass() );
assertEquals( "Wrong root entity", book, constraintViolation.getRootBean() );
assertEquals( "Wrong value", book.getTitle(), constraintViolation.getInvalidValue() );
@@ -144,7 +144,7 @@
constraintViolations = validator.validate( book, "first", "second", "last" );
constraintViolation = constraintViolations.iterator().next();
assertEquals( "Wrong number of constraints", 1, constraintViolations.size() );
- assertEquals( "Wrong message", "length must be between 0 and 30", constraintViolation.getMessage() );
+ assertEquals( "Wrong message", "length must be between 0 and 30", constraintViolation.getInterpolatedMessage() );
assertEquals( "Wrong bean class", Book.class, constraintViolation.getBeanClass() );
assertEquals( "Wrong root entity", book, constraintViolation.getRootBean() );
assertEquals( "Wrong value", book.getSubtitle(), constraintViolation.getInvalidValue() );
@@ -156,7 +156,7 @@
constraintViolations = validator.validate( book, "first", "second", "last" );
constraintViolation = constraintViolations.iterator().next();
assertEquals( "Wrong number of constraints", 1, constraintViolations.size() );
- assertEquals( "Wrong message", "length must be between 0 and 20", constraintViolation.getMessage() );
+ assertEquals( "Wrong message", "length must be between 0 and 20", constraintViolation.getInterpolatedMessage() );
assertEquals( "Wrong bean class", Author.class, constraintViolation.getBeanClass() );
assertEquals( "Wrong root entity", book, constraintViolation.getRootBean() );
assertEquals( "Wrong value", author.getCompany(), constraintViolation.getInvalidValue() );
@@ -187,7 +187,7 @@
constraintViolations = validator.validate( book, "default" );
ConstraintViolation constraintViolation = constraintViolations.iterator().next();
assertEquals( "Wrong number of constraints", 1, constraintViolations.size() );
- assertEquals( "Wrong message", "may not be null", constraintViolation.getMessage() );
+ assertEquals( "Wrong message", "may not be null", constraintViolation.getInterpolatedMessage() );
assertEquals( "Wrong bean class", Book.class, constraintViolation.getBeanClass() );
assertEquals( "Wrong root entity", book, constraintViolation.getRootBean() );
assertEquals( "Wrong value", book.getTitle(), constraintViolation.getInvalidValue() );
@@ -329,7 +329,7 @@
constraintViolations = validator.validate( customer );
ConstraintViolation constraintViolation = constraintViolations.iterator().next();
assertEquals( "Wrong number of constraints", 1, constraintViolations.size() );
- assertEquals( "Wrong message", "may not be null", constraintViolation.getMessage() );
+ assertEquals( "Wrong message", "may not be null", constraintViolation.getInterpolatedMessage() );
assertEquals( "Wrong bean class", Order.class, constraintViolation.getBeanClass() );
assertEquals( "Wrong root entity", customer, constraintViolation.getRootBean() );
assertEquals( "Wrong value", order1.getOrderNumber(), constraintViolation.getInvalidValue() );
@@ -381,7 +381,7 @@
Set<ConstraintViolation<Actor>> constraintViolations = validator.validate( clint );
ConstraintViolation constraintViolation = constraintViolations.iterator().next();
assertEquals( "Wrong number of constraints", 1, constraintViolations.size() );
- assertEquals( "Wrong message", "may not be empty", constraintViolation.getMessage() );
+ assertEquals( "Wrong message", "may not be empty", constraintViolation.getInterpolatedMessage() );
assertEquals( "Wrong bean class", Actor.class, constraintViolation.getBeanClass() );
assertEquals( "Wrong root entity", clint, constraintViolation.getRootBean() );
assertEquals( "Wrong value", morgan.getLastName(), constraintViolation.getInvalidValue() );
@@ -401,7 +401,7 @@
ConstraintViolation constraintViolation = constraintViolations.iterator().next();
assertEquals( "Wrong number of constraints", 1, constraintViolations.size() );
- assertEquals( "Wrong message", "may not be null", constraintViolation.getMessage() );
+ assertEquals( "Wrong message", "may not be null", constraintViolation.getInterpolatedMessage() );
assertEquals( "Wrong bean class", null, constraintViolation.getBeanClass() );
assertEquals( "Wrong root entity", null, constraintViolation.getRootBean() );
assertEquals( "Wrong value", order.getOrderNumber(), constraintViolation.getInvalidValue() );
Modified: validator/trunk/validation-api/src/main/java/javax/validation/ConstraintViolation.java
===================================================================
--- validator/trunk/validation-api/src/main/java/javax/validation/ConstraintViolation.java 2008-12-02 14:28:28 UTC (rev 15637)
+++ validator/trunk/validation-api/src/main/java/javax/validation/ConstraintViolation.java 2008-12-02 14:47:55 UTC (rev 15638)
@@ -29,11 +29,16 @@
public interface ConstraintViolation<T> {
/**
- * @return The error message for this constraint violation.
+ * @return The interpolated error message for this constraint violation.
*/
- String getMessage();
+ String getInterpolatedMessage();
/**
+ * @return The non-interpolated error message for this constraint violation.
+ */
+ String getRawMessage();
+
+ /**
* @return The root bean being validated.
*/
T getRootBean();
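The BVAL-76 change above splits a violation's message into a raw template (what the constraint declared) and an interpolated result (after resolving against a resource bundle). A minimal stand-alone sketch of that distinction, with invented names that are not the Hibernate Validator API:

```java
import java.util.Map;

// Hypothetical sketch: shows why a violation keeps both the raw template and the
// interpolated message. Clients that re-interpolate (e.g. for another locale)
// need the raw form; display code wants the interpolated form.
public class MessageInterpolationSketch {

    /** Replaces {key} placeholders in the raw template with values from the bundle. */
    public static String interpolate(String rawMessage, Map<String, String> bundle) {
        String result = rawMessage;
        for (Map.Entry<String, String> entry : bundle.entrySet()) {
            result = result.replace("{" + entry.getKey() + "}", entry.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> bundle = Map.of("validator.notNull", "may not be null");
        String raw = "{validator.notNull}";
        // raw corresponds to getRawMessage(), the result to getInterpolatedMessage()
        System.out.println(raw + " -> " + interpolate(raw, bundle));
    }
}
```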
Hibernate SVN: r15637 - in search/trunk/src: java/org/hibernate/search/analyzer and 6 other directories.
by hibernate-commits@lists.jboss.org
Author: hardy.ferentschik
Date: 2008-12-02 09:28:28 -0500 (Tue, 02 Dec 2008)
New Revision: 15637
Added:
search/trunk/src/java/org/hibernate/search/analyzer/
search/trunk/src/java/org/hibernate/search/analyzer/Discriminator.java
search/trunk/src/java/org/hibernate/search/annotations/AnalyzerDiscriminator.java
search/trunk/src/test/org/hibernate/search/test/analyzer/Article.java
search/trunk/src/test/org/hibernate/search/test/analyzer/BlogEntry.java
search/trunk/src/test/org/hibernate/search/test/analyzer/LanguageDiscriminator.java
Modified:
search/trunk/src/java/org/hibernate/search/backend/AddLuceneWork.java
search/trunk/src/java/org/hibernate/search/backend/Workspace.java
search/trunk/src/java/org/hibernate/search/backend/impl/lucene/works/AddWorkDelegate.java
search/trunk/src/java/org/hibernate/search/engine/DocumentBuilderContainedEntity.java
search/trunk/src/java/org/hibernate/search/engine/DocumentBuilderIndexedEntity.java
search/trunk/src/java/org/hibernate/search/util/ScopedAnalyzer.java
search/trunk/src/test/org/hibernate/search/test/analyzer/AnalyzerTest.java
Log:
HSEARCH-221
Implementation of AnalyzerDiscriminator framework
Added: search/trunk/src/java/org/hibernate/search/analyzer/Discriminator.java
===================================================================
--- search/trunk/src/java/org/hibernate/search/analyzer/Discriminator.java (rev 0)
+++ search/trunk/src/java/org/hibernate/search/analyzer/Discriminator.java 2008-12-02 14:28:28 UTC (rev 15637)
@@ -0,0 +1,23 @@
+// $Id:$
+package org.hibernate.search.analyzer;
+
+/**
+ * Allows choosing an analyzer defined by name at runtime.
+ *
+ * @author Hardy Ferentschik
+ */
+public interface Discriminator {
+
+ /**
+ * Allows to specify the analyzer to be used for the given field based on the specified entity state.
+ *
+ * @param value The value of the field the <code>@AnalyzerDiscriminator</code> annotation was placed on. <code>null</code>
+ * if the annotation was placed on class level.
+ * @param entity The entity to be indexed.
+ * @param field The document field.
+ * @return The name of a defined analyzer to be used for the specified <code>field</code> or <code>null</code> if the
+ * default analyzer for this field should be used.
+ * @see org.hibernate.search.annotations.AnalyzerDef
+ */
+ String getAnalyzerDefinitionName(Object value, Object entity, String field);
+}
Property changes on: search/trunk/src/java/org/hibernate/search/analyzer/Discriminator.java
___________________________________________________________________
Name: svn:keywords
+ Id
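The Discriminator contract above is typically implemented against an entity property. Below is a stand-alone sketch modeled on the LanguageDiscriminator test class added in this commit; the interface is re-declared locally (with the corrected method spelling) so the sketch compiles on its own, and the BlogEntry shape is an assumption:

```java
// Illustrative sketch, not the committed test class: a discriminator that picks
// the analyzer definition from the entity's language property.
public class LanguageDiscriminatorSketch {

    // local mirror of org.hibernate.search.analyzer.Discriminator
    public interface Discriminator {
        String getAnalyzerDefinitionName(Object value, Object entity, String field);
    }

    // assumed entity shape: a blog entry carrying a language code
    public static class BlogEntry {
        public final String language; // e.g. "en" or "de"
        public BlogEntry(String language) { this.language = language; }
    }

    /** Selects the @AnalyzerDef whose name matches the language code. */
    public static class LanguageDiscriminator implements Discriminator {
        public String getAnalyzerDefinitionName(Object value, Object entity, String field) {
            if (value == null || !(entity instanceof BlogEntry)) {
                return null; // fall back to the default analyzer for this field
            }
            return (String) value; // language codes match analyzer definition names
        }
    }

    public static void main(String[] args) {
        BlogEntry entry = new BlogEntry("de");
        Discriminator d = new LanguageDiscriminator();
        System.out.println(d.getAnalyzerDefinitionName(entry.language, entry, "text"));
    }
}
```

Returning `null` keeps the statically configured analyzer, so existing mappings are unaffected when the discriminator has nothing to say.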
Added: search/trunk/src/java/org/hibernate/search/annotations/AnalyzerDiscriminator.java
===================================================================
--- search/trunk/src/java/org/hibernate/search/annotations/AnalyzerDiscriminator.java (rev 0)
+++ search/trunk/src/java/org/hibernate/search/annotations/AnalyzerDiscriminator.java 2008-12-02 14:28:28 UTC (rev 15637)
@@ -0,0 +1,22 @@
+// $Id:$
+package org.hibernate.search.annotations;
+
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Documented;
+
+import org.hibernate.search.analyzer.Discriminator;
+
+/**
+ * Allows to dynamically select a named analyzer through a <code>Discriminator</code> implementation.
+ *
+ * @author Hardy Ferentschik
+ */
+@Retention(RetentionPolicy.RUNTIME)
+@Target({ ElementType.TYPE, ElementType.FIELD, ElementType.METHOD })
+@Documented
+public @interface AnalyzerDiscriminator {
+ public Class<? extends Discriminator> impl();
+}
Property changes on: search/trunk/src/java/org/hibernate/search/annotations/AnalyzerDiscriminator.java
___________________________________________________________________
Name: svn:keywords
+ Id
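The annotation carries only an `impl()` class; the framework resolves it reflectively, as `checkForAnalyzerDiscriminator()` in DocumentBuilderContainedEntity does further down in this commit. A self-contained sketch with local stand-ins for the annotation and interface:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Sketch of the resolution step: read impl() from the annotation and
// instantiate it. Annotation and interface are local stand-ins.
public class AnalyzerDiscriminatorSketch {

    @Retention(RetentionPolicy.RUNTIME)
    @Target({ ElementType.TYPE, ElementType.FIELD, ElementType.METHOD })
    public @interface AnalyzerDiscriminator {
        Class<? extends Discriminator> impl();
    }

    public interface Discriminator {
        String getAnalyzerDefinitionName(Object value, Object entity, String field);
    }

    public static class ConstantDiscriminator implements Discriminator {
        public String getAnalyzerDefinitionName(Object value, Object entity, String field) {
            return "en"; // trivial stand-in for a real language lookup
        }
    }

    @AnalyzerDiscriminator(impl = ConstantDiscriminator.class)
    public static class BlogEntry {
    }

    /** Mirrors the instantiation logic in checkForAnalyzerDiscriminator(). */
    public static Discriminator resolve(Class<?> clazz) {
        AnalyzerDiscriminator ann = clazz.getAnnotation(AnalyzerDiscriminator.class);
        if (ann == null) {
            return null; // no dynamic analyzer selection configured
        }
        try {
            return ann.impl().newInstance(); // the real code wraps this in a SearchException
        }
        catch (Exception e) {
            throw new RuntimeException("Unable to instantiate discriminator: " + ann.impl().getName(), e);
        }
    }

    public static void main(String[] args) {
        System.out.println(resolve(BlogEntry.class).getAnalyzerDefinitionName(null, null, "text"));
    }
}
```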
Modified: search/trunk/src/java/org/hibernate/search/backend/AddLuceneWork.java
===================================================================
--- search/trunk/src/java/org/hibernate/search/backend/AddLuceneWork.java 2008-12-02 14:21:37 UTC (rev 15636)
+++ search/trunk/src/java/org/hibernate/search/backend/AddLuceneWork.java 2008-12-02 14:28:28 UTC (rev 15637)
@@ -2,6 +2,7 @@
package org.hibernate.search.backend;
import java.io.Serializable;
+import java.util.Map;
import org.apache.lucene.document.Document;
@@ -12,14 +13,29 @@
private static final long serialVersionUID = -2450349312813297371L;
+ private final Map<String, String> fieldToAnalyzerMap;
+
public AddLuceneWork(Serializable id, String idInString, Class entity, Document document) {
- super( id, idInString, entity, document, false );
+ this( id, idInString, entity, document, false );
}
public AddLuceneWork(Serializable id, String idInString, Class entity, Document document, boolean batch) {
+ this( id, idInString, entity, document, null, batch );
+ }
+
+ public AddLuceneWork(Serializable id, String idInString, Class entity, Document document, Map<String, String> fieldToAnalyzerMap) {
+ this( id, idInString, entity, document, fieldToAnalyzerMap, false );
+ }
+
+ public AddLuceneWork(Serializable id, String idInString, Class entity, Document document, Map<String, String> fieldToAnalyzerMap, boolean batch) {
super( id, idInString, entity, document, batch );
+ this.fieldToAnalyzerMap = fieldToAnalyzerMap;
}
+ public Map<String, String> getFieldToAnalyzerMap() {
+ return fieldToAnalyzerMap;
+ }
+
@Override
public <T> T getWorkDelegate(final WorkVisitor<T> visitor) {
return visitor.getDelegate( this );
Modified: search/trunk/src/java/org/hibernate/search/backend/Workspace.java
===================================================================
--- search/trunk/src/java/org/hibernate/search/backend/Workspace.java 2008-12-02 14:21:37 UTC (rev 15636)
+++ search/trunk/src/java/org/hibernate/search/backend/Workspace.java 2008-12-02 14:28:28 UTC (rev 15637)
@@ -81,6 +81,10 @@
return searchFactoryImplementor.getDocumentBuilderIndexedEntity( entity );
}
+ public Analyzer getAnalyzer(String name) {
+ return searchFactoryImplementor.getAnalyzer( name );
+ }
+
/**
* If optimization has not been forced, give a chance to the configured OptimizerStrategy
* to optimize the index.
Modified: search/trunk/src/java/org/hibernate/search/backend/impl/lucene/works/AddWorkDelegate.java
===================================================================
--- search/trunk/src/java/org/hibernate/search/backend/impl/lucene/works/AddWorkDelegate.java 2008-12-02 14:21:37 UTC (rev 15636)
+++ search/trunk/src/java/org/hibernate/search/backend/impl/lucene/works/AddWorkDelegate.java 2008-12-02 14:28:28 UTC (rev 15637)
@@ -1,6 +1,7 @@
package org.hibernate.search.backend.impl.lucene.works;
import java.io.IOException;
+import java.util.Map;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.index.IndexReader;
@@ -9,14 +10,16 @@
import org.slf4j.Logger;
import org.hibernate.search.SearchException;
+import org.hibernate.search.backend.AddLuceneWork;
import org.hibernate.search.backend.LuceneWork;
import org.hibernate.search.backend.Workspace;
import org.hibernate.search.backend.impl.lucene.IndexInteractionType;
import org.hibernate.search.engine.DocumentBuilderIndexedEntity;
import org.hibernate.search.util.LoggerFactory;
+import org.hibernate.search.util.ScopedAnalyzer;
/**
- * Stateless implementation that performs a AddLuceneWork.
+ * Stateless implementation that performs an <code>AddLuceneWork</code>.
*
* @author Emmanuel Bernard
* @author Hardy Ferentschik
@@ -40,8 +43,11 @@
}
public void performWork(LuceneWork work, IndexWriter writer) {
+ @SuppressWarnings("unchecked")
DocumentBuilderIndexedEntity documentBuilder = workspace.getDocumentBuilder( work.getEntityClass() );
- Analyzer analyzer = documentBuilder.getAnalyzer();
+ Map<String, String> fieldToAnalyzerMap = ( ( AddLuceneWork ) work ).getFieldToAnalyzerMap();
+ ScopedAnalyzer analyzer = ( ScopedAnalyzer ) documentBuilder.getAnalyzer();
+ analyzer = updateAnalyzerMappings( analyzer, fieldToAnalyzerMap, workspace );
Similarity similarity = documentBuilder.getSimilarity();
if ( log.isTraceEnabled() ) {
log.trace(
@@ -64,8 +70,37 @@
}
}
+ /**
+ * Allows overriding the otherwise static field-to-analyzer mapping in <code>scopedAnalyzer</code>.
+ *
+ * @param scopedAnalyzer The scoped analyzer created at startup time.
+ * @param fieldToAnalyzerMap A map of <code>Document</code> field names to analyzer names. This map is created
+ * when the Lucene <code>Document</code> is built and uses the state of the entity being indexed to determine
+ * analyzers dynamically at index time.
+ * @param workspace The current workspace.
+ * @return <code>scopedAnalyzer</code> in case <code>fieldToAnalyzerMap</code> is <code>null</code> or empty. Otherwise
+ * a clone of <code>scopedAnalyzer</code> is created where the analyzers get overridden according to <code>fieldToAnalyzerMap</code>.
+ */
+ private ScopedAnalyzer updateAnalyzerMappings(ScopedAnalyzer scopedAnalyzer, Map<String, String> fieldToAnalyzerMap, Workspace workspace) {
+ // for backwards compatibility
+ if ( fieldToAnalyzerMap == null || fieldToAnalyzerMap.isEmpty() ) {
+ return scopedAnalyzer;
+ }
+
+ ScopedAnalyzer analyzerClone = scopedAnalyzer.clone();
+ for ( Map.Entry<String, String> entry : fieldToAnalyzerMap.entrySet() ) {
+ Analyzer analyzer = workspace.getAnalyzer( entry.getValue() );
+ if ( analyzer == null ) {
+ log.warn( "Unable to retrieve named analyzer: " + entry.getValue() );
+ }
+ else {
+ analyzerClone.addScopedAnalyzer( entry.getKey(), analyzer );
+ }
+ }
+ return analyzerClone;
+ }
+
public void performWork(LuceneWork work, IndexReader reader) {
throw new UnsupportedOperationException();
}
-
}
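`updateAnalyzerMappings()` above clones the scoped analyzer and then overrides per-field entries from the work's map, leaving the startup-time configuration untouched. A simplified model of that clone-and-override step using plain maps, where analyzer names stand in for Lucene `Analyzer` instances:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of updateAnalyzerMappings(): clone the static per-field
// mapping, then let the work's dynamic fieldToAnalyzerMap override entries.
public class ScopedAnalyzerOverrideSketch {

    public static Map<String, String> override(Map<String, String> scoped,
                                               Map<String, String> fieldToAnalyzerMap) {
        // backwards compatibility: no dynamic mapping means reuse the original as-is
        if (fieldToAnalyzerMap == null || fieldToAnalyzerMap.isEmpty()) {
            return scoped;
        }
        Map<String, String> clone = new HashMap<>(scoped); // original stays untouched
        clone.putAll(fieldToAnalyzerMap);                   // dynamic entries win
        return clone;
    }

    public static void main(String[] args) {
        Map<String, String> scoped = new HashMap<>();
        scoped.put("text", "default");
        System.out.println(override(scoped, Map.of("text", "de"))); // {text=de}
        System.out.println(scoped);                                  // {text=default}
    }
}
```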
Modified: search/trunk/src/java/org/hibernate/search/engine/DocumentBuilderContainedEntity.java
===================================================================
--- search/trunk/src/java/org/hibernate/search/engine/DocumentBuilderContainedEntity.java 2008-12-02 14:21:37 UTC (rev 15636)
+++ search/trunk/src/java/org/hibernate/search/engine/DocumentBuilderContainedEntity.java 2008-12-02 14:28:28 UTC (rev 15637)
@@ -26,8 +26,10 @@
import org.hibernate.annotations.common.reflection.XProperty;
import org.hibernate.annotations.common.util.StringHelper;
import org.hibernate.search.SearchException;
+import org.hibernate.search.analyzer.Discriminator;
import org.hibernate.search.annotations.AnalyzerDef;
import org.hibernate.search.annotations.AnalyzerDefs;
+import org.hibernate.search.annotations.AnalyzerDiscriminator;
import org.hibernate.search.annotations.Boost;
import org.hibernate.search.annotations.ClassBridge;
import org.hibernate.search.annotations.ClassBridges;
@@ -101,7 +103,7 @@
Set<XClass> processedClasses = new HashSet<XClass>();
processedClasses.add( clazz );
- initializeMembers( clazz, metadata, true, "", processedClasses, context );
+ initializeClass( clazz, metadata, true, "", processedClasses, context );
this.analyzer.setGlobalAnalyzer( metadata.analyzer );
@@ -115,8 +117,8 @@
return isRoot;
}
- private void initializeMembers(XClass clazz, PropertiesMetadata propertiesMetadata, boolean isRoot, String prefix,
- Set<XClass> processedClasses, InitContext context) {
+ private void initializeClass(XClass clazz, PropertiesMetadata propertiesMetadata, boolean isRoot, String prefix,
+ Set<XClass> processedClasses, InitContext context) {
List<XClass> hierarchy = new ArrayList<XClass>();
for ( XClass currClass = clazz; currClass != null; currClass = currClass.getSuperclass() ) {
hierarchy.add( currClass );
@@ -149,14 +151,24 @@
}
/**
- * Checks for class level annotations.
+ * Check and initialize class level annotations.
+ *
+ * @param clazz The class to process.
+ * @param propertiesMetadata The meta data holder.
+ * @param isRoot Flag indicating if the specified class is a root entity, meaning the start of a chain of indexed
+ * entities.
+ * @param prefix The current prefix used for the <code>Document</code> field names.
+ * @param context Handle to default configuration settings.
*/
private void initalizeClassLevelAnnotations(XClass clazz, PropertiesMetadata propertiesMetadata, boolean isRoot, String prefix, InitContext context) {
+
+ // check for a class level specified analyzer
Analyzer analyzer = getAnalyzer( clazz, context );
-
if ( analyzer != null ) {
propertiesMetadata.analyzer = analyzer;
}
+
+ // check for AnalyzerDefs annotations
checkForAnalyzerDefs( clazz, context );
// Check for any ClassBridges annotation.
@@ -164,16 +176,18 @@
if ( classBridgesAnn != null ) {
ClassBridge[] cbs = classBridgesAnn.value();
for ( ClassBridge cb : cbs ) {
- bindClassAnnotation( prefix, propertiesMetadata, cb, context );
+ bindClassBridgeAnnotation( prefix, propertiesMetadata, cb, context );
}
}
// Check for any ClassBridge style of annotations.
ClassBridge classBridgeAnn = clazz.getAnnotation( ClassBridge.class );
if ( classBridgeAnn != null ) {
- bindClassAnnotation( prefix, propertiesMetadata, classBridgeAnn, context );
+ bindClassBridgeAnnotation( prefix, propertiesMetadata, classBridgeAnn, context );
}
+ checkForAnalyzerDiscriminator( clazz, propertiesMetadata );
+
// Get similarity
//TODO: similarity form @IndexedEmbedded are not taken care of. Exception??
if ( isRoot ) {
@@ -190,6 +204,7 @@
checkForField( member, propertiesMetadata, prefix, context );
checkForFields( member, propertiesMetadata, prefix, context );
checkForAnalyzerDefs( member, context );
+ checkForAnalyzerDiscriminator( member, propertiesMetadata );
checkForIndexedEmbedded( member, propertiesMetadata, prefix, processedClasses, context );
checkForContainedIn( member, propertiesMetadata );
}
@@ -241,7 +256,30 @@
context.addAnalyzerDef( def );
}
+ private void checkForAnalyzerDiscriminator(XAnnotatedElement annotatedElement, PropertiesMetadata propertiesMetadata) {
+ AnalyzerDiscriminator discriminiatorAnn = annotatedElement.getAnnotation( AnalyzerDiscriminator.class );
+ if ( discriminiatorAnn != null ) {
+ if ( propertiesMetadata.discriminator != null ) {
+ throw new SearchException(
+ "Multiple AnalyzerDiscriminator defined in the same class hierarchy: " + beanClass.getName()
+ );
+ }
+
+ Class<? extends Discriminator> discriminatorClass = discriminiatorAnn.impl();
+ try {
+ propertiesMetadata.discriminator = discriminatorClass.newInstance();
+ }
+ catch ( Exception e ) {
+ throw new SearchException(
+ "Unable to instantiate analyzer discriminator implementation: " + discriminatorClass.getName()
+ );
+ }
+ if ( annotatedElement instanceof XMember ) {
+ propertiesMetadata.discriminatorGetter = ( XMember ) annotatedElement;
+ }
+ }
+ }
public Similarity getSimilarity() {
return similarity;
@@ -333,7 +371,7 @@
Analyzer analyzer = getAnalyzer( member, context );
metadata.analyzer = analyzer != null ? analyzer : propertiesMetadata.analyzer;
String localPrefix = buildEmbeddedPrefix( prefix, embeddedAnn, member );
- initializeMembers( elementClass, metadata, false, localPrefix, processedClasses, context );
+ initializeClass( elementClass, metadata, false, localPrefix, processedClasses, context );
/**
* We will only index the "expected" type but that's OK, HQL cannot do downcasting either
*/
@@ -396,8 +434,7 @@
return ReflectionHelper.getAttributeName( member, name );
}
- private void bindClassAnnotation(String prefix, PropertiesMetadata propertiesMetadata, ClassBridge ann, InitContext context) {
- //FIXME name should be prefixed
+ private void bindClassBridgeAnnotation(String prefix, PropertiesMetadata propertiesMetadata, ClassBridge ann, InitContext context) {
String fieldName = prefix + ann.name();
propertiesMetadata.classNames.add( fieldName );
propertiesMetadata.classStores.add( getStore( ann.store() ) );
@@ -641,6 +678,8 @@
protected static class PropertiesMetadata {
public Float boost;
public Analyzer analyzer;
+ public Discriminator discriminator;
+ public XMember discriminatorGetter;
public final List<String> fieldNames = new ArrayList<String>();
public final List<XMember> fieldGetters = new ArrayList<XMember>();
public final List<FieldBridge> fieldBridges = new ArrayList<FieldBridge>();
Modified: search/trunk/src/java/org/hibernate/search/engine/DocumentBuilderIndexedEntity.java
===================================================================
--- search/trunk/src/java/org/hibernate/search/engine/DocumentBuilderIndexedEntity.java 2008-12-02 14:21:37 UTC (rev 15636)
+++ search/trunk/src/java/org/hibernate/search/engine/DocumentBuilderIndexedEntity.java 2008-12-02 14:28:28 UTC (rev 15637)
@@ -7,6 +7,9 @@
import java.util.Collection;
import java.util.List;
import java.util.Map;
+import java.util.HashMap;
+import java.util.Set;
+import java.util.HashSet;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.document.Document;
@@ -23,6 +26,7 @@
import org.hibernate.annotations.common.util.ReflectHelper;
import org.hibernate.proxy.HibernateProxy;
import org.hibernate.search.SearchException;
+import org.hibernate.search.analyzer.Discriminator;
import org.hibernate.search.annotations.DocumentId;
import org.hibernate.search.annotations.Index;
import org.hibernate.search.annotations.ProvidedId;
@@ -295,8 +299,7 @@
String idInString = idBridge.objectToString( id );
if ( workType == WorkType.ADD ) {
- Document doc = getDocument( entity, id );
- queue.add( new AddLuceneWork( id, idInString, entityClass, doc ) );
+ queue.add( createAddWork( entityClass, entity, id, idInString, false ) );
}
else if ( workType == WorkType.DELETE || workType == WorkType.PURGE ) {
queue.add( new DeleteLuceneWork( id, idInString, entityClass ) );
@@ -305,7 +308,6 @@
queue.add( new PurgeAllLuceneWork( entityClass ) );
}
else if ( workType == WorkType.UPDATE || workType == WorkType.COLLECTION ) {
- Document doc = getDocument( entity, id );
/**
* even with Lucene 2.1, use of indexWriter to update is not an option
* We can only delete by term, and the index doesn't have a term that
@@ -314,12 +316,11 @@
* double file opening.
*/
queue.add( new DeleteLuceneWork( id, idInString, entityClass ) );
- queue.add( new AddLuceneWork( id, idInString, entityClass, doc ) );
+ queue.add( createAddWork( entityClass, entity, id, idInString, false ) );
}
else if ( workType == WorkType.INDEX ) {
- Document doc = getDocument( entity, id );
queue.add( new DeleteLuceneWork( id, idInString, entityClass ) );
- queue.add( new AddLuceneWork( id, idInString, entityClass, doc, true ) );
+ queue.add( createAddWork( entityClass, entity, id, idInString, true ) );
}
else {
throw new AssertionFailure( "Unknown WorkType: " + workType );
@@ -328,14 +329,34 @@
super.addWorkToQueue( entityClass, entity, id, workType, queue, searchFactoryImplementor );
}
+ private AddLuceneWork createAddWork(Class<T> entityClass, T entity, Serializable id, String idInString, boolean isBatch) {
+ Map<String, String> fieldToAnalyzerMap = new HashMap<String, String>();
+ Document doc = getDocument( entity, id, fieldToAnalyzerMap );
+ AddLuceneWork addWork;
+ if ( fieldToAnalyzerMap.isEmpty() ) {
+ addWork = new AddLuceneWork( id, idInString, entityClass, doc, isBatch );
+ }
+ else {
+ addWork = new AddLuceneWork( id, idInString, entityClass, doc, fieldToAnalyzerMap, isBatch );
+ }
+ return addWork;
+ }
+
/**
* Builds the Lucene <code>Document</code> for a given entity <code>instance</code> and its <code>id</code>.
*
* @param instance The entity for which to build the matching Lucene <code>Document</code>
* @param id the entity id.
+ * @param fieldToAnalyzerMap this map gets populated while generating the <code>Document</code>.
+ * It allows specifying a named analyzer to use for any document field. This parameter cannot be <code>null</code>.
+ *
* @return The Lucene <code>Document</code> for the specified entity.
*/
- public Document getDocument(T instance, Serializable id) {
+ public Document getDocument(T instance, Serializable id, Map<String, String> fieldToAnalyzerMap) {
+ if ( fieldToAnalyzerMap == null ) {
+ throw new IllegalArgumentException( "fieldToAnalyzerMap cannot be null" );
+ }
+
Document doc = new Document();
final Class<?> entityType = Hibernate.getClass( instance );
if ( metadata.boost != null ) {
@@ -361,16 +382,21 @@
idBridge.set( idKeywordName, id, doc, luceneOptions );
// finally add all other document fields
- buildDocumentFields( instance, doc, metadata );
+ Set<String> processedFieldNames = new HashSet<String>();
+ buildDocumentFields( instance, doc, metadata, fieldToAnalyzerMap, processedFieldNames );
return doc;
}
- private void buildDocumentFields(Object instance, Document doc, PropertiesMetadata propertiesMetadata) {
+ private void buildDocumentFields(Object instance, Document doc, PropertiesMetadata propertiesMetadata, Map<String, String> fieldToAnalyzerMap,
+ Set<String> processedFieldNames) {
if ( instance == null ) {
return;
}
- //needed for field access: I cannot work in the proxied version
+
+ // needed for field access: we cannot work on the proxied version
Object unproxiedInstance = unproxy( instance );
+
+ // process the class bridges
for ( int i = 0; i < propertiesMetadata.classBridges.size(); i++ ) {
FieldBridge fb = propertiesMetadata.classBridges.get( i );
fb.set(
@@ -378,6 +404,8 @@
doc, propertiesMetadata.getClassLuceneOptions( i )
);
}
+
+ // process the indexed fields
for ( int i = 0; i < propertiesMetadata.fieldNames.size(); i++ ) {
XMember member = propertiesMetadata.fieldGetters.get( i );
Object value = ReflectionHelper.getMemberValue( unproxiedInstance, member );
@@ -386,6 +414,13 @@
propertiesMetadata.getFieldLuceneOptions( i )
);
}
+
+ // allow analyzer override for the fields added by the class and field bridges
+ allowAnalyzerDiscriminatorOverride(
+ doc, propertiesMetadata, fieldToAnalyzerMap, processedFieldNames, unproxiedInstance
+ );
+
+ // recursively process embedded objects
for ( int i = 0; i < propertiesMetadata.embeddedGetters.size(); i++ ) {
XMember member = propertiesMetadata.embeddedGetters.get( i );
Object value = ReflectionHelper.getMemberValue( unproxiedInstance, member );
@@ -398,21 +433,27 @@
switch ( propertiesMetadata.embeddedContainers.get( i ) ) {
case ARRAY:
for ( Object arrayValue : ( Object[] ) value ) {
- buildDocumentFields( arrayValue, doc, embeddedMetadata );
+ buildDocumentFields(
+ arrayValue, doc, embeddedMetadata, fieldToAnalyzerMap, processedFieldNames
+ );
}
break;
case COLLECTION:
for ( Object collectionValue : ( Collection ) value ) {
- buildDocumentFields( collectionValue, doc, embeddedMetadata );
+ buildDocumentFields(
+ collectionValue, doc, embeddedMetadata, fieldToAnalyzerMap, processedFieldNames
+ );
}
break;
case MAP:
for ( Object collectionValue : ( ( Map ) value ).values() ) {
- buildDocumentFields( collectionValue, doc, embeddedMetadata );
+ buildDocumentFields(
+ collectionValue, doc, embeddedMetadata, fieldToAnalyzerMap, processedFieldNames
+ );
}
break;
case OBJECT:
- buildDocumentFields( value, doc, embeddedMetadata );
+ buildDocumentFields( value, doc, embeddedMetadata, fieldToAnalyzerMap, processedFieldNames );
break;
default:
throw new AssertionFailure(
@@ -423,6 +464,40 @@
}
}
+ /**
+ * Allows an analyzer discriminator to override the analyzer used for any field in the Lucene document.
+ *
+ * @param doc The Lucene <code>Document</code> which shall be indexed.
+ * @param propertiesMetadata The metadata for the entity we currently add to the document.
+ * @param fieldToAnalyzerMap This map contains the actual override data. It is a map between document field names and
+ * analyzer definition names. This map will be added to the <code>Work</code> instance and processed at actual indexing time.
+ * @param processedFieldNames A set of field names we have already processed.
+ * @param unproxiedInstance The entity we currently "add" to the document.
+ */
+ private void allowAnalyzerDiscriminatorOverride(Document doc, PropertiesMetadata propertiesMetadata, Map<String, String> fieldToAnalyzerMap, Set<String> processedFieldNames, Object unproxiedInstance) {
+ Discriminator discriminator = propertiesMetadata.discriminator;
+ if ( discriminator == null ) {
+ return;
+ }
+
+ Object value = null;
+ if ( propertiesMetadata.discriminatorGetter != null ) {
+ value = ReflectionHelper.getMemberValue( unproxiedInstance, propertiesMetadata.discriminatorGetter );
+ }
+
+ // now we give the discriminator the opportunity to specify an analyzer on a per-field basis
+ for ( Object o : doc.getFields() ) {
+ Field field = ( Field ) o;
+ if ( !processedFieldNames.contains( field.name() ) ) {
+ String analyzerName = discriminator.getAnanyzerDefinitionName( value, unproxiedInstance, field.name() );
+ if ( analyzerName != null ) {
+ fieldToAnalyzerMap.put( field.name(), analyzerName );
+ }
+ processedFieldNames.add( field.name() );
+ }
+ }
+ }
+
private Object unproxy(Object value) {
//FIXME this service should be part of Core?
if ( value instanceof HibernateProxy ) {
Modified: search/trunk/src/java/org/hibernate/search/util/ScopedAnalyzer.java
===================================================================
--- search/trunk/src/java/org/hibernate/search/util/ScopedAnalyzer.java 2008-12-02 14:21:37 UTC (rev 15636)
+++ search/trunk/src/java/org/hibernate/search/util/ScopedAnalyzer.java 2008-12-02 14:28:28 UTC (rev 15637)
@@ -16,11 +16,18 @@
* @author Emmanuel Bernard
*/
public class ScopedAnalyzer extends Analyzer {
+ private Analyzer globalAnalyzer;
+ private Map<String, Analyzer> scopedAnalyzers = new HashMap<String, Analyzer>();
+
public ScopedAnalyzer() {
}
- private Analyzer globalAnalyzer;
- private Map<String, Analyzer> scopedAnalyzers = new HashMap<String, Analyzer>();
+ private ScopedAnalyzer( Analyzer globalAnalyzer, Map<String, Analyzer> scopedAnalyzers) {
+ this.globalAnalyzer = globalAnalyzer;
+ for ( Map.Entry<String, Analyzer> entry : scopedAnalyzers.entrySet() ) {
+ addScopedAnalyzer( entry.getKey(), entry.getValue() );
+ }
+ }
public void setGlobalAnalyzer( Analyzer globalAnalyzer ) {
this.globalAnalyzer = globalAnalyzer;
@@ -45,4 +52,9 @@
}
return analyzer;
}
+
+ public ScopedAnalyzer clone() {
+ ScopedAnalyzer clone = new ScopedAnalyzer( globalAnalyzer, scopedAnalyzers );
+ return clone;
+ }
}
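The <code>clone()</code> added above copies the global analyzer and re-registers every scoped analyzer. The underlying delegation pattern is simple; here is a minimal, self-contained sketch in plain Java, with <code>String</code> standing in for Lucene's <code>Analyzer</code> type (all names are illustrative, not the real Hibernate Search API):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the ScopedAnalyzer lookup pattern: a per-field map
// with a global fallback. String stands in for Lucene's Analyzer type.
class ScopedLookup {
    private final String globalAnalyzer;
    private final Map<String, String> scopedAnalyzers = new HashMap<String, String>();

    ScopedLookup(String globalAnalyzer) {
        this.globalAnalyzer = globalAnalyzer;
    }

    void addScopedAnalyzer(String fieldName, String analyzer) {
        scopedAnalyzers.put(fieldName, analyzer);
    }

    // Mirrors ScopedAnalyzer: use the field-specific analyzer if one was
    // registered for this field, otherwise fall back to the global one.
    String getAnalyzer(String fieldName) {
        String analyzer = scopedAnalyzers.get(fieldName);
        return analyzer != null ? analyzer : globalAnalyzer;
    }
}
```

The private copy constructor in the diff follows the same idea: it re-adds each entry through <code>addScopedAnalyzer</code> so the clone owns its own map.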
Modified: search/trunk/src/test/org/hibernate/search/test/analyzer/AnalyzerTest.java
===================================================================
--- search/trunk/src/test/org/hibernate/search/test/analyzer/AnalyzerTest.java 2008-12-02 14:21:37 UTC (rev 15636)
+++ search/trunk/src/test/org/hibernate/search/test/analyzer/AnalyzerTest.java 2008-12-02 14:28:28 UTC (rev 15637)
@@ -1,6 +1,10 @@
// $Id$
package org.hibernate.search.test.analyzer;
+import java.util.HashSet;
+import java.util.Set;
+import javax.print.attribute.HashAttributeSet;
+
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
@@ -8,94 +12,154 @@
import org.slf4j.Logger;
import org.hibernate.Transaction;
+import org.hibernate.annotations.common.reflection.ReflectionManager;
+import org.hibernate.annotations.common.reflection.XClass;
import org.hibernate.search.FullTextQuery;
import org.hibernate.search.FullTextSession;
import org.hibernate.search.Search;
import org.hibernate.search.SearchFactory;
+import org.hibernate.search.SearchException;
+import org.hibernate.search.impl.InitContext;
+import org.hibernate.search.cfg.SearchConfiguration;
+import org.hibernate.search.cfg.SearchConfigurationFromHibernateCore;
+import org.hibernate.search.engine.DocumentBuilderContainedEntity;
import org.hibernate.search.test.SearchTestCase;
import org.hibernate.search.test.util.AnalyzerUtils;
import org.hibernate.search.util.LoggerFactory;
/**
* @author Emmanuel Bernard
+ * @author Hardy Ferentschik
*/
public class AnalyzerTest extends SearchTestCase {
public static final Logger log = LoggerFactory.make();
+ public void testAnalyzerDiscriminator() throws Exception {
+ Article germanArticle = new Article();
+ germanArticle.setLanguage( "de" );
+ germanArticle.setText( "aufeinanderschlügen" );
+ Set<Article> references = new HashSet<Article>();
+ references.add( germanArticle );
+
+
+ Article englishArticle = new Article();
+ englishArticle.setLanguage( "en" );
+ englishArticle.setText( "acknowledgment" );
+ englishArticle.setReferences( references );
+
+ FullTextSession s = Search.getFullTextSession( openSession() );
+ Transaction tx = s.beginTransaction();
+ s.persist( englishArticle );
+ tx.commit();
+
+ tx = s.beginTransaction();
+
+ // at query time we use a standard analyzer. We explicitly search for tokens which can only be found if the
+ // right language-specific stemmer was used at index time
+ QueryParser parser = new QueryParser( "references.text", new StandardAnalyzer() );
+ org.apache.lucene.search.Query luceneQuery = parser.parse( "aufeinanderschlug" );
+ FullTextQuery query = s.createFullTextQuery( luceneQuery );
+ assertEquals( 1, query.getResultSize() );
+
+ parser = new QueryParser( "text", new StandardAnalyzer() );
+ luceneQuery = parser.parse( "acknowledg" );
+ query = s.createFullTextQuery( luceneQuery );
+ assertEquals( 1, query.getResultSize() );
+
+ tx.commit();
+ s.close();
+ }
+
+ public void testMultipleAnalyzerDiscriminatorDefinitions() throws Exception {
+ SearchConfigurationFromHibernateCore searchConfig = new SearchConfigurationFromHibernateCore( cfg );
+ ReflectionManager reflectionManager = searchConfig.getReflectionManager();
+ XClass xclass = reflectionManager.toXClass( BlogEntry.class );
+ InitContext context = new InitContext( searchConfig );
+ try {
+ new DocumentBuilderContainedEntity( xclass, context, reflectionManager );
+ fail();
+ }
+ catch ( SearchException e ) {
+ assertTrue( "Wrong error message", e.getMessage().startsWith( "Multiple AnalyzerDiscriminator defined in the same class hierarchy" ));
+ }
+ }
+
public void testScopedAnalyzers() throws Exception {
MyEntity en = new MyEntity();
- en.setEntity("Entity");
- en.setField("Field");
- en.setProperty("Property");
- en.setComponent(new MyComponent());
- en.getComponent().setComponentProperty("component property");
- FullTextSession s = Search.getFullTextSession(openSession());
+ en.setEntity( "Entity" );
+ en.setField( "Field" );
+ en.setProperty( "Property" );
+ en.setComponent( new MyComponent() );
+ en.getComponent().setComponentProperty( "component property" );
+ FullTextSession s = Search.getFullTextSession( openSession() );
Transaction tx = s.beginTransaction();
- s.persist(en);
+ s.persist( en );
tx.commit();
tx = s.beginTransaction();
- QueryParser parser = new QueryParser("id", new StandardAnalyzer());
- org.apache.lucene.search.Query luceneQuery = parser.parse("entity:alarm");
- FullTextQuery query = s.createFullTextQuery(luceneQuery, MyEntity.class);
- assertEquals(1, query.getResultSize());
+ QueryParser parser = new QueryParser( "id", new StandardAnalyzer() );
+ org.apache.lucene.search.Query luceneQuery = parser.parse( "entity:alarm" );
+ FullTextQuery query = s.createFullTextQuery( luceneQuery, MyEntity.class );
+ assertEquals( 1, query.getResultSize() );
- luceneQuery = parser.parse("property:cat");
- query = s.createFullTextQuery(luceneQuery, MyEntity.class);
- assertEquals(1, query.getResultSize());
+ luceneQuery = parser.parse( "property:cat" );
+ query = s.createFullTextQuery( luceneQuery, MyEntity.class );
+ assertEquals( 1, query.getResultSize() );
- luceneQuery = parser.parse("field:energy");
- query = s.createFullTextQuery(luceneQuery, MyEntity.class);
- assertEquals(1, query.getResultSize());
+ luceneQuery = parser.parse( "field:energy" );
+ query = s.createFullTextQuery( luceneQuery, MyEntity.class );
+ assertEquals( 1, query.getResultSize() );
- luceneQuery = parser.parse("component.componentProperty:noise");
- query = s.createFullTextQuery(luceneQuery, MyEntity.class);
- assertEquals(1, query.getResultSize());
+ luceneQuery = parser.parse( "component.componentProperty:noise" );
+ query = s.createFullTextQuery( luceneQuery, MyEntity.class );
+ assertEquals( 1, query.getResultSize() );
- s.delete(query.uniqueResult());
+ s.delete( query.uniqueResult() );
tx.commit();
s.close();
}
public void testScopedAnalyzersFromSearchFactory() throws Exception {
- FullTextSession session = Search.getFullTextSession(openSession());
+ FullTextSession session = Search.getFullTextSession( openSession() );
SearchFactory searchFactory = session.getSearchFactory();
- Analyzer analyzer = searchFactory.getAnalyzer(MyEntity.class);
+ Analyzer analyzer = searchFactory.getAnalyzer( MyEntity.class );
// you can pass what so ever into the analysis since the used analyzers are
// returning the same tokens all the time. We just want to make sure that
// the right analyzers are used.
- Token[] tokens = AnalyzerUtils.tokensFromAnalysis(analyzer, "entity", "");
- AnalyzerUtils.assertTokensEqual(tokens, new String[] { "alarm", "dog", "performance" });
+ Token[] tokens = AnalyzerUtils.tokensFromAnalysis( analyzer, "entity", "" );
+ AnalyzerUtils.assertTokensEqual( tokens, new String[] { "alarm", "dog", "performance" } );
- tokens = AnalyzerUtils.tokensFromAnalysis(analyzer, "property", "");
- AnalyzerUtils.assertTokensEqual(tokens, new String[] { "sound", "cat", "speed" });
+ tokens = AnalyzerUtils.tokensFromAnalysis( analyzer, "property", "" );
+ AnalyzerUtils.assertTokensEqual( tokens, new String[] { "sound", "cat", "speed" } );
- tokens = AnalyzerUtils.tokensFromAnalysis(analyzer, "field", "");
- AnalyzerUtils.assertTokensEqual(tokens, new String[] { "music", "elephant", "energy" });
+ tokens = AnalyzerUtils.tokensFromAnalysis( analyzer, "field", "" );
+ AnalyzerUtils.assertTokensEqual( tokens, new String[] { "music", "elephant", "energy" } );
- tokens = AnalyzerUtils.tokensFromAnalysis(analyzer, "component.componentProperty", "");
- AnalyzerUtils.assertTokensEqual(tokens, new String[] { "noise", "mouse", "light" });
+ tokens = AnalyzerUtils.tokensFromAnalysis( analyzer, "component.componentProperty", "" );
+ AnalyzerUtils.assertTokensEqual( tokens, new String[] { "noise", "mouse", "light" } );
// test border cases
try {
- searchFactory.getAnalyzer((Class) null);
- } catch ( IllegalArgumentException iae ) {
- log.debug("success");
+ searchFactory.getAnalyzer( ( Class ) null );
}
+ catch ( IllegalArgumentException iae ) {
+ log.debug( "success" );
+ }
try {
- searchFactory.getAnalyzer(String.class);
- } catch ( IllegalArgumentException iae ) {
- log.debug("success");
+ searchFactory.getAnalyzer( String.class );
}
+ catch ( IllegalArgumentException iae ) {
+ log.debug( "success" );
+ }
session.close();
}
protected Class[] getMappings() {
- return new Class[] { MyEntity.class };
+ return new Class[] { MyEntity.class, Article.class };
}
}
Added: search/trunk/src/test/org/hibernate/search/test/analyzer/Article.java
===================================================================
--- search/trunk/src/test/org/hibernate/search/test/analyzer/Article.java (rev 0)
+++ search/trunk/src/test/org/hibernate/search/test/analyzer/Article.java 2008-12-02 14:28:28 UTC (rev 15637)
@@ -0,0 +1,94 @@
+// $Id:$
+package org.hibernate.search.test.analyzer;
+
+import java.util.Set;
+import javax.persistence.CascadeType;
+import javax.persistence.Entity;
+import javax.persistence.GeneratedValue;
+import javax.persistence.Id;
+import javax.persistence.OneToMany;
+
+import org.apache.solr.analysis.EnglishPorterFilterFactory;
+import org.apache.solr.analysis.GermanStemFilterFactory;
+import org.apache.solr.analysis.LowerCaseFilterFactory;
+import org.apache.solr.analysis.StandardTokenizerFactory;
+
+import org.hibernate.search.annotations.AnalyzerDef;
+import org.hibernate.search.annotations.AnalyzerDefs;
+import org.hibernate.search.annotations.AnalyzerDiscriminator;
+import org.hibernate.search.annotations.DocumentId;
+import org.hibernate.search.annotations.Field;
+import org.hibernate.search.annotations.Indexed;
+import org.hibernate.search.annotations.IndexedEmbedded;
+import org.hibernate.search.annotations.Store;
+import org.hibernate.search.annotations.TokenFilterDef;
+import org.hibernate.search.annotations.TokenizerDef;
+
+/**
+ * @author Hardy Ferentschik
+ */
+@Entity
+@Indexed
+@AnalyzerDefs({
+ @AnalyzerDef(name = "en",
+ tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
+ filters = {
+ @TokenFilterDef(factory = LowerCaseFilterFactory.class),
+ @TokenFilterDef(factory = EnglishPorterFilterFactory.class
+ )
+ }),
+ @AnalyzerDef(name = "de",
+ tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
+ filters = {
+ @TokenFilterDef(factory = LowerCaseFilterFactory.class),
+ @TokenFilterDef(factory = GermanStemFilterFactory.class)
+ })
+})
+public class Article {
+
+ private Integer id;
+ private String language;
+ private String text;
+ private Set<Article> references;
+
+ @Id
+ @GeneratedValue
+ @DocumentId
+ public Integer getId() {
+ return id;
+ }
+
+ public void setId(Integer id) {
+ this.id = id;
+ }
+
+ @Field(store = Store.YES)
+ @AnalyzerDiscriminator(impl = LanguageDiscriminator.class)
+ public String getLanguage() {
+ return language;
+ }
+
+ public void setLanguage(String language) {
+ this.language = language;
+ }
+
+ @Field(store = Store.YES)
+ public String getText() {
+ return text;
+ }
+
+ public void setText(String text) {
+ this.text = text;
+ }
+
+ @OneToMany(cascade = CascadeType.ALL)
+ @IndexedEmbedded(depth = 1)
+ public Set<Article> getReferences() {
+ return references;
+ }
+
+ public void setReferences(Set<Article> references) {
+ this.references = references;
+ }
+}
+
Property changes on: search/trunk/src/test/org/hibernate/search/test/analyzer/Article.java
___________________________________________________________________
Name: svn:keywords
+ Id
Added: search/trunk/src/test/org/hibernate/search/test/analyzer/BlogEntry.java
===================================================================
--- search/trunk/src/test/org/hibernate/search/test/analyzer/BlogEntry.java (rev 0)
+++ search/trunk/src/test/org/hibernate/search/test/analyzer/BlogEntry.java 2008-12-02 14:28:28 UTC (rev 15637)
@@ -0,0 +1,94 @@
+// $Id:$
+package org.hibernate.search.test.analyzer;
+
+import java.util.Set;
+import javax.persistence.CascadeType;
+import javax.persistence.Entity;
+import javax.persistence.GeneratedValue;
+import javax.persistence.Id;
+import javax.persistence.OneToMany;
+
+import org.apache.solr.analysis.EnglishPorterFilterFactory;
+import org.apache.solr.analysis.GermanStemFilterFactory;
+import org.apache.solr.analysis.LowerCaseFilterFactory;
+import org.apache.solr.analysis.StandardTokenizerFactory;
+
+import org.hibernate.search.annotations.AnalyzerDef;
+import org.hibernate.search.annotations.AnalyzerDefs;
+import org.hibernate.search.annotations.AnalyzerDiscriminator;
+import org.hibernate.search.annotations.DocumentId;
+import org.hibernate.search.annotations.Field;
+import org.hibernate.search.annotations.Indexed;
+import org.hibernate.search.annotations.IndexedEmbedded;
+import org.hibernate.search.annotations.Store;
+import org.hibernate.search.annotations.TokenFilterDef;
+import org.hibernate.search.annotations.TokenizerDef;
+
+/**
+ * @author Hardy Ferentschik
+ */
+@Entity
+@Indexed
+@AnalyzerDefs({
+ @AnalyzerDef(name = "en",
+ tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
+ filters = {
+ @TokenFilterDef(factory = LowerCaseFilterFactory.class),
+ @TokenFilterDef(factory = EnglishPorterFilterFactory.class
+ )
+ }),
+ @AnalyzerDef(name = "de",
+ tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
+ filters = {
+ @TokenFilterDef(factory = LowerCaseFilterFactory.class),
+ @TokenFilterDef(factory = GermanStemFilterFactory.class)
+ })
+})
+@AnalyzerDiscriminator(impl = LanguageDiscriminator.class)
+public class BlogEntry {
+
+ private Integer id;
+ private String language;
+ private String text;
+ private Set<BlogEntry> references;
+
+ @Id
+ @GeneratedValue
+ @DocumentId
+ public Integer getId() {
+ return id;
+ }
+
+ public void setId(Integer id) {
+ this.id = id;
+ }
+
+ @Field(store = Store.YES)
+ @AnalyzerDiscriminator(impl = LanguageDiscriminator.class)
+ public String getLanguage() {
+ return language;
+ }
+
+ public void setLanguage(String language) {
+ this.language = language;
+ }
+
+ @Field(store = Store.YES)
+ public String getText() {
+ return text;
+ }
+
+ public void setText(String text) {
+ this.text = text;
+ }
+
+ @OneToMany(cascade = CascadeType.ALL)
+ @IndexedEmbedded(depth = 1)
+ public Set<BlogEntry> getReferences() {
+ return references;
+ }
+
+ public void setReferences(Set<BlogEntry> references) {
+ this.references = references;
+ }
+}
\ No newline at end of file
Property changes on: search/trunk/src/test/org/hibernate/search/test/analyzer/BlogEntry.java
___________________________________________________________________
Name: svn:keywords
+ Id
Added: search/trunk/src/test/org/hibernate/search/test/analyzer/LanguageDiscriminator.java
===================================================================
--- search/trunk/src/test/org/hibernate/search/test/analyzer/LanguageDiscriminator.java (rev 0)
+++ search/trunk/src/test/org/hibernate/search/test/analyzer/LanguageDiscriminator.java 2008-12-02 14:28:28 UTC (rev 15637)
@@ -0,0 +1,17 @@
+// $Id:$
+package org.hibernate.search.test.analyzer;
+
+import org.hibernate.search.analyzer.Discriminator;
+
+/**
+ * @author Hardy Ferentschik
+ */
+public class LanguageDiscriminator implements Discriminator {
+
+ public String getAnanyzerDefinitionName(Object value, Object entity, String field) {
+ if ( value == null || !( entity instanceof Article ) ) {
+ return null;
+ }
+ return (String) value;
+ }
+}
Property changes on: search/trunk/src/test/org/hibernate/search/test/analyzer/LanguageDiscriminator.java
___________________________________________________________________
Name: svn:keywords
+ Id
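The discriminator added above simply returns the value of the annotated language property ("en" or "de"), which matches the names of the <code>@AnalyzerDef</code> definitions on the entity. The contract can be sketched in isolation as follows; note that the interface below is a local, hypothetical stand-in (with a cleaned-up method name) for <code>org.hibernate.search.analyzer.Discriminator</code>, not the real type:

```java
// Hypothetical stand-in for org.hibernate.search.analyzer.Discriminator:
// return the name of an @AnalyzerDef to apply to the given field, or
// null to fall back to the statically configured analyzer.
interface FieldAnalyzerDiscriminator {
    String getAnalyzerDefinitionName(Object value, Object entity, String field);
}

class SimpleLanguageDiscriminator implements FieldAnalyzerDiscriminator {
    public String getAnalyzerDefinitionName(Object value, Object entity, String field) {
        // the property value ("en", "de", ...) doubles as the analyzer
        // definition name; null keeps the default analyzer
        return value == null ? null : value.toString();
    }
}
```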
Hibernate SVN: r15636 - in search/trunk/src/java/org/hibernate/search: store and 1 other directory.
by hibernate-commits@lists.jboss.org
Author: hardy.ferentschik
Date: 2008-12-02 09:21:37 -0500 (Tue, 02 Dec 2008)
New Revision: 15636
Modified:
search/trunk/src/java/org/hibernate/search/annotations/Indexed.java
search/trunk/src/java/org/hibernate/search/store/DirectoryProviderFactory.java
search/trunk/src/java/org/hibernate/search/store/FSDirectoryProvider.java
Log:
Javadoc changes
Modified: search/trunk/src/java/org/hibernate/search/annotations/Indexed.java
===================================================================
--- search/trunk/src/java/org/hibernate/search/annotations/Indexed.java 2008-12-02 14:18:09 UTC (rev 15635)
+++ search/trunk/src/java/org/hibernate/search/annotations/Indexed.java 2008-12-02 14:21:37 UTC (rev 15636)
@@ -15,7 +15,7 @@
*/
public @interface Indexed {
/**
- * The filename of the index
+ * @return The filename of the index
*/
String index() default "";
}
Modified: search/trunk/src/java/org/hibernate/search/store/DirectoryProviderFactory.java
===================================================================
--- search/trunk/src/java/org/hibernate/search/store/DirectoryProviderFactory.java 2008-12-02 14:18:09 UTC (rev 15635)
+++ search/trunk/src/java/org/hibernate/search/store/DirectoryProviderFactory.java 2008-12-02 14:21:37 UTC (rev 15636)
@@ -22,20 +22,16 @@
import org.hibernate.util.StringHelper;
/**
- * Create a Lucene directory provider
- * <p/>
- * Lucene directory providers are configured through properties
+ * Create a Lucene directory provider which can be configured
+ * through the following properties:
* <ul>
- * <li>hibernate.search.default.* and</li>
- * <li>hibernate.search.<indexname>.*</li>
- * </ul>
+ * <li><i>hibernate.search.default.*</i></li>
+ * <li><i>hibernate.search.<indexname>.*</i></li>
+ * </ul>
+ * where <i><indexname></i> properties have precedence over default ones.
* <p/>
- * <indexname> properties have precedence over default
- * <p/>
* The implementation is described by
- * hibernate.search.[default|indexname].directory_provider
- * <p/>
- * If none is defined the default value is FSDirectory
+ * <i>hibernate.search.[default|indexname].directory_provider</i>.
+ * If none is defined the default value is FSDirectory.
*
* @author Emmanuel Bernard
* @author Sylvain Vieujot
Modified: search/trunk/src/java/org/hibernate/search/store/FSDirectoryProvider.java
===================================================================
--- search/trunk/src/java/org/hibernate/search/store/FSDirectoryProvider.java 2008-12-02 14:18:09 UTC (rev 15635)
+++ search/trunk/src/java/org/hibernate/search/store/FSDirectoryProvider.java 2008-12-02 14:21:37 UTC (rev 15636)
@@ -13,9 +13,15 @@
import org.hibernate.search.util.LoggerFactory;
/**
- * Use a Lucene FSDirectory
- * The base directory is represented by hibernate.search.<index>.indexBase
- * The index is created in <base directory>/<index name>
+ * Use a Lucene {@link FSDirectory}. The base directory is represented by the property <i>hibernate.search.default.indexBase</i>
+ * or <i>hibernate.search.<index>.indexBase</i>. The former defines the default base directory for all indexes, whereas the
+ * latter allows overriding the base directory on a per-index basis. <i><index></i> has to be replaced with the fully qualified
+ * classname of the indexed class or the value of the <i>index</i> property of the <code>@Indexed</code> annotation.
+ * <p>
+ * The actual index files are then created in <i><indexBase>/<index name></i>. <i><index name></i> is
+ * by default the name of the indexed entity, or the value of the <i>index</i> property of <code>@Indexed</code>, or can be specified
+ * as a property in the configuration file using <i>hibernate.search.<index>.indexName</i>.
+ * </p>
*
* @author Emmanuel Bernard
* @author Sylvain Vieujot
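The two-level property scheme this Javadoc describes can be sketched as a hibernate.properties fragment; the entity name `com.acme.Book` and the paths below are hypothetical examples, not values from the source:

```properties
# default base directory for all indexes (assumed path)
hibernate.search.default.indexBase = /var/lucene/indexes

# hypothetical per-index override for the entity com.acme.Book;
# per-index settings take precedence over the default ones
hibernate.search.com.acme.Book.indexBase = /var/lucene/book-index
```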
Hibernate SVN: r15635 - search/trunk/doc/reference/en/modules.
by hibernate-commits@lists.jboss.org
Author: hardy.ferentschik
Date: 2008-12-02 09:18:09 -0500 (Tue, 02 Dec 2008)
New Revision: 15635
Modified:
search/trunk/doc/reference/en/modules/query.xml
Log:
HSEARCH-303
Review of the query chapter.
Modified: search/trunk/doc/reference/en/modules/query.xml
===================================================================
--- search/trunk/doc/reference/en/modules/query.xml 2008-12-02 14:05:20 UTC (rev 15634)
+++ search/trunk/doc/reference/en/modules/query.xml 2008-12-02 14:18:09 UTC (rev 15635)
@@ -31,86 +31,109 @@
<para>The second most important capability of Hibernate Search is the
ability to execute a Lucene query and retrieve entities managed by an
- Hibernate session, providing the power of Lucene without living the
+ Hibernate session, providing the power of Lucene without leaving the
Hibernate paradigm, and giving another dimension to the Hibernate classic
- search mechanisms (HQL, Criteria query, native SQL query).</para>
+ search mechanisms (HQL, Criteria query, native SQL query). Preparing and
+ executing a query consists of four simple steps:</para>
- <para>To access the <productname>Hibernate Search</productname> querying
- facilities, you have to use an Hibernate
- <classname>FullTextSession</classname> . A Search Session wraps a regular
- <classname>org.hibernate.Session</classname> to provide query and indexing
- capabilities.</para>
+ <itemizedlist>
+ <listitem>
+ <para>Creating a <classname>FullTextSession</classname></para>
+ </listitem>
- <programlisting>Session session = sessionFactory.openSession();
+ <listitem>
+ <para>Creating a Lucene query</para>
+ </listitem>
+
+ <listitem>
+ <para>Wrapping the Lucene query using a
+ <classname>org.hibernate.Query</classname></para>
+ </listitem>
+
+ <listitem>
+ <para>Executing the search by calling for example
+ <methodname>list()</methodname> or
+ <methodname>scroll()</methodname></para>
+ </listitem>
+ </itemizedlist>
+
+ <para>To access the querying facilities, you have to use a
+ <classname>FullTextSession</classname>. This Search specific session wraps a
+ regular <classname>org.hibernate.Session</classname> to provide query and
+ indexing capabilities.</para>
+
+ <example>
+ <title>Creating a FullTextSession</title>
+
+ <programlisting>Session session = sessionFactory.openSession();
...
FullTextSession fullTextSession = Search.getFullTextSession(session); </programlisting>
+ </example>
- <para>The search facility is built on native Lucene queries.</para>
+ <para>The actual search facility is built on native Lucene queries which the
+ following example illustrates.</para>
- <programlisting>org.apache.lucene.queryParser.QueryParser parser = new QueryParser("title", new StopAnalyzer() );
+ <example>
+ <title>Creating a Lucene query</title>
+ <programlisting>org.apache.lucene.queryParser.QueryParser parser =
+ new QueryParser("title", new StopAnalyzer() );
+
org.apache.lucene.search.Query luceneQuery = parser.parse( "summary:Festina Or brand:Seiko" );
<emphasis role="bold">org.hibernate.Query fullTextQuery = fullTextSession.createFullTextQuery( luceneQuery );
</emphasis>
-
List result = fullTextQuery.list(); //return a list of managed objects </programlisting>
+ </example>
<para>The Hibernate query built on top of the Lucene query is a regular
- <literal>org.hibernate.Query</literal> , you are in the same paradigm as the
- other Hibernate query facilities (HQL, Native or Criteria). The regular
- <literal>list()</literal> , <literal>uniqueResult()</literal> ,
- <literal>iterate()</literal> and <literal>scroll()</literal> can be
+ <literal>org.hibernate.Query</literal>, which means you are in the same
+ paradigm as the other Hibernate query facilities (HQL, Native or Criteria).
+ The regular <literal>list()</literal>, <literal>uniqueResult()</literal>,
+ <literal>iterate()</literal> and <literal>scroll()</literal> methods can be
used.</para>
- <para>For people using Java Persistence (aka EJB 3.0 Persistence) APIs of
- Hibernate, the same extensions exist:</para>
+ <para>In case you are using the Java Persistence APIs of Hibernate (aka EJB
+ 3.0 Persistence), the same extensions exist:</para>
- <programlisting>EntityManager em = entityManagerFactory.createEntityManager();
+ <example>
+ <title>Creating a Search query using the JPA API</title>
+ <programlisting>EntityManager em = entityManagerFactory.createEntityManager();
+
FullTextEntityManager fullTextEntityManager =
org.hibernate.search.jpa.Search.getFullTextEntityManager(em);
...
-org.apache.lucene.queryParser.QueryParser parser = new QueryParser("title", new StopAnalyzer() );
+org.apache.lucene.queryParser.QueryParser parser =
+ new QueryParser("title", new StopAnalyzer() );
org.apache.lucene.search.Query luceneQuery = parser.parse( "summary:Festina Or brand:Seiko" );
<emphasis role="bold">javax.persistence.Query fullTextQuery = fullTextEntityManager.createFullTextQuery( luceneQuery );</emphasis>
List result = fullTextQuery.getResultList(); //return a list of managed objects </programlisting>
+ </example>
- <para>The following examples show the Hibernate APIs but the same example
- can be easily rewritten with the Java Persistence API by just adjusting the
- way the FullTextQuery is retrieved.</para>
+ <para>In the following examples we will use the Hibernate APIs but the same
+ example can be easily rewritten with the Java Persistence API by just
+ adjusting the way the <classname>FullTextQuery</classname> is
+ retrieved.</para>
<section>
<title>Building queries</title>
- <para>Hibernate Search queries are built on top of Lucene queries. It
- gives you a total freedom on the kind of Lucene queries you are willing to
- execute. However, once built, Hibernate Search abstract the query
- processing from your application using org.hibernate.Query as your primary
- query manipulation API.</para>
+ <para>Hibernate Search queries are built on top of Lucene queries which
+ gives you total freedom on the type of Lucene query you want to execute.
+ However, once built, Hibernate Search wraps further query processing using
+ <classname>org.hibernate.Query</classname> as your primary query
+ manipulation API. </para>
<section>
<title>Building a Lucene query</title>
- <para>This subject is generally speaking out of the scope of this
- documentation. Please refer to the Lucene documentation Lucene In Action
- or Hibernate Search in Action from Manning.</para>
-
- <para>It is essential to use the same analyzer when indexing a field and
- when querying that field. Hibernate Search gives you access to the
- analyzers used during indexing time (see <xref
- linkend="analyzer-retrievinganalyzer" /> for more information).</para>
-
- <programlisting>//retrieve an analyzer by name
-Analyzer analyzer = fullTextSession.getSearchFactory().getAnalyzer("phonetic-analyzer");
-
-//or the scoped analyzer for a given entity
-Analyzer analyzer = fullTextSession.getSearchFactory().getAnalyzer(Song.class);</programlisting>
-
- <para>Using the same analyzer at indexing and querying time is
- important. See <xref linkend="analyzer" /> for more information.</para>
+ <para>It is beyond the scope of this documentation to explain how to
+ build a Lucene query. Please refer to the online Lucene documentation or
+ get hold of a copy of either Lucene In Action or Hibernate Search in
+ Action.</para>
</section>
<section>
@@ -122,39 +145,58 @@
<para>Once the Lucene query is built, it needs to be wrapped into an
Hibernate Query.</para>
- <programlisting>FullTextSession fullTextSession = Search.getFullTextSession( session );
+ <example>
+ <title>Wrapping a Lucene query into a Hibernate Query</title>
+
+ <programlisting>FullTextSession fullTextSession = Search.getFullTextSession( session );
org.hibernate.Query fullTextQuery = fullTextSession.createFullTextQuery( luceneQuery );</programlisting>
+ </example>
<para>If not specified otherwise, the query will be executed against
all indexed entities, potentially returning all types of indexed
classes. It is advised, from a performance point of view, to restrict
the returned types:</para>
- <programlisting>org.hibernate.Query fullTextQuery = fullTextSession.createFullTextQuery( luceneQuery, Customer.class );
-//or
+ <example>
+ <title>Filtering the search result by entity type</title>
+
+ <programlisting>org.hibernate.Query fullTextQuery = fullTextSession.createFullTextQuery( luceneQuery, Customer.class );
+// or
fullTextQuery = fullTextSession.createFullTextQuery( luceneQuery, Item.class, Actor.class );</programlisting>
+ </example>
- <para>The first example returns only matching customers, the second
- returns matching actors and items.</para>
+ <para>The first example returns only matching
+ <classname>Customer</classname>s, the second returns matching
+ <classname>Actor</classname>s and <classname>Item</classname>s. The
+ type restriction is fully polymorphic which means that if there are
+ two indexed subclasses <classname>Salesman</classname> and
+ <classname>Customer</classname> of the baseclass
+ <classname>Person</classname>, it is possible to just specify
+ <classname>Person.class</classname> in order to filter on result
+ types. </para>
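The polymorphic type restriction described above boils down to an `isAssignableFrom` check over the requested classes; a Hibernate-free sketch under that assumption (the `Person`/`Customer`/`Salesman`/`Item` classes are stand-ins taken from the text, `restrict` is a hypothetical helper, not the real implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class PolymorphicFilterSketch {
    static class Person {}
    static class Customer extends Person {}
    static class Salesman extends Person {}
    static class Item {}

    // keep only results whose runtime type is compatible with one of the requested classes
    static List<Object> restrict(List<Object> results, Class<?>... types) {
        List<Object> filtered = new ArrayList<>();
        for (Object candidate : results) {
            for (Class<?> type : types) {
                if (type.isAssignableFrom(candidate.getClass())) {
                    filtered.add(candidate);
                    break;
                }
            }
        }
        return filtered;
    }

    public static void main(String[] args) {
        List<Object> results = List.of(new Customer(), new Salesman(), new Item());
        // restricting on Person.class matches both subclasses but not Item
        System.out.println(restrict(results, Person.class).size()); // prints 2
    }
}
```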
</section>
<section>
<title>Pagination</title>
- <para>It is recommended to restrict the number of returned objects per
- query. It is a very common use case as well, the user usually navigate
- from one page to an other. The way to define pagination is exactly the
- way you would define pagination in a plain HQL or Criteria
- query.</para>
+ <para>For performance reasons it is recommended to restrict the
+ number of returned objects per query. In fact it is a very common use
+ case that the user navigates from one page to another. The way
+ to define pagination is exactly the way you would define pagination in
+ a plain HQL or Criteria query.</para>
- <programlisting>org.hibernate.Query fullTextQuery = fullTextSession.createFullTextQuery( luceneQuery, Customer.class );
+ <example>
+ <title>Defining pagination for a search query</title>
+
+ <programlisting>org.hibernate.Query fullTextQuery = fullTextSession.createFullTextQuery( luceneQuery, Customer.class );
fullTextQuery.setFirstResult(15); //start from the 15th element
fullTextQuery.setMaxResults(10); //return 10 elements</programlisting>
+ </example>
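The arithmetic behind `setFirstResult`/`setMaxResults` is worth spelling out; a plain-Java sketch with no Hibernate dependency (`firstResult` and `pageCount` are hypothetical helper names, not part of the Hibernate API; 3245 is the result size used later in this chapter):

```java
public class PaginationSketch {
    // index of the first element on a zero-based page
    static int firstResult(int page, int pageSize) {
        return page * pageSize;
    }

    // total number of pages for a given result size (ceiling division)
    static int pageCount(int resultSize, int pageSize) {
        return (resultSize + pageSize - 1) / pageSize;
    }

    public static void main(String[] args) {
        System.out.println(firstResult(1, 10));   // prints 10
        System.out.println(pageCount(3245, 10));  // prints 325
    }
}
```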
<note>
<para>It is still possible to get the total number of matching
- elements regardless of the pagination. See
- <methodname>getResultSize()</methodname> below</para>
+ elements regardless of the pagination via
+ <methodname>fullTextQuery.getResultSize()</methodname>.</para>
</note>
</section>
@@ -163,22 +205,24 @@
<para>Apache Lucene provides a very flexible and powerful way to sort
results. While the default sorting (by relevance) is appropriate most
- of the time, it can interesting to sort by one or several
- properties.</para>
+ of the time, it can be interesting to sort by one or several other
+ properties. In order to do so, set the Lucene Sort object to apply a
+ Lucene sorting strategy.</para>
- <para>Inject the Lucene Sort object to apply a Lucene sorting strategy
- to an Hibernate Search.</para>
+ <example>
+ <title>Specifying a Lucene <classname>Sort</classname> in order to
+ sort the results</title>
- <programlisting>org.hibernate.search.FullTextQuery query = s.createFullTextQuery( query, Book.class );
+ <programlisting>org.hibernate.search.FullTextQuery query = s.createFullTextQuery( luceneQuery, Book.class );
org.apache.lucene.search.Sort sort = new Sort(new SortField("title"));
<emphasis role="bold">query.setSort(sort);</emphasis>
List results = query.list();</programlisting>
+ </example>
<para>One can notice the <classname>FullTextQuery</classname>
interface which is a sub interface of
- <classname>org.hibernate.Query</classname>.</para>
-
- <para>Fields used for sorting must not be tokenized.</para>
+ <classname>org.hibernate.Query</classname>. Be aware that fields used
+ for sorting must not be tokenized.</para>
</section>
<section>
@@ -191,8 +235,13 @@
<para>It is often useful, however, to refine the fetching strategy for
a specific use case.</para>
- <programlisting>Criteria criteria = s.createCriteria( Book.class ).setFetchMode( "authors", FetchMode.JOIN );
+ <example>
+ <title>Specifying <classname>FetchMode</classname> on a
+ query</title>
+
+ <programlisting>Criteria criteria = s.createCriteria( Book.class ).setFetchMode( "authors", FetchMode.JOIN );
s.createFullTextQuery( luceneQuery ).setCriteriaQuery( criteria );</programlisting>
+ </example>
<para>In this example, the query will return all Books matching the
luceneQuery. The authors collection will be loaded from the same query
@@ -215,7 +264,11 @@
overkill. Only a small subset of the properties is necessary.
Hibernate Search allows you to return a subset of properties:</para>
- <programlisting>org.hibernate.search.FullTextQuery query = s.createFullTextQuery( luceneQuery, Book.class );
+ <example>
+ <title>Using projection instead of returning the full domain
+ object</title>
+
+ <programlisting>org.hibernate.search.FullTextQuery query = s.createFullTextQuery( luceneQuery, Book.class );
query.<emphasis role="bold">setProjection( "id", "summary", "body", "mainAuthor.name" )</emphasis>;
List results = query.list();
Object[] firstResult = (Object[]) results.get(0);
@@ -223,6 +276,7 @@
String summary = firstResult[1];
String body = firstResult[2];
String authorName = firstResult[3];</programlisting>
+ </example>
<para>Hibernate Search extracts the properties from the Lucene index
and convert them back to their object representation, returning a list
@@ -246,6 +300,17 @@
the latter being the simpler version. All Hibernate Search
built-in types are two-way.</para>
</listitem>
+
+ <listitem>
+ <para>you can only project simple properties of the indexed entity
+ or its embedded associations. This means you cannot project a
+ whole embedded entity.</para>
+ </listitem>
+
+ <listitem>
+ <para>projection does not work on collections or maps which are
+ indexed via <classname>@IndexedEmbedded</classname></para>
+ </listitem>
</itemizedlist>
<para>Projection is useful for another kind of usecases. Lucene
@@ -253,13 +318,17 @@
using some special placeholders, the projection mechanism can retrieve
them:</para>
- <programlisting>org.hibernate.search.FullTextQuery query = s.createFullTextQuery( luceneQuery, Book.class );
+ <example>
+ <title>Using projection in order to retrieve meta data</title>
+
+ <programlisting>org.hibernate.search.FullTextQuery query = s.createFullTextQuery( luceneQuery, Book.class );
query.<emphasis role="bold">setProjection( FullTextQuery.SCORE, FullTextQuery.THIS, "mainAuthor.name" )</emphasis>;
List results = query.list();
Object[] firstResult = (Object[]) results.get(0);
float score = firstResult[0];
Book book = firstResult[1];
String authorName = firstResult[2];</programlisting>
+ </example>
<para>You can mix and match regular fields and special placeholders.
Here is the list of available placeholders:</para>
@@ -315,7 +384,7 @@
<para>Once the Hibernate Search query is built, executing it is in no way
different than executing a HQL or Criteria query. The same paradigm and
- object semantic apply. All the common operations are available:
+ object semantics apply. All the common operations are available:
<methodname>list()</methodname>, <methodname>uniqueResult()</methodname>,
<methodname>iterate()</methodname>,
<methodname>scroll()</methodname>.</para>
@@ -338,8 +407,8 @@
<methodname>scroll()</methodname> is more appropriate. Don't forget to
close the <classname>ScrollableResults</classname> object when you're
done, since it keeps Lucene resources. If you expect to use
- <methodname>scroll</methodname> but wish to load objects in batch, you
- can use <methodname>query.setFetchSize()</methodname>: When an object is
+ <methodname>scroll</methodname>, but wish to load objects in batch, you
+ can use <methodname>query.setFetchSize()</methodname>. When an object is
accessed, and if not already loaded, Hibernate Search will load the next
<literal>fetchSize</literal> objects in one pass.</para>
@@ -367,21 +436,23 @@
</listitem>
</itemizedlist>
- <para>But it would be costly to retrieve all the matching
- documents.</para>
-
- <para>Hibernate Search allows you to retrieve the total number of
+ <para>Of course it would be too costly to retrieve all the matching
+ documents. Hibernate Search allows you to retrieve the total number of
matching documents regardless of the pagination parameters. Even more
interesting, you can retrieve the number of matching elements without
triggering a single object load.</para>
- <programlisting>org.hibernate.search.FullTextQuery query = s.createFullTextQuery( luceneQuery, Book.class );
+ <example>
+ <title>Determining the result size of a query</title>
+
+ <programlisting>org.hibernate.search.FullTextQuery query = s.createFullTextQuery( luceneQuery, Book.class );
assert 3245 == <emphasis role="bold">query.getResultSize()</emphasis>; //return the number of matching books without loading a single one
org.hibernate.search.FullTextQuery query = s.createFullTextQuery( luceneQuery, Book.class );
query.setMaxResults(10);
List results = query.list();
assert 3245 == <emphasis role="bold">query.getResultSize()</emphasis>; //return the total number of matching books regardless of pagination</programlisting>
+ </example>
<note>
<para>Like Google, the number of results is approximative if the index
@@ -399,7 +470,10 @@
<classname>ResultTransformer</classname> operation post query to match
the targeted data structure:</para>
- <programlisting>org.hibernate.search.FullTextQuery query = s.createFullTextQuery( luceneQuery, Book.class );
+ <example>
+ <title>Using ResultTransformer in conjunction with projections</title>
+
+ <programlisting>org.hibernate.search.FullTextQuery query = s.createFullTextQuery( luceneQuery, Book.class );
query.setProjection( "title", "mainAuthor.name" );
<emphasis role="bold">query.setResultTransformer(
@@ -409,6 +483,7 @@
for(BookView view : results) {
log.info( "Book: " + view.getTitle() + ", " + view.getAuthor() );
}</programlisting>
+ </example>
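The projection-plus-transformer pattern above maps each projected `Object[]` row to a typed view object; its mechanics can be sketched without Hibernate (`RowTransformer` and `BookView` are simplified stand-ins for the real `ResultTransformer` API and the hypothetical view class from the example):

```java
import java.util.ArrayList;
import java.util.List;

public class TransformerSketch {
    // simplified stand-in for org.hibernate.transform.ResultTransformer
    interface RowTransformer<T> {
        T transform(Object[] row);
    }

    // hypothetical view class, mirroring the BookView used in the documentation example
    static class BookView {
        final String title;
        final String author;
        BookView(String title, String author) { this.title = title; this.author = author; }
    }

    // apply the transformer to every projected row, as the query would post-processing
    static <T> List<T> transformAll(List<Object[]> rows, RowTransformer<T> t) {
        List<T> out = new ArrayList<>();
        for (Object[] row : rows) out.add(t.transform(row));
        return out;
    }

    public static void main(String[] args) {
        // rows as they would come back from setProjection("title", "mainAuthor.name")
        List<Object[]> rows = new ArrayList<>();
        rows.add(new Object[] { "Hibernate Search in Action", "Emmanuel Bernard" });
        List<BookView> views = transformAll(rows,
                row -> new BookView((String) row[0], (String) row[1]));
        System.out.println(views.get(0).title + ", " + views.get(0).author);
    }
}
```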
<para>Examples of <classname>ResultTransformer</classname>
implementations can be found in the Hibernate Core codebase.</para>
@@ -419,12 +494,12 @@
<para>You will find yourself sometimes puzzled by a result showing up in
a query or a result not showing up in a query. Luke is a great tool to
- understand those mysteries. But Hibernate Search also let's you access
- to the Lucene <classname>Explanation</classname> object for a given
- result (in a given query). This class is considered fairly advanced to
- Lucene users but can provide a good understanding of the scoring of an
- object. You have two ways to access the Explanation object for a given
- result:</para>
+ understand those mysteries. However, Hibernate Search also gives you
+ access to the Lucene <classname>Explanation</classname> object for a
+ given result (in a given query). This class is considered fairly
+ advanced even for Lucene users but can provide a good understanding of the
+ scoring of an object. You have two ways to access the Explanation object
+ for a given result:</para>
<itemizedlist>
<listitem>
@@ -443,21 +518,26 @@
constant.</para>
<warning>
- <para>The Document id has nothing to do with the entity id. do not
- mess up the two notions.</para>
+ <para>The Document id has nothing to do with the entity id. Do not
+ mess up these two notions.</para>
</warning>
<para>The second approach lets you project the
<classname>Explanation</classname> object using the
<literal>FullTextQuery.EXPLANATION</literal> constant.</para>
- <programlisting>FullTextQuery ftQuery = s.createFullTextQuery( luceneQuery, Dvd.class )
+ <example>
+ <title>Retrieving the Lucene Explanation object using
+ projection</title>
+
+ <programlisting>FullTextQuery ftQuery = s.createFullTextQuery( luceneQuery, Dvd.class )
.setProjection( FullTextQuery.DOCUMENT_ID, <emphasis role="bold">FullTextQuery.EXPLANATION</emphasis>, FullTextQuery.THIS );
@SuppressWarnings("unchecked") List<Object[]> results = ftQuery.list();
for (Object[] result : results) {
Explanation e = (Explanation) result[1];
display( e.toString() );
}</programlisting>
+ </example>
<para>Be careful: building the explanation object is quite expensive; it
is roughly as expensive as running the Lucene query again. Don't do it
@@ -497,10 +577,14 @@
For people familiar with the notion of Hibernate Core filters, the API is
very similar:</para>
- <programlisting>fullTextQuery = s.createFullTextQuery( query, Driver.class );
+ <example>
+ <title>Enabling fulltext filters for a given query</title>
+
+ <programlisting>fullTextQuery = s.createFullTextQuery( query, Driver.class );
fullTextQuery.enableFullTextFilter("bestDriver");
fullTextQuery.enableFullTextFilter("security").setParameter( "login", "andre" );
fullTextQuery.list(); //returns only best drivers where andre has credentials</programlisting>
+ </example>
<para>In this example we enabled two filters on top of the query. You can
enable (or disable) as many filters as you like.</para>
@@ -515,7 +599,10 @@
are defined. Each named filter has to specify its actual filter
implementation.</para>
- <programlisting>@Entity
+ <example>
+ <title>Defining and implementing a Filter</title>
+
+ <programlisting>@Entity
@Indexed
@FullTextFilterDefs( {
<emphasis role="bold">@FullTextFilterDef(name = "bestDriver", impl = BestDriversFilter.class)</emphasis>,
@@ -523,8 +610,8 @@
})
public class Driver { ... }</programlisting>
- <programlisting>public class BestDriversFilter extends <emphasis
- role="bold">org.apache.lucene.search.Filter</emphasis> {
+ <programlisting>public class BestDriversFilter extends <emphasis
+ role="bold">org.apache.lucene.search.Filter</emphasis> {
public DocIdSet getDocIdSet(IndexReader reader) throws IOException {
OpenBitSet bitSet = new OpenBitSet( reader.maxDoc() );
@@ -535,6 +622,7 @@
return bitSet;
}
}</programlisting>
+ </example>
<para><classname>BestDriversFilter</classname> is an example of a simple
Lucene filter which reduces the result set to drivers whose score is 5. In
@@ -546,7 +634,10 @@
you want to use does not have a no-arg constructor, you can use the
factory pattern:</para>
- <programlisting>@Entity
+ <example>
+ <title>Creating a filter using the factory pattern</title>
+
+ <programlisting>@Entity
@Indexed
@FullTextFilterDef(name = "bestDriver", impl = BestDriversFilterFactory.class)
public class Driver { ... }
@@ -560,6 +651,7 @@
return new CachingWrapperFilter(bestDriversFilter);
}
}</programlisting>
+ </example>
<para>Hibernate Search will look for a <literal>@Factory</literal>
annotated method and use it to build the filter instance. The factory must
@@ -571,13 +663,20 @@
the filter. For example a security filter might want to know which
security level you want to apply:</para>
- <programlisting>fullTextQuery = s.createFullTextQuery( query, Driver.class );
+ <example>
+ <title>Passing parameters to a defined filter</title>
+
+ <programlisting>fullTextQuery = s.createFullTextQuery( query, Driver.class );
fullTextQuery.enableFullTextFilter("security")<emphasis role="bold">.setParameter( "level", 5 )</emphasis>;</programlisting>
+ </example>
<para>Each parameter name should have an associated setter on either the
filter or filter factory of the targeted named filter definition.</para>
- <programlisting>public class SecurityFilterFactory {
+ <example>
+ <title>Using parameters in the actual filter implementation</title>
+
+ <programlisting>public class SecurityFilterFactory {
private Integer level;
/**
@@ -600,6 +699,7 @@
return new CachingWrapperFilter( new QueryWrapperFilter(query) );
}
}</programlisting>
+ </example>
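Parameter values such as `level` also feed the filter caching key: keys built from equal parameter values must be equal and hash alike, so a cached filter instance can be reused across queries. A Hibernate-free sketch of that equals/hashCode contract (`LevelKey` is a hypothetical class, not the real `FilterKey`):

```java
import java.util.Objects;

public class FilterKeySketch {
    // hypothetical key class: equality depends only on the filter parameter values
    static class LevelKey {
        final int level;
        LevelKey(int level) { this.level = level; }

        @Override public boolean equals(Object o) {
            return o instanceof LevelKey && ((LevelKey) o).level == this.level;
        }

        @Override public int hashCode() {
            return Objects.hash(level);
        }
    }

    public static void main(String[] args) {
        // two queries enabling the "security" filter with the same level
        // produce equal keys, so the cached filter instance is reused
        System.out.println(new LevelKey(5).equals(new LevelKey(5))); // prints true
        System.out.println(new LevelKey(5).equals(new LevelKey(3))); // prints false
    }
}
```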
<para>Note the method annotated <classname>@Key</classname> returning a
<classname>FilterKey</classname> object. The returned object has a special
Hibernate SVN: r15634 - validator/trunk/validation-api/src/main/java/javax/validation.
by hibernate-commits@lists.jboss.org
Author: epbernard
Date: 2008-12-02 09:05:20 -0500 (Tue, 02 Dec 2008)
New Revision: 15634
Modified:
validator/trunk/validation-api/src/main/java/javax/validation/Validator.java
Log:
Remove serializability of Validator
Modified: validator/trunk/validation-api/src/main/java/javax/validation/Validator.java
===================================================================
--- validator/trunk/validation-api/src/main/java/javax/validation/Validator.java 2008-12-02 12:18:30 UTC (rev 15633)
+++ validator/trunk/validation-api/src/main/java/javax/validation/Validator.java 2008-12-02 14:05:20 UTC (rev 15634)
@@ -21,14 +21,14 @@
import java.util.Set;
/**
- * Validate objects
+ * Validate bean instances
* Implementations of this interface must be thread-safe
*
* @author Emmanuel Bernard
* @author Hardy Ferentschik
* @todo Should Serializable be part of the definition?
*/
-public interface Validator extends Serializable {
+public interface Validator {
/**
* validate all constraints on object
*
@@ -73,7 +73,7 @@
<T> Set<ConstraintViolation<T>> validateValue(Class<T> beanType, String propertyName, Object value, String... groups);
/**
- * Return the class level constraints
+ * Return the descriptor object describing bean constraints
* The returned object (and associated objects including ConstraintDescriptors)
* are immutable.
*