[hibernate-commits] Hibernate SVN: r11253 - in branches/Branch_3_2/HibernateExt/search: doc/reference/en/modules and 5 other directories.
hibernate-commits at lists.jboss.org
Tue Mar 6 00:23:24 EST 2007
Author: epbernard
Date: 2007-03-06 00:23:23 -0500 (Tue, 06 Mar 2007)
New Revision: 11253
Added:
branches/Branch_3_2/HibernateExt/search/doc/reference/en/images/jms-backend.png
branches/Branch_3_2/HibernateExt/search/doc/reference/en/images/lucene-backend.png
branches/Branch_3_2/HibernateExt/search/src/java/org/hibernate/search/store/FSSlaveDirectoryProvider.java
branches/Branch_3_2/HibernateExt/search/src/test/org/hibernate/search/test/directoryProvider/FSSlaveAndMasterDPTest.java
Removed:
branches/Branch_3_2/HibernateExt/search/src/java/org/hibernate/search/store/FSSwitchableDirectoryProvider.java
branches/Branch_3_2/HibernateExt/search/src/test/org/hibernate/search/test/directoryProvider/FSSwitchableAndMasterDPTest.java
Modified:
branches/Branch_3_2/HibernateExt/search/doc/reference/en/modules/architecture.xml
branches/Branch_3_2/HibernateExt/search/doc/reference/en/modules/configuration.xml
branches/Branch_3_2/HibernateExt/search/doc/reference/en/modules/mapping.xml
branches/Branch_3_2/HibernateExt/search/src/java/org/hibernate/search/Environment.java
branches/Branch_3_2/HibernateExt/search/src/java/org/hibernate/search/backend/impl/BatchedQueueingProcessor.java
branches/Branch_3_2/HibernateExt/search/src/java/org/hibernate/search/store/FSMasterDirectoryProvider.java
branches/Branch_3_2/HibernateExt/search/src/test/org/hibernate/search/test/worker/AsyncWorkerTest.java
Log:
Documentation and better property names
Added: branches/Branch_3_2/HibernateExt/search/doc/reference/en/images/jms-backend.png
===================================================================
(Binary files differ)
Property changes on: branches/Branch_3_2/HibernateExt/search/doc/reference/en/images/jms-backend.png
___________________________________________________________________
Name: svn:mime-type
+ application/octet-stream
Added: branches/Branch_3_2/HibernateExt/search/doc/reference/en/images/lucene-backend.png
===================================================================
(Binary files differ)
Property changes on: branches/Branch_3_2/HibernateExt/search/doc/reference/en/images/lucene-backend.png
___________________________________________________________________
Name: svn:mime-type
+ application/octet-stream
Modified: branches/Branch_3_2/HibernateExt/search/doc/reference/en/modules/architecture.xml
===================================================================
--- branches/Branch_3_2/HibernateExt/search/doc/reference/en/modules/architecture.xml 2007-03-06 05:19:39 UTC (rev 11252)
+++ branches/Branch_3_2/HibernateExt/search/doc/reference/en/modules/architecture.xml 2007-03-06 05:23:23 UTC (rev 11253)
@@ -6,17 +6,10 @@
engine. Both are backed by Apache Lucene.</para>
<para>When an entity is inserted, updated or removed to/from the database,
- <productname>Hibernate Search</productname> will keep track of this event
- (through the Hibernate event system) and schedule an index update. When out
- of transaction, the update is executed right after the actual database
- operation. It is however recommended, for both your database and Hibernate
- Search, to execute your operation in a transaction (whether JDBC or JTA).
- When in a transaction, the index update is schedule for the transaction
- commit (and discarded in case of transaction rollback). You can think of
- this as the regular (infamous) autocommit vs transactional behavior. From a
- performance perspective, the <emphasis>in transaction</emphasis> mode is
- recommended. All the index updates are handled for you without you having to
- use the Apache Lucene APIs.</para>
+ <productname>Hibernate Search</productname> keeps track of this event
+ (through the Hibernate event system) and schedules an index update. All the
+ index updates are handled for you without you having to use the Apache
+ Lucene APIs.</para>
<para>To interact with Apache Lucene indexes, Hibernate Search has the
notion of <classname>DirectoryProvider</classname> . A directory provider
@@ -31,50 +24,163 @@
native query would be done.</para>
<section>
- <title>Backend</title>
+ <title>Batching Scope</title>
- <para>Hibernate Search offers the ability to process the</para>
+ <para>To be more efficient, Hibernate Search batches the interactions with
+ the Lucene index. There are currently two types of batching, depending on
+ the expected scope.</para>
+ <para>When out of transaction, the index update operation is executed
+ right after the actual database operation. This scope is really no scope
+ at all, and no batching is performed.</para>
+
+ <para>It is however recommended, for both your database and Hibernate
+ Search, to execute your operations in a transaction (whether it be JDBC or
+ JTA). When in a transaction, the index update operation is scheduled for
+ the transaction commit (and discarded in case of transaction rollback).
+ The batching scope is the transaction. There are two immediate
+ benefits:</para>
+
+ <itemizedlist>
+ <listitem>
+ <para>performance: Lucene indexing works better when operations are
+ executed in batch.</para>
+ </listitem>
+
+ <listitem>
+ <para>ACIDity: The work executed has the same scoping as the one
+ executed by the database transaction and is executed if and only if
+ the transaction is committed.</para>
+
+ <note>
+ <para>Disclaimer: the work is not ACID in the strict sense, but
+ ACID behavior is rarely useful for full text search indexes since
+ they can be rebuilt from the source at any time.</para>
+ </note>
+ </listitem>
+ </itemizedlist>
+
+ <para>You can think of those two scopes (no scope vs transactional) as the
+ equivalent of the (infamous) autocommit vs transactional behavior. From a
+ performance perspective, the <emphasis>in transaction</emphasis> mode is
+ recommended. The scoping choice is made transparently: Hibernate Search
+ detects the presence of a transaction and adjusts the scoping.</para>
+
+ <remark>Note that Hibernate Search works perfectly fine in the Hibernate /
+ EntityManager long conversation pattern, aka atomic conversation.</remark>
+
+ <para>Depending on user demand, additional scopes will be considered;
+ the pluggability mechanism is already in place.</para>
+ </section>
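The transactional batching described above can be sketched in plain Java. This is a simplified illustration of the idea only, not Hibernate Search's internal classes; the class and method names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: index work is queued during the transaction and
// only flushed to the index in one batch at commit time; a rollback
// discards the scheduled work without ever touching the index.
public class TransactionalIndexQueue {
    private final List<String> pending = new ArrayList<>();
    private final List<String> index = new ArrayList<>();

    public void scheduleWork(String work) {
        pending.add(work);                 // scheduled, not yet applied
    }

    public void commit() {
        index.addAll(pending);             // apply the whole batch at once
        pending.clear();
    }

    public void rollback() {
        pending.clear();                   // discard scheduled work
    }

    public List<String> indexContents() {
        return index;
    }
}
```

A rolled-back unit of work leaves the index untouched, while a committed one applies all scheduled operations in a single batch.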
+
+ <section>
+ <title>Back end</title>
+
+ <para>Hibernate Search offers the ability to let the scoped work be
+ processed by different back ends. Two back ends are provided out of the
+ box for two different scenarios.</para>
+
<section>
<title>Lucene</title>
<para>In this mode, all index update operations applied on a given node
(JVM) will be executed to the Lucene directories (through the directory
providers) by the same node. This mode is typically used in non
- clustered mode or in clustered mode where the directory store is shared.
- </para>
+ clustered mode or in clustered mode where the directory store is
+ shared.</para>
+
+ <mediaobject>
+ <imageobject role="html">
+ <imagedata align="center"
+ fileref="../shared/images/lucene-backend.png"
+ format="PNG" />
+ </imageobject>
+
+ <imageobject role="fo">
+ <imagedata align="center" fileref="images/lucene-backend.png"
+ format="PNG" />
+ </imageobject>
+ </mediaobject>
+
+ <para>This mode targets non-clustered applications, or clustered
+ applications where the Directory takes care of the locking
+ strategy.</para>
+
+ <para>The main advantage is simplicity and immediate visibility of the
+ changes in Lucene queries (a requirement in some applications).</para>
</section>
<section>
<title>JMS</title>
- <para></para>
+ <para>All index update operations applied on a given node are sent to a
+ JMS queue. A unique reader will then process the queue and update the
+ master Lucene index. The master index is then replicated on a regular
+ basis to the slave copies. This is known as the master / slaves pattern.
+ The master is solely responsible for updating the Lucene index; the
+ slaves accept read/write operations, process read operations on
+ their local index copy, and delegate the update operations to the
+ master.</para>
+
+ <mediaobject>
+ <imageobject role="html">
+ <imagedata align="center" fileref="../shared/images/jms-backend.png"
+ format="PNG" />
+ </imageobject>
+
+ <imageobject role="fo">
+ <imagedata align="center" fileref="images/jms-backend.png"
+ format="PNG" />
+ </imageobject>
+ </mediaobject>
+
+ <para>This mode targets clustered environments where throughput is
+ critical, and index update delays are acceptable. Reliability is ensured
+ by the JMS provider and by having the slaves working on a local copy of
+ the index.</para>
</section>
<section>
<title>Custom</title>
- <para></para>
+ <para>Hibernate Search is an extensible architecture. While not yet part
+ of the public API, plugging in a third party back end is possible. Feel
+ free to drop ideas to
+ <literal>hibernate-dev at lists.jboss.org</literal>.</para>
</section>
</section>
<section>
<title>Work execution</title>
- <para>The indexing work can be executed synchronously with the transaction
- commit (or update operation if out of transaction), or
- asynchronously.</para>
+ <para>The indexing work (done by the back end) can be executed
+ synchronously with the transaction commit (or update operation if out of
+ transaction), or asynchronously.</para>
<section>
<title>Synchronous</title>
- <para></para>
+ <para>This is the safe mode where the back end work is executed in
+ concert with the transaction commit. In highly concurrent
+ environments, this can lead to throughput limitations (due to the Apache
+ Lucene lock mechanism). It can also increase the system response time if
+ the back end is significantly slower than the transactional process and
+ if a lot of IO operations are involved.</para>
</section>
<section>
<title>Asynchronous</title>
- <para></para>
+ <para>This mode delegates the work done by the back end to a different
+ thread. That way, throughput and response time are (to a certain extent)
+ decoupled from the back end performance. The drawback is that a small
+ delay appears between the transaction commit and the index update, and a
+ small overhead is introduced to deal with thread management.</para>
+
+ <para>It is recommended to use synchronous execution first, and to
+ evaluate asynchronous execution if performance problems occur, after
+ having set up a proper benchmark (i.e. not a lonely cowboy hitting the
+ system in a completely unrealistic way).</para>
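The asynchronous delegation described above can be sketched with a plain `java.util.concurrent` executor. This is a hypothetical illustration of the mechanism, not the actual worker implementation:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: index work is handed to a dedicated worker thread,
// so the submitting (transactional) thread returns without waiting for
// the index update to complete.
public class AsyncWorkSketch {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    private final AtomicInteger processed = new AtomicInteger();

    public void submitIndexWork() {
        // runs later, on the worker thread, decoupled from the caller
        executor.execute(processed::incrementAndGet);
    }

    public int drain() {
        executor.shutdown();
        try {
            executor.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return processed.get();
    }
}
```

The small delay between submission and processing mirrors the delay between transaction commit and index update mentioned above.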
</section>
</section>
</chapter>
\ No newline at end of file
Modified: branches/Branch_3_2/HibernateExt/search/doc/reference/en/modules/configuration.xml
===================================================================
--- branches/Branch_3_2/HibernateExt/search/doc/reference/en/modules/configuration.xml 2007-03-06 05:19:39 UTC (rev 11252)
+++ branches/Branch_3_2/HibernateExt/search/doc/reference/en/modules/configuration.xml 2007-03-06 05:23:23 UTC (rev 11253)
@@ -46,6 +46,63 @@
<entry>none</entry>
</row>
+
+ <row>
+ <entry>org.hibernate.search.store.FSMasterDirectoryProvider</entry>
+
+ <entry><para>File system based directory. Like
+ FSDirectoryProvider. It also copies the index to a source directory
+ (aka copy directory) on a regular basis. </para><para>The
+ recommended value for the refresh period is (at least) 50% higher
+ than the time it takes to copy the information (default 3600 seconds
+ - 60 minutes).</para><para>Note that the copy is based on an
+ incremental copy mechanism reducing the average copy
+ time.</para><para>DirectoryProvider typically used on the master
+ node in a JMS back end cluster.</para></entry>
+
+ <entry><para><literal>indexBase</literal>: Base
+ directory.</para><para><literal>sourceBase</literal>: Source (copy)
+ base directory.</para><para><literal>source</literal>: Source
+ directory suffix (defaults to <literal>@Indexed.name</literal>).
+ The actual source directory name is
+ <filename><sourceBase>/<source></filename>.
+ </para><para><literal>refresh</literal>: refresh period in seconds
+ (the copy takes place every refresh seconds).</para></entry>
+ </row>
+
+ <row>
+ <entry>org.hibernate.search.store.FSSlaveDirectoryProvider</entry>
+
+ <entry><para>File system based directory. Like
+ FSDirectoryProvider, but retrieves a master version (source) on a
+ regular basis. To avoid locking and inconsistent search results, two
+ local copies are kept. </para><para>The recommended value for the
+ refresh period is (at least) 50% higher than the time it takes to
+ copy the information (default 3600 seconds - 60 minutes).</para><para>Note
+ that the copy is based on an incremental copy mechanism reducing
+ the average copy time.</para><para>DirectoryProvider typically
+ used on slave nodes using a JMS back end.</para></entry>
+
+ <entry><para><literal>indexBase</literal>: Base
+ directory.</para><para><literal>sourceBase</literal>: Source (copy)
+ base directory.</para><para><literal>source</literal>: Source
+ directory suffix (defaults to <literal>@Indexed.name</literal>).
+ The actual source directory name is
+ <filename><sourceBase>/<source></filename>.
+ </para><para><literal>refresh</literal>: refresh period in seconds
+ (the copy takes place every refresh seconds).</para></entry>
+ </row>
+
+ <row>
+ <entry>org.hibernate.search.store.RAMDirectoryProvider</entry>
+
+ <entry>Memory based directory; the directory is uniquely
+ identified (in the same deployment unit) by the
+ <literal>@Indexed.name</literal> element.</entry>
+
+ <entry>none</entry>
+ </row>
</tbody>
</tgroup>
</table>
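The FSSlaveDirectoryProvider entry above mentions that two local index copies are kept; the active copy is designated by an empty `current1`/`current2` marker file in the directory, as the FSSlaveDirectoryProvider source later in this commit shows. A minimal sketch of that marker convention (hypothetical helper class, not the provider's actual API):

```java
import java.io.File;
import java.io.IOException;

// Hypothetical sketch of the current1/current2 marker convention: two
// index copies alternate, and an empty marker file records which copy
// readers should use. Swapping the marker atomically redirects readers.
public class CurrentMarker {
    public static int activeCopy(File dir) {
        if (new File(dir, "current1").exists()) return 1;
        if (new File(dir, "current2").exists()) return 2;
        throw new IllegalStateException("No current marker in " + dir);
    }

    public static void switchTo(File dir, int copy) {
        // delete the other marker, then create the one for the new copy
        new File(dir, "current" + (copy == 1 ? 2 : 1)).delete();
        try {
            new File(dir, "current" + copy).createNewFile();
        } catch (IOException e) {
            throw new IllegalStateException("Cannot create marker in " + dir, e);
        }
    }
}
```

Because searches always read the copy named by the marker, the other copy can be refreshed from the master without locking readers out.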
@@ -78,8 +135,7 @@
public class Status { ... }
@Indexed(name="Rules")
-public class Rule { ... }
- </programlisting>
+public class Rule { ... }</programlisting>
<para>will create a file system directory in
<filename>/usr/lucene/indexes/Status</filename> where the Status entities
@@ -94,6 +150,208 @@
benefit this configuration mechanism too.</para>
</section>
+ <section>
+ <title>Worker and back end configuration</title>
+
+ <para>Hibernate Search work is done by a worker that you can configure.
+ The default (and only) worker today uses transactional scoping.</para>
+
+ <section>
+ <title>Generic configuration</title>
+
+ <para>You can define the worker configuration using the following
+ properties:</para>
+
+ <table>
+ <title>Worker configuration</title>
+
+ <tgroup cols="2">
+ <colspec align="center" />
+
+ <tbody>
+ <row>
+ <entry>property</entry>
+
+ <entry>description</entry>
+ </row>
+
+ <row>
+ <entry><literal>hibernate.search.worker.backend</literal></entry>
+
+ <entry>Out of the box support for the Apache Lucene back end and
+ the JMS back end. Defaults to <literal>lucene</literal>. Also
+ supports <literal>jms</literal>.</entry>
+ </row>
+
+ <row>
+ <entry><literal>hibernate.search.worker.execution</literal></entry>
+
+ <entry>Supports synchronous and asynchronous execution. Defaults
+ to <literal>sync</literal>. Also supports
+ <literal>async</literal>.</entry>
+ </row>
+
+ <row>
+ <entry><literal>hibernate.search.worker.thread_pool.size</literal></entry>
+
+ <entry>Defines the number of threads in the pool. Useful only
+ for asynchronous execution. Defaults to 1.</entry>
+ </row>
+
+ <row>
+ <entry><literal>hibernate.search.worker.buffer_queue.max</literal></entry>
+
+ <entry>Defines the maximal number of work operations that can be
+ queued if the thread pool is starved. Useful only for asynchronous
+ execution. Defaults to infinite. If the limit is reached, the work
+ is done by the main thread.</entry>
+ </row>
+
+ <row>
+ <entry><literal>hibernate.search.worker.jndi.*</literal></entry>
+
+ <entry>Defines the JNDI properties to initiate the
+ InitialContext (if needed). JNDI is only used by the JMS back
+ end.</entry>
+ </row>
+
+ <row>
+ <entry><literal>
+ hibernate.search.worker.jms.connection_factory</literal></entry>
+
+ <entry>Mandatory for the JMS back end. Defines the JNDI name to
+ lookup the JMS connection factory from
+ (<literal>java:/ConnectionFactory</literal> by default in JBoss
+ AS)</entry>
+ </row>
+
+ <row>
+ <entry><literal>
+ hibernate.search.worker.jms.queue</literal></entry>
+
+ <entry>Mandatory for the JMS back end. Defines the JNDI name to
+ lookup the JMS queue from. The queue will be used to post work
+ messages.</entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </section>
+
+ <section>
+ <title>JMS Back end</title>
+
+ <para>This section describes in greater detail how to configure the
+ Master / Slaves Hibernate Search architecture.</para>
+
+ <section>
+ <title>Slave nodes</title>
+
+ <para>Every index update operation is sent to a JMS queue. Index
+ querying operations are executed on a local index copy.</para>
+
+ <programlisting>### slave configuration
+
+## DirectoryProvider
+# (remote) master location
+hibernate.search.default.sourceBase = /mnt/mastervolume/lucenedirs/mastercopy
+
+# local copy location
+hibernate.search.default.indexBase = /Users/prod/lucenedirs
+
+# refresh every half hour
+hibernate.search.default.refresh = 1800
+
+# appropriate directory provider
+hibernate.search.default.directory_provider = org.hibernate.search.store.FSSlaveDirectoryProvider
+
+## Backend configuration
+hibernate.search.worker.backend = jms
+hibernate.search.worker.jms.connection_factory = java:/ConnectionFactory
+hibernate.search.worker.jms.queue = queue/hibernatesearch
+#optional jndi configuration (check your JMS provider for more information)
+
+## Optional asynchronous execution strategy
+# hibernate.search.worker.execution = async
+# hibernate.search.worker.thread_pool.size = 2
+# hibernate.search.worker.buffer_queue.max = 50</programlisting>
+
+ <para>A file system local copy is recommended for faster search
+ results.</para>
+
+ <para>The refresh period should be higher than the expected copy
+ time.</para>
+ </section>
+
+ <section>
+ <title>Master node</title>
+
+ <para>Every index update operation is taken from the JMS queue and
+ executed. The master index(es) is(are) copied on a regular
+ basis.</para>
+
+ <programlisting>### master configuration
+
+## DirectoryProvider
+# (remote) master location where information is copied to
+hibernate.search.default.sourceBase = /mnt/mastervolume/lucenedirs/mastercopy
+
+# local master location
+hibernate.search.default.indexBase = /Users/prod/lucenedirs
+
+# refresh every half hour
+hibernate.search.default.refresh = 1800
+
+# appropriate directory provider
+hibernate.search.default.directory_provider = org.hibernate.search.store.FSMasterDirectoryProvider
+
+## Backend configuration
+#Backend is the default lucene one</programlisting>
+
+ <para>The refresh period should be higher than the expected copy
+ time.</para>
+
+ <para>In addition to the Hibernate Search framework configuration, a
+ Message Driven Bean has to be written and set up to process the index
+ work queue through JMS.</para>
+
+ <programlisting>@MessageDriven(activationConfig = {
+ @ActivationConfigProperty(propertyName="destinationType", propertyValue="javax.jms.Queue"),
+ @ActivationConfigProperty(propertyName="destination", propertyValue="queue/hiebrnatesearch"),
+ @ActivationConfigProperty(propertyName="DLQMaxResent", propertyValue="1")
+ } )
+public class MDBSearchController extends AbstractJMSHibernateSearchController implements MessageListener {
+ @PersistenceContext EntityManager em;
+
+ //method retrieving the appropriate session
+ protected Session getSession() {
+ return (Session) em.getDelegate();
+ }
+
+ //potentially close the session opened in #getSession(), not needed here
+ protected void cleanSessionIfNeeded(Session session) {
+ }
+}</programlisting>
+
+ <para>This example inherits from the abstract JMS controller class
+ available in Hibernate Search and implements a Java EE 5 MDB. This
+ implementation is given as an example and, while most likely more
+ complex, can be adjusted to make use of non Java EE Message Driven
+ Beans. For more information about
+ <methodname>getSession()</methodname> and
+ <methodname>cleanSessionIfNeeded()</methodname>, please check
+ <classname>AbstractJMSHibernateSearchController</classname>'s
+ javadoc.</para>
+
+ <remark>The Hibernate Search test suite makes use of JBoss Embedded to
+ test the JMS integration. It allows the unit tests to run both the MDB
+ container and JBoss Messaging (the JMS provider) in a standalone way
+ (marketed by some as "lightweight").</remark>
+ </section>
+ </section>
+ </section>
+
<section id="search-configuration-event" revision="1">
<title>Enabling automatic indexing</title>
@@ -104,9 +362,11 @@
that there is no performance runtime when the listeners are enabled while
no entity is indexable.</para>
- <para>To enable automatic indexing in Hibernate core, add the
+ <para>To enable automatic indexing in Hibernate Core, add the
<literal>SearchEventListener</literal> for the three Hibernate events that
- occur after changes are executed to the database.</para>
+ occur after changes are executed to the database. Once again, such a
+ configuration is not necessary with Hibernate Annotations or Hibernate
+ EntityManager.</para>
<programlisting><hibernate-configuration>
...
Modified: branches/Branch_3_2/HibernateExt/search/doc/reference/en/modules/mapping.xml
===================================================================
--- branches/Branch_3_2/HibernateExt/search/doc/reference/en/modules/mapping.xml 2007-03-06 05:19:39 UTC (rev 11252)
+++ branches/Branch_3_2/HibernateExt/search/doc/reference/en/modules/mapping.xml 2007-03-06 05:23:23 UTC (rev 11253)
@@ -19,8 +19,7 @@
<emphasis role="bold">@Indexed(index="indexes/essays")</emphasis>
public class Essay {
...
-}
- </programlisting>
+}</programlisting>
<para>The <literal>index</literal> attribute tells Hibernate what the
Lucene directory name is (usually a directory on your file system). If you
Modified: branches/Branch_3_2/HibernateExt/search/src/java/org/hibernate/search/Environment.java
===================================================================
--- branches/Branch_3_2/HibernateExt/search/src/java/org/hibernate/search/Environment.java 2007-03-06 05:19:39 UTC (rev 11252)
+++ branches/Branch_3_2/HibernateExt/search/src/java/org/hibernate/search/Environment.java 2007-03-06 05:23:23 UTC (rev 11253)
@@ -22,5 +22,17 @@
public static final String WORKER_PREFIX = "hibernate.search.worker.";
public static final String WORKER_SCOPE = WORKER_PREFIX + "scope";
public static final String WORKER_BACKEND = WORKER_PREFIX + "backend";
- public static final String WORKER_PROCESS = WORKER_PREFIX + "process";
+ public static final String WORKER_EXECUTION = WORKER_PREFIX + "execution";
+ /**
+ * Only used when execution is async.
+ * Thread pool size.
+ * Defaults to 1.
+ */
+ public static final String WORKER_THREADPOOL_SIZE = Environment.WORKER_PREFIX + "thread_pool.size";
+ /**
+ * Only used when execution is async.
+ * Size of the buffer queue (besides the thread pool size).
+ * Defaults to infinite.
+ */
+ public static final String WORKER_WORKQUEUE_SIZE = Environment.WORKER_PREFIX + "buffer_queue.max";
}
Modified: branches/Branch_3_2/HibernateExt/search/src/java/org/hibernate/search/backend/impl/BatchedQueueingProcessor.java
===================================================================
--- branches/Branch_3_2/HibernateExt/search/src/java/org/hibernate/search/backend/impl/BatchedQueueingProcessor.java 2007-03-06 05:19:39 UTC (rev 11252)
+++ branches/Branch_3_2/HibernateExt/search/src/java/org/hibernate/search/backend/impl/BatchedQueueingProcessor.java 2007-03-06 05:23:23 UTC (rev 11253)
@@ -36,18 +36,29 @@
public BatchedQueueingProcessor(SearchFactory searchFactory,
Properties properties) {
//default to sync if none defined
- this.sync = !"async".equalsIgnoreCase( properties.getProperty( Environment.WORKER_PROCESS ) );
+ this.sync = !"async".equalsIgnoreCase( properties.getProperty( Environment.WORKER_EXECUTION ) );
+ //default to a simple asynchronous operation
int min = Integer.parseInt(
- properties.getProperty( Environment.WORKER_PREFIX + "thread_pool.min", "0" )
+ properties.getProperty( Environment.WORKER_THREADPOOL_SIZE, "1" ).trim()
);
- int max = Integer.parseInt(
- properties.getProperty( Environment.WORKER_PREFIX + "thread_pool.max", "0" ).trim()
+ //no queue limit
+ int queueSize = Integer.parseInt(
+ properties.getProperty( Environment.WORKER_WORKQUEUE_SIZE, Integer.toString( Integer.MAX_VALUE ) ).trim()
);
- if ( max == 0 ) max = Integer.MAX_VALUE;
if ( !sync ) {
- executorService =
- new ThreadPoolExecutor( min, max, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>() );
+ /**
+ * Choose min = max with a sizable queue to be able to
+ * actually queue operations.
+ * The locking mechanism prevents much of the scalability
+ * anyway; the idea is really to have a buffer.
+ * If the queue limit is reached, the operation is executed by the main thread.
+ */
+ executorService = new ThreadPoolExecutor(
+ min, min, 60, TimeUnit.SECONDS,
+ new LinkedBlockingQueue<Runnable>(queueSize),
+ new ThreadPoolExecutor.CallerRunsPolicy()
+ );
}
String backend = properties.getProperty( Environment.WORKER_BACKEND );
if ( StringHelper.isEmpty( backend ) || "lucene".equalsIgnoreCase( backend ) ) {
Modified: branches/Branch_3_2/HibernateExt/search/src/java/org/hibernate/search/store/FSMasterDirectoryProvider.java
===================================================================
--- branches/Branch_3_2/HibernateExt/search/src/java/org/hibernate/search/store/FSMasterDirectoryProvider.java 2007-03-06 05:19:39 UTC (rev 11252)
+++ branches/Branch_3_2/HibernateExt/search/src/java/org/hibernate/search/store/FSMasterDirectoryProvider.java 2007-03-06 05:23:23 UTC (rev 11253)
@@ -21,12 +21,13 @@
import org.hibernate.HibernateException;
/**
- * Use a Lucene FSDirectory
+ * File based DirectoryProvider that takes care of index copy
* The base directory is represented by hibernate.search.<index>.indexBase
* The index is created in <base directory>/<index name>
- * The copy directory is built from <sourceBase>/<index name>
- * TODO explose source
+ * The source (aka copy) directory is built from <sourceBase>/<index name>
*
+ * A copy is triggered every refresh seconds
+ *
* @author Emmanuel Bernard
*/
//TODO rename copy?
@@ -46,10 +47,10 @@
log.debug( "Source directory: " + source );
File indexDir = DirectoryProviderHelper.determineIndexDir( directoryProviderName, properties );
log.debug( "Index directory: " + indexDir );
- String refreshPeriod = properties.getProperty( "refresh", "60" );
+ String refreshPeriod = properties.getProperty( "refresh", "3600" );
long period = Long.parseLong( refreshPeriod );
- period *= 100 * 60; //per minute
- log.debug("Refresh period " + period / 1000 + " mins");
+ log.debug("Refresh period " + period + " seconds");
+ period *= 1000; //per second
try {
boolean create = !indexDir.exists();
indexName = indexDir.getCanonicalPath();
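The hunk above fixes the unit conversion for the `refresh` property: the value is expressed in seconds and must be multiplied by 1000 before being handed to `java.util.Timer`, which expects milliseconds (the old code multiplied by `100 * 60`). As a trivial sketch (hypothetical helper name):

```java
// Hypothetical helper mirroring the corrected conversion: the "refresh"
// property is in seconds, java.util.Timer periods are in milliseconds.
public class RefreshPeriod {
    public static long toMillis(long refreshSeconds) {
        return refreshSeconds * 1000L;
    }
}
```

So the default of 3600 seconds becomes 3 600 000 ms, i.e. one copy per hour.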
Copied: branches/Branch_3_2/HibernateExt/search/src/java/org/hibernate/search/store/FSSlaveDirectoryProvider.java (from rev 11246, branches/Branch_3_2/HibernateExt/search/src/java/org/hibernate/search/store/FSSwitchableDirectoryProvider.java)
===================================================================
--- branches/Branch_3_2/HibernateExt/search/src/java/org/hibernate/search/store/FSSlaveDirectoryProvider.java (rev 0)
+++ branches/Branch_3_2/HibernateExt/search/src/java/org/hibernate/search/store/FSSlaveDirectoryProvider.java 2007-03-06 05:23:23 UTC (rev 11253)
@@ -0,0 +1,239 @@
+//$Id: $
+package org.hibernate.search.store;
+
+import java.util.Properties;
+import java.util.Timer;
+import java.util.TimerTask;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.io.File;
+import java.io.IOException;
+
+import org.apache.lucene.store.FSDirectory;
+import org.apache.lucene.index.IndexWriter;
+import org.apache.lucene.analysis.standard.StandardAnalyzer;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.hibernate.HibernateException;
+import org.hibernate.AssertionFailure;
+import org.hibernate.search.util.FileHelper;
+import org.hibernate.search.util.DirectoryProviderHelper;
+import org.hibernate.search.SearchFactory;
+
+/**
+ * File based directory provider that takes care of getting a version of the index
+ * from a given source.
+ * The base directory is represented by hibernate.search.<index>.indexBase
+ * The index is created in <base directory>/<index name>
+ * The source (aka copy) directory is built from <sourceBase>/<index name>
+ *
+ * A copy is triggered every refresh seconds
+ *
+ * @author Emmanuel Bernard
+ */
+public class FSSlaveDirectoryProvider implements DirectoryProvider<FSDirectory> {
+ private static Log log = LogFactory.getLog( FSSlaveDirectoryProvider.class );
+ private FSDirectory directory1;
+ private FSDirectory directory2;
+ private int current;
+ private String indexName;
+ private Timer timer;
+
+ public void initialize(String directoryProviderName, Properties properties, SearchFactory searchFactory) {
+ //source guessing
+ String source = DirectoryProviderHelper.getSourceDirectory( "sourceBase", "source", directoryProviderName, properties );
+ if (source == null)
+ throw new IllegalStateException("FSSlaveDirectoryProvider requires a viable source directory");
+ if ( ! new File(source, "current1").exists() && ! new File(source, "current2").exists() ) {
+ throw new IllegalStateException("No current marker in source directory");
+ }
+ log.debug( "Source directory: " + source );
+ File indexDir = DirectoryProviderHelper.determineIndexDir( directoryProviderName, properties );
+ log.debug( "Index directory: " + indexDir.getPath() );
+ String refreshPeriod = properties.getProperty( "refresh", "3600" );
+ long period = Long.parseLong( refreshPeriod );
+ log.debug("Refresh period " + period + " seconds");
+ period *= 1000; //per second
+ try {
+ boolean create = !indexDir.exists();
+ indexName = indexDir.getCanonicalPath();
+ if (create) {
+ indexDir.mkdir();
+ log.debug("Initializing index directory " + indexName);
+ }
+
+ File subDir = new File( indexName, "1" );
+ create = ! subDir.exists();
+ directory1 = FSDirectory.getDirectory( subDir.getCanonicalPath(), create );
+ if ( create ) {
+ IndexWriter iw = new IndexWriter( directory1, new StandardAnalyzer(), create );
+ iw.close();
+ }
+
+ subDir = new File( indexName, "2" );
+ create = ! subDir.exists();
+ directory2 = FSDirectory.getDirectory( subDir.getCanonicalPath(), create );
+ if ( create ) {
+ IndexWriter iw = new IndexWriter( directory2, new StandardAnalyzer(), create );
+ iw.close();
+ }
+ File currentMarker = new File(indexName, "current1");
+ File current2Marker = new File(indexName, "current2");
+ if ( currentMarker.exists() ) {
+ current = 1;
+ }
+ else if ( current2Marker.exists() ) {
+ current = 2;
+ }
+ else {
+ //no default
+ log.debug( "Setting directory 1 as current");
+ current = 1;
+ File sourceFile = new File(source);
+ File destinationFile = new File(indexName, Integer.valueOf(current).toString() );
+ int sourceCurrent;
+ if ( new File(sourceFile, "current1").exists() ) {
+ sourceCurrent = 1;
+ }
+ else if ( new File(sourceFile, "current2").exists() ) {
+ sourceCurrent = 2;
+ }
+ else {
+ throw new AssertionFailure("No current file marker found in source directory: " + source);
+ }
+ try {
+ FileHelper.synchronize( new File(sourceFile, String.valueOf(sourceCurrent) ), destinationFile, true);
+ }
+ catch (IOException e) {
+ throw new HibernateException("Umable to synchonize directory: " + indexName, e);
+ }
+ if (! currentMarker.createNewFile() ) {
+ throw new HibernateException("Unable to create the directory marker file: " + indexName);
+ }
+ }
+ log.debug( "Current directory: " + current);
+ }
+ catch (IOException e) {
+ throw new HibernateException( "Unable to initialize index: " + directoryProviderName, e );
+ }
+ timer = new Timer();
+ TimerTask task = new TriggerTask(source, indexName);
+ timer.scheduleAtFixedRate( task, period, period );
+ }
+
+ public FSDirectory getDirectory() {
+ if (current == 1) {
+ return directory1;
+ }
+ else if (current == 2) {
+ return directory2;
+ }
+ else {
+ throw new AssertionFailure("Illegal current directory: " + current);
+ }
+ }
+
+ @Override
+ public boolean equals(Object obj) {
+ // this code is actually broken since the value changes after the initialize call,
+ // but from a practical point of view this is fine since we only call this method
+ // after initialize has run
+ if ( obj == this ) return true;
+ if ( obj == null || !( obj instanceof FSSlaveDirectoryProvider ) ) return false;
+ return indexName.equals( ( (FSSlaveDirectoryProvider) obj ).indexName );
+ }
+
+ @Override
+ public int hashCode() {
+ // this code is actually broken since the value changes after the initialize call,
+ // but from a practical point of view this is fine since we only call this method
+ // after initialize has run
+ int hash = 11;
+ return 37 * hash + indexName.hashCode();
+ }
+
+ class TriggerTask extends TimerTask {
+
+ private ExecutorService executor;
+ private CopyDirectory copyTask;
+
+ public TriggerTask(String source, String destination) {
+ executor = Executors.newSingleThreadExecutor();
+ copyTask = new CopyDirectory( source, destination );
+ }
+
+ public void run() {
+ if (!copyTask.inProgress) {
+ executor.execute( copyTask );
+ }
+ else {
+ log.trace( "Skipping directory synchronization, previous work still in progress: " + indexName);
+ }
+ }
+ }
+
+ class CopyDirectory implements Runnable {
+ private String source;
+ private String destination;
+ private volatile boolean inProgress;
+
+ public CopyDirectory(String source, String destination) {
+ this.source = source;
+ this.destination = destination;
+ }
+
+ public void run() {
+ long start = System.currentTimeMillis();
+ try {
+ inProgress = true;
+ int oldIndex = current;
+ int index = current == 1 ? 2 : 1;
+ File sourceFile;
+ if ( new File( source, "current1" ).exists() ) {
+ sourceFile = new File(source, "1");
+ }
+ else if ( new File( source, "current2" ).exists() ) {
+ sourceFile = new File(source, "2");
+ }
+ else {
+ log.error("Unable to determine the current index in the source directory");
+ inProgress = false;
+ return;
+ }
+
+ File destinationFile = new File(destination, Integer.valueOf(index).toString() );
+ //TODO make smart a parameter
+ try {
+ log.trace("Copying " + sourceFile + " into " + destinationFile);
+ FileHelper.synchronize( sourceFile, destinationFile, true);
+ current = index;
+ }
+ catch (IOException e) {
+ //don't change current
+ log.error( "Unable to synchronize " + indexName, e);
+ inProgress = false;
+ return;
+ }
+ if ( ! new File(indexName, "current" + oldIndex).delete() ) {
+ log.warn( "Unable to remove previous marker file in " + indexName );
+ }
+ try {
+ new File(indexName, "current" + index).createNewFile();
+ }
+ catch( IOException e ) {
+ log.warn( "Unable to create current marker file in " + indexName, e );
+ }
+ }
+ finally {
+ inProgress = false;
+ }
+ log.trace( "Copy for " + indexName + " took " + (System.currentTimeMillis() - start) + " ms");
+ }
+ }
+
+ public void finalize() throws Throwable {
+ super.finalize();
+ timer.cancel();
+ //TODO find a better cycle from Hibernate core
+ }
+}
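The slave provider added above keeps two local copies of the index ("1" and "2") and records which one is active through empty marker files named "current1" and "current2", flipping the marker after each successful copy. A minimal standalone sketch of that marker convention follows; the class name `MarkerDemo` and its helper methods are illustrative, not part of this patch:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class MarkerDemo {

    // Return 1 or 2 depending on which "currentN" marker file exists,
    // mirroring the convention used by FSSlaveDirectoryProvider; -1 if none.
    static int readCurrent(File indexDir) {
        if (new File(indexDir, "current1").exists()) return 1;
        if (new File(indexDir, "current2").exists()) return 2;
        return -1;
    }

    // Flip the marker from the old copy to the new one: delete the old
    // marker, then create the new one (the same order CopyDirectory.run() uses).
    static void switchCurrent(File indexDir, int oldIndex, int newIndex) throws IOException {
        new File(indexDir, "current" + oldIndex).delete();
        new File(indexDir, "current" + newIndex).createNewFile();
    }

    public static void main(String[] args) throws IOException {
        File dir = Files.createTempDirectory("marker-demo").toFile();
        new File(dir, "current1").createNewFile();
        System.out.println(readCurrent(dir)); // 1
        switchCurrent(dir, 1, 2);
        System.out.println(readCurrent(dir)); // 2
    }
}
```

Because the marker flip happens only after `FileHelper.synchronize` succeeds, a reader (the `getDirectory()` call) always sees a fully copied directory.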
Deleted: branches/Branch_3_2/HibernateExt/search/src/java/org/hibernate/search/store/FSSwitchableDirectoryProvider.java
===================================================================
--- branches/Branch_3_2/HibernateExt/search/src/java/org/hibernate/search/store/FSSwitchableDirectoryProvider.java 2007-03-06 05:19:39 UTC (rev 11252)
+++ branches/Branch_3_2/HibernateExt/search/src/java/org/hibernate/search/store/FSSwitchableDirectoryProvider.java 2007-03-06 05:23:23 UTC (rev 11253)
@@ -1,235 +0,0 @@
-//$Id: $
-package org.hibernate.search.store;
-
-import java.util.Properties;
-import java.util.Timer;
-import java.util.TimerTask;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Executors;
-import java.io.File;
-import java.io.IOException;
-
-import org.apache.lucene.store.FSDirectory;
-import org.apache.lucene.index.IndexWriter;
-import org.apache.lucene.analysis.standard.StandardAnalyzer;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.hibernate.HibernateException;
-import org.hibernate.AssertionFailure;
-import org.hibernate.search.util.FileHelper;
-import org.hibernate.search.util.DirectoryProviderHelper;
-import org.hibernate.search.SearchFactory;
-
-/**
- * Use a Lucene FSDirectory
- * The base directory is represented by hibernate.search.<index>.indexBase
- * The index is created in <base directory>/<index name>
- *
- * @author Emmanuel Bernard
- */
-public class FSSwitchableDirectoryProvider implements DirectoryProvider<FSDirectory> {
- private static Log log = LogFactory.getLog( FSSwitchableDirectoryProvider.class );
- private FSDirectory directory1;
- private FSDirectory directory2;
- private int current;
- private String indexName;
- private Timer timer;
-
- public void initialize(String directoryProviderName, Properties properties, SearchFactory searchFactory) {
- //source guessing
- String source = DirectoryProviderHelper.getSourceDirectory( "sourceBase", "source", directoryProviderName, properties );
- if (source == null)
- throw new IllegalStateException("FSSwitchableDirectoryProvider requires a viable source directory");
- if ( ! new File(source, "current1").exists() && ! new File(source, "current2").exists() ) {
- throw new IllegalStateException("No current marker in source directory");
- }
- log.debug( "Source directory: " + source );
- File indexDir = DirectoryProviderHelper.determineIndexDir( directoryProviderName, properties );
- log.debug( "Index directory: " + indexDir.getPath() );
- String refreshPeriod = properties.getProperty( "refresh", "60" );
- long period = Long.parseLong( refreshPeriod );
- period *= 100 * 60; //per minute
- log.debug("Refresh period " + period / 60000 + " mins");
- try {
- boolean create = !indexDir.exists();
- indexName = indexDir.getCanonicalPath();
- if (create) {
- indexDir.mkdir();
- log.debug("Initializing index directory " + indexName);
- }
-
- File subDir = new File( indexName, "1" );
- create = ! subDir.exists();
- directory1 = FSDirectory.getDirectory( subDir.getCanonicalPath(), create );
- if ( create ) {
- IndexWriter iw = new IndexWriter( directory1, new StandardAnalyzer(), create );
- iw.close();
- }
-
- subDir = new File( indexName, "2" );
- create = ! subDir.exists();
- directory2 = FSDirectory.getDirectory( subDir.getCanonicalPath(), create );
- if ( create ) {
- IndexWriter iw = new IndexWriter( directory2, new StandardAnalyzer(), create );
- iw.close();
- }
- File currentMarker = new File(indexName, "current1");
- File current2Marker = new File(indexName, "current2");
- if ( currentMarker.exists() ) {
- current = 1;
- }
- else if ( current2Marker.exists() ) {
- current = 2;
- }
- else {
- //no default
- log.debug( "Setting directory 1 as current");
- current = 1;
- File sourceFile = new File(source);
- File destinationFile = new File(indexName, Integer.valueOf(current).toString() );
- int sourceCurrent;
- if ( new File(sourceFile, "current1").exists() ) {
- sourceCurrent = 1;
- }
- else if ( new File(sourceFile, "current2").exists() ) {
- sourceCurrent = 2;
- }
- else {
- throw new AssertionFailure("No current file marker found in source directory: " + source);
- }
- try {
- FileHelper.synchronize( new File(sourceFile, String.valueOf(sourceCurrent) ), destinationFile, true);
- }
- catch (IOException e) {
- throw new HibernateException("Umable to synchonize directory: " + indexName, e);
- }
- if (! currentMarker.createNewFile() ) {
- throw new HibernateException("Unable to create the directory marker file: " + indexName);
- }
- }
- log.debug( "Current directory: " + current);
- }
- catch (IOException e) {
- throw new HibernateException( "Unable to initialize index: " + directoryProviderName, e );
- }
- timer = new Timer();
- TimerTask task = new TriggerTask(source, indexName);
- timer.scheduleAtFixedRate( task, period, period );
- }
-
- public FSDirectory getDirectory() {
- if (current == 1) {
- return directory1;
- }
- else if (current == 2) {
- return directory2;
- }
- else {
- throw new AssertionFailure("Illegal current directory: " + current);
- }
- }
-
- @Override
- public boolean equals(Object obj) {
- // this code is actually broken since the value change after initialize call
- // but from a practical POV this is fine since we only call this method
- // after initialize call
- if ( obj == this ) return true;
- if ( obj == null || !( obj instanceof FSSwitchableDirectoryProvider ) ) return false;
- return indexName.equals( ( (FSSwitchableDirectoryProvider) obj ).indexName );
- }
-
- @Override
- public int hashCode() {
- // this code is actually broken since the value change after initialize call
- // but from a practical POV this is fine since we only call this method
- // after initialize call
- int hash = 11;
- return 37 * hash + indexName.hashCode();
- }
-
- class TriggerTask extends TimerTask {
-
- private ExecutorService executor;
- private CopyDirectory copyTask;
-
- public TriggerTask(String source, String destination) {
- executor = Executors.newSingleThreadExecutor();
- copyTask = new CopyDirectory( source, destination );
- }
-
- public void run() {
- if (!copyTask.inProgress) {
- executor.execute( copyTask );
- }
- else {
- log.trace( "Skipping directory synchronization, previous work still in progress: " + indexName);
- }
- }
- }
-
- class CopyDirectory implements Runnable {
- private String source;
- private String destination;
- private volatile boolean inProgress;
-
- public CopyDirectory(String source, String destination) {
- this.source = source;
- this.destination = destination;
- }
-
- public void run() {
- long start = System.currentTimeMillis();
- try {
- inProgress = true;
- int oldIndex = current;
- int index = current == 1 ? 2 : 1;
- File sourceFile;
- if ( new File( source, "current1" ).exists() ) {
- sourceFile = new File(source, "1");
- }
- else if ( new File( source, "current2" ).exists() ) {
- sourceFile = new File(source, "2");
- }
- else {
- log.error("Unable to determine current in source directory");
- inProgress = false;
- return;
- }
-
- File destinationFile = new File(destination, Integer.valueOf(index).toString() );
- //TODO make smart a parameter
- try {
- log.trace("Copying " + sourceFile + " into " + destinationFile);
- FileHelper.synchronize( sourceFile, destinationFile, true);
- current = index;
- }
- catch (IOException e) {
- //don't change current
- log.error( "Unable to synchronize " + indexName, e);
- inProgress = false;
- return;
- }
- if ( ! new File(indexName, "current" + oldIndex).delete() ) {
- log.warn( "Unable to remove previous marker file in " + indexName );
- }
- try {
- new File(indexName, "current" + index).createNewFile();
- }
- catch( IOException e ) {
- log.warn( "Unable to create current marker file in " + indexName, e );
- }
- }
- finally {
- inProgress = false;
- }
- log.trace( "Copy for " + indexName + " took " + (System.currentTimeMillis() - start) + " ms");
- }
- }
-
- public void finalize() throws Throwable {
- super.finalize();
- timer.cancel();
- //TODO find a better cycle from Hibernate core
- }
-}
Copied: branches/Branch_3_2/HibernateExt/search/src/test/org/hibernate/search/test/directoryProvider/FSSlaveAndMasterDPTest.java (from rev 11246, branches/Branch_3_2/HibernateExt/search/src/test/org/hibernate/search/test/directoryProvider/FSSwitchableAndMasterDPTest.java)
===================================================================
--- branches/Branch_3_2/HibernateExt/search/src/test/org/hibernate/search/test/directoryProvider/FSSlaveAndMasterDPTest.java (rev 0)
+++ branches/Branch_3_2/HibernateExt/search/src/test/org/hibernate/search/test/directoryProvider/FSSlaveAndMasterDPTest.java 2007-03-06 05:23:23 UTC (rev 11253)
@@ -0,0 +1,137 @@
+//$Id: $
+package org.hibernate.search.test.directoryProvider;
+
+import java.io.File;
+import java.util.Date;
+import java.util.List;
+
+import org.apache.lucene.analysis.StopAnalyzer;
+import org.apache.lucene.queryParser.QueryParser;
+import org.hibernate.Session;
+import org.hibernate.cfg.Configuration;
+import org.hibernate.event.PostDeleteEventListener;
+import org.hibernate.event.PostInsertEventListener;
+import org.hibernate.event.PostUpdateEventListener;
+import org.hibernate.search.FullTextSession;
+import org.hibernate.search.Search;
+import org.hibernate.search.event.FullTextIndexEventListener;
+import org.hibernate.search.util.FileHelper;
+
+/**
+ * @author Emmanuel Bernard
+ */
+public class FSSlaveAndMasterDPTest extends MultipleSFTestCase {
+
+ public void testProperCopy() throws Exception {
+ Session s1 = getSessionFactories()[0].openSession( );
+ SnowStorm sn = new SnowStorm();
+ sn.setDate( new Date() );
+ sn.setLocation( "Dallas, TX, USA");
+
+ FullTextSession fts2 = Search.createFullTextSession( getSessionFactories()[1].openSession( ) );
+ QueryParser parser = new QueryParser("id", new StopAnalyzer() );
+ List result = fts2.createFullTextQuery( parser.parse( "location:texas" ) ).list();
+ assertEquals( "No copy yet, fresh index expected", 0, result.size() );
+
+ s1.persist( sn );
+ s1.flush(); //we don't commit, so we need to flush manually
+
+ fts2.close();
+ s1.close();
+
+ Thread.sleep( 2 * 60 * 100 + 10); //wait a bit longer than 2 refresh periods (one master / one slave)
+
+ //temp test original
+ fts2 = Search.createFullTextSession( getSessionFactories()[0].openSession( ) );
+ result = fts2.createFullTextQuery( parser.parse( "location:dallas" ) ).list();
+ assertEquals( "Original should get one", 1, result.size() );
+
+ fts2 = Search.createFullTextSession( getSessionFactories()[1].openSession( ) );
+ result = fts2.createFullTextQuery( parser.parse( "location:dallas" ) ).list();
+ assertEquals("First copy did not work out", 1, result.size() );
+
+ s1 = getSessionFactories()[0].openSession( );
+ sn = new SnowStorm();
+ sn.setDate( new Date() );
+ sn.setLocation( "Chennai, India");
+
+ s1.persist( sn );
+ s1.flush(); //we don't commit, so we need to flush manually
+
+ fts2.close();
+ s1.close();
+
+ Thread.sleep( 2 * 60 * 100 + 10); //wait a bit longer than 2 refresh periods (one master / one slave)
+
+ fts2 = Search.createFullTextSession( getSessionFactories()[1].openSession( ) );
+ result = fts2.createFullTextQuery( parser.parse( "location:chennai" ) ).list();
+ assertEquals("Second copy did not work out", 1, result.size() );
+
+ s1 = getSessionFactories()[0].openSession( );
+ sn = new SnowStorm();
+ sn.setDate( new Date() );
+ sn.setLocation( "Melbourne, Australia");
+
+ s1.persist( sn );
+ s1.flush(); //we don't commit, so we need to flush manually
+
+ fts2.close();
+ s1.close();
+
+ Thread.sleep( 2 * 60 * 100 + 10); //wait a bit longer than 2 refresh periods (one master / one slave)
+
+ fts2 = Search.createFullTextSession( getSessionFactories()[1].openSession( ) );
+ result = fts2.createFullTextQuery( parser.parse( "location:melbourne" ) ).list();
+ assertEquals("Third copy did not work out", 1, result.size() );
+
+ fts2.close();
+ }
+
+
+ protected void setUp() throws Exception {
+ File base = new File(".");
+ File root = new File(base, "lucenedirs");
+ root.mkdir();
+
+ File master = new File(root, "master/main");
+ master.mkdirs();
+ master = new File(root, "master/copy");
+ master.mkdirs();
+
+ File slave = new File(root, "slave");
+ slave.mkdir();
+
+ super.setUp();
+ }
+
+ protected void tearDown() throws Exception {
+ super.tearDown();
+ File base = new File(".");
+ File root = new File(base, "lucenedirs");
+ FileHelper.delete( root );
+ }
+
+ protected int getSFNbrs() {
+ return 2;
+ }
+
+ protected Class[] getMappings() {
+ return new Class[] {
+ SnowStorm.class
+ };
+ }
+
+ protected void configure(Configuration[] cfg) {
+ //master
+ cfg[0].setProperty( "hibernate.search.default.sourceBase", "./lucenedirs/master/copy");
+ cfg[0].setProperty( "hibernate.search.default.indexBase", "./lucenedirs/master/main");
+ cfg[0].setProperty( "hibernate.search.default.refresh", "1"); //every minute
+ cfg[0].setProperty( "hibernate.search.default.directory_provider", "org.hibernate.search.store.FSMasterDirectoryProvider");
+
+ //slave(s)
+ cfg[1].setProperty( "hibernate.search.default.sourceBase", "./lucenedirs/master/copy");
+ cfg[1].setProperty( "hibernate.search.default.indexBase", "./lucenedirs/slave");
+ cfg[1].setProperty( "hibernate.search.default.refresh", "1"); //every minute
+ cfg[1].setProperty( "hibernate.search.default.directory_provider", "org.hibernate.search.store.FSSlaveDirectoryProvider");
+ }
+}
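The `configure()` method above wires one master and one slave session factory to the same published copy directory. Outside a test, the equivalent master/slave split in a hibernate.properties file would look roughly like this (paths are illustrative; per the patch's comments, `refresh` is a period in minutes):

```
# Master node: owns the index and publishes a copy on each refresh
hibernate.search.default.directory_provider org.hibernate.search.store.FSMasterDirectoryProvider
hibernate.search.default.indexBase ./lucenedirs/master/main
hibernate.search.default.sourceBase ./lucenedirs/master/copy
hibernate.search.default.refresh 1

# Slave node: pulls the published copy on the same schedule
hibernate.search.default.directory_provider org.hibernate.search.store.FSSlaveDirectoryProvider
hibernate.search.default.indexBase ./lucenedirs/slave
hibernate.search.default.sourceBase ./lucenedirs/master/copy
hibernate.search.default.refresh 1
```

The key invariant is that the master's `sourceBase` and every slave's `sourceBase` point at the same shared directory, while each node keeps its own private `indexBase`.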
Deleted: branches/Branch_3_2/HibernateExt/search/src/test/org/hibernate/search/test/directoryProvider/FSSwitchableAndMasterDPTest.java
===================================================================
--- branches/Branch_3_2/HibernateExt/search/src/test/org/hibernate/search/test/directoryProvider/FSSwitchableAndMasterDPTest.java 2007-03-06 05:19:39 UTC (rev 11252)
+++ branches/Branch_3_2/HibernateExt/search/src/test/org/hibernate/search/test/directoryProvider/FSSwitchableAndMasterDPTest.java 2007-03-06 05:23:23 UTC (rev 11253)
@@ -1,143 +0,0 @@
-//$Id: $
-package org.hibernate.search.test.directoryProvider;
-
-import java.io.File;
-import java.util.Date;
-import java.util.List;
-
-import org.apache.lucene.analysis.StopAnalyzer;
-import org.apache.lucene.queryParser.QueryParser;
-import org.hibernate.Session;
-import org.hibernate.cfg.Configuration;
-import org.hibernate.event.PostDeleteEventListener;
-import org.hibernate.event.PostInsertEventListener;
-import org.hibernate.event.PostUpdateEventListener;
-import org.hibernate.search.FullTextSession;
-import org.hibernate.search.Search;
-import org.hibernate.search.util.FileHelper;
-import org.hibernate.search.event.FullTextIndexEventListener;
-
-/**
- * @author Emmanuel Bernard
- */
-public class FSSwitchableAndMasterDPTest extends MultipleSFTestCase {
-
- public void testProperCopy() throws Exception {
- Session s1 = getSessionFactories()[0].openSession( );
- SnowStorm sn = new SnowStorm();
- sn.setDate( new Date() );
- sn.setLocation( "Dallas, TX, USA");
-
- FullTextSession fts2 = Search.createFullTextSession( getSessionFactories()[1].openSession( ) );
- QueryParser parser = new QueryParser("id", new StopAnalyzer() );
- List result = fts2.createFullTextQuery( parser.parse( "location:texas" ) ).list();
- assertEquals( "No copy yet, fresh index expected", 0, result.size() );
-
- s1.persist( sn );
- s1.flush(); //we don' commit so we need to flush manually
-
- fts2.close();
- s1.close();
-
- Thread.sleep( 2 * 60 * 100 + 10); //wait a bit more than 2 refresh (one master / one slave)
-
- //temp test original
- fts2 = Search.createFullTextSession( getSessionFactories()[0].openSession( ) );
- result = fts2.createFullTextQuery( parser.parse( "location:dallas" ) ).list();
- assertEquals( "Original should get one", 1, result.size() );
-
- fts2 = Search.createFullTextSession( getSessionFactories()[1].openSession( ) );
- result = fts2.createFullTextQuery( parser.parse( "location:dallas" ) ).list();
- assertEquals("First copy did not work out", 1, result.size() );
-
- s1 = getSessionFactories()[0].openSession( );
- sn = new SnowStorm();
- sn.setDate( new Date() );
- sn.setLocation( "Chennai, India");
-
- s1.persist( sn );
- s1.flush(); //we don' commit so we need to flush manually
-
- fts2.close();
- s1.close();
-
- Thread.sleep( 2 * 60 * 100 + 10); //wait a bit more than 2 refresh (one master / one slave)
-
- fts2 = Search.createFullTextSession( getSessionFactories()[1].openSession( ) );
- result = fts2.createFullTextQuery( parser.parse( "location:chennai" ) ).list();
- assertEquals("Second copy did not work out", 1, result.size() );
-
- s1 = getSessionFactories()[0].openSession( );
- sn = new SnowStorm();
- sn.setDate( new Date() );
- sn.setLocation( "Melbourne, Australia");
-
- s1.persist( sn );
- s1.flush(); //we don' commit so we need to flush manually
-
- fts2.close();
- s1.close();
-
- Thread.sleep( 2 * 60 * 100 + 10); //wait a bit more than 2 refresh (one master / one slave)
-
- fts2 = Search.createFullTextSession( getSessionFactories()[1].openSession( ) );
- result = fts2.createFullTextQuery( parser.parse( "location:melbourne" ) ).list();
- assertEquals("Third copy did not work out", 1, result.size() );
-
- fts2.close();
- }
-
-
- protected void setUp() throws Exception {
- File base = new File(".");
- File root = new File(base, "lucenedirs");
- root.mkdir();
-
- File master = new File(root, "master/main");
- master.mkdirs();
- master = new File(root, "master/copy");
- master.mkdirs();
-
- File slave = new File(root, "slave");
- slave.mkdir();
-
- super.setUp();
- }
-
- protected void tearDown() throws Exception {
- super.tearDown();
- File base = new File(".");
- File root = new File(base, "lucenedirs");
- FileHelper.delete( root );
- }
-
- protected int getSFNbrs() {
- return 2;
- }
-
- protected Class[] getMappings() {
- return new Class[] {
- SnowStorm.class
- };
- }
-
- protected void configure(Configuration[] cfg) {
- //master
- cfg[0].setProperty( "hibernate.search.default.sourceBase", "./lucenedirs/master/copy");
- cfg[0].setProperty( "hibernate.search.default.indexBase", "./lucenedirs/master/main");
- cfg[0].setProperty( "hibernate.search.default.refresh", "1"); //every minute
- cfg[0].setProperty( "hibernate.search.default.directory_provider", "org.hibernate.search.store.FSMasterDirectoryProvider");
- cfg[0].getEventListeners().setPostDeleteEventListeners( new PostDeleteEventListener[]{ new FullTextIndexEventListener() } );
- cfg[0].getEventListeners().setPostUpdateEventListeners( new PostUpdateEventListener[]{ new FullTextIndexEventListener() } );
- cfg[0].getEventListeners().setPostInsertEventListeners( new PostInsertEventListener[]{ new FullTextIndexEventListener() } );
-
- //slave(s)
- cfg[1].setProperty( "hibernate.search.default.sourceBase", "./lucenedirs/master/copy");
- cfg[1].setProperty( "hibernate.search.default.indexBase", "./lucenedirs/slave");
- cfg[1].setProperty( "hibernate.search.default.refresh", "1"); //every minute
- cfg[1].setProperty( "hibernate.search.default.directory_provider", "org.hibernate.search.store.FSSwitchableDirectoryProvider");
- cfg[1].getEventListeners().setPostDeleteEventListeners( new PostDeleteEventListener[]{ new FullTextIndexEventListener() } );
- cfg[1].getEventListeners().setPostUpdateEventListeners( new PostUpdateEventListener[]{ new FullTextIndexEventListener() } );
- cfg[1].getEventListeners().setPostInsertEventListeners( new PostInsertEventListener[]{ new FullTextIndexEventListener() } );
- }
-}
Modified: branches/Branch_3_2/HibernateExt/search/src/test/org/hibernate/search/test/worker/AsyncWorkerTest.java
===================================================================
--- branches/Branch_3_2/HibernateExt/search/src/test/org/hibernate/search/test/worker/AsyncWorkerTest.java 2007-03-06 05:19:39 UTC (rev 11252)
+++ branches/Branch_3_2/HibernateExt/search/src/test/org/hibernate/search/test/worker/AsyncWorkerTest.java 2007-03-06 05:23:23 UTC (rev 11253)
@@ -3,11 +3,7 @@
import org.hibernate.search.store.RAMDirectoryProvider;
import org.hibernate.search.Environment;
-import org.hibernate.search.event.FullTextIndexEventListener;
import org.hibernate.cfg.Configuration;
-import org.hibernate.event.PostDeleteEventListener;
-import org.hibernate.event.PostUpdateEventListener;
-import org.hibernate.event.PostInsertEventListener;
import org.apache.lucene.analysis.StopAnalyzer;
/**
@@ -19,9 +15,9 @@
cfg.setProperty( "hibernate.search.default.directory_provider", RAMDirectoryProvider.class.getName() );
cfg.setProperty( Environment.ANALYZER_CLASS, StopAnalyzer.class.getName() );
cfg.setProperty( Environment.WORKER_SCOPE, "transaction" );
- cfg.setProperty( Environment.WORKER_PROCESS, "async" );
- cfg.setProperty( Environment.WORKER_PREFIX + "thread_pool.min", "1" );
- cfg.setProperty( Environment.WORKER_PREFIX + "thread_pool.max", "10" );
+ cfg.setProperty( Environment.WORKER_EXECUTION, "async" );
+ cfg.setProperty( Environment.WORKER_PREFIX + "thread_pool.size", "1" );
+ cfg.setProperty( Environment.WORKER_PREFIX + "buffer_queue.max", "10" );
}
}
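The hunk above tracks the property renames from this commit: `WORKER_PROCESS` becomes `WORKER_EXECUTION`, the `thread_pool.min`/`thread_pool.max` pair collapses into a single `thread_pool.size`, and the queue bound moves to `buffer_queue.max`. Assuming the `Environment` constants map into the `hibernate.search.worker.*` namespace, an async worker configuration would read roughly (values illustrative):

```
hibernate.search.worker.scope transaction
hibernate.search.worker.execution async
hibernate.search.worker.thread_pool.size 1
hibernate.search.worker.buffer_queue.max 10
```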