thanks for your insights :-)<br>I'll try to explain myself better inline:<br><br><div class="gmail_quote">2008/6/7 Emmanuel Bernard <<a href="mailto:emmanuel@hibernate.org">emmanuel@hibernate.org</a>>:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
This sounds very promising.<br>
I don't quite understand why you talk about loading lazy objects though?<br>
One of the recommendations is to load the object and all its related objects before indexing. No lazy triggering should happen.<br>
eg "from User u left join fetch u.address a left join fetch a.country"<br>
if Address and Country are embedded in the User index.</blockquote><div><br>
I am talking about lazy object loading because it is not always possible to<br>
load the complete object graph eagerly, due to the cartesian product problem;<br>
the "hints" I mention in point A are mainly (but not limited to) the left join<br>
fetch instructions needed to load the root entity.<br>
However, if I put all the needed collections in the fetch join I kill the DB<br>
performance and am flooded by data; I have run many experiments to<br>
find the "golden balance" between eager and lazy, and know for sure it is much<br>
faster to keep most associations out of the initial "fetch join".<br>My current rule of thumb is to load no more than two additional collections;<br>the rest stays lazy.<br>
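For illustration only (the collection names here are invented, not our real model), the "two collections eager, rest lazy" balance would mean an initial query like:

```
from User u
  left join fetch u.loans
  left join fetch u.preferences
```

while the remaining collections (categories, series, and so on) are left to lazy loading, to be initialized later by the dedicated thread pool.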
Also, we should keep in mind that the eager/lazy/subselect strategies<br>
chosen for the entities will probably be selected to fine-tune<br>
"normal" business operations, not indexing performance;<br>
I had to fight a bit with other devs who needed some settings configured<br>
for other use cases in a different way than what I needed to bring indexing<br>
timings down.<br> <br></div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<br>
I think limitation E is fine; we can position this API as offline indexing of the data. That's fair enough for a start. I don't like your block approach unless you couple it with JMS. I am uncomfortable keeping work pending for a few hours in a VM without a persistence mechanism.</blockquote>
<div><br>
I am glad to hear it's fine to position it as an "offline" API, as a start.<br>Do you think we should enforce or check that somehow?<br>For later improvements, the batching IndexWriter could be "borrowed" by<br>
the committing transactions to synchronously write their data away;<br>we just need to avoid the need for an IndexReader for deletions.<br>I've been searching for a solution in my other post... if that could be fixed<br>
and a single IndexWriter per index could be available you could<br>
have batch indexing and normal operation available together.<br><br></div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><div class="Ih2E3d">
"this pool is usually the slowest as it has to initialize many lazy fields,<br>
so there are more threads here."<br></div>
I don't quite understand why this happens.</blockquote><div><br>I suppose I should show you an ER diagram of our model; in our case, but I believe<br>in most cases, people will search for an object basing their "fulltext" idea on many different<br>
fields which are external to the main entity: intersecting e.g. author nickname with historic period,<br>considering book series, categories and collections, or searching by a special code in one of<br>30 other legacy library encoding schemes.<br>
Our use case actually shows that very few fields are read from the root entity; most<br>are derived from linked many-to-many entities, sometimes going to a second or third level<br>of linked information. I don't think this is just my case; IMHO it is very likely that most<br>
real-world applications will have a similar problem: we have to encode in the root<br>object many helper fields to make most external links searchable. I believe this is part of<br>"dealing with the mismatch between the index structure and the domain model",<br>
which is Search's slogan (pasted from the homepage).<br></div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<br>
So what is the impact of your code on the current code base? Do you need to change a lot of things? How fast do you think you could have a beta in the codebase?</blockquote><div> </div><div>I still have not completely understood the locks around the indexes; I believe the impact on the current code is not so huge, but I need to know<br>
how I should "freeze" other activity on the indexes: indexing could just start, but other threads would be waiting a long time; should other<br>methods check and throw an exception while mass indexing is busy?<br>
Is it ok for one method to spawn 40 threads?<br>What should the "management / progress monitor API" look like?<br>I didn't look at similarity and sharding; is it ok for a first beta to skip these features? I don't think they should be difficult to figure out, but I would like<br>
to show working code prototypes asap to get early feedback.<br>I think that if the answers to the above questions don't complicate my current code, the effort to integrate it is less than a week of work; unfortunately this translates<br>
into 4-6 weeks of calendar time as I have other jobs and deadlines, maybe less with some luck.<br>How should this be managed? A branch? One commit when done?<br>
<br>
<br>
Let's spin off a different thread for the "in transaction" pool; I am not entirely convinced it will actually speed things up.<br><div><div class="Wj3C7c"></div></div></blockquote><div>Yes, I agree there probably is not a huge advantage, if any; the main reason would be to have "normal operation" available<br>
even during mass reindexing. Performance improvements would be limited<br>to special cases, such as a single thread committing several entities: the "several" would benefit from batch behavior.<br>The other thread I had already started is linked to this: IMHO we should improve the deletion of entities first.<br>
</div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><div><div class="Wj3C7c">
<br>
On Jun 6, 2008, at 18:51, Sanne Grinovero wrote:<br>
<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
Hello list,<br>
<br>
I've finally finished some performance tests on things I wanted to double-check<br>
before writing stupid ideas to this list, so I feel I can at last propose<br>
some code for (re)building the index for Hibernate Search.<br>
<br>
The present API of Hibernate Search provides a nice and safe<br>
transactional "index(entity)",<br>
but even when trying several optimizations it doesn't reach the speed<br>
of an unsafe (out of transaction) indexer we use in our current<br>
production environment.<br>
Also, reading the forum it appears that many people have difficulties using<br>
the current API;<br>
even with a good example in the reference documentation,<br>
difficulties arise with Seam's transactions and with huge data sets.<br>
(I'm NOT saying something is broken, just that you need a lot of expertise<br>
to get it going)<br>
<br>
SCENARIO<br>
=======<br>
<br>
* Developers change an entity and want to test the effect on the index<br>
structure;<br>
they want to do search experiments with the new fields.<br>
* A production system is up(down)graded to a new(old) release,<br>
involving index changes.<br>
(the system is "down for maintenance", but speed is crucial)<br>
* Existing index is corrupted/lost. (Again, speed to recover is critical)<br>
* A Database backup is restored, or data is changed by other jobs.<br>
* Some crazy developer like me prefers to disable H.Search's event<br>
listeners for some reason.<br>
(I wouldn't generally recommend it, but have met other people who<br>
have a reasonable<br>
argument to do this. Also, in our case it is a feature, as newly entered<br>
books will be<br>
available for loans only from the next day :D)<br>
* A Lucene update breaks the index format (not so irrational, as they<br>
just did on trunk).<br>
<br>
PERFORMANCE<br>
=======<br>
<br>
In simple use cases, such as fewer than 1000 entities and not too many<br>
relationships,<br>
the existing API outperforms my prototype, as I have some costly setup.<br>
In more massive tests the setup costs are easily recovered by a much<br>
faster indexing speed.<br>
I have a lot of data I could send; I'll just show some and keep the details simple:<br>
<br>
entity "Operator": standard complexity, involves loading 4+ objects; 7<br>
fields affect the index<br>
entity "User": moderate complexity, involves loading about 20 objects; 12<br>
affect the index data<br>
entity "Modern": high complexity, loading of 44 entities, many of them<br>
"manyToMany"; 25 affect the index data<br>
<br>
On my laptop (dual core, local MySQL db):<br>
type | Operator | User | Modern<br>
number | 560 | 100.000 | 100.000<br>
time-current | 0,23 secs | 45'' | 270.3''<br>
time-new | 0,43 secs | 30'' | 190''<br>
<br>
On a staging server (4 core Xeon with lots of ram and dedicated DB server):<br>
type | Operator | User | Modern<br>
number | 560 | 200.000 | 4.000.000<br>
time-current | 0,09 secs | 130'' | 5h20'<br>
time-new | 0,25 secs | 22'' | 19'<br>
<br>
[benchmark disclaimer:<br>
These timings are only meant to be relative to each other for my particular<br>
code version; I'm not an expert in Java benchmarking at all.<br>
Also, unfortunately I can't access the same hardware for each test.<br>
I used all the tweaks I am aware of in Hibernate Search, actually<br>
enabling newly needed params to make the tests as fair as possible.]<br>
<br>
Examining the numbers:<br>
with the currently recommended H.Search strategy I can index 560 simple entities<br>
in 0,23 seconds; quite fast, and newbie users will be impressed.<br>
At the other extreme, we index 4 million complex items, but I need more<br>
than 5 hours to do that; this is more like a real use case, and it could<br>
scare several developers.<br>
Unfortunately I don't have a complete copy of the DB on my laptop,<br>
but looking at the numbers it seems my laptop could finish<br>
in 3 hours, nearly double the speed of our more-than-twice-as-fast server.<br>
(yes, I've had several memory leaks :-) but they're solved now)<br>
The real bottleneck is the round-trip to the database: without multithreading,<br>
each lazy-loaded collection annotated to be indexed<br>
massively slows down the whole process. If you look at both the DB and AS<br>
servers, they have very low resource usage, confirming this; while my laptop<br>
stays at 70% cpu (and kills my harddrive) because it has the data available<br>
locally, producing a constant feed of strings to my index.<br>
When using the new prototype (about 20 threads in 4 different pools)<br>
I get the 5 hours down to less than 20 minutes; also, I can start the<br>
indexing of all 7 indexable types in parallel and it will stay around 20 minutes.<br>
The "User" entity is not as complex as Modern (less lazy-loaded data)<br>
but confirms the same numbers.<br>
<br>
ISSUES<br>
=======<br>
About the current version I have ready:<br>
it is not a complete substitute for the existing one and is far from perfect;<br>
currently these limitations apply, but they could be easily solved<br>
(others I am not aware of are not listed :-):<br>
<br>
A) I need to "read" some hints for each entity; I tinkered with a new<br>
annotation;<br>
configuration properties would work too, but are likely to be quite<br>
verbose (HQL);<br>
basically I need some hints about the fetch strategies appropriate<br>
for batch indexing, which are often different from those for normal use cases.<br>
<br>
B) Hibernate Search's indexing of related entities was not available<br>
when I designed it.<br>
I think this change will probably not affect my code, but I still need to<br>
verify the functionality of IndexEmbedded.<br>
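For reference, the related-entities indexing mentioned here is the index-embedding mapping; a minimal sketch with invented entity names (Book/Author are not our real model, and this is only my reading of the annotations, not verified against my prototype):

```java
// Sketch only: invented entities showing how fields of a related entity
// end up embedded in the root entity's index.
@Indexed
public class Book {
    @Field
    String title;

    @IndexedEmbedded          // pulls Author fields into the Book index
    Author author;
}

public class Author {
    @Field
    String nickname;
}
```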
<br>
C) It is fine-tuned for our entities and DB; many variables are configurable, but<br>
some stuff should be made more flexible.<br>
<br>
D) Also index sharding didn't exist at the time, I'll need to change some stuff<br>
to send the entities to the correct index and acquire the appropriate locks.<br>
<br>
The next limitation is not easy to solve; I have some ideas, but none that I liked.<br>
<br>
E) It is not completely safe to use during concurrent data modification.<br>
It's not a problem in our<br>
current production setup, but it needs a big warning in case other people<br>
want to use it.<br>
The best solution I could think of is to lock the current work queue<br>
of H.Search,<br>
so as to block execution of the work objects in the queue and resume their<br>
execution<br>
after batch indexing is complete.<br>
If some entity disappears (removed from the DB but a reference is still in<br>
the queue) it<br>
can easily be skipped; if I index an "old version" of some other data, it will be<br>
fixed when the scheduled updates from the H.S. event listeners are resumed<br>
(and the same for new entities).<br>
It would be nice to share the same database transaction during the<br>
whole process,<br>
but as I use several threads and many separate sessions I think<br>
this is not possible<br>
(this is the best place to ask, I think ;-)<br>
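The "lock the work queue, resume afterwards" idea could be sketched like this (a toy standing alone on java.util.concurrent; `PausableWorkQueue` and its methods are invented names, not anything in H.Search):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Toy sketch: work objects keep accumulating while paused (during batch
// indexing) and are executed only once the queue is resumed.
class PausableWorkQueue {
    private final Queue<Runnable> queue = new ArrayDeque<>();
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition unpaused = lock.newCondition();
    private boolean paused;

    void submit(Runnable work) {
        lock.lock();
        try { queue.add(work); } finally { lock.unlock(); }
    }

    void pause() {
        lock.lock();
        try { paused = true; } finally { lock.unlock(); }
    }

    void resume() {
        lock.lock();
        try { paused = false; unpaused.signalAll(); } finally { lock.unlock(); }
    }

    // Drains and runs all queued work; blocks while the queue is paused.
    void drain() throws InterruptedException {
        while (true) {
            Runnable work;
            lock.lock();
            try {
                while (paused) unpaused.await();
                work = queue.poll();
            } finally { lock.unlock(); }
            if (work == null) return;
            work.run();
        }
    }
}
```

Stale references in the queue would be skipped at `work.run()` time, as described above.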
<br>
GOING PRACTICAL<br>
===============<br>
if (cheater) goto :top<br>
<br>
A nice evictAll(class) exists; I would like to add indexAll(class).<br>
It would be nice to provide non-blocking versions, maybe overloading:<br>
indexAll(Class clazz, boolean block)<br>
or providing a Future as the return object, so people could wait for one<br>
or more indexAll requests if they want to.<br>
There are many parameters to tweak the indexing process, so I'm<br>
not sure if we should put them in the properties, have a parameter-<br>
wrapper object indexAll(Class clazz, Properties props), or<br>
something like makeIndexer(Class clazz) returning a complex object<br>
with several setters for fine-tuning, plus start() and awaitTermination()<br>
methods.<br>
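The last option could look roughly like this (a sketch only; MassIndexer, threadsToLoadObjects and the other names are invented here, not an existing API):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical makeIndexer(Class)-style API: a builder object with
// setters for tuning, plus start() and awaitTermination() methods.
class MassIndexer {
    private final Class<?> type;
    private int threads = 4;      // tunable size of the loading pool
    private ExecutorService pool;

    MassIndexer(Class<?> type) { this.type = type; }

    MassIndexer threadsToLoadObjects(int n) { this.threads = n; return this; }

    // Non-blocking: returns immediately, indexing proceeds in the background.
    MassIndexer start() {
        pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> { /* load a range of entities and index them */ });
        }
        pool.shutdown();
        return this;
    }

    // Blocking variant for callers that prefer to wait, Future-style.
    boolean awaitTermination(long seconds) throws InterruptedException {
        return pool.awaitTermination(seconds, TimeUnit.SECONDS);
    }
}
```

Callers could then choose between fire-and-forget (`start()` alone) and blocking behavior (`start().awaitTermination(...)`) without needing an overloaded boolean parameter.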
<br>
the easy part<br>
--------------<br>
This part is easy to do, as I have it working well; it is a pattern<br>
involving several executors; the sizes of the thread pools and of the<br>
queues linking them give the right balance to achieve<br>
high throughput.<br>
First the entities are counted and divided into blocks; these ranges are fed to<br>
N scrollables opened in N threads; each thread iterates over its<br>
list of entities and feeds detached entities to the next pool using<br>
BlockingQueues.<br>
In the next pool the entities are re-attached using LockMode.NONE, read-only, etc.<br>
(and many others you may want to tell me about), and we get an appropriate<br>
DocumentBuilder from the SearchFactory to transform each one into a Lucene Document;<br>
this pool is usually the slowest, as it has to initialize many lazy fields,<br>
so there are more threads here.<br>
Produced documents go to a smaller pool (the best I found was 2-3 threads)<br>
where data is concurrently written to the IndexWriter.<br>
There's an additional thread for resource monitoring that produces some hints<br>
about queue sizing and idle threads, to allow some fine-tuning and to see instant<br>
speed reports in the logs when enabled.<br>
For shutdown I use the "poison pill" pattern, and I usually get rid of all<br>
threads and executors when I'm finished.<br>
It needs some adaptation to take into account the latest Search features<br>
such as similarity, but is mostly beta-ready.<br>
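The pipeline shape described above (producer → larger "document building" pool → writers, with poison-pill shutdown) can be sketched with plain java.util.concurrent. This is a toy, not my actual code: Strings stand in for entities and Lucene Documents, and all names are invented.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Toy three-stage pipeline: a producer scrolls "entities" into a queue,
// a larger middle pool "builds documents" (the slow, lazy-loading stage),
// and a writer drains the results, mimicking the IndexWriter stage.
class IndexingPipeline {
    static final String PILL = "__POISON__";   // sentinel used to stop consumers

    static List<String> run(List<String> entities, int builders) throws Exception {
        BlockingQueue<String> toBuild = new ArrayBlockingQueue<>(100);
        BlockingQueue<String> toWrite = new ArrayBlockingQueue<>(100);
        List<String> written = Collections.synchronizedList(new ArrayList<>());

        ExecutorService buildPool = Executors.newFixedThreadPool(builders);
        for (int i = 0; i < builders; i++) {
            buildPool.submit(() -> {
                try {
                    String e;
                    while (!(e = toBuild.take()).equals(PILL))
                        toWrite.put("doc:" + e);   // "DocumentBuilder" stage
                    toWrite.put(PILL);             // pass one pill downstream
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        Thread writer = new Thread(() -> {
            try {
                int pills = 0;
                while (pills < builders) {         // wait for every builder's pill
                    String d = toWrite.take();
                    if (d.equals(PILL)) pills++;
                    else written.add(d);           // "IndexWriter" stage
                }
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
        });
        writer.start();

        for (String e : entities) toBuild.put(e);  // the "scrolling" producer
        for (int i = 0; i < builders; i++) toBuild.put(PILL);

        buildPool.shutdown();
        writer.join();
        return written;
    }
}
```

The bounded queues provide back-pressure: a slow writer stage throttles the builders, and a slow database throttles everything, which is what makes the pool/queue sizes the main tuning knobs.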
<br>
the difficult part<br>
-------------------<br>
Integrating it with the current locking scheme is not really difficult,<br>
also because the goal is to minimize downtime, not eliminate it, so I think<br>
some downtime should be acceptable.<br>
It would be very nice, however, to integrate this pattern as the default<br>
writer for indexes, even "in transaction"; I think it could be possible,<br>
even in synchronous mode, to split the work of a single transaction across<br>
the executors and wait for all the work to be done at commit.<br>
You probably don't want to see the "lots of threads" meant for batch indexing,<br>
but the pools scale quite well to adapt themselves to the load,<br>
and it's easy (as in clean and maintainable code) to enforce resource limits.<br>
When integrating at this level, the system wouldn't need to stop regular<br>
Search activity.<br>
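The "split one transaction's work across the executors and wait at commit" idea could look roughly like this (a sketch; TransactionalIndexer and its methods are made-up names, not Search code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: work objects queued during one transaction are fanned out to a
// shared pool at commit time, and the committing thread blocks until all
// of them finish, preserving synchronous semantics.
class TransactionalIndexer {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final List<Callable<Void>> pendingWork = new ArrayList<>();

    void queue(Runnable work) {
        pendingWork.add(() -> { work.run(); return null; });
    }

    // Called at commit: submit everything, then wait for every piece.
    void commit() throws InterruptedException, ExecutionException {
        List<Future<Void>> results = pool.invokeAll(pendingWork);
        for (Future<Void> f : results) f.get();   // propagate any failure
        pendingWork.clear();
    }

    void close() { pool.shutdown(); }
}
```

A transaction committing a single entity gains nothing here; the benefit only appears when one commit carries many work objects, which matches the "special cases" caveat above.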
<br>
Any questions? If someone wants to reproduce my benchmarks I'll<br>
be glad to send my current code and DB.<br>
<br>
kind regards,<br>
Sanne<br>
</blockquote>
<br>
</div></div></blockquote></div><br>