[hibernate-dev] DocumentBuilder refactoring in Hibernate Search: how to deal (internally) with metadata
Sanne Grinovero
sanne at hibernate.org
Wed May 29 12:39:49 EDT 2013
We're starting a series of refactorings in Hibernate Search to improve
how we handle the entity mapping to the index; to summarize goals:
1# Expose the Metadata as API
We need to expose it because:
a - OGM needs to be able to read this metadata to produce appropriate queries
b - Advanced users have expressed the need for things like listing
all indexed entities, to integrate external tools, code generation,
etc..
c - All users (advanced and not) have interest in -at least- logging
the field structure to help creating Queries; today people need a
debugger or Luke.
Personally I think we end up needing this just as an SPI: that might
be good for cases {a,b}, and I have an alternative proposal for {c}
described below.
However we expose it, I think we agree this should be a read-only
structure built as a second phase after the model is consumed from
(annotations / programmatic API / jandex / auto-generated by OGM). It
would also be good to keep it "minimal" in terms of memory cost, so we
should either:
- drop references to the source structure, or
- not hold on to it at all, building the Metadata on demand (!)
(Assuming we can build it from a more obscure internal representation
I'll describe next).
Whatever the final implementation actually does to store this
metadata, for now the priority is to define the contract for the sake
of OGM, so I'm not too concerned about the two-phase buildup and how
references are handled internally - but let's already discuss the
options.
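As a concrete starting point for that discussion, the read-only contract might look something like this minimal sketch; all the names here are hypothetical, chosen just to illustrate the shape, not an existing API:

```java
import java.util.Collections;
import java.util.List;

// Hypothetical read-only metadata contract; every name is illustrative.
class MetadataSketch {

    // Describes one indexed field produced by the mapping.
    interface FieldDescriptor {
        String name();
    }

    // Immutable snapshot of one indexed entity, built in a second phase
    // after the annotation / programmatic / Jandex model is consumed.
    interface IndexedTypeDescriptor {
        Class<?> indexedType();
        List<FieldDescriptor> fields();
    }

    // Trivial immutable implementation: no reference back to the source model.
    static IndexedTypeDescriptor descriptor(Class<?> type, List<FieldDescriptor> fields) {
        final List<FieldDescriptor> safe = Collections.unmodifiableList(fields);
        return new IndexedTypeDescriptor() {
            public Class<?> indexedType() { return type; }
            public List<FieldDescriptor> fields() { return safe; }
        };
    }
}
```

Consumers like OGM would only ever see the read-only interfaces, which leaves us free to rebuild or discard the backing structure.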
2# Better fit Lucene 4 / High performance
There are some small performance-oriented optimizations that we could
already do with Lucene 3, but which were unlikely to be worth the
effort; for example reusing Field instances and pre-interning all
field names.
These considerations however are practically mandatory with Lucene 4, as:
- the cost of *not* doing as Lucene wants is higher (runtime field
creation is more expensive now)
- the performance benefit of following Lucene's expectations is
significantly higher (it takes advantage of several new features)
- code is much more complex if we don't do it
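The reuse pattern is roughly the following; `ReusableField` here is a hypothetical stand-in for a Lucene 4 Field (which can be built once and have its value swapped per document), so we can show the idea without dragging in the Lucene API:

```java
// Sketch of the reuse pattern: build the field object once per mapping,
// intern its name up front, and only swap the value for each document.
// ReusableField is a hypothetical stand-in for a Lucene 4 Field.
class ReusableField {
    final String name;     // interned once, at mapping-build time
    private String value;  // mutated per document

    ReusableField(String name) {
        this.name = name.intern();
    }

    void setStringValue(String value) {
        this.value = value;
    }

    String stringValue() {
        return value;
    }
}

class FieldReuseSketch {
    public static void main(String[] args) {
        // One instance created up front, reused for every document:
        ReusableField title = new ReusableField("title");
        for (String doc : new String[] { "first", "second" }) {
            title.setStringValue(doc);
            // ... the field would be added to the Document and indexed here
        }
        System.out.println(title.stringValue()); // prints "second"
    }
}
```

The point is that the per-document work shrinks to a value assignment, while name interning and field construction move to mapping-build time.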
3# MutableSearchFactory
Let's not forget we also have a MutableSearchFactory to maintain: new
entities could be added at any time, so if we drop the original
metadata we need to be able to build a new (read-only) one from the
current state.
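One way to reconcile that mutability with read-only metadata is a copy-on-write snapshot: rebuild a fresh immutable structure whenever a type is added, and hand out only the snapshot. A sketch, with hypothetical names:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Sketch: the factory keeps an immutable snapshot of its metadata and
// replaces the whole snapshot when a new entity type is registered.
class MutableFactorySketch {
    private volatile Map<Class<?>, String> metadataSnapshot = Collections.emptyMap();

    // Rebuild a fresh read-only snapshot from the current state plus the addition.
    synchronized void addIndexedType(Class<?> type, String indexName) {
        Map<Class<?>, String> next = new HashMap<>(metadataSnapshot);
        next.put(type, indexName);
        metadataSnapshot = Collections.unmodifiableMap(next);
    }

    Map<Class<?>, String> metadata() {
        return metadataSnapshot; // readers always see a consistent, read-only view
    }
}
```

Readers never block and never see a half-built mapping; the cost is a rebuild per added type, which is fine since additions are rare.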
4# Finally some cleanups in AbstractDocumentBuilder
This class served us well, but has grown too much over time.
Things we wanted but which were too hard to do so far:
- Separate annotation reading from Document building. Separate
validity checks too.
- It checks for JPA @Id using reflection as it might not be available
-> pluggable?
- LuceneOptionsImpl instances are built at runtime each time we need
one -> reuse them, coupling each to its field
DocumentBuilderIndexedEntity specific:
- A ConversionContext tracks progress on each field by pushing/popping
a navigation stack, to eventually throw an exception with the correct
description. If instead we used a recursive function, there would be
no need to track anything.
- We had issues with "forgetting" to initialize a collection before
trying to index it (HSEARCH-1245, HSEARCH-1240, ..)
- We need a reliable way to track which field names are created, and
from which bridge they are originating (including custom bridges:
HSEARCH-904)
- If we could know in advance which properties of the entities need
to be initialized for a complete Document to be created we could
generate more efficient queries at entity initialization time, or at
MassIndexing select time. I think users really would expect such a
clever integration with ORM (HSEARCH-1235)
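On the ConversionContext point, the recursive alternative could look like this sketch (names hypothetical): the current property path travels as a plain argument, so any exception carries the right description without a mutable stack to push and pop:

```java
// Sketch: the property path is just an argument of the recursion, so an
// error message can include it without any navigation-stack bookkeeping.
class RecursiveBuildSketch {

    static void buildField(String path, Object value) {
        if (value == null) {
            throw new IllegalStateException("Unable to convert value at " + path);
        }
        // ... would apply the field bridge and add to the Document here
    }

    static void buildEmbedded(String path, String property, Object value) {
        // Recurse with the extended path; nothing to clean up on the way out.
        buildField(path + "." + property, value);
    }

    public static void main(String[] args) {
        try {
            buildEmbedded("author", "name", null);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // message includes "author.name"
        }
    }
}
```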
== Solution ? ==
Now let's assume that we can build this as a recursive structure which
accepts a generic visitor.
One could "visit" the structure with a static collector to:
- discover which fields are written - and at the same time collect
information about specific options used on them
-> query validation
-> logging the mapping
-> connect to some tooling
- split the needed properties graph into optimised loading SQL or
auto-generated fetch profiles; ideally taking into account 2nd level
cache options from ORM (which means this visitor resides in the
hibernate-search-orm module, not engine! so note the dependency
inversion).
- visit it with a non-static collector to initialize all needed
properties of an input Entity
- visit it to build a Document of an initialized input Entity
- visit it to build something which gets fed into a non-Lucene
output !! (ElasticSearch or Solr client value objects: HSEARCH-1188)
.. define the Analyzer mapping, generate the dynamic boosting
values, etc.. each one could be a separate, clean concern.
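A minimal version of such a visitable structure could look like the following sketch (again, all names hypothetical). The same tree would accept a field-collecting visitor, a Document-building visitor, a property-initializing visitor, and so on:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of a recursive, visitable metadata tree with a generic visitor.
class VisitorSketch {

    interface MetadataVisitor {
        void visitField(String name);
        void visitEmbedded(String prefix, List<Node> children);
    }

    interface Node {
        void accept(MetadataVisitor visitor);
    }

    static Node field(String name) {
        return v -> v.visitField(name);
    }

    static Node embedded(String prefix, Node... children) {
        return v -> v.visitEmbedded(prefix, Arrays.asList(children));
    }

    // One possible "static collector": discover every field the mapping writes.
    static List<String> collectFieldNames(Node root) {
        List<String> names = new ArrayList<>();
        root.accept(collector("", names));
        return names;
    }

    private static MetadataVisitor collector(String prefix, List<String> out) {
        return new MetadataVisitor() {
            public void visitField(String name) {
                out.add(prefix + name);
            }
            public void visitEmbedded(String embeddedPrefix, List<Node> children) {
                for (Node child : children) {
                    // Recurse, extending the prefix for the embedded scope.
                    child.accept(collector(prefix + embeddedPrefix + ".", out));
                }
            }
        };
    }
}
```

Each concern (logging, Document building, query validation, non-Lucene output) then becomes just another MetadataVisitor implementation over the same tree.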
This would also make it easier to implement a whole crop of feature
requests we have about improving the @IndexedEmbedded(includePaths)
feature, and the ones I like most:
# easy tool integration for inspection
# better testability of how we create this metadata
# could make a "visualizing" visitor to actually show how a test
entity is transformed and make it easier to understand why it's
matching a query (or not).
Quite related, what does everybody think of this:
https://hibernate.atlassian.net/browse/HSEARCH-438 Support runtime
polymorphism on associations (instead of defining the indexed
properties based on the returned type)
?
Personally I think we should support that, but it's a significant
change. I'm bringing it up again as I suspect it would affect the
design of the changes proposed above.
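To make the HSEARCH-438 difference concrete, it boils down to whether the mapping for an association is chosen from the declared (returned) type or from the actual class at indexing time. A toy sketch:

```java
// Sketch: declared-type vs runtime-type mapping lookup for an association.
class PolymorphismSketch {

    static class Animal { }
    static class Dog extends Animal {
        String breed = "collie"; // only indexable if mapped via the runtime type
    }

    // Today: indexed properties are defined from the declared association type,
    // so a Dog stored behind an Animal reference is indexed as an Animal.
    static Class<?> declaredTypeMapping(Animal association) {
        return Animal.class;
    }

    // HSEARCH-438: pick the mapping from the actual instance at indexing time.
    static Class<?> runtimeTypeMapping(Animal association) {
        return association.getClass();
    }
}
```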
This might sound like a big change; in fact I agree it's a significant
style change, but it is rewriting what is defined today in just 3
classes; no doubt we'll get more than a dozen out of it, but I think
it would be easier to maintain in the long run, more flexible and
potentially more efficient too.
Do we all agree on this? In practical terms we'd also need to define
how far Hardy wants to go with this, if he wants to deal only with the
Metadata API/SPI aspect and then I could apply the rest, or if he
wants to try doing it all in one go. I don't think we can start
working in parallel on this ;-)
[sorry, I tried to keep it short.. then I ran out of time]
Sanne