JSR 354 - Money and Currency
by Steve Ebersole
So it sounds like JSR 354 may not be included in Java 9. Do we still want
to support this for ORM 5? I am not sure if "moneta" requires Java 9...
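For reference, a minimal sketch of the API in question, using the Moneta RI types (illustrative only, not a proposed ORM mapping):

import java.math.BigDecimal;
import javax.money.CurrencyUnit;
import javax.money.Monetary;
import javax.money.MonetaryAmount;
import org.javamoney.moneta.Money;

public class MoneyExample {
    public static void main(String[] args) {
        // JSR 354 API types backed by the Moneta reference implementation
        CurrencyUnit eur = Monetary.getCurrency("EUR");
        MonetaryAmount price = Money.of(new BigDecimal("19.99"), eur);
        System.out.println(price); // EUR 19.99
    }
}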
Re: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
by Łukasz Antoniak
Currently Oracle supports database versions from 10.1 to 11.2 [1]. The LONG
and LONG RAW data types have been deprecated since versions 8 and 8i (released
before September 2000) [2]. Oracle keeps those column types only for
backward compatibility [3].
I tried the following scenario (Oracle 10gR2):
1. Create schema with "hibernate.hbm2ddl.auto" set to "create". The LONG
column is created.
2. Insert some data.
3. Modify Oracle dialect as Gail suggested. Avoid setting
"hibernate.hbm2ddl.auto".
4. Insert some data.
To my surprise the test actually passed :). However, I think that we
cannot guarantee proper behavior in every situation.
As for performance, ImageType is extracted by calling the
ResultSet.getBytes() method, which fetches all data in one call [4]. I
don't expect a major performance difference when data is streamed in
another call; oracle.jdbc.driver.LongRawAccessor.getBytes also fetches
data by reading the stream.
The bug in reading LONG columns affects JDBC drivers since version 10.2.0.4.
I think that we have to choose between:
- changing Oracle10gDialect. Make a note about it in the migration guide to
4.0 and update the "5.2.2. Basic value types" chapter in the Hibernate
documentation.
- introducing Oracle11gDialect. It can sound weird to access an Oracle 10g
database with an Oracle 11g dialect.
- disabling execution of the Hibernate tests that fail because of this issue
with @SkipForDialect (and maybe developing another version of them with
CLOBs and BLOBs, @RequiresDialect). Hibernate is written correctly
according to "Default Mappings Between SQL Types and Java Types"
(referenced earlier by Gail) and this is more of an Oracle JDBC
implementation issue. This option came to my mind, but it's weird :P.
I would vote for the first option.
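To make the first option concrete, here is a rough sketch of what the dialect change could look like (just Gail's suggested registrations applied to a dialect subclass for illustration, not a final patch; the class name is made up):

import java.sql.Types;
import org.hibernate.dialect.Oracle10gDialect;

// Illustrative only: map the LONGVARCHAR/LONGVARBINARY type codes to
// CLOB/BLOB instead of LONG / LONG RAW.
public class PatchedOracle10gDialect extends Oracle10gDialect {
    public PatchedOracle10gDialect() {
        super();
        registerColumnType( Types.LONGVARCHAR, "clob" );
        registerColumnType( Types.LONGVARBINARY, "blob" );
    }
}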
Regards,
Lukasz Antoniak
[1]
http://www.oracle.com/us/support/library/lifetime-support-technology-0691...
(page 4)
[2]
http://download.oracle.com/docs/cd/A91202_01/901_doc/server.901/a90120/ch...
[3]
http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/datatype.htm
[4] "Getting a LONG RAW Data Column with getBytes"
http://download.oracle.com/docs/cd/B19306_01/java.102/b14355/jstreams.htm
Strong Liu wrote:
> I think Oracle 11g is the only DB version still supported by Oracle. Can we just introduce a new Oracle dialect with the suggested changes and deprecate all the other existing Oracle dialects? This won't affect users' apps.
>
> -----------
> Strong Liu <stliu(a)hibernate.org>
> http://hibernate.org
> http://github.com/stliu
>
> On Oct 15, 2011, at 11:14 AM, Scott Marlow wrote:
>
>> How does this impact existing applications? Would they have to convert
>> LONGs to CLOBs (and LONGRAWs to BLOBs) to keep the application working?
>>
>> As far as the advantage of CLOB over TEXT, if you read every character,
>> which one is really faster? I would expect TEXT to be a little faster,
>> since the server side will send the characters before they are asked
>> for. By faster, I mean from the application performance point of view. :)
>>
>> Could this be changed in a custom Oracle dialect? So new
>> applications/databases could perhaps use that and existing applications
>> might use LONGs a bit longer via the existing Oracle dialect.
>>
>> On 10/14/2011 09:22 PM, Gail Badner wrote:
>>> In [1], I am seeing the following type mappings:
>>>
>>> Column type: LONG -> java.sql.Types.LONGVARCHAR -> java.lang.String
>>> Column type: LONGRAW -> java.sql.Types.LONGVARBINARY -> byte[]
>>>
>>> org.hibernate.type.TextType is consistent with the mapping for LONG.
>>>
>>> org.hibernate.type.ImageType is consistent with the mapping for LONGRAW.
>>>
>>> From this standpoint, the current settings are appropriate.
>>>
>>> I understand there are restrictions when LONG and LONGRAW are used and I see from your other message that there is Oracle documentation for migrating to CLOB and BLOB.
>>>
>>> I agree that changing column type registration as follows (for Oracle only) should fix this:
>>> registerColumnType( Types.VARBINARY, 2000, "raw($l)" );
>>> registerColumnType( Types.VARBINARY, "blob" );
>>>
>>> registerColumnType( Types.LONGVARCHAR, "clob" );
>>> registerColumnType( Types.LONGVARBINARY, "blob" );
>>>
>>> registerColumnType( Types.VARCHAR, 4000, "varchar2($l char)" );
>>> registerColumnType( Types.VARCHAR, "clob" );
>>>
>>> Steve, what do you think? Is it too late to make this change for 4.0.0?
>>>
>>> [1] Table 11-1 of Oracle® Database JDBC Developer's Guide and Reference, 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/datacc.htm#g...)
>>> [2] Hibernate Core Migration Guide for 3.5 (http://community.jboss.org/wiki/HibernateCoreMigrationGuide35)
>>> [3] Table 2-10 of Oracle® Database SQL Language Reference
>>> 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/sql_elemen...)
>>>
>>> ----- Original Message -----
>>>> From: "Łukasz Antoniak"<lukasz.antoniak(a)gmail.com>
>>>> To: hibernate-dev(a)lists.jboss.org
>>>> Sent: Thursday, October 13, 2011 12:50:13 PM
>>>> Subject: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
>>>>
>>>> Welcome Community!
>>>>
>>>> I have just subscribed to the list and wanted to discuss HHH-6726
>>>> JIRA
>>>> issue.
>>>>
>>>> Gail Badner wrote
>>>> (http://lists.jboss.org/pipermail/hibernate-dev/2011-October/007208.html):
>>>> HHH-6726 (Oracle : map TextType to clob and ImageType to blob)
>>>> https://hibernate.onjira.com/browse/HHH-6726
>>>> There have been a number of issues opened since the change was made
>>>> to
>>>> map TextType (LONGVARCHAR) to 'long' and ImageType (LONGVARBINARY) to
>>>> 'long raw'. This change was already documented in the migration notes.
>>>> Should
>>>> the mapping for Oracle (only) be changed back to clob and blob?
>>>>
>>>> HHH-6726 is caused by an issue in the Oracle JDBC driver (version
>>>> 10.2.0.4 and later). This bug appears when LONG or LONG RAW columns
>>>> are not accessed as the first or last column while processing a SQL
>>>> statement.
>>>>
>>>> I have discussed the topic of mapping TextType to CLOB and ImageType
>>>> to BLOB (only in the Oracle dialect) with Strong Liu. Reasons for doing so:
>>>> - Oracle allows only one LONG / LONG RAW column per table. This might
>>>> be the most important reason from Hibernate's perspective.
>>>> - LONG / LONG RAW - up to 2 GB; BLOB / CLOB - up to 4 GB.
>>>> - In PL/SQL, using LOBs is more efficient (random access to data);
>>>> LONG allows only sequential access.
>>>> - LONG and LONG RAW are deprecated.
>>>>
>>>> What is your opinion?
>>>>
>>>> Regards,
>>>> Lukasz Antoniak
new proposal for tx timeout handling using transaction DISASSOCIATING event notification...
by Scott Marlow
With a proposed TM level listener, we will have an SPI for notification
of when application threads associated with a JTA transaction become
disassociated from that transaction (tm.commit/rollback/suspend time).
Having this knowledge in a synchronization callback, we can determine
whether the persistence context should be cleared directly from the
Synchronization.afterCompletion(int) call or whether clearing should be
deferred until the application thread is disassociated from the JTA
transaction.
This idea is based on a TM level listener approach that Tom Jenkinson
[1] suggested. Mike Musgrove has a "proof of concept" implementation of
the suggested changes [2]. I did some testing with [3] to see if the
improvement helps with clearing entities that might still be in the
persistence context after a background tx timeout.
I'm wondering whether, in the Hibernate ORM
Synchronization.afterCompletion(int status) implementation, in the case of
tx rollback, we could defer the clearing of the Hibernate session to
be handled by the JtaPlatform. This could be set up at
EntityManager.joinTransaction() time (if a new property like
"hibernate.transaction.defer_clear_session" is true). Perhaps via a
JtaPlatform.joinTransaction(EntityManager) registration call?
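A rough sketch of the idea (not actual ORM code; the property-driven flag and the deferral hook are assumptions taken from the proposal above):

import javax.transaction.Status;
import javax.transaction.Synchronization;
import org.hibernate.Session;

// Sketch only: clear the session immediately unless the proposed
// "hibernate.transaction.defer_clear_session" flag asks us to wait for the
// JTA disassociation notification in the rollback case.
public class DeferredClearSynchronization implements Synchronization {
    private final Session session;
    private final boolean deferClearOnRollback; // driven by the proposed property

    public DeferredClearSynchronization(Session session, boolean deferClearOnRollback) {
        this.session = session;
        this.deferClearOnRollback = deferClearOnRollback;
    }

    @Override
    public void beforeCompletion() {
        // nothing to do in this sketch
    }

    @Override
    public void afterCompletion(int status) {
        if (status == Status.STATUS_ROLLEDBACK && deferClearOnRollback) {
            // deferred: the JtaPlatform-level listener would clear the session
            // once the application thread is disassociated from the transaction
            return;
        }
        session.clear();
    }
}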
Thoughts?
Scott
[1] https://developer.jboss.org/thread/252572?start=45&tstart=0
[2]
https://github.com/mmusgrov/jboss-transaction-spi/blob/threadDisassociati...
[3]
https://github.com/scottmarlow/wildfly/tree/transactiontimeout_clientut_n...
ORM Team "triage" meeting
by Steve Ebersole
Gail and I discussed Jira a little bit last week and how to best manage
scheduling issues.
We both agreed that a team get-together, either weekly or every other week,
to discuss and triage new issues would be a great idea.
One thing I absolutely do not want happening is just scheduling issues as a
means to come back and triage them later. Scheduling an issue, on a "real
version" anyway, should mean something. It should mean some level of
dedication to finish that task for that release. In short, unless you are
volunteering to take on a task *yourself* for that release, please do not
schedule it for that release.
As for the triage meeting, I would definitely like Gail and Andrea
involved. Of course anyone is welcome. The reason I mention this is that
Gail usually ends up on the early side when scheduling these. So we will find a
time that works best for the three of us and go from there. I recommend that we
leverage HipChat for these discussions.
Andrea is coming to Austin for a few days starting Monday, so I would like
to start this triaging while he is here. Gail, I am thinking 1pm my time
(11am yours) would be a good time. Andrea, does that work for you after
Austin?
Query handling : Antlr 3 versus Antlr 4
by Steve Ebersole
As most of you know already, we are planning to redesign the current
Antlr-based HQL/JPQL parser in ORM for a variety of reasons.
The current approach in the translator (Antlr 2 based, although Antlr 3
supports the same model) is that we actually define multiple
grammars/parsers which progressively re-write the tree adding more and more
semantic information; think of this as multiple passes or phases. The
current code has 3 phases:
1) parsing - we simply parse the HQL/JPQL query into an AST, although we do
do one interesting (and uber-important!) re-write here where we "hoist" the
from clause in front of all other clauses.
2) rough semantic analysis - the current code, to be honest, sucks here.
The end result of this phase is a tree that mixes normalized semantic
information with lots of SQL fragments. It is extremely fugly
3) rendering to SQL
The idea of phases is still the best way to attack this translation imo. I
just think we did not implement the phases very well before; we were just
learning Antlr at the time. So part of the redesign here is to leverage
our better understanding of Antlr and design some better trees. The other
big reason is to centralize the generation of SQL into one place rather
than the 3 different places we do it today (not to mention the many, many
places we render SQL fragments).
Part of the process here is to decide which parser to use. Antlr 2 is
ancient :) I used Antlr 3 in the initial prototyping of this redesign
because it was the most recent release at that time. In the interim Antlr
4 has been released.
I have been evaluating whether Antlr 4 is appropriate for our needs here.
Antlr 4 is a pretty big conceptual deviation from Antlr 2/3 in quite a few
ways. Generally speaking, Antlr 4 is geared more towards interpreting
rather than translating/transforming. It can handle "transformation" if
the transformation is the final step in the process. Transformation is
where tree re-writing comes in handy.
First let's step back and look at the "conceptual model" of Antlr 4. The
grammar is used to produce:
1) the parser - takes the input and builds a "parse tree" based on the
rules of the lexer and grammar.
2) listener/visitor for parse-tree traversal - can optionally generate
listeners or visitors (or both) for traversing the parse tree (output from
parser).
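In practical terms that looks something like the sketch below (assuming a grammar named Hql for which Antlr 4 generates HqlLexer, HqlParser and HqlBaseVisitor; the statement rule name is also an assumption):

import org.antlr.v4.runtime.ANTLRInputStream;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.tree.ParseTree;

public class Antlr4Sketch {
    public static void main(String[] args) {
        // 1) the parser builds a parse tree from the input
        HqlLexer lexer = new HqlLexer(
                new ANTLRInputStream("select c.headquarters.state.code from Customer c"));
        HqlParser parser = new HqlParser(new CommonTokenStream(lexer));
        ParseTree tree = parser.statement();

        // 2) a generated visitor (or listener) traverses that parse tree
        String result = new HqlBaseVisitor<String>() {
            // visit methods would be overridden per grammar rule to interpret
            // or translate; there is no tree re-writing step
        }.visit(tree);
        System.out.println(result);
    }
}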
There are 2 highly-related changes that negatively impact us:
1) no tree grammars/parsers
2) no tree re-writing
Our existing translator is fundamentally built on the concepts of tree
parsers and tree re-writing. Even the initial prototypes for the redesign
(and the current state of hql-parser which Sanne and Gunnar picked up from
there) are built on those concepts. So moving to Antlr 4 in that regard
does represent a risk. How big of a risk, and whether that risk is worth
it, is what we need to determine.
What does all this mean in simple, practical terms? Let's look at a simple
query: "select c.headquarters.state.code from Customer c". Simple syntactic
analysis will produce a tree something like:
[QUERY]
  [SELECT]
    [DOT]
      [DOT]
        [DOT]
          [IDENT, "c"]
          [IDENT, "headquarters"]
        [IDENT, "state"]
      [IDENT, "code"]
  [FROM]
    [SPACE]
      [SPACE_ROOT]
        [IDENT, "Customer"]
        [IDENT, "c"]
There is not a lot of semantic (meaning) information here. A more semantic
representation of the query would look something like:
[QUERY]
  [SELECT]
    [ATTRIBUTE_REF]
      [ALIAS_REF, "<gen:1>"]
      [IDENT, "code"]
  [FROM]
    [SPACE]
      [PERSISTER_REF]
        [ENTITY_NAME, "com.acme.Customer"]
        [ALIAS, "c"]
        [JOIN]
          [INNER]
          [ATTRIBUTE_JOIN]
            [IDENT, "headquarters"]
            [ALIAS, "<gen:0>"]
            [JOIN]
              [INNER]
              [ATTRIBUTE_JOIN]
                [IDENT, "state"]
                [ALIAS, "<gen:1>"]
Notice especially the difference in the tree rules. This is tree
re-writing, and is the major difference affecting us. Consider a specific
thing like the "c.headquarters.state.code" DOT-IDENT sequence. Essentially
Antlr 4 would make us deal with that as a DOT-IDENT sequence through all
the phases - even SQL generation. Quite fugly. The intent of Antlr 4 in
cases like this is to build up an external state table (external to the
tree itself) or what Antlr folks typically refer to as "iterative tree
decoration"[1]. So with Antlr 4, in generating the SQL, we would still be
handling calls in terms of "c.headquarters.state.code" in the SELECT clause
and resolving that through the external symbol tables. Again, with Antlr 4
we would always be walking that initial (non-semantic) tree. Unless I am
missing something. I would be happy to be corrected, if anyone knows Antlr
4 better. I have also asked as part of the antlr-discussion group[2].
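For what it's worth, "iterative tree decoration" in Antlr 4 terms would look roughly like this (a sketch; the class and method names are illustrative):

import org.antlr.v4.runtime.tree.ParseTree;
import org.antlr.v4.runtime.tree.ParseTreeProperty;

// Sketch only: instead of re-writing the tree, each phase annotates the
// original parse-tree nodes through an external map keyed by node.
public class DotIdentDecoration {
    private final ParseTreeProperty<String> resolvedPaths = new ParseTreeProperty<>();

    // during semantic analysis, e.g. for the "c.headquarters.state.code" node
    public void decorate(ParseTree dotIdentNode) {
        resolvedPaths.put(dotIdentNode, "<gen:1>.code"); // resolved via the symbol tables
    }

    // later, during SQL rendering, the very same (non-semantic) node is looked up again
    public String render(ParseTree dotIdentNode) {
        return resolvedPaths.get(dotIdentNode);
    }
}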
In my opinion though, if it comes down to us needing to walk the tree in
that first form across all phases I just do not see the benefit to moving
to Antlr 4.
P.S. When I say SQL above I really just mean the target query language for
the back-end data store whether that be SQL targeting a RDBMS for ORM or a
NoSQL store for OGM.
[1] I still have not fully grokked this paradigm, so I may be missing
something, but... AFAICT even in this paradigm the listener/visitor rules
are defined in terms of the initial parse tree rules rather than more
semantic ones.
[2] https://groups.google.com/forum/#!topic/antlr-discussion/hzF_YrzfDKo
Changelog file in Hibernate ORM
by Sanne Grinovero
The file changelog.txt in the root of the Hibernate ORM project seems outdated.
Is it not maintained anymore? I found it handy.
Sanne
Support for DELETE statements ActionQueue sorting
by Mihalcea Vlad
Hi,
While INSERT sorting is handled by ActionQueue.InsertActionSorter, DELETE statements are not sorted at all.
A DeleteActionSorter would have to rearrange DELETEs in the opposite order to the INSERT sorting, with the Children having to be deleted first.
The current work-around is to dissociate all Children and manually flush the Session, so that orphan removal kicks in before the Parent entity's delete occurs.
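For clarity, the work-around looks roughly like this (a sketch; the Parent/Child entity names and the orphan-removal mapping are assumed):

import java.util.ArrayList;
import org.hibernate.Session;

// Sketch of the work-around: dissociate the children and flush so that
// orphan removal issues the child DELETEs before the parent DELETE.
public class DeleteWorkaround {
    public static void deleteParent(Session session, Parent parent) {
        for (Child child : new ArrayList<>(parent.getChildren())) {
            child.setParent(null);
            parent.getChildren().remove(child);
        }
        session.flush();        // orphan removal deletes the Child rows here
        session.delete(parent); // the Parent delete now happens last
    }
}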
Any plans for supporting such a feature?
Vlad Mihalcea
Transaction
by Steve Ebersole
I thought I had asked this before, but maybe not. Andrea and I are working
through the transaction/jdbc changes and I really would like to clean up
the org.hibernate.Transaction API. But before I start cutting I wanted to
make sure no one is using the methods I plan on getting rid of...
Here is the new proposed contract:
public interface Transaction {
    public void begin();
    public void commit();
    public void rollback();
    public void markRollbackOnly();
    public Status getStatus();
    public int getTimeout();
    public void setTimeout(int seconds);
    public void registerSynchronization(Synchronization synchronization);
}

public enum Status {
    NOT_ACTIVE,
    ACTIVE,
    COMMITTED,
    ROLLED_BACK,
    FAILED_COMMIT
}
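For illustration, client code against the proposed contract would read something like this (a sketch only; it assumes the Session keeps exposing the transaction via getTransaction()):

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class TransactionUsageSketch {
    public static void doWork(SessionFactory sessionFactory) {
        Session session = sessionFactory.openSession();
        Transaction tx = session.getTransaction();
        tx.begin();
        try {
            // ... work with the session ...
            tx.commit();
        }
        catch (RuntimeException e) {
            if (tx.getStatus() == Status.ACTIVE || tx.getStatus() == Status.FAILED_COMMIT) {
                tx.rollback();
            }
            throw e;
        }
        finally {
            session.close();
        }
    }
}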
Notes:
1) isInitiator() has been removed with no real replacement. I could not
really see when that would be useful.
2) isParticipating() has been removed with no real replacement.
3) isActive(), wasCommitted() and wasRolledBack() have all been removed, with
a call to getStatus() as the replacement.
4) getLocalStatus() is gone. Who cares :) If users are asking us this, we
really should be checking the REAL state of the transaction.
5) Transaction is now a single impl. The distinctions are all handled
internally. TransactionImplementor is gone too.
Thoughts? Concerns?
Trying Hibernate 5.0.0.Beta1
by Petar Tahchiev
Hi guys,
I just tried the latest beta and I cannot compile my project. With the
latest Hibernate 4.3.x I was able to do this:
-------
final org.hibernate.cfg.Configuration configuration = getHibernateConfiguration();
configuration.buildMappings();
final SchemaUpdate schemaUpdate = new SchemaUpdate(configuration);
-------
however it seems that the SchemaUpdate constructor has been removed and
a new one has been added:
--------
public SchemaUpdate(MetadataImplementor metadata) {
    this( metadata.getMetadataBuildingOptions().getServiceRegistry(), metadata );
}
---------
Also the configuration.buildMappings() method has been deprecated. Where do
I get the MetadataImplementor from? Also is there any changelog I can refer
to?
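For what it's worth, my best guess at the new bootstrap so far looks like the sketch below (untested against the Beta; the cast to MetadataImplementor is an assumption):

import org.hibernate.boot.MetadataSources;
import org.hibernate.boot.registry.StandardServiceRegistry;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;
import org.hibernate.boot.spi.MetadataImplementor;
import org.hibernate.tool.hbm2ddl.SchemaUpdate;

public class SchemaUpdateSketch {
    public static void main(String[] args) {
        StandardServiceRegistry registry = new StandardServiceRegistryBuilder()
                .configure() // reads hibernate.cfg.xml
                .build();
        MetadataImplementor metadata =
                (MetadataImplementor) new MetadataSources( registry ).buildMetadata();
        SchemaUpdate schemaUpdate = new SchemaUpdate( metadata );
    }
}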
Thanks.
--
Regards, Petar!
Karlovo, Bulgaria.
---
Public PGP Key at:
https://keyserver1.pgp.com/vkd/DownloadKey.event?keyid=0x19658550C3110611
Key Fingerprint: A369 A7EE 61BC 93A3 CDFF 55A5 1965 8550 C311 0611
The current Hibernate Search sprint: lots of topics!
by Sanne Grinovero
All,
let me clarify the general goal of this sprint. I don't expect to
celebrate with a 5.2.0.Final this time, but I'd aim at getting some of
the long-standing big tasks done and finishing these three weeks with a
5.2.Beta1. We need to organize around several significant parallel themes.
There are some "big" themes going on which you need to be aware of
beyond the granularity of JIRA.
Your help in properly inspecting these with experiments and then breaking
them down into smaller tasks is what I need the most right now. I'd
highly appreciate it if each of you could take on leadership of one of
these themes and get at least one other team member as a primary
reviewer and brainstorming mate.
These are the primary themes:
- the Faceting refactoring - led by Hardy
- the dynamic types work - led by me
- Hibernate ORM 5 compatibility and testing - almost done
- getting rid of the Infinispan module - led by Gustavo
- a discussion with the WildFly team about how to share the module
structure / build / definitions (more on this soon)
- Lucene 5
- R&D: explore better clustering strategies, better master election
(or no-master architectures)
- Better integration with ORM's Multi-tenancy - being quite requested
recently - Davide?
If we really could upgrade both ORM and Lucene to 5, then we could
promote this to a new major release. Of course I'm dreaming and that's
not going to happen in practice - not least because that would require an
ORM 5.0.0.Final.
So what I'm expecting is that we explore the needs for these, and you
help me identify which steps are needed to get both upgraded in
the near future. That means we might be raising more issues than we
solve, but that's good, as it clarifies which atomic, self-contained
and consistent steps we then need to perform to get there.
I'm currently working on the ORM 5 tasks and will soon share some PRs of
things which could already be merged, but of course the final step
won't be applied as we're not really going to upgrade yet - unless we
agree we're only releasing betas until ORM is final too.
For Lucene 5: the work which Hardy is doing is essential:
- update the Faceting code
- move our code to use the new FieldDocs
After that, the upgrade won't be that bad (not as hard as Lucene 4)
I just created some JIRAs as "containers" for these larger themes; just
please keep in mind that I'm not setting the version to "5.2" as
they will probably span multiple releases. The goal should not be to
resolve them, but to start them and split them up into subtasks which
can be merged already.
I'm pretty sure that several resulting sub-tasks can be merged already.
There is a new label in JIRA: "current_sprint", so we can identify
them all even though they are not marked to be fixed for version 5.2.
The "R&D" tasks are not in JIRA at all, I'm still gathering
requirements - still we'll need to dedicate some time to
experimentation and brainstorming.
I realize these are many parallel paths to work on; we're many
experienced devs though, and these should be workable in parallel.
If each of you can take some leadership on an area I hope we can close
them all by the next iteration (except probably the R&D task).
===
That said about the larger themes, there is of course also a list of
traditional tasks which will shape the 5.2 improvements.
These are marked "5.2" in JIRA; some are trivial, like missing javadoc
or a paragraph of documentation, but they need some figuring out to craft
the right docs.
Let me comment on these briefly to see if any pick your interest.
# HSEARCH-1848 Replace the Infinispan Directory provider with the one
distributed by the Infinispan project
As discussed: we'll remove the module, but need to make sure we can
plug in the one distributed by Infinispan. Needs Infinispan to release
it first.
# HSEARCH-1214 Review SearchFactory initialization
For our own peace of mind... the boot process is hard to understand. I
have some ideas, and there are many things to keep in mind, so I'll
probably try to take this myself; otherwise I'll transfer my brain
dump... best over voice.
# HSEARCH-1472 Broaden collection of built in IndexManager
implementations to simplify choice of sensible configurations
As discussed at the team meeting. The goal is to simplify
configuration and documentation, prevent sick configuration choices.
# HSEARCH-1474 MassIndexer needs to avoid being timed out by the
TransactionManager
This is high value and long standing, but complex. Gunnar started
working on a test.
# HSEARCH-1536 Improve the test suite around MoreLikeThis
(association, custom fieldbridge, class bridges)
There are several open tasks around MLT. This is the warm-up point to
finalize MLT... I didn't schedule the other tasks for this sprint.
# HSEARCH-1589 ServiceManager closes services too aggressively
A sensible optimisation, probably easy. Beware: concurrency and
bootstrap related.
# HSEARCH-1654 Disable merge policy during Massindexing
A great performance optimisation for mass indexing people. I think
it's trivial, but to verify it you'll need to set up a relatively
long run - we have a repository with instructions to reindex Wikipedia.
# HSEARCH-1681 Index optimisation should commit to publish the
performed optimisation
Trivial to do - one liner - but not so trivial to test for.
# HSEARCH-1684 ResultTransformer ignores transformList on tuples
No idea, needs to be looked at to make Marc S. happy.
# HSEARCH-1708 Using DistanceSortField does not verify the field
parameter passed to the constructor
# HSEARCH-1711 EntityIndexingInterceptor executes on different part of
the hierarchy
# HSEARCH-1729 Document the Infinispan configuration property
`metadata_writes_async`
This was not documented as it's a highly experimental property. I was
hoping we could run some more tests, but I won't have the time for
that at the moment, so either someone volunteers for the test, or we
keep it a secret, or we decide to document it with warnings.
# HSEARCH-1762 Improve javadocs of builtin bridges
# HSEARCH-1773 org.hibernate.search.backend.impl.WorkVisitor not
exported by engine osgi bundle
Or find some alternative way... but whatever the solution, we need to
get OSGi to a "done" status.
# HSEARCH-1783 Reproduce transaction timeouts during mass indexing
Gunnar already on it.
# HSEARCH-1793 CriteriaObjectInitializer causes too many object loads
in cross hierarchy queries
This one is nasty, we should get rid of it.
# HSEARCH-1803 Infinispan integration test search in the wrong node
since we're removing the code.. we need to apply this as
https://issues.jboss.org/browse/ISPN-5339
# HSEARCH-1804 Boost on IndexedEmbedded properties
This really should just work as the user requests
# HSEARCH-1811 Wildcard with multiple fields
Another sensible usability improvement
# HSEARCH-1812 Documentation doesn't clearly explain how one obtains
the existing SearchIntegrator
Start a documentation section "integrators and framework developers" ?
# HSEARCH-1815 Clarify the need to depend on an implementation of
SerializationProvider
Apparently we don't state one will be needed ;)
# HSEARCH-1816 Explicitly validate the version of Hibernate ORM
A usability improvement, as proposed on the mailing list. +1 for Gunnar's ideas.
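Just to illustrate the shape of such a check, a sketch (the accepted version prefixes below are placeholders):

import org.hibernate.Version;
import org.hibernate.search.exception.SearchException;

// Sketch: fail fast at boot if the ORM version on the classpath is not one
// we support (the accepted prefixes are placeholders).
public class OrmVersionCheck {
    public static void validate() {
        String ormVersion = Version.getVersionString();
        if ( !ormVersion.startsWith( "4.3" ) && !ormVersion.startsWith( "5.0" ) ) {
            throw new SearchException( "Unsupported Hibernate ORM version: " + ormVersion );
        }
    }
}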
# HSEARCH-1826 Make it possible to test Hibernate Search with preview
builds of Hibernate ORM 5
I'm working on this one.
# HSEARCH-1828 Clarify documentation about ways to disable Hibernate Search
# HSEARCH-1839 FieldBridge instance initialization might use reference
access to the booting framework
This is needed by the jBPM / Drools teams. At least the programmatic
configuration should be trivial.
# HSEARCH-1844 Review which components should no longer be tagged as
experimental
# HSEARCH-1847 Create a FSDirectory extension which doesn't ever sync to disk
Requested by Infinispan - might become an urgent requirement soon,
better have this ready.