My initial reaction is that if you really need transactional consistency, you could run a
transaction commit every n operations. But that may not be acceptable for your application.
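Roughly, the "commit every n operations" workaround could look like this. A minimal, self-contained sketch — `BatchingWriter`, its methods, and the string "operations" are all made up for illustration, not Hibernate Search API; in a real application the commit step would be a JPA/Hibernate `tx.commit(); tx.begin();` pair:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: flush and commit after every n buffered operations
// so the index and the database stay in sync at each batch boundary.
class BatchingWriter {
    private final int batchSize;
    private final List<String> pending = new ArrayList<>();
    int commits = 0; // counts simulated transaction commits

    BatchingWriter(int batchSize) {
        this.batchSize = batchSize;
    }

    void write(String operation) {
        pending.add(operation);
        if (pending.size() >= batchSize) {
            commit();
        }
    }

    void commit() {
        // Stand-in for tx.commit(); tx.begin(); in a real application.
        pending.clear();
        commits++;
    }
}
```

The obvious downside, as noted above, is that each batch boundary is a real commit: you lose the ability to roll back the whole logical unit of work.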
We could imagine more or less what you are describing. This is an idea we want to explore
to offer consistency between the index and the database even within a non-committed
transaction: basically, flushing changes to an in-memory index, plus some smart use of
index filters to hide deleted elements from the committed indexes.
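To make that concrete, here is a hedged sketch of the visibility rule being described: a query result would be (hits from the committed index, minus documents deleted in the current transaction) union (hits from the transaction's in-memory index). `TransactionalView` and everything in it are illustrative names, not real Lucene or Hibernate Search API, and document ids stand in for full documents:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: a per-transaction view over search results that
// filters out uncommitted deletes and merges in uncommitted additions.
class TransactionalView {
    private final Set<Long> committedHits;                    // hits from the committed index
    private final Set<Long> uncommittedDeletes = new HashSet<>();
    private final Set<Long> inMemoryHits = new HashSet<>();   // hits from the in-memory index

    TransactionalView(Set<Long> committedHits) {
        this.committedHits = committedHits;
    }

    void delete(long id) {
        uncommittedDeletes.add(id);
        inMemoryHits.remove(id);
    }

    void index(long id) {
        inMemoryHits.add(id);
        uncommittedDeletes.remove(id);
    }

    Set<Long> visibleHits() {
        Set<Long> result = new HashSet<>(committedHits);
        result.removeAll(uncommittedDeletes); // the filter hiding deleted elements
        result.addAll(inMemoryHits);          // merge in this transaction's changes
        return result;
    }
}
```

In real Lucene terms the in-memory part might be something like a `RAMDirectory`/`MemoryIndex` and the delete-hiding part a filter over the committed readers — but that mapping is exactly the hard part alluded to below.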
I don't think this is going to be an easy task, but if you have some spare time,
it would be awesome to start digging into this area.
On 10 Dec 2012, at 21:45, Guillaume Smet <guillaume.smet(a)gmail.com> wrote:
On Mon, Dec 10, 2012 at 9:07 PM, Guillaume Smet
<guillaume.smet(a)gmail.com> wrote:
> On Mon, Dec 10, 2012 at 8:27 PM, Guillaume Smet
> <guillaume.smet(a)gmail.com> wrote:
>> Is it transactionally safe?
>
> From what I read, it's not.
>
> Do you see any way to get this type of pattern working in a
> transactionally safe way?
Hmmm, just handwaving at the moment, but couldn't we imagine that,
instead of flushing to the indexes, we flush to a "buffer" of Lucene
documents? I think it could reduce the memory footprint compared to
keeping all the entities in memory, while still allowing us to keep
the operation transactional. And it might allow us to get the above
case working.
Perhaps flushing to a queue of Lucene documents isn't the right idea,
but we could prepare/"compile" the LuceneWork into whatever is
convenient for this particular operation (Lucene document, deletion
query...), get rid of the entity and all the entity-related stuff,
and apply the works at the commit of the transaction, as is already
done.
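As a sketch of that buffering idea (all names hypothetical, not the real LuceneWork API): each entity change is compiled during the transaction into a small work item — a document to add or an id to delete — so the entity can be released immediately, and the queue is only applied to the index at commit, or discarded on rollback:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch: buffer "compiled" index work during the
// transaction, apply it only at commit. Document ids stand in for
// full Lucene documents.
class WorkQueue {
    private final List<Long> adds = new ArrayList<>();    // documents compiled at flush time
    private final List<Long> deletes = new ArrayList<>(); // deletion work compiled at flush time

    void addDocument(long id) {
        adds.add(id);
    }

    void deleteById(long id) {
        deletes.add(id);
    }

    // Applied once, at transaction commit; deletes first, then adds,
    // so an update (delete + add of the same id) lands correctly.
    Set<Long> applyTo(Set<Long> committedIndex) {
        Set<Long> result = new HashSet<>(committedIndex);
        result.removeAll(deletes);
        result.addAll(adds);
        return result;
    }

    // On rollback the buffered work is simply discarded.
    void rollback() {
        adds.clear();
        deletes.clear();
    }
}
```

The point of the sketch is the memory trade-off described above: only the compiled work items survive the flush, not the entities themselves.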
I'm pretty sure it could be an interesting compromise - if doable.
Any thoughts?
--
Guillaume
_______________________________________________
hibernate-dev mailing list
hibernate-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/hibernate-dev