Let's split the problem in two.
## Infinispan
In the ORM case we do filter out null results. The logic is in `QueryLoader.executeLoader` and,
more precisely, in
`ObjectLoaderHelper.returnAlreadyLoadedObjectsInCorrectOrder`.
So it looks like a mistake of the Infinispan Query module not to reproduce this behavior.
Of course when you paginate, you will get smaller windows.
Note however that iterate() is expected to return nulls in this situation AFAIR
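To make the filtering concrete, here is a minimal plain-Java sketch of the kind of null filtering described above. The class and method names are hypothetical stand-ins, not the actual ORM code in `ObjectLoaderHelper`:

```java
import java.util.ArrayList;
import java.util.List;

public class NullFilterSketch {

    // Entities whose index entry is stale resolve to null when loaded;
    // list() drops them, so the caller only sees live entities.
    public static <T> List<T> filterNulls(List<T> loaded) {
        List<T> result = new ArrayList<>();
        for (T entity : loaded) {
            if (entity != null) {
                result.add(entity);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> loaded = new ArrayList<>();
        loaded.add("order-1");
        loaded.add(null); // stale index hit: the entity no longer exists
        loaded.add("order-3");
        System.out.println(filterNulls(loaded)); // prints [order-1, order-3]
    }
}
```

As noted, this is why pagination windows shrink: the nulls are removed after the page was sized against the index.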
## Return queries inlined with the current transaction
I have to admit I did not spend time thinking hard about how to avoid the problem.
As you said, we need an in-memory index for the transaction to keep changes and
prioritize them.
I am not fully following you on updates and removals, but basically your collector would
ignore
documents that are planned to be removed or updated, and you would index new and updated
documents in your in-memory index. Is that correct?
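If I understood the proposal correctly, it could be sketched roughly like this. Everything here is a simplified stand-in (plain maps instead of real Lucene indexes and collectors), just to check we mean the same thing:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class TransactionalSearchSketch {

    // Doc ids with pending removals or updates in the current transaction.
    private final Set<Integer> dirtyIds = new HashSet<>();

    // Transaction-local "index" holding new and updated documents.
    private final Map<Integer, String> txLocalIndex = new LinkedHashMap<>();

    public void markRemoved(int id)              { dirtyIds.add(id); }
    public void markUpdated(int id, String doc)  { dirtyIds.add(id); txLocalIndex.put(id, doc); }
    public void markInserted(int id, String doc) { txLocalIndex.put(id, doc); }

    // The collector over the base index skips dirty documents; the
    // tx-local index then contributes the fresh versions plus inserts.
    public List<String> search(Map<Integer, String> baseIndex) {
        List<String> hits = new ArrayList<>();
        for (Map.Entry<Integer, String> entry : baseIndex.entrySet()) {
            if (!dirtyIds.contains(entry.getKey())) {
                hits.add(entry.getValue());
            }
        }
        hits.addAll(txLocalIndex.values());
        return hits;
    }
}
```

In the real implementation the tx-local map would be a Lucene in-memory index searched as an extra shard, and the skipping would happen in a custom collector, as Sanne suggests below.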
I'd love to see experimental work on that, because it's a question we will get
more and more often
with OGM coming. We will need a way to activate it lazily so we don't pay the cost unless
it's really necessary.
That would be a very nice new feature. Hardy, do you think you could work on a prototype?
I would rather
keep Sanne focused on the OGM query parser.
Emmanuel
On 29 August 2012, at 00:52, Sanne Grinovero wrote:
Hi all,
I feel the need to share some thoughts on this quite tricky patch proposal:
https://github.com/infinispan/infinispan/pull/1245
I'm tempted to say that Hibernate Search should "scan ahead" to look
for more results to fill the gap; but, even assuming this was easy to
implement (which it is not), this behaviour would be inconsistent with
update operations, or even inserts.
For inserts we could compensate by keeping an in-memory index paired
to the current transactions, and consider this additional index as a
temporary additional shard; by following this path I'm confident we
could also implement proper removals and updates using a custom collector,
but this will definitely be more complex and introduce some overhead.
Overhead could be minimized by considering this temporary in-memory
index as a pre-analysed dataset, so that we avoid doing the work again
at commit time.
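The "pre-analysed" optimization above could be sketched as a simple cache, analyzing each document once when it enters the transaction-local shard and reusing the tokens at commit. All names are hypothetical, and the whitespace split is a toy stand-in for a real Lucene Analyzer:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Locale;
import java.util.Map;

public class PreAnalyzedCacheSketch {

    // Hypothetical cache: document id -> tokens produced by analysis.
    private final Map<String, List<String>> analyzed = new HashMap<>();

    // Analyze once, when the document enters the tx-local index.
    public List<String> analyze(String docId, String text) {
        return analyzed.computeIfAbsent(docId,
                id -> Arrays.asList(text.toLowerCase(Locale.ROOT).split("\\s+")));
    }

    // At commit time the cached tokens are reused instead of
    // running the (potentially expensive) analysis chain again.
    public List<String> tokensForCommit(String docId) {
        return analyzed.get(docId);
    }
}
```

The point being that the expensive step, tokenization and filtering, happens only once per document per transaction.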
Any opinions on how this should work?
Cheers,
Sanne
_______________________________________________
hibernate-dev mailing list
hibernate-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/hibernate-dev