Adding the full answer to this `<make changes>; flush(); clear(); <repeat>` problem, since it's potentially relevant here; copy-pasted from this Stack Overflow answer:
The first, perhaps more correct, solution would be not to use a single big transaction, but multiple smaller ones: for example, one transaction per chunk of 200 elements. This means that if a chunk fails, the previous chunks will still be in the database, but in many cases that does not matter, as you can simply restart from where you failed, and the index will still be in sync with the database.
Spring gives you control over transactions through transaction templates, so you can manually start a new transaction for each chunk of 200 elements, or just refactor your code to put the @Transactional annotation in the right place. [There should be similar features in Java EE; I just don't know them off the top of my head.]
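The chunking itself is plain Java; a minimal sketch of splitting the work into batches of 200 is below. The per-chunk transaction boundary (Spring's TransactionTemplate, or a @Transactional method invoked once per chunk) is shown only as a comment, since it depends on your setup; `index(chunk)` is a hypothetical method standing in for whatever work you do per element.

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkedIndexing {

    static final int BATCH_SIZE = 200;

    // Split the work into chunks of at most `size` elements. In the real code,
    // each chunk would run inside its own transaction.
    static <T> List<List<T>> chunks(List<T> items, int size) {
        List<List<T>> result = new ArrayList<>();
        for (int i = 0; i < items.size(); i += size) {
            result.add(items.subList(i, Math.min(i + size, items.size())));
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < 450; i++) ids.add(i);

        int committedChunks = 0;
        for (List<Integer> chunk : chunks(ids, BATCH_SIZE)) {
            // transactionTemplate.execute(status -> { index(chunk); return null; });
            // (hypothetical call: each chunk commits independently, so a failure
            // leaves earlier chunks in both the database and the index)
            committedChunks++;
        }
        System.out.println(committedChunks);
    }
}
```

Because each chunk commits on its own, a crash at chunk N leaves chunks 1..N-1 durable and consistent, which is what makes the "restart from where you failed" strategy above safe.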
The second solution would be to keep the single, big transaction, but periodically flush your changes during it: both to the database (which can still roll back the changes if the transaction is aborted, don't worry) and to the index (which cannot roll them back). In particular, this means that if the transaction fails, you have to restart everything and purge the index, because it will no longer be in sync with the database (which rolled back its changes).
You can find an example using periodic flushing in the documentation: https://docs.jboss.org/hibernate/search/5.10/reference/en-US/html_single/#search-batchindex-flushtoindexes
If you adapt the example, your code will look more or less like this:
Session session = ...;
FullTextSession fullTextSession = Search.getFullTextSession( session );
int index = 0;
while ( results.next() ) {
    index++;
    // ... process the current result, e.g. load and modify the entity ...
    if ( index % BATCH_SIZE == 0 ) {
        session.flush();                  // push pending changes to the database
        fullTextSession.flushToIndexes(); // apply index changes (cannot be rolled back)
        fullTextSession.clear();          // detach processed entities to free memory
    }
}