On the surface, it seems reasonable to expose an API that allows the current session to flush Envers' audit work units to the back-end database, but unfortunately it isn't that simple once all the corner cases are considered. For example:
// Revision 1: persist a new entity and commit.
entityManager.getTransaction().begin();
MyEntity entity = new MyEntity( "Taz" );
entityManager.persist( entity );
entityManager.flush();
entityManager.getTransaction().commit();

// Revision 2: modify the entity, flush the audit work mid-transaction,
// then remove the entity within the same transaction.
entityManager.getTransaction().begin();
entity = entityManager.find( MyEntity.class, entity.getId() );
entity.setName( "Roger Rabbit" );
entityManager.flush();
// The proposed (hypothetical) API under discussion:
entityManager.unwrap( Session.class ).flushAuditWork();
entityManager.remove( entityManager.find( MyEntity.class, entity.getId() ) );
entityManager.flush();
entityManager.getTransaction().commit();
While this isn't a long-running transaction, it illustrates how such an API could be abused and cause failures in user code. The code above works without failure under the DefaultAuditStrategy, but it does not under the ValidityAuditStrategy. The first revision completes just fine, but the second revision triggers an insert, a delete, and finally another insert. The problem is that the ValidityAuditStrategy maintains additional SQL state that isn't reversible in the current design (this is a commit-time-based auditing solution). As a result, the second insert detects that the SQL update it should perform affects 0 rows, because that update was already performed alongside the first insert, and the transaction is rolled back. This flush API only becomes a problem when the iterative operations are not forward-bound, as in the case above with a remove mid-stream. In my opinion, for this to work it really should be fault tolerant to any session operation, not based on the assumption that the user's code is always an iterative loop that simply persists new objects.
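The failure mode can be sketched without Envers at all. Below is a minimal, hypothetical simulation (the class and method names are mine, not Envers APIs) of the validity bookkeeping: each new audit row must first "close" the previous open row for the same entity by setting its end revision. If a mid-stream flush has already closed that row, the second attempt touches 0 rows, which the strategy treats as a failure.

// Hypothetical sketch of ValidityAuditStrategy-style bookkeeping.
// Models the "UPDATE ... SET REVEND = ? WHERE ... AND REVEND IS NULL"
// step: once a row is closed, a repeat of the same update hits 0 rows.
import java.util.HashMap;
import java.util.Map;

class ValiditySketch {
    // key = entity id, value = end revision (null = row still open)
    private final Map<Long, Integer> rows = new HashMap<>();

    // Insert a new audit row for the entity, initially open.
    void insertAuditRow(long id) {
        rows.put(id, null);
    }

    // Close the previously open row; returns the number of "rows changed".
    int closePreviousRow(long id, int endRevision) {
        if (rows.containsKey(id) && rows.get(id) == null) {
            rows.put(id, endRevision);
            return 1;
        }
        return 0; // already closed: 0 rows changed, so Envers rolls back
    }
}

With this sketch, the first close for an entity returns 1, and a second close attempt for the same entity returns 0, mirroring how the duplicated end-revision update in the scenario above causes the rollback.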