I understand your situation and concerns, but what you are proposing is not a simple change. It would be a considerable overhaul, not only to the internals of Envers itself but also to the results exposed through its query API. In the meantime, I would suggest one of the following alternatives:

1. Preprocess the input and generate secondary input files. In your case this would mean taking the CSV files provided by your domain experts, together with your business-code output, and generating an additional set of CSV files that can be inserted directly into the Envers schema tables.
2. Rather than loading the data directly into the database and bypassing Hibernate, write a batch-load process that goes through Hibernate, so that its event system fires as intended for integrators such as Envers.

The benefit of (1) is that it is likely the fastest way to load large volumes of data, particularly because you can use low-level database tools to import the files, and it would probably work quite well with your existing migration solution. Option (2) is likely somewhat slower and does not fit that migration workflow as cleanly, but it minimizes the extra tooling needed to generate the additional seed files.
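To make option (1) concrete, here is a minimal sketch of generating the extra Envers seed files. It assumes the default Envers layout: a `REVINFO` table with `REV` and `REVTSTMP` columns, and one `_AUD` table per audited entity carrying a `REVTYPE` column, where `0` marks an insert ("ADD"). The `CUSTOMER` entity, its columns, and the row data are hypothetical stand-ins for your domain-expert CSVs; a real implementation would also need CSV escaping.

```java
import java.util.List;

public class EnversSeedSketch {
    public static void main(String[] args) {
        // Hypothetical domain rows already produced by the existing pipeline: id, name.
        List<String[]> customers = List.of(
                new String[]{"1", "Alice"},
                new String[]{"2", "Bob"});

        long seedRevision = 1L;          // single revision covering the whole seed load
        long timestamp = 1700000000000L; // fixed for reproducibility; normally the load time

        // REVINFO row: one revision number for the entire batch.
        StringBuilder revinfo = new StringBuilder("REV,REVTSTMP\n");
        revinfo.append(seedRevision).append(',').append(timestamp).append('\n');

        // CUSTOMER_AUD rows: REVTYPE 0 records each row as an Envers insert.
        StringBuilder aud = new StringBuilder("ID,REV,REVTYPE,NAME\n");
        for (String[] row : customers) {
            aud.append(row[0]).append(',')
               .append(seedRevision).append(",0,")
               .append(row[1]).append('\n');
        }

        System.out.print(revinfo);
        System.out.print(aud);
    }
}
```

The two generated files would then be imported alongside your existing entity-table CSVs using the same low-level database load tool, so the audit tables stay consistent with the seeded data.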