[hibernate-issues] [Hibernate-JIRA] Commented: (HHH-2004) Get more information about SQL exceptions in batched execution environments

Claus Schmid (JIRA) noreply at atlassian.com
Tue Aug 15 08:29:18 EDT 2006


    [ http://opensource.atlassian.com/projects/hibernate/browse/HHH-2004?page=comments#action_23936 ] 

Claus Schmid commented on HHH-2004:
-----------------------------------

For clarification, I'd like to rephrase the beginning of the first sentence in the 3rd paragraph as follows:

"Let's assume that the bulk upload contained (amongst many other new objects) a single object which duplicates an already existing object in the database, ..."


> Get more information about SQL exceptions in batched execution environments
> ---------------------------------------------------------------------------
>
>          Key: HHH-2004
>          URL: http://opensource.atlassian.com/projects/hibernate/browse/HHH-2004
>      Project: Hibernate3
>         Type: New Feature
>   Components: core
>  Environment: All environments
>     Reporter: Claus Schmid
>
>
> When bulk operations (inserts, updates, deletes) are performed in a batched configuration, the operations are added to a batched statement and then submitted at once, batch by batch. When an error occurs, e.g. a constraint violation, an exception is thrown. There are situations where it is extremely helpful to determine which of the bulk operations failed and to which objects they pertained.
> An example:
> In our application, we have a bulk upload function which allows a user to upload data via a web interface. The data is converted to objects, and the objects are then persisted. When the transaction is committed (for the whole bulk upload at once) and an error occurs, a single exception is thrown for the whole transaction. It wraps a java.sql.BatchUpdateException, which gives us access to an array of update counts. From this array we can see, for each of the batched operations, whether it failed or succeeded (see the first sketch below).
> Let's assume that the whole bulk upload contained a single object which duplicates an already existing object in the database, so that some alternate key would be violated. In this case, we would get a constraint violation. We could see from the update count array that there was one error among many successful operations. But we would not be able to relate the particular row in the update count array to any of the objects that we persisted. So all we could tell the user was that one of his objects is a duplicate, but not which one. The user would then face the arduous task of checking his upload data against the database contents, or of using some divide-and-conquer strategy, uploading the data part by part and subdividing the parts that failed.
> If, however, one had access to the array of operations in a batch, in the order in which they were batched, one could match them against the entries in the update count array and immediately identify the "bad" objects (see the second sketch below). From this information, one could then send a meaningful error message back to the user, identifying the object that caused the exception.
> Of course, such a facility would also help identify the culprit behind a failure in any other situation where batched execution is used within the guts of Hibernate.
> In many such cases, reverting to a non-batched configuration may not be a practical workaround because of the performance penalty.
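
To make the scenario above concrete: with plain JDBC one can already recover which positions in a batch failed, by digging the java.sql.BatchUpdateException out of the exception Hibernate throws. A minimal sketch in Java follows; the helper class and its method names are invented for illustration, while org.hibernate.JDBCException#getSQLException() and BatchUpdateException#getUpdateCounts() are real APIs. Driver behavior varies (some drivers throw the BatchUpdateException directly, others chain it), so the sketch walks the whole cause chain.

import java.sql.BatchUpdateException;
import java.sql.Statement;

import org.hibernate.JDBCException;

// Illustrative helper, not part of Hibernate: dig the BatchUpdateException
// out of the exception thrown at flush/commit time and report the positions
// of the failed statements within the batch.
public final class BatchFailureInspector {

    public static void reportFailedPositions(Throwable thrown) {
        BatchUpdateException bue = findBatchUpdateException(thrown);
        if (bue == null) {
            return; // not a batch failure, or the driver hid the details
        }
        int[] counts = bue.getUpdateCounts();
        for (int i = 0; i < counts.length; i++) {
            if (counts[i] == Statement.EXECUTE_FAILED) {
                // We know which position in the batch failed, but there is
                // no way to map position i back to the persisted object --
                // that gap is exactly what this issue asks Hibernate to close.
                System.err.println("Batched statement #" + i + " failed");
            }
        }
    }

    private static BatchUpdateException findBatchUpdateException(Throwable t) {
        while (t != null) {
            if (t instanceof BatchUpdateException) {
                return (BatchUpdateException) t;
            }
            // Hibernate wraps the driver exception in a JDBCException;
            // otherwise follow the ordinary cause chain.
            t = (t instanceof JDBCException)
                    ? ((JDBCException) t).getSQLException()
                    : t.getCause();
        }
        return null;
    }
}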
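
And this is roughly how the requested facility could be consumed. Every name below is hypothetical; no such API exists in Hibernate. The sketch only shows the shape of what the issue asks for, namely the batched operations exposed in batching order next to the JDBC update counts:

import java.sql.Statement;
import java.util.List;

// HYPOTHETICAL interface -- it does not exist in Hibernate. It sketches the
// requested facility: the operations of a batch, in the order in which they
// were batched, alongside the update counts from the BatchUpdateException.
interface BatchFailureInfo {
    List<Object> getBatchedEntities(); // entities in batching order
    int[] getUpdateCounts();           // from java.sql.BatchUpdateException
}

class BulkUploadErrorReport {
    static void reportBadObjects(BatchFailureInfo info) {
        int[] counts = info.getUpdateCounts();
        List<Object> entities = info.getBatchedEntities();
        for (int i = 0; i < counts.length; i++) {
            if (counts[i] == Statement.EXECUTE_FAILED) {
                // Position i in the update counts corresponds to position i
                // in the batched entities, so the duplicate object can be
                // named in the error message sent back to the user.
                System.err.println("Rejected object: " + entities.get(i));
            }
        }
    }
}

With such a correlation in hand, the divide-and-conquer re-upload described above would become unnecessary.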

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly, contact one of the administrators:
   http://opensource.atlassian.com/projects/hibernate/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



