[
http://opensource.atlassian.com/projects/hibernate/browse/HHH-1268?page=c...
]
Avram Cherry commented on HHH-1268:
-----------------------------------
Wow, I just ran into this issue 2 years after I first encountered it... I'm pretty
surprised it still hasn't been fixed.
I'm stuck using a db model that I can't really make any changes to, so I can't
alter it to treat (PK, ordinal) as the join table's primary key with the FK
unconstrained. The primary key on my join table is instead (PK, FK), with (PK, ordinal)
as an additional unique constraint.
A work-around that I'm using right now is to simply replace the PersistentList
instance with a fresh (non-persistent) List with the same contents whenever you re-order
values in it. This causes Hibernate to do:
DELETE from jointable where pk=?
And then re-insert all of the relevant rows.
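Roughly, the workaround looks like this (the Parent/Child entities and the
moveChildToFront method are just illustrative names, not taken from the attached src.zip):

// Sketch only: reorder a copy of the collection, then give the parent a
// brand-new ArrayList so Hibernate discards and recreates the whole collection
// (one delete for the parent's join rows, then re-inserts) instead of trying
// to update each row's index column in place.
void moveChildToFront(Session session, Long parentId, int index) {
    Parent parent = (Parent) session.get(Parent.class, parentId);
    List<Child> copy = new ArrayList<Child>(parent.getChildren());
    Child moved = copy.remove(index);
    copy.add(0, moved);
    // Replacing the PersistentList reference is what triggers the delete/re-insert.
    parent.setChildren(copy);
    session.flush();
}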
Considering that Hibernate would have to update every single row whose index has
changed anyway using its current logic, doing a delete/insert like this doesn't
perform all that much worse, and I believe it would be an acceptable long-term solution to
this bug. It has the added bonus of working with any (reasonable) combination of
primary-key and unique constraints on the join table.
An 'ideal' solution would be to have the CollectionPersister find each
'continuous' block of list items that have not changed in relative order but have
shifted position within the whole list, and perform the following:
First:
Delete rows that have actually been removed entirely from the list. This will leave
'holes' in the table, but ensure that the next operation won't violate any
constraints.
Second, for each continuous block that needs to be moved:
UPDATE jointable set ordinal = ordinal + :shiftAmount where PK = :pkid and ordinal BETWEEN
:firstIndex and :lastIndex
Finally:
Insert rows that are actually new to the list. This will fill in any remaining holes that
we created in step 1.
This is complicated, though, and would probably be difficult to implement correctly,
but it would potentially reduce the number of updates that have to be made when an
indexed list gets reordered.
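For what it's worth, here is roughly what those three steps could look like against
plain JDBC, assuming a join table jointable(pk, fk, ordinal) with a unique constraint
on (pk, ordinal); the table, column and method names are only illustrative, and step 3
is the same insert Hibernate already generates:

// Sketch of the proposed delete / shift / insert flush order.
void shiftBlock(Connection con, long pk, long removedFk,
                int firstIndex, int lastIndex, int shiftAmount) throws SQLException {
    // 1) Delete rows actually removed from the list, leaving temporary holes.
    PreparedStatement del = con.prepareStatement(
            "delete from jointable where pk = ? and fk = ?");
    del.setLong(1, pk);
    del.setLong(2, removedFk);
    del.executeUpdate();
    del.close();

    // 2) Move one unchanged block with a single ranged update, e.g.
    //    shiftAmount = -1 when a single earlier element was removed.
    PreparedStatement shift = con.prepareStatement(
            "update jointable set ordinal = ordinal + ? " +
            "where pk = ? and ordinal between ? and ?");
    shift.setInt(1, shiftAmount);
    shift.setLong(2, pk);
    shift.setInt(3, firstIndex);
    shift.setInt(4, lastIndex);
    shift.executeUpdate();
    shift.close();

    // 3) Insert rows that are new to the list into the remaining holes
    //    (omitted here; same insert Hibernate issues today).
}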
So, until/unless some sort of better behavior comes along, can anybody think of a reason
why having Hibernate do a delete/re-insert would be worse than keeping this bug?
Unidirectional OneToMany causes duplicate key entry violation when
removing from list
-----------------------------------------------------------------------------------
Key: HHH-1268
URL: http://opensource.atlassian.com/projects/hibernate/browse/HHH-1268
Project: Hibernate Core
Issue Type: Bug
Affects Versions: 3.1
Environment: 3.1 final
MySql 4.1.14 using MYISAM tables
Reporter: Rex Madden
Assignee: Gail Badner
Fix For: 3.2.x, 3.3.x
Attachments: src.zip
Simple OneToMany parent/child relationship using the default table structure (2 tables
and a join table).
Add 3 children to the parent. Flush. Remove the first child. The flush throws this error:
Exception in thread "main" org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update
    at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:69)
    at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:43)
    at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:202)
    at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:230)
    at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:143)
    at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:296)
    at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27)
    at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:980)
    at UnidirectionalOneToManyRemoveFromListBug.main(UnidirectionalOneToManyRemoveFromListBug.java:27)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:86)
Caused by: java.sql.BatchUpdateException: Duplicate key or integrity constraint violation, message from server: "Duplicate entry '5' for key 2"
    at com.mysql.jdbc.PreparedStatement.executeBatch(PreparedStatement.java:1461)
    at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:58)
    at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:195)
    ... 11 more
The problem is that there is a unique key on the relationship table that gets violated.
The session removes the last row in the relationship table, then attempts to rewrite the
child_id values. It fails because there is a uniqueness constraint on that column.
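For reference, a minimal mapping along these lines reproduces the failure (the names
below are hypothetical, not taken from the attached src.zip):

// Unidirectional, indexed one-to-many: the generated join table
// parent_child(parent_id, child_id, idx) carries a unique constraint on
// child_id, which is the constraint the failed batch update runs into.
@Entity
public class Parent {
    @Id @GeneratedValue
    private Long id;

    @OneToMany(cascade = CascadeType.ALL)
    @JoinTable(name = "parent_child",
               joinColumns = @JoinColumn(name = "parent_id"),
               inverseJoinColumns = @JoinColumn(name = "child_id"))
    @org.hibernate.annotations.IndexColumn(name = "idx")
    private List<Child> children = new ArrayList<Child>();

    public List<Child> getChildren() { return children; }
    public void setChildren(List<Child> children) { this.children = children; }
}

@Entity
public class Child {
    @Id @GeneratedValue
    private Long id;
}

Then the steps from the description are simply:

parent.getChildren().add(new Child());
parent.getChildren().add(new Child());
parent.getChildren().add(new Child());
session.flush();
parent.getChildren().remove(0);
session.flush();   // ConstraintViolationException on MySQL here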