[Hibernate-JIRA] Created: (EJB-225) EntityManager.find infinite loop related to ANN-381
by John Schneider (JIRA)
EntityManager.find infinite loop related to ANN-381
---------------------------------------------------
Key: EJB-225
URL: http://opensource.atlassian.com/projects/hibernate/browse/EJB-225
Project: Hibernate Entity Manager
Type: Bug
Environment: Hibernate Annotations 3.2.0.cr2
Hibernate EntityManager 3.2.0.cr2
Hibernate Core 3.2.0.cr4
Reporter: John Schneider
Attachments: Test.zip
I've tested the fix for ANN-381 and found that the improvement does not work correctly with the Entity Manager, specifically the find method. When find is called for an entity whose OneToMany collection is mapped by a field of an embedded primary key class, the Entity Manager goes into an infinite loop of selects.
Please note that a workaround is to replace the Entity Manager's find method with a query, such as this:
Query query = entityManager.createQuery("select c from Card c where c.id = :id");
query.setParameter("id", id);
return (Card) query.getSingleResult();
However, this workaround doesn't help if you have other entities that relate to something like the Card entity shown below (see the hypothetical related entity after the mappings). I'll expand on the example code from ANN-381 to show the infinite looping when the related collection is loaded.
@Entity
public class Card implements Serializable {

    @Id
    private String id;

    @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER, mappedBy = "primaryKey.card")
    private Set<CardField> fields;

    public Card(String id) {
        this();
        this.id = id;
    }

    Card() {
        fields = new HashSet<CardField>();
    }

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public void addField(Card card, Key key) {
        fields.add(new CardField(card, key));
    }
}

@Entity
public class CardField implements Serializable {

    @EmbeddedId
    private PrimaryKey primaryKey;

    CardField(Card card, Key key) {
        this.primaryKey = new PrimaryKey(card, key);
    }

    CardField() {
    }
}

@Embeddable
public class PrimaryKey implements Serializable {

    @ManyToOne(optional = false)
    private Card card;

    @ManyToOne(optional = false)
    private Key key;

    public PrimaryKey(Card card, Key key) {
        this.card = card;
        this.key = key;
    }

    PrimaryKey() {}
}

@Entity
public class Key implements Serializable {

    @Id
    private String id;

    public Key(String id) {
        this.id = id;
    }

    Key() {
    }
}
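To illustrate why the JPQL workaround cannot always be substituted, here is a hypothetical related entity (not part of the attached Test.zip) that eagerly references Card; loading it makes Hibernate resolve the Card and its eager fields collection internally, so there is no place to swap in the query:

@Entity
public class Invoice implements Serializable {

    @Id
    private String id;

    // Hypothetical association: loading an Invoice forces Hibernate to load its Card
    // (and the eager fields collection), so the select loop is triggered even though
    // entityManager.find(Card.class, ...) is never called directly.
    @ManyToOne(fetch = FetchType.EAGER)
    private Card card;
}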
From what I can tell, this maps fine with the correct Hibernate libs and JBoss EJB Embeddable Alpha 9. However, the Entity Manager must be generating bad queries, or something with a similar effect. I've tested with the following DAO:
@Stateless
public class CardDAOBean implements CardDAOLocal {

    @PersistenceContext
    private EntityManager entityManager;

    public void createCard(Card card) {
        entityManager.persist(card);
    }

    public Card findCard(String id) {
        return entityManager.find(Card.class, id);
    }

    public void createKey(Key key) {
        entityManager.persist(key);
    }
}
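For completeness, here is what findCard looks like with the query workaround mentioned above applied (this variant is not in the attached test; it simply combines the DAO with the query shown earlier):

public Card findCard(String id) {
    // workaround: avoid entityManager.find, which triggers the select loop
    Query query = entityManager.createQuery("select c from Card c where c.id = :id");
    query.setParameter("id", id);
    return (Card) query.getSingleResult();
}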
The Test class:
public class EJBTest {

    private Card card;
    private Key key;

    public static void main(String[] args) {
        TestRunner.run(suite());
    }

    public static junit.framework.Test suite() {
        TestSuite suite = new TestSuite("Test for ANN-381");
        suite.addTest(new JUnit4TestAdapter(EJBTest.class));
        // setup test so that embedded JBoss is started/stopped once for all
        // tests here.
        TestSetup wrapper = new TestSetup(suite) {
            protected void setUp() {
                startupEmbeddedJboss();
            }
            protected void tearDown() {
                shutdownEmbeddedJboss();
            }
        };
        return wrapper;
    }

    @Before
    public void setup() {
        card = new Card("cardId");
        key = new Key("keyId");
        card.addField(card, key);
    }

    @Test
    public void persist() throws Exception {
        InitialContext ctx = getInitialContext();
        CardDAOLocal CardDAOLocal = (CardDAOLocal) ctx.lookup("CardDAOBean/local");
        CardDAOLocal.createKey(key);
        CardDAOLocal.createCard(card);
        Card card = CardDAOLocal.findCard(this.card.getId());
        assertNotNull(card);
    }

    private static void startupEmbeddedJboss() {
        EJB3StandaloneBootstrap.boot(null);
        EJB3StandaloneBootstrap.scanClasspath();
    }

    private static void shutdownEmbeddedJboss() {
        EJB3StandaloneBootstrap.shutdown();
    }

    public static InitialContext getInitialContext() throws Exception {
        Hashtable properties = getInitialContextProperties();
        return new InitialContext(properties);
    }

    private static Hashtable getInitialContextProperties() {
        Hashtable<String, String> props = new Hashtable<String, String>();
        props.put("java.naming.factory.initial",
                "org.jnp.interfaces.LocalOnlyContextFactory");
        props.put("java.naming.factory.url.pkgs",
                "org.jboss.naming:org.jnp.interfaces");
        return props;
    }
}
Here's some relevant debugging output:
Row insert: update CardField set card_id=? where card_id=? and key_id=?
Row delete: update CardField set card_id=null where card_id=? and card_id=? and key_id=?
One-shot delete: update CardField set card_id=null where card_id=?
Static select for entity entity.CardField: select cardfield0_.card_id as card2_1_0_, cardfield0_.key_id as key1_1_0_ from CardField cardfield0_ where cardfield0_.card_id=? and cardfield0_.key_id=?
Static select for action ACTION_MERGE on entity entity.CardField: select cardfield0_.card_id as card2_1_0_, cardfield0_.key_id as key1_1_0_ from CardField cardfield0_ where cardfield0_.card_id=? and cardfield0_.key_id=?
Static select for action ACTION_REFRESH on entity entity.CardField: select cardfield0_.card_id as card2_1_0_, cardfield0_.key_id as key1_1_0_ from CardField cardfield0_ where cardfield0_.card_id=? and cardfield0_.key_id=?
Static select for entity entity.Card: select card0_.id as id0_1_, fields1_.card_id as card2_3_, fields1_.key_id as key1_3_, fields1_.card_id as card2_1_0_, fields1_.key_id as key1_1_0_ from Card card0_ left outer join CardField fields1_ on card0_.id=fields1_.card_id where card0_.id=?
Static select for entity entity.Card: select card0_.id as id0_1_, fields1_.card_id as card2_3_, fields1_.key_id as key1_3_, fields1_.card_id as card2_1_0_, fields1_.key_id as key1_1_0_ from Card card0_ left outer join CardField fields1_ on card0_.id=fields1_.card_id where card0_.id=?
Static select for entity entity.Card: select card0_.id as id0_0_ from Card card0_ where card0_.id=?
Static select for entity entity.Card: select card0_.id as id0_0_ from Card card0_ where card0_.id=?
Static select for entity entity.Card: select card0_.id as id0_0_ from Card card0_ where card0_.id=?
Static select for action ACTION_MERGE on entity entity.Card: select card0_.id as id0_1_, fields1_.card_id as card2_3_, fields1_.key_id as key1_3_, fields1_.card_id as card2_1_0_, fields1_.key_id as key1_1_0_ from Card card0_ left outer join CardField fields1_ on card0_.id=fields1_.card_id where card0_.id=?
Static select for action ACTION_REFRESH on entity entity.Card: select card0_.id as id0_1_, fields1_.card_id as card2_3_, fields1_.key_id as key1_3_, fields1_.card_id as card2_1_0_, fields1_.key_id as key1_1_0_ from Card card0_ left outer join CardField fields1_ on card0_.id=fields1_.card_id where card0_.id=?
Static select for entity entity.Key: select key0_.id as id2_0_ from Key key0_ where key0_.id=?
Static select for entity entity.Key: select key0_.id as id2_0_ from Key key0_ where key0_.id=?
Static select for entity entity.Key: select key0_.id as id2_0_ from Key key0_ where key0_.id=?
Static select for entity entity.Key: select key0_.id as id2_0_ from Key key0_ where key0_.id=?
Static select for entity entity.Key: select key0_.id as id2_0_ from Key key0_ where key0_.id=?
Static select for action ACTION_MERGE on entity entity.Key: select key0_.id as id2_0_ from Key key0_ where key0_.id=?
Static select for action ACTION_REFRESH on entity entity.Key: select key0_.id as id2_0_ from Key key0_ where key0_.id=?
Static select for one-to-many entity.Card.fields: select fields0_.card_id as card2_1_, fields0_.key_id as key1_1_, fields0_.card_id as card2_1_0_, fields0_.key_id as key1_1_0_ from CardField fields0_ where fields0_.card_id=?
When I call the findCard method in my EJB, the entity manager keeps looping through the select process several hundred times, and then dies from a stack overflow.
Here's a sample of looping:
DEBUG 17-09 21:17:18,328 (Log4JLogger.java:debug:84) -select card0_.id as id0_1_, fields1_.card_id as card2_3_, fields1_.key_id as key1_3_, fields1_.card_id as card2_1_0_, fields1_.key_id as key1_1_0_ from Card card0_ left outer join CardField fields1_ on card0_.id=fields1_.card_id where card0_.id=?
DEBUG 17-09 21:17:18,343 (Log4JLogger.java:debug:84) -about to open ResultSet (open ResultSets: 0, globally: 0)
DEBUG 17-09 21:17:18,343 (Log4JLogger.java:debug:84) -loading entity: [entity.Card#cardId]
DEBUG 17-09 21:17:18,359 (Log4JLogger.java:debug:84) -about to open PreparedStatement (open PreparedStatements: 1, globally: 1)
DEBUG 17-09 21:17:18,359 (Log4JLogger.java:debug:84) -select card0_.id as id0_1_, fields1_.card_id as card2_3_, fields1_.key_id as key1_3_, fields1_.card_id as card2_1_0_, fields1_.key_id as key1_1_0_ from Card card0_ left outer join CardField fields1_ on card0_.id=fields1_.card_id where card0_.id=?
DEBUG 17-09 21:17:18,359 (Log4JLogger.java:debug:84) -about to open ResultSet (open ResultSets: 1, globally: 1)
DEBUG 17-09 21:17:18,359 (Log4JLogger.java:debug:84) -loading entity: [entity.Card#cardId]
DEBUG 17-09 21:17:18,359 (Log4JLogger.java:debug:84) -about to open PreparedStatement (open PreparedStatements: 2, globally: 2)
DEBUG 17-09 21:17:18,359 (Log4JLogger.java:debug:84) -select card0_.id as id0_1_, fields1_.card_id as card2_3_, fields1_.key_id as key1_3_, fields1_.card_id as card2_1_0_, fields1_.key_id as key1_1_0_ from Card card0_ left outer join CardField fields1_ on card0_.id=fields1_.card_id where card0_.id=?
DEBUG 17-09 21:17:18,359 (Log4JLogger.java:debug:84) -about to open ResultSet (open ResultSets: 2, globally: 2)
DEBUG 17-09 21:17:18,359 (Log4JLogger.java:debug:84) -loading entity: [entity.Card#cardId]
DEBUG 17-09 21:17:18,359 (Log4JLogger.java:debug:84) -about to open PreparedStatement (open PreparedStatements: 3, globally: 3)
DEBUG 17-09 21:17:18,375 (Log4JLogger.java:debug:84) -select card0_.id as id0_1_, fields1_.card_id as card2_3_, fields1_.key_id as key1_3_, fields1_.card_id as card2_1_0_, fields1_.key_id as key1_1_0_ from Card card0_ left outer join CardField fields1_ on card0_.id=fields1_.card_id where card0_.id=?
DEBUG 17-09 21:17:18,375 (Log4JLogger.java:debug:84) -about to open ResultSet (open ResultSets: 3, globally: 3)
DEBUG 17-09 21:17:18,375 (Log4JLogger.java:debug:84) -loading entity: [entity.Card#cardId]
DEBUG 17-09 21:17:18,375 (Log4JLogger.java:debug:84) -about to open PreparedStatement (open PreparedStatements: 4, globally: 4)
DEBUG 17-09 21:17:18,375 (Log4JLogger.java:debug:84) -select card0_.id as id0_1_, fields1_.card_id as card2_3_, fields1_.key_id as key1_3_, fields1_.card_id as card2_1_0_, fields1_.key_id as key1_1_0_ from Card card0_ left outer join CardField fields1_ on card0_.id=fields1_.card_id where card0_.id=?
DEBUG 17-09 21:17:18,375 (Log4JLogger.java:debug:84) -about to open ResultSet (open ResultSets: 4, globally: 4)
DEBUG 17-09 21:17:18,375 (Log4JLogger.java:debug:84) -loading entity: [entity.Card#cardId]
[Hibernate-JIRA] Created: (HHH-2608) allow delete-orphan cascade style in one-to-one mapping
by Joe Kelly (JIRA)
allow delete-orphan cascade style in one-to-one mapping
-------------------------------------------------------
Key: HHH-2608
URL: http://opensource.atlassian.com/projects/hibernate/browse/HHH-2608
Project: Hibernate3
Issue Type: New Feature
Components: core
Affects Versions: 3.2.4
Environment: 3.2.4, DB2 v8
Reporter: Joe Kelly
Please allow the cascade-style "delete-orphan" for one-to-one relationships. When I try to use that cascade style, I get the exception "org.hibernate.MappingException: single-valued associations do not support orphan delete".
I realize that the reference manual says "Note that single valued associations (many-to-one and one-to-one associations) do not support orphan delete." But why? I can think of many cases where you WOULD want parent-child semantics in a one-to-one relationship.
For example, I have encountered some legacy databases where an entity has all of its fields stored in one table EXCEPT for an optional large field (e.g. a blob field) that is stored in another table, presumably for some performance or storage optimization reason. In this case the two tables are joined with a one-to-one relationship, using a shared primary key. Here are some hypothetical tables, classes and mappings that illustrate this example:
company
(
    company_id (PK)
    name
)

company_extra
(
    company_extra_id (PK), (FK referencing company.company_id)
    some_extra_info
)

class Company
{
    int companyId;
    String name;
    CompanyExtra companyExtra;
}

class CompanyExtra
{
    int companyExtraId;
    String someExtraInfo;
    Company company;
}

<class name="Company" table="company">
    <id name="companyId" column="company_id">
        <generator class="sequence">
            <param name="sequence">COMPANY_SEQ</param>
        </generator>
    </id>
    <property name="name" column="name" />
    <one-to-one name="companyExtra" class="CompanyExtra" cascade="all, delete-orphan" />
</class>

<class name="CompanyExtra" table="company_extra">
    <id name="companyExtraId" column="company_extra_id" unsaved-value="null">
        <generator class="foreign">
            <param name="property">company</param>
        </generator>
    </id>
    <property name="someExtraInfo" column="some_extra_info" />
    <one-to-one name="company" class="Company" constrained="true" />
</class>
For the purposes of this example, CompanyExtra is a child of Company and belongs to one and only one instance of Company. It cannot be shared between Company instances and it cannot be an orphan (i.e. it cannot exist without a parent Company). Also, CompanyExtra is optional so you can have a Company without any associated CompanyExtra during Company's lifecycle.
To me, it seems natural and logical that if you set Company.companyExtra to null and save the Company, Hibernate should automatically delete the associated record in the company_extra table, provided the delete-orphan cascade style is configured (which, of course, is not currently allowed). This is what I want to do when editing an existing Company instance:
aCompany = session.load(...);
aCompany.setCompanyExtra(null);
aCompany.setName("some new name");
session.save(aCompany); // automatically deletes company_extra record
For now, I think the workaround is to explicitly delete the CompanyExtra instance in Java code, but that just doesn't seem natural to me and it isn't transparent. I do NOT want to do this:
aCompany = session.load(...);
companyExtra = aCompany.getCompanyExtra(); // ***extra, unnatural method call
aCompany.setCompanyExtra(null);
aCompany.setName("some new name");
session.save(aCompany);
session.delete(companyExtra); // ***another extra, unnatural method call
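Until orphan delete is supported for one-to-one, one way to keep the explicit delete out of calling code is to let the parent remember the detached child and have a DAO perform the delete. This is only an illustrative sketch; the orphanedExtra field and saveCompany method are hypothetical and not part of the mapping above:

class Company
{
    int companyId;
    String name;
    CompanyExtra companyExtra;
    CompanyExtra orphanedExtra; // hypothetical bookkeeping field, not mapped

    void setCompanyExtra(CompanyExtra extra)
    {
        if (extra == null && companyExtra != null) {
            orphanedExtra = companyExtra; // remember the child that was just detached
        }
        companyExtra = extra;
    }
}

// hypothetical DAO method
void saveCompany(Session session, Company aCompany)
{
    if (aCompany.orphanedExtra != null) {
        session.delete(aCompany.orphanedExtra); // what delete-orphan would do automatically
        aCompany.orphanedExtra = null;
    }
    session.saveOrUpdate(aCompany);
}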
[Hibernate-JIRA] Created: (ANN-573) Many-to-many with Attributes (Attribute-class with two @ManyToOne mappings as @Id) not possible
by Christian Köberl (JIRA)
Many-to-many with Attributes (Attribute-class with two @ManyToOne mappings as @Id) not possible
-----------------------------------------------------------------------------------------------
Key: ANN-573
URL: http://opensource.atlassian.com/projects/hibernate/browse/ANN-573
Project: Hibernate Annotations
Type: Bug
Components: binder
Versions: 3.2.1
Environment: Hibernate+Annotations 3.2.1-ga, HSQLDB 1.8.0.7
Reporter: Christian Köberl
Attachments: manytomanybug.zip
I'm trying to map the following relation:
Order 1-n OrderLine m-1 Product
where the OrderLine contains an amount of the ordered product.
The problem seems to be multiple @Id in combination with @ManyToOne:
@Entity
@Table(name = "order_line")
// @IdClass(OrderLinePK.class)
public class OrderLine implements Serializable
{
    @Id
    @ManyToOne(targetEntity = Order.class)
    @JoinColumn(name = "order_id", nullable = false)
    private Order order;

    @Id
    @ManyToOne(targetEntity = Product.class)
    @JoinColumn(name = "product_id", nullable = false)
    ...
}
Hibernate maps the OrderLine class in the following way:
DEBUG SchemaUpdate:149 - create table order_line (product varbinary(255) not null, order varbinary(255) not null, amount integer, primary key (order))
(maybe this relates to http://opensource.atlassian.com/projects/hibernate/browse/ANN-435)
When I add an IdClass to the OrderLine, I get the following exception:
Initial SessionFactory creation failed.org.hibernate.AnnotationException: mappedBy reference an unknown target entity property: model.OrderLine.order in model.Order.lineItems
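The actual id class is in the attached manytomanybug.zip; for readers of this report, an id class for this mapping might look roughly like the following (purely illustrative, with fields mirroring the two @Id properties):

public class OrderLinePK implements Serializable
{
    private Order order;
    private Product product;

    public OrderLinePK()
    {
    }

    // an id class needs a no-arg constructor plus equals/hashCode over its fields
    public boolean equals(Object o)
    {
        if (this == o) return true;
        if (!(o instanceof OrderLinePK)) return false;
        OrderLinePK other = (OrderLinePK) o;
        return order.equals(other.order) && product.equals(other.product);
    }

    public int hashCode()
    {
        return 31 * order.hashCode() + product.hashCode();
    }
}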
The example is attached as a runnable Maven 2 project; just run mvn test. The test dao.OrderDaoAnnotationTest fails.
In the project I also tried to map the same classes with hbm files, and this works (dao.OrderDaoHbmTest).
[Hibernate-JIRA] Created: (HHH-2309) fetch only the lazy property needed
by German de la Cruz (JIRA)
fetch only the lazy property needed
------------------------------------
Key: HHH-2309
URL: http://opensource.atlassian.com/projects/hibernate/browse/HHH-2309
Project: Hibernate3
Type: New Feature
Components: core
Versions: 3.2.1
Reporter: German de la Cruz
The method AbstractEntityPersister.initializeLazyProperty(..) loads all lazy properties when it is called. It would be great if it could load only the requested property.
I think the only changes needed are in AbstractEntityPersister.initializeLazyPropertiesFromDatastore(...) and AbstractEntityPersister.initializeLazyPropertiesFromCache(...). We must change them so that only the referenced property is loaded.
After that, we must change AbstractFieldInterceptor.intercept(..) to update the uninitializedFields collection more precisely (i.e. remove only the property that was just loaded instead of nulling the whole collection).
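To make the suggested bookkeeping concrete, here is a simplified, self-contained sketch of the idea (these are not Hibernate's actual classes or signatures):

import java.util.HashSet;
import java.util.Set;

// Simplified illustration of per-field lazy-loading bookkeeping.
class LazyFieldTracker {

    private final Set<String> uninitializedFields = new HashSet<String>();

    LazyFieldTracker(Set<String> lazyFieldNames) {
        uninitializedFields.addAll(lazyFieldNames);
    }

    // Called when a lazy field is first accessed.
    Object intercept(String fieldName) {
        if (uninitializedFields.contains(fieldName)) {
            Object value = loadSingleField(fieldName); // fetch only this property
            uninitializedFields.remove(fieldName);     // instead of clearing the whole set
            return value;
        }
        return null; // field was already initialized
    }

    private Object loadSingleField(String fieldName) {
        // placeholder for a select that reads just this property's column(s)
        return null;
    }
}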
Also, why is a query executed at lines 777 to 780? I don't think it is necessary.
Thanks.
[Hibernate-JIRA] Created: (HHH-2470) Use of session.createSQLQuery causes memory leak
by Bjørn Bjerkeli (JIRA)
Use of session.createSQLQuery causes memory leak
-------------------------------------------------
Key: HHH-2470
URL: http://opensource.atlassian.com/projects/hibernate/browse/HHH-2470
Project: Hibernate3
Type: Bug
Components: query-sql
Versions: 3.1.3
Environment: Win XP, Oracle 10g, Java 1.4.2
Reporter: Bjørn Bjerkeli
Attachments: TestCase.zip
NativeSQLQuerySpecification fails to implement equals and hashCode properly because the SQLQueryReturn implementations and SQLQueryScalarReturn, which are members of NativeSQLQuerySpecification, do not implement hashCode and equals themselves. I can see that NativeSQLQuerySpecification has been changed in 3.2, but the problem is still there.
NativeSQLQuerySpecification instances are used as keys for retrieving and caching NativeSQLQueryPlan instances.
This makes the caching mechanism pretty useless for queries created via session.createSQLQuery, because new entries keep being added to the QueryPlanCache and its SoftLimitMRUCache member.
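As a minimal illustration of the effect, using a hypothetical key class and a plain HashMap rather than Hibernate's actual classes: when the cache key does not override equals/hashCode, two logically identical keys never hit the same entry, so the cache only ever grows:

import java.util.HashMap;
import java.util.Map;

public class IdentityKeyLeakDemo {

    // Stand-in for a query specification that lacks equals/hashCode.
    static class QuerySpec {
        final String sql;
        QuerySpec(String sql) { this.sql = sql; }
        // no equals/hashCode: identity semantics, like the reported NativeSQLQuerySpecification
    }

    public static void main(String[] args) {
        Map<QuerySpec, String> planCache = new HashMap<QuerySpec, String>();
        for (int i = 0; i < 1000; i++) {
            // the "same" query built again on every call, as createSQLQuery does
            QuerySpec key = new QuerySpec("select * from foo");
            if (!planCache.containsKey(key)) {        // always false -> always a miss
                planCache.put(key, "compiled plan");  // a new entry every time
            }
        }
        System.out.println("cache size: " + planCache.size()); // prints 1000, not 1
    }
}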
The more serious problem caused by this stems from the implementation of SoftLimitMRUCache, which in turn uses LRUMap from commons-collections. The put method of the cache is not threadsafe, and that allows the following fragment in LRUMap to let the map grow beyond its maximumSize, because the containsKey method returns an incorrect result when the map is updated concurrently.
public Object put( Object key, Object value ) {
    int mapSize = size();
    Object retval = null;
    if ( mapSize >= maximumSize ) {
        // don't retire LRU if you are just
        // updating an existing key
        if (!containsKey(key)) {
            // lets retire the least recently used item in the cache
            removeLRU();
        }
    }
    retval = super.put(key,value);
    return retval;
}
I have included a test case that demonstrates:
1) The wrong implementation of equals and hashCode in NativeSQLQuerySpecification
2) That concurrent use of LRUMap causes the map to grow beyond its max limit
3) That concurrent execution of session.createSQLQuery causes a memory leak due to 1) and 2)
I would be more than happy to contribute to get this fixed. Just let me know.
[Hibernate-JIRA] Created: (HHH-2339) merge instrumented class fails
by Alexey Romanchuk (JIRA)
merge instrumented class fails
------------------------------
Key: HHH-2339
URL: http://opensource.atlassian.com/projects/hibernate/browse/HHH-2339
Project: Hibernate3
Type: Bug
Components: core
Versions: 3.2.1, 3.2.0.ga
Environment: Hibernate 3.2.1 (tested with 3.2.0 too)
Postgresql 8.1.4
Reporter: Alexey Romanchuk
Priority: Blocker
When we try to merge an instrumented detached entity that has a lazy no-proxy many-to-one association, we get an org.hibernate.LazyInitializationException.
It occurs because the cascade tries to process all associations in the Cascade class, but the object is disconnected from the session and cannot obtain the lazy property.
If the classes are not instrumented, everything works fine.
Why doesn't the merge action override the performOnLazyProperty method to prevent fetching lazy properties?
Here is a small example that illustrates the problem.
===MAPPING===
<hibernate-mapping>
    <class name="Client" table="test_client">
        <id name="id" column="id" type="long">
            <generator class="sequence">
                <param name="sequence">test_seq</param>
            </generator>
        </id>
        <property name="name" column="name"/>
        <many-to-one name="info" class="LoginInfo" lazy="no-proxy" column="info_id" cascade="merge,evict"/>
    </class>
    <class name="LoginInfo" table="test_login_info">
        <id name="id" column="id" type="long">
            <generator class="sequence">
                <param name="sequence">test_seq</param>
            </generator>
        </id>
        <property name="login" column="login"/>
        <property name="pass" column="pass"/>
    </class>
</hibernate-mapping>
===JAVA===
===DOMAIN===
public class Client
{
    private long id;
    private String name;
    private LoginInfo info;
    //getters and setters
}

public class LoginInfo
{
    private long id;
    private String login;
    private String pass;
    //getters and setters
}
===USAGE===
public class Main
{
    public static void main( String[] args )
    {
        // sf is an already-configured SessionFactory (setup omitted in the report)
        Session s1 = sf.openSession();
        s1.beginTransaction();
        Client c = ( Client ) s1.get( Client.class, 2l );
        s1.flush();
        s1.getTransaction().commit();
        s1.close();

        Session s2 = sf.openSession();
        s2.beginTransaction();
        c = ( Client ) s2.merge( c );
        s2.flush();
        s2.getTransaction().commit();
        s2.close();
    }
}
==STACKTRACE===
org.hibernate.LazyInitializationException: session is not connected
at org.hibernate.intercept.AbstractFieldInterceptor.intercept(AbstractFieldInterceptor.java:67)
at org.hibernate.intercept.cglib.FieldInterceptorImpl.readObject(FieldInterceptorImpl.java:75)
at Client.$cglib_read_info(Client.java)
at Client.getInfo(Client.java:12)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at org.hibernate.property.BasicPropertyAccessor$BasicGetter.get(BasicPropertyAccessor.java:145)
at org.hibernate.tuple.entity.AbstractEntityTuplizer.getPropertyValue(AbstractEntityTuplizer.java:277)
at org.hibernate.persister.entity.AbstractEntityPersister.getPropertyValue(AbstractEntityPersister.java:3529)
at org.hibernate.engine.Cascade.cascade(Cascade.java:130)
at org.hibernate.event.def.DefaultMergeEventListener.cascadeOnMerge(DefaultMergeEventListener.java:407)
at org.hibernate.event.def.DefaultMergeEventListener.entityIsDetached(DefaultMergeEventListener.java:266)
at org.hibernate.event.def.DefaultMergeEventListener.onMerge(DefaultMergeEventListener.java:120)
at org.hibernate.event.def.DefaultMergeEventListener.onMerge(DefaultMergeEventListener.java:53)
at org.hibernate.impl.SessionImpl.fireMerge(SessionImpl.java:677)
at org.hibernate.impl.SessionImpl.merge(SessionImpl.java:661)
at org.hibernate.impl.SessionImpl.merge(SessionImpl.java:665)
at Main.main(Main.java:56)
[Hibernate-JIRA] Created: (HHH-2388) Insert w/ identity column fails on Sybase but no exception occurs
by Tim Morrow (JIRA)
Insert w/ identity column fails on Sybase but no exception occurs
-----------------------------------------------------------------
Key: HHH-2388
URL: http://opensource.atlassian.com/projects/hibernate/browse/HHH-2388
Project: Hibernate3
Type: Bug
Components: core
Versions: 3.2.1
Environment: Hibernate 3.2.1.GA with annotations
Sybase jConnect 6.05 JDBC driver
Sybase ASE 15 database
Reporter: Tim Morrow
I have a scenario where storing a new entity fails (i.e. the row is not inserted) but Hibernate does not realize this. No exceptions are thrown. Later, this leads to AssertionFailures (if two such entities fail in the same session - they both have the same PK).
The problem specifically occurs with a table that has a numeric column with precision (e.g. numeric(10,4)) and an identity column when using Sybase ASE.
==========
To reproduce:
1. Use Sybase jConnect 6.05 JDBC driver with Sybase ASE 15 database.
2. Define an Entity with a long ID and BigDecimal numeric column.
3. Create a corresponding table with an identity column and numeric(10, 4) column.
For example:
hibernate.MyEntity.java:
-----------------------------------------
package hibernate;

import java.math.BigDecimal;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

import org.hibernate.validator.NotNull;

@Entity
@Table(name = "z_tim_test")
public class MyEntity {

    private BigDecimal cost;
    private long id;

    public MyEntity() {}

    @Column(columnDefinition = "numeric(10,4)")
    @NotNull public final BigDecimal getCost() {
        return cost;
    }

    @GeneratedValue(strategy = GenerationType.AUTO)
    @Id public final long getId() {
        return id;
    }

    public final void setCost(BigDecimal cost) {
        this.cost = cost;
    }

    public final void setId(long id) {
        this.id = id;
    }
}
Table:
-----------------------------------------
CREATE TABLE z_tim_test
(
    id numeric(19) PRIMARY KEY not null,
    cost numeric(10,4) not null
);
4. Write some code that stores a new entity and tries to load it:
MyEntity myEntity = new MyEntity();
myEntity.setCost(new BigDecimal("123.12345"));
session.save(myEntity);
session.flush();
session.clear();
List<MyEntity> results = session.createCriteria(MyEntity.class).list();
if (results.size() != 1) {
    throw new IllegalStateException("Expected 1 result");
}
This test will throw the IllegalStateException because the row was not persisted and no errors occurred.
Reason:
* Sybase does not throw any SQLException when you try to persist a numeric value whose scale exceeds that defined on the column. Instead, it returns an updateCount of zero and an identity column value of zero.
* Hibernate does not check the updateCount after executing the statement when using an Identity column. The offending code is in:
org.hibernate.id.IdentityGenerator$InsertSelectDelegate
public Serializable executeAndExtract(PreparedStatement insert) throws SQLException {
    if ( !insert.execute() ) {
        while ( !insert.getMoreResults() && insert.getUpdateCount() != -1 ) {
            // do nothing until we hit the rsult set containing the generated id
        }
    }
    ResultSet rs = insert.getResultSet();
    try {
        return IdentifierGeneratorFactory.getGeneratedIdentity( rs, persister.getIdentifierType() );
    }
    finally {
        rs.close();
    }
}
It ignores the updateCount.
The net result is that the object is assigned a PK value of zero, Hibernate continues as if nothing went wrong, and my application is unaware that the row failed to insert.
Solution:
It would seem to me that replacing the above code with the following would take care of the problem:

if (!insert.execute()) {
    if (insert.getUpdateCount() < 1) {
        throw new HibernateException("No update occurred");
    }
    while (!insert.getMoreResults()) {
        // do nothing until we hit the rsult set containing the generated id
    }
}