Hibernate SVN: r11308 - trunk/HibernateExt/shards.
by hibernate-commits@lists.jboss.org
Author: epbernard
Date: 2007-03-19 20:28:06 -0400 (Mon, 19 Mar 2007)
New Revision: 11308
Modified:
trunk/HibernateExt/shards/build.xml
trunk/HibernateExt/shards/changelog.txt
Log:
Minor build fix to be able to test using the distro
Modified: trunk/HibernateExt/shards/build.xml
===================================================================
--- trunk/HibernateExt/shards/build.xml 2007-03-19 23:08:21 UTC (rev 11307)
+++ trunk/HibernateExt/shards/build.xml 2007-03-20 00:28:06 UTC (rev 11308)
@@ -6,7 +6,7 @@
-->
-<project name="Hibernate Search" default="dist" basedir=".">
+<project name="Hibernate Shards" default="dist" basedir=".">
<!-- Give user a chance to override without editing this file
(and without typing -D each time it compiles it) -->
@@ -57,7 +57,7 @@
<target name="compiletest" depends="common-build.compiletest">
<copy todir="${testclasses.dir}">
- <fileset dir="src/test">
+ <fileset dir="${test.dir}">
<exclude name="**/*.java"/>
</fileset>
</copy>
Modified: trunk/HibernateExt/shards/changelog.txt
===================================================================
--- trunk/HibernateExt/shards/changelog.txt 2007-03-19 23:08:21 UTC (rev 11307)
+++ trunk/HibernateExt/shards/changelog.txt 2007-03-20 00:28:06 UTC (rev 11308)
@@ -4,4 +4,4 @@
3.0.0.Beta1 (19-03-2007)
------------------------
-Initial release
\ No newline at end of file
+Initial release (See the documentation for more information)
\ No newline at end of file
Hibernate SVN: r11307 - trunk/HibernateExt/shards/doc/reference/en/modules.
by hibernate-commits@lists.jboss.org
Author: max.ross
Date: 2007-03-19 19:08:21 -0400 (Mon, 19 Mar 2007)
New Revision: 11307
Modified:
trunk/HibernateExt/shards/doc/reference/en/modules/limitations.xml
Log:
fix typos
Modified: trunk/HibernateExt/shards/doc/reference/en/modules/limitations.xml
===================================================================
--- trunk/HibernateExt/shards/doc/reference/en/modules/limitations.xml 2007-03-19 22:43:03 UTC (rev 11306)
+++ trunk/HibernateExt/shards/doc/reference/en/modules/limitations.xml 2007-03-19 23:08:21 UTC (rev 11307)
@@ -7,7 +7,7 @@
In order to speed-up the initial release of Hibernate Shards, some
parts of the Hibernate API that we rarely use were left unimplemented. Of course things that
we rarely used are probably critical for some applications, so if we've left you out in the cold
- we apologize. We're committed to getting the rest of the api implemented quickly. For details on which
+ we apologize. We're committed to getting the rest of the API implemented quickly. For details on which
methods were not implemented, please see the Javadoc for <classname>ShardedSessionImpl</classname>,
<classname>ShardedCriteriaImpl</classname>, and <classname>ShardedQueryImpl</classname>.
</para>
@@ -115,7 +115,7 @@
<classname>StatefulInterceptorFactory</classname> implements this interface, Hibernate Shards will provide the
<classname>Interceptor</classname> with a reference to the real (shard-specific)
<classname>Session</classname> once the factory constructs it. This way your
- <classname>Interceptor</classname> can safely and accurately interact with a speicific shard. Here's an example:
+ <classname>Interceptor</classname> can safely and accurately interact with a specific shard. Here's an example:
<programlisting><![CDATA[
public class MyStatefulInterceptor implements Interceptor, RequiresSession {
private Session session;
@@ -196,7 +196,7 @@
<para>We have a number
of ideas about how to make this easy to deal with but we have not yet implemented any of them.
In the short term, we think your best bet is to either not create object relationships between
- sharded entities and replicated entities. In otherwords, just model the relationship like
+ sharded entities and replicated entities. In other words, just model the relationship like
you would if you weren't using an OR Mapping tool. We know this is clunky and annoying.
We'll take care of it soon.
</para>
Hibernate SVN: r11306 - in trunk/HibernateExt/shards: src/java/org/hibernate/shards/cfg and 2 other directories.
by hibernate-commits@lists.jboss.org
Author: max.ross
Date: 2007-03-19 18:43:03 -0400 (Mon, 19 Mar 2007)
New Revision: 11306
Modified:
trunk/HibernateExt/shards/doc/reference/en/modules/architecture.xml
trunk/HibernateExt/shards/doc/reference/en/modules/configuration.xml
trunk/HibernateExt/shards/doc/reference/en/modules/limitations.xml
trunk/HibernateExt/shards/src/java/org/hibernate/shards/cfg/ShardedEnvironment.java
trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/hsql/config/shard0.hibernate.cfg.xml
trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/hsql/config/shard1.hibernate.cfg.xml
trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/hsql/config/shard2.hibernate.cfg.xml
trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/mysql/config/shard0.hibernate.cfg.xml
trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/mysql/config/shard1.hibernate.cfg.xml
trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/mysql/config/shard2.hibernate.cfg.xml
Log:
change property name from something absurdly verbose to something only marginally verbose (hibernate.shard.enable_cross_shard_relationship_checks)
Also added a small section to the architecture page stating that we require Java 1.5 or higher.
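[Editor's note: the renamed property appears throughout the XML configs below. As a hedged sketch of the same idea in code, the following uses a plain `java.util.Properties` object to stand in for Hibernate's `Configuration`; the class name `ShardProps` and the helper method are hypothetical, but the property names are the real ones from this commit.]

```java
import java.util.Properties;

// Hypothetical helper mirroring the per-shard settings from the
// shardN.hibernate.cfg.xml files touched in r11306.
public class ShardProps {
    public static Properties shardConfig(int shardId) {
        Properties props = new Properties();
        // Identifies which shard this session factory serves.
        props.setProperty("hibernate.connection.shard_id", Integer.toString(shardId));
        // The renamed (formerly absurdly verbose) cross-shard check flag.
        props.setProperty("hibernate.shard.enable_cross_shard_relationship_checks", "true");
        return props;
    }
}
```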
Modified: trunk/HibernateExt/shards/doc/reference/en/modules/architecture.xml
===================================================================
--- trunk/HibernateExt/shards/doc/reference/en/modules/architecture.xml 2007-03-19 22:19:11 UTC (rev 11305)
+++ trunk/HibernateExt/shards/doc/reference/en/modules/architecture.xml 2007-03-19 22:43:03 UTC (rev 11306)
@@ -125,4 +125,11 @@
</para>
<para>For more information on Sharding Strategies please consult the chapter of the same name.</para>
</sect1>
+ <sect1 id="shards-architecture-requirements" revision="1">
+ <title>System Requirements</title>
+ <para>
+ Hibernate Shards has the same system requirements as Hibernate Core, with the additional restriction
+ that we require Java 1.5 or higher.
+ </para>
+ </sect1>
</chapter>
Modified: trunk/HibernateExt/shards/doc/reference/en/modules/configuration.xml
===================================================================
--- trunk/HibernateExt/shards/doc/reference/en/modules/configuration.xml 2007-03-19 22:19:11 UTC (rev 11305)
+++ trunk/HibernateExt/shards/doc/reference/en/modules/configuration.xml 2007-03-19 22:43:03 UTC (rev 11306)
@@ -211,7 +211,7 @@
7 <property name="connection.username">my_user</property>
8 <property name="connection.password">my_password</property>
9 <property name="hibernate.connection.shard_id">0</property> <!-- new -->
- 10 <property name="hibernate.shard.check_all_associated_objects_for_different_shards">true</property> <!-- new -->
+ 10 <property name="hibernate.shard.enable_cross_shard_relationship_checks">true</property> <!-- new -->
11 </session-factory>
12 </hibernate-configuration>
]]></programlisting>
@@ -225,7 +225,7 @@
7 <property name="connection.username">my_user</property>
8 <property name="connection.password">my_password</property>
9 <property name="hibernate.connection.shard_id">1</property> <!-- new -->
- 10 <property name="hibernate.shard.check_all_associated_objects_for_different_shards">true</property> <!-- new -->
+ 10 <property name="hibernate.shard.enable_cross_shard_relationship_checks">true</property> <!-- new -->
11 </session-factory>
12 </hibernate-configuration>
]]></programlisting>
@@ -240,7 +240,7 @@
</para>
<para>
The other noteworthy addition is the rather verbose but hopefully descriptive
- "hibernate.shard.check_all_associated_objects_for_different_shards." You can read more about this in the
+ "hibernate.shard.enable_cross_shard_relationship_checks." You can read more about this in the
chapter on limitations.
</para>
<para>
Modified: trunk/HibernateExt/shards/doc/reference/en/modules/limitations.xml
===================================================================
--- trunk/HibernateExt/shards/doc/reference/en/modules/limitations.xml 2007-03-19 22:19:11 UTC (rev 11305)
+++ trunk/HibernateExt/shards/doc/reference/en/modules/limitations.xml 2007-03-19 22:43:03 UTC (rev 11306)
@@ -44,7 +44,7 @@
database, so if you have lazy-loaded associations the interceptor will resolve those associations as part
of its checks. This is potentially quite expensive, and may not be suitable for a production system.
With this in mind, we've made it easy to configure whether or not this check is performed via the
- "hibernate.shard.check_all_associated_objects_for_different_shards" property we referenced in the chapter
+ "hibernate.shard.enable_cross_shard_relationship_checks" property we referenced in the chapter
on configuration. If this property is set to "true" a <classname>CrossShardRelationshipDetectingInterceptor</classname>
will be registered with every <classname>ShardedSession</classname> that is established. Don't worry,
you can still register your own interceptor as well. Our expectation is that most applications will have
Modified: trunk/HibernateExt/shards/src/java/org/hibernate/shards/cfg/ShardedEnvironment.java
===================================================================
--- trunk/HibernateExt/shards/src/java/org/hibernate/shards/cfg/ShardedEnvironment.java 2007-03-19 22:19:11 UTC (rev 11305)
+++ trunk/HibernateExt/shards/src/java/org/hibernate/shards/cfg/ShardedEnvironment.java 2007-03-19 22:43:03 UTC (rev 11306)
@@ -36,7 +36,7 @@
* performance but will prevent the programmer from ending up with the
* same entity on multiple shards, which is bad (at least in the current version).
*/
- public static final String CHECK_ALL_ASSOCIATED_OBJECTS_FOR_DIFFERENT_SHARDS = "hibernate.shard.check_all_associated_objects_for_different_shards";
+ public static final String CHECK_ALL_ASSOCIATED_OBJECTS_FOR_DIFFERENT_SHARDS = "hibernate.shard.enable_cross_shard_relationship_checks";
private ShardedEnvironment() {}
}
Modified: trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/hsql/config/shard0.hibernate.cfg.xml
===================================================================
--- trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/hsql/config/shard0.hibernate.cfg.xml 2007-03-19 22:19:11 UTC (rev 11305)
+++ trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/hsql/config/shard0.hibernate.cfg.xml 2007-03-19 22:43:03 UTC (rev 11306)
@@ -28,7 +28,7 @@
<property name="connection.username">sa</property>
<property name="connection.password"></property>
<property name="hibernate.connection.shard_id">0</property>
- <property name="hibernate.shard.check_all_associated_objects_for_different_shards">true</property>
+ <property name="hibernate.shard.enable_cross_shard_relationship_checks">true</property>
</session-factory>
</hibernate-configuration>
Modified: trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/hsql/config/shard1.hibernate.cfg.xml
===================================================================
--- trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/hsql/config/shard1.hibernate.cfg.xml 2007-03-19 22:19:11 UTC (rev 11305)
+++ trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/hsql/config/shard1.hibernate.cfg.xml 2007-03-19 22:43:03 UTC (rev 11306)
@@ -28,7 +28,7 @@
<property name="connection.username">sa</property>
<property name="connection.password"></property>
<property name="hibernate.connection.shard_id">1</property>
- <property name="hibernate.shard.check_all_associated_objects_for_different_shards">true</property>
+ <property name="hibernate.shard.enable_cross_shard_relationship_checks">true</property>
</session-factory>
</hibernate-configuration>
Modified: trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/hsql/config/shard2.hibernate.cfg.xml
===================================================================
--- trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/hsql/config/shard2.hibernate.cfg.xml 2007-03-19 22:19:11 UTC (rev 11305)
+++ trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/hsql/config/shard2.hibernate.cfg.xml 2007-03-19 22:43:03 UTC (rev 11306)
@@ -28,7 +28,7 @@
<property name="connection.username">sa</property>
<property name="connection.password"></property>
<property name="hibernate.connection.shard_id">2</property>
- <property name="hibernate.shard.check_all_associated_objects_for_different_shards">true</property>
+ <property name="hibernate.shard.enable_cross_shard_relationship_checks">true</property>
</session-factory>
</hibernate-configuration>
Modified: trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/mysql/config/shard0.hibernate.cfg.xml
===================================================================
--- trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/mysql/config/shard0.hibernate.cfg.xml 2007-03-19 22:19:11 UTC (rev 11305)
+++ trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/mysql/config/shard0.hibernate.cfg.xml 2007-03-19 22:43:03 UTC (rev 11306)
@@ -28,7 +28,7 @@
<property name="connection.username">shard_user</property>
<property name="connection.password">shard</property>
<property name="hibernate.connection.shard_id">0</property>
- <property name="hibernate.shard.check_all_associated_objects_for_different_shards">true</property>
+ <property name="hibernate.shard.enable_cross_shard_relationship_checks">true</property>
</session-factory>
</hibernate-configuration>
Modified: trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/mysql/config/shard1.hibernate.cfg.xml
===================================================================
--- trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/mysql/config/shard1.hibernate.cfg.xml 2007-03-19 22:19:11 UTC (rev 11305)
+++ trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/mysql/config/shard1.hibernate.cfg.xml 2007-03-19 22:43:03 UTC (rev 11306)
@@ -28,7 +28,7 @@
<property name="connection.username">shard_user</property>
<property name="connection.password">shard</property>
<property name="hibernate.connection.shard_id">1</property>
- <property name="hibernate.shard.check_all_associated_objects_for_different_shards">true</property>
+ <property name="hibernate.shard.enable_cross_shard_relationship_checks">true</property>
</session-factory>
</hibernate-configuration>
Modified: trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/mysql/config/shard2.hibernate.cfg.xml
===================================================================
--- trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/mysql/config/shard2.hibernate.cfg.xml 2007-03-19 22:19:11 UTC (rev 11305)
+++ trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/mysql/config/shard2.hibernate.cfg.xml 2007-03-19 22:43:03 UTC (rev 11306)
@@ -28,7 +28,7 @@
<property name="connection.username">shard_user</property>
<property name="connection.password">shard</property>
<property name="hibernate.connection.shard_id">2</property>
- <property name="hibernate.shard.check_all_associated_objects_for_different_shards">true</property>
+ <property name="hibernate.shard.enable_cross_shard_relationship_checks">true</property>
</session-factory>
</hibernate-configuration>
Hibernate SVN: r11305 - in trunk/HibernateExt/shards: src/java/org/hibernate/shards/session and 3 other directories.
by hibernate-commits@lists.jboss.org
Author: max.ross
Date: 2007-03-19 18:19:11 -0400 (Mon, 19 Mar 2007)
New Revision: 11305
Removed:
trunk/HibernateExt/shards/jdbc/mysql-3.0.14.jar
Modified:
trunk/HibernateExt/shards/src/java/org/hibernate/shards/session/CrossShardRelationshipDetectingInterceptor.java
trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/MemoryLeakPlugger.java
trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/mysql/MySQLDatabasePlatform.java
trunk/HibernateExt/shards/src/test/org/hibernate/shards/query/SetBigIntegerEventTest.java
Log:
remove compile-time dependency on mysql
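[Editor's note: the key change in `MySQLDatabasePlatform` below swaps a class literal for a string literal. As a hedged, self-contained sketch of that pattern (class name `DriverRef` is hypothetical): referring to an optional dependency by its fully-qualified name as a `String` lets the file compile without the MySQL jar on the classpath, deferring the dependency to runtime.]

```java
public class DriverRef {
    // Driver.class.getName() would require com.mysql.jdbc.Driver at compile
    // time; a string literal removes that compile-time dependency.
    public static final String DRIVER_CLASS = "com.mysql.jdbc.Driver";

    // Only this call actually needs the driver jar; it throws
    // ClassNotFoundException if the jar is absent at runtime.
    public static Class<?> loadDriver() throws ClassNotFoundException {
        return Class.forName(DRIVER_CLASS);
    }
}
```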
Deleted: trunk/HibernateExt/shards/jdbc/mysql-3.0.14.jar
===================================================================
(Binary files differ)
Modified: trunk/HibernateExt/shards/src/java/org/hibernate/shards/session/CrossShardRelationshipDetectingInterceptor.java
===================================================================
--- trunk/HibernateExt/shards/src/java/org/hibernate/shards/session/CrossShardRelationshipDetectingInterceptor.java 2007-03-19 22:06:45 UTC (rev 11304)
+++ trunk/HibernateExt/shards/src/java/org/hibernate/shards/session/CrossShardRelationshipDetectingInterceptor.java 2007-03-19 22:19:11 UTC (rev 11305)
@@ -18,14 +18,6 @@
package org.hibernate.shards.session;
-import org.hibernate.shards.CrossShardAssociationException;
-import org.hibernate.shards.ShardId;
-import org.hibernate.shards.util.Pair;
-import org.hibernate.shards.session.ShardedSessionImpl;
-import org.hibernate.shards.util.Iterables;
-import org.hibernate.shards.util.Lists;
-import org.hibernate.shards.util.Preconditions;
-
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.hibernate.CallbackException;
@@ -33,6 +25,12 @@
import org.hibernate.ObjectNotFoundException;
import org.hibernate.collection.PersistentCollection;
import org.hibernate.proxy.HibernateProxy;
+import org.hibernate.shards.CrossShardAssociationException;
+import org.hibernate.shards.ShardId;
+import org.hibernate.shards.util.Iterables;
+import org.hibernate.shards.util.Lists;
+import org.hibernate.shards.util.Pair;
+import org.hibernate.shards.util.Preconditions;
import org.hibernate.type.Type;
import java.io.Serializable;
Modified: trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/MemoryLeakPlugger.java
===================================================================
--- trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/MemoryLeakPlugger.java 2007-03-19 22:06:45 UTC (rev 11304)
+++ trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/MemoryLeakPlugger.java 2007-03-19 22:19:11 UTC (rev 11305)
@@ -18,13 +18,12 @@
package org.hibernate.shards.integration;
-import org.hibernate.shards.Shard;
-import org.hibernate.shards.session.ShardedSessionImpl;
-
import net.sf.cglib.proxy.Callback;
import org.hibernate.engine.StatefulPersistenceContext;
import org.hibernate.impl.SessionImpl;
+import org.hibernate.shards.Shard;
+import org.hibernate.shards.session.ShardedSessionImpl;
import java.lang.reflect.AccessibleObject;
import java.lang.reflect.Field;
Modified: trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/mysql/MySQLDatabasePlatform.java
===================================================================
--- trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/mysql/MySQLDatabasePlatform.java 2007-03-19 22:06:45 UTC (rev 11304)
+++ trunk/HibernateExt/shards/src/test/org/hibernate/shards/integration/platform/mysql/MySQLDatabasePlatform.java 2007-03-19 22:19:11 UTC (rev 11305)
@@ -18,8 +18,6 @@
package org.hibernate.shards.integration.platform.mysql;
-import com.mysql.jdbc.Driver;
-
import org.hibernate.shards.integration.IdGenType;
import org.hibernate.shards.integration.platform.BaseDatabasePlatform;
import org.hibernate.shards.integration.platform.DatabasePlatform;
@@ -31,7 +29,7 @@
* @author maxr(a)google.com (Max Ross)
*/
public class MySQLDatabasePlatform extends BaseDatabasePlatform {
- private static final String DRIVER_CLASS = Driver.class.getName();
+ private static final String DRIVER_CLASS = "com.mysql.jdbc.Driver";
private static final String DB_URL_PREFIX = "jdbc:mysql://localhost:3306/shard";
private static final String DB_USER = "shard_user";
private static final String DB_PASSWORD = "shard";
Modified: trunk/HibernateExt/shards/src/test/org/hibernate/shards/query/SetBigIntegerEventTest.java
===================================================================
--- trunk/HibernateExt/shards/src/test/org/hibernate/shards/query/SetBigIntegerEventTest.java 2007-03-19 22:06:45 UTC (rev 11304)
+++ trunk/HibernateExt/shards/src/test/org/hibernate/shards/query/SetBigIntegerEventTest.java 2007-03-19 22:19:11 UTC (rev 11305)
@@ -18,10 +18,10 @@
package org.hibernate.shards.query;
-import org.hibernate.shards.defaultmock.QueryDefaultMock;
+import junit.framework.TestCase;
-import junit.framework.TestCase;
import org.hibernate.Query;
+import org.hibernate.shards.defaultmock.QueryDefaultMock;
import java.math.BigInteger;
Hibernate SVN: r11304 - in branches/Branch_3_2/Hibernate3: src/org/hibernate/id and 6 other directories.
by hibernate-commits@lists.jboss.org
Author: steve.ebersole(a)jboss.com
Date: 2007-03-19 18:06:45 -0400 (Mon, 19 Mar 2007)
New Revision: 11304
Added:
branches/Branch_3_2/Hibernate3/src/org/hibernate/dialect/TeradataDialect.java
Modified:
branches/Branch_3_2/Hibernate3/src/org/hibernate/dialect/Dialect.java
branches/Branch_3_2/Hibernate3/src/org/hibernate/dialect/SQLServerDialect.java
branches/Branch_3_2/Hibernate3/src/org/hibernate/dialect/SybaseDialect.java
branches/Branch_3_2/Hibernate3/src/org/hibernate/id/MultipleHiLoPerTableGenerator.java
branches/Branch_3_2/Hibernate3/src/org/hibernate/id/TableGenerator.java
branches/Branch_3_2/Hibernate3/src/org/hibernate/id/enhanced/SegmentedTableGenerator.java
branches/Branch_3_2/Hibernate3/src/org/hibernate/jdbc/ConnectionManager.java
branches/Branch_3_2/Hibernate3/src/org/hibernate/mapping/Column.java
branches/Branch_3_2/Hibernate3/src/org/hibernate/mapping/Table.java
branches/Branch_3_2/Hibernate3/test/org/hibernate/test/jpa/lock/JPALockTest.java
branches/Branch_3_2/Hibernate3/test/org/hibernate/test/jpa/lock/RepeatableReadTest.java
branches/Branch_3_2/Hibernate3/test/org/hibernate/test/optlock/OptimisticLockTest.java
branches/Branch_3_2/Hibernate3/test/org/hibernate/test/tm/CMTTest.java
Log:
HHH-2500 : Teradata certification
Modified: branches/Branch_3_2/Hibernate3/src/org/hibernate/dialect/Dialect.java
===================================================================
--- branches/Branch_3_2/Hibernate3/src/org/hibernate/dialect/Dialect.java 2007-03-19 22:06:14 UTC (rev 11303)
+++ branches/Branch_3_2/Hibernate3/src/org/hibernate/dialect/Dialect.java 2007-03-19 22:06:45 UTC (rev 11304)
@@ -970,6 +970,32 @@
}
+ // table support ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ /**
+ * Command used to create a table.
+ *
+ * @return The command used to create a table.
+ */
+ public String getCreateTableString() {
+ return "create table";
+ }
+
+ /**
+ * Slight variation on {@link #getCreateTableString}. Here, we have the
+ * command used to create a table when there is no primary key and
+ * duplicate rows are expected.
+ * <p/>
+ * Most databases do not care about the distinction; originally added for
+ * Teradata support which does care.
+ *
+ * @return The command used to create a multiset table.
+ */
+ public String getCreateMultisetTableString() {
+ return getCreateTableString();
+ }
+
+
// temporary table support ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/**
Modified: branches/Branch_3_2/Hibernate3/src/org/hibernate/dialect/SQLServerDialect.java
===================================================================
--- branches/Branch_3_2/Hibernate3/src/org/hibernate/dialect/SQLServerDialect.java 2007-03-19 22:06:14 UTC (rev 11303)
+++ branches/Branch_3_2/Hibernate3/src/org/hibernate/dialect/SQLServerDialect.java 2007-03-19 22:06:45 UTC (rev 11304)
@@ -127,4 +127,12 @@
// note: at least my local SQL Server 2005 Express shows this not working...
return false;
}
+
+ public boolean doesReadCommittedCauseWritersToBlockReaders() {
+ return false; // here assume SQLServer2005 using snapshot isolation, which does not have this problem
+ }
+
+ public boolean doesRepeatableReadCauseReadersToBlockWriters() {
+ return false; // here assume SQLServer2005 using snapshot isolation, which does not have this problem
+ }
}
Modified: branches/Branch_3_2/Hibernate3/src/org/hibernate/dialect/SybaseDialect.java
===================================================================
--- branches/Branch_3_2/Hibernate3/src/org/hibernate/dialect/SybaseDialect.java 2007-03-19 22:06:14 UTC (rev 11303)
+++ branches/Branch_3_2/Hibernate3/src/org/hibernate/dialect/SybaseDialect.java 2007-03-19 22:06:45 UTC (rev 11304)
@@ -227,4 +227,12 @@
public boolean supportsExistsInSelect() {
return false;
}
+
+ public boolean doesReadCommittedCauseWritersToBlockReaders() {
+ return true;
+ }
+
+ public boolean doesRepeatableReadCauseReadersToBlockWriters() {
+ return true;
+ }
}
Added: branches/Branch_3_2/Hibernate3/src/org/hibernate/dialect/TeradataDialect.java
===================================================================
--- branches/Branch_3_2/Hibernate3/src/org/hibernate/dialect/TeradataDialect.java (rev 0)
+++ branches/Branch_3_2/Hibernate3/src/org/hibernate/dialect/TeradataDialect.java 2007-03-19 22:06:45 UTC (rev 11304)
@@ -0,0 +1,237 @@
+package org.hibernate.dialect;
+
+import java.sql.Types;
+
+import org.hibernate.Hibernate;
+import org.hibernate.HibernateException;
+import org.hibernate.cfg.Environment;
+import org.hibernate.dialect.function.SQLFunctionTemplate;
+import org.hibernate.dialect.function.VarArgsSQLFunction;
+
+/**
+ * A dialect for the Teradata database created by MCR as part of the
+ * dialect certification process.
+ *
+ * @author Jay Nance
+ */
+public class TeradataDialect extends Dialect {
+
+ /**
+ * Constructor
+ */
+ public TeradataDialect() {
+ super();
+ //registerColumnType data types
+ registerColumnType( Types.NUMERIC, "NUMERIC($p,$s)" );
+ registerColumnType( Types.DOUBLE, "DOUBLE PRECISION" );
+ registerColumnType( Types.BIGINT, "NUMERIC(18,0)" );
+ registerColumnType( Types.BIT, "BYTEINT" );
+ registerColumnType( Types.TINYINT, "BYTEINT" );
+ registerColumnType( Types.VARBINARY, "VARBYTE($l)" );
+ registerColumnType( Types.BINARY, "BYTEINT" );
+ registerColumnType( Types.LONGVARCHAR, "LONG VARCHAR" );
+ registerColumnType( Types.CHAR, "CHAR(1)" );
+ registerColumnType( Types.DECIMAL, "DECIMAL" );
+ registerColumnType( Types.INTEGER, "INTEGER" );
+ registerColumnType( Types.SMALLINT, "SMALLINT" );
+ registerColumnType( Types.FLOAT, "FLOAT" );
+ registerColumnType( Types.VARCHAR, "VARCHAR($l)" );
+ registerColumnType( Types.DATE, "DATE" );
+ registerColumnType( Types.TIME, "TIME" );
+ registerColumnType( Types.TIMESTAMP, "TIMESTAMP" );
+ registerColumnType( Types.BOOLEAN, "BYTEINT" ); // hibernate seems to ignore this type...
+ registerColumnType( Types.BLOB, "BLOB" );
+ registerColumnType( Types.CLOB, "CLOB" );
+
+ registerFunction( "year", new SQLFunctionTemplate( Hibernate.INTEGER, "extract(year from ?1)" ) );
+ registerFunction( "length", new SQLFunctionTemplate( Hibernate.INTEGER, "character_length(?1)" ) );
+ registerFunction( "concat", new VarArgsSQLFunction( Hibernate.STRING, "(", "||", ")" ) );
+ registerFunction( "substring", new SQLFunctionTemplate( Hibernate.STRING, "substring(?1 from ?2 for ?3)" ) );
+ registerFunction( "locate", new SQLFunctionTemplate( Hibernate.STRING, "position(?1 in ?2)" ) );
+ registerFunction( "mod", new SQLFunctionTemplate( Hibernate.STRING, "?1 mod ?2" ) );
+ registerFunction( "str", new SQLFunctionTemplate( Hibernate.STRING, "cast(?1 as varchar(255))" ) );
+
+ // bit_length feels a bit broken to me. We have to cast to char in order to
+ // pass when a numeric value is supplied. But of course the answers given will
+ // be wildly different for these two datatypes. 1234.5678 will be 9 bytes as
+ // a char string but will be 8 or 16 bytes as a true numeric.
+ // Jay Nance 2006-09-22
+ registerFunction(
+ "bit_length", new SQLFunctionTemplate( Hibernate.INTEGER, "octet_length(cast(?1 as char))*4" )
+ );
+
+ // The preference here would be
+ // SQLFunctionTemplate( Hibernate.TIMESTAMP, "current_timestamp(?1)", false)
+ // but this appears not to work.
+ // Jay Nance 2006-09-22
+ registerFunction( "current_timestamp", new SQLFunctionTemplate( Hibernate.TIMESTAMP, "current_timestamp" ) );
+ registerFunction( "current_time", new SQLFunctionTemplate( Hibernate.TIMESTAMP, "current_time" ) );
+ registerFunction( "current_date", new SQLFunctionTemplate( Hibernate.TIMESTAMP, "current_date" ) );
+ // IBID for current_time and current_date
+
+ registerKeyword( "password" );
+ registerKeyword( "type" );
+ registerKeyword( "title" );
+ registerKeyword( "year" );
+ registerKeyword( "month" );
+ registerKeyword( "summary" );
+ registerKeyword( "alias" );
+ registerKeyword( "value" );
+ registerKeyword( "first" );
+ registerKeyword( "role" );
+ registerKeyword( "account" );
+ registerKeyword( "class" );
+
+ // Tell hibernate to use getBytes instead of getBinaryStream
+ getDefaultProperties().setProperty( Environment.USE_STREAMS_FOR_BINARY, "false" );
+ // No batch statements
+ getDefaultProperties().setProperty( Environment.STATEMENT_BATCH_SIZE, NO_BATCH );
+ }
+
+ /**
+ * Does this dialect support the <tt>FOR UPDATE</tt> syntax?
+ *
+ * @return empty string ... Teradata does not support <tt>FOR UPDATE<tt> syntax
+ */
+ public String getForUpdateString() {
+ return "";
+ }
+
+ public boolean supportsIdentityColumns() {
+ return false;
+ }
+
+ public boolean supportsSequences() {
+ return false;
+ }
+
+ public String getAddColumnString() {
+ return "Add Column";
+ }
+
+ public boolean supportsTemporaryTables() {
+ return true;
+ }
+
+ public String getCreateTemporaryTableString() {
+ return "create global temporary table";
+ }
+
+ public String getCreateTemporaryTablePostfix() {
+ return " on commit preserve rows";
+ }
+
+ public Boolean performTemporaryTableDDLInIsolation() {
+ return Boolean.TRUE;
+ }
+
+ public boolean dropTemporaryTableAfterUse() {
+ return false;
+ }
+
+ /**
+ * Get the name of the database type associated with the given
+ * <tt>java.sql.Types</tt> typecode.
+ *
+ * @param code <tt>java.sql.Types</tt> typecode
+ * @param length the length or precision of the column
+ * @param precision the precision of the column
+ * @param scale the scale of the column
+ *
+ * @return the database type name
+ *
+ * @throws HibernateException
+ */
+ public String getTypeName(int code, int length, int precision, int scale) throws HibernateException {
+ /*
+ * We might want a special case for 19,2. This is very common for money types
+ * and here it is converted to 18,1
+ */
+ float f = precision > 0 ? ( float ) scale / ( float ) precision : 0;
+ int p = ( precision > 18 ? 18 : precision );
+ int s = ( precision > 18 ? ( int ) ( 18.0 * f ) : ( scale > 18 ? 18 : scale ) );
+
+ return super.getTypeName( code, length, p, s );
+ }
+
+ public boolean supportsCascadeDelete() {
+ return false;
+ }
+
+ public boolean supportsCircularCascadeDeleteConstraints() {
+ return false;
+ }
+
+ public boolean areStringComparisonsCaseInsensitive() {
+ return true;
+ }
+
+ public boolean supportsEmptyInList() {
+ return false;
+ }
+
+ public String getSelectClauseNullString(int sqlType) {
+ String v = "null";
+
+ switch ( sqlType ) {
+ case Types.BIT:
+ case Types.TINYINT:
+ case Types.SMALLINT:
+ case Types.INTEGER:
+ case Types.BIGINT:
+ case Types.FLOAT:
+ case Types.REAL:
+ case Types.DOUBLE:
+ case Types.NUMERIC:
+ case Types.DECIMAL:
+ v = "cast(null as decimal)";
+ break;
+ case Types.CHAR:
+ case Types.VARCHAR:
+ case Types.LONGVARCHAR:
+ v = "cast(null as varchar(255))";
+ break;
+ case Types.DATE:
+ case Types.TIME:
+ case Types.TIMESTAMP:
+ v = "cast(null as timestamp)";
+ break;
+ case Types.BINARY:
+ case Types.VARBINARY:
+ case Types.LONGVARBINARY:
+ case Types.NULL:
+ case Types.OTHER:
+ case Types.JAVA_OBJECT:
+ case Types.DISTINCT:
+ case Types.STRUCT:
+ case Types.ARRAY:
+ case Types.BLOB:
+ case Types.CLOB:
+ case Types.REF:
+ case Types.DATALINK:
+ case Types.BOOLEAN:
+ break;
+ }
+ return v;
+ }
+
+ public String getCreateMultisetTableString() {
+ return "create multiset table ";
+ }
+
+ public boolean supportsLobValueChangePropogation() {
+ return false;
+ }
+
+ public boolean doesReadCommittedCauseWritersToBlockReaders() {
+ return true;
+ }
+
+ public boolean doesRepeatableReadCauseReadersToBlockWriters() {
+ return true;
+ }
+
+ public boolean supportsBindAsCallableArgument() {
+ return false;
+ }
+}
\ No newline at end of file
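The precision/scale clamping in the `getTypeName` override of the TeradataDialect diff above can be checked numerically. This standalone sketch reproduces just that arithmetic outside the dialect (class and method names here are illustrative, not part of Hibernate):

```java
// Standalone reproduction of TeradataDialect.getTypeName's precision/scale
// clamping: both values are capped at Teradata's DECIMAL(18) limit while the
// scale-to-precision ratio is preserved.
public class TypeNameClamp {

    static int[] clamp(int precision, int scale) {
        // Same expressions as in the dialect's getTypeName above
        float f = precision > 0 ? (float) scale / (float) precision : 0;
        int p = (precision > 18 ? 18 : precision);
        int s = (precision > 18 ? (int) (18.0 * f) : (scale > 18 ? 18 : scale));
        return new int[] { p, s };
    }

    public static void main(String[] args) {
        // The in-code comment's money-type example: (19,2) becomes (18,1)
        int[] r = clamp(19, 2);
        System.out.println(r[0] + "," + r[1]); // prints 18,1
    }
}
```

For precisions of 18 or below the values pass through unchanged, so only oversized declarations are rewritten.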
Modified: branches/Branch_3_2/Hibernate3/src/org/hibernate/id/MultipleHiLoPerTableGenerator.java
===================================================================
--- branches/Branch_3_2/Hibernate3/src/org/hibernate/id/MultipleHiLoPerTableGenerator.java 2007-03-19 22:06:14 UTC (rev 11303)
+++ branches/Branch_3_2/Hibernate3/src/org/hibernate/id/MultipleHiLoPerTableGenerator.java 2007-03-19 22:06:45 UTC (rev 11304)
@@ -89,28 +89,30 @@
public String[] sqlCreateStrings(Dialect dialect) throws HibernateException {
return new String[] {
new StringBuffer()
- .append("create table ")
- .append(tableName)
- .append(" ( ")
- .append(pkColumnName)
- .append(" ")
- .append( dialect.getTypeName(Types.VARCHAR, keySize, 0, 0) )
- .append(", ")
- .append(valueColumnName)
- .append(" ")
- .append( dialect.getTypeName(Types.INTEGER) )
- .append(" ) ")
+ .append( dialect.getCreateTableString() )
+ .append( tableName )
+ .append( " ( " )
+ .append( pkColumnName )
+ .append( ' ' )
+ .append( dialect.getTypeName( Types.VARCHAR, keySize, 0, 0 ) )
+ .append( ", " )
+ .append( valueColumnName )
+ .append( ' ' )
+ .append( dialect.getTypeName( Types.INTEGER ) )
+ .append( " ) " )
.toString()
};
}
public String[] sqlDropStrings(Dialect dialect) throws HibernateException {
- StringBuffer sqlDropString = new StringBuffer()
- .append("drop table ");
- if ( dialect.supportsIfExistsBeforeTableName() ) sqlDropString.append("if exists ");
- sqlDropString.append(tableName)
- .append( dialect.getCascadeConstraintsString() );
- if ( dialect.supportsIfExistsAfterTableName() ) sqlDropString.append(" if exists");
+ StringBuffer sqlDropString = new StringBuffer( "drop table " );
+ if ( dialect.supportsIfExistsBeforeTableName() ) {
+ sqlDropString.append( "if exists " );
+ }
+ sqlDropString.append( tableName ).append( dialect.getCascadeConstraintsString() );
+ if ( dialect.supportsIfExistsAfterTableName() ) {
+ sqlDropString.append( " if exists" );
+ }
return new String[] { sqlDropString.toString() };
}
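The rewritten `sqlDropStrings` above only restructures how the optional `if exists` token is positioned around the table name. A minimal standalone sketch of that assembly, with plain booleans standing in for the `Dialect.supportsIfExistsBeforeTableName()`/`supportsIfExistsAfterTableName()` capability calls (and the cascade-constraints suffix omitted for brevity):

```java
// Sketch of the drop-string assembly from sqlDropStrings above. Databases
// differ on where "if exists" goes: MySQL-style dialects put it before the
// table name, others after it, and some support neither.
public class DropString {

    static String dropTable(String tableName, boolean ifExistsBefore, boolean ifExistsAfter) {
        StringBuffer sql = new StringBuffer("drop table ");
        if (ifExistsBefore) {
            sql.append("if exists ");
        }
        sql.append(tableName);
        if (ifExistsAfter) {
            sql.append(" if exists");
        }
        return sql.toString();
    }

    public static void main(String[] args) {
        System.out.println(dropTable("hibernate_unique_key", true, false));
        // prints: drop table if exists hibernate_unique_key
    }
}
```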
Modified: branches/Branch_3_2/Hibernate3/src/org/hibernate/id/TableGenerator.java
===================================================================
--- branches/Branch_3_2/Hibernate3/src/org/hibernate/id/TableGenerator.java 2007-03-19 22:06:14 UTC (rev 11303)
+++ branches/Branch_3_2/Hibernate3/src/org/hibernate/id/TableGenerator.java 2007-03-19 22:06:45 UTC (rev 11304)
@@ -96,21 +96,22 @@
}
- public String[] sqlCreateStrings(Dialect dialect) throws HibernateException {
+ public String[] sqlCreateStrings(Dialect dialect) {
return new String[] {
- "create table " + tableName + " ( " + columnName + " " + dialect.getTypeName(Types.INTEGER) + " )",
+ dialect.getCreateTableString() + " " + tableName + " ( " + columnName + " " + dialect.getTypeName(Types.INTEGER) + " )",
"insert into " + tableName + " values ( 0 )"
};
}
public String[] sqlDropStrings(Dialect dialect) {
- //return "drop table " + tableName + dialect.getCascadeConstraintsString();
- StringBuffer sqlDropString = new StringBuffer()
- .append("drop table ");
- if ( dialect.supportsIfExistsBeforeTableName() ) sqlDropString.append("if exists ");
- sqlDropString.append(tableName)
- .append( dialect.getCascadeConstraintsString() );
- if ( dialect.supportsIfExistsAfterTableName() ) sqlDropString.append(" if exists");
+ StringBuffer sqlDropString = new StringBuffer( "drop table " );
+ if ( dialect.supportsIfExistsBeforeTableName() ) {
+ sqlDropString.append( "if exists " );
+ }
+ sqlDropString.append( tableName ).append( dialect.getCascadeConstraintsString() );
+ if ( dialect.supportsIfExistsAfterTableName() ) {
+ sqlDropString.append( " if exists" );
+ }
return new String[] { sqlDropString.toString() };
}
Modified: branches/Branch_3_2/Hibernate3/src/org/hibernate/id/enhanced/SegmentedTableGenerator.java
===================================================================
--- branches/Branch_3_2/Hibernate3/src/org/hibernate/id/enhanced/SegmentedTableGenerator.java 2007-03-19 22:06:14 UTC (rev 11303)
+++ branches/Branch_3_2/Hibernate3/src/org/hibernate/id/enhanced/SegmentedTableGenerator.java 2007-03-19 22:06:45 UTC (rev 11304)
@@ -271,15 +271,15 @@
public String[] sqlCreateStrings(Dialect dialect) throws HibernateException {
return new String[] {
new StringBuffer()
- .append( "create table " )
+ .append( dialect.getCreateTableString() )
.append( tableName )
.append( " ( " )
.append( segmentColumnName )
- .append( " " )
+ .append( ' ' )
.append( dialect.getTypeName( Types.VARCHAR, segmentValueLength, 0, 0 ) )
.append( ", " )
.append( valueColumnName )
- .append( " " )
+ .append( ' ' )
.append( dialect.getTypeName( Types.BIGINT ) )
.append( " ) " )
.toString()
Modified: branches/Branch_3_2/Hibernate3/src/org/hibernate/jdbc/ConnectionManager.java
===================================================================
--- branches/Branch_3_2/Hibernate3/src/org/hibernate/jdbc/ConnectionManager.java 2007-03-19 22:06:14 UTC (rev 11303)
+++ branches/Branch_3_2/Hibernate3/src/org/hibernate/jdbc/ConnectionManager.java 2007-03-19 22:06:45 UTC (rev 11304)
@@ -186,7 +186,9 @@
* @throws SQLException Can be thrown by the Connection.isAutoCommit() check.
*/
public boolean isAutoCommit() throws SQLException {
- return connection == null || connection.getAutoCommit();
+ return connection == null
+ || connection.isClosed()
+ || connection.getAutoCommit();
}
/**
Modified: branches/Branch_3_2/Hibernate3/src/org/hibernate/mapping/Column.java
===================================================================
--- branches/Branch_3_2/Hibernate3/src/org/hibernate/mapping/Column.java 2007-03-19 22:06:14 UTC (rev 11303)
+++ branches/Branch_3_2/Hibernate3/src/org/hibernate/mapping/Column.java 2007-03-19 22:06:45 UTC (rev 11304)
@@ -65,6 +65,10 @@
quoted=true;
this.name=name.substring( 1, name.length()-1 );
}
+ else if(Dialect.getDialect().getKeywords().contains(name)) {
+ quoted=true;
+ this.name = name;
+ }
else {
this.name = name;
}
Modified: branches/Branch_3_2/Hibernate3/src/org/hibernate/mapping/Table.java
===================================================================
--- branches/Branch_3_2/Hibernate3/src/org/hibernate/mapping/Table.java 2007-03-19 22:06:14 UTC (rev 11303)
+++ branches/Branch_3_2/Hibernate3/src/org/hibernate/mapping/Table.java 2007-03-19 22:06:45 UTC (rev 11304)
@@ -360,9 +360,8 @@
return buffer.toString();
}
- public String sqlCreateString(Dialect dialect, Mapping p, String defaultCatalog, String defaultSchema)
- throws HibernateException {
- StringBuffer buf = new StringBuffer( "create table " )
+ public String sqlCreateString(Dialect dialect, Mapping p, String defaultCatalog, String defaultSchema) {
+ StringBuffer buf = new StringBuffer( hasPrimaryKey() ? dialect.getCreateTableString() : dialect.getCreateMultisetTableString() )
.append( getQualifiedName( dialect, defaultCatalog, defaultSchema ) )
.append( " (" );
Modified: branches/Branch_3_2/Hibernate3/test/org/hibernate/test/jpa/lock/JPALockTest.java
===================================================================
--- branches/Branch_3_2/Hibernate3/test/org/hibernate/test/jpa/lock/JPALockTest.java 2007-03-19 22:06:14 UTC (rev 11303)
+++ branches/Branch_3_2/Hibernate3/test/org/hibernate/test/jpa/lock/JPALockTest.java 2007-03-19 22:06:45 UTC (rev 11304)
@@ -56,6 +56,10 @@
if ( ! readCommittedIsolationMaintained( "ejb3 lock tests" ) ) {
return;
}
+ if ( getDialect().doesReadCommittedCauseWritersToBlockReaders() ) {
+ reportSkip( "write locks block readers", "jpa read locking" );
+ return;
+ }
final String initialName = "lock test";
// set up some test data
@@ -123,6 +127,11 @@
if ( ! readCommittedIsolationMaintained( "ejb3 lock tests" ) ) {
return;
}
+ if ( getDialect().doesReadCommittedCauseWritersToBlockReaders() ) {
+ reportSkip( "write locks block readers", "jpa read locking" );
+ return;
+ }
+
final String initialName = "lock test";
// set up some test data
Session s1 = getSessions().openSession();
Modified: branches/Branch_3_2/Hibernate3/test/org/hibernate/test/jpa/lock/RepeatableReadTest.java
===================================================================
--- branches/Branch_3_2/Hibernate3/test/org/hibernate/test/jpa/lock/RepeatableReadTest.java 2007-03-19 22:06:14 UTC (rev 11303)
+++ branches/Branch_3_2/Hibernate3/test/org/hibernate/test/jpa/lock/RepeatableReadTest.java 2007-03-19 22:06:45 UTC (rev 11304)
@@ -35,6 +35,10 @@
// versioned entity tests ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
public void testStaleVersionedInstanceFoundInQueryResult() {
+ if ( getDialect().doesReadCommittedCauseWritersToBlockReaders()) {
+ reportSkip( "write locks block readers", "stale versioned instance" );
+ return;
+ }
String check = "EJB3 Specification";
Session s1 = getSessions().openSession();
Transaction t1 = s1.beginTransaction();
@@ -83,6 +87,10 @@
if ( ! readCommittedIsolationMaintained( "repeatable read tests" ) ) {
return;
}
+ if ( getDialect().doesReadCommittedCauseWritersToBlockReaders()) {
+ reportSkip( "lock blocking", "stale versioned instance" );
+ return;
+ }
String check = "EJB3 Specification";
Session s1 = getSessions().openSession();
Transaction t1 = s1.beginTransaction();
@@ -153,6 +161,10 @@
// non-versioned entity tests ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
public void testStaleNonVersionedInstanceFoundInQueryResult() {
+ if ( getDialect().doesReadCommittedCauseWritersToBlockReaders()) {
+ reportSkip( "lock blocking", "stale versioned instance" );
+ return;
+ }
String check = "Lock Modes";
Session s1 = getSessions().openSession();
Transaction t1 = s1.beginTransaction();
@@ -200,6 +212,10 @@
if ( ! readCommittedIsolationMaintained( "repeatable read tests" ) ) {
return;
}
+ if ( getDialect().doesReadCommittedCauseWritersToBlockReaders()) {
+ reportSkip( "lock blocking", "stale versioned instance" );
+ return;
+ }
String check = "Lock Modes";
Session s1 = getSessions().openSession();
Transaction t1 = s1.beginTransaction();
Modified: branches/Branch_3_2/Hibernate3/test/org/hibernate/test/optlock/OptimisticLockTest.java
===================================================================
--- branches/Branch_3_2/Hibernate3/test/org/hibernate/test/optlock/OptimisticLockTest.java 2007-03-19 22:06:14 UTC (rev 11303)
+++ branches/Branch_3_2/Hibernate3/test/org/hibernate/test/optlock/OptimisticLockTest.java 2007-03-19 22:06:45 UTC (rev 11304)
@@ -48,6 +48,10 @@
}
private void testUpdateOptimisticLockFailure(String entityName) {
+ if ( getDialect().doesRepeatableReadCauseReadersToBlockWriters() ) {
+ reportSkip( "read locks block writers", "update optimistic locking" );
+ return;
+ }
Session mainSession = openSession();
mainSession.beginTransaction();
Document doc = new Document();
@@ -107,6 +111,10 @@
}
private void testDeleteOptimisticLockFailure(String entityName) {
+ if ( getDialect().doesRepeatableReadCauseReadersToBlockWriters() ) {
+ reportSkip( "read locks block writers", "update optimistic locking" );
+ return;
+ }
Session mainSession = openSession();
mainSession.beginTransaction();
Document doc = new Document();
Modified: branches/Branch_3_2/Hibernate3/test/org/hibernate/test/tm/CMTTest.java
===================================================================
--- branches/Branch_3_2/Hibernate3/test/org/hibernate/test/tm/CMTTest.java 2007-03-19 22:06:14 UTC (rev 11303)
+++ branches/Branch_3_2/Hibernate3/test/org/hibernate/test/tm/CMTTest.java 2007-03-19 22:06:45 UTC (rev 11304)
@@ -210,9 +210,8 @@
}
public void testConcurrentCachedDirtyQueries() throws Exception {
- if ( getDialect() instanceof SybaseDialect ) {
- // sybase and sqlserver have serious locking issues here...
- reportSkip( "dead-lock bug", "concurrent queries" );
+ if ( getDialect().doesReadCommittedCauseWritersToBlockReaders() ) {
+ reportSkip( "write locks block readers", "concurrent queries" );
return;
}
Hibernate SVN: r11303 - in trunk/Hibernate3: src/org/hibernate/id and 6 other directories.
by hibernate-commits@lists.jboss.org
Author: steve.ebersole(a)jboss.com
Date: 2007-03-19 18:06:14 -0400 (Mon, 19 Mar 2007)
New Revision: 11303
Added:
trunk/Hibernate3/src/org/hibernate/dialect/TeradataDialect.java
Modified:
trunk/Hibernate3/src/org/hibernate/dialect/Dialect.java
trunk/Hibernate3/src/org/hibernate/dialect/SQLServerDialect.java
trunk/Hibernate3/src/org/hibernate/dialect/SybaseDialect.java
trunk/Hibernate3/src/org/hibernate/id/MultipleHiLoPerTableGenerator.java
trunk/Hibernate3/src/org/hibernate/id/TableGenerator.java
trunk/Hibernate3/src/org/hibernate/id/enhanced/SegmentedTableGenerator.java
trunk/Hibernate3/src/org/hibernate/id/enhanced/TableStructure.java
trunk/Hibernate3/src/org/hibernate/jdbc/ConnectionManager.java
trunk/Hibernate3/src/org/hibernate/mapping/Table.java
trunk/Hibernate3/test/org/hibernate/test/jpa/lock/JPALockTest.java
trunk/Hibernate3/test/org/hibernate/test/jpa/lock/RepeatableReadTest.java
trunk/Hibernate3/test/org/hibernate/test/optlock/OptimisticLockTest.java
trunk/Hibernate3/test/org/hibernate/test/tm/CMTTest.java
Log:
HHH-2500 : terradata certification
Modified: trunk/Hibernate3/src/org/hibernate/dialect/Dialect.java
===================================================================
--- trunk/Hibernate3/src/org/hibernate/dialect/Dialect.java 2007-03-19 20:44:11 UTC (rev 11302)
+++ trunk/Hibernate3/src/org/hibernate/dialect/Dialect.java 2007-03-19 22:06:14 UTC (rev 11303)
@@ -970,6 +970,32 @@
}
+ // table support ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ /**
+ * Command used to create a table.
+ *
+ * @return The command used to create a table.
+ */
+ public String getCreateTableString() {
+ return "create table";
+ }
+
+ /**
+ * Slight variation on {@link #getCreateTableString}. Here, we have the
+ * command used to create a table when there is no primary key and
+ * duplicate rows are expected.
+ * <p/>
+ * Most databases do not care about the distinction; originally added for
+ * Teradata support which does care.
+ *
+ * @return The command used to create a multiset table.
+ */
+ public String getCreateMultisetTableString() {
+ return getCreateTableString();
+ }
+
+
// temporary table support ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/**
Modified: trunk/Hibernate3/src/org/hibernate/dialect/SQLServerDialect.java
===================================================================
--- trunk/Hibernate3/src/org/hibernate/dialect/SQLServerDialect.java 2007-03-19 20:44:11 UTC (rev 11302)
+++ trunk/Hibernate3/src/org/hibernate/dialect/SQLServerDialect.java 2007-03-19 22:06:14 UTC (rev 11303)
@@ -129,4 +129,12 @@
// note: at least my local SQL Server 2005 Express shows this not working...
return false;
}
+
+ public boolean doesReadCommittedCauseWritersToBlockReaders() {
+ return false; // here assume SQLServer2005 using snapshot isolation, which does not have this problem
+ }
+
+ public boolean doesRepeatableReadCauseReadersToBlockWriters() {
+ return false; // here assume SQLServer2005 using snapshot isolation, which does not have this problem
+ }
}
Modified: trunk/Hibernate3/src/org/hibernate/dialect/SybaseDialect.java
===================================================================
--- trunk/Hibernate3/src/org/hibernate/dialect/SybaseDialect.java 2007-03-19 20:44:11 UTC (rev 11302)
+++ trunk/Hibernate3/src/org/hibernate/dialect/SybaseDialect.java 2007-03-19 22:06:14 UTC (rev 11303)
@@ -227,4 +227,12 @@
public boolean supportsExistsInSelect() {
return false;
}
+
+ public boolean doesReadCommittedCauseWritersToBlockReaders() {
+ return true;
+ }
+
+ public boolean doesRepeatableReadCauseReadersToBlockWriters() {
+ return true;
+ }
}
Added: trunk/Hibernate3/src/org/hibernate/dialect/TeradataDialect.java
===================================================================
--- trunk/Hibernate3/src/org/hibernate/dialect/TeradataDialect.java (rev 0)
+++ trunk/Hibernate3/src/org/hibernate/dialect/TeradataDialect.java 2007-03-19 22:06:14 UTC (rev 11303)
@@ -0,0 +1,237 @@
+package org.hibernate.dialect;
+
+import java.sql.Types;
+
+import org.hibernate.Hibernate;
+import org.hibernate.HibernateException;
+import org.hibernate.cfg.Environment;
+import org.hibernate.dialect.function.SQLFunctionTemplate;
+import org.hibernate.dialect.function.VarArgsSQLFunction;
+
+/**
+ * A dialect for the Teradata database created by MCR as part of the
+ * dialect certification process.
+ *
+ * @author Jay Nance
+ */
+public class TeradataDialect extends Dialect {
+
+ /**
+ * Constructor
+ */
+ public TeradataDialect() {
+ super();
+ //registerColumnType data types
+ registerColumnType( Types.NUMERIC, "NUMERIC($p,$s)" );
+ registerColumnType( Types.DOUBLE, "DOUBLE PRECISION" );
+ registerColumnType( Types.BIGINT, "NUMERIC(18,0)" );
+ registerColumnType( Types.BIT, "BYTEINT" );
+ registerColumnType( Types.TINYINT, "BYTEINT" );
+ registerColumnType( Types.VARBINARY, "VARBYTE($l)" );
+ registerColumnType( Types.BINARY, "BYTEINT" );
+ registerColumnType( Types.LONGVARCHAR, "LONG VARCHAR" );
+ registerColumnType( Types.CHAR, "CHAR(1)" );
+ registerColumnType( Types.DECIMAL, "DECIMAL" );
+ registerColumnType( Types.INTEGER, "INTEGER" );
+ registerColumnType( Types.SMALLINT, "SMALLINT" );
+ registerColumnType( Types.FLOAT, "FLOAT" );
+ registerColumnType( Types.VARCHAR, "VARCHAR($l)" );
+ registerColumnType( Types.DATE, "DATE" );
+ registerColumnType( Types.TIME, "TIME" );
+ registerColumnType( Types.TIMESTAMP, "TIMESTAMP" );
+ registerColumnType( Types.BOOLEAN, "BYTEINT" ); // hibernate seems to ignore this type...
+ registerColumnType( Types.BLOB, "BLOB" );
+ registerColumnType( Types.CLOB, "CLOB" );
+
+ registerFunction( "year", new SQLFunctionTemplate( Hibernate.INTEGER, "extract(year from ?1)" ) );
+ registerFunction( "length", new SQLFunctionTemplate( Hibernate.INTEGER, "character_length(?1)" ) );
+ registerFunction( "concat", new VarArgsSQLFunction( Hibernate.STRING, "(", "||", ")" ) );
+ registerFunction( "substring", new SQLFunctionTemplate( Hibernate.STRING, "substring(?1 from ?2 for ?3)" ) );
+ registerFunction( "locate", new SQLFunctionTemplate( Hibernate.STRING, "position(?1 in ?2)" ) );
+ registerFunction( "mod", new SQLFunctionTemplate( Hibernate.STRING, "?1 mod ?2" ) );
+ registerFunction( "str", new SQLFunctionTemplate( Hibernate.STRING, "cast(?1 as varchar(255))" ) );
+
+ // bit_length feels a bit broken to me. We have to cast to char in order to
+ // pass when a numeric value is supplied. But of course the answers given will
+ // be wildly different for these two datatypes. 1234.5678 will be 9 bytes as
+ // a char string but will be 8 or 16 bytes as a true numeric.
+ // Jay Nance 2006-09-22
+ registerFunction(
+ "bit_length", new SQLFunctionTemplate( Hibernate.INTEGER, "octet_length(cast(?1 as char))*4" )
+ );
+
+ // The preference here would be
+ // SQLFunctionTemplate( Hibernate.TIMESTAMP, "current_timestamp(?1)", false)
+ // but this appears not to work.
+ // Jay Nance 2006-09-22
+ registerFunction( "current_timestamp", new SQLFunctionTemplate( Hibernate.TIMESTAMP, "current_timestamp" ) );
+ registerFunction( "current_time", new SQLFunctionTemplate( Hibernate.TIMESTAMP, "current_time" ) );
+ registerFunction( "current_date", new SQLFunctionTemplate( Hibernate.TIMESTAMP, "current_date" ) );
+ // IBID for current_time and current_date
+
+ registerKeyword( "password" );
+ registerKeyword( "type" );
+ registerKeyword( "title" );
+ registerKeyword( "year" );
+ registerKeyword( "month" );
+ registerKeyword( "summary" );
+ registerKeyword( "alias" );
+ registerKeyword( "value" );
+ registerKeyword( "first" );
+ registerKeyword( "role" );
+ registerKeyword( "account" );
+ registerKeyword( "class" );
+
+ // Tell hibernate to use getBytes instead of getBinaryStream
+ getDefaultProperties().setProperty( Environment.USE_STREAMS_FOR_BINARY, "false" );
+ // No batch statements
+ getDefaultProperties().setProperty( Environment.STATEMENT_BATCH_SIZE, NO_BATCH );
+ }
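The `registerColumnType` calls in the constructor above use `$p`, `$s`, and `$l` placeholders for precision, scale, and length. The following is an assumption-marked sketch of how those placeholders get filled in; the real substitution lives inside Hibernate (`Dialect.getTypeName` via its type-name registry), so this simplified stand-in is illustrative only:

```java
// Illustrative stand-in for expanding the $l/$p/$s placeholders used in the
// column-type templates registered above (e.g. "NUMERIC($p,$s)", "VARBYTE($l)").
// Not Hibernate's implementation -- a simplified sketch of the substitution.
public class ColumnTypeTemplate {

    static String fill(String template, int length, int precision, int scale) {
        return template
                .replace("$l", Integer.toString(length))
                .replace("$p", Integer.toString(precision))
                .replace("$s", Integer.toString(scale));
    }

    public static void main(String[] args) {
        System.out.println(fill("NUMERIC($p,$s)", 0, 19, 2)); // prints NUMERIC(19,2)
        System.out.println(fill("VARBYTE($l)", 255, 0, 0));   // prints VARBYTE(255)
    }
}
```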
+
+ /**
+ * Does this dialect support the <tt>FOR UPDATE</tt> syntax?
+ *
+ * @return empty string ... Teradata does not support <tt>FOR UPDATE</tt> syntax
+ */
+ public String getForUpdateString() {
+ return "";
+ }
+
+ public boolean supportsIdentityColumns() {
+ return false;
+ }
+
+ public boolean supportsSequences() {
+ return false;
+ }
+
+ public String getAddColumnString() {
+ return "Add Column";
+ }
+
+ public boolean supportsTemporaryTables() {
+ return true;
+ }
+
+ public String getCreateTemporaryTableString() {
+ return "create global temporary table";
+ }
+
+ public String getCreateTemporaryTablePostfix() {
+ return " on commit preserve rows";
+ }
+
+ public Boolean performTemporaryTableDDLInIsolation() {
+ return Boolean.TRUE;
+ }
+
+ public boolean dropTemporaryTableAfterUse() {
+ return false;
+ }
+
+ /**
+ * Get the name of the database type associated with the given
+ * <tt>java.sql.Types</tt> typecode.
+ *
+ * @param code <tt>java.sql.Types</tt> typecode
+ * @param length the length or precision of the column
+ * @param precision the precision of the column
+ * @param scale the scale of the column
+ *
+ * @return the database type name
+ *
+ * @throws HibernateException
+ */
+ public String getTypeName(int code, int length, int precision, int scale) throws HibernateException {
+ /*
+ * We might want a special case for 19,2. This is very common for money types
+ * and here it is converted to 18,1
+ */
+ float f = precision > 0 ? ( float ) scale / ( float ) precision : 0;
+ int p = ( precision > 18 ? 18 : precision );
+ int s = ( precision > 18 ? ( int ) ( 18.0 * f ) : ( scale > 18 ? 18 : scale ) );
+
+ return super.getTypeName( code, length, p, s );
+ }
+
+ public boolean supportsCascadeDelete() {
+ return false;
+ }
+
+ public boolean supportsCircularCascadeDeleteConstraints() {
+ return false;
+ }
+
+ public boolean areStringComparisonsCaseInsensitive() {
+ return true;
+ }
+
+ public boolean supportsEmptyInList() {
+ return false;
+ }
+
+ public String getSelectClauseNullString(int sqlType) {
+ String v = "null";
+
+ switch ( sqlType ) {
+ case Types.BIT:
+ case Types.TINYINT:
+ case Types.SMALLINT:
+ case Types.INTEGER:
+ case Types.BIGINT:
+ case Types.FLOAT:
+ case Types.REAL:
+ case Types.DOUBLE:
+ case Types.NUMERIC:
+ case Types.DECIMAL:
+ v = "cast(null as decimal)";
+ break;
+ case Types.CHAR:
+ case Types.VARCHAR:
+ case Types.LONGVARCHAR:
+ v = "cast(null as varchar(255))";
+ break;
+ case Types.DATE:
+ case Types.TIME:
+ case Types.TIMESTAMP:
+ v = "cast(null as timestamp)";
+ break;
+ case Types.BINARY:
+ case Types.VARBINARY:
+ case Types.LONGVARBINARY:
+ case Types.NULL:
+ case Types.OTHER:
+ case Types.JAVA_OBJECT:
+ case Types.DISTINCT:
+ case Types.STRUCT:
+ case Types.ARRAY:
+ case Types.BLOB:
+ case Types.CLOB:
+ case Types.REF:
+ case Types.DATALINK:
+ case Types.BOOLEAN:
+ break;
+ }
+ return v;
+ }
+
+ public String getCreateMultisetTableString() {
+ return "create multiset table ";
+ }
+
+ public boolean supportsLobValueChangePropogation() {
+ return false;
+ }
+
+ public boolean doesReadCommittedCauseWritersToBlockReaders() {
+ return true;
+ }
+
+ public boolean doesRepeatableReadCauseReadersToBlockWriters() {
+ return true;
+ }
+
+ public boolean supportsBindAsCallableArgument() {
+ return false;
+ }
+}
\ No newline at end of file
Modified: trunk/Hibernate3/src/org/hibernate/id/MultipleHiLoPerTableGenerator.java
===================================================================
--- trunk/Hibernate3/src/org/hibernate/id/MultipleHiLoPerTableGenerator.java 2007-03-19 20:44:11 UTC (rev 11302)
+++ trunk/Hibernate3/src/org/hibernate/id/MultipleHiLoPerTableGenerator.java 2007-03-19 22:06:14 UTC (rev 11303)
@@ -89,28 +89,30 @@
public String[] sqlCreateStrings(Dialect dialect) throws HibernateException {
return new String[] {
new StringBuffer()
- .append("create table ")
- .append(tableName)
- .append(" ( ")
- .append(pkColumnName)
- .append(" ")
- .append( dialect.getTypeName(Types.VARCHAR, keySize, 0, 0) )
- .append(", ")
- .append(valueColumnName)
- .append(" ")
- .append( dialect.getTypeName(Types.INTEGER) )
- .append(" ) ")
+ .append( dialect.getCreateTableString() )
+ .append( tableName )
+ .append( " ( " )
+ .append( pkColumnName )
+ .append( ' ' )
+ .append( dialect.getTypeName( Types.VARCHAR, keySize, 0, 0 ) )
+ .append( ", " )
+ .append( valueColumnName )
+ .append( ' ' )
+ .append( dialect.getTypeName( Types.INTEGER ) )
+ .append( " ) " )
.toString()
};
}
public String[] sqlDropStrings(Dialect dialect) throws HibernateException {
- StringBuffer sqlDropString = new StringBuffer()
- .append("drop table ");
- if ( dialect.supportsIfExistsBeforeTableName() ) sqlDropString.append("if exists ");
- sqlDropString.append(tableName)
- .append( dialect.getCascadeConstraintsString() );
- if ( dialect.supportsIfExistsAfterTableName() ) sqlDropString.append(" if exists");
+ StringBuffer sqlDropString = new StringBuffer( "drop table " );
+ if ( dialect.supportsIfExistsBeforeTableName() ) {
+ sqlDropString.append( "if exists " );
+ }
+ sqlDropString.append( tableName ).append( dialect.getCascadeConstraintsString() );
+ if ( dialect.supportsIfExistsAfterTableName() ) {
+ sqlDropString.append( " if exists" );
+ }
return new String[] { sqlDropString.toString() };
}
Modified: trunk/Hibernate3/src/org/hibernate/id/TableGenerator.java
===================================================================
--- trunk/Hibernate3/src/org/hibernate/id/TableGenerator.java 2007-03-19 20:44:11 UTC (rev 11302)
+++ trunk/Hibernate3/src/org/hibernate/id/TableGenerator.java 2007-03-19 22:06:14 UTC (rev 11303)
@@ -98,19 +98,20 @@
public String[] sqlCreateStrings(Dialect dialect) throws HibernateException {
return new String[] {
- "create table " + tableName + " ( " + columnName + " " + dialect.getTypeName(Types.INTEGER) + " )",
+ dialect.getCreateTableString() + " " + tableName + " ( " + columnName + " " + dialect.getTypeName(Types.INTEGER) + " )",
"insert into " + tableName + " values ( 0 )"
};
}
public String[] sqlDropStrings(Dialect dialect) {
- //return "drop table " + tableName + dialect.getCascadeConstraintsString();
- StringBuffer sqlDropString = new StringBuffer()
- .append("drop table ");
- if ( dialect.supportsIfExistsBeforeTableName() ) sqlDropString.append("if exists ");
- sqlDropString.append(tableName)
- .append( dialect.getCascadeConstraintsString() );
- if ( dialect.supportsIfExistsAfterTableName() ) sqlDropString.append(" if exists");
+ StringBuffer sqlDropString = new StringBuffer( "drop table " );
+ if ( dialect.supportsIfExistsBeforeTableName() ) {
+ sqlDropString.append( "if exists " );
+ }
+ sqlDropString.append( tableName ).append( dialect.getCascadeConstraintsString() );
+ if ( dialect.supportsIfExistsAfterTableName() ) {
+ sqlDropString.append( " if exists" );
+ }
return new String[] { sqlDropString.toString() };
}
Modified: trunk/Hibernate3/src/org/hibernate/id/enhanced/SegmentedTableGenerator.java
===================================================================
--- trunk/Hibernate3/src/org/hibernate/id/enhanced/SegmentedTableGenerator.java 2007-03-19 20:44:11 UTC (rev 11302)
+++ trunk/Hibernate3/src/org/hibernate/id/enhanced/SegmentedTableGenerator.java 2007-03-19 22:06:14 UTC (rev 11303)
@@ -271,15 +271,15 @@
public String[] sqlCreateStrings(Dialect dialect) throws HibernateException {
return new String[] {
new StringBuffer()
- .append( "create table " )
+ .append( dialect.getCreateTableString() )
.append( tableName )
.append( " ( " )
.append( segmentColumnName )
- .append( " " )
+ .append( ' ' )
.append( dialect.getTypeName( Types.VARCHAR, segmentValueLength, 0, 0 ) )
.append( ", " )
.append( valueColumnName )
- .append( " " )
+ .append( ' ' )
.append( dialect.getTypeName( Types.BIGINT ) )
.append( " ) " )
.toString()
Modified: trunk/Hibernate3/src/org/hibernate/id/enhanced/TableStructure.java
===================================================================
--- trunk/Hibernate3/src/org/hibernate/id/enhanced/TableStructure.java 2007-03-19 20:44:11 UTC (rev 11302)
+++ trunk/Hibernate3/src/org/hibernate/id/enhanced/TableStructure.java 2007-03-19 22:06:14 UTC (rev 11303)
@@ -76,7 +76,7 @@
public String[] sqlCreateStrings(Dialect dialect) throws HibernateException {
return new String[] {
- "create table " + tableName + " ( " + valueColumnName + " " + dialect.getTypeName( Types.BIGINT ) + " )",
+ dialect.getCreateTableString() + " " + tableName + " ( " + valueColumnName + " " + dialect.getTypeName( Types.BIGINT ) + " )",
"insert into " + tableName + " values ( " + initialValue + " )"
};
}
Modified: trunk/Hibernate3/src/org/hibernate/jdbc/ConnectionManager.java
===================================================================
--- trunk/Hibernate3/src/org/hibernate/jdbc/ConnectionManager.java 2007-03-19 20:44:11 UTC (rev 11302)
+++ trunk/Hibernate3/src/org/hibernate/jdbc/ConnectionManager.java 2007-03-19 22:06:14 UTC (rev 11303)
@@ -186,7 +186,9 @@
* @throws SQLException Can be thrown by the Connection.isAutoCommit() check.
*/
public boolean isAutoCommit() throws SQLException {
- return connection == null || connection.getAutoCommit();
+ return connection == null
+ || connection.isClosed()
+ || connection.getAutoCommit();
}
/**
Modified: trunk/Hibernate3/src/org/hibernate/mapping/Table.java
===================================================================
--- trunk/Hibernate3/src/org/hibernate/mapping/Table.java 2007-03-19 20:44:11 UTC (rev 11302)
+++ trunk/Hibernate3/src/org/hibernate/mapping/Table.java 2007-03-19 22:06:14 UTC (rev 11303)
@@ -360,9 +360,9 @@
return buffer.toString();
}
- public String sqlCreateString(Dialect dialect, Mapping p, String defaultCatalog, String defaultSchema)
- throws HibernateException {
- StringBuffer buf = new StringBuffer( "create table " )
+ public String sqlCreateString(Dialect dialect, Mapping p, String defaultCatalog, String defaultSchema) {
+ StringBuffer buf = new StringBuffer( hasPrimaryKey() ? dialect.getCreateTableString() : dialect.getCreateMultisetTableString() )
+ .append( ' ' )
.append( getQualifiedName( dialect, defaultCatalog, defaultSchema ) )
.append( " (" );
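The `Table.sqlCreateString` hunk above replaces the hard-coded `"create table "` prefix with a dialect-driven choice: tables without a primary key now use the multiset form added in this commit for Teradata, where duplicate rows must be allowed explicitly. A reduced sketch of that branch, with the two dialect strings inlined as the values this commit defines:

```java
// Reduced sketch of the new prefix selection in Table.sqlCreateString.
// "create table" is Dialect.getCreateTableString(); "create multiset table "
// is TeradataDialect.getCreateMultisetTableString() (note its trailing space,
// which yields a harmless double space once ' ' is appended).
public class CreatePrefix {

    static String createString(boolean hasPrimaryKey, String qualifiedName) {
        String normal = "create table";
        String multiset = "create multiset table ";
        return new StringBuffer(hasPrimaryKey ? normal : multiset)
                .append(' ')
                .append(qualifiedName)
                .append(" (")
                .toString();
    }

    public static void main(String[] args) {
        System.out.println(createString(true, "document"));
        // prints: create table document (
    }
}
```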
Modified: trunk/Hibernate3/test/org/hibernate/test/jpa/lock/JPALockTest.java
===================================================================
--- trunk/Hibernate3/test/org/hibernate/test/jpa/lock/JPALockTest.java 2007-03-19 20:44:11 UTC (rev 11302)
+++ trunk/Hibernate3/test/org/hibernate/test/jpa/lock/JPALockTest.java 2007-03-19 22:06:14 UTC (rev 11303)
@@ -56,6 +56,10 @@
if ( ! readCommittedIsolationMaintained( "ejb3 lock tests" ) ) {
return;
}
+ if(getDialect().doesReadCommittedCauseWritersToBlockReaders()) {
+ reportSkip("deadlock", "jpa read locking");
+ return;
+ }
final String initialName = "lock test";
// set up some test data
@@ -123,6 +127,10 @@
if ( ! readCommittedIsolationMaintained( "ejb3 lock tests" ) ) {
return;
}
+ if(getDialect().doesReadCommittedCauseWritersToBlockReaders()) {
+ reportSkip("deadlock", "jpa write locking");
+ return;
+ }
final String initialName = "lock test";
// set up some test data
Session s1 = getSessions().openSession();
Modified: trunk/Hibernate3/test/org/hibernate/test/jpa/lock/RepeatableReadTest.java
===================================================================
--- trunk/Hibernate3/test/org/hibernate/test/jpa/lock/RepeatableReadTest.java 2007-03-19 20:44:11 UTC (rev 11302)
+++ trunk/Hibernate3/test/org/hibernate/test/jpa/lock/RepeatableReadTest.java 2007-03-19 22:06:14 UTC (rev 11303)
@@ -35,6 +35,10 @@
// versioned entity tests ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
public void testStaleVersionedInstanceFoundInQueryResult() {
+ if ( getDialect().doesReadCommittedCauseWritersToBlockReaders()) {
+ reportSkip( "lock blocking", "stale versioned instance" );
+ return;
+ }
String check = "EJB3 Specification";
Session s1 = getSessions().openSession();
Transaction t1 = s1.beginTransaction();
@@ -83,6 +87,10 @@
if ( ! readCommittedIsolationMaintained( "repeatable read tests" ) ) {
return;
}
+ if ( getDialect().doesReadCommittedCauseWritersToBlockReaders()) {
+ reportSkip( "lock blocking", "stale versioned instance" );
+ return;
+ }
String check = "EJB3 Specification";
Session s1 = getSessions().openSession();
Transaction t1 = s1.beginTransaction();
@@ -153,6 +161,10 @@
// non-versioned entity tests ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
public void testStaleNonVersionedInstanceFoundInQueryResult() {
+ if ( getDialect().doesReadCommittedCauseWritersToBlockReaders()) {
+ reportSkip( "lock blocking", "stale versioned instance" );
+ return;
+ }
String check = "Lock Modes";
Session s1 = getSessions().openSession();
Transaction t1 = s1.beginTransaction();
@@ -200,6 +212,10 @@
if ( ! readCommittedIsolationMaintained( "repeatable read tests" ) ) {
return;
}
+ if ( getDialect().doesReadCommittedCauseWritersToBlockReaders()) {
+ reportSkip( "lock blocking", "stale versioned instance" );
+ return;
+ }
String check = "Lock Modes";
Session s1 = getSessions().openSession();
Transaction t1 = s1.beginTransaction();
Modified: trunk/Hibernate3/test/org/hibernate/test/optlock/OptimisticLockTest.java
===================================================================
--- trunk/Hibernate3/test/org/hibernate/test/optlock/OptimisticLockTest.java 2007-03-19 20:44:11 UTC (rev 11302)
+++ trunk/Hibernate3/test/org/hibernate/test/optlock/OptimisticLockTest.java 2007-03-19 22:06:14 UTC (rev 11303)
@@ -48,6 +48,10 @@
}
private void testUpdateOptimisticLockFailure(String entityName) {
+ if ( getDialect().doesRepeatableReadCauseReadersToBlockWriters() ) {
+ reportSkip( "deadlock", "update optimistic locking" );
+ return;
+ }
Session mainSession = openSession();
mainSession.beginTransaction();
Document doc = new Document();
@@ -107,6 +111,10 @@
}
private void testDeleteOptimisticLockFailure(String entityName) {
+ if ( getDialect().doesRepeatableReadCauseReadersToBlockWriters() ) {
+ reportSkip( "deadlock", "delete optimistic locking" );
+ return;
+ }
Session mainSession = openSession();
mainSession.beginTransaction();
Document doc = new Document();
Modified: trunk/Hibernate3/test/org/hibernate/test/tm/CMTTest.java
===================================================================
--- trunk/Hibernate3/test/org/hibernate/test/tm/CMTTest.java 2007-03-19 20:44:11 UTC (rev 11302)
+++ trunk/Hibernate3/test/org/hibernate/test/tm/CMTTest.java 2007-03-19 22:06:14 UTC (rev 11303)
@@ -210,9 +210,8 @@
}
public void testConcurrentCachedDirtyQueries() throws Exception {
- if ( getDialect() instanceof SybaseDialect ) {
- // sybase and sqlserver have serious locking issues here...
- reportSkip( "dead-lock bug", "concurrent queries" );
+ if ( getDialect().doesReadCommittedCauseWritersToBlockReaders() ) {
+ reportSkip( "write locks block readers", "concurrent queries" );
return;
}
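The test changes in this commit all follow one pattern, visible most clearly in the `CMTTest` hunk: replace `instanceof SybaseDialect` checks with a question about the dialect's locking *capability* (`doesReadCommittedCauseWritersToBlockReaders`), and skip the scenario when it cannot run. A sketch of that pattern with stand-in classes (the real methods live on `org.hibernate.dialect.Dialect` and the Hibernate test base class):

```java
// Hypothetical sketch of capability-based test skipping. LockingDialect and
// LockTestSketch are stand-ins for Dialect and the test-case base class.
class LockingDialect {
    private final boolean readCommittedBlocks;

    LockingDialect(boolean readCommittedBlocks) { this.readCommittedBlocks = readCommittedBlocks; }

    // Capability query: does a writer's lock block readers under read-committed?
    boolean doesReadCommittedCauseWritersToBlockReaders() { return readCommittedBlocks; }
}

class LockTestSketch {
    private final LockingDialect dialect;
    final StringBuilder log = new StringBuilder();

    LockTestSketch(LockingDialect dialect) { this.dialect = dialect; }

    void testJpaReadLocking() {
        if ( dialect.doesReadCommittedCauseWritersToBlockReaders() ) {
            // running this scenario would deadlock on such dialects; skip it
            log.append( "SKIP: jpa read locking (deadlock)" );
            return;
        }
        log.append( "RAN: jpa read locking" );
    }
}
```

Asking about behavior instead of a concrete class means any future dialect with the same locking semantics is skipped automatically, without editing each test.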
Hibernate SVN: r11302 - in trunk/Hibernate3/src/org/hibernate: engine and 3 other directories.
by hibernate-commits@lists.jboss.org
Author: steve.ebersole(a)jboss.com
Date: 2007-03-19 16:44:11 -0400 (Mon, 19 Mar 2007)
New Revision: 11302
Added:
trunk/Hibernate3/src/org/hibernate/engine/loading/
trunk/Hibernate3/src/org/hibernate/engine/loading/CollectionLoadContext.java
trunk/Hibernate3/src/org/hibernate/engine/loading/EntityLoadContext.java
trunk/Hibernate3/src/org/hibernate/engine/loading/LoadContexts.java
trunk/Hibernate3/src/org/hibernate/engine/loading/LoadingCollectionEntry.java
Removed:
trunk/Hibernate3/src/org/hibernate/engine/CollectionLoadContext.java
Modified:
trunk/Hibernate3/src/org/hibernate/collection/AbstractPersistentCollection.java
trunk/Hibernate3/src/org/hibernate/engine/PersistenceContext.java
trunk/Hibernate3/src/org/hibernate/engine/StatefulPersistenceContext.java
trunk/Hibernate3/src/org/hibernate/loader/Loader.java
trunk/Hibernate3/src/org/hibernate/type/CollectionType.java
Log:
HHH-2495 : ResultSet processing context
Modified: trunk/Hibernate3/src/org/hibernate/collection/AbstractPersistentCollection.java
===================================================================
--- trunk/Hibernate3/src/org/hibernate/collection/AbstractPersistentCollection.java 2007-03-19 20:43:46 UTC (rev 11301)
+++ trunk/Hibernate3/src/org/hibernate/collection/AbstractPersistentCollection.java 2007-03-19 20:44:11 UTC (rev 11302)
@@ -538,7 +538,7 @@
/**
* Get the current session
*/
- protected final SessionImplementor getSession() {
+ public final SessionImplementor getSession() {
return session;
}
Deleted: trunk/Hibernate3/src/org/hibernate/engine/CollectionLoadContext.java
===================================================================
--- trunk/Hibernate3/src/org/hibernate/engine/CollectionLoadContext.java 2007-03-19 20:43:46 UTC (rev 11301)
+++ trunk/Hibernate3/src/org/hibernate/engine/CollectionLoadContext.java 2007-03-19 20:44:11 UTC (rev 11302)
@@ -1,340 +0,0 @@
-//$Id$
-package org.hibernate.engine;
-
-import java.io.Serializable;
-import java.util.ArrayList;
-import java.util.Comparator;
-import java.util.HashMap;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.hibernate.CacheMode;
-import org.hibernate.EntityMode;
-import org.hibernate.HibernateException;
-import org.hibernate.cache.CacheKey;
-import org.hibernate.cache.entry.CollectionCacheEntry;
-import org.hibernate.collection.PersistentCollection;
-import org.hibernate.persister.collection.CollectionPersister;
-import org.hibernate.pretty.MessageHelper;
-
-/**
- * Represents the state of collections currently being loaded. Eventually, I
- * would like to have multiple instances of this per session - one per JDBC
- * result set, instead of the resultSetId being passed.
- * @author Gavin King
- */
-public class CollectionLoadContext {
-
- private static final Log log = LogFactory.getLog(CollectionLoadContext.class);
-
- // The collections we are currently loading
- private final Map loadingCollections = new HashMap(8);
- private final PersistenceContext context;
-
- public CollectionLoadContext(PersistenceContext context) {
- this.context = context;
- }
-
- private static final class LoadingCollectionEntry {
-
- final PersistentCollection collection;
- final Serializable key;
- final Object resultSetId;
- final CollectionPersister persister;
-
- LoadingCollectionEntry(
- final PersistentCollection collection,
- final Serializable key,
- final CollectionPersister persister,
- final Object resultSetId
- ) {
- this.collection = collection;
- this.key = key;
- this.persister = persister;
- this.resultSetId = resultSetId;
- }
- }
-
- /**
- * Retrieve a collection that is in the process of being loaded, instantiating
- * a new collection if there is nothing for the given id, or returning null
- * if the collection with the given id is already fully loaded in the session
- */
- public PersistentCollection getLoadingCollection(
- final CollectionPersister persister,
- final Serializable key,
- final Object resultSetId,
- final EntityMode em)
- throws HibernateException {
- CollectionKey ckey = new CollectionKey(persister, key, em);
- LoadingCollectionEntry lce = getLoadingCollectionEntry(ckey);
- if ( lce == null ) {
- //look for existing collection
- PersistentCollection collection = context.getCollection(ckey);
- if ( collection != null ) {
- if ( collection.wasInitialized() ) {
- log.trace( "collection already initialized: ignoring" );
- return null; //ignore this row of results! Note the early exit
- }
- else {
- //initialize this collection
- log.trace( "uninitialized collection: initializing" );
- }
- }
- else {
- Object entity = context.getCollectionOwner(key, persister);
- final boolean newlySavedEntity = entity != null &&
- context.getEntry(entity).getStatus() != Status.LOADING &&
- em!=EntityMode.DOM4J;
- if ( newlySavedEntity ) {
- //important, to account for newly saved entities in query
- //TODO: some kind of check for new status...
- log.trace( "owning entity already loaded: ignoring" );
- return null;
- }
- else {
- //create one
- log.trace( "new collection: instantiating" );
- collection = persister.getCollectionType()
- .instantiate( context.getSession(), persister, key );
- }
- }
- collection.beforeInitialize( persister, -1 );
- collection.beginRead();
- addLoadingCollectionEntry(ckey, collection, persister, resultSetId);
- return collection;
- }
- else {
- if ( lce.resultSetId == resultSetId ) {
- log.trace( "reading row" );
- return lce.collection;
- }
- else {
- // ignore this row, the collection is in process of
- // being loaded somewhere further "up" the stack
- log.trace( "collection is already being initialized: ignoring row" );
- return null;
- }
- }
- }
-
- /**
- * Retrieve a collection that is in the process of being loaded, returning null
- * if there is no loading collection with the given id
- */
- public PersistentCollection getLoadingCollection(CollectionPersister persister, Serializable id, EntityMode em) {
- LoadingCollectionEntry lce = getLoadingCollectionEntry( new CollectionKey(persister, id, em) );
- if ( lce != null ) {
- if ( log.isTraceEnabled() ) {
- log.trace(
- "returning loading collection:" +
- MessageHelper.collectionInfoString(persister, id, context.getSession().getFactory())
- );
- }
- return lce.collection;
- }
- else {
- if ( log.isTraceEnabled() ) {
- log.trace(
- "creating collection wrapper:" +
- MessageHelper.collectionInfoString(persister, id, context.getSession().getFactory())
- );
- }
- return null;
- }
- }
-
- /**
- * Create a new loading collection entry
- */
- private void addLoadingCollectionEntry(
- final CollectionKey collectionKey,
- final PersistentCollection collection,
- final CollectionPersister persister,
- final Object resultSetId
- ) {
- loadingCollections.put(
- collectionKey,
- new LoadingCollectionEntry(
- collection,
- collectionKey.getKey(),
- persister,
- resultSetId
- )
- );
- }
-
- /**
- * get an existing new loading collection entry
- */
- private LoadingCollectionEntry getLoadingCollectionEntry(CollectionKey collectionKey) {
- return ( LoadingCollectionEntry ) loadingCollections.get( collectionKey );
- }
-
- /**
- * After we have finished processing a result set, a particular loading collection that
- * we are done.
- */
- private void endLoadingCollection(LoadingCollectionEntry lce, CollectionPersister persister, EntityMode em) {
-
- boolean hasNoQueuedAdds = lce.collection.endRead(); //warning: can cause a recursive query! (proxy initialization)
-
- if ( persister.getCollectionType().hasHolder(em) ) {
- context.addCollectionHolder(lce.collection);
- }
-
- CollectionEntry ce = context.getCollectionEntry(lce.collection);
- if ( ce==null ) {
- ce = context.addInitializedCollection(persister, lce.collection, lce.key);
- }
- else {
- ce.postInitialize(lce.collection);
- }
-
- final SessionImplementor session = context.getSession();
-
- boolean addToCache = hasNoQueuedAdds && // there were no queued additions
- persister.hasCache() && // and the role has a cache
- session.getCacheMode().isPutEnabled() &&
- !ce.isDoremove(); // and this is not a forced initialization during flush
- if (addToCache) addCollectionToCache(lce, persister);
-
- if ( log.isDebugEnabled() ) {
- log.debug(
- "collection fully initialized: " +
- MessageHelper.collectionInfoString(persister, lce.key, context.getSession().getFactory())
- );
- }
-
- if ( session.getFactory().getStatistics().isStatisticsEnabled() ) {
- session.getFactory().getStatisticsImplementor().loadCollection(
- persister.getRole()
- );
- }
-
- }
- /**
- * Finish the process of loading collections for a particular result set
- */
- public void endLoadingCollections(CollectionPersister persister, Object resultSetId, SessionImplementor session)
- throws HibernateException {
-
- // scan the loading collections for collections from this result set
- // put them in a new temp collection so that we are safe from concurrent
- // modification when the call to endRead() causes a proxy to be
- // initialized
- List resultSetCollections = null; //TODO: make this the resultSetId?
- Iterator iter = loadingCollections.values().iterator();
- while ( iter.hasNext() ) {
- LoadingCollectionEntry lce = (LoadingCollectionEntry) iter.next();
- if ( lce.resultSetId == resultSetId && lce.persister==persister) {
- if ( resultSetCollections == null ) {
- resultSetCollections = new ArrayList();
- }
- resultSetCollections.add(lce);
- if ( lce.collection.getOwner()==null ) {
- session.getPersistenceContext()
- .addUnownedCollection(
- new CollectionKey( persister, lce.key, session.getEntityMode() ),
- lce.collection
- );
- }
- iter.remove();
- }
- }
-
- endLoadingCollections( persister, resultSetCollections, session.getEntityMode() );
- }
-
- /**
- * After we have finished processing a result set, notify the loading collections that
- * we are done.
- */
- private void endLoadingCollections(CollectionPersister persister, List resultSetCollections, EntityMode em)
- throws HibernateException {
-
- final int count = (resultSetCollections == null) ? 0 : resultSetCollections.size();
-
- if ( log.isDebugEnabled() ) {
- log.debug( count + " collections were found in result set for role: " + persister.getRole() );
- }
-
- //now finish them
- for ( int i = 0; i < count; i++ ) {
- LoadingCollectionEntry lce = (LoadingCollectionEntry) resultSetCollections.get(i);
- endLoadingCollection(lce, persister, em);
- }
-
- if ( log.isDebugEnabled() ) {
- log.debug( count + " collections initialized for role: " + persister.getRole() );
- }
- }
-
- /**
- * Add a collection to the second-level cache
- */
- private void addCollectionToCache(LoadingCollectionEntry lce, CollectionPersister persister) {
-
- if ( log.isDebugEnabled() ) {
- log.debug(
- "Caching collection: " +
- MessageHelper.collectionInfoString( persister, lce.key, context.getSession().getFactory() )
- );
- }
-
- final SessionImplementor session = context.getSession();
- final SessionFactoryImplementor factory = session.getFactory();
-
- if ( !session.getEnabledFilters().isEmpty() && persister.isAffectedByEnabledFilters( session ) ) {
- // some filters affecting the collection are enabled on the session, so do not do the put into the cache.
- log.debug( "Refusing to add to cache due to enabled filters" );
- // todo : add the notion of enabled filters to the CacheKey to differentiate filtered collections from non-filtered;
- // but CacheKey is currently used for both collections and entities; would ideally need to define two seperate ones;
- // currently this works in conjuction with the check on
- // DefaultInitializeCollectionEventHandler.initializeCollectionFromCache() (which makes sure to not read from
- // cache with enabled filters).
- return; // EARLY EXIT!!!!!
- }
-
- final Comparator versionComparator;
- final Object version;
- if ( persister.isVersioned() ) {
- versionComparator = persister.getOwnerEntityPersister().getVersionType().getComparator();
- version = context.getEntry( context.getCollectionOwner(lce.key, persister) ).getVersion();
- }
- else {
- version = null;
- versionComparator = null;
- }
-
- CollectionCacheEntry entry = new CollectionCacheEntry(lce.collection, persister);
-
- CacheKey cacheKey = new CacheKey(
- lce.key,
- persister.getKeyType(),
- persister.getRole(),
- session.getEntityMode(),
- session.getFactory()
- );
- boolean put = persister.getCache().put(
- cacheKey,
- persister.getCacheEntryStructure().structure(entry),
- session.getTimestamp(),
- version,
- versionComparator,
- factory.getSettings().isMinimalPutsEnabled() &&
- session.getCacheMode()!=CacheMode.REFRESH
- );
-
- if ( put && factory.getStatistics().isStatisticsEnabled() ) {
- factory.getStatisticsImplementor().secondLevelCachePut(
- persister.getCache().getRegionName()
- );
- }
- }
-
-
-}
Modified: trunk/Hibernate3/src/org/hibernate/engine/PersistenceContext.java
===================================================================
--- trunk/Hibernate3/src/org/hibernate/engine/PersistenceContext.java 2007-03-19 20:43:46 UTC (rev 11301)
+++ trunk/Hibernate3/src/org/hibernate/engine/PersistenceContext.java 2007-03-19 20:44:11 UTC (rev 11302)
@@ -8,6 +8,8 @@
import org.hibernate.HibernateException;
import org.hibernate.LockMode;
import org.hibernate.MappingException;
+import org.hibernate.engine.loading.ResultSetProcessingContexts;
+import org.hibernate.engine.loading.LoadContexts;
import org.hibernate.collection.PersistentCollection;
import org.hibernate.persister.collection.CollectionPersister;
import org.hibernate.persister.entity.EntityPersister;
@@ -23,14 +25,18 @@
public boolean isStateless();
/**
- * Get the session
+ * Get the session to which this persistence context is bound.
+ *
+ * @return The session.
*/
public SessionImplementor getSession();
-
+
/**
- * Get the context for collection loading
+ * Retrieve this persistence context's managed load context.
+ *
+ * @return The load context
*/
- public CollectionLoadContext getCollectionLoadContext();
+ public LoadContexts getLoadContexts();
/**
* Add a collection which has no owner loaded
Modified: trunk/Hibernate3/src/org/hibernate/engine/StatefulPersistenceContext.java
===================================================================
--- trunk/Hibernate3/src/org/hibernate/engine/StatefulPersistenceContext.java 2007-03-19 20:43:46 UTC (rev 11301)
+++ trunk/Hibernate3/src/org/hibernate/engine/StatefulPersistenceContext.java 2007-03-19 20:44:11 UTC (rev 11302)
@@ -24,6 +24,7 @@
import org.hibernate.NonUniqueObjectException;
import org.hibernate.PersistentObjectException;
import org.hibernate.TransientObjectException;
+import org.hibernate.engine.loading.LoadContexts;
import org.hibernate.pretty.MessageHelper;
import org.hibernate.collection.PersistentCollection;
import org.hibernate.persister.collection.CollectionPersister;
@@ -100,8 +101,8 @@
private boolean flushing = false;
private boolean hasNonReadOnlyEntities = false;
-
- private CollectionLoadContext collectionLoadContext;
+
+ private LoadContexts loadContexts;
private BatchFetchQueue batchFetchQueue;
@@ -141,14 +142,14 @@
public SessionImplementor getSession() {
return session;
}
-
- public CollectionLoadContext getCollectionLoadContext() {
- if (collectionLoadContext==null) {
- collectionLoadContext = new CollectionLoadContext(this);
+
+ public LoadContexts getLoadContexts() {
+ if ( loadContexts == null ) {
+ loadContexts = new LoadContexts( this );
}
- return collectionLoadContext;
+ return loadContexts;
}
-
+
public void addUnownedCollection(CollectionKey key, PersistentCollection collection) {
if (unownedCollections==null) {
unownedCollections = new HashMap(8);
Added: trunk/Hibernate3/src/org/hibernate/engine/loading/CollectionLoadContext.java
===================================================================
--- trunk/Hibernate3/src/org/hibernate/engine/loading/CollectionLoadContext.java (rev 0)
+++ trunk/Hibernate3/src/org/hibernate/engine/loading/CollectionLoadContext.java 2007-03-19 20:44:11 UTC (rev 11302)
@@ -0,0 +1,332 @@
+package org.hibernate.engine.loading;
+
+import java.sql.ResultSet;
+import java.io.Serializable;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Iterator;
+import java.util.ArrayList;
+import java.util.Comparator;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.hibernate.collection.PersistentCollection;
+import org.hibernate.persister.collection.CollectionPersister;
+import org.hibernate.EntityMode;
+import org.hibernate.CacheMode;
+import org.hibernate.cache.entry.CollectionCacheEntry;
+import org.hibernate.cache.CacheKey;
+import org.hibernate.pretty.MessageHelper;
+import org.hibernate.engine.CollectionKey;
+import org.hibernate.engine.Status;
+import org.hibernate.engine.SessionImplementor;
+import org.hibernate.engine.CollectionEntry;
+import org.hibernate.engine.SessionFactoryImplementor;
+
+/**
+ * Represents state associated with the processing of a given {@link ResultSet}
+ * in regards to loading collections.
+ * <p/>
+ * Another implementation option to consider is to not expose {@link ResultSet}s
+ * directly (in the JDBC redesign) but to always "wrap" them and apply a
+ * [series of] context[s] to that wrapper.
+ *
+ * @author Steve Ebersole
+ */
+public class CollectionLoadContext {
+ private static final Log log = LogFactory.getLog( CollectionLoadContext.class );
+
+ private final LoadContexts loadContexts;
+ private final ResultSet resultSet;
+ private final Map loadingCollections = new HashMap( 8 );
+
+ /**
+ * Creates a collection load context for the given result set.
+ *
+ * @param loadContexts Callback to other collection load contexts.
+ * @param resultSet The result set this is "wrapping".
+ */
+ public CollectionLoadContext(LoadContexts loadContexts, ResultSet resultSet) {
+ this.loadContexts = loadContexts;
+ this.resultSet = resultSet;
+ }
+
+ public ResultSet getResultSet() {
+ return resultSet;
+ }
+
+ public LoadContexts getLoadContext() {
+ return loadContexts;
+ }
+
+ /**
+ * Retrieve the collection that is being loaded as part of processing this
+ * result set.
+ * <p/>
+ * Basically, there are two valid return values from this method:<ul>
+ * <li>an instance of {@link PersistentCollection} which indicates to
+ * continue loading the result set row data into that returned collection
+ * instance; this may be either an instance already associated and in the
+ * midst of being loaded, or a newly instantiated instance as a matching
+ * associated collection was not found.</li>
+ * <li><i>null</i> indicates to ignore the corresponding result set row
+ * data relating to the requested collection; this indicates that either
+ * the collection was found to already be associated with the persistence
+ * context in a fully loaded state, or it was found in a loading state
+ * associated with another result set processing context.</li>
+ * </ul>
+ *
+ * @param persister The persister for the collection being requested.
+ * @param key The key of the collection being requested.
+ *
+ * @return The loading collection (see discussion above).
+ */
+ public PersistentCollection getLoadingCollection(final CollectionPersister persister, final Serializable key) {
+ final EntityMode em = loadContexts.getPersistenceContext().getSession().getEntityMode();
+ final CollectionKey collectionKey = new CollectionKey( persister, key, em );
+ if ( log.isTraceEnabled() ) {
+ log.trace( "starting attempt to find loading collection [" + MessageHelper.collectionInfoString( persister.getRole(), key ) + "]" );
+ }
+ final LoadingCollectionEntry loadingCollectionEntry = locateLoadingCollectionEntry( collectionKey );
+ if ( loadingCollectionEntry == null ) {
+ // look for existing collection as part of the persistence context
+ PersistentCollection collection = loadContexts.getPersistenceContext().getCollection( collectionKey );
+ if ( collection != null ) {
+ if ( collection.wasInitialized() ) {
+ log.trace( "collection already initialized; ignoring" );
+ return null; // ignore this row of results! Note the early exit
+ }
+ else {
+ // initialize this collection
+ log.trace( "collection not yet initialized; initializing" );
+ }
+ }
+ else {
+ Object owner = loadContexts.getPersistenceContext().getCollectionOwner( key, persister );
+ final boolean newlySavedEntity = owner != null
+ && loadContexts.getPersistenceContext().getEntry( owner ).getStatus() != Status.LOADING
+ && em != EntityMode.DOM4J;
+ if ( newlySavedEntity ) {
+ // important, to account for newly saved entities in query
+ // todo : some kind of check for new status...
+ log.trace( "owning entity already loaded; ignoring" );
+ return null;
+ }
+ else {
+ // create one
+ if ( log.isTraceEnabled() ) {
+ log.trace( "instantiating new collection [key=" + key + ", rs=" + resultSet + "]" );
+ }
+ collection = persister.getCollectionType()
+ .instantiate( loadContexts.getPersistenceContext().getSession(), persister, key );
+ }
+ }
+ collection.beforeInitialize( persister, -1 );
+ collection.beginRead();
+ loadingCollections.put( collectionKey, new LoadingCollectionEntry( resultSet, persister, key, collection ) );
+ return collection;
+ }
+ else {
+ if ( loadingCollectionEntry.getResultSet() == resultSet ) {
+ log.trace( "found loading collection bound to current result set processing; reading row" );
+ return loadingCollectionEntry.getCollection();
+ }
+ else {
+ // ignore this row, the collection is in process of
+ // being loaded somewhere further "up" the stack
+ log.trace( "collection is already being initialized; ignoring row" );
+ return null;
+ }
+ }
+ }
+
+ private LoadingCollectionEntry locateLoadingCollectionEntry(CollectionKey collectionKey) {
+ if ( log.isTraceEnabled() ) {
+ log.trace( "attempting to locate loading collection entry [" + collectionKey + "]" );
+ }
+ // first try our loading collections
+ LoadingCollectionEntry loadingCollectionEntry = getLocalLoadingCollectionEntry( collectionKey );
+ if ( loadingCollectionEntry == null ) {
+ // the loading collection is not associated with our result set, so check the other
+ // result sets registered with the load context...
+ loadingCollectionEntry = loadContexts.locateLoadingCollectionEntry( collectionKey, this );
+ }
+ return loadingCollectionEntry;
+ }
+
+ LoadingCollectionEntry getLocalLoadingCollectionEntry(CollectionKey key) {
+ if ( log.isTraceEnabled() ) {
+ log.trace( "attempting to locally locate loading collection entry [key=" + key + ", rs=" + resultSet + "]" );
+ }
+ return ( LoadingCollectionEntry ) loadingCollections.get( key );
+ }
+
+ /**
+ * Finish the process of collection-loading for this bound result set. Mainly this
+ * involves cleaning up resources and notifying the collections that loading is
+ * complete.
+ *
+ * @param persister The persister for which to complete loading.
+ */
+ public void endLoadingCollections(CollectionPersister persister) {
+ SessionImplementor session = getLoadContext().getPersistenceContext().getSession();
+
+ // in an effort to avoid concurrent-modification-exceptions (from
+ // potential recursive calls back through here as a result of the
+ // eventual call to PersistentCollection#endRead), we scan the
+ // internal loadingCollections map for matches and store those matches
+ // in a temp collection. the temp collection is then used to "drive"
+ // the #endRead processing.
+ List matches = null;
+ Iterator iter = loadingCollections.values().iterator();
+ while ( iter.hasNext() ) {
+ LoadingCollectionEntry lce = (LoadingCollectionEntry) iter.next();
+ if ( lce.getResultSet() == resultSet && lce.getPersister() == persister) {
+ if ( matches == null ) {
+ matches = new ArrayList();
+ }
+ matches.add( lce );
+ if ( lce.getCollection().getOwner() == null ) {
+ session.getPersistenceContext().addUnownedCollection(
+ new CollectionKey( persister, lce.getKey(), session.getEntityMode() ),
+ lce.getCollection()
+ );
+ }
+ if ( log.isTraceEnabled() ) {
+ log.trace( "removing collection load entry [" + lce + "]" );
+ }
+ iter.remove();
+ }
+ }
+
+ endLoadingCollections( persister, matches );
+ }
+
+ private void endLoadingCollections(CollectionPersister persister, List matchedCollectionEntries) {
+ final int count = ( matchedCollectionEntries == null ) ? 0 : matchedCollectionEntries.size();
+
+ if ( log.isDebugEnabled() ) {
+ log.debug( count + " collections were found in result set for role: " + persister.getRole() );
+ }
+
+ for ( int i = 0; i < count; i++ ) {
+ LoadingCollectionEntry lce = ( LoadingCollectionEntry ) matchedCollectionEntries.get( i );
+ endLoadingCollection( lce, persister );
+ }
+
+ if ( log.isDebugEnabled() ) {
+ log.debug( count + " collections initialized for role: " + persister.getRole() );
+ }
+ }
+
+ private void endLoadingCollection(LoadingCollectionEntry lce, CollectionPersister persister) {
+ if ( log.isTraceEnabled() ) {
+ log.trace( "ending loading collection [" + lce + "]" );
+ }
+ final SessionImplementor session = getLoadContext().getPersistenceContext().getSession();
+ final EntityMode em = session.getEntityMode();
+
+ boolean hasNoQueuedAdds = lce.getCollection().endRead(); // warning: can cause recursive calls! (proxy initialization)
+
+ if ( persister.getCollectionType().hasHolder( em ) ) {
+ getLoadContext().getPersistenceContext().addCollectionHolder( lce.getCollection() );
+ }
+
+ CollectionEntry ce = getLoadContext().getPersistenceContext().getCollectionEntry( lce.getCollection() );
+ if ( ce == null ) {
+ ce = getLoadContext().getPersistenceContext().addInitializedCollection( persister, lce.getCollection(), lce.getKey() );
+ }
+ else {
+ ce.postInitialize( lce.getCollection() );
+ }
+
+ boolean addToCache = hasNoQueuedAdds && // there were no queued additions
+ persister.hasCache() && // and the role has a cache
+ session.getCacheMode().isPutEnabled() &&
+ !ce.isDoremove(); // and this is not a forced initialization during flush
+ if ( addToCache ) {
+ addCollectionToCache( lce, persister );
+ }
+
+ if ( log.isDebugEnabled() ) {
+ log.debug( "collection fully initialized: " + MessageHelper.collectionInfoString(persister, lce.getKey(), session.getFactory() ) );
+ }
+
+ if ( session.getFactory().getStatistics().isStatisticsEnabled() ) {
+ session.getFactory().getStatisticsImplementor().loadCollection( persister.getRole() );
+ }
+ }
+
+ /**
+ * Add the collection to the second-level cache
+ *
+ * @param lce The entry representing the collection to add
+ * @param persister The persister
+ */
+ private void addCollectionToCache(LoadingCollectionEntry lce, CollectionPersister persister) {
+ final SessionImplementor session = getLoadContext().getPersistenceContext().getSession();
+ final SessionFactoryImplementor factory = session.getFactory();
+
+ if ( log.isDebugEnabled() ) {
+ log.debug( "Caching collection: " + MessageHelper.collectionInfoString( persister, lce.getKey(), factory ) );
+ }
+
+ if ( !session.getEnabledFilters().isEmpty() && persister.isAffectedByEnabledFilters( session ) ) {
+ // some filters affecting the collection are enabled on the session, so do not do the put into the cache.
+ log.debug( "Refusing to add to cache due to enabled filters" );
+ // todo : add the notion of enabled filters to the CacheKey to differentiate filtered collections from non-filtered;
+ // but CacheKey is currently used for both collections and entities; would ideally need to define two separate ones;
+ // currently this works in conjunction with the check on
+ // DefaultInitializeCollectionEventHandler.initializeCollectionFromCache() (which makes sure to not read from
+ // cache with enabled filters).
+ return; // EARLY EXIT!!!!!
+ }
+
+ final Comparator versionComparator;
+ final Object version;
+ if ( persister.isVersioned() ) {
+ versionComparator = persister.getOwnerEntityPersister().getVersionType().getComparator();
+ final Object collectionOwner = getLoadContext().getPersistenceContext().getCollectionOwner( lce.getKey(), persister );
+ version = getLoadContext().getPersistenceContext().getEntry( collectionOwner ).getVersion();
+ }
+ else {
+ version = null;
+ versionComparator = null;
+ }
+
+ CollectionCacheEntry entry = new CollectionCacheEntry( lce.getCollection(), persister );
+ CacheKey cacheKey = new CacheKey(
+ lce.getKey(),
+ persister.getKeyType(),
+ persister.getRole(),
+ session.getEntityMode(),
+ session.getFactory()
+ );
+ boolean put = persister.getCache().put(
+ cacheKey,
+ persister.getCacheEntryStructure().structure(entry),
+ session.getTimestamp(),
+ version,
+ versionComparator,
+ factory.getSettings().isMinimalPutsEnabled() && session.getCacheMode()!= CacheMode.REFRESH
+ );
+
+ if ( put && factory.getStatistics().isStatisticsEnabled() ) {
+ factory.getStatisticsImplementor().secondLevelCachePut( persister.getCache().getRegionName() );
+ }
+ }
+
+ void cleanup() {
+ if ( !loadingCollections.isEmpty() ) {
+ log.warn( "On CollectionLoadContext#clear, loadingCollections contained [" + loadingCollections.size() + "] entries" );
+ }
+ loadingCollections.clear();
+ }
+
+
+ public String toString() {
+ return super.toString() + "<rs=" + resultSet + ">";
+ }
+}
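The `put` boolean returned by the cache call in `addCollectionToCache` above also gates the statistics update, and the last argument is a minimal-puts hint. As a hedged sketch of how a cache might honor that hint (skip redundant writes so `put` reports only real insertions), using plain JDK collections; `SketchCache` and its signature are illustrative assumptions, not Hibernate API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative cache that honors a minimal-put request by skipping
// the write when an entry already exists under the key.
class SketchCache {
    private final Map<Object, Object> store = new HashMap<>();

    /** Returns true only if the value was actually written. */
    boolean put(Object key, Object value, boolean minimalPut) {
        if (minimalPut && store.containsKey(key)) {
            return false; // entry already present: skip the redundant put
        }
        store.put(key, value);
        return true;
    }
}
```

Under this reading, the `if ( put && ... )` statistics guard counts only writes that actually reached the cache region.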
Added: trunk/Hibernate3/src/org/hibernate/engine/loading/EntityLoadContext.java
===================================================================
--- trunk/Hibernate3/src/org/hibernate/engine/loading/EntityLoadContext.java (rev 0)
+++ trunk/Hibernate3/src/org/hibernate/engine/loading/EntityLoadContext.java 2007-03-19 20:44:11 UTC (rev 11302)
@@ -0,0 +1,33 @@
+package org.hibernate.engine.loading;
+
+import java.sql.ResultSet;
+import java.util.List;
+import java.util.ArrayList;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+/**
+ * Tracks entity instances currently being hydrated from a given result set.
+ *
+ * @author Steve Ebersole
+ */
+public class EntityLoadContext {
+ private static final Log log = LogFactory.getLog( EntityLoadContext.class );
+
+ private final LoadContexts loadContexts;
+ private final ResultSet resultSet;
+ private final List hydratingEntities = new ArrayList( 20 ); // todo : need map? the prob is a proper key, right?
+
+ public EntityLoadContext(LoadContexts loadContexts, ResultSet resultSet) {
+ this.loadContexts = loadContexts;
+ this.resultSet = resultSet;
+ }
+
+ void cleanup() {
+ if ( !hydratingEntities.isEmpty() ) {
+ log.warn( "On EntityLoadContext#clear, hydratingEntities contained [" + hydratingEntities.size() + "] entries" );
+ }
+ hydratingEntities.clear();
+ }
+}
Added: trunk/Hibernate3/src/org/hibernate/engine/loading/LoadContexts.java
===================================================================
--- trunk/Hibernate3/src/org/hibernate/engine/loading/LoadContexts.java (rev 0)
+++ trunk/Hibernate3/src/org/hibernate/engine/loading/LoadContexts.java 2007-03-19 20:44:11 UTC (rev 11302)
@@ -0,0 +1,184 @@
+package org.hibernate.engine.loading;
+
+import java.sql.ResultSet;
+import java.util.Map;
+import java.util.Iterator;
+import java.io.Serializable;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.hibernate.util.IdentityMap;
+import org.hibernate.engine.PersistenceContext;
+import org.hibernate.engine.CollectionKey;
+import org.hibernate.engine.SessionImplementor;
+import org.hibernate.collection.PersistentCollection;
+import org.hibernate.persister.collection.CollectionPersister;
+import org.hibernate.pretty.MessageHelper;
+import org.hibernate.EntityMode;
+
+/**
+ * Maps {@link ResultSet result-sets} to specific contextual data
+ * related to processing that {@link ResultSet result-set}.
+ * <p/>
+ * Implementation note: internally an {@link IdentityMap} is used to maintain
+ * the mappings; {@link IdentityMap} was chosen because I'd rather not be
+ * dependent upon potentially bad {@link ResultSet#equals} and {@link ResultSet#hashCode}
+ * implementations.
+ * <p/>
+ * Considering the JDBC-redesign work, we would further like this contextual info
+ * not mapped separately, but available based on the result set being processed.
+ * This would also allow maintaining a single mapping as we could reliably get
+ * notification of the result-set closing...
+ *
+ * @author Steve Ebersole
+ */
+public class LoadContexts {
+ private static final Log log = LogFactory.getLog( LoadContexts.class );
+
+ private final PersistenceContext persistenceContext;
+ private Map collectionLoadContexts;
+ private Map entityLoadContexts;
+
+ /**
+ * Creates and binds this to the given persistence context.
+ *
+ * @param persistenceContext The persistence context to which this
+ * will be bound.
+ */
+ public LoadContexts(PersistenceContext persistenceContext) {
+ this.persistenceContext = persistenceContext;
+ }
+
+ /**
+ * Retrieves the persistence context to which this is bound.
+ *
+ * @return The persistence context to which this is bound.
+ */
+ public PersistenceContext getPersistenceContext() {
+ return persistenceContext;
+ }
+
+ /**
+ * Get the {@link CollectionLoadContext} associated with the given
+ * {@link ResultSet}, creating one if needed.
+ *
+ * @param resultSet The result set for which to retrieve the context.
+ * @return The processing context.
+ */
+ public CollectionLoadContext getCollectionLoadContext(ResultSet resultSet) {
+ CollectionLoadContext context = null;
+ if ( collectionLoadContexts == null ) {
+ collectionLoadContexts = IdentityMap.instantiate( 8 );
+ }
+ else {
+ context = ( CollectionLoadContext ) collectionLoadContexts.get( resultSet );
+ }
+ if ( context == null ) {
+ if ( log.isTraceEnabled() ) {
+ log.trace( "constructing collection load context for result set [" + resultSet + "]" );
+ }
+ context = new CollectionLoadContext( this, resultSet );
+ collectionLoadContexts.put( resultSet, context );
+ }
+ return context;
+ }
+
+ /**
+ * Attempt to locate the loading collection given the owner's key. The lookup here
+ * occurs against all result-set contexts...
+ *
+ * @param persister The collection persister
+ * @param ownerKey The owner key
+ * @return The loading collection, or null if not found.
+ */
+ public PersistentCollection locateLoadingCollection(CollectionPersister persister, Serializable ownerKey) {
+ LoadingCollectionEntry lce = locateLoadingCollectionEntry( new CollectionKey( persister, ownerKey, getEntityMode() ), null ); // note: null because here we are interested in all contexts...
+ if ( lce != null ) {
+ if ( log.isTraceEnabled() ) {
+ log.trace( "returning loading collection:" + MessageHelper.collectionInfoString( persister, ownerKey, getSession().getFactory() ) );
+ }
+ return lce.getCollection();
+ }
+ else {
+ // todo : should really move this log statement to CollectionType, where this is used from...
+ if ( log.isTraceEnabled() ) {
+ log.trace( "creating collection wrapper:" + MessageHelper.collectionInfoString( persister, ownerKey, getSession().getFactory() ) );
+ }
+ return null;
+ }
+ }
+
+ /**
+ * Locate the LoadingCollectionEntry within *any* of the tracked
+ * {@link CollectionLoadContext}s.
+ * <p/>
+ * Implementation note: package protected, as this is meant solely for use
+ * by {@link CollectionLoadContext} to be able to locate collections
+ * being loaded by other {@link CollectionLoadContext}s/{@link ResultSet}s.
+ *
+ * @param key The collection key.
+ * @param caller The collection load context making this call (for performance optimization)
+ * @return The located entry; or null.
+ */
+ LoadingCollectionEntry locateLoadingCollectionEntry(CollectionKey key, CollectionLoadContext caller) {
+ if ( collectionLoadContexts == null ) {
+ return null;
+ }
+ if ( log.isTraceEnabled() ) {
+ log.trace( "attempting to locate loading collection entry [" + key + "] in any result-set context" );
+ }
+ LoadingCollectionEntry rtn = null;
+ Iterator itr = collectionLoadContexts.values().iterator();
+ while ( itr.hasNext() ) {
+ final CollectionLoadContext collectionLoadContext = ( CollectionLoadContext ) itr.next();
+ if ( collectionLoadContext == caller ) {
+ continue;
+ }
+ rtn = collectionLoadContext.getLocalLoadingCollectionEntry( key );
+ if ( rtn != null ) {
+ if ( log.isTraceEnabled() ) {
+ log.trace( "collection [" + key + "] located in load context [" + collectionLoadContext + "]" );
+ }
+ break;
+ }
+ }
+ return rtn;
+ }
+
+ public EntityLoadContext getEntityLoadContext(ResultSet resultSet) {
+ EntityLoadContext context = null;
+ if ( entityLoadContexts == null ) {
+ entityLoadContexts = IdentityMap.instantiate( 8 );
+ }
+ else {
+ context = ( EntityLoadContext ) entityLoadContexts.get( resultSet );
+ }
+ if ( context == null ) {
+ context = new EntityLoadContext( this, resultSet );
+ entityLoadContexts.put( resultSet, context );
+ }
+ return context;
+ }
+
+ public void cleanup(ResultSet resultSet) {
+ if ( collectionLoadContexts != null ) {
+ CollectionLoadContext collectionLoadContext = ( CollectionLoadContext ) collectionLoadContexts.remove( resultSet );
+ if ( collectionLoadContext != null ) {
+ collectionLoadContext.cleanup();
+ }
+ }
+ if ( entityLoadContexts != null ) {
+ EntityLoadContext entityLoadContext = ( EntityLoadContext ) entityLoadContexts.remove( resultSet );
+ if ( entityLoadContext != null ) {
+ entityLoadContext.cleanup();
+ }
+ }
+ }
+
+ private SessionImplementor getSession() {
+ return getPersistenceContext().getSession();
+ }
+
+ private EntityMode getEntityMode() {
+ return getSession().getEntityMode();
+ }
+
+
+}
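The lazily-created, identity-keyed context map used by `getCollectionLoadContext` and `getEntityLoadContext` above can be sketched standalone with `java.util.IdentityHashMap`, which sidesteps the potentially bad `ResultSet` equals/hashCode implementations the class javadoc mentions. The `Ctx` and `LoadContextsSketch` names below are hypothetical stand-ins, not Hibernate types:

```java
import java.util.IdentityHashMap;
import java.util.Map;

// Hypothetical stand-in for a per-ResultSet load context.
class Ctx {
    final Object resultSet;
    Ctx(Object resultSet) { this.resultSet = resultSet; }
}

public class LoadContextsSketch {
    // Lazily created and keyed by reference identity, mirroring the
    // get-or-create pattern in LoadContexts.getCollectionLoadContext(ResultSet).
    private Map<Object, Ctx> contexts;

    public Ctx getContext(Object resultSet) {
        Ctx ctx = null;
        if (contexts == null) {
            contexts = new IdentityHashMap<>(8);
        }
        else {
            ctx = contexts.get(resultSet);
        }
        if (ctx == null) {
            ctx = new Ctx(resultSet);
            contexts.put(resultSet, ctx);
        }
        return ctx;
    }

    /** Mirrors cleanup(ResultSet): detach and return the context, if any. */
    public Ctx cleanup(Object resultSet) {
        return contexts == null ? null : contexts.remove(resultSet);
    }
}
```

Because keying is by identity, two distinct `ResultSet` objects always get distinct contexts even if their `equals` would claim otherwise.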
Added: trunk/Hibernate3/src/org/hibernate/engine/loading/LoadingCollectionEntry.java
===================================================================
--- trunk/Hibernate3/src/org/hibernate/engine/loading/LoadingCollectionEntry.java (rev 0)
+++ trunk/Hibernate3/src/org/hibernate/engine/loading/LoadingCollectionEntry.java 2007-03-19 20:44:11 UTC (rev 11302)
@@ -0,0 +1,51 @@
+package org.hibernate.engine.loading;
+
+import java.io.Serializable;
+import java.sql.ResultSet;
+
+import org.hibernate.collection.PersistentCollection;
+import org.hibernate.persister.collection.CollectionPersister;
+import org.hibernate.pretty.MessageHelper;
+
+/**
+ * Represents a collection currently being loaded.
+ *
+ * @author Steve Ebersole
+ */
+public class LoadingCollectionEntry {
+ private final ResultSet resultSet;
+ private final CollectionPersister persister;
+ private final Serializable key;
+ private final PersistentCollection collection;
+
+ public LoadingCollectionEntry(
+ ResultSet resultSet,
+ CollectionPersister persister,
+ Serializable key,
+ PersistentCollection collection) {
+ this.resultSet = resultSet;
+ this.persister = persister;
+ this.key = key;
+ this.collection = collection;
+ }
+
+ public ResultSet getResultSet() {
+ return resultSet;
+ }
+
+ public CollectionPersister getPersister() {
+ return persister;
+ }
+
+ public Serializable getKey() {
+ return key;
+ }
+
+ public PersistentCollection getCollection() {
+ return collection;
+ }
+
+ public String toString() {
+ return getClass().getName() + "<rs=" + resultSet + ", coll=" + MessageHelper.collectionInfoString( persister.getRole(), key ) + ">@" + Integer.toHexString( hashCode() );
+ }
+}
Modified: trunk/Hibernate3/src/org/hibernate/loader/Loader.java
===================================================================
--- trunk/Hibernate3/src/org/hibernate/loader/Loader.java 2007-03-19 20:43:46 UTC (rev 11301)
+++ trunk/Hibernate3/src/org/hibernate/loader/Loader.java 2007-03-19 20:44:11 UTC (rev 11302)
@@ -89,18 +89,24 @@
/**
* The SQL query string to be called; implemented by all subclasses
+ *
+ * @return The SQL command this loader should use to get its {@link ResultSet}.
*/
protected abstract String getSQLString();
/**
* An array of persisters of entity classes contained in each row of results;
* implemented by all subclasses
+ *
+ * @return The entity persisters.
*/
protected abstract Loadable[] getEntityPersisters();
/**
* An array indicating whether the entities have eager property fetching
- * enabled
+ * enabled.
+ *
+ * @return Eager property fetching indicators.
*/
protected boolean[] getEntityEagerPropertyFetches() {
return null;
@@ -108,15 +114,21 @@
/**
* An array of indexes of the entity that owns a one-to-one association
- * to the entity at the given index (-1 if there is no "owner")
+ * to the entity at the given index (-1 if there is no "owner"). The
+ * indexes contained here are relative to the result of
+ * {@link #getEntityPersisters}.
+ *
+ * @return The owner indicators (see discussion above).
*/
protected int[] getOwners() {
return null;
}
/**
- * An array of unique key property names by which the corresponding
- * entities are referenced by other entities in the result set
+ * An array of the owner types corresponding to the {@link #getOwners()}
+ * return values. Positions indicating no owner hold null here.
+ *
+ * @return The types for the owners.
*/
protected EntityType[] getOwnerAssociationTypes() {
return null;
@@ -860,11 +872,12 @@
private void endCollectionLoad(
final Object resultSetId,
final SessionImplementor session,
- final CollectionPersister collectionPersister
- ) {
+ final CollectionPersister collectionPersister) {
//this is a query and we are loading multiple instances of the same collection role
- session.getPersistenceContext().getCollectionLoadContext()
- .endLoadingCollections( collectionPersister, resultSetId, session );
+ session.getPersistenceContext()
+ .getLoadContexts()
+ .getCollectionLoadContext( ( ResultSet ) resultSetId )
+ .endLoadingCollections( collectionPersister );
}
protected List getResultList(List results, ResultTransformer resultTransformer) throws QueryException {
@@ -987,8 +1000,9 @@
}
}
- PersistentCollection rowCollection = persistenceContext.getCollectionLoadContext()
- .getLoadingCollection( persister, collectionRowKey, rs, session.getEntityMode() );
+ PersistentCollection rowCollection = persistenceContext.getLoadContexts()
+ .getCollectionLoadContext( rs )
+ .getLoadingCollection( persister, collectionRowKey );
if ( rowCollection != null ) {
rowCollection.readFrom( rs, persister, descriptor, owner );
@@ -1007,8 +1021,9 @@
);
}
- persistenceContext.getCollectionLoadContext()
- .getLoadingCollection( persister, optionalKey, rs, session.getEntityMode() ); //handle empty collection
+ persistenceContext.getLoadContexts()
+ .getCollectionLoadContext( rs )
+ .getLoadingCollection( persister, optionalKey ); // handle empty collection
}
@@ -1024,7 +1039,7 @@
private void handleEmptyCollections(
final Serializable[] keys,
final Object resultSetId,
- final SessionImplementor session) throws HibernateException {
+ final SessionImplementor session) {
if ( keys != null ) {
// this is a collection initializer, so we must create a collection
@@ -1042,18 +1057,13 @@
MessageHelper.collectionInfoString( collectionPersisters[j], keys[i], getFactory() )
);
}
-
+
session.getPersistenceContext()
- .getCollectionLoadContext()
- .getLoadingCollection(
- collectionPersisters[j],
- keys[i],
- resultSetId,
- session.getEntityMode()
- );
+ .getLoadContexts()
+ .getCollectionLoadContext( ( ResultSet ) resultSetId )
+ .getLoadingCollection( collectionPersisters[j], keys[i] );
}
}
-
}
// else this is not a collection initializer (and empty collections will
Modified: trunk/Hibernate3/src/org/hibernate/type/CollectionType.java
===================================================================
--- trunk/Hibernate3/src/org/hibernate/type/CollectionType.java 2007-03-19 20:43:46 UTC (rev 11301)
+++ trunk/Hibernate3/src/org/hibernate/type/CollectionType.java 2007-03-19 20:44:11 UTC (rev 11302)
@@ -101,12 +101,15 @@
/**
* Instantiate an uninitialized collection wrapper or holder. Callers MUST add the holder to the
* persistence context!
+ *
+ * @param session The session from which the request is originating.
+ * @param persister The underlying collection persister (metadata)
+ * @param key The owner key.
+ * @return The instantiated collection.
*/
- public abstract PersistentCollection instantiate(SessionImplementor session,
- CollectionPersister persister, Serializable key) throws HibernateException;
+ public abstract PersistentCollection instantiate(SessionImplementor session, CollectionPersister persister, Serializable key);
- public Object nullSafeGet(ResultSet rs, String name, SessionImplementor session, Object owner)
- throws HibernateException, SQLException {
+ public Object nullSafeGet(ResultSet rs, String name, SessionImplementor session, Object owner) throws SQLException {
return nullSafeGet( rs, new String[] { name }, session, owner );
}
@@ -174,6 +177,10 @@
/**
* Get an iterator over the element set of the collection, which may not yet be wrapped
+ *
+ * @param collection The collection to be iterated
+ * @param session The session from which the request is originating.
+ * @return The iterator.
*/
public Iterator getElementsIterator(Object collection, SessionImplementor session) {
if ( session.getEntityMode()==EntityMode.DOM4J ) {
@@ -196,6 +203,9 @@
/**
* Get an iterator over the element set of the collection in POJO mode
+ *
+ * @param collection The collection to be iterated
+ * @return The iterator.
*/
protected Iterator getElementsIterator(Object collection) {
return ( (Collection) collection ).iterator();
@@ -241,16 +251,17 @@
/**
* Is the owning entity versioned?
+ *
+ * @param session The session from which the request is originating.
+ * @return True if the collection owner is versioned; false otherwise.
+ * @throws org.hibernate.MappingException Indicates our persister could not be located.
*/
private boolean isOwnerVersioned(SessionImplementor session) throws MappingException {
- return getPersister( session )
- .getOwnerEntityPersister()
- .isVersioned();
+ return getPersister( session ).getOwnerEntityPersister().isVersioned();
}
private CollectionPersister getPersister(SessionImplementor session) {
- return session.getFactory()
- .getCollectionPersister( role );
+ return session.getFactory().getCollectionPersister( role );
}
public boolean isDirty(Object old, Object current, SessionImplementor session)
@@ -269,9 +280,14 @@
throws HibernateException {
return isDirty(old, current, session);
}
+
/**
- * Wrap the naked collection instance in a wrapper, or instantiate a holder. Callers MUST add
- * the holder to the persistence context!
+ * Wrap the naked collection instance in a wrapper, or instantiate a
+ * holder. Callers <b>MUST</b> add the holder to the persistence context!
+ *
+ * @param session The session from which the request is originating.
+ * @param collection The bare collection to be wrapped.
+ * @return The wrapped collection.
*/
public abstract PersistentCollection wrap(SessionImplementor session, Object collection);
@@ -290,6 +306,10 @@
/**
* Get the key value from the owning entity instance, usually the identifier, but might be some
* other unique key, in the case of property-ref
+ *
+ * @param owner The collection owner
+ * @param session The session from which the request is originating.
+ * @return The collection owner's key
*/
public Serializable getKeyOfOwner(Object owner, SessionImplementor session) {
@@ -502,6 +522,10 @@
/**
* Get the Hibernate type of the collection elements
+ *
+ * @param factory The session factory.
+ * @return The type of the collection elements
+ * @throws MappingException Indicates the underlying persister could not be located.
*/
public final Type getElementType(SessionFactoryImplementor factory) throws MappingException {
return factory.getCollectionPersister( getRole() ).getElementType();
@@ -518,9 +542,13 @@
/**
* instantiate a collection wrapper (called when loading an object)
+ *
+ * @param key The collection owner key
+ * @param session The session from which the request is originating.
+ * @param owner The collection owner
+ * @return The collection
*/
- public Object getCollection(Serializable key, SessionImplementor session, Object owner)
- throws HibernateException {
+ public Object getCollection(Serializable key, SessionImplementor session, Object owner) {
CollectionPersister persister = getPersister( session );
final PersistenceContext persistenceContext = session.getPersistenceContext();
@@ -531,17 +559,14 @@
}
// check if collection is currently being loaded
- PersistentCollection collection = persistenceContext
- .getCollectionLoadContext()
- .getLoadingCollection( persister, key, entityMode );
+ PersistentCollection collection = persistenceContext.getLoadContexts().locateLoadingCollection( persister, key );
if ( collection == null ) {
// check if it is already completely loaded, but unowned
collection = persistenceContext.useUnownedCollection( new CollectionKey(persister, key, entityMode) );
- if (collection==null) {
-
+ if ( collection == null ) {
// create a new collection wrapper, to be initialized later
collection = instantiate( session, persister, key );
collection.setOwner(owner);
Hibernate SVN: r11301 - in branches/Branch_3_2/Hibernate3/src/org/hibernate: engine and 3 other directories.
by hibernate-commits@lists.jboss.org
Author: steve.ebersole@jboss.com
Date: 2007-03-19 16:43:46 -0400 (Mon, 19 Mar 2007)
New Revision: 11301
Added:
branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/loading/
branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/loading/CollectionLoadContext.java
branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/loading/EntityLoadContext.java
branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/loading/LoadContexts.java
branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/loading/LoadingCollectionEntry.java
Removed:
branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/CollectionLoadContext.java
Modified:
branches/Branch_3_2/Hibernate3/src/org/hibernate/collection/AbstractPersistentCollection.java
branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/PersistenceContext.java
branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/StatefulPersistenceContext.java
branches/Branch_3_2/Hibernate3/src/org/hibernate/loader/Loader.java
branches/Branch_3_2/Hibernate3/src/org/hibernate/type/CollectionType.java
Log:
HHH-2495 : ResultSet processing context
Modified: branches/Branch_3_2/Hibernate3/src/org/hibernate/collection/AbstractPersistentCollection.java
===================================================================
--- branches/Branch_3_2/Hibernate3/src/org/hibernate/collection/AbstractPersistentCollection.java 2007-03-19 19:48:21 UTC (rev 11300)
+++ branches/Branch_3_2/Hibernate3/src/org/hibernate/collection/AbstractPersistentCollection.java 2007-03-19 20:43:46 UTC (rev 11301)
@@ -538,7 +538,7 @@
/**
* Get the current session
*/
- protected final SessionImplementor getSession() {
+ public final SessionImplementor getSession() {
return session;
}
Deleted: branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/CollectionLoadContext.java
===================================================================
--- branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/CollectionLoadContext.java 2007-03-19 19:48:21 UTC (rev 11300)
+++ branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/CollectionLoadContext.java 2007-03-19 20:43:46 UTC (rev 11301)
@@ -1,340 +0,0 @@
-//$Id$
-package org.hibernate.engine;
-
-import java.io.Serializable;
-import java.util.ArrayList;
-import java.util.Comparator;
-import java.util.HashMap;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.hibernate.CacheMode;
-import org.hibernate.EntityMode;
-import org.hibernate.HibernateException;
-import org.hibernate.cache.CacheKey;
-import org.hibernate.cache.entry.CollectionCacheEntry;
-import org.hibernate.collection.PersistentCollection;
-import org.hibernate.persister.collection.CollectionPersister;
-import org.hibernate.pretty.MessageHelper;
-
-/**
- * Represents the state of collections currently being loaded. Eventually, I
- * would like to have multiple instances of this per session - one per JDBC
- * result set, instead of the resultSetId being passed.
- * @author Gavin King
- */
-public class CollectionLoadContext {
-
- private static final Log log = LogFactory.getLog(CollectionLoadContext.class);
-
- // The collections we are currently loading
- private final Map loadingCollections = new HashMap(8);
- private final PersistenceContext context;
-
- public CollectionLoadContext(PersistenceContext context) {
- this.context = context;
- }
-
- private static final class LoadingCollectionEntry {
-
- final PersistentCollection collection;
- final Serializable key;
- final Object resultSetId;
- final CollectionPersister persister;
-
- LoadingCollectionEntry(
- final PersistentCollection collection,
- final Serializable key,
- final CollectionPersister persister,
- final Object resultSetId
- ) {
- this.collection = collection;
- this.key = key;
- this.persister = persister;
- this.resultSetId = resultSetId;
- }
- }
-
- /**
- * Retrieve a collection that is in the process of being loaded, instantiating
- * a new collection if there is nothing for the given id, or returning null
- * if the collection with the given id is already fully loaded in the session
- */
- public PersistentCollection getLoadingCollection(
- final CollectionPersister persister,
- final Serializable key,
- final Object resultSetId,
- final EntityMode em)
- throws HibernateException {
- CollectionKey ckey = new CollectionKey(persister, key, em);
- LoadingCollectionEntry lce = getLoadingCollectionEntry(ckey);
- if ( lce == null ) {
- //look for existing collection
- PersistentCollection collection = context.getCollection(ckey);
- if ( collection != null ) {
- if ( collection.wasInitialized() ) {
- log.trace( "collection already initialized: ignoring" );
- return null; //ignore this row of results! Note the early exit
- }
- else {
- //initialize this collection
- log.trace( "uninitialized collection: initializing" );
- }
- }
- else {
- Object entity = context.getCollectionOwner(key, persister);
- final boolean newlySavedEntity = entity != null &&
- context.getEntry(entity).getStatus() != Status.LOADING &&
- em!=EntityMode.DOM4J;
- if ( newlySavedEntity ) {
- //important, to account for newly saved entities in query
- //TODO: some kind of check for new status...
- log.trace( "owning entity already loaded: ignoring" );
- return null;
- }
- else {
- //create one
- log.trace( "new collection: instantiating" );
- collection = persister.getCollectionType()
- .instantiate( context.getSession(), persister, key );
- }
- }
- collection.beforeInitialize( persister, -1 );
- collection.beginRead();
- addLoadingCollectionEntry(ckey, collection, persister, resultSetId);
- return collection;
- }
- else {
- if ( lce.resultSetId == resultSetId ) {
- log.trace( "reading row" );
- return lce.collection;
- }
- else {
- // ignore this row, the collection is in process of
- // being loaded somewhere further "up" the stack
- log.trace( "collection is already being initialized: ignoring row" );
- return null;
- }
- }
- }
-
- /**
- * Retrieve a collection that is in the process of being loaded, returning null
- * if there is no loading collection with the given id
- */
- public PersistentCollection getLoadingCollection(CollectionPersister persister, Serializable id, EntityMode em) {
- LoadingCollectionEntry lce = getLoadingCollectionEntry( new CollectionKey(persister, id, em) );
- if ( lce != null ) {
- if ( log.isTraceEnabled() ) {
- log.trace(
- "returning loading collection:" +
- MessageHelper.collectionInfoString(persister, id, context.getSession().getFactory())
- );
- }
- return lce.collection;
- }
- else {
- if ( log.isTraceEnabled() ) {
- log.trace(
- "creating collection wrapper:" +
- MessageHelper.collectionInfoString(persister, id, context.getSession().getFactory())
- );
- }
- return null;
- }
- }
-
- /**
- * Create a new loading collection entry
- */
- private void addLoadingCollectionEntry(
- final CollectionKey collectionKey,
- final PersistentCollection collection,
- final CollectionPersister persister,
- final Object resultSetId
- ) {
- loadingCollections.put(
- collectionKey,
- new LoadingCollectionEntry(
- collection,
- collectionKey.getKey(),
- persister,
- resultSetId
- )
- );
- }
-
- /**
- * get an existing new loading collection entry
- */
- private LoadingCollectionEntry getLoadingCollectionEntry(CollectionKey collectionKey) {
- return ( LoadingCollectionEntry ) loadingCollections.get( collectionKey );
- }
-
- /**
- * After we have finished processing a result set, a particular loading collection that
- * we are done.
- */
- private void endLoadingCollection(LoadingCollectionEntry lce, CollectionPersister persister, EntityMode em) {
-
- boolean hasNoQueuedAdds = lce.collection.endRead(); //warning: can cause a recursive query! (proxy initialization)
-
- if ( persister.getCollectionType().hasHolder(em) ) {
- context.addCollectionHolder(lce.collection);
- }
-
- CollectionEntry ce = context.getCollectionEntry(lce.collection);
- if ( ce==null ) {
- ce = context.addInitializedCollection(persister, lce.collection, lce.key);
- }
- else {
- ce.postInitialize(lce.collection);
- }
-
- final SessionImplementor session = context.getSession();
-
- boolean addToCache = hasNoQueuedAdds && // there were no queued additions
- persister.hasCache() && // and the role has a cache
- session.getCacheMode().isPutEnabled() &&
- !ce.isDoremove(); // and this is not a forced initialization during flush
- if (addToCache) addCollectionToCache(lce, persister);
-
- if ( log.isDebugEnabled() ) {
- log.debug(
- "collection fully initialized: " +
- MessageHelper.collectionInfoString(persister, lce.key, context.getSession().getFactory())
- );
- }
-
- if ( session.getFactory().getStatistics().isStatisticsEnabled() ) {
- session.getFactory().getStatisticsImplementor().loadCollection(
- persister.getRole()
- );
- }
-
- }
- /**
- * Finish the process of loading collections for a particular result set
- */
- public void endLoadingCollections(CollectionPersister persister, Object resultSetId, SessionImplementor session)
- throws HibernateException {
-
- // scan the loading collections for collections from this result set
- // put them in a new temp collection so that we are safe from concurrent
- // modification when the call to endRead() causes a proxy to be
- // initialized
- List resultSetCollections = null; //TODO: make this the resultSetId?
- Iterator iter = loadingCollections.values().iterator();
- while ( iter.hasNext() ) {
- LoadingCollectionEntry lce = (LoadingCollectionEntry) iter.next();
- if ( lce.resultSetId == resultSetId && lce.persister==persister) {
- if ( resultSetCollections == null ) {
- resultSetCollections = new ArrayList();
- }
- resultSetCollections.add(lce);
- if ( lce.collection.getOwner()==null ) {
- session.getPersistenceContext()
- .addUnownedCollection(
- new CollectionKey( persister, lce.key, session.getEntityMode() ),
- lce.collection
- );
- }
- iter.remove();
- }
- }
-
- endLoadingCollections( persister, resultSetCollections, session.getEntityMode() );
- }
-
- /**
- * After we have finished processing a result set, notify the loading collections that
- * we are done.
- */
- private void endLoadingCollections(CollectionPersister persister, List resultSetCollections, EntityMode em)
- throws HibernateException {
-
- final int count = (resultSetCollections == null) ? 0 : resultSetCollections.size();
-
- if ( log.isDebugEnabled() ) {
- log.debug( count + " collections were found in result set for role: " + persister.getRole() );
- }
-
- //now finish them
- for ( int i = 0; i < count; i++ ) {
- LoadingCollectionEntry lce = (LoadingCollectionEntry) resultSetCollections.get(i);
- endLoadingCollection(lce, persister, em);
- }
-
- if ( log.isDebugEnabled() ) {
- log.debug( count + " collections initialized for role: " + persister.getRole() );
- }
- }
-
- /**
- * Add a collection to the second-level cache
- */
- private void addCollectionToCache(LoadingCollectionEntry lce, CollectionPersister persister) {
-
- if ( log.isDebugEnabled() ) {
- log.debug(
- "Caching collection: " +
- MessageHelper.collectionInfoString( persister, lce.key, context.getSession().getFactory() )
- );
- }
-
- final SessionImplementor session = context.getSession();
- final SessionFactoryImplementor factory = session.getFactory();
-
- if ( !session.getEnabledFilters().isEmpty() && persister.isAffectedByEnabledFilters( session ) ) {
- // some filters affecting the collection are enabled on the session, so do not do the put into the cache.
- log.debug( "Refusing to add to cache due to enabled filters" );
- // todo : add the notion of enabled filters to the CacheKey to differentiate filtered collections from non-filtered;
- // but CacheKey is currently used for both collections and entities; would ideally need to define two seperate ones;
- // currently this works in conjuction with the check on
- // DefaultInitializeCollectionEventHandler.initializeCollectionFromCache() (which makes sure to not read from
- // cache with enabled filters).
- return; // EARLY EXIT!!!!!
- }
-
- final Comparator versionComparator;
- final Object version;
- if ( persister.isVersioned() ) {
- versionComparator = persister.getOwnerEntityPersister().getVersionType().getComparator();
- version = context.getEntry( context.getCollectionOwner(lce.key, persister) ).getVersion();
- }
- else {
- version = null;
- versionComparator = null;
- }
-
- CollectionCacheEntry entry = new CollectionCacheEntry(lce.collection, persister);
-
- CacheKey cacheKey = new CacheKey(
- lce.key,
- persister.getKeyType(),
- persister.getRole(),
- session.getEntityMode(),
- session.getFactory()
- );
- boolean put = persister.getCache().put(
- cacheKey,
- persister.getCacheEntryStructure().structure(entry),
- session.getTimestamp(),
- version,
- versionComparator,
- factory.getSettings().isMinimalPutsEnabled() &&
- session.getCacheMode()!=CacheMode.REFRESH
- );
-
- if ( put && factory.getStatistics().isStatisticsEnabled() ) {
- factory.getStatisticsImplementor().secondLevelCachePut(
- persister.getCache().getRegionName()
- );
- }
- }
-
-
-}
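The `addCollectionToCache` logic removed above (and re-added in the new `CollectionLoadContext` below) only counts a second-level cache put in the statistics when the provider actually accepted the entry, and it passes a version plus comparator so stale state cannot overwrite newer state. A self-contained sketch of that versioned-put shape, using plain collections rather than Hibernate's cache SPI (class and method names here are illustrative, not Hibernate API):

```java
import java.util.HashMap;
import java.util.Map;

public class VersionedCacheDemo {
    static final class Entry {
        final Object value;
        final int version;
        Entry(Object value, int version) { this.value = value; this.version = version; }
    }

    private final Map<Object, Entry> store = new HashMap<>();

    // Returns true only when the entry was actually cached, mirroring the
    // boolean "put" result that gates the secondLevelCachePut statistics call.
    public boolean put(Object key, Object value, int version) {
        Entry existing = store.get(key);
        if (existing != null && version <= existing.version) {
            return false; // stale or duplicate version: refuse the put
        }
        store.put(key, new Entry(value, version));
        return true;
    }

    public Object get(Object key) {
        Entry e = store.get(key);
        return e == null ? null : e.value;
    }

    public static void main(String[] args) {
        VersionedCacheDemo cache = new VersionedCacheDemo();
        System.out.println(cache.put("role#1", "state-v1", 1)); // accepted
        System.out.println(cache.put("role#1", "stale", 1));    // refused
    }
}
```

Hibernate's real implementation delegates the comparison to the owner entity's version type comparator; the integer comparison above just stands in for that.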
Modified: branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/PersistenceContext.java
===================================================================
--- branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/PersistenceContext.java 2007-03-19 19:48:21 UTC (rev 11300)
+++ branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/PersistenceContext.java 2007-03-19 20:43:46 UTC (rev 11301)
@@ -8,6 +8,7 @@
import org.hibernate.HibernateException;
import org.hibernate.LockMode;
import org.hibernate.MappingException;
+import org.hibernate.engine.loading.LoadContexts;
import org.hibernate.collection.PersistentCollection;
import org.hibernate.persister.collection.CollectionPersister;
import org.hibernate.persister.entity.EntityPersister;
@@ -21,16 +22,20 @@
public interface PersistenceContext {
public boolean isStateless();
-
+
/**
- * Get the session
+ * Get the session to which this persistence context is bound.
+ *
+ * @return The session.
*/
public SessionImplementor getSession();
-
+
/**
- * Get the context for collection loading
+ * Retrieve this persistence context's managed load context.
+ *
+ * @return The load context
*/
- public CollectionLoadContext getCollectionLoadContext();
+ public LoadContexts getLoadContexts();
/**
* Add a collection which has no owner loaded
Modified: branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/StatefulPersistenceContext.java
===================================================================
--- branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/StatefulPersistenceContext.java 2007-03-19 19:48:21 UTC (rev 11300)
+++ branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/StatefulPersistenceContext.java 2007-03-19 20:43:46 UTC (rev 11301)
@@ -24,6 +24,7 @@
import org.hibernate.NonUniqueObjectException;
import org.hibernate.PersistentObjectException;
import org.hibernate.TransientObjectException;
+import org.hibernate.engine.loading.LoadContexts;
import org.hibernate.pretty.MessageHelper;
import org.hibernate.collection.PersistentCollection;
import org.hibernate.persister.collection.CollectionPersister;
@@ -101,7 +102,7 @@
private boolean hasNonReadOnlyEntities = false;
- private CollectionLoadContext collectionLoadContext;
+ private LoadContexts loadContexts;
private BatchFetchQueue batchFetchQueue;
@@ -142,11 +143,11 @@
return session;
}
- public CollectionLoadContext getCollectionLoadContext() {
- if (collectionLoadContext==null) {
- collectionLoadContext = new CollectionLoadContext(this);
+ public LoadContexts getLoadContexts() {
+ if ( loadContexts == null ) {
+ loadContexts = new LoadContexts( this );
}
- return collectionLoadContext;
+ return loadContexts;
}
public void addUnownedCollection(CollectionKey key, PersistentCollection collection) {
Added: branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/loading/CollectionLoadContext.java
===================================================================
--- branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/loading/CollectionLoadContext.java (rev 0)
+++ branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/loading/CollectionLoadContext.java 2007-03-19 20:43:46 UTC (rev 11301)
@@ -0,0 +1,332 @@
+package org.hibernate.engine.loading;
+
+import java.sql.ResultSet;
+import java.io.Serializable;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Iterator;
+import java.util.ArrayList;
+import java.util.Comparator;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.hibernate.collection.PersistentCollection;
+import org.hibernate.persister.collection.CollectionPersister;
+import org.hibernate.EntityMode;
+import org.hibernate.CacheMode;
+import org.hibernate.cache.entry.CollectionCacheEntry;
+import org.hibernate.cache.CacheKey;
+import org.hibernate.pretty.MessageHelper;
+import org.hibernate.engine.CollectionKey;
+import org.hibernate.engine.Status;
+import org.hibernate.engine.SessionImplementor;
+import org.hibernate.engine.CollectionEntry;
+import org.hibernate.engine.SessionFactoryImplementor;
+
+/**
+ * Represents state associated with the processing of a given {@link ResultSet}
+ * in regards to loading collections.
+ * <p/>
+ * Another implementation option to consider is to not expose {@link ResultSet}s
+ * directly (in the JDBC redesign) but to always "wrap" them and apply a
+ * [series of] context[s] to that wrapper.
+ *
+ * @author Steve Ebersole
+ */
+public class CollectionLoadContext {
+ private static final Log log = LogFactory.getLog( CollectionLoadContext.class );
+
+ private final LoadContexts loadContexts;
+ private final ResultSet resultSet;
+ private final Map loadingCollections = new HashMap( 8 );
+
+ /**
+ * Creates a collection load context for the given result set.
+ *
+ * @param loadContexts Callback to other collection load contexts.
+ * @param resultSet The result set this is "wrapping".
+ */
+ public CollectionLoadContext(LoadContexts loadContexts, ResultSet resultSet) {
+ this.loadContexts = loadContexts;
+ this.resultSet = resultSet;
+ }
+
+ public ResultSet getResultSet() {
+ return resultSet;
+ }
+
+ public LoadContexts getLoadContext() {
+ return loadContexts;
+ }
+
+ /**
+ * Retrieve the collection that is being loaded as part of processing this
+ * result set.
+ * <p/>
+ * Basically, there are two valid return values from this method:<ul>
+ * <li>an instance of {@link PersistentCollection} which indicates to
+ * continue loading the result set row data into that returned collection
+ * instance; this may be either an instance already associated and in the
+ * midst of being loaded, or a newly instantiated instance as a matching
+ * associated collection was not found.</li>
+ * <li><i>null</i> indicates to ignore the corresponding result set row
+ * data relating to the requested collection; this indicates that either
+ * the collection was found to already be associated with the persistence
+ * context in a fully loaded state, or it was found in a loading state
+ * associated with another result set processing context.</li>
+ * </ul>
+ *
+ * @param persister The persister for the collection being requested.
+ * @param key The key of the collection being requested.
+ *
+ * @return The loading collection (see discussion above).
+ */
+ public PersistentCollection getLoadingCollection(final CollectionPersister persister, final Serializable key) {
+ final EntityMode em = loadContexts.getPersistenceContext().getSession().getEntityMode();
+ final CollectionKey collectionKey = new CollectionKey( persister, key, em );
+ if ( log.isTraceEnabled() ) {
+ log.trace( "starting attempt to find loading collection [" + MessageHelper.collectionInfoString( persister.getRole(), key ) + "]" );
+ }
+ final LoadingCollectionEntry loadingCollectionEntry = locateLoadingCollectionEntry( collectionKey );
+ if ( loadingCollectionEntry == null ) {
+ // look for existing collection as part of the persistence context
+ PersistentCollection collection = loadContexts.getPersistenceContext().getCollection( collectionKey );
+ if ( collection != null ) {
+ if ( collection.wasInitialized() ) {
+ log.trace( "collection already initialized; ignoring" );
+ return null; // ignore this row of results! Note the early exit
+ }
+ else {
+ // initialize this collection
+ log.trace( "collection not yet initialized; initializing" );
+ }
+ }
+ else {
+ Object owner = loadContexts.getPersistenceContext().getCollectionOwner( key, persister );
+ final boolean newlySavedEntity = owner != null
+ && loadContexts.getPersistenceContext().getEntry( owner ).getStatus() != Status.LOADING
+ && em != EntityMode.DOM4J;
+ if ( newlySavedEntity ) {
+ // important, to account for newly saved entities in query
+ // todo : some kind of check for new status...
+ log.trace( "owning entity already loaded; ignoring" );
+ return null;
+ }
+ else {
+ // create one
+ if ( log.isTraceEnabled() ) {
+ log.trace( "instantiating new collection [key=" + key + ", rs=" + resultSet + "]" );
+ }
+ collection = persister.getCollectionType()
+ .instantiate( loadContexts.getPersistenceContext().getSession(), persister, key );
+ }
+ }
+ collection.beforeInitialize( persister, -1 );
+ collection.beginRead();
+ loadingCollections.put( collectionKey, new LoadingCollectionEntry( resultSet, persister, key, collection ) );
+ return collection;
+ }
+ else {
+ if ( loadingCollectionEntry.getResultSet() == resultSet ) {
+ log.trace( "found loading collection bound to current result set processing; reading row" );
+ return loadingCollectionEntry.getCollection();
+ }
+ else {
+ // ignore this row, the collection is in process of
+ // being loaded somewhere further "up" the stack
+ log.trace( "collection is already being initialized; ignoring row" );
+ return null;
+ }
+ }
+ }
+
+ private LoadingCollectionEntry locateLoadingCollectionEntry(CollectionKey collectionKey) {
+ if ( log.isTraceEnabled() ) {
+ log.trace( "attempting to locate loading collection entry [" + collectionKey + "]" );
+ }
+ // first try our loading collections
+ LoadingCollectionEntry loadingCollectionEntry = getLocalLoadingCollectionEntry( collectionKey );
+ if ( loadingCollectionEntry == null ) {
+ // the loading collection is not associated with our result set, so check the other
+ // result sets registered with the load context...
+ loadingCollectionEntry = loadContexts.locateLoadingCollectionEntry( collectionKey, this );
+ }
+ return loadingCollectionEntry;
+ }
+
+ LoadingCollectionEntry getLocalLoadingCollectionEntry(CollectionKey key) {
+ if ( log.isTraceEnabled() ) {
+ log.trace( "attempting to locally locate loading collection entry [key=" + key + ", rs=" + resultSet + "]" );
+ }
+ return ( LoadingCollectionEntry ) loadingCollections.get( key );
+ }
+
+ /**
+ * Finish the process of collection-loading for this bound result set. Mainly this
+ * involves cleaning up resources and notifying the collections that loading is
+ * complete.
+ *
+ * @param persister The persister for which to complete loading.
+ */
+ public void endLoadingCollections(CollectionPersister persister) {
+ SessionImplementor session = getLoadContext().getPersistenceContext().getSession();
+
+ // in an effort to avoid concurrent-modification-exceptions (from
+ // potential recursive calls back through here as a result of the
+ // eventual call to PersistentCollection#endRead), we scan the
+ // internal loadingCollections map for matches and store those matches
+ // in a temp collection. the temp collection is then used to "drive"
+ // the #endRead processing.
+ List matches = null;
+ Iterator iter = loadingCollections.values().iterator();
+ while ( iter.hasNext() ) {
+ LoadingCollectionEntry lce = (LoadingCollectionEntry) iter.next();
+ if ( lce.getResultSet() == resultSet && lce.getPersister() == persister) {
+ if ( matches == null ) {
+ matches = new ArrayList();
+ }
+ matches.add( lce );
+ if ( lce.getCollection().getOwner() == null ) {
+ session.getPersistenceContext().addUnownedCollection(
+ new CollectionKey( persister, lce.getKey(), session.getEntityMode() ),
+ lce.getCollection()
+ );
+ }
+ if ( log.isTraceEnabled() ) {
+ log.trace( "removing collection load entry [" + lce + "]" );
+ }
+ iter.remove();
+ }
+ }
+
+ endLoadingCollections( persister, matches );
+ }
+
+ private void endLoadingCollections(CollectionPersister persister, List matchedCollectionEntries) {
+ final int count = ( matchedCollectionEntries == null ) ? 0 : matchedCollectionEntries.size();
+
+ if ( log.isDebugEnabled() ) {
+ log.debug( count + " collections were found in result set for role: " + persister.getRole() );
+ }
+
+ for ( int i = 0; i < count; i++ ) {
+ LoadingCollectionEntry lce = ( LoadingCollectionEntry ) matchedCollectionEntries.get( i );
+ endLoadingCollection( lce, persister );
+ }
+
+ if ( log.isDebugEnabled() ) {
+ log.debug( count + " collections initialized for role: " + persister.getRole() );
+ }
+ }
+
+ private void endLoadingCollection(LoadingCollectionEntry lce, CollectionPersister persister) {
+ if ( log.isTraceEnabled() ) {
+ log.trace( "ending loading collection [" + lce + "]" );
+ }
+ final SessionImplementor session = getLoadContext().getPersistenceContext().getSession();
+ final EntityMode em = session.getEntityMode();
+
+ boolean hasNoQueuedAdds = lce.getCollection().endRead(); // warning: can cause recursive calls! (proxy initialization)
+
+ if ( persister.getCollectionType().hasHolder( em ) ) {
+ getLoadContext().getPersistenceContext().addCollectionHolder( lce.getCollection() );
+ }
+
+ CollectionEntry ce = getLoadContext().getPersistenceContext().getCollectionEntry( lce.getCollection() );
+ if ( ce == null ) {
+ ce = getLoadContext().getPersistenceContext().addInitializedCollection( persister, lce.getCollection(), lce.getKey() );
+ }
+ else {
+ ce.postInitialize( lce.getCollection() );
+ }
+
+ boolean addToCache = hasNoQueuedAdds && // there were no queued additions
+ persister.hasCache() && // and the role has a cache
+ session.getCacheMode().isPutEnabled() &&
+ !ce.isDoremove(); // and this is not a forced initialization during flush
+ if ( addToCache ) {
+ addCollectionToCache( lce, persister );
+ }
+
+ if ( log.isDebugEnabled() ) {
+ log.debug( "collection fully initialized: " + MessageHelper.collectionInfoString(persister, lce.getKey(), session.getFactory() ) );
+ }
+
+ if ( session.getFactory().getStatistics().isStatisticsEnabled() ) {
+ session.getFactory().getStatisticsImplementor().loadCollection( persister.getRole() );
+ }
+ }
+
+ /**
+ * Add the collection to the second-level cache
+ *
+ * @param lce The entry representing the collection to add
+ * @param persister The persister
+ */
+ private void addCollectionToCache(LoadingCollectionEntry lce, CollectionPersister persister) {
+ final SessionImplementor session = getLoadContext().getPersistenceContext().getSession();
+ final SessionFactoryImplementor factory = session.getFactory();
+
+ if ( log.isDebugEnabled() ) {
+ log.debug( "Caching collection: " + MessageHelper.collectionInfoString( persister, lce.getKey(), factory ) );
+ }
+
+ if ( !session.getEnabledFilters().isEmpty() && persister.isAffectedByEnabledFilters( session ) ) {
+ // some filters affecting the collection are enabled on the session, so do not do the put into the cache.
+ log.debug( "Refusing to add to cache due to enabled filters" );
+ // todo : add the notion of enabled filters to the CacheKey to differentiate filtered collections from non-filtered;
+ // but CacheKey is currently used for both collections and entities; would ideally need to define two separate ones;
+ // currently this works in conjunction with the check on
+ // DefaultInitializeCollectionEventHandler.initializeCollectionFromCache() (which makes sure to not read from
+ // cache with enabled filters).
+ return; // EARLY EXIT!!!!!
+ }
+
+ final Comparator versionComparator;
+ final Object version;
+ if ( persister.isVersioned() ) {
+ versionComparator = persister.getOwnerEntityPersister().getVersionType().getComparator();
+ final Object collectionOwner = getLoadContext().getPersistenceContext().getCollectionOwner( lce.getKey(), persister );
+ version = getLoadContext().getPersistenceContext().getEntry( collectionOwner ).getVersion();
+ }
+ else {
+ version = null;
+ versionComparator = null;
+ }
+
+ CollectionCacheEntry entry = new CollectionCacheEntry( lce.getCollection(), persister );
+ CacheKey cacheKey = new CacheKey(
+ lce.getKey(),
+ persister.getKeyType(),
+ persister.getRole(),
+ session.getEntityMode(),
+ session.getFactory()
+ );
+ boolean put = persister.getCache().put(
+ cacheKey,
+ persister.getCacheEntryStructure().structure(entry),
+ session.getTimestamp(),
+ version,
+ versionComparator,
+ factory.getSettings().isMinimalPutsEnabled() && session.getCacheMode()!= CacheMode.REFRESH
+ );
+
+ if ( put && factory.getStatistics().isStatisticsEnabled() ) {
+ factory.getStatisticsImplementor().secondLevelCachePut( persister.getCache().getRegionName() );
+ }
+ }
+
+ void cleanup() {
+ if ( !loadingCollections.isEmpty() ) {
+ log.warn( "On CollectionLoadContext#cleanup, loadingCollections contained [" + loadingCollections.size() + "] entries" );
+ }
+ loadingCollections.clear();
+ }
+
+
+ public String toString() {
+ return super.toString() + "<rs=" + resultSet + ">";
+ }
+}
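The comment in `endLoadingCollections` above explains the two-phase shape: matches are first copied into a temp list and removed through the iterator, and only afterwards is the (potentially re-entrant) completion logic run, avoiding a `ConcurrentModificationException`. A self-contained sketch of that drain pattern with plain JDK collections (names are illustrative, not Hibernate code):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

public class EndLoadDemo {
    // Scan a map of in-flight entries, collect the ones matching a given role,
    // remove them via the iterator, and hand back the matches so completion
    // logic that might mutate the map again can run afterwards.
    public static List<String> drainMatches(Map<String, String> loading, String role) {
        List<String> matches = null;
        Iterator<Map.Entry<String, String>> iter = loading.entrySet().iterator();
        while (iter.hasNext()) {
            Map.Entry<String, String> e = iter.next();
            if (e.getValue().equals(role)) {
                if (matches == null) {
                    matches = new ArrayList<>();
                }
                matches.add(e.getKey());
                iter.remove(); // safe: removal goes through the iterator itself
            }
        }
        return matches == null ? Collections.emptyList() : matches;
    }

    public static void main(String[] args) {
        Map<String, String> loading = new HashMap<>();
        loading.put("k1", "roleA");
        loading.put("k2", "roleB");
        loading.put("k3", "roleA");
        List<String> drained = drainMatches(loading, "roleA");
        System.out.println(drained.size() + " drained; " + loading.size() + " remain");
    }
}
```

Removing directly inside a for-each over `loadingCollections.values()` would throw; draining through the iterator first is what makes the later `endRead()` re-entrancy harmless.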
Added: branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/loading/EntityLoadContext.java
===================================================================
--- branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/loading/EntityLoadContext.java (rev 0)
+++ branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/loading/EntityLoadContext.java 2007-03-19 20:43:46 UTC (rev 11301)
@@ -0,0 +1,33 @@
+package org.hibernate.engine.loading;
+
+import java.sql.ResultSet;
+import java.util.List;
+import java.util.ArrayList;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+/**
+ * Tracks entity instances currently being hydrated from a given {@link ResultSet}.
+ *
+ * @author Steve Ebersole
+ */
+public class EntityLoadContext {
+ private static final Log log = LogFactory.getLog( EntityLoadContext.class );
+
+ private final LoadContexts loadContexts;
+ private final ResultSet resultSet;
+ private final List hydratingEntities = new ArrayList( 20 ); // todo : need map? the prob is a proper key, right?
+
+ public EntityLoadContext(LoadContexts loadContexts, ResultSet resultSet) {
+ this.loadContexts = loadContexts;
+ this.resultSet = resultSet;
+ }
+
+ void cleanup() {
+ if ( !hydratingEntities.isEmpty() ) {
+ log.warn( "On EntityLoadContext#cleanup, hydratingEntities contained [" + hydratingEntities.size() + "] entries" );
+ }
+ hydratingEntities.clear();
+ }
+}
Added: branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/loading/LoadContexts.java
===================================================================
--- branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/loading/LoadContexts.java (rev 0)
+++ branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/loading/LoadContexts.java 2007-03-19 20:43:46 UTC (rev 11301)
@@ -0,0 +1,184 @@
+package org.hibernate.engine.loading;
+
+import java.sql.ResultSet;
+import java.util.Map;
+import java.util.Iterator;
+import java.io.Serializable;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.hibernate.util.IdentityMap;
+import org.hibernate.engine.PersistenceContext;
+import org.hibernate.engine.CollectionKey;
+import org.hibernate.engine.SessionImplementor;
+import org.hibernate.collection.PersistentCollection;
+import org.hibernate.persister.collection.CollectionPersister;
+import org.hibernate.pretty.MessageHelper;
+import org.hibernate.EntityMode;
+
+/**
+ * Maps {@link ResultSet result-sets} to specific contextual data
+ * related to processing those {@link ResultSet result-sets}.
+ * <p/>
+ * Implementation note: internally an {@link IdentityMap} is used to maintain
+ * the mappings; {@link IdentityMap} was chosen because I'd rather not be
+ * dependent upon potentially bad {@link ResultSet#equals} and {@link ResultSet#hashCode}
+ * implementations.
+ * <p/>
+ * Considering the JDBC-redesign work, would further like this contextual info
+ * not mapped separately, but available based on the result set being processed.
+ * This would also allow maintaining a single mapping as we could reliably get
+ * notification of the result-set closing...
+ *
+ * @author Steve Ebersole
+ */
+public class LoadContexts {
+ private static final Log log = LogFactory.getLog( LoadContexts.class );
+
+ private final PersistenceContext persistenceContext;
+ private Map collectionLoadContexts;
+ private Map entityLoadContexts;
+
+ /**
+ * Creates and binds this to the given persistence context.
+ *
+ * @param persistenceContext The persistence context to which this
+ * will be bound.
+ */
+ public LoadContexts(PersistenceContext persistenceContext) {
+ this.persistenceContext = persistenceContext;
+ }
+
+ /**
+ * Retrieves the persistence context to which this is bound.
+ *
+ * @return The persistence context to which this is bound.
+ */
+ public PersistenceContext getPersistenceContext() {
+ return persistenceContext;
+ }
+
+ /**
+ * Get the {@link CollectionLoadContext} associated with the given
+ * {@link ResultSet}, creating one if needed.
+ *
+ * @param resultSet The result set for which to retrieve the context.
+ * @return The processing context.
+ */
+ public CollectionLoadContext getCollectionLoadContext(ResultSet resultSet) {
+ CollectionLoadContext context = null;
+ if ( collectionLoadContexts == null ) {
+ collectionLoadContexts = IdentityMap.instantiate( 8 );
+ }
+ else {
+ context = ( CollectionLoadContext ) collectionLoadContexts.get( resultSet );
+ }
+ if ( context == null ) {
+ if ( log.isTraceEnabled() ) {
+ log.trace( "constructing collection load context for result set [" + resultSet + "]" );
+ }
+ context = new CollectionLoadContext( this, resultSet );
+ collectionLoadContexts.put( resultSet, context );
+ }
+ return context;
+ }
+
+ /**
+ * Attempt to locate the loading collection given the owner's key. The lookup here
+ * occurs against all result-set contexts...
+ *
+ * @param persister The collection persister
+ * @param ownerKey The owner key
+ * @return The loading collection, or null if not found.
+ */
+ public PersistentCollection locateLoadingCollection(CollectionPersister persister, Serializable ownerKey) {
+ LoadingCollectionEntry lce = locateLoadingCollectionEntry( new CollectionKey( persister, ownerKey, getEntityMode() ), null ); // note: null because here we are interested in all contexts...
+ if ( lce != null ) {
+ if ( log.isTraceEnabled() ) {
+ log.trace( "returning loading collection:" + MessageHelper.collectionInfoString( persister, ownerKey, getSession().getFactory() ) );
+ }
+ return lce.getCollection();
+ }
+ else {
+ // todo : should really move this log statement to CollectionType, where this is used from...
+ if ( log.isTraceEnabled() ) {
+ log.trace( "creating collection wrapper:" + MessageHelper.collectionInfoString( persister, ownerKey, getSession().getFactory() ) );
+ }
+ return null;
+ }
+ }
+
+ /**
+ * Locate the LoadingCollectionEntry within *any* of the tracked
+ * {@link CollectionLoadContext}s.
+ * <p/>
+ * Implementation note: package protected, as this is meant solely for use
+ * by {@link CollectionLoadContext} to be able to locate collections
+ * being loaded by other {@link CollectionLoadContext}s/{@link ResultSet}s.
+ *
+ * @param key The collection key.
+ * @param caller The collection load context making this call (for performance optimization)
+ * @return The located entry; or null.
+ */
+ LoadingCollectionEntry locateLoadingCollectionEntry(CollectionKey key, CollectionLoadContext caller) {
+ if ( collectionLoadContexts == null ) {
+ return null;
+ }
+ if ( log.isTraceEnabled() ) {
+ log.trace( "attempting to locate loading collection entry [" + key + "] in any result-set context" );
+ }
+ LoadingCollectionEntry rtn = null;
+ Iterator itr = collectionLoadContexts.values().iterator();
+ while ( itr.hasNext() ) {
+ final CollectionLoadContext collectionLoadContext = ( CollectionLoadContext ) itr.next();
+ if ( collectionLoadContext == caller ) {
+ continue;
+ }
+ rtn = collectionLoadContext.getLocalLoadingCollectionEntry( key );
+ if ( rtn != null ) {
+ if ( log.isTraceEnabled() ) {
+ log.trace( "collection [" + key + "] located in load context [" + collectionLoadContext + "]" );
+ }
+ break;
+ }
+ }
+ return rtn;
+ }
+
+ public EntityLoadContext getEntityLoadContext(ResultSet resultSet) {
+ EntityLoadContext context = null;
+ if ( entityLoadContexts == null ) {
+ entityLoadContexts = IdentityMap.instantiate( 8 );
+ }
+ else {
+ context = ( EntityLoadContext ) entityLoadContexts.get( resultSet );
+ }
+ if ( context == null ) {
+ context = new EntityLoadContext( this, resultSet );
+ entityLoadContexts.put( resultSet, context );
+ }
+ return context;
+ }
+
+ public void cleanup(ResultSet resultSet) {
+ if ( collectionLoadContexts != null ) {
+ CollectionLoadContext collectionLoadContext = ( CollectionLoadContext ) collectionLoadContexts.remove( resultSet );
+ collectionLoadContext.cleanup();
+ }
+ if ( entityLoadContexts != null ) {
+ EntityLoadContext entityLoadContext = ( EntityLoadContext ) entityLoadContexts.remove( resultSet );
+ entityLoadContext.cleanup();
+ }
+ }
+
+ private SessionImplementor getSession() {
+ return getPersistenceContext().getSession();
+ }
+
+ private EntityMode getEntityMode() {
+ return getSession().getEntityMode();
+ }
+
+
+}
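The `LoadContexts` javadoc above chooses `IdentityMap` precisely because driver-supplied `ResultSet` implementations may have unreliable `equals`/`hashCode`. The effect is easy to demonstrate with the JDK's `IdentityHashMap` and a deliberately broken key type standing in for such a `ResultSet` (purely illustrative; `org.hibernate.util.IdentityMap` serves the same purpose in the patch):

```java
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

public class IdentityMapDemo {
    // A key with pathological equals/hashCode, standing in for a JDBC
    // ResultSet whose driver implements these methods poorly.
    static final class BadKey {
        @Override public boolean equals(Object other) { return true; } // everything "equals" everything
        @Override public int hashCode() { return 42; }                 // all instances collide
    }

    // Under a value-based HashMap, two distinct keys clobber each other's
    // context entries because they compare equal.
    public static int valueMapSize() {
        Map<BadKey, String> m = new HashMap<>();
        m.put(new BadKey(), "ctx1");
        m.put(new BadKey(), "ctx2"); // replaces ctx1
        return m.size();
    }

    // An identity map compares keys with ==, so each ResultSet instance
    // keeps its own load context regardless of equals/hashCode.
    public static int identityMapSize() {
        Map<BadKey, String> m = new IdentityHashMap<>();
        m.put(new BadKey(), "ctx1");
        m.put(new BadKey(), "ctx2"); // kept: distinct instances under ==
        return m.size();
    }

    public static void main(String[] args) {
        System.out.println("HashMap size: " + valueMapSize());            // 1
        System.out.println("IdentityHashMap size: " + identityMapSize()); // 2
    }
}
```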
Added: branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/loading/LoadingCollectionEntry.java
===================================================================
--- branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/loading/LoadingCollectionEntry.java (rev 0)
+++ branches/Branch_3_2/Hibernate3/src/org/hibernate/engine/loading/LoadingCollectionEntry.java 2007-03-19 20:43:46 UTC (rev 11301)
@@ -0,0 +1,51 @@
+package org.hibernate.engine.loading;
+
+import java.io.Serializable;
+import java.sql.ResultSet;
+
+import org.hibernate.collection.PersistentCollection;
+import org.hibernate.persister.collection.CollectionPersister;
+import org.hibernate.pretty.MessageHelper;
+
+/**
+ * Represents a collection currently being loaded.
+ *
+ * @author Steve Ebersole
+ */
+public class LoadingCollectionEntry {
+ private final ResultSet resultSet;
+ private final CollectionPersister persister;
+ private final Serializable key;
+ private final PersistentCollection collection;
+
+ public LoadingCollectionEntry(
+ ResultSet resultSet,
+ CollectionPersister persister,
+ Serializable key,
+ PersistentCollection collection) {
+ this.resultSet = resultSet;
+ this.persister = persister;
+ this.key = key;
+ this.collection = collection;
+ }
+
+ public ResultSet getResultSet() {
+ return resultSet;
+ }
+
+ public CollectionPersister getPersister() {
+ return persister;
+ }
+
+ public Serializable getKey() {
+ return key;
+ }
+
+ public PersistentCollection getCollection() {
+ return collection;
+ }
+
+ public String toString() {
+ return getClass().getName() + "<rs=" + resultSet + ", coll=" + MessageHelper.collectionInfoString( persister.getRole(), key ) + ">@" + Integer.toHexString( hashCode() );
+ }
+}
Modified: branches/Branch_3_2/Hibernate3/src/org/hibernate/loader/Loader.java
===================================================================
--- branches/Branch_3_2/Hibernate3/src/org/hibernate/loader/Loader.java 2007-03-19 19:48:21 UTC (rev 11300)
+++ branches/Branch_3_2/Hibernate3/src/org/hibernate/loader/Loader.java 2007-03-19 20:43:46 UTC (rev 11301)
@@ -89,18 +89,24 @@
/**
* The SQL query string to be called; implemented by all subclasses
+ *
+ * @return The sql command this loader should use to get its {@link ResultSet}.
*/
protected abstract String getSQLString();
/**
* An array of persisters of entity classes contained in each row of results;
* implemented by all subclasses
+ *
+ * @return The entity persisters.
*/
protected abstract Loadable[] getEntityPersisters();
-
+
/**
* An array indicating whether the entities have eager property fetching
- * enabled
+ * enabled.
+ *
+ * @return Eager property fetching indicators.
*/
protected boolean[] getEntityEagerPropertyFetches() {
return null;
@@ -108,15 +114,21 @@
/**
* An array of indexes of the entity that owns a one-to-one association
- * to the entity at the given index (-1 if there is no "owner")
+ * to the entity at the given index (-1 if there is no "owner"). The
+ * indexes contained here are relative to the result of
+ * {@link #getEntityPersisters}.
+ *
+ * @return The owner indicators (see discussion above).
*/
protected int[] getOwners() {
return null;
}
/**
- * An array of unique key property names by which the corresponding
- * entities are referenced by other entities in the result set
+ * An array of the owner types corresponding to the {@link #getOwners()}
+ * return values. Indices indicating no owner are null here.
+ *
+ * @return The types for the owners.
*/
protected EntityType[] getOwnerAssociationTypes() {
return null;
@@ -858,13 +870,14 @@
}
private void endCollectionLoad(
- final Object resultSetId,
- final SessionImplementor session,
- final CollectionPersister collectionPersister
- ) {
+ final Object resultSetId,
+ final SessionImplementor session,
+ final CollectionPersister collectionPersister) {
//this is a query and we are loading multiple instances of the same collection role
- session.getPersistenceContext().getCollectionLoadContext()
- .endLoadingCollections( collectionPersister, resultSetId, session );
+ session.getPersistenceContext()
+ .getLoadContexts()
+ .getCollectionLoadContext( ( ResultSet ) resultSetId )
+ .endLoadingCollections( collectionPersister );
}
protected List getResultList(List results, ResultTransformer resultTransformer) throws QueryException {
@@ -987,8 +1000,9 @@
}
}
- PersistentCollection rowCollection = persistenceContext.getCollectionLoadContext()
- .getLoadingCollection( persister, collectionRowKey, rs, session.getEntityMode() );
+ PersistentCollection rowCollection = persistenceContext.getLoadContexts()
+ .getCollectionLoadContext( rs )
+ .getLoadingCollection( persister, collectionRowKey );
if ( rowCollection != null ) {
rowCollection.readFrom( rs, persister, descriptor, owner );
@@ -1007,9 +1021,9 @@
);
}
- persistenceContext.getCollectionLoadContext()
- .getLoadingCollection( persister, optionalKey, rs, session.getEntityMode() ); //handle empty collection
-
+ persistenceContext.getLoadContexts()
+ .getCollectionLoadContext( rs )
+ .getLoadingCollection( persister, optionalKey ); // handle empty collection
}
// else no collection element, but also no owner
@@ -1024,7 +1038,7 @@
private void handleEmptyCollections(
final Serializable[] keys,
final Object resultSetId,
- final SessionImplementor session) throws HibernateException {
+ final SessionImplementor session) {
if ( keys != null ) {
// this is a collection initializer, so we must create a collection
@@ -1042,18 +1056,13 @@
MessageHelper.collectionInfoString( collectionPersisters[j], keys[i], getFactory() )
);
}
-
+
session.getPersistenceContext()
- .getCollectionLoadContext()
- .getLoadingCollection(
- collectionPersisters[j],
- keys[i],
- resultSetId,
- session.getEntityMode()
- );
+ .getLoadContexts()
+ .getCollectionLoadContext( ( ResultSet ) resultSetId )
+ .getLoadingCollection( collectionPersisters[j], keys[i] );
}
}
-
}
// else this is not a collection initializer (and empty collections will
Modified: branches/Branch_3_2/Hibernate3/src/org/hibernate/type/CollectionType.java
===================================================================
--- branches/Branch_3_2/Hibernate3/src/org/hibernate/type/CollectionType.java 2007-03-19 19:48:21 UTC (rev 11300)
+++ branches/Branch_3_2/Hibernate3/src/org/hibernate/type/CollectionType.java 2007-03-19 20:43:46 UTC (rev 11301)
@@ -101,9 +101,13 @@
/**
* Instantiate an uninitialized collection wrapper or holder. Callers MUST add the holder to the
* persistence context!
+ *
+ * @param session The session from which the request is originating.
+ * @param persister The underlying collection persister (metadata)
+ * @param key The owner key.
+ * @return The instantiated collection.
*/
- public abstract PersistentCollection instantiate(SessionImplementor session,
- CollectionPersister persister, Serializable key) throws HibernateException;
+ public abstract PersistentCollection instantiate(SessionImplementor session, CollectionPersister persister, Serializable key);
public Object nullSafeGet(ResultSet rs, String name, SessionImplementor session, Object owner)
throws HibernateException, SQLException {
@@ -145,8 +149,7 @@
}
}
- protected String renderLoggableString(Object value, SessionFactoryImplementor factory)
- throws HibernateException {
+ protected String renderLoggableString(Object value, SessionFactoryImplementor factory) {
if ( Element.class.isInstance( value ) ) {
// for DOM4J "collections" only
// TODO: it would be better if this was done at the higher level by Printer
@@ -174,6 +177,10 @@
/**
* Get an iterator over the element set of the collection, which may not yet be wrapped
+ *
+ * @param collection The collection to be iterated
+ * @param session The session from which the request is originating.
+ * @return The iterator.
*/
public Iterator getElementsIterator(Object collection, SessionImplementor session) {
if ( session.getEntityMode()==EntityMode.DOM4J ) {
@@ -196,6 +203,9 @@
/**
* Get an iterator over the element set of the collection in POJO mode
+ *
+ * @param collection The collection to be iterated
+ * @return The iterator.
*/
protected Iterator getElementsIterator(Object collection) {
return ( (Collection) collection ).iterator();
@@ -241,16 +251,25 @@
/**
* Is the owning entity versioned?
+ *
+ * @param session The session from which the request is originating.
+ * @return True if the collection owner is versioned; false otherwise.
+ * @throws MappingException Indicates the underlying persister could not be located.
*/
private boolean isOwnerVersioned(SessionImplementor session) throws MappingException {
- return getPersister( session )
- .getOwnerEntityPersister()
- .isVersioned();
+ return getPersister( session ).getOwnerEntityPersister().isVersioned();
}
- private CollectionPersister getPersister(SessionImplementor session) {
- return session.getFactory()
- .getCollectionPersister( role );
+ /**
+ * Get our underlying collection persister (using the session to access the
+ * factory).
+ *
+ * @param session The session from which the request is originating.
+ * @return The underlying collection persister
+ * @throws org.hibernate.MappingException Indicates the underlying persister could not be located.
+ */
+ private CollectionPersister getPersister(SessionImplementor session) throws MappingException {
+ return session.getFactory().getCollectionPersister( role );
}
public boolean isDirty(Object old, Object current, SessionImplementor session)
@@ -269,9 +288,14 @@
throws HibernateException {
return isDirty(old, current, session);
}
+
/**
- * Wrap the naked collection instance in a wrapper, or instantiate a holder. Callers MUST add
- * the holder to the persistence context!
+ * Wrap the naked collection instance in a wrapper, or instantiate a
+ * holder. Callers <b>MUST</b> add the holder to the persistence context!
+ *
+ * @param session The session from which the request is originating.
+ * @param collection The bare collection to be wrapped.
+ * @return The wrapped collection.
*/
public abstract PersistentCollection wrap(SessionImplementor session, Object collection);
@@ -290,6 +314,10 @@
/**
* Get the key value from the owning entity instance, usually the identifier, but might be some
* other unique key, in the case of property-ref
+ *
+ * @param owner The collection owner
+ * @param session The session from which the request is originating.
+ * @return The collection owner's key
*/
public Serializable getKeyOfOwner(Object owner, SessionImplementor session) {
@@ -518,9 +546,13 @@
/**
* instantiate a collection wrapper (called when loading an object)
+ *
+ * @param key The collection owner key
+ * @param session The session from which the request is originating.
+ * @param owner The collection owner
+ * @return The collection
*/
- public Object getCollection(Serializable key, SessionImplementor session, Object owner)
- throws HibernateException {
+ public Object getCollection(Serializable key, SessionImplementor session, Object owner) {
CollectionPersister persister = getPersister( session );
final PersistenceContext persistenceContext = session.getPersistenceContext();
@@ -529,25 +561,19 @@
if (entityMode==EntityMode.DOM4J && !isEmbeddedInXML) {
return UNFETCHED_COLLECTION;
}
-
+
// check if collection is currently being loaded
- PersistentCollection collection = persistenceContext
- .getCollectionLoadContext()
- .getLoadingCollection( persister, key, entityMode );
-
+ PersistentCollection collection = persistenceContext.getLoadContexts().locateLoadingCollection( persister, key );
if ( collection == null ) {
-
// check if it is already completely loaded, but unowned
collection = persistenceContext.useUnownedCollection( new CollectionKey(persister, key, entityMode) );
-
- if (collection==null) {
-
+ if ( collection == null ) {
// create a new collection wrapper, to be initialized later
collection = instantiate( session, persister, key );
- collection.setOwner(owner);
-
+ collection.setOwner( owner );
+
persistenceContext.addUninitializedCollection( persister, collection, key );
-
+
// some collections are not lazy:
if ( initializeImmediately( entityMode ) ) {
session.initializeCollection( collection, false );
@@ -559,13 +585,9 @@
if ( hasHolder( entityMode ) ) {
session.getPersistenceContext().addCollectionHolder( collection );
}
-
}
-
}
-
- collection.setOwner(owner);
-
+ collection.setOwner( owner );
return collection.getValue();
}
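The reworked `getCollection()` body above preserves a three-step lookup order: first check whether the collection is currently being loaded, then check whether it was already completely loaded but is unowned, and only then instantiate a fresh uninitialized wrapper. A hedged, self-contained sketch of that resolution order (string placeholders stand in for `PersistentCollection`; none of these names are real Hibernate API):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model of the lookup order in CollectionType.getCollection():
// 1. a collection currently being loaded wins,
// 2. else a completely-loaded-but-unowned collection is claimed (removed
//    from the unowned pool, mirroring useUnownedCollection()),
// 3. else a new uninitialized wrapper is created.
final class CollectionResolver {
    final Map<String, String> loading = new HashMap<>(); // key -> collection being loaded
    final Map<String, String> unowned = new HashMap<>(); // key -> loaded but unowned

    String getCollection(String key) {
        String c = loading.get(key);            // step 1: currently loading?
        if (c == null) {
            c = unowned.remove(key);            // step 2: claim an unowned one
            if (c == null) {
                c = "new-wrapper:" + key;       // step 3: instantiate a wrapper
            }
        }
        return c;
    }
}
```

Note that step 2 removes the entry: an unowned collection can be claimed by exactly one owner, after which a later lookup for the same key falls through to step 3.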
}
Hibernate SVN: r11300 - in trunk/HibernateExt: annotations/lib and 9 other directories.
by hibernate-commits@lists.jboss.org
Author: epbernard
Date: 2007-03-19 15:48:21 -0400 (Mon, 19 Mar 2007)
New Revision: 11300
Modified:
trunk/HibernateExt/annotations/changelog.txt
trunk/HibernateExt/annotations/lib/README.txt
trunk/HibernateExt/entitymanager/changelog.txt
trunk/HibernateExt/search/build.properties.dist
trunk/HibernateExt/search/build.xml
trunk/HibernateExt/search/changelog.txt
trunk/HibernateExt/search/doc/reference/en/modules/architecture.xml
trunk/HibernateExt/search/doc/reference/en/modules/configuration.xml
trunk/HibernateExt/search/doc/reference/en/modules/mapping.xml
trunk/HibernateExt/search/lib/README.txt
trunk/HibernateExt/shards/build.xml
trunk/HibernateExt/shards/changelog.txt
trunk/HibernateExt/shards/doc/reference/en/master.xml
trunk/HibernateExt/shards/lib/README.txt
trunk/HibernateExt/shards/readme.txt
trunk/HibernateExt/validator/changelog.txt
trunk/HibernateExt/validator/src/java/org/hibernate/validator/Version.java
Log:
Adjust version for releases
Modified: trunk/HibernateExt/annotations/changelog.txt
===================================================================
--- trunk/HibernateExt/annotations/changelog.txt 2007-03-19 19:04:03 UTC (rev 11299)
+++ trunk/HibernateExt/annotations/changelog.txt 2007-03-19 19:48:21 UTC (rev 11300)
@@ -1,6 +1,53 @@
Hibernate Annotations Changelog
===============================
+3.3.0.GA (19-03-2007)
+---------------------
+
+** Bug
+ * [ANN-515] - Fields are not correctly quoted in @OneToMany relationships when specified
+ * [ANN-516] - @OrderBy added to wrong table in inheritance relationship
+ * [ANN-517] - Default NodeName value not set in HAN leading to NPE in DOM4J mode (Daniel)
+ * [ANN-521] - package-list file is missing from javax.persistence documentation
+ * [ANN-531] - EntityMode.DOM4J does not deserialize collection entities
+ * [ANN-544] - @SqlDeleteAll wrt to Collections
+ * [ANN-547] - Typo in Docs: 2.2.2.2 Declaring Column Attributes
+ * [ANN-549] - key column of a true map should be not null when a join table is used
+ * [ANN-551] - Guaranty the same parameter ordering when overriding SQL across VMs and compilations (Søren Pedersen)
+ * [ANN-554] - NPE with @Id on @OneToOne
+ * [ANN-555] - Fix typo @Tables.values to @Table.value
+ * [ANN-556] - @OneToOne(mappedBy might fail depending on alphebetical order
+ * [ANN-559] - Undefined filter definition leads to NPE rather than a proper exception
+ * [ANN-560] - Quoting clashes with defaults in NamingStrategy
+ * [ANN-567] - Ability to specify a custom persister on a collection (Shawn Clowater)
+ * [ANN-570] - @DiscriminatorForumla typo
+ * [ANN-574] - CascadeType ALL is not equals to REMOVE+REFRESH+PERSIST+MERGE
+
+
+** Improvement
+ * [ANN-26] - @OptimisticLock(excluded=true) (Logi Ragnarsson)
+ * [ANN-104] - Allow SQL customization for CRUD on secondary tables
+ * [ANN-252] - AnnotationConfiguration silently ignores classes that are annotated with wrong Entity, or not annotated.
+ * [ANN-444] - Ability to define fetch mode, inverse and optional for a Secondary table
+ * [ANN-492] - IdClass of a composite id + ManyToOne associations in id = Repeated column error (testcase patch)
+ * [ANN-502] - Cannot fully disable integration with Hibernate Validator
+ * [ANN-525] - @ForeignKey for secondary tables and joined subclasses
+ * [ANN-529] - MapBinder can generate SQL statements not supported by Oracle 10g
+ * [ANN-532] - Better exception when @UniqueConstraint refers to a wrong column name
+ * [ANN-535] - Force property insertability/updatability when @Generated is used
+ * [ANN-542] - @Immutable for entities and collections
+ * [ANN-553] - Remove classpath dependency between Hibernate Annotations and Hibernate Validator
+
+** New Feature
+ * [ANN-103] - Allow to specify fetching strategy for secondary table
+ * [ANN-505] - Support @Tuplizer
+ * [ANN-552] - Transparent event integration for Search and Validator
+
+
+** Task
+ * [ANN-584] - Move Validator and Search to their own project
+
+
3.2.1.GA (8-12-2006)
--------------------
Modified: trunk/HibernateExt/annotations/lib/README.txt
===================================================================
--- trunk/HibernateExt/annotations/lib/README.txt 2007-03-19 19:04:03 UTC (rev 11299)
+++ trunk/HibernateExt/annotations/lib/README.txt 2007-03-19 19:48:21 UTC (rev 11300)
@@ -7,8 +7,9 @@
hibernate-commons-annotations.jar: required
hibernate3.jar: required
hibernate core dependencies: required (see Hibernate Core for more information)
+ejb3-persistence.jar: required
hibernate-validator.jar: optional
-ejb3-persistence.jar: required
+hibernate-search.jar: optional
Test
====
Modified: trunk/HibernateExt/entitymanager/changelog.txt
===================================================================
--- trunk/HibernateExt/entitymanager/changelog.txt 2007-03-19 19:04:03 UTC (rev 11299)
+++ trunk/HibernateExt/entitymanager/changelog.txt 2007-03-19 19:48:21 UTC (rev 11300)
@@ -1,6 +1,26 @@
Hibernate EntityManager Changelog
==================================
+3.3.0.GA (19-03-2007)
+---------------------
+
+** Bug
+ * [EJB-46] - PrePersist callback method not called if entity's primary key is null
+ * [EJB-257] - EJB3Configuration should work wo having to call any of the configure(*)
+ * [EJB-259] - Evaluate orm.xml files in referenced jar files
+ * [EJB-261] - merge fails to update join table
+ * [EJB-263] - getSingleResult() and fetch raise abusive NonUniqueResultException
+ * [EJB-269] - Fail to deploy a persistence archive in Weblogic Server
+ * [EJB-275] - JarVisitor fails on WAS with white space
+
+
+** Improvement
+ * [EJB-242] - Be more defensive regarding exotic (aka buggy) URL protocol handler
+ * [EJB-262] - Provides XML file name on parsing error
+ * [EJB-271] - Raise a WARN when deployment descriptors (orm.xml) refer to an unknown property (increase usability)
+ * [EJB-266] - Avoid collection loading during cascaded PERSIST (improving performance on heavily cascaded object graphs)
+
+
3.2.1.GA (8-12-2006)
--------------------
Modified: trunk/HibernateExt/search/build.properties.dist
===================================================================
--- trunk/HibernateExt/search/build.properties.dist 2007-03-19 19:04:03 UTC (rev 11299)
+++ trunk/HibernateExt/search/build.properties.dist 2007-03-19 19:48:21 UTC (rev 11300)
@@ -1,6 +1,7 @@
common.dir=.
src.dir=src
test.dir=test
+testresources.dir=test-resources
hibernate-core.home=../hibernate-3.2
#locally present jars
Modified: trunk/HibernateExt/search/build.xml
===================================================================
--- trunk/HibernateExt/search/build.xml 2007-03-19 19:04:03 UTC (rev 11299)
+++ trunk/HibernateExt/search/build.xml 2007-03-19 19:48:21 UTC (rev 11300)
@@ -269,6 +269,12 @@
</fileset>
</copy>
+ <copy todir="${dist.dir}/test-resources" failonerror="false">
+ <fileset dir="${testresources.dir}">
+ <include name="**/*.*"/>
+ </fileset>
+ </copy>
+
<!-- copy dependencies -->
<copy todir="${dist.lib.dir}" failonerror="false">
<fileset file="${jpa-api.jar}"/>
@@ -278,6 +284,9 @@
<copy todir="${dist.lib.dir}/test" failonerror="false">
<fileset file="${annotations.jar}"/>
</copy>
+ <copy todir="${dist.lib.dir}/test" failonerror="false">
+ <fileset file="${lib.dir}/test/*.jar"/>
+ </copy>
<copy file="${basedir}/build.properties.dist" tofile="${dist.dir}/build.properties" failonerror="false">
</copy>
Modified: trunk/HibernateExt/search/changelog.txt
===================================================================
--- trunk/HibernateExt/search/changelog.txt 2007-03-19 19:04:03 UTC (rev 11299)
+++ trunk/HibernateExt/search/changelog.txt 2007-03-19 19:48:21 UTC (rev 11300)
@@ -1,7 +1,33 @@
Hibernate Search Changelog
==========================
-3.0.Beta1 (19-03-2007)
-----------------------
+3.0.0.Beta1 (19-03-2007)
+------------------------
-Initial release as a standalone product (see Hibernate Annotations changelog for previous informations)
\ No newline at end of file
+Initial release as a standalone product (see Hibernate Annotations changelog for previous informations)
+
+
+Release Notes - Hibernate Search - Version 3.0.0.beta1
+
+** Bug
+ * [HSEARCH-7] - Ignore object found in the index but no longer present in the database (for out of date indexes)
+ * [HSEARCH-21] - NPE in SearchFactory while using different threads
+ * [HSEARCH-22] - Enum value Index.UN_TOKENISED is misspelled
+ * [HSEARCH-24] - Potential deadlock when using multiple DirectoryProviders in a highly concurrent index update
+ * [HSEARCH-25] - Class cast exception in org.hibernate.search.impl.FullTextSessionImpl<init>(FullTextSessionImpl.java:54)
+ * [HSEARCH-28] - Wrong indexDir property in Apache Lucene Integration
+
+
+** Improvement
+ * [HSEARCH-29] - Share the initialization state across all Search event listeners instance
+ * [HSEARCH-30] - @FieldBridge now use o.h.s.a.Parameter rather than o.h.a.Parameter
+ * [HSEARCH-31] - Move to Lucene 2.1.0
+
+** New Feature
+ * [HSEARCH-1] - Give access to Directory providers
+ * [HSEARCH-2] - Default FieldBridge for enums (Sylvain Vieujot)
+ * [HSEARCH-3] - Default FieldBridge for booleans (Sylvain Vieujot)
+ * [HSEARCH-9] - Introduce a worker factory and its configuration
+ * [HSEARCH-16] - Cluster capability through JMS
+ * [HSEARCH-23] - Support asynchronous batch worker queue
+ * [HSEARCH-27] - Ability to index associated / embedded objects
Modified: trunk/HibernateExt/search/doc/reference/en/modules/architecture.xml
===================================================================
--- trunk/HibernateExt/search/doc/reference/en/modules/architecture.xml 2007-03-19 19:04:03 UTC (rev 11299)
+++ trunk/HibernateExt/search/doc/reference/en/modules/architecture.xml 2007-03-19 19:48:21 UTC (rev 11300)
@@ -6,7 +6,7 @@
engine. Both are backed by Apache Lucene.</para>
<para>When an entity is inserted, updated or removed to/from the database,
- <productname>Hibernate Search</productname> keeps track of this event
+ Hibernate Search keeps track of this event
(through the Hibernate event system) and schedule an index update. All the
index updates are handled for you without you having to use the Apache
Lucene APIs.</para>
@@ -16,7 +16,7 @@
will manage a given Lucene <classname>Directory</classname> type. You can
configure directory providers to adjust the directory target.</para>
- <para><productname>Hibernate Search</productname> can also use a Lucene
+ <para>Hibernate Search can also use a Lucene
index to search an entity and return a (list of) managed entity saving you
from the tedious Object / Lucene Document mapping and low level Lucene APIs.
The same persistence context is shared between Hibernate and Hibernate
Modified: trunk/HibernateExt/search/doc/reference/en/modules/configuration.xml
===================================================================
--- trunk/HibernateExt/search/doc/reference/en/modules/configuration.xml 2007-03-19 19:04:03 UTC (rev 11299)
+++ trunk/HibernateExt/search/doc/reference/en/modules/configuration.xml 2007-03-19 19:48:21 UTC (rev 11300)
@@ -7,10 +7,9 @@
<para>Apache Lucene has a notion of Directory where the index is stored.
The Directory implementation can be customized but Lucene comes bundled
- with a file system and a full memory implementation.
- <productname>Hibernate Search</productname> has the notion of
- <literal>DirectoryProvider</literal> that handle the configuration and the
- initialization of the Lucene Directory.</para>
+ with a file system and a full memory implementation. Hibernate Search has
+ the notion of <literal>DirectoryProvider</literal> that handle the
+ configuration and the initialization of the Lucene Directory.</para>
<table>
<title>List of built-in Directory Providers</title>
@@ -347,7 +346,7 @@
<remark>Hibernate Search test suite makes use of JBoss Embedded to
test the JMS integration. It allows the unit test to run both the MDB
container and JBoss Messaging (JMS provider) in a standalone way
- (marketed by some as "lightweight"). </remark>
+ (marketed by some as "lightweight").</remark>
</section>
</section>
</section>
@@ -380,5 +379,21 @@
<listener class="org.hibernate.search.event.FullTextIndexEventListener"/>
</event>
</hibernate-configuration></programlisting>
+
+ <para>Be sure to add the appropriate jar files in your classpath. Check
+ <literal>lib/README.TXT</literal> for the list of third party libraries. A
+ typical installation on top of Hibernate Annotations will add:</para>
+
+ <itemizedlist>
+ <listitem>
+ <para><filename>hibernate-search.jar</filename>: the core
+ engine</para>
+ </listitem>
+
+ <listitem>
+ <para><filename>lucene-core-*.jar</filename>: Lucene core
+ engine</para>
+ </listitem>
+ </itemizedlist>
</section>
</chapter>
\ No newline at end of file
Modified: trunk/HibernateExt/search/doc/reference/en/modules/mapping.xml
===================================================================
--- trunk/HibernateExt/search/doc/reference/en/modules/mapping.xml 2007-03-19 19:04:03 UTC (rev 11299)
+++ trunk/HibernateExt/search/doc/reference/en/modules/mapping.xml 2007-03-19 19:48:21 UTC (rev 11300)
@@ -75,14 +75,14 @@
<para>Whether or not you want to store the data depends on how you wish
to use the index query result. As of today, for a pure
- <productname>Hibernate Search </productname> usage, storing is not
+ Hibernate Search usage, storing is not
necessary. Whether or not you want to tokenize a property or not depends
on whether you wish to search the element as is, or only normalized part
of it. It make sense to tokenize a text field, but it does not to do it
for a date field (or an id field).</para>
<para>Finally, the id property of an entity is a special property used
- by <productname>Hibernate Search</productname> to ensure index unicity
+ by Hibernate Search to ensure index unicity
of a given entity. By design, an id has to be stored and must not be
tokenized. To mark a property as index id, use the
<literal>@DocumentId</literal> annotation.</para>
@@ -302,7 +302,7 @@
<para>All field of a full text index in Lucene have to be represented as
Strings. Ones Java properties have to be indexed in a String form. For
- most of your properties, <productname>Hibernate Search</productname> does
+ most of your properties, Hibernate Search does
the translation job for you thanks to a built-in set of bridges. In some
cases, though you need a fine grain control over the translation
process.</para>
Modified: trunk/HibernateExt/search/lib/README.txt
===================================================================
--- trunk/HibernateExt/search/lib/README.txt 2007-03-19 19:04:03 UTC (rev 11299)
+++ trunk/HibernateExt/search/lib/README.txt 2007-03-19 19:48:21 UTC (rev 11300)
@@ -6,8 +6,8 @@
hibernate-commons-annotations.jar: required
hibernate3.jar: required
hibernate core dependencies: required (see Hibernate Core for more information)
-lucene-core-*.jar: required
-jms.jar: optional (needed for JMS based clustering strategy)
+lucene-core-*.jar: required (used version 2.1.0)
+jms.jar: optional (needed for JMS based clustering strategy, usually available with your application server)
Test
====
Modified: trunk/HibernateExt/shards/build.xml
===================================================================
--- trunk/HibernateExt/shards/build.xml 2007-03-19 19:04:03 UTC (rev 11299)
+++ trunk/HibernateExt/shards/build.xml 2007-03-19 19:48:21 UTC (rev 11300)
@@ -16,7 +16,7 @@
<!-- Name of project and version, used to create filenames -->
<property name="Name" value="Hibernate Shards"/>
<property name="name" value="hibernate-shards"/>
- <property name="version" value="3.0.0.BETA1"/>
+ <property name="version" value="3.0.0.Beta1"/>
<property name="javadoc.packagenames" value="org.hibernate.shards.*"/>
<property name="copy.test" value="true"/>
<property name="javac.source" value="1.5"/>
Modified: trunk/HibernateExt/shards/changelog.txt
===================================================================
--- trunk/HibernateExt/shards/changelog.txt 2007-03-19 19:04:03 UTC (rev 11299)
+++ trunk/HibernateExt/shards/changelog.txt 2007-03-19 19:48:21 UTC (rev 11300)
@@ -1,7 +1,7 @@
Hibernate Shards Changelog
==========================
-3.0.0.BETA1 (20-03-2007)
----------------------
+3.0.0.Beta1 (19-03-2007)
+------------------------
Initial release
\ No newline at end of file
Modified: trunk/HibernateExt/shards/doc/reference/en/master.xml
===================================================================
--- trunk/HibernateExt/shards/doc/reference/en/master.xml 2007-03-19 19:04:03 UTC (rev 11299)
+++ trunk/HibernateExt/shards/doc/reference/en/master.xml 2007-03-19 19:48:21 UTC (rev 11300)
@@ -13,7 +13,7 @@
<title>Hibernate Shards</title>
<subtitle>Horizontal Partitioning With Hibernate</subtitle>
<subtitle>Reference Guide</subtitle>
- <releaseinfo>3.0.0.BETA1</releaseinfo>
+ <releaseinfo>3.0.0.Beta1</releaseinfo>
<mediaobject>
<imageobject>
<imagedata fileref="images/hibernate_logo_a.png" format="PNG"/>
Modified: trunk/HibernateExt/shards/lib/README.txt
===================================================================
--- trunk/HibernateExt/shards/lib/README.txt 2007-03-19 19:04:03 UTC (rev 11299)
+++ trunk/HibernateExt/shards/lib/README.txt 2007-03-19 19:48:21 UTC (rev 11300)
@@ -3,6 +3,7 @@
Core
====
+hibernate3.jar: required
hibernate core dependencies: required (see Hibernate Core for more information)
#list all other dependencies (including version) and put the jar in ./lib
Modified: trunk/HibernateExt/shards/readme.txt
===================================================================
--- trunk/HibernateExt/shards/readme.txt 2007-03-19 19:04:03 UTC (rev 11299)
+++ trunk/HibernateExt/shards/readme.txt 2007-03-19 19:48:21 UTC (rev 11300)
@@ -1,6 +1,6 @@
Hibernate Shards
================
-Version: 3.0.0.BETA1, 20.03.2007
+Version: 3.0.0.Beta1, 19.03.2007
Description
-----------
Modified: trunk/HibernateExt/validator/changelog.txt
===================================================================
--- trunk/HibernateExt/validator/changelog.txt 2007-03-19 19:04:03 UTC (rev 11299)
+++ trunk/HibernateExt/validator/changelog.txt 2007-03-19 19:48:21 UTC (rev 11300)
@@ -4,4 +4,20 @@
3.0.0.GA (19-03-2007)
---------------------
-Initial release as a standalone product (see Hibernate Annotations changelog for previous informations)
\ No newline at end of file
+Initial release as a standalone product (see Hibernate Annotations changelog for previous informations)
+
+** Bug
+ * [HV-2] - Deprecate String support for both @Past and @Future validating Strings
+ * [HV-3] - @Email fail on empty strings
+ * [HV-7] - Two level @Valid annotation doesn't work
+
+
+** Improvement
+ * [HV-5] - Multiple validators of the same type per element (John Gilbert)
+
+** New Feature
+ * [HV-1] - Make ClassValidator independent of Hibernate Annotations
+ * [HV-6] - @EAN
+ * [HV-8] - Make Validator support pure JavaPersistence players
+ * [HV-9] - @Digits(integerDigits, fractionalDigits)
+ * [HV-10] - @CreditCardNumber for Hibernate Validator
Modified: trunk/HibernateExt/validator/src/java/org/hibernate/validator/Version.java
===================================================================
--- trunk/HibernateExt/validator/src/java/org/hibernate/validator/Version.java 2007-03-19 19:04:03 UTC (rev 11299)
+++ trunk/HibernateExt/validator/src/java/org/hibernate/validator/Version.java 2007-03-19 19:48:21 UTC (rev 11300)
@@ -10,7 +10,7 @@
* @author Emmanuel Bernard
*/
public class Version {
- public static final String VERSION = "3.2.2.GA";
+ public static final String VERSION = "3.0.0.GA";
private static Log log = LogFactory.getLog( Version.class );
static {
Hibernate SVN: r11299 - branches/Branch_3_2/Hibernate3/src/org/hibernate/persister/entity.
by hibernate-commits@lists.jboss.org
Author: steve.ebersole(a)jboss.com
Date: 2007-03-19 15:04:03 -0400 (Mon, 19 Mar 2007)
New Revision: 11299
Modified:
branches/Branch_3_2/Hibernate3/src/org/hibernate/persister/entity/AbstractEntityPersister.java
Log:
HHH-2499 : minor, incorrect assertion check
Modified: branches/Branch_3_2/Hibernate3/src/org/hibernate/persister/entity/AbstractEntityPersister.java
===================================================================
--- branches/Branch_3_2/Hibernate3/src/org/hibernate/persister/entity/AbstractEntityPersister.java 2007-03-19 19:03:39 UTC (rev 11298)
+++ branches/Branch_3_2/Hibernate3/src/org/hibernate/persister/entity/AbstractEntityPersister.java 2007-03-19 19:04:03 UTC (rev 11299)
@@ -3674,7 +3674,7 @@
}
public void processUpdateGeneratedProperties(Serializable id, Object entity, Object[] state, SessionImplementor session) {
- if ( !hasInsertGeneratedProperties() ) {
+ if ( !hasUpdateGeneratedProperties() ) {
throw new AssertionFailure("no update-generated properties");
}
processGeneratedProperties( id, entity, state, session, sqlUpdateGeneratedValuesSelectString, getPropertyUpdateGenerationInclusions() );
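The one-line HHH-2499 fix above is easy to miss: the assertion guard inside `processUpdateGeneratedProperties()` tested the *insert*-generated flag, so an entity with update-generated properties but no insert-generated ones would trip the `AssertionFailure` on a legitimate call. A tiny standalone illustration of the corrected guard pattern (simplified names, not Hibernate code):

```java
// Models the HHH-2499 fix: each process method must assert the capability
// flag that matches its own operation, not its sibling's.
final class GeneratedPropertyProcessor {
    private final boolean hasInsertGenerated;
    private final boolean hasUpdateGenerated;

    GeneratedPropertyProcessor(boolean hasInsertGenerated, boolean hasUpdateGenerated) {
        this.hasInsertGenerated = hasInsertGenerated;
        this.hasUpdateGenerated = hasUpdateGenerated;
    }

    void processInsertGeneratedProperties() {
        if (!hasInsertGenerated) {            // guard matches the insert path
            throw new AssertionError("no insert-generated properties");
        }
        // ... select back insert-generated values ...
    }

    void processUpdateGeneratedProperties() {
        if (!hasUpdateGenerated) {            // fixed: was !hasInsertGenerated
            throw new AssertionError("no update-generated properties");
        }
        // ... select back update-generated values ...
    }
}
```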