[Hibernate-JIRA] Created: (HSEARCH-979) Programmatic API: mapping a composite primary key
by Fuessmann (JIRA)
Programmatic API: mapping a composite primary key
-------------------------------------------------
Key: HSEARCH-979
URL: http://opensource.atlassian.com/projects/hibernate/browse/HSEARCH-979
Project: Hibernate Search
Issue Type: Bug
Components: mapping
Affects Versions: 3.4.1.Final, 3.3.0.Final
Environment: Hibernate 3.6 Final
Reporter: Fuessmann
Hello, I'm using Hibernate Search in combination with the [b]Programmatic API[/b] for dynamic indexing. Is there any way to index an entity with a composite primary key annotated with [b]@EmbeddedId[/b]? When using Hibernate Search without the Programmatic API I choose a TwoWayFieldBridge, but how can I use one with the Programmatic API?
In my case I am trying to implement a programmatic mapping for an entity. The entity has a composite primary key annotated with @EmbeddedId.
{quote}
...
@Entity(name = "a")
@Table(name = "a")
public class A implements java.io.Serializable {
    ...
    @EmbeddedId
    @AttributeOverrides({
        @AttributeOverride(name = "name_1", column = @Column(name = "name_1", nullable = false, length = 64)),
        @AttributeOverride(name = "name_2", column = @Column(name = "name_2", nullable = false, length = 16)) })
    public AId getId() {
        return this.id;
    }
    ...
}
{quote}
The code for ABridge is not shown here, but it does exist.
I tried code like this, but it does not work:
{quote}
mapping.entity(A.class).indexed().indexName("a")
    .property("id", ElementType.METHOD).documentId().field().bridge(ABridge.class);
{quote}
Exception occurred during event dispatching:
{quote}
org.hibernate.HibernateException: {color:red}could not init listeners{color}
...
Caused by: org.hibernate.search.SearchException: {color:red}Unable to guess FieldBridge for id{color}
...
{quote}
I need some advice to solve my problem. Thanks.
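For reference, whatever mapping API is used, the TwoWayFieldBridge for the composite key has to serialize AId into the single indexed string and parse it back. A minimal self-contained sketch of that round trip, independent of the Search APIs (the field names mirror the mapping above; the "|" separator and method names are my own assumption, not the bridge contract):

```java
// Sketch of the objectToString / stringToObject round trip a
// TwoWayFieldBridge for AId would perform. Plain Java, no Search APIs.
public class AIdBridgeSketch {

    // Mirrors the @EmbeddedId component from the mapping above.
    public static class AId {
        final String name1;
        final String name2;
        public AId(String name1, String name2) {
            this.name1 = name1;
            this.name2 = name2;
        }
    }

    // Serialize the composite key into the single string stored in the index.
    public static String objectToString(AId id) {
        return id.name1 + "|" + id.name2; // assumes "|" never occurs in the values
    }

    // Parse the indexed string back into the composite key.
    public static AId stringToObject(String stored) {
        String[] parts = stored.split("\\|", 2);
        return new AId(parts[0], parts[1]);
    }

    public static void main(String[] args) {
        AId id = new AId("abc", "42");
        String stored = objectToString(id);
        AId back = stringToObject(stored);
        System.out.println(stored);                         // abc|42
        System.out.println(back.name1 + " " + back.name2);  // abc 42
    }
}
```
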
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
12 years, 9 months
[Hibernate-JIRA] Created: (HHH-7218) Result is closed; invalid operation
by Pranav Shah (JIRA)
Result is closed; invalid operation
-----------------------------------
Key: HHH-7218
URL: https://hibernate.onjira.com/browse/HHH-7218
Project: Hibernate ORM
Issue Type: Bug
Components: core
Affects Versions: 3.5.1
Environment: DB2. Hibernate 3.5.1 core.
Reporter: Pranav Shah
I am using query.list() with pagination. The first time I query a chunk of data it returns the correct values; the second time I query the database it throws the exception given in the summary. By the time of the second query the table has fewer rows than it did for the first query, because the database is being updated in between.
For example: right now employee has 6 records and I query with first result 0 and max result 5. After the first batch completes and before the second batch starts, employee has only 4 records, and the second batch crashes with the exception message "ResultSet is closed; invalid operation", SQLCode -4470.
Please let me know if you need more detail.
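The failure mode can be reproduced outside Hibernate: page boundaries computed against the first snapshot no longer fit once rows disappear. A plain-Java sketch of the clamping a pagination loop needs when the data shrinks between batches (the list stands in for the query result; all names are illustrative, not the Hibernate API):

```java
import java.util.ArrayList;
import java.util.List;

public class PagingSketch {

    // Stand-in for query.setFirstResult(first).setMaxResults(max).list():
    // returns the requested page, clamped so a shrunken table cannot push
    // the requested range past the end of the data.
    static <T> List<T> page(List<T> rows, int first, int max) {
        int from = Math.min(first, rows.size());
        int to = Math.min(first + max, rows.size());
        return new ArrayList<>(rows.subList(from, to));
    }

    public static void main(String[] args) {
        List<String> employees = new ArrayList<>(
                List.of("e1", "e2", "e3", "e4", "e5", "e6"));

        // First batch: first=0, max=5 against 6 rows.
        System.out.println(page(employees, 0, 5).size()); // 5

        // Table shrinks to 4 rows between the batches.
        employees.subList(4, 6).clear();

        // Second batch: first=5 now lies beyond the data; clamping
        // yields an empty page instead of an invalid cursor position.
        System.out.println(page(employees, 5, 5).size()); // 0
    }
}
```
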
[Hibernate-JIRA] Created: (HHH-7217) Performance dropped between 3.6 and 4.1.1 using cached entities
by Emeric Fillatre (JIRA)
Performance dropped between 3.6 and 4.1.1 using cached entities
---------------------------------------------------------------
Key: HHH-7217
URL: https://hibernate.onjira.com/browse/HHH-7217
Project: Hibernate ORM
Issue Type: Improvement
Components: caching (L2)
Affects Versions: 4.1.1
Environment: Hibernate 4.1.1.final, mysql 5, atomikos 3.7
Reporter: Emeric Fillatre
Attachments: jta-cached.JPG, jta-not cached.JPG, non-jta.JPG
It seems that performance has dropped between 3.6 and 4.1.1.
While upgrading to the new version I ran some unit tests and found that, with only the libraries upgraded and the same code and configuration, some executions looked slower.
I ran several tests to figure out where the slowdown comes from and noticed that repeated reads of cached entities take much longer.
I tested Hibernate 3.6 with Atomikos 3.7 as the JTA implementation and Infinispan 4.2.1 against Hibernate 4.1.1 with Atomikos 3.7 and Infinispan 5.1.2, in three scenarios for both configurations:
a non-JTA transaction, a JTA transaction with entities not declared as cacheable, and a JTA transaction with cacheable entities.
Multiple successive reads are more than 3 times slower in the last scenario using Hibernate 4.
The attached test results show the 3.6 configuration on the left side and the 4.1.1 configuration on the right.
The last test in each run loads the same entity by its ID a thousand times in succession.
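The shape of that last test is just a timed loop over a load call; a generic sketch for anyone wanting to reproduce the comparison (the Runnable stands in for the session.get call on the L2-cached entity; all names are mine, not the original test code):

```java
public class RepeatedLoadTiming {

    // Times `iterations` invocations of `load` and returns the elapsed
    // wall-clock time in milliseconds.
    static long time(int iterations, Runnable load) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            load.run();
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        // Stand-in for session.get(Entity.class, id) on a cached entity;
        // run once per Hibernate version and compare the two numbers.
        long ms = time(1000, () -> { /* cached load goes here */ });
        System.out.println("1000 loads took " + ms + " ms");
    }
}
```
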
[Hibernate-JIRA] Created: (HHH-7072) ElementCollection not updated correctly if the Embeddable component has a nullable property
by Aaron Trewern (JIRA)
ElementCollection not updated correctly if the Embeddable component has a nullable property
-------------------------------------------------------------------------------------------
Key: HHH-7072
URL: https://hibernate.onjira.com/browse/HHH-7072
Project: Hibernate ORM
Issue Type: Bug
Components: annotations
Affects Versions: 3.5.2
Reporter: Aaron Trewern
I have an @ElementCollection that is a collection of @Embeddable components. When saving an updated entity I can see that Hibernate issues an SQL DELETE for each of the rows in the collection followed by an INSERT, which is the expected behaviour for an @ElementCollection.
The rows are deleted using a WHERE clause that includes all the properties of the @Embeddable component. With MySQL the DELETE fails to delete any row whose nullable property previously held a null value.
This is because Hibernate issues a prepared SQL statement like "DELETE FROM tableName t where t.a = ? and t.b = ?". If b is a nullable property and the component being saved has a null value for b, the DELETE matches nothing, because in SQL "b = NULL" never evaluates to true.
To correctly delete the row with MySQL the statement should be "DELETE FROM tableName t where t.a = ? and t.b is null".
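The required change amounts to null-aware predicate generation: emit "= ?" only for columns whose previous value was non-null and "is null" otherwise. A small sketch of that idea in plain Java (the column names and helper are illustrative, not Hibernate's internals):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class NullAwareDelete {

    // Builds the WHERE clause for deleting one collection row, using
    // "is null" instead of "= ?" for columns whose previous value was
    // null, since "col = NULL" never matches any row in SQL.
    static String whereClause(Map<String, Object> previousRow) {
        return previousRow.entrySet().stream()
                .map(e -> e.getValue() == null
                        ? e.getKey() + " is null"
                        : e.getKey() + " = ?")
                .collect(Collectors.joining(" and "));
    }

    public static void main(String[] args) {
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("t.a", "x");
        row.put("t.b", null); // nullable property whose old value was null
        System.out.println("DELETE FROM tableName t where " + whereClause(row));
        // DELETE FROM tableName t where t.a = ? and t.b is null
    }
}
```
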
[Hibernate-JIRA] Created: (HHH-7214) DiscriminatorValue
by Lorber Sebastien (JIRA)
DiscriminatorValue
------------------
Key: HHH-7214
URL: https://hibernate.onjira.com/browse/HHH-7214
Project: Hibernate ORM
Issue Type: Improvement
Components: annotations
Affects Versions: 3.6.4
Environment: Oracle + Hibernate 3.6.4 final + Java 1.6
Reporter: Lorber Sebastien
I have an abstract entity with the following annotations:
@Entity
@Table(name = LineBlock.TABLE_NAME)
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(name = "blockType", discriminatorType = DiscriminatorType.STRING)
public abstract class LineBlock implements Serializable, Visitable {
I have 2 sub entities:
@Entity
@DiscriminatorValue("EMAIL")
public class EmailBlock extends LineBlock {
@Entity
@DiscriminatorValue("EMAIL")
public class ArgusBlock extends LineBlock {
It was my own mistake not to change the discriminator value, but it took me some time to figure out that the problem was so easy to solve.
The problem is that after saving an object referring to an ArgusBlock entity, when loading that object back, Hibernate seems to take the first subclass registered for that discriminator value, which in this case was EmailBlock.
I think that when a discriminator value is used twice or more, an exception should prevent this "random" behavior.
The problem seems to be there:
org.hibernate.persister.entity.SingleTableEntityPersister#getSubclassForDiscriminatorValue
There is a Map<Discriminator,EntityName> in which we probably put the same discriminator key twice, so one value overrides the other.
The problem seems easy to solve and I can try doing it myself. I see 2 possible solutions:
- Raise an exception when a key in the map is about to be overwritten during construction
- Use a multimap and raise an exception when a lookup finds multiple values
I don't really know if it's a bug or an improvement, but I think it's always better to be fail-fast in such cases.
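The first proposed solution can be sketched with a plain map whose put is checked for a previous entry (the class and method names are hypothetical, not the SingleTableEntityPersister internals):

```java
import java.util.HashMap;
import java.util.Map;

public class DiscriminatorRegistry {

    private final Map<String, String> subclassByDiscriminator = new HashMap<>();

    // Fail fast: registering the same discriminator value twice is a
    // mapping error, so refuse it instead of silently overwriting.
    public void register(String discriminatorValue, String entityName) {
        String previous = subclassByDiscriminator.put(discriminatorValue, entityName);
        if (previous != null) {
            throw new IllegalStateException(
                    "Discriminator value \"" + discriminatorValue
                    + "\" is mapped by both " + previous + " and " + entityName);
        }
    }

    public static void main(String[] args) {
        DiscriminatorRegistry registry = new DiscriminatorRegistry();
        registry.register("EMAIL", "EmailBlock");
        try {
            registry.register("EMAIL", "ArgusBlock"); // duplicate -> exception
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```
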