[Hibernate-JIRA] Created: (HHH-5680) IN expression does not conform to JPA spec.
by Azuo Lee (JIRA)
IN expression does not conform to JPA spec.
-------------------------------------------
Key: HHH-5680
URL: http://opensource.atlassian.com/projects/hibernate/browse/HHH-5680
Project: Hibernate Core
Issue Type: Bug
Affects Versions: 3.6.0
Reporter: Azuo Lee
According to JPA 2.0 spec 4.6.9, the syntax for an IN expression should be:
in_expression ::=
{state_field_path_expression | type_discriminator} [NOT] IN
{ ( in_item {, in_item}* ) | (subquery) | collection_valued_input_parameter }
in_item ::= literal | single_valued_input_parameter
Note that parentheses are required if a subquery, or one or more literals or single_valued_input_parameters, is used, but must be absent if a collection_valued_input_parameter is used.
For example, with the following query parameters:
p1 : String "01";
p2 : String "02";
p3 : List containing 2 Strings "01" and "02";
the following IN expressions should all be legal, per JPA spec:
IN ("01", "02")
IN (:p1, :p2)
IN (:p1)
IN :p3
but the following expressions are ILLEGAL:
IN :p1
IN (:p3)
The current Hibernate implementation treats parentheses as mandatory in an IN expression; this does not conform to the JPA spec.
The spec's use of parentheses in an IN expression provides a strict syntax for distinguishing a single_valued_input_parameter from a collection_valued_input_parameter, which is more reliable than "guessing" the semantics of the parameter by examining its Java type.
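The type-based "guessing" the report warns against can be sketched in plain Java (a hypothetical helper, not Hibernate code): deciding whether a bound parameter is collection-valued by inspecting its runtime type.

```java
import java.util.Collection;
import java.util.List;

public class ParamKindGuess {
    // Hypothetical check: treat any Collection as a collection_valued_input_parameter.
    static boolean isCollectionValued(Object param) {
        return param instanceof Collection;
    }

    public static void main(String[] args) {
        Object p1 = "01";                // single_valued_input_parameter
        Object p3 = List.of("01", "02"); // collection_valued_input_parameter
        System.out.println(isCollectionValued(p1)); // false
        System.out.println(isCollectionValued(p3)); // true
    }
}
```

With the spec's parenthesis rule no such runtime inspection is needed: {{IN (:p1)}} is unambiguously single-valued and {{IN :p3}} unambiguously collection-valued.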
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: http://opensource.atlassian.com/projects/hibernate/secure/Administrators....
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
13 years, 6 months
[Hibernate-JIRA] Created: (HHH-5694) Unique constraint violation when removing an item from a unidirectional OneToMany ordered List
by Pascal Thivent (JIRA)
Unique constraint violation when removing an item from a unidirectional OneToMany ordered List
----------------------------------------------------------------------------------------------
Key: HHH-5694
URL: http://opensource.atlassian.com/projects/hibernate/browse/HHH-5694
Project: Hibernate Core
Issue Type: Bug
Components: entity-manager
Affects Versions: 3.6.0, 3.5.6, 3.5.5, 3.5.4, 3.5.3, 3.5.2, 3.5.1, 3.5.0-Final
Environment: Tested with Hibernate 3.5+ on H2, Derby, HSQLDB
Reporter: Pascal Thivent
I have a {{Foo}} entity that has a unidirectional ordered {{OneToMany}} {{List}} of {{Bars}}:
{code}
@Entity
public class Foo {
@Id @GeneratedValue
private Long id;
@OneToMany
@OrderColumn(name = "order_index")
@JoinTable(name = "foo_bar_map", joinColumns = @JoinColumn(name = "foo_id"), inverseJoinColumns = @JoinColumn(name = "bar_id"))
private List<Bar> bars;
//...
}
{code}
So let's say {{Foo#1}} holds a list with {{Bar#1}}, {{Bar#2}}, {{Bar#3}} (in that order). When removing {{Bar#1}} from the List and persisting {{Foo#1}}, Hibernate executes the following odd SQL:
{code}
delete from foo_bar_map where foo_id=1 and order_index=2
update foo_bar_map set bar_id=2 where foo_id=1 and order_index=0
{code}
And this obviously fails with a unique constraint violation. Why does Hibernate delete the last item from the join table? Why does Hibernate mess with the bar_id? Shouldn't Hibernate update the order_column instead?
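For comparison, a sketch of statements that would answer the question above (assuming the {{foo_bar_map}} table from the mapping and a unique constraint on {{(foo_id, bar_id)}}): remove the row for {{Bar#1}} and close the gap in {{order_index}}, leaving {{bar_id}} untouched.

```sql
-- hypothetical expected sequence, not what Hibernate emits
delete from foo_bar_map where foo_id=1 and bar_id=1;
update foo_bar_map set order_index=0 where foo_id=1 and bar_id=2;
update foo_bar_map set order_index=1 where foo_id=1 and bar_id=3;
```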
I'm attaching a Mavenized test that reproduces the issue; run {{mvn test}}.
FWIW, this works with the RI (run {{mvn test -Peclipselink,h2}}).
[Hibernate-JIRA] Created: (HHH-5727) Collection member declaration not handling optional AS in HQL.
by Dave Stephan (JIRA)
Collection member declaration not handling optional AS in HQL.
--------------------------------------------------------------
Key: HHH-5727
URL: http://opensource.atlassian.com/projects/hibernate/browse/HHH-5727
Project: Hibernate Core
Issue Type: Bug
Components: core
Affects Versions: 3.3.2, 3.2.4.sp1
Reporter: Dave Stephan
HQL:
SELECT o FROM EntityBean AS o, IN (o.items) AS l WHERE l.itemValue = '1'
The log output gives the following:
2010-11-10 16:03:53,286 DEBUG [org.hibernate.hql.ast.QueryTranslatorImpl] (WorkerThread#0[127.0.0.1:60518]) parse() - HQL: SELECT o FROM EntityBean AS o, IN (o.items) AS l WHERE l.itemValue = '1'
2010-11-10 16:03:53,290 DEBUG [org.hibernate.hql.PARSER] (WorkerThread#0[127.0.0.1:60518]) Keyword 'AS' is being interpreted as an identifier due to: expecting IDENT, found 'AS'
2010-11-10 16:03:53,403 ERROR [org.hibernate.hql.PARSER] (WorkerThread#0[127.0.0.1:60518]) line 1:48: unexpected token: l
According to the JPA persistence spec, the AS keyword is optional for collection member declarations:
collection_member_declaration ::=
IN (collection_valued_path_expression) [AS] identification_variable
In hql.g we have:
inCollectionDeclaration!
: IN! OPEN! p:path CLOSE! a:alias
{ #inCollectionDeclaration = #([JOIN, "join"], [INNER, "inner"], #p, #a); }
;
Should this be a:asAlias rather than a:alias?
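If the grammar's {{asAlias}} rule (which accepts an optional AS before the identifier) is what is meant, the reporter's suggestion would read roughly as follows; this is a sketch of the proposal, not a tested patch:

```
inCollectionDeclaration!
	: IN! OPEN! p:path CLOSE! a:asAlias
	{ #inCollectionDeclaration = #([JOIN, "join"], [INNER, "inner"], #p, #a); }
	;
```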
[Hibernate-JIRA] Created: (HHH-5740) EXTRA Lazy collection with inverse owner: PersistentSet still contains previously removed elements
by Guenther Demetz (JIRA)
EXTRA Lazy collection with inverse owner: PersistentSet still contains previously removed elements
-------------------------------------------------------------------------------------------------
Key: HHH-5740
URL: http://opensource.atlassian.com/projects/hibernate/browse/HHH-5740
Project: Hibernate Core
Issue Type: Bug
Components: core
Affects Versions: 3.6.0, 3.5.0-Final
Environment: Hibernate 3.6.0 (3.5.0 is also affected by this bug), database independent (attached test case is based on HSQLDB)
Reporter: Guenther Demetz
Priority: Critical
Attachments: TestExtraLazyCollectionWithInverseOwner.jar
When mapping a Set with an inverse owner (i.e. specifying the mappedBy attribute)
and annotating it with
@org.hibernate.annotations.LazyCollection(LazyCollectionOption.EXTRA)
the removal of single elements doesn't work as expected:
the PersistentSet still contains elements which have previously been removed.
{code}
// ...
assertTrue(a.assD.remove(d));    // removing element d from the PersistentSet; this assert passes
assertFalse(a.assD.contains(d)); // asserting the PersistentSet no longer contains d; FAILS!
{code}
The contains method implementation of PersistentSet properly calls flush()
before executing the select query, but the registered DelayedOperation
(enqueued earlier by the remove call) is for some reason not properly brought to execution when the collection is declared with an inverse owner.
This IMHO is a critical bug, as the behavior clearly violates the contract of
java.util.Set#remove(java.lang.Object), whose specification says:
"the set will not contain the specified element once the call returns".
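For reference, the cited contract holds for any conforming {{java.util.Set}}; a minimal plain-JDK illustration (no Hibernate involved):

```java
import java.util.HashSet;
import java.util.Set;

public class SetRemoveContract {
    public static void main(String[] args) {
        Set<String> set = new HashSet<>();
        set.add("d");
        System.out.println(set.remove("d"));   // true: d was present
        System.out.println(set.contains("d")); // false: the post-condition the report says PersistentSet violates
    }
}
```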
Furthermore, it could cause serious damage to companies using Hibernate:
imagine, for example, that a company cancels an order position and the canceled position is nevertheless subsequently delivered and invoiced to the customer.
Please see attached testcase for more details.
Thanks for attention.
[Hibernate-JIRA] Created: (HHH-3007) Unchanged persistent set gets marked dirty on session.merge()
by Lars Koedderitzsch (JIRA)
Unchanged persistent set gets marked dirty on session.merge()
-------------------------------------------------------------
Key: HHH-3007
URL: http://opensource.atlassian.com/projects/hibernate/browse/HHH-3007
Project: Hibernate3
Issue Type: Bug
Components: core
Affects Versions: 3.2.5
Reporter: Lars Koedderitzsch
Persistent sets are marked dirty on session.merge() even if there have been no changes to the collection.
This is especially painful when the collection is immutable, as it results in a "changed an immutable collection instance" exception on flush.
I tracked the behaviour down a bit and believe the problem is in CollectionType.replace().
Here the passed-in original PersistentSet is replaced by a plain HashSet in this line:
{code}
Object result = target == null || target == original ? instantiateResult( original ) : target;
{code}
The "result" object (HashSet) is then passed to the CollectionType.replaceElements() method (instead of the original PersistentSet).
In CollectionType.replaceElements() the code that clears the dirty state of the collection never executes, because the passed-in "original" collection is the HashSet described above and *not* the original PersistentSet.
This way the PersistentSet remains marked dirty.
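The identity mix-up can be illustrated without Hibernate (a sketch; the copy constructor stands in for {{instantiateResult()}}): the copied set has equal contents but is a different object, so any check keyed to reference identity with the original collection will not fire.

```java
import java.util.HashSet;
import java.util.Set;

public class IdentityVsEquality {
    public static void main(String[] args) {
        Set<String> original = new HashSet<>();
        original.add("a");
        Set<String> result = new HashSet<>(original); // stand-in for instantiateResult(original)
        System.out.println(result.equals(original));  // true: same contents
        System.out.println(result == original);       // false: different instance, so dirty-state
                                                      // handling tied to `original` is skipped
    }
}
```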
A workaround is to manually clear the dirty state of an immutable collection after merge but before flush.
[Hibernate-JIRA] Created: (HV-362) Including Annotation Processor in Eclipse results in java.lang.OutOfMemoryError: Java heap space
by Marcel Tietze (JIRA)
Including Annotation Processor in Eclipse results in java.lang.OutOfMemoryError: Java heap space
------------------------------------------------------------------------------------------------
Key: HV-362
URL: http://opensource.atlassian.com/projects/hibernate/browse/HV-362
Project: Hibernate Validator
Issue Type: Bug
Components: annotation-processor
Affects Versions: 4.1.0.Final
Environment: eclipse.buildId=I20100608-0911
java.version=1.6.0_15
java.vendor=Sun Microsystems Inc.
BootLoader constants: OS=win32, ARCH=x86, WS=win32, NL=de_DE
Framework arguments: -product org.eclipse.epp.package.jee.product
Command-line arguments: -os win32 -ws win32 -arch x86 -product org.eclipse.epp.package.jee.product
Reporter: Marcel Tietze
Assignee: Hardy Ferentschik
Priority: Minor
I added the annotation processor as described in http://docs.jboss.org/hibernate/stable/validator/reference/en-US/html/ch0... chapter 8.4.2.1 (Eclipse). After a while the build process stops and the following exception occurs:
{code}
An internal error occurred during: "Building Workspace".
java.lang.OutOfMemoryError: Java heap space
at org.eclipse.jdt.core.dom.ASTNode$NodeList.<init>(ASTNode.java:1112)
at org.eclipse.jdt.core.dom.Block.<init>(Block.java:70)
at org.eclipse.jdt.core.dom.ASTConverter.convert(ASTConverter.java:511)
at org.eclipse.jdt.core.dom.ASTConverter.buildBodyDeclarations(ASTConverter.java:180)
at org.eclipse.jdt.core.dom.ASTConverter.convert(ASTConverter.java:2709)
at org.eclipse.jdt.core.dom.ASTConverter.convert(ASTConverter.java:1266)
at org.eclipse.jdt.core.dom.CompilationUnitResolver.resolve(CompilationUnitResolver.java:876)
at org.eclipse.jdt.core.dom.CompilationUnitResolver.resolve(CompilationUnitResolver.java:577)
at org.eclipse.jdt.core.dom.ASTParser.createASTs(ASTParser.java:888)
at org.eclipse.jdt.apt.core.internal.env.BaseProcessorEnv.createASTs(BaseProcessorEnv.java:859)
at org.eclipse.jdt.apt.core.internal.env.BuildEnv.createASTs(BuildEnv.java:356)
at org.eclipse.jdt.apt.core.internal.env.AbstractCompilationEnv.newBuildEnv(AbstractCompilationEnv.java:111)
at org.eclipse.jdt.apt.core.internal.APTDispatchRunnable.build(APTDispatchRunnable.java:271)
at org.eclipse.jdt.apt.core.internal.APTDispatchRunnable.run(APTDispatchRunnable.java:217)
at org.eclipse.core.internal.resources.Workspace.run(Workspace.java:1975)
at org.eclipse.jdt.apt.core.internal.APTDispatchRunnable.runAPTDuringBuild(APTDispatchRunnable.java:142)
at org.eclipse.jdt.apt.core.internal.AptCompilationParticipant.processAnnotations(AptCompilationParticipant.java:193)
at org.eclipse.jdt.internal.core.builder.AbstractImageBuilder.processAnnotations(AbstractImageBuilder.java:627)
at org.eclipse.jdt.internal.core.builder.AbstractImageBuilder.compile(AbstractImageBuilder.java:338)
at org.eclipse.jdt.internal.core.builder.BatchImageBuilder.build(BatchImageBuilder.java:60)
at org.eclipse.jdt.internal.core.builder.JavaBuilder.buildAll(JavaBuilder.java:254)
at org.eclipse.jdt.internal.core.builder.JavaBuilder.build(JavaBuilder.java:178)
at org.eclipse.core.internal.events.BuildManager$2.run(BuildManager.java:629)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:172)
at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:203)
at org.eclipse.core.internal.events.BuildManager$1.run(BuildManager.java:255)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at org.eclipse.core.internal.events.BuildManager.basicBuild(BuildManager.java:258)
at org.eclipse.core.internal.events.BuildManager.basicBuildLoop(BuildManager.java:311)
at org.eclipse.core.internal.events.BuildManager.build(BuildManager.java:343)
at org.eclipse.core.internal.resources.Workspace.build(Workspace.java:344)
{code}
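A common first mitigation for heap exhaustion during Eclipse builds, offered here only as an assumption and not as a confirmed fix for this issue, is to raise the IDE heap in {{eclipse.ini}} (values illustrative):

```
-vmargs
-Xms256m
-Xmx1024m
```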
[Hibernate-JIRA] Created: (HSEARCH-625) Some errors triggered by Lucene are not caught by the ErrorHandler
by Emmanuel Bernard (JIRA)
Some errors triggered by Lucene are not caught by the ErrorHandler
-------------------------------------------------------------------
Key: HSEARCH-625
URL: http://opensource.atlassian.com/projects/hibernate/browse/HSEARCH-625
Project: Hibernate Search
Issue Type: Bug
Affects Versions: 3.3.0.Beta3
Reporter: Emmanuel Bernard
Assignee: Sanne Grinovero
Fix For: 3.3.0
I ran into them while working on HSEARCH-573.
You can reproduce by calling workspace.forceLockRelease(); unconditionally in PerDPQueueProcessor (instead of only when the exception is not a LockObtainFailedException)
and then running DoNotCloseOnLockTimeoutTest.
Exception in thread "Hibernate Search: indexwriter-1" java.lang.RuntimeException: after flush: fdx size mismatch: 9000 docs vs 0 length in bytes of _d.fdx file exists?=false
at org.apache.lucene.index.StoredFieldsWriter.closeDocStore(StoredFieldsWriter.java:97)
at org.apache.lucene.index.DocFieldProcessor.closeDocStore(DocFieldProcessor.java:51)
at org.apache.lucene.index.DocumentsWriter.closeDocStore(DocumentsWriter.java:417)
at org.apache.lucene.index.IndexWriter.flushDocStores(IndexWriter.java:1777)
at org.apache.lucene.index.IndexWriter.doFlushInternal(IndexWriter.java:3649)
at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3565)
at org.apache.lucene.index.IndexWriter._mergeInit(IndexWriter.java:4171)
at org.apache.lucene.index.IndexWriter.mergeInit(IndexWriter.java:4053)
at org.apache.lucene.index.ConcurrentMergeScheduler.merge(ConcurrentMergeScheduler.java:189)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2521)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2516)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2512)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3556)
at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:2001)
at org.hibernate.search.backend.impl.lucene.works.AddWorkDelegate.performWork(AddWorkDelegate.java:76)
at org.hibernate.search.backend.impl.batchlucene.DirectoryProviderWorkspace.doWorkInSync(DirectoryProviderWorkspace.java:95)
Exception in thread "Thread-3" java.lang.RuntimeException: after flush: fdx size mismatch: 1000 docs vs 0 length in bytes of _m.fdx file exists?=false
at org.apache.lucene.index.StoredFieldsWriter.closeDocStore(StoredFieldsWriter.java:97)
at org.apache.lucene.index.DocFieldProcessor.closeDocStore(DocFieldProcessor.java:51)
at org.apache.lucene.index.DocumentsWriter.closeDocStore(DocumentsWriter.java:417)
at org.apache.lucene.index.IndexWriter.flushDocStores(IndexWriter.java:1777)
at org.apache.lucene.index.IndexWriter.doFlushInternal(IndexWriter.java:3649)
at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3565)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3555)
at org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:3431)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3506)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3477)
at org.hibernate.search.backend.Workspace.commitIndexWriter(Workspace.java:173)
at org.hibernate.search.backend.impl.batchlucene.DirectoryProviderWorkspace.stopAndFlush(DirectoryProviderWorkspace.java:83)
at org.hibernate.search.backend.impl.batchlucene.LuceneBatchBackend.stopAndFlush(LuceneBatchBackend.java:96)
at org.hibernate.search.batchindexing.BatchCoordinator.run(BatchCoordinator.java:96)
at org.hibernate.search.impl.MassIndexerImpl.startAndWait(MassIndexerImpl.java:196)
at org.hibernate.search.test.batchindexing.DoNotCloseOnLockTimeoutTest$MassindexerWork.run(DoNotCloseOnLockTimeoutTest.java:90)
org.hibernate.search.SearchException: Exception while closing IndexWriter
at org.hibernate.search.backend.Workspace.closeIndexWriter(Workspace.java:195)
at org.hibernate.search.backend.impl.lucene.PerDPQueueProcessor.run(PerDPQueueProcessor.java:113)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:637)
Caused by: java.io.FileNotFoundException: /Users/manu/projects/notbackedup/git/search/hibernate-search/target/indextemp/org.hibernate.search.test.batchindexing.ConcurrentData/_0.cfs (No such file or directory)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput$Descriptor.<init>(SimpleFSDirectory.java:76)
at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.<init>(SimpleFSDirectory.java:97)
at org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.<init>(NIOFSDirectory.java:87)
at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:67)
at org.apache.lucene.index.CompoundFileReader.<init>(CompoundFileReader.java:67)
at org.apache.lucene.index.SegmentReader$CoreReaders.<init>(SegmentReader.java:114)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:590)
at org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:616)
at org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:591)
at org.apache.lucene.index.DocumentsWriter.applyDeletes(DocumentsWriter.java:997)
at org.apache.lucene.index.IndexWriter.applyDeletes(IndexWriter.java:4520)
at org.apache.lucene.index.IndexWriter.doFlushInternal(IndexWriter.java:3723)
at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3565)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3555)
at org.apache.lucene.index.IndexWriter.closeInternal(IndexWriter.java:1711)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1674)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1638)
at org.hibernate.search.backend.Workspace.closeIndexWriter(Workspace.java:191)
... 7 more
//We probably can't do much on this one
Exception in thread "Lucene Merge Thread #1" org.apache.lucene.index.MergePolicy$MergeException: org.apache.lucene.index.CorruptIndexException: doc counts differ for segment _c: fieldsReader shows 1 but segmentInfo shows 0
at org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:347)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:312)
Caused by: org.apache.lucene.index.CorruptIndexException: doc counts differ for segment _c: fieldsReader shows 1 but segmentInfo shows 0
at org.apache.lucene.index.SegmentReader$CoreReaders.openDocStores(SegmentReader.java:296)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:592)
at org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:616)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4309)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3965)
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:231)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:288)
at org.hibernate.search.backend.impl.batchlucene.DirectoryProviderWorkspace$AsyncIndexRunnable.run(DirectoryProviderWorkspace.java:143)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:637)