From smarlow at redhat.com Mon Jun 1 09:32:04 2015 From: smarlow at redhat.com (Scott Marlow) Date: Mon, 01 Jun 2015 09:32:04 -0400 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... In-Reply-To: <5568AE35.4010608@redhat.com> References: <556898D2.8010103@redhat.com> <55689A8C.5080304@redhat.com> <5568AE35.4010608@redhat.com> Message-ID: <556C5ED4.7060706@redhat.com> > > On 05/29/2015 02:03 PM, Steve Ebersole wrote: >> Scott, first please use a SNAPSHOT build as you'll need it for the HCANN >> fix I did, or you could just pull in the newest HCANN (5.0.0.Final) Should I use a pushed SNAPSHOT? I can build ORM master via "gradlew clean install", however, "gradlew PublishToMavenLocal" fails on "hibernate-osgi:publishMavenJavaPublicationToMavenLocal" with details: " What went wrong: Execution failed for task ':hibernate-osgi:publishMavenJavaPublicationToMavenLocal'. > Failed to publish publication 'mavenJava' to repository 'MavenLocal' > Invalid publication 'mavenJava': artifact file does not exist: '/mnt/ssd/work/hibernate5/hibernate-osgi/target/karafFeatures/hibernate-osgi-5.0.1-SNAPSHOT-karaf.xml' " From johara at redhat.com Mon Jun 1 09:42:49 2015 From: johara at redhat.com (John O'Hara) Date: Mon, 01 Jun 2015 14:42:49 +0100 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... In-Reply-To: <556C5ED4.7060706@redhat.com> References: <556898D2.8010103@redhat.com> <55689A8C.5080304@redhat.com> <5568AE35.4010608@redhat.com> <556C5ED4.7060706@redhat.com> Message-ID: <556C6159.3070202@redhat.com> Scott, To publish to your local Maven repo you can run; ./gradlew generateKarafFeatures publishToMavenLocal On 01/06/15 14:32, Scott Marlow wrote: >> On 05/29/2015 02:03 PM, Steve Ebersole wrote: >>> Scott, first please use a SNAPSHOT build as you'll need it for the HCANN >>> fix I did, or you could just pull in the newest HCANN (5.0.0.Final) > Should I use a pushed SNAPSHOT? > > I can build ORM master via "gradlew clean install", however, "gradlew > PublishToMavenLocal" fails on > "hibernate-osgi:publishMavenJavaPublicationToMavenLocal" with details: > > " > What went wrong: > Execution failed for task > ':hibernate-osgi:publishMavenJavaPublicationToMavenLocal'. > > Failed to publish publication 'mavenJava' to repository 'MavenLocal' > > Invalid publication 'mavenJava': artifact file does not exist: > '/mnt/ssd/work/hibernate5/hibernate-osgi/target/karafFeatures/hibernate-osgi-5.0.1-SNAPSHOT-karaf.xml' > " > > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev -- John O'Hara johara at redhat.com JBoss, by Red Hat Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom. Registered in UK and Wales under Company Registration No. 3798903 Directors: Michael Cunningham (USA), Charlie Peters (USA), Matt Parsons (USA) and Michael O'Neill (Ireland). 
From sanne at hibernate.org Mon Jun 1 10:58:44 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Mon, 1 Jun 2015 15:58:44 +0100 Subject: [hibernate-dev] HSearch + Tika bridge using Wildfly modules In-Reply-To: <586396569.9056717.1432956991219.JavaMail.zimbra@redhat.com> References: <1663839921.8674641.1432909533929.JavaMail.zimbra@redhat.com> <650952943.8677484.1432909719158.JavaMail.zimbra@redhat.com> <586396569.9056717.1432956991219.JavaMail.zimbra@redhat.com> Message-ID: Hi Brett, with that configuration, your application is depending on the Tika module but since module dependencies are not transitive, and Hibernate ORM (and thus Search) depend on your WAR, they don't get access to the Tika dependencies. I see two options: - you have your deployment-structure declare that you export the Tika module - we patch the hibernate-search-engine module to depend on an optional tika module (w/o including it) I think we should do both, the first to get you going until our improved modules are available, and the second as it's correct to state that we optionally benefit from it. https://hibernate.atlassian.net/browse/HSEARCH-1883 Is there some way I could easily fetch your definition of the tika modules to include that in our integration tests? Thanks, Sanne On 30 May 2015 at 04:36, Brett Meyer wrote: > Sanne, I might still be missing something. Artificer's war does not include any Hibernate ORM, Hibernate Search, or Tika jars. For Wildfly 8.2, the war's jboss-deployment-structure.xml includes: > > (includes tika-parsers-1.6.jar) > > (Also using the org.hibernate module, but that's implicitly added by Wildfly.) > > I understand what you're saying about the ORM classloader, but the above still didn't work. I'm admittedly a little fuzzy on WF module classloading, but I'm wondering if that's not actually making its way into the ORM classloader, even though my war is the persistence unit. > > However: > > What *does* work is adding '' directly in org.hibernate.search.engine's module.xml. > > Apologies if 1.) I'm missing something and/or 2.) that's expected. I can certainly tweak org.hibernate.search.engine's module.xml with our installer, but that's obviously less than desirable. Any idea what we might be missing that would allow us to get that to work from the app itself? > > Thanks for the help! > > ----- Original Message ----- >> From: "Sanne Grinovero" >> To: "Brett Meyer" >> Cc: "Hibernate.org" >> Sent: Friday, May 29, 2015 11:00:12 AM >> Subject: Re: HSearch + Tika bridge using Wildfly modules >> >> Hi Brett, >> we don't include all existing analysers and extensions within the >> WildFly modules. In particular the Apache Tika libraries have a huge >> amount of dependencies, you should choose the ones you need depending >> on what kind of media you intend to parse. >> >> Include any extension in your "application", we use the Hibernate ORM >> classloader to lookup for extensions so these should be discoverable >> if they are visible to the same classloader having your entities and >> other extensions. >> >> Sanne >> >> On 29 May 2015 at 15:28, Brett Meyer wrote: >> > Hey Sanne! Artificer has '> > services="export" />' defined in its jboss-deployment-structure >> > dependencies. But, when we try to use it, the following happens. 
>> > >> > Caused by: java.lang.ClassNotFoundException: org.apache.tika.parser.Parser >> > from [Module "org.hibernate.search.engine:main" from local module loader >> > @6cf76647 (finder: local module finder @665bf734 (roots: >> > /home/brmeyer/IdeaProjects/artificer/installer/target/wildfly-8.2.0.Final/modules,/home/brmeyer/IdeaProjects/artificer/installer/target/wildfly-8.2.0.Final/modules/system/layers/base))] >> > >> > One of our entities uses the built-in TikaBridge. I figured the search.orm >> > module would bring the necessary Tika jars in with it. Is there something >> > else we need to add? >>
From smarlow at redhat.com Mon Jun 1 11:05:49 2015 From: smarlow at redhat.com (Scott Marlow) Date: Mon, 01 Jun 2015 11:05:49 -0400 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... In-Reply-To: References: <556898D2.8010103@redhat.com> <55689A8C.5080304@redhat.com> <5568AE35.4010608@redhat.com> Message-ID: <556C74CD.5010501@redhat.com> On 05/29/2015 02:39 PM, Steve Ebersole wrote: > So: > 1) hibernate-infinispan seems to be able to see infinispan-core (which > is what defines as a dependency) > 2) hibernate-infinispan seems to not be able to see infinispan-commons > (which I would have to assume infinispan-core defines as a dependency) You're right, this is a WF module/classpath issue, as hibernate-infinispan does not have access to org.infinispan.commons. Not exactly sure why this wasn't a problem before ORM 5.x. Adding org.infinispan.commons to the hibernate-infinispan classpath fixes the org.jboss.as.test.integration.jpa.secondlevelcache.JPA2LCTestCase failure. Next we just have to find the WildFly bug that is causing Sanne problems with his Hibernate Search testing, which prevents HS from doing external WF testing. > > This sure seems like a problem in the WF module/classpath set up... > > > On Fri, May 29, 2015 at 1:21 PM Scott Marlow > wrote: > > > > On 05/29/2015 02:03 PM, Steve Ebersole wrote: > > Scott, first please use a SNAPSHOT build as you'll need it for > the HCANN > > fix I did, or you could just pull in the newest HCANN (5.0.0.Final) > > Will do. > > > > > As for the ClassNotFoundException, I really do not get this one. So, > > essentially: > > > > 1) hibernate-infinispan is able to access infinispan-core classes > > 2) hibernate-infinispan makes use of > > this org/infinispan/commons/util/CloseableIteratorSet class as > returned > > from classes contained in infinispan-core. I am not sure which jar > > holds org/infinispan/commons/util/CloseableIteratorSet. Anyone? > > infinispan-commons-7.2.1.Final.jar contains > org.infinispan.commons.util.CloseableIteratorSet > > > > 3) hibernate-infinispan is not able to access > > org/infinispan/commons/util/CloseableIteratorSet > > > > On Fri, May 29, 2015 at 11:57 AM, Scott Marlow > > > >> wrote: > > > > Also am using Infinispan 7.2.1.Final but noticed that Infinispan > > 7.2.2.Final is now in WildFly 10, so I'll sync my branch with > WF master > > to upgrade Infinispan. > > > > On 05/29/2015 12:50 PM, Scott Marlow wrote: > > > I ran part of the WildFly basic integration tests against the > > > > https://github.com/scottmarlow/wildfly/tree/jipijapa3_hibernate5 > > branch, > > > which includes the following Hibernate versions: > > > > > > ORM 5.0.0.CR1 > > > HCA 4.0.5.Final > > > HS 5.2.0.Final > > > > > > I am seeing the below errors. > > > > > > 1.
The Hibernate Search test > > > > > > (org.jboss.as.test.integration.hibernate.search.HibernateSearchJPATestCase) > > > is failing with an AbstractServiceMethodError > > http://pastebin.com/CzEgVp0L > > > > > > 2. In the > > > > > > org.jboss.as.test.integration.jpa.secondlevelcache.JPA2LCTestCase.testEvictEntityCache, > > > we are seeing a "java.lang.ClassNotFoundException: > > > org.infinispan.commons.util.CloseableIteratorSet from [Module > > > "org.hibernate.infinispan:main"http://pastie.org/10213943 > > > > > > Scott > > > _______________________________________________ > > > hibernate-dev mailing list > > > hibernate-dev at lists.jboss.org > > > > > > https://lists.jboss.org/mailman/listinfo/hibernate-dev > > > > > _______________________________________________ > > hibernate-dev mailing list > > hibernate-dev at lists.jboss.org > > > > > https://lists.jboss.org/mailman/listinfo/hibernate-dev > > > > > From steve at hibernate.org Mon Jun 1 11:24:16 2015 From: steve at hibernate.org (Steve Ebersole) Date: Mon, 01 Jun 2015 15:24:16 +0000 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... In-Reply-To: <556C5ED4.7060706@redhat.com> References: <556898D2.8010103@redhat.com> <55689A8C.5080304@redhat.com> <5568AE35.4010608@redhat.com> <556C5ED4.7060706@redhat.com> Message-ID: Oh, I'll need to make generation of that file a dependency for publishing... On Mon, Jun 1, 2015 at 8:32 AM Scott Marlow wrote: > > > > On 05/29/2015 02:03 PM, Steve Ebersole wrote: > >> Scott, first please use a SNAPSHOT build as you'll need it for the HCANN > >> fix I did, or you could just pull in the newest HCANN (5.0.0.Final) > > Should I use a pushed SNAPSHOT? > > I can build ORM master via "gradlew clean install", however, "gradlew > PublishToMavenLocal" fails on > "hibernate-osgi:publishMavenJavaPublicationToMavenLocal" with details: > > " > What went wrong: > Execution failed for task > ':hibernate-osgi:publishMavenJavaPublicationToMavenLocal'. > > Failed to publish publication 'mavenJava' to repository 'MavenLocal' > > Invalid publication 'mavenJava': artifact file does not exist: > > '/mnt/ssd/work/hibernate5/hibernate-osgi/target/karafFeatures/hibernate-osgi-5.0.1-SNAPSHOT-karaf.xml' > " > > From brmeyer at redhat.com Mon Jun 1 12:05:03 2015 From: brmeyer at redhat.com (Brett Meyer) Date: Mon, 1 Jun 2015 12:05:03 -0400 (EDT) Subject: [hibernate-dev] HSearch + Tika bridge using Wildfly modules In-Reply-To: References: <1663839921.8674641.1432909533929.JavaMail.zimbra@redhat.com> <650952943.8677484.1432909719158.JavaMail.zimbra@redhat.com> <586396569.9056717.1432956991219.JavaMail.zimbra@redhat.com> Message-ID: <1537092915.9912064.1433174703258.JavaMail.zimbra@redhat.com> Sanne, right, I tried that as well -- makes sense in theory. jboss-deployment-structure: However, I still hit the same error. The *only* way I can get it to work is to directly add the Tika module to org.hibernate.search.engine's module.xml. Think there could be a larger classloading issue? This is even more worrisome for EAP 6.4, where there is no Tika module. I'll need to include the jars in our war, and have Search pick those up. 
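For reference, the kind of mapping being discussed here — an entity property indexed through the built-in Tika bridge — looks roughly like the sketch below. This is a hypothetical entity for illustration only (not Artificer's actual code); it assumes the @TikaBridge annotation shipped with hibernate-search-engine and a String property holding a path to the binary content.

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Indexed;
import org.hibernate.search.annotations.TikaBridge;

// Hypothetical entity, for illustration only.
@Entity
@Indexed
public class ArchivedDocument {

    @Id
    @GeneratedValue
    private Long id;

    // A String property marked with @TikaBridge is treated as a file path:
    // at indexing time Hibernate Search asks Tika to parse the file and
    // feeds the extracted text into this Lucene field. This is the point at
    // which org.apache.tika.parser.Parser has to be visible to the
    // classloader used by hibernate-search-engine, which is what the
    // ClassNotFoundException in this thread is about.
    @Field
    @TikaBridge
    private String contentPath;

    // getters and setters omitted
}

It is the reflective instantiation of the Tika parser behind that annotation that fails when the Tika classes are not reachable from the module performing the lookup.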
----- Original Message ----- > From: "Sanne Grinovero" > To: "Brett Meyer" > Cc: "Hibernate.org" > Sent: Monday, June 1, 2015 10:58:44 AM > Subject: Re: HSearch + Tika bridge using Wildfly modules > > Hi Brett, > with that configuration, your application is depending on the Tika > module but since module dependencies are not transitive, and Hibernate > ORM (and thus Search) depend on your WAR, they don't get access to the > Tika dependencies. > > I see two options: > - you have your deployment-structure declare that you export the Tika module > - we patch the hibernate-search-engine module to depend on an > optional tika module (w/o including it) > > I think we should do both, the first to get you going until our > improved modules are available, and the second as it's correct to > state that we optionally benefit from it. > > https://hibernate.atlassian.net/browse/HSEARCH-1883 > Is there some way I could easily fetch your definition of the tika > modules to include that in our integration tests? > > Thanks, > Sanne > > On 30 May 2015 at 04:36, Brett Meyer wrote: > > Sanne, I might still be missing something. Artificer's war does not > > include any Hibernate ORM, Hibernate Search, or Tika jars. For Wildfly > > 8.2, the war's jboss-deployment-structure.xml includes: > > > > (includes tika-parsers-1.6.jar) > > > > (Also using the org.hibernate module, but that's implicitly added by > > Wildfly.) > > > > I understand what you're saying about the ORM classloader, but the above > > still didn't work. I'm admittedly a little fuzzy on WF module > > classloading, but I'm wondering if that's not actually making its way into > > the ORM classloader, even though my war is the persistence unit. > > > > However: > > > > What *does* work is adding '' > > directly in org.hibernate.search.engine's module.xml. > > > > Apologies if 1.) I'm missing something and/or 2.) that's expected. I can > > certainly tweak org.hibernate.search.engine's module.xml with our > > installer, but that's obviously less than desirable. Any idea what we > > might be missing that would allow us to get that to work from the app > > itself? > > > > Thanks for the help! > > > > ----- Original Message ----- > >> From: "Sanne Grinovero" > >> To: "Brett Meyer" > >> Cc: "Hibernate.org" > >> Sent: Friday, May 29, 2015 11:00:12 AM > >> Subject: Re: HSearch + Tika bridge using Wildfly modules > >> > >> Hi Brett, > >> we don't include all existing analysers and extensions within the > >> WildFly modules. In particular the Apache Tika libraries have a huge > >> amount of dependencies, you should choose the ones you need depending > >> on what kind of media you intend to parse. > >> > >> Include any extension in your "application", we use the Hibernate ORM > >> classloader to lookup for extensions so these should be discoverable > >> if they are visible to the same classloader having your entities and > >> other extensions. > >> > >> Sanne > >> > >> On 29 May 2015 at 15:28, Brett Meyer wrote: > >> > Hey Sanne! Artificer has ' >> > services="export" />' defined in its jboss-deployment-structure > >> > dependencies. But, when we try to use it, the following happens. 
> >> > > >> > Caused by: java.lang.ClassNotFoundException: > >> > org.apache.tika.parser.Parser > >> > from [Module "org.hibernate.search.engine:main" from local module loader > >> > @6cf76647 (finder: local module finder @665bf734 (roots: > >> > /home/brmeyer/IdeaProjects/artificer/installer/target/wildfly-8.2.0.Final/modules,/home/brmeyer/IdeaProjects/artificer/installer/target/wildfly-8.2.0.Final/modules/system/layers/base))] > >> > > >> > One of our entities uses the built-in TikaBridge. I figured the > >> > search.orm > >> > module would bring the necessary Tika jars in with it. Is there > >> > something > >> > else we need to add? > >> > From sanne at hibernate.org Mon Jun 1 12:33:50 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Mon, 1 Jun 2015 17:33:50 +0100 Subject: [hibernate-dev] HSearch + Tika bridge using Wildfly modules In-Reply-To: <1537092915.9912064.1433174703258.JavaMail.zimbra@redhat.com> References: <1663839921.8674641.1432909533929.JavaMail.zimbra@redhat.com> <650952943.8677484.1432909719158.JavaMail.zimbra@redhat.com> <586396569.9056717.1432956991219.JavaMail.zimbra@redhat.com> <1537092915.9912064.1433174703258.JavaMail.zimbra@redhat.com> Message-ID: On 1 June 2015 at 17:05, Brett Meyer wrote: > Sanne, right, I tried that as well -- makes sense in theory. jboss-deployment-structure: > > > > > > However, I still hit the same error. The *only* way I can get it to work is to directly add the Tika module to org.hibernate.search.engine's module.xml. Think there could be a larger classloading issue? It would surprise me. We delegate to the ORM classloader, and it's well tested to load dependencies from the user module. My experience with the jboss-deployment-structure XML file is more limited, I'd rather suspect the dependency definition is not done correctly? For example, I remember some users to mention on the forums that you'd need to specify elements carefully depending on your exact deployment structure: https://developer.jboss.org/thread/248888?tstart=0#917815 But if you have just a WAR, I wouldn't expect that.. happy to help debugging it? Could you share the exact stacktrace, or even better do you have test? > This is even more worrisome for EAP 6.4, where there is no Tika module. I'll need to include the jars in our war, and have Search pick those up. That's what we normally expect people to do, also the reason why we didn't have this optional dependency. There are many extension points, and we're not going to add a module dependency to "module name=user-extensions" .. But I see how it may be convenient to take advantage of Tika if it exists as a module. Thanks, Sanne > > ----- Original Message ----- >> From: "Sanne Grinovero" >> To: "Brett Meyer" >> Cc: "Hibernate.org" >> Sent: Monday, June 1, 2015 10:58:44 AM >> Subject: Re: HSearch + Tika bridge using Wildfly modules >> >> Hi Brett, >> with that configuration, your application is depending on the Tika >> module but since module dependencies are not transitive, and Hibernate >> ORM (and thus Search) depend on your WAR, they don't get access to the >> Tika dependencies. >> >> I see two options: >> - you have your deployment-structure declare that you export the Tika module >> - we patch the hibernate-search-engine module to depend on an >> optional tika module (w/o including it) >> >> I think we should do both, the first to get you going until our >> improved modules are available, and the second as it's correct to >> state that we optionally benefit from it. 
>> >> https://hibernate.atlassian.net/browse/HSEARCH-1883 >> Is there some way I could easily fetch your definition of the tika >> modules to include that in our integration tests? >> >> Thanks, >> Sanne >> >> On 30 May 2015 at 04:36, Brett Meyer wrote: >> > Sanne, I might still be missing something. Artificer's war does not >> > include any Hibernate ORM, Hibernate Search, or Tika jars. For Wildfly >> > 8.2, the war's jboss-deployment-structure.xml includes: >> > >> > (includes tika-parsers-1.6.jar) >> > >> > (Also using the org.hibernate module, but that's implicitly added by >> > Wildfly.) >> > >> > I understand what you're saying about the ORM classloader, but the above >> > still didn't work. I'm admittedly a little fuzzy on WF module >> > classloading, but I'm wondering if that's not actually making its way into >> > the ORM classloader, even though my war is the persistence unit. >> > >> > However: >> > >> > What *does* work is adding '' >> > directly in org.hibernate.search.engine's module.xml. >> > >> > Apologies if 1.) I'm missing something and/or 2.) that's expected. I can >> > certainly tweak org.hibernate.search.engine's module.xml with our >> > installer, but that's obviously less than desirable. Any idea what we >> > might be missing that would allow us to get that to work from the app >> > itself? >> > >> > Thanks for the help! >> > >> > ----- Original Message ----- >> >> From: "Sanne Grinovero" >> >> To: "Brett Meyer" >> >> Cc: "Hibernate.org" >> >> Sent: Friday, May 29, 2015 11:00:12 AM >> >> Subject: Re: HSearch + Tika bridge using Wildfly modules >> >> >> >> Hi Brett, >> >> we don't include all existing analysers and extensions within the >> >> WildFly modules. In particular the Apache Tika libraries have a huge >> >> amount of dependencies, you should choose the ones you need depending >> >> on what kind of media you intend to parse. >> >> >> >> Include any extension in your "application", we use the Hibernate ORM >> >> classloader to lookup for extensions so these should be discoverable >> >> if they are visible to the same classloader having your entities and >> >> other extensions. >> >> >> >> Sanne >> >> >> >> On 29 May 2015 at 15:28, Brett Meyer wrote: >> >> > Hey Sanne! Artificer has '> >> > services="export" />' defined in its jboss-deployment-structure >> >> > dependencies. But, when we try to use it, the following happens. >> >> > >> >> > Caused by: java.lang.ClassNotFoundException: >> >> > org.apache.tika.parser.Parser >> >> > from [Module "org.hibernate.search.engine:main" from local module loader >> >> > @6cf76647 (finder: local module finder @665bf734 (roots: >> >> > /home/brmeyer/IdeaProjects/artificer/installer/target/wildfly-8.2.0.Final/modules,/home/brmeyer/IdeaProjects/artificer/installer/target/wildfly-8.2.0.Final/modules/system/layers/base))] >> >> > >> >> > One of our entities uses the built-in TikaBridge. I figured the >> >> > search.orm >> >> > module would bring the necessary Tika jars in with it. Is there >> >> > something >> >> > else we need to add? >> >> >> From steve at hibernate.org Mon Jun 1 13:06:55 2015 From: steve at hibernate.org (Steve Ebersole) Date: Mon, 01 Jun 2015 17:06:55 +0000 Subject: [hibernate-dev] AnnotationBinder class loading Message-ID: HHH-9818[1] and HHH-9837[2] contains all the details. Essentially there is a very bad flaw in how hibernate-osgi is currently propagating class loading to mapping binding. Fixing this was missed in the 5.0 work. 
It only affects AnnotationBinder, but it happens to affect every single application that uses annotations because it happens to be the code that loads the entity Class and proxy interface Class references. Personally I feel that this warrants a CR2. What do y'all think? [1] https://hibernate.atlassian.net/browse/HHH-9818 [2] https://hibernate.atlassian.net/browse/HHH-9837 From sanne at hibernate.org Mon Jun 1 14:03:05 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Mon, 1 Jun 2015 19:03:05 +0100 Subject: [hibernate-dev] AnnotationBinder class loading In-Reply-To: References: Message-ID: +1 And my caching work is ready (in my opinion). Could that be included? The important thing it does, is to replace those key contracts with an interface, preparing ground for various optimisations. I didn't implement the actual optimisations, but that won't break any SPI.. On 1 June 2015 at 18:06, Steve Ebersole wrote: > HHH-9818[1] and HHH-9837[2] contains all the details. Essentially there is > a very bad flaw in how hibernate-osgi is currently propagating class > loading to mapping binding. Fixing this was missed in the 5.0 work. It > only affects AnnotationBinder, but it happens to affect every single > application that uses annotations because it happens to be the code that > loads the entity Class and proxy interface Class references. > > Personally I feel that this warrants a CR2. What do y'all think? > > [1] https://hibernate.atlassian.net/browse/HHH-9818 > [2] https://hibernate.atlassian.net/browse/HHH-9837 > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev From steve at hibernate.org Mon Jun 1 14:08:29 2015 From: steve at hibernate.org (Steve Ebersole) Date: Mon, 01 Jun 2015 18:08:29 +0000 Subject: [hibernate-dev] AnnotationBinder class loading In-Reply-To: References: Message-ID: Yes, I think we should. On Mon, Jun 1, 2015 at 1:03 PM Sanne Grinovero wrote: > +1 > > And my caching work is ready (in my opinion). Could that be included? > The important thing it does, is to replace those key contracts with an > interface, preparing ground for various optimisations. I didn't > implement the actual optimisations, but that won't break any SPI.. > > On 1 June 2015 at 18:06, Steve Ebersole wrote: > > HHH-9818[1] and HHH-9837[2] contains all the details. Essentially there > is > > a very bad flaw in how hibernate-osgi is currently propagating class > > loading to mapping binding. Fixing this was missed in the 5.0 work. It > > only affects AnnotationBinder, but it happens to affect every single > > application that uses annotations because it happens to be the code that > > loads the entity Class and proxy interface Class references. > > > > Personally I feel that this warrants a CR2. What do y'all think? > > > > [1] https://hibernate.atlassian.net/browse/HHH-9818 > > [2] https://hibernate.atlassian.net/browse/HHH-9837 > > _______________________________________________ > > hibernate-dev mailing list > > hibernate-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From sanne at hibernate.org Mon Jun 1 14:24:33 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Mon, 1 Jun 2015 19:24:33 +0100 Subject: [hibernate-dev] 2nd level cache tuning on WildFly Message-ID: Does someone have an example of how I could use different Infinispan Cache(s) for each of my Hibernate entities? 
The WildFly documentation doesn't get much into tuning: https://docs.jboss.org/author/display/WFLY9/JPA+Reference+Guide#JPAReferenceGuide-UsingtheInfinispansecondlevelcache I'd like to define Cache configuration in the WildFly configuration file and map them 1:1 to the cacheable entities. Which also brings up the question of why I should edit the root configuration for the sake of app-specific details.. ideally I'd want to add such a configuration snippet within my application deployment. thanks in advance for any pointer, Sanne
From sanne at hibernate.org Mon Jun 1 15:43:07 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Mon, 1 Jun 2015 20:43:07 +0100 Subject: [hibernate-dev] "master" branch for Hibernate Search is now aiming at 5.4 Message-ID: I'm in the process of releasing Hibernate Search 5.3.0.CR1. This time, we won't wait for the Final release for branching to the next development iteration: "master" is now version 5.4.0-SNAPSHOT and you're all welcome to start proposing pull requests meant for 5.4. Since 5.3.0.CR1 is a candidate release, we don't expect further changes other than some overdue documentation polishing, to be merged both in master and branch "5.3". Of course if regressions are reported we'll re-evaluate the plan, otherwise I will release 5.3.0.Final the 10th of June. So for the current work sprint please look at both issues targeting "5.4" and "5.3.0.Final". = https://hibernate.atlassian.net/issues/?jql=project%20%3D%2010061%20AND%20fixVersion%20%3D%205.4 = https://hibernate.atlassian.net/issues/?jql=project%20%3D%2010061%20AND%20fixVersion%20%3D%205.3.0.Final The main goal for 5.4 is to be compatible with Hibernate ORM 5: as soon as that's integrated we'll publish an Alpha.. hopefully just in a couple of days. Thanks, Sanne
From sanne at hibernate.org Mon Jun 1 15:56:53 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Mon, 1 Jun 2015 20:56:53 +0100 Subject: [hibernate-dev] Hibernate Search for Hibernate 5 - status In-Reply-To: References: Message-ID: Hi Guillaume, thanks for confirming this. So I've changed the course in Search, and moved all non-blocking tasks out of the 5.3 stream of Hibernate Search, tagging the current stable 5.3 as CR1. We'll now move on as 5.4 and ORM5 is the highest priority - as soon as that's merged I'll tag a first preview. We'll release a preview even if it means to disable WildFly modules temporarily. Mostly because we highly value your feedback :) As a hint to upgrade: remember that Hibernate Search 5.3 made some important changes to how Faceting is mapped. I'd suggest you to upgrade to 5.3.0.CR1 already, so that when the day comes (soon) to try ORM5 with your projects you'll not have to deal with changes relating to both Search and ORM aspects. Thanks! Sanne On 31 May 2015 at 10:35, Guillaume Smet wrote: > Hi Sanne, > > On Sun, May 31, 2015 at 2:47 AM, Sanne Grinovero > wrote: >> >> I don't think it's acceptable we withhold an Hibernate 5 compatible >> version of Hibernate Search for much longer. > > > FWIW, I'm waiting for this to test Hibernate 5 on our applications and > provide feedback from the field on ORM 5. All our applications are highly > dependent on Search. > > So, it would be nice to have an alpha of Search to test all this!
> > -- > Guillaume
From mih_vlad at yahoo.com Mon Jun 1 17:11:58 2015 From: mih_vlad at yahoo.com (Mihalcea Vlad) Date: Mon, 1 Jun 2015 21:11:58 +0000 (UTC) Subject: [hibernate-dev] Hibernate Metrics support Message-ID: <1867048415.2940602.1433193118530.JavaMail.yahoo@mail.yahoo.com> Hi Steve, I was thinking of having a Metrics gathering API for all sorts of database-related operations:
- connection acquiring/lease time
- connection wait time
- transaction durations
- SQL query logger - slow queries threshold
- number of queries per transaction threshold
Something similar to https://github.com/vladmihalcea/flexy-pool This will ease profiling a Hibernate application and we could have the hibernate-core define the integration hooks and a hibernate-metrics module to inject the metrics gathering components. This module could use Dropwizard Metrics, since it supports various Reservoir types and many reporting flavors (log, JMX, Graphite). Hibernate users will get an insight of what's going on in their application, so they can better understand what Hibernate does on their behalf. What do you think of this? Vlad Mihalcea
From sanne at hibernate.org Mon Jun 1 18:24:07 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Mon, 1 Jun 2015 23:24:07 +0100 Subject: [hibernate-dev] Hibernate Caching SPI change proposal Message-ID: Hello caching experts! At the last minute before Hibernate ORM 5, I'm proposing a change in the second level caching SPI. I've already implemented it, including patching both the Ehcache and Infinispan modules to satisfy the new requirement, so you don't have to do much but please have a look and hopefully approve: - https://github.com/hibernate/hibernate-orm/pull/974 The summary is:
- The CacheKey SPI is changed to an interface
- I'm adding a factory method to each Region to allow the cache implementor to produce its own key implementations
- Patched all current Region implementations to simply produce the old, proven key implementation.
- Changed all code in ORM which deals with cache access to exclusively use the new factory.
- All signatures using a "key" have been changed from Object to the respective contract
Since Hibernate ORM 5 is almost ready I'm only defining the API changes; the description on the PR should give some possible suggestions of optimisations which we could then play with. As a background: we had several tests showing that the allocation cost of the CacheKey implementation being "imposed" by the Hibernate core was quite high; this is a first step to improve that, to allow slimming of these but also possible further crazy ideas such as HHH-9780.
Thanks, Sanne From steve at hibernate.org Mon Jun 1 21:39:48 2015 From: steve at hibernate.org (Steve Ebersole) Date: Tue, 02 Jun 2015 01:39:48 +0000 Subject: [hibernate-dev] Hibernate Metrics support In-Reply-To: <1867048415.2940602.1433193118530.JavaMail.yahoo@mail.yahoo.com> References: <1867048415.2940602.1433193118530.JavaMail.yahoo@mail.yahoo.com> Message-ID: Hibernate already has hooks to implement all of these between org.hibernate.SessionEventListener and org.hibernate.resource.jdbc.spi.StatementInspector On Mon, Jun 1, 2015 at 4:12 PM Mihalcea Vlad wrote: > Hi Steve, > > I was thinking of having a Metrics gathering API for all sorts of > database-related operations: > > - connection acquiring/lease time > - connection wait time > - transaction durations > - SQL query logger > - slow queries threshold > - number of queries per transaction threshold > > Something similar to https://github.com/vladmihalcea/flexy-pool > > This will ease profiling a Hibernate application and we could have the > hibernate-core define the integration hooks and > a hibernate-metrics module to inject the metrics gathering components. > This module could use Dropwizard Metrics, since > it supports various Reservoir types and many reporting flavors (log, JMX, > Graphite). > > Hibernate users will get an insight of what's going on in their > application, so they can better understand what Hibernate does > on their behalf. > > What do you think of this? > > Vlad MIhalcea > > > From brmeyer at redhat.com Mon Jun 1 22:27:39 2015 From: brmeyer at redhat.com (Brett Meyer) Date: Mon, 1 Jun 2015 22:27:39 -0400 (EDT) Subject: [hibernate-dev] HSearch + Tika bridge using Wildfly modules In-Reply-To: References: <1663839921.8674641.1432909533929.JavaMail.zimbra@redhat.com> <650952943.8677484.1432909719158.JavaMail.zimbra@redhat.com> <586396569.9056717.1432956991219.JavaMail.zimbra@redhat.com> <1537092915.9912064.1433174703258.JavaMail.zimbra@redhat.com> Message-ID: <1235209134.10211711.1433212059806.JavaMail.zimbra@redhat.com> > It would surprise me. We delegate to the ORM classloader, and it's > well tested to load dependencies from the user module. My experience > with the jboss-deployment-structure XML file is more limited, I'd > rather suspect the dependency definition is not done correctly? > For example, I remember some users to mention on the forums that you'd > need to specify elements carefully depending on your > exact deployment structure: > https://developer.jboss.org/thread/248888?tstart=0#917815 > > But if you have just a WAR, I wouldn't expect that.. happy to help > debugging it? Could you share the exact stacktrace, or even better do > you have test? Right, since this is a WAR and not an EAR, the sub-deployment discussion wouldn't be applicable. Stack: https://gist.github.com/brmeyer/5bb39097d76d2e45d856 An isolated test case might be a bit tricky. But, I'm more than happy to package up this specific WAR and demo what happens... > That's what we normally expect people to do, also the reason why we > didn't have this optional dependency. If I do not deploy the Tika module and instead explicitly include tika-core and tika-parser JARs in my WAR, the same error occurs. Ditto if my specific JAR *provides* the Tika dependencies. If what you're saying is true, where Search is leaning on the WAR and Hibernate's classloaders, I'd assume one of those two would work, right? Thanks! 
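Coming back to the metrics thread earlier in this digest: the two hooks Steve points at are enough for a first rough cut of timing data. Below is a minimal sketch assuming only the public org.hibernate.BaseSessionEventListener and org.hibernate.resource.jdbc.spi.StatementInspector contracts; the class names and the println reporting are made up for illustration, and a real integration would hand the numbers to whatever metrics library is in use.

import org.hibernate.BaseSessionEventListener;
import org.hibernate.resource.jdbc.spi.StatementInspector;

// Times JDBC statement execution and connection acquisition for one Session.
// BaseSessionEventListener is a no-op adapter, so only the callbacks of
// interest need to be overridden.
public class TimingSessionEventListener extends BaseSessionEventListener {

    private long jdbcExecuteStart;
    private long totalJdbcExecuteNanos;
    private long connectionAcquireStart;
    private long totalConnectionAcquireNanos;

    @Override
    public void jdbcExecuteStatementStart() {
        jdbcExecuteStart = System.nanoTime();
    }

    @Override
    public void jdbcExecuteStatementEnd() {
        totalJdbcExecuteNanos += System.nanoTime() - jdbcExecuteStart;
    }

    @Override
    public void jdbcConnectionAcquisitionStart() {
        connectionAcquireStart = System.nanoTime();
    }

    @Override
    public void jdbcConnectionAcquisitionEnd() {
        totalConnectionAcquireNanos += System.nanoTime() - connectionAcquireStart;
    }

    @Override
    public void end() {
        // Called when the Session ends: report the collected totals.
        System.out.printf("JDBC execution: %d ns, connection acquisition: %d ns%n",
                totalJdbcExecuteNanos, totalConnectionAcquireNanos);
    }
}

// Sees every SQL statement before it is sent to the driver; returning the
// string unchanged leaves the statement as-is, so this is a pure observer.
class SqlLoggingStatementInspector implements StatementInspector {
    @Override
    public String inspect(String sql) {
        System.out.println("SQL: " + sql);
        return sql;
    }
}

The listener can be attached per Session through SessionBuilder#eventListeners; if memory serves there are also settings (hibernate.session.events.auto and hibernate.session_factory.statement_inspector) to register both globally, but those names should be double-checked against AvailableSettings for the version in use.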
From sanne at hibernate.org Tue Jun 2 04:48:56 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Tue, 2 Jun 2015 09:48:56 +0100 Subject: [hibernate-dev] HSearch + Tika bridge using Wildfly modules In-Reply-To: <1235209134.10211711.1433212059806.JavaMail.zimbra@redhat.com> References: <1663839921.8674641.1432909533929.JavaMail.zimbra@redhat.com> <650952943.8677484.1432909719158.JavaMail.zimbra@redhat.com> <586396569.9056717.1432956991219.JavaMail.zimbra@redhat.com> <1537092915.9912064.1433174703258.JavaMail.zimbra@redhat.com> <1235209134.10211711.1433212059806.JavaMail.zimbra@redhat.com> Message-ID: Hi Brett, thanks for the stacktrace, that clarified a lot: looks like a problem with our classloader strategy after all. In BridgeFactory.doExtractType:592 we invoke "Class#newInstance()"; the Class itself is already found but then it fails to initialize it as this isn't happening within the ORM classloader which we normally use to create extension points via reflection. I've created HSEARCH-1885 for this, I think I have enough elements to try reproduce it in our tests.. if not I'll ask your help! While I'll want to add a test for this, I'm not sure this still applies to latest versions. If it's not too complex to try switching versions, could you try that? You should upgrade anyway ;) Thanks, Sanne On 2 June 2015 at 03:27, Brett Meyer wrote: >> It would surprise me. We delegate to the ORM classloader, and it's >> well tested to load dependencies from the user module. My experience >> with the jboss-deployment-structure XML file is more limited, I'd >> rather suspect the dependency definition is not done correctly? >> For example, I remember some users to mention on the forums that you'd >> need to specify elements carefully depending on your >> exact deployment structure: >> https://developer.jboss.org/thread/248888?tstart=0#917815 >> >> But if you have just a WAR, I wouldn't expect that.. happy to help >> debugging it? Could you share the exact stacktrace, or even better do >> you have test? > > Right, since this is a WAR and not an EAR, the sub-deployment discussion wouldn't be applicable. > > Stack: https://gist.github.com/brmeyer/5bb39097d76d2e45d856 > > An isolated test case might be a bit tricky. But, I'm more than happy to package up this specific WAR and demo what happens... > >> That's what we normally expect people to do, also the reason why we >> didn't have this optional dependency. > > If I do not deploy the Tika module and instead explicitly include tika-core and tika-parser JARs in my WAR, the same error occurs. Ditto if my specific JAR *provides* the Tika dependencies. If what you're saying is true, where Search is leaning on the WAR and Hibernate's classloaders, I'd assume one of those two would work, right? > > Thanks! From sanne at hibernate.org Tue Jun 2 04:52:31 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Tue, 2 Jun 2015 09:52:31 +0100 Subject: [hibernate-dev] HSearch + Tika bridge using Wildfly modules In-Reply-To: References: <1663839921.8674641.1432909533929.JavaMail.zimbra@redhat.com> <650952943.8677484.1432909719158.JavaMail.zimbra@redhat.com> <586396569.9056717.1432956991219.JavaMail.zimbra@redhat.com> <1537092915.9912064.1433174703258.JavaMail.zimbra@redhat.com> <1235209134.10211711.1433212059806.JavaMail.zimbra@redhat.com> Message-ID: On 2 June 2015 at 09:48, Sanne Grinovero wrote: > Hi Brett, > thanks for the stacktrace, that clarified a lot: looks like a problem > with our classloader strategy after all. 
> > In BridgeFactory.doExtractType:592 we invoke "Class#newInstance()"; > the Class itself is already found but then it fails to initialize it > as this isn't happening within the ORM classloader which we normally > use to create extension points via reflection. > > I've created HSEARCH-1885 for this, I think I have enough elements to > try reproduce it in our tests.. if not I'll ask your help! > > While I'll want to add a test for this, I'm not sure this still > applies to latest versions. If it's not too complex to try switching > versions, could you try that? You should upgrade anyway ;) Actually: while I still think you should upgrade, it looks like the issue would apply on latest as well. Sanne > > Thanks, > Sanne > > On 2 June 2015 at 03:27, Brett Meyer wrote: >>> It would surprise me. We delegate to the ORM classloader, and it's >>> well tested to load dependencies from the user module. My experience >>> with the jboss-deployment-structure XML file is more limited, I'd >>> rather suspect the dependency definition is not done correctly? >>> For example, I remember some users to mention on the forums that you'd >>> need to specify elements carefully depending on your >>> exact deployment structure: >>> https://developer.jboss.org/thread/248888?tstart=0#917815 >>> >>> But if you have just a WAR, I wouldn't expect that.. happy to help >>> debugging it? Could you share the exact stacktrace, or even better do >>> you have test? >> >> Right, since this is a WAR and not an EAR, the sub-deployment discussion wouldn't be applicable. >> >> Stack: https://gist.github.com/brmeyer/5bb39097d76d2e45d856 >> >> An isolated test case might be a bit tricky. But, I'm more than happy to package up this specific WAR and demo what happens... >> >>> That's what we normally expect people to do, also the reason why we >>> didn't have this optional dependency. >> >> If I do not deploy the Tika module and instead explicitly include tika-core and tika-parser JARs in my WAR, the same error occurs. Ditto if my specific JAR *provides* the Tika dependencies. If what you're saying is true, where Search is leaning on the WAR and Hibernate's classloaders, I'd assume one of those two would work, right? >> >> Thanks! From sanne at hibernate.org Tue Jun 2 09:45:17 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Tue, 2 Jun 2015 14:45:17 +0100 Subject: [hibernate-dev] Hibernate Search 5.3.0.CR1 released! Message-ID: The (last?) candidate release for Hibernate Search version 5.3 is now available: http://in.relation.to/Bloggers/TheNewFacetingEngineGetsCloserHibernateSearch530CR1Released Regards, Sanne From steve at hibernate.org Tue Jun 2 10:11:05 2015 From: steve at hibernate.org (Steve Ebersole) Date: Tue, 02 Jun 2015 14:11:05 +0000 Subject: [hibernate-dev] Hibernate Metrics support In-Reply-To: <16554972.3202169.1433226195252.JavaMail.yahoo@mail.yahoo.com> References: <16554972.3202169.1433226195252.JavaMail.yahoo@mail.yahoo.com> Message-ID: start() I assume is supposed to indicate Session start? That is a possibility. transactionStart() is not. We do not always know when a transaction starts. On Tue, Jun 2, 2015 at 1:23 AM Mihalcea Vlad wrote: > Thanks. The second one is on the master branch and the 5.0 code-base. > Can the SessionEventListener have two more methods: > > public void start(); > public void transactionStart(); > > So we can monitor how long the Session (start() and end() pair) and > Transactions take (transactionStart() and transactionCompletion()). 
> > Vlad > > > > > On Tuesday, June 2, 2015 4:39 AM, Steve Ebersole > wrote: > > > Hibernate already has hooks to implement all of these > between org.hibernate.SessionEventListener > and org.hibernate.resource.jdbc.spi.StatementInspector > > > On Mon, Jun 1, 2015 at 4:12 PM Mihalcea Vlad wrote: > > Hi Steve, > > I was thinking of having a Metrics gathering API for all sorts of > database-related operations: > > - connection acquiring/lease time > - connection wait time > - transaction durations > - SQL query logger > - slow queries threshold > - number of queries per transaction threshold > > Something similar to https://github.com/vladmihalcea/flexy-pool > > This will ease profiling a Hibernate application and we could have the > hibernate-core define the integration hooks and > a hibernate-metrics module to inject the metrics gathering components. > This module could use Dropwizard Metrics, since > it supports various Reservoir types and many reporting flavors (log, JMX, > Graphite). > > Hibernate users will get an insight of what's going on in their > application, so they can better understand what Hibernate does > on their behalf. > > What do you think of this? > > Vlad MIhalcea > > > > > From brmeyer at redhat.com Tue Jun 2 10:39:28 2015 From: brmeyer at redhat.com (Brett Meyer) Date: Tue, 2 Jun 2015 10:39:28 -0400 (EDT) Subject: [hibernate-dev] HSearch + Tika bridge using Wildfly modules In-Reply-To: References: <1663839921.8674641.1432909533929.JavaMail.zimbra@redhat.com> <586396569.9056717.1432956991219.JavaMail.zimbra@redhat.com> <1537092915.9912064.1433174703258.JavaMail.zimbra@redhat.com> <1235209134.10211711.1433212059806.JavaMail.zimbra@redhat.com> Message-ID: <1707281390.10524870.1433255968013.JavaMail.zimbra@redhat.com> Sanne, Artificer leans entirely on the ORM/Search modules available in Wildfly and EAP. I can probably upgrade Search upstream, but I'd still need a workaround for EAP 6.4. It sounds like using a Tika module and having Search's module explicitly import it is the only option, right? ----- Original Message ----- > From: "Sanne Grinovero" > To: "Brett Meyer" > Cc: "Hibernate.org" > Sent: Tuesday, June 2, 2015 4:52:31 AM > Subject: Re: HSearch + Tika bridge using Wildfly modules > > On 2 June 2015 at 09:48, Sanne Grinovero wrote: > > Hi Brett, > > thanks for the stacktrace, that clarified a lot: looks like a problem > > with our classloader strategy after all. > > > > In BridgeFactory.doExtractType:592 we invoke "Class#newInstance()"; > > the Class itself is already found but then it fails to initialize it > > as this isn't happening within the ORM classloader which we normally > > use to create extension points via reflection. > > > > I've created HSEARCH-1885 for this, I think I have enough elements to > > try reproduce it in our tests.. if not I'll ask your help! > > > > While I'll want to add a test for this, I'm not sure this still > > applies to latest versions. If it's not too complex to try switching > > versions, could you try that? You should upgrade anyway ;) > > Actually: while I still think you should upgrade, it looks like the > issue would apply on latest as well. > > Sanne > > > > > Thanks, > > Sanne > > > > On 2 June 2015 at 03:27, Brett Meyer wrote: > >>> It would surprise me. We delegate to the ORM classloader, and it's > >>> well tested to load dependencies from the user module. My experience > >>> with the jboss-deployment-structure XML file is more limited, I'd > >>> rather suspect the dependency definition is not done correctly? 
> >>> For example, I remember some users to mention on the forums that you'd > >>> need to specify elements carefully depending on your > >>> exact deployment structure: > >>> https://developer.jboss.org/thread/248888?tstart=0#917815 > >>> > >>> But if you have just a WAR, I wouldn't expect that.. happy to help > >>> debugging it? Could you share the exact stacktrace, or even better do > >>> you have test? > >> > >> Right, since this is a WAR and not an EAR, the sub-deployment discussion > >> wouldn't be applicable. > >> > >> Stack: https://gist.github.com/brmeyer/5bb39097d76d2e45d856 > >> > >> An isolated test case might be a bit tricky. But, I'm more than happy to > >> package up this specific WAR and demo what happens... > >> > >>> That's what we normally expect people to do, also the reason why we > >>> didn't have this optional dependency. > >> > >> If I do not deploy the Tika module and instead explicitly include > >> tika-core and tika-parser JARs in my WAR, the same error occurs. Ditto > >> if my specific JAR *provides* the Tika dependencies. If what you're > >> saying is true, where Search is leaning on the WAR and Hibernate's > >> classloaders, I'd assume one of those two would work, right? > >> > >> Thanks! > From davide at hibernate.org Tue Jun 2 10:52:57 2015 From: davide at hibernate.org (Davide D'Alto) Date: Tue, 2 Jun 2015 15:52:57 +0100 Subject: [hibernate-dev] Hibernate OGM 4.2 Final is out Message-ID: Hi, after several months of hard work, I'm happy to announce the next final release of Hibernate OGM: 4.2. Compared to 4.1.Final, this version includes: - API for retrieving all executed and failed datastore operations, - preview for Apache Cassandra support, - Fongo support, - new built-in types for boolean mapping, - Support for MongoDB 3 and MongoDB replica sets - HQL support improvements and bug fixes, - bug fixes related to the mapping of embedded properties, - at least JDK 7 is required. You can find all the details in the blog post [1]. Many thanks to all the people that helped us make this release possible. Cheers, Davide [1] http://in.relation.to/Bloggers/HibernateOGM42FinalIsOut From steve at hibernate.org Tue Jun 2 22:37:08 2015 From: steve at hibernate.org (Steve Ebersole) Date: Wed, 03 Jun 2015 02:37:08 +0000 Subject: [hibernate-dev] Envers ReflectionTools Message-ID: I am needing to change how "property access" is handled (as in the org.hibernate.property package). I have no idea how to fit that into Envers and specifically into its ReflectionTools class. The problems boil down to org.hibernate.envers.internal.tools.ReflectionTools#getAccessor and all the uses of it. Basically when resolving a "property access strategy" I need a ServiceRegistry for class loading of custom strategies. I am lost in hooking that into Envers. But then I also started thinking.. doesn't Envers always just use the Map entity mnde and access strategy? And if so, why is it trying to resolve a named access strategy? From steve at hibernate.org Wed Jun 3 08:48:54 2015 From: steve at hibernate.org (Steve Ebersole) Date: Wed, 03 Jun 2015 12:48:54 +0000 Subject: [hibernate-dev] Hibernate Metrics support In-Reply-To: <360320749.4048936.1433316747452.JavaMail.yahoo@mail.yahoo.com> References: <360320749.4048936.1433316747452.JavaMail.yahoo@mail.yahoo.com> Message-ID: Please keep these discussions on list. Thanks :) We do not always know when JTA transactions have started if we are not the ones starting them. transactionStart() isn't going to happen. 
I am ok with discussing start() as an option. On Wed, Jun 3, 2015, 2:32 AM Mihalcea Vlad wrote: > I tried to match the current existing methods, to help one match the pairs. > > But "org.hibernate.engine.transaction.spi.AbstractTransactionImpl" has a > doBegin method. > Isn't it called when the transaction starts? > > > > > > On Tuesday, June 2, 2015 5:11 PM, Steve Ebersole > wrote: > > > start() I assume is supposed to indicate Session start? That is a > possibility. > > transactionStart() is not. We do not always know when a transaction > starts. > > On Tue, Jun 2, 2015 at 1:23 AM Mihalcea Vlad wrote: > > Thanks. The second one is on the master branch and the 5.0 code-base. > Can the SessionEventListener have two more methods: > > public void start(); > public void transactionStart(); > > So we can monitor how long the Session (start() and end() pair) and > Transactions take (transactionStart() and transactionCompletion()). > > Vlad > > > > > On Tuesday, June 2, 2015 4:39 AM, Steve Ebersole > wrote: > > > Hibernate already has hooks to implement all of these > between org.hibernate.SessionEventListener > and org.hibernate.resource.jdbc.spi.StatementInspector > > > On Mon, Jun 1, 2015 at 4:12 PM Mihalcea Vlad wrote: > > Hi Steve, > > I was thinking of having a Metrics gathering API for all sorts of > database-related operations: > > - connection acquiring/lease time > - connection wait time > - transaction durations > - SQL query logger > - slow queries threshold > - number of queries per transaction threshold > > Something similar to https://github.com/vladmihalcea/flexy-pool > > This will ease profiling a Hibernate application and we could have the > hibernate-core define the integration hooks and > a hibernate-metrics module to inject the metrics gathering components. > This module could use Dropwizard Metrics, since > it supports various Reservoir types and many reporting flavors (log, JMX, > Graphite). > > Hibernate users will get an insight of what's going on in their > application, so they can better understand what Hibernate does > on their behalf. > > What do you think of this? > > Vlad MIhalcea > > > > > > > From steve at hibernate.org Wed Jun 3 10:00:38 2015 From: steve at hibernate.org (Steve Ebersole) Date: Wed, 03 Jun 2015 14:00:38 +0000 Subject: [hibernate-dev] PropertyAccessor -> PropertyAccessStrategy and PropertyAccess Message-ID: I hinted at this in the 'Envers ReflectionTools' email, but wanted to follow up with some details. Part of the work to move away from the last bits of TCCL-based classloading was completely redesigning PropertyAccessorFactory/PropertyAccessor (work which is long overdue anyway). Just a heads up that this is no longer a straight-forward migration item. I took advantage of the opportunity to apply the package split here. I also took this as a chance to add another strategy allowing a mix of Field and Method based access. We talk about this strategy in documentation and other places, so it always made sense to me to provide it. At the moment however I am looking for names for this in terms of specifying it in mappings/annotations. . "mixed"? From steve at hibernate.org Wed Jun 3 10:24:56 2015 From: steve at hibernate.org (Steve Ebersole) Date: Wed, 03 Jun 2015 14:24:56 +0000 Subject: [hibernate-dev] PropertyAccessor -> PropertyAccessStrategy and PropertyAccess In-Reply-To: References: Message-ID: I forgot to mention that this is largely still incubating. 
I realized that overall we need to re-think how we do class loading and reflection (in terms of field/method discovery). I do not mean just this specific part. I mean throughout bootstrapping. This view might change as we redesign annotation binding. This discussion is probably worth its own subject, so I will follow up. On Wed, Jun 3, 2015 at 9:00 AM Steve Ebersole wrote: > I hinted at this in the 'Envers ReflectionTools' email, but wanted to > follow up with some details. Part of the work to move away from the last > bits of TCCL-based classloading was completely > redesigning PropertyAccessorFactory/PropertyAccessor (work which is long > overdue anyway). Just a heads up that this is no longer a straight-forward > migration item. > > I took advantage of the opportunity to apply the package split here. > > I also took this as a chance to add another strategy allowing a mix of > Field and Method based access. We talk about this strategy in > documentation and other places, so it always made sense to me to provide > it. At the moment however I am looking for names for this in terms of > specifying it in mappings/annotations. . > "mixed"? >
From adam at warski.org Wed Jun 3 15:40:50 2015 From: adam at warski.org (Adam Warski) Date: Wed, 3 Jun 2015 21:40:50 +0200 Subject: [hibernate-dev] Envers ReflectionTools In-Reply-To: References: Message-ID: > On 03 Jun 2015, at 04:37, Steve Ebersole wrote: > > I am needing to change how "property access" is handled (as in the > org.hibernate.property package). I have no idea how to fit that into > Envers and specifically into its ReflectionTools class. The problems boil > down to org.hibernate.envers.internal.tools.ReflectionTools#getAccessor and > all the uses of it. Basically when resolving a "property access strategy" > I need a ServiceRegistry for class loading of custom strategies. I am lost > in hooking that into Envers. > > But then I also started thinking.. doesn't Envers always just use the Map > entity mnde and access strategy? And if so, why is it trying to resolve a > named access strategy? To write audit data - yes, but as far as I remember the different access modes were to *read* data from the user entities (either from getters or fields). Adam -- Adam Warski http://twitter.com/#!/adamwarski http://www.softwaremill.com http://www.warski.org
From sanne at hibernate.org Thu Jun 4 13:02:34 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Thu, 4 Jun 2015 18:02:34 +0100 Subject: [hibernate-dev] Hibernate Search for Hibernate 5 - status In-Reply-To: References: Message-ID: The necessary changes to be compatible with Hibernate ORM 5.0.0.CR1 are now merged, I'll release Hibernate Search 5.4.0.Alpha1 tomorrow. For this release only, we'll not publish a set of modules to overlay on WildFly. Sanne On 1 June 2015 at 20:56, Sanne Grinovero wrote: > Hi Guillaume, > thanks for confirming this. So I've changed the course in Search, and > moved all non-blocking tasks out of the 5.3 stream of Hibernate > Search, tagging the current stable 5.3 as CR1. We'll now move on as > 5.4 and ORM5 is the highest priority - as soon as that's merged I'll > tag a first preview. We'll release a preview even if it means to > disable WildFly modules temporarily. > > Mostly because we highly value your feedback :) > > As a hint to upgrade: remember that Hibernate Search 5.3 made some > important changes to how Faceting is mapped.
I'd suggest you to > upgrade to 5.3.0.CR1 already, so that when the day comes (soon) to try > ORM5 with your projects you'll not have to deal with changes relating > with to both Search and ORM aspects. > > Thanks! > Sanne > > > On 31 May 2015 at 10:35, Guillaume Smet wrote: >> Hi Sanne, >> >> On Sun, May 31, 2015 at 2:47 AM, Sanne Grinovero >> wrote: >>> >>> I don't think it's acceptable we withhold an Hibernate 5 compatible >>> version of Hibernate Search for much longer. >> >> >> FWIW, I'm waiting for this to test Hibernate 5 on our applications and >> provide feedback from the field on ORM 5. All our applications are highly >> dependent on Search. >> >> So, it would be nice to have an alpha of Search to test all this! >> >> -- >> Guillaume From sanne at hibernate.org Thu Jun 4 18:47:22 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Thu, 4 Jun 2015 23:47:22 +0100 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... In-Reply-To: <5568A16D.5090201@redhat.com> References: <556898D2.8010103@redhat.com> <5568A16D.5090201@redhat.com> Message-ID: On 29 May 2015 at 18:27, Scott Marlow wrote: > > > On 05/29/2015 01:05 PM, Sanne Grinovero wrote: >> >> Thanks Scott! >> >> 1. this error is expected: HS 5.2 is not compatible with ORM 5. >> We'll need a compatible WildFly version to release a compatible >> version, or alternatively know how to get the tests to run w/o the >> jipijapa patch as I was trying ;-) > > > In the interest of getting ORM 5 into WildFly 10 before HS is upgraded, we > could disable > org.jboss.as.test.integration.hibernate.search.HibernateSearchJPATestCase > and create a blocking jira for WF10 assigned to you, so you can either > enable the HibernateSearchJPATestCase test or remove Search from WildFly 10 > as you mention below (as a possible option). Please let me know how you > want me to proceed. That won't be necessary, as a compatible release is now available: update Hibernate Search to version 5.4.0.Alpha1 when you upgrade Hibernate ORM. (don't upgrade HS w/o ORM to 5: it's required for this version of Hibernate Search) Thanks! Sanne From gbadner at redhat.com Thu Jun 4 20:00:16 2015 From: gbadner at redhat.com (Gail Badner) Date: Thu, 4 Jun 2015 20:00:16 -0400 (EDT) Subject: [hibernate-dev] How to run master unit test in Intellij with non-default DB In-Reply-To: <1220955480.10395065.1433462057667.JavaMail.zimbra@redhat.com> Message-ID: <530523157.10395736.1433462416498.JavaMail.zimbra@redhat.com> For 4.3 and before, when running a unit test in Intellij using a non-default DB, I would simply add the JDBC jar as a module dependency and then add the hibernate-specific properties (e.g., for dialect, etc) as VM options in the Run/Debug configuration. This doesn't work for master because the added dependency is not getting picked up. Is there some other way to do this without messing with build.gradle or libraries.gradle? 
Thanks, Gail From steve at hibernate.org Thu Jun 4 21:46:24 2015 From: steve at hibernate.org (Steve Ebersole) Date: Fri, 05 Jun 2015 01:46:24 +0000 Subject: [hibernate-dev] How to run master unit test in Intellij with non-default DB In-Reply-To: <530523157.10395736.1433462416498.JavaMail.zimbra@redhat.com> References: <1220955480.10395065.1433462057667.JavaMail.zimbra@redhat.com> <530523157.10395736.1433462416498.JavaMail.zimbra@redhat.com> Message-ID: I would assume you are using the "Run with Gradle" stuff in IntelliJ rather than the normal "Run it in IntelliJ" stuff On Thu, Jun 4, 2015 at 7:01 PM Gail Badner wrote: > For 4.3 and before, when running a unit test in Intellij using a > non-default DB, I would simply add the JDBC jar as a module dependency and > then add the hibernate-specific properties (e.g., for dialect, etc) as VM > options in the Run/Debug configuration. > > This doesn't work for master because the added dependency is not getting > picked up. > > Is there some other way to do this without messing with build.gradle or > libraries.gradle? > > Thanks, > Gail > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From gbadner at redhat.com Thu Jun 4 23:41:41 2015 From: gbadner at redhat.com (Gail Badner) Date: Thu, 4 Jun 2015 23:41:41 -0400 (EDT) Subject: [hibernate-dev] How to run master unit test in Intellij with non-default DB In-Reply-To: References: <1220955480.10395065.1433462057667.JavaMail.zimbra@redhat.com> <530523157.10395736.1433462416498.JavaMail.zimbra@redhat.com> Message-ID: <421361318.10531603.1433475701388.JavaMail.zimbra@redhat.com> I imported the project as a Gradle project and the Run/Debug configuration is listed under "Gradle". Gradle project: /home/gbadner/git/hibernate-orm-HHH-redo-again/hibernate-entitymanager Tasks: cleanTest test VM options: -Dhibernate.jdbc.use_get_generated_keys=false -Dhibernate.connection.password=... -Dhibernate41.dialect=org.hibernate.dialect.DB2Dialect -Dhibernate.jdbc.use_streams_for_binary=false -Dhibernate.connection.username=... -Dhibernate.connection.driver_class=com.ibm.db2.jcc.DB2Driver -Dhibernate.dialect=org.hibernate.dialect.DB2Dialect -Dhibernate.connection.url=... -Dhibernate.connection.schema=... Script parameters: --tests org.hibernate.jpa.test.query.QueryTest How do I add db2jcc4.jar as a dependency? Thanks, Gail ----- Original Message ----- > From: "Steve Ebersole" > To: "Gail Badner" , "Hibernate Dev" > Sent: Thursday, June 4, 2015 6:46:24 PM > Subject: Re: [hibernate-dev] How to run master unit test in Intellij with non-default DB > > I would assume you are using the "Run with Gradle" stuff in IntelliJ rather > than the normal "Run it in IntelliJ" stuff > > On Thu, Jun 4, 2015 at 7:01 PM Gail Badner wrote: > > > For 4.3 and before, when running a unit test in Intellij using a > > non-default DB, I would simply add the JDBC jar as a module dependency and > > then add the hibernate-specific properties (e.g., for dialect, etc) as VM > > options in the Run/Debug configuration. > > > > This doesn't work for master because the added dependency is not getting > > picked up. > > > > Is there some other way to do this without messing with build.gradle or > > libraries.gradle? 
> > > > Thanks, > > Gail > > _______________________________________________ > > hibernate-dev mailing list > > hibernate-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hibernate-dev > > > From sanne at hibernate.org Fri Jun 5 05:23:34 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Fri, 5 Jun 2015 10:23:34 +0100 Subject: [hibernate-dev] Hibernate Search is now compatible with Hibernate ORM 5: use version 5.4.0.Alpha1 Message-ID: Hibernate Search version 5.4.0.Alpha1 is now available, and is compatible with Hibernate ORM version 5.0.0.CR1. More details at: - http://in.relation.to/Bloggers/FirstPreviewOfHibernateSearchForHibernateORM5 Thanks, Sanne From galder at redhat.com Fri Jun 5 08:50:53 2015 From: galder at redhat.com (=?utf-8?Q?Galder_Zamarre=C3=B1o?=) Date: Fri, 5 Jun 2015 14:50:53 +0200 Subject: [hibernate-dev] 2nd level cache tuning on WildFly In-Reply-To: References: Message-ID: <34BD9AC6-1CF3-4E0E-8704-177402B55049@redhat.com> Hi Sanne, By default, each entity gets its own cache in both standalone and WF. WF might only expose the configuration for "entity" cache, but internally, each entity gets a new cache that has the configuration of "entity". If you want for a particular entity type to have different cache settings, you might be able to modify them directly in the Hibernate configuration itself via properties [1] ("hibernate.cache.infinispan.cfg" would not apply within WF) Otherwise, right now yeah, you'd need to define a cache in the persistence cache container, and assign that cache name to your entity, again defined in [1]. For standalone envs, you can reference the XML from the Hibernate configuration, but yeah, I see your point of having an easier way to define the cache from the deployment configuration itself. This could potentially done in such way that can be used for both standalone and WF. Can you create an HHH for this? Cheers, [1] http://infinispan.org/docs/7.2.x/user_guide/user_guide.html#_advanced_configuration_2 > Date: Mon, 1 Jun 2015 19:24:33 +0100 > From: Sanne Grinovero > Subject: [hibernate-dev] 2nd level cache tuning on WildFly > To: "Hibernate.org" > Message-ID: > > Content-Type: text/plain; charset=UTF-8 > > Does someone have an example of how I could use different Infinispan > Cache(s) for each of my Hibernate entities? > > The WildFly documentation doesn't get much into tuning: > https://docs.jboss.org/author/display/WFLY9/JPA+Reference+Guide#JPAReferenceGuide-UsingtheInfinispansecondlevelcache > > I'd like to define Cache configuration in the WildFly configuration > file and map them 1:1 to the cacheable entities. > Which also brings up the question on why I should edit the root > configuration for sake of app-specific details.. ideally I'd want to > add such a configuration snippet within my application deployment. 
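A sketch of the property-based tuning Galder mentions in [1], under the assumption that per-region settings follow the documented hibernate.cache.infinispan.<region>.* naming and that the region name defaults to the entity's fully-qualified class name; the entity, cache name and values are invented:

import java.util.HashMap;
import java.util.Map;

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class SecondLevelCacheTuning {

    public static EntityManagerFactory createEmf() {
        Map<String, Object> props = new HashMap<String, Object>();
        // Per-entity overrides; the region name defaults to the entity class name.
        props.put( "hibernate.cache.infinispan.com.acme.Customer.eviction.max_entries", "10000" );
        props.put( "hibernate.cache.infinispan.com.acme.Customer.expiration.lifespan", "60000" );
        // Or point the region at a named cache from an Infinispan XML (standalone environments;
        // within WildFly the cache would be defined in the persistence cache container instead).
        props.put( "hibernate.cache.infinispan.com.acme.Customer.cfg", "customer-entity-cache" );
        return Persistence.createEntityManagerFactory( "examplePU", props );
    }
}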
> > thanks in advance for any pointer, > Sanne > -- Galder Zamarreño galder at redhat.com From mih_vlad at yahoo.com Mon Jun 8 09:25:53 2015 From: mih_vlad at yahoo.com (Mihalcea Vlad) Date: Mon, 8 Jun 2015 13:25:53 +0000 (UTC) Subject: [hibernate-dev] Persistence.xml properties are not available when the Hibernate services are bootstrapped Message-ID: <1914732108.7081395.1433769953848.JavaMail.yahoo@mail.yahoo.com> Hi Steven, I'm trying to integrate FlexyPool ( https://github.com/vladmihalcea/flexy-pool ) with Java EE Application servers, and the only work-around I found is to add a new Hibernate ConnectionProvider that extends the DataSourceConnectionProvider and exposes a DataSource proxy, instead of the original Application server one. This can allow monitoring connection allocation. The persistence.xml properties (transaction-type, jta or non-jta data source) are not available in the properties Map supplied to the Configurable interface I'm also implementing. I checked the org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl source code and found this comment: // HHH-8121 : make the PU-defined value available to EMF.getProperties() configurationValues.put( AvailableSettings.JTA_DATASOURCE, persistenceUnit.getJtaDataSource() ); This Hibernate issue is closed: https://hibernate.atlassian.net/browse/HHH-8122 But the properties are still missing in the properties Map supplied to the Configurable interface. Without this info, it's very hard to decide whether my implementation should support aggressive release (for JTA) or not (RESOURCE_LOCAL). Do you know anything about it? Vlad From steve at hibernate.org Mon Jun 8 17:04:50 2015 From: steve at hibernate.org (Steve Ebersole) Date: Mon, 08 Jun 2015 21:04:50 +0000 Subject: [hibernate-dev] Persistence.xml properties are not available when the Hibernate services are bootstrapped In-Reply-To: <1914732108.7081395.1433769953848.JavaMail.yahoo@mail.yahoo.com> References: <1914732108.7081395.1433769953848.JavaMail.yahoo@mail.yahoo.com> Message-ID: That comment is in org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl#applyJdbcConnectionProperties. The more important calls are the ones right before those #put calls. The ones to StandardServiceRegistryBuilder#applySetting. Those are the settings that ultimately will get passed to the Configurable. #applyJdbcConnectionProperties does have branches dealing with jta- and non-jta datasources. EntityManagerFactoryBuilderImpl#applyTransactionProperties deals with transaction types. So the code looks right. I assume you are just looking under the wrong keys. On Mon, Jun 8, 2015 at 8:25 AM Mihalcea Vlad wrote: > Hi Steven, > > I'm trying to integrate FlexyPool ( > https://github.com/vladmihalcea/flexy-pool ) with Java EE Application > servers, and the only work-around I found is to add a new Hibernate > ConnectionProvider that extends the DataSourceConnectionProvider and > exposes a DataSource proxy, instead of the original Application server one. > This can allow monitoring connection allocation. > > The persistence.xml properties (transaction-type, jta or non-jta data > source) are not available in the properties Map supplied to the > Configurable interface I'm also implementing.
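To make the scenario concrete, a rough sketch of the kind of provider being described; the class name is invented, DatasourceConnectionProviderImpl and its Configurable callback are the usual extension points, and whether the PU-level keys actually show up in this map is exactly the open question in this thread:

import java.util.Map;

import org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl;

public class MonitoringDataSourceConnectionProvider extends DatasourceConnectionProviderImpl {

    @Override
    @SuppressWarnings("rawtypes")
    public void configure(Map configurationValues) {
        // The standard JPA keys under which the PU-defined data sources would be expected.
        Object jtaDataSource = configurationValues.get( "javax.persistence.jtaDataSource" );
        Object nonJtaDataSource = configurationValues.get( "javax.persistence.nonJtaDataSource" );
        // ... wrap the resolved DataSource in a monitoring proxy and decide on
        // aggressive release (JTA) vs. RESOURCE_LOCAL behaviour here ...
        super.configure( configurationValues );
    }
}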
> > I checked the org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl > source code and found this comment: > > // HHH-8121 : make the PU-defined value available to EMF.getProperties() > configurationValues.put( AvailableSettings.JTA_DATASOURCE, persistenceUnit.getJtaDataSource() ); > > This Hibernate issues is closed: > > https://hibernate.atlassian.net/browse/HHH-8122 > > But the properties are still missing in the properties Map supplied to the Configurable > interface. Without this info, it's very hard to decide whether my > implementation should support aggressive release (for JTA) or not > (RESOURCE_LOCAL). > > Do you know anything about it? > > Vlad > From steve at hibernate.org Mon Jun 8 17:33:41 2015 From: steve at hibernate.org (Steve Ebersole) Date: Mon, 08 Jun 2015 21:33:41 +0000 Subject: [hibernate-dev] Hibernate ORM - next steps In-Reply-To: References: <5568947D.5000104@redhat.com> Message-ID: > > I know personally, time/resources aside, the biggest reason I have not > worked a lot on the bigger task (other than my initial work on the new > query parser) is because I had hoped a solution would present itself to the > Antlr 4 quandary. But it hasn't and likely wont and I think Gunnar and > Sanne and I are all in agreement that it probably just makes sense to base > this work on Antlr 3. > Sanne? Gunnar? ;) > From steve at hibernate.org Mon Jun 8 17:41:59 2015 From: steve at hibernate.org (Steve Ebersole) Date: Mon, 08 Jun 2015 21:41:59 +0000 Subject: [hibernate-dev] Hibernate ORM - next steps In-Reply-To: References: <20150529102213.GB43402@Nineveh.lan> Message-ID: > > - since I'm currently exploring the 2nd level cache keys and the > persistence context keys again, it's getting clear (again as we > discussed this before) to potentially use a different data structure > to hold the persistence context. > The persistence context holds a lot of data, a lot of Maps. Which in particular are you think of? Based on your wording, I would assume you speak of the `Map entitiesByKey` and maybe the `EntityEntryContext`? From steve at hibernate.org Mon Jun 8 17:53:58 2015 From: steve at hibernate.org (Steve Ebersole) Date: Mon, 08 Jun 2015 21:53:58 +0000 Subject: [hibernate-dev] Hibernate ORM - next steps In-Reply-To: References: <20150529102213.GB43402@Nineveh.lan> Message-ID: Here is a list of tasks, with dependent tasks nested under the tasks they depend on... 1. rework SQL generation & HQL parser 1. change JDBC extraction to work by position, rather than alias 2. port Hibernate Criteria constructs to JPA criteria, begin deprecation of Hibernate Criteria 3. ability to override EAGER fetching with LAZY (fetch profiles, HQL, etc) 2. rework annotation binding (Jandex, etc) 1. extended orm.xml, deprecate hbm.xml 2. discriminator-based multi-tenancy 3. (?) extend JPA criteria API with fluent support 4. merging hibernate-entitymanager into hibernate-core 5. continue to fill out bytecode enhancement capabilities 6. others, as we discuss I already mentioned why (1.1) requires (1). But I also moved (1.2) and (1.3) under there. I moved (1.2) because part of both the JPA criteria and Hibernate criteria is the rendering of that to SQL. At the moment the native Hibernate Criteria contract renders the parts directly into SQL fragments. For the JPA criteria we currently render to HQL and then render that. Ultimately we want this to render to the AST. WRT (1.2) we could decide to simply port the Hibernate-provided Criteria pieces and fit them into the HQL-rendering scheme short term. 
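As a concrete picture of the porting involved in (1.2), roughly the before/after a user would see; the Customer entity and its lastName property are invented (assumed to be a mapped entity), and the comments reflect the current rendering paths described above:

import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Root;

import org.hibernate.Session;
import org.hibernate.criterion.Restrictions;

public class CriteriaPortingSketch {

    // Legacy Hibernate Criteria: parts are rendered directly into SQL fragments.
    @SuppressWarnings("unchecked")
    static List<Customer> legacy(Session session) {
        return session.createCriteria( Customer.class )
                .add( Restrictions.eq( "lastName", "Smith" ) )
                .list();
    }

    // JPA criteria: currently rendered to HQL, eventually meant to render to the new AST.
    static List<Customer> jpa(EntityManager em) {
        CriteriaBuilder cb = em.getCriteriaBuilder();
        CriteriaQuery<Customer> query = cb.createQuery( Customer.class );
        Root<Customer> root = query.from( Customer.class );
        query.select( root ).where( cb.equal( root.get( "lastName" ), "Smith" ) );
        return em.createQuery( query ).getResultList();
    }
}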
I am a little leery of forcing users to port their custom extensions just to have to change them again later when we change how the rendering happens. I moved (1.3) for similar reasons. This affects the SQL generation as well as the configuration of the Loaders used to process the results. Things all slated to be done in (1). From steve at hibernate.org Mon Jun 8 19:22:47 2015 From: steve at hibernate.org (Steve Ebersole) Date: Mon, 08 Jun 2015 23:22:47 +0000 Subject: [hibernate-dev] Hibernate ORM - next steps In-Reply-To: References: <20150529102213.GB43402@Nineveh.lan> Message-ID: Wow, the formatting on that came out awful. One more try 1. rework SQL generation & HQL parser 1. change JDBC extraction to work by position, rather than alias 2. port Hibernate Criteria constructs to JPA criteria, begin deprecation of Hibernate Criteria 3. ability to override EAGER fetching with LAZY (fetch profiles, HQL, etc) 2. rework annotation binding (Jandex, etc) 1. extended orm.xml, deprecate hbm.xml 2. discriminator-based multi-tenancy 3. extend JPA criteria API with fluent support 4. merging hibernate-entitymanager into hibernate-core 5. continue to fill out bytecode enhancement capabilities 6. others, as we discuss > > From gunnar at hibernate.org Tue Jun 9 04:56:57 2015 From: gunnar at hibernate.org (Gunnar Morling) Date: Tue, 9 Jun 2015 10:56:57 +0200 Subject: [hibernate-dev] Query handling : Antlr 3 versus Antlr 4 In-Reply-To: References: Message-ID: Hi Steve, Did you ever have a chance to apply the "decorated parse tree" approach to your Antlr4 PoC? What I like about the Antlr4 approach is the fact that you don't need a set of several quite similar grammars as you'd do with the tree transformation approach. Also using the current version of Antlr instead of 3 appears attractive to me wrt. to bugfixes and future development of the tool. Based on what I understand from your discussions on the Antlr mailing list, I'd assume the parse tree and the external state it references to look roughly like so (---> indicates a reference to state built up during sub-sequential walks, maybe in some external "table", maybe stored within the (typed) tree nodes themselves): [QUERY] [SELECT] [ATTRIBUTE_REF] ---> AttributeReference("", "code") [DOT] [DOT] [DOT] [IDENT, "c"] [IDENT, "headquarters"] [IDENT, "state"] [IDENT, "code"] [FROM] [SPACE] [SPACE_ROOT] ---> InnerJoin( InnerJoin ( PersisterRef( "c", "com.acme.Customer" ), TableRef ( "", "headquarters" ) ), TableRef ( "", "state" ) ) ) [IDENT, "Customer"] [IDENT, "c"] I.e. instead of transforming the tree itself, the state required for output generation would be added as "decorators" to nodes of the original parse tree itself. That's just the basic idea as I understand it, surely the specific types of the decorator elements (AttributeReference, InnerJoin etc.) may look different. During "query rendering" we'd have to inspect the decorator state of the parse tree nodes and interpret it accordingly. So I believe the issue of alias resolution and implicit join conversion could be handled without tree transformations (at least conceptually, I could not code an actual implementation out of my head right away). But maybe there are other cases where tree transformations are more strictly needed? --Gunnar 2014-11-13 19:42 GMT+01:00 Steve Ebersole : > As most of you know already, we are planning to redesign the current > Antlr-based HQL/JPQL parser in ORM for a variety of reasons. 
> > The current approach in the translator (Antlr 2 based, although Antlr 3 > supports the same model) is that we actually define multiple > grammars/parsers which progressively re-write the tree adding more and more > semantic information; think of this as multiple passes or phases. The > current code has 3 phases: > 1) parsing - we simply parse the HQL/JPQL query into an AST, although we do > do one interesting (and uber-important!) re-write here where we "hoist" the > from clause in front of all other clauses. > 2) rough semantic analysis - the current code, to be honest, sucks here. > The end result of this phase is a tree that mixes normalized semantic > information with lots of SQL fragments. It is extremely fugly > 3) rendering to SQL > > The idea of phases is still the best way to attack this translation imo. I > just think we did not implement the phases very well before; we were just > learning Antlr at the time. So part of the redesign here is to leverage > our better understanding of Antlr and design some better trees. The other > big reason is to centralize the generation of SQL into one place rather > than the 3 different places we do it today (not to mention the many, many > places we render SQL fragments). > > Part of the process here is to decide which parser to use. Antlr 2 is > ancient :) I used Antlr 3 in the initial prototyping of this redesign > because it was the most recent release at that time. In the interim Antlr > 4 has been released. > > I have been evaluating whether Antlr 4 is appropriate for our needs there. > Antlr 4 is a pretty big conceptual deviation from Antlr 2/3 in quite a few > ways. Generally speaking, Antlr 4 is geared more towards interpreting > rather than translating/transforming. It can handle "transformation" if > the transformation is the final step in the process. Transformations is > where tree re-writing comes in handy. > > First lets step back and look at the "conceptual model" of Antlr 4. The > grammar is used to produce: > 1) the parser - takes the input and builds a "parse tree" based on the > rules of the lexer and grammar. > 2) listener/visitor for parse-tree traversal - can optionally generate > listeners or visitors (or both) for traversing the parse tree (output from > parser). > > There are 2 highly-related changes that negatively impact us: > 1) no tree grammars/parsers > 2) no tree re-writing > > Our existing translator is fundamentally built on the concepts of tree > parsers and tree re-writing. Even the initial prototypes for the redesign > (and the current state of hql-parser which Sanne and Gunnar picked up from > there) are built on those concepts. So moving to Antlr 4 in that regard > does represent a risk. How big of a risk, and whether that risk is worth > it, is what we need to determine. > > What does all this mean in simple, practical terms? Let's look at a simple > query: "select c.headquarters.state.code from Company c". Simple syntactic > analysis will produce a tree something like: > > [QUERY] > [SELECT] > [DOT] > [DOT] > [DOT] > [IDENT, "c"] > [IDENT, "headquarters"] > [IDENT, "state"] > [IDENT, "code"] > [FROM] > [SPACE] > [SPACE_ROOT] > [IDENT, "Customer"] > [IDENT, "c"] > > There is not a lot of semantic (meaning) information here. 
A more semantic > representation of the query would look something like: > > [QUERY] > [SELECT] > [ATTRIBUTE_REF] > [ALIAS_REF, ""] > [IDENT, "code"] > [FROM] > [SPACE] > [PERSISTER_REF] > [ENTITY_NAME, "com.acme.Customer"] > [ALIAS, "c"] > [JOIN] > [INNER] > [ATTRIBUTE_JOIN] > [IDENT, "headquarters"] > [ALIAS, ""] > [JOIN] > [INNER] > [ATTRIBUTE_JOIN] > [IDENT, "state"] > [ALIAS, ""] > > > Notice especially the difference in the tree rules. This is tree > re-writing, and is the major difference affecting us. Consider a specific > thing like the "c.headquarters.state.code" DOT-IDENT sequence. Essentially > Antlr 4 would make us deal with that as a DOT-IDENT sequence through all > the phases - even SQL generation. Quite fugly. The intent of Antlr 4 in > cases like this is to build up an external state table (external to the > tree itself) or what Antlr folks typically refer to as "iterative tree > decoration"[1]. So with Antlr 4, in generating the SQL, we would still be > handling calls in terms of "c.headquarters.state.code" in the SELECT clause > and resolving that through the external symbol tables. Again, with Antlr 4 > we would always be walking that initial (non-semantic) tree. Unless I am > missing something. I would be happy to be corrected, if anyone knows Antlr > 4 better. I have also asked as part of the antlr-discussion group[2]. > > In my opinion though, if it comes down to us needing to walk the tree in > that first form across all phases I just do not see the benefit to moving > to Antlr 4. > > P.S. When I say SQL above I really just mean the target query language for > the back-end data store whether that be SQL targeting a RDBMS for ORM or a > NoSQL store for OGM. > > [1] I still have not fully grokked this paradigm, so I may be missing > something, but... AFAICT even in this paradigm the listener/visitor rules > are defined in terms of the initial parse tree rules rather than more > [2] https://groups.google.com/forum/#!topic/antlr-discussion/hzF_YrzfDKo > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From sanne at hibernate.org Tue Jun 9 06:49:40 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Tue, 9 Jun 2015 11:49:40 +0100 Subject: [hibernate-dev] Query handling : Antlr 3 versus Antlr 4 In-Reply-To: References: Message-ID: On 9 June 2015 at 09:56, Gunnar Morling wrote: > Hi Steve, > > Did you ever have a chance to apply the "decorated parse tree" approach to > your Antlr4 PoC? > > What I like about the Antlr4 approach is the fact that you don't need a set > of several quite similar grammars as you'd do with the tree transformation > approach. Also using the current version of Antlr instead of 3 appears > attractive to me wrt. to bugfixes and future development of the tool. 
> > Based on what I understand from your discussions on the Antlr mailing list, > I'd assume the parse tree and the external state it references to look > roughly like so (---> indicates a reference to state built up during > sub-sequential walks, maybe in some external "table", maybe stored within > the (typed) tree nodes themselves): > > [QUERY] > [SELECT] > [ATTRIBUTE_REF] ---> AttributeReference("", "code") > [DOT] > [DOT] > [DOT] > [IDENT, "c"] > [IDENT, "headquarters"] > [IDENT, "state"] > [IDENT, "code"] > [FROM] > [SPACE] > [SPACE_ROOT] ---> InnerJoin( InnerJoin ( PersisterRef( "c", > "com.acme.Customer" ), TableRef ( "", "headquarters" ) ), TableRef ( > "", "state" ) ) ) > [IDENT, "Customer"] > [IDENT, "c"] > > I.e. instead of transforming the tree itself, the state required for output > generation would be added as "decorators" to nodes of the original parse > tree itself. That's just the basic idea as I understand it, surely the > specific types of the decorator elements (AttributeReference, InnerJoin etc.) > may look different. During "query rendering" we'd have to inspect the > decorator state of the parse tree nodes and interpret it accordingly. > > So I believe the issue of alias resolution and implicit join conversion > could be handled without tree transformations (at least conceptually, I > could not code an actual implementation out of my head right away). But > maybe there are other cases where tree transformations are more strictly > needed? Do you mean that you would be ok to "navigate" all the [DOT] nodes to get to the decorated attachments? In that case while you might be fine to translate each fragment into a different fragment, it's not straight forward to transform it into a different structure, say with sub-trees in different orders or nodes which don't have a 1:1 match. It's of course doable if you are filling in your own builder while navigating these (like we do with the Lucene DSL output), but it doesn't help you with multiple phases which is what Steve is pointing out. I would highly prefer to feed the semantic representation of the tree to our query generating backends, especially so if we could all share the same initial smart phases to do some basic validations and optimisations DRY. But then the consuming backends will likely have some additional validations and optimisations which need to be backend-specific (dialect-specific or technology specific in case of OGM). Steve, you mentioned that ANTLR4 handles transformations but only when it's the last step. What prevents us to chain multiple such transformations, applying the "last step" approach multiple times? I didn't look at it at all, so take this just as an high level, conceptual question. I guess one would need to clearly define all intermediate data types rather than have ANTLR generate them like it does with tokens, but that could be the lesser trouble? Thanks, Sanne > > --Gunnar > > > > > > > > 2014-11-13 19:42 GMT+01:00 Steve Ebersole : > >> As most of you know already, we are planning to redesign the current >> Antlr-based HQL/JPQL parser in ORM for a variety of reasons. >> >> The current approach in the translator (Antlr 2 based, although Antlr 3 >> supports the same model) is that we actually define multiple >> grammars/parsers which progressively re-write the tree adding more and more >> semantic information; think of this as multiple passes or phases. 
The >> current code has 3 phases: >> 1) parsing - we simply parse the HQL/JPQL query into an AST, although we do >> do one interesting (and uber-important!) re-write here where we "hoist" the >> from clause in front of all other clauses. >> 2) rough semantic analysis - the current code, to be honest, sucks here. >> The end result of this phase is a tree that mixes normalized semantic >> information with lots of SQL fragments. It is extremely fugly >> 3) rendering to SQL >> >> The idea of phases is still the best way to attack this translation imo. I >> just think we did not implement the phases very well before; we were just >> learning Antlr at the time. So part of the redesign here is to leverage >> our better understanding of Antlr and design some better trees. The other >> big reason is to centralize the generation of SQL into one place rather >> than the 3 different places we do it today (not to mention the many, many >> places we render SQL fragments). >> >> Part of the process here is to decide which parser to use. Antlr 2 is >> ancient :) I used Antlr 3 in the initial prototyping of this redesign >> because it was the most recent release at that time. In the interim Antlr >> 4 has been released. >> >> I have been evaluating whether Antlr 4 is appropriate for our needs there. >> Antlr 4 is a pretty big conceptual deviation from Antlr 2/3 in quite a few >> ways. Generally speaking, Antlr 4 is geared more towards interpreting >> rather than translating/transforming. It can handle "transformation" if >> the transformation is the final step in the process. Transformations is >> where tree re-writing comes in handy. >> >> First lets step back and look at the "conceptual model" of Antlr 4. The >> grammar is used to produce: >> 1) the parser - takes the input and builds a "parse tree" based on the >> rules of the lexer and grammar. >> 2) listener/visitor for parse-tree traversal - can optionally generate >> listeners or visitors (or both) for traversing the parse tree (output from >> parser). >> >> There are 2 highly-related changes that negatively impact us: >> 1) no tree grammars/parsers >> 2) no tree re-writing >> >> Our existing translator is fundamentally built on the concepts of tree >> parsers and tree re-writing. Even the initial prototypes for the redesign >> (and the current state of hql-parser which Sanne and Gunnar picked up from >> there) are built on those concepts. So moving to Antlr 4 in that regard >> does represent a risk. How big of a risk, and whether that risk is worth >> it, is what we need to determine. >> >> What does all this mean in simple, practical terms? Let's look at a simple >> query: "select c.headquarters.state.code from Company c". Simple syntactic >> analysis will produce a tree something like: >> >> [QUERY] >> [SELECT] >> [DOT] >> [DOT] >> [DOT] >> [IDENT, "c"] >> [IDENT, "headquarters"] >> [IDENT, "state"] >> [IDENT, "code"] >> [FROM] >> [SPACE] >> [SPACE_ROOT] >> [IDENT, "Customer"] >> [IDENT, "c"] >> >> There is not a lot of semantic (meaning) information here. A more semantic >> representation of the query would look something like: >> >> [QUERY] >> [SELECT] >> [ATTRIBUTE_REF] >> [ALIAS_REF, ""] >> [IDENT, "code"] >> [FROM] >> [SPACE] >> [PERSISTER_REF] >> [ENTITY_NAME, "com.acme.Customer"] >> [ALIAS, "c"] >> [JOIN] >> [INNER] >> [ATTRIBUTE_JOIN] >> [IDENT, "headquarters"] >> [ALIAS, ""] >> [JOIN] >> [INNER] >> [ATTRIBUTE_JOIN] >> [IDENT, "state"] >> [ALIAS, ""] >> >> >> Notice especially the difference in the tree rules. 
This is tree >> re-writing, and is the major difference affecting us. Consider a specific >> thing like the "c.headquarters.state.code" DOT-IDENT sequence. Essentially >> Antlr 4 would make us deal with that as a DOT-IDENT sequence through all >> the phases - even SQL generation. Quite fugly. The intent of Antlr 4 in >> cases like this is to build up an external state table (external to the >> tree itself) or what Antlr folks typically refer to as "iterative tree >> decoration"[1]. So with Antlr 4, in generating the SQL, we would still be >> handling calls in terms of "c.headquarters.state.code" in the SELECT clause >> and resolving that through the external symbol tables. Again, with Antlr 4 >> we would always be walking that initial (non-semantic) tree. Unless I am >> missing something. I would be happy to be corrected, if anyone knows Antlr >> 4 better. I have also asked as part of the antlr-discussion group[2]. >> >> In my opinion though, if it comes down to us needing to walk the tree in >> that first form across all phases I just do not see the benefit to moving >> to Antlr 4. >> >> P.S. When I say SQL above I really just mean the target query language for >> the back-end data store whether that be SQL targeting a RDBMS for ORM or a >> NoSQL store for OGM. >> >> [1] I still have not fully grokked this paradigm, so I may be missing >> something, but... AFAICT even in this paradigm the listener/visitor rules >> are defined in terms of the initial parse tree rules rather than more >> [2] https://groups.google.com/forum/#!topic/antlr-discussion/hzF_YrzfDKo >> _______________________________________________ >> hibernate-dev mailing list >> hibernate-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hibernate-dev >> > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev From gunnar at hibernate.org Tue Jun 9 07:11:26 2015 From: gunnar at hibernate.org (Gunnar Morling) Date: Tue, 9 Jun 2015 13:11:26 +0200 Subject: [hibernate-dev] Query handling : Antlr 3 versus Antlr 4 In-Reply-To: References: Message-ID: 2015-06-09 12:49 GMT+02:00 Sanne Grinovero : > On 9 June 2015 at 09:56, Gunnar Morling wrote: > > Hi Steve, > > > > Did you ever have a chance to apply the "decorated parse tree" approach > to > > your Antlr4 PoC? > > > > What I like about the Antlr4 approach is the fact that you don't need a > set > > of several quite similar grammars as you'd do with the tree > transformation > > approach. Also using the current version of Antlr instead of 3 appears > > attractive to me wrt. to bugfixes and future development of the tool. > > > > Based on what I understand from your discussions on the Antlr mailing > list, > > I'd assume the parse tree and the external state it references to look > > roughly like so (---> indicates a reference to state built up during > > sub-sequential walks, maybe in some external "table", maybe stored within > > the (typed) tree nodes themselves): > > > > [QUERY] > > [SELECT] > > [ATTRIBUTE_REF] ---> AttributeReference("", "code") > > [DOT] > > [DOT] > > [DOT] > > [IDENT, "c"] > > [IDENT, "headquarters"] > > [IDENT, "state"] > > [IDENT, "code"] > > [FROM] > > [SPACE] > > [SPACE_ROOT] ---> InnerJoin( InnerJoin ( PersisterRef( "c", > > "com.acme.Customer" ), TableRef ( "", "headquarters" ) ), > TableRef ( > > "", "state" ) ) ) > > [IDENT, "Customer"] > > [IDENT, "c"] > > > > I.e. 
instead of transforming the tree itself, the state required for > output > > generation would be added as "decorators" to nodes of the original parse > > tree itself. That's just the basic idea as I understand it, surely the > > specific types of the decorator elements (AttributeReference, InnerJoin > etc.) > > may look different. During "query rendering" we'd have to inspect the > > decorator state of the parse tree nodes and interpret it accordingly. > > > > So I believe the issue of alias resolution and implicit join conversion > > could be handled without tree transformations (at least conceptually, I > > could not code an actual implementation out of my head right away). But > > maybe there are other cases where tree transformations are more strictly > > needed? > > Do you mean that you would be ok to "navigate" all the [DOT] nodes to > get to the decorated attachments? > In that case while you might be fine to translate each fragment into a > different fragment, it's not straight forward to transform it into a > different structure, say with sub-trees in different orders or nodes > which don't have a 1:1 match. > It's of course doable if you are filling in your own builder while > navigating these (like we do with the Lucene DSL output), but it > doesn't help you with multiple phases which is what Steve is pointing > out. > No, what I mean is to add attachments to nodes up in the tree, based on information either a) from sub-nodes of that tree or b) nodes somewhere else in the tree. E.g. a) is the case for the attribute reference, which is represented by an attachment at the ATTRIBUTE_REF node (it has been created by prior visits to the DOT sub-nodes) and b) is the case for the implicit join syntax: It is declared by sub-nodes of the SELECT clause, but the attachments representing the join are added beneath the FROM clause. The query generation would work based on these "semantic" attachments, it would not visit the individual DOT nodes for instance. I would highly prefer to feed the semantic representation of the tree > to our query generating backends, especially so if we could all share > the same initial smart phases to do some basic validations and > optimisations DRY. But then the consuming backends will likely have > some additional validations and optimisations which need to be > backend-specific (dialect-specific or technology specific in case of > OGM). > Yes, of course that's my preference as well. But collecting semantic attachments on higher-level nodes (using one more several visits on the parse tree) should not be in the way of that. The difference to incrementally altering the structure of the tree is that this approach attaches the required state to nodes of the original tree itself. E.g. in a first pass you could register all alias definitions (e.g. "c" = PersisterRef(Customer)) in some look-up table. Then in a second pass you could resolve alias uses against these definitions and attach that resolved information to the node representing the original reference. So "semantic representation" would be in node attachments (again, likely in aggregated forms on super-nodes or nodes somewhere else in the tree) instead of nodes themselves. At least that's how I understand things to work in Antlr4 based on their docs and the user group discussions initiated by Steve. > > Steve, you mentioned that ANTLR4 handles transformations but only when > it's the last step. What prevents us to chain multiple such > transformations, applying the "last step" approach multiple times? 
> I didn't look at it at all, so take this just as an high level, > conceptual question. I guess one would need to clearly define all > intermediate data types rather than have ANTLR generate them like it > does with tokens, but that could be the lesser trouble? > > Thanks, > Sanne > > > > > --Gunnar > > > > > > > > > > > > > > > > 2014-11-13 19:42 GMT+01:00 Steve Ebersole : > > > >> As most of you know already, we are planning to redesign the current > >> Antlr-based HQL/JPQL parser in ORM for a variety of reasons. > >> > >> The current approach in the translator (Antlr 2 based, although Antlr 3 > >> supports the same model) is that we actually define multiple > >> grammars/parsers which progressively re-write the tree adding more and > more > >> semantic information; think of this as multiple passes or phases. The > >> current code has 3 phases: > >> 1) parsing - we simply parse the HQL/JPQL query into an AST, although > we do > >> do one interesting (and uber-important!) re-write here where we "hoist" > the > >> from clause in front of all other clauses. > >> 2) rough semantic analysis - the current code, to be honest, sucks here. > >> The end result of this phase is a tree that mixes normalized semantic > >> information with lots of SQL fragments. It is extremely fugly > >> 3) rendering to SQL > >> > >> The idea of phases is still the best way to attack this translation > imo. I > >> just think we did not implement the phases very well before; we were > just > >> learning Antlr at the time. So part of the redesign here is to leverage > >> our better understanding of Antlr and design some better trees. The > other > >> big reason is to centralize the generation of SQL into one place rather > >> than the 3 different places we do it today (not to mention the many, > many > >> places we render SQL fragments). > >> > >> Part of the process here is to decide which parser to use. Antlr 2 is > >> ancient :) I used Antlr 3 in the initial prototyping of this redesign > >> because it was the most recent release at that time. In the interim > Antlr > >> 4 has been released. > >> > >> I have been evaluating whether Antlr 4 is appropriate for our needs > there. > >> Antlr 4 is a pretty big conceptual deviation from Antlr 2/3 in quite a > few > >> ways. Generally speaking, Antlr 4 is geared more towards interpreting > >> rather than translating/transforming. It can handle "transformation" if > >> the transformation is the final step in the process. Transformations is > >> where tree re-writing comes in handy. > >> > >> First lets step back and look at the "conceptual model" of Antlr 4. The > >> grammar is used to produce: > >> 1) the parser - takes the input and builds a "parse tree" based on the > >> rules of the lexer and grammar. > >> 2) listener/visitor for parse-tree traversal - can optionally generate > >> listeners or visitors (or both) for traversing the parse tree (output > from > >> parser). > >> > >> There are 2 highly-related changes that negatively impact us: > >> 1) no tree grammars/parsers > >> 2) no tree re-writing > >> > >> Our existing translator is fundamentally built on the concepts of tree > >> parsers and tree re-writing. Even the initial prototypes for the > redesign > >> (and the current state of hql-parser which Sanne and Gunnar picked up > from > >> there) are built on those concepts. So moving to Antlr 4 in that regard > >> does represent a risk. How big of a risk, and whether that risk is > worth > >> it, is what we need to determine. 
> >> > >> What does all this mean in simple, practical terms? Let's look at a > simple > >> query: "select c.headquarters.state.code from Company c". Simple > syntactic > >> analysis will produce a tree something like: > >> > >> [QUERY] > >> [SELECT] > >> [DOT] > >> [DOT] > >> [DOT] > >> [IDENT, "c"] > >> [IDENT, "headquarters"] > >> [IDENT, "state"] > >> [IDENT, "code"] > >> [FROM] > >> [SPACE] > >> [SPACE_ROOT] > >> [IDENT, "Customer"] > >> [IDENT, "c"] > >> > >> There is not a lot of semantic (meaning) information here. A more > semantic > >> representation of the query would look something like: > >> > >> [QUERY] > >> [SELECT] > >> [ATTRIBUTE_REF] > >> [ALIAS_REF, ""] > >> [IDENT, "code"] > >> [FROM] > >> [SPACE] > >> [PERSISTER_REF] > >> [ENTITY_NAME, "com.acme.Customer"] > >> [ALIAS, "c"] > >> [JOIN] > >> [INNER] > >> [ATTRIBUTE_JOIN] > >> [IDENT, "headquarters"] > >> [ALIAS, ""] > >> [JOIN] > >> [INNER] > >> [ATTRIBUTE_JOIN] > >> [IDENT, "state"] > >> [ALIAS, ""] > >> > >> > >> Notice especially the difference in the tree rules. This is tree > >> re-writing, and is the major difference affecting us. Consider a > specific > >> thing like the "c.headquarters.state.code" DOT-IDENT sequence. > Essentially > >> Antlr 4 would make us deal with that as a DOT-IDENT sequence through all > >> the phases - even SQL generation. Quite fugly. The intent of Antlr 4 > in > >> cases like this is to build up an external state table (external to the > >> tree itself) or what Antlr folks typically refer to as "iterative tree > >> decoration"[1]. So with Antlr 4, in generating the SQL, we would still > be > >> handling calls in terms of "c.headquarters.state.code" in the SELECT > clause > >> and resolving that through the external symbol tables. Again, with > Antlr 4 > >> we would always be walking that initial (non-semantic) tree. Unless I > am > >> missing something. I would be happy to be corrected, if anyone knows > Antlr > >> 4 better. I have also asked as part of the antlr-discussion group[2]. > >> > >> In my opinion though, if it comes down to us needing to walk the tree in > >> that first form across all phases I just do not see the benefit to > moving > >> to Antlr 4. > >> > >> P.S. When I say SQL above I really just mean the target query language > for > >> the back-end data store whether that be SQL targeting a RDBMS for ORM > or a > >> NoSQL store for OGM. > >> > >> [1] I still have not fully grokked this paradigm, so I may be missing > >> something, but... 
AFAICT even in this paradigm the listener/visitor > rules > >> are defined in terms of the initial parse tree rules rather than more > >> [2] > https://groups.google.com/forum/#!topic/antlr-discussion/hzF_YrzfDKo > >> _______________________________________________ > >> hibernate-dev mailing list > >> hibernate-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/hibernate-dev > >> > > _______________________________________________ > > hibernate-dev mailing list > > hibernate-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hibernate-dev > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From hardy at hibernate.org Tue Jun 9 07:12:39 2015 From: hardy at hibernate.org (Hardy Ferentschik) Date: Tue, 9 Jun 2015 13:12:39 +0200 Subject: [hibernate-dev] [Blog] Hosting of new blog site Message-ID: <20150609111239.GC7620@Nineveh.lan> Hi, just wondering whether there are reasons or preferences for choosing GitHub vs CloudFront for hosting the new blog site? See also https://hibernate.atlassian.net/browse/WEBSITE-311 --Hardy -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 496 bytes Desc: not available Url : http://lists.jboss.org/pipermail/hibernate-dev/attachments/20150609/eee080f8/attachment.bin From johara at redhat.com Tue Jun 9 08:03:06 2015 From: johara at redhat.com (John O'Hara) Date: Tue, 09 Jun 2015 13:03:06 +0100 Subject: [hibernate-dev] HHH-9857 - Reuse of EntityEntry for bytecode enhanced read-only reference cached entities Message-ID: <5576D5FA.6050606@redhat.com> For our use case, bytecode enhanced reference cached immutable entities, our top object for memory allocation is EntityEntry. We see an EntityEntry object created every time an immutable entity is added to a persistence context. In our use case, where we know the entity is immutable and we already have an EntityEntry cached, can we re-use the EntityEntry between sessions? This would reduce the allocation rate of EntityEntry in our use case by ~50%. -- John O'Hara johara at redhat.com JBoss, by Red Hat Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom. Registered in UK and Wales under Company Registration No. 3798903 Directors: Michael Cunningham (USA), Charlie Peters (USA), Matt Parsons (USA) and Michael O'Neill (Ireland). From sanne at hibernate.org Tue Jun 9 08:14:53 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Tue, 9 Jun 2015 13:14:53 +0100 Subject: [hibernate-dev] HHH-9857 - Reuse of EntityEntry for bytecode enhanced read-only reference cached entities In-Reply-To: <5576D5FA.6050606@redhat.com> References: <5576D5FA.6050606@redhat.com> Message-ID: There are lots of setters on EntityEntry, but sharing it would require at least the implementation to be fully immutable to be threadsafe. I see three options for the custom EntityEntry implementation: - simply ignore any write method by implementing each method as a no-op - throw exceptions on any write method - split the EntityEntry interface into a parent interface "ReadOnlyEntityEntry" which doesn't have any such method The first option seems the easy way out but we would not notice any unintended / illegal usage; I'd prefer the third one but I'm not sure which impact it would have, seems like a large change that needs experimenting. 
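To make the second option concrete, roughly this shape; this is not the real org.hibernate.engine.spi.EntityEntry contract (which has far more members), just a sketch of an entry whose mutators fail fast so unintended writes are noticed:

import java.io.Serializable;

public final class SharedReadOnlyEntityEntrySketch {

    private final Serializable id;
    private final Object[] loadedState;

    public SharedReadOnlyEntityEntrySketch(Serializable id, Object[] loadedState) {
        this.id = id;
        this.loadedState = loadedState == null ? null : loadedState.clone();
    }

    public Serializable getId() {
        return id;
    }

    public Object[] getLoadedState() {
        // defensive copy keeps the shared instance safe across sessions
        return loadedState == null ? null : loadedState.clone();
    }

    // every mutator of the real contract would follow this pattern
    public void setLockMode(Object lockMode) {
        throw new UnsupportedOperationException( "entry is shared across sessions and must not be mutated" );
    }
}

The third option would instead move the mutators out of a read-only parent interface altogether, which is the larger change mentioned above.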
I just noticed an ImmutableEntityEntry implementation exists now, but it's not actually immutable? That should be fixed, at very least the javadoc to explain what that class purpose is? Thanks, Sanne On 9 June 2015 at 13:03, John O'Hara wrote: > For our use case, bytecode enhanced reference cached immutable entities, > our top object for memory allocation is EntityEntry. > > We see an EntityEntry object created every time an immutable entity is > added to a persistence context. > > In our use case, where we know the entity is immutable and we already > have an EntityEntry cached, can we re-use the EntityEntry between > sessions? This would reduce the allocation rate of EntityEntry in our > use case by ~50%. > > > -- > John O'Hara > johara at redhat.com > > JBoss, by Red Hat > Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom. > Registered in UK and Wales under Company Registration No. 3798903 Directors: Michael Cunningham (USA), Charlie Peters (USA), Matt Parsons (USA) and Michael O'Neill (Ireland). > > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev From johara at redhat.com Tue Jun 9 08:50:21 2015 From: johara at redhat.com (John O'Hara) Date: Tue, 09 Jun 2015 13:50:21 +0100 Subject: [hibernate-dev] HHH-9857 - Reuse of EntityEntry for bytecode enhanced read-only reference cached entities In-Reply-To: References: <5576D5FA.6050606@redhat.com> Message-ID: <5576E10D.5090103@redhat.com> On 09/06/15 13:14, Sanne Grinovero wrote: > There are lots of setters on EntityEntry, but sharing it would require > at least the implementation to be fully immutable to be threadsafe. > > I see three options for the custom EntityEntry implementation: > - simply ignore any write method by implementing each method as a no-op > - throw exceptions on any write method > - split the EntityEntry interface into a parent interface > "ReadOnlyEntityEntry" which doesn't have any such method > > The first option seems the easy way out but we would not notice any > unintended / illegal usage; I'd prefer the third one but I'm not sure > which impact it would have, seems like a large change that needs > experimenting. > > I just noticed an ImmutableEntityEntry implementation exists now, but > it's not actually immutable? That should be fixed, at very least the > javadoc to explain what that class purpose is? Yes, an ImmutableEntityEntry instance will be created for the EntityEntry in our use case (e.g. when this would be a performance benefit), so we can test for instanceof ImmutableEntityEntry or add no-ops for write operations for this implementation. The object isn't immutable as the state field changes during the lifetime of the object. This question was asked by Steve, i.e. whether it was the Entity that was immutable or the EntityEntry that was immutable, I thought I had replied with my thoughts but I can not find my response to that question. Should clarify this in the javadoc. I think that ImmutbaleEntityEntry should refer to the Entity being immutable. Thanks, John > > Thanks, > Sanne > > On 9 June 2015 at 13:03, John O'Hara wrote: >> For our use case, bytecode enhanced reference cached immutable entities, >> our top object for memory allocation is EntityEntry. >> >> We see an EntityEntry object created every time an immutable entity is >> added to a persistence context. 
>> >> In our use case, where we know the entity is immutable and we already >> have an EntityEntry cached, can we re-use the EntityEntry between >> sessions? This would reduce the allocation rate of EntityEntry in our >> use case by ~50%. >> >> >> -- >> John O'Hara >> johara at redhat.com >> >> JBoss, by Red Hat >> Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod >> Street, Windsor, Berkshire, SI4 1TE, United Kingdom. >> Registered in UK and Wales under Company Registration No. 3798903 >> Directors: Michael Cunningham (USA), Charlie Peters (USA), Matt Parsons >> (USA) and Michael O'Neill (Ireland). >> >> _______________________________________________ >> hibernate-dev mailing list >> hibernate-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hibernate-dev -- John O'Hara johara at redhat.com JBoss, by Red Hat Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom. Registered in UK and Wales under Company Registration No. 3798903 Directors: Michael Cunningham (USA), Charlie Peters (USA), Matt Parsons (USA) and Michael O'Neill (Ireland). From hchanfreau at gmail.com Tue Jun 9 09:19:23 2015 From: hchanfreau at gmail.com (=?UTF-8?Q?Hern=C3=A1n_Chanfreau?=) Date: Tue, 9 Jun 2015 10:19:23 -0300 Subject: [hibernate-dev] HHH-9789: collection size() with lazy extra does not applies filters if proxy is not initialized Message-ID: Hi! A while ago I created this issue. The problem is this: When using lazy extra and enabling filters affecting the collection, the size() and isEmpty() methods are not applying the filters when the collection proxy isn't initialized. So, enabling filters and accessing a collection marked as lazy extra (the filters applying to it), the methods size() and isEmpty() return different values: - if the proxy is initialized, the methods access the real filtered collection. - if the proxy is not initialized, the methods fire a separate select count(*) ignoring the filters. I've attached a test case. I'm wondering if this scenario breaks the idea of not fetching the collection if it is not really needed (for lazy extra collections) or whether we can add the filter conditions to the select count(*) on the fly in order to avoid fetching it. What do you think? Regards. Hernán. From gunnar at hibernate.org Tue Jun 9 09:49:09 2015 From: gunnar at hibernate.org (Gunnar Morling) Date: Tue, 9 Jun 2015 15:49:09 +0200 Subject: [hibernate-dev] Hibernate ORM - next steps In-Reply-To: References: <5568947D.5000104@redhat.com> Message-ID: 2015-06-08 23:33 GMT+02:00 Steve Ebersole : > > > > I know personally, time/resources aside, the biggest reason I have not > > worked a lot on the bigger task (other than my initial work on the new > > query parser) is because I had hoped a solution would present itself to > the > > Antlr 4 quandary. But it hasn't and likely wont and I think Gunnar and > > Sanne and I are all in agreement that it probably just makes sense to > base > > this work on Antlr 3. > > > > Sanne? Gunnar? ;) > If we come to the conclusion that tree transformation is vital, then yes, I don't see another way than going with Antlr3 (see my reply to the other thread on that). Apart from looking into alternatives to Antlr altogether, but I don't feel that's a realistic option. In general, +1 for addressing the parser topic. The changes we have made for OGM feel a bit ad-hoc-ish to me and I think it makes sense to revisit this sooner rather than later to make sure we are (or get) on the right track. With regard 
to other agenda items, has Java 8 support been brought up already? Apart from some "simple" things (like support for Java 8 date/time types), looking at APIs in the light of Lambdas seems very promising. Also the criteria API has potential. Things like the already mentioned QueryDSL or JOOQ seem much nicer on the user wrt. to query creation in a fluent style. Not sure how we could embrace that, but some usability improvements in that area would be nice. --Gunnar > > > > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From steve at hibernate.org Tue Jun 9 10:02:50 2015 From: steve at hibernate.org (Steve Ebersole) Date: Tue, 09 Jun 2015 14:02:50 +0000 Subject: [hibernate-dev] Query handling : Antlr 3 versus Antlr 4 In-Reply-To: References: Message-ID: On Tue, Jun 9, 2015 at 3:57 AM Gunnar Morling wrote: What I like about the Antlr4 approach is the fact that you don't need a set > of several quite similar grammars as you'd do with the tree transformation > approach. Also using the current version of Antlr instead of 3 appears > attractive to me wrt. to bugfixes and future development of the tool. > Understand that we would all "like" to use Antlr 4 for many reasons, myself included. But it has to work for our needs. There are just so many open questions (for me) as to whether that is the case. > > Based on what I understand from your discussions on the Antlr mailing > list, I'd assume the parse tree and the external state it references to > look roughly like so (---> indicates a reference to state built up during > sub-sequential walks, maybe in some external "table", maybe stored within > the (typed) tree nodes themselves): > > [QUERY] > [SELECT] > [ATTRIBUTE_REF] ---> AttributeReference("", "code") > [DOT] > [DOT] > [DOT] > [IDENT, "c"] > [IDENT, "headquarters"] > [IDENT, "state"] > [IDENT, "code"] > [FROM] > [SPACE] > [SPACE_ROOT] ---> InnerJoin( InnerJoin ( PersisterRef( "c", > "com.acme.Customer" ), TableRef ( "", "headquarters" ) ), TableRef ( > "", "state" ) ) ) > [IDENT, "Customer"] > [IDENT, "c"] > > I.e. instead of transforming the tree itself, the state required for > output generation would be added as "decorators" to nodes of the original > parse tree itself. That's just the basic idea as I understand it, surely > the specific types of the decorator elements (AttributeReference, > InnerJoin etc.) may look different. During "query rendering" we'd have to > inspect the decorator state of the parse tree nodes and interpret it > accordingly. > Well, see you do something "tricky" here that is actually one of my concerns with Antlr 4 :) You mix a parse tree and a semantic tree. Specifically this part of your tree: [ATTRIBUTE_REF] ---> AttributeReference("", "code") [DOT] [DOT] [DOT] [IDENT, "c"] [IDENT, "headquarters"] [IDENT, "state"] [IDENT, "code"] The idea of "ATTRIBUTE_REF" is a semantic concept. The DOT-IDENT struct is your parse tree. Antlr 4 does allow mixing these based on left refactoring of the rules, *but* there is an assumption there... that the branches in such a left-refactored rule can be resolved unambiguously. I am not so sure we can do that. In simpler terms... Antlr 4 needs you to be able to apply those semantic resolutions (attributeRef versus javaLiteralRef versus oraclePackagedProcedure versus ...) up front. 
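For anyone trying to picture the decoration approach being debated here: with Antlr 4 the usual vehicle is ParseTreeProperty, roughly as below. Only ParseTreeProperty itself is real Antlr 4 API; the semantic payload type is invented, and whether the DOT-IDENT branches can be resolved early enough to fill it is exactly the concern being raised:

import org.antlr.v4.runtime.tree.ParseTree;
import org.antlr.v4.runtime.tree.ParseTreeProperty;

// Invented semantic payload: what a resolved "c.headquarters.state.code" would carry.
class AttributeReference {
    final String sourceAlias;
    final String attributePath;

    AttributeReference(String sourceAlias, String attributePath) {
        this.sourceAlias = sourceAlias;
        this.attributePath = attributePath;
    }
}

// External symbol table decorating parse-tree nodes instead of rewriting the tree.
class SemanticDecorations {

    private final ParseTreeProperty<AttributeReference> attributeRefs =
            new ParseTreeProperty<AttributeReference>();

    void decorate(ParseTree dotIdentNode, AttributeReference resolved) {
        attributeRefs.put( dotIdentNode, resolved );
    }

    AttributeReference resolved(ParseTree dotIdentNode) {
        // stays null until a later pass, run after the FROM clause is known, resolves it
        return attributeRefs.get( dotIdentNode );
    }
}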
So take the input that produces that tree: select c.headquarters.state.code Syntactically that dot-ident structure could represent any number of things. And semantically we just simply do not have enough information. We *could* eliminate it being a javaLiteralRef if we made javaLiteralRef the highest precedence branch in the left-factored rule that produces this, but that has serious drawbacks: 1) we are checking each and every dot-ident path as a possible javaLiteralRef first, which means reflection (perf) 2) it is not a fool-proof approach. The problem is that javaLiteralRef should really have very low precedence. There are conceivably cases where the expression could resolve to either a javaLiteralRef or an attributeRef, and in those cases the resolution should be routed through attributeRef not javaLiteralRef The ultimate problem there is that we cannot possibly know much of the information we need to know for proper semantic analysis until after we have seen the FROM clause. We got around that with older Antlr versions specifically via tree-rewriting: we re-write the tree to "hoist" FROM before the other clauses. So I believe the issue of alias resolution and implicit join conversion > could be handled without tree transformations (at least conceptually, I > could not code an actual implementation out of my head right away). But > maybe there are other cases where tree transformations are more strictly > needed? > Well I just illustrated above how that is actually a problem that does need either tree transformations or at least delayed processing of the sub-tree. Also get out of your head this idea that we can encode the semantic resolution of dot-ident paths into the tree. We simply will not be able to (I believe). And I think that starts to show my reservations about Antlr 4. Basically every pass over this tree we will need to deal with [[DOT][IDENT]] as opposed to [ATTRIBUTE_REFERENCE] From steve at hibernate.org Tue Jun 9 10:11:08 2015 From: steve at hibernate.org (Steve Ebersole) Date: Tue, 09 Jun 2015 14:11:08 +0000 Subject: [hibernate-dev] Hibernate ORM - next steps In-Reply-To: References: <5568947D.5000104@redhat.com> Message-ID: > > If we come to the conclusion that tree transformation is vital, then yes, > I don't see another way than going with Antlr3 (see my reply to the other > thread on that). Apart from looking into alternatives to Antlr altogether, > but I don't feel that's a realistic option. > Lets keep this discussion on that other thread then, since thats likely to get very detailed. In general, +1 for addressing the parser topic. The changes we have made > for OGM feel a bit ad-hoc-ish to me and I think it makes sense to revisit > this sooner than later to make sure we are (or get) on the right trick. > Can you explain this some more? Ad-hoc how? What is the wrong track? Wrt. to other agenda items, has Java 8 support been brought up already? > Apart from some "simple" things (like support for Java 8 date/time types), > looking at APIs in the light of Lambdas seems very promising. > Its been brought up. Jakub Narloch and others have brought up interest in continuing that work beyond what hibernate-java8 does atm. Obviously the hurdle is jumping from Java 6 to Java 8 as the baseline for development. Also the criteria API has potential. Things like the already mentioned > QueryDSL or JOOQ seem much nicer on the user wrt. to query creation in a > fluent style. Not sure how we could embrace that, but some usability > improvements in that area would be nice. 
> Are you the one who brought this up in Amsterdam? No one else really seems to know what this is about. Maybe you could make some specific proposals so we could understand better what you are proposing? I do see us extending the JPA Criteria contracts for added functionality. So what you are thinking possibly has a hook there. From sanne at hibernate.org Tue Jun 9 10:30:11 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Tue, 9 Jun 2015 15:30:11 +0100 Subject: [hibernate-dev] HHH-9857 - Reuse of EntityEntry for bytecode enhanced read-only reference cached entities In-Reply-To: <5576E10D.5090103@redhat.com> References: <5576D5FA.6050606@redhat.com> <5576E10D.5090103@redhat.com> Message-ID: On 9 June 2015 at 13:50, John O'Hara wrote: > On 09/06/15 13:14, Sanne Grinovero wrote: >> >> There are lots of setters on EntityEntry, but sharing it would require >> at least the implementation to be fully immutable to be threadsafe. >> >> I see three options for the custom EntityEntry implementation: >> - simply ignore any write method by implementing each method as a no-op >> - throw exceptions on any write method >> - split the EntityEntry interface into a parent interface >> "ReadOnlyEntityEntry" which doesn't have any such method >> >> The first option seems the easy way out but we would not notice any >> unintended / illegal usage; I'd prefer the third one but I'm not sure >> which impact it would have, seems like a large change that needs >> experimenting. >> >> I just noticed an ImmutableEntityEntry implementation exists now, but >> it's not actually immutable? That should be fixed, at very least the >> javadoc to explain what that class purpose is? > > Yes, an ImmutableEntityEntry instance will be created for the EntityEntry in > our use case (e.g. when this would be a performance benefit), so we can test > for instanceof ImmutableEntityEntry or add no-ops for write operations for > this implementation. > > The object isn't immutable as the state field changes during the lifetime of > the object. This question was asked by Steve, i.e. whether it was the Entity > that was immutable or the EntityEntry that was immutable, I thought I had > replied with my thoughts but I can not find my response to that question. > Should clarify this in the javadoc. I think that ImmutbaleEntityEntry should > refer to the Entity being immutable. Possibly, but then you can't reuse the same instance across multiple Session(s). If your goal is to completely avoid allocating new instances of ImmutbaleEntityEntry, you have to make it really immutable, or play with synchronized and volatiles.. wouldn't we be adding a worse problem in that case? I guess we could try and measure, but if we can find a way to make it completely immutable that would be easier. Sanne > > Thanks, > > John > >> >> Thanks, >> Sanne >> >> On 9 June 2015 at 13:03, John O'Hara wrote: >>> >>> For our use case, bytecode enhanced reference cached immutable entities, >>> our top object for memory allocation is EntityEntry. >>> >>> We see an EntityEntry object created every time an immutable entity is >>> added to a persistence context. >>> >>> In our use case, where we know the entity is immutable and we already >>> have an EntityEntry cached, can we re-use the EntityEntry between >>> sessions? This would reduce the allocation rate of EntityEntry in our >>> use case by ~50%. 
>>> >>> >>> -- >>> John O'Hara >>> johara at redhat.com >>> >>> JBoss, by Red Hat >>> Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod >>> Street, Windsor, Berkshire, SI4 1TE, United Kingdom. >>> Registered in UK and Wales under Company Registration No. 3798903 >>> Directors: Michael Cunningham (USA), Charlie Peters (USA), Matt Parsons >>> (USA) and Michael O'Neill (Ireland). >>> >>> _______________________________________________ >>> hibernate-dev mailing list >>> hibernate-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev > > > > -- > John O'Hara > johara at redhat.com > > JBoss, by Red Hat > Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, > Windsor, Berkshire, SI4 1TE, United Kingdom. > Registered in UK and Wales under Company Registration No. 3798903 Directors: > Michael Cunningham (USA), Charlie Peters (USA), Matt Parsons (USA) and > Michael O'Neill (Ireland). > From johara at redhat.com Tue Jun 9 10:34:33 2015 From: johara at redhat.com (John O'Hara) Date: Tue, 09 Jun 2015 15:34:33 +0100 Subject: [hibernate-dev] HHH-9857 - Reuse of EntityEntry for bytecode enhanced read-only reference cached entities In-Reply-To: References: <5576D5FA.6050606@redhat.com> <5576E10D.5090103@redhat.com> Message-ID: <5576F979.7080803@redhat.com> On 09/06/15 15:30, Sanne Grinovero wrote: > On 9 June 2015 at 13:50, John O'Hara wrote: >> On 09/06/15 13:14, Sanne Grinovero wrote: >>> There are lots of setters on EntityEntry, but sharing it would require >>> at least the implementation to be fully immutable to be threadsafe. >>> >>> I see three options for the custom EntityEntry implementation: >>> - simply ignore any write method by implementing each method as a no-op >>> - throw exceptions on any write method >>> - split the EntityEntry interface into a parent interface >>> "ReadOnlyEntityEntry" which doesn't have any such method >>> >>> The first option seems the easy way out but we would not notice any >>> unintended / illegal usage; I'd prefer the third one but I'm not sure >>> which impact it would have, seems like a large change that needs >>> experimenting. >>> >>> I just noticed an ImmutableEntityEntry implementation exists now, but >>> it's not actually immutable? That should be fixed, at very least the >>> javadoc to explain what that class purpose is? >> Yes, an ImmutableEntityEntry instance will be created for the EntityEntry in >> our use case (e.g. when this would be a performance benefit), so we can test >> for instanceof ImmutableEntityEntry or add no-ops for write operations for >> this implementation. >> >> The object isn't immutable as the state field changes during the lifetime of >> the object. This question was asked by Steve, i.e. whether it was the Entity >> that was immutable or the EntityEntry that was immutable, I thought I had >> replied with my thoughts but I can not find my response to that question. >> Should clarify this in the javadoc. I think that ImmutbaleEntityEntry should >> refer to the Entity being immutable. > Possibly, but then you can't reuse the same instance across multiple Session(s). > If your goal is to completely avoid allocating new instances of > ImmutbaleEntityEntry, you have to make it really immutable, or play > with synchronized and volatiles.. wouldn't we be adding a worse > problem in that case? I guess we could try and measure, but if we can > find a way to make it completely immutable that would be easier. 
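(For concreteness only -- this is a sketch of what "completely immutable" could mean, not a patch from this thread, and it deliberately ignores most of the real EntityEntry contract; org.hibernate.engine.spi.Status is the real enum, everything else here is invented:)

    import java.io.Serializable;
    import org.hibernate.engine.spi.Status;

    // Illustrative stand-in only; it does not implement the actual EntityEntry interface.
    public final class SharedImmutableEntityEntry {
        private final String entityName;
        private final Serializable id;
        private final Object[] loadedState;   // assumed to never be mutated by callers

        public SharedImmutableEntityEntry(String entityName, Serializable id, Object[] loadedState) {
            this.entityName = entityName;
            this.id = id;
            this.loadedState = loadedState;
        }

        public String getEntityName()    { return entityName; }
        public Serializable getId()      { return id; }
        public Object[] getLoadedState() { return loadedState; }

        // a shared instance can only ever be READ_ONLY, so no status field is needed
        public Status getStatus() { return Status.READ_ONLY; }

        // mutators from the real contract would either be no-ops or fail fast;
        // failing fast at least makes unintended usage visible
        public void setStatus(Status status) {
            throw new UnsupportedOperationException("entry is shared across sessions and read-only");
        }
    }

With every field final there is nothing to synchronize, which is what would allow one instance to be handed to any number of sessions.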
Yes, the goal would be to avoid allocating new instances and reusing across sessions. I think synchronization would be v. expensive in this area of code. My instinct would be try and make ImmutbaleEntityEntry immutable to ensure thread safety, but would need to think about current state changes e.g. the status field changing LOADING->READ_ONLY > Sanne > >> Thanks, >> >> John >> >>> Thanks, >>> Sanne >>> >>> On 9 June 2015 at 13:03, John O'Hara wrote: >>>> For our use case, bytecode enhanced reference cached immutable entities, >>>> our top object for memory allocation is EntityEntry. >>>> >>>> We see an EntityEntry object created every time an immutable entity is >>>> added to a persistence context. >>>> >>>> In our use case, where we know the entity is immutable and we already >>>> have an EntityEntry cached, can we re-use the EntityEntry between >>>> sessions? This would reduce the allocation rate of EntityEntry in our >>>> use case by ~50%. >>>> >>>> >>>> -- >>>> John O'Hara >>>> johara at redhat.com >>>> >>>> JBoss, by Red Hat >>>> Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod >>>> Street, Windsor, Berkshire, SI4 1TE, United Kingdom. >>>> Registered in UK and Wales under Company Registration No. 3798903 >>>> Directors: Michael Cunningham (USA), Charlie Peters (USA), Matt Parsons >>>> (USA) and Michael O'Neill (Ireland). >>>> >>>> _______________________________________________ >>>> hibernate-dev mailing list >>>> hibernate-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >> >> >> -- >> John O'Hara >> johara at redhat.com >> >> JBoss, by Red Hat >> Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, >> Windsor, Berkshire, SI4 1TE, United Kingdom. >> Registered in UK and Wales under Company Registration No. 3798903 Directors: >> Michael Cunningham (USA), Charlie Peters (USA), Matt Parsons (USA) and >> Michael O'Neill (Ireland). >> -- John O'Hara johara at redhat.com JBoss, by Red Hat Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom. Registered in UK and Wales under Company Registration No. 3798903 Directors: Michael Cunningham (USA), Charlie Peters (USA), Matt Parsons (USA) and Michael O'Neill (Ireland). From steve at hibernate.org Tue Jun 9 10:51:21 2015 From: steve at hibernate.org (Steve Ebersole) Date: Tue, 09 Jun 2015 14:51:21 +0000 Subject: [hibernate-dev] Query handling : Antlr 3 versus Antlr 4 In-Reply-To: References: Message-ID: On Tue, Jun 9, 2015 at 5:50 AM Sanne Grinovero wrote: Do you mean that you would be ok to "navigate" all the [DOT] nodes to > get to the decorated attachments? > In that case while you might be fine to translate each fragment into a > different fragment, it's not straight forward to transform it into a > different structure, say with sub-trees in different orders or nodes > which don't have a 1:1 match. > It's of course doable if you are filling in your own builder while > navigating these (like we do with the Lucene DSL output), but it > doesn't help you with multiple phases which is what Steve is pointing > out. > > I would highly prefer to feed the semantic representation of the tree > to our query generating backends, especially so if we could all share > the same initial smart phases to do some basic validations and > optimisations DRY. 
But then the consuming backends will likely have > some additional validations and optimisations which need to be > backend-specific (dialect-specific or technology specific in case of > OGM). > > Steve, you mentioned that ANTLR4 handles transformations but only > when it's the last step. What prevents us to chain multiple such transformations, applying the "last step" approach multiple times? > I didn't look at it at all, so take this just as an high level, > conceptual question. I guess one would need to clearly define all > intermediate data types rather than have ANTLR generate them like it > does with tokens, but that could be the lesser trouble? > The "problem" is that the Antlr listeners/visitors are always based on the original parse tree. The transformation is not the concern. The concern is how you match up the listener/visitor calls based on the original parse tree into actions on the semantic tree. The thing to keep in mind is that the Antlr listeners/visitors are based on that parse tree. Going back to the Customer-headquarters query and the original parse and semantic trees, given a call to process the "dot node" that represents the root of the select expression, how do you "map" that to the attributeReference node in the semantic tree? Once the trees start to deviate you have basically lost the ability to drive processing of that "subsequent tree structure" based on Antlr. At least easily. There are probably some approaches we could use to allow that. Off the top of my head, I could see assigning each and every node in the parse tree a uid and then maintaining a "node replacement map" based on those uids. But that all seems like a lot of work. Another option I have seen Antlr folks mention is to write a second grammar defined based on your semantic tree. It would produce listeners/visitors based on the structure we ultimately expect in the semantic tree. Antlr would just not provide use the tree re-writing anymore; we'd do that manually. Other than that, everything else (10,000 foot view) should remain the same. Of course devils's in the details :) We could also, which I think is what you are suggesting Sanne, have the query parser project produce the semantic tree and then it would just be up to the consumers of that semantic tree to do with it whats it wants. Combining this with the idea of a second grammar for the semantic tree, we could say that the query parser project provides: 1) Antlr 4 listeners and visitors based on that semantic tree grammar 2) An API for converting HQL to such semantic trees. From steve at hibernate.org Tue Jun 9 11:07:49 2015 From: steve at hibernate.org (Steve Ebersole) Date: Tue, 09 Jun 2015 15:07:49 +0000 Subject: [hibernate-dev] HHH-9857 - Reuse of EntityEntry for bytecode enhanced read-only reference cached entities In-Reply-To: <5576F979.7080803@redhat.com> References: <5576D5FA.6050606@redhat.com> <5576E10D.5090103@redhat.com> <5576F979.7080803@redhat.com> Message-ID: Sanne, the phrase ImmutableEntityEntry was meant to convey that it is the entry corresponding to an immutable entity. Maybe something like NonUpdateableEntityEntry would be better. At any rate the javadocs need to be redone there. The javadocs on ImmutableEntityEntry were left there from when it was just one class. Those need to be moved to the contract. The javadocs for ImmutableEntityEntry should discuss all that we have discussed here. Unfortunately making EntityEntry itself completely immutable is at direct odds with its entire purpose for existence. 
It is meant to hold the state for the entity in regards to its association with the PC. Now, that said, in the case of non-mutable entities much of that state is irrelevant. As far as status, non-mutable entities are always considered READ_ONLY, aside from the cases or creating or deleting them. And I guess loading them too. Personally I would vote that we simply override more of the methods here: 1) setStatus - make sure the incoming status is "allowed" much like we do for lockmode 2) postUpdate - throw exception 3) setReadOnly - only allow readOnly==true and just set status On Tue, Jun 9, 2015 at 9:35 AM John O'Hara wrote: > On 09/06/15 15:30, Sanne Grinovero wrote: > > On 9 June 2015 at 13:50, John O'Hara wrote: > >> On 09/06/15 13:14, Sanne Grinovero wrote: > >>> There are lots of setters on EntityEntry, but sharing it would require > >>> at least the implementation to be fully immutable to be threadsafe. > >>> > >>> I see three options for the custom EntityEntry implementation: > >>> - simply ignore any write method by implementing each method as a > no-op > >>> - throw exceptions on any write method > >>> - split the EntityEntry interface into a parent interface > >>> "ReadOnlyEntityEntry" which doesn't have any such method > >>> > >>> The first option seems the easy way out but we would not notice any > >>> unintended / illegal usage; I'd prefer the third one but I'm not sure > >>> which impact it would have, seems like a large change that needs > >>> experimenting. > >>> > >>> I just noticed an ImmutableEntityEntry implementation exists now, but > >>> it's not actually immutable? That should be fixed, at very least the > >>> javadoc to explain what that class purpose is? > >> Yes, an ImmutableEntityEntry instance will be created for the > EntityEntry in > >> our use case (e.g. when this would be a performance benefit), so we can > test > >> for instanceof ImmutableEntityEntry or add no-ops for write operations > for > >> this implementation. > >> > >> The object isn't immutable as the state field changes during the > lifetime of > >> the object. This question was asked by Steve, i.e. whether it was the > Entity > >> that was immutable or the EntityEntry that was immutable, I thought I > had > >> replied with my thoughts but I can not find my response to that > question. > >> Should clarify this in the javadoc. I think that ImmutbaleEntityEntry > should > >> refer to the Entity being immutable. > > Possibly, but then you can't reuse the same instance across multiple > Session(s). > > If your goal is to completely avoid allocating new instances of > > ImmutbaleEntityEntry, you have to make it really immutable, or play > > with synchronized and volatiles.. wouldn't we be adding a worse > > problem in that case? I guess we could try and measure, but if we can > > find a way to make it completely immutable that would be easier. > Yes, the goal would be to avoid allocating new instances and reusing > across sessions. I think synchronization would be v. expensive in this > area of code. My instinct would be try and make ImmutbaleEntityEntry > immutable to ensure thread safety, but would need to think about current > state changes e.g. the status field changing LOADING->READ_ONLY > > > Sanne > > > >> Thanks, > >> > >> John > >> > >>> Thanks, > >>> Sanne > >>> > >>> On 9 June 2015 at 13:03, John O'Hara wrote: > >>>> For our use case, bytecode enhanced reference cached immutable > entities, > >>>> our top object for memory allocation is EntityEntry. 
> >>>> > >>>> We see an EntityEntry object created every time an immutable entity is > >>>> added to a persistence context. > >>>> > >>>> In our use case, where we know the entity is immutable and we already > >>>> have an EntityEntry cached, can we re-use the EntityEntry between > >>>> sessions? This would reduce the allocation rate of EntityEntry in our > >>>> use case by ~50%. > >>>> > >>>> > >>>> -- > >>>> John O'Hara > >>>> johara at redhat.com > >>>> > >>>> JBoss, by Red Hat > >>>> Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod > >>>> Street, Windsor, Berkshire, SI4 1TE, United Kingdom. > >>>> Registered in UK and Wales under Company Registration No. 3798903 > >>>> Directors: Michael Cunningham (USA), Charlie Peters (USA), Matt > Parsons > >>>> (USA) and Michael O'Neill (Ireland). > >>>> > >>>> _______________________________________________ > >>>> hibernate-dev mailing list > >>>> hibernate-dev at lists.jboss.org > >>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev > >> > >> > >> -- > >> John O'Hara > >> johara at redhat.com > >> > >> JBoss, by Red Hat > >> Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod > Street, > >> Windsor, Berkshire, SI4 1TE, United Kingdom. > >> Registered in UK and Wales under Company Registration No. 3798903 > Directors: > >> Michael Cunningham (USA), Charlie Peters (USA), Matt Parsons (USA) and > >> Michael O'Neill (Ireland). > >> > > > -- > John O'Hara > johara at redhat.com > > JBoss, by Red Hat > Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod > Street, Windsor, Berkshire, SI4 1TE, United Kingdom. > Registered in UK and Wales under Company Registration No. 3798903 > Directors: Michael Cunningham (USA), Charlie Peters (USA), Matt Parsons > (USA) and Michael O'Neill (Ireland). > > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From gunnar at hibernate.org Tue Jun 9 11:14:50 2015 From: gunnar at hibernate.org (Gunnar Morling) Date: Tue, 9 Jun 2015 17:14:50 +0200 Subject: [hibernate-dev] Query handling : Antlr 3 versus Antlr 4 In-Reply-To: References: Message-ID: 2015-06-09 16:02 GMT+02:00 Steve Ebersole : > On Tue, Jun 9, 2015 at 3:57 AM Gunnar Morling > wrote: > > What I like about the Antlr4 approach is the fact that you don't need a >> set of several quite similar grammars as you'd do with the tree >> transformation approach. Also using the current version of Antlr instead of >> 3 appears attractive to me wrt. to bugfixes and future development of the >> tool. >> > > Understand that we would all "like" to use Antlr 4 for many reasons, > myself included. But it has to work for our needs. There are just so many > open questions (for me) as to whether that is the case. > Sure, that's what we need to find out. 
>> Based on what I understand from your discussions on the Antlr mailing >> list, I'd assume the parse tree and the external state it references to >> look roughly like so (---> indicates a reference to state built up during >> sub-sequential walks, maybe in some external "table", maybe stored within >> the (typed) tree nodes themselves): >> >> [QUERY] >> [SELECT] >> [ATTRIBUTE_REF] ---> AttributeReference("", "code") >> [DOT] >> [DOT] >> [DOT] >> [IDENT, "c"] >> [IDENT, "headquarters"] >> [IDENT, "state"] >> [IDENT, "code"] >> [FROM] >> [SPACE] >> [SPACE_ROOT] ---> InnerJoin( InnerJoin ( PersisterRef( "c", >> "com.acme.Customer" ), TableRef ( "", "headquarters" ) ), TableRef ( >> "", "state" ) ) ) >> [IDENT, "Customer"] >> [IDENT, "c"] >> >> I.e. instead of transforming the tree itself, the state required for >> output generation would be added as "decorators" to nodes of the original >> parse tree itself. That's just the basic idea as I understand it, surely >> the specific types of the decorator elements (AttributeReference, >> InnerJoin etc.) may look different. During "query rendering" we'd have >> to inspect the decorator state of the parse tree nodes and interpret it >> accordingly. >> > > Well, see you do something "tricky" here that is actually one of my > concerns with Antlr 4 :) You mix a parse tree and a semantic tree. > Specifically this part of your tree: > > [ATTRIBUTE_REF] ---> AttributeReference("", "code") > [DOT] > [DOT] > [DOT] > [IDENT, "c"] > [IDENT, "headquarters"] > [IDENT, "state"] > [IDENT, "code"] > > The idea of "ATTRIBUTE_REF" is a semantic concept. The DOT-IDENT struct > is your parse tree. Antlr 4 does allow mixing these based on left > refactoring of the rules, *but* there is an assumption there... that the > branches in such a left-refactored rule can be resolved unambiguously. I > am not so sure we can do that. > Yes, indeed I cheated here a bit. Probably it should be the following instead: [DOT] ---> AttributeReference("", "code") [DOT] [DOT] [IDENT, "c"] [IDENT, "headquarters"] [IDENT, "state"] [IDENT, "code"] Or maybe something like: [SELECTION_PARTICLE] ---> AttributeReference("", "code") [DOT] [DOT] [DOT] [IDENT, "c"] [IDENT, "headquarters"] [IDENT, "state"] [IDENT, "code"] Where SELECTION_PARTICLE would be an abstract representation of anything that can be selected (attribute ref, Java literal ref etc.) and the decorator element added in a later pass would specify its actual semantics based on the alias definitions etc. discovered before. Bottom line being, that decorators providing semantics are attached to the nodes of the parse tree based on information gathered in previous passes. In simpler terms... Antlr 4 needs you to be able to apply those semantic > resolutions (attributeRef versus javaLiteralRef versus > oraclePackagedProcedure versus ...) up front. > > So take the input that produces that tree: select c.headquarters.state.code > > Syntactically that dot-ident structure could represent any number of > things. And semantically we just simply do not have enough information. > We *could* eliminate it being a javaLiteralRef if we > made javaLiteralRef the highest precedence branch in the left-factored rule > that produces this, but that has serious drawbacks: > 1) we are checking each and every dot-ident path as a possible > javaLiteralRef first, which means reflection (perf) > 2) it is not a fool-proof approach. The problem is that javaLiteralRef > should really have very low precedence. 
There are conceivably cases where > the expression could resolve to either a javaLiteralRef or an attributeRef, > and in those cases the resolution should be routed through attributeRef not > javaLiteralRef > > The ultimate problem there is that we cannot possibly know much of the > information we need to know for proper semantic analysis until after we > have seen the FROM clause. We got around that with older Antlr versions > specifically via tree-rewriting: we re-write the tree to "hoist" FROM > before the other clauses. > > > So I believe the issue of alias resolution and implicit join conversion >> could be handled without tree transformations (at least conceptually, I >> could not code an actual implementation out of my head right away). But >> maybe there are other cases where tree transformations are more strictly >> needed? >> > > Well I just illustrated above how that is actually a problem that does > need either tree transformations or at least delayed processing of the > sub-tree. > > Also get out of your head this idea that we can encode the semantic > resolution of dot-ident paths into the tree. We simply will not be able to > (I believe). > Not into the tree itself, but we can encode that semantic resolution into decorators (node attachments). > And I think that starts to show my reservations about Antlr 4. Basically > every pass over this tree we will need to deal with [[DOT][IDENT]] as > opposed to [ATTRIBUTE_REFERENCE] > Yes, they would deal with [[DOT][IDENT]] nodes but would benefit from semantic decorators attached previously. During rendering I would expect mainly those attachments to be of importance for the query creation. Admittedly, that's all quite "high level", but so far it seems doable to me in principle. It doesn't answer of course actual tree transformations such as (x + 0) -> x. I am not sure whether there are cases like this. From steve at hibernate.org Tue Jun 9 11:47:22 2015 From: steve at hibernate.org (Steve Ebersole) Date: Tue, 09 Jun 2015 15:47:22 +0000 Subject: [hibernate-dev] Query handling : Antlr 3 versus Antlr 4 In-Reply-To: References: Message-ID: On Tue, Jun 9, 2015 at 10:14 AM Gunnar Morling wrote: Yes, indeed I cheated here a bit. Probably it should be the following > instead: > > [DOT] ---> AttributeReference("", "code") > [DOT] > [DOT] > [IDENT, "c"] > [IDENT, "headquarters"] > [IDENT, "state"] > [IDENT, "code"] > How do you identify one DOT as referring to something else versus any of the other DOTs? Or maybe something like: > > [SELECTION_PARTICLE] ---> AttributeReference("", "code") > [DOT] > [DOT] > [DOT] > [IDENT, "c"] > [IDENT, "headquarters"] > [IDENT, "state"] > [IDENT, "code"] > > Where SELECTION_PARTICLE would be an abstract representation of anything > that can be selected (attribute ref, Java literal ref etc.) and the > decorator element added in a later pass would specify its actual semantics > based on the alias definitions etc. discovered before. > > Bottom line being, that decorators providing semantics are attached to the > nodes of the parse tree based on information gathered in previous passes. > And what does that look like in real, practical terms? That's what concerns me :) I don't know, and you are just speaking in generalities. So what does that look like in practice? Not into the tree itself, but we can encode that semantic resolution into > decorators (node attachments). > Again, what do these "node attachments" look like in practice? I have zero clue and based on my discussions with Antlr folks its not pretty. 
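(One concrete shape such per-node attachments could take -- offered purely as a sketch, not as what Gunnar actually has in mind -- is the ParseTreeProperty side-table that ships with the Antlr 4 runtime; SemanticNode below is a hypothetical type:)

    import org.antlr.v4.runtime.tree.ParseTree;
    import org.antlr.v4.runtime.tree.ParseTreeProperty;

    // Sketch: a side-table keyed on parse-tree node identity. An early pass (say, over
    // the FROM clause) attaches semantic objects; later passes read them back instead of
    // re-interpreting the raw [DOT][IDENT] structure.
    public class SemanticAnnotations {
        private final ParseTreeProperty<SemanticNode> semantics = new ParseTreeProperty<>();

        public void attach(ParseTree node, SemanticNode meaning) {
            semantics.put(node, meaning);
        }

        public SemanticNode semanticsOf(ParseTree node) {
            return semantics.get(node);   // null if no earlier pass attached anything
        }
    }

The parse tree itself stays untouched, which is the decorator idea; the open question in the thread is whether having every later pass go through such lookups is workable.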
Maybe I misunderstand. But if you are proposing this approach, I would think you should have an idea of how it would look practically-speaking :) Maybe this is the way to go, I just need to see what this looks like. Yes, they would deal with [[DOT][IDENT]] nodes but would benefit from > semantic decorators attached previously. During rendering I would expect > mainly those attachments to be of importance for the query creation. > > Admittedly, that's all quite "high level", but so far it seems doable to > me in principle. It doesn't answer of course actual tree transformations > such as (x + 0) -> x. I am not sure whether there are cases like this. > Yes it is all extremely high-level. That is my concern. Principle and practice are often 2 very different things. I plan on spending some time taking my hibernate-antlr4-poc project and expanding it specifically to try the "second grammar" approach and see what practical difficulties that shakes out. Would you be willing to do the same for this decorated approach? Then we'd have concrete stuff to compare and base a decision on. Also, `(x + 0) -> x` is actually a quite simple case. Ours is much more complicated. In analyzing `c.headquarters.state.code` in the SELECT clause we need a few things to happen in a few different parts of the tree. We need: 1) `c.headquarters.state` to be transformed into 2 "implicit joins" in the FROM clause 2) we need to replace `c.headquarters.state.code` as `{implicit-alias}.code` in the SELECT 3) register `c.headquarters` and `c.headquarters.state` as implicit join paths (additional implicit joins using these paths should re-use the same joins). From steve at hibernate.org Tue Jun 9 12:33:16 2015 From: steve at hibernate.org (Steve Ebersole) Date: Tue, 09 Jun 2015 16:33:16 +0000 Subject: [hibernate-dev] JPA pessimisticLockScope.EXTENDED In-Reply-To: References: Message-ID: See my comment on the Jira. We can discuss this on IRC if you wish. On Tue, Jun 9, 2015 at 6:51 AM andrea boriero wrote: > Hi Steve, > > I'm getting crazy with https://hibernate.atlassian.net/browse/HHH-9636 > JPA pessimisticLockScope.EXTENDED > > 1. EntityManage#lock > 1. LockMode.PESSIMISTIC_WRITE without extended lock scope adds a > "for update" just to the parent entity. > 2. Setting the lock scope to Extended the lock is cascaded only if > the lock entity is detached because in > DefaultLockEventListener.onLock(LockEvent event) the cascadeOnLock(event, > persister, entity) is applyed only if EntityEntry entry = > source.getPersistenceContext().getEntry(entity) is null ,but anyway is not > applyed to Components like for the @ElementCollection in the issue > example. Not sure if this is the intended behavior. > 2. Entitymanager#createQuery() and EntityManager#find() > with LockMode.PESSIMISTIC_WRITE and scope Extended add the "for update" > just to the parent entity. > > Can you give me some help with this issue? Also a little explanation about > the intended behaviour of the PESSIMISTIC_WRITE and the scope is really > appreciated. The documentation is not so clear and i really want to > understand it. > > Thanks > > Andrea > > > From steve at hibernate.org Tue Jun 9 16:11:04 2015 From: steve at hibernate.org (Steve Ebersole) Date: Tue, 09 Jun 2015 20:11:04 +0000 Subject: [hibernate-dev] Query handling : Antlr 3 versus Antlr 4 In-Reply-To: References: Message-ID: So today I spent some time cleaning up the basic HQL parser. Personally I think it would be best if our 2 proof-of-concepts could share that first grammar. 
IMO that would make the differences between the 2 approaches more apparent. I will push those changes soon. It is not complete yet. But it covers most cases. On Tue, Jun 9, 2015 at 10:47 AM Steve Ebersole wrote: > On Tue, Jun 9, 2015 at 10:14 AM Gunnar Morling > wrote: > > Yes, indeed I cheated here a bit. Probably it should be the following >> instead: >> >> [DOT] ---> AttributeReference("", "code") >> [DOT] >> [DOT] >> [IDENT, "c"] >> [IDENT, "headquarters"] >> [IDENT, "state"] >> [IDENT, "code"] >> > > How do you identify one DOT as referring to something else versus any of > the other DOTs? > > > Or maybe something like: >> >> [SELECTION_PARTICLE] ---> AttributeReference("", "code") >> [DOT] >> [DOT] >> [DOT] >> [IDENT, "c"] >> [IDENT, "headquarters"] >> [IDENT, "state"] >> [IDENT, "code"] >> >> Where SELECTION_PARTICLE would be an abstract representation of anything >> that can be selected (attribute ref, Java literal ref etc.) and the >> decorator element added in a later pass would specify its actual semantics >> based on the alias definitions etc. discovered before. >> >> Bottom line being, that decorators providing semantics are attached to >> the nodes of the parse tree based on information gathered in previous >> passes. >> > > And what does that look like in real, practical terms? That's what > concerns me :) I don't know, and you are just speaking in generalities. > So what does that look like in practice? > > > Not into the tree itself, but we can encode that semantic resolution into >> decorators (node attachments). >> > > Again, what do these "node attachments" look like in practice? I have > zero clue and based on my discussions with Antlr folks its not pretty. > Maybe I misunderstand. But if you are proposing this approach, I would > think you should have an idea of how it would look practically-speaking :) > Maybe this is the way to go, I just need to see what this looks like. > > > Yes, they would deal with [[DOT][IDENT]] nodes but would benefit from >> semantic decorators attached previously. During rendering I would expect >> mainly those attachments to be of importance for the query creation. >> >> Admittedly, that's all quite "high level", but so far it seems doable to >> me in principle. It doesn't answer of course actual tree transformations >> such as (x + 0) -> x. I am not sure whether there are cases like this. >> > > Yes it is all extremely high-level. That is my concern. Principle and > practice are often 2 very different things. > > I plan on spending some time taking my hibernate-antlr4-poc project and > expanding it specifically to try the "second grammar" approach and see what > practical difficulties that shakes out. Would you be willing to do the > same for this decorated approach? Then we'd have concrete stuff to compare > and base a decision on. > > Also, `(x + 0) -> x` is actually a quite simple case. Ours is much more > complicated. In analyzing `c.headquarters.state.code` in the SELECT clause > we need a few things to happen in a few different parts of the tree. We > need: > 1) `c.headquarters.state` to be transformed into 2 "implicit joins" in > the FROM clause > 2) we need to replace `c.headquarters.state.code` as > `{implicit-alias}.code` in the SELECT > 3) register `c.headquarters` and `c.headquarters.state` as implicit join > paths (additional implicit joins using these paths should re-use the same > joins). 
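(To make the three requirements just quoted a little more concrete, here is a minimal sketch of the bookkeeping implied by point 3 -- every name is invented for illustration:)

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Sketch: hand out one alias per implicit join path, re-using it for repeated paths.
    public class ImplicitJoinRegistry {
        private final Map<String, String> aliasByPath = new LinkedHashMap<>();
        private int counter = 0;

        public String aliasFor(String path) {
            return aliasByPath.computeIfAbsent(path, p -> "_ij" + counter++);
        }
    }

    // For "select c.headquarters.state.code from Customer c":
    //   aliasFor("c.headquarters")        -> "_ij0"   (first implicit join)
    //   aliasFor("c.headquarters.state")  -> "_ij1"   (second implicit join, built from "_ij0")
    //   aliasFor("c.headquarters")        -> "_ij0"   (re-used, no new join created)
    // and the select expression is rewritten to "_ij1.code".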
> From sanne at hibernate.org Wed Jun 10 11:48:51 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Wed, 10 Jun 2015 16:48:51 +0100 Subject: [hibernate-dev] Released: Hibernate Search 5.3.0.Final Message-ID: The stable version of Hibernate Search containing all the faceting improvements is now released as Final: http://in.relation.to/Bloggers/HibernateSearch530FinalNowAvailable Regards, Sanne From gunnar at hibernate.org Wed Jun 10 11:49:18 2015 From: gunnar at hibernate.org (Gunnar Morling) Date: Wed, 10 Jun 2015 17:49:18 +0200 Subject: [hibernate-dev] Query handling : Antlr 3 versus Antlr 4 In-Reply-To: References: Message-ID: 2015-06-09 22:11 GMT+02:00 Steve Ebersole : > So today I spent some time cleaning up the basic HQL parser. Personally I > think it would be best if our 2 proof-of-concepts could share that first > grammar. IMO that would make the differences between the 2 approaches more > apparent. I will push those changes soon. > Ok, I can try and work on a PoC for the decorator-based approach. Not sure what the outcome will be, as your's, my understanding of it is roughly vague and high-level. But if it fails we can maybe settle for the Antlr3 approach with the better feeling of having investigated the alternative. Can you let me know when you have pushed your stuff? What does it do, render the query below as SQL? It is not complete yet. But it covers most cases. > > > On Tue, Jun 9, 2015 at 10:47 AM Steve Ebersole > wrote: > >> On Tue, Jun 9, 2015 at 10:14 AM Gunnar Morling >> wrote: >> >> Yes, indeed I cheated here a bit. Probably it should be the following >>> instead: >>> >>> [DOT] ---> AttributeReference("", "code") >>> [DOT] >>> [DOT] >>> [IDENT, "c"] >>> [IDENT, "headquarters"] >>> [IDENT, "state"] >>> [IDENT, "code"] >>> >> >> How do you identify one DOT as referring to something else versus any of >> the other DOTs? >> >> >> Or maybe something like: >>> >>> [SELECTION_PARTICLE] ---> AttributeReference("", "code") >>> [DOT] >>> [DOT] >>> [DOT] >>> [IDENT, "c"] >>> [IDENT, "headquarters"] >>> [IDENT, "state"] >>> [IDENT, "code"] >>> >>> Where SELECTION_PARTICLE would be an abstract representation of anything >>> that can be selected (attribute ref, Java literal ref etc.) and the >>> decorator element added in a later pass would specify its actual semantics >>> based on the alias definitions etc. discovered before. >>> >>> Bottom line being, that decorators providing semantics are attached to >>> the nodes of the parse tree based on information gathered in previous >>> passes. >>> >> >> And what does that look like in real, practical terms? That's what >> concerns me :) I don't know, and you are just speaking in generalities. >> So what does that look like in practice? >> >> >> Not into the tree itself, but we can encode that semantic resolution into >>> decorators (node attachments). >>> >> >> Again, what do these "node attachments" look like in practice? I have >> zero clue and based on my discussions with Antlr folks its not pretty. >> Maybe I misunderstand. But if you are proposing this approach, I would >> think you should have an idea of how it would look practically-speaking :) >> Maybe this is the way to go, I just need to see what this looks like. >> >> >> Yes, they would deal with [[DOT][IDENT]] nodes but would benefit from >>> semantic decorators attached previously. During rendering I would expect >>> mainly those attachments to be of importance for the query creation. 
>>> >>> Admittedly, that's all quite "high level", but so far it seems doable to >>> me in principle. It doesn't answer of course actual tree transformations >>> such as (x + 0) -> x. I am not sure whether there are cases like this. >>> >> >> Yes it is all extremely high-level. That is my concern. Principle and >> practice are often 2 very different things. >> >> I plan on spending some time taking my hibernate-antlr4-poc project and >> expanding it specifically to try the "second grammar" approach and see what >> practical difficulties that shakes out. Would you be willing to do the >> same for this decorated approach? Then we'd have concrete stuff to compare >> and base a decision on. >> >> Also, `(x + 0) -> x` is actually a quite simple case. Ours is much more >> complicated. In analyzing `c.headquarters.state.code` in the SELECT clause >> we need a few things to happen in a few different parts of the tree. We >> need: >> 1) `c.headquarters.state` to be transformed into 2 "implicit joins" in >> the FROM clause >> 2) we need to replace `c.headquarters.state.code` as >> `{implicit-alias}.code` in the SELECT >> 3) register `c.headquarters` and `c.headquarters.state` as implicit join >> paths (additional implicit joins using these paths should re-use the same >> joins). >> > From steve at hibernate.org Thu Jun 11 10:50:19 2015 From: steve at hibernate.org (Steve Ebersole) Date: Thu, 11 Jun 2015 14:50:19 +0000 Subject: [hibernate-dev] Query handling : Antlr 3 versus Antlr 4 In-Reply-To: References: Message-ID: In the re-write case we will have to decide between 2 courses. The problem lies in the fact that the listeners and visitors expect a tree specifically generated from the grammar that generated them. For HQL, say we have one grammar named HqlParser (like in the poc). The listeners and visitors built from HqlParser specifically expect the tree from HqlParser. The trees are typed. Attempting to use trees from one grammar in the listener/visitor from another grammar will not work. As far as re-writing that effectively means 2 options. If we want to have a second grammar for the "semantic query" we are going to have to re-write the entire tree based on that second grammar. I had thought we might be able to mix them. But that was based on my Antlr 2/3 understanding where the trees are de-typed. That approach will not work in Antlr 4. Its not a huge deal, but worth mentioning. The other option would be to encode the "semantically correct" rules into the original grammar (HqlParser) as a higher precedence than their parse tree corollary. This gets a little fugly. Consider again the `select c.headquarters.state.code` fragment. During the parse phase we need to accept any dotIdentifierPath as a selectable item. We simply do not know during parse what that represents. So for the parse phase, a `selectItem` rule (overly simplified) might look like: selectItem : dotIdentifierPath; In this approach we would re-write the tree "in place" during semantic analysis. So at some point we know that the given dotIdentifierPath represents a reference to a persistent attribute. So we'd alter that rule to look contain alternatives for each semantic possibility: selectItem : attributeReference | javaConstant | dotIdentifierPath; The rules attributeReference and javaConstant would never match during the parse phase. Again, this is fugly imo. On Wed, Jun 10, 2015 at 10:49 AM Gunnar Morling wrote: > 2015-06-09 22:11 GMT+02:00 Steve Ebersole : > >> So today I spent some time cleaning up the basic HQL parser. 
Personally >> I think it would be best if our 2 proof-of-concepts could share that first >> grammar. IMO that would make the differences between the 2 approaches more >> apparent. I will push those changes soon. >> > > Ok, I can try and work on a PoC for the decorator-based approach. Not sure > what the outcome will be, as your's, my understanding of it is roughly > vague and high-level. But if it fails we can maybe settle for the Antlr3 > approach with the better feeling of having investigated the alternative. > > Can you let me know when you have pushed your stuff? What does it do, > render the query below as SQL? > > It is not complete yet. But it covers most cases. >> >> >> On Tue, Jun 9, 2015 at 10:47 AM Steve Ebersole >> wrote: >> >>> On Tue, Jun 9, 2015 at 10:14 AM Gunnar Morling >>> wrote: >>> >>> Yes, indeed I cheated here a bit. Probably it should be the following >>>> instead: >>>> >>>> [DOT] ---> AttributeReference("", "code") >>>> [DOT] >>>> [DOT] >>>> [IDENT, "c"] >>>> [IDENT, "headquarters"] >>>> [IDENT, "state"] >>>> [IDENT, "code"] >>>> >>> >>> How do you identify one DOT as referring to something else versus any of >>> the other DOTs? >>> >>> >>> Or maybe something like: >>>> >>>> [SELECTION_PARTICLE] ---> AttributeReference("", "code") >>>> [DOT] >>>> [DOT] >>>> [DOT] >>>> [IDENT, "c"] >>>> [IDENT, "headquarters"] >>>> [IDENT, "state"] >>>> [IDENT, "code"] >>>> >>>> Where SELECTION_PARTICLE would be an abstract representation of >>>> anything that can be selected (attribute ref, Java literal ref etc.) and >>>> the decorator element added in a later pass would specify its actual >>>> semantics based on the alias definitions etc. discovered before. >>>> >>>> Bottom line being, that decorators providing semantics are attached to >>>> the nodes of the parse tree based on information gathered in previous >>>> passes. >>>> >>> >>> And what does that look like in real, practical terms? That's what >>> concerns me :) I don't know, and you are just speaking in generalities. >>> So what does that look like in practice? >>> >>> >>> Not into the tree itself, but we can encode that semantic resolution >>>> into decorators (node attachments). >>>> >>> >>> Again, what do these "node attachments" look like in practice? I have >>> zero clue and based on my discussions with Antlr folks its not pretty. >>> Maybe I misunderstand. But if you are proposing this approach, I would >>> think you should have an idea of how it would look practically-speaking :) >>> Maybe this is the way to go, I just need to see what this looks like. >>> >>> >>> Yes, they would deal with [[DOT][IDENT]] nodes but would benefit from >>>> semantic decorators attached previously. During rendering I would expect >>>> mainly those attachments to be of importance for the query creation. >>>> >>>> Admittedly, that's all quite "high level", but so far it seems doable >>>> to me in principle. It doesn't answer of course actual tree transformations >>>> such as (x + 0) -> x. I am not sure whether there are cases like this. >>>> >>> >>> Yes it is all extremely high-level. That is my concern. Principle and >>> practice are often 2 very different things. >>> >>> I plan on spending some time taking my hibernate-antlr4-poc project and >>> expanding it specifically to try the "second grammar" approach and see what >>> practical difficulties that shakes out. Would you be willing to do the >>> same for this decorated approach? Then we'd have concrete stuff to compare >>> and base a decision on. 
>>> >>> Also, `(x + 0) -> x` is actually a quite simple case. Ours is much >>> more complicated. In analyzing `c.headquarters.state.code` in the SELECT >>> clause we need a few things to happen in a few different parts of the >>> tree. We need: >>> 1) `c.headquarters.state` to be transformed into 2 "implicit joins" in >>> the FROM clause >>> 2) we need to replace `c.headquarters.state.code` as >>> `{implicit-alias}.code` in the SELECT >>> 3) register `c.headquarters` and `c.headquarters.state` as implicit >>> join paths (additional implicit joins using these paths should re-use the >>> same joins). >>> >> From sanne at hibernate.org Fri Jun 12 09:08:44 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Fri, 12 Jun 2015 14:08:44 +0100 Subject: [hibernate-dev] Blog / CI setup Message-ID: Hi all, the ci server was reconfigured to host our next.gen blog platform; I have now created a build job here: http://ci.hibernate.org/view/Website/job/staging.in.relation.to It's using the following build script: rake clean rake setup rake test gen[staging] && rsync -avh _site/ ci.hibernate.org:/var/www/staging-in.relation.to Which gets me this error: http://ci.hibernate.org/view/Website/job/staging.in.relation.to/3/console I copied this script from the job which builds www.hibernate.org, but I guess the incantation needs to be different in this case? I'm not using RVM, I was hoping we could get it work work without it. Should I just assume RVM is requirement? I did install the following RPMs: - gcc - make - ruby-devel - gcc-c++ - libxml2 - libxml2-devel - libxslt - libxslt-devel - rubygem-nokogiri And these gems: - rake - bundler (both, system wide) The system-wide ruby version is: ruby 2.1.6p336 (2015-04-13 revision 50298) [x86_64-linux] Thanks, Sanne From gunnar at hibernate.org Fri Jun 12 09:38:22 2015 From: gunnar at hibernate.org (Gunnar Morling) Date: Fri, 12 Jun 2015 15:38:22 +0200 Subject: [hibernate-dev] Blog / CI setup In-Reply-To: References: Message-ID: This seems not related to RVM. More to the fact that we don't have any RSpec tests ("_spec" dir doesn't exist): cannot load such file -- /.../staging.in.relation.to/_spec Hence "rake test" fails. Just running "rake gen[staging]" should do the trick. Btw. you might check out the job for staging.hibernate.org and its approach for keeping around the .bundle directory. > I did install the following RPMs: Did you actually have to newly install any of these? They should be already there as part of the Ansible slave set-up IIRC. 2015-06-12 15:08 GMT+02:00 Sanne Grinovero : > Hi all, > the ci server was reconfigured to host our next.gen blog platform; > I have now created a build job here: > http://ci.hibernate.org/view/Website/job/staging.in.relation.to > > It's using the following build script: > > rake clean > rake setup > rake test gen[staging] && rsync -avh _site/ > ci.hibernate.org:/var/www/staging-in.relation.to > > Which gets me this error: > http://ci.hibernate.org/view/Website/job/staging.in.relation.to/3/console > > I copied this script from the job which builds www.hibernate.org, but > I guess the incantation needs to be different in this case? > > I'm not using RVM, I was hoping we could get it work work without it. > Should I just assume RVM is requirement? 
> I did install the following RPMs: > - gcc > - make > - ruby-devel > - gcc-c++ > - libxml2 > - libxml2-devel > - libxslt > - libxslt-devel > - rubygem-nokogiri > > And these gems: > - rake > - bundler > > (both, system wide) > > The system-wide ruby version is: > ruby 2.1.6p336 (2015-04-13 revision 50298) [x86_64-linux] > > Thanks, > Sanne > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From sanne at hibernate.org Fri Jun 12 10:53:58 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Fri, 12 Jun 2015 15:53:58 +0100 Subject: [hibernate-dev] Blog / CI setup In-Reply-To: References: Message-ID: On 12 June 2015 at 14:38, Gunnar Morling wrote: > This seems not related to RVM. More to the fact that we don't have any RSpec > tests ("_spec" dir doesn't exist): > > cannot load such file -- /.../staging.in.relation.to/_spec > > Hence "rake test" fails. Just running "rake gen[staging]" should do the > trick. Btw. you might check out the job for staging.hibernate.org and its > approach for keeping around the .bundle directory. thanks! Looks like we're getting there. >> I did install the following RPMs: > > Did you actually have to newly install any of these? They should be already > there as part of the Ansible slave set-up IIRC. A couple were missing, so I've added them, although that was probably unrelated so I'm not sure if that was necessary :) > > > > > 2015-06-12 15:08 GMT+02:00 Sanne Grinovero : >> >> Hi all, >> the ci server was reconfigured to host our next.gen blog platform; >> I have now created a build job here: >> http://ci.hibernate.org/view/Website/job/staging.in.relation.to >> >> It's using the following build script: >> >> rake clean >> rake setup >> rake test gen[staging] && rsync -avh _site/ >> ci.hibernate.org:/var/www/staging-in.relation.to >> >> Which gets me this error: >> http://ci.hibernate.org/view/Website/job/staging.in.relation.to/3/console >> >> I copied this script from the job which builds www.hibernate.org, but >> I guess the incantation needs to be different in this case? >> >> I'm not using RVM, I was hoping we could get it work work without it. >> Should I just assume RVM is requirement? >> I did install the following RPMs: >> - gcc >> - make >> - ruby-devel >> - gcc-c++ >> - libxml2 >> - libxml2-devel >> - libxslt >> - libxslt-devel >> - rubygem-nokogiri >> >> And these gems: >> - rake >> - bundler >> >> (both, system wide) >> >> The system-wide ruby version is: >> ruby 2.1.6p336 (2015-04-13 revision 50298) [x86_64-linux] >> >> Thanks, >> Sanne >> _______________________________________________ >> hibernate-dev mailing list >> hibernate-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hibernate-dev > > From daltodavide at gmail.com Fri Jun 12 11:03:25 2015 From: daltodavide at gmail.com (Davide D'Alto) Date: Fri, 12 Jun 2015 16:03:25 +0100 Subject: [hibernate-dev] [WEBSITE] Jira workflow Message-ID: Hi, it seems thath the worflow for WEBSITE on JIRA does not include the state "PULL REQUEST SENT" (like in Search and OGM, for example) I'd like to have it so that I can have a quick overview from JIRA of the issues that are "almost" done. Would it be ok to add it? 
Cheers, Davide From hardy at hibernate.org Fri Jun 12 11:08:33 2015 From: hardy at hibernate.org (Hardy Ferentschik) Date: Fri, 12 Jun 2015 17:08:33 +0200 Subject: [hibernate-dev] Blog / CI setup In-Reply-To: References: Message-ID: <20150612150833.GA22142@Nineveh.lan> On Fri, Jun 12, 2015 at 02:08:44PM +0100, Sanne Grinovero wrote: > I copied this script from the job which builds www.hibernate.org, but > I guess the incantation needs to be different in this case? Correct. I think there will be more changes coming, since I am adjusting the build script. Also as Gunnar was saying, there are no rspec tests yet. We can adjust the rake targets as we go. > I'm not using RVM, I was hoping we could get it work work without it. Absolutely, no need for RVM at all. --Hardy -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 496 bytes Desc: not available Url : http://lists.jboss.org/pipermail/hibernate-dev/attachments/20150612/66aef027/attachment.bin From sanne at hibernate.org Fri Jun 12 11:12:51 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Fri, 12 Jun 2015 16:12:51 +0100 Subject: [hibernate-dev] [WEBSITE] Jira workflow In-Reply-To: References: Message-ID: sure! On 12 June 2015 at 16:03, Davide D'Alto wrote: > Hi, > it seems thath the worflow for WEBSITE on JIRA does not include the state > "PULL REQUEST SENT" (like in Search and OGM, for example) > > I'd like to have it so that I can have a quick overview from JIRA of the > issues that are "almost" done. > > Would it be ok to add it? > > Cheers, > Davide > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev From sanne at hibernate.org Fri Jun 12 14:56:28 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Fri, 12 Jun 2015 19:56:28 +0100 Subject: [hibernate-dev] Blog / CI setup In-Reply-To: <20150612150833.GA22142@Nineveh.lan> References: <20150612150833.GA22142@Nineveh.lan> Message-ID: Ok, so the staging website is up and running at: - http://staging.in.relation.to [provided you tune your hosts file to have that domain name point to the same IP as ci.hibernate.org] A similar setup is ready for in.relation.to as well, but I didn't push any preview to its location yet. It takes more than 10 minutes to perform the build, though these are powerful machines. Can we do anything about that? Am I missing some parameter? On 12 June 2015 at 16:08, Hardy Ferentschik wrote: > On Fri, Jun 12, 2015 at 02:08:44PM +0100, Sanne Grinovero wrote: >> I copied this script from the job which builds www.hibernate.org, but >> I guess the incantation needs to be different in this case? > > Correct. I think there will be more changes coming, since I am adjusting the > build script. Also as Gunnar was saying, there are no rspec tests yet. > We can adjust the rake targets as we go. > >> I'm not using RVM, I was hoping we could get it work work without it. > > Absolutely, no need for RVM at all. 
> > --Hardy From hardy at hibernate.org Fri Jun 12 15:35:55 2015 From: hardy at hibernate.org (Hardy Ferentschik) Date: Fri, 12 Jun 2015 21:35:55 +0200 Subject: [hibernate-dev] [WEBSITE] Jira workflow In-Reply-To: References: Message-ID: <20150612193555.GB22142@Nineveh.lan> Hi, > it seems thath the worflow for WEBSITE on JIRA does not include the state > "PULL REQUEST SENT" (like in Search and OGM, for example) > > I'd like to have it so that I can have a quick overview from JIRA of the > issues that are "almost" done. > > Would it be ok to add it? +1 from my side. While on it, you could check the team permissions. It seems I don't have the same permissions on WEBSITE as on the other projects. --Hardy -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 496 bytes Desc: not available Url : http://lists.jboss.org/pipermail/hibernate-dev/attachments/20150612/11e31710/attachment.bin From hardy at hibernate.org Fri Jun 12 16:34:06 2015 From: hardy at hibernate.org (Hardy Ferentschik) Date: Fri, 12 Jun 2015 22:34:06 +0200 Subject: [hibernate-dev] Blog / CI setup In-Reply-To: References: <20150612150833.GA22142@Nineveh.lan> Message-ID: <20150612203406.GC22142@Nineveh.lan> Hi, On Fri, Jun 12, 2015 at 07:56:28PM +0100, Sanne Grinovero wrote: > Ok, so the staging website is up and running at: > - http://staging.in.relation.to > [provided you tune your hosts file to have that domain name point to > the same IP as ci.hibernate.org] Sweet. Works for me. > It takes more than 10 minutes to perform the build, though these are > powerful machines. > Can we do anything about that? Am I missing some parameter? Hmm, that's indeed quite bad. Locally I lie around 1:50 for a clean build, but that does not include the setup (which really needs not to be run every time) and neither does the minifying get applied. I'll need to run a actual staging build to get some comparable times. More than 10 minutes sounds bad indeed. I'll take a look at the CI build. Something else is odd on the staging site. The links for the lists of posts per author or tag are broken. This works for me locally (w/ the development profile). Something else to look into. --Hardy -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 496 bytes Desc: not available Url : http://lists.jboss.org/pipermail/hibernate-dev/attachments/20150612/e28bf94d/attachment.bin From steve at hibernate.org Fri Jun 12 18:30:21 2015 From: steve at hibernate.org (Steve Ebersole) Date: Fri, 12 Jun 2015 22:30:21 +0000 Subject: [hibernate-dev] Query handling : Antlr 3 versus Antlr 4 In-Reply-To: References: Message-ID: I just pushed my initial work on performing some indexing of explicit from clauses. Essentially it takes the from clauses defined in the query and begins the massage process. From here I will start working on handling implicit from-clause-elements. Part of that however will require me being able to know what is an entity and whether an IDENTIFIER in the query represents an attribute in one (or more) of those from-clause-element entities. IIRC this needs to be different between consumers of this project as they do not always have persisters, etc. Previously we had discussed an API that all consumers could provide. I had developed org.hibernate.hql.ast.common.ParserContext and friends as a means to that end. However, the parser project is currently not using those. 
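(Roughly the shape such a consumer-provided contract could take -- a sketch only, with invented names; this is neither the old ParserContext API nor a concrete proposal:)

    // Sketch: the parser asks the consumer (ORM, OGM, ...) about the domain model
    // instead of holding persisters or metadata itself.
    public interface ConsumerContext {
        /** Is this name an entity (or import) the consumer knows about? */
        boolean isEntityName(String name);

        /** Does the named entity expose a persistent attribute with the given name? */
        boolean hasAttribute(String entityName, String attributeName);
    }

Which questions such a contract actually needs to answer is what the examples below work through.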
On the bright side that means we have a clean slate for how to do this! :) Part of the equation here is how we want certain things to work in terms of an input/output paradigm, specifically what the "parser" should output. Let's take some examples... 1) resolving possible unqualified attribute references. I have mentioned this one before. Take a query like `select ssn from Person`. There are 2 main choices here when it comes to processing the `ssn` select expression. It's a matter of precedence. The first option is to have ATTRIBUTE_REFERENCE have the lowest precedence. Here we'd try other possibilities first. Mainly that would entail trying it as various forms of a constant. If all those attempts fail we would *assume* that the expression is an ATTRIBUTE_REFERENCE. The assumption aspect is important. It means that would not be validated here. This approach would not require any form of API, but it has many downsides: it would require more expensive resolution and it could potentially hide a ATTRIBUTE_REFERENCE. IMO ATTRIBUTE_REFERENCE should have the higher precedence. The other option is to have such an API. This would allow the parser to ask the consumer whether the given identifier (`ssn`) is a persistent attribute of any of the available entities. If you can't tell, I am in big time favor of having such an API :) But I am open to discussions as to the other side. 2) Understanding capabilities. The principle here is understanding what is possible in different contexts based on the domain model being queried. For example, if we see a query like `select c.addresses.city from Company c` and if we know that `c.addresses` resolves to a persistent collection then we know that the following de-reference is invalid. So here it is a question of whether we want the parser to perform capability based validations for us. Again, I'd argue we do as otherwise each consumer ends up having to do these validations themselves. This is/was the intent of the org.hibernate.hql.ast.TypeDescriptor stuff I had developed there originally. So something like that. 3) We also need to decide how we want to handle polymorphic queries in this parser. For a query like `from Object` what do we ultimately want returned? Specifically how do we deal with the multi-valued java.lang.Object reference in whatever we send back from the parser? Because what we send out implies some things we need to send in (API). Anyway, the from clause parser is looking nice so far. On Thu, Jun 11, 2015 at 9:50 AM Steve Ebersole wrote: > In the re-write case we will have to decide between 2 courses. > > The problem lies in the fact that the listeners and visitors expect a tree > specifically generated from the grammar that generated them. For HQL, say > we have one grammar named HqlParser (like in the poc). The listeners and > visitors built from HqlParser specifically expect the tree from HqlParser. > The trees are typed. Attempting to use trees from one grammar in the > listener/visitor from another grammar will not work. > > As far as re-writing that effectively means 2 options. > > If we want to have a second grammar for the "semantic query" we are going > to have to re-write the entire tree based on that second grammar. I had > thought we might be able to mix them. But that was based on my Antlr 2/3 > understanding where the trees are de-typed. That approach will not work in > Antlr 4. Its not a huge deal, but worth mentioning. 
> > The other option would be to encode the "semantically correct" rules into > the original grammar (HqlParser) as a higher precedence than their parse > tree corollary. This gets a little fugly. Consider again the `select > c.headquarters.state.code` fragment. During the parse phase we need to > accept any dotIdentifierPath as a selectable item. We simply do not know > during parse what that represents. So for the parse phase, a `selectItem` > rule (overly simplified) might look like: > > selectItem : dotIdentifierPath; > > In this approach we would re-write the tree "in place" during semantic > analysis. So at some point we know that the given dotIdentifierPath > represents a reference to a persistent attribute. So we'd alter that rule > to look contain alternatives for each semantic possibility: > > selectItem : attributeReference | javaConstant | dotIdentifierPath; > > The rules attributeReference and javaConstant would never match during > the parse phase. > > Again, this is fugly imo. > > > On Wed, Jun 10, 2015 at 10:49 AM Gunnar Morling > wrote: > >> 2015-06-09 22:11 GMT+02:00 Steve Ebersole : >> >>> So today I spent some time cleaning up the basic HQL parser. Personally >>> I think it would be best if our 2 proof-of-concepts could share that first >>> grammar. IMO that would make the differences between the 2 approaches more >>> apparent. I will push those changes soon. >>> >> >> Ok, I can try and work on a PoC for the decorator-based approach. Not >> sure what the outcome will be, as your's, my understanding of it is roughly >> vague and high-level. But if it fails we can maybe settle for the Antlr3 >> approach with the better feeling of having investigated the alternative. >> >> Can you let me know when you have pushed your stuff? What does it do, >> render the query below as SQL? >> >> It is not complete yet. But it covers most cases. >>> >>> >>> On Tue, Jun 9, 2015 at 10:47 AM Steve Ebersole >>> wrote: >>> >>>> On Tue, Jun 9, 2015 at 10:14 AM Gunnar Morling >>>> wrote: >>>> >>>> Yes, indeed I cheated here a bit. Probably it should be the following >>>>> instead: >>>>> >>>>> [DOT] ---> AttributeReference("", "code") >>>>> [DOT] >>>>> [DOT] >>>>> [IDENT, "c"] >>>>> [IDENT, "headquarters"] >>>>> [IDENT, "state"] >>>>> [IDENT, "code"] >>>>> >>>> >>>> How do you identify one DOT as referring to something else versus any >>>> of the other DOTs? >>>> >>>> >>>> Or maybe something like: >>>>> >>>>> [SELECTION_PARTICLE] ---> AttributeReference("", "code") >>>>> [DOT] >>>>> [DOT] >>>>> [DOT] >>>>> [IDENT, "c"] >>>>> [IDENT, "headquarters"] >>>>> [IDENT, "state"] >>>>> [IDENT, "code"] >>>>> >>>>> Where SELECTION_PARTICLE would be an abstract representation of >>>>> anything that can be selected (attribute ref, Java literal ref etc.) and >>>>> the decorator element added in a later pass would specify its actual >>>>> semantics based on the alias definitions etc. discovered before. >>>>> >>>>> Bottom line being, that decorators providing semantics are attached to >>>>> the nodes of the parse tree based on information gathered in previous >>>>> passes. >>>>> >>>> >>>> And what does that look like in real, practical terms? That's what >>>> concerns me :) I don't know, and you are just speaking in generalities. >>>> So what does that look like in practice? >>>> >>>> >>>> Not into the tree itself, but we can encode that semantic resolution >>>>> into decorators (node attachments). >>>>> >>>> >>>> Again, what do these "node attachments" look like in practice? 
I have >>>> zero clue and based on my discussions with Antlr folks its not pretty. >>>> Maybe I misunderstand. But if you are proposing this approach, I would >>>> think you should have an idea of how it would look practically-speaking :) >>>> Maybe this is the way to go, I just need to see what this looks like. >>>> >>>> >>>> Yes, they would deal with [[DOT][IDENT]] nodes but would benefit from >>>>> semantic decorators attached previously. During rendering I would expect >>>>> mainly those attachments to be of importance for the query creation. >>>>> >>>>> Admittedly, that's all quite "high level", but so far it seems doable >>>>> to me in principle. It doesn't answer of course actual tree transformations >>>>> such as (x + 0) -> x. I am not sure whether there are cases like this. >>>>> >>>> >>>> Yes it is all extremely high-level. That is my concern. Principle and >>>> practice are often 2 very different things. >>>> >>>> I plan on spending some time taking my hibernate-antlr4-poc project and >>>> expanding it specifically to try the "second grammar" approach and see what >>>> practical difficulties that shakes out. Would you be willing to do the >>>> same for this decorated approach? Then we'd have concrete stuff to compare >>>> and base a decision on. >>>> >>>> Also, `(x + 0) -> x` is actually a quite simple case. Ours is much >>>> more complicated. In analyzing `c.headquarters.state.code` in the SELECT >>>> clause we need a few things to happen in a few different parts of the >>>> tree. We need: >>>> 1) `c.headquarters.state` to be transformed into 2 "implicit joins" in >>>> the FROM clause >>>> 2) we need to replace `c.headquarters.state.code` as >>>> `{implicit-alias}.code` in the SELECT >>>> 3) register `c.headquarters` and `c.headquarters.state` as implicit >>>> join paths (additional implicit joins using these paths should re-use the >>>> same joins). >>>> >>> From gbadner at redhat.com Mon Jun 15 16:26:56 2015 From: gbadner at redhat.com (Gail Badner) Date: Mon, 15 Jun 2015 16:26:56 -0400 (EDT) Subject: [hibernate-dev] TREAT operator and joined inheritance (HHH-9862) In-Reply-To: <2011779521.2267219.1434393440897.JavaMail.zimbra@redhat.com> Message-ID: <902223750.2378776.1434400016312.JavaMail.zimbra@redhat.com> JPA 2.1 shows examples of using multiple downcasts in a restriction: 4.4.9 Downcasting SELECT e FROM Employee e WHERE TREAT(e AS Exempt).vacationDays > 10 OR TREAT(e AS Contractor).hours > 100 6.5.7 Downcasting Example 3: CriteriaQuery q = cb.createQuery(Employee.class); Root e = q.from(Employee.class); q.where( cb.or(cb.gt(cb.treat(e, Exempt.class).get(Exempt_.vacationDays), 10), cb.gt(cb.treat(e, Contractor.class).get(Contractor_.hours), 100))); These don't work in Hibernate for joined inheritance because Hibernate uses an inner join for the downcasts. I've added a FailureExpected test case for this: https://github.com/hibernate/hibernate-orm/commit/1ec76887825bebda4c02ea2bc1590d374aa4415b IIUC, inner join is correct when TREAT is used in a JOIN clause. If TREAT is only used for restrictions in the WHERE clause, I *think* it should be an outer join. Is that correct? 
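To make sure we are talking about the same thing, here is the shape I mean,
using the entities from the spec example (just a sketch, assuming an
EntityManager in scope; the commented SQL only illustrates the join type, not
the exact fragment Hibernate renders):

List<Employee> results = entityManager.createQuery(
		"select e from Employee e"
				+ " where treat(e as Exempt).vacationDays > 10"
				+ "    or treat(e as Contractor).hours > 100",
		Employee.class )
		.getResultList();

// With joined inheritance we currently produce, roughly:
//     select ... from Employee e
//         inner join Exempt ex on ex.id = e.id
//         inner join Contractor c on c.id = e.id
//     where ex.vacationDays > 10 or c.hours > 100
// which cannot return anything, since no row can be in both sibling subclass
// tables. With outer joins the OR restriction would behave as the example intends:
//     select ... from Employee e
//         left outer join Exempt ex on ex.id = e.id
//         left outer join Contractor c on c.id = e.id
//     where ex.vacationDays > 10 or c.hours > 100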
HHH-9862 also mentions that Hibernate doesn't work properly when there are multiple select expressions using different downcasts, as in: CriteriaBuilder cb = entityManager.getCriteriaBuilder(); CriteriaQuery query = cb.createQuery(Object[].class); Root root = query.from(Pet.class); query.multiselect( root.get("id"), root.get("name"), cb.treat(root, Cat.class).get("felineProperty"), cb.treat(root, Dog.class).get("canineProperty") ); I don't think this should work, at least not with implicit joins. Is this valid? Thanks, Gail From guillaume.smet at gmail.com Tue Jun 16 06:24:39 2015 From: guillaume.smet at gmail.com (Guillaume Smet) Date: Tue, 16 Jun 2015 12:24:39 +0200 Subject: [hibernate-dev] ORM 5 - Default schema Message-ID: Hi, Still trying to get one of our applications starting with ORM 5. With Search 5.4.0.Beta1 and Spring 4.2.0.RC1, I'm now at the database schema validation phase. I think there's something fishy with the way a table is looked for when we're using specific schemas in our database. Some background: we use PostgreSQL, we remove the public schema and we create a schema which is the default schema for the user. Our Hibernate app doesn't know anything about the schema and it used to work fine with ORM 3 and 4. We like this configuration as the schema used is totally transparent for the application and we can play with it at the database level without changing the Hibernate configuration. The fact is that it doesn't work anymore with ORM 5. The problem is that, when the schema is null, ORM now considers that the schema should be "" (empty string) if we haven't set a default schema at the JDBC level. This leads to trying to find the tables in the "" schema which obviously fails. AFAICS, it's something specifically wanted as in NormalizingIdentifierHelperImpl.toMetaDataSchemaName, we have the following lines: if ( identifier == null ) { if ( jdbcEnvironment.getCurrentSchema() == null ) { return ""; } identifier = jdbcEnvironment.getCurrentSchema(); } IMHO, in the null case, if the current schema isn't specified, we should return null and let the database determine which schema to use instead of deciding that the schema is "". If it's null, the schema filter will not be considered and the schema resolution will be let to the database. Any thoughts? -- Guillaume From guillaume.smet at gmail.com Tue Jun 16 07:01:45 2015 From: guillaume.smet at gmail.com (Guillaume Smet) Date: Tue, 16 Jun 2015 13:01:45 +0200 Subject: [hibernate-dev] ORM 5 - Default schema In-Reply-To: References: Message-ID: FWIW, if I change the return ""; to return null;, I get my application to start \o/. I'll start testing the application more in depth. FWIW, I don't know if it's something normal but AvailableSettings.DEFAULT_SCHEMA is not used in the constructor of JdbcEnvironmentImpl used when JDBC is available (I first tried to fix the issue by using this setting). On Tue, Jun 16, 2015 at 12:24 PM, Guillaume Smet wrote: > Hi, > > Still trying to get one of our applications starting with ORM 5. With > Search 5.4.0.Beta1 and Spring 4.2.0.RC1, I'm now at the database schema > validation phase. > > I think there's something fishy with the way a table is looked for when > we're using specific schemas in our database. > > Some background: we use PostgreSQL, we remove the public schema and we > create a schema which is the default schema for the user. > > Our Hibernate app doesn't know anything about the schema and it used to > work fine with ORM 3 and 4. 
>
> We like this configuration as the schema used is totally transparent for
> the application and we can play with it at the database level without
> changing the Hibernate configuration.
>
> The fact is that it doesn't work anymore with ORM 5. The problem is that,
> when the schema is null, ORM now considers that the schema should be ""
> (empty string) if we haven't set a default schema at the JDBC level. This
> leads to trying to find the tables in the "" schema which obviously fails.
>
> AFAICS, it's something specifically wanted as in
> NormalizingIdentifierHelperImpl.toMetaDataSchemaName, we have the following
> lines:
> if ( identifier == null ) {
> if ( jdbcEnvironment.getCurrentSchema() == null ) {
> return "";
> }
> identifier = jdbcEnvironment.getCurrentSchema();
> }
>
> IMHO, in the null case, if the current schema isn't specified, we should
> return null and let the database determine which schema to use instead of
> deciding that the schema is "".
>
> If it's null, the schema filter will not be considered and the schema
> resolution will be let to the database.
>
> Any thoughts?
>
> --
> Guillaume
>

From johara at redhat.com Tue Jun 16 09:57:07 2015
From: johara at redhat.com (John O'Hara)
Date: Tue, 16 Jun 2015 14:57:07 +0100
Subject: [hibernate-dev] (HHH-9857) Reuse of EntityEntry for bytecode enhanced read-only reference cached entities
Message-ID: <55802B33.3010604@redhat.com>

Steve,

I missed your ping yesterday about HHH-9857.

I reworked based on the EntityEntry needing to be threadsafe to be shared
across sessions. With the current impl a new EntityEntry is created for each
PersistenceContext.

If we share it between sessions, there is a race condition on the
compressedState field, as multiple threads will access the same object. We
had discussed making access to this field threadsafe, or removing some of
the operations and making the ImmutableEntityEntry objects immutable
themselves.

I have used a ReentrantReadWriteLock in ImmutableEntityEntry to remove the
race condition, as this appeared to require the least code change, but I am
uncertain atm what impact this will have on performance as it is in a
critical section of code.

I have not been able to test my changes as we currently have an issue with
our perf lab after a recent upgrade. I will test as soon as I can to see
what impact this has on CPU.

This change only impacts ImmutableEntityEntry, and not MutableEntityEntry.

Thanks

John

-- 
John O'Hara
johara at redhat.com

JBoss, by Red Hat

Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SI4 1TE, United Kingdom.
Registered in UK and Wales under Company Registration No. 3798903 Directors: Michael Cunningham (USA), Charlie Peters (USA), Matt Parsons (USA) and Michael O'Neill (Ireland).

From steve at hibernate.org Tue Jun 16 12:20:20 2015
From: steve at hibernate.org (Steve Ebersole)
Date: Tue, 16 Jun 2015 16:20:20 +0000
Subject: [hibernate-dev] ORM 5 - Default schema
In-Reply-To:
References:
Message-ID:

Hi Guillaume.

The trouble with the way it used to work is that we are essentially looking
at tables from all catalogs/schemas. That is the difference between "" and
null in those DBMD params. This causes problems in a few different
situations. First is the case of simply having more than one table with the
same name in different catalogs/schemas. The other is the case of Oracle
synonyms. In both cases we get back multiple results for tables with a given
name if we do not account for catalog/schema.
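To make the "" versus null difference concrete, this is essentially the JDBC
call in play (just a sketch; the table name is only an example, and metaData
here is a java.sql.DatabaseMetaData):

// null says "do not use this to narrow the search at all" - a table named
// ORDERS in any schema of any catalog comes back; that was the old behavior:
ResultSet fromAnywhere = metaData.getTables( null, null, "ORDERS", new String[] { "TABLE" } );

// "" matches only tables that have no catalog/schema, and a real name narrows
// the lookup to exactly that namespace - which is what validation/update need:
ResultSet scoped = metaData.getTables( "", "", "ORDERS", new String[] { "TABLE" } );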
Bottom line is that neither schema validation nor schema update were previously supported. I am trying to change that, but part of that means fixing oddities like this. That said... there are a few things you could do within the changes. You could simply tell Hibernate via Dialect#getNameQualifierSupport that neither catalog nor schema are supported for your db; just return NameQualifierSupport#NONE. This will effectively make Hibernate use null rather than "" in these DBMD calls. The other thing you could do relates to a task I have for myself. Basically I would like to make org.hibernate.tool.schema.extract.spi.InformationExtractor configurable. This is the thing where you are seeing the calls into NormalizingIdentifierHelperImpl.toMetaDataSchemaName. It is the thing that talks to the database to extract the schema information (tables, sequences, columns, etc). Andrea is in the middle of refactoring this contract at the moment, but you can at least see its intent; after his work there will no longer be plural forms to these methods. As you can see from my comments where this gets constructed, I planned to make that pluggable. I just have not gotten to that yet. If you want to work on that, that would get it done faster. I plan the next release next week so its becoming time critical for 5.0. As for your comment about AvailableSettings.DEFAULT_SCHEMA I don't understand. On Tue, Jun 16, 2015 at 6:02 AM Guillaume Smet wrote: > FWIW, if I change the return ""; to return null;, I get my application to > start \o/. > > I'll start testing the application more in depth. > > FWIW, I don't know if it's something normal but > AvailableSettings.DEFAULT_SCHEMA is not used in the constructor of > JdbcEnvironmentImpl used when JDBC is available (I first tried to fix the > issue by using this setting). > > On Tue, Jun 16, 2015 at 12:24 PM, Guillaume Smet > > wrote: > > > Hi, > > > > Still trying to get one of our applications starting with ORM 5. With > > Search 5.4.0.Beta1 and Spring 4.2.0.RC1, I'm now at the database schema > > validation phase. > > > > I think there's something fishy with the way a table is looked for when > > we're using specific schemas in our database. > > > > Some background: we use PostgreSQL, we remove the public schema and we > > create a schema which is the default schema for the user. > > > > Our Hibernate app doesn't know anything about the schema and it used to > > work fine with ORM 3 and 4. > > > > We like this configuration as the schema used is totally transparent for > > the application and we can play with it at the database level without > > changing the Hibernate configuration. > > > > The fact is that it doesn't work anymore with ORM 5. The problem is that, > > when the schema is null, ORM now considers that the schema should be "" > > (empty string) if we haven't set a default schema at the JDBC level. This > > leads to trying to find the tables in the "" schema which obviously > fails. > > > > AFAICS, it's something specifically wanted as in > > NormalizingIdentifierHelperImpl.toMetaDataSchemaName, we have the > following > > lines: > > if ( identifier == null ) { > > if ( jdbcEnvironment.getCurrentSchema() == null ) { > > return ""; > > } > > identifier = jdbcEnvironment.getCurrentSchema(); > > } > > > > IMHO, in the null case, if the current schema isn't specified, we should > > return null and let the database determine which schema to use instead of > > deciding that the schema is "". 
> > > > If it's null, the schema filter will not be considered and the schema > > resolution will be let to the database. > > > > Any thoughts? > > > > -- > > Guillaume > > > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From steve at hibernate.org Tue Jun 16 14:00:49 2015 From: steve at hibernate.org (Steve Ebersole) Date: Tue, 16 Jun 2015 18:00:49 +0000 Subject: [hibernate-dev] TREAT operator and joined inheritance (HHH-9862) In-Reply-To: <902223750.2378776.1434400016312.JavaMail.zimbra@redhat.com> References: <2011779521.2267219.1434393440897.JavaMail.zimbra@redhat.com> <902223750.2378776.1434400016312.JavaMail.zimbra@redhat.com> Message-ID: As for the "multi-select" case you mention, JPA actually does not mention support for TREAT in select clauses. In fact it explicitly lists support for TREAT in the from and where clause. So because it explicitly mentions those, I'd say it implicitly excludes support for them in select clause. The use of the TREAT operator is supported for downcasting within path expressions in the FROM and WHERE clauses. ... So unfortunately there is no "properly" in this case because JPA does not define what is proper. There is just what we deem to be appropriate. There is a lot of difficulty in getting the inner/outer join right here. The difficulty is knowing the context that the TREAT occurs in in the code that is building the join fragment. Ultimately this is done in org.hibernate.persister.entity.AbstractEntityPersister#determineSubclassTableJoinType. But that method has no visibility into whether this is driven by a select or a where or a from or ... And in fact I'd argue that its not just a question of select versus from. Its really more a question of how many other treats occur for that same "from element" and whether they are used in an conjunctive (AND) or disjunctive (OR) way. But I am not convinced we'd ever be able to get the inner/outer join right in all these cases. At the least the contextual info we'd need is well beyond what we have available to us given the current SQL generation engine here. And even if we did have all the information available to us. I am not sure it is reasonable way to apply restrictions. Maybe a slightly different way to look at this is better. Rather that attempting to alter the outer join (which is what Hibernate would use for the subclasses) to be inner joins in certain cases, maybe we instead just use a type restriction. Keeping in mind that by default Hibernate will want to render the joins for subclasses as outer joins, I think this is easiest to understand with some examples 1) "select p.id, p.name from Pet p where treat(p as Cat).felineProperty = 'x' or treat(p as Dog).canineProperty = 'y'" So by default Hibernate would want to render SQL here like: select ... from Pet p left outer join Dog d on ... left outer join Cat c on .. where c.felineProperty = 'x' or d.canineProperty = 'y' which is actually perfect in this case. 2) "select p.id, p.name from Pet p where treat(p as Cat).felineProperty = 'x' and treat(p as Dog).canineProperty = 'y'" Hibernate would render SQL like: from Pet p left outer join Dog d on ... left outer join Cat c on .. where c.felineProperty = 'x' and d.canineProperty = 'y' which again is actually perfect here. 
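For completeness, the criteria form that produces this same conjunctive shape
looks roughly like this (a sketch with string paths; I select the Pet itself
rather than the id/name projection since that detail does not matter here):

CriteriaBuilder cb = entityManager.getCriteriaBuilder();
CriteriaQuery<Pet> query = cb.createQuery( Pet.class );
Root<Pet> p = query.from( Pet.class );
query.where(
		cb.and(
				cb.equal( cb.treat( p, Cat.class ).get( "felineProperty" ), "x" ),
				cb.equal( cb.treat( p, Dog.class ).get( "canineProperty" ), "y" )
		)
);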
As it turns out the original "alter join for treat" support was done to handle the case of a singular restriction: 3) "select p.id, p.name from Pet p where treat(p as Cat).felineProperty <> 'x'" Hibernate would render SQL like: from Pet p left outer join Dog d on ... left outer join Cat c on .. where c.felineProperty <> 'x' the problem here is that Dogs can also be returned. In retrospect looking at all these cases I think it might have been better to instead render a restriction for the type into the where: from Pet p left outer join Dog d on ... left outer join Cat c on .. where ( and c.felineProperty <> 'x' ) ( is the case statement that is used to restrict based on concrete type). Now we will only get back Cats. The nice thing is that this approach works no matter the and/or context: select ... from Pet p left outer join Dog d on ... left outer join Cat c on .. where ( and c.felineProperty = 'x' ) or ( and d.canineProperty = 'y' ) from Pet p left outer join Dog d on ... left outer join Cat c on .. where ( and c.felineProperty = 'x' ) and ( and d.canineProperty = 'y' ) I'd have to think through treats in the from-clause a bit more. On Mon, Jun 15, 2015 at 3:27 PM Gail Badner wrote: > JPA 2.1 shows examples of using multiple downcasts in a restriction: > > 4.4.9 Downcasting > > SELECT e FROM Employee e > WHERE TREAT(e AS Exempt).vacationDays > 10 > OR TREAT(e AS Contractor).hours > 100 > > 6.5.7 Downcasting > > Example 3: > CriteriaQuery q = cb.createQuery(Employee.class); > Root e = q.from(Employee.class); > q.where( > cb.or(cb.gt(cb.treat(e, Exempt.class).get(Exempt_.vacationDays), > 10), > cb.gt(cb.treat(e, Contractor.class).get(Contractor_.hours), > 100))); > > These don't work in Hibernate for joined inheritance because Hibernate > uses an inner join for the downcasts. > > I've added a FailureExpected test case for this: > https://github.com/hibernate/hibernate-orm/commit/1ec76887825bebda4c02ea2bc1590d374aa4415b > > IIUC, inner join is correct when TREAT is used in a JOIN clause. If TREAT > is only used for restrictions in the WHERE clause, I *think* it should be > an outer join. Is that correct? > > HHH-9862 also mentions that Hibernate doesn't work properly when there are > multiple select expressions using different downcasts, as in: > > CriteriaBuilder cb = entityManager.getCriteriaBuilder(); > CriteriaQuery query = cb.createQuery(Object[].class); > Root root = query.from(Pet.class); > query.multiselect( > root.get("id"), > root.get("name"), > cb.treat(root, Cat.class).get("felineProperty"), > cb.treat(root, Dog.class).get("canineProperty") > ); > > I don't think this should work, at least not with implicit joins. Is this > valid? > > Thanks, > Gail > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From steve at hibernate.org Tue Jun 16 16:22:19 2015 From: steve at hibernate.org (sebersole) Date: Tue, 16 Jun 2015 13:22:19 -0700 (PDT) Subject: [hibernate-dev] Test from Nabble Message-ID: <1434486139171-3.post@n6.nabble.com> Test from new Nabble gateway -- View this message in context: http://hibernate-development.74578.x6.nabble.com/Test-from-Nabble-tp3.html Sent from the Hibernate Development mailing list archive at Nabble.com. 
From steve at hibernate.org Tue Jun 16 17:04:56 2015 From: steve at hibernate.org (Steve Ebersole) Date: Tue, 16 Jun 2015 21:04:56 +0000 Subject: [hibernate-dev] Query handling : Antlr 3 versus Antlr 4 Message-ID: I am not so sure that manually building a tree that would work with listeners/visitors generated from a second grammar is going to be an option. I have asked on SO and on the Antlr discussion group and basically got no responses as to how that might be possible. See https://groups.google.com/forum/#!topic/antlr-discussion/vBkwCovqHcI So the question is whether generating a semantic tree that is not Antlr specific is a viable alternative. I think it is. And we can still provide hand written listener and or visitor for processing this. From gbadner at redhat.com Tue Jun 16 20:02:43 2015 From: gbadner at redhat.com (Gail Badner) Date: Tue, 16 Jun 2015 20:02:43 -0400 (EDT) Subject: [hibernate-dev] TREAT operator and joined inheritance (HHH-9862) In-Reply-To: References: <2011779521.2267219.1434393440897.JavaMail.zimbra@redhat.com> <902223750.2378776.1434400016312.JavaMail.zimbra@redhat.com> Message-ID: <1966207744.3463334.1434499363119.JavaMail.zimbra@redhat.com> See below: ----- Original Message ----- > From: "Steve Ebersole" > To: "Gail Badner" , "Hibernate Dev" > Sent: Tuesday, June 16, 2015 11:00:49 AM > Subject: Re: [hibernate-dev] TREAT operator and joined inheritance (HHH-9862) > > As for the "multi-select" case you mention, JPA actually does not mention > support for TREAT in select clauses. In fact it explicitly lists support > for TREAT in the from and where clause. So because it explicitly mentions > those, I'd say it implicitly excludes support for them in select clause. > > > The use of the TREAT operator is supported for downcasting within path > expressions in the FROM and > WHERE clauses. ... > > Yes, I noticed this as well. > So unfortunately there is no "properly" in this case because JPA does not > define what is proper. There is just what we deem to be appropriate. We have some unit tests that have a single TREAT select expression on the root entity using HQL and CriteriaBuilder: Using HQL (https://hibernate.atlassian.net/browse/HHH-8637): org.hibernate.test.jpa.ql.TreatKeywordTest.testFilteringDiscriminatorSubclasses org.hibernate.test.jpa.ql.TreatKeywordTest.testFilteringJoinedSubclasses Using CriteriaBuilder (https://hibernate.atlassian.net/browse/HHH-9549): org.hibernate.jpa.test.criteria.TreatKeywordTest.treatRoot org.hibernate.jpa.test.criteria.TreatKeywordTest.treatRootReturnSuperclass As you can see, Hibernate supports one TREATed root entity in a SELECT clause (no projections). Should we limit Hibernate support to that use case? > > There is a lot of difficulty in getting the inner/outer join right here. The > difficulty is knowing the context that the TREAT occurs in in the code that > is building the join fragment. Ultimately this is done > in > org.hibernate.persister.entity.AbstractEntityPersister#determineSubclassTableJoinType. > But that method has no visibility into whether this is driven by a select > or a where or a from or ... > > And in fact I'd argue that its not just a question of select versus from. > Its really more a question of how many other treats occur for that same > "from element" and whether they are used in an conjunctive (AND) or > disjunctive (OR) way. But I am not convinced we'd ever be able to get the > inner/outer join right in all these cases. 
At the least the contextual > info we'd need is well beyond what we have available to us given the > current SQL generation engine here. And even if we did have all the > information available to us. I am not sure it is reasonable way to apply > restrictions. > > Maybe a slightly different way to look at this is better. Rather that > attempting to alter the outer join (which is what Hibernate would use for > the subclasses) to be inner joins in certain cases, maybe we instead just > use a type restriction. Keeping in mind that by default Hibernate will > want to render the joins for subclasses as outer joins, I think this is > easiest to understand with some examples > > 1) "select p.id, p.name from Pet p where treat(p as Cat).felineProperty = > 'x' or treat(p as Dog).canineProperty = 'y'" > So by default Hibernate would want to render SQL here like: > select ... > from Pet p > left outer join Dog d on ... > left outer join Cat c on .. > where c.felineProperty = 'x' > or d.canineProperty = 'y' > > which is actually perfect in this case. > > 2) "select p.id, p.name from Pet p where treat(p as Cat).felineProperty = > 'x' and treat(p as Dog).canineProperty = 'y'" > Hibernate would render SQL like: > from Pet p > left outer join Dog d on ... > left outer join Cat c on .. > where c.felineProperty = 'x' > and d.canineProperty = 'y' > > which again is actually perfect here. > > As it turns out the original "alter join for treat" support was done to > handle the case of a singular restriction: > > 3) "select p.id, p.name from Pet p where treat(p as Cat).felineProperty <> > 'x'" > Hibernate would render SQL like: > from Pet p > left outer join Dog d on ... > left outer join Cat c on .. > where c.felineProperty <> 'x' > > the problem here is that Dogs can also be returned. In retrospect looking > at all these cases I think it might have been better to instead render a > restriction for the type into the where: > > from Pet p > left outer join Dog d on ... > left outer join Cat c on .. > where ( and c.felineProperty <> 'x' ) > > ( is the case statement that is used to restrict based > on concrete type). Now we will only get back Cats. The nice thing is that > this approach works no matter the and/or context: > > select ... > from Pet p > left outer join Dog d on ... > left outer join Cat c on .. > where ( and c.felineProperty = 'x' ) > or ( and d.canineProperty = 'y' ) > > from Pet p > left outer join Dog d on ... > left outer join Cat c on .. > where ( and c.felineProperty = 'x' ) > and ( and d.canineProperty = 'y' ) > > I agree that using should cover these cases. For joined subclasse, it looks like is generated from the CaseFragment returned by JoinedSubclassEntityPersister#discriminatorFragment. I imagine there is something similar for single-table inheritance, but I haven't found it yet. > I'd have to think through treats in the from-clause a bit more. 
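For the from-clause case, what I have in mind is along these lines (sketch
only - the Owner entity and its "pets" collection are made up), and there
restricting the join itself, i.e. an inner join to the subclass table, does
seem right:

CriteriaBuilder cb = entityManager.getCriteriaBuilder();
CriteriaQuery<Owner> query = cb.createQuery( Owner.class );
Root<Owner> owner = query.from( Owner.class );
// TREAT applied to a join - "join treat(o.pets as Cat) c" in JPQL terms
Join<Owner, Pet> pets = owner.join( "pets" );
Join<Owner, Cat> cats = cb.treat( pets, Cat.class );
query.select( owner ).where( cb.equal( cats.get( "felineProperty" ), "x" ) );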
> > On Mon, Jun 15, 2015 at 3:27 PM Gail Badner wrote: > > > JPA 2.1 shows examples of using multiple downcasts in a restriction: > > > > 4.4.9 Downcasting > > > > SELECT e FROM Employee e > > WHERE TREAT(e AS Exempt).vacationDays > 10 > > OR TREAT(e AS Contractor).hours > 100 > > > > 6.5.7 Downcasting > > > > Example 3: > > CriteriaQuery q = cb.createQuery(Employee.class); > > Root e = q.from(Employee.class); > > q.where( > > cb.or(cb.gt(cb.treat(e, Exempt.class).get(Exempt_.vacationDays), > > 10), > > cb.gt(cb.treat(e, Contractor.class).get(Contractor_.hours), > > 100))); > > > > These don't work in Hibernate for joined inheritance because Hibernate > > uses an inner join for the downcasts. > > > > I've added a FailureExpected test case for this: > > https://github.com/hibernate/hibernate-orm/commit/1ec76887825bebda4c02ea2bc1590d374aa4415b > > > > IIUC, inner join is correct when TREAT is used in a JOIN clause. If TREAT > > is only used for restrictions in the WHERE clause, I *think* it should be > > an outer join. Is that correct? > > > > HHH-9862 also mentions that Hibernate doesn't work properly when there are > > multiple select expressions using different downcasts, as in: > > > > CriteriaBuilder cb = entityManager.getCriteriaBuilder(); > > CriteriaQuery query = cb.createQuery(Object[].class); > > Root root = query.from(Pet.class); > > query.multiselect( > > root.get("id"), > > root.get("name"), > > cb.treat(root, Cat.class).get("felineProperty"), > > cb.treat(root, Dog.class).get("canineProperty") > > ); > > > > I don't think this should work, at least not with implicit joins. Is this > > valid? > > > > Thanks, > > Gail > > _______________________________________________ > > hibernate-dev mailing list > > hibernate-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hibernate-dev > > > From sanne at hibernate.org Wed Jun 17 06:44:59 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Wed, 17 Jun 2015 11:44:59 +0100 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... In-Reply-To: References: <556898D2.8010103@redhat.com> <5568A16D.5090201@redhat.com> Message-ID: Hi Scott, can we expect to see both Hibernate ORM 5 and this latest Hibernate Search version 5.4.0.Alpha1 soon in WildFly 10? thanks, Sanne On 4 June 2015 at 23:47, Sanne Grinovero wrote: > On 29 May 2015 at 18:27, Scott Marlow wrote: >> >> >> On 05/29/2015 01:05 PM, Sanne Grinovero wrote: >>> >>> Thanks Scott! >>> >>> 1. this error is expected: HS 5.2 is not compatible with ORM 5. >>> We'll need a compatible WildFly version to release a compatible >>> version, or alternatively know how to get the tests to run w/o the >>> jipijapa patch as I was trying ;-) >> >> >> In the interest of getting ORM 5 into WildFly 10 before HS is upgraded, we >> could disable >> org.jboss.as.test.integration.hibernate.search.HibernateSearchJPATestCase >> and create a blocking jira for WF10 assigned to you, so you can either >> enable the HibernateSearchJPATestCase test or remove Search from WildFly 10 >> as you mention below (as a possible option). Please let me know how you >> want me to proceed. > > That won't be necessary, as a compatible release is now available: > update Hibernate Search to version 5.4.0.Alpha1 when you upgrade Hibernate ORM. > > (don't upgrade HS w/o ORM to 5: it's required for this version of > Hibernate Search) > > Thanks! 
>
> Sanne

From davide at hibernate.org Wed Jun 17 08:01:43 2015
From: davide at hibernate.org (Davide D'Alto)
Date: Wed, 17 Jun 2015 13:01:43 +0100
Subject: [hibernate-dev] [WEBSITE] Jira workflow
In-Reply-To: <20150612193555.GB22142@Nineveh.lan>
References: <20150612193555.GB22142@Nineveh.lan>
Message-ID:

I've changed the workflow.

@hardy You seem to have the same permission I have (I just gave a quick
look). Is there something in particular you cannot do?

On Fri, Jun 12, 2015 at 8:35 PM, Hardy Ferentschik wrote:

> Hi,
>
> > it seems thath the worflow for WEBSITE on JIRA does not include the state
> > "PULL REQUEST SENT" (like in Search and OGM, for example)
> >
> > I'd like to have it so that I can have a quick overview from JIRA of the
> > issues that are "almost" done.
> >
> > Would it be ok to add it?
>
> +1 from my side.
>
> While on it, you could check the team permissions. It seems I don't have
> the same
> permissions on WEBSITE as on the other projects.
>
> --Hardy
>

From sanne at hibernate.org Wed Jun 17 08:08:21 2015
From: sanne at hibernate.org (Sanne Grinovero)
Date: Wed, 17 Jun 2015 13:08:21 +0100
Subject: [hibernate-dev] [WEBSITE] Jira workflow
In-Reply-To:
References: <20150612193555.GB22142@Nineveh.lan>
Message-ID:

I also have very limited permissions on the WEBSITE project, which is the
reason it does not have the GitHub/HipChat integration set up.

I don't need the admin permissions, but would love it if you could set up
the HipChat integration as I'm definitely losing track of all comment
notifications :)

On 17 June 2015 at 13:01, Davide D'Alto wrote:
> I've changed the workflow.
>
> @hardy You seem to have the same permission I have (I just gave a quick
> look). Is there something in particular you cannot do?
>
>
> On Fri, Jun 12, 2015 at 8:35 PM, Hardy Ferentschik
> wrote:
>
>> Hi,
>>
>> > it seems thath the worflow for WEBSITE on JIRA does not include the state
>> > "PULL REQUEST SENT" (like in Search and OGM, for example)
>> >
>> > I'd like to have it so that I can have a quick overview from JIRA of the
>> > issues that are "almost" done.
>> >
>> > Would it be ok to add it?
>>
>> +1 from my side.
>>
>> While on it, you could check the team permissions. It seems I don't have
>> the same
>> permissions on WEBSITE as on the other projects.
>>
>> --Hardy
>>
> _______________________________________________
> hibernate-dev mailing list
> hibernate-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hibernate-dev

From hardy at hibernate.org Wed Jun 17 08:14:21 2015
From: hardy at hibernate.org (Hardy Ferentschik)
Date: Wed, 17 Jun 2015 14:14:21 +0200
Subject: [hibernate-dev] [WEBSITE] Jira workflow
In-Reply-To:
References: <20150612193555.GB22142@Nineveh.lan>
Message-ID: <20150617121421.GA48988@Nineveh.lan>

On Wed, Jun 17, 2015 at 01:01:43PM +0100, Davide D'Alto wrote:
> I've changed the workflow.
>
> @hardy You seem to have the same permission I have (I just gave a quick
> look). Is there something in particular you cannot do?

I think you created a wrong issue which you actually wanted to delete, but
couldn't. I thought I'd do it, but I don't have the permissions. On the
other projects I can, probably because I am in some sort of admin group.
Would be nice to be in there as well for WEBSITE.

--Hardy
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available Type: application/pgp-signature Size: 496 bytes Desc: not available Url : http://lists.jboss.org/pipermail/hibernate-dev/attachments/20150617/14e05612/attachment.bin From gunnar at hibernate.org Wed Jun 17 08:39:39 2015 From: gunnar at hibernate.org (Gunnar Morling) Date: Wed, 17 Jun 2015 14:39:39 +0200 Subject: [hibernate-dev] Query handling : Antlr 3 versus Antlr 4 In-Reply-To: References: Message-ID: Hi, > resolving possible unqualified attribute references [...] > The other option is to have such an API. This would allow the parser to ask the consumer whether the given identifier (`ssn`) is a persistent attribute of any of the available entities. Yes, having such extension point seems reasonable. OGM would probably use the same implementation as ORM, but other users may plug in another impl based on their own type of entity definitions. Would the scope of that extension point be solely attribute resolution or also handling of other things such as literals? I'd hope the latter could be done in a unified way by the parser? > So here it is a question of whether we want the parser to perform capability based validations for us. Again, I'd argue we do as otherwise each consumer ends up having to do these validations themselves +1 > We also need to decide how we want to handle polymorphic queries in this parser. I am not sure in terms of exact types to be returned, but it'd help if the returned structure contained information about the actually affected "tables" (or more generally, "structures" in the query backend), so that users don't each have to deal with resolving that information wrt. the current mapping strategy. That need some extension point for specifying the sub-types of given types. Again, OGM would probably share an impl. with ORM. --Gunnar 2015-06-13 0:30 GMT+02:00 Steve Ebersole : > I just pushed my initial work on performing some indexing of explicit from > clauses. Essentially it takes the from clauses defined in the query and > begins the massage process. From here I will start working on handling > implicit from-clause-elements. Part of that however will require me being > able to know what is an entity and whether an IDENTIFIER in the query > represents an attribute in one (or more) of those from-clause-element > entities. IIRC this needs to be different between consumers of this > project as they do not always have persisters, etc. Previously we had > discussed an API that all consumers could provide. I had developed org.hibernate.hql.ast.common.ParserContext > and friends as a means to that end. However, the parser project is > currently not using those. On the bright side that means we have a clean > slate for how to do this! :) > > Part of the equation here is how we want certain things to work in terms > of an input/output paradigm, specifically what the "parser" should output. > Let's take some examples... > > 1) resolving possible unqualified attribute references. I have mentioned > this one before. Take a query like `select ssn from Person`. There are 2 > main choices here when it comes to processing the `ssn` select > expression. It's a matter of precedence. The first option is to have > ATTRIBUTE_REFERENCE have the lowest precedence. Here we'd try other > possibilities first. Mainly that would entail trying it as various forms > of a constant. If all those attempts fail we would *assume* that the > expression is an ATTRIBUTE_REFERENCE. The assumption aspect is > important. It means that would not be validated here. 
This approach would > not require any form of API, but it has many downsides: it would require > more expensive resolution and it could potentially hide a ATTRIBUTE_REFERENCE. > IMO ATTRIBUTE_REFERENCE should have the higher precedence. The other > option is to have such an API. This would allow the parser to ask the > consumer whether the given identifier (`ssn`) is a persistent attribute of > any of the available entities. If you can't tell, I am in big time favor > of having such an API :) But I am open to discussions as to the other side. > > 2) Understanding capabilities. The principle here is understanding what > is possible in different contexts based on the domain model being queried. > For example, if we see a query like `select c.addresses.city from Company > c` and if we know that `c.addresses` resolves to a persistent collection > then we know that the following de-reference is invalid. So here it is a > question of whether we want the parser to perform capability based > validations for us. Again, I'd argue we do as otherwise each consumer ends > up having to do these validations themselves. This is/was the intent of > the org.hibernate.hql.ast.TypeDescriptor stuff I had developed there > originally. So something like that. > > 3) We also need to decide how we want to handle polymorphic queries in > this parser. For a query like `from Object` what do we ultimately want > returned? Specifically how do we deal with the multi-valued > java.lang.Object reference in whatever we send back from the parser? > Because what we send out implies some things we need to send in (API). > > Anyway, the from clause parser is looking nice so far. > > > On Thu, Jun 11, 2015 at 9:50 AM Steve Ebersole > wrote: > >> In the re-write case we will have to decide between 2 courses. >> >> The problem lies in the fact that the listeners and visitors expect a >> tree specifically generated from the grammar that generated them. For HQL, >> say we have one grammar named HqlParser (like in the poc). The >> listeners and visitors built from HqlParser specifically expect the tree >> from HqlParser. The trees are typed. Attempting to use trees from one >> grammar in the listener/visitor from another grammar will not work. >> >> As far as re-writing that effectively means 2 options. >> >> If we want to have a second grammar for the "semantic query" we are going >> to have to re-write the entire tree based on that second grammar. I had >> thought we might be able to mix them. But that was based on my Antlr 2/3 >> understanding where the trees are de-typed. That approach will not work in >> Antlr 4. Its not a huge deal, but worth mentioning. >> >> The other option would be to encode the "semantically correct" rules into >> the original grammar (HqlParser) as a higher precedence than their parse >> tree corollary. This gets a little fugly. Consider again the `select >> c.headquarters.state.code` fragment. During the parse phase we need to >> accept any dotIdentifierPath as a selectable item. We simply do not >> know during parse what that represents. So for the parse phase, a >> `selectItem` rule (overly simplified) might look like: >> >> selectItem : dotIdentifierPath; >> >> In this approach we would re-write the tree "in place" during semantic >> analysis. So at some point we know that the given dotIdentifierPath >> represents a reference to a persistent attribute. 
So we'd alter that rule >> to look contain alternatives for each semantic possibility: >> >> selectItem : attributeReference | javaConstant | dotIdentifierPath; >> >> The rules attributeReference and javaConstant would never match during >> the parse phase. >> >> Again, this is fugly imo. >> >> >> On Wed, Jun 10, 2015 at 10:49 AM Gunnar Morling >> wrote: >> >>> 2015-06-09 22:11 GMT+02:00 Steve Ebersole : >>> >>>> So today I spent some time cleaning up the basic HQL parser. >>>> Personally I think it would be best if our 2 proof-of-concepts could share >>>> that first grammar. IMO that would make the differences between the 2 >>>> approaches more apparent. I will push those changes soon. >>>> >>> >>> Ok, I can try and work on a PoC for the decorator-based approach. Not >>> sure what the outcome will be, as your's, my understanding of it is roughly >>> vague and high-level. But if it fails we can maybe settle for the Antlr3 >>> approach with the better feeling of having investigated the alternative. >>> >>> Can you let me know when you have pushed your stuff? What does it do, >>> render the query below as SQL? >>> >>> It is not complete yet. But it covers most cases. >>>> >>>> >>>> On Tue, Jun 9, 2015 at 10:47 AM Steve Ebersole >>>> wrote: >>>> >>>>> On Tue, Jun 9, 2015 at 10:14 AM Gunnar Morling >>>>> wrote: >>>>> >>>>> Yes, indeed I cheated here a bit. Probably it should be the following >>>>>> instead: >>>>>> >>>>>> [DOT] ---> AttributeReference("", "code") >>>>>> [DOT] >>>>>> [DOT] >>>>>> [IDENT, "c"] >>>>>> [IDENT, "headquarters"] >>>>>> [IDENT, "state"] >>>>>> [IDENT, "code"] >>>>>> >>>>> >>>>> How do you identify one DOT as referring to something else versus any >>>>> of the other DOTs? >>>>> >>>>> >>>>> Or maybe something like: >>>>>> >>>>>> [SELECTION_PARTICLE] ---> AttributeReference("", "code") >>>>>> [DOT] >>>>>> [DOT] >>>>>> [DOT] >>>>>> [IDENT, "c"] >>>>>> [IDENT, "headquarters"] >>>>>> [IDENT, "state"] >>>>>> [IDENT, "code"] >>>>>> >>>>>> Where SELECTION_PARTICLE would be an abstract representation of >>>>>> anything that can be selected (attribute ref, Java literal ref etc.) and >>>>>> the decorator element added in a later pass would specify its actual >>>>>> semantics based on the alias definitions etc. discovered before. >>>>>> >>>>>> Bottom line being, that decorators providing semantics are attached >>>>>> to the nodes of the parse tree based on information gathered in previous >>>>>> passes. >>>>>> >>>>> >>>>> And what does that look like in real, practical terms? That's what >>>>> concerns me :) I don't know, and you are just speaking in generalities. >>>>> So what does that look like in practice? >>>>> >>>>> >>>>> Not into the tree itself, but we can encode that semantic resolution >>>>>> into decorators (node attachments). >>>>>> >>>>> >>>>> Again, what do these "node attachments" look like in practice? I >>>>> have zero clue and based on my discussions with Antlr folks its not >>>>> pretty. Maybe I misunderstand. But if you are proposing this approach, I >>>>> would think you should have an idea of how it would look >>>>> practically-speaking :) Maybe this is the way to go, I just need to see >>>>> what this looks like. >>>>> >>>>> >>>>> Yes, they would deal with [[DOT][IDENT]] nodes but would benefit from >>>>>> semantic decorators attached previously. During rendering I would expect >>>>>> mainly those attachments to be of importance for the query creation. 
>>>>>> >>>>>> Admittedly, that's all quite "high level", but so far it seems doable >>>>>> to me in principle. It doesn't answer of course actual tree transformations >>>>>> such as (x + 0) -> x. I am not sure whether there are cases like this. >>>>>> >>>>> >>>>> Yes it is all extremely high-level. That is my concern. Principle >>>>> and practice are often 2 very different things. >>>>> >>>>> I plan on spending some time taking my hibernate-antlr4-poc project >>>>> and expanding it specifically to try the "second grammar" approach and see >>>>> what practical difficulties that shakes out. Would you be willing to do >>>>> the same for this decorated approach? Then we'd have concrete stuff to >>>>> compare and base a decision on. >>>>> >>>>> Also, `(x + 0) -> x` is actually a quite simple case. Ours is much >>>>> more complicated. In analyzing `c.headquarters.state.code` in the SELECT >>>>> clause we need a few things to happen in a few different parts of the >>>>> tree. We need: >>>>> 1) `c.headquarters.state` to be transformed into 2 "implicit joins" >>>>> in the FROM clause >>>>> 2) we need to replace `c.headquarters.state.code` as >>>>> `{implicit-alias}.code` in the SELECT >>>>> 3) register `c.headquarters` and `c.headquarters.state` as implicit >>>>> join paths (additional implicit joins using these paths should re-use the >>>>> same joins). >>>>> >>>> From smarlow at redhat.com Wed Jun 17 08:44:31 2015 From: smarlow at redhat.com (Scott Marlow) Date: Wed, 17 Jun 2015 08:44:31 -0400 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... In-Reply-To: References: <556898D2.8010103@redhat.com> <5568A16D.5090201@redhat.com> Message-ID: <55816BAF.7080708@redhat.com> On 06/17/2015 06:44 AM, Sanne Grinovero wrote: > Hi Scott, > can we expect to see both Hibernate ORM 5 and this latest Hibernate > Search version 5.4.0.Alpha1 soon in WildFly 10? I hope so. https://github.com/wildfly/wildfly/pull/7509 is passing the testsuite and I made the suggested code changes. I asked for an ETA on the PR. > > thanks, > Sanne > > On 4 June 2015 at 23:47, Sanne Grinovero wrote: >> On 29 May 2015 at 18:27, Scott Marlow wrote: >>> >>> >>> On 05/29/2015 01:05 PM, Sanne Grinovero wrote: >>>> >>>> Thanks Scott! >>>> >>>> 1. this error is expected: HS 5.2 is not compatible with ORM 5. >>>> We'll need a compatible WildFly version to release a compatible >>>> version, or alternatively know how to get the tests to run w/o the >>>> jipijapa patch as I was trying ;-) >>> >>> >>> In the interest of getting ORM 5 into WildFly 10 before HS is upgraded, we >>> could disable >>> org.jboss.as.test.integration.hibernate.search.HibernateSearchJPATestCase >>> and create a blocking jira for WF10 assigned to you, so you can either >>> enable the HibernateSearchJPATestCase test or remove Search from WildFly 10 >>> as you mention below (as a possible option). Please let me know how you >>> want me to proceed. >> >> That won't be necessary, as a compatible release is now available: >> update Hibernate Search to version 5.4.0.Alpha1 when you upgrade Hibernate ORM. >> >> (don't upgrade HS w/o ORM to 5: it's required for this version of >> Hibernate Search) >> >> Thanks! >> >> Sanne From gunnar at hibernate.org Wed Jun 17 08:47:52 2015 From: gunnar at hibernate.org (Gunnar Morling) Date: Wed, 17 Jun 2015 14:47:52 +0200 Subject: [hibernate-dev] Query handling : Antlr 3 versus Antlr 4 In-Reply-To: References: Message-ID: This seems very similar to what I had in mind with the decorator stuff. 
The decorating elements would represent that manually implemented semantic tree. In the end it probably doesn't even matter whether the elements of that tree would have links to the parse tree elements they originated from or whether that tree is completely "stand-alone". As you say, we'd traverse/alter it with our own listeners. In my understanding that's as good as it gets with Antlr4. 2015-06-16 23:04 GMT+02:00 Steve Ebersole : > I am not so sure that manually building a tree that would work with > listeners/visitors generated from a second grammar is going to be an > option. I have asked on SO and on the Antlr discussion group and basically > got no responses as to how that might be possible. See > https://groups.google.com/forum/#!topic/antlr-discussion/vBkwCovqHcI > > So the question is whether generating a semantic tree that is not Antlr > specific is a viable alternative. I think it is. And we can still provide > hand written listener and or visitor for processing this. > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From sanne at hibernate.org Wed Jun 17 08:57:15 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Wed, 17 Jun 2015 13:57:15 +0100 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... In-Reply-To: <55816BAF.7080708@redhat.com> References: <556898D2.8010103@redhat.com> <5568A16D.5090201@redhat.com> <55816BAF.7080708@redhat.com> Message-ID: On 17 June 2015 at 13:44, Scott Marlow wrote: > > > On 06/17/2015 06:44 AM, Sanne Grinovero wrote: >> >> Hi Scott, >> can we expect to see both Hibernate ORM 5 and this latest Hibernate >> Search version 5.4.0.Alpha1 soon in WildFly 10? > > > I hope so. https://github.com/wildfly/wildfly/pull/7509 is passing the > testsuite and I made the suggested code changes. I asked for an ETA on the > PR. The actual update will be done in another PR after that is merged right? I didn't see the update in that one. Maybe it's worth to include both changes in one pull request? > > >> >> thanks, >> Sanne >> >> On 4 June 2015 at 23:47, Sanne Grinovero wrote: >>> >>> On 29 May 2015 at 18:27, Scott Marlow wrote: >>>> >>>> >>>> >>>> On 05/29/2015 01:05 PM, Sanne Grinovero wrote: >>>>> >>>>> >>>>> Thanks Scott! >>>>> >>>>> 1. this error is expected: HS 5.2 is not compatible with ORM 5. >>>>> We'll need a compatible WildFly version to release a compatible >>>>> version, or alternatively know how to get the tests to run w/o the >>>>> jipijapa patch as I was trying ;-) >>>> >>>> >>>> >>>> In the interest of getting ORM 5 into WildFly 10 before HS is upgraded, >>>> we >>>> could disable >>>> >>>> org.jboss.as.test.integration.hibernate.search.HibernateSearchJPATestCase >>>> and create a blocking jira for WF10 assigned to you, so you can either >>>> enable the HibernateSearchJPATestCase test or remove Search from WildFly >>>> 10 >>>> as you mention below (as a possible option). Please let me know how you >>>> want me to proceed. >>> >>> >>> That won't be necessary, as a compatible release is now available: >>> update Hibernate Search to version 5.4.0.Alpha1 when you upgrade >>> Hibernate ORM. >>> >>> (don't upgrade HS w/o ORM to 5: it's required for this version of >>> Hibernate Search) >>> >>> Thanks! 
>>> >>> Sanne From smarlow at redhat.com Wed Jun 17 09:11:31 2015 From: smarlow at redhat.com (Scott Marlow) Date: Wed, 17 Jun 2015 09:11:31 -0400 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... In-Reply-To: References: <556898D2.8010103@redhat.com> <5568A16D.5090201@redhat.com> <55816BAF.7080708@redhat.com> Message-ID: <55817203.3010403@redhat.com> On 06/17/2015 08:57 AM, Sanne Grinovero wrote: > On 17 June 2015 at 13:44, Scott Marlow wrote: >> >> >> On 06/17/2015 06:44 AM, Sanne Grinovero wrote: >>> >>> Hi Scott, >>> can we expect to see both Hibernate ORM 5 and this latest Hibernate >>> Search version 5.4.0.Alpha1 soon in WildFly 10? >> >> >> I hope so. https://github.com/wildfly/wildfly/pull/7509 is passing the >> testsuite and I made the suggested code changes. I asked for an ETA on the >> PR. > > The actual update will be done in another PR after that is merged > right? I didn't see the update in that one. Correct, I kept the ORM 5 upgrade as a separate change. > Maybe it's worth to include both changes in one pull request? Problem is that could delay getting the above pull request merged in, which is something that I'd like to see happen first. > >> >> >>> >>> thanks, >>> Sanne >>> >>> On 4 June 2015 at 23:47, Sanne Grinovero wrote: >>>> >>>> On 29 May 2015 at 18:27, Scott Marlow wrote: >>>>> >>>>> >>>>> >>>>> On 05/29/2015 01:05 PM, Sanne Grinovero wrote: >>>>>> >>>>>> >>>>>> Thanks Scott! >>>>>> >>>>>> 1. this error is expected: HS 5.2 is not compatible with ORM 5. >>>>>> We'll need a compatible WildFly version to release a compatible >>>>>> version, or alternatively know how to get the tests to run w/o the >>>>>> jipijapa patch as I was trying ;-) >>>>> >>>>> >>>>> >>>>> In the interest of getting ORM 5 into WildFly 10 before HS is upgraded, >>>>> we >>>>> could disable >>>>> >>>>> org.jboss.as.test.integration.hibernate.search.HibernateSearchJPATestCase >>>>> and create a blocking jira for WF10 assigned to you, so you can either >>>>> enable the HibernateSearchJPATestCase test or remove Search from WildFly >>>>> 10 >>>>> as you mention below (as a possible option). Please let me know how you >>>>> want me to proceed. >>>> >>>> >>>> That won't be necessary, as a compatible release is now available: >>>> update Hibernate Search to version 5.4.0.Alpha1 when you upgrade >>>> Hibernate ORM. >>>> >>>> (don't upgrade HS w/o ORM to 5: it's required for this version of >>>> Hibernate Search) >>>> >>>> Thanks! >>>> >>>> Sanne From jmnarloch at gmail.com Wed Jun 17 11:37:13 2015 From: jmnarloch at gmail.com (Jakub Narloch) Date: Wed, 17 Jun 2015 17:37:13 +0200 Subject: [hibernate-dev] Hibernate O/RM Java 8 API. In-Reply-To: References: Message-ID: Hi Steve, Sorry for disapearing for a bit, but I would like to get back to this thread. I guess there might be some miscommunication from my side about what I would like to introduce into my extension and what I would like to contribute to Hibernate ORM directly. So to sum up I've continued working on my little project: https://github.com/jmnarloch/hstreams - and made the first public release. So if anyone wish to use Hibernate with Java 8 he is free to go. I've basically added everything that we had talk about: that is Optional query results, operating on Streams of query results, typed queries. I had noticed that Hibernate 5 will additionally add support for JDK 8 dates API, so I'm planning to reuse that. 
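To illustrate the kind of convenience being described here (a hedged sketch only; the actual hstreams API may well look different), adapting classic org.hibernate.Query results to Java 8 types could be as simple as:

import java.util.List;
import java.util.Optional;
import java.util.stream.Stream;

import org.hibernate.Query;

// Sketch, not the hstreams API: wraps Hibernate 4.3-style query results
// in Java 8 Optional/Stream types.
public final class Java8Queries {

    private Java8Queries() {
    }

    // Query#uniqueResult() returns null when nothing matches; wrap it instead.
    @SuppressWarnings("unchecked")
    public static <T> Optional<T> uniqueOptional(Query query) {
        return Optional.ofNullable((T) query.uniqueResult());
    }

    // Materializes the result list and exposes it as a Stream.
    @SuppressWarnings("unchecked")
    public static <T> Stream<T> stream(Query query) {
        return ((List<T>) query.list()).stream();
    }
}

Usage would then be along the lines of Optional<Employee> result = Java8Queries.uniqueOptional(session.createQuery("from Employee where id = :id").setParameter("id", id)), assuming some Employee entity is mapped.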
What more the extension can be already used with Hibernate 4.3.x and unless you decide to move to Java 8 I can release another version that will be compatible with Hibernate 5 . Now I'm aware that this would be short living sub project till the moment the Hibernate will migrate to Java 8, but untill that I can work on it. Also If you decide to migrate finally to latest Java version, I would be happy to prepare some pull requests based on the ideas I had already implemented in this side project. Regards, Jakub Narlohc 2015-05-19 22:52 GMT+02:00 Steve Ebersole : > On Tue, May 19, 2015 at 12:51 PM, Jakub Narloch > wrote: > >> >> >> 2015-05-19 18:16 GMT+02:00 Steve Ebersole : >>> >>> > - Enable registration of LocaleDate, LocalTime as query params. >>> >>> You can do that now: >>> * org.hibernate.Query#setParameter(java.lang.String, java.lang.Object) >>> * org.hibernate.Query#setParameter(java.lang.String, java.lang.Object, >>> org.hibernate.type.Type) >>> >>> I assume you mean adding method signatures accepting those specific >>> types? >>> >> Yes, I meant a convinient methods similar to those setDate and setTime, >> something like setLocalDate, setLocalTime >> > > But that would in fact introduce a Java 8 dependency on hibernate-core if > we did this directly. We could maybe use some form of "unwrap" notion > where hibernate-java8 could auto-register some delegate for "additional > param setting". We could use the same concept in relation to > hibernate-spatial as well for setting geolatte specific parameters. > > >> >>> > >>> > - Custom type handlers for LocalDate, LocalTime >>> > >>> > - Custom type handlers for Optional >>> >> Sorry for ambiguity. I was refering to "custom user types" from the >> reference, this is at least my understanding how Hibernate maps the Object >> to SQL in general. >> To sum up what I would like to be able to do "mapping" of an entity as >> fallows: >> >> class Employee { >> >> Optional manager; >> >> LocalDate createDate; >> LocalDate updateDate; >> } >> > > As far as the Java 8 date/time stuff... see the hibernate-java8 module... > that is its whole goal... Optional support has a little more to it, some > of which is alluded to in this discussion > > > >> >> So some extra org.hibernate.type.Type definitions will be needed similar >> to those that you had defined in hibernate-java8. >> > > Um, why would we need extra*s*? We need one... for Optional. > From steve at hibernate.org Wed Jun 17 14:47:39 2015 From: steve at hibernate.org (Steve Ebersole) Date: Wed, 17 Jun 2015 18:47:39 +0000 Subject: [hibernate-dev] TREAT operator and joined inheritance (HHH-9862) In-Reply-To: <1966207744.3463334.1434499363119.JavaMail.zimbra@redhat.com> References: <2011779521.2267219.1434393440897.JavaMail.zimbra@redhat.com> <902223750.2378776.1434400016312.JavaMail.zimbra@redhat.com> <1966207744.3463334.1434499363119.JavaMail.zimbra@redhat.com> Message-ID: org.hibernate.persister.entity.Queryable#getTypeDiscriminatorMetadata On Tue, Jun 16, 2015 at 7:02 PM Gail Badner wrote: > See below: > > ----- Original Message ----- > > From: "Steve Ebersole" > > To: "Gail Badner" , "Hibernate Dev" < > hibernate-dev at lists.jboss.org> > > Sent: Tuesday, June 16, 2015 11:00:49 AM > > Subject: Re: [hibernate-dev] TREAT operator and joined inheritance > (HHH-9862) > > > > As for the "multi-select" case you mention, JPA actually does not mention > > support for TREAT in select clauses. In fact it explicitly lists support > > for TREAT in the from and where clause. 
So because it explicitly > mentions > > those, I'd say it implicitly excludes support for them in select clause. > > > > > > The use of the TREAT operator is supported for downcasting within path > > expressions in the FROM and > > WHERE clauses. ... > > > > > > Yes, I noticed this as well. > > > So unfortunately there is no "properly" in this case because JPA does not > > define what is proper. There is just what we deem to be appropriate. > > We have some unit tests that have a single TREAT select expression on the > root entity using HQL and CriteriaBuilder: > > Using HQL (https://hibernate.atlassian.net/browse/HHH-8637): > > org.hibernate.test.jpa.ql.TreatKeywordTest.testFilteringDiscriminatorSubclasses > org.hibernate.test.jpa.ql.TreatKeywordTest.testFilteringJoinedSubclasses > > Using CriteriaBuilder (https://hibernate.atlassian.net/browse/HHH-9549): > org.hibernate.jpa.test.criteria.TreatKeywordTest.treatRoot > org.hibernate.jpa.test.criteria.TreatKeywordTest.treatRootReturnSuperclass > > As you can see, Hibernate supports one TREATed root entity in a SELECT > clause (no projections). Should we limit Hibernate support to that use case? > > > > > There is a lot of difficulty in getting the inner/outer join right > here. The > > difficulty is knowing the context that the TREAT occurs in in the code > that > > is building the join fragment. Ultimately this is done > > in > > > org.hibernate.persister.entity.AbstractEntityPersister#determineSubclassTableJoinType. > > But that method has no visibility into whether this is driven by a select > > or a where or a from or ... > > > > And in fact I'd argue that its not just a question of select versus from. > > Its really more a question of how many other treats occur for that same > > "from element" and whether they are used in an conjunctive (AND) or > > disjunctive (OR) way. But I am not convinced we'd ever be able to get > the > > inner/outer join right in all these cases. At the least the contextual > > info we'd need is well beyond what we have available to us given the > > current SQL generation engine here. And even if we did have all the > > information available to us. I am not sure it is reasonable way to apply > > restrictions. > > > > Maybe a slightly different way to look at this is better. Rather that > > attempting to alter the outer join (which is what Hibernate would use for > > the subclasses) to be inner joins in certain cases, maybe we instead just > > use a type restriction. Keeping in mind that by default Hibernate will > > want to render the joins for subclasses as outer joins, I think this is > > easiest to understand with some examples > > > > 1) "select p.id, p.name from Pet p where treat(p as Cat).felineProperty > = > > 'x' or treat(p as Dog).canineProperty = 'y'" > > So by default Hibernate would want to render SQL here like: > > select ... > > from Pet p > > left outer join Dog d on ... > > left outer join Cat c on .. > > where c.felineProperty = 'x' > > or d.canineProperty = 'y' > > > > which is actually perfect in this case. > > > > 2) "select p.id, p.name from Pet p where treat(p as Cat).felineProperty > = > > 'x' and treat(p as Dog).canineProperty = 'y'" > > Hibernate would render SQL like: > > from Pet p > > left outer join Dog d on ... > > left outer join Cat c on .. > > where c.felineProperty = 'x' > > and d.canineProperty = 'y' > > > > which again is actually perfect here. 
> > > > As it turns out the original "alter join for treat" support was done to > > handle the case of a singular restriction: > > > > 3) "select p.id, p.name from Pet p where treat(p as Cat).felineProperty > <> > > 'x'" > > Hibernate would render SQL like: > > from Pet p > > left outer join Dog d on ... > > left outer join Cat c on .. > > where c.felineProperty <> 'x' > > > > the problem here is that Dogs can also be returned. In retrospect > looking > > at all these cases I think it might have been better to instead render a > > restriction for the type into the where: > > > > from Pet p > > left outer join Dog d on ... > > left outer join Cat c on .. > > where ( and c.felineProperty <> 'x' ) > > > > ( is the case statement that is used to restrict > based > > on concrete type). Now we will only get back Cats. The nice thing is > that > > this approach works no matter the and/or context: > > > > select ... > > from Pet p > > left outer join Dog d on ... > > left outer join Cat c on .. > > where ( and c.felineProperty = 'x' ) > > or ( and d.canineProperty = 'y' ) > > > > from Pet p > > left outer join Dog d on ... > > left outer join Cat c on .. > > where ( and c.felineProperty = 'x' ) > > and ( and d.canineProperty = 'y' ) > > > > > > I agree that using should cover these cases. > > For joined subclasse, it looks like is generated > from the CaseFragment returned by > JoinedSubclassEntityPersister#discriminatorFragment. I imagine there is > something similar for single-table inheritance, but I haven't found it yet. > > > I'd have to think through treats in the from-clause a bit more. > > > > On Mon, Jun 15, 2015 at 3:27 PM Gail Badner wrote: > > > > > JPA 2.1 shows examples of using multiple downcasts in a restriction: > > > > > > 4.4.9 Downcasting > > > > > > SELECT e FROM Employee e > > > WHERE TREAT(e AS Exempt).vacationDays > 10 > > > OR TREAT(e AS Contractor).hours > 100 > > > > > > 6.5.7 Downcasting > > > > > > Example 3: > > > CriteriaQuery q = cb.createQuery(Employee.class); > > > Root e = q.from(Employee.class); > > > q.where( > > > cb.or(cb.gt(cb.treat(e, Exempt.class).get(Exempt_.vacationDays), > > > 10), > > > cb.gt(cb.treat(e, Contractor.class).get(Contractor_.hours), > > > 100))); > > > > > > These don't work in Hibernate for joined inheritance because Hibernate > > > uses an inner join for the downcasts. > > > > > > I've added a FailureExpected test case for this: > > > > https://github.com/hibernate/hibernate-orm/commit/1ec76887825bebda4c02ea2bc1590d374aa4415b > > > > > > IIUC, inner join is correct when TREAT is used in a JOIN clause. If > TREAT > > > is only used for restrictions in the WHERE clause, I *think* it should > be > > > an outer join. Is that correct? > > > > > > HHH-9862 also mentions that Hibernate doesn't work properly when there > are > > > multiple select expressions using different downcasts, as in: > > > > > > CriteriaBuilder cb = entityManager.getCriteriaBuilder(); > > > CriteriaQuery query = cb.createQuery(Object[].class); > > > Root root = query.from(Pet.class); > > > query.multiselect( > > > root.get("id"), > > > root.get("name"), > > > cb.treat(root, Cat.class).get("felineProperty"), > > > cb.treat(root, Dog.class).get("canineProperty") > > > ); > > > > > > I don't think this should work, at least not with implicit joins. Is > this > > > valid? 
> > > > > > Thanks, > > > Gail > > > _______________________________________________ > > > hibernate-dev mailing list > > > hibernate-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/hibernate-dev > > > > > > From smarlow at redhat.com Thu Jun 18 10:55:32 2015 From: smarlow at redhat.com (Scott Marlow) Date: Thu, 18 Jun 2015 10:55:32 -0400 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... In-Reply-To: <55817203.3010403@redhat.com> References: <556898D2.8010103@redhat.com> <5568A16D.5090201@redhat.com> <55816BAF.7080708@redhat.com> <55817203.3010403@redhat.com> Message-ID: <5582DBE4.4080904@redhat.com> Sanne, The [1] pull request to bring Jipijapa source into WildFly master is merged. I pushed a copy of the (work in progress) ORM 5 changes to github [2]. Is there a WildFly pull request for the changes to upgrade to Hibernate Search 5.4.0.Alpha1? I didn't see one but I might of missed it. Scott [1] https://github.com/wildfly/wildfly/pull/7509 [2] https://github.com/scottmarlow/wildfly/tree/hibernate5_june18 From sanne at hibernate.org Thu Jun 18 11:59:20 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Thu, 18 Jun 2015 16:59:20 +0100 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... In-Reply-To: <5582DBE4.4080904@redhat.com> References: <556898D2.8010103@redhat.com> <5568A16D.5090201@redhat.com> <55816BAF.7080708@redhat.com> <55817203.3010403@redhat.com> <5582DBE4.4080904@redhat.com> Message-ID: On 18 June 2015 at 15:55, Scott Marlow wrote: > Sanne, > > The [1] pull request to bring Jipijapa source into WildFly master is merged. > > I pushed a copy of the (work in progress) ORM 5 changes to github [2]. > > Is there a WildFly pull request for the changes to upgrade to Hibernate > Search 5.4.0.Alpha1? I didn't see one but I might of missed it. No there isn't, as Hibernate Search 5.4.0.Alpha1 *requires* Hibernate ORM 5.0.0.CR1. The two should be updated in synch this time, in future there will be more flexibility. > > Scott > > [1] https://github.com/wildfly/wildfly/pull/7509 > > [2] https://github.com/scottmarlow/wildfly/tree/hibernate5_june18 From smarlow at redhat.com Thu Jun 18 12:17:30 2015 From: smarlow at redhat.com (Scott Marlow) Date: Thu, 18 Jun 2015 12:17:30 -0400 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... In-Reply-To: References: <556898D2.8010103@redhat.com> <5568A16D.5090201@redhat.com> <55816BAF.7080708@redhat.com> <55817203.3010403@redhat.com> <5582DBE4.4080904@redhat.com> Message-ID: <5582EF1A.6040703@redhat.com> On 06/18/2015 11:59 AM, Sanne Grinovero wrote: > On 18 June 2015 at 15:55, Scott Marlow wrote: >> Sanne, >> >> The [1] pull request to bring Jipijapa source into WildFly master is merged. >> >> I pushed a copy of the (work in progress) ORM 5 changes to github [2]. >> >> Is there a WildFly pull request for the changes to upgrade to Hibernate >> Search 5.4.0.Alpha1? I didn't see one but I might of missed it. > > No there isn't, as Hibernate Search 5.4.0.Alpha1 *requires* Hibernate > ORM 5.0.0.CR1. What needs to change on WildFly for the Hibernate Search upgrade? I started with just changing the WildFly (top level) pom.xml to reference HS 5.4.0.Alpha1. Do you expect that the latest ORM master branch will work with HS 5.4.0.Alpha1 or is ORM 5.0.0.CR1 better? Locally, I am building the latest ORM master (built from source) and using Hibernate Search 5.4.0.Alpha1. When running the WildFly testsuite, I see a few different errors. 
One of them is from the HibernateSearchJPATestCase.testFullTextQuery test. http://pastebin.com/Q5xLrkpT shows the WildFly server.log contents from the Hibernate Search test. > > The two should be updated in synch this time, in future there will be > more flexibility. > >> >> Scott >> >> [1] https://github.com/wildfly/wildfly/pull/7509 >> >> [2] https://github.com/scottmarlow/wildfly/tree/hibernate5_june18 From sanne at hibernate.org Thu Jun 18 12:37:04 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Thu, 18 Jun 2015 17:37:04 +0100 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... In-Reply-To: <5582EF1A.6040703@redhat.com> References: <556898D2.8010103@redhat.com> <5568A16D.5090201@redhat.com> <55816BAF.7080708@redhat.com> <55817203.3010403@redhat.com> <5582DBE4.4080904@redhat.com> <5582EF1A.6040703@redhat.com> Message-ID: On 18 June 2015 at 17:17, Scott Marlow wrote: > > > On 06/18/2015 11:59 AM, Sanne Grinovero wrote: >> >> On 18 June 2015 at 15:55, Scott Marlow wrote: >>> >>> Sanne, >>> >>> The [1] pull request to bring Jipijapa source into WildFly master is >>> merged. >>> >>> I pushed a copy of the (work in progress) ORM 5 changes to github [2]. >>> >>> Is there a WildFly pull request for the changes to upgrade to Hibernate >>> Search 5.4.0.Alpha1? I didn't see one but I might of missed it. >> >> >> No there isn't, as Hibernate Search 5.4.0.Alpha1 *requires* Hibernate >> ORM 5.0.0.CR1. > > > What needs to change on WildFly for the Hibernate Search upgrade? Nothing else changes. Just change the Hibernate Search version when you change the Hibernate ORM version. > I started > with just changing the WildFly (top level) pom.xml to reference HS > 5.4.0.Alpha1. +1 > Do you expect that the latest ORM master branch will work > with HS 5.4.0.Alpha1 or is ORM 5.0.0.CR1 better? I didn't test the latest ORM master branch, but it will work with ORM 5.0.0.CR1. > Locally, I am building the latest ORM master (built from source) and using > Hibernate Search 5.4.0.Alpha1. When running the WildFly testsuite, I see a > few different errors. One of them is from the > HibernateSearchJPATestCase.testFullTextQuery test. > http://pastebin.com/Q5xLrkpT shows the WildFly server.log contents from the > Hibernate Search test. That looks like related to an Hibernate ORM change, not Search. The entity used for that test doesn't declare the fields as "public"; that used to be ok in previous versions. You could workaround it by changing the test to use either public fields or traditional getters/setters? But we should check with Steve if that change was intentional? For now, better to workaround it in the test so we don't get stuck. Thanks! Sanne > > >> >> The two should be updated in synch this time, in future there will be >> more flexibility. >> >>> >>> Scott >>> >>> [1] https://github.com/wildfly/wildfly/pull/7509 >>> >>> [2] https://github.com/scottmarlow/wildfly/tree/hibernate5_june18 From steve at hibernate.org Thu Jun 18 13:41:56 2015 From: steve at hibernate.org (Steve Ebersole) Date: Thu, 18 Jun 2015 17:41:56 +0000 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... In-Reply-To: References: <556898D2.8010103@redhat.com> <5568A16D.5090201@redhat.com> <55816BAF.7080708@redhat.com> <55817203.3010403@redhat.com> <5582DBE4.4080904@redhat.com> <5582EF1A.6040703@redhat.com> Message-ID: That should still be ok. If it does not work, that would be a bug. 
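For reference, the workaround Sanne suggests would look roughly like the following for the Book entity used by the test: switch it to property access by moving the mapping annotations onto public getters/setters. This is a minimal sketch with plain JPA annotations only; the real test entity presumably also carries Hibernate Search annotations such as @Indexed and @Field.

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// Minimal sketch of the suggested workaround: annotating the getter means the
// entity state is accessed through public accessors rather than by setting
// non-public fields reflectively.
@Entity
public class Book {

    private Long id;
    private String title;

    @Id
    @GeneratedValue
    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getTitle() {
        return title;
    }

    public void setTitle(String title) {
        this.title = title;
    }
}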
On Thu, Jun 18, 2015 at 11:38 AM Sanne Grinovero wrote: > On 18 June 2015 at 17:17, Scott Marlow wrote: > > > > > > On 06/18/2015 11:59 AM, Sanne Grinovero wrote: > >> > >> On 18 June 2015 at 15:55, Scott Marlow wrote: > >>> > >>> Sanne, > >>> > >>> The [1] pull request to bring Jipijapa source into WildFly master is > >>> merged. > >>> > >>> I pushed a copy of the (work in progress) ORM 5 changes to github [2]. > >>> > >>> Is there a WildFly pull request for the changes to upgrade to Hibernate > >>> Search 5.4.0.Alpha1? I didn't see one but I might of missed it. > >> > >> > >> No there isn't, as Hibernate Search 5.4.0.Alpha1 *requires* Hibernate > >> ORM 5.0.0.CR1. > > > > > > What needs to change on WildFly for the Hibernate Search upgrade? > > > Nothing else changes. Just change the Hibernate Search version when > you change the Hibernate ORM version. > > > I started > > with just changing the WildFly (top level) pom.xml to reference HS > > 5.4.0.Alpha1. > > +1 > > > Do you expect that the latest ORM master branch will work > > with HS 5.4.0.Alpha1 or is ORM 5.0.0.CR1 better? > > I didn't test the latest ORM master branch, but it will work with ORM > 5.0.0.CR1. > > > Locally, I am building the latest ORM master (built from source) and > using > > Hibernate Search 5.4.0.Alpha1. When running the WildFly testsuite, I > see a > > few different errors. One of them is from the > > HibernateSearchJPATestCase.testFullTextQuery test. > > http://pastebin.com/Q5xLrkpT shows the WildFly server.log contents from > the > > Hibernate Search test. > > That looks like related to an Hibernate ORM change, not Search. > The entity used for that test doesn't declare the fields as "public"; > that used to be ok in previous versions. > You could workaround it by changing the test to use either public > fields or traditional getters/setters? > But we should check with Steve if that change was intentional? For > now, better to workaround it in the test so we don't get stuck. > > Thanks! > Sanne > > > > > > >> > >> The two should be updated in synch this time, in future there will be > >> more flexibility. > >> > >>> > >>> Scott > >>> > >>> [1] https://github.com/wildfly/wildfly/pull/7509 > >>> > >>> [2] https://github.com/scottmarlow/wildfly/tree/hibernate5_june18 > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From smarlow at redhat.com Thu Jun 18 15:08:12 2015 From: smarlow at redhat.com (Scott Marlow) Date: Thu, 18 Jun 2015 15:08:12 -0400 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... In-Reply-To: References: <556898D2.8010103@redhat.com> <5568A16D.5090201@redhat.com> <55816BAF.7080708@redhat.com> <55817203.3010403@redhat.com> <5582DBE4.4080904@redhat.com> <5582EF1A.6040703@redhat.com> Message-ID: <5583171C.8010708@redhat.com> On 06/18/2015 01:41 PM, Steve Ebersole wrote: > That should still be ok. If it does not work, that would be a bug. A few other WildFly (JPA) tests also get the same error. > > On Thu, Jun 18, 2015 at 11:38 AM Sanne Grinovero > wrote: > > On 18 June 2015 at 17:17, Scott Marlow > wrote: > > > > > > On 06/18/2015 11:59 AM, Sanne Grinovero wrote: > >> > >> On 18 June 2015 at 15:55, Scott Marlow > wrote: > >>> > >>> Sanne, > >>> > >>> The [1] pull request to bring Jipijapa source into WildFly > master is > >>> merged. > >>> > >>> I pushed a copy of the (work in progress) ORM 5 changes to > github [2]. 
> >>> > >>> Is there a WildFly pull request for the changes to upgrade to > Hibernate > >>> Search 5.4.0.Alpha1? I didn't see one but I might of missed it. > >> > >> > >> No there isn't, as Hibernate Search 5.4.0.Alpha1 *requires* > Hibernate > >> ORM 5.0.0.CR1. > > > > > > What needs to change on WildFly for the Hibernate Search upgrade? > > > Nothing else changes. Just change the Hibernate Search version when > you change the Hibernate ORM version. > > > I started > > with just changing the WildFly (top level) pom.xml to reference HS > > 5.4.0.Alpha1. > > +1 > > > Do you expect that the latest ORM master branch will work > > with HS 5.4.0.Alpha1 or is ORM 5.0.0.CR1 better? > > I didn't test the latest ORM master branch, but it will work with > ORM 5.0.0.CR1. > > > Locally, I am building the latest ORM master (built from source) > and using > > Hibernate Search 5.4.0.Alpha1. When running the WildFly > testsuite, I see a > > few different errors. One of them is from the > > HibernateSearchJPATestCase.testFullTextQuery test. > > http://pastebin.com/Q5xLrkpT shows the WildFly server.log > contents from the > > Hibernate Search test. > > That looks like related to an Hibernate ORM change, not Search. > The entity used for that test doesn't declare the fields as "public"; > that used to be ok in previous versions. > You could workaround it by changing the test to use either public > fields or traditional getters/setters? > But we should check with Steve if that change was intentional? For > now, better to workaround it in the test so we don't get stuck. > > Thanks! > Sanne > > > > > > >> > >> The two should be updated in synch this time, in future there > will be > >> more flexibility. > >> > >>> > >>> Scott > >>> > >>> [1] https://github.com/wildfly/wildfly/pull/7509 > >>> > >>> [2] https://github.com/scottmarlow/wildfly/tree/hibernate5_june18 > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From smarlow at redhat.com Thu Jun 18 16:28:48 2015 From: smarlow at redhat.com (Scott Marlow) Date: Thu, 18 Jun 2015 16:28:48 -0400 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... In-Reply-To: References: <556898D2.8010103@redhat.com> <5568A16D.5090201@redhat.com> <55816BAF.7080708@redhat.com> <55817203.3010403@redhat.com> <5582DBE4.4080904@redhat.com> Message-ID: <55832A00.2090000@redhat.com> I tried deploying a simple 2lc enabled test app and got a CNFE on Infinispan classes being referenced from the application classloader. http://pastebin.com/PREzm6bn shows the exception. I'm guessing this is a classloader issue in ORM 5 to be worked out. From smarlow at redhat.com Fri Jun 19 08:46:11 2015 From: smarlow at redhat.com (Scott Marlow) Date: Fri, 19 Jun 2015 08:46:11 -0400 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... In-Reply-To: <55832A00.2090000@redhat.com> References: <556898D2.8010103@redhat.com> <5568A16D.5090201@redhat.com> <55816BAF.7080708@redhat.com> <55817203.3010403@redhat.com> <5582DBE4.4080904@redhat.com> <55832A00.2090000@redhat.com> Message-ID: <55840F13.2070503@redhat.com> To be a little more specific, WildFly is now respecting the Hibernate Search desire for Infinispan classes to *not* be available to Hibernate core/entitymanager. This is important so that Hibernate Search can have the flexibility to use a different version of Infinispan than WildFly is using with hibernate-infinispan. 
This seemed to work with Hibernate ORM 4.3.x. On 06/18/2015 04:28 PM, Scott Marlow wrote: > I tried deploying a simple 2lc enabled test app and got a CNFE on > Infinispan classes being referenced from the application classloader. > > http://pastebin.com/PREzm6bn shows the exception. I'm guessing this is > a classloader issue in ORM 5 to be worked out. > > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From steve at hibernate.org Fri Jun 19 09:32:27 2015 From: steve at hibernate.org (Steve Ebersole) Date: Fri, 19 Jun 2015 13:32:27 +0000 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... In-Reply-To: <55840F13.2070503@redhat.com> References: <556898D2.8010103@redhat.com> <5568A16D.5090201@redhat.com> <55816BAF.7080708@redhat.com> <55817203.3010403@redhat.com> <5582DBE4.4080904@redhat.com> <55832A00.2090000@redhat.com> <55840F13.2070503@redhat.com> Message-ID: If Infinispan is not available to core/hem than I am not sure how ORM not being able to find Infinispan classes is a problem with class loading in ORM. Seems to me this is simply a problem in the ClassLoader made available to ORM. On Fri, Jun 19, 2015 at 7:46 AM Scott Marlow wrote: > To be a little more specific, WildFly is now respecting the Hibernate > Search desire for Infinispan classes to *not* be available to Hibernate > core/entitymanager. This is important so that Hibernate Search can have > the flexibility to use a different version of Infinispan than WildFly is > using with hibernate-infinispan. This seemed to work with Hibernate ORM > 4.3.x. > > On 06/18/2015 04:28 PM, Scott Marlow wrote: > > I tried deploying a simple 2lc enabled test app and got a CNFE on > > Infinispan classes being referenced from the application classloader. > > > > http://pastebin.com/PREzm6bn shows the exception. I'm guessing this is > > a classloader issue in ORM 5 to be worked out. > > > > _______________________________________________ > > hibernate-dev mailing list > > hibernate-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hibernate-dev > > > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From smarlow at redhat.com Fri Jun 19 09:43:58 2015 From: smarlow at redhat.com (Scott Marlow) Date: Fri, 19 Jun 2015 09:43:58 -0400 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... In-Reply-To: References: <556898D2.8010103@redhat.com> <5568A16D.5090201@redhat.com> <55816BAF.7080708@redhat.com> <55817203.3010403@redhat.com> <5582DBE4.4080904@redhat.com> <55832A00.2090000@redhat.com> <55840F13.2070503@redhat.com> Message-ID: <55841C9E.3000501@redhat.com> I'll see if I can capture a more complete exception call stack for the CNFE. On 06/19/2015 09:32 AM, Steve Ebersole wrote: > If Infinispan is not available to core/hem than I am not sure how ORM > not being able to find Infinispan classes is a problem with class > loading in ORM. Seems to me this is simply a problem in the ClassLoader > made available to ORM. > > On Fri, Jun 19, 2015 at 7:46 AM Scott Marlow > wrote: > > To be a little more specific, WildFly is now respecting the Hibernate > Search desire for Infinispan classes to *not* be available to Hibernate > core/entitymanager. 
This is important so that Hibernate Search can have > the flexibility to use a different version of Infinispan than WildFly is > using with hibernate-infinispan. This seemed to work with Hibernate ORM > 4.3.x. > > On 06/18/2015 04:28 PM, Scott Marlow wrote: > > I tried deploying a simple 2lc enabled test app and got a CNFE on > > Infinispan classes being referenced from the application classloader. > > > > http://pastebin.com/PREzm6bn shows the exception. I'm guessing > this is > > a classloader issue in ORM 5 to be worked out. > > > > _______________________________________________ > > hibernate-dev mailing list > > hibernate-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hibernate-dev > > > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From smarlow at redhat.com Fri Jun 19 09:50:39 2015 From: smarlow at redhat.com (Scott Marlow) Date: Fri, 19 Jun 2015 09:50:39 -0400 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... In-Reply-To: <55841C9E.3000501@redhat.com> References: <556898D2.8010103@redhat.com> <5568A16D.5090201@redhat.com> <55816BAF.7080708@redhat.com> <55817203.3010403@redhat.com> <5582DBE4.4080904@redhat.com> <55832A00.2090000@redhat.com> <55840F13.2070503@redhat.com> <55841C9E.3000501@redhat.com> Message-ID: <55841E2F.3020106@redhat.com> Complete exception call stack is here http://pastebin.com/564BNWSF On 06/19/2015 09:43 AM, Scott Marlow wrote: > I'll see if I can capture a more complete exception call stack for the CNFE. > > On 06/19/2015 09:32 AM, Steve Ebersole wrote: >> If Infinispan is not available to core/hem than I am not sure how ORM >> not being able to find Infinispan classes is a problem with class >> loading in ORM. Seems to me this is simply a problem in the ClassLoader >> made available to ORM. >> >> On Fri, Jun 19, 2015 at 7:46 AM Scott Marlow > > wrote: >> >> To be a little more specific, WildFly is now respecting the Hibernate >> Search desire for Infinispan classes to *not* be available to Hibernate >> core/entitymanager. This is important so that Hibernate Search can have >> the flexibility to use a different version of Infinispan than WildFly is >> using with hibernate-infinispan. This seemed to work with Hibernate ORM >> 4.3.x. >> >> On 06/18/2015 04:28 PM, Scott Marlow wrote: >> > I tried deploying a simple 2lc enabled test app and got a CNFE on >> > Infinispan classes being referenced from the application classloader. >> > >> > http://pastebin.com/PREzm6bn shows the exception. I'm guessing >> this is >> > a classloader issue in ORM 5 to be worked out. >> > >> > _______________________________________________ >> > hibernate-dev mailing list >> > hibernate-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/hibernate-dev >> > >> _______________________________________________ >> hibernate-dev mailing list >> hibernate-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hibernate-dev >> > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From smarlow at redhat.com Fri Jun 19 11:13:22 2015 From: smarlow at redhat.com (Scott Marlow) Date: Fri, 19 Jun 2015 11:13:22 -0400 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... 
In-Reply-To: <55841E2F.3020106@redhat.com> References: <556898D2.8010103@redhat.com> <5568A16D.5090201@redhat.com> <55816BAF.7080708@redhat.com> <55817203.3010403@redhat.com> <5582DBE4.4080904@redhat.com> <55832A00.2090000@redhat.com> <55840F13.2070503@redhat.com> <55841C9E.3000501@redhat.com> <55841E2F.3020106@redhat.com> Message-ID: <55843192.3050104@redhat.com> I'm not exactly sure how infinispan-commons finds the infinispan-core classloader with ORM 4.3, but clearly that is not working now with ORM 5.0. In WildFly, the infinispan-commons classloader doesn't have access to the infinispan-core classloader. This seems somewhat related to the CNFE on org.infinispan.commons.util.CloseableIteratorSet in hibernate-infinispan that we talked about on IRC a few weeks ago (http://pastebin.com/atGrC124). We worked around that by adding the infinispan-commons classloader to the hibernate-infinispan classloader (in WildFly 10). On 06/19/2015 09:50 AM, Scott Marlow wrote: > Complete exception call stack is here http://pastebin.com/564BNWSF > > On 06/19/2015 09:43 AM, Scott Marlow wrote: >> I'll see if I can capture a more complete exception call stack for the CNFE. >> >> On 06/19/2015 09:32 AM, Steve Ebersole wrote: >>> If Infinispan is not available to core/hem than I am not sure how ORM >>> not being able to find Infinispan classes is a problem with class >>> loading in ORM. Seems to me this is simply a problem in the ClassLoader >>> made available to ORM. >>> >>> On Fri, Jun 19, 2015 at 7:46 AM Scott Marlow >> > wrote: >>> >>> To be a little more specific, WildFly is now respecting the Hibernate >>> Search desire for Infinispan classes to *not* be available to Hibernate >>> core/entitymanager. This is important so that Hibernate Search can have >>> the flexibility to use a different version of Infinispan than WildFly is >>> using with hibernate-infinispan. This seemed to work with Hibernate ORM >>> 4.3.x. >>> >>> On 06/18/2015 04:28 PM, Scott Marlow wrote: >>> > I tried deploying a simple 2lc enabled test app and got a CNFE on >>> > Infinispan classes being referenced from the application classloader. >>> > >>> > http://pastebin.com/PREzm6bn shows the exception. I'm guessing >>> this is >>> > a classloader issue in ORM 5 to be worked out. >>> > >>> > _______________________________________________ >>> > hibernate-dev mailing list >>> > hibernate-dev at lists.jboss.org >>> > https://lists.jboss.org/mailman/listinfo/hibernate-dev >>> > >>> _______________________________________________ >>> hibernate-dev mailing list >>> hibernate-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >>> >> _______________________________________________ >> hibernate-dev mailing list >> hibernate-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hibernate-dev >> > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From sanne at hibernate.org Fri Jun 19 11:47:21 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Fri, 19 Jun 2015 16:47:21 +0100 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... 
In-Reply-To: <55843192.3050104@redhat.com> References: <556898D2.8010103@redhat.com> <5568A16D.5090201@redhat.com> <55816BAF.7080708@redhat.com> <55817203.3010403@redhat.com> <5582DBE4.4080904@redhat.com> <55832A00.2090000@redhat.com> <55840F13.2070503@redhat.com> <55841C9E.3000501@redhat.com> <55841E2F.3020106@redhat.com> <55843192.3050104@redhat.com> Message-ID: On 19 June 2015 at 16:13, Scott Marlow wrote: > I'm not exactly sure how infinispan-commons finds the infinispan-core > classloader with ORM 4.3, but clearly that is not working now with ORM 5.0. > > In WildFly, the infinispan-commons classloader doesn't have access to the > infinispan-core classloader. > > This seems somewhat related to the CNFE on > org.infinispan.commons.util.CloseableIteratorSet in hibernate-infinispan > that we talked about on IRC a few weeks ago (http://pastebin.com/atGrC124). > We worked around that by adding the infinispan-commons classloader to the > hibernate-infinispan classloader (in WildFly 10). I'm familiar with that kind of issues, so created and assigned it to myself: https://hibernate.atlassian.net/browse/HHH-9874 Thanks, Sanne > > > > On 06/19/2015 09:50 AM, Scott Marlow wrote: >> >> Complete exception call stack is here http://pastebin.com/564BNWSF >> >> On 06/19/2015 09:43 AM, Scott Marlow wrote: >>> >>> I'll see if I can capture a more complete exception call stack for the >>> CNFE. >>> >>> On 06/19/2015 09:32 AM, Steve Ebersole wrote: >>>> >>>> If Infinispan is not available to core/hem than I am not sure how ORM >>>> not being able to find Infinispan classes is a problem with class >>>> loading in ORM. Seems to me this is simply a problem in the ClassLoader >>>> made available to ORM. >>>> >>>> On Fri, Jun 19, 2015 at 7:46 AM Scott Marlow >>> > wrote: >>>> >>>> To be a little more specific, WildFly is now respecting the >>>> Hibernate >>>> Search desire for Infinispan classes to *not* be available to >>>> Hibernate >>>> core/entitymanager. This is important so that Hibernate Search >>>> can have >>>> the flexibility to use a different version of Infinispan than >>>> WildFly is >>>> using with hibernate-infinispan. This seemed to work with >>>> Hibernate ORM >>>> 4.3.x. >>>> >>>> On 06/18/2015 04:28 PM, Scott Marlow wrote: >>>> > I tried deploying a simple 2lc enabled test app and got a CNFE >>>> on >>>> > Infinispan classes being referenced from the application >>>> classloader. >>>> > >>>> > http://pastebin.com/PREzm6bn shows the exception. I'm guessing >>>> this is >>>> > a classloader issue in ORM 5 to be worked out. 
>>>> > >>>> > _______________________________________________ >>>> > hibernate-dev mailing list >>>> > hibernate-dev at lists.jboss.org >>>> >>>> > https://lists.jboss.org/mailman/listinfo/hibernate-dev >>>> > >>>> _______________________________________________ >>>> hibernate-dev mailing list >>>> hibernate-dev at lists.jboss.org >>>> >>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >>>> >>> _______________________________________________ >>> hibernate-dev mailing list >>> hibernate-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >>> >> _______________________________________________ >> hibernate-dev mailing list >> hibernate-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hibernate-dev >> > From steve at hibernate.org Sun Jun 21 09:19:12 2015 From: steve at hibernate.org (Steve Ebersole) Date: Sun, 21 Jun 2015 13:19:12 +0000 Subject: [hibernate-dev] 5.0.0.CR2 delay Message-ID: The timebox for CR2 release is next Wednesday. However I am taking some time off early next week. As a result I am going to push CR2 back one week. From steve at hibernate.org Sun Jun 21 15:26:39 2015 From: steve at hibernate.org (Steve Ebersole) Date: Sun, 21 Jun 2015 19:26:39 +0000 Subject: [hibernate-dev] Query handling : Antlr 3 versus Antlr 4 In-Reply-To: References: Message-ID: I have committed a lot of work on this PoC to my upstream repo. The initial walk to process the explicit from-clause is pretty much done. Overall I really like the shape of FromClause, FromElementSpace and FromElement. One question of structure here I have not decided yet is the relationship of joins. At the moment I simply collect all joins for a FromElementSpace in a single List. So FromElementSpace has a root FromElement and then a list of JoinedFromElements. The other option is to link up joins under their corresponding "left hand side". This second option is harder, and I am not sure what we gain specifically. The FromClauses, etc ultimately become part of the semantic query representation. The semantic tree is shaped by a second walk over the parse tree. This part is still a work in progress, although it is pretty far along. The current semantic tree is a first (rough) cut. Ultimately I'd like to move the shape of the semantic tree more in the direction of the JPA criteria contracts. Specifically, the parts I am missing right now is the concept of a Path. A Path would help unify/centralize some of the things from Expression and FromElement. I will need to refocus back on 5.0 for the next week or so. Once CR2 is done I will come back to this work, but soon we will need to have a discussion/vote about the use of Antlr v3 versus v4 and, if we go with v4, the specific approach. Also we need to prioritize this against the other roadmap items. To me the 2 highest priorities need to be this work and the Jandex/general-annotation-binding-redesign work. But we will need to rank them all. On Wed, Jun 17, 2015 at 7:47 AM Gunnar Morling wrote: > This seems very similar to what I had in mind with the decorator stuff. > The decorating elements would represent that manually implemented semantic > tree. > > In the end it probably doesn't even matter whether the elements of that > tree would have links to the parse tree elements they originated from or > whether that tree is completely "stand-alone". As you say, we'd > traverse/alter it with our own listeners. In my understanding that's as > good as it gets with Antlr4. 
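As an aside on the structural question raised above for the PoC (a flat list of joins per FromElementSpace versus joins linked under their left hand side), the two shapes boil down to roughly the following; the field names here are assumptions based on the description, not the actual PoC code.

import java.util.ArrayList;
import java.util.List;

// Option A (the current shape, per the description): the space keeps its root
// plus one flat list of all joined elements.
class FromElementSpace {
    FromElement root;
    List<JoinedFromElement> joins = new ArrayList<>();
}

class FromElement {
    String entityName;
    String alias;
}

// Option B would additionally record the left hand side on each join, so that
// joins hang off the element they were joined from.
class JoinedFromElement extends FromElement {
    FromElement leftHandSide; // only populated under option B
    String joinPath;          // e.g. "c.headquarters"
}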
> > > > 2015-06-16 23:04 GMT+02:00 Steve Ebersole : > >> I am not so sure that manually building a tree that would work with >> listeners/visitors generated from a second grammar is going to be an >> option. I have asked on SO and on the Antlr discussion group and >> basically >> got no responses as to how that might be possible. See >> https://groups.google.com/forum/#!topic/antlr-discussion/vBkwCovqHcI >> >> So the question is whether generating a semantic tree that is not Antlr >> specific is a viable alternative. I think it is. And we can still >> provide >> hand written listener and or visitor for processing this. >> > _______________________________________________ >> hibernate-dev mailing list >> hibernate-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hibernate-dev >> > > From steve at hibernate.org Sun Jun 21 15:42:31 2015 From: steve at hibernate.org (Steve Ebersole) Date: Sun, 21 Jun 2015 19:42:31 +0000 Subject: [hibernate-dev] Query handling : Antlr 3 versus Antlr 4 In-Reply-To: References: Message-ID: On Wed, Jun 17, 2015 at 7:39 AM Gunnar Morling > Yes, having such extension point seems reasonable. OGM would probably use > the same implementation as ORM, but other users may plug in another impl > based on their own type of entity definitions. Would the scope of that > extension point be solely attribute resolution or also handling of other > things such as literals? I'd hope the latter could be done in a unified way > by the parser? > That would be my preference as well. I think it makes the most sense to do this stuff once in a single place. Some specific capabilities we would need: 1) Ability to resolve "entity description" based on query token, including: 1.a) handling of "query imports" (aka, "MyEntity" -> "com.acme.MyEntity") 1.b) polymorphic query references (i.e., "from java.lang.Object") 2) Access to "Attribute descriptors", ideally hooked into "entity/type descriptors". 3) General classloading (Java/Enum constant/literal resolution) These things combined would allow us to perform all the generic semantic validations we need. Much of this is in place already. I am not sure in terms of exact types to be returned, but it'd help if the > returned structure contained information about the actually affected > "tables" (or more generally, "structures" in the query backend), so that > users don't each have to deal with resolving that information wrt. the > current mapping strategy. That need some extension point for specifying the > sub-types of given types. Again, OGM would probably share an impl. with ORM. > At the moment this is handled via org.hibernate.hql.parser.model.PolymorphicEntityTypeDescriptor which is an extension to EntityTypeDescriptor. E.g., in the query "from java.lang.Object" we'd report back PolymorphicEntityTypeDescriptor that aggregates all root entities. The parser can always just deal with the TypeDescriptor abstractions. From sanne at hibernate.org Mon Jun 22 08:11:18 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Mon, 22 Jun 2015 13:11:18 +0100 Subject: [hibernate-dev] Infinispan JTA lookup In-Reply-To: <14e11d14d44-6774-af5a@webprd-m102.mail.aol.com> References: <14e11d14d44-6774-af5a@webprd-m102.mail.aol.com> Message-ID: Hi Martin, I don't think you'll be able to convince the Infinispan of doing that: one of the major burdens of using Infinispan for users is that it is composed of so many jars, so we'll actually try to make the number of dependencies smaller. 
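For context on how small the contract in question is: the lookup being discussed essentially boils down to an interface along these lines (a hedged sketch, not the literal org.infinispan.transaction.lookup API and not a proposed Hibernate Search service).

import javax.transaction.TransactionManager;

// Sketch of the kind of tiny lookup SPI being discussed: something an
// integrator could implement and plug in, instead of depending on
// infinispan-core just for GenericTransactionManagerLookup.
public interface TransactionManagerLookup {

    // Returns the TransactionManager of the current platform/container,
    // for example obtained via a JNDI lookup in a Java EE environment.
    TransactionManager getTransactionManager() throws Exception;
}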
Apparently a lot of people do not know how to use dependency managers like Maven and struggle with such things.. :-( I'd also question the usefulness of doing this: even if the platform / container you're using does provide some JTA capability, it's not necessary the same one as your JPA implementor is being configured to use (some people really like to just use the JDBC transactions). Hibernate ORM has a similar Service to abstract "how" exactly to interact with transactions; I understand you don't want to rely on the Hibernate specific one but other JPA implementors would probably have a similar facility? Maybe we should just make an Hibernate Search "Service" for this and let users plug their custom one? You could provide a couple of implementations to satisfy the needs of the main JPA implementations. Cheers, Sanne On 20 June 2015 at 17:33, Martin Braun wrote: > Hi, > > I just stumbled upon the JTA TransactionManager lookup mechanism of > Infinispan and am now using this when a JTA transaction is needed. > This means I don't have hacky lookups of UserTransactions. Now I've been > wondering if it was possible to make the lookup mechanism > a separate module in Infinispan so I don't have to import the whole thing. > > I am talking about > org.infinispan.transaction.lookup.GenericTransactionManagerLookup in > particular. Do you think that I can convince the Infinispan team > (which includes you :D) to keep that in a different place than > infinispan-core? > > cheers, > > Martin Braun > martinbraun123 at aol.com > www.github.com/s4ke From hardy at hibernate.org Thu Jun 25 05:45:16 2015 From: hardy at hibernate.org (Hardy Ferentschik) Date: Thu, 25 Jun 2015 11:45:16 +0200 Subject: [hibernate-dev] Revamped in.relation.to is live now Message-ID: <20150625094516.GA34712@Nineveh.lan> Hi all, just a thumbs up that the revamped in.relation.to site is now live! If you want/need to blog, it is time to read through http://in.relation.to/README/ In particular you want to setup the site on your local machine (http://in.relation.to/README/#installation) and once that is done you want to make yourself familiar with how to write a blog post (http://in.relation.to/README/#write-a-blog). If you have problems with the setup, let us know. Quite a bit of work went into making the process as simple and errorproof as possible. Just make sure to follow the instructions. Once you are happy with your changes locally you can push to 'staging' and preview the changes (http://in.relation.to/README/#preview-changes-on-staging-in-relation-to). Last but not least, you can go live via a push to 'production' - http://in.relation.to/README/#publish-changes-to-production For all non bloggers - just enjoy the new, mobile friendly in.relation.to. --Hardy -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 496 bytes Desc: not available Url : http://lists.jboss.org/pipermail/hibernate-dev/attachments/20150625/e332aa3d/attachment.bin From smarlow at redhat.com Fri Jun 26 10:02:12 2015 From: smarlow at redhat.com (Scott Marlow) Date: Fri, 26 Jun 2015 10:02:12 -0400 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... 
In-Reply-To: References: <556898D2.8010103@redhat.com> <5568A16D.5090201@redhat.com> <55816BAF.7080708@redhat.com> <55817203.3010403@redhat.com> <5582DBE4.4080904@redhat.com> <5582EF1A.6040703@redhat.com> Message-ID: <558D5B64.90902@redhat.com> The ISE message is: " Can not set java.lang.Long field org.jboss.as.test.integration.hibernate.search.Book.id to org.jboss.as.test.integration.hibernate.search.Book ", which does sound like a bug. On 06/18/2015 01:41 PM, Steve Ebersole wrote: > That should still be ok. If it does not work, that would be a bug. > > On Thu, Jun 18, 2015 at 11:38 AM Sanne Grinovero > wrote: > > On 18 June 2015 at 17:17, Scott Marlow > wrote: > > > > > > On 06/18/2015 11:59 AM, Sanne Grinovero wrote: > >> > >> On 18 June 2015 at 15:55, Scott Marlow > wrote: > >>> > >>> Sanne, > >>> > >>> The [1] pull request to bring Jipijapa source into WildFly > master is > >>> merged. > >>> > >>> I pushed a copy of the (work in progress) ORM 5 changes to > github [2]. > >>> > >>> Is there a WildFly pull request for the changes to upgrade to > Hibernate > >>> Search 5.4.0.Alpha1? I didn't see one but I might of missed it. > >> > >> > >> No there isn't, as Hibernate Search 5.4.0.Alpha1 *requires* > Hibernate > >> ORM 5.0.0.CR1. > > > > > > What needs to change on WildFly for the Hibernate Search upgrade? > > > Nothing else changes. Just change the Hibernate Search version when > you change the Hibernate ORM version. > > > I started > > with just changing the WildFly (top level) pom.xml to reference HS > > 5.4.0.Alpha1. > > +1 > > > Do you expect that the latest ORM master branch will work > > with HS 5.4.0.Alpha1 or is ORM 5.0.0.CR1 better? > > I didn't test the latest ORM master branch, but it will work with > ORM 5.0.0.CR1. > > > Locally, I am building the latest ORM master (built from source) > and using > > Hibernate Search 5.4.0.Alpha1. When running the WildFly > testsuite, I see a > > few different errors. One of them is from the > > HibernateSearchJPATestCase.testFullTextQuery test. > > http://pastebin.com/Q5xLrkpT shows the WildFly server.log > contents from the > > Hibernate Search test. > > That looks like related to an Hibernate ORM change, not Search. > The entity used for that test doesn't declare the fields as "public"; > that used to be ok in previous versions. > You could workaround it by changing the test to use either public > fields or traditional getters/setters? > But we should check with Steve if that change was intentional? For > now, better to workaround it in the test so we don't get stuck. > > Thanks! > Sanne > > > > > > >> > >> The two should be updated in synch this time, in future there > will be > >> more flexibility. > >> > >>> > >>> Scott > >>> > >>> [1] https://github.com/wildfly/wildfly/pull/7509 > >>> > >>> [2] https://github.com/scottmarlow/wildfly/tree/hibernate5_june18 > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev > From sanne at hibernate.org Fri Jun 26 12:12:00 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Fri, 26 Jun 2015 17:12:00 +0100 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... 
In-Reply-To: <558D5B64.90902@redhat.com> References: <556898D2.8010103@redhat.com> <5568A16D.5090201@redhat.com> <55816BAF.7080708@redhat.com> <55817203.3010403@redhat.com> <5582DBE4.4080904@redhat.com> <5582EF1A.6040703@redhat.com> <558D5B64.90902@redhat.com> Message-ID: On 26 June 2015 at 15:02, Scott Marlow wrote: > The ISE message is: > " > Can not set java.lang.Long field > org.jboss.as.test.integration.hibernate.search.Book.id to > org.jboss.as.test.integration.hibernate.search.Book > ", which does sound like a bug. Hi Scott, could you just make those fields in the test "public", so we can get an ORM5 version of WildFly to play with? We've lots more work to do which is blocked by that. I've opened HHH-9887 to track this, but I'd treat it as a minor regression which shouldn't slow us down with integration in WildFly; not least it would be much more convenient for us to reproduce this if we upgrade WildFly first. Thanks, Sanne > > On 06/18/2015 01:41 PM, Steve Ebersole wrote: >> >> That should still be ok. If it does not work, that would be a bug. >> >> On Thu, Jun 18, 2015 at 11:38 AM Sanne Grinovero > > wrote: >> >> On 18 June 2015 at 17:17, Scott Marlow > > wrote: >> > >> > >> > On 06/18/2015 11:59 AM, Sanne Grinovero wrote: >> >> >> >> On 18 June 2015 at 15:55, Scott Marlow > > wrote: >> >>> >> >>> Sanne, >> >>> >> >>> The [1] pull request to bring Jipijapa source into WildFly >> master is >> >>> merged. >> >>> >> >>> I pushed a copy of the (work in progress) ORM 5 changes to >> github [2]. >> >>> >> >>> Is there a WildFly pull request for the changes to upgrade to >> Hibernate >> >>> Search 5.4.0.Alpha1? I didn't see one but I might of missed it. >> >> >> >> >> >> No there isn't, as Hibernate Search 5.4.0.Alpha1 *requires* >> Hibernate >> >> ORM 5.0.0.CR1. >> > >> > >> > What needs to change on WildFly for the Hibernate Search upgrade? >> >> >> Nothing else changes. Just change the Hibernate Search version when >> you change the Hibernate ORM version. >> >> > I started >> > with just changing the WildFly (top level) pom.xml to reference HS >> > 5.4.0.Alpha1. >> >> +1 >> >> > Do you expect that the latest ORM master branch will work >> > with HS 5.4.0.Alpha1 or is ORM 5.0.0.CR1 better? >> >> I didn't test the latest ORM master branch, but it will work with >> ORM 5.0.0.CR1. >> >> > Locally, I am building the latest ORM master (built from source) >> and using >> > Hibernate Search 5.4.0.Alpha1. When running the WildFly >> testsuite, I see a >> > few different errors. One of them is from the >> > HibernateSearchJPATestCase.testFullTextQuery test. >> > http://pastebin.com/Q5xLrkpT shows the WildFly server.log >> contents from the >> > Hibernate Search test. >> >> That looks like related to an Hibernate ORM change, not Search. >> The entity used for that test doesn't declare the fields as "public"; >> that used to be ok in previous versions. >> You could workaround it by changing the test to use either public >> fields or traditional getters/setters? >> But we should check with Steve if that change was intentional? For >> now, better to workaround it in the test so we don't get stuck. >> >> Thanks! >> Sanne >> >> > >> > >> >> >> >> The two should be updated in synch this time, in future there >> will be >> >> more flexibility. 
>> >> >> >>> >> >>> Scott >> >>> >> >>> [1] https://github.com/wildfly/wildfly/pull/7509 >> >>> >> >>> [2] https://github.com/scottmarlow/wildfly/tree/hibernate5_june18 >> _______________________________________________ >> hibernate-dev mailing list >> hibernate-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hibernate-dev >> > From brmeyer at redhat.com Fri Jun 26 17:51:39 2015 From: brmeyer at redhat.com (Brett Meyer) Date: Fri, 26 Jun 2015 17:51:39 -0400 (EDT) Subject: [hibernate-dev] test case templates In-Reply-To: <1110970513.26953813.1435355293430.JavaMail.zimbra@redhat.com> Message-ID: <466285221.26956086.1435355499990.JavaMail.zimbra@redhat.com> Just wanted to point out a new repo in our GitHub org: https://github.com/hibernate/hibernate-test-case-templates Many users have asked to have templates to use when creating reproducer/regression tests for bug reports. As a starting point, I included both a standalone example, as well as one that uses our unit-test framework's BaseCoreFunctionalTest. Feel free to make modifications to these, upstream, as necessary. ORM is currently the only project with templates, but I assume this might be helpful for Search, Validator, and OGM as well. From sanne at hibernate.org Sat Jun 27 07:42:10 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Sat, 27 Jun 2015 12:42:10 +0100 Subject: [hibernate-dev] test case templates In-Reply-To: <466285221.26956086.1435355499990.JavaMail.zimbra@redhat.com> References: <1110970513.26953813.1435355293430.JavaMail.zimbra@redhat.com> <466285221.26956086.1435355499990.JavaMail.zimbra@redhat.com> Message-ID: Great idea Brett! Thanks for starting this On 26 June 2015 at 22:51, Brett Meyer wrote: > Just wanted to point out a new repo in our GitHub org: https://github.com/hibernate/hibernate-test-case-templates > > Many users have asked to have templates to use when creating reproducer/regression tests for bug reports. As a starting point, I included both a standalone example, as well as one that uses our unit-test framework's BaseCoreFunctionalTest. > > Feel free to make modifications to these, upstream, as necessary. ORM is currently the only project with templates, but I assume this might be helpful for Search, Validator, and OGM as well. > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev From sanne at hibernate.org Sun Jun 28 19:43:57 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Mon, 29 Jun 2015 00:43:57 +0100 Subject: [hibernate-dev] development sprint start: Hibernate Search Message-ID: Hello, welcome to Hibernate Search time! [for those unaware: some of are now experimenting to work on 2-3 week sprints fully focused on a single Hibernate project, rotating the subject. 
We decided this privately as it's a matter of time-management for us, but I'm now opening the conversation up to all developers and contributors as it affects the project evolution and technical discussion; essentially it means we'll be focused on Hibernate Search more than other projects in the next few weeks, and aim at get some significant stuff done] My first and foremost goal for the next couple of weeks would be to drive forward a pain point which is all of: - showing active interest from several power-contributors [1,2,3] - highly demanded from product perspective - had lots of people *begging* for better solutions in the past You might have guessed: I'm talking about the backend configuration complexity in a clustered environment: both the JGroups and the JMS solutions expose the user to various complex system settings. With Emmanuel and Hardy I had some hints of conversations about it, but essentially to start this subject I'm proposing a meeting to discuss these; we can try and make it open to everyone, I might even make a couple of slides. # What do we want During our last meeting, a scary point was to hear that Emmanuel was considering the priority to be free form. It never was for me, and while we didn't dig during that call, we better clarify this soon. Let's please find a moment on IRC to discuss the goals, especially as I need to update the project roadmap. # How do we want it I've been hoping for a clear/formal set of requirements to be provided by some users, as there are many ways to look at the problem. But this never came, and I'm concluding that: A) if a paying customer or other kind of sponsor will want to discuss these requirements I better fly to them and talk face to face. B) I'm being lazy and selfish in expecting externals to clarify all details.. I shouldn't try to deflect this hard problem. I've been thinking of several possible ways, and there are lots of options, and some tradeoffs to choose from. One of these options is to use a distributed consensus - since we already use JGroups in various projects, JGroups RAFT [4] seems a natural candidate but while I'd love the excuse to play with it, it's a very new codebase. Another option would be the more mature Apache Kafka - great for log based replication so might even be complementary to the JGroups RAFT implementation - or just improve JMS (via the standard or via Apache Camel) to have it integrate with Transactions [5: just got a contribution!] and provide better failover options. Not least, I just heard that WildFly 10 is going to provide some form of automatic HA/JMS singleton consumer.. I just heard about it and will need to find more about it. While it's tempting to implement our own custom super clever backend, we should prioritize for an off-the-shelf method with high return on investment to solve the pain point. Also, as suggested by Hardy some months ago, it would be awesome to have the so called "Hibernate Search master node" not need any entity classes nor depend at all to the deployed application (nor its extensions like analyzers), so that if the solution still needs a "master role", we could simply provide a master app which doesn't need changes on application changes. This would necessarily be a change for 6.0, but let's either prepare for that, or get rid of the "master node" concept altogether. 
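For illustration of the pain point above, this is roughly what the JMS master/slave setup asks of users today on every slave node (a sketch only: treat the exact property names and values as approximate and check the reference documentation; the master additionally needs a filesystem-master directory provider plus a message-driven consumer for the queue):

# slave node: local index copy, shared index source, JMS worker backend
hibernate.search.default.directory_provider = filesystem-slave
hibernate.search.default.indexBase = /var/lucene/local-copy
hibernate.search.default.sourceBase = /mnt/shared/lucene/source
hibernate.search.default.refresh = 1800
hibernate.search.default.worker.backend = jms
hibernate.search.default.worker.jms.connection_factory = /ConnectionFactory
hibernate.search.default.worker.jms.queue = queue/hibernatesearch

Keeping the shared filesystem, the refresh period, the queue and the master-side consumer aligned across all nodes is exactly the kind of setup an off-the-shelf backend should take off the user's hands.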
We have been sitting and thinking about the problem since a while, I'd love now to see some empirical progress: merge them as experimental, and have some natural selection happen while these would also help us to refine the requirements. ## Other topics Of course this isn't the only thing we'll be working on. The primary goal of current branch is still to deliver an Hibernate ORM 5 compatible version, but we're in an horrible position with that since WildFly 10 just released another alpha tag which still doesn't use Hibernate ORM v.5; Since it's [currently and temporarily] hard to run WildFly overriding the Hibernate ORM version, we won't be able to close in to a CR or Final release until at least the next Wildfly tag. In the meantime we can do some of research needed for the above topic, and make progress with the many issues open for 5.4 [6]. Another subject which we really should work on in this sprint, is to avoid transaction timeout on MassIndexer within a container [7] So, for tomorrow: to get started, JIRA is updated and you have all tasks assigned already. Let's start from there, and then schedule a meeting to discuss the above. Thanks! Sanne References : 1 - https://github.com/umbrew/org.umbrew.hibernate.database.worker.backend 2 - https://github.com/mrobson/hibernate-search-infinispan-jms 3 - https://forum.hibernate.org/viewtopic.php?f=9&t=1040179 4 - https://github.com/belaban/jgroups-raft 5 - https://hibernate.atlassian.net/browse/HSEARCH-668 6 - https://hibernate.atlassian.net/issues/?filter=12266 7 - https://hibernate.atlassian.net/browse/HSEARCH-1474 From smarlow at redhat.com Mon Jun 29 09:18:25 2015 From: smarlow at redhat.com (Scott Marlow) Date: Mon, 29 Jun 2015 09:18:25 -0400 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... In-Reply-To: References: <556898D2.8010103@redhat.com> <5568A16D.5090201@redhat.com> <55816BAF.7080708@redhat.com> <55817203.3010403@redhat.com> <5582DBE4.4080904@redhat.com> <5582EF1A.6040703@redhat.com> <558D5B64.90902@redhat.com> Message-ID: <559145A1.30400@redhat.com> On 06/26/2015 12:12 PM, Sanne Grinovero wrote: > On 26 June 2015 at 15:02, Scott Marlow wrote: >> The ISE message is: >> " >> Can not set java.lang.Long field >> org.jboss.as.test.integration.hibernate.search.Book.id to >> org.jboss.as.test.integration.hibernate.search.Book >> ", which does sound like a bug. > > Hi Scott, could you just make those fields in the test "public", so we > can get an ORM5 version of WildFly to play with? We've lots more work > to do which is blocked by that. > > I've opened HHH-9887 to track this, but I'd treat it as a minor > regression which shouldn't slow us down with integration in WildFly; > not least it would be much more convenient for us to reproduce this if > we upgrade WildFly first. Which ORM class should be debugged to find the cause? The same error occurs whether the 'id' field is public or not (see my comment on HHH-9887). The same failure occurs for WildFly tests: HibernateSearchJPATestCase, JPABeanValidationTestCase, DataSourceDefinitionJPATestCase, JPA2LCTestCase, WebJPATestCase. > > Thanks, > Sanne > > >> >> On 06/18/2015 01:41 PM, Steve Ebersole wrote: >>> >>> That should still be ok. If it does not work, that would be a bug. 
>>> >>> On Thu, Jun 18, 2015 at 11:38 AM Sanne Grinovero >> > wrote: >>> >>> On 18 June 2015 at 17:17, Scott Marlow >> > wrote: >>> > >>> > >>> > On 06/18/2015 11:59 AM, Sanne Grinovero wrote: >>> >> >>> >> On 18 June 2015 at 15:55, Scott Marlow >> > wrote: >>> >>> >>> >>> Sanne, >>> >>> >>> >>> The [1] pull request to bring Jipijapa source into WildFly >>> master is >>> >>> merged. >>> >>> >>> >>> I pushed a copy of the (work in progress) ORM 5 changes to >>> github [2]. >>> >>> >>> >>> Is there a WildFly pull request for the changes to upgrade to >>> Hibernate >>> >>> Search 5.4.0.Alpha1? I didn't see one but I might of missed it. >>> >> >>> >> >>> >> No there isn't, as Hibernate Search 5.4.0.Alpha1 *requires* >>> Hibernate >>> >> ORM 5.0.0.CR1. >>> > >>> > >>> > What needs to change on WildFly for the Hibernate Search upgrade? >>> >>> >>> Nothing else changes. Just change the Hibernate Search version when >>> you change the Hibernate ORM version. >>> >>> > I started >>> > with just changing the WildFly (top level) pom.xml to reference HS >>> > 5.4.0.Alpha1. >>> >>> +1 >>> >>> > Do you expect that the latest ORM master branch will work >>> > with HS 5.4.0.Alpha1 or is ORM 5.0.0.CR1 better? >>> >>> I didn't test the latest ORM master branch, but it will work with >>> ORM 5.0.0.CR1. >>> >>> > Locally, I am building the latest ORM master (built from source) >>> and using >>> > Hibernate Search 5.4.0.Alpha1. When running the WildFly >>> testsuite, I see a >>> > few different errors. One of them is from the >>> > HibernateSearchJPATestCase.testFullTextQuery test. >>> > http://pastebin.com/Q5xLrkpT shows the WildFly server.log >>> contents from the >>> > Hibernate Search test. >>> >>> That looks like related to an Hibernate ORM change, not Search. >>> The entity used for that test doesn't declare the fields as "public"; >>> that used to be ok in previous versions. >>> You could workaround it by changing the test to use either public >>> fields or traditional getters/setters? >>> But we should check with Steve if that change was intentional? For >>> now, better to workaround it in the test so we don't get stuck. >>> >>> Thanks! >>> Sanne >>> >>> > >>> > >>> >> >>> >> The two should be updated in synch this time, in future there >>> will be >>> >> more flexibility. >>> >> >>> >>> >>> >>> Scott >>> >>> >>> >>> [1] https://github.com/wildfly/wildfly/pull/7509 >>> >>> >>> >>> [2] https://github.com/scottmarlow/wildfly/tree/hibernate5_june18 >>> _______________________________________________ >>> hibernate-dev mailing list >>> hibernate-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >>> >> From emmanuel at hibernate.org Mon Jun 29 09:47:58 2015 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Mon, 29 Jun 2015 15:47:58 +0200 Subject: [hibernate-dev] development sprint start: Hibernate Search In-Reply-To: References: Message-ID: <8BA5D466-B6A4-40EA-AA7C-52188117DA69@hibernate.org> Notes from the discussion on roadmap in decreasing order. Parallelize up to 4 tasks max. ### Must do Lucene 5 Highly available master re-election - JMS messages within the transaction - high available master re-election - Kafka? if JMS clustered MDB does not work Compatible with Java 9 - should be minor ### Should have Timeout exception for MassIndexer ElasticSearch backend Free Form Emmanuel > On 29 Jun 2015, at 01:43, Sanne Grinovero wrote: > > Hello, > welcome to Hibernate Search time! 
> > [for those unaware: some of are now experimenting to work on 2-3 week > sprints fully focused on a single Hibernate project, rotating the > subject. We decided this privately as it's a matter of time-management > for us, but I'm now opening the conversation up to all developers and > contributors as it affects the project evolution and technical > discussion; essentially it means we'll be focused on Hibernate Search > more than other projects in the next few weeks, and aim at get some > significant stuff done] > > My first and foremost goal for the next couple of weeks would be to > drive forward a pain point which is all of: > - showing active interest from several power-contributors [1,2,3] > - highly demanded from product perspective > - had lots of people *begging* for better solutions in the past > > You might have guessed: I'm talking about the backend configuration > complexity in a clustered environment: both the JGroups and the JMS > solutions expose the user to various complex system settings. > With Emmanuel and Hardy I had some hints of conversations about it, > but essentially to start this subject I'm proposing a meeting to > discuss these; we can try and make it open to everyone, I might even > make a couple of slides. > > # What do we want > During our last meeting, a scary point was to hear that Emmanuel was > considering the priority to be free form. It never was for me, and > while we didn't dig during that call, we better clarify this soon. > Let's please find a moment on IRC to discuss the goals, especially as > I need to update the project roadmap. > > # How do we want it > I've been hoping for a clear/formal set of requirements to be provided > by some users, as there are many ways to look at the problem. > But this never came, and I'm concluding that: > A) if a paying customer or other kind of sponsor will want to discuss > these requirements I better fly to them and talk face to face. > B) I'm being lazy and selfish in expecting externals to clarify all > details.. I shouldn't try to deflect this hard problem. > > I've been thinking of several possible ways, and there are lots of > options, and some tradeoffs to choose from. > One of these options is to use a distributed consensus - since we > already use JGroups in various projects, JGroups RAFT [4] seems a > natural candidate but while I'd love the excuse to play with it, it's > a very new codebase. > Another option would be the more mature Apache Kafka - great for log > based replication so might even be complementary to the JGroups RAFT > implementation - or just improve JMS (via the standard or via Apache > Camel) to have it integrate with Transactions [5: just got a > contribution!] and provide better failover options. > Not least, I just heard that WildFly 10 is going to provide some form > of automatic HA/JMS singleton consumer.. I just heard about it and > will need to find more about it. > > While it's tempting to implement our own custom super clever backend, > we should prioritize for an off-the-shelf method with high return on > investment to solve the pain point. > Also, as suggested by Hardy some months ago, it would be awesome to > have the so called "Hibernate Search master node" not need any entity > classes nor depend at all to the deployed application (nor its > extensions like analyzers), so that if the solution still needs a > "master role", we could simply provide a master app which doesn't need > changes on application changes. 
This would necessarily be a change for > 6.0, but let's either prepare for that, or get rid of the "master > node" concept altogether. > > We have been sitting and thinking about the problem since a while, I'd > love now to see some empirical progress: merge them as experimental, > and have some natural selection happen while these would also help us > to refine the requirements. > > ## Other topics > Of course this isn't the only thing we'll be working on. The primary > goal of current branch is still to deliver an Hibernate ORM 5 > compatible version, but we're in an horrible position with that since > WildFly 10 just released another alpha tag which still doesn't use > Hibernate ORM v.5; Since it's [currently and temporarily] hard to run > WildFly overriding the Hibernate ORM version, we won't be able to > close in to a CR or Final release until at least the next Wildfly tag. > In the meantime we can do some of research needed for the above topic, > and make progress with the many issues open for 5.4 [6]. > > Another subject which we really should work on in this sprint, is to > avoid transaction timeout on MassIndexer within a container [7] > > So, for tomorrow: to get started, JIRA is updated and you have all > tasks assigned already. Let's start from there, and then schedule a > meeting to discuss the above. > > Thanks! > Sanne > > References : > 1 - https://github.com/umbrew/org.umbrew.hibernate.database.worker.backend > 2 - https://github.com/mrobson/hibernate-search-infinispan-jms > 3 - https://forum.hibernate.org/viewtopic.php?f=9&t=1040179 > 4 - https://github.com/belaban/jgroups-raft > 5 - https://hibernate.atlassian.net/browse/HSEARCH-668 > 6 - https://hibernate.atlassian.net/issues/?filter=12266 > 7 - https://hibernate.atlassian.net/browse/HSEARCH-1474 > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev From steve at hibernate.org Mon Jun 29 10:06:48 2015 From: steve at hibernate.org (Steve Ebersole) Date: Mon, 29 Jun 2015 14:06:48 +0000 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... In-Reply-To: <559145A1.30400@redhat.com> References: <556898D2.8010103@redhat.com> <5568A16D.5090201@redhat.com> <55816BAF.7080708@redhat.com> <55817203.3010403@redhat.com> <5582DBE4.4080904@redhat.com> <5582EF1A.6040703@redhat.com> <558D5B64.90902@redhat.com> <559145A1.30400@redhat.com> Message-ID: org.hibernate.property.access.spi.GetterFieldImpl Its possible something is amiss in org.hibernate.property.access.internal.PropertyAccessStrategyFieldImpl, but considering GetterFieldImpl is chosen (properly) and that is where things break down that is where I would look. I find it strange though that this works in our test suite. Maybe some strange class loader issue? On Mon, Jun 29, 2015 at 8:18 AM Scott Marlow wrote: > On 06/26/2015 12:12 PM, Sanne Grinovero wrote: > > On 26 June 2015 at 15:02, Scott Marlow wrote: > >> The ISE message is: > >> " > >> Can not set java.lang.Long field > >> org.jboss.as.test.integration.hibernate.search.Book.id to > >> org.jboss.as.test.integration.hibernate.search.Book > >> ", which does sound like a bug. > > > > Hi Scott, could you just make those fields in the test "public", so we > > can get an ORM5 version of WildFly to play with? We've lots more work > > to do which is blocked by that. 
> > > > I've opened HHH-9887 to track this, but I'd treat it as a minor > > regression which shouldn't slow us down with integration in WildFly; > > not least it would be much more convenient for us to reproduce this if > > we upgrade WildFly first. > > Which ORM class should be debugged to find the cause? The same error > occurs whether the 'id' field is public or not (see my comment on > HHH-9887). > > The same failure occurs for WildFly tests: HibernateSearchJPATestCase, > JPABeanValidationTestCase, DataSourceDefinitionJPATestCase, > JPA2LCTestCase, WebJPATestCase. > > > > > Thanks, > > Sanne > > > > > >> > >> On 06/18/2015 01:41 PM, Steve Ebersole wrote: > >>> > >>> That should still be ok. If it does not work, that would be a bug. > >>> > >>> On Thu, Jun 18, 2015 at 11:38 AM Sanne Grinovero >>> > wrote: > >>> > >>> On 18 June 2015 at 17:17, Scott Marlow >>> > wrote: > >>> > > >>> > > >>> > On 06/18/2015 11:59 AM, Sanne Grinovero wrote: > >>> >> > >>> >> On 18 June 2015 at 15:55, Scott Marlow >>> > wrote: > >>> >>> > >>> >>> Sanne, > >>> >>> > >>> >>> The [1] pull request to bring Jipijapa source into WildFly > >>> master is > >>> >>> merged. > >>> >>> > >>> >>> I pushed a copy of the (work in progress) ORM 5 changes to > >>> github [2]. > >>> >>> > >>> >>> Is there a WildFly pull request for the changes to upgrade to > >>> Hibernate > >>> >>> Search 5.4.0.Alpha1? I didn't see one but I might of missed > it. > >>> >> > >>> >> > >>> >> No there isn't, as Hibernate Search 5.4.0.Alpha1 *requires* > >>> Hibernate > >>> >> ORM 5.0.0.CR1. > >>> > > >>> > > >>> > What needs to change on WildFly for the Hibernate Search > upgrade? > >>> > >>> > >>> Nothing else changes. Just change the Hibernate Search version > when > >>> you change the Hibernate ORM version. > >>> > >>> > I started > >>> > with just changing the WildFly (top level) pom.xml to > reference HS > >>> > 5.4.0.Alpha1. > >>> > >>> +1 > >>> > >>> > Do you expect that the latest ORM master branch will work > >>> > with HS 5.4.0.Alpha1 or is ORM 5.0.0.CR1 better? > >>> > >>> I didn't test the latest ORM master branch, but it will work with > >>> ORM 5.0.0.CR1. > >>> > >>> > Locally, I am building the latest ORM master (built from > source) > >>> and using > >>> > Hibernate Search 5.4.0.Alpha1. When running the WildFly > >>> testsuite, I see a > >>> > few different errors. One of them is from the > >>> > HibernateSearchJPATestCase.testFullTextQuery test. > >>> > http://pastebin.com/Q5xLrkpT shows the WildFly server.log > >>> contents from the > >>> > Hibernate Search test. > >>> > >>> That looks like related to an Hibernate ORM change, not Search. > >>> The entity used for that test doesn't declare the fields as > "public"; > >>> that used to be ok in previous versions. > >>> You could workaround it by changing the test to use either public > >>> fields or traditional getters/setters? > >>> But we should check with Steve if that change was intentional? For > >>> now, better to workaround it in the test so we don't get stuck. > >>> > >>> Thanks! > >>> Sanne > >>> > >>> > > >>> > > >>> >> > >>> >> The two should be updated in synch this time, in future there > >>> will be > >>> >> more flexibility. 
> >>> >> > >>> >>> > >>> >>> Scott > >>> >>> > >>> >>> [1] https://github.com/wildfly/wildfly/pull/7509 > >>> >>> > >>> >>> [2] > https://github.com/scottmarlow/wildfly/tree/hibernate5_june18 > >>> _______________________________________________ > >>> hibernate-dev mailing list > >>> hibernate-dev at lists.jboss.org hibernate-dev at lists.jboss.org> > >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev > >>> > >> > From steve at hibernate.org Mon Jun 29 10:10:14 2015 From: steve at hibernate.org (Steve Ebersole) Date: Mon, 29 Jun 2015 14:10:14 +0000 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... In-Reply-To: References: <556898D2.8010103@redhat.com> <5568A16D.5090201@redhat.com> <55816BAF.7080708@redhat.com> <55817203.3010403@redhat.com> <5582DBE4.4080904@redhat.com> <5582EF1A.6040703@redhat.com> <558D5B64.90902@redhat.com> <559145A1.30400@redhat.com> Message-ID: The reason I say that is... here is the check that ultimately fails inside the VM: protected void ensureObj(Object var1) { if(!this.field.getDeclaringClass().isAssignableFrom(var1.getClass())) { this.throwSetIllegalArgumentException(var1); } } On Mon, Jun 29, 2015 at 9:06 AM Steve Ebersole wrote: > org.hibernate.property.access.spi.GetterFieldImpl > > Its possible something is amiss > in org.hibernate.property.access.internal.PropertyAccessStrategyFieldImpl, > but considering GetterFieldImpl is chosen (properly) and that is where > things break down that is where I would look. > > I find it strange though that this works in our test suite. Maybe some > strange class loader issue? > > On Mon, Jun 29, 2015 at 8:18 AM Scott Marlow wrote: > >> On 06/26/2015 12:12 PM, Sanne Grinovero wrote: >> > On 26 June 2015 at 15:02, Scott Marlow wrote: >> >> The ISE message is: >> >> " >> >> Can not set java.lang.Long field >> >> org.jboss.as.test.integration.hibernate.search.Book.id to >> >> org.jboss.as.test.integration.hibernate.search.Book >> >> ", which does sound like a bug. >> > >> > Hi Scott, could you just make those fields in the test "public", so we >> > can get an ORM5 version of WildFly to play with? We've lots more work >> > to do which is blocked by that. >> > >> > I've opened HHH-9887 to track this, but I'd treat it as a minor >> > regression which shouldn't slow us down with integration in WildFly; >> > not least it would be much more convenient for us to reproduce this if >> > we upgrade WildFly first. >> >> Which ORM class should be debugged to find the cause? The same error >> occurs whether the 'id' field is public or not (see my comment on >> HHH-9887). >> >> The same failure occurs for WildFly tests: HibernateSearchJPATestCase, >> JPABeanValidationTestCase, DataSourceDefinitionJPATestCase, >> JPA2LCTestCase, WebJPATestCase. >> >> > >> > Thanks, >> > Sanne >> > >> > >> >> >> >> On 06/18/2015 01:41 PM, Steve Ebersole wrote: >> >>> >> >>> That should still be ok. If it does not work, that would be a bug. >> >>> >> >>> On Thu, Jun 18, 2015 at 11:38 AM Sanne Grinovero > >>> > wrote: >> >>> >> >>> On 18 June 2015 at 17:17, Scott Marlow > >>> > wrote: >> >>> > >> >>> > >> >>> > On 06/18/2015 11:59 AM, Sanne Grinovero wrote: >> >>> >> >> >>> >> On 18 June 2015 at 15:55, Scott Marlow > >>> > wrote: >> >>> >>> >> >>> >>> Sanne, >> >>> >>> >> >>> >>> The [1] pull request to bring Jipijapa source into WildFly >> >>> master is >> >>> >>> merged. >> >>> >>> >> >>> >>> I pushed a copy of the (work in progress) ORM 5 changes to >> >>> github [2]. 
>> >>> >>> >> >>> >>> Is there a WildFly pull request for the changes to upgrade >> to >> >>> Hibernate >> >>> >>> Search 5.4.0.Alpha1? I didn't see one but I might of >> missed it. >> >>> >> >> >>> >> >> >>> >> No there isn't, as Hibernate Search 5.4.0.Alpha1 *requires* >> >>> Hibernate >> >>> >> ORM 5.0.0.CR1. >> >>> > >> >>> > >> >>> > What needs to change on WildFly for the Hibernate Search >> upgrade? >> >>> >> >>> >> >>> Nothing else changes. Just change the Hibernate Search version >> when >> >>> you change the Hibernate ORM version. >> >>> >> >>> > I started >> >>> > with just changing the WildFly (top level) pom.xml to >> reference HS >> >>> > 5.4.0.Alpha1. >> >>> >> >>> +1 >> >>> >> >>> > Do you expect that the latest ORM master branch will work >> >>> > with HS 5.4.0.Alpha1 or is ORM 5.0.0.CR1 better? >> >>> >> >>> I didn't test the latest ORM master branch, but it will work with >> >>> ORM 5.0.0.CR1. >> >>> >> >>> > Locally, I am building the latest ORM master (built from >> source) >> >>> and using >> >>> > Hibernate Search 5.4.0.Alpha1. When running the WildFly >> >>> testsuite, I see a >> >>> > few different errors. One of them is from the >> >>> > HibernateSearchJPATestCase.testFullTextQuery test. >> >>> > http://pastebin.com/Q5xLrkpT shows the WildFly server.log >> >>> contents from the >> >>> > Hibernate Search test. >> >>> >> >>> That looks like related to an Hibernate ORM change, not Search. >> >>> The entity used for that test doesn't declare the fields as >> "public"; >> >>> that used to be ok in previous versions. >> >>> You could workaround it by changing the test to use either public >> >>> fields or traditional getters/setters? >> >>> But we should check with Steve if that change was intentional? >> For >> >>> now, better to workaround it in the test so we don't get stuck. >> >>> >> >>> Thanks! >> >>> Sanne >> >>> >> >>> > >> >>> > >> >>> >> >> >>> >> The two should be updated in synch this time, in future there >> >>> will be >> >>> >> more flexibility. >> >>> >> >> >>> >>> >> >>> >>> Scott >> >>> >>> >> >>> >>> [1] https://github.com/wildfly/wildfly/pull/7509 >> >>> >>> >> >>> >>> [2] >> https://github.com/scottmarlow/wildfly/tree/hibernate5_june18 >> >>> _______________________________________________ >> >>> hibernate-dev mailing list >> >>> hibernate-dev at lists.jboss.org > hibernate-dev at lists.jboss.org> >> >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev >> >>> >> >> >> > From smarlow at redhat.com Mon Jun 29 10:49:17 2015 From: smarlow at redhat.com (Scott Marlow) Date: Mon, 29 Jun 2015 10:49:17 -0400 Subject: [hibernate-dev] WildFly 10 + Hibernate ORM 5 integration status update... In-Reply-To: References: <556898D2.8010103@redhat.com> <5568A16D.5090201@redhat.com> <55816BAF.7080708@redhat.com> <55817203.3010403@redhat.com> <5582DBE4.4080904@redhat.com> <5582EF1A.6040703@redhat.com> <558D5B64.90902@redhat.com> <559145A1.30400@redhat.com> Message-ID: <55915AED.8000600@redhat.com> On 06/29/2015 10:06 AM, Steve Ebersole wrote: > org.hibernate.property.access.spi.GetterFieldImpl > > Its possible something is amiss > in org.hibernate.property.access.internal.PropertyAccessStrategyFieldImpl, > but considering GetterFieldImpl is chosen (properly) and that is where > things break down that is where I would look. The PropertyAccessFieldImpl ctor is passed with the 'containerJavaType' parameter set to class org.jboss.as.test.integration.hibernate.search.Book that is defined by the javax.persistence.spi.PersistenceUnitInfo.getNewTempClassLoader(). 
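For reference, the failure mode can be reproduced outside the container with nothing more than two classloaders. The sketch below is self-contained and standalone; it is not the WildFly/ORM code path, only an illustration of why Field.set() rejects the instance when the field's declaring class and the instance's class were defined by different loaders:

import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.lang.reflect.Field;

public class TwoClassLoaderDemo {

    public static class Book {
        Long id;
    }

    // Re-defines Book from its own .class bytes instead of delegating to the parent,
    // which yields a second, distinct Class object for the same class name.
    static class IsolatingLoader extends ClassLoader {
        @Override
        protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            if (!name.equals(Book.class.getName())) {
                return super.loadClass(name, resolve);
            }
            try (InputStream in = TwoClassLoaderDemo.class.getResourceAsStream(
                    "/" + name.replace('.', '/') + ".class")) {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                byte[] buffer = new byte[4096];
                for (int read; (read = in.read(buffer)) != -1; ) {
                    out.write(buffer, 0, read);
                }
                byte[] bytes = out.toByteArray();
                return defineClass(name, bytes, 0, bytes.length);
            }
            catch (Exception e) {
                throw new ClassNotFoundException(name, e);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Class<?> bookA = Book.class;                                            // loaded by the application classloader
        Class<?> bookB = new IsolatingLoader().loadClass(Book.class.getName()); // same bytes, different loader

        System.out.println(bookA.isAssignableFrom(bookB));                      // false: two distinct Class objects

        Field id = bookA.getDeclaredField("id");                                // field metadata from one definition...
        id.setAccessible(true);
        Object book = bookB.newInstance();                                      // ...instance from the other
        id.set(book, 1L);  // IllegalArgumentException: Can not set java.lang.Long field ...Book.id to ...Book
    }
}

Running it prints "false" for the isAssignableFrom() check and then fails with the same "Can not set java.lang.Long field ...Book.id to ...Book" message: the two Book definitions only need to come from different loaders, and the field visibility is irrelevant, matching what was observed in the WildFly tests.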
I wonder if org.hibernate.boot.internal.ClassLoaderAccessImpl.classForName(String name), should use the PersistenceUnitInfo.getClassLoader() instead of getNewTempClassLoader(), once we are in the second bootstrap phase (which we are when ClassLoaderAccessImpl.classForName(String) is called in this particular WildFly test case). From a WildFly classloading perspective, once we call EntityManagerFactoryBuilderImpl.build(), all application class enhancing/rewriting should be complete, so we should prefer the PersistenceUnitInfo.getClassLoader() once we reach the second bootstrap phase. > > I find it strange though that this works in our test suite. Maybe some > strange class loader issue? > > On Mon, Jun 29, 2015 at 8:18 AM Scott Marlow > wrote: > > On 06/26/2015 12:12 PM, Sanne Grinovero wrote: > > On 26 June 2015 at 15:02, Scott Marlow > wrote: > >> The ISE message is: > >> " > >> Can not set java.lang.Long field > >> org.jboss.as.test.integration.hibernate.search.Book.id > to > >> org.jboss.as.test.integration.hibernate.search.Book > >> ", which does sound like a bug. > > > > Hi Scott, could you just make those fields in the test "public", > so we > > can get an ORM5 version of WildFly to play with? We've lots more work > > to do which is blocked by that. > > > > I've opened HHH-9887 to track this, but I'd treat it as a minor > > regression which shouldn't slow us down with integration in WildFly; > > not least it would be much more convenient for us to reproduce > this if > > we upgrade WildFly first. > > Which ORM class should be debugged to find the cause? The same error > occurs whether the 'id' field is public or not (see my comment on > HHH-9887). > > The same failure occurs for WildFly tests: HibernateSearchJPATestCase, > JPABeanValidationTestCase, DataSourceDefinitionJPATestCase, > JPA2LCTestCase, WebJPATestCase. > > > > > Thanks, > > Sanne > > > > > >> > >> On 06/18/2015 01:41 PM, Steve Ebersole wrote: > >>> > >>> That should still be ok. If it does not work, that would be a bug. > >>> > >>> On Thu, Jun 18, 2015 at 11:38 AM Sanne Grinovero > > >>> >> wrote: > >>> > >>> On 18 June 2015 at 17:17, Scott Marlow > >>> >> > wrote: > >>> > > >>> > > >>> > On 06/18/2015 11:59 AM, Sanne Grinovero wrote: > >>> >> > >>> >> On 18 June 2015 at 15:55, Scott Marlow > > >>> >> > wrote: > >>> >>> > >>> >>> Sanne, > >>> >>> > >>> >>> The [1] pull request to bring Jipijapa source into > WildFly > >>> master is > >>> >>> merged. > >>> >>> > >>> >>> I pushed a copy of the (work in progress) ORM 5 > changes to > >>> github [2]. > >>> >>> > >>> >>> Is there a WildFly pull request for the changes to > upgrade to > >>> Hibernate > >>> >>> Search 5.4.0.Alpha1? I didn't see one but I might of > missed it. > >>> >> > >>> >> > >>> >> No there isn't, as Hibernate Search 5.4.0.Alpha1 > *requires* > >>> Hibernate > >>> >> ORM 5.0.0.CR1. > >>> > > >>> > > >>> > What needs to change on WildFly for the Hibernate > Search upgrade? > >>> > >>> > >>> Nothing else changes. Just change the Hibernate Search > version when > >>> you change the Hibernate ORM version. > >>> > >>> > I started > >>> > with just changing the WildFly (top level) pom.xml to > reference HS > >>> > 5.4.0.Alpha1. > >>> > >>> +1 > >>> > >>> > Do you expect that the latest ORM master branch will work > >>> > with HS 5.4.0.Alpha1 or is ORM 5.0.0.CR1 better? > >>> > >>> I didn't test the latest ORM master branch, but it will > work with > >>> ORM 5.0.0.CR1. 
> >>> > >>> > Locally, I am building the latest ORM master (built > from source) > >>> and using > >>> > Hibernate Search 5.4.0.Alpha1. When running the WildFly > >>> testsuite, I see a > >>> > few different errors. One of them is from the > >>> > HibernateSearchJPATestCase.testFullTextQuery test. > >>> > http://pastebin.com/Q5xLrkpT shows the WildFly server.log > >>> contents from the > >>> > Hibernate Search test. > >>> > >>> That looks like related to an Hibernate ORM change, not > Search. > >>> The entity used for that test doesn't declare the fields > as "public"; > >>> that used to be ok in previous versions. > >>> You could workaround it by changing the test to use either > public > >>> fields or traditional getters/setters? > >>> But we should check with Steve if that change was > intentional? For > >>> now, better to workaround it in the test so we don't get > stuck. > >>> > >>> Thanks! > >>> Sanne > >>> > >>> > > >>> > > >>> >> > >>> >> The two should be updated in synch this time, in > future there > >>> will be > >>> >> more flexibility. > >>> >> > >>> >>> > >>> >>> Scott > >>> >>> > >>> >>> [1] https://github.com/wildfly/wildfly/pull/7509 > >>> >>> > >>> >>> [2] > https://github.com/scottmarlow/wildfly/tree/hibernate5_june18 > >>> _______________________________________________ > >>> hibernate-dev mailing list > >>> hibernate-dev at lists.jboss.org > > > > >>> https://lists.jboss.org/mailman/listinfo/hibernate-dev > >>> > >> > From sanne at hibernate.org Mon Jun 29 10:58:30 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Mon, 29 Jun 2015 15:58:30 +0100 Subject: [hibernate-dev] The Hibernate Search / Apache Tika interaction with WildFly modules Message-ID: TLDR - Remove all "optional" Maven dependencies from the project - Things like the TikaBridge need to live in their own build unit (their own jar) - Components which don't have all dependencies shall not be included in WildFly modules These are my notes after debugging HSEARCH-1885. A service can be optionally loaded by the Service Loader pattern, but all dependencies of each module must be available to the static module definition. Our current WildFly modules include the hibernate-search-engine jar, which has an optional dependencies to Apache Tika. We don't provide a module of Apache Tika as it has many dependencies, so there was the assumption that extensions can be loaded from the user classpath (as it normally works). This one specifically, can't currently be loaded from the user EAR/WAR as that causes a > java.lang.NoClassDefFoundError: org/apache/tika/parser/Parser The problem is that, while we initialize the org.hibernate.search.bridge.builtin.TikaBridge using the correct classloader (an aggregate from Hibernate ORM which includes the user deployment), this only initialized the definition of the TikaBridge itself. After its class initialization, when this is first used this will trigger initialization of its import statements; it imports org.apache.tika.parser.Parser (among others), but at this point we're out of the scope of the custom classloader usage, so the current module is being used as the extension was in fact *loaded from* the classloader for hibernate-search-engine. The point is that the TikaBridge - while it was loaded from the aggregated classloader - it was ultimately found in the hibernate-search-engine module and at that point was associated with that. 
A possible workaround is to set the TCCL to the aggregate classloader during initialization of the TikaBridge and its dependencies, but this is problematic as we can't predict which other dependencies will be needed at runtime, when the Tika parsing happens of any random data: one would also need to store a pointer to this classloader within the FieldBridge, and then override the TCCL at runtime each time the bridge is invoked.. that's horrible. The much simpler solution is to make sure the TikaBridge class is loaded *and associated* to a classloader which is actually able to load its extensions! In other words, if the user deployment includes the Tika extensions, it should also include the TikaBridge. So the correct solution is to break out this into a Tika module, and not include it within the WildFly module, but have the users include it as an extension point, as they would with other custom FieldBridges. This problem would apply to any other dependency using the "optional" qualifier of Maven; currently only our Tika integration relies on it, so let's remove it but please let's also avoid "optional" in the future. Thanks, Sanne From sanne at hibernate.org Tue Jun 30 07:57:28 2015 From: sanne at hibernate.org (Sanne Grinovero) Date: Tue, 30 Jun 2015 12:57:28 +0100 Subject: [hibernate-dev] HSEARCH: Removing dynamic analyzer mapping? Message-ID: Among the many changes of Apache Lucene 5, it is no longer possible to override the Analyzer on a per-document base. You have to pick a single Analyzer when opening the IndexWriter. Of course the Analyzer can still return a different tokenization chain for each field, but the field->tokenizer mapping has to be consistent for the lifecycle of the IndexWriter. This means we might need to drop our "Dynamic Analyzer" feature: http://docs.jboss.org/hibernate/search/5.4/reference/en-US/html_single/#_dynamic_analyzer_selection I did ask to restore the functionality: https://issues.apache.org/jira/browse/LUCENE-6212 So, the alternatives I'm seeing: # Dropping the Dynamic Analyzer feature # Cheat and pass in a mutable Analyzer - needs some caution re concurrent usage # Cheat and pass in a pre-analyzed Document # Fork & patch the IndexWriter Patching the functionality back in Lucene is trivial, but the Lucene team needs to agree on the use case and then the release time will be long. We should discuss both a short-term solution and the better long-term solution. My favourite long-term solution would be to do pre-analysis: in our master/slave clustering approach, that would have several other benefits: - move the analyzer work to the slaves - reduce the network payloads - remove the need to be able to serialize analyzers But I'd prefer to do this in a second "polishing phase" rather than consider such a backend rewrite as a blocker for Lucene 5. WDYT? Thanks, Sanne From emmanuel at hibernate.org Tue Jun 30 09:12:39 2015 From: emmanuel at hibernate.org (Emmanuel Bernard) Date: Tue, 30 Jun 2015 15:12:39 +0200 Subject: [hibernate-dev] HSEARCH: Removing dynamic analyzer mapping? In-Reply-To: References: Message-ID: <7F442E1C-9920-4FF6-9845-47026B78F09F@hibernate.org> If we feel short handed, we could do the following: 1. disable the feature and raise an exception when someone uses it with a pointer to the JIRA to restore it that way we will know how many people we pissed off and we can feed the use cases to our Lucene friends 2. Work on a workaround if the JIRa becomes popular or compelling. A mutable analyzer or the preanalized approach has my preference. 
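To make the "mutable analyzer" option concrete, here is a minimal sketch of one possible shape (an assumption, not an agreed design: it builds on Lucene's AnalyzerWrapper and leaves the concurrency caveat mentioned above unsolved):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.AnalyzerWrapper;

// Sketch only: a wrapper whose per-field delegates can be swapped while the
// IndexWriter stays open. Note that TokenStreamComponents already cached for a
// field may keep using the old delegate on threads that analyzed it before the swap.
public class MutableAnalyzer extends AnalyzerWrapper {

    private final Analyzer defaultAnalyzer;
    private final Map<String, Analyzer> perFieldDelegates = new ConcurrentHashMap<>();

    public MutableAnalyzer(Analyzer defaultAnalyzer) {
        super(Analyzer.PER_FIELD_REUSE_STRATEGY);
        this.defaultAnalyzer = defaultAnalyzer;
    }

    // Swap the analyzer used for one field; documents indexed afterwards pick it up.
    public void updateAnalyzer(String fieldName, Analyzer analyzer) {
        perFieldDelegates.put(fieldName, analyzer);
    }

    @Override
    protected Analyzer getWrappedAnalyzer(String fieldName) {
        Analyzer delegate = perFieldDelegates.get(fieldName);
        return delegate != null ? delegate : defaultAnalyzer;
    }
}

The single MutableAnalyzer instance would be the one handed to IndexWriterConfig when the writer is opened, which keeps us within the Lucene 5 constraint; the open question is how safely a swap propagates to threads that already obtained cached components for that field.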
> On 30 Jun 2015, at 13:57, Sanne Grinovero wrote: > > Among the many changes of Apache Lucene 5, it is no longer possible to > override the Analyzer on a per-document base. > > You have to pick a single Analyzer when opening the IndexWriter. > Of course the Analyzer can still return a different tokenization chain > for each field, but the field->tokenizer mapping has to be consistent > for the lifecycle of the IndexWriter. > > This means we might need to drop our "Dynamic Analyzer" feature: > http://docs.jboss.org/hibernate/search/5.4/reference/en-US/html_single/#_dynamic_analyzer_selection > > I did ask to restore the functionality: > https://issues.apache.org/jira/browse/LUCENE-6212 > > So, the alternatives I'm seeing: > # Dropping the Dynamic Analyzer feature > # Cheat and pass in a mutable Analyzer - needs some caution re concurrent usage > # Cheat and pass in a pre-analyzed Document > # Fork & patch the IndexWriter > > Patching the functionality back in Lucene is trivial, but the Lucene > team needs to agree on the use case and then the release time will be > long. > > We should discuss both a short-term solution and the better long-term solution. > > My favourite long-term solution would be to do pre-analysis: in our > master/slave clustering approach, that would have several other > benefits: > - move the analyzer work to the slaves > - reduce the network payloads > - remove the need to be able to serialize analyzers > But I'd prefer to do this in a second "polishing phase" rather than > consider such a backend rewrite as a blocker for Lucene 5. > > WDYT? > > Thanks, > Sanne > _______________________________________________ > hibernate-dev mailing list > hibernate-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hibernate-dev From steve at hibernate.org Tue Jun 30 12:12:51 2015 From: steve at hibernate.org (Steve Ebersole) Date: Tue, 30 Jun 2015 16:12:51 +0000 Subject: [hibernate-dev] 5.0.0.CR2 delay In-Reply-To: References: Message-ID: Any objections to holding off on this until the following week? A pretty serious issue[1] came up and I'd really like to take the time to make sure it gets addressed properly. [1] - https://hibernate.atlassian.net/browse/HHH-9887 On Sun, Jun 21, 2015 at 8:19 AM Steve Ebersole wrote: > The timebox for CR2 release is next Wednesday. However I am taking some > time off early next week. As a result I am going to push CR2 back one week. >