forwarding this email to hibernate-dev to get some more feedback.
Here are some of my thoughts. Method-level validation is suggested in
Appendix C of the Bean Validation specification. The method signatures
there all return a set of ConstraintViolations.
The question is how binding these signatures are.
Root bean and property path are indeed questionable for method level
validation. By extending ConstraintViolation you still have to deal with
these values. Maybe instead of
extending we should have a new interface MethodConstraintViolation.
Emmanuel, do you have some more feedback on what was discussed regarding
------- Forwarded message -------
From: "Gunnar Morling" <gunnar.morling(a)googlemail.com>
To: "Hardy Ferentschik" <hibernate(a)ferentschik.de>
while working on method-level validation I wondered how to represent the
failing validation of method parameters. I think
javax.validation.ConstraintViolation in its current form is only partly
suitable for the needs of parameter/return value validation:
* The concept of property path seems not to fit right. How would the
property path look for a failing parameter constraint? One could come up
with something like "MyBean.MyMethod(String,Int).Parameters", but I
can't say I'd like that.
* The concept of a root bean only partly fits. I think one would need
a safe reference to the "root method" and the parameter index. A direct
reference to the class hosting the validated method seems only partly
useful (and could at any time be retrieved from a Method reference).
I therefore prototyped in this direction:
* Created a MethodConstraintViolation that extends ConstraintViolation,
adding fields for the causing method and the parameter index (one would
surely add a flag or something in the case of return value validation)
* In case a constraint directly on one of a method's parameters fails,
ConstraintViolation#rootBean and propertyPath are null (but method and
parameterIndex are set)
* In case a "cascading" constraint fails (meaning, in this context, a
parameter is annotated with @Valid and the referenced bean has constraints
itself which in turn fail), method and parameterIndex are set, while rootBean
and propertyPath refer to the parameter bean itself, *not* the class hosting
the validated method
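To make the discussion concrete, here is a minimal sketch of what such an interface could look like. This is illustrative only: the reduced ConstraintViolation stand-in, the Violation helper class, and all names are assumptions, not the actual prototype.

```java
import java.lang.reflect.Method;

// Sketch of the proposed MethodConstraintViolation idea; the interfaces
// below are reduced stand-ins for the real javax.validation types.
public class MethodValidationSketch {

    interface ConstraintViolation<T> {
        String getMessage();
        T getRootBean();             // null for a direct parameter constraint
    }

    interface MethodConstraintViolation<T> extends ConstraintViolation<T> {
        Method getMethod();          // the validated method
        Integer getParameterIndex(); // null for return value violations
    }

    // Minimal implementation to illustrate the two cases from the mail.
    static class Violation<T> implements MethodConstraintViolation<T> {
        private final String message;
        private final T rootBean;
        private final Method method;
        private final Integer parameterIndex;

        Violation(String message, T rootBean, Method method, Integer parameterIndex) {
            this.message = message;
            this.rootBean = rootBean;
            this.method = method;
            this.parameterIndex = parameterIndex;
        }

        public String getMessage() { return message; }
        public T getRootBean() { return rootBean; }
        public Method getMethod() { return method; }
        public Integer getParameterIndex() { return parameterIndex; }
    }

    public static void main(String[] args) throws Exception {
        Method m = String.class.getMethod("substring", int.class);

        // Case 1: constraint directly on a parameter -> rootBean is null.
        MethodConstraintViolation<Object> direct =
                new Violation<Object>("may not be negative", null, m, 0);
        System.out.println(direct.getMethod().getName()
                + ", param " + direct.getParameterIndex()
                + ", rootBean=" + direct.getRootBean());

        // Case 2: cascaded @Valid constraint -> rootBean is the parameter bean.
        Object parameterBean = "some bean";
        MethodConstraintViolation<Object> cascaded =
                new Violation<Object>("nested constraint failed", parameterBean, m, 0);
        System.out.println("rootBean=" + cascaded.getRootBean());
    }
}
```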
WDYT, is this the right way to go? Maybe we should brainstorm a bit on IRC?
I need to use some of the test classes from Hibernate Search
(core module) in the tests for the new Infinispan module;
I was initially depending on the "hibernate-search-testing" Maven
module, which was recently split from our core test classes,
but then I started having seriously weird issues, finally finding out
that classes which were moved to the new module were
never removed from the old place;
in practice, with Eclipse's (totally broken) merging of classpaths, I
was having both copies on the classpath and picking the wrong
versions: this is because Eclipse merges the main and the test
classpaths (it has no notion of the different scopes),
and the new Infinispan module obviously does need to depend on the core module.
So I started cleaning up, deleting duplicate files from the core tests
to make sure I was going to use the ones provided
by hibernate-search-testing; this resulted in another blocking issue:
"A cycle was detected in the build path of project",
i.e. Eclipse now refuses to build anything, as core Search depends on
another artifact to run its tests.
From what I remember from the discussion when we created this module,
we followed this way because it was similar
to what core did; I'm now stuck and would like to use the same
approach used by Infinispan: it creates Maven
artifacts for the tests of the same module, <type>test-jar</type>.
This was the other alternative we discussed at the time,
as far as I remember we decided to go for the current configuration
because of consistency with core.
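For reference, the Infinispan-style test-jar setup would look roughly like this in Maven (a sketch; the group/artifact ids are illustrative, not the actual coordinates):

```xml
<!-- In the module whose tests are to be shared: attach a test-jar artifact -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>test-jar</goal>
      </goals>
    </execution>
  </executions>
</plugin>

<!-- In the consuming module (e.g. the new Infinispan module) -->
<dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-search</artifactId>
  <version>${project.version}</version>
  <type>test-jar</type>
  <scope>test</scope>
</dependency>
```

This avoids a separate testing project entirely, which is why it sidesteps the build-path cycle.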
The only alternative I see is to move all tests into the separate
project, but I don't really like that;
the final alternative is for me to ditch Eclipse, but I'd rather keep
options open for everyone, and
I'm not sure how far you've tested these same things on IDEA either.
Maybe you could try removing classes such as
org.hibernate.search.test.SearchTestCase from the core module
and see how far you get in IDEA, but I wouldn't expect a great
I see that the Infinispan second level cache defines a nice property
"hibernate.cache.infinispan.cachemanager" to look up an existing
CacheManager via JNDI.
Now in the case of Hibernate Search's DirectoryProvider making use of
Infinispan, I suppose that people will want to look up the same
CacheManager, which would then be used for both purposes, even if very
likely the configuration will contain different caches for each use case.
So from Hibernate Search's new module, shall I look for the same
property? The "cache" part in the name is unfortunate; still, I would
like to define it just once.
1 - add a new property "hibernate.infinispan.jndiname" and have the
2LC look for it as a fallback; I'll look for the same property
2 - I suppose JBoss6 will bind it to JNDI by default; could we use
this name as the default in Hibernate to bring the configuration need to the
minimum?
In that case I'd love to add some cache configurations for the dirty
purposes of Hibernate Search in the AS distribution, so that stuff
works with minimal configuration hazards.
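In configuration terms, option 1 would boil down to something like this (the jndiname key is the proposal above, not an existing setting, and the JNDI name is just an example value):

```properties
# existing Infinispan 2LC setting: JNDI name of a shared CacheManager
hibernate.cache.infinispan.cachemanager=java:CacheManager
# proposed neutral setting that both the 2LC and Search would fall back to
hibernate.infinispan.jndiname=java:CacheManager
```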
as you all know, the forums are terribly overloaded with information
about Hibernate "getting started", and so is google.
as a reference, this is just an example of general feedback:
I get many comments along the same lines from everybody I meet; not
sure what could be done besides carrying on the work on the "Getting
Started Guide"; maybe this user is right and this documentation should
have a new forum to collect feedback on it?
I don't think we would be able to help all people in a "getting
started" forum, but maybe it could work if we set strong rules and try
to have people limit their posts to documentation feedback - a JIRA
project for the getting started guides?
We could set it up as a new forum area, but as a locked forum with
pointers to reference manuals and instructions to file issues, limited
to docs only.
I don't think this is brilliant, just tossing out some ideas; please
yesterday I broke the rule to never stop a running system.
I am working on Mac and had po2xml via MacPorts (http://www.macports.org/).
It WAS part of the kdesdk4 bundle. While installing an unrelated port, the
port command kindly informed me that my port installation was outdated. It recommended:
$ port selfupdate
$ port upgrade installed
Needless to say that po2xml disappeared :(
split2po and xml2pot are still around, so I am not sure whether this is a
temporary error in the bundle. Any Mac user out there who has the latest
MacPorts installed and can tell me whether po2xml is now in a different port?
Until yesterday I was wondering what the current talk about back-porting
issues was all about.
Thanks to HHH-5729 I got my own share of problems ;-)
I know Steve is preparing some guidelines regarding this, but here are my
thoughts as a result of yesterday's experience.
My workflow was:
$ git checkout -b HHH-5729
$ git commit
$ git commit
$ git format-patch -M master
Now I had a patch file for each commit. These patch files I copied to my
second checkout of core
where I am working on the branch 3.6. I know, I know, changing branches in
git is fast, but I prefer (and recommend) that everyone have a separate
checkout for the 3.6 branch.
The main reason is the
switch in build tools (maven -> gradle) and additional directory renames
which make IDE setup
refreshes a nightmare.
Anyways, back to back-porting. Once I had the patch files in the right
place, I tried:
$ git am *.patch
Here of course the trouble started. The patches could not be applied due
to the renaming and in some cases merging of directories. Here are the
problems I had to deal with:
* core was renamed to hibernate-core
* testing was moved into hibernate-core (and sources moved from
src/main/java -> src/test/java)
* testsuite was moved into hibernate-core
Nothing a little bit of sed work couldn't fix. In fact, after fixing the
path names in the patch files, the patches applied just fine.
This got me thinking whether we should write a little shell script taking
care of these things and checking
it into the 3.6 branch. The workflow would be something like this then:
$ git format-patch -M master
$ (copy patches to 3.6 branch)
$ adjustPatches.sh *.patch
$ git am *.patch
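A first cut of adjustPatches.sh might look like this. It is only a sketch: of the three renames listed above, only the core -> hibernate-core rename is unambiguous to reverse, so the testing/testsuite moves would likely still need manual attention, and the exact path mappings here are assumptions.

```shell
#!/bin/sh
# adjustPatches.sh - sketch of the proposed helper: rewrite master's
# directory layout in patch files to the 3.6 layout before `git am`.
adjust_patches() {
  for p in "$@"; do
    # main sources on master live under hibernate-core/src/main,
    # on 3.6 under core/src/main
    sed -i.bak -e 's|hibernate-core/src/main/|core/src/main/|g' "$p"
    rm -f "$p.bak"
  done
}
```

Usage would then simply be `adjust_patches *.patch` (or the script invoked directly) between copying the patches and running `git am`.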
Thoughts? Am I still missing part of the problem?
can someone enlighten me on how to configure the logging at the moment?
For an issue I am working on I wanted to increase the log level to debug
in the hibernate-core module of master.
As always I went to log4j.properties and started to edit it. Needless to
say that nothing happened. First I thought
something must be wrong with the IDE setup, but it turned out to be the same
with the command line build :(
At some stage while growing more and more frustrated I remembered that
there is talk and work in progress for changing
the logging. I finally checked the logging dependencies where I found:
slf4j_simple: 'org.slf4j:slf4j-simple:' + slf4jVersion,
jcl_slf4j: 'org.slf4j:jcl-over-slf4j:' + slf4jVersion,
So I take it that we are not using log4j anymore, but how do I configure
logging now? And why do we still have the
log4j.properties files checked in, if they are obsolete?
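For what it's worth: if the build really routes everything through slf4j-simple, then on slf4j-simple 1.6.6 or later it can be tuned via a simplelogger.properties file on the classpath (older versions are hard-wired to INFO and not configurable at all). Assuming such a version is in use, something like:

```properties
# simplelogger.properties - only honored by slf4j-simple 1.6.6+
org.slf4j.simpleLogger.defaultLogLevel=info
org.slf4j.simpleLogger.log.org.hibernate=debug
```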