Hibernate Validator 5.2.1.Final has just been released. The most important
new feature is Java 8 support, but there is more - http://in.relation.to/2015/07/30/hibernate-validator-521-final/
All artifacts are available in the Maven repositories (JBoss and Maven Central), but the distribution
bundle upload is still pending while SourceForge struggles to recover from its storage fault.
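For anyone wanting to try it, the standard Maven coordinates apply (version taken from the announcement above):

```xml
<dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-validator</artifactId>
  <version>5.2.1.Final</version>
</dependency>
```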
With Infinispan in embedded mode we used AtomicMaps and
FineGrainedAtomicMaps as an alternative way to map attributes and
relations. The relations are particularly interesting because in the SQL world one
would run a query on junction tables, while on embedded Infinispan
queries would only be an option on attributes annotated for Hibernate Search / Infinispan Query;
the AtomicMaps also allow us to load only
the relevant section of data (much as on an RDBMS).
The difference between the two kinds of AtomicMaps is the locking
granularity: a plain AtomicMap is locked as a whole, while a FineGrainedAtomicMap
locks individual entries, each mirroring the kind of locking we'd normally have.
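To illustrate the granularity difference, here is a plain-Java sketch (stdlib locks only, not the Infinispan API): the coarse variant behaves like one lock guarding the whole map, the fine-grained variant like one lock per key.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class LockGranularitySketch {
    // Coarse: one lock for the whole map (AtomicMap-style behaviour).
    static final ReentrantLock mapLock = new ReentrantLock();
    static final Map<String, String> coarse = new ConcurrentHashMap<>();

    // Fine-grained: one lock per key (FineGrainedAtomicMap-style behaviour).
    static final Map<String, ReentrantLock> keyLocks = new ConcurrentHashMap<>();
    static final Map<String, String> fine = new ConcurrentHashMap<>();

    static void putCoarse(String key, String value) {
        mapLock.lock();          // writers to *any* key contend here
        try { coarse.put(key, value); } finally { mapLock.unlock(); }
    }

    static void putFine(String key, String value) {
        ReentrantLock lock = keyLocks.computeIfAbsent(key, k -> new ReentrantLock());
        lock.lock();             // only writers to the *same* key contend
        try { fine.put(key, value); } finally { lock.unlock(); }
    }

    public static void main(String[] args) {
        putCoarse("name", "a");
        putFine("name", "b");
        System.out.println(coarse.get("name") + " " + fine.get("name"));
    }
}
```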
On Hot Rod, AtomicMaps are not available, so we opened (a long time
ago) a feature request to implement them for Hot Rod - at least for the Java
clients. Still, we don't have transactions in this case either, so the
locking benefits are unavailable as well.
I think that in the case of Hot Rod clients we should not use
AtomicMaps, but rather resort to protobuf schema generation, and
essentially use the more traditional "query on join tables" approach.
Hot Rod nowadays supports queries, and they can be indexed or
non-indexed, so we could enable indexing on the ad-hoc tables we build for
relation mapping, have the user "opt in" to additional indexes, and
still allow some level of querying for the fields which have not been
indexed; of course without join operations.
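A hypothetical sketch of that "query on join tables, without joins" idea: the relation rows live in their own keyspace, and what would be an SQL join becomes a filter on the relation table followed by per-key lookups. Plain Java collections stand in for the remote caches; all entity and field names here are invented.

```java
import java.util.List;
import java.util.Map;

public class NoJoinQuerySketch {
    // A row of the ad-hoc relation table, e.g. author 1 owns book 10.
    record RelationRow(long ownerId, long targetId) {}

    // Stand-in for the remote "Book" cache: primary key -> entity.
    static final Map<Long, String> BOOKS = Map.of(10L, "Dune", 11L, "Hyperion", 12L, "Ubik");

    // Stand-in for the ad-hoc relation table, one entry per association.
    static final List<RelationRow> AUTHOR_BOOKS = List.of(
            new RelationRow(1L, 10L),
            new RelationRow(1L, 12L),
            new RelationRow(2L, 11L));

    // Instead of an SQL join: filter the relation table by owner id,
    // then resolve each target with a plain key lookup.
    static List<String> booksOf(long authorId) {
        return AUTHOR_BOOKS.stream()
                .filter(r -> r.ownerId() == authorId)
                .map(r -> BOOKS.get(r.targetId()))
                .toList();
    }

    public static void main(String[] args) {
        System.out.println(booksOf(1L)); // [Dune, Ubik]
    }
}
```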
We can generate an appropriate schema and upload it to Hot Rod
automatically; that sounds like a great usability improvement for all
Java clients dealing with remote Infinispan, as its schema adds quite
some weight to the learning curve.
Still, this automatic generation is a new and challenging field; some notes:
- protobuf schemas are generational -> generation is more effective if you can
base the new schema on the existing one
- there's a Java encoder by Adrian here:
- typically one would need to assign a stable sequence id to each field
- the previous points mean we will likely want to dump the output resource
somewhere, maybe even persist it on Infinispan?
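For illustration, a generated schema might look like this (entity and field names are invented; the numeric tags are the stable ids from the point above, which must survive regeneration):

```protobuf
// Hypothetical generated schema for an Author -> Book association.
package org.hibernate.ogm.generated;

message Book {
  required int64 id     = 1; // tag numbers must stay stable across regenerations
  optional string title = 2;
}

message Author_Book {          // ad-hoc "join table" message for the relation
  required int64 author_id = 1;
  required int64 book_id   = 2;
}
```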
On a very different topic: we also typically require a
GridDialect implementor to provide sequence generation capability. I
don't see a solution for that over Hot Rod, as it doesn't currently have
any safe incremental id, but when I asked today I was told that
Infinispan 8 might have it; Tristan pointed out that you can upload a
script and have it run on the server, which in turn has access to the
transactions API, so this should be possible but doesn't look too easy.
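One client-side fallback worth considering is an optimistic compare-and-replace retry loop. Below is a plain-Java sketch of that technique using ConcurrentHashMap.replace; the assumption is that a real implementation would use a versioned replace on the remote cache instead, which has the same "only one racer wins" semantics.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CasSequenceSketch {
    // Stand-in for a remote cache holding the current sequence values.
    static final ConcurrentMap<String, Long> cache = new ConcurrentHashMap<>();

    static long nextValue(String sequenceName) {
        while (true) {
            Long current = cache.putIfAbsent(sequenceName, 1L);
            if (current == null) {
                return 1L; // we initialized the sequence
            }
            // Optimistic update: wins only if nobody raced us; retry otherwise.
            if (cache.replace(sequenceName, current, current + 1)) {
                return current + 1;
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(nextValue("hibernate_sequence")); // 1
        System.out.println(nextValue("hibernate_sequence")); // 2
    }
}
```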
Finally, we'll need to use the distributed remote iterator for …
So my conclusion is that to support Hot Rod we'd be better off making
a completely different GridDialect than the one for embedded Infinispan,
as I can hardly see any shared code.
Would you agree to try basing the approach on a brand new dialect and
on protobuf schema generation? In terms of features, we can implement
them all except:
- transactions & locks
- join queries
[18:45] <sebersole> whoa..
[18:45] <sebersole> sannegrinovero: we definitely need this for mariadb on
[18:46] <sebersole> sync_binlog=0 innodb_flush_log_at_trx_commit=0
[18:46] <sebersole> tests were much faster
These were suggested to me on the #mariadb channel. I added them to my local
/etc/my.cnf file and the tests were significantly faster.
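For reference, the two settings quoted above as they'd appear in /etc/my.cnf; note they trade durability for speed, so they're only suitable for test machines:

```ini
[mysqld]
# Don't sync the binlog to disk on every commit.
sync_binlog = 0
# Don't flush the InnoDB log at each transaction commit.
innodb_flush_log_at_trx_commit = 0
```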
I have been adding a facet to GridDialect and found it surprisingly hard:
* I was not sure which non-datastore dialects were supposed to implement the facet, nor really how to find these non-datastore dialects. I am talking about GridDialectLogger, ForwardingGridDialect and InvokedOperationsLoggingDialect. I am sure there are more of these non-datastore dialects but I haven't found them.
* Adding one method to a facet means having to go to a lot of places to write all the logging and delegating logic. Changing a method signature at least gives you the help of the compiler, but adding a method does not.
* Finding out how consumers of the facet are supposed to use it was not obvious to me. It seems a given consumer stores all the possible facets as class fields and does a null check before using them.
* When I finally ran my tests, everything exploded because each facet must have a MyFacetInitiator.
* When I added the initiator, it still blew up in my face because each initiator must be listed in the OgmIntegrator.
* I'm also concerned about facet discoverability: I don't think it's trivial for a dialect implementor to get the list of facets easily, which will tend to produce minimal dialects without the advanced features.
I wonder if and/or how we should improve that state of affairs.
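The consumer pattern described above (facet stored as a field, null-checked before each use) can be sketched like this; MultigetFacet and both dialect classes are invented names for illustration, not the actual OGM types.

```java
public class FacetConsumerSketch {
    // Hypothetical base contract and optional facet a dialect may implement.
    interface GridDialect {
        String get(String key);
    }
    interface MultigetFacet {
        String[] multiget(String... keys);
    }

    static class PlainDialect implements GridDialect {
        public String get(String key) { return "value-of-" + key; }
    }
    static class FancyDialect implements GridDialect, MultigetFacet {
        public String get(String key) { return "value-of-" + key; }
        public String[] multiget(String... keys) {
            String[] result = new String[keys.length];
            for (int i = 0; i < keys.length; i++) result[i] = get(keys[i]);
            return result;
        }
    }

    private final GridDialect dialect;
    // The facet is stored as a field, null if the dialect doesn't support it.
    private final MultigetFacet multiget;

    FacetConsumerSketch(GridDialect dialect) {
        this.dialect = dialect;
        this.multiget = dialect instanceof MultigetFacet ? (MultigetFacet) dialect : null;
    }

    String[] load(String... keys) {
        if (multiget != null) {          // null check before every use
            return multiget.multiget(keys);
        }
        String[] result = new String[keys.length];
        for (int i = 0; i < keys.length; i++) result[i] = dialect.get(keys[i]);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(new FacetConsumerSketch(new PlainDialect()).load("a")[0]);
        System.out.println(new FacetConsumerSketch(new FancyDialect()).load("a")[0]);
    }
}
```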
As I have developed the matrix tests against MariaDB, I have had in the back
of my head whether we might want to develop a Dialect for MariaDB. As I
understand it, MariaDB does strive to remain compatible with MySQL, but it
does offer some enhancements, and it might have some minor deviations.
Today I think I found one such deviation, in their respective support for
casting decimal/floating-point values. I came across this because of some
failing tests.
There are quite a few resolved issues in Jira wrt MySQL and casting, so I
have to assume these tests work on MySQL, which tells me there is a
deviation in their behavior...
The tests in question (more or less) do: "select ... cast( x as big_decimal
) ...". We understand big_decimal as java.sql.Types#NUMERIC, for which the
MySQL Dialect says to use "decimal" for casting purposes.
On MariaDB at least, "decimal" (no precision) leads to a data truncation: we
are expecting a value like "12.399999618530273" but get back "12.0",
because for MariaDB DECIMAL is equivalent to DECIMAL(10,0).
So as it stands, for this minor case at least, the MySQL dialects work against
MySQL databases (we assume) but fail against MariaDB databases.
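The behaviour should be reproducible directly in a client session; the literal is the value from the failing test, and the precision/scale in the second cast is an arbitrary choice large enough to hold it:

```sql
-- On MariaDB a bare DECIMAL means DECIMAL(10,0), so the fraction is lost:
SELECT CAST(12.399999618530273 AS DECIMAL);
-- Spelling out precision and scale preserves the fractional part:
SELECT CAST(12.399999618530273 AS DECIMAL(19, 15));
```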
The other difficulty with a dialect specific to MariaDB is
auto-detection. The MariaDB JDBC drivers report the underlying database as