I am trying to finish up the announce and website changes for 5.3.0.Beta1.
staging.in.relation.to changes built fine.
However, I am getting incomprehensible (to me) failures on the CI server
trying to build the staging website. Help?
I'm fed up with Pax Exam and would love to replace it as the
hibernate-osgi integration test harness. Most of the Karaf committers
I've been working with hate it more than I do. Every single time we
upgrade the Karaf version, change something less-than-minor in
hibernate-osgi, upgrade or change dependencies, or attempt to upgrade
Pax Exam itself,
there's some new obfuscated failure. And no matter how much I pray, it
refuses to let us get to the container logs to figure out what
happened. Tis a house of cards.
One alternative that recently came up elsewhere: use Docker to bootstrap
the container, hit it with our features.xml, install a test bundle that
exposes functionality externally (over HTTP, Karaf commands, etc.), then
hit the endpoints and run assertions.
Pros: true "integration test", plain vanilla Karaf, direct access to all
logs, easier to eventually support and test other containers.
Cons: Need Docker installed for local test runs, probably safer to
isolate the integration test behind a disabled-by-default Maven profile.
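A rough sketch of that flow, just to make it concrete (the image name, port, endpoint, and env-var guard below are all hypothetical placeholders, and the Karaf provisioning step is elided):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

public class KarafDockerHarness {

    // Command line for a throwaway Karaf container; the image name and
    // port mapping are hypothetical placeholders.
    static List<String> dockerRunCommand(String image, int hostPort, int containerPort) {
        return List.of("docker", "run", "--rm", "-d",
                "-p", hostPort + ":" + containerPort, image);
    }

    // Hits an HTTP endpoint exposed by the test bundle; returns the status code.
    static int probe(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .statusCode();
    }

    public static void main(String[] args) throws Exception {
        // Disabled by default, mirroring the proposed Maven profile:
        // this path needs a local Docker daemon.
        if (System.getenv("RUN_KARAF_IT") == null) {
            System.out.println("skipped; set RUN_KARAF_IT to run against Docker");
            return;
        }
        // 1. Bootstrap plain vanilla Karaf in Docker.
        new ProcessBuilder(dockerRunCommand("apache/karaf:latest", 8181, 8181))
                .inheritIO().start().waitFor();
        // 2. Here we would install our features.xml and the test bundle,
        //    e.g. over the Karaf ssh console or `docker exec`.
        // 3. Hit the endpoints the test bundle exposes and run assertions.
        if (probe("http://localhost:8181/hibernate-osgi-test/ping") != 200) {
            throw new AssertionError("test bundle endpoint not healthy");
        }
    }
}
```

The env-var guard plays the role of the disabled-by-default profile, so the class compiles and runs safely on machines without Docker.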
Any gut reactions?
OSGi is fun and I'm not at all bitter,
God bless JBoss Nexus...
JBoss Nexus is doing some over-zealous validation of a relocation POM.
As far as I have found, Maven/Sonatype expect only a very minimal POM
for relocation artifacts, yet JBoss Nexus' validations check that all
the normal POM values are defined. I'll have to figure this one out. In
the meantime, anyone know the proper place to ask about this? JBoss Jira?
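For reference, the minimal shape Maven documents for a relocation POM looks like this (the coordinates below are made up for illustration):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <!-- the old coordinates consumers still reference -->
  <groupId>org.hibernate</groupId>
  <artifactId>some-old-artifact</artifactId>
  <version>5.3.0.Beta1</version>
  <distributionManagement>
    <relocation>
      <!-- the new coordinates Maven should redirect to -->
      <groupId>org.hibernate.orm</groupId>
      <artifactId>some-new-artifact</artifactId>
    </relocation>
  </distributionManagement>
</project>
```

Nothing beyond these elements should be required for the redirect to work, which is what makes the extra validation surprising.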
Anyone know what happened to the ability to export release notes in text
format from Jira? When I click on a version's release notes it takes me to
the HTML format as it always did, but there is no longer an option to
choose the text format.
While the new build machines are fast, some of you pointed out that we're
now spending a relatively high amount of time downloading Maven
dependencies, a problem compounded by the fact that we "nuke"
slaves shortly after they become idle.
I just spent the day testing a distributed file system, and it's now
running in "production".
It's used exclusively to store the Gradle and Maven caches. This is
stateful and independent from the lifecycle of individual slave nodes.
Unfortunately this solution is not viable for Docker images, so while
I experimented with the idea I backed off from moving the docker
storage graph to a similar device. Please don't waste time trying that
w/o carefully reading the Docker documentation or talking with me :)
Also, beyond correctness of storage semantics, it's likely far less
efficient for Docker.
To learn more about our new cache:
I'd add that, because of other IO tuning in place, writes might
appear out of order to other nodes, and conflicts are not handled.
Shouldn't be a problem since snapshots now have timestamps, but this
might be something to keep in mind.
Please never rely on this as "storage": it's just meant as cache and
we reserve the right to wipe it all out at any time.
HHH-12150 is currently set to be fixed in 5.3.0. I have some time I can
spend on this. There's another issue involving @MapKeyColumn, HHH-10575.
Should I work on these, or something else for 5.3.0.Beta?
While reviewing old PRs we have in the ORM project, I stumbled on this one
about serializing the SessionFactory.
I created a new PR, rebased on top of the current master branch and all
tests are passing fine.
If anyone wants to take a look, this is the PR:
I'm thinking we should integrate it in 5.3.Alpha, so we can stabilize it
if there are unforeseen issues.
The only drawback is that, if we allow the SF to be Serializable, upgrading
will be much more difficult if we change the object structure.
We could make it clear that this might not be supported, or tie the
serialVersionUID to the Hibernate version: major.minor.patch.
The main benefit is that, for a microservices architecture, Hibernate could
start much faster this way.
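A minimal sketch of what tying the serialVersionUID to the version could look like (the packing scheme below is just one possible illustration, not an agreed format):

```java
public class VersionedUid {

    // Hypothetical encoding: pack major.minor.patch into one long,
    // e.g. 5.3.0 -> 5_003_000. Bumping any component then deliberately
    // invalidates serialized SessionFactory state from older releases.
    static long serialVersionUid(int major, int minor, int patch) {
        return major * 1_000_000L + minor * 1_000L + patch;
    }

    public static void main(String[] args) {
        System.out.println(serialVersionUid(5, 3, 0)); // prints 5003000
    }
}
```

Any scheme works as long as it changes whenever the serialized object structure might change; a per-release constant makes the incompatibility explicit instead of a runtime surprise.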
TL;DR: I installed a plugin to prioritize Jenkins jobs, please let me know
if you notice anything wrong. Also, I will remove the Heavy Job plugin
soon, let me know if you're not okay with that.
I recently raised the issue on HipChat that some Jenkins builds are
triggered in batches, something like 4 or 5 at a time. Since builds are
executed in the order they are requested, this forces the next requested
builds to wait for more than one hour before being executed, regardless of
their urgency.
One example of such a batch is whenever something is pushed to Hibernate ORM
master (or Search master, probably): one build is triggered for tests
against H2, another for tests against PostgreSQL, another for tests against
MariaDB, and so on.
It turns out there is a solution for this problem: the PrioritySorter
plugin. I installed the plugin on CI and configured it to give higher
priority to the following builds:
- Builds triggered by users (highest priority)
- Release builds (builds in the "Release" view)
- Website builds (builds in the "Website" view)
- PR builds (builds in the "PR" view)
In practice, such builds will be moved to the front of the queue whenever
they are triggered, resulting in reduced waiting times.
I hope we will be able to use this priority feature instead of the Heavy
Job plugin (which allows assigning weights to jobs), and avoid concurrent
builds completely. With the current setup, someone releasing his/her
project will only have to wait for the currently executing build to finish,
and will get the highest priority on the release builds. Maybe this is
enough? If you disagree, please raise your concerns now: I will disable the
Heavy Job plugin soon and set each slave to only offer one execution slot.
Please let me know if you notice anything wrong. I tested the plugin on a
local Jenkins instance, but who knows...
yoann(a)hibernate.org / yrodiere(a)redhat.com
Hibernate NoORM team
So, HHH-5529 <https://hibernate.atlassian.net/browse/HHH-5529> defines a
feature which I'd like to work on, but I want to hear opinions first.
Currently, bulk deletes only clear join tables of the affected entity
type. I guess one could argue that this was done because collection-table
entries, in contrast to join-table entries, should be bound to the entity
table's lifecycle by using an FK with delete cascading. Or maybe it just
wasn't implemented because nobody stepped up.
I'd like to fill this gap and implement the deletion of the collection
table entries, but make that and the join table entry deletion configurable.
Does anyone have anything against that?
Would you prefer a single configuration option for join-table and
collection-table clearing? If we enable that option by default,
collection tables will then be cleared, whereas currently users would get
an FK violation. I don't know whether that can be classified as a
behavioral break. Or should we have two configuration options? Even then,
would we enable collection-table entry deletion by default?
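To make the proposal concrete, here is a toy model of the statement plan for a bulk `delete from Person` (the table names are hypothetical, and this only models the row-level effect being discussed, not Hibernate's actual delete handling):

```java
import java.util.ArrayList;
import java.util.List;

public class BulkDeletePlan {

    // Sketch of the statements emitted for `delete from Person`.
    // Today only the join-table and entity-table deletes happen; the
    // proposal adds the collection-table delete behind a config flag.
    static List<String> planFor(boolean clearCollectionTables) {
        List<String> sql = new ArrayList<>();
        sql.add("delete from PERSON_GROUP");        // join table: already cleared today
        if (clearCollectionTables) {
            sql.add("delete from PERSON_NICKNAME"); // @ElementCollection table: the new step
        }
        sql.add("delete from PERSON");              // entity table
        return sql;
    }

    public static void main(String[] args) {
        planFor(true).forEach(System.out::println);
    }
}
```

With the flag off, the PERSON_NICKNAME rows are left behind, which is exactly where users currently hit the FK violation if the collection table has a non-cascading FK.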
Kind regards,