Great, thanks for all the work!
Now that we have on-demand slave spawning, maybe we could get rid of our
"hack" of assigning 5 slots to each slave and a weight of 3 to each job?
I would expect the website and release jobs to rarely wait in the queue,
and if they do we can always set up a specific "priority queue" for
those jobs, with a dedicated slave pool.
I'm only asking because, last time I checked, it was not possible to
assign a weight to jobs defined as Jenkins pipelines. Those jobs ended
up with a weight of 1, so we were running multiple instances of them on
the same slave... which is obviously not good.
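For the "priority queue" fallback, a minimal sketch of what I have in
mind, assuming we'd tag the dedicated slaves with a hypothetical
'priority' label - the pipeline simply pins itself to that pool:

    // Jenkinsfile sketch: pin a pipeline to a dedicated slave pool.
    // Assumption: the dedicated slaves carry the hypothetical label
    // 'priority' and nothing else is scheduled on them.
    pipeline {
        agent {
            // Only slaves labeled 'priority' run this job, so it never
            // competes with the regular build queue.
            label 'priority'
        }
        stages {
            stage('Build') {
                steps {
                    sh 'mvn -version' // placeholder for the real build
                }
            }
        }
    }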
I can do the boring editing work on each and every job; I'm just asking
whether it seems OK to you?
Yoann Rodière
Hibernate NoORM Team
yoann(a)hibernate.org
On 5 January 2018 at 00:52, Steve Ebersole <steve(a)hibernate.org> wrote:
Awesome Sanne! Great work.
Anything you need us to do to our jobs?
On Thu, Jan 4, 2018, 5:20 PM Sanne Grinovero <sanne(a)hibernate.org> wrote:
> Hi all,
>
> we now have shiny new boxes running CI: more secure, way faster, and
> with fewer "out of disk space" problems, I hope.
>
> # Slaves
>
> Slaves have been rebuilt from scratch:
> - from Fedora 25 to Fedora 27
> - NVMe disks for all storage, including databases, JDKs, dependency
> stores, indexes and journals
> - Now using C5 instances to benefit from Amazon's new "Nitro"
> engines [1]
> - hardware offloading of network operations by enabling ENA [2]
> - NVMe drives also using provisioned IOPS
>
> This took a bit of unexpected low-level work: Fedora images don't
> support ENA yet, so I had to create a custom Fedora re-distribution
> AMI first; it wasn't possible to simply compile the kernel modules for
> the standard Fedora images. These features are expected to land in
> future Fedora Cloud images, but I didn't want to wait, so I made our
> own :) [3]
>
> # Cloud scaling
>
> Idle slaves will self-terminate after some timeout (currently 30m).
> When there are many jobs queueing up, more slaves (up to 5) will
> automatically start.
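>
> In case anyone wants to double-check (or later tune) those two knobs
> without clicking through the UI, here is a Script Console sketch. The
> class and field names are my reading of the ec2-plugin sources, so
> treat them as assumptions and adjust to the installed plugin version:
>
>     // List every EC2 slave template with its idle-termination timeout
>     // and instance cap (assumed field names, per ec2-plugin sources).
>     import jenkins.model.Jenkins
>     import hudson.plugins.ec2.AmazonEC2Cloud
>
>     Jenkins.instance.clouds.getAll(AmazonEC2Cloud).each { cloud ->
>         cloud.templates.each { t ->
>             println "${t.description}: idle=${t.idleTerminationMinutes}m," +
>                     " cap=${t.instanceCap}"
>         }
>     }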
>
> If you're the first to trigger a build you'll have to be patient: it's
> possible that after some quiet time (overnight, say) all slaves are
> gone; the system will boot new ones automatically, but this initial
> boot takes a couple of extra minutes.
>
> # Master node
>
> Well, security patching mostly, but I also finally figured out how to
> work around the bugs that were preventing us from upgrading Jenkins.
>
> So now Jenkins is upgraded to the latest release, including *all
> plugins*. It seems to work, but let's keep an eye on it; not all of
> those plugins are maintained at the quality one would expect.
>
> In particular, attempting to change EC2 configuration properties will
> now trigger a super annoying NPE [4]; either don't make further
> changes, or resort to editing the XML configuration directly.
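>
> For reference, the relevant settings live in JENKINS_HOME/config.xml
> under the <clouds> element; a trimmed sketch follows. The element
> names are assumptions based on how the ec2-plugin serializes its
> fields, so double-check against your own file before editing:
>
>     <clouds>
>       <hudson.plugins.ec2.AmazonEC2Cloud>
>         <!-- cloud-wide cap on concurrently running EC2 slaves -->
>         <instanceCapStr>5</instanceCapStr>
>         <templates>
>           <hudson.plugins.ec2.SlaveTemplate>
>             <!-- minutes a slave may idle before self-terminating -->
>             <idleTerminationMinutes>30</idleTerminationMinutes>
>           </hudson.plugins.ec2.SlaveTemplate>
>         </templates>
>       </hudson.plugins.ec2.AmazonEC2Cloud>
>     </clouds>
>
> After editing, "Manage Jenkins » Reload Configuration from Disk" picks
> the change up without a restart.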
>
> # Next
>
> I'm not entirely done; eventually I'd like to convert our master node
> to ENA/C5/NVMe as well - especially to be able to move the master and
> all slaves into the same physical cluster - but I'll stop now and get
> back to Java, so you all get a chance to identify problems caused by
> the new slaves before I cause more trouble...
>
> Thanks,
> Sanne
>
> 1 - https://www.theregister.co.uk/2017/11/29/aws_reveals_nitro_architecture_bare_metal_ec2_guard_duty_security_tool/
> 2 - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking-ena.html
> 3 - https://pagure.io/atomic-wg/issue/271
> 4 - https://issues.jenkins-ci.org/browse/JENKINS-46856
_______________________________________________
hibernate-dev mailing list
hibernate-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/hibernate-dev