I see a lot of confusion on that thread.
Prometheus is quite an internal detail for Hawkular Services/CloudForms.
We also have a team working directly on Prometheus with various
consumers such as OpenShift and Fuse.
On Fri, Dec 8, 2017 at 9:54 PM, Matthew Wringe <mwringe(a)redhat.com> wrote:
On Fri, Dec 8, 2017 at 1:38 PM, Viet Nguyen
<vnguyen(a)redhat.com> wrote:
> Clarification: I'm not suggesting an Assemble-Your-Own solution whereby
> the users independently install Prometheus then Hawkular Services.
>
> Rather, what I'm proposing is somewhat analogous to the Fedora model. We want
> to be the community leader for Prometheus on OpenShift
No, we don't. This is not the goal of Hawkular Services and it never has
been.
Hawkular Services is about being able to monitor and manage middleware
servers. That's our purpose.
We are a consumer of Prometheus, using it as our metrics solution. In the
past this was done with Hawkular Metrics, and in the future we might
switch to some other metrics solution.
We are currently using Prometheus as a tool. That is it.
There are other teams working on the Prometheus experience in OpenShift,
and we will be working with them. We will be reusing as much of their
containers, scripts, and configurations as possible. It's these teams, not
us, whose goal is community leadership of running Prometheus on OpenShift.
> and on top of that our expertise in Middleware monitoring brings value
> to Middleware users by providing our Prometheus++ solution (core Prometheus
> + Middleware monitoring enhancements).
>
> >If someone wants p8s for something other than middleware
> >monitoring, then they will have to use a different p8s pod.
>
> Let me flip that inside out if I may. If someone wants to contribute to
> P8s for something other than Middleware monitoring, then they will have to
> go to a different project.
Yes of course, they will go to the OpenShift team working directly with
Prometheus. Just as we are.
> With that I feel we may be missing a huge opportunity here. A large
> community of P8s users will go elsewhere to contribute. We'll be on our
> own island.
>
> I'm bringing up the sidecar approach because it naturally imposes loose
> coupling between P8s and HS even when the two are in the same pod.
>
>
> Viet
>
>
>
> ----- Original Message -----
> From: "Matthew Wringe" <mwringe(a)redhat.com>
> To: "Discussions around Hawkular development" <
> hawkular-dev(a)lists.jboss.org>
> Sent: Thursday, December 7, 2017 6:09:34 AM
> Subject: Re: [Hawkular-dev] OpenShift Deployment
>
> On Wed, Dec 6, 2017 at 3:17 PM, Viet Nguyen <vnguyen(a)redhat.com> wrote:
>
>
> >Hi all,
>
> >TL;DR: Technically it's an ALL-IN-ONE POD, but make Hawkular Services an
> >optional "sidecar" container
>
> I believe the "sidecar" container concept is more commonly used when you
> have a main container plus a small container that adds some functionality
> (e.g. an auth proxy or a metrics agent); see the sketch below.
>
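> To illustrate the distinction, a typical sidecar pod looks roughly like the
> following. This is a minimal sketch; the names, images, and the proxy flag
> are hypothetical, purely to show a main container plus a small helper:
>
>     apiVersion: v1
>     kind: Pod
>     metadata:
>       name: some-app
>     spec:
>       containers:
>       - name: main-app                     # the primary container
>         image: example/some-app:latest     # hypothetical image
>       - name: auth-proxy                   # small sidecar adding auth
>         image: example/oauth-proxy:latest  # hypothetical image
>         args: ["--upstream=http://localhost:8080"]  # hypothetical flag
>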
> In our case it would just be a pod with two containers which are tightly
> coupled.
>
>
>
> >Would our team also look at providing Prometheus (without HS) as
> >the de facto choice for OpenShift?
>
> For Hawkular Services, we will have our own Prometheus which is private
> to our needs. Someone will not be able to take our pod and use only the
> Prometheus from it.
>
>
>
> >What I'm proposing is still technically an ALL-IN-ONE pod option.
> >However, instead of looking at (Prometheus + HS) as a monolithic
> >solution, we can position HS as an enhancement to plain vanilla
> >Prometheus. This add-on sidecar[1] approach can satisfy both
> >Middleware users and non-middleware community users who may not
> >necessarily need Hawkular Services. Let's say I want to use library X,
> >and X only comes with X+Y (which will cost me CPU and RAM
> >resources); I may be less inclined to use the library.
>
> We are not entertaining this idea, and conceptually it's closer to having
> Hawkular Services as the main container and p8s as the sidecar container.
>
> If someone wants middleware monitoring, they have to use our pod with
> Hawkular Services and p8s. It's important that we control how our own p8s
> instance works and that we prevent someone from modifying it for their own
> purposes (it would be too difficult to handle all the different scenarios
> here).
>
> If someone wants p8s for something other than middleware monitoring, then
> they will have to use a different p8s pod.
>
>
>
> [1] more on "sidecar" containers:
> http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html
>
>
> Viet
>
>
>
>
> ----- Original Message -----
> From: "Matthew Wringe" < mwringe(a)redhat.com >
> To: "Discussions around Hawkular development" <
> hawkular-dev(a)lists.jboss.org >
> Sent: Monday, November 27, 2017 7:38:50 AM
> Subject: [Hawkular-dev] OpenShift Deployment
>
> With the changes that are now going to include Prometheus, how do we want
> to deploy this in OpenShift?
>
> We can have a few options:
>
> ALL-IN-ONE CONTAINER
> We put both Hawkular Services and Prometheus in the same container.
>
> Pros:
> - easy to deploy in plain docker (but this doesn't appear to be a use case
> we are targeting anyway)
> - shares the same network namespace (even localhost) and IP address (though
> each service is on a different port).
> - doesn't require any special wiring of components.
> - can share the same volume mount
> - component versions can't get out of sync.
>
> Cons:
> - workflow doesn't work nicely. Docker containers are meant to run a
> single application, and running two can cause problems; e.g. lifecycle
> events become tricky and require hacks to work around (see the sketch
> below).
> - can't independently deploy things
> - can't reuse or share any existing Prometheus docker containers.
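>
> To make that lifecycle hack concrete: the single container would need a
> wrapper entrypoint that starts both processes. A minimal sketch (the image
> name, binary paths, and flags are all hypothetical):
>
>     apiVersion: v1
>     kind: Pod
>     metadata:
>       name: hawkular-all-in-one
>     spec:
>       containers:
>       - name: hawkular-and-prometheus
>         image: example/hawkular-all-in-one:latest  # hypothetical image
>         command: ["/bin/sh", "-c"]
>         # Prometheus is backgrounded, so only Hawkular Services gets proper
>         # signal handling; this is exactly the kind of hack mentioned above.
>         args:
>           - |
>             /opt/prometheus/prometheus --config.file=/etc/prometheus/prometheus.yml &
>             exec /opt/hawkular/bin/standalone.sh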
>
> ALL-IN-ONE POD
> Hawkular Services and Prometheus are in their own containers, but they
> are both deployed within the same pod.
>
> Pros:
> - shares the same network connection.
> - bound to the same machine (useful if sharing the same hostPath PV) and we
> don't need to worry about external network configuration (e.g. firewalls
> between OpenShift nodes)
> - PVs can be shared or separate.
> - lifecycle events will work properly.
>
> Cons:
> - readiness checks mean that both containers have to pass before the pod
> enters the ready state. So if Prometheus is failing for some reason,
> Hawkular Services will not be reachable under the service either.
> - cannot independently update one container. If we need to deploy a new
> container we will need to bring down the whole pod.
> - we are stuck with a 1:1 ratio between Hawkular Services and Prometheus
> (see the sketch below).
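>
> For reference, a minimal sketch of this all-in-one pod. The names, images,
> and probe endpoints are hypothetical; the point is the shared volume and
> the fact that readiness is coupled across both containers:
>
>     apiVersion: v1
>     kind: Pod
>     metadata:
>       name: hawkular-services
>     spec:
>       containers:
>       - name: hawkular-services
>         image: example/hawkular-services:latest    # hypothetical image
>         readinessProbe:                            # the pod is only ready
>           httpGet: { path: /status, port: 8080 }   # when BOTH probes pass
>       - name: prometheus
>         image: example/prometheus:latest           # hypothetical image
>         readinessProbe:
>           httpGet: { path: /-/ready, port: 9090 }
>         volumeMounts:
>         - name: metrics-data
>           mountPath: /prometheus
>       volumes:
>       - name: metrics-data
>         persistentVolumeClaim:
>           claimName: metrics-data                  # shared or separate PVCs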
>
>
> SEPARATE PODS
> Hawkular Services and Prometheus have their own separate pods.
>
> Pros:
> - can independently run components, and each component has its own
> separate lifecycle
> - if in the future we want to cluster Hawkular Services, this will make
> it a lot easier and will also allow running an n:m ratio between
> Hawkular Services and Prometheus
> - probably the more 'correct' way to deploy things, as we don't have a
> strong requirement for Hawkular Services and Prometheus to run together.
>
> Cons:
> - more complex wiring. We will need extra services and routes created to
> handle this. That means more things running, more chances for things to go
> wrong, and more things to configure (see the sketch below).
> - reusing a PV between Hawkular Services and Prometheus could be more
> challenging (especially if we are using hostPath PVs). Updating the
> Prometheus scrape endpoint may require a new component and container.
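>
> A rough sketch of that extra wiring (all names here are hypothetical): each
> component gets its own Deployment, plus a Service so Hawkular Services can
> reach Prometheus across pods:
>
>     apiVersion: v1
>     kind: Service
>     metadata:
>       name: prometheus                 # hypothetical name
>     spec:
>       selector:
>         app: prometheus                # matches the Prometheus pods
>       ports:
>       - port: 9090
>         targetPort: 9090
>     ---
>     apiVersion: apps/v1
>     kind: Deployment
>     metadata:
>       name: hawkular-services          # hypothetical name
>     spec:
>       replicas: 1                      # can scale independently (n:m)
>       selector:
>         matchLabels: { app: hawkular-services }
>       template:
>         metadata:
>           labels: { app: hawkular-services }
>         spec:
>           containers:
>           - name: hawkular-services
>             image: example/hawkular-services:latest  # hypothetical image
>             env:
>             - name: PROMETHEUS_URL     # hypothetical env var for wiring
>               value: http://prometheus:9090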
_______________________________________________
hawkular-dev mailing list
hawkular-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/hawkular-dev