Right now the plan is to have one Hawkular Services instance per Prometheus instance, so if you need to scale up you will need to deploy another instance of both components.

We could in the future move to supporting an n:m ratio between Hawkular Services and Prometheus, but that can get a bit tricky to handle and is not something we are seriously considering at the moment.

I would really rather not have one type of setup for small deployments and another setup for larger deployments. It's going to add extra work to make sure both setups perform properly, and to figure out how to migrate from one setup to the other (e.g. someone starts out small and grows, so they now need to move to the large setup).


On Mon, Nov 27, 2017 at 4:17 PM, Julie Stickler <jstickle@redhat.com> wrote:
I seem to remember John Doyle saying in a recent meeting that we had a large number of JON customers with small implementations and a few customers with really large JON implementations. So I'm wondering, would it make sense to have more than one option, since one size probably won't fit all?

Smaller implementation (monitoring a handful of servers) - ALL-IN-ONE POD - Hawkular Services and Prometheus are in their own containers, but they are both deployed within the same pod. For ease of use and upgrade for customers with a smaller monitoring footprint.

Large implementation (monitoring thousands of servers) - SEPARATE PODS - Hawkular Services and Prometheus have their own separate pods. This allows larger customers to scale the n:m ratio between Hawkular Services and Prometheus to meet their needs, and to upgrade the components separately as needed.

JULIE STICKLER
TECHNICAL WRITER
Red Hat
Westford – 3S353
jstickle@redhat.com    T: 1(978)-399-0463-(812-0463)     IRC: jstickler


On Mon, Nov 27, 2017 at 10:38 AM, Matthew Wringe <mwringe@redhat.com> wrote:
With the changes that are now going to include Prometheus, how do we want to deploy this in OpenShift?

We can have a few options:

ALL-IN-ONE CONTAINER
We put both Hawkular Services and Prometheus in the same container.

Pros:
- easy to deploy in plain Docker (but this doesn't appear to be a use case we are targeting anyway)
- shares the same network namespace (even localhost) and IP address (the two services just listen on different ports).
- doesn't require any special wiring of components.
- can share the same volume mount.
- versions of the components can't get out of sync.

Cons:
- workflow doesn't work nicely. Docker containers are meant to run only a single application, and running two can cause problems, e.g. lifecycle events would become tricky and require some hacks to work around.
- can't independently deploy the components.
- can't reuse or share any existing Prometheus Docker images.
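
As a rough illustration of the kind of hack the first con refers to, here is a minimal, hypothetical entrypoint sketch in Python that an all-in-one image would need in order to supervise both processes (the commands and paths are placeholders, not the real launch commands):

# Hypothetical all-in-one entrypoint: supervise two processes in one container.
# The commands below are placeholders, not the real Hawkular/Prometheus launch
# commands; a real image would also need signal handling, log multiplexing, etc.
import subprocess
import sys
import time

PROCS = {
    "hawkular-services": ["/opt/hawkular/bin/standalone.sh"],                            # placeholder
    "prometheus": ["/bin/prometheus", "--config.file=/etc/prometheus/prometheus.yml"],   # placeholder
}

def main():
    running = {name: subprocess.Popen(cmd) for name, cmd in PROCS.items()}
    while True:
        for name, proc in running.items():
            code = proc.poll()
            if code is not None:
                # One process died, so take the whole container down and let the
                # orchestrator restart it; both apps are forced to share one lifecycle.
                print(f"{name} exited with {code}, stopping container", file=sys.stderr)
                for other in running.values():
                    if other.poll() is None:
                        other.terminate()
                sys.exit(code or 1)
        time.sleep(5)

if __name__ == "__main__":
    main()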

ALL-IN-ONE POD
Hawkular Services and Prometheus are in their own containers, but they are both deployed within the same pod.

Pros:
- shares the same network connection.
- bound to the same machine (useful if sharing the same hostPath PV) and we don't need to worry about external network configuration (e.g. firewalls between OpenShift nodes)
- PVs can be shared or separate.
- lifecycle events will work properly.

Cons:
- readiness checks mean that both containers have to pass before the pod enters the ready state. So if Prometheus is failing for some reason, Hawkular Services will not be available behind the service either.
- cannot independently update one container. If we need to deploy a new container we will need to bring down the whole pod.
- are stuck with a 1:1 ratio between Hawkular Services and Prometheus
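
To make the shape of this option concrete, here is a minimal sketch using the Python kubernetes client; the image names, ports and PVC name are placeholders, not settled choices:

# Sketch only: one pod, two containers, optionally sharing a PVC.
from kubernetes import client

shared_volume = client.V1Volume(
    name="shared-data",
    persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
        claim_name="hawkular-metrics-pvc"  # hypothetical claim name
    ),
)

hawkular = client.V1Container(
    name="hawkular-services",
    image="hawkular/hawkular-services:latest",            # placeholder image
    ports=[client.V1ContainerPort(container_port=8080)],
    volume_mounts=[client.V1VolumeMount(name="shared-data", mount_path="/var/lib/hawkular")],
)

prometheus = client.V1Container(
    name="prometheus",
    image="prom/prometheus:latest",                       # placeholder image
    ports=[client.V1ContainerPort(container_port=9090)],
    volume_mounts=[client.V1VolumeMount(name="shared-data", mount_path="/prometheus")],
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="hawkular-all-in-one", labels={"app": "hawkular"}),
    spec=client.V1PodSpec(containers=[hawkular, prometheus], volumes=[shared_volume]),
)

# e.g. client.CoreV1Api().create_namespaced_pod(namespace="hawkular", body=pod)

Because both containers live in one pod spec, they are scheduled together and scale together, which is exactly where the 1:1 constraint above comes from.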


SEPARATE PODS
Hawkular Services and Prometheus have their own separate pods.

Pros:
- can independently run components and each component has its own separate lifecycle
- if in the future we want to cluster Hawkular Services, this will make it a lot easier and will also allow for running an n:m ratio between Hawkular Services and Prometheus
- probably the more 'correct' way to deploy things as we don't have a strong requirement for Hawkular Services and Prometheus to run together.

Cons:
- more complex wiring. We will need extra services and routes created to handle this. This means more things running and more chances for things to go wrong, and also more things to configure.
- reusing a PV between Hawkular Services and Prometheus could be more challenging (especially if we are using hostPath PVs). Updating the Prometheus scrape endpoint may require a new component and container.
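
For comparison, a minimal sketch of the separate-pods wiring with the same Python client: each component gets its own deployment, and a service gives Hawkular Services a stable DNS name for reaching Prometheus (again, the names, labels and ports are placeholders):

# Sketch only: independent deployments plus a service in front of Prometheus.
from kubernetes import client

def single_container_deployment(name, image, port):
    """One-container deployment whose replicas and image can change independently."""
    container = client.V1Container(
        name=name, image=image, ports=[client.V1ContainerPort(container_port=port)]
    )
    return client.V1Deployment(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": name}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": name}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

hawkular_deploy = single_container_deployment(
    "hawkular-services", "hawkular/hawkular-services:latest", 8080)   # placeholder image
prometheus_deploy = single_container_deployment(
    "prometheus", "prom/prometheus:latest", 9090)                     # placeholder image

# Service in front of Prometheus; Hawkular Services would be configured to
# talk to http://prometheus:9090 instead of localhost.
prometheus_svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="prometheus"),
    spec=client.V1ServiceSpec(
        selector={"app": "prometheus"},
        ports=[client.V1ServicePort(port=9090, target_port=9090)],
    ),
)

This is the extra wiring the first con refers to: more objects to create and keep configured, in exchange for independent lifecycles and the option of an n:m ratio later.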

_______________________________________________
hawkular-dev mailing list
hawkular-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/hawkular-dev


