[Hawkular-dev] OpenShift Deployment

John Mazzitelli mazz at redhat.com
Tue Dec 5 17:57:06 EST 2017


For now, I vote for simplicity over concerns like scalability. Let's make it simple and easy for people to try this stuff out. We can worry about Prometheus filling up a volume when we have people using this at a scale where that becomes a real problem.

$0.02 deposited.

----- Original Message -----
> On Tue, Dec 5, 2017 at 4:09 PM, Paul Gier < pgier at redhat.com > wrote:
> 
> 
> 
> If we use a single shared volume, I'd be a little worried about Prometheus
> data filling up all the space and causing problems in the hawkular-services
> container. We've seen some runaway Prometheus storage issues in OpenShift.
> 
> One component eating up all of the other's space is something we might want
> to worry about.
> 
> Prometheus considers its metrics storage non-durable, and that is our stance
> with Hawkular Services as well.
> 
> The only thing we can really do is give users recommendations for how much
> storage they will need. Since we are tightly coupling Hawkular Services with
> Prometheus, it might be easier to just keep them together and give a single
> recommendation for running both.
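> 
> (Illustration only, nothing we have settled on: if the concern is one
> component consuming the other's space, giving each its own claim with an
> explicit size request would put a cap on each side. The claim names and
> sizes below are made up.)
> 
>     apiVersion: v1
>     kind: PersistentVolumeClaim
>     metadata:
>       name: hawkular-services-data     # hypothetical claim name
>     spec:
>       accessModes: ["ReadWriteOnce"]
>       resources:
>         requests:
>           storage: 10Gi                # made-up size; follow our recommendation
>     ---
>     apiVersion: v1
>     kind: PersistentVolumeClaim
>     metadata:
>       name: prometheus-data            # hypothetical claim name
>     spec:
>       accessModes: ["ReadWriteOnce"]
>       resources:
>         requests:
>           storage: 20Gi                # made-up size; Prometheus capped separately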
> 
> 
> 
> 
> 
> On Dec 5, 2017 2:13 PM, "Matthew Wringe" < mwringe at redhat.com > wrote:
> 
> 
> 
> Going with the all-in-one pod approach, there is also the question of how
> many volumes we want to use.
> 
> We can have everything in one volume with multiple directories (e.g. /p8s,
> /hawkular-inventory, etc.). This would make it easier to install as we would
> only have one volume to deal with.
> 
> Or we could have multiple volumes, one for Hawkular Services and one for
> Prometheus. This might give us some more flexibility in the future, but I
> can't see why we would need this extra complexity at the moment.
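> 
> A rough sketch of the single-volume layout (the claim name, image names and
> mount paths are placeholders, not anything we have agreed on); the
> multi-volume alternative would simply replace the one claim with two and
> drop the subPaths:
> 
>     apiVersion: v1
>     kind: Pod
>     metadata:
>       name: hawkular-services
>     spec:
>       volumes:
>       - name: data
>         persistentVolumeClaim:
>           claimName: hawkular-data          # hypothetical shared claim
>       containers:
>       - name: hawkular-services
>         image: hawkular/hawkular-services   # placeholder image name
>         volumeMounts:
>         - name: data
>           mountPath: /hawkular-inventory    # placeholder path
>           subPath: hawkular-inventory       # directory on the shared volume
>       - name: prometheus
>         image: prom/prometheus              # upstream Prometheus image
>         volumeMounts:
>         - name: data
>           mountPath: /prometheus            # Prometheus data directory
>           subPath: p8s                      # directory on the shared volume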
> 
> Any thoughts on this? Or maybe some perspective I am missing?
> 
> On Mon, Dec 4, 2017 at 5:35 AM, Thomas Heute < theute at redhat.com > wrote:
> 
> 
> 
> +1
> 
> On Thu, Nov 30, 2017 at 2:56 PM, Matthew Wringe < mwringe at redhat.com > wrote:
> 
> 
> 
> Yeah, I think we should go with the all-in-one pod approach for now. If we
> discover certain use cases that won't work properly we can re-evaluate.
> 
> On Thu, Nov 30, 2017 at 5:15 AM, Lucas Ponce < lponce at redhat.com > wrote:
> 
> 
> 
> 
> 
> On Thu, Nov 30, 2017 at 10:55 AM, Lucas Ponce < lponce at redhat.com > wrote:
> 
> 
> 
> 
> 
> On Mon, Nov 27, 2017 at 4:38 PM, Matthew Wringe < mwringe at redhat.com > wrote:
> 
> 
> 
> With the changes that are now going to include Prometheus, how do we want to
> deploy this in OpenShift?
> 
> We can have a few options:
> 
> ALL-IN-ONE CONTAINER
> We put both Hawkular Services and Prometheus in the same container.
> 
> Pros:
> - easy to deploy in plain Docker (but this doesn't appear to be a use case we
> are targeting anyway)
> - shares the same network connection and IP address, even localhost (each
> service just listens on a different port).
> - Doesn't require any special wiring of components.
> - Can share the same volume mount
> - versions of the components can't get out of sync.
> 
> Cons:
> - workflow doesn't work nicely. Docker containers are meant to only run a
> single application and running two can cause problems. Eg lifecycle events
> would become tricky and require some hacks to get around things.
> - can't independently deploy things
> - can't reuse or share any existing Prometheus docker containers.
> 
> ALL-IN-ONE POD
> Hawkular Services and Prometheus are in their own containers, but they are
> both deployed within the same pod.
> 
> Pros:
> - shares the same network connection.
> - bound to the same machine (useful if sharing the same hostPath PV) and we
> don't need to worry about external network configurations (e.g. firewalls
> between OpenShift nodes)
> - PVs can be shared or separate.
> - lifecycle events will work properly.
> 
> Cons:
> - readiness checks mean that both containers will have to pass before the pod
> enters the ready state. So if Prometheus is failing for some reason, Hawkular
> Services will not be available under the service either.
> - cannot independently update one container. If we need to deploy a new
> container image we will have to bring down the whole pod.
> - we are stuck with a 1:1 ratio between Hawkular Services and Prometheus
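> 
> (A minimal sketch of what the all-in-one pod could look like; the image
> names, ports and the PROMETHEUS_URL setting are assumptions for
> illustration, not a settled API.)
> 
>     apiVersion: v1
>     kind: Pod
>     metadata:
>       name: hawkular-services
>       labels:
>         app: hawkular-services
>     spec:
>       containers:
>       - name: hawkular-services
>         image: hawkular/hawkular-services   # placeholder image name
>         ports:
>         - containerPort: 8080               # assuming the usual WildFly HTTP port
>         env:
>         - name: PROMETHEUS_URL              # hypothetical setting; containers in a
>           value: http://localhost:9090      # pod share the network, so localhost works
>       - name: prometheus
>         image: prom/prometheus              # upstream Prometheus image
>         ports:
>         - containerPort: 9090               # Prometheus default port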
> 
> 
> One technical requirement is that Hawkular Services needs to know where the
> Prometheus server is at initialization.
> 
> 
> 
> 
> So, I guess that the all-in-one pod approach will simplify things in this case.
> 
> I would start with this architecture first and harden the basic scenarios.
> 
> 
> 
> 
> 
> SEPARATE PODS
> Hawkular Services and Prometheus have their own separate pods.
> 
> Pros:
> - can independently run components and each component has its own separate
> lifecycle
> - if in the future we want to cluster Hawkular Services, this will make it a
> lot easier and will also allow for running an n:m ratio between Hawkular
> Services and Prometheus
> - probably the more 'correct' way to deploy things as we don't have a strong
> requirement for Hawkular Services and Prometheus to run together.
> 
> Cons:
> - more complex wiring. We will need to have extra services and routes created
> to handle this. This means more things running and more chances for things to
> go wrong. Also more things to configure.
> - reusing a PV between Hawkular Services and Prometheus could be more
> challenging (especially if we are using hostPath PVs). Updating the
> Prometheus scrape endpoint may require a new component and container.
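> 
> (For comparison, the separate-pods wiring would need at least a Service in
> front of Prometheus so Hawkular Services can reach it by DNS name; the names
> below are made up.)
> 
>     apiVersion: v1
>     kind: Service
>     metadata:
>       name: prometheus                  # hypothetical service name
>     spec:
>       selector:
>         app: prometheus                 # must match the Prometheus pod labels
>       ports:
>       - port: 9090
>         targetPort: 9090
>     # Hawkular Services, running in its own pod, would then point at
>     # http://prometheus:9090 via cluster DNS instead of localhost, e.g. through
>     # the same hypothetical PROMETHEUS_URL setting sketched earlier.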
> 

