[Hawkular-dev] OpenShift Pet vs Cattle metaphor

Matt Wringe mwringe at redhat.com
Thu Oct 20 08:41:44 EDT 2016


----- Original Message -----
> From: "Lukas Krejci" <lkrejci at redhat.com>
> To: hawkular-dev at lists.jboss.org
> Sent: Thursday, 20 October, 2016 6:28:09 AM
> Subject: Re: [Hawkular-dev] OpenShift Pet vs Cattle metaphor
> 
> I have hardly any hands-on experience with OpenShift or Kubernetes, but I was
> under the impression that pods can be configured using environment variables.

If you are dealing with a specific single pod that you are monitoring, then yes, you could manually set a unique feed id via something like an environment variable, or just give the pod a specific name.
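As a rough sketch of that idea (the env var and property names below are made up for illustration, not the agent's actual options), a small wrapper could derive the feed id from values the pod already exposes, e.g. via the downward API, and pass it on as a system property:

#!/usr/bin/env python
# Hypothetical wrapper: build a feed id from env vars injected into the pod
# (e.g. via the Kubernetes downward API) and pass it to the agent as a -D
# system property. HAWKULAR_AGENT_FEED_ID and hawkular.agent.feed-id are
# assumed names; check the agent docs for the real option.
import os

namespace = os.environ.get("POD_NAMESPACE", "default")
pod_name = os.environ.get("POD_NAME", "unknown-pod")
feed_id = os.environ.get("HAWKULAR_AGENT_FEED_ID", namespace + "." + pod_name)

# Replace this process with the server, passing the feed id through.
os.execvp("/opt/jboss/wildfly/bin/standalone.sh", [
    "standalone.sh",
    "-Dhawkular.agent.feed-id=" + feed_id,  # assumed property name
])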

Since we can't use petsets for our Cassandra instances yet, we do something similar. Each Cassandra instance in a cluster has its own unique id with a corresponding persistent volume (e.g. hawkular-cassandra-node-1, hawkular-cassandra-node-2, etc.). But this is not a great solution, since we have to manually increase or decrease the cluster size with templates instead of using the tools which are meant to do this (e.g. replicasets/petsets). Using the proper tools will make things much easier for admins and much less prone to errors.
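Just to illustrate the pattern (this is not the actual template logic, only the naming scheme from the example above):

# Each Cassandra node gets a fixed ordinal, and that ordinal ties the pod to
# its own persistent volume. Scaling up or down means instantiating the
# template again by hand with the next ordinal, which is exactly the manual
# step a replicaset/petset would take care of for us.
def cassandra_node(ordinal):
    name = "hawkular-cassandra-node-%d" % ordinal
    return {
        "name": name,
        "persistent_volume": name + "-pv",  # hypothetical volume name
    }

cluster = [cassandra_node(i) for i in range(1, 3)]  # nodes 1 and 2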

The problem is that, for the most part, you are probably not going to be dealing with single pods. Users are going to be running groups of pods in a replicaset, and the number of pods running there is dynamic: it can increase or decrease, the pods can all be stopped and restarted again, etc.

Even if, say, you know you only want to run one pod for your application, you may want to do something like bring up a new version of your application in a second pod and let it run for a bit before shutting down your old pod. If both used the same feed id, this overlap would cause some strange values in your collected metrics.

You may want to rethink how to handle this. I would expect each pod to have its own feed and its own metrics being gathered, since they are separate, individual things. I would then expect the console to figure out how to manage and group this information in a manner that makes sense.

This is how we handle it in OpenShift. Each pod has its own metrics being gathered for it, and we have the metadata for the pods stored as tags. From this I can see the graphs for the currently running pods, as well as get a graph of the overall usage across a replica set (currently running and previously running pods).
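To make that concrete, here is a rough sketch of the kind of query involved (not the actual console code; the host, tenant header and tag names below are assumptions about a particular deployment, so check the Hawkular Metrics docs for the exact spelling):

import time
import requests

BASE = "https://hawkular-metrics.example.com/hawkular/metrics"  # hypothetical host
HEADERS = {"Hawkular-Tenant": "my-project"}  # the OpenShift project acts as the tenant

# Find every gauge whose tags mark it as memory usage for pods of "myapp".
# Pods that are no longer running still show up here, which is what makes the
# "overall usage across the replica set" graph possible.
definitions = requests.get(
    BASE + "/gauges",
    headers=HEADERS,
    params={"tags": "descriptor_name:memory/usage,labels:*app:myapp*"},
).json()

one_hour_ago = int((time.time() - 3600) * 1000)
for d in definitions:
    # One series per pod: fetch the raw data points for this metric id.
    points = requests.get(
        BASE + "/gauges/" + requests.utils.quote(d["id"], safe="") + "/raw",
        headers=HEADERS,
        params={"start": one_hour_ago},
    ).json()
    print(d["id"], len(points), "points in the last hour")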

> 
> If that is true, couldn't the agent just read its feed id from a well-known
> env var? Or if that is not possible then surely we can have some wrapper
> script that would take the env var and pass it on to the agent as a -D system
> property (which I believe the agent accepts already).
> 
> On Wednesday, October 19, 2016 4:00:21 PM CEST Matt Wringe wrote:
> > ----- Original Message -----
> > 
> > > From: "Jiri Kremser" <jkremser at redhat.com>
> > > To: "Discussions around Hawkular development"
> > > <hawkular-dev at lists.jboss.org> Sent: Wednesday, 19 October, 2016 11:49:26
> > > AM
> > > Subject: Re: [Hawkular-dev] OpenShift Pet vs Cattle metaphor
> > > 
> > > Hi Matt,
> > > thanks for response.
> > > 
> > > PetSets are meant for clustering of pods which have persistent storage.
> > > If this is not your use case, what exactly are you trying to do? There
> > > may be better ways to handle it.
> > > 
> > > I am trying to figure out how to monitor WildFly in OpenShift. If I am
> > > not mistaken, all the metric ids contain the feed id, and the feed id (at
> > > least for WildFly) is autogenerated if it's not provided in the xml
> > > config. If the container/pod is killed and re-created, its history is
> > > lost with the feed id. That's why I thought the Pet Sets with persistent
> > > ids could help.
> > 
> > I'm not sure this is the right approach.
> > 
> > The pods you are monitoring are still "cattle". They may be started,
> > stopped, or removed; the number of running instances may increase or
> > decrease; or some other change may occur.
> > 
> > I might have a replica set of 3 pods right now; 10 minutes from now, due to
> > increased load, my replica set may autoscale to 5 pods; and 2 hours from now
> > it may decrease down to 2 pods.
> > 
> > If you want to track metrics from individual pods, then you should be
> > setting the feed id to something unique, like the namespace plus the pod
> > name. The feed will change when the pod is recreated, but that is what you
> > want: it's no longer the same thing being monitored.
> > 
> > If you want to keep track of something like a particular replica set, then
> > you need to use inventory or tags and query based on that.
> > > jk
> > > 
> > > On Wed, Oct 19, 2016 at 3:49 PM, Matt Wringe < mwringe at redhat.com >
> > > wrote:
> > > 
> > > 
> > > ----- Original Message -----
> > > 
> > > > From: "Jiri Kremser" < jkremser at redhat.com >
> > > > To: "Discussions around Hawkular development" <
> > > > hawkular-dev at lists.jboss.org >
> > > > Sent: Thursday, 13 October, 2016 8:21:10 AM
> > > > Subject: [Hawkular-dev] OpenShift Pet vs Cattle metaphor
> > > > 
> > > > Hello,
> > > > today I was on an L&L about storage in OpenShift and I learned an
> > > > interesting thing. I always thought that everything needs to be
> > > > immutable and stateless and all the state needs to be handled by means
> > > > of NFS persistent volumes. Luckily, there is a feature in Kubernetes
> > > > (since 1.3) that allows the PODs to be treated as pets. It's called
> > > > PetSet [1] and it assigns a unique ID (and a persistent DNS record) to
> > > > a POD that runs in this "mode".
> > > 
> > > For OpenShift, we would have moved to using PetSets for our Cassandra
> > > pod, but it's not a fully supported feature yet. In the next version we
> > > will be moving over to using it.
> > > 
> > > It will make changing the cluster size for Cassandra nodes a lot easier
> > > once we can use this.
> > > 
> > > > A common use-case for PetSet is a set of pods with relational DBs that
> > > > use some kind of master-slave replication, where the slaves need to
> > > > know the master's address. But it can be used for anything. We can use
> > > > the hostname as the feed id, for instance.
> > > > 
> > > > I don't know how popular this will be because it kind of defeats the
> > > > purpose of immutable infrastructure, but it can save us some work with
> > > > the feed identity. And of course we also need to support the "normal"
> > > > POD scenario.
> > > 
> > > PetSets are meant for clustering of pods which have persistent storage.
> > > If this is not your use case, what exactly are you trying to do? There
> > > may be better ways to handle it.
> > > 
> > > > [1]: http://kubernetes.io/docs/user-guide/petset/
> > > > 
> > > > jk
> > > > 
> > > > _______________________________________________
> > > > hawkular-dev mailing list
> > > > hawkular-dev at lists.jboss.org
> > > > https://lists.jboss.org/mailman/listinfo/hawkular-dev
> > > 
> > > _______________________________________________
> > > hawkular-dev mailing list
> > > hawkular-dev at lists.jboss.org
> > > https://lists.jboss.org/mailman/listinfo/hawkular-dev
> > 
> > _______________________________________________
> > hawkular-dev mailing list
> > hawkular-dev at lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/hawkular-dev
> 
> 
> --
> Lukas Krejci
> _______________________________________________
> hawkular-dev mailing list
> hawkular-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hawkular-dev
> 

