Hawkular OpenShift Agent is now available
by John Mazzitelli
(FYI: participation in the naming poll was "underwhelming". What was decided on was a more descriptive name rather than some code name anyway.)
The Hawkular OpenShift Agent source has been published on GitHub here:
https://github.com/hawkular/hawkular-openshift-agent
For now, we'll track issues in GitHub until we figure out what (if anything) we want to do in JIRA, and where.
If interested, read the README - it provides build and run instructions.
Currently the Hawkular OpenShift Agent supports the following:
1) Watches OpenShift as pods and configmaps are added, modified, and removed
2) As things change in OpenShift, the agent adjusts what it monitors
3) All metric data is stored in Hawkular Metrics
4) Pods tell the agent what to monitor via an annotation that names a config map. That config map holds a single YAML configuration containing all the endpoint information the agent needs in order to monitor it and store its data (see the sketch after this list). Pods can ask for multiple endpoints to be monitored, and multiple pods within the node can be monitored - but only one node is monitored. If you have multiple nodes, you need one agent per node.
5) Each endpoint can have its data stored in its own tenant (as defined in the config map YAML)
6) The agent can also monitor any endpoints you define in the global agent config file - you don't need pods/config maps for this (useful if the agent is running outside of OpenShift, or when there are things on the agent's node that you want monitored without having to look up pods/configmaps).
7) Currently, Prometheus endpoints are supported (both binary and text protocols).
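To make item 4 a bit more concrete, here is a rough, compilable Go sketch (using a recent client-go and gopkg.in/yaml.v2) of how an agent could resolve a pod's annotation into the endpoints it should monitor. The annotation key, the config map data key, and the Endpoint schema below are placeholders for illustration only - the agent's real names and YAML schema are documented in the README.

package monitorconfig

import (
	"context"
	"fmt"

	"gopkg.in/yaml.v2"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// Endpoint is a guess at the shape of one monitored endpoint in the config
// map YAML; the agent's real schema is documented in its README.
type Endpoint struct {
	Type   string `yaml:"type"`   // e.g. "prometheus"
	URL    string `yaml:"url"`    // where to scrape the metrics from
	Tenant string `yaml:"tenant"` // optional per-endpoint tenant (item 5)
}

type MonitoringConfig struct {
	Endpoints []Endpoint `yaml:"endpoints"`
}

// EndpointsForPod reads the config map named by the pod's annotation and
// parses the YAML it carries. The annotation key and config map data key
// used here are placeholders, not the agent's actual names.
func EndpointsForPod(c kubernetes.Interface, namespace, podName string) ([]Endpoint, error) {
	pod, err := c.CoreV1().Pods(namespace).Get(context.TODO(), podName, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	cmName, ok := pod.Annotations["hawkular/monitoring-config"] // placeholder annotation key
	if !ok {
		return nil, fmt.Errorf("pod %s has no monitoring annotation", podName)
	}
	cm, err := c.CoreV1().ConfigMaps(namespace).Get(context.TODO(), cmName, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	var cfg MonitoringConfig
	if err := yaml.Unmarshal([]byte(cm.Data["monitoring.yaml"]), &cfg); err != nil { // placeholder data key
		return nil, err
	}
	return cfg.Endpoints, nil
}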
There are many things we still need to get done:
* Jolokia support is not yet implemented
* Secure access (encryption and authentication) to both Hawkular-Metrics and the metric endpoints
* Details on how to run the agent within OpenShift (DaemonSet?)
* Tag the metrics being stored (no tags are associated with the metrics yet)
* Determine the names of the metrics (right now it's just using the names of the Prometheus metrics as-is)
* etc, etc, etc
Many thanks to Matt Wringe, who got this kicked off with his ideas and OpenShift integration code, which was the foundation of the current codebase.
--John Mazz
OpenShift Pet vs Cattle metaphor
by Jiri Kremser
Hello,
today I was on an L&L about storage in OpenShift and I learned an interesting
thing. I always thought that everything needs to be immutable and
stateless and all the state needs to be handled by means of NFS persistent
volumes. Luckily, there is a feature in Kubernetes (since 1.3) that allows
pods to be treated as pets. It's called PetSet [1] and it assigns a
unique ID (and a persistent DNS record) to a pod that runs in this "mode".
A common use case for PetSet is a set of pods with a relational DB that uses
some kind of master-slave replication, where the slaves need to know the master's
address. But it can be used for anything. We could use the hostname as the
feed ID, for instance.
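For illustration, a minimal Go sketch of that idea, assuming the agent simply adopts the pod's stable PetSet hostname as its feed ID (the pod name in the comment is made up):

package main

import (
	"fmt"
	"log"
	"os"
)

// In a PetSet, each pod gets a stable ordinal hostname such as
// "hawkular-agent-0", so the hostname can double as a stable feed ID
// across restarts.
func feedID() string {
	hostname, err := os.Hostname()
	if err != nil {
		log.Fatalf("cannot determine hostname: %v", err)
	}
	return hostname
}

func main() {
	fmt.Println("using feed id:", feedID())
}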
I don't know how popular this will be, because it kind of defeats the
purpose of immutable infrastructure, but it could save us some work with
feed identity. And of course we also need to support the "normal" pod
scenario.
[1]: http://kubernetes.io/docs/user-guide/petset/
jk
New repo for various travis scripts?
by Joel Takvorian
Hi,
I'm just wondering if we should create a new git repo to store the different
files that are required for integration tests on the Hawkular clients
(Ruby, Java, now Dropwizard...). For now there are just 2 required files
afaik: ".travis/wait_for_services.rb" and the docker-compose file, but
there may be more in the future.
So, rather than storing a copy of each file in each client that uses the
docker-based integration tests, wouldn't it be better to store them in a new
repo and download them from the Travis script?
There's also a Maven install script that I picked up from Inventory and copied
to the Java client repo; that would fit in this scripts repo as well.
Joel
open shift agent - what to call it?
by John Mazzitelli
OK, folks, as much as I hate these "what should we name this thing?" threads, I have to do it.
We are at the point where we are going to start going full throttle on building out an agent that can monitor things on OpenShift (and Heiko wants to be able to monitor things outside of OpenShift - I'll let him chime in on what his use cases are to get a better feel for what he's thinking).
We need a name ASAP so we can create a repository under the Hawkular github namespace and put the code up there so people can start working on it. I would like to do this sooner rather than later - say, by Thursday???
Matt was thinking "hawkulark" (Hawk-U-Lark, Hawkular-K) because "k" == kubernetes.
I was thinking "GoHawk" (rhymes with "mohawk") because it is implemented in "Go"
I wasn't keen on relying on "kubernetes" as part of the name since it's really targeting OpenShift and even then doesn't have to run in OpenShift (back to the ideas Heiko has for this thing).
"GoHawk" doesn't seem to be a winner simply because what happens if we implement other hawkular feeds in Golang?
I'm assuming we'll come up with a name and agree to it collectively as a group - but I nominate Thomas H, Heiko R, and John D. as the committee to give the final approval/tie-breaking authority :) It won't be me. I suck at coming up with names.
--John Mazz
P.S. Who knows how to set up one of those online polls/surveys where you can enter your submissions and vote for other submissions?
gohawk - need Go code to write to H-Metrics
by John Mazzitelli
I am close to having GoHawk [1] be able to take flight :) He's still a fledgling, not quite ready to leave the nest yet... but close. I could even demo what I've got if some folks are interested in learning how GoHawk is configured (YAML!!!), seeing it react to changes in an OpenShift node environment on the fly, collecting Prometheus data, and mock-storing the metrics to H-Metrics.
BUT! Right now I'm at the point where I need code that writes data to Hawkular Metrics from a Go client. Anyone have code that shows how to do this? This isn't code that QUERIES H-Metrics for existing metric data - it is code that WRITES metrics to Hawkular Metrics. I already have an array of MetricHeader objects ([]metrics.MetricHeader) - I just need code that builds up the HTTP request and sends it (including any encryption/credential parameters/settings required?).
[1] https://github.com/jmazzitelli/gohawk
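(For the record, here is roughly the kind of thing I'm after - a self-contained sketch using plain net/http that just mirrors the JSON shape instead of reusing []metrics.MetricHeader. The /hawkular/metrics/gauges/raw path and the Hawkular-Tenant header are my guesses at the H-Metrics REST API, the server URL/tenant/metric values are made up, and credentials/TLS are left out.)

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// DataPoint and Metric mirror the JSON shape that Hawkular Metrics appears
// to expect when pushing gauge data points.
type DataPoint struct {
	Timestamp int64   `json:"timestamp"` // milliseconds since the epoch
	Value     float64 `json:"value"`
}

type Metric struct {
	ID   string      `json:"id"`
	Data []DataPoint `json:"data"`
}

// pushGauges POSTs a batch of gauge metrics to Hawkular Metrics.
// serverURL and tenant are deployment-specific assumptions.
func pushGauges(serverURL, tenant string, metrics []Metric) error {
	body, err := json.Marshal(metrics)
	if err != nil {
		return err
	}
	req, err := http.NewRequest(http.MethodPost,
		serverURL+"/hawkular/metrics/gauges/raw", bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Hawkular-Tenant", tenant)
	// Credentials/TLS settings would be configured here once secure access lands.

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("hawkular-metrics returned %s", resp.Status)
	}
	return nil
}

func main() {
	now := time.Now().UnixNano() / int64(time.Millisecond)
	m := []Metric{{ID: "pod-cpu-usage", Data: []DataPoint{{Timestamp: now, Value: 0.42}}}}
	if err := pushGauges("http://hawkular-metrics:8080", "my-tenant", m); err != nil {
		fmt.Println("push failed:", err)
	}
}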
Hawkular APM 0.11.0.Final now available
by Gary Brown
Hi
The Hawkular APM team are pleased to announce the release of version 0.11.0.Final.
The release details, including distributions, can be found here: https://github.com/hawkular/hawkular-apm/releases/tag/0.11.0.Final
The release includes:
* Improvements in the UI for displaying service dependency and trace instance information
* Zipkin integration now includes Kafka support (with JSON and Thrift encoded data)
* Initial implementation of a Java OpenTracing provider
* Integration with Hawkular Alerts to trigger alerts based on trace completion events
Blogs and videos will follow in the next couple of days to demonstrate these capabilities.
Regards
Gary
MiQ log/middleware.log
by Mike Thompson
So currently, we aren’t really logging anything to this middleware.log (as far as I can tell). Should we be? What is our policy around using this log (versus evm.log)?
This may be more important once we are in CFME and have to debug some customer issues.
Should all of our logging go to this log? Should some? If so, what?
Sorry, I'm just a bit confused about the purpose of this log since it shows up empty for me.
— Mike