OpenShift agent - multiple identity for certs
by Gareth Healy
Currently it seems you can only provide the identity field in the agent's
ConfigMap. But what I actually want to do is provide it via the pod's
ConfigMap, i.e.:
data:
  hawkular-openshift-agent: |
    endpoints:
    - type: prometheus
      protocol: "https"
      port: 9779
      path: /metrics
      collection_interval_secs: 5
      metrics:
      - name: my-first-metric
        type: counter
    identity:
      cert_file: /var/run/secrets/client-crt/client.crt
      private_key_file: /var/run/secrets/client-key/client.key
The reason being, I might have multiple Prometheus endpoints that use
different certs.
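Something like this per-endpoint layout is what I have in mind (purely hypothetical — this is the shape I'm asking for, not something I believe the agent supports today; the secret paths and second port are made up):

```yaml
data:
  hawkular-openshift-agent: |
    endpoints:
    - type: prometheus
      protocol: "https"
      port: 9779
      path: /metrics
      # hypothetical: identity scoped to this endpoint only
      identity:
        cert_file: /var/run/secrets/endpoint-a/client.crt
        private_key_file: /var/run/secrets/endpoint-a/client.key
    - type: prometheus
      protocol: "https"
      port: 9780
      path: /metrics
      # a second endpoint using a different cert
      identity:
        cert_file: /var/run/secrets/endpoint-b/client.crt
        private_key_file: /var/run/secrets/endpoint-b/client.key
```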
Is that possible, or planned for the future?
Cheers.
7 years, 12 months
Hawkular Usage and other stats
by Kavin Kankeshwar
Hi,
I am evaluating Hawkular and it seems very interesting. I just wanted to
know if you have any usage stats and a sense of community involvement.
I see only a few GitHub stars etc., but based on the commit history the
project is being actively developed.
I just wanted to figure out whether Hawkular is production ready so I can
start using it at my workplace if needed. Obviously, once we start using it,
I am willing to submit patches etc. for any changes I need. But I just
wanted to check on the stats before I dive in. :)
Thanks!
Regards,
--
Kavin.Kankeshwar
8 years
srcdeps is apparently broken or at least not working on travis
by John Mazzitelli
I can build this locally fine. However, the srcdeps plugin fails to compile h-inventory when running on Travis.
See the tons of compile errors here:
https://travis-ci.org/hawkular/hawkular-agent#L391
We need to either:
a) fix what is wrong with srcdeps and Travis
b) release inventory (and other dependencies in order to build things further downstream) so we don't use srcdeps
At this point, my okhttp upgrade is dead in the water since I can't get the h-agent or h-services repos to go green.
8 years
HAWKULAR Jira cleanup
by Heiko W.Rupp
Hey,
I have cleaned up the HAWKULAR jira project and closed some outdated items.
Can you please all have a look at the items you have either opened or are
assigned to and see if they are still relevant and close them if not?
Thanks
Heiko
8 years
Move to WF 10.1.0?
by Jay Shaughnessy
I noticed that on Openshift we are running Hawkular Metrics on WildFly
10.1.0. It was upped from 10.0.0 several months ago due to a blocking
issue that had been fixed in EAP but not WF 10.0. I ran into a new
issue when trying to deploy Metrics master on OS Origin. It failed to
deploy on WF 10.1.0. I was able to solve the issue without a major
change, but it called out the fact that we are building Hawkular against
the WF 10.0.1 BOM and running itests against the 10.0.0 server.
Because OS is a primary target platform, I'm wondering if we should bump
the parent pom deps to the 10.1.0 BOM and server (as well as upping a
few related deps, like ISPN). As part of my investigation I did
this locally for parent pom, commons, alerting and metrics and did not
see any issues.
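Concretely, I'm thinking of something along these lines in the parent pom (the property name here is illustrative only — the actual property and BOM coordinates in our parent pom may differ):

```xml
<!-- hypothetical parent-pom fragment: bump the WildFly BOM/server version -->
<properties>
  <version.org.wildfly>10.1.0.Final</version.org.wildfly>
</properties>
```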
Thoughts?
8 years
srcdep changes?
by John Mazzitelli
I just found out that the problem I'm having with srcdeps in h-services is because this wasn't merged:
https://github.com/hawkular/hawkular-services/pull/104
Once I merged that locally, I was able to put the SRC-revision-### in the version string and it worked.
But this brings up a question: what changed? Can you not use srcdeps anymore by simply adding SRC-revision-### to the version string? Because I now see a complicated .mvn directory with extensions.xml and srcdeps.yaml configuration files... did we have srcdeps config files before?
I didn't really look closely at the srcdeps and mvn changes that went in, but I guess I should have. Is this now more complicated to use than simply changing a version string to include SRC-revision? Because that was really nice and easy to use (almost magical :-)
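For the record, the old usage was just a version-string change, something like this (the coordinates below are placeholders and ### stands for a git commit hash, as above — the point is only the version format):

```xml
<dependency>
  <!-- placeholder coordinates; only the version format matters here -->
  <groupId>org.hawkular.inventory</groupId>
  <artifactId>hawkular-inventory-api</artifactId>
  <!-- srcdeps resolves this by building the dependency from source
       at the given git revision instead of fetching a released binary -->
  <version>1.0.0.Final-SRC-revision-###</version>
</dependency>
```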
8 years
Hawkular-metrics resource requirements questions
by Daniel Miranda
Greetings,
I'm looking for a distributed time-series database, preferably backed by
Cassandra, to help monitor about 30 instances in AWS (with a perspective of
quick growth in the future). Hawkular Metrics seems interesting due to its
native clustering support and use of compression, since naively using
Cassandra is quite inefficient - KairosDB seems to need about 12B/sample
[1], which is *way* higher than other systems with custom storage backends
(Prometheus can do ~1B/sample [2]).
I would like to know if there are any existing benchmarks for how
Hawkular's ingestion and compression perform, and what kind of resources I
would need to handle something like 100 samples/producer/second, hopefully
with retention for 7 and 30 days (the latter with reduced precision).
My planned setup is Collectd -> Riemann -> Hawkular (?) with Grafana for
visualization.
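For scale, here is my own back-of-envelope arithmetic using the per-sample figures quoted above (this ignores replication, indexes, and overhead — it is not a Hawkular benchmark, just raw sample-payload sizing):

```python
def storage_gib(producers, samples_per_sec, bytes_per_sample, retention_days):
    """Raw storage in GiB: total samples over the retention window
    times an assumed per-sample cost, with no replication/overhead."""
    samples = producers * samples_per_sec * retention_days * 86_400
    return samples * bytes_per_sample / 2**30

# 30 producers at 100 samples/s, 30-day retention:
print(f"{storage_gib(30, 100, 12, 30):.1f} GiB")  # 12 B/sample -> ~86.9 GiB
print(f"{storage_gib(30, 100, 1, 30):.1f} GiB")   #  1 B/sample ->  ~7.2 GiB
```

So even a 12x difference in per-sample cost is the difference between tens of GiB and a handful — which is why the compression story matters to me.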
Thanks in advance,
Daniel
8 years