[Hawkular-dev] Hawkular-metrics resource requirements questions

Daniel Miranda danielkza2 at gmail.com
Thu Dec 8 12:05:25 EST 2016


Forgot the links. The uncompressed storage estimates are actually for
NewTS, but they should not be much different for any other Cassandra-backed
TSDB without compression.

[1] https://www.adventuresinoss.com/2016/01/22/opennms-at-scale/
[2] https://prometheus.io/docs/operating/storage/

On Thu, Dec 8, 2016 at 3:00 PM, Daniel Miranda <danielkza2 at gmail.com>
wrote:

> Greetings,
>
> I'm looking for a distributed time-series database, preferably backed by
> Cassandra, to help monitor about 30 instances in AWS (with the prospect of
> quick growth in the future). Hawkular Metrics seems interesting due to its
> native clustering support and use of compression, since naively using
> Cassandra is quite inefficient - KairosDB seems to need about 12 bytes/sample
> [1], which is *way* higher than other systems with custom storage backends
> (Prometheus can do ~1 byte/sample [2]).
>
> I would like to know if there are any existing benchmarks for how
> Hawkular's ingestion and compression perform, and what kind of resources I
> would need to handle something like 100 samples/producer/second, hopefully
> with retention for 7 and 30 days (the latter with reduced precision).
>
> My planned setup is Collectd -> Riemann -> Hawkular (?) with Grafana for
> visualization.
>
> Thanks in advance,
> Daniel
>
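A back-of-envelope sketch of the storage side of the question above, using only the figures quoted in the thread (30 producers at 100 samples/producer/second, with ~12 bytes/sample uncompressed per [1] and ~1 byte/sample for a compressed custom backend per [2] - both per-sample costs are assumptions from those references, not Hawkular measurements):

```python
# Rough raw-storage estimate for the workload described in the thread.
# Assumed figures (from [1] and [2] above, NOT Hawkular benchmarks):
#   - 30 producers, 100 samples/producer/second
#   - ~12 bytes/sample uncompressed (KairosDB-like)
#   - ~1 byte/sample with aggressive compression (Prometheus-like)

PRODUCERS = 30
SAMPLES_PER_PRODUCER_PER_SEC = 100
SECONDS_PER_DAY = 86_400

samples_per_day = PRODUCERS * SAMPLES_PER_PRODUCER_PER_SEC * SECONDS_PER_DAY

def storage_gb(bytes_per_sample: float, retention_days: int) -> float:
    """Raw storage in GB for a given per-sample cost and retention window."""
    return samples_per_day * bytes_per_sample * retention_days / 1e9

for days in (7, 30):
    print(f"{days:>2}d retention: "
          f"{storage_gb(12, days):6.1f} GB uncompressed, "
          f"{storage_gb(1, days):5.1f} GB at ~1 B/sample")
```

At 3,000 samples/second this works out to roughly 22 GB (7 days) to 93 GB (30 days) uncompressed, versus about 2-8 GB at the compressed rate - which is why the per-sample cost dominates the sizing question.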