From cacosta at redhat.com Tue Aug 1 22:00:48 2017 From: cacosta at redhat.com (Caina Costa) Date: Tue, 1 Aug 2017 23:00:48 -0300 Subject: [Hawkular-dev] Dynamic UI PoC Presentation In-Reply-To: <5206C9B4-3646-4A74-8D8A-901057910DCB@redhat.com> References: <5206C9B4-3646-4A74-8D8A-901057910DCB@redhat.com> Message-ID: First I must preface this with the information that I do not know how this is going to be implemented on the Hawkular-Inventory side, and am explaining from the perspective of the consumer of that data on ManageIQ. The idea is to explain how the Dynamic/Generic UI PoC might model Karaf servers when Hawkular starts reporting on them; for that reason I'm also going to walk through my thought process instead of just presenting the final solution. This is more about how we could model the data internally than how to fetch and process that data before presenting it to the API. Because of this, I'm not going to tackle Views, because they are only supposed to be representations of the Entities. With the entities defined, we can make any kind of representations we want, and that depends more on what the UI requires than on any internal representation of the data in objects. Currently, in the Proof of Concept, we have this structure representing all the reported entities:

* Entity (base class, never matched)
  * MiddlewareResource
  * MiddlewareServer
    * WildFlyServer
  * OperatingSystem
  * JavaRuntime

Without implementing anything, the structure presented will match Karaf servers by default, by matching them to MiddlewareServer, and the resources they expose are matched to MiddlewareResources. OperatingSystem and JavaRuntime should be the same. Going forward, we need to map the servers, which we do by subclassing MiddlewareServer, and we have KarafServer. The next step is the resources, which we can implement like this:

* CamelContext < MiddlewareResource
  * Routes (CamelRoute)
  * Consumers (CamelConsumer)
  * State
* CamelRoute < MiddlewareResource
  * URI
  * State
* CamelConsumer < MiddlewareResource
  * URI
  * Route
  * State
* CamelProcessor < MiddlewareResource
  * No Data?

Questions:

1. How is this supposed to be shown on the UI?
2. How is the relationship between context and routes/consumers going to be presented by Hawkular?
3. How is the relationship between consumers and routes going to be presented by Hawkular?
4. For JVM memory collection, Mazz said that the information could be obtained through JMX directly; how would that be presented on the inventory API? Currently, for WildFly, this appears in the "JavaRuntime" resource; is this going to be the same for Karaf servers?
5. Will the metrics for CamelRoutes, CamelConsumers and CamelProcessors be presented directly on those resources, or should their data be aggregated in some way on the KarafServer?

On Mon, Jul 31, 2017 at 1:08 PM, Heiko Rupp wrote: > Hey Caina, > > > On 20 Jul 2017, at 22:29, Caina Costa wrote: > > This is another update to the proof of concept, and today we are making big >> improvements to cover other parts of the representation that we did not >> cover yet: fetching data from Hawkular, and turning that into >> entities/views. >> > > It's great that you are making progress. There are still some > missing pieces for me, but perhaps just because I didn't > try the code. > Before I go on, let me put a diagram in: > > > In Blue are pieces that should not need any modification if > a new type of Server is added. 
In the picture I have three servers > of Types A and B, which have their configuration for the > resources they represent and report and send metrics about. > They would now also get additional data (meta data) about > the layout of those resources on the MiQ UI (Def A and B). > So if I modify Def A, the way the UI is laid out in Layout A > should change. > > There are now some extra definitions that I want to briefly > explain: > In the current setup of the agent and inventory, each server of > a given type (Type B) sends its own Inventory report and would > also send its own version of the Definition, hence Def B and Def B'. > > What we could do is to not use Def* from the agent, but create > (Hawkular-)server side definitions SDef* for that purpose that > would then direct the layout as Def A and the others do. > But that is a bit of a secondary concern at the moment. > > I think it would be good if you could implement your current > status end-to-end with a new type of server (something Fusy) > including the definition file on the Hawkular side and an actual rendered > UI so that we can continue from there. > > Thanks > Heiko > > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20170801/07b5900d/attachment.html From lponce at redhat.com Wed Aug 2 11:01:44 2017 From: lponce at redhat.com (Lucas Ponce) Date: Wed, 2 Aug 2017 17:01:44 +0200 Subject: [Hawkular-dev] Fwd: [jboss-community #455658] Problem with Nexus for a Hawkular artifact In-Reply-To: References: Message-ID: FYI ---------- Forwarded message ---------- From: dhladky at redhat.com via RT Date: Wed, Aug 2, 2017 at 5:00 PM Subject: [jboss-community #455658] Problem with Nexus for a Hawkular artifact To: lponce at redhat.com Ticket #455658 It can be accessed online at: https://engineering.redhat.com/rt/Ticket/Display.html?id=455658 On Tue Aug 01 10:07:12 2017, dhladky at redhat.com wrote: > Hi, > > I cannot tell anything about the Apache repository, however regarding Maven > Central I created this ticket: > https://issues.sonatype.org/browse/MVNCENTRAL-2567 Response from Sonatype: One of our sync jobs was hung and never recovered. We terminated the job and restarted, and I'm already seeing various 0.9.7 artifacts on Central. We're updating our jobs to ensure that they're not hung indefinitely. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20170802/7024fee7/attachment.html From jmartine at redhat.com Thu Aug 3 11:53:59 2017 From: jmartine at redhat.com (Josejulio Martinez Magana) Date: Thu, 3 Aug 2017 10:53:59 -0500 Subject: [Hawkular-dev] Hawkular Services 0.39 released Message-ID: Hello, Hawkular-services 0.39 was released yesterday [1]. Changes include: - Agent version 1.0.0.CR6 - An event is forwarded to subscribed clients (like ManageIQ) when a WildFly Server changes its availability. Docker images have been pushed to hawkular/hawkular-services. [1] https://github.com/hawkular/hawkular-services/releases/tag/0.39.0.Final -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20170803/b02d8393/attachment.html From hrupp at redhat.com Tue Aug 8 08:58:17 2017 From: hrupp at redhat.com (Heiko Rupp) Date: Tue, 08 Aug 2017 14:58:17 +0200 Subject: [Hawkular-dev] Recording of Pallavi's presentation available Message-ID: <70CC9DAC-11E7-4F16-9AB4-7C0F3D2888B2@redhat.com> Hey, Pallavi gave a presentation about her work on the Hawkular Android Client. This is now available for replay: https://youtu.be/clfuTdISb4g From jtakvori at redhat.com Wed Aug 9 02:41:50 2017 From: jtakvori at redhat.com (Joel Takvorian) Date: Wed, 9 Aug 2017 08:41:50 +0200 Subject: [Hawkular-dev] A convention for metrics (short) name Message-ID: Hi, What would you say about having a convention of a special tag (let's say "_name") that would point to a (short) intelligible name for a metric? That convention wouldn't be mandatory in any case of course, but the UI could check if that tag exists and use that name, instead of the full metric id, for better display. WDYT? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20170809/4123e543/attachment.html From hrupp at redhat.com Wed Aug 9 04:53:54 2017 From: hrupp at redhat.com (Heiko Rupp) Date: Wed, 09 Aug 2017 10:53:54 +0200 Subject: [Hawkular-dev] A convention for metrics (short) name In-Reply-To: References: Message-ID: <845842BE-1243-44D1-9D37-26BCD882F8AD@redhat.com> On 9 Aug 2017, at 8:41, Joel Takvorian wrote: > What would you say about having a convention of a special tag (let's > say "_name") that would point to a (short) intelligible name for a > metric. That convention wouldn't be mandatory in any case of course, > but the UI could check if that tag exists and use that name, instead > of the full metric id, for better display. Makes sense to me. We had a displayName in RHQ. From theute at redhat.com Wed Aug 9 05:27:01 2017 From: theute at redhat.com (Thomas Heute) Date: Wed, 9 Aug 2017 11:27:01 +0200 Subject: [Hawkular-dev] A convention for metrics (short) name In-Reply-To: <845842BE-1243-44D1-9D37-26BCD882F8AD@redhat.com> References: <845842BE-1243-44D1-9D37-26BCD882F8AD@redhat.com> Message-ID: We definitely need to solve that usability issue; I definitely experienced it in Grafana. For Grafana in particular, I know the Prometheus driver has a "label name" that is defined in Grafana and can be templatized with metrics labels. It's purely on the Grafana side and only works for Grafana. Having a convention is a quick solution; do we see it used by Metrics internals, or only by agents and UIs? Does it need to support templating with other tags, or do we expect the client to be smarter about tag value changes? I would also suggest displayName (or display_name or similar) rather than just name; I think it's clearer that it's for the UI and can change. On Wed, Aug 9, 2017 at 10:53 AM, Heiko Rupp wrote: > On 9 Aug 2017, at 8:41, Joel Takvorian wrote: > > > What would you say about having a convention of a special tag (let's > > say "_name") that would point to a (short) intelligible name for a > > metric. That convention wouldn't be mandatory in any case of course, > > but the UI could check if that tag exists and use that name, instead > > of the full metric id, for better display. > > Makes sense to me. > We had a displayName in RHQ. 
> _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20170809/41fdba34/attachment.html From jshaughn at redhat.com Wed Aug 9 09:32:15 2017 From: jshaughn at redhat.com (Jay Shaughnessy) Date: Wed, 9 Aug 2017 09:32:15 -0400 Subject: [Hawkular-dev] A convention for metrics (short) name In-Reply-To: References: <845842BE-1243-44D1-9D37-26BCD882F8AD@redhat.com> Message-ID: <677d9f5e-ac2c-4e91-d065-51043de76c1e@redhat.com> I'm fairly sure the Agent already has a "name" tag or something similar. We definitely need a convention for this, if not an actual requirement. The metricName is way too tedious, our use of tags should work towards Prometheus's use of labels. On 8/9/2017 5:27 AM, Thomas Heute wrote: > We definitely need to solve that usability issue, I definitely > experienced it in Grafana. > > For Grafana in particular, I know Prometheus driver has a "label name" > that is defined in Grafana and can be templatized with metrics labels. > It's purely on Grafana side and only works for Grafana. > > Having a convention is a quick solution, do we see it used by Metrics > internals or only by agents and UIs ? Does it need to support > templating with other tags or do we expect the client to be smarter to > tags value change ? > > I would also suggest displayName (or display_name or else) rather than > just name, I think it's clearer that it's for UI and can change. > > > > > On Wed, Aug 9, 2017 at 10:53 AM, Heiko Rupp > wrote: > > On 9 Aug 2017, at 8:41, Joel Takvorian wrote: > > > What would you say about having a convention of a special tag (let's > > say "_name") that would point to a (short) intelligible name for a > > metric. That convention wouldn't be mandatory in any case of course, > > but the UI could check if that tag exists and use that name, instead > > of the full metric id, for better display. > > Makes sense to me. > We had a displayName in RHQ. > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20170809/0fac88fc/attachment.html From jtakvori at redhat.com Wed Aug 9 09:56:13 2017 From: jtakvori at redhat.com (Joel Takvorian) Date: Wed, 9 Aug 2017 15:56:13 +0200 Subject: [Hawkular-dev] A convention for metrics (short) name In-Reply-To: <677d9f5e-ac2c-4e91-d065-51043de76c1e@redhat.com> References: <845842BE-1243-44D1-9D37-26BCD882F8AD@redhat.com> <677d9f5e-ac2c-4e91-d065-51043de76c1e@redhat.com> Message-ID: I think going toward prom's label, ie. each possible value of a label IS actually a different metric, would really be a huge change for us and on the other hand we would miss our current "tags as meta-data" feature, that prom is lacking afaik... But maybe that was not what you meant? I know "_name" is like a reserved label in prometheus but for them it has really a lot of implications. 
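To make the difference concrete, here is a minimal sketch (the Hawkular tag name and query value are illustrative assumptions, not an agreed convention): in Prometheus the labels are part of the series identity, so each label combination is its own time series, while in Hawkular Metrics a single metric id keeps its identity and tags stay queryable, editable meta-data:

# Prometheus: each label combination below is a distinct time series
products_in_inventory{product="widget"}
products_in_inventory{product="gadget"}

# Hawkular Metrics: one metric id, tags attached as meta-data;
# a hypothetical tag filter via the REST API
curl -u user:password -H "Hawkular-Tenant: hawkular" \
  "http://localhost:8080/hawkular/metrics/gauges?tags=displayName:WidgetStock"
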
On Wed, Aug 9, 2017 at 3:32 PM, Jay Shaughnessy wrote: > > > I'm fairly sure the Agent already has a "name" tag or something similar. > We definitely need a convention for this, if not an actual requirement. > The metricName is way too tedious, our use of tags should work towards > Prometheus's use of labels. > > > On 8/9/2017 5:27 AM, Thomas Heute wrote: > > We definitely need to solve that usability issue, I definitely experienced > it in Grafana. > > For Grafana in particular, I know Prometheus driver has a "label name" > that is defined in Grafana and can be templatized with metrics labels. It's > purely on Grafana side and only works for Grafana. > > Having a convention is a quick solution, do we see it used by Metrics > internals or only by agents and UIs ? Does it need to support templating > with other tags or do we expect the client to be smarter to tags value > change ? > > I would also suggest displayName (or display_name or else) rather than > just name, I think it's clearer that it's for UI and can change. > > > > > On Wed, Aug 9, 2017 at 10:53 AM, Heiko Rupp wrote: > >> On 9 Aug 2017, at 8:41, Joel Takvorian wrote: >> >> > What would you say about having a convention of a special tag (let's >> > say "_name") that would point to a (short) intelligible name for a >> > metric. That convention wouldn't be mandatory in any case of course, >> > but the UI could check if that tag exists and use that name, instead >> > of the full metric id, for better display. >> >> Makes sense to me. >> We had a displayName in RHQ. >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> > > > > _______________________________________________ > hawkular-dev mailing list hawkular-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20170809/d8b2879d/attachment.html From hrupp at redhat.com Wed Aug 9 09:56:34 2017 From: hrupp at redhat.com (Heiko Rupp) Date: Wed, 09 Aug 2017 15:56:34 +0200 Subject: [Hawkular-dev] A convention for metrics (short) name In-Reply-To: <677d9f5e-ac2c-4e91-d065-51043de76c1e@redhat.com> References: <845842BE-1243-44D1-9D37-26BCD882F8AD@redhat.com> <677d9f5e-ac2c-4e91-d065-51043de76c1e@redhat.com> Message-ID: On 9 Aug 2017, at 15:32, Jay Shaughnessy wrote: > I'm fairly sure the Agent already has a "name" tag or something > similar. We definitely need a convention for this, if not an actual > requirement. The metricName is way too tedious, our use of tags > should work towards Prometheus's use of labels. I think Prometheus has a special label key of __name__ (double underscore each) to get the raw metric name.
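For example, with a hypothetical metric name, the two Prometheus selectors below are equivalent; the second just spells the metric name out as the reserved label:

http_requests_total{job="api"}
{__name__="http_requests_total", job="api"}
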
From mwringe at redhat.com Wed Aug 9 10:02:54 2017 From: mwringe at redhat.com (Matthew Wringe) Date: Wed, 9 Aug 2017 10:02:54 -0400 Subject: [Hawkular-dev] A convention for metrics (short) name In-Reply-To: <677d9f5e-ac2c-4e91-d065-51043de76c1e@redhat.com> References: <845842BE-1243-44D1-9D37-26BCD882F8AD@redhat.com> <677d9f5e-ac2c-4e91-d065-51043de76c1e@redhat.com> Message-ID: On Wed, Aug 9, 2017 at 9:32 AM, Jay Shaughnessy wrote: > > I'm fairly sure the Agent already has a "name" tag or something similar. > Yeah, the agent had a name, unit, and I think maybe something else. We used those to display the custom metrics a bit more nicely in the OpenShift console. > We definitely need a convention for this, if not an actual requirement. > The metricName is way too tedious, our use of tags should work towards > Prometheus's use of labels. > > > On 8/9/2017 5:27 AM, Thomas Heute wrote: > > We definitely need to solve that usability issue, I definitely experienced > it in Grafana. > > For Grafana in particular, I know Prometheus driver has a "label name" > that is defined in Grafana and can be templatized with metrics labels. It's > purely on Grafana side and only works for Grafana. > > Having a convention is a quick solution, do we see it used by Metrics > internals or only by agents and UIs ? Does it need to support templating > with other tags or do we expect the client to be smarter to tags value > change ? > > I would also suggest displayName (or display_name or else) rather than > just name, I think it's clearer that it's for UI and can change. > > > > > On Wed, Aug 9, 2017 at 10:53 AM, Heiko Rupp wrote: > >> On 9 Aug 2017, at 8:41, Joel Takvorian wrote: >> >> > What would you say about having a convention of a special tag (let's >> > say "_name") that would point to a (short) intelligible name for a >> > metric. That convention wouldn't be mandatory in any case of course, >> > but the UI could check if that tag exists and use that name, instead >> > of the full metric id, for better display. >> >> Makes sense to me. >> We had a displayName in RHQ. >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> > > > > _______________________________________________ > hawkular-dev mailing list hawkular-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20170809/16c2fab0/attachment-0001.html From jshaughn at redhat.com Wed Aug 9 10:27:49 2017 From: jshaughn at redhat.com (Jay Shaughnessy) Date: Wed, 9 Aug 2017 10:27:49 -0400 Subject: [Hawkular-dev] A convention for metrics (short) name In-Reply-To: References: <845842BE-1243-44D1-9D37-26BCD882F8AD@redhat.com> <677d9f5e-ac2c-4e91-d065-51043de76c1e@redhat.com> Message-ID: No, that is not what I meant. All I am saying is yes, I agree that we should have a convention for a simple name tag, but also that adding many tags representing the "path components" of the metrics' owning resource is a good idea. Prometheus' query power comes from its ability to quickly slice and dice TS based on labels. Our model is not the same but there can still be a lot of power in tagQuery given robust tagging. On 8/9/2017 9:56 AM, Joel Takvorian wrote: > I think going toward prom's label, ie. each possible value of a label > IS actually a different metric, would really be a huge change for us > and on the other hand we would miss our current "tags as meta-data" > feature, that prom is lacking afaik... > > But maybe that was not what you meant? I know "_name" is like a > reserved label in prometheus but for them it has really a lot of > implications. 
> > > On Wed, Aug 9, 2017 at 3:32 PM, Jay Shaughnessy > wrote: > > > I'm fairly sure the Agent already has a "name" tag or something > similar. We definitely need a convention for this, if not an > actual requirement. The metricName is way too tedious, our use of > tags should work towards Prometheus's use of labels. > > > On 8/9/2017 5:27 AM, Thomas Heute wrote: >> We definitely need to solve that usability issue, I definitely >> experienced it in Grafana. >> >> For Grafana in particular, I know Prometheus driver has a "label >> name" that is defined in Grafana and can be templatized with >> metrics labels. It's purely on Grafana side and only works for >> Grafana. >> >> Having a convention is a quick solution, do we see it used by >> Metrics internals or only by agents and UIs ? Does it need to >> support templating with other tags or do we expect the client to >> be smarter to tags value change ? >> >> I would also suggest displayName (or display_name or else) rather >> than just name, I think it's clearer that it's for UI and can change. >> >> >> >> >> On Wed, Aug 9, 2017 at 10:53 AM, Heiko Rupp > > wrote: >> >> On 9 Aug 2017, at 8:41, Joel Takvorian wrote: >> >> > What would you say about having a convention of a special >> tag (let's >> > say "_name") that would point to a (short) intelligible >> name for a >> > metric. That convention wouldn't be mandatory in any case >> of course, >> > but the UI could check if that tag exists and use that >> name, instead >> > of the full metric id, for better display. >> >> Makes sense to me. >> We had a displayName in RHQ. >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> >> >> >> >> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20170809/058b7683/attachment.html From mazz at redhat.com Fri Aug 11 16:07:44 2017 From: mazz at redhat.com (John Mazzitelli) Date: Fri, 11 Aug 2017 16:07:44 -0400 (EDT) Subject: [Hawkular-dev] hawkular-alerts integration with prometheus, elasticsearch, kafka In-Reply-To: <1816963295.70741703.1502481944858.JavaMail.zimbra@redhat.com> Message-ID: <1765686292.70743691.1502482064875.JavaMail.zimbra@redhat.com> I wrote a blog to demo hAlerts and its integration with Prometheus, ElasticSearch, and Kafka, showing how we can fire alerts and notifications based on things happening with Prometheus metrics, ElasticSearch logs, and Kafka stream data. 
I need it reviewed and merged for it to be published: https://github.com/hawkular/hawkular.github.io/pull/334 The demo video is here: https://youtu.be/mM1mwJneKO4 From theute at redhat.com Fri Aug 11 17:23:08 2017 From: theute at redhat.com (Thomas Heute) Date: Fri, 11 Aug 2017 21:23:08 +0000 Subject: [Hawkular-dev] New Hawkular Blog Post: Hawkular Alerts with Prometheus, ElasticSearch, Kafka Message-ID: <598e203cbe3ec_2a2f106b31844945@mini-queue-03.resque.ife.mail> New Hawkular blog post from noreply at hawkular.org (John Mazzitelli): http://ift.tt/2vYe1WG Federated Alerts Hawkular Alerts aims to be a federated alerting system. That is to say, it can fire alerts and send notifications that are triggered by data coming from a number of third-party external systems. Thus, Hawkular Alerts is more than just an alerting system for use with Hawkular Metrics. In fact, Hawkular Alerts can be used independently of Hawkular Metrics. This means you do not even have to be using Hawkular Metrics to take advantage of the functionality provided by Hawkular Alerts. This is a key differentiator between Hawkular Alerts and other alerting systems. Most alerting systems only alert on data coming from their respective storage systems (e.g. the Prometheus Alert Engine alerts only on Prometheus data). Hawkular Alerts, on the other hand, can trigger alerts based on data from various systems. Alerts vs. Events Before we begin, a quick clarification is in order. When it is said that Hawkular Alerts fires an "alert" it means some data came into Hawkular Alerts that matched some conditions which triggered the creation of an alert in the Hawkular Alerts backend storage (which can then trigger additional actions such as sending emails or calling a webhook). An "alert" typically refers to a problem that has been detected, and someone should take action to fix it. An alert has a lifecycle attached to it - alerts are opened, then acknowledged by some user who will hopefully fix the problem, then resolved when the problem can be considered closed. However, there can be conditions that occur that do not represent problems but nevertheless are events you want recorded. There is no lifecycle associated with events and no additional actions are triggered by events, but "events" are fired by Hawkular Alerts in the same general manner as "alerts" are. In this document, when it is said that Hawkular Alerts can fire "alerts" based on data coming from external third-party systems such as Prometheus, ElasticSearch, and Kafka, this also means events can be fired as well as alerts. What this means is you can record any event (not just a "problem", aka "alert") that can be gleaned from this data coming from external third-party systems. See alerting philosophy for more. Demo There is a recorded demo found here that will illustrate what this document is describing. After you read this document, you should watch the demo to gain further clarity on what is being explained. The demo is the multiple-sources example, which you can run yourself, found here (note: at the time of writing, this example is only found in the next branch, to be merged into master soon). Prometheus Hawkular Alerts can take the results of Prometheus metric queries and use the queried data for triggers that can fire alerts. 
This Hawkular Alerts trigger will fire an alert (and send an email) when a Prometheus metric indicates our store's inventory of widgets is consistently low (as defined by the Prometheus query you see in the "expression" field of the condition):

"trigger":{
  "id": "low-stock-prometheus-trigger",
  "name": "Low Stock",
  "description": "The number of widgets in stock is consistently low.",
  "severity": "MEDIUM",
  "enabled": true,
  "tags": {
    "prometheus": "Prometheus"
  },
  "actions":[
    {
      "actionPlugin": "email",
      "actionId": "email-notify-owner"
    }
  ]
},
"conditions":[
  {
    "type": "EXTERNAL",
    "alerterId": "prometheus",
    "dataId": "prometheus-dataid",
    "expression": "rate(products_in_inventory{product=\"widget\"}[30s])<2"
  }
]

Integration with Prometheus Alert Engine As a side note, though not demonstrated in the example, Hawkular Alerts also has an integration with Prometheus' own Alert Engine. This means the alerts generated by Prometheus itself can be forwarded to Hawkular Alerts which can, in turn, use them for additional processing, perhaps with data that is unavailable to Prometheus, to tell Hawkular Alerts to fire other alerts. For example, Hawkular Alerts can take Prometheus alerts as input and feed them back into other conditions that trigger on the Prometheus alert along with ElasticSearch logs. ElasticSearch Hawkular Alerts can examine logs stored in ElasticSearch and trigger alerts based on patterns that match within the ElasticSearch log messages. This Hawkular Alerts trigger will fire an alert (and send an email) when ElasticSearch logs indicate sales are being lost due to inventory being out of stock of items (as defined by the condition which looks for a log category of "FATAL", which happens to mean a lost sale in the case of the store's logs). Notice dampening is enabled on this trigger - this alert will only fire after the lost-sale condition evaluates to true 3 times in a row.

"trigger":{
  "id": "lost-sale-elasticsearch-trigger",
  "name": "Lost Sale",
  "description": "A sale was lost due to inventory out of stock.",
  "severity": "CRITICAL",
  "enabled": true,
  "tags": {
    "Elasticsearch": "Localhost instance"
  },
  "context": {
    "timestamp": "@timestamp",
    "filter": "{\"match\":{\"category\":\"inventory\"}}",
    "interval": "10s",
    "index": "store",
    "mapping": "level:category,@timestamp:ctime,message:text,category:dataId,index:tags"
  },
  "actions":[
    {
      "actionPlugin": "email",
      "actionId": "email-notify-owner"
    }
  ]
},
"dampenings": [
  {
    "triggerMode": "FIRING",
    "type":"STRICT",
    "evalTrueSetting": 3
  }
],
"conditions":[
  {
    "type": "EVENT",
    "dataId": "inventory",
    "expression": "category == 'FATAL'"
  }
]

Kafka Hawkular Alerts can examine data retrieved from Kafka message streams and trigger alerts based on that Kafka data. This Hawkular Alerts trigger will fire an alert when data over a Kafka topic indicates a large purchase was made to fill the store's inventory (as defined by the condition which evaluates to true when any number over 17 is received on the Kafka topic):

"trigger":{
  "id": "large-inventory-purchase-kafka-trigger",
  "name": "Large Inventory Purchase",
  "description": "A large purchase was made to restock inventory.",
  "severity": "LOW",
  "enabled": true,
  "tags": {
    "Kafka": "Localhost instance"
  },
  "context": {
    "topic": "store",
    "kafka.bootstrap.servers": "localhost:9092",
    "kafka.group.id": "hawkular-alerting"
  },
  "actions":[ ]
},
"conditions":[
  {
    "type": "THRESHOLD",
    "dataId": "store",
    "operator": "GT",
    "threshold": 17
  }
]
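A quick way to exercise this trigger by hand, assuming a local broker as configured in the trigger context above, and assuming the Kafka alerter treats the raw message payload as the numeric datum for dataId "store", is to publish a number greater than 17 on the topic using the stock Kafka console producer:

# publish one test message on the "store" topic of the local broker
echo "18" | bin/kafka-console-producer.sh --broker-list localhost:9092 --topic store

Each such message should make the THRESHOLD condition evaluate to true and fire the alert.
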
But, Wait! There's More! The above only mentions the different ways Hawkular Alerts retrieves data for use in determining what alerts to fire. What is not covered here is the fact that Hawkular Alerts can stream data in the other direction as well - Hawkular Alerts can send alert and event data to things like an ElasticSearch server or a Kafka broker. There are additional examples (mentioned below) that demonstrate this capability. The point is Hawkular Alerts should be seen as a common alerting engine that can be shared by multiple third-party systems and can be used as both a consumer and a producer - as a consumer of the data from external third-party systems (which is used to fire alerts and events) and as a producer to send notifications of alerts and events to external third-party systems. More Examples Take a look at the Hawkular Alerts examples for more examples of using external systems as data sources for triggering alerts. (note: at the time of writing, some examples are currently in the next branch, such as the Kafka ones). from Hawkular Blog -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20170811/4ac25bc0/attachment-0001.html From jstickle at redhat.com Mon Aug 14 14:17:59 2017 From: jstickle at redhat.com (Julie Stickler) Date: Mon, 14 Aug 2017 14:17:59 -0400 Subject: [Hawkular-dev] hawkular.org - Redux Initiative In-Reply-To: References: <2A025908-6659-4FAA-8736-EB718765F7E5@redhat.com> Message-ID: Another couple of good documentation articles to think about. Fixing docs one README at a time https://opensource.com/open-organization/17/6/documentation-feedmereadmes-project Designing web pages for mobile (Google is going to start using the mobile version for search rankings) http://searchengineland.com/designing-content-mobile-first-index-280071 JULIE STICKLER TECHNICAL WRITER Red Hat Westford - 3S353 jstickle at redhat.com T: 1(978)-399-0463-(812-0463) IRC: jstickler On Thu, Jul 6, 2017 at 7:51 PM, John Sanda wrote: > > > On Thu, Jul 6, 2017 at 3:34 PM, Stefan Negrea wrote: >> >> >> Thank you, >> Stefan Negrea >> >> >> On Tue, Jul 4, 2017 at 11:55 AM, Edgar Hernández >> wrote: >>> >>> On 07/04/2017 02:41 AM, Thomas Heute wrote: >>> >>> Agreed, Hawkular Services needs more love, not to disappear. >>> It may be renamed to "Hawkular ManageIQ Provider" on the website if that >>> helps with clarity, but shouldn't disappear. >>> >>> Also there is no quickstart anymore, it's very rough for people. >>> >>> >>> >>> What!? This really comes as a surprise for me. All this time my thinking >>> was that "services" was the thing bundling all parts together and, because >>> of the quickstarts, I believed that "services" was the preferred way to get >>> Hawkular. This idea is further supported by exploring the DockerHub, where >>> only images of Hawkular-services are available. >>> >>> Also, right after "Inventory" got removed, "services" and "alerts" were >>> somewhat redundant for me because "metrics" is included with "alerts" (and >>> "services" also includes both). And, now, starting with "metrics 0.27.0 >>> " I >>> can see that metrics also includes alerts. So, the thing looks more >>> redundant now. But with time I learned that "services" provides the >>> operations API, used by ManageIQ. I don't know if there are other features >>> provided by "services". But those extra features are not documented in the >>> website and people new to Hawkular won't realize what's the idea behind >>> "services". 
>>> >>> - Edgar. >>> >> >> Hawkular Metrics started bundling Alerting in October of last year and >> there are no plans to stop bundling it. >> > > "I don't want to sidetrack the discussion too much, but if we are going > to target kubernetes/openshift as the base platform, then maybe it does > make sense to consider changing the deployment. We should probably have > separate containers for metrics and for alerts." > > > >> >> We need to make a differentiation between the components (Alerting, >> Metrics, Tracing) and their integration into other projects. Hawkular >> Services is a specific integration for ManageIQ. We need to adjust the >> content around the core components to be able to build a community around >> the core components. The integrations are just delivery methods for these >> components. The integration of Hawkular Metrics and Alerting (there is a >> special bundle for that just like Hawkular Services) in OpenShift Origin >> has been used by a lot more people than Hawkular Services ever was. Yet it is >> nowhere featured or discussed. We need to refocus the front page on the >> core components. Hawkular Services or Hawkular Metrics & Alerting in OpenShift >> Origin (or other future integrations) should be mentioned but in another >> section. >> >>> >>> >>> _______________________________________________ >>> hawkular-dev mailing list >>> hawkular-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>> >>> >> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> >> > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20170814/2b923250/attachment.html From theute at redhat.com Mon Aug 14 17:21:44 2017 From: theute at redhat.com (Thomas Heute) Date: Mon, 14 Aug 2017 21:21:44 +0000 Subject: [Hawkular-dev] New Hawkular Blog Post: Advanced Behaviour Detection with Nelson Rules Message-ID: <5992146813f2a_481a82f320624f6@mini-queue-02.resque.ife.mail> New Hawkular blog post from noreply at hawkular.org (Lucas Ponce): http://ift.tt/2uIqytE Modeling Conditions Hawkular Alerting offers several types of Conditions for defining Triggers. Most of the Conditions deal with numeric data but String, Availability and Event data are also supported. Modeling scenarios for detecting behaviours is highly dependent on the nature of the Domain being represented. The Domain may only require simple numeric threshold conditions to efficiently detect unexpected situations. In other domains, it can be non-trivial to identify unusual metric variations that may lead to a problem. Simple thresholds are not expressive enough to detect metric patterns or trends that can identify potential problems. Nelson Rules Hawkular Alerting supports Conditions based on Nelson Rules to enable advanced detection on Numeric metrics. These rules are based on the mean and the standard deviation of the samples and offer additional techniques for modeling complex scenarios. For example, ...
"trigger":{ "id": "nelson-rule-trigger", "name": "Nelson Rule Trigger", "description": "An example Trigger that uses Nelson Rules Conditions.", "enabled": true, "actions":[] }, "conditions":[ { "type": "NELSON", (1) "dataId": "metric-data-id", "activeRules": ["Rule1","Rule2"], (2) "sampleSize": 75 (3) } ] ... Mark this Condition as a NelsonRule Define the Nelson Rules to activate (Rule1, Rule2, ??, Rule8) for metric-data-id (all rules are activated by default) Define the sampleSize (by default this value is set to 50) Each rule represents a specific pattern as described below: Rule 1 One sample is grossly out of control. Rule 2 Some prolonged error has been detected. Rule 3 An unusual trend has been detected. Rule 4 The oscillation of a metric is beyond an expected amount of noise. Note that the rule is concerned with directionality only. The position of the mean and the size of the standard deviation have no bearing. Rule 5 There is a medium tendency for samples to be mediumly out of control. The side of the mean for the third point is unspecified. Rule 6 There is a strong tendency for samples to be out of control. Rule 7 A greater variation would be expected. Rule 7 Jumping from above to below whilst missing the first standard deviation band is rarely random. Conclusion Applying Nelson Rules in our scenario can help to detect potential "out of control" situations. But as discussed, modeling scenarios are highly dependent of the nature of the Domain; applying Nelson Rules is a useful tool to help identify a problem. Although, the alerts are predictive and a Domain?s Analyst may need to evaluate the quality of the model. from Hawkular Blog -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20170814/bee4a9da/attachment.html From theute at redhat.com Mon Aug 21 06:18:24 2017 From: theute at redhat.com (Thomas Heute) Date: Mon, 21 Aug 2017 10:18:24 +0000 Subject: [Hawkular-dev] New Hawkular Blog Post: Canary Deployment in OpenShift using OpenTracing based Application Metrics Message-ID: <599ab3701e8ed_11f9176b3309491c@mini-queue-03.resque.ife.mail> New Hawkular blog post from noreply at hawkular.org (Gary Brown): http://ift.tt/2vWrA6Y In a previous article we showed how OpenTracing instrumentation can be used to collect application metrics, in addition to (but independent from) reported tracing data, from services deployed within a cloud environment (e.g. Kubernetes or OpenShift). In this article we will show how this information can be used to aid a Canary deployment strategy within OpenShift. Figure 1: Error ratio per service and version The updated example application We will be using the same example as used in the previous article. However since writing that article, the configuration of the tracer and Prometheus metrics support has been simplified. There is now no explicit configuration of either, with only some auto configuration of MetricLabel beans to identify some custom labels to be added to the Prometheus metrics, e.g. 
Metrics configuration used in both services:

@Configuration
public class MetricsConfiguration {
    @Bean
    public MetricLabel transactionLabel() {
        return new BaggageMetricLabel("transaction", "n/a"); (1)
    }

    @Bean
    public MetricLabel versionLabel() {
        return new ConstMetricLabel("version", System.getenv("VERSION")); (2)
    }
}

(1) This metric label identifies the business transaction associated with the metrics, which can be used to isolate the specific number of requests, duration and errors that occurred when the service was used within the particular business transaction
(2) This metric label identifies the service version, which is especially useful in the Canary deployment use case being discussed in this article

The first step is to follow the instructions in the example for deploying and using the services within OpenShift. Once the ./genorders.sh script has been running for a while, generating plenty of metrics for version 0.0.1 of the services, deploy the new version of the services. This is achieved by:

* updating the versions in the pom.xml files within the simple/accountmgr and simple/ordermgr folders from 0.0.1 to 0.0.2
* re-running the mvn clean install docker:build command from the simple folder
* deploying the canary versions of the services using the command oc create -f services-canary-kubernetes.yml

As our services accountmgr and ordermgr determine the backing deployment based on the respective labels app: accountmgr and app: ordermgr, simply having a second deployment with these labels will make them serve requests in a round-robin manner. This deployment script has been pre-configured with the 0.0.2 version, and to only start a single instance of the new version of the services. This may be desirable if you want to monitor the behaviour of the new service versions over a reasonable time period, but as we want to see results faster we will scale them up to see more activity. You can do this by expanding the deployment area for each service in the OpenShift web console and selecting the up arrow to scale up each service: Figure 2: Scaling up canary deployment Now we can monitor the Prometheus dashboard, using the following query, to see the error ratio per service and version:

sum(increase(span_count{error="true",span_kind="server"}[1m])) without (pod,instance,job,namespace,endpoint,transaction,error,operation,span_kind)
/
sum(increase(span_count{span_kind="server"}[1m])) without (pod,instance,job,namespace,endpoint,transaction,error,operation,span_kind)

The result of this query can be seen in Figure 1 at the top of the article. This chart shows that version 0.0.2 of the accountmgr service has not generated any errors, while version 0.0.2 of the ordermgr appears to be less error-prone than version 0.0.1. Based on these metrics, we could decide that the new versions of these services are better than the previous ones, and therefore update the main service deployments to use the new versions. In the OpenShift web console you can do this by clicking the three vertical dots in the upper right-hand side of the deployment region and selecting Edit YAML from the menu. This will display an editor window where you can change the version from 0.0.1 to 0.0.2 in the YAML file. Figure 3: Update the service version After you save the YAML configuration file, in the web console you can see the service going through a "rolling update" as OpenShift incrementally changes each service instance over to the new version.
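The relevant fragment of the deployment YAML looks roughly like this (a minimal sketch; the image name and surrounding structure are assumptions based on the example, not the actual file contents):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ordermgr
spec:
  template:
    metadata:
      labels:
        app: ordermgr                   # unchanged; requests keep routing to this label
    spec:
      containers:
      - name: ordermgr
        image: example/ordermgr:0.0.2   # edited from 0.0.1 to promote the canary version
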
Figure 4: Rolling update After the rolling update has completed for both the ordermgr and accountmgr services, you can scale down or completely remove the canary version of each deployment. An alternative to performing the rolling update would simply be to name the canary version something else (i.e. specific to the version being tested), and when it comes time to switch over, scale down the previous deployment version. This would be more straightforward, but wouldn't show off the cool rolling update approach in the OpenShift web console :-) Figure 5: Scaling down canary deployment Although we have updated both services at the same time, this is not necessary. Generally microservices would be managed by separate teams and subject to their own deployment lifecycles. Conclusion This article has shown how application metrics, captured by instrumenting services using the OpenTracing API, can be used to support a simple Canary deployment strategy. These metrics can similarly be used with other deployment strategies, such as A/B testing, which can be achieved using a weighted load balancing capability within OpenShift. Links OpenTracing: http://opentracing.io GitHub repository with demo: http://ift.tt/2rX5MoK OpenTracing Java metrics: http://ift.tt/2rWUNvF Kubernetes: https://kubernetes.io OpenShift: https://openshift.io Jaeger: http://ift.tt/2eOSqHE Prometheus: https://prometheus.io from Hawkular Blog -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20170821/0658c3de/attachment-0001.html From mr at ramendik.ru Sat Aug 26 13:43:42 2017 From: mr at ramendik.ru (Mikhail Ramendik) Date: Sat, 26 Aug 2017 18:43:42 +0100 Subject: [Hawkular-dev] Configuring hawkular services with collectd? Message-ID: Hello, I would like to try out hawkular services. I want to monitor the local host for a start, so collectd is an obvious choice for collecting metrics (CPU for a start). I have successfully set up and started cassandra, hawkular-services and grafana. I can see the hawkular services welcome message at http://localhost:8080 I have also started ptrans and collectd. I am attaching their configuration files. (The username and password for hawkular-services: myUsername/myPassword, as in the installation guide) However, I do not seem to get any metrics in hawkular. The following command: curl -u myUsername:myPassword -X GET "http://localhost:8080/hawkular/metrics/gauges" -H "Hawkular-Tenant: hawkular" returns an empty string. I would appreciate advice about fixing my setup so I can get CPU usage in hawkular-services (and see it in Grafana). Thanks! -- Yours, Mikhail Ramendik Unless explicitly stated, all opinions in my mail are my own and do not reflect the views of any organization -------------- next part -------------- A non-text attachment was scrubbed... Name: ptrans.conf Type: application/octet-stream Size: 2734 bytes Desc: not available Url : http://lists.jboss.org/pipermail/hawkular-dev/attachments/20170826/d1533731/attachment-0002.obj -------------- next part -------------- A non-text attachment was scrubbed... Name: collectd.conf Type: application/octet-stream Size: 37966 bytes Desc: not available Url : http://lists.jboss.org/pipermail/hawkular-dev/attachments/20170826/d1533731/attachment-0003.obj From mr at ramendik.ru Sat Aug 26 17:28:00 2017 From: mr at ramendik.ru (Mikhail Ramendik) Date: Sat, 26 Aug 2017 22:28:00 +0100 Subject: [Hawkular-dev] Configuring hawkular services with collectd?
In-Reply-To: References: Message-ID: On 26 August 2017 at 18:43, Mikhail Ramendik wrote: > I have also started ptrans and collectd. I am attaching their > configuration files. (The username and password for hawular-services: > myUsername/myPassword, as in the installation guide) > > However, I do not seem to get any metrics in hawkular. I have found the problem. I needed to enable authentication in ptrans.conf. -- Yours, Mikhail Ramendik Unless explicitly stated, all opinions in my mail are my own and do not reflect the views of any organization
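For anyone hitting the same issue, a quick way to confirm that the credentials and tenant are accepted, independently of ptrans and collectd, is to push a single test data point straight into Hawkular Metrics and then re-run the gauges query from the first message (a sketch; the metric id, timestamp and value are arbitrary):

curl -u myUsername:myPassword -X POST "http://localhost:8080/hawkular/metrics/gauges/test.gauge/raw" \
  -H "Hawkular-Tenant: hawkular" -H "Content-Type: application/json" \
  -d '[{"timestamp": 1503770000000, "value": 42.0}]'

If the gauges listing then shows test.gauge, the server side is working and the problem lies in the ptrans/collectd pipeline.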