From lkrejci at redhat.com  Thu Dec  1 08:15:02 2016
From: lkrejci at redhat.com (Lukas Krejci)
Date: Thu, 01 Dec 2016 14:15:02 +0100
Subject: [Hawkular-dev] [Update] Storing Inventory data in Cassandra
In-Reply-To: <94eb2c193eacb45c0b0542838ea6@google.com>
References: <94eb2c193eacb45c0b0542838ea6@google.com>
Message-ID: <19748585.ZqmjcDEyaa@localhost.localdomain>

tl;dr - This is gonna be fun (in the worst sense of the word)

We had a very good discussion with Stefan and John from the metrics team and I think we identified the main problematic areas of the Cassandra backend and brainstormed possible solutions to them.

1) Inconsistencies

Today, inventory guarantees that if some entity has a certain sync hash, the users can be sure that the entity and all its children are in a certain state (names, configurations, defined operations, etc. all exactly match); a rough sketch of how such a hash can be computed appears further below. This is also used to identify identical stuff across feeds (i.e. find all identical types defined by various feeds and operate on them as if they were a single type - /traversal/f;feed/rt;Type/identical/rl;defines/type=resource).

This can no longer be guaranteed in Cassandra. We could try to overcome this with several approaches:

a) Use a "staging" area for updates (copy the current state, apply the changes and then "replace" the live area with the staging one), but that essentially means implementing serialized transactions on top of an eventually consistent storage - something I am not completely ecstatic about given the manpower and time constraints we have.

b) Essentially treat C* as a blob store and just dump serialized (portions of) the graph to it, with all the processing being done in memory on the inventory server. This still means we have to implement transactional behavior on our own (albeit in memory, not in C*), and it still means that stored data could conflict if inventory was clustered.

c) Give up consistent sync and just write everything all the time. Inconsistencies will arise because sync doesn't touch external relationships (a feed doesn't know that some glue code discovered that a war is part of "something bigger", be it a cluster, a logical app, whatever). At that point we can also outright get rid of the hashes, because they will never be guaranteed to be consistent. This means that we will no longer be able to tell whether two feeds define the same resource types, because that depends on the resource types having the same hash.

2) Performant Traversals

Right now I am trying to implement a naive approach to graph traversals where each "hop" between nodes of the graph is represented by (at least one) query (possibly there can be very many queries for a single hop if it is required to retrieve results for every incoming vertex in the traversal). This has been identified as a potential performance problem.

The only "remedy" suggested for this was to consider 1b) - just store whole portions of the graph as a "blob" in C* and do the processing in memory. This scares me a little bit because it opens up many possibilities for operating on stale/incorrect data, and raises the question of how to "partition" the graph (more granularly than by tenant) while at the same time avoiding the complexity of handling inter-partition relationships, etc.

3) Conclusion

We will start with a naive implementation with no guarantees of consistency and will try to identify the concrete problematic areas of the code (the above already hints at some we assume will cause problems).
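To make the hash discussion in (1) concrete, here is a minimal sketch of a Merkle-style sync hash: the digest of an entity covers its own state plus the digests of its children, so equal hashes on two entities imply equal state for their whole subtrees. This is illustrative Java only, not the actual inventory code; the Entity shape and field names are assumptions made for the example, and it presumes a stable, canonical ordering of children.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    public class SyncHashSketch {

        // Hypothetical, simplified entity shape - not Hawkular's actual model.
        static class Entity {
            final String name;
            final String config;          // serialized configuration, operations, etc.
            final List<Entity> children;  // assumed to be in a stable, canonical order

            Entity(String name, String config, List<Entity> children) {
                this.name = name;
                this.config = config;
                this.children = children;
            }
        }

        // Digest over the entity's own state followed by the hashes of its children,
        // so a matching hash on a parent implies matching state for the whole subtree.
        static String syncHash(Entity e) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            md.update(e.name.getBytes(StandardCharsets.UTF_8));
            md.update(e.config.getBytes(StandardCharsets.UTF_8));
            for (Entity child : e.children) {
                md.update(syncHash(child).getBytes(StandardCharsets.UTF_8));
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest()) {
                hex.append(String.format("%02x", b & 0xff));
            }
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            Entity metric = new Entity("heap-used", "{\"unit\":\"BYTES\"}",
                    Collections.<Entity>emptyList());
            Entity server = new Entity("wildfly-server", "{\"port\":8080}",
                    Arrays.asList(metric));
            // Any change anywhere in the subtree changes the root hash.
            System.out.println(syncHash(server));
        }
    }

The relevance to options a)-c) above is that an eventually consistent store gives no cheap way to keep such a parent hash and the child state it summarizes in agreement during concurrent updates.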
Then we will try to modify the implementation/storage model/functionality/guarantees iteratively to fix the concrete problems identified. Lukas On Wednesday, November 30, 2016 12:08:05 PM CET theute at redhat.com wrote: I can't join today, Heiko neither. Feel free to go ahead with the call though and please send a feedback on hawkular-dev. Storing Inventory data in Cassandra Currently, we're storing inventory data in an SQL database. Metrics on the other hand store the data in Cassandra. We're exploring how to unify the storage backends for Hawkular components and hence the title. We'll use https://docs.google.com/document/d/1Lgv8WE1j0r7rir5hTpV-xutKFChyoNPEH3XSz020ayk as a starting point for the discussion. To join the Meeting: https://bluejeans.com/8169978803 To join via Browser: https://bluejeans.com/8169978803/browser To join with Lync: https://bluejeans.com/8169978803/lync To join via Room System: Video Conferencing System: bjn.vc -or-199.48.152.152 Meeting ID : 8169978803 To join via phone : 1) Dial: +44 203 574 6870 (see all numbers - https://www.intercallonline.com/ listNumbersByCode.action?confCode=8169978803) 2) Enter Conference ID : 8169978803 When Wed Nov 30, 2016 3pm ? 4pm Zurich Where https://bluejeans.com/8169978803 (map) Who ? lkrejci at redhat.com - creator ? jsanda at redhat.com ? jtakvori at redhat.com ? hawkular-dev at lists.jboss.org -- Lukas Krejci From mazz at redhat.com Fri Dec 2 17:17:47 2016 From: mazz at redhat.com (John Mazzitelli) Date: Fri, 2 Dec 2016 17:17:47 -0500 (EST) Subject: [Hawkular-dev] wildfly agent readme In-Reply-To: <1311442769.1319338.1480717019042.JavaMail.zimbra@redhat.com> Message-ID: <829911694.1319352.1480717067710.JavaMail.zimbra@redhat.com> I threw together a more useful readme for the wildfly agent. Just goes over some of the configuration settings necessary to get the agent to monitor things. Hoping this can be useful to Tom C. ;-) See: https://github.com/hawkular/hawkular-agent From mazz at redhat.com Mon Dec 5 10:51:34 2016 From: mazz at redhat.com (John Mazzitelli) Date: Mon, 5 Dec 2016 10:51:34 -0500 (EST) Subject: [Hawkular-dev] hawkular openshift agent now published on docker hub In-Reply-To: <2121962000.2052709.1480952926434.JavaMail.zimbra@redhat.com> Message-ID: <357832029.2053613.1480953094731.JavaMail.zimbra@redhat.com> We now have an initial version (0.1.0) of Hawkular OpenShift Agent published on docker hub. NAME: hawkular/hawkular-openshift-agent:dev LOCATION: https://hub.docker.com/r/hawkular/hawkular-openshift-agent/ If you want to see quick demos of this agent, you can view them here: * https://www.youtube.com/watch?v=jvOPlz7lzyM * https://www.youtube.com/watch?v=Fj_OriyvMc0 From garethahealy at gmail.com Mon Dec 5 12:36:10 2016 From: garethahealy at gmail.com (Gareth Healy) Date: Mon, 5 Dec 2016 17:36:10 +0000 Subject: [Hawkular-dev] */raw/query doesn't return CORS allow headers Message-ID: I am using the grafana plugin with Hawkular on OCP. If we *dont* set the CORS allowed value to all, then the grafana plugin gets AJAX errors due to CORS. As shown by some simple cURL commands below. Hawkular CORS set to: http://test.com **/counters/stats examples:* Fails due to Origin mismatch but still returns data - didn't expect to get data back if a 400 is returned... not sure if thats a bug or not. 
localhost:hawkular-client-java garethah$ curl -u admin:admin --header "Hawkular-Tenant: unit-testing" --request GET " http://192.168.99.100:8080/hawkular/metrics/counters/stats?bucketDuration=1d&percentiles=90.0&metrics=noofzsny&stacked=true" --header "Origin: http://bob.com" -vvv * Trying 192.168.99.100... * Connected to 192.168.99.100 (127.0.0.1) port 8080 (#0) * Server auth using Basic with user 'admin' > GET /hawkular/metrics/counters/stats?bucketDuration=1d&percentiles=90.0&metrics=noofzsny&stacked=true HTTP/1.1 > Host: 192.168.99.100:8080 > Authorization: Basic YWRtaW46YWRtaW4= > User-Agent: curl/7.49.1 > Accept: */* > Hawkular-Tenant: unit-testing > Origin: http://bob.com > < HTTP/1.1 400 Bad Request < Expires: 0 < Cache-Control: no-cache, no-store, must-revalidate < X-Powered-By: Undertow/1 < Server: WildFly/10 < Pragma: no-cache < Date: Mon, 05 Dec 2016 17:25:22 GMT < Connection: keep-alive < Content-Type: application/json < Content-Length: 217 < * Connection #0 to host 192.168.99.100 left intact [{"start":1480929922811,"end":1481016322811,"min":-5.3461447394508227E18,"avg":1.13394506239459277E18,"median":1.41820444399757005E18,"max":6.5335287915888394E18,"sum":1.1339450623945933E19,"samples":1,"empty":false}] Working example with correctly returned Access-Control-Allow-Origin: localhost:hawkular-client-java garethah$ curl -u admin:admin --header "Hawkular-Tenant: unit-testing" --request GET " http://192.168.99.100:8080/hawkular/metrics/counters/stats?bucketDuration=1d&percentiles=90.0&metrics=noofzsny&stacked=true" --header "Origin: http://test.com" -vvv * Trying 192.168.99.100... * Connected to 192.168.99.100 (127.0.0.1) port 8080 (#0) * Server auth using Basic with user 'admin' > GET /hawkular/metrics/counters/stats?bucketDuration=1d&percentiles=90.0&metrics=noofzsny&stacked=true HTTP/1.1 > Host: 192.168.99.100:8080 > Authorization: Basic YWRtaW46YWRtaW4= > User-Agent: curl/7.49.1 > Accept: */* > Hawkular-Tenant: unit-testing > Origin: http://test.com > < HTTP/1.1 200 OK < Expires: 0 < Cache-Control: no-cache, no-store, must-revalidate < X-Powered-By: Undertow/1 < Access-Control-Allow-Headers: origin,accept,content-type,hawkular-tenant < Server: WildFly/10 < Pragma: no-cache < Date: Mon, 05 Dec 2016 17:26:13 GMT < Connection: keep-alive < Access-Control-Allow-Origin: http://test.com < Access-Control-Allow-Credentials: true < Content-Type: application/json < Content-Length: 217 < Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS, HEAD < Access-Control-Max-Age: 259200 < * Connection #0 to host 192.168.99.100 left intact [{"start":1480929973847,"end":1481016373847,"min":-5.3461447394508227E18,"avg":1.13394506239459277E18,"median":1.41820444399757005E18,"max":6.5335287915888394E18,"sum":1.1339450623945933E19,"samples":1,"empty":false}] **/counters/raw/query examples:* Gets data but doesn't return any Access-Control-Allow-Origin headers thus will fail in grafana. localhost:hawkular-client-java garethah$ curl -u admin:admin --header "Hawkular-Tenant: unit-testing" --request POST " http://192.168.99.100:8080/hawkular/metrics/counters/raw/query" --data "{order:\"ASC\",ids:[\"noofzsny\"]}" --header "Content-Type: application/json" --header "Origin: http://test.com" -vvv * Trying 192.168.99.100... 
* Connected to 192.168.99.100 (127.0.0.1) port 8080 (#0) * Server auth using Basic with user 'admin' > POST /hawkular/metrics/counters/raw/query HTTP/1.1 > Host: 192.168.99.100:8080 > Authorization: Basic YWRtaW46YWRtaW4= > User-Agent: curl/7.49.1 > Accept: */* > Hawkular-Tenant: unit-testing > Content-Type: application/json > Origin: http://test.com > Content-Length: 30 > * upload completely sent off: 30 out of 30 bytes < HTTP/1.1 200 OK < Expires: 0 < Cache-Control: no-cache, no-store, must-revalidate < X-Powered-By: Undertow/1 < Server: WildFly/10 < Pragma: no-cache < Date: Mon, 05 Dec 2016 17:30:38 GMT < Connection: keep-alive < Content-Type: application/json < Content-Length: 590 < * Connection #0 to host 192.168.99.100 left intact [{"id":"noofzsny","data":[{"timestamp":1480943333446,"value":-5346144739450823145},{"timestamp":1480943363446,"value":5257714416350875295},{"timestamp":1480943393446,"value":4269323419475977241},{"timestamp":1480943423446,"value":4996234959867023108},{"timestamp":1480943453446,"value":-4477830536950343320},{"timestamp":1480943483446,"value":3744561193794180662},{"timestamp":1480943513446,"value":-3619119654582223963},{"timestamp":1480943543446,"value":6533528791588839899},{"timestamp":1480943573446,"value":225819548751014015},{"timestamp":1480943603446,"value":-244636774898588607}]}] Have i missed something in the grafana / hawkular setup? or is this a bug? Cheers. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161205/ad038fe1/attachment-0001.html From hrupp at redhat.com Tue Dec 6 03:30:31 2016 From: hrupp at redhat.com (Heiko W.Rupp) Date: Tue, 06 Dec 2016 09:30:31 +0100 Subject: [Hawkular-dev] */raw/query doesn't return CORS allow headers In-Reply-To: References: Message-ID: I think this is why you need to use "Proxy mode". Joel for sure knows more :) From garethahealy at gmail.com Tue Dec 6 04:07:39 2016 From: garethahealy at gmail.com (Gareth Healy) Date: Tue, 6 Dec 2016 09:07:39 +0000 Subject: [Hawkular-dev] */raw/query doesn't return CORS allow headers In-Reply-To: References: Message-ID: Thanks Heiko, just re-read docs and you are correct. https://github.com/hawkular/hawkular-grafana-datasource#configuration On Tue, Dec 6, 2016 at 8:30 AM, Heiko W.Rupp wrote: > I think this is why you need to use "Proxy mode". > Joel for sure knows more :) > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161206/ccfc1ad8/attachment.html From jtakvori at redhat.com Tue Dec 6 04:10:54 2016 From: jtakvori at redhat.com (Joel Takvorian) Date: Tue, 6 Dec 2016 10:10:54 +0100 Subject: [Hawkular-dev] */raw/query doesn't return CORS allow headers In-Reply-To: References: Message-ID: I'm going to file a bug anyway, since there's some unexpected behaviours On Tue, Dec 6, 2016 at 10:07 AM, Gareth Healy wrote: > Thanks Heiko, just re-read docs and you are correct. > > https://github.com/hawkular/hawkular-grafana-datasource#configuration > > On Tue, Dec 6, 2016 at 8:30 AM, Heiko W.Rupp wrote: > >> I think this is why you need to use "Proxy mode". 
>> Joel for sure knows more :) >> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161206/97bd64b6/attachment.html From snegrea at redhat.com Wed Dec 7 12:38:16 2016 From: snegrea at redhat.com (Stefan Negrea) Date: Wed, 7 Dec 2016 11:38:16 -0600 Subject: [Hawkular-dev] Hawkular Metrics 0.22.0 - Release Message-ID: Hello, I am happy to announce release 0.22.0 of Hawkular Metrics. This release is anchored by performance and compression enhancements. Here is a list of major changes: - *Compression* - Prevent OutOfMemoryError on Cassandra when compression job runs ( HWKMETRICS-520 ) - Avoid compression job executing in a loop when execution falls behind (HWKMETRICS-536 ) - Avoid future executions of compression job from not running if Cassandra is shutdown abruptly (HWKMETRICS-518 ) - Added a flag to disable the compression job; the data will be persisted and retrieved without compression (HWKMETRICS-524 ) - The block size for compression is now configurable (HWKMETRICS-545 ) - The compression job can now be triggered manually (HWKMETRICS-502 ) - *Server Clustering* - The external alerter is now cluster-aware and will not process the same request on multiple nodes (HWKMETRICS-515 ) - Schema updates are correctly applied when multiple servers are started at the same time (HWKMETRICS-514 ) - Added Cassandra connection information to the status page and created an admin version with detailed Cassandra cluster information ( HWKMETRICS-526 ) - Internal system metrics are now persisted under admin tenant; this gives a good overview of the current system load (HWKMETRICS-550 ) - *REST API* - Added endpoint to allow fetching of available tag names ( HWKMETRICS-532 ) - Fixed an issue where the API would report an internal server error on invalid query (HWKMETRICS-543 ) - *Hawkular Alerting - Updates* - End to end performance enhancements - Major improvements to REST API documentation - New cross-tenant endpoints for for fetching alerts - Email and webhook action plugins are now packaged in the main distribution (HWKMETRICS-552 ) *Hawkular Alerting - included* - Version 1.4.0 - Project details and repository: Github - Documentation: REST API Documentation , Examples , Developer Guide *Hawkular Metrics Clients* - Python: https://github.com/hawkular/hawkular-client-python - Go: https://github.com/hawkular/hawkular-client-go - Ruby: https://github.com/hawkular/hawkular-client-ruby - Java: https://github.com/hawkular/hawkular-client-java *Release Links* Github Release: https://github.com/hawkular/hawkular-metrics/releases/tag/0.22.0 JBoss Nexus Maven artifacts: http://origin-repository.jboss.org/nexus/content/repositorie s/public/org/hawkular/metrics/ Jira release tracker: https://issues.jboss.org/projects/HWKMETRICS/versions/12332012 A big "Thank you" goes to John Sanda, Matt Wringe, Michael Burman, Joel Takvorian, Jay Shaughnessy, Lucas Ponce, and Heiko Rupp for their project contributions. Thank you, Stefan Negrea -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161207/617bb9cf/attachment.html From snegrea at redhat.com Fri Dec 9 17:15:43 2016 From: snegrea at redhat.com (snegrea at redhat.com) Date: Fri, 09 Dec 2016 22:15:43 +0000 Subject: [Hawkular-dev] Invitation: Hawkular Metrics - Tag Filtering - JsonPath @ Mon Dec 12, 2016 9:30am - 10:30am (CST) (hawkular-dev@lists.jboss.org) Message-ID: <001a11354b885833b605434118f9@google.com> You have been invited to the following event. Title: Hawkular Metrics - Tag Filtering - JsonPath Hello, This session will be a design & implementation review for tag filtering of metric definitions based on Json Path. The feature is at proposal stage (not yet merged), so all feedback is greatly appreciated. The filtering mechanism could be extended to Alerting and Inventory components; feedback from those projects would also be great. Design Document: http://jbosson.etherpad.corp.redhat.com/286 PR: https://github.com/hawkular/hawkular-metrics/pull/706 Jira https://issues.jboss.org/browse/HWKMETRICS-523 When: Mon Dec 12, 2016 9:30am ? 10:30am Central Time Where: http://bluejeans.com/3980552127 Calendar: hawkular-dev at lists.jboss.org Who: * snegrea at redhat.com - organizer * lponce at redhat.com * jsanda at redhat.com * lkrejci at redhat.com * miburman at redhat.com * mwringe at redhat.com * jshaughn at redhat.com * hawkular-dev at lists.jboss.org Event details: https://www.google.com/calendar/event?action=VIEW&eid=bmFldHJvY2NwNzBhOHBuanF1cGJtODRvdG8gaGF3a3VsYXItZGV2QGxpc3RzLmpib3NzLm9yZw&tok=MTgjc25lZ3JlYUByZWRoYXQuY29tYzY4MTdmMWM4NGUzNTA3M2QzNTM2ZGJhM2E3MWMzZWM5OTZlODAzMQ&ctz=America/Chicago&hl=en Invitation from Google Calendar: https://www.google.com/calendar/ You are receiving this courtesy email at the account hawkular-dev at lists.jboss.org because you are an attendee of this event. To stop receiving future updates for this event, decline this event. Alternatively you can sign up for a Google account at https://www.google.com/calendar/ and control your notification settings for your entire calendar. Forwarding this invitation could allow any recipient to modify your RSVP response. Learn more at https://support.google.com/calendar/answer/37135#forwarding -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161209/c5bad32a/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2522 bytes Desc: not available Url : http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161209/c5bad32a/attachment-0002.bin -------------- next part -------------- A non-text attachment was scrubbed... Name: invite.ics Type: application/ics Size: 2569 bytes Desc: not available Url : http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161209/c5bad32a/attachment-0003.bin From mwringe at redhat.com Fri Dec 9 17:54:59 2016 From: mwringe at redhat.com (Matt Wringe) Date: Fri, 9 Dec 2016 17:54:59 -0500 (EST) Subject: [Hawkular-dev] Invitation: Hawkular Metrics - Tag Filtering - JsonPath @ Mon 2016-12-12 10:30 - 11:30 (mwringe@redhat.com) In-Reply-To: <94eb2c1922045f86b705434118b6@google.com> References: <94eb2c1922045f86b705434118b6@google.com> Message-ID: <1778309388.5627276.1481324099905.JavaMail.zimbra@redhat.com> I have a meeting conflict and I cannot make it. 
Without having something in place to do data migrations from the old format to the new one, its not going to be that use useful for the OpenShift case as the moment. Aside from this issue, data migration between formats is something that we do need to figure out. Sooner or later we are going to want to store things differently (especially with Heapster being deprecated) ----- Original Message ----- > From: "Stefan Negrea" > To: mwringe at redhat.com, jshaughn at redhat.com, hawkular-dev at lists.jboss.org, lponce at redhat.com, miburman at redhat.com, > lkrejci at redhat.com, jsanda at redhat.com > Sent: Friday, 9 December, 2016 5:15:44 PM > Subject: Invitation: Hawkular Metrics - Tag Filtering - JsonPath @ Mon 2016-12-12 10:30 - 11:30 (mwringe at redhat.com) > > more details ? > Hawkular Metrics - Tag Filtering - JsonPath > Hello, > > This session will be a design & implementation review for tag filtering of > metric definitions based on Json Path. The feature is at proposal stage (not > yet merged), so all feedback is greatly appreciated. > > The filtering mechanism could be extended to Alerting and Inventory > components; feedback from those projects would also be great. > > Design Document: > http://jbosson.etherpad.corp.redhat.com/286 > > PR: > https://github.com/hawkular/hawkular-metrics/pull/706 > > Jira > https://issues.jboss.org/browse/HWKMETRICS-523 > > > > > > > > > When > Mon 2016-12-12 10:30 ? 11:30 Eastern Time > Where > http://bluejeans.com/3980552127 ( map ) > Calendar > mwringe at redhat.com > Who > ? > snegrea at redhat.com - organizer > > ? > jshaughn at redhat.com > > ? > hawkular-dev at lists.jboss.org > > ? > lponce at redhat.com > > ? > miburman at redhat.com > > ? > mwringe at redhat.com > > ? > lkrejci at redhat.com > > ? > jsanda at redhat.com > > > Going? Yes - Maybe - No more options ? > > > Invitation from Google Calendar > > You are receiving this email at the account mwringe at redhat.com because you > are subscribed for invitations on calendar mwringe at redhat.com. > > To stop receiving these emails, please log in to > https://www.google.com/calendar/ and change your notification settings for > this calendar. > > Forwarding this invitation could allow any recipient to modify your RSVP > response. Learn More . > From danielkza2 at gmail.com Thu Dec 8 12:00:22 2016 From: danielkza2 at gmail.com (Daniel Miranda) Date: Thu, 08 Dec 2016 17:00:22 +0000 Subject: [Hawkular-dev] Hawkular-metrics resource requirements questions Message-ID: Greetings, I'm looking for a distributed time-series database, preferably backed by Cassandra, to help monitor about 30 instances in AWS (with a perspective of quick growth in the future). Hawkular Metrics seems interesting due to it's native clustering support and use of compression, since naively using Cassandra is quite inefficient - KairosDB seems to need about 12B/sample [1], which is *way* higher than other systems with custom storage backends (Prometheus can do ~1B/sample [2]). I would like to know if there are any existing benchmarks for how Hawkular's ingestion and compression perform, and what kind of resources I would need to handle something like 100 samples/producer/second, hopefully with retention for 7 and 30 days (the latter with reduced precision). My planned setup is Collectd -> Riemann -> Hawkular (?) with Grafana for visualization. Thanks in advance, Daniel -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161208/7ff1cc11/attachment.html From danielkza2 at gmail.com Thu Dec 8 12:05:25 2016 From: danielkza2 at gmail.com (Daniel Miranda) Date: Thu, 08 Dec 2016 17:05:25 +0000 Subject: [Hawkular-dev] Hawkular-metrics resource requirements questions In-Reply-To: References: Message-ID: Forgot the links. The uncompressed storage estimates are actually for NewTS, but they should not be much different for any other Cassandra-backed TSDB without compression. [1] https://www.adventuresinoss.com/2016/01/22/opennms-at-scale/ [2] https://prometheus.io/docs/operating/storage/ Em qui, 8 de dez de 2016 ?s 15:00, Daniel Miranda escreveu: > Greetings, > > I'm looking for a distributed time-series database, preferably backed by > Cassandra, to help monitor about 30 instances in AWS (with a perspective of > quick growth in the future). Hawkular Metrics seems interesting due to it's > native clustering support and use of compression, since naively using > Cassandra is quite inefficient - KairosDB seems to need about 12B/sample > [1], which is *way* higher than other systems with custom storage backends > (Prometheus can do ~1B/sample [2]). > > I would like to know if there are any existing benchmarks for how > Hawkular's ingestion and compression perform, and what kind of resources I > would need to handle something like 100 samples/producer/second, hopefully > with retention for 7 and 30 days (the latter with reduced precision). > > My planned setup is Collectd -> Riemann -> Hawkular (?) with Grafana for > visualization. > > Thanks in advance, > Daniel > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161208/b8bc68a2/attachment.html From mithomps at redhat.com Mon Dec 12 13:44:21 2016 From: mithomps at redhat.com (mike thompson) Date: Mon, 12 Dec 2016 10:44:21 -0800 Subject: [Hawkular-dev] eBook: The New Stack: Monitoring and Management with Docker Containers Message-ID: <5F7C3380-8620-4453-9CB2-B396EA1FB052@redhat.com> A freebie ebook from The New Stack. From an overview viewpoint, I thought this was quite good: https://www.dropbox.com/s/qijeadn6ptffs6t/TheNewStack_Book5_Monitoring_and_Management_with_Docker_and_Containers.pdf?dl=0 or you can sign up: http://thenewstack.io/ebookseries/ Hawkular is even mentioned on page 76. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161212/a38feac3/attachment.html From jshaughn at redhat.com Mon Dec 12 15:05:27 2016 From: jshaughn at redhat.com (Jay Shaughnessy) Date: Mon, 12 Dec 2016 15:05:27 -0500 Subject: [Hawkular-dev] Docker build on a VM Message-ID: I was stumped for quite a while about why I couldn't get a docker build to run on my VM. The symptom is that the docker build can not reach the outside world, and therefore can't pull in what it needs. This happens even though the VM itself has no connectivity issues. Note, I assume this is a VM thing because it happened to both mazz and myself, using Virtual Box fedora vms, but it may not be limited to VMs. I finally stumbled on a stack overflow entry that solved the issue [1]. Basically, you need to tell docker about your DNS servers, and also Google's DNS server 8.8.8.8. Your mileage may vary, perhaps you'll only need a subset of of servers, but hopefully this helps you out, because it was a pita. 
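For concreteness, the change being described typically ends up looking like the snippet below. This is a sketch only: the first two addresses are placeholders for your own resolvers (take the real ones from the VM's /etc/resolv.conf), 8.8.8.8 is Google's public DNS, and the file location assumes a reasonably recent Docker daemon; older installs pass the equivalent --dns flags to the daemon via its init/systemd configuration instead.

    # /etc/docker/daemon.json  (restart the docker daemon afterwards)
    {
      "dns": ["10.0.0.2", "192.168.1.1", "8.8.8.8"]
    }

Both the "dns" key and the daemon's --dns option do the same thing: they set the resolvers that get written into each container's /etc/resolv.conf.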
[1] http://stackoverflow.com/questions/25130536/dockerfile-docker-build-cant-download-packages-centos-yum-debian-ubuntu-ap -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161212/16a2e273/attachment.html From mazz at redhat.com Mon Dec 12 15:20:36 2016 From: mazz at redhat.com (John Mazzitelli) Date: Mon, 12 Dec 2016 15:20:36 -0500 (EST) Subject: [Hawkular-dev] Docker build on a VM In-Reply-To: References: Message-ID: <972190278.4842486.1481574036031.JavaMail.zimbra@redhat.com> I saw the same problems. One thing I don't understand is why my host's resolve.conf isn't good enough? https://docs.docker.com/engine/userguide/networking/default_network/configure-dns/ "Regarding DNS settings, in the absence of the --dns=IP_ADDRESS..., --dns-search=DOMAIN..., or --dns-opt=OPTION... options, Docker makes each container?s /etc/resolv.conf look like the /etc/resolv.conf of the host machine (where the docker daemon runs). When creating the container?s /etc/resolv.conf, the daemon filters out all localhost IP address nameserver entries from the host?s original file." I'm not having any connectivity issues from my host machine - so why does the docker container have problems if it is using the same /etc/resolve.conf? It's all magic to me. ----- Original Message ----- > > I was stumped for quite a while about why I couldn't get a docker build to > run on my VM. The symptom is that the docker build can not reach the outside > world, and therefore can't pull in what it needs. This happens even though > the VM itself has no connectivity issues. Note, I assume this is a VM thing > because it happened to both mazz and myself, using Virtual Box fedora vms, > but it may not be limited to VMs. I finally stumbled on a stack overflow > entry that solved the issue [1]. > > Basically, you need to tell docker about your DNS servers, and also Google's > DNS server 8.8.8.8. Your mileage may vary, perhaps you'll only need a subset > of of servers, but hopefully this helps you out, because it was a pita. > > [1] > http://stackoverflow.com/questions/25130536/dockerfile-docker-build-cant-download-packages-centos-yum-debian-ubuntu-ap > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > From jsanda at redhat.com Mon Dec 12 23:22:11 2016 From: jsanda at redhat.com (John Sanda) Date: Mon, 12 Dec 2016 23:22:11 -0500 Subject: [Hawkular-dev] Hawkular-metrics resource requirements questions In-Reply-To: References: Message-ID: <07E42FD0-0898-43EC-AEFF-10FF611AE1E9@redhat.com> Hey Daniel, Sorry for the late reply. The person who did all the compression work (gayak on freenode) probably will not be around much for the rest of the year. He would be the best person to answer questions on compression; however, I should have some numbers to report back to you tomorrow. With respect to performance handling 100 samples/second is not a problem, but just like with any other Cassandra TSDB, your hardware configuration is going to be a big factor. If you do not have good I/O performance for the commit log, ingestion is going to suffer. I will let Stefan chime with some thoughts on EC2 instance types. Lastly, we welcome and appreciate community involvement. Your use case sounds really interesting, and we?ll do our best to get your questions answered and get you up and running. 
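As a small aside on the commit-log point: in cassandra.yaml the commit log location is configured separately from the data directories precisely so that it can sit on its own fast device. The paths below are examples only, not a recommendation for any particular deployment.

    # cassandra.yaml (excerpt - example paths only)
    commitlog_directory: /var/lib/cassandra/commitlog     # ideally a dedicated disk or SSD
    data_file_directories:
        - /var/lib/cassandra/data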
- John > On Dec 8, 2016, at 12:05 PM, Daniel Miranda wrote: > > Forgot the links. The uncompressed storage estimates are actually for NewTS, but they should not be much different for any other Cassandra-backed TSDB without compression. > > [1] https://www.adventuresinoss.com/2016/01/22/opennms-at-scale/ > [2] https://prometheus.io/docs/operating/storage/ > > Em qui, 8 de dez de 2016 ?s 15:00, Daniel Miranda > escreveu: > Greetings, > > I'm looking for a distributed time-series database, preferably backed by Cassandra, to help monitor about 30 instances in AWS (with a perspective of quick growth in the future). Hawkular Metrics seems interesting due to it's native clustering support and use of compression, since naively using Cassandra is quite inefficient - KairosDB seems to need about 12B/sample [1], which is *way* higher than other systems with custom storage backends (Prometheus can do ~1B/sample [2]). > > I would like to know if there are any existing benchmarks for how Hawkular's ingestion and compression perform, and what kind of resources I would need to handle something like 100 samples/producer/second, hopefully with retention for 7 and 30 days (the latter with reduced precision). > > My planned setup is Collectd -> Riemann -> Hawkular (?) with Grafana for visualization. > > Thanks in advance, > Daniel > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161212/54310256/attachment-0001.html From jsanda at redhat.com Mon Dec 12 23:41:22 2016 From: jsanda at redhat.com (John Sanda) Date: Mon, 12 Dec 2016 23:41:22 -0500 Subject: [Hawkular-dev] Hawkular-metrics resource requirements questions In-Reply-To: <07E42FD0-0898-43EC-AEFF-10FF611AE1E9@redhat.com> References: <07E42FD0-0898-43EC-AEFF-10FF611AE1E9@redhat.com> Message-ID: <3B86DC51-B887-46F9-8C27-13C78B090CEC@redhat.com> While this is anecdotal, it offers a bit of perspective. I have done testing on my laptop with sustained ingestion rates of 5,000 and 6,000 samples / 10 seconds without a single dropped mutation. This is with a single Cassandra 3.0.9 node create with ccm on a laptop running Fedora 24 and having 16 GB RAM, 8 cores, and SSD. We also use test environment with virtual machines and shared storage where we might have a tough time sustaining those ingestion rates on a single node depending on the time of day. > On Dec 12, 2016, at 11:22 PM, John Sanda wrote: > > Hey Daniel, > > Sorry for the late reply. The person who did all the compression work (gayak on freenode) probably will not be around much for the rest of the year. He would be the best person to answer questions on compression; however, I should have some numbers to report back to you tomorrow. > > With respect to performance handling 100 samples/second is not a problem, but just like with any other Cassandra TSDB, your hardware configuration is going to be a big factor. If you do not have good I/O performance for the commit log, ingestion is going to suffer. I will let Stefan chime with some thoughts on EC2 instance types. > > Lastly, we welcome and appreciate community involvement. Your use case sounds really interesting, and we?ll do our best to get your questions answered and get you up and running. > > - John > >> On Dec 8, 2016, at 12:05 PM, Daniel Miranda > wrote: >> >> Forgot the links. 
The uncompressed storage estimates are actually for NewTS, but they should not be much different for any other Cassandra-backed TSDB without compression. >> >> [1] https://www.adventuresinoss.com/2016/01/22/opennms-at-scale/ >> [2] https://prometheus.io/docs/operating/storage/ >> >> Em qui, 8 de dez de 2016 ?s 15:00, Daniel Miranda > escreveu: >> Greetings, >> >> I'm looking for a distributed time-series database, preferably backed by Cassandra, to help monitor about 30 instances in AWS (with a perspective of quick growth in the future). Hawkular Metrics seems interesting due to it's native clustering support and use of compression, since naively using Cassandra is quite inefficient - KairosDB seems to need about 12B/sample [1], which is *way* higher than other systems with custom storage backends (Prometheus can do ~1B/sample [2]). >> >> I would like to know if there are any existing benchmarks for how Hawkular's ingestion and compression perform, and what kind of resources I would need to handle something like 100 samples/producer/second, hopefully with retention for 7 and 30 days (the latter with reduced precision). >> >> My planned setup is Collectd -> Riemann -> Hawkular (?) with Grafana for visualization. >> >> Thanks in advance, >> Daniel >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161212/8275dd08/attachment.html From mazz at redhat.com Tue Dec 13 13:17:09 2016 From: mazz at redhat.com (John Mazzitelli) Date: Tue, 13 Dec 2016 13:17:09 -0500 (EST) Subject: [Hawkular-dev] moving okhttp library from 2.x to 3.x In-Reply-To: <1951494552.5306423.1481649796009.JavaMail.zimbra@redhat.com> Message-ID: <1447242712.5316061.1481653029964.JavaMail.zimbra@redhat.com> We have been using version 2.x of the okhttp library and its associated WebSocket library. Moving to the latest 3.x stream would keep us up-to-date and would be useful because we recently saw some odd behavior where the websocket library was spitting out warnings about resources leaking and that problem I think is fixed in the 3.x versions. So I wrote this JIRA: https://issues.jboss.org/browse/HAWKULAR-1138 There are four PRs associated with that JIRA for parent-pom, commons, inventory, and agent that I need peer reviewed. We then need to publish these in an organized fashion (parent-pom first, then we move commons pulling in the new parent pom, then inventory and agent pulling in the new commons and parent pom). Also: Metrics: I noticed hawkular-metrics defines a property for a VERY old okhttp version (2.0.0) but it doesn't seem to even be using it. See https://github.com/hawkular/hawkular-metrics/search?q=squareup - I think metrics should get rid of that obsolete version property definition. Does anyone know of anywhere else we are using okhttp? From kavinkankeshwar at gmail.com Tue Dec 13 17:29:51 2016 From: kavinkankeshwar at gmail.com (Kavin Kankeshwar) Date: Tue, 13 Dec 2016 14:29:51 -0800 Subject: [Hawkular-dev] Hawkular Usage and other stats Message-ID: Hi, I am evaluating Hawkular seems very interesting, I just wanted to know if you guys have some usage stats and community involvement ? 
I see see only few Github Stars etc, but the project is being actively developed based on commit history. Just wanted to figure out about hawkular if its production ready and I can start using it if needed at my workplace. Obviously once we start using if there are any changes I need i am willing to submit patches etc. But just wanted to check on stats before i dive in . :) Thanks! Regards, -- Kavin.Kankeshwar -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161213/8c81bb12/attachment.html From theute at redhat.com Wed Dec 14 05:28:29 2016 From: theute at redhat.com (Thomas Heute) Date: Wed, 14 Dec 2016 11:28:29 +0100 Subject: [Hawkular-dev] Hawkular Usage and other stats In-Reply-To: References: Message-ID: In terms of production-ready, we ship Hawkular with OpenShift to Red Hat customers, and we are about to ship it to CloudForms customers as well. It's still in very active development though and we are head down in it, so we have?'t make a lot of noise on Hawkular so far. We lost our GitHub star history on a repository rename, so that didn't help on that side :) Feel free to star us ;) (The number of repos is also not really helping ;)) We welcome all contributions of course (or ideas, feedback), the more agents/usecases will have, the more the community will grow, at the moment we have Wildfy and OpenShift agents which are very important for us, but other agents would definitely help community awareness. Could you tell us how you'd like to use Hawkular ? Thomas On Tue, Dec 13, 2016 at 11:29 PM, Kavin Kankeshwar < kavinkankeshwar at gmail.com> wrote: > Hi, > > I am evaluating Hawkular seems very interesting, I just wanted to know if > you guys have some usage stats and community involvement ? > > I see see only few Github Stars etc, but the project is being actively > developed based on commit history. > > Just wanted to figure out about hawkular if its production ready and I can > start using it if needed at my workplace. Obviously once we start using if > there are any changes I need i am willing to submit patches etc. But just > wanted to check on stats before i dive in . :) > > Thanks! > > Regards, > -- > Kavin.Kankeshwar > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161214/5d32d317/attachment.html From theute at redhat.com Wed Dec 14 08:12:48 2016 From: theute at redhat.com (theute at redhat.com) Date: Wed, 14 Dec 2016 13:12:48 +0000 Subject: [Hawkular-dev] Invitation: Hawkular Inventory // GraphDB @ Fri Dec 16, 2016 2:30pm - 3:30pm (CET) (hawkular-dev@lists.jboss.org) Message-ID: <001a114126b2eecf2805439e1774@google.com> You have been invited to the following event. Title: Hawkular Inventory // GraphDB http://bluejeans.com/theute I'd like us to discuss how we use graphDB features for the inventory. Because if we don't and don't have concrete plans to use Graph queries, then we may want to simplify things... When: Fri Dec 16, 2016 2:30pm ? 
3:30pm Zurich Calendar: hawkular-dev at lists.jboss.org Who: * theute at redhat.com - creator * jsanda at redhat.com * lkrejci at redhat.com * jmazzite at redhat.com * jtakvori at redhat.com * jkremser at redhat.com * hawkular-dev at lists.jboss.org * hrupp at redhat.com Event details: https://www.google.com/calendar/event?action=VIEW&eid=MG4zNTNocWhzdDZjZWV1bDY5aGQ5YjFsaTQgaGF3a3VsYXItZGV2QGxpc3RzLmpib3NzLm9yZw&tok=NjMjcmVkaGF0LmNvbV9mbWlnMm9zdTY5a21hNDdqcmRjMnZlbjRtb0Bncm91cC5jYWxlbmRhci5nb29nbGUuY29tNzVkM2NiNjEzY2IwNzBkNjAwNDM4ODZjNzdlZjEwYmEzMzE0MzNlZQ&ctz=Europe/Zurich&hl=en Invitation from Google Calendar: https://www.google.com/calendar/ You are receiving this courtesy email at the account hawkular-dev at lists.jboss.org because you are an attendee of this event. To stop receiving future updates for this event, decline this event. Alternatively you can sign up for a Google account at https://www.google.com/calendar/ and control your notification settings for your entire calendar. Forwarding this invitation could allow any recipient to modify your RSVP response. Learn more at https://support.google.com/calendar/answer/37135#forwarding -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161214/bf8ebaae/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2242 bytes Desc: not available Url : http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161214/bf8ebaae/attachment-0002.bin -------------- next part -------------- A non-text attachment was scrubbed... Name: invite.ics Type: application/ics Size: 2287 bytes Desc: not available Url : http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161214/bf8ebaae/attachment-0003.bin From mazz at redhat.com Wed Dec 14 08:30:22 2016 From: mazz at redhat.com (John Mazzitelli) Date: Wed, 14 Dec 2016 08:30:22 -0500 (EST) Subject: [Hawkular-dev] moving okhttp library from 2.x to 3.x In-Reply-To: <1447242712.5316061.1481653029964.JavaMail.zimbra@redhat.com> References: <1447242712.5316061.1481653029964.JavaMail.zimbra@redhat.com> Message-ID: <453958488.5547276.1481722222512.JavaMail.zimbra@redhat.com> I just submitted a PR for h-services, and currently working on testing for apm and datamining - I'll submit PRs for those soon. Just FYI: All of these PRs will be red in travis because I haven't released parent-pom with the new okhttp dep. I wanted to get all the PRs in, see them all pass on my box, and then ask that we merge them all in an orchestrated dance. So I just need the stakeholders to peer review the code changes. ----- Original Message ----- > We have been using version 2.x of the okhttp library and its associated > WebSocket library. > > Moving to the latest 3.x stream would keep us up-to-date and would be useful > because we recently saw some odd behavior where the websocket library was > spitting out warnings about resources leaking and that problem I think is > fixed in the 3.x versions. > > So I wrote this JIRA: https://issues.jboss.org/browse/HAWKULAR-1138 > > There are four PRs associated with that JIRA for parent-pom, commons, > inventory, and agent that I need peer reviewed. We then need to publish > these in an organized fashion (parent-pom first, then we move commons > pulling in the new parent pom, then inventory and agent pulling in the new > commons and parent pom). 
> > Also: Metrics: I noticed hawkular-metrics defines a property for a VERY old > okhttp version (2.0.0) but it doesn't seem to even be using it. See > https://github.com/hawkular/hawkular-metrics/search?q=squareup - I think > metrics should get rid of that obsolete version property definition. > > Does anyone know of anywhere else we are using okhttp? > > > > > > > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > From jsanda at redhat.com Wed Dec 14 08:50:09 2016 From: jsanda at redhat.com (John Sanda) Date: Wed, 14 Dec 2016 08:50:09 -0500 Subject: [Hawkular-dev] Hawkular-metrics resource requirements questions In-Reply-To: <3B86DC51-B887-46F9-8C27-13C78B090CEC@redhat.com> References: <07E42FD0-0898-43EC-AEFF-10FF611AE1E9@redhat.com> <3B86DC51-B887-46F9-8C27-13C78B090CEC@redhat.com> Message-ID: <60DDD6B9-711E-46C1-BBA1-07595BE814E7@redhat.com> I have some numbers to share for compression. I ran a test simulation that stored 7 days of data points for 5,000 metrics with a sampling rate of 10 secs. I use a 10 sec sampling rate a lot since that is what has been used in OpenShift. I used only gauge metrics and used a uniform distribution for the values. The size of the raw data on disk *without* gorilla compression was 192 MB. We are still using Cassandra?s default compression, LZ4. With gorilla compression the size of the live data on disk is 4.2 MB, which is in stark contrast with the size of the raw data. There are several things to note. First, we compress data in 2 hour blocks by default, which makes it a bit difficult to say how big a compressed sample is. The data set will affect the overall compression as well. It would be nice though if we did publish something about the compressed sample size even if it is with some caveats. I mentioned that the the 4.2 MB is the size of the *live* data. As hawkular-metrics ingests data it does not cache and compress it in memory. Instead the data points are written to the data table just as they were prior to introducing gorilla compression. There is a background job that does the compression. It runs every 2 hours. When a 2 hour block of a data points for a time series is compressed and persisted, the corresponding raw data is deleted. If you are familiar with Cassandra, then you might be aware that deletes do not happen immediately. If hawkular-metrics is running with a single Cassandra node and/or if the replication_factor is 1, then gc_grace_seconds will be set to zero, so deleted data should get purged pretty fast. For a multi-node C* cluster with replication, gc_grace_seconds is set to 7 days. It is a lot of little details, but they can impact the actual numbers you see, which is why I stressed that I was measuring the size of the live data. I hope this is helpful. - John > On Dec 12, 2016, at 11:41 PM, John Sanda wrote: > > While this is anecdotal, it offers a bit of perspective. I have done testing on my laptop with sustained ingestion rates of 5,000 and 6,000 samples / 10 seconds without a single dropped mutation. This is with a single Cassandra 3.0.9 node create with ccm on a laptop running Fedora 24 and having 16 GB RAM, 8 cores, and SSD. We also use test environment with virtual machines and shared storage where we might have a tough time sustaining those ingestion rates on a single node depending on the time of day. 
> >> On Dec 12, 2016, at 11:22 PM, John Sanda > wrote: >> >> Hey Daniel, >> >> Sorry for the late reply. The person who did all the compression work (gayak on freenode) probably will not be around much for the rest of the year. He would be the best person to answer questions on compression; however, I should have some numbers to report back to you tomorrow. >> >> With respect to performance handling 100 samples/second is not a problem, but just like with any other Cassandra TSDB, your hardware configuration is going to be a big factor. If you do not have good I/O performance for the commit log, ingestion is going to suffer. I will let Stefan chime with some thoughts on EC2 instance types. >> >> Lastly, we welcome and appreciate community involvement. Your use case sounds really interesting, and we?ll do our best to get your questions answered and get you up and running. >> >> - John >> >>> On Dec 8, 2016, at 12:05 PM, Daniel Miranda > wrote: >>> >>> Forgot the links. The uncompressed storage estimates are actually for NewTS, but they should not be much different for any other Cassandra-backed TSDB without compression. >>> >>> [1] https://www.adventuresinoss.com/2016/01/22/opennms-at-scale/ >>> [2] https://prometheus.io/docs/operating/storage/ >>> >>> Em qui, 8 de dez de 2016 ?s 15:00, Daniel Miranda > escreveu: >>> Greetings, >>> >>> I'm looking for a distributed time-series database, preferably backed by Cassandra, to help monitor about 30 instances in AWS (with a perspective of quick growth in the future). Hawkular Metrics seems interesting due to it's native clustering support and use of compression, since naively using Cassandra is quite inefficient - KairosDB seems to need about 12B/sample [1], which is *way* higher than other systems with custom storage backends (Prometheus can do ~1B/sample [2]). >>> >>> I would like to know if there are any existing benchmarks for how Hawkular's ingestion and compression perform, and what kind of resources I would need to handle something like 100 samples/producer/second, hopefully with retention for 7 and 30 days (the latter with reduced precision). >>> >>> My planned setup is Collectd -> Riemann -> Hawkular (?) with Grafana for visualization. >>> >>> Thanks in advance, >>> Daniel >>> _______________________________________________ >>> hawkular-dev mailing list >>> hawkular-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161214/bfdf9004/attachment.html From mazz at redhat.com Wed Dec 14 09:00:51 2016 From: mazz at redhat.com (John Mazzitelli) Date: Wed, 14 Dec 2016 09:00:51 -0500 (EST) Subject: [Hawkular-dev] moving okhttp library from 2.x to 3.x In-Reply-To: <453958488.5547276.1481722222512.JavaMail.zimbra@redhat.com> References: <1447242712.5316061.1481653029964.JavaMail.zimbra@redhat.com> <453958488.5547276.1481722222512.JavaMail.zimbra@redhat.com> Message-ID: <1073256243.5564079.1481724051248.JavaMail.zimbra@redhat.com> OK, I think I got everything - at least, from what I could find. 
There are now 7 pull requests on the following repos: * parent-pom * commons * inventory * agent * services * apm * datamining If everyone peer-reviews and agrees, I can merge and release the parent-pom. This puts our okhttp version up to 3.4.2 and will let the commons PR run til green. Once we see that pass, that would get merged and released and at that point all the rest of the PRs can be re-tested via travis and hopefully all go green and can be merged. So - I need to know if anyone has any reservations about releasing parent-pom 51 with okhttp upgraded from 2.x to 3.4.2. Speak now or forever hold your peace. ----- Original Message ----- > I just submitted a PR for h-services, and currently working on testing for > apm and datamining - I'll submit PRs for those soon. > > Just FYI: All of these PRs will be red in travis because I haven't released > parent-pom with the new okhttp dep. I wanted to get all the PRs in, see them > all pass on my box, and then ask that we merge them all in an orchestrated > dance. So I just need the stakeholders to peer review the code changes. > > ----- Original Message ----- > > We have been using version 2.x of the okhttp library and its associated > > WebSocket library. > > > > Moving to the latest 3.x stream would keep us up-to-date and would be > > useful > > because we recently saw some odd behavior where the websocket library was > > spitting out warnings about resources leaking and that problem I think is > > fixed in the 3.x versions. > > > > So I wrote this JIRA: https://issues.jboss.org/browse/HAWKULAR-1138 > > > > There are four PRs associated with that JIRA for parent-pom, commons, > > inventory, and agent that I need peer reviewed. We then need to publish > > these in an organized fashion (parent-pom first, then we move commons > > pulling in the new parent pom, then inventory and agent pulling in the new > > commons and parent pom). > > > > Also: Metrics: I noticed hawkular-metrics defines a property for a VERY old > > okhttp version (2.0.0) but it doesn't seem to even be using it. See > > https://github.com/hawkular/hawkular-metrics/search?q=squareup - I think > > metrics should get rid of that obsolete version property definition. > > > > Does anyone know of anywhere else we are using okhttp? > > > > > > > > > > > > > > > > > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > From gbrown at redhat.com Wed Dec 14 10:09:24 2016 From: gbrown at redhat.com (Gary Brown) Date: Wed, 14 Dec 2016 10:09:24 -0500 (EST) Subject: [Hawkular-dev] Hawkular APM 0.13.0.Final released In-Reply-To: <1874952187.2805629.1481727766805.JavaMail.zimbra@redhat.com> Message-ID: <906108491.2806872.1481728164304.JavaMail.zimbra@redhat.com> Hi We are pleased to announce the availability of Hawkular APM version 0.13.0.Final. 
The release can be found here: https://github.com/hawkular/hawkular-apm/releases This release includes: * OpenTracing (Java and JavaScript) providers: - sampling API - use deployment metadata information from OpenShift environment to automatically name services/versions * OpenTracing based JVM agent - ability to define custom ByteMan rules * OpenShift - template improvements to separate out management of Elasticsearch cluster - vertx-opentracing example updated to separate services into individual deployments, with Ansible script for single command install * UI - trace instance diagram colour coded to show areas with performance issues - filter sidebar applied to transaction pages Regards Hawkular APM Team From mazz at redhat.com Wed Dec 14 10:27:14 2016 From: mazz at redhat.com (John Mazzitelli) Date: Wed, 14 Dec 2016 10:27:14 -0500 (EST) Subject: [Hawkular-dev] moving okhttp library from 2.x to 3.x In-Reply-To: <1073256243.5564079.1481724051248.JavaMail.zimbra@redhat.com> References: <1447242712.5316061.1481653029964.JavaMail.zimbra@redhat.com> <453958488.5547276.1481722222512.JavaMail.zimbra@redhat.com> <1073256243.5564079.1481724051248.JavaMail.zimbra@redhat.com> Message-ID: <1490200221.5590094.1481729234104.JavaMail.zimbra@redhat.com> OK, Joel found an issue with the cors filters - he submitted two PRs on APM and Inventory to fix. I won't merge anything or release anything until tomorrow to give everyone a chance to review and chime in if they see anything else wrong with this or any reason why we shouldn't do this. Tomorrow, if all is OK, I'll merge and release parent-pom and commons. Then we'll merge Joel's two PRs to fix the cors filters. I'll rebase my PRs on the new masters and then merge all PRs. I'll let the other repo leads do releases as they see fit. ----- Original Message ----- > OK, I think I got everything - at least, from what I could find. > > There are now 7 pull requests on the following repos: > > * parent-pom > * commons > * inventory > * agent > * services > * apm > * datamining > > If everyone peer-reviews and agrees, I can merge and release the parent-pom. > This puts our okhttp version up to 3.4.2 and will let the commons PR run til > green. Once we see that pass, that would get merged and released and at that > point all the rest of the PRs can be re-tested via travis and hopefully all > go green and can be merged. > > So - I need to know if anyone has any reservations about releasing parent-pom > 51 with okhttp upgraded from 2.x to 3.4.2. Speak now or forever hold your > peace. > > ----- Original Message ----- > > I just submitted a PR for h-services, and currently working on testing for > > apm and datamining - I'll submit PRs for those soon. > > > > Just FYI: All of these PRs will be red in travis because I haven't released > > parent-pom with the new okhttp dep. I wanted to get all the PRs in, see > > them > > all pass on my box, and then ask that we merge them all in an orchestrated > > dance. So I just need the stakeholders to peer review the code changes. > > > > ----- Original Message ----- > > > We have been using version 2.x of the okhttp library and its associated > > > WebSocket library. > > > > > > Moving to the latest 3.x stream would keep us up-to-date and would be > > > useful > > > because we recently saw some odd behavior where the websocket library was > > > spitting out warnings about resources leaking and that problem I think is > > > fixed in the 3.x versions. 
> > > > > > So I wrote this JIRA: https://issues.jboss.org/browse/HAWKULAR-1138 > > > > > > There are four PRs associated with that JIRA for parent-pom, commons, > > > inventory, and agent that I need peer reviewed. We then need to publish > > > these in an organized fashion (parent-pom first, then we move commons > > > pulling in the new parent pom, then inventory and agent pulling in the > > > new > > > commons and parent pom). > > > > > > Also: Metrics: I noticed hawkular-metrics defines a property for a VERY > > > old > > > okhttp version (2.0.0) but it doesn't seem to even be using it. See > > > https://github.com/hawkular/hawkular-metrics/search?q=squareup - I think > > > metrics should get rid of that obsolete version property definition. > > > > > > Does anyone know of anywhere else we are using okhttp? > > > > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > hawkular-dev mailing list > > > hawkular-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > From snegrea at redhat.com Wed Dec 14 11:32:49 2016 From: snegrea at redhat.com (Stefan Negrea) Date: Wed, 14 Dec 2016 10:32:49 -0600 Subject: [Hawkular-dev] moving okhttp library from 2.x to 3.x In-Reply-To: <1490200221.5590094.1481729234104.JavaMail.zimbra@redhat.com> References: <1447242712.5316061.1481653029964.JavaMail.zimbra@redhat.com> <453958488.5547276.1481722222512.JavaMail.zimbra@redhat.com> <1073256243.5564079.1481724051248.JavaMail.zimbra@redhat.com> <1490200221.5590094.1481729234104.JavaMail.zimbra@redhat.com> Message-ID: Hello, Metrics no longer uses okhttp. It was used in code that has been removed from the project. The only thing left is the property in root pom. I just submitted a PR to remove the property ( https://github.com/hawkular/hawkular-metrics/pull/711). Thank you, Stefan Negrea On Wed, Dec 14, 2016 at 9:27 AM, John Mazzitelli wrote: > OK, Joel found an issue with the cors filters - he submitted two PRs on > APM and Inventory to fix. > > I won't merge anything or release anything until tomorrow to give everyone > a chance to review and chime in if they see anything else wrong with this > or any reason why we shouldn't do this. > > Tomorrow, if all is OK, I'll merge and release parent-pom and commons. > Then we'll merge Joel's two PRs to fix the cors filters. I'll rebase my PRs > on the new masters and then merge all PRs. I'll let the other repo leads do > releases as they see fit. > > ----- Original Message ----- > > OK, I think I got everything - at least, from what I could find. > > > > There are now 7 pull requests on the following repos: > > > > * parent-pom > > * commons > > * inventory > > * agent > > * services > > * apm > > * datamining > > > > If everyone peer-reviews and agrees, I can merge and release the > parent-pom. > > This puts our okhttp version up to 3.4.2 and will let the commons PR run > til > > green. Once we see that pass, that would get merged and released and at > that > > point all the rest of the PRs can be re-tested via travis and hopefully > all > > go green and can be merged. 
> > > > So - I need to know if anyone has any reservations about releasing > parent-pom > > 51 with okhttp upgraded from 2.x to 3.4.2. Speak now or forever hold your > > peace. > > > > ----- Original Message ----- > > > I just submitted a PR for h-services, and currently working on testing > for > > > apm and datamining - I'll submit PRs for those soon. > > > > > > Just FYI: All of these PRs will be red in travis because I haven't > released > > > parent-pom with the new okhttp dep. I wanted to get all the PRs in, see > > > them > > > all pass on my box, and then ask that we merge them all in an > orchestrated > > > dance. So I just need the stakeholders to peer review the code changes. > > > > > > ----- Original Message ----- > > > > We have been using version 2.x of the okhttp library and its > associated > > > > WebSocket library. > > > > > > > > Moving to the latest 3.x stream would keep us up-to-date and would be > > > > useful > > > > because we recently saw some odd behavior where the websocket > library was > > > > spitting out warnings about resources leaking and that problem I > think is > > > > fixed in the 3.x versions. > > > > > > > > So I wrote this JIRA: https://issues.jboss.org/browse/HAWKULAR-1138 > > > > > > > > There are four PRs associated with that JIRA for parent-pom, commons, > > > > inventory, and agent that I need peer reviewed. We then need to > publish > > > > these in an organized fashion (parent-pom first, then we move commons > > > > pulling in the new parent pom, then inventory and agent pulling in > the > > > > new > > > > commons and parent pom). > > > > > > > > Also: Metrics: I noticed hawkular-metrics defines a property for a > VERY > > > > old > > > > okhttp version (2.0.0) but it doesn't seem to even be using it. See > > > > https://github.com/hawkular/hawkular-metrics/search?q=squareup - I > think > > > > metrics should get rid of that obsolete version property definition. > > > > > > > > Does anyone know of anywhere else we are using okhttp? > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > hawkular-dev mailing list > > > > hawkular-dev at lists.jboss.org > > > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > > > > _______________________________________________ > > > hawkular-dev mailing list > > > hawkular-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161214/a60f1b2d/attachment.html From jshaughn at redhat.com Wed Dec 14 13:16:24 2016 From: jshaughn at redhat.com (Jay Shaughnessy) Date: Wed, 14 Dec 2016 13:16:24 -0500 Subject: [Hawkular-dev] moving okhttp library from 2.x to 3.x In-Reply-To: <1490200221.5590094.1481729234104.JavaMail.zimbra@redhat.com> References: <1447242712.5316061.1481653029964.JavaMail.zimbra@redhat.com> <453958488.5547276.1481722222512.JavaMail.zimbra@redhat.com> <1073256243.5564079.1481724051248.JavaMail.zimbra@redhat.com> <1490200221.5590094.1481729234104.JavaMail.zimbra@redhat.com> Message-ID: <3661a3a4-a9b2-2a7a-78b1-7f40f06d07a0@redhat.com> Joel, we may need one for Alerts as well? On 12/14/2016 10:27 AM, John Mazzitelli wrote: > OK, Joel found an issue with the cors filters - he submitted two PRs on APM and Inventory to fix. > > I won't merge anything or release anything until tomorrow to give everyone a chance to review and chime in if they see anything else wrong with this or any reason why we shouldn't do this. > > Tomorrow, if all is OK, I'll merge and release parent-pom and commons. Then we'll merge Joel's two PRs to fix the cors filters. I'll rebase my PRs on the new masters and then merge all PRs. I'll let the other repo leads do releases as they see fit. > > ----- Original Message ----- >> OK, I think I got everything - at least, from what I could find. >> >> There are now 7 pull requests on the following repos: >> >> * parent-pom >> * commons >> * inventory >> * agent >> * services >> * apm >> * datamining >> >> If everyone peer-reviews and agrees, I can merge and release the parent-pom. >> This puts our okhttp version up to 3.4.2 and will let the commons PR run til >> green. Once we see that pass, that would get merged and released and at that >> point all the rest of the PRs can be re-tested via travis and hopefully all >> go green and can be merged. >> >> So - I need to know if anyone has any reservations about releasing parent-pom >> 51 with okhttp upgraded from 2.x to 3.4.2. Speak now or forever hold your >> peace. >> >> ----- Original Message ----- >>> I just submitted a PR for h-services, and currently working on testing for >>> apm and datamining - I'll submit PRs for those soon. >>> >>> Just FYI: All of these PRs will be red in travis because I haven't released >>> parent-pom with the new okhttp dep. I wanted to get all the PRs in, see >>> them >>> all pass on my box, and then ask that we merge them all in an orchestrated >>> dance. So I just need the stakeholders to peer review the code changes. >>> >>> ----- Original Message ----- >>>> We have been using version 2.x of the okhttp library and its associated >>>> WebSocket library. >>>> >>>> Moving to the latest 3.x stream would keep us up-to-date and would be >>>> useful >>>> because we recently saw some odd behavior where the websocket library was >>>> spitting out warnings about resources leaking and that problem I think is >>>> fixed in the 3.x versions. >>>> >>>> So I wrote this JIRA: https://issues.jboss.org/browse/HAWKULAR-1138 >>>> >>>> There are four PRs associated with that JIRA for parent-pom, commons, >>>> inventory, and agent that I need peer reviewed. We then need to publish >>>> these in an organized fashion (parent-pom first, then we move commons >>>> pulling in the new parent pom, then inventory and agent pulling in the >>>> new >>>> commons and parent pom). 
>>>> >>>> Also: Metrics: I noticed hawkular-metrics defines a property for a VERY >>>> old >>>> okhttp version (2.0.0) but it doesn't seem to even be using it. See >>>> https://github.com/hawkular/hawkular-metrics/search?q=squareup - I think >>>> metrics should get rid of that obsolete version property definition. >>>> >>>> Does anyone know of anywhere else we are using okhttp? >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> hawkular-dev mailing list >>>> hawkular-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>> >>> _______________________________________________ >>> hawkular-dev mailing list >>> hawkular-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161214/978c02eb/attachment.html From lkrejci at redhat.com Wed Dec 14 16:31:24 2016 From: lkrejci at redhat.com (Lukas Krejci) Date: Wed, 14 Dec 2016 22:31:24 +0100 Subject: [Hawkular-dev] moving okhttp library from 2.x to 3.x In-Reply-To: <1073256243.5564079.1481724051248.JavaMail.zimbra@redhat.com> References: <1447242712.5316061.1481653029964.JavaMail.zimbra@redhat.com> <453958488.5547276.1481722222512.JavaMail.zimbra@redhat.com> <1073256243.5564079.1481724051248.JavaMail.zimbra@redhat.com> Message-ID: <4341367.BtDjUVq4TO@localhost.localdomain> On Wednesday, December 14, 2016 9:00:51 AM CET John Mazzitelli wrote: > OK, I think I got everything - at least, from what I could find. > > There are now 7 pull requests on the following repos: > > * parent-pom > * commons > * inventory > * agent > * services > * apm > * datamining > > If everyone peer-reviews and agrees, I can merge and release the parent-pom. > This puts our okhttp version up to 3.4.2 and will let the commons PR run > til green. Once we see that pass, that would get merged and released and at > that point all the rest of the PRs can be re-tested via travis and > hopefully all go green and can be merged. > > So - I need to know if anyone has any reservations about releasing > parent-pom 51 with okhttp upgraded from 2.x to 3.4.2. Speak now or forever > hold your peace. > So you want me to review and agree on something that is never going to pass the tests until it's too late (i.e. until parent is released)? ;) > ----- Original Message ----- > > > I just submitted a PR for h-services, and currently working on testing for > > apm and datamining - I'll submit PRs for those soon. > > > > Just FYI: All of these PRs will be red in travis because I haven't > > released > > parent-pom with the new okhttp dep. I wanted to get all the PRs in, see > > them all pass on my box, and then ask that we merge them all in an > > orchestrated dance. So I just need the stakeholders to peer review the > > code changes. > > > > ----- Original Message ----- > > > > > We have been using version 2.x of the okhttp library and its associated > > > WebSocket library. 
> > > > > > Moving to the latest 3.x stream would keep us up-to-date and would be > > > useful > > > because we recently saw some odd behavior where the websocket library > > > was > > > spitting out warnings about resources leaking and that problem I think > > > is > > > fixed in the 3.x versions. > > > > > > So I wrote this JIRA: https://issues.jboss.org/browse/HAWKULAR-1138 > > > > > > There are four PRs associated with that JIRA for parent-pom, commons, > > > inventory, and agent that I need peer reviewed. We then need to publish > > > these in an organized fashion (parent-pom first, then we move commons > > > pulling in the new parent pom, then inventory and agent pulling in the > > > new > > > commons and parent pom). > > > > > > Also: Metrics: I noticed hawkular-metrics defines a property for a VERY > > > old > > > okhttp version (2.0.0) but it doesn't seem to even be using it. See > > > https://github.com/hawkular/hawkular-metrics/search?q=squareup - I think > > > metrics should get rid of that obsolete version property definition. > > > > > > Does anyone know of anywhere else we are using okhttp? > > > > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > hawkular-dev mailing list > > > hawkular-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev -- Lukas Krejci From mazz at redhat.com Wed Dec 14 18:27:34 2016 From: mazz at redhat.com (John Mazzitelli) Date: Wed, 14 Dec 2016 18:27:34 -0500 (EST) Subject: [Hawkular-dev] moving okhttp library from 2.x to 3.x In-Reply-To: <4341367.BtDjUVq4TO@localhost.localdomain> References: <1447242712.5316061.1481653029964.JavaMail.zimbra@redhat.com> <453958488.5547276.1481722222512.JavaMail.zimbra@redhat.com> <1073256243.5564079.1481724051248.JavaMail.zimbra@redhat.com> <4341367.BtDjUVq4TO@localhost.localdomain> Message-ID: <561624381.5717694.1481758054760.JavaMail.zimbra@redhat.com> > > So - I need to know if anyone has any reservations about releasing > > parent-pom 51 with okhttp upgraded from 2.x to 3.4.2. Speak now or forever > > hold your peace. Obviously after parent-pom and commons goes green, I'm going to restart all the PR travis builds to make sure they go green before merging anything :p But I'm more concerned about someone saying, "These changes you made are no good because..." or "I can't use this okhttp3 because..." From jtakvori at redhat.com Thu Dec 15 03:06:31 2016 From: jtakvori at redhat.com (Joel Takvorian) Date: Thu, 15 Dec 2016 09:06:31 +0100 Subject: [Hawkular-dev] moving okhttp library from 2.x to 3.x In-Reply-To: <3661a3a4-a9b2-2a7a-78b1-7f40f06d07a0@redhat.com> References: <1447242712.5316061.1481653029964.JavaMail.zimbra@redhat.com> <453958488.5547276.1481722222512.JavaMail.zimbra@redhat.com> <1073256243.5564079.1481724051248.JavaMail.zimbra@redhat.com> <1490200221.5590094.1481729234104.JavaMail.zimbra@redhat.com> <3661a3a4-a9b2-2a7a-78b1-7f40f06d07a0@redhat.com> Message-ID: On Wed, Dec 14, 2016 at 7:16 PM, Jay Shaughnessy wrote: > > Joel, we may need one for Alerts as well? 
> > You already merged it ;) https://github.com/hawkular/hawkular-alerts/pull/271 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161215/923432f1/attachment.html From mazz at redhat.com Thu Dec 15 07:52:32 2016 From: mazz at redhat.com (John Mazzitelli) Date: Thu, 15 Dec 2016 07:52:32 -0500 (EST) Subject: [Hawkular-dev] moving okhttp library from 2.x to 3.x In-Reply-To: <561624381.5717694.1481758054760.JavaMail.zimbra@redhat.com> References: <1447242712.5316061.1481653029964.JavaMail.zimbra@redhat.com> <453958488.5547276.1481722222512.JavaMail.zimbra@redhat.com> <1073256243.5564079.1481724051248.JavaMail.zimbra@redhat.com> <4341367.BtDjUVq4TO@localhost.localdomain> <561624381.5717694.1481758054760.JavaMail.zimbra@redhat.com> Message-ID: <46295764.5834850.1481806352219.JavaMail.zimbra@redhat.com> I'm going to start the upgrading of the okhttp library soon. The plan of attack is as follows (BTW: its a PITA having to upgrade a common library thanks to all our repositories :) 1) Merge and release parent-pom 51 2) Pin h-commons to parent-pom 51, rebase h-commons PR, and see travis go green. Then merge and release h-commons. 3) Pin to parent-pom 51 and the new commons release on all the other PRs for all the other repositories. Wait for them to go green. 4) Merge (or ask for them to be merged) the PRs once they go green. 5) Project leads can then release as they see fit. (I, myself, will try to release the wildfly agent soon) From mazz at redhat.com Thu Dec 15 23:22:35 2016 From: mazz at redhat.com (John Mazzitelli) Date: Thu, 15 Dec 2016 23:22:35 -0500 (EST) Subject: [Hawkular-dev] srcdeps is apparently broken or at least not working on travis In-Reply-To: <1303535750.5984634.1481861892835.JavaMail.zimbra@redhat.com> Message-ID: <661845884.5984676.1481862155404.JavaMail.zimbra@redhat.com> I can build this locally fine. However, srcdep plugin when running on travis is failing to compile h-inventory. See the tons of compile errors here: https://travis-ci.org/hawkular/hawkular-agent#L391 We need to either: a) fix what is wrong with srcdep and travis b) release inventory (and other dependencies in order to build things further downstream) so we don't use srcdeps At this point, my okhttp upgrade is dead in the water since I can't get the h-agent or h-services repos to go green. From mazz at redhat.com Thu Dec 15 23:31:29 2016 From: mazz at redhat.com (John Mazzitelli) Date: Thu, 15 Dec 2016 23:31:29 -0500 (EST) Subject: [Hawkular-dev] srcdeps is apparently broken or at least not working on travis In-Reply-To: <661845884.5984676.1481862155404.JavaMail.zimbra@redhat.com> References: <661845884.5984676.1481862155404.JavaMail.zimbra@redhat.com> Message-ID: <1863445073.5984803.1481862689551.JavaMail.zimbra@redhat.com> And here is the errors in h-services - it doesn't even look like srcdeps is attempting to build them here: https://travis-ci.org/hawkular/hawkular-services/builds/184443394#L411-L413 ----- Original Message ----- > I can build this locally fine. However, srcdep plugin when running on travis > is failing to compile h-inventory. 
> > See the tons of compile errors here: > > https://travis-ci.org/hawkular/hawkular-agent#L391 > > We need to either: > > a) fix what is wrong with srcdep and travis > b) release inventory (and other dependencies in order to build things further > downstream) so we don't use srcdeps > > At this point, my okhttp upgrade is dead in the water since I can't get the > h-agent or h-services repos to go green. > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > From ppalaga at redhat.com Fri Dec 16 09:20:23 2016 From: ppalaga at redhat.com (Peter Palaga) Date: Fri, 16 Dec 2016 15:20:23 +0100 Subject: [Hawkular-dev] srcdeps is apparently broken or at least not working on travis In-Reply-To: <1863445073.5984803.1481862689551.JavaMail.zimbra@redhat.com> References: <661845884.5984676.1481862155404.JavaMail.zimbra@redhat.com> <1863445073.5984803.1481862689551.JavaMail.zimbra@redhat.com> Message-ID: Hi Mazz, given that (1) I can [1] build the PR278 [2] locally using Oracle Java 1.8.0_92 (2) The error message on Travis comes from the compiler, the underlying Java being quite ancient 1.8.0_31 (3) Inventory's Travis can also build and also uses a newer Java 1.8.0_111 I conclude that the old Java 1.8.0_92 on Agent's Travis is the main suspect. You should find a way to upgrade it. [1] Well I cannot fully build the PR278 locally - the build is not fully passing, but the srcdeps build of Inventory finishes successfully and I get a different non-srcdeps error later in the process. [2] https://github.com/hawkular/hawkular-agent/pull/278 Best, Peter On 2016-12-16 05:31, John Mazzitelli wrote: > And here is the errors in h-services - it doesn't even look like srcdeps is attempting to build them here: > > https://travis-ci.org/hawkular/hawkular-services/builds/184443394#L411-L413 > > ----- Original Message ----- >> I can build this locally fine. However, srcdep plugin when running on travis >> is failing to compile h-inventory. >> >> See the tons of compile errors here: >> >> https://travis-ci.org/hawkular/hawkular-agent#L391 >> >> We need to either: >> >> a) fix what is wrong with srcdep and travis >> b) release inventory (and other dependencies in order to build things further >> downstream) so we don't use srcdeps >> >> At this point, my okhttp upgrade is dead in the water since I can't get the >> h-agent or h-services repos to go green. >> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > From danielkza2 at gmail.com Fri Dec 16 09:23:54 2016 From: danielkza2 at gmail.com (Daniel Miranda) Date: Fri, 16 Dec 2016 12:23:54 -0200 Subject: [Hawkular-dev] Hawkular-metrics resource requirements questions In-Reply-To: <60DDD6B9-711E-46C1-BBA1-07595BE814E7@redhat.com> References: <60DDD6B9-711E-46C1-BBA1-07595BE814E7@redhat.com> Message-ID: <2da5fa89-f273-ba7f-2ac2-46f1c40f2de7@gmail.com> Thank you very much John, that is, indeed very helpful. It seems compression will be exactly what I'm looking for for long-term storage. 
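For anyone wanting to run a comparable ingest test against Hawkular Metrics, raw gauge data points can be POSTed to its REST API. The sketch below is not from this thread; it uses the okhttp 3.x builder API being discussed elsewhere on this list (in 2.x the client was mutated via setters, in 3.x it is immutable and configured through OkHttpClient.Builder). The host, tenant name, metric id and endpoint path are assumptions based on the Hawkular Metrics REST API and may need adjusting for a given deployment.

```java
import okhttp3.MediaType;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.RequestBody;
import okhttp3.Response;

public class MetricsIngestSketch {
    public static void main(String[] args) throws Exception {
        // okhttp 3.x: the client is immutable and is configured via its Builder.
        OkHttpClient client = new OkHttpClient.Builder().build();
        // One gauge metric ("test.gauge.1" is a made-up id) with a single data point.
        String json = "[{\"id\":\"test.gauge.1\",\"data\":[{\"timestamp\":"
                + System.currentTimeMillis() + ",\"value\":42.0}]}]";
        Request request = new Request.Builder()
                .url("http://localhost:8080/hawkular/metrics/gauges/raw") // assumed endpoint
                .header("Hawkular-Tenant", "perf-test")                   // assumed tenant
                .post(RequestBody.create(MediaType.parse("application/json"), json))
                .build();
        try (Response response = client.newCall(request).execute()) {
            System.out.println("HTTP " + response.code());
        }
    }
}
```

Looping this with larger batches and several threads gives a rough data-points-per-second figure that can be compared with the KairosDB numbers in this thread.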
I did some tests with KairosDB in the meantime, and it seems it can sustain ~20K data points/s with a 3-node Cassandra cluster of t2.large AWS instances (no provisioned IOPS, just 20GB of standard EBS storage). I'll to do some similar tests with Hawkular and report my findings. Can you share the program that you used for the simulated testing, so that I can try a similar pattern with KairosDB? Regards, Daniel From mazz at redhat.com Fri Dec 16 09:26:37 2016 From: mazz at redhat.com (John Mazzitelli) Date: Fri, 16 Dec 2016 09:26:37 -0500 (EST) Subject: [Hawkular-dev] srcdeps is apparently broken or at least not working on travis In-Reply-To: References: <661845884.5984676.1481862155404.JavaMail.zimbra@redhat.com> <1863445073.5984803.1481862689551.JavaMail.zimbra@redhat.com> Message-ID: <4938181.6128368.1481898397784.JavaMail.zimbra@redhat.com> What about the h-services build? It doesn't even seem to be doing a srcdep build? It's just saying the dep cannot be resolved: https://travis-ci.org/hawkular/hawkular-services/builds/184443394#L411-L413 ----- Original Message ----- > Hi Mazz, > > given that > > (1) I can [1] build the PR278 [2] locally using Oracle Java 1.8.0_92 > (2) The error message on Travis comes from the compiler, the underlying > Java being quite ancient 1.8.0_31 > (3) Inventory's Travis can also build and also uses a newer Java > 1.8.0_111 > > I conclude that the old Java 1.8.0_92 on Agent's Travis is the main > suspect. You should find a way to upgrade it. > > [1] Well I cannot fully build the PR278 locally - the build is not fully > passing, but the srcdeps build of Inventory finishes successfully and I > get a different non-srcdeps error later in the process. > > [2] https://github.com/hawkular/hawkular-agent/pull/278 > > Best, > > Peter > > On 2016-12-16 05:31, John Mazzitelli wrote: > > And here is the errors in h-services - it doesn't even look like srcdeps is > > attempting to build them here: > > > > https://travis-ci.org/hawkular/hawkular-services/builds/184443394#L411-L413 > > > > ----- Original Message ----- > >> I can build this locally fine. However, srcdep plugin when running on > >> travis > >> is failing to compile h-inventory. > >> > >> See the tons of compile errors here: > >> > >> https://travis-ci.org/hawkular/hawkular-agent#L391 > >> > >> We need to either: > >> > >> a) fix what is wrong with srcdep and travis > >> b) release inventory (and other dependencies in order to build things > >> further > >> downstream) so we don't use srcdeps > >> > >> At this point, my okhttp upgrade is dead in the water since I can't get > >> the > >> h-agent or h-services repos to go green. 
> >> > >> _______________________________________________ > >> hawkular-dev mailing list > >> hawkular-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/hawkular-dev > >> > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > From mazz at redhat.com Fri Dec 16 09:55:54 2016 From: mazz at redhat.com (John Mazzitelli) Date: Fri, 16 Dec 2016 09:55:54 -0500 (EST) Subject: [Hawkular-dev] srcdeps is apparently broken or at least not working on travis In-Reply-To: <4938181.6128368.1481898397784.JavaMail.zimbra@redhat.com> References: <661845884.5984676.1481862155404.JavaMail.zimbra@redhat.com> <1863445073.5984803.1481862689551.JavaMail.zimbra@redhat.com> <4938181.6128368.1481898397784.JavaMail.zimbra@redhat.com> Message-ID: <1698247176.6134748.1481900154121.JavaMail.zimbra@redhat.com> I changed the travis config to hopefully get a newer java, but now I get this: https://travis-ci.org/hawkular/hawkular-agent/builds/184547424#L493 [ERROR] Failed to execute goal on project hawkular-inventory-impl-tinkerpop-sql-provider: Could not resolve dependencies for project org.hawkular.inventory:hawkular-inventory-impl-tinkerpop-sql-provider:jar:1.1.3.Final-SRC-revision-b9baf812c880565fd135540d6565172e9badb642: Failed to collect dependencies at org.umlg:sqlg-h2-dialect:jar:1.3.2-SRC-revision-b8cbea0f96fcbbd5150e7a4f9c469850b9973331 -> org.umlg:sqlg-core:jar:1.3.2-SNAPSHOT: Failed to read artifact descriptor for org.umlg:sqlg-core:jar:1.3.2-SNAPSHOT: Could not transfer artifact org.umlg:sqlg-core:pom:1.3.2-SNAPSHOT from/to codehaus-snapshots (https://nexus.codehaus.org/snapshots/): nexus.codehaus.org: Unknown host nexus.codehaus.org -> [Help 1] So I'm going to wait for inventory to release and then just pin the agent on that inventory release rather than try to get the srcdep to build. Then we can worry about h-services. Lukas - how soon will inventory be released? :) > ----- Original Message ----- > > Hi Mazz, > > > > given that > > > > (1) I can [1] build the PR278 [2] locally using Oracle Java 1.8.0_92 > > (2) The error message on Travis comes from the compiler, the underlying > > Java being quite ancient 1.8.0_31 > > (3) Inventory's Travis can also build and also uses a newer Java > > 1.8.0_111 > > > > I conclude that the old Java 1.8.0_92 on Agent's Travis is the main > > suspect. You should find a way to upgrade it. > > > > [1] Well I cannot fully build the PR278 locally - the build is not fully > > passing, but the srcdeps build of Inventory finishes successfully and I > > get a different non-srcdeps error later in the process. > > > > [2] https://github.com/hawkular/hawkular-agent/pull/278 > > > > Best, > > > > Peter > > > > On 2016-12-16 05:31, John Mazzitelli wrote: > > > And here is the errors in h-services - it doesn't even look like srcdeps > > > is > > > attempting to build them here: > > > > > > https://travis-ci.org/hawkular/hawkular-services/builds/184443394#L411-L413 > > > > > > ----- Original Message ----- > > >> I can build this locally fine. However, srcdep plugin when running on > > >> travis > > >> is failing to compile h-inventory. 
> > >> > > >> See the tons of compile errors here: > > >> > > >> https://travis-ci.org/hawkular/hawkular-agent#L391 > > >> > > >> We need to either: > > >> > > >> a) fix what is wrong with srcdep and travis > > >> b) release inventory (and other dependencies in order to build things > > >> further > > >> downstream) so we don't use srcdeps > > >> > > >> At this point, my okhttp upgrade is dead in the water since I can't get > > >> the > > >> h-agent or h-services repos to go green. > > >> > > >> _______________________________________________ > > >> hawkular-dev mailing list > > >> hawkular-dev at lists.jboss.org > > >> https://lists.jboss.org/mailman/listinfo/hawkular-dev > > >> > > > _______________________________________________ > > > hawkular-dev mailing list > > > hawkular-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > > > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > From jsanda at redhat.com Fri Dec 16 11:20:31 2016 From: jsanda at redhat.com (John Sanda) Date: Fri, 16 Dec 2016 11:20:31 -0500 Subject: [Hawkular-dev] Hawkular-metrics resource requirements questions In-Reply-To: <2da5fa89-f273-ba7f-2ac2-46f1c40f2de7@gmail.com> References: <60DDD6B9-711E-46C1-BBA1-07595BE814E7@redhat.com> <2da5fa89-f273-ba7f-2ac2-46f1c40f2de7@gmail.com> Message-ID: <71E701DB-36F1-4FF7-A648-DFA42431E7DB@redhat.com> > On Dec 16, 2016, at 9:23 AM, Daniel Miranda wrote: > > Thank you very much John, that is, indeed very helpful. It seems compression will be exactly what I'm looking for for long-term storage. > > I did some tests with KairosDB in the meantime, and it seems it can sustain ~20K data points/s with a 3-node Cassandra cluster of t2.large AWS instances (no provisioned IOPS, just 20GB of standard EBS storage). > I'll to do some similar tests with Hawkular and report my findings. > > Can you share the program that you used for the simulated testing, so that I can try a similar pattern with KairosDB? > > Regards, > Daniel The test lives in my repo in branch named generate-data[1]. The test is named GenerateDataITest.java[2]. * Generating raw/uncompressed data Checkout the generate-data branch from my repo. `mvn install -DskipTests -Dlicense.skip -Dcheckstyle.skip` (you only need to build a handful of modules, but this is easier since it reduces number of steps) `cd core/metrics-core-service` `mvn verify -Dit.test=GenerateDataITest` This will generate 7 days of raw data for 5,000 metrics with a data point for every 10 seconds. It make take some time to finish. If the test encounters any errors like a write timeout, it will abort. When the test finishes, run `nodetool drain`. Measure the size of the /hawkulartest/data-* directory. * Generating compressed data This will reuse the raw data generated from the previous steps. Restart Cassandra (has to be restarted since you did a drain) `mvn verify -Dit.test=GenerateDataITest -Dcompress` When the test finishes, run `nodetool drain`. Measure the size of the /hawkulartest/data_compressed* directory. [1] https://github.com/jsanda/hawkular-metrics/tree/generate-data [2] https://github.com/jsanda/hawkular-metrics/blob/77ed5345b145a3c1f1d4c17885d9ebd31a18421b/core/metrics-core-service/src/test/java/org/hawkular/metrics/core/impl/GenerateDataITest.java -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161216/c7cd9a02/attachment.html From snegrea at redhat.com Fri Dec 16 11:22:10 2016 From: snegrea at redhat.com (snegrea at redhat.com) Date: Fri, 16 Dec 2016 16:22:10 +0000 Subject: [Hawkular-dev] Invitation: Hawkular Metrics - 2017 Roadmap - Ideas @ Wed Dec 21, 2016 8am - 9am (CST) (hawkular-dev@lists.jboss.org) Message-ID: <001a113f6cccd0c7290543c8f89c@google.com> You have been invited to the following event. Title: Hawkular Metrics - 2017 Roadmap - Ideas If you have any ideas or feature requests for Hawkular Metrics please join and share these with the Hawkular Metrics team. This is an open session to collect community feedback for 2017 for Hawkular Metrics. If you cannot attend please add your ideas to: http://jbosson.etherpad.corp.redhat.com/290 When: Wed Dec 21, 2016 8am - 9am Central Time Where: http://bluejeans.com/3980552127 Calendar: hawkular-dev at lists.jboss.org Who: * snegrea at redhat.com - creator * miburman at redhat.com * jsanda at redhat.com * mwringe at redhat.com * hawkular-dev at lists.jboss.org -------------- next part -------------- A non-text attachment was scrubbed... 
Name: invite.ics Type: application/ics Size: 1996 bytes Desc: not available Url : http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161216/27fc170f/attachment-0003.bin From ppalaga at redhat.com Fri Dec 16 15:04:35 2016 From: ppalaga at redhat.com (Peter Palaga) Date: Fri, 16 Dec 2016 21:04:35 +0100 Subject: [Hawkular-dev] srcdeps is apparently broken or at least not working on travis In-Reply-To: <1698247176.6134748.1481900154121.JavaMail.zimbra@redhat.com> References: <661845884.5984676.1481862155404.JavaMail.zimbra@redhat.com> <1863445073.5984803.1481862689551.JavaMail.zimbra@redhat.com> <4938181.6128368.1481898397784.JavaMail.zimbra@redhat.com> <1698247176.6134748.1481900154121.JavaMail.zimbra@redhat.com> Message-ID: <1ef12385-7503-9ec2-0104-81ec451df239@redhat.com> On 2016-12-16 15:55, John Mazzitelli wrote: > I changed the travis config to hopefully get a newer java, but now I get this: > > https://travis-ci.org/hawkular/hawkular-agent/builds/184547424#L493 > > [ERROR] Failed to execute goal on project hawkular-inventory-impl-tinkerpop-sql-provider:Could not resolve dependencies for project org.hawkular.inventory:hawkular-inventory-impl-tinkerpop-sql-provider:jar:1.1.3.Final-SRC-revision-b9baf812c880565fd135540d6565172e9badb642: Failed to collect dependencies at org.umlg:sqlg-h2-dialect:jar:1.3.2-SRC-revision-b8cbea0f96fcbbd5150e7a4f9c469850b9973331 -> org.umlg:sqlg-core:jar:1.3.2-SNAPSHOT: Failed to read artifact descriptor for org.umlg:sqlg-core:jar:1.3.2-SNAPSHOT: Could not transfer artifact org.umlg:sqlg-core:pom:1.3.2-SNAPSHOT from/to codehaus-snapshots (https://nexus.codehaus.org/snapshots/): nexus.codehaus.org: Unknown host nexus.codehaus.org -> [Help 1] Looks like this one: https://github.com/travis-ci/travis-ci/issues/4629 The workaround is cp -t ~/.m2 .travis.maven.settings.xml > So I'm going to wait for inventory to release and then just pin the agent on that inventory release rather than try to get the srcdep to build. A release of Inventory should also be a valid solution but Luk?? should take care to switch to an org.umlg release before he releases Inventory. ATM, org.umlg is a source dependency of Inventory and a release of Inventory will fail because of that. -- P > Then we can worry about h-services. > > Lukas - how soon will inventory be released? :) > >> ----- Original Message ----- >>> Hi Mazz, >>> >>> given that >>> >>> (1) I can [1] build the PR278 [2] locally using Oracle Java 1.8.0_92 >>> (2) The error message on Travis comes from the compiler, the underlying >>> Java being quite ancient 1.8.0_31 >>> (3) Inventory's Travis can also build and also uses a newer Java >>> 1.8.0_111 >>> >>> I conclude that the old Java 1.8.0_92 on Agent's Travis is the main >>> suspect. You should find a way to upgrade it. >>> >>> [1] Well I cannot fully build the PR278 locally - the build is not fully >>> passing, but the srcdeps build of Inventory finishes successfully and I >>> get a different non-srcdeps error later in the process. >>> >>> [2] https://github.com/hawkular/hawkular-agent/pull/278 >>> >>> Best, >>> >>> Peter >>> >>> On 2016-12-16 05:31, John Mazzitelli wrote: >>>> And here is the errors in h-services - it doesn't even look like srcdeps >>>> is >>>> attempting to build them here: >>>> >>>> https://travis-ci.org/hawkular/hawkular-services/builds/184443394#L411-L413 >>>> >>>> ----- Original Message ----- >>>>> I can build this locally fine. However, srcdep plugin when running on >>>>> travis >>>>> is failing to compile h-inventory. 
>>>>> >>>>> See the tons of compile errors here: >>>>> >>>>> https://travis-ci.org/hawkular/hawkular-agent#L391 >>>>> >>>>> We need to either: >>>>> >>>>> a) fix what is wrong with srcdep and travis >>>>> b) release inventory (and other dependencies in order to build things >>>>> further >>>>> downstream) so we don't use srcdeps >>>>> >>>>> At this point, my okhttp upgrade is dead in the water since I can't get >>>>> the >>>>> h-agent or h-services repos to go green. >>>>> >>>>> _______________________________________________ >>>>> hawkular-dev mailing list >>>>> hawkular-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>>> >>>> _______________________________________________ >>>> hawkular-dev mailing list >>>> hawkular-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>> >>> >>> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > From jshaughn at redhat.com Fri Dec 16 15:44:08 2016 From: jshaughn at redhat.com (Jay Shaughnessy) Date: Fri, 16 Dec 2016 15:44:08 -0500 Subject: [Hawkular-dev] Move to WF 10.1.0? Message-ID: I noticed that on Openshift we are running Hawkular Metrics on WildFly 10.1.0. It was upped from 10.0.0 several months ago due to a blocking issue that had been fixed in EAP but not WF 10.0. I ran into a new issue when trying to deploy Metrics master on OS Origin. It failed to deploy on WF 10.1.0. I was able to solve the issue without a major change but it called out the fact that we are building Hawkular against WF 10.0.1 bom and running itests against 10.0.0 server. Because OS is a primary target platform I'm wondering if we should bump the parent pom deps to the 10.1.0 bom and server (as well as upping a few related deps as well, like ISPN). As part of my investigation I did this locally for parent pom, commons, alerting and metrics and did not see any issues. Thoughts? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161216/301868a1/attachment.html From mwringe at redhat.com Fri Dec 16 16:06:40 2016 From: mwringe at redhat.com (Matt Wringe) Date: Fri, 16 Dec 2016 16:06:40 -0500 (EST) Subject: [Hawkular-dev] Move to WF 10.1.0? In-Reply-To: References: Message-ID: <1055033961.8119418.1481922400538.JavaMail.zimbra@redhat.com> ----- Original Message ----- > From: "Jay Shaughnessy" > To: "Discussions around Hawkular development" > Sent: Friday, 16 December, 2016 3:44:08 PM > Subject: [Hawkular-dev] Move to WF 10.1.0? > > > I noticed that on Openshift we are running Hawkular Metrics on WildFly > 10.1.0. It was upped from 10.0.0 several months ago due to a blocking issue > that had been fixed in EAP but not WF 10.0. I ran into a new issue when > trying to deploy Metrics master on OS Origin. It failed to deploy on WF > 10.1.0. I was able to solve the issue without a major change but it called > out the fact that we are building Hawkular against WF 10.0.1 bom and running > itests against 10.0.0 server. > > Because OS is a primary target platform I'm wondering if we should bump the > parent pom deps to the 10.1.0 bom and server (as well as upping a few > related deps as well, like ISPN). 
As part of my investigation I did this > locally for parent pom, commons, alerting and metrics and did not see any > issues. > > Thoughts? We should as a policy be moving to newer Wildfly instances once they are available. Wildfly doesn't back port fixes. For the OpenShift case, the issue we ran into was https://issues.jboss.org/browse/UNDERTOW-472 (see https://bugzilla.redhat.com/show_bug.cgi?id=1366018#c31). And it was serious enough that we updated our own image to the 10.1.0.CR1 instead of waiting for the official 10.1.0 images to be available. In the future we should probably open jiras with Hawkular to make sure that we are more in sync. From mazz at redhat.com Fri Dec 16 17:20:38 2016 From: mazz at redhat.com (John Mazzitelli) Date: Fri, 16 Dec 2016 17:20:38 -0500 (EST) Subject: [Hawkular-dev] srcdep changes? In-Reply-To: <1175012890.6251365.1481926571101.JavaMail.zimbra@redhat.com> Message-ID: <393117476.6251557.1481926838167.JavaMail.zimbra@redhat.com> I just found out the problem I'm having with srcdeps in h-services is because this wasn't merged: https://github.com/hawkular/hawkular-services/pull/104 Once I merged that locally, I am able to put in the SRC-revision-### in the version string and it works. But this brings up a question: what changed? Can you not use srcdeps anymore by simply adding SRC-revision-### in the version string? Because I see now a complicated .mvn directory with extensions.xml and srcdeps.yaml configuration files... did we have srcdep config files before? I didn't really look closely at the srcdep and mvn changes that went in, but I guess I should have. Is this now more complicated to use than simply changing a version string to include SRC-revision?? Because that was really nice and easy to use (almost magical :-) From jsanda at redhat.com Fri Dec 16 20:54:59 2016 From: jsanda at redhat.com (John Sanda) Date: Fri, 16 Dec 2016 20:54:59 -0500 Subject: [Hawkular-dev] approach for managing consistency with Cassandra Message-ID: This thread post https://goo.gl/8cpSwM from cassandra-users list has a good write up on an approach for implementing transactions across multiple tables in order to provide stronger consistency. I found it particularly interesting in light of the discussions of inventory and consistency. - John -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161216/c0b2247c/attachment.html From hrupp at redhat.com Sat Dec 17 06:23:35 2016 From: hrupp at redhat.com (Heiko W.Rupp) Date: Sat, 17 Dec 2016 12:23:35 +0100 Subject: [Hawkular-dev] Move to WF 10.1.0? In-Reply-To: References: Message-ID: <77B41126-FC14-41FD-BAC7-0177C6C2A265@redhat.com> On 16 Dec 2016, at 21:44, Jay Shaughnessy wrote: > I noticed that on Openshift we are running Hawkular Metrics on WildFly > 10.1.0. It was upped from 10.0.0 several months The underlying question is if this uses any features/bugs/fixes that are (not) in EAP. If the EAP we use supports all this, that is in 10.1, then upgrading to WF 10.1 is a good move. From jtakvori at redhat.com Mon Dec 19 03:09:52 2016 From: jtakvori at redhat.com (Joel Takvorian) Date: Mon, 19 Dec 2016 09:09:52 +0100 Subject: [Hawkular-dev] approach for managing consistency with Cassandra In-Reply-To: References: Message-ID: Thanks John, very interesting reading. 
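A primitive that often comes up next to the multi-table scheme described in that post is Cassandra's lightweight transactions (conditional writes), which give a Paxos-backed compare-and-set within a single partition. The sketch below, using the DataStax Java driver, is not the approach from the linked write-up and not Hawkular code; it only illustrates the conditional-update building block, and the keyspace, table and column names are invented.

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;

public class LwtSketch {
    public static void main(String[] args) {
        // Hypothetical keyspace and table: entity(id text PRIMARY KEY, sync_hash text).
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("inventory_test")) {
            // Conditional update: only applied if the stored hash still matches the expected one.
            ResultSet rs = session.execute(
                    "UPDATE entity SET sync_hash = 'new-hash' " +
                    "WHERE id = 'resource-1' IF sync_hash = 'old-hash'");
            // wasApplied() reports whether the IF condition held and the write went through.
            System.out.println("applied: " + rs.wasApplied());
        }
    }
}
```

Lightweight transactions serialize through Paxos and are considerably slower than plain writes, so they are usually reserved for the narrow paths where a read-modify-write race actually matters.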
No luck for us in inventory, as there's essentially updates rather than inserts, which is a bit more complicated than the solution described: "generally the client needs to be smart enough to merge updates based on a timestamp, with a periodic batch job that cleans out obsolete inserts" But now we're considering the alternative of reading/writing the whole graph at once and process queries in memory. If "the whole graph" is too big to fit in memory without problems, then we should find a way to partition it. On Sat, Dec 17, 2016 at 2:54 AM, John Sanda wrote: > This thread post https://goo.gl/8cpSwM from cassandra-users list has a > good write up on an approach for implementing transactions across multiple > tables in order to provide stronger consistency. I found it particularly > interesting in light of the discussions of inventory and consistency. > > - John > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161219/1112459d/attachment.html From hrupp at redhat.com Mon Dec 19 04:43:30 2016 From: hrupp at redhat.com (Heiko W.Rupp) Date: Mon, 19 Dec 2016 10:43:30 +0100 Subject: [Hawkular-dev] HAWKULAR Jira cleanup Message-ID: <9B1070D4-DDEA-407E-A95A-4F1610F18B54@redhat.com> Hey, I have cleaned up the HAWKULAR jira project and closed some outdated items. Can you please all have a look at the items you have either opened or are assigned to and see if they are still relevant and close them if not? Thanks Heiko From jsanda at redhat.com Mon Dec 19 09:24:24 2016 From: jsanda at redhat.com (John Sanda) Date: Mon, 19 Dec 2016 09:24:24 -0500 Subject: [Hawkular-dev] approach for managing consistency with Cassandra In-Reply-To: References: Message-ID: <45115BB7-897D-43D0-9A16-D54E58997F5F@redhat.com> > On Dec 19, 2016, at 3:09 AM, Joel Takvorian wrote: > > Thanks John, very interesting reading. > > No luck for us in inventory, as there's essentially updates rather than inserts, which is a bit more complicated than the solution described: "generally the client needs to be smart enough to merge updates based on a timestamp, with a periodic batch job that cleans out obsolete inserts? Can you explain more what you mean about there being updates rather than inserts? Inserts and updates are the same in Cassandra. Think of a put operation on a map. > > But now we're considering the alternative of reading/writing the whole graph at once and process queries in memory. If "the whole graph" is too big to fit in memory without problems, then we should find a way to partition it. > > > On Sat, Dec 17, 2016 at 2:54 AM, John Sanda > wrote: > This thread post https://goo.gl/8cpSwM from cassandra-users list has a good write up on an approach for implementing transactions across multiple tables in order to provide stronger consistency. I found it particularly interesting in light of the discussions of inventory and consistency. > > - John > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161219/f940af00/attachment.html From jtakvori at redhat.com Tue Dec 20 04:46:22 2016 From: jtakvori at redhat.com (Joel Takvorian) Date: Tue, 20 Dec 2016 10:46:22 +0100 Subject: [Hawkular-dev] approach for managing consistency with Cassandra In-Reply-To: <45115BB7-897D-43D0-9A16-D54E58997F5F@redhat.com> References: <45115BB7-897D-43D0-9A16-D54E58997F5F@redhat.com> Message-ID: Regardless the underlying mechanisms, there's still a logical distinction between updates and inserts, and here the proposed algorithms seem relevant, if I understand, only for inserted rows: step 4 of reading says "The client code culls rows where the transactionUUID existed in the IncompleteTransactions table." which means, applied to the Inventory case, that it will not return any Entity that is currently modified in an ongoing transaction. Which would not be the desired behaviour for us, we'd like to have the pre-transaction state. It's explicitly written in the post, quoting: "This is just an example, one that is reasonably performant for ledger-style *non-updated inserts*. For *transactions involving updates* to possibly existing data, more effort is required, generally the client needs to be smart enough to merge updates based on a timestamp, with a periodic batch job that cleans out obsolete inserts." Or is there something I haven't understood here? On Mon, Dec 19, 2016 at 3:24 PM, John Sanda wrote: > > On Dec 19, 2016, at 3:09 AM, Joel Takvorian wrote: > > Thanks John, very interesting reading. > > No luck for us in inventory, as there's essentially updates rather than > inserts, which is a bit more complicated than the solution described: "generally > the client needs to be smart enough to merge updates based on a timestamp, > with a periodic batch job that cleans out obsolete inserts? > > > Can you explain more what you mean about there being updates rather than > inserts? Inserts and updates are the same in Cassandra. Think of a put > operation on a map. > > > But now we're considering the alternative of reading/writing the whole > graph at once and process queries in memory. If "the whole graph" is too > big to fit in memory without problems, then we should find a way to > partition it. > > > On Sat, Dec 17, 2016 at 2:54 AM, John Sanda wrote: > >> This thread post https://goo.gl/8cpSwM from cassandra-users list has a >> good write up on an approach for implementing transactions across multiple >> tables in order to provide stronger consistency. I found it particularly >> interesting in light of the discussions of inventory and consistency. >> >> - John >> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> >> > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161220/6e55f190/attachment.html From lkrejci at redhat.com Wed Dec 21 04:15:42 2016 From: lkrejci at redhat.com (Lukas Krejci) Date: Wed, 21 Dec 2016 10:15:42 +0100 Subject: [Hawkular-dev] srcdeps is apparently broken or at least not working on travis In-Reply-To: <1ef12385-7503-9ec2-0104-81ec451df239@redhat.com> References: <661845884.5984676.1481862155404.JavaMail.zimbra@redhat.com> <1698247176.6134748.1481900154121.JavaMail.zimbra@redhat.com> <1ef12385-7503-9ec2-0104-81ec451df239@redhat.com> Message-ID: <741160642.HqSd3rFIAP@localhost.localdomain> On Friday, December 16, 2016 9:04:35 PM CET you wrote: > On 2016-12-16 15:55, John Mazzitelli wrote: > > I changed the travis config to hopefully get a newer java, but now I get > > this: > > > > https://travis-ci.org/hawkular/hawkular-agent/builds/184547424#L493 > > > > [ERROR] Failed to execute goal on project > > hawkular-inventory-impl-tinkerpop-sql-provider:Could not resolve > > dependencies for project > org.hawkular.inventory:hawkular-inventory-impl-tinkerpop-sql-provider:jar:1. > 1.3.Final-SRC-revision-b9baf812c880565fd135540d6565172e9badb642: Failed to > collect dependencies at > org.umlg:sqlg-h2-dialect:jar:1.3.2-SRC-revision-b8cbea0f96fcbbd5150e7a4f9c46 > 9850b9973331 -> org.umlg:sqlg-core:jar:1.3.2-SNAPSHOT: Failed to read > artifact > descriptor for org.umlg:sqlg-core:jar:1.3.2-SNAPSHOT: Could not transfer > artifact org.umlg:sqlg-core:pom:1.3.2-SNAPSHOT from/to > codehaus-snapshots (https://nexus.codehaus.org/snapshots/): > nexus.codehaus.org: Unknown host nexus.codehaus.org -> [Help 1] > > Looks like this one: https://github.com/travis-ci/travis-ci/issues/4629 > The workaround is > > cp -t ~/.m2 .travis.maven.settings.xml > > > So I'm going to wait for inventory to release and then just pin the agent > > on that inventory release rather than try to get the srcdep to build. > A release of Inventory should also be a valid solution but Luk?? should > take care to switch to an org.umlg release before he releases Inventory. > ATM, org.umlg is a source dependency of Inventory and a release of > Inventory will fail because of that. > Inventory has long depended on a srcdep of Sqlg because it is deemed OK to do that. So do I now have to depend on a non-srcdep for a release to work? > -- P > > > Then we can worry about h-services. > > > > Lukas - how soon will inventory be released? :) > > > >> ----- Original Message ----- > >> > >>> Hi Mazz, > >>> > >>> given that > >>> > >>> (1) I can [1] build the PR278 [2] locally using Oracle Java 1.8.0_92 > >>> (2) The error message on Travis comes from the compiler, the underlying > >>> > >>> Java being quite ancient 1.8.0_31 > >>> > >>> (3) Inventory's Travis can also build and also uses a newer Java > >>> > >>> 1.8.0_111 > >>> > >>> I conclude that the old Java 1.8.0_92 on Agent's Travis is the main > >>> suspect. You should find a way to upgrade it. > >>> > >>> [1] Well I cannot fully build the PR278 locally - the build is not fully > >>> passing, but the srcdeps build of Inventory finishes successfully and I > >>> get a different non-srcdeps error later in the process. 
> >>> > >>> [2] https://github.com/hawkular/hawkular-agent/pull/278 > >>> > >>> Best, > >>> > >>> Peter > >>> > >>> On 2016-12-16 05:31, John Mazzitelli wrote: > >>>> And here is the errors in h-services - it doesn't even look like > >>>> srcdeps > >>>> is > >>>> attempting to build them here: > >>>> > >>>> https://travis-ci.org/hawkular/hawkular-services/builds/184443394#L411-> >>>> L413 > >>>> > >>>> ----- Original Message ----- > >>>> > >>>>> I can build this locally fine. However, srcdep plugin when running on > >>>>> travis > >>>>> is failing to compile h-inventory. > >>>>> > >>>>> See the tons of compile errors here: > >>>>> > >>>>> https://travis-ci.org/hawkular/hawkular-agent#L391 > >>>>> > >>>>> We need to either: > >>>>> > >>>>> a) fix what is wrong with srcdep and travis > >>>>> b) release inventory (and other dependencies in order to build things > >>>>> further > >>>>> downstream) so we don't use srcdeps > >>>>> > >>>>> At this point, my okhttp upgrade is dead in the water since I can't > >>>>> get > >>>>> the > >>>>> h-agent or h-services repos to go green. > >>>>> > >>>>> _______________________________________________ > >>>>> hawkular-dev mailing list > >>>>> hawkular-dev at lists.jboss.org > >>>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev > >>>> > >>>> _______________________________________________ > >>>> hawkular-dev mailing list > >>>> hawkular-dev at lists.jboss.org > >>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev > >> > >> _______________________________________________ > >> hawkular-dev mailing list > >> hawkular-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev -- Lukas Krejci From jshaughn at redhat.com Thu Dec 22 14:47:37 2016 From: jshaughn at redhat.com (Jay Shaughnessy) Date: Thu, 22 Dec 2016 14:47:37 -0500 Subject: [Hawkular-dev] Hawkular Alerting 1.5.0.Final has been released! Message-ID: <8ede7264-c6da-5b4c-d973-b6368cf5850f@redhat.com> The Hawkular Alerting team is happy to announce the release of Hawkular Alerting 1.5.0.Final. This is a feature and fix release. Enhancement * [HWKALERTS-209 ] - Add new NelsonCondition for native Nelson Rule detection o A brand new condition type to perform automatic Nelson Rule detection of misbehaving metrics. * [HWKALERTS-207 ] - Allow ExternalCondition to be fired on Event submission o External conditions can now be matched via Event and Data submissions. Bug * [HWKALERTS-210 ] - Autoresolve does not work on clustering setup o Critical fix if using multi-condition triggers in a clustered environment! 
* [HWKALERTS-208 ] - Email plugin lacks some support for newer condition types Hawkular Alerting 1.5.0.Final is available: * Immediately as a standalone distribution * Soon as part of Hawkular Metrics 0.23.0.Final o or immediately if building Hawkular Metrics from source * Soon as part of Hawkular Services (supporting ManageIQ) o When Hawkular Services upgrades to Metrics 0.23.0.Final For more details about this release: https://issues.jboss.org/secure/ReleaseNote.jspa?projectId=12315924&version=12332918 http://www.hawkular.org/ http://www.hawkular.org/community/docs/developer-guide/alerts.html http://www.hawkular.org/docs/rest/rest-alerts.html https://github.com/hawkular/hawkular-alerts https://github.com/hawkular/hawkular-alerts/tree/master/examples #hawkular on freenode Hawkular Alerting Team Jay Shaughnessy (jshaughn at redhat.com) Lucas Ponce (lponce at redhat.com) -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161222/d3ae8d25/attachment-0001.html From tcunning at redhat.com Thu Dec 22 15:52:34 2016 From: tcunning at redhat.com (Thomas Cunningham) Date: Thu, 22 Dec 2016 15:52:34 -0500 Subject: [Hawkular-dev] HawkFX on Mac OS X issues? Message-ID: Hi, I'm trying to use HawkFX on Mac OS X - I have it working great on Fedora but would like to set it up on my Mac OS X box as well. I've got no experience with ruby or jruby so I may be doing something wrong here in installation, because I'm seeing the following : lilguylaptop:hawkfx cunningt$ jruby -S -G hawkfx.rb NameError: uninitialized constant G Did you mean? GC const_missing at org/jruby/RubyModule.java:3348
at -G: I installed rvm through homebrew and followed the current instructions in the README.adoc. Should I be installing from source using a tool other than homebrew to install rvm? It looks like I'm using rvm 1.28.0 and jruby 9.1.5.0. Does that seem right? Other possibly useful info: Mac OS X 10.12 java 1.8.0_31 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161222/b9c868e6/attachment.html From hrupp at redhat.com Fri Dec 23 03:29:09 2016 From: hrupp at redhat.com (Heiko W.Rupp) Date: Fri, 23 Dec 2016 09:29:09 +0100 Subject: [Hawkular-dev] HawkFX on Mac OS X issues? In-Reply-To: References: Message-ID: <17CA7990-BA77-4214-9185-173A94863F4E@redhat.com> On 22 Dec 2016, at 21:52, Thomas Cunningham wrote: > Hi, > > I'm trying to use HawkFX on Mac OS X - I have it working great on Fedora > but would like to set it up on my Mac OS X box as well. I've got no > experience with ruby or jruby so I may be doing something wrong here in > installation, because I'm seeing the following : > > lilguylaptop:hawkfx cunningt$ jruby -S -G hawkfx.rb Can you try to run -G -S hawkfx.rb (reverse the flags)? I am running on OS/X myself and it works (with the reversed flags). Otherwise "bundle exec jruby hawkfx.rb" should also work. Note that JRuby 9.1.6.0 has an issue, but you use 9.1.5.0 which is good. From kavinkankeshwar at gmail.com Thu Dec 22 14:59:41 2016 From: kavinkankeshwar at gmail.com (Kavin Kankeshwar) Date: Thu, 22 Dec 2016 11:59:41 -0800 Subject: [Hawkular-dev] Hawkular Usage and other stats In-Reply-To: References: Message-ID: Thanks, I'm trying to get my team to use Hawkular. I have Hawkular up and running, but I don't have a whole hello-world-style microservices project for Hawkular to show all the features available. Also, basically I am looking for something which allows troubleshooting and monitoring of performance, and allows drilling down to figure out what's happening. Along with that, I also want something which can be used for Alerting on APIs, whether services are up or down, and Metrics from apps. Basically I am looking for something that, for end users (ops/engineers/managers), would be a one-stop shop to show the entire health and numbers for the entire system. Just looking for some base to bootstrap from, and if some of our requirements are not there, either discuss with the community and/or propose/submit solutions back. If anyone has some demo projects and maybe docs/slides for the Hawkular ecosystem, I would love to get it up and running internally to start a conversation on what it can do, what it cannot do (for our use cases), etc. Regards, -- Kavin.Kankeshwar On Wed, Dec 14, 2016 at 2:28 AM, Thomas Heute wrote: > In terms of production-ready, we ship Hawkular with OpenShift to Red Hat > customers, and we are about to ship it to CloudForms customers as well. > > It's still in very active development though and we are head down in it, > so we haven't made a lot of noise on Hawkular so far. We lost our GitHub > star history on a repository rename, so that didn't help on that side :) > Feel free to star us ;) (The number of repos is also not really helping ;)) > > We welcome all contributions of course (or ideas, feedback), the more > agents/use cases we will have, the more the community will grow, at the moment > we have WildFly and OpenShift agents which are very important for us, but > other agents would definitely help community awareness. > > Could you tell us how you'd like to use Hawkular ?
> > Thomas > > > > > > On Tue, Dec 13, 2016 at 11:29 PM, Kavin Kankeshwar < > kavinkankeshwar at gmail.com> wrote: > >> Hi, >> >> I am evaluating Hawkular seems very interesting, I just wanted to know if >> you guys have some usage stats and community involvement ? >> >> I see see only few Github Stars etc, but the project is being actively >> developed based on commit history. >> >> Just wanted to figure out about hawkular if its production ready and I >> can start using it if needed at my workplace. Obviously once we start using >> if there are any changes I need i am willing to submit patches etc. But >> just wanted to check on stats before i dive in . :) >> >> Thanks! >> >> Regards, >> -- >> Kavin.Kankeshwar >> >> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> >> > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161222/e821a823/attachment.html From hrupp at redhat.com Fri Dec 23 04:40:35 2016 From: hrupp at redhat.com (Heiko W.Rupp) Date: Fri, 23 Dec 2016 10:40:35 +0100 Subject: [Hawkular-dev] Hawkular Alerting 1.5.0.Final has been released! In-Reply-To: <8ede7264-c6da-5b4c-d973-b6368cf5850f@redhat.com> References: <8ede7264-c6da-5b4c-d973-b6368cf5850f@redhat.com> Message-ID: <4629106A-B0E4-4850-AC76-B3D19F4F84A6@redhat.com> Cool, congrats! From theute at redhat.com Fri Dec 23 05:51:11 2016 From: theute at redhat.com (Thomas Heute) Date: Fri, 23 Dec 2016 11:51:11 +0100 Subject: [Hawkular-dev] Hawkular Alerting 1.5.0.Final has been released! In-Reply-To: <8ede7264-c6da-5b4c-d973-b6368cf5850f@redhat.com> References: <8ede7264-c6da-5b4c-d973-b6368cf5850f@redhat.com> Message-ID: ? On Thu, Dec 22, 2016 at 8:47 PM, Jay Shaughnessy wrote: > > The Hawkular Alerting team is happy to announce the release of Hawkular > Alerting 1.5.0.Final. This is a feature and fix release. > Enhancement > > - [HWKALERTS-209 ] - > Add new NelsonCondition for native Nelson Rule detection > - A brand new condition type to perform automatic Nelson Rule > detection of misbehaving metrics. > - [HWKALERTS-207 ] - > Allow ExternalCondition to be fired on Event submission > - External conditions can now be matched via Event and Data > submissions. > > Bug > > - [HWKALERTS-210 ] - > Autoresolve does not work on clustering setup > - Critical fix if using multi-condition triggers in a clustered > environment! 
> - [HWKALERTS-208 ] - > Email plugin lacks some support for newer condition types > > > > Hawkular Alerting 1.5.0.Final is available: > > - Immediately as a standalone distribution > - Soon as part of Hawkular Metrics 0.23.0.Final > - or immediately if building Hawkular Metrics from source > - Soon as part of Hawkular Services (supporting ManageIQ) > - When Hawkular Services upgrades to Metrics 0.23.0.Final > > > For more details about this release: https://issues.jboss.org/ > secure/ReleaseNote.jspa?projectId=12315924&version=12332918 > > http://www.hawkular.org/ > http://www.hawkular.org/community/docs/developer-guide/alerts.html > http://www.hawkular.org/docs/rest/rest-alerts.html > > https://github.com/hawkular/hawkular-alerts > https://github.com/hawkular/hawkular-alerts/tree/master/examples > > #hawkular on freenode > > Hawkular Alerting Team > Jay Shaughnessy (jshaughn at redhat.com) > Lucas Ponce (lponce at redhat.com) > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161223/76d8cb26/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... Name: Nelson-Muntz-image-nelson-muntz-36389279-500-384.jpg Type: image/jpeg Size: 21677 bytes Desc: not available Url : http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161223/76d8cb26/attachment-0001.jpg From garethahealy at gmail.com Sat Dec 24 07:45:13 2016 From: garethahealy at gmail.com (Gareth Healy) Date: Sat, 24 Dec 2016 12:45:13 +0000 Subject: [Hawkular-dev] OpenShift agent - multiple identity for certs Message-ID: Currently it seems you can only provide the agent configmap with the identity field. But what i want to actually do, is provide this based on the pods config map, i.e.: data: hawkular-openshift-agent: | endpoints: - type: prometheus protocol: "https" port: 9779 path: /metrics collection_interval_secs: 5 metrics: - name: my-first-metric type: counter identity: cert_file: /var/run/secrets/client-crt/client.crt private_key_file: /var/run/secrets/client-key/client.key The reason being, i might have multiple prometheus endpoints that have different certs. Is that possible? or planned for the future? Cheers. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161224/5ba7a273/attachment.html From mazz at redhat.com Sat Dec 24 10:30:21 2016 From: mazz at redhat.com (John Mazzitelli) Date: Sat, 24 Dec 2016 10:30:21 -0500 (EST) Subject: [Hawkular-dev] OpenShift agent - multiple identity for certs In-Reply-To: References: Message-ID: <1123764861.7233900.1482593421608.JavaMail.zimbra@redhat.com> > Currently it seems you can only provide the agent configmap with the identity > field. But what i want to actually do, is provide this based on the pods > config map> > [chomp] > Is that possible? or planned for the future? I was hoping this wasn't going to be needed :) But we did talk about it. 
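For reference, the two configuration shapes being discussed look roughly like this; the paths are only illustrative, taken from the fragments posted in this thread. The agent as it stands reads a single top-level identity from its own configmap, whereas the request above is for an identity nested under each monitored endpoint in the pod's configmap:

    # Supported today (sketch): one identity for the whole agent,
    # set in the agent's own configuration.
    identity:
      cert_file: /run/secrets/client-crt/client.crt
      private_key_file: /run/secrets/client-key/client.key

    # Requested above (not currently supported): an identity per endpoint,
    # carried in the monitored pod's configmap.
    endpoints:
    - type: prometheus
      protocol: "https"
      port: 9779
      path: /metrics
      identity:
        cert_file: /var/run/secrets/client-crt/client.crt
        private_key_file: /var/run/secrets/client-key/client.key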
It is not possible today because there is one major problem with what you suggest that would need to be solved somehow: > cert_file: /var/run/secrets/client-crt/client.crt > private_key_file: /var/run/secrets/client-key/client.key That is inside your configmap on your OpenShift project (which may or may not be the same project where the agent is deployed). So - what file system is that actually referring to? And how does the agent get access to those files? From mazz at redhat.com Sat Dec 24 14:32:32 2016 From: mazz at redhat.com (John Mazzitelli) Date: Sat, 24 Dec 2016 14:32:32 -0500 (EST) Subject: [Hawkular-dev] OpenShift agent - multiple identity for certs In-Reply-To: References: Message-ID: <1392503459.7237332.1482607952453.JavaMail.zimbra@redhat.com> BTW: I would like to know more about why you want this. The "Identity" configuration identifies the agent (so having one key-pair makes sense - it identifies your agent. Having multiple key-pairs per agent will actually mean your agent has different identities depending on what endpoint it is talking to - not sure this is what we want). If you have multiple Prometheus endpoints (each with their own server key/cert) I don't see why you would need different agent identities defined in your endpoints. The "identity" is the client's identification, nothing to do with the server, and a client should have one identity, not multiple. Now, if the concern is that your different Prometheus endpoint server certs are signed by different CAs (or are all self-signed) that is a different issue I think. It is assumed the host's default root CA set would be good enough to verify server endpoints, but if not, we would need to provide to the agent with all the CA certificates necessary for endpoints to be verified. Note: for the record, the agent doesn't do any server verification today - see https://github.com/hawkular/hawkular-openshift-agent/blob/master/http/http_client.go#L33 - so the agent should be able to collect metrics from any endpoint today. In the future we would need to be able to provide the agent with a trust store that contains all the CA certs required to talk to all the endpoints, assuming the host's default root CA set is not good enough. This is what we haven't implemented yet. Probably something like "ca_cert_file" defined in the "Identity" section, which would mean the Identity section would not only tell the agent what its own key-pair is, but will also say what its trusted CAs are. ----- Original Message ----- > Currently it seems you can only provide the agent configmap with the identity > field. But what i want to actually do, is provide this based on the pods > config map, i.e.: > > data: > hawkular-openshift-agent: | > endpoints: > - type: prometheus > protocol: "https" > port: 9779 > path: /metrics > collection_interval_secs: 5 > metrics: > - name: my-first-metric > type: counter > identity: > cert_file: /var/run/secrets/client-crt/client.crt > private_key_file: /var/run/secrets/client-key/client.key > The reason being, i might have multiple prometheus endpoints that have > different certs. > > Is that possible? or planned for the future? > > Cheers. 
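To make the trust-store idea concrete: on the Go side, presenting a client identity while verifying servers against a configurable CA bundle (rather than skipping verification) looks roughly like the sketch below. The function and parameter names are made up for illustration; this is not the agent's actual code.

    package tlsexample

    import (
        "crypto/tls"
        "crypto/x509"
        "errors"
        "io/ioutil"
        "net/http"
    )

    // newTLSClient builds an HTTP client that presents the given client
    // cert/key pair (the "identity") and verifies endpoints against the
    // CAs found in caFile (the hypothetical "ca_cert_file").
    func newTLSClient(certFile, keyFile, caFile string) (*http.Client, error) {
        cert, err := tls.LoadX509KeyPair(certFile, keyFile)
        if err != nil {
            return nil, err
        }

        caPEM, err := ioutil.ReadFile(caFile)
        if err != nil {
            return nil, err
        }
        pool := x509.NewCertPool()
        if !pool.AppendCertsFromPEM(caPEM) {
            return nil, errors.New("no CA certificates found in " + caFile)
        }

        cfg := &tls.Config{
            Certificates: []tls.Certificate{cert},
            RootCAs:      pool, // verify servers, rather than skipping verification
        }
        return &http.Client{Transport: &http.Transport{TLSClientConfig: cfg}}, nil
    }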
From garethahealy at gmail.com Sun Dec 25 04:56:17 2016 From: garethahealy at gmail.com (Gareth Healy) Date: Sun, 25 Dec 2016 09:56:17 +0000 Subject: [Hawkular-dev] OpenShift agent - multiple identity for certs In-Reply-To: <1123764861.7233900.1482593421608.JavaMail.zimbra@redhat.com> References: <1123764861.7233900.1482593421608.JavaMail.zimbra@redhat.com> Message-ID: One of the first services i am trying to monitor is etcd. etcd in OCP is configured as per the below: /var/lib/origin/openshift.local.config/master/master-config.yaml etcdClientInfo: ca: ca.crt certFile: master.etcd-client.crt keyFile: master.etcd-client.key urls: - https://10.2.2.2:4001 Which responds with the below cURL: curl https://10.2.2.2:4001/metrics --cacert ./ca.crt --cert ./master.etcd-client.crt --key ./master.etcd-client.key So without the "Identity" configuration section set on the agent config, i'd get a TLS error. As etcd is a core part of OCP, I don't have much control over the client certs and expect there might be other services which require the same setup using different certs that i might want to monitor. Hope that makes things clear, and Merry Christmas. Cheers. On Sat, Dec 24, 2016 at 3:30 PM, John Mazzitelli wrote: > > Currently it seems you can only provide the agent configmap with the > identity > > field. But what i want to actually do, is provide this based on the pods > > config map> > > [chomp] > > Is that possible? or planned for the future? > > I was hoping this wasn't going to be needed :) But we did talk about it. > > It is not possible today because there is one major problem with what you > suggest that would need to be solved somehow: > > > cert_file: /var/run/secrets/client-crt/client.crt > > private_key_file: /var/run/secrets/client-key/client.key > > That is inside your configmap on your OpenShift project (which may or may > not be the same project where the agent is deployed). > > So - what file system is that actually referring to? And how does the > agent get access to those files? > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161225/ac25cf17/attachment.html From mazz at redhat.com Sun Dec 25 08:23:16 2016 From: mazz at redhat.com (John Mazzitelli) Date: Sun, 25 Dec 2016 08:23:16 -0500 (EST) Subject: [Hawkular-dev] OpenShift agent - multiple identity for certs In-Reply-To: References: <1123764861.7233900.1482593421608.JavaMail.zimbra@redhat.com> Message-ID: <807912327.7243365.1482672196708.JavaMail.zimbra@redhat.com> Gareth, OK, there are a couple things here that I'm confused about. This is how I would understand things working. If you want to connect to any https endpoint, the agent will need SOME identity (so you have to give it SOME public/private key pair - which is what the Identity section does). It doesn't have to be the key-pair of the server (in fact, under normal situations it is not - the server is identified with its own public/private key and the client with another). But the point is, if you are connecting to an https endpoint, you can't leave Identity section out of the agent config. So when you say, "without the "Identity" configuration section set on the agent config, i'd get a TLS error" this is what I would expect. 
You can't leave the Identity section out when connecting via https because in that case the agent has no keys to talk TLS to the server. What does your agent config look like when you get things to work? (I assume you do get it to work because you said without the Identity you get a TLS error, which implies you do get it to work WITH an Identity section - is this correct?) What key files are you putting in the agent Identity when you get it to work? So I guess what I am saying is - have you tried to generate your own certificate and assigned it to your agent's Identity and then tried to connect to multiple https endpoints? Because as I mentioned earlier in another post, the agent today doesn't do server-cert verification, so it should "just work". You shouldn't need different Identities per endpoint. Once we add in verification, the endpoints you want to collect metrics from would need their server-side certs to be signed with a CA that the agent trusts (i.e. from the agent host's default root CA set) - we would then have to add the ability for the agent to be told about different CAs in case your server-side certs are, say, self-signed or signed with your own CA that isn't a trusted one found in the host's default root CA set. Oh, and, Merry Christmas! John Mazz ----- Original Message ----- > One of the first services i am trying to monitor is etcd. etcd in OCP is > configured as per the below: > > /var/lib/origin/openshift.local.config/master/master-config.yaml > > > etcdClientInfo: > ca: ca.crt > certFile: master.etcd-client.crt > keyFile: master.etcd-client.key > urls: > - https://10.2.2.2:4001 > > Which responds with the below cURL: > > curl https://10.2.2.2:4001/metrics --cacert ./ca.crt --cert > ./master.etcd-client.crt --key ./master.etcd-client.key > > So without the "Identity" configuration section set on the agent config, > i'd get a TLS error. As etcd is a core part of OCP, I don't have much > control over the client certs and expect there might be other services > which require the same setup using different certs that i might want to > monitor. > > Hope that makes things clear, and Merry Christmas. > > Cheers. > > On Sat, Dec 24, 2016 at 3:30 PM, John Mazzitelli wrote: > > > > Currently it seems you can only provide the agent configmap with the > > identity > > > field. But what i want to actually do, is provide this based on the pods > > > config map> > > > [chomp] > > > Is that possible? or planned for the future? > > > > I was hoping this wasn't going to be needed :) But we did talk about it. > > > > It is not possible today because there is one major problem with what you > > suggest that would need to be solved somehow: > > > > > cert_file: /var/run/secrets/client-crt/client.crt > > > private_key_file: /var/run/secrets/client-key/client.key > > > > That is inside your configmap on your OpenShift project (which may or may > > not be the same project where the agent is deployed). > > > > So - what file system is that actually referring to? And how does the > > agent get access to those files? 
> > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > From garethahealy at gmail.com Wed Dec 28 06:02:16 2016 From: garethahealy at gmail.com (Gareth Healy) Date: Wed, 28 Dec 2016 11:02:16 +0000 Subject: [Hawkular-dev] OpenShift agent - multiple identity for certs In-Reply-To: <807912327.7243365.1482672196708.JavaMail.zimbra@redhat.com> References: <1123764861.7233900.1482593421608.JavaMail.zimbra@redhat.com> <807912327.7243365.1482672196708.JavaMail.zimbra@redhat.com> Message-ID: Hi John, Below are the steps i am doing; 1. The certs for etcd are here (see path below), both server and client. I am only interested in the client for the agent: [root at localhost master]# pwd /var/lib/origin/openshift.local.config/master [root at localhost master]# ls -ltr *etc* -rw-rw-rw-. 1 root root 1078 Sep 23 18:31 master.etcd-client.crt -rw-rw-rw-. 1 root root 1679 Sep 23 18:31 master.etcd-client.key -rw-rw-rw-. 1 root root 1675 Sep 23 18:31 etcd.server.key -rw-rw-rw-. 1 root root 2550 Sep 23 18:31 etcd.server.crt 2. I add the client certs as secrets and mount them to the agent: oc project openshift-infra oc secrets new etcd-client-crt master.etcd-client.crt oc secrets new etcd-client-key master.etcd-client.key oc volume rc/hawkular-openshift-agent --add --name=etcd-client-crt --type=secret --secret-name=etcd-client-crt --mount-path=/run/secrets/etcd-client-crt oc volume rc/hawkular-openshift-agent --add --name=etcd-client-key --type=secret --secret-name=etcd-client-key --mount-path=/run/secrets/etcd-client-key 3. Then edit the config map of the agent and add in the below, which matches the above secret mounts: oc edit configmap hawkular-openshift-agent-configuration identity: cert_file: /run/secrets/etcd-client-crt/master.etcd-client.crt private_key_file: /run/secrets/etcd-client-key/master.etcd-client.key 4. Restart the pod to force a refresh and check the logs, which shows: I1228 10:20:18.799687 1 prometheus_metrics_collector.go:97] DEBUG: Told to collect [2] Prometheus metrics from [https://172.17.0.8:9779/metrics ] I1228 10:20:18.984615 1 metrics_storage.go:152] DEBUG: Stored datapoints for [2] metrics I now have a working agent collecting from etcd. Since etcd is mutual auth - maybe thats what is causing the confusion, as you keep mentioning server-certs - i am not sure how generating my own client certs helps. But since you said try it, i have with the below commands but as i would expect, got a TLS error: openssl req -newkey rsa:2048 -nodes -keyout agent.key -out agent.csr -subj "/C=UK/ST=Yorkshire/L=Leeds/O=Home/CN=hawkular-agent" openssl x509 -signkey agent.key -in agent.csr -req -days 365 -out agent.crt Just incase there are cross wires; etcd in OCP requires mutual auth (thats how i understand it), so thats the reason i am adding in the etcd client certs to the identity section of the agent configmap. If i needed to monitor another endpoint which was also mutual auth, with the current setup i wouldn't be able to do that. If theres anything you want me to try, happy to do so. Cheers. On Sun, Dec 25, 2016 at 1:23 PM, John Mazzitelli wrote: > Gareth, > > OK, there are a couple things here that I'm confused about. This is how I > would understand things working. > > If you want to connect to any https endpoint, the agent will need SOME > identity (so you have to give it SOME public/private key pair - which is > what the Identity section does). 
It doesn't have to be the key-pair of the > server (in fact, under normal situations it is not - the server is > identified with its own public/private key and the client with another). > But the point is, if you are connecting to an https endpoint, you can't > leave Identity section out of the agent config. > > So when you say, "without the "Identity" configuration section set on the > agent config, i'd get a TLS error" this is what I would expect. You can't > leave the Identity section out when connecting via https because in that > case the agent has no keys to talk TLS to the server. > > What does your agent config look like when you get things to work? (I > assume you do get it to work because you said without the Identity you get > a TLS error, which implies you do get it to work WITH an Identity section - > is this correct?) What key files are you putting in the agent Identity when > you get it to work? > > So I guess what I am saying is - have you tried to generate your own > certificate and assigned it to your agent's Identity and then tried to > connect to multiple https endpoints? Because as I mentioned earlier in > another post, the agent today doesn't do server-cert verification, so it > should "just work". You shouldn't need different Identities per endpoint. > Once we add in verification, the endpoints you want to collect metrics from > would need their server-side certs to be signed with a CA that the agent > trusts (i.e. from the agent host's default root CA set) - we would then > have to add the ability for the agent to be told about different CAs in > case your server-side certs are, say, self-signed or signed with your own > CA that isn't a trusted one found in the host's default root CA set. > > Oh, and, Merry Christmas! > > John Mazz > > ----- Original Message ----- > > One of the first services i am trying to monitor is etcd. etcd in OCP is > > configured as per the below: > > > > /var/lib/origin/openshift.local.config/master/master-config.yaml > > > > > > etcdClientInfo: > > ca: ca.crt > > certFile: master.etcd-client.crt > > keyFile: master.etcd-client.key > > urls: > > - https://10.2.2.2:4001 > > > > Which responds with the below cURL: > > > > curl https://10.2.2.2:4001/metrics --cacert ./ca.crt --cert > > ./master.etcd-client.crt --key ./master.etcd-client.key > > > > So without the "Identity" configuration section set on the agent config, > > i'd get a TLS error. As etcd is a core part of OCP, I don't have much > > control over the client certs and expect there might be other services > > which require the same setup using different certs that i might want to > > monitor. > > > > Hope that makes things clear, and Merry Christmas. > > > > Cheers. > > > > On Sat, Dec 24, 2016 at 3:30 PM, John Mazzitelli > wrote: > > > > > > Currently it seems you can only provide the agent configmap with the > > > identity > > > > field. But what i want to actually do, is provide this based on the > pods > > > > config map> > > > > [chomp] > > > > Is that possible? or planned for the future? > > > > > > I was hoping this wasn't going to be needed :) But we did talk about > it. 
> > > > > > It is not possible today because there is one major problem with what > you > > > suggest that would need to be solved somehow: > > > > > > > cert_file: /var/run/secrets/client-crt/client.crt > > > > private_key_file: /var/run/secrets/client-key/client.key > > > > > > That is inside your configmap on your OpenShift project (which may or > may > > > not be the same project where the agent is deployed). > > > > > > So - what file system is that actually referring to? And how does the > > > agent get access to those files? > > > _______________________________________________ > > > hawkular-dev mailing list > > > hawkular-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161228/73142866/attachment-0001.html From mazz at redhat.com Wed Dec 28 09:29:49 2016 From: mazz at redhat.com (John Mazzitelli) Date: Wed, 28 Dec 2016 09:29:49 -0500 (EST) Subject: [Hawkular-dev] OpenShift agent - multiple identity for certs In-Reply-To: References: <1123764861.7233900.1482593421608.JavaMail.zimbra@redhat.com> <807912327.7243365.1482672196708.JavaMail.zimbra@redhat.com> Message-ID: <1863845535.7436383.1482935389211.JavaMail.zimbra@redhat.com> OK, yes, its the mutual auth that I was missing. This is why you need to use that specific client cert, because I assume the server-side will only trust that client cert and no others (unless you somehow add your generated cert to some kind of "trust store" so the server trusts it as well). Hmm... so this is an interesting problem. For now, I would say the answer is "generate a cert for your agent, and tell your servers to trust it" if that is even possible but even so I assume that isn't a nice way to do it?? The problem with the agent is the endpoints are defined by your pods (they are config maps on the projects where the pods are deployed) and the people doing those pod deployments are typically not the same admin person who configures and sets up the agent. So how would the pod configmap authors know the locations and names of the key-pairs or have privileges to install those keys on the agent (I'm thinking of the use case where OpenShift is running and developers/deployers are deploying their own applications and want to collect their own app metrics). Anyway, I created an issue in github where we can discuss this further there: https://github.com/hawkular/hawkular-openshift-agent/issues/75 ----- Original Message ----- > Hi John, > > Below are the steps i am doing; > > 1. The certs for etcd are here (see path below), both server and client. I > am only interested in the client for the agent: > > [root at localhost master]# pwd > /var/lib/origin/openshift.local.config/master > > [root at localhost master]# ls -ltr *etc* > -rw-rw-rw-. 1 root root 1078 Sep 23 18:31 master.etcd-client.crt > -rw-rw-rw-. 1 root root 1679 Sep 23 18:31 master.etcd-client.key > -rw-rw-rw-. 1 root root 1675 Sep 23 18:31 etcd.server.key > -rw-rw-rw-. 1 root root 2550 Sep 23 18:31 etcd.server.crt > > 2. 
I add the client certs as secrets and mount them to the agent: > > oc project openshift-infra > oc secrets new etcd-client-crt master.etcd-client.crt > oc secrets new etcd-client-key master.etcd-client.key > > oc volume rc/hawkular-openshift-agent --add --name=etcd-client-crt > --type=secret --secret-name=etcd-client-crt > --mount-path=/run/secrets/etcd-client-crt > oc volume rc/hawkular-openshift-agent --add --name=etcd-client-key > --type=secret --secret-name=etcd-client-key > --mount-path=/run/secrets/etcd-client-key > > 3. Then edit the config map of the agent and add in the below, which > matches the above secret mounts: > > > oc edit configmap hawkular-openshift-agent-configuration > > identity: > cert_file: /run/secrets/etcd-client-crt/master.etcd-client.crt > private_key_file: /run/secrets/etcd-client-key/master.etcd-client.key > > 4. Restart the pod to force a refresh and check the logs, which shows: > > > I1228 10:20:18.799687 1 prometheus_metrics_collector.go:97] DEBUG: > Told to collect [2] Prometheus metrics from [https://172.17.0.8:9779/metrics > ] > I1228 10:20:18.984615 1 metrics_storage.go:152] DEBUG: Stored > datapoints for [2] metrics > > I now have a working agent collecting from etcd. > > Since etcd is mutual auth - maybe thats what is causing the confusion, as > you keep mentioning server-certs - i am not sure how generating my own > client certs helps. > > > But since you said try it, i have with the below commands but as i would > expect, got a TLS error: > > openssl req -newkey rsa:2048 -nodes -keyout agent.key -out agent.csr -subj > "/C=UK/ST=Yorkshire/L=Leeds/O=Home/CN=hawkular-agent" > openssl x509 -signkey agent.key -in agent.csr -req -days 365 -out agent.crt > > Just incase there are cross wires; etcd in OCP requires mutual auth (thats > how i understand it), so thats the reason i am adding in the etcd client > certs to the identity section of the agent configmap. If i needed to > monitor another endpoint which was also mutual auth, with the current setup > i wouldn't be able to do that. > > If theres anything you want me to try, happy to do so. > > Cheers. > > On Sun, Dec 25, 2016 at 1:23 PM, John Mazzitelli wrote: > > > Gareth, > > > > OK, there are a couple things here that I'm confused about. This is how I > > would understand things working. > > > > If you want to connect to any https endpoint, the agent will need SOME > > identity (so you have to give it SOME public/private key pair - which is > > what the Identity section does). It doesn't have to be the key-pair of the > > server (in fact, under normal situations it is not - the server is > > identified with its own public/private key and the client with another). > > But the point is, if you are connecting to an https endpoint, you can't > > leave Identity section out of the agent config. > > > > So when you say, "without the "Identity" configuration section set on the > > agent config, i'd get a TLS error" this is what I would expect. You can't > > leave the Identity section out when connecting via https because in that > > case the agent has no keys to talk TLS to the server. > > > > What does your agent config look like when you get things to work? (I > > assume you do get it to work because you said without the Identity you get > > a TLS error, which implies you do get it to work WITH an Identity section - > > is this correct?) What key files are you putting in the agent Identity when > > you get it to work? 
> > > > So I guess what I am saying is - have you tried to generate your own > > certificate and assigned it to your agent's Identity and then tried to > > connect to multiple https endpoints? Because as I mentioned earlier in > > another post, the agent today doesn't do server-cert verification, so it > > should "just work". You shouldn't need different Identities per endpoint. > > Once we add in verification, the endpoints you want to collect metrics from > > would need their server-side certs to be signed with a CA that the agent > > trusts (i.e. from the agent host's default root CA set) - we would then > > have to add the ability for the agent to be told about different CAs in > > case your server-side certs are, say, self-signed or signed with your own > > CA that isn't a trusted one found in the host's default root CA set. > > > > Oh, and, Merry Christmas! > > > > John Mazz > > > > ----- Original Message ----- > > > One of the first services i am trying to monitor is etcd. etcd in OCP is > > > configured as per the below: > > > > > > /var/lib/origin/openshift.local.config/master/master-config.yaml > > > > > > > > > etcdClientInfo: > > > ca: ca.crt > > > certFile: master.etcd-client.crt > > > keyFile: master.etcd-client.key > > > urls: > > > - https://10.2.2.2:4001 > > > > > > Which responds with the below cURL: > > > > > > curl https://10.2.2.2:4001/metrics --cacert ./ca.crt --cert > > > ./master.etcd-client.crt --key ./master.etcd-client.key > > > > > > So without the "Identity" configuration section set on the agent config, > > > i'd get a TLS error. As etcd is a core part of OCP, I don't have much > > > control over the client certs and expect there might be other services > > > which require the same setup using different certs that i might want to > > > monitor. > > > > > > Hope that makes things clear, and Merry Christmas. > > > > > > Cheers. > > > > > > On Sat, Dec 24, 2016 at 3:30 PM, John Mazzitelli > > wrote: > > > > > > > > Currently it seems you can only provide the agent configmap with the > > > > identity > > > > > field. But what i want to actually do, is provide this based on the > > pods > > > > > config map> > > > > > [chomp] > > > > > Is that possible? or planned for the future? > > > > > > > > I was hoping this wasn't going to be needed :) But we did talk about > > it. > > > > > > > > It is not possible today because there is one major problem with what > > you > > > > suggest that would need to be solved somehow: > > > > > > > > > cert_file: /var/run/secrets/client-crt/client.crt > > > > > private_key_file: /var/run/secrets/client-key/client.key > > > > > > > > That is inside your configmap on your OpenShift project (which may or > > may > > > > not be the same project where the agent is deployed). > > > > > > > > So - what file system is that actually referring to? And how does the > > > > agent get access to those files? 
> > > > _______________________________________________ > > > > hawkular-dev mailing list > > > > hawkular-dev at lists.jboss.org > > > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > > > > > > > From garethahealy at gmail.com Thu Dec 29 08:16:29 2016 From: garethahealy at gmail.com (Gareth Healy) Date: Thu, 29 Dec 2016 13:16:29 +0000 Subject: [Hawkular-dev] Ability to group by datapoint tag in Grafana Message-ID: The OpenShift Agent when monitoring a prometheus endpoint creates a single metric with tagged datapoints, i.e.: https://github.com/coreos/etcd/blob/master/Documentation/v2/metrics.md# http-requests I1228 21:02:01.820530 1 metrics_storage.go:155] TRACE: Stored [3] [counter] datapoints for metric named [pod/fa32a887-cd08-11e6-ab2e-525400c583ad/custom/etcd_http_received_total]: [ {2016-12-28 21:02:01.638767339 +0000 UTC 622 map[method:DELETE]} {2016-12-28 21:02:01.638767339 +0000 UTC 414756 map[method:GET]} {2016-12-28 21:02:01.638767339 +0000 UTC 33647 map[method:PUT]} ] But when trying to view this via the grafana datasource, only 1 metric and the aggregated counts are shown. What i'd like to do is something like the below: { "start": 1482999755690, "end": 1483000020093, "order": "ASC", "tags": "pod_namespace:etcd-testing", "groupDatapointsByTagKey": "method" } Search via tags or name (as-is) and group the datapoints by a tag key, which would give you 3 lines, instead of 1. Does that sound possible? Cheers. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161229/fba8b2e4/attachment.html From mazz at redhat.com Thu Dec 29 11:33:22 2016 From: mazz at redhat.com (John Mazzitelli) Date: Thu, 29 Dec 2016 11:33:22 -0500 (EST) Subject: [Hawkular-dev] Ability to group by datapoint tag in Grafana In-Reply-To: References: Message-ID: <1982058557.7528867.1483029202597.JavaMail.zimbra@redhat.com> This would be a feature request on Hawkular-Metrics (if they don't do something like this already - I do not know. I will defer to the H-Metrics folks to talk about how they do querying off of tags). ----- Original Message ----- > The OpenShift Agent when monitoring a prometheus endpoint creates a single > metric with tagged datapoints, i.e.: > > > > > https://github.com/coreos/etcd/blob/master/Documentation/v2/metrics.md#http-requests > > > > > I1228 21:02:01.820530 1 metrics_storage.go:155] TRACE: Stored [3] [counter] > datapoints for metric named > [pod/fa32a887-cd08-11e6-ab2e-525400c583ad/custom/etcd_http_received_total]: [ > {2016-12-28 21:02:01.638767339 +0000 UTC 622 map[method:DELETE]} > {2016-12-28 21:02:01.638767339 +0000 UTC 414756 map[method:GET]} > {2016-12-28 21:02:01.638767339 +0000 UTC 33647 map[method:PUT]} > ] > > But when trying to view this via the grafana datasource, only 1 metric and > the aggregated counts are shown. What i'd like to do is something like the > below: > > > > { > "start": 1482999755690, > "end": 1483000020093, > "order": "ASC", > "tags": "pod_namespace:etcd-testing", > "groupDatapointsByTagKey": "method" > } > > Search via tags or name (as-is) and group the datapoints by a tag key, which > would give you 3 lines, instead of 1. > > Does that sound possible? > > Cheers. 
> > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > From garethahealy at gmail.com Fri Dec 30 11:57:13 2016 From: garethahealy at gmail.com (Gareth Healy) Date: Fri, 30 Dec 2016 16:57:13 +0000 Subject: [Hawkular-dev] Ability to group by datapoint tag in Grafana In-Reply-To: <1982058557.7528867.1483029202597.JavaMail.zimbra@redhat.com> References: <1982058557.7528867.1483029202597.JavaMail.zimbra@redhat.com> Message-ID: I did some hacking (day 1 of Rx, so probably not the best "solution"). But works... - https://github.com/garethahealy/hawkular-metrics/commit/75c616f9be71a0b85ce5dee310c4dff828bb8f38 Sample output: - https://gist.github.com/garethahealy/00a90bbee2556b6f0a338ece87096c89 And some test cURL commands: - https://gist.github.com/garethahealy/0f46aad5d2da41b82aad5af317fad788 Cheers. On Thu, Dec 29, 2016 at 4:33 PM, John Mazzitelli wrote: > This would be a feature request on Hawkular-Metrics (if they don't do > something like this already - I do not know. I will defer to the H-Metrics > folks to talk about how they do querying off of tags). > > ----- Original Message ----- > > The OpenShift Agent when monitoring a prometheus endpoint creates a > single > > metric with tagged datapoints, i.e.: > > > > > > > > > > https://github.com/coreos/etcd/blob/master/Documentation/v2/metrics.md# > http-requests > > > > > > > > > > I1228 21:02:01.820530 1 metrics_storage.go:155] TRACE: Stored [3] > [counter] > > datapoints for metric named > > [pod/fa32a887-cd08-11e6-ab2e-525400c583ad/custom/etcd_http_received_total]: > [ > > {2016-12-28 21:02:01.638767339 +0000 UTC 622 map[method:DELETE]} > > {2016-12-28 21:02:01.638767339 +0000 UTC 414756 map[method:GET]} > > {2016-12-28 21:02:01.638767339 +0000 UTC 33647 map[method:PUT]} > > ] > > > > But when trying to view this via the grafana datasource, only 1 metric > and > > the aggregated counts are shown. What i'd like to do is something like > the > > below: > > > > > > > > { > > "start": 1482999755690, > > "end": 1483000020093, > > "order": "ASC", > > "tags": "pod_namespace:etcd-testing", > > "groupDatapointsByTagKey": "method" > > } > > > > Search via tags or name (as-is) and group the datapoints by a tag key, > which > > would give you 3 lines, instead of 1. > > > > Does that sound possible? > > > > Cheers. > > > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20161230/4829a31a/attachment.html From mazz at redhat.com Fri Dec 30 13:29:03 2016 From: mazz at redhat.com (John Mazzitelli) Date: Fri, 30 Dec 2016 13:29:03 -0500 (EST) Subject: [Hawkular-dev] Ability to group by datapoint tag in Grafana In-Reply-To: References: <1982058557.7528867.1483029202597.JavaMail.zimbra@redhat.com> Message-ID: <2099209931.7580520.1483122543068.JavaMail.zimbra@redhat.com> Thanks. 
You should submit a feature request in the H-Metrics JIRA and link that JIRA to a PR for the H-Metrics team to peer review: https://issues.jboss.org/projects/HWKMETRICS ----- Original Message ----- > I did some hacking (day 1 of Rx, so probably not the best "solution"). But > works... > > - > https://github.com/garethahealy/hawkular-metrics/commit/75c616f9be71a0b85ce5dee310c4dff828bb8f38 > > > Sample output: > > - https://gist.github.com/garethahealy/00a90bbee2556b6f0a338ece87096c89 > > > And some test cURL commands: > > - https://gist.github.com/garethahealy/0f46aad5d2da41b82aad5af317fad788 > > > Cheers. > > On Thu, Dec 29, 2016 at 4:33 PM, John Mazzitelli wrote: > > > This would be a feature request on Hawkular-Metrics (if they don't do > > something like this already - I do not know. I will defer to the H-Metrics > > folks to talk about how they do querying off of tags). > > > > ----- Original Message ----- > > > The OpenShift Agent when monitoring a prometheus endpoint creates a > > single > > > metric with tagged datapoints, i.e.: > > > > > > > > > > > > > > > https://github.com/coreos/etcd/blob/master/Documentation/v2/metrics.md# > > http-requests > > > > > > > > > > > > > > > I1228 21:02:01.820530 1 metrics_storage.go:155] TRACE: Stored [3] > > [counter] > > > datapoints for metric named > > > [pod/fa32a887-cd08-11e6-ab2e-525400c583ad/custom/etcd_http_received_total]: > > [ > > > {2016-12-28 21:02:01.638767339 +0000 UTC 622 map[method:DELETE]} > > > {2016-12-28 21:02:01.638767339 +0000 UTC 414756 map[method:GET]} > > > {2016-12-28 21:02:01.638767339 +0000 UTC 33647 map[method:PUT]} > > > ] > > > > > > But when trying to view this via the grafana datasource, only 1 metric > > and > > > the aggregated counts are shown. What i'd like to do is something like > > the > > > below: > > > > > > > > > > > > { > > > "start": 1482999755690, > > > "end": 1483000020093, > > > "order": "ASC", > > > "tags": "pod_namespace:etcd-testing", > > > "groupDatapointsByTagKey": "method" > > > } > > > > > > Search via tags or name (as-is) and group the datapoints by a tag key, > > which > > > would give you 3 lines, instead of 1. > > > > > > Does that sound possible? > > > > > > Cheers. > > > > > > _______________________________________________ > > > hawkular-dev mailing list > > > hawkular-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > >
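The behaviour being asked for boils down to splitting one metric's tagged datapoints into one series per value of a chosen tag key, e.g. one line per HTTP method for etcd_http_received_total. A minimal standalone sketch of that grouping follows; the types are made up for illustration and are not Hawkular Metrics' internal model.

    package main

    import "fmt"

    // DataPoint is an illustrative stand-in for a tagged counter datapoint.
    type DataPoint struct {
        Timestamp int64
        Value     float64
        Tags      map[string]string
    }

    // groupByTagKey returns one series per distinct value of tagKey, which is
    // what "groupDatapointsByTagKey" would ask the server to do.
    func groupByTagKey(points []DataPoint, tagKey string) map[string][]DataPoint {
        series := make(map[string][]DataPoint)
        for _, p := range points {
            series[p.Tags[tagKey]] = append(series[p.Tags[tagKey]], p)
        }
        return series
    }

    func main() {
        points := []DataPoint{
            {Timestamp: 1482958921638, Value: 622, Tags: map[string]string{"method": "DELETE"}},
            {Timestamp: 1482958921638, Value: 414756, Tags: map[string]string{"method": "GET"}},
            {Timestamp: 1482958921638, Value: 33647, Tags: map[string]string{"method": "PUT"}},
        }
        // Yields three series (DELETE, GET, PUT) instead of one merged one.
        for method, pts := range groupByTagKey(points, "method") {
            fmt.Printf("%s: %d datapoint(s)\n", method, len(pts))
        }
    }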