[Metrics] Pluggable aggregation functions: next steps
by Thomas Segismont
Hi,
I have looked at Aakarsh's repo:
https://github.com/Akki5/hawkular_plugin/
It's a good start, with an interface describing a doubles-to-double
function, a classloader for loading implementations, and a set of initial
implementations.
In order to integrate this work into Metrics, I think we should take the
following steps:
=====
#1 Change the contract
Doubles-to-double works well for avg/min/max/... functions on gauge
metrics, but we need to consider other metric types.
Also, the interface should accept whole data points, not just data point
values, because some functions need the timestamp to compute the result.
% of up availability is a good example.
And functions may return different types: Double, Long, AvailabilityType.
#2 Update configuration options to let the user set a plugins directory
The Metrics documentation will have to be updated.
#3 Create a function repository for each metric type
We can build on the JDK's service loader plus Aakarsh's classloader implementation.
#4 Add builtin aggregate functions
Extract existing Metrics code (min, max, avg, % of up avail, downtime
duration) into builtin functions.
#5 Document the process of implementing a pluggable function
We need to think about function naming as well. Should we use a prefix
to identify a builtin function?
=====
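To make #1 more concrete, here is a minimal sketch of what the revised contract could look like. All names here (DataPoint, AggregateFunction, Avg) are hypothetical illustrations, not the actual Metrics API; the point is that functions consume whole data points (so the timestamp is available) and can return types other than Double.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical data point: timestamp plus a typed value.
class DataPoint<T> {
    final long timestamp;
    final T value;

    DataPoint(long timestamp, T value) {
        this.timestamp = timestamp;
        this.value = value;
    }
}

// Hypothetical contract: whole data points in, an arbitrary result type out,
// so the same interface can cover Double, Long or AvailabilityType results.
interface AggregateFunction<IN, OUT> {
    String name();

    OUT apply(List<DataPoint<IN>> dataPoints);
}

// Example builtin implementation: average over gauge values.
class Avg implements AggregateFunction<Double, Double> {
    @Override
    public String name() {
        return "builtin:avg"; // a "builtin:" prefix, as discussed in #5
    }

    @Override
    public Double apply(List<DataPoint<Double>> dataPoints) {
        return dataPoints.stream()
                .mapToDouble(p -> p.value)
                .average()
                .orElse(Double.NaN);
    }
}
```

A function like % of up availability would read `timestamp` from each point rather than just `value`, which is exactly what the doubles-to-double contract cannot express.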
I will start another thread to discuss REST and Core API data query changes.
Thoughts?
Thanks,
Thomas
7 years, 10 months
Hawkular BTM 0.2.0.Final released
by Gary Brown
Hi all
I'm pleased to announce the release of version 0.2.0.Final of Hawkular BTM. The main focus for this release has been on enhancing the business transaction collection capabilities.
A quick demo of this version, showing monitoring of two Vertx applications, can be found here: https://youtu.be/TtAXiYhqTSk
Highlights of this release:
* URI inclusion/exclusion support, allowing business transactions to be filtered based on initial URIs of interest.
* Propagation of the business transaction name, identified based on the inclusion URI, through subsequent fragments of the same business transaction instance.
* Child node suppression - provide a mechanism for ignoring child nodes where they add no value. The specific case that prompted this mechanism was when instrumenting JDBC prepared statements.
* Provide a mechanism for capturing header values from different message types, for use where a simple map is not available.
* Define instrumentation rules for Vertx (HTTP and EventBus).
* Administration REST service, responsible for providing the collector configuration. This means that the configuration no longer needs to be defined in the client (execution) environment.
* Batch reporting of business transactions to the server.
* Configuration switch to determine if only named business transactions should be reported. The default is false, to enable discovery of the business transaction (fragments) available from the execution environment(s) being monitored; in a production environment, we would typically only want the fragments of interest to be reported.
* Instrumentation rule versioning mechanism. This will enable rules that are only applicable up until a certain version of a technology to be superseded by newer versions of the rule.
The release can be found here: https://github.com/hawkular/hawkular-btm/releases/tag/0.2.0.Final
The detailed release notes can be found here: https://issues.jboss.org/secure/ReleaseNote.jspa?projectId=12316120&versi...
Feature requests and bugs should be reported in our project jira: https://issues.jboss.org/browse/HWKBTM
Regards
Gary
metric data type
by John Mazzitelli
OK, folks... how do we solve the following?
There are now two independent enums to define metric data type - one in inventory and one in metrics.
org.hawkular.inventory.api.model.MetricDataType
org.hawkular.metrics.client.common.MetricType
From an agent or feed perspective, I now have to decide which one I want. Pretty annoying, but OK, I can translate between the two if I need to (if the agent is talking to inventory, it will use inventory's enum; if talking to metrics, metrics' enum). In the agent configuration, <metric-dmr> will need to use the values common to the two in order to support both. But this leads to a more difficult problem to come to grips with: the inventory and metrics enums for metric type have different values!
Inventory has GAUGE, AVAILABILITY, COUNTER, COUNTER_RATE.
Metrics API has GAUGE, COUNTER, TIMING, SIMPLE.
Right now, the wildfly agent only supports gauge and counter (and inventory availability).
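As an illustration of the translation the agent would have to do, here is a sketch using local stand-in enums that mirror the values listed above (these are not the real inventory/metrics classes). Only the values both sides share can be mapped; the rest have no counterpart.

```java
// Stand-ins for the two real enums, using the values listed above.
enum InventoryMetricDataType { GAUGE, AVAILABILITY, COUNTER, COUNTER_RATE }

enum MetricsMetricType { GAUGE, COUNTER, TIMING, SIMPLE }

class MetricTypeTranslator {

    // Maps an inventory type to its metrics counterpart, or null when the
    // value exists only on the inventory side (AVAILABILITY, COUNTER_RATE).
    static MetricsMetricType toMetrics(InventoryMetricDataType type) {
        switch (type) {
            case GAUGE:
                return MetricsMetricType.GAUGE;
            case COUNTER:
                return MetricsMetricType.COUNTER;
            default:
                return null; // no equivalent in the metrics enum
        }
    }
}
```

Since the agent only supports gauge and counter today this mapping covers it, but the gap (AVAILABILITY and COUNTER_RATE on one side, TIMING and SIMPLE on the other) is exactly the problem: neither enum is a superset of the other.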
Inventory 0.2.0.Alpha1 released
by Peter Palaga
Hi *,
Most important changes:
* Titan graph DB backend instead of Tinkergraph
* "Canonical path", a path going down from the tenant to the entity in
question, following the "contains" relationships. Canonical paths are
guaranteed to be unique per inventory installation.
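To illustrate the idea (not the real CanonicalPath implementation), a canonical path can be pictured as a string of /-separated segments going from the tenant down through the "contains" chain. The segment syntax used below (`type;id`) is an assumption made for this sketch.

```java
// Illustrative sketch only: walks a path string of the assumed form
// "/t;myTenant/e;myEnv/f;myFeed/r;myResource" and extracts the id for a
// given segment type. The real Inventory API exposes this via CanonicalPath.
class PathSketch {

    static String idOfSegment(String canonicalPath, String segmentType) {
        for (String segment : canonicalPath.split("/")) {
            String[] parts = segment.split(";", 2);
            if (parts.length == 2 && parts[0].equals(segmentType)) {
                return parts[1];
            }
        }
        return null; // e.g. no feed segment: resource lives under an environment
    }
}
```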
Full list of resolved Jiras:
https://issues.jboss.org/secure/ReleaseNote.jspa?projectId=12315923&versi...
We hope that Inventory 0.2.0.Alpha1 reaches Hawkular master soon. The
changes needed in Hawkular main are ready in the dev/inventory-0.2.0.Alpha1
branch, and the ones for the Agent should be accepted soon:
https://github.com/hawkular/hawkular-agent/pull/23. The last unsolved
blocker is that the relevant Agent branch depends on Metrics
0.5.0-SNAPSHOT, and we have no info on when it is going to be released.
It was me this time who released Inventory, just for the sake of making
sure that the Bus Factor for performing Inventory releases is higher than 1.
Best,
Peter
0.3.0.Final of Hawkular Alerts
by Jay Shaughnessy
We're happy to announce 0.3.0.Final of Hawkular Alerts. A bunch of
good stuff this time around; here are the relevant Jiras and a few
highlight blurbs:
** Enhancement
* [HWKALERTS-17] - Refactor to eliminate EE dependencies of EJB3
Now packaged as a WAR as opposed to an EAR, for easier consumption
in another EAR.
* [HWKALERTS-50] - Integrate with Hawkular Metrics
A value-add External Alerter deployment (WAR) that adds more
alerting power when integrated with Hawkular Metrics. Thanks to John
Sanda for his help.
* [HWKALERTS-52] - Hooks for external alerts
A mechanism for integrating External Alerters into Hawkular
Alerts. Allows external clients to leverage the Hawkular Alerts Trigger
and Action infrastructure.
* [HWKALERTS-57] - Improve the way actions plugins access
information related to an alert
Now Actions and Notifiers have full access (via JSON) to the Alert
details.
* [HWKALERTS-59] - Review if 204 return code can be changed to an
empty list
API CHANGE! Get Alerts now returns an empty list, as opposed to 204,
when the criteria result in no alerts.
* [HWKALERTS-60] - Review end point for alert resolution
New REST endpoints for Ack or Resolve of a single alert.
* [HWKALERTS-62] - Allow any data to store contextual information
Optionally store a map of context data with any datum sent into
alerts for evaluation, then use that context info in your actions.
* [HWKALERTS-65] - Adapt email plugin to support multiple
conditions and multiple states
Big enhancements in e-mail notifier for rich messages at different
life-cycle points.
* [HWKALERTS-67] - Update gson dependencies with jackson
Removed use of GSON; now depends on the FasterXML Jackson libraries provided by WildFly.
** Feature Request
* [HWKALERTS-64] - Enable trigger on alert resolution
- A Trigger that was [auto]disabled can now be auto-enabled when its
alerts are resolved.
- An AutoResolve Trigger now automatically returns to firing mode when
its alerts are manually resolved.
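To illustrate HWKALERTS-62, here is a sketch using hypothetical stand-in classes (not the actual Hawkular Alerts model): a datum carries an optional map of context entries that travels with it into evaluation and is later available to actions.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for a datum sent into alerts for evaluation; the
// real Hawkular Alerts class and field names may differ.
class Datum {
    final String dataId;
    final double value;
    final Map<String, String> context = new HashMap<>();

    Datum(String dataId, double value) {
        this.dataId = dataId;
        this.value = value;
    }
}

class ContextExample {
    static Datum heapDatum() {
        Datum d = new Datum("heap-used", 0.93);
        // Optional context travels with the datum and is later available
        // to actions, e.g. for building a richer notification message.
        d.context.put("unit", "ratio");
        d.context.put("host", "server-01");
        return d;
    }
}
```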
Please contact us with any questions, comments or contributions!
Jay Shaughnessy (jshaughn(a)redhat.com)
Lucas Ponce (lponce(a)redhat.com)
Re: [Hawkular-dev] inventory API: can I get a feed ID from a resource ID?
by John Mazzitelli
I'm not sure how the UI knows the feed ID to use with the resource ID; I'll defer to the UI guys on how they do it.
The agent is easy - I know my feed ID :) For the UI, I'm not sure how they know what the agent's feed ID is when talking about my agent's resources (versus the pinger's resources, let's say - or another agent's).
----- Original Message -----
> The REST URLs always resembled what is now formalized in the "canonical path"
> of the entities.
>
> If you accessed resource with ID "a", you never did that just by using that
> "a". You had to mention the tenantID (deduced from the auth info), the
> environmentID (part of the URL) and feedID (if the resource lived under a
> feed). That is precisely what the canonical path contains, too.
>
> So from the user of the REST interface nothing really changed apart from the
> fact that 0.2.0.Alpha1 supports resource hierarchy (which is expressed as a
> URL path, too).
>
> On Monday, July 20, 2015 09:16:23 John Mazzitelli wrote:
> > This is important to know from a UI perspective. Right now, I store the
> > resource ID under the tenant ID.
> >
> > The UI (IIRC) only has the resource ID. From what you just said, Lukas,
> > that doesn't seem like it's unique enough.
> >
> > This might be an issue with the UI.
> >
> > ----- Original Message -----
> >
> > > In Inventory 0.2.0.Alpha1, which I am about to release today if no
> > > breakage was discovered during my absence:
> > >
> > > If you have the resource Java object, which I think you have in the
> > > agent, you simply do:
> > >
> > > resourceObject.getPath().ids().getFeedId();
> > >
> > > Long version:
> > >
> > > All inventory entities now store a "canonical path", which is a path
> > > going down from the tenant to the entity in question, following the
> > > "contains" relationships.
> > >
> > > The above call will take the resource object's path, analyze it using
> > > the "ids()" call, and return the feed id if the resource is contained
> > > within a feed, or null if the resource lives directly under an
> > > environment.
> > >
> > > Also if you just store the resource ID, remember that it is only
> > > "locally unique" within your feed, so to reliably get the correct
> > > resource, you can only search for it within the feed:
> > >
> > > inventory.tenants().getAll("asdf").environments().get("asf")
> > >     .feeds("myfeed").resources().get("resource-id");
> > >
> > > If you also store the canonical path of the resource, you could do:
> > >
> > > inventory.inspect(CanonicalPath.fromString("resource-path"),
> > > Resources.Single.class);
> > >
> > > which would return to you the same access object as the above call.
> > >
> > > So when you create your resource, you supply a locally unique id, and
> > > can use the access object returned from the create method to get the
> > > newly created resource, which will contain its full canonical path and
> > > all the other details.
> > >
> > > On Friday, July 17, 2015 15:11:42 John Mazzitelli wrote:
> > > > To the inventory folks: is there an API that gives me a feed ID if
> > > > all I know is a resource ID? If there is no API, can I get one? :)
> > > >
> > > > We'll need a way to determine what feed is responsible for managing
> > > > what parts of the inventory. So, given that clients like the UI will
> > > > only know about things like resource ID, that's all they will be able
> > > > to give the kettle - but the server-side components will need to take
> > > > that resource ID and get its associated feed ID so it can pass
> > > > messages to the feed that is managing that resource.
> > > > _______________________________________________
> > > > hawkular-dev mailing list
> > > > hawkular-dev(a)lists.jboss.org
> > > > https://lists.jboss.org/mailman/listinfo/hawkular-dev
>
>
inventory API: can I get a feed ID from a resource ID?
by John Mazzitelli
To the inventory folks: is there an API that gives me a feed ID if all I know is a resource ID? If there is no API, can I get one? :)
We'll need a way to determine what feed is responsible for managing what parts of the inventory. So, given that clients like the UI will only know about things like resource ID, that's all they will be able to give the kettle - but the server-side components will need to take that resource ID and get its associated feed ID so it can pass messages to the feed that is managing that resource.
Integration branch for Inventory 0.2.0.Alpha1
by Lukas Krejci
Hi all,
I plan to release Inventory 0.2.0.Alpha1 soonish, because it would be good
for it to take some soak time in Hawkular prior to the next milestone. The
main reason is the move to Titan and Cassandra as inventory's backend.
The other big(gish) feature is support for resource hierarchy (at last!),
a move to using canonical paths to entities in many places, etc.
In other words, there will be breakage.
I've opened the integration branch in my own fork of Hawkular:
https://github.com/metlos/hawkular/tree/dev/inventory-0.2.0.Alpha1
I already made it pass the integration tests. There will need to be changes
for the Pinger and the agent (and possibly for the UI, too), which I am
seeking help with.
I've started working on the agent updates for that branch (which triggered
quite a lot of changes on the inventory side to make the transition, and
future changes, smoother), but nothing is committed yet for that.
Cheers,
Lukas
Fwd: Writing good pull requests (Fwd: [wildfly-dev] How to contribute pull requests to WildFly)
by Peter Palaga
Things to consider. Forwarded from thecore -- P
-------- Forwarded Message --------
Subject: Writing good pull requests (Fwd: [wildfly-dev] How to
contribute pull requests to WildFly)
Date: Thu, 16 Jul 2015 16:30:26 -0500
From: David M. Lloyd <david.lloyd(a)redhat.com>
To: The Core <thecore(a)redhat.com>
I recently sent this to wildfly-dev. I think however that these
guidelines are probably applicable to many, if not all, of our git-based
OSS projects, so if any of you are interested in this topic, here you go.
-------- Forwarded Message --------
Subject: [wildfly-dev] How to contribute pull requests to WildFly
Date: Thu, 16 Jul 2015 16:08:08 -0500
From: David M. Lloyd <david.lloyd(a)redhat.com>
To: WildFly Dev <wildfly-dev(a)lists.jboss.org>
Since migrating to git, and a review-oriented contribution structure,
we've seen a massive improvement in the quality and quantity of
community contribution in both WildFly itself and its affiliated
projects. However, while we have lots of documents about how to get the
WildFly code, hack on it, use git to make a branch, submit a pull
request, etc., one thing rarely (if ever) talked about is any kind of
standards as to how to actually build and structure your pull requests,
so I'm going to establish that right now.
I have a few reasons for doing this. First, the reviewers are stretched
thin - very thin. This has several bad effects, including (but not
limited to):
* Pull requests sitting in the queue for extended periods of time
* Giant pull requests getting less review than small pull requests
* Pull requests receiving highly variable quality-of-review
Secondly, we see problematic pull requests in a wide variety of shapes
and sizes, from the single-unreviewable-mega-commit PR to the
thousand-tiny-commit PR to the mixed-form-and-function-commit PR, as
well as some stealthier cases like the build-is-broken-between-commits PR.
Thirdly, the project has reached a size and scope where we really need
to have more eyes on every change - as many as possible in fact.
To that effect, and borrowing some concepts heavily from the Linux
Kernel project's documentation [1], I offer this. You're welcome.
(For background on how to get started with contribution, the hacking
guide [2] is still the best place to start; after any initial
discussion, I'll probably throw this up alongside that.)
WildFly Contribution and Pull Request Standards
-----------------------------------------------
While a complete git tutorial is far, far out of scope of this guide,
there are a few important rules and guidelines to adhere to when
creating a pull request for WildFly or one of its constituent or related
sub-projects.
1) Describe the pull request adequately.
The description *should* include a JIRA number directly from the project
in question, whose corresponding JIRA issue will in turn have been
linked to the pull request you are just now creating. The description
*should* also include a decent, human-readable summary of what is
changing. Proper spelling and grammar is a plus!
2) Make sure it builds and tests pass *first*.
It is highly annoying to reviewers when they find they've spent a great
deal of time reviewing some code only to discover that it doesn't even
compile. In particular, it's common for a patch to trip CheckStyle if
it hadn't been previously compile-tested at the least.
While it is tempting to rely on the automated CI/GitHub integration to
do our build and test for us (and I'm guilty of having done this too),
it generally just causes trouble, so please don't do it!
3) Separate your changes - but not *too* much.
This comes directly from [1], and I agree with it 100%:
"Separate each _logical change_ into a separate patch.
"For example, if your changes include both bug fixes and performance
enhancements for a single driver, separate those changes into two
or more patches. If your changes include an API update, and a new
driver which uses that new API, separate those into two patches.
"On the other hand, if you make a single change to numerous files,
group those changes into a single patch. Thus a single logical change
is contained within a single patch.
"The point to remember is that each patch should make an easily understood
change that can be verified by reviewers. Each patch should be justifiable
on its own merits.
"If one patch depends on another patch in order for a change to be
complete, that is OK. Simply note "this patch depends on patch X"
in your patch description.
"When dividing your change into a series of patches, take special care to
ensure that [WildFly] builds and runs properly after each patch in the
series. Developers using "git bisect" to track down a problem can end up
splitting your patch series at any point; they will not thank you if you
introduce bugs in the middle.
"If you cannot condense your patch set into a smaller set of patches,
then only post say 15 or so at a time and wait for review and integration."
I also want to emphasize how important it is to separate *functional*
and *non-functional* changes. The latter category includes reformatting
(which generally should *not* be done without a strong justification).
4) Avoid massive and/or "stream of consciousness" branches
We all know that development can sometimes be an iterative process, and
we learn as we go. Nonetheless, we do not need or want a complete
record of all the highs and lows in the history of every change (for
example, an "add foobar" commit followed later by a "remove foobar"
commit in the same PR) - particularly for large changes or in large
projects (like WildFly proper). It is good practice for such change
authors to go back and rearrange and/or restructure the commits of a
pull request such that they incrementally introduce the change in a
logical manner, as one single conceptual change per PR.
If a PR consists of dozens or hundreds of nontrivial commits, you will
want to strongly consider dividing it up into multiple PRs, as PRs of
this size simply cannot be effectively reviewed. They will either be
merged without adequate review, or outright ignored or closed. Which
one is worse, I leave to your imagination.
5) Pay attention and respond to review comments
While in general it is my experience that WildFly contributors are good
about this, I'm going to quote this passage from [1] regardless:
"Your patch will almost certainly get comments from reviewers on ways in
which the patch can be improved. You must respond to those comments;
ignoring reviewers is a good way to get ignored in return. [...]
"Be sure to tell the reviewers what changes you are making and to thank them
for their time. Code review is a tiring and time-consuming process, and
reviewers sometimes get grumpy. Even in that case, though, respond
politely and address the problems they have pointed out."
In addition, when something needs to be changed, the proper manner to do
so is generally to modify the original commit, not to add more commits
to the chain to fix issues as they're reported. See (4).
6) Don't get discouraged
It may come to pass that you have to iterate on your pull request many
times before it is considered acceptable. Don't be discouraged by this
- instead, consider that to be a sign that the reviewers care highly
about the quality of the code base. At the same time though, consider
that it is frustrating for reviewers to have to say the same things over
and over again, so please do take care to provide as high-quality
submissions as possible, and see (5)!
7) You can review code too!
You don't have to be an official reviewer in order to review a pull
request. If you see a pull request dealing with an area you are
familiar with, feel free to examine it and comment as needed. In
addition, *all* pull requests need to be reviewed for basic
(non-machine-verifiable) correctness, including noticing bad code, NPE
risks, and anti-patterns as well as "boring stuff" like spelling and
grammar and documentation.
8) On major refactorings
When doing major and/or long-term refactors, while rare, it is possible
that the above constraints become impractical, especially with regard to
grouping changes. In this case, you can use a work branch on a (GitHub)
fork of WildFly, applying the above rules in micro-scale to just that
branch. In this case you could possibly ask a reviewer to also review
some or all of the pull requests to that branch. Merge commits would
then be used to periodically synchronize with upstream.
In this way, when the long-term branch is ready to "come home" to the
main branch, the reviewers may have a good idea that the (potentially
quite numerous) changes in the work branch have been reviewed already.
[1] https://www.kernel.org/doc/Documentation/SubmittingPatches
[2] https://developer.jboss.org/wiki/HackingOnWildFly
--
- DML
_______________________________________________
wildfly-dev mailing list
wildfly-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/wildfly-dev
--
- DML
New or noteworthy in hawkular-parent 18
by Peter Palaga
Hi *,
New or noteworthy in hawkular-parent 18 [1]:
* WF BoM upgraded to 9.0.0.Final
* WF Plugin configured to use our WF version
* Added gatling-maven-plugin
* Added Jackson Core
* Removed JBoss Logging, EJB and Servlet APIs from Parent dependency
management because they are managed in the WF BoM. Note that the WF BoM
manages these as provided, thus making them non-transitive; as a
consequence, they need to be added explicitly in some places.
I have sent PRs with an upgrade to parent 18 to all consuming repos.
Note that in HK main, I sent it to dev/inventory-0.2.0.Alpha1 branch.
Best,
Peter
[1] https://github.com/hawkular/hawkular-parent-pom/commits/18