Integration of APM into Hawkular Services
by Gary Brown
Hi
Hawkular APM is currently built as a separate distribution, independent from other Hawkular components. However, in the near future we will want to explore integration with other components, such as Alerts, Metrics and Inventory.
Therefore I wanted to explore the options we have for building an integrated environment, to provide the basis for such integration work, without impacting the more immediate plans for Hawkular Services.
The two possible approaches are:
1) Provide a maven profile as part of the Hawkular Services build that will include the APM server. The UI could be deployed separately as a war, or possibly integrated into the UI build?
2) As suggested by Juca, the APM distribution could be built upon the hawkular-services distribution.
There are pros and cons to both approaches.
My preference is option (1), as it moves us closer to a fully integrated hawkular-services solution, but it relies on a separate build using the profile (I'm not sure whether that would result in a separate release distribution).
Option (2) would provide the full distribution as a release, but the downside is the size of the distribution (and its dependencies, such as Cassandra) when the user is only interested in APM. It is unclear whether a standalone APM distribution will still be required in the future - at present the website is structured to support this.
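For option (1), the profile in the hawkular-services parent pom might look roughly like this - the profile id and module name are just guesses for illustration, not the real build:

```xml
<!-- Sketch only: an opt-in profile that pulls the APM server module
     into the hawkular-services build. Names are hypothetical. -->
<profile>
  <id>apm</id>
  <modules>
    <module>hawkular-apm-server</module>
  </modules>
</profile>
```

which would then be built with something like `mvn clean install -Papm`.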
Thoughts?
Regards
Gary
[Inventory] What constitutes a "syncable" change of an entity?
by Lukas Krejci
Hi all,
tl;dr: This probably only concerns Mazz and Austin :)
The subject is a little cryptic, so let me explain - this deals with
inventory sync and what counts as a change to an entity that is worth
syncing.
Today, whether an entity is updated during sync depends on whether some of its
"vital", or rather "identifying", properties change. Namely:
Feed: only ID and the hashes of child entities are considered
ResourceType: only ID and hashes of configs and child operation types are
considered
MetricType: id + data type + unit
OperationType: id + hashes of contained configs (return type and param types)
Metric: id
Resource: id + hashes of contained metrics, contained resources, config and
connection config
From the above, one can see that not all changes to an entity will result in
the change being synchronized during the /sync call, because, for example,
adding a new generic property to a metric doesn't make its identity hash
change.
I'm starting to think this is not precisely what we want to happen during the
/sync operation.
On one hand, I think it is good that we can still claim 2 resources are
identical because their "structure" is the same, regardless of what the
generic properties on them look like (because anyone can add arbitrary
properties to them). This enables us to do the ../identical/.. magic in
traversals.
On the other hand, the recent discussion about attaching an h-metric ID as a
generic property to a metric iff it differs from its id/path in inventory got
me thinking. In the current setup, if the agent reported that it changed the
h-metric ID for some metric, the change would not be persisted, because /sync
would see the metric as the same (changing a generic property doesn't change
the identity hash of the metric).
I can see 3 solutions to this:
* formalize the h-metric ID in some kind of dedicated structure in inventory
that would contribute to the identity hash (i.e. similar to the
"also-known-as" map I proposed in the thread about h-metric ID)
* change the way we compute the identity hash and make everything on an
entity contribute (I'm not sure I like this, since it would limit the
usefulness of ../identical/.. traversals)
* compute 2 hashes - one for tracking identity (i.e. the one we have today)
and a second one for tracking changes in content (i.e. one that would
consider any change)
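The third option can be sketched in a few lines. The entity shape and field choices below are purely illustrative (a toy "Metric" with id + data type + unit plus free-form generic properties), not the real inventory model:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;
import java.util.TreeMap;

public class EntityHashes {

    static String sha256(String s) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(s.getBytes(StandardCharsets.UTF_8))) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Identity hash: only the "identifying" fields contribute (as today).
    static String identityHash(String id, String dataType, String unit) {
        return sha256(id + "|" + dataType + "|" + unit);
    }

    // Content hash: identifying fields plus every generic property, so any
    // change (e.g. a new h-metric-id property) would be visible to /sync.
    static String contentHash(String id, String dataType, String unit,
                              Map<String, String> props) {
        StringBuilder sb = new StringBuilder(id + "|" + dataType + "|" + unit);
        // TreeMap gives a stable key order, so the hash is deterministic.
        for (Map.Entry<String, String> e : new TreeMap<>(props).entrySet()) {
            sb.append("|").append(e.getKey()).append("=").append(e.getValue());
        }
        return sha256(sb.toString());
    }

    public static void main(String[] args) {
        Map<String, String> before = new TreeMap<>();
        Map<String, String> after = new TreeMap<>(Map.of("h-metric-id", "other-id"));
        // Same identity, different content: still ../identical/.., but
        // /sync would now notice the generic-property change.
        System.out.println(identityHash("m1", "GAUGE", "ms")
                .equals(identityHash("m1", "GAUGE", "ms")));       // true
        System.out.println(contentHash("m1", "GAUGE", "ms", before)
                .equals(contentHash("m1", "GAUGE", "ms", after))); // false
    }
}
```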
Fortunately, none of the above is a huge change. The scaffolding is all there,
so any of the approaches would amount to only a couple of days' work.
WDYT?
--
Lukas Krejci
Command Gateway - zip file is empty
by Juraci Paixão Kröhling
Team,
I was working on a change to hservices.t.g.r.c when I noticed a set of
exceptions, which might have been happening for days now. Basically,
they seem to be caused by this, for hawkular-command-gateway-war.war:
> Caused by: java.util.zip.ZipException: zip file is empty
This happens for the latest master, and only on that specific
environment: it works fine locally on my laptop and works fine on Docker
for Heiko. I couldn't find a reasonable answer for this.
A consequence is that the command-gateway endpoints are not working on
that environment, as the deployment failed. The agent, for instance,
calls the following URL quite often:
http://hawkular02:8080/hawkular/command-gateway/feed/ .
Does anyone have any idea what might be wrong? Did I miss something
from the logs?
Here's a complete server.log:
https://paste.fedoraproject.org/404906/47075585/
The OS is RHEL 7.2, but not sure how relevant this is.
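For what it's worth, that exact exception is what java.util.zip produces for a zero-byte archive, so a quick sanity check on the war in the deployments directory might narrow it down. A minimal sketch (the path at the bottom is hypothetical, adjust for the actual environment):

```java
import java.io.File;
import java.util.zip.ZipException;
import java.util.zip.ZipFile;

// Distinguish a missing/empty war from a corrupt one - i.e. what the
// deployment scanner trips over when it reports "zip file is empty".
public class WarCheck {

    static String diagnose(File war) {
        if (!war.exists() || war.length() == 0) {
            return "empty-or-missing"; // would yield ZipException on deploy
        }
        try (ZipFile zf = new ZipFile(war)) {
            return "ok:" + zf.size() + "-entries";
        } catch (ZipException e) {
            return "corrupt:" + e.getMessage();
        } catch (Exception e) {
            return "unreadable:" + e.getMessage();
        }
    }

    public static void main(String[] args) {
        // e.g. $WILDFLY_HOME/standalone/deployments/hawkular-command-gateway-war.war
        File war = new File(args.length > 0 ? args[0]
                : "hawkular-command-gateway-war.war");
        System.out.println(diagnose(war));
    }
}
```

If this prints "empty-or-missing" on hawkular02 but "ok" locally, the problem is in how the artifact gets onto that box rather than in the deployment itself.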
deployments on server group
by Jiri Kremser
Hi,
today's hawkular inventory+WFagent combo has no information about the
deployments of server groups. So my question is: what approach should I use
if we want to visualize the deployments, and possibly datasources, as members
of a server group?
When using jboss-cli.sh I can see that a server running in domain mode
has a deployment subsystem under /server-group=main-server-group, and I
can deploy a war into it and check that it's there. But I can't see
anything like that in the inventory, either for the server group resource
[1] or for the member "domain server" [2].
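For comparison, this is what I mean by "checking it's there" via jboss-cli (connected to the domain controller; the group name is from the example above):

```
/server-group=main-server-group:read-children-names(child-type=deployment)
/server-group=main-server-group/deployment=ticket-monster.war:read-resource
```

The first operation lists the deployments assigned to the group, the second shows the attributes of one of them - it's this view that seems to be missing from inventory.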
Do I have to somehow change the configuration of the WF agent subsystem in
host.xml to report those deployments? I assumed the installer should do that
when running with --target-config=.../host.xml
btw. I used this cmd for deploying a war:
deploy
/home/jkremser/workspace/ticket-monster/demo/target/ticket-monster.war
--all-server-groups
[1]:
http://localhost:18080/hawkular/inventory/deprecated/feeds/master.Unnamed...
[2]:
http://localhost:18080/hawkular/inventory/deprecated/feeds/master.Unnamed...
jk
H-Services vs. src-deps
by Juraci Paixão Kröhling
Team,
It seems there's a bit of confusion about src-deps and H-Services.
H-Services is a module that is released every week, given that there's
at least one new commit since the last release.
The src-deps plugin is very helpful for our daily work, as it allows us
to use a given commit as a dependency for our modules, but it's not
appropriate for released artifacts, as it kinda breaks the stability
promise common in Maven that "a released artifact uses only released
artifacts". Besides, I believe there were problems in the past
between src-deps and the maven-release-plugin.
So, avoid sending PRs with src-deps to H-Services. If for some reason
you really need to, switch it to a proper version before Tuesday morning.
- Juca.
Metric under Resource Type
by Austin Kuo
Hi,
I posted my feed, resources and metrics to inventory with /bulk api.
But the error shows:
"errorMsg" : "Entity of type 'Metric' cannot be created under an entity of
type 'Resource'."
Why is this not allowed, since I can do it with the normal create-entity api?
How can I create a metric under a certain resource with bulk api?
Thanks!
deploy / undeploy / remove
by John Mazzitelli
(This is a long email - it is for people involved in deploying applications via the Hawkular WildFly Agent. If you are writing UI code to deploy/undeploy, I suggest you read it. Send questions/comments - because I'm not even sure if I understand everything 100%)
I just realized why there is confusion regarding how to tell the agent to deploy/undeploy/remove applications.
There are different operations on different resources that do apparently the same thing. But I think this is mainly due to the fact that the agent is trying to support both standalone and domain modes, and to make things easier on the clients.
On the "WildFly Server" resource (which is the standalone server), these operations are defined by the agent:
<operation-dmr name="Deploy" internal-name="deploy"/>
<operation-dmr name="Undeploy" internal-name="undeploy"/>
On the "Host Controller" resource (which is only in domain mode), there are these same two operations defined.
So, you can deploy an application through the "WildFly Server" (standalone mode) and the "Host Controller" (domain mode) - you need to pass in content when you do this, which is why the DeployApplicationRequest JSON has to be used.
But notice you can also undeploy an application through those top level resources as well. You do this via UndeployApplicationRequest JSON.
Why don't we use ExecuteOperationRequest JSON here, but instead require these two special JSON commands? Because there are no "deploy" or "undeploy" DMR operations on those wildfly resources (i.e. the top level server or host controller). However, there are several deploy/undeploy related DMR operations - like full-replace-deployment, upload-deployment-bytes, replace-deployment, etc. But rather than force the UI clients to have to know about all of these, the agent provides just the simple "Deploy" and "Undeploy", which do what you'd expect them to do. We make it easy by providing some flags in the DeployApplicationRequest JSON to let you say whether you want it enabled or not, and the agent will always force the deployment (that means, if there was already a deployment, the new deployment replaces the old). So the client doesn't have to fool around with the different combinations ("does this deployment already exist? No? Then just send in the content. Do you want it enabled? Deploy it, too. Yes, the deployment already exists? Then send an undeploy command first, then send a remove command, then send a deploy command - or whatever WildFly wants you to do, I'm not even sure").
The agent is using the new wildfly maven core API to do the deploy and undeploy - so it's very easy to wrap that stuff around our Deploy/Undeploy stuff - we let that library deal with all the details of how to do this. This, again, is why we don't use ExecuteOperationRequest - because if we did, the CLIENT would have to do all the stuff this wildfly maven core API does to get content deployed.
OK, but it gets more confusing because now if you look at the deployment resources, there are ANOTHER set of operations dealing with deploy/undeploy and these DO USE ExecuteOperationRequest JSON to invoke them (because these really do just pass through to the WildFly management controller and execute as is).
For example, on any deployment inside a standalone server you will see these:
/deployment=hawkular-rest-api.war/:read-operation-names
{
"deploy",
"redeploy",
"remove",
"undeploy",
}
Very confusing to have "deploy" on a deployment!! It is already deployed! I don't fully understand this, but I think this is to deploy an application whose content is uploaded but is NOT enabled. You'll see that the deployment resource exists in the server, but it's not "enabled" because it isn't deployed. You "enable" it by deploying it.
And you'll see here you can redeploy the app, remove it (that removes the content entirely, I believe - I think the entire deployment resource goes away when you do this), and undeploy it (which to me means "disable").
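As an aside, the standalone-mode lifecycle above can be exercised directly from jboss-cli, which is a quick way to see the enabled/disabled distinction (the war name is just an example):

```
deploy myapp.war --disabled    # content uploaded; resource exists but not enabled
/deployment=myapp.war:deploy   # "enable" it
/deployment=myapp.war:redeploy # bounce it
/deployment=myapp.war:undeploy # disable; content stays on the server
/deployment=myapp.war:remove   # deployment resource and content go away
```

Those last four are exactly the operations listed by read-operation-names above.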
This is why you will see a separate set of deployment related operations defined in the agent's "Deployment" resource type:
<operation-dmr name="Redeploy" internal-name="redeploy"/>
<operation-dmr name="Remove" internal-name="remove"/>
<operation-dmr name="Undeploy" internal-name="undeploy"/>
Very confusing indeed, but these are meant to be executed via the ExecuteOperationRequest JSON, and they map one-to-one to WildFly's operations that I showed above. Note there is no "Deploy" that we map - there is no reason for that, other than it probably didn't make sense to us to add it. And I'm not even sure the client should be executing these operations IF IN DOMAIN MODE! You should be going through the host controller to deploy and undeploy applications when in domain mode.
Clear as mud?
So the next question is - which ones should we be using? Send ExecuteOperationRequest JSON to the "Deployment" resource itself? Or (Un)DeployApplicationRequest JSON to the top level Server/HostController?
To my mind, we should be using the (Un)DeployApplicationRequest JSON on the top level servers. I am almost thinking we should remove the <operation-dmr> definitions for the Redeploy/Remove/Undeploy on the Deployment resources themselves - you really should only be using those when in standalone mode (so your client now has to know to only call them if the server is in standalone mode) and it just adds a second way to do something which makes it confusing to know which to use.
I think there should be a design meeting with all the involved parties (developers, UI design, requirement authors) so we can figure out what we really need. I know this is in the PRD but quite honestly that's like reading the fine print of a legal document and my eyes glaze over before I finish the first paragraph.
Hawkular Metrics 0.18.0 - Release
by Stefan Negrea
Hello Everybody,
I am happy to announce release 0.18.0 of Hawkular Metrics. This release is
anchored by performance enhancements and a new internal job scheduler.
Here is a list of major changes:
1.
*InfluxDB API - REMOVED*
- The InfluxDB compatibility API has been removed from the code base.
- This was an addition to make project integrations easier. As the
REST interface matured, the role of the InfluxDB compatibility interface
was reduced to only serving as the Grafana interface. With the release of
the native Grafana plugin, this was no longer needed.
- For more details: HWKMETRICS-431
<https://issues.jboss.org/browse/HWKMETRICS-431>
2.
*Fetching Stats Data - Multiple Metrics - Experimental*
- Prior to this release, it was possible to fetch stats for only a
single metric type at a time. This release added the POST
/metrics/stats/query endpoint, which allows querying mixed-type
stats for multiple metrics.
- The endpoint accepts a list of metric ids and allows filtering by
providing start time, end time, sort order and limit, as well as the
typical stats options such as bucket duration, number of buckets, or
percentiles.
- For more details: HWKMETRICS-424
<https://issues.jboss.org/browse/HWKMETRICS-424>
3.
*Performance Enhancements*
- All the JAX-RS handlers are now singletons. This reduces the GC
pressure and was a relatively simple change since the code was
completely stateless. The change led to a significant performance
increase. For more details: HWKMETRICS-437
<https://issues.jboss.org/browse/HWKMETRICS-437>
4.
*Job Scheduler - New Implementation - Experimental*
- The new internal job scheduler is by far the biggest contribution in
this release.
- This is the foundation for a number of features that will make
their way into upcoming releases; a few examples are metric aggregates,
adjustable data retention, or complex data purges.
- The implementation keeps the Hawkular Metrics server stateless so
scaling will be just as easy going forward, with zero additional
configuration.
- The job scheduler will be used only for internal tasks.
- For more details: HWKMETRICS-360
<https://issues.jboss.org/browse/HWKMETRICS-360>, HWKMETRICS-375
<https://issues.jboss.org/browse/HWKMETRICS-375>
*Hawkular Metrics Clients*
- Python: https://github.com/hawkular/hawkular-client-python
- Go: https://github.com/hawkular/hawkular-client-go
- Ruby: https://github.com/hawkular/hawkular-client-ruby
- Java: https://github.com/hawkular/hawkular-client-java
Release Links
Github Release:
https://github.com/hawkular/hawkular-metrics/releases/tag/0.18.0
JBoss Nexus Maven artifacts:
http://origin-repository.jboss.org/nexus/content/repositories/public/org/...
Jira release tracker:
https://issues.jboss.org/browse/HWKMETRICS/fixforversion/12330870
A big "Thank you" goes to John Sanda, Thomas Segismont, Mike Thompson, Matt
Wringe, Michael Burman, and Heiko Rupp for their project contributions.
Thank you,
Stefan Negrea
Approach for firebase push notification
by Anuj Garg
Hello all,
I was wondering whether we are going to provide the apk of the Hawkular
Android client through the Google Play store, or whether users will have to
compile it themselves. I assume the Play store case.
And if that is the case, then clients cannot use their own Google account
to set up push notifications for alerts, as the configuration file needs to
be inside the apk.
I suggest that hawkular provide one Firebase account instance, and all
hawkular servers will use the same one.
With the workflow I suggest, there will be no need to set up a unified
push server to provide notifications.
Steps:
- With any user creation on any hawkular server, a 32-byte ID will be
created that we can assume to be unique.
- Any client that signs in to that user will retrieve that string and
will register to it as a topic subscription.
- Whenever a new alert is created, it will fire an HTTP request to
Firebase with the unique ID as the topic and the server key provided by
hawkular built in.
- The rest of the work of manipulating the received alert will be handled
on the client side.
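For the topic-based flow sketched above, the request to Firebase would look roughly like the (legacy) FCM HTTP API below - the topic name, server key, and data payload keys are all placeholders, not a proposed schema:

```
POST https://fcm.googleapis.com/fcm/send
Content-Type: application/json
Authorization: key=<hawkular-provided server key>

{
  "to": "/topics/<32-byte-user-id>",
  "data": {
    "alertId": "example-alert-id",
    "severity": "HIGH"
  }
}
```

Every client subscribed to that user's topic would then receive the alert without any per-server push infrastructure.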
Please write your views on this.
Thanks
Anuj Garg