[inventory] Reading and querying metric type data type
by Lukas Krejci
Hi all,
while working on [1] I came across an inconsistency in how the data type of a
metric type (i.e. gauge, availability, counter, ...) is stored and presented.
When you read a metric type using the REST API, you get something like this
back (some fields omitted for brevity):
{
  "path" : "/t;tnt/mt;myMetricType",
  "unit" : "NONE",
  "type" : "GAUGE",
  "collectionInterval" : 0,
  "id" : "myMetricType"
}
The metric data type is called "type" here (somewhat confusingly) and contains
the value in upper case.
If you wanted to filter by it, prior to the fix for the above-mentioned JIRA,
you had to use:
/traversal/...;propertyName=__metric_data_type;propertyValue=gauge
Notice 2 things:
* the name of the property is different
* the value is in lower case
Now the question is how to bring order to this mess. Because the ideal fix
will break the format of the data, I'd like to discuss this with the rest of
the team so that we arrive at a compromise that means the least amount of work
for all the inventory clients.
There are 2 obvious problems here:
1) the mismatch between the property name in the JSON output and the property
name used when querying
2) the mismatch of the letter case in JSON output and queried property value
For 1) I'd like to rename BOTH the JSON output and the queried property name to
"metricDataType". We can't use "type" because it's already taken to mean the
type of the entity, and we can't use "__metric_data_type" because it's
inconsistent with the rest of the properties.
For 2) I'd like to consolidate on the upper case usage because that's the
default Jackson serialization of the enum.
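If consolidated as proposed, the read and the query sides would finally line up. A sketch of the resulting format (hypothetical, this is the proposal, not the current API):

```
{
  "path" : "/t;tnt/mt;myMetricType",
  "unit" : "NONE",
  "metricDataType" : "GAUGE",
  "collectionInterval" : 0,
  "id" : "myMetricType"
}

/traversal/...;propertyName=metricDataType;propertyValue=GAUGE
```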
Both of these (and especially 1)) can break existing clients.
What would be the path of least resistance for your use case?
[1] https://issues.jboss.org/browse/HWKINVENT-192
Thanks,
--
Lukas Krejci
9 years, 6 months
Hawkular agent for Karaf?
by Thomas Cunningham
Hi,
I've been talking to Heiko about getting the Hawkular agent to work on
Karaf for the Fuse team.
Currently there are a number of JON plugins for Fuse that collect metrics and
expose operations through JMX for the components being monitored (Karaf,
Camel, SwitchYard, CXF, ActiveMQ, Fabric, etc.). There are a few components
that also ship on top of EAP (Camel, SwitchYard) that would need to be
monitored as well, but I think I'd like to get the Karaf portion finished
first.
I'd like to create OSGi bundles for the parts of hawkular-agent that we
need, maybe create a features file for installing the agent, and then
figure out a way to get the agent started so we can collect some data.
Could someone help point me in the right direction? I've got a bunch of
questions ..
How does the agent get started on EAP?
Which parts of hawkular-agent are wildfly-specific? Which parts would I
need for Karaf to get the agent started?
Would it be beneficial to OSGi-enable hawkular-agent by turning the JARs it
creates into OSGi bundles?
Does hawkular-agent have a lot of dependencies?
Thanks for your help in advance - look forward to working with you on this.
Tom
9 years, 6 months
Getting rid of agent/server password in clear
by Heiko W.Rupp
Hey Mazz and Juca,
tl;dr: we need to get rid of clear text passwords in standalone.xml
For the Docker builds, I can run (pseudocode):
docker run -e HAWKULAR_USER=jdoe -e HAWKULAR_PASSWORD=password pilhuhn/hawkular-services
The startup in the image takes care that jdoe is added to the users.properties
file for JAAS, and the agent gets those env-variables as user/password so it
can talk to the server (see also below).
== Agent side
I recall that in the agent installer you added some way of 'obfuscating' the
password. I wonder if that exists in / can be added to the agent proper, so
that the password is not in standalone.xml in the clear, and I can pass
-e HAWKULAR_PASS_HASH=dhfadfhsdfadsfads
instead of the password. The agent would then send base64(hash(user +
password-hash)) to the server, which does the same with its local data and
compares whether the base64 matches.
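The proposed exchange could be sketched like this (SHA-256 and the exact concatenation are assumptions on my part, not an agreed design):

```python
import base64
import hashlib


def credential_token(user: str, password_hash: str) -> str:
    """Token the agent would send instead of a clear-text password.

    Sketch of the proposed scheme: base64(hash(user + password-hash)).
    The hash function (SHA-256 here) and concatenation are placeholders.
    """
    digest = hashlib.sha256((user + password_hash).encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")


def server_accepts(user: str, stored_password_hash: str, received_token: str) -> bool:
    # The server recomputes the token from its local data and compares.
    return credential_token(user, stored_password_hash) == received_token
```

The key point is that only the pre-hashed value ever appears in the environment or in standalone.xml, never the password itself.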
Remember that docker inspect <container id> lets you see env-variables
"Env": [
  "HAWKULAR_BACKEND=remote",
  "HAWKULAR_PASSWORD=password",
== Server side
Passing in the password like above to set up the server is equally bad
(perhaps a tiny bit less, as the server is usually inside a more secured area
than the agents). Here I can easily replace the call to add-user.sh in the
startup script with some "add user + password if not exists" logic, and the
env-variable gets passed in what add-user.sh would compute and add.
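That "add user + password if not exists" step could be sketched like so (a simplified stand-in for what the startup script would do around add-user.sh; the real file location and hash format are whatever add-user.sh produces):

```python
from pathlib import Path


def add_user_if_absent(properties_file: Path, user: str, password_hash: str) -> bool:
    """Idempotently add 'user=hash' to a JAAS users.properties-style file.

    Returns True if the user was added, False if it already existed.
    """
    lines = properties_file.read_text().splitlines() if properties_file.exists() else []
    existing = {
        line.split("=", 1)[0]
        for line in lines
        if "=" in line and not line.startswith("#")
    }
    if user in existing:
        return False  # already set up, leave the stored hash untouched
    lines.append(f"{user}={password_hash}")
    properties_file.write_text("\n".join(lines) + "\n")
    return True
```

Running it twice with the same user is a no-op, which is what makes container restarts safe.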
9 years, 6 months
Integration of APM into Hawkular Services
by Gary Brown
Hi
Hawkular APM is currently built as a separate distribution independent from other Hawkular components. However in the near future we will want to explore integration with other components, such as Alerts, Metrics and Inventory.
Therefore I wanted to explore the options we have for building an integrated environment, to provide the basis for such integration work, without impacting the more immediate plans for Hawkular Services.
The two possible approaches are:
1) Provide a maven profile as part of the Hawkular Services build that will include the APM server. The UI could be deployed separately as a war, or possibly integrated into the UI build?
2) As suggested by Juca, the APM distribution could be built upon the hawkular-services distribution.
There are pros/cons with both approaches:
My preference is option (1), as it moves us closer to a fully integrated hawkular-services solution, but it relies on a separate build using the profile (I'm not sure if that would result in a separate release distribution).
Option (2) would provide the full distribution as a release, but the downside is the size of the distribution (and its dependencies, such as Cassandra) when a user is only interested in APM. It's unclear whether a standalone APM distribution will still be required in the future - at present the website is structured to support this.
Thoughts?
Regards
Gary
9 years, 6 months
[Inventory] What constitutes a "syncable" change of an entity?
by Lukas Krejci
Hi all,
tl;dr: This probably only concerns Mazz and Austin :)
The subject is a little bit cryptic, so let me explain - this deals with
inventory sync and what to consider a change that is worth being synced on an
entity.
Today, whether an entity is updated during sync depends on whether some of its
"vital", or rather "identifying", properties change. Namely:
* Feed: only the ID and the hashes of child entities are considered
* ResourceType: only the ID and the hashes of configs and child operation
types are considered
* MetricType: id + data type + unit
* OperationType: id + hashes of contained configs (return type and param types)
* Metric: id
* Resource: id + hashes of contained metrics, contained resources, config and
connection config
From the above, one can see that not all changes to an entity will result in
the change being synchronized during the /sync call, because for example an
addition of a new generic property to a metric doesn't make its identity hash
change.
I'm starting to think this is not precisely what we want to happen during the
/sync operation.
On one hand, I think it is good that we can still claim 2 resources are
identical because their "structure" is the same, regardless of what the
generic properties on them look like (because anyone can add arbitrary
properties to them). This enables us to do the ../identical/.. magic in
traversals.
On the other hand, the recent discussion about attaching an h-metric ID as a
generic property to a metric iff it differs from its id/path in inventory got
me thinking. In the current setup, if the agent reported that it changed the
h-metric ID for some metric, the change would not be persisted, because /sync
would see the metric as the same (because changing a generic property doesn't
change the identity hash of the metric).
I can see 3 solutions to this:
* formalize the h-metric ID in some kind of dedicated structure in inventory
that would contribute to the identity hash (i.e. similar to the
"also-known-as" map I proposed in the thread about the h-metric ID)
* change the way we compute the identity hash and make everything on an
entity contribute (I'm not sure I like this, since it would limit the
usefulness of ../identical/.. traversals)
* compute 2 hashes - one for tracking the identity (i.e. the one we have
today) and a second one for tracking changes in content (i.e. one that would
consider any change)
Fortunately, none of the above is a huge change. The scaffolding is all there
so any of the approaches would amount to only a couple of days work.
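The third option could look roughly like this (a sketch only; the real implementation would hash the canonical entity structure, and the Metric fields here follow the list above):

```python
import hashlib


def _h(*parts: str) -> str:
    # Helper: stable hash over an ordered sequence of string parts.
    return hashlib.sha256("|".join(parts).encode("utf-8")).hexdigest()


def identity_hash(metric: dict) -> str:
    # For a Metric, only the id is "identifying" (per the list above), so a
    # changed generic property leaves this hash, and thus /identical/, intact.
    return _h(metric["id"])


def content_hash(metric: dict) -> str:
    # Everything contributes, including generic properties, so /sync could
    # use this hash to notice e.g. a changed h-metric ID stored as a property.
    props = metric.get("properties", {})
    return _h(metric["id"], *(f"{k}={v}" for k, v in sorted(props.items())))
```

Two metrics with the same id but different generic properties would then share an identity hash while differing in content hash, which is exactly the distinction /sync needs.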
WDYT?
--
Lukas Krejci
9 years, 6 months
Command Gateway - zip file is empty
by Juraci Paixão Kröhling
Team,
I was working on a change to hservices.t.g.r.c when I noticed a set of
exceptions, which might have been happening for days now. Basically,
they seem to be caused by this, for hawkular-command-gateway-war.war:
> Caused by: java.util.zip.ZipException: zip file is empty
This happens for the latest master, and only on that specific
environment: it works fine locally on my laptop and works fine on Docker
for Heiko. I couldn't find a reasonable answer for this.
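One quick way to check whether the war on that host is actually truncated (an assumption based on the exception message) would be a small triage script:

```python
import zipfile
from pathlib import Path


def triage_war(war: Path) -> str:
    """Classify a deployment archive as 'missing', 'empty', 'corrupt' or 'ok'."""
    if not war.exists():
        return "missing"
    if war.stat().st_size == 0:
        # This is the case that makes java.util.zip report "zip file is empty".
        return "empty"
    return "ok" if zipfile.is_zipfile(war) else "corrupt"
```

Running this against hawkular-command-gateway-war.war in the deployments directory on hawkular02 would tell whether the build produced a bad artifact or the copy step truncated it.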
A consequence is that the command-gateway endpoints are not working on
that environment, as the deployment failed. The agent, for instance,
calls the following URL quite often:
http://hawkular02:8080/hawkular/command-gateway/feed/ .
Does anyone have any idea what might be wrong? Did I miss something
in the logs?
Here's a complete server.log:
https://paste.fedoraproject.org/404906/47075585/
The OS is RHEL 7.2, but I'm not sure how relevant that is.
9 years, 6 months
deployments on server group
by Jiri Kremser
Hi,
today's Hawkular inventory + WF agent combo has no information about the
deployments of server groups. So my question is: what approach should I use
if we want to visualize the deployments, and possibly datasources, as members
of a server group?
When using jboss-cli.sh I can see that a server running in domain mode
has a deployment subsystem under /server-group=main-server-group, and I
can deploy a war into it and check that it's there. But I can't see
anything like that in the inventory, neither for the server group resource
[1] nor for the member "domain server" [2].
Do I have to somehow change the configuration of the WF agent subsystem in
host.xml to report those deployments? I assumed the installer would do that
when running with --target-config=.../host.xml
BTW, I used this command for deploying a war:
deploy /home/jkremser/workspace/ticket-monster/demo/target/ticket-monster.war --all-server-groups
[1]:
http://localhost:18080/hawkular/inventory/deprecated/feeds/master.Unnamed...
[2]:
http://localhost:18080/hawkular/inventory/deprecated/feeds/master.Unnamed...
jk
9 years, 6 months
H-Services vs. src-deps
by Juraci Paixão Kröhling
Team,
It seems there's a small confusion about src-deps and H-Services.
H-Services is a module that is released every week, provided that there's
at least one new commit since the last release.
The src-deps plugin is very helpful for our daily work, as it allows us
to use a given commit as a dependency for our modules, but it's not
appropriate for released artifacts, as it kind of breaks the stability
promise, common in Maven, that "a released artifact uses only released
artifacts". Besides, I believe there were problems in the past
between src-deps and the maven-release-plugin.
So, avoid sending PRs with src-deps to H-Services. If for some reason
you really need to, switch it to a proper version before Tuesday morning.
- Juca.
9 years, 6 months
Metric under Resource Type
by Austin Kuo
Hi,
I posted my feed, resources and metrics to inventory with /bulk api.
But the error shows:
"errorMsg" : "Entity of type 'Metric' cannot be created under an entity of
type 'Resource'."
Why is this not allowed, given that I can do it with the normal create-entity API?
How can I create a metric under a certain resource with the bulk API?
Thanks!
9 years, 6 months
deploy / undeploy / remove
by John Mazzitelli
(This is a long email - it is for people involved in deploying applications via the Hawkular WildFly Agent. If you are writing UI code to deploy/undeploy, I suggest you read it. Send questions/comments - because I'm not even sure if I understand everything 100%)
I just realized why there is confusion regarding how to tell the agent to deploy/undeploy/remove applications.
There are different operations on different resources that do apparently the same thing. But I think this is mainly due to the fact that the agent is trying to support both standalone and domain modes and to make things easier on the clients.
On the "WildFly Server" resource (which is the standalone server), these operations are defined by the agent:
<operation-dmr name="Deploy" internal-name="deploy"/>
<operation-dmr name="Undeploy" internal-name="undeploy"/>
On the "Host Controller" resource (which is only in domain mode), there are these same two operations defined.
So, you can deploy an application through the "WildFly Server" (standalone mode) and the "Host Controller" (domain mode) - you need to pass in content when you do this, which is why the DeployApplicationRequest JSON has to be used.
But notice you can also undeploy an application through those top level resources as well. You do this via UndeployApplicationRequest JSON.
Why don't we use ExecuteOperationRequest JSON here, but instead require these two special JSON commands? Because there are no "deploy" or "undeploy" DMR operations on those WildFly resources (i.e. the top-level server or host controller). There are, however, several deploy/undeploy-related DMR operations - like full-replace-deployment, upload-deployment-bytes, replace-deployment, etc. But rather than force the UI clients to know about all of these, the agent provides just the simple "Deploy" and "Undeploy", which do what you'd expect them to do. We make it easy by providing some flags in the DeployApplicationRequest JSON that let you say whether you want the deployment enabled, and the agent will always force the deployment (meaning: if there was already a deployment, the new deployment replaces the old). So the client doesn't have to fool around with the different combinations ("does this deployment already exist? No? Then just send in the content. Do you want it enabled? Deploy it, too. Yes, the deployment already exists? Then send an undeploy command first, then a remove command, then a deploy command - or whatever WildFly wants you to do, I'm not even sure").
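For illustration, a deploy request through the top-level server might look roughly like this (the field names below are hypothetical, shown only to make the flag discussion concrete; check the actual DeployApplicationRequest schema for the real ones):

```json
{
  "deployApplicationRequest" : {
    "resourcePath" : "/t;tenant/f;feed/r;wildfly-server",
    "destinationFileName" : "myapp.war",
    "enabled" : true
  }
}
```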
The agent is using the new WildFly maven core API to do the deploy and undeploy - so it's very easy to wrap our Deploy/Undeploy stuff around it; we let that library deal with all the details of how to do this. This, again, is why we don't use ExecuteOperationRequest - because if we did, the CLIENT would have to do all the stuff this WildFly maven core API does to get the content deployed.
OK, but it gets more confusing because now if you look at the deployment resources, there are ANOTHER set of operations dealing with deploy/undeploy and these DO USE ExecuteOperationRequest JSON to invoke them (because these really do just pass through to the WildFly management controller and execute as is).
For example, on any deployment inside a standalone server you will see these:
/deployment=hawkular-rest-api.war/:read-operation-names
{
  "deploy",
  "redeploy",
  "remove",
  "undeploy",
}
Very confusing to have "deploy" on a deployment! It is already deployed! I don't fully understand this, but I think this is to deploy an application whose content is uploaded but NOT enabled. You'll see that the deployment resource exists in the server, but it's not "enabled" because it isn't deployed. You "enable" it by deploying it.
And you'll see here that you can redeploy the app, remove it (that removes the content entirely, I believe - I think the entire deployment resource goes away when you do this), and undeploy it (which to me means "disable").
This is why you will see a separate set of deployment related operations defined in the agent's "Deployment" resource type:
<operation-dmr name="Redeploy" internal-name="redeploy"/>
<operation-dmr name="Remove" internal-name="remove"/>
<operation-dmr name="Undeploy" internal-name="undeploy"/>
Very confusing indeed, but these are meant to be executed via the ExecuteOperationRequest JSON, and they map one-to-one to the WildFly operations I showed above. Note there is no "Deploy" that we map - no reason for that other than it probably didn't make sense to us to add it. And I'm not even sure the client should be executing these operations IF IN DOMAIN MODE! You should be going through the host controller to deploy and undeploy applications when in domain mode.
Clear as mud?
So the next question is - which ones should we be using? Send ExecuteOperationRequest JSON to the "Deployment" resource itself? Or (Un)DeployApplicationRequest JSON to the top level Server/HostController?
To my mind, we should be using the (Un)DeployApplicationRequest JSON on the top level servers. I am almost thinking we should remove the <operation-dmr> definitions for the Redeploy/Remove/Undeploy on the Deployment resources themselves - you really should only be using those when in standalone mode (so your client now has to know to only call them if the server is in standalone mode) and it just adds a second way to do something which makes it confusing to know which to use.
I think there should be a design meeting with all the involved parties (developers, UI design, requirement authors) so we can figure out what we really need. I know this is in the PRD but quite honestly that's like reading the fine print of a legal document and my eyes glaze over before I finish the first paragraph.
9 years, 6 months