Property location in the APM model
by Gary Brown
Hi
Currently the concept of a Property is associated with a trace fragment in the APM model. The fragment is the root object of the call trace that occurs within a particular service, so an end-to-end trace is composed of a set of fragments captured from the various services involved in the execution of a particular business transaction.
The reason for initially associating the Property with the fragment was that these properties represented contextual information extracted from the message contents (or headers) that could be used to search for the relevant trace (business transaction) instance. For example, the "order id" might be extracted and associated with the fragment, which would subsequently become associated with the end-to-end call trace.
However Zipkin and the OpenTracing standard are based on the concept of spans, which represent particular points in the trace, and don't have an equivalent 'fragment' concept. Therefore all 'properties' (binary annotations in Zipkin, tags in OpenTracing) are associated with each instrumentation point (a Node in the APM model). This is also because the information recorded in binary annotations/tags is not necessarily business contextual information; it can also be lower level.
So we are considering moving the APM Property concept from the Trace fragment to the Node, to align more closely with Zipkin/OpenTracing. However, when querying the various data types in APM, the properties would still be aggregated as before; the only difference is that the finer-grained association between Node and Property would now be maintained.
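For illustration, here is roughly how such a property ends up attached at an instrumentation point with the OpenTracing Java API (a minimal sketch; the "processOrder" operation and "order.id" tag names are just assumptions for the example):

import io.opentracing.Span;
import io.opentracing.Tracer;

public class OrderInstrumentation {

    private final Tracer tracer; // configured elsewhere

    public OrderInstrumentation(Tracer tracer) {
        this.tracer = tracer;
    }

    public void processOrder(String orderId) {
        Span span = tracer.buildSpan("processOrder").start();
        try {
            // The tag lives on this specific span (instrumentation point),
            // analogous to a Property on a Node in the APM model.
            span.setTag("order.id", orderId);
            // ... business logic ...
        } finally {
            span.finish();
        }
    }
}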
Let me know if this is an issue for anyone.
Regards
Gary
8 years, 5 months
Identification of WildFly in container in a Kube/Openshift env
by Heiko W.Rupp
Hey,
[ CC to Federico as he may have some ideas from the Kube/OS side ]
Our QE has opened an interesting case:
https://github.com/ManageIQ/manageiq/issues/9556
where at first I thought "WTF" at the title, but reading further it got more interesting.
Basically what happens is that, especially in environments like Kube/Openshift, individual containers/appservers are cattle and not pets: one goes down, gets killed, and you start a new one somewhere else.
Now the interesting questions for us are (first purely on the Hawkular side):
- How can we detect that such a container is down and will never come up with that id again (-> we need to clean it up in inventory)?
- Can we learn that for a killed container A, a freshly started container A' is the replacement, e.g. to continue with performance monitoring of the app or to re-associate relationships with other items in inventory? (Is that even something we want - again, that is cattle and not pets anymore.)
- Could eap+embedded agent perhaps store some token in Kube which is then passed when A' is started, so that A' knows it is the new A (e.g. feed id)? I guess that would not make much sense anyway, as for an app with three app servers all would get that same token.
Perhaps we should ignore that use case completely for now and tackle it differently, in the sense that we don't care about 'real' app servers, but rather introduce the concept of a 'virtual' server, where we only know via Kube that it exists and how many of them there are for a certain application (which is identified via some tag in Kube). Those virtual servers deliver data, but we don't really try to do anything with them 'personally', only indirectly via Kube interactions (i.e. map the incoming data to the app and not to an individual server). We would also not store the individual server in inventory, so there is no need to clean it up (again, no pets but cattle). In fact we could just use the feed-id as the Kube token (or vice versa).
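To make the "map the incoming data to the app via Kube" idea concrete, here is a rough sketch of how an incoming feed could be resolved to an application by asking Kubernetes for the pod's labels (assuming the fabric8 kubernetes-client; the "hawkular-app" label name and the pod-per-feed lookup are made up for illustration):

import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class AppResolver {

    // Map incoming data to an application by looking at the labels of the
    // pod that produced it, instead of tracking the individual (cattle)
    // server in inventory.
    public String resolveApp(String namespace, String podName) {
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            Pod pod = client.pods().inNamespace(namespace).withName(podName).get();
            if (pod == null) {
                return null; // pod already gone - cattle, not pets
            }
            // "hawkular-app" is a hypothetical label identifying the app
            return pod.getMetadata().getLabels().get("hawkular-app");
        }
    }
}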
We still need a way to detect that one of those cattle appservers is on Kube, and possibly either disable or re-route some of the lifecycle events onto Kubernetes (start in any case; stop probably does not matter whether the container dies because the appserver inside stops or because Kube just kills it).
8 years, 6 months
Hawkular Inventory 0.18.0.Final Released
by Lukas Krejci
Hi all,
I'm glad to announce the release of Hawkular Inventory 0.18.0.Final.
This release brings big improvements in inventory sync along with a couple of
other bugfixes and improvements. Namely:
* Sync now detects changes in all entity data, including general-purpose properties.
* Sync no longer requires the root entity to exist prior to its contents and sub-tree being synced.
* It is possible to sync only certain entity types, leaving the rest of a synced subtree as-is.
* The /hawkular/inventory/entity/.../treeHash REST endpoint works correctly.
* Bus integration no longer leaves lingering JMS connections behind.
* Metric type now contains a "metricDataType" property that will replace the confusingly named "type" in a future release. The values of the new property are lower-cased, which is in line with what Hawkular Metrics uses. The original property is still in the JSON payload, but clients are urged to start using the new, consistently named property.
* It is newly possible to filter (and sort) on the "metricDataType", "identityHash", "contentHash" and "syncHash" properties (see the example below).
Thanks go out to Jiri Kremser and John Mazzitelli for their big help with reviewing and designing the changes that went into this release.
--
Lukas Krejci
8 years, 6 months
travis failures and srcdep plugin - need inventory release
by John Mazzitelli
tl;dr: Lukas, I need an inventory release so I can avoid using the srcdep plugin.
I have a srcdep dependency in the agent to pull in a specific version of inventory.
It fails to compile on Travis, with some errors about generics that look to be a bug in javac - a newer version seems to fix it. So I changed the Travis config so the build runs on a different machine that gets a different Java version.
Travis then fails on the maven-enforcer-plugin with no details as to why:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-enforcer-plugin:1.4.1:enforce (enforce-rules) on project hawkular-inventory-parent: Some Enforcer rules have failed. Look above for specific messages explaining why the rule failed. -> [Help 1]
There are no specific messages above that explain the failure. So I can't get Travis to pass my PR build with an inventory srcdep dependency. Thus, I need an inventory release.
8 years, 6 months
[inventory] Reading and querying metric type data type
by Lukas Krejci
Hi all,
while working on [1] I came across an inconsistency in how the data type of a metric type (i.e. gauge, availability, counter, ...) is stored and presented.
When you read a metric type using the REST API, you get something like this back (some fields omitted for brevity):
{
  "path" : "/t;tnt/mt;myMetricType",
  "unit" : "NONE",
  "type" : "GAUGE",
  "collectionInterval" : 0,
  "id" : "myMetricType"
}
The metric data type is called "type" here (somewhat confusingly) and contains
the value in upper case.
If you wanted to filter by it, prior to the fix for the above-mentioned JIRA, you had to use:
/traversal/...;propertyName=__metric_data_type;propertyValue=gauge
Notice 2 things:
* the name of the property is different
* the value is in lower case
Now the question is how to bring order to this mess. Because the ideal fix will break the format of the data, I'd like to discuss this with the rest of the team so that we arrive at a compromise that requires the least amount of work for all the inventory clients.
There are 2 obvious problems here:
1) the mismatch between the property name in the JSON output and the property
name used when querying
2) the mismatch of the letter case in JSON output and queried property value
For 1) I'd like to rename BOTH the JSON output and the queried property name to "metricDataType". We can't use "type" because it's already taken to mean the type of the entity, and we can't use "__metric_data_type" because it's inconsistent with the rest of the properties.
For 2) I'd like to consolidate on the upper case usage because that's the
default Jackson serialization of the enum.
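For reference, a minimal sketch of that default behavior (the MetricDataType enum here is just a stand-in for the real inventory class):

import com.fasterxml.jackson.databind.ObjectMapper;

public class EnumSerializationDemo {

    // Stand-in for the real metric data type enum in inventory.
    enum MetricDataType { GAUGE, AVAILABILITY, COUNTER }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // By default Jackson serializes an enum as its constant name,
        // i.e. upper case here; this prints "GAUGE":
        System.out.println(mapper.writeValueAsString(MetricDataType.GAUGE));
    }
}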
Both of these (and especially 1)) can break existing clients.
What would be the path of least resistance for your use case?
[1] https://issues.jboss.org/browse/HWKINVENT-192
Thanks,
--
Lukas Krejci
8 years, 6 months
Hawkular agent for Karaf?
by Thomas Cunningham
Hi,
I've been talking to Heiko about getting the Hawkular agent to work on
Karaf for the Fuse team.
Currently there are a number of JON plugins for Fuse that collect metrics and expose operations through JMX for the components being monitored (Karaf, Camel, SwitchYard, CXF, ActiveMQ, Fabric, etc.). There are a few components that are also shipped on top of EAP (Camel, SwitchYard) that would need to be monitored as well, but I think I'd like to get the Karaf portion finished first.
I'd like to create OSGi bundles for the parts of hawkular-agent that we need, maybe create a features file for installing the agent, and then figure out a way to get the agent started so we can collect some data.
Could someone help point me in the right direction? I've got a bunch of questions:
How does the agent get started on EAP?
Which parts of hawkular-agent are wildfly-specific? Which parts would I
need for Karaf to get the agent started?
Would it be beneficial to OSGi-enable hawkular-agent by turning the JARs it creates into OSGi bundles?
Does hawkular-agent have a lot of dependencies?
Thanks in advance for your help - I look forward to working with you on this.
Tom
8 years, 6 months
Getting rid of agent/server password in clear
by Heiko W.Rupp
Hey Mazz and Juca,
tl;dr: we need to get rid of clear-text passwords in standalone.xml.
For the Docker builds I can run (pseudocode):
docker run -e HAWKULAR_USER=jdoe -e HAWKULAR_PASSWORD=password pilhuhn/hawkular-services
The startup in the image takes care that jdoe is added to the users.properties file for JAAS, and the agent gets those env variables as user/password, so the agent can talk to the server (see also below).
== Agent side
I recall that in the agent installer you have added some way of 'obfuscating' the password. I wonder if that exists / can be added to the agent proper, so that the password is not in standalone.xml in the clear and I can pass -e HAWKULAR_PASS_HASH=dhfadfhsdfadsfads instead of the password. The agent would then send base64(hash(user + password-hash)) to the server, which does the same with its local data and compares whether the base64 matches.
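A minimal sketch of that scheme, assuming SHA-256 as the hash (the exact hash function and concatenation are open questions):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class PassHashCheck {

    // Compute base64(hash(user + password-hash)) as described above.
    static String token(String user, String passwordHash) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] hashed = digest.digest(
                (user + passwordHash).getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(hashed);
    }

    // The server recomputes the token from its local data and compares
    // in constant time.
    static boolean matches(String received, String user, String localPasswordHash)
            throws Exception {
        return MessageDigest.isEqual(
                received.getBytes(StandardCharsets.UTF_8),
                token(user, localPasswordHash).getBytes(StandardCharsets.UTF_8));
    }
}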
Remember that docker inspect <container id> lets you see env variables:
"Env": [
    "HAWKULAR_BACKEND=remote",
    "HAWKULAR_PASSWORD=password",
== Server side
Passing in the password like above to set up the server is equally bad (perhaps a tiny bit less so, as the server is usually inside a more secured area than the agents). Here I can easily replace the call to add-user.sh in the startup script with some "add user + password if not exists" logic, and the env variable would carry what add-user.sh would compute and add.
8 years, 6 months