Some food for thought about improving the release of (large) features
by Heiko W. Rupp
Hey,
some of us just had a meeting to recap parts of the switch from
Inventory.v2 to .v3, where things went less smoothly (on the Java side)
than I expected.
We identified a few areas where we could improve:
- Timeouts. Some tests were failing on local machines but not on Travis
(and we have seen the reverse in the past as well). We need to stop
making assumptions about timing, as we cannot know the timing
characteristics of the target environments.
Similarly, the test against the live server was waiting 500 * a few
seconds for inventory(.old) to come up. Some waiting is good, but if
e.g. inventory does not come up within a reasonable time, we should
probably abort the test, as that may point to a real issue.
- Test reliability (the above is part of this). We need more unit and
integration tests, and we need to make them more reliable. During the
merge we saw test failures on developer machines while Travis was
green; it turned out this was due to timing. In the (RHQ) past we saw
test failures caused by test ordering. We should perhaps run our
(integration) tests in random order on purpose, since in reality users
will not run the code in the order we assume in tests either (yes,
that may make setup and tear-down more complex).
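Randomized ordering with a logged seed could look roughly like this (a generic Go sketch, not our actual test harness; logging the seed makes a failing order reproducible):

```go
package main

import (
	"fmt"
	"math/rand"
)

// shuffleTests returns the test names in a random order derived from seed.
// The same seed always yields the same order, so a failure seen in one
// run can be reproduced exactly by re-using the printed seed.
func shuffleTests(tests []string, seed int64) []string {
	out := make([]string, len(tests))
	copy(out, tests)
	r := rand.New(rand.NewSource(seed))
	r.Shuffle(len(out), func(i, j int) { out[i], out[j] = out[j], out[i] })
	return out
}

func main() {
	seed := int64(42) // in CI, derive from the clock and always print it
	order := shuffleTests([]string{"testCreate", "testRename", "testDelete"}, seed)
	fmt.Println("seed:", seed, "order:", order)
}
```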
- Making tests more end-to-end. Right now we have no idea (from the Java
side) what consequences e.g. renaming a resource in the agent has for
the display of that resource in ManageIQ. Luckily we already have the
ruby-gem tests that run against the live server. Perhaps we can extend
this into the MiQ test suite, so that it also tests against the latest
hawkular-services master. Or record some interactions of MiQ with
H-services via the gem and replay those interactions against the live
server (placeholders will be needed, but that is something cassettes
already support).
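The placeholder substitution that replaying recorded interactions needs could be sketched like this (a Go sketch with a made-up `<NAME>` placeholder syntax; the real cassette mechanism has its own conventions):

```go
package main

import (
	"fmt"
	"strings"
)

// expandPlaceholders fills placeholders in a recorded interaction
// (e.g. the server URL or tenant captured at record time) with the
// values for the live server the recording is replayed against.
func expandPlaceholders(recorded string, values map[string]string) string {
	for k, v := range values {
		recorded = strings.ReplaceAll(recorded, "<"+k+">", v)
	}
	return recorded
}

func main() {
	req := "GET <BASE_URL>/hawkular/inventory/resources?tenant=<TENANT>"
	out := expandPlaceholders(req, map[string]string{
		"BASE_URL": "http://live.example.com",
		"TENANT":   "t1",
	})
	fmt.Println(out)
}
```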
- Way of working for such sweeping changes: we discussed that in this
case it could be good to do the work in a series of feature branches
that use src-deps, so that the feature branches, all applied together,
give the desired new state. Only when all of that is good do we send
pull requests, merge the full stream of work into master, and get
releases of the components out.
7 years, 8 months
plan for HOSA to store inventory collected by Java Agent
by John Mazzitelli
The first use-case for needing HOSA to collect and store inventory has come in - from the Fuse team, specifically. This email is going to explain the current plan to get this to happen. Feel free to chime in with thoughts.
Right now, HOSA does not support collecting and storing inventory. HJA (and HWFA) supports collecting and storing inventory along with metrics.
HOSA has the privileges necessary to write directly to Origin Metrics. HJA does not.
We need to combine these to get what we need (and we need this soon because Fuse needs it).
Currently, the idea is:
a) HJA will have a new storage adapter mode "HOSA" (right now it has "HAWKULAR" and "METRICS" - the former supports running with a full Hawkular server, such as when running within CloudForms, and the latter is for running with just an H-Metrics server). With this new HOSA mode, HJA will not store metrics or inventory directly into H-Metrics. Instead it will "cache" the data and wait for HOSA to come and ask for it.
b) HOSA will have a new endpoint type "hawkular" (right now it has "prometheus", "jolokia", and "json"). This new type will tell HOSA to read data from some HJA (which will include inventory data), and HOSA will simply pass that data on to Origin Metrics. HOSA will read both metrics and inventory from HJA and store that data directly to Origin Metrics, essentially making HOSA a proxy to Origin Metrics.
I don't know if HOSA will decorate the data with its own metadata (like tags and things) or if it will truly be a pass-through. I suspect HOSA will need to massage the data it gets from HJA to ensure the data IDs are unique across the OpenShift cluster and things like that.
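One hypothetical way to make the IDs unique across the cluster (illustrative only; no scheme has been decided) would be to qualify each ID with its namespace and pod before forwarding:

```go
package main

import "fmt"

// qualifyID sketches how HOSA could massage IDs from a cached HJA feed
// to be unique across an OpenShift cluster: prefix them with namespace
// and pod name before storing to Origin Metrics. Purely illustrative.
func qualifyID(namespace, pod, id string) string {
	return fmt.Sprintf("%s/%s/%s", namespace, pod, id)
}

func main() {
	fmt.Println(qualifyID("fuse", "broker-1", "jvm.heap.used"))
}
```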
That's the overall idea. I'm sure there will be stumbling blocks along the way that force us to change some things around, but at a high level I think it should work. HJA caches inventory and metrics rather than writing directly to an H-Metrics server, and HOSA collects that cached data from HJA and stores it in Origin Metrics.
HEADS UP: Hawkular Services is now on Inventory.v3
by Heiko W. Rupp
Hey,
sorry for not sending this email earlier.
Hawkular-services has, in master and also in release 0.36,
switched to the new Inventory.v3, which is Inventory on Metrics.
The old Inventory component (that uses a relational DB)
has been removed.
The Ruby client gem has been bumped to version 3.0.1
to cater for this change [1].
Agent(s) 1.0.cr1 are also using this new Inventory code.
Joel is currently preparing a show&tell to walk us through
the new Inventory version and will also update docs on
hawkular.org accordingly.
My thanks go to everyone involved in making this happen,
and especially to Joel, who did the majority of the work.
Heiko
[1] And HawkFX in master is also using that gem version now.
--
Registered address: Red Hat GmbH, Technopark II, Haus C,
Werner-von-Siemens-Ring 14, D-85630 Grasbrunn
Commercial register: Amtsgericht München HRB 153243
Managing directors: Charles Cachera, Michael Cunningham, Michael O'Neill,
Eric Shander
bug in MiQ / javaagent integration - needs to be fixed
by John Mazzitelli
There is a problem with the new Java agent and MiQ.
This problem is because the agent is no longer in WildFly's DMR tree of resources (because only WildFly subsystems are in the DMR tree).
Look here:
https://github.com/hawkular/hawkular-agent/blob/master/hawkular-javaagent...
So when the agent (java or subsystem agent, doesn't matter) is monitoring EAP over the DMR management interface, that's the metadata it uses. You can see we inject two non-WildFly attributes (immutable and in-container) under the WildFly Server type. But these come from the agent - which is why you see:
path: /subsystem=hawkular-wildfly-agent
But when running this as a java agent, that resource doesn't exist - there is no agent subsystem anymore. So the immutable and in-container attributes end up not getting defined under the WildFly Server resource.
The problem is that MiQ looks for those attributes under WildFly Servers to determine if the agent is in immutable mode and if it is running in a container.
In effect, we are trying to inject under a WildFly Server resource completely unrelated information (immutable and in-container attributes) - those are related to the AGENT, not the WildFly Server itself (i.e. you won't see those two attributes if you use the jboss-cli under the root "/" resource).
We probably should have had MiQ examine the AGENT resource for this (since "immutable" and "in container" are attributes on the agent, not the server). And we shouldn't define "immutable" and "in container" under the WildFly Server resource because that is depending on the fact that the agent is running as a subsystem. As a Java Agent, as you see, those attributes will never show any values because there is no agent subsystem.
MiQ can look at the agent attributes because the agent itself exposes immutable and in-container (because they ARE actual properties the agent has) in inventory. Under the agent resource in inventory you will see these:
https://github.com/hawkular/hawkular-agent/blob/master/hawkular-javaagent...
Note that we DO expose immutable and in-container under the Agent JMX resource in inventory also - and it has the same resource name as the subsystem agent (the name as you will see it in inventory):
https://github.com/hawkular/hawkular-agent/blob/master/hawkular-javaagent...
That is why I think the best/easiest thing to do is to get MiQ to examine the attributes on the resource where they really belong (and we should remove them from the WildFly Server type).
OpenShift OAuth authentication and authorization for Hawkular APM
by Lars Milland
Hi
It would be really great if functionality could be established for
Hawkular APM matching what exists for Hawkular Metrics on OpenShift,
where the metrics are stored per tenant/namespace and Hawkular's
security is integrated with the OAuth-based security model of
OpenShift.
Is that a requirement/feature that has been considered? Or would it maybe
already be possible to integrate the Hawkular APM components with the
OpenShift OAuth-based security? Even if the Hawkular APM storage and
security model does not fit the fully multi-tenant way of OpenShift, if
just the security model of a Hawkular APM installation could be connected
to the OpenShift OAuth model, then one Hawkular APM instance could be set
up with "service account tokens" used for sending metrics to the
instance, and users could log into the Hawkular APM UI with OpenShift
OAuth managed credentials, mapped to roles coming from the OAuth ticket -
much the same way that the security model of the OpenShift-integrated
Jenkins works; see:
https://github.com/openshift/jenkins-openshift-login-plugin
The current security model of APM is rather limited as far as I
understand - based solely on a single, manually fixed username/password,
used both for contributing application performance metrics/log entries
and for the Hawkular APM UI.
Best regards
Lars Milland
need a briefing on what the hawkular agent is now doing with respect to inventory
by John Mazzitelli
Joel,
Now that the new inventory-into-metrics is in master and released, I need to know what you did :-D I suspect others will want to know what you did too.
Is it possible for you to write something up or have a 15-minute Blue Jeans session to discuss how inventory is stored in H-Metrics?
I am going to need to know this because I have to implement it in GoLang for HOSA, unless you want to do it :)
--John Mazz
Add functionality to change password.
by Mohammad Murad
Hello
I'm a contributor to the Android Client of Hawkular.
On the Gitter channel I suggested that we should give the user an option
to change the password. Currently there is no REST API for that. This
would be very helpful if the credentials of the user are compromised.
Heiko W. Rupp asked me to suggest this here.
If I can help with this, please let me know.
Regards
M. Murad
GitHub <http://github.com/free4murad>
Hawkular Agent 1.0.0.CR1 has been released - inventory in metrics
by John Mazzitelli
Hawkular Agent 1.0.0.CR1 has been released.
This includes the new "inventory in metrics" feature - Hawkular-Inventory is no longer used to store inventory, the inventory is now stored in Hawkular-Metrics within Cassandra.
This is a *significant* change and needs people to beat on it heavily before we can claim victory (hence the CR1 designation rather than Final).
So, please grab it and use it when you need to use an agent.
If you find any bugs, please submit HWKAGENT JIRAs at https://issues.jboss.org/projects/HWKAGENT
--John Mazz
[this message was sent on April 24, 2017 at 9:19pm EDT]
playing with HOSA outside OS
by John Mazzitelli
I had a couple peeps ask me if they can run HOSA without needing to run it inside an OpenShift cluster (presumably to collect metrics from Prometheus and Jolokia-JMX endpoints that are also running outside of OpenShift). The answer is "yes" and if you are interested, here is a quick how-to.
First get a config.yaml used to configure HOSA (that's the wget command below - it just grabs the example config from github) and then run "docker run" to launch HOSA:
$ wget -O /tmp/config.yaml https://raw.githubusercontent.com/hawkular/hawkular-openshift-agent/maste...
$ docker run --net=host -v /tmp/config.yaml:/config.yaml hawkular/hawkular-openshift-agent --config=/config.yaml
This assumes you have a Hawkular-Metrics server (or a full Hawkular-Services server) running on 127.0.0.1 and listening on port 8080. If not, just edit config.yaml to point to your server. You can edit that config.yaml however you want.
By default, HOSA itself is a Prometheus endpoint and will collect its own metrics and store them (see config.yaml for its endpoint definitions). So by running HOSA you will automatically start getting "prometheus" data stored in your H-Metrics. You can add more endpoint definitions to the config to tell HOSA to collect from your own Prometheus and Jolokia-JMX endpoints.
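For instance, an extra Prometheus endpoint entry might look roughly like this (the field names below follow the shipped example config but are an assumption here - verify them against the example config.yaml you downloaded):

```yaml
endpoints:
- type: prometheus
  url: http://127.0.0.1:9090/metrics
  collection_interval: 30s
```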
You don't have to use docker - if you build the go executable locally (git clone the HOSA repo and "make build") you can run the executable directly. But it's easier to just docker run - no need to git clone, no need to install Go, no need to build anything.