faster dev build of hawkular
by Jiri Kremser
Hello there,
there is a pending PR, https://github.com/hawkular/hawkular/pull/342,
that will change the way the -Pdev profile works. Currently, it creates the zip and gz archives, but most of us don't use them during development; the exploded directory is pretty much enough. So the archives won't be created anymore. If you need the zip/gz with the jdoe:password user precreated, use -Pdozip.
I am sending this so as not to break anyone's automated workflow. I asked Filip and it seems QE doesn't depend on it.
Perhaps the dozip profile can go away completely, then.
jk
Revisit resource naming + resource types for Alpha4
by Heiko W.Rupp
Hey,
we currently have a pretty ad-hoc resource naming scheme
that involves magic constants like "MI~R" or "AI~R",
and also sometimes square brackets '[' and ']', which are
even invalid characters (if not escaped) inside a URL [1].
A recent switch from resource naming with [] to one without
[] created a bit of a mess, as the metric names still expect
the [], and probably so do other clients that are not part of
the github.com/hawkular repositories.
We need to revisit and fix the resource types
(e.g. supply the WildFly base data from within the Hawkular server)
and the resources using them, including the names of
metric definitions, operation definitions, etc., in Alpha4,
before too much code relies on them.
Similarly, I believe that the resource info should not contain
the full type information every time - only if explicitly
requested. Clients should be able to get the type info
by following a link that is supplied in the resource info.
On top of that we need to publish how client writers
can get the data they need.
[1] https://issues.jboss.org/browse/HAWKULAR-491
What is the best way to build hawkular with the latest hawkular-metrics?
by Filip Brychta
Hello,
I have the following goal:
1 - build each hawkular-metrics PR
2 - build hawkular-dist using artifact from step 1
3 - run some tests
What could be the best and most reliable way to do this?
a) just build hawkular-dist with the hawkular-metrics snapshot from step 1
- I tried this and it was working for a while, but then it stopped working (I guess when those integration branches were introduced)
- is there any chance that this approach will ever be reliable?
b) build hawkular-dist using the latest snapshots for all components
- is this reliable?
c) any other option?
Or is the only reliable way to build hawkular-metrics, deploy it to a WildFly manually as a single component, and test it that way?
Thank you,
Filip
execute op
by John Mazzitelli
As part of the new UI-server-feed comm work, the following now works.
In our agent config, if a resource-type has an operation defined, you can execute that operation from end-to-end. I don't have the UI coded up - I mock out the UI with a simulated websocket client.
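(FWIW, the simulated client is nothing fancy - roughly the sketch below, using the plain javax.websocket client API. The ws:// URL in it is just a placeholder for illustration, not necessarily the real endpoint path, and this is not the actual test code.)

    import java.net.URI;
    import javax.websocket.ClientEndpoint;
    import javax.websocket.ContainerProvider;
    import javax.websocket.OnMessage;
    import javax.websocket.Session;

    @ClientEndpoint
    public class MockUiClient {

        @OnMessage
        public void onMessage(String message) {
            // the operation results (step 6 below) come back on this same websocket
            System.out.println("Received from server: " + message);
        }

        public static void main(String[] args) throws Exception {
            // the ws:// URL below is a placeholder, not necessarily the real endpoint
            Session session = ContainerProvider.getWebSocketContainer().connectToServer(
                    MockUiClient.class, new URI("ws://localhost:8080/hawkular/command-gateway/ui/ws"));
            session.getBasicRemote().sendText(
                    "ExecuteOperationRequest={\"resourceId\":\"mazztower~Local~/subsystem=hawkular-monitor\","
                    + "\"operationName\":\"Status\"}");
            Thread.sleep(10000); // keep the client alive long enough to receive the response
            session.close();
        }
    }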
These are the log messages that show it working:
1) The UI sends in the request over websocket - the content of the request looks like this:
ExecuteOperationRequest={"resourceId":"mazztower~Local~/subsystem=hawkular-monitor", "operationName":"Status"}
2) The server receives it over the websocket. Log message:
Received message from UI client [AjEP4Q3X0ViCalHvAsodkve92mshxsCxJTy9PQ9r]
3) And then puts it on the bus. Whichever server is currently connected to that feed has a bus listener for this particular command for that particular feed and picks it up. Log message from the bus listener:
Asking feed [mazztower] to execute operation [Status] on resource ID [mazztower~Local~/subsystem=hawkular-monitor]
4) That bus listener does what it needs to do - in this case, forwards the message to the appropriate feed/agent. Log message:
Attempting to send async message to [1] clients: [ExecuteOperationRequest={"resourceId":"mazztower~Local~/subsystem=hawkular-monitor","operationName":"Status"}]
5) The agent gets the message from its websocket. Log messages:
Received message from server
Received request to execute operation [Status] on resource [mazztower~Local~/subsystem=hawkular-monitor]
6) Once the operation is executed, the results are sent back to the server - these are logs back on the server again:
Received message from feed [mazztower]
Operation execution completed. Resource=[mazztower~Local~/subsystem=hawkular-monitor], Operation=[Status], Status=[OK], Message=["STOPPED"]
So you can see the server was told that the operation succeeded and what the results were, in Message.
Lots more to do, but the end-to-end flow is working. Need to support parameters next. Then I have to figure out how to do resource configuration using this same comm mechanism.
sequencing of feed-comm
by John Mazzitelli
Just sending this to document it :) Can we put these PlantUML documents somewhere where they can render? GitHub or someplace?
Anyway, this is the current impl: the responses coming back from the agent pass directly from the server to the UI websocket. This will not work long term - see below.
It will not work because the UI that submitted the request may not be connected to the same server that the feed is connected to. So we need to put the incoming response from the feed on the bus as a message, and a UI listener listening on the bus will send all messages to its UI. This may be difficult to do since UIs do not have an identifier the way feeds do (the feed ID); there are session IDs, but they change for each websocket connection. I'll figure something out :) Anyway, this is how it should work:
Could not connect to Cassandra ... - does it have to be a WARN with a stack trace?
by Peter Palaga
Hi *,
There are several occurrences of this in every Hawkular startup log:
WARN [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] (metricsservice-lifecycle-thread) Could not connect to Cassandra cluster - assuming its not up yet:
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.TransportException: [/127.0.0.1:9042] Cannot connect))
plus the stack trace.
So given that this happens during every HK startup, could we not
classify it as normal and change it to INFO without the stack trace?
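To be concrete, what I have in mind is roughly the following (just a sketch, not the actual MetricsServiceLifecycle code):

    import org.jboss.logging.Logger;

    public class CassandraRetryLogging {

        private static final Logger log = Logger.getLogger(CassandraRetryLogging.class);

        // Sketch only: log at INFO and pass just the exception's toString() instead of
        // the throwable itself, so no stack trace is printed while we keep retrying.
        static void cassandraNotUpYet(Exception cause) {
            log.info("Could not connect to Cassandra cluster - assuming it's not up yet: " + cause);
        }
    }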
I am ready to prepare a PR unless somebody raises a hand against that.
Thanks,
Peter
IFTTT alert notifications
by Jiri Kremser
Hello,
if you don't know IFTTT, it's a simple site where you can define a trigger condition and then some action if the condition is met. It's closed-source software (no need to pay, though), but the number of possible actions is enormous. I'd say it's the industry standard for this kind of problem. They provide a way to trigger the action manually by sending an HTTP POST to their API.
It's described here:
https://ifttt.com/maker
The POST request looks like this:
`curl -X POST -H "Content-Type: application/json" -d '{"value1":"1","value2":"2","value3":"foo"}' https://maker.ifttt.com/trigger/sdf/with/key/aabbccddeeffgghh`
where sdf is the name of the event (it must be defined via their website), aabbcc... is the secret token for the user, and the valueN fields in the JSON are arbitrary data you can then use inside the actions - for instance in the subject of the email, or whatever the action allows.
So, if we add a new alert notification that can do such a POST request, we get all these actions for free:
https://ifttt.com/recipes/do
Again, the if-condition-then-action rule must be defined via their website (the if-condition is called 'maker' in this case), and the actions use OAuth, so it asks for permission if needed. For instance, I use it for Pushbullet notifications, so it asked Pushbullet's "auth api" for permission.
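Just to illustrate, the notification side could be as simple as something like this in Java (event name, key, and values are placeholders, same as in the curl example above):

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class IftttMakerTrigger {

        // Fires a maker event; eventName and key are placeholders, the JSON carries the alert data.
        static void trigger(String eventName, String key, String json) throws Exception {
            URL url = new URL("https://maker.ifttt.com/trigger/" + eventName + "/with/key/" + key);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(json.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("IFTTT responded with HTTP " + conn.getResponseCode());
        }

        public static void main(String[] args) throws Exception {
            trigger("sdf", "aabbccddeeffgghh", "{\"value1\":\"1\",\"value2\":\"2\",\"value3\":\"foo\"}");
        }
    }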
wdyt?
jk
Parent POM and Wildfly BOM
by Thomas Segismont
Hi everyone,
I've been working on the changes needed in Metrics for the parent POM
upgrade to version 16 (the one introducing WildFly 9).
There are three things I noticed which I believe are worth sharing.
Firstly, beware that the WildFly guys have changed their philosophy about
the BOM: they now force the "provided" scope in the BOM and exclude all
the dependencies they think you shouldn't care about as an EE7
application developer.
On the one hand, it frees you from adding the provided scope declaration
in your application POM. On the other hand, if you use one of the
artifacts in tests, dependency resolution could suddenly be broken.
Secondly, our parent POM does not only declare the WildFly BOM in the
dependency management section, it also imports it. This means that all
our projects get forced versioning and scope, even if they are not
WildFly based.
Thirdly, a minor issue: the WildFly Maven plugin does not configure a
default WildFly version, which means that we are all forced to declare it
in the components' parent POMs. Like this in Metrics:
https://github.com/hawkular/hawkular-metrics/blob/master/pom.xml#L190-L196
Going forward, I propose that we no longer "import" the BOM in the
Hawkular parent and let components do it where needed, and that we
declare, in the parent, the WildFly version that the WildFly Maven
plugin should start.
Regards,
Thomas