[GSoC] Hawkular Android Client
by Artur Dryomov
Hi everyone,
This year I will be working on the Hawkular Android client application as
part of the Google Summer of Code 2015 program.
The application itself will use the Hawkular API and the AeroGear SDK. In the coming
days I'll research these areas, especially the documentation, and will try to
sketch an initial architecture and basic design.
Thank you all for this opportunity!
Artur.
1 year, 3 months
Hawkular Metrics Openshift Containers
by Matt Wringe
I have a new subproject in Hawkular Metrics which sets up the creation of
components for OpenShift/Fabric8
(https://github.com/hawkular/hawkular-metrics/pull/200).
There are three main parts:
- Cassandra: creates a custom seed provider to support
ReplicationControllers in Kubernetes, and creates a folder/zip archive which
can be used to generate a Docker image. It may make sense to move the
Cassandra parts out into a separate project.
- Hawkular Metrics: creates a folder/zip archive which can be used to
generate a Docker image.
- Kubernetes: pulls everything together into a single Kubernetes
application. Can be used to deploy an application zip into Fabric8 (via
drag and drop in the web console or via the Maven plugin) or to deploy all
the components into OpenShift via the kubernetes.json configuration file.
The Docker images are not created and deployed to a Docker registry as
part of the build; the build just creates a folder you can run the
docker build from. None of the Maven Docker plugins I looked at seemed
to work properly, so doing the build (and pushing to a registry) is still
a manual process. It's something that needs to be improved.
The Cassandra service currently only supports adding new nodes to a
cluster and not removing them via the ReplicationController. This is due
to the replication factor being set to be 1 by default (which means when
a node is removed, so is the data it contained).
I believe the docker subproject of Hawkular Metrics is obsolete and can
be removed
(https://github.com/hawkular/hawkular-metrics/tree/master/docker), but
someone please correct me if I am wrong. Its scripts refer to
the console, which no longer exists as part of the project.
- Matt
1 year, 3 months
Tenant Id - Not Part of URL
by Stefan Negrea
Hello Everybody,
I've been working on a PR for the upcoming Hawkular Metrics release that will remove the tenant id from the end-point URLs. The tenant id will be moved to either a header parameter or a query parameter. The query parameter is in place for cases (such as curl) where setting a header is not possible, difficult, or inconvenient.
Here is an example of the change:
Existing URL:
/{tenantId}/gauge/{metricId}/data
New URL:
/gauge/{metricId}/data
Tenant id set via:
1) header - tenantId
2) query parameter - tenantId
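As a quick sketch of the new style, a request could be built like this (the host and context path are assumptions; the header and query parameter names follow the list above):

```python
import urllib.request

# Assumed base URL; adjust the host, port, and context path for your deployment.
BASE = "http://localhost:8080/hawkular/metrics"

def gauge_data_request(metric_id, tenant_id, use_header=True):
    """Build a request for /gauge/{metricId}/data with the tenant id passed
    via the tenantId header, or via the tenantId query parameter for clients
    (such as curl scripts) where setting a header is inconvenient."""
    url = "{}/gauge/{}/data".format(BASE, metric_id)
    if use_header:
        return urllib.request.Request(url, headers={"tenantId": tenant_id})
    return urllib.request.Request("{}?tenantId={}".format(url, tenant_id))
```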
There are two exceptions to this rule: /tenants and /db/{tenantId}/series. The /tenants end-point will be changed into something different in the upcoming releases, since it is mostly a management-type API that does not belong in the same place as the regular metrics endpoints. And the /db/{tenantId}/series end-point is needed in this exact format for compatibility with InfluxDB-compatible services.
Now, to the merits of this change. The first issue is that the tenant id is volatile: it can change at any time, and changes to it should be expected, whereas the rest of the URL is fixed. The second issue is that the tenant id is a security concern, so we were limited in design choices while a security concern was leaking as part of the URL.
Removing the tenant id from the URL gives us permanent and consistent addresses for resources (metrics and metric data points), and we gain a lot of flexibility on the security side. In the future, users could authenticate with a user/pass combo and the backend would transform that into a tenant id to be used on the request. If the same user later decides to pass a tenant id along with the request, the URLs of the resources would not change. Another expectation is that a tenant id alone is not sufficient; it is typically a combo of id + secret, so we would have had to resort to a header or query param for the second piece of information (the secret) anyway.
This change will give us the flexibility to adjust the security model (the meaning of tenant ids and ways to validate them) without compromising the URL structure. This will help Hawkular Metrics as it gets integrated into more and more projects and products.
Here are the links to the JIRA and the PR for this change:
https://github.com/hawkular/hawkular-metrics/pull/202
https://issues.jboss.org/browse/HWKMETRICS-68
Thank you,
Stefan Negrea
Software Engineer
1 year, 3 months
New and noteworthy in hawkular-parent 25
by Peter Palaga
Hi *,
hawkular-parent 25 brings the following:
* srcdeps-maven-plugin 0.0.5
* delivers the promises falsely made for 0.0.4:
* less console output
* built without tests
* wildfly-maven-plugin 1.1.0.Alpha4
I have sent PRs to all components repos.
Thanks,
Peter
_______________________________________________
hawkular-dev mailing list
hawkular-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/hawkular-dev
1 year, 3 months
[metrics] Internal stats?
by Heiko W.Rupp
Hey,
what internal stats of the Hawkular metrics do we currently collect?
I think Joel did some work for the C* part.
What I think we need is
- number of data points stored on a per tenant basis.
Resolution could be something like "last minute" or
"last 5 minutes", i.e. not realtime updates in the table.
- Total number of data points (i.e. sum over all tenants)
- Query stats. This is probably more complicated, as
querying on metrics that are still in some buffer is
cheaper than over 3 years of raw data.
To get started I'd go with the number of queries per tenant and globally.
Those could perhaps be differentiated by endpoint:
- raw endpoint
- stats endpoint
- What about alerting? More alert definitions certainly
need more CPU, so the number of alert definitions per tenant
and in total would be another pair.
- Does the number of fired alerts also make sense?
The idea behind those is to get some usage figures of the
shared resource "Hawkular metrics" and then to be able to
charge them back onto individual tenants e.g. inside of
OpenShift.
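As an illustration only (the names and granularity are my assumptions, not an existing Hawkular API), the per-tenant counters above could be kept in coarse time buckets rather than updated in real time:

```python
import time
from collections import defaultdict

class TenantStats:
    """Hypothetical in-memory sketch of the per-tenant usage counters
    discussed above, bucketed per minute instead of updated in realtime."""

    def __init__(self, bucket_seconds=60):
        self.bucket_seconds = bucket_seconds
        # (bucket, tenant) -> number of data points stored
        self.stored = defaultdict(int)
        # (bucket, tenant, endpoint) -> number of queries ("raw" or "stats")
        self.queries = defaultdict(int)

    def _bucket(self, ts=None):
        ts = time.time() if ts is None else ts
        return int(ts // self.bucket_seconds)

    def record_stored(self, tenant, n=1, ts=None):
        self.stored[(self._bucket(ts), tenant)] += n

    def record_query(self, tenant, endpoint, ts=None):
        self.queries[(self._bucket(ts), tenant, endpoint)] += 1

    def total_stored(self):
        # total number of data points, i.e. the sum over all tenants
        return sum(self.stored.values())
```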
9 years
hosa status endpoint now secured behind openshift secret
by John Mazzitelli
If you are deploying HOSA using its makefile and you are using HOSA's status endpoint (Heiko :-) you might want to update your blogs on this), just a heads up that the /status endpoint is now secured behind credentials defined in an OpenShift secret. So if you point your browser to the new route, for example, you'll see that it asks you for a username/password now.
By default, the status endpoint is disabled, but the yaml our Makefile uses will enable it and put it behind a secret that is created for you. The credentials are fixed in the secret the makefile creates (see the config.yaml example file to know what they are; they are the same credentials that are in the secret), but you are free to base64-encode your own credentials in a secret and use that.
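Since Kubernetes/OpenShift secrets carry base64-encoded values, rolling your own credentials amounts to something like this (the key names and values here are placeholders; check the secret the makefile creates for the real ones):

```python
import base64

# Placeholder credentials; not the defaults from the HOSA makefile.
username = "myuser"
password = "mypassword"

# Values in a Kubernetes secret's "data" section must be base64-encoded.
secret_data = {
    "username": base64.b64encode(username.encode()).decode(),
    "password": base64.b64encode(password.encode()).decode(),
}
```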
9 years, 1 month
Inventory and 'transient' servers
by Heiko W.Rupp
Hey,
when WildFly connects to Inventory for the 1st time, we sync
the EAP information into inventory, which also includes the information
about which metrics are available.
Now when WildFly is deployed into Kubernetes or OpenShift, we
will see that WildFly is started, syncs, and then dies at some point
in time; k8s will not re-use the existing one, but start
a new one, which will have a different FeedId.
This will leave a WildFly in Inventory that is later detected as
down in Metrics/Alerting, but the entry in Inventory will stay
forever. Consequences are:
- Inventory will get "full" with no-longer needed information
- Clients will retrieve data about non-"usable" servers
We need to figure out how to deal with this situation like e.g.:
- have inventory listen on k8s events and remove the server
when k8s removes it (not when it is going down; stopped pods
can stay around for some time)
- Have a different way of creating the feed id / server id so that
the state is "re-used". Something like feedId/server name could
be the name of the deployment config + version + k8s tenant.
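The second option could be as simple as deriving the feed id deterministically (a sketch; the exact fields and separator are assumptions):

```python
def stable_feed_id(deployment_config, version, namespace):
    """Derive a feed id from the deployment config name, its version, and
    the k8s namespace ("tenant"), so a restarted pod maps back to the same
    Inventory entry instead of registering a brand new server."""
    return "{}/{}/{}".format(namespace, deployment_config, version)
```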
Thoughts?
9 years, 1 month
Hawkular drill down on calls
by Kavin Kankeshwar
Hi,
I am using the Hawkular JVM agent to send metrics to the Hawkular controller,
but the one thing which I cannot do is drill down into where the time was
spent, i.e. drill down to the class which was taking the time (something like
the AppDynamics agent does).
So I wanted to check if I am missing something or if the feature is not yet
possible in Hawkular.
Regards,
--
Kavin.Kankeshwar
9 years, 1 month
Ability to group by datapoint tag in Grafana
by Gareth Healy
The OpenShift Agent, when monitoring a Prometheus endpoint, creates a single
metric with tagged datapoints, i.e.:
https://github.com/coreos/etcd/blob/master/Documentation/v2/metrics.md#http-requests
I1228 21:02:01.820530 1 metrics_storage.go:155] TRACE: Stored [3]
[counter] datapoints for metric named
[pod/fa32a887-cd08-11e6-ab2e-525400c583ad/custom/etcd_http_received_total]:
[
{2016-12-28 21:02:01.638767339 +0000 UTC 622 map[method:DELETE]}
{2016-12-28 21:02:01.638767339 +0000 UTC 414756 map[method:GET]}
{2016-12-28 21:02:01.638767339 +0000 UTC 33647 map[method:PUT]}
]
But when trying to view this via the Grafana datasource, only one metric and
the aggregated counts are shown. What I'd like to do is something like the
below:
{
"start": 1482999755690,
"end": 1483000020093,
"order": "ASC",
"tags": "pod_namespace:etcd-testing",
"groupDatapointsByTagKey": "method"
}
Search via tags or name (as-is) and group the datapoints by a tag key,
which would give you three lines instead of one.
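A sketch of what groupDatapointsByTagKey could do on the server side (the datapoints mirror the etcd trace above; the field names are assumptions):

```python
from collections import defaultdict

# Datapoints for a single metric, each carrying a "method" tag, as in the
# etcd_http_received_total trace above.
datapoints = [
    {"ts": 1482958921638, "value": 622,    "tags": {"method": "DELETE"}},
    {"ts": 1482958921638, "value": 414756, "tags": {"method": "GET"}},
    {"ts": 1482958921638, "value": 33647,  "tags": {"method": "PUT"}},
]

def group_by_tag_key(datapoints, tag_key):
    """Split one metric's tagged datapoints into one series per value of
    tag_key, instead of aggregating everything into a single line."""
    series = defaultdict(list)
    for dp in datapoints:
        series[dp["tags"].get(tag_key)].append((dp["ts"], dp["value"]))
    return dict(series)
```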
Does that sound possible?
Cheers.
9 years, 1 month