[metrics] Internal stats?
by Heiko W. Rupp
Hey,
what internal stats of the Hawkular metrics do we currently collect?
I think Joel did some work for the C* part.
What I think we need is
- number of data points stored on a per-tenant basis.
  Resolution could be something like "last minute" or
  "last 5 minutes", i.e. not realtime updates in the table.
- Total number of data points (i.e. sum over all tenants)
- Query stats. This is probably more complicated, as
  querying on metrics that are still in some buffer is
  cheaper than querying over 3 years of raw data.
  To get started I'd go with # of queries per tenant and global.
Those could perhaps be differentiated on
- raw endpoint
- stats endpoint
- What about alerting? More alert definitions certainly
  need more CPU, so number of alert definitions per tenant
and total would be another pair.
- does number of fired alerts also make sense?
The idea behind those is to get some usage figures of the
shared resource "Hawkular metrics" and then to be able to
charge them back onto individual tenants e.g. inside of
OpenShift.
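A minimal sketch of the per-tenant counters described above (all names are hypothetical, this is not the actual Hawkular implementation; flushing to a table on a coarse schedule is left out):

```python
from collections import Counter, defaultdict

class TenantStats:
    """Hypothetical in-memory usage counters, aggregated per tenant.

    Counts would be flushed to a table on a coarse schedule
    (e.g. every minute or every 5 minutes), not in realtime.
    """

    def __init__(self):
        self.data_points = Counter()          # tenant -> stored data points
        self.queries = defaultdict(Counter)   # tenant -> endpoint -> query count

    def record_insert(self, tenant, n=1):
        self.data_points[tenant] += n

    def record_query(self, tenant, endpoint):
        # endpoint: "raw" or "stats", matching the differentiation above
        self.queries[tenant][endpoint] += 1

    def total_data_points(self):
        # total = sum over all tenants
        return sum(self.data_points.values())

stats = TenantStats()
stats.record_insert("tenant-a", 100)
stats.record_insert("tenant-b", 50)
stats.record_query("tenant-a", "raw")
print(stats.total_data_points())   # 150
```

The same shape would work for alert-definition counts; only the counter names change.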
Cross-Tenant endpoints in Alerting on OS
by Jay Shaughnessy
On 2/23/2017 6:05 PM, Matt Wringe wrote:
> Is there any reason why this being sent in private emails and not to a mailing list?
Matt, Not really, sending to dev-list for anyone interested in the
discussion...
> ----- Original Message -----
>> There was an IRC discussion today about $SUBJECT. Here is a summary of
>> a conversation Matt and I had to drill down into whether there was a
>> cross-tenant security concern with the Alerting API in OS. In short,
>> the answer seems to be no. Alerting (1.4+) offers two endpoints for
>> fetching cross-tenant: /alerts/admin/alerts and /alerts/admin/events.
>> Note that the 'admin' is just in the path, and was chosen just to group
>> what we deemed were admin-level endpoints, the first two of which are
>> these cross-tenant fetches. The 'admin' does not mean anything else in
>> this context, it does not reflect a special user or tenant. The way
>> these endpoints work is that they accept a Hawkular-Tenant HTTP
>> header that can be a comma-separated list of tenantIds, as with any
>> of the alerting endpoints. Alerting does not perform any security in
>> the request handling. But in OS the HAM deployments both have the OS
>> security filtering in place. That filtering does two things: for a
>> cluster-admin user it's basically a pass-thru, the CSL Hawkular-Tenant
>> header is passed on and the endpoints work. For all other users the
>> Hawkular-Tenant header is validated. Because each project name is a
>> tenant name, the value must match a project name. As such, the
>> validation fails if a CSL is supplied. This is decent behavior for now
>> as it prevents any undesired access. Note that as a corner-case, these
>> endpoints will work fine if the header just supplies a single tenant, in
>> which case they are basically the same as the typical single-tenant
>> fetch endpoints.
> What has happened is that Alerts now considers the Hawkular-Tenant header to contain not just a string, but a comma-separated list of strings.
>
> eg "Hawkular-tenant: projectA,projectB"
Note: not in general; comma-separated lists are handled only for the two
cross-tenant endpoints mentioned above.
> The OpenShift filter still considers this to be a string, so it will check with OpenShift if the user has permission to access the project named with a string value of "projectA,projectB". Since a project cannot have a ',' within its name, this check will always fail and return an access denied error.
>
> If the user is a cluster level user they are given access to everything, even impossibly named projects. So a cluster level user will happen to be able to use the current setup just due to how this works.
>
> So there doesn't appear to be any security issue that we need to deal with immediately, but we do probably want to handle this properly in the future. It might not be too difficult to add support to the tenant check to consider a CSL.
>
>> I'm not totally familiar with the Metrics approach to cross-tenant
>> handling but going forward we (Metrics and Alerting) should probably
>> look for some consistency, if possible. Moreover, any solution should
>> reflect what best serves OS. The idea of a CSL for the header is fairly
>> simple and flexible. It may be something to consider, for the OS filter
>> it would mean validating that the bearer has access to each of the
>> individual tenants before forwarding the request.
> I don't recall any meetings about adding multitenancy to Metrics. From what I recall, there are no plans at all to introduce multitenancy for metrics.
>
> If I was aware of this discussion when this was brought up for alerts, I would have probably objected to the endpoint being called 'admin' since I don't think that reflects what its true purpose is supposed to be. It's not really an admin endpoint, but an endpoint for cross-tenancy. I could have access to projectA and projectB, but not be an 'admin'.
>
> If we are making changes like this which affect security, I would really like to be notified so that I can make sure our security filters will function properly. Even if I am in the meeting when it's being discussed, it would be good to ping me on the PR with the actual implementation.
Of course. This stuff went in in mid November and at that time we (in
alerting) were really just getting settled with the initial integration
into metrics for OS. Going forward I think we have a better idea of
what is relevant to OS and can more easily flag items of import.
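The forward-looking idea of the OS filter validating a CSL can be sketched as follows (the real filter is not Python and these helper names are made up; this only illustrates the rule "a CSL passes only if the bearer has access to each individual tenant"):

```python
def validate_hawkular_tenant(header_value, accessible_projects, is_cluster_admin):
    """Validate a Hawkular-Tenant header that may be a comma-separated
    list of tenantIds (in OS, each tenant name is a project name).

    Cluster admins are a pass-thru; all other users must have access
    to *every* tenant in the list, not just one of them.
    """
    if is_cluster_admin:
        return True  # header forwarded as-is, endpoints work
    tenants = [t.strip() for t in header_value.split(",") if t.strip()]
    if not tenants:
        return False
    return all(t in accessible_projects for t in tenants)

# Corner case from above: a single tenant behaves like the usual check.
assert validate_hawkular_tenant("projectA", {"projectA"}, False)
# A CSL fails unless the user can access every project in it.
assert not validate_hawkular_tenant("projectA,projectB", {"projectA"}, False)
assert validate_hawkular_tenant("projectA,projectB", {"projectA", "projectB"}, False)
```

Today's behavior falls out of the same function if the whole CSL is treated as one (impossible) project name, which is why the current check always denies non-admins.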
Proposing closer integration of APM and "Hawkular metrics" on Kubernetes / OpenShift
by Heiko W. Rupp
Hi,
right now Hawkular metrics and Hawkular APM are going
relatively separate ways. This is in part due to the backend choice,
but probably also for other reasons.
I am proposing that we try to get the two closer together because in the
end neither tracing data alone nor classic monitoring data can answer
all the questions like:
APM
- why is my service XY slow (may be overload of the underlying CPU)
- how much disk will my service need in two years
- how much network usage did my service have yesterday
Classic monitoring
- which service will fail if I pull the plug here
- what are customers buying
- why is my service slow (may come from a dependency)
I am proposing that we integrate the two over the UI - in the first
scenario here the key driver is the APM UI with its trace diagrams
(red boxes). A click on such a box will then show related metrics from
the classic monitoring.
On the level of the individual pod, both APM and classic
'instrumentations' are present. For JVM-based apps this is on one side
the APM agent and/or APM instrumentation ("OT-instrumentation") (*a),
on the other side the Jolokia agent/agent bond (*b).

In this first scenario, APM and classic still have separate agents and
connections to the backends and different backend storage.
The 2nd scenario assumes that it is possible to use only one agent
binary that does both APM and classic metric export. For classic
metrics, Hosa will poll it via the P8s metrics protocol. And on top,
APM trace data will also be made available for grabbing by Hosa, which
will then forward it to the APM server.

Thoughts?
Heiko
*a) I propose to always deploy the APM agent to get quick and easy
coverage of standard scenarios, so that the user only needs explicit
instrumentation to increase granularity and/or to cover cases the agent
can't cover. Also, "manual" instrumentation should be able to use the
agent's connection to talk to the APM server.
*b) I think it would make sense to always use the Prometheus protocol
(and Hosa may learn how to use the more efficient binary protocol), as
Jolokia/HTTP is JVM/JMX specific, while P8s exporters also exist for
other environments like Node or Ruby.
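For reference, the P8s text exposition format that Hosa would poll is simple enough to sketch by hand - one HELP/TYPE comment pair per metric, then one sample line per label set (the metric name below is made up):

```python
def prometheus_exposition(metrics):
    """Render metrics in the Prometheus text exposition format:
    '# HELP' and '# TYPE' comment lines per metric, followed by
    name{labels} value sample lines."""
    lines = []
    for name, (help_text, mtype, samples) in metrics.items():
        lines.append(f"# HELP {name} {help_text}")
        lines.append(f"# TYPE {name} {mtype}")
        for labels, value in samples:
            label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
            if label_str:
                lines.append(f"{name}{{{label_str}}} {value}")
            else:
                lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

# Hypothetical per-pod counter as an exporter might expose it:
out = prometheus_exposition({
    "app_requests_total": ("Requests served.", "counter",
                           [({"pod": "my-app-1"}, 42)]),
})
print(out)
```

This is exactly the format that exporters for Node, Ruby, etc. also emit, which is the argument for standardizing on it over Jolokia/HTTP.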
more openshift issues
by John Mazzitelli
If I start openshift with "sudo ./openshift start" and then try to log in like this:
oc login -u system:admin
What would cause this:
Authentication required for https://192.168.1.15:8443 (openshift)
Username: system:admin
Password:
error: username system:admin is invalid for basic auth
When I start with "oc cluster up" I do not get asked for a password and it "just works".
openshift - using cluster up but building from source
by John Mazzitelli
Has anyone been able to use "oc cluster up --metrics" in order to run OpenShift Origin *and* Origin Metrics but running a local build (i.e. I need to pick up changes in master branch of Origin/Origin Metrics that aren't released yet).
The docs make it look very complicated, and nothing I found seems to help a dev get this up and running quickly without having to look at tons of docs and run lots of commands with bunches of yaml :).
I'm hoping it is easy, but not documented.
This link doesn't even mention "cluster up" let alone running with Origin Metrics: https://github.com/openshift/origin/blob/master/CONTRIBUTING.adoc#develop...
If I run "openshift start" - how do I get my own build of Origin Metrics to deploy, like "oc cluster up --metrics" does?
It seems no matter what I do, using "cluster up" pulls down images from docker hub. I have no idea how to run Origin+Metrics using a local build.
I'm hoping someone knows how to do this and can give me the steps.
Re: [Hawkular-dev] Test Account credentials for Hawkular Android Client
by Anuj Garg
Hello Pawan. I was the last maintainer of this Android client. Let's talk
on Hangouts for a detailed interaction if you are interested in maintaining this code.
On 21 Feb 2017 1:52 p.m., "Thomas Heute" <theute(a)redhat.com> wrote:
Then it would be localhost:8080 with myUsername and myPassword as defined
in step 3 of the guide.
On Tue, Feb 21, 2017 at 9:06 AM, Pawan Pal <pawanpal004(a)gmail.com> wrote:
> Hi,
> I set up my Hawkular server following this guide :
> http://www.hawkular.org/hawkular-services/docs/installation-guide/
>
> On Tue, Feb 21, 2017 at 11:37 AM, Pawan Pal <pawanpal004(a)gmail.com> wrote:
>
>> Hi all,
>> I would like to know the credentials of any testing account for Hawkular
>> android-client. I found jdoe/ password, but it is not working. Also please
>> give server and port.
>>
>> Thanks.
>>
>> --
>> Pawan Pal
>> *B.Tech (Information Technology and Mathematical Innovation)*
>> *Cluster Innovation Centre, University of Delhi*
>>
>
>
> --
> Pawan Pal
> *B.Tech (Information Technology and Mathematical Innovation)*
> *Cluster Innovation Centre, University of Delhi*
>
_______________________________________________
hawkular-dev mailing list
hawkular-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/hawkular-dev
Test Account credentials for Hawkular Android Client
by Pawan Pal
Hi all,
I would like to know the credentials of any testing account for Hawkular
android-client. I found jdoe/password, but it is not working. Also please
give server and port.
Thanks.
--
Pawan Pal
*B.Tech (Information Technology and Mathematical Innovation)*
*Cluster Innovation Centre, University of Delhi*