more openshift issues
by John Mazzitelli
If I start openshift with "sudo ./openshift start" and then try to log in like this:
oc login -u system:admin
What would cause this:
Authentication required for https://192.168.1.15:8443 (openshift)
Username: system:admin
Password:
error: username system:admin is invalid for basic auth
When I start with "oc cluster up", I do not get asked for a password and it "just works".
9 years, 1 month
openshift - using cluster up but building from source
by John Mazzitelli
Has anyone been able to use "oc cluster up --metrics" to run OpenShift Origin *and* Origin Metrics while using a local build? (I.e., I need to pick up changes in the master branches of Origin/Origin Metrics that aren't released yet.)
The docs make it look very complicated, and nothing I found seems to help a dev get this up and running quickly without having to look at tons of docs and run lots of commands with bunches of yaml :).
I'm hoping it is easy, but not documented.
This link doesn't even mention "cluster up" let alone running with Origin Metrics: https://github.com/openshift/origin/blob/master/CONTRIBUTING.adoc#develop...
If I run "openshift start" - how do I get my own build of Origin Metrics to deploy, like "oc cluster up --metrics" does?
It seems no matter what I do, using "cluster up" pulls down images from docker hub. I have no idea how to run Origin+Metrics using a local build.
I'm hoping someone knows how to do this and can give me the steps.
9 years, 1 month
Re: [Hawkular-dev] Test Account credentials for Hawkular Android Client
by Anuj Garg
Hello Pawan. I was the last maintainer of this Android client. Let's talk on
Hangouts for a more detailed discussion if you are interested in maintaining this code.
On 21 Feb 2017 1:52 p.m., "Thomas Heute" <theute(a)redhat.com> wrote:
Then it would be localhost:8080 with myUsername and myPassword as defined
in step 3 of the guide.
On Tue, Feb 21, 2017 at 9:06 AM, Pawan Pal <pawanpal004(a)gmail.com> wrote:
> Hi,
> I set up my Hawkular server following this guide :
> http://www.hawkular.org/hawkular-services/docs/installation-guide/
> On Tue, Feb 21, 2017 at 11:37 AM, Pawan Pal <pawanpal004(a)gmail.com> wrote:
>
>> Hi all,
>> I would like to know the credentials of any testing account for Hawkular
>> android-client. I found jdoe/ password, but it is not working. Also please
>> give server and port.
>>
>> Thanks.
_______________________________________________
hawkular-dev mailing list
hawkular-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/hawkular-dev
9 years, 1 month
Test Account credentials for Hawkular Android Client
by Pawan Pal
Hi all,
I would like to know the credentials of any test account for the Hawkular
Android client. I found jdoe / password, but it is not working. Please also
give the server and port.
Thanks.
--
Pawan Pal
*B.Tech (Information Technology and Mathematical Innovation)*
*Cluster Innovation Centre, University of Delhi*
9 years, 1 month
RxJava2 preliminary testing
by Michael Burman
Hi,
Yesterday evening and today I did some testing on how using RxJava2
would benefit us (I'm actually expecting more from RxJava 2.1, since it
has some enhanced parallelism features we might benefit from).
Short notes from the RxJava2 migration: it's more painful than I assumed.
The code changes can be small in terms of lines changed, but almost every
method has had its signature or behavior changed. So I've had to read the
documentation constantly while working and try to unlearn what I did in
RxJava1.
And all this comes with backwards-compatibility pressure for Java 6
(so you can't benefit from many Java 8 advantages). Reactive-Commons /
Reactor started from Java 8 to provide a cleaner implementation. Grr.
I wrote a simple write-path modification in PR #762 (metrics) that
writes gauges using the RxJava2-ported micro-batching feature. There's still
some RxJavaInterop use in it, so that might slow down the performance a
little bit. However, it is possible to merge these two code paths, and there
are also some other optimizations I think could be worth it.
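For readers unfamiliar with the idea, here is a minimal, self-contained sketch of micro-batching. This is a hypothetical simplification, not the actual PR #762 code: incoming writes are buffered and flushed in groups of up to a maximum batch size, so the datastore sees fewer, larger writes.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical micro-batcher: collects items and emits them as batches
// of at most maxBatchSize. A real implementation would also flush on a
// timeout so slow writers don't leave data sitting in the buffer.
class MicroBatcher<T> {
    private final int maxBatchSize;
    private final List<T> buffer = new ArrayList<>();
    private final List<List<T>> flushed = new ArrayList<>();

    MicroBatcher(int maxBatchSize) {
        this.maxBatchSize = maxBatchSize;
    }

    void submit(T item) {
        buffer.add(item);
        if (buffer.size() >= maxBatchSize) {
            flush(); // full batch: hand it to the writer in one go
        }
    }

    void flush() {
        if (!buffer.isEmpty()) {
            flushed.add(new ArrayList<>(buffer));
            buffer.clear();
        }
    }

    List<List<T>> batches() {
        return flushed;
    }
}
```

The win comes from amortizing per-write overhead (network round trips, statement preparation) across a batch, which is why the benefit shows up most when there are few concurrent writers per node.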
I'd advise against that, though; the merged code gets quite complex to read.
I would almost suggest that we do the MetricsServiceImpl/DataAccessImpl
merging by rewriting small parts at a time in the new class with RxJava2 and
making that call the old code through RxJavaInterop. That way we could move
slowly to the newer codebase.
I fixed the JMH benchmarks (they're not compiled in our CI and were
actually broken by some other PRs) and ran some tests. These are
tests that measure only the metrics-core-service performance and do not
touch the REST interface (or Wildfly) at all, thus giving a better
comparison of how our internal changes behave.
What I'm seeing is around a 20-30% difference in performance when writing
gauges this way, so this should offset some of the degradation we saw when
we improved error handling. I did run into HWKMETRICS-542
(BusyPoolException), so the tests were run with 1024 connections.
I'll continue with more testing next week, but so far I've shown that the
micro-batching features do improve performance in the internal processing,
especially when there's a small number of writers to a single node. Testing
those features could probably benefit from more benchmark tests without
Wildfly (which takes so much processing power that most performance
improvements can't be measured correctly anymore).
- Micke
9 years, 1 month
HOSA and conversion from prometheus to hawkular metrics
by John Mazzitelli
The past several days I've been working on an enhancement to HOSA that came in from the community (in fact, I would consider it a bug). I'm about ready to merge the PR [1] for this and do a HOSA 1.1.0.Final release. I wanted to post this to announce it and see if there is any feedback, too.
Today, HOSA collects metrics from any Prometheus endpoint which you declare - example:
metrics:
- name: go_memstats_sys_bytes
- name: process_max_fds
- name: process_open_fds
But if a Prometheus metric has labels, Prometheus itself considers each unique combination of labels an individual time series. This is different from how Hawkular Metrics works - each Hawkular Metrics metric ID (even if its metric definition or its datapoints have tags) is a single time series. We need to account for this difference. For example, if our agent is configured with:
metrics:
- name: jvm_memory_pool_bytes_committed
And the Prometheus endpoint emits that metric with a label called "pool" like this:
jvm_memory_pool_bytes_committed{pool="Code Cache",} 2.7787264E7
jvm_memory_pool_bytes_committed{pool="PS Eden Space",} 2.3068672E7
then to Prometheus this is actually 2 time series (the number of bytes committed per pool type), not 1. Even though the metric name is the same (what Prometheus calls a "metric family name"), there are two unique label combinations - one with "Code Cache" and one with "PS Eden Space" - so they are 2 distinct time series.
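To make the label handling concrete, a minimal parser for exposition-format lines like the two above could look like this. This is a hypothetical sketch that handles only the simple name{key="value",...} value shape, not the full Prometheus text format (no HELP/TYPE lines, timestamps, or escape sequences):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical parser for one Prometheus exposition line,
// e.g.  jvm_memory_pool_bytes_committed{pool="Code Cache",} 2.7787264E7
class PromLine {
    private static final Pattern LINE = Pattern.compile("(\\w+)\\{(.*)\\}\\s+(\\S+)");
    private static final Pattern LABEL = Pattern.compile("(\\w+)=\"([^\"]*)\"");

    final String family;                                   // metric family name
    final Map<String, String> labels = new LinkedHashMap<>(); // label -> value
    final double value;

    PromLine(String line) {
        Matcher m = LINE.matcher(line.trim());
        if (!m.matches()) {
            throw new IllegalArgumentException("unsupported line: " + line);
        }
        family = m.group(1);
        Matcher lm = LABEL.matcher(m.group(2));
        while (lm.find()) {
            labels.put(lm.group(1), lm.group(2));
        }
        value = Double.parseDouble(m.group(3));
    }
}
```

Each distinct label map parsed this way identifies one time series, which is exactly the granularity the agent needs to preserve.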
Today, the agent creates only a single Hawkular Metrics metric in this case, with each datapoint tagged with the Prometheus labels. But we don't want to aggregate them like that, since we lose the granularity the Prometheus endpoint gives us (that is, the number of bytes committed in each pool type). I think we might be able to get that granularity back through datapoint tag queries in Hawkular Metrics, but I don't know how well (if at all) that is supported, how efficient such queries would be even if supported, or how efficient storage would be if we tag every data point with these labels (I'm not sure that is the general purpose of tags in H-Metrics). Regardless, the fact that these really are different time series should (IMO) be represented as different time series (via metric definitions/metric IDs) in Hawkular Metrics.
To support labeled Prometheus endpoint data like this, the agent needs to split this one named metric into N Hawkular-Metrics metrics (where N is the number of unique label combinations for that named metric). So even though the agent is configured with the one metric "jvm_memory_pool_bytes_committed" we need to actually create two Hawkular-Metric metric definitions (with two different and unique metric IDs obviously).
The PR [1] that is ready to go does this. By default it creates multiple metric definitions/metric IDs of the form "metric-family-name{labelName1=labelValue1,labelName2=labelValue2,...}". If you want a different form, you can define an "id" and use "${labelName}" tokens in the ID you declare (such as "${oneLabelName}_my_own_metric_name_${theOtherLabelName}" or whatever). But I suspect the default format is what most people want, and thus nothing needs to be done. In the above example, two metric definitions with the following IDs are created:
1. jvm_memory_pool_bytes_committed{pool=Code Cache}
2. jvm_memory_pool_bytes_committed{pool=PS Eden Space}
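A rough sketch of the ID construction described above. The class and method names here are illustrative, not the agent's actual code, and I'm assuming labels are rendered in a stable (sorted) order:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical helper showing the two ID forms: the default
// "family{k1=v1,k2=v2}" format, and the custom "${labelName}" template.
class MetricId {
    // Default format: metric-family-name{labelName1=labelValue1,...}
    static String defaultId(String family, Map<String, String> labels) {
        if (labels.isEmpty()) return family;
        List<String> parts = new ArrayList<>();
        // TreeMap gives a deterministic label order for the ID.
        for (Map.Entry<String, String> e : new TreeMap<>(labels).entrySet()) {
            parts.add(e.getKey() + "=" + e.getValue());
        }
        return family + "{" + String.join(",", parts) + "}";
    }

    // Custom "id" support: replace each ${labelName} token with its value.
    static String templatedId(String template, Map<String, String> labels) {
        String out = template;
        for (Map.Entry<String, String> e : labels.entrySet()) {
            out = out.replace("${" + e.getKey() + "}", e.getValue());
        }
        return out;
    }
}
```

With the "pool" label from the example, defaultId yields the two IDs listed above, one per unique label combination.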
--John Mazz
[1] https://github.com/hawkular/hawkular-openshift-agent/pull/117
9 years, 1 month
Collecting PV usage ?
by Thomas Heute
Mazz,
in your metric collection adventure for HOSA, have you come across a way to see the
usage of PVs attached to a pod?
Users should be able to see (visualize) how much of a PV is used and
then be alerted if it reaches a certain %.
Thomas
9 years, 2 months
HOSA now limits amount of metrics per pod; new agent metrics added
by John Mazzitelli
FYI: New enhancement to Hawkular OpenShift Agent (HOSA).
To keep a misconfigured or malicious pod from flooding HOSA and H-Metrics with large amounts of metric data, HOSA now supports the setting "max_metrics_per_pod" (a setting in the agent's global configuration). Its default is 50. Any pod that asks the agent to collect more than that (summed across all of its endpoints) will be throttled, and only the maximum number of metrics will be stored for that pod. (Note: by "metrics" here I do not mean datapoints - this limits the number of unique metric IDs allowed to be stored per pod.)
If you enable the status endpoint, you'll see this in the yaml report when a max limit is reached for the endpoint in question:
openshift-infra|the-pod-name-73fgt|prometheus|http://172.19.0.5:8080/metrics: METRIC
LIMIT EXCEEDED. Last collection at [Sat, 11 Feb 2017 13:46:44 +0000] gathered
[54] metrics, [4] were discarded, in [1.697787ms]
A warning will also be logged in the log file:
"Reached max limit of metrics for [openshift-infra|the-pod-name-73fgt|prometheus|http://172.19.0.5:8080/metrics] - discarding [4] collected metrics"
(As part of this code change, the status endpoint was enhanced to now show the number of metrics collected from each endpoint under each pod. This is not the total number of datapoints; it is showing unique metric IDs - this number will always be <= the max metrics per pod)
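The throttling behavior can be sketched like this. This is a hypothetical simplification (HOSA itself is written in Go; names here are invented for illustration): the cap applies to unique metric IDs, so datapoints for already-accepted IDs keep flowing while new IDs beyond the limit are discarded and counted.

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical per-pod limiter: accepts at most maxMetricsPerPod unique
// metric IDs; further new IDs are discarded (and counted for the warning log).
class PodMetricLimiter {
    private final int maxMetricsPerPod;
    private final Set<String> accepted = new LinkedHashSet<>();
    private int discarded = 0;

    PodMetricLimiter(int maxMetricsPerPod) {
        this.maxMetricsPerPod = maxMetricsPerPod;
    }

    // Returns true if datapoints for this metric ID should be stored.
    boolean offer(String metricId) {
        if (accepted.contains(metricId)) {
            return true; // an already-tracked ID is never throttled
        }
        if (accepted.size() >= maxMetricsPerPod) {
            discarded++; // a new ID over the cap is dropped
            return false;
        }
        accepted.add(metricId);
        return true;
    }

    int acceptedCount() { return accepted.size(); }
    int discardedCount() { return discarded; }
}
```

With the default cap of 50, a collection that gathers 54 unique IDs would store 50 and discard 4, matching the shape of the status report above.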
Finally, the agent now collects and emits 4 metrics of its own (in addition to all the other "go" related ones like memory used, etc). They are:
1 Counter:
hawkular_openshift_agent_metric_data_points_collected_total
The total number of individual metric data points collected from all endpoints.
3 Gauges:
hawkular_openshift_agent_monitored_pods
The number of pods currently being monitored.
hawkular_openshift_agent_monitored_endpoints
The number of endpoints currently being monitored.
hawkular_openshift_agent_monitored_metrics
The total number of metrics currently being monitored across all endpoints.
All of this is in master and will be in the next HOSA release, which I hope to do this weekend.
9 years, 2 months
Hawkular Metrics 0.24.0 - Release
by Stefan Negrea
Hello,
I am happy to announce release 0.24.0 of Hawkular Metrics. This release is
anchored by a new tag query language and general stability improvements.
Here is a list of major changes:
- *Tag Query Language*
- A query language was added to support complex constructs for tag
based queries for metrics
- The old tag query syntax is deprecated but can still be used; the
new syntax takes precedence
- The new syntax supports:
- logical operators: AND, OR
- equality operators: =, !=
- value in array operators: IN, NOT IN
- existential conditions:
- tag without any operator is equivalent to = '*'
- tag preceded by the NOT operator matches only instances
without the tag defined
- all the values in between single quotes are treated as regex
expressions
- simple text values do not need single quotes
- spaces before and after equality operators are not necessary
- For more details please see: Pull Request 725
<https://github.com/hawkular/hawkular-metrics/pull/725>,
HWKMETRICS-523 <https://issues.jboss.org/browse/HWKMETRICS-523>
- Sample queries:
a1 = 'bcd' OR a2 != 'efg'
a1='bcd' OR a2!='efg'
a1 = efg AND ( a2 = 'hijk' OR a2 = 'xyz' )
a1 = 'efg' AND ( a2 IN ['hijk', 'xyz'] )
a1 = 'efg' AND a2 NOT IN ['hijk']
a1 = 'd' OR ( a1 != 'ab' AND ( c1 = '*' ) )
a1 OR a2
NOT a1 AND a2
a1 = 'a' AND NOT b2
a1 = a AND NOT b2
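As an illustration of the semantics only (this is not the server's parser), one of the sample queries, a1 = 'efg' AND ( a2 IN ['hijk', 'xyz'] ), can be modeled with hand-built predicates. Note how single-quoted values are treated as regex, per the syntax rules above:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.function.Predicate;
import java.util.regex.Pattern;

// Hypothetical predicates mirroring the tag-query semantics described above.
// A metric's tags are modeled as a simple Map<String, String>.
class TagMatch {
    // "=" with a single-quoted value: the value is a regex.
    static Predicate<Map<String, String>> eq(String tag, String regex) {
        return tags -> tags.containsKey(tag) && Pattern.matches(regex, tags.get(tag));
    }

    // "IN": the tag's value must be one of the listed values.
    static Predicate<Map<String, String>> in(String tag, String... values) {
        Set<String> allowed = new HashSet<>(Arrays.asList(values));
        return tags -> tags.containsKey(tag) && allowed.contains(tags.get(tag));
    }
}
```

AND/OR then map directly onto Predicate.and / Predicate.or, so the whole sample query becomes one composed predicate evaluated against each metric's tags.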
- *Performance*
- Updated compaction strategies for data tables from size tiered
compaction (STCS) to time window compaction (TWCS) (HWKMETRICS-556
<https://issues.jboss.org/browse/HWKMETRICS-556>)
- Jobs now execute on RxJava's I/O scheduler thread pool (
HWKMETRICS-579 <https://issues.jboss.org/browse/HWKMETRICS-579>)
- *Administration*
- The admin tenant is now configurable via ADMIN_TENANT environment
variable (HWKMETRICS-572
<https://issues.jboss.org/browse/HWKMETRICS-572>)
- Internal metric collection is disabled by default (HWKMETRICS-578
<https://issues.jboss.org/browse/HWKMETRICS-578>)
- Resolved a null pointer exception in DropWizardReporter due to
admin tenant changes (HWKMETRICS-577
<https://issues.jboss.org/browse/HWKMETRICS-577>)
- *Job Scheduler*
- Resolved an issue where the compression job would stop running
after a few days (HWKMETRICS-564
<https://issues.jboss.org/browse/HWKMETRICS-564>)
- Updated the job scheduler to renew job locks during job execution (
HWKMETRICS-570 <https://issues.jboss.org/browse/HWKMETRICS-570>)
- Updated the job scheduler to reacquire job lock after server
restarts (HWKMETRICS-583
<https://issues.jboss.org/browse/HWKMETRICS-583>)
- *Hawkular Alerting - Major Updates*
- Resolved several issues where schema upgrades were not applied
after the initial schema install (HWKALERTS-220
<https://issues.jboss.org/browse/HWKALERTS-220>, HWKALERTS-222
<https://issues.jboss.org/browse/HWKALERTS-222>)
*Hawkular Alerting - Included*
- Version 1.5.1
<https://issues.jboss.org/projects/HWKALERTS/versions/12333065>
- Project details and repository: Github
<https://github.com/hawkular/hawkular-alerts>
- Documentation: REST API
<http://www.hawkular.org/docs/rest/rest-alerts.html>, Examples
<https://github.com/hawkular/hawkular-alerts/tree/master/examples>,
Developer
Guide
<http://www.hawkular.org/community/docs/developer-guide/alerts.html>
*Hawkular Metrics Clients*
- Python: https://github.com/hawkular/hawkular-client-python
- Go: https://github.com/hawkular/hawkular-client-go
- Ruby: https://github.com/hawkular/hawkular-client-ruby
- Java: https://github.com/hawkular/hawkular-client-java
*Release Links*
Github Release:
https://github.com/hawkular/hawkular-metrics/releases/tag/0.24.0
JBoss Nexus Maven artifacts:
http://origin-repository.jboss.org/nexus/content/repositories/public/org/hawkular/metrics/
Jira release tracker:
https://issues.jboss.org/projects/HWKMETRICS/versions/12332966
A big "Thank you" goes to John Sanda, Matt Wringe, Michael Burman, Joel
Takvorian, Jay Shaughnessy, Lucas Ponce, and Heiko Rupp for their project
contributions.
Thank you,
Stefan Negrea
9 years, 2 months