https and accounts/keycloak
by John Mazzitelli
I'm trying to figure out what does or does not work over HTTPS. So I configured kettle with my own self-signed keystore using these instructions:
http://blog.eisele.net/2015/01/ssl-with-wildfly-8-and-undertow.html
I can see the UI at https://localhost:8443 - I first had to tell Firefox to accept the certificate (so I know it's really going over SSL). And the fact that I can see the login screen tells me the SSL setup is OK and I'm able to access the UI over HTTPS. However, when I try to log in, I get an exception - and it's similar to the exception I get when the agent tries to call into kettle.
Has anyone tried accessing kettle over HTTPS, and have you seen any Keycloak issues when doing so? (nudge, nudge, juca :-)
Here's the exception I get when I try to log into the UI - I'm curious if there are other configuration settings we need to get HTTPS to work:
384109 [default task-21] ERROR io.undertow.request - UT005023: Exception handling request to /hawkular/accounts/personas/current
java.lang.RuntimeException: Unable to resolve realm public key remotely
at org.keycloak.adapters.AdapterDeploymentContext.resolveRealmKey(AdapterDeploymentContext.java:134)
at org.keycloak.adapters.AdapterDeploymentContext.resolveDeployment(AdapterDeploymentContext.java:83)
at org.keycloak.adapters.PreAuthActionsHandler.preflightCors(PreAuthActionsHandler.java:71)
at org.keycloak.adapters.PreAuthActionsHandler.handleRequest(PreAuthActionsHandler.java:47)
at org.keycloak.adapters.undertow.ServletPreAuthActionsHandler.handleRequest(ServletPreAuthActionsHandler.java:68)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.server.handlers.MetricsHandler.handleRequest(MetricsHandler.java:62)
at io.undertow.servlet.core.MetricsChainHandler.handleRequest(MetricsChainHandler.java:59)
at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:282)
at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:261)
at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:80)
at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:172)
at io.undertow.server.Connectors.executeRootHandler(Connectors.java:199)
at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:774)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1937)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:302)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1478)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:212)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:957)
at sun.security.ssl.Handshaker.process_record(Handshaker.java:892)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1050)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1363)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1391)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1375)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:535)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:403)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:177)
at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:144)
at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:131)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
at org.keycloak.adapters.AdapterDeploymentContext.resolveRealmKey(AdapterDeploymentContext.java:105)
... 16 more
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:387)
at sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:292)
at sun.security.validator.Validator.validate(Validator.java:260)
at sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:229)
at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:124)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1460)
... 35 more
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:145)
at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:131)
at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:280)
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:382)
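My guess is that the Keycloak adapter's HTTP client simply doesn't trust my self-signed certificate when it calls back to the auth server to fetch the realm public key. If that's right, then (per the Keycloak adapter docs) something like the following in the adapter config (keycloak.json) should help - the truststore path and password below are just placeholders for whatever keystore holds the self-signed cert:

  "truststore": "/path/to/hawkular-truststore.jks",
  "truststore-password": "changeit"

For a quick test only, "disable-trust-manager": true should also get past the handshake. Importing the cert into the JVM's cacerts would presumably work too, but a dedicated adapter truststore seems cleaner.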
9 years, 5 months
HWKMETRICS-83 Create glue code component to integrate with hawkular-bus
by Thomas Segismont
Hi everyone,
I sent a pull request to the Metrics repo to implement $subject.
https://github.com/hawkular/hawkular-metrics/pull/393
I won't paste the whole PR description here, but the following might be
of interest.
There is a new "hawkular-metrics-component" WAR module. It is meant to
replace "hawkular-metrics-api-jaxrs" module in Hawkular deployments.
This new WAR is a "super"-module: it uses the Maven WAR plugin's overlay
mechanism to add extra classes to the standalone metrics web application.
The extra classes subscribe to the
MetricsService#insertedDataEvents observable and publish messages on the
bus.
In order to make the migration easy, I have shamelessly copied Alerts'
AvailDataMessage and MetricDataMessage classes. These messages are
posted to the "HawkularAvailData" and "HawkularMetricData" topics,
respectively.
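Conceptually, the glue code boils down to something like the sketch below. The interfaces and the String payload are stand-ins I made up purely for illustration; only MetricsService#insertedDataEvents and the topic name come from the actual code:

import rx.Observable;
import rx.subjects.PublishSubject;

/** Schematic only - the real classes live in hawkular-metrics and hawkular-bus. */
public class InsertedDataRelay {

    /** Stand-in for the part of MetricsService exposing insertedDataEvents(). */
    interface MetricsEvents {
        Observable<String> insertedDataEvents();
    }

    /** Stand-in for a hawkular-bus topic publisher. */
    interface TopicPublisher {
        void publish(String topic, String message);
    }

    /** Subscribe to the inserted-data observable and forward each event to the bus topic. */
    static void wire(MetricsEvents metrics, TopicPublisher bus) {
        metrics.insertedDataEvents()
               .subscribe(event -> bus.publish("HawkularMetricData", event));
    }

    public static void main(String[] args) {
        PublishSubject<String> inserted = PublishSubject.create();
        wire(() -> inserted, (topic, msg) -> System.out.println(topic + ": " + msg));
        inserted.onNext("{\"metric\":\"demo\",\"value\":1}"); // simulated inserted-data event
    }
}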
After this PR is merged, we only need to change the agent code to remove
the "double-push" part and change the Hawkular POM to switch to the new
Metrics component. That's it.
I would like to discuss the JSON payload format later. It might need to
be changed, but let's flip this coin first.
Thanks,
Thomas
9 years, 5 months
Preliminary results on Hawkular MS5 from QE
by Michael Foley
Hi,
QE is almost done qualifying MS5 ... so I thought I would share information at this intermediate point. The link below is to a Mojo page that summarizes QE activities and findings.
https://mojo.redhat.com/docs/DOC-1047356
The highlights are:
* UI Automation. New tests added for every JMAN4 Jira delivered. Regression testing on previously delivered Jiras. All results stored in Polarion. Links to all pull requests for the creation and refactoring of the UI automation are included.
* REST API automation. New tests added for every JMAN4 Jira delivered. Regression testing on previously delivered Jiras. All results stored in Polarion. Links to all pull requests for the creation and refactoring of the REST API automation are included.
* Performance CI on Hawk-Metrics. Throughput, response time, and disk usage for every pull request, with a goal to catch performance regressions as they happen. All results stored in Perf Repo.
* Manual testcases. New documented testcases for every JMAN4 delivered in MS5. Documented testcase execution with results stored in Polarion.
* Continuous Delivery Pipeline. Hawkular is containerized, a smaller suite of REST API tests is run, and if successful the build is pushed to a public-facing endpoint. http://livingontheedge.hawkular.org/
* Jiras are logged for every issue found as a result of these activities.
We are planning to review the testing activities next Thursday, October 15th. You are invited to review, listen, and ask questions about this testing ... or even influence or contribute to testing in the next sprint. Please feel free to contact me if you have questions, comments, or suggestions regarding this testing.
Regards,
Michael Foley
QE Supervisor, Middleware BU
9 years, 5 months
wildfly agent news
by John Mazzitelli
Just wanted to give a brief summary of some changes that went into the agent recently.
1) We now produce our own WildFly feature pack and can run in Swarm
Thanks to Bob McWhirter, the agent build now produces a feature pack that WildFly can use to install the agent, so it comes with WildFly out of the box. I don't know much more than that - but it sounds cool :)
Bob also got the agent running as a Swarm app, so you can run the agent by running an uber-jar (via a command like "java -jar ..."). See https://wildfly-swarm.gitbooks.io/wildfly-swarm-users-guide/content/hawku...
2) We now monitor native platform resources.
I think I already announced this, but I'll repeat it. The agent can now monitor basic native platform resources like CPUs, system memory, and file systems - supported on Linux, Windows, and MacOS. When you run the WildFly agent, by default you will not only get WildFly servers in inventory, but you'll also get platform resources, including a root "Operating System" resource along with its children: Processor resources, File Store resources, and Memory resources. Metrics collected include CPU usage, total and free system memory, and total and free disk space.
3) We now utilize the inventory batch API
The agent uses the new inventory batch API to store a resource, its type, its resource config, and its relationships to its parent and metrics in one REST API call. This speeds things up somewhat, but hopefully in the future we can optimize this even further to bulk insert multiple resources at a time. As part of this work, you will now see inventory storage diagnostic data logged in the server log. This can help us determine the performance of our REST calls into inventory. For example, in the diagnostic log message you'll now see something like this: "feedid.diagnostics.inventory.storage-request-timer: type=[timer], count=[115], min=[283.090987], max=[6260.832863], mean=[3732.066908]", meaning we made 115 inventory requests (each one an inventory batch REST call) that took an average of 3.7 seconds each.
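If you are curious how to read those numbers: the diagnostics behave like a Dropwizard-Metrics-style Timer wrapped around each storage request. The snippet below is just a sketch of that idea, not the agent's actual code; the timer name is simply copied from the log line above:

import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

public class StorageTimerSketch {
    public static void main(String[] args) throws InterruptedException {
        MetricRegistry registry = new MetricRegistry();
        Timer timer = registry.timer("feedid.diagnostics.inventory.storage-request-timer");

        // Each inventory batch REST call is timed; simulate one call with a sleep.
        Timer.Context ctx = timer.time();
        try {
            Thread.sleep(100); // stand-in for the real HTTP round trip
        } finally {
            ctx.stop();
        }

        // count is the number of timed calls; snapshot values are in nanoseconds.
        System.out.println("count=" + timer.getCount()
                + " mean(ms)=" + timer.getSnapshot().getMean() / 1_000_000.0);
    }
}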
All of this is in master and will go out in the next agent release (I actually think the platform resource stuff is already released and in kettle).
9 years, 5 months
Event canceled: Hawkular-Team Update - Tue, Oct 6, 2015 3:30PM - 4PM (hrupp@redhat.com)
by hrupp@redhat.com
This event has been canceled and removed from your calendar.
Title: Hawkular-Team Update
This is an all-Hawkular team meeting to give updates on where we are and so on.
This is *open to the public*.
Location:
on bluejeans: https://redhat.bluejeans.com/hrupp/
or alternatively teleconference Reservationless+, passcode 204 2160 481
You can find Dial-In telephone numbers here:
https://www.intercallonline.com/listNumbersByCode.action?confCode=2042160481
Red Hat internal short-dial numbers are 16666 and 15555 (and probably
others, depending on your location)
When: Tue, Oct 6, 2015 3:30PM - 4PM Berlin
Where: pc 204 2160 481
Calendar: hrupp(a)redhat.com
Who
* Heiko Rupp - Organizer
* theute(a)redhat.com
* miburman(a)redhat.com
* hawkular-dev(a)lists.jboss.org
* snegrea(a)redhat.com
* Jay Shaughnessy
* jcosta(a)redhat.com
* Lucas Ponce
* Mike Thompson
* Thomas Segismont
* gbrown(a)redhat.com
* John Sanda
* Gabriel Cardoso
* Simeon Pinder
* Jiri Kremser
* amendonc(a)redhat.com
* lkrejci(a)redhat.com
* John Mazzitelli
* Peter Palaga
* Viliam Rockai
Invitation from Google Calendar: https://www.google.com/calendar/
You are receiving this email at hawkular-dev(a)lists.jboss.org because you are
a guest at this event.
Decline this event to stop receiving further updates about it. You can also
create a Google account at https://www.google.com/calendar/ and control the
notification settings for your entire calendar.
If you forward this invitation, any recipient may change your response to
the invitation. For more information, see
https://support.google.com/calendar/answer/37135#forwarding
9 years, 5 months
platform resources/metrics
by John Mazzitelli
In a talk last week, John D. mentioned we really need some basic platform metrics collected by our WildFly agent. While it is true the agent can collect "some" platform metrics via WildFly, like system load average (/core-service=platform-mbean/type=operating-system/:read-attribute(name=system-load-average)), it's very limited (in fact, system load average is the only one I found).
So I put together a new pull request to help: https://github.com/hawkular/hawkular-agent/pull/70
This utilizes a third-party library Heiko pointed out to me - Oshi (https://github.com/dblock/oshi) - which just does the basics, but I believe it has what we want. It is JNA-based and doesn't require JNI native libraries to be shipped with the product like Sigar does. It supports Linux, Windows, and MacOS. It collects basic metrics for four simple resource types - Memory, File Stores (i.e. disks/partitions), Processors, and Power Sources (i.e. batteries).
I have it discovering/inventorying those four basic types of resources (along with the parent "OS" resource) and collecting the metrics for them.
The metrics collected are:
A. Memory:
1. Memory Available
2. Memory Total
B. File Stores:
1. Total Disk Space
2. Available Disk Space
C. Processors
1. CPU Usage (this is CPU load between 0% and 100%)
D. Power Sources
1. Remaining Capacity (0% to 100%, fully drained vs. fully charged)
2. Time Remaining (time left before power source is fully drained)
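To give an idea of what this looks like under the covers, the Oshi calls are roughly as in the sketch below. Treat the getter names as approximate - the exact API depends on the Oshi version the PR pins - and Processors and Power Sources are read the same way as the two types shown:

import oshi.SystemInfo;
import oshi.hardware.HardwareAbstractionLayer;
import oshi.hardware.Memory;
import oshi.software.os.OSFileStore;

public class PlatformMetricsSketch {
    public static void main(String[] args) {
        SystemInfo si = new SystemInfo();
        HardwareAbstractionLayer hal = si.getHardware();

        // Memory: Total / Available
        Memory mem = hal.getMemory();
        System.out.println("Memory Total=" + mem.getTotal()
                + " Available=" + mem.getAvailable());

        // File Stores: Total / Available disk space (one resource per store)
        for (OSFileStore fs : hal.getFileStores()) {
            System.out.println("File Store [" + fs.getName() + "] Total=" + fs.getTotalSpace()
                    + " Usable=" + fs.getUsableSpace());
        }

        // Processors (CPU usage) and Power Sources (remaining capacity / time remaining)
        // are read the same way from hal.getProcessors() and hal.getPowerSources().
    }
}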
Here's what the inventory looks like (taken from inventory's REST API results):
The top root of the platform resource tree is the "Operating System" resource (Oshi has no metrics per se for the OS resource):
http://127.0.0.1:8080/hawkular/inventory/test/mazztower/resources/
[ {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/r;GNU%2FLinux%20Fedora%2021%20(Twenty%20One)",
"type" : {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/rt;Operating%20System",
"properties" : {
"name" : "Operating System"
},
"id" : "Operating System"
},
"properties" : {
"name" : "GNU/Linux Fedora 21 (Twenty One)"
},
"id" : "GNU/Linux Fedora 21 (Twenty One)"
} ]
Its children are the four types of child resources (Memory, File Stores, Processors, and Power Sources (of which I have none)):
http://127.0.0.1:8080/hawkular/inventory/test/mazztower/resources/GNU%2FL... Fedora 21 (Twenty One)/children
[ {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/r;GNU%2FLinux%20Fedora%2021%20(Twenty%20One)/r;File%20Store%20%5B%2F%5D",
"type" : {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/rt;File%20Store",
"properties" : {
"name" : "File Store"
},
"id" : "File Store"
},
"properties" : {
"name" : "File Store [/]"
},
"id" : "File Store [/]"
}, {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/r;GNU%2FLinux%20Fedora%2021%20(Twenty%20One)/r;File%20Store%20%5Btmpfs%5D",
"type" : {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/rt;File%20Store",
"properties" : {
"name" : "File Store"
},
"id" : "File Store"
},
"properties" : {
"name" : "File Store [tmpfs]"
},
"id" : "File Store [tmpfs]"
}, {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/r;GNU%2FLinux%20Fedora%2021%20(Twenty%20One)/r;File%20Store%20%5Bmqueue%5D",
"type" : {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/rt;File%20Store",
"properties" : {
"name" : "File Store"
},
"id" : "File Store"
},
"properties" : {
"name" : "File Store [mqueue]"
},
"id" : "File Store [mqueue]"
}, {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/r;GNU%2FLinux%20Fedora%2021%20(Twenty%20One)/r;File%20Store%20%5Bhugetlbfs%5D",
"type" : {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/rt;File%20Store",
"properties" : {
"name" : "File Store"
},
"id" : "File Store"
},
"properties" : {
"name" : "File Store [hugetlbfs]"
},
"id" : "File Store [hugetlbfs]"
}, {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/r;GNU%2FLinux%20Fedora%2021%20(Twenty%20One)/r;File%20Store%20%5Bsunrpc%5D",
"type" : {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/rt;File%20Store",
"properties" : {
"name" : "File Store"
},
"id" : "File Store"
},
"properties" : {
"name" : "File Store [sunrpc]"
},
"id" : "File Store [sunrpc]"
}, {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/r;GNU%2FLinux%20Fedora%2021%20(Twenty%20One)/r;File%20Store%20%5B%2Fdev%2Fsda1%5D",
"type" : {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/rt;File%20Store",
"properties" : {
"name" : "File Store"
},
"id" : "File Store"
},
"properties" : {
"name" : "File Store [/dev/sda1]"
},
"id" : "File Store [/dev/sda1]"
}, {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/r;GNU%2FLinux%20Fedora%2021%20(Twenty%20One)/r;File%20Store%20%5B%2Fdev%2Fmapper%2Fvg_mazztower-lv_home2%5D",
"type" : {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/rt;File%20Store",
"properties" : {
"name" : "File Store"
},
"id" : "File Store"
},
"properties" : {
"name" : "File Store [/dev/mapper/vg_mazztower-lv_home2]"
},
"id" : "File Store [/dev/mapper/vg_mazztower-lv_home2]"
}, {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/r;GNU%2FLinux%20Fedora%2021%20(Twenty%20One)/r;File%20Store%20%5B%2Fdev%2Fmapper%2Fvg_mazztower-lv_home%5D",
"type" : {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/rt;File%20Store",
"properties" : {
"name" : "File Store"
},
"id" : "File Store"
},
"properties" : {
"name" : "File Store [/dev/mapper/vg_mazztower-lv_home]"
},
"id" : "File Store [/dev/mapper/vg_mazztower-lv_home]"
}, {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/r;GNU%2FLinux%20Fedora%2021%20(Twenty%20One)/r;File%20Store%20%5B%2Fdev%2Fmapper%2Fvg_mazztower-lv_root%5D",
"type" : {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/rt;File%20Store",
"properties" : {
"name" : "File Store"
},
"id" : "File Store"
},
"properties" : {
"name" : "File Store [/dev/mapper/vg_mazztower-lv_root]"
},
"id" : "File Store [/dev/mapper/vg_mazztower-lv_root]"
}, {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/r;GNU%2FLinux%20Fedora%2021%20(Twenty%20One)/r;Processor%20%5B0%5D",
"type" : {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/rt;Processor",
"properties" : {
"name" : "Processor"
},
"id" : "Processor"
},
"properties" : {
"name" : "Processor [0]"
},
"id" : "Processor [0]"
}, {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/r;GNU%2FLinux%20Fedora%2021%20(Twenty%20One)/r;Processor%20%5B1%5D",
"type" : {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/rt;Processor",
"properties" : {
"name" : "Processor"
},
"id" : "Processor"
},
"properties" : {
"name" : "Processor [1]"
},
"id" : "Processor [1]"
}, {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/r;GNU%2FLinux%20Fedora%2021%20(Twenty%20One)/r;Processor%20%5B2%5D",
"type" : {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/rt;Processor",
"properties" : {
"name" : "Processor"
},
"id" : "Processor"
},
"properties" : {
"name" : "Processor [2]"
},
"id" : "Processor [2]"
}, {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/r;GNU%2FLinux%20Fedora%2021%20(Twenty%20One)/r;Processor%20%5B3%5D",
"type" : {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/rt;Processor",
"properties" : {
"name" : "Processor"
},
"id" : "Processor"
},
"properties" : {
"name" : "Processor [3]"
},
"id" : "Processor [3]"
}, {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/r;GNU%2FLinux%20Fedora%2021%20(Twenty%20One)/r;Memory",
"type" : {
"path" : "/t;28026b36-8fe4-4332-84c8-524e173a68bf/e;test/f;mazztower/rt;Memory",
"properties" : {
"name" : "Memory"
},
"id" : "Memory"
},
"properties" : {
"name" : "Memory"
},
"id" : "Memory"
} ]
9 years, 5 months
Javascript library for displaying a graph of nodes/links
by Gary Brown
Hi UI devs
I'm looking for guidance on the best JS lib for displaying graphs (i.e. nodes/links) to represent a business process/transaction.
It would need to be able to:
- customise the nodes to represent various types - consumers, producers, components, databases, etc
- annotate nodes/links with information
- capture/act upon actions/events based on node/link selection, offer popup menus, etc.
- colour code nodes/links to highlight different areas of interest
- potentially control link thickness to reflect flows with greater volume of traffic
- collapse/expand certain paths
- ability to scroll around large diagram
- be Apache licensed
I'm hoping we already have a good lib selected for the project to standardise on :)
Regards
Gary
9 years, 5 months