OpenShift agent - what to call it?
by John Mazzitelli
OK, folks, as much as I hate these "what should we name this thing?" threads, I have to do it.
We are at the point where we are going to start going full-throttle on building out an agent that can monitor things on OpenShift (and Heiko wants to be able to monitor things outside of OpenShift - I'll let him chime in on his use cases so we get a better feel for what he's thinking).
We need a name ASAP so we can create a repository under the Hawkular GitHub namespace and put the code up there so people can start working on it. I would like to do this sooner rather than later - say, by Thursday???
Matt was thinking "hawkulark" (Hawk-U-Lark, Hawkular-K) because "k" == kubernetes.
I was thinking "GoHawk" (rhymes with "mohawk") because it is implemented in "Go"
I wasn't keen on relying on "kubernetes" as part of the name, since it's really targeting OpenShift, and even then it doesn't have to run in OpenShift (back to the ideas Heiko has for this thing).
"GoHawk" doesn't seem to be a winner simply because: what happens if we implement other Hawkular feeds in Golang?
I'm assuming we'll come up with a name and agree to it collectively as a group - but I nominate Thomas H, Heiko R, and John D. as the committee to give the final approval/tie-breaking authority :) It won't be me. I suck at coming up with names.
--John Mazz
P.S. Who knows how to set up one of those online polls/surveys where you can enter your submissions and vote for other submissions?
8 years, 2 months
gohawk - need Go code to write to H-Metrics
by John Mazzitelli
I am close to having GoHawk [1] be able to take flight :) He's still a fledgling, not quite ready to leave the nest yet... but close. I could even demo what I've got if some folks are interested in learning how GoHawk is configured (YAML!!!), seeing it react to changes in an OpenShift node environment on the fly, collecting Prometheus data, and mock-storing the metrics to H-Metrics.
BUT! Right now I'm at the point where I need code that writes data to Hawkular Metrics from a Go client. Does anyone have code that shows how to do this? This isn't code that QUERIES H-Metrics for existing metric data - it is code that WRITES metrics to Hawkular Metrics. I already have an array of MetricHeader objects ([]metrics.MetricHeader) - I just need code that builds up the HTTP request and sends it (including any encryption/credential parameters/settings required).
[1] https://github.com/jmazzitelli/gohawk
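In case it helps until someone shares real client code: below is a minimal Go sketch of writing gauge data points to the Hawkular Metrics REST API. It assumes the /gauges/raw endpoint and the Hawkular-Tenant header used by recent Hawkular Metrics releases (older servers exposed /gauges/data instead); the GaugeMetric type and buildWriteRequest function are names I made up, standing in for the []metrics.MetricHeader array you already have.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// DataPoint and GaugeMetric mirror the JSON shape the Hawkular Metrics
// REST API expects for raw gauge data (assumption: recent server versions).
type DataPoint struct {
	Timestamp int64   `json:"timestamp"` // milliseconds since the epoch
	Value     float64 `json:"value"`
}

type GaugeMetric struct {
	ID   string      `json:"id"`
	Data []DataPoint `json:"data"`
}

// buildWriteRequest builds the HTTP POST that pushes gauge data points to
// Hawkular Metrics. baseURL is e.g. "http://localhost:8080/hawkular/metrics".
// The function name and signature are mine, not from any Hawkular library.
func buildWriteRequest(baseURL, tenant, user, pass string, metrics []GaugeMetric) (*http.Request, error) {
	body, err := json.Marshal(metrics)
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST", baseURL+"/gauges/raw", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Hawkular-Tenant", tenant) // tenant header is required
	if user != "" {
		req.SetBasicAuth(user, pass) // or a bearer token, depending on the setup
	}
	return req, nil
}

func main() {
	metrics := []GaugeMetric{{
		ID:   "my.cpu.usage",
		Data: []DataPoint{{Timestamp: 1472000000000, Value: 0.42}},
	}}
	req, err := buildWriteRequest("http://localhost:8080/hawkular/metrics",
		"hawkular", "jdoe", "password", metrics)
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.String())
	// To actually send it: resp, err := http.DefaultClient.Do(req)
}
```

For TLS, the usual net/http route is a custom http.Client with a tls.Config carrying the server's CA certificate; credentials would come from whatever the deployment uses (basic auth above is just the simplest case).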
Hawkular APM 0.11.0.Final now available
by Gary Brown
Hi
The Hawkular APM team are pleased to announce the release of version 0.11.0.Final.
The release details, including distributions, can be found here: https://github.com/hawkular/hawkular-apm/releases/tag/0.11.0.Final
The release includes:
* Improvements in the UI for displaying service dependency and trace instance information
* Zipkin integration now includes Kafka support (with JSON and Thrift encoded data)
* Initial implementation of a Java OpenTracing provider
* Integration with Hawkular Alerts to trigger alerts based on trace completion events
Blogs and videos will follow in the next couple of days to demonstrate these capabilities.
Regards
Gary
MiQ log/middleware.log
by mike thompson
So currently, we aren’t really logging anything to this middleware.log (as far as I can tell). Should we be? What is our policy around using this log (versus evm.log)?
This may be more important once we are in CFME and have to debug some customer issues.
Should all of our logging go to this log? Should some? If so, what?
Sorry, just a bit confused about the purpose of this log since it shows empty for me.
— Mike
Inventory and postgres?
by Heiko W.Rupp
Lukas,
I suspect we have an issue in inventory (or two)
- when I run inventory as is with h2db, it works, but may or may not
contribute (a lot) to the growth of the heap as seen in the
"Hawkular-services and memory" thread.
- When I try to run against postgres 9.5, the WildFly server of the
platform never becomes available inside inventory. I see tables in
postgres, but the WF never shows up. Also, later on I got the error below.
I am also a bit puzzled that using postgres starts up a c3p0 connection
pool instead of using what is already present in WildFly.
hawkular_1 | 20:11:37,309 WARN [org.hawkular.inventory.rest] (default task-26) RestEasy exception, : java.lang.IllegalArgumentException: A metric type with path '/t;hawkular/f;f38c6e77-6ee0-47da-a80a-bdac9a249457/mt;Singleton%20EJB%20Metrics~Wait%20Time' not found relative to '/t;hawkular/f;f38c6e77-6ee0-47da-a80a-bdac9a249457/r;Local~~/r;Local~%2Fdeployment%3Dhawkular-metrics.ear/r;Local~%2Fdeployment%3Dhawkular-metrics.ear%2Fsubdeployment%3Dhawkular-alerts.war/r;Local~%2Fdeployment%3Dhawkular-metrics.ear%2Fsubdeployment%3Dhawkular-alerts.war%2Fsubsystem%3Dejb3%2Fsingleton-bean%3DPartitionManagerImpl'.
hawkular_1 | at org.hawkular.inventory.base.BaseMetrics$ReadWrite.wireUpNewEntity(BaseMetrics.java:79)
hawkular_1 | at org.hawkular.inventory.base.BaseMetrics$ReadWrite.wireUpNewEntity(BaseMetrics.java:51)
hawkular_1 | at org.hawkular.inventory.base.Mutator.doCreate(Mutator.java:168)
hawkular_1 | at org.hawkular.inventory.base.Mutator.lambda$doCreate$95(Mutator.java:81)
hawkular_1 | at org.hawkular.inventory.base.TransactionPayload$Committing.lambda$committing$44(TransactionPayload.java:34)
hawkular_1 | at org.hawkular.inventory.base.Traversal.lambda$inCommittableTxWithNotifications$94(Traversal.java:119)
hawkular_1 | at org.hawkular.inventory.base.Util.onFailureRetry(Util.java:110)
hawkular_1 | at org.hawkular.inventory.base.Util.inCommittableTx(Util.java:81)
hawkular_1 | at org.hawkular.inventory.base.Traversal.inCommittableTxWithNotifications(Traversal.java:118)
hawkular_1 | at org.hawkular.inventory.base.Traversal.inTxWithNotifications(Traversal.java:91)
hawkular_1 | at org.hawkular.inventory.base.Mutator.doCreate(Mutator.java:81)
hawkular_1 | at org.hawkular.inventory.base.BaseMetrics$ReadWrite.create(BaseMetrics.java:122)
hawkular_1 | at org.hawkular.inventory.base.BaseMetrics$ReadWrite.create(BaseMetrics.java:51)
hawkular_1 | at org.hawkular.inventory.api.WriteInterface.create(WriteInterface.java:60)
hawkular_1 | at org.hawkular.inventory.base.SingleSyncedFetcher$1.visitMetric(SingleSyncedFetcher.java:313)
hawkular_1 | at org.hawkular.inventory.base.SingleSyncedFetcher$1.visitMetric(SingleSyncedFetcher.java:305)
hawkular_1 | at org.hawkular.inventory.paths.ElementTypeVisitor.accept(ElementTypeVisitor.java:36)
hawkular_1 | at org.hawkular.inventory.paths.Path$Segment.accept(Path.java:648)
hawkular_1 | at org.hawkular.inventory.base.SingleSyncedFetcher.create(SingleSyncedFetcher.java:305)
hawkular_1 | at org.hawkular.inventory.base.SingleSyncedFetcher.create(SingleSyncedFetcher.java:350)
hawkular_1 | at org.hawkular.inventory.base.SingleSyncedFetcher.create(SingleSyncedFetcher.java:350)
hawkular_1 | at org.hawkular.inventory.base.SingleSyncedFetcher.create(SingleSyncedFetcher.java:350)
hawkular_1 | at org.hawkular.inventory.base.SingleSyncedFetcher.lambda$syncTrees$121(SingleSyncedFetcher.java:246)
hawkular_1 | at java.lang.Iterable.forEach(Iterable.java:75)
hawkular_1 | at org.hawkular.inventory.base.SingleSyncedFetcher.syncTrees(SingleSyncedFetcher.java:246)
hawkular_1 | at org.hawkular.inventory.base.SingleSyncedFetcher.lambda$synchronize$119(SingleSyncedFetcher.java:123)
hawkular_1 | at org.hawkular.inventory.base.TransactionPayload$Committing.lambda$committing$44(TransactionPayload.java:34)
hawkular_1 | at org.hawkular.inventory.base.Traversal.lambda$inCommittableTx$93(Traversal.java:106)
hawkular_1 | at org.hawkular.inventory.base.Util.onFailureRetry(Util.java:110)
hawkular_1 | at org.hawkular.inventory.base.Util.inCommittableTx(Util.java:81)
hawkular_1 | at org.hawkular.inventory.base.Traversal.inCommittableTx(Traversal.java:105)
hawkular_1 | at org.hawkular.inventory.base.Traversal.inTx(Traversal.java:96)
hawkular_1 | at org.hawkular.inventory.base.Traversal.inTx(Traversal.java:79)
hawkular_1 | at org.hawkular.inventory.base.SingleSyncedFetcher.synchronize(SingleSyncedFetcher.java:93)
hawkular_1 | at org.hawkular.inventory.base.BaseResources$Single.synchronize(BaseResources.java:206)
hawkular_1 | at org.hawkular.inventory.rest.RestSync.sync(RestSync.java:80)
hawkular_1 | at org.hawkular.inventory.rest.RestSync$Proxy$_$$_WeldClientProxy.sync(Unknown Source)
hawkular_1 | at sun.reflect.GeneratedMethodAccessor108.invoke(Unknown Source)
hawkular_1 | at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
hawkular_1 | at java.lang.reflect.Method.invoke(Method.java:498)
hawkular_1 | at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:139)
hawkular_1 | at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:295)
hawkular_1 | at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:249)
hawkular_1 | at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:236)
hawkular_1 | at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:395)
hawkular_1 | at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:202)
hawkular_1 | at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:221)
hawkular_1 | at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56)
hawkular_1 | at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51)
hawkular_1 | at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
hawkular_1 | at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85)
hawkular_1 | at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129)
hawkular_1 | at io.undertow.websockets.jsr.JsrWebSocketFilter.doFilter(JsrWebSocketFilter.java:129)
hawkular_1 | at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60)
hawkular_1 | at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
hawkular_1 | at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
hawkular_1 | at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
hawkular_1 | at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
hawkular_1 | at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
hawkular_1 | at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
hawkular_1 | at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)
hawkular_1 | at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
hawkular_1 | at io.undertow.server.handlers.DisableCacheHandler.handleRequest(DisableCacheHandler.java:33)
hawkular_1 | at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
hawkular_1 | at io.undertow.security.handlers.AuthenticationConstraintHandler.handleRequest(AuthenticationConstraintHandler.java:51)
hawkular_1 | at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
hawkular_1 | at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
hawkular_1 | at io.undertow.servlet.handlers.security.ServletSecurityConstraintHandler.handleRequest(ServletSecurityConstraintHandler.java:56)
hawkular_1 | at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60)
hawkular_1 | at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77)
hawkular_1 | at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
hawkular_1 | at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
hawkular_1 | at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
hawkular_1 | at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
hawkular_1 | at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
hawkular_1 | at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
hawkular_1 | at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:284)
hawkular_1 | at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:263)
hawkular_1 | at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81)
hawkular_1 | at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:174)
hawkular_1 | at io.undertow.server.Connectors.executeRootHandler(Connectors.java:202)
hawkular_1 | at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:793)
hawkular_1 | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
hawkular_1 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
hawkular_1 | at java.lang.Thread.run(Thread.java:745)
hawkular_1 |
Get rid of Travis?
by Michael Burman
Hi,
I'm proposing we get rid of Travis for now (at least for metrics)
and stick to something else, such as the old Jenkins. At the moment it
can take several runs for metrics PRs to finish, since Cassandra stops
responding in the Travis runs every time. Yesterday I restarted my PR 4
times; last week one PR took 10 tries.
22:28:24,629 ERROR [com.datastax.driver.core.ControlConnection]
(cluster1-reconnection-1) [Control connection] Cannot connect to any
host, scheduling retry in 600000 milliseconds
Sometimes it doesn't even get that far and fails while running the
Cassandra installation script. These errors have made the Travis runs
completely irrelevant: whenever they fail, it just makes sense to restart
them without reading the logs (we should get an automatic script that
restarts them whenever they fail) until they succeed. No errors reported
by Travis are trustworthy.
We need a working CI solution, and this isn't it. There's no "community
visibility" if the results have no meaning. Jenkins at least provides us
with "failed / not failed"; Travis doesn't even provide that.
- Micke
Inventory: transient feeds - or how to tackle the pets vs cattle scenario
by Heiko W.Rupp
Hey,
Right now we identify "agents" via their feed-id.
An instrumented WildFly comes online, registers
its feed with the server, sends its resource discovery
results, and later sends metrics with the feed id.
Over its lifecycle, the server may be stopped and re-started
several times.
This is great in the classical use case with installations
on tin or VMs.
In container-land especially with systems like Kubernetes,
containers are started once and after they have died for
whatever reason they are not restarted again.
So the id of an individual container is less and less interesting.
The interesting part is the overall app, which consists of many
containers linked together, with several of them representing
an individual service of the app.
So basically we would rather need to record the app and other
metadata for identifying individual parts of the app (e.g. the web
servers or the databases) and then get pointers to the individual
pieces.
The feed would not need to survive for too long, but some of
its collected data perhaps should. And then e.g. the discovery of
resources in a new container of the exact same type as before should
be sort of a no-op, as we know this already. Could we short-circuit
that by storing the docker-image-hash (or similar) and, once we see
a known one, abort the discovery?
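To make the short-circuit idea concrete, here is a minimal Go sketch (all names are mine - this is not an actual inventory API) of caching discovery results by image hash, so a new container of an already-known image skips the discovery run entirely:

```go
package main

import (
	"fmt"
	"sync"
)

// discoveryCache sketches the short-circuit idea: resource discovery
// results are remembered per container image hash, so a new container
// of a known image does not trigger a fresh discovery.
type discoveryCache struct {
	mu    sync.Mutex
	known map[string][]string // image hash -> discovered resource paths
}

func newDiscoveryCache() *discoveryCache {
	return &discoveryCache{known: make(map[string][]string)}
}

// discover returns the cached resources for imageHash, invoking the
// supplied discovery function only the first time the image is seen.
func (c *discoveryCache) discover(imageHash string, run func() []string) []string {
	c.mu.Lock()
	defer c.mu.Unlock()
	if res, ok := c.known[imageHash]; ok {
		return res // known image: discovery is a no-op
	}
	res := run()
	c.known[imageHash] = res
	return res
}

func main() {
	cache := newDiscoveryCache()
	runs := 0
	discoverFn := func() []string {
		runs++ // count how often real discovery actually happens
		return []string{"/t;hawkular/f;some-feed/r;Local~~"}
	}
	cache.discover("sha256:abc", discoverFn)
	cache.discover("sha256:abc", discoverFn) // same image: served from cache
	fmt.Println("discovery runs:", runs)     // prints: discovery runs: 1
}
```

The historic-records question from the next paragraph is the hard part, of course - a cache like this only covers the "skip re-discovery" half, not the continuity of metrics across feed ids.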
Another aspect is certainly that we want to keep (some) historic
records of the dead container - e.g. some metrics and the point in
time when it died. Suppose k8s kills a container and spins a new one
up (same image) on a different node; then logically it is a continuation
of the first one, but in a different place (but they have different feed
ids).
Now a more drastic scenario: as orchestration systems like k8s or
Docker Swarm have their own registries that can be queried, do we need
a hawkular-inventory for this at all?
(We still need it for the non-OpenShift/K8s/Docker-Swarm envs.)
--
Reg. Adresse: Red Hat GmbH, Technopark II, Haus C,
Werner-von-Siemens-Ring 14, D-85630 Grasbrunn
Handelsregister: Amtsgericht München HRB 153243
Geschäftsführer: Charles Cachera, Michael Cunningham, Michael O'Neill,
Eric Shander