From lkrejci at redhat.com Fri Jul 1 09:41:57 2016 From: lkrejci at redhat.com (Lukas Krejci) Date: Fri, 01 Jul 2016 15:41:57 +0200 Subject: [Hawkular-dev] List of endpoints in components Message-ID: <1750191.fJRWNfzged@rpi.lan> Due to a bug in the listing of endpoints in inventory (something I never really paid attention to, because it seemed like this was autogenerated from Resteasy metadata) I started thinking about what is actually returned from that endpoint. The format is somewhat spartan: [{"uri": "...", "methods": ["GET", "POST", ...]}, ...] Now during the build, all the Hawkular components use something much more complete and elaborate - the swagger.json - which we use to generate the API documentation for hawkular.org. I think it would be much better if the endpoint listing actually returned the swagger.json instead of the above format, which lacks much of the information available in swagger. What do you think? This would obviously break the current users, but it would be much more helpful for API consumers given the richness of the swagger format and the amount of information we include. -- Lukas Krejci From mazz at redhat.com Fri Jul 1 12:30:24 2016 From: mazz at redhat.com (John Mazzitelli) Date: Fri, 1 Jul 2016 12:30:24 -0400 (EDT) Subject: [Hawkular-dev] inventory startup exceptions - but not causing problems In-Reply-To: <1483454553.1499814.1467390565756.JavaMail.zimbra@redhat.com> Message-ID: <2114264966.1499876.1467390624585.JavaMail.zimbra@redhat.com> Lukas: https://paste.fedoraproject.org/387177/14673904/ I see several of these inventory warnings in my agent itests - all my tests pass, so it's not causing any problems that I can see. But I just wanted to point it out. It seems to happen at startup? ------- 12:25:28,470 WARN [org.hawkular.inventory.rest] (default task-9) RestEasy exception, : java.lang.IllegalStateException: Transaction already open. 
at org.hawkular.inventory.impl.tinkerpop.spi.GraphProvider.startTransaction(GraphProvider.java:93) at org.hawkular.inventory.impl.tinkerpop.InventoryContext.startTransaction(InventoryContext.java:55) at org.hawkular.inventory.impl.tinkerpop.TinkerpopBackend.startTransaction(TinkerpopBackend.java:125) at org.hawkular.inventory.base.TransactionConstructor.lambda$startInBackend$88(TransactionConstructor.java:40) at org.hawkular.inventory.base.TraversalContext.startTransaction(TraversalContext.java:339) at org.hawkular.inventory.base.TraversalContext.startTransaction(TraversalContext.java:335) at org.hawkular.inventory.base.Util.inCommittableTx(Util.java:76) at org.hawkular.inventory.base.Traversal.inCommittableTx(Traversal.java:105) at org.hawkular.inventory.base.Traversal.inTx(Traversal.java:96) at org.hawkular.inventory.base.Traversal.inTx(Traversal.java:79) at org.hawkular.inventory.base.Fetcher.loadEntity(Fetcher.java:72) at org.hawkular.inventory.base.Fetcher.entity(Fetcher.java:55) at org.hawkular.inventory.base.Fetcher.entity(Fetcher.java:39) at org.hawkular.inventory.api.ResolvableToSingle.exists(ResolvableToSingle.java:52) at org.hawkular.inventory.rest.cdi.AutoTenantInventoryProducer$AutotenantInventory$2.get(AutoTenantInventoryProducer.java:150) at org.hawkular.inventory.rest.cdi.AutoTenantInventoryProducer$AutotenantInventory$2.get(AutoTenantInventoryProducer.java:141) at org.hawkular.inventory.rest.RestTenant.get(RestTenant.java:54) at org.hawkular.inventory.rest.RestTenant$Proxy$_$$_WeldClientProxy.get(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:139) at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:295) at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:249) at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:236) at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:395) at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:202) at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:221) at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56) at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85) at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129) at io.undertow.websockets.jsr.JsrWebSocketFilter.doFilter(JsrWebSocketFilter.java:129) at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60) at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131) at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84) at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62) at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36) at 
org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131) at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57) at io.undertow.server.handlers.DisableCacheHandler.handleRequest(DisableCacheHandler.java:33) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at io.undertow.security.handlers.AuthenticationConstraintHandler.handleRequest(AuthenticationConstraintHandler.java:51) at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46) at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64) at io.undertow.servlet.handlers.security.ServletSecurityConstraintHandler.handleRequest(ServletSecurityConstraintHandler.java:56) at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60) at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77) at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50) at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:284) at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:263) at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81) at io.undertow.servlet.handlers.ServletInitialHandler$1$1.run(ServletInitialHandler.java:180) at java.security.AccessController.doPrivileged(Native Method) at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:177) at io.undertow.server.Connectors.executeRootHandler(Connectors.java:202) at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:793) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: org.hawkular.inventory.impl.tinkerpop.spi.NoRecordedStacktrace From snegrea at redhat.com Fri Jul 1 14:14:43 2016 From: snegrea at redhat.com (Stefan Negrea) Date: Fri, 1 Jul 2016 13:14:43 -0500 Subject: [Hawkular-dev] Metrics and use of metrics_idx In-Reply-To: <1961450804.4126707.1467294373087.JavaMail.zimbra@redhat.com> References: <2031761900.4108094.1467293089854.JavaMail.zimbra@redhat.com> <1961450804.4126707.1467294373087.JavaMail.zimbra@redhat.com> Message-ID: Hello, For Hawkular Metrics the speed of writes is always more important than the speed of reads 
(due to a variety of reasons). But that only works up to a certain extent, in the sense that we cannot totally neglect the read part. Let me see if I can narrow the impact of your proposal ... You made a very good point that the performance of reads is not affected if we discard metrics_idx for endpoints that require the metrics id. We only need to consider the impact of querying for metrics and tenants since both use metrics_idx. Querying the list of tenants is not very important because it is an admin feature that we will soon secure via the newly proposed "admin framework". So only querying for metrics definitions will be truly affected by removing the metrics_idx completely. But only a portion of those requests are affected because tags queries use the tags index. To conclude, metrics_idx is only important in cases where the user wants a full list of all metrics ever stored for a tenant id. If we can profile the performance impact on a larger set of metric definitions and we find the time difference without the metrics_idx is negligible then we should go forward with your proposal. John Sanda, do you foresee using metrics_idx in the context of metric aggregation and the job scheduling framework that you've been working on? Micke, what do you think are the next steps to move forward with your proposal? Thank you, Stefan Negrea On Thu, Jun 30, 2016 at 8:46 AM, Michael Burman wrote: > Hi, > > This sparked my interest after the discussions in PR #523 (adding cache to > avoid metrics_idx writes). Stefan commented that he still wants to write to > this table to keep metrics available instantly, jsanda wants to write them > asynchronously. Maybe we should instead just stop writing there? > > Why? We do the same thing in tenants also at this time, we don't write > there if someone writes a metric to a new tenant. We fetch the partition > keys from metrics_idx table. Now, the same ideology could be applied to the > metrics_idx writing, read the partition keys from data. There's a small > performance penalty, but the main thing is that we don't really need that > information often - in most use cases never. > > If we want to search something with for example tags, we search it with > tags - that metricId has been manually added to the metrics_idx table. No > need to know if there's metrics which were not initialized. This should be > the preferred way of doing things in any case - use tags instead of pushing > metadata to the metricId. > > If we need to find out if id exists, fetching that from the PK > (PartitionKey) index is fast. The only place where we could slow down is if > there's lots of tenants with lots of metricIds each and we want to fetch > all the metricIds of a single tenant. In that case the fetching of > definitions could slow down. How often do users fetch all the tenant > metricIds without any filtering? And how performance critical is this sort > of behavior? And what use case does list of ids serve (without any > information associated to them) ? > > If you need to fetch datapoints from a known metricId, there's no need for > metrics_idx table writing or reading. So this index writing only applies to > listing metrics. > > - Micke > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160701/c1806802/attachment-0001.html From miburman at redhat.com Fri Jul 1 16:56:04 2016 From: miburman at redhat.com (Michael Burman) Date: Fri, 1 Jul 2016 16:56:04 -0400 (EDT) Subject: [Hawkular-dev] Metrics and use of metrics_idx In-Reply-To: References: <2031761900.4108094.1467293089854.JavaMail.zimbra@redhat.com> <1961450804.4126707.1467294373087.JavaMail.zimbra@redhat.com> Message-ID: <281273485.4555946.1467406564769.JavaMail.zimbra@redhat.com> Hi, Well, I've done some testing of the performance as well as an implementation that mimics the current behavior but does the reading from the partition keys. The performance is just fine with quite a large amount of metricIds (at 50k the difference is ~2x, so on my dev machine I can still read all the possible tenants about 3 times per second, whereas the current behavior would be about 6 times per second). I started to see performance "issues" when the amount of unique metricIds reached millions (although I'm not sure if it's a performance issue in that case either - given that such a query is probably not a performance-critical one). The amount of datapoints did not make a difference (I just tested with 8668 metricIds with a total of >86M stored datapoints, thus confirming my expectations also). So unless there's a known use case of using millions of metricIds with stored datapoints (IIRC, once all the rows for a partition key are deleted, the partition key disappears.. John can confirm) I think we should move forward with this. Although we could make an enhancement to fetching metricIds by allowing a query against only the metrics_idx table (something like "fetch only the registered metrics") - in that case someone who registers all the metrics could fetch them quickly also. - Micke ----- Original Message ----- From: "Stefan Negrea" To: "Discussions around Hawkular development" Sent: Friday, July 1, 2016 9:14:43 PM Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx Hello, For Hawkular Metrics the speed of writes is always more important than the speed of reads (due to a variety of reasons). But that only works up to a certain extent, in the sense that we cannot totally neglect the read part. Let me see if I can narrow the impact of your proposal ... You made a very good point that the performance of reads is not affected if we discard metrics_idx for endpoints that require the metrics id. We only need to consider the impact of querying for metrics and tenants since both use metrics_idx. Querying the list of tenants is not very important because it is an admin feature that we will soon secure via the newly proposed "admin framework". So only querying for metrics definitions will be truly affected by removing the metrics_idx completely. But only a portion of those requests are affected because tags queries use the tags index. To conclude, metrics_idx is only important in cases where the user wants a full list of all metrics ever stored for a tenant id. If we can profile the performance impact on a larger set of metric definitions and we find the time difference without the metrics_idx is negligible then we should go forward with your proposal. John Sanda, do you foresee using metrics_idx in the context of metric aggregation and the job scheduling framework that you've been working on? Micke, what do you think are the next steps to move forward with your proposal? 
Thank you, Stefan Negrea On Thu, Jun 30, 2016 at 8:46 AM, Michael Burman < miburman at redhat.com > wrote: Hi, This sparked my interest after the discussions in PR #523 (adding cache to avoid metrics_idx writes). Stefan commented that he still wants to write to this table to keep metrics available instantly, jsanda wants to write them asynchronously. Maybe we should instead just stop writing there? Why? We do the same thing in tenants also at this time, we don't write there if someone writes a metric to a new tenant. We fetch the partition keys from metrics_idx table. Now, the same ideology could be applied to the metrics_idx writing, read the partition keys from data. There's a small performance penalty, but the main thing is that we don't really need that information often - in most use cases never. If we want to search something with for example tags, we search it with tags - that metricId has been manually added to the metrics_idx table. No need to know if there's metrics which were not initialized. This should be the preferred way of doing things in any case - use tags instead of pushing metadata to the metricId. If we need to find out if id exists, fetching that from the PK (PartitionKey) index is fast. The only place where we could slow down is if there's lots of tenants with lots of metricIds each and we want to fetch all the metricIds of a single tenant. In that case the fetching of definitions could slow down. How often do users fetch all the tenant metricIds without any filtering? And how performance critical is this sort of behavior? And what use case does list of ids serve (without any information associated to them) ? If you need to fetch datapoints from a known metricId, there's no need for metrics_idx table writing or reading. So this index writing only applies to listing metrics. - Micke _______________________________________________ hawkular-dev mailing list hawkular-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/hawkular-dev _______________________________________________ hawkular-dev mailing list hawkular-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/hawkular-dev From hrupp at redhat.com Sun Jul 3 05:44:49 2016 From: hrupp at redhat.com (Heiko W.Rupp) Date: Sun, 03 Jul 2016 11:44:49 +0200 Subject: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env Message-ID: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> Hey, [ CC to Federico as he may have some ideas from the Kube/OS side ] Our QE has opened an interesting case: https://github.com/ManageIQ/manageiq/issues/9556 where I first thought WTF with that title. But then when reading further it got more interesting. Basically what happens is that especially in environments like Kube/Openshift, individual containers/appservers are Kettle and not Pets: one goes down, gets killed, you start a new one somewhere else. Now the interesting question for us are (first purely on the Hawkular side): - how can we detect that such a container is down and will never come up with that id again (-> we need to clean it up in inventory) - can we learn that for a killed container A, a freshly started container A' is the replacement to e.g. continue with performance monitoring of the app or to re-associate relationships with other items in inventory- (Is that even something we want - again that is Kettle and not Pets anymore) - Could eap+embedded agent perhaps store some token in Kube which is then passed when A' is started so that A' knows it is the new A (e.g. feed id). 
- I guess that would not make much sense anyway, as for an app with three app servers all would get that same token. Perhaps we should ignore that use case for now completely and tackle that differently in the sense that we don't care about 'real' app servers, but rather introduce the concept of a 'virtual' server where we only know via Kube that it exists and how many of them for a certain application (which is identified via some tag in Kube). Those virtual servers deliver data, but we don't really try to do anything with them 'personally', but indirectly via Kube interactions (i.e. map the incoming data to the app and not to an individual server). We would also not store the individual server in inventory, so there is no need to clean it up (again, no pet but kettle). In fact we could just use the feed-id as kube token (or vice versa). We still need a way to detect that one of those kettle-as is on Kube and possibly either disable or re-route some of the lifecycle events onto Kubernetes (start in any case, stop probably does not matter if the container dies because the appserver inside stops or if kube just kills it). -- Reg. Adresse: Red Hat GmbH, Technopark II, Haus C, Werner-von-Siemens-Ring 14, D-85630 Grasbrunn Handelsregister: Amtsgericht München HRB 153243 Geschäftsführer: Charles Cachera, Michael Cunningham, Michael O'Neill, Eric Shander From mazz at redhat.com Sun Jul 3 08:14:44 2016 From: mazz at redhat.com (John Mazzitelli) Date: Sun, 3 Jul 2016 08:14:44 -0400 (EDT) Subject: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env In-Reply-To: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> References: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> Message-ID: <23754756.1606277.1467548084344.JavaMail.zimbra@redhat.com> In case you didn't understand the analogy, I believe Heiko meant to use the word "Cattle" not "Kettle" :-) I had to look it up - I've not heard the "cattle vs. pets" analogy before - but I get it now! ----- Original Message ----- > Hey, > > [ CC to Federico as he may have some ideas from the Kube/OS side ] > > Our QE has opened an interesting case: > > https://github.com/ManageIQ/manageiq/issues/9556 > > where I first thought WTF with that title. > > But then when reading further it got more interesting. > Basically what happens is that especially in environments like > Kube/Openshift, > individual containers/appservers are Kettle and not Pets: one goes down, > gets > killed, you start a new one somewhere else. > > Now the interesting question for us are (first purely on the Hawkular > side): > - how can we detect that such a container is down and will never come up > with that id again (-> we need to clean it up in inventory) > - can we learn that for a killed container A, a freshly started > container A' is > the replacement to e.g. continue with performance monitoring of the app > or to re-associate relationships with other items in inventory- > (Is that even something we want - again that is Kettle and not Pets > anymore) > - Could eap+embedded agent perhaps store some token in Kube which > is then passed when A' is started so that A' knows it is the new A (e.g. > feed id). > - I guess that would not make much sense anyway, as for an app with > three app servers all would get that same token. 
> > Perhaps we should ignore that use case for now completely and tackle > that differently in the sense that we don't care about 'real' app > servers, > but rather introduce the concept of a 'virtual' server where we only > know > via Kube that it exists and how many of them for a certain application > (which is identified via some tag in Kube). Those virtual servers > deliver > data, but we don't really try to do anything with them 'personally', > but indirectly via Kube interactions (i.e. map the incoming data to the > app and not to an individual server). We would also not store > the individual server in inventory, so there is no need to clean it > up (again, no pet but kettle). > In fact we could just use the feed-id as kube token (or vice versa). > We still need a way to detect that one of those kettle-as is on Kube > and possibly either disable to re-route some of the lifecycle events > onto Kubernetes (start in any case, stop probably does not matter > if he container dies because the appserver inside stops or if kube > just kills it). > > > -- > Reg. Adresse: Red Hat GmbH, Technopark II, Haus C, > Werner-von-Siemens-Ring 14, D-85630 Grasbrunn > Handelsregister: Amtsgericht München HRB 153243 > Geschäftsführer: Charles Cachera, Michael Cunningham, Michael O'Neill, > Eric Shander > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > From mazz at redhat.com Sun Jul 3 08:19:32 2016 From: mazz at redhat.com (John Mazzitelli) Date: Sun, 3 Jul 2016 08:19:32 -0400 (EDT) Subject: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env In-Reply-To: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> References: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> Message-ID: <1919654970.1606295.1467548372575.JavaMail.zimbra@redhat.com> Is there some mechanism by which the agent can know if it (or the EAP server it is managing) is running inside a container? I'm thinking of something analogous to /etc/machine-id - perhaps when running in a container, Kube sets some environment variable, file system token, something? If there is a way to know, then the agent can just write some resource config property somewhere to say "this 'thing' is running in a container." So when that "thing" goes down, the server-side can be notified and do special things (clean up inventory? send an alert?) ----- Original Message ----- > Hey, > > [ CC to Federico as he may have some ideas from the Kube/OS side ] > > Our QE has opened an interesting case: > > https://github.com/ManageIQ/manageiq/issues/9556 > > where I first thought WTF with that title. > > But then when reading further it got more interesting. > Basically what happens is that especially in environments like > Kube/Openshift, > individual containers/appservers are Kettle and not Pets: one goes down, > gets > killed, you start a new one somewhere else. > > Now the interesting question for us are (first purely on the Hawkular > side): > - how can we detect that such a container is down and will never come up > with that id again (-> we need to clean it up in inventory) > - can we learn that for a killed container A, a freshly started > container A' is > the replacement to e.g. 
continue with performance monitoring of the app > or to re-associate relationships with other items in inventory- > (Is that even something we want - again that is Kettle and not Pets > anymore) > - Could eap+embedded agent perhaps store some token in Kube which > is then passed when A' is started so that A' knows it is the new A (e.g. > feed id). > - I guess that would not make much sense anyway, as for an app with > three app servers all would get that same token. > > Perhaps we should ignore that use case for now completely and tackle > that differently in the sense that we don't care about 'real' app > servers, > but rather introduce the concept of a 'virtual' server where we only > know > via Kube that it exists and how many of them for a certain application > (which is identified via some tag in Kube). Those virtual servers > deliver > data, but we don't really try to do anything with them 'personally', > but indirectly via Kube interactions (i.e. map the incoming data to the > app and not to an individual server). We would also not store > the individual server in inventory, so there is no need to clean it > up (again, no pet but kettle). > In fact we could just use the feed-id as kube token (or vice versa). > We still need a way to detect that one of those kettle-as is on Kube > and possibly either disable to re-route some of the lifecycle events > onto Kubernetes (start in any case, stop probably does not matter > if he container dies because the appserver inside stops or if kube > just kills it). > > > -- > Reg. Adresse: Red Hat GmbH, Technopark II, Haus C, > Werner-von-Siemens-Ring 14, D-85630 Grasbrunn > Handelsregister: Amtsgericht M?nchen HRB 153243 > Gesch?ftsf?hrer: Charles Cachera, Michael Cunningham, Michael O'Neill, > Eric Shander > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > From hrupp at redhat.com Mon Jul 4 04:23:38 2016 From: hrupp at redhat.com (Heiko W.Rupp) Date: Mon, 04 Jul 2016 10:23:38 +0200 Subject: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env In-Reply-To: <23754756.1606277.1467548084344.JavaMail.zimbra@redhat.com> References: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> <23754756.1606277.1467548084344.JavaMail.zimbra@redhat.com> Message-ID: On 3 Jul 2016, at 14:14, John Mazzitelli wrote: > In case you didn't understand the analogy, I believe Heiko meant to > use the word "Cattle" not "Kettle" :-) Yes sorry. You are right. My finger were so used to the 'K' thingy :-) From tsegismo at redhat.com Mon Jul 4 05:44:07 2016 From: tsegismo at redhat.com (Thomas Segismont) Date: Mon, 4 Jul 2016 11:44:07 +0200 Subject: [Hawkular-dev] Metrics and use of metrics_idx In-Reply-To: <281273485.4555946.1467406564769.JavaMail.zimbra@redhat.com> References: <2031761900.4108094.1467293089854.JavaMail.zimbra@redhat.com> <1961450804.4126707.1467294373087.JavaMail.zimbra@redhat.com> <281273485.4555946.1467406564769.JavaMail.zimbra@redhat.com> Message-ID: <5432f1e7-b52f-b218-b612-2e153add5dd7@redhat.com> Hi, First a note about number of metrics per tenant. A million metrics per tenant would be easy to reach IMO. Let's take a simple example: a machine running an Wildfly/EAP server. Combine OS (CPU, memory, network, disk) with container (Undertow, Datasource, ActiveMQ Artemis, EJB, JPA) metrics and you get close to a thousand metrics. Then multiply by a thousand machines and you reach the million. 
And of course there are users with more machines, and more complex setups (multiple Wildly/EAP servers with hundreds of apps deployed). Keep in mind that one of the promises of Metrics was the ability to store huge number of metrics, instead of disabling metric collection. That being said, do you have absolute numbers about the response time when querying for all metrics of a tenant? Twice as slower may not be that bad if we're going from 10ms down to 20ms :) Especially considering the use cases for such queries: daily sync with external system for example, or metric name autocomplete in Grafana/ManageIQ. Regards, Le 01/07/2016 ? 22:56, Michael Burman a ?crit : > Hi, > > Well, I've done some testing of the performance as well as a implementation that mimics the current behavior but does the reading from partition key. The performance is just fine with quite large amount of metricIds (at 50k the difference is ~2x, so in my dev machine I can still read about 3 times per second all the possible tenants and current behavior would be about 6 times per second), I started to see performance "issues" when the amount of unique metricIds reached millions (although I'm not sure if it's a performance issue in that case either - given that such query is probably not in some performance critical query). The amount of datapoints did not make a difference (I just tested with 8668 metricIds with a total of >86M stored datapoints, thus confirming my expectations also). > > So unless there's a known use case of using millions of metricIds with stored datapoints (IIRC, once all the rows for a partition key are deleted, the partition key disappears.. John can confirm) I think we should move forward with this. Although we could make a enhancement to fetching metricIds by allowing to query only metrics_idx table (something like fetch only the registered metrics) - in that case someone who registers all the metrics could fetch them quickly also. > > - Micke > > ----- Original Message ----- > From: "Stefan Negrea" > To: "Discussions around Hawkular development" > Sent: Friday, July 1, 2016 9:14:43 PM > Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx > > Hello, > > For Hawkular Metrics the speed of writes is always more important than the speed of reads (due to a variety of reasons). But that only works up to a certain extent, in the sense that we cannot totally neglect the read part. Let me see if I can narrow the impact of your proposal ... > > You made a very good point that the performance of reads is not affected if we discard metrics_idx for endpoints that require the metrics id. We only need to consider the impact of querying for metrics and tenants since both use metrics_idx. Querying the list of tenants is not very important because it is an admin feature that we will soon secure via the newly proposed "admin framework". So only querying for metrics definitions will be truly affected by removing the metrics_idx completely. But only a portion of those requests are affected because tags queries use the tags index. > > To conclude, metrics_idx is only important in cases where the user wants a full list of all metrics ever stored for a tenant id. If we can profile the performance impact on a larger set of metric definitions and we find the time difference without the metrics_idx is negligible then we should go forward with your proposal. > > > John Sanda, do you foresee using metrics_idx in the context of metric aggregation and the job scheduling framework that you've been working on? 
> > Micke, what do you think are the next steps to move forward with your proposal? > > > Thank you, > Stefan Negrea > > > On Thu, Jun 30, 2016 at 8:46 AM, Michael Burman < miburman at redhat.com > wrote: > > > Hi, > > This sparked my interest after the discussions in PR #523 (adding cache to avoid metrics_idx writes). Stefan commented that he still wants to write to this table to keep metrics available instantly, jsanda wants to write them asynchronously. Maybe we should instead just stop writing there? > > Why? We do the same thing in tenants also at this time, we don't write there if someone writes a metric to a new tenant. We fetch the partition keys from metrics_idx table. Now, the same ideology could be applied to the metrics_idx writing, read the partition keys from data. There's a small performance penalty, but the main thing is that we don't really need that information often - in most use cases never. > > If we want to search something with for example tags, we search it with tags - that metricId has been manually added to the metrics_idx table. No need to know if there's metrics which were not initialized. This should be the preferred way of doing things in any case - use tags instead of pushing metadata to the metricId. > > If we need to find out if id exists, fetching that from the PK (PartitionKey) index is fast. The only place where we could slow down is if there's lots of tenants with lots of metricIds each and we want to fetch all the metricIds of a single tenant. In that case the fetching of definitions could slow down. How often do users fetch all the tenant metricIds without any filtering? And how performance critical is this sort of behavior? And what use case does list of ids serve (without any information associated to them) ? > > If you need to fetch datapoints from a known metricId, there's no need for metrics_idx table writing or reading. So this index writing only applies to listing metrics. > > - Micke > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -- Thomas Segismont JBoss ON Engineering Team From miburman at redhat.com Mon Jul 4 05:59:20 2016 From: miburman at redhat.com (Michael Burman) Date: Mon, 4 Jul 2016 05:59:20 -0400 (EDT) Subject: [Hawkular-dev] Metrics and use of metrics_idx In-Reply-To: <5432f1e7-b52f-b218-b612-2e153add5dd7@redhat.com> References: <2031761900.4108094.1467293089854.JavaMail.zimbra@redhat.com> <1961450804.4126707.1467294373087.JavaMail.zimbra@redhat.com> <281273485.4555946.1467406564769.JavaMail.zimbra@redhat.com> <5432f1e7-b52f-b218-b612-2e153add5dd7@redhat.com> Message-ID: <1299492960.4738361.1467626360299.JavaMail.zimbra@redhat.com> Hi, If you consider a use case to sync with Grafana, then sending a million metricIds to a select list is probably a bad use-case. If it's on the other hand syncing all the metrics daily to some external system, it would not matter if it takes minutes to do the listing (as it probably will take a while to transfer everything in JSON). And both of those use-cases should use tags filtering to find something and not list all the metrics blindfolded. 
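(For reference, the two access paths being compared look roughly like the sketch below; the keyspace, table and column names - hawkular_metrics, metrics_idx, data, tenant_id/type/metric/dpart - are assumptions for illustration only and may not match the real Hawkular Metrics schema.)

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class MetricIdListingSketch {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("hawkular_metrics")) {
            String tenant = "my-tenant";

            // Current behavior: read the ids for a tenant from the explicit index table.
            ResultSet fromIdx = session.execute(
                    "SELECT metric FROM metrics_idx WHERE tenant_id = '" + tenant + "'");
            for (Row row : fromIdx) {
                System.out.println("idx: " + row.getString("metric"));
            }

            // Proposed behavior: derive the ids from the data table's partition keys.
            // SELECT DISTINCT touches only partition keys (no datapoints are read),
            // but it walks every partition and the tenant filter happens client-side,
            // which is presumably where the ~2x difference comes from.
            ResultSet fromData = session.execute(
                    "SELECT DISTINCT tenant_id, type, metric, dpart FROM data");
            for (Row row : fromData) {
                if (tenant.equals(row.getString("tenant_id"))) {
                    System.out.println("data: " + row.getString("metric"));
                }
            }
        }
    }
}

With that sketch in mind, the numbers compare as follows.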
Currently to list 50k metrics it takes 200ms on our current metrics_idx (to localhost) and 400ms with the other methology. For millions even the current method takes a long time already without filtering. A billion metricIds is no problem if you really want to find them. But why would you list a billion metricIds to choose from instead of actually finding them with something relevant (and in that case there's no slowdown to current) ? Metric name autocomplete can't use non-filtered processing in our current case either as we can't check for partial keys from Cassandra (Cassandra does not support prefix querying of keys). - Micke ----- Original Message ----- From: "Thomas Segismont" To: hawkular-dev at lists.jboss.org Sent: Monday, July 4, 2016 12:44:07 PM Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx Hi, First a note about number of metrics per tenant. A million metrics per tenant would be easy to reach IMO. Let's take a simple example: a machine running an Wildfly/EAP server. Combine OS (CPU, memory, network, disk) with container (Undertow, Datasource, ActiveMQ Artemis, EJB, JPA) metrics and you get close to a thousand metrics. Then multiply by a thousand machines and you reach the million. And of course there are users with more machines, and more complex setups (multiple Wildly/EAP servers with hundreds of apps deployed). Keep in mind that one of the promises of Metrics was the ability to store huge number of metrics, instead of disabling metric collection. That being said, do you have absolute numbers about the response time when querying for all metrics of a tenant? Twice as slower may not be that bad if we're going from 10ms down to 20ms :) Especially considering the use cases for such queries: daily sync with external system for example, or metric name autocomplete in Grafana/ManageIQ. Regards, Le 01/07/2016 ? 22:56, Michael Burman a ?crit : > Hi, > > Well, I've done some testing of the performance as well as a implementation that mimics the current behavior but does the reading from partition key. The performance is just fine with quite large amount of metricIds (at 50k the difference is ~2x, so in my dev machine I can still read about 3 times per second all the possible tenants and current behavior would be about 6 times per second), I started to see performance "issues" when the amount of unique metricIds reached millions (although I'm not sure if it's a performance issue in that case either - given that such query is probably not in some performance critical query). The amount of datapoints did not make a difference (I just tested with 8668 metricIds with a total of >86M stored datapoints, thus confirming my expectations also). > > So unless there's a known use case of using millions of metricIds with stored datapoints (IIRC, once all the rows for a partition key are deleted, the partition key disappears.. John can confirm) I think we should move forward with this. Although we could make a enhancement to fetching metricIds by allowing to query only metrics_idx table (something like fetch only the registered metrics) - in that case someone who registers all the metrics could fetch them quickly also. > > - Micke > > ----- Original Message ----- > From: "Stefan Negrea" > To: "Discussions around Hawkular development" > Sent: Friday, July 1, 2016 9:14:43 PM > Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx > > Hello, > > For Hawkular Metrics the speed of writes is always more important than the speed of reads (due to a variety of reasons). 
But that only works up to a certain extent, in the sense that we cannot totally neglect the read part. Let me see if I can narrow the impact of your proposal ... > > You made a very good point that the performance of reads is not affected if we discard metrics_idx for endpoints that require the metrics id. We only need to consider the impact of querying for metrics and tenants since both use metrics_idx. Querying the list of tenants is not very important because it is an admin feature that we will soon secure via the newly proposed "admin framework". So only querying for metrics definitions will be truly affected by removing the metrics_idx completely. But only a portion of those requests are affected because tags queries use the tags index. > > To conclude, metrics_idx is only important in cases where the user wants a full list of all metrics ever stored for a tenant id. If we can profile the performance impact on a larger set of metric definitions and we find the time difference without the metrics_idx is negligible then we should go forward with your proposal. > > > John Sanda, do you foresee using metrics_idx in the context of metric aggregation and the job scheduling framework that you've been working on? > > Micke, what do you think are the next steps to move forward with your proposal? > > > Thank you, > Stefan Negrea > > > On Thu, Jun 30, 2016 at 8:46 AM, Michael Burman < miburman at redhat.com > wrote: > > > Hi, > > This sparked my interest after the discussions in PR #523 (adding cache to avoid metrics_idx writes). Stefan commented that he still wants to write to this table to keep metrics available instantly, jsanda wants to write them asynchronously. Maybe we should instead just stop writing there? > > Why? We do the same thing in tenants also at this time, we don't write there if someone writes a metric to a new tenant. We fetch the partition keys from metrics_idx table. Now, the same ideology could be applied to the metrics_idx writing, read the partition keys from data. There's a small performance penalty, but the main thing is that we don't really need that information often - in most use cases never. > > If we want to search something with for example tags, we search it with tags - that metricId has been manually added to the metrics_idx table. No need to know if there's metrics which were not initialized. This should be the preferred way of doing things in any case - use tags instead of pushing metadata to the metricId. > > If we need to find out if id exists, fetching that from the PK (PartitionKey) index is fast. The only place where we could slow down is if there's lots of tenants with lots of metricIds each and we want to fetch all the metricIds of a single tenant. In that case the fetching of definitions could slow down. How often do users fetch all the tenant metricIds without any filtering? And how performance critical is this sort of behavior? And what use case does list of ids serve (without any information associated to them) ? > > If you need to fetch datapoints from a known metricId, there's no need for metrics_idx table writing or reading. So this index writing only applies to listing metrics. 
> > - Micke > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -- Thomas Segismont JBoss ON Engineering Team _______________________________________________ hawkular-dev mailing list hawkular-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/hawkular-dev From tsegismo at redhat.com Mon Jul 4 06:27:41 2016 From: tsegismo at redhat.com (Thomas Segismont) Date: Mon, 4 Jul 2016 12:27:41 +0200 Subject: [Hawkular-dev] Metrics and use of metrics_idx In-Reply-To: <1299492960.4738361.1467626360299.JavaMail.zimbra@redhat.com> References: <2031761900.4108094.1467293089854.JavaMail.zimbra@redhat.com> <1961450804.4126707.1467294373087.JavaMail.zimbra@redhat.com> <281273485.4555946.1467406564769.JavaMail.zimbra@redhat.com> <5432f1e7-b52f-b218-b612-2e153add5dd7@redhat.com> <1299492960.4738361.1467626360299.JavaMail.zimbra@redhat.com> Message-ID: <8cbf3aef-f737-aef2-5569-0a48ab7aaf16@redhat.com> Le 04/07/2016 ? 11:59, Michael Burman a ?crit : > Hi, > > If you consider a use case to sync with Grafana, then sending a million metricIds to a select list is probably a bad use-case. If it's on the other hand syncing all the metrics daily to some external system, it would not matter if it takes minutes to do the listing (as it probably will take a while to transfer everything in JSON). And both of those use-cases should use tags filtering to find something and not list all the metrics blindfolded. There is no bad use case, only bad solutions :) Joke aside, it is true that the current solution for autocomplete in Grafana is far from perfect. It does not query all metrics of a tenant, but all metrics of a same type for a tenant, and then does the filtering on the client side. For some reason the Metrics API does not allow the name filter if no tag filter is set. Should I open a JIRA? > > Currently to list 50k metrics it takes 200ms on our current metrics_idx (to localhost) and 400ms with the other methology. For millions even the current method takes a long time already without filtering. A billion metricIds is no problem if you really want to find them. But why would you list a billion metricIds to choose from instead of actually finding them with something relevant (and in that case there's no slowdown to current) ? > > Metric name autocomplete can't use non-filtered processing in our current case either as we can't check for partial keys from Cassandra (Cassandra does not support prefix querying of keys). Could the new features fulltext index capabilities in C* 3.x help here? > > - Micke > > ----- Original Message ----- > From: "Thomas Segismont" > To: hawkular-dev at lists.jboss.org > Sent: Monday, July 4, 2016 12:44:07 PM > Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx > > Hi, > > First a note about number of metrics per tenant. A million metrics per > tenant would be easy to reach IMO. Let's take a simple example: a > machine running an Wildfly/EAP server. Combine OS (CPU, memory, network, > disk) with container (Undertow, Datasource, ActiveMQ Artemis, EJB, JPA) > metrics and you get close to a thousand metrics. 
Then multiply by a > thousand machines and you reach the million. And of course there are > users with more machines, and more complex setups (multiple Wildly/EAP > servers with hundreds of apps deployed). > Keep in mind that one of the promises of Metrics was the ability to > store huge number of metrics, instead of disabling metric collection. > > That being said, do you have absolute numbers about the response time > when querying for all metrics of a tenant? Twice as slower may not be > that bad if we're going from 10ms down to 20ms :) Especially considering > the use cases for such queries: daily sync with external system for > example, or metric name autocomplete in Grafana/ManageIQ. > > Regards, > > Le 01/07/2016 ? 22:56, Michael Burman a ?crit : >> Hi, >> >> Well, I've done some testing of the performance as well as a implementation that mimics the current behavior but does the reading from partition key. The performance is just fine with quite large amount of metricIds (at 50k the difference is ~2x, so in my dev machine I can still read about 3 times per second all the possible tenants and current behavior would be about 6 times per second), I started to see performance "issues" when the amount of unique metricIds reached millions (although I'm not sure if it's a performance issue in that case either - given that such query is probably not in some performance critical query). The amount of datapoints did not make a difference (I just tested with 8668 metricIds with a total of >86M stored datapoints, thus confirming my expectations also). >> >> So unless there's a known use case of using millions of metricIds with stored datapoints (IIRC, once all the rows for a partition key are deleted, the partition key disappears.. John can confirm) I think we should move forward with this. Although we could make a enhancement to fetching metricIds by allowing to query only metrics_idx table (something like fetch only the registered metrics) - in that case someone who registers all the metrics could fetch them quickly also. >> >> - Micke >> >> ----- Original Message ----- >> From: "Stefan Negrea" >> To: "Discussions around Hawkular development" >> Sent: Friday, July 1, 2016 9:14:43 PM >> Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx >> >> Hello, >> >> For Hawkular Metrics the speed of writes is always more important than the speed of reads (due to a variety of reasons). But that only works up to a certain extent, in the sense that we cannot totally neglect the read part. Let me see if I can narrow the impact of your proposal ... >> >> You made a very good point that the performance of reads is not affected if we discard metrics_idx for endpoints that require the metrics id. We only need to consider the impact of querying for metrics and tenants since both use metrics_idx. Querying the list of tenants is not very important because it is an admin feature that we will soon secure via the newly proposed "admin framework". So only querying for metrics definitions will be truly affected by removing the metrics_idx completely. But only a portion of those requests are affected because tags queries use the tags index. >> >> To conclude, metrics_idx is only important in cases where the user wants a full list of all metrics ever stored for a tenant id. If we can profile the performance impact on a larger set of metric definitions and we find the time difference without the metrics_idx is negligible then we should go forward with your proposal. 
>> >> >> John Sanda, do you foresee using metrics_idx in the context of metric aggregation and the job scheduling framework that you've been working on? >> >> Micke, what do you think are the next steps to move forward with your proposal? >> >> >> Thank you, >> Stefan Negrea >> >> >> On Thu, Jun 30, 2016 at 8:46 AM, Michael Burman < miburman at redhat.com > wrote: >> >> >> Hi, >> >> This sparked my interest after the discussions in PR #523 (adding cache to avoid metrics_idx writes). Stefan commented that he still wants to write to this table to keep metrics available instantly, jsanda wants to write them asynchronously. Maybe we should instead just stop writing there? >> >> Why? We do the same thing in tenants also at this time, we don't write there if someone writes a metric to a new tenant. We fetch the partition keys from metrics_idx table. Now, the same ideology could be applied to the metrics_idx writing, read the partition keys from data. There's a small performance penalty, but the main thing is that we don't really need that information often - in most use cases never. >> >> If we want to search something with for example tags, we search it with tags - that metricId has been manually added to the metrics_idx table. No need to know if there's metrics which were not initialized. This should be the preferred way of doing things in any case - use tags instead of pushing metadata to the metricId. >> >> If we need to find out if id exists, fetching that from the PK (PartitionKey) index is fast. The only place where we could slow down is if there's lots of tenants with lots of metricIds each and we want to fetch all the metricIds of a single tenant. In that case the fetching of definitions could slow down. How often do users fetch all the tenant metricIds without any filtering? And how performance critical is this sort of behavior? And what use case does list of ids serve (without any information associated to them) ? >> >> If you need to fetch datapoints from a known metricId, there's no need for metrics_idx table writing or reading. So this index writing only applies to listing metrics. >> >> - Micke >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> >> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> > -- Thomas Segismont JBoss ON Engineering Team From miburman at redhat.com Mon Jul 4 07:11:00 2016 From: miburman at redhat.com (Michael Burman) Date: Mon, 4 Jul 2016 07:11:00 -0400 (EDT) Subject: [Hawkular-dev] Metrics and use of metrics_idx In-Reply-To: <8cbf3aef-f737-aef2-5569-0a48ab7aaf16@redhat.com> References: <2031761900.4108094.1467293089854.JavaMail.zimbra@redhat.com> <1961450804.4126707.1467294373087.JavaMail.zimbra@redhat.com> <281273485.4555946.1467406564769.JavaMail.zimbra@redhat.com> <5432f1e7-b52f-b218-b612-2e153add5dd7@redhat.com> <1299492960.4738361.1467626360299.JavaMail.zimbra@redhat.com> <8cbf3aef-f737-aef2-5569-0a48ab7aaf16@redhat.com> Message-ID: <418144617.4766560.1467630660781.JavaMail.zimbra@redhat.com> Hi, The id filter does not work without tags filter because we can't do the id filtering on Cassandra side. 
It was done for performance reasons, as otherwise you have to fetch all the available metrics to the HWKMETRICS and then do the client side filtering. Fulltext indexing could certainly be an interesting choice for many of our use-cases to investigate at least. I forgot the whole thing, we should look into it before making any greater changes. Mm.. - Micke ----- Original Message ----- From: "Thomas Segismont" To: hawkular-dev at lists.jboss.org Sent: Monday, July 4, 2016 1:27:41 PM Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx Le 04/07/2016 ? 11:59, Michael Burman a ?crit : > Hi, > > If you consider a use case to sync with Grafana, then sending a million metricIds to a select list is probably a bad use-case. If it's on the other hand syncing all the metrics daily to some external system, it would not matter if it takes minutes to do the listing (as it probably will take a while to transfer everything in JSON). And both of those use-cases should use tags filtering to find something and not list all the metrics blindfolded. There is no bad use case, only bad solutions :) Joke aside, it is true that the current solution for autocomplete in Grafana is far from perfect. It does not query all metrics of a tenant, but all metrics of a same type for a tenant, and then does the filtering on the client side. For some reason the Metrics API does not allow the name filter if no tag filter is set. Should I open a JIRA? > > Currently to list 50k metrics it takes 200ms on our current metrics_idx (to localhost) and 400ms with the other methology. For millions even the current method takes a long time already without filtering. A billion metricIds is no problem if you really want to find them. But why would you list a billion metricIds to choose from instead of actually finding them with something relevant (and in that case there's no slowdown to current) ? > > Metric name autocomplete can't use non-filtered processing in our current case either as we can't check for partial keys from Cassandra (Cassandra does not support prefix querying of keys). Could the new features fulltext index capabilities in C* 3.x help here? > > - Micke > > ----- Original Message ----- > From: "Thomas Segismont" > To: hawkular-dev at lists.jboss.org > Sent: Monday, July 4, 2016 12:44:07 PM > Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx > > Hi, > > First a note about number of metrics per tenant. A million metrics per > tenant would be easy to reach IMO. Let's take a simple example: a > machine running an Wildfly/EAP server. Combine OS (CPU, memory, network, > disk) with container (Undertow, Datasource, ActiveMQ Artemis, EJB, JPA) > metrics and you get close to a thousand metrics. Then multiply by a > thousand machines and you reach the million. And of course there are > users with more machines, and more complex setups (multiple Wildly/EAP > servers with hundreds of apps deployed). > Keep in mind that one of the promises of Metrics was the ability to > store huge number of metrics, instead of disabling metric collection. > > That being said, do you have absolute numbers about the response time > when querying for all metrics of a tenant? Twice as slower may not be > that bad if we're going from 10ms down to 20ms :) Especially considering > the use cases for such queries: daily sync with external system for > example, or metric name autocomplete in Grafana/ManageIQ. > > Regards, > > Le 01/07/2016 ? 
22:56, Michael Burman a ?crit : >> Hi, >> >> Well, I've done some testing of the performance as well as a implementation that mimics the current behavior but does the reading from partition key. The performance is just fine with quite large amount of metricIds (at 50k the difference is ~2x, so in my dev machine I can still read about 3 times per second all the possible tenants and current behavior would be about 6 times per second), I started to see performance "issues" when the amount of unique metricIds reached millions (although I'm not sure if it's a performance issue in that case either - given that such query is probably not in some performance critical query). The amount of datapoints did not make a difference (I just tested with 8668 metricIds with a total of >86M stored datapoints, thus confirming my expectations also). >> >> So unless there's a known use case of using millions of metricIds with stored datapoints (IIRC, once all the rows for a partition key are deleted, the partition key disappears.. John can confirm) I think we should move forward with this. Although we could make a enhancement to fetching metricIds by allowing to query only metrics_idx table (something like fetch only the registered metrics) - in that case someone who registers all the metrics could fetch them quickly also. >> >> - Micke >> >> ----- Original Message ----- >> From: "Stefan Negrea" >> To: "Discussions around Hawkular development" >> Sent: Friday, July 1, 2016 9:14:43 PM >> Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx >> >> Hello, >> >> For Hawkular Metrics the speed of writes is always more important than the speed of reads (due to a variety of reasons). But that only works up to a certain extent, in the sense that we cannot totally neglect the read part. Let me see if I can narrow the impact of your proposal ... >> >> You made a very good point that the performance of reads is not affected if we discard metrics_idx for endpoints that require the metrics id. We only need to consider the impact of querying for metrics and tenants since both use metrics_idx. Querying the list of tenants is not very important because it is an admin feature that we will soon secure via the newly proposed "admin framework". So only querying for metrics definitions will be truly affected by removing the metrics_idx completely. But only a portion of those requests are affected because tags queries use the tags index. >> >> To conclude, metrics_idx is only important in cases where the user wants a full list of all metrics ever stored for a tenant id. If we can profile the performance impact on a larger set of metric definitions and we find the time difference without the metrics_idx is negligible then we should go forward with your proposal. >> >> >> John Sanda, do you foresee using metrics_idx in the context of metric aggregation and the job scheduling framework that you've been working on? >> >> Micke, what do you think are the next steps to move forward with your proposal? >> >> >> Thank you, >> Stefan Negrea >> >> >> On Thu, Jun 30, 2016 at 8:46 AM, Michael Burman < miburman at redhat.com > wrote: >> >> >> Hi, >> >> This sparked my interest after the discussions in PR #523 (adding cache to avoid metrics_idx writes). Stefan commented that he still wants to write to this table to keep metrics available instantly, jsanda wants to write them asynchronously. Maybe we should instead just stop writing there? >> >> Why? 
We do the same thing in tenants also at this time, we don't write there if someone writes a metric to a new tenant. We fetch the partition keys from metrics_idx table. Now, the same ideology could be applied to the metrics_idx writing, read the partition keys from data. There's a small performance penalty, but the main thing is that we don't really need that information often - in most use cases never. >> >> If we want to search something with for example tags, we search it with tags - that metricId has been manually added to the metrics_idx table. No need to know if there's metrics which were not initialized. This should be the preferred way of doing things in any case - use tags instead of pushing metadata to the metricId. >> >> If we need to find out if id exists, fetching that from the PK (PartitionKey) index is fast. The only place where we could slow down is if there's lots of tenants with lots of metricIds each and we want to fetch all the metricIds of a single tenant. In that case the fetching of definitions could slow down. How often do users fetch all the tenant metricIds without any filtering? And how performance critical is this sort of behavior? And what use case does list of ids serve (without any information associated to them) ? >> >> If you need to fetch datapoints from a known metricId, there's no need for metrics_idx table writing or reading. So this index writing only applies to listing metrics. >> >> - Micke >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> >> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> > -- Thomas Segismont JBoss ON Engineering Team _______________________________________________ hawkular-dev mailing list hawkular-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/hawkular-dev From tsegismo at redhat.com Mon Jul 4 09:19:30 2016 From: tsegismo at redhat.com (Thomas Segismont) Date: Mon, 4 Jul 2016 15:19:30 +0200 Subject: [Hawkular-dev] Metrics and use of metrics_idx In-Reply-To: <418144617.4766560.1467630660781.JavaMail.zimbra@redhat.com> References: <2031761900.4108094.1467293089854.JavaMail.zimbra@redhat.com> <1961450804.4126707.1467294373087.JavaMail.zimbra@redhat.com> <281273485.4555946.1467406564769.JavaMail.zimbra@redhat.com> <5432f1e7-b52f-b218-b612-2e153add5dd7@redhat.com> <1299492960.4738361.1467626360299.JavaMail.zimbra@redhat.com> <8cbf3aef-f737-aef2-5569-0a48ab7aaf16@redhat.com> <418144617.4766560.1467630660781.JavaMail.zimbra@redhat.com> Message-ID: <75b0fa33-2b72-0c8b-b701-78e1a4b8384e@redhat.com> Le 04/07/2016 ? 13:11, Michael Burman a ?crit : > Hi, > > The id filter does not work without tags filter because we can't do the id filtering on Cassandra side. It was done for performance reasons, as otherwise you have to fetch all the available metrics to the HWKMETRICS and then do the client side filtering. I understand we can't filter on the database. But then wouldn't it be better to filter on the server in order to save that to the client at least? I mean, if you need to get metrics by name, as a user you will have to load everything anyway. Couldn't we save that to the user? 
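What that server-side step could look like, as a rough sketch only - the helper below is hypothetical and not the existing Metrics API; the point is just that the unfiltered id list never has to leave the server: fetch the ids the same way as today, apply the name filter, and serialize only the matches.

    import java.util.Arrays;
    import java.util.List;
    import java.util.regex.Pattern;
    import java.util.stream.Collectors;

    public class ServerSideNameFilter {

        // Hypothetical helper: however the ids were fetched (metrics_idx today, data
        // partition keys with the proposal), filter before building the response.
        static List<String> filterByName(List<String> allIds, String nameRegex) {
            Pattern pattern = Pattern.compile(nameRegex);
            return allIds.stream()
                    .filter(id -> pattern.matcher(id).find())
                    .collect(Collectors.toList());
        }

        public static void main(String[] args) {
            List<String> ids = Arrays.asList(
                    "host1.wildfly.heap.used", "host1.wildfly.heap.max", "host2.cpu.load");
            // Only the matches would travel to Grafana/ManageIQ for autocomplete.
            System.out.println(filterByName(ids, "heap\\."));
        }
    }

That would not make the query any cheaper on the Cassandra side; it only stops the full id list from being shipped to every client.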
> > Fulltext indexing could certainly be an interesting choice for many of our use-cases to investigate at least. I forgot the whole thing, we should look into it before making any greater changes. Mm.. > > - Micke > > ----- Original Message ----- > From: "Thomas Segismont" > To: hawkular-dev at lists.jboss.org > Sent: Monday, July 4, 2016 1:27:41 PM > Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx > > > > Le 04/07/2016 ? 11:59, Michael Burman a ?crit : >> Hi, >> >> If you consider a use case to sync with Grafana, then sending a million metricIds to a select list is probably a bad use-case. If it's on the other hand syncing all the metrics daily to some external system, it would not matter if it takes minutes to do the listing (as it probably will take a while to transfer everything in JSON). And both of those use-cases should use tags filtering to find something and not list all the metrics blindfolded. > > There is no bad use case, only bad solutions :) Joke aside, it is true > that the current solution for autocomplete in Grafana is far from > perfect. It does not query all metrics of a tenant, but all metrics of a > same type for a tenant, and then does the filtering on the client side. > For some reason the Metrics API does not allow the name filter if no tag > filter is set. Should I open a JIRA? > >> >> Currently to list 50k metrics it takes 200ms on our current metrics_idx (to localhost) and 400ms with the other methology. For millions even the current method takes a long time already without filtering. A billion metricIds is no problem if you really want to find them. But why would you list a billion metricIds to choose from instead of actually finding them with something relevant (and in that case there's no slowdown to current) ? >> >> Metric name autocomplete can't use non-filtered processing in our current case either as we can't check for partial keys from Cassandra (Cassandra does not support prefix querying of keys). > > Could the new features fulltext index capabilities in C* 3.x help here? > >> >> - Micke >> >> ----- Original Message ----- >> From: "Thomas Segismont" >> To: hawkular-dev at lists.jboss.org >> Sent: Monday, July 4, 2016 12:44:07 PM >> Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx >> >> Hi, >> >> First a note about number of metrics per tenant. A million metrics per >> tenant would be easy to reach IMO. Let's take a simple example: a >> machine running an Wildfly/EAP server. Combine OS (CPU, memory, network, >> disk) with container (Undertow, Datasource, ActiveMQ Artemis, EJB, JPA) >> metrics and you get close to a thousand metrics. Then multiply by a >> thousand machines and you reach the million. And of course there are >> users with more machines, and more complex setups (multiple Wildly/EAP >> servers with hundreds of apps deployed). >> Keep in mind that one of the promises of Metrics was the ability to >> store huge number of metrics, instead of disabling metric collection. >> >> That being said, do you have absolute numbers about the response time >> when querying for all metrics of a tenant? Twice as slower may not be >> that bad if we're going from 10ms down to 20ms :) Especially considering >> the use cases for such queries: daily sync with external system for >> example, or metric name autocomplete in Grafana/ManageIQ. >> >> Regards, >> >> Le 01/07/2016 ? 
22:56, Michael Burman a ?crit : >>> Hi, >>> >>> Well, I've done some testing of the performance as well as a implementation that mimics the current behavior but does the reading from partition key. The performance is just fine with quite large amount of metricIds (at 50k the difference is ~2x, so in my dev machine I can still read about 3 times per second all the possible tenants and current behavior would be about 6 times per second), I started to see performance "issues" when the amount of unique metricIds reached millions (although I'm not sure if it's a performance issue in that case either - given that such query is probably not in some performance critical query). The amount of datapoints did not make a difference (I just tested with 8668 metricIds with a total of >86M stored datapoints, thus confirming my expectations also). >>> >>> So unless there's a known use case of using millions of metricIds with stored datapoints (IIRC, once all the rows for a partition key are deleted, the partition key disappears.. John can confirm) I think we should move forward with this. Although we could make a enhancement to fetching metricIds by allowing to query only metrics_idx table (something like fetch only the registered metrics) - in that case someone who registers all the metrics could fetch them quickly also. >>> >>> - Micke >>> >>> ----- Original Message ----- >>> From: "Stefan Negrea" >>> To: "Discussions around Hawkular development" >>> Sent: Friday, July 1, 2016 9:14:43 PM >>> Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx >>> >>> Hello, >>> >>> For Hawkular Metrics the speed of writes is always more important than the speed of reads (due to a variety of reasons). But that only works up to a certain extent, in the sense that we cannot totally neglect the read part. Let me see if I can narrow the impact of your proposal ... >>> >>> You made a very good point that the performance of reads is not affected if we discard metrics_idx for endpoints that require the metrics id. We only need to consider the impact of querying for metrics and tenants since both use metrics_idx. Querying the list of tenants is not very important because it is an admin feature that we will soon secure via the newly proposed "admin framework". So only querying for metrics definitions will be truly affected by removing the metrics_idx completely. But only a portion of those requests are affected because tags queries use the tags index. >>> >>> To conclude, metrics_idx is only important in cases where the user wants a full list of all metrics ever stored for a tenant id. If we can profile the performance impact on a larger set of metric definitions and we find the time difference without the metrics_idx is negligible then we should go forward with your proposal. >>> >>> >>> John Sanda, do you foresee using metrics_idx in the context of metric aggregation and the job scheduling framework that you've been working on? >>> >>> Micke, what do you think are the next steps to move forward with your proposal? >>> >>> >>> Thank you, >>> Stefan Negrea >>> >>> >>> On Thu, Jun 30, 2016 at 8:46 AM, Michael Burman < miburman at redhat.com > wrote: >>> >>> >>> Hi, >>> >>> This sparked my interest after the discussions in PR #523 (adding cache to avoid metrics_idx writes). Stefan commented that he still wants to write to this table to keep metrics available instantly, jsanda wants to write them asynchronously. Maybe we should instead just stop writing there? >>> >>> Why? 
We do the same thing in tenants also at this time, we don't write there if someone writes a metric to a new tenant. We fetch the partition keys from metrics_idx table. Now, the same ideology could be applied to the metrics_idx writing, read the partition keys from data. There's a small performance penalty, but the main thing is that we don't really need that information often - in most use cases never. >>> >>> If we want to search something with for example tags, we search it with tags - that metricId has been manually added to the metrics_idx table. No need to know if there's metrics which were not initialized. This should be the preferred way of doing things in any case - use tags instead of pushing metadata to the metricId. >>> >>> If we need to find out if id exists, fetching that from the PK (PartitionKey) index is fast. The only place where we could slow down is if there's lots of tenants with lots of metricIds each and we want to fetch all the metricIds of a single tenant. In that case the fetching of definitions could slow down. How often do users fetch all the tenant metricIds without any filtering? And how performance critical is this sort of behavior? And what use case does list of ids serve (without any information associated to them) ? >>> >>> If you need to fetch datapoints from a known metricId, there's no need for metrics_idx table writing or reading. So this index writing only applies to listing metrics. >>> >>> - Micke >>> _______________________________________________ >>> hawkular-dev mailing list >>> hawkular-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>> >>> >>> _______________________________________________ >>> hawkular-dev mailing list >>> hawkular-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>> _______________________________________________ >>> hawkular-dev mailing list >>> hawkular-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>> >> > -- Thomas Segismont JBoss ON Engineering Team From miburman at redhat.com Mon Jul 4 09:54:41 2016 From: miburman at redhat.com (Michael Burman) Date: Mon, 4 Jul 2016 09:54:41 -0400 (EDT) Subject: [Hawkular-dev] Metrics and use of metrics_idx In-Reply-To: <75b0fa33-2b72-0c8b-b701-78e1a4b8384e@redhat.com> References: <2031761900.4108094.1467293089854.JavaMail.zimbra@redhat.com> <281273485.4555946.1467406564769.JavaMail.zimbra@redhat.com> <5432f1e7-b52f-b218-b612-2e153add5dd7@redhat.com> <1299492960.4738361.1467626360299.JavaMail.zimbra@redhat.com> <8cbf3aef-f737-aef2-5569-0a48ab7aaf16@redhat.com> <418144617.4766560.1467630660781.JavaMail.zimbra@redhat.com> <75b0fa33-2b72-0c8b-b701-78e1a4b8384e@redhat.com> Message-ID: <427305826.4802006.1467640481529.JavaMail.zimbra@redhat.com> Hi, Sure, but we should discourage users to use metricIds for anything. Best approach would be to randomize them and force users to use tagging to find their metrics. Otherwise what we'll get is silly integrations where someone has "hostname.wildfly.metric.name" and then they want to search all the "metric.name" by doing idFilter="*.metric\.name" and complain "your metrics db is slow!". What they should do instead is always "/raw/query?tags=hostname=X,app=wildfly,metric=metric.name" and so on. - Micke ----- Original Message ----- From: "Thomas Segismont" To: hawkular-dev at lists.jboss.org Sent: Monday, July 4, 2016 4:19:30 PM Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx Le 04/07/2016 ? 
13:11, Michael Burman a ?crit : > Hi, > > The id filter does not work without tags filter because we can't do the id filtering on Cassandra side. It was done for performance reasons, as otherwise you have to fetch all the available metrics to the HWKMETRICS and then do the client side filtering. I understand we can't filter on the database. But then wouldn't it be better to filter on the server in order to save that to the client at least? I mean, if you need to get metrics by name, as a user you will have to load everything anyway. Couldn't we save that to the user? > > Fulltext indexing could certainly be an interesting choice for many of our use-cases to investigate at least. I forgot the whole thing, we should look into it before making any greater changes. Mm.. > > - Micke > > ----- Original Message ----- > From: "Thomas Segismont" > To: hawkular-dev at lists.jboss.org > Sent: Monday, July 4, 2016 1:27:41 PM > Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx > > > > Le 04/07/2016 ? 11:59, Michael Burman a ?crit : >> Hi, >> >> If you consider a use case to sync with Grafana, then sending a million metricIds to a select list is probably a bad use-case. If it's on the other hand syncing all the metrics daily to some external system, it would not matter if it takes minutes to do the listing (as it probably will take a while to transfer everything in JSON). And both of those use-cases should use tags filtering to find something and not list all the metrics blindfolded. > > There is no bad use case, only bad solutions :) Joke aside, it is true > that the current solution for autocomplete in Grafana is far from > perfect. It does not query all metrics of a tenant, but all metrics of a > same type for a tenant, and then does the filtering on the client side. > For some reason the Metrics API does not allow the name filter if no tag > filter is set. Should I open a JIRA? > >> >> Currently to list 50k metrics it takes 200ms on our current metrics_idx (to localhost) and 400ms with the other methology. For millions even the current method takes a long time already without filtering. A billion metricIds is no problem if you really want to find them. But why would you list a billion metricIds to choose from instead of actually finding them with something relevant (and in that case there's no slowdown to current) ? >> >> Metric name autocomplete can't use non-filtered processing in our current case either as we can't check for partial keys from Cassandra (Cassandra does not support prefix querying of keys). > > Could the new features fulltext index capabilities in C* 3.x help here? > >> >> - Micke >> >> ----- Original Message ----- >> From: "Thomas Segismont" >> To: hawkular-dev at lists.jboss.org >> Sent: Monday, July 4, 2016 12:44:07 PM >> Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx >> >> Hi, >> >> First a note about number of metrics per tenant. A million metrics per >> tenant would be easy to reach IMO. Let's take a simple example: a >> machine running an Wildfly/EAP server. Combine OS (CPU, memory, network, >> disk) with container (Undertow, Datasource, ActiveMQ Artemis, EJB, JPA) >> metrics and you get close to a thousand metrics. Then multiply by a >> thousand machines and you reach the million. And of course there are >> users with more machines, and more complex setups (multiple Wildly/EAP >> servers with hundreds of apps deployed). 
>> Keep in mind that one of the promises of Metrics was the ability to >> store huge number of metrics, instead of disabling metric collection. >> >> That being said, do you have absolute numbers about the response time >> when querying for all metrics of a tenant? Twice as slower may not be >> that bad if we're going from 10ms down to 20ms :) Especially considering >> the use cases for such queries: daily sync with external system for >> example, or metric name autocomplete in Grafana/ManageIQ. >> >> Regards, >> >> Le 01/07/2016 ? 22:56, Michael Burman a ?crit : >>> Hi, >>> >>> Well, I've done some testing of the performance as well as a implementation that mimics the current behavior but does the reading from partition key. The performance is just fine with quite large amount of metricIds (at 50k the difference is ~2x, so in my dev machine I can still read about 3 times per second all the possible tenants and current behavior would be about 6 times per second), I started to see performance "issues" when the amount of unique metricIds reached millions (although I'm not sure if it's a performance issue in that case either - given that such query is probably not in some performance critical query). The amount of datapoints did not make a difference (I just tested with 8668 metricIds with a total of >86M stored datapoints, thus confirming my expectations also). >>> >>> So unless there's a known use case of using millions of metricIds with stored datapoints (IIRC, once all the rows for a partition key are deleted, the partition key disappears.. John can confirm) I think we should move forward with this. Although we could make a enhancement to fetching metricIds by allowing to query only metrics_idx table (something like fetch only the registered metrics) - in that case someone who registers all the metrics could fetch them quickly also. >>> >>> - Micke >>> >>> ----- Original Message ----- >>> From: "Stefan Negrea" >>> To: "Discussions around Hawkular development" >>> Sent: Friday, July 1, 2016 9:14:43 PM >>> Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx >>> >>> Hello, >>> >>> For Hawkular Metrics the speed of writes is always more important than the speed of reads (due to a variety of reasons). But that only works up to a certain extent, in the sense that we cannot totally neglect the read part. Let me see if I can narrow the impact of your proposal ... >>> >>> You made a very good point that the performance of reads is not affected if we discard metrics_idx for endpoints that require the metrics id. We only need to consider the impact of querying for metrics and tenants since both use metrics_idx. Querying the list of tenants is not very important because it is an admin feature that we will soon secure via the newly proposed "admin framework". So only querying for metrics definitions will be truly affected by removing the metrics_idx completely. But only a portion of those requests are affected because tags queries use the tags index. >>> >>> To conclude, metrics_idx is only important in cases where the user wants a full list of all metrics ever stored for a tenant id. If we can profile the performance impact on a larger set of metric definitions and we find the time difference without the metrics_idx is negligible then we should go forward with your proposal. >>> >>> >>> John Sanda, do you foresee using metrics_idx in the context of metric aggregation and the job scheduling framework that you've been working on? 
>>> >>> Micke, what do you think are the next steps to move forward with your proposal? >>> >>> >>> Thank you, >>> Stefan Negrea >>> >>> >>> On Thu, Jun 30, 2016 at 8:46 AM, Michael Burman < miburman at redhat.com > wrote: >>> >>> >>> Hi, >>> >>> This sparked my interest after the discussions in PR #523 (adding cache to avoid metrics_idx writes). Stefan commented that he still wants to write to this table to keep metrics available instantly, jsanda wants to write them asynchronously. Maybe we should instead just stop writing there? >>> >>> Why? We do the same thing in tenants also at this time, we don't write there if someone writes a metric to a new tenant. We fetch the partition keys from metrics_idx table. Now, the same ideology could be applied to the metrics_idx writing, read the partition keys from data. There's a small performance penalty, but the main thing is that we don't really need that information often - in most use cases never. >>> >>> If we want to search something with for example tags, we search it with tags - that metricId has been manually added to the metrics_idx table. No need to know if there's metrics which were not initialized. This should be the preferred way of doing things in any case - use tags instead of pushing metadata to the metricId. >>> >>> If we need to find out if id exists, fetching that from the PK (PartitionKey) index is fast. The only place where we could slow down is if there's lots of tenants with lots of metricIds each and we want to fetch all the metricIds of a single tenant. In that case the fetching of definitions could slow down. How often do users fetch all the tenant metricIds without any filtering? And how performance critical is this sort of behavior? And what use case does list of ids serve (without any information associated to them) ? >>> >>> If you need to fetch datapoints from a known metricId, there's no need for metrics_idx table writing or reading. So this index writing only applies to listing metrics. 
>>> >>> - Micke >>> _______________________________________________ >>> hawkular-dev mailing list >>> hawkular-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>> >>> >>> _______________________________________________ >>> hawkular-dev mailing list >>> hawkular-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>> _______________________________________________ >>> hawkular-dev mailing list >>> hawkular-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>> >> > -- Thomas Segismont JBoss ON Engineering Team _______________________________________________ hawkular-dev mailing list hawkular-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/hawkular-dev From tsegismo at redhat.com Mon Jul 4 10:29:23 2016 From: tsegismo at redhat.com (Thomas Segismont) Date: Mon, 4 Jul 2016 16:29:23 +0200 Subject: [Hawkular-dev] Metrics and use of metrics_idx In-Reply-To: <427305826.4802006.1467640481529.JavaMail.zimbra@redhat.com> References: <2031761900.4108094.1467293089854.JavaMail.zimbra@redhat.com> <281273485.4555946.1467406564769.JavaMail.zimbra@redhat.com> <5432f1e7-b52f-b218-b612-2e153add5dd7@redhat.com> <1299492960.4738361.1467626360299.JavaMail.zimbra@redhat.com> <8cbf3aef-f737-aef2-5569-0a48ab7aaf16@redhat.com> <418144617.4766560.1467630660781.JavaMail.zimbra@redhat.com> <75b0fa33-2b72-0c8b-b701-78e1a4b8384e@redhat.com> <427305826.4802006.1467640481529.JavaMail.zimbra@redhat.com> Message-ID: <71085524-bc4a-5013-55c9-c3fa9946205e@redhat.com> I don't agree with the philosophy. I err on the side of making it possible to use a server side filter and explain in the documentation why tagging your metrics is a better option. Le 04/07/2016 ? 15:54, Michael Burman a ?crit : > Hi, > > Sure, but we should discourage users to use metricIds for anything. Best approach would be to randomize them and force users to use tagging to find their metrics. Otherwise what we'll get is silly integrations where someone has "hostname.wildfly.metric.name" and then they want to search all the "metric.name" by doing idFilter="*.metric\.name" and complain "your metrics db is slow!". What they should do instead is always "/raw/query?tags=hostname=X,app=wildfly,metric=metric.name" and so on. > > - Micke > > ----- Original Message ----- > From: "Thomas Segismont" > To: hawkular-dev at lists.jboss.org > Sent: Monday, July 4, 2016 4:19:30 PM > Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx > > > > Le 04/07/2016 ? 13:11, Michael Burman a ?crit : >> Hi, >> >> The id filter does not work without tags filter because we can't do the id filtering on Cassandra side. It was done for performance reasons, as otherwise you have to fetch all the available metrics to the HWKMETRICS and then do the client side filtering. > > I understand we can't filter on the database. But then wouldn't it be > better to filter on the server in order to save that to the client at > least? I mean, if you need to get metrics by name, as a user you will > have to load everything anyway. Couldn't we save that to the user? > >> >> Fulltext indexing could certainly be an interesting choice for many of our use-cases to investigate at least. I forgot the whole thing, we should look into it before making any greater changes. Mm.. 
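The Cassandra 3.x feature referred to above is the SASI secondary index, which allows LIKE queries on a text column. A sketch of the idea against a hypothetical lookup table - whether it could be bolted onto the existing metrics_idx layout, and what it costs at write time, is exactly the part that would need investigating:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;

    public class SasiNameSearchSketch {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build()) {
                Session session = cluster.connect("hawkular_metrics");
                // Hypothetical lookup table, only for this sketch.
                session.execute("CREATE TABLE IF NOT EXISTS metric_names "
                        + "(tenant_id text, metric_key timeuuid, metric text, "
                        + "PRIMARY KEY (tenant_id, metric_key))");
                // SASI index (Cassandra 3.4+) on the name column enables LIKE matching.
                session.execute("CREATE CUSTOM INDEX IF NOT EXISTS metric_names_metric_sasi "
                        + "ON metric_names (metric) "
                        + "USING 'org.apache.cassandra.index.sasi.SASIIndex' "
                        + "WITH OPTIONS = {'mode': 'CONTAINS'}");
                session.execute("INSERT INTO metric_names (tenant_id, metric_key, metric) "
                        + "VALUES ('acme', now(), 'host1.wildfly.heap.used')");
                // Prefix/substring matching is then evaluated by Cassandra itself.
                for (Row row : session.execute(
                        "SELECT metric FROM metric_names WHERE metric LIKE '%heap%'")) {
                    System.out.println(row.getString("metric"));
                }
            }
        }
    }

The obvious things to check first would be the write-time overhead and the index size once such a table holds millions of names.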
>> >> - Micke >> >> ----- Original Message ----- >> From: "Thomas Segismont" >> To: hawkular-dev at lists.jboss.org >> Sent: Monday, July 4, 2016 1:27:41 PM >> Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx >> >> >> >> Le 04/07/2016 ? 11:59, Michael Burman a ?crit : >>> Hi, >>> >>> If you consider a use case to sync with Grafana, then sending a million metricIds to a select list is probably a bad use-case. If it's on the other hand syncing all the metrics daily to some external system, it would not matter if it takes minutes to do the listing (as it probably will take a while to transfer everything in JSON). And both of those use-cases should use tags filtering to find something and not list all the metrics blindfolded. >> >> There is no bad use case, only bad solutions :) Joke aside, it is true >> that the current solution for autocomplete in Grafana is far from >> perfect. It does not query all metrics of a tenant, but all metrics of a >> same type for a tenant, and then does the filtering on the client side. >> For some reason the Metrics API does not allow the name filter if no tag >> filter is set. Should I open a JIRA? >> >>> >>> Currently to list 50k metrics it takes 200ms on our current metrics_idx (to localhost) and 400ms with the other methology. For millions even the current method takes a long time already without filtering. A billion metricIds is no problem if you really want to find them. But why would you list a billion metricIds to choose from instead of actually finding them with something relevant (and in that case there's no slowdown to current) ? >>> >>> Metric name autocomplete can't use non-filtered processing in our current case either as we can't check for partial keys from Cassandra (Cassandra does not support prefix querying of keys). >> >> Could the new features fulltext index capabilities in C* 3.x help here? >> >>> >>> - Micke >>> >>> ----- Original Message ----- >>> From: "Thomas Segismont" >>> To: hawkular-dev at lists.jboss.org >>> Sent: Monday, July 4, 2016 12:44:07 PM >>> Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx >>> >>> Hi, >>> >>> First a note about number of metrics per tenant. A million metrics per >>> tenant would be easy to reach IMO. Let's take a simple example: a >>> machine running an Wildfly/EAP server. Combine OS (CPU, memory, network, >>> disk) with container (Undertow, Datasource, ActiveMQ Artemis, EJB, JPA) >>> metrics and you get close to a thousand metrics. Then multiply by a >>> thousand machines and you reach the million. And of course there are >>> users with more machines, and more complex setups (multiple Wildly/EAP >>> servers with hundreds of apps deployed). >>> Keep in mind that one of the promises of Metrics was the ability to >>> store huge number of metrics, instead of disabling metric collection. >>> >>> That being said, do you have absolute numbers about the response time >>> when querying for all metrics of a tenant? Twice as slower may not be >>> that bad if we're going from 10ms down to 20ms :) Especially considering >>> the use cases for such queries: daily sync with external system for >>> example, or metric name autocomplete in Grafana/ManageIQ. >>> >>> Regards, >>> >>> Le 01/07/2016 ? 22:56, Michael Burman a ?crit : >>>> Hi, >>>> >>>> Well, I've done some testing of the performance as well as a implementation that mimics the current behavior but does the reading from partition key. 
The performance is just fine with quite large amount of metricIds (at 50k the difference is ~2x, so in my dev machine I can still read about 3 times per second all the possible tenants and current behavior would be about 6 times per second), I started to see performance "issues" when the amount of unique metricIds reached millions (although I'm not sure if it's a performance issue in that case either - given that such query is probably not in some performance critical query). The amount of datapoints did not make a difference (I just tested with 8668 metricIds with a total of >86M stored datapoints, thus confirming my expectations also). >>>> >>>> So unless there's a known use case of using millions of metricIds with stored datapoints (IIRC, once all the rows for a partition key are deleted, the partition key disappears.. John can confirm) I think we should move forward with this. Although we could make a enhancement to fetching metricIds by allowing to query only metrics_idx table (something like fetch only the registered metrics) - in that case someone who registers all the metrics could fetch them quickly also. >>>> >>>> - Micke >>>> >>>> ----- Original Message ----- >>>> From: "Stefan Negrea" >>>> To: "Discussions around Hawkular development" >>>> Sent: Friday, July 1, 2016 9:14:43 PM >>>> Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx >>>> >>>> Hello, >>>> >>>> For Hawkular Metrics the speed of writes is always more important than the speed of reads (due to a variety of reasons). But that only works up to a certain extent, in the sense that we cannot totally neglect the read part. Let me see if I can narrow the impact of your proposal ... >>>> >>>> You made a very good point that the performance of reads is not affected if we discard metrics_idx for endpoints that require the metrics id. We only need to consider the impact of querying for metrics and tenants since both use metrics_idx. Querying the list of tenants is not very important because it is an admin feature that we will soon secure via the newly proposed "admin framework". So only querying for metrics definitions will be truly affected by removing the metrics_idx completely. But only a portion of those requests are affected because tags queries use the tags index. >>>> >>>> To conclude, metrics_idx is only important in cases where the user wants a full list of all metrics ever stored for a tenant id. If we can profile the performance impact on a larger set of metric definitions and we find the time difference without the metrics_idx is negligible then we should go forward with your proposal. >>>> >>>> >>>> John Sanda, do you foresee using metrics_idx in the context of metric aggregation and the job scheduling framework that you've been working on? >>>> >>>> Micke, what do you think are the next steps to move forward with your proposal? >>>> >>>> >>>> Thank you, >>>> Stefan Negrea >>>> >>>> >>>> On Thu, Jun 30, 2016 at 8:46 AM, Michael Burman < miburman at redhat.com > wrote: >>>> >>>> >>>> Hi, >>>> >>>> This sparked my interest after the discussions in PR #523 (adding cache to avoid metrics_idx writes). Stefan commented that he still wants to write to this table to keep metrics available instantly, jsanda wants to write them asynchronously. Maybe we should instead just stop writing there? >>>> >>>> Why? We do the same thing in tenants also at this time, we don't write there if someone writes a metric to a new tenant. We fetch the partition keys from metrics_idx table. 
Now, the same ideology could be applied to the metrics_idx writing, read the partition keys from data. There's a small performance penalty, but the main thing is that we don't really need that information often - in most use cases never. >>>> >>>> If we want to search something with for example tags, we search it with tags - that metricId has been manually added to the metrics_idx table. No need to know if there's metrics which were not initialized. This should be the preferred way of doing things in any case - use tags instead of pushing metadata to the metricId. >>>> >>>> If we need to find out if id exists, fetching that from the PK (PartitionKey) index is fast. The only place where we could slow down is if there's lots of tenants with lots of metricIds each and we want to fetch all the metricIds of a single tenant. In that case the fetching of definitions could slow down. How often do users fetch all the tenant metricIds without any filtering? And how performance critical is this sort of behavior? And what use case does list of ids serve (without any information associated to them) ? >>>> >>>> If you need to fetch datapoints from a known metricId, there's no need for metrics_idx table writing or reading. So this index writing only applies to listing metrics. >>>> >>>> - Micke >>>> _______________________________________________ >>>> hawkular-dev mailing list >>>> hawkular-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>> >>>> >>>> _______________________________________________ >>>> hawkular-dev mailing list >>>> hawkular-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>> _______________________________________________ >>>> hawkular-dev mailing list >>>> hawkular-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>> >>> >> > -- Thomas Segismont JBoss ON Engineering Team From miburman at redhat.com Mon Jul 4 12:23:11 2016 From: miburman at redhat.com (Michael Burman) Date: Mon, 4 Jul 2016 12:23:11 -0400 (EDT) Subject: [Hawkular-dev] Metrics and use of metrics_idx In-Reply-To: <71085524-bc4a-5013-55c9-c3fa9946205e@redhat.com> References: <2031761900.4108094.1467293089854.JavaMail.zimbra@redhat.com> <5432f1e7-b52f-b218-b612-2e153add5dd7@redhat.com> <1299492960.4738361.1467626360299.JavaMail.zimbra@redhat.com> <8cbf3aef-f737-aef2-5569-0a48ab7aaf16@redhat.com> <418144617.4766560.1467630660781.JavaMail.zimbra@redhat.com> <75b0fa33-2b72-0c8b-b701-78e1a4b8384e@redhat.com> <427305826.4802006.1467640481529.JavaMail.zimbra@redhat.com> <71085524-bc4a-5013-55c9-c3fa9946205e@redhat.com> Message-ID: <392979988.4845404.1467649391042.JavaMail.zimbra@redhat.com> Hi, The basic problem with supporting anything with metricId is the fact that we don't have data storage that is designed for the metricId matching. We have exact match or no match, nothing in between. We really should avoid supporting features we can't support. Or then we'll need inverted index for full text search for all the metricIds. Even with it, it's like transferring metadata in a filename. That has never been a good idea. - Micke ----- Original Message ----- From: "Thomas Segismont" To: hawkular-dev at lists.jboss.org Sent: Monday, July 4, 2016 5:29:23 PM Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx I don't agree with the philosophy. I err on the side of making it possible to use a server side filter and explain in the documentation why tagging your metrics is a better option. Le 04/07/2016 ? 
15:54, Michael Burman a ?crit : > Hi, > > Sure, but we should discourage users to use metricIds for anything. Best approach would be to randomize them and force users to use tagging to find their metrics. Otherwise what we'll get is silly integrations where someone has "hostname.wildfly.metric.name" and then they want to search all the "metric.name" by doing idFilter="*.metric\.name" and complain "your metrics db is slow!". What they should do instead is always "/raw/query?tags=hostname=X,app=wildfly,metric=metric.name" and so on. > > - Micke > > ----- Original Message ----- > From: "Thomas Segismont" > To: hawkular-dev at lists.jboss.org > Sent: Monday, July 4, 2016 4:19:30 PM > Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx > > > > Le 04/07/2016 ? 13:11, Michael Burman a ?crit : >> Hi, >> >> The id filter does not work without tags filter because we can't do the id filtering on Cassandra side. It was done for performance reasons, as otherwise you have to fetch all the available metrics to the HWKMETRICS and then do the client side filtering. > > I understand we can't filter on the database. But then wouldn't it be > better to filter on the server in order to save that to the client at > least? I mean, if you need to get metrics by name, as a user you will > have to load everything anyway. Couldn't we save that to the user? > >> >> Fulltext indexing could certainly be an interesting choice for many of our use-cases to investigate at least. I forgot the whole thing, we should look into it before making any greater changes. Mm.. >> >> - Micke >> >> ----- Original Message ----- >> From: "Thomas Segismont" >> To: hawkular-dev at lists.jboss.org >> Sent: Monday, July 4, 2016 1:27:41 PM >> Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx >> >> >> >> Le 04/07/2016 ? 11:59, Michael Burman a ?crit : >>> Hi, >>> >>> If you consider a use case to sync with Grafana, then sending a million metricIds to a select list is probably a bad use-case. If it's on the other hand syncing all the metrics daily to some external system, it would not matter if it takes minutes to do the listing (as it probably will take a while to transfer everything in JSON). And both of those use-cases should use tags filtering to find something and not list all the metrics blindfolded. >> >> There is no bad use case, only bad solutions :) Joke aside, it is true >> that the current solution for autocomplete in Grafana is far from >> perfect. It does not query all metrics of a tenant, but all metrics of a >> same type for a tenant, and then does the filtering on the client side. >> For some reason the Metrics API does not allow the name filter if no tag >> filter is set. Should I open a JIRA? >> >>> >>> Currently to list 50k metrics it takes 200ms on our current metrics_idx (to localhost) and 400ms with the other methology. For millions even the current method takes a long time already without filtering. A billion metricIds is no problem if you really want to find them. But why would you list a billion metricIds to choose from instead of actually finding them with something relevant (and in that case there's no slowdown to current) ? >>> >>> Metric name autocomplete can't use non-filtered processing in our current case either as we can't check for partial keys from Cassandra (Cassandra does not support prefix querying of keys). >> >> Could the new features fulltext index capabilities in C* 3.x help here? 
>> >>> >>> - Micke >>> >>> ----- Original Message ----- >>> From: "Thomas Segismont" >>> To: hawkular-dev at lists.jboss.org >>> Sent: Monday, July 4, 2016 12:44:07 PM >>> Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx >>> >>> Hi, >>> >>> First a note about number of metrics per tenant. A million metrics per >>> tenant would be easy to reach IMO. Let's take a simple example: a >>> machine running an Wildfly/EAP server. Combine OS (CPU, memory, network, >>> disk) with container (Undertow, Datasource, ActiveMQ Artemis, EJB, JPA) >>> metrics and you get close to a thousand metrics. Then multiply by a >>> thousand machines and you reach the million. And of course there are >>> users with more machines, and more complex setups (multiple Wildly/EAP >>> servers with hundreds of apps deployed). >>> Keep in mind that one of the promises of Metrics was the ability to >>> store huge number of metrics, instead of disabling metric collection. >>> >>> That being said, do you have absolute numbers about the response time >>> when querying for all metrics of a tenant? Twice as slower may not be >>> that bad if we're going from 10ms down to 20ms :) Especially considering >>> the use cases for such queries: daily sync with external system for >>> example, or metric name autocomplete in Grafana/ManageIQ. >>> >>> Regards, >>> >>> Le 01/07/2016 ? 22:56, Michael Burman a ?crit : >>>> Hi, >>>> >>>> Well, I've done some testing of the performance as well as a implementation that mimics the current behavior but does the reading from partition key. The performance is just fine with quite large amount of metricIds (at 50k the difference is ~2x, so in my dev machine I can still read about 3 times per second all the possible tenants and current behavior would be about 6 times per second), I started to see performance "issues" when the amount of unique metricIds reached millions (although I'm not sure if it's a performance issue in that case either - given that such query is probably not in some performance critical query). The amount of datapoints did not make a difference (I just tested with 8668 metricIds with a total of >86M stored datapoints, thus confirming my expectations also). >>>> >>>> So unless there's a known use case of using millions of metricIds with stored datapoints (IIRC, once all the rows for a partition key are deleted, the partition key disappears.. John can confirm) I think we should move forward with this. Although we could make a enhancement to fetching metricIds by allowing to query only metrics_idx table (something like fetch only the registered metrics) - in that case someone who registers all the metrics could fetch them quickly also. >>>> >>>> - Micke >>>> >>>> ----- Original Message ----- >>>> From: "Stefan Negrea" >>>> To: "Discussions around Hawkular development" >>>> Sent: Friday, July 1, 2016 9:14:43 PM >>>> Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx >>>> >>>> Hello, >>>> >>>> For Hawkular Metrics the speed of writes is always more important than the speed of reads (due to a variety of reasons). But that only works up to a certain extent, in the sense that we cannot totally neglect the read part. Let me see if I can narrow the impact of your proposal ... >>>> >>>> You made a very good point that the performance of reads is not affected if we discard metrics_idx for endpoints that require the metrics id. We only need to consider the impact of querying for metrics and tenants since both use metrics_idx. 
Querying the list of tenants is not very important because it is an admin feature that we will soon secure via the newly proposed "admin framework". So only querying for metrics definitions will be truly affected by removing the metrics_idx completely. But only a portion of those requests are affected because tags queries use the tags index. >>>> >>>> To conclude, metrics_idx is only important in cases where the user wants a full list of all metrics ever stored for a tenant id. If we can profile the performance impact on a larger set of metric definitions and we find the time difference without the metrics_idx is negligible then we should go forward with your proposal. >>>> >>>> >>>> John Sanda, do you foresee using metrics_idx in the context of metric aggregation and the job scheduling framework that you've been working on? >>>> >>>> Micke, what do you think are the next steps to move forward with your proposal? >>>> >>>> >>>> Thank you, >>>> Stefan Negrea >>>> >>>> >>>> On Thu, Jun 30, 2016 at 8:46 AM, Michael Burman < miburman at redhat.com > wrote: >>>> >>>> >>>> Hi, >>>> >>>> This sparked my interest after the discussions in PR #523 (adding cache to avoid metrics_idx writes). Stefan commented that he still wants to write to this table to keep metrics available instantly, jsanda wants to write them asynchronously. Maybe we should instead just stop writing there? >>>> >>>> Why? We do the same thing in tenants also at this time, we don't write there if someone writes a metric to a new tenant. We fetch the partition keys from metrics_idx table. Now, the same ideology could be applied to the metrics_idx writing, read the partition keys from data. There's a small performance penalty, but the main thing is that we don't really need that information often - in most use cases never. >>>> >>>> If we want to search something with for example tags, we search it with tags - that metricId has been manually added to the metrics_idx table. No need to know if there's metrics which were not initialized. This should be the preferred way of doing things in any case - use tags instead of pushing metadata to the metricId. >>>> >>>> If we need to find out if id exists, fetching that from the PK (PartitionKey) index is fast. The only place where we could slow down is if there's lots of tenants with lots of metricIds each and we want to fetch all the metricIds of a single tenant. In that case the fetching of definitions could slow down. How often do users fetch all the tenant metricIds without any filtering? And how performance critical is this sort of behavior? And what use case does list of ids serve (without any information associated to them) ? >>>> >>>> If you need to fetch datapoints from a known metricId, there's no need for metrics_idx table writing or reading. So this index writing only applies to listing metrics. 
>>>> >>>> - Micke >>>> _______________________________________________ >>>> hawkular-dev mailing list >>>> hawkular-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>> >>>> >>>> _______________________________________________ >>>> hawkular-dev mailing list >>>> hawkular-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>> _______________________________________________ >>>> hawkular-dev mailing list >>>> hawkular-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>> >>> >> > -- Thomas Segismont JBoss ON Engineering Team _______________________________________________ hawkular-dev mailing list hawkular-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/hawkular-dev From auszon3 at gmail.com Mon Jul 4 19:53:20 2016 From: auszon3 at gmail.com (Austin Kuo) Date: Mon, 04 Jul 2016 23:53:20 +0000 Subject: [Hawkular-dev] Sync with Inventory In-Reply-To: <1652727.ThqZrOltYD@rpi.lan> References: <37609601.21BcKnHFje@rpi.lan> <8a239b8f-8f4f-e2c4-9dab-791110dd22a1@redhat.com> <1652727.ThqZrOltYD@rpi.lan> Message-ID: Why is "isParentOf" more suitable than "contains" for Vert.x, as Thomas said? If it is "contains", that also makes sense to me, since if "MyApp" is gone, the feeds it contains should disappear as well. Austin Lukas Krejci wrote on 29 June 2016 at 21:20: > Btw. I've slightly updated the inventory organization description on the > hawkular site (http://www.hawkular.org/docs/components/inventory/ > index.html#inventory-organization > ). > I hope it explains the structure and > intent of the entities in inventory in a slightly more comprehensible > manner. > > My answers are inline below... > > On Wednesday 29 June 2016 14:39:27 CEST Thomas Segismont wrote: > > Thank you very much for the thorough reply Lukas. A few > > questions/comments inline. > > > > On 23/06/2016 at 15:59, Lukas Krejci wrote: > > > On Thursday, June 23, 2016 10:27:12 AM Thomas Segismont wrote: > > >> Hey Lukas, > > >> > > >> Thank you for pointing us to the sync endpoint. Austin will look into > > >> this and will certainly come back with more questions. > > >> > > >> With respect to the user creating resources question, the difference > > >> between Vert.x and Wildfly is that the user creates resources > > >> programmatically. So in version 1 of the application, there might be two > > >> HTTP servers as well as 7 event bus handlers, but only 1 http server in > > >> version 2. And a named worker pool in version 3. > > >> > > >> In the end, I believe it doesn't matter if it's the container which creates > > >> resources or if it's the user himself. Does it? > > > > > > It does not really (inventory has just a single API, so it does not really > > > know who is talking to it - if a feed or if a user) - but resources inside > > > and outside feeds have slightly different semantics. > > > > > > Right now the logic is this: > > > > > > Feeds are "agents" that don't care about anything else but their own > > > little > > > "world". That's why they can create their own resource types, metric types > > > and they also declare resources and metrics of those types. A feed does not > > > need to look "outside" of its own data and is in full charge of it. > > > > Does that mean that creating a feed is the only way to create > > resource/metric types? > > No, you can also create resource types and metric types directly under the > tenant.
> > > I suppose the benefit of creating resource types is that then you can > > search for different resources of the same type easily. > > > > And if feeds create resource types, how do you know that resource types > > created by the Hawkular Agent feed running on server A are the same as > > those created by another agent running on server B? > > > > Inventory automatically computes "identity hashes" of resource types and > metric types - if 2 resource types in 2 feeds have the same ID and exactly > the > same configuration definitions, they are considered identical. If you know > 1 > resource type, you can find all the identical ones using the following REST > API (since 0.17.0.Final, the format of the URLs is thoroughly explained > here: > http://www.hawkular.org/docs/rest/rest-inventory.html#_api_endpoints): > > /hawkular/inventory/traversal/f;feedId/rt;resourceTypeId/identical > > If for example some resource types should be known up-front and "shared" > across all feeds, some kind of "gluecode" could create "global" resource > types > under the tenant, that would have the same id and structure as the types > that > the feeds declare. If then you want to for example find all resources of > given > type, you can: > > /hawkular/inventory/traversal/rt;myType/identical/rl;defines/type=resource > > I.e. for all types identical to the global one, find all resources defined > by > those types. > > > > Hence the /sync endpoint applies to a feed nicely - since it is in > charge, > > > it merely declares what is the view it has currently of the "world" it > > > sees and inventory will make sure it has the same picture - under that > > > feed. > > > > > > Now if you have an application that spans multiple vms/machines and is > > > composed of multiple processes, there is no such clear distinction of > > > "ownership". > > > > Good point, Vert.x applications are often distributed and communicating > > over the EventBus. > > > > > While indeed a "real" user can just act like a feed, the envisioned > > > workflow is that the user operates directly in environments and at the > > > top level. I.e. a user assigns feeds to environments (i.e. this feed > > > reports on my server in staging environment, etc) and the user creates > > > "logical" resources in the environment (i.e. "My App" resource in > staging > > > env is composed of a load balancer managed by this feed, mongodb > managed > > > by another feed there and clustered wflys there, there and there). > > > > > > To model this, inventory supports 2 kinds of tree hierarchies - 1 > created > > > using the "contains" relationship, which expresses existential > ownership - > > > i.e. a feed contains its resources and if a feed disappears, so do the > > > resources, because no one else can report on them. The entities bound > by > > > the > > How does a feed "disappear"? That would be by deleting it through the > > REST API, correct? Something the ManageIQ provider would do through the > > Ruby client? > > > > yes > > > > contains relationship form a tree - no loops or diamonds in it (this is > > > enforced by inventory). But there can also be a hierarchy created > using an > > > "isParentOf" relationship (which represents "logical" ownership). > > > Resources > > > bound by "isParentOf" can form an acyclic graph - i.e. 1 resource can > have > > > multiple parents as well as many children (isParentOf is applicable > only > > > to > > > resources, not other types of entities). 
> > > > > > The hierarchies formed by "contains" and "isParentOf" are independent. > So > > > you can create a resource "My App" in the staging environment and > declare > > > it a parent (using "isParentOf") of the resources declared by feeds > that > > > manage the machines where the constituent servers live. > > > > Interesting, that may be the way to model a Vert.x app deployed on two > > machines. Each process would have its own feed reporting discovered > > resources (http servers, event bus handlers, ... etc), and a logical app > > resource as parent. > > > > Exactly. > > > > That is the envisaged workflow for "apps". Now the downside to that is > > > that > > > (currently) there is no "sync" for that. The reason is that the > > > application > > > really is a logical concept and the underlying servers can be > repurposed > > > to > > > serve different applications (so if app stops using it, it shouldn't > > > really > > > disappear from inventory, as is the case with /sync - because if a feed > > > doesn't "see" a resource, then it really is just gone, because the > feed is > > > solely responsible for reporting on it). > > > > What happens to the resources exactly? Are they marked as gone or simply > > deleted? > > Right now they are deleted. That is of course not optimal and versioning > is in > the pipeline right after the port of inventory to Tinkerpop3. Basically all > the entities and relationships will get "from" and "to" timestamps. > Implicitly, you'd look at the "present", but you'd be able to look at how > things looked in the past by specifying a different "now" in your query. > > > Do you know how dependent services are updated? For example, when a JMS > > queue is gone, are alert definitions on queue depth removed as well? How > > does that happen? > > > > Inventory sends events on the bus about every C/U/D of every entity or > relationship, so other components can react on that. > > > > We can think about how to somehow help clients with "App sync" but I'm > not > > > sure if having a feed for vertx is the right thing to do. On the other > > > hand I very well may not be seeing some obvious problems of the above > or > > > parallels that make the 2 approaches really the same because the above > > > model is just ingrained in my brain after so many hours thinking about > it > > > ;) > > > > > >> As for the feed question, the Vert.x feed will be the Metrics SPI > > >> implementation (vertx-hawkular-metrics project). Again I guess it's > not > > >> much different than the Hawkular Agent. > > > > > > A feed would only be appropriate if vertx app never reported on > something > > > that would also be reported by other agents. I.e. if a part of a vertx > > > application is also reported on by a wfly agent, because that part is > > > running in a wfly server managed by us, then that will not work - 1 > > > resource cannot be "contained" in 2 different feeds (not just API wise, > > > but logically, too). > > I'm not too worried about this use case. First the vast majority of > > Vert.x applications I know about are not embedded. Secondly the Vert.x > > feed would not report resources already reported by the Hawkular Agent. > > > > >> Maybe the wording around user creating resources was confusing? Did > you > > >> thought he would do so from application code? In this case, the answer > > >> is no. > > > > > > Yeah, we should probably get together and discuss what your plans are > to > > > get on the same page with everything. 
> > > > I believe that presenting to you (and to whoever is interested) the > > conclusions of investigations would be beneficial indeed. > > > > +1 > > > >> Regards, > > >> Thomas > > >> > > >> Le 23/06/2016 ? 10:01, Austin Kuo a ?crit : > > >>> Yes, I?m gonna build the inventory for vertx applications. > > >>> So I have to create a feed for it. > > >>> > > >>> Thanks! > > >>> > > >>> On Tue, Jun 21, 2016 at 7:55 PM Lukas Krejci > >>> > > >>> > wrote: > > >>> Hi Austin, > > >>> > > >>> Inventory offers a /hawkular/inventory/sync endpoint that is > used to > > >>> synchronize the "world view" of feeds (feed being something that > > >>> pushes data > > >>> into inventory). > > >>> > > >>> You said though that a "user creates" the resources, so I am not > > >>> sure if /sync > > >>> would be applicable to your scenario. Would you please elaborate > > >>> more on where > > >>> in the inventory hierarchy you create your resources and how? > I.e. > > >>> are you > > >>> using some sort of feed akin to Hawkular's Wildfly Agent or are > you > > >>> just > > >>> creating your resources "manually" under environments? > > >>> > > >>> On Tuesday, June 21, 2016 02:20:33 AM Austin Kuo wrote: > > >>> > Hi all, > > >>> > > > >>> > I?m currently investigating how to sync with inventory server. > > >>> > Here?s the example scenario: > > >>> > Consider the following problem. A user creates version 1 of the > > >>> > > >>> app with > > >>> > > >>> > two http servers, one listening on port 8080, the other on port > > >>> > > >>> 8181. In > > >>> > > >>> > version 2, the http server listening on port 8181 is no longer > > >>> > needed. > > >>> > When the old version is stopped and the new version started, > there > > >>> > > >>> will be > > >>> > > >>> > just one http server listening. The application is not aware of > > >>> > the > > >>> > previous state. What should we do so that the second http > server > > >>> > > >>> is removed > > >>> > > >>> > from Inventory? > > >>> > > > >>> > Thanks in advance. > > >>> > > >>> -- > > >>> Lukas Krejci > > >>> > > >>> _______________________________________________ > > >>> hawkular-dev mailing list > > >>> hawkular-dev at lists.jboss.org hawkular-dev at lists.jboss.org> > > >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev > > >>> > > >>> _______________________________________________ > > >>> hawkular-dev mailing list > > >>> hawkular-dev at lists.jboss.org > > >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > -- > Lukas Krejci > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160704/87a0e2de/attachment-0001.html From theute at redhat.com Tue Jul 5 02:17:58 2016 From: theute at redhat.com (Thomas Heute) Date: Tue, 5 Jul 2016 08:17:58 +0200 Subject: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env In-Reply-To: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> References: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> Message-ID: cattle vs pet monitoring is something I struggle with TBH... It doesn't make much sense to keep all data about all elements of the cattle as you are less interested by the performance of 1 member but more about the overall performance. With auto scaling, new containers are created/removed. 
You add one, you remove it, you re-add one, there is no continuation unlike when you restart a server... A configuration change is also not a continuation anymore, it's a whole new image, whole new container (in good practice at least) IMO we should keep thinking about those, and think more in terms of collections for the cases when Middleware is running in (immutable) containers... Thomas On Sun, Jul 3, 2016 at 11:44 AM, Heiko W.Rupp wrote: > Hey, > > [ CC to Federico as he may have some ideas from the Kube/OS side ] > > Our QE has opened an interesting case: > > https://github.com/ManageIQ/manageiq/issues/9556 > > where I first thought WTF with that title. > > But then when reading further it got more interesting. > Basically what happens is that especially in environments like > Kube/Openshift, > individual containers/appservers are cattle and not pets: one goes down, > gets > killed, you start a new one somewhere else. > > Now the interesting questions for us are (first purely on the Hawkular > side): > - how can we detect that such a container is down and will never come up > with that id again (-> we need to clean it up in inventory) > - can we learn that for a killed container A, a freshly started > container A' is > the replacement to e.g. continue with performance monitoring of the app > or to re-associate relationships with other items in inventory? > (Is that even something we want - again that is cattle and not pets > anymore) > - Could eap+embedded agent perhaps store some token in Kube which > is then passed when A' is started so that A' knows it is the new A (e.g. > feed id). > - I guess that would not make much sense anyway, as for an app with > three app servers all would get that same token. > > Perhaps we should ignore that use case for now completely and tackle > that differently in the sense that we don't care about 'real' app > servers, > but rather introduce the concept of a 'virtual' server where we only > know > via Kube that it exists and how many of them for a certain application > (which is identified via some tag in Kube). Those virtual servers > deliver > data, but we don't really try to do anything with them 'personally', > but indirectly via Kube interactions (i.e. map the incoming data to the > app and not to an individual server). We would also not store > the individual server in inventory, so there is no need to clean it > up (again, no pet but cattle). > In fact we could just use the feed-id as kube token (or vice versa). > We still need a way to detect that one of those cattle app servers is on Kube > and possibly either disable or re-route some of the lifecycle events > onto Kubernetes (start in any case, stop probably does not matter > if the container dies because the appserver inside stops or if kube > just kills it). > > > -- > Reg. Adresse: Red Hat GmbH, Technopark II, Haus C, > Werner-von-Siemens-Ring 14, D-85630 Grasbrunn > Handelsregister: Amtsgericht München HRB 153243 > Geschäftsführer: Charles Cachera, Michael Cunningham, Michael O'Neill, > Eric Shander > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160705/9c536a5c/attachment.html From theute at redhat.com Tue Jul 5 04:19:13 2016 From: theute at redhat.com (Thomas Heute) Date: Tue, 5 Jul 2016 10:19:13 +0200 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: References: <1265553183.102271604.1465909592402.JavaMail.zimbra@redhat.com> <43292415.102368387.1465924138218.JavaMail.zimbra@redhat.com> Message-ID: On Thu, Jun 16, 2016 at 6:11 PM, Stefan Negrea wrote: > I really like this last revision because the content is sectioned along > the way we deliver to users. So when a user navigates the website, the > content is always related to what they can directly download and use and > not mixed together. > > I have 3 small proposals: > 1) Hawkular and Overview can be combined into one Hawkular Overview or > Hawkular (which has the overview of the projects). > This is more an issue with the representation in the mindmap I think. Hawkular is really the homepage: http://www.hawkular.org/index.html and I think we can remove the link since the link on the logo does the same thing and it's quite a UXD standard. Overview is a more detailed page: http://www.hawkular.org/docs/overview.html a user should find it's way among the various projects. > 2) The Grafana plugin should be moved under Metrics because is for Metrics > and only Metrics. > The Grafana plugins works with Metrics and Services > 3) Hawkular Server should be renamed Hawkular Services because that the > official project name. > OTOH I would not want a totally separate structure for Hawkular Services *and* Hawkular Community, as they are very much the same except for some installation process (have to install C* separately or not) and maybe for additional parts in Community. The idea was to combined both into a single concept of server. Thomas > > > Thank you, > Stefan Negrea > > Software Engineer > > On Wed, Jun 15, 2016 at 1:18 PM, Thomas Heute wrote: > >> Sorry, I meant to sent the PNG file... here it is >> >> On Wed, Jun 15, 2016 at 8:08 PM, Thomas Heute wrote: >> >>> Based on that suggestion, here is another proposal. >>> >>> Rectangle means a page >>> Underline is more likely a section on a page >>> Green arrows mean links (To Travis, to gitbook.io...) >>> >>> Let me know what you think of that updated section >>> >>> Thomas >>> >>> On Tue, Jun 14, 2016 at 10:32 PM, Stefan Negrea >>> wrote: >>> >>>> I do not see the idea proposed yet, but why not structure the website >>>> around major projects? We have Hawkular community, Hawkular Services, >>>> Hawkular Metrics, and APM. Projects like Inventory or the clients would >>>> fall under Hawkular Services umbrella. So rather than designing a generic >>>> structure with everything make individual sub-sites and then apply the >>>> structure you proposed. >>>> >>>> The current website was designed when the direction of the community >>>> was different so a re-org along the previous structure is not sufficient. >>>> >>>> Thank you, >>>> Stefan Negrea >>>> >>>> On Tue, Jun 14, 2016 at 12:08 PM, Michael Burman >>>> wrote: >>>> >>>>> Currently Heapster stores in internal memory few minutes of data and >>>>> allows queries that request this data (through its REST-interface). The >>>>> consume part will just request the data from the HWKMETRICS instead. 
>>>>> >>>>> - Micke >>>>> >>>>> ----- Original Message ----- >>>>> From: "Thomas Heute" >>>>> To: "Discussions around Hawkular development" < >>>>> hawkular-dev at lists.jboss.org> >>>>> Sent: Tuesday, June 14, 2016 4:09:40 PM >>>>> Subject: Re: [Hawkular-dev] Hawkular.org >>>>> >>>>> >>>>> >>>>> On Tue, Jun 14, 2016 at 3:06 PM, Michael Burman < miburman at redhat.com >>>>> > wrote: >>>>> >>>>> >>>>> Consumers is terrible word for any client, as they both consume as >>>>> well as produce the data. >>>>> >>>>> Well that was actually reflecting the current state, we have "things" >>>>> that feed data to the server and "things" that consume data from the >>>>> server. The client libraries provide an API to feed and consume. >>>>> >>>>> >>>>> For example for Heapster, we currently produce the data, however at >>>>> the moment I'm creating a change that will consume the data from HWKMETRICS. >>>>> >>>>> Why does it consume data now ? >>>>> >>>>> Thomas >>>>> >>>>> >>>>> >>>>> >>>>> Integration / clients is far more used and known word, while >>>>> consumer/producer is something more specific and implies a design pattern. >>>>> >>>>> - Micke >>>>> >>>>> _______________________________________________ >>>>> hawkular-dev mailing list >>>>> hawkular-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>>> >>>> >>>> >>>> _______________________________________________ >>>> hawkular-dev mailing list >>>> hawkular-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>> >>>> >>> >> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> >> > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160705/84771518/attachment.html From hrupp at redhat.com Tue Jul 5 08:56:46 2016 From: hrupp at redhat.com (Heiko W.Rupp) Date: Tue, 05 Jul 2016 14:56:46 +0200 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: References: <1265553183.102271604.1465909592402.JavaMail.zimbra@redhat.com> <43292415.102368387.1465924138218.JavaMail.zimbra@redhat.com> <1838408380.58534009.1466061626977.JavaMail.zimbra@redhat.com> Message-ID: <2A5E2AA6-EA7C-4EAA-9F41-8667BCF6BC4D@redhat.com> On 16 Jun 2016, at 15:33, Stefan Negrea wrote: > On Thu, Jun 16, 2016 at 2:20 AM, Gary Brown wrote: > >> I think the structure is ok, but prefer having Downloads and Documentation >> at the top level. But instead of the previous structure, still organise >> based on package, so: >> > > There is no point having top level Downloads and Documentation if > everything is sectioned the way Thomas proposed in the last email. There And then as user having to navigate separate trees to get e.g. to the downloads section of hawkular-services and the Go client is also confusing. Personally I'd like to see one top level of downloads or even downloads+docs | Project | Docs | Download latest |-----------------|-----------|---------------| | hawkular-services | docs/hs/.. | v0.0.5 | | go client | docs/go/.. 
| v 0.0.3 | With docs + downloads being hyperlinked Individual subtrees could even link to this table for their downloads (but directly link to the target docs) From hrupp at redhat.com Tue Jul 5 08:58:11 2016 From: hrupp at redhat.com (Heiko W.Rupp) Date: Tue, 05 Jul 2016 14:58:11 +0200 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: References: <1265553183.102271604.1465909592402.JavaMail.zimbra@redhat.com> <43292415.102368387.1465924138218.JavaMail.zimbra@redhat.com> <1838408380.58534009.1466061626977.JavaMail.zimbra@redhat.com> Message-ID: <601E3663-1926-462D-A75D-3B9CD6C9494B@redhat.com> On 16 Jun 2016, at 15:33, Stefan Negrea wrote: > content because it will be a large section with content from unrelated > projects mixed together. They are not unrelated. APM may be a bit separate, but in general they all server the Hawkular idea and e.g. Metrics is a part of H-services and not something that happens to have the same name by accident. From hrupp at redhat.com Tue Jul 5 08:59:15 2016 From: hrupp at redhat.com (Heiko W.Rupp) Date: Tue, 05 Jul 2016 14:59:15 +0200 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: References: <1265553183.102271604.1465909592402.JavaMail.zimbra@redhat.com> <43292415.102368387.1465924138218.JavaMail.zimbra@redhat.com> Message-ID: <0D5FC92F-40C4-493F-A57B-206EA804DD58@redhat.com> On 16 Jun 2016, at 18:11, Stefan Negrea wrote: > 2) The Grafana plugin should be moved under Metrics because is for Metrics > and only Metrics. If this is true - can we make it work with H-services as well? > 3) Hawkular Server should be renamed Hawkular Services because that the > official project name. Or the other way around :-) From hrupp at redhat.com Tue Jul 5 08:59:57 2016 From: hrupp at redhat.com (Heiko W.Rupp) Date: Tue, 05 Jul 2016 14:59:57 +0200 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: References: <1265553183.102271604.1465909592402.JavaMail.zimbra@redhat.com> <43292415.102368387.1465924138218.JavaMail.zimbra@redhat.com> Message-ID: On 5 Jul 2016, at 10:19, Thomas Heute wrote: > OTOH I would not want a totally separate structure for Hawkular Services > *and* Hawkular Community, as they are very much the same except for some > installation process (have to install C* separately or not) and maybe for > additional parts in Community. The idea was to combined both into a single > concept of server. +1 From tsegismo at redhat.com Tue Jul 5 09:03:22 2016 From: tsegismo at redhat.com (Thomas Segismont) Date: Tue, 5 Jul 2016 15:03:22 +0200 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: <0D5FC92F-40C4-493F-A57B-206EA804DD58@redhat.com> References: <1265553183.102271604.1465909592402.JavaMail.zimbra@redhat.com> <43292415.102368387.1465924138218.JavaMail.zimbra@redhat.com> <0D5FC92F-40C4-493F-A57B-206EA804DD58@redhat.com> Message-ID: <587e0bc6-44d9-3f14-e872-5a58b2058223@redhat.com> Le 05/07/2016 ? 14:59, Heiko W.Rupp a ?crit : >> 2) The Grafana plugin should be moved under Metrics because is for Metrics >> > and only Metrics. > If this is true - can we make it work with H-services as well? > The Grafana plugin works with all active flavors of Metrics: standalone, Openshift-Metrics and Hawkular-Services. I'm not sure what Stefan meant. 
-- Thomas Segismont JBoss ON Engineering Team From theute at redhat.com Tue Jul 5 09:43:16 2016 From: theute at redhat.com (Thomas Heute) Date: Tue, 5 Jul 2016 15:43:16 +0200 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: <2A5E2AA6-EA7C-4EAA-9F41-8667BCF6BC4D@redhat.com> References: <1265553183.102271604.1465909592402.JavaMail.zimbra@redhat.com> <43292415.102368387.1465924138218.JavaMail.zimbra@redhat.com> <1838408380.58534009.1466061626977.JavaMail.zimbra@redhat.com> <2A5E2AA6-EA7C-4EAA-9F41-8667BCF6BC4D@redhat.com> Message-ID: On Tue, Jul 5, 2016 at 2:56 PM, Heiko W.Rupp wrote: > On 16 Jun 2016, at 15:33, Stefan Negrea wrote: > > > On Thu, Jun 16, 2016 at 2:20 AM, Gary Brown wrote: > > > >> I think the structure is ok, but prefer having Downloads and > Documentation > >> at the top level. But instead of the previous structure, still organise > >> based on package, so: > >> > > > > There is no point having top level Downloads and Documentation if > > everything is sectioned the way Thomas proposed in the last email. There > > And then as user having to navigate separate trees to get e.g. to the > downloads section of hawkular-services and the Go client is also confusing. > Personally I'd like to see one top level of downloads or even > downloads+docs > > | Project | Docs | Download latest > |-----------------|-----------|---------------| > | hawkular-services | docs/hs/.. | v0.0.5 | > | go client | docs/go/.. | v 0.0.3 | > > With docs + downloads being hyperlinked > We may want to add links from Hawkular-Services download page to clients that are relevant (same for Metrics and APM). And indeed maybe even a big fat master page with doanload+docs links (In the overview page) This on top of the hawkular.org2.png structure WDYT ? > > Individual subtrees could even link to this table for their > downloads (but directly link to the target docs) > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160705/5539cfc1/attachment.html From snegrea at redhat.com Tue Jul 5 10:05:14 2016 From: snegrea at redhat.com (Stefan Negrea) Date: Tue, 5 Jul 2016 09:05:14 -0500 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: <587e0bc6-44d9-3f14-e872-5a58b2058223@redhat.com> References: <1265553183.102271604.1465909592402.JavaMail.zimbra@redhat.com> <43292415.102368387.1465924138218.JavaMail.zimbra@redhat.com> <0D5FC92F-40C4-493F-A57B-206EA804DD58@redhat.com> <587e0bc6-44d9-3f14-e872-5a58b2058223@redhat.com> Message-ID: On Tue, Jul 5, 2016 at 8:03 AM, Thomas Segismont wrote: > > > Le 05/07/2016 ? 14:59, Heiko W.Rupp a ?crit : > >> 2) The Grafana plugin should be moved under Metrics because is for > Metrics > >> > and only Metrics. > > If this is true - can we make it work with H-services as well? > > > > The Grafana plugin works with all active flavors of Metrics: standalone, > Openshift-Metrics and Hawkular-Services. > > I'm not sure what Stefan meant. > The Grafana plugins works with Metrics deployed on all distributions however, the plugin itself can only be used with the Metrics project, there are no projects (such as Alerts, or Inventory) that will ever integrate with it. That is why I think it should be under the Metrics project and not in another place. The integration itself is very specific to just Metrics, not the entire Hawkular Services. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160705/fe5da1c2/attachment.html From snegrea at redhat.com Tue Jul 5 10:10:01 2016 From: snegrea at redhat.com (Stefan Negrea) Date: Tue, 5 Jul 2016 09:10:01 -0500 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: References: <1265553183.102271604.1465909592402.JavaMail.zimbra@redhat.com> <43292415.102368387.1465924138218.JavaMail.zimbra@redhat.com> Message-ID: On Tue, Jul 5, 2016 at 3:19 AM, Thomas Heute wrote: > > > On Thu, Jun 16, 2016 at 6:11 PM, Stefan Negrea wrote: > >> I really like this last revision because the content is sectioned along >> the way we deliver to users. So when a user navigates the website, the >> content is always related to what they can directly download and use and >> not mixed together. >> >> I have 3 small proposals: >> 1) Hawkular and Overview can be combined into one Hawkular Overview or >> Hawkular (which has the overview of the projects). >> > > This is more an issue with the representation in the mindmap I think. > Hawkular is really the homepage: http://www.hawkular.org/index.html and I > think we can remove the link since the link on the logo does the same thing > and it's quite a UXD standard. > Overview is a more detailed page: > http://www.hawkular.org/docs/overview.html a user should find it's way > among the various projects. > As long as we do not have two top menu entries, one for Hawkular and one for Overview, then the proposal makes sense. > > >> 2) The Grafana plugin should be moved under Metrics because is for >> Metrics and only Metrics. >> > > The Grafana plugins works with Metrics and Services > But it is a Metrics specific integration, no other services will ever integrate with Grafana. > > >> 3) Hawkular Server should be renamed Hawkular Services because that the >> official project name. >> > > OTOH I would not want a totally separate structure for Hawkular Services > *and* Hawkular Community, as they are very much the same except for some > installation process (have to install C* separately or not) and maybe for > additional parts in Community. The idea was to combined both into a single > concept of server. > Agree here, what I suggested is that Hawkular Community is just a subsection in the Hawkular Services; and not two different sections. That will give more visibility to the concept of Hawkular Services rather than promote the unused name of Hawkular Server . > Thomas > > >> >> >> Thank you, >> Stefan Negrea >> >> Software Engineer >> >> On Wed, Jun 15, 2016 at 1:18 PM, Thomas Heute wrote: >> >>> Sorry, I meant to sent the PNG file... here it is >>> >>> On Wed, Jun 15, 2016 at 8:08 PM, Thomas Heute wrote: >>> >>>> Based on that suggestion, here is another proposal. >>>> >>>> Rectangle means a page >>>> Underline is more likely a section on a page >>>> Green arrows mean links (To Travis, to gitbook.io...) >>>> >>>> Let me know what you think of that updated section >>>> >>>> Thomas >>>> >>>> On Tue, Jun 14, 2016 at 10:32 PM, Stefan Negrea >>>> wrote: >>>> >>>>> I do not see the idea proposed yet, but why not structure the website >>>>> around major projects? We have Hawkular community, Hawkular Services, >>>>> Hawkular Metrics, and APM. Projects like Inventory or the clients would >>>>> fall under Hawkular Services umbrella. So rather than designing a generic >>>>> structure with everything make individual sub-sites and then apply the >>>>> structure you proposed. 
>>>>> >>>>> The current website was designed when the direction of the community >>>>> was different so a re-org along the previous structure is not sufficient. >>>>> >>>>> Thank you, >>>>> Stefan Negrea >>>>> >>>>> On Tue, Jun 14, 2016 at 12:08 PM, Michael Burman >>>>> wrote: >>>>> >>>>>> Currently Heapster stores in internal memory few minutes of data and >>>>>> allows queries that request this data (through its REST-interface). The >>>>>> consume part will just request the data from the HWKMETRICS instead. >>>>>> >>>>>> - Micke >>>>>> >>>>>> ----- Original Message ----- >>>>>> From: "Thomas Heute" >>>>>> To: "Discussions around Hawkular development" < >>>>>> hawkular-dev at lists.jboss.org> >>>>>> Sent: Tuesday, June 14, 2016 4:09:40 PM >>>>>> Subject: Re: [Hawkular-dev] Hawkular.org >>>>>> >>>>>> >>>>>> >>>>>> On Tue, Jun 14, 2016 at 3:06 PM, Michael Burman < miburman at redhat.com >>>>>> > wrote: >>>>>> >>>>>> >>>>>> Consumers is terrible word for any client, as they both consume as >>>>>> well as produce the data. >>>>>> >>>>>> Well that was actually reflecting the current state, we have "things" >>>>>> that feed data to the server and "things" that consume data from the >>>>>> server. The client libraries provide an API to feed and consume. >>>>>> >>>>>> >>>>>> For example for Heapster, we currently produce the data, however at >>>>>> the moment I'm creating a change that will consume the data from HWKMETRICS. >>>>>> >>>>>> Why does it consume data now ? >>>>>> >>>>>> Thomas >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> Integration / clients is far more used and known word, while >>>>>> consumer/producer is something more specific and implies a design pattern. >>>>>> >>>>>> - Micke >>>>>> >>>>>> _______________________________________________ >>>>>> hawkular-dev mailing list >>>>>> hawkular-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> hawkular-dev mailing list >>>>> hawkular-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>>> >>>>> >>>> >>> >>> _______________________________________________ >>> hawkular-dev mailing list >>> hawkular-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>> >>> >> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> >> > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160705/8139330e/attachment-0001.html From theute at redhat.com Tue Jul 5 10:13:18 2016 From: theute at redhat.com (Thomas Heute) Date: Tue, 5 Jul 2016 16:13:18 +0200 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: References: <1265553183.102271604.1465909592402.JavaMail.zimbra@redhat.com> <43292415.102368387.1465924138218.JavaMail.zimbra@redhat.com> <0D5FC92F-40C4-493F-A57B-206EA804DD58@redhat.com> <587e0bc6-44d9-3f14-e872-5a58b2058223@redhat.com> Message-ID: On Tue, Jul 5, 2016 at 4:05 PM, Stefan Negrea wrote: > On Tue, Jul 5, 2016 at 8:03 AM, Thomas Segismont > wrote: > >> >> >> Le 05/07/2016 ? 14:59, Heiko W.Rupp a ?crit : >> >> 2) The Grafana plugin should be moved under Metrics because is for >> Metrics >> >> > and only Metrics. 
>> > If this is true - can we make it work with H-services as well? >> > >> >> The Grafana plugin works with all active flavors of Metrics: standalone, >> Openshift-Metrics and Hawkular-Services. >> >> I'm not sure what Stefan meant. >> > > The Grafana plugins works with Metrics deployed on all distributions > however, the plugin itself can only be used with the Metrics project, there > are no projects (such as Alerts, or Inventory) that will ever integrate > with it. That is why I think it should be under the Metrics project and not > in another place. The integration itself is very specific to just Metrics, > not the entire Hawkular Services. > But then it applies to Hawkular services (the most important Hawkular project) and then should really shine there as much as for Metrics. We really need to think of Hawkular Metrics as a core part of Hawkular Services, not as a side project. Same for the documentation, the documentation to use metrics will need to be part of Hawkular Services documentation, not just a mere pointer to Hawkular metrics and let the user deal with the differences. Thomas > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160705/274af8b3/attachment.html From theute at redhat.com Tue Jul 5 10:14:31 2016 From: theute at redhat.com (Thomas Heute) Date: Tue, 5 Jul 2016 16:14:31 +0200 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: References: <1265553183.102271604.1465909592402.JavaMail.zimbra@redhat.com> <43292415.102368387.1465924138218.JavaMail.zimbra@redhat.com> Message-ID: On Tue, Jul 5, 2016 at 4:10 PM, Stefan Negrea wrote: > > On Tue, Jul 5, 2016 at 3:19 AM, Thomas Heute wrote: > >> >> >> On Thu, Jun 16, 2016 at 6:11 PM, Stefan Negrea >> wrote: >> >>> I really like this last revision because the content is sectioned along >>> the way we deliver to users. So when a user navigates the website, the >>> content is always related to what they can directly download and use and >>> not mixed together. >>> >>> I have 3 small proposals: >>> 1) Hawkular and Overview can be combined into one Hawkular Overview or >>> Hawkular (which has the overview of the projects). >>> >> >> This is more an issue with the representation in the mindmap I think. >> Hawkular is really the homepage: http://www.hawkular.org/index.html and >> I think we can remove the link since the link on the logo does the same >> thing and it's quite a UXD standard. >> Overview is a more detailed page: >> http://www.hawkular.org/docs/overview.html a user should find it's way >> among the various projects. >> > > As long as we do not have two top menu entries, one for Hawkular and one > for Overview, then the proposal makes sense. > > >> >> >>> 2) The Grafana plugin should be moved under Metrics because is for >>> Metrics and only Metrics. >>> >> >> The Grafana plugins works with Metrics and Services >> > > But it is a Metrics specific integration, no other services will ever > integrate with Grafana. > It is not Hawkular Metrics specific if it works for Hawkular Services. Thomas > > >> >> >>> 3) Hawkular Server should be renamed Hawkular Services because that the >>> official project name. 
>>> >> >> OTOH I would not want a totally separate structure for Hawkular Services >> *and* Hawkular Community, as they are very much the same except for some >> installation process (have to install C* separately or not) and maybe for >> additional parts in Community. The idea was to combined both into a single >> concept of server. >> > > Agree here, what I suggested is that Hawkular Community is just a > subsection in the Hawkular Services; and not two different sections. That > will give more visibility to the concept of Hawkular Services rather than > promote the unused name of Hawkular Server . > > > > >> Thomas >> >> >>> >>> >>> Thank you, >>> Stefan Negrea >>> >>> Software Engineer >>> >>> On Wed, Jun 15, 2016 at 1:18 PM, Thomas Heute wrote: >>> >>>> Sorry, I meant to sent the PNG file... here it is >>>> >>>> On Wed, Jun 15, 2016 at 8:08 PM, Thomas Heute >>>> wrote: >>>> >>>>> Based on that suggestion, here is another proposal. >>>>> >>>>> Rectangle means a page >>>>> Underline is more likely a section on a page >>>>> Green arrows mean links (To Travis, to gitbook.io...) >>>>> >>>>> Let me know what you think of that updated section >>>>> >>>>> Thomas >>>>> >>>>> On Tue, Jun 14, 2016 at 10:32 PM, Stefan Negrea >>>>> wrote: >>>>> >>>>>> I do not see the idea proposed yet, but why not structure the website >>>>>> around major projects? We have Hawkular community, Hawkular Services, >>>>>> Hawkular Metrics, and APM. Projects like Inventory or the clients would >>>>>> fall under Hawkular Services umbrella. So rather than designing a generic >>>>>> structure with everything make individual sub-sites and then apply the >>>>>> structure you proposed. >>>>>> >>>>>> The current website was designed when the direction of the community >>>>>> was different so a re-org along the previous structure is not sufficient. >>>>>> >>>>>> Thank you, >>>>>> Stefan Negrea >>>>>> >>>>>> On Tue, Jun 14, 2016 at 12:08 PM, Michael Burman >>>>> > wrote: >>>>>> >>>>>>> Currently Heapster stores in internal memory few minutes of data and >>>>>>> allows queries that request this data (through its REST-interface). The >>>>>>> consume part will just request the data from the HWKMETRICS instead. >>>>>>> >>>>>>> - Micke >>>>>>> >>>>>>> ----- Original Message ----- >>>>>>> From: "Thomas Heute" >>>>>>> To: "Discussions around Hawkular development" < >>>>>>> hawkular-dev at lists.jboss.org> >>>>>>> Sent: Tuesday, June 14, 2016 4:09:40 PM >>>>>>> Subject: Re: [Hawkular-dev] Hawkular.org >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Tue, Jun 14, 2016 at 3:06 PM, Michael Burman < >>>>>>> miburman at redhat.com > wrote: >>>>>>> >>>>>>> >>>>>>> Consumers is terrible word for any client, as they both consume as >>>>>>> well as produce the data. >>>>>>> >>>>>>> Well that was actually reflecting the current state, we have >>>>>>> "things" that feed data to the server and "things" that consume data from >>>>>>> the server. The client libraries provide an API to feed and consume. >>>>>>> >>>>>>> >>>>>>> For example for Heapster, we currently produce the data, however at >>>>>>> the moment I'm creating a change that will consume the data from HWKMETRICS. >>>>>>> >>>>>>> Why does it consume data now ? >>>>>>> >>>>>>> Thomas >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> Integration / clients is far more used and known word, while >>>>>>> consumer/producer is something more specific and implies a design pattern. 
>>>>>>> >>>>>>> - Micke >>>>>>> >>>>>>> _______________________________________________ >>>>>>> hawkular-dev mailing list >>>>>>> hawkular-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>>>>> >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> hawkular-dev mailing list >>>>>> hawkular-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>>>> >>>>>> >>>>> >>>> >>>> _______________________________________________ >>>> hawkular-dev mailing list >>>> hawkular-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>> >>>> >>> >>> _______________________________________________ >>> hawkular-dev mailing list >>> hawkular-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>> >>> >> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> >> > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160705/4defc09a/attachment-0001.html From tsegismo at redhat.com Tue Jul 5 10:21:50 2016 From: tsegismo at redhat.com (Thomas Segismont) Date: Tue, 5 Jul 2016 16:21:50 +0200 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: References: <1265553183.102271604.1465909592402.JavaMail.zimbra@redhat.com> <43292415.102368387.1465924138218.JavaMail.zimbra@redhat.com> <0D5FC92F-40C4-493F-A57B-206EA804DD58@redhat.com> <587e0bc6-44d9-3f14-e872-5a58b2058223@redhat.com> Message-ID: <23adbfcc-1e2f-2e92-bb4d-d205618e7c82@redhat.com> Le 05/07/2016 ? 16:05, Stefan Negrea a ?crit : > On Tue, Jul 5, 2016 at 8:03 AM, Thomas Segismont > wrote: > > > > Le 05/07/2016 ? 14:59, Heiko W.Rupp a ?crit : > >> 2) The Grafana plugin should be moved under Metrics because is for Metrics > >> > and only Metrics. > > If this is true - can we make it work with H-services as well? > > > > The Grafana plugin works with all active flavors of Metrics: standalone, > Openshift-Metrics and Hawkular-Services. > > I'm not sure what Stefan meant. > > > The Grafana plugins works with Metrics deployed on all distributions > however, the plugin itself can only be used with the Metrics project, > there are no projects (such as Alerts, or Inventory) that will ever > integrate with it. That is why I think it should be under the Metrics > project and not in another place. The integration itself is very > specific to just Metrics, not the entire Hawkular Services. I see what you meant now. But we can't presume anything about other services roadmaps. For example, the datasource plugin annotation feature could be implemented with requests to an event service. Anyway, since it should be able to connect to Metrics in different environments (H-Services, OS-Metrics and standalone), I err on the side of promoting it as a top level project. 
> > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -- Thomas Segismont JBoss ON Engineering Team From snegrea at redhat.com Tue Jul 5 11:31:18 2016 From: snegrea at redhat.com (Stefan Negrea) Date: Tue, 5 Jul 2016 10:31:18 -0500 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: References: <1265553183.102271604.1465909592402.JavaMail.zimbra@redhat.com> <43292415.102368387.1465924138218.JavaMail.zimbra@redhat.com> Message-ID: On Tue, Jul 5, 2016 at 9:14 AM, Thomas Heute wrote: > > > On Tue, Jul 5, 2016 at 4:10 PM, Stefan Negrea wrote: > >> >> On Tue, Jul 5, 2016 at 3:19 AM, Thomas Heute wrote: >> >>> >>> >>> On Thu, Jun 16, 2016 at 6:11 PM, Stefan Negrea >>> wrote: >>> >>>> I really like this last revision because the content is sectioned along >>>> the way we deliver to users. So when a user navigates the website, the >>>> content is always related to what they can directly download and use and >>>> not mixed together. >>>> >>>> I have 3 small proposals: >>>> 1) Hawkular and Overview can be combined into one Hawkular Overview or >>>> Hawkular (which has the overview of the projects). >>>> >>> >>> This is more an issue with the representation in the mindmap I think. >>> Hawkular is really the homepage: http://www.hawkular.org/index.html and >>> I think we can remove the link since the link on the logo does the same >>> thing and it's quite a UXD standard. >>> Overview is a more detailed page: >>> http://www.hawkular.org/docs/overview.html a user should find it's way >>> among the various projects. >>> >> >> As long as we do not have two top menu entries, one for Hawkular and one >> for Overview, then the proposal makes sense. >> >> >>> >>> >>>> 2) The Grafana plugin should be moved under Metrics because is for >>>> Metrics and only Metrics. >>>> >>> >>> The Grafana plugins works with Metrics and Services >>> >> >> But it is a Metrics specific integration, no other services will ever >> integrate with Grafana. >> > > It is not Hawkular Metrics specific if it works for Hawkular Services. > > That is where we will confuse everybody about the integration. Hawkular Services has a lot services/stuff bundled. Grafana is only relevant and works for Metrics, not Alerts, not Inventory. It does not integrate with the other services for anything. Thinking from the perspective of a new user, if I see the Grafana integration mentioned in the context of Hawkular Services, then immediately I would assume that it integrates with all the services. The context confuses everything ... Thomas > > > > >> >> >>> >>> >>>> 3) Hawkular Server should be renamed Hawkular Services because that the >>>> official project name. >>>> >>> >>> OTOH I would not want a totally separate structure for Hawkular Services >>> *and* Hawkular Community, as they are very much the same except for some >>> installation process (have to install C* separately or not) and maybe for >>> additional parts in Community. The idea was to combined both into a single >>> concept of server. >>> >> >> Agree here, what I suggested is that Hawkular Community is just a >> subsection in the Hawkular Services; and not two different sections. That >> will give more visibility to the concept of Hawkular Services rather than >> promote the unused name of Hawkular Server . 
>> >> >> >> >>> Thomas >>> >>> >>>> >>>> >>>> Thank you, >>>> Stefan Negrea >>>> >>>> Software Engineer >>>> >>>> On Wed, Jun 15, 2016 at 1:18 PM, Thomas Heute >>>> wrote: >>>> >>>>> Sorry, I meant to sent the PNG file... here it is >>>>> >>>>> On Wed, Jun 15, 2016 at 8:08 PM, Thomas Heute >>>>> wrote: >>>>> >>>>>> Based on that suggestion, here is another proposal. >>>>>> >>>>>> Rectangle means a page >>>>>> Underline is more likely a section on a page >>>>>> Green arrows mean links (To Travis, to gitbook.io...) >>>>>> >>>>>> Let me know what you think of that updated section >>>>>> >>>>>> Thomas >>>>>> >>>>>> On Tue, Jun 14, 2016 at 10:32 PM, Stefan Negrea >>>>>> wrote: >>>>>> >>>>>>> I do not see the idea proposed yet, but why not structure the >>>>>>> website around major projects? We have Hawkular community, Hawkular >>>>>>> Services, Hawkular Metrics, and APM. Projects like Inventory or the clients >>>>>>> would fall under Hawkular Services umbrella. So rather than designing a >>>>>>> generic structure with everything make individual sub-sites and then apply >>>>>>> the structure you proposed. >>>>>>> >>>>>>> The current website was designed when the direction of the community >>>>>>> was different so a re-org along the previous structure is not sufficient. >>>>>>> >>>>>>> Thank you, >>>>>>> Stefan Negrea >>>>>>> >>>>>>> On Tue, Jun 14, 2016 at 12:08 PM, Michael Burman < >>>>>>> miburman at redhat.com> wrote: >>>>>>> >>>>>>>> Currently Heapster stores in internal memory few minutes of data >>>>>>>> and allows queries that request this data (through its REST-interface). The >>>>>>>> consume part will just request the data from the HWKMETRICS instead. >>>>>>>> >>>>>>>> - Micke >>>>>>>> >>>>>>>> ----- Original Message ----- >>>>>>>> From: "Thomas Heute" >>>>>>>> To: "Discussions around Hawkular development" < >>>>>>>> hawkular-dev at lists.jboss.org> >>>>>>>> Sent: Tuesday, June 14, 2016 4:09:40 PM >>>>>>>> Subject: Re: [Hawkular-dev] Hawkular.org >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Tue, Jun 14, 2016 at 3:06 PM, Michael Burman < >>>>>>>> miburman at redhat.com > wrote: >>>>>>>> >>>>>>>> >>>>>>>> Consumers is terrible word for any client, as they both consume as >>>>>>>> well as produce the data. >>>>>>>> >>>>>>>> Well that was actually reflecting the current state, we have >>>>>>>> "things" that feed data to the server and "things" that consume data from >>>>>>>> the server. The client libraries provide an API to feed and consume. >>>>>>>> >>>>>>>> >>>>>>>> For example for Heapster, we currently produce the data, however at >>>>>>>> the moment I'm creating a change that will consume the data from HWKMETRICS. >>>>>>>> >>>>>>>> Why does it consume data now ? >>>>>>>> >>>>>>>> Thomas >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Integration / clients is far more used and known word, while >>>>>>>> consumer/producer is something more specific and implies a design pattern. 
>>>>>>>> >>>>>>>> - Micke >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> hawkular-dev mailing list >>>>>>>> hawkular-dev at lists.jboss.org >>>>>>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>>>>>> >>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> hawkular-dev mailing list >>>>>>> hawkular-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>>>>> >>>>>>> >>>>>> >>>>> >>>>> _______________________________________________ >>>>> hawkular-dev mailing list >>>>> hawkular-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>>> >>>>> >>>> >>>> _______________________________________________ >>>> hawkular-dev mailing list >>>> hawkular-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>> >>>> >>> >>> _______________________________________________ >>> hawkular-dev mailing list >>> hawkular-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>> >>> >> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> >> > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160705/e4b0cfcb/attachment.html From snegrea at redhat.com Tue Jul 5 11:35:02 2016 From: snegrea at redhat.com (Stefan Negrea) Date: Tue, 5 Jul 2016 10:35:02 -0500 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: References: <1265553183.102271604.1465909592402.JavaMail.zimbra@redhat.com> <43292415.102368387.1465924138218.JavaMail.zimbra@redhat.com> <0D5FC92F-40C4-493F-A57B-206EA804DD58@redhat.com> <587e0bc6-44d9-3f14-e872-5a58b2058223@redhat.com> Message-ID: On Tue, Jul 5, 2016 at 9:13 AM, Thomas Heute wrote: > > > > On Tue, Jul 5, 2016 at 4:05 PM, Stefan Negrea wrote: > >> On Tue, Jul 5, 2016 at 8:03 AM, Thomas Segismont >> wrote: >> >>> >>> >>> Le 05/07/2016 ? 14:59, Heiko W.Rupp a ?crit : >>> >> 2) The Grafana plugin should be moved under Metrics because is for >>> Metrics >>> >> > and only Metrics. >>> > If this is true - can we make it work with H-services as well? >>> > >>> >>> The Grafana plugin works with all active flavors of Metrics: standalone, >>> Openshift-Metrics and Hawkular-Services. >>> >>> I'm not sure what Stefan meant. >>> >> >> The Grafana plugins works with Metrics deployed on all distributions >> however, the plugin itself can only be used with the Metrics project, there >> are no projects (such as Alerts, or Inventory) that will ever integrate >> with it. That is why I think it should be under the Metrics project and not >> in another place. The integration itself is very specific to just Metrics, >> not the entire Hawkular Services. >> > > > But then it applies to Hawkular services (the most important Hawkular > project) and then should really shine there as much as for Metrics. > > We really need to think of Hawkular Metrics as a core part of Hawkular > Services, not as a side project. > > Same for the documentation, the documentation to use metrics will need to > be part of Hawkular Services documentation, not just a mere pointer to > Hawkular metrics and let the user deal with the differences. 
> I am really not sure why the discussion took this turn, my comments were not about Services vs Metrics. To refocus on the Grafana plugin, the plugin is really specific to graphing actual metrics using the Metrics API only. > Thomas > > > > >> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> >> > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160705/55a5b0b5/attachment-0001.html From theute at redhat.com Tue Jul 5 11:55:37 2016 From: theute at redhat.com (Thomas Heute) Date: Tue, 5 Jul 2016 17:55:37 +0200 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: References: <1265553183.102271604.1465909592402.JavaMail.zimbra@redhat.com> <43292415.102368387.1465924138218.JavaMail.zimbra@redhat.com> <0D5FC92F-40C4-493F-A57B-206EA804DD58@redhat.com> <587e0bc6-44d9-3f14-e872-5a58b2058223@redhat.com> Message-ID: On Tue, Jul 5, 2016 at 5:35 PM, Stefan Negrea wrote: > > On Tue, Jul 5, 2016 at 9:13 AM, Thomas Heute wrote: > >> >> >> >> On Tue, Jul 5, 2016 at 4:05 PM, Stefan Negrea wrote: >> >>> On Tue, Jul 5, 2016 at 8:03 AM, Thomas Segismont >>> wrote: >>> >>>> >>>> >>>> Le 05/07/2016 ? 14:59, Heiko W.Rupp a ?crit : >>>> >> 2) The Grafana plugin should be moved under Metrics because is for >>>> Metrics >>>> >> > and only Metrics. >>>> > If this is true - can we make it work with H-services as well? >>>> > >>>> >>>> The Grafana plugin works with all active flavors of Metrics: standalone, >>>> Openshift-Metrics and Hawkular-Services. >>>> >>>> I'm not sure what Stefan meant. >>>> >>> >>> The Grafana plugins works with Metrics deployed on all distributions >>> however, the plugin itself can only be used with the Metrics project, there >>> are no projects (such as Alerts, or Inventory) that will ever integrate >>> with it. That is why I think it should be under the Metrics project and not >>> in another place. The integration itself is very specific to just Metrics, >>> not the entire Hawkular Services. >>> >> >> >> But then it applies to Hawkular services (the most important Hawkular >> project) and then should really shine there as much as for Metrics. >> >> We really need to think of Hawkular Metrics as a core part of Hawkular >> Services, not as a side project. >> >> Same for the documentation, the documentation to use metrics will need to >> be part of Hawkular Services documentation, not just a mere pointer to >> Hawkular metrics and let the user deal with the differences. >> > > > I am really not sure why the discussion took this turn, my comments were > not about Services vs Metrics. To refocus on the Grafana plugin, the plugin > is really specific to graphing actual metrics using the Metrics API only. > Let's not assume that people will take Hawkular Services and then combine themselves what is available to the metrics users with Hawkular Services. Hawkular Metrics really shines when used with Alerts and Inventory, let's have that package easy to use and consume. And if it makes sense to add alerts or inventory in the Grafana plugin, we should. Also if we have it along the other clients, it's more consistent. 
Thomas > > > >> Thomas >> >> >> >> >>> >>> _______________________________________________ >>> hawkular-dev mailing list >>> hawkular-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>> >>> >> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> >> > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160705/0336d6f7/attachment.html From mazz at redhat.com Tue Jul 5 12:33:19 2016 From: mazz at redhat.com (John Mazzitelli) Date: Tue, 5 Jul 2016 12:33:19 -0400 (EDT) Subject: [Hawkular-dev] errors when embedding c* In-Reply-To: <1029680102.2368167.1467735448813.JavaMail.zimbra@redhat.com> References: <1029680102.2368167.1467735448813.JavaMail.zimbra@redhat.com> Message-ID: <73835231.2370472.1467736399419.JavaMail.zimbra@redhat.com> > We have embedded C* today using the maven 'embeddedc' profile. > HOWEVER, I can tell you that that still doesn't work quite right > ... > some components throw errors at startup and you usually have to > shutdown the h-server, restart it, and usually (not always) things > start working again. Whether we embed C* in -services or in another distro, either way, we have a problem. I'm sure all of us have seen this at one time or another :) It seems if I restart the server, most times (not all) I no longer see these errors. I have no idea if this is just going to cause problems in alerts, or if there are other problems we'll see. But I see lots of these kinds of errors: =========== 2016-07-05 11:06:21,747 ERROR [org.jboss.as.ejb3.invocation] (Thread-127 (ActiveMQ-client-global-threads-913106351)) WFLYEJB0034: EJB Invocation failed on component CacheManager for method public java.util.Set org.hawkular.alerts.bus.init.CacheManager.getActiveDataIds(): javax.ejb.ConcurrentAccessTimeoutException: WFLYEJB0241: EJB 3.1 PFD2 4.8.5.5.1 concurrent access timeout on CacheManager - could not obtain lock within 5000MILLISECONDS ... at org.hawkular.alerts.bus.init.CacheManager$$$view16.getActiveDataIds(Unknown Source) at org.hawkular.alerts.bus.listener.MetricDataListener.onBasicMessage(MetricDataListener.java:82) at org.hawkular.alerts.bus.listener.MetricDataListener.onBasicMessage(MetricDataListener.java:50) at org.hawkular.bus.common.consumer.BasicMessageListener.onBasicMessage(BasicMessageListener.java:77) at org.hawkular.bus.common.consumer.BasicMessageListener.onMessage(BasicMessageListener.java:63) ... From lponce at redhat.com Tue Jul 5 12:48:17 2016 From: lponce at redhat.com (Lucas Ponce) Date: Tue, 5 Jul 2016 12:48:17 -0400 (EDT) Subject: [Hawkular-dev] errors when embedding c* In-Reply-To: <73835231.2370472.1467736399419.JavaMail.zimbra@redhat.com> References: <1029680102.2368167.1467735448813.JavaMail.zimbra@redhat.com> <73835231.2370472.1467736399419.JavaMail.zimbra@redhat.com> Message-ID: <1567260162.2372981.1467737297300.JavaMail.zimbra@redhat.com> I didn't see that error on my tests. I will take a look. Are you using the new C* embedded in hawkular-services, right ? Can you post me the commands/steps to reproduce it ? Thanks. 
----- Mensaje original ----- > De: "John Mazzitelli" > Para: "Discussions around Hawkular development" > Enviados: Martes, 5 de Julio 2016 18:33:19 > Asunto: [Hawkular-dev] errors when embedding c* > > > We have embedded C* today using the maven 'embeddedc' profile. > > HOWEVER, I can tell you that that still doesn't work quite right > > ... > > some components throw errors at startup and you usually have to > > shutdown the h-server, restart it, and usually (not always) things > > start working again. > > Whether we embed C* in -services or in another distro, either way, we have a > problem. > > I'm sure all of us have seen this at one time or another :) > > It seems if I restart the server, most times (not all) I no longer see these > errors. I have no idea if this is just going to cause problems in alerts, or > if there are other problems we'll see. But I see lots of these kinds of > errors: > > =========== > > 2016-07-05 11:06:21,747 ERROR [org.jboss.as.ejb3.invocation] (Thread-127 > (ActiveMQ-client-global-threads-913106351)) WFLYEJB0034: EJB Invocation > failed on component CacheManager for method public java.util.Set > org.hawkular.alerts.bus.init.CacheManager.getActiveDataIds(): > javax.ejb.ConcurrentAccessTimeoutException: WFLYEJB0241: EJB 3.1 PFD2 > 4.8.5.5.1 concurrent access timeout on CacheManager - could not obtain lock > within 5000MILLISECONDS > ... > at > org.hawkular.alerts.bus.init.CacheManager$$$view16.getActiveDataIds(Unknown > Source) > at > org.hawkular.alerts.bus.listener.MetricDataListener.onBasicMessage(MetricDataListener.java:82) > at > org.hawkular.alerts.bus.listener.MetricDataListener.onBasicMessage(MetricDataListener.java:50) > at > org.hawkular.bus.common.consumer.BasicMessageListener.onBasicMessage(BasicMessageListener.java:77) > at > org.hawkular.bus.common.consumer.BasicMessageListener.onMessage(BasicMessageListener.java:63) > ... > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > From mazz at redhat.com Tue Jul 5 12:57:52 2016 From: mazz at redhat.com (John Mazzitelli) Date: Tue, 5 Jul 2016 12:57:52 -0400 (EDT) Subject: [Hawkular-dev] errors when embedding c* In-Reply-To: <1567260162.2372981.1467737297300.JavaMail.zimbra@redhat.com> References: <1029680102.2368167.1467735448813.JavaMail.zimbra@redhat.com> <73835231.2370472.1467736399419.JavaMail.zimbra@redhat.com> <1567260162.2372981.1467737297300.JavaMail.zimbra@redhat.com> Message-ID: <728791890.2374388.1467737872862.JavaMail.zimbra@redhat.com> 1. Git clone the hawkular-services master branch 2. From the top directory under hawkular-services, run "mvn clean install -Pdev -Pembeddedec -Pdozip" (note: -Pdozip is only needed once - looks like the itests need the zip to build or something because i got a build error if I didn't build the zip at least once - I should probably write a JIRA on the new -Pdozip profile - that was added recently). 3. Run "dist/target/hawkular-services-dist-*/bin/standalone.sh" That's how I see it. I see it pretty regularly. Note, I am NOT running on a machine with a SSD hard drive :) ----- Original Message ----- > I didn't see that error on my tests. > > I will take a look. > > Are you using the new C* embedded in hawkular-services, right ? > > Can you post me the commands/steps to reproduce it ? > > Thanks. 
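The ConcurrentAccessTimeoutException in this thread is the container giving up after the default five-second wait for the singleton CacheManager's container-managed lock (WildFly's default matches the "within 5000MILLISECONDS" in the log). Independent of the root cause, the generic EJB knobs involved are @AccessTimeout and the lock type; a hedged illustration of the mechanism on a made-up singleton, not a claim that this is the right fix for the alerts CacheManager:

    import java.util.concurrent.TimeUnit;

    import javax.ejb.AccessTimeout;
    import javax.ejb.Lock;
    import javax.ejb.LockType;
    import javax.ejb.Singleton;

    @Singleton
    public class ExampleCache {

        // Callers waiting for the singleton's lock give up after 30s instead of the default.
        @AccessTimeout(value = 30, unit = TimeUnit.SECONDS)
        @Lock(LockType.READ) // allow concurrent readers; use WRITE (the default) for mutating methods
        public java.util.Set<String> getActiveIds() {
            return java.util.Collections.emptySet();
        }
    }

WildFly also exposes a default singleton access timeout in the ejb3 subsystem, but raising either value only hides slow startup work, so the repeated warning at boot is still worth chasing down.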
> > ----- Mensaje original ----- > > De: "John Mazzitelli" > > Para: "Discussions around Hawkular development" > > > > Enviados: Martes, 5 de Julio 2016 18:33:19 > > Asunto: [Hawkular-dev] errors when embedding c* > > > > > We have embedded C* today using the maven 'embeddedc' profile. > > > HOWEVER, I can tell you that that still doesn't work quite right > > > ... > > > some components throw errors at startup and you usually have to > > > shutdown the h-server, restart it, and usually (not always) things > > > start working again. > > > > Whether we embed C* in -services or in another distro, either way, we have > > a > > problem. > > > > I'm sure all of us have seen this at one time or another :) > > > > It seems if I restart the server, most times (not all) I no longer see > > these > > errors. I have no idea if this is just going to cause problems in alerts, > > or > > if there are other problems we'll see. But I see lots of these kinds of > > errors: > > > > =========== > > > > 2016-07-05 11:06:21,747 ERROR [org.jboss.as.ejb3.invocation] (Thread-127 > > (ActiveMQ-client-global-threads-913106351)) WFLYEJB0034: EJB Invocation > > failed on component CacheManager for method public java.util.Set > > org.hawkular.alerts.bus.init.CacheManager.getActiveDataIds(): > > javax.ejb.ConcurrentAccessTimeoutException: WFLYEJB0241: EJB 3.1 PFD2 > > 4.8.5.5.1 concurrent access timeout on CacheManager - could not obtain lock > > within 5000MILLISECONDS > > ... > > at > > org.hawkular.alerts.bus.init.CacheManager$$$view16.getActiveDataIds(Unknown > > Source) > > at > > org.hawkular.alerts.bus.listener.MetricDataListener.onBasicMessage(MetricDataListener.java:82) > > at > > org.hawkular.alerts.bus.listener.MetricDataListener.onBasicMessage(MetricDataListener.java:50) > > at > > org.hawkular.bus.common.consumer.BasicMessageListener.onBasicMessage(BasicMessageListener.java:77) > > at > > org.hawkular.bus.common.consumer.BasicMessageListener.onMessage(BasicMessageListener.java:63) > > ... > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > From snegrea at redhat.com Tue Jul 5 14:10:57 2016 From: snegrea at redhat.com (Stefan Negrea) Date: Tue, 5 Jul 2016 13:10:57 -0500 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: References: <1265553183.102271604.1465909592402.JavaMail.zimbra@redhat.com> <43292415.102368387.1465924138218.JavaMail.zimbra@redhat.com> <0D5FC92F-40C4-493F-A57B-206EA804DD58@redhat.com> <587e0bc6-44d9-3f14-e872-5a58b2058223@redhat.com> Message-ID: On Tue, Jul 5, 2016 at 10:55 AM, Thomas Heute wrote: > > > On Tue, Jul 5, 2016 at 5:35 PM, Stefan Negrea wrote: > >> >> On Tue, Jul 5, 2016 at 9:13 AM, Thomas Heute wrote: >> >>> >>> >>> >>> On Tue, Jul 5, 2016 at 4:05 PM, Stefan Negrea >>> wrote: >>> >>>> On Tue, Jul 5, 2016 at 8:03 AM, Thomas Segismont >>>> wrote: >>>> >>>>> >>>>> >>>>> Le 05/07/2016 ? 14:59, Heiko W.Rupp a ?crit : >>>>> >> 2) The Grafana plugin should be moved under Metrics because is for >>>>> Metrics >>>>> >> > and only Metrics. >>>>> > If this is true - can we make it work with H-services as well? >>>>> > >>>>> >>>>> The Grafana plugin works with all active flavors of Metrics: >>>>> standalone, >>>>> Openshift-Metrics and Hawkular-Services. >>>>> >>>>> I'm not sure what Stefan meant. 
>>>>> >>>> >>>> The Grafana plugins works with Metrics deployed on all distributions >>>> however, the plugin itself can only be used with the Metrics project, there >>>> are no projects (such as Alerts, or Inventory) that will ever integrate >>>> with it. That is why I think it should be under the Metrics project and not >>>> in another place. The integration itself is very specific to just Metrics, >>>> not the entire Hawkular Services. >>>> >>> >>> >>> But then it applies to Hawkular services (the most important Hawkular >>> project) and then should really shine there as much as for Metrics. >>> >>> We really need to think of Hawkular Metrics as a core part of Hawkular >>> Services, not as a side project. >>> >>> Same for the documentation, the documentation to use metrics will need >>> to be part of Hawkular Services documentation, not just a mere pointer to >>> Hawkular metrics and let the user deal with the differences. >>> >> >> >> I am really not sure why the discussion took this turn, my comments were >> not about Services vs Metrics. To refocus on the Grafana plugin, the plugin >> is really specific to graphing actual metrics using the Metrics API only. >> > > > Let's not assume that people will take Hawkular Services and then combine > themselves what is available to the metrics users with Hawkular Services. > > Hawkular Metrics really shines when used with Alerts and Inventory, let's > have that package easy to use and consume. And if it makes sense to add > alerts or inventory in the Grafana plugin, we should. Also if we have it > along the other clients, it's more consistent. > > Thomas > > To reiterate the point about the Grafana plugin. Grafana integration only makes sense for Hawkular Metrics, because of the actual Grafana project itself. If I have to pick where the put the documentation about the plugin I would put it under the Metrics section because that is the only place where it logically fits. A second option would be to leave it where you had it on the last mindmap, under clients. To me it makes no sense to put it under Hawkular Services, because the association between the two is done through inference (Grafana connects to Hawkular Metrics, and Hawkular Metrics is a service inside Hawkular Services, therefore Grafana plugin can be used with Hawkular Services). *tl;dr* Leaving the Grafana plugin documentation under Clients is the only solution if the placing it under Hawkular Metrics documentation is not an option. > >> >> >> >>> Thomas >>> >>> >>> >>> >>>> >>>> _______________________________________________ >>>> hawkular-dev mailing list >>>> hawkular-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>> >>>> >>> >>> _______________________________________________ >>> hawkular-dev mailing list >>> hawkular-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>> >>> >> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> >> > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160705/072ff342/attachment-0001.html From jsanda at redhat.com Tue Jul 5 14:27:59 2016 From: jsanda at redhat.com (John Sanda) Date: Tue, 5 Jul 2016 14:27:59 -0400 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: <23adbfcc-1e2f-2e92-bb4d-d205618e7c82@redhat.com> References: <1265553183.102271604.1465909592402.JavaMail.zimbra@redhat.com> <43292415.102368387.1465924138218.JavaMail.zimbra@redhat.com> <0D5FC92F-40C4-493F-A57B-206EA804DD58@redhat.com> <587e0bc6-44d9-3f14-e872-5a58b2058223@redhat.com> <23adbfcc-1e2f-2e92-bb4d-d205618e7c82@redhat.com> Message-ID: <6091FA5E-9110-4F49-BAD6-A7A297E075F9@redhat.com> > On Jul 5, 2016, at 10:21 AM, Thomas Segismont wrote: > > > > Le 05/07/2016 ? 16:05, Stefan Negrea a ?crit : >> On Tue, Jul 5, 2016 at 8:03 AM, Thomas Segismont >> >> wrote: >> >> >> >> Le 05/07/2016 ? 14:59, Heiko W.Rupp a ?crit : >>>> 2) The Grafana plugin should be moved under Metrics because is for Metrics >>>>> and only Metrics. >>> If this is true - can we make it work with H-services as well? >>> >> >> The Grafana plugin works with all active flavors of Metrics: standalone, >> Openshift-Metrics and Hawkular-Services. >> >> I'm not sure what Stefan meant. >> >> >> The Grafana plugins works with Metrics deployed on all distributions >> however, the plugin itself can only be used with the Metrics project, >> there are no projects (such as Alerts, or Inventory) that will ever >> integrate with it. That is why I think it should be under the Metrics >> project and not in another place. The integration itself is very >> specific to just Metrics, not the entire Hawkular Services. > > I see what you meant now. But we can't presume anything about other > services roadmaps. For example, the datasource plugin annotation feature > could be implemented with requests to an event service. > > Anyway, since it should be able to connect to Metrics in different > environments (H-Services, OS-Metrics and standalone), I err on the side > of promoting it as a top level project. > I think a top-level project makes the most sense. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160705/cd296b07/attachment.html From jsanda at redhat.com Tue Jul 5 15:14:32 2016 From: jsanda at redhat.com (John Sanda) Date: Tue, 5 Jul 2016 15:14:32 -0400 Subject: [Hawkular-dev] Metrics and use of metrics_idx In-Reply-To: <392979988.4845404.1467649391042.JavaMail.zimbra@redhat.com> References: <2031761900.4108094.1467293089854.JavaMail.zimbra@redhat.com> <5432f1e7-b52f-b218-b612-2e153add5dd7@redhat.com> <1299492960.4738361.1467626360299.JavaMail.zimbra@redhat.com> <8cbf3aef-f737-aef2-5569-0a48ab7aaf16@redhat.com> <418144617.4766560.1467630660781.JavaMail.zimbra@redhat.com> <75b0fa33-2b72-0c8b-b701-78e1a4b8384e@redhat.com> <427305826.4802006.1467640481529.JavaMail.zimbra@redhat.com> <71085524-bc4a-5013-55c9-c3fa9946205e@redhat.com> <392979988.4845404.1467649391042.JavaMail.zimbra@redhat.com> Message-ID: <31A05372-6C4B-4771-A0FE-8ECB45296A6E@redhat.com> I think the key question which Micke raised earlier in the thread is whether or not we even need the metrics_idx table at all. I think the points raised about searching via tags are all valid, and I think that querying the data table for unique ids should be sufficient for those use cases in which we need to fetch all ids. There is one I am not so sure about. 
Today we store all data points for a metric within a single partition (bad). We have had plans for a while now to do date partitioning, by day for example. If/when we implement date partitioning, the performance for fetching all metrics ids would probably be a bit worse; however, we can also introduce an index if needed. We can introduce a separate index for rollups. Rollups will be configurable so we will need to look somewhere in the database to determine which metrics support which rollups. Originally I was thinking that this would be metrics_idx, but a separate index would be cleaner. > On Jul 4, 2016, at 12:23 PM, Michael Burman wrote: > > Hi, > > The basic problem with supporting anything with metricId is the fact that we don't have data storage that is designed for the metricId matching. We have exact match or no match, nothing in between. We really should avoid supporting features we can't support. Or then we'll need inverted index for full text search for all the metricIds. > > Even with it, it's like transferring metadata in a filename. That has never been a good idea. > > - Micke > > ----- Original Message ----- > From: "Thomas Segismont" > To: hawkular-dev at lists.jboss.org > Sent: Monday, July 4, 2016 5:29:23 PM > Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx > > I don't agree with the philosophy. I err on the side of making it > possible to use a server side filter and explain in the documentation > why tagging your metrics is a better option. > > Le 04/07/2016 ? 15:54, Michael Burman a ?crit : >> Hi, >> >> Sure, but we should discourage users to use metricIds for anything. Best approach would be to randomize them and force users to use tagging to find their metrics. Otherwise what we'll get is silly integrations where someone has "hostname.wildfly.metric.name" and then they want to search all the "metric.name" by doing idFilter="*.metric\.name" and complain "your metrics db is slow!". What they should do instead is always "/raw/query?tags=hostname=X,app=wildfly,metric=metric.name" and so on. >> >> - Micke >> >> ----- Original Message ----- >> From: "Thomas Segismont" >> To: hawkular-dev at lists.jboss.org >> Sent: Monday, July 4, 2016 4:19:30 PM >> Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx >> >> >> >> Le 04/07/2016 ? 13:11, Michael Burman a ?crit : >>> Hi, >>> >>> The id filter does not work without tags filter because we can't do the id filtering on Cassandra side. It was done for performance reasons, as otherwise you have to fetch all the available metrics to the HWKMETRICS and then do the client side filtering. >> >> I understand we can't filter on the database. But then wouldn't it be >> better to filter on the server in order to save that to the client at >> least? I mean, if you need to get metrics by name, as a user you will >> have to load everything anyway. Couldn't we save that to the user? >> >>> >>> Fulltext indexing could certainly be an interesting choice for many of our use-cases to investigate at least. I forgot the whole thing, we should look into it before making any greater changes. Mm.. >>> >>> - Micke >>> >>> ----- Original Message ----- >>> From: "Thomas Segismont" >>> To: hawkular-dev at lists.jboss.org >>> Sent: Monday, July 4, 2016 1:27:41 PM >>> Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx >>> >>> >>> >>> Le 04/07/2016 ? 11:59, Michael Burman a ?crit : >>>> Hi, >>>> >>>> If you consider a use case to sync with Grafana, then sending a million metricIds to a select list is probably a bad use-case. 
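On the date-partitioning idea mentioned above (splitting a metric's data points by day instead of keeping one unbounded partition), here is a minimal sketch of how a write path could derive a day bucket to include in the partition key. This is not the actual Hawkular Metrics schema or code; the bucket format and key layout are assumptions.

import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;

// Illustrative only: a per-day bucket added to the partition key would spread a
// metric's data points over (tenant, metric, day) partitions instead of a single
// ever-growing partition. Key layout and names are assumptions.
public final class DateBucket {

    private DateBucket() {
    }

    /** Returns the UTC day (e.g. "2016-07-05") for a data point timestamp in millis. */
    public static String dayBucket(long timestampMillis) {
        LocalDate day = Instant.ofEpochMilli(timestampMillis)
                .atZone(ZoneOffset.UTC)
                .toLocalDate();
        return day.toString(); // ISO-8601 yyyy-MM-dd
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        // A write would then target partition (tenantId, metricId, dayBucket(now)).
        System.out.println("bucket for now = " + dayBucket(now));
    }
}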
If it's on the other hand syncing all the metrics daily to some external system, it would not matter if it takes minutes to do the listing (as it probably will take a while to transfer everything in JSON). And both of those use-cases should use tags filtering to find something and not list all the metrics blindfolded. >>> >>> There is no bad use case, only bad solutions :) Joke aside, it is true >>> that the current solution for autocomplete in Grafana is far from >>> perfect. It does not query all metrics of a tenant, but all metrics of a >>> same type for a tenant, and then does the filtering on the client side. >>> For some reason the Metrics API does not allow the name filter if no tag >>> filter is set. Should I open a JIRA? >>> >>>> >>>> Currently to list 50k metrics it takes 200ms on our current metrics_idx (to localhost) and 400ms with the other methology. For millions even the current method takes a long time already without filtering. A billion metricIds is no problem if you really want to find them. But why would you list a billion metricIds to choose from instead of actually finding them with something relevant (and in that case there's no slowdown to current) ? >>>> >>>> Metric name autocomplete can't use non-filtered processing in our current case either as we can't check for partial keys from Cassandra (Cassandra does not support prefix querying of keys). >>> >>> Could the new features fulltext index capabilities in C* 3.x help here? >>> >>>> >>>> - Micke >>>> >>>> ----- Original Message ----- >>>> From: "Thomas Segismont" >>>> To: hawkular-dev at lists.jboss.org >>>> Sent: Monday, July 4, 2016 12:44:07 PM >>>> Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx >>>> >>>> Hi, >>>> >>>> First a note about number of metrics per tenant. A million metrics per >>>> tenant would be easy to reach IMO. Let's take a simple example: a >>>> machine running an Wildfly/EAP server. Combine OS (CPU, memory, network, >>>> disk) with container (Undertow, Datasource, ActiveMQ Artemis, EJB, JPA) >>>> metrics and you get close to a thousand metrics. Then multiply by a >>>> thousand machines and you reach the million. And of course there are >>>> users with more machines, and more complex setups (multiple Wildly/EAP >>>> servers with hundreds of apps deployed). >>>> Keep in mind that one of the promises of Metrics was the ability to >>>> store huge number of metrics, instead of disabling metric collection. >>>> >>>> That being said, do you have absolute numbers about the response time >>>> when querying for all metrics of a tenant? Twice as slower may not be >>>> that bad if we're going from 10ms down to 20ms :) Especially considering >>>> the use cases for such queries: daily sync with external system for >>>> example, or metric name autocomplete in Grafana/ManageIQ. >>>> >>>> Regards, >>>> >>>> Le 01/07/2016 ? 22:56, Michael Burman a ?crit : >>>>> Hi, >>>>> >>>>> Well, I've done some testing of the performance as well as a implementation that mimics the current behavior but does the reading from partition key. The performance is just fine with quite large amount of metricIds (at 50k the difference is ~2x, so in my dev machine I can still read about 3 times per second all the possible tenants and current behavior would be about 6 times per second), I started to see performance "issues" when the amount of unique metricIds reached millions (although I'm not sure if it's a performance issue in that case either - given that such query is probably not in some performance critical query). 
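To make the tag-based lookup recommended above concrete (ask the server for metric definitions by tags rather than wildcarding ids on the client), here is a sketch of such a client call. The host, tenant header and exact endpoint path are assumptions based on this discussion, not a verified API contract.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Illustrative client sketch: look metrics up by tags instead of encoding metadata
// in the metric id and filtering ids later. The path, the "Hawkular-Tenant" header
// and the host are assumptions based on the thread, not a verified API contract.
public class TagQueryExample {

    public static void main(String[] args) throws Exception {
        // Tag values with spaces or special characters would need URL encoding.
        String tags = "hostname=myhost,app=wildfly,metric=metric.name";
        URL url = new URL("http://localhost:8080/hawkular/metrics/gauges?tags=" + tags);

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Hawkular-Tenant", "my-tenant");

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // JSON list of matching metric definitions
            }
        }
    }
}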
The amount of datapoints did not make a difference (I just tested with 8668 metricIds with a total of >86M stored datapoints, thus confirming my expectations also). >>>>> >>>>> So unless there's a known use case of using millions of metricIds with stored datapoints (IIRC, once all the rows for a partition key are deleted, the partition key disappears.. John can confirm) I think we should move forward with this. Although we could make a enhancement to fetching metricIds by allowing to query only metrics_idx table (something like fetch only the registered metrics) - in that case someone who registers all the metrics could fetch them quickly also. >>>>> >>>>> - Micke >>>>> >>>>> ----- Original Message ----- >>>>> From: "Stefan Negrea" >>>>> To: "Discussions around Hawkular development" >>>>> Sent: Friday, July 1, 2016 9:14:43 PM >>>>> Subject: Re: [Hawkular-dev] Metrics and use of metrics_idx >>>>> >>>>> Hello, >>>>> >>>>> For Hawkular Metrics the speed of writes is always more important than the speed of reads (due to a variety of reasons). But that only works up to a certain extent, in the sense that we cannot totally neglect the read part. Let me see if I can narrow the impact of your proposal ... >>>>> >>>>> You made a very good point that the performance of reads is not affected if we discard metrics_idx for endpoints that require the metrics id. We only need to consider the impact of querying for metrics and tenants since both use metrics_idx. Querying the list of tenants is not very important because it is an admin feature that we will soon secure via the newly proposed "admin framework". So only querying for metrics definitions will be truly affected by removing the metrics_idx completely. But only a portion of those requests are affected because tags queries use the tags index. >>>>> >>>>> To conclude, metrics_idx is only important in cases where the user wants a full list of all metrics ever stored for a tenant id. If we can profile the performance impact on a larger set of metric definitions and we find the time difference without the metrics_idx is negligible then we should go forward with your proposal. >>>>> >>>>> >>>>> John Sanda, do you foresee using metrics_idx in the context of metric aggregation and the job scheduling framework that you've been working on? >>>>> >>>>> Micke, what do you think are the next steps to move forward with your proposal? >>>>> >>>>> >>>>> Thank you, >>>>> Stefan Negrea >>>>> >>>>> >>>>> On Thu, Jun 30, 2016 at 8:46 AM, Michael Burman < miburman at redhat.com > wrote: >>>>> >>>>> >>>>> Hi, >>>>> >>>>> This sparked my interest after the discussions in PR #523 (adding cache to avoid metrics_idx writes). Stefan commented that he still wants to write to this table to keep metrics available instantly, jsanda wants to write them asynchronously. Maybe we should instead just stop writing there? >>>>> >>>>> Why? We do the same thing in tenants also at this time, we don't write there if someone writes a metric to a new tenant. We fetch the partition keys from metrics_idx table. Now, the same ideology could be applied to the metrics_idx writing, read the partition keys from data. There's a small performance penalty, but the main thing is that we don't really need that information often - in most use cases never. >>>>> >>>>> If we want to search something with for example tags, we search it with tags - that metricId has been manually added to the metrics_idx table. No need to know if there's metrics which were not initialized. 
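For the "read the ids from the data table's partition keys" approach discussed above, Cassandra can enumerate partition keys with SELECT DISTINCT. Here is a sketch using the DataStax Java driver; the keyspace, table and column names are hypothetical placeholders, not the real Hawkular Metrics schema.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

// Sketch of enumerating metric ids straight from the data table's partition keys.
// Keyspace, table and column names here are hypothetical; the real schema likely
// differs. SELECT DISTINCT is restricted to partition key columns and returns one
// row per partition, so full rows are never read.
public class PartitionKeyScan {

    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("hawkular_metrics")) {

            ResultSet rs = session.execute(
                    "SELECT DISTINCT tenant_id, type, metric FROM data");
            for (Row row : rs) {
                System.out.println(row.getString("tenant_id") + " / "
                        + row.getString("metric"));
            }
        }
    }
}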
This should be the preferred way of doing things in any case - use tags instead of pushing metadata to the metricId. >>>>> >>>>> If we need to find out if id exists, fetching that from the PK (PartitionKey) index is fast. The only place where we could slow down is if there's lots of tenants with lots of metricIds each and we want to fetch all the metricIds of a single tenant. In that case the fetching of definitions could slow down. How often do users fetch all the tenant metricIds without any filtering? And how performance critical is this sort of behavior? And what use case does list of ids serve (without any information associated to them) ? >>>>> >>>>> If you need to fetch datapoints from a known metricId, there's no need for metrics_idx table writing or reading. So this index writing only applies to listing metrics. >>>>> >>>>> - Micke >>>>> _______________________________________________ >>>>> hawkular-dev mailing list >>>>> hawkular-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>>> >>>>> >>>>> _______________________________________________ >>>>> hawkular-dev mailing list >>>>> hawkular-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>>> _______________________________________________ >>>>> hawkular-dev mailing list >>>>> hawkular-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>>> >>>> >>> >> > > -- > Thomas Segismont > JBoss ON Engineering Team > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev From snegrea at redhat.com Tue Jul 5 20:18:23 2016 From: snegrea at redhat.com (Stefan Negrea) Date: Tue, 5 Jul 2016 19:18:23 -0500 Subject: [Hawkular-dev] Hawkular Metrics 0.17.0 - Release Message-ID: Hello Everybody, I am happy to announce release 0.17.0 of Hawkular Metrics. This release is anchored by performance enhancements and new Grafana Datasource Plugin. Here is a list of major changes: 1. *Grafana Datasource Plugin - Experimental* - A new Grafana 3 datasource plugin is now available for Hawkular Metrics. This plugin integrates natively via the REST API. - For downloads and installation instructions please visit Hawkular Datasource for Grafana - The plugin is developed as an independent project and contributions are welcomed. 2. *InfluxDB API - DEPRECATED* - The InfluxDB API has been deprecated and will be removed in the upcoming release. - This was an addition to make project integrations easier. As the REST interface matured, the role of the InfluxDB compatibility interface was reduced only serve as the Grafana interface. With the release of the native Grafana plugin, this is no longer needed. - For more details: HWKMETRICS-411 3. *Fetching Raw Data - Multiple Metrics - Experimental* - Prior to this release, it was possible to only fetch raw data points for a single metric. This release added POST */query endpoint that allows querying for raw data points for multiple metrics. 
- The endpoints are: - POST /hawkular/metrics/gauges/raw/query - POST /hawkular/metrics/counters/raw/query - POST /hawkular/metrics/counters/rates/query - POST /hawkular/metrics/strings/raw/query - POST /hawkular/metrics/availability/raw/query - POST /hawkular/metrics/metrics/raw/query - The endpoint accepts a list of metrics ids and allows filtering by providing start time, end time, sort order and limit. - For more details: HWKMETRICS-393 4. *Performance Enhancements* - Two Cassandra driver settings (maxConnectionsPerHost and maxRequestsPerConnection) are now user configurable. Part of the update, the default values have been increased from the driver defaults. The new defaults had a significant performance boost for a simple test deployment. The settings are configurable to allow users to optimize driver behavior for larger Hawkular Metrics deployments. (HWKMETRICS-430 ) - On Linux deployments, the Cassandra driver uses Netty native epoll ( HWKMETRICS-418 ) 5. *Cassandra* - Fixed an issue with schema upgrades present in Hawkular Metrics 0.15.0 and 0.16.0. We recommend upgrading from previous versions directly to 0.17.0. For more details: HWKMETRICS-425 - Cassandra 3.7 is now the supported version of Cassandra. Support has been deprecated for Cassandra 3.5. *Hawkular Metrics Clients* - Python: https://github.com/hawkular/hawkular-client-python - Go: https://github.com/hawkular/hawkular-client-go - Ruby: https://github.com/hawkular/hawkular-client-ruby - Java: https://github.com/hawkular/hawkular-client-java Release Links Github Release: https://github.com/hawkular/hawkular-metrics/releases/tag/0.17.0 JBoss Nexus Maven artifacts: http://origin-repository.jboss.org/nexus/content/repositories/public/org/hawkular/metrics/ Jira release tracker: https://issues.jboss.org/browse/HWKMETRICS/fixforversion/12330692 A big "Thank you!" goes to John Sanda, Thomas Segismont, Mike Thompson, Matt Wringe, Michael Burman, and Heiko Rupp for their project contributions. Thank you, Stefan Negrea -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160705/cd30bbd0/attachment.html From hrupp at redhat.com Wed Jul 6 02:39:09 2016 From: hrupp at redhat.com (Heiko W.Rupp) Date: Wed, 06 Jul 2016 08:39:09 +0200 Subject: [Hawkular-dev] Hawkular Metrics 0.17.0 - Release In-Reply-To: References: Message-ID: On 6 Jul 2016, at 2:18, Stefan Negrea wrote: > I am happy to announce release 0.17.0 of Hawkular Metrics. This release is > anchored by performance enhancements and new Grafana Datasource Plugin. Good news, congrats everyone involved. That release is btw already inside of Hawkular-services 0.0.5 and the ruby-gem v2.2.0 makes use of one a feature (HWKMETRICS-393) From theute at redhat.com Wed Jul 6 03:34:43 2016 From: theute at redhat.com (Thomas Heute) Date: Wed, 6 Jul 2016 09:34:43 +0200 Subject: [Hawkular-dev] Hawkular Metrics 0.17.0 - Release In-Reply-To: References: Message-ID: Congrats ! On Wed, Jul 6, 2016 at 8:39 AM, Heiko W.Rupp wrote: > On 6 Jul 2016, at 2:18, Stefan Negrea wrote: > > > I am happy to announce release 0.17.0 of Hawkular Metrics. This release > is > > anchored by performance enhancements and new Grafana Datasource Plugin. > > Good news, congrats everyone involved. 
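A sketch of calling the new multi-metric raw query endpoint from the release notes (HWKMETRICS-393) follows. The JSON property names ("ids", "start", "end", "limit", "order") are assumptions inferred from the description; the generated swagger documentation is the authoritative reference.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Illustrative call to POST /hawkular/metrics/gauges/raw/query. The JSON field
// names below are assumptions based on the release-note description (list of
// metric ids plus start, end, limit, order); host, tenant and ids are examples.
public class MultiMetricRawQuery {

    public static void main(String[] args) throws Exception {
        String body = "{"
                + "\"ids\": [\"metric-one\", \"metric-two\"],"
                + "\"start\": " + (System.currentTimeMillis() - 8 * 3600 * 1000L) + ","
                + "\"end\": " + System.currentTimeMillis() + ","
                + "\"limit\": 100,"
                + "\"order\": \"DESC\""
                + "}";

        URL url = new URL("http://localhost:8080/hawkular/metrics/gauges/raw/query");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestProperty("Hawkular-Tenant", "my-tenant");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}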
> > That release is btw already inside of Hawkular-services 0.0.5 > and the ruby-gem v2.2.0 makes use of one a feature (HWKMETRICS-393) > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160706/be81d04c/attachment.html From mazz at redhat.com Wed Jul 6 21:13:37 2016 From: mazz at redhat.com (John Mazzitelli) Date: Wed, 6 Jul 2016 21:13:37 -0400 (EDT) Subject: [Hawkular-dev] agent using custom metric IDs In-Reply-To: <180852793.2713865.1467851568205.JavaMail.zimbra@redhat.com> Message-ID: <268194573.2718225.1467854017722.JavaMail.zimbra@redhat.com> ====== Agent is introducing two changes: 1. Metric Type definitions created by the agent will have the same ID as before, but its Name is changing (probably does not affect anyone). 2. Clients (like UI, HawkFX, etc) should no longer assume the agent's h-inventory metric definition IDs match h-metric metric IDs - instead, they must look at the "metric-id" property on the h-inventory metric definition to know how to look up the actual metric data in h-metrics. Will affect all clients, but metric IDs will default to what they are today - so nothing changes and thus nothing will break today if you run with out of box configuration. ====== There is a use-case where the agent needs to support custom metric IDs (that is, rather than accepting the out-of-box metric IDs created by the agent, allow the user to define what the metric IDs should look like). See https://issues.jboss.org/browse/HWKAGENT-78 As a refresher, remember that when you create resources in inventory, those resources can be associated with one or more "metric" definitions. Those resource metrics are themselves associated with a "metric type" definition. Today, when the agent stores metric data into Hawkular Metrics, it stores the data under the ID of the "metric" that is associated with the resource (so the h-inventory metric ID is the same as the h-metric metric ID by definition - at least for the data the hawkular wildfly agent inserts). I am proposing two changes in the PR: https://github.com/hawkular/hawkular-agent/pull/226 First, today, the "metric type" definition that the agent creates has an ID and a Name that are identical. I am changing this so the ID stays the same (which is the metric set name, followed by "~", followed by the name of the metric -- e.g. if there was a that contains a , the metric type ID would be "this~that"), but the Name is only the name without the set name (e.g. the name would be "that" in the previous example). The above is a minor change, and I doubt anyone is affected by it. But I point it out just in case. Second, it should no longer be assumed that the inventory's resource metric ID is identical to the h-metric's metric ID. This second change will potentially affect everyone (I know it affects Heiko's HawkFX :) Now, that said, nothing really changes now, because the defaults will remain as they are (that is, the h-inventory's metric ID will still be exactly the same as the h-metrics ID - the agent keeps them identical). The change happens when the user actually configures the agent with a custom metric ID template (e.g. ). This means h-metric IDs will be DIFFERENT than h-inventory metric IDs. 
How then does a client know what h-metric IDs to look for if they only have h-inventory metric definitions? Well, recall that inventory allows for properties to be associated with any entity. I use this feature here. Rather than rely on an implicit rule ("h-inventory metric ID is the same as h-metric metric ID") I explicitly define this linkage in a property called "metric-id" on the h-inventory metric definition. Out of box, that property's value will be identical to the h-inventory metric ID (and hence why nothing really changes - since the explicit rule in this case provides the same behavior as if following the old implicit rule). In fact, I'm considering if I should set that property at all if its the same as the h-inventory ID - I think it might be better to only set a "metric-id" property if it is different. But this would require clients to know about the implicit rule if there is no metric-id property set ("is there a metric-id property set? No? Then use the h-inventory metric ID for the h-metric metric ID"). For example, see here (this is a live example I copied from the "raw" inventory JSON that HawkFX gave me for a metric) - this is the h-inventory entity definition for the metric "Heap Used" on my WildFly Server resource - notice the "properties" map has a "metric-id" value that is DIFFERENT than the "id" - that "metric-id" is something I customized in my agent config in standalone.xml (well, I used the swarm agent, so I put it in the swarm config, but its basically the same thing): { "path": "/t;hawkular/f;mazz/m;MI~R~%5Bmazz%2FWildFly~~%5D~MT~WildFly%20Memory%20Metrics~Heap%20Used", "properties": { "__identityHash": "70e59a5d427632223da36c225ba6ef8572985", "metric-id": "feed=mazz__msn=WildFly__typeName=Heap Used__resName=WildFly Server [WildFly]__resId=WildFly~~__typeId=WildFly Memory Metrics~Heap Used" }, "name": "Heap Used", "identityHash": "70e59a5d427632223da36c225ba6ef8572985", "type": { "path": "/t;hawkular/f;mazz/mt;WildFly%20Memory%20Metrics~Heap%20Used", "name": "Heap Used", "identityHash": "3be5b5fdabed925ac46fdc6d8295e34bbd3147a", "unit": "BYTES", "type": "GAUGE", "collectionInterval": 30, "id": "WildFly Memory Metrics~Heap Used" }, "id": "MI~R~[mazz/WildFly~~]~MT~WildFly Memory Metrics~Heap Used" } Notice this: feed=mazz__msn=WildFly__typeName=Heap Used__resName=WildFly Server [WildFly]__resId=WildFly~~__typeId=WildFly Memory Metrics~Heap Used is different from this: MI~R~[mazz/WildFly~~]~MT~WildFly Memory Metrics~Heap Used And that's the issue. Clients have to know to look for the "metric-id" property and use it when looking up metric data in h-metrics (so if you want to graph the data, you have to ask h-metrics for the data associated with the value found in the "metric-id" property). From lponce at redhat.com Thu Jul 7 02:56:48 2016 From: lponce at redhat.com (Lucas Ponce) Date: Thu, 7 Jul 2016 02:56:48 -0400 (EDT) Subject: [Hawkular-dev] agent using custom metric IDs In-Reply-To: <268194573.2718225.1467854017722.JavaMail.zimbra@redhat.com> References: <268194573.2718225.1467854017722.JavaMail.zimbra@redhat.com> Message-ID: <1152184140.2770991.1467874608320.JavaMail.zimbra@redhat.com> Hi John, Thanks for bringing this as it might have potential side effects. I haven't reviewed your proposal yet but you can take an idea about how MiQ links Hawkular metrics inside something that can be managed internally of MiQ. 
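The client-side rule described above ("use the metric-id property if present, otherwise fall back to the inventory id") boils down to a small lookup. A sketch follows, assuming the inventory metric definition has already been parsed into its id plus a properties map (JSON handling omitted).

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Client-side resolution rule from the thread: prefer the "metric-id" property of
// the inventory metric definition, fall back to the inventory id when the property
// is absent. The input shape (an already-parsed properties map plus the inventory
// id) is an assumption to keep the sketch self-contained.
public final class MetricIdResolver {

    private MetricIdResolver() {
    }

    public static String resolveMetricsId(String inventoryId, Map<String, String> properties) {
        if (properties != null) {
            String explicit = properties.get("metric-id");
            if (explicit != null && !explicit.isEmpty()) {
                return explicit;
            }
        }
        // Out of the box the agent keeps both ids identical, so the fallback
        // preserves today's behavior.
        return inventoryId;
    }

    public static void main(String[] args) {
        String inventoryId = "MI~R~[mazz/WildFly~~]~MT~WildFly Memory Metrics~Heap Used";
        Map<String, String> props = new HashMap<>();
        props.put("metric-id",
                "feed=mazz__msn=WildFly__typeName=Heap Used__resName=WildFly Server "
                        + "[WildFly]__resId=WildFly~~__typeId=WildFly Memory Metrics~Heap Used");

        // With the property present the custom id wins; without it the ids stay identical.
        System.out.println(resolveMetricsId(inventoryId, props));
        System.out.println(resolveMetricsId(inventoryId, Collections.<String, String>emptyMap()));
    }
}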
https://github.com/ManageIQ/manageiq/blob/master/product/live_metrics/middleware_server.yaml https://github.com/ManageIQ/manageiq/blob/master/product/live_metrics/middleware_datasource.yaml I guess that if the id of the type is still valid, then perhaps nothing affects but just to be sure I chime in here just to validate it. I am afraid about potential changes at this stage. ----- Mensaje original ----- > De: "John Mazzitelli" > Para: "Discussions around Hawkular development" > Enviados: Jueves, 7 de Julio 2016 3:13:37 > Asunto: [Hawkular-dev] agent using custom metric IDs > > ====== > > Agent is introducing two changes: > > 1. Metric Type definitions created by the agent will have the same ID as > before, but its Name is changing (probably does not affect anyone). > > 2. Clients (like UI, HawkFX, etc) should no longer assume the agent's > h-inventory metric definition IDs match h-metric metric IDs - instead, they > must look at the "metric-id" property on the h-inventory metric definition > to know how to look up the actual metric data in h-metrics. Will affect all > clients, but metric IDs will default to what they are today - so nothing > changes and thus nothing will break today if you run with out of box > configuration. > > ====== > > There is a use-case where the agent needs to support custom metric IDs (that > is, rather than accepting the out-of-box metric IDs created by the agent, > allow the user to define what the metric IDs should look like). See > https://issues.jboss.org/browse/HWKAGENT-78 > > As a refresher, remember that when you create resources in inventory, those > resources can be associated with one or more "metric" definitions. Those > resource metrics are themselves associated with a "metric type" definition. > Today, when the agent stores metric data into Hawkular Metrics, it stores > the data under the ID of the "metric" that is associated with the resource > (so the h-inventory metric ID is the same as the h-metric metric ID by > definition - at least for the data the hawkular wildfly agent inserts). > > I am proposing two changes in the PR: > https://github.com/hawkular/hawkular-agent/pull/226 > > First, today, the "metric type" definition that the agent creates has an ID > and a Name that are identical. I am changing this so the ID stays the same > (which is the metric set name, followed by "~", followed by the name of the > metric -- e.g. if there was a that contains a > , the metric type ID would be "this~that"), but the > Name is only the name without the set name (e.g. the name would be "that" in > the previous example). > > The above is a minor change, and I doubt anyone is affected by it. But I > point it out just in case. > > Second, it should no longer be assumed that the inventory's resource metric > ID is identical to the h-metric's metric ID. > > This second change will potentially affect everyone (I know it affects > Heiko's HawkFX :) > > Now, that said, nothing really changes now, because the defaults will remain > as they are (that is, the h-inventory's metric ID will still be exactly the > same as the h-metrics ID - the agent keeps them identical). The change > happens when the user actually configures the agent with a custom metric ID > template (e.g. ...>). This means h-metric IDs will be DIFFERENT than h-inventory metric > IDs. > > How then does a client know what h-metric IDs to look for if they only have > h-inventory metric definitions? Well, recall that inventory allows for > properties to be associated with any entity. 
I use this feature here. Rather > than rely on an implicit rule ("h-inventory metric ID is the same as > h-metric metric ID") I explicitly define this linkage in a property called > "metric-id" on the h-inventory metric definition. Out of box, that > property's value will be identical to the h-inventory metric ID (and hence > why nothing really changes - since the explicit rule in this case provides > the same behavior as if following the old implicit rule). In fact, I'm > considering if I should set that property at all if its the same as the > h-inventory ID - I think it might be better to only set a "metric-id" > property if it is different. But this would require clients to know about > the implicit rule if there is no metric-id property set ("is there a > metric-id property set? No? Then use the h-inven! > tory metric ID for the h-metric metric ID"). > > For example, see here (this is a live example I copied from the "raw" > inventory JSON that HawkFX gave me for a metric) - this is the h-inventory > entity definition for the metric "Heap Used" on my WildFly Server resource - > notice the "properties" map has a "metric-id" value that is DIFFERENT than > the "id" - that "metric-id" is something I customized in my agent config in > standalone.xml (well, I used the swarm agent, so I put it in the swarm > config, but its basically the same thing): > > { > "path": > "/t;hawkular/f;mazz/m;MI~R~%5Bmazz%2FWildFly~~%5D~MT~WildFly%20Memory%20Metrics~Heap%20Used", > "properties": { > "__identityHash": "70e59a5d427632223da36c225ba6ef8572985", > "metric-id": "feed=mazz__msn=WildFly__typeName=Heap Used__resName=WildFly > Server [WildFly]__resId=WildFly~~__typeId=WildFly Memory Metrics~Heap > Used" > }, > "name": "Heap Used", > "identityHash": "70e59a5d427632223da36c225ba6ef8572985", > "type": { > "path": "/t;hawkular/f;mazz/mt;WildFly%20Memory%20Metrics~Heap%20Used", > "name": "Heap Used", > "identityHash": "3be5b5fdabed925ac46fdc6d8295e34bbd3147a", > "unit": "BYTES", > "type": "GAUGE", > "collectionInterval": 30, > "id": "WildFly Memory Metrics~Heap Used" > }, > "id": "MI~R~[mazz/WildFly~~]~MT~WildFly Memory Metrics~Heap Used" > } > > Notice this: > > feed=mazz__msn=WildFly__typeName=Heap Used__resName=WildFly Server > [WildFly]__resId=WildFly~~__typeId=WildFly Memory Metrics~Heap Used > > is different from this: > > MI~R~[mazz/WildFly~~]~MT~WildFly Memory Metrics~Heap Used > > And that's the issue. Clients have to know to look for the "metric-id" > property and use it when looking up metric data in h-metrics (so if you want > to graph the data, you have to ask h-metrics for the data associated with > the value found in the "metric-id" property). 
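The expanded id in the example above (feed=..., resId=..., typeName=...) suggests the custom metric ID template is filled in by token substitution. Here is a sketch of generic token expansion; the token names used (%FeedId, %ResourceId, %TypeName) are invented for illustration and are not the agent's actual token set.

import java.util.HashMap;
import java.util.Map;

// Sketch of template-token expansion for custom metric ids. The token names are
// hypothetical placeholders chosen to make the example run; consult the agent
// configuration reference for the real tokens it supports.
public class MetricIdTemplateExample {

    public static String expand(String template, Map<String, String> tokens) {
        String result = template;
        for (Map.Entry<String, String> token : tokens.entrySet()) {
            result = result.replace(token.getKey(), token.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> tokens = new HashMap<>();
        tokens.put("%FeedId", "mazz");
        tokens.put("%ResourceId", "WildFly~~");
        tokens.put("%TypeName", "Heap Used");

        String template = "feed=%FeedId__resId=%ResourceId__typeName=%TypeName";
        System.out.println(expand(template, tokens));
        // -> feed=mazz__resId=WildFly~~__typeName=Heap Used
    }
}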
> > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > From theute at redhat.com Thu Jul 7 04:58:46 2016 From: theute at redhat.com (Thomas Heute) Date: Thu, 7 Jul 2016 10:58:46 +0200 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: <6091FA5E-9110-4F49-BAD6-A7A297E075F9@redhat.com> References: <1265553183.102271604.1465909592402.JavaMail.zimbra@redhat.com> <43292415.102368387.1465924138218.JavaMail.zimbra@redhat.com> <0D5FC92F-40C4-493F-A57B-206EA804DD58@redhat.com> <587e0bc6-44d9-3f14-e872-5a58b2058223@redhat.com> <23adbfcc-1e2f-2e92-bb4d-d205618e7c82@redhat.com> <6091FA5E-9110-4F49-BAD6-A7A297E075F9@redhat.com> Message-ID: So here is a very rough idea of how it would look like: http://209.132.178.114:10188/ Content need to be adapted, new pages to be created, but hopefully you get the idea On Tue, Jul 5, 2016 at 8:27 PM, John Sanda wrote: > > On Jul 5, 2016, at 10:21 AM, Thomas Segismont wrote: > > > > Le 05/07/2016 ? 16:05, Stefan Negrea a ?crit : > > On Tue, Jul 5, 2016 at 8:03 AM, Thomas Segismont >> wrote: > > > > Le 05/07/2016 ? 14:59, Heiko W.Rupp a ?crit : > > 2) The Grafana plugin should be moved under Metrics because is for Metrics > > and only Metrics. > > If this is true - can we make it work with H-services as well? > > > The Grafana plugin works with all active flavors of Metrics: standalone, > Openshift-Metrics and Hawkular-Services. > > I'm not sure what Stefan meant. > > > The Grafana plugins works with Metrics deployed on all distributions > however, the plugin itself can only be used with the Metrics project, > there are no projects (such as Alerts, or Inventory) that will ever > integrate with it. That is why I think it should be under the Metrics > project and not in another place. The integration itself is very > specific to just Metrics, not the entire Hawkular Services. > > > I see what you meant now. But we can't presume anything about other > services roadmaps. For example, the datasource plugin annotation feature > could be implemented with requests to an event service. > > Anyway, since it should be able to connect to Metrics in different > environments (H-Services, OS-Metrics and standalone), I err on the side > of promoting it as a top level project. > > > I think a top-level project makes the most sense. > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160707/96886d38/attachment.html From abonas at redhat.com Thu Jul 7 05:37:30 2016 From: abonas at redhat.com (Alissa Bonas) Date: Thu, 7 Jul 2016 12:37:30 +0300 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: References: <1265553183.102271604.1465909592402.JavaMail.zimbra@redhat.com> <43292415.102368387.1465924138218.JavaMail.zimbra@redhat.com> <0D5FC92F-40C4-493F-A57B-206EA804DD58@redhat.com> <587e0bc6-44d9-3f14-e872-5a58b2058223@redhat.com> <23adbfcc-1e2f-2e92-bb4d-d205618e7c82@redhat.com> <6091FA5E-9110-4F49-BAD6-A7A297E075F9@redhat.com> Message-ID: Couple of suggestions: 1. In the "Hawkular features" section in homepage make the icons clickable. right now the only way to get more info is to click the "more" part. 2. Top level menu font color is a really pale grey so everything looks disabled. 3. 
Community-Connect leads to page named "Join". Perhaps it would more clear to make the link and the page name the same (and I would call it "Get involved" anyway) On Thu, Jul 7, 2016 at 11:58 AM, Thomas Heute wrote: > So here is a very rough idea of how it would look like: > http://209.132.178.114:10188/ > > Content need to be adapted, new pages to be created, but hopefully you get > the idea > > On Tue, Jul 5, 2016 at 8:27 PM, John Sanda wrote: > >> >> On Jul 5, 2016, at 10:21 AM, Thomas Segismont >> wrote: >> >> >> >> Le 05/07/2016 ? 16:05, Stefan Negrea a ?crit : >> >> On Tue, Jul 5, 2016 at 8:03 AM, Thomas Segismont > >> wrote: >> >> >> >> Le 05/07/2016 ? 14:59, Heiko W.Rupp a ?crit : >> >> 2) The Grafana plugin should be moved under Metrics because is for Metrics >> >> and only Metrics. >> >> If this is true - can we make it work with H-services as well? >> >> >> The Grafana plugin works with all active flavors of Metrics: >> standalone, >> Openshift-Metrics and Hawkular-Services. >> >> I'm not sure what Stefan meant. >> >> >> The Grafana plugins works with Metrics deployed on all distributions >> however, the plugin itself can only be used with the Metrics project, >> there are no projects (such as Alerts, or Inventory) that will ever >> integrate with it. That is why I think it should be under the Metrics >> project and not in another place. The integration itself is very >> specific to just Metrics, not the entire Hawkular Services. >> >> >> I see what you meant now. But we can't presume anything about other >> services roadmaps. For example, the datasource plugin annotation feature >> could be implemented with requests to an event service. >> >> Anyway, since it should be able to connect to Metrics in different >> environments (H-Services, OS-Metrics and standalone), I err on the side >> of promoting it as a top level project. >> >> >> I think a top-level project makes the most sense. >> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> >> > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160707/bc607631/attachment.html From theute at redhat.com Thu Jul 7 05:48:24 2016 From: theute at redhat.com (Thomas Heute) Date: Thu, 7 Jul 2016 11:48:24 +0200 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: References: <1265553183.102271604.1465909592402.JavaMail.zimbra@redhat.com> <43292415.102368387.1465924138218.JavaMail.zimbra@redhat.com> <0D5FC92F-40C4-493F-A57B-206EA804DD58@redhat.com> <587e0bc6-44d9-3f14-e872-5a58b2058223@redhat.com> <23adbfcc-1e2f-2e92-bb4d-d205618e7c82@redhat.com> <6091FA5E-9110-4F49-BAD6-A7A297E075F9@redhat.com> Message-ID: Thanks for the comments, I'm really looking more for feedback on the organization of the content, all the content is taken from the existing. For the 3rd point, the current website has 2 very similar pages (I only conserved one here as it was a quick shot, but the 2 needs to be merged) http://www.hawkular.org/community/index.html http://www.hawkular.org/community/join.html On Thu, Jul 7, 2016 at 11:37 AM, Alissa Bonas wrote: > Couple of suggestions: > > 1. In the "Hawkular features" section in homepage make the icons > clickable. 
right now the only way to get more info is to click the "more" > part. > 2. Top level menu font color is a really pale grey so everything looks > disabled. > 3. Community-Connect leads to page named "Join". Perhaps it would more > clear to make the link and the page name the same (and I would call it "Get > involved" anyway) > > > > > On Thu, Jul 7, 2016 at 11:58 AM, Thomas Heute wrote: > >> So here is a very rough idea of how it would look like: >> http://209.132.178.114:10188/ >> >> Content need to be adapted, new pages to be created, but hopefully you >> get the idea >> >> On Tue, Jul 5, 2016 at 8:27 PM, John Sanda wrote: >> >>> >>> On Jul 5, 2016, at 10:21 AM, Thomas Segismont >>> wrote: >>> >>> >>> >>> Le 05/07/2016 ? 16:05, Stefan Negrea a ?crit : >>> >>> On Tue, Jul 5, 2016 at 8:03 AM, Thomas Segismont >> >> wrote: >>> >>> >>> >>> Le 05/07/2016 ? 14:59, Heiko W.Rupp a ?crit : >>> >>> 2) The Grafana plugin should be moved under Metrics because is for >>> Metrics >>> >>> and only Metrics. >>> >>> If this is true - can we make it work with H-services as well? >>> >>> >>> The Grafana plugin works with all active flavors of Metrics: >>> standalone, >>> Openshift-Metrics and Hawkular-Services. >>> >>> I'm not sure what Stefan meant. >>> >>> >>> The Grafana plugins works with Metrics deployed on all distributions >>> however, the plugin itself can only be used with the Metrics project, >>> there are no projects (such as Alerts, or Inventory) that will ever >>> integrate with it. That is why I think it should be under the Metrics >>> project and not in another place. The integration itself is very >>> specific to just Metrics, not the entire Hawkular Services. >>> >>> >>> I see what you meant now. But we can't presume anything about other >>> services roadmaps. For example, the datasource plugin annotation feature >>> >>> could be implemented with requests to an event service. >>> >>> Anyway, since it should be able to connect to Metrics in different >>> environments (H-Services, OS-Metrics and standalone), I err on the side >>> of promoting it as a top level project. >>> >>> >>> I think a top-level project makes the most sense. >>> >>> _______________________________________________ >>> hawkular-dev mailing list >>> hawkular-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>> >>> >> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> >> > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160707/236d9bd1/attachment-0001.html From gbrown at redhat.com Thu Jul 7 06:25:26 2016 From: gbrown at redhat.com (Gary Brown) Date: Thu, 7 Jul 2016 06:25:26 -0400 (EDT) Subject: [Hawkular-dev] Hawkular.org In-Reply-To: References: <587e0bc6-44d9-3f14-e872-5a58b2058223@redhat.com> <23adbfcc-1e2f-2e92-bb4d-d205618e7c82@redhat.com> <6091FA5E-9110-4F49-BAD6-A7A297E075F9@redhat.com> Message-ID: <488951763.4062228.1467887126176.JavaMail.zimbra@redhat.com> Hi Thomas Think it looks fine - as long as we are not going to be providing any more distributions (e.g. Metrics + Alerts) - otherwise it will be extending the top level menu. 
Apologies if already discussed, but just wanted to clarify the purpose of having Metrics as separate distribution on the community project website. I understand that Metrics is used as a standalone component within openshift, but wondering whether that means it should be publicly available as such on the hawkular website, encouraging other community users to use it as a separate component. Wondering whether for simplicity, it would be better to only provide hawkular-services distribution publicly, even if user only wants metrics, and then the Metrics build is just an internal packaging option provided to Openshift? Regards Gary ----- Original Message ----- > Thanks for the comments, I'm really looking more for feedback on the > organization of the content, all the content is taken from the existing. > > For the 3rd point, the current website has 2 very similar pages (I only > conserved one here as it was a quick shot, but the 2 needs to be merged) > http://www.hawkular.org/community/index.html > http://www.hawkular.org/community/join.html > > > On Thu, Jul 7, 2016 at 11:37 AM, Alissa Bonas < abonas at redhat.com > wrote: > > > > Couple of suggestions: > > 1. In the "Hawkular features" section in homepage make the icons clickable. > right now the only way to get more info is to click the "more" part. > 2. Top level menu font color is a really pale grey so everything looks > disabled. > 3. Community-Connect leads to page named "Join". Perhaps it would more clear > to make the link and the page name the same (and I would call it "Get > involved" anyway) > > > > > On Thu, Jul 7, 2016 at 11:58 AM, Thomas Heute < theute at redhat.com > wrote: > > > > So here is a very rough idea of how it would look like: > http://209.132.178.114:10188/ > > Content need to be adapted, new pages to be created, but hopefully you get > the idea > > On Tue, Jul 5, 2016 at 8:27 PM, John Sanda < jsanda at redhat.com > wrote: > > > > > > > > On Jul 5, 2016, at 10:21 AM, Thomas Segismont < tsegismo at redhat.com > wrote: > > > > Le 05/07/2016 ? 16:05, Stefan Negrea a ?crit : > > > On Tue, Jul 5, 2016 at 8:03 AM, Thomas Segismont < tsegismo at redhat.com > < mailto:tsegismo at redhat.com >> wrote: > > > > Le 05/07/2016 ? 14:59, Heiko W.Rupp a ?crit : > > > > > 2) The Grafana plugin should be moved under Metrics because is for Metrics > > > and only Metrics. > If this is true - can we make it work with H-services as well? > > > The Grafana plugin works with all active flavors of Metrics: standalone, > Openshift-Metrics and Hawkular-Services. > > I'm not sure what Stefan meant. > > > The Grafana plugins works with Metrics deployed on all distributions > however, the plugin itself can only be used with the Metrics project, > there are no projects (such as Alerts, or Inventory) that will ever > integrate with it. That is why I think it should be under the Metrics > project and not in another place. The integration itself is very > specific to just Metrics, not the entire Hawkular Services. > > I see what you meant now. But we can't presume anything about other > services roadmaps. For example, the datasource plugin annotation feature > could be implemented with requests to an event service. > > Anyway, since it should be able to connect to Metrics in different > environments (H-Services, OS-Metrics and standalone), I err on the side > of promoting it as a top level project. > > > I think a top-level project makes the most sense. 
> > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > From theute at redhat.com Thu Jul 7 06:46:35 2016 From: theute at redhat.com (Thomas Heute) Date: Thu, 7 Jul 2016 12:46:35 +0200 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: <488951763.4062228.1467887126176.JavaMail.zimbra@redhat.com> References: <587e0bc6-44d9-3f14-e872-5a58b2058223@redhat.com> <23adbfcc-1e2f-2e92-bb4d-d205618e7c82@redhat.com> <6091FA5E-9110-4F49-BAD6-A7A297E075F9@redhat.com> <488951763.4062228.1467887126176.JavaMail.zimbra@redhat.com> Message-ID: Hawkular Services should indeed be the default choice, we heard of multiple (potential) users of Hawkular Metrics asking about alerts and/or inventory support... Metrics is the fallback if one happen to only use metrics. At least the message of Hawkular Metrics is simple, it's a TSDB, period. I'm fine leaving the 3 top level projects, and it should definitely not grow. Note that I didn't expose Alerts there to not add confusion even though it can run on its own. Thomas On Thu, Jul 7, 2016 at 12:25 PM, Gary Brown wrote: > Hi Thomas > > Think it looks fine - as long as we are not going to be providing any more > distributions (e.g. Metrics + Alerts) - otherwise it will be extending the > top level menu. > > Apologies if already discussed, but just wanted to clarify the purpose of > having Metrics as separate distribution on the community project website. > > I understand that Metrics is used as a standalone component within > openshift, but wondering whether that means it should be publicly available > as such on the hawkular website, encouraging other community users to use > it as a separate component. > > Wondering whether for simplicity, it would be better to only provide > hawkular-services distribution publicly, even if user only wants metrics, > and then the Metrics build is just an internal packaging option provided to > Openshift? > > Regards > Gary > > > ----- Original Message ----- > > Thanks for the comments, I'm really looking more for feedback on the > > organization of the content, all the content is taken from the existing. > > > > For the 3rd point, the current website has 2 very similar pages (I only > > conserved one here as it was a quick shot, but the 2 needs to be merged) > > http://www.hawkular.org/community/index.html > > http://www.hawkular.org/community/join.html > > > > > > On Thu, Jul 7, 2016 at 11:37 AM, Alissa Bonas < abonas at redhat.com > > wrote: > > > > > > > > Couple of suggestions: > > > > 1. In the "Hawkular features" section in homepage make the icons > clickable. > > right now the only way to get more info is to click the "more" part. > > 2. Top level menu font color is a really pale grey so everything looks > > disabled. > > 3. Community-Connect leads to page named "Join". 
Perhaps it would more > clear > > to make the link and the page name the same (and I would call it "Get > > involved" anyway) > > > > > > > > > > On Thu, Jul 7, 2016 at 11:58 AM, Thomas Heute < theute at redhat.com > > wrote: > > > > > > > > So here is a very rough idea of how it would look like: > > http://209.132.178.114:10188/ > > > > Content need to be adapted, new pages to be created, but hopefully you > get > > the idea > > > > On Tue, Jul 5, 2016 at 8:27 PM, John Sanda < jsanda at redhat.com > wrote: > > > > > > > > > > > > > > > > On Jul 5, 2016, at 10:21 AM, Thomas Segismont < tsegismo at redhat.com > > wrote: > > > > > > > > Le 05/07/2016 ? 16:05, Stefan Negrea a ?crit : > > > > > > On Tue, Jul 5, 2016 at 8:03 AM, Thomas Segismont < tsegismo at redhat.com > > < mailto:tsegismo at redhat.com >> wrote: > > > > > > > > Le 05/07/2016 ? 14:59, Heiko W.Rupp a ?crit : > > > > > > > > > > 2) The Grafana plugin should be moved under Metrics because is for > Metrics > > > > > > and only Metrics. > > If this is true - can we make it work with H-services as well? > > > > > > The Grafana plugin works with all active flavors of Metrics: standalone, > > Openshift-Metrics and Hawkular-Services. > > > > I'm not sure what Stefan meant. > > > > > > The Grafana plugins works with Metrics deployed on all distributions > > however, the plugin itself can only be used with the Metrics project, > > there are no projects (such as Alerts, or Inventory) that will ever > > integrate with it. That is why I think it should be under the Metrics > > project and not in another place. The integration itself is very > > specific to just Metrics, not the entire Hawkular Services. > > > > I see what you meant now. But we can't presume anything about other > > services roadmaps. For example, the datasource plugin annotation feature > > could be implemented with requests to an event service. > > > > Anyway, since it should be able to connect to Metrics in different > > environments (H-Services, OS-Metrics and standalone), I err on the side > > of promoting it as a top level project. > > > > > > I think a top-level project makes the most sense. > > > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > > > > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > > > > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > > > > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160707/cece485f/attachment.html From gbrown at redhat.com Thu Jul 7 07:49:11 2016 From: gbrown at redhat.com (Gary Brown) Date: Thu, 7 Jul 2016 07:49:11 -0400 (EDT) Subject: [Hawkular-dev] Small PR to review: https://github.com/hawkular/hawkular-apm/pull/471 In-Reply-To: <696708569.4071818.1467892084614.JavaMail.zimbra@redhat.com> Message-ID: <46411179.4071842.1467892151884.JavaMail.zimbra@redhat.com> Hi Anyone available to review this PR, which simply changes the location of the APM instrumentation rule configuration files from standalone/data to standalone/configuration? Regards Gary From mazz at redhat.com Thu Jul 7 08:45:50 2016 From: mazz at redhat.com (John Mazzitelli) Date: Thu, 7 Jul 2016 08:45:50 -0400 (EDT) Subject: [Hawkular-dev] agent using custom metric IDs In-Reply-To: <1152184140.2770991.1467874608320.JavaMail.zimbra@redhat.com> References: <268194573.2718225.1467854017722.JavaMail.zimbra@redhat.com> <1152184140.2770991.1467874608320.JavaMail.zimbra@redhat.com> Message-ID: <1654243242.2852543.1467895550205.JavaMail.zimbra@redhat.com> > I haven't reviewed your proposal yet but you can take an idea about how MiQ > links Hawkular metrics inside something that can be managed internally of > MiQ. > > https://github.com/ManageIQ/manageiq/blob/master/product/live_metrics/middleware_server.yaml > https://github.com/ManageIQ/manageiq/blob/master/product/live_metrics/middleware_datasource.yaml > > I guess that if the id of the type is still valid, then perhaps nothing > affects but just to be sure I chime in here just to validate it. Right. This is OK. The comments in that MiQ code says "It maps the native id used in the provider" - so this is OK because the native IDs (the metric type IDs) remain the same as they are today, nothing changes (what you have there in the miq code - "setName~typeName" is what the IDs will still be.) The only thing changing is the type NAME (which will no longer be the same as the ID - I strip the "setName~" from it). The reason for this change is because if a person wants to define their own custom metric ID, I needed a way to give them the ability to inject the metric name in their custom metric ID. They will most likely want to use the metric name as part of the metric ID template - so I now support a token of "%MetricTypeName" which users can use - it will be replaced with the name of the metric type (not to include the "setName~" string). From abonas at redhat.com Thu Jul 7 11:15:30 2016 From: abonas at redhat.com (Alissa Bonas) Date: Thu, 7 Jul 2016 18:15:30 +0300 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: References: <1265553183.102271604.1465909592402.JavaMail.zimbra@redhat.com> <43292415.102368387.1465924138218.JavaMail.zimbra@redhat.com> <0D5FC92F-40C4-493F-A57B-206EA804DD58@redhat.com> <587e0bc6-44d9-3f14-e872-5a58b2058223@redhat.com> <23adbfcc-1e2f-2e92-bb4d-d205618e7c82@redhat.com> <6091FA5E-9110-4F49-BAD6-A7A297E075F9@redhat.com> Message-ID: iirc we discussed adding an Events section (events as in meetups/conferences) somewhere visible like front page? Also, a summary of version compatibility matrix of all components with each other would be helpful imo. On Thu, Jul 7, 2016 at 12:48 PM, Thomas Heute wrote: > Thanks for the comments, I'm really looking more for feedback on the > organization of the content, all the content is taken from the existing. 
> > For the 3rd point, the current website has 2 very similar pages (I only > conserved one here as it was a quick shot, but the 2 needs to be merged) > http://www.hawkular.org/community/index.html > http://www.hawkular.org/community/join.html > > > On Thu, Jul 7, 2016 at 11:37 AM, Alissa Bonas wrote: > >> Couple of suggestions: >> >> 1. In the "Hawkular features" section in homepage make the icons >> clickable. right now the only way to get more info is to click the "more" >> part. >> 2. Top level menu font color is a really pale grey so everything looks >> disabled. >> 3. Community-Connect leads to page named "Join". Perhaps it would more >> clear to make the link and the page name the same (and I would call it "Get >> involved" anyway) >> >> >> >> >> On Thu, Jul 7, 2016 at 11:58 AM, Thomas Heute wrote: >> >>> So here is a very rough idea of how it would look like: >>> http://209.132.178.114:10188/ >>> >>> Content need to be adapted, new pages to be created, but hopefully you >>> get the idea >>> >>> On Tue, Jul 5, 2016 at 8:27 PM, John Sanda wrote: >>> >>>> >>>> On Jul 5, 2016, at 10:21 AM, Thomas Segismont >>>> wrote: >>>> >>>> >>>> >>>> Le 05/07/2016 ? 16:05, Stefan Negrea a ?crit : >>>> >>>> On Tue, Jul 5, 2016 at 8:03 AM, Thomas Segismont >>> >> wrote: >>>> >>>> >>>> >>>> Le 05/07/2016 ? 14:59, Heiko W.Rupp a ?crit : >>>> >>>> 2) The Grafana plugin should be moved under Metrics because is for >>>> Metrics >>>> >>>> and only Metrics. >>>> >>>> If this is true - can we make it work with H-services as well? >>>> >>>> >>>> The Grafana plugin works with all active flavors of Metrics: >>>> standalone, >>>> Openshift-Metrics and Hawkular-Services. >>>> >>>> I'm not sure what Stefan meant. >>>> >>>> >>>> The Grafana plugins works with Metrics deployed on all distributions >>>> however, the plugin itself can only be used with the Metrics project, >>>> there are no projects (such as Alerts, or Inventory) that will ever >>>> integrate with it. That is why I think it should be under the Metrics >>>> project and not in another place. The integration itself is very >>>> specific to just Metrics, not the entire Hawkular Services. >>>> >>>> >>>> I see what you meant now. But we can't presume anything about other >>>> services roadmaps. For example, the datasource plugin annotation feature >>>> >>>> could be implemented with requests to an event service. >>>> >>>> Anyway, since it should be able to connect to Metrics in different >>>> environments (H-Services, OS-Metrics and standalone), I err on the side >>>> >>>> of promoting it as a top level project. >>>> >>>> >>>> I think a top-level project makes the most sense. >>>> >>>> _______________________________________________ >>>> hawkular-dev mailing list >>>> hawkular-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>> >>>> >>> >>> _______________________________________________ >>> hawkular-dev mailing list >>> hawkular-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>> >>> >> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> >> > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160707/c339385c/attachment-0001.html
From lkrejci at redhat.com Thu Jul 7 17:08:33 2016
From: lkrejci at redhat.com (Lukas Krejci)
Date: Thu, 07 Jul 2016 23:08:33 +0200
Subject: [Hawkular-dev] agent using custom metric IDs
In-Reply-To: <268194573.2718225.1467854017722.JavaMail.zimbra@redhat.com>
References: <268194573.2718225.1467854017722.JavaMail.zimbra@redhat.com>
Message-ID: <9471848.laFv79TRbG@localhost.localdomain>

On Wednesday, 6 July 2016 21:13:37 CEST John Mazzitelli wrote:
> ======
>
> Agent is introducing two changes:
>
> 1. Metric Type definitions created by the agent will have the same ID as
> before, but their Name is changing (probably does not affect anyone).
>
> 2. Clients (like UI, HawkFX, etc) should no longer assume the agent's
> h-inventory metric definition IDs match h-metric metric IDs - instead, they
> must look at the "metric-id" property on the h-inventory metric definition
> to know how to look up the actual metric data in h-metrics. Will affect all
> clients, but metric IDs will default to what they are today - so nothing
> changes and thus nothing will break today if you run with out of box
> configuration.
>
> ======
>
> There is a use-case where the agent needs to support custom metric IDs (that
> is, rather than accepting the out-of-box metric IDs created by the agent,
> allow the user to define what the metric IDs should look like). See
> https://issues.jboss.org/browse/HWKAGENT-78
>
> As a refresher, remember that when you create resources in inventory, those
> resources can be associated with one or more "metric" definitions. Those
> resource metrics are themselves associated with a "metric type" definition.
> Today, when the agent stores metric data into Hawkular Metrics, it stores
> the data under the ID of the "metric" that is associated with the resource
> (so the h-inventory metric ID is the same as the h-metric metric ID by
> definition - at least for the data the hawkular wildfly agent inserts).
>
> I am proposing two changes in the PR:
> https://github.com/hawkular/hawkular-agent/pull/226
>
> First, today, the "metric type" definition that the agent creates has an ID
> and a Name that are identical. I am changing this so the ID stays the same
> (which is the metric set name, followed by "~", followed by the name of the
> metric -- e.g. if there was a that contains a
> , the metric type ID would be "this~that"), but the
> Name is only the name without the set name (e.g. the name would be "that"
> in the previous example).
>
> The above is a minor change, and I doubt anyone is affected by it. But I
> point it out just in case.
>
> Second, it should no longer be assumed that the inventory's resource metric
> ID is identical to the h-metric's metric ID.
>
> This second change will potentially affect everyone (I know it affects
> Heiko's HawkFX :)
>
> Now, that said, nothing really changes now, because the defaults will remain
> as they are (that is, the h-inventory's metric ID will still be exactly the
> same as the h-metrics ID - the agent keeps them identical). The change
> happens when the user actually configures the agent with a custom metric ID
> template (e.g. ...>). This means h-metric IDs will be DIFFERENT than h-inventory metric
> IDs.
>
> How then does a client know what h-metric IDs to look for if they only have
> h-inventory metric definitions? Well, recall that inventory allows for
> properties to be associated with any entity. I use this feature here.
> Rather than rely on an implicit rule ("h-inventory metric ID is the same as
> h-metric metric ID") I explicitly define this linkage in a property called
> "metric-id" on the h-inventory metric definition. Out of box, that
> property's value will be identical to the h-inventory metric ID (and hence
> why nothing really changes - since the explicit rule in this case provides
> the same behavior as if following the old implicit rule). In fact, I'm
> considering if I should set that property at all if it's the same as the
> h-inventory ID - I think it might be better to only set a "metric-id"
> property if it is different. But this would require clients to know about
> the implicit rule if there is no metric-id property set ("is there a
> metric-id property set? No? Then use the h-inventory metric ID for the
> h-metric metric ID").
>
> For example, see here (this is a live example I copied from the "raw"
> inventory JSON that HawkFX gave me for a metric) - this is the h-inventory
> entity definition for the metric "Heap Used" on my WildFly Server resource
> - notice the "properties" map has a "metric-id" value that is DIFFERENT
> than the "id" - that "metric-id" is something I customized in my agent
> config in standalone.xml (well, I used the swarm agent, so I put it in the
> swarm config, but its basically the same thing):
>
> {
>   "path": "/t;hawkular/f;mazz/m;MI~R~%5Bmazz%2FWildFly~~%5D~MT~WildFly%20Memory%20Metrics~Heap%20Used",
>   "properties": {
>     "__identityHash": "70e59a5d427632223da36c225ba6ef8572985",
>     "metric-id": "feed=mazz__msn=WildFly__typeName=Heap Used__resName=WildFly Server [WildFly]__resId=WildFly~~__typeId=WildFly Memory Metrics~Heap Used"
>   },
>   "name": "Heap Used",
>   "identityHash": "70e59a5d427632223da36c225ba6ef8572985",
>   "type": {
>     "path": "/t;hawkular/f;mazz/mt;WildFly%20Memory%20Metrics~Heap%20Used",
>     "name": "Heap Used",
>     "identityHash": "3be5b5fdabed925ac46fdc6d8295e34bbd3147a",
>     "unit": "BYTES",
>     "type": "GAUGE",
>     "collectionInterval": 30,
>     "id": "WildFly Memory Metrics~Heap Used"
>   },
>   "id": "MI~R~[mazz/WildFly~~]~MT~WildFly Memory Metrics~Heap Used"
> }
>
> Notice this:
>
> feed=mazz__msn=WildFly__typeName=Heap Used__resName=WildFly Server [WildFly]__resId=WildFly~~__typeId=WildFly Memory Metrics~Heap Used
>
> is different from this:
>
> MI~R~[mazz/WildFly~~]~MT~WildFly Memory Metrics~Heap Used
>
> And that's the issue. Clients have to know to look for the "metric-id"
> property and use it when looking up metric data in h-metrics (so if you
> want to graph the data, you have to ask h-metrics for the data associated
> with the value found in the "metric-id" property).
>

What if you used the inventory's canonical path as the metric ID for
h-metrics (e.g. used "path" from the JSON above)? After all, if inventory
should be the storage of all things Hawkular knows about, the canonical path
is the one and only thing that can identify any one thing in inventory. So it
lends itself nicely to be used as an identifier in other components.

> _______________________________________________
> hawkular-dev mailing list
> hawkular-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hawkular-dev

--
Lukas Krejci

From lkrejci at redhat.com Thu Jul 7 17:28:03 2016
From: lkrejci at redhat.com (Lukas Krejci)
Date: Thu, 07 Jul 2016 23:28:03 +0200
Subject: [Hawkular-dev] Sync with Inventory
In-Reply-To:
References: <1652727.ThqZrOltYD@rpi.lan>
Message-ID: <2966868.AeUHh1n2pu@localhost.localdomain>

On pond?l? 4.
?ervence 2016 23:53:20 CEST Austin Kuo wrote: > why "isParentOf" is more suitable than "contains" in vertx as Thomas said? > > If it is "contains", it also makes sense to me since if "MyApp" is gone, > the feeds it contains should disappear as well. > > Austin I'm not sure how vertx apps are composed so maybe that would be the way to go. I was going under the assumption that a component of a vertx application might live in e.g. Wildfly server but I may be completely wrong there. If the above were true, it could be that the server which contains (sic) the part of the vertx application would be reported about by its own feed. If there were more than 1 such server and feed, you could no longer represent the vertx application in any 1 of them, because it somehow sits "above" all of them. Inventory doesn't allow for a resource to contain a feed, so in that case you're forced to use "isParentOf" between the app resource and its "components" in the various feeds. What this also gives you is the insight if a vertx application has been completely "uninstalled". If the feeds for its various parts still report the parts as being present yet your overall application has been deleted, you know that there must have been some kind of mishap. But as I said - this only makes sense if the components of a vertx application are embedded in something else. Or, actually, it would also make sense if the components were standalone but could participate in more than 1 vertx application. > Lukas Krejci ? 2016?6?29? ???21:20??? > > > Btw. I've slightly updated the inventory organization description on the > > hawkular site (http://www.hawkular.org/docs/components/inventory/ > > index.html#inventory-organization > > > ganization>). I hope it explains the structure and > > intent of the entities in inventory in a slightly more comprehensible > > manner. > > > > My answers are inline below... > > > > On st?eda 29. ?ervna 2016 14:39:27 CEST Thomas Segismont wrote: > > > Thank you very much for the thorough reply Lukas. A few > > > questions/comments inline. > > > > > > Le 23/06/2016 ? 15:59, Lukas Krejci a ?crit : > > > > On Thursday, June 23, 2016 10:27:12 AM Thomas Segismont wrote: > > > >> Hey Lukas, > > > >> > > > >> Thank you for pointing us in the sync endpoint. Austin will look into > > > >> this and will certainly come back with more questions. > > > >> > > > >> With respect to the user creating resources question, the difference > > > >> between Vert.x and Wildfly is that the user creates resources > > > >> grammatically. So in version 1 of the application, there might be two > > > >> HTTP servers as well as 7 event bus handlers, but only 1 http server > > > > in > > > > > >> version 2. And a named worker pool in version 3. > > > >> > > > >> In the end, I believe it doesn't matter if it's container which > > > > creates > > > > > >> resources or if it's the user himself. Does it? > > > > > > > > It does not really (inventory has just a single API, so it does not > > > > really > > > > > > know who is talking to it - if a feed or if a user) - but resources > > > > inside > > > > > > and outside feeds have slightly different semantics. > > > > > > > > Right now the logic is this: > > > > > > > > Feeds are "agents" that don't care about anything else but their own > > > > little > > > > "world". That's why they can create their own resource types, metric > > > > types > > > > > > and they also declare resources and metrics of those types. 
Feed does > > > > not > > > > > > need to look "outside" of its own data and is in full charge of it. > > > > > > Does that mean that creating a feed is the only way to create > > > resource/metric types? > > > > No, you can also create resource types and metric types directly under the > > tenant. > > > > > I suppose the benefit of creating resource types is that then you can > > > search for different resources of the same type easily. > > > > > > And if feeds create resource types, how do you know that resource types > > > created by the Hawkular Agent feed running on server A are the same as > > > those created by another agent running on server B? > > > > Inventory automatically computes "identity hashes" of resource types and > > metric types - if 2 resource types in 2 feeds have the same ID and exactly > > the > > same configuration definitions, they are considered identical. If you know > > 1 > > resource type, you can find all the identical ones using the following > > REST > > API (since 0.17.0.Final, the format of the URLs is thoroughly explained > > here: > > http://www.hawkular.org/docs/rest/rest-inventory.html#_api_endpoints): > > > > /hawkular/inventory/traversal/f;feedId/rt;resourceTypeId/identical > > > > If for example some resource types should be known up-front and "shared" > > across all feeds, some kind of "gluecode" could create "global" resource > > types > > under the tenant, that would have the same id and structure as the types > > that > > the feeds declare. If then you want to for example find all resources of > > given > > type, you can: > > > > /hawkular/inventory/traversal/rt;myType/identical/rl;defines/type=resource > > > > I.e. for all types identical to the global one, find all resources defined > > by > > those types. > > > > > > Hence the /sync endpoint applies to a feed nicely - since it is in > > > > charge, > > > > > > it merely declares what is the view it has currently of the "world" it > > > > sees and inventory will make sure it has the same picture - under that > > > > feed. > > > > > > > > Now if you have an application that spans multiple vms/machines and is > > > > composed of multiple processes, there is no such clear distinction of > > > > "ownership". > > > > > > Good point, Vert.x applications are often distributed and communicating > > > over the EventBus. > > > > > > > While indeed a "real" user can just act like a feed, the envisioned > > > > workflow is that the user operates directly in environments and at the > > > > top level. I.e. a user assigns feeds to environments (i.e. this feed > > > > reports on my server in staging environment, etc) and the user creates > > > > "logical" resources in the environment (i.e. "My App" resource in > > > > staging > > > > > > env is composed of a load balancer managed by this feed, mongodb > > > > managed > > > > > > by another feed there and clustered wflys there, there and there). > > > > > > > > To model this, inventory supports 2 kinds of tree hierarchies - 1 > > > > created > > > > > > using the "contains" relationship, which expresses existential > > > > ownership - > > > > > > i.e. a feed contains its resources and if a feed disappears, so do the > > > > resources, because no one else can report on them. The entities bound > > > > by > > > > > > the > > > > > > How does a feed "disappear"? That would be by deleting it through the > > > REST API, correct? Something the ManageIQ provider would do through the > > > Ruby client? 
> > > > yes > > > > > > contains relationship form a tree - no loops or diamonds in it (this > > > > is > > > > enforced by inventory). But there can also be a hierarchy created > > > > using an > > > > > > "isParentOf" relationship (which represents "logical" ownership). > > > > Resources > > > > bound by "isParentOf" can form an acyclic graph - i.e. 1 resource can > > > > have > > > > > > multiple parents as well as many children (isParentOf is applicable > > > > only > > > > > > to > > > > resources, not other types of entities). > > > > > > > > The hierarchies formed by "contains" and "isParentOf" are independent. > > > > So > > > > > > you can create a resource "My App" in the staging environment and > > > > declare > > > > > > it a parent (using "isParentOf") of the resources declared by feeds > > > > that > > > > > > manage the machines where the constituent servers live. > > > > > > Interesting, that may be the way to model a Vert.x app deployed on two > > > machines. Each process would have its own feed reporting discovered > > > resources (http servers, event bus handlers, ... etc), and a logical app > > > resource as parent. > > > > Exactly. > > > > > > That is the envisaged workflow for "apps". Now the downside to that is > > > > that > > > > (currently) there is no "sync" for that. The reason is that the > > > > application > > > > really is a logical concept and the underlying servers can be > > > > repurposed > > > > > > to > > > > serve different applications (so if app stops using it, it shouldn't > > > > really > > > > disappear from inventory, as is the case with /sync - because if a > > > > feed > > > > doesn't "see" a resource, then it really is just gone, because the > > > > feed is > > > > > > solely responsible for reporting on it). > > > > > > What happens to the resources exactly? Are they marked as gone or simply > > > deleted? > > > > Right now they are deleted. That is of course not optimal and versioning > > is in > > the pipeline right after the port of inventory to Tinkerpop3. Basically > > all > > the entities and relationships will get "from" and "to" timestamps. > > Implicitly, you'd look at the "present", but you'd be able to look at how > > things looked in the past by specifying a different "now" in your query. > > > > > Do you know how dependent services are updated? For example, when a JMS > > > queue is gone, are alert definitions on queue depth removed as well? How > > > does that happen? > > > > Inventory sends events on the bus about every C/U/D of every entity or > > relationship, so other components can react on that. > > > > > > We can think about how to somehow help clients with "App sync" but I'm > > > > not > > > > > > sure if having a feed for vertx is the right thing to do. On the other > > > > hand I very well may not be seeing some obvious problems of the above > > > > or > > > > > > parallels that make the 2 approaches really the same because the above > > > > model is just ingrained in my brain after so many hours thinking about > > > > it > > > > > > ;) > > > > > > > >> As for the feed question, the Vert.x feed will be the Metrics SPI > > > >> implementation (vertx-hawkular-metrics project). Again I guess it's > > > > not > > > > > >> much different than the Hawkular Agent. > > > > > > > > A feed would only be appropriate if vertx app never reported on > > > > something > > > > > > that would also be reported by other agents. I.e. 
if a part of a vertx > > > > application is also reported on by a wfly agent, because that part is > > > > running in a wfly server managed by us, then that will not work - 1 > > > > resource cannot be "contained" in 2 different feeds (not just API > > > > wise, > > > > but logically, too). > > > > > > I'm not too worried about this use case. First the vast majority of > > > Vert.x applications I know about are not embedded. Secondly the Vert.x > > > feed would not report resources already reported by the Hawkular Agent. > > > > > > >> Maybe the wording around user creating resources was confusing? Did > > > > you > > > > > >> thought he would do so from application code? In this case, the > > > >> answer > > > >> is no. > > > > > > > > Yeah, we should probably get together and discuss what your plans are > > > > to > > > > > > get on the same page with everything. > > > > > > I believe that presenting to you (and to whoever is interested) the > > > conclusions of investigations would be beneficial indeed. > > > > +1 > > > > > >> Regards, > > > >> Thomas > > > >> > > > >> Le 23/06/2016 ? 10:01, Austin Kuo a ?crit : > > > >>> Yes, I?m gonna build the inventory for vertx applications. > > > >>> So I have to create a feed for it. > > > >>> > > > >>> Thanks! > > > >>> > > > >>> On Tue, Jun 21, 2016 at 7:55 PM Lukas Krejci > > >>> > > > >>> > wrote: > > > >>> Hi Austin, > > > >>> > > > >>> Inventory offers a /hawkular/inventory/sync endpoint that is > > > > used to > > > > > >>> synchronize the "world view" of feeds (feed being something that > > > >>> pushes data > > > >>> into inventory). > > > >>> > > > >>> You said though that a "user creates" the resources, so I am not > > > >>> sure if /sync > > > >>> would be applicable to your scenario. Would you please elaborate > > > >>> more on where > > > >>> in the inventory hierarchy you create your resources and how? > > > > I.e. > > > > > >>> are you > > > >>> using some sort of feed akin to Hawkular's Wildfly Agent or are > > > > you > > > > > >>> just > > > >>> creating your resources "manually" under environments? > > > >>> > > > >>> On Tuesday, June 21, 2016 02:20:33 AM Austin Kuo wrote: > > > >>> > Hi all, > > > >>> > > > > >>> > I?m currently investigating how to sync with inventory server. > > > >>> > Here?s the example scenario: > > > >>> > Consider the following problem. A user creates version 1 of > > > >>> > the > > > >>> > > > >>> app with > > > >>> > > > >>> > two http servers, one listening on port 8080, the other on > > > >>> > port > > > >>> > > > >>> 8181. In > > > >>> > > > >>> > version 2, the http server listening on port 8181 is no longer > > > >>> > needed. > > > >>> > When the old version is stopped and the new version started, > > > > there > > > > > >>> will be > > > >>> > > > >>> > just one http server listening. The application is not aware > > > >>> > of > > > >>> > the > > > >>> > previous state. What should we do so that the second http > > > > server > > > > > >>> is removed > > > >>> > > > >>> > from Inventory? > > > >>> > > > > >>> > Thanks in advance. 
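
(Just to make the /sync idea concrete for that scenario: the feed does not diff anything itself - on every startup it simply posts its complete current world view to the sync endpoint, and inventory works out that the server on 8181 is no longer there. A rough sketch of what such a client-side call could look like follows; the endpoint path is the one mentioned above, but the payload shape, host, port, tenant header and resource names are only illustrative assumptions, not the actual sync schema:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SyncVersion2 {
    public static void main(String[] args) throws Exception {
        // Version 2 of the app only has the 8080 server left, so that is all we declare.
        // Illustrative payload only - the real structure expected by /sync is defined
        // by the inventory REST API docs, not by this sketch.
        String worldView =
            "{ \"resources\": [ { \"id\": \"http-server-8080\", \"type\": \"HTTP Server\" } ] }";

        // Host, port and tenant header value are assumptions made for the example.
        URL url = new URL("http://localhost:8080/hawkular/inventory/sync");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestProperty("Hawkular-Tenant", "my-tenant");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(worldView.getBytes(StandardCharsets.UTF_8));
        }
        // Inventory reconciles this view with what it already stores under the feed,
        // so the HTTP server that used to listen on 8181 simply disappears.
        System.out.println("sync returned HTTP " + conn.getResponseCode());
    }
}

The key point is that the application never needs to remember its previous state; it only ever describes version 2 as it is now.)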
> > > >>> > > > >>> -- > > > >>> Lukas Krejci > > > >>> > > > >>> _______________________________________________ > > > >>> hawkular-dev mailing list > > > > > >>> hawkular-dev at lists.jboss.org > hawkular-dev at lists.jboss.org> > > > > > >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > >>> > > > >>> _______________________________________________ > > > >>> hawkular-dev mailing list > > > >>> hawkular-dev at lists.jboss.org > > > >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > -- > > Lukas Krejci > > > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev -- Lukas Krejci From mazz at redhat.com Thu Jul 7 17:31:14 2016 From: mazz at redhat.com (John Mazzitelli) Date: Thu, 7 Jul 2016 17:31:14 -0400 (EDT) Subject: [Hawkular-dev] agent using custom metric IDs In-Reply-To: <9471848.laFv79TRbG@localhost.localdomain> References: <268194573.2718225.1467854017722.JavaMail.zimbra@redhat.com> <9471848.laFv79TRbG@localhost.localdomain> Message-ID: <638211996.2970012.1467927074314.JavaMail.zimbra@redhat.com> > What if you used the inventory's canonical path as an the metric ID Well, that doesn't solve the problem. That would just be replacing one auto-generated ID for another one. The requirement is to allow a user to be able to customize the metric ID to something the user wants. See the JIRA for details: https://issues.jboss.org/browse/HWKAGENT-78 From theute at redhat.com Fri Jul 8 03:44:22 2016 From: theute at redhat.com (Thomas Heute) Date: Fri, 8 Jul 2016 09:44:22 +0200 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: References: <1265553183.102271604.1465909592402.JavaMail.zimbra@redhat.com> <43292415.102368387.1465924138218.JavaMail.zimbra@redhat.com> <0D5FC92F-40C4-493F-A57B-206EA804DD58@redhat.com> <587e0bc6-44d9-3f14-e872-5a58b2058223@redhat.com> <23adbfcc-1e2f-2e92-bb4d-d205618e7c82@redhat.com> <6091FA5E-9110-4F49-BAD6-A7A297E075F9@redhat.com> Message-ID: Yes, I didn't rework the content of the pages, only the structure, but we'll definitely add with the project page links to the supported clients. Thomas On Thu, Jul 7, 2016 at 5:15 PM, Alissa Bonas wrote: > iirc we discussed adding an Events section (events as in > meetups/conferences) somewhere visible like front page? > Also, a summary of version compatibility matrix of all components with > each other would be helpful imo. > > On Thu, Jul 7, 2016 at 12:48 PM, Thomas Heute wrote: > >> Thanks for the comments, I'm really looking more for feedback on the >> organization of the content, all the content is taken from the existing. >> >> For the 3rd point, the current website has 2 very similar pages (I only >> conserved one here as it was a quick shot, but the 2 needs to be merged) >> http://www.hawkular.org/community/index.html >> http://www.hawkular.org/community/join.html >> >> >> On Thu, Jul 7, 2016 at 11:37 AM, Alissa Bonas wrote: >> >>> Couple of suggestions: >>> >>> 1. In the "Hawkular features" section in homepage make the icons >>> clickable. right now the only way to get more info is to click the "more" >>> part. >>> 2. Top level menu font color is a really pale grey so everything looks >>> disabled. >>> 3. Community-Connect leads to page named "Join". 
Perhaps it would more >>> clear to make the link and the page name the same (and I would call it "Get >>> involved" anyway) >>> >>> >>> >>> >>> On Thu, Jul 7, 2016 at 11:58 AM, Thomas Heute wrote: >>> >>>> So here is a very rough idea of how it would look like: >>>> http://209.132.178.114:10188/ >>>> >>>> Content need to be adapted, new pages to be created, but hopefully you >>>> get the idea >>>> >>>> On Tue, Jul 5, 2016 at 8:27 PM, John Sanda wrote: >>>> >>>>> >>>>> On Jul 5, 2016, at 10:21 AM, Thomas Segismont >>>>> wrote: >>>>> >>>>> >>>>> >>>>> Le 05/07/2016 ? 16:05, Stefan Negrea a ?crit : >>>>> >>>>> On Tue, Jul 5, 2016 at 8:03 AM, Thomas Segismont >>>> >> wrote: >>>>> >>>>> >>>>> >>>>> Le 05/07/2016 ? 14:59, Heiko W.Rupp a ?crit : >>>>> >>>>> 2) The Grafana plugin should be moved under Metrics because is for >>>>> Metrics >>>>> >>>>> and only Metrics. >>>>> >>>>> If this is true - can we make it work with H-services as well? >>>>> >>>>> >>>>> The Grafana plugin works with all active flavors of Metrics: >>>>> standalone, >>>>> Openshift-Metrics and Hawkular-Services. >>>>> >>>>> I'm not sure what Stefan meant. >>>>> >>>>> >>>>> The Grafana plugins works with Metrics deployed on all distributions >>>>> however, the plugin itself can only be used with the Metrics project, >>>>> there are no projects (such as Alerts, or Inventory) that will ever >>>>> integrate with it. That is why I think it should be under the Metrics >>>>> project and not in another place. The integration itself is very >>>>> specific to just Metrics, not the entire Hawkular Services. >>>>> >>>>> >>>>> I see what you meant now. But we can't presume anything about other >>>>> services roadmaps. For example, the datasource plugin annotation >>>>> feature >>>>> could be implemented with requests to an event service. >>>>> >>>>> Anyway, since it should be able to connect to Metrics in different >>>>> environments (H-Services, OS-Metrics and standalone), I err on the side >>>>> >>>>> of promoting it as a top level project. >>>>> >>>>> >>>>> I think a top-level project makes the most sense. >>>>> >>>>> _______________________________________________ >>>>> hawkular-dev mailing list >>>>> hawkular-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>>> >>>>> >>>> >>>> _______________________________________________ >>>> hawkular-dev mailing list >>>> hawkular-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>> >>>> >>> >>> _______________________________________________ >>> hawkular-dev mailing list >>> hawkular-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>> >>> >> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> >> > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160708/82b107e9/attachment-0001.html From theute at redhat.com Fri Jul 8 03:47:17 2016 From: theute at redhat.com (Thomas Heute) Date: Fri, 8 Jul 2016 09:47:17 +0200 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: References: <1265553183.102271604.1465909592402.JavaMail.zimbra@redhat.com> <43292415.102368387.1465924138218.JavaMail.zimbra@redhat.com> <0D5FC92F-40C4-493F-A57B-206EA804DD58@redhat.com> <587e0bc6-44d9-3f14-e872-5a58b2058223@redhat.com> <23adbfcc-1e2f-2e92-bb4d-d205618e7c82@redhat.com> <6091FA5E-9110-4F49-BAD6-A7A297E075F9@redhat.com> Message-ID: Ok, so as there were no major issues reported, I'll continue with the most important parts (overview and downloads). I'd like to roll this out quickly and iterate as rebasing will be difficult... Also, I'll probably break some links, so that URL will match the new menus. We can keep redirect pages if we find out that some links should remain. Thomas On Fri, Jul 8, 2016 at 9:44 AM, Thomas Heute wrote: > Yes, I didn't rework the content of the pages, only the structure, but > we'll definitely add with the project page links to the supported clients. > > Thomas > > On Thu, Jul 7, 2016 at 5:15 PM, Alissa Bonas wrote: > >> iirc we discussed adding an Events section (events as in >> meetups/conferences) somewhere visible like front page? >> Also, a summary of version compatibility matrix of all components with >> each other would be helpful imo. >> >> On Thu, Jul 7, 2016 at 12:48 PM, Thomas Heute wrote: >> >>> Thanks for the comments, I'm really looking more for feedback on the >>> organization of the content, all the content is taken from the existing. >>> >>> For the 3rd point, the current website has 2 very similar pages (I only >>> conserved one here as it was a quick shot, but the 2 needs to be merged) >>> http://www.hawkular.org/community/index.html >>> http://www.hawkular.org/community/join.html >>> >>> >>> On Thu, Jul 7, 2016 at 11:37 AM, Alissa Bonas wrote: >>> >>>> Couple of suggestions: >>>> >>>> 1. In the "Hawkular features" section in homepage make the icons >>>> clickable. right now the only way to get more info is to click the "more" >>>> part. >>>> 2. Top level menu font color is a really pale grey so everything looks >>>> disabled. >>>> 3. Community-Connect leads to page named "Join". Perhaps it would more >>>> clear to make the link and the page name the same (and I would call it "Get >>>> involved" anyway) >>>> >>>> >>>> >>>> >>>> On Thu, Jul 7, 2016 at 11:58 AM, Thomas Heute >>>> wrote: >>>> >>>>> So here is a very rough idea of how it would look like: >>>>> http://209.132.178.114:10188/ >>>>> >>>>> Content need to be adapted, new pages to be created, but hopefully you >>>>> get the idea >>>>> >>>>> On Tue, Jul 5, 2016 at 8:27 PM, John Sanda wrote: >>>>> >>>>>> >>>>>> On Jul 5, 2016, at 10:21 AM, Thomas Segismont >>>>>> wrote: >>>>>> >>>>>> >>>>>> >>>>>> Le 05/07/2016 ? 16:05, Stefan Negrea a ?crit : >>>>>> >>>>>> On Tue, Jul 5, 2016 at 8:03 AM, Thomas Segismont >>>>> >> wrote: >>>>>> >>>>>> >>>>>> >>>>>> Le 05/07/2016 ? 14:59, Heiko W.Rupp a ?crit : >>>>>> >>>>>> 2) The Grafana plugin should be moved under Metrics because is for >>>>>> Metrics >>>>>> >>>>>> and only Metrics. >>>>>> >>>>>> If this is true - can we make it work with H-services as well? >>>>>> >>>>>> >>>>>> The Grafana plugin works with all active flavors of Metrics: >>>>>> standalone, >>>>>> Openshift-Metrics and Hawkular-Services. >>>>>> >>>>>> I'm not sure what Stefan meant. 
>>>>>> >>>>>> >>>>>> The Grafana plugins works with Metrics deployed on all distributions >>>>>> however, the plugin itself can only be used with the Metrics project, >>>>>> there are no projects (such as Alerts, or Inventory) that will ever >>>>>> integrate with it. That is why I think it should be under the Metrics >>>>>> project and not in another place. The integration itself is very >>>>>> specific to just Metrics, not the entire Hawkular Services. >>>>>> >>>>>> >>>>>> I see what you meant now. But we can't presume anything about other >>>>>> services roadmaps. For example, the datasource plugin annotation >>>>>> feature >>>>>> could be implemented with requests to an event service. >>>>>> >>>>>> Anyway, since it should be able to connect to Metrics in different >>>>>> environments (H-Services, OS-Metrics and standalone), I err on the >>>>>> side >>>>>> of promoting it as a top level project. >>>>>> >>>>>> >>>>>> I think a top-level project makes the most sense. >>>>>> >>>>>> _______________________________________________ >>>>>> hawkular-dev mailing list >>>>>> hawkular-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>>>> >>>>>> >>>>> >>>>> _______________________________________________ >>>>> hawkular-dev mailing list >>>>> hawkular-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>>> >>>>> >>>> >>>> _______________________________________________ >>>> hawkular-dev mailing list >>>> hawkular-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>> >>>> >>> >>> _______________________________________________ >>> hawkular-dev mailing list >>> hawkular-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>> >>> >> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160708/ca221858/attachment.html From miburman at redhat.com Fri Jul 8 04:52:39 2016 From: miburman at redhat.com (Michael Burman) Date: Fri, 8 Jul 2016 04:52:39 -0400 (EDT) Subject: [Hawkular-dev] agent using custom metric IDs In-Reply-To: <638211996.2970012.1467927074314.JavaMail.zimbra@redhat.com> References: <268194573.2718225.1467854017722.JavaMail.zimbra@redhat.com> <9471848.laFv79TRbG@localhost.localdomain> <638211996.2970012.1467927074314.JavaMail.zimbra@redhat.com> Message-ID: <591385937.6191806.1467967959180.JavaMail.zimbra@redhat.com> Hi, Why? The metricId is an id to a computer. It should not have anything meaningful to the user. Use tags if you want some information to the stored id. Stop pushing "structure" to the metricId, you can't use it for searching them. - Micke ----- Original Message ----- From: "John Mazzitelli" To: hawkular-dev at lists.jboss.org Sent: Friday, July 8, 2016 12:31:14 AM Subject: Re: [Hawkular-dev] agent using custom metric IDs > What if you used the inventory's canonical path as an the metric ID Well, that doesn't solve the problem. That would just be replacing one auto-generated ID for another one. The requirement is to allow a user to be able to customize the metric ID to something the user wants. 
See the JIRA for details: https://issues.jboss.org/browse/HWKAGENT-78
_______________________________________________
hawkular-dev mailing list
hawkular-dev at lists.jboss.org
https://lists.jboss.org/mailman/listinfo/hawkular-dev

From lponce at redhat.com Fri Jul 8 04:54:57 2016
From: lponce at redhat.com (Lucas Ponce)
Date: Fri, 8 Jul 2016 04:54:57 -0400 (EDT)
Subject: [Hawkular-dev] agent using custom metric IDs
In-Reply-To: <591385937.6191806.1467967959180.JavaMail.zimbra@redhat.com>
References: <268194573.2718225.1467854017722.JavaMail.zimbra@redhat.com> <9471848.laFv79TRbG@localhost.localdomain> <638211996.2970012.1467927074314.JavaMail.zimbra@redhat.com> <591385937.6191806.1467967959180.JavaMail.zimbra@redhat.com>
Message-ID: <1046478233.3056457.1467968097463.JavaMail.zimbra@redhat.com>

+1

In general, an id should be treated as an abstract thing. In the end, relying on knowledge of how the id is built is a bad thing - it is technical debt that we are now suffering from in MiQ and will need to address at some point.

An id should be similar to a UUID, IMO: nothing that you should use to extract business info. For that we can include additional properties, as Mike comments.

----- Original Message -----
> From: "Michael Burman"
> To: "John Mazzitelli" , "Discussions around Hawkular development"
> Sent: Friday, 8 July 2016 10:52:39
> Subject: Re: [Hawkular-dev] agent using custom metric IDs
>
> Hi,
>
> Why? The metricId is an id to a computer. It should not have anything
> meaningful to the user. Use tags if you want some information to the stored
> id. Stop pushing "structure" to the metricId, you can't use it for searching
> them.
>
> - Micke
>
> ----- Original Message -----
> From: "John Mazzitelli"
> To: hawkular-dev at lists.jboss.org
> Sent: Friday, July 8, 2016 12:31:14 AM
> Subject: Re: [Hawkular-dev] agent using custom metric IDs
>
> > What if you used the inventory's canonical path as the metric ID
>
> Well, that doesn't solve the problem. That would just be replacing one
> auto-generated ID for another one. The requirement is to allow a user to be
> able to customize the metric ID to something the user wants. See the JIRA
> for details: https://issues.jboss.org/browse/HWKAGENT-78
> _______________________________________________
> hawkular-dev mailing list
> hawkular-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hawkular-dev
> _______________________________________________
> hawkular-dev mailing list
> hawkular-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/hawkular-dev
>

From lkrejci at redhat.com Thu Jul 7 17:53:39 2016
From: lkrejci at redhat.com (Lukas Krejci)
Date: Thu, 07 Jul 2016 23:53:39 +0200
Subject: [Hawkular-dev] agent using custom metric IDs
In-Reply-To: <638211996.2970012.1467927074314.JavaMail.zimbra@redhat.com>
References: <268194573.2718225.1467854017722.JavaMail.zimbra@redhat.com> <9471848.laFv79TRbG@localhost.localdomain> <638211996.2970012.1467927074314.JavaMail.zimbra@redhat.com>
Message-ID: <10638522.b8cWRMdI2m@localhost.localdomain>

On Thursday, 7 July 2016 17:31:14 CEST John Mazzitelli wrote:
> > What if you used the inventory's canonical path as the metric ID
>
> Well, that doesn't solve the problem. That would just be replacing one
> auto-generated ID for another one. The requirement is to allow a user to be
> able to customize the metric ID to something the user wants. See the JIRA
> for details: https://issues.jboss.org/browse/HWKAGENT-78

Hmm, good point.
I wonder then if this should not evolve into something more engrained into inventory. Something like "AKA" property on the entities: "also-known-as": { "h-metrics": "h-metrics-specific-id", "h-alerts": "another-id", "collectd": "yet-another-id", "rhq": "some-id", "legacy-monitoring-system-in-our-enterprise": "blah-id", ... } Well, maybe your approach of just shoving this info in general purpose properties is enough. All the clients would still need to know where to look for the other ids, so the above would possibly not bring too much simplification for the clients. Consolidation maybe, but not simplification. > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev -- Lukas Krejci From tsegismo at redhat.com Fri Jul 8 10:00:20 2016 From: tsegismo at redhat.com (Thomas Segismont) Date: Fri, 8 Jul 2016 16:00:20 +0200 Subject: [Hawkular-dev] Sync with Inventory In-Reply-To: References: <37609601.21BcKnHFje@rpi.lan> <8a239b8f-8f4f-e2c4-9dab-791110dd22a1@redhat.com> <1652727.ThqZrOltYD@rpi.lan> Message-ID: <8f20c332-472e-b43d-2923-8b2a3c3012ea@redhat.com> Hi Austin, Not sure what you meant, but my opinion on the problem so far is that we should have a feed per Vert.x instance. Eventually (but that is beyond the scope of your project) we could find out when two Vert.x feeds are using the same clustered EventBus and then create a "logical" resource in inventory to represent it. Thomas Le 05/07/2016 ? 01:53, Austin Kuo a ?crit : > why "isParentOf" is more suitable than "contains" in vertx as Thomas said? > > If it is "contains", it also makes sense to me since if "MyApp" is gone, > the feeds it contains should disappear as well. > > Austin > Lukas Krejci >? 2016?6 > ?29? ???21:20??? > > Btw. I've slightly updated the inventory organization description on the > hawkular site (http://www.hawkular.org/docs/components/inventory/ > index.html#inventory-organization > ). > I hope it explains the structure and > intent of the entities in inventory in a slightly more > comprehensible manner. > > My answers are inline below... > > On st?eda 29. ?ervna 2016 14:39:27 CEST Thomas Segismont wrote: > > Thank you very much for the thorough reply Lukas. A few > > questions/comments inline. > > > > Le 23/06/2016 ? 15:59, Lukas Krejci a ?crit : > > > On Thursday, June 23, 2016 10:27:12 AM Thomas Segismont wrote: > > >> Hey Lukas, > > >> > > >> Thank you for pointing us in the sync endpoint. Austin will > look into > > >> this and will certainly come back with more questions. > > >> > > >> With respect to the user creating resources question, the > difference > > >> between Vert.x and Wildfly is that the user creates resources > > >> grammatically. So in version 1 of the application, there might > be two > > >> HTTP servers as well as 7 event bus handlers, but only 1 http > server in > > >> version 2. And a named worker pool in version 3. > > >> > > >> In the end, I believe it doesn't matter if it's container which > creates > > >> resources or if it's the user himself. Does it? > > > > > > It does not really (inventory has just a single API, so it does > not really > > > know who is talking to it - if a feed or if a user) - but > resources inside > > > and outside feeds have slightly different semantics. > > > > > > Right now the logic is this: > > > > > > Feeds are "agents" that don't care about anything else but their own > > > little > > > "world". 
That's why they can create their own resource types, > metric types > > > and they also declare resources and metrics of those types. Feed > does not > > > need to look "outside" of its own data and is in full charge of it. > > > > Does that mean that creating a feed is the only way to create > > resource/metric types? > > No, you can also create resource types and metric types directly > under the > tenant. > > > I suppose the benefit of creating resource types is that then you can > > search for different resources of the same type easily. > > > > And if feeds create resource types, how do you know that resource > types > > created by the Hawkular Agent feed running on server A are the same as > > those created by another agent running on server B? > > > > Inventory automatically computes "identity hashes" of resource types and > metric types - if 2 resource types in 2 feeds have the same ID and > exactly the > same configuration definitions, they are considered identical. If > you know 1 > resource type, you can find all the identical ones using the > following REST > API (since 0.17.0.Final, the format of the URLs is thoroughly > explained here: > http://www.hawkular.org/docs/rest/rest-inventory.html#_api_endpoints): > > /hawkular/inventory/traversal/f;feedId/rt;resourceTypeId/identical > > If for example some resource types should be known up-front and "shared" > across all feeds, some kind of "gluecode" could create "global" > resource types > under the tenant, that would have the same id and structure as the > types that > the feeds declare. If then you want to for example find all > resources of given > type, you can: > > /hawkular/inventory/traversal/rt;myType/identical/rl;defines/type=resource > > I.e. for all types identical to the global one, find all resources > defined by > those types. > > > > Hence the /sync endpoint applies to a feed nicely - since it is > in charge, > > > it merely declares what is the view it has currently of the > "world" it > > > sees and inventory will make sure it has the same picture - > under that > > > feed. > > > > > > Now if you have an application that spans multiple vms/machines > and is > > > composed of multiple processes, there is no such clear > distinction of > > > "ownership". > > > > Good point, Vert.x applications are often distributed and > communicating > > over the EventBus. > > > > > While indeed a "real" user can just act like a feed, the envisioned > > > workflow is that the user operates directly in environments and > at the > > > top level. I.e. a user assigns feeds to environments (i.e. this feed > > > reports on my server in staging environment, etc) and the user > creates > > > "logical" resources in the environment (i.e. "My App" resource > in staging > > > env is composed of a load balancer managed by this feed, mongodb > managed > > > by another feed there and clustered wflys there, there and there). > > > > > > To model this, inventory supports 2 kinds of tree hierarchies - > 1 created > > > using the "contains" relationship, which expresses existential > ownership - > > > i.e. a feed contains its resources and if a feed disappears, so > do the > > > resources, because no one else can report on them. The entities > bound by > > > the > > How does a feed "disappear"? That would be by deleting it through the > > REST API, correct? Something the ManageIQ provider would do > through the > > Ruby client? > > > > yes > > > > contains relationship form a tree - no loops or diamonds in it > (this is > > > enforced by inventory). 
But there can also be a hierarchy > created using an > > > "isParentOf" relationship (which represents "logical" ownership). > > > Resources > > > bound by "isParentOf" can form an acyclic graph - i.e. 1 > resource can have > > > multiple parents as well as many children (isParentOf is > applicable only > > > to > > > resources, not other types of entities). > > > > > > The hierarchies formed by "contains" and "isParentOf" are > independent. So > > > you can create a resource "My App" in the staging environment > and declare > > > it a parent (using "isParentOf") of the resources declared by > feeds that > > > manage the machines where the constituent servers live. > > > > Interesting, that may be the way to model a Vert.x app deployed on two > > machines. Each process would have its own feed reporting discovered > > resources (http servers, event bus handlers, ... etc), and a > logical app > > resource as parent. > > > > Exactly. > > > > That is the envisaged workflow for "apps". Now the downside to > that is > > > that > > > (currently) there is no "sync" for that. The reason is that the > > > application > > > really is a logical concept and the underlying servers can be > repurposed > > > to > > > serve different applications (so if app stops using it, it shouldn't > > > really > > > disappear from inventory, as is the case with /sync - because if > a feed > > > doesn't "see" a resource, then it really is just gone, because > the feed is > > > solely responsible for reporting on it). > > > > What happens to the resources exactly? Are they marked as gone or > simply > > deleted? > > Right now they are deleted. That is of course not optimal and > versioning is in > the pipeline right after the port of inventory to Tinkerpop3. > Basically all > the entities and relationships will get "from" and "to" timestamps. > Implicitly, you'd look at the "present", but you'd be able to look > at how > things looked in the past by specifying a different "now" in your query. > > > Do you know how dependent services are updated? For example, when > a JMS > > queue is gone, are alert definitions on queue depth removed as > well? How > > does that happen? > > > > Inventory sends events on the bus about every C/U/D of every entity or > relationship, so other components can react on that. > > > > We can think about how to somehow help clients with "App sync" > but I'm not > > > sure if having a feed for vertx is the right thing to do. On the > other > > > hand I very well may not be seeing some obvious problems of the > above or > > > parallels that make the 2 approaches really the same because the > above > > > model is just ingrained in my brain after so many hours thinking > about it > > > ;) > > > > > >> As for the feed question, the Vert.x feed will be the Metrics SPI > > >> implementation (vertx-hawkular-metrics project). Again I guess > it's not > > >> much different than the Hawkular Agent. > > > > > > A feed would only be appropriate if vertx app never reported on > something > > > that would also be reported by other agents. I.e. if a part of a > vertx > > > application is also reported on by a wfly agent, because that > part is > > > running in a wfly server managed by us, then that will not work - 1 > > > resource cannot be "contained" in 2 different feeds (not just > API wise, > > > but logically, too). > > I'm not too worried about this use case. First the vast majority of > > Vert.x applications I know about are not embedded. 
Secondly the Vert.x > > feed would not report resources already reported by the Hawkular > Agent. > > > > >> Maybe the wording around user creating resources was confusing? > Did you > > >> thought he would do so from application code? In this case, the > answer > > >> is no. > > > > > > Yeah, we should probably get together and discuss what your > plans are to > > > get on the same page with everything. > > > > I believe that presenting to you (and to whoever is interested) the > > conclusions of investigations would be beneficial indeed. > > > > +1 > > > >> Regards, > > >> Thomas > > >> > > >> Le 23/06/2016 ? 10:01, Austin Kuo a ?crit : > > >>> Yes, I?m gonna build the inventory for vertx applications. > > >>> So I have to create a feed for it. > > >>> > > >>> Thanks! > > >>> > > >>> On Tue, Jun 21, 2016 at 7:55 PM Lukas Krejci > > > >>> > > >>> >> wrote: > > >>> Hi Austin, > > >>> > > >>> Inventory offers a /hawkular/inventory/sync endpoint that > is used to > > >>> synchronize the "world view" of feeds (feed being > something that > > >>> pushes data > > >>> into inventory). > > >>> > > >>> You said though that a "user creates" the resources, so I > am not > > >>> sure if /sync > > >>> would be applicable to your scenario. Would you please > elaborate > > >>> more on where > > >>> in the inventory hierarchy you create your resources and > how? I.e. > > >>> are you > > >>> using some sort of feed akin to Hawkular's Wildfly Agent > or are you > > >>> just > > >>> creating your resources "manually" under environments? > > >>> > > >>> On Tuesday, June 21, 2016 02:20:33 AM Austin Kuo wrote: > > >>> > Hi all, > > >>> > > > >>> > I?m currently investigating how to sync with inventory > server. > > >>> > Here?s the example scenario: > > >>> > Consider the following problem. A user creates version 1 > of the > > >>> > > >>> app with > > >>> > > >>> > two http servers, one listening on port 8080, the other > on port > > >>> > > >>> 8181. In > > >>> > > >>> > version 2, the http server listening on port 8181 is no > longer > > >>> > needed. > > >>> > When the old version is stopped and the new version > started, there > > >>> > > >>> will be > > >>> > > >>> > just one http server listening. The application is not > aware of > > >>> > the > > >>> > previous state. What should we do so that the second > http server > > >>> > > >>> is removed > > >>> > > >>> > from Inventory? > > >>> > > > >>> > Thanks in advance. 
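For that scenario a feed would simply declare its current world view and let /sync reconcile it. A rough, untested sketch of such a call is below; it is purely illustrative, the field names in the payload are placeholders and not the real schema (that is documented in the inventory REST docs), and it assumes the default jdoe/password user on a local server:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class SyncWorldView {
    public static void main(String[] args) throws Exception {
        // version 2 of the app reports only the surviving http server; whatever the feed
        // no longer mentions (the 8181 server) is expected to be removed by the sync.
        // NOTE: these field names are made up for illustration, check the inventory REST docs.
        String body = "{\"structure\":{\"data\":{\"id\":\"vertx-feed\"},"
                + "\"children\":{\"resource\":[{\"data\":{\"id\":\"http-server-8080\"}}]}}}";
        URL url = new URL("http://localhost:8080/hawkular/inventory/sync");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestProperty("Authorization", "Basic "
                + Base64.getEncoder().encodeToString("jdoe:password".getBytes("UTF-8")));
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes("UTF-8"));
        }
        System.out.println("sync returned HTTP " + conn.getResponseCode());
    }
}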
> > >>> > > >>> -- > > >>> Lukas Krejci > > >>> > > >>> _______________________________________________ > > >>> hawkular-dev mailing list > > >>> hawkular-dev at lists.jboss.org > > > > > >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev > > >>> > > >>> _______________________________________________ > > >>> hawkular-dev mailing list > > >>> hawkular-dev at lists.jboss.org > > >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > -- > Lukas Krejci > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -- Thomas Segismont JBoss ON Engineering Team From auszon3 at gmail.com Fri Jul 8 10:05:43 2016 From: auszon3 at gmail.com (Austin Kuo) Date: Fri, 08 Jul 2016 14:05:43 +0000 Subject: [Hawkular-dev] Sync with Inventory In-Reply-To: <8f20c332-472e-b43d-2923-8b2a3c3012ea@redhat.com> References: <37609601.21BcKnHFje@rpi.lan> <8a239b8f-8f4f-e2c4-9dab-791110dd22a1@redhat.com> <1652727.ThqZrOltYD@rpi.lan> <8f20c332-472e-b43d-2923-8b2a3c3012ea@redhat.com> Message-ID: Ok, that way, we can just use '/sync' to sycn with inventory. On Fri, Jul 8, 2016 at 10:00 PM Thomas Segismont wrote: > Hi Austin, > > Not sure what you meant, but my opinion on the problem so far is that we > should have a feed per Vert.x instance. Eventually (but that is beyond > the scope of your project) we could find out when two Vert.x feeds are > using the same clustered EventBus and then create a "logical" resource > in inventory to represent it. > > Thomas > > Le 05/07/2016 ? 01:53, Austin Kuo a ?crit : > > why "isParentOf" is more suitable than "contains" in vertx as Thomas > said? > > > > If it is "contains", it also makes sense to me since if "MyApp" is gone, > > the feeds it contains should disappear as well. > > > > Austin > > Lukas Krejci >? 2016?6 > > ?29? ???21:20??? > > > > Btw. I've slightly updated the inventory organization description on > the > > hawkular site (http://www.hawkular.org/docs/components/inventory/ > > index.html#inventory-organization > > < > http://www.hawkular.org/docs/components/inventory/index.html#inventory-organization > >). > > I hope it explains the structure and > > intent of the entities in inventory in a slightly more > > comprehensible manner. > > > > My answers are inline below... > > > > On st?eda 29. ?ervna 2016 14:39:27 CEST Thomas Segismont wrote: > > > Thank you very much for the thorough reply Lukas. A few > > > questions/comments inline. > > > > > > Le 23/06/2016 ? 15:59, Lukas Krejci a ?crit : > > > > On Thursday, June 23, 2016 10:27:12 AM Thomas Segismont wrote: > > > >> Hey Lukas, > > > >> > > > >> Thank you for pointing us in the sync endpoint. Austin will > > look into > > > >> this and will certainly come back with more questions. > > > >> > > > >> With respect to the user creating resources question, the > > difference > > > >> between Vert.x and Wildfly is that the user creates resources > > > >> grammatically. So in version 1 of the application, there might > > be two > > > >> HTTP servers as well as 7 event bus handlers, but only 1 http > > server in > > > >> version 2. And a named worker pool in version 3. > > > >> > > > >> In the end, I believe it doesn't matter if it's container which > > creates > > > >> resources or if it's the user himself. Does it? 
> > > > > > > > It does not really (inventory has just a single API, so it does > > not really > > > > know who is talking to it - if a feed or if a user) - but > > resources inside > > > > and outside feeds have slightly different semantics. > > > > > > > > Right now the logic is this: > > > > > > > > Feeds are "agents" that don't care about anything else but their > own > > > > little > > > > "world". That's why they can create their own resource types, > > metric types > > > > and they also declare resources and metrics of those types. Feed > > does not > > > > need to look "outside" of its own data and is in full charge of > it. > > > > > > Does that mean that creating a feed is the only way to create > > > resource/metric types? > > > > No, you can also create resource types and metric types directly > > under the > > tenant. > > > > > I suppose the benefit of creating resource types is that then you > can > > > search for different resources of the same type easily. > > > > > > And if feeds create resource types, how do you know that resource > > types > > > created by the Hawkular Agent feed running on server A are the > same as > > > those created by another agent running on server B? > > > > > > > Inventory automatically computes "identity hashes" of resource types > and > > metric types - if 2 resource types in 2 feeds have the same ID and > > exactly the > > same configuration definitions, they are considered identical. If > > you know 1 > > resource type, you can find all the identical ones using the > > following REST > > API (since 0.17.0.Final, the format of the URLs is thoroughly > > explained here: > > http://www.hawkular.org/docs/rest/rest-inventory.html#_api_endpoints > ): > > > > /hawkular/inventory/traversal/f;feedId/rt;resourceTypeId/identical > > > > If for example some resource types should be known up-front and > "shared" > > across all feeds, some kind of "gluecode" could create "global" > > resource types > > under the tenant, that would have the same id and structure as the > > types that > > the feeds declare. If then you want to for example find all > > resources of given > > type, you can: > > > > > /hawkular/inventory/traversal/rt;myType/identical/rl;defines/type=resource > > > > I.e. for all types identical to the global one, find all resources > > defined by > > those types. > > > > > > Hence the /sync endpoint applies to a feed nicely - since it is > > in charge, > > > > it merely declares what is the view it has currently of the > > "world" it > > > > sees and inventory will make sure it has the same picture - > > under that > > > > feed. > > > > > > > > Now if you have an application that spans multiple vms/machines > > and is > > > > composed of multiple processes, there is no such clear > > distinction of > > > > "ownership". > > > > > > Good point, Vert.x applications are often distributed and > > communicating > > > over the EventBus. > > > > > > > While indeed a "real" user can just act like a feed, the > envisioned > > > > workflow is that the user operates directly in environments and > > at the > > > > top level. I.e. a user assigns feeds to environments (i.e. this > feed > > > > reports on my server in staging environment, etc) and the user > > creates > > > > "logical" resources in the environment (i.e. "My App" resource > > in staging > > > > env is composed of a load balancer managed by this feed, mongodb > > managed > > > > by another feed there and clustered wflys there, there and > there). 
> > > > > > > > To model this, inventory supports 2 kinds of tree hierarchies - > > 1 created > > > > using the "contains" relationship, which expresses existential > > ownership - > > > > i.e. a feed contains its resources and if a feed disappears, so > > do the > > > > resources, because no one else can report on them. The entities > > bound by > > > > the > > > How does a feed "disappear"? That would be by deleting it through > the > > > REST API, correct? Something the ManageIQ provider would do > > through the > > > Ruby client? > > > > > > > yes > > > > > > contains relationship form a tree - no loops or diamonds in it > > (this is > > > > enforced by inventory). But there can also be a hierarchy > > created using an > > > > "isParentOf" relationship (which represents "logical" ownership). > > > > Resources > > > > bound by "isParentOf" can form an acyclic graph - i.e. 1 > > resource can have > > > > multiple parents as well as many children (isParentOf is > > applicable only > > > > to > > > > resources, not other types of entities). > > > > > > > > The hierarchies formed by "contains" and "isParentOf" are > > independent. So > > > > you can create a resource "My App" in the staging environment > > and declare > > > > it a parent (using "isParentOf") of the resources declared by > > feeds that > > > > manage the machines where the constituent servers live. > > > > > > Interesting, that may be the way to model a Vert.x app deployed on > two > > > machines. Each process would have its own feed reporting discovered > > > resources (http servers, event bus handlers, ... etc), and a > > logical app > > > resource as parent. > > > > > > > Exactly. > > > > > > That is the envisaged workflow for "apps". Now the downside to > > that is > > > > that > > > > (currently) there is no "sync" for that. The reason is that the > > > > application > > > > really is a logical concept and the underlying servers can be > > repurposed > > > > to > > > > serve different applications (so if app stops using it, it > shouldn't > > > > really > > > > disappear from inventory, as is the case with /sync - because if > > a feed > > > > doesn't "see" a resource, then it really is just gone, because > > the feed is > > > > solely responsible for reporting on it). > > > > > > What happens to the resources exactly? Are they marked as gone or > > simply > > > deleted? > > > > Right now they are deleted. That is of course not optimal and > > versioning is in > > the pipeline right after the port of inventory to Tinkerpop3. > > Basically all > > the entities and relationships will get "from" and "to" timestamps. > > Implicitly, you'd look at the "present", but you'd be able to look > > at how > > things looked in the past by specifying a different "now" in your > query. > > > > > Do you know how dependent services are updated? For example, when > > a JMS > > > queue is gone, are alert definitions on queue depth removed as > > well? How > > > does that happen? > > > > > > > Inventory sends events on the bus about every C/U/D of every entity > or > > relationship, so other components can react on that. > > > > > > We can think about how to somehow help clients with "App sync" > > but I'm not > > > > sure if having a feed for vertx is the right thing to do. 
On the > > other > > > > hand I very well may not be seeing some obvious problems of the > > above or > > > > parallels that make the 2 approaches really the same because the > > above > > > > model is just ingrained in my brain after so many hours thinking > > about it > > > > ;) > > > > > > > >> As for the feed question, the Vert.x feed will be the Metrics > SPI > > > >> implementation (vertx-hawkular-metrics project). Again I guess > > it's not > > > >> much different than the Hawkular Agent. > > > > > > > > A feed would only be appropriate if vertx app never reported on > > something > > > > that would also be reported by other agents. I.e. if a part of a > > vertx > > > > application is also reported on by a wfly agent, because that > > part is > > > > running in a wfly server managed by us, then that will not work > - 1 > > > > resource cannot be "contained" in 2 different feeds (not just > > API wise, > > > > but logically, too). > > > I'm not too worried about this use case. First the vast majority of > > > Vert.x applications I know about are not embedded. Secondly the > Vert.x > > > feed would not report resources already reported by the Hawkular > > Agent. > > > > > > >> Maybe the wording around user creating resources was confusing? > > Did you > > > >> thought he would do so from application code? In this case, the > > answer > > > >> is no. > > > > > > > > Yeah, we should probably get together and discuss what your > > plans are to > > > > get on the same page with everything. > > > > > > I believe that presenting to you (and to whoever is interested) the > > > conclusions of investigations would be beneficial indeed. > > > > > > > +1 > > > > > >> Regards, > > > >> Thomas > > > >> > > > >> Le 23/06/2016 ? 10:01, Austin Kuo a ?crit : > > > >>> Yes, I?m gonna build the inventory for vertx applications. > > > >>> So I have to create a feed for it. > > > >>> > > > >>> Thanks! > > > >>> > > > >>> On Tue, Jun 21, 2016 at 7:55 PM Lukas Krejci > > > > > >>> > > > >>> >> > wrote: > > > >>> Hi Austin, > > > >>> > > > >>> Inventory offers a /hawkular/inventory/sync endpoint that > > is used to > > > >>> synchronize the "world view" of feeds (feed being > > something that > > > >>> pushes data > > > >>> into inventory). > > > >>> > > > >>> You said though that a "user creates" the resources, so I > > am not > > > >>> sure if /sync > > > >>> would be applicable to your scenario. Would you please > > elaborate > > > >>> more on where > > > >>> in the inventory hierarchy you create your resources and > > how? I.e. > > > >>> are you > > > >>> using some sort of feed akin to Hawkular's Wildfly Agent > > or are you > > > >>> just > > > >>> creating your resources "manually" under environments? > > > >>> > > > >>> On Tuesday, June 21, 2016 02:20:33 AM Austin Kuo wrote: > > > >>> > Hi all, > > > >>> > > > > >>> > I?m currently investigating how to sync with inventory > > server. > > > >>> > Here?s the example scenario: > > > >>> > Consider the following problem. A user creates version 1 > > of the > > > >>> > > > >>> app with > > > >>> > > > >>> > two http servers, one listening on port 8080, the other > > on port > > > >>> > > > >>> 8181. In > > > >>> > > > >>> > version 2, the http server listening on port 8181 is no > > longer > > > >>> > needed. > > > >>> > When the old version is stopped and the new version > > started, there > > > >>> > > > >>> will be > > > >>> > > > >>> > just one http server listening. The application is not > > aware of > > > >>> > the > > > >>> > previous state. 
What should we do so that the second > > http server > > > >>> > > > >>> is removed > > > >>> > > > >>> > from Inventory? > > > >>> > > > > >>> > Thanks in advance. > > > >>> > > > >>> -- > > > >>> Lukas Krejci > > > >>> > > > >>> _______________________________________________ > > > >>> hawkular-dev mailing list > > > >>> hawkular-dev at lists.jboss.org > > > > > > > > > >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > >>> > > > >>> _______________________________________________ > > > >>> hawkular-dev mailing list > > > >>> hawkular-dev at lists.jboss.org hawkular-dev at lists.jboss.org> > > > >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > > > -- > > Lukas Krejci > > > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > > > > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > -- > Thomas Segismont > JBoss ON Engineering Team > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160708/5b5669e6/attachment-0001.html From gbrown at redhat.com Mon Jul 11 07:25:09 2016 From: gbrown at redhat.com (Gary Brown) Date: Mon, 11 Jul 2016 07:25:09 -0400 (EDT) Subject: [Hawkular-dev] PR Review Request: https://github.com/hawkular/hawkular-apm/pull/474 In-Reply-To: <2123989627.4968883.1468236140445.JavaMail.zimbra@redhat.com> Message-ID: <825034117.4969034.1468236309333.JavaMail.zimbra@redhat.com> Hi Simple PR to rename a maven module that builds the APM javaagent jar, which previously only supported REST based communication to the server, but now also supports Kafka. Regards Gary From gbrown at redhat.com Mon Jul 11 11:45:46 2016 From: gbrown at redhat.com (Gary Brown) Date: Mon, 11 Jul 2016 11:45:46 -0400 (EDT) Subject: [Hawkular-dev] PR Review Request: https://github.com/hawkular/hawkular-apm/pull/475 In-Reply-To: <1369186755.5065211.1468251863083.JavaMail.zimbra@redhat.com> Message-ID: <2111053924.5065415.1468251946321.JavaMail.zimbra@redhat.com> Another small change - to consolidate environment variables used for configuration. Selection of technology used is now based on a prefix used on the value, rather than on use of different environment variables. Regards Gary From auszon3 at gmail.com Tue Jul 12 02:26:38 2016 From: auszon3 at gmail.com (Austin Kuo) Date: Tue, 12 Jul 2016 06:26:38 +0000 Subject: [Hawkular-dev] Post a metricType under a resourceType, 400 bad request Message-ID: Hi all, I was trying to create a metric type under my resource type called ?MYRT?, Here?s body look like: {?collectionInterval?: 30, ?id?: ?metricTypeId123?, "type": "COUNTER", "unit": "NONE"} I posted this to url: http://localhost:8080/hawkular/inventory/deprecated/feeds/vertx-localhost/resourceTypes/MYRT/metricTypes It gave me 400 bad request. But I posed to http://localhost:8080/hawkular/inventory/deprecated/feeds/vertx-localhost/metricTypes . And it worked. Wondering why is that? Thanks -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160712/1780162b/attachment.html From jpkroehling at redhat.com Tue Jul 12 04:49:22 2016 From: jpkroehling at redhat.com (=?UTF-8?Q?Juraci_Paix=c3=a3o_Kr=c3=b6hling?=) Date: Tue, 12 Jul 2016 10:49:22 +0200 Subject: [Hawkular-dev] Master's thesis: Alert Prediction in Metric Data Based on Time Series Analysis In-Reply-To: References: Message-ID: <501683cb-0a0a-d0ae-84e7-d918dbcefb80@redhat.com> On 28.06.2016 14:44, Pavol Loffay wrote: > yesterday I successfully defended my thesis [1]. I would like to thank > to everyone from Hawkular team who helped me with the problems I was > facing. Last but not least I would like to thank to company Red Hat for > the opportunity to work on this project. This is really awesome! Congratulations! - Juca. From theute at redhat.com Tue Jul 12 11:45:01 2016 From: theute at redhat.com (Thomas Heute) Date: Tue, 12 Jul 2016 17:45:01 +0200 Subject: [Hawkular-dev] Hawkular Services vs Hawkular Community Message-ID: (If you don't know about Hawkular Community you can skip this email completely) After some discussions about Hawkular Community, here is our conclusion: - We will not have a separate Hawkular Community repository - We will ship 2 distributions for Hawkular Services, - one for dev/demo/quick test that will have an embedded Cassandra, a default user and the embedded agent enabled (This is what you get when building Hawkular Services today with "-Pembeddedc -Pdev"). - the other one that requires an external Cassandra server, no default user and the agent being disabled. - We'd like a simple UI to explore data (metrics, inventory, alerts), but this is not a priority, if someone in the community is looking for an Angular project it would be great, please contact us on this mailing list. We would advertise that work in the "Client" section of the revamped hawkular.org [1] as a separate download. (Note that this would likely be purely static content and could even be made available on a CDN) - For other side projects that are not mature to be included in Hawkular Community or side projects, they will be listed either in the "client" or "labs" section of the revamped website [1] depending on their maturity and scope. Thanks, Thomas [1] WIP: http://209.132.178.114:10188/ -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160712/aa40293e/attachment.html From mazz at redhat.com Tue Jul 12 13:19:01 2016 From: mazz at redhat.com (John Mazzitelli) Date: Tue, 12 Jul 2016 13:19:01 -0400 (EDT) Subject: [Hawkular-dev] Hawkular Services vs Hawkular Community In-Reply-To: References: Message-ID: <1814223324.4223927.1468343941794.JavaMail.zimbra@redhat.com> > - We'd like a simple UI to explore data (metrics, inventory, alerts), but > this is not a priority, if someone in the community is looking for an > Angular project it would be great, please contact us on this mailing list. I (and I think for Heiko, too) would love to see HawkFX integrated here. We already have it, and it is very useful as-is today. Of course, it can use lots of love to add enhancements to it, clean it up, etc. But it is a great start and like I said is useful today just as it is - so integrating it now would be a great value-add today. 
https://github.com/pilhuhn/hawkfx I have been using it a lot now that I know it exists :) From theute at redhat.com Tue Jul 12 13:31:07 2016 From: theute at redhat.com (Thomas Heute) Date: Tue, 12 Jul 2016 19:31:07 +0200 Subject: [Hawkular-dev] Hawkular Services vs Hawkular Community In-Reply-To: <1814223324.4223927.1468343941794.JavaMail.zimbra@redhat.com> References: <1814223324.4223927.1468343941794.JavaMail.zimbra@redhat.com> Message-ID: On Tue, Jul 12, 2016 at 7:19 PM, John Mazzitelli wrote: > > - We'd like a simple UI to explore data (metrics, inventory, alerts), but > > this is not a priority, if someone in the community is looking for an > > Angular project it would be great, please contact us on this mailing > list. > > I (and I think for Heiko, too) would love to see HawkFX integrated here. > We already have it, and it is very useful as-is today. Of course, it can > use lots of love to add enhancements to it, clean it up, etc. But it is a > great start and like I said is useful today just as it is - so integrating > it now would be a great value-add today. > It falls in the client/labs category, and in the PR for the new org it's under "labs": http://209.132.178.114:10188/clients/index.html > > https://github.com/pilhuhn/hawkfx > > I have been using it a lot now that I know it exists :) > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160712/6d59cfc8/attachment.html From mazz at redhat.com Tue Jul 12 15:24:27 2016 From: mazz at redhat.com (John Mazzitelli) Date: Tue, 12 Jul 2016 15:24:27 -0400 (EDT) Subject: [Hawkular-dev] agent blog on jolokia support In-Reply-To: <2001706729.4281020.1468351400284.JavaMail.zimbra@redhat.com> Message-ID: <387016135.4281375.1468351467716.JavaMail.zimbra@redhat.com> I decided to blog about this since it is a relatively unknown feature, but could be helpful to those that need to collect metric data from Jolokia endpoints. http://management-platform.blogspot.com/2016/07/collecting-jmx-data-and-storing-in.html From mazz at redhat.com Tue Jul 12 17:34:20 2016 From: mazz at redhat.com (John Mazzitelli) Date: Tue, 12 Jul 2016 17:34:20 -0400 (EDT) Subject: [Hawkular-dev] agent blog on prometheus support In-Reply-To: <601108940.4304866.1468359231631.JavaMail.zimbra@redhat.com> Message-ID: <1949684309.4304908.1468359260336.JavaMail.zimbra@redhat.com> I decided to blog about this since it, too, is a relatively unknown feature, but could be helpful to those that need to collect metric data from Prometheus endpoints. See: http://management-platform.blogspot.com/2016/07/collecting-prometheus-data-and-storing.html From tsegismo at redhat.com Wed Jul 13 04:18:32 2016 From: tsegismo at redhat.com (Thomas Segismont) Date: Wed, 13 Jul 2016 10:18:32 +0200 Subject: [Hawkular-dev] Post a metricType under a resourceType, 400 bad request In-Reply-To: References: Message-ID: <2fd76353-690e-0103-3f04-b8959a465068@redhat.com> Austin, Can you paste the 400 response body? Anything in the logs? Thanks Le 12/07/2016 ? 
08:26, Austin Kuo a écrit : > Hi all, > I was trying to create a metric type under my resource type called "MYRT", > Here's what the body looks like: > {"collectionInterval": 30, "id": "metricTypeId123", "type": "COUNTER", > "unit": "NONE"} > I posted this to > url: http://localhost:8080/hawkular/inventory/deprecated/feeds/vertx-localhost/resourceTypes/MYRT/metricTypes > It gave me 400 bad request. > But I posted > to http://localhost:8080/hawkular/inventory/deprecated/feeds/vertx-localhost/metricTypes. > And it worked. > Wondering why is that? > > Thanks > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -- Thomas Segismont JBoss ON Engineering Team From auszon3 at gmail.com Wed Jul 13 05:07:16 2016 From: auszon3 at gmail.com (Austin Kuo) Date: Wed, 13 Jul 2016 09:07:16 +0000 Subject: [Hawkular-dev] Post a metricType under a resourceType, 400 bad request In-Reply-To: <2fd76353-690e-0103-3f04-b8959a465068@redhat.com> References: <2fd76353-690e-0103-3f04-b8959a465068@redhat.com> Message-ID: The 400 response body. { "errorMsg": "Can not deserialize instance of java.util.ArrayList out of START_OBJECT token\n at [Source: io.undertow.servlet.spec.ServletInputStreamImpl at 3251f36a; line: 1, column: 1]" } On Wed, Jul 13, 2016 at 4:18 PM Thomas Segismont wrote: > Austin, > > Can you paste the 400 response body?
Anything in the logs? > > Thanks > > Le 12/07/2016 ? 08:26, Austin Kuo a ?crit : > > Hi all, > > I was trying to create a metric type under my resource type called > ?MYRT?, > > Here?s body look like: > > {?collectionInterval?: 30, ?id?: ?metricTypeId123?, "type": "COUNTER", > > "unit": "NONE"} > > I posted this to > > url: > http://localhost:8080/hawkular/inventory/deprecated/feeds/vertx-localhost/resourceTypes/MYRT/metricTypes > > It gave me 400 bad request. > > But I posed > > to > http://localhost:8080/hawkular/inventory/deprecated/feeds/vertx-localhost/metricTypes. > > And it worked. > > Wondering why is that? > > > > Thanks > > > > > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > -- > Thomas Segismont > JBoss ON Engineering Team > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -- Thomas Segismont JBoss ON Engineering Team From mazz at redhat.com Wed Jul 13 12:46:38 2016 From: mazz at redhat.com (John Mazzitelli) Date: Wed, 13 Jul 2016 12:46:38 -0400 (EDT) Subject: [Hawkular-dev] inventory and resource config props In-Reply-To: <1927097188.4577642.1468425979847.JavaMail.zimbra@redhat.com> Message-ID: <1028718149.4583836.1468428398842.JavaMail.zimbra@redhat.com> I ran into an Inventory issue today. I don't know what is the expected behavior - I hope what I saw isn't expected :) I ran the agent and it added everything in inventory successfully. I then shutdown the agent, added a new resource configuration property definition to a resource in the agent configuration, then re-ran the agent. The agent successfully found and attempted to add the new resource configuration property via the inventory bulk/ API, but inventory never stored it. When I look in inventory, I do not see my new resource configuration property attached to the already existing resource. The resource config props that were on the resource originally are still there, but the new one was never added. The agent builds the bulk payload in its AsyncInventoryStorage class - specifically, you can see adding the resource config props here: https://github.com/hawkular/hawkular-agent/blob/master/hawkular-wildfly-agent/src/main/java/org/hawkular/agent/monitor/storage/AsyncInventoryStorage.java#L348-L360 What is wrong? From mazz at redhat.com Wed Jul 13 12:55:50 2016 From: mazz at redhat.com (John Mazzitelli) Date: Wed, 13 Jul 2016 12:55:50 -0400 (EDT) Subject: [Hawkular-dev] inventory and resource config props In-Reply-To: <1028718149.4583836.1468428398842.JavaMail.zimbra@redhat.com> References: <1028718149.4583836.1468428398842.JavaMail.zimbra@redhat.com> Message-ID: <811076823.4585144.1468428950341.JavaMail.zimbra@redhat.com> And just for giggles, I tried something else. Leave the resource config definitions the same, but change their values. The agent can detect the current values (that are different from when they were originally stored in inventory) and it tries to send via bulk/ the new values, but Inventory retains the old values. 
It never updates the resource config property values to the new ones that the agent sends via bulk/ So, in short, it looks like once entities are added to inventory via bulk/ they can never be updated when sent via bulk/ again. I think this must involve sync. I need the inventory folks to tell me how the agent can do what I want - clearly, bulk/ is the wrong way to do this. ----- Original Message ----- > I ran into an Inventory issue today. I don't know what is the expected > behavior - I hope what I saw isn't expected :) > > I ran the agent and it added everything in inventory successfully. > > I then shutdown the agent, added a new resource configuration property > definition to a resource in the agent configuration, then re-ran the agent. > > The agent successfully found and attempted to add the new resource > configuration property via the inventory bulk/ API, but inventory never > stored it. When I look in inventory, I do not see my new resource > configuration property attached to the already existing resource. The > resource config props that were on the resource originally are still there, > but the new one was never added. > > The agent builds the bulk payload in its AsyncInventoryStorage class - > specifically, you can see adding the resource config props here: > https://github.com/hawkular/hawkular-agent/blob/master/hawkular-wildfly-agent/src/main/java/org/hawkular/agent/monitor/storage/AsyncInventoryStorage.java#L348-L360 > > What is wrong? > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > From auszon3 at gmail.com Thu Jul 14 05:16:36 2016 From: auszon3 at gmail.com (Austin Kuo) Date: Thu, 14 Jul 2016 09:16:36 +0000 Subject: [Hawkular-dev] Integrate metric data with inventory Message-ID: Hi, I?m currently working on vertx metric agent, my job is to enable the inventory such that users can view the inventory and corresponding metrics. My question is that how ?real metric data? and ?inventory metrics? are matched? Is it simply bound via id? Thanks, Austin -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160714/3629a5ea/attachment.html From lkrejci at redhat.com Thu Jul 14 05:35:33 2016 From: lkrejci at redhat.com (Lukas Krejci) Date: Thu, 14 Jul 2016 11:35:33 +0200 Subject: [Hawkular-dev] Integrate metric data with inventory In-Reply-To: References: Message-ID: <7493774.kS2gW6jxFP@localhost.localdomain> On ?tvrtek 14. ?ervence 2016 9:16:36 CEST Austin Kuo wrote: > Hi, > I?m currently working on vertx metric agent, my job is to enable the > inventory such that users can view the inventory and corresponding metrics. > My question is that how ?real metric data? and ?inventory metrics? are > matched? > Is it simply bound via id? > There is no predefined way of doing it actually. It might be by id (or you can use inventory's canonical path of the metrics as its id in h-metrics) or even use a different ID in h-metrics and only store an info about it in inventory. This is actually what wildfly agent is currently doing (see the thread starting with http://lists.jboss.org/pipermail/hawkular-dev/2016-July/002973.html). 
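The first option (one shared id) can be as simple as using the very same string when the metric is registered in inventory and when datapoints are pushed to Hawkular Metrics, so no extra mapping info is needed. An untested sketch of the Metrics side, assuming a local server, the default jdoe/password user, a "hawkular" tenant and the gauge ingestion path of the Metrics version in use (newer releases renamed .../data to .../raw, so check the Metrics REST docs); the id value itself is just an example:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.util.Base64;

public class PushGaugeSample {
    public static void main(String[] args) throws Exception {
        // the same id string is used for the metric entity in inventory (example value only)
        String sharedId = URLEncoder.encode("vertx-localhost.http.requests.count", "UTF-8");
        URL url = new URL("http://localhost:8080/hawkular/metrics/gauges/" + sharedId + "/data");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestProperty("Hawkular-Tenant", "hawkular"); // tenant name is an assumption
        conn.setRequestProperty("Authorization", "Basic "
                + Base64.getEncoder().encodeToString("jdoe:password".getBytes("UTF-8")));
        String body = "[{\"timestamp\": " + System.currentTimeMillis() + ", \"value\": 42.0}]";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes("UTF-8"));
        }
        System.out.println("metrics returned HTTP " + conn.getResponseCode());
    }
}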
> Thanks, > Austin -- Lukas Krejci From lkrejci at redhat.com Thu Jul 14 05:30:30 2016 From: lkrejci at redhat.com (Lukas Krejci) Date: Thu, 14 Jul 2016 11:30:30 +0200 Subject: [Hawkular-dev] inventory and resource config props In-Reply-To: <811076823.4585144.1468428950341.JavaMail.zimbra@redhat.com> References: <1028718149.4583836.1468428398842.JavaMail.zimbra@redhat.com> <811076823.4585144.1468428950341.JavaMail.zimbra@redhat.com> Message-ID: <1872170.eQ6IhhQFcy@localhost.localdomain> On st?eda 13. ?ervence 2016 12:55:50 CEST John Mazzitelli wrote: > And just for giggles, I tried something else. Leave the resource config > definitions the same, but change their values. The agent can detect the > current values (that are different from when they were originally stored in > inventory) and it tries to send via bulk/ the new values, but Inventory > retains the old values. It never updates the resource config property > values to the new ones that the agent sends via bulk/ > > So, in short, it looks like once entities are added to inventory via bulk/ > they can never be updated when sent via bulk/ again. > > I think this must involve sync. I need the inventory folks to tell me how > the agent can do what I want - clearly, bulk/ is the wrong way to do this. > Yes, /bulk is just for creating stuff. So if your entities already exist, bulk just skips them, never updates. This is exactly what /sync is for. > ----- Original Message ----- > > > I ran into an Inventory issue today. I don't know what is the expected > > behavior - I hope what I saw isn't expected :) > > > > I ran the agent and it added everything in inventory successfully. > > > > I then shutdown the agent, added a new resource configuration property > > definition to a resource in the agent configuration, then re-ran the > > agent. > > > > The agent successfully found and attempted to add the new resource > > configuration property via the inventory bulk/ API, but inventory never > > stored it. When I look in inventory, I do not see my new resource > > configuration property attached to the already existing resource. The > > resource config props that were on the resource originally are still > > there, > > but the new one was never added. > > > > The agent builds the bulk payload in its AsyncInventoryStorage class - > > specifically, you can see adding the resource config props here: > > https://github.com/hawkular/hawkular-agent/blob/master/hawkular-wildfly-ag > > ent/src/main/java/org/hawkular/agent/monitor/storage/AsyncInventoryStorage > > .java#L348-L360 > > > > What is wrong? > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev -- Lukas Krejci From mazz at redhat.com Thu Jul 14 08:41:44 2016 From: mazz at redhat.com (John Mazzitelli) Date: Thu, 14 Jul 2016 08:41:44 -0400 (EDT) Subject: [Hawkular-dev] inventory and resource config props In-Reply-To: <1872170.eQ6IhhQFcy@localhost.localdomain> References: <1028718149.4583836.1468428398842.JavaMail.zimbra@redhat.com> <811076823.4585144.1468428950341.JavaMail.zimbra@redhat.com> <1872170.eQ6IhhQFcy@localhost.localdomain> Message-ID: <802072933.4814145.1468500104519.JavaMail.zimbra@redhat.com> > Yes, /bulk is just for creating stuff. 
So if your entities already exist, > bulk just skips them, never updates. This is exactly what /sync is for. So can I assume I can just switch over my URL from "bulk/" to "sync/" and things will just work as expected? (i.e. resources and their related data like config props that don't exist are created, resources and their related data that do exist are updated)?? From lkrejci at redhat.com Thu Jul 14 09:48:01 2016 From: lkrejci at redhat.com (Lukas Krejci) Date: Thu, 14 Jul 2016 15:48:01 +0200 Subject: [Hawkular-dev] inventory and resource config props In-Reply-To: <802072933.4814145.1468500104519.JavaMail.zimbra@redhat.com> References: <1028718149.4583836.1468428398842.JavaMail.zimbra@redhat.com> <1872170.eQ6IhhQFcy@localhost.localdomain> <802072933.4814145.1468500104519.JavaMail.zimbra@redhat.com> Message-ID: <1468507140.GUObFCcsHJ@localhost.localdomain> On ?tvrtek 14. ?ervence 2016 8:41:44 CEST John Mazzitelli wrote: > > Yes, /bulk is just for creating stuff. So if your entities already exist, > > bulk just skips them, never updates. This is exactly what /sync is for. > > So can I assume I can just switch over my URL from "bulk/" to "sync/" and > things will just work as expected? (i.e. resources and their related data > like config props that don't exist are created, resources and their related > data that do exist are updated)?? Unfortunately the format of the data is different to what is used in /bulk. It still uses the various blueprint types but they are composed together differently. Also, there now is a builder for the whole structure, so it might be easier for you to make use of it. See https://github.com/hawkular/hawkular-inventory/blob/master/hawkular-inventory-api/src/main/java/org/hawkular/inventory/api/model/ InventoryStructure.java#L348 -- Lukas Krejci From mazz at redhat.com Thu Jul 14 10:00:49 2016 From: mazz at redhat.com (John Mazzitelli) Date: Thu, 14 Jul 2016 10:00:49 -0400 (EDT) Subject: [Hawkular-dev] inventory and resource config props In-Reply-To: <1468507140.GUObFCcsHJ@localhost.localdomain> References: <1028718149.4583836.1468428398842.JavaMail.zimbra@redhat.com> <1872170.eQ6IhhQFcy@localhost.localdomain> <802072933.4814145.1468500104519.JavaMail.zimbra@redhat.com> <1468507140.GUObFCcsHJ@localhost.localdomain> Message-ID: <847532125.4835526.1468504849765.JavaMail.zimbra@redhat.com> OK, the format of the data is less a concern to me - I just want to make sure what I want is possible. I have no problem converting from bulk/ to sync/ if the agent gets this create-and-automatic-update. And it sounds like it does. I created a HWKAGENT JIRA for this: https://issues.jboss.org/browse/HWKAGENT-119 ----- Original Message ----- > On ?tvrtek 14. ?ervence 2016 8:41:44 CEST John Mazzitelli wrote: > > > Yes, /bulk is just for creating stuff. So if your entities already exist, > > > bulk just skips them, never updates. This is exactly what /sync is for. > > > > So can I assume I can just switch over my URL from "bulk/" to "sync/" and > > things will just work as expected? (i.e. resources and their related data > > like config props that don't exist are created, resources and their related > > data that do exist are updated)?? > > Unfortunately the format of the data is different to what is used in /bulk. > It > still uses the various blueprint types but they are composed together > differently. Also, there now is a builder for the whole structure, so it > might > be easier for you to make use of it. 
> > See > https://github.com/hawkular/hawkular-inventory/blob/master/hawkular-inventory-api/src/main/java/org/hawkular/inventory/api/model/ > InventoryStructure.java#L348 > > -- > Lukas Krejci > From theute at redhat.com Thu Jul 14 10:20:00 2016 From: theute at redhat.com (Thomas Heute) Date: Thu, 14 Jul 2016 16:20:00 +0200 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: References: <1265553183.102271604.1465909592402.JavaMail.zimbra@redhat.com> <43292415.102368387.1465924138218.JavaMail.zimbra@redhat.com> <0D5FC92F-40C4-493F-A57B-206EA804DD58@redhat.com> <587e0bc6-44d9-3f14-e872-5a58b2058223@redhat.com> <23adbfcc-1e2f-2e92-bb4d-d205618e7c82@redhat.com> <6091FA5E-9110-4F49-BAD6-A7A297E075F9@redhat.com> Message-ID: The new structure is in place: http://www.hawkular.org/ There is less content than before since after the changes to Hawkular-Services, a lot of content was outdated. We'll need a quickstart and installation guide up soon. And some pages need a lot more love (see http://www.hawkular.org/hawkular-clients/ ), so please see where/how you can contribute so that people don't get lost when they reach hawkular.org Thomas On Fri, Jul 8, 2016 at 9:47 AM, Thomas Heute wrote: > Ok, so as there were no major issues reported, I'll continue with the most > important parts (overview and downloads). > > I'd like to roll this out quickly and iterate as rebasing will be > difficult... > > Also, I'll probably break some links, so that URL will match the new > menus. We can keep redirect pages if we find out that some links should > remain. > > Thomas > > On Fri, Jul 8, 2016 at 9:44 AM, Thomas Heute wrote: > >> Yes, I didn't rework the content of the pages, only the structure, but >> we'll definitely add with the project page links to the supported clients. >> >> Thomas >> >> On Thu, Jul 7, 2016 at 5:15 PM, Alissa Bonas wrote: >> >>> iirc we discussed adding an Events section (events as in >>> meetups/conferences) somewhere visible like front page? >>> Also, a summary of version compatibility matrix of all components with >>> each other would be helpful imo. >>> >>> On Thu, Jul 7, 2016 at 12:48 PM, Thomas Heute wrote: >>> >>>> Thanks for the comments, I'm really looking more for feedback on the >>>> organization of the content, all the content is taken from the existing. >>>> >>>> For the 3rd point, the current website has 2 very similar pages (I only >>>> conserved one here as it was a quick shot, but the 2 needs to be merged) >>>> http://www.hawkular.org/community/index.html >>>> http://www.hawkular.org/community/join.html >>>> >>>> >>>> On Thu, Jul 7, 2016 at 11:37 AM, Alissa Bonas >>>> wrote: >>>> >>>>> Couple of suggestions: >>>>> >>>>> 1. In the "Hawkular features" section in homepage make the icons >>>>> clickable. right now the only way to get more info is to click the "more" >>>>> part. >>>>> 2. Top level menu font color is a really pale grey so everything looks >>>>> disabled. >>>>> 3. Community-Connect leads to page named "Join". 
Perhaps it would more >>>>> clear to make the link and the page name the same (and I would call it "Get >>>>> involved" anyway) >>>>> >>>>> >>>>> >>>>> >>>>> On Thu, Jul 7, 2016 at 11:58 AM, Thomas Heute >>>>> wrote: >>>>> >>>>>> So here is a very rough idea of how it would look like: >>>>>> http://209.132.178.114:10188/ >>>>>> >>>>>> Content need to be adapted, new pages to be created, but hopefully >>>>>> you get the idea >>>>>> >>>>>> On Tue, Jul 5, 2016 at 8:27 PM, John Sanda wrote: >>>>>> >>>>>>> >>>>>>> On Jul 5, 2016, at 10:21 AM, Thomas Segismont >>>>>>> wrote: >>>>>>> >>>>>>> >>>>>>> >>>>>>> Le 05/07/2016 ? 16:05, Stefan Negrea a ?crit : >>>>>>> >>>>>>> On Tue, Jul 5, 2016 at 8:03 AM, Thomas Segismont < >>>>>>> tsegismo at redhat.com >>>>>>> >> wrote: >>>>>>> >>>>>>> >>>>>>> >>>>>>> Le 05/07/2016 ? 14:59, Heiko W.Rupp a ?crit : >>>>>>> >>>>>>> 2) The Grafana plugin should be moved under Metrics because is for >>>>>>> Metrics >>>>>>> >>>>>>> and only Metrics. >>>>>>> >>>>>>> If this is true - can we make it work with H-services as well? >>>>>>> >>>>>>> >>>>>>> The Grafana plugin works with all active flavors of Metrics: >>>>>>> standalone, >>>>>>> Openshift-Metrics and Hawkular-Services. >>>>>>> >>>>>>> I'm not sure what Stefan meant. >>>>>>> >>>>>>> >>>>>>> The Grafana plugins works with Metrics deployed on all distributions >>>>>>> however, the plugin itself can only be used with the Metrics project, >>>>>>> there are no projects (such as Alerts, or Inventory) that will ever >>>>>>> integrate with it. That is why I think it should be under the Metrics >>>>>>> project and not in another place. The integration itself is very >>>>>>> specific to just Metrics, not the entire Hawkular Services. >>>>>>> >>>>>>> >>>>>>> I see what you meant now. But we can't presume anything about other >>>>>>> services roadmaps. For example, the datasource plugin annotation >>>>>>> feature >>>>>>> could be implemented with requests to an event service. >>>>>>> >>>>>>> Anyway, since it should be able to connect to Metrics in different >>>>>>> environments (H-Services, OS-Metrics and standalone), I err on the >>>>>>> side >>>>>>> of promoting it as a top level project. >>>>>>> >>>>>>> >>>>>>> I think a top-level project makes the most sense. >>>>>>> >>>>>>> _______________________________________________ >>>>>>> hawkular-dev mailing list >>>>>>> hawkular-dev at lists.jboss.org >>>>>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>>>>> >>>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> hawkular-dev mailing list >>>>>> hawkular-dev at lists.jboss.org >>>>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>>>> >>>>>> >>>>> >>>>> _______________________________________________ >>>>> hawkular-dev mailing list >>>>> hawkular-dev at lists.jboss.org >>>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>>> >>>>> >>>> >>>> _______________________________________________ >>>> hawkular-dev mailing list >>>> hawkular-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>> >>>> >>> >>> _______________________________________________ >>> hawkular-dev mailing list >>> hawkular-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160714/0b8f378a/attachment-0001.html From mazz at redhat.com Thu Jul 14 17:25:03 2016 From: mazz at redhat.com (John Mazzitelli) Date: Thu, 14 Jul 2016 17:25:03 -0400 (EDT) Subject: [Hawkular-dev] integrating ansible and hawkular to start remote wildfly servers In-Reply-To: <567684372.4991448.1468529813402.JavaMail.zimbra@redhat.com> Message-ID: <2068959737.4995788.1468531503081.JavaMail.zimbra@redhat.com> I have a very simple PoC working that shows Ansible integrated with Hawkular such that it can start a WildFly server remotely. I'll begin with a brief summary. SUMMARY A client connects to the Hawkular Server via its websocket and passes in a JSON request (like it does the other kinds of requests). The JSON request looks something like: AnsibleRequest={"playbook":"start-wildfly.yml", "extraVars": { "wildfly_home_dir":"/opt/wildfly10" } } The command-gateway server takes the request, runs the Ansible playbook, and returns the response back over the websocket to the client with an AnsibleResponse (which includes the Ansible output to show what it did). DETAILS The code is in two branches in my repos. See [1] and [2]. You need both for this to work. Hawkular-Services packages up Ansible playbooks and its associated files in standalone/configuration/ansible. See [3]. Hawkular-Commons adds to command gateway a new JSON request/response for Ansible requests. See [4] and [5]. The Ansible command is invoked in the server when the AnsibleRequest JSON is received over the websocket. This command implementation runs the Ansible playbook and returns the results back in a AnsibleResponse. See [6]. If you build commons and then services to pull the new commons in, you can run a test to see it work. I use the test command CLI utility in the agent to do this [7]. First, build the new dist and run it so you have a Hawkular Service running (for example, "mvn clean install -Pdev,embeddedc"). Next, create a test JSON request file (say, "/tmp/hawkular.json") with this content: { "playbook":"start-wildfly.yml", "extraVars": { "wildfly_home_dir":"/directory/where/you/installed/wildfly" } } Now build the hawkular-agent from source (just so you get the test command CLI utility) and run the test command CLI, telling it to use the JSON in your file and send it to your Hawkular Service server: $ cd /hawkular-wildfly-agent-itest-parent/hawkular-wildfly-agent-command-cli/target $ java -jar hawkular-wildfly-agent-command-cli-*.jar --username jdoe --password password \ --command AnsibleRequest --request-file=/tmp/hawkular.json This sends the request to your local Hawkular Service server over the websocket, the Ansible playbook "start-wildfly.yml" is run, which starts the WildFly in that home dir you specified in your JSON request, and the response is sent back. The Command CLI tool will write the responses it receives to disk - look at the cli output for the names of the files it writes - you can look in there to see what the Ansible command response was (its just the full Ansible output). There is still LOTS to do here (for one, you'll notice it assumes your wildfly install is on "localhost" :)). This is merely a PoC that we can use as a starting point - it shows that this can be done and how we can do it. 
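For anyone who wants to poke at this without the command CLI, here is a rough, untested sketch of sending the same AnsibleRequest from plain Java with the standard javax.websocket client API (needs a JSR-356 client implementation such as Tyrus on the classpath). The websocket path below is a placeholder, use whatever endpoint the command CLI actually connects to, and authentication is left out:

import java.net.URI;
import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

@ClientEndpoint
public class AnsibleRequestClient {

    @OnMessage
    public void onMessage(String message) {
        // the AnsibleResponse (the full Ansible output) comes back on the same socket
        System.out.println("<-- " + message);
    }

    public static void main(String[] args) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        // placeholder URL - the real command gateway endpoint and credential handling are not shown here
        URI uri = URI.create("ws://localhost:8080/hawkular/command-gateway/ui/ws");
        try (Session session = container.connectToServer(AnsibleRequestClient.class, uri)) {
            session.getBasicRemote().sendText("AnsibleRequest={\"playbook\":\"start-wildfly.yml\","
                    + "\"extraVars\":{\"wildfly_home_dir\":\"/opt/wildfly10\"}}");
            Thread.sleep(30000); // crude: give the playbook time to run and the response to arrive
        }
    }
}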
-- John Mazz [1] https://github.com/jmazzitelli/hawkular-commons/tree/HAWKULAR-1096-ansible-commands [2] https://github.com/jmazzitelli/hawkular-services/tree/HAWKULAR-1096-ansible-commands [3] https://github.com/jmazzitelli/hawkular-services/tree/HAWKULAR-1096-ansible-commands/dist/src/main/resources/standalone/configuration/ansible [4] https://github.com/jmazzitelli/hawkular-commons/blob/HAWKULAR-1096-ansible-commands/hawkular-command-gateway/hawkular-command-gateway-api/src/main/resources/schema/AnsibleRequest.schema.json [5] https://github.com/jmazzitelli/hawkular-commons/blob/HAWKULAR-1096-ansible-commands/hawkular-command-gateway/hawkular-command-gateway-api/src/main/resources/schema/AnsibleResponse.schema.json [6] https://github.com/jmazzitelli/hawkular-commons/blob/HAWKULAR-1096-ansible-commands/hawkular-command-gateway/hawkular-command-gateway-war/src/main/java/org/hawkular/cmdgw/command/ws/AnsibleCommand.java [7] https://github.com/hawkular/hawkular-agent/tree/master/hawkular-wildfly-agent-itest-parent/hawkular-wildfly-agent-command-cli From mazz at redhat.com Thu Jul 14 21:57:59 2016 From: mazz at redhat.com (John Mazzitelli) Date: Thu, 14 Jul 2016 21:57:59 -0400 (EDT) Subject: [Hawkular-dev] integrating ansible and hawkular to start remote wildfly servers In-Reply-To: <2068959737.4995788.1468531503081.JavaMail.zimbra@redhat.com> References: <2068959737.4995788.1468531503081.JavaMail.zimbra@redhat.com> Message-ID: <2024470761.5019568.1468547879321.JavaMail.zimbra@redhat.com> Slight change - I found out there is a feature in feature-pack stuff that let you ship with direct file content that gets copied into the server. So I put the Ansible scripts in the feature pack. This is the correct way to do it. Now anyone building the server with the feature pack (integration tests, for example) will get the Ansible integration as well. So link [3] changes from my original email. It is now: [3] https://github.com/jmazzitelli/hawkular-services/tree/HAWKULAR-1096-ansible-commands/feature-pack/src/main/content/standalone/configuration/ansible ----- Original Message ----- > I have a very simple PoC working that shows Ansible integrated with Hawkular > such that it can start a WildFly server remotely. > > I'll begin with a brief summary. > > SUMMARY > > A client connects to the Hawkular Server via its websocket and passes in a > JSON request (like it does the other kinds of requests). The JSON request > looks something like: > > AnsibleRequest={"playbook":"start-wildfly.yml", "extraVars": { > "wildfly_home_dir":"/opt/wildfly10" } } > > The command-gateway server takes the request, runs the Ansible playbook, and > returns the response back over the websocket to the client with an > AnsibleResponse (which includes the Ansible output to show what it did). > > DETAILS > > The code is in two branches in my repos. See [1] and [2]. You need both for > this to work. > > Hawkular-Services packages up Ansible playbooks and its associated files in > standalone/configuration/ansible. See [3]. > > Hawkular-Commons adds to command gateway a new JSON request/response for > Ansible requests. See [4] and [5]. > > The Ansible command is invoked in the server when the AnsibleRequest JSON is > received over the websocket. This command implementation runs the Ansible > playbook and returns the results back in a AnsibleResponse. See [6]. > > If you build commons and then services to pull the new commons in, you can > run a test to see it work. I use the test command CLI utility in the agent > to do this [7]. 
First, build the new dist and run it so you have a Hawkular > Service running (for example, "mvn clean install -Pdev,embeddedc"). Next, > create a test JSON request file (say, "/tmp/hawkular.json") with this > content: > > { "playbook":"start-wildfly.yml", > "extraVars": { > "wildfly_home_dir":"/directory/where/you/installed/wildfly" > } > } > > Now build the hawkular-agent from source (just so you get the test command > CLI utility) and run the test command CLI, telling it to use the JSON in > your file and send it to your Hawkular Service server: > > $ cd > /hawkular-wildfly-agent-itest-parent/hawkular-wildfly-agent-command-cli/target > > $ java -jar hawkular-wildfly-agent-command-cli-*.jar --username jdoe > --password password \ > --command AnsibleRequest --request-file=/tmp/hawkular.json > > This sends the request to your local Hawkular Service server over the > websocket, the Ansible playbook "start-wildfly.yml" is run, which starts the > WildFly in that home dir you specified in your JSON request, and the > response is sent back. The Command CLI tool will write the responses it > receives to disk - look at the cli output for the names of the files it > writes - you can look in there to see what the Ansible command response was > (its just the full Ansible output). > > There is still LOTS to do here (for one, you'll notice it assumes your > wildfly install is on "localhost" :)). This is merely a PoC that we can use > as a starting point - it shows that this can be done and how we can do it. > > -- John Mazz > > [1] > https://github.com/jmazzitelli/hawkular-commons/tree/HAWKULAR-1096-ansible-commands > [2] > https://github.com/jmazzitelli/hawkular-services/tree/HAWKULAR-1096-ansible-commands > [3] > https://github.com/jmazzitelli/hawkular-services/tree/HAWKULAR-1096-ansible-commands/dist/src/main/resources/standalone/configuration/ansible > [4] > https://github.com/jmazzitelli/hawkular-commons/blob/HAWKULAR-1096-ansible-commands/hawkular-command-gateway/hawkular-command-gateway-api/src/main/resources/schema/AnsibleRequest.schema.json > [5] > https://github.com/jmazzitelli/hawkular-commons/blob/HAWKULAR-1096-ansible-commands/hawkular-command-gateway/hawkular-command-gateway-api/src/main/resources/schema/AnsibleResponse.schema.json > [6] > https://github.com/jmazzitelli/hawkular-commons/blob/HAWKULAR-1096-ansible-commands/hawkular-command-gateway/hawkular-command-gateway-war/src/main/java/org/hawkular/cmdgw/command/ws/AnsibleCommand.java > [7] > https://github.com/hawkular/hawkular-agent/tree/master/hawkular-wildfly-agent-itest-parent/hawkular-wildfly-agent-command-cli > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > From auszon3 at gmail.com Thu Jul 14 23:41:06 2016 From: auszon3 at gmail.com (Austin Kuo) Date: Fri, 15 Jul 2016 03:41:06 +0000 Subject: [Hawkular-dev] Integrate metric data with inventory In-Reply-To: <7493774.kS2gW6jxFP@localhost.localdomain> References: <7493774.kS2gW6jxFP@localhost.localdomain> Message-ID: Ok! So h-inventory metric has a property 'metric-id', this property is used for mapping h-metric metric, right? Thanks On Thu, Jul 14, 2016 at 5:40 PM Lukas Krejci wrote: > On ?tvrtek 14. ?ervence 2016 9:16:36 CEST Austin Kuo wrote: > > Hi, > > I?m currently working on vertx metric agent, my job is to enable the > > inventory such that users can view the inventory and corresponding > metrics. > > My question is that how ?real metric data? 
and ?inventory metrics? are > > matched? > > Is it simply bound via id? > > > > There is no predefined way of doing it actually. It might be by id (or you > can > use inventory's canonical path of the metrics as its id in h-metrics) or > even > use a different ID in h-metrics and only store an info about it in > inventory. > This is actually what wildfly agent is currently doing (see the thread > starting with > http://lists.jboss.org/pipermail/hawkular-dev/2016-July/002973.html). > > > Thanks, > > Austin > > > -- > Lukas Krejci > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160715/a11b6432/attachment.html From mazz at redhat.com Fri Jul 15 08:01:00 2016 From: mazz at redhat.com (John Mazzitelli) Date: Fri, 15 Jul 2016 08:01:00 -0400 (EDT) Subject: [Hawkular-dev] Integrate metric data with inventory In-Reply-To: References: <7493774.kS2gW6jxFP@localhost.localdomain> Message-ID: <107353213.5365179.1468584060857.JavaMail.zimbra@redhat.com> > So h-inventory metric has a property 'metric-id', this property is used for > mapping h-metric metric, right? Yes. "metric-id" is an optional property - it does not have to exist. But if it does exist, yes, that is your explicit mapping to the hawkular-metric ID. If "metric-id" property does NOT exist on the inventory metric, then you assume the inventory ID is the same as the hawkular-metrics ID. ----- Original Message ----- > Ok! > So h-inventory metric has a property 'metric-id', this property is used for > mapping h-metric metric, right? > Thanks > > On Thu, Jul 14, 2016 at 5:40 PM Lukas Krejci < lkrejci at redhat.com > wrote: > > > On ?tvrtek 14. ?ervence 2016 9:16:36 CEST Austin Kuo wrote: > > Hi, > > I?m currently working on vertx metric agent, my job is to enable the > > inventory such that users can view the inventory and corresponding metrics. > > My question is that how ?real metric data? and ?inventory metrics? are > > matched? > > Is it simply bound via id? > > > > There is no predefined way of doing it actually. It might be by id (or you > can > use inventory's canonical path of the metrics as its id in h-metrics) or even > use a different ID in h-metrics and only store an info about it in inventory. > This is actually what wildfly agent is currently doing (see the thread > starting with > http://lists.jboss.org/pipermail/hawkular-dev/2016-July/002973.html ). 
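The mapping rule described above amounts to a very small piece of client-side logic. Purely as an illustration (not code from any Hawkular client), a minimal Java sketch, assuming the caller already has the inventory metric's id and its, possibly empty, properties map; only the "metric-id" property name comes from this thread, everything else is made up for the example:

import java.util.Collections;
import java.util.Map;

public final class MetricIdResolver {

    // Resolve the Hawkular Metrics id for an inventory metric: use the optional
    // "metric-id" property when it is present, otherwise fall back to the inventory id.
    public static String resolveHawkularMetricsId(String inventoryId, Map<String, String> properties) {
        if (properties != null) {
            String explicit = properties.get("metric-id");
            if (explicit != null && !explicit.isEmpty()) {
                return explicit;
            }
        }
        return inventoryId;
    }

    public static void main(String[] args) {
        // explicit mapping present
        System.out.println(resolveHawkularMetricsId("eventbus.handlers",
                Collections.singletonMap("metric-id", "vertx~eventbus.handlers")));
        // no mapping: the inventory id doubles as the hawkular-metrics id
        System.out.println(resolveHawkularMetricsId("eventbus.handlers",
                Collections.<String, String>emptyMap()));
    }
}

Whether HawkFX, the ruby client or the ManageIQ provider implement exactly this fallback is up to them; the point is only that the lookup is trivial once the convention is agreed on.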
> > > Thanks, > > Austin > > > -- > Lukas Krejci > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > From jpkroehling at redhat.com Fri Jul 15 09:52:08 2016 From: jpkroehling at redhat.com (=?UTF-8?Q?Juraci_Paix=c3=a3o_Kr=c3=b6hling?=) Date: Fri, 15 Jul 2016 15:52:08 +0200 Subject: [Hawkular-dev] Hawkular Services vs Hawkular Community In-Reply-To: References: Message-ID: <911048b2-55ce-16b5-0093-ea5f94665ebb@redhat.com> On 12.07.2016 17:45, Thomas Heute wrote: > - We will ship 2 distributions for Hawkular Services, > - one for dev/demo/quick test that will have an embedded > Cassandra, a default user and the embedded agent enabled (This is what > you get when building Hawkular Services today with "-Pembeddedc -Pdev"). Once the dev distribution is done, I can adjust the script to upload those artifacts to GitHub as well. As of now, the dev distribution is a "replacement" for the main artifact, so, it's not uploaded to Nexus and it's not part of a release. - Juca. From tsegismo at redhat.com Fri Jul 15 17:44:59 2016 From: tsegismo at redhat.com (Thomas Segismont) Date: Fri, 15 Jul 2016 23:44:59 +0200 Subject: [Hawkular-dev] Integrate metric data with inventory In-Reply-To: <107353213.5365179.1468584060857.JavaMail.zimbra@redhat.com> References: <7493774.kS2gW6jxFP@localhost.localdomain> <107353213.5365179.1468584060857.JavaMail.zimbra@redhat.com> Message-ID: <51c408c1-0075-38cf-ffcf-76f8f08ae0e1@redhat.com> That sounds like a nice solution to me. That would allow to keep the existing metric names. My only concern is the following: is this metric-id attribute something clients like HawkFx and ManageIQ provider look at? Le 15/07/2016 ? 14:01, John Mazzitelli a ?crit : >> So h-inventory metric has a property 'metric-id', this property is used for >> mapping h-metric metric, right? > > Yes. > > "metric-id" is an optional property - it does not have to exist. But if it does exist, yes, that is your explicit mapping to the hawkular-metric ID. > > If "metric-id" property does NOT exist on the inventory metric, then you assume the inventory ID is the same as the hawkular-metrics ID. > > ----- Original Message ----- >> Ok! >> So h-inventory metric has a property 'metric-id', this property is used for >> mapping h-metric metric, right? >> Thanks >> >> On Thu, Jul 14, 2016 at 5:40 PM Lukas Krejci < lkrejci at redhat.com > wrote: >> >> >> On ?tvrtek 14. ?ervence 2016 9:16:36 CEST Austin Kuo wrote: >>> Hi, >>> I?m currently working on vertx metric agent, my job is to enable the >>> inventory such that users can view the inventory and corresponding metrics. >>> My question is that how ?real metric data? and ?inventory metrics? are >>> matched? >>> Is it simply bound via id? >>> >> >> There is no predefined way of doing it actually. It might be by id (or you >> can >> use inventory's canonical path of the metrics as its id in h-metrics) or even >> use a different ID in h-metrics and only store an info about it in inventory. >> This is actually what wildfly agent is currently doing (see the thread >> starting with >> http://lists.jboss.org/pipermail/hawkular-dev/2016-July/002973.html ). 
>> >>> Thanks, >>> Austin >> >> >> -- >> Lukas Krejci >> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -- Thomas Segismont JBoss ON Engineering Team From mazz at redhat.com Fri Jul 15 19:59:15 2016 From: mazz at redhat.com (John Mazzitelli) Date: Fri, 15 Jul 2016 19:59:15 -0400 (EDT) Subject: [Hawkular-dev] Integrate metric data with inventory In-Reply-To: <51c408c1-0075-38cf-ffcf-76f8f08ae0e1@redhat.com> References: <7493774.kS2gW6jxFP@localhost.localdomain> <107353213.5365179.1468584060857.JavaMail.zimbra@redhat.com> <51c408c1-0075-38cf-ffcf-76f8f08ae0e1@redhat.com> Message-ID: <826680666.5659302.1468627155075.JavaMail.zimbra@redhat.com> HawkFX does look at it, yes. (that PR got merged a few days ago). The ruby client needs to add it - Heiko asked that I create an issue for it, which I did. It is here: https://github.com/hawkular/hawkular-client-ruby/issues/110 ----- Original Message ----- > That sounds like a nice solution to me. That would allow to keep the > existing metric names. My only concern is the following: is this > metric-id attribute something clients like HawkFx and ManageIQ provider > look at? > > Le 15/07/2016 ? 14:01, John Mazzitelli a ?crit : > >> So h-inventory metric has a property 'metric-id', this property is used > >> for > >> mapping h-metric metric, right? > > > > Yes. > > > > "metric-id" is an optional property - it does not have to exist. But if it > > does exist, yes, that is your explicit mapping to the hawkular-metric ID. > > > > If "metric-id" property does NOT exist on the inventory metric, then you > > assume the inventory ID is the same as the hawkular-metrics ID. > > > > ----- Original Message ----- > >> Ok! > >> So h-inventory metric has a property 'metric-id', this property is used > >> for > >> mapping h-metric metric, right? > >> Thanks > >> > >> On Thu, Jul 14, 2016 at 5:40 PM Lukas Krejci < lkrejci at redhat.com > wrote: > >> > >> > >> On ?tvrtek 14. ?ervence 2016 9:16:36 CEST Austin Kuo wrote: > >>> Hi, > >>> I?m currently working on vertx metric agent, my job is to enable the > >>> inventory such that users can view the inventory and corresponding > >>> metrics. > >>> My question is that how ?real metric data? and ?inventory metrics? are > >>> matched? > >>> Is it simply bound via id? > >>> > >> > >> There is no predefined way of doing it actually. It might be by id (or you > >> can > >> use inventory's canonical path of the metrics as its id in h-metrics) or > >> even > >> use a different ID in h-metrics and only store an info about it in > >> inventory. > >> This is actually what wildfly agent is currently doing (see the thread > >> starting with > >> http://lists.jboss.org/pipermail/hawkular-dev/2016-July/002973.html ). 
> >> > >>> Thanks, > >>> Austin > >> > >> > >> -- > >> Lukas Krejci > >> > >> _______________________________________________ > >> hawkular-dev mailing list > >> hawkular-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/hawkular-dev > >> > >> _______________________________________________ > >> hawkular-dev mailing list > >> hawkular-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/hawkular-dev > >> > > > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > -- > Thomas Segismont > JBoss ON Engineering Team > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > From theute at redhat.com Mon Jul 18 05:52:32 2016 From: theute at redhat.com (Thomas Heute) Date: Mon, 18 Jul 2016 11:52:32 +0200 Subject: [Hawkular-dev] integrating ansible and hawkular to start remote wildfly servers In-Reply-To: <2024470761.5019568.1468547879321.JavaMail.zimbra@redhat.com> References: <2068959737.4995788.1468531503081.JavaMail.zimbra@redhat.com> <2024470761.5019568.1468547879321.JavaMail.zimbra@redhat.com> Message-ID: So Ansible is supposed to be already installed and available on the machine / (docker container) running Hawkular Services, right ? Thomas On Fri, Jul 15, 2016 at 3:57 AM, John Mazzitelli wrote: > Slight change - I found out there is a feature in feature-pack stuff that > let you ship with direct file content that gets copied into the server. So > I put the Ansible scripts in the feature pack. This is the correct way to > do it. Now anyone building the server with the feature pack (integration > tests, for example) will get the Ansible integration as well. > > So link [3] changes from my original email. It is now: > > [3] > https://github.com/jmazzitelli/hawkular-services/tree/HAWKULAR-1096-ansible-commands/feature-pack/src/main/content/standalone/configuration/ansible > > ----- Original Message ----- > > I have a very simple PoC working that shows Ansible integrated with > Hawkular > > such that it can start a WildFly server remotely. > > > > I'll begin with a brief summary. > > > > SUMMARY > > > > A client connects to the Hawkular Server via its websocket and passes in > a > > JSON request (like it does the other kinds of requests). The JSON request > > looks something like: > > > > AnsibleRequest={"playbook":"start-wildfly.yml", "extraVars": { > > "wildfly_home_dir":"/opt/wildfly10" } } > > > > The command-gateway server takes the request, runs the Ansible playbook, > and > > returns the response back over the websocket to the client with an > > AnsibleResponse (which includes the Ansible output to show what it did). > > > > DETAILS > > > > The code is in two branches in my repos. See [1] and [2]. You need both > for > > this to work. > > > > Hawkular-Services packages up Ansible playbooks and its associated files > in > > standalone/configuration/ansible. See [3]. > > > > Hawkular-Commons adds to command gateway a new JSON request/response for > > Ansible requests. See [4] and [5]. > > > > The Ansible command is invoked in the server when the AnsibleRequest > JSON is > > received over the websocket. This command implementation runs the Ansible > > playbook and returns the results back in a AnsibleResponse. See [6]. 
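The real command implementation is the one linked as [6]; the following is only a rough sketch of the idea it describes (run the playbook, capture the console output, hand it back for the response). The class name, the playbook directory handling and the way the output is returned are assumptions for illustration, not code from that branch:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical server-side helper: runs "ansible-playbook" for the requested
// playbook and returns the full console output, roughly what an AnsibleResponse carries.
public class AnsiblePlaybookRunner {

    private final Path playbookDir;

    public AnsiblePlaybookRunner(Path playbookDir) {
        this.playbookDir = playbookDir;
    }

    public String run(String playbook, Map<String, String> extraVars)
            throws IOException, InterruptedException {
        List<String> cmd = new ArrayList<>();
        cmd.add("ansible-playbook"); // assumed to be installed and on the PATH
        cmd.add(playbookDir.resolve(playbook).toString());
        for (Map.Entry<String, String> var : extraVars.entrySet()) {
            cmd.add("--extra-vars");
            cmd.add(var.getKey() + "=" + var.getValue());
        }
        Process process = new ProcessBuilder(cmd)
                .redirectErrorStream(true) // merge stderr into stdout
                .start();
        StringBuilder output = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                output.append(line).append('\n');
            }
        }
        int exitCode = process.waitFor();
        return "ansible-playbook exited with " + exitCode + "\n" + output;
    }
}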
> > > > If you build commons and then services to pull the new commons in, you > can > > run a test to see it work. I use the test command CLI utility in the > agent > > to do this [7]. First, build the new dist and run it so you have a > Hawkular > > Service running (for example, "mvn clean install -Pdev,embeddedc"). Next, > > create a test JSON request file (say, "/tmp/hawkular.json") with this > > content: > > > > { "playbook":"start-wildfly.yml", > > "extraVars": { > > "wildfly_home_dir":"/directory/where/you/installed/wildfly" > > } > > } > > > > Now build the hawkular-agent from source (just so you get the test > command > > CLI utility) and run the test command CLI, telling it to use the JSON in > > your file and send it to your Hawkular Service server: > > > > $ cd > > > /hawkular-wildfly-agent-itest-parent/hawkular-wildfly-agent-command-cli/target > > > > $ java -jar hawkular-wildfly-agent-command-cli-*.jar --username jdoe > > --password password \ > > --command AnsibleRequest --request-file=/tmp/hawkular.json > > > > This sends the request to your local Hawkular Service server over the > > websocket, the Ansible playbook "start-wildfly.yml" is run, which starts > the > > WildFly in that home dir you specified in your JSON request, and the > > response is sent back. The Command CLI tool will write the responses it > > receives to disk - look at the cli output for the names of the files it > > writes - you can look in there to see what the Ansible command response > was > > (its just the full Ansible output). > > > > There is still LOTS to do here (for one, you'll notice it assumes your > > wildfly install is on "localhost" :)). This is merely a PoC that we can > use > > as a starting point - it shows that this can be done and how we can do > it. > > > > -- John Mazz > > > > [1] > > > https://github.com/jmazzitelli/hawkular-commons/tree/HAWKULAR-1096-ansible-commands > > [2] > > > https://github.com/jmazzitelli/hawkular-services/tree/HAWKULAR-1096-ansible-commands > > [3] > > > https://github.com/jmazzitelli/hawkular-services/tree/HAWKULAR-1096-ansible-commands/dist/src/main/resources/standalone/configuration/ansible > > [4] > > > https://github.com/jmazzitelli/hawkular-commons/blob/HAWKULAR-1096-ansible-commands/hawkular-command-gateway/hawkular-command-gateway-api/src/main/resources/schema/AnsibleRequest.schema.json > > [5] > > > https://github.com/jmazzitelli/hawkular-commons/blob/HAWKULAR-1096-ansible-commands/hawkular-command-gateway/hawkular-command-gateway-api/src/main/resources/schema/AnsibleResponse.schema.json > > [6] > > > https://github.com/jmazzitelli/hawkular-commons/blob/HAWKULAR-1096-ansible-commands/hawkular-command-gateway/hawkular-command-gateway-war/src/main/java/org/hawkular/cmdgw/command/ws/AnsibleCommand.java > > [7] > > > https://github.com/hawkular/hawkular-agent/tree/master/hawkular-wildfly-agent-itest-parent/hawkular-wildfly-agent-command-cli > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160718/667b867a/attachment.html From theute at redhat.com Mon Jul 18 05:55:15 2016 From: theute at redhat.com (Thomas Heute) Date: Mon, 18 Jul 2016 11:55:15 +0200 Subject: [Hawkular-dev] Hawkular Services vs Hawkular Community In-Reply-To: <911048b2-55ce-16b5-0093-ea5f94665ebb@redhat.com> References: <911048b2-55ce-16b5-0093-ea5f94665ebb@redhat.com> Message-ID: Sounds good, this is not critical anyway On Fri, Jul 15, 2016 at 3:52 PM, Juraci Paix?o Kr?hling < jpkroehling at redhat.com> wrote: > On 12.07.2016 17:45, Thomas Heute wrote: > > - We will ship 2 distributions for Hawkular Services, > > - one for dev/demo/quick test that will have an embedded > > Cassandra, a default user and the embedded agent enabled (This is what > > you get when building Hawkular Services today with "-Pembeddedc > -Pdev"). > > Once the dev distribution is done, I can adjust the script to upload > those artifacts to GitHub as well. > > As of now, the dev distribution is a "replacement" for the main > artifact, so, it's not uploaded to Nexus and it's not part of a release. > > - Juca. > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160718/9aaf9474/attachment.html From tsegismo at redhat.com Mon Jul 18 17:19:32 2016 From: tsegismo at redhat.com (Thomas Segismont) Date: Mon, 18 Jul 2016 23:19:32 +0200 Subject: [Hawkular-dev] Integrate metric data with inventory In-Reply-To: <826680666.5659302.1468627155075.JavaMail.zimbra@redhat.com> References: <7493774.kS2gW6jxFP@localhost.localdomain> <107353213.5365179.1468584060857.JavaMail.zimbra@redhat.com> <51c408c1-0075-38cf-ffcf-76f8f08ae0e1@redhat.com> <826680666.5659302.1468627155075.JavaMail.zimbra@redhat.com> Message-ID: <4f9686e5-4874-0d7a-93e4-45690232e5a5@redhat.com> Great! Le 16/07/2016 ? 01:59, John Mazzitelli a ?crit : > HawkFX does look at it, yes. (that PR got merged a few days ago). > > The ruby client needs to add it - Heiko asked that I create an issue for it, which I did. It is here: > > https://github.com/hawkular/hawkular-client-ruby/issues/110 > > ----- Original Message ----- >> That sounds like a nice solution to me. That would allow to keep the >> existing metric names. My only concern is the following: is this >> metric-id attribute something clients like HawkFx and ManageIQ provider >> look at? >> >> Le 15/07/2016 ? 14:01, John Mazzitelli a ?crit : >>>> So h-inventory metric has a property 'metric-id', this property is used >>>> for >>>> mapping h-metric metric, right? >>> >>> Yes. >>> >>> "metric-id" is an optional property - it does not have to exist. But if it >>> does exist, yes, that is your explicit mapping to the hawkular-metric ID. >>> >>> If "metric-id" property does NOT exist on the inventory metric, then you >>> assume the inventory ID is the same as the hawkular-metrics ID. >>> >>> ----- Original Message ----- >>>> Ok! >>>> So h-inventory metric has a property 'metric-id', this property is used >>>> for >>>> mapping h-metric metric, right? >>>> Thanks >>>> >>>> On Thu, Jul 14, 2016 at 5:40 PM Lukas Krejci < lkrejci at redhat.com > wrote: >>>> >>>> >>>> On ?tvrtek 14. 
?ervence 2016 9:16:36 CEST Austin Kuo wrote: >>>>> Hi, >>>>> I?m currently working on vertx metric agent, my job is to enable the >>>>> inventory such that users can view the inventory and corresponding >>>>> metrics. >>>>> My question is that how ?real metric data? and ?inventory metrics? are >>>>> matched? >>>>> Is it simply bound via id? >>>>> >>>> >>>> There is no predefined way of doing it actually. It might be by id (or you >>>> can >>>> use inventory's canonical path of the metrics as its id in h-metrics) or >>>> even >>>> use a different ID in h-metrics and only store an info about it in >>>> inventory. >>>> This is actually what wildfly agent is currently doing (see the thread >>>> starting with >>>> http://lists.jboss.org/pipermail/hawkular-dev/2016-July/002973.html ). >>>> >>>>> Thanks, >>>>> Austin >>>> >>>> >>>> -- >>>> Lukas Krejci >>>> >>>> _______________________________________________ >>>> hawkular-dev mailing list >>>> hawkular-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>> >>>> _______________________________________________ >>>> hawkular-dev mailing list >>>> hawkular-dev at lists.jboss.org >>>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>> >>> >>> _______________________________________________ >>> hawkular-dev mailing list >>> hawkular-dev at lists.jboss.org >>> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>> >> >> -- >> Thomas Segismont >> JBoss ON Engineering Team >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -- Thomas Segismont JBoss ON Engineering Team From auszon3 at gmail.com Tue Jul 19 02:51:33 2016 From: auszon3 at gmail.com (Austin Kuo) Date: Tue, 19 Jul 2016 06:51:33 +0000 Subject: [Hawkular-dev] Status of integration of inventory and vertx metrics Message-ID: Hi, The current status is that I have created - a feed - a root resource with a resource type - a eventbus resource as a child of the root resource above with a resource type - a eventbus.handlers metrics with gauge metric type. This has a property 'metric-id' corresponding to the id of the metric data. Fortunately, now I can view the metric data graph from the client (hawkfx). Austin, Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160719/f59b76d8/attachment.html From jpkroehling at redhat.com Tue Jul 19 05:05:52 2016 From: jpkroehling at redhat.com (=?UTF-8?Q?Juraci_Paix=c3=a3o_Kr=c3=b6hling?=) Date: Tue, 19 Jul 2016 11:05:52 +0200 Subject: [Hawkular-dev] Hawkular Services 0.0.6.Final Message-ID: <29f6cdb2-eec4-e2bf-a9b4-1ee40675ea97@redhat.com> Team, Hawkular Services 0.0.6.Final has just been released. The only functional change between this and the previous version is that Hawkular Services now reports itself as a "Hawkular" server, instead of "Wildfly" (HAWKULAR-1098) As the previous distributions, the Agent has to be configured with an user. 
This can be accomplished by: - Adding a user via bin/add-user.sh like: ./bin/add-user.sh \ -a \ -u <username> \ -p <password> \ -g read-write,read-only - Changing the Agent's credential on standalone.xml to the credentials from the previous step or by passing hawkular.rest.user / hawkular.rest.password as system properties (-Dhawkular.rest.user=jdoe) You can find the release packages, sources and checksums at GitHub, in addition to Maven: https://github.com/hawkular/hawkular-services/releases/tag/0.0.6.Final Shortcuts for the downloads: Zip - https://git.io/vKwxm tar.gz - https://git.io/vKwxI - Juca. From gbrown at redhat.com Tue Jul 19 10:03:06 2016 From: gbrown at redhat.com (Gary Brown) Date: Tue, 19 Jul 2016 10:03:06 -0400 (EDT) Subject: [Hawkular-dev] Hawkular APM / Zipkin Adapter In-Reply-To: <1469667643.6392076.1468936934954.JavaMail.zimbra@redhat.com> Message-ID: <1416067697.6392368.1468936986205.JavaMail.zimbra@redhat.com> Hi Here is a short demo showing the initial work capturing information reported by Zipkin clients and translating it for use by the Hawkular APM server. https://youtu.be/x6H2YJi2v1o We will be adding more capabilities in the coming weeks. If you have any specific requirements please let us know (or just create a jira). If anyone has a polyglot example instrumented with zipkin clients that we could test please get in contact. Regards Gary From tsegismo at redhat.com Wed Jul 20 16:39:34 2016 From: tsegismo at redhat.com (Thomas Segismont) Date: Wed, 20 Jul 2016 22:39:34 +0200 Subject: [Hawkular-dev] Status of integration of inventory and vertx metrics In-Reply-To: References: Message-ID: <4da107a0-bd63-1e3d-7cca-0c6be914da22@redhat.com> Great. I believe you should start working on HttpServer reporting now (type + resource + 1 metric). IMO, you would benefit from extracting the Inventory HTTP client code into something reusable to report different resources and types. Regards, Thomas Le 19/07/2016 à 08:51, Austin Kuo a écrit : > Hi, > The current status is that I have created > > * a feed > * a root resource with a resource type > * a eventbus resource as a child of the root resource above with a > resource type > * a eventbus.handlers metrics with gauge metric type. This has a > property 'metric-id' corresponding to the id of the metric data. > Fortunately, now I can view the metric data graph from the client > (hawkfx). > > Austin, > Thanks. > > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > From auszon3 at gmail.com Thu Jul 21 04:03:20 2016 From: auszon3 at gmail.com (Austin Kuo) Date: Thu, 21 Jul 2016 08:03:20 +0000 Subject: [Hawkular-dev] Status of integration of inventory and vertx metrics In-Reply-To: <4da107a0-bd63-1e3d-7cca-0c6be914da22@redhat.com> References: <4da107a0-bd63-1e3d-7cca-0c6be914da22@redhat.com> Message-ID: no problem but why just 1 metric since there are a lot? On Thu, Jul 21, 2016 at 4:39 AM Thomas Segismont wrote: > Great. > > I believe you should start working on HttpServer reporting now (type + > resource + 1 metric). > > IMO, you would benefit from extracting the Inventory HTTP client code > into something reusable to report different resources and types. > > Regards, > Thomas > > Le 19/07/2016 à
08:51, Austin Kuo a écrit : > > Hi, > > The current status is that I have created > > > > * a feed > > * a root resource with a resource type > > * a eventbus resource as a child of the root resource above with a > > resource type > > * a eventbus.handlers metrics with gauge metric type. This has a > > property 'metric-id' corresponding to the id of the metric data. > > Fortunately, now I can view the metric data graph from the client > > (hawkfx). > > > > Austin, > > Thanks. > > > > > > > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160721/2f3e128f/attachment.html From lkrejci at redhat.com Thu Jul 21 04:29:11 2016 From: lkrejci at redhat.com (Lukas Krejci) Date: Thu, 21 Jul 2016 10:29:11 +0200 Subject: [Hawkular-dev] [Inventory] What constitutes a "syncable" change of an entity? Message-ID: <2975223.sKYu2GfQZt@dhcp-10-40-1-131.brq.redhat.com> Hi all, tl;dr: This probably only concerns Mazz and Austin :) The subject is a little bit cryptic, so let me explain - this deals with inventory sync and what to consider a change that is worth being synced on an entity. Today, whether an entity is updated during sync depends on whether some of its "vital" or rather "identifying" properties change. Namely: Feed: only ID and the hashes of child entities are considered ResourceType: only ID and hashes of configs and child operation types are considered MetricType: id + data type + unit OperationType: id + hashes of contained configs (return type and param types) Metric: id Resource: id + hashes of contained metrics, contained resources, config and connection config From the above, one can see that not all changes to an entity will result in the change being synchronized during the /sync call, because for example an addition of a new generic property to a metric doesn't make its identity hash change. I am starting to think this is not precisely what we want to happen during the /sync operation. On one hand, I think it is good that we can still claim 2 resources are identical because their "structure" is the same, regardless of what the generic properties on them look like (because anyone can add arbitrary properties to them). This enables us to do the ../identical/.. magic in traversals. On the other hand, the recent discussion about attaching an h-metric ID as a generic property to a metric iff it differs from its id/path in inventory got me thinking. In the current setup, if the agent reported that it changed the h-metric ID for some metric, the change would not be persisted, because /sync would see the metric as the same (because changing a generic property doesn't change the identity hash of the metric). I can see 3 solutions to this: * formalize the h-metric ID in some kind of dedicated structure in inventory that would contribute to the identity hash (i.e. similar to the "also-known-as" map I proposed in the thread about h-metric ID) * change the way we compute the identity hash and make it consider everything on an entity to contribute (I'm not sure I like this since it would limit the usefulness of ../identical/.. traversals).
* compute 2 hashes - 1 for tracking the identity (i.e. the 1 we have today) and a second one for tracking changes in content (i.e. one that would consider any change) Fortunately, none of the above is a huge change. The scaffolding is all there so any of the approaches would amount to only a couple of days' work. WDYT? -- Lukas Krejci From auszon3 at gmail.com Thu Jul 21 04:42:03 2016 From: auszon3 at gmail.com (Austin Kuo) Date: Thu, 21 Jul 2016 08:42:03 +0000 Subject: [Hawkular-dev] Inventory API question Message-ID: I was creating 2 different resource types with the same http client at the same time. But one succeeded, the other failed with the response 400 and body: { "errorMsg" : "The transaction has already been closed" } Is it not allowed to do so? Austin. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160721/9f7a0e2a/attachment.html From lkrejci at redhat.com Thu Jul 21 05:30:59 2016 From: lkrejci at redhat.com (Lukas Krejci) Date: Thu, 21 Jul 2016 11:30:59 +0200 Subject: [Hawkular-dev] Inventory API question In-Reply-To: References: Message-ID: <3857246.CrpiIficQV@dhcp-10-40-1-131.brq.redhat.com> That's definitely a bug. What version of inventory are you using? I think I've fixed a problem like this in 0.17.2.Final. But of course, this could be a different instantiation of it. On čtvrtek 21. července 2016 8:42:03 CEST Austin Kuo wrote: > I was creating 2 different resource types with the same http client at the > same time. > But one succeeded, the other failed with the response 400 and body: > { > "errorMsg" : "The transaction has already been closed" > } > > Is it not allowed to do so? > > Austin. -- Lukas Krejci From auszon3 at gmail.com Thu Jul 21 06:15:03 2016 From: auszon3 at gmail.com (Austin Kuo) Date: Thu, 21 Jul 2016 10:15:03 +0000 Subject: [Hawkular-dev] Inventory API question In-Reply-To: <3857246.CrpiIficQV@dhcp-10-40-1-131.brq.redhat.com> References: <3857246.CrpiIficQV@dhcp-10-40-1-131.brq.redhat.com> Message-ID: Not sure about the version since I'm using the docker image provided by pilhuhn. I'm trying to run it directly, but how do I make it run on the host 0.0.0.0 such that I can access it from remote? On Thu, Jul 21, 2016 at 5:31 PM Lukas Krejci wrote: > That's definitely a bug. What version of inventory are you using? > > I think I've fixed a problem like this in 0.17.2.Final. But of course, this > could be a different instantiation of it. > > > On čtvrtek 21. července 2016 8:42:03 CEST Austin Kuo wrote: > > I was creating 2 different resource types with the same http client at > the > > same time. > > But one succeeded, the other failed with the response 400 and body: > > { > > "errorMsg" : "The transaction has already been closed" > > } > > > > Is it not allowed to do so? > > > > Austin. > > > -- > Lukas Krejci > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160721/b7a96d92/attachment.html From auszon3 at gmail.com Thu Jul 21 06:16:27 2016 From: auszon3 at gmail.com (Austin Kuo) Date: Thu, 21 Jul 2016 10:16:27 +0000 Subject: [Hawkular-dev] Inventory API question In-Reply-To: References: <3857246.CrpiIficQV@dhcp-10-40-1-131.brq.redhat.com> Message-ID: Oops, it's 0.17.2.Final. I just saw it from the browser.
On Thu, Jul 21, 2016 at 6:15 PM Austin Kuo wrote: > Not sure about the version since i'm using the docker image provide by > pilhuhn. > I'm try to run it directly, but how to make it run at the host 0.0.0.0 > such that I can access it from remote ? > > On Thu, Jul 21, 2016 at 5:31 PM Lukas Krejci wrote: > >> That's definitely a bug. What version of inventory are you using? >> >> I think I've fixed a problem like this in 0.17.2.Final. But of course, >> this >> could be a different instantiation of it. >> >> >> On ?tvrtek 21. ?ervence 2016 8:42:03 CEST Austin Kuo wrote: >> > I was creating 2 different resource types with the same http client at >> the >> > same time. >> > But one succeed, the other failed with the response 400 and body: >> > { >> > "errorMsg" : "The transaction has already been closed" >> > } >> > >> > Is it not allowed to do so? >> > >> > Austin. >> >> >> -- >> Lukas Krejci >> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160721/fecbf268/attachment.html From lkrejci at redhat.com Thu Jul 21 06:18:45 2016 From: lkrejci at redhat.com (Lukas Krejci) Date: Thu, 21 Jul 2016 12:18:45 +0200 Subject: [Hawkular-dev] Inventory API question In-Reply-To: References: <3857246.CrpiIficQV@dhcp-10-40-1-131.brq.redhat.com> Message-ID: <4222899.G6YPBCsyZ4@dhcp-10-40-1-131.brq.redhat.com> On ?tvrtek 21. ?ervence 2016 10:15:03 CEST Austin Kuo wrote: > Not sure about the version since i'm using the docker image provide by > pilhuhn. you should be able to tell the version by accessing: /hawkular/inventory/status > I'm try to run it directly, but how to make it run at the host 0.0.0.0 such > that I can access it from remote ? > .../bin/standalone.sh -b 0.0.0.0 > On Thu, Jul 21, 2016 at 5:31 PM Lukas Krejci wrote: > > That's definitely a bug. What version of inventory are you using? > > > > I think I've fixed a problem like this in 0.17.2.Final. But of course, > > this > > could be a different instantiation of it. > > > > On ?tvrtek 21. ?ervence 2016 8:42:03 CEST Austin Kuo wrote: > > > I was creating 2 different resource types with the same http client at > > > > the > > > > > same time. > > > But one succeed, the other failed with the response 400 and body: > > > { > > > > > > "errorMsg" : "The transaction has already been closed" > > > > > > } > > > > > > Is it not allowed to do so? > > > > > > Austin. > > > > -- > > Lukas Krejci > > > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev -- Lukas Krejci From lkrejci at redhat.com Thu Jul 21 06:23:47 2016 From: lkrejci at redhat.com (Lukas Krejci) Date: Thu, 21 Jul 2016 12:23:47 +0200 Subject: [Hawkular-dev] Inventory API question In-Reply-To: References: Message-ID: <7421037.s5N4yk3Dks@dhcp-10-40-1-131.brq.redhat.com> On ?tvrtek 21. ?ervence 2016 10:16:27 CEST Austin Kuo wrote: > Oops, it's 0.17.2.Final. I just saw it from the browser. > Ok, then this is a bug... Would you be able to write up a JIRA for it with repro steps so that I can take a look at it? > On Thu, Jul 21, 2016 at 6:15 PM Austin Kuo wrote: > > Not sure about the version since i'm using the docker image provide by > > pilhuhn. 
> > I'm try to run it directly, but how to make it run at the host 0.0.0.0 > > such that I can access it from remote ? > > > > On Thu, Jul 21, 2016 at 5:31 PM Lukas Krejci wrote: > >> That's definitely a bug. What version of inventory are you using? > >> > >> I think I've fixed a problem like this in 0.17.2.Final. But of course, > >> this > >> could be a different instantiation of it. > >> > >> On ?tvrtek 21. ?ervence 2016 8:42:03 CEST Austin Kuo wrote: > >> > I was creating 2 different resource types with the same http client at > >> > >> the > >> > >> > same time. > >> > But one succeed, the other failed with the response 400 and body: > >> > { > >> > > >> > "errorMsg" : "The transaction has already been closed" > >> > > >> > } > >> > > >> > Is it not allowed to do so? > >> > > >> > Austin. > >> > >> -- > >> Lukas Krejci > >> > >> _______________________________________________ > >> hawkular-dev mailing list > >> hawkular-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/hawkular-dev -- Lukas Krejci From auszon3 at gmail.com Thu Jul 21 06:34:11 2016 From: auszon3 at gmail.com (Austin Kuo) Date: Thu, 21 Jul 2016 10:34:11 +0000 Subject: [Hawkular-dev] Inventory API question In-Reply-To: <7421037.s5N4yk3Dks@dhcp-10-40-1-131.brq.redhat.com> References: <7421037.s5N4yk3Dks@dhcp-10-40-1-131.brq.redhat.com> Message-ID: Sure. https://issues.jboss.org/browse/HAWKULAR-1099 On Thu, Jul 21, 2016 at 6:23 PM Lukas Krejci wrote: > On ?tvrtek 21. ?ervence 2016 10:16:27 CEST Austin Kuo wrote: > > Oops, it's 0.17.2.Final. I just saw it from the browser. > > > > Ok, then this is a bug... Would you be able to write up a JIRA for it with > repro steps so that I can take a look at it? > > > On Thu, Jul 21, 2016 at 6:15 PM Austin Kuo wrote: > > > Not sure about the version since i'm using the docker image provide by > > > pilhuhn. > > > I'm try to run it directly, but how to make it run at the host 0.0.0.0 > > > such that I can access it from remote ? > > > > > > On Thu, Jul 21, 2016 at 5:31 PM Lukas Krejci > wrote: > > >> That's definitely a bug. What version of inventory are you using? > > >> > > >> I think I've fixed a problem like this in 0.17.2.Final. But of course, > > >> this > > >> could be a different instantiation of it. > > >> > > >> On ?tvrtek 21. ?ervence 2016 8:42:03 CEST Austin Kuo wrote: > > >> > I was creating 2 different resource types with the same http > client at > > >> > > >> the > > >> > > >> > same time. > > >> > But one succeed, the other failed with the response 400 and body: > > >> > { > > >> > > > >> > "errorMsg" : "The transaction has already been closed" > > >> > > > >> > } > > >> > > > >> > Is it not allowed to do so? > > >> > > > >> > Austin. > > >> > > >> -- > > >> Lukas Krejci > > >> > > >> _______________________________________________ > > >> hawkular-dev mailing list > > >> hawkular-dev at lists.jboss.org > > >> https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > -- > Lukas Krejci > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160721/dff477ae/attachment.html From lkrejci at redhat.com Thu Jul 21 08:08:26 2016 From: lkrejci at redhat.com (Lukas Krejci) Date: Thu, 21 Jul 2016 14:08:26 +0200 Subject: [Hawkular-dev] [Inventory] Performance of Tinkerpop3 backends Message-ID: <2837463.D3iHUI0alh@dhcp-10-40-1-131.brq.redhat.com> Hi all, to move inventory forward, we need to port it to Tinkerpop3 - a new(ish) and actively maintained version of the Tinkerpop graph API. Apart from the huge improvement in the API expressiveness and capabilities, the important thing is that it comes with a variety of backends, 2 of which are of particular interest to us ATM. The Titan backend (with Titan in version 1.0) and SQL backend (using the sqlg library). The SQL backend is a much improved (yet still unfinished in terms of optimizations and some corner case features) version of the toy SQL backend for Tinkerpop2. Back in March I ran performance comparisons for SQL/postgres and Titan (0.5.4) on Tinkerpop2 and concluded that Titan was the best choice then. After completing a simplistic port of inventory to Tinkerpop3 (not taking advantage of any new features or opportunities to simplify inventory codebase), I've run the performance tests again for the 2 new backends - Titan 1.0 and Sqlg (on postgres). This time the results are not so clear as the last time. >From the charts [1] you can see that Postgres is actually quite a bit faster on reads and can better handle concurrent read access while Titan shines in writes (arguably thanks to Cassandra as its storage). Of course, I can imagine that the read performance advantage of Postgres would decrease with the growing amount of data stored (the tests ran with the inventory size of ~10k entities) but I am quite positive we'd get competitive read performance from both solutions up to the sizes of inventory we anticipate (100k-1M entities). Now the question is whether the insert performance is something we should be worried about in Postgres too much. IMHO, there should be some room for improvement in Sqlg and also our move to /sync for agent synchronization would make this less of a problem (because there would be not that many initial imports that would create vast amounts of entities). Nevertheless I currently cannot say who is the "winner" here. Each backend has its pros and cons: Titan: Pros: - high write throughput - backed by cassandra Cons: - slower reads - project virtually dead - complex codebase (self-made fixes unlikely) Sqlg: Pros: - small codebase - everybody knows SQL - faster reads - faster concurrent reads Cons: - slow writes - another backend needed (Postgres) Therefore my intention here is to go forward with a "proper" port to Tinkerpop3 with Titan still enabled but focus primarily on Sqlg to see if we can do anything with the write performance. IMHO, any choice we make is "workable" as it is even today but we need to weigh in the productization requirements. For those Sqlg with its small dep footprint and postgres backend seems preferable to the huge dependency mess of Titan. 
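For anyone who has not looked at the new API yet, the heart of the Tinkerpop3 programming model is the traversal source. A tiny, self-contained taste against the in-memory TinkerGraph - nothing inventory-specific, and not taken from the inventory port, just to show the style of API the port targets regardless of which backend (Titan or Sqlg) ends up underneath:

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.T;
import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph;

public class Tinkerpop3Taste {
    public static void main(String[] args) throws Exception {
        TinkerGraph graph = TinkerGraph.open();  // in-memory reference backend, no Titan/Sqlg needed
        GraphTraversalSource g = graph.traversal();

        // a feed containing one resource, modelled as two vertices and an edge
        Vertex feed = graph.addVertex(T.label, "feed", "id", "my-feed");
        Vertex resource = graph.addVertex(T.label, "resource", "id", "my-wildfly");
        feed.addEdge("contains", resource);

        // traversal: which resources does the feed contain?
        g.V().has("feed", "id", "my-feed")
                .out("contains")
                .values("id")
                .forEachRemaining(System.out::println); // prints "my-wildfly"

        graph.close();
    }
}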
[1] https://dashboards.ly/ua-TtqrpCXcQ3fnjezP5phKhc -- Lukas Krejci From hrupp at redhat.com Fri Jul 22 11:30:08 2016 From: hrupp at redhat.com (Heiko W.Rupp) Date: Fri, 22 Jul 2016 17:30:08 +0200 Subject: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env In-Reply-To: <23754756.1606277.1467548084344.JavaMail.zimbra@redhat.com> References: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> <23754756.1606277.1467548084344.JavaMail.zimbra@redhat.com> Message-ID: <9B1802A1-517E-47B0-8671-8342A6CC05DD@redhat.com> No that I know that I should write about Cattle :) I want to pick this conversation up again. In my Docker setup http://pilhuhn.blogspot.de/2016/06/using-hawkular-services-via-docker.html I have a WildFly in a container ("hawkfly") which I can start with `docker run pilhuhn/hawkfly`. When the container stops, I could restart it (docker start or by using a --restart=always policy). Or I just start a new one with `docker run` as above. Actually I can start dozens that way to scale up and down. In the later case I end up with dozens of 'dead' wildfly servers in Hawkular inventory (and thus also in MiQ inventory, as the inventory sync can't remove them if they are still present in Hawkular inventory). Before I continue I want to list a few use cases that I identified a) plain WildFly on bare metal/VM. Always same instance, gets started and stopped many times, keeps state This is probably what we always did and do (= a pet) b) WildFly in container b1) managed by some orchestration system (= cattle ) b2) started is some more ad-hoc way (e.g. docker-compose, manual docker-run commands) (= in between cattle and pets) Now we also need to keep in mind that for applications e.g. a bunch of app servers may run the same code for load balancing / fault tolerance reasons. Now about the relationship to our inventory For a) it is pretty clear that users see the individual App-servers as long-living installation. When it crashes, they restart it, but it stays the same install. So we can easily list and keep it in inventory and also have some command to be "manually" to clean up once the user decides that that installation is really no longer needed. Also as the AS (usually) has full access to the file system, the feed-id is preserved over restarts. For b) the situation is different, as images are immutable and containers have some "local storage", that is valid only for the same container. So the WF can only remember its feed id on restarts of the same container, but not when I docker run a new one as replacement for an old one. Now with both Docker and (most probably) also k8s it is possible to get a stream of events when containers are stopped and removed. So we could connect to them and make use of the information. The other aspect here is what do we do with the collected data. Right now out approach is very pet-centric, which is fine for use case a) above. For containers, we don't want to have dead cattle in inventory for forever. We may also not want to remove the collected metrics, as they can be still important. For these user cases we should probably abstract this to the level of a flock. We want to monitor the size of the flocks and also when flock members die and are born, but no longer try to identify individual members. We brand them to denote being part of the flock. With the flock we can still have individual cattle report their metrics individually, but we need to have a way to aggregate over the flock. 
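To make "aggregate over the flock" slightly more concrete, here is a small, purely illustrative sketch of the consuming side only - given per-member gauge samples (one series per container), collapse them into a single flock-level series by summing the values that land in the same timestamp bucket. Nothing here is an existing Hawkular API; it only shows the kind of roll-up the flock idea implies:

import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class FlockAggregator {

    /** A single gauge sample reported by one flock member. */
    public static class Sample {
        final long timestamp;
        final double value;
        public Sample(long timestamp, double value) {
            this.timestamp = timestamp;
            this.value = value;
        }
    }

    /**
     * Collapses per-member series into one flock-level series by summing all
     * samples that fall into the same bucket (bucketMillis wide).
     */
    public static Map<Long, Double> aggregate(Map<String, List<Sample>> samplesByMember, long bucketMillis) {
        Map<Long, Double> flockSeries = new TreeMap<>();
        for (List<Sample> memberSeries : samplesByMember.values()) {
            for (Sample s : memberSeries) {
                long bucket = (s.timestamp / bucketMillis) * bucketMillis;
                flockSeries.merge(bucket, s.value, Double::sum);
            }
        }
        return flockSeries;
    }
}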
Similar for alerting purposes, where we set up the alert definitions on the flock and not individual members. Alert definitions need then be applied to all new members. For inventory, we can when we learn about a member dying just remove if from the flock and adjust counters, record an event about it. Similar for new members. Now the question is how do we learn about cattle in the flock? When building images with the agent inside, we can pass an env-variable or agent setting that a) indicates that this is an agent inside a docker container b) indicates which flock this belongs to. Does that make sense? Heiko On 3 Jul 2016, at 14:14, John Mazzitelli wrote: > In case you didn't understand the analogy, I believe Heiko meant to > use the word "Cattle" not "Kettle" :-) > > I had to look it up - I've not heard the "cattle vs. pets" analogy > before - but I get it now! > > ----- Original Message ----- >> Hey, >> >> [ CC to Federico as he may have some ideas from the Kube/OS side ] >> >> Our QE has opened an interesting case: >> >> https://github.com/ManageIQ/manageiq/issues/9556 >> >> where I first thought WTF with that title. >> >> But then when reading further it got more interesting. >> Basically what happens is that especially in environments like >> Kube/Openshift, >> individual containers/appservers are Kettle and not Pets: one goes >> down, >> gets >> killed, you start a new one somewhere else. >> >> Now the interesting question for us are (first purely on the Hawkular >> side): >> - how can we detect that such a container is down and will never come >> up >> with that id again (-> we need to clean it up in inventory) >> - can we learn that for a killed container A, a freshly started >> container A' is >> the replacement to e.g. continue with performance monitoring of the >> app >> or to re-associate relationships with other items in inventory- >> (Is that even something we want - again that is Kettle and not Pets >> anymore) >> - Could eap+embedded agent perhaps store some token in Kube which >> is then passed when A' is started so that A' knows it is the new A >> (e.g. >> feed id). >> - I guess that would not make much sense anyway, as for an app >> with >> three app servers all would get that same token. >> >> Perhaps we should ignore that use case for now completely and tackle >> that differently in the sense that we don't care about 'real' app >> servers, >> but rather introduce the concept of a 'virtual' server where we only >> know >> via Kube that it exists and how many of them for a certain >> application >> (which is identified via some tag in Kube). Those virtual servers >> deliver >> data, but we don't really try to do anything with them 'personally', >> but indirectly via Kube interactions (i.e. map the incoming data to >> the >> app and not to an individual server). We would also not store >> the individual server in inventory, so there is no need to clean it >> up (again, no pet but kettle). >> In fact we could just use the feed-id as kube token (or vice versa). >> We still need a way to detect that one of those kettle-as is on Kube >> and possibly either disable to re-route some of the lifecycle events >> onto Kubernetes (start in any case, stop probably does not matter >> if he container dies because the appserver inside stops or if kube >> just kills it). 
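Assuming the env-variable route sketched above, the agent-side part could be as small as the following. The variable name HAWKULAR_FLOCK is made up for the example, and the tags map merely stands in for whatever tagging mechanism the agent ends up using when it reports metrics:

import java.util.HashMap;
import java.util.Map;

public class FlockBranding {

    /**
     * Builds the tags every reported metric should carry. If the container was
     * started with a flock id (hypothetical HAWKULAR_FLOCK variable), brand the
     * data with it so the server can aggregate per flock instead of per feed.
     */
    public static Map<String, String> metricTags(String feedId) {
        Map<String, String> tags = new HashMap<>();
        tags.put("feed", feedId);
        String flock = System.getenv("HAWKULAR_FLOCK");
        if (flock != null && !flock.isEmpty()) {
            tags.put("flock", flock);
            tags.put("in_container", "true");
        }
        return tags;
    }
}

The same flock value is what an orchestrator like Kubernetes or a docker-compose file would set on every replacement container, so newborn members automatically land in the right flock and dead members simply stop reporting.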
From jpkroehling at redhat.com Fri Jul 22 11:58:08 2016 From: jpkroehling at redhat.com (=?UTF-8?Q?Juraci_Paix=c3=a3o_Kr=c3=b6hling?=) Date: Fri, 22 Jul 2016 17:58:08 +0200 Subject: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env In-Reply-To: <9B1802A1-517E-47B0-8671-8342A6CC05DD@redhat.com> References: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> <23754756.1606277.1467548084344.JavaMail.zimbra@redhat.com> <9B1802A1-517E-47B0-8671-8342A6CC05DD@redhat.com> Message-ID: <05bba1fc-7c5d-ea9b-679f-489da235aa48@redhat.com> On 22.07.2016 17:30, Heiko W.Rupp wrote: > No that I know that I should write about Cattle :) I want to pick this > conversation > up again. Before I share my comments on the individual parts, I have a question. Do we still see the main monitored subject as the application, or do we care about the OS/environment? In other words: does it make sense to have two application instances with the same feed-id? If so, having an alternate algorithm for containers that comes up with an ID based on local artifacts would solve most of the issues, I believe. - Juca. From mazz at redhat.com Sat Jul 23 23:09:18 2016 From: mazz at redhat.com (John Mazzitelli) Date: Sat, 23 Jul 2016 23:09:18 -0400 (EDT) Subject: [Hawkular-dev] integrating ansible and hawkular to start remote wildfly servers In-Reply-To: References: <2068959737.4995788.1468531503081.JavaMail.zimbra@redhat.com> <2024470761.5019568.1468547879321.JavaMail.zimbra@redhat.com> Message-ID: <66242975.7809291.1469329758229.JavaMail.zimbra@redhat.com> > So Ansible is supposed to be already installed and available on the machine > / (docker container) running Hawkular Services, right ? Correct. This is assuming "ansible-playbook" is available and on the PATH so it can be executed. Obviously, we can do this however we want (ship with Ansible ourselves so we know where it is, or have it configurable so we can tell Hawkular where the Ansible executable is, etc.). Again, this was just a PoC to get this working, but as of right now, the PoC requires Ansible to already be installed and available. ----- Original Message ----- > So Ansible is supposed to be already installed and available on the machine > / (docker container) running Hawkular Services, right ? > > Thomas > > > > On Fri, Jul 15, 2016 at 3:57 AM, John Mazzitelli wrote: > > > Slight change - I found out there is a feature in feature-pack stuff that > > let you ship with direct file content that gets copied into the server. So > > I put the Ansible scripts in the feature pack. This is the correct way to > > do it. Now anyone building the server with the feature pack (integration > > tests, for example) will get the Ansible integration as well. > > > > So link [3] changes from my original email. It is now: > > > > [3] > > https://github.com/jmazzitelli/hawkular-services/tree/HAWKULAR-1096-ansible-commands/feature-pack/src/main/content/standalone/configuration/ansible > > > > ----- Original Message ----- > > > I have a very simple PoC working that shows Ansible integrated with > > Hawkular > > > such that it can start a WildFly server remotely. > > > > > > I'll begin with a brief summary. > > > > > > SUMMARY > > > > > > A client connects to the Hawkular Server via its websocket and passes in > > a > > > JSON request (like it does the other kinds of requests). 
The JSON request > > > looks something like: > > > > > > AnsibleRequest={"playbook":"start-wildfly.yml", "extraVars": { > > > "wildfly_home_dir":"/opt/wildfly10" } } > > > > > > The command-gateway server takes the request, runs the Ansible playbook, > > and > > > returns the response back over the websocket to the client with an > > > AnsibleResponse (which includes the Ansible output to show what it did). > > > > > > DETAILS > > > > > > The code is in two branches in my repos. See [1] and [2]. You need both > > for > > > this to work. > > > > > > Hawkular-Services packages up Ansible playbooks and its associated files > > in > > > standalone/configuration/ansible. See [3]. > > > > > > Hawkular-Commons adds to command gateway a new JSON request/response for > > > Ansible requests. See [4] and [5]. > > > > > > The Ansible command is invoked in the server when the AnsibleRequest > > JSON is > > > received over the websocket. This command implementation runs the Ansible > > > playbook and returns the results back in a AnsibleResponse. See [6]. > > > > > > If you build commons and then services to pull the new commons in, you > > can > > > run a test to see it work. I use the test command CLI utility in the > > agent > > > to do this [7]. First, build the new dist and run it so you have a > > Hawkular > > > Service running (for example, "mvn clean install -Pdev,embeddedc"). Next, > > > create a test JSON request file (say, "/tmp/hawkular.json") with this > > > content: > > > > > > { "playbook":"start-wildfly.yml", > > > "extraVars": { > > > "wildfly_home_dir":"/directory/where/you/installed/wildfly" > > > } > > > } > > > > > > Now build the hawkular-agent from source (just so you get the test > > command > > > CLI utility) and run the test command CLI, telling it to use the JSON in > > > your file and send it to your Hawkular Service server: > > > > > > $ cd > > > > > /hawkular-wildfly-agent-itest-parent/hawkular-wildfly-agent-command-cli/target > > > > > > $ java -jar hawkular-wildfly-agent-command-cli-*.jar --username jdoe > > > --password password \ > > > --command AnsibleRequest --request-file=/tmp/hawkular.json > > > > > > This sends the request to your local Hawkular Service server over the > > > websocket, the Ansible playbook "start-wildfly.yml" is run, which starts > > the > > > WildFly in that home dir you specified in your JSON request, and the > > > response is sent back. The Command CLI tool will write the responses it > > > receives to disk - look at the cli output for the names of the files it > > > writes - you can look in there to see what the Ansible command response > > was > > > (its just the full Ansible output). > > > > > > There is still LOTS to do here (for one, you'll notice it assumes your > > > wildfly install is on "localhost" :)). This is merely a PoC that we can > > use > > > as a starting point - it shows that this can be done and how we can do > > it. 
> > > > > > -- John Mazz > > > > > > [1] > > > > > https://github.com/jmazzitelli/hawkular-commons/tree/HAWKULAR-1096-ansible-commands > > > [2] > > > > > https://github.com/jmazzitelli/hawkular-services/tree/HAWKULAR-1096-ansible-commands > > > [3] > > > > > https://github.com/jmazzitelli/hawkular-services/tree/HAWKULAR-1096-ansible-commands/dist/src/main/resources/standalone/configuration/ansible > > > [4] > > > > > https://github.com/jmazzitelli/hawkular-commons/blob/HAWKULAR-1096-ansible-commands/hawkular-command-gateway/hawkular-command-gateway-api/src/main/resources/schema/AnsibleRequest.schema.json > > > [5] > > > > > https://github.com/jmazzitelli/hawkular-commons/blob/HAWKULAR-1096-ansible-commands/hawkular-command-gateway/hawkular-command-gateway-api/src/main/resources/schema/AnsibleResponse.schema.json > > > [6] > > > > > https://github.com/jmazzitelli/hawkular-commons/blob/HAWKULAR-1096-ansible-commands/hawkular-command-gateway/hawkular-command-gateway-war/src/main/java/org/hawkular/cmdgw/command/ws/AnsibleCommand.java > > > [7] > > > > > https://github.com/hawkular/hawkular-agent/tree/master/hawkular-wildfly-agent-itest-parent/hawkular-wildfly-agent-command-cli > > > _______________________________________________ > > > hawkular-dev mailing list > > > hawkular-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > > > > From gbrown at redhat.com Mon Jul 25 04:57:37 2016 From: gbrown at redhat.com (Gary Brown) Date: Mon, 25 Jul 2016 04:57:37 -0400 (EDT) Subject: [Hawkular-dev] Integration of APM into Hawkular Services In-Reply-To: <50397198.7808227.1469435375977.JavaMail.zimbra@redhat.com> Message-ID: <1772278521.7813723.1469437057939.JavaMail.zimbra@redhat.com> Hi Hawkular APM is currently built as a separate distribution independent from other Hawkular components. However in the near future we will want to explore integration with other components, such as Alerts, Metrics and Inventory. Therefore I wanted to explore the options we have for building an integrated environment, to provide the basis for such integration work, without impacting the more immediate plans for Hawkular Services. The two possible approaches are: 1) Provide a maven profile as part of the Hawkular Services build, that will include the APM server. The UI could be deployed separately as a war, or possibly integrated into the UI build? 2) As suggested by Juca, the APM distribution could be built upon the hawkular-services distribution. There are pros/cons with both approaches: My preference is option (1) as it moves us closer to a fully integrated hawkular-services solution, but relies on a separate build using the profile (not sure if that would result in a separate release distribution). Option 2 would provide the full distribution as a release, but the downside is the size of the distribution (and its dependencies, such as cassandra), when user only interested in APM. Unclear whether a standalone APM distribution will still be required in the future - at present the website is structured to support this. Thoughts? Regards Gary From mazz at redhat.com Mon Jul 25 13:52:19 2016 From: mazz at redhat.com (John Mazzitelli) Date: Mon, 25 Jul 2016 13:52:19 -0400 (EDT) Subject: [Hawkular-dev] [Inventory] What constitutes a "syncable" change of an entity? 
In-Reply-To: <2975223.sKYu2GfQZt@dhcp-10-40-1-131.brq.redhat.com> References: <2975223.sKYu2GfQZt@dhcp-10-40-1-131.brq.redhat.com> Message-ID: <817400505.8459902.1469469139625.JavaMail.zimbra@redhat.com> Lukas, Ignoring identity for a second - it seems to me if I want to change a general property value, it should "just change" when passed to the /sync endpoint. I don't see why it wouldn't. "foo" general property is "1" - now I want to change it to "2" - I send resource up via /sync with the general property "foo=2" - that change should be persisted. Now, if there are other use cases where identity checks should look at a restricted set of data related to the resource, that's fine. But that to me is separate from what we want /sync to do. Maybe I just don't understand the issues between the two. But from an outsider's point of view, I would say of the three options you provided in your last email, I choose the third: > compute 2 hashes - 1 for tracking the identity (i.e. the 1 we have today) > and a second one for tracking changes in content (i.e. one that would consider any change) This would support what I want in /sync plus support identity checking and traversal - where you said, "This enables us to do the ../identical/.. magic in traversals." --JohnMazz ----- Original Message ----- > Hi all, > > tl;dr: This probably only concerns Mazz and Austin :) > > The subject is a little bit cryptic, so let me explain - this deals with > inventory sync and what to consider a change that is worth being synced on an > entity. > > Today whether an entity is update during sync depends on whether some of this > "vital" or rather "identifying" properties change. Namely: > > Feed: only ID and the hashes of child entities are considered > ResourceType: only ID and hashes of configs and child operation types are > considered > MetricType: id + data type + unit > OperationType: id + hashes of contained configs (return type and param types) > Metric: id > Resource: id + hashes of contained metrics, contained resources, config and > connection config > > >From the above, one can see that not all changes to an entity will result in > the change being synchronized during the /sync call, because for example an > addition of a new generic property to a metric doesn't make its identity hash > change. > > I start to think this is not precisely what we want to happen during the > /sync > operation. > > On one hand, I think it is good that we still can claim 2 resources being > identical, because their "structure" is the same, regardless of what the > generic properties on them look like (because anyone can add arbitrary > properties to them). This enables us to do the ../identical/.. magic in > traversals. > > On the other hand the recent discussion about attaching an h-metric ID as a > generic property to a metric iff it differs from its id/path in inventory got > me thinking. In the current set up, if agent reported that it changed the h- > metric ID for some metric, the change would not be persisted, because /sync > would see the metric as the same (because changing a generic property doesn't > change the identity hash of the metric). > > I can see 3 solutions to this: > > * formalize the h-metric ID in some kind of dedicated structure in inventory > that would contribute to the identity hash (i.e. 
similar to the "also-known- > as" map I proposed in the thread about h-metric ID) > > * change the way we compute the identity hash and make it consider everything > on an entity to contribute (I'm not sure I like this since it would limit the > usefulness of ../identical/.. traversals). > > * compute 2 hashes - 1 for tracking the identity (i.e. the 1 we have today) > and a second one for tracking changes in content (i.e. one that would > consider > any change) > > Fortunately, none of the above is a huge change. The scaffolding is all there > so any of the approaches would amount to only a couple of days work. > > WDYT? > > -- > Lukas Krejci > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > From tsegismo at redhat.com Mon Jul 25 16:24:44 2016 From: tsegismo at redhat.com (Thomas Segismont) Date: Mon, 25 Jul 2016 22:24:44 +0200 Subject: [Hawkular-dev] Status of integration of inventory and vertx metrics In-Reply-To: References: <4da107a0-bd63-1e3d-7cca-0c6be914da22@redhat.com> Message-ID: <9a7fddeb-a391-bd54-3ac2-8b4a9173d145@redhat.com> 1 metric was just for the "start" :) Of course more is better Le 21/07/2016 ? 10:03, Austin Kuo a ?crit : > no problem but why just 1 metric since there are a lot? > > > On Thu, Jul 21, 2016 at 4:39 AM Thomas Segismont > wrote: > > Great. > > I believe you should start working on HttpServer reporting now (type + > resource + 1 metric). > > IMO, you would benefit from extracting the Inventory HTTP client code > into something reusable to report different resources and types. > > Regards, > Thomas > > Le 19/07/2016 ? 08:51, Austin Kuo a ?crit : > > Hi, > > The current status is that I have created > > > > * a feed > > * a root resource with a resource type > > * a eventbus resource as a child of the root resource above with a > > resource type > > * a eventbus.handlers metrics with gauge metric type. This has a > > property 'metric-id' corresponding to the id of the metric data. > > Fortunately, now I can view the metric data graph from the client > > (hawkfx). > > > > Austin, > > Thanks. > > > > > > > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > From auszon3 at gmail.com Tue Jul 26 04:16:30 2016 From: auszon3 at gmail.com (Austin Kuo) Date: Tue, 26 Jul 2016 08:16:30 +0000 Subject: [Hawkular-dev] Status of integration of inventory and vertx metrics In-Reply-To: <9a7fddeb-a391-bd54-3ac2-8b4a9173d145@redhat.com> References: <4da107a0-bd63-1e3d-7cca-0c6be914da22@redhat.com> <9a7fddeb-a391-bd54-3ac2-8b4a9173d145@redhat.com> Message-ID: Hi, I've already add the http resource and the metric, but there seems to be a bug that inventory does not allow concurrent transactions when creating multiple resource types at the same time. If I send the request one by one, it's working, but it should be executed concurrently. So I'm waiting for the fix before submitting the PR. Austin. 
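Until that fix lands, a workaround matching the "one by one" observation is simply to chain the creation calls instead of firing them concurrently. A rough, hypothetical sketch (the inventory URL, credentials, endpoint path and payload shape are illustrative only, not the exact API):

import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Base64;
import java.util.List;

/**
 * Hypothetical sketch: create several inventory resource types one request at
 * a time, avoiding the overlapping-transaction failure described above.
 */
public class SequentialTypeCreator {

    private static final String INVENTORY = "http://localhost:8080/hawkular/inventory";
    private static final String AUTH = "Basic "
            + Base64.getEncoder().encodeToString("jdoe:password".getBytes(StandardCharsets.UTF_8));

    public static void main(String[] args) throws IOException {
        List<String> typeIds = Arrays.asList("HttpServer", "EventBus", "NetServer");
        // Sending the requests sequentially keeps only one inventory transaction open at a time.
        for (String typeId : typeIds) {
            createResourceType(typeId);
        }
    }

    private static void createResourceType(String typeId) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(INVENTORY + "/resourceTypes").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", AUTH);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        byte[] body = ("{\"id\":\"" + typeId + "\"}").getBytes(StandardCharsets.UTF_8);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }
        System.out.println(typeId + " -> HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}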
On Tue, Jul 26, 2016 at 4:24 AM Thomas Segismont wrote: > 1 metric was just for the "start" :) > > Of course more is better > > Le 21/07/2016 ? 10:03, Austin Kuo a ?crit : > > no problem but why just 1 metric since there are a lot? > > > > > > On Thu, Jul 21, 2016 at 4:39 AM Thomas Segismont > > wrote: > > > > Great. > > > > I believe you should start working on HttpServer reporting now (type > + > > resource + 1 metric). > > > > IMO, you would benefit from extracting the Inventory HTTP client code > > into something reusable to report different resources and types. > > > > Regards, > > Thomas > > > > Le 19/07/2016 ? 08:51, Austin Kuo a ?crit : > > > Hi, > > > The current status is that I have created > > > > > > * a feed > > > * a root resource with a resource type > > > * a eventbus resource as a child of the root resource above with > a > > > resource type > > > * a eventbus.handlers metrics with gauge metric type. This has a > > > property 'metric-id' corresponding to the id of the metric > data. > > > Fortunately, now I can view the metric data graph from the > client > > > (hawkfx). > > > > > > Austin, > > > Thanks. > > > > > > > > > > > > _______________________________________________ > > > hawkular-dev mailing list > > > hawkular-dev at lists.jboss.org > > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > > > > > > _______________________________________________ > > hawkular-dev mailing list > > hawkular-dev at lists.jboss.org > > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160726/251e6b9a/attachment.html From auszon3 at gmail.com Tue Jul 26 08:07:20 2016 From: auszon3 at gmail.com (Austin Kuo) Date: Tue, 26 Jul 2016 12:07:20 +0000 Subject: [Hawkular-dev] Status of integration of inventory and vertx metrics In-Reply-To: References: <4da107a0-bd63-1e3d-7cca-0c6be914da22@redhat.com> <9a7fddeb-a391-bd54-3ac2-8b4a9173d145@redhat.com> Message-ID: Also, I jus thought of a few of questions: 1. Does a kind of resource have an unique resource type ? 2. Does a kind of metric have an unique metric type? or for example, all counter metrics can share the same type (counter)... Thanks. On Tue, Jul 26, 2016 at 4:16 PM Austin Kuo wrote: > Hi, > I've already add the http resource and the metric, but there seems to be a > bug that inventory does not allow concurrent transactions when creating > multiple resource types at the same time. If I send the request one by one, > it's working, but it should be executed concurrently. So I'm waiting for > the fix before submitting the PR. > > Austin. > > On Tue, Jul 26, 2016 at 4:24 AM Thomas Segismont > wrote: > >> 1 metric was just for the "start" :) >> >> Of course more is better >> >> Le 21/07/2016 ? 10:03, Austin Kuo a ?crit : >> > no problem but why just 1 metric since there are a lot? >> > >> > >> > On Thu, Jul 21, 2016 at 4:39 AM Thomas Segismont > > > wrote: >> > >> > Great. >> > >> > I believe you should start working on HttpServer reporting now >> (type + >> > resource + 1 metric). 
>> > >> > IMO, you would benefit from extracting the Inventory HTTP client >> code >> > into something reusable to report different resources and types. >> > >> > Regards, >> > Thomas >> > >> > Le 19/07/2016 ? 08:51, Austin Kuo a ?crit : >> > > Hi, >> > > The current status is that I have created >> > > >> > > * a feed >> > > * a root resource with a resource type >> > > * a eventbus resource as a child of the root resource above >> with a >> > > resource type >> > > * a eventbus.handlers metrics with gauge metric type. This has a >> > > property 'metric-id' corresponding to the id of the metric >> data. >> > > Fortunately, now I can view the metric data graph from the >> client >> > > (hawkfx). >> > > >> > > Austin, >> > > Thanks. >> > > >> > > >> > > >> > > _______________________________________________ >> > > hawkular-dev mailing list >> > > hawkular-dev at lists.jboss.org > > >> > > https://lists.jboss.org/mailman/listinfo/hawkular-dev >> > > >> > _______________________________________________ >> > hawkular-dev mailing list >> > hawkular-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/hawkular-dev >> > >> > >> > >> > _______________________________________________ >> > hawkular-dev mailing list >> > hawkular-dev at lists.jboss.org >> > https://lists.jboss.org/mailman/listinfo/hawkular-dev >> > >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160726/7791bfa6/attachment.html From jshaughn at redhat.com Tue Jul 26 09:43:25 2016 From: jshaughn at redhat.com (Jay Shaughnessy) Date: Tue, 26 Jul 2016 09:43:25 -0400 Subject: [Hawkular-dev] [Inventory] What constitutes a "syncable" change of an entity? In-Reply-To: <817400505.8459902.1469469139625.JavaMail.zimbra@redhat.com> References: <2975223.sKYu2GfQZt@dhcp-10-40-1-131.brq.redhat.com> <817400505.8459902.1469469139625.JavaMail.zimbra@redhat.com> Message-ID: From what I understand of the issue, I'd also endorse option 3: 2 hashes. This, I think, would provide the most flexibility. I'd avoid option 2 because we don't want to cripple the 'identical' magic. On 7/25/2016 1:52 PM, John Mazzitelli wrote: > Lukas, > > Ignoring identity for a second - it seems to me if I want to change a general property value, it should "just change" when passed to the /sync endpoint. I don't see why it wouldn't. "foo" general property is "1" - now I want to change it to "2" - I send resource up via /sync with the general property "foo=2" - that change should be persisted. > > Now, if there are other use cases where identity checks should look at a restricted set of data related to the resource, that's fine. But that to me is separate from what we want /sync to do. > > Maybe I just don't understand the issues between the two. But from an outsider's point of view, I would say of the three options you provided in your last email, I choose the third: > >> compute 2 hashes - 1 for tracking the identity (i.e. the 1 we have today) >> and a second one for tracking changes in content (i.e. one that would consider any change) > This would support what I want in /sync plus support identity checking and traversal - where you said, "This enables us to do the ../identical/.. magic in traversals." 
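To make the endorsed third option concrete, a rough sketch of keeping two hashes per entity - one over the identifying structure only, and one over everything including generic properties (field names and hashing details here are made up, not the real inventory code):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Map;
import java.util.TreeMap;

/**
 * Hypothetical sketch of "option 3": an identity hash computed only from the
 * identifying structure, plus a content hash that also covers generic
 * properties, so /sync can persist changes that do not alter identity.
 */
public class EntityHashes {

    /** Hash over the fields that define identity (id, child hashes, ...). */
    public static String identityHash(String id, String structuralData) {
        return sha1(id + "|" + structuralData);
    }

    /** Hash over everything, including arbitrary generic properties. */
    public static String contentHash(String id, String structuralData,
                                     Map<String, String> genericProperties) {
        // TreeMap gives a stable key ordering, so the hash is deterministic.
        return sha1(id + "|" + structuralData + "|" + new TreeMap<>(genericProperties));
    }

    private static String sha1(String input) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-1");
            byte[] bytes = digest.digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : bytes) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        String before = contentHash("my-metric", "gauge", Map.of("foo", "1"));
        String after = contentHash("my-metric", "gauge", Map.of("foo", "2"));
        // Identity stays the same, content hash changes, so /sync would persist the update.
        System.out.println(identityHash("my-metric", "gauge"));
        System.out.println(before + " vs " + after);
    }
}

With something along these lines, /sync would compare the content hash to decide whether to persist an update, while the ../identical/.. traversals keep using the identity hash.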
> > > --JohnMazz > > ----- Original Message ----- >> Hi all, >> >> tl;dr: This probably only concerns Mazz and Austin :) >> >> The subject is a little bit cryptic, so let me explain - this deals with >> inventory sync and what to consider a change that is worth being synced on an >> entity. >> >> Today whether an entity is update during sync depends on whether some of this >> "vital" or rather "identifying" properties change. Namely: >> >> Feed: only ID and the hashes of child entities are considered >> ResourceType: only ID and hashes of configs and child operation types are >> considered >> MetricType: id + data type + unit >> OperationType: id + hashes of contained configs (return type and param types) >> Metric: id >> Resource: id + hashes of contained metrics, contained resources, config and >> connection config >> >> >From the above, one can see that not all changes to an entity will result in >> the change being synchronized during the /sync call, because for example an >> addition of a new generic property to a metric doesn't make its identity hash >> change. >> >> I start to think this is not precisely what we want to happen during the >> /sync >> operation. >> >> On one hand, I think it is good that we still can claim 2 resources being >> identical, because their "structure" is the same, regardless of what the >> generic properties on them look like (because anyone can add arbitrary >> properties to them). This enables us to do the ../identical/.. magic in >> traversals. >> >> On the other hand the recent discussion about attaching an h-metric ID as a >> generic property to a metric iff it differs from its id/path in inventory got >> me thinking. In the current set up, if agent reported that it changed the h- >> metric ID for some metric, the change would not be persisted, because /sync >> would see the metric as the same (because changing a generic property doesn't >> change the identity hash of the metric). >> >> I can see 3 solutions to this: >> >> * formalize the h-metric ID in some kind of dedicated structure in inventory >> that would contribute to the identity hash (i.e. similar to the "also-known- >> as" map I proposed in the thread about h-metric ID) >> >> * change the way we compute the identity hash and make it consider everything >> on an entity to contribute (I'm not sure I like this since it would limit the >> usefulness of ../identical/.. traversals). >> >> * compute 2 hashes - 1 for tracking the identity (i.e. the 1 we have today) >> and a second one for tracking changes in content (i.e. one that would >> consider >> any change) >> >> Fortunately, none of the above is a huge change. The scaffolding is all there >> so any of the approaches would amount to only a couple of days work. >> >> WDYT? >> >> -- >> Lukas Krejci >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160726/8878f50d/attachment-0001.html From jshaughn at redhat.com Tue Jul 26 10:12:55 2016 From: jshaughn at redhat.com (Jay Shaughnessy) Date: Tue, 26 Jul 2016 10:12:55 -0400 Subject: [Hawkular-dev] agent using custom metric IDs In-Reply-To: <10638522.b8cWRMdI2m@localhost.localdomain> References: <268194573.2718225.1467854017722.JavaMail.zimbra@redhat.com> <9471848.laFv79TRbG@localhost.localdomain> <638211996.2970012.1467927074314.JavaMail.zimbra@redhat.com> <10638522.b8cWRMdI2m@localhost.localdomain> Message-ID: On 7/7/2016 5:53 PM, Lukas Krejci wrote: > I wonder then if this should not evolve into something more engrained > into > inventory. Something like "AKA" property on the entities: > > "also-known-as": { > "h-metrics": "h-metrics-specific-id", > "h-alerts": "another-id", > "collectd": "yet-another-id", > "rhq": "some-id", > "legacy-monitoring-system-in-our-enterprise": "blah-id", > ... > } > This AKA property could be a good thing for mapping the Inventory metric name to other IDs. From jshaughn at redhat.com Tue Jul 26 10:28:50 2016 From: jshaughn at redhat.com (Jay Shaughnessy) Date: Tue, 26 Jul 2016 10:28:50 -0400 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: References: <587e0bc6-44d9-3f14-e872-5a58b2058223@redhat.com> <23adbfcc-1e2f-2e92-bb4d-d205618e7c82@redhat.com> <6091FA5E-9110-4F49-BAD6-A7A297E075F9@redhat.com> <488951763.4062228.1467887126176.JavaMail.zimbra@redhat.com> Message-ID: On 7/7/2016 6:46 AM, Thomas Heute wrote: > > I'm fine leaving the 3 top level projects, and it should definitely > not grow. > > Note that I didn't expose Alerts there to not add confusion even > though it can run on its own. > I understand the decision to have APM and Metrics as top level options but I'd like to see Standalone Alerting mentioned somewhere. Perhaps in Labs? Also, a small thing but we've been trying to migrate from 'Hawkular Alerts' to 'Hawkular Alerting' in text/docs, etc. I'd like to see that changed on hawkular.org. From jshaughn at redhat.com Tue Jul 26 11:59:50 2016 From: jshaughn at redhat.com (Jay Shaughnessy) Date: Tue, 26 Jul 2016 11:59:50 -0400 Subject: [Hawkular-dev] Integration of APM into Hawkular Services In-Reply-To: <1772278521.7813723.1469437057939.JavaMail.zimbra@redhat.com> References: <1772278521.7813723.1469437057939.JavaMail.zimbra@redhat.com> Message-ID: If the plan is to eventually integrate APM into Hawkular Services (not as an option) then I'd go with option 1 and then eventually the profile would go away. I think an add-on UI war would maybe be an optional 'Lab' offering, there is no UI with which to integrate. The standalone APM offering could still be offered. I'm not sure whether that would still be offered as a top-level menu item on hawkular.org, or also as a 'Lab'. On 7/25/2016 4:57 AM, Gary Brown wrote: > Hi > > Hawkular APM is currently built as a separate distribution independent from other Hawkular components. However in the near future we will want to explore integration with other components, such as Alerts, Metrics and Inventory. > > Therefore I wanted to explore the options we have for building an integrated environment, to provide the basis for such integration work, without impacting the more immediate plans for Hawkular Services. > > The two possible approaches are: > > 1) Provide a maven profile as part of the Hawkular Services build, that will include the APM server. The UI could be deployed separately as a war, or possibly integrated into the UI build? 
> > 2) As suggested by Juca, the APM distribution could be built upon the hawkular-services distribution. > > There are pros/cons with both approaches: > > My preference is option (1) as it moves us closer to a fully integrated hawkular-services solution, but relies on a separate build using the profile (not sure if that would result in a separate release distribution). > > Option 2 would provide the full distribution as a release, but the downside is the size of the distribution (and its dependencies, such as cassandra), when user only interested in APM. Unclear whether a standalone APM distribution will still be required in the future - at present the website is structured to support this. > > Thoughts? > > Regards > Gary > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160726/a278dc84/attachment.html From auszon3 at gmail.com Tue Jul 26 22:11:02 2016 From: auszon3 at gmail.com (Austin Kuo) Date: Wed, 27 Jul 2016 02:11:02 +0000 Subject: [Hawkular-dev] Inventory API question In-Reply-To: References: <7421037.s5N4yk3Dks@dhcp-10-40-1-131.brq.redhat.com> Message-ID: When is the next inventory release because I'm waiting for the fix? Or is there any other way that I can exploit the latest inventory? Thanks! Austin Kuo ? 2016?7?21? ???18:34??? > Sure. > https://issues.jboss.org/browse/HAWKULAR-1099 > > On Thu, Jul 21, 2016 at 6:23 PM Lukas Krejci wrote: > >> On ?tvrtek 21. ?ervence 2016 10:16:27 CEST Austin Kuo wrote: >> > Oops, it's 0.17.2.Final. I just saw it from the browser. >> > >> >> Ok, then this is a bug... Would you be able to write up a JIRA for it with >> repro steps so that I can take a look at it? >> >> > On Thu, Jul 21, 2016 at 6:15 PM Austin Kuo wrote: >> > > Not sure about the version since i'm using the docker image provide by >> > > pilhuhn. >> > > I'm try to run it directly, but how to make it run at the host 0.0.0.0 >> > > such that I can access it from remote ? >> > > >> > > On Thu, Jul 21, 2016 at 5:31 PM Lukas Krejci >> wrote: >> > >> That's definitely a bug. What version of inventory are you using? >> > >> >> > >> I think I've fixed a problem like this in 0.17.2.Final. But of >> course, >> > >> this >> > >> could be a different instantiation of it. >> > >> >> > >> On ?tvrtek 21. ?ervence 2016 8:42:03 CEST Austin Kuo wrote: >> > >> > I was creating 2 different resource types with the same http >> client at >> > >> >> > >> the >> > >> >> > >> > same time. >> > >> > But one succeed, the other failed with the response 400 and body: >> > >> > { >> > >> > >> > >> > "errorMsg" : "The transaction has already been closed" >> > >> > >> > >> > } >> > >> > >> > >> > Is it not allowed to do so? >> > >> > >> > >> > Austin. >> > >> >> > >> -- >> > >> Lukas Krejci >> > >> >> > >> _______________________________________________ >> > >> hawkular-dev mailing list >> > >> hawkular-dev at lists.jboss.org >> > >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> >> >> -- >> Lukas Krejci >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160727/d0544fbb/attachment.html From miburman at redhat.com Wed Jul 27 06:19:31 2016 From: miburman at redhat.com (Michael Burman) Date: Wed, 27 Jul 2016 06:19:31 -0400 (EDT) Subject: [Hawkular-dev] Metrics storage usage and compression In-Reply-To: <1856197092.20160795.1469614716249.JavaMail.zimbra@redhat.com> Message-ID: <312924081.20160874.1469614771085.JavaMail.zimbra@redhat.com> Hi, Lately there has been some discussion on the AOS scalability lists for our storage usage when used in Openshift. While we can scale, the issue is that some customers do not wish to allocate large amounts of storage for storing metrics, as I assume they view metrics and monitoring as secondary functions (now that's whole another discussion..) To the numbers, they're predicting that at maximum scale, Hawkular-Metrics would use close to ~4TB of disk for one week of data. This is clearly too much, and we don't deploy any other compression methods currently than LZ4, which according to my tests is quite bad for our data model. So I created a small prototype that reads our current data model, compresses it and stores it to a new data model (and verifies that the returned data equals to sent data). For testing I used a ~55MB extract from the MiQ instance that QE was running. One caveat of course here, the QE instance is not in heavy usage. For following results, I decided to remove COUNTER type of data, as they looked to be "0" in most cases and compression would basically get rid of all of them, giving too rosy picture. When storing to our current data model, the disk space taken by "data" table was 74MB. My prototype uses the method of Facebook's Gorilla paper (same as what for example Prometheus uses), and in this test I used a one day block size (storing one metric's one day data to one row inside Cassandra). The end result was 3,1MB of storage space used. Code can be found from bitbucket.org/burmanm/compress_proto (Golang). I know Prometheus advertises estimated 1.3 bytes per timestamped value, but those numbers require certain sort of test data that does not represent anything I have (the compression scheme's efficiency depends on the timestamp delta and value deltas and delta-deltas). The prototype lacks certain features, for example I want it to encode compression type to the first 1 byte of the header for each row - so we could add more compression types in the future for different workloads - and availabilities would probably have better compression if we changed the disk presentation to something bit based. ** Read performance John brought up the first question - now that we store large amount of datapoints in a single row, what happens to our performance when we want to read only some parts of the data? - We need to read rows we don't need and then discard those + We reduce the amount of rows read from the Cassandra (less overhead for driver & server) + Reduced disk usage means we'll store more of the data in memory caches How does this affect the end result? I'll skip the last part of the advantage in my testing now and make sure all the reads for both scenarios are happening from the in-memory SSTables or at least disk cache (the testing machine has enough memory to keep everything in memory). For this scenario I stored 1024 datapoints for a single metric, storing them inside one block of data, thus trying to maximize the impact of unnecessary reads. I'm only interested in the first 360 datapoints. 
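For anyone not familiar with the Gorilla approach referenced in this thread: timestamps are stored as delta-of-deltas (normally 0 for regularly reported metrics) and values as the XOR against the previous value (0 when nothing changed). A much-simplified, hypothetical sketch follows - the real prototype is the Go code in the bitbucket repo mentioned in this thread, and the variable-length bit-packing that produces the actual savings is omitted here:

import java.util.ArrayList;
import java.util.List;

public class GorillaSketch {

    /**
     * Encodes each point as { timestampField, valueXor }: the first entry holds
     * the full timestamp and full value bits, the second holds the first delta,
     * and from then on only delta-of-deltas and XORs are kept.
     */
    public static List<long[]> encode(long[] timestamps, double[] values) {
        List<long[]> encoded = new ArrayList<>();
        long prevTs = 0;
        long prevDelta = 0;
        long prevBits = 0;
        for (int i = 0; i < timestamps.length; i++) {
            long tsField;
            if (i == 0) {
                tsField = timestamps[0];                        // block header: full timestamp
            } else {
                long delta = timestamps[i] - prevTs;
                tsField = (i == 1) ? delta : delta - prevDelta; // delta, then delta-of-delta
                prevDelta = delta;
            }
            long bits = Double.doubleToLongBits(values[i]);
            long xor = (i == 0) ? bits : bits ^ prevBits;       // 0 when the value is unchanged
            encoded.add(new long[] { tsField, xor });
            prevTs = timestamps[i];
            prevBits = bits;
        }
        return encoded;
    }

    public static void main(String[] args) {
        long[] ts = { 1_469_614_771_000L, 1_469_614_801_000L, 1_469_614_831_000L };
        double[] vals = { 42.0, 42.0, 42.5 };
        // Regular 30s reporting and a mostly flat value: after the first entry the
        // fields are tiny (30000 then 0 for time, 0 XOR for an unchanged value),
        // which is why this kind of monitoring data compresses so well.
        for (long[] e : encode(ts, vals)) {
            System.out.println(e[0] + " " + Long.toHexString(e[1]));
        }
    }
}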
In the scenario, our current method requests 360 rows from Cassandra and then processes them. In the compressed mode, we request 1 row (which has 1024 stored metrics) and then filter out those we don't need in the client. Results: BenchmarkCompressedPartialReadSpeed-4 275371 ns/op BenchmarkUncompressedPartialReadSpeed-4 1303088 ns/op As we can see, filtering on the HWKMETRICS side yields quite a large speedup instead of letting Cassandra to read so many rows (all of the rows were from the same partition in this test). ** Storing data Next, lets address some issues we're going to face because of the distributed nature of our solution. We have two issues compared to Prometheus for example (I use it as an example as it was used by one Openshift PM) - we let data to arrive out-of-order and we must deal with distributed nature of our data storage. We are also stricter when it comes to syncing to the storage, while Prometheus allows some data to be lost in between the writes. I can get back to optimization targets later. For storing the data, to be able to apply this sort of compression to it, we would need to always know the previous stored value. To be able to do this, we would need to do read-write path to the Cassandra and this is exactly one of the weaknesses of Cassandra's design (in performance and consistency). Clearly we need to overcome this issue somehow, while still keeping those properties that let us have our advantages. ** First phase of integration For the first phase, I would propose that we keep our current data model for short term storage. We would store the data here as it arrives and then later rewrite it to the compressed scheme in different table. For reads we would request data from the both tables and merge the results. This should not be visible to the users at all and it's a simple approach to the issue. A job framework such as the one John develops currently is required. There are some open questions to this, and I hope some of you have some great ideas I didn't think. Please read the optimization part also if I happened to mention your idea as some future path. - How often do we process the data and do we restrict the out-of-order capabilities to certain timeslice? If we would use something like 4 hour blocks as default, should we start compressing rows after one hour of block closing? While we can technically reopen the row and reindex the whole block, it does not make sense to do this too often. If we decide to go with the reindexing scenario, in that case we could start writing the next block before it closes (like every 15 minutes we would re-encode the currently open blocks if they have new incoming data). We have to be careful here as to not overwhelm our processing power and Cassandra's. This is a tradeoff between minimum disk space usage or minimum CPU/memory usage. - Compression block size changes. User could configure this - increasing it on the fly is no problem for reads, but reducing is slightly more complex scenario). If user increases the size of the block, our query would just pick some extra rows that are instantly discarded, but nothing would break. However, decreasing the size would confuse our Cassandra reads unless we know the time of the block size change and adjust queries accordingly for times before this event and after. ** Optimizing the solution The following optimizations would increase the performance of Hawkular-Metrics ingestion rate a lot and as such are probably worth investigation at some point. 
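The client-side filtering used on the compressed read path in this thread is cheap because a decoded block is just an in-memory list; a trivial, hypothetical sketch of slicing the requested range out of a decoded block (types and names are made up for illustration):

import java.util.ArrayList;
import java.util.List;

public class BlockFilter {

    /** Hypothetical data point as decoded from a compressed block. */
    public static class DataPoint {
        final long timestamp;
        final double value;
        DataPoint(long timestamp, double value) { this.timestamp = timestamp; this.value = value; }
        @Override public String toString() { return timestamp + "=" + value; }
    }

    /**
     * A compressed block covers a whole time slice (e.g. one day), so after
     * decoding we simply drop the points outside the requested range on the
     * client side instead of asking Cassandra for hundreds of individual rows.
     */
    public static List<DataPoint> filter(List<DataPoint> decodedBlock, long start, long end) {
        List<DataPoint> result = new ArrayList<>();
        for (DataPoint p : decodedBlock) {
            if (p.timestamp >= start && p.timestamp < end) {
                result.add(p);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<DataPoint> block = new ArrayList<>();
        block.add(new DataPoint(1_000L, 1.0));
        block.add(new DataPoint(2_000L, 2.0));
        block.add(new DataPoint(3_000L, 3.0));
        // Only the first two and a half seconds of the block are of interest here.
        System.out.println(filter(block, 0L, 2_500L));
    }
}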
But they're also complex and I would want to refrain from implementing them in the first phase so that we could get compression quicker to the product - so thta we would not miss certain deadlines. - Stop writing to the Cassandra in the first phase. Instead we write to something more ephemeral, such as mmap backed memory cache that is distributed among the Hawkular nodes. It would also need some sort of processing locality (direct the write to the node that controls the hash of the metricId for example - sort of like HBase does), unless we want to employ locks to prevent ordering issues if we encode already in the memory. From memory we would then store blocks to the permanent Cassandra store. The clients need to be token/hash-method aware to send data to the correct node. Benefits for that solution is increased write speed as we such backend easily reaches a million writes per second and the only bottleneck would be our JSON parsing performance. Reads could be served from both storages without much overhead. This optimization would be worth it even without the compression layer, but I would say this is not our most urgent issue (but if the write ingestion speed becomes an issue, this is the best solution to increasing and it's used in many Cassandra solutions, for time series I think SignalFX uses somewhat same approach, although they first write to Kafka). From hrupp at redhat.com Wed Jul 27 07:48:57 2016 From: hrupp at redhat.com (Heiko W.Rupp) Date: Wed, 27 Jul 2016 13:48:57 +0200 Subject: [Hawkular-dev] Hawkular.org In-Reply-To: References: <587e0bc6-44d9-3f14-e872-5a58b2058223@redhat.com> <23adbfcc-1e2f-2e92-bb4d-d205618e7c82@redhat.com> <6091FA5E-9110-4F49-BAD6-A7A297E075F9@redhat.com> <488951763.4062228.1467887126176.JavaMail.zimbra@redhat.com> Message-ID: <8688C533-E96A-488C-8EE4-AA8E4DAE69E1@redhat.com> On 26 Jul 2016, at 16:28, Jay Shaughnessy wrote: > I understand the decision to have APM and Metrics as top level options > but I'd like to see Standalone Alerting mentioned somewhere. Perhaps > in Labs? Also, a small thing but we've been trying to migrate from > 'Hawkular Alerts' to 'Hawkular Alerting' in text/docs, etc. I'd like to > see that changed on hawkular.org. +1 on labs. Could you please open a PR against the web site? From hrupp at redhat.com Wed Jul 27 07:41:45 2016 From: hrupp at redhat.com (Heiko W.Rupp) Date: Wed, 27 Jul 2016 13:41:45 +0200 Subject: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env In-Reply-To: <05bba1fc-7c5d-ea9b-679f-489da235aa48@redhat.com> References: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> <23754756.1606277.1467548084344.JavaMail.zimbra@redhat.com> <9B1802A1-517E-47B0-8671-8342A6CC05DD@redhat.com> <05bba1fc-7c5d-ea9b-679f-489da235aa48@redhat.com> Message-ID: On 22 Jul 2016, at 17:58, Juraci Paix?o Kr?hling wrote: > Do we still see the main monitored subject as the application, or do we > care about the OS/environment? In other words: does it make sense to > have two application instances with the same feed-id? If so, having I am not sure I understood the question. Right now the feed-id more or less "identifies the agent". In a case where one WildFly has one embedded agent, the feed-id more or less also identifies that server (at least in standalone mode; I don't think Domain+ Docker/K8s makes too much sense, as you now would have 2 competing orchestration systems, but I may be wrong). Now what is an "application instance". 
If it is a WF running a certain .?ar file, I don't think it does not make sense to have two with the same feed-id with this current model I described. > an > alternate algorithm for containers that comes up with an ID based on > local artifacts would solve most of the issues, I believe. Can you describe that please? Heiko From jkremser at redhat.com Wed Jul 27 10:44:29 2016 From: jkremser at redhat.com (Jiri Kremser) Date: Wed, 27 Jul 2016 16:44:29 +0200 Subject: [Hawkular-dev] Inventory API question In-Reply-To: References: <7421037.s5N4yk3Dks@dhcp-10-40-1-131.brq.redhat.com> Message-ID: Or is there any other way that I can exploit the latest inventory? There is. build the inventory: git clone https://github.com/hawkular/hawkular-inventory.git cd hawkular-inventory mvn clean install -DskipTests ..and then build the services that will use the inventory snapshot version git clone https://github.com/hawkular/hawkular-services.git cd hawkular-services sed -i "s/\(\)[^<]*/\10.18.0.Final-SNAPSHOT/g" ./pom.xml mvn clean install -Pdev -DskipTests ./dist/target/hawkular-*/bin/standalone.sh -Dhawkular.log.cassandra=WARN -Dhawkular.log.inventory.rest.requests=DEBUG -Dhawkular.rest.user=jdoe -Dhawkular.rest.password=password -Dhawkular.agent.enabled=true' jk On Wed, Jul 27, 2016 at 4:11 AM, Austin Kuo wrote: > When is the next inventory release because I'm waiting for the fix? Or is > there any other way that I can exploit the latest inventory? > > Thanks! > Austin Kuo ? 2016?7?21? ???18:34??? > >> Sure. >> https://issues.jboss.org/browse/HAWKULAR-1099 >> >> On Thu, Jul 21, 2016 at 6:23 PM Lukas Krejci wrote: >> >>> On ?tvrtek 21. ?ervence 2016 10:16:27 CEST Austin Kuo wrote: >>> > Oops, it's 0.17.2.Final. I just saw it from the browser. >>> > >>> >>> Ok, then this is a bug... Would you be able to write up a JIRA for it >>> with >>> repro steps so that I can take a look at it? >>> >>> > On Thu, Jul 21, 2016 at 6:15 PM Austin Kuo wrote: >>> > > Not sure about the version since i'm using the docker image provide >>> by >>> > > pilhuhn. >>> > > I'm try to run it directly, but how to make it run at the host >>> 0.0.0.0 >>> > > such that I can access it from remote ? >>> > > >>> > > On Thu, Jul 21, 2016 at 5:31 PM Lukas Krejci >>> wrote: >>> > >> That's definitely a bug. What version of inventory are you using? >>> > >> >>> > >> I think I've fixed a problem like this in 0.17.2.Final. But of >>> course, >>> > >> this >>> > >> could be a different instantiation of it. >>> > >> >>> > >> On ?tvrtek 21. ?ervence 2016 8:42:03 CEST Austin Kuo wrote: >>> > >> > I was creating 2 different resource types with the same http >>> client at >>> > >> >>> > >> the >>> > >> >>> > >> > same time. >>> > >> > But one succeed, the other failed with the response 400 and body: >>> > >> > { >>> > >> > >>> > >> > "errorMsg" : "The transaction has already been closed" >>> > >> > >>> > >> > } >>> > >> > >>> > >> > Is it not allowed to do so? >>> > >> > >>> > >> > Austin. >>> > >> >>> > >> -- >>> > >> Lukas Krejci >>> > >> >>> > >> _______________________________________________ >>> > >> hawkular-dev mailing list >>> > >> hawkular-dev at lists.jboss.org >>> > >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>> >>> >>> -- >>> Lukas Krejci >>> >> > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160727/917c59b3/attachment.html From auszon3 at gmail.com Wed Jul 27 11:37:04 2016 From: auszon3 at gmail.com (Austin Kuo) Date: Wed, 27 Jul 2016 15:37:04 +0000 Subject: [Hawkular-dev] Inventory API question In-Reply-To: References: <7421037.s5N4yk3Dks@dhcp-10-40-1-131.brq.redhat.com> Message-ID: THANKS! On Wed, Jul 27, 2016 at 10:44 PM Jiri Kremser wrote: > Or is there any other way that I can exploit the latest inventory? > > There is. > > build the inventory: > > git clone https://github.com/hawkular/hawkular-inventory.git > cd hawkular-inventory > mvn clean install -DskipTests > > ..and then build the services that will use the inventory snapshot version > > git clone https://github.com/hawkular/hawkular-services.git > cd hawkular-services > sed -i > "s/\(\)[^<]*/\10.18.0.Final-SNAPSHOT/g" > ./pom.xml > mvn clean install -Pdev -DskipTests > ./dist/target/hawkular-*/bin/standalone.sh -Dhawkular.log.cassandra=WARN > -Dhawkular.log.inventory.rest.requests=DEBUG -Dhawkular.rest.user=jdoe > -Dhawkular.rest.password=password -Dhawkular.agent.enabled=true' > > jk > > On Wed, Jul 27, 2016 at 4:11 AM, Austin Kuo wrote: > >> When is the next inventory release because I'm waiting for the fix? Or is >> there any other way that I can exploit the latest inventory? >> >> Thanks! >> Austin Kuo ? 2016?7?21? ???18:34??? >> >>> Sure. >>> https://issues.jboss.org/browse/HAWKULAR-1099 >>> >>> On Thu, Jul 21, 2016 at 6:23 PM Lukas Krejci wrote: >>> >>>> On ?tvrtek 21. ?ervence 2016 10:16:27 CEST Austin Kuo wrote: >>>> > Oops, it's 0.17.2.Final. I just saw it from the browser. >>>> > >>>> >>>> Ok, then this is a bug... Would you be able to write up a JIRA for it >>>> with >>>> repro steps so that I can take a look at it? >>>> >>>> > On Thu, Jul 21, 2016 at 6:15 PM Austin Kuo wrote: >>>> > > Not sure about the version since i'm using the docker image provide >>>> by >>>> > > pilhuhn. >>>> > > I'm try to run it directly, but how to make it run at the host >>>> 0.0.0.0 >>>> > > such that I can access it from remote ? >>>> > > >>>> > > On Thu, Jul 21, 2016 at 5:31 PM Lukas Krejci >>>> wrote: >>>> > >> That's definitely a bug. What version of inventory are you using? >>>> > >> >>>> > >> I think I've fixed a problem like this in 0.17.2.Final. But of >>>> course, >>>> > >> this >>>> > >> could be a different instantiation of it. >>>> > >> >>>> > >> On ?tvrtek 21. ?ervence 2016 8:42:03 CEST Austin Kuo wrote: >>>> > >> > I was creating 2 different resource types with the same http >>>> client at >>>> > >> >>>> > >> the >>>> > >> >>>> > >> > same time. >>>> > >> > But one succeed, the other failed with the response 400 and body: >>>> > >> > { >>>> > >> > >>>> > >> > "errorMsg" : "The transaction has already been closed" >>>> > >> > >>>> > >> > } >>>> > >> > >>>> > >> > Is it not allowed to do so? >>>> > >> > >>>> > >> > Austin. 
>>>> > >> >>>> > >> -- >>>> > >> Lukas Krejci >>>> > >> >>>> > >> _______________________________________________ >>>> > >> hawkular-dev mailing list >>>> > >> hawkular-dev at lists.jboss.org >>>> > >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >>>> >>>> >>>> -- >>>> Lukas Krejci >>>> >>> >> _______________________________________________ >> hawkular-dev mailing list >> hawkular-dev at lists.jboss.org >> https://lists.jboss.org/mailman/listinfo/hawkular-dev >> >> > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160727/0ce03d3e/attachment-0001.html From jpkroehling at redhat.com Wed Jul 27 12:02:17 2016 From: jpkroehling at redhat.com (=?UTF-8?Q?Juraci_Paix=c3=a3o_Kr=c3=b6hling?=) Date: Wed, 27 Jul 2016 18:02:17 +0200 Subject: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env In-Reply-To: References: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> <23754756.1606277.1467548084344.JavaMail.zimbra@redhat.com> <9B1802A1-517E-47B0-8671-8342A6CC05DD@redhat.com> <05bba1fc-7c5d-ea9b-679f-489da235aa48@redhat.com> Message-ID: <090cb013-9cc9-eb7a-cc7d-8321b565d90b@redhat.com> On 27.07.2016 13:41, Heiko W.Rupp wrote: > On 22 Jul 2016, at 17:58, Juraci Paix?o Kr?hling wrote: > >> Do we still see the main monitored subject as the application, or do we >> care about the OS/environment? In other words: does it make sense to >> have two application instances with the same feed-id? If so, having > > I am not sure I understood the question. IIRC, at the beginning of Hawkular, the main idea was to have metrics on a per-application basis, instead of a per-deployment/instance. The OS metrics were an "extra". So, the question is if we still have a focus on the "per application metrics", no matter how many instances (containers, VMs, ...) are there for this application. Or if we still need the OS metrics, even for container environments. If we need the OS metrics, then we need one feed-id per "application" instance. > Right now the feed-id more or less "identifies the agent". In a case where > one WildFly has one embedded agent, the feed-id more or less also > identifies that server (at least in standalone mode; I don't think Domain+ > Docker/K8s makes too much sense, as you now would have 2 competing > orchestration systems, but I may be wrong). > Now what is an "application instance". If it is a WF running a certain .?ar > file, I don't think it does not make sense to have two with the same > feed-id with this current model I described. Right, that's my point. For an infra where the application is deployed on Docker containers, like in OpenShift, it doesn't matter to Hawkular how many containers are there for a given application: Openshift would auto scale up and down on demand. In such scenario, the feed-id isn't of much help, as it would change very often. >> an >> alternate algorithm for containers that comes up with an ID based on >> local artifacts would solve most of the issues, I believe. > > Can you describe that please? 
Sure: if we assume that the feed-id for an image should remain the same for all instances of that image (I'm not sure yet that's the case), then the alternative algorithm could be used when the Agent starts, based on the checksum for the content of the artifacts found on standalone/deployments . Example: - standalone/deployments/foobar.war sha1sum: 96866e09c9ade125d9f3d3fc9766ccba5961968e - standalone/deployments/foobar-ds.xml sha1sum: 644faffab30df784c995fa75e4ff84bcf0857d6e - feed-id: sha1sum(sha1sum(foobar.war) + sha1sum(foobar-ds.xml)) result: a4f36be49f641967e51e2707c1fe6d0a29c2f682 If there's a new image for the foobar-service, there are two possible scenarios: - Only the underlying artifacts were changed (OS updates, EAP updates, ...), but the foobar-service itself is the same. On this scenario, the resulting sha1sum is the same, so, the same feed-id as before is used. - foobar-service has changed, no matter if the OS was updated or not: the sha1sum is new, so, new feed-id. For non-container deployments, the feed-id could still be generated the way it is today. A new configuration parameter on standalone.xml would make sense, on this case, to specify the feed-id algo to use. - Juca. From mazz at redhat.com Wed Jul 27 12:51:53 2016 From: mazz at redhat.com (John Mazzitelli) Date: Wed, 27 Jul 2016 12:51:53 -0400 (EDT) Subject: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env In-Reply-To: <090cb013-9cc9-eb7a-cc7d-8321b565d90b@redhat.com> References: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> <23754756.1606277.1467548084344.JavaMail.zimbra@redhat.com> <9B1802A1-517E-47B0-8671-8342A6CC05DD@redhat.com> <05bba1fc-7c5d-ea9b-679f-489da235aa48@redhat.com> <090cb013-9cc9-eb7a-cc7d-8321b565d90b@redhat.com> Message-ID: <1350220148.9101019.1469638313555.JavaMail.zimbra@redhat.com> > - feed-id: sha1sum(sha1sum(foobar.war) + sha1sum(foobar-ds.xml)) > result: a4f36be49f641967e51e2707c1fe6d0a29c2f682 Calculating the feed ID based on discovered resources is going to be very difficult if not impossible with the current implementation - the agent needs the feed ID upfront during startup before it even runs discovery (i.e. before it even knows about any resources - let along child resources like deployments) - it requires the feed ID internally to start many components, it can't talk to hawkular-inventory without it, and it needs the feed ID to connect to the hawkular cmdgw server. From jpkroehling at redhat.com Wed Jul 27 13:06:12 2016 From: jpkroehling at redhat.com (=?UTF-8?Q?Juraci_Paix=c3=a3o_Kr=c3=b6hling?=) Date: Wed, 27 Jul 2016 19:06:12 +0200 Subject: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env In-Reply-To: <1350220148.9101019.1469638313555.JavaMail.zimbra@redhat.com> References: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> <23754756.1606277.1467548084344.JavaMail.zimbra@redhat.com> <9B1802A1-517E-47B0-8671-8342A6CC05DD@redhat.com> <05bba1fc-7c5d-ea9b-679f-489da235aa48@redhat.com> <090cb013-9cc9-eb7a-cc7d-8321b565d90b@redhat.com> <1350220148.9101019.1469638313555.JavaMail.zimbra@redhat.com> Message-ID: On 27.07.2016 18:51, John Mazzitelli wrote: >> - feed-id: sha1sum(sha1sum(foobar.war) + sha1sum(foobar-ds.xml)) >> result: a4f36be49f641967e51e2707c1fe6d0a29c2f682 > > Calculating the feed ID based on discovered resources is going to be very difficult if not impossible with the current implementation - the agent needs the feed ID upfront during startup before it even runs discovery (i.e. 
before it even knows about any resources - let along child resources like deployments) - it requires the feed ID internally to start many components, it can't talk to hawkular-inventory without it, and it needs the feed ID to connect to the hawkular cmdgw server. I considered this, but I thought it wouldn't be a problem to implement a separate Wildfly extension that would calculate this during the boot and put the resulting value in a system property for later use by the agent itself. Would that work? - Juca. From jpkroehling at redhat.com Wed Jul 27 13:08:46 2016 From: jpkroehling at redhat.com (=?UTF-8?Q?Juraci_Paix=c3=a3o_Kr=c3=b6hling?=) Date: Wed, 27 Jul 2016 19:08:46 +0200 Subject: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env In-Reply-To: References: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> <23754756.1606277.1467548084344.JavaMail.zimbra@redhat.com> <9B1802A1-517E-47B0-8671-8342A6CC05DD@redhat.com> <05bba1fc-7c5d-ea9b-679f-489da235aa48@redhat.com> <090cb013-9cc9-eb7a-cc7d-8321b565d90b@redhat.com> <1350220148.9101019.1469638313555.JavaMail.zimbra@redhat.com> Message-ID: <62ae28fc-caaa-bb5d-9e54-97a1b13deb7d@redhat.com> On 27.07.2016 19:06, Juraci Paix?o Kr?hling wrote: > On 27.07.2016 18:51, John Mazzitelli wrote: >>> - feed-id: sha1sum(sha1sum(foobar.war) + sha1sum(foobar-ds.xml)) >>> result: a4f36be49f641967e51e2707c1fe6d0a29c2f682 >> >> Calculating the feed ID based on discovered resources is going to be very difficult if not impossible with the current implementation - the agent needs the feed ID upfront during startup before it even runs discovery (i.e. before it even knows about any resources - let along child resources like deployments) - it requires the feed ID internally to start many components, it can't talk to hawkular-inventory without it, and it needs the feed ID to connect to the hawkular cmdgw server. > > I considered this, but I thought it wouldn't be a problem to implement a > separate Wildfly extension that would calculate this during the boot and > put the resulting value in a system property for later use by the agent > itself. Would that work? By the way: I don't think this extension needs to wait for the deployers either: all it would need to do is to get a VirtualJarInputStream for each of the deployments, and calculate the sha1sum based on that. - Juca. From jsanda at redhat.com Wed Jul 27 15:17:11 2016 From: jsanda at redhat.com (John Sanda) Date: Wed, 27 Jul 2016 15:17:11 -0400 Subject: [Hawkular-dev] Metrics storage usage and compression In-Reply-To: <312924081.20160874.1469614771085.JavaMail.zimbra@redhat.com> References: <312924081.20160874.1469614771085.JavaMail.zimbra@redhat.com> Message-ID: > On Jul 27, 2016, at 6:19 AM, Michael Burman wrote: > > Hi, > > Lately there has been some discussion on the AOS scalability lists for our storage usage when used in Openshift. While we can scale, the issue is that some customers do not wish to allocate large amounts of storage for storing metrics, as I assume they view metrics and monitoring as secondary functions (now that's whole another discussion..) > > To the numbers, they're predicting that at maximum scale, Hawkular-Metrics would use close to ~4TB of disk for one week of data. This is clearly too much, and we don't deploy any other compression methods currently than LZ4, which according to my tests is quite bad for our data model. 
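A minimal sketch of the checksum-based feed-id idea from this thread (purely illustrative: the directory layout, hashing choices and class name are assumptions, and as discussed a real agent or boot-time extension would have to compute this before the rest of startup):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.stream.Collectors;
import java.util.stream.Stream;

/**
 * Hypothetical sketch: derive a stable feed-id from the deployments' content,
 * so every container started from the same image reports the same feed-id.
 */
public class ContentBasedFeedId {

    public static String feedId(Path deploymentsDir) throws IOException {
        try (Stream<Path> files = Files.list(deploymentsDir)) {
            String concatenatedHashes = files
                    .filter(Files::isRegularFile)
                    .sorted()                       // stable order, stable result
                    .map(ContentBasedFeedId::sha1OfFile)
                    .collect(Collectors.joining());
            return sha1(concatenatedHashes.getBytes(StandardCharsets.UTF_8));
        }
    }

    private static String sha1OfFile(Path file) {
        try {
            return sha1(Files.readAllBytes(file));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    private static String sha1(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest(data)) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(feedId(Paths.get("standalone/deployments")));
    }
}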
So I created a small prototype that reads our current data model, compresses it and stores it to a new data model (and verifies that the returned data equals to sent data). For testing I used a ~55MB extract from the MiQ instance that QE was running. One caveat of course here, the QE instance is not in heavy usage. For following results, I decided to remove COUNTER type of data, as they looked to be "0" in most cases and compression would basically get rid of all of them, giving too rosy picture. Would we use Cassandra?s compression in addition to this or turn it off? Will this compression work with non-numeric data? I am wondering about availability and string metrics. I don?t necessarily see it as a deal breaker if it only handles numeric data because that is mostly what we?re dealing with, and in OpenShift neither availability nor string metrics are currently used. > > When storing to our current data model, the disk space taken by "data" table was 74MB. My prototype uses the method of Facebook's Gorilla paper (same as what for example Prometheus uses), and in this test I used a one day block size (storing one metric's one day data to one row inside Cassandra). The end result was 3,1MB of storage space used. Code can be found from bitbucket.org/burmanm/compress_proto (Golang). I know Prometheus advertises estimated 1.3 bytes per timestamped value, but those numbers require certain sort of test data that does not represent anything I have (the compression scheme's efficiency depends on the timestamp delta and value deltas and delta-deltas). The prototype lacks certain features, for example I want it to encode compression type to the first 1 byte of the header for each row - so we could add more compression types in the future for different workloads - and availabilities would probably have better compression if we changed the disk presentat! > ion to something bit based. > > ** Read performance > > John brought up the first question - now that we store large amount of datapoints in a single row, what happens to our performance when we want to read only some parts of the data? > > - We need to read rows we don't need and then discard those > + We reduce the amount of rows read from the Cassandra (less overhead for driver & server) > + Reduced disk usage means we'll store more of the data in memory caches > > How does this affect the end result? I'll skip the last part of the advantage in my testing now and make sure all the reads for both scenarios are happening from the in-memory SSTables or at least disk cache (the testing machine has enough memory to keep everything in memory). For this scenario I stored 1024 datapoints for a single metric, storing them inside one block of data, thus trying to maximize the impact of unnecessary reads. I'm only interested in the first 360 datapoints. > > In the scenario, our current method requests 360 rows from Cassandra and then processes them. In the compressed mode, we request 1 row (which has 1024 stored metrics) and then filter out those we don't need in the client. Results: > > BenchmarkCompressedPartialReadSpeed-4 275371 ns/op > BenchmarkUncompressedPartialReadSpeed-4 1303088 ns/op > > As we can see, filtering on the HWKMETRICS side yields quite a large speedup instead of letting Cassandra to read so many rows (all of the rows were from the same partition in this test). We definitely want to do more testing with reads to understand the impact. We have several endpoints that allow you to fetch data points from multiple metrics. 
It is great to see these numbers for reading against a single metric. What about 5, 10, 20, etc. metrics? And what about when the query spans multiple blocks? Another thing we?ll need to test is how larger row sizes impact Cassandra streaming operations which happen when nodes join/leave the cluster and during anti-entropy repair. > > ** Storing data > > Next, lets address some issues we're going to face because of the distributed nature of our solution. We have two issues compared to Prometheus for example (I use it as an example as it was used by one Openshift PM) - we let data to arrive out-of-order and we must deal with distributed nature of our data storage. We are also stricter when it comes to syncing to the storage, while Prometheus allows some data to be lost in between the writes. I can get back to optimization targets later. > > For storing the data, to be able to apply this sort of compression to it, we would need to always know the previous stored value. To be able to do this, we would need to do read-write path to the Cassandra and this is exactly one of the weaknesses of Cassandra's design (in performance and consistency). Clearly we need to overcome this issue somehow, while still keeping those properties that let us have our advantages. > > ** First phase of integration > > For the first phase, I would propose that we keep our current data model for short term storage. We would store the data here as it arrives and then later rewrite it to the compressed scheme in different table. For reads we would request data from the both tables and merge the results. This should not be visible to the users at all and it's a simple approach to the issue. A job framework such as the one John develops currently is required. If we know we have the uncompressed data, then we don?t need to read from the compressed table(s). I would expect this to be the case for queries involving recent data, e.g., past 4 hours. > > There are some open questions to this, and I hope some of you have some great ideas I didn't think. Please read the optimization part also if I happened to mention your idea as some future path. > > - How often do we process the data and do we restrict the out-of-order capabilities to certain timeslice? If we would use something like 4 hour blocks as default, should we start compressing rows after one hour of block closing? While we can technically reopen the row and reindex the whole block, it does not make sense to do this too often. If we decide to go with the reindexing scenario, in that case we could start writing the next block before it closes (like every 15 minutes we would re-encode the currently open blocks if they have new incoming data). We have to be careful here as to not overwhelm our processing power and Cassandra's. This is a tradeoff between minimum disk space usage or minimum CPU/memory usage. Query patterns are going to dictate idea block sizes. For a first (and probably second) iteration, I say we go with a reasonable default like a day. Does reindexing the row mean updating existing cells on disk? If so, I would like to think about ways that allow us to continue keeping data immutable as we currently do. > > - Compression block size changes. User could configure this - increasing it on the fly is no problem for reads, but reducing is slightly more complex scenario). If user increases the size of the block, our query would just pick some extra rows that are instantly discarded, but nothing would break. 
However, decreasing the size would confuse our Cassandra reads unless we know the time of the block size change and adjust queries accordingly for times before this event and after. > > ** Optimizing the solution > > The following optimizations would increase the ingestion rate of Hawkular-Metrics a lot and as such are probably worth investigating at some point. But they're also complex and I would want to refrain from implementing them in the first phase so that we could get compression into the product quicker - so that we would not miss certain deadlines. > > - Stop writing to Cassandra in the first phase. Instead we write to something more ephemeral, such as an mmap-backed memory cache that is distributed among the Hawkular nodes. It would also need some sort of processing locality (direct the write to the node that controls the hash of the metricId for example - sort of like HBase does), unless we want to employ locks to prevent ordering issues if we already encode in memory. From memory we would then store blocks to the permanent Cassandra store. The clients need to be token/hash-method aware to send data to the correct node. We ought to look at Infinispan for this. We are already talking about other use cases for Infinispan so it would be nice if it works out. Lastly, this is great stuff! From mfoley at redhat.com Wed Jul 27 15:19:29 2016 From: mfoley at redhat.com (Michael Foley) Date: Wed, 27 Jul 2016 15:19:29 -0400 (EDT) Subject: [Hawkular-dev] Inventory API question In-Reply-To: References: <7421037.s5N4yk3Dks@dhcp-10-40-1-131.brq.redhat.com> Message-ID: <1424374452.1608113.1469647169154.JavaMail.zimbra@redhat.com> The QE automation makes extensive use of the Hawkular REST API for setup, verification, etc .... There might be some useful code examples in the test suite here https://github.com/ManageIQ/wrapanapi/blob/master/mgmtsystem/hawkular.py Michael ----- Original Message ----- From: "Austin Kuo" To: "Discussions around Hawkular development" Sent: Wednesday, July 27, 2016 11:37:04 AM Subject: Re: [Hawkular-dev] Inventory API question THANKS! On Wed, Jul 27, 2016 at 10:44 PM Jiri Kremser < jkremser at redhat.com > wrote: Or is there any other way that I can exploit the latest inventory? There is. Build the inventory: git clone https://github.com/hawkular/hawkular-inventory.git cd hawkular-inventory mvn clean install -DskipTests ..and then build the services that will use the inventory snapshot version git clone https://github.com/hawkular/hawkular-services.git cd hawkular-services sed -i "s/\(\)[^<]*/\10.18.0.Final-SNAPSHOT/g" ./pom.xml mvn clean install -Pdev -DskipTests ./dist/target/hawkular-*/bin/standalone.sh -Dhawkular.log.cassandra=WARN -Dhawkular.log.inventory.rest.requests=DEBUG -Dhawkular.rest.user=jdoe -Dhawkular.rest.password=password -Dhawkular.agent.enabled=true jk On Wed, Jul 27, 2016 at 4:11 AM, Austin Kuo < auszon3 at gmail.com > wrote:
When is the next inventory release, because I'm waiting for the fix? Or is there any other way that I can exploit the latest inventory? Thanks! Austin Kuo < auszon3 at gmail.com > wrote on 21 July 2016 at 18:34:
Sure. https://issues.jboss.org/browse/HAWKULAR-1099 On Thu, Jul 21, 2016 at 6:23 PM Lukas Krejci < lkrejci at redhat.com > wrote:
On Thursday 21 July 2016 10:16:27 CEST Austin Kuo wrote: > Oops, it's 0.17.2.Final. I just saw it from the browser. > Ok, then this is a bug... Would you be able to write up a JIRA for it with repro steps so that I can take a look at it? > On Thu, Jul 21, 2016 at 6:15 PM Austin Kuo < auszon3 at gmail.com > wrote: > > Not sure about the version since I'm using the docker image provided by > > pilhuhn. > > I'm trying to run it directly, but how do I make it run at host 0.0.0.0 > > such that I can access it from remote? > > > > On Thu, Jul 21, 2016 at 5:31 PM Lukas Krejci < lkrejci at redhat.com > wrote: > >> That's definitely a bug. What version of inventory are you using? > >> > >> I think I've fixed a problem like this in 0.17.2.Final. But of course, > >> this > >> could be a different instantiation of it. > >> > >> On Thursday 21 July 2016 8:42:03 CEST Austin Kuo wrote: > >> > I was creating 2 different resource types with the same http client at > >> > >> the > >> > >> > same time. > >> > But one succeeded, the other failed with the response 400 and body: > >> > { > >> > > >> > "errorMsg" : "The transaction has already been closed" > >> > > >> > } > >> > > >> > Is it not allowed to do so? > >> > > >> > Austin. > >> > >> -- > >> Lukas Krejci > >> > >> _______________________________________________ > >> hawkular-dev mailing list > >> hawkular-dev at lists.jboss.org > >> https://lists.jboss.org/mailman/listinfo/hawkular-dev -- Lukas Krejci
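For the JIRA mentioned above, a minimal reproduction could look like the following. This is a hypothetical sketch only: the /hawkular/inventory/resourceTypes path and the JSON payload are assumptions rather than something quoted in this thread, and jdoe/password are just the development credentials used earlier in the thread.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    // Hypothetical repro sketch: create two resource types at the same time and
    // print the status codes; on the affected versions one of the requests may
    // come back as 400 with "The transaction has already been closed".
    public class ConcurrentCreateRepro {

        static int post(String id) throws Exception {
            URL url = new URL("http://localhost:8080/hawkular/inventory/resourceTypes"); // assumed path
            HttpURLConnection c = (HttpURLConnection) url.openConnection();
            c.setRequestMethod("POST");
            c.setRequestProperty("Content-Type", "application/json");
            c.setRequestProperty("Authorization", "Basic " + Base64.getEncoder()
                    .encodeToString("jdoe:password".getBytes(StandardCharsets.UTF_8)));
            c.setDoOutput(true);
            try (OutputStream os = c.getOutputStream()) {
                os.write(("{\"id\":\"" + id + "\"}").getBytes(StandardCharsets.UTF_8)); // assumed payload
            }
            return c.getResponseCode();
        }

        public static void main(String[] args) throws Exception {
            Runnable create = () -> {
                try {
                    System.out.println(post("rt-" + Thread.currentThread().getName()));
                } catch (Exception e) {
                    e.printStackTrace();
                }
            };
            Thread t1 = new Thread(create, "a");
            Thread t2 = new Thread(create, "b");
            t1.start(); t2.start();
            t1.join(); t2.join();
        }
    }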
_______________________________________________ hawkular-dev mailing list hawkular-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/hawkular-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160727/ad235892/attachment-0001.html From lkrejci at redhat.com Wed Jul 27 16:15:27 2016 From: lkrejci at redhat.com (Lukas Krejci) Date: Wed, 27 Jul 2016 22:15:27 +0200 Subject: [Hawkular-dev] Hawkular Inventory 0.17.3.Final Released Message-ID: <5464828.z9cvTtq8N7@localhost.localdomain> Hi all, I am happy to announce the release of Hawkular Inventory 0.17.3.Final. There are no new features but a couple of important bugfixes: * transactions are now properly handled during /hawkular/inventory/traversal. It should no longer happen that a traversal would return stale data. * [HAWKULAR-1099] transaction retries actually work with Titan now (Titan closes a transaction on failure while Inventory tried to rollback a failed transaction, leading Titan to complain about trying to work with a closed transaction) * update to /traversal queries to handle /traversal/recursive (which would give you all the entities in inventory, so don't do that ;) ) -- Lukas Krejci From mazz at redhat.com Thu Jul 28 09:31:49 2016 From: mazz at redhat.com (John Mazzitelli) Date: Thu, 28 Jul 2016 09:31:49 -0400 (EDT) Subject: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env In-Reply-To: <62ae28fc-caaa-bb5d-9e54-97a1b13deb7d@redhat.com> References: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> <9B1802A1-517E-47B0-8671-8342A6CC05DD@redhat.com> <05bba1fc-7c5d-ea9b-679f-489da235aa48@redhat.com> <090cb013-9cc9-eb7a-cc7d-8321b565d90b@redhat.com> <1350220148.9101019.1469638313555.JavaMail.zimbra@redhat.com> <62ae28fc-caaa-bb5d-9e54-97a1b13deb7d@redhat.com> Message-ID: <1423739412.9358466.1469712709987.JavaMail.zimbra@redhat.com> I would say first that we should not expect people to be happy to have to install a _second_ subsystem extension just to support Hawkular (having a separate feature pack, installer, separate in standalone.xml, etc.). But we could add more extension *services* or DUPs (see below) to the Hawkular WildFly Agent subsystem itself, and it would just come with the agent with no additional installation or configs necessary. Also, this feature of building a feed ID based on content within the app server is going to have drawbacks when you consider it won't work if the app server's deployments could ever change (adding deployments, removing deployments, or even just patching existing deployments if the version string is part of the deployment filename [foobar-1.0.war -> foobar-2.0.war]). So this feature would be very restrictive in who should be using it, and people should be very aware of it in case they turn that feature on when they shouldn't (because the consequences would be bad - the agent's feed ID would change underneath of it simply when a deployment changed). That said, what might be possible (given the restrictions noted above), if we do want to support such a thing, is that we could allow a new pre-defined value for 's "feed-id" attribute. We already have one - "autogenerate". We could have something like "autogenerate-based-on-deployments". If the agent sees this, it could in theory add a DUP (DeploymentUnitProcessor) to the series of DUPs in the app server.
This DUP of ours would pass through the deployments, but before it does it looks at the deployment name and keeps a running sha1 hash, building up the final hash when all deployments are passed through, put it somewhere the agent can access, and that will be the feed ID. (note if the "real" deployer that will handle that deployment actually rejects the deployment, we might have a problem because we'll have hashed on something that isn't really deployed - I don't know if our DUP can see when a deployment is rejected down the chain). We already do this kind of thing in RHQ. We add a DUP [1] and in the DUP (in RHQ anyway) we look at the name of the deployment and if it isn't rhq.war, the deployment is rejected [2]. For Hawkular, we just let it pass through and deploy but we'd create the hash based on the name. [1] https://github.com/rhq-project/rhq/blob/master/modules/enterprise/server/startup-subsystem/src/main/java/org/rhq/enterprise/startup/StartupSubsystemAdd.java#L65-L71 [2] https://github.com/rhq-project/rhq/blob/master/modules/enterprise/server/startup-subsystem/src/main/java/org/rhq/enterprise/startup/StartupCrippledDeploymentProcessor.java#L15-L31 ----- Original Message ----- > On 27.07.2016 19:06, Juraci Paix?o Kr?hling wrote: > > On 27.07.2016 18:51, John Mazzitelli wrote: > >>> - feed-id: sha1sum(sha1sum(foobar.war) + sha1sum(foobar-ds.xml)) > >>> result: a4f36be49f641967e51e2707c1fe6d0a29c2f682 > >> > >> Calculating the feed ID based on discovered resources is going to be very > >> difficult if not impossible with the current implementation - the agent > >> needs the feed ID upfront during startup before it even runs discovery > >> (i.e. before it even knows about any resources - let along child > >> resources like deployments) - it requires the feed ID internally to start > >> many components, it can't talk to hawkular-inventory without it, and it > >> needs the feed ID to connect to the hawkular cmdgw server. > > > > I considered this, but I thought it wouldn't be a problem to implement a > > separate Wildfly extension that would calculate this during the boot and > > put the resulting value in a system property for later use by the agent > > itself. Would that work? > > By the way: I don't think this extension needs to wait for the deployers > either: all it would need to do is to get a VirtualJarInputStream for > each of the deployments, and calculate the sha1sum based on that. > > - Juca. > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > From hrupp at redhat.com Thu Jul 28 09:56:04 2016 From: hrupp at redhat.com (Heiko W.Rupp) Date: Thu, 28 Jul 2016 14:56:04 +0100 Subject: [Hawkular-dev] Metrics storage usage and compression In-Reply-To: <312924081.20160874.1469614771085.JavaMail.zimbra@redhat.com> References: <312924081.20160874.1469614771085.JavaMail.zimbra@redhat.com> Message-ID: <1501E00E-7821-4A8D-A801-97C4FB25347F@redhat.com> On 27 Jul 2016, at 11:19, Michael Burman wrote: > When storing to our current data model, the disk space taken by "data" > table was 74MB. My prototype uses the method of Facebook's Gorilla > paper (same as what for example Prometheus uses), and in this test I > used a one day block size (storing one metric's one day data to one > row inside Cassandra). The end result was 3,1MB of storage space used. > Code can be found from bitbucket.org/burmanm/compress_proto (Golang). 
> I know Prometheus advertises estimated 1.3 bytes per timestamped > value, but those How many bytes to we currently use per timestamped value? From jpkroehling at redhat.com Thu Jul 28 10:08:41 2016 From: jpkroehling at redhat.com (=?UTF-8?Q?Juraci_Paix=c3=a3o_Kr=c3=b6hling?=) Date: Thu, 28 Jul 2016 16:08:41 +0200 Subject: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env In-Reply-To: <1423739412.9358466.1469712709987.JavaMail.zimbra@redhat.com> References: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> <9B1802A1-517E-47B0-8671-8342A6CC05DD@redhat.com> <05bba1fc-7c5d-ea9b-679f-489da235aa48@redhat.com> <090cb013-9cc9-eb7a-cc7d-8321b565d90b@redhat.com> <1350220148.9101019.1469638313555.JavaMail.zimbra@redhat.com> <62ae28fc-caaa-bb5d-9e54-97a1b13deb7d@redhat.com> <1423739412.9358466.1469712709987.JavaMail.zimbra@redhat.com> Message-ID: <311801b5-5d97-3924-307d-e76c0c63cace@redhat.com> On 28.07.2016 15:31, John Mazzitelli wrote: > Also, this feature of building a feed ID based on content within the app server is going to have drawbacks when you consider it won't work if the app server's deployments could ever change (add deployments, remove deployments, or even just patching existing deployments if the version string is part of the deployment filename [foobar-1.0.war -> foobar-2.0.war]) So this feature would be very restrictive in who should be using it and people should be very aware of it in case they turn that feature on when they shouldn't (because the consequences would be bad - the agent's feed ID would change underneath of it simply when a deployment changed). On the context of Docker, the deployments will "never" change. The appropriate way to deploy a new version of the WAR, or to change a DS is to generate a new image, stop the containers running the old images and start new containers using the new images. > That said, what might be possible (given the restrictions noted above), if we do want to support such a thing, we could allow a new pre-defined value for 's "feed-id" attribute. We already have one - "autogenerate". We could have something like "autogenerate-based-on-deployments". If the agent sees this, it could in theory add a DUP (DeploymentUnitProcessor) to the series of DUPs in the app server. This DUP of ours would pass through the deployments, but before it does it looks at the deployment name and keeps a running sha1 hash, building up the final hash when all deployments are passed through, put it somewhere the agent can access, and that will be the feed ID. (note if the "real" deployer that will handle that deployment actually rejects the deployment, we might have a problem because we'll have hashed on something that isn't really deployed - I don't know if our DUP can see when a deployment is rejected down the chain). s/deployment name/deployment content/ and we are in agreement :) The name of the deployment would possibly always be the same (foobar-service.war), but not its content. - Juca. 
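To make the content-based variant concrete, here is a rough sketch of what deriving such a feed id could look like. This is purely illustrative and not existing agent code: the standalone/deployments location, the sorted-by-filename ordering, and the hex formatting are all assumptions.

    import java.io.InputStream;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.security.MessageDigest;
    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // Hypothetical sketch: build a feed id by sha1-hashing the *content* of every
    // deployment (not its file name), in sorted order so the same image always
    // yields the same id.
    public class ContentBasedFeedId {

        static String feedId(Path deploymentsDir) throws Exception {
            MessageDigest overall = MessageDigest.getInstance("SHA-1");
            List<Path> files = new ArrayList<>();
            try (DirectoryStream<Path> ds = Files.newDirectoryStream(deploymentsDir)) {
                for (Path p : ds) {
                    if (Files.isRegularFile(p)) {
                        files.add(p);
                    }
                }
            }
            files.sort(Comparator.comparing(p -> p.getFileName().toString()));
            byte[] buf = new byte[8192];
            for (Path p : files) {
                try (InputStream in = Files.newInputStream(p)) {
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        overall.update(buf, 0, n); // hash the bytes, not the name
                    }
                }
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : overall.digest()) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            System.out.println(feedId(Paths.get("standalone/deployments")));
        }
    }

Hashing in a fixed order matters: the same set of deployments should always produce the same id, otherwise two containers started from the same image could end up registering different feeds.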
From hrupp at redhat.com Thu Jul 28 10:20:59 2016 From: hrupp at redhat.com (Heiko W.Rupp) Date: Thu, 28 Jul 2016 15:20:59 +0100 Subject: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env In-Reply-To: <090cb013-9cc9-eb7a-cc7d-8321b565d90b@redhat.com> References: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> <23754756.1606277.1467548084344.JavaMail.zimbra@redhat.com> <9B1802A1-517E-47B0-8671-8342A6CC05DD@redhat.com> <05bba1fc-7c5d-ea9b-679f-489da235aa48@redhat.com> <090cb013-9cc9-eb7a-cc7d-8321b565d90b@redhat.com> Message-ID: <00687A69-4370-4338-9441-27F2A6171C49@redhat.com> On 27 Jul 2016, at 17:02, Juraci Paix?o Kr?hling wrote: > Sure: if we assume that the feed-id for an image should remain the > same for all instances of that image (I'm not sure yet that's the > case), then the alternative algorithm could be used when the Agent > starts, based on the checksum for the content of the artifacts found > on standalone/deployments . Example: > > - standalone/deployments/foobar.war > sha1sum: 96866e09c9ade125d9f3d3fc9766ccba5961968e There is one case we need to consider here: we have an image with foobar.war;v1 with a certain sha. Now the user creates a new image with foobar.war;v2. For the user this is logically the same thing but with a different version. We should not treat that as different applications. In this case we would rather need to record the fact that v1 was replaced by v2 (by putting a tag) and continue the already recorded metrics for the same (logical) application. From jkremser at redhat.com Thu Jul 28 10:25:22 2016 From: jkremser at redhat.com (Jiri Kremser) Date: Thu, 28 Jul 2016 16:25:22 +0200 Subject: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env In-Reply-To: <1350220148.9101019.1469638313555.JavaMail.zimbra@redhat.com> References: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> <23754756.1606277.1467548084344.JavaMail.zimbra@redhat.com> <9B1802A1-517E-47B0-8671-8342A6CC05DD@redhat.com> <05bba1fc-7c5d-ea9b-679f-489da235aa48@redhat.com> <090cb013-9cc9-eb7a-cc7d-8321b565d90b@redhat.com> <1350220148.9101019.1469638313555.JavaMail.zimbra@redhat.com> Message-ID: Calculating the feed ID based on discovered resources is going to be very difficult if not impossible with the current implementation - the agent needs the feed ID upfront during startup before it even runs discovery (i.e. before it even knows about any resources - let along child resources like deployments) - it requires the feed ID internally to start many components, it can't talk to hawkular-inventory without it, and it needs the feed ID to connect to the hawkular cmdgw server. Updating the feed name ex post: https://git.io/vKAl2 Calculating the feed ID based on discovered resources... - identity hashes in inventory, Lukas knows On Wed, Jul 27, 2016 at 6:51 PM, John Mazzitelli wrote: > > - feed-id: sha1sum(sha1sum(foobar.war) + sha1sum(foobar-ds.xml)) > > result: a4f36be49f641967e51e2707c1fe6d0a29c2f682 > > Calculating the feed ID based on discovered resources is going to be very > difficult if not impossible with the current implementation - the agent > needs the feed ID upfront during startup before it even runs discovery > (i.e. before it even knows about any resources - let along child resources > like deployments) - it requires the feed ID internally to start many > components, it can't talk to hawkular-inventory without it, and it needs > the feed ID to connect to the hawkular cmdgw server. 
> _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160728/94d8941d/attachment.html From miburman at redhat.com Thu Jul 28 10:29:30 2016 From: miburman at redhat.com (Michael Burman) Date: Thu, 28 Jul 2016 10:29:30 -0400 (EDT) Subject: [Hawkular-dev] Metrics storage usage and compression In-Reply-To: References: <312924081.20160874.1469614771085.JavaMail.zimbra@redhat.com> Message-ID: <241765150.22589497.1469716170440.JavaMail.zimbra@redhat.com> Hi, We can use Cassandra's compression in addition to ours. There's always some metadata in the blocks that could benefit a little bit (but not much). This does not help with string metrics, but they benefit a lot from deflate compression already (and there isn't necessarily much better algorithms than deflate for strings - LZMA2 is _a lot_ slower). For availabilities, we can benefit as there's only the timestamp delta-delta to store (which could be close to 0 bits and value delta is also 0 bits). Multiple metrics cause no differences here, if you fetch 360 rows from 25 metrics (9000 rows) or 25 rows. Each 360 rows equals same performance hit. The only case where we could see some performance issue is comparing single datapoint fetching, but how realistic reading scenario is that - and is it worth optimizing? Nodes joining is certainly one aspect that must be monitored, however a single row isn't that big. For example, when transferring the MiQ data, there are few rows which have ~2800 datapoints. Those take ~1600 bytes. Largest row I could find is ~11kB. Lets say our worst cases are around 5 bytes per stored timestamp, that gives us 8640 datapoints if frequency is 10s and a total of ~43kB. If we stick to the immutability, then we will have uncompressed data lying around for out-of-order data that arrives later than the compression moment. Which would be X time after the blockSize closes. However, given that uncompressed data is slower to read than compressed, do we really want to have that immutability? We can also avoid cells modification and instead use new rows (but which one it is depends of course on the schema): - Assume blockSize of one day, so each compressed block would have a timestamp starting at 00:00:00. - If we query something, we must always use as startTime the 00:00:00 - Now, that allows us to store data in 01:00:00, 02:00:00, etc. they would be found with the same query and we could just merge the 01:00:00, 02:00:00 etc later to a single block of 00:00:00. - At the merge time we could also take all outlier timestamps and merge them to the single block Infinispan sounds like a fine option, I didn't evaluate any technical options yet for that storage. - Micke ----- Original Message ----- From: "John Sanda" To: "Discussions around Hawkular development" Sent: Wednesday, July 27, 2016 10:17:11 PM Subject: Re: [Hawkular-dev] Metrics storage usage and compression > On Jul 27, 2016, at 6:19 AM, Michael Burman wrote: > > Hi, > > Lately there has been some discussion on the AOS scalability lists for our storage usage when used in Openshift. While we can scale, the issue is that some customers do not wish to allocate large amounts of storage for storing metrics, as I assume they view metrics and monitoring as secondary functions (now that's whole another discussion..) 
> > To the numbers, they're predicting that at maximum scale, Hawkular-Metrics would use close to ~4TB of disk for one week of data. This is clearly too much, and we don't deploy any other compression methods currently than LZ4, which according to my tests is quite bad for our data model. So I created a small prototype that reads our current data model, compresses it and stores it to a new data model (and verifies that the returned data equals to sent data). For testing I used a ~55MB extract from the MiQ instance that QE was running. One caveat of course here, the QE instance is not in heavy usage. For following results, I decided to remove COUNTER type of data, as they looked to be "0" in most cases and compression would basically get rid of all of them, giving too rosy picture. Would we use Cassandra?s compression in addition to this or turn it off? Will this compression work with non-numeric data? I am wondering about availability and string metrics. I don?t necessarily see it as a deal breaker if it only handles numeric data because that is mostly what we?re dealing with, and in OpenShift neither availability nor string metrics are currently used. > > When storing to our current data model, the disk space taken by "data" table was 74MB. My prototype uses the method of Facebook's Gorilla paper (same as what for example Prometheus uses), and in this test I used a one day block size (storing one metric's one day data to one row inside Cassandra). The end result was 3,1MB of storage space used. Code can be found from bitbucket.org/burmanm/compress_proto (Golang). I know Prometheus advertises estimated 1.3 bytes per timestamped value, but those numbers require certain sort of test data that does not represent anything I have (the compression scheme's efficiency depends on the timestamp delta and value deltas and delta-deltas). The prototype lacks certain features, for example I want it to encode compression type to the first 1 byte of the header for each row - so we could add more compression types in the future for different workloads - and availabilities would probably have better compression if we changed the disk presentat! > ion to something bit based. > > ** Read performance > > John brought up the first question - now that we store large amount of datapoints in a single row, what happens to our performance when we want to read only some parts of the data? > > - We need to read rows we don't need and then discard those > + We reduce the amount of rows read from the Cassandra (less overhead for driver & server) > + Reduced disk usage means we'll store more of the data in memory caches > > How does this affect the end result? I'll skip the last part of the advantage in my testing now and make sure all the reads for both scenarios are happening from the in-memory SSTables or at least disk cache (the testing machine has enough memory to keep everything in memory). For this scenario I stored 1024 datapoints for a single metric, storing them inside one block of data, thus trying to maximize the impact of unnecessary reads. I'm only interested in the first 360 datapoints. > > In the scenario, our current method requests 360 rows from Cassandra and then processes them. In the compressed mode, we request 1 row (which has 1024 stored metrics) and then filter out those we don't need in the client. 
Results: > > BenchmarkCompressedPartialReadSpeed-4 275371 ns/op > BenchmarkUncompressedPartialReadSpeed-4 1303088 ns/op > > As we can see, filtering on the HWKMETRICS side yields quite a large speedup instead of letting Cassandra to read so many rows (all of the rows were from the same partition in this test). We definitely want to do more testing with reads to understand the impact. We have several endpoints that allow you to fetch data points from multiple metrics. It is great to see these numbers for reading against a single metric. What about 5, 10, 20, etc. metrics? And what about when the query spans multiple blocks? Another thing we?ll need to test is how larger row sizes impact Cassandra streaming operations which happen when nodes join/leave the cluster and during anti-entropy repair. > > ** Storing data > > Next, lets address some issues we're going to face because of the distributed nature of our solution. We have two issues compared to Prometheus for example (I use it as an example as it was used by one Openshift PM) - we let data to arrive out-of-order and we must deal with distributed nature of our data storage. We are also stricter when it comes to syncing to the storage, while Prometheus allows some data to be lost in between the writes. I can get back to optimization targets later. > > For storing the data, to be able to apply this sort of compression to it, we would need to always know the previous stored value. To be able to do this, we would need to do read-write path to the Cassandra and this is exactly one of the weaknesses of Cassandra's design (in performance and consistency). Clearly we need to overcome this issue somehow, while still keeping those properties that let us have our advantages. > > ** First phase of integration > > For the first phase, I would propose that we keep our current data model for short term storage. We would store the data here as it arrives and then later rewrite it to the compressed scheme in different table. For reads we would request data from the both tables and merge the results. This should not be visible to the users at all and it's a simple approach to the issue. A job framework such as the one John develops currently is required. If we know we have the uncompressed data, then we don?t need to read from the compressed table(s). I would expect this to be the case for queries involving recent data, e.g., past 4 hours. > > There are some open questions to this, and I hope some of you have some great ideas I didn't think. Please read the optimization part also if I happened to mention your idea as some future path. > > - How often do we process the data and do we restrict the out-of-order capabilities to certain timeslice? If we would use something like 4 hour blocks as default, should we start compressing rows after one hour of block closing? While we can technically reopen the row and reindex the whole block, it does not make sense to do this too often. If we decide to go with the reindexing scenario, in that case we could start writing the next block before it closes (like every 15 minutes we would re-encode the currently open blocks if they have new incoming data). We have to be careful here as to not overwhelm our processing power and Cassandra's. This is a tradeoff between minimum disk space usage or minimum CPU/memory usage. Query patterns are going to dictate idea block sizes. For a first (and probably second) iteration, I say we go with a reasonable default like a day. 
Does reindexing the row mean updating existing cells on disk? If so, I would like to think about ways that allow us to continue keeping data immutable as we currently do. > > - Compression block size changes. User could configure this - increasing it on the fly is no problem for reads, but reducing is slightly more complex scenario). If user increases the size of the block, our query would just pick some extra rows that are instantly discarded, but nothing would break. However, decreasing the size would confuse our Cassandra reads unless we know the time of the block size change and adjust queries accordingly for times before this event and after. > > ** Optimizing the solution > > The following optimizations would increase the performance of Hawkular-Metrics ingestion rate a lot and as such are probably worth investigation at some point. But they're also complex and I would want to refrain from implementing them in the first phase so that we could get compression quicker to the product - so thta we would not miss certain deadlines. > > - Stop writing to the Cassandra in the first phase. Instead we write to something more ephemeral, such as mmap backed memory cache that is distributed among the Hawkular nodes. It would also need some sort of processing locality (direct the write to the node that controls the hash of the metricId for example - sort of like HBase does), unless we want to employ locks to prevent ordering issues if we encode already in the memory. From memory we would then store blocks to the permanent Cassandra store. The clients need to be token/hash-method aware to send data to the correct node. We ought to look at Infinispan for this. We are already talking about other use cases for Infinispan so it would be nice if it works out. Lastly, this is great stuff! _______________________________________________ hawkular-dev mailing list hawkular-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/hawkular-dev From jpkroehling at redhat.com Thu Jul 28 10:31:49 2016 From: jpkroehling at redhat.com (=?UTF-8?Q?Juraci_Paix=c3=a3o_Kr=c3=b6hling?=) Date: Thu, 28 Jul 2016 16:31:49 +0200 Subject: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env In-Reply-To: <00687A69-4370-4338-9441-27F2A6171C49@redhat.com> References: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> <23754756.1606277.1467548084344.JavaMail.zimbra@redhat.com> <9B1802A1-517E-47B0-8671-8342A6CC05DD@redhat.com> <05bba1fc-7c5d-ea9b-679f-489da235aa48@redhat.com> <090cb013-9cc9-eb7a-cc7d-8321b565d90b@redhat.com> <00687A69-4370-4338-9441-27F2A6171C49@redhat.com> Message-ID: <002d887c-7157-e73b-d75d-236f55785f68@redhat.com> On 28.07.2016 16:20, Heiko W.Rupp wrote: > There is one case we need to consider here: > > we have an image with foobar.war;v1 with a certain sha. > Now the user creates a new image with foobar.war;v2. > For the user this is logically the same thing but with a > different version. > We should not treat that as different applications. > In this case we would rather need to record the fact > that v1 was replaced by v2 (by putting a tag) and > continue the already recorded metrics for the same > (logical) application. Do you mean that the contents of v1 are different of that for v2? If so, only the feed-id will be different. Everything else is as it is today: it's the same case as if the company had deployed a new VM with the v2, router some traffic to it for A/B testing, and shut down the v1 once they are happy with v2. 
If it is important to have consistent feed-ids across the whole life of the application, then I see no other option than to "force" the consumer to specify a feed-id on the image building process. - Juca. From miburman at redhat.com Thu Jul 28 10:42:42 2016 From: miburman at redhat.com (Michael Burman) Date: Thu, 28 Jul 2016 10:42:42 -0400 (EDT) Subject: [Hawkular-dev] Metrics storage usage and compression In-Reply-To: <1501E00E-7821-4A8D-A801-97C4FB25347F@redhat.com> References: <312924081.20160874.1469614771085.JavaMail.zimbra@redhat.com> <1501E00E-7821-4A8D-A801-97C4FB25347F@redhat.com> Message-ID: <853054147.22593882.1469716962535.JavaMail.zimbra@redhat.com> Hi, I managed to delete that data already, so here comes another calculation based on the real values (not theoretical). These come from Cassandra 2.2.8 and the dataset is Gauges of idling MiQ instance Cassandra data.. 2168597 values in 1528 keys: LZ4: 54331865 bytes -> 25.05 bytes per value Deflate: 28491886 bytes -> 13.14 bytes per value LZ4 + Gorilla: 1574659 bytes -> 0.726 bytes per value The last one is too good because the dataset includes a lot of gauges that have "0" as all of their values. In the dataset the Gorilla compression's worst cases are around 5 bytes per value, but as you can see, the 0 valued measurements considerably improve the compression ratio. How realistic measurements these idlings are .. I don't know. Note: Cassandra 3.8 improves these values, especially with LZ4: LZ4 -> ~15 bytes per value Deflate: ~10 bytes per value In this Openshift example, with just Cassandra upgrade + Deflate we would have improved 25 -> 10. - Micke ----- Original Message ----- From: "Heiko W.Rupp" To: "Discussions around Hawkular development" Sent: Thursday, July 28, 2016 4:56:04 PM Subject: Re: [Hawkular-dev] Metrics storage usage and compression On 27 Jul 2016, at 11:19, Michael Burman wrote: > When storing to our current data model, the disk space taken by "data" > table was 74MB. My prototype uses the method of Facebook's Gorilla > paper (same as what for example Prometheus uses), and in this test I > used a one day block size (storing one metric's one day data to one > row inside Cassandra). The end result was 3,1MB of storage space used. > Code can be found from bitbucket.org/burmanm/compress_proto (Golang). > I know Prometheus advertises estimated 1.3 bytes per timestamped > value, but those How many bytes to we currently use per timestamped value? _______________________________________________ hawkular-dev mailing list hawkular-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/hawkular-dev From hrupp at redhat.com Thu Jul 28 11:14:15 2016 From: hrupp at redhat.com (Heiko W.Rupp) Date: Thu, 28 Jul 2016 16:14:15 +0100 Subject: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env In-Reply-To: <002d887c-7157-e73b-d75d-236f55785f68@redhat.com> References: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> <23754756.1606277.1467548084344.JavaMail.zimbra@redhat.com> <9B1802A1-517E-47B0-8671-8342A6CC05DD@redhat.com> <05bba1fc-7c5d-ea9b-679f-489da235aa48@redhat.com> <090cb013-9cc9-eb7a-cc7d-8321b565d90b@redhat.com> <00687A69-4370-4338-9441-27F2A6171C49@redhat.com> <002d887c-7157-e73b-d75d-236f55785f68@redhat.com> Message-ID: <898811F2-3FF7-4119-A327-85B5D02D0BC8@redhat.com> On 28 Jul 2016, at 15:31, Juraci Paix?o Kr?hling wrote: > Do you mean that the contents of v1 are different of that for v2? Yes. 
> If it is important to have consistent feed-ids across the whole life > of the application, then I see no other option than to "force" the > consumer to specify a feed-id on the image building process. I did not say that :) What I said is that for the 'foobar' application we want to have a continuity of the recorded data even if the version changed. In my post some days ago I wrote: > For these user cases we should probably abstract this to the level > of a flock. We want to monitor the size of the flocks and also when > flock members die and are born, but no longer try to identify > individual members. We brand them to denote being part of the flock. We can put labels on the deployments and those labels are reported back into the Hawkular. So even if the feed id changes we can still identify that the server/node belongs to 'foobar'. From mazz at redhat.com Thu Jul 28 11:58:30 2016 From: mazz at redhat.com (John Mazzitelli) Date: Thu, 28 Jul 2016 11:58:30 -0400 (EDT) Subject: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env In-Reply-To: References: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> <23754756.1606277.1467548084344.JavaMail.zimbra@redhat.com> <9B1802A1-517E-47B0-8671-8342A6CC05DD@redhat.com> <05bba1fc-7c5d-ea9b-679f-489da235aa48@redhat.com> <090cb013-9cc9-eb7a-cc7d-8321b565d90b@redhat.com> <1350220148.9101019.1469638313555.JavaMail.zimbra@redhat.com> Message-ID: <480142421.9399181.1469721510003.JavaMail.zimbra@redhat.com> > Calculating the feed ID based on discovered resources... - identity hashes > in inventory, Lukas knows chicken and the egg - agent needs to know its feed ID before it can even put anything into inventory. But the fact an agent can change its feed ID via inventory REST call is interesting. However, that might not fly because its more than Inventory we are worried about. The agent stores things in Hawkular *Metrics* that refer to the (now old) feed ID (like metric IDs and metric tag names and values). I don't like this talk of changing feed IDs at runtime. This goes against everything we talked about and designed when we started. Feeds identify an agent - and a feed ID should never change (at least, that's what we said up until now, and everything is implemented in the agent with that assumption in mind). Allowing an agent to change its feed ID during its lifetime is going to cause problems - probably even problems we haven't thought of yet. IMO, feed IDs need to be locked. An agent, once it determines its feed ID and registers it, should never change that feed ID. From vnguyen at redhat.com Thu Jul 28 12:29:19 2016 From: vnguyen at redhat.com (Viet Nguyen) Date: Thu, 28 Jul 2016 12:29:19 -0400 (EDT) Subject: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env In-Reply-To: <00687A69-4370-4338-9441-27F2A6171C49@redhat.com> References: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> <23754756.1606277.1467548084344.JavaMail.zimbra@redhat.com> <9B1802A1-517E-47B0-8671-8342A6CC05DD@redhat.com> <05bba1fc-7c5d-ea9b-679f-489da235aa48@redhat.com> <090cb013-9cc9-eb7a-cc7d-8321b565d90b@redhat.com> <00687A69-4370-4338-9441-27F2A6171C49@redhat.com> Message-ID: <535916281.7276178.1469723359416.JavaMail.zimbra@redhat.com> Heiko, OpenShift source-to-image (S2I) does what you just describe. The DeploymentConfig ('app' config or cattle breeder if you will) remains static. Very likely that in a production OpenShift setup there will be a persistent storage for the WF instance. 
So regardless of where the WF pod has lived we can still identify its former self. No? Viet ----- Original Message ----- From: "Heiko W.Rupp" To: "Juraci Paix?o Kr?hling" Cc: "Discussions around Hawkular development" Sent: Thursday, July 28, 2016 7:20:59 AM Subject: Re: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env On 27 Jul 2016, at 17:02, Juraci Paix?o Kr?hling wrote: > Sure: if we assume that the feed-id for an image should remain the > same for all instances of that image (I'm not sure yet that's the > case), then the alternative algorithm could be used when the Agent > starts, based on the checksum for the content of the artifacts found > on standalone/deployments . Example: > > - standalone/deployments/foobar.war > sha1sum: 96866e09c9ade125d9f3d3fc9766ccba5961968e There is one case we need to consider here: we have an image with foobar.war;v1 with a certain sha. Now the user creates a new image with foobar.war;v2. For the user this is logically the same thing but with a different version. We should not treat that as different applications. In this case we would rather need to record the fact that v1 was replaced by v2 (by putting a tag) and continue the already recorded metrics for the same (logical) application. _______________________________________________ hawkular-dev mailing list hawkular-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/hawkular-dev From miburman at redhat.com Thu Jul 28 12:34:11 2016 From: miburman at redhat.com (Michael Burman) Date: Thu, 28 Jul 2016 12:34:11 -0400 (EDT) Subject: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env In-Reply-To: <480142421.9399181.1469721510003.JavaMail.zimbra@redhat.com> References: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> <9B1802A1-517E-47B0-8671-8342A6CC05DD@redhat.com> <05bba1fc-7c5d-ea9b-679f-489da235aa48@redhat.com> <090cb013-9cc9-eb7a-cc7d-8321b565d90b@redhat.com> <1350220148.9101019.1469638313555.JavaMail.zimbra@redhat.com> <480142421.9399181.1469721510003.JavaMail.zimbra@redhat.com> Message-ID: <120926100.22659881.1469723651245.JavaMail.zimbra@redhat.com> Hi, Is there any reason the agent can't use container as identification? HOSTNAME="hawkular-cassandra-1-6o1y8" I guess that's exported to all Openshift containers at least (I don't have Kubernetes instance to check). Unfortunately the image id was not there, but that hostname includes the container-name + generated id. If you deploy additional instances, you get a new id, redeploy - new id. - Micke ----- Original Message ----- From: "John Mazzitelli" To: "Discussions around Hawkular development" Sent: Thursday, July 28, 2016 6:58:30 PM Subject: Re: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env > Calculating the feed ID based on discovered resources... - identity hashes > in inventory, Lukas knows chicken and the egg - agent needs to know its feed ID before it can even put anything into inventory. But the fact an agent can change its feed ID via inventory REST call is interesting. However, that might not fly because its more than Inventory we are worried about. The agent stores things in Hawkular *Metrics* that refer to the (now old) feed ID (like metric IDs and metric tag names and values). I don't like this talk of changing feed IDs at runtime. This goes against everything we talked about and designed when we started. 
Feeds identify an agent - and a feed ID should never change (at least, that's what we said up until now, and everything is implemented in the agent with that assumption in mind). Allowing an agent to change its feed ID during its lifetime is going to cause problems - probably even problems we haven't thought of yet. IMO, feed IDs need to be locked. An agent, once it determines its feed ID and registers it, should never change that feed ID. _______________________________________________ hawkular-dev mailing list hawkular-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/hawkular-dev From mazz at redhat.com Thu Jul 28 13:08:05 2016 From: mazz at redhat.com (John Mazzitelli) Date: Thu, 28 Jul 2016 13:08:05 -0400 (EDT) Subject: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env In-Reply-To: <120926100.22659881.1469723651245.JavaMail.zimbra@redhat.com> References: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> <05bba1fc-7c5d-ea9b-679f-489da235aa48@redhat.com> <090cb013-9cc9-eb7a-cc7d-8321b565d90b@redhat.com> <1350220148.9101019.1469638313555.JavaMail.zimbra@redhat.com> <480142421.9399181.1469721510003.JavaMail.zimbra@redhat.com> <120926100.22659881.1469723651245.JavaMail.zimbra@redhat.com> Message-ID: <1415357893.9411072.1469725685273.JavaMail.zimbra@redhat.com> > Is there any reason the agent can't use container as identification? > > HOSTNAME="hawkular-cassandra-1-6o1y8" Things like that would work well. We could even allow the agent to support notation like this: or whatever syntax makes sense From anuj1708 at gmail.com Fri Jul 29 03:23:19 2016 From: anuj1708 at gmail.com (Anuj Garg) Date: Fri, 29 Jul 2016 12:53:19 +0530 Subject: [Hawkular-dev] Approach for firebase push notification Message-ID: Hello all, I was not sure whether we are going to provide the APK of the Hawkular Android client through the Google Play store or whether users will have to compile it themselves. I assume it will be the Play store case. And if that is the case then clients cannot use their own Google account to set up push notifications for alerts, as the configuration file needs to be inside the APK. I suggest that Hawkular provide one Firebase account for this and all the Hawkular servers use the same one. With the workflow I suggest, there is no longer a need to set up a unified push server to provide notifications. Steps: - With any user creation on any Hawkular server, a 32-byte ID is created that we can assume to be unique. - Any client that signs in as that user will retrieve that string and register to it as a topic subscription. - Whenever a new alert is created, it will fire an HTTP request to Firebase with the unique ID as the topic and the server key built into Hawkular. - The rest of the work of handling the received alert will be done on the client side. Please write your views on this. Thanks Anuj Garg -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160729/8b3c6968/attachment-0001.html From hrupp at redhat.com Fri Jul 29 03:52:55 2016 From: hrupp at redhat.com (Heiko W.Rupp) Date: Fri, 29 Jul 2016 09:52:55 +0200 Subject: [Hawkular-dev] Hawkular Inventory 0.17.3.Final Released In-Reply-To: <5464828.z9cvTtq8N7@localhost.localdomain> References: <5464828.z9cvTtq8N7@localhost.localdomain> Message-ID: Great. Can you please open a PR for Hawkular-services? So that we can include that in the next release?
Thanks Heiko On 27 Jul 2016, at 22:15, Lukas Krejci wrote: > Hi all, > > I am happy to announce the release of Hawkular Inventory 0.17.3.Final. > > There are no new features but a couple of important bugfixes: > * transactions are now properly handled during > /hawkular/inventory/traversal. > It should no longer happen that a traversal would return stale data. > * [HAWKULAR-1099] transaction retries actually work with Titan now > (Titan > closes a transaction on failure while Inventory tried to rollback a > failed > transaction, leading Titan to complain about trying to work with a > closed > transaction) > * update to /traversal queries to handle /traversal/recursive (which > would > give you all the entities in inventory, so don't do that ;) ) > > -- > Lukas Krejci > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev -- Reg. Adresse: Red Hat GmbH, Technopark II, Haus C, Werner-von-Siemens-Ring 14, D-85630 Grasbrunn Handelsregister: Amtsgericht München HRB 153243 Geschäftsführer: Charles Cachera, Michael Cunningham, Michael O'Neill, Eric Shander From jshaughn at redhat.com Fri Jul 29 14:09:02 2016 From: jshaughn at redhat.com (Jay Shaughnessy) Date: Fri, 29 Jul 2016 14:09:02 -0400 Subject: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env In-Reply-To: <1415357893.9411072.1469725685273.JavaMail.zimbra@redhat.com> References: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> <05bba1fc-7c5d-ea9b-679f-489da235aa48@redhat.com> <090cb013-9cc9-eb7a-cc7d-8321b565d90b@redhat.com> <1350220148.9101019.1469638313555.JavaMail.zimbra@redhat.com> <480142421.9399181.1469721510003.JavaMail.zimbra@redhat.com> <120926100.22659881.1469723651245.JavaMail.zimbra@redhat.com> <1415357893.9411072.1469725685273.JavaMail.zimbra@redhat.com> Message-ID: Are multiple servers/agents per container a possibility? If so we'd need to append a differentiator to the container identifier. Also, with respect to a feedid based on deployments, isn't it possible that multiple containers run the same deployments for load-balancing purposes? Wouldn't that result in multiple agents running with the same feedid? Anyway, it seems if we could derive a feedid from the container, with perhaps a new token like autogenerate-from-container, that could be a nice way to easily identify a feed for a container. This doesn't solve the issue of replacing a stopped container with a new container, thus generating new inventory and orphaning the existing inventory. But in that situation I think we may just want to look for a decent way to kill that inventory, like TTL or something. On 7/28/2016 1:08 PM, John Mazzitelli wrote: >> Is there any reason the agent can't use container as identification? >> >> HOSTNAME="hawkular-cassandra-1-6o1y8" > Things like that would work well. > > We could even allow the agent to support notation like this: > > > > or whatever syntax makes sense > > _______________________________________________ > hawkular-dev mailing list > hawkular-dev at lists.jboss.org > https://lists.jboss.org/mailman/listinfo/hawkular-dev > -------------- next part -------------- An HTML attachment was scrubbed...
URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160729/889a5e09/attachment.html From miburman at redhat.com Fri Jul 29 14:22:29 2016 From: miburman at redhat.com (Michael Burman) Date: Fri, 29 Jul 2016 14:22:29 -0400 (EDT) Subject: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env In-Reply-To: References: <40BEEBC6-6D73-4BFA-90C2-86D4D7EA0B5C@redhat.com> <090cb013-9cc9-eb7a-cc7d-8321b565d90b@redhat.com> <1350220148.9101019.1469638313555.JavaMail.zimbra@redhat.com> <480142421.9399181.1469721510003.JavaMail.zimbra@redhat.com> <120926100.22659881.1469723651245.JavaMail.zimbra@redhat.com> <1415357893.9411072.1469725685273.JavaMail.zimbra@redhat.com> Message-ID: <247696944.23501272.1469816549406.JavaMail.zimbra@redhat.com> Hi, Running multiple servers per pod sounds like a defeat of the whole idea of docker ;). If you run multiple instances of this same pod, each one will have a different hostname (the hawkular-cassandra would stay the same, but the part after it would change). - Micke ----- Original Message ----- From: "Jay Shaughnessy" To: hawkular-dev at lists.jboss.org Sent: Friday, July 29, 2016 9:09:02 PM Subject: Re: [Hawkular-dev] Identification of WildFly in container in a Kube/Openshift env Are multiple servers/agents per container a possibility? If so we'd need to append a differentiator to the container identifier. Also, with respect to a feedid based on deployments, isn't it possible that multiple containers run the same deployments for load-balancing purposes? Wouldn't that result in multiple agents running with the same feedid? Anyway, It seems if we could derive a feedid from the container, with perhaps a new token like autogenerate-from-container, that could be a nice way to easily identify a feed for a container. This doesn't solve the issue of replacing a stopped container with a new container, thus generating new inventory and orphaning the existing inventory. But in that situation I think we may just want to look for a decent way to kill that inventory, like TTL or something. On 7/28/2016 1:08 PM, John Mazzitelli wrote: Is there any reason the agent can't use container as identification? HOSTNAME="hawkular-cassandra-1-6o1y8" Things like that would work well. We could even allow the agent to support notation like this: or whatever syntax makes sense _______________________________________________ hawkular-dev mailing list hawkular-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/hawkular-dev _______________________________________________ hawkular-dev mailing list hawkular-dev at lists.jboss.org https://lists.jboss.org/mailman/listinfo/hawkular-dev From mazz at redhat.com Fri Jul 29 17:24:53 2016 From: mazz at redhat.com (John Mazzitelli) Date: Fri, 29 Jul 2016 17:24:53 -0400 (EDT) Subject: [Hawkular-dev] Agent 0.20.1.Final released Message-ID: <1636818687.9774142.1469827493307.JavaMail.zimbra@redhat.com> The Hawkular WildFly Agent 0.20.1.Final has been released. Should be available from Nexus. From auszon3 at gmail.com Sun Jul 31 10:23:41 2016 From: auszon3 at gmail.com (Austin Kuo) Date: Sun, 31 Jul 2016 14:23:41 +0000 Subject: [Hawkular-dev] Local lock contention when creating many resourceTypes concurrently Message-ID: Hi, I was trying to create about 6 resource types at the same time. But one of the response is: { "errorMsg" : "Local lock contention" } Is there a limitation of the number of resourceTypes which can be created at a time? 
Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.jboss.org/pipermail/hawkular-dev/attachments/20160731/b570a3a3/attachment.html
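Until the locking behaviour here is clarified, one client-side workaround is to create the resource types one at a time and retry on contention instead of firing all of the requests concurrently. A hedged sketch of that idea follows - it assumes the "Local lock contention" response is transient and safe to retry, which the inventory team would need to confirm, and the actual create call is left as a placeholder:

    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.Callable;

    // Hypothetical workaround sketch: submit creations sequentially and retry a
    // few times with a small backoff when the server reports contention.
    public class SerializedCreates {

        static void createWithRetry(Callable<Integer> create, int maxAttempts) throws Exception {
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                int status = create.call();        // e.g. an HTTP POST returning the status code
                if (status < 400) {
                    return;                        // created successfully
                }
                System.out.println("attempt " + attempt + " failed with " + status + ", retrying");
                Thread.sleep(200L * attempt);      // simple linear backoff
            }
            throw new IllegalStateException("gave up after " + maxAttempts + " attempts");
        }

        public static void main(String[] args) throws Exception {
            List<String> typeIds = Arrays.asList("rt-1", "rt-2", "rt-3", "rt-4", "rt-5", "rt-6");
            for (String id : typeIds) {
                createWithRetry(() -> {
                    // placeholder: call the inventory REST API here and return the HTTP status
                    System.out.println("creating resourceType " + id);
                    return 201;
                }, 3);
            }
        }
    }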