[GSoC] Hawkular Android Client
by Artur Dryomov
Hi everyone,
This year I will be working on the Hawkular Android client application as
part of the Google Summer of Code 2015 program.
The application itself will use the Hawkular API and the AeroGear SDK. In the coming
days I'll research these areas, especially the documentation, and will try to
come up with an initial architecture and basic design.
Thank you all for this opportunity!
Artur.
Hawkular Metrics Openshift Containers
by Matt Wringe
I have a new subproject in Hawkular Metrics which creates components for
OpenShift/Fabric8
(https://github.com/hawkular/hawkular-metrics/pull/200).
There are three main parts:
- Cassandra: creates a custom seed provider to support
ReplicationControllers in Kubernetes, and creates a folder/zip archive which
can be used to generate a Docker image. It may make sense to move the
Cassandra parts out to a separate project.
- Hawkular Metrics: creates a folder/zip archive which can be used to
generate a Docker image.
- Kubernetes: pulls everything together into a single Kubernetes
application. It can be used to deploy an application zip into Fabric8 (via
drag and drop in the web console or via the Maven plugin) or to deploy all
the components into OpenShift via the kubernetes.json configuration file
(see the example below).
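For example, deploying the generated configuration into OpenShift could look
roughly like this (the path to kubernetes.json depends on the build layout,
so treat it as a placeholder):

  # apply the generated Kubernetes configuration to OpenShift
  oc create -f target/classes/kubernetes.json

  # or against a plain Kubernetes cluster
  kubectl create -f target/classes/kubernetes.json

  # or push it into Fabric8 via the maven plugin (goal name depends on the plugin version)
  mvn fabric8:apply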
The Docker images are not created and pushed to a Docker registry as
part of the build; it just creates a folder where you can run the
docker build from. None of the Maven Docker plugins I looked at seemed
to really work properly, so it's still a manual process to do the build
(and push to a registry). It's something which needs to be improved.
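To be clear, the manual process is roughly the following (the directory and
image names are just an illustration, not the exact output of the build):

  cd target/docker   # wherever the build drops the Dockerfile and archive
  docker build -t hawkular/hawkular-metrics .
  docker push hawkular/hawkular-metrics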
The Cassandra service currently only supports adding new nodes to a
cluster, not removing them via the ReplicationController. This is due
to the replication factor being set to 1 by default (which means that when
a node is removed, so is the data it contained).
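If someone wants to experiment with a higher replication factor manually,
something along these lines should work (the keyspace name is an assumption,
adjust it to whatever the deployment uses):

  cqlsh -e "ALTER KEYSPACE hawkular_metrics WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};"
  # stream existing data to the new replicas
  nodetool repair hawkular_metrics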
I believe the docker subproject of Hawkular Metrics is obsolete and can
be removed
(https://github.com/hawkular/hawkular-metrics/tree/master/docker), but
someone please correct me if I am wrong. Its scripts refer to
the console, which no longer exists as part of the project.
- Matt
Tenant Id - Not Part of URL
by Stefan Negrea
Hello Everybody,
I've been working on a PR for the upcoming Hawkular Metrics release that will remove the tenant id from the end-point URLs. The tenant id will be moved to either a header parameter or a query parameter. The query parameter is in place for cases (such as curl) where setting a header is not possible, difficult, or inconvenient.
Here is an example of the change:
Existing URL:
/{tenantId}/gauge/{metricId}/data
New URL:
/gauge/{metricId}/data
Tenant id set via:
1) header - tenantId
2) query parameter - tenantId
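For example, with curl (host, port, and metric name are just placeholders,
and the context path is omitted):

  # tenant id as a header
  curl -H "tenantId: my-tenant" "http://localhost:8080/gauge/request.count/data"

  # tenant id as a query parameter
  curl "http://localhost:8080/gauge/request.count/data?tenantId=my-tenant"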
There are two exceptions to this rule, /tenants and /db/{tenantId}/series. The /tenants end-point will be changed into something different in the upcoming releases since it is mostly a management-type API that does not belong in the same place as the regular metrics endpoints. The /db/{tenantId}/series end-point is needed in this exact format for compatibility with InfluxDB-compatible services.
Now, to the merits of this change. The first issue is that the tenant id is volatile: it can change at any time and changes to it should be expected, while the rest of the URL is fixed. The second issue is that the tenant id is a security concern, so we were limited in design choices as long as a security concern was leaking as part of the URL.
So removing the tenant id from the URL will give us permanent & consistent addresses for resources (metrics and metric data points), and we will gain a lot of flexibility on the security side. In the future, users could authenticate with a user/pass combo and the backend would transform that into a tenant id to be used on the request. If the same user later decides to pass a tenant id along with the request, the URLs of the resources would not change. Another expectation is that a tenant id alone is not sufficient; it is typically a combo of id + secret, so we would have resorted to a header or query param for the second piece of information (the secret) anyway.
This change will give us the flexibility to adjust the security model (the meaning of tenant ids and ways to validate them) without compromising the URL structure. This will help Hawkular Metrics as it gets integrated into more and more projects and products.
Here are the links to the JIRA and the PR for this change:
https://github.com/hawkular/hawkular-metrics/pull/202
https://issues.jboss.org/browse/HWKMETRICS-68
Thank you,
Stefan Negrea
Software Engineer
New and noteworthy in hawkular-parent 25
by Peter Palaga
Hi *,
hawkular-parent 25 brings the following:
* srcdeps-maven-plugin 0.0.5
* delivers on the promises falsely made for 0.0.4:
* less console output
* built without tests
* wildfly-maven-plugin 1.1.0.Alpha4
I have sent PRs to all component repos.
Thanks,
Peter
OpenShift Deployment
by Matthew Wringe
With the changes that are now going to include Prometheus, how do we want
to deploy this in OpenShift?
We can have a few options:
ALL-IN-ONE CONTAINER
We put both Hawkular Services and Prometheus in the same container.
Pros:
- easy to deploy in plain Docker (but this doesn't appear to be a use case
we are targeting anyway)
- shares the same network connection (even localhost) and IP address
(though the two services are on different ports)
- doesn't require any special wiring of components
- can share the same volume mount
- versions of the components can't get out of sync
Cons:
- workflow doesn't work nicely. Docker containers are meant to run only a
single application, and running two can cause problems, e.g. lifecycle events
would become tricky and require some hacks to get around.
- can't independently deploy things
- can't reuse or share any existing Prometheus docker containers.
ALL-IN-ONE POD
Hawkular Services and Prometheus are in their own containers, but they are
both deployed within the same pod (a rough sketch of what this could look
like follows the pros/cons below).
Pros:
- shares the same network connection.
- bound to the same machine (useful if sharing the same hostPath PV) and
don't need to worry about external network configurations (e.g. firewalls
between OpenShift nodes)
- PVs can be shared or separate.
- lifecycle events will work properly.
Cons:
- lifecycle hooks mean that both containers will have to pass before
either one enters the ready state. So if Prometheus is failing for some
reason, Hawkular Services will not be available under the service.
- cannot independently update one container. If we need to deploy a new
container we will need to bring down the whole pod.
- we are stuck with a 1:1 ratio between Hawkular Services and Prometheus
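To make the all-in-one pod option a bit more concrete, here is a minimal
sketch of such a pod (image names, ports and the shared volume are
assumptions for illustration only):

oc create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hawkular-services
spec:
  containers:
  - name: hawkular-services
    image: hawkular/hawkular-services
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: shared-data
      mountPath: /shared
  - name: prometheus
    image: prom/prometheus
    ports:
    - containerPort: 9090
    volumeMounts:
    - name: shared-data
      mountPath: /shared
  volumes:
  - name: shared-data
    emptyDir: {}
EOF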
SEPARATE PODS
Hawkular Services and Prometheus have their own separate pods.
Pros:
- can independently run components and each component has its own separate
lifecycle
- if in the future we want to cluster Hawkular Services, this will make it
a lot easier and will also allow for running an n:m ratio between Hawkular
Services and Prometheus
- probably the more 'correct' way to deploy things as we don't have a
strong requirement for Hawkular Services and Prometheus to run together.
Cons:
- more complex wiring. We will need to have extra services and routes
created to handle this (a rough sketch follows below). This means more
things running, more chances for things to go wrong, and more things to
configure.
- reusing a PV between Hawkular Services and Prometheus could be more
challenging (especially if we are using hostPath PVs). Updating the
Prometheus scrape endpoint may require a new component and container.
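As a rough illustration of the extra wiring for the separate-pods option
(the dc names are made up):

  # a service per component so they can find each other inside the cluster
  oc expose dc/hawkular-services --port=8080
  oc expose dc/prometheus --port=9090
  # a route if Hawkular Services needs to be reachable from outside the cluster
  oc expose svc/hawkular-services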
Can ManageIQ product/live_metrics be moved into manageiq-providers-hawkular?
by Lucas Ponce
Hi,
In the context of the HAWKULAR-1275 tasks, I think moving those config
files into manageiq-providers-hawkular may make sense, as I will probably
need to split them per type (EAP6 might have different metrics than EAP7,
for example).
I guess it shouldn't be a problem, as our provider is the only user of these files.
Does anyone see any issue if I make this change?
Thanks,
Lucas
Metric endpoint access
by Matthew Wringe
For Hawkular Services, we want to be able to handle monitoring EAP
instances no matter where they are running.
So we could have some EAP instances running on bare metal, in a VM,
as Docker containers somewhere, or in various OpenShift or
Kubernetes clusters.
For bare metal and VM instances, this should be similar to how we have
handled them in the past.
For OpenShift or Kubernetes, I am not sure we have figured out how this
should function, particularly with metric endpoints that need to be
accessed from outside of the OpenShift cluster.
If we are running Hawkular Services in an OpenShift cluster and monitoring
EAP pods within that cluster, by default Hawkular Services should be able
to communicate with all the EAP pods in the cluster by their IP address, so
this is not much of an issue.
But if the ovs-multitenant SDN plugin is enabled instead, then only pods
within the same project can communicate with each other. So if we are
running Hawkular Services in one project, we cannot reach the metric
endpoints of EAP instances running in another project. Running Hawkular
Services in the 'default' project (VNID 0) gives it special privileges to
read from any pod, but this also means that only admins will be able to
install this.
There is also the new ovs-networkpolicy plugin, which allows for Kubernetes
network policy. And this may further limit communication between pods.
If we move Hawkular Services outside of the OpenShift cluster, then this
can get tricky and I don't know what we can really do here. Even if we were
to have Hawkular Services run with the same network setup as OpenShift (so
it can access the pod endpoints) I don't think we can do this with multiple
OpenShift instances.
Normally, if you want to expose something outside of an OpenShift cluster,
you would do so using a route. But this is not going to work for individual
pods in a replica set.
There is also the API proxy that could be used to access individual pod
endpoints, but I think this could cause a performance problem. And the
agent may not know the endpoint to tell p8s to start scraping from.
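For reference, by the API proxy I mean something like this (the master URL,
project, pod name and port are placeholders):

  TOKEN=$(oc whoami -t)
  curl -k -H "Authorization: Bearer $TOKEN" "https://openshift.example.com:8443/api/v1/namespaces/myproject/pods/eap-app-1-abcde:9779/proxy/metrics"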
Has anyone started to look into this yet?