[hawkular-alerts] Integration with Eureka
by Ashutosh Dhundhara
Hi All,
Is it possible to track the UP/DOWN status of micro-services registered with
Eureka and fire email notifications if a service goes down? If yes, please
point me in the right direction.
--
Regards,
Ashutosh Dhundhara
9 months
hawkular.org - Redux Initiative
by Stefan Negrea
Hello Everybody,
I would like to kickstart the "Redux Initiative" for the Hawkular website.
The goal is to refocus the message and information presented on the website
onto the core active Hawkular projects and nothing more.
I created a document that outlines the plan ("How" section), the
modifications ("Changes" section), and a visual map of the initial cleanup
stage ("Visual Changes" section). This document was created with feedback
from Hawkular contributors and was triggered by a very insightful
interaction with a manager from Open Innovation Lab.
How to participate? The "How" section of the document outlines the big
plan. The first step is to reach consensus on the changes. Please read the
document, comment on the proposal, and participate in the discussions that
might spark here.
The document:
https://docs.google.com/a/redhat.com/document/d/1XsB_DzMjjG7QjJw-kX41PeuU...
Thank you,
Stefan Negrea
7 years, 3 months
Re: [Hawkular-dev] Dynamic UI PoC Presentation
by Caina Costa
On Mon, Jul 17, 2017 at 9:39 AM, Alissa Bonas <abonas(a)redhat.com> wrote:
> 1. a diagram that shows an example of architecture components and data
> format of entities (json/code/configuration) will help to understand the
> proposal.
>
This is an example of an Entity hierarchy: it shows what kind of entities
we can create, as well as other patterns that we can use. The farther a
key is from Entity, the more specialized it is, which means that the
.applicable? method on those entities is a lot pickier about what kind of
data it matches. Views follow the same hierarchy, the engine matches the
same way, and the first defined view is used. So let's say:
If we have a WildFlyDomainControllerServer to render, it will first try to
find WildFlyDomainControllerServerView, then WildFlyDomainServerView, then
WildFlyServerView, then WildFlyServerBaseView, then ServerView. That means
that adding new entities is not going to break the views being used.
Also, WildFlyServerBase is an abstract entity, in the sense that it only
provides implementation and is not meant to be matched. This can be achieved
by making .applicable? return false. Entities are just normal Ruby
objects; there is nothing special about them, they just need to respond to
the .applicable? method and accept one argument in their initializer.
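The matching and view-fallback behavior described above can be sketched in plain Ruby. The names Entity, .applicable?, Entity.constantize and View.for come from this thread; everything else here is a hypothetical illustration of the described behavior, not the actual PoC code:

```ruby
# Illustrative sketch only: entity classes answer .applicable?, the runtime
# picks the most specialized match, and views resolve by walking the
# entity's ancestry until a defined view class is found.

class Entity
  attr_reader :data

  def initialize(data)
    @data = data
  end

  # Abstract entities opt out of matching by returning false.
  def self.applicable?(_data)
    false
  end

  # Pick the most specialized entity class whose .applicable? accepts the
  # data; deeper subclasses are tried before their parents.
  def self.constantize(data)
    subclasses = ObjectSpace.each_object(Class).select { |c| c < Entity }
    match = subclasses.sort_by { |c| -c.ancestors.size }
                      .find { |c| c.applicable?(data) }
    (match || Entity).new(data)
  end
end

class View
  # Resolve a view by walking the entity's ancestry: for a WildFlyServer
  # entity, try WildFlyServerView, then MiddlewareServerView, and so on,
  # using the first view class that is actually defined.
  def self.for(entity)
    entity.class.ancestors.each do |klass|
      name = "#{klass.name}View"
      return Object.const_get(name).new(entity) if Object.const_defined?(name)
      break if klass == Entity
    end
    nil
  end

  def initialize(entity)
    @entity = entity
  end
end

class MiddlewareServer < Entity
  def self.applicable?(data)
    data.key?("summary")
  end
end

class WildFlyServer < MiddlewareServer
  def self.applicable?(data)
    data["product"] == "WildFly"
  end
end

# Only the generic view exists; WildFlyServer falls back to it.
class MiddlewareServerView < View; end

entity = Entity.constantize({ "summary" => "s", "product" => "WildFly" })
entity.class     # => WildFlyServer
View.for(entity) # an instance of MiddlewareServerView
```

Adding a new, more specialized entity class then changes nothing for callers: it simply wins the match when its .applicable? accepts the data, and views keep falling back along the hierarchy.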
> What I mean is - which component should define/work with which format for
> the example entities? what should be defined on hawkular side, what is
> fetched and defined in ruby gem, what parts are defined and how they are
> processed in miq ui, what parts in miq server + how does that work (or not
> :)) with persisting objects in miq, etc. Can/should 2 completely different
> entities (such as middleware server and a deployment) use your proposal,
> given that they might have some similar common fields? (for example,
> "name", "status")
>
Entities define the "canonical truth" of the responses from the server, and
views define how to represent them as JSON. They don't tackle how we're
going to present data, nor how to fetch it.
To persist data in MiQ: this PoC does not take any action on persisting
stuff, it only cares about representation. To do that, we just need a new
JSON field on every middleware table to save the response from the server,
and then we can use it. Something like this:
entity = Entity.constantize(MiddlewareServer.first.data)
render json: View.for(entity)
>
> 2. I noticed that the discussion moved to jbmc list although it originated
> in hawkular-dev. the tech discussion is definitely more suitable in
> hawkular-dev as a continue to the original thread on the topic.
>
>
>
> On Mon, Jul 17, 2017 at 3:27 PM, Caina Costa <cacosta(a)redhat.com> wrote:
>
>> Yes, that's exactly what it does, with some caveats: we have a hierarchy
>> of server types, as in, we first have to implement a generic server type
>> that implements the default attributes common to all the servers. From
>> there, we can subclass for more specific server types, and the view/entity
>> runtime takes care of matching the Hawkular data with the defined
>> entities. So let's say we have a hierarchy like this:
>>
>> MiddlewareServer > WildFlyServer > WildFly11Server
>>
>> For this example, let's say that MiddlewareServer includes only the
>> summary, WildFlyServer includes metrics, and WildFly11Server includes power
>> operations.
>>
>> When matching the data with Entity.constantize(data), we match the most
>> specialized server first, so WildFly11Server, then WildFlyServer, then
>> the more generic MiddlewareServer. This is automatic in the runtime, and if
>> we add new server types, it will try to match in the reverse of the order
>> provided: first the most specific, then moving on to less specific
>> entities.
>>
>> So, in summary:
>>
>> It does enable us to add new server types with no code change on the
>> ManageIQ side, by providing more generic interfaces that we can match upon,
>> which means that while we might not have all information by default, we
>> will have a representation that makes sense. It also enables us to extend
>> server types easily with small changes.
>>
>>
>>
>> On Mon, Jul 17, 2017 at 8:31 AM, Thomas Heute <theute(a)redhat.com> wrote:
>>
>>> I just watched the recording; it was not clear to me what benefits it
>>> brings (or if it's just internal).
>>>
>>> I was hoping to see how one can add a new server type with no code
>>> change on the ManageIQ side, maybe I misinterpreted the current work.
>>>
>>> Can you explain ?
>>>
>>> Thomas
>>>
>>> On Thu, Jul 13, 2017 at 6:13 PM, Caina Costa <cacosta(a)redhat.com> wrote:
>>>
>>>> Hello guys,
>>>>
>>>> Thank you all for joining the presentation, lots of great questions!
>>>> For those of you that could not join, here's the recording:
>>>>
>>>> https://bluejeans.com/s/hnR7@/
>>>>
>>>> And the slides are attached.
>>>>
>>>> As always, if you have any questions, please do not hesitate to get in
>>>> touch. I'm available on IRC and e-mail.
>>>>
>>>
>>>
>>
>
7 years, 3 months
New Hawkular Blog Post: OpenTracing EJB instrumentation
by Thomas Heute
New Hawkular blog post from noreply(a)hawkular.org (Juraci Paixão Kröhling): http://ift.tt/2u3KYMP
OpenTracing features more and more framework integrations, allowing for transparent instrumentation of applications with minimal effort. This blog post will show how to use the EJB instrumentation to automatically trace EJB invocations.
For this demo, we’ll generate a project using the WildFly Swarm project generator, which allows us to have a seed project with the appropriate OpenTracing support in place. The concrete OpenTracing solution we will use is provided by the Jaeger project, which is also available as a WildFly Swarm Fraction.
With that, we’ll create a simple JAX-RS endpoint with an EJB facet, invoking a set of EJB services in different ways to demonstrate all the features of this integration.
Our application has one endpoint called /order, responsible for receiving requests to place orders in our system. When we call this endpoint, we also call some other EJB services, like
AccountService to send a notification that an order has been placed
OrderService to effectively place the order
InventoryService to change our inventory
InventoryNotificationService to notify other backend systems.
As we are only interested in the tracing parts, we’ll not implement the business code itself, only the scaffolding.
Our demo project is heavily based on the opentracing-ejb-example from the repository opentracing-contrib/java-ejb. We have also prepared an archive with the final outcome of this demo, which you can use as reference.
The seed project
To generate the seed project, open the WildFly Swarm generator and create a project with the "Group ID" io.opentracing.contrib.ejb and "Artifact ID" demo-example. Add the dependencies EJB, CDI, JAX-RS, OpenTracing, and Jaeger.
Make sure to select all listed dependencies and confirm that they are shown in the "Selected dependencies" section; otherwise, you might not have all the required fractions for this demo.
Click on Generate Project and you’ll get a ZIP file with the seed project. Uncompress it and add the following dependency to the pom.xml, within the dependencies node and after the WildFly Swarm Fractions dependencies:
<dependency>
<groupId>io.opentracing.contrib</groupId>
<artifactId>opentracing-ejb</artifactId>
<version>0.0.2</version>
</dependency>
It’s now a good time to perform a sanity build, to make sure everything is in place. The first build might take a few minutes:
$ mvn wildfly-swarm:run
...
...
2017-07-26 12:02:19,154 INFO [org.wildfly.swarm] (main) WFSWARM99999: WildFly Swarm is Ready
If it looks good, stop the server with Ctrl+C and let’s start coding our application!
The application
Let’s start by defining a JAX-RS endpoint that also acts as a stateless EJB. This is a common trick to get JAX-RS endpoints managed as EJBs, so that they can be invoked via JMX or gain monitoring features. Or, in our case, get traced via EJB interceptors.
This endpoint is where we get our HTTP requests from and where our transaction starts, from the tracing perspective. Once we receive an HTTP request, we call the AccountService#sendNotification method and then the OrderService#processOrderPlacement.
Note that we annotate the class with @Interceptors(OpenTracingInterceptor.class), which means that all methods on this class are to be traced.
src/main/java/io/opentracing/contrib/ejb/demoexample/Endpoint.java:
package io.opentracing.contrib.ejb.demoexample;
import io.opentracing.contrib.ejb.OpenTracingInterceptor;
import javax.ejb.Stateless;
import javax.inject.Inject;
import javax.interceptor.Interceptors;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import java.util.logging.Logger;
/**
* This is a regular JAX-RS endpoint with EJB capabilities. We use the EJB capability to specify an interceptor,
* so that every method on this class is wrapped on its own span. If the OpenTracing JAX-RS integration is being used,
* it would be a good idea to not have the interceptor at this level, to avoid having too much "noise".
*
* @author Juraci Paixão Kröhling
*/
@Path("/order")
@Stateless
@Interceptors(OpenTracingInterceptor.class)
public class Endpoint {
private static final Logger log = Logger.getLogger(Endpoint.class.getName());
@Inject
AccountService accountService;
@Inject
OrderService orderService;
@POST
@Path("/")
public String placeOrder() {
log.info("Request received to place an order");
accountService.sendNotification();
orderService.processOrderPlacement();
return "Order placed";
}
}
Our AccountService is a simple stateless EJB, responsible for sending a notification about the new order to the owner of the account. Here, we could call another service, or send an email, SMS or any other form of message.
As this is a regular EJB, we are able to automatically join the span context from the JAX-RS endpoint, making this call a child span of the main transaction. This is all transparent to you as a developer.
Note again that we annotate the bean with @Interceptors(OpenTracingInterceptor.class). As our interceptor is just like any other EJB interceptor, you could use an ejb-jar.xml to automatically apply this interceptor to all available beans. Whether or not to trace all beans is a per-deployment decision, so no ejb-jar.xml is provided by the integration.
src/main/java/io/opentracing/contrib/ejb/demoexample/AccountService.java:
package io.opentracing.contrib.ejb.demoexample;
import io.opentracing.contrib.ejb.OpenTracingInterceptor;
import javax.ejb.Stateless;
import javax.interceptor.Interceptors;
import java.util.logging.Logger;
/**
* This is a simple synchronous EJB, without any knowledge about span context or other OpenTracing semantics. All it
* does is specify an interceptor and it's shown as the child of a parent span.
*
* @author Juraci Paixão Kröhling
*/
@Stateless
@Interceptors(OpenTracingInterceptor.class)
public class AccountService {
private static final Logger log = Logger.getLogger(AccountService.class.getName());
public void sendNotification() {
log.info("Notifying the account owner about a new order");
}
}
Our OrderService is responsible for actually placing the order: it’s where the business knowledge resides. We’ll later look at the InventoryService in detail, but for now, we need to know that this service requires a SpanContext to be passed explicitly. We can get this context from the EJBContext, stored under a context data entry that can be retrieved with the constant io.opentracing.contrib.ejb.OpenTracingInterceptor.SPAN_CONTEXT.
src/main/java/io/opentracing/contrib/ejb/demoexample/OrderService.java:
package io.opentracing.contrib.ejb.demoexample;
import io.opentracing.SpanContext;
import io.opentracing.contrib.ejb.OpenTracingInterceptor;
import javax.annotation.Resource;
import javax.ejb.EJBContext;
import javax.ejb.Stateless;
import javax.inject.Inject;
import javax.interceptor.Interceptors;
import java.util.logging.Logger;
import static io.opentracing.contrib.ejb.OpenTracingInterceptor.SPAN_CONTEXT;
/**
* This is a regular synchronous stateless EJB. It demonstrates how to get the span context for the span it's wrapped
* on. This can be used to pass down the call chain, create child spans or add baggage items.
*
* @author Juraci Paixão Kröhling
*/
@Stateless
@Interceptors(OpenTracingInterceptor.class)
public class OrderService {
private static final Logger log = Logger.getLogger(OrderService.class.getName());
@Resource
EJBContext ctx;
@Inject
InventoryService inventoryService;
public void processOrderPlacement() {
log.info("Placing order");
Object ctxParentSpan = ctx.getContextData().get(SPAN_CONTEXT);
if (ctxParentSpan instanceof SpanContext) {
inventoryService.changeInventory((SpanContext) ctxParentSpan);
return;
}
inventoryService.changeInventory(null);
}
}
Our InventoryService is responsible for interfacing with backend systems dealing with inventory control. We don’t want to block the parent transaction while interacting with those systems, so, we make this an asynchronous EJB. When dealing with asynchronous objects, it’s a good idea to be explicit about the span context, as there are potential concurrency issues when sharing a context between a synchronous and an asynchronous bean.
The OpenTracing EJB integration is able to intercept the method call and detect if there is a span context among the parameters, as is the case with the changeInventory(SpanContext) method. In this situation, the following happens behind the scenes:
The caller makes a method call, passing the SpanContext
The interceptor is activated, creating a new child span using the SpanContext as the parent
The interceptor replaces the original SpanContext with this new child span on the method call
The intercepted method is finally invoked, wrapped by the new child span.
Note that the SpanContext passed by the OrderService is not the same as the one received by InventoryService. While this might cause some confusion, we believe this is the right semantic for this use case, as it allows for a complete tracing picture, without any explicit tracing code, apart from passing the context around.
src/main/java/io/opentracing/contrib/ejb/demoexample/InventoryService.java
package io.opentracing.contrib.ejb.demoexample;
import io.opentracing.SpanContext;
import io.opentracing.contrib.ejb.OpenTracingInterceptor;
import javax.ejb.Asynchronous;
import javax.ejb.Stateless;
import javax.inject.Inject;
import javax.interceptor.Interceptors;
import java.util.logging.Logger;
/**
* This is an asynchronous stateless EJB with spans created automatically by the interceptor. Note that the span context
* that this method sees is <b>not</b> the same as the span context sent by the caller: the interceptor wraps this
* method call on its own span, and replaces the span context by the context of this new span. This is done so that this
* span context can be passed along to the next service "as is".
*
* @author Juraci Paixão Kröhling
*/
@Asynchronous
@Stateless
@Interceptors({OpenTracingInterceptor.class})
public class InventoryService {
private static final Logger log = Logger.getLogger(InventoryService.class.getName());
@Inject
InventoryNotificationService inventoryNotificationService;
public void changeInventory(SpanContext context) {
log.info("Changing the inventory");
inventoryNotificationService.sendNotification(context);
}
}
And finally, our last service, InventoryNotificationService: in this case, we notify another set of backend systems that a new order has been placed. Again, this is an asynchronous EJB and works like the one above, but additionally, we wanted to manually create a "business span" called sendNotification. This method could send several notifications, wrapping each one in a span of its own. As we manually started the span, we manually finish it as well.
src/main/java/io/opentracing/contrib/ejb/demoexample/InventoryNotificationService.java
package io.opentracing.contrib.ejb.demoexample;
import io.opentracing.Span;
import io.opentracing.SpanContext;
import io.opentracing.util.GlobalTracer;
import javax.ejb.Asynchronous;
import javax.ejb.Stateless;
import java.util.logging.Logger;
/**
* This is the final call in the chain. This is an asynchronous stateless EJB, which obtains the span context
* via a method parameter. This bean is not intercepted in any way by us, so, the span context received is exactly
* the same as what was sent by the caller.
*
* @author Juraci Paixão Kröhling
*/
@Stateless
@Asynchronous
public class InventoryNotificationService {
private static final Logger log = Logger.getLogger(InventoryNotificationService.class.getName());
public void sendNotification(SpanContext context) {
Span span = GlobalTracer.get().buildSpan("sendNotification").asChildOf(context).startManual();
log.info("Sending an inventory change notification");
span.finish();
}
}
Now, let’s do a final sanity check and see if everything is in the right place: mvn wildfly-swarm:run. As before, the final message should be WildFly Swarm is Ready. Hit Ctrl+C and let’s set up our tracing backend.
The tracing backend
Instrumenting our code is one part of the story. The other part is to plug in an actual OpenTracing implementation that is capable of capturing the spans and submitting them to a backend service. For our demo, we’ll use Jaeger. If you don’t have a Jaeger server running yet, one can be started via Docker as follows:
docker run \
--rm \
-p5775:5775/udp \
-p6831:6831/udp \
-p6832:6832/udp \
-p5778:5778 \
-p16686:16686 \
-p14268:14268 \
--name jaeger \
jaegertracing/all-in-one:latest
Tying everything together
Now that we have our code ready and a tracing backend, let’s start WildFly Swarm, passing a service name, which is the only property required by the Jaeger client. By default, Jaeger’s Java tracer will attempt to send traces via UDP to a Jaeger Agent located on the local machine. If you are using a different architecture, refer to Jaeger’s documentation on how to use environment variables to configure the client, or to the Jaeger fraction for WildFly Swarm.
For our demo, we’ll use a property that is recognized by Jaeger’s WildFly Swarm Fraction. The other two properties tell Jaeger that every request should be sampled.
mvn wildfly-swarm:run -Dswarm.jaeger.service-name=order-processing -Dswarm.jaeger.sampler-type=const -Dswarm.jaeger.sampler-parameter=1
Watch the logs for an entry containing com.uber.jaeger.Configuration: if everything is correctly set, it should show the complete configuration of the Jaeger client, like this:
2017-07-26 12:03:09,139 INFO [com.uber.jaeger.Configuration] (ServerService Thread Pool -- 6) Initialized tracer=Tracer(version=Java-0.20.0, serviceName=order-processing, reporter=RemoteReporter(queueProcessor=RemoteReporter.QueueProcessor(open=true), sender=UdpSender(udpTransport=ThriftUdpTransport(socket=java.net.DatagramSocket@7270de22, receiveBuf=null, receiveOffSet=-1, receiveLength=0)), maxQueueSize=100, closeEnqueueTimeout=1000), sampler=ConstSampler(decision=true, tags={sampler.type=const, sampler.param=true}), ipv4=-1062731153, tags={hostname=carambola, jaeger.version=Java-0.20.0, ip=192.168.2.111}, zipkinSharedRpcSpan=false)
Once the message WildFly Swarm is Ready is seen, we can start making requests to our endpoint:
$ curl -X POST localhost:8080/order
Order placed
And this can be seen on the server’s log:
2017-07-26 12:03:19,302 INFO [io.opentracing.contrib.ejb.demoexample.Endpoint] (default task-2) Request received to place an order
2017-07-26 12:03:19,304 INFO [io.opentracing.contrib.ejb.demoexample.AccountService] (default task-2) Notifying the account owner about a new order
2017-07-26 12:03:19,307 INFO [io.opentracing.contrib.ejb.demoexample.OrderService] (default task-2) Placing order
2017-07-26 12:03:19,314 INFO [io.opentracing.contrib.ejb.demoexample.InventoryService] (EJB default - 1) Changing the inventory
2017-07-26 12:03:19,322 INFO [io.opentracing.contrib.ejb.demoexample.InventoryNotificationService] (EJB default - 2) Sending an inventory change notification
At this point, the complete trace with its 6 spans can be seen on Jaeger’s UI, located at http://localhost:16686/search , if you are using the Docker command we listed before.
Conclusion
EJBs are a very important part of Java EE, and we expect the OpenTracing EJB framework integration to complement the other Java EE related integrations, like the Servlet and JAX-RS ones. In this blog post, we’ve shown how tracing EJBs can be accomplished by transparently tracing synchronous stateless EJBs, intercepting span contexts in asynchronous EJBs, exposing the span context via EJB contexts, and manually starting spans to include specific business logic in the trace.
from Hawkular Blog
7 years, 3 months
some of the new hAlerts UI
by John Mazzitelli
Just wanted to give a quick update on the new hAlerts 2.0 UI that is under construction. I know folks are hearing about hAlerts but haven't seen it. Below is what Lucas, Jay, and I have been working on for the past two weeks or so. Still a work in progress, but this is what we've got so far.
The first is the dashboard that shows you an overview of what alerts are in the system - some graphs, timelines, and counters - all using Patternfly styles.
The second is the alerts list, which utilizes the Patternfly Compound Expansion List. You can filter on date, severity (critical, high, etc.), and status (open, acked, resolved); it shows you details about alerts, including lifecycle info (who acknowledged and resolved them), users' notes about the alerts, what conditions caused the alerts to fire, etc. You can ack, resolve, and delete alerts.
Other parts of the UI not shown here are the triggers and actions pages (these let you define triggers and actions). But the snapshots below are the more colorful and pretty pages - I'm all about sharing eye candy :)
7 years, 4 months
New Hawkular Blog Post: Grafana: new query interface
by Thomas Heute
New Hawkular blog post from noreply(a)hawkular.org (Joel Takvorian): http://ift.tt/2uPct0T
There have been improvements lately in the Hawkular Grafana datasource that are worth mentioning.
The way to query metrics by tags has changed since plugin v1.0.8. It now takes advantage of Hawkular Metrics' tags query language, which was introduced server-side in Metrics 0.24.0 and enhanced in 0.25.0. To sum it up, Metrics integrates a parser that allows queries such as: tag1 = 'awesome!' AND tag2 NOT IN ['foo', 'bar'].
The datasource is now able to fetch bucketized metric stats instead of raw metrics. This consists of aggregating datapoints into slices of time (buckets) and providing, for each slice, some statistics like min, max, average and more. The exact content of a bucket is described in the Metrics REST API. Hawkular has always been able to provide metric stats server-side, but being able to use them in the Grafana plugin is new, introduced in v1.0.9.
The new query interface
Tags query language
The first change is that you no longer have to choose between querying by tag and querying by metric id: you can do both at the same time. Querying by tag will refine the available list of metric names (much like a filter) and can result in multiple metrics from a single query. By selecting a metric name, you restrict the query to display only that one. This filtering is really nice when there are tons of metrics available, as in the case of hundreds of OpenShift pods being monitored under the same tenant.
The simple key/value pairs interface is now replaced with a more elaborate query builder, following the pattern: 'tag-key' 'operator' 'tag-value(s)' ['AND/OR' etc.]
The following images show a walk-through:
Selecting the tag key
Selecting the tag operator
Selecting the tag value
The text fields include dynamic suggestions; you can use Grafana template variables within tag values, or enter free text. Once you’ve set up a tag query expression, the relevant metrics immediately show up on the chart and the list of available metrics in the dropdown is updated.
Filtered metrics
This query builder lets you build almost any tag query that the Hawkular server understands. There are, however, some corner cases. For now the builder doesn’t allow you to prioritize expressions with parentheses. For instance, you cannot build c1 = 'foo' OR (b1 != 'B' AND a1 = 'abcd'). As a workaround, you can turn off the query builder and directly type in your query expression.
Toggle editor mode
It will be sent as is to the Hawkular Metrics server. This will also be useful to fill the gap if the language evolves server-side and this plugin isn’t updated immediately.
Stats query
The other important feature is the ability to run stats queries against Hawkular Metrics, instead of raw queries. There are several reasons to do this:
it reduces the network load, and client-side processing load, especially when raw data would contain tons of datapoints
it enables some aggregation methods
it also allows higher-level analysis with stats such as percentiles
To enable it, just clear the raw checkbox.
Toggle stats mode
When you clear the raw checkbox, you can configure Multiple series aggregation to None, Sum or Average and can configure Stat type as avg, min, max, median, sum and different percentiles. You can display several different Stat types within the same query.
Stats without aggregation: each two metrics show avg, min and max
Same query with series aggregation: the two metrics are averaged into one, which shows avg, min and max
If the query returns multiple series, use Multiple series aggregation to define if and how to aggregate them. None will show them individually on the chart. But consider, for instance, the case of an OpenShift pod with many replicas whose memory usage you are tracking. It may be more relevant here to aggregate all of them; both sum and average are meaningful in that case.
The Stat type option refers to an aggregation at a different level: not between multiple metrics, but within a single metric, all raw datapoints are aggregated within time buckets.
Conclusion
These two improvements share a common goal: facilitating queries over large amounts of data. This is becoming crucial especially in the context of microservices and applications running on container platforms, as the number of metrics explodes. Proper metrics tagging is the cornerstone for making sense of this data.
from Hawkular Blog
7 years, 4 months
New Hawkular Blog Post: Protecting Jaeger UI with a sidecar security proxy
by Thomas Heute
New Hawkular blog post from noreply(a)hawkular.org (Juraci Paixão Kröhling): http://ift.tt/2uhE5Lj
In a production deployment of Jaeger, it may be advantageous to restrict access to Jaeger’s Query service, which includes the UI. For instance, you might have internal security requirements to allow only certain groups to access trace data, or you might have deployed Jaeger into a public cloud. In a true microservices way, one possible approach is to add a sidecar to the Jaeger Query service, acting as a security proxy. Incoming requests hit our sidecar instead of reaching Jaeger’s Query service directly and the sidecar would be responsible for enforcing the authentication and authorization constraints.
Incoming HTTP requests arrive at the route ①, which uses the internal service ② to resolve and communicate with the security proxy ③. Once the request is validated and all security constraints are satisfied, the request reaches Jaeger ④.
For demonstration purposes we’ll make use of Keycloak as our security solution, but the idea can be adapted to work with any security proxy. This demo should also work without changes with Red Hat SSO. For this exercise, we’ll need:
A Keycloak (or Red Hat SSO) server instance running. We’ll call its location ${REDHAT_SSO_URL}
An OpenShift cluster, where we’ll run Jaeger backend components. It might be as easy as oc cluster up
A local clone of the Jaeger OpenShift Production template
Note that we are not trying to secure the communication between the components, such as from the Agent to the Collector. For that scenario, other techniques can be used, such as mutual authentication via certificates, or employing Istio or similar tools.
Preparing Keycloak
For this demo, we’ll run Keycloak via Docker directly on the host machine. This is to stress that Keycloak does not need to be running on the same OpenShift cluster as our Jaeger backend.
The following command should start an appropriate Keycloak server locally. If you already have your own Keycloak or Red Hat SSO server, skip this step.
docker run --rm --name keycloak-server -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=password -p 8080:8080 jboss/keycloak
Once the Keycloak server is up and running, let’s create a realm for Jaeger:
Login into Keycloak (http://<YOUR_IP>:8080/auth/admin/master/console) with admin as username and password as password
In the top left corner, mouse over the Select realm box and click Add realm. Name it jaeger and click Create
On Clients, click Create and set proxy-jaeger as the name and save it
Set the Access Type to confidential and * as Valid Redirect URIs, and save it. You might want to fine-tune this in a production environment; otherwise, you might be open to an attack known as "Unvalidated Redirects and Forwards".
Open the Installation tab and select Keycloak OIDC JSON and copy the JSON that is shown. It should look like this, but the auth-server-url and secret will have different values.
{
"realm": "jaeger",
"auth-server-url": "http://ift.tt/2tmR0IR",
"ssl-required": "external",
"resource": "proxy-jaeger",
"credentials": {
"secret": "7f201319-1dfd-43cc-9838-057dac439046"
}
}
And finally, let’s create a role and a user, so that we can log into Jaeger’s Query service:
Under the Configure left-side menu, open the Roles page and click Add role
As role name, set user and click Save
Under the Manage left-side menu, open the Users page and click Add user
Fill out the form as you wish and set Email verified to ON and click on Save
Open the Credentials tab for this user and set a password (temporary or not).
Open the Role mappings tab for this user, select the role user from the Available Roles list and click Add selected
Preparing OpenShift
For this demo, we assume you have an OpenShift cluster running already. If you don’t, then you might want to check out tools like minishift. If you are running a recent version of Fedora, CentOS or Red Hat Enterprise Linux you might want to install the package origin-clients and run oc cluster up --version=latest. This should get you a basic OpenShift cluster running locally.
To make it easier for our demonstration, we’ll add cluster-admin rights to our developer user and we’ll create the Jaeger namespace:
oc login -u system:admin
oc new-project jaeger
oc adm policy add-cluster-role-to-user cluster-admin developer -n jaeger
oc login -u developer
Preparing the Jaeger OpenShift template
We’ll use the Jaeger OpenShift Production template as the starting point: either clone the entire repository, or just get a local version of the template.
The first step is to add the sidecar container to the query-deployment object. Under the containers list, after we specify the jaeger-query, let’s add the sidecar:
- image: jboss/keycloak-proxy
  name: ${JAEGER_SERVICE_NAME}-query-security-proxy
  volumeMounts:
  - mountPath: /opt/jboss/conf
    name: security-proxy-configuration-volume
  ports:
  - containerPort: 8080
    protocol: TCP
  readinessProbe:
    httpGet:
      path: "/"
      port: 8080
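The readinessProbe tells Kubernetes to only send traffic to the sidecar once an HTTP GET on / succeeds; any status code in the 200-399 range counts as a pass. A minimal sketch of that decision rule:

```python
def probe_ready(status_code: int) -> bool:
    """Kubernetes treats any status in [200, 400) as a passing httpGet probe."""
    return 200 <= status_code < 400

# The proxy answers on port 8080: a 200 (or a 3xx redirect to the login page)
# means ready; a 502 (e.g. Keycloak unreachable) means not ready.
print(probe_ready(200), probe_ready(302), probe_ready(502))
```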
Note that this container specifies a volumeMount named security-proxy-configuration-volume: we’ll use it to store the proxy’s configuration file. Add the volume under the spec/template/spec node of query-deployment, as a sibling of the dnsPolicy property (probably right below the previous code snippet):
volumes:
- configMap:
    name: ${JAEGER_SERVICE_NAME}-configuration
    items:
    - key: proxy
      path: proxy.json
  name: security-proxy-configuration-volume
Now, we need to specify the ConfigMap, with the proxy’s configuration entry. To do that, we add a new top-level item to the template. As a suggestion, we recommend keeping it close to where it’s consumed. For instance, right before the query-deployment:
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: ${JAEGER_SERVICE_NAME}-configuration
    labels:
      app: jaeger
      jaeger-infra: security-proxy-configuration
  data:
    proxy: |
      {
        "target-url": "http://localhost:16686",
        "bind-address": "0.0.0.0",
        "http-port": "8080",
        "applications": [
          {
            "base-path": "/",
            "adapter-config": {
              "realm": "jaeger",
              "auth-server-url": "${REDHAT_SSO_URL}",
              "ssl-required": "external",
              "resource": "proxy-jaeger",
              "credentials": {
                "secret": "THE-SECRET-FROM-INSTALLATION-FILE"
              }
            },
            "constraints": [
              {
                "pattern": "/*",
                "roles-allowed": [
                  "user"
                ]
              }
            ]
          }
        ]
      }
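The constraints block is what enforces the role check: a request whose path matches the pattern is only let through when the authenticated user holds one of the roles-allowed. Roughly, as a simplified sketch (not the proxy's actual code; fnmatch globbing stands in for the proxy's servlet-style patterns):

```python
import fnmatch

def allowed(path: str, user_roles: set, constraints: list) -> bool:
    """Simplified model of the proxy's constraint check:
    the first matching pattern decides, based on its roles-allowed."""
    for constraint in constraints:
        if fnmatch.fnmatch(path, constraint["pattern"]):
            return bool(user_roles & set(constraint["roles-allowed"]))
    return True  # no constraint matched: no role required

constraints = [{"pattern": "/*", "roles-allowed": ["user"]}]
print(allowed("/search", {"user"}, constraints))   # role present -> allowed
print(allowed("/search", {"guest"}, constraints))  # role missing -> denied
```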
Note that we are only allowing users with the role user to log into our Jaeger UI. In a real world scenario, you might want to adjust this to fit your setup. For instance, your user data might come from LDAP, and you only want to allow users from specific LDAP groups to access the Jaeger UI.
The secret within the credentials should match the secret we got from Keycloak at the beginning of this exercise. Our most curious readers will have noticed that we referenced a template parameter, REDHAT_SSO_URL, in the auth-server-url property. Either hardcode the URL of your Keycloak server there, or declare a template parameter so the value can be set at deployment time. Under the parameters section of the template, add the following property:
- description: The URL to the Red Hat SSO / Keycloak server
  displayName: Red Hat SSO URL
  name: REDHAT_SSO_URL
  required: true
  value: http://THE-URL-FROM-THE-INSTALLATION-FILE:8080/auth
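At deployment time, oc process substitutes ${REDHAT_SSO_URL} everywhere it appears in the template before the objects are created. Conceptually it works like the rough sketch below (a simplification, not oc's implementation):

```python
def process_template(text: str, params: dict) -> str:
    """Replace ${NAME} placeholders the way `oc process -p NAME=value` does."""
    for name, value in params.items():
        text = text.replace("${%s}" % name, value)
    return text

snippet = '"auth-server-url": "${REDHAT_SSO_URL}"'
print(process_template(snippet, {"REDHAT_SSO_URL": "http://192.168.1.10:8080/auth"}))
# -> "auth-server-url": "http://192.168.1.10:8080/auth"
```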
This value should be a location that is reachable by both your browser and by the sidecar, such as your host’s LAN IP (192.x, 10.x). Localhost/127.x will not work.
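One way to find an address that satisfies both constraints is to ask the kernel which local address it would use for an outbound connection. This is a common stdlib trick, sketched below; it falls back to loopback when no route exists:

```python
import socket

def lan_ip() -> str:
    """Return the host's outbound IPv4 address, or 127.0.0.1 if undetermined.
    Connecting a UDP socket sends no packets; it only selects a source address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("192.0.2.1", 80))  # TEST-NET address; never actually contacted
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"
    finally:
        s.close()

print(lan_ip())  # use this address (not 127.0.0.1) in the REDHAT_SSO_URL value
```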
As a final step, we need to change the service to route requests to port 8080 (the proxy) instead of 16686. This is done by setting the targetPort property on the service named query-service to 8080:
- apiVersion: v1
  kind: Service
  metadata:
    name: ${JAEGER_SERVICE_NAME}-query
    labels:
      app: jaeger
      jaeger-infra: query-service
  spec:
    ports:
    - name: jaeger-query
      port: 80
      protocol: TCP
      targetPort: 8080
    selector:
      jaeger-infra: query-pod
    type: LoadBalancer
As a reference, here’s the complete template file that can be used for this blog post.
Deploying
Now that we have everything ready, let’s deploy Jaeger into our OpenShift cluster. Run the following command from the same directory you stored the YAML file from the previous steps, referenced here by the name jaeger-production-template.yml:
oc process -f jaeger-production-template.yml | oc create -n jaeger -f -
During the first couple of minutes, it’s OK if the pods jaeger-query and jaeger-collector fail, as Cassandra will still be booting. Eventually, the service should be up and running, as shown in the following image.
Once it is ready to serve requests, click on URL for the route (http://ift.tt/2uIOgZQ). You should be presented with a login screen, served by the Keycloak server. Login with the credentials you set on the previous steps, and you should reach the regular Jaeger UI.
Conclusion
In this exercise, we’ve seen how to add a security proxy to our Jaeger Query pod as a sidecar. All incoming requests go through this sidecar and all features available in Keycloak can be used transparently, such as 2-Factor authentication, service accounts, single sign-on, brute force attack protection, LDAP support and much more.
from Hawkular Blog
7 years, 4 months