New Hawkular Blog Post: Protecting Jaeger UI with a sidecar security proxy
by Thomas Heute
New Hawkular blog post from noreply(a)hawkular.org (Juraci Paixão Kröhling): http://ift.tt/2uhE5Lj
In a production deployment of Jaeger, it may be advantageous to restrict access to Jaeger’s Query service, which includes the UI. For instance, you might have internal security requirements to allow only certain groups to access trace data, or you might have deployed Jaeger into a public cloud. In a true microservices way, one possible approach is to add a sidecar to the Jaeger Query service, acting as a security proxy. Incoming requests would hit our sidecar instead of reaching Jaeger’s Query service directly, and the sidecar would be responsible for enforcing the authentication and authorization constraints.
Incoming HTTP requests arrive at the route ①, which uses the internal service ② to resolve and communicate with the security proxy ③. Once the request is validated and all security constraints are satisfied, the request reaches Jaeger ④.
For demonstration purposes we’ll make use of Keycloak as our security solution, but the idea can be adapted to work with any security proxy. This demo should also work without changes with Red Hat SSO. For this exercise, we’ll need:
A Keycloak (or Red Hat SSO) server instance running. We’ll call its location ${REDHAT_SSO_URL}
An OpenShift cluster, where we’ll run Jaeger backend components. It might be as easy as oc cluster up
A local clone of the Jaeger OpenShift Production template
Note that we are not trying to secure the communication between the components, like from the Agent to the Collector. For this scenario, there are other techniques that can be used, such as mutual authentication via certificates, employing istio or other similar tools.
Preparing Keycloak
For this demo, we’ll run Keycloak via Docker directly on the host machine. This is to stress that Keycloak does not need to be running on the same OpenShift cluster as our Jaeger backend.
The following command should start an appropriate Keycloak server locally. If you already have your own Keycloak or Red Hat SSO server, skip this step.
docker run --rm --name keycloak-server -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=password -p 8080:8080 jboss/keycloak
Once the Keycloak server is up and running, let’s create a realm for Jaeger:
Log in to Keycloak (http://<YOUR_IP>:8080/auth/admin/master/console) with admin as the username and password as the password
In the top left corner, mouse over the Select realm box and click Add realm. Name it jaeger and click Create
On Clients, click Create and set proxy-jaeger as the name and save it
Set the Access Type to confidential and * as Valid Redirect URIs and save it. You might want to fine tune this in a production environment, otherwise you might be open to an attack known as "Unvalidated Redirects and Forwards".
Open the Installation tab and select Keycloak OIDC JSON and copy the JSON that is shown. It should look like this, but the auth-server-url and secret will have different values.
{
  "realm": "jaeger",
  "auth-server-url": "http://ift.tt/2tmR0IR",
  "ssl-required": "external",
  "resource": "proxy-jaeger",
  "credentials": {
    "secret": "7f201319-1dfd-43cc-9838-057dac439046"
  }
}
And finally, let’s create a role and a user, so that we can log into Jaeger’s Query service:
Under the Configure left-side menu, open the Roles page and click Add role
As role name, set user and click Save
Under the Manage left-side menu, open the Users page and click Add user
Fill out the form as you wish and set Email verified to ON and click on Save
Open the Credentials tab for this user and set a password (temporary or not).
Open the Role mappings tab for this user, select the role user from the Available Roles list and click Add selected
Preparing OpenShift
For this demo, we assume you have an OpenShift cluster running already. If you don’t, then you might want to check out tools like minishift. If you are running a recent version of Fedora, CentOS or Red Hat Enterprise Linux you might want to install the package origin-clients and run oc cluster up --version=latest. This should get you a basic OpenShift cluster running locally.
To make it easier for our demonstration, we’ll add cluster-admin rights to our developer user and we’ll create the Jaeger namespace:
oc login -u system:admin
oc new-project jaeger
oc adm policy add-cluster-role-to-user cluster-admin developer -n jaeger
oc login -u developer
Preparing the Jaeger OpenShift template
We’ll use the Jaeger OpenShift Production template as the starting point: either clone the entire repository, or just get a local version of the template.
The first step is to add the sidecar container to the query-deployment object. Under the containers list, after we specify the jaeger-query, let’s add the sidecar:
- image: jboss/keycloak-proxy
  name: ${JAEGER_SERVICE_NAME}-query-security-proxy
  volumeMounts:
  - mountPath: /opt/jboss/conf
    name: security-proxy-configuration-volume
  ports:
  - containerPort: 8080
    protocol: TCP
  readinessProbe:
    httpGet:
      path: "/"
      port: 8080
Note that the container specifies a volumeMount named security-proxy-configuration-volume: we’ll use it to store the proxy’s configuration file. You should add the volume under the spec/template/spec node of query-deployment, as a sibling of the dnsPolicy property (it’s probably right under the previous code snippet):
volumes:
  - configMap:
      name: ${JAEGER_SERVICE_NAME}-configuration
      items:
      - key: proxy
        path: proxy.json
    name: security-proxy-configuration-volume
Now, we need to specify the ConfigMap, with the proxy’s configuration entry. To do that, we add a new top-level item to the template. As a suggestion, we recommend keeping it close to where it’s consumed. For instance, right before the query-deployment:
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: ${JAEGER_SERVICE_NAME}-configuration
    labels:
      app: jaeger
      jaeger-infra: security-proxy-configuration
  data:
    proxy: |
      {
        "target-url": "http://localhost:16686",
        "bind-address": "0.0.0.0",
        "http-port": "8080",
        "applications": [
          {
            "base-path": "/",
            "adapter-config": {
              "realm": "jaeger",
              "auth-server-url": "${REDHAT_SSO_URL}",
              "ssl-required": "external",
              "resource": "proxy-jaeger",
              "credentials": {
                "secret": "THE-SECRET-FROM-INSTALLATION-FILE"
              }
            },
            "constraints": [
              {
                "pattern": "/*",
                "roles-allowed": [
                  "user"
                ]
              }
            ]
          }
        ]
      }
Note that we are only allowing users with the role user to log into our Jaeger UI. In a real world scenario, you might want to adjust this to fit your setup. For instance, your user data might come from LDAP, and you only want to allow users from specific LDAP groups to access the Jaeger UI.
The secret within the credentials should match the secret we got from Keycloak at the beginning of this exercise. Our most curious readers will have noted that we referenced the template parameter REDHAT_SSO_URL under the property auth-server-url. Either change that to your Keycloak server’s URL, or keep it and specify a template parameter, allowing us to set this at deployment time. Under the parameters section of the template, add the following property:
- description: The URL to the Red Hat SSO / Keycloak server
  displayName: Red Hat SSO URL
  name: REDHAT_SSO_URL
  required: true
  value: http://THE-URL-FROM-THE-INSTALLATION-FILE:8080/auth
This value should be a location that is reachable by both your browser and by the sidecar, like your host’s LAN IP (192.x, 10.x). Localhost/127.x is not going to work.
As a final step, we need to change the service to direct requests to the port 8080 (proxy) instead of 16686. This is done by changing the property targetPort on the service named query-service, setting it to 8080:
- apiVersion: v1
  kind: Service
  metadata:
    name: ${JAEGER_SERVICE_NAME}-query
    labels:
      app: jaeger
      jaeger-infra: query-service
  spec:
    ports:
    - name: jaeger-query
      port: 80
      protocol: TCP
      targetPort: 8080
    selector:
      jaeger-infra: query-pod
    type: LoadBalancer
As a reference, here’s the complete template file that can be used for this blog post.
Deploying
Now that we have everything ready, let’s deploy Jaeger into our OpenShift cluster. Run the following command from the same directory you stored the YAML file from the previous steps, referenced here by the name jaeger-production-template.yml:
oc process -f jaeger-production-template.yml | oc create -n jaeger -f -
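If you kept the REDHAT_SSO_URL parameter instead of hard-coding your server’s URL in the template, you can also set it at deployment time. The address below is just a placeholder for your host’s LAN IP, and depending on your oc version the flag may be -p or --param, so treat this as a sketch:
oc process -f jaeger-production-template.yml -p REDHAT_SSO_URL=http://192.168.2.111:8080/auth | oc create -n jaeger -f -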
During the first couple of minutes, it’s OK if the pods jaeger-query and jaeger-collector fail, as Cassandra will still be booting. Eventually, the service should be up and running, as shown in the following image.
Once it is ready to serve requests, click on the URL for the route (http://ift.tt/2uIOgZQ). You should be presented with a login screen, served by the Keycloak server. Log in with the credentials you set in the previous steps, and you should reach the regular Jaeger UI.
Conclusion
In this exercise, we’ve seen how to add a security proxy to our Jaeger Query pod as a sidecar. All incoming requests go through this sidecar and all features available in Keycloak can be used transparently, such as 2-Factor authentication, service accounts, single sign-on, brute force attack protection, LDAP support and much more.
from Hawkular Blog
Calculated metrics: sharing ideas
by Joel Takvorian
Hi,
I just want to share some ideas about eventually having a language to perform some arithmetic / aggregations on metrics, before it goes out of my head...
Here's an example of what I would personally love to see in Hawkular:
----------------------------------------------
*Example:*
*sum(stats(rate((id(my_metric), tags(a=foo AND b=bar), regexp(something_.+)), 5m), 10m))*
|
|=> "id", "tags" and "regexp" all return a set of raw metrics (0-1 for id, 0-n for tags and regexp)
|==> "(a,b,c)" takes n parameters, all sets of metrics, and flattens them into a single set
|===> rate(set_of_raw_metrics, rate_period) computes the rate for each of them and returns a set of metrics (map n=>n)
|====> stats(set_of_raw_metrics, bucket_size) bucketizes the raw metrics, returning the same number of bucketized metrics (map n=>n)
|=====> sum(set_of_stats_metrics) sums every bucket, returning a single bucketized metric (fold n=>1)
*Other:*
Functions like "sum" that take stats_metrics could have an overloaded shortcut "sum(set_of_raw_metrics, bucket_size)" to perform the bucketing. In other words, the above example could be rewritten:
*sum(rate((id(my_metric), tags(a=foo AND b=bar), regexp(something_.+)), 5m), 10m)*
Note: we can do aggregations like "sum" on raw data if necessary, it just means we have to interpolate.
*Scalar operations:*
*sum((id(a_metric_in_milliseconds), 1000*id(a_metric_in_seconds)), 10m)*
----------------------------------------------
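To make the map n=>n / fold n=>1 shapes above a bit more concrete, here is a rough Java sketch -- purely hypothetical, none of these types exist in Hawkular today:

import java.util.List;
import java.util.function.Function;

// Hypothetical sketch only: a metric is an id plus its data points, and every
// function of the proposed language is either a transformation over a set of
// metrics (map n=>n, e.g. rate, stats, scalar ops) or an aggregation that
// collapses a set down to a single metric (fold n=>1, e.g. sum).
public class CalcLanguageSketch {

    public static class Metric {
        public final String id;
        public final List<double[]> points; // each entry: {timestamp, value}

        public Metric(String id, List<double[]> points) {
            this.id = id;
            this.points = points;
        }
    }

    // map n=>n: rate(set, period), stats(set, bucket_size), ...
    public interface MetricTransform extends Function<List<Metric>, List<Metric>> {}

    // fold n=>1: sum(set), ...
    public interface MetricAggregation extends Function<List<Metric>, Metric> {}
}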
Of course many other functions could come along, growing the library.
Now I suppose the big question, if we want to do such a thing, is "are we going to invent our own language?" I don't know if there are standards for this, or whether they are any good.
The Prometheus query language cannot be transposed as-is, because a label is not a tag and it makes no sense for us to write something like "my_metric{tag=foo}"; it's either "my_metric" or "tag=foo".
The same language could be used both for read-time on-the-fly aggregations and write-time / cron-based rollups.
WDYT?
OWASP ZAP for security testing
by Heiko Rupp
Last week I was in a session about "Security during the build", where the presenter talked about enforcing checks for security issues during the build phase (preferably in a nightly CI run).
One of the interesting tools is
https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project
which is a web client that can run attacks against web applications to try things like
* SQL injection
* cross-site request forgery
* just parameter fuzzing
and much more.
While this is a bit hard to set up with pure REST APIs (if they don't follow HATEOAS), it seems worth doing anyway to make sure that the obvious things don't hit.
And before someone mentions that this does not apply to us because we use Cassandra and not a SQL data store: it is possible to generate profiles and e.g. switch off the SQL injection attack vector.
New Hawkular Blog Post: OpenTracing JAX-RS Instrumentation
by Thomas Heute
New Hawkular blog post from noreply(a)hawkular.org (Pavol Loffay): http://ift.tt/2tYLEaM
In the previous demo we demonstrated how to instrument a Spring Boot app using OpenTracing, a vendor-neutral standard for distributed tracing. In this article we are going to instrument a Java API for RESTful Web Services (JAX-RS) application, and show you how to trace the business layer and add custom data to the trace.
Demo application
Creating a JAX-RS app from scratch can be a time-consuming task, therefore in this case we are going to use WildFly Swarm’s app generator. Select the JAX-RS and CDI dependencies and hit the Generate button.
Figure 1: Wildfly Swarm generator.
The generated application contains one REST endpoint which returns a hello world string. This endpoint is accessible at http://localhost:8080/hello. In the next step we are going to add instrumentation and simple business logic.
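The exact class and method names depend on the generator version, but the generated endpoint looks roughly like this sketch:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Response;

// Approximate sketch of the generated class; names may differ slightly.
@Path("/hello")
public class HelloWorldEndpoint {

    @GET
    @Produces("text/plain")
    public Response doGet() {
        return Response.ok("Hello from WildFly Swarm!").build();
    }
}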
Instrumentation
Adding OpenTracing instrumentation to JAX-RS is very simple, just include the following dependency in the classpath and the tracing feature will be automatically registered.
<dependency>
<groupId>io.opentracing.contrib</groupId>
<artifactId>opentracing-jaxrs2</artifactId>
</dependency>
OpenTracing is just an API, therefore a specific tracer instance has to be registered. In this demo we are going to use the Jaeger tracing system. The tracer should be created and initialized only once per process, hence ServletContextListener is the ideal place for this task:
@WebListener
public class TracingContextListener implements ServletContextListener {

    @Inject
    private io.opentracing.Tracer tracer;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        GlobalTracer.register(tracer);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {}

    @Produces
    @Singleton
    public static io.opentracing.Tracer jaegerTracer() {
        return new Configuration("wildfly-swarm",
            new Configuration.SamplerConfiguration(ProbabilisticSampler.TYPE, 1),
            new Configuration.ReporterConfiguration())
            .getTracer();
    }
}
The tracer initialization code requires specifying the app name, which in this case is wildfly-swarm, and a sampler configuration.
Note that we are using Java’s Contexts and Dependency Injection (CDI) to share a tracer instance in our app. If we forget to register a specific tracer instance, then the tracing feature would use the NoopTracer. Now we can verify tracing by starting the Jaeger server using the following command: docker run --rm -it --network=host jaegertracing/all-in-one and accessing the endpoint at http://localhost:8080/hello. Our trace with one span should be present in the UI at http://localhost:16686.
Instrumenting business logic
JAX-RS instrumentation provides nice visibility into your app; however, it is often necessary to add custom data to the trace to see what is happening in the service or database layer.
The following code snippet shows how the service layer can create and add data to the trace:
public class BackendService {

    @Inject
    private io.opentracing.Tracer tracer;

    public String action() throws InterruptedException {
        int random = new Random().nextInt(200);
        try (ActiveSpan span = tracer.buildSpan("action").startActive()) {
            anotherAction();
            Thread.sleep(random);
        }
        return String.valueOf(random);
    }

    private void anotherAction() {
        tracer.activeSpan().setTag("anotherAction", "data");
    }
}
Note that it’s not necessary to manually pass a span instance around. The method anotherAction accesses the current active span from the tracer.
With the additional instrumentation shown above, an invocation of the REST endpoint would result in a trace consisting of two spans, one representing the inbound server request, and the other the business logic. The span representing server processing is automatically considered the parent of the span created in the business layer. If we created a span in anotherAction, then its parent would be the span created in the action method, as in the sketch below.
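For illustration only (this variant is not part of the demo code above), anotherAction could start its own nested span like this:

// Hypothetical variant of BackendService.anotherAction(): start a child span
// instead of only tagging the active one; the "action" span automatically
// becomes its parent.
private void anotherAction() {
    try (ActiveSpan childSpan = tracer.buildSpan("anotherAction").startActive()) {
        childSpan.setTag("anotherAction", "data");
    }
}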
Figure 2: Jaeger showing reported spans.
Video
Conclusion
We have demonstrated that instrumenting a JAX-RS app is just a matter of adding a dependency and registering a tracer instance. If we wanted to use a different OpenTracing implementation, Zipkin for instance, it would just require changing the tracer producer code. No changes to the application or business logic! In the next demo we will wire this app with the Spring Boot app created in the previous demo and deploy them on Kubernetes.
Links
OpenTracing: http://opentracing.io
Github repository with demo: http://ift.tt/2s6QrTB
OpenTracing JAX-RS instrumentation: http://ift.tt/2uxvWQO
Jaeger: http://ift.tt/2eOSqHE
from Hawkular Blog
Generic Hawkular-UI in MiQ
by Heiko W.Rupp
Hey,
the current way we implement the Hawkular part of the MiQ UI is static: we write .haml files that define what properties and relations to show. Basically, one such page exists for each resource type. Adding a new kind of server, e.g. Apache Tomcat, would require adding a ton of new .haml files.
In RHQ we had a pretty generic UI that was driven off of the metadata inside the plugin descriptor. If a resource type had <operation> elements, then the UI showed the operations tab. Similar for the list of metrics, the Events tab, etc. Also for the resource configuration, the tab and the list of configuration properties were driven off of the plugin descriptor. See also [1].
The idea is now to apply the same mechanics to the ManageIQ UI so that the resource type definitions coming from the agent can drive the UI. We most probably need to extend the current config [2] to show
- what is shown by default
- how relations are to be shown
- which properties should be grouped together
The agent would store those in the resource type, which MiQ can pull to build the UI from those definitions.
There is currently a rough spot: how to deal with one / more "competing" WildFly RTs? In RHQ we had the one canonical definition of a resource type. Now each agent could send a different one. While technically we can work with that, it may be confusing if 2 WF-standalone instances look different. It will not happen often though - especially in container-land, where the config is "baked into the image".
I wonder if we should change this way of doing inventory a bit to be similar to RHQ (but simpler):
- RT definition is done on the server
- the agent asks for the RT definition on first start
- MiQ also gets the RT definition from the server.
With Inventory.v3 this may mean that some startup code needs to populate RTs and probably refresh them periodically.
Thoughts?
[1]
https://github.com/pilhuhn/misc_writing/blob/master/HowToWriteAnRhqPlugin...
[2]
https://github.com/hawkular/hawkular-agent/blob/master/hawkular-javaagent...
Blogs on Hawkular.org and DZone.com
by Pavol Loffay
Hello bloggers!
I have good news! Editors from DZone[1] are watching our blog. If you post an interesting article, there is a chance that they will migrate it onto DZone. However, it's a manual process and they are watching several sites, so they might miss some interesting articles.
We can still manually share articles on DZone and link them with hawkular.org. That's the preferred way if you want to be sure that your article will be shared there!
[1]: https://dzone.com/
Regards,
--
PAVOL LOFFAY
Red Hat Česká republika <https://www.redhat.com/>
Purkyňova 111 TPB-B 612 45 Brno
M: +421948286055
<https://red.ht/sig>