REST API pagination
by Heiko W. Rupp
Hey,
triggered by https://issues.jboss.org/browse/HWKINVENT-33
and https://issues.jboss.org/browse/HAWKULAR-125,
we should make sure that all the Hawkular subprojects use the
same pagination mechanism in their REST endpoints.
== Why pagination?
Pagination serves two main purposes:
* Limit the work the server has to do by not returning all results at
once
* Limit the work a client has to do to digest a huge list of results
The limiting of work applies not only to CPU time, but also to memory
consumption and, in the case of remote clients, to load time. Smaller
lists just load faster.
== What is there?
Inside RHQ we use both link headers and paging information inside the
returned objects.
There are 'page' and 'ps' query parameters for the page number and the
page size, links with the 'prev', 'next', 'current' and 'last' relations
(where applicable), and an additional header "X-collection-size" for the
total number of items.
See
https://github.com/rhq-project/rhq/blob/master/modules/enterprise/server/...
and
https://github.com/rhq-project/rhq/blob/master/modules/enterprise/server/...
RFC 5988 describes web linking headers and defines a number of link
relation types ( http://tools.ietf.org/html/rfc5988#section-6.2.2 ),
from which the RHQ ones were taken.
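To make the scheme concrete, a response for the second page of a
198-item collection (ps=20) might carry headers like these (host, path
and numbers are made up for illustration):

HTTP/1.1 200 OK
X-collection-size: 198
Link: <http://server/rest/resources?page=1&ps=20>; rel="prev"
Link: <http://server/rest/resources?page=3&ps=20>; rel="next"
Link: <http://server/rest/resources?page=2&ps=20>; rel="current"
Link: <http://server/rest/resources?page=9&ps=20>; rel="last"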
This URL
http://www.vinaysahni.com/best-practices-for-a-pragmatic-restful-api#pagi...
has some "best practices" (defined by whom?) that are basically like the
RHQ link headers, but instead of 'ps' they use 'per_page' for the page
size, which is apparently what GitHub does. Also, the total collection
count is 'X-Total-Count' rather than RHQ's 'X-collection-size'.
We had a discussion about the style (link headers vs. body) in the past:
http://pilhuhn.blogspot.de/2013/02/best-practice-for-paging-in-restful-ap...
There were also objections from the JavaScript side, where it is hard to
get at the HTTP headers but easy to digest body elements. This led to
supporting paging information in both body and headers, depending on the
media type requested (see
https://github.com/rhq-project/rhq/blob/master/modules/enterprise/server/...
).
For convenience I suggest copying over the methods from RHQ and re-using
them.
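For illustration, here is a minimal JAX-RS sketch of building such a
response with RHQ's 'page'/'ps' parameter names; the resource path and
the fetchPage/countAll helpers are hypothetical stand-ins, not the
actual RHQ code:

import java.net.URI;
import java.util.Collections;
import java.util.List;
import javax.ws.rs.DefaultValue;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

@Path("/items")
public class ItemResource {

    @GET
    public Response list(@QueryParam("page") @DefaultValue("0") int page,
                         @QueryParam("ps") @DefaultValue("20") int ps,
                         @Context UriInfo uriInfo) {
        List<String> items = fetchPage(page, ps);
        long total = countAll();
        int lastPage = (int) Math.max(0, (total - 1) / ps);

        // Body plus paging headers: total count and RFC 5988 style links
        Response.ResponseBuilder builder = Response.ok(items)
                .header("X-collection-size", total)
                .link(pageUri(uriInfo, page, ps), "current")
                .link(pageUri(uriInfo, lastPage, ps), "last");
        if (page > 0) {
            builder = builder.link(pageUri(uriInfo, page - 1, ps), "prev");
        }
        if (page < lastPage) {
            builder = builder.link(pageUri(uriInfo, page + 1, ps), "next");
        }
        return builder.build();
    }

    // Builds e.g. http://server/rest/items?page=2&ps=20
    private URI pageUri(UriInfo uriInfo, int page, int ps) {
        return uriInfo.getAbsolutePathBuilder()
                .queryParam("page", page)
                .queryParam("ps", ps)
                .build();
    }

    // Hypothetical stand-ins for the real data access layer
    private List<String> fetchPage(int page, int ps) {
        return Collections.emptyList();
    }

    private long countAll() {
        return 0L;
    }
}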
Looking for a first project to integrate accounts
by Juraci Paixão Kröhling
All,
Hawkular Accounts has reached a point where it's possible for other
components to integrate with it. As such, I would like to ask if
there's a project that would volunteer to be the first with
this integration, so that I can fix possible integration bugs that
might come out of this experience.
I know that you all are busy with the MVP, but if you think you can
afford to spend, say, a day on this, I'll be ready to guide you.
These are the items that I would like to see used during this
first integration:
- Securing the backend with Keycloak
- Consuming the tenant information (Persona)
- Protecting resources based on the Persona's role (see the sketch below)
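As a rough idea of what the role-protection item could look like, here
is a minimal sketch using the standard javax.annotation.security
annotations on a JAX-RS endpoint; the endpoint and the role name are
assumptions of mine, so check the README linked below for the actual
Hawkular Accounts integration:

import javax.annotation.security.RolesAllowed;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/reports")
public class ReportEndpoint {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    @RolesAllowed("Administrator") // hypothetical role name
    public String adminOnlyReport() {
        // With the backend secured by Keycloak, callers lacking the role
        // are rejected before this method is ever invoked.
        return "{\"report\":\"sensitive\"}";
    }
}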
Needless to say, I'll support you in achieving this and will promptly
fix the issues that may arise.
If you need more information before you volunteer, you can take a look
at the readme file from the accounts module:
https://github.com/hawkular/hawkular-accounts
- Juca.
WildFly NoSQL integration prototype
by John Mazzitelli
cross-posting from the wildfly-dev list - could be of interest to us.
----- Forwarded Message -----
To: "WildFly Dev List" <wildfly-dev(a)lists.jboss.org>
Sent: Friday, April 24, 2015 10:01:10 AM
Subject: [wildfly-dev] WildFly NoSQL integration prototype
Are you interested in allowing access to NoSQL databases from WildFly
application deployments? This email is about an integration effort to
allow WildFly applications to use NoSQL. Feedback is welcome on this
effort, as well as help in improving [1]. Some basic unit tests are
already added that show a session bean reading/writing MongoDB [2] and
Cassandra [3] databases. In order for the tests to pass, the local
machine must already be running MongoDB or Cassandra.
1. Things that currently seem to be working in the prototype:
* During WildFly startup, MongoDB/Cassandra databases are connected to
based on settings in their respective subsystems. See the configuration
example [4].
* Applications can access native MongoDB/Cassandra objects that
represent database connections (with internal native connection
pooling). See the @Resource examples [2][3] and the sketch below. We
will see how the requirements evolve going forward, whether @Resource is
the right way, and/or whether other annotations are needed.
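A minimal sketch of what such a bean might look like, based on the
description above; the JNDI name and the database/collection names are
assumptions for illustration, not taken from the prototype:

import javax.annotation.Resource;
import javax.ejb.Stateless;
import org.bson.Document;
import com.mongodb.MongoClient;
import com.mongodb.client.MongoDatabase;

@Stateless
public class MongoAccessBean {

    // The natively bound MongoClient; "java:jboss/mongodb/default" is a
    // hypothetical JNDI name.
    @Resource(lookup = "java:jboss/mongodb/default")
    private MongoClient mongoClient;

    public void insertGreeting() {
        MongoDatabase db = mongoClient.getDatabase("testdb");
        db.getCollection("greetings").insertOne(new Document("text", "hello"));
        // The connection is pooled and managed by the subsystem, so the bean
        // must not call mongoClient.close() (the prototype does not yet
        // protect against that, see item 2 below).
    }
}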
2. Currently not working in the prototype:
* Multiple hosts/ports cannot yet be specified for the target database.
* Protection against applications closing pooled connections.
* NoSQL drivers may currently create threads from EE application
threads, which could leak ClassLoaders/AccessControlContexts. One
solution might be to contribute a patch that allows WildFly to handle
thread creation for the NoSQL drivers in some fashion.
* We have not (yet) tried using (Java) security manager support with the
NoSQL driver clients.
* Additional NoSQL connection attributes need to be added to the NoSQL
subsystems.
* Native NoSQL class objects are currently bound to JNDI (e.g.
MongoClient). We might want to bind wrapper or proxy objects so that we
can extend the NoSQL classes or, in some cases, prevent certain actions
(e.g. prevent calls to MongoClient.close()). Perhaps we will end up
with a mixed approach, where we extend the NoSQL driver if that is the
only way to manage it, or contribute a listener patch for WildFly to
get control during certain events (e.g. to ignore close of pooled
database connections).
* The prototype currently gives all (WildFly) deployments access to the
Cassandra/MongoDB driver module classloaders. This is likely to change,
but it is not yet clear to what.
3. The Weld (CDI) project is also looking at NoSQL enhancements, as is
the Narayana project. There is also the Hibernate OGM project, which is
pushing on JPA integration and will also help contribute the changes to
the NoSQL drivers that are needed for WildFly integration (e.g.
introducing an alternative way for NoSQL drivers to manage thread
creation for background task execution).
4. We will need a place to track issues for NoSQL integration. If the
NoSQL integration changes are merged directly into WildFly, perhaps we
could have a nosql category under https://issues.jboss.org/browse/WFLY.
5. You can view outstanding issues in the MongoDB server [5] and Java
driver [6] to get a feel for problems that others have run into (just
like you would with WildFly). You can view outstanding issues in the
Cassandra server [7] and Java driver [8] as well.
6. Infinispan [9] integration in WildFly is still going strong.
Infinispan is still the backbone of WildFly clustering and also
available for applications to use as a datasource.
7. The standalone.xml settings [4] will soon change (would like to
eliminate the "name=default", add more attributes and get the multiple
host/ports wired in).
8. If the NoSQL unit tests do stay in the WildFly repo, they will need
to be disabled by default, as most WildFly developers will not have a
NoSQL database running. Speaking of which, we need to wire the unit
tests to update the standalone.xml to contain the MongoDB/Cassandra
subsystem settings [4].
9. Which versions of the NoSQL databases will work with the WildFly
NoSQL integration? At this point, we will only work with one version of
each NoSQL database that is integrated with. Because we are likely to
need some changes in the NoSQL client drivers, we will work with the
upstream communities to ensure the NoSQL driver code can run in an EE
container thread without causing leaks. First we have to identify the
changes that we need (e.g. find some actual leaks, which at this point I
only suspect will happen, and propose some changes). The Hibernate OGM
team is going to help with the driver patches (thanks, Hibernate OGM
team! :-)
10. Going forward, how can WildFly extend the NoSQL (client driver
side) capabilities to improve the different application life cycles
through development, test, and production?
Scott
[1] https://github.com/scottmarlow/wildfly/tree/nosql-dev
[2]
https://github.com/scottmarlow/wildfly/blob/nosql-dev/testsuite/compat/sr...
[3]
https://github.com/scottmarlow/wildfly/blob/nosql-dev/testsuite/compat/sr...
[4] https://gist.github.com/scottmarlow/b8196bdc56431bb171c8
[5] https://jira.mongodb.org/browse/SERVER
[6] https://jira.mongodb.org/browse/JAVA
[7] https://issues.apache.org/jira/browse/CASSANDRA
[8] https://datastax-oss.atlassian.net/browse/JAVA
[9] http://infinispan.org
What is Business Transaction Management (prep for F2F)
by Gary Brown
Hi all
As suggested recently by others, we want to focus the F2F sessions on discussions rather than presentations. So in that spirit, I thought it would be good to get the "What is Business Transaction Management" discussion out of the way as a ML thread, so that the F2F session can discuss what and how to build BTM in a Hawkular context.
Taking some excerpts from Wikipedia:
"Business transaction management (BTM), is the practice of managing information technology (IT) from a business transaction perspective. It provides a tool for tracking the flow of transactions across IT infrastructure, in addition to detection, alerting, and correction of unexpected changes in business or technical conditions. BTM provides visibility into the flow of transactions across infrastructure tiers, including a dynamic mapping of the application topology.
Using BTM, application support teams are able to search for transactions based on message context and content – for instance, time of arrival or message type – providing a way to isolate causes for common issues such as application exceptions, stalled transactions, and lower-level issues such as incorrect data values.
The ultimate goal of BTM is to improve service quality for users conducting business transactions while improving the effectiveness of the IT applications and infrastructure across which those transactions execute. The main benefit of BTM is its capacity to identify precisely where transactions are delayed within the IT infrastructure. BTM also aims to provide proactive problem prevention and the generation of business service intelligence for optimization of resource provisioning and virtualization."
Some of the applications of BTM listed are:
"BTM solutions capture all of the transaction instances in the production environment and as such can be used for monitoring as well as for analysis and planning. Some applications include:
* Outage avoidance and problem isolation: Identification and isolation of tier-specific performance and availability issues.
* Service level management: Monitoring of SLAs and alerting of threshold breaches both at the end-user and infrastructure tier level.
* Infrastructure optimization: Modification of the configuration of data center infrastructure to maximize utilization and improve performance.
* Capacity planning: Analysis of usage and performance trends in order to estimate future capacity requirements.
* Change management: Analysis of the impact of change on transaction execution.
* Cloud management: Track the end-to-end transaction flow across both cloud (private, hybrid, public) and dedicated (on-premises, off-premises) infrastructure."
Obviously we need to focus our efforts on the monitoring/alerting aspects initially, and this is what I expect the F2F discussion will be focused on, but a couple of these areas may be of interest in the future.
Any views on the above are appreciated.
Regards
Gary
Serialization format for inventory relations
by Jiri Kremser
Hello devs,
currently, the inventory REST API can't answer requests like "give me all entities related to this entity" or similar. I am working on exposing the relationships. Although it's probably not the use case REST was designed for (relations are not resources), I would like to stick with standards as much as possible.
I suggest using the JSON-LD[1] standard as the underlying serialization format.
pros:
* W3C standard
* can be converted into RDF, RDFa, Turtle, etc. and stored in an arbitrary triplestore (might be useful for some offline analysis, querying with SPARQL, etc.)
* HATEOAS ready, because the ids are URIs
cons:
* not as concise as plain JSON
Here is an example of one relation/edge:
{
  "@context": {
    "inv": "http://hawkular.org/inventory/0.0.1",
    "baseUrl": "http://127.0.0.1:8080/hawkular/inventory/"
  },
  "@id": "baseUrl:relationships/1337",
  "inv:shortId": "2",
  "@type": "inv:Relationship",
  "inv:source": {
    "@id": "baseUrl:tenants/acme",
    "@type": "inv:Tenant",
    "inv:shortId": "acme"
  },
  "inv:label": "contains",
  "inv:target": {
    "@id": "baseUrl:acme/resourceTypes/URL",
    "@type": "inv:ResourceType",
    "inv:shortId": "URL"
  },
  "inv:properties": {
    "created": "2000-01-01",
    "strength": 0.8
  }
}
(inv:properties can be omitted)
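To illustrate the RDF-conversion point from the pros list, here is a
minimal sketch using the jsonld-java library
(com.github.jsonld-java:jsonld-java); that library choice is my
assumption, not something the inventory code uses today:

import com.github.jsonldjava.core.JsonLdOptions;
import com.github.jsonldjava.core.JsonLdProcessor;
import com.github.jsonldjava.utils.JsonUtils;

public class RelationToRdf {

    public static void main(String[] args) throws Exception {
        // A trimmed-down relation document, as the endpoint above would return it
        Object jsonld = JsonUtils.fromString(
              "{ \"@context\": { \"inv\": \"http://hawkular.org/inventory/0.0.1\" },"
            + "  \"@id\": \"http://127.0.0.1:8080/hawkular/inventory/relationships/1337\","
            + "  \"inv:label\": \"contains\" }");

        // Convert to an RDF dataset; from there it can be serialized as
        // N-Quads/Turtle or loaded into a triplestore and queried with SPARQL.
        Object rdf = JsonLdProcessor.toRDF(jsonld, new JsonLdOptions());
        System.out.println(rdf);
    }
}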
The work is almost done here [2].
[1]: http://www.w3.org/TR/json-ld/
[2]: https://github.com/hawkular/hawkular-inventory/pull/55
wdyt?
Looking for opportunities to code
by Sebastian Łaskawiec
Hey!
My name is Sebastian and I'm a remote productization engineer for JBoss
Data Grid from Poland. I also contribute to the New Castle project (a
replacement for Brew) and I'm really interested in Hawkular as our
monitoring approach for the future.
I would love to help you with the coding in my free slots. Personally, I'm
very interested in cloud-related technologies, so I think I would fit in
Metrics Storage or Analytics. Could you please give me a hint where I could
start and how I could help?
I also have a small question regarding the project's assumptions - do we
plan to analyze logs with Hawkular? This feature, together with alerting,
might create a lot of new use cases.
Thanks!
Sebastian
inventory hierarchy
by John Mazzitelli
Well, after staring at things today, I am coming to the conclusion that I should probably re-arrange my agent configuration before I get too far ahead. I need to know what people think.
The current Hawkular Monitor agent defines things like this: you define your metric definitions and then you define where your servers are (host/port) and which metrics to collect from those servers. A very small agent config would be something like:
<metric-set-dmr name="my-metrics">
<metric-dmr name="my-threads" resource="/core-service=platform-mbean/type=threading" attribute="thread-count"
</metric-set-dmr>
<managed-resources>
<remote-dmr name="my-server" host="myhost" port="9990" metricSets="my-metrics" />
</managed-resources>
But after looking at the kinds of metrics we want to collect, I think we need to introduce some intermediate data - specifically, a resource hierarchy.
For example, if you look at Kettle (which has a pretty interesting and non-trivial DMR hierarchy) there are things like this:
Alerts EAR (/deployment=hawkular-alerts-ear-1.0.ear)
|
\-- Alerts REST WAR (/deployment=hawkular-alerts-ear-1.0.ear/subdeployment=hawkular-alerts-rest.war/)
|
\-- Undertow Subsystem (/deployment=hawkular-alerts-ear-1.0.ear/subdeployment=hawkular-alerts-rest.war/subsystem=undertow/)
|
\-- METRIC: active-sessions
\-- METRIC: sessions-created
|
\-- Servlet (/deployment=hawkular-alerts-ear-1.0.ear/subdeployment=hawkular-alerts-rest.war/subsystem=undertow/servlet=org.hawkular.alerts.rest.HawkularAlertsApp/)
|
\-- METRIC: max-request-time
\-- METRIC: min-request-time
\-- METRIC: request-count
There are also EJB3 subdeployments under the EAR, which have singleton beans with execution-time and other invocation metrics.
So, clearly, I think there has to be something like managed entities in between those endpoint servers and the metrics in the agent configuration. Essentially, we need to define resources in the config.
I was thinking of renaming "managed-resources" to "managed-servers" and then having "managed-resources" in between servers and metrics:

<metric-set-dmr name="my-undertow-metrics">
  <metric-dmr name="Active Sessions" attribute="active-sessions" />
  <!-- 'resource' denotes a child resource under the resource that has this metric defined -->
  <metric-dmr name="Servlet Requests"
              resource="/subsystem=undertow/servlet=org.hawkular.alerts.rest.HawkularAlertsApp"
              attribute="request-count" />
</metric-set-dmr>

<managed-resource-set-dmr name="Alerts Application">
  <resource-dmr name="Alerts EAR"
                path="/deployment=hawkular-alerts-ear-1.0.ear" />
  <resource-dmr name="Alerts WAR"
                parent="Alerts EAR"
                path="/subdeployment=hawkular-alerts-rest.war"
                metricSets="my-undertow-metrics" />
</managed-resource-set-dmr>

<managed-servers>
  <remote-dmr name="my-server" host="myhost" port="9990" resourceSets="Alerts Application" />
</managed-servers>
One difference between this and RHQ is that I don't define those low-level resources as separate resources (in RHQ I would have to define the undertow subsystem as a child under the WAR, and then a servlet under the undertow resource). There are only the EAR and the WAR resources, with the low-level metrics from undertow and the servlet being "assigned" or linked to the WAR resource.
So the Hawkular inventory hierarchy would be this (compare with the WildFly hierarchy):
Alerts EAR
|
\-- Alerts WAR
|
\-- METRICS: Active Sessions (coming from undertow)
\-- METRICS: Servlet Requests (coming from the servlet inside undertow)
bus rest client
by John Mazzitelli
Did I mention this yet?
There is a REST client for sending bus messages. It is a single jar with only two dependencies (JBoss Logging and the Apache HTTP client). To send a bus message (in this case, to the alerts topic):
new RestClient(new URL("http://localhost:8080")).postTopicMessage("HawkularAlertData", yourJsonPayload, null);
That "null" can be a map of headers if you need to send headers.
The client is here: https://github.com/hawkular/hawkular-bus/tree/master/hawkular-bus-rest-cl... and the Maven artifact is org.hawkular.bus:hawkular-bus-rest-client - it's just a jar.
hawkular monitor agent integrated in kettle
by John Mazzitelli
The Hawkular Monitor agent is now integrated in kettle. When you build kettle, along with all the other stuff, you will get a new subsystem extension installed which will collect metrics from the kettle instance itself. It stores the data directly to Hawkular Metrics and sends a bus message so alerts can act on it.
You will see nothing in the UI regarding this yet - but every 5 minutes you should see some diagnostics in your log file. And nothing has yet been done with regard to inventory integration.