Packaging
by Thomas Heute
I would prefer if the whole WF packaging/layering would happen in
Hawkular rather than with multiple layers of packaging as it is today.
So far I have done 2 things:
1 - Basically renamed Kettle to Hawkular (+ a few extras). The kettle
directory is now 'dist', which should be more intuitive.
https://github.com/theute/hawkular/commit/e2b937b467e4ed8c184b51add5af7b5...
2 - Brought the nest distro build from Bus to Hawkular:
https://github.com/theute/hawkular/commit/84ed6f505a5db73ecf9264c1bc8dea3...
I understand that there could still be a need for the nest (I think
Alert uses it for testing), but I find the double customization of WF
quite complex.
My next items unless I am stopped:
- Fix the Docker build, as I guess I broke it
- Do proper WF layering (not using 'modules/base')
- Understand "hawkular-nest-wf-extension" ;)
- Upgrade to WF9
- Update the docs
Thomas
9 years, 6 months
Branch Kettle?
by Jay Shaughnessy
Related to the "Adding git SHA1 & Co. to manifest.mf to improve
traceability" thread...
In the interim between now and when we move to strong versioning could
we perhaps just create a "stable" branch for kettle? Note the term
"stable" as opposed to "release" or something like that. This idea was
kicked around at the F2F as a possible solution.
The stable branch would have poms updated with non-snapshot versions of
the components and so would be reproducible. It could be used for demos
without fear of snapshot regression. It could be used by QE for
test-case development and [non-release] qualification. Kettle itself
could be versioned, bumping its version whenever a consumed component
version was updated.
Components would be required to make at least an initial release to get
things going, and then should make subsequent releases at relatively
short intervals, maybe every few weeks.
Will this work and be helpful?
9 years, 6 months
Event Store
by Gary Brown
Hi
As discussed yesterday, we need a general event store that can persist relevant events off the bus. However, we didn't decide where this functionality should reside: in an existing component or in a new one.
Although it could be provided as a separate component, I am not sure we have any use cases where it would be used independently of the bus, so I think it is probably best to include it as additional functionality within the Bus component.
Agree/disagree?
The other issues are:
- how to determine which events should be persisted, as we may want some events to trigger further processing by subscribers on the bus but not necessarily be stored for historic analysis
- for the event store, we will need to know the type of the event being stored
So one possible solution to both issues: if the 'type' of the event is provided (possibly as a header value), then it will be persisted.
Then within Elasticsearch, we can use the tenant/persona as the Index (to truly segment the data) and the event type to ensure different information types are partitioned/indexed appropriately.
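That routing rule could be sketched roughly as follows (class, method, and header names here are illustrative assumptions, not the actual Bus API):

```java
import java.util.Map;
import java.util.Optional;

// Sketch of the proposed rule: persist an event only when it carries a
// 'type' header. The tenant/persona becomes the Elasticsearch index and
// the event type partitions the data within it. Names are illustrative.
public class EventStoreRouter {

    /** Returns "index/type" if the event should be persisted, empty otherwise. */
    public static Optional<String> route(String tenant, Map<String, String> headers) {
        String type = headers.get("type");
        if (type == null) {
            // No type header: the event may still trigger subscribers on the
            // bus, but it is not stored for historic analysis.
            return Optional.empty();
        }
        return Optional.of(tenant + "/" + type);
    }
}
```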
Thoughts?
Regards
Gary
9 years, 6 months
Adding git SHA1 & Co. to manifest.mf to improve traceability
by Peter Palaga
Hi *,
citing from https://issues.jboss.org/browse/HAWKULAR-176 :
== Motivation
The proposed changes should improve the traceability of the components
delivered with kettle. Because we use SNAPSHOTs to build kettle ATM,
there is no way to find out which state of the individual components'
git repos underlies a given kettle distribution.
In a situation where Lukáš has a working kettle distro and Thomas H.
cannot manage to build one, they can go through the SHA1 hashes listed
in the manifest.mf files of the kettle components to find out where the
difference lies.
This proposal is not a proper solution to the problem that kettle builds
are not reproducible. It just picks some low-hanging fruit to soften
the possible negative impact of our irreproducible builds.
== Changes
Maven should be configured in such a way that .jar, .war and .ear files
will have the following new entries added to their manifest.mf files:
Built-From-Git-SHA1 - the last git commit's hash
Built-On - the time when the build has started
Built-From-Git-Branch - the git branch being built from
Further, when the release profile is active, the build should fail if
there are uncommitted local changes.
See https://github.com/hawkular/hawkular-parent-pom/pull/21
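For illustration, once such entries are in the manifests, two kettle installs could be diffed by component SHA1 by reading them back with java.util.jar.Manifest. A minimal sketch (only the entry name comes from the proposal above; everything else is assumed):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.jar.Manifest;

// Sketch: reading the proposed Built-From-Git-SHA1 entry back out of a
// manifest, e.g. to compare the components of two kettle distributions.
public class BuildInfo {

    public static String gitSha1(byte[] manifestBytes) throws IOException {
        Manifest mf = new Manifest(new ByteArrayInputStream(manifestBytes));
        return mf.getMainAttributes().getValue("Built-From-Git-SHA1");
    }
}
```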
Best,
Peter
9 years, 6 months
Inventory authz workflows in Hawkular SaaS
by Lukas Krejci
I've been scratching my head for most of today to come up with the
following workflows. While there already is an implementation of the inventory
integrated with accounts, following the f2f discussions it needs to see some
changes.
Before implementing them though, I'd like to run the below workflows and
assumptions through Juca in particular to correct any of the misconceptions
about accounts that I still might have ;)
Account resources:
Each entity in inventory is represented as a "resource" in accounts. The word
"resource" is something different here than what it is inside inventory.
Operations:
* update-tenant - checked on the actual tenant
* delete-tenant - checked on the actual tenant
* create-environment - checked on the tenant the environment should belong to
* update-environment - checked on the actual environment
* delete-environment - checked on the actual environment
* create-feed - checked on the environment the feed should belong to
* update-feed - checked on the actual feed entity
* delete-feed - checked on the actual feed entity
* ... etc. for all the rest of the inventory entity types
* relate - checked on the source inventory entity of a relationship to be
created
Notice the absence of "read-*" privileges, which are implied - each persona can
only read its tenant and everything underneath it.
Also notice the lack of "create-tenant" - that operation doesn't actually
make sense, because each persona IS a tenant. The tenant is created by the
inventory on the fly if needed (and yes, tenantId is going to move to the
headers or query params in the inventory REST the same way as it did in
metrics ;) ).
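The pattern in the operation list above (create-* and relate checked on the parent/source entity, update-*/delete-* on the entity itself) could be captured roughly like this (purely illustrative, not the real hawkular-accounts API):

```java
// Illustrative sketch of the rule behind the operation list above:
// create-* and relate are checked on the parent/source entity, while
// update-* and delete-* are checked on the entity itself.
public class AuthzTarget {

    public static String targetOf(String operation, String entityId, String parentOrSourceId) {
        if (operation.startsWith("create-") || operation.equals("relate")) {
            return parentOrSourceId;
        }
        return entityId;
    }
}
```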
Now to the workflows:
* New user registration:
1) user created in KC
2) a tenant is created with the same ID as the user, with the user set as its
owner
3) Step 2) implies the user has the "Super User" role on the tenant
* New org registration:
1) User is registered normally
2) User creates a new organization
3) The user is set as owner of the org, having SU perms on it; the user is
also a member of the org. This is all done implicitly by accounts.
4) New tenant with the same ID as org is created with org as its owner
5) The above means that both the org and the user that created it have SU on
the new tenant
* Adding org2 under org1:
1) Orgs registered normally
2) org2 added under org1 (using some accounts mechanism)
3) The above means that org1 is now the owner of org2
4) Roles are assigned on the tenant of org1 to the org2
5) The above means that org2 might NOT be SU on org1's tenant
6) Steps 4) and 5) might be repeated for any entity in the tenant of org1.
7) Note that this is entirely doable using accounts mechanisms.
8) Might require "translation" from the ID of an accounts resource to a URL
of the "real" entity in the component's REST or in the UI.
* Adding user to an org:
1) User is registered normally
2) Org is registered normally
3) User is added to the org (using some accounts mechanism)
4) This means the user has SU on the tenant of the org (because org is SU on
the tenant)
5) This also means that the user might not be SU on any of the org's sub-orgs.
* Assigning operations to roles:
1) This is entirely in accounts. Each component defines a set of operations
that can be invoked. The operations then can be added to roles. This also puts
constraints on the possible names of the operations (i.e. they should probably
be prefixed by the component name).
* Listing tenants:
1) No-one has the "full picture"
2) Listing tenants is equivalent to listing the user along with the orgs
they're member of (singular "they" to be politically correct - don't you love
English? ;) ).
3) Operations on the individual tenants depend on the roles the user has on
the corresponding orgs.
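In code, the listing described above amounts to something like this (illustrative types only, not the accounts API):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of "listing tenants" as described above: a user's visible tenants
// are the user itself (each persona IS a tenant) plus the orgs they are a
// member of. No component ever has the full picture.
public class TenantListing {

    public static List<String> tenantsOf(String userId, List<String> orgsUserIsMemberOf) {
        List<String> tenants = new ArrayList<>();
        tenants.add(userId);
        tenants.addAll(orgsUserIsMemberOf);
        return tenants;
    }
}
```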
K, to not make this email even longer, I'll stop here. Does the above sound
reasonable? What can be simplified or staged to later versions? What is/is not
supported in the current accounts impl and UI?
Cheers,
Lukas
9 years, 6 months
Automagic Jira state transitioning (for HAWKULAR-* )
by Heiko W.Rupp
Last week we were talking about auto-actions in Jira.
This has now been implemented for HAWKULAR-* on all github.com/hawkular
repos.
See below for a transition that happened when a PR was sent.
I think for this to work, the JIRA issue already has to be in the
"Coding in progress" state.
Heiko
Forwarded message:
> From: Anonymous (JIRA) <issues(a)jboss.org>
> To: hrupp(a)redhat.com
> Subject: [JBoss JIRA] (HAWKULAR-175) Include bus sample using ActiveMQ
> virtual topics
> Date: Fri, 8 May 2015 11:26:46 -0400 (EDT)
>
>
> [
> https://issues.jboss.org/browse/HAWKULAR-175?page=com.atlassian.jira.plug...
> ]
>
> Issue was automatically transitioned when Gary Brown created pull
> request #17 in GitHub
> ---------------------------------------------------------------------------------------
> Status: Pull Request Sent (was: Open)
>
>
>> Include bus sample using ActiveMQ virtual topics
>> ------------------------------------------------
>>
>> Key: HAWKULAR-175
>> URL: https://issues.jboss.org/browse/HAWKULAR-175
>> Project: Hawkular
>> Issue Type: Task
>> Components: Bus
>> Reporter: Gary Brown
>> Assignee: Gary Brown
>>
>> In standard JMS, for a consuming application to operate in a cluster
>> to support load balancing and fail over it needs to consume from a
>> queue.
>> However in some situations, the information being published to that
>> queue would also be of use for other applications. In this scenario,
>> it would either be necessary for the producer to know about the
>> number of consuming apps, and send the message to an individual queue
>> per app (which makes it difficult to dynamically add further
>> consuming apps), or switch to using a topic, so the publisher is
>> independent of the number of consumers (which loses the benefit of
>> load balanced consumers, as each clustered instance of the app would
>> perform duplicate processing of the messages).
>> ActiveMQ provides a solution using "virtual topics" where producers
>> simply publish to a topic (and therefore don't care about the number
>> of consumers), but the consumers use queues scoped to the application
>> name - and therefore multiple independent apps can receive the same
>> message, and also have multiple load balanced instances of the app on
>> different servers in a cluster.
>> A modified version of the simple MDB sample should be provided to
>> demonstrate use of this "virtual topic" capability using the standard
>> JMS APIs.
>> See http://activemq.apache.org/virtual-destinations.html for further
>> details.
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v6.3.15#6346)
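For reference, the virtual-topic convention described in the quoted issue boils down to a destination-naming scheme. The prefixes below follow ActiveMQ's documented defaults; the app and topic names are made up:

```java
// ActiveMQ virtual topics by default: producers publish to a topic named
// "VirtualTopic.<name>"; each consuming application reads from its own
// queue "Consumer.<app>.VirtualTopic.<name>", so independent apps all get
// the message while each app's clustered instances load-balance on its queue.
public class VirtualTopicNames {

    public static String producerTopic(String name) {
        return "VirtualTopic." + name;
    }

    public static String consumerQueue(String app, String name) {
        return "Consumer." + app + "." + producerTopic(name);
    }
}
```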
9 years, 6 months
GSoC 2015 - Hawkular - pluggable data processors for metrics
by Aakarsh Agarwal
Hi folks,
First of all, sorry for the delay in this introductory mail from my side.
My name is Aakarsh Agarwal, currently a student at the Indian Institute of Engineering, Roorkee, India.
I have been selected for GSoC 2015 with JBoss to do a project on "Hawkular - pluggable data processors for metrics". The project aims to develop an interface for plugins that improve the performance of Hawkular Metrics, making it more dependable and dynamic, and extending the scope of its usage in operating on data sets.
I hang out on IRC under the nick "akki_007", and my blog, which I will update regularly with my progress on this project, is coming soon. Looking forward to starting the work ASAP.
Hopefully, it will be a great Summer of Code!
Regards
Aakarsh Agarwal
9 years, 7 months