On 10/11/2015 18:09, John Sanda wrote:
> After this long introduction, there are three main reasons the
> current solution needs improvements:
> >
> > 1) Addressability -> the current solution does not work in the distributed
> > environment because there is no clear way to access the public API of the services
> > deployed. Let's say the installation is spread across 5 containers. How can I make a
> > public API call from a Metrics instance to an Alerts instance? There is no directory to
> > know where the Alerts or Metrics instances are deployed.
Addressability is provided by the messaging system. There is no need for a directory; you
just need to communicate with the messaging server/broker. Beyond that, there are many
features around addressability and routing, such as message selectors, message grouping,
hierarchical topics, and more.
+1
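To make the addressability point concrete, here is a minimal plain-Java sketch (the broker class and destination names are hypothetical, standing in for what the real messaging server provides): a producer addresses a named destination on the broker and never needs to know where, or how many, consumers are deployed.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Hypothetical in-memory "broker" illustrating why no service directory is
// needed: both sides only need a reference to the broker and a destination name.
class TinyBroker {
    private final Map<String, Queue<String>> destinations = new HashMap<>();

    // A producer (e.g. a Metrics instance) sends to a destination by name.
    synchronized void send(String destination, String message) {
        destinations.computeIfAbsent(destination, d -> new ArrayDeque<>()).add(message);
    }

    // A consumer (e.g. an Alerts instance) likewise only knows the destination
    // name, not the producers' locations.
    synchronized String receive(String destination) {
        Queue<String> q = destinations.get(destination);
        return q == null ? null : q.poll();
    }
}

public class Addressability {
    public static void main(String[] args) {
        TinyBroker broker = new TinyBroker();
        // Metrics "calls" Alerts with no knowledge of where Alerts is deployed:
        broker.send("HawkularAlerts", "trigger-check:metric-42");
        System.out.println(broker.receive("HawkularAlerts")); // prints trigger-check:metric-42
    }
}
```

A real broker adds the routing features mentioned above (selectors, grouping, hierarchical topics) on top of this basic name-based addressing.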
> >2) Load distribution -> there is no clear way how the load distribution
> > works or who is responsible for it.
Again, this is largely handled by the messaging system. Multiple consumers take messages
from a queue where each message corresponds to work to be done.
Right, load distribution is a feature of the messaging system. As for
HTTP load balancing, there is dedicated hardware and software for this; I
would avoid building an HTTP load balancer into the project.
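The competing-consumers pattern described above can be sketched in plain Java (a `BlockingQueue` standing in for the broker's queue; the class and method names are illustrative): each message is taken by exactly one consumer, so load spreads across however many consumers we choose to run, with no load balancer in our code.

```java
import java.util.concurrent.*;

// Hedged sketch of competing consumers draining one shared queue -- the
// load-distribution behavior a messaging system gives us for free.
public class CompetingConsumers {

    // Pre-loads `messages` work items, runs `consumers` threads against the
    // shared queue, and returns how many distinct messages were processed.
    static int drain(int messages, int consumers) throws InterruptedException {
        BlockingQueue<Integer> workQueue = new LinkedBlockingQueue<>();
        for (int i = 0; i < messages; i++) workQueue.add(i);

        ConcurrentMap<Integer, Integer> processedBy = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(consumers);
        for (int c = 0; c < consumers; c++) {
            final int consumerId = c;
            pool.submit(() -> {
                Integer msg;
                // poll() hands each message to exactly one consumer
                while ((msg = workQueue.poll()) != null) {
                    processedBy.put(msg, consumerId);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return processedBy.size();
    }

    public static void main(String[] args) throws InterruptedException {
        // 100 messages, 5 competing consumers: every message handled once.
        System.out.println(CompetingConsumers.drain(100, 5)); // prints 100
    }
}
```

Scaling out is then just starting more consumers against the same queue.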
> >3) Availability of the existing public API -> There is no
> > reason to implement a new API just for the purposes of communicating between the
> > components. Given that we strive for this micro-service architecture, the one and single
> > public API should be the main method for communicating between components for
> > request-reply.
I do not think it is a given that we strive for a micro-service architecture. It might
make more sense in an OpenShift environment, but I don’t think it necessarily does in
general.
> >We might need to extend what we have but the public API should be front and
> > centre. So if we use JMS or HTTP (or both, or UDP, or any other method), the public API
> > interface should be available over all the channels. Sure, there might be differences in how
> > to make a request in JMS vs HTTP (one with temporary queues, the other with direct
> > HTTP connections) but the functionality should be identical.
I don’t know that I agree with this. Suppose we decide to offer an API for inserting
metric data over UDP to support really high-throughput situations. Are you suggesting, for
example, that the operations we provide via REST for reading metric data should also be
available via the UDP API? And since the motivation for a UDP API is
performance/throughput, we might even want a more compact request format than JSON.
Lastly and most importantly, if you want to push for an alternative communication mechanism
between components in H-Metrics and H-Alerts, then you should push for the same across all of
Hawkular, because it does not make sense to have to support two different mechanisms.
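For reference, the "temporary queues" request-reply style mentioned in the quoted proposal can be sketched in plain Java (a `SynchronousQueue` standing in for a JMS temporary destination; the class and method names are illustrative): the caller attaches a private reply channel to each request, much like `JMSReplyTo`, and blocks until the responder answers on it.

```java
import java.util.concurrent.*;

// Hypothetical sketch of JMS-style request-reply over a shared request queue.
public class RequestReply {

    // A request carries its own reply channel, like the JMSReplyTo header.
    record Request(String body, BlockingQueue<String> replyTo) {}

    static String call(BlockingQueue<Request> requestQueue, String body)
            throws InterruptedException {
        // One temporary reply queue per request; no one else can see it.
        BlockingQueue<String> tempReplyQueue = new SynchronousQueue<>();
        requestQueue.put(new Request(body, tempReplyQueue));
        return tempReplyQueue.take(); // block until the reply arrives
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<Request> requestQueue = new LinkedBlockingQueue<>();

        // Responder side (say, an Alerts instance) services the shared queue.
        Thread responder = new Thread(() -> {
            try {
                Request req = requestQueue.take();
                req.replyTo().put("reply:" + req.body());
            } catch (InterruptedException ignored) { }
        });
        responder.start();

        // Caller side (say, a Metrics instance) never learns the responder's address.
        System.out.println(call(requestQueue, "status?")); // prints reply:status?
        responder.join();
    }
}
```

Whether this style or direct HTTP should carry the inter-component public API is exactly the point under debate above.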
The public API (JSON over HTTP) is how users must interact with the
Hawkular platform. It is the unique entry point for users.
It was agreed at the project's inception that Hawkular components
talk to each other via the bus. Breaking this assumption is not a
trivial change: everything the bus provides (see above) would need
to be re-implemented.
That is why I am opposed to the change, unless the bus is proven to be a
limitation.