On 6 May 2009, at 11:14, Adrian Cole wrote:
This is interesting. What is important, after all? In the case of a grid, it is more like a quorum that allows operations to continue without data loss. I'm not sure individual instances matter, as complete sets of EC2 instances could go up or down with no effect on the cluster as a whole.
Would it not be the cache instances, or the JGroups configuration, that are the most important managed resources in this case?
I tend to agree with Adrian, although if an individual node were to be 'bounced' for whatever reason, naming, as far as management is concerned, could be statically configured. E.g., we could introduce a configuration element for the node name, so that the management console would see information scoped as follows:
+ app-1-cluster
   + data-cache
      + node1
      + node2
      + node3
   + some-other-cache
      + node1
      + node2
      + node3
+ app-2-cluster
- etc -
The current configuration allows you to specify the cache name and cluster name; a node name could be added. E.g.:
<global>
   <transport clusterName="app-1-cluster" nodeName="node1">
   </transport>
</global>

<cache name="data-cache">
   ....
</cache>
If nodeName is not specified, the network address is used instead. These details could then be exposed via JMX for collection by the JOPR agent.
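Such JMX exposure could look roughly like the following sketch, using the standard javax.management API. The MBean interface, attribute names, and ObjectName layout here are illustrative assumptions, not Infinispan's actual management API.

```java
import javax.management.MBeanServer;
import javax.management.ObjectName;
import java.lang.management.ManagementFactory;

// Sketch: expose the configured cluster/node names as read-only JMX
// attributes so a management agent can collect stable names. The
// ObjectName scheme and attribute names are assumptions for illustration.
public class NamingMBeanSketch {

    // Standard MBean convention: interface name = implementation name + "MBean"
    public interface CacheIdentityMBean {
        String getClusterName();
        String getNodeName();
    }

    public static class CacheIdentity implements CacheIdentityMBean {
        private final String clusterName;
        private final String nodeName;

        public CacheIdentity(String clusterName, String nodeName) {
            this.clusterName = clusterName;
            this.nodeName = nodeName;
        }

        public String getClusterName() { return clusterName; }
        public String getNodeName()    { return nodeName; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Hypothetical naming scheme mirroring the clusterName/nodeName config
        ObjectName name = new ObjectName(
            "org.example:type=CacheIdentity,cluster=app-1-cluster,node=node1");
        server.registerMBean(new CacheIdentity("app-1-cluster", "node1"), name);

        // An agent could now read the attributes over JMX:
        System.out.println(server.getAttribute(name, "NodeName")); // prints "node1"
    }
}
```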
So assuming that solves the naming problem, I still think the main problem is discovery.
It makes sense that the console talks to agents, and agents talk to the processes running locally, which is fine. But how does the console find the agents? :-) Or is the console's location statically configured on each agent, so that the agent reports its location on startup? That makes sense to me...
-Adrian
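As a sketch of the statically-configured approach Adrian suggests: a JOPR/RHQ-style agent typically learns the console's location from its local configuration and registers itself on startup. The fragment below is illustrative only; the property names are assumptions, not verified against a specific JOPR release.

```xml
<!-- Hypothetical agent-side preferences fragment pointing the agent at the
     management console; key names are illustrative assumptions. -->
<node name="default">
   <map>
      <entry key="rhq.agent.server.bind-address" value="console.example.com"/>
      <entry key="rhq.agent.server.bind-port" value="7080"/>
   </map>
</node>
```

With this in place the console never has to discover agents itself; each agent phones home on startup.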
On Wed, May 6, 2009 at 11:11 AM, Heiko W. Rupp
<hwr@redhat.com> wrote:
Manik Surtani schrieb:
> Is there a way to use JGroups for discovery? If the console was running
> in the same VM as any of the cache instances, it could delegate discovery
> to the cache, which could expose a set of addresses.
Yes, of course.
The console (be it Jopr or Embedded Jopr) never connects to a managed
resource itself, but the agent-plugin does this. So you could e.g. have
an agent running within EC2 that has the Infinispan plugin, which talks to
all the cache nodes and the server UI would run in the enterprise and would
talk to that agent.
The most difficult part would be getting the naming of the individual Infinispan instances
on the various hosts right (*), especially when a single agent is managing multiple
instances.
(*) The name of a resource must not change between discovery runs. That is
why, for example, the process id is not allowed: a process restart would discover a
different resource, and the existing one would be marked as down.
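Heiko's footnote can be illustrated with a small sketch: a resource key derived from configured names is stable across restarts, while one derived from the pid is not. The names and key format below are illustrative, not JOPR's actual key scheme.

```java
// Sketch: stable vs. unstable resource keys for discovery. A key built from
// configured names survives a process restart; a key built from the pid does
// not, so the restarted process would show up as a new resource and the old
// one would be marked as down.
public class ResourceKeySketch {

    // Stable: derived only from statically configured names
    public static String stableKey(String cluster, String node, String cache) {
        return cluster + "/" + node + "/" + cache;
    }

    // Unstable: the pid changes on every restart
    public static String unstableKey(long pid, String cache) {
        return pid + "/" + cache;
    }

    public static void main(String[] args) {
        String before = stableKey("app-1-cluster", "node1", "data-cache");
        String after  = stableKey("app-1-cluster", "node1", "data-cache");
        System.out.println(before.equals(after)); // prints "true"

        // Simulated restart: same cache, different pid -> different key
        System.out.println(unstableKey(1234, "data-cache")
            .equals(unstableKey(5678, "data-cache"))); // prints "false"
    }
}
```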
_______________________________________________
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev