On 13/09/16 22:40, Brian Stansberry wrote:
>
> There is nothing fundamental that says you can’t do this so long as
> you’re not trying to execute these ops as part of the start/stop of an
> MSC service. It doesn’t sound like that’s the case.
Indeed.
> It does sound though like you want to expose management over the
> non-management interface. That’s a significant security hole.
Well, it is a protocol operation which has a management side-effect. The
way we have approached that in other similar situations is to either
require access through a loopback interface, or to require that
authentication and authorization be enabled on the endpoint and that the
subject requesting the operation hold an Admin permission. Note however
that the Hot Rod endpoint would be using a different security realm than
the management one.
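
As a sketch of that gate (Authorizer is a hypothetical hook into
whatever backs the endpoint's realm; only InetAddress and Subject are
standard APIs, the rest is illustrative):

import java.net.InetAddress;
import javax.security.auth.Subject;

public class AdminOpGate {

    // Hypothetical hook into the authorization service backing the
    // endpoint's security realm.
    public interface Authorizer {
        boolean hasAdminPermission(Subject subject);
    }

    private final Authorizer authorizer;
    private final boolean authEnabled;

    public AdminOpGate(Authorizer authorizer, boolean authEnabled) {
        this.authorizer = authorizer;
        this.authEnabled = authEnabled;
    }

    // Allow the management side-effect only over loopback, or when
    // authn/authz is enabled and the subject holds Admin permission.
    public void checkAllowed(InetAddress remote, Subject subject) {
        if (remote.isLoopbackAddress()) {
            return;
        }
        if (authEnabled && subject != null
                && authorizer.hasAdminPermission(subject)) {
            return;
        }
        throw new SecurityException(
            "Cache creation requires loopback access or Admin permission");
    }
}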
>> I guess that in standalone mode this wouldn't be too much of an issue,
>> with two caveats:
>> - all nodes in a cluster should apply the changes to their own
>> configuration, leveraging the model rollback mechanism to handle
>> failures on other nodes
> There is no multi-process model rollback mechanism with standalone
> servers.
I know: this would have to be implemented by the subsystem using the
cluster transport.
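
Something like the following is what I have in mind: apply the change on
every member over the transport, and undo it on the nodes that already
succeeded if one fails. Node and CacheConfig are hypothetical stand-ins,
not actual Infinispan types:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class ClusterWideConfigUpdate {

    // Hypothetical stand-ins for the transport view and the cache model.
    public interface Node {
        void applyCacheConfig(CacheConfig config) throws Exception;
        void removeCacheConfig(String cacheName);
    }

    public static final class CacheConfig {
        public final String name;
        public final String xml; // serialized cache configuration
        public CacheConfig(String name, String xml) {
            this.name = name;
            this.xml = xml;
        }
    }

    // Apply on every member; if any node fails, undo the change on the
    // nodes that already succeeded, in reverse order.
    public void apply(List<Node> members, CacheConfig config) {
        Deque<Node> applied = new ArrayDeque<>();
        try {
            for (Node node : members) {
                node.applyCacheConfig(config);
                applied.push(node);
            }
        } catch (Exception e) {
            while (!applied.isEmpty()) {
                applied.pop().removeCacheConfig(config.name);
            }
            throw new IllegalStateException(
                "Cluster-wide cache creation failed and was rolled back", e);
        }
    }
}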
>> - new nodes joining the cluster (and therefore with a possibly
>> outdated configuration) would receive the configuration of caches
>> already running in the cluster and apply it locally
>
> How does this happen?
Again, via the cluster transport.
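
Roughly: a joining node asks the coordinator for the configurations of
the caches currently running and defines any it is missing. A sketch,
again with hypothetical interfaces rather than actual Infinispan types:

import java.util.Map;

public class JoinTimeReconciliation {

    // Hypothetical: the coordinator's view of running caches and the
    // joining node's local configuration registry.
    public interface Coordinator {
        Map<String, String> runningCacheConfigs(); // name -> config
    }

    public interface LocalRegistry {
        boolean isDefined(String cacheName);
        void define(String cacheName, String serializedConfig);
    }

    // On join, adopt the configuration of any cache the cluster is
    // already running that this node does not know about.
    public void onJoin(Coordinator coordinator, LocalRegistry local) {
        for (Map.Entry<String, String> e :
                coordinator.runningCacheConfigs().entrySet()) {
            if (!local.isDefined(e.getKey())) {
                local.define(e.getKey(), e.getValue());
            }
        }
    }
}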
>> The real tricky bit is obviously domain mode. The server receiving the
>> cache creation request would need to delegate it to the DC who would
>> then apply it across the profile. However this clashes with the fact
>> that, as far as I know, there is no way for a server to communicate
>> with its DC. Is this type of functionality planned?
>
> Not actively. We looked into it a bit in the context of DOMAIN_PING,
> but for that use case it became apparent that invoking a bunch of
> management ops against the DC was a non-scalable solution. This sounds
> different; a server would need to figure out what profile stores the
> infinispan subsystem config (it may not be the final one associated
> with the server group, since profile x can include profile y) and then
> make a change to that profile. That’s scalable.
>
> If we set up this kind of connection we’d need to ensure the caller’s
> security context propagates. Having an external request come in to a
> server and then get treated by the DC as if it were from a trusted
> caller like a slave HC or server would be bad.
As described above, the caller might not be in the same security realm
as the management stuff.
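
For reference, once the owning profile is resolved the change the DC
applies would just be a standard management op. A minimal sketch using
the WildFly ModelControllerClient (host, port, profile, container and
cache names, and the mode attribute are all illustrative):

import org.jboss.as.controller.client.ModelControllerClient;
import org.jboss.dmr.ModelNode;

public class AddCacheToProfile {
    public static void main(String[] args) throws Exception {
        try (ModelControllerClient client =
                 ModelControllerClient.Factory.create("dc-host", 9990)) {
            // Add a distributed cache to the infinispan subsystem of
            // the profile that actually stores it.
            ModelNode op = new ModelNode();
            op.get("operation").set("add");
            ModelNode address = op.get("address");
            address.add("profile", "full-ha");
            address.add("subsystem", "infinispan");
            address.add("cache-container", "clustered");
            address.add("distributed-cache", "my-new-cache");
            op.get("mode").set("SYNC");

            ModelNode result = client.execute(op);
            if (!"success".equals(result.get("outcome").asString())) {
                throw new IllegalStateException(
                    "Cache add failed: " + result);
            }
        }
    }
}

The hard parts Brian describes are the steps before this: resolving
which profile actually stores the infinispan subsystem, and propagating
the original caller's security context rather than the server's.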
>> I have created a document which describes what we'd like to do at [1].
>>
>> Tristan
>>
>> [1] https://github.com/infinispan/infinispan/wiki/Create-Cache-over-HotRod
--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat