One thing I would like to add to the discussion, which I was thinking about yesterday:
Our customer base has grown substantially, and the customers we are now engaging
with have much larger deployments than in the past. I continue to hear complaints about
those customers' ability to move from one major EAP release to another. We also have a
specific customer that made us commit to keeping our JMS client compatible across
major releases (an EAP 5 JMS client can talk to an EAP 6 server, and to an EAP 7 server
too). I know of another customer that has complained about remote EJB calls in this
regard in the past as well. With these large deployments it becomes very difficult to do
a "big bang" upgrade, and they need the ability to have old clients talk to new servers.
So it would be great if this design took that need into account. I'm just thinking
that if the wire protocol ever needs to change in the future, we would still need to
be able to support the older clients, for example via the kind of version negotiation
sketched below.
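Just to make that concrete, here is a minimal illustrative sketch (the class and method
names are made up; this is not the actual Remoting or EJB client handshake) of the kind
of version negotiation that would let an older client keep talking to a newer server:

    // Hypothetical sketch, not the real protocol: the server negotiates down to the
    // highest protocol version both sides understand, so an older client still works.
    import java.util.Set;

    public final class VersionNegotiation {

        // Versions this (newer) server can speak; an EAP 5-era client might only offer 1.
        private static final Set<Integer> SUPPORTED_VERSIONS = Set.of(1, 2, 3);

        /**
         * Returns the protocol version to use for the session, or -1 if the
         * client offers nothing we can speak.
         */
        public static int negotiate(Set<Integer> clientOffered) {
            return clientOffered.stream()
                    .filter(SUPPORTED_VERSIONS::contains)
                    .max(Integer::compareTo)
                    .orElse(-1);
        }

        public static void main(String[] args) {
            // An old client that only knows version 1 still gets a usable session.
            System.out.println(negotiate(Set.of(1)));       // prints 1
            // A current client negotiates the newest common version.
            System.out.println(negotiate(Set.of(1, 2, 3))); // prints 3
        }
    }

The point is simply that the server keeps speaking every version it has ever shipped,
and the handshake settles on the newest one both sides share.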
Andy
----- Original Message -----
From: "David M. Lloyd" <david.lloyd(a)redhat.com>
To: jboss-as7-dev(a)lists.jboss.org
Sent: Tuesday, October 18, 2011 7:43:50 PM
Subject: Re: [jboss-as7-dev] Clustered invocation design
Great. We're now officially moving forward with #2. I am not
addressing automatic connection management at this time; it will be the
user's responsibility to establish connections for now. We can revisit
our options once we're up and running.
On 10/18/2011 08:07 PM, Jason Greene wrote:
> I think 2 is the only realistic option since it supports distant
> remote clients. It's also similar to what we had before. Async
> topology changes shouldn't be an issue IMO. The server endpoint
> always has the option to redirect to the right location. As long
> as we require that the server endpoints form a JGroups cluster,
> we can reuse the JGroups view IDs, which have already formed a
> consensus.
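For what it's worth, a minimal sketch of that idea on the receiving side; the types here
are made up (this is not the JGroups API), it just shows stale or out-of-order topology
updates being discarded by view ID:

    // Illustrative only: topology updates carry the view ID of the cluster view they
    // were derived from, and anything older than what we already applied is ignored.
    import java.util.List;

    public final class TopologyTracker {

        public record TopologyUpdate(long viewId, List<String> nodes) {}

        private volatile TopologyUpdate current = new TopologyUpdate(-1, List.of());

        /** Applies the update only if it is newer than what we already have. */
        public synchronized boolean apply(TopologyUpdate update) {
            if (update.viewId() <= current.viewId()) {
                return false; // stale or duplicate update arrived out of order; ignore it
            }
            current = update;
            return true;
        }

        public TopologyUpdate current() {
            return current;
        }
    }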
>
> Another interesting aspect is stateless load-balancing policy and
> stateful session affinity. Since time is short, my thinking is to
> start basic: round robin + sticky failover (see the sketch below).
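As a rough illustration of that policy (hypothetical class, not an existing AS7 or EJB
client type): round robin for stateless calls, plus sticky affinity for stateful
sessions that only fails over when the preferred node has left the current topology.

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;

    public final class RoundRobinStickyPolicy {

        private final AtomicInteger counter = new AtomicInteger();
        private final Map<String, String> sessionAffinity = new ConcurrentHashMap<>();

        /** Stateless invocation: plain round robin over the live nodes. */
        public String selectStateless(List<String> nodes) {
            return nodes.get(Math.floorMod(counter.getAndIncrement(), nodes.size()));
        }

        /** Stateful invocation: stick to the previous node, fail over if it left. */
        public String selectStateful(String sessionId, List<String> nodes) {
            return sessionAffinity.compute(sessionId, (id, previous) ->
                    previous != null && nodes.contains(previous)
                            ? previous                  // sticky: reuse the same node
                            : selectStateless(nodes));  // failover: pick a new one
        }
    }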
>
> Sent from my iPad
>
>> On Oct 18, 2011, at 9:31 AM, Jaikiran Pai <jpai@redhat.com> wrote:
>
>> From someone who has very limited knowledge of clustering internals:
>> the only "advantage" (for us) in going with #1 would be the ability
>> to set up easy connectivity? From what I understand, #2 would allow
>> more flexibility to the users (and a bit of complexity for us in
>> managing the connections and authentication) when setting up the
>> cluster for the invocations. Am I right?
>>
>> -Jaikiran
>> On Tuesday 11 October 2011 09:29 PM, David M. Lloyd wrote:
>>> There are at least two basic paths we can follow for clustered
>>> invocation based on the current architecture. The right choice is
>>> going to depend primarily upon the expected use cases, which I am
>>> not in a position to properly judge.
>>>
>>> Option 1: Clustered Invocation Transport
>>> ----------------------------------------
>>>
>>> In this option, we introduce a new "LAN" transport type for
>>> invocation on the cluster. The transport would use direct TCP
>>> connections or UDP messages (or both, depending on request size;
>>> see the sketch after this list) to convey the invocation. The
>>> characteristics of this option are as follows:
>>>
>>> - Security: reliance on physical network security only (no TLS or
>>>   authentication)
>>> - Latency is very low, even to new nodes
>>> - Topology changes can be conveyed as separate asynchronous
>>>   messages
>>> - Invocations from external networks would happen through a proxy
>>>   node, with Remoting being bridged to the LAN, to perform
>>>   security functions
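As a purely illustrative sketch of the "TCP or UDP depending on request size" point
(made-up class and threshold, not an actual AS7 transport), the sending side might look
roughly like this:

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public final class LanInvocationSender {

        // Assumed threshold; a real transport would derive this from the network MTU.
        private static final int MAX_DATAGRAM_PAYLOAD = 1400;

        public void send(byte[] invocation, InetSocketAddress target) throws IOException {
            if (invocation.length <= MAX_DATAGRAM_PAYLOAD) {
                // Small request: a single UDP datagram, lowest latency.
                try (DatagramSocket socket = new DatagramSocket()) {
                    socket.send(new DatagramPacket(invocation, invocation.length, target));
                }
            } else {
                // Large request: fall back to a TCP connection.
                try (Socket socket = new Socket(target.getAddress(), target.getPort());
                     OutputStream out = socket.getOutputStream()) {
                    out.write(invocation);
                }
            }
        }
    }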
>>>
>>> Option 2: Load-balanced Remoting Connections
>>> --------------------------------------------
>>>
>>> In this option, we rely on the client to establish one or more
>>> Remoting connection(s) to one or more of the nodes of the cluster.
>>> Logic in the client will be used to determine what connection(s)
>>> to use for what clusters (a client-side sketch follows this list).
>>> We have the option of automatically connecting as the topology
>>> changes, or of requiring the user to set up the connections in
>>> advance. Note that automatic connection cannot work in the case
>>> of user-interactive authentication. Characteristics:
>>>
>>> - Security: full authentication and TLS supported
>>> - Latency is low once the connection is established; however,
>>>   there is some overhead involved in authentication and security
>>>   negotiation
>>> - Topology changes should be asynchronous notifications
>>> - Each connection has to be separately authenticated
>>> - Automatically establishing connections is not presently
>>>   supported, so we'd need a bit of infrastructure for that: deal
>>>   with user-interactive authentication, with connection lifecycle
>>>   management, and with configuration. This will be a point of
>>>   fragility.
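To illustrate the "logic in the client" part (hypothetical types only, not the Remoting
API): the user registers the connections they opened and authenticated themselves, and
the invocation layer just picks among them per cluster.

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.concurrent.ThreadLocalRandom;

    public final class ClusterConnectionRegistry {

        /** Stand-in for an authenticated Remoting connection the user opened. */
        public record Connection(String nodeName) {}

        private final Map<String, List<Connection>> byCluster = new ConcurrentHashMap<>();

        /** Called by the user after connecting (and authenticating) manually. */
        public void register(String clusterName, Connection connection) {
            byCluster.computeIfAbsent(clusterName, c -> new CopyOnWriteArrayList<>())
                     .add(connection);
        }

        /** Picks a connection for an invocation targeting the given cluster. */
        public Connection select(String clusterName) {
            List<Connection> candidates = byCluster.getOrDefault(clusterName, List.of());
            if (candidates.isEmpty()) {
                throw new IllegalStateException("No connection registered for cluster " + clusterName);
            }
            return candidates.get(ThreadLocalRandom.current().nextInt(candidates.size()));
        }
    }

A load-balancing policy like the round-robin one sketched earlier could plug in where
the random selection is.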
>>>
>>> Summary
>>> -------
>>>
>>> For both options, we have to determine an appropriate
>>> load-balancing strategy. The choice of direction will affect how
>>> our clustering and transaction interceptors function. We also
>>> have to suss out the logic around dealing with conflicting or
>>> wrongly-ordered topology updates; hopefully our existing policies
>>> will continue to apply.
>>>
>>
--
- DML
_______________________________________________
jboss-as7-dev mailing list
jboss-as7-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/jboss-as7-dev