Jonathan Halliday commented on JBTM-312:
----------------------------------------
'clustering' has a vague meaning and different semantics for the various tx
components, so this actually breaks down into a number of distinct areas:
- HA, meaning continued overall operation of a multi-node system in a timely manner even
in the case of failure of some subset of the nodes.
- load balancing, meaning a single system as perceived by the user is actually implemented
as multiple nodes for scalability.
Background:
Core is not distribution aware, except insofar as txoj allows multiple JVMs to share
object state by sharing an objectstore. As long as the store implementation has the
required behaviour there is no further work needed. Some older stores do; the newer
optimised ones that use an in-process state cache don't, and we have no plans to change
that.
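As a minimal illustration of the shared-store setup (a sketch only: the path is
hypothetical, and the property is normally set in the jbossts properties file rather
than programmatically):

    // Hypothetical sketch: point this JVM's objectstore at shared media so txoj
    // state written by one JVM is visible to the others. Whether this is safe
    // depends on the store implementation, per the caveat above.
    public class SharedStoreConfig {
        public static void main(String[] args) {
            // assumption: /mnt/san/txstore is visible to all participating JVMs
            System.setProperty("com.arjuna.ats.arjuna.objectstore.objectStoreDir",
                    "/mnt/san/txstore");
            // ... initialise txoj / the tm after this point ...
        }
    }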
Transactions are always presumed-abort until prepare. There is no failover for pre-prepare
tx - node failure always results in rollback. No plans to change that either, as there is
no call for it. Post-prepare (actually post-log-write) tx have state in the store. In the
event of failure it's the job of the recovery manager to complete the tx. There must
be exactly one recovery manager process per store. The recovery manager may be in-process
with the tm or, for certain components, out of process.
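To make the in-process vs. out-of-process distinction concrete, a rough sketch using
the RecoveryManager API (treat the details as illustrative rather than definitive):

    import com.arjuna.ats.arjuna.recovery.RecoveryManager;

    public class RecoveryModes {
        public static void main(String[] args) {
            // In-process with the tm: a background thread periodically scans the
            // store and completes post-prepare (logged) transactions.
            RecoveryManager recMgr =
                    RecoveryManager.manager(RecoveryManager.INDIRECT_MANAGEMENT);

            // ... at shutdown:
            recMgr.terminate();

            // Out-of-process alternative: run the recovery manager as its own JVM, e.g.
            //   java com.arjuna.ats.arjuna.recovery.RecoveryManager
            // Either way, there must be exactly one recovery manager process per store.
        }
    }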
JTA is not distribution aware, except insofar as specific XAResource drivers may do
networking. For recovery processing, the log records in the store must be supplemented by
additional configuration information. For example, in the case of non-serializable
XAResources the recovery process must have datasource definitions equivalent to those of
the tm process.
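For example, the JTA recovery module locates XAResources through the
com.arjuna.ats.jta.recovery.XAResourceRecovery plugin interface; the sketch below is
hypothetical (MyDataSources.lookup stands in for whatever resolves the same datasource
definition the tm process used):

    import java.sql.SQLException;
    import javax.sql.XADataSource;
    import javax.transaction.xa.XAResource;
    import com.arjuna.ats.jta.recovery.XAResourceRecovery;

    // Hands the recovery manager an XAResource equivalent to the tm's datasource,
    // so in-doubt branches found in the log can be matched and completed.
    // Registered with the recovery process via a
    // com.arjuna.ats.jta.recovery.XAResourceRecovery property entry.
    public class MyDataSourceRecovery implements XAResourceRecovery {
        private XADataSource ds;
        private boolean more;

        public boolean initialise(String config) throws SQLException {
            ds = MyDataSources.lookup(config); // hypothetical helper
            return true;
        }

        public XAResource getXAResource() throws SQLException {
            return ds.getXAConnection().getXAResource();
        }

        public boolean hasMoreResources() {
            more = !more; // yield exactly one resource per recovery pass
            return more;
        }
    }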
JTS is distribution aware, its transport endpoint identifiers being based on IORs, i.e. on
IP addresses. Log records are only recoverable if the IORs they reference are reachable.
An IOR is assumed to belong to at most one JVM at a time. Some ORBs contain
HA/load-balancing support at the transport level, but we don't currently utilise this.
XTS is distribution aware, its transport endpoint identifiers being URLs, i.e. based on IP
address or hostname. Log records are only recoverable if the URLs they reference are
reachable. A hostname/IP address is assumed to route to exactly one JVM, i.e. we don't
currently support HTTP-level load balancing. Such load balancers are typically
'session sticky' based on a notion of HTTP session that is not equivalent to WS
session or WS transaction context. Without a WS-AT/WS-BA context-aware level-7 load
balancer it's not possible to overcome this.
Deployment models:
Typical large scale deployments involve multiple nodes (o/s instances), each with its own
tm, recovery manager and store. In such cases load balancing is a concern external to the
tm - a transaction will be owned by the resident tm of whichever node the call arrives at.
For distributed tx (JTS, XTS) the tm may be out of process, optionally on another node.
This allows for separate scaling of tm node and business logic node at the cost of
additional communication overhead. In practice there is little call for this model and
thus little incentive to invest time in improving it. Possible enhancements would include
an HA/load-balancing front end to allow a single business logic node to utilise multiple
tm nodes.
For certain store implementations, the store may be shared by multiple processes. These
include exactly one recovery manager and one or more transaction managers. The dominant
use cases are:
- out of process recovery manager, for fault isolation. In the event of the tm process
(often the same as the business logic process) crashing, the rec mgr process continues to
run and may complete outstanding tx in a more timely fashion than awaiting a process
restart. This is desirable where shared resource managers may be holding locks that
prevent the ongoing operation of other nodes. In practice it's not commonly seen. This is
expected to continue to be the case, particularly as newer store implementations drop
support for out of process access in favour of greater performance.
- single store for simplified deployment administration. By centralising the log records
for all systems, nodes can be otherwise stateless. The JDBC store for example puts the log
records in a db server, allowing the nodes to have volatile local storage e.g. non-RAID
disks. This one is a bit of a red herring, as you can achieve the same benefits using
multiple copies of the store e.g. multiple dbs/tables in the same db server or multiple
dirs on the SAN. This model has potential to find favour in cloud environments where
nodes are added or removed frequently.
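A rough sketch of that last variant, one store dir per node on shared media (the path
and NODE_ID variable are hypothetical; a JDBC store would be configured analogously via
its own properties):

    // Each node keeps its own store, but on shared media, so the node's local
    // disks can stay volatile/stateless.
    public class StatelessNodeStore {
        public static void main(String[] args) {
            String nodeId = System.getenv("NODE_ID"); // assumption: set by provisioning
            System.setProperty("com.arjuna.ats.arjuna.objectstore.objectStoreDir",
                    "/mnt/san/txstores/" + nodeId);
            // ... start the tm and, for this store, its single recovery manager ...
        }
    }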
Requirements:
The most significant shortcomings of the current software are:
- recovery may be delayed due to unavailability of a recovery manager process. This can be
addressed by allowing 'proxy recovery', i.e. attaching the store to a recovery
process running on another node. This suits environments where store availability is
greater than process availability for a given node. It's feasible only where the
replacement recovery manager can easily duplicate sufficient additional configuration to
replace the original. For JTA that means e.g. datasource configuration. For JTS/XTS it
also requires IP/hostname failover. This is largely a case of documentation and testing
rather than code changes. The need for manual reassignment of the store to a new
recovery manager may be removed by additional work to allow a hot standby recovery manager
with a heartbeat protocol to the primary (see the sketch after this list).
- XTS can't be used in http load balanced environments. Addressing this will require
work in the http load balancing code rather than the tm.
- unique node configuration requires manual work. This is a pain in highly dynamic clusters
such as cloud environments where nodes are added frequently. The ability to have new nodes
auto-configure from a central server may be desirable, although in practice the tm is
normally deployed embedded and this capability probably belongs at a server level rather
than a component (i.e. tm) level.
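Finally, the promised sketch of the 'proxy recovery' reassignment from the first point
above. The path is hypothetical, and it assumes the standby already carries equivalent
datasource/XAResourceRecovery configuration (plus IP/hostname failover for JTS/XTS):

    import com.arjuna.ats.arjuna.recovery.RecoveryManager;

    public class ProxyRecovery {
        public static void main(String[] args) {
            // Attach the failed node's store to this standby recovery process.
            System.setProperty("com.arjuna.ats.arjuna.objectstore.objectStoreDir",
                    "/mnt/san/txstores/node1"); // hypothetical: node1 has crashed
            // Starts the periodic scans that complete node1's in-doubt transactions.
            RecoveryManager.manager(RecoveryManager.INDIRECT_MANAGEMENT);
        }
    }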
Add clustering support
----------------------
Key: JBTM-312
URL: https://issues.jboss.org/browse/JBTM-312
Project: JBoss Transaction Manager
Issue Type: Task
Security Level: Public (Everyone can see)
Components: JTA, JTS, Recovery, Transaction Core, XTS
Affects Versions: 4.3.0.BETA2
Reporter: Mark Little
Assignee: Jonathan Halliday