Been digesting this for a bit. :)
anonymous wrote : Synchronization of multiple local file based repositories as you
describe is difficult for something outside of the profile service as the exact state of a
deployment is the raw deployment content + any admin edits that may have occurred. Raw
content sitting on the disk may not be in the profile depending on what removing a
deployment from a profile actually does.
I wasn't clear about one thing when I described that concept -- I was only thinking of
that approach in terms of the basic.ProfileServiceImpl; i.e. an equivalent to
VFSDeploymentScannerImpl that handles DeploymentPhase.APPLICATION_CLUSTERED. This is one
reason I considered it low priority. It doesn't fit with the full profile service
impl, which works differently.
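To make that concrete, here's the rough shape of what I was picturing. This is purely a sketch; the Profile/DeploymentPhase types below are simplified, invented stand-ins for the real profile SPI:

// Sketch only -- simplified, invented stand-ins for the real profile SPI.
enum DeploymentPhase { APPLICATION, APPLICATION_CLUSTERED }

interface Profile
{
   boolean hasDeployment(String name, DeploymentPhase phase);
   void addDeployment(String name, DeploymentPhase phase);
}

// An APPLICATION_CLUSTERED analogue of VFSDeploymentScannerImpl for the
// basic.ProfileServiceImpl: scan a farm-style directory and register its
// content under the clustered phase instead of the plain APPLICATION one.
class ClusteredDeploymentScanner
{
   private final Profile profile;
   private final java.io.File clusteredDeployDir; // e.g. a farm/ dir

   ClusteredDeploymentScanner(Profile profile, java.io.File clusteredDeployDir)
   {
      this.profile = profile;
      this.clusteredDeployDir = clusteredDeployDir;
   }

   void scan()
   {
      java.io.File[] files = clusteredDeployDir.listFiles();
      if (files == null)
         return;
      for (java.io.File file : files)
      {
         // Registering under APPLICATION_CLUSTERED flags this content as
         // something that must be reconciled across the cluster.
         if (!profile.hasDeployment(file.getName(), DeploymentPhase.APPLICATION_CLUSTERED))
            profile.addDeployment(file.getName(), DeploymentPhase.APPLICATION_CLUSTERED);
      }
   }
}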
anonymous wrote : So 'hotdeployment' is purely a profile implementation detail,
how we reconcile local repositories across a cluster would be part of the
org.jboss.profileservice.spi.DeploymentRepository.getModifiedDeployments()
implementation.
This part isn't clear to me. I certainly see how keeping different repositories in
sync across a cluster is a detail of the repository implementation. And I can see how a
cluster-aware DeploymentRepository instance could somewhat control when a ProfileImpl is
aware of a change; e.g. don't make the profile aware of the change until all the
repository instances in the cluster are aware of it.
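Something like this is what I'm picturing for that part (pure sketch; the real org.jboss.profileservice.spi.DeploymentRepository contract is obviously richer, and ClusterView/acksFor are invented names):

import java.util.ArrayList;
import java.util.List;

// Sketch of "don't expose a change until the whole cluster has it".
class ClusterAwareRepository
{
   interface ClusterView
   {
      int size();                     // nodes currently in the cluster
      int acksFor(String deployment); // nodes that have pulled this change
   }

   private final ClusterView view;
   private final List<String> pendingChanges = new ArrayList<String>();

   ClusterAwareRepository(ClusterView view)
   {
      this.view = view;
   }

   // In the spirit of DeploymentRepository.getModifiedDeployments():
   // only report a modification once every node's repository has it,
   // so no node's ProfileImpl gets ahead of the others.
   synchronized List<String> getModifiedDeployments()
   {
      List<String> visible = new ArrayList<String>();
      for (String change : pendingChanges)
      {
         if (view.acksFor(change) >= view.size())
            visible.add(change);
      }
      pendingChanges.removeAll(visible);
      return visible;
   }

   synchronized void recordLocalChange(String deployment)
   {
      pendingChanges.add(deployment);
   }
}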
But that doesn't get to controlling how the profile changes get reflected in the
runtime. That's a task of the MainDeployer and the deployers.
I'll talk about a specific ideal scenario:
An ear is deployed on all 4 nodes of a cluster. A new version of the ear is deployed. The
goal is that the ear be brought to a certain deployment stage (DeploymentStages.REAL?) on
all nodes in the cluster, such that we know the deployment will work on all nodes: a 2PC
"prepare". At that point a cluster-wide "commit" is executed; the
deployments are brought to the final stage where they handle requests, and the old version
is removed. If there is a failure during the "prepare", the new version is
rolled back and the old version is left in place.
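In rough Java, the flow I'm imagining looks like the sketch below. Everything here is invented (NodeChannel especially; in practice it would sit on top of something like a JGroups RPC); it's just to pin down the prepare/commit/rollback shape:

import java.util.ArrayList;
import java.util.List;

// 2PC-style upgrade sketch; all names are invented.
class ClusteredUpgradeCoordinator
{
   interface NodeChannel
   {
      // Bring the new version to the "prepare" stage (e.g.
      // DeploymentStages.REAL); return true on success.
      boolean prepare(String deployment, String newVersion);
      // Move the new version to the final, request-handling stage
      // and remove the old version.
      void commit(String deployment, String newVersion);
      // Remove the new version, leaving the old one in place.
      void rollback(String deployment, String newVersion);
   }

   void upgrade(String deployment, String newVersion, List<NodeChannel> nodes)
   {
      // Phase 1: "prepare" -- every node must get the new bits to REAL.
      List<NodeChannel> prepared = new ArrayList<NodeChannel>();
      for (NodeChannel node : nodes)
      {
         if (node.prepare(deployment, newVersion))
         {
            prepared.add(node);
         }
         else
         {
            // Any failure aborts the upgrade on the nodes prepared so far,
            // leaving the old version running everywhere.
            for (NodeChannel p : prepared)
               p.rollback(deployment, newVersion);
            return;
         }
      }
      // Phase 2: "commit" -- switch every node to the new version and
      // drop the old one.
      for (NodeChannel node : nodes)
         node.commit(deployment, newVersion);
   }
}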
There can be other variations on the above, but the main point is that there is a multistep
deployment process that requires intra-cluster communication at various points. Who
controls that process is my question -- it doesn't seem like a concern of the
DeploymentRepository, and it also doesn't seem like a proper concern of a deployer. My last
post mentioned "some variant of the HDScanner concept", but that's not it
either; you're right, HDScanner is just a trivial link between the profile and the
MainDeployer and shouldn't be made into something else. It seems like this is at least
partly a concern of a cluster-aware MainDeployer.
anonymous wrote : The simple notion of having a deployment available across all nodes in
the cluster, the DeploymentPhase.APPLICATION_CLUSTERED notion of the DeploymentRepository.
There could be an annotation for this down to a bean level rather than a coarse deployment
level. We could support this using the MetaDataRepository levels and have a clustered
MetaDataRepository where we just distributed the BeanMetaData (for example) for the
annotated bean. Provided the deployers/deployment attachments were properly integrated
with the MetaDataRepository, it should just work, but there is a dependency issue.
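As I dig in, a strawman for the bean-level annotation might look like this (nothing like it exists today; the partition attribute is just a guess at what it would need):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Strawman marker: "distribute this bean's metadata cluster-wide". A
// clustered MetaDataRepository level would see this and replicate the
// BeanMetaData for the annotated bean to every node.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface Clustered
{
   // Which cluster partition the metadata is distributed to (guessed).
   String partition() default "DefaultPartition";
}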
This is the part I need to dig into more to get a better understanding of what you mean.
Perhaps that will answer my question above. :)
anonymous wrote : Cluster-wide dependencies between beans. Dependencies on components that
are marked as @Clustered need a cluster-aware dependency implementation. Custom
dependencies are supported by the mc, but the component deployers generating the component
metadata have to generate the correct dependency implementation.
Good point. This will be a nice thing to have.
This again has a coordination aspect; e.g. bean A on node 1 expresses a dependency on bean
B that will be deployed on node 2. If both A and B are known, cluster-wide, to the
repository, you don't want A's deployment to fail with a missing dependency just
because node 2's deployers haven't processed B yet.
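For example, the resolution side might look something like this sketch (invented names; the real hook would presumably be a custom dependency item generated by the component deployers):

// Sketch: a dependency satisfied when the target bean is known to a
// cluster-wide registry, even if the local deployers haven't seen it.
class ClusteredDependency
{
   interface ClusterRegistry
   {
      // True if any node in the cluster has registered this bean.
      boolean isKnown(String beanName);
   }

   private final String targetBean; // e.g. "B", deployed on node 2
   private final ClusterRegistry registry;

   ClusteredDependency(String targetBean, ClusterRegistry registry)
   {
      this.targetBean = targetBean;
      this.registry = registry;
   }

   // Cluster-aware resolve(): bean A's dependency on B shouldn't fail on
   // node 1 just because node 2's deployers haven't processed B yet.
   boolean resolve()
   {
      return registry.isKnown(targetBean);
   }
}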
anonymous wrote : Clustering like security is a cross cutting notion that we need the
proper metadata for, and hooks to which cluster aware variations of the aspects can be
applied.
Agreed. Right now what clustering metadata exists is scattered, and it only covers EJBs
and web sessions.
When I think in terms of priority order, though, I'm thinking somewhat differently. To
me, it's:
1) Restoring some sort of ability to have repository information synchronized. This is
really the only thing the old farming did, in a half-assed way ;). I'd like to have
this for 5.0.0.GA as I don't like taking something away, even if it was half-assed.
2) Sorting out the "coordination" issue I've been talking about. The lack of
that kind of coordination IMHO has always been the biggest weakness in the old
FarmService.
3) Cluster-wide dependencies. This could be done earlier if our solution for 1) ensures,
following my example, that node 2's ProfileImpl knows about bean B before node 1's
ProfileImpl knows about bean A.
4) Adding cluster-aware aspects to beans other than the existing JEE ones. Includes
refactoring existing JEE clustered aspect impls to use as much of a common code base as
possible.
Creating proper clustering metadata is an underlying task that occurs throughout the
above.