Brian Stansberry commented on WFCORE-2876:
------------------------------------------
The WFLY-1305 fix added the undeploy cleanup task, and the WFLY-264 feature, done at roughly the same time, didn't account for it.
runtime-failure-causes-rollback does not seem to have effect when
configured in model
-------------------------------------------------------------------------------------
Key: WFCORE-2876
URL: https://issues.jboss.org/browse/WFCORE-2876
Project: WildFly Core
Issue Type: Bug
Components: Deployment Scanner
Affects Versions: 3.0.0.Beta21
Reporter: Miroslav Novak
Assignee: ehsavoie Hugonnet
This is a follow-up to the discussion in WFCORE-1912. There is a difference in behavior between deploying a deployment like:
{code}deploy ~/tmp/mdb1.jar --unmanaged
--headers={rollback-on-runtime-failure=false}{code}
and deploying it by copying it to the deployments directory and setting
{{runtime-failure-causes-rollback=false}} in the model:
{code}
/subsystem=deployment-scanner/scanner=default:write-attribute(name=runtime-failure-causes-rollback,value=false)
{code}
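For reference, the current value of the scanner attribute can be checked with a standard {{read-attribute}} operation on the same {{scanner=default}} resource used above:
{code}
/subsystem=deployment-scanner/scanner=default:read-attribute(name=runtime-failure-causes-rollback)
{code}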
If it's deployed the first way, using the CLI {{deploy}} command, and the deployment is missing some dependencies (such as connection factories or queues/topics) that are added later, then the deployment is able to recover and start working without needing to reload/restart the server or redeploy.
However, if it's deployed the second way, the deployment does not recover when the missing dependencies are added. It looks like the {{runtime-failure-causes-rollback}} attribute has no effect in this case.
This is causing problems when Artemis is configured in a colocated HA topology with a replicated journal, where it takes some time for Artemis to activate, but a deployment that depends on some of its connection factories and queues has already tried to deploy. We need the deployment to recover from this situation automatically. Restart/reload will not help in this case, as it would end up in the same situation; only a manual redeploy helps, which is just a workaround.
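As a reference for the workaround mentioned above, a manual redeploy of the scanned deployment can be triggered from the CLI with the standard {{:redeploy}} operation on the deployment resource (the deployment name {{mdb1.jar}} is taken from the earlier example):
{code}
/deployment=mdb1.jar:redeploy
{code}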