James Perkins commented on WFLY-7418:
-------------------------------------
[~dinesh.mahadevan] Unfortunately, at this point the best solution is to purge the job
repository. To keep the data you could dump it into a different table, but I realize that
is less than ideal.
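For a JDBC-backed job repository, the dump-then-purge workaround might look like the sketch below. The table and column names (JOB_EXECUTION, STEP_EXECUTION, PARTITION_EXECUTION, JOBEXECUTIONID, etc.) are assumptions based on JBeret's default JDBC schema, and JOB_EXECUTION_ARCHIVE and the cutoff date are hypothetical; verify everything against your actual schema before running it.

```sql
-- Assumed JBeret default JDBC schema; verify names against your database.
-- 1. Dump old executions into a separate (hypothetical) archive table.
CREATE TABLE JOB_EXECUTION_ARCHIVE AS
  SELECT * FROM JOB_EXECUTION WHERE ENDTIME < '2016-01-01';

-- 2. Purge child rows first so foreign keys are satisfied, then the executions.
DELETE FROM PARTITION_EXECUTION
  WHERE STEPEXECUTIONID IN (
    SELECT STEPEXECUTIONID FROM STEP_EXECUTION
      WHERE JOBEXECUTIONID IN (SELECT JOBEXECUTIONID FROM JOB_EXECUTION_ARCHIVE));
DELETE FROM STEP_EXECUTION
  WHERE JOBEXECUTIONID IN (SELECT JOBEXECUTIONID FROM JOB_EXECUTION_ARCHIVE);
DELETE FROM JOB_EXECUTION
  WHERE JOBEXECUTIONID IN (SELECT JOBEXECUTIONID FROM JOB_EXECUTION_ARCHIVE);
```

Keeping the archive in a table the runtime resource never reads preserves the history while taking it out of the slow path.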
One possible option, which would be a request for enhancement, would be a setting to
disable registering batch jobs on the runtime deployment resource. This would, however,
mean jobs could not be seen, started, or restarted from the web console or via the CLI.
Batch deployments with a large number of executed jobs can lock up or
slow down the web console
-----------------------------------------------------------------------------------------------
Key: WFLY-7418
URL:
https://issues.jboss.org/browse/WFLY-7418
Project: WildFly
Issue Type: Enhancement
Components: Batch, Web Console
Reporter: James Perkins
Assignee: James Perkins
Priority: Major
Labels: move_to_halnext
Batch deployments that contain a large number of executed jobs can be extremely slow to
process, as reading the {{/deployment=batch.war/subsystem=batch-jberet}} resource
processes each job instance and then each job execution of that instance.
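For illustration, the slow path corresponds roughly to a recursive read of the deployment's batch-jberet resource from the management CLI (the deployment name {{batch.war}} is taken from the resource address above):

```
/deployment=batch.war/subsystem=batch-jberet:read-resource(include-runtime=true, recursive=true)
```

With many job instances, each carrying many executions, this one operation fans out across the entire execution history, which is what the web console ends up waiting on.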
One possibly helpful option for the web console would be a new description attribute
indicating that the resource may be slow to process. The web console might then populate
the data in a background task rather than locking up the UI. A large memory footprint
would still be an issue, however.
JBeret might also want to consider a way to archive jobs rather than only purging them.
Some users may want to keep all job execution data, and archiving it would reduce the
size of the current data being retrieved.
--
This message was sent by Atlassian Jira
(v7.13.8#713008)