[JBoss JIRA] (WFCORE-1635) Write attribute on a new deployment scanner fails in batch
by Brian Stansberry (JIRA)
[ https://issues.jboss.org/browse/WFCORE-1635?page=com.atlassian.jira.plugi... ]
Brian Stansberry commented on WFCORE-1635:
------------------------------------------
I think if controller.getValue() fails it can just ignore the failure. In the batch, the Stage.MODEL handling of the write-attribute runs before the Stage.RUNTIME handling of the add, which means the Stage.RUNTIME step of the add already uses the correct value of the scan-interval attribute, and there is no need for the Stage.RUNTIME step of the write-attribute to modify the service.
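The stage ordering described above can be sketched with a toy simulation (hypothetical names, not the actual WildFly controller API): because every Stage.MODEL step in the batch completes before any Stage.RUNTIME step runs, the add's runtime step already reads the scan-interval value written by the later write-attribute step.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy simulation of batch stage ordering: all MODEL steps run before any
// RUNTIME step, so the add's RUNTIME step sees the final model value.
public class BatchStageOrder {

    // the "model" value of scan-interval
    static int scanInterval = 0;
    // the value the service is actually started with
    static int serviceInterval = -1;

    public static int runBatch() {
        Deque<Runnable> runtimeSteps = new ArrayDeque<>();

        // Stage.MODEL of :add — set the default, queue the RUNTIME step
        scanInterval = 5000;
        runtimeSteps.add(() -> serviceInterval = scanInterval);

        // Stage.MODEL of :write-attribute — update the model first...
        scanInterval = 6000;
        // ...so its RUNTIME step can safely skip the not-yet-started service
        runtimeSteps.add(() -> { /* getValue() would fail; safe to ignore */ });

        // Stage.RUNTIME runs only after all MODEL steps have completed
        runtimeSteps.forEach(Runnable::run);
        return serviceInterval; // the add already used the written value
    }

    public static void main(String[] args) {
        System.out.println("service started with scan-interval=" + runBatch());
    }
}
```

The simulation starts the service with scan-interval 6000, which is why ignoring the getValue() failure in the write-attribute's runtime step would be harmless here.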
> Write attribute on a new deployment scanner fails in batch
> ----------------------------------------------------------
>
> Key: WFCORE-1635
> URL: https://issues.jboss.org/browse/WFCORE-1635
> Project: WildFly Core
> Issue Type: Bug
> Components: Deployment Scanner, Domain Management
> Affects Versions: 3.0.0.Alpha3
> Reporter: Chao Wang
> Assignee: Chao Wang
> Priority: Minor
>
> Creating a new deployment-scanner and altering its attribute fails if done in a single batch. Running the commands without a batch, or running the batch against an embedded server in the CLI, works fine.
> *reproduce*
> {noformat}
> batch
> /subsystem=deployment-scanner/scanner=scan:add(path=log, relative-to="jboss.server.base.dir", auto-deploy-exploded=false, scan-enabled=false)
> /subsystem=deployment-scanner/scanner=scan:write-attribute(name=scan-interval, value=6000)
> run-batch
> {noformat}
> fails with
> {noformat}
> 08:09:19,076 ERROR [org.jboss.as.controller.management-operation] (management-handler-thread - 4) WFLYCTL0013: Operation ("write-attribute") failed - address: ([
> ("subsystem" => "deployment-scanner"),
> ("scanner" => "scan")
> ]): java.lang.IllegalStateException
> at org.jboss.as.server.deployment.scanner.DeploymentScannerService.getValue(DeploymentScannerService.java:234)
> at org.jboss.as.server.deployment.scanner.DeploymentScannerService.getValue(DeploymentScannerService.java:62)
> at org.jboss.msc.service.ServiceControllerImpl.getValue(ServiceControllerImpl.java:1158)
> at org.jboss.as.controller.OperationContextImpl$OperationContextServiceController.getValue(OperationContextImpl.java:2282)
> at org.jboss.as.server.deployment.scanner.AbstractWriteAttributeHandler.applyUpdateToRuntime(AbstractWriteAttributeHandler.java:58)
> at org.jboss.as.controller.AbstractWriteAttributeHandler$1.execute(AbstractWriteAttributeHandler.java:104)
> at org.jboss.as.controller.AbstractOperationContext.executeStep(AbstractOperationContext.java:890)
> at org.jboss.as.controller.AbstractOperationContext.processStages(AbstractOperationContext.java:659)
> at org.jboss.as.controller.AbstractOperationContext.executeOperation(AbstractOperationContext.java:370)
> at org.jboss.as.controller.OperationContextImpl.executeOperation(OperationContextImpl.java:1344)
> at org.jboss.as.controller.ModelControllerImpl.internalExecute(ModelControllerImpl.java:392)
> at org.jboss.as.controller.ModelControllerImpl.execute(ModelControllerImpl.java:217)
> at org.jboss.as.controller.remote.ModelControllerClientOperationHandler$ExecuteRequestHandler.doExecute(ModelControllerClientOperationHandler.java:208)
> at org.jboss.as.controller.remote.ModelControllerClientOperationHandler$ExecuteRequestHandler.access$300(ModelControllerClientOperationHandler.java:130)
> at org.jboss.as.controller.remote.ModelControllerClientOperationHandler$ExecuteRequestHandler$1$1.run(ModelControllerClientOperationHandler.java:152)
> at org.jboss.as.controller.remote.ModelControllerClientOperationHandler$ExecuteRequestHandler$1$1.run(ModelControllerClientOperationHandler.java:148)
> at java.security.AccessController.doPrivileged(AccessController.java:686)
> at javax.security.auth.Subject.doAs(Subject.java:569)
> at org.jboss.as.controller.AccessAuditContext.doAs(AccessAuditContext.java:92)
> at org.jboss.as.controller.remote.ModelControllerClientOperationHandler$ExecuteRequestHandler$1.execute(ModelControllerClientOperationHandler.java:148)
> at org.jboss.as.protocol.mgmt.AbstractMessageHandler$ManagementRequestContextImpl$1.doExecute(AbstractMessageHandler.java:363)
> at org.jboss.as.protocol.mgmt.AbstractMessageHandler$AsyncTaskRunner.run(AbstractMessageHandler.java:472)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1153)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.lang.Thread.run(Thread.java:785)
> at org.jboss.threads.JBossThread.run(JBossThread.java:320)
> {noformat}
> Using an embedded server works:
> {noformat}
> embed-server
> batch
> /subsystem=deployment-scanner/scanner=scan:add(path=log, relative-to="jboss.server.base.dir", auto-deploy-exploded=false, scan-enabled=false)
> /subsystem=deployment-scanner/scanner=scan:write-attribute(name=scan-interval, value=6000)
> run-batch
> {noformat}
> Setting priority only to Minor, as there is no real use case behind this (scan-interval can be set while adding a new scanner); I ran into it quite accidentally. No regression against the previous release.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
9 years, 10 months
[JBoss JIRA] (WFCORE-1632) Server processing request isn't stopped immediately but waits for request processing to finish
by Brian Stansberry (JIRA)
[ https://issues.jboss.org/browse/WFCORE-1632?page=com.atlassian.jira.plugi... ]
Brian Stansberry commented on WFCORE-1632:
------------------------------------------
This is an interesting problem. The graceful shutdown timeout was meant to be a limit on how long the server would try to be graceful, but not a limit on how long it would take to stop once it stopped being graceful.
I don't think we should act as if it's a shutdown timeout, at least not without a formal RFE, as that's an open-ended commitment. But, still, it's better if it can be closer to that.
I haven't looked closely at the link in the description, but I figure it's ultimately about using a shared thread pool that is depended upon, but not controlled, by the web server, instead of using one that the web server itself shuts down.
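The distinction drawn above can be illustrated with plain java.util.concurrent (this is not WildFly code): a grace period bounds how long the server waits politely, after which in-flight work is interrupted rather than waited out.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustration of a bounded graceful stop: wait up to graceMillis for
// in-flight work, then interrupt whatever is still running.
public class GracefulStop {

    public static boolean stop(ExecutorService pool, long graceMillis)
            throws InterruptedException {
        pool.shutdown(); // stop accepting new requests
        // be graceful for at most graceMillis...
        if (pool.awaitTermination(graceMillis, TimeUnit.MILLISECONDS)) {
            return true; // in-flight requests finished in time
        }
        pool.shutdownNow(); // ...then interrupt whatever is still running
        return pool.awaitTermination(5, TimeUnit.SECONDS);
    }

    // A 25-second "request" stopped with a 1-second grace period,
    // mirroring the curl + :shutdown(timeout=1) reproducer in the issue.
    public static boolean demo() throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.submit(() -> {
            try {
                Thread.sleep(25_000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // cooperate with shutdownNow
            }
        });
        return stop(pool, 1_000);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("stopped promptly: " + demo());
    }
}
```

Note the caveat, which matches the "open-ended commitment" concern: even shutdownNow() only interrupts; a request handler that ignores interruption can still hold the pool open indefinitely.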
> Server processing request isn't stopped immediately but waits for request processing to finish
> ----------------------------------------------------------------------------------------------
>
> Key: WFCORE-1632
> URL: https://issues.jboss.org/browse/WFCORE-1632
> Project: WildFly Core
> Issue Type: Bug
> Components: Domain Management, IO, Server
> Reporter: Lin Gao
> Assignee: Lin Gao
> Priority: Critical
> Original Estimate: 1 day
> Remaining Estimate: 1 day
>
> When stopping a server that is processing a request, the shutdown terminates the connection from the client but does not stop the request processing itself.
> After debugging and searching for when the issue appeared, I found that it was introduced by this commit: [https://github.com/wildfly/wildfly-core/commit/7304c019705c5f7ec0378e1c51...]
> Steps to reproduce:
> 1) start the EAP server with the app from the attachment deployed
> 2) issue a request to the long-running application: {{curl -i http://127.0.0.1:8080/long-running-servlet/HeavyProcessing?duration=25000}}
> 3) stop the server (it can even be done gracefully) using {{./jboss-cli.sh -c ":shutdown(timeout=1)"}}
> Observe that the server stops only 25 seconds after the request from step 2 was issued (the processing duration requested by the duration param), instead of being terminated after 1 second.
[JBoss JIRA] (WFLY-6803) Add multi-server support to mod_cluster
by Paul Ferraro (JIRA)
Paul Ferraro created WFLY-6803:
----------------------------------
Summary: Add multi-server support to mod_cluster
Key: WFLY-6803
URL: https://issues.jboss.org/browse/WFLY-6803
Project: WildFly
Issue Type: Enhancement
Components: Clustering
Affects Versions: 10.0.0.Final
Reporter: Paul Ferraro
Assignee: Paul Ferraro
Currently, the mod_cluster subsystem supports only a single configuration, which references the default Undertow server. However, Undertow supports multiple servers and exposes a distinct route capability per server (see WFLY-6778).
mod_cluster should therefore support multiple "profiles", where each profile references a specific Undertow server.
[JBoss JIRA] (WFCORE-1379) service.bat points user to wrong directory
by Kuthair Habboush (JIRA)
[ https://issues.jboss.org/browse/WFCORE-1379?page=com.atlassian.jira.plugi... ]
Kuthair Habboush commented on WFCORE-1379:
------------------------------------------
Can someone please post the link to the other defect? TIA
> service.bat points user to wrong directory
> ------------------------------------------
>
> Key: WFCORE-1379
> URL: https://issues.jboss.org/browse/WFCORE-1379
> Project: WildFly Core
> Issue Type: Bug
> Components: Scripts
> Affects Versions: 2.0.10.Final
> Reporter: Nicklas Karlsson
> Assignee: Tomaz Cerar
> Priority: Trivial
> Fix For: 2.2.0.CR1, 3.0.0.Alpha1
>
>
> Running service.bat from the docs/contrib/scripts/service directory tells the user to run the script under bin/service*s*, but the binary paths to the services expect bin/service, resulting in the service install failing with a file-not-found error
[JBoss JIRA] (WFLY-6793) Batch subsystem cannot be removed with a remove operation
by James Perkins (JIRA)
[ https://issues.jboss.org/browse/WFLY-6793?page=com.atlassian.jira.plugin.... ]
James Perkins updated WFLY-6793:
--------------------------------
Description:
The {{batch-jberet}} subsystem fails when a {{remove}} operation is invoked.
{code}
[standalone@localhost:9990 /] /subsystem=batch-jberet:remove
{
"outcome" => "failed",
"failure-description" => "WFLYCTL0171: Removing services has lead to unsatisfied dependencies:
Service org.wildfly.batch.thread.pool.batch was depended upon by service org.wildfly.batch.configuration",
"rolled-back" => true,
"response-headers" => undefined
}
{code}
-The batch configuration dependency needs to be removed before its dependencies are removed.-
-The thread-pool resource should also require a reload before removal.- The thread-pool is used in deployments and therefore shouldn't just be removed without a reload if there are deployments using it.
was:
The {{batch-jberet}} subsystem fails when a {{remove}} operation is invoked.
{code}
[standalone@localhost:9990 /] /subsystem=batch-jberet:remove
{
"outcome" => "failed",
"failure-description" => "WFLYCTL0171: Removing services has lead to unsatisfied dependencies:
Service org.wildfly.batch.thread.pool.batch was depended upon by service org.wildfly.batch.configuration",
"rolled-back" => true,
"response-headers" => undefined
}
{code}
-The batch configuration dependency needs to be removed before the its dependencies are removed.-
The thread-pool resource should also require a reload before removal. It's used in deployments and therefore shouldn't just be removed without a reload.
> Batch subsystem cannot be removed with a remove operation
> ---------------------------------------------------------
>
> Key: WFLY-6793
> URL: https://issues.jboss.org/browse/WFLY-6793
> Project: WildFly
> Issue Type: Bug
> Components: Batch
> Reporter: James Perkins
> Assignee: James Perkins
>
> The {{batch-jberet}} subsystem fails when a {{remove}} operation is invoked.
> {code}
> [standalone@localhost:9990 /] /subsystem=batch-jberet:remove
> {
> "outcome" => "failed",
> "failure-description" => "WFLYCTL0171: Removing services has lead to unsatisfied dependencies:
> Service org.wildfly.batch.thread.pool.batch was depended upon by service org.wildfly.batch.configuration",
> "rolled-back" => true,
> "response-headers" => undefined
> }
> {code}
> -The batch configuration dependency needs to be removed before its dependencies are removed.-
> -The thread-pool resource should also require a reload before removal.- The thread-pool is used in deployments and therefore shouldn't just be removed without a reload if there are deployments using it.
[JBoss JIRA] (WFLY-6793) Batch subsystem cannot be removed with a remove operation
by James Perkins (JIRA)
[ https://issues.jboss.org/browse/WFLY-6793?page=com.atlassian.jira.plugin.... ]
James Perkins updated WFLY-6793:
--------------------------------
Git Pull Request: (was: https://github.com/wildfly/wildfly/pull/9010)
> Batch subsystem cannot be removed with a remove operation
> ---------------------------------------------------------
>
> Key: WFLY-6793
> URL: https://issues.jboss.org/browse/WFLY-6793
> Project: WildFly
> Issue Type: Bug
> Components: Batch
> Reporter: James Perkins
> Assignee: James Perkins
>
> The {{batch-jberet}} subsystem fails when a {{remove}} operation is invoked.
> {code}
> [standalone@localhost:9990 /] /subsystem=batch-jberet:remove
> {
> "outcome" => "failed",
> "failure-description" => "WFLYCTL0171: Removing services has lead to unsatisfied dependencies:
> Service org.wildfly.batch.thread.pool.batch was depended upon by service org.wildfly.batch.configuration",
> "rolled-back" => true,
> "response-headers" => undefined
> }
> {code}
> -The batch configuration dependency needs to be removed before the its dependencies are removed.-
> The thread-pool resource should also require a reload before removal. It's used in deployments and therefore shouldn't just be removed without a reload.
[JBoss JIRA] (DROOLS-1224) Not able to update kie-server container version using REST API
by Maciej Swiderski (JIRA)
[ https://issues.jboss.org/browse/DROOLS-1224?page=com.atlassian.jira.plugi... ]
Maciej Swiderski moved RHBPMS-4074 to DROOLS-1224:
--------------------------------------------------
Project: Drools (was: JBoss BPMS Platform)
Key: DROOLS-1224 (was: RHBPMS-4074)
Workflow: GIT Pull Request workflow (was: CDW v1)
Docs QE Status: NEW
Component/s: kie server
(was: Kie-Server)
Affects Version/s: 6.4.0.Final
(was: 6.3.0.GA)
QE Status: NEW
> Not able to update kie-server container version using REST API
> --------------------------------------------------------------
>
> Key: DROOLS-1224
> URL: https://issues.jboss.org/browse/DROOLS-1224
> Project: Drools
> Issue Type: Bug
> Components: kie server
> Affects Versions: 6.4.0.Final
> Reporter: Maciej Swiderski
> Assignee: Edson Tirelli
>
> I updated a kie-container's release Id using "UpdateReleaseIdCommand". It worked fine, but after a server restart the container reverts to the old release Id version.