[JBoss JIRA] (WFLY-4840) Deprecated element cluster-passivation-store from ejb subsystem does not work
by Brian Stansberry (JIRA)
[ https://issues.jboss.org/browse/WFLY-4840?page=com.atlassian.jira.plugin.... ]
Brian Stansberry commented on WFLY-4840:
----------------------------------------
WFCORE-1106 may help with this, by letting the remove + add + reload workflow Rado showed above work without having to do remove + reload + add. The WFCORE-1106 thing would assume that both resources expose the same capability, which seems possible since it looks like they both install a service with the same name.
I haven't tried but given Rado's comment above I believe this workflow should work fine:
reload --admin-only=true
/subsystem=ejb3/passivation-store=infinispan:remove
/subsystem=ejb3/cluster-passivation-store=infinispan:add
reload --admin-only=false
I'm not sure why you'd want to mirror the state between the resources, at least not if the goal is to get rid of cluster-passivation-store. WF 10 already includes the current API, as will EAP 7.0, so any kind of mirroring becomes an API change and will likely be a source of bugs.
> Deprecated element cluster-passivation-store from ejb subsystem does not work
> -----------------------------------------------------------------------------
>
> Key: WFLY-4840
> URL: https://issues.jboss.org/browse/WFLY-4840
> Project: WildFly
> Issue Type: Bug
> Components: EJB
> Reporter: Ondřej Chaloupka
> Assignee: Dominik Pospisil
> Priority: Minor
>
> There is a mismatch in the behaviour of the deprecated element {{cluster-passivation-store}} under the {{ejb}} subsystem.
> When an element is deprecated it should still work, only printing a warning that it is deprecated.
> {code}
> [standalone@localhost:9990 /] /subsystem=ejb3/cluster-passivation-store=infinispan:read-resource()
> {
> "outcome" => "failed",
> "failure-description" => "WFLYCTL0216: Management resource '[
> (\"subsystem\" => \"ejb3\"),
> (\"cluster-passivation-store\" => \"infinispan\")
> ]' not found",
> "rolled-back" => true
> }
> {code}
> but
> {code}
> [standalone@localhost:9990 /] /subsystem=ejb3/cluster-passivation-store=infinispan:add()
> {
> "outcome" => "failed",
> "failure-description" => "WFLYCTL0158: Operation handler failed: org.jboss.msc.service.DuplicateServiceException: Service jboss.ejb.cache.factory.distributable.infinispan is already registered",
> "rolled-back" => true
> }
> {code}
> _A note:_ the element {{cluster-passivation-store}} was replaced by {{passivation-store}}, which works fine.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
10 years, 1 month
[JBoss JIRA] (WFLY-6336) request-limit should have its attributes as 'required'
by Tomaz Cerar (JIRA)
[ https://issues.jboss.org/browse/WFLY-6336?page=com.atlassian.jira.plugin.... ]
Tomaz Cerar updated WFLY-6336:
------------------------------
Summary: request-limit should have its attributes as 'required' (was: request-limit should have its attributes as 'required)
> request-limit should have its attributes as 'required'
> ------------------------------------------------------
>
> Key: WFLY-6336
> URL: https://issues.jboss.org/browse/WFLY-6336
> Project: WildFly
> Issue Type: Bug
> Components: Web (Undertow)
> Affects Versions: 10.0.0.Final
> Reporter: Tomaz Cerar
> Assignee: Tomaz Cerar
>
> When creating the 'request-limit' filter in Undertow, it is not required to provide either of the two available attributes. However, when I attach such a filter to a listener and perform a request against the server, I get the following error:
> {code}
> ERROR [io.undertow.request] (default I/O-1) UT005071: Undertow request failed HttpServerExchange{ GET /long-running-servlet/HeavyProcessing request {Accept=[text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8], Accept-Language=[cs,en-US;q=0.8,en;q=0.6], Cache-Control=[no-cache], Accept-Encoding=[gzip, deflate, sdch], DNT=[1], Pragma=[no-cache], User-Agent=[Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.75 Safari/537.36], Connection=[keep-alive], Upgrade-Insecure-Requests=[1], Host=[localhost:8080]} response {}}: java.lang.RuntimeException: WFLYUT0059: Could not construct handler for class: class io.undertow.server.handlers.RequestLimitingHandler. with parameters {
> "max-concurrent-requests" => undefined,
> "queue-size" => undefined
> }
> at org.wildfly.extension.undertow.filters.Filter.createHandler(Filter.java:111)
> at org.wildfly.extension.undertow.filters.Filter.createHttpHandler(Filter.java:68)
> at org.wildfly.extension.undertow.filters.RequestLimitHandler.createHttpHandler(RequestLimitHandler.java:37)
> at org.wildfly.extension.undertow.filters.FilterService.createHttpHandler(FilterService.java:57)
> at org.wildfly.extension.undertow.filters.FilterRef.createHttpHandler(FilterRef.java:69)
> at org.wildfly.extension.undertow.LocationService.configureHandlerChain(LocationService.java:96)
> at org.wildfly.extension.undertow.Host.configureRootHandler(Host.java:117)
> at org.wildfly.extension.undertow.Host.getOrCreateRootHandler(Host.java:171)
> at org.wildfly.extension.undertow.Host$HostRootHandler.handleRequest(Host.java:285)
> at io.undertow.server.handlers.NameVirtualHostHandler.handleRequest(NameVirtualHostHandler.java:54)
> at io.undertow.server.handlers.error.SimpleErrorPageHandler.handleRequest(SimpleErrorPageHandler.java:76)
> at io.undertow.server.handlers.CanonicalPathHandler.handleRequest(CanonicalPathHandler.java:49)
> at io.undertow.server.handlers.ChannelUpgradeHandler.handleRequest(ChannelUpgradeHandler.java:158)
> at io.undertow.server.Connectors.executeRootHandler(Connectors.java:202)
> at io.undertow.server.protocol.http.HttpReadListener.handleEventWithNoRunningRequest(HttpReadListener.java:233)
> at io.undertow.server.protocol.http.HttpReadListener.handleEvent(HttpReadListener.java:131)
> at io.undertow.server.protocol.http.HttpOpenListener.handleEvent(HttpOpenListener.java:145)
> at io.undertow.server.protocol.http.HttpOpenListener.handleEvent(HttpOpenListener.java:92)
> at io.undertow.server.protocol.http.HttpOpenListener.handleEvent(HttpOpenListener.java:51)
> at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
> at org.xnio.ChannelListeners$10.handleEvent(ChannelListeners.java:291)
> at org.xnio.ChannelListeners$10.handleEvent(ChannelListeners.java:286)
> at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
> at org.xnio.nio.QueuedNioTcpServer$1.run(QueuedNioTcpServer.java:121)
> at org.xnio.nio.WorkerThread.safeRun(WorkerThread.java:580)
> at org.xnio.nio.WorkerThread.run(WorkerThread.java:464)
> Caused by: java.lang.IllegalArgumentException
> at org.jboss.dmr.ModelValue.asInt(ModelValue.java:58)
> at org.jboss.dmr.ModelNode.asInt(ModelNode.java:240)
> at org.wildfly.extension.undertow.filters.Filter.createHandler(Filter.java:94)
> ... 25 more
> {code}
> I think we should make those attributes required when creating such a filter, as they are required for its construction anyway.
> Expected:
> Make both attributes of the {{request-limit}} filter required when creating it, and fail the write operation with an error if they are not provided.
> NOTE: it also seems that when I set the attribute {{queue-size}} to 0, the queue is unlimited. Maybe it would be more suitable to set the behaviour like this:
> - undefined -> unlimited queue
> - 0 -> no queue
> - 1..N -> queue size
> Well, this would be a change of behaviour, so maybe it's too late to make such a change already...
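The proposed semantics could be sketched as follows. This is an illustrative sketch only, not Undertow code: the class and method names are hypothetical, and `SynchronousQueue` stands in for the "no queue" case as a direct-handoff queue.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;

// Hypothetical sketch of the queue-size semantics proposed above.
class QueueSemantics {
    static BlockingQueue<Runnable> queueFor(Integer queueSize) {
        if (queueSize == null) {
            return new LinkedBlockingQueue<>();     // undefined -> unlimited queue
        }
        if (queueSize == 0) {
            return new SynchronousQueue<>();        // 0 -> no queue (direct handoff)
        }
        return new ArrayBlockingQueue<>(queueSize); // 1..N -> bounded queue of size N
    }
}
```

The point of the sketch is that "undefined" and "0" would become distinct, explicit behaviours instead of both meaning unlimited.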
[JBoss JIRA] (WFCORE-433) git backend for loading/storing the configuration XML for wildfly
by Jive JIRA Integration (JIRA)
[ https://issues.jboss.org/browse/WFCORE-433?page=com.atlassian.jira.plugin... ]
Jive JIRA Integration updated WFCORE-433:
-----------------------------------------
Forum Reference: https://developer.jboss.org/docs/DOC-55424
> git backend for loading/storing the configuration XML for wildfly
> -----------------------------------------------------------------
>
> Key: WFCORE-433
> URL: https://issues.jboss.org/browse/WFCORE-433
> Project: WildFly Core
> Issue Type: Feature Request
> Components: Domain Management
> Reporter: James Strachan
> Assignee: Jason Greene
>
> When working with WildFly in a cloud/PaaS environment (like OpenShift, fabric8, Docker, Heroku et al.), it'd be great to have a git repository for the configuration folder, so that writes work something like:
> * git pull
> * write the, say, standalone.xml file
> * git commit -a -m "some comment"
> * git push
> (with a handler to deal with conflicts; such as last write wins).
> Then an optional periodic 'git pull' could reload the configuration if there is a change.
> This would then mean that folks could use a number of WildFly containers via Docker / OpenShift / fabric8 and have a shared git repository (e.g. the git repo in OpenShift or fabric8) to configure a group of WildFly containers. Folks could then reuse the WildFly management console within cloud environments (as the management console would, under the covers, be loading/saving from/to git).
> Folks could then benefit from git tooling when dealing with versioning and audit logs of changes to the XML, along with getting the benefit of branching and tagging.
[JBoss JIRA] (WFCORE-1422) EAP init scripts don't detach jbossas process
by Bartosz Spyrko-Śmietanko (JIRA)
[ https://issues.jboss.org/browse/WFCORE-1422?page=com.atlassian.jira.plugi... ]
Bartosz Spyrko-Śmietanko updated WFCORE-1422:
---------------------------------------------
Affects Version/s: 2.1.0.CR1
> EAP init scripts don't detach jbossas process
> ---------------------------------------------
>
> Key: WFCORE-1422
> URL: https://issues.jboss.org/browse/WFCORE-1422
> Project: WildFly Core
> Issue Type: Bug
> Components: Scripts
> Affects Versions: 2.1.0.CR1
> Reporter: Bartosz Spyrko-Śmietanko
> Assignee: Bartosz Spyrko-Śmietanko
> Labels: downstream_dependency
>
> When starting WF via wildfly-init-redhat.sh, the init script and runuser commands are never disconnected and stay in the process tree.
> Reproduce:
> - create a new OS user "jbossadm"
> - unzip WF in his home directory
> - copy docs/contrib/scripts/init.d/wildfly.conf to /etc/default
- configure it with:
> JBOSS_HOME=/home/jbossadm/wildfly
> JBOSS_USER=jbossadm
> JBOSS_CONSOLE_LOG=/home/jbossadm/console.log
> JBOSS_MODE=domain
> - copy docs/contrib/scripts/init.d/wildfly-init-redhat.sh to /etc/init.d
> - become su, and start with:
> /etc/init.d/wildfly-init-redhat.sh start
- run: ps axfo pid,ppid,user,command | grep jboss
> {noformat}
> PID PPID USER COMMAND
> 479 1 root /bin/sh /etc/init.d/wildfly-init-redhat.sh start
> 481 479 root \_ runuser -s /bin/bash jbossadm -c ulimit -S -c 0 >/dev/null 2>&1 ; LAUNCH_JBOSS_IN_BACKGROUND=1 JBOSS_PIDFILE=/var/run
> 482 481 jbossadm \_ bash -c ulimit -S -c 0 >/dev/null 2>&1 ; LAUNCH_JBOSS_IN_BACKGROUND=1 JBOSS_PIDFILE=/var/run/wildfly/wildfly.pid
> 483 482 jbossadm \_ /bin/sh /home/jbossadm/jboss-eap-7/bin/domain.sh --domain-config=domain.xml --host-config=host.xml
> 579 483 jbossadm \_ java -D[Process Controller] -server -Xms64m -Xmx512m -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack
> 596 579 jbossadm \_ java -D[Host Controller] -Dorg.jboss.boot.log.file=/home/jbossadm/jboss-eap-7/domain/log/host-control
> 677 579 jbossadm \_ java -D[Server:server-one] -Xms64m -Xmx512m -server -XX:MetaspaceSize=96m -XX:MaxMetaspaceSize=256m -
> 727 579 jbossadm \_ java -D[Server:server-two] -Xms64m -Xmx512m -server -XX:MetaspaceSize=96m -XX:MaxMetaspaceSize=256m -
> {noformat}
> Expectation: "domain.sh" to become detached and have "1" as its parent.
[JBoss JIRA] (DROOLS-1084) kjar maven dependencies are downloaded even if scope is set to provided or test
by Lindsay Thurmond (JIRA)
Lindsay Thurmond created DROOLS-1084:
----------------------------------------
Summary: kjar maven dependencies are downloaded even if scope is set to provided or test
Key: DROOLS-1084
URL: https://issues.jboss.org/browse/DROOLS-1084
Project: Drools
Issue Type: Bug
Affects Versions: 6.3.0.Final
Reporter: Lindsay Thurmond
Assignee: Mark Proctor
Attachments: debug1.png, debug2.png, debug3.png
The creation of a new kbase triggers the specified rules kjar to be downloaded from the remote Maven repository. This works as expected, but has the side effect of also downloading the Maven dependencies for the kjar. The problem is that it downloads ALL the Maven dependencies, even if they are specified with provided or test scope. This shouldn't happen, since provided dependencies are expected to already be on the classpath and we should never need test dependencies at runtime at all.
I did some digging into the Drools source and found out that
{{KieRepositoryImpl#getKieModule()}}
contains logic to check the classpath for the KieModule and, if it can't find it, to load everything from the Maven repo, which includes downloading all the dependencies (and dependencies of dependencies, and so on).
Unfortunately the code for checking the classpath is not actually implemented and looks like this:
{code}
private KieModule checkClasspathForKieModule(ReleaseId releaseId) {
    // TODO
    // ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
    // URL url = classLoader.getResource( ((ReleaseIdImpl)releaseId).getPomPropertiesPath() );
    return null;
}
{code}
After nothing is found on the classpath, everything is downloaded from Maven. You can see all the stuff that is going to be downloaded (if it's not already in your Maven repo) in
{{DefaultProjectDependenciesResolver#resolve() //line 159}}
You can even see here that the dependencies have been marked as provided, but they are going to be downloaded regardless.
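The fix being asked for amounts to filtering resolved dependencies by scope before fetching them. A minimal sketch of that idea, using hypothetical names and a simplified map-based model rather than Drools' or Maven's real dependency types:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: keep only the Maven scopes that matter at runtime.
// "provided" deps are expected on the caller's classpath already, and "test"
// deps are never needed at runtime, so neither should be downloaded.
class ScopeFilter {
    private static final Set<String> RUNTIME_SCOPES = Set.of("compile", "runtime");

    // deps maps artifact GAV -> declared scope (illustrative, not Drools' model)
    static List<String> artifactsToDownload(Map<String, String> deps) {
        List<String> toDownload = new ArrayList<>();
        for (Map.Entry<String, String> e : deps.entrySet()) {
            if (RUNTIME_SCOPES.contains(e.getValue())) {
                toDownload.add(e.getKey());
            }
        }
        return toDownload;
    }
}
```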
[JBoss JIRA] (WFCORE-1245) Improve readability of missing dependency logs
by Bartosz Spyrko-Śmietanko (JIRA)
[ https://issues.jboss.org/browse/WFCORE-1245?page=com.atlassian.jira.plugi... ]
Bartosz Spyrko-Śmietanko reassigned WFCORE-1245:
------------------------------------------------
Assignee: Bartosz Spyrko-Śmietanko (was: Dennis Reed)
> Improve readability of missing dependency logs
> ----------------------------------------------
>
> Key: WFCORE-1245
> URL: https://issues.jboss.org/browse/WFCORE-1245
> Project: WildFly Core
> Issue Type: Enhancement
> Components: Domain Management
> Reporter: Bartosz Spyrko-Śmietanko
> Assignee: Bartosz Spyrko-Śmietanko
> Attachments: after_1.log, after_2.log, before.log, bz1283294-reproducer.zip
>
>
> When deploying an ear using the initialize-in-order option, if one of the subdeployments contains an EJB that depends on an EJB from another subdeployment and the dependency subdeployment fails, the log output makes it hard to understand the root cause.
> Structure of deployment is as follows:
> {noformat}
> reproducer.ear
> |- service-locator.jar
> | |- ServiceLocator (Stateless EJB)
> | |- TestQueue (JNDI Resource)
> |- client.jar
> |- TestEjb (Stateless EJB)
> |- ServiceLocator
> {noformat}
> If the TestQueue JNDI resource cannot be injected in the ServiceLocator, the deployment failure output lists a number of missing services for each EJB in the dependant subdeployment (.ORB, .HandleDelegate, .ValidatorFactory, etc.).
> When the dependant subdeployment has a larger number of EJBs, the log output very quickly becomes hard to read.
> Example with a single dependant EJB:
> {noformat}
> 14:27:43,092 ERROR [org.jboss.as.controller.management-operation] (management-handler-thread - 2) WFLYCTL0013: Operation ("deploy") failed - address: ({"deployment" => "reproducer-1.0-SNAPSHOT.ear"}) - failure description: {
> "WFLYCTL0180: Services with missing/unavailable dependencies" => [
> "jboss.deployment.subunit.\"reproducer-1.0-SNAPSHOT.ear\".\"client.jar\".batch.environment is missing [jboss.deployment.subunit.\"reproducer-1.0-SNAPSHOT.ear\".\"client.jar\".beanmanager]",
> "jboss.naming.context.java.comp.testEar.client.TestEjb.ValidatorFactory is missing [jboss.naming.context.java.comp.testEar.client.TestEjb]",
> "jboss.naming.context.java.comp.testEar.client.TestEjb.ORB is missing [jboss.naming.context.java.comp.testEar.client.TestEjb]",
> "jboss.naming.context.java.comp.testEar.client.TestEjb.HandleDelegate is missing [jboss.naming.context.java.comp.testEar.client.TestEjb]",
> "jboss.deployment.subunit.\"reproducer-1.0-SNAPSHOT.ear\".\"client.jar\".weld.weldClassIntrospector is missing [jboss.deployment.subunit.\"reproducer-1.0-SNAPSHOT.ear\".\"client.jar\".beanmanager]",
> "jboss.deployment.unit.\"reproducer-1.0-SNAPSHOT.ear\".deploymentCompleteService is missing [jboss.deployment.subunit.\"reproducer-1.0-SNAPSHOT.ear\".\"client.jar\".deploymentCompleteService]",
> "jboss.naming.context.java.comp.testEar.client.TestEjb.InstanceName is missing [jboss.naming.context.java.comp.testEar.client.TestEjb]",
> "jboss.naming.context.java.comp.testEar.client.TestEjb.Validator is missing [jboss.naming.context.java.comp.testEar.client.TestEjb]",
> "jboss.naming.context.java.comp.testEar.service-locator.test_ServiceLocator.env.queue.TestQueue is missing [jboss.naming.context.java.jboss.resources.queue.TestQueue]",
> "jboss.naming.context.java.comp.testEar.client.TestEjb.InAppClientContainer is missing [jboss.naming.context.java.comp.testEar.client.TestEjb]"
> ],
> "WFLYCTL0288: One or more services were unable to start due to one or more indirect dependencies not being available." => {
> "Services that were unable to start:" => [
> "jboss.deployment.subunit.\"reproducer-1.0-SNAPSHOT.ear\".\"client.jar\".INSTALL",
> "jboss.deployment.subunit.\"reproducer-1.0-SNAPSHOT.ear\".\"service-locator.jar\".CLEANUP",
> "jboss.deployment.subunit.\"reproducer-1.0-SNAPSHOT.ear\".\"service-locator.jar\".component.test_ServiceLocator.JndiBindingsService",
> "jboss.deployment.subunit.\"reproducer-1.0-SNAPSHOT.ear\".\"service-locator.jar\".component.test_ServiceLocator.START",
> "jboss.deployment.subunit.\"reproducer-1.0-SNAPSHOT.ear\".\"service-locator.jar\".deploymentCompleteService",
> "jboss.deployment.subunit.\"reproducer-1.0-SNAPSHOT.ear\".\"service-locator.jar\".jndiDependencyService",
> "jboss.deployment.subunit.\"reproducer-1.0-SNAPSHOT.ear\".\"service-locator.jar\".moduleDeploymentRuntimeInformationStart",
> "jboss.deployment.unit.\"reproducer-1.0-SNAPSHOT.ear\".CLEANUP"
> ]
> {noformat}
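One way to make such output readable (a sketch only, with hypothetical names and a simplified model of the report) is to group the "X is missing [Y]" entries by the service that is actually missing, so each root cause is reported once instead of once per dependent EJB service:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch: collapse a flat list of missing-dependency entries
// (dependent service -> missing service) into one entry per missing service.
class MissingDepsReport {
    static Map<String, List<String>> groupByMissing(Map<String, String> deps) {
        Map<String, List<String>> byMissing = new TreeMap<>();
        for (Map.Entry<String, String> e : deps.entrySet()) {
            // key: the missing service (the root cause); value: who needs it
            byMissing.computeIfAbsent(e.getValue(), k -> new ArrayList<>())
                     .add(e.getKey());
        }
        return byMissing;
    }
}
```

In the example above, the ten per-EJB entries would collapse to two root causes: the missing {{TestEjb}} naming context and the missing {{TestQueue}} resource.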
[JBoss JIRA] (JBWEB-313) Deadlock in WsRemoteEndpointImplServer.onWritePossible
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/JBWEB-313?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on JBWEB-313:
-----------------------------------------------
Michael Cada <mcada@redhat.com> changed the Status of [bug 1299057|https://bugzilla.redhat.com/show_bug.cgi?id=1299057] from ON_QA to VERIFIED
> Deadlock in WsRemoteEndpointImplServer.onWritePossible
> ------------------------------------------------------
>
> Key: JBWEB-313
> URL: https://issues.jboss.org/browse/JBWEB-313
> Project: JBoss Web
> Issue Type: Bug
> Affects Versions: JBossWeb-7.5.0.GA
> Environment: JBoss EAP 6.4.5
> Reporter: Aaron Ogburn
> Assignee: Remy Maucherat
> Attachments: JBWEB-313.patch
>
>
> A deadlock is possible in WsRemoteEndpointImplServer.onWritePossible:
> {code}
> http-/0.0.0.0:8080-1":
> at org.apache.tomcat.websocket.server.WsRemoteEndpointImplServer.onWritePossible(WsRemoteEndpointImplServer.java:93)
> - waiting to lock <0x00000006dee6a1a8> (a java.nio.HeapByteBuffer)
> - locked <0x00000006dee6a200> (a java.lang.Object)
> at org.apache.tomcat.websocket.server.WsHttpUpgradeHandler$WsWriteListener.onWritePossible(WsHttpUpgradeHandler.java:243)
> at org.apache.catalina.core.StandardWrapperValve.async(StandardWrapperValve.java:605)
> at org.apache.catalina.core.StandardWrapperValve.event(StandardWrapperValve.java:350)
> at org.apache.catalina.core.StandardContextValve.event(StandardContextValve.java:171)
> at org.apache.catalina.valves.ValveBase.event(ValveBase.java:185)
> at org.apache.catalina.core.StandardHostValve.event(StandardHostValve.java:252)
> at org.apache.catalina.valves.ValveBase.event(ValveBase.java:185)
> at org.apache.catalina.core.StandardEngineValve.event(StandardEngineValve.java:121)
> at org.apache.catalina.connector.CoyoteAdapter.event(CoyoteAdapter.java:228)
> at org.apache.coyote.http11.Http11NioProcessor.event(Http11NioProcessor.java:232)
> at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.event(Http11NioProtocol.java:818)
> at org.apache.tomcat.util.net.NioEndpoint$ChannelProcessor.run(NioEndpoint.java:939)
> - locked <0x00000006deeeb9c0> (a java.lang.Object)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at org.apache.tomcat.util.net.NioEndpoint$DefaultThreadFactory$1$1.run(NioEndpoint.java:1249)
> at java.lang.Thread.run(Thread.java:745)
> "EJB default - 1":
> at org.apache.tomcat.websocket.server.WsRemoteEndpointImplServer.onWritePossible(WsRemoteEndpointImplServer.java:81)
> - waiting to lock <0x00000006dee6a200> (a java.lang.Object)
> at org.apache.tomcat.websocket.server.WsRemoteEndpointImplServer.doWrite(WsRemoteEndpointImplServer.java:76)
> at org.apache.tomcat.websocket.WsRemoteEndpointImplBase.writeMessagePart(WsRemoteEndpointImplBase.java:444)
> at org.apache.tomcat.websocket.WsRemoteEndpointImplBase.startMessage(WsRemoteEndpointImplBase.java:334)
> at org.apache.tomcat.websocket.WsRemoteEndpointImplBase$TextMessageSendHandler.write(WsRemoteEndpointImplBase.java:741)
> - locked <0x00000006dee6a1a8> (a java.nio.HeapByteBuffer)
> at org.apache.tomcat.websocket.WsRemoteEndpointImplBase.sendPartialString(WsRemoteEndpointImplBase.java:239)
> at org.apache.tomcat.websocket.WsRemoteEndpointImplBase.sendString(WsRemoteEndpointImplBase.java:182)
> at org.apache.tomcat.websocket.WsRemoteEndpointBasic.sendText(WsRemoteEndpointBasic.java:37)
> {code}
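The two stacks above acquire the HeapByteBuffer monitor and the endpoint's Object monitor in opposite orders, which is the classic lock-order inversion. A minimal illustration (not JBoss Web code; names here are invented) of the inversion-free discipline, where every path takes the two locks in the same fixed order:

```java
// Hypothetical sketch: both entry points lock bufferLock before messageLock,
// so no cycle in the lock graph (and hence no deadlock) is possible.
class LockOrdering {
    private final Object bufferLock = new Object();
    private final Object messageLock = new Object();
    private int writes = 0;

    void onWritePossible() {
        synchronized (bufferLock) {      // fixed order: buffer first
            synchronized (messageLock) { // ...then message state
                writes++;
            }
        }
    }

    void doWrite() {
        synchronized (bufferLock) {      // same order as onWritePossible
            synchronized (messageLock) {
                writes++;
            }
        }
    }

    int writes() { return writes; }
}
```

With the consistent order, two threads hammering `onWritePossible()` and `doWrite()` concurrently always make progress, whereas the opposite-order variant in the dump can park each thread waiting on the monitor the other holds.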