[JBoss JIRA] (WFLY-3369) Performance issue of jaxws-client on JDK 1.7.0_55
by Jim Ma (JIRA)
[ https://issues.jboss.org/browse/WFLY-3369?page=com.atlassian.jira.plugin.... ]
Jim Ma commented on WFLY-3369:
------------------------------
It turns out the root cause is a JDK 7 performance regression introduced in jdk7u40. When HttpURLConnection.setFixedLengthStreamingMode() is called before connect, jdk7u55 is about 6 times slower than jdk7u25 to respond to HttpURLConnection.getOutputStream(). I've reported this issue to Oracle and am waiting for the JDK team's response. Once the JDK bug is confirmed and created, I'll let you know.
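For reference, a minimal sketch (hypothetical endpoint and payload; not the CXF transport code itself) of the call pattern that exhibits the slowdown:
{code}
import java.net.HttpURLConnection;
import java.net.URL;

public class StreamingModeCheck {
    public static void main(String[] args) throws Exception {
        byte[] payload = "<soap:Envelope/>".getBytes("UTF-8");
        // Hypothetical endpoint; any HTTP server accepting a POST will do.
        URL url = new URL("http://localhost:8080/example-ws");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        // Must be set before connecting; this is the step that triggers the
        // regression on jdk7u40 and later.
        conn.setFixedLengthStreamingMode(payload.length);
        long start = System.nanoTime();
        conn.getOutputStream().write(payload); // ~6x slower on 7u55 than on 7u25 per the measurements above
        System.out.printf("write took %d us%n", (System.nanoTime() - start) / 1000);
        conn.getInputStream().close();
    }
}
{code}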
> Performance issue of jaxws-client on JDK 1.7.0_55
> -------------------------------------------------
>
> Key: WFLY-3369
> URL: https://issues.jboss.org/browse/WFLY-3369
> Project: WildFly
> Issue Type: Bug
> Security Level: Public(Everyone can see)
> Components: Web Services
> Affects Versions: JBoss AS7 7.1.1.Final
> Environment: Red Hat Enterprise 6.4, Solaris 10, JBoss AS 7.1.3
> Reporter: Zhang Boya
> Assignee: Jim Ma
>
> Assume a JAX-WS service is deployed somewhere else, and the developer generates the JAX-WS client-side classes using the wsdl2java tool. These classes have been exported as 'example-ws-client.jar'. When the developer needs to access this JAX-WS service from their web application, this JAR file is published together with the WAR file. By default, JBoss AS supplies an instance of CXF's implementation for the JAX-WS client when the web application is deployed; the class name of this JAX-WS client is 'org.apache.cxf.jaxws.JaxWsClientProxy'. Client-side JAX-WS invocations have shown very bad performance since I upgraded the JDK from 1.7.0_25 to 1.7.0_55, especially on Solaris.
--
This message was sent by Atlassian JIRA
(v6.2.3#6260)
[JBoss JIRA] (DROOLS-509) Constraint method is resolved erroneously when potential ambiguities exist
by Davide Sottara (JIRA)
Davide Sottara created DROOLS-509:
-------------------------------------
Summary: Constraint method is resolved erroneously when potential ambiguities exist
Key: DROOLS-509
URL: https://issues.jboss.org/browse/DROOLS-509
Project: Drools
Issue Type: Bug
Security Level: Public (Everyone can see)
Affects Versions: 6.1.0.Beta4, 5.6.0.Final
Reporter: Davide Sottara
Assignee: Mario Fusco
Priority: Critical
Fix For: 6.1.0.CR1
Assume:
Class X implements I1
Interface I1 extends I0
I0 defines the property foo
Write a rule:
I1( foo == .. )
and insert an instance of class X.
The condition analyzer will look in classes first, rather than interfaces, assuming that "foo" is provided by X.
The next time an instance of some class Y implementing I1 is inserted, a ClassCastException will be thrown, since an X is expected.
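A minimal sketch of a hierarchy that reproduces this (hypothetical class and property names):
{code}
// Hypothetical types illustrating the report above.
interface I0 {
    String getFoo();            // property "foo" declared on the root interface
}

interface I1 extends I0 { }

class X implements I1 {
    public String getFoo() { return "x"; }
}

class Y implements I1 {
    public String getFoo() { return "y"; }
}

// DRL pattern written against the interface:
//   rule "r" when I1( foo == "x" ) then ... end
//
// Inserting an X first makes the analyzer resolve "foo" against class X;
// inserting a Y afterwards then fails with a ClassCastException, because the
// cached accessor expects an X rather than any I1.
{code}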
--
This message was sent by Atlassian JIRA
(v6.2.3#6260)
[JBoss JIRA] (WFLY-3425) missing persistence unit error unclear
by Caleb Cushing (JIRA)
[ https://issues.jboss.org/browse/WFLY-3425?page=com.atlassian.jira.plugin.... ]
Caleb Cushing commented on WFLY-3425:
-------------------------------------
In my case this is the result of the deployment being unable to find persistence.xml at all; the error should reflect that. I did not have it located properly on the classpath (because ShrinkWrap is confusing).
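For anyone hitting the same thing, a minimal ShrinkWrap sketch (hypothetical archive name) showing where the descriptor has to end up:
{code}
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.WebArchive;

public class Deployments {
    public static WebArchive create() {
        return ShrinkWrap.create(WebArchive.class, "test.war")
                // (add your entity and bean classes here with addClass/addClasses)
                // For a WAR the classpath root is WEB-INF/classes, so this puts
                // the descriptor at WEB-INF/classes/META-INF/persistence.xml,
                // where the deployer actually looks for it:
                .addAsResource("test-persistence.xml", "META-INF/persistence.xml")
                // CDI injection of the persistence unit also needs a beans.xml:
                .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml");
    }
}
{code}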
> missing persistence unit error unclear
> --------------------------------------
>
> Key: WFLY-3425
> URL: https://issues.jboss.org/browse/WFLY-3425
> Project: WildFly
> Issue Type: Feature Request
> Security Level: Public(Everyone can see)
> Reporter: Caleb Cushing
> Assignee: Jason Greene
>
> Caused by: java.lang.IllegalArgumentException: JBAS016069: Error injecting persistence unit into CDI managed bean. Can't find a persistence unit named in deployment c11651d6-a1da-4f69-8b09-07224801d593.war
> So does this error mean it can't find my persistence.xml? Does it mean it can't find, on the server, the persistence unit named in my XML? I notice that there are two spaces between "named" and "in", so I'm guessing a name would normally be inserted there; perhaps the log entry should quote the name ('') so it's obvious whether something should be there. If this means it can't find my persistence.xml, it should say so.
--
This message was sent by Atlassian JIRA
(v6.2.3#6260)
[JBoss JIRA] (WFLY-3425) missing persistence unit error unclear
by Caleb Cushing (JIRA)
Caleb Cushing created WFLY-3425:
-----------------------------------
Summary: missing persistence unit error unclear
Key: WFLY-3425
URL: https://issues.jboss.org/browse/WFLY-3425
Project: WildFly
Issue Type: Feature Request
Security Level: Public (Everyone can see)
Reporter: Caleb Cushing
Assignee: Jason Greene
Caused by: java.lang.IllegalArgumentException: JBAS016069: Error injecting persistence unit into CDI managed bean. Can't find a persistence unit named in deployment c11651d6-a1da-4f69-8b09-07224801d593.war
So does this error mean it can't find my persistence.xml? Does it mean it can't find, on the server, the persistence unit named in my XML? I notice that there are two spaces between "named" and "in", so I'm guessing a name would normally be inserted there; perhaps the log entry should quote the name ('') so it's obvious whether something should be there. If this means it can't find my persistence.xml, it should say so.
--
This message was sent by Atlassian JIRA
(v6.2.3#6260)
[JBoss JIRA] (WFLY-3421) Rehashing on view change can result in premature session/ejb expiration
by Paul Ferraro (JIRA)
[ https://issues.jboss.org/browse/WFLY-3421?page=com.atlassian.jira.plugin.... ]
Paul Ferraro updated WFLY-3421:
-------------------------------
Description:
Session/ejb expiration is scheduled only on the owning node of a given session/ejb. When a node leaves, each node that assumes ownership of the sessions/ejbs previously owned by the leaving node schedules expiration of those sessions. However, a view change can also lead to ownership changes for any session/ejb. We are not currently handling this properly: if a session/ejb changes ownership, the expiration scheduling is never cancelled, and that session/ejb will expire prematurely unless the node reacquires ownership. When using sticky sessions, this issue is not apparent, since subsequent requests will be directed to the previous owner, which will cancel expiration on the old owner and reschedule it on the new owner properly. However, this will be a problem for web sessions if sticky sessions are disabled - and for @Stateful EJBs, if the EJB client receives updated affinity information prior to subsequent requests.
There are at least 2 ways to address this:
# When a request arrives for an existing session/ejb, we immediately cancel any scheduled expiration/eviction. This is currently a unicast, which typically results in a local call - but can go remote if the ownership has changed. Making this a cluster-wide broadcast would fix the issue.
# We can allow the scheduler to expose the set of keys that are currently scheduled, and, on topology change, cancel those sessions/ejbs for which the current node is no longer the owner - and reschedule them on the new owner.
Option 1 adds an additional cluster-wide RPC per request.
Option 2 adds N*(N-1) unicast RPCs per view change, where N is the cluster size (i.e. each node sends one RPC to every other node containing the set of session/ejb IDs to schedule for expiration).
Option 2 is the less invasive of the two solutions.
EDIT: There is a 3rd option: modify the expiration tasks such that they skip expiration if the session/ejb is not owned by the current node. This prevents the premature expiration issue, but we need some additional strategy to reschedule the session/ejb expiration on the current owner.
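A rough sketch of what option 2, with option 3's guard, could look like (hypothetical interfaces, not the actual WildFly clustering SPI):
{code}
import java.util.HashSet;
import java.util.Set;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Hypothetical types for illustration only; the real scheduler/ownership SPI differs.
final class ExpirationRescheduler<K> {
    private final Set<K> scheduled = new HashSet<>();  // keys scheduled on this node
    private final Predicate<K> isLocal;                // does this node own the key?
    private final Consumer<K> scheduleOnNewOwner;      // unicast RPC to the new owner

    ExpirationRescheduler(Predicate<K> isLocal, Consumer<K> scheduleOnNewOwner) {
        this.isLocal = isLocal;
        this.scheduleOnNewOwner = scheduleOnNewOwner;
    }

    void schedule(K key) { scheduled.add(key); }

    // Option 2: on a topology change, cancel every locally scheduled key that we
    // no longer own and hand it to its new owner (batched per node in practice).
    void onTopologyChange() {
        Set<K> transferred = new HashSet<>();
        for (K key : scheduled) {
            if (!isLocal.test(key)) {
                transferred.add(key);
            }
        }
        scheduled.removeAll(transferred);          // cancel local expiration
        transferred.forEach(scheduleOnNewOwner);   // reschedule on the new owner
    }

    // Option 3's guard: even if a stale task fires, never expire a non-local key.
    boolean shouldExpire(K key) {
        return isLocal.test(key) && scheduled.contains(key);
    }
}
{code}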
was:
Session/ejb expiration is scheduled only on the owning node of a given session/ejb. When a node leaves, each node that assumes ownership of the sessions/ejbs previously owned by the leaving node schedules expiration of those sessions. However, a view change can also lead to ownership changes for any session/ejb. We are not currently handling this properly: if a session/ejb changes ownership, the expiration scheduling is never cancelled, and that session/ejb will expire prematurely unless the node reacquires ownership. When using sticky sessions, this issue is not apparent, since subsequent requests will be directed to the previous owner, which will cancel expiration on the old owner and reschedule it on the new owner properly. However, this will be a problem for web sessions if sticky sessions are disabled - and for @Stateful EJBs, if the EJB client receives updated affinity information prior to subsequent requests.
There are 2 ways to address this:
# When a request arrives for an existing session/ejb, we immediately cancel any scheduled expiration/eviction. This is currently a unicast, which typically results in a local call - but can go remote if the ownership has changed. Making this a cluster-wide broadcast would fix the issue.
# We can allow the scheduler to expose the set of keys that are currently scheduled, and, on topology change, cancel those sessions/ejbs for which the current node is no longer the owner - and reschedule them on the new owner.
Option 1 adds an additional cluster-wide RPC per request.
Option 2 adds N*(N-1) unicast RPCs per view change, where N is the cluster size (i.e. each node sends one RPC to every other node containing the set of session/ejb IDs to schedule for expiration).
Option 2 is the less invasive solution - so we'll go with that.
> Rehashing on view change can result in premature session/ejb expiration
> -----------------------------------------------------------------------
>
> Key: WFLY-3421
> URL: https://issues.jboss.org/browse/WFLY-3421
> Project: WildFly
> Issue Type: Bug
> Security Level: Public(Everyone can see)
> Components: Clustering
> Affects Versions: 8.1.0.CR2
> Reporter: Paul Ferraro
> Assignee: Paul Ferraro
> Priority: Critical
> Fix For: 8.1.0.Final, 9.0.0.Alpha1
>
>
> Session/ejb expiration is scheduled only on the owning node of a given session/ejb. When a node leaves, each node that assumes ownership of the sessions/ejbs previously owned by the leaving node schedules expiration of those sessions. However, a view change can also lead to ownership changes for any session/ejb. We are not currently handling this properly: if a session/ejb changes ownership, the expiration scheduling is never cancelled, and that session/ejb will expire prematurely unless the node reacquires ownership. When using sticky sessions, this issue is not apparent, since subsequent requests will be directed to the previous owner, which will cancel expiration on the old owner and reschedule it on the new owner properly. However, this will be a problem for web sessions if sticky sessions are disabled - and for @Stateful EJBs, if the EJB client receives updated affinity information prior to subsequent requests.
> There are at least 2 ways to address this:
> # When a request arrives for an existing session/ejb, we immediately cancel any scheduled expiration/eviction. This is currently a unicast, which typically results in a local call - but can go remote if the ownership has changed. Making this a cluster-wide broadcast would fix the issue.
> # We can allow the scheduler to expose the set of keys that are currently scheduled, and, on topology change, cancel those sessions/ejbs for which the current node is no longer the owner - and reschedule them on the new owner.
> Option 1 adds an additional cluster-wide RPC per request.
> Option 2 adds N*(N-1) unicast RPCs per view change, where N is the cluster size (i.e. each node sends one RPC to every other node containing the set of session/ejb IDs to schedule for expiration).
> Option 2 is the less invasive of the two solutions.
> EDIT: There is a 3rd option: modify the expiration tasks such that they skip expiration if the session/ejb is not owned by the current node. This prevents the premature expiration issue, but we need some additional strategy to reschedule the session/ejb expiration on the current owner.
--
This message was sent by Atlassian JIRA
(v6.2.3#6260)
[JBoss JIRA] (WFLY-3424) The JBeret ArtifactFactory needs to override the destroy method
by James Perkins (JIRA)
James Perkins created WFLY-3424:
-----------------------------------
Summary: The JBeret ArtifactFactory needs to override the destroy method
Key: WFLY-3424
URL: https://issues.jboss.org/browse/WFLY-3424
Project: WildFly
Issue Type: Bug
Security Level: Public (Everyone can see)
Components: Batch
Reporter: James Perkins
Assignee: James Perkins
The {{org.wildfly.jberet.WildFlyArtifactFactory}} needs to override the {{destroy(Object)}} method to delegate destruction to the container.
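A hedged sketch of one way to do that (hypothetical class shape; the real {{ArtifactFactory}} SPI and the WildFly integration differ in detail): keep the CDI {{CreationalContext}} for each created artifact and release it in {{destroy(Object)}}, which lets the container run {{@PreDestroy}} callbacks and disposers.
{code}
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import javax.enterprise.context.spi.CreationalContext;
import javax.enterprise.inject.spi.Bean;
import javax.enterprise.inject.spi.BeanManager;

// Illustration only; not the actual WildFlyArtifactFactory source.
public class ContainerDelegatingArtifactFactory {
    private final BeanManager beanManager;
    // Remember the context each artifact instance was created with.
    private final Map<Object, CreationalContext<?>> contexts = new ConcurrentHashMap<>();

    public ContainerDelegatingArtifactFactory(BeanManager beanManager) {
        this.beanManager = beanManager;
    }

    public Object create(Class<?> artifactClass) {
        Set<Bean<?>> beans = beanManager.getBeans(artifactClass);
        Bean<?> bean = beanManager.resolve(beans);
        CreationalContext<?> ctx = beanManager.createCreationalContext(bean);
        Object instance = beanManager.getReference(bean, artifactClass, ctx);
        contexts.put(instance, ctx);
        return instance;
    }

    // The missing override: delegate destruction to the container by releasing
    // the creational context instead of leaking the instance.
    public void destroy(Object instance) {
        CreationalContext<?> ctx = contexts.remove(instance);
        if (ctx != null) {
            ctx.release();
        }
    }
}
{code}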
--
This message was sent by Atlassian JIRA
(v6.2.3#6260)
[JBoss JIRA] (WFLY-3419) beans in module jars are not discovered except the first jar in module
by Jason Greene (JIRA)
[ https://issues.jboss.org/browse/WFLY-3419?page=com.atlassian.jira.plugin.... ]
Jason Greene commented on WFLY-3419:
------------------------------------
[~swd847] and I were chatting, and we think this restriction should probably be removed: a module is viewed as a single entity in all other respects, and the resources are just an implementation detail.
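To illustrate, a sketch of a module.xml with multiple resource roots (hypothetical module and JAR names); per this report, only beans in the first resource root are discovered:
{code}
<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.3" name="com.example.multijars">
    <resources>
        <resource-root path="example-a.jar"/> <!-- beans here are discovered -->
        <resource-root path="example-b.jar"/> <!-- beans here are currently missed -->
    </resources>
    <dependencies>
        <module name="javax.enterprise.api"/>
    </dependencies>
</module>
{code}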
> beans in module jars are not discovered except the first jar in module
> ----------------------------------------------------------------------
>
> Key: WFLY-3419
> URL: https://issues.jboss.org/browse/WFLY-3419
> Project: WildFly
> Issue Type: Bug
> Security Level: Public(Everyone can see)
> Components: CDI / Weld
> Affects Versions: 8.0.0.Final, 8.1.0.Final
> Reporter: Petr Sakař
> Assignee: Jason Greene
> Attachments: jboss-module-test.src.zip, module.multiplejars.zip, module.ok.zip, servlet-cdi-test-jar.src.zip, servlet-cdi-test3.src.zip, servlet-cdi-test3.war
>
>
> CDI does not scan a module with multiple JARs; only the first JAR is scanned.
> The workaround is to package everything in a single JAR file.
> To reproduce:
> {CODE}
> #download attached files
> #download and unzip wildfly
> cd wildfly-9.0.0.Alpha1-SNAPSHOT #or other version > 8.0.0
> unzip ../module.multiplejars.zip
> cp ../servlet-cdi-test3.war standalone/deployments/
> ./bin/standalone.sh
> # observe deployment fails
> # stop server
> unzip ../module.ok.zip
> ./bin/standalone.sh
> # observe deployment succeeds
> {CODE}
> The same scenario succeeds in both cases on EAP 6.2.x.
> module.ok.zip was created from module.multiplejars.zip by merging the JAR files into a single one.
--
This message was sent by Atlassian JIRA
(v6.2.3#6260)
[JBoss JIRA] (WFLY-2982) refactor two phase persistence unit services (PhaseOnePersistenceUnitServiceImpl + PersistenceUnitServiceImpl) into one service
by Scott Marlow (JIRA)
[ https://issues.jboss.org/browse/WFLY-2982?page=com.atlassian.jira.plugin.... ]
Scott Marlow commented on WFLY-2982:
------------------------------------
Some of the competing concerns for this task:
* The DataSource may be needed before any application classloaders have been used. The DataSource is currently used when Hibernate is called to create the entity manager factory. For the two phase bootstrap approach, Hibernate accesses the DataSource during the first bootstrap phase. Currently, we do not have a way to prevent application classloaders from being used before the WildFly Install deployment phase. Persistence providers are allowed to rewrite entity classes as per the JPA specification (which has been part of EE since EE 5).
* The CDI bean manager passed to the persistence provider (for entity listeners) cannot be used until the WildFly Install deployment phase.
* Entity class rewriting needs to be supported as per the JPA spec (see above).
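A hedged sketch (hypothetical names; the real MSC service wiring is more involved) of how a single merged service could keep the two bootstrap steps while staying restartable:
{code}
import java.util.concurrent.atomic.AtomicBoolean;

// Illustration only; not the actual MSC service code.
public class TwoPhasePersistenceUnitService {
    private final AtomicBoolean phaseOneDone = new AtomicBoolean();
    private volatile Object entityManagerFactory; // built in phase two

    // Phase 1: the provider may touch the DataSource and register class
    // transformers for entity rewriting; application classes must not load yet.
    public void startPhaseOne() {
        if (!phaseOneDone.compareAndSet(false, true)) {
            throw new IllegalStateException("phase one already started");
        }
        // e.g. validate the DataSource, register the ClassTransformer
    }

    // Phase 2: runs after the Install deployment phase, when the application
    // classloader and the CDI BeanManager are safe to use.
    public void startPhaseTwo() {
        if (!phaseOneDone.get()) {
            throw new IllegalStateException("phase one must complete first");
        }
        // e.g. entityManagerFactory = provider.createContainerEntityManagerFactory(...)
    }

    // Symmetric stop: undo phase two, then phase one, so a restart sees a
    // clean state (the lifecycle-symmetry concern from the quoted feedback).
    public void stop() {
        entityManagerFactory = null;
        phaseOneDone.set(false);
    }
}
{code}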
> refactor two phase persistence unit services (PhaseOnePersistenceUnitServiceImpl + PersistenceUnitServiceImpl) into one service
> -------------------------------------------------------------------------------------------------------------------------------
>
> Key: WFLY-2982
> URL: https://issues.jboss.org/browse/WFLY-2982
> Project: WildFly
> Issue Type: Feature Request
> Security Level: Public(Everyone can see)
> Components: JPA / Hibernate
> Affects Versions: 8.0.0.Final
> Reporter: Scott Marlow
> Assignee: Scott Marlow
> Fix For: 9.0.0.CR1
>
>
> Related feedback from https://github.com/wildfly/wildfly/pull/4722:
> {quote}
> This is asymmetrical: if this service and phase one are stopped, and then phase 1 is started again, phase 2 will appear started. The side effects seem minor, though.
> {quote}
> {quote}
> We have a long term goal to somehow verify that every service can be stopped and started with the expected effect. Right now many of our services are not restartable, which makes potential new features like partial deployment standby hard to achieve. So I'm just offering pointers whenever I see lifecycle symmetry issues.
> {quote}
--
This message was sent by Atlassian JIRA
(v6.2.3#6260)
[JBoss JIRA] (WFLY-3419) beans in module jars are not discovered except the first jar in module
by Stuart Douglas (JIRA)
[ https://issues.jboss.org/browse/WFLY-3419?page=com.atlassian.jira.plugin.... ]
Stuart Douglas commented on WFLY-3419:
--------------------------------------
Looks like one of your files is missing a beans.xml. Every JAR that you want to have picked up needs to have a beans.xml.
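In other words, with a module layout like this (hypothetical names), only JARs carrying the marker file are candidates for bean discovery:
{code}
modules/com/example/multijars/main/
    module.xml
    example-a.jar   <- contains META-INF/beans.xml, beans discovered
    example-b.jar   <- missing META-INF/beans.xml, beans not discovered
{code}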
> beans in module jars are not discovered except the first jar in module
> ----------------------------------------------------------------------
>
> Key: WFLY-3419
> URL: https://issues.jboss.org/browse/WFLY-3419
> Project: WildFly
> Issue Type: Bug
> Security Level: Public(Everyone can see)
> Components: CDI / Weld
> Affects Versions: 8.0.0.Final, 8.1.0.Final
> Reporter: Petr Sakař
> Assignee: Jason Greene
> Attachments: jboss-module-test.src.zip, module.multiplejars.zip, module.ok.zip, servlet-cdi-test-jar.src.zip, servlet-cdi-test3.src.zip, servlet-cdi-test3.war
>
>
> CDI does not scan a module with multiple JARs; only the first JAR is scanned.
> The workaround is to package everything in a single JAR file.
> To reproduce:
> {CODE}
> #download attached files
> #download and unzip wildfly
> cd wildfly-9.0.0.Alpha1-SNAPSHOT #or other version > 8.0.0
> unzip ../module.multiplejars.zip
> cp ../servlet-cdi-test3.war standalone/deployments/
> ./bin/standalone.sh
> # observe deployment fails
> # stop server
> unzip ../module.ok.zip
> ./bin/standalone.sh
> # observe deployment succeeds
> {CODE}
> The same scenario succeeds in both cases on EAP 6.2.x.
> module.ok.zip was created from module.multiplejars.zip by merging the JAR files into a single one.
--
This message was sent by Atlassian JIRA
(v6.2.3#6260)