[JBoss JIRA] (WFLY-5432) Wildfly SOAP webservice causes JVM crash
by Alessio Soldano (JIRA)
[ https://issues.jboss.org/browse/WFLY-5432?page=com.atlassian.jira.plugin.... ]
Alessio Soldano commented on WFLY-5432:
---------------------------------------
I've tried but could not reproduce this issue (I'm on Fedora, Oracle JDK 1.8.0_40). I had to modify SubmitOrder in your war because the reference to the wsdl was broken (the war as-is fails deployment on WildFly); I used a simple @WebService annotation in it instead, hence a fully code-first approach.
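For reference, a minimal code-first endpoint of the kind described (a sketch; the class and method names are hypothetical, modelled on the SubmitOrder service and the System.out.print call mentioned in the issue):
{code:java}
import javax.jws.WebService;

// Hypothetical minimal code-first endpoint: no wsdlLocation reference,
// so the container generates the WSDL from the class itself.
@WebService
public class SubmitOrder {

    // Mirrors the reporter's description: the operation does nothing
    // beyond printing the incoming request data.
    public String submit(String transId, String opId) {
        System.out.println("Submitted order request received with transId: <"
                + transId + ">, opId: <" + opId + ">, ...");
        return "OK";
    }
}
{code}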
Then I deployed the modified war on a vanilla WildFly 9.0.1.Final (didn't change anything in the configuration) and started the container.
I downloaded SOAPUI 5.2.0 and created a test project using the wsdl of the deployed application. Then I modified the request message to have valid data (no "?") and started the load test with 10 threads, 1ms delay, 1200 sec limit.
I got 1,680,520 successful invocations (no errors/failures) with an average of ~1400 tps (and I could clearly have achieved far better results by tuning logging, etc.).
To be honest, as part of our perf tests, we've gone way further than the numbers reported in this JIRA.
So, unless you can provide a way to reproduce this, I'm not sure how to help here and would likely close this JIRA as "cannot reproduce", sorry.
> Wildfly SOAP webservice causes JVM crash
> ----------------------------------------
>
> Key: WFLY-5432
> URL: https://issues.jboss.org/browse/WFLY-5432
> Project: WildFly
> Issue Type: Bug
> Components: Web Services
> Affects Versions: 9.0.1.Final, 10.0.0.CR2
> Environment: Tested on Mac OS X 10.10.2, JDK 1.8.0_60, WildFly 9.0.1.Final / 10.0.0.CR2
> Also tested on CentOS 6.5, JDK 1.7.0_51, WildFly 9.0.1.Final
> Reporter: Davide Marchetti
> Assignee: Alessio Soldano
> Priority: Blocker
> Labels: crash, jvm, soap, webservice
> Attachments: SOAPTest.war
>
>
> The JVM crashes with either a sigsegv or sigbus error when performing a SOAP load test.
> The error printed is:
> # A fatal error has been detected by the Java Runtime Environment:
> #
> # SIGSEGV (0xb) at pc=0x00007f5da922afb8, pid=75645, tid=140039627204352
> #
> # JRE version: OpenJDK Runtime Environment (7.0_51-b02) (build 1.7.0_51-mockbuild_2014_01_15_01_39-b00)
> # Java VM: OpenJDK 64-Bit Server VM (24.45-b08 mixed mode linux-amd64 compressed oops)
> # Problematic frame:
> # J org.apache.cxf.message.ExchangeImpl.get(Ljava/lang/Class;)Ljava/lang/Object;
> #
> # Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
> #
>
> #
> # If you would like to submit a bug report, please include
> # instructions on how to reproduce the bug and visit:
> #   http://icedtea.classpath.org/bugzilla
> #
> (Interleaved application log line: 15:56:03,314 INFO [stdout] (default task-17) Submitted order request received with transId: <XXXXX>, opId: <SCO>, ...)
> The webservice does nothing; it just calls System.out.print("xxxxx").
[JBoss JIRA] (WFLY-5438) JUnit version mismatch
by Josef Cacek (JIRA)
Josef Cacek created WFLY-5438:
---------------------------------
Summary: JUnit version mismatch
Key: WFLY-5438
URL: https://issues.jboss.org/browse/WFLY-5438
Project: WildFly
Issue Type: Bug
Components: Build System
Reporter: Josef Cacek
Assignee: Paul Gier
Priority: Critical
The JUnit dependency defined in WildFly Core is {{4.11}}, but the current WildFly-Arquillian version depends on version {{4.12}}. This can lead to API incompatibilities and project compilation problems (as happened, for instance, in [PR 8200|https://github.com/wildfly/wildfly/pull/8200#issuecomment-143217516]).
From my PoV, the best solution is to upgrade the JUnit version in WF-CORE or to exclude junit:junit from the wildfly-core dependency; a sketch of the exclusion is below.
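For illustration, the exclusion variant in a consumer pom.xml could look like this (a sketch; the artifactId shown is a hypothetical placeholder for whichever wildfly-core module pulls in junit:junit transitively):
{code:xml}
<!-- Hypothetical consumer POM fragment: exclude the transitive
     junit:junit 4.11 from a wildfly-core artifact so the 4.12
     version required by WildFly-Arquillian is used instead. -->
<dependency>
    <groupId>org.wildfly.core</groupId>
    <artifactId>wildfly-core-test-module</artifactId>
    <version>2.0.0.CR5</version>
    <exclusions>
        <exclusion>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
        </exclusion>
    </exclusions>
</dependency>
{code}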
[JBoss JIRA] (WFLY-5437) Stale data after cluster wide rebalance with remote EJB invocations
by Richard Janík (JIRA)
Richard Janík created WFLY-5437:
-----------------------------------
Summary: Stale data after cluster wide rebalance with remote EJB invocations
Key: WFLY-5437
URL: https://issues.jboss.org/browse/WFLY-5437
Project: WildFly
Issue Type: Bug
Components: Clustering
Reporter: Richard Janík
Assignee: Paul Ferraro
Hi,
we're occasionally getting stale data on remote EJB invocations (a number counter returns a value 1 lower than expected; see the example and the counter-bean sketch below). This is usually preceded (by ~6 seconds) by a cluster-wide rebalance after a node is brought back from the dead.
- 2000 clients, stale data is uncommon
- requests from a single client are separated by a 4 second window.
An example of stale data:
{code}
2015/08/28 12:45:11:868 EDT [WARN ][Runner - 553] HOST perf17.mw.lab.eng.bos.redhat.com:rootProcess:c - Error sampling data: <org.jboss.smartfrog.loaddriver.RequestProcessingException: Response serial does not match. Expected: 87, received: 86, runner: 553.>
{code}
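For illustration only, the kind of clustered counter bean such a test exercises might look like the following (hypothetical code; the actual clusterbench bean may differ). Each invocation increments and returns a serial, so failing over to a node whose replicated session state is one write behind yields exactly the off-by-one mismatch above:
{code:java}
import javax.ejb.Remote;
import javax.ejb.Stateful;

// Hypothetical remote view of the counter.
@Remote
interface CounterRemote {
    int incrementAndGet();
}

// Hypothetical stateful bean; in WildFly its session state is replicated
// across the cluster. A client that fails over to a node which has not
// yet received the latest replicated write observes the previous serial.
@Stateful
public class CounterBean implements CounterRemote {

    private int serial;

    @Override
    public int incrementAndGet() {
        return ++serial;
    }
}
{code}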
Server side log excerpt about rebalance:
{code}
[JBossINF] [0m[0m12:45:02,780 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,ee,perf21) ISPN000094: Received new cluster view for channel web: [perf21|7] (4) [perf21, perf20, perf18, perf19]
[JBossINF] [0m[0m12:45:02,781 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,ee,perf21) ISPN000094: Received new cluster view for channel ejb: [perf21|7] (4) [perf21, perf20, perf18, perf19]
[JBossINF] [0m[0m12:45:03,660 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t6) ISPN000310: Starting cluster-wide rebalance for cache repl, topology CacheTopology{id=12, rebalanceId=5, currentCH=ReplicatedConsistentHash{ns = 60, owners = (3)[perf21: 20, perf20: 20, perf18: 20]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (4)[perf21: 15, perf20: 15, perf18: 15, perf19: 15]}, unionCH=null, actualMembers=[perf21, perf20, perf18, perf19]}
[JBossINF] [0m[0m12:45:03,660 INFO [org.infinispan.CLUSTER] (remote-thread--p4-t16) ISPN000310: Starting cluster-wide rebalance for cache dist, topology CacheTopology{id=16, rebalanceId=7, currentCH=DefaultConsistentHash{ns=80, owners = (3)[perf21: 27+26, perf20: 26+28, perf18: 27+26]}, pendingCH=DefaultConsistentHash{ns=80, owners = (4)[perf21: 20+20, perf20: 20+20, perf18: 20+20, perf19: 20+20]}, unionCH=null, actualMembers=[perf21, perf20, perf18, perf19]}
[JBossINF] [0m[0m12:45:03,663 INFO [org.infinispan.CLUSTER] (remote-thread--p4-t19) ISPN000310: Starting cluster-wide rebalance for cache clusterbench-ee7.ear.clusterbench-ee7-web-default.war, topology CacheTopology{id=16, rebalanceId=7, currentCH=DefaultConsistentHash{ns=80, owners = (3)[perf21: 27+26, perf20: 26+28, perf18: 27+26]}, pendingCH=DefaultConsistentHash{ns=80, owners = (4)[perf21: 20+20, perf20: 20+20, perf18: 20+20, perf19: 20+20]}, unionCH=null, actualMembers=[perf21, perf20, perf18, perf19]}
[JBossINF] [0m[0m12:45:03,664 INFO [org.infinispan.CLUSTER] (remote-thread--p4-t18) ISPN000310: Starting cluster-wide rebalance for cache clusterbench-ee7.ear.clusterbench-ee7-web-passivating.war, topology CacheTopology{id=16, rebalanceId=7, currentCH=DefaultConsistentHash{ns=80, owners = (3)[perf21: 27+26, perf20: 26+28, perf18: 27+26]}, pendingCH=DefaultConsistentHash{ns=80, owners = (4)[perf21: 20+20, perf20: 20+20, perf18: 20+20, perf19: 20+20]}, unionCH=null, actualMembers=[perf21, perf20, perf18, perf19]}
[JBossINF] [0m[0m12:45:03,759 INFO [org.infinispan.CLUSTER] (remote-thread--p4-t18) ISPN000336: Finished cluster-wide rebalance for cache clusterbench-ee7.ear.clusterbench-ee7-web-passivating.war, topology id = 16
[JBossINF] [0m[0m12:45:03,820 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t7) ISPN000336: Finished cluster-wide rebalance for cache repl, topology id = 12
[JBossINF] [0m[0m12:45:03,832 INFO [org.infinispan.CLUSTER] (remote-thread--p4-t18) ISPN000336: Finished cluster-wide rebalance for cache dist, topology id = 16
[JBossINF] [0m[0m12:45:03,958 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t3) ISPN000310: Starting cluster-wide rebalance for cache clusterbench-ee7.ear/clusterbench-ee7-ejb.jar, topology CacheTopology{id=12, rebalanceId=5, currentCH=ReplicatedConsistentHash{ns = 60, owners = (3)[perf21: 20, perf20: 20, perf18: 20]}, pendingCH=ReplicatedConsistentHash{ns = 60, owners = (4)[perf21: 15, perf20: 15, perf18: 15, perf19: 15]}, unionCH=null, actualMembers=[perf21, perf20, perf18, perf19]}
[JBossINF] [0m[0m12:45:04,760 INFO [org.infinispan.CLUSTER] (remote-thread--p4-t18) ISPN000336: Finished cluster-wide rebalance for cache clusterbench-ee7.ear.clusterbench-ee7-web-default.war, topology id = 16
[JBossINF] [0m[0m12:45:06,331 INFO [org.infinispan.CLUSTER] (remote-thread--p3-t9) ISPN000336: Finished cluster-wide rebalance for cache clusterbench-ee7.ear/clusterbench-ee7-ejb.jar, topology id = 12
{code}
And a link to our jobs if you're interested:
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/eap-7x-failover-ejb-...
This behavior has been observed with both the jvmkill and undeploy scenarios, on REPL-SYNC, REPL-ASYNC and DIST-SYNC caches.
[JBoss JIRA] (WFCORE-1023) embed-host-controller doesn't test the parameter value emptiness
by Petr Kremensky (JIRA)
Petr Kremensky created WFCORE-1023:
--------------------------------------
Summary: embed-host-controller doesn't test the parameter value emptiness
Key: WFCORE-1023
URL: https://issues.jboss.org/browse/WFCORE-1023
Project: WildFly Core
Issue Type: Feature Request
Components: CLI
Affects Versions: 2.0.0.CR5
Reporter: Petr Kremensky
Assignee: Alexey Loubyansky
Priority: Minor
embed-host-controller doesn't check whether a value was actually supplied for its configuration-file parameters, while embed-server does:
{noformat}
[disconnected /] embed-server --server-config=
Cannot start embedded server: The --server-config (or -c) parameter requires a value.
[disconnected /] embed-server -c=
Cannot start embedded server: The --server-config (or -c) parameter requires a value.
[disconnected /] embed-host-controller --host-config=
[domain@embedded /]
[disconnected /] embed-host-controller --domain-config=
[domain@embedded /]
[disconnected /] embed-host-controller -c=
[domain@embedded /]
{noformat}
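The missing check is presumably a simple emptiness test on the parsed argument value, along these lines (a hedged sketch; the class and method names are hypothetical, not the actual WildFly Core CLI code):
{code:java}
// Hypothetical helper mirroring the validation that embed-server
// already performs in the transcript above.
final class ConfigArgValidator {

    static String requireValue(String paramName, String value) {
        if (value == null || value.isEmpty()) {
            throw new IllegalArgumentException(
                    "Cannot start embedded host controller: the " + paramName
                            + " parameter requires a value.");
        }
        return value;
    }
}

// Usage sketch inside the embed-host-controller handler:
//   String hostConfig = ConfigArgValidator.requireValue("--host-config (or -c)", rawValue);
{code}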
Follow-up to WFCORE-933.