[ https://issues.jboss.org/browse/WFLY-5822?page=com.atlassian.jira.plugin.... ]
Richard Achmatowicz edited comment on WFLY-5822 at 2/3/16 7:11 PM:
-------------------------------------------------------------------
After two days, two labs, four jobs, one 24-hour soak test, and two reboots of the perf
lab machines, I have some 4-node timing results from my ad hoc timing jobs:
{noformat}
EAP 7.x - 4 nodes - JDK 8
-----------------
-bash-4.2$ cat eap-7x-perf22-timing.txt | awk '{print $5}' | awk '{count+=$1;recs+=1} END {print count/recs " " recs}'
96.4953 8102435
-bash-4.2$ cat eap-7x-perf23-timing.txt | awk '{print $5}' | awk '{count+=$1;recs+=1} END {print count/recs " " recs}'
94.9006 8118185
-bash-4.2$ cat eap-7x-perf24-timing.txt | awk '{print $5}' | awk '{count+=$1;recs+=1} END {print count/recs " " recs}'
95.6555 8043827
-bash-4.2$ cat eap-7x-perf25-timing.txt | awk '{print $5}' | awk '{count+=$1;recs+=1} END {print count/recs " " recs}'
93.8335 8067639
{noformat}
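As an aside, the two-stage pipeline above (print column 5, then average it) can be collapsed into a single awk pass; a minimal equivalent sketch, assuming the timing value is in column 5 as in the commands above:
{noformat}
# Single-pass version: sum column 5 and count records in one awk invocation,
# then print the average and the record count, same as the two-awk pipeline.
awk '{sum += $5; recs += 1} END {print sum/recs " " recs}' eap-7x-perf22-timing.txt
{noformat}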
The job:
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/mvinkler_eap-7x-stre...
{noformat}
EAP 6.x - 4 nodes - JDK 8
-----------------
-bash-4.2$ cat eap-6x-perf22-timing.txt | awk '{print $5}' | awk '{count+=$1;recs+=1} END {print count/recs " " recs}'
101.525 5788487
-bash-4.2$ cat eap-6x-perf23-timing.txt | awk '{print $5}' | awk '{count+=$1;recs+=1} END {print count/recs " " recs}'
99.5138 5805899
-bash-4.2$ cat eap-6x-perf24-timing.txt | awk '{print $5}' | awk '{count+=$1;recs+=1} END {print count/recs " " recs}'
100.601 5753810
-bash-4.2$ cat eap-6x-perf25-timing.txt | awk '{print $5}' | awk '{count+=$1;recs+=1} END {print count/recs " " recs}'
100.747 5697932
{noformat}
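A quick arithmetic summary of the eight runs above (plain shell, using only the averages printed there):
{noformat}
# Mean of the four per-node averages on each side:
echo "96.4953 94.9006 95.6555 93.8335" | awk '{print "EAP 7.x:", ($1+$2+$3+$4)/4}'   # ~95.22
echo "101.525 99.5138 100.601 100.747" | awk '{print "EAP 6.x:", ($1+$2+$3+$4)/4}'   # ~100.60
{noformat}
So in these ad hoc runs the averaged timing value for EAP 7.x comes out about 5% lower than for EAP 6.x, over roughly 40% more records (~32.3M vs ~23.0M in total).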
The job (run with JDK 8):
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/mvinkler_eap-6x-stre...
Throughput comparison:
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/mvinkler_eap-7x-stre...
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/mvinkler_eap-6x-stre...
Response time comparison:
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/mvinkler_eap-7x-stre...
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/mvinkler_eap-6x-stre...
Clustering performance regression in ejbremote-dist-sync scenario
------------------------------------------------------------------
Key: WFLY-5822
URL: https://issues.jboss.org/browse/WFLY-5822
Project: WildFly
Issue Type: Bug
Components: Clustering, EJB
Affects Versions: 10.0.0.CR5
Reporter: Michal Vinkler
Assignee: Richard Achmatowicz
Priority: Critical
Compared to EAP 6, all SYNC scenarios have the same or better performance except for this
one. Why?
Compare these results:
stress-ejbremote-dist-sync
7.0.0.ER2:
[throughput|http://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/eap-7x-str...]
6.4.0.GA:
[throughput|http://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/eap-6x-str...]
---------------------------------------
Just for comparison: the ejbremote REPL_SYNC scenario, on the other hand, *performs well*:
stress-ejbremote-repl-sync
7.0.0.ER2:
[throughput|http://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/eap-7x-str...]
6.4.0.GA:
[throughput|http://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/eap-6x-str...]