[jboss-jira] [JBoss JIRA] (WFLY-5822) Clustering performance regression in ejbremote-dist-sync scenario

Richard Achmatowicz (JIRA) issues at jboss.org
Wed Feb 3 19:13:00 EST 2016


    [ https://issues.jboss.org/browse/WFLY-5822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13158550#comment-13158550 ] 

Richard Achmatowicz edited comment on WFLY-5822 at 2/3/16 7:12 PM:
-------------------------------------------------------------------

After two days, two labs, four jobs, one 24-hour soak test, and two reboots of the perf lab machines, I have some 4-node timing results from my ad hoc timing jobs:

{noformat}
EAP 7.x - 4 nodes - JDK 8
-----------------
-bash-4.2$  cat eap-7x-perf22-timing.txt | awk '{print $5}' | awk '{count+=$1;recs+=1} END {print count/recs " " recs}'
96.4953 8102435
-bash-4.2$  cat eap-7x-perf23-timing.txt | awk '{print $5}' | awk '{count+=$1;recs+=1} END {print count/recs " " recs}'
94.9006 8118185
-bash-4.2$  cat eap-7x-perf24-timing.txt | awk '{print $5}' | awk '{count+=$1;recs+=1} END {print count/recs " " recs}'
95.6555 8043827
-bash-4.2$  cat eap-7x-perf25-timing.txt | awk '{print $5}' | awk '{count+=$1;recs+=1} END {print count/recs " " recs}'
93.8335 8067639
{noformat}
The job:
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/mvinkler_eap-7x-stress-ejbremote-dist-sync-custom-4nodes/2/
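
As a side note, the two-stage awk pipeline above simply averages column 5 of each timing file and counts the records; assuming that column layout, it collapses into a single pass:
{noformat}
# Mean of column 5 plus record count in one pass (same output as the pipeline above)
awk '{sum += $5; n++} END {print sum/n " " n}' eap-7x-perf22-timing.txt
{noformat}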

{noformat}
EAP 6.x - 4 nodes - JDK 8
-----------------
-bash-4.2$  cat eap-6x-perf22-timing.txt | awk '{print $5}' | awk '{count+=$1;recs+=1} END {print count/recs " " recs}'
101.525 5788487
-bash-4.2$  cat eap-6x-perf23-timing.txt | awk '{print $5}' | awk '{count+=$1;recs+=1} END {print count/recs " " recs}'
99.5138 5805899
-bash-4.2$  cat eap-6x-perf24-timing.txt | awk '{print $5}' | awk '{count+=$1;recs+=1} END {print count/recs " " recs}'
100.601 5753810
-bash-4.2$  cat eap-6x-perf25-timing.txt | awk '{print $5}' | awk '{count+=$1;recs+=1} END {print count/recs " " recs}'
100.747 5697932
{noformat}
The job (run with JDK 8):
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/mvinkler_eap-6x-stress-ejbremote-dist-sync-custom-4nodes/6/
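
Rolling up the four runs (values copied from the outputs above), the per-invocation average comes out roughly 5% lower on EAP 7.x, which also logged about 40% more invocations per run (~8.08M vs ~5.76M records):
{noformat}
# One-off arithmetic check on the per-run means quoted above
awk 'BEGIN {
    e7 = (96.4953 + 94.9006 + 95.6555 + 93.8335) / 4;   # ~95.22
    e6 = (101.525 + 99.5138 + 100.601 + 100.747) / 4;   # ~100.60
    printf "EAP 7.x: %.2f  EAP 6.x: %.2f  delta: %.1f%%\n", e7, e6, 100 * (e6 - e7) / e6;
}'
{noformat}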

Throughput comparison:
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/mvinkler_eap-7x-stress-ejbremote-dist-sync-custom-4nodes/2/artifact/report/graph-throughput.png
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/mvinkler_eap-6x-stress-ejbremote-dist-sync-custom-4nodes/6/artifact/report/graph-throughput.png

Response time comparison:
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/mvinkler_eap-7x-stress-ejbremote-dist-sync-custom-4nodes/2/artifact/report/graph-reponse-times.png
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/mvinkler_eap-6x-stress-ejbremote-dist-sync-custom-4nodes/6/artifact/report/graph-reponse-times.png



> Clustering performance regression in ejbremote-dist-sync scenario 
> ------------------------------------------------------------------
>
>                 Key: WFLY-5822
>                 URL: https://issues.jboss.org/browse/WFLY-5822
>             Project: WildFly
>          Issue Type: Bug
>          Components: Clustering, EJB
>    Affects Versions: 10.0.0.CR5
>            Reporter: Michal Vinkler
>            Assignee: Richard Achmatowicz
>            Priority: Critical
>
> Compared to EAP 6, all SYNC scenarios have the same or better performance except for this one; I wonder why?
> Compare these results:
> stress-ejbremote-dist-sync
> 7.0.0.ER2: [throughput|http://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/eap-7x-stress-ejbremote-dist-sync/4/artifact/report/graph-throughput.png]
> 6.4.0.GA: [throughput|http://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/eap-6x-stress-ejbremote-dist-sync_noperf21/1/artifact/report/graph-throughput.png]
> ---------------------------------------
> Just for comparison: the ejbremote REPL_SYNC scenario, on the other hand, *performs well*:
> stress-ejbremote-repl-sync
> 7.0.0.ER2: [throughput|http://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/eap-7x-stress-ejbremote-repl-sync/3/artifact/report/graph-throughput.png]
> 6.4.0.GA: [throughput|http://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/eap-6x-stress-ejbremote-repl-sync_noperf21/2/artifact/report/graph-throughput.png]
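
For anyone trying to reproduce the gap, the two scenarios differ only in the Infinispan cache backing the ejb cache container (distributed vs. replicated, both SYNC). A rough jboss-cli sketch for inspecting the distributed cache's mode; the container/cache names here ("ejb"/"dist") follow the default ha profile and may not match the stress-job configuration:
{noformat}
# Read the replication mode of the distributed cache backing stateful session beans
# ("ejb" and "dist" are the default ha-profile names - adjust to the actual setup)
$JBOSS_HOME/bin/jboss-cli.sh -c --command="/subsystem=infinispan/cache-container=ejb/distributed-cache=dist:read-attribute(name=mode)"

# The dist-sync scenario corresponds to mode=SYNC on that cache (server reload required)
$JBOSS_HOME/bin/jboss-cli.sh -c --command="/subsystem=infinispan/cache-container=ejb/distributed-cache=dist:write-attribute(name=mode,value=SYNC)"
{noformat}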



--
This message was sent by Atlassian JIRA
(v6.4.11#64026)

