[JBoss JIRA] (WFLY-5822) Clustering performance regression in ejbremote-dist-sync scenario
by Richard Achmatowicz (JIRA)
[ https://issues.jboss.org/browse/WFLY-5822?page=com.atlassian.jira.plugin.... ]
Richard Achmatowicz edited comment on WFLY-5822 at 2/3/16 7:10 PM:
-------------------------------------------------------------------
After two days, two labs, four jobs, one 24-hour soak test, and two reboots of the perf lab machines, I have some 4-node timing results from my ad hoc timing jobs. Each awk pipeline below averages the fifth column of a timing log (presumably the per-request time) and prints that mean followed by the number of records processed:
{noformat}
EAP 7.x - 4 nodes - JDK 8
-----------------
-bash-4.2$ cat eap-7x-perf22-timing.txt | awk '{print $5}' | awk '{count+=$1;recs+=1} END {print count/recs " " recs}'
96.4953 8102435
-bash-4.2$ cat eap-7x-perf23-timing.txt | awk '{print $5}' | awk '{count+=$1;recs+=1} END {print count/recs " " recs}'
94.9006 8118185
-bash-4.2$ cat eap-7x-perf24-timing.txt | awk '{print $5}' | awk '{count+=$1;recs+=1} END {print count/recs " " recs}'
95.6555 8043827
-bash-4.2$ cat eap-7x-perf25-timing.txt | awk '{print $5}' | awk '{count+=$1;recs+=1} END {print count/recs " " recs}'
93.8335 8067639
{noformat}
The job:
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/mvinkler_eap-7x-stre...
{noformat}
EAP 6.x - 4 nodes - JDK 8
-----------------
-bash-4.2$ cat eap-6x-perf22-timing.txt | awk '{print $5}' | awk '{count+=$1;recs+=1} END {print count/recs " " recs}'
101.525 5788487
-bash-4.2$ cat eap-6x-perf23-timing.txt | awk '{print $5}' | awk '{count+=$1;recs+=1} END {print count/recs " " recs}'
99.5138 5805899
-bash-4.2$ cat eap-6x-perf24-timing.txt | awk '{print $5}' | awk '{count+=$1;recs+=1} END {print count/recs " " recs}'
100.601 5753810
-bash-4.2$ cat eap-6x-perf25-timing.txt | awk '{print $5}' | awk '{count+=$1;recs+=1} END {print count/recs " " recs}'
100.747 5697932
{noformat}
The job (run with JDK 8):
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/mvinkler_eap-6x-stre...
Throughput comparison:
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/mvinkler_eap-7x-stre...
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/mvinkler_eap-6x-stre...
Response time comparison:
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/mvinkler_eap-7x-stre...
https://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/mvinkler_eap-6x-stre...
In short: across these runs, EAP 7.x averages roughly 94-96 in the timing column over about 8.0-8.1 million records, while EAP 6.x averages roughly 99-102 over about 5.7-5.8 million records.
> Clustering performance regression in ejbremote-dist-sync scenario
> ------------------------------------------------------------------
>
> Key: WFLY-5822
> URL: https://issues.jboss.org/browse/WFLY-5822
> Project: WildFly
> Issue Type: Bug
> Components: Clustering, EJB
> Affects Versions: 10.0.0.CR5
> Reporter: Michal Vinkler
> Assignee: Richard Achmatowicz
> Priority: Critical
>
> Compared to EAP 6, all SYNC scenarios have the same or better performance except for this one; I wonder why?
> Compare these results:
> stress-ejbremote-dist-sync
> 7.0.0.ER2: [throughput|http://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/eap-7x-str...]
> 6.4.0.GA: [throughput|http://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/eap-6x-str...]
> ---------------------------------------
> Just for comparison: the ejbremote REPL_SYNC scenario, on the other hand, *performs well*:
> stress-ejbremote-repl-sync
> 7.0.0.ER2: [throughput|http://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/eap-7x-str...]
> 6.4.0.GA: [throughput|http://jenkins.mw.lab.eng.bos.redhat.com/hudson/job/eap-6x-str...]
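For context on the two scenarios being compared: a dist-sync cache stores each entry on a limited number of owner nodes and rebalances ownership on topology changes, whereas a repl-sync cache keeps a full synchronous copy of every entry on every node. Below is a minimal sketch of that distinction using Infinispan's programmatic configuration API (illustrative only; the class name is made up, and the actual stress scenarios configure these caches through the WildFly infinispan subsystem rather than this API):
{code}
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

// Illustrative sketch only: not the configuration used by the stress jobs.
public class CacheModeSketch {

    // dist-sync: each entry lives on a fixed number of owners (2 here), so a
    // request may need a synchronous remote hop to reach the entry's owner.
    static Configuration distSync() {
        return new ConfigurationBuilder()
                .clustering().cacheMode(CacheMode.DIST_SYNC)
                .hash().numOwners(2)
                .build();
    }

    // repl-sync: every node holds every entry, replicated synchronously, so
    // reads are always local at the cost of writing to all nodes.
    static Configuration replSync() {
        return new ConfigurationBuilder()
                .clustering().cacheMode(CacheMode.REPL_SYNC)
                .build();
    }
}
{code}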
[JBoss JIRA] (WFLY-6121) Distributed SFSB cache entries may rehash to different nodes following topology change
by Paul Ferraro (JIRA)
Paul Ferraro created WFLY-6121:
----------------------------------
Summary: Distributed SFSB cache entries may rehash to different nodes following topology change
Key: WFLY-6121
URL: https://issues.jboss.org/browse/WFLY-6121
Project: WildFly
Issue Type: Bug
Components: Clustering
Affects Versions: 10.0.0.CR5
Reporter: Paul Ferraro
Assignee: Paul Ferraro
A distributed SFSB is stored in two cache entries: one mapping the bean ID to a serialization group ID, and another mapping the group ID to a map of bean instances.
We ensure that both cache entries hash to the local node, but following a topology change, these entries could be remapped to different nodes.
If we made the group ID equal to the bean ID of the first bean in the group, this would solve the problem for 90% of use cases.
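A minimal sketch of the two-entry keying scheme and the proposed change (the class, field, and method names below are illustrative, not the actual WildFly types):
{code}
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: stand-ins for the two distributed cache entries.
public class SfsbGroupKeyingSketch {

    // Entry type 1: bean ID -> serialization group ID
    private final Map<UUID, UUID> beanToGroup = new ConcurrentHashMap<>();
    // Entry type 2: group ID -> bean instances belonging to that group
    private final Map<UUID, Map<UUID, Object>> groupToBeans = new ConcurrentHashMap<>();

    // Current behavior: a randomly generated group ID and the bean ID are two
    // unrelated keys, so a topology change can rehash them to different owners.
    UUID createGroupWithRandomId(UUID firstBeanId, Object firstBean) {
        return createGroup(UUID.randomUUID(), firstBeanId, firstBean);
    }

    // Proposed change: reuse the first bean's ID as the group ID, so both
    // entries share the same key and therefore rehash to the same owner.
    UUID createGroupFromFirstBeanId(UUID firstBeanId, Object firstBean) {
        return createGroup(firstBeanId, firstBeanId, firstBean);
    }

    private UUID createGroup(UUID groupId, UUID beanId, Object bean) {
        beanToGroup.put(beanId, groupId);
        groupToBeans.computeIfAbsent(groupId, k -> new ConcurrentHashMap<>()).put(beanId, bean);
        return groupId;
    }
}
{code}
As the description notes, this only guarantees that the group entry stays collocated with the first bean's entry, which is why it covers roughly 90% of use cases rather than all of them.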
[JBoss JIRA] (WFLY-5904) A deadlock can result if an EJB is being passivated at the same time an invocation is made
by Paul Ferraro (JIRA)
[ https://issues.jboss.org/browse/WFLY-5904?page=com.atlassian.jira.plugin.... ]
Paul Ferraro updated WFLY-5904:
-------------------------------
Affects Version/s: 10.0.0.Final
> A deadlock can result if an EJB is being passivated at the same time an invocation is made
> ---------------------------------------------------------------------------------------
>
> Key: WFLY-5904
> URL: https://issues.jboss.org/browse/WFLY-5904
> Project: WildFly
> Issue Type: Bug
> Components: Clustering, EJB
> Affects Versions: 10.0.0.Final
> Reporter: Stuart Douglas
> Assignee: Paul Ferraro
> Attachments: passivationTestFailureDump.txt
>
>
> As seen in EJBClientDescriptorTestCase.testClientInvocationTimeout
> This can be reproduced locally by adding a loop around the final invocations in the test:
> {code}
> for (int i = 0; i < 1000; ++i) {
> Assert.assertEquals("bar", remote1.getManagedBeanMessage());
> Assert.assertEquals("bar", remote2.getManagedBeanMessage());
> }
> {code}