[Hawkular-dev] First round of perf tests with more cassandra nodes - results

Michael Foley mfoley at redhat.com
Thu Jan 14 08:58:23 EST 2016


Filip, 

Thank you for this. This is very valuable information. 

I just want to add a few things: 


    * Linear scalability of Hawkular Metrics is very important. One node, in the context of OpenShift, supports up to 120 containers, but OpenShift scales way beyond that ...so it is important that Hawkular Metrics scales linearly. 
    * The slope of the line is also very important: 2 nodes should support twice the volume, 3 nodes three times the volume, 4 nodes four times the volume. If we scale linearly, then Hawkular Metrics will have a solution for the larger OpenShift installations. (A quick way to check the slope against the numbers below is sketched right after this list.) 
    * Additional end-to-end testing is planned ...and by this I mean metrics in the context of OpenShift. But before that testing begins, we need to see that Hawkular Metrics standalone will scale. 
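
To make "linear" concrete, here is a small sketch (my own check, not part of Filip's tooling) that computes scaling efficiency as throughput(n) / (n * throughput(1)), using the 100-datapoint numbers from the 8-core run quoted below. Perfectly linear scaling would print 1.00 for every node count: 

    // Scaling-efficiency check: 1.00 means perfectly linear scaling.
    // Throughputs are the 100-datapoint, 8-core results quoted below.
    public class ScalingCheck {
        public static void main(String[] args) {
            double[] reqPerSec = {97, 168, 222, 241}; // 1..4 cassandra nodes
            for (int n = 1; n <= reqPerSec.length; n++) {
                double efficiency = reqPerSec[n - 1] / (n * reqPerSec[0]);
                // Prints efficiencies 1.00, 0.87, 0.76, 0.62 - throughput
                // improves with each node, but falls well short of linear.
                System.out.printf("%d node(s): %.0f req/sec, efficiency %.2f%n",
                        n, reqPerSec[n - 1], efficiency);
            }
        }
    }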

We are meeting once per week to iterate on performance and scalability. But with these test results, I think we need a response so we can understand this better and define the next iteration sooner. 

Michael 

----- Original Message -----

From: "Filip Brychta" <fbrychta at redhat.com> 
To: "Discussions around Hawkular development" <hawkular-dev at lists.jboss.org> 
Sent: Thursday, January 14, 2016 8:48:07 AM 
Subject: [Hawkular-dev] First round of perf tests with more cassandra nodes - results 

Hello, 
I did a first quick round of perf testing of Hawkular Metrics STANDALONE with more cassandra nodes, and it showed some interesting results. 

An important note is that hawkular and the cassandra cluster were running on VMs with shared storage, which is a very poor design for a cassandra cluster, but it still showed some patterns that will hold for every setup. 
With a proper cassandra cluster (dedicated disks, CommitLog and SSTables on different disks, ...) the results should definitely be better. 

Summary of what was found (some of this is obvious even without testing): 
- small messages (1 datapoint per request) heavily utilize cpu on the hawkular host, while the cassandra hosts are utilized only gently 
- bigger messages (100 datapoints per request) are less demanding on the hawkular host's cpu, and the cassandra hosts are utilized a little bit more 
- with a weak cpu on the hawkular host, adding more cassandra nodes makes performance even worse 
- for small messages (1 datapoint per request), even with sufficient cpu on the hawkular host, the performance improvement was only ~ 25% when the number of nodes in the cluster was increased from 1 to 2 
- for bigger messages (100 datapoints per request), with sufficient cpu on the hawkular host, the performance improvement was ~ 75% when the number of nodes in the cluster was increased from 1 to 2 
- for small messages (1 datapoint per request), even with sufficient cpu on the hawkular host, the performance does NOT scale up with more cassandra nodes (see results - performance dropped when the 4th node was added) 
- for bigger messages (100 datapoints per request), with sufficient cpu on the hawkular host, the performance scales up, but not linearly (this could be caused by the shared storage; with a proper cassandra cluster the results should be better) 

Questions: 
- why does performance get worse when adding the 3rd and 4th storage nodes, when sending small messages with sufficient cpu on the hawkular host? 


About the test: 
- the load generator was hitting the following endpoint: http://${server.host}:${server.port}/hawkular/metrics/gauges/data 
- one test run takes 4 minutes 
- a message with one datapoint looks like this: [{"id":"gaugeID","data":[{"timestamp": "@{CurrentTimestamp}", "value": 10.12}]}] 
- the load generator was using 300 threads (each thread acting like a single client) and was sending messages containing 1 or 100 datapoints (see the client sketch after this list) 
- hawkular metrics is deployed on wildfly-9.0.2.Final 
- metrics version: "Implementation-Version":"0.12.0-SNAPSHOT","Built-From-Git-SHA1":"c35deda5d6d03429e97f1ed4a6e4ef12cf7f3a00" 
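
For reference, a minimal sketch of the kind of client the load generator acts like (the actual tool is not shown in this thread, so the class name, host/port, numeric timestamps in place of the @{CurrentTimestamp} placeholder, and the header note are illustrative): 300 threads, each POSTing gauge messages with 1 or 100 datapoints to the endpoint above. 

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class GaugeLoadClient {
        // Illustrative values; the real host/port come from ${server.host}/${server.port}.
        static final String URL = "http://localhost:8080/hawkular/metrics/gauges/data";
        static final int THREADS = 300;    // each thread acts like a single client
        static final int DATAPOINTS = 100; // 1 or 100 datapoints per request

        // Builds a message in the format shown above, with numeric millisecond
        // timestamps standing in for the @{CurrentTimestamp} placeholder.
        static String body() {
            StringBuilder data = new StringBuilder();
            for (int i = 0; i < DATAPOINTS; i++) {
                if (i > 0) data.append(',');
                data.append("{\"timestamp\":").append(System.currentTimeMillis())
                    .append(",\"value\":10.12}");
            }
            return "[{\"id\":\"gaugeID\",\"data\":[" + data + "]}]";
        }

        public static void main(String[] args) throws InterruptedException {
            HttpClient client = HttpClient.newHttpClient(); // requires Java 11+
            ExecutorService pool = Executors.newFixedThreadPool(THREADS);
            for (int t = 0; t < THREADS; t++) {
                pool.submit(() -> {
                    while (!Thread.currentThread().isInterrupted()) {
                        HttpRequest req = HttpRequest.newBuilder(URI.create(URL))
                                .header("Content-Type", "application/json")
                                // depending on server configuration a tenant
                                // header may also be required
                                .POST(HttpRequest.BodyPublishers.ofString(body()))
                                .build();
                        try {
                            client.send(req, HttpResponse.BodyHandlers.discarding());
                        } catch (Exception e) {
                            return; // stop this worker on error
                        }
                    }
                });
            }
            Thread.sleep(4 * 60 * 1000); // one test run takes 4 minutes
            pool.shutdownNow();
        }
    }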


Results: 

=================================================== 
VMs with 2 cores and shared storage, 4GB of memory. 
=================================================== 
300 threads, 1 datapoint per request; keyspace: hawkular_metrics | org.apache.cassandra.locator.SimpleStrategy | {"replication_factor":"1"} 
++++++++++++++++++++++++++++++++ 
1 cassandra node ~ 3945 req/sec 
2 cassandra nodes ~ 3751 req/sec 
3 cassandra nodes ~ 3318 req/sec 
4 cassandra nodes ~ 2726 req/sec 

In this case the cpu on the hawkular VM was fully used, and adding more cassandra nodes actually made performance worse. 
Cpu on the cassandra nodes was never fully used. 


300 threads, 100 datapoints per request; keyspace: hawkular_metrics | org.apache.cassandra.locator.SimpleStrategy | {"replication_factor":"1"} 
++++++++++++++++++++++++++++++++ 
1 cassandra node ~ 102 req/sec 
2 cassandra nodes ~ 138 req/sec 
3 cassandra nodes ~ 188 req/sec 
4 cassandra nodes ~ 175 req/sec 


With a weak cpu on the hawkular VM and big messages (100 datapoints in each) there is still some improvement when adding more cassandra nodes. 
Cpu on the cassandra nodes was never fully used. 

=================================================== 
Hawkular VM with 4 cores, cassandra VMs with 2 cores and shared storage, 4GB of memory. 
=================================================== 
300 threads, 1 datapoint per request; keyspace: hawkular_metrics | org.apache.cassandra.locator.SimpleStrategy | {"replication_factor":"1"} 
++++++++++++++++++++++++++++++++ 
1 cassandra node ~ 5150 req/sec 
2 cassandra nodes ~ 5667 req/sec 
3 cassandra nodes ~ 5799 req/sec 
4 cassandra nodes ~ 5476 req/sec 

With a stronger cpu on the hawkular VM, adding more cassandra nodes improves performance, but there is a drop when the 4th node is added. 
Cpu on the cassandra nodes was never fully used. 

300 threads, 100 datapoints per request; keyspace: hawkular_metrics | org.apache.cassandra.locator.SimpleStrategy | {"replication_factor":"1"} 
++++++++++++++++++++++++++++++++ 
1 cassandra node ~ 111 req/sec 
2 cassandra nodes ~ 173 req/sec 
3 cassandra nodes ~ 206 req/sec 
4 cassandra nodes ~ 211 req/sec 

With a stronger cpu on the hawkular VM, adding more cassandra nodes improves performance. 
Cpu on the cassandra nodes was never fully used. 

=================================================== 
Hawkular VM with 8 cores, cassandra VMs with 2 cores and shared storage, 4GB of memory. Cpu on the hawkular machine was only 30-40% used. 
=================================================== 
300 threads, 1 datapoint per request; keyspace: hawkular_metrics | org.apache.cassandra.locator.SimpleStrategy | {"replication_factor":"1"} 
++++++++++++++++++++++++++++++++ 
1 cassandra node ~ 5424 req/sec 
2 cassandra nodes ~ 6810 req/sec 
3 cassandra nodes ~ 6576 req/sec 
4 cassandra nodes ~ 6094 req/sec 

Why is there a drop with the 3rd and 4th nodes? 

300 threads, 100 datapoints per request; keyspace: hawkular_metrics | org.apache.cassandra.locator.SimpleStrategy | {"replication_factor":"1"} 
++++++++++++++++++++++++++++++++ 
1 cassandra node ~ 97 req/sec 
2 cassandra nodes ~ 168 req/sec 
3 cassandra nodes ~ 222 req/sec 
4 cassandra nodes ~ 241 req/sec 


Please let me know what you would like to see in the next rounds of testing. 

Filip 


