[infinispan-dev] Performance validation of Remote & Embedded Query functionality

Sanne Grinovero sanne at infinispan.org
Thu Jun 5 06:53:52 EDT 2014


Hi Radim, all,
I'm in a face to face meeting with Adrian and Gustavo, to make plans
for the next steps of Query development.
One essential point is of course to get some idea of its scalability
and general performance characteristics: to identify any low-hanging
fruit in the code, to be able to give users some guidance on
expectations, and also to know which configuration options work best
for each use case. I have some expectations, but these haven't been
validated on the new-gen Query functionality.

I was assuming we would have to develop it ourselves: as a first step
we need something that runs on our laptops, to identify the most
obvious mistakes, and that can then be run in the QA lab on more
realistic hardware configurations once the obvious issues have been
resolved.
Martin suggested that you have this as one of your next goals too, so
shall we develop it together?

We couldn't think of a cool example to use as a model, but this is
roughly what we aim to cover and the data we aim to collect;
suggestions are very welcome:

## Benchmark 1: Remote Queries (over Hot Rod)
Should perform a (weighted) mixture of the following Read and Write operations:
 (R) Range queries (with and without pagination)
 (R) Exact-match queries
 (R) Combinations of the above (with and without pagination)
 (W) Insert an entry
 (W) Delete an entry
 (W) Update an entry
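As a sketch of how the harness could draw operations from such a
weighted mixture (the operation names, weights and seed below are
placeholders, not agreed benchmark parameters):

```java
import java.util.NavigableMap;
import java.util.Random;
import java.util.TreeMap;

// Sketch only: picks the next benchmark operation according to
// configurable weights, using a fixed seed so runs are reproducible.
class OperationMix {
    enum Op { RANGE_QUERY, EXACT_QUERY, COMBINED_QUERY, INSERT, DELETE, UPDATE }

    // maps cumulative weight -> operation, so a uniform draw in
    // [0, total) selects operations proportionally to their weights
    private final NavigableMap<Integer, Op> table = new TreeMap<>();
    private final Random random;
    private int total = 0;

    OperationMix(long seed) {
        this.random = new Random(seed); // fixed seed -> same op sequence each run
    }

    OperationMix add(Op op, int weight) {
        total += weight;
        table.put(total, op);
        return this;
    }

    Op next() {
        // first cumulative weight strictly above the drawn value
        return table.higherEntry(random.nextInt(total)).getValue();
    }

    public static void main(String[] args) {
        // placeholder weights: read-heavy mix with some writes
        OperationMix mix = new OperationMix(42L)
            .add(Op.RANGE_QUERY, 40)
            .add(Op.EXACT_QUERY, 40)
            .add(Op.INSERT, 10)
            .add(Op.UPDATE, 5)
            .add(Op.DELETE, 5);
        for (int i = 0; i < 5; i++) {
            System.out.println(mix.next());
        }
    }
}
```

The cumulative-weight table makes the op ratio ("no writes / mostly
writes / just Range queries") a pure configuration concern.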

 Configuration options:
 - data sizes: let's aim at a consistent data set of at least 4GB
 - 1 node / 4 nodes / 8 nodes
 - REPL / DIST for the data-storing Cache
 - variable ratio of results out of the index (a Query retrieving just 5
   entries out of a million vs. half a million)
 - controlled ratio of operations, e.g.: no writes / mostly writes / just
   Range queries
 - for write operations: make sure to trigger some Merge events
 - SYNC / ASYNC indexing backend, and control of IndexWriter tuning
 - NRT / non-NRT backends (the Infinispan IndexManager is only available
   as non-NRT)
 - FSDirectory / InfinispanDirectory
   - Infinispan Directory: stored in REPL / DIST independently from the
     Data Cache, with/without a CacheStore
 - have an option to run "index-less" (the table-scan approach)
 - have an option to validate that the queries return the expected results
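To make the matrix concrete, a sketch of the indexing knobs involved
(property names as used by the Hibernate Search integration; the cache
name is invented, and exact keys may differ between versions):

```xml
<!-- Sketch: one combination per run; swap values to cover the matrix -->
<namedCache name="benchmark-cache">
   <indexing enabled="true" indexLocalOnly="false">
      <properties>
         <!-- FSDirectory variant; value="infinispan" for InfinispanDirectory
              (which, as noted above, cannot be combined with NRT) -->
         <property name="default.directory_provider" value="filesystem"/>
         <!-- NRT backend; omit for the standard non-NRT backend -->
         <property name="default.indexmanager" value="near-real-time"/>
         <!-- ASYNC vs SYNC indexing backend -->
         <property name="default.worker.execution" value="async"/>
      </properties>
   </indexing>
</namedCache>
```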

Track:
 - response time: all samples; build a distribution of outliers; output
   histograms
 - break down the response time of the different phases: be able to
   generate a histogram of a specific phase only
 - count the number of RPCs generated by a specific operation
 - count the number of CacheStore writes/reads being triggered
 - number of parallel requests the system can handle
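A minimal sketch of per-phase latency tracking with power-of-two
buckets (a real harness would likely use a library such as
HdrHistogram; class and method names here are invented):

```java
import java.util.concurrent.atomic.AtomicLongArray;

// Sketch only: one bucket per power of two of the recorded latency,
// lock-free so benchmark threads can record concurrently. One instance
// per phase gives the per-phase breakdown we want to track.
class LatencyHistogram {
    private final AtomicLongArray buckets = new AtomicLongArray(64);

    void record(long nanos) {
        // bucket i holds samples in [2^i, 2^(i+1)) nanoseconds
        int bucket = 63 - Long.numberOfLeadingZeros(Math.max(nanos, 1));
        buckets.incrementAndGet(bucket);
    }

    long totalCount() {
        long sum = 0;
        for (int i = 0; i < 64; i++) sum += buckets.get(i);
        return sum;
    }

    void print() {
        for (int i = 0; i < 64; i++) {
            long c = buckets.get(i);
            if (c > 0) {
                System.out.printf("< %d ns: %d samples%n", 1L << (i + 1), c);
            }
        }
    }

    public static void main(String[] args) {
        LatencyHistogram queryPhase = new LatencyHistogram();
        queryPhase.record(1_500);     // e.g. a fast in-VM phase
        queryPhase.record(2_000_000); // e.g. a full remote round trip
        queryPhase.print();
    }
}
```

Keeping every sample in coarse buckets (rather than an average) is what
lets us build the outlier distribution afterwards.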

Data:
It could be randomly generated, but in that case let's use a fixed seed
and make sure the same data set is generated at each run, ideally
depending just on the target size.
We should also control the distribution of the properties of the
searched fields, since we want to be able to predict the results in
order to validate them (or find a different way to validate).
A random generator makes preparation faster and allows us to generate a
specific data size; alternatively we could download some known public
data set, which would make assertions on the validity of query results
much simpler.
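A sketch of what deterministic generation could look like: the seed
plus the entry index fully determine each entry, so every run (and
every node) regenerates the identical data set and result counts stay
predictable. All field names and the mixing constant are illustrative
only:

```java
import java.util.Random;

// Sketch only: each entry is a pure function of (seed, index), so the
// data set is reproducible and depends just on seed and target size.
class DataGenerator {
    private static final String[] CITIES = {"Westford", "Newcastle", "London", "Brno"};

    static String entry(long seed, long index) {
        // mix the index into the seed so entries are independent;
        // the multiplier is a common golden-ratio mixing constant
        Random r = new Random(seed ^ (index * 0x9E3779B97F4A7C15L));
        int age = 18 + r.nextInt(80);
        String city = CITIES[r.nextInt(CITIES.length)];
        return String.format("person-%d,age=%d,city=%s", index, age, city);
    }

    public static void main(String[] args) {
        // same seed and index -> same entry, run after run
        System.out.println(entry(42L, 0));
        System.out.println(entry(42L, 0).equals(entry(42L, 0))); // true
    }
}
```

Because the field distributions are fixed by the generator, a query's
expected result count can be computed analytically for validation.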

I would like to set specific goals for each metric, but let's see the
initial results first. We should then also narrow down the combinations
of configuration options that we actually want to run into a set of
defined profiles matching common use cases, but let's have the code
ready to run any combination.

## Benchmark 2: Embedded Queries

Same tests as Remote Queries (using the same API, so no full-text).
We might want to develop this one first for simplicity, but results
for the Remote Query functionality are more urgent.

## Benchmark 3: CapeDwarf & Objectify

Help the CapeDwarf team by validating embedded queries; it's also
useful for us to have a benchmark running a more complex application.
I'm not too familiar with RadarGun: do you think this could be created
as a RadarGun job, so as to have the benchmark run regularly and to
simplify setup?

## Benchmark 4: Hibernate OGM

Another great use case for a more complex test ;-)
The remote support for OGM still needs some coding, but we could start
looking at the embedded mode.

Priorities?
Some of these are totally independent, but we don't have many hands to
work on them.

I'm going to move this to a wiki, unless I get some "revolutionary" suggestions.

Cheers,
Sanne

