[jboss-jira] [JBoss JIRA] (JBRULES-3405) Evaluate benchmark results per average ranking, not per average score

Geoffrey De Smet (JIRA) jira-events at lists.jboss.org
Wed Feb 29 09:16:37 EST 2012


    [ https://issues.jboss.org/browse/JBRULES-3405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12672112#comment-12672112 ] 

Geoffrey De Smet commented on JBRULES-3405:
-------------------------------------------

Out of the box, there are two SolverBenchmarkComparator implementations:

drools-planner/drools-planner-core/src/main/java/org/drools/planner/benchmark/core/comparator/TotalScoreSolverBenchmarkComparator.java
drools-planner/drools-planner-core/src/main/java/org/drools/planner/benchmark/core/comparator/WorstScoreSolverBenchmarkComparator.java

To complete this issue, we need to:
1) Add a third comparator (a rough sketch of what it could look like follows below).
2) Discuss whether to make that third comparator the default.
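
For illustration, here is a rough, self-contained sketch of a ranking-based comparator. It is written against a hypothetical ScoredBenchmark holder (one plain double score per dataset, higher is better) rather than the real SolverBenchmark API, because the actual Score type is not a plain number; all names in it are assumptions, not existing drools-planner classes.
{code}
import java.util.Comparator;
import java.util.List;

/**
 * Illustrative sketch only: orders benchmarks by their average rank
 * across all problem datasets instead of by their average score.
 * A lower average rank (closer to 1st place) sorts first.
 */
public class RankingSolverBenchmarkComparator implements Comparator<ScoredBenchmark> {

    // All benchmarks of the run; needed to compute the rank per dataset.
    private final List<ScoredBenchmark> allBenchmarks;

    public RankingSolverBenchmarkComparator(List<ScoredBenchmark> allBenchmarks) {
        this.allBenchmarks = allBenchmarks;
    }

    @Override
    public int compare(ScoredBenchmark a, ScoredBenchmark b) {
        return Double.compare(averageRank(a), averageRank(b));
    }

    private double averageRank(ScoredBenchmark benchmark) {
        double rankSum = 0.0;
        for (int i = 0; i < benchmark.scores.length; i++) {
            // Rank on dataset i = 1 + number of competitors with a strictly better score.
            int rank = 1;
            for (ScoredBenchmark other : allBenchmarks) {
                if (other != benchmark && other.scores[i] > benchmark.scores[i]) {
                    rank++;
                }
            }
            rankSum += rank;
        }
        return rankSum / benchmark.scores.length;
    }
}

/** Hypothetical minimal data holder: one score per problem dataset, higher is better. */
class ScoredBenchmark {
    final double[] scores;

    ScoredBenchmark(double[] scores) {
        this.scores = scores;
    }
}
{code}
With such a comparator, a solver config that places consistently well on every dataset sorts ahead of one that only wins the score average through a single outlier dataset.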

The default can be overridden in the benchmark configuration:
{code}
<plannerBenchmark>
  <solverBenchmarkComparator>...</solverBenchmarkComparator>
  ...
</plannerBenchmark>
{code}
                
> Evaluate benchmark results per average ranking, not per average score
> ---------------------------------------------------------------------
>
>                 Key: JBRULES-3405
>                 URL: https://issues.jboss.org/browse/JBRULES-3405
>             Project: Drools
>          Issue Type: Feature Request
>      Security Level: Public(Everyone can see) 
>          Components: drools-planner
>    Affects Versions: 5.4.0.Beta2
>            Reporter: Lukáš Petrovický
>            Assignee: Geoffrey De Smet
>
> Currently, when deciding which solver config is the winner in a particular benchmark, the one with the best average score is picked. I believe this to be the wrong approach.
> When averaging numbers, one of which differs significantly from the rest, the average isn't the best metric you could use. (10, 10, 10, 20 and 1000 give you 210 as the average - is that really the best metric available to describe the data set?)
> For this reason, I would like the winner-picking algorithm to work differently:
> 1) For each input file in the benchmark, rank the solvers by their score (basically 1st to Nth place).
> 2) Then take the median of all these "places"; the best-placed solver config wins.
> This way, you don't compare the solver results themselves. You compare how each solver did relative to the other solvers, which is something I consider much more important.
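
As an illustration of the difference described in the quoted report (a self-contained sketch with made-up numbers, not tied to the drools-planner API), the average score and the per-dataset ranking can pick different winners:
{code}
import java.util.Arrays;

/**
 * Illustrative sketch only: two solver configs, five datasets, higher score is better.
 * Solver A is best on four of the five datasets; solver B wins the average
 * purely because of one outlier dataset.
 */
public class RankVersusAverageExample {

    public static void main(String[] args) {
        double[] solverA = {12, 12, 12, 22, 900};
        double[] solverB = {10, 10, 10, 20, 1000};

        System.out.println("Average score A: " + average(solverA)); // 191.6
        System.out.println("Average score B: " + average(solverB)); // 210.0 -> B wins on average score

        // Per-dataset ranking: count on how many datasets A beats B.
        int firstPlacesForA = 0;
        for (int i = 0; i < solverA.length; i++) {
            if (solverA[i] > solverB[i]) {
                firstPlacesForA++;
            }
        }
        // A places 1st on 4 of 5 datasets, so its median (and average) rank is better.
        System.out.println("A is ranked 1st on " + firstPlacesForA + " of " + solverA.length
                + " datasets -> A wins on ranking");
    }

    private static double average(double[] values) {
        return Arrays.stream(values).average().orElse(0.0);
    }
}
{code}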

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.jboss.org/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira