bm_experiments.evaluate_experiment module

Evaluate experiments previously produced by this benchmark, for instance when a metric has changed or some visualisations are missing.

The expected experiment structure is as follows:

  • registration-results.csv – a coordinate file with paths to the landmarks and images

  • the particular experiment folder with warped landmarks

Sample Usage

python evaluate_experiment.py -e ./results/BmUnwarpJ -d ./data-images --visual

Copyright (C) 2016-2019 Jiri Borovec <jiri.borovec@fel.cvut.cz>

bm_experiments.evaluate_experiment.create_parser()[source]

Parse the input parameters.

Returns
  • dict – {str: any}
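
A minimal sketch of consuming the parsed parameters; create_parser() is documented to return a dict {str: any}, and the key names used below are assumptions that mirror the parameters of main().

from bm_experiments.evaluate_experiment import create_parser, main

# create_parser() is documented to return parsed parameters as {str: any};
# the key names below are assumed to match main()'s parameter names
params = create_parser()
main(
    path_experiment=params['path_experiment'],
    path_dataset=params['path_dataset'],
    visual=params.get('visual', False),
    nb_workers=params.get('nb_workers', 1),
)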

bm_experiments.evaluate_experiment.main(path_experiment, path_dataset, visual=False, nb_workers=1)[source]

main entry point

Parameters
  • path_experiment (str) – path to the experiment folder

  • path_dataset (str) – path to the dataset with all landmarks

  • visual (bool) – whether to visualise the registration results

  • nb_workers (int) – number of parallel jobs
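
A minimal sketch of calling the evaluation programmatically instead of via the CLI; the paths are placeholders taken from the sample usage above.

from bm_experiments.evaluate_experiment import main

# re-evaluate an already computed experiment against the dataset landmarks
main(
    path_experiment='./results/BmUnwarpJ',  # experiment folder (placeholder path)
    path_dataset='./data-images',           # dataset with all landmarks (placeholder path)
    visual=True,                            # also export visualisations
    nb_workers=2,                           # number of parallel jobs
)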

bm_experiments.evaluate_experiment.NAME_CSV_RESULTS = 'registration-results_NEW.csv'[source]

file name of new table with registration results

bm_experiments.evaluate_experiment.NAME_CSV_SUMMARY = 'results-summary_NEW.csv'[source]

file name of new table with registration summary

bm_experiments.evaluate_experiment.NAME_TXT_SUMMARY = 'results-summary_NEW.txt'[source]

file name of new text file with formatted registration summary

bm_experiments.evaluate_experiment.NB_WORKERS = 1[source]

default number of worker threads
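
A minimal sketch of locating the regenerated tables via these constants, assuming the new files are written into the evaluated experiment folder; the pandas usage is illustrative and not part of the module.

import os
import pandas as pd

from bm_experiments import evaluate_experiment as ee

# assumption: the re-evaluation writes the new tables next to the original
# registration-results.csv inside the experiment folder
path_experiment = './results/BmUnwarpJ'
df_results = pd.read_csv(os.path.join(path_experiment, ee.NAME_CSV_RESULTS))
df_summary = pd.read_csv(os.path.join(path_experiment, ee.NAME_CSV_SUMMARY))
print(df_summary)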