State-of-the-Art Object Proposal Benchmark

Quantitative evaluation of current object proposal techniques

Pre-Computed Evaluation Results

The GitHub page of the project contains, apart from the code to reproduce the experiments of this paper, the pre-computed evaluation results of all state-of-the-art techniques evaluated in this work. This makes it easy to reproduce the plots shown in this work and to add your own method to them.

You can find the following folders:
  • global_eval: Global evaluation of all SoA techniques, each sweeping its own range of numbers of proposals.
  • per_categ_eval: Per-category evaluation of all SoA techniques, each sweeping its own range of numbers of proposals.
  • all_categ: Per-category evaluation of all SoA techniques, all at the same number of proposals.
  • area_distrib: Area distribution of the ground-truth annotations.
  • class_distrib: Percentage of annotated objects of each of the classes.
The name of every file is self-explanatory, as are the column names of the values contained in each file.
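As a sketch of how the pre-computed files might be consumed, the snippet below parses a results table and extracts the recall curve of one technique across its proposal budgets. The column names (`technique`, `n_proposals`, `recall`) and the sample values are assumptions for illustration; check the headers of the actual files in the repository.

```python
import csv
import io

# Hypothetical contents of a file from the global_eval folder.
# The real column names and values may differ.
sample = """technique,n_proposals,recall
MCG,10,0.42
MCG,100,0.71
MCG,1000,0.85
"""

def recall_curve(text, technique):
    """Return (n_proposals, recall) pairs for one technique, sorted by budget."""
    rows = csv.DictReader(io.StringIO(text))
    pairs = [(int(r["n_proposals"]), float(r["recall"]))
             for r in rows if r["technique"] == technique]
    return sorted(pairs)

curve = recall_curve(sample, "MCG")
print(curve)  # [(10, 0.42), (100, 0.71), (1000, 0.85)]
```

The sorted pairs can then be fed directly to a plotting library to redraw a recall-vs-proposals curve like the ones in the paper.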

Discussion

Would you like to discuss something about the evaluation? Let us know below!