All of these results are a team effort with co-workers from the University of Amsterdam, Qualcomm Research and/or Euvision Technologies.

ImageNet / ILSVRC

For the ImageNet Large Scale Visual Recognition Challenge, the following results were obtained:

| Year | Track | Rank | Score | Proceedings |
|------|-------|------|-------|-------------|
| 2015 | Object detection (DET, provided data) | 2nd | 0.54 (Mean Average Precision) | ILSVRC2015 |
| 2015 | Object localization (CLS+LOC) | 3rd | 0.13 (Flat cost) | ILSVRC2015 |
| 2014 | Object detection (DET, provided data) | 3rd | 0.32 (Mean Average Precision) | ILSVRC2014 |
| 2014 | Object detection (DET, external data) | 4th | 0.35 (Mean Average Precision) | ILSVRC2014 |
| 2013 | Object detection (DET) | 1st | 0.23 (Mean Average Precision) | ILSVRC2013 |
| 2012 | Image categorization (CLS) | 5th | 0.29 (Flat cost) | ILSVRC2012 |
| 2011 | Object localization (CLS+LOC) | 1st | 0.43 (Flat cost) | ILSVRC2011 |
| 2011 | Image categorization (CLS) | 2nd | 0.31 (Flat cost) | ILSVRC2011 |

Note that flat cost is an error measure (lower is better), whereas Mean Average Precision is higher-is-better.
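Most of the detection results above are scored with Mean Average Precision (mAP): the average precision over a ranked list is computed per class, then averaged over classes. As a rough illustration of the metric only (not the official ILSVRC evaluation code, which for detection also applies bounding-box overlap criteria before a prediction counts as correct), a minimal sketch:

```python
def average_precision(ranked_relevance):
    """AP for one class: the mean of precision@k taken at every rank k
    where a relevant (positive) item appears in the ranked list."""
    hits = 0
    precisions = []
    for k, relevant in enumerate(ranked_relevance, start=1):
        if relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(precisions) if precisions else 0.0


def mean_average_precision(per_class_rankings):
    """mAP: the mean of the per-class average precisions."""
    aps = [average_precision(r) for r in per_class_rankings]
    return sum(aps) / len(aps)


# Relevant items at ranks 1 and 3: AP = (1/1 + 2/3) / 2
print(average_precision([1, 0, 1]))
```

Function names here are illustrative; the actual challenge toolkits define their own interfaces.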
TRECVID

TRECVID is organized by the US National Institute of Standards and Technology (NIST), with participation from leading academic and industrial research labs from all continents. The table below lists the results of the University of Amsterdam MediaMill/Qualcomm Research team on the concept detection task (SIN), together with the evaluation measure and references to the full conference proceedings:

| Year | Rank | Mean Average Precision | Proceedings |
|------|------|------------------------|-------------|
| 2015 | 1st | 0.36 | to be released |
| 2014 | 1st | 0.33 | TRECVID 2014 |
| 2013 | 1st | 0.32 | TRECVID 2013 |
| 2012 | 2nd | 0.30 | TRECVID 2012 |
| 2011 | 2nd | 0.17 | TRECVID 2011 |
| 2010 | 1st | 0.09 | TRECVID 2010 |
| 2009 | 1st | 0.23 | TRECVID 2009 |
| 2008 | 1st | 0.19 | TRECVID 2008 |
PASCAL VOC

For the PASCAL Visual Object Classes (VOC) Challenge, the following results were obtained:

| Year | Track | Rank | Score | Proceedings |
|------|-------|------|-------|-------------|
| 2012 | Image categorization | 2nd | 0.74 (Mean Average Precision) | VOC2012 |
| 2012 | Object detection | 1st | 0.41 (Mean Average Precision) | VOC2012 |
| 2011 | Image categorization | 3rd | 0.73 (Mean Average Precision) | VOC2011 |
| 2011 | Object detection | 3rd | 0.36 (Mean Average Precision) | VOC2011 |
| 2010 | Image categorization | 4th | 0.69 (Mean Average Precision) | VOC2010 |
| 2010 | Object detection | 3rd | 0.33 (Mean Average Precision) | VOC2010 |
| 2009 | Image categorization | 3rd | 0.62 (Mean Average Precision) | VOC2009 |
| 2008 | Image categorization | 1st | 0.54 (Mean Average Precision) | VOC2008 |
ImageCLEF

The table below lists the results of the University of Amsterdam team on the photo annotation task at the Image Cross Language Evaluation Forum (ImageCLEF), together with the evaluation measures and references:

| Year | Rank | Score | Proceedings |
|------|------|-------|-------------|
| 2011 | 4th | 0.43 (Mean Average Precision) | Image CLEF 2011 |
| 2010 | 2nd | 0.41 (Mean Average Precision) | Image CLEF 2010 |
| 2009 | 1st | 0.84 (Area Under Curve) | Image CLEF 2009 |
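The 2009 entry is scored with Area Under the (ROC) Curve rather than Mean Average Precision. AUC can be read as the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A minimal sketch of that rank-statistic view (illustrative only, not the official ImageCLEF evaluation code):

```python
def area_under_curve(scores, labels):
    """ROC AUC via the Mann-Whitney U statistic: count, over all
    positive/negative pairs, how often the positive outscores the
    negative (ties count as half a win)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))


# Positives at scores 0.9 and 0.3; one of four pairs is mis-ordered.
print(area_under_curve([0.9, 0.8, 0.3, 0.1], [1, 0, 1, 0]))
```

A perfect ranking gives 1.0, a random ranking about 0.5, so the 0.84 above is well clear of chance.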


The site and its contents are © 2008-2022 Koen van de Sande, except for the files (and other contents) that are © of the respective owners. This site is not affiliated with or endorsed by my employer. Any trademarks used on this site are hereby acknowledged. Should there be any problems with the site, please contact the webmaster.