Publications

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.


Learning Object Class Detectors from Weakly Annotated Video

A. Prest, C. Leistner, J. Civera, C. Schmid and V. Ferrari
IEEE Conference on Computer Vision and Pattern Recognition
June 2012

Abstract

Object detectors are typically trained on large sets of still images annotated with bounding boxes. This paper introduces an approach for learning object detectors from real-world web videos known only to contain objects of a target class. We propose a fully automatic pipeline that localizes objects in a set of videos of the class and learns a detector for it. The approach extracts candidate spatio-temporal tubes based on motion segmentation and then selects one tube per video jointly over all videos. To compare to the state of the art, we test our detector on still images from the PASCAL VOC 2007 benchmark. We observe that frames extracted from web videos can differ significantly in quality from still images taken with a good camera. We therefore formulate learning from videos as a domain adaptation task, and show that training on a combination of weakly annotated videos and fully annotated still images using domain adaptation improves the performance of a detector trained on still images alone.
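The joint selection step lends itself to a compact illustration. The sketch below is a hypothetical rendering of the idea, not the optimization used in the paper: each video contributes candidate tubes described by appearance feature vectors, and a simple coordinate-ascent loop re-picks, in each video, the tube closest to the consensus of the tubes currently selected in the other videos. The function name, the descriptor representation, and the greedy scheme are all illustrative assumptions.

import numpy as np

def select_tubes(tube_feats, n_iters=5, seed=0):
    """Pick one tube per video so the selections agree across videos.

    tube_feats: list over videos; tube_feats[v] is an (n_v, d) array with
    one appearance descriptor per candidate tube in video v.
    Returns a list with the index of the chosen tube for each video.
    """
    rng = np.random.default_rng(seed)
    # Start from a random candidate tube in each video.
    selected = [int(rng.integers(len(f))) for f in tube_feats]
    for _ in range(n_iters):
        for v, feats in enumerate(tube_feats):
            # Consensus descriptor: mean of the tubes currently selected
            # in all other videos (assumes at least two videos).
            consensus = np.mean(
                [tube_feats[u][selected[u]]
                 for u in range(len(tube_feats)) if u != v], axis=0)
            # Re-select the tube in video v that is closest to the
            # consensus (one coordinate-ascent step).
            dists = np.linalg.norm(feats - consensus, axis=1)
            selected[v] = int(np.argmin(dists))
    return selected

# Toy usage: three videos with 4, 5 and 3 candidate tubes, 16-D descriptors.
feats = [np.random.rand(4, 16), np.random.rand(5, 16), np.random.rand(3, 16)]
print(select_tubes(feats))

With real tubes, the Euclidean distance on raw descriptors would be replaced by a learned or task-specific similarity; the loop structure is only meant to show how one selection per video can be refined jointly across the whole set.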


@InProceedings{eth_biwi_00905,
  author = {A. Prest and C. Leistner and J. Civera and C. Schmid and V. Ferrari},
  title = {Learning Object Class Detectors from Weakly Annotated Video},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
  year = {2012},
  month = {June},
  pages = {3282--3289}
}