Joint Vanishing Point Extraction and Tracking
T. Kroeger, D. Dai, L. Van Gool, CVPR 2015 (Oral).
[Pdf] [Supplementary Material] [Video of Talk at CVPR] [Slides]


Abstract: We present a novel vanishing point (VP) detection and tracking algorithm for calibrated monocular image sequences. Previous VP detection and tracking methods usually assume known camera poses for all frames or detect and track separately. We advance the state-of-the-art by combining VP extraction on a Gaussian sphere with recent advances in multi-target tracking on probabilistic occupancy fields. The solution is obtained by solving a Linear Program (LP). This enables the joint detection and tracking of multiple VPs over sequences. Unlike existing works we do not need known camera poses, and at the same time avoid detecting and tracking in separate steps. We also propose an extension to enforce VP orthogonality. We augment an existing video dataset consisting of 48 monocular videos with multiple annotated VPs in 14448 frames for evaluation. Although the method is designed for unknown camera poses, it is also helpful in scenarios with known poses, since a multi-frame approach in VP detection helps to regularize in frames with weak VP line support.
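The Gaussian-sphere representation mentioned in the abstract can be sketched in a few lines. The toy example below is illustrative only and is not the paper's formulation: it maps calibrated image line segments to the unit normals of their interpretation planes on the unit sphere, then scores candidate VP directions by how many normals are (nearly) perpendicular to them. The calibration matrix, segment format, thresholds, and the exhaustive pairwise hypothesis step are all assumptions; the paper instead embeds such VP evidence into a multi-frame Linear Program.

```python
import numpy as np

def line_normals(K, segments):
    """Map image line segments to unit normals of their interpretation
    planes on the Gaussian (unit) sphere.  `segments` is (N, 4) with
    endpoints (x1, y1, x2, y2) in pixels; K is the 3x3 calibration matrix."""
    Kinv = np.linalg.inv(K)
    ones = np.ones((len(segments), 1))
    p1 = np.hstack([segments[:, :2], ones]) @ Kinv.T   # back-projected rays
    p2 = np.hstack([segments[:, 2:], ones]) @ Kinv.T
    n = np.cross(p1, p2)                               # plane normal per segment
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def best_vp(normals, thresh=0.02):
    """Hypothesise a VP direction from each pair of normals (their cross
    product lies on both great circles) and keep the direction supported
    by the most normals, i.e. those nearly perpendicular to it."""
    best_d, best_count = None, -1
    for i in range(len(normals)):
        for j in range(i + 1, len(normals)):
            d = np.cross(normals[i], normals[j])
            norm = np.linalg.norm(d)
            if norm < 1e-9:        # degenerate (parallel) pair
                continue
            d /= norm
            count = int(np.sum(np.abs(normals @ d) < thresh))
            if count > best_count:
                best_d, best_count = d, count
    return best_d, best_count

# Toy check: 20 synthetic segments converging on a known image point.
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
rng = np.random.default_rng(0)
vp_px = np.array([500., 300.])                 # hypothetical VP in pixels
dirs = rng.normal(size=(20, 2))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
segs = np.hstack([vp_px + 40 * dirs, vp_px + 160 * dirs])

normals = line_normals(K, segs)
d, count = best_vp(normals)
```

On this noise-free toy input the recovered direction `d` aligns (up to sign) with the back-projected ray through the chosen image point, and all 20 segments support it; real data would need robust thresholds and, as in the paper, temporal aggregation.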

Vanishing Points for the Antwerp Street-View Dataset

We used the Antwerp Street-View dataset for the task of vanishing point extraction and tracking.

The dataset consists of 4 sets of street-view videos, with 12 videos per set (48 videos in total). The 12 videos in each set were captured simultaneously at a framerate of 10 fps by 12 cameras rigidly mounted on a van, driving through different parts of the town of Antwerp. For each of the 4 sets, an SfM point cloud, corresponding 2D feature locations, and precise 6-DoF camera poses for all views and frames are provided.

For our evaluation we annotated the (possibly multiple) VPs in each frame, as well as VP identities across time. The semi-automatic annotation procedure is described in the Supplementary Material.

The VP annotation can be downloaded here.