Publications

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.

Object Referring in Visual Scene with Spoken Language

Arun Balajee Vasudevan and Dengxin Dai and Luc Van Gool
IEEE Winter Conference on Applications of Computer Vision
2018

Abstract

Object referring has important applications, especially in human-machine interaction. While the task has received great attention, it is mainly attacked with written language (text) as input rather than spoken language (speech), which is more natural. This paper investigates Object Referring with Spoken Language (ORSpoken) by presenting two datasets and one novel approach. Objects are annotated with their locations in images, text descriptions, and speech descriptions, making the datasets ideal for multi-modality learning. The approach is developed by carefully decomposing the ORSpoken problem into three sub-problems and introducing task-specific vision-language interactions at the corresponding levels. Experiments show that our method consistently and significantly outperforms competing methods. The approach is also evaluated in the presence of audio noise, demonstrating the efficacy of the proposed vision-language interactions in counteracting background noise.


@InProceedings{eth_biwi_01448,
  author = {Arun Balajee Vasudevan and Dengxin Dai and Luc Van Gool},
  title = {Object Referring in Visual Scene with Spoken Language},
  booktitle = {IEEE Winter Conference on Applications of Computer Vision},
  year = {2018},
  keywords = {}
}