This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.


Deep Convolutional Neural Networks and Data Augmentation for Acoustic Event Recognition

Naoya Takahashi, Michael Gygli, Beat Pfister, Luc Van Gool
Proc. Interspeech
San Francisco, September 2016


We propose a novel method for Acoustic Event Detection (AED). In contrast to speech, sounds coming from acoustic events may be produced by a wide variety of sources. Furthermore, distinguishing them often requires analyzing an extended time period due to the lack of a clear sub-word unit. In order to incorporate the long-time frequency structure for AED, we introduce a convolutional neural network (CNN) with a large input field. In contrast to previous works, this enables training audio event detection end-to-end. Our architecture is inspired by the success of VGGNet and uses small, 3x3 convolutions, but more depth than previous methods in AED. In order to prevent over-fitting and to take full advantage of the modeling capabilities of our network, we further propose a novel data augmentation method to introduce data variation. Experimental results show that our CNN significantly outperforms state-of-the-art methods, including Bag of Audio Words (BoAW) and classical CNNs, achieving a 16% absolute improvement.
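The abstract mentions a data augmentation method that introduces variation into the training set. As a rough illustration of one common audio-augmentation idea, the sketch below blends two recordings of the same event class with a random mixing weight; the function name, signature, and the specific blending scheme are illustrative assumptions, not the paper's actual algorithm.

```python
import random

def mix_augment(clip_a, clip_b, alpha=None):
    """Blend two same-class audio clips (sequences of samples) into a
    new synthetic training example.

    This is a hypothetical sketch of a mixing-style augmentation; the
    exact scheme used in the paper is not specified in the abstract.
    """
    if alpha is None:
        # Random mixing weight, drawn fresh for every generated example.
        alpha = random.uniform(0.0, 1.0)
    # Truncate to the shorter clip so the sample-wise blend is defined.
    n = min(len(clip_a), len(clip_b))
    return [alpha * clip_a[i] + (1.0 - alpha) * clip_b[i] for i in range(n)]
```

For example, mixing a clip of ones with a clip of zeros at `alpha=0.25` yields a clip of constant 0.25 samples; in practice the inputs would be waveforms (or spectrogram frames) drawn from the same acoustic event class.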

Link to publisher's page
@inproceedings{takahashi2016interspeech,
  author = {Naoya Takahashi and Michael Gygli and Beat Pfister and Luc Van Gool},
  title = {Deep Convolutional Neural Networks and Data Augmentation for Acoustic Event Recognition},
  booktitle = {Proc. Interspeech},
  year = {2016},
  month = {September},
  keywords = {Acoustic Event, CNN}
}