Computer Vision Lab, ETH Zurich
We present a sampling-free approach for computing the epistemic uncertainty of a neural network. Epistemic uncertainty is an important quantity for the deployment of deep neural networks in safety-critical applications, since it represents how much one can trust predictions on new data. Recently, promising methods have been proposed that combine noise injection with Monte-Carlo sampling at inference time to estimate this quantity (e.g. Monte-Carlo dropout). Our main contribution is an approximation of the epistemic uncertainty estimated by these methods that does not require sampling, thus notably reducing the computational overhead. We apply our approach to large-scale visual tasks (i.e., semantic segmentation and depth regression) to demonstrate the advantages of our method over sampling-based approaches in terms of both the quality of the uncertainty estimates and the computational overhead.

Link to paper: https://arxiv.org/abs/1908.00598

Bio: After finishing a bachelor's degree in physics at the University of Heidelberg, Janis studied robotics at the Technical University of Munich. He wrote his master's thesis in collaboration with Autonomous Intelligent Driving GmbH, under the supervision of Federico Tombari, on uncertainty estimation in neural networks. Subsequently, he worked at Qualcomm AI Research in Amsterdam on source compression using deep neural networks, from where he recently joined CVL as a PhD student.
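For readers unfamiliar with the sampling-based baseline the talk improves upon, the sketch below illustrates Monte-Carlo dropout on a toy network: dropout is kept active at inference, and the variance over T stochastic forward passes serves as the epistemic-uncertainty estimate. This is a minimal NumPy illustration with made-up weights, not the paper's method or implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer regression network; weights are illustrative only.
W1 = rng.normal(size=(8, 1))
W2 = rng.normal(size=(1, 8))

def forward(x, drop_p=0.5):
    """One stochastic forward pass with dropout left ON at inference."""
    h = np.maximum(W1 @ x, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) > drop_p      # random dropout mask
    h = h * mask / (1.0 - drop_p)            # inverted-dropout scaling
    return float((W2 @ h).item())

def mc_dropout_predict(x, T=100):
    """Run T stochastic passes; mean is the prediction, variance the
    epistemic-uncertainty estimate. The talk's contribution is an
    analytic approximation that avoids these T passes."""
    samples = np.array([forward(x) for _ in range(T)])
    return samples.mean(), samples.var()

mean, var = mc_dropout_predict(np.array([[1.0]]), T=200)
```

The per-pixel cost of repeating this T times is what makes sampling expensive for dense tasks such as semantic segmentation, motivating the sampling-free approximation presented in the talk.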