This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.


An Object-Dependent Hand Pose Prior from Sparse Training Data

Henning Hamer, Juergen Gall, Thibaut Weise, Luc Van Gool
IEEE Conference on Computer Vision and Pattern Recognition
June 2010


In this paper, we propose a prior for hand pose estimation that integrates the direct relation between a manipulating hand and a 3D object. This is of particular interest for a variety of applications, since many tasks performed by humans require hand-object interaction. Inspired by the ability of humans to learn the handling of an object from a single example, our focus lies on very sparse training data. We express estimated hand poses in local object coordinates and extract, for each individual hand segment, the relative position and orientation as well as contact points on the object. The prior is then modeled as a spatial distribution conditioned on the object. Given a new object of the same object class and new hand dimensions, we can transfer the prior by a procedure involving a geometric warp. In our experiments, we demonstrate that the prior can be used to improve the robustness of a 3D hand tracker and to synthesize a new hand grasping a new object. For this, we integrate the prior into a unified belief propagation framework for tracking and synthesis.
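The first step described in the abstract, expressing estimated hand poses in local object coordinates, amounts to a change of rigid frame. As a minimal sketch (not the authors' code), assuming the object's world pose is given by a rotation matrix R and translation t, a world-space hand-segment position p maps to object coordinates via R^T (p - t):

```python
# Sketch: mapping a world-space hand-segment position into the local
# coordinate frame of the manipulated object. Function names and the
# example pose are illustrative assumptions, not from the paper.

def transpose(R):
    """Transpose of a 3x3 rotation matrix (its inverse, since R is orthonormal)."""
    return [[R[j][i] for j in range(3)] for i in range(3)]

def mat_vec(R, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(R[i][k] * v[k] for k in range(3)) for i in range(3)]

def world_to_object(p_world, R_obj, t_obj):
    """Express a world point in the object's local frame: R^T (p - t)."""
    d = [p_world[i] - t_obj[i] for i in range(3)]
    return mat_vec(transpose(R_obj), d)

# Example: object rotated 90 degrees about z and translated by (1, 0, 0).
R = [[0.0, -1.0, 0.0],
     [1.0,  0.0, 0.0],
     [0.0,  0.0, 1.0]]
t = [1.0, 0.0, 0.0]
p_local = world_to_object([1.0, 1.0, 0.0], R, t)
# p_local == [1.0, 0.0, 0.0]: the segment lies on the object's local x-axis
```

Storing segment poses and contact points in this object-relative form is what lets the learned distribution be transferred to a new object of the same class via a geometric warp, independently of where the object sits in the world.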

Download in PDF format
@inproceedings{Hamer2010CVPR,
  author = {Henning Hamer and Juergen Gall and Thibaut Weise and Luc Van Gool},
  title = {An Object-Dependent Hand Pose Prior from Sparse Training Data},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
  year = {2010},
  month = {June},
  pages = {671-678},
  keywords = {}
}