This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.

Human Pose Co-Estimation and Applications

M. Eichner, V. Ferrari
IEEE Transactions on Pattern Analysis and Machine Intelligence
Vol. 34, No. 11, pp. 2282-2288, November 2012


Most existing techniques for articulated human pose estimation consider each person independently. Here we tackle the problem in a new setting, coined Human Pose Co-estimation (PCE), where multiple persons are in a common, but unknown, pose. The task of PCE is to estimate their poses jointly and to produce prototypes characterizing the shared pose. Since the poses of the individual persons should be similar to the prototype, PCE has less freedom than estimating each pose independently, which simplifies the problem. We demonstrate our PCE technique on two applications. The first is estimating the pose of people performing the same activity synchronously, such as aerobics, cheerleading, or group dancing. We show that PCE improves pose estimation accuracy over estimating each person independently. The second application is learning prototype poses characterizing a pose class directly from an image search engine queried with the class name (e.g., 'lotus pose'). We show that PCE leads to better pose estimation in such images, and that it learns meaningful prototypes which can serve as priors for pose estimation in novel images.
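To make the core idea concrete, here is a toy sketch (not the authors' method, which operates on full articulated pose models and image evidence): treat each person's pose as a vector, penalize each estimate's deviation from its noisy observation and from a shared prototype, and alternate closed-form updates. The function name, the quadratic objective, and the parameter `lam` are illustrative assumptions.

```python
# Toy illustration of co-estimation with a shared prototype (an
# assumption-laden sketch, NOT the PCE algorithm from the paper).
# Each person i has a noisy pose observation y_i (a flat vector);
# we jointly estimate poses x_i and a prototype p by minimizing
#   sum_i ||x_i - y_i||^2 + lam * ||x_i - p||^2
# via alternating minimization with closed-form updates:
#   p   <- mean of the current x_i
#   x_i <- (y_i + lam * p) / (1 + lam)

def co_estimate(observations, lam=1.0, iters=50):
    """observations: list of equal-length pose vectors (lists of floats)."""
    n, d = len(observations), len(observations[0])
    poses = [list(y) for y in observations]  # initialize from observations
    for _ in range(iters):
        # Prototype update: mean of the current pose estimates.
        prototype = [sum(x[j] for x in poses) / n for j in range(d)]
        # Pose update: pull each observation toward the prototype.
        poses = [[(y[j] + lam * prototype[j]) / (1 + lam) for j in range(d)]
                 for y in observations]
    return poses, prototype

# Three noisy observations of roughly the same (2-D, for brevity) pose.
obs = [[1.0, 2.0], [1.2, 1.8], [0.8, 2.2]]
poses, proto = co_estimate(obs, lam=1.0)
# At convergence the prototype is the observation mean, and each pose
# sits between its own observation and the prototype.
```

The coupling term `lam * ||x_i - p||^2` is what removes degrees of freedom: with `lam = 0` each pose is estimated independently, while larger `lam` forces the estimates toward the shared prototype.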

@article{eichner2012pce,
  author = {M. Eichner and V. Ferrari},
  title = {Human Pose Co-Estimation and Applications},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year = {2012},
  month = {November},
  volume = {34},
  number = {11},
  pages = {2282--2288}
}