
A Non-Invasive Approach For Driving Virtual Talking Heads From Real Facial Movements

G. Fanelli and M. Fratarcangeli
IEEE 3DTV Conference
Kos Island, Greece, May 2007


In this paper, we describe a system that accurately controls the facial animation of a synthetic virtual head from the movements of a real person. The movements are tracked using Active Appearance Models on video acquired with a cheap webcam. The tracked motion is then encoded using the widely adopted MPEG-4 Face and Body Animation (FBA) standard, so that each animation frame is expressed by a compact subset of the Facial Animation Parameters (FAPs) defined by the standard. For each FAP, we precompute the corresponding facial configuration of the virtual head through an accurate anatomical simulation. By linearly interpolating, frame by frame, the facial configurations corresponding to the FAPs, we obtain the animation of the virtual head in a straightforward way.
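The per-frame blending step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each FAP's precomputed facial configuration is stored as a per-vertex displacement array at full intensity, and that per-frame FAP values are normalized weights (the names `animate_frame`, `fap_displacements`, and `fap_values` are hypothetical).

```python
import numpy as np

def animate_frame(neutral_vertices, fap_displacements, fap_values):
    """Blend precomputed FAP configurations by linear interpolation.

    neutral_vertices:  (V, 3) array holding the neutral face mesh.
    fap_displacements: dict mapping FAP id -> (V, 3) vertex displacement
                       precomputed (e.g. by anatomical simulation) at the
                       FAP's full intensity.
    fap_values:        dict mapping FAP id -> normalized intensity for
                       this frame, as decoded from the tracked motion.
    """
    vertices = neutral_vertices.copy()
    # Linear interpolation: each active FAP contributes its precomputed
    # displacement scaled by the frame's FAP value.
    for fap_id, value in fap_values.items():
        vertices += value * fap_displacements[fap_id]
    return vertices
```

Under this representation, a frame with all FAP values at zero reproduces the neutral mesh, and intermediate values move each vertex linearly toward the precomputed configuration.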

@inproceedings{Fanelli:3DTV:2007,
  author = {G. Fanelli and M. Fratarcangeli},
  title = {A Non-Invasive Approach For Driving Virtual Talking Heads From Real Facial Movements},
  booktitle = {IEEE 3DTV Conference},
  year = {2007},
  month = {May},
  publisher = {IEEE},
  keywords = {Face Tracking, Active Appearance Models, Inverse Compositional Algorithm, Facial Animation, 3D Motion Animation}
}