Creating facial animation of characters via MoCap data
Authors:Kei Hirose  Tomoyuki Higuchi
Affiliation:1. Graduate School of Engineering Science, Osaka University, 1-3 Machikaneyama-cho, Toyonaka-shi, Osaka, Japan;2. The Institute of Statistical Mathematics, Tokyo, Japan
Abstract:We consider the problem of generating 3D facial animation of characters. An efficient procedure is realized by using motion capture (MoCap) data, obtained by tracking facial markers on an actor/actress. In some cases of artistic animation, the MoCap actor/actress and the 3D character facial animation show different expressions. For example, from original facial MoCap data of speaking, a user may want to create character facial animation of speaking with a smirk. In this paper, we propose a new, easy-to-use system for making character facial animation via MoCap data. Our system is based on interpolation: once the character facial expressions of the starting and ending frames are given, the intermediate frames are automatically generated using information from the MoCap data. The interpolation procedure consists of three stages. First, the time axis of the animation is divided into several intervals by the fused lasso signal approximator. In the second stage, we use kernel k-means clustering to obtain control points. Finally, the interpolation is realized by using the control points. The user can easily create a wide variety of 3D character facial expressions by changing the control points.
Keywords:3D facial animation  fused lasso  interpolation  kernel k-means  MoCap data
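The three-stage pipeline described in the abstract can be sketched in miniature. This is not the authors' implementation: the fused lasso signal approximator is replaced here by a simple jump-threshold segmentation as a stand-in (the fused lasso would produce comparable piecewise-constant segments by L1-penalizing successive differences), and kernel k-means is implemented with an RBF kernel on scalar frame features. The `threshold` and `gamma` values are illustrative assumptions.

```python
import math

def rbf(x, y, gamma=0.5):
    """RBF kernel between two scalar frame features (illustrative choice)."""
    return math.exp(-gamma * (x - y) ** 2)

def segment(signal, threshold=0.5):
    """Stage 1 stand-in: split the time axis wherever consecutive values
    jump by more than `threshold`. The paper's fused lasso signal
    approximator yields such piecewise-constant intervals in a
    penalized-regression framework; this is only a crude substitute."""
    boundaries = [0]
    for t in range(1, len(signal)):
        if abs(signal[t] - signal[t - 1]) > threshold:
            boundaries.append(t)
    boundaries.append(len(signal))
    return [(boundaries[i], boundaries[i + 1])
            for i in range(len(boundaries) - 1)]

def kernel_kmeans(points, k, iters=20):
    """Stage 2: kernel k-means. Each point is assigned to the cluster
    whose feature-space mean is nearest, using only kernel evaluations:
    d(i, C)^2 = K[i][i] - (2/|C|) sum_j K[i][j] + (1/|C|^2) sum_{j,l} K[j][l]."""
    n = len(points)
    K = [[rbf(points[i], points[j]) for j in range(n)] for i in range(n)]
    labels = [i % k for i in range(n)]  # deterministic init for clarity
    for _ in range(iters):
        new = []
        for i in range(n):
            best, best_d = labels[i], float("inf")
            for c in range(k):
                idx = [j for j in range(n) if labels[j] == c]
                if not idx:  # skip empty clusters
                    continue
                m = len(idx)
                d = (K[i][i]
                     - 2.0 / m * sum(K[i][j] for j in idx)
                     + sum(K[j][l] for j in idx for l in idx) / m ** 2)
                if d < best_d:
                    best, best_d = c, d
            new.append(best)
        if new == labels:
            break
        labels = new
    return labels

def interpolate(start, end, n_frames):
    """Stage 3 stand-in: linear interpolation between the user-given
    starting and ending expressions; in the paper the intermediate
    frames are shaped by control points derived from the MoCap data."""
    return [start + (end - start) * t / (n_frames - 1)
            for t in range(n_frames)]

# Toy run: a 1D "marker trajectory" with two expression levels.
signal = [0.0, 0.1, 0.05, 1.0, 1.1, 0.95, 0.2, 0.1]
print(segment(signal))                        # time-axis intervals
print(kernel_kmeans([0.0, 0.1, 1.0, 1.1], 2))  # cluster labels
print(interpolate(0.0, 1.0, 5))               # intermediate frames
```

Editing the cluster representatives (the control points) before stage 3, rather than the raw MoCap frames, is what lets a user retarget the captured motion to a different expression with only a few adjustments.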