Sunday, 23 October 2011

Modeling the human face is one of the most challenging and sought-after elements of computer-generated imagery. Computer facial animation is a highly complex field in which models typically include a very large number of animation variables. Historically, the first SIGGRAPH tutorials on the state of the art in facial animation, held in 1989 and 1990, proved to be a turning point for the field: they brought together and consolidated multiple research threads and sparked interest among many researchers.[1]
The Facial Action Coding System (with 46 action units such as "lip bite" or "squint"), which had been developed in 1976, became a popular basis for many systems.[2] As early as 2001, MPEG-4 included 68 facial animation parameters for lips, jaws, and other facial features; the field has made significant progress since then, and the use of facial microexpressions has increased.[2][3]
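To make the idea of a parameterized face concrete, the sketch below drives a face with FACS-style action-unit activations. The AU numbers and names follow FACS conventions, but the mapping to blendshape weights is an illustrative assumption rather than part of either standard:

```python
# Minimal sketch: an expression as a sparse set of FACS action-unit
# activations in [0.0, 1.0]. AU numbers and names follow FACS; mapping
# them directly to blendshape weights is an illustrative assumption.
ACTION_UNITS = {
    1: "inner brow raiser",
    12: "lip corner puller",  # the main component of a smile
    44: "squint",
}

def expression_to_blendshapes(au_weights):
    """Translate AU activations into per-blendshape animation weights."""
    return {ACTION_UNITS[au]: min(max(w, 0.0), 1.0)
            for au, w in au_weights.items() if au in ACTION_UNITS}

# A smile with a slight squint:
print(expression_to_blendshapes({12: 0.8, 44: 0.3}))
```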
In some cases, an affective space such as the PAD emotional state model can be used to assign specific emotions to the faces of avatars. In this approach, the PAD model serves as the high-level emotional space, and the low-level space is the MPEG-4 Facial Animation Parameters (FAPs). A mid-level Partial Expression Parameters (PEP) space is then used in a two-level structure: a PAD-PEP mapping and a PEP-FAP translation model.[4]
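The two-level structure can be pictured as a pair of mappings composed in sequence. The sketch below is a minimal illustration, not the model from [4]: the linear maps and the 6-dimensional PEP space are placeholder assumptions; only the 3 PAD axes and the 68 MPEG-4 FAPs come from the text above.

```python
import numpy as np

# Dimensions: 3 PAD axes (pleasure, arousal, dominance), a hypothetical
# 6-dimensional PEP space, and the 68 MPEG-4 FAPs. The random linear
# maps below are placeholders; the cited work derives these mappings.
N_PAD, N_PEP, N_FAP = 3, 6, 68

rng = np.random.default_rng(0)
pad_to_pep = rng.normal(size=(N_PEP, N_PAD))  # high-level -> mid-level
pep_to_fap = rng.normal(size=(N_FAP, N_PEP))  # mid-level  -> low-level

def emotion_to_fap(pad):
    """Map a PAD emotion point to an MPEG-4 FAP vector via the PEP space."""
    pep = pad_to_pep @ np.asarray(pad)  # PAD-PEP mapping
    fap = pep_to_fap @ pep              # PEP-FAP translation
    return fap

# Example: a pleasant, moderately aroused, dominant state ("elated").
faps = emotion_to_fap([0.5, 0.4, 0.3])
print(faps.shape)  # (68,)
```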
