Abstract: Lip synchronization combined with eye movements and emotional facial expression forms an interesting research field that conveys information about the verbal and nonverbal behaviors of the human body. Most previous researchers have focused on eye gaze, lip synchronization, and emotion expression, which are the most important features for transferring nonverbal information to enhance, understand, or express emotion. In this paper, recent advances in 3D facial expression are reviewed, focusing on the Xface platform toolkit, which synthesizes 3D talking avatars by employing a text-to-speech (TTS) engine to depict the basic lip shapes required for each phoneme to convey the dialogue. This work is intended to indicate future directions that may lead to new research issues in facial animation.