Traditionally, driving a digital human requires facial motion-capture equipment and an accurate performance, which limits the fields in which digital humans can be used. Can a digital human be driven by other means? Photorealistic digital humans typically have exquisitely detailed faces, which makes speech- or text-driven facial animation a major challenge, especially when subtle human emotions must be conveyed.

The Matt AI project aims to push the boundaries of driving a high-fidelity digital human with AI technology in real time. We propose a simple and effective deep-learning method that produces facial animation from speech at almost the same quality as facial motion capture.
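The project does not detail its network here, so as a purely illustrative sketch, a speech-to-animation model of this kind can be thought of as a function mapping a window of audio features to per-frame facial rig controls. The snippet below shows a minimal untrained forward pass under assumed choices: MFCC-style input features, a single hidden layer, and 51 blendshape weights as output; all sizes, names, and the architecture are assumptions, not the project's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 13 * 8   # 13 MFCC coefficients over an 8-frame window (assumed)
N_HIDDEN = 128        # illustrative hidden width
N_BLENDSHAPES = 51    # a common facial rig size (assumed)

# Randomly initialized weights stand in for a trained model.
W1 = rng.standard_normal((N_FEATURES, N_HIDDEN)) * 0.01
b1 = np.zeros(N_HIDDEN)
W2 = rng.standard_normal((N_HIDDEN, N_BLENDSHAPES)) * 0.01
b2 = np.zeros(N_BLENDSHAPES)

def speech_to_blendshapes(features: np.ndarray) -> np.ndarray:
    """Map one window of speech features to blendshape weights in [0, 1]."""
    h = np.tanh(features @ W1 + b1)           # hidden-layer nonlinearity
    logits = h @ W2 + b2
    return 1.0 / (1.0 + np.exp(-logits))      # sigmoid keeps weights in [0, 1]

frame = rng.standard_normal(N_FEATURES)       # one window of speech features
weights = speech_to_blendshapes(frame)
print(weights.shape)
```

In a real pipeline the predicted weights would drive the facial rig frame by frame, so a model this small could comfortably run in real time; capturing subtle emotion, as the project targets, would require a far richer model and training data.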
