
Motion Transfer Face

Uses input from a face tracker to change the weights of facial vertex position sets (called blend shapes, blend keys or morph targets in other applications). Each facial expression is assigned a vertex position set; the values read from the input device change the weights of these sets to blend between the facial expressions.

The supported facial expressions are those supported by the Drag[en]gine game engine, which in turn are based on the OpenXR XR_HTC_facial_tracking extension.
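The weight-blending described above can be sketched as follows. This is an illustrative example only, not DEMoCap's actual API; the set names (e.g. "jaw_open") and the data layout are assumptions.

```python
# Hypothetical sketch: blend weighted vertex position sets (blend shapes)
# onto a base mesh. Tracker output supplies the per-expression weights.

def blend_vertex_position_sets(base_vertices, position_sets, weights):
    """base_vertices: list of (x, y, z) rest positions.
    position_sets: dict mapping set name -> list of per-vertex (dx, dy, dz) offsets.
    weights: dict mapping set name -> float in [0, 1], typically from the face tracker.
    """
    result = [list(v) for v in base_vertices]
    for name, offsets in position_sets.items():
        # Clamp the tracker value to the valid weight range.
        w = max(0.0, min(1.0, weights.get(name, 0.0)))
        if w == 0.0:
            continue
        # Add the weighted offset of each vertex to the accumulated result.
        for vertex, offset in zip(result, offsets):
            for axis in range(3):
                vertex[axis] += w * offset[axis]
    return [tuple(v) for v in result]

# Example: a half-open jaw moves the chin vertex further down.
base = [(0.0, 0.0, 0.0), (0.0, -1.0, 0.0)]
sets = {"jaw_open": [(0.0, 0.0, 0.0), (0.0, -0.5, 0.0)]}
print(blend_vertex_position_sets(base, sets, {"jaw_open": 0.5}))
# → [(0.0, 0.0, 0.0), (0.0, -1.25, 0.0)]
```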

A PDF describing the supported expressions for the right side can be found here; the left side is analogous. Images are copyright Khronos Group:

Name

Name of the motion transfer, used to identify it in the list. Names are not required to be unique.

Eye

Defines the vertex position sets to use for eye expressions. This affects eyelid movement. Eye movement itself is captured using the Eyes motion transfer.

Jaw

Defines the vertex position sets to use for jaw expressions. This affects the jaw movement.

Cheek

Defines the vertex position sets to use for cheek expressions. This affects cheek puffing and sucking in.

Mouth

Defines the vertex position sets to use for mouth expressions. This affects lip and mouth corner movement and is used for emotions as well as articulation. For making actors speak, it is usually better to use the Speech Animation Editor with vertex position sets geared towards individual phonemes. Facial expression tracking, though, is simpler and requires no complex setup.

Some vertex position sets build on top of each other. You have to design them appropriately for the combined result to be correct.
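One way to picture sets building on top of each other: because the weights are applied additively, a set authored on top of another must store only the incremental offsets from that set's result, not from the rest pose. A minimal sketch with hypothetical set names and a single one-dimensional vertex for brevity:

```python
# Illustrative sketch (hypothetical names): "mouth_smile_on_open" is authored
# on top of "mouth_open", so its offsets are deltas from the opened mouth,
# not from the rest pose. Applying both additively reproduces the sculpted
# combination.

rest = [0.0]                    # single 1D vertex position
mouth_open = [0.4]              # delta from rest
mouth_smile_on_open = [0.1]     # delta from rest + mouth_open

def apply(vertices, deltas, weight):
    # Additive blend: shift each vertex by its weighted delta.
    return [v + weight * d for v, d in zip(vertices, deltas)]

combined = apply(apply(rest, mouth_open, 1.0), mouth_smile_on_open, 1.0)
print(combined)  # → [0.5]
```

Had "mouth_smile_on_open" stored its deltas relative to the rest pose instead, applying both sets at full weight would overshoot the sculpted combination.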

Tongue

Defines the vertex position sets to use for tongue expressions. This affects tongue movement and is used for emotions as well as articulation. For making actors speak, it is usually better to use the Speech Animation Editor with vertex position sets geared towards individual phonemes. Facial expression tracking, though, is simpler and requires no complex setup.

Some vertex position sets build on top of each other. You have to design them appropriately for the combined result to be correct.

democap/motiontransferface.txt · Last modified: 2023/06/04 17:16 by dragonlord