{{tag>motioncapture democap}}
[[:start|Start Page]] >> [[main|DEMoCap: Drag[en]gine Motion Capture]] >> **Low Level Motion Transfer: Motion Transfer Face**

{{ :democap:motion_transfer_face.png?direct|Motion Transfer Face}}

Uses input from a face tracker to change the weights of facial vertex position sets (called blend shapes, blend keys or morphs in other applications). Each facial expression is assigned a vertex position set. The values read from the input device change the weights of these vertex position sets to blend between the facial expressions.

The supported facial expressions are based on those supported by the [[dragengine:about|Drag[en]gine game engine]], which in turn are based on the [[https://registry.khronos.org/OpenXR/specs/1.0/html/xrspec.html#XR_HTC_facial_tracking|OpenXR XR_HTC_facial_tracking Extension]]. The following PDF describes the supported expressions for the right side; the left side is analogous. Images are copyright Khronos Group:
  * {{ :democap:democap_facial_expressions.pdf |Facial Expressions}}

====== Motion Transfer Menu ======

Click the ''...'' button to show the motion transfer menu. It contains these entries:

===== Auto Rig =====

Automatically fills in vertex position set names using predefined auto-rig definitions. Right-click the motion transfer panel to access the auto-rig options in the context menu.
  * **Auto Rig...**: Opens a dialog to select from a list of predefined auto-rig definitions. The selected definition is applied to fill in the vertex position set names matching the character model.
  * **Auto Rig Best Matching**: Automatically tries all available auto-rig definitions and applies the one that matches the most vertex position sets in the character model. A dialog is shown if no matching definition is found.

Auto-rig definitions are XML files (''*.deard'') loaded from ''/content/autorig''.
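The //Auto Rig Best Matching// behavior described above can be illustrated with a short sketch. This is not DEMoCap code; the definition structure and the names used below are assumptions made up purely for illustration, not the actual ''*.deard'' schema:

```python
# Illustrative sketch of "Auto Rig Best Matching": each auto-rig definition
# maps expression names to vertex position set names; the definition that
# matches the most vertex position sets in the character model wins.
# The dictionary layout and all names here are hypothetical.

def best_matching_definition(definitions, model_vertex_position_sets):
    """Return the definition matching the most vertex position sets, or None."""
    model_sets = set(model_vertex_position_sets)
    best, best_count = None, 0
    for definition in definitions:
        count = sum(1 for name in definition["sets"].values() if name in model_sets)
        if count > best_count:
            best, best_count = definition, count
    return best  # None means no matching definition found -> show a dialog

# Two made-up definitions and a made-up model for demonstration:
definitions = [
    {"name": "ARKit Face Shapes", "sets": {"eyeBlinkLeft": "eyeBlinkLeft", "jawOpen": "jawOpen"}},
    {"name": "VRoid (VRM)", "sets": {"eyeBlinkLeft": "Fcl_EYE_Close_L", "jawOpen": "Fcl_MTH_A"}},
]
model = ["Fcl_EYE_Close_L", "Fcl_MTH_A", "Fcl_MTH_I"]
print(best_matching_definition(definitions, model)["name"])  # VRoid (VRM)
```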
The following definitions are provided out of the box for facial expression vertex position set assignment:
  * ARKit Face Shapes (iPhone/iPad, Meta Quest, Unreal MetaHuman, Unity Live Capture, VRCFT)
  * VRChat (VRCFT Unified Expressions)
  * VRoid (VRM)
  * Drag[en]gine

More definitions can be added, for example by creating a modification using Mod.IO or by creating a feature issue on GitHub with the new file attached.

====== Name ======

Name of the motion transfer to identify it in the list. The name is not required to be unique.

====== Eye ======

Defines the vertex position sets to use for eye expressions. This affects the eye lid movement. Eye movement itself is captured using the [[democap:motiontransfereyes|Eyes]] motion transfer.

====== Jaw ======

Defines the vertex position sets to use for jaw expressions. This affects the jaw movement.

====== Cheek ======

Defines the vertex position sets to use for cheek expressions. This affects cheek puffing and sucking in.

====== Mouth ======

Defines the vertex position sets to use for mouth expressions. This affects lip and mouth corner movement and is used for emotions as well as articulation.

For making actors speak it is usually better to use the [[gamedev:deigde:editors:speechanimation|Speech Animation Editor]] with vertex position sets geared towards individual phonemes. Using facial expression tracking, though, is simpler and does not require complex setups.

Some vertex position sets build on top of each other. You have to design them appropriately to work correctly.

====== Tongue ======

Defines the vertex position sets to use for tongue expressions. This affects tongue movement and is used for emotions as well as articulation.

For making actors speak it is usually better to use the [[gamedev:deigde:editors:speechanimation|Speech Animation Editor]] with vertex position sets geared towards individual phonemes. Using facial expression tracking, though, is simpler and does not require complex setups.

Some vertex position sets build on top of each other.
You have to design them appropriately to work correctly.
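The core mechanism described at the top of this page, tracker values driving vertex position set weights, can be sketched as follows. This is not DEMoCap's actual API; the expression names are hypothetical, loosely modeled after the XR_HTC_facial_tracking expression list:

```python
# Illustrative sketch (not DEMoCap code): tracker expression values in the
# range 0..1 become the weights of the assigned vertex position sets,
# blending the base mesh toward each facial expression shape.

def apply_expression_weights(assignments, tracker_values):
    """Map tracker expression values to vertex position set weights.

    assignments: expression name -> vertex position set name (None if unassigned)
    tracker_values: expression name -> value read from the face tracker
    """
    weights = {}
    for expression, vertex_set in assignments.items():
        if vertex_set is None:
            continue  # expression not assigned to a vertex position set
        value = tracker_values.get(expression, 0.0)
        weights[vertex_set] = min(max(value, 0.0), 1.0)  # clamp to 0..1
    return weights

# Hypothetical expression and vertex position set names:
assignments = {"jaw_open": "jawOpen", "cheek_puff_right": "cheekPuffRight", "tongue_out": None}
print(apply_expression_weights(assignments, {"jaw_open": 0.7, "cheek_puff_right": 1.3}))
# {'jawOpen': 0.7, 'cheekPuffRight': 1.0}
```

Note that out-of-range tracker values are clamped, and expressions without an assigned vertex position set are simply ignored.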