- Sceneform SDK for Android was open sourced and archived (github.com/google-ar/sceneform-android-sdk) with version 1.16.0.
- This site (developers.google.com/sceneform) serves as the documentation archive for the previous version, Sceneform SDK for Android 1.15.0.
- Do not use version 1.17.0 of the Sceneform Maven artifacts.
- The 1.17.1 Maven artifacts can be used. Other than the version, however, the 1.17.1 artifacts are identical to the 1.15.0 artifacts.
Augmented Faces
Augmented Faces allows your app to automatically identify different
regions of a detected face and use those regions to overlay assets, such as
textures and models, in a way that properly matches the contours of an
individual face.
How does Augmented Faces work?
The AugmentedFaces sample
app overlays the facial features of a fox onto a user's face using a 3D
model and a texture.
The 3D model consists of two fox ears and a fox nose. Each is a separate bone
that can be moved individually to follow the facial region it is attached to.
The texture consists of eye shadow, freckles, and other coloring.
When you run the sample app, it calls APIs to detect a face and overlays both the texture and the models onto the face.
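Face detection only runs when the session is configured for Augmented Faces, which also requires the front-facing (selfie) camera. A minimal sketch of that configuration, assuming an existing ARCore `session` (the helper method name is hypothetical):

```java
import com.google.ar.core.Config;
import com.google.ar.core.Session;

// Sketch: enable Augmented Faces on an existing ARCore session.
// Augmented Faces requires a session created with the
// front-facing (selfie) camera.
void enableAugmentedFaces(Session session) {
  Config config = new Config(session);
  // MESH3D makes ARCore generate the augmented face mesh
  // and region poses for detected faces.
  config.setAugmentedFaceMode(Config.AugmentedFaceMode.MESH3D);
  session.configure(config);
}
```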
Identifying an augmented face mesh
To properly overlay textures and 3D models on a detected face, ARCore
provides detected regions and an augmented face mesh. This mesh
is a virtual representation of the face and consists of the vertices, facial
regions, and the center of the user's head. Note that the
orientation
of the mesh is different for Sceneform.
When a user's face is detected by the camera, ARCore performs these steps to
generate the augmented face mesh, as well as center and region poses:
1. It identifies the center pose and a face mesh.
   - The center pose, located behind the nose, is the physical center point of the user's head (in other words, inside the skull).
   - The face mesh consists of hundreds of vertices that make up the face, and is defined relative to the center pose.
2. The AugmentedFace class uses the face mesh and center pose to identify
face region poses on the user's face. These regions are:
- Left forehead (LEFT_FOREHEAD)
- Right forehead (RIGHT_FOREHEAD)
- Tip of the nose (NOSE_TIP)
These elements (the center pose, face mesh, and face region poses) make up
the augmented face mesh and are used by the AugmentedFace
APIs as positioning
points and regions to place the assets in your app.
Next steps
Start using Augmented Faces in your own apps.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2024-06-26 UTC.