You can use ML Kit to detect faces in selfie-like images and videos.
| Face mesh detection API | |
|---|---|
| SDK name | face-mesh-detection |
| Implementation | Code and assets are statically linked to your app at build time. |
| App size impact | ~6.4MB |
| Performance | Real-time on most devices. |
Try it out
- Play around with the sample app to see an example usage of this API.
Before you begin
1. In your project-level build.gradle file, make sure to include Google's Maven repository in both your buildscript and allprojects sections (a minimal sketch follows this list).
2. Add the dependency for the ML Kit face mesh detection library to your module's app-level gradle file, which is usually app/build.gradle:

```groovy
dependencies {
  // ...
  implementation 'com.google.mlkit:face-mesh-detection:16.0.0-beta1'
}
```
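Purely as a hedged sketch of that repository setup, assuming a Groovy-DSL project-level build.gradle (newer projects may declare repositories in settings.gradle instead):

```groovy
// Sketch only: Google's Maven repository (google()) hosts the ML Kit artifacts.
buildscript {
    repositories {
        google()
        mavenCentral()
    }
}

allprojects {
    repositories {
        google()
        mavenCentral()
    }
}
```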
Input image guidelines
Images should be taken within ~2 meters (~7 feet) from the device camera, so that the faces are sufficiently large for optimal face mesh recognition. In general, the larger the face, the better the face mesh recognition.
The face should be facing the camera with at least half of the face visible. Any large object between the face and the camera may result in lower accuracy.
If you would like to detect faces in a real-time application, you should also consider the overall dimensions of the input image. Smaller images can be processed faster, so capturing images at lower resolutions reduces latency. However, keep in mind the accuracy requirements above and ensure that the subject's face occupies as much of the image as possible.
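Where it helps, here is a minimal Kotlin sketch of one way to shrink an already-captured frame before detection; the 480-pixel target width and the Bitmap source are illustrative assumptions, not API requirements:

```kotlin
import android.graphics.Bitmap

// Hypothetical helper: scale a captured frame down toward ~480px wide before
// handing it to the face mesh detector, preserving the aspect ratio.
fun downscaleForDetection(source: Bitmap, targetWidth: Int = 480): Bitmap {
    if (source.width <= targetWidth) return source
    val scale = targetWidth.toFloat() / source.width
    val targetHeight = (source.height * scale).toInt()
    return Bitmap.createScaledBitmap(source, targetWidth, targetHeight, /* filter = */ true)
}
```

Capturing at a lower resolution in the first place avoids this extra scaling work, so prefer that when your camera pipeline allows it.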
Configure the face mesh detector
If you want to change any of the face mesh detector's default settings, specify those settings with a FaceMeshDetectorOptions object. You can change the following settings:
- setUseCase
  - BOUNDING_BOX_ONLY: Only provides a bounding box for a detected face mesh. This is the fastest face detector, but it has a range limitation (faces must be within ~2 meters or ~7 feet of the camera).
  - FACE_MESH (default option): Provides a bounding box and additional face mesh info (468 3D points and triangle info). Compared to the BOUNDING_BOX_ONLY use case, latency increases by ~15%, as measured on Pixel 3.
For example:
Kotlin
```kotlin
val defaultDetector = FaceMeshDetection.getClient(
    FaceMeshDetectorOptions.DEFAULT_OPTIONS)

val boundingBoxDetector = FaceMeshDetection.getClient(
    FaceMeshDetectorOptions.Builder()
        .setUseCase(UseCase.BOUNDING_BOX_ONLY)
        .build()
)
```
Java
```java
FaceMeshDetector defaultDetector = FaceMeshDetection.getClient(
    FaceMeshDetectorOptions.DEFAULT_OPTIONS);

FaceMeshDetector boundingBoxDetector = FaceMeshDetection.getClient(
    new FaceMeshDetectorOptions.Builder()
        .setUseCase(UseCase.BOUNDING_BOX_ONLY)
        .build()
);
```
Prepare the input image
To detect faces in an image, create an InputImage object from either a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the InputImage object to the FaceMeshDetector's process method.
For face mesh detection, you should use an image with dimensions of at least 480x360 pixels. If you are detecting faces in real time, capturing frames at this minimum resolution can help reduce latency.
You can create an InputImage object from different sources, each of which is explained below.
Using a media.Image
To create an InputImage object from a media.Image object, such as when you capture an image from a device's camera, pass the media.Image object and the image's rotation to InputImage.fromMediaImage().

If you use the CameraX library, the OnImageCapturedListener and ImageAnalysis.Analyzer classes calculate the rotation value for you.
Kotlin
```kotlin
private class YourImageAnalyzer : ImageAnalysis.Analyzer {

    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image
        if (mediaImage != null) {
            val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
            // Pass image to an ML Kit Vision API
            // ...
        }
    }
}
```
Java
```java
private class YourAnalyzer implements ImageAnalysis.Analyzer {

    @Override
    public void analyze(ImageProxy imageProxy) {
        Image mediaImage = imageProxy.getImage();
        if (mediaImage != null) {
            InputImage image =
                    InputImage.fromMediaImage(mediaImage, imageProxy.getImageInfo().getRotationDegrees());
            // Pass image to an ML Kit Vision API
            // ...
        }
    }
}
```
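Not shown on this page, but for context, here is a minimal Kotlin sketch of how such an analyzer might be attached to a CameraX ImageAnalysis use case; the executor and the lifecycle binding are assumptions about your app's setup:

```kotlin
import androidx.camera.core.ImageAnalysis
import java.util.concurrent.Executor

// Hypothetical wiring: build an ImageAnalysis use case and attach an analyzer
// such as YourImageAnalyzer() above. `cameraExecutor` is a background executor
// owned by your app.
fun buildImageAnalysis(cameraExecutor: Executor, analyzer: ImageAnalysis.Analyzer): ImageAnalysis =
    ImageAnalysis.Builder()
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .build()
        .also { it.setAnalyzer(cameraExecutor, analyzer) }

// The use case is then typically bound alongside a Preview use case, for example:
// cameraProvider.bindToLifecycle(lifecycleOwner, cameraSelector, preview, imageAnalysis)
```

In a real analyzer you also need to close the ImageProxy once detection completes so that CameraX can deliver the next frame.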
If you don't use a camera library that gives you the image's rotation degree, you can calculate it from the device's rotation degree and the orientation of the camera sensor in the device:
Kotlin
```kotlin
private val ORIENTATIONS = SparseIntArray()

init {
    ORIENTATIONS.append(Surface.ROTATION_0, 0)
    ORIENTATIONS.append(Surface.ROTATION_90, 90)
    ORIENTATIONS.append(Surface.ROTATION_180, 180)
    ORIENTATIONS.append(Surface.ROTATION_270, 270)
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
@Throws(CameraAccessException::class)
private fun getRotationCompensation(cameraId: String, activity: Activity, isFrontFacing: Boolean): Int {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    val deviceRotation = activity.windowManager.defaultDisplay.rotation
    var rotationCompensation = ORIENTATIONS.get(deviceRotation)

    // Get the device's sensor orientation.
    val cameraManager = activity.getSystemService(CAMERA_SERVICE) as CameraManager
    val sensorOrientation = cameraManager
        .getCameraCharacteristics(cameraId)
        .get(CameraCharacteristics.SENSOR_ORIENTATION)!!

    if (isFrontFacing) {
        rotationCompensation = (sensorOrientation + rotationCompensation) % 360
    } else { // back-facing
        rotationCompensation = (sensorOrientation - rotationCompensation + 360) % 360
    }
    return rotationCompensation
}
```
Java
```java
private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
static {
    ORIENTATIONS.append(Surface.ROTATION_0, 0);
    ORIENTATIONS.append(Surface.ROTATION_90, 90);
    ORIENTATIONS.append(Surface.ROTATION_180, 180);
    ORIENTATIONS.append(Surface.ROTATION_270, 270);
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
private int getRotationCompensation(String cameraId, Activity activity, boolean isFrontFacing)
        throws CameraAccessException {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    int deviceRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
    int rotationCompensation = ORIENTATIONS.get(deviceRotation);

    // Get the device's sensor orientation.
    CameraManager cameraManager = (CameraManager) activity.getSystemService(CAMERA_SERVICE);
    int sensorOrientation = cameraManager
            .getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.SENSOR_ORIENTATION);

    if (isFrontFacing) {
        rotationCompensation = (sensorOrientation + rotationCompensation) % 360;
    } else { // back-facing
        rotationCompensation = (sensorOrientation - rotationCompensation + 360) % 360;
    }
    return rotationCompensation;
}
```
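The helper above expects a cameraId; as an illustrative aside (not part of the original snippet), one way to look up the front-facing camera's ID with the Camera2 API could be:

```kotlin
import android.content.Context
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager
import android.hardware.camera2.CameraMetadata

// Hypothetical helper: return the ID of the first front-facing camera, or null if none exists.
fun findFrontCameraId(context: Context): String? {
    val cameraManager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    return cameraManager.cameraIdList.firstOrNull { id ->
        cameraManager.getCameraCharacteristics(id)
            .get(CameraCharacteristics.LENS_FACING) == CameraMetadata.LENS_FACING_FRONT
    }
}
```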
Then, pass the media.Image object and the rotation degree value to InputImage.fromMediaImage():
Kotlin
```kotlin
val image = InputImage.fromMediaImage(mediaImage, rotation)
```
Java
```java
InputImage image = InputImage.fromMediaImage(mediaImage, rotation);
```
Using a file URI
To create an InputImage object from a file URI, pass the app context and file URI to InputImage.fromFilePath(). This is useful when you use an ACTION_GET_CONTENT intent to prompt the user to select an image from their gallery app.
Kotlin
```kotlin
val image: InputImage
try {
    image = InputImage.fromFilePath(context, uri)
} catch (e: IOException) {
    e.printStackTrace()
}
```
Java
```java
InputImage image;
try {
    image = InputImage.fromFilePath(context, uri);
} catch (IOException e) {
    e.printStackTrace();
}
```
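As an illustrative sketch of the ACTION_GET_CONTENT flow mentioned above, one common way to obtain the Uri is the Activity Result API; the surrounding Activity or Fragment plumbing below is assumed, not taken from this page:

```kotlin
import android.net.Uri
import androidx.activity.result.contract.ActivityResultContracts
import com.google.mlkit.vision.common.InputImage

// Hypothetical wiring inside an Activity or Fragment: let the user pick an image,
// then build an InputImage from the returned Uri.
private val pickImage = registerForActivityResult(ActivityResultContracts.GetContent()) { uri: Uri? ->
    uri?.let {
        val image = InputImage.fromFilePath(this, it)
        // Pass image to the face mesh detector
        // ...
    }
}

// Launch the picker, filtered to images:
// pickImage.launch("image/*")
```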
Using a ByteBuffer or ByteArray

To create an InputImage object from a ByteBuffer or a ByteArray, first calculate the image rotation degree as previously described for media.Image input. Then, create the InputImage object with the buffer or array, together with the image's height, width, color encoding format, and rotation degree:
Kotlin
```kotlin
val image = InputImage.fromByteBuffer(
    byteBuffer,
    /* image width */ 480,
    /* image height */ 360,
    rotationDegrees,
    InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
)

// Or:
val image = InputImage.fromByteArray(
    byteArray,
    /* image width */ 480,
    /* image height */ 360,
    rotationDegrees,
    InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
)
```
Java
```java
InputImage image = InputImage.fromByteBuffer(byteBuffer,
    /* image width */ 480,
    /* image height */ 360,
    rotationDegrees,
    InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
);

// Or:
InputImage image = InputImage.fromByteArray(
    byteArray,
    /* image width */ 480,
    /* image height */ 360,
    rotationDegrees,
    InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
);
```
Using a Bitmap
To create an InputImage object from a Bitmap object, make the following declaration:
Kotlin
```kotlin
val image = InputImage.fromBitmap(bitmap, 0)
```
Java
```java
InputImage image = InputImage.fromBitmap(bitmap, rotationDegree);
```
The image is represented by a Bitmap object together with rotation degrees.
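If the Bitmap comes from a file on disk, a minimal Kotlin sketch of producing the InputImage might look like the following; decoding with BitmapFactory and treating the decoded image as already upright (rotation 0) are assumptions here:

```kotlin
import android.graphics.BitmapFactory
import com.google.mlkit.vision.common.InputImage

// Hypothetical helper: decode an image file into a Bitmap and wrap it as an InputImage.
fun inputImageFromFile(path: String): InputImage? {
    val bitmap = BitmapFactory.decodeFile(path) ?: return null
    return InputImage.fromBitmap(bitmap, /* rotationDegrees = */ 0)
}
```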
Process the image
Pass the image to the process method:
Kotlin
```kotlin
val result = detector.process(image)
    .addOnSuccessListener { result ->
        // Task completed successfully
        // …
    }
    .addOnFailureListener { e ->
        // Task failed with an exception
        // …
    }
```
Java
```java
Task<List<FaceMesh>> result = detector.process(image)
    .addOnSuccessListener(
        new OnSuccessListener<List<FaceMesh>>() {
            @Override
            public void onSuccess(List<FaceMesh> result) {
                // Task completed successfully
                // …
            }
        })
    .addOnFailureListener(
        new OnFailureListener() {
            @Override
            public void onFailure(Exception e) {
                // Task failed with an exception
                // …
            }
        });
```
Get information about the detected face mesh
If any face is detected in the image, a list of FaceMesh objects is passed to the success listener. Each FaceMesh represents a face that was detected in the image. For each face mesh, you can get its bounding coordinates in the input image, as well as any other information that you configured the face mesh detector to find.
Kotlin
```kotlin
for (faceMesh in faceMeshs) {
    val bounds: Rect = faceMesh.boundingBox

    // Gets all points
    val faceMeshpoints = faceMesh.allPoints
    for (faceMeshpoint in faceMeshpoints) {
        val index: Int = faceMeshpoint.index
        val position = faceMeshpoint.position
    }

    // Gets triangle info
    val triangles: List<Triangle<FaceMeshPoint>> = faceMesh.allTriangles
    for (triangle in triangles) {
        // 3 points connecting to each other and representing a triangle area.
        val connectedPoints = triangle.allPoints
    }
}
```
Java
```java
for (FaceMesh faceMesh : faceMeshs) {
    Rect bounds = faceMesh.getBoundingBox();

    // Gets all points
    List<FaceMeshPoint> faceMeshpoints = faceMesh.getAllPoints();
    for (FaceMeshPoint faceMeshpoint : faceMeshpoints) {
        int index = faceMeshpoint.getIndex();
        PointF3D position = faceMeshpoint.getPosition();
    }

    // Gets triangle info
    List<Triangle<FaceMeshPoint>> triangles = faceMesh.getAllTriangles();
    for (Triangle<FaceMeshPoint> triangle : triangles) {
        // 3 points connecting to each other and representing a triangle area.
        List<FaceMeshPoint> connectedPoints = triangle.getAllPoints();
    }
}
```
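Purely as an illustration of consuming these results (not part of the original page), the 2D components of each point's position could be drawn onto an overlay; the scaleX/scaleY factors mapping image pixels to view coordinates are assumptions about your UI:

```kotlin
import android.graphics.Canvas
import android.graphics.Paint
import com.google.mlkit.vision.facemesh.FaceMesh

// Hypothetical helper: draw every mesh point of a detected FaceMesh onto a Canvas.
// scaleX/scaleY convert input-image coordinates to view coordinates.
fun drawFaceMeshPoints(canvas: Canvas, faceMesh: FaceMesh, scaleX: Float, scaleY: Float) {
    val paint = Paint().apply { strokeWidth = 4f }
    for (point in faceMesh.allPoints) {
        val position = point.position // PointF3D: x and y are in image pixels, z is relative depth
        canvas.drawPoint(position.x * scaleX, position.y * scaleY, paint)
    }
}
```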