The Mobile Vision API is deprecated

Detect Facial Features in Photos

This page is a walkthrough of how to use the Face API to detect a face and its associated facial landmarks (e.g., eyes, nose) in a photo. We'll show how to draw graphics over the face to indicate the positions of the detected landmarks.

This is the sample application you built if you followed the instructions on the Getting Started page. Otherwise, if you want to follow along with the code, it's available in the photo-demo folder of our GitHub samples.

This tutorial will discuss:

  1. Creating a face detector.
  2. Detecting faces with facial landmarks.
  3. Querying the face detector operational status.
  4. Releasing the face detector.

Creating the face detector

In this example, the face detector is created in the onCreate method of the app’s main activity. It is initialized with options for detecting faces with landmarks in a photo:

FaceDetector detector = new FaceDetector.Builder(context)
    .setTrackingEnabled(false)
    .setLandmarkType(FaceDetector.ALL_LANDMARKS)
    .build();

Setting “tracking enabled” to false is recommended when detecting faces in unrelated individual images (as opposed to video or a series of consecutively captured still images), since this gives a more accurate result. For detection on consecutive images (e.g., live video), having tracking enabled gives a faster and more accurate result.

Note that facial landmarks are not required for detecting a face; landmark detection is a separate, optional step. By default, landmark detection is not enabled, since it increases detection time. We enable it here in order to visualize the detected landmarks.
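For comparison, a sketch of a detector configured for the other scenario described above (consecutive frames, such as live video) might look like the following. The specific option values are illustrative assumptions, not part of the sample app:

// Sketch of a detector tuned for consecutive frames (e.g., live video):
// tracking stays enabled, landmarks stay at their default of NO_LANDMARKS,
// and FAST_MODE trades some accuracy for speed.
FaceDetector videoDetector = new FaceDetector.Builder(context)
    .setTrackingEnabled(true)
    .setLandmarkType(FaceDetector.NO_LANDMARKS)
    .setMode(FaceDetector.FAST_MODE)
    .build();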

Detecting faces and facial landmarks

Given a bitmap, we can create a Frame instance from it to supply to the detector:

Frame frame = new Frame.Builder().setBitmap(bitmap).build();
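
The bitmap itself can come from any source. As a rough sketch (the resource identifier here is a hypothetical placeholder, not part of the sample app), it could be decoded from a bundled drawable:

// Hypothetical example: decode a bundled image into a Bitmap before framing it.
// R.drawable.portrait is a placeholder resource name.
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.portrait);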

The detector can be called synchronously with a frame to detect faces:

SparseArray<Face> faces = detector.detect(frame);

The result is a collection of Face instances. We can iterate over the faces, then over each face's landmarks, and draw a circle at each landmark position (canvas, paint, and scale are assumed to come from the view that renders the photo, with scale mapping bitmap coordinates to view coordinates):

for (int i = 0; i < faces.size(); ++i) {
    Face face = faces.valueAt(i);
    for (Landmark landmark : face.getLandmarks()) {
        int cx = (int) (landmark.getPosition().x * scale);
        int cy = (int) (landmark.getPosition().y * scale);
        canvas.drawCircle(cx, cy, 10, paint);
    }
}
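
Each Landmark also reports its type, and each Face its bounding box, so the same loop can carry more information. The following sketch (not part of the sample app) draws each face's bounding box and marks only the eye landmarks:

// Sketch: draw each face's bounding box and mark only the eyes.
// canvas, paint, and scale are assumed to be the same ones used above;
// for an outline rather than a filled box, paint should use Paint.Style.STROKE.
for (int i = 0; i < faces.size(); ++i) {
    Face face = faces.valueAt(i);

    // Face.getPosition() is the top-left corner of the face's bounding box.
    float left = face.getPosition().x * scale;
    float top = face.getPosition().y * scale;
    canvas.drawRect(left, top,
            left + face.getWidth() * scale,
            top + face.getHeight() * scale, paint);

    for (Landmark landmark : face.getLandmarks()) {
        int type = landmark.getType();
        if (type == Landmark.LEFT_EYE || type == Landmark.RIGHT_EYE) {
            canvas.drawCircle(landmark.getPosition().x * scale,
                    landmark.getPosition().y * scale, 10, paint);
        }
    }
}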

Querying the face detector's operational status

The first time that an app using the Face API is installed on a device, GMS will download a native library to the device in order to do face detection. Usually this is done by the installer before the app is run for the first time. But if that download has not yet completed, then the above “detect” method will not detect any faces. This could happen if the user is not online, if the user lacks sufficient storage space on their device, or if the download is otherwise delayed (e.g., due to a slow network).

The detector will automatically become operational once the library download has been completed on device.

A detector’s isOperational method can be used to check if the required native library is currently available:

if (!detector.isOperational()) {
    // ...
}

Your app can take action based upon the operational state of a detector (e.g., temporarily disabling certain features, or displaying a notification to the user).
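
For example, one common cause of an incomplete download is low device storage. A minimal sketch of a reaction to that case, assuming this code runs inside an Activity (the log tag and message strings are placeholders), might look like:

// Sketch: if the detector isn't operational yet, check the sticky low-storage
// broadcast and warn the user. TAG and the message strings are placeholders.
if (!detector.isOperational()) {
    IntentFilter lowStorageFilter = new IntentFilter(Intent.ACTION_DEVICE_STORAGE_LOW);
    boolean hasLowStorage = registerReceiver(null, lowStorageFilter) != null;

    if (hasLowStorage) {
        Toast.makeText(this, "Face detection unavailable: low storage", Toast.LENGTH_LONG).show();
        Log.w(TAG, "Detector dependencies could not be downloaded due to low device storage");
    }
}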

Releasing the face detector

The face detector uses native resources in order to do detection. For this reason, it is necessary to release the detector instance once it is no longer needed:

detector.release();

Note, however, that you can reuse the same face detector instance to detect faces in multiple photos; release it only when you are finished with detection.
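
One possible arrangement, sketched below under the assumption that the detector is a field of the Activity initialized in onCreate (detectFaces is a hypothetical helper, not part of the sample app), is to reuse it for every photo and release it when the Activity is destroyed:

// Sketch: one detector reused for many photos, released with the Activity.
private SparseArray<Face> detectFaces(Bitmap bitmap) {
    Frame frame = new Frame.Builder().setBitmap(bitmap).build();
    return detector.detect(frame);  // safe to call repeatedly on one detector
}

@Override
protected void onDestroy() {
    super.onDestroy();
    detector.release();  // free the native resources exactly once
}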