The Mobile Vision API is deprecated

Detect Facial Features in Photos

This page is a walkthrough of how to use the Face API to detect a face and its associated facial landmarks (eyes, nose, etc.) in a photo. We'll show how to draw graphics over the face to indicate the positions of the detected landmarks.

As an example of the kind of output you can get, this is the photo tab in the FaceDetectorDemo sample application from the Getting Started page.

[Screenshots: the photo tab of the sample app, and the same photo with the detected face and landmarks drawn over it]

This tutorial will discuss:

  1. Creating a face detector.
  2. Detecting faces with facial landmarks and classifications.
  3. Getting faces, landmarks, and classification features.

Creating the face detector

First, import the GoogleMobileVision framework to use the detector API.

@import GoogleMobileVision;

In this example, the face detector is created in the viewDidLoad method of the app’s PhotoViewController. It is initialized with options for detecting faces with landmarks and classifications in a photo:

  NSDictionary *options = @{
    GMVDetectorFaceLandmarkType : @(GMVDetectorFaceLandmarkAll),
    GMVDetectorFaceClassificationType : @(GMVDetectorFaceClassificationAll),
    GMVDetectorFaceTrackingEnabled : @(NO)
  };
  self.faceDetector = [GMVDetector detectorOfType:GMVDetectorTypeFace options:options];

GMVDetectorFaceTrackingEnabled enables or disables face tracking, which maintains a consistent ID for each face across consecutive frames. For unrelated individual images (as opposed to video or a series of consecutively captured still images), setting it to NO is recommended because it gives more accurate results. For detection on consecutive images (e.g., live video), enabling tracking is generally both faster and more accurate.
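For example, when processing frames from live video you would keep the same options but flip the tracking flag. A minimal sketch of that variation (the videoOptions and videoFaceDetector names are just for illustration):

  // For consecutive frames (e.g., live video), enabling tracking keeps a
  // consistent identifier per face across frames.
  NSDictionary *videoOptions = @{
    GMVDetectorFaceLandmarkType : @(GMVDetectorFaceLandmarkAll),
    GMVDetectorFaceClassificationType : @(GMVDetectorFaceClassificationAll),
    GMVDetectorFaceTrackingEnabled : @(YES)
  };
  GMVDetector *videoFaceDetector =
      [GMVDetector detectorOfType:GMVDetectorTypeFace options:videoOptions];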

Note that facial landmarks and classifications are not required for face detection; detecting them is optional because it takes additional time. We enable them here in order to visualize the detected landmarks and classifications.
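Conversely, if you only need face bounding boxes, you can skip the landmark and classification work for speed. A minimal sketch, assuming the ...None counterparts to the ...All values used above are defined in the SDK headers (verify the constant names against your version of GMVDetectorConstants.h):

  // Detection only: no landmarks or classifications, for faster processing.
  // GMVDetectorFaceLandmarkNone / GMVDetectorFaceClassificationNone are the
  // assumed "disabled" values here.
  NSDictionary *fastOptions = @{
    GMVDetectorFaceLandmarkType : @(GMVDetectorFaceLandmarkNone),
    GMVDetectorFaceClassificationType : @(GMVDetectorFaceClassificationNone),
    GMVDetectorFaceTrackingEnabled : @(NO)
  };
  GMVDetector *fastFaceDetector =
      [GMVDetector detectorOfType:GMVDetectorTypeFace options:fastOptions];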

Detecting faces with facial landmarks and classifications

Use the face detector to find faces in a UIImage.

UIImage *image = [UIImage imageNamed:@"multi-face.jpg"];
NSArray<GMVFaceFeature *> *faces = [self.faceDetector featuresInImage:image
                                                              options:nil];

The face detector expects images, and the faces in them, to be in an upright orientation. If the image needs to be rotated, pass orientation information in the options dictionary using the GMVDetectorImageOrientation key. The detector then rotates the image for you based on that orientation value.
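For example, here is a sketch of passing an orientation, assuming the GMVImageOrientation values defined alongside GMVDetectorImageOrientation in the SDK headers (GMVImageOrientationRightTop is used here purely for illustration):

// Tell the detector how the image is oriented so it can rotate it
// internally before running detection.
NSDictionary *detectionOptions = @{
  GMVDetectorImageOrientation : @(GMVImageOrientationRightTop)
};
NSArray<GMVFaceFeature *> *orientedFaces =
    [self.faceDetector featuresInImage:image options:detectionOptions];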

Getting faces and features

The returned result is an array of GMVFaceFeature instances. We can iterate over the faces, read each face's landmark and classification features, and then draw the results at the reported positions.

for (GMVFaceFeature *face in faces) {
    // Face
    CGRect rect = face.bounds;
    NSLog(@"%@", NSStringFromRect(rect));

    // Mouth
    if (face.hasMouthPosition) {
        NSLog(@"Mouth %g %g", face.mouthPosition.x, face.mouthPosition.y);
    }

    // Smiling
    if (face.hasSmilingProbability) {
        NSLog(@"Smiling probability %g", face.smilingProbability);
    }
}

The face bounds and landmark positions are relative to the image, in the view coordinate system. To check whether a feature was detected before reading its value, use the corresponding has... property for each attribute (for example, hasMouthPosition or hasSmilingProbability).
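To actually indicate these positions on screen, as mentioned at the top of this page, you could add simple overlay views on top of the image view. This is a rough sketch that assumes self.faceImageView displays the image at its natural size (1:1); in practice you would scale the coordinates to match how the image view displays the image:

for (GMVFaceFeature *face in faces) {
    // Outline the detected face with a bordered view.
    UIView *faceView = [[UIView alloc] initWithFrame:face.bounds];
    faceView.layer.borderWidth = 2;
    faceView.layer.borderColor = [UIColor redColor].CGColor;
    faceView.backgroundColor = [UIColor clearColor];
    [self.faceImageView addSubview:faceView];

    // Mark the mouth landmark with a small dot, if it was detected.
    if (face.hasMouthPosition) {
        CGRect dotFrame = CGRectMake(face.mouthPosition.x - 3,
                                     face.mouthPosition.y - 3, 6, 6);
        UIView *dot = [[UIView alloc] initWithFrame:dotFrame];
        dot.layer.cornerRadius = 3;
        dot.backgroundColor = [UIColor greenColor];
        [self.faceImageView addSubview:dot];
    }
}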