The selfie segmentation API takes an input image and produces an output mask. By default, the mask is the same size as the input image. Each pixel of the mask is assigned a float value in the range [0.0, 1.0]. The closer the value is to 1.0, the higher the confidence that the pixel represents a person; the closer it is to 0.0, the higher the confidence that the pixel represents the background.
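A common way to use these per-pixel confidences is to threshold them into a binary person/background mask. The sketch below is illustrative only: the 0.5 threshold and the `binarize_mask` helper are assumptions for this example, not part of the API.

```python
def binarize_mask(mask, threshold=0.5):
    """Convert per-pixel confidences in [0.0, 1.0] to a binary person mask.

    mask: 2D list of floats, one confidence value per pixel.
    Returns a 2D list of ints: 1 = person, 0 = background.
    """
    return [[1 if c >= threshold else 0 for c in row] for row in mask]

# A tiny 2x3 confidence mask: high values on the left (person),
# low values on the right (background).
confidences = [
    [0.95, 0.80, 0.10],
    [0.60, 0.40, 0.05],
]
print(binarize_mask(confidences))
# [[1, 1, 0], [1, 0, 0]]
```

In practice you might keep the raw confidences instead and use them as an alpha channel for soft blending, which avoids hard edges around hair and clothing.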
The API works for both static images and live video. During live video, it leverages output from previous frames to return smoother segmentation results.
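The API performs this temporal smoothing internally. To illustrate the general idea only, the sketch below hand-rolls one simple smoothing technique, an exponential moving average over consecutive masks; this is an assumption for illustration, not the API's actual algorithm.

```python
def smooth_masks(prev_mask, new_mask, alpha=0.7):
    """Blend the new frame's mask with the previous frame's smoothed mask.

    Exponential moving average: higher alpha trusts the new frame more,
    lower alpha gives smoother (but laggier) results across frames.
    Both masks are 2D lists of floats with the same shape.
    """
    return [
        [alpha * n + (1 - alpha) * p for p, n in zip(prow, nrow)]
        for prow, nrow in zip(prev_mask, new_mask)
    ]

# A pixel that flips abruptly from background (0.0) to person (1.0)
# moves only part of the way in a single smoothed frame.
prev = [[0.0]]
new = [[1.0]]
print(smooth_masks(prev, new))  # [[0.7]]
```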
Key capabilities:
- **Cross-platform support.** Enjoy the same experience on both Android and iOS.
- **Single or multiple user support.** Easily segment a single person or multiple people without changing any settings.
- **Full and half body support.** The API can segment both full-body and upper-body portraits and video.
- **Real-time results.** The API is CPU-based and runs in real time (20+ FPS) on most modern smartphones, and it works well with both still images and live video streams.
- **Raw size mask support.** By default, the segmentation mask output is the same size as the input image. The API also supports an option that produces a mask at the model's output size instead (e.g. 256x256). This option makes it easier to apply custom rescaling logic, and it reduces latency when rescaling to the input image size is not needed for your use case.
Example results
| Input Image | Output Image + Mask |
| --- | --- |
Under the hood
For more information on how the model was trained and our ML fairness practices, check out our Model Card.