When you pass an image to ML Kit, it detects up to five objects in the image along with the position of each object in the image. When detecting objects in video streams, each object has a unique ID that you can use to track the object from frame to frame.
You can use a custom image classification model to classify the objects that are detected. Please refer to Custom models with ML Kit for guidance on model compatibility requirements, where to find pre-trained models, and how to train your own models.
There are two ways to integrate a custom model. You can bundle the model by putting it inside your app’s asset folder, or you can dynamically download it from Firebase. The following table compares the two options.
| Bundled model | Hosted model |
|---|---|
| The model is part of your app's .ipa file, which increases its size. | The model is not part of your app's .ipa file. It is hosted by uploading it to Firebase Machine Learning. |
| The model is available immediately, even when the iOS device is offline. | The model is downloaded on demand. |
| No need for a Firebase project. | Requires a Firebase project. |
| You must republish your app to update the model. | Push model updates without republishing your app. |
| No built-in A/B testing. | Easy A/B testing with Firebase Remote Config. |
Try it out
- See the vision quickstart app for an example usage of the bundled model and the automl quickstart app for an example usage of the hosted model.
- See the Material Design showcase app for an end-to-end implementation of this API.
Before you begin
Include the ML Kit libraries in your Podfile:
For bundling a model with your app:

```
pod 'GoogleMLKit/ObjectDetectionCustom', '7.0.0'
```

For dynamically downloading a model from Firebase, add the `LinkFirebase` dependency:

```
pod 'GoogleMLKit/ObjectDetectionCustom', '7.0.0'
pod 'GoogleMLKit/LinkFirebase', '7.0.0'
```

After you install or update your project's Pods, open your Xcode project using its `.xcworkspace` file. ML Kit is supported in Xcode version 13.2.1 or higher.

If you want to download a model, make sure you add Firebase to your iOS project, if you have not already done so. This is not required when you bundle the model.
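As a minimal sketch of the Firebase setup (assuming you have already added your project's `GoogleService-Info.plist`), you would configure Firebase at launch, before any remote-model APIs are used:

```swift
import UIKit
import FirebaseCore

@main
class AppDelegate: UIResponder, UIApplicationDelegate {
  func application(
    _ application: UIApplication,
    didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]? = nil
  ) -> Bool {
    // Configure Firebase before using any ML Kit remote-model APIs.
    FirebaseApp.configure()
    return true
  }
}
```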
1. Load the model
Configure a local model source
To bundle the model with your app:
- Copy the model file (usually ending in `.tflite` or `.lite`) to your Xcode project, taking care to select "Copy bundle resources" when you do so. The model file will be included in the app bundle and available to ML Kit.
- Create a `LocalModel` object, specifying the path to the model file:

Swift

```swift
let localModel = LocalModel(path: localModelFilePath)
```

Objective-C

```objective-c
MLKLocalModel *localModel =
    [[MLKLocalModel alloc] initWithPath:localModelFilePath];
```
Configure a Firebase-hosted model source
To use the remotely-hosted model, create a `CustomRemoteModel` object, specifying the name you assigned the model when you published it:
Swift
```swift
let firebaseModelSource = FirebaseModelSource(
    name: "your_remote_model")  // The name you assigned in the Firebase console.
let remoteModel = CustomRemoteModel(remoteModelSource: firebaseModelSource)
```
Objective-C
```objective-c
MLKFirebaseModelSource *firebaseModelSource =
    [[MLKFirebaseModelSource alloc]
        initWithName:@"your_remote_model"];  // The name you assigned in the Firebase console.
MLKCustomRemoteModel *remoteModel =
    [[MLKCustomRemoteModel alloc]
        initWithRemoteModelSource:firebaseModelSource];
```
Then, start the model download task, specifying the conditions under which you want to allow downloading. If the model isn't on the device, or if a newer version of the model is available, the task will asynchronously download the model from Firebase:
Swift
```swift
let downloadConditions = ModelDownloadConditions(
  allowsCellularAccess: true,
  allowsBackgroundDownloading: true
)

let downloadProgress = ModelManager.modelManager().download(
  remoteModel,
  conditions: downloadConditions
)
```
Objective-C
```objective-c
MLKModelDownloadConditions *downloadConditions =
    [[MLKModelDownloadConditions alloc] initWithAllowsCellularAccess:YES
                                         allowsBackgroundDownloading:YES];

NSProgress *downloadProgress =
    [[MLKModelManager modelManager] downloadModel:remoteModel
                                       conditions:downloadConditions];
```
Many apps start the download task in their initialization code, but you can do so at any point before you need to use the model.
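As a sketch of that pattern, you might start the download when an early view controller loads, reusing the `remoteModel` and `downloadConditions` values defined above (the view controller name is a placeholder):

```swift
import UIKit
import MLKit

class DetectionViewController: UIViewController {  // Placeholder name.
  override func viewDidLoad() {
    super.viewDidLoad()
    // Kick off the download early. As described above, the task only fetches
    // from Firebase when the model is missing from the device or a newer
    // version is available.
    let progress = ModelManager.modelManager().download(
      remoteModel,
      conditions: downloadConditions
    )
    // Optionally observe progress, for example to drive a progress bar.
    print("Model download progress: \(progress.fractionCompleted)")
  }
}
```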
2. Configure the object detector
After you configure your model sources, configure the object detector for your use case with a `CustomObjectDetectorOptions` object. You can change the following settings:
| Object detector setting | Values and description |
|---|---|
| Detection mode | `STREAM_MODE` (default) or `SINGLE_IMAGE_MODE`. In `STREAM_MODE` (the default), the detector runs with low latency and assigns tracking IDs to objects so you can track them across frames, but it might produce incomplete results on the first few invocations. Use this mode for processing video streams in real time. In `SINGLE_IMAGE_MODE`, the detector returns a result only after the object's bounding box (and, if classification is enabled, its label) has been determined, so latency is potentially higher and no tracking IDs are assigned. Use this mode for static images or when latency isn't critical. |
| Detect and track multiple objects | `false` (default) or `true`. Whether to detect and track up to five objects or only the most prominent object (the default). |
| Classify objects | `false` (default) or `true`. Whether or not to classify detected objects by using the provided custom classifier model. To use your custom classification model, you must set this to `true`. |
| Classification confidence threshold | Minimum confidence score of detected labels. If not set, any classifier threshold specified by the model's metadata will be used. If the model does not contain any metadata, or the metadata does not specify a classifier threshold, a default threshold of 0.0 will be used. |
| Maximum labels per object | Maximum number of labels per object that the detector will return. If not set, the default value of 10 will be used. |
If you only have a locally-bundled model, just create an object detector from your `LocalModel` object:
Swift
```swift
let options = CustomObjectDetectorOptions(localModel: localModel)
options.detectorMode = .singleImage
options.shouldEnableClassification = true
options.shouldEnableMultipleObjects = true
options.classificationConfidenceThreshold = NSNumber(value: 0.5)
options.maxPerObjectLabelCount = 3
```
Objective-C
```objective-c
MLKCustomObjectDetectorOptions *options =
    [[MLKCustomObjectDetectorOptions alloc] initWithLocalModel:localModel];
options.detectorMode = MLKObjectDetectorModeSingleImage;
options.shouldEnableClassification = YES;
options.shouldEnableMultipleObjects = YES;
options.classificationConfidenceThreshold = @(0.5);
options.maxPerObjectLabelCount = 3;
```
If you have a remotely-hosted model, you will have to check that it has been downloaded before you run it. You can check the status of the model download task using the model manager's `isModelDownloaded(remoteModel:)` method.

Although you only have to confirm this before running the object detector, if you have both a remotely-hosted model and a locally-bundled model, it might make sense to perform this check when instantiating the `ObjectDetector`: create a detector from the remote model if it's been downloaded, and from the local model otherwise.
Swift
```swift
var options: CustomObjectDetectorOptions!
if (ModelManager.modelManager().isModelDownloaded(remoteModel)) {
  options = CustomObjectDetectorOptions(remoteModel: remoteModel)
} else {
  options = CustomObjectDetectorOptions(localModel: localModel)
}
options.detectorMode = .singleImage
options.shouldEnableClassification = true
options.shouldEnableMultipleObjects = true
options.classificationConfidenceThreshold = NSNumber(value: 0.5)
options.maxPerObjectLabelCount = 3
```
Objective-C
```objective-c
MLKCustomObjectDetectorOptions *options;
if ([[MLKModelManager modelManager] isModelDownloaded:remoteModel]) {
  options = [[MLKCustomObjectDetectorOptions alloc]
      initWithRemoteModel:remoteModel];
} else {
  options = [[MLKCustomObjectDetectorOptions alloc]
      initWithLocalModel:localModel];
}
options.detectorMode = MLKObjectDetectorModeSingleImage;
options.shouldEnableClassification = YES;
options.shouldEnableMultipleObjects = YES;
options.classificationConfidenceThreshold = @(0.5);
options.maxPerObjectLabelCount = 3;
```
If you only have a remotely-hosted model, you should disable model-related functionality (for example, gray out or hide part of your UI) until you confirm the model has been downloaded.
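A minimal sketch of that gating, assuming a hypothetical `detectButton` outlet and the `remoteModel` from earlier:

```swift
// Enable the detection UI only once the remote model is on the device.
let isDownloaded = ModelManager.modelManager().isModelDownloaded(remoteModel)
detectButton.isEnabled = isDownloaded  // Hypothetical UI element.
```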
You can get the model download status by attaching observers to the default Notification Center. Be sure to use a weak reference to `self` in the observer block, since downloads can take some time, and the originating object can be freed by the time the download finishes. For example:
Swift
```swift
NotificationCenter.default.addObserver(
    forName: .mlkitModelDownloadDidSucceed,
    object: nil,
    queue: nil
) { [weak self] notification in
  guard let strongSelf = self,
      let userInfo = notification.userInfo,
      let model = userInfo[ModelDownloadUserInfoKey.remoteModel.rawValue]
          as? RemoteModel,
      model.name == "your_remote_model"
  else { return }
  // The model was downloaded and is available on the device
}

NotificationCenter.default.addObserver(
    forName: .mlkitModelDownloadDidFail,
    object: nil,
    queue: nil
) { [weak self] notification in
  guard let strongSelf = self,
      let userInfo = notification.userInfo,
      let model = userInfo[ModelDownloadUserInfoKey.remoteModel.rawValue]
          as? RemoteModel
  else { return }
  let error = userInfo[ModelDownloadUserInfoKey.error.rawValue]
  // ...
}
```
Objective-C
```objective-c
__weak typeof(self) weakSelf = self;

[NSNotificationCenter.defaultCenter
    addObserverForName:MLKModelDownloadDidSucceedNotification
                object:nil
                 queue:nil
            usingBlock:^(NSNotification *_Nonnull note) {
  if (weakSelf == nil || note.userInfo == nil) {
    return;
  }
  __strong typeof(self) strongSelf = weakSelf;
  MLKRemoteModel *model = note.userInfo[MLKModelDownloadUserInfoKeyRemoteModel];
  if ([model.name isEqualToString:@"your_remote_model"]) {
    // The model was downloaded and is available on the device
  }
}];

[NSNotificationCenter.defaultCenter
    addObserverForName:MLKModelDownloadDidFailNotification
                object:nil
                 queue:nil
            usingBlock:^(NSNotification *_Nonnull note) {
  if (weakSelf == nil || note.userInfo == nil) {
    return;
  }
  __strong typeof(self) strongSelf = weakSelf;
  NSError *error = note.userInfo[MLKModelDownloadUserInfoKeyError];
}];
```
The object detection and tracking API is optimized for these two core use cases:
- Live detection and tracking of the most prominent object in the camera viewfinder.
- The detection of multiple objects from a static image.
To configure the API for these use cases:
Swift
```swift
// Live detection and tracking
let options = CustomObjectDetectorOptions(localModel: localModel)
options.shouldEnableClassification = true
options.maxPerObjectLabelCount = 3

// Multiple object detection in static images
let options = CustomObjectDetectorOptions(localModel: localModel)
options.detectorMode = .singleImage
options.shouldEnableMultipleObjects = true
options.shouldEnableClassification = true
options.maxPerObjectLabelCount = 3
```
Objective-C
```objective-c
// Live detection and tracking
MLKCustomObjectDetectorOptions *options =
    [[MLKCustomObjectDetectorOptions alloc] initWithLocalModel:localModel];
options.shouldEnableClassification = YES;
options.maxPerObjectLabelCount = 3;

// Multiple object detection in static images
MLKCustomObjectDetectorOptions *options =
    [[MLKCustomObjectDetectorOptions alloc] initWithLocalModel:localModel];
options.detectorMode = MLKObjectDetectorModeSingleImage;
options.shouldEnableMultipleObjects = YES;
options.shouldEnableClassification = YES;
options.maxPerObjectLabelCount = 3;
```
3. Prepare the input image
Create a `VisionImage` object using a `UIImage` or a `CMSampleBuffer`.

If you use a `UIImage`, follow these steps:
- Create a `VisionImage` object with the `UIImage`. Make sure to specify the correct `.orientation`.

Swift

```swift
// `image` is the input UIImage.
let visionImage = VisionImage(image: image)
visionImage.orientation = image.imageOrientation
```
Objective-C
```objective-c
MLKVisionImage *visionImage = [[MLKVisionImage alloc] initWithImage:image];
visionImage.orientation = image.imageOrientation;
```
If you use a `CMSampleBuffer`, follow these steps:

- Specify the orientation of the image data contained in the `CMSampleBuffer`. To get the image orientation:
Swift
```swift
func imageOrientation(
  deviceOrientation: UIDeviceOrientation,
  cameraPosition: AVCaptureDevice.Position
) -> UIImage.Orientation {
  switch deviceOrientation {
  case .portrait:
    return cameraPosition == .front ? .leftMirrored : .right
  case .landscapeLeft:
    return cameraPosition == .front ? .downMirrored : .up
  case .portraitUpsideDown:
    return cameraPosition == .front ? .rightMirrored : .left
  case .landscapeRight:
    return cameraPosition == .front ? .upMirrored : .down
  case .faceDown, .faceUp, .unknown:
    return .up
  }
}
```
Objective-C
```objective-c
- (UIImageOrientation)
    imageOrientationFromDeviceOrientation:(UIDeviceOrientation)deviceOrientation
                           cameraPosition:(AVCaptureDevicePosition)cameraPosition {
  switch (deviceOrientation) {
    case UIDeviceOrientationPortrait:
      return cameraPosition == AVCaptureDevicePositionFront
                 ? UIImageOrientationLeftMirrored
                 : UIImageOrientationRight;
    case UIDeviceOrientationLandscapeLeft:
      return cameraPosition == AVCaptureDevicePositionFront
                 ? UIImageOrientationDownMirrored
                 : UIImageOrientationUp;
    case UIDeviceOrientationPortraitUpsideDown:
      return cameraPosition == AVCaptureDevicePositionFront
                 ? UIImageOrientationRightMirrored
                 : UIImageOrientationLeft;
    case UIDeviceOrientationLandscapeRight:
      return cameraPosition == AVCaptureDevicePositionFront
                 ? UIImageOrientationUpMirrored
                 : UIImageOrientationDown;
    case UIDeviceOrientationUnknown:
    case UIDeviceOrientationFaceUp:
    case UIDeviceOrientationFaceDown:
      return UIImageOrientationUp;
  }
}
```
- Create a `VisionImage` object using the `CMSampleBuffer` object and orientation:

Swift

```swift
let image = VisionImage(buffer: sampleBuffer)
image.orientation = imageOrientation(
  deviceOrientation: UIDevice.current.orientation,
  cameraPosition: cameraPosition)
```
Objective-C
```objective-c
MLKVisionImage *image = [[MLKVisionImage alloc] initWithBuffer:sampleBuffer];
image.orientation =
    [self imageOrientationFromDeviceOrientation:UIDevice.currentDevice.orientation
                                 cameraPosition:cameraPosition];
```
4. Create and run the object detector
Create a new object detector:
Swift
```swift
let objectDetector = ObjectDetector.objectDetector(options: options)
```
Objective-C
```objective-c
MLKObjectDetector *objectDetector =
    [MLKObjectDetector objectDetectorWithOptions:options];
```
Then, use the detector:
Asynchronously:
Swift
```swift
objectDetector.process(image) { objects, error in
  guard error == nil, let objects = objects, !objects.isEmpty else {
    // Handle the error.
    return
  }
  // Show results.
}
```
Objective-C
```objective-c
[objectDetector
    processImage:image
      completion:^(NSArray<MLKObject *> *_Nullable objects,
                   NSError *_Nullable error) {
        if (objects.count == 0) {
          // Handle the error.
          return;
        }
        // Show results.
      }];
```

Synchronously:
Swift
```swift
var objects: [Object]
do {
  objects = try objectDetector.results(in: image)
} catch let error {
  // Handle the error.
  return
}
// Show results.
```
Objective-C
```objective-c
NSError *error;
NSArray<MLKObject *> *objects = [objectDetector resultsInImage:image
                                                          error:&error];
// Show results or handle the error.
```
5. Get information about labeled objects
If the call to the image processor succeeds, it either passes a list of `Object`s to the completion handler or returns the list, depending on whether you called the asynchronous or synchronous method.

Each `Object` contains the following properties:

| Property | Description |
|---|---|
| `frame` | A `CGRect` indicating the position of the object in the image. |
| `trackingID` | An integer that identifies the object across images, or `nil` in `SINGLE_IMAGE_MODE`. |
| `labels` | An array of labels describing the object. |
| `label.text` | The label's text description. Only returned if the TensorFlow Lite model's metadata contains label descriptions. |
| `label.index` | The label's index among all the labels supported by the classifier. |
| `label.confidence` | The confidence value of the object classification. |

Swift
```swift
// objects contains one item if multiple object detection wasn't enabled.
for object in objects {
  let frame = object.frame
  let trackingID = object.trackingID
  let description = object.labels.enumerated().map { (index, label) in
    "Label \(index): \(label.text), \(label.confidence), \(label.index)"
  }.joined(separator: "\n")
}
```
Objective-C
```objective-c
// The list of detected objects contains one item if multiple object detection
// wasn't enabled.
for (MLKObject *object in objects) {
  CGRect frame = object.frame;
  NSNumber *trackingID = object.trackingID;
  for (MLKObjectLabel *label in object.labels) {
    NSString *labelString =
        [NSString stringWithFormat:@"%@, %f, %lu",
                                   label.text,
                                   label.confidence,
                                   (unsigned long)label.index];
  }
}
```
Ensuring a great user experience
For the best user experience, follow these guidelines in your app:
- Successful object detection depends on the object's visual complexity. In order to be detected, objects with a small number of visual features might need to take up a larger part of the image. You should provide users with guidance on capturing input that works well with the kind of objects you want to detect.
- When you use classification, if you want to detect objects that don't fall cleanly into the supported categories, implement special handling for unknown objects.
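As an example of such handling, the sketch below treats any detection without a sufficiently confident label as "unknown"; the 0.5 cutoff and the UI helpers are illustrative assumptions, not part of the API:

```swift
let minimumConfidence: Float = 0.5  // Assumed app-specific cutoff.
for object in objects {
  // Pick the most confident label, if any.
  let bestLabel = object.labels.max { $0.confidence < $1.confidence }
  if let label = bestLabel, label.confidence >= minimumConfidence {
    showLabel(label.text, in: object.frame)  // Hypothetical UI helper.
  } else {
    showUnknownObjectHint(in: object.frame)  // Hypothetical fallback UI.
  }
}
```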
Also, check out the ML Kit Material Design showcase app and the Material Design Patterns for machine learning-powered features collection.
Improving performance
If you want to use object detection in a real-time application, follow these guidelines to achieve the best frame rates:

When you use streaming mode in a real-time application, don't use multiple object detection, as most devices won't be able to produce adequate frame rates.

- For processing video frames, use the `results(in:)` synchronous API of the detector. Call this method from the `AVCaptureVideoDataOutputSampleBufferDelegate`'s `captureOutput(_:didOutput:from:)` function to synchronously get results from the given video frame (see the delegate sketch after this list). Keep `AVCaptureVideoDataOutput`'s `alwaysDiscardsLateVideoFrames` set to `true` to throttle calls to the detector: if a new video frame becomes available while the detector is running, it will be dropped.
- If you use the output of the detector to overlay graphics on the input image, first get the result from ML Kit, then render the image and overlay in a single step. By doing so, you render to the display surface only once for each processed input frame. See the `updatePreviewOverlayViewWithLastFrame` method in the ML Kit quickstart sample for an example.
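Putting the first guideline together with the input-preparation step, a sketch of the delegate callback might look like this; `CameraViewController` and the `updateOverlay(with:)` helper are placeholders, while `objectDetector` and the `imageOrientation` function come from the earlier steps:

```swift
import AVFoundation
import MLKit
import UIKit

extension CameraViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
  func captureOutput(_ output: AVCaptureOutput,
                     didOutput sampleBuffer: CMSampleBuffer,
                     from connection: AVCaptureConnection) {
    let image = VisionImage(buffer: sampleBuffer)
    image.orientation = imageOrientation(
      deviceOrientation: UIDevice.current.orientation,
      cameraPosition: .back)  // Assumes the back-facing camera.

    // Synchronous API; with alwaysDiscardsLateVideoFrames = true, frames
    // arriving while this call runs are dropped, throttling the detector.
    guard let objects = try? objectDetector.results(in: image) else { return }

    DispatchQueue.main.async {
      // Render the frame and the detection overlay in a single pass.
      self.updateOverlay(with: objects)  // Hypothetical drawing helper.
    }
  }
}
```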
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2024-11-28 UTC.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-11-28 UTC."],[],[]]
-