Migrating for Android

Update Gradle imports

The new SDK only requires one dependency for each ML Kit API. You don't need to specify common libraries like firebase-ml-vision or firebase-ml-natural-language. ML Kit uses the com.google.android.gms namespace for libraries that depend on Google Play Services.

Vision APIs

Bundled models are delivered as part of your application; thin models must be downloaded. Some APIs are available in both bundled and thin form, others in only one form:

API                           | Bundled  | Thin
Text recognition              | x (beta) | x
Face detection                | x        | x
Barcode scanning              | x        | x
Image labeling                | x        | x
Object detection and tracking | x        | -

Update the dependencies for the ML Kit Android libraries in your module (app-level) Gradle file (usually app/build.gradle) according to the following tables:

Bundled models

Barcode scanning
  Old artifacts: com.google.firebase:firebase-ml-vision:24.0.1
                 com.google.firebase:firebase-ml-vision-barcode-model:16.0.1
  New artifact:  com.google.mlkit:barcode-scanning:17.3.0

Face contour
  Old artifacts: com.google.firebase:firebase-ml-vision:24.0.1
                 com.google.firebase:firebase-ml-vision-face-model:19.0.0
  New artifact:  com.google.mlkit:face-detection:16.1.7

Image labeling
  Old artifacts: com.google.firebase:firebase-ml-vision:24.0.1
                 com.google.firebase:firebase-ml-vision-image-label-model:19.0.0
  New artifact:  com.google.mlkit:image-labeling:17.0.9

Object detection
  Old artifacts: com.google.firebase:firebase-ml-vision:24.0.1
                 com.google.firebase:firebase-ml-vision-object-detection-model:19.0.3
  New artifact:  com.google.mlkit:object-detection:17.0.2
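
For example, for bundled barcode scanning the dependencies block in app/build.gradle would change roughly as follows. This is only a sketch using the versions from the table above; newer versions may be available.

Before

dependencies {
  // ...
  implementation 'com.google.firebase:firebase-ml-vision:24.0.1'
  implementation 'com.google.firebase:firebase-ml-vision-barcode-model:16.0.1'
}

After

dependencies {
  // ...
  implementation 'com.google.mlkit:barcode-scanning:17.3.0'
}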

Thin models

Barcode scanning
  Old artifact: com.google.firebase:firebase-ml-vision:24.0.1
  New artifact: com.google.android.gms:play-services-mlkit-barcode-scanning:18.3.1

Face detection
  Old artifact: com.google.firebase:firebase-ml-vision:24.0.1
  New artifact: com.google.android.gms:play-services-mlkit-face-detection:17.1.0

Text recognition
  Old artifact: com.google.firebase:firebase-ml-vision:24.0.1
  New artifact: com.google.android.gms:play-services-mlkit-text-recognition:19.0.1

AutoML Vision Edge

AutoML without downloading
  Old artifacts: com.google.firebase:firebase-ml-vision:24.0.1
                 com.google.firebase:firebase-ml-vision-automl:18.0.3
  New artifact:  com.google.mlkit:image-labeling-custom:17.0.3

AutoML with downloading
  Old artifacts: com.google.firebase:firebase-ml-vision:24.0.1
                 com.google.firebase:firebase-ml-vision-automl:18.0.3
  New artifacts: com.google.mlkit:image-labeling-custom:17.0.3
                 com.google.mlkit:linkfirebase:17.0.0

Natural Language APIs

Bundled models are delivered as part of your application. Thin models must be downloaded:

API         | Bundled | Thin
Language ID | x       | x
Smart reply | x       | x (beta)

Update the dependencies for the ML Kit Android libraries in your module (app-level) Gradle file (usually app/build.gradle) according to the following tables:

Bundled models

Language ID
  Old artifacts: com.google.firebase:firebase-ml-natural-language:22.0.0
                 com.google.firebase:firebase-ml-natural-language-language-id-model:20.0.7
  New artifact:  com.google.mlkit:language-id:17.0.6

Smart reply
  Old artifacts: com.google.firebase:firebase-ml-natural-language:22.0.0
                 com.google.firebase:firebase-ml-natural-language-smart-reply-model:20.0.7
  New artifact:  com.google.mlkit:smart-reply:17.0.4

Thin models

Language ID
  Old artifacts: com.google.firebase:firebase-ml-natural-language:22.0.0
                 com.google.firebase:firebase-ml-natural-language-language-id-model:20.0.7
  New artifact:  com.google.android.gms:play-services-mlkit-language-id:17.0.0

Smart reply
  Old artifacts: com.google.firebase:firebase-ml-natural-language:22.0.0
                 com.google.firebase:firebase-ml-natural-language-smart-reply-model:20.0.7
  New artifact:  com.google.android.gms:play-services-mlkit-smart-reply:16.0.0-beta1

Update class names

If your class appears in this table, make the indicated change:

Old class: com.google.firebase.ml.common.FirebaseMLException
New class: com.google.mlkit.common.MlKitException

Old class: com.google.firebase.ml.vision.common.FirebaseVisionImage
New class: com.google.mlkit.vision.common.InputImage

Old class: com.google.firebase.ml.vision.barcode.FirebaseVisionBarcodeDetector
New class: com.google.mlkit.vision.barcode.BarcodeScanner

Old class: com.google.firebase.ml.vision.label.FirebaseVisionImageLabeler
New class: com.google.mlkit.vision.label.ImageLabeler

Old class: com.google.firebase.ml.vision.automl.FirebaseAutoMLLocalModel
New class: com.google.mlkit.common.model.LocalModel

Old class: com.google.firebase.ml.vision.automl.FirebaseAutoMLRemoteModel
New class: com.google.mlkit.common.model.CustomRemoteModel

Old class: com.google.firebase.ml.vision.label.FirebaseVisionOnDeviceImageLabelerOptions
New class: com.google.mlkit.vision.label.defaults.ImageLabelerOptions

Old class: com.google.firebase.ml.vision.label.FirebaseVisionImageLabel
New class: com.google.mlkit.vision.label.ImageLabel

Old class: com.google.firebase.ml.vision.label.FirebaseVisionOnDeviceAutoMLImageLabelerOptions
New class: com.google.mlkit.vision.label.custom.CustomImageLabelerOptions

Old class: com.google.firebase.ml.vision.objects.FirebaseVisionObjectDetectorOptions
New class: com.google.mlkit.vision.objects.defaults.ObjectDetectorOptions

For other classes, follow these rules:

  • Remove the FirebaseVision prefix from the class name.
  • Remove other prefixes that start with Firebase from the class name.

Also, in package names replace the com.google.firebase.ml prefix with com.google.mlkit. For example, com.google.firebase.ml.vision.face.FirebaseVisionFaceDetector becomes com.google.mlkit.vision.face.FaceDetector.

Update method names

There are minimal code changes:

  • Detector/scanner/labeler/translator… instantiation has changed. Each feature now has its own entry point, such as BarcodeScanning, TextRecognition, ImageLabeling, and Translation. Calls to the Firebase service's getInstance() are replaced by calls to the feature entry point's getClient() method.
  • Default instantiation for TextRecognizer has been removed, since we introduced additional libraries for recognizing other scripts like Chinese and Korean. To use default options with the Latin script text recognition model, please declare a dependency on com.google.android.gms:play-services-mlkit-text-recognition and use TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS).
  • Default instantiation for ImageLabeler and ObjectDetector has been removed, since we introduced custom model support for these two features. For example, to use default options with the base model in ImageLabeling, declare a dependency on com.google.mlkit:image-labeling and use ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS).
  • All handles (detector/scanner/labeler/translator…) are closeable. Ensure that the close() method is called when those objects will no longer be used. If you use them in a Fragment or AppCompatActivity, one easy way to do that is to call LifecycleOwner.getLifecycle() on the Fragment or AppCompatActivity and then call Lifecycle.addObserver with the handle, as shown in the examples below.
  • processImage() and detectInImage() in the Vision APIs have been renamed to process() for consistency.
  • The Natural Language APIs now use the term “language tag” (as defined by the BCP 47 standard) instead of “language code”.
  • Getter methods in xxxOptions classes have been removed.
  • The getBitmap() method in the InputImage class (which replaces FirebaseVisionImage) is no longer part of the public interface. Please refer to BitmapUtils.java in the ML Kit quickstart sample to convert various inputs to a bitmap.
  • FirebaseVisionImageMetadata has been removed; instead, pass image metadata such as width, height, rotationDegrees, and format directly into InputImage's construction methods (see the sketch below).
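
For example, here is a minimal Kotlin sketch combining several of the points above. It assumes a dependency on com.google.android.gms:play-services-mlkit-text-recognition and an existing Bitmap named bitmap; the rotation value is only illustrative:

// Pass metadata such as the rotation directly when constructing the InputImage.
val image = InputImage.fromBitmap(bitmap, 0)

// Each feature has its own entry point; getClient() replaces getInstance().
val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)

// process() replaces processImage()/detectInImage().
recognizer.process(image)
    .addOnSuccessListener { result -> println(result.text) }
    .addOnFailureListener { e -> e.printStackTrace() }

// Call recognizer.close() when the recognizer is no longer needed,
// or register it as a lifecycle observer as shown below.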

Here are some examples of old and new Kotlin methods:

Old

// Construct image labeler with base model and default options.
val imageLabeler = FirebaseVision.getInstance().onDeviceImageLabeler

// Construct object detector with base model and default options.
val objectDetector = FirebaseVision.getInstance().onDeviceObjectDetector

// Construct face detector with given options
val faceDetector = FirebaseVision.getInstance().getVisionFaceDetector(options)

// Construct image labeler with local AutoML model
val localModel =
    FirebaseAutoMLLocalModel.Builder()
      .setAssetFilePath("automl/manifest.json")
      .build()
val autoMLImageLabeler =
    FirebaseVision.getInstance()
      .getOnDeviceAutoMLImageLabeler(
          FirebaseVisionOnDeviceAutoMLImageLabelerOptions.Builder(localModel)
            .setConfidenceThreshold(0.3F)
            .build()
    )

New

// Construct image labeler with base model and default options.
val imageLabeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)
// Optional: add life cycle observer
lifecycle.addObserver(imageLabeler)

// Construct object detector with base model and default options.
val objectDetector = ObjectDetection.getClient(ObjectDetectorOptions.DEFAULT_OPTIONS)

// Construct face detector with given options
val faceDetector = FaceDetection.getClient(options)

// Construct image labeler with local AutoML model
val localModel =
  LocalModel.Builder()
    .setAssetManifestFilePath("automl/manifest.json")
    .build()
val autoMLImageLabeler =
  ImageLabeling.getClient(
    CustomImageLabelerOptions.Builder(localModel)
      .setConfidenceThreshold(0.3F)
      .build())

Here are some examples of old and new Java methods:

Old

// Construct image labeler with base model and default options.
FirebaseVisionImageLabeler imageLabeler =
     FirebaseVision.getInstance().getOnDeviceImageLabeler();

// Construct object detector with base model and default options.
FirebaseVisionObjectDetector objectDetector =
     FirebaseVision.getInstance().getOnDeviceObjectDetector();

// Construct face detector with given options
FirebaseVisionFaceDetector faceDetector =
     FirebaseVision.getInstance().getVisionFaceDetector(options);

// Construct image labeler with local AutoML model
FirebaseAutoMLLocalModel localModel =
    new FirebaseAutoMLLocalModel.Builder()
      .setAssetFilePath("automl/manifest.json")
      .build();
FirebaseVisionImageLabeler autoMLImageLabeler =
    FirebaseVision.getInstance()
      .getOnDeviceAutoMLImageLabeler(
          new FirebaseVisionOnDeviceAutoMLImageLabelerOptions.Builder(localModel)
            .setConfidenceThreshold(0.3F)
            .build());

New

// Construct image labeler with base model and default options.
ImageLabeler imageLabeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS);
// Optional: add life cycle observer
getLifecycle().addObserver(imageLabeler);

// Construct object detector with base model and default options.
ObjectDetector objectDetector = ObjectDetection.getClient(ObjectDetectorOptions.DEFAULT_OPTIONS);

// Construct face detector with given options
FaceDetector faceDetector = FaceDetection.getClient(options);

// Construct image labeler with local AutoML model
LocalModel localModel =
  new LocalModel.Builder()
    .setAssetManifestFilePath("automl/manifest.json")
    .build();
ImageLabeler autoMLImageLabeler =
  ImageLabeling.getClient(
    new CustomImageLabelerOptions.Builder(localModel)
      .setConfidenceThreshold(0.3F)
      .build());

API-specific changes

Barcode Scanning

For the Barcode Scanning API, there are now two ways the models can be delivered:

  • Through Google Play Services a.k.a. “thin” (recommended) - this reduces the app size and the model is shared between applications. However, developers will need to ensure that the model is downloaded before using it for the first time.
  • With your app’s APK a.k.a. “bundled” - this increases the app size but it means the model is immediately usable.

The two implementations are slightly different, with the “bundled” version having a number of improvements over the “thin” version. Details on these differences can be found in the Barcode Scanning API guidelines.
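
One possible way to make sure the "thin" model is downloaded before first use is the ModuleInstall API from play-services-base. The following Kotlin sketch is only one option; it assumes a recent play-services-base version and an available context:

val scanner = BarcodeScanning.getClient()

// Ask Google Play services to install the barcode module if it is not present yet.
ModuleInstall.getClient(context)
    .installModules(
        ModuleInstallRequest.newBuilder()
            .addApi(scanner)
            .build())
    .addOnSuccessListener { /* model available: safe to call scanner.process(...) */ }
    .addOnFailureListener { /* handle the installation error */ }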

Face Detection

For the Face Detection API, there are two ways the models can be delivered:

  • Through Google Play Services a.k.a. “thin” (recommended) - this reduces the app size and the model is shared between applications. However, developers will need to ensure that the model is downloaded before using it for the first time.
  • With your app’s APK a.k.a. “bundled” - this increases the app download size but it means the model is immediately usable.

The behavior of the two implementations is the same.
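
The options value passed to FaceDetection.getClient(options) in the earlier examples can be built with FaceDetectorOptions. A minimal Kotlin sketch follows; the modes chosen here are only illustrative:

// Build detector options; pick the performance and contour modes your app needs.
val options = FaceDetectorOptions.Builder()
    .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_ACCURATE)
    .setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
    .build()
val faceDetector = FaceDetection.getClient(options)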

Translation

  • TranslateLanguage now uses readable names for its constants (e.g. ENGLISH) instead of language tags (EN). They are also now @StringDef, instead of @IntDef, and the value of the constant is the matching BCP 47 language tag.

  • If your app uses the “device idle” download condition option, be aware that this option has been removed and cannot be used anymore. You can still use the “device charging” option. If you want more complex behavior, you can delay calling RemoteModelManager.download behind your own logic.
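
For example, here is a minimal Kotlin sketch that requests a translation model download under the "device charging" condition; the English-Spanish pair is only illustrative:

val translator = Translation.getClient(
    TranslatorOptions.Builder()
        .setSourceLanguage(TranslateLanguage.ENGLISH)
        .setTargetLanguage(TranslateLanguage.SPANISH)
        .build())

// The "device idle" condition is no longer available; "device charging" still is.
val conditions = DownloadConditions.Builder()
    .requireCharging()
    .build()

translator.downloadModelIfNeeded(conditions)
    .addOnSuccessListener { /* the model is ready to use */ }
    .addOnFailureListener { /* handle the download error */ }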

AutoML Image Labeling

If your app uses the “device idle” download condition option, be aware that this option has been removed and cannot be used anymore. You can still use the “device charging” option.

If you want more complex behavior, you can delay calling RemoteModelManager.download behind your own logic.
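
For example, if the AutoML model is hosted with Firebase, downloading it under the "device charging" condition might look like the following Kotlin sketch. It assumes the com.google.mlkit:linkfirebase dependency; "your_model_name" is a placeholder for your hosted model's name:

// Reference the Firebase-hosted model by name.
val remoteModel = CustomRemoteModel.Builder(
        FirebaseModelSource.Builder("your_model_name").build())
    .build()

val conditions = DownloadConditions.Builder()
    .requireCharging()
    .build()

RemoteModelManager.getInstance().download(remoteModel, conditions)
    .addOnSuccessListener { /* the model is available on the device */ }
    .addOnFailureListener { /* handle the download error */ }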

Object Detection and Tracking

If your app uses object detection with coarse classification, be aware that the new SDK has changed the way it returns the classification category for detected objects.

The classification category is returned as an instance of DetectedObject.Label instead of an integer. All possible categories for the coarse classifier are included in the PredefinedCategory class.

Here is an example of the old and new Kotlin code:

Old

if (detectedObject.classificationCategory == FirebaseVisionObject.CATEGORY_FOOD) {
    ...
}

New

if (detectedObject.labels.isNotEmpty() && detectedObject.labels[0].text == PredefinedCategory.FOOD) {
    ...
}
// or
if (detectedObject.labels.isNotEmpty() && detectedObject.labels[0].index == PredefinedCategory.FOOD_INDEX) {
    ...
}

Here is an example of the old and new Java code:

Old

if (object.getClassificationCategory() == FirebaseVisionObject.CATEGORY_FOOD) {
    ...
}

New

if (!object.getLabels().isEmpty()
    && object.getLabels().get(0).getText().equals(PredefinedCategory.FOOD)) {
    ...
}
// or
if (!object.getLabels().isEmpty()
    && object.getLabels().get(0).getIndex() == PredefinedCategory.FOOD_INDEX) {
    ...
}

The “unknown” category has been removed. When the confidence of an object's classification is low, we just don’t return any label.

Remove Firebase dependencies (Optional)

This step only applies when these conditions are met:

  • Firebase ML Kit is the only Firebase component you use.
  • You only use on-device APIs.
  • You don't use model serving.

If this is the case, you can remove the Firebase dependencies after the migration. Follow these steps:

  • Remove the Firebase configuration file by deleting the google-services.json config file from the module (app-level) directory of your app.
  • Replace the Google Services Gradle plugin in your module (app-level) Gradle file (usually app/build.gradle) with the Strict Version Matcher plugin:

Before

apply plugin: 'com.android.application'
apply plugin: 'com.google.gms.google-services'  // Google Services plugin

android {
  // …
}

After

apply plugin: 'com.android.application'
apply plugin: 'com.google.android.gms.strict-version-matcher-plugin'

android {
  // …
}
  • Replace the Google Services Gradle plugin classpath in your project (root-level) Gradle file (build.gradle) with the one for the Strict Version Matcher plugin:

Before

buildscript {
  dependencies {
    // ...

    classpath 'com.google.gms:google-services:4.3.3'  // Google Services plugin
  }
}

After

buildscript {
  dependencies {
    // ...

    classpath 'com.google.android.gms:strict-version-matcher-plugin:1.2.1'
  }
}

Delete your Firebase app at the Firebase console according to the instructions on the Firebase support site.

Getting Help

If you run into any issues, please check out our Community page where we outline the channels available for getting in touch with us.