Scan barcodes with ML Kit on Android

You can use ML Kit to recognize and decode barcodes.

| Feature | Unbundled | Bundled |
| --- | --- | --- |
| Implementation | Model is dynamically downloaded via Google Play Services. | Model is statically linked to your app at build time. |
| App size | About 200 KB size increase. | About 2.4 MB size increase. |
| Initialization time | Might have to wait for the model to download before first use. | Model is available immediately. |

Before you begin

  1. In your project-level build.gradle file, make sure to include Google's Maven repository in both your buildscript and allprojects sections.

  2. Add the dependencies for the ML Kit Android libraries to your module's app-level gradle file, which is usually app/build.gradle. Choose one of the following dependencies based on your needs:

    For bundling the model with your app:

    dependencies {
      // ...
      // Use this dependency to bundle the model with your app
      implementation 'com.google.mlkit:barcode-scanning:17.3.0'
    }
    

    For using the model in Google Play Services:

    dependencies {
      // ...
      // Use this dependency to use the dynamically downloaded model in Google Play Services
      implementation 'com.google.android.gms:play-services-mlkit-barcode-scanning:18.3.1'
    }
    
  3. If you choose to use the model in Google Play Services, you can configure your app to automatically download the model to the device after your app is installed from the Play Store. To do so, add the following declaration to your app's AndroidManifest.xml file:

    <application ...>
          ...
          <meta-data
              android:name="com.google.mlkit.vision.DEPENDENCIES"
              android:value="barcode" />
          <!-- To use multiple models: android:value="barcode,model2,model3" -->
    </application>
    

    You can also explicitly check model availability and request a download through the Google Play services ModuleInstallClient API, as sketched after this step.

    If you don't enable install-time model downloads or request an explicit download, the model is downloaded the first time you run the scanner. Requests made before the download completes produce no results.
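
    The following Kotlin sketch shows one way to perform the availability check and request the download with the ModuleInstall API from Google Play services. It assumes the unbundled barcode-scanning dependency and an available Context named context; adapt the error handling to your app.

    import com.google.android.gms.common.moduleinstall.ModuleInstall
    import com.google.android.gms.common.moduleinstall.ModuleInstallRequest
    import com.google.mlkit.vision.barcode.BarcodeScanning

    // Sketch: check whether the barcode-scanning module is on the device and
    // request installation if it is missing.
    val optionalModuleApi = BarcodeScanning.getClient()
    val moduleInstallClient = ModuleInstall.getClient(context)

    moduleInstallClient
        .areModulesAvailable(optionalModuleApi)
        .addOnSuccessListener { response ->
            if (!response.areModulesAvailable()) {
                val request = ModuleInstallRequest.newBuilder()
                    .addApi(optionalModuleApi)
                    .build()
                moduleInstallClient.installModules(request)
                    .addOnFailureListener {
                        // Installation could not be requested; the model will still be
                        // downloaded on first use of the scanner.
                    }
            }
        }
        .addOnFailureListener {
            // Could not query module availability.
        }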

Input image guidelines

  • For ML Kit to accurately read barcodes, input images must contain barcodes that are represented by sufficient pixel data.

    The specific pixel data requirements are dependent on both the type of barcode and the amount of data that's encoded in it, since many barcodes support a variable size payload. In general, the smallest meaningful unit of the barcode should be at least 2 pixels wide, and for 2-dimensional codes, 2 pixels tall.

    For example, EAN-13 barcodes are made up of bars and spaces that are 1, 2, 3, or 4 units wide, so an EAN-13 barcode image ideally has bars and spaces that are at least 2, 4, 6, and 8 pixels wide. Because an EAN-13 barcode is 95 units wide in total, the barcode should be at least 190 pixels wide.

    Denser formats, such as PDF417, need greater pixel dimensions for ML Kit to reliably read them. For example, a PDF417 code can have up to 34 17-unit wide "words" in a single row, which would ideally be at least 1156 pixels wide.

  • Poor image focus can impact scanning accuracy. If your app isn't getting acceptable results, ask the user to recapture the image.

  • For typical applications, it's recommended to provide a higher-resolution image, such as 1280x720 or 1920x1080, which makes barcodes scannable from a larger distance away from the camera.

    However, in applications where latency is critical, you can improve performance by capturing images at a lower resolution, as long as the barcode makes up the majority of the input image. Also see Tips to improve real-time performance and the CameraX configuration sketch after this list.
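
The code below is a minimal CameraX sketch of requesting a 1280x720 analysis stream instead of the camera's native resolution. The use of CameraX and of setTargetResolution() is an assumption about your camera stack (newer CameraX releases express the same intent through ResolutionSelector); adjust it to whatever library you use.

Kotlin

import android.util.Size
import androidx.camera.core.ImageAnalysis

// Request 1280x720 frames for analysis and keep only the latest frame, which is
// usually enough for barcode scanning and avoids very large native-resolution images.
val imageAnalysis = ImageAnalysis.Builder()
    .setTargetResolution(Size(1280, 720))
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()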

1. Configure the barcode scanner

If you know which barcode formats you expect to read, you can improve the speed of the barcode detector by configuring it to only detect those formats.

For example, to detect only Aztec codes and QR codes, build a BarcodeScannerOptions object as in the following example:

Kotlin

val options = BarcodeScannerOptions.Builder()
        .setBarcodeFormats(
                Barcode.FORMAT_QR_CODE,
                Barcode.FORMAT_AZTEC)
        .build()

Java

BarcodeScannerOptions options =
        new BarcodeScannerOptions.Builder()
        .setBarcodeFormats(
                Barcode.FORMAT_QR_CODE,
                Barcode.FORMAT_AZTEC)
        .build();

The following formats are supported:

  • Code 128 (FORMAT_CODE_128)
  • Code 39 (FORMAT_CODE_39)
  • Code 93 (FORMAT_CODE_93)
  • Codabar (FORMAT_CODABAR)
  • EAN-13 (FORMAT_EAN_13)
  • EAN-8 (FORMAT_EAN_8)
  • ITF (FORMAT_ITF)
  • UPC-A (FORMAT_UPC_A)
  • UPC-E (FORMAT_UPC_E)
  • QR Code (FORMAT_QR_CODE)
  • PDF417 (FORMAT_PDF417)
  • Aztec (FORMAT_AZTEC)
  • Data Matrix (FORMAT_DATA_MATRIX)

Starting from bundled library 17.1.0 and unbundled library 18.2.0, you can also call enableAllPotentialBarcodes() to return all potential barcodes even if they cannot be decoded. This can be used to facilitate further detection, for example by zooming in the camera to get a clearer image of any barcode in the returned bounding box.

Kotlin

val options = BarcodeScannerOptions.Builder()
        .setBarcodeFormats(...)
        .enableAllPotentialBarcodes() // Optional
        .build()

Java

BarcodeScannerOptions options =
        new BarcodeScannerOptions.Builder()
        .setBarcodeFormats(...)
        .enableAllPotentialBarcodes() // Optional
        .build();

Starting from bundled library 17.2.0 and unbundled library 18.3.0, an auto-zoom feature is available to further improve the scanning experience. When auto-zoom is enabled, the library notifies the app when all barcodes within the view are too distant to decode and suggests a zoom ratio. The app can then set the camera's zoom to the recommended ratio, which improves readability and the success rate of scanning.

To enable auto-zoom and customize the behavior, call setZoomSuggestionOptions() with your own ZoomCallback handler and the desired maximum zoom ratio, as demonstrated in the code below.

Kotlin

val options = BarcodeScannerOptions.Builder()
        .setBarcodeFormats(...)
        .setZoomSuggestionOptions(
            ZoomSuggestionOptions.Builder(zoomCallback)
                .setMaxSupportedZoomRatio(maxSupportedZoomRatio)
                .build()) // Optional
        .build()

Java

BarcodeScannerOptions options =
        new BarcodeScannerOptions.Builder()
        .setBarcodeFormats(...)
        .setZoomSuggestionOptions(
            new ZoomSuggestionOptions.Builder(zoomCallback)
                .setMaxSupportedZoomRatio(maxSupportedZoomRatio)
                .build()) // Optional
        .build();

You must provide a zoomCallback to handle whenever the library suggests that a zoom should be performed. This callback is always called on the main thread.

The following code snippet shows an example of defining a simple callback.

Kotlin

fun setZoom(zoomRatio: Float): Boolean {
    if (camera.isClosed()) return false
    camera.getCameraControl().setZoomRatio(zoomRatio)
    return true
}

Java

boolean setZoom(float zoomRatio) {
    if (camera.isClosed()) {
        return false;
    }
    camera.getCameraControl().setZoomRatio(zoomRatio);
    return true;
}

maxSupportedZoomRatio depends on the camera hardware, and different camera libraries expose it in different ways (see the Javadoc of the setter method). If it is not provided, the library might suggest an unbounded zoom ratio that the device cannot support. Refer to the setMaxSupportedZoomRatio() method documentation to see how to get the maximum supported zoom ratio with different camera libraries.
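
For example, with CameraX the value can be read from the camera's ZoomState. This is a sketch only; it assumes a bound androidx.camera.core.Camera instance named camera and falls back to 1.0 if the zoom state is not yet available.

Kotlin

// Read the maximum supported zoom ratio from CameraX.
// `camera` is assumed to come from ProcessCameraProvider.bindToLifecycle(...).
val maxSupportedZoomRatio: Float =
    camera.cameraInfo.zoomState.value?.maxZoomRatio ?: 1.0f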

When auto-zooming is enabled and no barcodes are successfully decoded within the view, BarcodeScanner triggers your zoomCallback with the requested zoomRatio. If the callback correctly adjusts the camera to this zoomRatio, it is highly probable that the most centered potential barcode will be decoded and returned.

A barcode may remain undecodable even after a successful zoom-in. In such cases, BarcodeScanner may either invoke the callback for another round of zoom-in until the maxSupportedZoomRatio is reached, or provide an empty list (or a list containing potential barcodes that were not decoded, if enableAllPotentialBarcodes() was called) to the OnSuccessListener (which will be defined in step 4. Process the image).
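
As an illustration only, a success listener along the following lines can separate decoded barcodes from potential ones. It assumes that potential barcodes that could not be decoded are returned with a null rawValue; check the Barcode API reference for the exact contract.

Kotlin

scanner.process(image)
    .addOnSuccessListener { barcodes ->
        for (barcode in barcodes) {
            val value = barcode.rawValue
            if (value != null) {
                // Decoded barcode: use its value.
            } else {
                // Potential barcode that was not decoded (returned only when
                // enableAllPotentialBarcodes() is set). Its boundingBox can guide,
                // for example, a manual zoom toward that region.
                val box = barcode.boundingBox
            }
        }
    }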

2. Prepare the input image

To recognize barcodes in an image, create an InputImage object from either a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the InputImage object to the BarcodeScanner's process method.

You can create an InputImage object from different sources; each is explained below.

Using a media.Image

To create an InputImage object from a media.Image object, such as when you capture an image from a device's camera, pass the media.Image object and the image's rotation to InputImage.fromMediaImage().

If you use the CameraX library, the OnImageCapturedListener and ImageAnalysis.Analyzer classes calculate the rotation value for you.

Kotlin

private class YourImageAnalyzer : ImageAnalysis.Analyzer {

    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image
        if (mediaImage != null) {
            val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
            // Pass image to an ML Kit Vision API
            // ...
        }
    }
}

Java

private class YourAnalyzer implements ImageAnalysis.Analyzer {

    @Override
    public void analyze(ImageProxy imageProxy) {
        Image mediaImage = imageProxy.getImage();
        if (mediaImage != null) {
          InputImage image =
                InputImage.fromMediaImage(mediaImage, imageProxy.getImageInfo().getRotationDegrees());
          // Pass image to an ML Kit Vision API
          // ...
        }
    }
}

If you don't use a camera library that gives you the image's rotation degree, you can calculate it from the device's rotation degree and the orientation of the camera sensor in the device:

Kotlin

private val ORIENTATIONS = SparseIntArray()

init {
    ORIENTATIONS.append(Surface.ROTATION_0, 0)
    ORIENTATIONS.append(Surface.ROTATION_90, 90)
    ORIENTATIONS.append(Surface.ROTATION_180, 180)
    ORIENTATIONS.append(Surface.ROTATION_270, 270)
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
@Throws(CameraAccessException::class)
private fun getRotationCompensation(cameraId: String, activity: Activity, isFrontFacing: Boolean): Int {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    val deviceRotation = activity.windowManager.defaultDisplay.rotation
    var rotationCompensation = ORIENTATIONS.get(deviceRotation)

    // Get the device's sensor orientation.
    val cameraManager = activity.getSystemService(CAMERA_SERVICE) as CameraManager
    val sensorOrientation = cameraManager
            .getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.SENSOR_ORIENTATION)!!

    if (isFrontFacing) {
        rotationCompensation = (sensorOrientation + rotationCompensation) % 360
    } else { // back-facing
        rotationCompensation = (sensorOrientation - rotationCompensation + 360) % 360
    }
    return rotationCompensation
}

Java

private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
static {
    ORIENTATIONS.append(Surface.ROTATION_0, 0);
    ORIENTATIONS.append(Surface.ROTATION_90, 90);
    ORIENTATIONS.append(Surface.ROTATION_180, 180);
    ORIENTATIONS.append(Surface.ROTATION_270, 270);
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
private int getRotationCompensation(String cameraId, Activity activity, boolean isFrontFacing)
        throws CameraAccessException {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    int deviceRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
    int rotationCompensation = ORIENTATIONS.get(deviceRotation);

    // Get the device's sensor orientation.
    CameraManager cameraManager = (CameraManager) activity.getSystemService(CAMERA_SERVICE);
    int sensorOrientation = cameraManager
            .getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.SENSOR_ORIENTATION);

    if (isFrontFacing) {
        rotationCompensation = (sensorOrientation + rotationCompensation) % 360;
    } else { // back-facing
        rotationCompensation = (sensorOrientation - rotationCompensation + 360) % 360;
    }
    return rotationCompensation;
}

Then, pass the media.Image object and the rotation degree value to InputImage.fromMediaImage():

Kotlin

val image = InputImage.fromMediaImage(mediaImage, rotation)

Java

InputImage image = InputImage.fromMediaImage(mediaImage, rotation);

Using a file URI

To create an InputImage object from a file URI, pass the app context and file URI to InputImage.fromFilePath(). This is useful when you use an ACTION_GET_CONTENT intent to prompt the user to select an image from their gallery app.

Kotlin

val image: InputImage
try {
    image = InputImage.fromFilePath(context, uri)
} catch (e: IOException) {
    e.printStackTrace()
}

Java

InputImage image;
try {
    image = InputImage.fromFilePath(context, uri);
} catch (IOException e) {
    e.printStackTrace();
}

Using a ByteBuffer or ByteArray

To create an InputImage object from a ByteBuffer or a ByteArray, first calculate the image rotation degree as previously described for media.Image input. Then, create the InputImage object with the buffer or array, together with the image's width, height, color encoding format, and rotation degree:

Kotlin

val image = InputImage.fromByteBuffer(
        byteBuffer,
        /* image width */ 480,
        /* image height */ 360,
        rotationDegrees,
        InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
)
// Or:
val image = InputImage.fromByteArray(
        byteArray,
        /* image width */ 480,
        /* image height */ 360,
        rotationDegrees,
        InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
)

Java

InputImage image = InputImage.fromByteBuffer(byteBuffer,
        /* image width */ 480,
        /* image height */ 360,
        rotationDegrees,
        InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
);
// Or:
InputImage image = InputImage.fromByteArray(
        byteArray,
        /* image width */ 480,
        /* image height */ 360,
        rotationDegrees,
        InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
);

Using a Bitmap

To create an InputImage object from a Bitmap object, make the following declaration:

Kotlin

val image = InputImage.fromBitmap(bitmap, 0)

Java

InputImage image = InputImage.fromBitmap(bitmap, rotationDegree);

The image is represented by a Bitmap object together with its rotation in degrees.

3. Get an instance of BarcodeScanner

Kotlin

val scanner = BarcodeScanning.getClient()
// Or, to specify the formats to recognize:
// val scanner = BarcodeScanning.getClient(options)

Java

BarcodeScanner scanner = BarcodeScanning.getClient();
// Or, to specify the formats to recognize:
// BarcodeScanner scanner = BarcodeScanning.getClient(options);

4. Process the image

Pass the image to the process method:

Kotlin

val result = scanner.process(image)
        .addOnSuccessListener { barcodes ->
            // Task completed successfully
            // ...
        }
        .addOnFailureListener {
            // Task failed with an exception
            // ...
        }

Java

Task<List<Barcode>> result = scanner.process(image)
        .addOnSuccessListener(new OnSuccessListener<List<Barcode>>() {
            @Override
            public void onSuccess(List<Barcode> barcodes) {
                // Task completed successfully
                // ...
            }
        })
        .addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(@NonNull Exception e) {
                // Task failed with an exception
                // ...
            }
        });

5. Get information from barcodes

If the barcode recognition operation succeeds, a list of Barcode objects is passed to the success listener. Each Barcode object represents a barcode that was detected in the image. For each barcode, you can get its bounding coordinates in the input image, as well as the raw data encoded by the barcode. Also, if the barcode scanner was able to determine the type of data encoded by the barcode, you can get an object containing parsed data.

For example:

Kotlin

for (barcode in barcodes) {
    val bounds = barcode.boundingBox
    val corners = barcode.cornerPoints

    val rawValue = barcode.rawValue

    val valueType = barcode.valueType
    // See API reference for complete list of supported types
    when (valueType) {
        Barcode.TYPE_WIFI -> {
            val ssid = barcode.wifi!!.ssid
            val password = barcode.wifi!!.password
            val type = barcode.wifi!!.encryptionType
        }
        Barcode.TYPE_URL -> {
            val title = barcode.url!!.title
            val url = barcode.url!!.url
        }
    }
}

Java

for (Barcode barcode: barcodes) {
    Rect bounds = barcode.getBoundingBox();
    Point[] corners = barcode.getCornerPoints();

    String rawValue = barcode.getRawValue();

    int valueType = barcode.getValueType();
    // See API reference for complete list of supported types
    switch (valueType) {
        case Barcode.TYPE_WIFI:
            String ssid = barcode.getWifi().getSsid();
            String password = barcode.getWifi().getPassword();
            int type = barcode.getWifi().getEncryptionType();
            break;
        case Barcode.TYPE_URL:
            String title = barcode.getUrl().getTitle();
            String url = barcode.getUrl().getUrl();
            break;
    }
}

Tips to improve real-time performance

If you want to scan barcodes in a real-time application, follow these guidelines to achieve the best framerates:

  • Don't capture input at the camera's native resolution. On some devices, capturing input at the native resolution produces extremely large (10+ megapixels) images, which results in very poor latency with no benefit to accuracy. Instead, only request the size from the camera that's required for barcode detection, which is usually no more than 2 megapixels.

    If scanning speed is important, you can further lower the image capture resolution. However, bear in mind the minimum barcode size requirements outlined above.

    If you are trying to recognize barcodes from a sequence of streaming video frames, the recognizer might produce different results from frame to frame. Wait until you get a consecutive series of the same value before treating it as a reliable result; a debouncing sketch follows this list.

    The checksum digit is not supported for ITF and CODE-39.

  • If you use the Camera or camera2 API, throttle calls to the detector. If a new video frame becomes available while the detector is running, drop the frame. See the VisionProcessorBase class in the quickstart sample app for an example.
  • If you use the CameraX API, make sure that the backpressure strategy is set to its default value, ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST. This guarantees that only one image is delivered for analysis at a time. If more images are produced while the analyzer is busy, they are dropped automatically and not queued for delivery. Once the image being analyzed is closed by calling ImageProxy.close(), the next latest image is delivered.
  • If you use the output of the detector to overlay graphics on the input image, first get the result from ML Kit, then render the image and overlay in a single step. This renders to the display surface only once for each input frame. See the CameraSourcePreview and GraphicOverlay classes in the quickstart sample app for an example.
  • If you use the Camera2 API, capture images in ImageFormat.YUV_420_888 format. If you use the older Camera API, capture images in ImageFormat.NV21 format.
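
The following Kotlin sketch ties several of these tips together: a CameraX analyzer that closes each ImageProxy when scanning completes (so the next latest frame can be delivered) and accepts a value only after it has been seen on a few consecutive frames. The DebouncingBarcodeAnalyzer class, the onBarcodeConfirmed callback, and the REQUIRED_CONSECUTIVE_FRAMES threshold are illustrative assumptions, not part of ML Kit.

Kotlin

import androidx.camera.core.ExperimentalGetImage
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import com.google.mlkit.vision.barcode.BarcodeScanner
import com.google.mlkit.vision.common.InputImage

// Sketch only: debounce streaming results by requiring the same raw value on
// several consecutive frames before accepting it.
class DebouncingBarcodeAnalyzer(
    private val scanner: BarcodeScanner,
    private val onBarcodeConfirmed: (String) -> Unit // hypothetical app callback
) : ImageAnalysis.Analyzer {

    private var lastValue: String? = null
    private var consecutiveCount = 0

    @androidx.annotation.OptIn(ExperimentalGetImage::class)
    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image
        if (mediaImage == null) {
            imageProxy.close()
            return
        }
        val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
        scanner.process(image)
            .addOnSuccessListener { barcodes ->
                val value = barcodes.firstOrNull()?.rawValue
                if (value != null && value == lastValue) {
                    consecutiveCount++
                    if (consecutiveCount == REQUIRED_CONSECUTIVE_FRAMES) {
                        onBarcodeConfirmed(value)
                    }
                } else {
                    lastValue = value
                    consecutiveCount = if (value != null) 1 else 0
                }
            }
            .addOnCompleteListener {
                // Close the proxy so CameraX can deliver the next latest frame.
                imageProxy.close()
            }
    }

    companion object {
        // Illustrative threshold; tune for your use case.
        private const val REQUIRED_CONSECUTIVE_FRAMES = 3
    }
}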