By default, ML Kit's APIs make use of Google-trained machine learning models. These models are designed to cover a wide range of applications. However, some use cases require more targeted models. That is why some ML Kit APIs now allow you to replace the default models with custom TensorFlow Lite models.

Both the [Image Labeling](/ml-kit/vision/image-labeling) and the [Object Detection & Tracking](/ml-kit/vision/object-detection) APIs support custom image classification models. They are compatible with a selection of high-quality pre-trained models on TensorFlow Hub, as well as your own custom models trained with TensorFlow, AutoML Vision Edge, or TensorFlow Lite Model Maker.

If you need a custom solution for other domains or use cases, visit the [On-device Machine Learning page](/learn/topics/on-device-ml) for guidance on all of Google's solutions and tools for on-device machine learning.
Benefits of using ML Kit with custom models

The benefits of using a custom image classification model with ML Kit are:
- **Easy-to-use, high-level APIs** - No need to deal with low-level model input/output, handle image pre-/post-processing, or build a processing pipeline.
- **No need to worry about label mapping yourself** - ML Kit extracts the labels from the TFLite model metadata and does the mapping for you.
- **Supports custom models from a wide range of sources** - from pre-trained models published on TensorFlow Hub to new models trained with TensorFlow, AutoML Vision Edge, or TensorFlow Lite Model Maker.
- **Supports models hosted with Firebase** - Reduces APK size by downloading models on demand. Push model updates without republishing your app, and perform easy A/B testing with Firebase Remote Config.
[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["필요한 정보가 없음","missingTheInformationINeed","thumb-down"],["너무 복잡함/단계 수가 너무 많음","tooComplicatedTooManySteps","thumb-down"],["오래됨","outOfDate","thumb-down"],["번역 문제","translationIssue","thumb-down"],["샘플/코드 문제","samplesCodeIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2025-08-29(UTC)"],[[["\u003cp\u003eML Kit allows you to use custom TensorFlow Lite models with the Image Labeling and Object Detection & Tracking APIs for more targeted machine learning solutions.\u003c/p\u003e\n"],["\u003cp\u003eThese custom models offer benefits like easy-to-use APIs, automatic label mapping, support for models from various sources (including TensorFlow Hub and AutoML Vision Edge), and Firebase hosting capabilities.\u003c/p\u003e\n"],["\u003cp\u003ePre-trained models from TensorFlow Hub or custom-trained models built using AutoML Vision Edge, TensorFlow Lite Model Maker, or the TensorFlow Lite Converter are compatible with ML Kit.\u003c/p\u003e\n"],["\u003cp\u003eCustom models must adhere to specific requirements regarding input/output tensors, metadata, and data formats to ensure compatibility with ML Kit.\u003c/p\u003e\n"]]],["ML Kit APIs support replacing default models with custom TensorFlow Lite models for image labeling and object detection. Benefits include easy-to-use APIs, automatic label mapping, and compatibility with various model sources like TensorFlow Hub, AutoML Vision Edge, and TensorFlow Lite Model Maker. You can use pre-trained models from TensorFlow Hub or train your own. Model requirements involve specific tensor formats and metadata, including normalization for FLOAT32 input and label maps for output classes.\n"],null,["By default, ML Kit's APIs make use of Google trained machine learning models.\nThese models are designed to cover a wide range of applications. However, some\nuse cases require models that are more targeted. That is why some ML Kit APIs\nnow allow you to replace the default models with custom TensorFlow Lite models.\n\nBoth the [Image Labeling](/ml-kit/vision/image-labeling) and the\n[Object Detection \\& Tracking](/ml-kit/vision/object-detection) API\noffer support for custom image classification models. They are compatible with a\nselection of high-quality pre-trained models on TensorFlow Hub or your own\ncustom model trained with TensorFlow, AutoML Vision Edge or TensorFlow Lite\nModel Maker.\n\nIf you need a custom solution for other domains or use-cases, visit the\n[On-device Machine Learning page](/learn/topics/on-device-ml) for guidance on\nall of Google's solutions and tools for on-device machine learning.\n\nBenefits of using ML Kit with custom models\n\nThe benefits for using a custom image classification model with ML Kit are:\n\n- **Easy-to-use high level APIs** - No need to deal with low-level model input/output, handle image pre-/post-processing or building a processing pipeline.\n- **No need to worry about label mapping yourself**, ML Kit extracts the labels from TFLite model metadata and does the mapping for you.\n- **Supports custom models from a wide range of sources**, from pre-trained models published on TensorFlow Hub to new models trained with TensorFlow, AutoML Vision Edge or TensorFlow Lite Model Maker.\n- **Supports models hosted with Firebase**. Reduces APK size by downloading models on demand. 
Use a pre-trained image classification model

You can use pre-trained TensorFlow Lite models, provided they meet a [set of criteria](#model-compatibility). Through TensorFlow Hub we offer a set of vetted models - from Google or other model creators - that meet these criteria.

Use a model published on TensorFlow Hub

[TensorFlow Hub](https://tfhub.dev) offers a wide range of pre-trained image classification models - from various model creators - that can be used with the Image Labeling and Object Detection and Tracking APIs. Follow these steps:

1. Pick a model from the [collection of ML Kit compatible models](https://tfhub.dev/ml-kit/collections/image-classification/1).
2. Download the .tflite model file from the model details page. Where available, pick a model format with metadata.
3. Follow our guides for the [Image Labeling API](/ml-kit/vision/image-labeling#custom-tflite) or the [Object Detection and Tracking API](/ml-kit/vision/object-detection#custom-tflite) on how to bundle the model file with your project and use it in your Android or iOS application.

Train your own image classification model

If no pre-trained image classification model fits your needs, there are various ways to train your own TensorFlow Lite model, some of which are outlined and discussed in more detail below.

| Options to train your own image classification model | |
|---|---|
| **AutoML Vision Edge** | Offered through Google Cloud AI. Create state-of-the-art image classification models and easily evaluate the trade-off between performance and size. |
| **TensorFlow Lite Model Maker** | Re-train a model (transfer learning); takes less time and requires less data than training a model from scratch. |
| **Convert a TensorFlow model to TensorFlow Lite** | Train a model with TensorFlow and then convert it to TensorFlow Lite. |

AutoML Vision Edge

Image classification models trained using [AutoML Vision Edge](https://cloud.google.com/vision/automl/docs/edge-quickstart) are supported by the custom model features of the [Image Labeling](/ml-kit/vision/image-labeling) and [Object Detection and Tracking](/ml-kit/vision/object-detection) APIs. These APIs also support downloading models that are hosted with [Firebase model deployment](https://firebase.google.com/docs/ml/use-custom-models).

To learn more about how to use a model trained with AutoML Vision Edge in your Android and iOS apps, follow the custom model guides for each API, depending on your use case.

**Note:** ML Kit only supports custom image classification models. Although AutoML Vision allows training of object detection models, these cannot be used with ML Kit.
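As a rough illustration of the Firebase model deployment path mentioned above, the following Kotlin sketch downloads a hosted model before constructing a labeler from it. The model name `my_image_classifier` is hypothetical, and this path typically also requires the `com.google.mlkit:linkfirebase` artifact and a configured Firebase project; treat it as a sketch rather than a drop-in implementation.

```kotlin
import com.google.mlkit.common.model.CustomRemoteModel
import com.google.mlkit.common.model.DownloadConditions
import com.google.mlkit.common.model.RemoteModelManager
import com.google.mlkit.linkfirebase.FirebaseModelSource
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.custom.CustomImageLabelerOptions

fun prepareRemoteModel() {
    // "my_image_classifier" is a hypothetical name under which the model was
    // published with Firebase model deployment.
    val remoteModel = CustomRemoteModel.Builder(
        FirebaseModelSource.Builder("my_image_classifier").build()
    ).build()

    // Only fetch the model over Wi-Fi to keep cellular data usage down.
    val conditions = DownloadConditions.Builder()
        .requireWifi()
        .build()

    RemoteModelManager.getInstance()
        .download(remoteModel, conditions)
        .addOnSuccessListener {
            // Create the labeler only once the model is available on-device.
            val options = CustomImageLabelerOptions.Builder(remoteModel)
                .setConfidenceThreshold(0.5f)
                .build()
            val labeler = ImageLabeling.getClient(options)
            // labeler.process(inputImage) can now run against the downloaded model.
        }
}
```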
TensorFlow Lite Model Maker

The TFLite Model Maker library simplifies the process of adapting and converting a TensorFlow neural-network model to particular input data when deploying this model for on-device ML applications. You can follow the [Colab for image classification with TensorFlow Lite Model Maker](https://ai.google.dev/edge/litert/libraries/modify/image_classification).

To learn more about how to use a model trained with Model Maker in your Android and iOS apps, follow our guides for the [Image Labeling API](/ml-kit/vision/image-labeling) or the [Object Detection and Tracking API](/ml-kit/vision/object-detection), depending on your use case.

Models created using the TensorFlow Lite converter

If you have an existing TensorFlow image classification model, you can convert it using the [TensorFlow Lite converter](https://www.tensorflow.org/lite/convert). Please ensure the resulting model meets the compatibility requirements below.

To learn more about how to use a TensorFlow Lite model in your Android and iOS apps, follow our guides for the [Image Labeling API](/ml-kit/vision/image-labeling) or the [Object Detection and Tracking API](/ml-kit/vision/object-detection), depending on your use case.

TensorFlow Lite model compatibility

You can use any pre-trained TensorFlow Lite image classification model, provided it meets these requirements:

Tensors

- The model must have only one input tensor with the following constraints:
  - The data is in RGB pixel format.
  - The data is of UINT8 or FLOAT32 type. If the input tensor type is FLOAT32, it must specify the NormalizationOptions by attaching [metadata](#metadata).
  - The tensor has 4 dimensions: BxHxWxC, where:
    - B is the batch size. It must be 1 (inference on larger batches is not supported).
    - W and H are the input width and height.
    - C is the number of expected channels. It must be 3.
- The model must have at least one output tensor with N classes and either 2 or 4 dimensions:
  - (1xN)
  - (1x1x1xN)
- Currently only single-head models are fully supported. Multi-head models may output unexpected results.
Metadata

You can add metadata to the TensorFlow Lite file as explained in [Adding metadata to TensorFlow Lite model](https://www.tensorflow.org/lite/convert/metadata).

To use a model with a FLOAT32 input tensor, you must specify the [NormalizationOptions](https://github.com/tensorflow/tensorflow/blob/d64c20eb62c5cfb9ff5936204afb8fb7c83cfc84/tensorflow/lite/experimental/support/metadata/metadata_schema.fbs#L295-L318) in the metadata.

We also recommend that you attach this metadata to the output tensor [TensorMetadata](https://github.com/tensorflow/tensorflow/blob/d64c20eb62c5cfb9ff5936204afb8fb7c83cfc84/tensorflow/lite/experimental/support/metadata/metadata_schema.fbs#L396-L397):

- A label map specifying the name of each output class, as an [AssociatedFile](https://github.com/tensorflow/tensorflow/blob/d64c20eb62c5cfb9ff5936204afb8fb7c83cfc84/tensorflow/lite/experimental/support/metadata/metadata_schema.fbs#L434-L438) with type [TENSOR_AXIS_LABELS](https://github.com/tensorflow/tensorflow/blob/d64c20eb62c5cfb9ff5936204afb8fb7c83cfc84/tensorflow/lite/experimental/support/metadata/metadata_schema.fbs#L42-L54) (otherwise only the numerical output class indices can be returned).
- A default score threshold below which results are considered too low-confidence to be returned, as a [ProcessUnit](https://github.com/tensorflow/tensorflow/blob/d64c20eb62c5cfb9ff5936204afb8fb7c83cfc84/tensorflow/lite/experimental/support/metadata/metadata_schema.fbs#L422-L429) with [ScoreThresholdingOptions](https://github.com/tensorflow/tensorflow/blob/d64c20eb62c5cfb9ff5936204afb8fb7c83cfc84/tensorflow/lite/experimental/support/metadata/metadata_schema.fbs#L360-L366).
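Once a model satisfying the tensor and metadata requirements above is bundled with your app, wiring it into the Object Detection and Tracking API looks broadly similar to the labeling case. The Kotlin sketch below assumes the `com.google.mlkit:object-detection-custom` dependency and an illustrative `model.tflite` asset; the API guides linked earlier cover the full set of options.

```kotlin
import android.util.Log
import com.google.mlkit.common.model.LocalModel
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.objects.ObjectDetection
import com.google.mlkit.vision.objects.custom.CustomObjectDetectorOptions

fun detectAndClassify(image: InputImage) {
    // Hypothetical bundled classifier used to label each detected object.
    val localModel = LocalModel.Builder()
        .setAssetFilePath("model.tflite")
        .build()

    // STREAM_MODE targets live camera feeds; use SINGLE_IMAGE_MODE for stills.
    val options = CustomObjectDetectorOptions.Builder(localModel)
        .setDetectorMode(CustomObjectDetectorOptions.STREAM_MODE)
        .enableClassification()
        .setClassificationConfidenceThreshold(0.5f)  // illustrative threshold
        .setMaxPerObjectLabelCount(3)
        .build()

    val detector = ObjectDetection.getClient(options)

    detector.process(image)
        .addOnSuccessListener { detectedObjects ->
            for (obj in detectedObjects) {
                for (label in obj.labels) {
                    Log.d("MLKit", "${label.text} (${label.confidence}) at ${obj.boundingBox}")
                }
            }
        }
        .addOnFailureListener { e -> Log.e("MLKit", "Detection failed", e) }
}
```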