Earth Engine provides ee.Model as a connector to models hosted on Vertex AI.
Specifically, Earth Engine sends image or table data as online prediction
requests to a trained model deployed on a Vertex AI endpoint. The model outputs are then
available as Earth Engine images or tables. Currently, TensorFlow models are supported.
TensorFlow is an open-source ML platform that supports advanced methods such as deep learning. The Earth Engine API provides methods for importing and exporting imagery and training/testing data in TFRecord format. See the ML examples page for demonstrations that use TensorFlow with data from Earth Engine, and the TFRecord page for details about how Earth Engine writes data to TFRecord files.
The ee.Model package handles interaction with hosted machine learning models.
Connecting to models hosted on Vertex AI
An ee.Model instance can be created with ee.Model.fromVertexAi(). This is an
ee.Model object that packages Earth Engine data into tensors,
forwards them as predict requests to Vertex
AI, then automatically reassembles the responses into Earth Engine data types.
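As a minimal sketch, a connection might look like the following (using the earthengine-api Python client). The project ID, endpoint ID, tile size, and output band name are all hypothetical placeholders, and `ee.Initialize()` must have been called before connecting:

```python
def endpoint_name(project, endpoint_id, region='us-central1'):
    """Build the fully qualified Vertex AI endpoint name."""
    return f'projects/{project}/locations/{region}/endpoints/{endpoint_id}'


def connect_model(project, endpoint_id):
    """Connect an ee.Model to a hosted endpoint (illustrative values)."""
    import ee  # earthengine-api; assumes ee.Initialize() has been called

    return ee.Model.fromVertexAi(
        endpoint=endpoint_name(project, endpoint_id),
        # How Earth Engine tiles the image data it forwards to the model.
        inputTileSize=[256, 256],
        # Hypothetical name and type of the band the model returns.
        outputBands={'output': {'type': ee.PixelType.float()}},
    )
```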
To interact with Earth Engine, a hosted model's inputs and outputs must be compatible with the TensorProto interchange format, specifically serialized TensorProtos in base64 (see the TensorProto reference). This can be done programmatically, as shown on the TensorFlow examples page, after training and before saving, or by reloading saved models.
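One way to adapt a trained Keras model, assuming TensorFlow 2.x, is to wrap it with layers that decode base64-encoded serialized TensorProto inputs and re-encode its outputs the same way. The layer names and the `'array'` input name below are illustrative choices, not requirements:

```python
def wrap_for_earth_engine(trained_model):
    """Wrap a Keras model so it consumes/produces base64 TensorProtos."""
    import tensorflow as tf  # lazy import; assumes TensorFlow 2.x

    class DeserializeInput(tf.keras.layers.Layer):
        def call(self, serialized):
            # Web-safe base64 string -> bytes -> float32 tensor, per example.
            return tf.map_fn(
                lambda b: tf.io.parse_tensor(b, tf.float32),
                tf.io.decode_base64(serialized),
                fn_output_signature=tf.float32)

    class SerializeOutput(tf.keras.layers.Layer):
        def call(self, tensor):
            # Tensor -> serialized TensorProto -> web-safe base64 string.
            return tf.io.encode_base64(
                tf.map_fn(tf.io.serialize_tensor, tensor,
                          fn_output_signature=tf.string))

    serialized = tf.keras.Input(shape=[], dtype=tf.string, name='array')
    decoded = DeserializeInput()(serialized)
    outputs = SerializeOutput()(trained_model(decoded))
    return tf.keras.Model(serialized, outputs)
```

The wrapped model is then saved and deployed in place of the original, so every request and response crosses the endpoint as base64 strings.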
To use a model with
ee.Model.fromVertexAi(), you must have sufficient
permissions to use the model. Specifically, you (or anyone who uses the model) needs at
least the Vertex AI User role. You control permissions for your Cloud project using
Identity and Access Management (IAM) controls.
When deploying your model and endpoint, you will need to specify which region to deploy to.
The us-central1 region is recommended, since it will likely perform best due to
proximity to Earth Engine servers, but almost any region will work. See the
Vertex AI docs for
details about Vertex AI regions and what features each one supports.
If you are migrating from AI Platform, note that Vertex AI does not have a global endpoint,
so ee.Model.fromVertexAi() does not have a region parameter.
Use model.predictImage() to make predictions on an ee.Image
using a hosted model. The return type of
predictImage() is an
ee.Image which can be added to the map, used in other computations,
exported, etc. The image is forwarded to the hosted model in tiles. You control how
the image is tiled using the inputTileSize, inputOverlapSize, and
outputTileSize parameters. For example, a fully convolutional model may segment
256x256xChannels shaped inputs (scale is unchanged), with the edges of the tiles
discarded according to the specified overlap. Alternatively, an image classification model may accept
256x256xChannels shaped inputs, but return a 1x1x1 output of probability
(reduced resolution) of some tile characteristic. Note that Earth Engine will always
forward float32 3D tensors, even when bands are scalar (the last dimension will be 1).
Nearly all convolutional models will have a fixed input projection (that of the data
on which the model was trained). In this case, set the fixedInputSize parameter
to true in your call to ee.Model.fromVertexAi().
When visualizing predictions, use caution when zooming out on a model that has a fixed
input projection, for the same reasons described
here. Specifically, zooming out to a large spatial scope can result in requests for too
much data, too many concurrent requests, or too much billable compute.
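A hedged sketch of image prediction, with a small helper that illustrates why zooming out is costly (tile count grows with the requested extent). The band selection and function names are hypothetical:

```python
import math


def tiles_needed(width_px, height_px, tile_size=256):
    """Count of tile_size x tile_size tiles covering an image: shows how
    tiled inference scales with the requested spatial extent."""
    return math.ceil(width_px / tile_size) * math.ceil(height_px / tile_size)


def predict_bands(model, image, bands):
    """Run tiled inference over an ee.Image with a connected ee.Model."""
    # Earth Engine forwards float32 3D tensors, so cast the inputs.
    predictions = model.predictImage(image.select(bands).toFloat())
    # The result is an ee.Image: add it to the map, compute with it,
    # or export it like any other image.
    return predictions
```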
Use model.predictProperties() to make predictions on an
ee.FeatureCollection. Inputs are stored as properties of the table. The
inputs can be numeric types, including multidimensional arrays. The outputs of the model
are stored as new properties in the output table.
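A corresponding sketch for tables; the property list is a hypothetical example:

```python
def predict_table(model, features, input_properties):
    """Run predictions over an ee.FeatureCollection.

    `input_properties` names the feature properties the model expects as
    inputs; the model's outputs come back as new properties on each feature.
    """
    return model.predictProperties(features.select(input_properties))
```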