ArFrame
Per-frame state.
Summary

| Enumerations | |
|---|---|
| ArCoordinates2dType | enum. 2D coordinate systems supported by ARCore. |
| ArCoordinates3dType | enum. 3D coordinate systems supported by ARCore. |
| Typedefs | |
|---|---|
| ArFrame | typedef struct ArFrame_. The world state resulting from an update (value type). |
| Functions | |
|---|---|
| ArFrame_acquireCamera(const ArSession *session, const ArFrame *frame, ArCamera **out_camera) | void. Returns the camera object for the session. |
| ArFrame_acquireCameraImage(ArSession *session, ArFrame *frame, ArImage **out_image) | ArStatus. Returns the CPU image for the current frame. |
| ArFrame_acquireDepthImage(const ArSession *session, const ArFrame *frame, ArImage **out_depth_image) | ArStatus. Deprecated in release 1.31.0; use ArFrame_acquireDepthImage16Bits instead, which expands the depth range from 8191mm to 65535mm. This deprecated version may be slower due to the clearing of the top 3 bits per pixel. Attempts to acquire a depth image that corresponds to the current frame. |
| ArFrame_acquireDepthImage16Bits(const ArSession *session, const ArFrame *frame, ArImage **out_depth_image) | ArStatus. Attempts to acquire a depth image that corresponds to the current frame. |
| ArFrame_acquireImageMetadata(const ArSession *session, const ArFrame *frame, ArImageMetadata **out_metadata) | ArStatus. Gets the camera metadata for the current camera image. |
| ArFrame_acquirePointCloud(const ArSession *session, const ArFrame *frame, ArPointCloud **out_point_cloud) | ArStatus. Acquires the current set of estimated 3D points attached to real-world geometry. |
| ArFrame_acquireRawDepthConfidenceImage(const ArSession *session, const ArFrame *frame, ArImage **out_confidence_image) | ArStatus. Attempts to acquire the confidence image corresponding to the raw depth image of the current frame. |
| ArFrame_acquireRawDepthImage(const ArSession *session, const ArFrame *frame, ArImage **out_depth_image) | ArStatus. Deprecated in release 1.31.0; use ArFrame_acquireRawDepthImage16Bits instead, which expands the depth range from 8191mm to 65535mm. This deprecated version may be slower due to the clearing of the top 3 bits per pixel. Attempts to acquire a "raw", mostly unfiltered, depth image that corresponds to the current frame. |
| ArFrame_acquireRawDepthImage16Bits(const ArSession *session, const ArFrame *frame, ArImage **out_depth_image) | ArStatus. Attempts to acquire a "raw", mostly unfiltered, depth image that corresponds to the current frame. |
| ArFrame_acquireSemanticConfidenceImage(const ArSession *session, const ArFrame *frame, ArImage **out_semantic_confidence_image) | ArStatus. Attempts to acquire the semantic confidence image corresponding to the current frame. |
| ArFrame_acquireSemanticImage(const ArSession *session, const ArFrame *frame, ArImage **out_semantic_image) | ArStatus. Attempts to acquire the semantic image corresponding to the current frame. |
| ArFrame_create(const ArSession *session, ArFrame **out_frame) | void. Allocates a new ArFrame object, storing the pointer into *out_frame. |
| ArFrame_destroy(ArFrame *frame) | void. Releases an ArFrame and any references it holds. |
| ArFrame_getAndroidSensorPose(const ArSession *session, const ArFrame *frame, ArPose *out_pose) | void. Sets out_pose to the pose of the Android Sensor Coordinate System in the world coordinate space for this frame. |
| ArFrame_getCameraTextureName(const ArSession *session, const ArFrame *frame, uint32_t *out_texture_id) | void. Returns the OpenGL ES camera texture name (ID) associated with this frame. |
| ArFrame_getDisplayGeometryChanged(const ArSession *session, const ArFrame *frame, int32_t *out_geometry_changed) | void. Checks if the display rotation or viewport geometry changed since the previous call to ArSession_update. |
| ArFrame_getHardwareBuffer(const ArSession *session, const ArFrame *frame, void **out_hardware_buffer) | ArStatus. Gets the AHardwareBuffer for this frame. |
| ArFrame_getLightEstimate(const ArSession *session, const ArFrame *frame, ArLightEstimate *out_light_estimate) | void. Gets the current ArLightEstimate, if Lighting Estimation is enabled. |
| ArFrame_getSemanticLabelFraction(const ArSession *session, const ArFrame *frame, ArSemanticLabel query_label, float *out_fraction) | ArStatus. Retrieves the fraction of pixels in the most recent semantics frame that are query_label. |
| ArFrame_getTimestamp(const ArSession *session, const ArFrame *frame, int64_t *out_timestamp_ns) | void. Returns the timestamp in nanoseconds when this image was captured. |
| ArFrame_getUpdatedAnchors(const ArSession *session, const ArFrame *frame, ArAnchorList *out_anchor_list) | void. Gets the set of anchors that were changed by the ArSession_update that produced this Frame. |
| ArFrame_getUpdatedTrackData(const ArSession *session, const ArFrame *frame, const uint8_t *track_id_uuid_16, ArTrackDataList *out_track_data_list) | void. Gets the set of data recorded to the given track available during playback on this ArFrame. |
| ArFrame_getUpdatedTrackables(const ArSession *session, const ArFrame *frame, ArTrackableType filter_type, ArTrackableList *out_trackable_list) | void. Gets the set of trackables of a particular type that were changed by the ArSession_update call that produced this Frame. |
| ArFrame_hitTest(const ArSession *session, const ArFrame *frame, float pixel_x, float pixel_y, ArHitResultList *hit_result_list) | void. Performs a ray cast from the user's device in the direction of the given location in the camera view. |
| ArFrame_hitTestInstantPlacement(const ArSession *session, const ArFrame *frame, float pixel_x, float pixel_y, float approximate_distance_meters, ArHitResultList *hit_result_list) | void. Performs a ray cast that can return a result before ARCore establishes full tracking. |
| ArFrame_hitTestRay(const ArSession *session, const ArFrame *frame, const float *ray_origin_3, const float *ray_direction_3, ArHitResultList *hit_result_list) | void. Similar to ArFrame_hitTest, but takes an arbitrary ray in world space coordinates instead of a screen space point. |
| ArFrame_recordTrackData(ArSession *session, const ArFrame *frame, const uint8_t *track_id_uuid_16, const void *payload, size_t payload_size) | ArStatus. Writes a data sample in the specified track. |
| ArFrame_transformCoordinates2d(const ArSession *session, const ArFrame *frame, ArCoordinates2dType input_coordinates, int32_t number_of_vertices, const float *vertices_2d, ArCoordinates2dType output_coordinates, float *out_vertices_2d) | void. Transforms a list of 2D coordinates from one 2D coordinate system to another 2D coordinate system. |
| ArFrame_transformCoordinates3d(const ArSession *session, const ArFrame *frame, ArCoordinates2dType input_coordinates, int32_t number_of_vertices, const float *vertices_2d, ArCoordinates3dType output_coordinates, float *out_vertices_3d) | void. Transforms a list of 2D coordinates from one 2D coordinate space to 3D coordinate space. |
| ArFrame_transformDisplayUvCoords(const ArSession *session, const ArFrame *frame, int32_t num_elements, const float *uvs_in, float *uvs_out) | void. Deprecated in release 1.7.0; use ArFrame_transformCoordinates2d instead. Transforms the given texture coordinates to correctly show the background image. |
Enumerations
ArCoordinates2dType
2D coordinate systems supported by ARCore.

| Properties | |
|---|---|
| AR_COORDINATES_2D_IMAGE_NORMALIZED | CPU image, (x,y) normalized to [0.0f, 1.0f] range. |
| AR_COORDINATES_2D_IMAGE_PIXELS | CPU image, (x,y) in pixels. The range of x and y is determined by the CPU image resolution. |
| AR_COORDINATES_2D_OPENGL_NORMALIZED_DEVICE_COORDINATES | OpenGL Normalized Device Coordinates, display-rotated, (x,y) normalized to [-1.0f, 1.0f] range. |
| AR_COORDINATES_2D_TEXTURE_NORMALIZED | GPU texture coordinates, (s,t) normalized to [0.0f, 1.0f] range. |
| AR_COORDINATES_2D_TEXTURE_TEXELS | GPU texture, (x,y) in pixels. |
| AR_COORDINATES_2D_VIEW | Android view, display-rotated, (x,y) in pixels. |
| AR_COORDINATES_2D_VIEW_NORMALIZED | Android view, display-rotated, (x,y) normalized to [0.0f, 1.0f] range. |
ArCoordinates3dType
3D coordinate systems supported by ARCore.

| Properties | |
|---|---|
| AR_COORDINATES_3D_EIS_NORMALIZED_DEVICE_COORDINATES | Normalized Device Coordinates (NDC), display-rotated, (x,y) normalized to [-1.0f, 1.0f] range, to compensate for perspective shift for EIS. Use with ArFrame_transformCoordinates3d. |
| AR_COORDINATES_3D_EIS_TEXTURE_NORMALIZED | GPU texture coordinates, using the Z component to compensate for perspective shift when using Electronic Image Stabilization (EIS). Use with ArFrame_transformCoordinates3d. |
Typedefs
ArFrame
struct ArFrame_ ArFrame
The world state resulting from an update (value type).
- Create with: ArFrame_create
- Populate with: ArSession_update
- Release with: ArFrame_destroy
Functions
ArFrame_acquireCamera
void ArFrame_acquireCamera( const ArSession *session, const ArFrame *frame, ArCamera **out_camera )
Returns the camera object for the session.
Note that this Camera instance is long-lived so the same instance is returned regardless of the frame object this function was called on.
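As a minimal sketch (the helper name is illustrative, not part of the API), acquiring the camera to gate per-frame logic on tracking state:

```c
#include "arcore_c_api.h"

// Sketch: acquire the frame's camera, read its tracking state, release.
// The camera is long-lived, but each acquire still needs a release.
void check_tracking(const ArSession *session, const ArFrame *frame) {
  ArCamera *camera = NULL;
  ArFrame_acquireCamera(session, frame, &camera);

  ArTrackingState state = AR_TRACKING_STATE_STOPPED;
  ArCamera_getTrackingState(session, camera, &state);
  if (state == AR_TRACKING_STATE_TRACKING) {
    // Poses from this frame are safe to use.
  }
  ArCamera_release(camera);
}
```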
ArFrame_acquireCameraImage
ArStatus ArFrame_acquireCameraImage( ArSession *session, ArFrame *frame, ArImage **out_image )
Returns the CPU image for the current frame.
Caller is responsible for later releasing the image with ArImage_release. Not supported on all devices (see https://developers.google.com/ar/devices).

Returns AR_SUCCESS or an error code.
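A sketch of the acquire/check/release pattern this implies (helper name illustrative):

```c
#include <stdint.h>
#include "arcore_c_api.h"

// Sketch: grab the CPU image for this frame and read its dimensions.
// The call can fail (e.g. not supported, resources exhausted), so check
// the ArStatus before touching the image.
void inspect_cpu_image(ArSession *session, ArFrame *frame) {
  ArImage *image = NULL;
  if (ArFrame_acquireCameraImage(session, frame, &image) != AR_SUCCESS) {
    return;  // No CPU image available for this frame.
  }
  int32_t width = 0, height = 0;
  ArImage_getWidth(session, image, &width);
  ArImage_getHeight(session, image, &height);
  // ... run computer vision on the image planes here ...
  ArImage_release(image);  // Required: images are a limited resource.
}
```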
ArFrame_acquireDepthImage
ArStatus ArFrame_acquireDepthImage( const ArSession *session, const ArFrame *frame, ArImage **out_depth_image )
Attempts to acquire a depth image that corresponds to the current frame.
The depth image has a single 16-bit plane at index 0, stored in little-endian format. Each pixel contains the distance in millimeters along the camera principal axis. Currently, the three most significant bits are always set to 000. The remaining thirteen bits express values from 0 to 8191, representing depth in millimeters. To extract distance from a depth map, see the Depth API developer guide.
The actual size of the depth image depends on the device and its display aspect ratio. The size of the depth image is typically around 160x120 pixels, with higher resolutions up to 640x480 on some devices. These sizes may change in the future. The outputs of ArFrame_acquireDepthImage, ArFrame_acquireRawDepthImage and ArFrame_acquireRawDepthConfidenceImage will all have the exact same size.
Optimal depth accuracy is achieved between 500 millimeters (50 centimeters) and 5000 millimeters (5 meters) from the camera. Error increases quadratically as distance from the camera increases.
Depth is estimated using data from the world-facing cameras, user motion, and hardware depth sensors such as a time-of-flight sensor (or ToF sensor) if available. As the user moves their device through the environment, 3D depth data is collected and cached, which improves the quality of subsequent depth images and reduces the error introduced by camera distance.
If an up-to-date depth image isn't ready for the current frame, the most recent depth image available from an earlier frame will be returned instead. This is expected only to occur on compute-constrained devices. An up-to-date depth image should typically become available again within a few frames.
The image must be released with ArImage_release once it is no longer needed.

Deprecated in release 1.31.0. Please use ArFrame_acquireDepthImage16Bits instead, which expands the depth range from 8191mm to 65535mm. This deprecated version may be slower than ArFrame_acquireDepthImage16Bits due to the clearing of the top 3 bits per pixel.

Returns AR_SUCCESS or an error code.
ArFrame_acquireDepthImage16Bits
ArStatus ArFrame_acquireDepthImage16Bits( const ArSession *session, const ArFrame *frame, ArImage **out_depth_image )
Attempts to acquire a depth image that corresponds to the current frame.
The depth image has format HardwareBuffer.D_16, which is a single 16-bit plane at index 0, stored in little-endian format. Each pixel contains the distance in millimeters along the camera principal axis, with the representable depth range between 0 millimeters and 65535 millimeters, or about 65 meters.
To extract distance from a depth map, see the Depth API developer guide.
The actual size of the depth image depends on the device and its display aspect ratio. The size of the depth image is typically around 160x120 pixels, with higher resolutions up to 640x480 on some devices. These sizes may change in the future. The outputs of ArFrame_acquireDepthImage16Bits, ArFrame_acquireRawDepthImage16Bits and ArFrame_acquireRawDepthConfidenceImage will all have the exact same size.
Optimal depth accuracy is achieved between 500 millimeters (50 centimeters) and 15000 millimeters (15 meters) from the camera, with depth reliably observed up to 25000 millimeters (25 meters). Error increases quadratically as distance from the camera increases.
Depth is estimated using data from the world-facing cameras, user motion, and hardware depth sensors such as a time-of-flight sensor (or ToF sensor) if available. As the user moves their device through the environment, 3D depth data is collected and cached, which improves the quality of subsequent depth images and reduces the error introduced by camera distance.
If an up-to-date depth image isn't ready for the current frame, the most recent depth image available from an earlier frame will be returned instead. This is expected only to occur on compute-constrained devices. An up-to-date depth image should typically become available again within a few frames.
When the Geospatial API and the Depth API are enabled, output images from the Depth API will include terrain and building geometry when in a location with VPS coverage. See the Geospatial Depth Developer Guide for more information.
The image must be released with ArImage_release once it is no longer needed.

Returns AR_SUCCESS or an error code.
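For illustration, a sketch of sampling one pixel's distance from the 16-bit depth image; the helper and its lack of bounds checking are assumptions, not part of the API:

```c
#include <stdint.h>
#include <string.h>
#include "arcore_c_api.h"

// Sketch: sample the depth at pixel (x, y). Each pixel is a little-endian
// uint16 distance in millimeters along the camera principal axis; 0 means
// no valid estimate.
uint16_t depth_mm_at(const ArSession *session, const ArFrame *frame,
                     int32_t x, int32_t y) {
  ArImage *depth = NULL;
  if (ArFrame_acquireDepthImage16Bits(session, frame, &depth) != AR_SUCCESS) {
    return 0;
  }
  const uint8_t *data = NULL;
  int32_t length = 0, row_stride = 0;
  ArImage_getPlaneRowStride(session, depth, /*plane_index=*/0, &row_stride);
  ArImage_getPlaneData(session, depth, /*plane_index=*/0, &data, &length);

  uint16_t depth_mm = 0;
  memcpy(&depth_mm, data + y * row_stride + x * 2, sizeof(depth_mm));
  ArImage_release(depth);
  return depth_mm;
}
```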
ArFrame_acquireImageMetadata
ArStatus ArFrame_acquireImageMetadata( const ArSession *session, const ArFrame *frame, ArImageMetadata **out_metadata )
Gets the camera metadata for the current camera image.
Returns AR_SUCCESS or an error code.
ArFrame_acquirePointCloud
ArStatus ArFrame_acquirePointCloud( const ArSession *session, const ArFrame *frame, ArPointCloud **out_point_cloud )
Acquires the current set of estimated 3D points attached to real-world geometry.

A matching call to ArPointCloud_release must be made when the application is done accessing the Point Cloud.

Note: This information is for visualization and debugging purposes only. Its characteristics and format are subject to change in subsequent versions of the API.

Returns AR_SUCCESS or an error code.
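A sketch of iterating the point data, four floats per point (x, y, z, confidence) per ArPointCloud_getData's documented layout; the helper name is illustrative:

```c
#include <stdint.h>
#include "arcore_c_api.h"

// Sketch: walk the frame's feature points for debug visualization.
void visit_points(const ArSession *session, const ArFrame *frame) {
  ArPointCloud *cloud = NULL;
  if (ArFrame_acquirePointCloud(session, frame, &cloud) != AR_SUCCESS) return;

  int32_t n = 0;
  const float *data = NULL;
  ArPointCloud_getNumberOfPoints(session, cloud, &n);
  ArPointCloud_getData(session, cloud, &data);
  for (int32_t i = 0; i < n; ++i) {
    const float *p = data + 4 * i;  // p[0..2] = xyz, p[3] = confidence
    (void)p;  // placeholder: render or log the point here
  }
  ArPointCloud_release(cloud);  // Matching release is required.
}
```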
ArFrame_acquireRawDepthConfidenceImage
ArStatus ArFrame_acquireRawDepthConfidenceImage( const ArSession *session, const ArFrame *frame, ArImage **out_confidence_image )
Attempts to acquire the confidence image corresponding to the raw depth image of the current frame.
The image must be released via ArImage_release once it is no longer needed.

Each pixel is an 8-bit unsigned integer representing the estimated confidence of the corresponding pixel in the raw depth image. The confidence value is between 0 and 255, inclusive, with 0 representing the lowest confidence and 255 representing the highest confidence in the measured depth value. Pixels without a valid depth estimate have a confidence value of 0 and a corresponding depth value of 0 (see ArFrame_acquireRawDepthImage16Bits).
The scaling of confidence values is linear and continuous within this range. Expect to see confidence values represented across the full range of 0 to 255, with values increasing as better observations are made of each location. If an application requires filtering out low-confidence pixels, removing depth pixels below a confidence threshold of half confidence (128) tends to work well.
The actual size of the depth image depends on the device and its display aspect ratio. The size of the depth image is typically around 160x120 pixels, with higher resolutions up to 640x480 on some devices. These sizes may change in the future. The outputs of ArFrame_acquireDepthImage16Bits, ArFrame_acquireRawDepthImage16Bits and ArFrame_acquireRawDepthConfidenceImage will all have the exact same size.

Returns AR_SUCCESS or an error code.
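A sketch of the half-confidence (128) threshold filtering suggested above; the helper is illustrative and the buffer handling is simplified:

```c
#include <stdint.h>
#include <string.h>
#include "arcore_c_api.h"

// Sketch: zero out raw depth pixels whose confidence is below 128,
// writing into a caller-provided w*h buffer. Assumes the two images have
// identical dimensions, as the reference states.
void filter_low_confidence(const ArSession *session, const ArFrame *frame,
                           uint16_t *out_depth_mm, int32_t w, int32_t h) {
  ArImage *depth = NULL, *conf = NULL;
  if (ArFrame_acquireRawDepthImage16Bits(session, frame, &depth) != AR_SUCCESS)
    return;
  if (ArFrame_acquireRawDepthConfidenceImage(session, frame, &conf) !=
      AR_SUCCESS) {
    ArImage_release(depth);
    return;
  }
  const uint8_t *d = NULL, *c = NULL;
  int32_t d_len = 0, c_len = 0, d_stride = 0, c_stride = 0;
  ArImage_getPlaneRowStride(session, depth, 0, &d_stride);
  ArImage_getPlaneRowStride(session, conf, 0, &c_stride);
  ArImage_getPlaneData(session, depth, 0, &d, &d_len);
  ArImage_getPlaneData(session, conf, 0, &c, &c_len);

  for (int32_t y = 0; y < h; ++y) {
    for (int32_t x = 0; x < w; ++x) {
      uint16_t mm;
      memcpy(&mm, d + y * d_stride + x * 2, sizeof(mm));
      uint8_t confidence = c[y * c_stride + x];
      out_depth_mm[y * w + x] = (confidence >= 128) ? mm : 0;
    }
  }
  ArImage_release(conf);
  ArImage_release(depth);
}
```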
ArFrame_acquireRawDepthImage
ArStatus ArFrame_acquireRawDepthImage( const ArSession *session, const ArFrame *frame, ArImage **out_depth_image )
Attempts to acquire a "raw", mostly unfiltered, depth image that corresponds to the current frame.
The raw depth image is sparse and does not provide valid depth for all pixels. Pixels without a valid depth estimate have a pixel value of 0 and a corresponding confidence value of 0 (see ArFrame_acquireRawDepthConfidenceImage).
The depth image has a single 16-bit plane at index 0, stored in little-endian format. Each pixel contains the distance in millimeters along the camera principal axis. Currently, the three most significant bits are always set to 000. The remaining thirteen bits express values from 0 to 8191, representing depth in millimeters. To extract distance from a depth map, see the Depth API developer guide.
The actual size of the depth image depends on the device and its display aspect ratio. The size of the depth image is typically around 160x120 pixels, with higher resolutions up to 640x480 on some devices. These sizes may change in the future. The outputs of ArFrame_acquireDepthImage, ArFrame_acquireRawDepthImage and ArFrame_acquireRawDepthConfidenceImage will all have the exact same size.
Optimal depth accuracy occurs between 500 millimeters (50 centimeters) and 5000 millimeters (5 meters) from the camera. Error increases quadratically as distance from the camera increases.
Depth is primarily estimated using data from the motion of world-facing cameras. As the user moves their device through the environment, 3D depth data is collected and cached, improving the quality of subsequent depth images and reducing the error introduced by camera distance. Depth accuracy and robustness improves if the device has a hardware depth sensor, such as a time-of-flight (ToF) camera.
Not every raw depth image contains a new depth estimate. Typically there are about 10 updates to the raw depth data per second. The depth images between those updates are a 3D reprojection which transforms each depth pixel into a 3D point in space and renders those 3D points into a new raw depth image based on the current camera pose. This effectively transforms raw depth image data from a previous frame to account for device movement since the depth data was calculated. For some applications it may be important to know whether the raw depth image contains new depth data or is a 3D reprojection (for example, to reduce the runtime cost of 3D reconstruction). To do that, compare the current raw depth image timestamp, obtained via ArImage_getTimestamp, with the previously recorded raw depth image timestamp. If they are different, the depth image contains new information.
The image must be released via ArImage_release once it is no longer needed.

Deprecated in release 1.31.0. Please use ArFrame_acquireRawDepthImage16Bits instead, which expands the depth range from 8191mm to 65535mm. This deprecated version may be slower than ArFrame_acquireRawDepthImage16Bits due to the clearing of the top 3 bits per pixel.

Returns AR_SUCCESS or an error code.
ArFrame_acquireRawDepthImage16Bits
ArStatus ArFrame_acquireRawDepthImage16Bits( const ArSession *session, const ArFrame *frame, ArImage **out_depth_image )
Attempts to acquire a "raw", mostly unfiltered, depth image that corresponds to the current frame.
The raw depth image is sparse and does not provide valid depth for all pixels. Pixels without a valid depth estimate have a pixel value of 0 and a corresponding confidence value of 0 (see ArFrame_acquireRawDepthConfidenceImage).
The depth image has format HardwareBuffer.D_16, which is a single 16-bit plane at index 0, stored in little-endian format. Each pixel contains the distance in millimeters along the camera principal axis, with the representable depth range between 0 millimeters and 65535 millimeters, or about 65 meters.
To extract distance from a depth map, see the Depth API developer guide.
The actual size of the depth image depends on the device and its display aspect ratio. The size of the depth image is typically around 160x120 pixels, with higher resolutions up to 640x480 on some devices. These sizes may change in the future. The outputs of ArFrame_acquireDepthImage16Bits, ArFrame_acquireRawDepthImage16Bits and ArFrame_acquireRawDepthConfidenceImage will all have the exact same size.
Optimal depth accuracy is achieved between 500 millimeters (50 centimeters) and 15000 millimeters (15 meters) from the camera, with depth reliably observed up to 25000 millimeters (25 meters). Error increases quadratically as distance from the camera increases.
Depth is primarily estimated using data from the motion of world-facing cameras. As the user moves their device through the environment, 3D depth data is collected and cached, improving the quality of subsequent depth images and reducing the error introduced by camera distance. Depth accuracy and robustness improves if the device has a hardware depth sensor, such as a time-of-flight (ToF) camera.
Not every raw depth image contains a new depth estimate. Typically there are about 10 updates to the raw depth data per second. The depth images between those updates are a 3D reprojection which transforms each depth pixel into a 3D point in space and renders those 3D points into a new raw depth image based on the current camera pose. This effectively transforms raw depth image data from a previous frame to account for device movement since the depth data was calculated. For some applications it may be important to know whether the raw depth image contains new depth data or is a 3D reprojection (for example, to reduce the runtime cost of 3D reconstruction). To do that, compare the current raw depth image timestamp, obtained via ArImage_getTimestamp, with the previously recorded raw depth image timestamp. If they are different, the depth image contains new information.
When the Geospatial API and the Depth API are enabled, output images from the Depth API will include terrain and building geometry when in a location with VPS coverage. See the Geospatial Depth Developer Guide for more information.
The image must be released via ArImage_release once it is no longer needed.

Returns AR_SUCCESS or an error code.
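A sketch of the timestamp comparison described above for detecting new (non-reprojected) raw depth data; the static variable is illustrative:

```c
#include <stdint.h>
#include "arcore_c_api.h"

// Sketch: return 1 only when the raw depth image carries new depth data
// rather than a 3D reprojection of older data, by comparing timestamps.
int raw_depth_is_new(const ArSession *session, const ArFrame *frame) {
  static int64_t last_depth_timestamp_ns = 0;

  ArImage *depth = NULL;
  if (ArFrame_acquireRawDepthImage16Bits(session, frame, &depth) != AR_SUCCESS)
    return 0;

  int64_t ts = 0;
  ArImage_getTimestamp(session, depth, &ts);
  ArImage_release(depth);

  int is_new = (ts != last_depth_timestamp_ns);
  last_depth_timestamp_ns = ts;
  return is_new;
}
```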
ArFrame_acquireSemanticConfidenceImage
ArStatus ArFrame_acquireSemanticConfidenceImage( const ArSession *session, const ArFrame *frame, ArImage **out_semantic_confidence_image )
Attempts to acquire the semantic confidence image corresponding to the current frame.
Each pixel is an 8-bit integer representing the estimated confidence of the corresponding pixel in the semantic image. See the Scene Semantics Developer Guide for more information.
The confidence value is between 0 and 255, inclusive, with 0 representing the lowest confidence and 255 representing the highest confidence in the semantic class prediction (see ArFrame_acquireSemanticImage).

The image must be released via ArImage_release once it is no longer needed.

In order to obtain a valid result from this function, you must set the session's ArSemanticMode to AR_SEMANTIC_MODE_ENABLED. Use ArSession_isSemanticModeSupported to query for support for Scene Semantics.

The semantic confidence image is the same size as the image obtained by ArFrame_acquireSemanticImage.

Returns AR_SUCCESS or an error code.
ArFrame_acquireSemanticImage
ArStatus ArFrame_acquireSemanticImage( const ArSession *session, const ArFrame *frame, ArImage **out_semantic_image )
Attempts to acquire the semantic image corresponding to the current frame.
Each pixel in the image is an 8-bit unsigned integer representing a semantic class label: see ArSemanticLabel for a list of pixel labels and the Scene Semantics Developer Guide for more information.

The image must be released via ArImage_release once it is no longer needed.

In order to obtain a valid result from this function, you must set the session's ArSemanticMode to AR_SEMANTIC_MODE_ENABLED. Use ArSession_isSemanticModeSupported to query for support for Scene Semantics.

The width of the semantic image is currently 256 pixels. The height of the image depends on the device and will match its display aspect ratio.

Returns AR_SUCCESS or an error code.
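A sketch of enabling Scene Semantics at configuration time and then reading labels per frame; the helper names are illustrative and the pixel loop ignores row-stride padding for brevity:

```c
#include <stdint.h>
#include "arcore_c_api.h"

// Sketch: enable Scene Semantics on the session once, at config time.
void configure_semantics(ArSession *session, ArConfig *config) {
  int32_t supported = 0;
  ArSession_isSemanticModeSupported(session, AR_SEMANTIC_MODE_ENABLED,
                                    &supported);
  if (supported) {
    ArConfig_setSemanticMode(session, config, AR_SEMANTIC_MODE_ENABLED);
    ArSession_configure(session, config);
  }
}

// Sketch: count sky-labeled pixels in the current frame's semantic image.
int32_t count_sky_pixels(const ArSession *session, const ArFrame *frame) {
  ArImage *semantics = NULL;
  if (ArFrame_acquireSemanticImage(session, frame, &semantics) != AR_SUCCESS)
    return 0;
  const uint8_t *labels = NULL;
  int32_t length = 0, sky = 0;
  ArImage_getPlaneData(session, semantics, 0, &labels, &length);
  for (int32_t i = 0; i < length; ++i) {
    if (labels[i] == AR_SEMANTIC_LABEL_SKY) ++sky;  // one label per byte
  }
  ArImage_release(semantics);
  return sky;
}
```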
ArFrame_create
void ArFrame_create( const ArSession *session, ArFrame **out_frame )
Allocates a new ArFrame object, storing the pointer into *out_frame.

Note: the same ArFrame can be used repeatedly when calling ArSession_update.
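A sketch of the reuse pattern this note describes:

```c
#include <stdint.h>
#include "arcore_c_api.h"

// Sketch: create one ArFrame up front and reuse it for every update.
void run_frames(ArSession *session) {
  ArFrame *frame = NULL;
  ArFrame_create(session, &frame);

  for (int i = 0; i < 60; ++i) {  // stand-in for the app's render loop
    if (ArSession_update(session, frame) != AR_SUCCESS) continue;
    int64_t t_ns = 0;
    ArFrame_getTimestamp(session, frame, &t_ns);
    // ... use this frame's state here ...
  }
  ArFrame_destroy(frame);  // Releases the frame and its references.
}
```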
ArFrame_destroy
void ArFrame_destroy( ArFrame *frame )
Releases an ArFrame and any references it holds.
ArFrame_getAndroidSensorPose
void ArFrame_getAndroidSensorPose( const ArSession *session, const ArFrame *frame, ArPose *out_pose )
Sets out_pose to the pose of the Android Sensor Coordinate System in the world coordinate space for this frame.

The orientation follows the device's "native" orientation (it is not affected by display rotation) with all axes corresponding to those of the Android sensor coordinates.

See also:
- ArCamera_getDisplayOrientedPose for the pose of the virtual camera.
- ArCamera_getPose for the pose of the physical camera.
- ArFrame_getTimestamp for the system time that this pose was estimated for.

Note: This pose is only useful when ArCamera_getTrackingState returns AR_TRACKING_STATE_TRACKING and otherwise should not be used.
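A sketch of reading the sensor pose as raw values (quaternion, then translation); remember to gate on tracking state as noted above:

```c
#include "arcore_c_api.h"

// Sketch: read the Android sensor pose for this frame as raw values.
void read_sensor_pose(const ArSession *session, const ArFrame *frame) {
  ArPose *pose = NULL;
  ArPose_create(session, NULL, &pose);  // NULL raw values -> identity pose
  ArFrame_getAndroidSensorPose(session, frame, pose);

  float raw[7];  // quaternion (qx, qy, qz, qw) then translation (tx, ty, tz)
  ArPose_getPoseRaw(session, pose, raw);
  ArPose_destroy(pose);
}
```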
ArFrame_getCameraTextureName
void ArFrame_getCameraTextureName( const ArSession *session, const ArFrame *frame, uint32_t *out_texture_id )
Returns the OpenGL ES camera texture name (ID) associated with this frame.
This is guaranteed to be one of the texture names previously set via ArSession_setCameraTextureNames or ArSession_setCameraTextureName. Texture names (IDs) are returned in a round-robin fashion in sequential frames.
ArFrame_getDisplayGeometryChanged
void ArFrame_getDisplayGeometryChanged( const ArSession *session, const ArFrame *frame, int32_t *out_geometry_changed )
Checks if the display rotation or viewport geometry changed since the previous call to ArSession_update.

The application should re-query ArCamera_getProjectionMatrix and ArFrame_transformCoordinates2d whenever this emits non-zero.
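A sketch of reacting to the flag; the near/far plane values are placeholders:

```c
#include <stdint.h>
#include "arcore_c_api.h"

// Sketch: re-derive display-dependent state only when geometry changed.
void maybe_update_projection(const ArSession *session, const ArFrame *frame,
                             float *proj_col_major_4x4) {
  int32_t changed = 0;
  ArFrame_getDisplayGeometryChanged(session, frame, &changed);
  if (!changed) return;

  ArCamera *camera = NULL;
  ArFrame_acquireCamera(session, frame, &camera);
  ArCamera_getProjectionMatrix(session, camera, /*near=*/0.1f, /*far=*/100.f,
                               proj_col_major_4x4);
  ArCamera_release(camera);
  // Also re-run ArFrame_transformCoordinates2d for background UVs here.
}
```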
ArFrame_getHardwareBuffer
ArStatus ArFrame_getHardwareBuffer( const ArSession *session, const ArFrame *frame, void **out_hardware_buffer )
Gets the AHardwareBuffer for this frame.

See the Vulkan Rendering developer guide for more information.

The result in out_hardware_buffer is only valid when a configuration is active that uses AR_TEXTURE_UPDATE_MODE_EXPOSE_HARDWARE_BUFFER.

This hardware buffer is only guaranteed to be valid until the next call to ArSession_update(). If you want to use the hardware buffer beyond that, such as for rendering, you must call AHardwareBuffer_acquire and then call AHardwareBuffer_release after your rendering is complete.

Returns AR_SUCCESS or an error code.
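A sketch of retaining the buffer across updates, per the acquire/release contract above:

```c
#include <android/hardware_buffer.h>
#include "arcore_c_api.h"

// Sketch: hold the frame's AHardwareBuffer past the next ArSession_update
// by taking a reference. Requires an active config using
// AR_TEXTURE_UPDATE_MODE_EXPOSE_HARDWARE_BUFFER.
AHardwareBuffer *retain_frame_buffer(const ArSession *session,
                                     const ArFrame *frame) {
  void *raw = NULL;
  if (ArFrame_getHardwareBuffer(session, frame, &raw) != AR_SUCCESS || !raw) {
    return NULL;
  }
  AHardwareBuffer *buffer = (AHardwareBuffer *)raw;
  AHardwareBuffer_acquire(buffer);  // Keep it valid while we render.
  return buffer;  // Caller must AHardwareBuffer_release() when done.
}
```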
ArFrame_getLightEstimate
void ArFrame_getLightEstimate( const ArSession *session, const ArFrame *frame, ArLightEstimate *out_light_estimate )
Gets the current ArLightEstimate, if Lighting Estimation is enabled.
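A sketch of reading ambient pixel intensity from the estimate; assumes a lighting estimation mode is enabled in the session config:

```c
#include "arcore_c_api.h"

// Sketch: read the ambient intensity for this frame, falling back to 1.0
// when the estimate is not valid.
float get_pixel_intensity(const ArSession *session, const ArFrame *frame) {
  ArLightEstimate *estimate = NULL;
  ArLightEstimate_create(session, &estimate);
  ArFrame_getLightEstimate(session, frame, estimate);

  ArLightEstimateState state = AR_LIGHT_ESTIMATE_STATE_NOT_VALID;
  ArLightEstimate_getState(session, estimate, &state);

  float intensity = 1.0f;
  if (state == AR_LIGHT_ESTIMATE_STATE_VALID) {
    ArLightEstimate_getPixelIntensity(session, estimate, &intensity);
  }
  ArLightEstimate_destroy(estimate);
  return intensity;
}
```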
ArFrame_getSemanticLabelFraction
ArStatus ArFrame_getSemanticLabelFraction( const ArSession *session, const ArFrame *frame, ArSemanticLabel query_label, float *out_fraction )
Retrieves the fraction of pixels in the most recent semantics frame that are query_label.

Queries the semantic image provided by ArFrame_acquireSemanticImage for pixels labeled by query_label. This call is more efficient than retrieving the ArImage and performing a pixel-wise search for the detected labels.

Returns AR_SUCCESS or an error code.
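For example, querying the sky fraction (AR_SEMANTIC_LABEL_SKY) might look like this sketch:

```c
#include "arcore_c_api.h"

// Sketch: ask what fraction of the latest semantics frame is sky,
// without touching the semantic image itself.
float sky_fraction(const ArSession *session, const ArFrame *frame) {
  float fraction = 0.0f;
  if (ArFrame_getSemanticLabelFraction(session, frame, AR_SEMANTIC_LABEL_SKY,
                                       &fraction) != AR_SUCCESS) {
    return 0.0f;  // Semantics unavailable for this frame.
  }
  return fraction;  // In [0.0, 1.0].
}
```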
ArFrame_getTimestamp
void ArFrame_getTimestamp( const ArSession *session, const ArFrame *frame, int64_t *out_timestamp_ns )
Returns the timestamp in nanoseconds when this image was captured.
This can be used to detect dropped frames or measure the camera frame rate. The time base of this value is specifically not defined, but it is likely similar to clock_gettime(CLOCK_BOOTTIME).
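A sketch of using successive timestamps to spot dropped frames; the 30 fps figure in the comment is an assumption for illustration:

```c
#include <stdint.h>
#include "arcore_c_api.h"

// Sketch: estimate the camera frame interval from successive timestamps.
// A gap much larger than the expected ~33 ms (at 30 fps) suggests a
// dropped frame.
int64_t frame_interval_ns(const ArSession *session, const ArFrame *frame) {
  static int64_t last_ns = 0;
  int64_t now_ns = 0;
  ArFrame_getTimestamp(session, frame, &now_ns);
  int64_t delta = (last_ns != 0) ? (now_ns - last_ns) : 0;
  last_ns = now_ns;
  return delta;
}
```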
ArFrame_getUpdatedAnchors
void ArFrame_getUpdatedAnchors( const ArSession *session, const ArFrame *frame, ArAnchorList *out_anchor_list )
Gets the set of anchors that were changed by the ArSession_update that produced this Frame.
ArFrame_getUpdatedTrackData
void ArFrame_getUpdatedTrackData( const ArSession *session, const ArFrame *frame, const uint8_t *track_id_uuid_16, ArTrackDataList *out_track_data_list )
Gets the set of data recorded to the given track available during playback on this ArFrame.
If frames are skipped during playback, which can happen when the device is under load, played back track data will be attached to a later frame in order.
Note: Currently, playback continues internally while the session is paused. Track data from frames that were processed while the session was paused will be discarded.
ArFrame_getUpdatedTrackables
void ArFrame_getUpdatedTrackables( const ArSession *session, const ArFrame *frame, ArTrackableType filter_type, ArTrackableList *out_trackable_list )
Gets the set of trackables of a particular type that were changed by the ArSession_update call that produced this Frame.
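A sketch of the usual create/fill/iterate/destroy pattern for the trackable list, filtering for planes:

```c
#include <stdint.h>
#include "arcore_c_api.h"

// Sketch: walk the planes updated by the ArSession_update that produced
// this frame.
void handle_updated_planes(const ArSession *session, const ArFrame *frame) {
  ArTrackableList *list = NULL;
  ArTrackableList_create(session, &list);
  ArFrame_getUpdatedTrackables(session, frame, AR_TRACKABLE_PLANE, list);

  int32_t size = 0;
  ArTrackableList_getSize(session, list, &size);
  for (int32_t i = 0; i < size; ++i) {
    ArTrackable *trackable = NULL;
    ArTrackableList_acquireItem(session, list, i, &trackable);
    // ... cast to ArPlane and update app state ...
    ArTrackable_release(trackable);
  }
  ArTrackableList_destroy(list);
}
```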
ArFrame_hitTest
void ArFrame_hitTest( const ArSession *session, const ArFrame *frame, float pixel_x, float pixel_y, ArHitResultList *hit_result_list )
Performs a ray cast from the user's device in the direction of the given location in the camera view.
Intersections with detected scene geometry are returned, sorted by distance from the device; the nearest intersection is returned first.

Note: Significant geometric leeway is given when returning hit results. For example, a plane hit may be generated if the ray came close, but did not actually hit within the plane extents or plane bounds (ArPlane_isPoseInExtents and ArPlane_isPoseInPolygon can be used to determine these cases). A point (Point Cloud) hit is generated when a point is roughly within one finger-width of the provided screen coordinates.

Note: If not tracking, the hit_result_list will be empty.

Note: If called on an old frame (not the latest produced by ArSession_update), the hit_result_list will be empty.

Note: When using the front-facing (selfie) camera, the returned hit result list will always be empty, as the camera is not AR_TRACKING_STATE_TRACKING. Hit testing against tracked faces is not currently supported.

Note: In ARCore 1.24.0 or later on supported devices, if the ArDepthMode is enabled by calling ArConfig_setDepthMode, the hit_result_list includes ArDepthPoint values that are sampled from the latest computed depth image.
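A sketch of hit-testing the screen center and anchoring to the nearest result; the helper and the screen-center choice are illustrative:

```c
#include <stdint.h>
#include "arcore_c_api.h"

// Sketch: hit-test the screen center and create an anchor at the nearest
// intersection, if any. Results are sorted nearest-first.
void anchor_screen_center(ArSession *session, const ArFrame *frame,
                          float view_width_px, float view_height_px,
                          ArAnchor **out_anchor) {
  *out_anchor = NULL;
  ArHitResultList *hits = NULL;
  ArHitResultList_create(session, &hits);
  ArFrame_hitTest(session, frame, view_width_px * 0.5f,
                  view_height_px * 0.5f, hits);

  int32_t count = 0;
  ArHitResultList_getSize(session, hits, &count);
  if (count > 0) {
    ArHitResult *hit = NULL;
    ArHitResult_create(session, &hit);
    ArHitResultList_getItem(session, hits, 0, hit);  // Nearest hit.
    ArHitResult_acquireNewAnchor(session, hit, out_anchor);
    ArHitResult_destroy(hit);
  }
  ArHitResultList_destroy(hits);
}
```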
ArFrame_hitTestInstantPlacement
void ArFrame_hitTestInstantPlacement( const ArSession *session, const ArFrame *frame, float pixel_x, float pixel_y, float approximate_distance_meters, ArHitResultList *hit_result_list )
Performs a ray cast that can return a result before ARCore establishes full tracking.
The pose and apparent scale of attached objects depend on the ArInstantPlacementPoint tracking method and the provided approximate_distance_meters. The different tracking methods and the effects of apparent object scale are discussed in ArInstantPlacementPoint.

This function will succeed only if ArInstantPlacementMode is AR_INSTANT_PLACEMENT_MODE_LOCAL_Y_UP in the ARCore session configuration, the ARCore session tracking state is AR_TRACKING_STATE_TRACKING, and there are sufficient feature points to track the point in screen space.
ArFrame_hitTestRay
void ArFrame_hitTestRay( const ArSession *session, const ArFrame *frame, const float *ray_origin_3, const float *ray_direction_3, ArHitResultList *hit_result_list )
Similar to ArFrame_hitTest, but takes an arbitrary ray in world space coordinates instead of a screen space point.
ArFrame_recordTrackData
ArStatus ArFrame_recordTrackData( ArSession *session, const ArFrame *frame, const uint8_t *track_id_uuid_16, const void *payload, size_t payload_size )
Writes a data sample in the specified track.
The samples recorded using this API will be muxed into the recorded MP4 dataset in a corresponding additional MP4 stream.
For smooth playback of the MP4 on video players, and for future compatibility of the MP4 datasets with ARCore's playback of tracks, it is recommended that the samples are recorded at a frequency no higher than 90kHz.
Additionally, if the samples are recorded at a frequency lower than 1Hz, empty padding samples will be automatically recorded at approximately one second intervals to fill in the gaps.
Recording samples introduces additional CPU and/or I/O overhead and may affect app performance.
Returns AR_SUCCESS or an error code.
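A sketch of recording a small payload; the 16-byte track UUID here is a made-up placeholder that must match the UUID registered on the recording config:

```c
#include <stdint.h>
#include <string.h>
#include "arcore_c_api.h"

// Hypothetical track UUID, for illustration only.
static const uint8_t kTrackId[16] = {0x11, 0x22, 0x33, 0x44, 0x55, 0x66,
                                     0x77, 0x88, 0x99, 0xaa, 0xbb, 0xcc,
                                     0xdd, 0xee, 0xff, 0x00};

// Sketch: write one float per frame to a custom recording track.
void record_sample(ArSession *session, const ArFrame *frame, float value) {
  uint8_t payload[sizeof(float)];
  memcpy(payload, &value, sizeof(value));
  ArStatus status = ArFrame_recordTrackData(session, frame, kTrackId,
                                            payload, sizeof(payload));
  if (status != AR_SUCCESS) {
    // e.g. no recording in progress for this session.
  }
}
```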
ArFrame_transformCoordinates2d
void ArFrame_transformCoordinates2d( const ArSession *session, const ArFrame *frame, ArCoordinates2dType input_coordinates, int32_t number_of_vertices, const float *vertices_2d, ArCoordinates2dType output_coordinates, float *out_vertices_2d )
Transforms a list of 2D coordinates from one 2D coordinate system to another 2D coordinate system.
For Android view coordinates (AR_COORDINATES_2D_VIEW, AR_COORDINATES_2D_VIEW_NORMALIZED), the view information is taken from the most recent call to ArSession_setDisplayGeometry.

Must be called on the most recently obtained ArFrame object. If this function is called on an older frame, a log message will be printed and out_vertices_2d will remain unchanged.
Some examples of useful conversions:
- To transform from [0,1] range to screen-quad coordinates for rendering: AR_COORDINATES_2D_VIEW_NORMALIZED -> AR_COORDINATES_2D_TEXTURE_NORMALIZED
- To transform from [-1,1] range to screen-quad coordinates for rendering: AR_COORDINATES_2D_OPENGL_NORMALIZED_DEVICE_COORDINATES -> AR_COORDINATES_2D_TEXTURE_NORMALIZED
- To transform a point found by a computer vision algorithm in a CPU image into a point on the screen that can be used to place an Android View (e.g. Button) at that location: AR_COORDINATES_2D_IMAGE_PIXELS -> AR_COORDINATES_2D_VIEW
- To transform a point found by a computer vision algorithm in a CPU image into a point to be rendered using GL in clip-space ([-1,1] range): AR_COORDINATES_2D_IMAGE_PIXELS -> AR_COORDINATES_2D_OPENGL_NORMALIZED_DEVICE_COORDINATES
If input_coordinates is the same as output_coordinates, the input vertices will be copied to the output vertices unmodified.
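A sketch of the second conversion listed above (NDC -> texture coordinates) for a full-screen quad:

```c
#include "arcore_c_api.h"

// Sketch: map a full-screen quad from NDC to camera texture coordinates,
// typically refreshed when ArFrame_getDisplayGeometryChanged reports a
// change.
void update_background_uvs(const ArSession *session, const ArFrame *frame,
                           float out_uvs[8]) {
  static const float kNdcQuad[8] = {
      -1.f, -1.f, +1.f, -1.f, -1.f, +1.f, +1.f, +1.f};
  ArFrame_transformCoordinates2d(
      session, frame, AR_COORDINATES_2D_OPENGL_NORMALIZED_DEVICE_COORDINATES,
      /*number_of_vertices=*/4, kNdcQuad,
      AR_COORDINATES_2D_TEXTURE_NORMALIZED, out_uvs);
}
```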
ArFrame_transformCoordinates3d
void ArFrame_transformCoordinates3d( const ArSession *session, const ArFrame *frame, ArCoordinates2dType input_coordinates, int32_t number_of_vertices, const float *vertices_2d, ArCoordinates3dType output_coordinates, float *out_vertices_3d )
Transforms a list of 2D coordinates from one 2D coordinate space to 3D coordinate space.
See the Electronic Image Stabilization Developer Guide for more information.
The view information is taken from the most recent call to ArSession_setDisplayGeometry.

If Electronic Image Stabilization is off, the device coordinates return (-1, -1, 0) -> (1, 1, 0) and texture coordinates return the same coordinates as ArFrame_transformCoordinates2d with the Z component set to 1.0f.
In order to use EIS, your app should use EIS compensated screen coordinates and camera texture coordinates to pass on to shaders. Use the 2D NDC space coordinates as input to obtain EIS compensated 3D screen coordinates and matching camera texture coordinates.
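A sketch using 2D NDC input to obtain EIS-compensated texture coordinates, per the guidance above; the quad values are illustrative:

```c
#include "arcore_c_api.h"

// Sketch: obtain EIS-compensated texture coordinates for a full-screen
// quad; each output vertex carries a Z component used to compensate for
// perspective shift.
void update_eis_uvs(const ArSession *session, const ArFrame *frame,
                    float out_uvs_3d[12]) {
  static const float kNdcQuad[8] = {
      -1.f, -1.f, +1.f, -1.f, -1.f, +1.f, +1.f, +1.f};
  ArFrame_transformCoordinates3d(
      session, frame, AR_COORDINATES_2D_OPENGL_NORMALIZED_DEVICE_COORDINATES,
      /*number_of_vertices=*/4, kNdcQuad,
      AR_COORDINATES_3D_EIS_TEXTURE_NORMALIZED, out_uvs_3d);
}
```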
ArFrame_transformDisplayUvCoords
void ArFrame_transformDisplayUvCoords( const ArSession *session, const ArFrame *frame, int32_t num_elements, const float *uvs_in, float *uvs_out )
Transforms the given texture coordinates to correctly show the background image.

This accounts for the display rotation and any additional required adjustment. For performance, this function should be called only if ArFrame_getDisplayGeometryChanged indicates a change.

Deprecated in release 1.7.0. Use ArFrame_transformCoordinates2d instead.