solution:
metadata:
id: androidify
title: |
Androidify: Building powerful AI-driven experiences with Jetpack
Compose, Gemini and CameraX
description: >
Learn how we built Androidify, a new open-source app showcasing the
latest Android technologies. This solution explores how we created a
delightful, adaptive UI with Jetpack Compose and Material 3 Expressive,
integrated powerful generative AI features with the Gemini models through
Firebase, and built a robust camera experience with CameraX.
fallbackCta:
title: "View on GitHub"
icon: "github_white"
url: "https://github.com/android/androidify"
sections:
- type: watch
youtubeId: p-Oy5lSeegg
title: |
Building Androidify: A Deep Dive into Building AI-Driven Experiences on Android
description: >
Explore the architecture and key features of the Androidify app, a
showcase of modern Android development practices. This video covers the
integration of Jetpack Compose for a stunning UI, the power of Gemini
and Firebase for AI-driven features, and the use of CameraX for a
seamless camera experience, all designed to be adaptive across various
devices.
technologies:
- icon: /external-assets/jetpack-compose.png
label: "Jetpack Compose"
url: "https://developer.android.com/jetpack/compose"
- icon: /external-assets/firebase.svg
label: "Firebase"
url: "https://firebase.google.com/"
- icon: /external-assets/gemini.svg
label: "Gemini"
url: "https://ai.google.dev/"
- icon: /external-assets/photo_camera.svg
label: "CameraX"
url: "https://developer.android.com/training/camerax"
- icon: /external-assets/android.svg
label: "Adaptive Design"
url: "https://developer.android.com/guide/topics/large-screens"
- icon: /external-assets/material-logo.png
label: "Material 3 Expressive"
url: "https://m3.material.io/"
- icon: /external-assets/wear.svg
label: "Wear OS"
url: "https://developer.android.com/wear"
- type: explore
mode: video
subsections:
- id: video-1
title: "Image validation and description with Gemini"
description: >
Androidify uses the Gemini models, accessed via the Firebase AI Logic SDK,
to power its core functionality. When a user takes a photo, the
multimodal capabilities of the Gemini API are used to validate the
image and ensure a high-quality input. The app then uses Gemini to
generate a descriptive prompt from the image, which is passed to
Imagen 3 to create the final Android bot avatar.
videoPath: "external-assets/androidify_photo_flow_umbrella.mp4"
orientation: portrait
frame: responsive
layout: sidebyside
logs:
- timestamp: 7000
summary: "User picks an image from the gallery."
sequence:
- compose-ui
- timestamp: 9000
summary: "User selects desired bot color."
sequence:
- compose-ui
- timestamp: 14000
summary: "Image is validated through Firebase AI Logic SDK, which calls Gemini."
sequence:
- compose-ui
- data-layer
- firebase-ai
- vertex-ai
- gemini
inspect: gemini-image-validation/gemini-image-validation-0
- timestamp: 15000
summary: "Gemini generates a descriptive prompt from the image."
sequence:
- gemini
- vertex-ai
- firebase-ai
- data-layer
inspect: gemini-image-validation/gemini-image-validation-1
- timestamp: 17000
summary: "Imagen is called with the prompt to generate the Android bot avatar."
sequence:
- data-layer
- firebase-ai
- vertex-ai
- imagen
inspect: image-generation/image-generation-0
- timestamp: 19000
summary: "Android bot avatar is displayed."
sequence:
- imagen
- vertex-ai
- firebase-ai
- data-layer
- compose-ui
- id: video-2
title: "Delightful UIs with Jetpack Compose and Material 3 Expressive"
description: >
The Androidify app is built entirely with Jetpack Compose, enabling
a fully declarative UI. We leveraged the new Material 3 Expressive
library to create a more personalized and engaging user experience.
This includes using `MaterialShapes` for unique component designs,
such as the 9-sided cookie shape for the camera button, and
`MotionScheme` for consistent, delightful animations.
This example shows the camera button and the shared element transition between screens.
videoPath: "external-assets/morph_shared_element.mp4"
orientation: portrait
frame: responsive
layout: sidebyside
logs:
- timestamp: 1000
summary: "Camera button rendered with MaterialShapes.Cookie9Sided."
- timestamp: 2000
summary: "Camera button clicked, triggering a shared element transition to the camera screen."
inspect: compose-expressive/compose-expressive-1
- timestamp: 8000
summary: "Shared element with clipping animation."
- type: inspect
subsections:
- id: gemini-image-validation
title: "Gemini for image validation and description"
examples:
- title: "Image validation with Gemini"
icons:
- /external-assets/firebase.svg
- /external-assets/gemini.svg
mode: markdown
info: >
The `FirebaseAiDataSource` class is central to the app's AI
functionality. It wraps the Firebase AI Logic SDK together with the
prompts and commands used throughout the app.
This first example shows the image validation prompt, which uses
Gemini to check that the image meets certain criteria, such as containing a person and having enough detail to produce a good image description.
code:
language: "kotlin"
file: "/external-assets/FirebaseAIDataSource_ImageValidation.kt"
links:
- url: "https://github.com/android/androidify/blob/1400cafabc37d22526d114e9cc49e45512426fd9/core/network/src/main/java/com/android/developers/androidify/vertexai/FirebaseAiDataSource.kt#L4"
type: "github"
- title: "Image description generation with Gemini"
icons:
- /external-assets/firebase.svg
- /external-assets/gemini.svg
mode: markdown
info: >
Once an image has been validated, the app uses the multimodal
capabilities of the Gemini API to process the image and generate a
descriptive text prompt from it.
code:
language: "kotlin"
file: "/external-assets/FirebaseAIDataSource_Description.kt"
links:
- url: "https://github.com/android/androidify/blob/1400cafabc37d22526d114e9cc49e45512426fd9/core/network/src/main/java/com/android/developers/androidify/vertexai/FirebaseAiDataSource.kt#L4"
type: "github"
- id: image-generation
title: "Image generation with Imagen"
examples:
- title: "Image generation with Imagen"
icons:
- /external-assets/firebase.svg
- /external-assets/imagen.svg
mode: markdown
info: >
Once the descriptive prompt for the input image has been generated, the app uses Imagen 3 to create the final Android bot avatar. Androidify uses a custom fine-tuned model, but the base Imagen 3 model could also be used, as shown in this example. The final prompt combines extra instructions, covering what to focus on and how the body should be positioned, with the descriptive prompt generated from the input image.
code:
language: "kotlin"
file: "/external-assets/FirebaseAIDataSource_ImageGen.kt"
links:
- url: "https://github.com/android/androidify/blob/1400cafabc37d22526d114e9cc49e45512426fd9/core/network/src/main/java/com/android/developers/androidify/vertexai/FirebaseAiDataSource.kt#L4"
type: "github"
- url: "https://android-developers.googleblog.com/2025/05/androidify-building-ai-driven-experiences-jetpack-compose-gemini-camerax.html"
type: "other"
label: "Read the blog post"
- id: compose-expressive
title: "Jetpack Compose and Material 3 Expressive"
examples:
- title: "Using Material 3 Expressive Theme"
icons:
- /external-assets/jetpack-compose.png
- /external-assets/material-logo.png
mode: markdown
info: >
In Androidify, we used Material 3 Expressive to create a more
dynamic and personal UI. The first step in adopting Material 3 Expressive
is to switch the app's top-level theme to `MaterialExpressiveTheme`.
code:
language: "kotlin"
file: "external-assets/Compose_ExpressiveTheme.kt"
links:
- url: "https://github.com/android/androidify/blob/main/core/theme/src/main/java/com/android/developers/androidify/theme/Theme.kt"
type: "github"
- url: "https://android-developers.googleblog.com/2025/05/androidify-building-delightful-ui-with-compose.html"
type: "other"
label: "Read the blog post"
- title: "Shared Element Transitions with MaterialShapes"
icons:
- /external-assets/jetpack-compose.png
- /external-assets/material-logo.png
mode: markdown
info: >
The Androidify app uses preset `MaterialShapes` in a few places to create unique
and delightful component designs. For example, the camera button
is a 9-sided cookie shape, and the shape-morphing animation used
for shared element transitions between screens is also implemented
with `MaterialShapes`.
code:
language: "kotlin"
file: "external-assets/SharedElementsMaterialShapes.kt"
links:
- url: "https://www.youtube.com/watch?v=0moEXBqNDZI&feature=youtu.be"
type: "other"
label: "Watch the video"
- url: "https://android-developers.googleblog.com/2025/05/androidify-building-delightful-ui-with-compose.html"
type: "other"
label: "Read the blog post"
- id: compose-adaptive
title: "Adaptive Layouts with Jetpack Compose"
examples:
- title: "Slot based camera layout"
icons:
- /external-assets/jetpack-compose.png
mode: markdown
info: >
Androidify is designed to look great and function seamlessly across candy-bar phones, foldables, and tablets. The goal of adaptive development is to avoid reimplementing the app for each form factor; instead, we extract reusable composables and use APIs like `WindowSizeClass` to decide which layout to display. The `CameraLayout` composable is adaptive, adjusting its layout for different screen sizes and device postures, such as tabletop mode on foldable devices.
code:
language: "kotlin"
file: "external-assets/CameraLayout.kt"
links:
- url: "https://github.com/android/androidify/blob/main/feature/camera/src/main/java/com/android/developers/androidify/camera/CameraLayout.kt#L62"
type: "github"
- title: "Foldable support"
icons:
- /external-assets/jetpack-compose.png
mode: markdown
info: >
The app actively checks for foldable device features. The camera screen uses `WindowInfoTracker` to obtain `FoldingFeature` information and optimize the layout for postures such as tabletop mode.
code:
language: "kotlin"
file: "external-assets/FoldableSupport.kt"
links:
- url: "https://github.com/android/androidify/blob/main/core/util/src/main/java/com/android/developers/androidify/util/LayoutUtils.kt#L67"
type: "github"
- title: "Rear camera display"
icons:
- /external-assets/jetpack-compose.png
mode: markdown
info: >
Support for devices with multiple displays is provided via the `RearCameraUseCase`, which shows the camera preview on the external display when the device is unfolded, while the main content remains on the internal screen.
code:
language: "kotlin"
file: "external-assets/RearCameraUseCase.kt"
links:
- url: "https://github.com/android/androidify/blob/main/feature/camera/src/main/java/com/android/developers/androidify/camera/RearCameraUseCase.kt"
type: "github"
- id: watch-faces
title: "Watch faces"
examples:
- title: "Watch Face Format"
icons:
- /external-assets/wear.svg
mode: markdown
info: >
Androidify uses watch face templates written in the Watch Face Format, an
XML-based language for defining watch faces. Your Androidify bot avatar is added to the
watch face as an image asset before the watch face is packaged as an APK.
code:
language: "xml"
file: "external-assets/watchface.xml"
links:
- url: "https://github.com/android/androidify/blob/main/watchface/src/main/assets/androiddigital/res/raw/watchface.xml#L203"
type: "github"
- title: "Installing watch faces"
icons:
- /external-assets/wear.svg
mode: markdown
info: >
The Watch Face Push API lets watch faces be installed and managed from the Wear
OS app. Watch faces are packaged as Watch Face Format APKs and installed
through the API by passing both a handle to the APK and a validation token.
code:
language: "kotlin"
file: "external-assets/WatchFacePush.kt"
links:
- url: "https://github.com/android/androidify/blob/main/wear/src/main/java/com/android/developers/androidify/watchfacepush/WatchFaceOnboardingRepository.kt#L88"
type: "github"
- type: quiz
questions:
- title: "What Jetpack library is used to create the user interface in Androidify?"
answers:
- answer: "Jetpack DataStore"
- answer: "Jetpack Compose"
correct: true
- answer: "Jetpack Navigation"
- title: "Which AI model is used for generating the final Android bot avatar?"
answers:
- answer: "Gemini Nano"
- answer: "Imagen 3"
correct: true
- answer: "Gemini 2.5 Flash"
- title: "How does the Androidify app adapt to different screen sizes?"
answers:
- answer: "By using different APKs for each screen size."
- answer: "By using the `WindowSizeClass` API and creating reusable composables."
correct: true
- answer: "It only supports a single screen size."
- title: "What API is used to check for foldable device features in Androidify?"
answers:
- answer: "WindowInfoTracker"
correct: true
- answer: "WindowSdkExtensions"
- answer: "FoldingFeature"
- title: "What pattern is used for the camera layout in Androidify?"
answers:
- answer: "Slot-based pattern"
correct: true
- answer: "Composable hierarchy"
- answer: "Inheritance"
- title: "What language is used to define watch faces in Androidify?"
answers:
- answer: "Kotlin"
- answer: "Watch Face Format"
correct: true
- answer: "Natural language with AI"
- title: "What API is used to install and manage watch faces in Androidify?"
answers:
- answer: "Watch Faces API"
- answer: "Watch Faces Operations Library"
- answer: "Watch Face Push API"
correct: true
- type: build
promoType: "android-studio"
links:
- url: "https://github.com/android/androidify"
label: "Open on GitHub"
- url: "https://developers.google.com/solutions"
label: "Explore more Solutions"
architecture:
entities:
- id: "compose-ui"
icon: "/external-assets/jetpack-compose.png"
label: "Jetpack Compose UI"
x: 0
y: 0
connections:
- from: "compose-ui"
to: "camera-x"
- from: "compose-ui"
to: "data-layer"
inspect: compose-expressive
- id: "camera-x"
icon: "/external-assets/photo_camera.svg"
label: "CameraX"
x: 2
y: 1
- id: "data-layer"
icon: "/external-assets/android.svg"
label: "Data Layer"
x: 2
y: -1
connections:
- from: "data-layer"
to: "firebase-ai"
inspect: gemini-image-validation/gemini-image-validation-0
- id: "firebase-ai"
icon: "/external-assets/firebase.svg"
label: "Firebase AI SDK"
x: 4
y: -1
connections:
- from: "firebase-ai"
to: "vertex-ai"
- id: "vertex-ai"
icon: "/external-assets/vertex-ai.svg"
label: "Vertex AI"
x: 6
y: -1
connections:
- from: "vertex-ai"
to: "gemini"
- from: "vertex-ai"
to: "imagen"
inspect: image-generation/image-generation-0
- id: "gemini"
icon: "/external-assets/gemini.svg"
label: "Gemini"
x: 8
y: -2
- id: "imagen"
icon: "/external-assets/imagen.svg"
label: "Imagen"
x: 8
y: 0
badges:
startBadge: >
https://developers.google.com/profile/badges/playlists/solutions/androidify/view
exploreBadge: >
https://developers.google.com/profile/badges/playlists/solutions/androidify/learn
quizBadge: >
https://developers.google.com/profile/badges/playlists/solutions/androidify/quiz
buildBadge: >
https://developers.google.com/profile/badges/playlists/solutions/androidify/action