Experiments overview

  • The Google Ads API supports experiments for A/B testing changes to campaigns, keywords, bidding strategies, and more.

  • An experiment splits traffic between a control (the existing campaign) and one or more treatments with your changes applied, so their performance can be compared directly.

  • The typical workflow is to create an experiment, define control and treatment arms, schedule it, compare metrics, and finally end, promote, or graduate the experiment.

The Google Ads API provides resources for A/B testing new ideas for campaigns, keywords, bidding strategies, and more. Depending on what you want to test, there are several different workflows available.

All experiment workflows split traffic between a control group or campaign and one or more treatment groups or campaigns that have your changes applied. By comparing performance metrics between the control and treatment groups, you can evaluate how effective those changes are.
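Once both arms have accrued traffic, the comparison itself is simple arithmetic. A minimal sketch (the metric chosen and all figures are illustrative; in practice you would pull clicks and conversions from the API's reporting interface):

```python
# Sketch: compare a control and a treatment arm on conversion rate.
# The numbers are illustrative, not real campaign data.

def conversion_rate(clicks: int, conversions: float) -> float:
    """Conversions per click; 0.0 when there are no clicks."""
    return conversions / clicks if clicks else 0.0

def relative_lift(control_rate: float, treatment_rate: float) -> float:
    """Relative change of the treatment over the control, e.g. 0.10 = +10%."""
    return (treatment_rate - control_rate) / control_rate

control = conversion_rate(clicks=10_000, conversions=400)    # 0.04
treatment = conversion_rate(clicks=10_000, conversions=440)  # 0.044
lift = relative_lift(control, treatment)
print(f"Treatment lift: {lift:+.1%}")  # Treatment lift: +10.0%
```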

Experiment workflows

The Google Ads API supports three distinct experiment workflows:

System-managed

This workflow is ideal for testing changes to existing campaigns. A new treatment campaign is automatically created based on a control campaign, and you can modify this treatment campaign before the experiment starts. The one exception is PMAX_REPLACEMENT_SHOPPING experiments, which let you either create a new Performance Max campaign based on a control Shopping campaign, or use an existing Performance Max campaign as the treatment campaign.

Traffic is split between the control and treatment campaigns during the experiment. This is the closest workflow to standard A/B testing where two parallel campaigns run simultaneously.
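As an illustration, a system-managed experiment begins with a mutate call that creates the Experiment resource. The sketch below only builds the JSON payload for the REST interface (`customers/{customer_id}/experiments:mutate`); field names follow the protobuf JSON mapping, and the exact fields and API version are assumptions to verify against the reference before use.

```python
# Sketch: JSON payload to create a SEARCH_CUSTOM experiment via the REST
# interface. Field names follow the Google Ads API protobuf JSON mapping;
# treat them, and the enum values, as assumptions to check against the
# current API reference.

def build_create_experiment_payload(name: str, suffix: str) -> dict:
    return {
        "operations": [
            {
                "create": {
                    "name": name,
                    "suffix": suffix,          # appended to treatment campaign names
                    "type": "SEARCH_CUSTOM",   # selects the system-managed workflow
                    "status": "SETUP",         # new experiments start in SETUP
                }
            }
        ]
    }

payload = build_create_experiment_payload("Bidding test", "[experiment]")
```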

Intra-campaign

This workflow is used for testing a specific feature, such as Broad Match or Performance Max, within an existing campaign. Traffic is split within the single campaign, such that only some of the traffic is exposed to the feature being tested. This is useful when you want to test the impact of a single change without creating an entirely separate campaign.
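The split itself is expressed on ExperimentArm resources. Below is a sketch of the arm payloads for a 50/50 split in REST-style JSON; the resource names and IDs are placeholders, and the field names follow the protobuf JSON mapping, so verify both against the current reference.

```python
# Sketch: control and treatment ExperimentArm payloads with a 50/50
# traffic split. All IDs are placeholders; field names are assumptions
# based on the protobuf JSON mapping.

CUSTOMER_ID = "1234567890"   # placeholder
EXPERIMENT_ID = "111"        # placeholder
CAMPAIGN_ID = "222"          # placeholder: the existing campaign under test

def build_arm(name: str, control: bool, traffic_split: int) -> dict:
    arm = {
        "experiment": f"customers/{CUSTOMER_ID}/experiments/{EXPERIMENT_ID}",
        "name": name,
        "control": control,
        "trafficSplit": traffic_split,  # percentage of traffic for this arm
    }
    if control:
        # The control arm points at the existing campaign being tested.
        arm["campaigns"] = [f"customers/{CUSTOMER_ID}/campaigns/{CAMPAIGN_ID}"]
    return arm

arms = [build_arm("control", True, 50), build_arm("treatment", False, 50)]
```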

Asset optimization

This workflow is designed specifically for testing asset variations within Performance Max campaigns. It lets you test different sets of assets against each other to see which combination performs best.

Workflows and types summary

The workflow you use depends on the ExperimentType you select when creating your experiment. The following table summarizes the available types and their corresponding workflows.

| Workflow | Experiment types | Description |
| --- | --- | --- |
| System-managed | SEARCH_CUSTOM, DISPLAY_CUSTOM, HOTEL_CUSTOM, YOUTUBE_CUSTOM, PMAX_REPLACEMENT_SHOPPING | Creates or uses separate treatment campaigns to test against a control campaign. |
| Intra-campaign | ADOPT_AI_MAX, ADOPT_BROAD_MATCH_KEYWORDS | Tests a feature by splitting traffic within a single campaign. |
| Asset optimization | OPTIMIZE_ASSETS | Tests different asset combinations in Performance Max campaigns. |
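When choosing a workflow programmatically, the table above reduces to a simple lookup. A small sketch, transcribed directly from the table:

```python
# Lookup from ExperimentType enum value to workflow, taken from the
# summary table above.
WORKFLOW_BY_TYPE = {
    "SEARCH_CUSTOM": "system-managed",
    "DISPLAY_CUSTOM": "system-managed",
    "HOTEL_CUSTOM": "system-managed",
    "YOUTUBE_CUSTOM": "system-managed",
    "PMAX_REPLACEMENT_SHOPPING": "system-managed",
    "ADOPT_AI_MAX": "intra-campaign",
    "ADOPT_BROAD_MATCH_KEYWORDS": "intra-campaign",
    "OPTIMIZE_ASSETS": "asset optimization",
}

print(WORKFLOW_BY_TYPE["ADOPT_AI_MAX"])  # intra-campaign
```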

Map the API to the UI

The following table summarizes how API experiment types map to experiment types in the Google Ads UI.

| API ExperimentType | Google Ads UI equivalent |
| --- | --- |
| ADOPT_AI_MAX | AI Max for Search campaigns |
| ADOPT_BROAD_MATCH_KEYWORDS | Broad match keywords for Search campaigns |
| DISPLAY_CUSTOM | Custom Display |
| HOTEL_CUSTOM | Custom Hotel |
| OPTIMIZE_ASSETS | Assets provided by you |
| PMAX_REPLACEMENT_SHOPPING | Performance Max versus Shopping |
| SEARCH_CUSTOM | Custom Search |
| YOUTUBE_CUSTOM | Custom Video |
|  | Custom Demand Gen |

Experiment lifecycle

The process of managing an experiment typically follows these steps, with some variations across workflows:

  1. Setup: Create an Experiment and one or more ExperimentArm resources. If applicable, modify the treatment arms.
  2. Schedule: Start the experiment. Some workflows require scheduling to materialize or prepare campaigns before they can serve.
  3. Run and report: While the experiment is running, query experiment or other resources for metrics to compare performance between control and treatment arms.
  4. Complete: Once you have enough information, you can complete the experiment using one of the following operations:
    • End: Stops the experiment. The treatment campaigns or arms stop serving.
    • Promote: Applies the changes from the treatment arm to the control arm or campaign.
    • Graduate: Converts a treatment campaign into a fully independent, non-experimental campaign.
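On the wire, the scheduling and completion operations correspond to custom methods on the experiment resource. The sketch below builds REST-style URLs for these lifecycle calls; the version path and the lowerCamel method names mirror the ExperimentService gRPC methods (ScheduleExperiment, EndExperiment, PromoteExperiment, GraduateExperiment) and are assumptions to verify against the ExperimentService reference.

```python
# Sketch: REST-style lifecycle calls on an Experiment resource.
# Both the version segment and the custom-method names are assumptions
# based on the usual gRPC-to-REST mapping; check them against the
# current ExperimentService reference.

API_VERSION = "v21"  # assumption: substitute the version you target

def lifecycle_url(customer_id: str, experiment_id: str, method: str) -> str:
    resource = f"customers/{customer_id}/experiments/{experiment_id}"
    return f"https://googleads.googleapis.com/{API_VERSION}/{resource}:{method}"

# Typical order: schedule first, then (after gathering data) exactly one
# of the completion calls.
schedule = lifecycle_url("123", "456", "scheduleExperiment")
end      = lifecycle_url("123", "456", "endExperiment")
promote  = lifecycle_url("123", "456", "promoteExperiment")
graduate = lifecycle_url("123", "456", "graduateExperiment")
```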

Next steps

To learn how to set up an experiment, see the guide for the workflow you need: