Calibrate treatment priors

Calibration is the process of using experiment results and other domain knowledge to set channel-specific priors.

Meridian's default priors are moderately informative, but marketing data can be noisy. It's often helpful to calibrate treatment priors, which lets you guide the model with your real-world business knowledge, whether that comes from rigorous experiments or from your professional expectations.

Think of a calibrated prior as a "guardrail." It can be tempting to have weak or no guardrails (in the form of non-informative priors), but doing so is risky and can lead to poor model results. For more information, see Flat priors and regularization.

For example, without a reasonably strong prior the model might estimate that a channel with low spend is driving massive revenue. Calibrating priors helps guide the model's estimates to a realistic range.

Formulate your prior beliefs

Calibrating priors isn't a precise calculation; it combines data with domain knowledge and subjective judgment.

After choosing your metric, consider the following when calibrating.

Use incrementality experiments

Incrementality experiments are perhaps the strongest basis for formulating your intuition, but translating experiment results into priors isn't a precise formula.

From experiment to prior

There is no single formula to translate an experiment result into a prior. Prior knowledge in the Bayesian sense is more broadly defined and doesn't need to come from a formulaic calculation.

One common approach is to use an experiment's point estimate as the prior mean and its standard error as the prior standard deviation.
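As a sketch of the arithmetic involved, the following converts a point estimate and standard error into the (mu, sigma) parameters of a lognormal distribution with that mean and standard deviation. This is standard moment matching, not Meridian's own code, and the experiment numbers are hypothetical.

```python
import math

def lognormal_params_from_mean_std(mean: float, std: float) -> tuple[float, float]:
    """Return (mu, sigma) of a lognormal distribution with the given mean and std.

    Moment matching: if X ~ LogNormal(mu, sigma), then
    E[X]   = exp(mu + sigma**2 / 2)
    Var[X] = (exp(sigma**2) - 1) * exp(2 * mu + sigma**2)
    """
    sigma2 = math.log(1.0 + (std / mean) ** 2)
    mu = math.log(mean) - sigma2 / 2.0
    return mu, math.sqrt(sigma2)

# Hypothetical experiment: ROI point estimate 2.0, standard error 0.5.
mu, sigma = lognormal_params_from_mean_std(2.0, 0.5)
```

A lognormal is a common choice for ROI priors because it constrains the ROI to be positive while allowing a long right tail.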

Meridian's Bayesian framework lets you thoughtfully combine different pieces of evidence—such as results from multiple experiments or other domain knowledge—and their uncertainties into a single prior distribution, rather than rigidly applying one experiment's result.

Experiment ROI versus MMM ROI

In statistical terms, experiments and MMMs have different estimands and measurement goals—that is, they define ROI in different ways.

The ROI measured by an experiment rarely aligns perfectly with the ROI measured by MMM. Experiment results are always tied to the specific conditions of the experiment, such as the time window, the geographic regions, and the campaign settings.

Experiment results provide highly relevant information, but remember that translating them into an MMM prior involves an additional layer of uncertainty beyond just the experiment's standard error.

Relevance considerations

If using past experiments, carefully consider their relevance. Before using experiment results to inform priors, ask yourself these questions:

  • Timing: Was the experiment run before, during, or after the time period covered by your MMM data? Results from a different time period might not be directly applicable.
  • Duration: Was the experiment long enough to capture the long-term effects of marketing? Short experiments might not be.
  • Complexity: If the experiment involved a mixture of channels, it might be hard to get clear insights into individual channel performance.
  • Estimand differences: The MMM counterfactual is zero spend, whereas some experiments might define ROI against a different baseline, such as reduced spend.
  • Population differences: Was the population targeted in the experiment the same as the population considered in the MMM?

Use domain knowledge and intuition

When possible, you can and should use domain knowledge to inform your priors. Setting priors is about using your intuition to define a plausible range of values for each marketing channel's performance. This process relies on informed judgment drawn from various sources, such as domain knowledge, previous results, industry benchmarks, and especially incrementality experiments.

Incrementality experiments are perhaps the strongest basis for formulating your intuition, but even experiments have limitations, and different experiments can yield different results. Your ROI prior should be a synthesis of all the information available to you.

If your intuition about a channel's performance is weak, setting a weak prior (one with a large standard deviation) is acceptable. However, when your intuition is stronger, incorporating it through priors makes your model more powerful and less susceptible to noise in the data.

In practice, you almost always have some intuition. For example, you probably believe that the probability of a causal ROI of 50 is quite low for most channels. If your company has run many experiments over the years and has never observed an ROI greater than 6 for a specific channel, then your prior should reflect this by assigning very low probability to ROI values greater than 6. Use whatever information you have to inform your priors.

No matter how you set your priors, we always recommend plotting them and examining their percentiles. Ask yourself if the probabilities align with your expectations. For example, if 80% of your prior probability distribution is greater than an ROI of 1.0, does that reflect your prior confidence in the channel's profitability?
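For a lognormal ROI prior, such a percentile check can be done directly from the lognormal CDF. This is a minimal sketch with hypothetical log-scale parameters, not values taken from Meridian:

```python
import math

def lognormal_prob_above(threshold: float, mu: float, sigma: float) -> float:
    """P(X > threshold) for X ~ LogNormal(mu, sigma), via the standard normal CDF."""
    z = (math.log(threshold) - mu) / sigma
    normal_cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return 1.0 - normal_cdf

# Hypothetical prior: mu = 0.2, sigma = 0.9 on the log scale.
p_profitable = lognormal_prob_above(1.0, mu=0.2, sigma=0.9)
print(f"P(ROI > 1.0) = {p_profitable:.2f}")
```

If the printed probability is higher or lower than your actual confidence that the channel is profitable, adjust the prior's parameters before fitting.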

Use your confidence level to set the prior's standard deviation

Use the strength of your prior belief to inform the standard deviation of the prior. If you have a strong belief in the effectiveness of a channel—for example, from multiple experiments with similar ROI point estimates or results from previous MMMs—you can set a smaller standard deviation for the prior to indicate strong confidence. Conversely, if you are skeptical about how well an experiment's results translate to the MMM, you can use a larger standard deviation to reflect that uncertainty. Remember that priors are not rigid constraints but rather starting points. If the data presents strong evidence that contradicts the prior, the model will adjust accordingly; the prior's standard deviation determines how much weight is given to your initial belief versus the evidence in the data.
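As one hedged illustration of how consistent evidence tightens a prior, the sketch below pools several hypothetical experiment estimates with inverse-variance weighting. The pooled standard deviation is smaller than any single experiment's, which is the "multiple experiments with similar point estimates" case described above.

```python
import math

def pool_estimates(means: list[float], stds: list[float]) -> tuple[float, float]:
    """Inverse-variance-weighted pooling of independent estimates.

    Each estimate is weighted by 1 / std**2; the pooled standard deviation
    shrinks as consistent evidence accumulates.
    """
    weights = [1.0 / s**2 for s in stds]
    pooled_mean = sum(w * m for w, m in zip(weights, means)) / sum(weights)
    pooled_std = math.sqrt(1.0 / sum(weights))
    return pooled_mean, pooled_std

# Three hypothetical experiments with similar ROI point estimates.
mean, std = pool_estimates([1.8, 2.1, 2.0], [0.6, 0.5, 0.7])
```

This pooling assumes the experiments are independent and measure the same estimand, which, as discussed earlier, is rarely exactly true. Widening the pooled standard deviation to reflect that extra uncertainty is often prudent.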

Flat priors and regularization

In Meridian, it is helpful to view informative priors as a mathematical form of regularization. Regularization is a technique in statistical modeling that introduces additional information or constraints to prevent a model from overfitting to noisy data.

Common regularization methods, such as ridge or LASSO regression, typically constrain estimates by pulling them toward zero. Meridian's Bayesian framework offers a more flexible approach by using the prior itself to perform the regularization. Instead of defaulting to zero, an informative prior regularizes the model toward a realistic range of values grounded in business knowledge or historical data.
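To make the contrast concrete, here is a small numerical sketch, using hypothetical data and the scalar closed-form penalized least-squares solution, comparing ridge-style shrinkage toward zero with shrinkage toward a nonzero prior mean:

```python
# Hypothetical data for one noisy "channel"; the true coefficient is 2.0.
x = [0.5, 1.0, 1.5, 2.0, 2.5]
y = [4.0, 0.5, 5.5, 2.0, 7.0]  # noisy observations around 2.0 * x

def penalized_ls(x, y, lam, prior_mean):
    """argmin_b sum((y_i - b * x_i)^2) + lam * (b - prior_mean)^2, closed form."""
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    return (sxy + lam * prior_mean) / (sxx + lam)

lam = 20.0
b_ridge = penalized_ls(x, y, lam, prior_mean=0.0)  # shrinks toward 0
b_prior = penalized_ls(x, y, lam, prior_mean=2.0)  # shrinks toward 2.0
```

With the same penalty strength, the zero-centered penalty drags the estimate well below the true value, while the penalty centered on a realistic prior mean keeps it close. An informative prior in a Bayesian model plays the role of the second penalty.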

Relying on the prior to regularize the model is especially important because it provides a necessary mathematical anchor when your data is limited or lacks a clear signal. Although it might be tempting to use flat, non-informative priors for channels with no prior experiments, doing so removes this stabilizing effect and can lead to high-variance or unreliable estimates. If you don't have past experiment results or business intuition, start with Meridian's default priors. These defaults act as sensible starting points that regularize the model effectively. Finding the ideal degree of regularization is often an iterative process that involves checking out-of-sample model fit at various regularization strengths.

Code examples

The following examples demonstrate how to define and use prior distributions within Meridian.

Example: Define a lognormal prior from intuition or experiments

Meridian provides two helper functions for constructing lognormal distributions from experimental data or other prior knowledge:

  • prior_distribution.lognormal_dist_from_mean_std: Constructs a lognormal distribution from a given mean and standard deviation. You could, for example, use an experiment's point estimate and standard error here.
  • prior_distribution.lognormal_dist_from_range: Constructs a lognormal distribution where a specified probability mass (such as 95%) falls within a given lower and upper bound. You could, for example, use an experiment's 95% confidence interval here.

For a complete example of defining a lognormal prior from intuition or experiments, see the Meridian documentation.
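The sketch below shows how these helpers might be wired into a model specification. Exact argument names and signatures can vary by Meridian version, so treat the calls here as assumptions and check the API reference; the numbers are hypothetical.

```python
# Sketch only: verify signatures against your Meridian version's API reference.
from meridian.model import prior_distribution, spec

# From an experiment's point estimate (mean) and standard error (std):
roi_dist = prior_distribution.lognormal_dist_from_mean_std(2.0, 0.5)

# Or from a 95% confidence interval (lower and upper bound):
# roi_dist = prior_distribution.lognormal_dist_from_range(1.1, 3.2)

# Use the distribution as the ROI prior in the model spec.
prior = prior_distribution.PriorDistribution(roi_m=roi_dist)
model_spec = spec.ModelSpec(prior=prior)
```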

Example: Tune the ROI calibration based on experiment results

See Configure paid media priors for how to set ROI priors based on experiment results.

Plot your priors to confirm they match intuition

We recommend that you plot and visualize custom priors, especially ROI priors. This helps you confirm that the custom prior aligns with your expectations before you proceed with the analysis.

Additional considerations for calibrating treatment priors

This section contains additional considerations for calibrating treatment priors.

Consider the ROI calibration period

You can consider using the roi_calibration_period argument in ModelSpec if your prior information is only relevant for a specific time period. However, it's recommended to set a prior on the full modeling time window whenever possible. All available information, including experiments and domain knowledge, should be used to set this prior. For more information, see Set the ROI calibration period.

Understand prior posterior shifts

Comparing the prior and posterior distributions can show whether the model is learning from the data or is being strongly influenced by the prior you calibrated.

To learn more, see Prior-posterior shifts in Meridian's post-modeling quality checks.