Part 1 of 3 on debugging Attribution Reporting. Learn why debugging matters and when to use debug reports in testing.
Why you need debug reports
If you're testing the Attribution Reporting API, debug reports help you:
- Check that your integration is working properly.
- Understand gaps in measurement results between your cookie-based implementation and your Attribution Reporting implementation.
- Troubleshoot any issues with your integration.
Debug reports are required to complete these tasks, so we strongly recommend you set them up.
Key aspects of debug reports
Two types of debug reports
Two types of debug reports are available. Use both, as they fulfill different use cases.
Success debug reports
Success debug reports track the successful generation of an attribution report; each success debug report corresponds directly to an attribution report.
Success debug reports have been available since Chrome 101 (April 2022).
Verbose debug reports
Verbose debug reports give you more visibility into source and trigger events, so you can either confirm that sources were registered successfully, or track missing reports and determine why they're missing (a failure in the source or trigger event, or a failure when generating or sending the report). Verbose debug reports indicate:
- Cases where the browser successfully registered a source.
- Cases where the browser did not successfully register a source or trigger event, which means it will not generate an attribution report.
- Cases where an attribution report can't be generated or sent for some reason.
Verbose debug reports include a type field that describes either a successful source registration, or the reason why a source, trigger, or attribution report was not generated.
Verbose debug reports have been available since Chrome 109 (January 2023), except for source registration success reports, which were added later in Chrome 112.
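As a rough illustration of the shape your endpoint receives, a verbose debug report is a JSON list of entries, each carrying a type and a body. The specific type string and body fields below are placeholders for illustration; see the example reports in Part 2 for the exact fields your browser version sends.

```javascript
// Illustrative shape of a verbose debug report payload (values are placeholders).
// The browser POSTs a JSON array; each entry has a "type" and a "body".
const verboseDebugReport = [
  {
    type: 'trigger-no-matching-source', // reason no attribution report was generated
    body: {
      attribution_destination: 'https://advertiser.example',
      trigger_debug_key: '1234567890',
    },
  },
];

// A simple check you might run on your endpoint: group incoming reports by type
// to see which failure (or success) categories dominate.
function countByType(reports) {
  const counts = {};
  for (const report of reports) {
    counts[report.type] = (counts[report.type] || 0) + 1;
  }
  return counts;
}

console.log(countByType(verboseDebugReport)); // { 'trigger-no-matching-source': 1 }
```

Grouping by the type field like this is often the first step of a loss analysis, since each type maps to one reason a report succeeded or went missing.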
Review example reports in Part 2: Set up debug reports.
Debug reports are cookie-based
To use debug reports, the reporting origin needs to set a cookie.
If the origin configured to receive reports is a third party, this cookie will be a third-party cookie. This has a few key implications:
- Debug reports are only generated if third-party cookies are allowed in the user's browser.
- Debug reports will no longer be available after third-party cookies are phased out.
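As a sketch of what "the reporting origin needs to set a cookie" looks like in practice: Chrome's documentation names this cookie ar_debug, and as a third-party cookie it needs the SameSite=None and Secure attributes. The header below is illustrative; confirm the exact requirements in Part 2.

```javascript
// Sketch: the Set-Cookie header a reporting origin would send so the browser
// generates debug reports (cookie name ar_debug, per Chrome's documentation).
const setCookieHeader = 'ar_debug=1; SameSite=None; Secure; HttpOnly; Path=/';

// Helper to confirm the attributes a third-party cookie needs are present.
function hasRequiredAttributes(header) {
  const parts = header.split(';').map((part) => part.trim().toLowerCase());
  return parts.includes('samesite=none') && parts.includes('secure');
}

console.log(hasRequiredAttributes(setCookieHeader)); // true
```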
Debug reports are sent immediately
Debug reports are sent immediately by the browser to the reporting origin. This is unlike attribution reports, which are sent with a delay.
Success debug reports are generated and sent as soon as the corresponding attribution report is generated: that is, on trigger registration.
Verbose debug reports are sent immediately upon source or trigger registration.
Debug reports have different endpoint paths
Like attribution reports, debug reports are sent to the reporting origin, but to three separate endpoints:
- Endpoint for success debug reports, event-level
- Endpoint for success debug reports, aggregatable
- Endpoint for verbose debug reports, event-level and aggregatable
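The paths below show how these endpoints sit under the reporting origin, based on the well-known paths Chrome documents for debug reports; the origin is a placeholder, and you should verify the exact paths against Part 2 for your browser version.

```javascript
// Debug endpoint paths under the reporting origin (as documented for Chrome;
// verify against Part 2 for your browser version).
const REPORTING_ORIGIN = 'https://adtech.example'; // placeholder origin

const DEBUG_ENDPOINTS = {
  eventLevelSuccess:
    '/.well-known/attribution-reporting/debug/report-event-attribution',
  aggregatableSuccess:
    '/.well-known/attribution-reporting/debug/report-aggregate-attribution',
  verbose: '/.well-known/attribution-reporting/debug/verbose',
};

// Build the full URLs your server needs to handle.
const debugUrls = Object.fromEntries(
  Object.entries(DEBUG_ENDPOINTS).map(([name, path]) => [
    name,
    REPORTING_ORIGIN + path,
  ])
);

console.log(debugUrls.verbose);
// https://adtech.example/.well-known/attribution-reporting/debug/verbose
```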
Learn more in Part 2: Set up debug reports.
Use cases
Basic real-time integration check
Debug reports are sent to your endpoint immediately, unlike attribution reports which are delayed to protect user privacy. Use debug reports as a real-time signal that your integration with the Attribution Reporting API is working.
Learn how to do this in Part 3: Debugging cookbook.
Loss analysis
Unlike third-party cookies, the Attribution Reporting API includes built-in privacy protections that are designed to strike a balance between utility and privacy. This means that with the Attribution Reporting API, you might not be able to collect all of the measurement data you currently collect with cookies: not every conversion that you can track with third-party cookies will generate an attribution report.
One example: for event-level reports, you can register at most one conversion per impression. This means that for a given ad impression, you will only get one attribution report, no matter how many times the user converts.
Use debug reports to gain visibility into the differences between your cookie-based measurement results and the results you get with the Attribution Reporting API: pinpoint how many conversions go unreported, which ones, and why.
Learn how to run a loss analysis in Part 3: Debugging cookbook.
Troubleshooting
While loss caused by privacy or resource protections is expected, other loss may be unintended. Misconfigurations in your implementation or bugs in the browser itself can cause reports to go missing.
You can use debug reports to detect and fix an implementation issue on your side, or to report a potential bug to browser teams. Learn how to do this in Part 3: Debugging cookbook.
Advanced configuration check
Some features of the Attribution Reporting API let you customize the API's behavior; filtering rules, deduplication rules, and priority rules are some examples.
When using these features, use debug reports to check that your logic leads to the intended behavior in production, without waiting for attribution reports. Learn how to do this in Part 3: Debugging cookbook.
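As a rough sketch of the kind of configuration these rules involve, the object below mirrors the JSON value of an Attribution-Reporting-Register-Trigger header using priority and deduplication fields. All values are placeholders, and the exact field set for your use case is covered in Part 2 and the API reference.

```javascript
// Sketch: an Attribution-Reporting-Register-Trigger header value that uses
// priority and deduplication rules (all values are placeholders).
const triggerRegistration = {
  event_trigger_data: [
    {
      trigger_data: '2',
      priority: '100',          // higher-priority conversions win the report slot
      deduplication_key: '789', // repeat conversions with this key are dropped
    },
  ],
  debug_key: '1234567890', // lets you match debug reports to this trigger
};

// The header value is the serialized JSON.
const headerValue = JSON.stringify(triggerRegistration);
console.log(headerValue.includes('deduplication_key')); // true
```

With debug reports enabled, you can then confirm in real time whether a deduplicated or deprioritized conversion was dropped intentionally rather than lost to a misconfiguration.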
Local testing with aggregatable reports
Unlike aggregatable attribution reports, whose payloads are encrypted, aggregatable debug reports include the unencrypted payload.
Use aggregatable debug reports to validate the contents of aggregatable reports, and to generate summary reports with the local aggregation tool for testing.
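The shape below illustrates where the cleartext payload sits in an aggregatable debug report: alongside the encrypted payload in the aggregation_service_payloads list, in a debug_cleartext_payload field you can decode locally. All values are placeholders; see the example reports in Part 2 for the exact fields.

```javascript
// Illustrative shape of an aggregatable success debug report (placeholder values).
// Alongside the encrypted payload, it carries a cleartext copy you can decode
// locally to validate your histogram contributions.
const aggregatableDebugReport = {
  shared_info: '<stringified shared_info JSON>', // placeholder metadata string
  aggregation_service_payloads: [
    {
      payload: '<encrypted payload>',
      key_id: '1',
      debug_cleartext_payload: '<base64-encoded cleartext payload>',
    },
  ],
};

// Check that every payload entry carries a cleartext copy before feeding the
// report to a local aggregation run.
function hasCleartextPayload(report) {
  const payloads = report.aggregation_service_payloads || [];
  return (
    payloads.length > 0 &&
    payloads.every((p) => typeof p.debug_cleartext_payload === 'string')
  );
}

console.log(hasCleartextPayload(aggregatableDebugReport)); // true
```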
Reprocessing Aggregation Service reports
Debug mode also allows you to process reports more than once. To reprocess reports, make sure you have debug reports enabled. You may want to reprocess reports when you're:
- attempting to debug the Aggregation Service.
- experimenting with different batching strategies.
- experimenting with different epsilon values.
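When batching for a reprocessing run, one practical step is filtering to the reports that actually have debug mode enabled. The sketch below assumes debug mode is flagged as "debug_mode": "enabled" inside each report's stringified shared_info field, as documented for Chrome; verify this against the Aggregation Service documentation for your deployment.

```javascript
// Sketch: select the reports eligible for an Aggregation Service debug run.
// Assumption: debug mode appears as "debug_mode": "enabled" in the report's
// stringified shared_info field.
const reports = [
  { shared_info: JSON.stringify({ debug_mode: 'enabled', version: '0.1' }) },
  { shared_info: JSON.stringify({ version: '0.1' }) }, // no debug mode
];

const reprocessable = reports.filter(
  (report) => JSON.parse(report.shared_info).debug_mode === 'enabled'
);

console.log(reprocessable.length); // 1
```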
Data recovery
We recommend ad techs enable debug mode to receive debug reports so they can recover their reporting data. This is useful in cases of Aggregation Service issues, such as an unavailable or unresponsive service, that may cause summary report generation to fail.