Aggregation Service generates summary reports of detailed conversion data and reach measurements from raw aggregatable reports. Ad techs have two main client-side entry points for funneling reports to the Aggregation Service: the Attribution Reporting API and the Private Aggregation API.
Implementation status
- Aggregation Service has now moved to general availability.
- Aggregation Service can be used with Attribution Reporting API and Private Aggregation API for Protected Audience API and Shared Storage API.
Availability
Proposal | Status |
---|---|
Cross Cloud Privacy Budget Service. Explainer | Available |
Aggregation Service support for Amazon Web Services (AWS) across Attribution Reporting API, Private Aggregation API. Explainer | Available |
Aggregation Service support for Google Cloud across Attribution Reporting API, Private Aggregation API. Explainer | Available |
Aggregation Service site enrollment and multi-origin aggregation. Site enrollment includes mapping of a site to cloud accounts (AWS or GCP). To aggregate multiple origins, they must belong to the same site. FAQs on GitHub. Site aggregation API documentation | Available |
The Aggregation Service's epsilon value will be kept as a range of up to 64, to facilitate experimentation and feedback on different parameters. Submit ARA epsilon feedback. Submit PAA epsilon feedback. | Available. We will provide advance notice to the ecosystem before the epsilon range values are updated. |
More flexible contribution filtering for Aggregation Service queries. Explainer | Available |
Process for budget recovery post-disasters (errors, misconfigurations, and so on). Explainer | Available. A mechanism to review the percentage of shared IDs recovered by an ad tech using budget recovery, and to suspend future recoveries when recoveries are excessive, is planned for H1 2025. |
Accenture operating as one of the Coordinators on AWS. Developer blog | Available |
Independent party operating as one of the Coordinators on Google Cloud. Developer blog | Available |
Aggregation Service support for Aggregate Debug Reporting on Attribution Reporting API. Explainer | Available |
Key terms and concepts
If you're considering using Aggregation Service in your ad tech workflow, the following terms and concepts offer more insight into what this aggregation flow can provide for your team:
Term | Description |
---|---|
Aggregation Service | An ad tech-operated service that processes aggregatable reports to create a summary report. |
Aggregatable Reports | Aggregatable reports are encrypted reports sent from individual user devices. These reports contain data about cross-site user behavior and conversions. Conversions (sometimes called attribution trigger events) and associated metrics are defined by the advertiser or ad tech. Each report is encrypted to prevent various parties from accessing the underlying data. Learn more about aggregatable reports. |
Aggregatable Report Accounting | A distributed ledger, located in both coordinators, that tracks allocated privacy budget and enforces the 'No Duplicates' rule. This is the privacy-preserving mechanism, located and run within the coordinators, that ensures no report passes through Aggregation Service beyond the allocated privacy budget. Read more about batching strategies and how they relate to aggregatable reports. |
Aggregatable Report Accounting Budget | The budget that ensures reports are not processed more than once. |
Trusted Execution Environment (TEE) | A trusted execution environment is a special configuration of computer hardware and software that allows external parties to verify the exact versions of software running on the computer. TEEs allow external parties to verify that the software does exactly what the software manufacturer claims it does—nothing more or less. To learn more about TEEs used for the Privacy Sandbox proposals, read the Protected Audience API services explainer and the Aggregation Service explainer. |
Coordinators | A coordinator is an entity responsible for key management and aggregatable report accounting. The coordinator maintains a list of hashes of approved aggregation service configurations and configures access to decryption keys. |
Shared ID | A computed value derived from: `shared_info`, `reporting_origin`, `destination_site` (available for Attribution Reporting API only), `source_registration_time` (available for Attribution Reporting API only), `scheduled_report_time`, and `version`. Multiple reports belong to the same shared ID if they share the same attributes of the `shared_info` field. This plays an important role within Aggregatable Report Accounting. Read more about Trusted Servers. |
Summary Report | A summary report is an Attribution Reporting API and Private Aggregation API report type. A summary report includes aggregated user data and can contain detailed conversion data, with noise added. Summary reports are made up of aggregatable reports. Summary reports allow for greater flexibility and a richer data model than event-level reporting, particularly for some use cases like conversion values. |
Reporting Origin | The reporting origin is the entity that receives aggregatable reports—in other words, the ad tech that called the Attribution Reporting API. Aggregatable reports are sent from user devices to a well-known URL associated with the reporting origin. This reporting origin should be designated during enrollment. |
Contribution Bounding | Aggregatable reports may contain an arbitrary number of counter increments. For example, a report may contain a count of products that a user has viewed on an advertiser's site. The sum of increments across all aggregatable reports related to a single source event must not exceed a given limit, `L1 = 2^16`. Learn more in the aggregatable reports explainer. |
Noise & Scaling | A certain amount of statistical noise is added to summary reports as part of the aggregation process; this preserves privacy and ensures the final reports provide anonymized measurement information. Read more about the additive noise mechanism, which draws noise from a Laplace distribution (see the sketch after this table). |
Attestation | Attestation is a mechanism to authenticate software identity, usually with cryptographic hashes or signatures. For the Aggregation Service proposal, attestation matches the code running in the ad tech-operated Aggregation Service with the open source code. Read more about attestation. |
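To make the noise mechanism concrete, here is a minimal Python sketch of applying Laplace noise scaled to the contribution bound to an aggregated bucket value. The epsilon value, bucket total, and helper name are illustrative assumptions; in practice the noise is applied inside the deployed Aggregation Service, not by ad tech code.

```python
import numpy as np

# Illustrative sketch only: the real noising happens inside the TEE-based
# Aggregation Service. The values below are assumptions for demonstration.
L1 = 2 ** 16          # contribution bound per source event (see Contribution Bounding)
EPSILON = 10          # example epsilon; the service currently supports a range up to 64

def add_laplace_noise(aggregated_value: int, epsilon: float = EPSILON) -> float:
    """Add Laplace noise with scale L1 / epsilon to an aggregated bucket value."""
    scale = L1 / epsilon
    return aggregated_value + np.random.laplace(loc=0.0, scale=scale)

# Example: a bucket that accumulated 12,500 units of contribution across reports.
noisy_total = add_laplace_noise(12_500)
print(f"Noisy summary value: {noisy_total:.0f}")
```

The larger the epsilon, the smaller the noise scale and the less privacy protection applied, which is why the epsilon range is kept open for experimentation and feedback.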
Read more about the Aggregation Service's backstory in our explainer, and see the full terms list.
Aggregation use cases
Consider the following developer journeys for ad measurement and their corresponding client-side entry points.
Use case | Entry point | Description |
---|---|---|
Bidding optimization | Attribution Reporting API (Chrome & Android) | Use aggregated reports to ingest conversion signals for bidding optimization purposes. |
Cross platform measurement | Attribution Reporting API (Chrome & Android) | Use the cross web and app measurement capabilities to get visibility into performance across Chrome & Android. |
Conversion reporting | Attribution Reporting API (Chrome & Android) | Create aggregated conversion reporting tailored to customers' campaign needs (includes CTCs and VTCs). |
Campaign reach measurement | Shared Storage API & Private Aggregation API (Chrome) | Use cross-site ad view variables to measure campaign reach. |
Demographic reporting | Shared Storage API & Private Aggregation API (Chrome) | Use cross-site ad view and demographic information to measure reach by demographics. |
Conversion path analysis | Shared Storage API & Private Aggregation API (Chrome) | Store cross-site ad view and conversion variables to perform aggregated conversion path analysis. |
Brand and conversion lift | Shared Storage API & Private Aggregation API (Chrome) | Reporting on test/control groups and polling information to measure brand lift and incrementality. |
Auction debugging | Protected Audience API & Private Aggregation API (Chrome) | Use aggregated reports for debugging. |
Distribution of bids | Protected Audience API & Private Aggregation API (Chrome) | Use aggregated reports to capture the distribution of bid values for auctions. |
End-to-end flow
The following diagram shows Aggregation Service in action. We'll focus on the end-to-end flow from receiving the reports from web and mobile to creating the summary reports in Aggregation Service.
- Fetch public key to generate encrypted reports.
- Encrypted aggregatable reports are sent to ad tech servers to be collected, transformed, and batched.
- Ad tech server batches reports (Avro format) and sends them to the deployed Aggregation Service. (Must be completed by the ad tech.)
- Retrieve aggregatable reports to decrypt.
- Retrieve decryption keys from coordinators.
- Aggregation Service decrypts reports for aggregation and noising.
- Aggregatable report accounting service checks if there is any privacy budget left to generate a summary report for the given aggregatable reports.
- Submit final summary report.
From the diagram, you can see the overall relationship that the Aggregation Service has with the main client measurement APIs (Attribution Reporting API and Private Aggregation API) and the coordinators.
The flow starts with the different measurement APIs, such as the Attribution Reporting API or the Private Aggregation API, generating reports from multiple browser instances. Chrome fetches the public key from the Key Hosting Service in the Coordinator to encrypt the reports before they are sent to the ad tech's reporting origin. Public keys are rotated every seven days.
Once the ad tech's reporting origin receives these reports, it should be configured to collect them, convert them to Avro format, and send them to the ad tech's deployed Aggregation Service instance. Check out batching strategies.
Once the ad tech is ready to batch, it creates a batch request to Aggregation Service, where the reports are decrypted (using decryption keys retrieved from the Key Hosting Service), aggregated, and noised to create a summary report. Keep in mind that this is contingent on there being enough privacy budget to generate the final summary reports.
The ad tech reporting origin endpoint where the reports are collected is hosted by the ad tech, and the Aggregation Service is deployed in the ad tech's cloud.
Aggregatable reports batching
The reporting flow wouldn't be complete without the help of the designated reporting origin server. This is the origin an ad tech submitted in the enrollment process. The reporting origin is responsible for collecting, transforming, and batching the received aggregatable reports, and preparing them to be sent to the ad tech's deployed Aggregation Service in either Google Cloud or Amazon Web Services. Read more on how to prepare your aggregatable reports.
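As a concrete illustration, here is a minimal Python sketch of transforming collected aggregatable reports into an Avro batch. The record fields shown (payload, key_id, shared_info) mirror the report fields described in the Aggregation Service documentation, but the file names, helper function, and input structure are illustrative assumptions rather than a definitive implementation; confirm the exact schema against the published documentation.

```python
import base64
import json
from fastavro import writer, parse_schema

# Illustrative Avro schema for aggregatable reports; verify against the schema
# published in the Aggregation Service documentation before use.
REPORT_SCHEMA = parse_schema({
    "type": "record",
    "name": "AvroAggregatableReport",
    "fields": [
        {"name": "payload", "type": "bytes"},
        {"name": "key_id", "type": "string"},
        {"name": "shared_info", "type": "string"},
    ],
})

def batch_reports(collected_reports: list[dict], output_path: str) -> None:
    """Convert JSON aggregatable reports (as received at the reporting origin)
    into a single Avro batch file ready for upload to cloud storage."""
    records = []
    for report in collected_reports:
        for payload in report["aggregation_service_payloads"]:
            records.append({
                # The payload arrives base64-encoded in the JSON report body.
                "payload": base64.b64decode(payload["payload"]),
                "key_id": payload["key_id"],
                "shared_info": report["shared_info"],
            })
    with open(output_path, "wb") as out:
        writer(out, REPORT_SCHEMA, records)

# Example usage with reports previously written to disk by the collector endpoint
# (the file name is a placeholder).
reports = [json.loads(line) for line in open("collected_reports.jsonl")]
batch_reports(reports, "reports_batch.avro")
```

How reports are grouped into batches matters for privacy budget consumption, so pair a transformation like this with the batching strategies guidance linked above.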
Now that you have the general concept, take a closer look at the components that will be deployed in your Aggregation Service.
Cloud components
Aggregation Service consists of various cloud service components. The provided Terraform scripts provision and configure all necessary cloud service components.
Frontend Service
Managed Cloud Service: Cloud Function (Google Cloud) / API Gateway (Amazon Web Services)
Frontend Service is a serverless gateway that serves as the entry point for Aggregation API calls for job creation and job state retrieval. It is responsible for receiving requests from Aggregation Service users, validating input parameters, and initiating the aggregation job scheduling process.
Two APIs are available in Frontend Service:
Endpoint | Description |
---|---|
createJob | This API triggers an Aggregation Service job. It requires information to trigger a job such as job ID, input storage details, output storage details, reporting origin, and more. |
getJob | This API returns the status of a job for a specified job ID. It provides information about the state of the job, such as "Received," "In Progress," or "Finished." Additionally, if the job is finished, it displays the job result, including any error messages encountered during job execution. |
Check out the Aggregation Service API Documentation.
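To make job creation concrete, here is a hedged Python sketch of calling createJob over HTTP. The frontend URL, endpoint path, bucket names, prefixes, and authentication handling are illustrative assumptions; treat the Aggregation Service API documentation linked above as the authoritative source for the request schema.

```python
import uuid
import requests

# Assumption: your deployed Frontend Service URL and API version.
FRONTEND_URL = "https://example-frontend-service.example.com/v1alpha"

create_job_body = {
    "job_request_id": str(uuid.uuid4()),
    # Assumptions: bucket names and blob prefixes specific to your deployment.
    "input_data_blob_prefix": "reports/2025/01/reports_batch.avro",
    "input_data_bucket_name": "your-input-bucket",
    "output_data_blob_prefix": "summary/2025/01/summary_report.avro",
    "output_data_bucket_name": "your-output-bucket",
    "job_parameters": {
        # Assumption: the reporting origin enrolled for your ad tech.
        "attribution_report_to": "https://adtech.example",
        "output_domain_blob_prefix": "domain/output_domain.avro",
        "output_domain_bucket_name": "your-domain-bucket",
    },
}

# Cloud authentication (for example, IAM credentials) is omitted for brevity.
response = requests.post(f"{FRONTEND_URL}/createJob", json=create_job_body)
response.raise_for_status()
print("Job accepted:", create_job_body["job_request_id"])
```

The job ID you supply here is the same identifier you later pass to getJob to retrieve the job state.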
Job Queue
Managed Cloud Service: Pub/Sub (Google Cloud) / Amazon SQS (Amazon Web Services)
Job Queue is a message queue that stores job requests for Aggregation Service. Frontend Service inserts job request messages into the queue, which are then consumed by Aggregation Worker to process the job request.
Cloud storage
Managed Cloud Service: Google Cloud Storage (Google Cloud) / Amazon S3 (Amazon Web Services)
Cloud storage is used to store input and output files used by Aggregation Service (examples: encrypted report files, output summary reports, and so on).
Job Metadata Database
Managed Cloud Service: Spanner (Google Cloud) / DynamoDB (Amazon Web Services)
Job Metadata Database stores and tracks the status of aggregation jobs. The database records metadata such as creation time, requested time, updated time, and state (examples: Received, In Progress, Finished, etc). Aggregation Worker updates the Job Metadata Database as the job progresses.
Aggregation Worker
Managed Cloud Service: Compute engine with Confidential space (Google Cloud) / Amazon Web Services EC2 with Nitro Enclave (Amazon Web Services)
Aggregation Worker processes job requests picked up from the Job Queue, decrypting the encrypted inputs using keys fetched from the Key Generation and Distribution Service (KGDS) in the Coordinators. To minimize job processing latency, decryption keys are cached in the Aggregation Worker for a period of 8 hours and can be reused across jobs processed by that worker instance.
The worker operates within a Trusted Execution Environment (TEE) instance. Each worker handles only one job at a time. Ad techs can configure multiple workers to process jobs in parallel by setting an auto scaling configuration. Through auto scaling, the number of workers is dynamically adjusted based on the number of messages remaining in the Job Queue. The minimum and maximum number of workers for auto scaling can be configured through the Terraform environment file. More information about autoscaling can be found in the following Terraform scripts. [Amazon Web Services / Google Cloud]
Aggregation Worker calls the Aggregatable Report Accounting service for aggregatable report accounting. The accounting service ensures that a job only runs if the privacy budget limit has not yet been exceeded (see the "No Duplicates" rule). If budget is available, a summary report is generated using the noisy aggregates. Read additional details regarding aggregatable report accounting.
Aggregation Worker updates the job metadata in the Job Metadata Database, including appropriate job return codes and report error counters in case of partial report failures. Users can fetch the state using the job state retrieval API (getJob), as sketched below.
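A minimal polling sketch, continuing the illustrative createJob example above. The endpoint path and response field names (such as job_status and the state values) are assumptions to be verified against the Aggregation Service API documentation.

```python
import time
import requests

# Assumptions: the same frontend URL used for createJob, and the ID you submitted.
FRONTEND_URL = "https://example-frontend-service.example.com/v1alpha"

def wait_for_job(job_request_id: str, poll_seconds: int = 60) -> dict:
    """Poll getJob until the job leaves its in-progress states, then return
    the final job record (field names here are illustrative assumptions)."""
    while True:
        resp = requests.get(
            f"{FRONTEND_URL}/getJob", params={"job_request_id": job_request_id}
        )
        resp.raise_for_status()
        job = resp.json()
        # "job_status" is assumed to report states such as RECEIVED,
        # IN_PROGRESS, and FINISHED.
        if job.get("job_status") == "FINISHED":
            return job
        time.sleep(poll_seconds)

final_state = wait_for_job("your-job-request-id")
print(final_state)
```

Once the job reports a finished state, the summary report is available at the output storage location specified in the original createJob request.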
For a more detailed description of Aggregation Service, refer to our explainer.
Next steps
Now that you have the highlights of Aggregation Service, it's time to deploy your own instance through Google Cloud or Amazon Web Services. Check out the getting started section, or if you need more information on how to operate a deployed Aggregation Service, learn more about operating Aggregation Service.
Troubleshooting
Refer to our Common error codes and mitigations document for more detailed descriptions of error messages, what may have caused the error you're facing, and next steps for mitigation.
Get support and provide feedback
- For product questions, feedback, and feature requests, create an issue in our GitHub repository.
- To request technical troubleshooting support if you're facing an error while deploying, maintaining, or running jobs with Aggregation Service, use this Technical Support Form.
- Check the Public Status Dashboard for known issues.