- Order uploaded operations by operation type. For example, if your job contains operations to add campaigns, ad groups, and ad group criteria, order the operations in your upload so that all of the [campaign operations](/google-ads/api/reference/rpc/v21/CampaignOperation) come first, followed by all of the [ad group operations](/google-ads/api/reference/rpc/v21/AdGroupOperation), and finally all of the [ad group criterion operations](/google-ads/api/reference/rpc/v21/AdGroupCriterionOperation).

- Within operations of the same type, grouping them by parent resource can improve performance. For example, if you have a series of `AdGroupCriterionOperation` objects, it can be more efficient to group operations by ad group, rather than intermixing operations that affect ad group criteria in different ad groups. One way to produce this ordering is sketched below.
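The following Python sketch is one minimal way to apply both points: it sorts a list of `MutateOperation` objects by operation type, then groups ad group criterion operations by their parent ad group. The rank table and the oneof-based type detection are illustrative assumptions about how your operations were built, not an official API.

```python
# A minimal ordering sketch, assuming `operations` holds MutateOperation
# objects built with the google-ads Python client. The rank table below
# is an assumption for a job that adds campaigns, ad groups, and ad
# group criteria.
TYPE_ORDER = {
    "campaign_operation": 0,
    "ad_group_operation": 1,
    "ad_group_criterion_operation": 2,
}

def operation_sort_key(mutate_operation):
    # WhichOneof reports which operation field is set on the underlying
    # MutateOperation proto, such as "campaign_operation".
    op_type = mutate_operation._pb.WhichOneof("operation")
    rank = TYPE_ORDER.get(op_type, len(TYPE_ORDER))
    # For criterion operations, also group by parent ad group so that
    # operations touching the same ad group are adjacent in the upload.
    parent = ""
    if op_type == "ad_group_criterion_operation":
        parent = mutate_operation.ad_group_criterion_operation.create.ad_group
    return (rank, parent)

def order_operations(operations):
    """Returns operations sorted by type, then by parent ad group."""
    return sorted(operations, key=operation_sort_key)
```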
Avoid concurrency issues
------------------------
- When submitting multiple concurrent jobs for the same account, try to reduce the likelihood of jobs operating on the same objects at the same time, while maintaining large job sizes. Many unfinished jobs with a status of [`RUNNING`](/google-ads/api/reference/rpc/v21/BatchJobStatusEnum.BatchJobStatus#running) that try to mutate the same set of objects can create deadlock-like conditions, resulting in severe slowdowns and even job failures. One way to shard operations across jobs so their targets stay disjoint is sketched after this list.

- Don't submit multiple operations that mutate the same object in the same job, as the result can be unpredictable.
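As a rough illustration of the first point, the sketch below shards operations across a fixed number of jobs by parent resource, so that all operations touching a given parent land in the same job. The `parent_resource_of` callable is a hypothetical stand-in for however your code derives an operation's parent.

```python
import zlib
from collections import defaultdict

def shard_by_parent(operations, parent_resource_of, num_jobs):
    """Splits operations into num_jobs lists with disjoint parents.

    parent_resource_of is a hypothetical callable returning the parent
    resource name (for example, a campaign or ad group) of an operation.
    """
    shards = defaultdict(list)
    for op in operations:
        # A deterministic hash keeps all of a parent's operations in one
        # shard while spreading distinct parents across shards.
        key = zlib.crc32(parent_resource_of(op).encode("utf-8"))
        shards[key % num_jobs].append(op)
    return [shards[i] for i in range(num_jobs)]
```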
Retrieve results optimally
--------------------------
- Don't poll the job status too frequently, or you risk hitting rate limit errors. A backoff-based polling sketch follows this list.

- Don't retrieve more than 1,000 results per page. The server could return fewer than that due to load or other factors.

- The results order will be the same as the upload order.
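Here is a minimal sketch of both points, assuming the job was started with the Python client's `run_batch_job`, which returns a long-running operation (`lro` below). The backoff constants are arbitrary choices for illustration.

```python
import time

PAGE_SIZE = 1000  # Stay at or below 1,000 results per page.

def wait_and_list_results(client, batch_job_service, lro, job_resource_name):
    # Poll the long-running operation with exponential backoff rather
    # than a tight loop, to avoid rate limit errors.
    delay = 5
    while not lro.done():
        time.sleep(delay)
        delay = min(delay * 2, 300)  # Cap the delay at five minutes.

    # Results come back in the same order the operations were uploaded.
    request = client.get_type("ListBatchJobResultsRequest")
    request.resource_name = job_resource_name
    request.page_size = PAGE_SIZE
    for result in batch_job_service.list_batch_job_results(request=request):
        yield result
```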
Additional usage guidance
-------------------------
- You can set an upper bound for how long a batch job is allowed to run before being cancelled. When creating a new batch job, set the [`metadata.execution_limit_seconds`](/google-ads/api/reference/rpc/v21/BatchJob.BatchJobMetadata#execution_limit_seconds) field to your preferred time limit, in seconds. There is no default time limit if `metadata.execution_limit_seconds` is not set. (See the first sketch after this list.)

- It is recommended to add no more than 1,000 operations per [`AddBatchJobOperationsRequest`](/google-ads/api/reference/rpc/v21/BatchJobService/AddBatchJobOperations) and to use the [`sequence_token`](/google-ads/api/reference/rpc/v21/AddBatchJobOperationsRequest#sequence_token) to upload the rest of the operations to the same job. Depending on the content of the operations, too many operations in a single `AddBatchJobOperationsRequest` could cause a [`REQUEST_TOO_LARGE`](/google-ads/api/reference/rpc/v21/DatabaseErrorEnum.DatabaseError#request_too_large) error. You can handle this error by reducing the number of operations and retrying the `AddBatchJobOperationsRequest`. (See the second sketch after this list.)
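For the first point, a minimal sketch using the Python client; the one-hour limit is an arbitrary value chosen for illustration.

```python
def create_batch_job_with_limit(client, customer_id):
    """Creates a batch job that is cancelled if it runs over an hour."""
    batch_job_service = client.get_service("BatchJobService")
    batch_job_operation = client.get_type("BatchJobOperation")

    batch_job = batch_job_operation.create
    # Cancel the job if it is still running after 3,600 seconds.
    batch_job.metadata.execution_limit_seconds = 3600

    response = batch_job_service.mutate_batch_job(
        customer_id=customer_id, operation=batch_job_operation
    )
    return response.result.resource_name
```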
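For the second point, a sketch that uploads operations in chunks of at most 1,000 per request, threading each response's `next_sequence_token` into the next request. The chunk size mirrors the recommendation above; halving it on a `REQUEST_TOO_LARGE` error (noted in the comment) is one simple recovery strategy, not prescribed behavior.

```python
MAX_OPS_PER_REQUEST = 1000  # Recommended ceiling per request.

def upload_operations(batch_job_service, job_resource_name, mutate_operations):
    """Uploads all operations to one job, 1,000 at a time."""
    sequence_token = None
    for start in range(0, len(mutate_operations), MAX_OPS_PER_REQUEST):
        chunk = mutate_operations[start:start + MAX_OPS_PER_REQUEST]
        # If this call fails with REQUEST_TOO_LARGE, retry the chunk in
        # smaller pieces (for example, half the size).
        response = batch_job_service.add_batch_job_operations(
            resource_name=job_resource_name,
            sequence_token=sequence_token,
            mutate_operations=chunk,
        )
        # The token tells the server where the next request resumes.
        sequence_token = response.next_sequence_token
```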
[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["필요한 정보가 없음","missingTheInformationINeed","thumb-down"],["너무 복잡함/단계 수가 너무 많음","tooComplicatedTooManySteps","thumb-down"],["오래됨","outOfDate","thumb-down"],["번역 문제","translationIssue","thumb-down"],["샘플/코드 문제","samplesCodeIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2025-09-05(UTC)"],[[["\u003cp\u003eTo improve throughput when using BatchJobService, submit fewer, larger jobs and order operations by type and parent resource to enhance efficiency.\u003c/p\u003e\n"],["\u003cp\u003eWhen running concurrent jobs, minimize the likelihood of them operating on the same objects simultaneously to avoid potential deadlock-like conditions and job failures.\u003c/p\u003e\n"],["\u003cp\u003eFor optimal results retrieval, avoid excessive job status polling and retrieve results in batches of 1,000 or less to prevent rate limit errors and ensure efficient data handling.\u003c/p\u003e\n"],["\u003cp\u003eBatchJobService allows setting an execution time limit and recommends a maximum of 1,000 operations per request to avoid exceeding size limitations and potential errors.\u003c/p\u003e\n"],["\u003cp\u003eEach BatchJob is subject to limitations, including a maximum of one million operations, 100 active or pending jobs per account, a 7-day lifespan for pending jobs, and a request size limit of 10,484,504 bytes.\u003c/p\u003e\n"]]],[],null,["# Best Practices and Limitations\n\nConsider these guidelines when using [`BatchJobService`](/google-ads/api/reference/rpc/v21/BatchJobService).\n\nImprove throughput\n------------------\n\n- Fewer larger jobs is preferred over many smaller jobs.\n\n- Order uploaded operations by operation type. For example, if your job\n contains operations to add campaigns, ad groups, and ad group criteria,\n order the operations in your upload so that all of the [campaign\n operations](/google-ads/api/reference/rpc/v21/CampaignOperation) are first, followed by all of\n the [ad group operations](/google-ads/api/reference/rpc/v21/AdGroupOperation), and finally all\n [ad group criterion operations](/google-ads/api/reference/rpc/v21/AdGroupCriterionOperation).\n\n- Within operations of the same type, it can improve performance to group them\n by parent resource. For example, if you have a series of\n `AdGroupCriterionOperation` objects, it can be more efficient to group\n operations by ad group, rather than intermixing operations that affect ad\n group criteria in different ad groups.\n\nAvoid concurrency issues\n------------------------\n\n- When submitting multiple concurrent jobs for the same account, try to reduce\n the likelihood of jobs operating on the same objects at the same time, while\n maintaining large job sizes. Many unfinished jobs, which have the status of\n [`RUNNING`](/google-ads/api/reference/rpc/v21/BatchJobStatusEnum.BatchJobStatus#running),\n try to mutate the same set of objects, which can lead to deadlock-like\n conditions resulting in severe slow-down and even job failures.\n\n- Don't submit multiple operations that mutate the same object in the same\n job, as the result can be unpredictable.\n\nRetrieve results optimally\n--------------------------\n\n- Don't poll the job status too frequently or you risk hitting rate limit\n errors.\n\n- Don't retrieve more than 1,000 results per page. 
The server could return\n fewer than that due to load or other factors.\n\n- The results order will be the same as the upload order.\n\nAdditional usage guidance\n-------------------------\n\n- You can set an upper bound for how long a batch job is allowed to run before\n being cancelled. When creating a new batch job, set the\n [`metadata.execution_limit_seconds`](/google-ads/api/reference/rpc/v21/BatchJob.BatchJobMetadata#execution_limit_seconds)\n field to your preferred time limit, in seconds. There is no default time\n limit if `metadata.execution_limit_seconds` is not set.\n\n- It is recommended to add no more than 1,000 operations per\n [`AddBatchJobOperationsRequest`](/google-ads/api/reference/rpc/v21/BatchJobService/AddBatchJobOperations)\n and use the\n [`sequence_token`](/google-ads/api/reference/rpc/v21/AddBatchJobOperationsRequest#sequence_token)\n to upload the rest of the operations to the same job. Depending on the\n content of the operations, too many operations in a single\n `AddBatchJobOperationsRequest` could cause a [`REQUEST_TOO_LARGE`](/google-ads/api/reference/rpc/v21/DatabaseErrorEnum.DatabaseError#request_too_large) error. You\n can handle this error by reducing the number of operations and retrying the\n `AddBatchJobOperationsRequest`.\n\nLimitations\n-----------\n\n- Each [`BatchJob`](/google-ads/api/reference/rpc/v21/BatchJob) supports up to one million\n operations.\n\n- Each account can have up to 100 active or pending jobs at the same time.\n\n- Pending jobs older than 7 days are automatically removed.\n\n- Each [`AddBatchJobOperationsRequest`](/google-ads/api/reference/rpc/v21/BatchJobService/AddBatchJobOperations)\n has a maximum size of 10,484,504 bytes. If you exceed this, you will receive\n an `INTERNAL_ERROR`. You can determine the size of the request before\n submitting and take appropriate action if it is too large.\n\n ### Java\n\n\n static final int MAX_REQUEST_BYTES = 10_484_504;\n\n ... (code to get the request object)\n\n int sizeInBytes = request.getSerializedSize();\n\n ### Python\n\n\n from google.ads.googleads.client import GoogleAdsClient\n\n MAX_REQUEST_BYTES = 10484504\n\n ... (code to get the request object)\n\n size_in_bytes = request._pb.ByteSize()\n\n ### Ruby\n\n\n require 'google/ads/google_ads'\n\n MAX_REQUEST_BYTES = 10484504\n\n ... (code to get the request object)\n\n size_in_bytes = request.to_proto.bytesize\n\n ### PHP\n\n\n use Google\\Ads\\GoogleAds\\V16\\Resources\\Campaign;\n\n const MAX_REQUEST_BYTES = 10484504;\n\n ... (code to get the request object)\n\n $size_in_bytes = $campaign-\u003ebyteSize() . PHP_EOL;\n\n ### .NET\n\n\n using Google.Protobuf;\n const int MAX_REQUEST_BYTES = 10484504;\n\n ... (code to get the request object)\n\n int sizeInBytes = request.ToByteArray().Length;\n\n ### Perl\n\n\n use Devel::Size qw(total_size);\n use constant MAX_REQUEST_BYTES =\u003e 10484504;\n\n ... (code to get the request object)\n\n my $size_in_bytes = total_size($request);"]]