[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["没有我需要的信息","missingTheInformationINeed","thumb-down"],["太复杂/步骤太多","tooComplicatedTooManySteps","thumb-down"],["内容需要更新","outOfDate","thumb-down"],["翻译问题","translationIssue","thumb-down"],["示例/代码问题","samplesCodeIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-09-05。"],[[["\u003cp\u003ePerformance optimization involves identifying key metrics like latency and throughput to pinpoint areas for improvement.\u003c/p\u003e\n"],["\u003cp\u003eMonitoring tools enable tracking of these metrics, setting up alerts for thresholds, and visualizing performance trends.\u003c/p\u003e\n"],["\u003cp\u003eOptimizing for latency percentiles, such as the 90th and 99th, offers a comprehensive approach to performance enhancement.\u003c/p\u003e\n"],["\u003cp\u003eServer-side and browser monitoring provide different perspectives, with server-side offering granular data and browser monitoring reflecting user experience.\u003c/p\u003e\n"],["\u003cp\u003eLeverage tools like Google Cloud Logging and Monitoring to capture, track, and analyze performance metrics, facilitating efficient optimization strategies.\u003c/p\u003e\n"]]],[],null,["# Monitoring\n\nPerformance optimization starts with identifying key metrics, usually related to\nlatency and throughput. The addition of monitoring to capture and track these\nmetrics exposes weak points in the application. With metrics, optimization can\nbe undertaken to improve performance metrics.\n\nAdditionally, many monitoring tools let you set up alerts for your metrics, so\nthat you are notified when a certain threshold is met. For example, you might\nset up an alert to notify you when the percentage of failed requests increases\nby more than *x*% of the normal levels. Monitoring tools can help you identify\nwhat normal performance looks like and identify unusual spikes in latency, error\nquantities, and other key metrics. The ability to monitor these metrics is\nespecially important during business critical timeframes, or after new code has\nbeen pushed to production.\n\nIdentify latency metrics\n------------------------\n\nEnsure that you keep your UI as responsive as you can, noting that users expect\neven higher standards from [mobile apps](/web/fundamentals/performance/why-performance-matters). Latency should also be measured\nand tracked for backend services, particularly since it can lead to throughput\nissues if left unchecked.\n\nSuggested metrics to track include the following:\n\n- Request duration\n- Request duration at subsystem granularity (such as API calls)\n- Job duration\n\nIdentify throughput metrics\n---------------------------\n\nThroughput is a measure of the total number of requests served over a given\nperiod of time. Throughput can be affected by latency of subsystems, so you\nmight need to optimize for latency to improve throughput.\n\nHere are some suggested metrics to track:\n\n- Queries per second\n- Size of data transferred per second\n- Number of I/O operations per second\n- Resource utilization, such as CPU or memory usage\n- Size of processing backlog, such as pub/sub or number of threads\n\nNot just the mean\n-----------------\n\nA common mistake in measuring performance is only looking at the mean (average)\ncase. While this is useful, it doesn't provide insight into the distribution of\nlatency. 
Server-side monitoring for detailed results
-------------------------------------------

Server-side profiling is generally preferred for tracking metrics. The server side is usually much easier to instrument, allows access to more granular data, and is less subject to perturbation from connectivity issues.

Browser monitoring for end-to-end visibility
--------------------------------------------

Browser profiling can provide additional insights into the end user experience. It can show which pages have slow requests, which you can then correlate to server-side monitoring for further analysis.

[Google Analytics](/analytics) provides out-of-the-box monitoring for page load times in the [page timings report](//support.google.com/analytics/answer/1205784#PageTimings). This provides several useful views for understanding the user experience on your site, in particular:

- Page load times
- Redirect load times
- Server response times

Monitoring in the cloud
-----------------------

There are many tools you can use to capture and monitor performance metrics for your application. For example, you can use [Google Cloud Logging](//cloud.google.com/logging) to log performance metrics to your [Google Cloud Project](/google-ads/api/docs/oauth/cloud-project), then set up dashboards in [Google Cloud Monitoring](//cloud.google.com/monitoring) to monitor and segment the logged metrics.

Check out the [Logging guide](/google-ads/api/docs/productionize/logging) for an [example](/google-ads/api/docs/productionize/logging#option_4_implement_a_custom_grpc_logging_interceptor) of logging to Google Cloud Logging from a custom interceptor in the Python client library. With that data available in Google Cloud, you can build metrics on top of the logged data to gain visibility into your application through Google Cloud Monitoring. Follow the [guide](//cloud.google.com/logging/docs/logs-based-metrics#user-defined_metrics) for user-defined log-based metrics to build metrics using the logs sent to Google Cloud Logging.
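To make that concrete, here is a minimal sketch of writing one structured entry per API call with the `google-cloud-logging` library. The log name `google_ads_api_calls` and the `method`, `is_fault`, and `latency_ms` fields are hypothetical stand-ins that roughly mirror the data the Logging guide's interceptor records, not the guide's actual implementation.

```python
from google.cloud import logging as cloud_logging

client = cloud_logging.Client()  # uses Application Default Credentials
logger = client.logger("google_ads_api_calls")  # hypothetical log name

# Write one structured entry per API call; each field becomes queryable in
# Cloud Logging and can feed user-defined log-based metrics.
logger.log_struct(
    {
        "method": "GoogleAdsService.SearchStream",
        "is_fault": False,
        "latency_ms": 123,
    },
    severity="INFO",
)
```

Because each field is stored as structured data, you can filter on it in Logging queries and use it as the basis for the log-based metrics described below.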
Alternatively, you could use the Monitoring client [libraries](//cloud.google.com/monitoring/docs/reference/libraries) to define metrics in your code and send them directly to Monitoring, separate from the logs.

### Log-based metrics example

Suppose you want to monitor the `is_fault` value to better understand error rates in your application. You can extract the `is_fault` value from the logs into a new [counter metric](//cloud.google.com/logging/docs/logs-based-metrics/counter-metrics), `ErrorCount`.

In Cloud Logging, [labels](//cloud.google.com/logging/docs/logs-based-metrics/labels) let you group your metrics into categories based on other data in the logs. You can configure a label for the [`method` field sent to Cloud Logging](/google-ads/api/docs/productionize/logging#option_4_implement_a_custom_grpc_logging_interceptor) to see how the error count breaks down by Google Ads API method.

With the `ErrorCount` metric and the `Method` label configured, you can [create a new chart](//cloud.google.com/logging/docs/logs-based-metrics/charts-and-alerts) in a Monitoring dashboard to monitor `ErrorCount`, grouped by `Method`.

### Alerts

In Cloud Monitoring, as in other tools, you can configure alert policies that specify when and how your metrics should trigger alerts. For instructions on setting up Cloud Monitoring alerts, follow the [alerts guide](//cloud.google.com/logging/docs/logs-based-metrics/charts-and-alerts#alert-on-lbm).
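Alert policies can also be created programmatically. The following is a minimal sketch using the Cloud Monitoring client library; it assumes the user-defined `ErrorCount` log-based metric from the example above, a hypothetical `PROJECT_ID`, and illustrative threshold values. The console-based flow in the alerts guide remains the documented path.

```python
from google.cloud import monitoring_v3
from google.protobuf import duration_pb2

PROJECT_ID = "my-project"  # hypothetical project ID

client = monitoring_v3.AlertPolicyServiceClient()

# Alert when the ErrorCount log-based metric, summed over 5-minute windows,
# exceeds 10. User-defined log-based metrics surface in Monitoring under the
# "logging.googleapis.com/user/" prefix.
policy = monitoring_v3.AlertPolicy(
    display_name="Google Ads API error count too high",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    conditions=[
        monitoring_v3.AlertPolicy.Condition(
            display_name="ErrorCount above threshold",
            condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
                filter='metric.type = "logging.googleapis.com/user/ErrorCount"',
                comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
                threshold_value=10,
                duration=duration_pb2.Duration(seconds=0),
                aggregations=[
                    monitoring_v3.Aggregation(
                        alignment_period=duration_pb2.Duration(seconds=300),
                        per_series_aligner=monitoring_v3.Aggregation.Aligner.ALIGN_SUM,
                    )
                ],
            ),
        )
    ],
)

created_policy = client.create_alert_policy(
    name=f"projects/{PROJECT_ID}", alert_policy=policy
)
print(f"Created alert policy: {created_policy.name}")
```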