# What Is Googlebot | Google Search Central

Googlebot
=========

Googlebot is the generic name for two types of
[web crawlers](/search/docs/fundamentals/how-search-works) used by Google Search:

- [**Googlebot Smartphone**](/search/docs/crawling-indexing/google-common-crawlers#googlebot-smartphone): a mobile crawler that simulates a user on a mobile device.
- [**Googlebot Desktop**](/search/docs/crawling-indexing/google-common-crawlers#googlebot-desktop): a desktop crawler that simulates a user on a desktop device.

You can identify the subtype of Googlebot by looking at the
[HTTP `user-agent` request header](/search/docs/crawling-indexing/overview-google-crawlers)
in the request. However, both crawler types obey the same product token (user agent token) in
robots.txt, so you cannot selectively target either Googlebot Smartphone or Googlebot
Desktop using robots.txt.

For most sites, Google Search primarily
[indexes the mobile version](/search/docs/crawling-indexing/mobile/mobile-sites-mobile-first-indexing)
of the content. As such, the majority of Googlebot crawl requests are made with the mobile
crawler, and a minority with the desktop crawler.

How Googlebot accesses your site
--------------------------------

For most sites, Googlebot shouldn't access your site more than once every few seconds on
average. However, due to delays, it's possible that the rate will appear slightly higher
over short periods.
If your site is having trouble keeping up with Google's crawling requests, you can
[reduce the crawl rate](/search/docs/crawling-indexing/reduce-crawl-rate).

Googlebot can crawl the first 15MB of an HTML file or
[supported text-based file](/search/docs/crawling-indexing/indexable-file-types).
Each resource referenced in the HTML, such as CSS and JavaScript, is fetched separately, and
each fetch is bound by the same file size limit. After the first 15MB of the file, Googlebot
stops crawling and only sends the first 15MB for indexing consideration. The file size limit
is applied to the uncompressed data. Other Google crawlers, for example Googlebot Video and
Googlebot Image, may have different limits.

When crawling from IP addresses in the US, the timezone of Googlebot is
[Pacific Time](https://g.co/kgs/WSf8oR).

Other
[technical properties of Googlebot](/search/docs/crawling-indexing/overview-google-crawlers#crawl-technical-props)
are described in the overview of Google's crawlers.

Blocking Googlebot from visiting your site
------------------------------------------

Googlebot discovers new URLs to crawl primarily from links embedded in previously crawled pages.
It's almost impossible to keep a site secret by not publishing links to it. For example, as soon
as someone follows a link from your "secret" site to another site, your "secret" site's URL may
appear in the `Referer` header, and the other site may store and publish it in its referrer log.

If you want to prevent Googlebot from crawling content on your site, you have a
[number of options](/search/docs/crawling-indexing/control-what-you-share).
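The most common of those options is a robots.txt rule. A minimal sketch (the `/private/` directory is a hypothetical example path; note that the single `Googlebot` product token matches both Googlebot Smartphone and Googlebot Desktop):

```
# robots.txt, served at the site root (e.g. https://example.com/robots.txt)
# The Googlebot token covers both the mobile and the desktop crawler;
# they cannot be targeted separately here.
User-agent: Googlebot
Disallow: /private/
```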
Remember that there's a difference between *crawling* and *indexing*: blocking Googlebot from
crawling a page doesn't prevent the URL of the page from appearing in search results.

- **Want to prevent Googlebot from crawling a page?** Use a [robots.txt file](/search/docs/crawling-indexing/robots/intro).
- **Don't want Google to index a page?** Use [`noindex`](/search/docs/crawling-indexing/block-indexing).
- **Want to make a page inaccessible to both crawlers and users?** Use [another method, such as password protection](/search/docs/crawling-indexing/control-what-you-share).

Blocking Googlebot affects Google Search (including Discover and all Google Search features), as
well as other products such as Google Images, Google Video, and Google News.

Verifying Googlebot
-------------------

Before you decide to block Googlebot, be aware that the HTTP `user-agent` request
header used by Googlebot is often spoofed by other crawlers, so it's important to verify that a
problematic request actually comes from Google. The best way to verify that a request actually
comes from Googlebot is to
[use a reverse DNS lookup](/search/docs/crawling-indexing/verifying-googlebot#manual)
on the source IP of the request, or to match the source IP against the
[Googlebot IP ranges](/search/docs/crawling-indexing/verifying-googlebot#use-automatic-solutions).
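The reverse DNS check can be sketched with Python's standard library alone. This is a minimal illustration, not an official tool: the function names are our own, and the trusted-suffix list assumes the `googlebot.com` and `google.com` hostnames that Google documents for Googlebot PTR records.

```python
import socket

# Hostname suffixes documented for genuine Googlebot reverse-DNS results.
# The leading dot prevents spoofed names like "fakegooglebot.com" from matching.
TRUSTED_SUFFIXES = (".googlebot.com", ".google.com")

def hostname_is_google(host: str) -> bool:
    """Check whether a PTR hostname belongs to a trusted Google domain."""
    return host.endswith(TRUSTED_SUFFIXES)

def is_googlebot(ip: str) -> bool:
    """Verify a claimed Googlebot request IP with a reverse-then-forward DNS check."""
    try:
        host = socket.gethostbyaddr(ip)[0]          # 1. reverse (PTR) lookup
    except OSError:
        return False
    if not hostname_is_google(host):                # 2. name must be Google's
        return False
    try:
        forward = socket.gethostbyname_ex(host)[2]  # 3. forward-confirm the name
    except OSError:
        return False
    return ip in forward                            # rejects forged PTR records

# Example (requires network access):
# is_googlebot("66.249.66.1")
```

The forward-confirmation step matters because anyone who controls the reverse DNS zone for an IP can make it resolve to a `googlebot.com` name; only Google can make that name resolve back to the same IP.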