Removing information from Google search results FAQ

This article brings together answers to the questions we at Google hear most about removing information from Google search results.

How do I get a page removed from Google?

Google doesn't control the content of the web. That means that before we remove a page from our search results, the site owner has to change it or take it down. If that's you, just make the changes you want. We'll see them the next time we crawl your site, and we'll update our index.

After you've made the changes, you can expedite the removal process by submitting a URL removal request. If you don't own the site, and the site owner won't take the content down, you can still request removal of certain confidential or personal information, such as your government ID number, bank account number, or signature.

Read more about removing information from Google's search results.

How can I remove the cached version of a page?

If a page has changed and you need to expedite the removal of outdated information from our search results, you can use the Remove outdated content tool. If you don't want Google to display a Cached link for a page at all, use the noarchive meta tag.
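For example, adding a tag like this to a page's head section asks search engines not to show a cached copy of that page (a minimal sketch; the rest of the page's markup is omitted):

  <head>
    <!-- Ask search engines not to keep a cached copy of this page -->
    <meta name="robots" content="noarchive">
  </head>

If you only want to address Google's crawler, you can use name="googlebot" instead of name="robots".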

Somebody is using my content or violating my copyright. How can I get their pages taken down?

If you find that another site is duplicating your content by scraping (misappropriating and republishing) it, it's unlikely that this will negatively impact your site's ranking in Google's search results. If you do spot a case that's particularly frustrating, you are welcome to file a DMCA request to claim ownership of the content and request that the infringing pages be removed from Google's index.

How do I keep my content out of Google's search results?

If your content is private, you must use server-side authentication (password protection) to block access to it. Don't rely on robots.txt or on meta and header tags to keep private content from becoming public; users may find those pages through means other than search engines.
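As an illustration, on a site served by Apache, HTTP Basic Authentication can be set up with an .htaccess file along these lines (a minimal sketch, assuming Apache and an existing .htpasswd file; the path shown is a placeholder):

  AuthType Basic
  AuthName "Private area"
  AuthUserFile /path/to/.htpasswd
  Require valid-user

Because the server refuses requests without valid credentials, neither crawlers nor casual visitors can reach the protected content.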

You can also use a noindex meta tag to tell search engines not to index a particular page. In that case, make sure the page is not disallowed in your robots.txt file: if we're not allowed to crawl the page, we won't be able to see and obey the meta tag on it.
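For example, a page you want kept out of search results could carry this tag in its head section, while your robots.txt leaves that URL crawlable (a minimal sketch; the rest of the page's markup is omitted):

  <head>
    <!-- Ask search engines not to include this page in their index -->
    <meta name="robots" content="noindex">
  </head>

Google drops the page from its search results the next time it crawls the page and sees the tag.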

You can also use the X-Robots-Tag HTTP header, which adds Robots Exclusion Protocol (REP) meta tag support for non-HTML files. This header gives you the same control over your videos, spreadsheets, and other indexed file types.
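As an illustration, a server could send the header with a PDF response like this (a minimal sketch; the exact way you configure the header depends on your web server):

  HTTP/1.1 200 OK
  Content-Type: application/pdf
  X-Robots-Tag: noindex, noarchive

On Apache with mod_headers enabled, for example, a Header set X-Robots-Tag "noindex, noarchive" directive inside a Files or FilesMatch block applies the header to matching files.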

Learn more about keeping content out of Google.

Why was my URL removal request denied?

Next to the denied request you should see a stop sign icon or a link to learn more, which explains why your specific removal request was denied.

Make sure your URL met the requirements for removal. If you still have questions, post a new question in our Help Community with the details, including what you were trying to remove and what the denial reason says.

How do I report spam, paid links, or malware?

If you find information in Google's search results that you believe results from spam, paid links, or malware, you can report it to us.

Can't find the answer?

If you can't find the answer to your question on this page, check out Google's help resources for site owners.

We also provide official Google Search Central help communities in the following languages: English, Deutsch, Español, Français, Italiano, Nederlands, Polski, Português, Türkçe, Русский, العربية, 中文(简体), 日本語, 한국어.