Google Search Console robots.txt crawl errors

When verifying your site with Google Search Console, you may see a message stating that some of your site's URLs are restricted by robots.txt. This is completely normal. We ask Google not to crawl these pages because they're for internal use only or display duplicate content.

A robots.txt file tells search engines which parts of your site they shouldn't crawl. All Squarespace sites use the same robots.txt file. This helps us follow SEO best practices and keep your site Google-friendly.

If the slugs listed in this guide appear in the message, you can safely ignore them.

We ask Google not to crawl the following pages because they're for internal use only. For example, /config/ is your admin login page, and the /api/ rule blocks crawling of our analytics tracking cookie. A sketch of how these slugs appear as robots.txt rules follows the list.

  • /config/
  • /api/
  • /static/
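
These slugs correspond to Disallow rules in the robots.txt file. Below is a simplified sketch of how such rules are written; it isn't the exact file Squarespace serves, which contains additional entries:

    # Illustrative excerpt only; not the full Squarespace robots.txt file.
    User-agent: *
    Disallow: /config/
    Disallow: /api/
    Disallow: /static/

A rule like Disallow: /config/ tells compliant crawlers, such as Googlebot, not to request any URL that begins with that path.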

We also ask Google not to crawl the following URL patterns because they're filtered views (by author, tag, category, and so on) that display duplicate content. Since duplicate content may negatively impact SEO, we exclude them. A sketch for checking whether a URL matches these patterns follows the list.

  • /*?author=*
  • /*&author=*
  • /*?tag=*
  • /*&tag=*
  • /*?category=*
  • /*&category=*
  • /*?month=*
  • /*&month=*
  • /*?view=*
  • /*&view=*
  • /*?format=*
  • /*&format=*
  • /*?reversePaginate=*
  • /*&reversePaginate=*
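
If you're curious whether a specific URL falls under these patterns, you can test its query string. The following sketch is illustrative only, written in Python with the standard library, and assumes the parameter names listed above; it isn't code Squarespace provides:

    # Illustrative sketch: test whether a URL uses a query parameter that the
    # robots.txt patterns above exclude from crawling.
    from urllib.parse import urlsplit, parse_qs

    EXCLUDED_PARAMS = {
        "author", "tag", "category", "month", "view", "format", "reversePaginate",
    }

    def is_excluded_view(url):
        # Parse the query string (everything after ? or &) into parameter names.
        params = parse_qs(urlsplit(url).query)
        return any(name in EXCLUDED_PARAMS for name in params)

    print(is_excluded_view("https://example.com/blog?tag=travel"))  # True: a tag view
    print(is_excluded_view("https://example.com/blog/my-post"))     # False: a regular page

A URL that returns True here shows content that already appears on its canonical page, which is why these views are excluded.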

To see the complete list of excluded pages, view your site's robots.txt file.
