Google Search Console robots.txt crawl errors

When verifying your site with Google Search Console, you may see a message stating that some of your site's URLs are restricted by robots.txt. This is completely normal. We ask Google not to crawl these pages because they're for internal use only or display duplicate content that can count against a site's SEO.

A robots.txt file tells search engines which parts of your site they shouldn't crawl. All Squarespace sites use the same robots.txt file, including Squarespace.com itself. This helps us follow SEO best practices and keep your site Google-friendly.

If the message lists any of the slugs covered in this guide, you can safely ignore them.

We ask Google not to crawl these pages because they’re for internal use only. For example, /config/ is your Admin login page, and /api/ blocks our Analytics tracking cookie.

  • /config/
  • /api/
  • /static/
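
In the robots.txt file itself, exclusions like these appear as Disallow rules. The snippet below is a simplified sketch of how such rules are typically written; the exact contents of your site's file may differ:

  User-agent: *
  Disallow: /config/
  Disallow: /api/
  Disallow: /static/

The User-agent: * line means the rules apply to all crawlers, and each Disallow line blocks URLs that begin with the listed path. You can view your site's actual file by adding /robots.txt to the end of your domain.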

We ask Google not to crawl these URLs because they're alternate, indexed views that display duplicate versions of your content. Because duplicate content can negatively impact SEO, we exclude these views.

  • /*?author=*
  • /*&author=*
  • /*?tag=*
  • /*&tag=*
  • /*?category=*
  • /*&category=*
  • /*?month=*
  • /*&month=*
  • /*?view=*
  • /*&view=*
  • /*?format=*
  • /*&format=*
  • /*?reversePaginate=*
  • /*&reversePaginate=*
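
In robots.txt syntax, the * wildcard matches any sequence of characters, which is why each slug above appears in a pair: the ? version blocks a URL where the parameter is the first item in the query string, and the & version blocks it when the parameter appears later. A simplified sketch of how the tag rules might look:

  User-agent: *
  Disallow: /*?tag=*
  Disallow: /*&tag=*

For illustration, a hypothetical URL like /blog?tag=travel would match the first rule, while /blog?month=05-2024&tag=travel would match the second.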

For more help with Google Search Console, visit Verifying your site with Google Search Console.
