When verifying your site with Google Search Console, you'll see a message showing that parts of your URL are restricted by robots.txt. This is completely normal. We ask Google not to crawl these pages because they’re for internal use only or display duplicate content.
A robots.txt file tells search engines which parts of your site they shouldn't crawl. All Squarespace sites use the same robots.txt file, which helps us follow SEO best practices and keep your site Google-friendly.
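Robots.txt files use a simple plain-text syntax: a User-agent line names a crawler (or * for all crawlers), and each Disallow line lists a path that crawler shouldn't visit. As a minimal, illustrative fragment (not Squarespace's actual file), using the /config/ and /api/ slugs mentioned in this guide:

```txt
# Illustrative fragment only, not Squarespace's actual robots.txt
User-agent: *
Disallow: /config/
Disallow: /api/
```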
If the message lists any of the slugs covered in this guide, you can safely ignore it.
We ask Google not to crawl these pages because they're for internal use only. For example, /config/ is your admin login page, and /api/ handles internal features like our Analytics tracking cookie.
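If you want to confirm how a crawler interprets these rules, you can test them with Python's standard urllib.robotparser module. This is a sketch with an illustrative rule set and a hypothetical example.com domain, not Squarespace's actual robots.txt:

```python
# Sketch: check whether a crawler may fetch a path under a given
# robots.txt, using Python's standard library. The rules below
# mirror the example slugs from this guide and are illustrative only.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /config/
Disallow: /api/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# /config/ is disallowed for all crawlers, including Googlebot
print(rp.can_fetch("Googlebot", "https://example.com/config/"))  # False
# a regular content page is not blocked
print(rp.can_fetch("Googlebot", "https://example.com/about"))    # True
```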
We also ask Google not to crawl certain pages because they're indexed views, pages that organize content that already exists elsewhere on your site. Excluding them from search prevents them from outranking your original content in Google results.
View a complete list of excluded pages in the robots.txt file.