Certification Crawler Information

The Alexa crawler (robot), which identifies itself as Alexabot in the HTTP “User-Agent” header field, requests pages on your website to make sure they contain the Alexa Certify Code. We perform this scan as a complimentary service for all Alexa Pro subscribers who have chosen to Certify the data on their website.

The Alexabot crawler looks for a file called “robots.txt”. Robots.txt is a file website administrators can place at the top level of a site to direct the behavior of Web-crawling robots. All of the major Web crawlers, such as those operated by Google, Bing, Baidu, and Yandex, respect this standard. Alexa Internet strictly adheres to it, and the Alexabot crawler always picks up a copy of a site’s robots.txt file before crawling that site.
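
As an illustration of how such a check works, here is a minimal sketch in Python using the standard library’s urllib.robotparser module; the site and page URLs are placeholders:

from urllib.robotparser import RobotFileParser

# Fetch and parse the site's robots.txt, as a compliant crawler does
# before requesting any other page on the site.
rp = RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")  # placeholder site
rp.read()

# True if a crawler identifying itself as "Alexabot" may fetch this page
print(rp.can_fetch("Alexabot", "https://www.example.com/index.html"))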

To make sure we can scan your site and Certify your data, your robots.txt file should contain an entry like this (the empty Disallow directive permits access to the entire site):

User-agent: Alexabot
Disallow:

If you want to block Alexabot from scanning certain parts of your site for the Certify Code, add lines like these to your robots.txt file:

User-agent: Alexabot
Disallow: /path/
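
If you want to confirm that rules like these behave as intended, you can test them locally. The following minimal sketch parses the example rules with Python’s urllib.robotparser, with no network access; the page URLs are placeholders:

from urllib.robotparser import RobotFileParser

# Parse the example rules directly instead of fetching them.
rp = RobotFileParser()
rp.parse([
    "User-agent: Alexabot",
    "Disallow: /path/",
])

print(rp.can_fetch("Alexabot", "https://www.example.com/path/page.html"))  # False: blocked
print(rp.can_fetch("Alexabot", "https://www.example.com/about.html"))      # True: still allowed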

For more information regarding robots, crawling, and robots.txt, visit the Web Robots Pages at www.robotstxt.org, an excellent source for the latest information on the Robots Exclusion Standard.