Scan settings
Here is where things get serious. First, you decide whether to run the first crawl immediately.
Then you set the rules; a rough configuration sketch follows the list. You can choose:
- Whether the scan should follow the rules contained in the robots.txt file or ignore them (and thus also analyze pages that robots.txt would otherwise block);
- The crawl depth, i.e., how many sublevels the bot should crawl (up to 15 levels deep);
- How many resources – pages, images, links – the crawler should scan each second (it can go as high as 2,000 resources per second);
- The user-agent: you can perform the scan with our proprietary bot, with one of Google's many spiders, with another search engine's crawler (Yandex, Bing, Yahoo), or even simulate a user on one of the common internet browsers;
- The scan speed: of course, the faster the crawl, the less accurate it will be.
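To make these options concrete, here is a minimal sketch of how such a scan profile could be expressed in code. The field names (`respect_robots_txt`, `max_depth`, and so on) are purely illustrative assumptions, not the tool's actual API; the values mirror the limits mentioned above.

```python
from dataclasses import dataclass

# Hypothetical scan profile; field names are illustrative, not the tool's real API.
@dataclass
class ScanSettings:
    respect_robots_txt: bool = True        # False also crawls pages blocked by robots.txt
    max_depth: int = 15                    # sublevels to crawl (the tool allows up to 15)
    max_resources_per_second: int = 2000   # pages, images, links per second (upper bound)
    user_agent: str = "ProprietaryBot"     # or a Google spider, Bing, Yandex, a browser UA...
    speed: str = "balanced"                # faster scans trade away some accuracy

# Example: a cautious first crawl that honors robots.txt and throttles itself.
first_crawl = ScanSettings(
    respect_robots_txt=True,
    max_depth=5,
    max_resources_per_second=200,
    user_agent="Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    speed="slow",
)
```

A conservative profile like this one is a sensible starting point: it keeps the crawl polite toward the target server, and you can raise the depth and rate on later scans once you know how the site responds.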
Once you’ve finished, click on the Next button to add the keywords.