Google explains crawl budget and how website outages factor in, handles more than a trillion searches, and much more.
In 2017, Google’s Gary Illyes published a blog post titled “What Crawl Budget Means for Googlebot.” In it, he explains what crawl budget is, how the crawl rate limit works, what crawl demand is, and which factors affect a site’s crawl budget.
Illyes explained that most websites don’t need to worry about crawl budget; it matters mainly for large sites.
“Prioritizing what to crawl, when, and how much resource the server hosting the website can allocate to crawling is more important for bigger sites, or those that auto-generate pages based on URL parameters,” Illyes said.
Google says that crawl rate and crawl demand together determine Googlebot’s crawl budget for your site.
Here’s a brief summary of the post, but I suggest reading the full article.
The crawl rate limit is designed to keep Google from crawling your website too often or too fast, at a pace that could harm your server.
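To make the idea concrete, a crawl rate limit can be sketched as a token-bucket throttle. This is a hypothetical illustration of the concept, not Google’s actual implementation; the class name and rate parameter are my own:

```python
import time


class CrawlRateLimiter:
    """Token-bucket sketch: cap how many fetches per second a crawler
    makes against one host so it doesn't overload the server."""

    def __init__(self, max_requests_per_second: float):
        self.capacity = max_requests_per_second
        self.tokens = max_requests_per_second
        self.last_refill = time.monotonic()

    def allow_fetch(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to the time elapsed since last check.
        self.tokens = min(
            self.capacity,
            self.tokens + (now - self.last_refill) * self.capacity,
        )
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # budgeted rate exhausted: the crawler should back off
```

A crawler would call `allow_fetch()` before each request and wait whenever it returns `False`; raising the server’s capacity (faster responses, fewer errors) is what lets the limit go up.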
Crawl demand is how much Google wants to crawl your site. It is determined by how popular your pages are and how stale your content has become in Google’s index.
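Those two signals can be sketched as a simple scoring function. This is purely illustrative; the function, its weights, and the 30-day staleness scale are assumptions, not anything Google has published:

```python
from datetime import datetime


def crawl_demand_score(popularity: float, last_crawled: datetime,
                       now: datetime) -> float:
    """Hypothetical sketch: demand grows with a page's popularity and
    with how stale its indexed copy is (days since the last crawl)."""
    staleness_days = (now - last_crawled).days
    # Scale staleness so a month without a crawl doubles the score.
    return popularity * (1 + staleness_days / 30)
```

Under this toy model, a popular page that hasn’t been crawled in a month scores twice as high as one crawled today, so the crawler revisits it sooner.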
Crawl budget takes crawl rate and crawl demand together. Google defines crawl budget as “the number of URLs Googlebot can and wants to crawl.”
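One way to model that definition is that the effective budget is bounded by both limits at once: what the server can handle (rate) and what Google wants (demand). A minimal sketch, with both inputs expressed as URL counts of my own choosing:

```python
def crawl_budget(crawl_rate_capacity: int, crawl_demand: int) -> int:
    """Hypothetical sketch: Googlebot crawls no more URLs than the
    server can serve, and no more than it actually wants to fetch."""
    return min(crawl_rate_capacity, crawl_demand)
```

So a fast server with little demand still gets few crawls, and a high-demand site on a slow server is capped by the rate limit.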