Googlebot is busy. The web spider is responsible for crawling more than 130 trillion pages. If each web page took just one second to load, Googlebot would face more than four million years' worth of page loading and processing to fetch each page once (130 trillion seconds divided by roughly 31.5 million seconds per year is about 4.1 million years).
This may not be a problem for a product detail page. Such a page will likely change little over time and remain in place for a long while, so waiting an extra few days to be indexed may be an acceptable delay. But an online store might want a new sale page or a holiday buying guide to appear in Google's index and relevant SERPs as soon as possible.
Crawl, Render, Index
Crawler. First, Googlebot gets the address for a page (say, a category page on an ecommerce store) from the crawl queue and follows the URL. Assuming the page is not blocked via robots.txt, Googlebot parses the page. On the diagram above, this is the “crawler” stage.
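As an example, a site can block crawling with a robots.txt rule such as the one below; the /checkout/ path here is hypothetical. A page blocked this way is never fetched, so nothing on it is parsed or indexed from that URL.

```
User-agent: Googlebot
Disallow: /checkout/
```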
At the crawler stage, any new links (URLs) that Googlebot discovers are sent back to the crawl queue. The HTML content on the parsed page may then be indexed.
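In code terms, the crawler stage behaves like a work queue that feeds itself. The sketch below is a conceptual illustration only, not Google's actual pipeline; example.com and the regex-based link extraction are stand-ins.

```js
// Conceptual crawl loop: fetch a URL, pull out links, queue the new ones.
// Runnable in Node 18+ as an ES module; an illustration, not Googlebot itself.
const crawlQueue = ["https://example.com/"];
const seen = new Set(crawlQueue);

// Crude href extraction; a real crawler parses the DOM rather than using a regex.
const extractLinks = (html, base) =>
  [...html.matchAll(/href="([^"]+)"/g)].map((m) => new URL(m[1], base).href);

while (crawlQueue.length > 0) {
  const url = crawlQueue.shift();
  const html = await (await fetch(url)).text();
  for (const link of extractLinks(html, url)) {
    if (!seen.has(link)) {
      seen.add(link);        // remember the URL so it is queued only once
      crawlQueue.push(link); // newly discovered URLs go back to the queue
    }
  }
  // At this point the fetched HTML would be handed off for indexing.
}
```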
“How long it takes for Google to render your pages depends on many different factors, and we can’t make any guarantees here,” said Splitt.
Essentially, the page is placed in a render queue, where it must wait its turn.
With a server-rendered page, Googlebot would see many or most of the links when it initially parses the HTML; with a client-rendered (JavaScript-driven) page, it would not discover them until after the page is rendered.
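To make the contrast concrete, here is a minimal, hypothetical sketch (the /holiday-guide URL is invented). In the first snippet the link exists in the initial HTML response, so it can be queued at the crawler stage; in the second, the link exists only after JavaScript runs, so it cannot be discovered until the page passes through the renderer.

```html
<!-- Server-rendered: the link is in the initial HTML response. -->
<a href="/holiday-guide">Holiday buying guide</a>

<!-- Client-rendered: the link is created by JavaScript at runtime. -->
<nav id="menu"></nav>
<script>
  const link = document.createElement("a");
  link.href = "/holiday-guide";
  link.textContent = "Holiday buying guide";
  document.getElementById("menu").appendChild(link);
</script>
```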
- “Make sure Google can see lazy-loaded content,” by Google
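On that last point, lazy-loaded content is only indexable if it becomes visible to the renderer without user interaction. Google's guide recommends, among other approaches, loading content when it enters the viewport with an IntersectionObserver rather than on scroll events, which Googlebot does not fire. A minimal sketch, assuming images carry a hypothetical data-src attribute:

```html
<img class="lazy" data-src="/images/product.jpg" alt="Product photo">
<script>
  // Swap in the real image URL once the element approaches the viewport.
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        entry.target.src = entry.target.dataset.src; // reads data-src
        obs.unobserve(entry.target); // each image only needs loading once
      }
    }
  });
  document.querySelectorAll("img.lazy").forEach((img) => observer.observe(img));
</script>
```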