Technical SEO

Google’s JavaScript SERPs Impact Trackers, AI

Google now requires JavaScript-enabled browsers to view the content of search result pages, effectively “hiding” the listings from organic rank trackers, artificial intelligence models, and other optimization tools.

The world’s most popular search engine began requiring JavaScript on search pages last month. Google stated the move aimed to protect its services from bots and “abuse,” perhaps a thinly veiled allusion to competitive AI.

These changes could complicate search engine optimization in at least three ways: rank tracking, keyword research, and AI visibility.

Screenshot: a Google Search page with the notice "Turn on JavaScript to keep searching." Google Search now requires browsers to have JavaScript enabled.

Impact of JavaScript

Web crawlers can scrape and index JavaScript-enabled pages even when the JavaScript itself renders the content. Googlebot does this, for example.

A web-scraping bot grabs the content of an HTML page in four steps, more or less:

  • Request. The crawler sends a simple HTTP GET request to the URL.
  • Response. The server returns the HTML content.
  • Parse. The crawler parses (analyzes) the HTML, gathering the content.
  • Use. The content is passed on for storage or use.
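Here is a minimal sketch of those four steps in Python, using the requests and BeautifulSoup libraries referenced in the comparison table further down. The URL and user-agent string are placeholders, and real crawlers add politeness controls (robots.txt checks, rate limiting, retries) omitted here.

```python
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/some-page"  # hypothetical target

# 1. Request: a plain HTTP GET.
response = requests.get(
    URL, headers={"User-Agent": "example-crawler/0.1"}, timeout=10
)

# 2. Response: the server returns the full HTML document.
html = response.text

# 3. Parse: extract the content of interest from the static markup.
soup = BeautifulSoup(html, "html.parser")
title = soup.title.string if soup.title else ""
links = [a.get("href") for a in soup.find_all("a", href=True)]

# 4. Use: hand the extracted data off for storage or reporting.
print(title, len(links))
```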

For example, before the JavaScript switch, bots from Ahrefs and Semrush crawled Google SERPs. A bot could visit the SERP for, say, “men’s running shoes,” parse the HTML, and use the data to produce rank-tracking and traffic reports.

The process is considerably more complicated with JavaScript.

  • Request. The crawler sends a simple HTTP GET request to the URL.
  • Response. The server returns a basic HTML skeleton, often without much content (e.g., <div id="app"></div>).
  • Execute. The crawler renders the page in a headless browser such as Puppeteer, Playwright, or Selenium to run the JavaScript and load dynamic content.
  • Wait. The crawler waits for the page to load, including API calls and data updates. A few milliseconds might seem insignificant, but it slows down the crawlers and adds costs.
  • Parse. The crawler parses the dynamic and static HTML, gathering the content as before.
  • Use. The content is passed on for storage or use.
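A comparable sketch of the heavier JavaScript workflow follows, here using Playwright's synchronous API with a headless Chromium browser (one of the tools named above). The URL is again a placeholder, and real crawlers must also handle timeouts, blocking, and CAPTCHAs.

```python
from playwright.sync_api import sync_playwright
from bs4 import BeautifulSoup

URL = "https://example.com/js-rendered-page"  # hypothetical target

with sync_playwright() as p:
    # Execute: launch a headless browser and request the page.
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto(URL)

    # Wait: let scripts run, API calls finish, and the DOM settle.
    page.wait_for_load_state("networkidle")

    # Parse: the rendered DOM now includes the dynamic content.
    html = page.content()
    browser.close()

soup = BeautifulSoup(html, "html.parser")
# Use: hand the rendered content off for storage or reporting.
print(soup.get_text(" ", strip=True)[:200])
```

The launch, navigation, and wait steps are exactly the extra work — and extra CPU and RAM — that static scraping avoids.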

The two additional steps — Execute and Wait — are far from trivial since they require full browser simulation and thus much more CPU and RAM. Some have estimated that JavaScript-enabled crawling takes three to 10 times more computing resources than scraping static HTML.

| Feature | HTML scraping | JavaScript scraping |
|---|---|---|
| Initial response | Full HTML content | Minimal HTML with placeholders |
| JavaScript execution | Not required | Required |
| Tools | Requests, BeautifulSoup, Scrapy | Puppeteer, Playwright, Selenium |
| Performance | Faster, lightweight | Slower, resource-heavy |
| Content availability | Static content only | Both static and dynamic content |
| Complexity | Low | High |

It is worth clarifying that Google does not necessarily render the entire SERP with JavaScript; it simply requires visitors' browsers to have JavaScript enabled, which has essentially the same impact on crawlers.

The time and resources needed to crawl a SERP vary greatly, so the impact of Google's new JavaScript requirement on any particular tool is, at best, an educated guess.

Rank tracking

Marketers use organic rank-tracking tools to monitor where a web page appears on Google SERPs — listings, featured snippets, knowledge panels, local packs — for target keywords.
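Once a SERP has been fetched and parsed, the tracking step itself is simple; the crawling and rendering are what get costlier. Here is a simplified, hypothetical sketch — the domains and results are invented for illustration, and real tools also handle featured snippets, local packs, pagination, and localization.

```python
from urllib.parse import urlparse

def find_rank(result_urls: list[str], tracked_domain: str) -> int | None:
    """Return the 1-based position of tracked_domain in the organic results, or None."""
    for position, url in enumerate(result_urls, start=1):
        if urlparse(url).netloc.endswith(tracked_domain):
            return position
    return None

# Hypothetical parsed results for the query "men's running shoes".
results = [
    "https://www.example-shoes.com/mens-running",
    "https://running.example.org/best-shoes",
    "https://store.example.net/shoes/mens",
]
print(find_rank(results, "running.example.org"))  # -> 2
```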

Semrush, Ahrefs, and other tools crawl millions, if not billions, of SERPs monthly. Rendering and parsing those dynamic results pages could raise costs significantly, perhaps fivefold.

For marketers, this potential increase might mean tracking tools become more expensive, less accurate if providers crawl SERPs less frequently, or both.

Keyword research

Google’s JavaScript requirement may also impact keyword research since identifying relevant, high-traffic keywords could become imprecise and more costly.

These changes may force marketers to find other ways to identify content topics and keyword gaps. Kevin Indig, a respected search engine optimizer, suggested that marketers turn to page- or domain-level traffic metrics if keyword data becomes unreliable.

AI models

The hype surrounding AI engines reminds me of voice search a few years ago, although the former is becoming much more transformative.

AI models likely crawled Google results to discover pages and content. An AI model asked to find the best running shoe for a 185-pound male might scrape a Google SERP and follow links to the top 10 sites. Thus some marketers expected a halo effect from ranking well on Google.
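A speculative sketch of that SERP-seeded discovery pattern, assuming the results-page HTML has already been obtained (e.g., via a headless render, as above); the placeholder HTML, link selection, and top-10 cutoff are illustrative only.

```python
import requests
from bs4 import BeautifulSoup

serp_html = "..."  # rendered SERP HTML obtained elsewhere (see the Playwright sketch)
soup = BeautifulSoup(serp_html, "html.parser")

# Pull the first ten outbound result links from the parsed SERP.
top_links = [a["href"] for a in soup.find_all("a", href=True)
             if a["href"].startswith("http")][:10]

pages = {}
for url in top_links:
    # Follow each result and collect the landing-page text for the model.
    resp = requests.get(url, timeout=10)
    pages[url] = BeautifulSoup(resp.text, "html.parser").get_text(" ", strip=True)
```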

But AI models must now spend extra time and computing power to parse Google’s JavaScript-driven results pages.

Wait and Adapt

As is often the case with Google’s changes, marketers must wait to gauge the JavaScript effect, but one thing is certain: SEO is changing.

Armando Roggio