Markdown is a lightweight, plain-text markup language that is easily readable by both humans and machines. One of the newest search-visibility tactics is to serve a Markdown version of web pages to generative AI bots. The aim is to make the content easier for bots to fetch by reducing the crawl resources required, thereby encouraging them to access the page.
I’ve seen isolated tests by search optimizers showing an increase in visits from AI bots after implementing Markdown, although none translated into better visibility. A few off-the-shelf tools, such as Cloudflare’s, make implementation easier.
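For illustration, the core of the tactic is content negotiation by user agent: requests from known AI crawlers get a Markdown rendition while everyone else gets the normal HTML page. The sketch below is a minimal, hypothetical version of that routing; the bot tokens are real crawler names, but the function, file names, and matching logic are illustrative assumptions, not any vendor's implementation.

```python
# Hypothetical sketch of user-agent routing for AI crawlers.
# Bot tokens below are real crawler User-Agent substrings; the
# variant file names and routing logic are illustrative only.

AI_BOT_TOKENS = ("GPTBot", "ClaudeBot", "PerplexityBot")

def pick_variant(user_agent: str) -> str:
    """Return which page variant to serve for a given User-Agent header."""
    if any(token in user_agent for token in AI_BOT_TOKENS):
        return "page.md"    # lightweight Markdown rendition for AI bots
    return "page.html"      # full HTML experience for human visitors

# pick_variant("Mozilla/5.0 ... GPTBot/1.1") -> "page.md"
# pick_variant("Mozilla/5.0 (Windows NT 10.0)") -> "page.html"
```

In practice a CDN rule or middleware would do this check per request; the point is simply that the server branches on the User-Agent header, which is exactly the mechanism the cloaking discussion below turns on.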
Serving separate versions of a page to people and bots is not new. Called “cloaking,” the tactic has long been considered spam under Google’s Search Central guidelines.
The AI scenario is different, however, because it’s not an attempt to manipulate algorithms but rather an effort to make a page easier for bots to access and read.
Effective?
That doesn’t make the tactic effective, however. Think carefully before implementing it, for the following reasons.
- Functionality. The Markdown version of a page may not function correctly. Buttons, in particular, could fail.
- Architecture. Markdown pages can lose essential elements, such as a footer, header, internal links (“related products”), and user-generated reviews via third-party providers. The effect is to remove critical context, which serves as a trust signal for large language models.
- Abuse. If the Markdown tactic becomes mainstream, sites will inevitably inject unique product data, instructions, or other elements for AI bots only.
Creating unique pages for bots often dilutes essential signals, such as link authority and branding. A much better approach has always been to create sites that are equally friendly to humans and bots.
Moreover, a goal of LLM agents is to interact with the web as humans do. Serving them a different version defeats that purpose.
Representatives of Google and Bing echoed this sentiment a few weeks ago. John Mueller is Google’s senior search analyst:
LLMs have trained on – read & parsed – normal web pages since the beginning, it seems a given that they have no problems dealing with HTML. Why would they want to see a page that no user sees?
Fabrice Canel is Bing’s principal product manager:
… really want to double crawl load? We’ll crawl anyway to check similarity. Non-user versions (crawlable AJAX and like) are often neglected, broken. Human eyes help fix people- and bot-viewed content.