
Commit 81eb20e

blog: llms.txt is overhyped (#76)
1 parent ded8f8a commit 81eb20e

File tree

2 files changed

+98
-0
lines changed


blog/images/llms-txt.jpg

62.1 KB

blog/llms-txt-overhyped.md

Lines changed: 98 additions & 0 deletions
@@ -0,0 +1,98 @@
---
template: '../@theme/templates/BlogPost'
title: llms.txt is overhyped
description: We built it, tested it, and checked the logs. llms.txt isn’t the “robots.txt for AI” — it’s mostly ignored. Here’s what actually matters.
seo:
  title: llms.txt is overhyped
  description: Redocly experimented with llms.txt and found it mostly smoke, not fire. See the results, the logs, and what really matters for docs + AI.
  image: ./images/llms-txt.jpg
author: adam-altman
date: 2025-08-20
categories:
  - developer-experience
  - learning
  - company-update
  - dev-portal
image: llms-txt.jpg
---
# llms.txt is overhyped

Every now and then, the industry invents a new “standard” that's supposed to solve everything.
Right now the hype train is parked at llms.txt.
People call it the _robots.txt_ for AI.
Cute analogy.
The problem is: it doesn't actually work that way.
## We built it anyway

At Redocly, we like to experiment.
So we added automatic llms.txt support to our platform.
Turn it on, it generates the file.
Easy.
We even ran a full [Phronesis project](./phronesis.md) on it — testing across models, prompts, and scenarios.
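For context, the proposed llms.txt format is deliberately simple: an H1 with the site name, a blockquote summary, then H2 sections listing links to Markdown pages. A generated file looks roughly like this (the site and paths below are made up for illustration, not our actual output):

```text
# Example Docs

> API reference and guides for the Example platform.

## Docs

- [Quickstart](https://docs.example.com/quickstart.md): Get a project running in five minutes.
- [API reference](https://docs.example.com/api.md): Full endpoint reference.

## Optional

- [Changelog](https://docs.example.com/changelog.md): Release history.
```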

The results? Pretty underwhelming.

- Unless you explicitly paste the llms.txt file into the LLM, it doesn't do anything.
- When you do paste it, you'd get better results from just pasting the actual Markdown docs.
- No model we tested spontaneously “read” or respected llms.txt on its own.

That's not a governance breakthrough.
That's a parlor trick.

## The logs don't lie

We also pulled logs.
How often are `llms.txt` and `llms-full.txt` even being accessed?
Answer: basically never.
When they are, it looks like someone experimenting in a single LLM session, not systematic use by the models.
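You can run the same check on your own traffic in a few lines. A minimal sketch in Python, assuming combined-format (nginx/Apache-style) access log lines; the sample lines here are invented for illustration, not real traffic:

```python
import re
from collections import Counter

# Matches the request path in a combined-format access log line
# (an assumption about your log format; adjust for your server).
LINE_RE = re.compile(r'"(?:GET|HEAD) (\S+) HTTP/[\d.]+"')

def llms_txt_hits(lines):
    """Count requests for llms.txt / llms-full.txt, keyed by path."""
    hits = Counter()
    for line in lines:
        m = LINE_RE.search(line)
        if m and re.fullmatch(r"/llms(-full)?\.txt", m.group(1)):
            hits[m.group(1)] += 1
    return hits

# Illustrative log lines:
sample = [
    '1.2.3.4 - - [20/Aug/2025:10:00:00 +0000] "GET /llms.txt HTTP/1.1" 200 512 "-" "curl/8.0"',
    '1.2.3.4 - - [20/Aug/2025:10:00:05 +0000] "GET /docs/index.html HTTP/1.1" 200 9000 "-" "Mozilla/5.0"',
    '5.6.7.8 - - [20/Aug/2025:11:00:00 +0000] "GET /llms-full.txt HTTP/1.1" 200 80000 "-" "python-requests/2.31"',
]

print(llms_txt_hits(sample))  # Counter({'/llms.txt': 1, '/llms-full.txt': 1})
```

Point it at your real log file instead of the sample list, and tally user agents the same way if you want to see who (or what) is actually fetching the file.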

Michael O'Neill at the University of Iowa checked too — same conclusion: [don't lose sleep over llms.txt](https://www.linkedin.com/pulse/dont-worry-llmstxt-yet-maybe-ever-michael-o-neill-huifc/?trackingId=epKdG7eoRpmJ5sNO7dFNwQ%3D%3D).

## The silver lining

The best thing about building llms.txt wasn't llms.txt. It was what came after:

- one-click copy of any page in Markdown,
- links you can drop straight into ChatGPT or Claude,
- smooth handoff from docs → AI assistant.

That's useful today.
That's how people actually want to interact with docs in an AI-first world.
## A tale of two experiments

Not all experiments flop.
Last week, we ran another Phronesis project, this time on two of our new MCP features (not yet public).
The difference was night and day.

With Docs MCPs, we saw real value.
They made docs instantly more useful inside AI workflows.
The debriefs weren't full of head-scratching like with llms.txt — they were full of smiles.

That's the difference between smoke and fire.
llms.txt is smoke.
Docs MCPs are fire.

## What really matters

Focus on making good content.

If we want content governance in AI, it won't come from a text file no one reads.
It'll come from:

- licensing,
- attribution,
- legal clarity,
- real standards AI companies can't ignore.

Until then, `llms.txt` is just… there.
More checkbox than standard.

## My take

We tried it. We measured it.
We learned from it.
And now we can say it out loud:
**llms.txt is overhyped.**

The sooner we move past the illusion, the sooner we can focus on solutions that actually matter.
