LLMs as index funds

Venkatesh Rao draws an analogy between LLMs and index funds:

Foundation models like GPT and Claude now serve as the index funds of language. Trained on enormous corpora of human text, they do not try to innovate. Instead, they track the center of linguistic gravity: fluent, plausible, average-case language. They provide efficient, scalable access to verbal coherence, just as index funds offer broad exposure to market returns. For most users, most of the time, this is enough. LLMs automate fluency the way passive investing automates exposure. They flatten out risk and elevate reliability.

But they also suppress surprise. Like index funds, LLMs are excellent at covering known territory but incapable of charting new ground. The result is a linguistic landscape dominated by synthetic norms: smooth, predictable, uncontroversial. Writing with an LLM is increasingly like buying the market—safe, standardized, and inherently unoriginal.

In this new environment, the act of writing raw, unassisted text begins to resemble picking penny stocks. It’s risky, inefficient, and potentially seen as naïve. Yet it remains the only place where genuine linguistic alpha—the surplus value of originality—can be found. Alpha lives in human voice, conceptual invention, emotional charge, and expressive risk. It emerges from the irreducible tensions of context, personality, and thought. And like financial alpha, it is quickly absorbed and neutralized by the systems it disrupts. What begins as a surprise becomes a template; what once felt radical becomes the new benchmark.

As a result, the most original language is retreating into private markets. In Substacks, Signal threads, Discord servers, and private memos, new forms are being tested in semi-anonymous, high-context settings. These are the linguistic equivalents of venture capital and private equity—spaces of risk, scarcity, and concentrated attention. Just as companies now avoid going public too soon, writers may delay or even refuse public release, fearing dilution or misappropriation. Only once an idea matures might it “IPO” into the public sphere—perhaps as a viral tweet, a manifesto, or a cultural phrase. But even then, its time is limited: LLMs will soon flatten it into beta.

The passage above is from Venkatesh Rao's AI slop writing, in which he shares a "recipe" (a set of high-level ideas) that he uses to generate posts with an LLM. I didn't realise I was reading AI slop until I reached the Recipe section.
