Category: Artificial Intelligence

  • Thinking

    Dustin Curtis talking about how AI is impacting his—and possibly others’—thinking.

    I thought I was using AI in an incredibly positive and healthy way, as a bicycle for my mind and a way to vastly increase my thinking capacity. But LLMs are insidious–using them to explore ideas feels like work, but it’s not real work. Developing a prompt is like scrolling Netflix, and reading the output is like watching a TV show. Intellectual rigor comes from the journey: the dead ends, the uncertainty, and the internal debate. Skip that, and you might still get the insight–but you’ll have lost the infrastructure for meaningful understanding. Learning by reading LLM output is cheap. Real exercise for your mind comes from building the output yourself.

    The Netflix analogy hit hard.

  • Shallowness of LLMs

    Jason Cohen talking about how the knowledge of LLMs is wider, but shallower, than that of humans.

    It’s interesting how LLMs are wider than any human, but shallower than the best humans. (At many things; not, e.g. chess)

    It can’t do customer service as well as the best humans, but it can do it in 100 languages, which no human can.

    It can’t program as well as the best humans, but it can program in 100 languages and 1000 libraries, which no human can.

    It’s not as good at math or law or medicine or research or history as the best humans in each of those fields, but it is better than the median human in those fields.

  • Quoting Micha Kaufman

    Micha Kaufman’s email that is doing the rounds on the internet.

    You must understand that what was once considered ‘easy tasks’ will no longer exist; what was considered ‘hard tasks’ will be the new easy, and what was considered ‘impossible tasks’ will be the new hard.

  • LLMs as index funds

    Venkatesh Rao giving an analogy between LLMs and index funds.

    Foundation models like GPT and Claude now serve as the index funds of language. Trained on enormous corpora of human text, they do not try to innovate. Instead, they track the center of linguistic gravity: fluent, plausible, average-case language. They provide efficient, scalable access to verbal coherence, just as index funds offer broad exposure to market returns. For most users, most of the time, this is enough. LLMs automate fluency the way passive investing automates exposure. They flatten out risk and elevate reliability.

    But they also suppress surprise. Like index funds, LLMs are excellent at covering known territory but incapable of charting new ground. The result is a linguistic landscape dominated by synthetic norms: smooth, predictable, uncontroversial. Writing with an LLM is increasingly like buying the market—safe, standardized, and inherently unoriginal.

    In this new environment, the act of writing raw, unassisted text begins to resemble picking penny stocks. It’s risky, inefficient, and potentially seen as naïve. Yet it remains the only place where genuine linguistic alpha—the surplus value of originality—can be found. Alpha lives in human voice, conceptual invention, emotional charge, and expressive risk. It emerges from the irreducible tensions of context, personality, and thought. And like financial alpha, it is quickly absorbed and neutralized by the systems it disrupts. What begins as a surprise becomes a template; what once felt radical becomes the new benchmark.

    As a result, the most original language is retreating into private markets. In Substacks, Signal threads, Discord servers, and private memos, new forms are being tested in semi-anonymous, high-context settings. These are the linguistic equivalents of venture capital and private equity—spaces of risk, scarcity, and concentrated attention. Just as companies now avoid going public too soon, writers may delay or even refuse public release, fearing dilution or misappropriation. Only once an idea matures might it “IPO” into the public sphere—perhaps as a viral tweet, a manifesto, or a cultural phrase. But even then, its time is limited: LLMs will soon flatten it into beta.

    This is part of Venkatesh Rao’s AI slop writing, where he shares a “recipe”—a set of high-level ideas—that he uses to generate posts with an LLM. I didn’t realise I was reading AI slop until I reached the Recipe section.

  • AI can empower developers to rewrite code without regret

    Matthew Sinclair talking about how AI can help programmers scrap the code they—or rather, the AI—have written and start over when they realise the approach won’t work, will lead to technical debt, or is flawed for any of a thousand other reasons, because new code can be generated fairly quickly.

    Working with Claude Code has fundamentally shifted how I think about the economics of programming time. Traditionally, coding involves three distinct “time buckets”:

    • Why am I doing this? Understanding the business problem and value 
    • What do I need to do? Designing the solution conceptually 
    • How am I going to do it? Actually writing the code 

    For decades, that last bucket consumed enormous amounts of our time. We’d spend hours, days or weeks writing, debugging, and refining. With Claude, that time cost has plummeted to nearly zero. I can generate thousands of lines of functional code in a sitting—something that is, frankly, mind-blowing.

    And there’s a new skill that emerges: wielding the knife. With code generation being essentially free, we need to become much more comfortable with throwing away entire solutions. The sunk cost fallacy hits programmers hard—we hate discarding code we’ve invested in, fearing we might break something important or never get back to a working state.

    But when your assistant can rewrite everything in minutes, that calculus changes completely. Three times during my backend project, I looked at substantial amounts of code—thousands of lines that technically worked—and decided to scrap it entirely because the approach wasn’t right. This wasn’t easy. My instinct was still to try to salvage and refactor. But the right move was to step back, rethink the approach, and direct the AI down a different path.

    This willingness to cut ruthlessly is a muscle most developers haven’t developed yet. It requires confidence in your architectural judgment and a radical shift in how you value implementation time.

  • Program vs Product

    Dylan Beattie explaining the problem with vibe coding by contrasting a program with a product.

    Probably the single most important lesson I’ve learned in my career, the thing that I would argue is the hallmark of “experience”, is understanding just how much work it takes to turn a working program into a viable product. It’s why developer estimates are so notoriously optimistic – and why experienced developers are so notoriously cynical. Let’s say you crank out a bit of code that’ll take responses from a web form and add them in an Excel spreadsheet. That’s not that hard… yay! we just built a Typeform competitor in one afternoon! Except, no, you didn’t. You made one thing work one time on one computer. You haven’t considered encoding, internationalization, concurrency, authentication, telemetry, billing, branding, mobile devices, deployment. You haven’t hit any of the weird limits yet – ever had a system work brilliantly for the first 65,535 requests and then fall over? You don’t have a product. At best, you have a proof-of-concept of a good idea that, if some very smart people work very hard, might become a viable product.
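
    That “65,535 requests” line is not a random number; it is the classic unsigned 16-bit overflow. Here is a minimal Python sketch of how such a “weird limit” bites, using a hypothetical request-ID counter of my own rather than anything from Beattie’s post:

    ```python
    # Hypothetical illustration of a "weird limit": a request ID stored in an
    # unsigned 16-bit field works for the first 65,535 requests, then silently
    # wraps to 0 and starts reusing IDs that earlier requests already hold.

    MASK = 0xFFFF  # what fits in an unsigned 16-bit column or struct field
    counter = 0

    def next_request_id() -> int:
        global counter
        counter = (counter + 1) & MASK  # wraps around instead of raising
        return counter

    seen = set()
    for n in range(1, 70_000):
        request_id = next_request_id()
        if request_id in seen:
            print(f"collision at request {n:,}: id {request_id} already issued")
            break
        seen.add(request_id)
    ```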

  • Programming in natural language

    Edsger W. Dijkstra explaining why using natural language for programming is a bad idea.

    In order to make machines significantly easier to use, it has been proposed (to try) to design machines that we could instruct in our native tongues. This would, admittedly, make the machines much more complicated, but, it was argued, by letting the machine carry a larger share of the burden, life would become easier for us. It sounds sensible provided you blame the obligation to use a formal symbolism as the source of your difficulties. But is the argument valid? I doubt.

    We know in the meantime that the choice of an interface is not just a division of (a fixed amount of) labour, because the work involved in co-operating and communicating across the interface has to be added. We know in the meantime —from sobering experience, I may add— that a change of interface can easily increase at both sides of the fence the amount of work to be done (even drastically so). Hence the increased preference for what are now called “narrow interfaces”. Therefore, although changing to communication between machine and man conducted in the latter’s native tongue would greatly increase the machine’s burden, we have to challenge the assumption that this would simplify man’s life.

    Instead of regarding the obligation to use formal symbols as a burden, we should regard the convenience of using them as a privilege: thanks to them, school children can learn to do what in earlier days only genius could achieve.

    I saw this post on Hacker News and this comment sums up Dijkstra’s point succinctly.

    Natural language has embedded limitations that stem from our own mental limitations: the human mind sometimes thinks in terms that are too abstract or too specific, and misses important details or generalizations.

    As a programmer, I know first hand that the problems or even absurdities of some assignments only become apparent after one has begun to implement the code as code, i.e. as strict symbolisms.

    Not to mention that it often takes more time to explain something accurately in natural language than it takes to just write the algorithm as code.
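
    To make that last point concrete, here is a hypothetical example of my own (not the commenter’s): a rule that takes a careful paragraph to pin down in English is a dozen unambiguous lines of code.

    ```python
    # The natural-language spec, roughly:
    #   "Go through the payments; for each customer keep only the most recent
    #    one, but skip refunds and anything before the cutoff date; return the
    #    customers ordered by that payment's amount, largest first, breaking
    #    ties by customer name."
    # Writing it as code forces every ambiguity (most recent by which date?
    # how exactly are ties broken?) to be settled explicitly.
    from datetime import date

    def top_recent_payments(payments: list[dict], cutoff: date) -> list[dict]:
        latest: dict[str, dict] = {}
        for p in payments:  # p has keys: customer, amount, date, is_refund
            if p["is_refund"] or p["date"] < cutoff:
                continue
            current = latest.get(p["customer"])
            if current is None or p["date"] > current["date"]:
                latest[p["customer"]] = p
        return sorted(latest.values(), key=lambda p: (-p["amount"], p["customer"]))
    ```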

  • Learn to code

    Kyrylo Silin reflecting on the importance of learning to code.

    AI is the new shiny toy everyone wants to play with. And to be honest, it’s impressive. The problem with AI is that with every year, it gets better and better. Wait what? How’s that a problem? Well, with every new year you lose control.

    The more you rely on AI, the less you understand what you’re doing. The less you understand, the more AI vendors can control you. And the more control they have, the more they can charge you. It’s a vicious cycle.

    This shift was inevitable. Humanity must adapt to this new reality. AI isn’t going away, and we need to learn how to use it to our advantage.

    The large language models (LLMs) created by tech giants have absorbed decades of knowledge — our knowledge. They’ve been trained on our work.

    Now they’re selling it back to us and telling us we only need to learn English to code. This is a lie. As a new programmer, I don’t want you to fall for it. There’s no corner-cutting. Get your shit together and learn to code.

    Ha!

    While I agree that programmers need to learn to code, the unfortunate reality is that AI vendors will charge us for code generated by models trained on our own work. The charge is for completing a function in a matter of minutes, sometimes even seconds. The charge is for speed and efficiency.

  • AI Einstein

    Thomas Wolf arguing why we won’t have an AI Einstein—at least not with AI in its current form.

    …the skill to ask the right questions and to challenge even what one has learned. A real science breakthrough is Copernicus proposing, against all the knowledge of his days -in ML terms we would say “despite all his training dataset”-, that the earth may orbit the sun rather than the other way around.

    To create an Einstein in a data center, we don’t just need a system that knows all the answers, but rather one that can ask questions nobody else has thought of or dared to ask. One that writes ‘What if everyone is wrong about this?’ when all textbooks, experts, and common knowledge suggest otherwise.

    Wonderful insight. I am pretty sure that if AI had been available before we discovered that Earth orbits the Sun—and not the other way around—it wouldn’t have dared to question the accepted knowledge of the day.

  • Knowledge collapse

    A long but worthwhile paper by Andrew J. Peterson, “AI and the Problem of Knowledge Collapse”.

    We identify conditions under which AI, by reducing the cost of access to certain modes of knowledge, can paradoxically harm public understanding. While large language models are trained on vast amounts of diverse data, they naturally generate output towards the ‘center’ of the distribution. This is generally useful, but widespread reliance on recursive AI systems could lead to a process we define as “knowledge collapse”, and argue this could harm innovation and the richness of human understanding and culture. However, unlike AI models that cannot choose what data they are trained on, humans may strategically seek out diverse forms of knowledge if they perceive them to be worthwhile. 
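
    The paper’s “center of the distribution” point is easy to see mechanically. Below is a toy simulation of my own (not the paper’s actual model): re-estimate a distribution over ideas from samples that mildly prefer the likely ones, as low-temperature generation does, and the tail loses mass with every generation.

    ```python
    # Toy sketch of recursive "knowledge collapse": each generation, the
    # distribution over ideas is re-learned from samples drawn with a slight
    # preference for the center (temperature < 1), so long-tail ideas fade.
    import numpy as np

    rng = np.random.default_rng(0)
    ideas = ["mainstream", "common", "niche", "fringe"]
    p = np.array([0.55, 0.30, 0.10, 0.05])  # the original "human" distribution

    def sharpen(p: np.ndarray, temperature: float = 0.7) -> np.ndarray:
        """Re-weight towards the mode, as low-temperature sampling does."""
        w = p ** (1.0 / temperature)
        return w / w.sum()

    for generation in range(5):
        q = sharpen(p)                                # model prefers the center
        samples = rng.choice(len(ideas), size=10_000, p=q)
        counts = np.bincount(samples, minlength=len(ideas))
        p = counts / counts.sum()                     # becomes the next "training data"
        print(generation, dict(zip(ideas, np.round(p, 3).tolist())))
    # Within a few generations, "niche" and "fringe" have all but vanished.
    ```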

    The author analyzes how our dependence on AI could limit our knowledge to a narrow subset of views—the views the AI was trained on—until the “long-tail” ideas are eventually forgotten. We see this happening right now when we ask DeepSeek, the Chinese AI model, about Tiananmen Square or Taiwan.

    With increasing integration of LLM-based systems, certain popular sources or beliefs which were common in the training data may come to be reinforced in the public mindset (and within the training data), while other “long-tail” ideas are neglected and eventually forgotten. 

    Such a process might be reinforced by an ‘echo chamber’ or information cascade effect, in which repeated exposure to this restricted set of information leads individuals to believe that the neglected, unobserved tails of knowledge are of little value.

    We have all seen the effect of echo chambers on social media. An echo chamber built on AI responses would only further solidify people’s beliefs—whether right or wrong.

    But knowledge collapse has been happening throughout human evolution.

    …traditional hunter-gatherers could identify thousands of different plants and knew their medicinal usages, whereas most humans today only know a few dozen plants and whether they can be purchased in a grocery store. This could be seen as a more efficient form of specialization of information across individuals, but it might also impact our beliefs about the value of those species or of a walk through a forest, or influence scientific or policy-relevant judgements.

    This will be a long-term effect of AI, the implications of which will become visible only after a decade or more.