Category: Artificial Intelligence

  • Input risk in LLMs

    Doug Slater talking about input risk when using LLMs to code.

    An LLM does not challenge a prompt which is leading or whose assumptions are flawed or context is incomplete. Example: An engineer prompts, “Provide a thread-safe list implementation in C#” and receives 200 lines of flawless, correct code. It’s still the wrong answer, because the question should have been, “How can I make this code thread-safe?” and whose answer is “Use System.Collections.Concurrent” and 1 line of code. The LLM is not able to recognize an instance of the XY problem because it was not asked to.
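
    To make that contrast concrete, here is a minimal sketch (my illustration; the excerpt above contains no actual code): the hand-rolled answer the leading prompt produces, condensed to a few lines, next to the one-line answer the better question would have surfaced.

        // Hypothetical condensed version of the "200 lines" the leading prompt produces:
        // hand-rolled locking around a List<T>.
        using System.Collections.Concurrent;
        using System.Collections.Generic;

        public class LockedList<T>
        {
            private readonly List<T> _items = new();
            private readonly object _gate = new();

            public void Add(T item) { lock (_gate) _items.Add(item); }
            public T[] Snapshot()   { lock (_gate) return _items.ToArray(); }
            // ...a real version also needs removal, Contains, enumeration, and so on.
        }

        public static class Demo
        {
            public static void Main()
            {
                // The answer to the question that should have been asked:
                // the framework already ships thread-safe collections.
                var items = new ConcurrentBag<int>(); // the "1 line of code"
                items.Add(42);
            }
        }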

    The post covers a lot more ground on the risks involved with LLM-generated code. Another thing that caught my attention was:

    LLMs accelerate incompetence.

    Simon Willison talks about the other side when he says:

    LLMs amplify existing expertise

    The conclusion is: If you are smart, LLMs can make you—or at least make you sound—smarter. If you are dumb, LLMs will make you dumber, without you ever knowing.

  • Addiction to… vibe coding

    Fred Benenson talking about how you can get addicted to vibe coding. Yes, vibe coding.

    I’ve been using AI coding assistants like Claude Code for a while now, and I’m here to say (with all due respect to people who have substance abuse issues), I may be an addict. And boy is this an expensive habit.

    Its “almost there” quality — the feeling we’re just one prompt away from the perfect solution — is what makes it so addicting. Vibe coding operates on the principle of variable-ratio reinforcement, a powerful form of operant conditioning where rewards come unpredictably. Unlike fixed rewards, this intermittent success pattern (“the code works! it’s brilliant! it just broke! wtf!”) triggers stronger dopamine responses in our brain’s reward pathways, similar to gambling behaviors.

    What makes this especially effective with AI is the minimal effort required for potentially significant rewards — creating what neuroscientists call an “effort discounting” advantage. Combined with our innate completion bias — the drive to finish tasks we’ve started — this creates a compelling psychological loop that keeps us prompting.

    However, the post is less about addiction and more about the perverse incentives that AI companies have for verbose code generation.

    1. The AI generates verbose, procedural code for a given task
    2. This code becomes part of the context when you ask for further changes or additions (this is key)
    3. The AI now has to read (and you pay for) this verbose code in every subsequent interaction
    4. More tokens processed = more revenue for the company behind the AI
    5. The LLM developers have no incentive to “fix” the verbose code problem because doing so will meaningfully impact their bottom line
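
    To put rough numbers on that loop, here is a back-of-the-envelope sketch. The token counts and the price per million tokens are made-up illustrative figures, not any vendor's actual pricing; the point is only that re-reading the file as context on every follow-up multiplies whatever verbosity is in it.

        // Illustrative only: how re-sending a verbose file on every follow-up prompt
        // multiplies input-token spend (steps 2-4 above). All numbers are invented.
        using System;

        const double pricePerMillionInputTokens = 3.00; // hypothetical rate
        const int followUpPrompts = 50;                  // later requests that re-read the file as context

        foreach (var (label, fileTokens) in new[] { ("verbose", 2_000), ("concise", 200) })
        {
            long totalInputTokens = (long)fileTokens * followUpPrompts;
            double cost = totalInputTokens / 1_000_000.0 * pricePerMillionInputTokens;
            Console.WriteLine($"{label}: {totalInputTokens:N0} input tokens ≈ ${cost:F2}");
        }
        // verbose: 100,000 input tokens ≈ $0.30
        // concise:  10,000 input tokens ≈ $0.03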

    Don’t miss the chuckle-inducing postscript.

  • Thinking

    Dustin Curtis talking about how AI is impacting his—and possibly others’—thinking.

    I thought I was using AI in an incredibly positive and healthy way, as a bicycle for my mind and a way to vastly increase my thinking capacity. But LLMs are insidious–using them to explore ideas feels like work, but it’s not real work. Developing a prompt is like scrolling Netflix, and reading the output is like watching a TV show. Intellectual rigor comes from the journey: the dead ends, the uncertainty, and the internal debate. Skip that, and you might still get the insight–but you’ll have lost the infrastructure for meaningful understanding. Learning by reading LLM output is cheap. Real exercise for your mind comes from building the output yourself.

    The Netflix analogy hit hard.

  • Shallowness of LLMs

    Jason Cohen talking about how LLMs’ knowledge is wider than any human’s but shallower than the best humans’.

    It’s interesting how LLMs are wider than any human, but shallower than the best humans. (At many things; not, e.g. chess)

    It can’t do customer service as well as the best humans, but it can do it in 100 languages, which no human can.

    It can’t program as well as the best humans, but it can program in 100 languages and 1000 libraries, which no human can.

    It’s not as good at math or law or medicine or research or history as the best humans in each of those fields, but it is better than the median human in those fields.

  • Quoting Micha Kaufman

    Micha Kaufman’s email that is doing the rounds on the internet.

    You must understand that what was once considered ‘easy tasks’ will no longer exist; what was considered ‘hard tasks’ will be the new easy, and what was considered ‘impossible tasks’ will be the new hard.

  • LLMs as index funds

    Venkatesh Rao giving an analogy between LLMs and index funds.

    Foundation models like GPT and Claude now serve as the index funds of language. Trained on enormous corpora of human text, they do not try to innovate. Instead, they track the center of linguistic gravity: fluent, plausible, average-case language. They provide efficient, scalable access to verbal coherence, just as index funds offer broad exposure to market returns. For most users, most of the time, this is enough. LLMs automate fluency the way passive investing automates exposure. They flatten out risk and elevate reliability.

    But they also suppress surprise. Like index funds, LLMs are excellent at covering known territory but incapable of charting new ground. The result is a linguistic landscape dominated by synthetic norms: smooth, predictable, uncontroversial. Writing with an LLM is increasingly like buying the market—safe, standardized, and inherently unoriginal.

    In this new environment, the act of writing raw, unassisted text begins to resemble picking penny stocks. It’s risky, inefficient, and potentially seen as naïve. Yet it remains the only place where genuine linguistic alpha—the surplus value of originality—can be found. Alpha lives in human voice, conceptual invention, emotional charge, and expressive risk. It emerges from the irreducible tensions of context, personality, and thought. And like financial alpha, it is quickly absorbed and neutralized by the systems it disrupts. What begins as a surprise becomes a template; what once felt radical becomes the new benchmark.

    As a result, the most original language is retreating into private markets. In Substacks, Signal threads, Discord servers, and private memos, new forms are being tested in semi-anonymous, high-context settings. These are the linguistic equivalents of venture capital and private equity—spaces of risk, scarcity, and concentrated attention. Just as companies now avoid going public too soon, writers may delay or even refuse public release, fearing dilution or misappropriation. Only once an idea matures might it “IPO” into the public sphere—perhaps as a viral tweet, a manifesto, or a cultural phrase. But even then, its time is limited: LLMs will soon flatten it into beta.

    This is part of Venkatesh Rao’s AI slop writing, where he shares a “recipe”—a set of high-level ideas—that he uses to generate posts with an LLM. I didn’t realise I was reading AI slop until I reached the Recipe section.

  • AI can empower developers to rewrite code without regret

    Matthew Sinclair talks about how AI can help programmers scrap the code they—or rather, the AI—has written and start over when they realise the approach won’t work, may lead to technical debt, or is wrong for any of a thousand other reasons, because new code can be generated fairly quickly.

    Working with Claude Code has fundamentally shifted how I think about the economics of programming time. Traditionally, coding involves three distinct “time buckets”:

    • Why am I doing this? Understanding the business problem and value 
    • What do I need to do? Designing the solution conceptually 
    • How am I going to do it? Actually writing the code 

    For decades, that last bucket consumed enormous amounts of our time. We’d spend hours, days or weeks writing, debugging, and refining. With Claude, that time cost has plummeted to nearly zero. I can generate thousands of lines of functional code in a sitting—something that is, frankly, mind-blowing.

    And there’s a new skill that emerges: wielding the knife. With code generation being essentially free, we need to become much more comfortable with throwing away entire solutions. The sunk cost fallacy hits programmers hard—we hate discarding code we’ve invested in, fearing we might break something important or never get back to a working state.

    But when your assistant can rewrite everything in minutes, that calculus changes completely. Three times during my backend project, I looked at substantial amounts of code—thousands of lines that technically worked—and decided to scrap it entirely because the approach wasn’t right. This wasn’t easy. My instinct was still to try to salvage and refactor. But the right move was to step back, rethink the approach, and direct the AI down a different path.

    This willingness to cut ruthlessly is a muscle most developers haven’t developed yet. It requires confidence in your architectural judgment and a radical shift in how you value implementation time.

  • Program vs Product

    Dylan Beattie explaining the problem with vibe coding by contrasting a program with a product.

    Probably the single most important lesson I’ve learned in my career, the thing that I would argue is the hallmark of “experience”, is understanding just how much work it takes to turn a working program into a viable product. It’s why developer estimates are so notoriously optimistic – and why experienced developers are so notoriously cynical. Let’s say you crank out a bit of code that’ll take responses from a web form and add them in an Excel spreadsheet. That’s not that hard… yay! we just built a Typeform competitor in one afternoon! Except, no, you didn’t. You made one thing work one time on one computer. You haven’t considered encoding, internationalization, concurrency, authentication, telemetry, billing, branding, mobile devices, deployment. You haven’t hit any of the weird limits yet – ever had a system work brilliantly for the first 65,535 requests and then fall over? You don’t have a product. At best, you have a proof-of-concept of a good idea that, if some very smart people work very hard, might become a viable product.
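
    The 65,535 in that quote is not a random number: it is the ceiling of a 16-bit unsigned integer. A tiny sketch of that particular "weird limit" (my example, not Beattie's), with a hypothetical request counter stored as a ushort:

        using System;

        ushort requestCount = ushort.MaxValue; // 65,535 requests served, all fine
        Console.WriteLine(requestCount);       // 65535

        unchecked { requestCount++; }          // request number 65,536 arrives
        Console.WriteLine(requestCount);       // 0: anything keyed off this counter now misbehaves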

  • Programming in natural language

    Edsger W. Dijkstra explaining why using natural language for programming is a bad idea.

    In order to make machines significantly easier to use, it has been proposed (to try) to design machines that we could instruct in our native tongues. This would, admittedly, make the machines much more complicated, but, it was argued, by letting the machine carry a larger share of the burden, life would become easier for us. It sounds sensible provided you blame the obligation to use a formal symbolism as the source of your difficulties. But is the argument valid? I doubt.

    We know in the meantime that the choice of an interface is not just a division of (a fixed amount of) labour, because the work involved in co-operating and communicating across the interface has to be added. We know in the meantime —from sobering experience, I may add— that a change of interface can easily increase at both sides of the fence the amount of work to be done (even drastically so). Hence the increased preference for what are now called “narrow interfaces”. Therefore, although changing to communication between machine and man conducted in the latter’s native tongue would greatly increase the machine’s burden, we have to challenge the assumption that this would simplify man’s life.

    Instead of regarding the obligation to use formal symbols as a burden, we should regard the convenience of using them as a privilege: thanks to them, school children can learn to do what in earlier days only genius could achieve.

    I saw this post on Hacker News and this comment sums up Dijkstra’s point succinctly.

    Natural language has embedded limitations that stem from our own mental limitations - the human mind sometimes thinks too abstract or too specific things, and misses important details or generalizations.

    As a programmer, I know first hand that the problems or even absurdities of some assignments only become apparent after one has begun to implement the code as code, i.e. as strict symbolisms.

    Not to mention that it often takes more time to explain something accurately in natural language than it takes to just write the algorithm as code.
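
    One way to feel that last point (my example, not the commenter's): an accurate English spec like "the average of the strictly positive values, or zero if there are none, without failing on empty input" already needs careful caveats, while the code states the same thing in a couple of lines.

        using System;
        using System.Linq;

        // "The average of the strictly positive values, or 0 if there are none."
        double AverageOfPositives(double[] xs) =>
            xs.Where(x => x > 0).DefaultIfEmpty(0).Average();

        Console.WriteLine(AverageOfPositives(new[] { -1.0, 2.0, 4.0 })); // 3
        Console.WriteLine(AverageOfPositives(Array.Empty<double>()));    // 0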

  • Learn to code

    Kyrylo Silin reflecting on the importance of learning to code.

    AI is the new shiny toy everyone wants to play with. And to be honest, it’s impressive. The problem with AI is that with every year, it gets better and better. Wait what? How’s that a problem? Well, with every new year you lose control.

    The more you rely on AI, the less you understand what you’re doing. The less you understand, the more AI vendors can control you. And the more control they have, the more they can charge you. It’s a vicious cycle.

    This shift was inevitable. Humanity must adapt to this new reality. AI isn’t going away, and we need to learn how to use it to our advantage.

    The large language models (LLMs) created by tech giants have absorbed decades of knowledge — our knowledge. They’ve been trained on our work.

    Now they’re selling it back to us and telling us we only need to learn English to code. This is a lie. As a new programmer, I don’t want you to fall for it. There’s no corner-cutting. Get your shit together and learn to code.

    Ha!

    While I agree that programmers need to learn to code, the unfortunate reality is that AI vendors will charge us for code written by models trained on our own work. The charge is for completing a function in a matter of minutes, sometimes even seconds. The charge is for speed and efficiency.