• Reignite

    There’s this Tell HN discussion on Hacker News: “I’m 60 years old. Claude Code has re-ignited a passion.”

    This comment from burnstek.

    50 here. Years ago I completely stopped coding, becoming tired of the never ending rat race of keeping up with the latest bizarre web stacks, frameworks for everything, node for this, npm for that, Angular, React, Vue, whatever – as if solving business problems just became too boring for software developers, so we decided to spend our cycles on the new hotness at every turn.

    Tools like Claude Code are the ultimate cheat code for me and have breathed new life into my desire to create. I know more than enough about architecture and coding to understand the plumbing and effectively debug, yet I don’t have to know or care about implementation details. It’s almost an unfair unlock.

    It’ll also be good to see leetcode die.

    This comment from bartread.

    Same age, same situation.

    I got completely fed up of continually having to learn new incantations to do the same shit I’ve been doing for decades without enough of a value add on top. I know what I want to build, and I know how to architect and structure it, but it’s simply not a good investment of my increasingly limited time to learn the umpteenth way to type code in simply to display text, data, and images on the web – especially when I know that knowledge will be useful for maybe, if I’m lucky, a handful of years before I have to relearn it again for some other opinionated framework.

    It’s just not interesting and I’ve become increasingly resentful of and uninterested in wasting time on it.

    Claude, on the other hand, is a massive force multiplier that enables me to focus on the parts of software development I do enjoy: solving the problems without the bother of having to type it all in (like, in days of old, I’d already solved the problem before my fingers touched the keyboard but the time-consuming bit was always typing it all in, testing and debugging – all of that is now faster but especially the typing part), focussing on use cases and user experience.

    And I don’t ever have to deal directly with CSS or Tailwind: I simply describe the way I want things to look and that’s how they end up looking.

    It’s – so far at any rate – the ultimate in declarative programming. It’s awesome, and it means I can really focus on the quality of the solution, which I’m a big fan of.

    It is an interesting insight for me to see how the older generation of programmers, who grew tired of the ever-changing tech stack, especially in web development, are now perceiving the latest AI coding assistants.

  • Proofreader

    Simon Willison shares the prompt he uses to proofread his blog.

    I don’t let LLMs write text for my blog. My hard line is that anything that expresses opinions or uses “I” pronouns needs to have been written by me. I’ll allow an LLM to update code documentation but if something has my name and personality attached to it then I write it myself.

    I do use LLMs to proofread text that I publish. Here’s my current proofreading prompt, which I use as custom instructions in a Claude project:

    You are a proofreader for posts about to be published.

    1. Identify spelling mistakes and typos
    2. Identify grammar mistakes
    3. Watch out for repeated terms like “It was interesting that X, and it was interesting that Y”
    4. Spot any logical errors or factual mistakes
    5. Highlight weak arguments that could be strengthened
    6. Make sure there are no empty or placeholder links
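
    A prompt like the one above can also be wired into a script rather than a Claude project. Here’s a minimal sketch that packages it as the system instruction of a Messages-style request payload; the `build_proofread_request` helper and the model string are illustrative assumptions, not part of Simon’s setup:

    ```python
    # Sketch: packaging a proofreading prompt as a request payload.
    # The helper name and model string are illustrative placeholders.

    PROOFREAD_PROMPT = """You are a proofreader for posts about to be published.

    1. Identify spelling mistakes and typos
    2. Identify grammar mistakes
    3. Watch out for repeated terms like "It was interesting that X, and it was interesting that Y"
    4. Spot any logical errors or factual mistakes
    5. Highlight weak arguments that could be strengthened
    6. Make sure there are no empty or placeholder links"""


    def build_proofread_request(post_text: str, model: str = "claude-sonnet-4") -> dict:
        """Assemble a chat-completion payload with the prompt as the system instruction."""
        return {
            "model": model,
            "max_tokens": 2048,
            "system": PROOFREAD_PROMPT,
            "messages": [{"role": "user", "content": post_text}],
        }


    payload = build_proofread_request("Draft post text goes here.")
    print(payload["system"].splitlines()[0])
    # → You are a proofreader for posts about to be published.
    ```

    Sending the payload (via an SDK or HTTP client) is left out; the point is that the prompt lives in one place and the draft post is the only user message.
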

  • Necessity and reality

    M. Pattabiraman drops truth bombs about the necessity of equity investing and its harsh reality.

    The market has no obligation to give you the return you want. So, if you want high returns, you are more likely to be disappointed. A chance is not a guarantee. The hope of the Indian economy growing is neither a strategy nor a guarantee.

    People get angry when hearing about this simply because they have led themselves to believe comforting lies, such as if you stay invested for the long term, you will undoubtedly create wealth.

    So they get angry at me. The most common criticism is that you say we must invest in equity, but on the other hand, you say there are no guarantees.

    The first part is a necessity, and the second is a reality. Investors will take a while to appreciate that this is not a contradiction.

    We must invest in equity, but we must also not blindly believe that investing alone is enough. We need a risk mitigation plan.

  • Fusion architecture

    Om Malik explaining the fusion architecture in Apple’s M5 Pro and M5 Max.

    For years, Apple’s narrative around its “M-series” chips was about integration. One chip. One die. Everything on the same piece of silicon. Unified memory so the CPU, GPU, and Neural Engine could all access the same data without copying it around. It worked beautifully for the M1 and M2. But now with the rise of AI, chips need to get bigger. AI demands more cores, more memory bandwidth, more compute. So, making one really big honking chip gets really expensive.

    The larger a single die gets, the harder it is to manufacture. One tiny defect anywhere on the silicon and you toss the whole thing. Yields drop. Costs climb. AMD’s CEO Lisa Su recently showed that a design using four smaller chiplets delivered more total capability at 59 percent of the cost of one big chip. 

    Apple, too, faced a fork in the road. Keep building bigger and bigger single chips. Or break the big chip into smaller pieces and connect them together fast enough that software barely notices the split. They chose the second option, but made it their own. They call it Fusion Architecture.

    […]

    This approach comes with its own tradeoffs. You split a chip into pieces, the pieces need to talk to each other. That means data traveling between dies, which adds latency. Memory gets divided up between chiplets. You solve the manufacturing problem but you compromise the architecture. Apple decided to do it its own way.

    Johny Srouji, who has led Apple’s silicon efforts for years, says Apple kept unified memory intact. In the press release he said Fusion Architecture would “scale the capabilities of Apple silicon while preserving its core tenets of performance, power efficiency, and unified memory architecture.” While Apple says unified memory is preserved across both dies, the technical details of how memory actually works across two dies versus one die aren’t spelled out.

  • Replaced

    This comic by Garry Trudeau.

  • Broken

    Tim Ferriss explaining how self-help can be counterproductive. This excerpt can be taken out of context, so I highly recommend reading the entire post.

    The older I get, the more I think that self-help can be a trap. Sometimes the cure is worse than the disease. I say this after ~20 years of writing self-help and a lifetime of consuming it.

    Spend enough time in the world of “improvement,” and you’ll notice something strange: The people most obsessed with self-help are often the least helped by it. Behind the smiles and motivational quotes, behind closed doors and after a drink or two, the truth is that they’re not able to outsmart their worries.

    On one hand, perhaps this unhappiness is precisely what lands one in self-development in the first place, right? I long assumed this about myself, and it’s partially true.

    On the other hand, what if self-help itself is actually creating or amplifying unhappiness?

    Modern self-help contains an in-built flaw:

    To continually improve yourself, you must continually locate the ways you are broken.

  • Shallow competence

    Daniel explaining how AI is enabling shallow competence among junior developers, as AI tells them what to do but not what not to do.

    So I got this comment on my last video and it genuinely stopped me in my tracks. It was from @Thiccolo and it reads:

    “For someone like you, who likely has years of experience without LLMs, your brain totally understands good code/bad code, good architecture, and just general intuition around code and systems. LLMs must be an absolute gamechanger. But for someone like me who is starting out in this field, how am I supposed to build the years of experience and intuition that comes from manually writing code and building systems when companies are expecting AI to be used from here on out?”

    The number of upvotes this got told me everything. A lot of you are feeling this exact thing and not really talking about it. So let’s talk about it.

    Here’s the real problem. AI is making it really easy to build what I’d call shallow competence. You’re shipping fast, your manager’s happy, things look good on paper. But the moment someone in a code review asks you “hey why did you go with this approach?” and you freeze. Because honestly? You don’t know. The AI gave it to you and you just ran with it. Does this sound familiar?

    That’s a problem. And it’s going to catch up with you.

    To be clear, the reason experienced developers are valuable isn’t because we write code faster than you. It’s because we’ve spent years learning what not to do. We’ve made terrible architectural decisions and had to live with them. We’ve been paged at 2am because of something we shipped that seemed totally fine. That failure pattern recognition is what companies are actually paying for. And right now, a lot of junior devs are accidentally skipping all of that.

    When I was learning, and I know this is going to sound like “back in my day” type stuff, struggle was just a given. You had a bug, you read the stack trace, you traced through the code, you dug through the logs. That was just the job. Nobody was handing you answers. And as frustrating as that was, it’s exactly how I built the intuition I still rely on today.

  • Epistemic collapse

    Lucija Gregov talking about the upcoming risks of AI. One of them is called epistemic collapse.

    ‘Epistemic’ comes from a Greek word ‘episteme’, meaning ‘knowledge’. Let’s start with what’s happening to us, and what humans are already doing with this technology.

    A study published in Nature in January 2026 showed participants deepfake videos of someone confessing to a crime. The researchers explicitly warned participants that the videos were AI-generated. But this didn’t matter. Even the people who believed the warning, who knew it was fake, were still influenced by what they saw.

    Transparency didn’t work. The standard response to AI-generated misinformation is “just label it” or “tell people it’s synthetic.” This study showed that’s not enough. Knowing something is fake does not neutralise its effect on your judgement.

    So, the danger isn’t that AI will deceive us in some dramatic, sci-fi way. The danger is that AI will make deception so cheap and so ubiquitous that we might stop trying to figure out what is true. Not because we are fooled, but because we are exhausted. When everything could be fake, the rational response starts to look like not trusting anything at all. It started a while ago with all of the fake information on social media, but with AI, this problem is now becoming much bigger and on a bigger scale.

    We are also dealing with feedback loops of training models on user data, which is often wrong, or on user data from the internet, which is often wrong as well. How do we know which information was ground truth?

    I imagine this as making photocopies many times, and each time the copy becomes more distorted and further away from the original. But now, after we made hundreds and thousands of copies, we have lost the original copy, so we don’t have any idea what the original looked like. That is epistemic collapse, and it is already happening.

  • Discipline and vision

    August Mueller on why he is not worried about being replaced by AI.

    Lately I’ve been thinking more and more about what Acorn should be. In a time when anyone can come up with an app idea and ship with little effort, what is going to make Acorn stand out?

    I’m starting to think that’s going to be personality and feel and polish, but turned up a notch. That’s what I used to do when I started writing apps, but in some ways I have really toned it down in favor of OS alignment.

    Does an AI know how to do that? Does a coding assistant know that an app is really a giant collection of details?

    Maybe I’ll even have fewer competitors in the long run, or at least not as many new competitors. Because at some point it’s not about how good a programmer you are (and I’ve always been a middle-tier programmer), it’s about discipline and vision.

    I’ve got feelings because anyone can put an app together now, so what’s the point of me? But at the same time, I can focus on what I want to focus on and hopefully charge forward and maybe everyone else will get tired of little vibe coded apps because you still have to know exactly what you want to build. And you can’t build something you can’t think of. And I know how to think and I have ideas.

    And I have discipline and I know how to ship. And in my experience, that’s what has always mattered.

  • Footnote

    Greg Knauss sharing his thoughts on how he is losing an important part of himself as AI gets good at programming.

    It still makes me sad, though, that what I’ve spent 45 years of my life toiling at will likely end up as a footnote, the province of folksy artisans and historical reenactors. I didn’t leave a dent in the universe so much as splatted against it. The world no longer has a need for what I somewhat sardonically call my art. We are all product managers now, pleading with obtuse underlings to go back and try again and to get it right this time. I remain a father and husband and son and friend, but the need for what I can do — the need for what programmers can do — is shrinking, and my conception of myself and my usefulness along with it.

    There will be more software than ever, as its production is automated; we are entering the industrial age of the digital age. But less of this code will be elegant, or considerate, or graceful. Less of it will be created by removing what isn’t David, and less of it will be driven by a human understanding of human needs.

    That was something I did that mattered. I’ll miss it.

    Maybe this is how human civilisation evolves. All our work ultimately ends up as a footnote.