• Paradigm shift vs Automation

    David Oks explaining how it was the iPhone, and not the ATM, that killed the bank teller’s job. He goes on to theorise that it is paradigm shifts, not automation, that displace workers. An insightful read.

    The ATM tried to do the teller’s job better, faster, cheaper; it tried to fit capital into a labor-shaped hole; but the iPhone made the teller’s job irrelevant. One automated tasks within an existing paradigm, and the other created a new paradigm in which those tasks simply didn’t need to exist at all. And it is paradigm replacement, not task automation, that actually displaces workers—and, conversely, unlocks the latent productivity within any technology. That’s because as long as the old paradigm persists, there will be labor-shaped holes in which capital substitution will encounter constant frictions and bottlenecks. 

    This has, I think, serious implications for how we’re thinking about AI.

    People in AI frequently talk about the vision of AI being a “drop-in remote worker”: AI systems that can be inserted into a workflow, learn it, and eventually do it on the level of a competent human. And they see that as the point where you’ll start to see serious productivity gains and labor displacement.

    I am not a “denier” on the question of technological job loss; Vance’s blithe optimism is not mine. But I’m skeptical that simply slotting AI into human-shaped jobs will have the results people seem to expect. The history of technology, even exceptionally powerful general-purpose technology, tells us that as long as you are trying to fit capital into labor-shaped holes you will find yourself confronted by endless frictions: just as with electricity, the productivity inherent in any technology is unleashed only when you figure out how to organize work around it, rather than slotting it into what already exists. We are still very much in the regime of slotting it in. And as long as we are in that regime, I expect disappointing productivity gains and relatively little real displacement.

  • Knowledge transfer

    Juan Cruz Martinez talking about how knowledge transfer works between seniors and juniors, and how it keeps institutional knowledge alive in an organisation.

    There’s a cost that’s even harder to see from the planning meeting, and it’s the one that concerns me the most.

    Every piece of institutional knowledge on your team lives in someone’s head. How the payment system actually works, not how the docs say it works. Why that service was split in 2021 and why you can never merge it back. The customer edge case that crashes the billing module every February.

    This knowledge has always transferred through a specific mechanism: senior engineers teaching junior engineers by working alongside them. The junior asks a question that feels basic. The senior explains the answer. That explanation forces the senior to articulate something they’d never written down. The knowledge becomes shared. The bus factor drops.

    When you stop hiring juniors, this mechanism stops. Not immediately. It degrades gradually, which is why it’s so easy to ignore. But three years from now, when your senior architect leaves for a role that doesn’t require them to review AI output twelve hours a day, they’re taking everything with them. And there’s nobody two levels down who absorbed even a fraction of it, because that person was never hired.

    The activity in the third paragraph is something I have done myself as a senior. So this resonates with me, and so does the concern: in a world of AI, with fewer juniors, how will institutional knowledge be preserved?

  • MacBook Neo

    Sam Henri Gold ripping into the reviews of the MacBook Neo that tell you this is not the MacBook for you because you are, or want to be, a power user.

    The consensus is reasonable: $599, A18 Pro, 8GB RAM, stripped-down I/O. A Chromebook killer, a first laptop, a sensible machine for sensible tasks. “If you are thinking about Xcode or Final Cut, this is not the computer for you.” The people saying this are not wrong. It is also not the point.

    Nobody starts in the right place. You don’t begin with the correct tool and work sensibly within its constraints until you organically graduate to a more capable one. That is not how obsession works. Obsession works by taking whatever is available and pressing on it until it either breaks or reveals something. The machine’s limits become a map of the territory. You learn what computing actually costs by paying too much of it on hardware that can barely afford it.

    I know this because I was running Final Cut Pro X on a 2006 Core 2 Duo iMac with 3GB RAM and 120GB of spinning rust. I was nine.

    […]

    What Apple put inside the Neo is the complete behavioral contract of the Mac. Not a Mac Lite. Not a browser in a laptop costume. The same macOS, the same APIs, the same Neural Engine, the same weird byzantine AppKit controls that haven’t meaningfully changed since the NeXT era. The ability to disable SIP and install some fuck-ass system modification you saw in a YouTube tutorial. All of it, at $599.

    They cut the things that are, apparently, not the Mac. MagSafe. ProMotion. M-series silicon. Port bandwidth. Configurable memory. What remains is the Retina display, the aluminum, the keyboard, and the full software platform. I held it and thought, “yep, still a Mac.”

    Yes, you will hit the limits of this machine. 8GB of RAM and a phone chip will see to that. But the limits you hit on the Neo are resource limits — memory is finite, silicon has a clock speed, processes cost something. You are learning physics. A Chromebook doesn’t teach you that. A Chromebook’s ceiling is made of web browser, and the things you run into are not the edges of computing but the edges of a product category designed to save you from yourself. The kid who tries to run Blender on a Chromebook doesn’t learn that his machine can’t handle it. He learns that Google decided he’s not allowed to. Those are completely different lessons.

  • Fork in the road

    Les Orchard talking about how AI coding assistants have exposed a divide between two camps of developers that were previously indistinguishable.

    Here’s what I think is happening: AI-assisted coding is exposing a divide among developers that was always there but maybe less visible.

    Before AI, both camps were doing the same thing every day. Writing code by hand. Using the same editors, the same languages, the same pull request workflows. The craft-lovers and the make-it-go people sat next to each other, shipped the same products, looked indistinguishable. The motivation behind the work was invisible because the process was identical.

    Now there’s a fork in the road. You can let the machine write the code and focus on directing what gets built, or you can insist on hand-crafting it. And suddenly the reason you got into this in the first place becomes visible, because the two camps are making different choices at that fork.

  • Software proprioception

    Marcin Wichary shares multiple, lovely examples of software proprioception.

    There are fun things you can do in software when it is aware of the dimensions and features of its hardware.

    iPhone does a cute Siri animation that emanates precisely from the side button:

  • No

    Claude Code shares the implementation details and asks if it can continue. Boris Bobrov says No and Claude Code goes off the rails.

    Why did this happen? A few pointers from Hacker News.

    This comment by nicofcl:

    Exactly right. The core issue is conflating authorization semantics with text processing. When a user says “no”, that’s a state change assertion, not prompt content that gets fed back to a model.

    The harness layer should enforce this at the I/O boundary – permissions are control flow gates, not part of the LLM’s input context. Treating “consent as prompt material” creates an attack surface where:

    1. The user’s intent (“don’t do X”) can be reinterpreted as creative writing
    2. The model’s output becomes the source of truth for authorization
    3. There’s no clear enforcement boundary

    This is why military/critical systems have long separated policy (what’s allowed) from execution (what actually runs). The UI returns a boolean or enum, the harness checks it, and write operations either proceed or fail – no interpretation needed.

    The irony is that this makes systems both more secure AND more predictable for the user.
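    A minimal sketch of the separation nicofcl describes. The names here (`Decision`, `run_tool`) are illustrative, not any real harness API: the user’s answer is mapped to an enum at the UI boundary and checked as control flow, so the word “no” never enters the model’s context.

    ```python
    from enum import Enum
    from typing import Callable, Optional


    class Decision(Enum):
        """The UI returns an enum, never the user's raw text."""
        ALLOW = "allow"
        DENY = "deny"


    def run_tool(tool: Callable[[], str],
                 consent: Decision,
                 context: list[str]) -> Optional[str]:
        """Gate a write operation on an out-of-band consent value.

        The consent decision is checked as a control-flow gate; the
        user's wording is never appended to `context`, so the model
        cannot reinterpret a refusal as prompt material.
        """
        if consent is Decision.DENY:
            return None          # hard stop: no model call, no tool call
        result = tool()
        context.append(result)   # only tool output re-enters the context
        return result
    ```

    With this shape, a “no” leaves the context untouched and the write simply fails, which is exactly the policy/execution split the comment attributes to critical systems.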

    This comment by sgillen:

    To be fair to the agent…

    I think there is some behind the scenes prompting from claude code (or open code, whichever is being used here) for plan vs build mode, you can even see the agent reference that in its thought trace. Basically I think the system is saying “if in plan mode, continue planning and asking questions, when in build mode, start implementing the plan” and it looks to me(?) like the user switched from plan to build mode and then sent “no”.

    From our perspective it’s very funny, from the agents perspective maybe it’s confusing. To me this seems more like a harness problem than a model problem.

    I laughed way too long at this one. It reminded me of a typical Bollywood trope where the hero keeps pursuing the heroine even after she has said No.

  • Software engineering and movies

    Gergely Orosz draws a parallel between the disruption that AI is causing in the software engineering space and how the smartphone similarly disrupted filmmaking.

    Saying not knowing how to code gives you an advantage in building software (thanks to AI) is like saying not knowing anything about filmmaking gives you an advantage in making films (thanks to having a smartphone + apps to edit stuff)

    Ignore this stuff and keep learning+building

    Btw there’s some parallels in how AI makes it possible for anyone to create software, and how smartphones + their cameras made it possible for anyone to create movies

    So look to how filmmaking changed thanks to smartphones to get a sense of how sw dev could change

    Got this analogy from Simon Willison

    And btw this is not about gatekeeping. With AI, anyone can start creating software. Same as with a smartphone, anyone can start creating a movie (remember the “shot on iPhone” movies?)

    But to make enjoyable *movies* on an iPhone, you’ll probably figure out you want to learn cinematography basics. Same thing if you want to build decent software: you’ll probably figure out you’ll want to teach yourself or learn software engineering basics. 

  • Reignite

    There’s this Tell HN discussion on Hacker News—I’m 60 years old. Claude Code has re-ignited a passion.

    This comment from burnstek.

    50 here. Years ago I completely stopped coding, becoming tired of the never ending rat race of keeping up with the latest bizarre web stacks, frameworks for everything, node for this, npm for that, Angular, React, Vue, whatever – as if solving business problems just became too boring for software developers, so we decided to spend our cycles on the new hotness at every turn.

    Tools like Claude Code are the ultimate cheat code for me and have breathed new life into my desire to create. I know more than enough about architecture and coding to understand the plumbing and effectively debug, yet I don’t have to know or care about implementation details. It’s almost an unfair unlock.

    It’ll also be good to see leetcode die.

    This comment from bartread.

    Same age, same situation.

    I got completely fed up of continually having to learn new incantations to do the same shit I’ve been doing for decades without enough of a value add on top. I know what I want to build, and I know how to architect and structure it, but it’s simply not a good investment of my increasingly limited time to learn the umpteenth way to type code in simply to display text, data, and images on the web – especially when I know that knowledge will be useful for maybe, if I’m lucky, a handful of years before I have to relearn it again for some other opinionated framework.

    It’s just not interesting and I’ve become increasingly resentful of and uninterested in wasting time on it.

    Claude, on the other hand, is a massive force multiplier that enables me to focus on the parts of software development I do enjoy: solving the problems without the bother of having to type it all in (like, in days of old, I’d already solved the problem before my fingers touched the keyboard but the time-consuming bit was always typing it all in, testing and debugging – all of that is now faster but especially the typing part), focussing on use cases and user experience.

    And I don’t ever have to deal directly with CSS or Tailwind: I simply describe the way I want things to look and that’s how they end up looking.

    It’s – so far at any rate – the ultimate in declarative programming. It’s awesome, and it means I can really focus on the quality of the solution, which I’m a big fan of.

    It is interesting to see how an older generation of programmers, tired of the ever-changing tech stack (especially in web development), is now perceiving the latest AI coding assistants.

  • Proofreader

    Simon Willison shares his prompt that he uses to proofread his blog.

    I don’t let LLMs write text for my blog. My hard line is that anything that expresses opinions or uses “I” pronouns needs to have been written by me. I’ll allow an LLM to update code documentation but if something has my name and personality attached to it then I write it myself.

    I do use LLMs to proofread text that I publish. Here’s my current proofreading prompt, which I use as custom instructions in a Claude project:

    You are a proofreader for posts about to be published.

    1. Identify spelling mistakes and typos
    2. Identify grammar mistakes
    3. Watch out for repeated terms like “It was interesting that X, and it was interesting that Y”
    4. Spot any logical errors or factual mistakes
    5. Highlight weak arguments that could be strengthened
    6. Make sure there are no empty or placeholder links
  • Necessity and reality

    M. Pattabiraman dropping truth bombs about the necessity of equity investing and its harsh reality.

    The market has no obligation to give you the return you want. So, if you want high returns, you are more likely to be disappointed. A chance is not a guarantee. The hope of the Indian economy growing is neither a strategy nor a guarantee.

    People get angry when hearing about this simply because they have led themselves to believe comforting lies, such as if you stay invested for the long term, you will undoubtedly create wealth.

    So they get angry at me. The most common criticism is that you say we must invest in equity, but on the other hand, you say there are no guarantees.

    The first part is a necessity, and the second is a reality. Investors will take a while to appreciate that this is not a contradiction.

    We must invest in equity, but we must also not blindly believe that investing alone is enough. We need a risk mitigation plan.