  • Product and Relationship

    Terry Godier’s thought-provoking post on how modern gadgets don’t stop talking.

    This watch [Casio F-91W] costs twelve dollars. It weighs twenty-one grams. It has an alarm that sounds like a microwave in another room. It has told time the same way since 1989.

    It doesn’t know my heart rate. It has no opinions about whether I’ve stood up enough today. It will never need a firmware update.

    When the battery dies in seven years, I’ll press in a new one with a paperclip.

    That will be the entirety of my obligation to it.

    This watch [Apple Watch] costs four hundred dollars. It also tells time.

    It also tracks my steps, monitors my blood oxygen, measures my sleep quality, logs my workouts, reminds me to breathe, reminds me to stand, nudges me to close my rings, alerts me to unusual heart rhythms, pings me with notifications from six apps, and dies every night.

    One of these is a product. The other is a relationship.

    And a very insightful take on Screen Time.

    Screen Time gives you a report card. And if the grade is bad, the design makes one thing clear: That’s a YOU problem.

    It measures YOUR usage. Tracks YOUR behavior. Gives YOU a weekly report card. If the numbers are too high?

    You picked it up too much.
    You spent too long.
    You failed your limit.
    Try again next week.
    Try harder.

    Screen Time is a blame shift dressed in a soft font.

    Ha!

  • Seat-based business model

    Ben Thompson talking about the impact of AI on the seat-based business model.

    […] when Microsoft revealed how they will handle the potential business impact of AI reducing seats, which is a bit of a problem for their seat-based business model: the company is going to bundle AI into a new higher-tiered enterprise offering, E7, which is going to cost twice as much — $99 per seat per month — as the formerly top-of-the-line E5. That’s a big increase, which Microsoft needs to justify with AI that actually makes those seats more productive, and the product they launched with the new bundle was Copilot Cowork.

    This never occurred to me. AI replaces humans. Organisations need fewer licences for their enterprise software. Enterprise software makers double the price of their licences. Organisations pay the same amount for—supposedly—increased productivity.
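
    A quick back-of-the-envelope sketch in Python: the $99 E7 price is from the quote, while the $49.50 E5 price (half of E7) and the 100-seat organisation are my own assumptions.

        # Hypothetical org: 100 E5 seats, and AI "replaces" half the humans.
        e5_price, e7_price = 49.50, 99.00   # $/seat/month; E5 assumed at half of E7
        seats_before, seats_after = 100, 50

        spend_before = seats_before * e5_price  # 100 * 49.50 = 4950.0
        spend_after = seats_after * e7_price    #  50 * 99.00 = 4950.0
        print(spend_before == spend_after)      # True: same spend, half the seats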

  • Continue to plant the little acorns from which the mighty oak trees grow

    Sharif Shameem talks about the disadvantages of success and how a willingness to look stupid helps counter them.

    There’s this unfortunate pattern that happens when someone wins a Nobel Prize. They tend to stop doing great work. Richard Hamming talks about this in You and Your Research:

    When you are famous it is hard to work on small problems. This is what did Shannon in. After information theory, what do you do for an encore? The great scientists often make this error. They fail to continue to plant the little acorns from which the mighty oak trees grow. They try to get the big thing right off. And that isn’t the way things go. So that is another reason why you find that when you get early recognition it seems to sterilize you. In fact I will give you my favorite quotation of many years. The Institute for Advanced Study in Princeton, in my opinion, has ruined more good scientists than any institution has created, judged by what they did before they came and judged by what they did after. Not that they weren’t good afterwards, but they were superb before they got there and were only good afterwards.

    Before the Nobel Prize, nobody really cares who you are. But after the Nobel Prize, you’re a Nobel Prize winner, and Nobel Prize winners are supposed to have Good Ideas. Every idea, every paper, every talk at a conference is now being evaluated against the standard of your Nobel Prize-winning work. Everyone is asking, “is this worthy of a Nobel laureate?” It’s a high bar to clear. So instead of trying and occasionally failing, they just… stop trying. The fear of making something bad is worse than producing nothing at all.

    This reminds me of Quality vs Quantity.

  • Paradigm shift vs Automation

    David Oks explaining how it was the iPhone, and not the ATM, that killed the bank teller’s job. He then goes on to theorise that it is paradigm shifts, not automation, that displace workers. An insightful read.

    The ATM tried to do the teller’s job better, faster, cheaper; it tried to fit capital into a labor-shaped hole; but the iPhone made the teller’s job irrelevant. One automated tasks within an existing paradigm, and the other created a new paradigm in which those tasks simply didn’t need to exist at all. And it is paradigm replacement, not task automation, that actually displaces workers—and, conversely, unlocks the latent productivity within any technology. That’s because as long as the old paradigm persists, there will be labor-shaped holes in which capital substitution will encounter constant frictions and bottlenecks. 

    This has, I think, serious implications for how we’re thinking about AI.

    People in AI frequently talk about the vision of AI being a “drop-in remote worker”: AI systems that can be inserted into a workflow, learn it, and eventually do it on the level of a competent human. And they see that as the point where you’ll start to see serious productivity gains and labor displacement.

    I am not a “denier” on the question of technological job loss; Vance’s blithe optimism is not mine. But I’m skeptical that simply slotting AI into human-shaped jobs will have the results people seem to expect. The history of technology, even exceptionally powerful general-purpose technology, tells us that as long as you are trying to fit capital into labor-shaped holes you will find yourself confronted by endless frictions: just as with electricity, the productivity inherent in any technology is unleashed only when you figure out how to organize work around it, rather than slotting it into what already exists. We are still very much in the regime of slotting it in. And as long as we are in that regime, I expect disappointing productivity gains and relatively little real displacement.

  • Knowledge transfer

    Juan Cruz Martinez talking about how knowledge transfer worked between seniors and juniors. And how this ensured that institutional knowledge lived on in the organisation.

    There’s a cost that’s even harder to see from the planning meeting, and it’s the one that concerns me the most.

    Every piece of institutional knowledge on your team lives in someone’s head. How the payment system actually works, not how the docs say it works. Why that service was split in 2021 and why you can never merge it back. The customer edge case that crashes the billing module every February.

    This knowledge has always transferred through a specific mechanism: senior engineers teaching junior engineers by working alongside them. The junior asks a question that feels basic. The senior explains the answer. That explanation forces the senior to articulate something they’d never written down. The knowledge becomes shared. The bus factor drops.

    When you stop hiring juniors, this mechanism stops. Not immediately. It degrades gradually, which is why it’s so easy to ignore. But three years from now, when your senior architect leaves for a role that doesn’t require them to review AI output twelve hours a day, they’re taking everything with them. And there’s nobody two levels down who absorbed even a fraction of it, because that person was never hired.

    The activity in the third paragraph is something that I have seen myself do as a senior. So this resonates with me, and so does the concern: in the world of AI, with fewer juniors, how will institutional knowledge be preserved?

  • MacBook Neo

    Sam Henri Gold ripping into the reviews of the MacBook Neo that tell you this is not the MacBook for you because you are a power user, or aspire to be one.

    The consensus is reasonable: $599, A18 Pro, 8GB RAM, stripped-down I/O. A Chromebook killer, a first laptop, a sensible machine for sensible tasks. “If you are thinking about Xcode or Final Cut, this is not the computer for you.” The people saying this are not wrong. It is also not the point.

    Nobody starts in the right place. You don’t begin with the correct tool and work sensibly within its constraints until you organically graduate to a more capable one. That is not how obsession works. Obsession works by taking whatever is available and pressing on it until it either breaks or reveals something. The machine’s limits become a map of the territory. You learn what computing actually costs by paying too much of it on hardware that can barely afford it.

    I know this because I was running Final Cut Pro X on a 2006 Core 2 Duo iMac with 3GB RAM and 120GB of spinning rust. I was nine.

    […]

    What Apple put inside the Neo is the complete behavioral contract of the Mac. Not a Mac Lite. Not a browser in a laptop costume. The same macOS, the same APIs, the same Neural Engine, the same weird byzantine AppKit controls that haven’t meaningfully changed since the NeXT era. The ability to disable SIP and install some fuck-ass system modification you saw in a YouTube tutorial. All of it, at $599.

    They cut the things that are, apparently, not the Mac. MagSafe. ProMotion. M-series silicon. Port bandwidth. Configurable memory. What remains is the Retina display, the aluminum, the keyboard, and the full software platform. I held it and thought, “yep, still a Mac.”

    Yes, you will hit the limits of this machine. 8GB of RAM and a phone chip will see to that. But the limits you hit on the Neo are resource limits — memory is finite, silicon has a clock speed, processes cost something. You are learning physics. A Chromebook doesn’t teach you that. A Chromebook’s ceiling is made of web browser, and the things you run into are not the edges of computing but the edges of a product category designed to save you from yourself. The kid who tries to run Blender on a Chromebook doesn’t learn that his machine can’t handle it. He learns that Google decided he’s not allowed to. Those are completely different lessons.

  • Fork in the road

    Les Orchard talking about how AI coding assistants have exposed a divide between two camps of developers that were indistinguishable earlier.

    Here’s what I think is happening: AI-assisted coding is exposing a divide among developers that was always there but maybe less visible.

    Before AI, both camps were doing the same thing every day. Writing code by hand. Using the same editors, the same languages, the same pull request workflows. The craft-lovers and the make-it-go people sat next to each other, shipped the same products, looked indistinguishable. The motivation behind the work was invisible because the process was identical.

    Now there’s a fork in the road. You can let the machine write the code and focus on directing what gets built, or you can insist on hand-crafting it. And suddenly the reason you got into this in the first place becomes visible, because the two camps are making different choices at that fork.

  • Software proprioception

    Marcin Wichary shares multiple lovely examples of software proprioception.

    There are fun things you can do in software when it is aware of the dimensions and features of its hardware.

    iPhone does a cute Siri animation that emanates precisely from the side button.

  • No

    Claude Code shares the implementation details and asks if it can continue. Boris Bobrov says No and Claude Code goes off the rails.

    Why did this happen? A few pointers from Hacker News.

    This comment by nicofcl:

    Exactly right. The core issue is conflating authorization semantics with text processing. When a user says “no”, that’s a state change assertion, not prompt content that gets fed back to a model.

    The harness layer should enforce this at the I/O boundary – permissions are control flow gates, not part of the LLM’s input context. Treating “consent as prompt material” creates an attack surface where:

    1. The user’s intent (“don’t do X”) can be reinterpreted as creative writing
    2. The model’s output becomes the source of truth for authorization
    3. There’s no clear enforcement boundary

    This is why military/critical systems have long separated policy (what’s allowed) from execution (what actually runs). The UI returns a boolean or enum, the harness checks it, and write operations either proceed or fail – no interpretation needed.

    The irony is that this makes systems both more secure AND more predictable for the user.
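
    To make that concrete, here is a minimal sketch of consent as a control flow gate. It is my own illustration, not code from the comment, and the Consent enum and both function names are hypothetical.

        from enum import Enum

        class Consent(Enum):
            APPROVED = "approved"
            DENIED = "denied"

        def ask_user(action: str) -> Consent:
            # The UI returns a typed value; the raw "no" never becomes
            # prompt content that a model could reinterpret.
            answer = input(f"Allow the agent to {action}? [y/N] ").strip().lower()
            return Consent.APPROVED if answer == "y" else Consent.DENIED

        def gated_write(action: str, execute) -> None:
            # The harness checks the gate before anything runs. A denial
            # is control flow, not text: the operation simply fails.
            if ask_user(action) is not Consent.APPROVED:
                raise PermissionError(f"user denied: {action}")
            execute()

    The model never sees the answer; it only observes that the operation went through or failed.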

    This comment by sgillen:

    To be fair to the agent…

    I think there is some behind the scenes prompting from claude code (or open code, whichever is being used here) for plan vs build mode, you can even see the agent reference that in its thought trace. Basically I think the system is saying “if in plan mode, continue planning and asking questions, when in build mode, start implementing the plan” and it looks to me(?) like the user switched from plan to build mode and then sent “no”.

    From our perspective it’s very funny, from the agent’s perspective maybe it’s confusing. To me this seems more like a harness problem than a model problem.
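
    A toy sketch of the failure mode sgillen is describing; the prompts and the function here are entirely hypothetical, not Claude Code’s actual internals.

        PLAN_PROMPT = "You are in plan mode. Keep planning and asking questions."
        BUILD_PROMPT = "You are in build mode. Start implementing the plan."

        def build_context(mode: str, history: list[str], user_msg: str) -> list[str]:
            # The mode switch swaps the system prompt, so a "no" sent right
            # after switching to build mode arrives as ordinary text under
            # an instruction that already says: start implementing.
            system = BUILD_PROMPT if mode == "build" else PLAN_PROMPT
            return [system, *history, user_msg]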

    I laughed way too long at this one. This reminded me of a typical Bollywood trope where the hero will pursue the heroine despite the fact that the heroine has said No.

  • Software engineering and movies

    Gergely Orosz draws a parallel between the disruption that AI is causing in the software engineering space and how the smartphone similarly disrupted filmmaking.

    Saying not knowing how to code gives you an advantage in building software (thanks to AI) is like saying not knowing anything about filmmaking gives you an advantage in making films (thanks to having a smartphone + apps to edit stuff)

    Ignore this stuff and keep learning+building

    Btw there’s some parallels in how AI makes it possible for anyone to create software, and how smartphones + their cameras made it possible for anyone to create movies

    So look to how filmmaking changed thanks to smartphones to get a sense how sw dev could change

    Got this analogy from Simon Willison

    And btw this is not about gatekeeping. With AI, anyone can start creating software. Same as with a smartphone, anyone can start creating a movie (remember the “shot on iPhone” movies?)

    But to make enjoyable *movies* on an iPhone, you’ll probably figure out you want to learn cinematography basics. Same thing if you want to build decent software: you’ll probably figure out you’ll want to teach yourself or learn software engineering basics.