• Sprint

I recently read two articles that highlighted a very similar problem when using AI for coding: burnout.

Matthew Hansen talking about how a one-off productivity boost from AI can lead to the team burning out.

    My friend’s panel raised a point I keep coming back to: if we sprint to deliver something, the expectation becomes to keep sprinting. Always. Tired engineers miss edge cases, skip tests, ship bugs. More incidents, more pressure, more sprinting. It feeds itself.

    This is a management problem, not an engineering one. When leadership sees a team deliver fast once (maybe with AI help, maybe not), that becomes the new baseline. The conversation shifts from “how did they do that?” to “why can’t they do that every time?”

    My friend was saying:

    When people claim AI makes them 10x more productive, maybe it’s turning them from a 0.1x engineer to a 1x engineer. So technically yes, they’ve been 10x’d. The question is whether that’s a productivity gain or an exposure of how little investigating they were doing before.

    Burnout and shipping slop will eat whatever productivity gains AI gives you. You can’t optimise your way out of people being too tired to think clearly.

    And here’s Siddhant Khare talking about how an increase in throughput also increases context switching.

    Here’s the thing that broke my brain for a while: AI genuinely makes individual tasks faster. That’s not a lie. What used to take me 3 hours now takes 45 minutes. Drafting a design doc, scaffolding a new service, writing test cases, researching an unfamiliar API. All faster.

    But my days got harder. Not easier. Harder.

    The reason is simple once you see it, but it took me months to figure out. When each task takes less time, you don’t do fewer tasks. You do more tasks. Your capacity appears to expand, so the work expands to fill it. And then some. Your manager sees you shipping faster, so the expectations adjust. You see yourself shipping faster, so your own expectations adjust. The baseline moves.

    Before AI, I might spend a full day on one design problem. I’d sketch on paper, think in the shower, go for a walk, come back with clarity. The pace was slow but the cognitive load was manageable. One problem. One day. Deep focus.

    Now? I might touch six different problems in a day. Each one “only takes an hour with AI.” But context-switching between six problems is brutally expensive for the human brain. The AI doesn’t get tired between problems. I do.

    This is the paradox: AI reduces the cost of production but increases the cost of coordination, review, and decision-making. And those costs fall entirely on the human.

If the team sprints for one sprint, the expectation becomes that the team will sprint forever.

    And on the other end of the spectrum we have David Crawshaw’s experience:

    I am having more fun programming than I ever have, because so many more of the programs I wish I could find the time to write actually exist. I wish I could share this joy with the people who are fearful about the changes agents are bringing. The fear itself I understand, I have fear more broadly about what the end-game is for intelligence on tap in our society. But in the limited domain of writing computer programs these tools have brought so much exploration and joy to my work.

This is the most confusing aspect for me: the polar-opposite experiences people are sharing while using AI for coding.

    Filed under
  • Analytics

Mike Swanson explains the history of software tracking and how things became as they are today. Here’s Mike explaining a well-known warning—when a measure becomes a target, it stops being a good measure.

    One of the most dangerous things about analytics is that they feel objective. A chart is a chart. A number is a number. They have the aesthetic of truth.

    I’ve always liked this quote by William Bruce Cameron (often misattributed to Albert Einstein):

    “Not everything that can be counted counts, and not everything that counts can be counted.”

    Metrics don’t measure reality. They measure what your product currently makes easy.

    There’s a well-known warning about this, often summarized as: when a measure becomes a target, it stops being a good measure. It’s commonly referred to as Goodhart’s Law, and the broader point shows up in multiple fields, because it keeps happening to humans in systems with incentives.

    When I was at Microsoft, a team wanted to remove a feature because “the analytics show that nobody uses it.” If you looked at the UI, though, that feature had been moved deeper and deeper over time:

    • it used to be easy to find
    • then it moved into a menu
    • then into a submenu
    • then into a settings panel
    • then behind an “advanced” section
    • then it was basically invisible

    Of course nobody used it!

    The analytics didn’t prove the feature was unwanted. The analytics proved that we buried it.

    Even worse, once a metric becomes a target, people get promoted for moving it. That doesn’t require anyone to be malicious. It just requires incentives and a dashboard.

    Filed under
  • Force multiplier

    This Hacker News comment by user Stratoscope on how being a force multiplier for your team is very difficult to showcase in front of leadership.

    Many years ago, I worked at a company with a product that ran on Mac and Windows. The Mac version was pretty solid, but the Windows version had some problems.

    They had a talented team of developers who were mostly Mac experts and just starting to get a grip on Windows.

    I was known at the time as a “Windows expert”, so they hired me to help the team get the Windows version into shape.

    My typical day started with “house calls”. People would ping me with their Windows questions and I’d go door to door to help solve them – and to make sure they understood how to do things on Windows.

    In the afternoon, I would work on my own code, but I told everyone they could always call on me for help with a Windows problem, any time of day.

    One colleague asked me: “Mike, how can you afford to be so generous with your time?”

    Then in a performance review, I got this feedback:

    “Mike, we’re worried. Your productivity has been OK lately, but not great. And it’s surprising, because the productivity of the rest of the team has improved a lot during this time.”

    I bit my tongue, but in retrospect I should have said:

    “Isn’t that what you hired me for?”

    Ha!

    Filed under
  • Shifting baseline syndrome

    This comment on Hacker News by user internet2000.

    This Douglas Adams quote is still undefeated:

    1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
    2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
    3. Anything invented after you’re thirty-five is against the natural order of things.

    This reminded me of the shifting baseline syndrome explained by David Attenborough in his book A Life on Our Planet.

    There are fewer fish in the sea today. We don’t realise that this is so because of a phenomenon called the shifting baseline syndrome. Each generation defines the normal by what it experiences. We judge what the sea can provide by the fish populations we know today, not knowing what those populations once were. We expect less and less from the ocean because we have never known for ourselves what riches it once provided and what it could again.

    Filed under
• Moravec’s paradox

    From Wikipedia:

    Moravec’s paradox is the observation that, as Hans Moravec wrote in 1988, “it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility”. This counterintuitive pattern may happen because skills that appear effortless to humans, such as recognizing faces or walking, required millions of years of evolution to develop, while abstract reasoning abilities like mathematics are evolutionarily recent.

    Filed under
  • Values over data

    Jim Nielsen arguing that values should drive decision making, not data:

    Data tells you what people consume, not what you should make. Values, ethics, vision, those can help you with the “should”.

    “What is happening?” and “What should happen?” are two completely different questions and should be dealt with as such.

    The more powerful our ability to understand demand, the more important our responsibility to decide whether to respond to it. We can choose not to build something, even though the data suggests we should. We can say no to the data. 

    Data can tell you what people clicked on, even help you predict what people will click on, but you get to decide what you will profit from.

    Filed under
  • Transparent umbrella

Jampa Uchoa sharing lessons learned from a decade as an engineering manager. The metaphor of the ‘transparent umbrella’ struck a chord with me.

    A good manager is more like a transparent umbrella. They protect the team from unnecessary stress and pressure, but don’t hide reality from them.

    Telling the team: “Our users aren’t thrilled so far. We need to find ways to better serve them. The project risks cancellation if we don’t.” That’s fair game. They deserve to know.

When you do deliver hard news, state it clearly and focus on what the team will do about it. If you act scared, they’ll be scared too. Your goal is to get them thinking about the next steps.

    Filed under
  • Posture

    There’s a GitHub project called posturr that blurs your screen when you slouch. This comment on Hacker News by avalys explains the inverse relationship between a developer’s posture and their productivity.

    You can measure my productivity by how slouched I am.

    Sitting up straight at my desk, chair locked, perfect posture? I’m doing nothing, maybe looking through System Preferences to change the system highlight color.

    Sliding down in my chair like jelly, with my shoulders where my butt should be and my head resting on the lumbar support? I’m building the next iPhone and it’ll be done by 2 AM.

    Ha!

    Filed under
  • Principled and pragmatic

    Mark Carney’s speech from Davos 2026:

[…]we aim to be both principled and pragmatic – principled in our commitment to fundamental values, sovereignty, territorial integrity, the prohibition of the use of force, except when consistent with the UN Charter, and respect for human rights, and pragmatic in recognizing that progress is often incremental, that interests diverge, that not every partner will share all of our values.

    So, we’re engaging broadly, strategically with open eyes. We actively take on the world as it is, not wait around for a world we wish to be.

We are calibrating our relationships, so their depth reflects our values, and we’re prioritizing broad engagement to maximize our influence, given the fluidity of the world at the moment, the risks that this poses and the stakes for what comes next.

    And we are no longer just relying on the strength of our values, but also the value of our strength.

    We are building that strength at home.

    Filed under
  • Telegraph and the First World War

Nicholas Carr talking about how instant communication, enabled by the telegraph, was one of the catalysts of the First World War.

    The unprecedented ability of far-flung leaders and diplomats to talk directly with each other without delay spurred great hopes. It seemed obvious that the resulting exchanges would ease friction and encourage goodwill among nations. Nikola Tesla, in an 1898 interview about his work on wireless telegraph systems, said that he would be “remembered as the inventor who succeeded in abolishing war.” His rival, Guglielmo Marconi, declared in 1912 that wireless telegraphy would “make war impossible.”

    What actually happened was altogether different. In the lead-up to the Franco-Prussian War of 1870, telegraphic communications inflamed tensions rather than dampening them. Writes the French historian Pierre Granet: “The constant transmission of dispatches between governments and their agents, the rapid dissemination of controversial information among an already agitated public, hastened, if it did not actually provoke, the outbreak of hostilities.” 

    The start of the First World War in 1914, two years after Marconi announced the end of war, was similarly hastened by the new communication mediums. After the June 28 assassination of Austrian Archduke Franz Ferdinand in Sarajevo, hundreds of urgent diplomatic messages raced between European capitals through newly strung telegraph and telephone wires. As the historian Stephen Kern describes in The Culture of Time and Space 1880–1918, the rapid-fire dispatches quickly devolved into ultimatums and threats. “Communication technology imparted a breakneck speed to the usually slow pace of traditional diplomacy and seemed to obviate personal diplomacy,” Kern writes. “Diplomats could not cope with the volume and speed of electronic communication.”

    Same as ever.

    Filed under