• Thorium

    Snigdha Sharma and Rahel Philipose talking about how thorium can help meet India’s energy needs on the Daybreak podcast.

    […] Homi Bhabha designed India’s long-term nuclear plan to work in a way that suits the country best. Here’s how he puts the idea behind the plan in his own words.

    The total reserves of thorium in India amount to over 500,000 tonnes in the readily extractable form, while the known reserves of uranium are less than a tenth of this. The aim of a long-range atomic power program in India must therefore be to base the nuclear power generation as soon as possible on thorium rather than uranium. And the reason he came up with a three-phase plan is that, as Kalam explains in his quote, thorium by itself is not fissile.

    […] A fissile material is one that will undergo fission in a nuclear reactor. Fission is essentially a process where the atoms of a material split, and that splitting is what releases enormous amounts of energy.

    Materials like uranium-235 and plutonium-239 are fissile, which is what makes them usable as reactor fuel directly. Thorium, on the other hand, is not fissile. Putting it in a reactor by itself does absolutely nothing.

    What it is, though, is fertile. And that pretty much means exactly what you might think. It can be converted into a fissile material when it’s placed in or around a reactor that’s running on something else.

    It can then produce fuel for the next generation of reactors. And that’s where the different stages come in. The first stage involves using pressurized heavy water reactors, or PHWRs, to generate electricity by burning natural uranium.

    The by-product of this process is spent fuel that contains plutonium-239. This stage is actually fully operational right now, with India running several of these reactors in Rajasthan, Karnataka, Maharashtra and Tamil Nadu. Now in stage 2, the plutonium by-product from stage 1 fuels a different kind of reactor.

    It’s called a fast-breeder reactor, which essentially breeds more fissile material than it consumes. During this stage, thorium is converted into uranium-233, a fissile isotope, and that is the uranium that can finally undergo fission in the last stage. And where are we with stage 2?

    Well, there’s a single prototype fast-breeder reactor in Kalpakkam. It’s been in development for decades, and it’s the only stage 2 reactor we currently have. It’s supposed to be completed only by September this year, which brings us to the third and final stage.

    This one involves yet another type of reactor, called an advanced heavy-water reactor. These are designed to run mainly on thorium, and once the process has begun, these reactors are designed to be self-sustaining. Which means the thorium in this reactor can basically just keep breeding more and more uranium.
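
    For readers who prefer the plan laid out as a single structure, here is a minimal sketch of the three stages as data. The isotope names (plutonium-239, uranium-233) follow the standard description of the Bhabha plan; everything else is paraphrased from the transcript, so treat the exact wording as illustrative.

```python
# The three-stage Indian nuclear programme, summarized as data.
# Each stage's output fuel becomes the input for the next stage.
STAGES = [
    {
        "stage": 1,
        "reactor": "Pressurized Heavy Water Reactor (PHWR)",
        "fuel_in": "natural uranium",
        "fuel_out": "plutonium-239 (in spent fuel)",
        "status": "fully operational",
    },
    {
        "stage": 2,
        "reactor": "Fast Breeder Reactor (FBR)",
        "fuel_in": "plutonium-239 plus a thorium blanket",
        "fuel_out": "uranium-233 (bred from thorium)",
        "status": "single prototype at Kalpakkam",
    },
    {
        "stage": 3,
        "reactor": "Advanced Heavy Water Reactor (AHWR)",
        "fuel_in": "thorium plus uranium-233",
        "fuel_out": "more uranium-233 (self-sustaining)",
        "status": "still some way off",
    },
]

for s in STAGES:
    print(f"Stage {s['stage']}: {s['fuel_in']} -> {s['fuel_out']}")
```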

    But here’s the thing: we are still a long, long way off from stage 3. India has been leading global research on thorium-based nuclear power, and the kind of reactor that stage 3 needs has no precedent anywhere in the world. All of this is a complex and expensive process, and to get to stage 3 you need a fully operational stage 2 that has been running for years and has created the amount of uranium necessary to kickstart stage 3.

    So while other countries found using uranium and plutonium easier and cheaper, and dropped thorium as a priority, India doesn’t have that option. Like I mentioned earlier, the uranium reserves we have are minuscule, and we just can’t keep importing it forever.

    For a long-term solution, thorium is the only option that makes sense. The Indian nuclear establishment has estimated that the country could produce 500 gigawatts of electricity for at least four centuries. And that’s just by using the country’s easily extractable thorium reserves.

    500 gigawatts. Four centuries. Wow. For context, India’s installed power generation capacity for 2025-26 was around 524 gigawatts (a gigawatt measures capacity, not annual energy). Do listen to the entire podcast, because it is not as straightforward as it appears.
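
    To put those numbers in perspective, here is a rough back-of-the-envelope conversion (my own arithmetic, not from the podcast): a gigawatt is a unit of capacity, so 500 GW translates into energy only once you multiply by hours of operation.

```python
# Convert generation capacity (GW) into yearly energy output (TWh),
# assuming continuous operation unless a capacity factor is given.
HOURS_PER_YEAR = 8766  # average year, accounting for leap years

def annual_energy_twh(capacity_gw: float, capacity_factor: float = 1.0) -> float:
    """Energy produced in one year, in terawatt-hours."""
    return capacity_gw * HOURS_PER_YEAR * capacity_factor / 1000

print(annual_energy_twh(500))        # 500 GW flat out: 4383.0 TWh per year
print(annual_energy_twh(500) * 400)  # over four centuries: 1753200.0 TWh
```

    Real reactors run below a 1.0 capacity factor, so the second argument lets you scale the estimate down.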

  • Left behind

    Terence Eden talking about how it is ok to be a little behind the curve in all the AI excitement.

    Many years ago, someone tried to get me into cryptocurrencies. “They’re the future of money!” they said. I replied saying that I’d rather wait until they were more useful, less volatile, easier to use, and utterly reliable.

    “You don’t want to get left behind, do you?” they countered.

    That struck me as a bizarre sentiment. What is there to be left behind from? If BitCoin (or whatever) is going to liberate us all from economic drudgery, what’s the point of “getting in early”? It’ll still be there tomorrow and I can join the journey whenever it is sensible for me.

    Part of the crypto grift was telling people to “Have Fun Staying Poor”. That weaponisation of FOMO was an insidious way to get people to drop their scepticism.

    I feel the same way about the current crop of AI tools. I’ve tried a bunch of them. Some are good. Most are a bit shit. Few are useful to me as they are now. I’m utterly content to wait until their hype has been realised. Why should I invest in learning the equivalent of WordStar for DOS when Google Docs is coming any-day-now?

  • Don’t judge

    This thoughtful take by drfloyd51 from Hacker News on why we shouldn’t judge parents.

    Before kids it was easy to judge bad parents. Then one day with child I found myself due to circumstances in a store way past my child’s bedtime. She was screaming and crying, because it was way past her bedtime.

    Then I realized… I was now “the bad parent” I had so easily judged.

    Then it was easy to judge parents with children younger than mine.

    Until I learned that not all children have the same issues in the same order.

    Then I learned it’s easier not to judge at all.

  • Metabolism over patience

    Jordi Visser talking about how AI has compressed time-to-market and in turn started rewarding metabolism over patience.

    For more than a decade, equity markets were built around a simple premise: durable franchises deserved durable multiples. Investors weren’t just buying earnings. They were buying time. Time to compound. Time before meaningful competition arrived. Time protected by scale, distribution, switching costs, and capital intensity.

    Time was the moat.

    The entire architecture of modern markets reinforced that belief. Passive flows concentrated into the largest platforms. Growth indices tilted toward scalable digital economics. Valuation frameworks stretched duration assumptions further into the future. A narrow cohort absorbed more and more of the index because the math appeared rational.

    Scale begot scale.

    But something subtle has changed.

    AI does not simply disrupt business models.

    It compresses time.

    When the replacement cost of competence collapses, when code can be generated instantly and iterated continuously, competitive cycles shrink. A product that once enjoyed a five- to ten-year window of defensibility may now face viable competition in months. Execution speed replaces installed base. Iteration cadence replaces headcount.

    And when competitive half-lives shorten, equity changes character.

    A share of stock used to represent ownership of a durable franchise with predictable cash flows. In the Age of Agents, it increasingly resembles a call option on execution velocity. Cash flows that once looked like fifteen-year streams begin to look like five-year bets.

    When duration compresses, multiples reprice.

    This is not simply a SaaS selloff. It is the repricing of time as an asset.

    If the last cycle rewarded patience, buy scale, hold duration, let monetary expansion amplify returns, the next may reward adaptability. Velocity over size. Metabolism over moat.

    This somewhat conflicts with what Jason Fried shared on bespoke software.

  • Bespoke software

    Jason Fried shares his thoughts on bespoke software.

    So when someone suggests that AI means everyone will build their own custom tools, ask who “everyone” is. The three-person accounting firm drowning in client paperwork? They want the paperwork gone, not a new system to maintain. The regional logistics company with 40 trucks? They want the routes optimized, not Joe spouting off about this new system he’s been messing around with. The law firm billing 70-hour weeks? They want leverage on their time, not a software project to design.

    They don’t hate technology. But building and maintaining their own critical systems isn’t their wheelhouse, regardless of how much faster and easier it’s become. It’s another job on top of the job.

  • Let them fight

    Dave Rupert explaining how the Gell-Mann Amnesia Effect is playing out in organizations amid all the surrounding AI hoopla.

    Sometimes I feel like there’s a palpable tension in the air as if we’re waiting to see whether AI will replace designers or engineers first. Designers empowered by AI might feel those pesky nay-saying, opinionated engineers aren’t needed anymore. Engineers empowered with AI might feel like AI creates designs that are good enough for most situations. Backend engineers feel like frontend engineering is a solved problem. Frontend engineers know scaffolding a CRUD app or an entire backend API is simple fodder for the agent. Meanwhile, management cackles in their leather chairs saying “Let them fight…”

    Ha!

  • Comprehensive specifications

    This comic by CommitStrip from 2016 feels way ahead of its time.

  • Product and Relationship

    Terry Godier’s thought-provoking post on how modern gadgets don’t stop talking.

    This watch [Casio F-91W] costs twelve dollars. It weighs twenty-one grams. It has an alarm that sounds like a microwave in another room. It has told time the same way since 1989.

    It doesn’t know my heart rate. It has no opinions about whether I’ve stood up enough today. It will never need a firmware update.

    When the battery dies in seven years, I’ll press in a new one with a paperclip.

    That will be the entirety of my obligation to it.

    This watch [Apple Watch] costs four hundred dollars. It also tells time.

    It also tracks my steps, monitors my blood oxygen, measures my sleep quality, logs my workouts, reminds me to breathe, reminds me to stand, nudges me to close my rings, alerts me to unusual heart rhythms, pings me with notifications from six apps, and dies every night.

    One of these is a product. The other is a relationship.

    And a very insightful take on screen time.

    Screen Time gives you a report card. And if the grade is bad, the design makes one thing clear: That’s a YOU problem.

    It measures YOUR usage. Tracks YOUR behavior. Gives YOU a weekly report card. If the numbers are too high?

    You picked it up too much.
    You spent too long.
    You failed your limit.
    Try again next week.
    Try harder.

    Screen Time is a blame shift dressed in a soft font.

    Ha!

  • Seat-based business model

    Ben Thompson talking about the impact of AI on the seat-based business model.

    […] when Microsoft revealed how they will handle the potential business impact of AI reducing seats, which is a bit of a problem for their seat-based business model: the company is going to bundle AI into a new higher-tiered enterprise offering, E7, which is going to cost twice as much — $99 per seat per month — as the formerly top-of-the-line E5. That’s a big increase, which Microsoft needs to justify with AI that actually makes those seats more productive, and the product they launched with the new bundle was Copilot Cowork.

    This never occurred to me. AI replaces humans. Organisations need fewer licences for their enterprise software. Enterprise software makers double the price of those licences. Organisations pay the same amount for—supposedly—increased productivity.
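
    The arithmetic is simple enough to sketch. A toy illustration: the 1,000-seat organisation is my assumption, and the E5 price being exactly half of E7’s is inferred from the “twice as much” wording; only the $99 E7 figure comes from the quote.

```python
# Toy model of the seat-pricing move: halve the seats, double the
# per-seat price, and the vendor's revenue from the account is unchanged.

def monthly_spend(seats: int, price_per_seat: float) -> float:
    return seats * price_per_seat

e7_price = 99.0          # quoted price for the new E7 tier, per seat per month
e5_price = e7_price / 2  # "twice as much as E5", per the quote

before = monthly_spend(1000, e5_price)  # 1,000 seats on E5
after = monthly_spend(500, e7_price)    # AI trims it to 500 seats on E7

print(before == after)  # True: the organisation pays the same
```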

  • Continue to plant the little acorns from which the mighty oak trees grow

    Sharif Shameem talks about the disadvantages of success and how a willingness to look stupid helps counter it.

    There’s this unfortunate pattern that happens when someone wins a Nobel Prize. They tend to stop doing great work. Richard Hamming talks about this in You and Your Research:

    When you are famous it is hard to work on small problems. This is what did Shannon in. After information theory, what do you do for an encore? The great scientists often make this error. They fail to continue to plant the little acorns from which the mighty oak trees grow. They try to get the big thing right off. And that isn’t the way things go. So that is another reason why you find that when you get early recognition it seems to sterilize you. In fact I will give you my favorite quotation of many years. The Institute for Advanced Study in Princeton, in my opinion, has ruined more good scientists than any institution has created, judged by what they did before they came and judged by what they did after. Not that they weren’t good afterwards, but they were superb before they got there and were only good afterwards.

    Before the Nobel Prize, nobody really cares who you are. But after the Nobel Prize, you’re a Nobel Prize winner, and Nobel Prize winners are supposed to have Good Ideas. Every idea, every paper, every talk at a conference is now being evaluated against the standard of your Nobel Prize-winning work. Everyone is asking, “is this worthy of a Nobel laureate?” It’s a high bar to clear. So instead of trying and occasionally failing, they just… stop trying. The fear of making something bad is worse than producing nothing at all.

    This reminds me of Quality vs Quantity.