• Expert judgement

    This post on Dead Neurons talking about why expert judgement cannot be codified and hence cannot be taught.

    There is an apparent contradiction at the heart of expertise. Expert judgement is learnable, in the sense that people demonstrably acquire it over time. It is also non-transmissible, in the sense that no expert can transfer their judgement to another person through explanation. If it was once learnable, why can it not be taught?

    The resolution lies in a distinction between two fundamentally different modes of learning. The first is instruction: the transfer of explicit models, rules, and relationships from one person to another through language. The second is calibration: the development of internal models through repeated exposure to feedback in a specific environment. Judgement is learnable through calibration. It is not transmissible through instruction. These are different processes operating on different substrates, and conflating them is the source of the apparent contradiction.

    To see why, we need to be precise about what “high-dimensional” means when applied to expert knowledge, because the concept is doing all the real work.

    Consider a simple decision: should I cross the road? A rule-based encoding of this decision might operate on three variables: is a car visible, how fast is it moving, and how far away is it. These three dimensions are sufficient to produce a reasonable crossing decision most of the time. You could write this as an explicit rule, transmit it through language, and a person who had never crossed a road could apply it successfully in straightforward cases.

    Now consider the actual model that an experienced pedestrian uses. They are integrating: the car’s speed, its acceleration (is it slowing down?), the road surface (wet or dry, affecting stopping distance), the driver’s apparent attentiveness (are they looking at their phone?), the car’s trajectory (drifting within the lane?), the presence of other cars that might obscure the driver’s view, the width of the road, their own walking speed today (are they carrying something heavy, are they injured?), the behaviour of other pedestrians (are they crossing confidently or hesitating?), the sound of the engine (accelerating or decelerating, even before the speed change is visible), the type of vehicle (a truck has different stopping characteristics than a bicycle), the time of day (affecting driver fatigue and visibility), and dozens of other variables they could not enumerate if asked.
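    To make the contrast concrete, here is a minimal sketch of the three-variable rule (my own illustration, with made-up thresholds; it is not from the post). The point is that this rule fits in a few lines precisely because it is low-dimensional:

    ```python
    def simple_crossing_rule(car_visible: bool, speed_kmh: float, distance_m: float) -> bool:
        """Low-dimensional, transmissible rule: is it safe to cross?

        Three explicit variables, teachable in one sentence. The thresholds
        are illustrative assumptions, not road-safety advice.
        """
        if not car_visible:
            return True
        if speed_kmh <= 0:  # car is stationary
            return True
        # Seconds until the car reaches the crossing point.
        time_to_arrival = distance_m / (speed_kmh / 3.6)
        # Cross only if the car is more than ~10 seconds away.
        return time_to_arrival > 10.0
    ```

    No equivalent function can be written for the experienced pedestrian's model: its inputs cannot even be enumerated, which is why it has to be calibrated through exposure rather than transmitted through instruction.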

  • Algorithms

    From Collaborative Fund:

    Algorithms have an answer for everything except what’s good. They can tell you what’s trending, what people like you clicked on, what to watch next — but they can’t tell you why something matters, or what it felt like to discover it. That gap is widening. As AI-generated content fills every feed and recommendation engines collapse taste into pattern-matching, the most valuable signal online is increasingly the simplest one: a real person, with a real point of view, telling you about something they love.

    And this is why I love RSS readers. I am following a person. A person who will be like me in some ways and unlike me in many other ways. And the ways that person is unlike me are my opportunity to learn new things.

  • Why are executives enamoured by AI?

    John Wang sharing his thoughts on why executives are enamoured by AI.

    Executives have always had to deal with non-determinism. That’s par for the course:

    • People being out sick or taking time off unexpectedly
    • Someone not finishing an important project and not talking about it until far too late in the process
    • People reacting to an announcement in an unexpected way
    • A feature being built in a way that doesn’t make sense with respect to the rest of the product, but does technically achieve objectives.

    More generally, if you’ve ever taken a Chaos Theory class in math, you’ll know that nonlinear, chaotic systems emerge when individual agents in a system are all acting with different inputs, utility functions, etc. Systems become slightly easier to manage if you’re able to make those utility functions consistent (you’re able to get a grasp on system dynamics).

    A manager’s job is to create a model of the world and align everyone’s utility functions, knowing that there’s a large amount of non-determinism in complex systems. So it makes sense that as a manager, you’re ok with a decent amount of this.

    AI is something that is non-deterministic but has a lot of characteristics of a well behaved chaotic system (specifically a system where you can understand the general behavior of the system, even if you cannot predict the specific outcomes at any point in time).

    For example:

    • LLMs generally continue their work and provide an output regardless of time of day, how difficult the task is, how much information is available
    • LLM’s deficiencies have well defined failure modes (e.g. hallucinations, lack of ability to operate outside of their context, and especially poor outcomes when not given enough context)
    • The types of tasks that an LLM can accomplish are relatively well known, and the capability envelope is getting mapped out quickly. This is different than humans, where each person has a different set of strengths and weaknesses and where you need to uncover these over time.

    Many of these properties are more deterministic than large human systems, which makes AI incredibly attractive for an executive who is already used to this and likely has put a large amount of effort into adding determinism into their systems already (e.g. by adding processes and structure in the form of levels and ladders, standard operating procedures, etc.).

    Intriguing.
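    The "well behaved chaotic system" idea can be illustrated with the logistic map, a textbook example I am supplying myself (it is not in Wang's post): two trajectories from nearly identical starting points become unpredictable within a few dozen steps, yet their long-run statistics agree.

    ```python
    def logistic_trajectory(x0: float, steps: int, r: float = 3.9) -> list[float]:
        """Iterate the logistic map x -> r*x*(1-x); chaotic at r = 3.9."""
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1 - xs[-1]))
        return xs

    a = logistic_trajectory(0.2, 100_000)
    b = logistic_trajectory(0.2 + 1e-9, 100_000)  # near-identical start

    # Specific outcomes: the two trajectories decorrelate within ~50 steps.
    max_gap = max(abs(x - y) for x, y in zip(a[:200], b[:200]))

    # General behaviour: long-run averages of the two runs agree closely,
    # because both sample the same invariant distribution.
    mean_a = sum(a) / len(a)
    mean_b = sum(b) / len(b)
    ```

    `max_gap` reaches order one even though the starting points differ by one part in a billion, while the two long-run means agree to a few decimal places: the "understand the general behaviour, not the specific outcomes" property.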

    He also talks about why individual contributors are sceptical of AI.

  • Useless knowledge

    I recently read this insightful paper, The Usefulness of Useless Knowledge by Abraham Flexner, published in 1939 (yes, 1939), which highlights how world-changing discoveries have their roots in people just being curious, never caring about utility.

    We live in a world that would be helpless without electricity. Called upon to mention a discovery of the most immediate and far reaching practical use we might well agree upon electricity. But who made the fundamental discoveries out of which the entire electrical development of more than one hundred years has come?

    The answer is interesting. Michael Faraday’s father was a blacksmith; Michael himself was apprenticed to a bookbinder. In 1812, when he was already twenty-one years of age, a friend took him to the Royal Institution where he heard Sir Humphry Davy deliver four lectures on chemical subjects. He kept notes and sent a copy of them to Davy. The very next year, 1813, he became an assistant in Davy’s laboratory, working on chemical problems. Two years later he accompanied Davy on a trip to the Continent. In 1825, when he was thirty-four years of age, he became Director of the Laboratory of the Royal Institution where he spent fifty-four years of his life.

    Faraday’s interest soon shifted from chemistry to electricity and magnetism, to which he devoted the rest of his active life. Important but puzzling work in this field had been previously accomplished by Oersted, Ampere, and Wollaston. Faraday cleared away the difficulties which they had left unsolved and by 1841 had succeeded in the task of induction of the electric current. Four years later a second and equally brilliant epoch in his career opened when he discovered the effect of magnetism on polarized light. His earlier discoveries have led to the infinite number of practical applications by means of which electricity has lightened the burdens and increased the opportunities of modern life. His later discoveries have thus far been less prolific of practical results. What difference did this make to Faraday? Not the least. At no period of his unmatched career was he interested in utility. He was absorbed in disentangling the riddles of the universe, at first chemical riddles, in later periods, physical riddles. As far as he cared, the question of utility was never raised. Any suspicion of utility would have restricted his restless curiosity. In the end, utility resulted, but it was never a criterion to which his ceaseless experimentation could be subjected.

    […]

    I am not for a moment suggesting that everything that goes on in laboratories will ultimately turn to some unexpected practical use or that an ultimate practical use is its actual justification. Much more am I pleading for the abolition of the word “use,” and for the freeing of the human spirit. To be sure, we shall thus free some harmless cranks. To be sure, we shall thus waste some precious dollars. But what is infinitely more important is that we shall be striking the shackles off the human mind and setting it free for the adventures which in our own day have, on the one hand, taken Hale and Rutherford and Einstein and their peers millions upon millions of miles into the uttermost realms of space and, on the other, loosed the boundless energy imprisoned in the atom. What Rutherford and others like Bohr and Millikan have done out of sheer curiosity in the effort to understand the construction of the atom has released forces which may transform human life; but this ultimate and unforeseen and unpredictable practical result is not offered as a justification for Rutherford or Einstein or Millikan or Bohr or any of their peers. Let them alone. No educational administrator can possibly direct the channels in which these or other men shall work. The waste, I admit again, looks prodigious. It is not really so. All the waste that could be summed up in developing the science of bacteriology is as nothing compared to the advantages which have accrued from the discoveries of Pasteur, Koch, Ehrlich, Theobald Smith, and scores of others, advantages that could never have accrued if the idea of possible use had permeated their minds. These great artists (for such are scientists and bacteriologists) disseminated the spirit which prevailed in laboratories in which they were simply following the line of their own natural curiosity.

    I bet things haven’t changed a bit almost 100 years later.

  • Pessimism as a risk management tool

    M. Pattabiraman talking about how he uses pessimism as a risk management tool.

    A frequent criticism by those who encounter freefincal articles or videos is, “This guy is very pessimistic about equity investing”. I also often get asked why that is the case. Please allow me to explain.

    There is a difference between lack of faith and pessimism. If I lack belief in equity, I will not go near it. Pessimism does not preclude participation. It only serves as a risk management mechanism. Pessimism prevents faith from turning into blind faith.

    I use this strategy while managing projects. Being pessimistic helps me put a plan in place for when things go sideways.

  • Thorium

    Snigdha Sharma and Rahel Philipose talking about how Thorium can help India’s energy needs in the Daybreak podcast.

    […] Homi Bhabha designed India’s long-term nuclear plan to work in a way that suits the country best. Here’s how he puts the idea behind the plan in his own words.

    The total reserves of thorium in India amount to over 500,000 tonnes in the readily extractable form, while the known reserves of uranium are less than a tenth of this. The aim of a long-range atomic power program in India must therefore be to base the nuclear power generation as soon as possible on thorium rather than uranium. And the reason he came up with a three-phase plan is, as Kalam explains, that thorium by itself is not fissile.

    […] A fissile material is one that will undergo fission in a nuclear reactor. Fission is essentially a process where the atoms of a material split, and that splitting is what releases enormous amounts of energy.

    Materials like uranium-235 and plutonium-239 are fissile, which is what makes them usable as reactor fuel directly. Thorium, on the other hand, is not fissile. Putting it in a reactor by itself does absolutely nothing.

    What it is, though, is fertile. And that pretty much means exactly what you might think. It can be converted into a fissile material when it’s placed in or around a reactor that’s running on something else.

    Then it can produce fuel that can then be used in the next generation of reactors. And that’s where the different stages come in. The first stage involves using pressurized heavy water reactors, or PHWRs, to generate electricity by burning natural uranium.

    The by-product of this process is a spent fuel that contains plutonium-239. This stage is actually fully operational right now, with India running several of these reactors in Rajasthan, Karnataka, Maharashtra and Tamil Nadu. Now in stage 2, the plutonium by-product from stage 1 fuels a different kind of a reactor.

    It’s called a fast-breeder reactor, which essentially breeds and creates more fissile material. During this stage, thorium is converted into uranium (specifically uranium-233), and that is the uranium that can finally undergo fission in the last stage. And where are we with stage 2?

    Well, there’s a single prototype fast-breeder reactor in Kalpakkam. It’s been in development for decades, and it’s the only stage 2 reactor we currently have. And it’s also supposed to be completed only by September this year, which brings us to the third and final stage.

    This one involves yet another type of reactor, called an advanced heavy-water reactor. These are designed to run mainly on thorium, and once the process has begun, these reactors are designed to be self-sustaining. Which means the thorium in this reactor can basically just keep breeding more and more uranium.

    But here’s the thing, we are still a long, long way off from stage 3. India has been leading the research on thorium-based nuclear power globally, and the kind of reactor that stage 3 needs has no precedent anywhere in the world. Mainly because all of this is a complex and expensive process, and to get to stage 3, you require a fully operational stage 2 that has been running for years and which has created the amount of uranium necessary to kickstart stage 3.

    So while other countries found using uranium and plutonium easier and cheaper, thorium dropped as a priority for them. But India doesn’t have that same option. Like I mentioned earlier, the uranium reserves we have are minuscule, and we just can’t keep importing it forever.

    For a long-term solution, thorium is the only option that makes sense. Because the Indian nuclear establishment has estimated that the country could produce 500 gigawatts of electricity for at least 4 centuries. And that’s just by using the country’s easily extractable thorium reserves.

    500 gigawatts. 4 centuries. Wow. For context, India’s installed power generation capacity for 2025-26 was 524 gigawatts (gigawatts measure capacity, not annual generation, so that is the right figure to compare). Do listen to the entire podcast, because it is not as straightforward as it appears.
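    As a back-of-envelope check (my arithmetic, not the podcast's), treating the quoted 500 gigawatts as continuous output for 400 years:

    ```python
    # Energy implied by "500 GW for 4 centuries", assuming (generously)
    # continuous full-power output for the whole period.
    power_gw = 500
    years = 400
    hours_per_year = 8766           # 365.25 days * 24 h

    energy_gwh = power_gw * years * hours_per_year
    energy_twh = energy_gwh / 1000  # 1 TWh = 1000 GWh
    ```

    That works out to roughly 1.75 million terawatt-hours, and the conversion is also a reminder that gigawatts (power) and terawatt-hours (energy) measure different things.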

  • Left behind

    Terence Eden talking about how it is ok to be a little behind the curve in all the AI excitement.

    Many years ago, someone tried to get me into cryptocurrencies. “They’re the future of money!” they said. I replied saying that I’d rather wait until they were more useful, less volatile, easier to use, and utterly reliable.

    “You don’t want to get left behind, do you?” they countered.

    That struck me as a bizarre sentiment. What is there to be left behind from? If BitCoin (or whatever) is going to liberate us all from economic drudgery, what’s the point of “getting in early”? It’ll still be there tomorrow and I can join the journey whenever it is sensible for me.

    Part of the crypto grift was telling people to “Have Fun Staying Poor”. That weaponisation of FOMO was an insidious way to get people to drop their scepticism.

    I feel the same way about the current crop of AI tools. I’ve tried a bunch of them. Some are good. Most are a bit shit. Few are useful to me as they are now. I’m utterly content to wait until their hype has been realised. Why should I invest in learning the equivalent of WordStar for DOS when Google Docs is coming any-day-now?

  • Don’t judge

    This thoughtful take by drfloyd51 from Hacker News on why we shouldn’t judge parents.

    Before kids it was easy to judge bad parents. Then one day, due to circumstances, I found myself with my child in a store way past her bedtime. She was screaming and crying, because it was way past her bedtime.

    Then I realized… I was now “the bad parent” I had so easily judged.

    Then it was easy to judge parents with children younger than mine.

    Until I learned that not all children have the same issues in the same order.

    Then I learned it’s easier not to judge at all.

  • Metabolism over patience

    Jordi Visser talking about how AI has compressed time-to-market and in turn started rewarding metabolism over patience.

    For more than a decade, equity markets were built around a simple premise: durable franchises deserved durable multiples. Investors weren’t just buying earnings. They were buying time. Time to compound. Time before meaningful competition arrived. Time protected by scale, distribution, switching costs, and capital intensity.

    Time was the moat.

    The entire architecture of modern markets reinforced that belief. Passive flows concentrated into the largest platforms. Growth indices tilted toward scalable digital economics. Valuation frameworks stretched duration assumptions further into the future. A narrow cohort absorbed more and more of the index because the math appeared rational.

    Scale begot scale.

    But something subtle has changed.

    AI does not simply disrupt business models.

    It compresses time.

    When the replacement cost of competence collapses, when code can be generated instantly and iterated continuously, competitive cycles shrink. A product that once enjoyed a five- to ten-year window of defensibility may now face viable competition in months. Execution speed replaces installed base. Iteration cadence replaces headcount.

    And when competitive half-lives shorten, equity changes character.

    A share of stock used to represent ownership of a durable franchise with predictable cash flows. In the Age of Agents, it increasingly resembles a call option on execution velocity. Cash flows that once looked like fifteen-year streams begin to look like five-year bets.

    When duration compresses, multiples reprice.

    This is not simply a SaaS selloff. It is the repricing of time as an asset.

    If the last cycle rewarded patience (buy scale, hold duration, let monetary expansion amplify returns), the next may reward adaptability. Velocity over size. Metabolism over moat.
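    A toy discounted-cash-flow comparison makes the repricing concrete (my sketch, not Visser's; the 10% discount rate is an assumption, and the 15-versus-5-year horizons are taken from the excerpt's framing):

    ```python
    def annuity_pv(cash_flow: float, rate: float, years: int) -> float:
        """Present value of `cash_flow` received at the end of each year."""
        return cash_flow * (1 - (1 + rate) ** -years) / rate

    # $1 of annual earnings, discounted at an assumed 10%.
    pv_15yr = annuity_pv(1.0, 0.10, 15)   # ~7.6x multiple
    pv_5yr = annuity_pv(1.0, 0.10, 5)     # ~3.8x multiple
    ```

    Compressing the expected stream from fifteen years to five cuts the warranted multiple roughly in half before anything changes in the business itself, which is what "the repricing of time as an asset" means in practice.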

    This kind of conflicts with what Jason Fried shared on bespoke software.

  • Bespoke software

    Jason Fried shares his thoughts on bespoke software.

    So when someone suggests that AI means everyone will build their own custom tools, ask who “everyone” is. The three-person accounting firm drowning in client paperwork? They want the paperwork gone, not a new system to maintain. The regional logistics company with 40 trucks? They want the routes optimized, not Joe spouting off about this new system he’s been messing around with. The law firm billing 70-hour weeks? They want leverage on their time, not a software project to design.

    They don’t hate technology. But building and maintaining their own critical systems isn’t their wheelhouse, regardless of how much faster and easier it’s become. It’s another job on top of the job.