• Tool or the toolmaker

    Charles Arthur’s intriguing thought on the gradual loss of cognitive capabilities as those capabilities are outsourced to LLMs.

    This makes me think that this complaint/debate has been going on for a long time. The move from oral longform poetry such as The Iliad and Beowulf to writing it down, then printing it, then putting it on websites, then letting search engines find it for you, and now letting LLMs do some part of the work of analysing it – all of these seem to have been viewed as letting our brains slide back into the primordial ooze. If a problem is eternal, is it really because of the tools, or the toolmakers?

  • Laziness

    Bryan Cantrill explaining how laziness is a strength.

    […] when programmers are engaged in the seeming laziness of hammock-driven development, we are in fact turning the problem over and over in our heads. We undertake the hard intellectual work of developing these abstractions in part because we are optimizing the hypothetical time of our future selves, even if at the expense of our current one. When we get this calculus right, it is glorious, as the abstraction serves not just ourselves, but all who come after us. That is, our laziness serves to make software easier to write, and systems easier to compose — to allow more people to write more of it.

    […]

    The problem is that LLMs inherently lack the virtue of laziness. Work costs nothing to an LLM. LLMs do not feel a need to optimize for their own (or anyone’s) future time, and will happily dump more and more onto a layercake of garbage. Left unchecked, LLMs will make systems larger, not better — appealing to perverse vanity metrics, perhaps, but at the cost of everything that matters. As such, LLMs highlight how essential our human laziness is: our finite time forces us to develop crisp abstractions in part because we don’t want to waste our (human!) time on the consequences of clunky ones. The best engineering is always borne of constraints, and the constraint of our time places limits on the cognitive load of the system that we’re willing to accept. This is what drives us to make the system simpler, despite its essential complexity.

    Ha!

    The author goes on to emphasise that LLMs are going to play an important role in the future of software development, but they are a tool. An extraordinary tool, but still a tool.

  • Value comes from doing something non-obvious

    Lalit Maganti shares his experience of using AI to build devtools for SQLite, something he had dreamt of for eight long years. This is probably the most measured take I have read on the internet. Most such pieces are fluff, written the moment AI spits out the first version of the codebase.

    AI turned out to be better than me at the act of writing code itself, assuming that code is obvious. If I can break a problem down to “write a function with this behaviour and parameters” or “write a class matching this interface,” AI will build it faster than I would and, crucially, in a style that might well be more intuitive to a future reader. It documents things I’d skip, lays out code consistently with the rest of the project, and sticks to what you might call the “standard dialect” of whatever language you’re working in.

    That standardness is a double-edged sword. For the vast majority of code in any project, standard is exactly what you want: predictable, readable, unsurprising. But every project has pieces that are its edge, the parts where the value comes from doing something non-obvious. For syntaqlite, that was the extraction pipeline and the parser architecture. AI’s instinct to normalize was actively harmful there, and those were the parts I had to design in depth and often resorted to just writing myself.

    But here’s the flip side: the same speed that makes AI great at obvious code also makes it great at refactoring. If you’re using AI to generate code at industrial scale, you have to refactor constantly and continuously. If you don’t, things immediately get out of hand. This was the central lesson of the vibe-coding month: I didn’t refactor enough, the codebase became something I couldn’t reason about, and I had to throw it all away. In the rewrite, refactoring became the core of my workflow. After every large batch of generated code, I’d step back and ask “is this ugly?” Sometimes AI could clean it up. Other times there was a large-scale abstraction that AI couldn’t see but I could; I’d give it the direction and let it execute. If you have taste, the cost of a wrong approach drops dramatically because you can restructure quickly.

  • Expert judgement

    This post on Dead Neurons talking about why expert judgement cannot be codified and hence cannot be taught.

    There is an apparent contradiction at the heart of expertise. Expert judgement is learnable, in the sense that people demonstrably acquire it over time. It is also non-transmissible, in the sense that no expert can transfer their judgement to another person through explanation. If it was once learnable, why can it not be taught?

    The resolution lies in a distinction between two fundamentally different modes of learning. The first is instruction: the transfer of explicit models, rules, and relationships from one person to another through language. The second is calibration: the development of internal models through repeated exposure to feedback in a specific environment. Judgement is learnable through calibration. It is not transmissible through instruction. These are different processes operating on different substrates, and conflating them is the source of the apparent contradiction.

    To see why, we need to be precise about what “high-dimensional” means when applied to expert knowledge, because the concept is doing all the real work.

    Consider a simple decision: should I cross the road? A rule-based encoding of this decision might operate on three variables: is a car visible, how fast is it moving, and how far away is it. These three dimensions are sufficient to produce a reasonable crossing decision most of the time. You could write this as an explicit rule, transmit it through language, and a person who had never crossed a road could apply it successfully in straightforward cases.

    Now consider the actual model that an experienced pedestrian uses. They are integrating: the car’s speed, its acceleration (is it slowing down?), the road surface (wet or dry, affecting stopping distance), the driver’s apparent attentiveness (are they looking at their phone?), the car’s trajectory (drifting within the lane?), the presence of other cars that might obscure the driver’s view, the width of the road, their own walking speed today (are they carrying something heavy, are they injured?), the behaviour of other pedestrians (are they crossing confidently or hesitating?), the sound of the engine (accelerating or decelerating, even before the speed change is visible), the type of vehicle (a truck has different stopping characteristics than a bicycle), the time of day (affecting driver fatigue and visibility), and dozens of other variables they could not enumerate if asked.
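    The gap between the two models in the quoted passage can be made concrete. Here is a minimal sketch of the three-variable, rule-based version; the function name and thresholds are my own illustration, not from the post. The point is that a rule this small can be transmitted through language, while the experienced pedestrian's integrated model cannot.

```python
def should_cross(car_visible: bool, speed_mps: float, distance_m: float) -> bool:
    """Rule-based crossing decision on just three variables.

    Thresholds are illustrative, not calibrated.
    """
    if not car_visible:
        return True  # no car in sight: cross
    if speed_mps <= 0:
        return True  # car is stationary: cross
    # Cross only if the car is more than five seconds away.
    return distance_m / speed_mps > 5.0
```

    A rule like this handles the straightforward cases; the dozens of extra variables the expert integrates (road surface, engine sound, driver attentiveness) are exactly what resists this kind of encoding.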

  • Algorithms

    From Collaborative Fund:

    Algorithms have an answer for everything except what’s good. They can tell you what’s trending, what people like you clicked on, what to watch next — but they can’t tell you why something matters, or what it felt like to discover it. That gap is widening. As AI-generated content fills every feed and recommendation engines collapse taste into pattern-matching, the most valuable signal online is increasingly the simplest one: a real person, with a real point of view, telling you about something they love.

    And this is why I love RSS readers. I am following a person. A person who will be like me in some ways and unlike me in many other ways. And the ways that person is unlike me are an opportunity for me to learn new things.

  • Why are executives enamoured by AI?

    John Wang sharing his thoughts on why executives are enamoured by AI.

    Executives have always had to deal with non-determinism. That’s par for the course:

    • People being out sick or taking time off unexpectedly
    • Someone not finishing an important project and not talking about it until far too late in the process
    • People reacting to an announcement in an unexpected way
    • A feature being built in a way that doesn’t make sense with respect to the rest of the product, but does technically achieve objectives.

    More generally, if you’ve ever taken a Chaos Theory class in math, you’ll know that nonlinear, chaotic systems emerge when individual agents in a system are all acting with different inputs, utility functions, etc. Systems become slightly easier to manage if you’re able to make those utility functions consistent (you’re able to get a grasp on system dynamics).

    A manager’s job is to create a model of the world and align everyone’s utility functions, knowing that there’s a large amount of non-determinism in complex systems. So it makes sense that as a manager, you’re ok with a decent amount of this.

    AI is something that is non-deterministic but has a lot of characteristics of a well behaved chaotic system (specifically a system where you can understand the general behavior of the system, even if you cannot predict the specific outcomes at any point in time).

    For example:

    • LLMs generally continue their work and provide an output regardless of time of day, how difficult the task is, how much information is available
    • LLM’s deficiencies have well defined failure modes (e.g. hallucinations, lack of ability to operate outside of their context, and especially poor outcomes when not given enough context)
    • The types of tasks that an LLM can accomplish are relatively well known, and the capability envelope is getting mapped out quickly. This is different than humans, where each person has a different set of strengths and weaknesses and where you need to uncover these over time.

    Many of these properties are more deterministic than large human systems, which makes AI incredibly attractive for an executive who is already used to this and likely has put a large amount of effort into adding determinism into their systems already (e.g. by adding processes and structure in the form of levels and ladders, standard operating procedures, etc.).

    Intriguing.

    He also talks about why individual contributors are sceptical of AI.

  • Useless knowledge

    I recently read this insightful paper, The Usefulness of Useless Knowledge by Abraham Flexner, published in 1939 (yes, 1939), which highlights how world-changing discoveries have their roots in people just being curious, never caring about utility.

    We live in a world that would be helpless without electricity. Called upon to mention a discovery of the most immediate and far reaching practical use we might well agree upon electricity. But who made the fundamental discoveries out of which the entire electrical development of more than one hundred years has come?

    The answer is interesting. Michael Faraday’s father was a blacksmith; Michael himself was apprenticed to a bookbinder. In 1812, when he was already twenty-one years of age, a friend took him to the Royal Institution where he heard Sir Humphrey Davy deliver four lectures on chemical subjects. He kept notes and sent a copy of them to Davy. The very next year, 1813, he became an assistant in Davy’s laboratory, working on chemical problems. Two years later he accompanied Davy on a trip to the Continent. In 1825, when he was thirty-four years of age, he became Director of the Laboratory of the Royal Institution where he spent fifty-four years of his life.

    Faraday’s interest soon shifted from chemistry to electricity and magnetism, to which he devoted the rest of his active life. Important but puzzling work in this field had been previously accomplished by Oersted, Ampere, and Wollaston. Faraday cleared away the difficulties which they had left unsolved and by 1841 had succeeded in the task of induction of the electric current. Four years later a second and equally brilliant epoch in his career opened when he discovered the effect of magnetism on polarized light. His earlier discoveries have led to the infinite number of practical applications by means of which electricity has lightened the burdens and increased the opportunities of modern life. His later discoveries have thus far been less prolific of practical results. What difference did this make to Faraday? Not the least. At no period of his unmatched career was he interested in utility. He was absorbed in disentangling the riddles of the universe, at first chemical riddles, in later periods, physical riddles. As far as he cared, the question of utility was never raised. Any suspicion of utility would have restricted his restless curiosity. In the end, utility resulted, but it was never a criterion to which his ceaseless experimentation could be subjected.

    […]

    I am not for a moment suggesting that everything that goes on in laboratories will ultimately turn to some unexpected practical use or that an ultimate practical use is its actual justification. Much more am I pleading for the abolition of the word “use,” and for the freeing of the human spirit. To be sure, we shall thus free some harmless cranks. To be sure, we shall thus waste some precious dollars. But what is infinitely more important is that we shall be striking the shackles off the human mind and setting it free for the adventures which in our own day have, on the one hand, taken Hale and Rutherford and Einstein and their peers millions upon millions of miles into the uttermost realms of space and, on the other, loosed the boundless energy imprisoned in the atom. What Rutherford and others like Bohr and Millikan have done out of sheer curiosity in the effort to understand the construction of the atom has released forces which may transform human life; but this ultimate and unforeseen and unpredictable practical result is not offered as a justification for Rutherford or Einstein or Millikan or Bohr or any of their peers. Let them alone. No educational administrator can possibly direct the channels in which these or other men shall work. The waste, I admit again, looks prodigious. It is not really so. All the waste that could be summed up in developing the science of bacteriology is as nothing compared to the advantages which have accrued from the discoveries of Pasteur, Koch, Ehrlich, Theobald Smith, and scores of others — advantages that could never have accrued if the idea of possible use had permeated their minds. These great artists — for such are scientists and bacteriologists — disseminated the spirit which prevailed in laboratories in which they were simply following the line of their own natural curiosity.

    I bet things haven’t changed a bit almost 100 years later.

  • Pessimism as a risk management tool

    M. Pattabiraman talking about how he uses pessimism as a risk management tool.

    A frequent criticism by those who encounter freefincal articles or videos is, “This guy is very pessimistic about equity investing”. I also often get asked why that is the case. Please allow me to explain.

    There is a difference between lack of faith and pessimism. If I lack belief in equity, I will not go near it. Pessimism does not preclude participation. It only serves as a risk management mechanism. Pessimism prevents faith from turning into blind faith.

    I use this strategy while managing projects. Being pessimistic helps me put a plan in place for when things go sideways.

  • Thorium

    Snigdha Sharma and Rahel Philipose talking about how thorium can help meet India’s energy needs in the Daybreak podcast.

    […] Homi Bhabha designed India’s long-term nuclear plan to work in a way that suits the country best. Here’s how he puts the idea behind the plan in his own words.

    The total reserves of thorium in India amount to over 500,000 tonnes in the readily extractable form, while the known reserves of uranium are less than a tenth of this. The aim of a long-range atomic power program in India must therefore be to base the nuclear power generation as soon as possible on thorium rather than uranium. And the reason why he came up with a three-phase plan is because, as Kalam explains in his quote, thorium by itself is not fissile.

    […] A fissile material is one that will undergo fission in a nuclear reactor. Fission is essentially a process where the atoms of a material split, and that splitting is what releases enormous amounts of energy.

    Materials like uranium-235 and plutonium-239 are fissile, which is what makes them usable as reactor fuel directly. Thorium, on the other hand, is not fissile. Putting it in a reactor by itself does absolutely nothing.

    What it is, though, is fertile. And that pretty much means exactly what you might think. It can be converted into a fissile material when it’s placed in or around a reactor that’s running on something else.

    Then it can produce fuel that can then be used in the next generation of reactors. And that’s where the different stages come in. The first stage involves using pressurized heavy water reactors, or PHWRs, to generate electricity by burning natural uranium.

    The by-product of this process is a spent fuel that contains plutonium-239. This stage is actually fully operational right now, with India running several of these reactors in Rajasthan, Karnataka, Maharashtra and Tamil Nadu. Now in stage 2, the plutonium by-product from stage 1 fuels a different kind of a reactor.

    It’s called a fast-breeder reactor, which essentially breeds and creates more fissile material. During this stage, thorium is converted into uranium, and that is the uranium that can finally undergo fission in the last stage. And where are we with stage 2?

    Well, there’s a single prototype fast-breeder reactor in Kalpakkam. It’s been in development for decades, and it’s the only stage 2 reactor we currently have. And it’s also supposed to be completed only by September this year, which brings us to the third and final stage.

    This one involves yet another type of reactor, called an advanced heavy-water reactor. These are designed to run mainly on thorium, and once the process has begun, these reactors are designed to be self-sustaining. Which means the thorium in this reactor can basically just keep breeding more and more uranium.

    But here’s the thing, we are still a long, long way off from stage 3. India has been leading the research on thorium-based nuclear power globally, and the kind of reactor that stage 3 needs has no precedent anywhere in the world. Mainly because all of this is a complex and expensive process, and to get to stage 3, you require a fully operational stage 2 that has been running for years and which has created the amount of uranium necessary to kickstart stage 3.

    So while other countries found using uranium and plutonium easier and cheaper, thorium dropped as a priority for them. But India doesn’t have that same option. Like I mentioned earlier, the uranium reserves we have are miniscule, and we just can’t keep importing it forever.

    For a long-term solution, thorium is the only option that makes sense. Because the Indian nuclear establishment has estimated that the country could produce 500 gigawatts of electricity for at least 4 centuries. And that’s just by using the country’s easily extractable thorium reserves.

    500 gigawatts. 4 centuries. Wow. For context, India’s installed electricity generation capacity in 2025-26 was 524 gigawatts. Do listen to the entire podcast, because it is not as straightforward as it appears.
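    The headline numbers can be sanity-checked with quick arithmetic. This sketch assumes, purely for illustration, that the 500 gigawatts run continuously at full output for the whole four centuries (a 100% capacity factor, which no real grid achieves):

```python
# Back-of-the-envelope energy from "500 gigawatts for 4 centuries".
POWER_GW = 500
YEARS = 400
HOURS_PER_YEAR = 8760  # 365 days * 24 hours, ignoring leap days

annual_twh = POWER_GW * HOURS_PER_YEAR / 1000  # GWh -> TWh
total_twh = annual_twh * YEARS

print(f"{annual_twh:,.0f} TWh per year")  # 4,380 TWh per year
print(f"{total_twh:,.0f} TWh in total")   # 1,752,000 TWh in total
```

    For scale, the entire world currently generates on the order of 30,000 TWh of electricity per year, which gives a sense of how large the claim is.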

  • Left behind

    Terence Eden talking about how it is ok to be a little behind the curve in all the AI excitement.

    Many years ago, someone tried to get me into cryptocurrencies. “They’re the future of money!” they said. I replied saying that I’d rather wait until they were more useful, less volatile, easier to use, and utterly reliable.

    “You don’t want to get left behind, do you?” they countered.

    That struck me as a bizarre sentiment. What is there to be left behind from? If BitCoin (or whatever) is going to liberate us all from economic drudgery, what’s the point of “getting in early”? It’ll still be there tomorrow and I can join the journey whenever it is sensible for me.

    Part of the crypto grift was telling people to “Have Fun Staying Poor”. That weaponisation of FOMO was an insidious way to get people to drop their scepticism.

    I feel the same way about the current crop of AI tools. I’ve tried a bunch of them. Some are good. Most are a bit shit. Few are useful to me as they are now. I’m utterly content to wait until their hype has been realised. Why should I invest in learning the equivalent of WordStar for DOS when Google Docs is coming any-day-now?