• Don’t forget the boring stuff

    Tim Harford explains why you shouldn’t forget the boring stuff. Some pretty good examples.

    …smooth, successful operations are uninteresting, and uninteresting matters tend to be neglected. Eventually they stop working well, at which point they become interesting again. 

    This is certainly true of the AC waveform. It seems boring because it has felt like a solved problem. Yet, as with low inflation or herd immunity from measles, if we allow the foundations of a success story to be eaten away, we find that the problem isn’t quite as thoroughly solved as we assumed.

    Success leads to boredom. Boredom leads to neglect. Neglect leads to failure. Failure is no longer boring. But if we don’t show more interest in the successful systems we have built, they may suddenly become far too interesting for comfort. By the time these boring topics start seeming interesting, it’s too late.

  • Six years with Franklin India Mid Cap Fund (erstwhile Franklin India Prima Fund)

    My investment in Franklin India Prima Fund continues into its sixth year. Apart from the fund house renaming it from Franklin India Prima Fund to Franklin India Mid Cap Fund, and me stopping my SIP in Jan’25, no other significant events have happened over the last year. I plan to hold this fund for the long term and will restart my SIPs once my current financial commitments are taken care of.

    Figure 1

    Franklin India Mid Cap Fund continues to underperform the Nifty Midcap 50, although over the last 1½ years the gap has narrowed a bit (Figure 2). The Trump tariffs are clearly visible in Figure 2: a steep drop in profit starting from Jan’25, followed by a recovery from Mar’25.

    (more…)
  • Almost right, but not quite

    The results of Stack Overflow Developer Survey 2025 are in.

    No need to bury the lede: more developers are using AI tools, but their trust in those tools is falling.

    And why is the trust falling?

    The number-one frustration, cited by 45% of respondents, is dealing with “AI solutions that are almost right, but not quite,” which often makes debugging more time-consuming. In fact, 66% of developers say they are spending more time fixing “almost-right” AI-generated code. When the code gets complicated and the stakes are high, developers turn to people. An overwhelming 75% said they would still ask another person for help when they don’t trust AI’s answers.

  • Zero sum thinking

    Alex Tabarrok explaining zero sum thinking.

    Zero sum thinking fuels support for trade protection: if other countries gain, we must be losing. It drives opposition to immigration: if immigrants benefit, natives must suffer. And it even helps explain hostility toward universities and the desire to cut science funding. For the zero-sum thinker, there’s no such thing as a public good or even a shared national interest—only “us” versus “them.” In this framework, funding top universities isn’t investing in cancer research; it’s enriching elites at everyone else’s expense. Any claim to broader benefit is seen as a smokescreen for redistributing status, power, and money to “them.”

    Especially relevant right now.

  • Jagged intelligence

    Andrej Karpathy explaining what jagged intelligence is in AI along with some examples.

    Jagged Intelligence. Some things work extremely well (by human standards) while some things fail catastrophically (again by human standards), and it’s not always obvious which is which, though you can develop a bit of intuition over time. Different from humans, where a lot of knowledge and problem solving capabilities are all highly correlated and improve linearly all together, from birth to adulthood.

    Personally I think these are not fundamental issues. They demand more work across the stack, including not just scaling. The big one I think is the present lack of “cognitive self-knowledge”, which requires more sophisticated approaches in model post-training instead of the naive “imitate human labelers and make it big” solutions that have mostly gotten us this far.

    It’s from a year ago, and some of those jags have been smoothed out.

  • Leverage arbitrage

    Tushar Dadlani talking about the growing gap between how fast some actors can change the world and how fast others can respond to those changes.

    …we’re experiencing massive “leverage arbitrage” where actors with higher-order leverage can extract value from systems faster than actors with lower-order leverage can maintain those systems. Google’s compensation revolution didn’t just raise tech salaries—it systematically destroyed entrepreneurial ecosystems globally by making employment more attractive than company-building. Social media platforms don’t just connect people—they reshape democratic discourse faster than democratic institutions can adapt. AI systems aren’t just tools—they’re deployed faster than we can develop frameworks for understanding their social implications.

    And this arbitrage is making us perpetual junior programmers.

  • Virtuosity in the world of AI

    Drew Breunig talking about virtuosity, and how quickly amazing new developments in the world of AI are becoming, meh.

    virtuosity can only be achieved when the audience can perceive the risks being taken by the performer.

    A DJ that walks on stage and hits play is not likely to be perceived as a virtuoso. While a pianist who is able to place their fingers perfectly among a minefield of clearly visible wrong keys is without question a virtuoso. I think this idea carries over to sports as well and can partially explain the decline of many previously popular sports and the rise of video game streaming. We watch the things that we have personally experienced as being difficult. That is essential context to appreciate a performance.

    Initially, many AI applications were, surprisingly, embraced as incredible performances. The images generated by DALLe were usually not more impressive than those of professional illustrators. They were instead incredibly impressive because they had been achieved by a computer program. The same goes for video generating AI demos; none of their video clips are aesthetic or narrative achievements. They are impressive because they were generated by software. But even here, the AI is not the virtuoso. The virtuoso are the teams and companies building these models.

    We’ve been able to watch this sheen come off very quickly. Generating an image from a chatbot is no longer very impressive to our friends. It is a novelty. And this half-life, the time it takes for a model’s output to become merely novel, is shortening with every release.

  • Proof of thought

    Alex Martsinovich talking about how writing has become incredibly cheap, and going on to discuss the AI etiquette we need to start following.

    For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity.

    Now, AI has made text very, very, very cheap. Not only text, in fact. Code, images, video. All kinds of media. We can’t rely on proof-of-thought anymore. Any text can be AI slop. If you read it, you’re injured in this war. You engaged and replied – you’re as good as dead. The dead internet is not just dead it’s poisoned. So what do we do?

  • We are all junior programmers now

    Generative AI is the fastest-moving technology I have seen to date. Something or the other is happening every single day, and it’s hard to keep track of the updates. Scott Werner extends this thought and theorises that, because of this, we are all junior programmers now.

    We’re all junior developers again. But not junior in the traditional sense, where senior is waiting after enough years pass. We’re junior in a permanent sense, where the technology evolves faster than expertise can accumulate.

    It’s like being a professional surfer on an ocean that keeps changing its physics. Just when you think you understand waves, they start moving sideways. Or backwards. Or turning into birds.

    This is either terrifying or liberating, depending on your relationship with control.

    Scott also touches upon other interesting aspects, like Time Dilation in the Age of AI.

  • Documentation

    Rachel Kroll talking about the benefits of documentation.

    I used to be on a team that was responsible for the care and feeding of a great many Linux boxes which together constituted the “web tier” for a giant social network. You know, the one with all of the cat pictures… and later the whole genocide thing and enabling fascism. Yeah, them.

    Anyway, given that we had a six-digit number of machines that was steadily climbing and people were always experimenting with stuff on them, with them, and under them, it was necessary to apply some balance to keep things from breaking too often. There was a fine line between “everything’s broken” and “it’s impossible to roll anything out so the business dies”.

    At some point, I realized that if I wrote a wiki page and documented the things that we were willing to support, I could wait about six months and then it would be like it had always been there. Enough people went through the revolving doors of that place such that six months’ worth of employee turnover was sufficient to make it look like a whole other company. All I had to do was write it, wait a bit, then start citing it when needed.

    Loved the reference of revolving doors.