• Risks in passive investing

    Dan Amoss talking about the looming risks of passive investing.

    These funds don’t try to beat the market. They are the market. They buy stocks based on their size in the index, no questions asked.

    On the surface, this passive revolution appears brilliant. Passive funds are cheap, with management fees close to zero. They have performed well in the past. And they require no effort.

    But like so many financial innovations that seem too good to be true, this one comes with hidden risks that few investors understand.

    According to Mike Green, chief strategist at Simplify Asset Management, this passive revolution has created unintended side effects that could end in disaster.

    Green calls passive investing the “giant mindless robot.”

    Why?

    Because when new money flows into index funds, it doesn’t sit on the sidelines waiting for attractive prices; it buys stocks immediately. And not just any stocks: it allocates the largest share of incremental purchases to the largest stocks (like Apple, Microsoft, Nvidia, and Amazon) because those names dominate the index.

    This creates what economists call a “price-insensitive buyer” – the most dangerous type of market participant. The money flows in, the big stocks get bigger, and the index rises. That rise draws in more money, which buys more shares, pushing prices even higher.

    It’s momentum on autopilot.
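    Green’s “mindless robot” can be sketched in a few lines. A minimal illustration of how a cap-weighted fund splits an incoming flow in proportion to size, with no price check anywhere (the market caps below are made up for the example, not real figures):

```python
# Hypothetical market caps, in billions -- illustrative numbers only.
market_caps = {
    "Apple": 3400,
    "Microsoft": 3100,
    "Nvidia": 3000,
    "Amazon": 1900,
    "SmallCo": 50,
}

def allocate_flow(caps, inflow):
    """Split an inflow across stocks in proportion to market cap,
    the way a cap-weighted index fund does. Note what is missing:
    no valuation check, no price sensitivity of any kind."""
    total = sum(caps.values())
    return {name: inflow * cap / total for name, cap in caps.items()}

purchases = allocate_flow(market_caps, inflow=1_000_000)
```

    The biggest names absorb the biggest purchases, and since that buying itself pushes their prices, and hence their index weights, higher, the next inflow tilts even further toward them. That is the feedback loop.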

    Here’s where Goodhart’s Law reveals the fundamental problem. The index has become the target of trillions of dollars in retirement accounts, pensions, and ETF flows. It’s no longer as good a measure of economic strength as it once was!

    The post is written from a US perspective, but I believe something similar will play out in India over the coming decades.

  • Outcome bias

    Vishal Khandelwal explaining outcome bias.

    Outcome Bias, which leads us to judge the quality of a decision based on its result, instead of the thought or process that went into it.

    So, as long as the outcome is favourable, we assume the decision was good. When it turns out badly, we blame the decision, even if it made perfect sense at the time.

    The real problem is that outcome bias not only distorts our view of the past, but that it shapes our future decisions. If a bad decision leads to a good result, we often reinforce it. We do it again. Worse, we up the stakes. It becomes a habit. And like my friend, we trust the pattern until it breaks. When that happens in investing, it may lead to financial ruin.

    Outcome bias also leads us to punish good behaviour unfairly. Imagine someone who stuck to their asset allocation plan, avoided chasing hot stocks, and rebalanced regularly, but ended up underperforming in a year when speculative bets did well. That person might feel foolish, even though they followed a sound process.

    The irony is that the more disciplined your process, the more often you’ll look wrong in the short term.

  • Credit card and vibe coding

    Steve Krouse sharing an analogy: vibe coding is like giving a child a credit card. The child gets instant gratification, but at the end of the month the bill comes due.

    The worst possible situation is to have a non-programmer vibe code a large project that they intend to maintain. This would be the equivalent of giving a credit card to a child without first explaining the concept of debt.

    As you can imagine, the first phase is ecstatic. I can wave this little piece of plastic in stores and take whatever I want!

    Which is a lot like AI can build anything now! Nobody needs to learn how to code! Look at what it just made for me!

    But if you wait a month, you’ll get the credit card bill. Did I actually need to buy all those things? How will I get myself out of this hole?

    It’s similar for the vibe coder. My code is broken. What do all these files and folders even do? How will I ever get this fixed? Can I get a refund for the $400 I spent vibe coding?

    If you don’t understand the code, your only recourse is to ask AI to fix it for you, which is like paying off credit card debt with another credit card.

    I saw this post on Hacker News and there was this comment that caught my eye.

    Non-technical or junior people developed and deployed applications, emboldened by the relative ease of Microsoft Access and Excel. There were all kinds of limitations, scaling problems, and maintenance nightmares. But there were a lot of upsides too, and it made the “professionals” up their game to obviate the need for such ad hoc and unsanctioned developments.

    Come to think of it, the exact same thing happened when the PC became popular. Mainframe people were aghast at all the horrible unprofessional mess that the PC people were creating.

    This in turn reminded me of the quote from Micha Kaufman.

    You must understand that what was once considered ‘easy tasks’ will no longer exist; what was considered ‘hard tasks’ will be the new easy, and what was considered ‘impossible tasks’ will be the new hard.

    These historical perspectives and statements drive me to a conclusion—vibe coding is here to stay. We will have people on both ends of the spectrum. Some folks will rack up huge credit card debt and go bankrupt. Others will use the credit card wisely and travel free with the accumulated reward points.

  • Don’t forget the boring stuff

    Tim Harford explains why you shouldn’t forget the boring stuff. Some pretty good examples.

    …smooth, successful operations are uninteresting, and uninteresting matters tend to be neglected. Eventually they stop working well, at which point they become interesting again. 

    This is certainly true of the AC waveform. It seems boring because it has felt like a solved problem. Yet, as with low inflation or herd immunity from measles, if we allow the foundations of a success story to be eaten away, we find that the problem isn’t quite as thoroughly solved as we assumed.

    Success leads to boredom. Boredom leads to neglect. Neglect leads to failure. Failure is no longer boring. But if we don’t show more interest in the successful systems we have built, they may suddenly become far too interesting for comfort. By the time these boring topics start seeming interesting, it’s too late.

  • Six years with Franklin India Mid Cap Fund (erstwhile Franklin India Prima Fund)

    My investment in Franklin India Prima Fund continues into its sixth year. Apart from the fund house renaming it from Franklin India Prima Fund to Franklin India Mid Cap Fund, and me stopping my SIP in Jan’25, there have been no other significant events over the last year. I plan to hold this fund for the long term and will restart my SIPs once my current financial commitments are taken care of.

    Figure 1

    Franklin India Mid Cap Fund continues to underperform the Nifty Midcap 50, although over the last 1½ years the gap has narrowed a bit (Figure 2). The Trump tariffs are very much visible in Figure 2: a steep drop in profit starting from Jan’25, followed by a recovery from Mar’25.

  • Almost right, but not quite

    The results of Stack Overflow Developer Survey 2025 are in.

    No need to bury the lede: more developers are using AI tools, but their trust in those tools is falling.

    And why is the trust falling?

    The number-one frustration, cited by 45% of respondents, is dealing with “AI solutions that are almost right, but not quite,” which often makes debugging more time-consuming. In fact, 66% of developers say they are spending more time fixing “almost-right” AI-generated code. When the code gets complicated and the stakes are high, developers turn to people. An overwhelming 75% said they would still ask another person for help when they don’t trust AI’s answers.

  • Zero sum thinking

    Alex Tabarrok explaining zero sum thinking.

    Zero sum thinking fuels support for trade protection: if other countries gain, we must be losing. It drives opposition to immigration: if immigrants benefit, natives must suffer. And it even helps explain hostility toward universities and the desire to cut science funding. For the zero-sum thinker, there’s no such thing as a public good or even a shared national interest—only “us” versus “them.” In this framework, funding top universities isn’t investing in cancer research; it’s enriching elites at everyone else’s expense. Any claim to broader benefit is seen as a smokescreen for redistributing status, power, and money to “them.”

    Especially relevant right now.

  • Jagged intelligence

    Andrej Karpathy explaining what jagged intelligence is in AI along with some examples.

    Jagged Intelligence. Some things work extremely well (by human standards) while some things fail catastrophically (again by human standards), and it’s not always obvious which is which, though you can develop a bit of intuition over time. Different from humans, where a lot of knowledge and problem solving capabilities are all highly correlated and improve linearly all together, from birth to adulthood.

    Personally I think these are not fundamental issues. They demand more work across the stack, including not just scaling. The big one I think is the present lack of “cognitive self-knowledge”, which requires more sophisticated approaches in model post-training instead of the naive “imitate human labelers and make it big” solutions that have mostly gotten us this far.

    It’s from a year ago, and some of those jags have been smoothed out.

  • Leverage arbitrage

    Tushar Dadlani talking about the growing gap between how fast some actors can change the world and how fast others can respond to those changes.

    …we’re experiencing massive “leverage arbitrage” where actors with higher-order leverage can extract value from systems faster than actors with lower-order leverage can maintain those systems. Google’s compensation revolution didn’t just raise tech salaries—it systematically destroyed entrepreneurial ecosystems globally by making employment more attractive than company-building. Social media platforms don’t just connect people—they reshape democratic discourse faster than democratic institutions can adapt. AI systems aren’t just tools—they’re deployed faster than we can develop frameworks for understanding their social implications.

    And this arbitrage is making us perpetual junior programmers.

  • Virtuosity in the world of AI

    Drew Breunig talking about virtuosity, and how quickly amazing new developments in the world of AI are becoming, meh.

    virtuosity can only be achieved when the audience can perceive the risks being taken by the performer.

    A DJ that walks on stage and hits play is not likely to be perceived as a virtuoso. While a pianist who is able to place their fingers perfectly among a minefield of clearly visible wrong keys is without question a virtuoso. I think this idea carries over to sports as well and can partially explain the decline of many previously popular sports and the rise of video game streaming. We watch the things that we have personally experienced as being difficult. That is essential context to appreciate a performance.

    Initially, many AI applications were, surprisingly, embraced as incredible performances. The images generated by DALL·E were usually not more impressive than those of professional illustrators. They were instead incredibly impressive because they had been achieved by a computer program. The same goes for video-generating AI demos; none of their video clips are aesthetic or narrative achievements. They are impressive because they were generated by software. But even here, the AI is not the virtuoso. The virtuosos are the teams and companies building these models.

    We’ve been able to watch this sheen come off very quickly. Generating an image from a chatbot is no longer very impressive to our friends. It is a novelty. And this half-life, the time it takes for a model’s output to become merely novel, is shortening with every release.
