• 10x productivity

    Colton Voege arguing that AI is not making software engineers 10x as productive.

    10x productivity means ten times the outcomes, not ten times the lines of code. This means what you used to ship in a quarter you now ship in a week and a half. These numbers should make even the truest AI believer pause. The product ideation, story point negotiation, bugfixing, code review, waiting for deployments, testing, and QA that go into what was traditionally 3 months of work is now getting done in 7 work days? For that to happen, each and every one of these bottlenecks also has to have seen 10x productivity gains.
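
A quick back-of-the-envelope check of the compression being claimed (my own arithmetic, not from the article, assuming a 5-day work week and a 13-week quarter):

```python
# A "quarter of work" compressed 10x, in work days.
WORK_DAYS_PER_WEEK = 5
WEEKS_PER_QUARTER = 13

quarter_in_work_days = WEEKS_PER_QUARTER * WORK_DAYS_PER_WEEK  # 65 work days
compressed = quarter_in_work_days / 10                         # 6.5 work days

print(quarter_in_work_days)  # 65
print(compressed)            # 6.5, i.e. roughly a week and a half
```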

    Any software engineer who has worked on actual code in an actual company knows this isn’t possible.

    AI is making coding—which is a small portion of what software engineers do—10x more productive. And that too only sometimes. Colton Voege touches upon quite a few other topics. A worthwhile read.

    Filed under
  • Risks for parents who have achieved FIRE

    M. Pattabiraman talking about risks for parents who have achieved FIRE i.e. Financial Independence, Retire Early.

    We learned work ethic by observing our parents’ struggle — watching them juggle job or housework or both, watching them stress over monthly bills, witnessing the family make sacrifices for long-term goals. These experiences, while challenging, instilled us with crucial life lessons about effort, delayed gratification, and the value of money.

    Today’s FIRE parents like me present their children with a dramatically different reality. Instead of 6 AM commutes and weekend overtime, kids see parents working flexible schedules, pursuing passion projects, and taking spontaneous vacations. While this lifestyle represents the ultimate financial success, it may inadvertently communicate that money comes easily and life should be perpetually comfortable.

    Filed under
  • Risks in passive investing

    Dan Amoss talking about the incoming risks to passive investments.

    These funds don’t try to beat the market. They are the market. They buy stocks based on their size in the index, no questions asked.

    On the surface, this passive revolution appears brilliant. Passive funds are cheap, with management fees close to zero. They have performed well in the past. And they require no effort.

    But like so many financial innovations that seem too good to be true, this one comes with hidden risks that few investors understand.

    According to Mike Green, chief strategist at Simplify Asset Management, this passive revolution has created unintended side effects that could end in disaster.

    Green calls passive investing the “giant mindless robot.”

    Why?

    Because when new money flows into index funds, it doesn’t sit on the sidelines. This money flow is not completely isolated from the index it tracks. Rather, it buys stocks immediately. And not just any stocks. It allocates the largest share of incremental purchases to the largest stocks (like Apple, Microsoft, Nvidia, and Amazon) because those names dominate the index.

    This creates what economists call a “price-insensitive buyer” – the most dangerous type of market participant. The money flows in, the big stocks get bigger, and the index rises. That rise draws in more money, which buys more shares, pushing prices even higher.

    It’s momentum on autopilot.
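
To make the mechanism concrete, here is a minimal sketch of cap-weighted inflow allocation (my own illustration, not from the article; the tickers and dollar figures are made up). Each inflow is split in proportion to market cap, with no price check, and the purchases themselves push the big names further ahead in absolute terms:

```python
def allocate_inflow(market_caps: dict, inflow: float) -> dict:
    """Split an inflow across stocks in proportion to market cap,
    with no regard to price or valuation (a 'price-insensitive buyer')."""
    total = sum(market_caps.values())
    return {name: inflow * cap / total for name, cap in market_caps.items()}

# Hypothetical market caps in $bn.
caps = {"MegaCapA": 3000.0, "MegaCapB": 2500.0, "SmallCap": 50.0}

for month in range(3):
    buys = allocate_inflow(caps, inflow=100.0)  # $100bn of new index money
    # Simplification: assume purchases lift market caps one-for-one.
    for name, amount in buys.items():
        caps[name] += amount

print(caps)  # the absolute gap between the largest and smallest names has widened
```

Because the allocation is proportional, the relative weights never shrink, and every round of inflows widens the absolute gap between the giants and the rest. That is the feedback loop Green is describing.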

    Here’s where Goodhart’s Law reveals the fundamental problem. The index has become the target of trillions of dollars in retirement accounts, pensions, and ETF flows. It’s no longer as good a measure of economic strength as it once was!

    The post is written from a US perspective, but I believe something similar will play out in India over the coming decades.

    Filed under
  • Outcome bias

    Vishal Khandelwal explaining outcome bias.

    Outcome Bias, which leads us to judge the quality of a decision based on its result, instead of the thought or process that went into it.

    So, as long as the outcome is favourable, we assume the decision was good. When it turns out badly, we blame the decision, even if it made perfect sense at the time.

    The real problem is that outcome bias not only distorts our view of the past, but that it shapes our future decisions. If a bad decision leads to a good result, we often reinforce it. We do it again. Worse, we up the stakes. It becomes a habit. And like my friend, we trust the pattern until it breaks. When that happens in investing, it may lead to financial ruin.

    Outcome bias also leads us to punish good behaviour unfairly. Imagine someone who stuck to their asset allocation plan, avoided chasing hot stocks, and rebalanced regularly, but ended up underperforming in a year when speculative bets did well. That person might feel foolish, even though they followed a sound process.

    The irony is that the more disciplined your process, the more often you’ll look wrong in the short term.

    Filed under
  • Credit card and vibe coding

    Steve Krouse sharing an analogy that vibe coding is like giving a child a credit card. The child gets instant gratification, but at the end of the month you need to pay the bill.

    The worst possible situation is to have a non-programmer vibe code a large project that they intend to maintain. This would be the equivalent of giving a credit card to a child without first explaining the concept of debt.

    As you can imagine, the first phase is ecstatic. I can wave this little piece of plastic in stores and take whatever I want!

    Which is a lot like AI can build anything now! Nobody needs to learn how to code! Look at what it just made for me!

    But if you wait a month, you’ll get the credit card bill. Did I actually need to buy all those things? How will I get myself out of this hole?

    It’s similar for the vibe coder. My code is broken. What do all these files and folders even do? How will I ever get this fixed? Can I get a refund for the $400 I spent vibe coding?

    If you don’t understand the code, your only recourse is to ask AI to fix it for you, which is like paying off credit card debt with another credit card.

    I saw this post on Hacker News and there was this comment that caught my eye.

    Non-technical or junior people developed and deployed applications, emboldened by the relative ease of Microsoft Access and Excel. There were all kinds of limitations, scaling problems, and maintenance nightmares. But there were a lot of upsides too, and it made the “professionals” up their game to obviate the need for such adhoc and unsanctioned developments.

    Come to think of it, the exact same thing happened when the PC became popular. Mainframe people were aghast at all the horrible unprofessional mess that the PC people were creating.

    This in turn reminded me of the quote from Micha Kaufman.

    You must understand that what was once considered ‘easy tasks’ will no longer exist; what was considered ‘hard tasks’ will be the new easy, and what was considered ‘impossible tasks’ will be the new hard.

    These historical perspectives and statements drive me to a conclusion—vibe coding is here to stay. We will have people on both ends of the spectrum. Some folks will rack up huge credit card debt and go bankrupt. Others will use the credit card wisely and travel for free on the accumulated reward points.

    Filed under
  • Don’t forget the boring stuff

    Tim Harford explains why you shouldn’t forget the boring stuff. Some pretty good examples.

    …smooth, successful operations are uninteresting, and uninteresting matters tend to be neglected. Eventually they stop working well, at which point they become interesting again. 

    This is certainly true of the AC waveform. It seems boring because it has felt like a solved problem. Yet, as with low inflation or herd immunity from measles, if we allow the foundations of a success story to be eaten away, we find that the problem isn’t quite as thoroughly solved as we assumed.

    Success leads to boredom. Boredom leads to neglect. Neglect leads to failure. Failure is no longer boring. But if we don’t show more interest in the successful systems we have built, they may suddenly become far too interesting for comfort. By the time these boring topics start seeming interesting, it’s too late.

    Filed under
  • Six years with Franklin India Mid Cap Fund (erstwhile Franklin India Prima Fund)

    My investment in Franklin India Prima Fund continues into its sixth year. Apart from the fund house renaming it from Franklin India Prima Fund to Franklin India Mid Cap Fund and me stopping my SIP in Jan’25, there were no other significant events over the last year. I plan to continue holding this fund for the long term and will restart my SIPs once my current financial commitments are taken care of.

    Figure 1

    Franklin India Mid Cap Fund continues to underperform the Nifty Midcap 50, although over the last 1½ years the gap has reduced a bit (Figure 2). The Trump tariffs are clearly visible in Figure 2: a steep drop in profit starting from Jan’25, followed by a recovery from Mar’25.

    (more…)
    Filed under
  • Almost right, but not quite

    The results of Stack Overflow Developer Survey 2025 are in.

    No need to bury the lede: more developers are using AI tools, but their trust in those tools is falling.

    And why is the trust falling?

    The number-one frustration, cited by 45% of respondents, is dealing with “AI solutions that are almost right, but not quite,” which often makes debugging more time-consuming. In fact, 66% of developers say they are spending more time fixing “almost-right” AI-generated code. When the code gets complicated and the stakes are high, developers turn to people. An overwhelming 75% said they would still ask another person for help when they don’t trust AI’s answers.

    Filed under
  • Zero sum thinking

    Alex Tabarrok explaining zero sum thinking.

    Zero sum thinking fuels support for trade protection: if other countries gain, we must be losing. It drives opposition to immigration: if immigrants benefit, natives must suffer. And it even helps explain hostility toward universities and the desire to cut science funding. For the zero-sum thinker, there’s no such thing as a public good or even a shared national interest—only “us” versus “them.” In this framework, funding top universities isn’t investing in cancer research; it’s enriching elites at everyone else’s expense. Any claim to broader benefit is seen as a smokescreen for redistributing status, power, and money to “them.”

    Especially relevant right now.

    Filed under
  • Jagged intelligence

    Andrej Karpathy explaining what jagged intelligence is in AI along with some examples.

    Jagged Intelligence. Some things work extremely well (by human standards) while some things fail catastrophically (again by human standards), and it’s not always obvious which is which, though you can develop a bit of intuition over time. Different from humans, where a lot of knowledge and problem solving capabilities are all highly correlated and improve linearly all together, from birth to adulthood.

    Personally I think these are not fundamental issues. They demand more work across the stack, including not just scaling. The big one I think is the present lack of “cognitive self-knowledge”, which requires more sophisticated approaches in model post-training instead of the naive “imitate human labelers and make it big” solutions that have mostly gotten us this far.

    It’s from a year ago, and some of those jags have been smoothed out.

    Filed under