• Five years as shareholder of Tata Consultancy Services

    When I started investing in TCS, I’d already held L&T Infotech—before its merger with Mindtree—as another IT services company in my portfolio. The rationale behind investing in TCS was twofold. First, I wanted to diversify within the IT services sector. With L&T Infotech being a Nifty Next 50 company, TCS was a good choice as a Nifty 50 company. Second, TCS was available at a discount because I started investing right in the middle of the COVID pandemic market crash.

    My hope was that TCS would outperform the index. Unfortunately, it’s been an underperformer for the last three years against both the Nifty 50 and the Nifty IT index.

    This trend isn’t unique to TCS—I’ve noticed it across my equity investments made during the COVID pandemic market crash. The first one to two years are great—I’m either beating or matching the index. And even if I’m not beating the index, the returns are still strong. I start daydreaming about making a fortune after 20 years, only to get sucker-punched in the subsequent years. Two years into my investment in TCS, I was at a 30% XIRR. Now, I’m at just 4.4% (Figure 1).

    Figure 1
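    For context, XIRR is the annualized discount rate that makes the net present value of all dated cash flows zero. Here is a minimal Python sketch of that calculation; the dates and amounts are made up for illustration and are not my actual transactions.

        from datetime import date

        # Hypothetical cash flows: negative = money invested, positive = current value.
        cashflows = [
            (date(2020, 4, 1), -10_000),  # bought during the COVID crash
            (date(2021, 4, 1), -5_000),   # a later top-up
            (date(2025, 4, 1), 17_000),   # what the holding is worth today
        ]

        def npv(rate):
            """Net present value of the cash flows at an annualized rate."""
            t0 = cashflows[0][0]
            return sum(cf / (1 + rate) ** ((d - t0).days / 365.0) for d, cf in cashflows)

        def xirr(lo=-0.99, hi=10.0, tol=1e-6):
            """Solve npv(rate) = 0 by bisection; npv falls as the rate rises here."""
            while hi - lo > tol:
                mid = (lo + hi) / 2
                lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
            return (lo + hi) / 2

        print(f"XIRR: {xirr():.1%}")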

    What’s even more surprising is that TCS has significantly underperformed the Nifty IT index. In fact, the Nifty IT index has even outpaced the Nifty 50. I was so surprised that I headed over to Google Finance (Figure 2) to double-check my calculations. Nope—Nifty IT has beaten both TCS and Nifty 50 over the last five years. It also goes to show my lack of knowledge of how Nifty indices work.

    Figure 2
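    Part of the mechanics I had missed: Nifty indices are weighted by free-float market capitalization, so a stock’s pull on the index depends on its float-adjusted size, not its headline market cap. A rough sketch of the weighting; every number below is invented, and the real methodology (base values, float factors, rebalancing rules) belongs to NSE.

        # Free-float market cap = price x shares outstanding x free-float factor.
        constituents = {
            "TCS":   {"price": 3500.0, "shares": 3.6e9, "free_float": 0.28},
            "INFY":  {"price": 1500.0, "shares": 4.1e9, "free_float": 0.85},
            "WIPRO": {"price":  250.0, "shares": 5.2e9, "free_float": 0.25},
        }

        ff_mcap = {
            name: c["price"] * c["shares"] * c["free_float"]
            for name, c in constituents.items()
        }
        total = sum(ff_mcap.values())
        for name, mcap in sorted(ff_mcap.items(), key=lambda kv: -kv[1]):
            print(f"{name}: index weight = {mcap / total:.1%}")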

    That said, I have to give TCS credit for two key points. First, I haven’t seen a negative XIRR on my investment yet. At a measly 4.4% XIRR, it’s still positive. But with the FII sell-off, I think I might finally see a negative return. Second, dividends (Figure 3) are another area where TCS does well. The special dividends for FY 2022-23 and FY 2024-25 have boosted my dividend yield at cost. In my portfolio, only ITC beats TCS in terms of dividend yield.

    Figure 3
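    For the record, dividend yield at cost is a simple ratio: dividends received per share divided by the average price paid per share, rather than the current market price. A toy calculation, with every number assumed:

        # All numbers are assumed for illustration.
        avg_cost_per_share = 2000.0   # average purchase price per share
        regular_dividend = 73.0       # regular dividends per share for the year
        special_dividend = 27.0       # one-off special dividend per share

        yield_at_cost = (regular_dividend + special_dividend) / avg_cost_per_share
        print(f"Dividend yield at cost: {yield_at_cost:.1%}")  # 5.0%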

  • Wrong, but useful

    Jason Cohen talking about how a theory can give accurate predictions yet fail to explain why.

    The Mayans’ conception of the Earth, moon, sun, planets, and stars, was as ludicrous as every other ancient civilization, yet their priests routinely predicted the timing of eclipses with impressive accuracy. The priests leveraged this accuracy as evidence that their religion was correct, and that their exalted position in society was justified.

    Their religion—and therefore their explanation of how the universe worked—is laughable to the modern reader: The Earth in the center (of course), with thirteen tiers of heaven whirling above and nine levels of underworld threatening from below. Eclipses are not caused by a physical object blocking the light of the sun, but rather spiritual beings temporarily consuming the sun or moon (Figure 1). Even the most fervently religious person today would classify these ideas as fanciful mythology, though the Mayans were no less certain of the veracity of their religion than modern-day humans are of theirs.

    Nevertheless, they were careful observers and meticulous calculators. They understood that eclipses happened roughly every 173 days, adjusted by a 405-month cycle and an additional smaller correction. They tracked these cycles and updated their model over the centuries, and as a result, their theory yielded accurate predictions, even though the theory’s explanation of why was entirely incorrect.
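    The quoted cycle lends itself to a toy version of the same trick: predicting eclipse windows from pure periodicity, with no explanation of why. The anchor date and the bare 173.31-day eclipse-season interval below are my assumptions; real tables, Mayan or modern, layer on many more corrections.

        from datetime import date, timedelta

        # Eclipse "seasons" recur roughly every 173.31 days; stepping that interval
        # from a known eclipse flags windows where the next ones can occur.
        ECLIPSE_SEASON_DAYS = 173.31

        anchor = date(2024, 4, 8)  # a recent total solar eclipse, used as the anchor
        for k in range(1, 7):
            window = anchor + timedelta(days=round(ECLIPSE_SEASON_DAYS * k))
            print(f"Eclipse window {k}: around {window}")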

  • Convenience trumps authority

    Dror Poleg succinctly explaining how convenience trumps authority.

    At my office building, there is a bathroom. Every other week, the toilet gets clogged and the landlord has to call in a plumber. “Someone flushed wet wipes,” says the plumber. “You should tell people to not flush anything other than toilet paper.”

    At first, the landlord put up a sign that said “Please do not flush wet wipes.” After every incident, the signs became more aggressive, using terms like “absolutely forbidden” and even “this is disgusting.” The landlord even tried to punish tenants by delaying the repair and keeping the toilet closed for days. 

    Nothing worked. The signs continue to get more aggressive, and the toilet continues to get occasionally clogged. 

    How do you prevent people from throwing wet wipes into the toilet? 

    There’s one strategy the landlord hasn’t tried yet: Put a little bin next to the toilet, so people could have somewhere else to throw their wipes.

  • Don’t give advice, share stories

    I read an interesting post by Jacob Kaplan-Moss about taking advice from someone who’s been in the tech industry far longer than you and how to give advice as a silver sage yourself.

    …advice from people with long careers on what worked for them when they were getting started is unlikely to be advice that works today. The tech industry of 15 or 20 years ago was, again, dramatically different from tech today. I used to joke that if you knew which way was up on a keyboard, you could get a job in tech. That joke makes no sense today: breaking into the field is now very difficult, and getting harder every year.

    So even when people with long careers try to give newcomers advice, while it may be well-intentioned, it’s likely to be useless. If you’re new to tech, ignoring advice from old heads is probably a good idea. And if you’re a fellow old head who’d like to help people new to the industry, do it by paying close attention to what works for them and pass on that advice.

    I think this applies to everyone, irrespective of the field they work in. It’s what we commonly call the generation gap. The young ones will always roll their eyes the moment a silver sage starts giving advice. So what can the silver sage offer? Stories.

    After reading the post, I coincidentally listened to the Paisa Vaisa podcast by Anupam Gupta and Aashish Somaiyaa on How Investor Behavior is Reshaping Mutual Funds. Below are the transcript excerpts that caught my attention.

    The people who are young today are watching all this on OTT and reading proper literature in books; we used to read about it in the tabloids. Oh my god, look at Harshad Mehta and his Lexus and see what has happened. So for us it was really vivid, I mean we knew, boss, this is what a scam is. For these people, it’s a story. [Link]

    It’s a different world. I mean, look at sportsmen, look at cricketers: the disposition, the attitude of these young people of ours has, for the lack of a better word, totally changed. And rightfully so. They have confidence because their PE multiple is higher. They have more visibility into the future. Our PE multiple was low; we had no visibility into the future at all. [Link]

    The younger ones don’t need advice. They need to hear the stories and vivid experiences that the silver sage went through, and how those experiences shaped what the silver sage is today.

  • AI Einstein

    Thomas Wolf arguing why we won’t have an AI Einstein—at least not with AI in its current form.

    …the skill to ask the right questions and to challenge even what one has learned. A real science breakthrough is Copernicus proposing, against all the knowledge of his days (in ML terms we would say “despite all his training dataset”), that the earth may orbit the sun rather than the other way around.

    To create an Einstein in a data center, we don’t just need a system that knows all the answers, but rather one that can ask questions nobody else has thought of or dared to ask. One that writes ‘What if everyone is wrong about this?’ when all textbooks, experts, and common knowledge suggest otherwise.

    Wonderful insight. I am pretty sure that if AI had been available before we discovered that Earth orbits the Sun—and not the other way around—AI wouldn’t have dared to question that knowledge.

  • Knowledge collapse

    A long but worthwhile paper published by Andrew J. Peterson on AI and the Problem of Knowledge Collapse.

    We identify conditions under which AI, by reducing the cost of access to certain modes of knowledge, can paradoxically harm public understanding. While large language models are trained on vast amounts of diverse data, they naturally generate output towards the ‘center’ of the distribution. This is generally useful, but widespread reliance on recursive AI systems could lead to a process we define as “knowledge collapse”, and argue this could harm innovation and the richness of human understanding and culture. However, unlike AI models that cannot choose what data they are trained on, humans may strategically seek out diverse forms of knowledge if they perceive them to be worthwhile. 

    The author analyzes how our dependence on AI could limit our knowledge to a narrow subset of views—the views on which the AI was trained—and eventually make us forget the “long-tail” ideas. We see this happening right now when we ask DeepSeek, the Chinese AI model, about Tiananmen Square or Taiwan.

    With increasing integration of LLM-based systems, certain popular sources or beliefs which were common in the training data may come to be reinforced in the public mindset (and within the training data), while other “long-tail” ideas are neglected and eventually forgotten. 

    Such a process might be reinforced by an ‘echo chamber’ or information cascade effect, in which repeated exposure to this restricted set of information leads individuals to believe that the neglected, unobserved tails of knowledge are of little value.

    We have all seen the effect of the echo chamber on social media. An echo chamber of AI responses would only further solidify people’s beliefs—whether right or wrong.
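    Here is a toy simulation of that narrowing (my illustration, not the paper’s): if each generation of a model is trained only on the most typical outputs of the previous one, the tails vanish and the spread of ideas collapses toward the center.

        import random
        import statistics

        random.seed(42)
        ideas = [random.gauss(0, 1) for _ in range(10_000)]  # generation 0: diverse ideas

        for gen in range(1, 6):
            mu = statistics.mean(ideas)
            sigma = statistics.stdev(ideas)
            # Keep only "central" ideas (within one sigma), then resample from them.
            central = [x for x in ideas if abs(x - mu) <= sigma]
            ideas = [random.choice(central) for _ in range(10_000)]
            print(f"generation {gen}: spread of ideas = {statistics.stdev(ideas):.3f}")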

    But knowledge collapse has been happening throughout human evolution.

    …traditional hunter-gatherers could identify thousands of different plants and knew their medicinal usages, whereas most humans today only know a few dozen plants and whether they can be purchased in a grocery store. This could be seen as a more efficient form of specialization of information across individuals, but it might also impact our beliefs about the value of those species or of a walk through a forest, or influence scientific or policy-relevant judgements.

    This will be a long-term effect of AI, with implications that will become visible only after a decade or more.

  • Rebuilding a software ecosystem

    Gordon Brander talking about why you should not rebuild a software ecosystem. And if you are rebuilding one, you might be screwed.

    Software can be rebuilt, because software is a machine. But a software ecosystem is not a machine. It is a living system. When we attempt to rebuild the ecosystem, we’re making a category error. We’re confusing the software for the ecological process unfolding around it.

    You can’t rebuild an ecosystem, just like you can’t rebuild the Amazon rainforest. You can only grow with it, or bulldoze it and start over from zero.

    Maybe that’s why legacy modernisation projects are so complex: you are screwed, but you don’t know where.

  • Simon Willison shares how he uses LLMs to write code

    Simon Willison has published his experience on how he uses LLMs to write code. It is filled with great tips and nuggets of wisdom. Here are my favourites.

    If someone tells you that coding with LLMs is easy they are (probably unintentionally) misleading you. They may well have stumbled on to patterns that work, but those patterns do not come naturally to everyone.

    If you assume that this technology will implement your project perfectly without you needing to exercise any of your own skill you’ll quickly be disappointed.

    Instead, use them to augment your abilities. My current favorite mental model is to think of them as an over-confident pair programming assistant who’s lightning fast at looking things up, can churn out relevant examples at a moment’s notice and can execute on tedious tasks without complaint.

    A crucial characteristic of any model is its training cut-off date. This is the date at which the data they were trained on stopped being collected. For OpenAI’s models this is usually October of 2023. Anthropic and Gemini and other providers may have more recent dates.

    This is extremely important for code, because it influences what libraries they will be familiar with. If the library you are using had a major breaking change since October 2023, OpenAI models won’t know about it!

    I find LLMs respond extremely well to function signatures like the one I use here. I get to act as the function designer, the LLM does the work of building the body to my specification.
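    As a concrete (and entirely hypothetical) example of that pattern, this is the kind of skeleton I would hand over, leaving only the body for the model to write:

        def dividend_yield_at_cost(
            purchases: list[tuple[str, float, int]],  # (iso_date, price_per_share, qty)
            dividends_per_share: float,
        ) -> float:
            """Return dividend yield at cost as a fraction.

            Total dividends received divided by the total amount invested
            across all purchases.
            """
            ...  # the LLM fills in exactly this body, to the spec above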

    Your responsibility as a software developer is to deliver working systems. If you haven’t seen it run, it’s not a working system. You need to invest in strengthening those manual QA habits.

    it’s not about getting work done faster, it’s about being able to ship projects that I wouldn’t have been able to justify spending time on at all.

    And my favourite.

    LLMs amplify existing expertise

  • Buying the dip, in recession

    While the jury is still out on buying the dip, Nick Maggiulli explains why buying the dip—especially during a recession—might not be possible for everyone.

    If we assume that the market will eventually recover, then a decline in equity prices today allows young and “asset-light” investors to buy cheaper today and earn higher returns in the future.

    But the problem with this logic is that all else isn’t equal. Market crashes don’t happen in a vacuum. When asset prices decline, economic consequences typically follow. Workers lose their jobs or don’t get promoted. Hiring freezes up. People stop spending as much money. And this negative cycle feeds on itself.

    If you happen to be someone who keeps their high-paying job during such a time, then, yes, a market decline can be a buying opportunity. But this isn’t the case for everyone. In fact, the paper The Short- and Long-Term Career Effects of Graduating in a Recession suggests that those who start their career during a recession tend to see 5% lower lifetime earnings. As the authors state:

    A typical recession—a rise in unemployment rates by 5 percentage points in our context—implies an initial loss in earnings of about 9 percent that halves within 5 years, and finally fades to 0 by 10 years. For this time period, these reductions add up to a loss of about 5 percent of cumulated earnings.
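    The quoted numbers check out with simple arithmetic (my back-of-the-envelope, not the paper’s): a 9% hit that decays linearly to zero over ten years is about half its size at year five and averages out to roughly 5% of earnings over the window.

        # 9% initial loss, halving by year 5 and fading to 0 by year 10.
        losses = [9 * (1 - year / 10) for year in range(10)]  # % loss in years 0 through 9
        print(f"loss at year 5: {losses[5]:.1f}%")                  # 4.5%, about half of 9%
        print(f"average over the decade: {sum(losses) / 10:.1f}%")  # ~5.0%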

    I know what you might be thinking: “Yes, I lose 5% of my lifetime earnings, but I get to buy stocks at a 20%+ discount. How is that not a huge win?”

    There are a few problems with this thinking, each of which I will address in turn.

  • Services as Software

    Eurasia Review talking about the rise of Services as Software.

    The traditional Software-as-a-Service (SaaS) model disrupted enterprise Information Technology (IT) by replacing expensive, on-premise software solutions with cloud-based applications. Databases were maintained by the SaaS providers remotely, and the per-seat license model evolved rapidly to annuity payments and the rise of Annualised Recurring Revenue (ARR). SaaS has dominated the technology world for the better part of two decades. Today, AI is pushing the envelope by turning services built to be used by humans as ‘self-serve’ utilities into automatically-running software solutions that execute autonomously—a paradigm shift the venture capital world, in particular, has termed ‘Services as Software’.

    A little later in the article. 

    The AI-driven shift brings into question the traditional notion of availing an ‘expert service’. Software development, legal, and financial services are all coveted industries where workers are considered ‘experts’ delivering specialised services. The human role will undergo tremendous redefinition and will require calibrated re-skilling.
