• Live in chaos, holiday in peace

    Saurabh Mukherjea talking to Nandita Rajhansa about his book Behold the Leviathan: The Unusual Rise of Modern India, and dropping a suggestion on how to live your life.

    A lot of people in our country like sukh shanti (peace and quiet). I don’t blame them. If you live in a city like Bombay or Bangalore, where there is perpetual churn, perpetual traffic, a constant headache, you like sukh shanti.

    You should holiday in a place with sukh shanti. Your karma bhoomi (your place of work) should be in chaos; your holiday should be in sukh shanti. Don’t mix the two. For Christ’s sake, don’t try to set up your karma bhoomi in a country with sukh shanti, because then you will get to watch ruin from up close. Live in chaos, holiday in peace, and that way you will have a great life, you will compound your wealth and have a fulfilling life. India is the definition of chaos and that’s why India rocks.

    This made me chuckle. Of course you need to survive the pollution and traffic first. 

  • NSA

    Bruce Schneier talking about some inner workings of the NSA in the context of the Signal chat leak.

    When the NSA discovers a technological vulnerability in a service such as Signal (or buys one on the thriving clandestine vulnerability market), does it exploit it in secret, or reveal it so that it can be fixed? Since at least 2014, a US government interagency “equities” process has been used to decide whether it is in the national interest to take advantage of a particular security flaw, or to fix it. The trade-offs are often complicated and hard.

    Waltz—along with Vice President J.D. Vance, Defense Secretary Pete Hegseth, and the other officials in the Signal group—have just made the trade-offs much tougher to resolve. Signal is both widely available and widely used. Smaller governments that can’t afford their own military-grade encryption use it. Journalists, human rights workers, persecuted minorities, dissidents, corporate executives, and criminals around the world use it. Many of these populations are of great interest to the NSA.

    At the same time, as we have now discovered, the app is being used for operational US military traffic. So, what does the NSA do if it finds a security flaw in Signal?

    Previously, it might have preferred to keep the flaw quiet and use it to listen to adversaries. Now, if the agency does that, it risks someone else finding the same vulnerability and using it against the US government. And if it was later disclosed that the NSA could have fixed the problem and didn’t, then the results might be catastrophic for the agency.

    It is really fascinating to read how these clandestine agencies work. Fodder for a spy novel.

  • Programming in natural language

    Edsger W. Dijkstra explaining why using natural language for programming is a bad idea.

    In order to make machines significantly easier to use, it has been proposed (to try) to design machines that we could instruct in our native tongues. This would, admittedly, make the machines much more complicated, but, it was argued, by letting the machine carry a larger share of the burden, life would become easier for us. It sounds sensible provided you blame the obligation to use a formal symbolism as the source of your difficulties. But is the argument valid? I doubt.

    We know in the meantime that the choice of an interface is not just a division of (a fixed amount of) labour, because the work involved in co-operating and communicating across the interface has to be added. We know in the meantime —from sobering experience, I may add— that a change of interface can easily increase at both sides of the fence the amount of work to be done (even drastically so). Hence the increased preference for what are now called “narrow interfaces”. Therefore, although changing to communication between machine and man conducted in the latter’s native tongue would greatly increase the machine’s burden, we have to challenge the assumption that this would simplify man’s life.

    Instead of regarding the obligation to use formal symbols as a burden, we should regard the convenience of using them as a privilege: thanks to them, school children can learn to do what in earlier days only genius could achieve.

    I saw this post on Hacker News and this comment sums up Dijkstra’s point succinctly.

    Natural language has embedded limitations that stem from our own mental limitations: the human mind sometimes thinks in terms that are too abstract or too specific, and misses important details or generalizations.

    As a programmer, I know first hand that the problems or even absurdities of some assignments only become apparent after one has begun to implement them as code, i.e. as strict symbolisms.

    Not to mention that it often takes more time to explain something accurately in natural language than it takes to just write the algorithm as code.
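    That last point is easy to demonstrate. Take an instruction as plain as “remove the duplicates from the list”: English leaves open which occurrence survives and whether order is preserved, while code has to pin down every one of those decisions. A small illustrative sketch of my own, not from the post:

```python
# "Remove the duplicates from the list" sounds unambiguous in English,
# but code forces every hidden decision into the open.

def dedupe_keep_first(items):
    """Keep the FIRST occurrence of each value, preserving order."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

def dedupe_keep_last(items):
    """Keep the LAST occurrence of each value, preserving order."""
    return list(reversed(dedupe_keep_first(list(reversed(items)))))

data = [3, 1, 3, 2, 1]
print(dedupe_keep_first(data))  # [3, 1, 2]
print(dedupe_keep_last(data))   # [3, 2, 1]
```

    Both functions are a correct reading of the same English sentence, yet they return different lists. The symbolism is what settles the argument.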

  • Learn to code

    Kyrylo Silin reflecting on the importance of learning to code.

    AI is the new shiny toy everyone wants to play with. And to be honest, it’s impressive. The problem with AI is that with every year, it gets better and better. Wait what? How’s that a problem? Well, with every new year you lose control.

    The more you rely on AI, the less you understand what you’re doing. The less you understand, the more AI vendors can control you. And the more control they have, the more they can charge you. It’s a vicious cycle.

    This shift was inevitable. Humanity must adapt to this new reality. AI isn’t going away, and we need to learn how to use it to our advantage.

    The large language models (LLMs) created by tech giants have absorbed decades of knowledge — our knowledge. They’ve been trained on our work.

    Now they’re selling it back to us and telling us we only need to learn English to code. This is a lie. As a new programmer, I don’t want you to fall for it. There’s no corner-cutting. Get your shit together and learn to code.

    Ha!

    While I agree that programmers need to learn to code, the unfortunate reality is that AI vendors will charge us for code written by models trained on our own work. The charge is for completing a function in a matter of minutes, sometimes even seconds. The charge is for speed and efficiency.

  • Five years as shareholder of Tata Consultancy Services

    When I started investing in TCS, I’d already held L&T Infotech—before its merger with Mindtree—as another IT services company in my portfolio. The rationale behind investing in TCS was twofold. First, I wanted to diversify within the IT services sector. With L&T Infotech being a Nifty Next 50 company, TCS was a good choice as a Nifty 50 company. Second, TCS was available at a discount because I started investing right in the middle of the COVID pandemic market crash.

    My hope was that TCS would outperform the index. Unfortunately, it’s been an underperformer for the last three years against both the Nifty 50 and the Nifty IT index.

    This trend isn’t unique to TCS—I’ve noticed it across my equity investments made during the COVID pandemic market crash. The first one to two years are great—I’m either beating or matching the index. And even if I’m not beating the index, the returns are still strong. I start daydreaming about making a fortune after 20 years, only to get sucker-punched in the subsequent years. Two years into my investment in TCS, I was at a 30% XIRR. Now, I’m at just 4.4% (Figure 1).

    Figure 1
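    For anyone unfamiliar with XIRR: it is the annualized rate that discounts all dated cashflows to zero, which is why it works for investments made in instalments. A rough sketch of the computation using bisection, with purely hypothetical cashflows (not my actual transactions):

```python
from datetime import date

def xirr(cashflows, lo=-0.99, hi=10.0):
    """Annualized internal rate of return for dated cashflows, via bisection.
    cashflows: list of (date, amount); outflows negative, inflows positive."""
    t0 = min(d for d, _ in cashflows)

    def npv(rate):
        # Discount each flow by the years elapsed since the first flow.
        return sum(a / (1 + rate) ** ((d - t0).days / 365.0)
                   for d, a in cashflows)

    for _ in range(200):          # npv() is decreasing in rate, so bisect
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical staggered purchases, with today's market value as a
# terminal inflow (illustrative numbers only)
flows = [
    (date(2020, 4, 1), -10_000),
    (date(2020, 10, 1), -10_000),
    (date(2021, 4, 1), -10_000),
    (date(2025, 4, 1), 33_500),
]
print(f"XIRR: {xirr(flows):.2%}")
```

    The same idea underlies the 30% figure above collapsing to 4.4%: a late drop in the terminal value drags down a rate that is annualized over the whole holding period.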

    What’s even more surprising is that TCS has significantly underperformed the Nifty IT index. In fact, the Nifty IT index has even outpaced the Nifty 50. I was so surprised that I headed over to Google Finance (Figure 2) to double-check my calculations. Nope—Nifty IT has beaten both TCS and Nifty 50 over the last five years. It also goes to show my lack of knowledge of how Nifty indices work.

    Figure 2

    That said, I have to give TCS credit for two key points. First, I haven’t seen a negative XIRR on my investment yet. At a measly 4.4% XIRR, it’s still positive. But with the FII sell-off, I think I might finally see a negative return. Second, dividends (Figure 3) are another area where TCS does well. The special dividends for FY 2022-23 and FY 2024-25 have boosted my dividend yield on cost. In my portfolio, only ITC beats TCS in terms of dividend yield.

    Figure 3

  • Wrong, but useful

    Jason Cohen talking about how a theory can give accurate predictions, but fail on the explanation of why.

    The Mayans’ conception of the Earth, moon, sun, planets, and stars, was as ludicrous as every other ancient civilization, yet their priests routinely predicted the timing of eclipses with impressive accuracy. The priests leveraged this accuracy as evidence that their religion was correct, and that their exalted position in society was justified.

    Their religion—and therefore their explanation of how the universe worked—is laughable to the modern reader: The Earth in the center (of course), with thirteen tiers of heaven whirling above and nine levels of underworld threatening from below. Eclipses are not caused by a physical object blocking the light of the sun, but rather spiritual beings temporarily consuming the sun or moon (Figure 1). Even the most fervently religious person today would classify these ideas as fanciful mythology, though the Mayans were no less certain of the veracity of their religion than modern-day humans are of theirs.

    Nevertheless, they were careful observers and meticulous calculators. They understood that eclipses happened roughly every 173 days, adjusted by a 405-month cycle and additional smaller correction. They tracked these cycles and updated their model over the centuries, and as a result, their theory yielded accurate predictions, even though the theory’s explanation of why was entirely incorrect.
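    The predictive power of such a cycle is easy to sketch. The ~173-day figure in the quote corresponds to the interval between eclipse seasons (about 173.31 days on average), and given one observed eclipse, a purely empirical model can project when the next eclipse windows recur, with no theory of why. An illustrative toy, not the Mayans’ actual tables:

```python
from datetime import date, timedelta

# Half an "eclipse year": the ~173.31-day interval between eclipse seasons.
# A purely empirical cycle like this predicts WHEN eclipse windows recur
# while saying nothing about WHY eclipses happen.
ECLIPSE_SEASON = 173.31  # mean value in days; a simplification

def eclipse_windows(known_eclipse, count):
    """Project the next `count` eclipse-season dates from one observation."""
    return [known_eclipse + timedelta(days=round(ECLIPSE_SEASON * k))
            for k in range(1, count + 1)]

# Using the solar eclipse of 2024-04-08 as the anchor observation
for d in eclipse_windows(date(2024, 4, 8), 3):
    print(d.isoformat())
```

    Refining the constant and layering longer correction cycles on top, as the quote describes, improves the predictions for centuries without the model ever becoming “true”.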

  • Convenience trumps authority

    Dror Poleg succinctly explaining how convenience trumps authority.

    At my office building, there is a bathroom. Every other week, the toilet gets clogged and the landlord has to call in a plumber. “Someone flushed wet wipes,” says the plumber. “You should tell people to not flush anything other than toilet paper.”

    At first, the landlord put up a sign that said “Please do not flush wet wipes.” After every incident, the signs became more aggressive, using terms like “absolutely forbidden” and even “this is disgusting.” The landlord even tried to punish tenants by delaying the repair and keeping the toilet closed for days. 

    Nothing worked. The signs continue to get more aggressive, and the toilet continues to get occasionally clogged. 

    How do you prevent people from throwing wet wipes into the toilet? 

    There’s one strategy the landlord hasn’t tried yet: Put a little bin next to the toilet, so people could have somewhere else to throw their wipes.

  • Don’t give advice, share stories

    I read an interesting post by Jacob Kaplan-Moss about taking advice from someone who’s been in the tech industry far longer than you and how to give advice as a silver sage yourself.

    …advice from people with long careers on what worked for them when they were getting started is unlikely to be advice that works today. The tech industry of 15 or 20 years ago was, again, dramatically different from tech today. I used to joke that if you knew which way was up on a keyboard, you could get a job in tech. That joke makes no sense today: breaking into the field is now very difficult, and getting harder every year.

    So even when people with long careers try to give newcomers advice, while it may be well-intentioned, it’s likely to be useless. If you’re new to tech, ignoring advice from old heads is probably a good idea. And if you’re a fellow old head who’d like to help people new to the industry, do it by paying close attention to what works for them and pass on that advice.

    I think this applies to everyone, irrespective of the field they work in. It’s commonly called the generation gap. The young ones will always roll their eyes the moment a silver sage starts giving advice. Then what can the silver sage offer? Stories.

    After reading the post, I coincidentally listened to the Paisa Vaisa podcast by Anupam Gupta and Aashish Somaiyaa on How Investor Behavior is Reshaping Mutual Funds. Below is the transcript of what caught my attention.

    The people who are young today, what they watch on OTT and read as proper literature in books, we used to read in the tabloids. Oh my god, look at Harshad Mehta and his Lexus and see what has happened. So for us it was really vivid, I mean we knew, boss, this is a scam. For these people it’s a story. [Link]

    It’s a different world. I mean, look at sportsmen, look at cricketers. These young people of ours, for lack of a better word, their disposition, their attitude, has totally changed. And rightfully so. They have confidence because their PE multiple is higher. They have more visibility into the future. Our PE multiple was low, and we had no visibility into the future at all. [Link]

    The younger ones don’t need advice. They need to hear the stories and vivid experiences that the silver sage went through, and how those experiences shaped what the silver sage is today.

  • AI Einstein

    Thomas Wolf arguing why we won’t have an AI Einstein—at least not in AI’s current form.

    …the skill to ask the right questions and to challenge even what one has learned. A real science breakthrough is Copernicus proposing, against all the knowledge of his days -in ML terms we would say “despite all his training dataset”-, that the earth may orbit the sun rather than the other way around.

    To create an Einstein in a data center, we don’t just need a system that knows all the answers, but rather one that can ask questions nobody else has thought of or dared to ask. One that writes ‘What if everyone is wrong about this?’ when all textbooks, experts, and common knowledge suggest otherwise.

    Wonderful insight. I am pretty sure that if AI had been available before we discovered that Earth orbits the Sun—and not the other way around—AI wouldn’t have dared to question that knowledge.

  • Knowledge collapse

    A long but worthwhile paper published by Andrew J. Peterson on AI and the Problem of Knowledge Collapse.

    We identify conditions under which AI, by reducing the cost of access to certain modes of knowledge, can paradoxically harm public understanding. While large language models are trained on vast amounts of diverse data, they naturally generate output towards the ‘center’ of the distribution. This is generally useful, but widespread reliance on recursive AI systems could lead to a process we define as “knowledge collapse”, and argue this could harm innovation and the richness of human understanding and culture. However, unlike AI models that cannot choose what data they are trained on, humans may strategically seek out diverse forms of knowledge if they perceive them to be worthwhile. 
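    The narrowing the paper describes can be caricatured in a few lines: if each “generation” of a model is trained only on the central mass of the previous generation’s output, the tails vanish quickly. A toy simulation of my own, not the paper’s formal model:

```python
import random
import statistics

random.seed(42)

def retrain_generation(samples, truncation=1.0):
    """Toy 'recursive training': each generation resamples from the previous
    one, but only from values within `truncation` standard deviations of its
    mean, mimicking a model that over-generates the center of its distribution."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    center = [x for x in samples if abs(x - mu) <= truncation * sigma]
    return [random.choice(center) for _ in samples]

# Generation 0: diverse "human" knowledge, tails included
population = [random.gauss(0, 1) for _ in range(10_000)]
for gen in range(6):
    print(f"gen {gen}: stdev = {statistics.stdev(population):.3f}")
    population = retrain_generation(population)
```

    The spread of the population shrinks every generation: the long-tail values are never sampled again once they fall outside the central band, which is the collapse mechanism in miniature.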

    The author analyzes how our dependence on AI could limit our knowledge to a narrow subset of views—views on which the AI was trained—and lead us to eventually forget the “long-tail” ideas. We see this happening right now when we ask the Chinese artificial intelligence, DeepSeek, about Tiananmen Square or Taiwan.

    With increasing integration of LLM-based systems, certain popular sources or beliefs which were common in the training data may come to be reinforced in the public mindset (and within the training data), while other “long-tail” ideas are neglected and eventually forgotten. 

    Such a process might be reinforced by an ‘echo chamber’ or information cascade effect, in which repeated exposure to this restricted set of information leads individuals to believe that the neglected, unobserved tails of knowledge are of little value.

    We have all seen the effect of the echo chamber in social media. An echo chamber with AI responses would only go on to solidify people’s beliefs—whether right or wrong.

    But knowledge collapse has been happening throughout human evolution.

    …traditional hunter-gatherers could identify thousands of different plants and knew their medicinal usages, whereas most humans today only know a few dozen plants and whether they can be purchased in a grocery store. This could be seen as a more efficient form of specialization of information across individuals, but it might also impact our beliefs about the value of those species or of a walk through a forest, or influence scientific or policy-relevant judgements.

    This will be a long-term effect of AI, the implications of which will be visible only after 10 or more years.