• AI can empower developers to rewrite code without regret

    Matthew Sinclair talks about how AI can help programmers scrap the code they (or rather, the AI) have written and start over when they realise the approach won’t work, may lead to technical debt, or is wrong for any of a thousand other reasons, because new code can be generated fairly quickly.

    Working with Claude Code has fundamentally shifted how I think about the economics of programming time. Traditionally, coding involves three distinct “time buckets”:

    • Why am I doing this? Understanding the business problem and value 
    • What do I need to do? Designing the solution conceptually 
    • How am I going to do it? Actually writing the code 

    For decades, that last bucket consumed enormous amounts of our time. We’d spend hours, days or weeks writing, debugging, and refining. With Claude, that time cost has plummeted to nearly zero. I can generate thousands of lines of functional code in a sitting—something that is, frankly, mind-blowing.

    And there’s a new skill that emerges: wielding the knife. With code generation being essentially free, we need to become much more comfortable with throwing away entire solutions. The sunk cost fallacy hits programmers hard—we hate discarding code we’ve invested in, fearing we might break something important or never get back to a working state.

    But when your assistant can rewrite everything in minutes, that calculus changes completely. Three times during my backend project, I looked at substantial amounts of code—thousands of lines that technically worked—and decided to scrap it entirely because the approach wasn’t right. This wasn’t easy. My instinct was still to try to salvage and refactor. But the right move was to step back, rethink the approach, and direct the AI down a different path.

    This willingness to cut ruthlessly is a muscle most developers haven’t developed yet. It requires confidence in your architectural judgment and a radical shift in how you value implementation time.

    Filed under
  • Diversification and Risk

    Joe Wiggins explaining how diversification affects risk.

    Although diversification is by no means a free lunch, it is an effective means of reducing and controlling risk, if done prudently. It works because by combining securities and assets with different future potential return paths it significantly constrains the range of outcomes of the combined portfolio.

    If we move from a single stock holding to a diversified 50 stock portfolio we greatly lower the potential to make 10x our money, but also (nearly) entirely remove the risk of losing everything.

    Diversification is a tool whereby we can (very imperfectly) create a portfolio with a range of potential outcomes that we are comfortable with. When individuals complain about over-diversification, what they typically mean is that the range of outcomes has been narrowed so that average outcomes are very likely. There is, however, no right or wrong level, it simply depends on our tolerance for risk. Or, to put it another way, our appetite for extremely good or extremely bad results.
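
    To make the quote’s point concrete, here is a minimal Monte Carlo sketch (my own illustration, not from Wiggins’ post). The return distribution and its parameters are arbitrary assumptions; the only thing to look at is how much narrower the range of outcomes becomes when one holding is replaced by fifty.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_sims, n_years, n_stocks = 10_000, 10, 50

    # Assumed i.i.d. annual log-returns for every stock: 8% drift, 35% volatility.
    log_returns = rng.normal(loc=0.08, scale=0.35, size=(n_sims, n_years, n_stocks))

    # Terminal wealth multiple after n_years, per simulation.
    single_stock = np.exp(log_returns[:, :, 0].sum(axis=1))      # hold one stock only
    portfolio = np.exp(log_returns.sum(axis=1)).mean(axis=1)     # equal-weight, buy-and-hold, 50 stocks

    for name, outcome in [("single stock", single_stock), ("50-stock portfolio", portfolio)]:
        p5, p95 = np.percentile(outcome, [5, 95])
        print(f"{name:>20}: 5th pct {p5:5.2f}x, 95th pct {p95:5.2f}x, "
              f"P(>=10x) {np.mean(outcome >= 10):.1%}, P(lose 80%+) {np.mean(outcome <= 0.2):.1%}")
    ```

    The printout shows exactly the trade-off described above: the diversified portfolio almost never multiplies ten-fold, but it also almost never loses most of its value.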

    Filed under
  • Program vs Product

    Dylan Beattie explaining the problem with vibe coding by contrasting a program with a product.

    Probably the single most important lesson I’ve learned in my career, the thing that I would argue is the hallmark of “experience”, is understanding just how much work it takes to turn a working program into a viable product. It’s why developer estimates are so notoriously optimistic – and why experienced developers are so notoriously cynical. Let’s say you crank out a bit of code that’ll take responses from a web form and add them in an Excel spreadsheet. That’s not that hard… yay! we just built a Typeform competitor in one afternoon! Except, no, you didn’t. You made one thing work one time on one computer. You haven’t considered encoding, internationalization, concurrency, authentication, telemetry, billing, branding, mobile devices, deployment. You haven’t hit any of the weird limits yet – ever had a system work brilliantly for the first 65,535 requests and then fall over? You don’t have a product. At best, you have a proof-of-concept of a good idea that, if some very smart people work very hard, might become a viable product.
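
    The 65,535 joke, in case you haven’t been bitten by it, is about 16-bit overflow: store a counter or an ID in an unsigned 16-bit integer and it silently wraps around after 2^16 − 1. A toy illustration (my own, not from Beattie’s post):

    ```python
    import ctypes

    # Hypothetical request counter stored as an unsigned 16-bit integer.
    counter = ctypes.c_uint16(0)

    for _ in range(70_000):
        counter.value += 1      # wraps around silently once it passes 65_535

    print(counter.value)        # prints 4464 (70_000 mod 65_536), not 70_000
    ```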

    Filed under
  • Seven years as shareholder of Supreme Industries

    I’ve been investing in Supreme Industries for about seven years now, but most of my investment—about 57% of it—came in just the financial year 2023–24 (Figure 1). So while the timeline technically stretches back quite a bit, the real weight of my position is barely a year old. It’s a little early to label this a “long-term” investment.

    Figure 1

    When I compare my XIRR to the broader market—specifically the Nifty 500—I’ve observed a pattern (Figure 2). When the markets are in a rough patch, Supreme Industries tends to move in line with the Nifty 500. It doesn’t drastically outperform or underperform. But when the markets start climbing, that’s when Supreme Industries pulls ahead. During bull runs, it consistently beats the index by a significant margin.

    Figure 2

    The dividend yield at cost (Figure 3), sitting a little over one percent, might not be jaw-dropping, but it adds up.

    Figure 3
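
    For anyone unfamiliar with the two numbers I keep quoting: XIRR is the annualised internal rate of return of a series of irregular, dated cash flows, and dividend yield at cost is dividends received divided by the amount originally invested. A rough sketch with made-up figures (not my actual transactions) of how both are computed:

    ```python
    from datetime import date

    def xirr(cash_flows, lo=-0.99, hi=10.0, tol=1e-7):
        """Annualised IRR of dated cash flows (negative = money invested), via bisection."""
        t0 = cash_flows[0][0]
        def npv(rate):
            return sum(cf / (1 + rate) ** ((d - t0).days / 365.0) for d, cf in cash_flows)
        while hi - lo > tol:
            mid = (lo + hi) / 2
            # For an invest-then-payout pattern NPV falls as the rate rises, so bisect on the sign.
            if npv(mid) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # Made-up example: three purchases, then today's market value treated as a final inflow.
    flows = [
        (date(2018, 6, 1), -10_000),
        (date(2020, 3, 25), -15_000),
        (date(2023, 8, 10), -57_000),
        (date(2025, 4, 1), 110_000),
    ]
    print(f"XIRR: {xirr(flows):.1%}")

    # Dividend yield at cost: dividends received in a year / total amount invested.
    dividends_received = 950
    total_invested = 82_000
    print(f"Dividend yield at cost: {dividends_received / total_invested:.2%}")
    ```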

    But the last few months have been a bit of a reality check.

    Back in July 2024, things were looking fantastic. My portfolio was showing a profit of over 125% on Supreme Industries (Figure 4). But then came the FII sell-off, and that 125% profit shrank to just 23%. That kind of drawdown isn’t easy to watch—especially when you’ve seen how high things can go. The Trump tariffs have added another layer of uncertainty. I expect the next couple of years to be very volatile.

    Figure 4

    Filed under
  • Live in chaos, holiday in peace

    Saurabh Mukherjea talking to Nandita Rajhansa about his book Behold the Leviathan: The Unusual Rise of Modern India and dropping a suggestion on how to live your life.

    A lot of people in our country like sukh shanti (peace and quiet), and I don’t blame them. If you live in a city like Bombay or Bangalore, where there is perpetual churn, perpetual traffic, a constant headache, you like sukh shanti.

    You should holiday in the places with sukh shanti, right. Your karma bhoomi, the land where you work, should be in chaos; your holiday should be in sukh shanti. Don’t mix the two. For Christ’s sake, don’t try to set up your karma bhoomi in a country with sukh shanti, because then you will get to watch ruin from very close. Live in chaos, holiday in peace, and that way you will have a great life, you will compound your wealth and have a fulfilling life. India is the definition of chaos and that’s why India rocks.

    This made me chuckle. Of course you need to survive the pollution and traffic first. 

    Filed under
  • NSA

    Bruce Schneier talking about some of the inner workings of the NSA in the context of the Signal chat leak.

    When the NSA discovers a technological vulnerability in a service such as Signal (or buys one on the thriving clandestine vulnerability market), does it exploit it in secret, or reveal it so that it can be fixed? Since at least 2014, a US government interagency “equities” process has been used to decide whether it is in the national interest to take advantage of a particular security flaw, or to fix it. The trade-offs are often complicated and hard.

    Waltz—along with Vice President J.D. Vance, Defense Secretary Pete Hegseth, and the other officials in the Signal group—have just made the trade-offs much tougher to resolve. Signal is both widely available and widely used. Smaller governments that can’t afford their own military-grade encryption use it. Journalists, human rights workers, persecuted minorities, dissidents, corporate executives, and criminals around the world use it. Many of these populations are of great interest to the NSA.

    At the same time, as we have now discovered, the app is being used for operational US military traffic. So, what does the NSA do if it finds a security flaw in Signal?

    Previously, it might have preferred to keep the flaw quiet and use it to listen to adversaries. Now, if the agency does that, it risks someone else finding the same vulnerability and using it against the US government. And if it was later disclosed that the NSA could have fixed the problem and didn’t, then the results might be catastrophic for the agency.

    It is really fascinating to read how these clandestine agencies work. Fodder for a spy novel.

    Filed under
  • Programming in natural language

    Edsger W. Dijkstra explaining why using natural language for programming is a bad idea.

    In order to make machines significantly easier to use, it has been proposed (to try) to design machines that we could instruct in our native tongues. This would, admittedly, make the machines much more complicated, but, it was argued, by letting the machine carry a larger share of the burden, life would become easier for us. It sounds sensible provided you blame the obligation to use a formal symbolism as the source of your difficulties. But is the argument valid? I doubt.

    We know in the meantime that the choice of an interface is not just a division of (a fixed amount of) labour, because the work involved in co-operating and communicating across the interface has to be added. We know in the meantime —from sobering experience, I may add— that a change of interface can easily increase at both sides of the fence the amount of work to be done (even drastically so). Hence the increased preference for what are now called “narrow interfaces”. Therefore, although changing to communication between machine and man conducted in the latter’s native tongue would greatly increase the machine’s burden, we have to challenge the assumption that this would simplify man’s life.

    Instead of regarding the obligation to use formal symbols as a burden, we should regard the convenience of using them as a privilege: thanks to them, school children can learn to do what in earlier days only genius could achieve.

    I saw this post on Hacker News and this comment sums up Dijkstra’s point succinctly.

    Natural language has embedded limitations that stem from our own mental limitations: the human mind sometimes thinks in terms that are too abstract or too specific, and misses important details or generalizations.

    As a programmer, I know first hand that the problems or even absurdities of some assignments only become apparent after one has begun to implement them as code, i.e. as strict symbolisms.

    Not to mention that it often takes more time to explain something accurately in natural language than it takes to just write the algorithm as code.

    Filed under
  • Learn to code

    Kyrylo Silin reflecting on the importance of learning to code.

    AI is the new shiny toy everyone wants to play with. And to be honest, it’s impressive. The problem with AI is that with every year, it gets better and better. Wait what? How’s that a problem? Well, with every new year you lose control.

    The more you rely on AI, the less you understand what you’re doing. The less you understand, the more AI vendors can control you. And the more control they have, the more they can charge you. It’s a vicious cycle.

    This shift was inevitable. Humanity must adapt to this new reality. AI isn’t going away, and we need to learn how to use it to our advantage.

    The large language models (LLMs) created by tech giants have absorbed decades of knowledge — our knowledge. They’ve been trained on our work.

    Now they’re selling it back to us and telling us we only need to learn English to code. This is a lie. As a new programmer, I don’t want you to fall for it. There’s no corner-cutting. Get your shit together and learn to code.

    Ha!

    While I agree that programmers need to learn to code, the unfortunate reality is that AI vendors will charge us to generate code with models trained on our own work. The charge is for completing a function in a matter of minutes, sometimes even seconds. The charge is for speed and efficiency.

    Filed under
  • Five years as shareholder of Tata Consultancy Services

    When I started investing in TCS, I’d already held L&T Infotech—before its merger with Mindtree—as another IT services company in my portfolio. The rationale behind investing in TCS was twofold. First, I wanted to diversify within the IT services sector. With L&T Infotech being a Nifty Next 50 company, TCS was a good choice as a Nifty 50 company. Second, TCS was available at a discount because I started investing right in the middle of the COVID pandemic market crash.

    My hope was that TCS would outperform the index. Unfortunately, it’s been an underperformer for the last three years against both the Nifty 50 and the Nifty IT index.

    This trend isn’t unique to TCS—I’ve noticed it across my equity investments made during the COVID pandemic market crash. The first one to two years are great—I’m either beating or matching the index. And even if I’m not beating the index, the returns are still strong. I start daydreaming about making a fortune after 20 years, only to get sucker-punched in the subsequent years. Two years into my investment in TCS, I was at a 30% XIRR. Now, I’m at just 4.4% (Figure 1).

    Figure 1

    What’s even more surprising is that TCS has significantly underperformed the Nifty IT index. In fact, the Nifty IT index has even outpaced the Nifty 50. I was so surprised that I headed over to Google Finance (Figure 2) to double-check my calculations. Nope—Nifty IT has beaten both TCS and the Nifty 50 over the last five years. It also goes to show my lack of knowledge of how Nifty indices work.

    Figure 2

    That said, I have to give TCS credit for two key points. First, I haven’t seen a negative XIRR on my investment yet. At a measly 4.4% XIRR, it’s still positive, but with the FII sell-off I think I might finally see a negative return. Second, dividends (Figure 3) are another area where TCS does well. The special dividends for FY 2022-23 and FY 2024-25 have boosted my dividend yield at cost. In my portfolio, only ITC beats TCS in terms of dividend yield.

    Figure 3

    Filed under
  • Wrong, but useful

    Jason Cohen talking about how a theory can give accurate predictions yet still fail to explain why.

    The Mayans’ conception of the Earth, moon, sun, planets, and stars, was as ludicrous as every other ancient civilization, yet their priests routinely predicted the timing of eclipses with impressive accuracy. The priests leveraged this accuracy as evidence that their religion was correct, and that their exalted position in society was justified.

    Their religion—and therefore their explanation of how the universe worked—is laughable to the modern reader: The Earth in the center (of course), with thirteen tiers of heaven whirling above and nine levels of underworld threatening from below. Eclipses are not caused by a physical object blocking the light of the sun, but rather by spiritual beings temporarily consuming the sun or moon (Figure 1). Even the most fervently religious person today would classify these ideas as fanciful mythology, though the Mayans were no less certain of the veracity of their religion than modern-day humans are of theirs.

    Nevertheless, they were careful observers and meticulous calculators. They understood that eclipses happened roughly every 173 days, adjusted by a 405-month cycle and an additional smaller correction. They tracked these cycles and updated their model over the centuries, and as a result, their theory yielded accurate predictions, even though the theory’s explanation of why was entirely incorrect.
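
    As a toy illustration of how a purely empirical cycle predicts without explaining: step forward from any known eclipse in whole “eclipse seasons” (about 173.31 days by the modern value; the Mayan table used its own corrections) and you land near the windows in which eclipses are possible, with no reference to what actually causes one.

    ```python
    from datetime import date, timedelta

    ECLIPSE_SEASON_DAYS = 173.31      # modern value of the eclipse half-year
    anchor = date(2024, 4, 8)         # a known total solar eclipse as the starting point

    # Purely empirical prediction: each step lands in a window where an eclipse can occur.
    for k in range(1, 6):
        window = anchor + timedelta(days=round(k * ECLIPSE_SEASON_DAYS))
        print(f"eclipse-possible window around {window} (give or take a couple of weeks)")
    ```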

    Filed under