• Decumulation

    Barry Ritholtz talking about the thorniest problem in finance.

    Consider some of the most challenging problems in finance: the equity-premium puzzle; binomial option-pricing models; whether zero interest rates spur inflation or damp it; whether stocks are cheap or overpriced.

    Challenging as those may appear, none compare to what Nobel laureate William Sharpe, 82, calls “decumulation,” or the use of savings in retirement. It is, he says, “the nastiest, hardest problem in finance.”

    Why is planning for retirement so difficult? Saurabh Mukherjea explains the two key challenges.

    Longevity Risk: None of us know how long we will live. What we do know is that with each passing year, improvements in medical science are likely to increase our lifespans, thus increasing the cost of our retirement. Even if we assume that most of us will die between the ages of 70 and 100, there are at least 30 possible outcomes with regard to our own longevity.

    However, for retirement planning purposes, we also have to factor in our spouse’s longevity – after all, after I am gone, my better half’s lifestyle still needs to be funded properly. Since there are also 30 possible longevity outcomes for your spouse, a couple faces at least 900 (30 × 30) possible longevity outcomes.

    Investment Risk: Assuming that most retirement portfolios are a mixture of bonds and equities, the blended return of the portfolio through the retirement years, and in the decade immediately preceding retirement, could range anywhere from low single digits to the low 20s. Why? Because the world’s large stock markets have seen extended periods of ZERO returns. For example, from 1967 to 1984, the S&P 500 gave zero returns. Between 1993 and 2003, the Sensex gave no returns. Then, again, from January 2007 to January 2014, the Sensex gave close to zero returns. In fact, for half the years in the past three decades, the Sensex has given annual returns close to zero – see the chart in the original post.

    Assuming, however, that a mixture of bonds and equities gives a retirement portfolio long-term returns anywhere between 5% and 25%, and taking intervals of 0.20%, there are at least 100 different return possibilities to consider for your retirement portfolio.

    Now, given that there are 900 possible longevity outcomes (30 for you × 30 for your spouse) and at least 100 possible return outcomes, there are at least 90,000 possible retirement outcomes to consider even before we add the next important layer to this problem.
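    To make the arithmetic above concrete, here is a minimal Python sketch of the scenario count, plus a toy Monte Carlo of running out of money. The starting wealth, spending, return, and volatility figures are illustrative assumptions of mine, not numbers from the original piece.

        import random

        # Scenario count from the quote: ~30 longevity outcomes per person
        # (death between ages 70 and 100) and a grid of long-term blended
        # returns from 5% to 25% in steps of 0.20%.
        longevity_outcomes = 30
        return_grid = [5.0 + 0.20 * i for i in range(101)]  # 5.00% .. 25.00%

        couple_outcomes = longevity_outcomes ** 2           # 900
        scenarios = couple_outcomes * len(return_grid)      # ~90,900
        print(f"{couple_outcomes} longevity outcomes x {len(return_grid)} "
              f"return paths = {scenarios} retirement scenarios")

        # Toy decumulation Monte Carlo: fixed annual spending against a
        # volatile portfolio. All parameters are hypothetical.
        def ruin_probability(start=1_000_000, spend=50_000, years=30,
                             mean=0.07, vol=0.15, trials=10_000):
            """Fraction of simulated retirements that run out of money."""
            ruined = 0
            for _ in range(trials):
                wealth = start
                for _ in range(years):
                    wealth = wealth * (1 + random.gauss(mean, vol)) - spend
                    if wealth <= 0:
                        ruined += 1
                        break
            return ruined / trials

        print(f"probability of ruin over 30 years: {ruin_probability():.1%}")

    Even this toy version shows why the problem is nasty: small changes in the assumed return, volatility, or horizon move the ruin probability dramatically.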

    He then adds two additional risks that a developing economy like India needs to consider.

  • Disattention

    This comment by hgomersall on Hacker News explaining how smartphones steal your – for lack of a better word – disattention.

    I’ve recently realised that the biggest problem with smartphones is not that they steal your attention (which is bad enough), but that they steal your disattention.

    I don’t know of a better word for it than disattention. Perhaps downtime? But it’s not so structured. It’s just those moments where you’d previously let your mind wander. Gone forever.

  • Junior engineers

    Justin Smestad weighs in on why you need junior engineers.

    Senior engineers know things. They have context, relationships, institutional knowledge. That makes them valuable. It also makes them expensive, and it gives them leverage.

    Eventually this turns into a comp conversation. A senior engineer says “I want a 40% raise or I’m leaving,” and the company’s ability to respond depends entirely on what their alternatives look like. If there’s a bench of mid-level engineers who’ve been growing into that space for the past two years, the company has options. They can negotiate from a position of strength. The loss would hurt, but it wouldn’t be catastrophic.

    If there’s no bench, because you stopped hiring juniors three years ago and there’s nobody coming up behind your seniors, you don’t have options. You pay the 40%, or you lose the person and spend six months (and a recruiter’s fee) trying to find a replacement at market rate, which is probably even higher.

    Junior employees aren’t just doing junior work. They’re a long-term bet. They’re future mid-levels and future seniors growing inside your organization, building context that you can’t hire in from the outside. Every junior you don’t hire today is a senior you’ll have to overpay for in three years.

  • AI and academia

    Minas Karamanis talking about the impact of AI on academia.

    Science is about people:

    What’s great about science is its people. The slow, stubborn, sometimes painful process by which a confused student becomes an independent thinker. If we use these tools to bypass that process in favor of faster output, we don’t just risk taking away what’s great about science. We take away the only part of it that wasn’t replaceable in the first place.

    Knowing why those buttons exist:

    The real threat is a slow, comfortable drift toward not understanding what you’re doing. Not a dramatic collapse. Not Skynet. Just a generation of researchers who can produce results but can’t produce understanding. Who know what buttons to press but not why those buttons exist. Who can get a paper through peer review but can’t sit in a room with a colleague and explain, from the ground up, why the third term in their expansion has the sign that it does.

    Training first, tools later:

    I use AI agents regularly, and so do most of the people in my research group. The colleagues I work with produce solid results with these tools. But when you look at how they use them, there’s a pattern: they know what the code should do before they ask the agent to write it. They know what the paper should say before they let it help with the phrasing. They can explain every function, every parameter, every modeling choice, because they built that knowledge over years of doing things the slow way. If every AI company went bankrupt tomorrow, these people would be slower. They would not be lost. They came to the tools after the training, not instead of it. That sequence matters more than anything else in this conversation.

    Output and understanding are two different things:

    Schwartz can use Claude to write a paper because Schwartz already knows the physics. His decades of experience are the immune system that catches Claude’s hallucinations. A first-year student using the same tool, on the same problem, with the same supervisor giving the same feedback, produces the same output with none of the understanding. The paper looks identical. The scientist doesn’t.

  • Speed and wisdom

    Jim Nielsen talking about how speed is not conducive to wisdom.

    Speed is how you avoid reckoning. It guarantees you miss things, and you can’t learn from what you don’t notice.

    Wisdom’s feedback loop is slow.

    Wise people I’ve met seem unhurried. I don’t think it’s because they’re slow thinkers or actors. I think it’s because they’ve learned that important things take the time they take, no amount of urgency changes that.

    Wisdom is chasing all of us, but we’re going too fast to notice what it’s trying to teach us.

  • Last step in thinking

    This comment by user roadside_picnic on how writing is the last step in thinking.

    I’ve long considered writing to be the “last step in thinking”. I can’t tell you how many times an idea that was crystal clear in my mind fell apart the moment I started writing, and I realized there were major contradictions I needed to resolve. Likewise, there have been numerous times when writing about something loosely and casually revealed something that fundamentally changed how I viewed a topic and really consolidated my thinking.

  • Survival

    Joe Wiggins reflects on his ten beliefs about investing. For me, survival is the most important one.

    Survival is the most important and neglected goal for all investors.

    Investors rarely talk about survival, but it is the most important aspect of any strategy. It should define whether we make a particular investment and how it is sized. We should always avoid creating situations where there is a meaningful risk of complete disaster (being wiped out) or where we are unable to withstand spells of poor performance. The best investment is the one we can stick with for the long run, not the one with the highest potential return. If we don’t survive, nothing else matters.
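    Position sizing is where survival becomes concrete. Below is a minimal, hypothetical sketch (my illustration, not from Wiggins’s post): a repeated bet with positive expected value still grinds wealth toward zero when it is sized too large, because geometric compounding punishes volatility.

        import random

        def median_terminal_wealth(fraction, rounds=100, trials=5000):
            """Median wealth after repeatedly staking `fraction` of current
            wealth on a bet returning +50% or -40% with equal probability.
            Every round has positive expected value (+5% of the stake)."""
            outcomes = []
            for _ in range(trials):
                wealth = 1.0
                for _ in range(rounds):
                    stake = wealth * fraction
                    wealth += stake * (0.5 if random.random() < 0.5 else -0.4)
                outcomes.append(wealth)
            outcomes.sort()
            return outcomes[trials // 2]

        for f in (0.10, 0.25, 0.50, 1.00):
            print(f"stake {f:.0%} -> median terminal wealth "
                  f"{median_terminal_wealth(f):10.4f}")

    Under these assumed payoffs, small stakes compound while a full stake collapses, despite the identical positive edge – the survival argument in miniature.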

  • Tool or the toolmaker

    Charles Arthur’s intriguing thought on the gradual loss of cognitive capabilities as those capabilities are outsourced to LLMs.

    This makes me think that this complaint/debate has been going on for a long time. The move from oral longform poetry such as The Iliad and Beowulf to writing it down, then printing it, then putting it on websites, then letting search engines find it for you, and now letting LLMs do some part of the work of analysing it – all of these seem to have been viewed as letting our brains slide back into the primordial ooze. If a problem is eternal, is it really because of the tools, or the toolmakers?

  • Laziness

    Bryan Cantrill explaining how laziness is a strength.

    […] when programmers are engaged in the seeming laziness of hammock-driven development, we are in fact turning the problem over and over in our heads. We undertake the hard intellectual work of developing these abstractions in part because we are optimizing the hypothetical time of our future selves, even if at the expense of our current one. When we get this calculus right, it is glorious, as the abstraction serves not just ourselves, but all who come after us. That is, our laziness serves to make software easier to write, and systems easier to compose — to allow more people to write more of it.

    […]

    The problem is that LLMs inherently lack the virtue of laziness. Work costs nothing to an LLM. LLMs do not feel a need to optimize for their own (or anyone’s) future time, and will happily dump more and more onto a layercake of garbage. Left unchecked, LLMs will make systems larger, not better — appealing to perverse vanity metrics, perhaps, but at the cost of everything that matters. As such, LLMs highlight how essential our human laziness is: our finite time forces us to develop crisp abstractions in part because we don’t want to waste our (human!) time on the consequences of clunky ones. The best engineering is always borne of constraints, and the constraint of our time places limits on the cognitive load of the system that we’re willing to accept. This is what drives us to make the system simpler, despite its essential complexity.

    Ha!

    The author goes on to emphasise that LLMs are going to play an important role in the future of software development, but they are a tool. An extraordinary tool, but still a tool.

  • Value comes from doing something non-obvious

    Lalit Maganti shares his experience of using AI to build devtools for SQLite, something he had dreamt of for eight long years. This is probably the most measured take I have read on the internet; most such pieces are fluff, written the moment AI spits out the first version of the codebase.

    AI turned out to be better than me at the act of writing code itself, assuming that code is obvious. If I can break a problem down to “write a function with this behaviour and parameters” or “write a class matching this interface,” AI will build it faster than I would and, crucially, in a style that might well be more intuitive to a future reader. It documents things I’d skip, lays out code consistently with the rest of the project, and sticks to what you might call the “standard dialect” of whatever language you’re working in.

    That standardness is a double-edged sword. For the vast majority of code in any project, standard is exactly what you want: predictable, readable, unsurprising. But every project has pieces that are its edge, the parts where the value comes from doing something non-obvious. For syntaqlite, that was the extraction pipeline and the parser architecture. AI’s instinct to normalize was actively harmful there, and those were the parts I had to design in depth and often resorted to just writing myself.

    But here’s the flip side: the same speed that makes AI great at obvious code also makes it great at refactoring. If you’re using AI to generate code at industrial scale, you have to refactor constantly and continuously. If you don’t, things immediately get out of hand. This was the central lesson of the vibe-coding month: I didn’t refactor enough, the codebase became something I couldn’t reason about, and I had to throw it all away. In the rewrite, refactoring became the core of my workflow. After every large batch of generated code, I’d step back and ask “is this ugly?” Sometimes AI could clean it up. Other times there was a large-scale abstraction that AI couldn’t see but I could; I’d give it the direction and let it execute. If you have taste, the cost of a wrong approach drops dramatically because you can restructure quickly.