  • In the world of AI, programming languages are fungible

    Mitchell Hashimoto talking about how, in the world of AI, programming languages are fungible. He gives the example of the recent Bun rewrite in Rust.

    On the interesting side is how fungible programming languages are nowadays. Programming languages used to be LOCK IN, and they’re increasingly not so. You think the Bun rewrite in Rust is good for Rust? Bun has shown they can be in probably any language they want in roughly a week or two. Rust is expendable. It’s useful until it’s not, then it can be thrown out. That’s interesting!

    This reminds me of how AI can empower developers to rewrite code without regret.

  • Climbing the walls of worry

    Deepak Shenoy’s thought-provoking post on climbing the walls of worry:

    The news is tiresome because it engulfs us in a sense of doom, while in reality we don’t see an impact. And the markets – they are now going up, both in India and the west. This sounds a little scary but often, when markets climb a wall of worry, you should sit up and give the market a little more credit for its moves.

    […]

    In the long run, things do work out, but what does change is how you plan ahead. For example, one change for now is: the Rupee’s at 95. If you were planning your child’s college fees at an assumption of 40 lakh per year (Rs. 80 to a USD, and $50,000 USD per year), you might need to revise that to 50 lakh per year. If you have 15 years left, your monthly SIP would have to rise from Rs. 61,000 to Rs. 77,000 per month with this new information. (Assumptions: effective rise of 6% per year from now, blended return at 11% on investing)

    At least this one is actionable. To make up for this higher SIP – if you needed to – you will perhaps need to rework your goals, and perhaps delay retirement, or change your spending patterns. These are useful reactions to this crisis, better than endless doomscrolling of meme videos involving lego characters.

    […]

    The internet is designed to make you worry. Markets climb those walls often and surprise us on the upside. In life, one way to deal with worry is to climb those walls, not to let them be built around us. Enjoy the narrative, and stick with your game.
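
    His SIP arithmetic is easy to sanity-check. Below is a minimal Python sketch under my own assumptions, since the post doesn’t state how large a corpus the SIP targets: end-of-month contributions, an 11% annual return, and fees inflating at 6% a year for 15 years. Only the ratio between the two SIPs is meaningful here.

        def sip_required(target_corpus: float, annual_return: float, years: int) -> float:
            """Monthly SIP needed to reach target_corpus, assuming
            end-of-month contributions compounding at annual_return."""
            r = annual_return / 12                    # monthly return
            n = years * 12                            # number of contributions
            fv_of_one_rupee = ((1 + r) ** n - 1) / r  # future value of Rs. 1/month
            return target_corpus / fv_of_one_rupee

        YEARS, RETURN, INFLATION = 15, 0.11, 0.06
        for fee_today in (40e5, 50e5):  # Rs. 40 lakh vs Rs. 50 lakh per year
            target = fee_today * (1 + INFLATION) ** YEARS
            sip = sip_required(target, RETURN, YEARS)
            print(f"{fee_today / 1e5:.0f} lakh/yr -> Rs. {sip:,.0f}/month per year of fees")

    The second SIP comes out 25% higher than the first (50/40), which is roughly the jump from Rs. 61,000 to Rs. 77,000 in the post once the corpus target is fixed.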

  • Expertise

    These two comments on Hacker News talk about how difficult it is to impart your expertise by codifying it. It can only be taught by working alongside your apprentice.

    By hamstergene:

    Because the most important parts of the expertise are coming from their internal “world model” and are inseparable from it.

    An average unaware person believes that anything can be put in words and once the words are said, they mean to the reader what the sayer meant, and the only difficulty could come from not knowing the words or mistaking ambiguities. The request to take a dev and “communicate” their expertise to another is based on this belief. And because this belief is wrong, the attempt to communicate expertise never fully succeeds.

    Factual knowledge can be transferred via words well, that’s why there is always at least partial success at communicating expertise. But the solidified, interconnected world model of what all your knowledge adds up to cannot. AI can blow you out of the water at knowing more facts, but it doesn’t yet utilize them in a way that allows it, surprisingly often, to have surprisingly correct insights into what the missing knowledge probably is. That mysterious ability to be right more often comes out of the “world model”; that is what “expertise” is. That part cannot be communicated; one can only help others acquire the same expertise.

    Communicating expertise is a hint about where to go and what to learn; the reader still needs to put in the effort to internalize it, and they need the right project that provides the opportunity to learn what needs to be learnt. It is not an act of transfer.

    By gooseyard:

    By complete coincidence, yesterday I came across this link to an article Peter Naur wrote in 1985 (https://pages.cs.wisc.edu/~remzi/Naur.pdf) which I haven’t been able to stop thinking about.

    I’ve been doing this for coming up on thirty years now, mostly at one large company, and I spent a significant number of hours every week fielding questions from people who are newer at it who are having trouble with one thing or another. Often I can tell immediately from the question that the root of the problem is that their world model (Naur would call it their Theory) is incomplete or distorted in some way that makes it difficult for them to reason about fixing the problem. Often they will complain that documentation is inadequate or missing, or that we don’t do it the way everyone else does, or whatever, and there’s almost always some truth to that.

    The challenge then is to find a way to translate your own theory of whatever the thing is into some kind of symbolic representation, usually some combination of text and diagrams which, shown to a person of reasonable experience and intelligence, would conjure up a mental model in the reader similar to your own. In other words you want to install your theory into the mind of another person.

    A theory of the type Naur describes can’t be transplanted directly, but I think my job as a senior developer is to draw upon my experience, whether it was in the lecture hall or on the job, to figure out a way of reproducing those theories. That’s one of the reasons why communication skills are so critical, but it’s not just that; a person also needs to experience this process of receiving a theory of operation from another person many times over to develop instincts about how to do it effectively. Then we have to refine those intuitions into repeatable processes, whether it’s writing documents, holding classes, etc.

    This has become the most rewarding part of my work, and a large part of why I’m not eager to retire yet as long as I feel I’m performing this function in a meaningful way. I still have a great deal to learn about it, but I think that Naur’s conception of what is actually going on here makes a lot clearer the role that senior engineers can play in the long-term function of software companies, if it’s something they enjoy doing.

  • Maintenance costs

    James Shore arguing that the productivity boost from AI will proportionally increase maintenance costs.

    your AI coding agent, the one you use to write code, needs to reduce your maintenance costs. Not by a little bit, either. You write code twice as quick now? Better hope you’ve halved your maintenance costs. Three times as productive? One third the maintenance costs. Otherwise, you’re screwed. You’re trading a temporary speed boost for permanent indenture.
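
    The arithmetic behind this is blunt. A toy sketch, my framing rather than Shore’s: if an agent lets you ship k times the code, your steady-state maintenance load is k times larger unless per-unit upkeep falls to 1/k.

        def maintenance_load(speedup: float, upkeep_per_unit: float) -> float:
            # units of code shipped scale with speedup; each unit costs upkeep
            return speedup * upkeep_per_unit

        baseline   = maintenance_load(1.0, 1.0)        # 1.0
        doubled    = maintenance_load(2.0, 1.0)        # 2.0: twice the upkeep
        break_even = maintenance_load(2.0, 1.0 / 2.0)  # 1.0: "halved costs" keeps you level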

  • Coordinated disclosures

    Jeff Kaufman talking about how AI has changed the status quo of coordinated disclosure for security vulnerabilities.

    A week ago the Copy Fail vulnerability came out, and Hyunwoo Kim immediately realized that the fixes were insufficient, sharing a patch the same day. In doing this he followed standard procedure for Linux, especially within networking: share the security impact with a closed list of Linux security engineers, while fixing the bug quietly and efficiently in the open. His goal was that with only the raw fix public, the knowledge that a serious vulnerability existed could be “embargoed”: the people in a position to address it know, but they’ve agreed not to say anything for a few days.

    Someone else noticed the change, however, realized the security implications, and shared it publicly. Since it was now out, the embargo was deemed over, and we can now see the full details.

    It’s interesting to see the tension here between two different approaches to vulnerabilities, and think about how this is likely to change with AI acceleration.

    On one side you have “coordinated disclosure” culture. This is probably the most common approach in computer security. When you discover a security bug you tell the maintainers privately and give them some amount of time (often 90 days) to fix it. The goal is that a fix is out before anyone learns about the hole.

    On the other side you have “bugs are bugs” culture. This is especially common in Linux, where the argument is that if the kernel is doing something it shouldn’t then someone somewhere may be able to turn it into an attack. Just fix things as quickly as possible, without drawing attention to them. Often people won’t notice, with so many changes going past, and there’s still time to get machines patched.

    This approach never worked perfectly, but with AI getting good at finding vulnerabilities it’s a much bigger problem. So many security fixes are coming out now that examining commits is much more attractive: the signal-to-noise ratio is higher. Additionally, having AI evaluate each commit as it passes is increasingly cheap and effective.

    Another take on this by user tptacek on Hacker News.

    This has been a very long time coming and the crackup we’re starting to see was predicted long before anyone knew what an LLM was.

    The catalyst is the shift towards software transparency: both the radically increased adoption of open source and source-available software, and the radically improved capabilities of reversing and decompilation tools. It has been over a decade since any ordinary off-the-shelf closed-source software was meaningfully obscured from serious adversaries.

    This has been playing out in slow motion ever since BinDiff: you can’t patch software without disclosing vulnerabilities. We’ve been operating in a state of denial about this, because there was some domain expertise involved in becoming a practitioner for whom patches were transparently vulnerability disclosures. But AIs have vaporized the pretense.

    It is now the case that any time something gets merged into mainline Linux, several different organizations are feeding the diffs through LLM prompts aggressively evaluating whether they fix a vulnerability and generating exploit guidance. That will be the case for most major open source projects (nginx, OpenSSL, Postgres, &c) sooner rather than later.

    The norms of coordinated disclosure are not calibrated for this environment. They really haven’t been for the last decade.

    I’m weirdly comfortable with this, because I think coordinated disclosure norms have always been blinkered, based on the unquestioned premise that delaying disclosure for the operational convenience of system administrators is a good thing. There are reasons to question that premise! The delay also keeps information out of the hands of system operators who have options other than applying patches.
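
    It’s worth making “feeding the diffs through LLM prompts” concrete. A toy sketch, entirely my own construction rather than anything tptacek describes: the repo tag, model name, and prompt are placeholders, and I’m assuming the OpenAI Python SDK for the LLM call.

        import subprocess
        from openai import OpenAI  # assumption: the OpenAI Python SDK

        client = OpenAI()

        def new_commits(since: str) -> list[str]:
            # list mainline commits since a given tag
            out = subprocess.run(["git", "rev-list", f"{since}..HEAD"],
                                 capture_output=True, text=True, check=True)
            return out.stdout.split()

        def looks_like_security_fix(commit: str) -> str:
            diff = subprocess.run(["git", "show", commit],
                                  capture_output=True, text=True, check=True).stdout
            resp = client.chat.completions.create(
                model="gpt-4o",  # placeholder model name
                messages=[
                    {"role": "system",
                     "content": "You review kernel diffs. Answer YES or NO: "
                                "does this commit quietly fix a security bug?"},
                    {"role": "user", "content": diff[:100_000]},  # crude truncation
                ],
            )
            return resp.choices[0].message.content

        for c in new_commits("v6.12"):  # placeholder tag
            print(c, looks_like_security_fix(c))

    The point is not this particular script but that the marginal cost per commit is now a cheap API call, which is exactly why patch-as-disclosure stopped being deniable.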

  • Scrum in the age of AI

    Robert Glaser arguing that Scrum was created before AI, under constraints that, in the age of AI, no longer hold.

    Sprint planning, estimation, standups, user stories, ticket grooming, handoffs, all the ceremony around coordination and risk reduction. Reasonable, given the constraints. If a single iteration takes days or weeks, you need structures that prevent people from wasting too many of them.

    But agentic engineering changes the economics: It makes more options materializable! It lets teams move from intent to prototype to evaluation much faster. It lets product people see working software earlier. It lets engineers test more hypotheses before committing. It does not magically make delivery easy, but it moves the constraint away from implementation and toward intent, verification, judgment, and feedback.

    The awkward thing is that many organizations spent twenty years calling themselves agile while preserving the organizational reflexes agile was supposed to remove. Now AI makes real agility more plausible, and the system still asks for two-week sprint commitments, handoff documents, and all the stuff that assumes iteration is scarce.

    That is the ceremony graveyard again, but now at adoption level. The loop can move faster than the organization can metabolize what the loop learned.

  • Futures

    Paul Krugman explains how corruption in the futures market ends up destroying its risk-reducing benefits.

    First, ask yourself what purpose is served by the oil futures market. Unlike the prediction markets Polymarket and Kalshi, the oil futures market is not intended to be mainly a vehicle for gambling. Instead, it is a market that serves to reduce risk through hedging.

    Here’s how it works. There are people and institutions, such as oil producers, who will need to sell oil at a future date. They want to lock in the price today on those future sales. There are also people and institutions, such as airlines, who have a future need for oil and would like to lock in the price today. Thus the futures market lets both sellers and buyers of oil eliminate a major source of risk – fluctuations in the price of oil. This reduces uncertainty in the economy as a whole.

    But what if there are substantial players in the futures market with inside information? Then if you are, say, a corporation trying to lock in the price of oil you plan to buy next month, you may not be making a mutually beneficial deal with future sellers. You may, instead, be being played for a sucker — paying what in retrospect will have been an excessive price — by people who know what’s about to appear in the president’s social media feed.

    The same could apply to sellers of oil futures, although the examples of insider trading we know about involved Trump insiders getting ahead of falling, not rising, prices.

    Either way, the effect of traders’ suspicion that they may be losers in a rigged game will be to make them reluctant to play at all — reluctant either to buy or to sell oil futures. And this will mean losing the risk-reducing benefits of a properly functioning futures market.
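
    The hedge mechanics are worth making concrete. A toy calculation with my own numbers, not Krugman’s: an airline that goes long a future at $80 a barrel pays an effective $80 wherever the spot price lands, because the future’s payoff exactly offsets the spot purchase.

        FUTURES_PRICE = 80.0  # $/barrel locked in today

        for spot_at_delivery in (60.0, 80.0, 100.0):
            futures_pnl = spot_at_delivery - FUTURES_PRICE   # payoff on the long future
            effective_cost = spot_at_delivery - futures_pnl  # spot purchase net of the hedge
            print(f"spot ${spot_at_delivery:.0f} -> effective cost ${effective_cost:.0f}")

    Always $80. Insiders don’t change that payoff algebra; they skew the price at which you can lock it in, which is what drains the market of hedgers.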

  • Risk profile

    Sneha Rege, writing for Freefincal, explains why measuring your risk profile is much, much more nuanced than the 10-point questionnaires available out there.

    It is easy to dismiss risk profiling as a formality, a box to tick before you start “actual investing.” That mindset is part of the problem.

    Your risk profile is not a label. It is the quiet answer to a question you can only really test in a downturn: would I sit on a 30% drawdown across a portfolio worth several years of my savings, without acting on impulse? Most of us think we know the answer. Very few actually do.

    A ten-click quiz cannot tell you this. Neither can your YouTube watch history nor your Instagram saves on “small-cap funds with the highest Sortino Ratio”. Reading about volatility is not the same as living through it. Knowing the theory of drawdowns is not the same as watching a chunk of your net worth disappear on a Tuesday afternoon.

    […]

    your risk appetite is not fixed. As your portfolio grows, the rupee value of every drawdown grows with it. A 30% fall on five lakh feels very different from a 30% fall on fifty lakh, and different again at five crore. The percentage is identical; the experience is not. You have to keep asking yourself whether you still have the stomach for it as the absolute numbers change. 

    Life events shift this, too. A new home loan, a child, an ageing parent, a job change, any of these can quietly redraw your capacity for risk without you noticing.

  • Tab

    Raymond Chen recalling the conflict between Microsoft and IBM over the TAB key.

    A colleague recalls that while he was assigned to the IBM offices in Boca Raton, Florida, there was a dispute over what key should be used to move from one field to another in dialog boxes. The folks at IBM were not happy with my colleague’s decision to use the TAB key, so they asked him to escalate the issue to his manager back in Redmond.

    My colleague’s manager replied, “The reason you are in Boca is to make these decisions so I don’t have to be in Boca.”

    My colleague rephrased this reply in a more corporate manner before passing it on to IBM: “Microsoft supports the use of the TAB key for this purpose.”

    Unsatisfied, the IBM folks escalated the issue up their organizational chain for several levels, and replied that their VP (who was around seven levels of management above the programmers) was absolutely opposed to the use of the TAB key for this purpose, and they wanted confirmation from the equivalent-level manager at Microsoft that Microsoft stood by the choice of the TAB key.

    My colleague replied, “Bill Gates’s mother is not interested in the TAB key.”

    This apparently ended the discussion, and the TAB key stayed.

    Ha!

  • Accountable

    Christine Lemmer-Webber talking about her meeting with Gerald Sussman.

    At some point Sussman expressed how he thought AI was on the wrong track. He explained that he thought most AI directions were not interesting to him, because they were about building up a solid AI foundation, then the AI system runs as a sort of black box. “I’m not interested in that. I want software that’s accountable.” Accountable? “Yes, I want something that can express its symbolic reasoning. I want it to tell me why it did the thing it did, what it thought was going to happen, and then what happened instead.” He then said something that took me a long time to process, and at first I mistook for being very science-fiction’y, along the lines of, “If an AI driven car drives off the side of the road, I want to know why it did that. I could take the software developer to court, but I would much rather take the AI to court.”

    This discussion happened in 2015. Not 2025. 2015. Thanks to Hacker News for bringing this up.