• Perpetual beta and perpetual uncertainty

    Charlie Warzel talking about how generative AI’s perpetual beta has put us all in perpetual uncertainty.

    The world that ChatGPT built is a world defined by a particular type of precarity. It is a world that is perpetually waiting for a shoe to drop. Young generations feel this instability acutely as they prepare to graduate into a workforce about which they are cautioned that there may be no predictable path to a career. Older generations, too, are told that the future might be unrecognizable, that the marketable skills they’ve honed may not be relevant. Investors are waiting too, dumping unfathomable amounts of capital into AI companies, data centers, and the physical infrastructure that they believe is necessary to bring about this arrival. It is, we’re told, a race—a geopolitical one, but also a race against the market, a bubble, a circular movement of money and byzantine financial instruments and debt investment that could tank the economy. The AI boosters are waiting. They’ve created detailed timelines for this arrival. Then the timelines shift.

    We are waiting because a defining feature of generative AI, according to its true believers, is that it is never in its final form. Like ChatGPT before its release, every model in some way is also a “low-key research preview”—a proof of concept for what’s really possible. You think the models are good now? Ha! Just wait. Depending on your views, this is trademark showmanship, a truism of innovation, a hostage situation, or a long con. Where you fall on this rapture-to-bullshit continuum likely tracks with how optimistic you are for the future. But you are waiting nonetheless—for a bubble to burst, for a genie to arrive with a plan to print money, for a bailout, for Judgment Day. In that way, generative AI is a faith-based technology.

    It doesn’t matter that the technology is already useful to many, that it can code and write marketing copy and complete basic research tasks. Because Silicon Valley is not selling useful; it’s selling transformation—with all the grand promises, return on investment, genuine risk, and collateral damage that entails. And even if you aren’t buying it, three years out, you’re definitely feeling it.

  • Speed vs Intelligence

    Sami Bahri emphasising that while AI accelerates the process, the intelligence to set up and monitor that process still requires a human.

    Intelligence implies wisdom, context, and nuance. While AI models are simulating reasoning better every day, in a business context, they are fundamentally pattern-matching engines. They excel at acceleration.

    • The Old Way: An analyst reads 50 contracts (unstructured), highlights risks based on gut feeling (unstructured process), and summarizes them in 3 days.
    • The AI Way: An AI scans 50 contracts and extracts specific risk clauses based on defined parameters in 3 minutes.

    The process (Review Contracts -> Identify Risk -> Summarize) hasn’t changed, but it had to be rigorously defined for the AI to work. The intelligence (knowing what a “risk” actually means) still requires human governance. What has changed is the velocity.
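    The "AI Way" above can be sketched as a minimal pipeline. This is my own illustration, not from the article: the names (RISK_PATTERNS, extract_risk_clauses, summarize) are hypothetical, and a real system would use an LLM rather than regular expressions. But the shape of the argument holds: the risk definitions are the human-supplied intelligence; the code only supplies velocity.

    ```python
    import re

    # The "intelligence": a human decides, up front, what counts as a risk.
    # These patterns are illustrative placeholders, not a real risk taxonomy.
    RISK_PATTERNS = {
        "auto_renewal": re.compile(r"automatically renew", re.IGNORECASE),
        "unlimited_liability": re.compile(r"unlimited liability", re.IGNORECASE),
        "unilateral_termination": re.compile(r"terminate .* at any time", re.IGNORECASE),
    }

    def extract_risk_clauses(contract_text):
        """Identify risks: flag clauses matching the human-defined patterns."""
        flagged = []
        for clause in contract_text.split("."):
            for risk, pattern in RISK_PATTERNS.items():
                if pattern.search(clause):
                    flagged.append((risk, clause.strip()))
        return flagged

    def summarize(contracts):
        """Summarize: count how often each risk type appears across contracts."""
        summary = {}
        for text in contracts:
            for risk, _ in extract_risk_clauses(text):
                summary[risk] = summary.get(risk, 0) + 1
        return summary

    contracts = [
        "This agreement will automatically renew each year. Vendor assumes unlimited liability.",
        "Either party may terminate this contract at any time without notice.",
    ]
    print(summarize(contracts))
    # {'auto_renewal': 1, 'unlimited_liability': 1, 'unilateral_termination': 1}
    ```

    The process (review, identify, summarize) is unchanged from the analyst's; what the automation demanded is that every step be defined rigorously enough to execute.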

  • Removing friction

    Martin Fowler’s foreword on the book Frictionless.

    The key to this book is that they don’t think in terms of how to whip people into greater productivity, but how to find the sources of friction that slow them down. Friction is when I have to submit a pull request that sits for a couple of days while I forget about the code, or spend two days wrangling some infrastructure that ought to be a simple API call. Smoothing away these points of friction is the essence of improving Developer Experience – and thus speeding up getting useful software into the hands of its users.

    They describe effective developer experience in terms of three elements: feedback loops, flow state, and cognitive load. We can only find out whether we are on the right path by getting rapid feedback. The longer the delay before that blue dot moves on my phone-map, the longer I walk in the wrong direction before realizing my mistake. If our feedback is rapid, we can remain in the second element, a flow state, where we can smoothly and rapidly get things done, improving our products and our motivation. Flow also depends on our ability to understand what we need to do, which means we must be wary of being overwhelmed by cognitive load, whether it comes in the form of poorly structured code, flaky tests, or interruptions that break our flow.

    Focusing on developer experience is about finding what gets in the way of these three elements. Improving developer experience leads to better outcomes for the business. Those lost hours wrangling with infrastructure are money wasted on developers' salaries, and revenue lost because the software took longer to get into production.

  • Voice

    Tony Alicea explaining why you shouldn’t use LLMs to write blog posts.

    If you rely on an LLM to write all your posts, you are making a mistake.

    Your voice is an asset. Not just what you want to say, but how you say it.

    Your voice is unique. It is formed from your lifetime of lived experiences. No one’s voice will be exactly like yours.

    Your voice becomes recognizable. Over many posts it becomes something people subconsciously connect with, recognize, trust, and look forward to.

    Your voice provides the framework for the impression you leave in a job interview, while networking at a meet-up, or with a co-worker.

    Update: 15-Mar-2026

    A very similar argument from Sebastian Aigner.

    When you run your message through an LLM, it will inevitably obscure what you actually wanted to say; we choose words for a reason after all – even if they’re sometimes not the right words.

    But what’s far worse is that it robs the intended recipient of the ability to actually interpret the message according to the accrued knowledge of how you write, and the subtler notes of the tone your message carries, your choice of emphasis or omission, and so on.

    As you interact with people, you build this atlas of implicit knowledge about them; it’s the reason why “…we need to talk” coming from two different people might carry vastly different meanings and emotional undercurrents. I know you, and that knowledge informs how I read your text. And if I don’t know you, the words you choose combined with the interactions we have help me build that understanding.

  • PWAs are good

    No, not in terms of user experience. In fact, they provide a subpar user experience, and sometimes they are outright user-hostile.

    On Threads, at times the profile picture fails to load or the media in the post itself fails to load.

    Profile picture did not load
    Media did not load

    And many a time I even see a loading icon if I scroll too far down. It seems pre-fetching doesn't work well in PWAs.

    Still pre-fetching data

    Twitter/X takes its own sweet time to load.

    Still loading

    YouTube on the web defaults to a lower video resolution.

    I could go on and on. And that is what makes them good.

    They are good at keeping me away from meaningless content and doom-scrolling. I use social media less because the PWA experience is bad.

    Go PWA!

  • Light day

    Piyush Gupta talking about how NASA’s Voyager 1 will soon be one light-day away from Earth. That’s one light-day, not one light-year.

    After nearly 50 years in space, NASA’s Voyager 1 is about to hit a historic milestone. By November 15, 2026, it will be 16.1 billion miles (25.9 billion km) away, meaning a radio signal will take a full 24 hours—a full light-day—to reach it. For context, a light-year is the distance light travels in a year, about 5.88 trillion miles (9.46 trillion km), so one light-day is just a tiny fraction of that.
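    The figures quoted above are easy to sanity-check. A quick sketch (the speed-of-light constant is mine, not from the article; values are rounded):

    ```python
    SPEED_OF_LIGHT_MI_S = 186_282  # miles per second, in vacuum

    distance_mi = 16.1e9  # Voyager 1's quoted distance from Earth
    light_day_mi = SPEED_OF_LIGHT_MI_S * 86_400  # miles light covers in 24 hours
    light_year_mi = light_day_mi * 365.25        # miles light covers in a year

    print(f"One light-day: {light_day_mi / 1e9:.1f} billion miles")          # ~16.1
    print(f"Signal delay: {distance_mi / SPEED_OF_LIGHT_MI_S / 3600:.1f} h") # ~24.0
    print(f"Light-day as a fraction of a light-year: {1 / 365.25:.5f}")      # ~0.00274
    ```

    So a light-day really is roughly 16.1 billion miles, and less than three-thousandths of a light-year.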

  • Best and worst case scenario for AI

    Christopher Butler’s take on the best and worst case scenario for AI.

    The best case scenario is that AI is just not as valuable as those who invest in it, make it, and sell it believe. This is a classic bubble scenario. We’ll all take a hit when the air is let out, and given the historic concentration of the market compared to previous bubbles, the hit will really hurt. The worst case scenario is that the people with the most money at stake in AI know it’s not what they say it is. If this is true, we get the bubble and fraud with compound motives.

    […]

    I don’t worry about the end of work so much as I worry about what comes after — when the infrastructure that powers AI becomes more valuable than the AI itself, when the people who control that infrastructure hold more sway over policy and resources than elected governments. I know, you can picture me wildly gesticulating at my crazy board of pins and string, but I’m really just following the money and the power to their logical conclusion.

  • Surveillance pricing

    Cory Doctorow explaining what surveillance pricing is.

    Economists praise “price discrimination” as “efficient.” That’s when a company charges different customers different amounts based on inferences about their willingness to pay. But when a company sells you something for $2 that someone else can buy for $1, they’re revaluing the dollars in your pocket at half the rate of the other guy’s.

    That’s not how economists see it, of course. When a hotel sells you a room for $50 that someone else might get charged $500 for, that’s efficient, provided that the hotelier is sure no $500 customers are likely to show up after you check in. The empty room makes them nothing, and $50 is more than nothing. There’s a kind of metaphysics at work here, in which the room that is for sale at $500 is “a hotel room you book two weeks in advance and are sure will be waiting for you when you check in” while the $50 room is “a hotel room you can only get at the last minute, and if it’s not available, you’re sleeping in a chair at the Greyhound station.”

    But what if you show up at the hotel at 9pm and the hotelier can ask a credit bureau how much you can afford to pay for the room? What if they can find out that you’re in chemotherapy, so you don’t have the stamina to shop around for a cheaper room? What if they can tell that you have a 5AM flight and need to get to bed right now? What if they charge you more because they can see that your kids are exhausted and cranky and the hotel infers that you’ll pay more to get the kids tucked into bed? What if they charge you more because there’s a wildfire and there are plenty of other people who want the room?

    The metaphysics of “room you booked two weeks ago” as a different product from “room you’re trying to book right now” break down pretty quickly once you factor in the ability of sellers to figure out how desperate you are – or merely how distracted you are – and charge accordingly. “Surveillance pricing” is the practice of spying on you to figure out how much you’re willing to spend – because you’re wealthy, because you’re desperate, because you’re distracted, because it’s payday…

    Surveillance pricing essentially lets the seller devalue the buyer’s money.

    That surveillance can be weaponized against you, through “surveillance pricing,” which is when companies raise prices based on their estimation of your desperation, which they can infer from surveillance data. Surveillance pricing lets a company reach into your wallet and devalue your money – if you are charged $10 for a burger that costs the next person $5, that means your dollar is only worth $0.50
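    Doctorow’s burger arithmetic can be stated as a one-liner. The function name here is my own illustration, not his:

    ```python
    def effective_dollar_value(base_price, your_price):
        """How much one of your dollars buys, relative to the cheapest buyer."""
        return base_price / your_price

    # The burger example from the quote: $10 for you, $5 for the next person,
    # so each of your dollars buys half as much burger.
    print(effective_dollar_value(5, 10))  # 0.5 -> your dollar is worth $0.50
    ```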

  • Writing

    This intriguing thought by thekla on writing and using AI to write.

    but on a deeper level, writing is more than just the process by which you obtain a piece of text, right? it’s also about finding out what you wanted to say in the first place, and how you wanted to say it. this post existed in my head first as a thought, then it started to gel into words, and then i tried pulling those words out to arrange them in a way that (hopefully) gets my point across. there is nothing extra there, no filler. i alone can get the thought out and writing is how i do that.

    and sure, there is a lot of text that is not written with this kind of goal. but then, perhaps, we should ask why it has to be written in the first place; and if the cost of that might be higher than just the co2 that is attributable to its creation. in a similar vein, neither should we forget to ask whether those texts that are now written by ai would indeed otherwise have been written by humans, or if there isn’t an awful lot of text produced now that simply no one would have spent their time writing before.

  • Intermediary’s information advantage

    Sandra Knispel talking about a study published in the American Economic Review.

    Two financial economists, from the University of Rochester and the University of Wisconsin–Madison respectively, created a model that explains how reputation, information, and retention interact in professions where skill is essential and performance is both visible and attributable to a specific person, particularly in fields such as law, consulting, fund asset management, auditing, and architecture. They argue that much of the professional services world operates through “intermediaries”—firms that both hire employees (also referred to as “agents” or “managers”) and market their expertise to clients—because clients can’t themselves easily judge a worker’s ability from the outset.

    […]

    At the start of an employee’s career, the firm has an advantage, Kaniel and his coauthor Dmitry Orlov contend, because the firm (“the mediator”) can assess an employee’s talent more accurately than outside clients can. During what the authors call “quiet periods,” the firm keeps those who perform adequately and pays them standard wages.

    Over time, however, an employee’s public performance—measured by successful cases, profitable investments, or well-executed projects—reduces the firm’s informational advantage. As the informational gap shrinks, the firm needs to pay some employees more because clients are now able to observe an employee’s good performance and hence update their beliefs about the employee’s skills.
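    A toy way to picture that shrinking gap — my own sketch, not the authors’ actual model: treat the client as a Bayesian observer who starts with a vague prior over the employee’s skill and updates it with each publicly visible success or failure, while the firm is assumed to know the skill from the start.

    ```python
    import random

    random.seed(7)  # fixed seed so the illustration is reproducible

    true_skill = 0.8            # probability the employee's visible work succeeds
    firm_estimate = true_skill  # the firm assesses talent accurately up front

    # Client's belief: Beta(successes + 1, failures + 1), the standard
    # conjugate prior for a success probability, starting from uniform.
    successes, failures = 0, 0
    for period in range(50):
        if random.random() < true_skill:
            successes += 1
        else:
            failures += 1

    client_estimate = (successes + 1) / (successes + failures + 2)
    gap = abs(firm_estimate - client_estimate)
    print(f"client's estimate after 50 observations: {client_estimate:.2f}")
    print(f"informational gap: {gap:.2f}")  # shrinks toward 0 as track record accrues
    ```

    Early on, the client’s uniform prior leaves the firm a large informational edge; after fifty observed outcomes, the client’s estimate sits close to the truth and the edge is nearly gone — the point at which, in the paper’s telling, the firm starts to churn.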

    “At some point, the informational advantage becomes fairly small,” says Kaniel, “and the firm says, ‘Well, I will basically start to churn. I will let go of some employees, and by doing that, I can actually extract more from the remaining ones.’”

    Ironically, to the client these churned—or strategically fired—employees look just as good as the ones whom the firm kept. Churning happens not because these employees have failed but because they may be just somewhat lower-skilled than their peers. Subsequently, churning heightens both the reputation of the firm and of the employees who remain.

    I found the concept of an informational advantage, and its gradual shrinking, intriguing.