• Leverage arbitrage

    Tushar Dadlani talking about the growing gap between how fast some actors can change the world and how fast others can respond to those changes.

    …we’re experiencing massive “leverage arbitrage” where actors with higher-order leverage can extract value from systems faster than actors with lower-order leverage can maintain those systems. Google’s compensation revolution didn’t just raise tech salaries—it systematically destroyed entrepreneurial ecosystems globally by making employment more attractive than company-building. Social media platforms don’t just connect people—they reshape democratic discourse faster than democratic institutions can adapt. AI systems aren’t just tools—they’re deployed faster than we can develop frameworks for understanding their social implications.

    And this arbitrage is making us perpetual junior programmers.

  • Virtuosity in the world of AI

    Drew Breunig talking about virtuosity, and how quickly amazing new developments in the world of AI are becoming, meh.

    virtuosity can only be achieved when the audience can perceive the risks being taken by the performer.

    A DJ who walks on stage and hits play is not likely to be perceived as a virtuoso. While a pianist who is able to place their fingers perfectly among a minefield of clearly visible wrong keys is without question a virtuoso. I think this idea carries over to sports as well and can partially explain the decline of many previously popular sports and the rise of video game streaming. We watch the things that we have personally experienced as being difficult. That is essential context to appreciate a performance.

    Initially, many AI applications were, surprisingly, embraced as incredible performances. The images generated by DALL-E were usually not more impressive than those of professional illustrators. They were instead incredibly impressive because they had been achieved by a computer program. The same goes for video-generating AI demos; none of their video clips are aesthetic or narrative achievements. They are impressive because they were generated by software. But even here, the AI is not the virtuoso. The virtuosos are the teams and companies building these models.

    We’ve been able to watch this sheen come off very quickly. Generating an image from a chatbot is no longer very impressive to our friends. It is a novelty. And this half-life, the time it takes for a model’s output to become merely novel, is shortening with every release.

  • Proof of thought

    Alex Martsinovich talking about how writing has become incredibly cheap, and the AI etiquette we need to start following.

    For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity.

    Now, AI has made text very, very, very cheap. Not only text, in fact. Code, images, video. All kinds of media. We can’t rely on proof-of-thought anymore. Any text can be AI slop. If you read it, you’re injured in this war. You engaged and replied – you’re as good as dead. The dead internet is not just dead – it’s poisoned. So what do we do?

  • We are all junior programmers now

    Generative AI is the fastest-moving technology I have seen to date. Something new happens every single day, and it’s hard to keep track of the updates. Scott Werner extends this thought and theorises that, because of this, we are all junior programmers now.

    We’re all junior developers again. But not junior in the traditional sense, where senior is waiting after enough years pass. We’re junior in a permanent sense, where the technology evolves faster than expertise can accumulate.

    It’s like being a professional surfer on an ocean that keeps changing its physics. Just when you think you understand waves, they start moving sideways. Or backwards. Or turning into birds.

    This is either terrifying or liberating, depending on your relationship with control.

    Scott also touches upon other interesting aspects, like “Time Dilation in the Age of AI”.

  • Documentation

    Rachel Kroll talking about the benefits of documentation.

    I used to be on a team that was responsible for the care and feeding of a great many Linux boxes which together constituted the “web tier” for a giant social network. You know, the one with all of the cat pictures… and later the whole genocide thing and enabling fascism. Yeah, them.

    Anyway, given that we had a six-digit number of machines that was steadily climbing and people were always experimenting with stuff on them, with them, and under them, it was necessary to apply some balance to keep things from breaking too often. There was a fine line between “everything’s broken” and “it’s impossible to roll anything out so the business dies”.

    At some point, I realized that if I wrote a wiki page and documented the things that we were willing to support, I could wait about six months and then it would be like it had always been there. Enough people went through the revolving doors of that place such that six months’ worth of employee turnover was sufficient to make it look like a whole other company. All I had to do was write it, wait a bit, then start citing it when needed.

    Loved the reference to revolving doors.

  • The invisible price of defection

    Anurag Bansal, on The Daily Brief podcast, talking about how well-intentioned actions like subsidising solar panels can end up hurting the most vulnerable sections of society.

    …if your house generated more electricity than it used, you could make money off it by selling it away. Now, if you combine that with the absence of any major import duties on solar panels, the financial case for rooftop solar became very, very attractive. But this is also where things started getting a little bit complicated.

    See, electricity grids are built for scale in general. They are designed to serve millions of people in a centralized and predictable way. That, by the way, is how the costs stay reasonable: the fixed costs of generation, transmission, and distribution are spread across a large number of paying users.

    But the thing is, when that scale breaks, which is when wealthier users exit the system, the math suddenly starts falling apart. And that is exactly what is playing out right now in Pakistan. The richest households, who paid the most for grid electricity, are suddenly leaving the grid system altogether.

    That means that the burden of paying for the fixed cost of electricity infrastructure is falling on a smaller, poorer group of users: people who cannot afford to install solar panels but still rely on the grid. These users are now left to cover the cost of not just their own electricity, but also the expensive take-or-pay contracts that the government signed many years ago. And to remind you, these contracts do not care about actual consumption.

    Whether or not people use electricity from the grid, the government is still obligated to pay the power producers. So even though solar is reducing demand from the grid, the cost of that unused electricity is still very much there, and it is being passed on to whoever is left on the system. That, ladies and gentlemen, is the invisible price of defection.
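
    The arithmetic behind this is brutally simple: the fixed bill stays fixed while the paying base shrinks, so every defection raises the charge for everyone who remains, which nudges the next-richest household to defect too. A toy sketch in Python, with made-up numbers that are not from the podcast:

        # Toy model of the defection spiral: a fixed bill split across a
        # shrinking base of paying users. All numbers are invented for
        # illustration; they are not from the podcast.
        fixed_cost = 1_000_000  # annual fixed cost (take-or-pay contracts, wires, plants)
        users = 10_000          # households still paying for grid electricity

        for year in range(5):
            per_user = fixed_cost / users
            print(f"year {year}: {users:>6,} users pay ${per_user:,.0f} each")
            users = int(users * 0.9)  # wealthiest 10% defect to rooftop solar each year;
                                      # the fixed cost does not shrink with them

    Each pass through the loop is one turn of the spiral: the per-user charge climbs even though not a single new cost was added.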

  • Anthropomorphization of AI

    Halvar Flake talking about anthropomorphization of AI.

    The moment that people ascribe properties such as “consciousness” or “ethics” or “values” or “morals” to these learnt mappings is where I tend to get lost. We are speaking about a big recurrence equation that produces a new word, and that stops producing words if we don’t crank the shaft.

    To me, wondering if this contraption will “wake up” is similarly bewildering as if I was to ask a computational meteorologist if he isn’t afraid that his meteorological numerical calculation will “wake up”.

    […]

    Instead of saying “we cannot ensure that no harmful sequences will be generated by our function, partially because we don’t know how to specify and enumerate harmful sequences”, we talk about “behaviors”, “ethical constraints”, and “harmful actions in pursuit of their goals”. All of these are anthropocentric concepts that – in my mind – do not apply to functions or other mathematical objects. And using them muddles the discussion, and our thinking about what we’re doing when we create, analyze, deploy and monitor LLMs.
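
    Flake’s “recurrence equation” framing is literal. Strip away the anthropomorphic language and generation is just a loop that feeds the function’s output back into it until something makes the loop stop. A minimal sketch, with a hypothetical next_token_logits callable standing in for the learnt mapping (not any real library’s API):

        # Autoregressive generation as a bare recurrence: new word = f(words so far).
        # next_token_logits is a hypothetical stand-in for the learnt mapping.
        def generate(next_token_logits, prompt_tokens, eos_id, max_steps=100):
            tokens = list(prompt_tokens)
            for _ in range(max_steps):
                logits = next_token_logits(tokens)  # evaluate the mapping on the history
                next_id = max(range(len(logits)), key=lambda i: logits[i])  # greedy pick
                if next_id == eos_id:
                    break                           # stop cranking the shaft
                tokens.append(next_id)              # the recurrence: output becomes input
            return tokens

        # Toy "model": favour token 1 until the sequence is five long, then favour EOS (0).
        def toy_logits(tokens):
            return [1.0, 0.0] if len(tokens) >= 5 else [0.0, 1.0]

        print(generate(toy_logits, [1], eos_id=0))  # -> [1, 1, 1, 1, 1]

    Nothing in that loop has goals or values; stop calling the function and nothing is left “waiting”.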

  • Curse of vision

    Maalvika Bhat explaining the curse of vision in her post about how being too ambitious is a clever form of self-sabotage.

    We are perhaps the only species that suffers from our own imagination. A bird building a nest does not first conceive of the perfect nest and then suffer from the inadequacy of twigs and mud. A spider spinning a web does not pause, paralyzed by visions of geometric perfection beyond her current capabilities. But humans? We possess the strange gift of being haunted by visions of what could be, tormented by the gap between our aspirations and our abilities.

    […]

    Watch a child draw. They create fearlessly, unselfconsciously, because they have not yet developed the curse of sophisticated taste! They draw purple trees and flying elephants with the confidence of someone who has never been told that trees aren’t purple, that elephants don’t fly. But somewhere around age eight or nine, taste arrives like a harsh critic, and suddenly the gap opens. The child can see that their drawing doesn’t match the impossible standard their developing aesthetic sense has conjured.

    This is what leads most of us to stop drawing. Not because we lack talent, but because we’ve developed the ability to judge before we’ve developed the ability to execute. We become connoisseurs of our own inadequacy.

    And this is where our minds, in their desperate attempt, devise an elegant escape. Faced with this unbearable gap, we develop what researchers call “productive avoidance” — staying busy with planning, researching, and dreaming while avoiding the vulnerable act of creating something concrete that might fail. It feels like work because it engages all our intellectual faculties. But it functions as avoidance because it protects us from the terrifying possibility of creating something imperfect. I see this in wannabe founders listening to podcasts on loop, wannabe TikTokkers watching hours of videos as “research,” and wannabe novelists who spend years developing character backstories for books they never begin.

    The spider doesn’t face this problem. It spins webs according to ancient genetic instructions, each one remarkably similar to the last. But human creativity requires us to navigate the treacherous territory between what we can imagine and what we can actually do. We are cursed with visions of perfection and blessed with the capacity to fail toward them.

  • Layoff paradox

    Daniel Sada talking about the layoff paradox and why layoffs are easier than firing an employee.

    To fire someone for performance, managers must document everything meticulously. Every missed deadline, every subpar deliverable, every coaching conversation needs to be recorded. HR requires a paper trail that can withstand legal scrutiny. This process can take months or even years, during which the underperforming employee continues to collect their salary and potentially drag down team morale.

    The legal risk is real. Even in at-will states, wrongful termination lawsuits can be expensive and time-consuming. Companies worry about discrimination claims, especially if the fired employee belongs to a protected class. Better to have an ironclad documentation trail than face a costly legal battle.

    Layoffs, paradoxically, are “cleaner.” When you eliminate entire roles or teams, you sidestep the performance documentation requirements. You’re not firing someone for being bad at their job, you’re eliminating the job itself. The legal risk is minimal, and you can even look compassionate by offering severance packages and transition support. Employees can even claim unemployment and keep their income for months!

    For VPs and executives, layoffs solve multiple problems at once: they can eliminate underperformers without the messy documentation process, reduce headcount to hit financial targets, and maintain the narrative that they’re making “tough but strategic decisions” rather than admitting they failed to manage performance effectively.

    The bitter irony? Getting laid off is often better for the employee than being fired. Layoffs typically come with severance, extended healthcare, and the ability to say you were “affected by restructuring” rather than “terminated for cause.” You get more time to find your next job, and your professional reputation remains intact.

  • Disposable code

    Charlie Guo’s thoughts on programming and AI.

    Code has historically been something with a very high upfront cost to create and nearly zero cost to distribute. That’s defined much of the economic models of Silicon Valley – VC-funded startups invest heavily in creating products that can scale near-infinitely.

    But we’re turning that model on its head with the ability to create software for a fraction of what it used to cost. And as someone who (at least in part) considers himself a craftsman, I’m learning to embrace cheap, single-use code. I’m not sure how I feel about it – we’re now dealing with the environmental consequences of single-use physical products, despite their convenience. But there’s something fundamentally different about writing a script you’ll use once and throw away versus carefully architecting a system meant to last for years.

    What’s more, writing custom software that works used to be only within the domain of software engineers who had either formally studied or had invested hours into teaching themselves the arcane knowledge of compilers, networking, algorithms, and more. Everyone else had to use off-the-shelf products or “no code” platforms that heavily constrained what you could do – like going from a full palette to a paint-by-numbers system.

    Now, almost anyone with a bit of product sense can ship something new. Indie hackers don’t have to worry about hiring a whole dev team to get to an MVP, and designers and PMs can vibe code internal prototypes in an afternoon. None of this code will be perfect, but I think that’s sort of the point – it’s an entirely different beast from the type of code I’m used to working with. And I’m reasonably sure I’m going to have to evolve my way of working.

    So what happens when AI starts to think that disposable code is all humans want? Do we end up polluting the knowledge base for LLMs just like we polluted our environment?
