• Proof of thought

    Alex Martsinovich talking about how writing has become incredibly cheap, and the AI etiquette we need to start following.

    For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity.

    Now, AI has made text very, very, very cheap. Not only text, in fact. Code, images, video. All kinds of media. We can’t rely on proof-of-thought anymore. Any text can be AI slop. If you read it, you’re injured in this war. You engaged and replied – you’re as good as dead. The dead internet is not just dead, it’s poisoned. So what do we do?

  • We are all junior programmers now

    Generative AI is the fastest-moving technology I have seen to date. Something or other is happening every single day, and it’s hard to keep track of the updates. Scott Werner extends this thought and theorises that, because of this, we are all junior programmers now.

    We’re all junior developers again. But not junior in the traditional sense, where senior is waiting after enough years pass. We’re junior in a permanent sense, where the technology evolves faster than expertise can accumulate.

    It’s like being a professional surfer on an ocean that keeps changing its physics. Just when you think you understand waves, they start moving sideways. Or backwards. Or turning into birds.

    This is either terrifying or liberating, depending on your relationship with control.

    Scott also touches upon other interesting aspects, like Time Dilation in the Age of AI.

  • Documentation

    Rachel Kroll talking about the benefits of documentation.

    I used to be on a team that was responsible for the care and feeding of a great many Linux boxes which together constituted the “web tier” for a giant social network. You know, the one with all of the cat pictures… and later the whole genocide thing and enabling fascism. Yeah, them.

    Anyway, given that we had a six-digit number of machines that was steadily climbing and people were always experimenting with stuff on them, with them, and under them, it was necessary to apply some balance to keep things from breaking too often. There was a fine line between “everything’s broken” and “it’s impossible to roll anything out so the business dies”.

    At some point, I realized that if I wrote a wiki page and documented the things that we were willing to support, I could wait about six months and then it would be like it had always been there. Enough people went through the revolving doors of that place such that six months’ worth of employee turnover was sufficient to make it look like a whole other company. All I had to do was write it, wait a bit, then start citing it when needed.

    Loved the reference to revolving doors.

  • The invisible price of defection

    Anurag Bansal, on The Daily Brief podcast, talking about how well-intentioned actions like subsidising solar panels can end up hurting the most vulnerable sections of society.

    …if your house generated more electricity than it used, you could make money off it by selling it away. Now, if you combine that with the absence of any major import duties on solar panels, the financial case for a rooftop solar became very, very attractive. But this is also where things started getting a little bit complicated.

    See, electricity grids are built for scale in general. They are designed to serve millions of people in a centralized and predictable way. That’s how, by the way, the costs stay reasonable because the fixed costs of generation, transmission, and distribution are spread across a large number of paying users.

    But the thing is, when that scale breaks, which is when wealthier users exit the system, the math suddenly starts falling apart. And that is exactly what is right now playing out in Pakistan. The richest households who paid the most amount of money for grid electricity are suddenly leaving the grid system altogether.

    That means that the burden of paying for the fixed cost of electricity infrastructure is falling on a smaller, poorer group of users: people who cannot afford to install solar panels but still rely on the grid. These users are now left to cover the cost of not just their own electricity, but also the expensive take-or-pay contracts that the government had signed many years ago. And to remind you, these contracts do not care about actual consumption.

    Whether or not people use electricity from the grid, the government is still obligated to pay the power producers. So even though solar is reducing demand from the grid, the cost of that unused electricity is still very much there, and it is being passed on to whoever is left on the system. That, ladies and gentlemen, is the invisible price of defection.
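
    To make the cost-shifting arithmetic concrete, here is a minimal sketch with invented numbers (not figures from the episode): the fixed costs stay put, so every household that defects to rooftop solar raises the share borne by everyone who remains.

    ```python
    # Hypothetical figures, purely to illustrate the mechanic described above.
    FIXED_COSTS = 1_000_000_000   # annual fixed cost of generation, transmission, distribution
    HOUSEHOLDS = 10_000_000       # households originally sharing that fixed cost

    def fixed_cost_per_user(defection_rate: float) -> float:
        """Fixed cost borne by each remaining grid user after some share defects to solar."""
        remaining = HOUSEHOLDS * (1 - defection_rate)
        return FIXED_COSTS / remaining

    for rate in (0.0, 0.1, 0.2, 0.3):
        print(f"{rate:.0%} defection -> {fixed_cost_per_user(rate):,.0f} per remaining household")

    # 0% defection -> 100 per remaining household
    # 10% defection -> 111 per remaining household
    # 20% defection -> 125 per remaining household
    # 30% defection -> 143 per remaining household
    ```

    And because the contracts are take-or-pay, the numerator does not shrink as consumption falls, so the division above only gets worse for those left behind.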

  • Anthropomorphization of AI

    Halvar Flake talking about the anthropomorphization of AI.

    The moment that people ascribe properties such as “consciousness” or “ethics” or “values” or “morals” to these learnt mappings is where I tend to get lost. We are speaking about a big recurrence equation that produces a new word, and that stops producing words if we don’t crank the shaft.

    To me, wondering if this contraption will “wake up” is similarly bewildering as if I was to ask a computational meteorologist if he isn’t afraid that his meteorological numerical calculation will “wake up”.

    […]

    Instead of saying “we cannot ensure that no harmful sequences will be generated by our function, partially because we don’t know how to specify and enumerate harmful sequences”, we talk about “behaviors”, “ethical constraints”, and “harmful actions in pursuit of their goals”. All of these are anthropocentric concepts that – in my mind – do not apply to functions or other mathematical objects. And using them muddles the discussion, and our thinking about what we’re doing when we create, analyze, deploy and monitor LLMs.
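
    For readers who want the non-anthropomorphic framing spelled out, here is a rough sketch of the recurrence being described: a function that maps the sequence so far to one more token, cranked in a loop. The `next_token` below is a toy stand-in of my own, not any real model’s API.

    ```python
    import random

    def next_token(sequence: list[str]) -> str:
        """Toy stand-in for the learnt mapping: given the text so far, return one more token.
        A real model would score a large vocabulary and sample from it."""
        return random.choice(["the", "function", "emits", "another", "token", "."])

    def generate(prompt: list[str], max_tokens: int = 20) -> list[str]:
        sequence = list(prompt)
        for _ in range(max_tokens):        # each turn of the crank yields exactly one token
            token = next_token(sequence)
            sequence.append(token)         # the recurrence: the output is fed back in as input
            if token == ".":               # stop cranking and no more words appear
                break
        return sequence

    print(" ".join(generate(["write", "me", "a", "sentence"])))
    ```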

  • Curse of vision

    Maalvika Bhat explaining the curse of vision in her post about how being too ambitious is a clever form of self-sabotage.

    We are perhaps the only species that suffers from our own imagination. A bird building a nest does not first conceive of the perfect nest and then suffer from the inadequacy of twigs and mud. A spider spinning a web does not pause, paralyzed by visions of geometric perfection beyond her current capabilities. But humans? We possess the strange gift of being haunted by visions of what could be, tormented by the gap between our aspirations and our abilities.

    […]

    Watch a child draw. They create fearlessly, unselfconsciously, because they have not yet developed the curse of sophisticated taste! They draw purple trees and flying elephants with the confidence of someone who has never been told that trees aren’t purple, that elephants don’t fly. But somewhere around age eight or nine, taste arrives like a harsh critic, and suddenly the gap opens. The child can see that their drawing doesn’t match the impossible standard their developing aesthetic sense has conjured.

    This is what leads most of us to stop drawing. Not because we lack talent, but because we’ve developed the ability to judge before we’ve developed the ability to execute. We become connoisseurs of our own inadequacy.

    And this is where our minds, in their desperate attempt, devise an elegant escape. Faced with this unbearable gap, we develop what researchers call “productive avoidance” — staying busy with planning, researching, and dreaming while avoiding the vulnerable act of creating something concrete that might fail. It feels like work because it engages all our intellectual faculties. But it functions as avoidance because it protects us from the terrifying possibility of creating something imperfect. I see this in wannabe founders listening to podcasts on loop, wannabe TikTokkers watching hours of videos as “research,” and wannabe novelists who spend years developing character backstories for books they never begin.

    The spider doesn’t face this problem. It spins webs according to ancient genetic instructions, each one remarkably similar to the last. But human creativity requires us to navigate the treacherous territory between what we can imagine and what we can actually do. We are cursed with visions of perfection and blessed with the capacity to fail toward them.

  • Layoff paradox

    Daniel Sada talking about the layoff paradox and why layoffs are easier than firing an employee.

    To fire someone for performance, managers must document everything meticulously. Every missed deadline, every subpar deliverable, every coaching conversation needs to be recorded. HR requires a paper trail that can withstand legal scrutiny. This process can take months or even years, during which the underperforming employee continues to collect their salary and potentially drag down team morale.

    The legal risk is real. Even in at-will states, wrongful termination lawsuits can be expensive and time-consuming. Companies worry about discrimination claims, especially if the fired employee belongs to a protected class. Better to have an ironclad documentation trail than face a costly legal battle.

    Layoffs, paradoxically, are “cleaner.” When you eliminate entire roles or teams, you sidestep the performance documentation requirements. You’re not firing someone for being bad at their job, you’re eliminating the job itself. The legal risk is minimal, and you can even look compassionate by offering severance packages and transition support. Employees can even claim unemployment and keep their income for months!

    For VPs and executives, layoffs solve multiple problems at once: they can eliminate underperformers without the messy documentation process, reduce headcount to hit financial targets, and maintain the narrative that they’re making “tough but strategic decisions” rather than admitting they failed to manage performance effectively.

    The bitter irony? Getting laid off is often better for the employee than being fired. Layoffs typically come with severance, extended healthcare, and the ability to say you were “affected by restructuring” rather than “terminated for cause.” You get more time to find your next job, and your professional reputation remains intact.

  • Disposable code

    Charlie Guo’s thoughts on programming and AI.

    Code has historically been something with a very high upfront cost to create and nearly zero cost to distribute. That’s defined much of the economic models of Silicon Valley – VC-funded startups invest heavily in creating products that can scale near-infinitely.

    But we’re turning that model on its head with the ability to create software for a fraction of what it used to cost. And as someone who (at least in part) considers himself a craftsman, I’m learning to embrace cheap, single-use code. I’m not sure how I feel about it – we’re now dealing with the environmental consequences of single-use physical products, despite their convenience. But there’s something fundamentally different about writing a script you’ll use once and throw away versus carefully architecting a system meant to last for years.

    What’s more, writing custom software that works used to be only within the domain of software engineers who had either formally studied or had invested hours into teaching themselves the arcane knowledge of compilers, networking, algorithms, and more. Everyone else had to use off-the-shelf products or “no code” platforms that heavily constrained what you could do – like going from a full palette to a paint-by-numbers system.

    Now, almost anyone with a bit of product sense can ship something new. Indie hackers don’t have to worry about hiring a whole dev team to get to an MVP, and designers and PMs can vibe code internal prototypes in an afternoon. None of this code will be perfect, but I think that’s sort of the point – it’s an entirely different beast from the type of code I’m used to working with. And I’m reasonably sure I’m going to have to evolve my way of working.

    So what happens when AI starts to think that disposable code is all humans want? Do we end up polluting the knowledge base for LLMs just like we did with our environment?

  • Knowledge in the world vs knowledge in the head

    Philip Kortum explaining ‘knowledge in the world’ and ‘knowledge in the head’ with some interesting examples.

    From an interface design standpoint, knowledge in the world simply means that the controls you need are visible, and the identification and operation of these controls can be done through recognition rather than recall. Drop-down computer menus on modern graphical user interfaces […] are a good example. Users don’t have to memorize the command or even its specific location. Rather, they can browse through the menu command structure until they find the control they want to use.

    By contrast, knowledge in the head means that the user must memorize the necessary commands and controls for the operation of a specific device. In a bygone era many, if not most, computing tasks required significant knowledge in the head. You had to know the syntax or the command language to perform even rudimentary tasks. For example, determining what files were on your computer required you to have specific knowledge in the head to perform that task. If you were using a DOS-based system, you had to know the command was DIR, and that if you switched systems, the command was likely different. These kinds of systems were the ultimate hidden controls, as you needed to have complete knowledge in the head to accomplish anything.
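
    A toy sketch of the contrast (my own illustration, not from Kortum’s piece): the first interface demands recall of a memorized command, the second puts the available actions on screen so recognition is enough.

    ```python
    FILES = ["notes.txt", "budget.xlsx", "photo.jpg"]

    def knowledge_in_the_head() -> None:
        """DOS-style: nothing on screen tells you what to type, so you must already know DIR."""
        command = input("> ")
        if command.strip().upper() == "DIR":
            print("\n".join(FILES))
        else:
            print("Bad command or file name")

    def knowledge_in_the_world() -> None:
        """Menu-style: every available action is listed, so the interface carries the knowledge."""
        actions = {"1": "List files", "2": "Quit"}
        for key, label in actions.items():
            print(f"{key}. {label}")
        choice = input("Pick a number: ")
        if choice == "1":
            print("\n".join(FILES))
    ```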

  • Just-in-time software

    Shayne Sweeney talking about the future of AI-written software.

    Picture software that crystallizes only when a user shows up, tuned to their request, then evaporates when no one needs it. No pre-compiled artifacts, no fixed language runtime, just live thoughts turning into running systems. In that world the concept of “code” dissolves into something closer to conversation history.

    At that point, the most valuable programmers will be the ones who articulate problems clearly enough that the machine can solve them safely. Syntax will matter about as much as spark-plug gap settings matter to the driver who only presses “Start.”

    […]

    So keep your lint rules if they spark joy, but start practicing a different skill: explaining your intent so clearly that two billion transistors and a trillion-parameter model can’t possibly misunderstand you. The next diff you review may be measured in behaviors shipped, not lines changed. And that’s a future worth pressing “Start” for.
