• Anthropomorphization of AI

    Halvar Flake talking about the anthropomorphization of AI.

    The moment that people ascribe properties such as “consciousness” or “ethics” or “values” or “morals” to these learnt mappings is where I tend to get lost. We are speaking about a big recurrence equation that produces a new word, and that stops producing words if we don’t crank the shaft.

    To me, wondering if this contraption will “wake up” is similarly bewildering as if I were to ask a computational meteorologist if he isn’t afraid that his meteorological numerical calculation will “wake up”.

    […]

    Instead of saying “we cannot ensure that no harmful sequences will be generated by our function, partially because we don’t know how to specify and enumerate harmful sequences”, we talk about “behaviors”, “ethical constraints”, and “harmful actions in pursuit of their goals”. All of these are anthropocentric concepts that – in my mind – do not apply to functions or other mathematical objects. And using them muddles the discussion, and our thinking about what we’re doing when we create, analyze, deploy and monitor LLMs.
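
    Flake’s “big recurrence equation” can be sketched as a pure function iterated over its own output. The four-entry lookup table below is a toy stand-in for a trillion-parameter model, purely for illustration:

    ```python
    # A minimal sketch of text generation as a recurrence: a pure
    # function f(context) -> next_token, cranked until a stop token.
    # The lookup table is a toy stand-in for a real language model.
    TOY_MODEL = {
        "the": "cat",
        "cat": "sat",
        "sat": "down",
        "down": "<stop>",
    }

    def next_token(context):
        """Deterministic toy 'model': map the last token to the next."""
        return TOY_MODEL.get(context[-1], "<stop>")

    def generate(prompt, max_steps=10):
        tokens = list(prompt)
        for _ in range(max_steps):       # "cranking the shaft"
            tok = next_token(tokens)
            if tok == "<stop>":          # no crank, no more words
                break
            tokens.append(tok)
        return tokens

    print(generate(["the"]))  # ['the', 'cat', 'sat', 'down']
    ```

    Whether the mapping is this toy dict or a transformer, the mechanics are the same: iterate a function, stop cranking, and the words stop.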

  • Curse of vision

    Maalvika Bhat explaining the curse of vision in her post about how being too ambitious is a clever form of self-sabotage.

    We are perhaps the only species that suffers from our own imagination. A bird building a nest does not first conceive of the perfect nest and then suffer from the inadequacy of twigs and mud. A spider spinning a web does not pause, paralyzed by visions of geometric perfection beyond her current capabilities. But humans? We possess the strange gift of being haunted by visions of what could be, tormented by the gap between our aspirations and our abilities.

    […]

    Watch a child draw. They create fearlessly, unselfconsciously, because they have not yet developed the curse of sophisticated taste! They draw purple trees and flying elephants with the confidence of someone who has never been told that trees aren’t purple, that elephants don’t fly. But somewhere around age eight or nine, taste arrives like a harsh critic, and suddenly the gap opens. The child can see that their drawing doesn’t match the impossible standard their developing aesthetic sense has conjured.

    This is what leads most of us to stop drawing. Not because we lack talent, but because we’ve developed the ability to judge before we’ve developed the ability to execute. We become connoisseurs of our own inadequacy.

    And this is where our minds, in their desperate attempt, devise an elegant escape. Faced with this unbearable gap, we develop what researchers call “productive avoidance” — staying busy with planning, researching, and dreaming while avoiding the vulnerable act of creating something concrete that might fail. It feels like work because it engages all our intellectual faculties. But it functions as avoidance because it protects us from the terrifying possibility of creating something imperfect. I see this in wannabe founders listening to podcasts on loop, wannabe TikTokkers watching hours of videos as “research,” and wannabe novelists who spend years developing character backstories for books they never begin.

    The spider doesn’t face this problem. It spins webs according to ancient genetic instructions, each one remarkably similar to the last. But human creativity requires us to navigate the treacherous territory between what we can imagine and what we can actually do. We are cursed with visions of perfection and blessed with the capacity to fail toward them.

  • Layoff paradox

    Daniel Sada talking about the layoff paradox and why layoffs are easier than firing an employee.

    To fire someone for performance, managers must document everything meticulously. Every missed deadline, every subpar deliverable, every coaching conversation needs to be recorded. HR requires a paper trail that can withstand legal scrutiny. This process can take months or even years, during which the underperforming employee continues to collect their salary and potentially drag down team morale.

    The legal risk is real. Even in at-will states, wrongful termination lawsuits can be expensive and time-consuming. Companies worry about discrimination claims, especially if the fired employee belongs to a protected class. Better to have an ironclad documentation trail than face a costly legal battle.

    Layoffs, paradoxically, are “cleaner.” When you eliminate entire roles or teams, you sidestep the performance documentation requirements. You’re not firing someone for being bad at their job, you’re eliminating the job itself. The legal risk is minimal, and you can even look compassionate by offering severance packages and transition support. Employees can even claim unemployment and keep their income for months!

    For VPs and executives, layoffs solve multiple problems at once: they can eliminate underperformers without the messy documentation process, reduce headcount to hit financial targets, and maintain the narrative that they’re making “tough but strategic decisions” rather than admitting they failed to manage performance effectively.

    The bitter irony? Getting laid off is often better for the employee than being fired. Layoffs typically come with severance, extended healthcare, and the ability to say you were “affected by restructuring” rather than “terminated for cause.” You get more time to find your next job, and your professional reputation remains intact.

  • Disposable code

    Charlie Guo’s thoughts on programming and AI.

    Code has historically been something with a very high upfront cost to create and nearly zero cost to distribute. That’s defined much of the economic models of Silicon Valley – VC-funded startups invest heavily in creating products that can scale near-infinitely.

    But we’re turning that model on its head with the ability to create software for a fraction of what it used to cost. And as someone who (at least in part) considers himself a craftsman, I’m learning to embrace cheap, single-use code. I’m not sure how I feel about it – we’re now dealing with the environmental consequences of single-use physical products, despite their convenience. But there’s something fundamentally different about writing a script you’ll use once and throw away versus carefully architecting a system meant to last for years.

    What’s more, writing custom software that works used to be only within the domain of software engineers who had either formally studied or had invested hours into teaching themselves the arcane knowledge of compilers, networking, algorithms, and more. Everyone else had to use off-the-shelf products or “no code” platforms that heavily constrained what you could do – like going from a full palette to a paint-by-numbers system.

    Now, almost anyone with a bit of product sense can ship something new. Indie hackers don’t have to worry about hiring a whole dev team to get to an MVP, and designers and PMs can vibe code internal prototypes in an afternoon. None of this code will be perfect, but I think that’s sort of the point – it’s an entirely different beast from the type of code I’m used to working with. And I’m reasonably sure I’m going to have to evolve my way of working.

    So what happens when AI starts to think disposable code is all humans want? Do we end up polluting the knowledge base for LLMs just like we did with our environment?
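
    The cheap, single-use code Guo embraces can be as small as this hypothetical one-off (the task and names are invented for illustration), written to be run once and deleted:

    ```python
    # A hypothetical throwaway script of the single-use kind Guo
    # describes: dedupe a list of email addresses once, then delete
    # the file. No packaging, no architecture -- by design.
    def dedupe(emails):
        """Keep the first occurrence of each address, case-insensitively."""
        seen = set()
        result = []
        for e in emails:
            key = e.strip().lower()
            if key not in seen:
                seen.add(key)
                result.append(e.strip())
        return result

    print(dedupe(["Ana@x.com", "bob@x.com", "ana@x.com "]))
    # ['Ana@x.com', 'bob@x.com']
    ```

    Nothing here is meant to last for years, and that is exactly the point.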

  • Knowledge in the world vs knowledge in the head

    Philip Kortum explaining ‘knowledge in the world’ and ‘knowledge in the head’ with some interesting examples.

    From an interface design standpoint, knowledge in the world simply means that the controls you need are visible, and the identification and operation of these controls can be done through recognition rather than recall. Drop-down computer menus on modern graphical user interfaces […] are a good example. Users don’t have to memorize the command or even its specific location. Rather, they can browse through the menu command structure until they find the control they want to use.

    By contrast, knowledge in the head means that the user must memorize the necessary commands and controls for the operation of a specific device. In a bygone era many, if not most, computing tasks required significant knowledge in the head. You had to know the syntax or the command language to perform even rudimentary tasks. For example, determining what files were on your computer required you to have specific knowledge in the head to perform that task. If you were using a DOS-based system, you had to know the command was DIR, and that if you switched systems, the command was likely different. These kinds of systems were the ultimate hidden controls, as you needed to have complete knowledge in the head to accomplish anything.
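
    Kortum’s distinction can be sketched in code: the same actions exposed once as recall, where the user must already know the exact command, and once as recognition, where the options are visible and browsable. The command names and menu here are invented for illustration:

    ```python
    # Recall ("knowledge in the head"): the exact command must be known.
    COMMANDS = {"DIR": "list files", "COPY": "copy a file"}

    def run_recall(typed):
        # An unknown command simply fails -- nothing hints at what exists.
        return COMMANDS.get(typed.upper(), "Bad command or file name")

    # Recognition ("knowledge in the world"): the options are visible,
    # so the user only has to recognize the right one in a menu.
    def menu():
        return [f"{i}. {desc}" for i, desc in enumerate(COMMANDS.values(), 1)]

    print(run_recall("dir"))   # 'list files'
    print(run_recall("ls"))    # 'Bad command or file name'
    print(menu())              # ['1. list files', '2. copy a file']
    ```

    The hidden-control failure mode lives entirely in `run_recall`: switch systems and the memorized command stops working, while a menu degrades far more gracefully.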

  • Just-in-time software

    Shayne Sweeney talking about the future of AI-written software.

    Picture software that crystallizes only when a user shows up, tuned to their request, then evaporates when no one needs it. No pre-compiled artifacts, no fixed language runtime, just live thoughts turning into running systems. In that world the concept of “code” dissolves into something closer to conversation history.

    At that point, the most valuable programmers will be the ones who articulate problems clearly enough that the machine can solve them safely. Syntax will matter about as much as spark-plug gap settings matter to the driver who only presses “Start.”

    […]

    So keep your lint rules if they spark joy, but start practicing a different skill: explaining your intent so clearly that two billion transistors and a trillion-parameter model can’t possibly misunderstand you. The next diff you review may be measured in behaviors shipped, not lines changed. And that’s a future worth pressing “Start” for.

  • Ten years as shareholder of HDFC Bank

    Somewhere in June 2015, I got a call from ICICI Direct that I had been assigned a relationship manager and he wanted to help me with my equity investments. Being naive to the world of equities, I decided to give it a try. I met my relationship manager and he helped me set up my ICICI Direct account.

    “Which stock should I buy to start with?” I asked my relationship manager. 

    “HDFC Bank”

    “What? And not ICICI Bank? But you work for ICICI Bank!”

    “Because HDFC Bank is better.”

    I heeded his advice and bought my first shares of HDFC Bank. It has been ten years since then.

  • Artist vs AI

    A wonderfully illustrated post by Christoph Niemann talking about his fears about AI. Each of his thoughts is supported by its own thoughtfully designed illustration.

    Automating the creation of art is like automating life, so you can make it to the finish line faster.

    This one hit hard.

  • Deleting the second brain

    A wonderful post by Joan Westenberg, who went on a deleting spree through her entire knowledge base: notes, links, to-do lists.

    Two nights ago, I deleted everything.

    Every note in Obsidian. Every half-baked atomic thought, every Zettelkasten slip, every carefully linked concept map. I deleted every Apple Note I’d synced since 2015. Every quote I’d ever highlighted. Every to-do list from every productivity system I’d ever borrowed, broken, or bastardized. Gone. Erased in seconds.

    What followed: Relief.

    And a comforting silence where the noise used to be.

    For years, I had been building what technologists and lifehackers call a “second brain.” The premise: capture everything, forget nothing. Store your thinking in a networked archive so vast and recursive it can answer questions before you know to ask them. It promises clarity. Control. Mental leverage.

    But over time, my second brain became a mausoleum. A dusty collection of old selves, old interests, old compulsions, piled on top of each other like geological strata. Instead of accelerating my thinking, it began to replace it. Instead of aiding memory, it froze my curiosity into static categories.

    And so…

    Well, I killed the whole thing.

    This is akin to forest fires, which are valuable for some ecosystems to thrive: they open the forest canopy so that sunlight reaches the ground and stimulates new growth. Sometimes we too need a clean slate to start with a new set of ideas.

    Another wonderful excerpt from the post.

    In trying to remember everything, I outsourced the act of reflection. I didn’t revisit ideas. I didn’t interrogate them. I filed them away and trusted the structure. But a structure is not thinking. A tag is not an insight. And an idea not re-encountered might as well have never been had.

  • Proprioception

    Ardem Patapoutian talking about proprioception in this interview, where he discusses his Nobel Prize in medicine.

    The simplest test is to close your eyes and touch your nose. If you think about how you’re able to tell where your fingers are with your eyes closed, you realize it’s because of how much your muscles are stretched. It’s the same sensor, Piezo 2, that senses this. You don’t feel that the muscle in your second finger is stretched, but rather you gather all the information, and your brain forms an image of where you are and what space you occupy. And that’s why, easily, without looking, you can walk, run, play soccer, play the violin. You can do all of this thanks to proprioception. And we take it for granted because you can’t turn it off. You can close your eyes and imagine what a blind person is like, but you can’t turn off proprioception. That’s why most people don’t know about it, because it’s always there. And it’s a big philosophical message: we take things for granted when we always have them.