Category: Artificial Intelligence

  • We are all junior programmers now

    Generative AI is the fastest-moving technology I have seen to date. Something new happens every single day, and it’s hard to keep track of the updates. Scott Werner extends this thought and theorises that, because of this, we are all junior programmers now.

    We’re all junior developers again. But not junior in the traditional sense, where senior is waiting after enough years pass. We’re junior in a permanent sense, where the technology evolves faster than expertise can accumulate.

    It’s like being a professional surfer on an ocean that keeps changing its physics. Just when you think you understand waves, they start moving sideways. Or backwards. Or turning into birds.

    This is either terrifying or liberating, depending on your relationship with control.

    Scott also touches upon other interesting aspects, like Time Dilation in the Age of AI.

  • Anthropomorphization of AI

    Halvar Flake talking about anthropomorphization of AI.

    The moment that people ascribe properties such as “consciousness” or “ethics” or “values” or “morals” to these learnt mappings is where I tend to get lost. We are speaking about a big recurrence equation that produces a new word, and that stops producing words if we don’t crank the shaft.

    To me, wondering if this contraption will “wake up” is similarly bewildering as if I were to ask a computational meteorologist if he isn’t afraid that his meteorological numerical calculation will “wake up”.

    […]

    Instead of saying “we cannot ensure that no harmful sequences will be generated by our function, partially because we don’t know how to specify and enumerate harmful sequences”, we talk about “behaviors”, “ethical constraints”, and “harmful actions in pursuit of their goals”. All of these are anthropocentric concepts that – in my mind – do not apply to functions or other mathematical objects. And using them muddles the discussion, and our thinking about what we’re doing when we create, analyze, deploy and monitor LLMs.
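
    Halvar’s “recurrence that produces a new word” is, in essence, the autoregressive sampling loop. Here is a toy Python sketch of that framing, with a random stand-in for the learnt mapping (nothing like a real model, purely illustrative):

      import random

      def next_word(context):
          # Stand-in for the learnt mapping: a real LLM would run a forward
          # pass over billions of parameters and return a probability
          # distribution over the vocabulary. Here it is a random pick.
          vocabulary = ["the", "model", "predicts", "a", "word", "."]
          return random.choice(vocabulary)

      def generate(prompt, max_steps=10):
          words = prompt.split()
          for _ in range(max_steps):          # "cranking the shaft"
              words.append(next_word(words))  # the recurrence: context -> next word
          return " ".join(words)              # stop cranking and it stops producing

      print(generate("Once upon a time"))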

  • Disposable code

    Charlie Guo’s thoughts on programming and AI.

    Code has historically been something with a very high upfront cost to create and nearly zero cost to distribute. That’s defined much of the economic models of Silicon Valley – VC-funded startups invest heavily in creating products that can scale near-infinitely.

    But we’re turning that model on its head with the ability to create software for a fraction of what it used to cost. And as someone who (at least in part) considers himself a craftsman, I’m learning to embrace cheap, single-use code. I’m not sure how I feel about it – we’re now dealing with the environmental consequences of single-use physical products, despite their convenience. But there’s something fundamentally different about writing a script you’ll use once and throw away versus carefully architecting a system meant to last for years.

    What’s more, writing custom software that works used to be only within the domain of software engineers who had either formally studied or had invested hours into teaching themselves the arcane knowledge of compilers, networking, algorithms, and more. Everyone else had to use off-the-shelf products or “no code” platforms that heavily constrained what you could do – like going from a full palette to a paint-by-numbers system.

    Now, almost anyone with a bit of product sense can ship something new. Indie hackers don’t have to worry about hiring a whole dev team to get to an MVP, and designers and PMs can vibe code internal prototypes in an afternoon. None of this code will be perfect, but I think that’s sort of the point – it’s an entirely different beast from the type of code I’m used to working with. And I’m reasonably sure I’m going to have to evolve my way of working.

    So what happens when AI starts to think that disposable code is all humans want? Do we end up polluting the knowledge base for LLMs just like we polluted our environment?

  • Just-in-time software

    Shayne Sweeney talking about the future of AI-written software.

    Picture software that crystallizes only when a user shows up, tuned to their request, then evaporates when no one needs it. No pre-compiled artifacts, no fixed language runtime, just live thoughts turning into running systems. In that world the concept of “code” dissolves into something closer to conversation history.

    At that point, the most valuable programmers will be the ones who articulate problems clearly enough that the machine can solve them safely. Syntax will matter about as much as spark-plug gap settings matter to the driver who only presses “Start.”

    […]

    So keep your lint rules if they spark joy, but start practicing a different skill: explaining your intent so clearly that two billion transistors and a trillion-parameter model can’t possibly misunderstand you. The next diff you review may be measured in behaviors shipped, not lines changed. And that’s a future worth pressing “Start” for.

  • Artist vs AI

    A wonderfully illustrated post by Christoph Niemann on his fears about AI. Each of his thoughts is supported by its own thoughtfully designed illustration.

    Automating the creation of art is like automating life, so you can make it to the finish line faster.

    This one hit hard.

  • Dividing a job into tasks

    Arvind Narayanan talking about dividing a job into tasks and the boundaries between them.

    …if you define jobs in terms of tasks maybe you’re actually defining away the most nuanced and hardest-to-automate aspects of jobs, which are at the boundaries between tasks.

    Can you break up your own job into a set of well-defined tasks such that if each of them is automated, your job as a whole can be automated? I suspect most people will say no. But when we think about other people’s jobs that we don’t understand as well as our own, the task model seems plausible because we don’t appreciate all the nuances.

    If this is correct, it is irrelevant how good AI gets at task-based capability benchmarks. If you need to specify things precisely enough to be amenable to benchmarking, you will necessarily miss the fact that the lack of precise specification is often what makes jobs messy and complex in the first place. So benchmarks can tell us very little about automation vs augmentation.

  • Judgement over technical skill

    Alexander Kohlhofer quoting Brian Eno and explaining how judgement becomes more important than technical skill in the age of AI.

    The great benefit of computer sequencers is that they remove the issue of skill, and replace it with the issue of judgement. 

    With Cubase or Photoshop, anybody can actually do anything, and you can make stuff that sounds very much like stuff you’d hear on the radio, or looks very much like anything you see in magazines. 

    So the question becomes not whether you can do it or not, because any drudge can do it if they’re prepared to sit in front of the computer for a few days, the question then is, “Of all the things you can now do, which do you choose to do?”

  • ChatGPT and students

    A rant posted on Reddit by a professor about his struggles with ChatGPT use among students.

    I actually get excited when I find typos and grammatical errors in their writing now.

    My constant struggle is how to convince them that getting an education in the humanities is not about regurgitating ideas/knowledge that already exist. It’s about generating new knowledge, striving for creative insights, and having thoughts that haven’t been had before. I don’t want you to learn facts. I want you to think. To notice. To question. To reconsider. To challenge. Students don’t yet get that ChatGPT only rearranges preexisting ideas, whether they are accurate or not.

  • Input risk in LLM

    Doug Slater talking about input risk when using LLMs to code.

    An LLM does not challenge a prompt which is leading or whose assumptions are flawed or context is incomplete. Example: An engineer prompts, “Provide a thread-safe list implementation in C#” and receives 200 lines of flawless, correct code. It’s still the wrong answer, because the question should have been, “How can I make this code thread-safe?” and whose answer is “Use System.Collections.Concurrent” and 1 line of code. The LLM is not able to recognize an instance of the XY problem because it was not asked to.
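
    To make the XY problem concrete, here is a rough Python analogue of that C# example (with queue.Queue standing in for System.Collections.Concurrent): the leading prompt yields a hand-rolled wrapper, while the better question points at one line of standard library.

      import threading
      import queue

      # What the leading prompt ("write me a thread-safe list") tends to yield:
      # a hand-rolled wrapper around a lock.
      class ThreadSafeList:
          def __init__(self):
              self._lock = threading.Lock()
              self._items = []

          def append(self, item):
              with self._lock:
                  self._items.append(item)

          def pop(self):
              with self._lock:
                  return self._items.pop()

      # What the better question ("how do I make this code thread-safe?")
      # usually leads to: the standard library already ships a concurrent structure.
      shared = queue.Queue()
      shared.put("one line, no custom locking")

    The Python specifics are beside the point; the point is that the model answered the question as asked rather than the question behind it.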

    The post covers a lot more ground on the risks involved with LLM-generated code. Another thing that caught my attention was:

    LLMs accelerate incompetence.

    Simon Willison talks about the other side when he says:

    LLMs amplify existing expertise

    The conclusion is: If you are smart, LLMs can make you—or at least make you sound—smarter. If you are dumb, LLMs will make you dumber, without you ever knowing.

  • Addiction to… vibe coding

    Fred Benenson talking about how you can get addicted to vibe coding. Yes, vibe coding.

    I’ve been using AI coding assistants like Claude Code for a while now, and I’m here to say (with all due respect to people who have substance abuse issues), I may be an addict. And boy is this an expensive habit.

    Its “almost there” quality — the feeling we’re just one prompt away from the perfect solution — is what makes it so addicting. Vibe coding operates on the principle of variable-ratio reinforcement, a powerful form of operant conditioning where rewards come unpredictably. Unlike fixed rewards, this intermittent success pattern (“the code works! it’s brilliant! it just broke! wtf!”), triggers stronger dopamine responses in our brain’s reward pathways, similar to gambling behaviors.

    What makes this especially effective with AI is the minimal effort required for potentially significant rewards — creating what neuroscientists call an “effort discounting” advantage. Combined with our innate completion bias — the drive to finish tasks we’ve started — this creates a compelling psychological loop that keeps us prompting.

    However, the post is less about addiction and more about the perverse incentives that AI companies have for verbose code generation.

    1. The AI generates verbose, procedural code for a given task
    2. This code becomes part of the context when you ask for further changes or additions (this is key)
    3. The AI now has to read (and you pay for) this verbose code in every subsequent interaction
    4. More tokens processed = more revenue for the company behind the AI
    5. The LLM developers have no incentive to “fix” the verbose code problem because doing so will meaningfully impact their bottom line
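
    A back-of-the-envelope sketch of points 2 to 4 above, with made-up numbers (none of these figures come from the post): because the generated code is re-read as context on every follow-up prompt, its verbosity is paid for again and again.

      # Assumed, illustrative numbers only.
      concise_tokens = 50       # a tight implementation of the task
      verbose_tokens = 400      # the procedural version the model tends to emit
      follow_ups = 10           # later prompts that re-read the code as context
      price_per_million = 3.0   # assumed input price, dollars per million tokens

      def cumulative_cost(code_tokens):
          # The code rides along in the context window of every follow-up,
          # so its token cost is paid once per interaction, not once overall.
          total_tokens = code_tokens * follow_ups
          return total_tokens * price_per_million / 1_000_000

      print(f"concise: ${cumulative_cost(concise_tokens):.4f}")
      print(f"verbose: ${cumulative_cost(verbose_tokens):.4f}")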

    Don’t miss the chuckle-inducing postscript.