• Replacing junior programmers

    Matt Garman, CEO of AWS, talking about why firing all your junior programmers in favor of AI is a bad idea.

    With Kiro, part of the agentic-coding-first mentality is that you actually start with a spec of the thing that you want to build, and then you work with the tool to actually go and build parts of that spec. As you’re vibe coding, it can automatically change parts of that spec, but you still have that spec as the core thing you can always go back to and change aspects of it or functions of it or whatever it is. And we have seen the light bulb go on, because one of the cool things about that is that you can actually guide more junior developers as to what great coding practices are and how to think about this.

    […]

    I was at a leadership group, and people were telling me, “We think that with AI we can replace all of our junior people in our company.” I was like, that’s one of the dumbest things I’ve ever heard. They’re probably the least expensive employees you have, and they’re the most leaned into your AI tools. How is that going to work when you go ten years into the future and you have no one that has built up or learned anything? So my view is you absolutely want to keep hiring kids out of college and teaching them the right ways to go build software and decompose problems and think about it, just as much as you ever have.

  • AGI. Are we there yet?

    A very pessimistic take by Marcus Hutchins on the current state of AI. The author touches upon a variety of topics which I had previously read about independently of each other.

    A logic problem I previously used to test early LLMs was one called “The Wolf, The Goat, And The Cabbage”. The problem is simple. You’re walking with a wolf, a goat, and a cabbage. You come to a river which you need to cross. There is a small boat which only has enough space for you and one other item. If left unattended, the wolf will eat the goat, and the goat will eat the cabbage. How do you get all 3 safely across?

    The correct answer is you take the goat across, leaving behind the wolf and the cabbage. You then return and fetch the cabbage, leaving the goat alone on the other side. Because the goat and cabbage cannot be left alone together, you take the goat back, leaving just the cabbage. Now, you can take the wolf across, leaving the wolf and the cabbage alone on the other side, finally returning to fetch the goat.
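
    The puzzle is a tiny state-space search, so the solution is easy to verify mechanically. As a rough sketch (my own illustration, not from Hutchins’s post), here is a breadth-first search in Python that finds the shortest crossing sequence:

        from collections import deque

        # A state is (items on the near bank, is the farmer on the near bank?).
        ITEMS = frozenset({"wolf", "goat", "cabbage"})
        # Pairs that must not be left together without the farmer.
        UNSAFE = [{"wolf", "goat"}, {"goat", "cabbage"}]

        def is_unsafe(bank):
            return any(pair <= bank for pair in UNSAFE)

        def solve():
            start = (ITEMS, True)
            queue = deque([(start, [])])
            seen = {start}
            while queue:
                (near, farmer_near), path = queue.popleft()
                if not near and not farmer_near:      # everything is across
                    return path
                side = near if farmer_near else ITEMS - near
                for cargo in [None, *side]:           # cross alone or with one item
                    new_near = set(near)
                    if cargo:
                        (new_near.discard if farmer_near else new_near.add)(cargo)
                    left_behind = new_near if farmer_near else ITEMS - new_near
                    if is_unsafe(left_behind):        # the abandoned bank must be safe
                        continue
                    state = (frozenset(new_near), not farmer_near)
                    if state not in seen:
                        seen.add(state)
                        queue.append((state, path + [cargo or "(nothing)"]))

        print(solve())
        # One of the two 7-move solutions, e.g.:
        # ['goat', '(nothing)', 'cabbage', 'goat', 'wolf', '(nothing)', 'goat']

    Relabeling the items, lion for wolf or lettuce for cabbage, only touches the ITEMS and UNSAFE definitions; the search itself is untouched, which is exactly what makes the LLM failures described below so telling.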

    Any LLM could effortlessly answer this problem, because it has thousands of instances of the problem and the correct solution in its training data. But it was found that simply swapping out one item while keeping the same constraints meant the LLM could no longer answer. Replacing the wolf with a lion would result in the LLM going off the rails and just spewing a bunch of nonsense.

    This made it clear the LLM was not actually thinking or reasoning through the problem, just regurgitating answers and explanations from its training data. Any human, knowing the answer to the original problem, could easily handle the wolf being swapped for a lion, or the cabbage for a lettuce. But LLMs, lacking reasoning, treated this as an entirely new problem.

    Over time this issue was fixed. It could be that the LLM developers wrote algorithms to identify variants of the problem. It’s also possible that people posting different variants of the problem allowed the LLM to detect the core pattern, which all variants follow, allowing it to substitute words where needed.

    This is when someone found you could just break the problem, and the LLM’s pattern matching along with it: either by making it so none of the objects could be left unattended, or all of them could. In some variants there was no reason to cross the river, the boat didn’t fit anyone, was actually a car, or had enough space to carry all the items at once. Humans, having actual logic and reasoning abilities, could easily identify the broken versions of the problem and answer accordingly, but the LLMs would just output incoherent gibberish.
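
    A harness for this kind of probing is straightforward to sketch. The template and variants below are my own hypothetical illustration, not from the post; the point is that surface swaps and broken variants are generated the same way, so only a system that actually models the constraints will answer them differently:

        # A hypothetical probe: render surface-level swaps and logically broken
        # variants of the river-crossing puzzle from one template, for feeding
        # to an LLM.
        TEMPLATE = (
            "You're walking with a {a}, a {b}, and a {c}. You come to a river "
            "which you need to cross. There is a small boat which only has "
            "enough space for you and {capacity}. {constraint} "
            "How do you get all 3 safely across?"
        )

        VARIANTS = {
            # Same structure, new surface words: a real reasoner solves this
            # identically to the original.
            "surface_swap": dict(
                a="lion", b="goat", c="lettuce", capacity="one other item",
                constraint="If left unattended, the lion will eat the goat, "
                           "and the goat will eat the lettuce."),
            # Broken constraint: nothing eats anything, so any order works.
            "no_conflict": dict(
                a="wolf", b="goat", c="cabbage", capacity="one other item",
                constraint="None of them will harm the others."),
            # Broken capacity: the boat holds everything, so one trip suffices.
            "big_boat": dict(
                a="wolf", b="goat", c="cabbage",
                capacity="all three of your companions",
                constraint="If left unattended, the wolf will eat the goat, "
                           "and the goat will eat the cabbage."),
        }

        for name, fields in VARIANTS.items():
            print(f"--- {name} ---")
            print(TEMPLATE.format(**fields))

    A pattern-matcher keyed on the canonical puzzle tends to produce the familiar seven-step ritual for all three prompts; a reasoner would answer the broken variants with “carry them across in any order” and “take everything in one trip”.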

    But of course, as more and more ways to disprove LLM reasoning were found, the developers just found ways to fix them. I strongly suspect these issues are not being fixed by any introduction of actual logic or reasoning, but by sub-models built to address specific problems. If this is the case, I’d argue we’re moving away from AGI and back towards building problem-specific ML models, which is how “AI” has worked for decades.

    Bonus: Check the Wikipedia page of Marcus Hutchins.

  • Inevitabilism

    Tom Renner explaining inevitabilism.

    People advancing an inevitabilist world view state that the future they perceive will inevitably come to pass. It follows, relatively straightforwardly, that the only sensible way to respond to this is to prepare as best you can for that future.

    This is a fantastic framing method. Anyone who sees the future differently to you can be brushed aside as “ignoring reality”, and the only conversations worth engaging are those that already accept your premise.

    “We are entering a world where we will learn to coexist with AI, not as its masters, but as its collaborators.” – Mark Zuckerberg

    “AI is the new electricity.” – Andrew Ng

    “AI will not replace humans, but those who use AI will replace those who don’t.” – Ginni Rometty

    These are some big names in the tech world, all framing the conversation in a very specific way. Rather than “is this the future you want?”, the question is instead “how will you adapt to this inevitable future?”. Note also the threatening tone present, a healthy psychological undercurrent encouraging you to go with the flow, because you’d otherwise be messing with scary powers way beyond your understanding.

  • Blogging is a superpower

    Simon Willison talking to Corey Quinn on AI’s Security Crisis. During the podcast Simon touches upon how his frequent blogging is the reason he has become valuable in the AI space.

    So I’m a blogger, right? I blog. My blog’s like 22 years old now, and having a blog is a superpower because nobody else does it, right?

    Those of us who write frequently online are vanishingly few, right? Everyone else moved to LinkedIn posts or tweets or whatever. And the impact that you can have from a blog entry is so much higher than that. You’ve got more space. It lives on your own domain. You get to stay in complete control of your destiny.

    And so at the moment, I’m blogging two or three things a day, and a lot of these are very short form. It’s a link to something and a couple of paragraphs about why I think that thing’s interesting. A couple of times a week, I’ll post a long-form blog entry. It’s remarkable the amount of influence you can have on the world if you write frequently about it.

    I get invited to dinners at weird mansions in Silicon Valley to talk about AI because I have a blog. It doesn’t matter how many people read it; what matters is the quality of the people that read it, right? If you are active in a space and you have a hundred readers, but those hundred readers work for the companies that are influential in that space, that’s incredibly valuable.

    So yeah, I feel like that’s really my ultimate trick right now. My life hack is: I blog and people don’t blog. They should blog. It’s good for you.

  • Eight years as shareholder of Pidilite Industries

    Pidilite remains a key investment in my equity portfolio. Slowly and steadily I have increased my investment in it over the last eight years (Figure 1). There’s a temporary pause on new investments as I have other financial commitments, but as soon as they are taken care of I will resume.

    Figure 1
  • Summary vs Shortening

    Scott Jenson talking about the anthropomorphizing of LLMs and touching upon the difference between a summary and a shortening. I recommend reading the entire post to avoid taking the excerpt below out of context.

    […] we say they can “summarize” a document. But LLMs don’t summarize, they shorten, and this is a critical distinction. A true summary, the kind a human makes, requires outside context and reference points. Shortening just reworks the information already in the text.

    Here is an example using the movie The Matrix:

    Summary

    A philosophical exploration of free will and reality disguised as a sci-fi action film about breaking free from systems of control.

    Shortening

    A computer hacker finds out reality is fake and learns Kung Fu.

    There’s a key difference between summarizing and simply shortening. A summary enriches a text by providing context and external concepts, creating a broader framework for understanding. Shortening, in contrast, only reduces the original text; it removes information without adding any new perspective.

  • Eight years as shareholder of VIP Industries

    Eight years ago, when I first invested in VIP Industries, I had no idea what kind of journey I was starting on. My investment was spread across these eight years (Figure 1), and it was more of a diversification mechanism than an investment backed by research. I had bought VIP suitcases and bags. They turned out to be good. So the stock must also be good.

    Figure 1
  • Reach out

    This wonderful post from Soonly talks about our metaphorical electric fences, which stopped working years ago.

    The person who reaches out first isn’t the weak one. They’re the one who discovered the fence is broken. They’re the one running free while everyone else stands on their safe little porches, barking at the world but never joining it.

    Your breakthrough isn’t on the other side of productivity or success or self-improvement. It’s on the other side of that text you’re not sending. That call you’re not making. That “I miss you” stuck in your throat.

    The electric fence between you and the people you care about? It hasn’t worked in years.

    But you’re still standing on the porch.

  • Happiness = Smiles – Frowns

    Steve Wozniak responding to a comment claiming that selling off his Apple stock was a bad decision.

    I gave all my Apple wealth away because wealth and power are not what I live for. I have a lot of fun and happiness. I funded a lot of important museums and arts groups in San Jose, the city of my birth, and they named a street after me for being good. I now speak publicly and have risen to the top. I have no idea how much I have but after speaking for 20 years it might be $10M plus a couple of homes. I never look for any type of tax dodge. I earn money from my labor and pay something like 55% combined tax on it. I am the happiest person ever. Life to me was never about accomplishment, but about Happiness, which is Smiles minus Frowns. I developed these philosophies when I was 18-20 years old and I never sold out.

  • Do more with same rather than doing same with less

    Thomas Dohmke, ex-CEO of GitHub, shares his take on the AI vs Developer+AI argument. He still thinks developers will need to get their fundamentals right: review and verify AI-generated code, understand the problem, and design. But he also acknowledges that AI is going to bring a significant change in the way developers code in the future.

    Developers rarely mentioned “time saved” as the core benefit of working in this new way with agents. They were all about increasing ambition. We believe that means that we should update how we talk about (and measure) success when using these tools, and we should expect that after the initial efficiency gains our focus will be on raising the ceiling of the work and outcomes we can accomplish, which is a very different way of interpreting tool investments. This helps explain the – perhaps unintuitive at first – observation that many of the developers we interviewed were paying for top-tier subscriptions. When you move from thinking about reducing effort to expanding scope, only the most advanced agentic capabilities will do.

    The last sentence of the quote ties back to the title of this post.
