Category: Artificial Intelligence

  • Comparative advantage

    Noah Smith explains what comparative advantage is while arguing that we will still have jobs when AI takes over the world, with some caveats.

    Comparative advantage actually means “who can do a thing better relative to the other things they can do”. So for example, suppose I’m worse than everyone at everything, but I’m a little less bad at drawing portraits than I am at anything else. I don’t have any competitive advantages at all, but drawing portraits is my comparative advantage. 

    The key difference here is that everyone — every single person, every single AI, everyone — always has a comparative advantage at something!

    To help illustrate this fact, let’s look at a simple example. A couple of years ago, just as generative AI was getting big, I co-authored a blog post about the future of work with an OpenAI engineer named Roon. In that post, we gave an example illustrating how someone can get paid — and paid well — to do a job that the person hiring them would actually be better at doing:

    Imagine a venture capitalist (let’s call him “Marc”) who is an almost inhumanly fast typist. He’ll still hire a secretary to draft letters for him, though, because even if that secretary is a slower typist than him, Marc can generate more value using his time to do something other than drafting letters. So he ends up paying someone else to do something that he’s actually better at.

    (In fact, we lifted this example from an econ textbook by Greg Mankiw, who in turn lifted it from Paul Samuelson.) 

    Note that in our example, Marc is better than his secretary at every single task that the company requires. He’s better at doing VC deals. And he’s also better at typing. But even though Marc is better at everything, he doesn’t end up doing everything himself! He ends up doing the thing that’s his comparative advantage — doing VC deals. And the secretary ends up doing the thing that’s his comparative advantage — typing. Each worker ends up doing the thing they’re best at relative to the other things they could be doing, rather than the thing they’re best at relative to other people.
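    To make the arithmetic behind that split concrete, here is a minimal Python sketch with made-up productivity numbers (they are not from the post): for each task, the worker who gives up less of the other task holds the comparative advantage.

    ```python
    # Comparative advantage via opportunity cost (hypothetical numbers).
    # Output per hour for each worker. "Marc" is better at both tasks
    # (absolute advantage), yet comparative advantage still splits the work.
    workers = {
        "Marc":      {"deals": 2.0, "pages": 100.0},
        "Secretary": {"deals": 0.1, "pages": 60.0},
    }

    def opportunity_cost(rates, task, other):
        """Units of `other` given up per unit of `task` produced."""
        return rates[other] / rates[task]

    for task, other in [("deals", "pages"), ("pages", "deals")]:
        for name, rates in workers.items():
            oc = opportunity_cost(rates, task, other)
            print(f"{name} gives up {oc:g} {other} per unit of {task}")
        # The worker with the LOWER opportunity cost for a task holds
        # the comparative advantage in that task.
        holder = min(workers, key=lambda w: opportunity_cost(workers[w], task, other))
        print(f"-> comparative advantage in {task}: {holder}")
    ```

    With these invented rates, Marc gives up 50 typed pages per deal while the secretary gives up 600, so Marc specializes in deals and the secretary in typing, which is exactly the split in the quote above.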

  • AI ‘may’ not take away software jobs

    Dustin Ewers argues that AI will create more software jobs rather than taking them away.

    AI tools create a significant productivity boost for developers. Different folks report different gains, but most people who try AI code generation recognize its ability to increase velocity. Many people think that means we’re going to need fewer developers, and our industry is going to slowly circle the drain.

    This view is based on a misunderstanding of why people pay for software. A business creates software because they think that it will give them some sort of economic advantage. The investment needs to pay for itself with interest. There are many software projects that would help a business, but businesses aren’t going to do them because the return on investment doesn’t make sense.

    When software development becomes more efficient, the ROI of any given software project increases, which unlocks more projects. That legacy modernization project that no one wants to tackle because it’s super costly. Now you can make AI do most of the work. That project now makes sense. That cool new software product idea that might be awesome but might also crash and burn. AI can make it cheaper for a business to roll the dice. Cheaper software means people are going to want more of it. More software means more jobs for increasingly efficient software developers.

    Economists call this Jevons Paradox.
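    To put rough numbers on that ROI argument (everything below is invented for illustration, not from the post), here is a quick Python sketch: each project is funded only if its return clears a hurdle rate, and cutting build costs pushes previously unviable projects over the line.

    ```python
    # Back-of-the-envelope sketch of the ROI argument (invented numbers).
    # A project gets funded only if its return on investment clears a hurdle.
    projects = {
        "legacy modernization": {"value": 120_000, "cost": 150_000},
        "new product bet":      {"value": 200_000, "cost": 180_000},
        "internal dashboard":   {"value":  40_000, "cost":  60_000},
    }

    HURDLE = 0.20  # require at least a 20% return

    def count_funded(cost_multiplier):
        """How many projects clear the hurdle if build costs scale by `cost_multiplier`."""
        return sum(
            (p["value"] - p["cost"] * cost_multiplier) / (p["cost"] * cost_multiplier) >= HURDLE
            for p in projects.values()
        )

    print("funded at full cost:", count_funded(1.0))  # 0 projects
    print("funded at half cost:", count_funded(0.5))  # all 3 projects
    ```

    Halving development cost takes this toy portfolio from zero funded projects to three: cheaper software doesn’t shrink the backlog, it expands it, which is the Jevons effect the quote describes.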

    This gives me hope.

    Bonus: I first learnt about Jevons Paradox while reading Kim Stanley Robinson’s The Ministry for the Future.

  • There’s a new mistake-maker in town

    An insightful article by Bruce Schneier on how humans have built guardrails to manage human mistakes, but we are not yet equipped to manage the weird mistakes AI makes.

    Humanity is now rapidly integrating a wholly different kind of mistake-maker into society: AI. Technologies like large language models (LLMs) can perform many cognitive tasks traditionally fulfilled by humans, but they make plenty of mistakes. It seems ridiculous when chatbots tell you to eat rocks or add glue to pizza. But it’s not the frequency or severity of AI systems’ mistakes that differentiates them from human mistakes. It’s their weirdness. AI systems do not make mistakes in the same ways that humans do.

    Much of the friction—and risk—associated with our use of AI arises from that difference. We need to invent new security systems that adapt to these differences and prevent harm from AI mistakes.

  • Agentic AI

    Gary Marcus on AI Agents

    I do genuinely think we will all have our own AI agents, and companies will have armies of them. And they will be worth trillions, since eventually (no time soon) they will do a huge fraction of all human knowledge work, and maybe physical labor too. 

    But not this year (or next, or the one after that, and probably not this decade, except in narrow use cases). All that we will have this year are demos.

    Funny.

    And I am hoping it plays out the way Gary describes it. I get to keep my job a little longer, and build a retirement corpus.