• Comparative advantage

    Noah Smith explaining what comparative advantage is while arguing that we will still have jobs when AI takes over the world, with some caveats.

    Comparative advantage actually means “who can do a thing better relative to the other things they can do”. So for example, suppose I’m worse than everyone at everything, but I’m a little less bad at drawing portraits than I am at anything else. I don’t have any competitive advantages at all, but drawing portraits is my comparative advantage. 

    The key difference here is that everyone — every single person, every single AI, everyone — always has a comparative advantage at something!

    To help illustrate this fact, let’s look at a simple example. A couple of years ago, just as generative AI was getting big, I co-authored a blog post about the future of work with an OpenAI engineer named Roon. In that post, we gave an example illustrating how someone can get paid — and paid well — to do a job that the person hiring them would actually be better at doing:

    Imagine a venture capitalist (let’s call him “Marc”) who is an almost inhumanly fast typist. He’ll still hire a secretary to draft letters for him, though, because even if that secretary is a slower typist than him, Marc can generate more value using his time to do something other than drafting letters. So he ends up paying someone else to do something that he’s actually better at.

    (In fact, we lifted this example from an econ textbook by Greg Mankiw, who in turn lifted it from Paul Samuelson.) 

    Note that in our example, Marc is better than his secretary at every single task that the company requires. He’s better at doing VC deals. And he’s also better at typing. But even though Marc is better at everything, he doesn’t end up doing everything himself! He ends up doing the thing that’s his comparative advantage — doing VC deals. And the secretary ends up doing the thing that’s his comparative advantage — typing. Each worker ends up doing the thing they’re best at relative to the other things they could be doing, rather than the thing they’re best at relative to other people.
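    The Marc-and-secretary example can be reduced to a small opportunity-cost calculation. This is a minimal numeric sketch; all the dollar figures and rates are invented for illustration, not taken from the post:

```python
# Output each person can produce in one hour (hypothetical numbers).
# Marc has the *absolute* advantage at both tasks.
marc = {"deal_value": 10_000, "letters": 20}
secretary = {"deal_value": 100, "letters": 10}

# Opportunity cost of typing one letter = deal value given up per letter.
marc_cost_per_letter = marc["deal_value"] / marc["letters"]            # $500
secretary_cost_per_letter = secretary["deal_value"] / secretary["letters"]  # $10

# Whoever gives up less by typing has the *comparative* advantage in typing.
typist = "secretary" if secretary_cost_per_letter < marc_cost_per_letter else "marc"
print(typist)  # secretary
```

    Even though Marc types twice as fast, each letter costs him $500 of forgone deal value versus the secretary’s $10, so the typing goes to the secretary.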

  • Change

    From the post Too many Americans still fear the future by Noah Smith

    Being older makes it harder to adapt to changes in the technical-social equilibrium. If you’re 18 and just starting out at the bottom, big changes are almost pure opportunity; if you’re 50 and you have a mortgage and kids in college and a high salary at a big corporation, you probably want to keep things stable.

    The 18-year-olds are seeing the 50-year-olds getting laid off and thinking about FIRE (Financial Independence, Retire Early).

  • The USA supremacy

    An interesting article on the sheer dominance of the US and the US dollar.

    Here’s an interesting question: Take a look at the interest rate of the Indian market (comparable to the Fed rate in the U.S.). In 2011, while the U.S. offered a whopping 0.25%, the Indian markets offered 8.5%. In simple terms, if you had put your money in a money market account in the U.S., you would’ve got a 0.25% return per year, and in India, you would’ve got 8.5% per year.

    In other words, it will take only ~8 years to double your money in an Indian account compared to ~288 years in a U.S. account. So why shouldn’t an investor just keep their funds in India instead of the U.S.? 

    The answer lies in currency and country risk. 

    In 2005, 1 USD was equal to 44 Indian Rupees. Fast forward 20 years, now 1 USD is worth 86 Indian Rupees. So, the Indian Currency has roughly lost half its value (49%) when compared to the US Dollar. 

    So, even if you gain some additional interest rate by parking your funds in an emerging market, if that currency depreciates against the US dollar, your net gain will shrink dramatically. Plus, you now also have to deal with country risk.
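    The arithmetic behind the doubling times and the currency drag can be checked directly. A short sketch using the figures from the excerpt (8.5% vs 0.25%, and the 44 → 86 rupee move; the ~288-year figure in the excerpt is the rule-of-72 approximation, 72 / 0.25):

```python
import math

def doubling_years(rate_pct: float) -> float:
    """Exact years to double money at an annual rate, compounded yearly."""
    return math.log(2) / math.log(1 + rate_pct / 100)

us, india = 0.25, 8.5
print(round(doubling_years(india), 1))  # ~8.5 years in India
print(round(doubling_years(us)))        # ~278 years in the U.S. (rule of 72 says 288)

# Currency risk: 1 USD bought 44 rupees in 2005 and 86 rupees 20 years later.
# Convert a dollar to rupees, compound at 8.5% for 20 years, convert back:
rupees = 44 * (1 + india / 100) ** 20
dollars = rupees / 86
print(round(dollars, 2))  # far less than the ~5x the rupee return suggests
```

    The 8.5% rate roughly quintuples the rupee balance over 20 years, but converting back at the depreciated rate cuts the dollar gain roughly in half, which is the currency-risk point the article makes.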

  • Better answers and right answers

    Benedict Evans talks about how AI is getting better at giving better answers, but still lags behind when it comes to giving the right answer.

    Here’s a practical example of the kind of thing that I do quite often, that I’d like to be able to automate. I asked ChatGPT 4o how many people were employed as elevator operators in the USA in 1980. The US Census collected this data and published it: the answer is 21,982.

    First, I try the answer cold, and I get an answer that’s specific, unsourced, and wrong. Then I try helping it with the primary source, and I get a different wrong answer with a list of sources, that are indeed the US Census, and the first link goes to the correct PDF… but the number is still wrong. Hmm. Let’s try giving it the actual PDF? Nope. Explaining exactly where in the PDF to look? Nope. Asking it to browse the web? Nope, nope, nope…

    I faced this issue when I asked AI “what is the composition of Nifty 500 between large caps, mid caps and small caps”. ChatGPT came close by finding the right document to refer to, but ended up picking the wrong value. I needed the right answer: as Benedict Evans calls it, a deterministic task and not a probabilistic task.

    The useful critique of my ‘elevator operator’ problem is not that I’m prompting it wrong or using the wrong version of the wrong model, but that I am in principle trying to use a non-deterministic system for a deterministic task. I’m trying to use a LLM as though it was SQL: it isn’t, and it’s bad at that.

    But don’t write off AI so soon. Benedict Evans goes on to talk about how disruption happens.

    Part of the concept of ‘Disruption’ is that important new technologies tend to be bad at the things that matter to the previous generation of technology, but they do something else important instead. Asking if an LLM can do very specific and precise information retrieval might be like asking if an Apple II can match the uptime of a mainframe, or asking if you can build Photoshop inside Netscape. No, they can’t really do that, but that’s not the point and doesn’t mean they’re useless. They do something else, and that ‘something else’ matters more and pulls in all of the investment, innovation and company creation. Maybe, 20 years later, they can do the old thing too – maybe you can run a bank on PCs and build graphics software in a browser, eventually – but that’s not what matters at the beginning. They unlock something else.

  • AI ‘may’ not take away software jobs

    Dustin Ewers arguing that AI will create more software jobs rather than take them away.

    AI tools create a significant productivity boost for developers. Different folks report different gains, but most people who try AI code generation recognize its ability to increase velocity. Many people think that means we’re going to need fewer developers, and our industry is going to slowly circle the drain.

    This view is based on a misunderstanding of why people pay for software. A business creates software because they think that it will give them some sort of economic advantage. The investment needs to pay for itself with interest. There are many software projects that would help a business, but businesses aren’t going to do them because the return on investment doesn’t make sense.

    When software development becomes more efficient, the ROI of any given software project increases, which unlocks more projects. That legacy modernization project that no one wants to tackle because it’s super costly. Now you can make AI do most of the work. That project now makes sense. That cool new software product idea that might be awesome but might also crash and burn. AI can make it cheaper for a business to roll the dice. Cheaper software means people are going to want more of it. More software means more jobs for increasingly efficient software developers.

    Economists call this Jevons Paradox.

    This gives me hope.

    Bonus: I first learnt about Jevons Paradox while reading Kim Stanley Robinson’s The Ministry for the Future.
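    The ROI argument in the excerpt can be sketched as a simple funding filter. All project names, values, and costs below are invented for illustration; the only point is the mechanism, that a lower build cost pushes more projects over the funding bar:

```python
# Expected business value of each hypothetical project, in $k.
value = {
    "legacy modernization": 400,
    "new product bet": 250,
    "internal dashboard": 90,
}
# Estimated build cost today, in $k.
cost = {
    "legacy modernization": 350,
    "new product bet": 200,
    "internal dashboard": 60,
}
MIN_ROI = 1.5  # a project must return at least 1.5x its cost to get funded

def funded(cost_multiplier: float) -> list[str]:
    """Projects whose value/cost ratio clears the bar at a given cost level."""
    return [p for p in value
            if value[p] / (cost[p] * cost_multiplier) >= MIN_ROI]

print(funded(1.0))  # before AI: only the cheap dashboard clears the bar
print(funded(0.5))  # AI halves build costs: all three projects now make sense
```

    Cutting costs in half doesn’t just make the old backlog cheaper; it changes which projects exist at all, which is the Jevons-style effect Ewers is describing.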

  • There’s a new mistake-maker in town

    An insightful article by Bruce Schneier on how humans have built guardrails to manage mistakes made by humans. But we are not equipped to manage the weird mistakes made by AI.

    Humanity is now rapidly integrating a wholly different kind of mistake-maker into society: AI. Technologies like large language models (LLMs) can perform many cognitive tasks traditionally fulfilled by humans, but they make plenty of mistakes. It seems ridiculous when chatbots tell you to eat rocks or add glue to pizza. But it’s not the frequency or severity of AI systems’ mistakes that differentiates them from human mistakes. It’s their weirdness. AI systems do not make mistakes in the same ways that humans do.

    Much of the friction—and risk—associated with our use of AI arises from that difference. We need to invent new security systems that adapt to these differences and prevent harm from AI mistakes.

  • Elon Musk

    For all the hate Elon Musk gets—especially after his Twitter acquisition—there are some real, tangible benefits that he has given to the world. Quentin Stafford-Fraser talks about how he changed the EV space.

    Though he neither founded Tesla nor designed the cars himself, his perseverance, vision, and willingness to spend his cash where others weren’t have dragged an entire industry, mostly kicking and screaming, into a far better place, both technologically and for the planet. I remember the shock of traditional car dealers in 2016, trying hard to sell a few more cars at discounts to fill their next quarter’s quota, when it was announced that a quarter of a million people had put down a deposit for the recently-announced Model 3: a car they had never even seen. It took that kind of major earthquake to rattle the enormous global inertia of the fossil-burning world and to kick investment in battery production up to a whole new level. I won’t pretend Musk was doing all of this for purely selfless reasons, or that he did it entirely on his own, but many thousands of Greta Thunbergs combined could not dream of having such an impact. He changed the world.

    Bonus: Quentin Stafford-Fraser created the first webcam.

  • Rule of law

    Jennifer Pahlka explaining the rule of law in the wake of Donald Trump’s second inauguration.

    In The New York Times a few weeks ago, Stephen E. Hanson and Jeffrey S. Kopstein characterize the incoming administration’s patrimonialism (rule through personal power and patronage) as “an assault on the modern state as we know it.” Noting that Trump won the presidential election fairly, they correctly assess that reversing this assault “will require more than a simple defense of ‘democracy.’ …The threat we face is different, and perhaps even more critical: a world in which the rule of law has given way entirely to the rule of men.”

    I agree. The rule of men is fundamentally inconsistent with the principles and values of our nation, and I do not welcome any nepotism, graft, or abuse of the system for retribution that may be coming our way. But why would half of voters tolerate this? The authors propose that “a slew of self-aggrandizing leaders has taken advantage of rising inequality, cultural conflicts and changing demography to grab power,” suggesting that the rule of law is a hapless casualty of other circumstances. But the principle of a nation governed by laws not men should have been non-negotiable. It should have been a crown jewel of our democracy for which all else could be tolerated. It wasn’t. The crown jewel was tarnished, and unless we understand the nature of that tarnish, we have little hope of returning that jewel to its rightful place.

  • Agentic AI

    Gary Marcus on AI Agents

    I do genuinely think we will all have our own AI agents, and companies will have armies of them. And they will be worth trillions, since eventually (no time soon) they will do a huge fraction of all human knowledge work, and maybe physical labor too. 

    But not this year (or next, or the one after that, and probably not this decade, except in narrow use cases). All that we will have this year are demos.

    Funny.

    And I am hoping it plays out the way Gary is describing it. I get to keep my job a little longer. And build a retirement corpus.

  • Planned obsolescence

    I learned about this term—planned obsolescence—when news broke that Apple intentionally slowed down their older iPhones. And today I learned that there existed a lightbulb cartel that did this 100 years before Apple.

    The cartel’s grip on the lightbulb market lasted only into the 1930s. Its far more enduring legacy was to engineer a shorter life span for the incandescent lightbulb. By early 1925, this became codified at 1,000 hours for a pear-shaped household bulb, a marked reduction from the 1,500 to 2,000 hours that had previously been common. Cartel members rationalized this approach as a trade-off: Their lightbulbs were of a higher quality, more efficient, and brighter burning than other bulbs. They also cost a lot more. Indeed, all evidence points to the cartel’s being motivated by profits and increased sales, not by what was best for the consumer. In carefully crafting a lightbulb with a relatively short life span, the cartel thus hatched the industrial strategy now known as planned obsolescence.

    The Great Lightbulb Conspiracy