• Rise of the Planet of the Monkeys

    So this is how it begins.

    In an episode that would be laughable if it were not symptomatic of a greater rot, a troupe of errant primates has reportedly plunged an entire nation into darkness. The Panadura Power Station, a facility emblematic of our fragile infrastructure, succumbed to an altercation among these unwitting simians, causing an island-wide blackout. The Ministry of Power and Energy, in its characteristic obfuscation, has assured the public that investigations are ongoing—a phrase which, in political parlance, often translates to indefinite inertia and zero accountability.

    From the Sri Lanka Guardian. Although the incident is funny, the real reason is systemic corruption.

  • Enshittification of PDF

    Ars Technica lists the worst offenders of enshittification. What caught my eye was the inclusion of PDF.

    But Acrobat was ultimately an Adobe product, with all that came with it. It was expensive, it was prone to bloat and poor performance, and there was no end to its security issues. Features were added that greatly expanded its scope but were largely useless for most people. Eventually, you couldn’t install it without also installing what felt like half a dozen seemingly unrelated Adobe products.

    By building PDF capabilities into its OS, Apple allowed me to go Adobe-free and avoid some of this enshittification on my computers.

    After reading the article, I realized the author was right. I remember how painfully slow Acrobat Reader was when I first used PDFs. Searching for alternatives, I found Foxit, which was blazing fast. I haven’t used Foxit in a long time, so I am not sure if it has also slowed down like Acrobat Reader.

    When I switched from Windows to Mac, Preview became my go-to app for viewing PDFs. Over the years, I never used Acrobat Reader’s features like commenting, signing, or annotating. Not even once.

    While the term enshittification was coined only in 2022, the phenomenon has existed all along.

  • Pencils

    Steve Mould explains how much of the lead in your pencil we actually use—or rather, how much we waste.

  • Model Autophagy Disorder

    An interesting read on Livescu.

    …when AI models generate things—text, images, sound—and then those generated products are used to train a subsequent model, the new model actually gets worse at generating images and texts. Over a few generations it can fail completely, producing only a string of gibberish or a single same image over and over again.

    And this is how AI goes ‘MAD’. Later in the article, the author describes a funny little analogy for discriminating between rich data and poor data.
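    Out of curiosity, I sketched a toy version of this collapse. This is my own illustration, not from the article: repeatedly fit a Gaussian to data, then train each new “generation” only on samples drawn from the previous fit.

```python
import random
import statistics

random.seed(0)

# Generation 0: "real" data from a standard normal distribution.
n = 100
data = [random.gauss(0, 1) for _ in range(n)]

variances = []
for generation in range(2000):
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)  # maximum-likelihood fit, slightly biased low
    variances.append(sigma ** 2)
    # The next "model" is trained only on the previous model's own outputs.
    data = [random.gauss(mu, sigma) for _ in range(n)]

print(f"variance: generation 0 = {variances[0]:.3f}, "
      f"generation 1999 = {variances[-1]:.3f}")
```

    Because every refit slightly underestimates the spread, the variance decays generation after generation and the samples converge toward a single value: a miniature version of a model producing “a single same image over and over again.”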

  • Bennu

    Alexandra Witze writing for Nature:

    Fragments of the asteroid Bennu, carefully collected and ferried to Earth by a robotic spacecraft, contain the building blocks for life, NASA announced today.

    Not only does Bennu contain all 5 of the nucleobases that form DNA and RNA on Earth and 14 of the 20 amino acids found in known proteins, the asteroid’s amino acids hold a surprise. On Earth, amino acids in living organisms predominantly have a ‘left-handed’ chemical structure. Bennu, however, contains nearly equal amounts of these structures and their ‘right-handed’, mirror-image forms, calling into question scientists’ hypothesis that asteroids similar to this one might have seeded life on Earth.

    This is an interesting discovery. The closest I have read to something similar in fiction is Dan Brown’s Deception Point, where NASA discovers a meteorite filled with fossils.

    The discovery also raises a few questions.

    Glavin is most perplexed by the discovery of an equal mixture of left-handed and right-handed amino acids on Bennu. He, like many scientists, had thought that organic molecules from primordial asteroids would have had the same left-handed dominance as those from life on Earth. Now, researchers have to go back to the drawing board to understand how life might have been seeded on Earth.

    “I felt a little bit disappointed at first, like it invalidated 20 years of my research,” Glavin says. “But this is why we explore — to learn new things.”

    I tried to understand why amino acids in living organisms are left-handed, but the chemistry was too heavy for me.

  • Million dollar ideas

    John Gruber talking about how no one’s working on million-dollar ideas because everyone’s chasing billion-dollar ideas!

    The whole tech world needs more projects that aren’t trying to become billion- (let alone trillion-) dollar ideas, but are happily shooting for success as million-dollar ideas (or less!). Many of the best and most beloved movies ever made weren’t big-budget Hollywood blockbusters. If your list of all-time favorite movies doesn’t include a bunch that were made on shoestring budgets, your whole list probably sucks, because you have no taste. The same is of course true for music, games, and everything else. Indie art is often great art, imbued by the souls and obsessions of its creators, and blockbuster art is often garbage art, imbued only by soulless corporate bureaucracies.

    Can we bring back small phones? Is that a million-dollar idea?

  • Comparative advantage

    Noah Smith explains what comparative advantage is while arguing that we will still have jobs when AI takes over the world, with some caveats.

    Comparative advantage actually means “who can do a thing better relative to the other things they can do”. So for example, suppose I’m worse than everyone at everything, but I’m a little less bad at drawing portraits than I am at anything else. I don’t have any competitive advantages at all, but drawing portraits is my comparative advantage. 

    The key difference here is that everyone — every single person, every single AI, everyone — always has a comparative advantage at something!

    To help illustrate this fact, let’s look at a simple example. A couple of years ago, just as generative AI was getting big, I co-authored a blog post about the future of work with an OpenAI engineer named Roon. In that post, we gave an example illustrating how someone can get paid — and paid well — to do a job that the person hiring them would actually be better at doing:

    Imagine a venture capitalist (let’s call him “Marc”) who is an almost inhumanly fast typist. He’ll still hire a secretary to draft letters for him, though, because even if that secretary is a slower typist than him, Marc can generate more value using his time to do something other than drafting letters. So he ends up paying someone else to do something that he’s actually better at

    (In fact, we lifted this example from an econ textbook by Greg Mankiw, who in turn lifted it from Paul Samuelson.) 

    Note that in our example, Marc is better than his secretary at every single task that the company requires. He’s better at doing VC deals. And he’s also better at typing. But even though Marc is better at everything, he doesn’t end up doing everything himself! He ends up doing the thing that’s his comparative advantage — doing VC deals. And the secretary ends up doing the thing that’s his comparative advantage — typing. Each worker ends up doing the thing they’re best at relative to the other things they could be doing, rather than the thing they’re best at relative to other people.
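    The Marc example boils down to a bit of opportunity-cost arithmetic. The figures below are my own made-up numbers, not from the post, but they show the mechanism: whoever forgoes the least value per letter should do the typing, regardless of who types faster.

```python
# Hypothetical numbers: Marc is absolutely better at both typing and deals,
# yet typing is the secretary's comparative advantage.
letters_per_hour = {"Marc": 4, "Secretary": 2}               # Marc types twice as fast
best_alternative_per_hour = {"Marc": 1000, "Secretary": 20}  # value of their other work

# Opportunity cost of one letter = value of other work forgone while typing it.
opportunity_cost = {
    person: best_alternative_per_hour[person] / letters_per_hour[person]
    for person in letters_per_hour
}

for person, cost in opportunity_cost.items():
    print(f"{person}: ${cost:.0f} of other output forgone per letter")
```

    Each letter costs Marc $250 of forgone deal work but costs the secretary only $10 of forgone alternatives, so the secretary types even though Marc is faster.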

  • Change

    From the post Too many Americans still fear the future by Noah Smith

    Being older makes it harder to adapt to changes in the technical-social equilibrium. If you’re 18 and just starting out at the bottom, big changes are almost pure opportunity; if you’re 50 and you have a mortgage and kids in college and a high salary at a big corporation, you probably want to keep things stable.

    The 18-year-olds are seeing the 50-year-olds getting laid off and thinking about FIRE: Financial Independence, Retire Early.

  • The USA supremacy

    An interesting article on the sheer dominance of the USA and the US dollar.

    Here’s an interesting question: Take a look at the interest rate of the Indian market (comparable to the Fed rate in the U.S.). In 2011, while the U.S. offered a whopping 0.25%, the Indian markets offered 8.5%. In simple terms, if you had put your money in a money market account in the U.S., you would’ve got a 0.25% return per year, and in India, you would’ve got 8.5% per year.

    In other words, it will take only ~8 years to double your money in an Indian account compared to ~288 years in a U.S. account. So why shouldn’t an investor just keep their funds in India instead of the U.S.? 
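    The ~288-year figure comes from the rule of 72 (72 divided by the rate in percent); exact compound-interest arithmetic gives closer to 278 years. A quick check:

```python
import math

def doubling_years(rate_pct: float) -> float:
    """Exact years to double money at an annual compound rate."""
    return math.log(2) / math.log(1 + rate_pct / 100)

# The two rates quoted above: India at 8.5%, the U.S. at 0.25%.
for rate_pct in (8.5, 0.25):
    exact = doubling_years(rate_pct)
    rule_of_72 = 72 / rate_pct
    print(f"{rate_pct}%: exact {exact:.1f} years, rule of 72 {rule_of_72:.1f} years")
```

    At 8.5% both methods agree on roughly 8.5 years; at 0.25% the rule of 72 overshoots slightly (288 vs about 278), but the gap between the two accounts is the point either way.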

    The answer lies in currency and country risk. 

    In 2005, 1 USD was equal to 44 Indian Rupees. Fast forward 20 years, now 1 USD is worth 86 Indian Rupees. So, the Indian Currency has roughly lost half its value (49%) when compared to the US Dollar. 

    So, even if you gain some additional interest rate by parking your fund in an emerging market, if that currency depreciates against the US dollar, your net gain will shrink dramatically. Plus, you now also have to deal with country risk.
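    A rough sketch, using only the figures quoted in the article, of what the currency slide does to that 8.5% return:

```python
# $1,000 converted to rupees in 2005, compounded at 8.5%/year for 20 years,
# then converted back to dollars at today's exchange rate.
usd_start = 1_000.0
inr_rate, years = 0.085, 20
fx_2005, fx_2025 = 44.0, 86.0  # rupees per US dollar

inr_end = usd_start * fx_2005 * (1 + inr_rate) ** years  # grow in rupees
usd_end = inr_end / fx_2025                              # convert back to dollars
effective = (usd_end / usd_start) ** (1 / years) - 1     # annualized dollar return

print(f"${usd_start:,.0f} becomes ${usd_end:,.0f}, "
      f"an effective {effective:.1%} per year in dollar terms")
```

    The nominal 8.5% shrinks to roughly 5% a year in dollar terms once the rupee’s slide is accounted for, before country risk is even priced in.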

  • Better answers and right answers

    Benedict Evans talks about how AI is getting better at giving better answers, but still lags behind when it comes to giving the right answer.

    Here’s a practical example of the kind of thing that I do quite often, that I’d like to be able to automate. I asked ChatGPT 4o how many people were employed as elevator operators in the USA in 1980. The US Census collected this data and published it: the answer is 21,982

    First, I try the answer cold, and I get an answer that’s specific, unsourced, and wrong. Then I try helping it with the primary source, and I get a different wrong answer with a list of sources, that are indeed the US Census, and the first link goes to the correct PDF… but the number is still wrong. Hmm. Let’s try giving it the actual PDF? Nope. Explaining exactly where in the PDF to look? Nope. Asking it to browse the web? Nope, nope, nope…

    I faced this issue when I asked AI “what is the composition of Nifty 500 between large caps, mid caps and small caps”. ChatGPT came close by getting the right document to refer to, but ended up picking the wrong value. I needed the right answer. As Benedict Evans calls it, this is a deterministic task and not a probabilistic task.

    The useful critique of my ‘elevator operator’ problem is not that I’m prompting it wrong or using the wrong version of the wrong model, but that I am in principle trying to use a non-deterministic system for a deterministic task. I’m trying to use an LLM as though it was SQL: it isn’t, and it’s bad at that.

    But don’t write off AI so soon. Benedict Evans goes on to talk about how disruption happens.

    Part of the concept of ‘Disruption’ is that important new technologies tend to be bad at the things that matter to the previous generation of technology, but they do something else important instead. Asking if an LLM can do very specific and precise information retrieval might be like asking if an Apple II can match the uptime of a mainframe, or asking if you can build Photoshop inside Netscape. No, they can’t really do that, but that’s not the point and doesn’t mean they’re useless. They do something else, and that ‘something else’ matters more and pulls in all of the investment, innovation and company creation. Maybe, 20 years later, they can do the old thing too – maybe you can run a bank on PCs and build graphics software in a browser, eventually – but that’s not what matters at the beginning. They unlock something else.
