Category: Technology

  • Moravec’s paradox

    From Wikipedia:

    Moravec’s paradox is the observation that, as Hans Moravec wrote in 1988, “it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility”. This counterintuitive pattern may happen because skills that appear effortless to humans, such as recognizing faces or walking, required millions of years of evolution to develop, while abstract reasoning abilities like mathematics are evolutionarily recent.

  • Lessons from the cloud build-out

    This research report from Motilal Oswal compares the current AI headwinds for the Indian IT industry with the cloud build-out period of 2016-18:

    Lessons from the cloud build-out: Headwind, then a huge tailwind:

    • The present AI investment cycle closely resembles the early cloud build-out period (2016-18), where cloud initially acted as a headwind for Indian IT services. During those years, cloud migration and platform consolidation eliminated several traditional service lines (infrastructure management, legacy hosting, on-prem ops) and triggered pricing pressure in adjacent portfolios.
    • However, once the cloud infra build-out stabilized, the next phase created an entirely new set of revenue pools: application modernization, cloud-native development, re-architecture of legacy estates, data engineering, DevOps, managed services, and security.
    • These work streams ultimately more than offset the services that cloud displaced, leading to a multi-year technology services upcycle.

    This gives me hope.

  • Replacing developers

    This insightful post by Stephan Schwab shares a brief history of how we have dreamt of replacing developers and failed. But the pursuit of this dream has created enormous value.

    Perhaps the recurring dream of replacing developers isn’t a mistake. Perhaps it’s a necessary optimism that drives tool creation. Each attempt to make development more accessible produces tools that genuinely help. The dream doesn’t come true as imagined, but pursuing it creates value.

    COBOL didn’t let business analysts write programs, but it did enable a generation of developers to build business systems effectively. CASE tools didn’t generate complete applications, but they advanced our thinking about visual modeling. Visual Basic didn’t eliminate professional developers, but it brought application development to more people. AI won’t replace developers, but it will change how we work in meaningful ways.

    The pattern continues because the dream reflects a legitimate need. We genuinely require faster, more efficient ways to create software. We just keep discovering that the constraint isn’t the tool—it’s the complexity of the problems we’re trying to solve.

    Understanding this doesn’t mean rejecting new tools. It means using them with clear expectations about what they can provide and what will always require human judgment.

  • Hype first and context later

    Carette Antonin reflecting on the recent viral tweets of Jaana Dogan and Andrej Karpathy, and on how ‘Influentists’ hype up AI only to add context later, which ends up deflating the hype.

    This pattern of “hype first and context later” is actually part of a growing trend.

    I call the individuals participating in that trend “The Influentists”. Those people are members of a scientific or technical community, and leverage their large audiences to propagate claims that are, at best, unproven and, at worst, intentionally misleading.

    But how can we spot them?

    I personally identify these “Influentists” by four personality traits that characterize their public discourse.

    The first is a reliance on “trust-me-bro” culture, where anecdotal experiences are framed as universal, objective truths to generate hype. This is a sentiment perfectly captured by the “I’m not joking and this isn’t funny” tone of Rakyll’s original tweet, but also the dramatic “I’ve never felt that much behind as a programmer” from Andrej Karpathy’s tweet. This is supported by an absence of reproducible proof, as these individuals rarely share the code, data, or methodology behind their viral “wins”, an omission made easier than ever in the current LLM era. And finally, they utilize strategic ambiguity, carefully wording their claims with enough vagueness to pivot toward a “clarification” if the technical community challenges their accuracy.

  • Dashboard or Pipes

    Gokul Rajaram explaining the difference between a dashboard product and a pipes product, and why it is important to identify early on which product you are working on.

    Every startup needs to make a choice: is their product a dashboard product or a pipes product?

    Dashboard products are used directly and regularly by end users as their primary interface for accomplishing tasks. The goal for these products is to get customers to live in the product. The primary North Star metric for these companies is active users (daily / weekly / monthly, depending on the natural frequency of customer usage for the category). Facebook’s first product (aka Facebook :)) was a dashboard product.

    Pipes products are used in the background to process transactions, data, payments, etc., and customers rarely interact with them directly after initial setup. The goal for these products is for their customers to send as much of their data / payments / etc. through them as possible. Their North Star metric is a volume metric (e.g., GPV). Databricks’ core product is a pipes product.

    Companies can have both types of products in their portfolio. For example, ChatGPT is a dashboard product while OpenAI’s APIs are a pipes product. However, a given product has to determine which camp it’s primarily in.

  • How to talk about AI?

    Emily M. Bender and Nanna Inie sharing an approach on how we should be talking about AI.

    A more deliberate and thoughtful way forward is to talk about “AI” systems in terms of what we use systems to do, often specifying input and/or output. That is, talk about functionalities that serve our purposes, rather than “capabilities” of the system. Rather than saying a model is “good at” something (suggesting the model has skills) we can talk about what it is “good for”. Who is using the model to do something, and what are they using it to do?

  • Follow the cap table, not the keynote

    Robert Greiner reflecting on the recent acquisition of Bun by Anthropic.

    Leaders don’t express their true beliefs in blog posts or conference quotes. They express them in hiring plans, acquisition targets, and compensation bands. If you want to understand what AI companies actually believe about engineering, follow the cap table, not the keynote.

  • Cars and AI

    This intriguing comparison between cars and AI highlights how we have built our cities around cars. While cars pollute and take up a lot of parking space, they are incredibly useful for humans.

    This comment on Hacker News by rukuu001.

    I think a lot about how much we altered our environment to suit cars. They’re not a perfect solution to transport, but they’ve been so useful we’ve built tons more road to accommodate them.

    So, while I don’t think AGI will happen any time soon, I wonder what ‘roads’ we’ll build to squeeze the most out of our current AI. Probably tons of power generation.

    This comment on Hacker News by sotix.

    This is a really interesting observation! Cars don’t have to dominate our city design, and yet they do in many places. In the USA, you basically only have NYC and a few less convenient cities to avoid a city designed for cars. Society has largely been reshaped with the assumption that cars will be used whether or not you’d like to use one.

    What would that look like for navigating life without AI? Living in a community similar to the Amish or Hasidic Jews that don’t integrate technology in their lives as much as the average person does? That’s a much more extreme lifestyle change than moving to NYC to get away from cars.

  • Perpetual beta and perpetual uncertainty

    Charlie Warzel talking about how generative AI’s perpetual beta has put us all in perpetual uncertainty.

    The world that ChatGPT built is a world defined by a particular type of precarity. It is a world that is perpetually waiting for a shoe to drop. Young generations feel this instability acutely as they prepare to graduate into a workforce about which they are cautioned that there may be no predictable path to a career. Older generations, too, are told that the future might be unrecognizable, that the marketable skills they’ve honed may not be relevant. Investors are waiting too, dumping unfathomable amounts of capital into AI companies, data centers, and the physical infrastructure that they believe is necessary to bring about this arrival. It is, we’re told, a race—a geopolitical one, but also a race against the market, a bubble, a circular movement of money and byzantine financial instruments and debt investment that could tank the economy. The AI boosters are waiting. They’ve created detailed timelines for this arrival. Then the timelines shift.

    We are waiting because a defining feature of generative AI, according to its true believers, is that it is never in its final form. Like ChatGPT before its release, every model in some way is also a “low-key research preview”—a proof of concept for what’s really possible. You think the models are good now? Ha! Just wait. Depending on your views, this is trademark showmanship, a truism of innovation, a hostage situation, or a long con. Where you fall on this rapture-to-bullshit continuum likely tracks with how optimistic you are for the future. But you are waiting nonetheless—for a bubble to burst, for a genie to arrive with a plan to print money, for a bailout, for Judgment Day. In that way, generative AI is a faith-based technology.

    It doesn’t matter that the technology is already useful to many, that it can code and write marketing copy and complete basic research tasks. Because Silicon Valley is not selling useful; it’s selling transformation—with all the grand promises, return on investment, genuine risk, and collateral damage that entails. And even if you aren’t buying it, three years out, you’re definitely feeling it.

  • Speed vs Intelligence

    Sami Bahri emphasising that while AI accelerates the process, the intelligence to set up and monitor that process still requires a human.

    Intelligence implies wisdom, context, and nuance. While AI models are simulating reasoning better every day, in a business context, they are fundamentally pattern-matching engines. They excel at acceleration.

    • The Old Way: An analyst reads 50 contracts (unstructured), highlights risks based on gut feeling (unstructured process), and summarizes them in 3 days.
    • The AI Way: An AI scans 50 contracts and extracts specific risk clauses based on defined parameters in 3 minutes.

    The process (Review Contracts -> Identify Risk -> Summarize) hasn’t changed, but it had to be rigorously defined for the AI to work. The intelligence (knowing what a “risk” actually means) still requires human governance. What has changed is the velocity.
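
    To make that concrete, here is a minimal Python sketch of such a rigorously defined pipeline. Keyword matching stands in for the model’s extraction step (in practice that scan would be an LLM call), and every name and phrase list below is a hypothetical illustration, not a real system; the point is only that the process and the definition of “risk” are supplied by humans.

    ```python
    # Sketch of the defined process: Review Contracts -> Identify Risk -> Summarize.
    # The "intelligence" lives in RISK_PARAMETERS: humans decide what a "risk" means.
    # (Keyword matching stands in for the model's extraction; all names hypothetical.)

    RISK_PARAMETERS = {
        "auto_renewal": ["automatically renew", "auto-renewal"],
        "unlimited_liability": ["unlimited liability"],
        "unilateral_termination": ["terminate at any time", "terminate without cause"],
    }

    def identify_risks(contract_text: str) -> list[str]:
        """Flag which human-defined risk categories appear in one contract."""
        text = contract_text.lower()
        return [category for category, phrases in RISK_PARAMETERS.items()
                if any(phrase in text for phrase in phrases)]

    def summarize(contracts: dict[str, str]) -> dict[str, list[str]]:
        """Run the defined process over every contract and report flagged risks."""
        return {name: identify_risks(text) for name, text in contracts.items()}

    print(summarize({
        "vendor_a.txt": "This agreement shall automatically renew each year ...",
        "vendor_b.txt": "Supplier accepts unlimited liability for all damages ...",
    }))
    # -> {'vendor_a.txt': ['auto_renewal'], 'vendor_b.txt': ['unlimited_liability']}
    ```

    The velocity gain comes from automating the scan; the governance, i.e. choosing and maintaining something like RISK_PARAMETERS, stays with humans.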