Category: Technology

  • Entry level jobs

    Zara Zhang sharing her thoughts on entry level jobs.

    A Harvard student told me something I can’t stop thinking about. When they go to the library, every single screen has ChatGPT open. Homework that used to take hours now takes minutes.

    But then they talk to alums who say entry-level roles are basically gone. The jobs they planned their entire college trajectory around don’t exist anymore.

    AI made homework easier but made proving you deserve a job exponentially harder.

    Scary.

  • Choosing a programming language

    Steve Francia talking about why engineers can’t be rational about choosing a programming language. He ends his post by highlighting that choosing a programming language should be reframed to an economic debate. What is programming language going to cost us?

    Instead of asking “which language is best?” we need to ask “what is this language going to cost us?” Not just in salaries, but in velocity, in technical debt, in hiring difficulty, in operational complexity, in every dimension that actually determines whether you survive.

    Reframe it from a technical debate to an economic one. And unlike identity, economics can be measured, compared, and decided without anyone’s ego being threatened.

    Choosing a programming language is the single most expensive economic decision your company will make. It will define your culture, constrain your budget, determine your hiring pipeline, set your operational costs, and ultimately dictate whether you can move fast enough to win your market.

  • Subconscious processing

    There’s a spirited discussion on the research paper A Definition of AGI on Hacker News. This comment by fnordpiglet caught my attention.

    Try this exercise. Do not think and let your mind clear. Ideas will surface. By what process did they surface? Or clear your mind entirely then try to perform some complex task. You will be able to. How did you do this without thought? We’ve all had sudden insights without deliberation or thought. Where did these come from? By what process did you arrive at them? Most of the things we do or think are not deliberative and definitely not structured with language. This process is unobservable and not measurable, and the only way we have to do so is through imperfect verbalizations that hint out some vague outline of a subconscious mind. But without being able to train a model on that subconscious process, one that can’t be expressed in language with any meaningful sufficiency, how will language models demonstrate it? Their very nature of autoregressive inference prohibits such a process from emerging at any scale. We might very well be able to fake it to an extent that it fools us, but awareness isn’t there – and I’d assert that awareness is all you need.

  • Vibe coding is a lot like stock-picking

    Erik D. Kennedy reflecting on why has AI not impacted designer, yet.

    My hunch: vibe coding is a lot like stock-picking – everyone’s always blabbing about their big wins. Ask what their annual rate of return is above the S&P, and it’s a quieter conversation 🤫

    Ha!

  • AI security trilemma

    An insightful post by Bruce Schneier on the security issues plaguing AI. He also suggests that prompt injections might be unsolvable in today’s LLMs.

    The fundamental problem is that AI must compress reality into model-legible forms. In this setting, adversaries can exploit the compression. They don’t have to attack the territory; they can attack the map. Models lack local contextual knowledge. They process symbols, not meaning. A human sees a suspicious URL; an AI sees valid syntax. And that semantic gap becomes a security gap.

    Prompt injection might be unsolvable in today’s LLMs. LLMs process token sequences, but no mechanism exists to mark token privileges. Every solution proposed introduces new injection vectors: Delimiter? Attackers include delimiters. Instruction hierarchy? Attackers claim priority. Separate models? Double the attack surface. Security requires boundaries, but LLMs dissolve boundaries. More generally, existing mechanisms to improve models won’t help protect against attack. Fine-tuning preserves backdoors. Reinforcement learning with human feedback adds human preferences without removing model biases. Each training phase compounds prior compromises.

    This is Ken Thompson’s “trusting trust” attack all over again. Poisoned states generate poisoned outputs, which poison future states. Try to summarize the conversation history? The summary includes the injection. Clear the cache to remove the poison? Lose all context. Keep the cache for continuity? Keep the contamination. Stateful systems can’t forget attacks, and so memory becomes a liability. Adversaries can craft inputs that corrupt future outputs.

    This is the agentic AI security trilemma. Fast, smart, secure; pick any two. Fast and smart—you can’t verify your inputs. Smart and secure—you check everything, slowly, because AI itself can’t be used for this. Secure and fast—you’re stuck with models with intentionally limited capabilities.

  • Genesis of US-East-1

    Ben Thompson talking about origins of the US-East-1, the AWS region that went down a couple of days back and along with it a significant portion of the internet.

    Northern Virginia was a place that, in the 1990s, had relatively cheap and reliable power, land, and a fairly benign natural-disaster profile; it also had one of the first major Internet exchange points, thanks to its proximity to Washington D.C., and was centrally located between the west coast and Europe. That drew AOL, the largest Internet Service Provider of the 1990s, which established the region as data center central, leading to an even larger buildout of critical infrastructure, and making it the obvious location to place AWS’s first data center in 2006.

    That data center became what is known as US-East-1, and from the beginning it has been the location with the most capacity, the widest variety of instance types, and the first region to get AWS’s newest features. It’s so critical that AWS itself has repeatedly been shown to have dependencies on US-East-1; it’s also the default location in tutorials and templates used by developers around the world.

  • Pretending

    The Oatmeal talking about AI art.

    As a kid, I had one of those little Casio keyboards where you could hit a button and it’d automatically play a song.

    I remember hitting the button.

    I remember standing there, pretending to make music.

    That’s how I see Al art.

    Standing there.

    Pretending.

    Pretending. This resonated with me. At times I have had this feeling whenever I used AI. While generating proposals. While generating code. While generating ideas. It seems I am pretending to work.

    So is the future of work that we all pretend to work?

  • From AI will replace humans to AI is in a bubble

    Stephanie Palazzolo writing for The Information.

    Not only has the one-year-old Thinking Machines not yet released a product, it hasn’t talked publicly about what that product will be. Even some of the company’s investors don’t have a very good idea of what it is working on. While raising capital for Thinking Machines earlier this year and late last year, Murati shared few details about what it would be building, prospective investors said.

    “It was the most absurd pitch meeting,” one investor who met with Murati said. “She was like, ‘So we’re doing an AI company with the best AI people, but we can’t answer any questions.’”

    Despite that vagueness, Murati raised $2 billion in funding—the largest seed round ever—at a $10 billion pre-investment valuation from top Silicon Valley VC firms including Andreessen Horowitz, Accel and GV. The investors also made the highly unusual decision to give her total veto power over the board of directors. (Thinking Machines is using Nvidia-powered servers it rents from Google Cloud, whose ultimate parent Alphabet also oversees GV.)

    When 2025 started all I could read was AI is going to replace humans. Now its all about—AI is in a bubble. These valuations don’t make sense.

    I don’t know what to believe anymore. But I am bookmarking this in case the bubble explodes in the future.

  • Comprehension debt

    Jason Gorman explaing the challenge of comprehension debt with AI generated code.

    When teams produce code faster than they can understand it, it creates what I’ve been calling “comprehension debt”. If the software gets used, then the odds are high that at some point that generated code will need to change. The “A.I.” boosters will say “We can just get the tool to do that”. And that might work maybe 70% of the time. 

    But those of us who’ve experimented a lot with using LLMs for code generation and modification know that there will be times when the tool just won’t be able to do it. 

    “Doom loops”, when we go round and round in circles trying to get an LLM, or a bunch of different LLMs, to fix a problem that it just doesn’t seem to be able to, are an everyday experience using this technology. Anyone claiming it doesn’t happen to them has either been extremely lucky, or is fibbing.

    It’s pretty much guaranteed that there will be many times when we have to edit the code ourselves. The “comprehension debt” is the extra time it’s going to take us to understand it first.

    And we’re sitting on a rapidly growing mountain of it.

    On a very similar note, Steve Krouse explains how vibe code is legacy code because nobody understands it.

  • AI can complete tasks, not jobs. For now.

    Ethan Mollick reflecting on the recent report by OpenAI which evaluates AI model performance on real-world economically valuable tasks

    Does that mean AI is ready to replace human jobs?

    No (at least not soon), because what was being measured was not jobs but tasks. Our jobs consist of many tasks. My job as a professor is not just one thing, it involves teaching, researching, writing, filling out annual reports, supporting my students, reading, administrative work and more. AI doing one or more of these tasks does not replace my entire job, it shifts what I do. And as long as AI is jagged in its abilities, and cannot substitute for all the complex work of human interaction, it cannot easily replace jobs as a whole…

    …and yet some of the tasks that AI can do right now have incredible value.