• Politics

    Matheus Lima explaining politics at the workplace.

    Politics is just how humans coordinate in groups. It’s the invisible network of relationships, influence, and informal power that exists in every organization. You can refuse to participate, but that doesn’t make it go away. It just means decisions get made without you.

    Think about the last time a terrible technical decision got pushed through at your company. Maybe it was adopting some overcomplicated architecture, or choosing a vendor that everyone knew was wrong, or killing a project that was actually working. I bet if you dig into what happened, you’ll find it wasn’t because the decision-makers were stupid. It’s because the people with the right information weren’t in the room. They “didn’t do politics.”

    Meanwhile, someone who understood how influence works was in that room, making their case, building coalitions, showing they’d done their homework. And their idea won. Not because it was better, but because they showed up to play while everyone else was “too pure” for politics.

    Ideas don’t speak. People do. And the people who understand how to navigate organizational dynamics, build relationships, and yes, play politics? Their ideas get heard.

    When you build strong relationships across teams, understand what motivates different stakeholders, and know how to build consensus, you’re doing politics. When you take time to explain your technical decisions to non-technical stakeholders in language they understand, that’s politics. When you grab coffee with someone from another team to understand their challenges, that’s politics too.

    Good politics is just being strategic about relationships and influence in the service of good outcomes.

    I too thought of workplace politics as something to be avoided. But politics doesn’t mean backstabbing your opponents. Wikipedia defines it as:

    Politics is the set of activities that are associated with making decisions in groups, or other forms of power relations among individuals, such as the distribution of status or resources.

  • Friction vs effort

    Jameel Ur Rahman shares his take on how overcoming friction leads to growth. We see this in nature, where a butterfly’s struggle to emerge from its cocoon helps develop its wings. But this comment by gwd makes a distinction between friction and effort.

    Can I make a distinction between “friction” and “effort”?

    If you’re riding a bike up a hill, you can’t go up without effort. But not all of your effort is actually moving you up the hill — some of it is being lost in friction: inefficiencies in your muscles, friction in your gears and wheel and chain, wind resistance.

    Similarly, you can’t learn anything without effort; but it’s often the case that effort you put in ends up wasted: if you’re learning a language, time spent looking for content rather than studying content is friction; effort spent forcing yourself to read something that’s too hard is effort you could have spent more profitably elsewhere.

    Put that way, we should minimize friction, so that we can maximize the amount our effort goes towards actually growing.
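    gwd’s distinction boils down to a simple accounting identity: only the effort left over after friction losses actually moves you up the hill. A toy sketch of that idea (the function name and all numbers are my own illustrative assumptions, not from the comment):

    ```python
    def useful_progress(total_effort_hours, friction_fraction):
        """Effort that actually moves you forward: total effort minus
        the share lost to friction (hunting for material, tooling churn,
        content that's too hard). Numbers are purely illustrative."""
        return total_effort_hours * (1 - friction_fraction)

    # The same 10 hours of language study under different conditions:
    high_friction = useful_progress(10, 0.6)  # 4.0 hours of real studying
    low_friction = useful_progress(10, 0.1)   # 9.0 hours of real studying
    ```

    Same effort in, very different growth out, which is why minimizing friction pays off before adding more effort.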

  • A market for lemons

    Frank Chimero, referring to a paper by George Akerlof, explains what a market for lemons means. He then argues that we are in the lemon stage of the internet.

    The idea is called “a market for lemons.” The phrase comes from a 1970 paper by George Akerlof that explains how information asymmetry between buyers and sellers can undermine a marketplace. Akerlof asks us to imagine ourselves buying a used car. Some cars on the lot are reliable, well-maintained gems. Other cars are lemons, the kinds of cars that can make it off the lot but are disasters waiting to happen. The sellers know which cars are which, but you, as a buyer, can’t tell the difference. That information asymmetry affects the average price in the market and eventually impacts the overall market dynamics.

    The thinking goes like this: if a buyer can’t distinguish between good and bad, everything gets priced somewhere in the middle. If you’re selling junk, this is fantastic news—you’ll probably get paid more than your lemon is worth. If you’re selling a quality used car, this price is insultingly low. As a result, people with good cars leave the market to sell their stuff elsewhere, which pushes the overall quality and price down even further, until eventually all that’s left on the market are lemons.

    I think we’re in the lemon stage of the internet.
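    The adverse-selection loop Chimero describes can be sketched as a toy simulation (the quality distribution, market size, and round count are all illustrative assumptions, not from Akerlof’s paper):

    ```python
    import random

    # Toy model of Akerlof's lemons market: buyers can't tell cars apart,
    # so they offer the market-average quality as the price; sellers with
    # above-average cars leave, dragging the average down each round.
    random.seed(0)

    def lemons_market(rounds=10, n_sellers=1000):
        # Each seller privately knows their car's quality, drawn in [0, 1].
        sellers = [random.uniform(0, 1) for _ in range(n_sellers)]
        price = sum(sellers) / len(sellers)
        for _ in range(rounds):
            # Buyers offer the average quality of whatever is still listed.
            price = sum(sellers) / len(sellers)
            # Good-car owners are underpaid at that price and exit.
            sellers = [q for q in sellers if q <= price]
            if not sellers:
                break
        return price, len(sellers)

    price, remaining = lemons_market()
    # Each round the best cars exit, so price and quality ratchet downward
    # until mostly lemons remain.
    ```

    Running it shows the price collapsing round after round, which is exactly the “pushes the overall quality and price down even further” spiral in the quote.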

  • From AI will replace humans to AI is in a bubble

    Stephanie Palazzolo writing for The Information.

    Not only has the one-year-old Thinking Machines not yet released a product, it hasn’t talked publicly about what that product will be. Even some of the company’s investors don’t have a very good idea of what it is working on. While raising capital for Thinking Machines earlier this year and late last year, Murati shared few details about what it would be building, prospective investors said.

    “It was the most absurd pitch meeting,” one investor who met with Murati said. “She was like, ‘So we’re doing an AI company with the best AI people, but we can’t answer any questions.’”

    Despite that vagueness, Murati raised $2 billion in funding—the largest seed round ever—at a $10 billion pre-investment valuation from top Silicon Valley VC firms including Andreessen Horowitz, Accel and GV. The investors also made the highly unusual decision to give her total veto power over the board of directors. (Thinking Machines is using Nvidia-powered servers it rents from Google Cloud, whose ultimate parent Alphabet also oversees GV.)

    When 2025 started, all I could read was “AI is going to replace humans.” Now it’s all “AI is in a bubble; these valuations don’t make sense.”

    I don’t know what to believe anymore. But I am bookmarking this in case the bubble explodes in the future.

  • Comprehension debt

    Jason Gorman explaining the challenge of comprehension debt with AI-generated code.

    When teams produce code faster than they can understand it, it creates what I’ve been calling “comprehension debt”. If the software gets used, then the odds are high that at some point that generated code will need to change. The “A.I.” boosters will say “We can just get the tool to do that”. And that might work maybe 70% of the time. 

    But those of us who’ve experimented a lot with using LLMs for code generation and modification know that there will be times when the tool just won’t be able to do it. 

    “Doom loops”, when we go round and round in circles trying to get an LLM, or a bunch of different LLMs, to fix a problem that it just doesn’t seem to be able to, are an everyday experience using this technology. Anyone claiming it doesn’t happen to them has either been extremely lucky, or is fibbing.

    It’s pretty much guaranteed that there will be many times when we have to edit the code ourselves. The “comprehension debt” is the extra time it’s going to take us to understand it first.

    And we’re sitting on a rapidly growing mountain of it.

    On a very similar note, Steve Krouse explains how vibe code is legacy code because nobody understands it.

  • Who can build the product?

    There’s an interesting discussion on Hacker News about Boeing starting work on a 737 MAX replacement. My favorite comments below.

    Comment by scrlk.

    an oil industry proverb: a healthy oil company has a geologist in charge, a mature one has an engineer in charge, a declining one has an accountant in charge, and a dying one has a lawyer in charge.

    Comment by nostrademons.

    It’ll be interesting to see if they still can design and build a new ground-up airplane design. The last all-new design was the 787, initiated in 2003 and launched in 2009, and its design was fraught with problems. Before then was the 777 in the early 90s (pre-McDonnell takeover), and the 757/767 in the early 80s.

    There’s a phenomenon that often occurs with large organizations where once their markets mature, everybody who can build a product end-to-end leaves or gets forced out, leaving only people with highly specialized maintenance skillsets. The former group has no work to do, after all, so why should the company keep them around? But then if the market ecosystem shifts, and a new product is necessary, they no longer have the capacity to build ground-up new products. All those people have left, and won’t come anywhere near the company.

    Steve Jobs spoke eloquently about this phenomenon in an old interview.

  • AI can complete tasks, not jobs. For now.

    Ethan Mollick reflecting on the recent report by OpenAI which evaluates AI model performance on real-world, economically valuable tasks.

    Does that mean AI is ready to replace human jobs?

    No (at least not soon), because what was being measured was not jobs but tasks. Our jobs consist of many tasks. My job as a professor is not just one thing, it involves teaching, researching, writing, filling out annual reports, supporting my students, reading, administrative work and more. AI doing one or more of these tasks does not replace my entire job, it shifts what I do. And as long as AI is jagged in its abilities, and cannot substitute for all the complex work of human interaction, it cannot easily replace jobs as a whole…

    …and yet some of the tasks that AI can do right now have incredible value.

  • AI and radiologists

    Deena Mousa explaining how radiology combines digital images, clear benchmarks, and repeatable tasks, but replacing humans with AI is harder than it seems.

    First, while models beat humans on benchmarks, the standardized tests designed to measure AI performance, they struggle to replicate this performance in hospital conditions. Most tools can only diagnose abnormalities that are common in training data, and models often don’t work as well outside of their test conditions. Second, attempts to give models more tasks have run into legal hurdles: regulators and medical insurers so far are reluctant to approve or cover fully autonomous radiology models. Third, even when they do diagnose accurately, models replace only a small share of a radiologist’s job. Human radiologists spend a minority of their time on diagnostics and the majority on other activities, like talking to patients and fellow clinicians. 

    Now where have I heard this before? Oh yes, here.

    Coding can be a challenge, but I’ve never spent more than two weeks trying to figure out what is wrong with the code. Once you get the hang of the syntax, logic, and techniques, it’s a pretty straightforward process—most of the time. The real problems are usually centered around what the software is supposed to do. The hardest part about creating software is not writing code—it’s creating the requirements, and those software requirements are still defined by humans.

  • I don’t know

    Ibrahim Diallo sharing tips on how to lead a room full of experts.

    By definition, leading is knowing the way forward. But in reality, in a room full of experts, pretending to know everything makes you look like an idiot.

    Instead, “I don’t know, but let’s figure it out” becomes a superpower. It gives your experts permission to share uncertainty. It models intellectual humility. And it keeps the focus on moving forward rather than defending ego. It’s also an opportunity to let your experts shine.

    Saying “I don’t know” is truly a superpower. Every time I have said it, the person in front of me has excitedly shared all their knowledge with me.

  • AI and junior developers

    I read this post by Can Elma on how AI is helping senior developers but not junior developers. While the post has some interesting takes, the Hacker News discussion about it is even more interesting. Two of my favorite comments.

    Comment by kaydub.

    Because juniors don’t know when they’re being taken down a rabbit hole. So they’ll let the LLM go too deep in its hallucinations.

    I have a Jr that was supposed to deploy a terraform module I built. This task has been hanging out for a while so I went to check in on them. They told me the problem they’re having and asked me to take a look.

    Their repo is a disaster, it’s very obvious claude took them down a rabbit hole just from looking. When I asked, “Hey, why is all this python in here? The module has it self contained” and they respond with “I don’t know, claude did that” affirming my assumptions.

    They lack the experience and they’re overly reliant on the LLM tools. Not just in the design and implementation phases but also for troubleshooting. And if you’re troubleshooting with something that’s hallucinating and you don’t know enough to know it’s hallucinating you’re in for a long ride.

    Meanwhile the LLM tools have taken away a lot of the type of work I hated doing. I can quickly tell when the LLM is going down a rabbit hole (in most cases at least) and prevent it from continuing. It’s kinda re-lit my passion for coding and building software. So that’s ended up in me producing more and giving better results.

    Comment by bentt.

    The best code I’ve written with an LLM has been where I architect it, I guide the LLM through the scaffolding and initial proofs of different components, and then I guide it through adding features. Along the way it makes mistakes and I guide it through fixing them. Then when it is slow, I profile and guide it through optimizations.

    So in the end, it’s code that I know very, very well. I could have written it but it would have taken me about 3x longer when all is said and done. Maybe longer. There are usually parts that have difficult functions but the inputs and outputs of those functions are testable so it doesn’t matter so much that you know every detail of the implementation, as long as it is validated.

    This is just not junior stuff.