Category: Artificial Intelligence

  • AI and junior developers

    I read this post by Can Elma on how AI is helping senior developers but not junior developers. While the post has some interesting takes, there’s an even more interesting discussion about it on Hacker News. Here are two of my favourite comments.

    Comment by kaydub.

    Because juniors don’t know when they’re being taken down a rabbit hole. So they’ll let the LLM go too deep in its hallucinations.

    I have a Jr that was supposed to deploy a terraform module I built. This task has been hanging out for a while so I went to check in on them. They told me the problem they’re having and asked me to take a look.

    Their repo is a disaster, it’s very obvious claude took them down a rabbit hole just from looking. When I asked, “Hey, why is all this python in here? The module has it self contained” and they respond with “I don’t know, claude did that” affirming my assumptions.

    They lack the experience and they’re overly reliant on the LLM tools. Not just in the design and implementation phases but also for troubleshooting. And if you’re troubleshooting with something that’s hallucinating and you don’t know enough to know it’s hallucinating you’re in for a long ride.

    Meanwhile the LLM tools have taken away a lot of the type of work I hated doing. I can quickly tell when the LLM is going down a rabbit hole (in most cases at least) and prevent it from continuing. It’s kinda re-lit my passion for coding and building software. So that’s ended up in me producing more and giving better results.

    Comment by bentt.

    The best code I’ve written with an LLM has been where I architect it, I guide the LLM through the scaffolding and initial proofs of different components, and then I guide it through adding features. Along the way it makes mistakes and I guide it through fixing them. Then when it is slow, I profile and guide it through optimizations.

    So in the end, it’s code that I know very, very well. I could have written it but it would have taken me about 3x longer when all is said and done. Maybe longer. There are usually parts that have difficult functions but the inputs and outputs of those functions are testable so it doesn’t matter so much that you know every detail of the implementation, as long as it is validated.

    This is just not junior stuff.

  • Benefit of the AI bubble

    Faisal Hoque arguing that there are three bubbles in AI. He concludes his post by explaining the benefits of bubbles.

    Far from being a threat, the AI bubble might be the best thing that could happen to pragmatic adopters. Consider what speculative excess delivers: billions in venture capital funding R&D you’d never justify to your board; the world’s brightest minds abandoning stable careers to join AI startups, working on tools that you’ll eventually be able to use; infrastructure being built at a scale no rational actor would attempt, driving down future costs through overcapacity.

    While investors bet on which companies will dominate AI, you can cherry-pick proven tools at competitive prices. While speculators debate valuations, you will be implementing solutions with clear ROI. When the correction comes, you’ll also be able to benefit from fire-sale prices on enterprise tools, seasoned talent seeking stability, and battle-tested technologies that survived the shakeout.

    The dotcom bubble gave us broadband infrastructure and trained web developers. The AI bubble will leave behind GPU clusters and ML engineers. The smartest response isn’t to avoid the bubble or try to time investments in it perfectly. It is to let others take the capital risk while you harvest the operational benefits. The bubble isn’t your enemy. If you play your cards strategically, it can be a major benefactor.

  • Workslop

    This post on Harvard Business Review by Kate Niederhoffer, Gabriella Rosen Kellerman, Angela Lee, Alex Liebscher, Kristina Rapuano and Jeffrey T. Hancock explaining workslop.

    We define workslop as AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.

    Here’s how this happens. As AI tools become more accessible, workers are increasingly able to quickly produce polished output: well-formatted slides, long, structured reports, seemingly articulate summaries of academic papers by non-experts, and usable code. But while some employees are using this ability to polish good work, others use it to create content that is actually unhelpful, incomplete, or missing crucial context about the project at hand. The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work. In other words, it transfers the effort from creator to receiver.

    If you have ever experienced this, you might recall the feeling of confusion after opening such a document, followed by frustration—Wait, what is this exactly?—before you begin to wonder if the sender simply used AI to generate large blocks of text instead of thinking it through. If this sounds familiar, you have been workslopped.

  • AI and the next technological revolution

    Jerry Neumann comparing AI with previous revolutionary technologies like the microprocessor and containers—shipping containers, not software containers—and arguing that the money will be made on the applications sitting on top of AI rather than on AI itself.

    This doesn’t mean AI can’t start the next technological revolution. It might, if experimentation becomes cheap, distributed and permissionless—like Wozniak cobbling together computers in his garage, Ford building his first internal combustion engine in his kitchen, or Trevithick building his high-pressure steam engine as soon as James Watt’s patents expired. When any would-be innovator can build and train an LLM on their laptop and put it to use in any way their imagination dictates, it might be the seed of the next big set of changes—something revolutionary rather than evolutionary. But until and unless that happens, there can be no irruption.

  • AI has softened the consequences of procrastination

    Ashanty Rosario talking about the challenges AI poses for education.

    Many homework assignments are due by 11:59 p.m., to be submitted online via Google Classroom. We used to share memes about pounding away at the keyboard at 11:57, anxiously rushing to complete our work on time. These moments were not fun, exactly, but they did draw students together in a shared academic experience. Many of us were propelled by a kind of frantic productivity as we approached midnight, putting the finishing touches on our ideas and work. Now the deadline has been sapped of all meaning. AI has softened the consequences of procrastination and led many students to avoid doing any work at all. As a result, these programs have destroyed much of what tied us together as students. There is little intensity anymore. Relatively few students seem to feel that the work is urgent or that they need to sharpen their own mind. We are struggling to receive the lessons of discipline that used to come from having to complete complicated work on a tight deadline, because chatbots promise to complete our tasks in seconds.

  • Perpetual anxiety

    Noah Smith sharing his thoughts on the recent report about the plunging job market for college graduates. He starts off with this.

    The debate over whether AI is taking people’s jobs may or may not last forever. If AI takes a lot of people’s jobs, the debate will end because one side will have clearly won. But if AI doesn’t take a lot of people’s jobs, then the debate will never be resolved, because there will be a bunch of people who will still go around saying that it’s about to take everyone’s job. Sometimes those people will find some subset of workers whose employment prospects are looking weaker than others, and claim that this is the beginning of the great AI job destruction wave. And who will be able to prove them wrong?

    In other words, the good scenario for the labor market is that we continue to exist in a perpetual state of anxiety about whether or not we’re all about to be made obsolete by the next generation of robots and chatbots.

    Ha!

    If the good scenario is us being in perpetual anxiety, then I don’t want to imagine the bad scenario.

  • Blur tool for photographs

    Ted Chiang’s post in The New Yorker on how ChatGPT—and LLMs in general—are a blurry JPEG of the web. This is an older post; it came out in February 2023.

    When an image program is displaying a photo and has to reconstruct a pixel that was lost during the compression process, it looks at the nearby pixels and calculates the average. This is what ChatGPT does when it’s prompted to describe, say, losing a sock in the dryer using the style of the Declaration of Independence: it is taking two points in “lexical space” and generating the text that would occupy the location between them. (“When in the Course of human events, it becomes necessary for one to separate his garments from their mates, in order to maintain the cleanliness and order thereof. . . .”) ChatGPT is so good at this form of interpolation that people find it entertaining: they’ve discovered a “blur” tool for paragraphs instead of photos, and are having a blast playing with it.
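    Purely for illustration (my own toy sketch, not something from Chiang’s piece or from any real JPEG decoder), the “average the nearby pixels” idea he describes looks roughly like this:

    ```python
    import numpy as np

    def fill_missing_pixels(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
        """Fill pixels marked True in `mask` with the mean of their valid 4-neighbours.

        A toy illustration of reconstructing lost pixels by averaging what is
        around them; real codecs use far more sophisticated reconstruction.
        """
        filled = image.astype(float)
        h, w = image.shape
        for y, x in zip(*np.nonzero(mask)):
            neighbours = []
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                    neighbours.append(float(image[ny, nx]))
            if neighbours:
                filled[y, x] = sum(neighbours) / len(neighbours)
        return filled

    # Tiny example: one lost pixel in a 3x3 greyscale patch.
    patch = np.array([[10, 20, 30],
                      [40,  0, 60],
                      [70, 80, 90]])
    lost = np.zeros_like(patch, dtype=bool)
    lost[1, 1] = True
    print(fill_missing_pixels(patch, lost)[1, 1])  # (20 + 40 + 60 + 80) / 4 = 50.0
    ```

    The “lexical space” interpolation Chiang describes is of course far richer, but the spirit is the same: fill a gap with a plausible blend of what surrounds it.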

    Reminds me of Venkatesh Rao’s analogy of LLMs as index funds.

  • Revenge of the English majors

    Quoting Stephan H. Wissel.

    LLMs with their dependency on well crafted prompts feels like the revenge of the English majors hurled towards computer science

    Ha!

  • Vibe hacking

    Kevin Collier reporting for NBC News on how a hacker vibe-hacked their way into companies across various industries. The information comes from Anthropic’s Threat Intelligence Report for August 2025.

    …one of Anthropic’s periodic reports on threats, the operation began with the hacker convincing Claude Code — Anthropic’s chatbot that specializes in “vibe coding,” or creating computer programming based on simple requests — to identify companies vulnerable to attack. Claude then created malicious software to actually steal sensitive information from the companies. Next, it organized the hacked files and analyzed them to both help determine what was sensitive and could be used to extort the victim companies.

    The chatbot then analyzed the companies’ hacked financial documents to help determine a realistic amount of bitcoin to demand in exchange for the hacker’s promise not to publish that material. It also wrote suggested extortion emails.

    The chatbot then analyzed the companies’ hacked financial documents to help determine a realistic amount of bitcoin to demand—What the…!

    Since I started following AI news, I have read about how you should break your problem down into smaller chunks for the AI, set up a plan, periodically review the generated code, and then accept the changes. I never thought this approach would be effective for hacking too.

  • What if AI isn’t a bubble?

    Craig McCaskill gives multiple examples of bubbles and how they ultimately burst. But what if AI isn’t a bubble? What if this is the real deal?

    The AI revolution is real, transformative, and probably unstoppable. Whether it unfolds through sustainable growth or boom-bust cycles depends largely on the choices we make in the next few years. The early signs (including voices like Altman’s warning about overexcitement) suggest we might actually be learning from history.

    The AI bubble’s human impact could be fundamentally different. Previous bubbles destroyed jobs when they burst. AI might destroy jobs while it’s still inflating. If AI actually delivers on its automation promises, we could see the first bubble that eliminates more employment during its rise than its fall.

    This creates an unprecedented social risk: a technology bubble that succeeds in its goals might cause more disruption than one that fails. The Railway Mania gave Britain train networks and industrial jobs. The dot-com bubble gave us e-commerce and digital careers. The AI bubble might give us unprecedented productivity and fewer jobs. That’s a social equation we haven’t solved.