• Talk

    Sometimes I find tiny gems on Hacker News. Today was one of those days: this comment by a user going by the name lordnacho.

    It’s the internet. When you talk to people online, it often descends into pettiness. When you talk to people in the real world, that rarely happens. But it’s much easier to talk online, so people get the wrong impression.

    You should talk to strangers. It’s never gone wrong for me. Most people have a warmth and agreeableness that comes out when you are there with them, talking about stuff. There’s also the interesting effect that people will give you their innermost secrets, knowing you won’t tell anyone (I actually met a serial killer who did this, heh). For instance I was on a long haul flight earlier this year, and my neighbour told me everything about her divorce. Like a kind of therapy.

    I also find when I have a real disagreement with someone, it’s a lot easier when you’re face-to-face. For instance, I have friends who are religious, in a real way, ie they actually think there’s a god who created the earth and wants us to live a certain way. Being there in person keeps me from ridiculing them like I might on an internet forum, but it also keeps them from condemning me to hell.

    So folks, practice talking to people. Much of what’s wrong in the current world is actually loneliness, having no outlet for your expressions.

  • AI has softened the consequences of procrastination

    Ashanty Rosario talking about the challenges AI presents for education.

    Many homework assignments are due by 11:59 p.m., to be submitted online via Google Classroom. We used to share memes about pounding away at the keyboard at 11:57, anxiously rushing to complete our work on time. These moments were not fun, exactly, but they did draw students together in a shared academic experience. Many of us were propelled by a kind of frantic productivity as we approached midnight, putting the finishing touches on our ideas and work. Now the deadline has been sapped of all meaning. AI has softened the consequences of procrastination and led many students to avoid doing any work at all. As a result, these programs have destroyed much of what tied us together as students. There is little intensity anymore. Relatively few students seem to feel that the work is urgent or that they need to sharpen their own mind. We are struggling to receive the lessons of discipline that used to come from having to complete complicated work on a tight deadline, because chatbots promise to complete our tasks in seconds.

  • Investment strategy

    M. Pattabiraman dropping truth bombs while sharing his investment strategy.

    I recommend index funds to others so that they avoid repeating my mistakes. And that is what my portfolio is – a sum of all my mistakes made over the years. As I look back, I cannot think of a single intelligent, well-analysed choice.

    […]

    My portfolio was never well-designed or well-diversified. It was and is cluttered. I have learnt to live with it and realised the importance of inaction once your basics are in place during the accumulation phase.

    Instead of worrying about performance and returns, I focused all my energies on how much I can invest and how much I can increase this investment month by month. That has been the key driver of portfolio growth.

    A sum of all my mistakes made over the years—this statement hits the nail right on the head.
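    His point about focusing on contribution growth rather than returns can be illustrated with a quick simulation. The numbers below are hypothetical (mine, not his): at the same assumed return, a monthly investment that is stepped up 10% every year ends far ahead of a flat one over twenty years.

```python
# Hypothetical illustration (numbers are mine, not the author's):
# compare a flat monthly investment with one stepped up every year,
# at the same assumed annual return.

def corpus(monthly, months, annual_return, annual_step_up=0.0):
    """Future value of a monthly investment with an optional yearly step-up."""
    r = (1 + annual_return) ** (1 / 12) - 1  # effective monthly return
    balance = 0.0
    for m in range(months):
        contribution = monthly * (1 + annual_step_up) ** (m // 12)
        balance = balance * (1 + r) + contribution
    return balance

flat = corpus(10_000, 240, 0.10)                          # fixed amount, 20 years
stepped = corpus(10_000, 240, 0.10, annual_step_up=0.10)  # +10% every year
print(f"flat: {flat:,.0f}  stepped: {stepped:,.0f}")
```

    The step-up matters more than squeezing out an extra point of return, which is exactly the "how much can I increase this investment" driver he describes.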

  • Perpetual anxiety

    Noah Smith sharing his thoughts on the recent report about the plunging job market for college graduates. He starts off with this.

    The debate over whether AI is taking people’s jobs may or may not last forever. If AI takes a lot of people’s jobs, the debate will end because one side will have clearly won. But if AI doesn’t take a lot of people’s jobs, then the debate will never be resolved, because there will be a bunch of people who will still go around saying that it’s about to take everyone’s job. Sometimes those people will find some subset of workers whose employment prospects are looking weaker than others, and claim that this is the beginning of the great AI job destruction wave. And who will be able to prove them wrong?

    In other words, the good scenario for the labor market is that we continue to exist in a perpetual state of anxiety about whether or not we’re all about to be made obsolete by the next generation of robots and chatbots.

    Ha!

    If the good scenario is us being in perpetual anxiety, then I don’t want to imagine the bad scenario.

  • Blur tool for photographs

    Ted Chiang’s post on The New Yorker about how ChatGPT, and LLMs in general, are the blurry JPEG of the web. This post is old; it came out in February 2023.

    When an image program is displaying a photo and has to reconstruct a pixel that was lost during the compression process, it looks at the nearby pixels and calculates the average. This is what ChatGPT does when it’s prompted to describe, say, losing a sock in the dryer using the style of the Declaration of Independence: it is taking two points in “lexical space” and generating the text that would occupy the location between them. (“When in the Course of human events, it becomes necessary for one to separate his garments from their mates, in order to maintain the cleanliness and order thereof. . . .”) ChatGPT is so good at this form of interpolation that people find it entertaining: they’ve discovered a “blur” tool for paragraphs instead of photos, and are having a blast playing with it.

    Reminds me of Venkatesh Rao’s analogy of LLMs as index funds.
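    Chiang’s neighbour-averaging description can be sketched in a few lines. This is a toy illustration only; real JPEG decoding works on frequency coefficients, not neighbour averaging, and every name here is made up:

```python
# Toy sketch of Chiang's analogy: estimate a lost pixel as the
# average of its surviving neighbours. Not how JPEG actually works.

def reconstruct_pixel(image, row, col):
    """Estimate a lost pixel from the average of its valid neighbours."""
    neighbors = []
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        r, c = row + dr, col + dc
        if 0 <= r < len(image) and 0 <= c < len(image[0]):
            if image[r][c] is not None:  # None marks lost pixels
                neighbors.append(image[r][c])
    return sum(neighbors) / len(neighbors)

# A 3x3 grayscale patch with the centre pixel lost:
patch = [
    [10, 20, 30],
    [40, None, 60],
    [70, 80, 90],
]
print(reconstruct_pixel(patch, 1, 1))  # averages 20, 40, 60, 80 -> 50.0
```

    The "blur tool for paragraphs" idea is this same move in lexical space: produce the plausible in-between value rather than the original.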

  • Revenge of the English majors

    Quoting Stephan H. Wissel.

    LLMs with their dependency on well crafted prompts feels like the revenge of the English majors hurled towards computer science

    Ha!

  • Vibe hacking

    Kevin Collier reporting for NBC News on how a hacker used “vibe hacking” to break into companies across various industries. This information comes from Anthropic’s Threat Intelligence Report for August 2025.

    …one of Anthropic’s periodic reports on threats, the operation began with the hacker convincing Claude Code — Anthropic’s chatbot that specializes in “vibe coding,” or creating computer programming based on simple requests — to identify companies vulnerable to attack. Claude then created malicious software to actually steal sensitive information from the companies. Next, it organized the hacked files and analyzed them to both help determine what was sensitive and could be used to extort the victim companies.

    The chatbot then analyzed the companies’ hacked financial documents to help determine a realistic amount of bitcoin to demand in exchange for the hacker’s promise not to publish that material. It also wrote suggested extortion emails.

    The chatbot then analyzed the companies’ hacked financial documents to help determine a realistic amount of bitcoin to demand—What the…!

    Since I started following AI news, I have read about how you should break your problem down into smaller chunks for the AI, set up a plan, periodically review the generated code, and then accept the changes. I never thought this approach would also be effective for hacking.

  • Nuclear batteries

    Today I learned that nuclear batteries exist, and have for more than 50 years. James Blanchard talks about the genesis of these batteries, why we don’t use them much anymore, what their current applications are, and more.

    In 1970, surgeons in Paris implanted the first nuclear-powered pacemaker, and over the next five years, at least 1,400 additional people received the devices, mostly in France and the United States. Encased in titanium, the batteries for these devices contained a radioactive isotope—typically about a tenth of a gram of plutonium-238—and could operate for decades without maintenance. The invention provided relief to a population of people who previously needed surgery every few years to change out their pacemaker’s chemical battery.

    Technically, they are not nuclear reactors. They are radioisotope power sources.

    The term “nuclear batteries” may evoke images of tiny nuclear reactors, but that’s not how they work. Nuclear batteries don’t split atoms with neutron bombardment. Instead, they capture energy in the form of radiation that’s spontaneously released when atomic nuclei decay.

    Most research groups developing nuclear batteries are focused on harnessing energy from radioactive isotopes of nickel and hydrogen. In many nuclear battery designs, adjacent semiconductors absorb the radiation released by the radioisotopes’ nuclei and convert it to an electric current, much like a solar cell does. In other designs, thermoelectric devices convert the heat produced by the emitted radiation to electricity. So “radioisotope power source” is a better descriptor than “nuclear battery,” but for ease of language, I’ll use these terms interchangeably.
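    The decades-long lifetime in the pacemaker quote follows from plutonium-238’s slow decay. A back-of-the-envelope sketch, using textbook constants I am assuming here (roughly 0.57 W of heat per gram and an 87.7-year half-life; neither figure is from the article):

```python
# Back-of-the-envelope sketch of the pacemaker figures above.
# Assumed constants (not from the article): Pu-238 releases
# ~0.57 W of heat per gram and has a half-life of ~87.7 years.
SPECIFIC_POWER_W_PER_G = 0.57
HALF_LIFE_YEARS = 87.7

def thermal_power_watts(mass_g, years_elapsed):
    """Decay heat of a Pu-238 source after `years_elapsed` years."""
    return mass_g * SPECIFIC_POWER_W_PER_G * 0.5 ** (years_elapsed / HALF_LIFE_YEARS)

p0 = thermal_power_watts(0.1, 0)    # 0.057 W of heat at implant time
p30 = thermal_power_watts(0.1, 30)  # still ~79% of that after 30 years
print(f"{p0 * 1000:.1f} mW -> {p30 * 1000:.1f} mW")
```

    Even after 30 years, the tenth of a gram in a pacemaker battery has lost only about a fifth of its output, which is why these devices could skip the every-few-years battery surgery entirely.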

  • What if AI isn’t a bubble?

    Craig McCaskill gives multiple examples of bubbles and how they ultimately burst. But what if AI isn’t a bubble? What if this is the real deal?

    The AI revolution is real, transformative, and probably unstoppable. Whether it unfolds through sustainable growth or boom-bust cycles depends largely on the choices we make in the next few years. The early signs (including voices like Altman’s warning about overexcitement) suggest we might actually be learning from history.

    The AI bubble’s human impact could be fundamentally different. Previous bubbles destroyed jobs when they burst. AI might destroy jobs while it’s still inflating. If AI actually delivers on its automation promises, we could see the first bubble that eliminates more employment during its rise than its fall.

    This creates an unprecedented social risk: a technology bubble that succeeds in its goals might cause more disruption than one that fails. The Railway Mania gave Britain train networks and industrial jobs. The dot-com bubble gave us e-commerce and digital careers. The AI bubble might give us unprecedented productivity and fewer jobs. That’s a social equation we haven’t solved.

  • AGI. Are we there yet? Part 2

    Dwarkesh Patel arguing why he doesn’t think AGI is right around the corner.

    But the fundamental problem is that LLMs don’t get better over time the way a human would. The lack of continual learning is a huge huge problem. The LLM baseline at many tasks might be higher than an average human’s. But there’s no way to give a model high level feedback. You’re stuck with the abilities you get out of the box. You can keep messing around with the system prompt. In practice this just doesn’t produce anything even close to the kind of learning and improvement that human employees experience.

    The reason humans are so useful is not mainly their raw intelligence. It’s their ability to build up context, interrogate their own failures, and pick up small improvements and efficiencies as they practice a task.

    How do you teach a kid to play a saxophone? You have her try to blow into one, listen to how it sounds, and adjust. Now imagine teaching saxophone this way instead: A student takes one attempt. The moment they make a mistake, you send them away and write detailed instructions about what went wrong. The next student reads your notes and tries to play Charlie Parker cold. When they fail, you refine the instructions for the next student.

    This just wouldn’t work. No matter how well honed your prompt is, no kid is just going to learn how to play saxophone from just reading your instructions. But this is the only modality we as users have to ‘teach’ LLMs anything.