Category: Technology

  • Judgement over technical skill

    Alexander Kohlhofer quoting Brian Eno and explaining how judgement becomes more important than technical skill in the age of AI.

    The great benefit of computer sequencers is that they remove the issue of skill, and replace it with the issue of judgement. 

    With Cubase or Photoshop, anybody can actually do anything, and you can make stuff that sounds very much like stuff you’d hear on the radio, or looks very much like anything you see in magazines. 

    So the question becomes not whether you can do it or not, because any drudge can do it if they’re prepared to sit in front of the computer for a few days, the question then is, “Of all the things you can now do, which do you choose to do?”

  • ChatGPT and students

    A rant posted on Reddit by a professor about his struggles with ChatGPT use among students.

    I actually get excited when I find typos and grammatical errors in their writing now.

    My constant struggle is how to convince them that getting an education in the humanities is not about regurgitating ideas/knowledge that already exist. It’s about generating new knowledge, striving for creative insights, and having thoughts that haven’t been had before. I don’t want you to learn facts. I want you to think. To notice. To question. To reconsider. To challenge. Students don’t yet get that ChatGPT only rearranges preexisting ideas, whether they are accurate or not.

  • Input risk in LLMs

    Doug Slater talking about input risk when using LLMs to code.

    An LLM does not challenge a prompt which is leading or whose assumptions are flawed or context is incomplete. Example: An engineer prompts, “Provide a thread-safe list implementation in C#” and receives 200 lines of flawless, correct code. It’s still the wrong answer, because the question should have been, “How can I make this code thread-safe?” and whose answer is “Use System.Collections.Concurrent” and 1 line of code. The LLM is not able to recognize an instance of the XY problem because it was not asked to.
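    The same trap exists outside C#. Here’s a rough Python analog of Slater’s example (the miniature LockedList and the names are mine, not from his post): the “X” answer hand-rolls locking, while the “Y” answer is one line, because the standard library already ships a thread-safe primitive.

```python
import threading
from queue import Queue  # stdlib, already thread-safe

# The "X" answer: hand-rolled locking, sketched here in miniature
# (the real-world version of this runs to hundreds of lines).
class LockedList:
    def __init__(self):
        self._lock = threading.Lock()
        self._items = []

    def append(self, item):
        with self._lock:
            self._items.append(item)

    def pop(self):
        with self._lock:
            return self._items.pop()

# The "Y" answer: one line, using a primitive the stdlib already provides.
shared = Queue()
shared.put(42)
assert shared.get() == 42
```

    Both are “correct”, which is exactly the point: the LLM will happily produce the first unless you ask the better question.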

    The post covers a lot more ground on the risks involved with LLM generated code. Another thing that caught my attention was:

    LLMs accelerate incompetence.

    Simon Willison talks about the other side when he says:

    LLMs amplify existing expertise

    The conclusion is: If you are smart, LLMs can make you—or at least make you sound—smarter. If you are dumb, LLMs will make you dumber, without you ever knowing.

  • Addiction to… vibe coding

    Fred Benenson talking about how you can get addicted to vibe coding. Yes, vibe coding.

    I’ve been using AI coding assistants like Claude Code for a while now, and I’m here to say (with all due respect to people who have substance abuse issues), I may be an addict. And boy is this an expensive habit.

    Its “almost there” quality — the feeling we’re just one prompt away from the perfect solution — is what makes it so addicting. Vibe coding operates on the principle of variable-ratio reinforcement, a powerful form of operant conditioning where rewards come unpredictably. Unlike fixed rewards, this intermittent success pattern (“the code works! it’s brilliant! it just broke! wtf!”) triggers stronger dopamine responses in our brain’s reward pathways, similar to gambling behaviors.

    What makes this especially effective with AI is the minimal effort required for potentially significant rewards — creating what neuroscientists call an “effort discounting” advantage. Combined with our innate completion bias — the drive to finish tasks we’ve started — this creates a compelling psychological loop that keeps us prompting.

    However, the post is less about addiction and more about the perverse incentives that AI companies have for verbose code generation.

    1. The AI generates verbose, procedural code for a given task
    2. This code becomes part of the context when you ask for further changes or additions (this is key)
    3. The AI now has to read (and you pay for) this verbose code in every subsequent interaction
    4. More tokens processed = more revenue for the company behind the AI
    5. The LLM developers have no incentive to “fix” the verbose code problem because doing so will meaningfully impact their bottom line
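    The compounding effect of step 3 is easy to underestimate: the verbose code is re-sent as context on every follow-up, so its cost scales with the number of interactions. A back-of-envelope sketch, with every number made up for illustration:

```python
# Illustrative only: token counts and the per-token price are invented.
VERBOSE_TOKENS = 2000   # hypothetical verbose implementation
CONCISE_TOKENS = 200    # hypothetical concise implementation
PRICE_PER_1K = 0.01     # hypothetical price per 1,000 input tokens
INTERACTIONS = 10       # follow-up prompts that re-send the code as context

def context_cost(code_tokens, interactions, price_per_1k):
    # The code rides along as context in every follow-up,
    # so the cost scales linearly with the interaction count.
    return code_tokens * interactions * price_per_1k / 1000

verbose = context_cost(VERBOSE_TOKENS, INTERACTIONS, PRICE_PER_1K)
concise = context_cost(CONCISE_TOKENS, INTERACTIONS, PRICE_PER_1K)
print(f"verbose: ${verbose:.2f}, concise: ${concise:.2f}")
# → verbose: $0.20, concise: $0.02
```

    A 10× difference in verbosity stays a 10× difference in revenue, paid over and over for the life of the conversation.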

    Don’t miss the chuckle-inducing postscript.

  • Thinking

    Dustin Curtis talking about how AI is impacting his—and possibly others’—thinking.

    I thought I was using AI in an incredibly positive and healthy way, as a bicycle for my mind and a way to vastly increase my thinking capacity. But LLMs are insidious–using them to explore ideas feels like work, but it’s not real work. Developing a prompt is like scrolling Netflix, and reading the output is like watching a TV show. Intellectual rigor comes from the journey: the dead ends, the uncertainty, and the internal debate. Skip that, and you might still get the insight–but you’ll have lost the infrastructure for meaningful understanding. Learning by reading LLM output is cheap. Real exercise for your mind comes from building the output yourself.

    The Netflix analogy hit hard.

  • Shallowness of LLMs

    Jason Cohen talking about how LLMs, in their knowledge, are wider but shallower than humans.

    It’s interesting how LLMs are wider than any human, but shallower than the best humans. (At many things; not, e.g. chess)

    It can’t do customer service as well as the best humans, but it can do it in 100 languages, which no human can.

    It can’t program as well as the best humans, but it can program in 100 languages and 1000 libraries, which no human can.

    It’s not as good at math or law or medicine or research or history as the best humans in each of those fields, but it is better than the median human in those fields.

  • Visual Basic 6

    I read an insightful article by Ryan Lucas on the history and legacy of Visual Basic. This article links to another chuckle-worthy article by David Platt—written way back in 2012—on why Visual Basic 6 still thrives. Excerpt below.

    A student of mine named Eric once joked that Visual Basic 6 was “the un-killable cockroach” in the Windows ecosystem. That analogy goes deeper than you might think. Cockroaches are successful because they’re simple. They do what they need to do for their ecological niche and no more. Visual Basic 6 did what its creators intended for its market niche: enable very rapid development of limited programs by programmers of lesser experience. It was never meant for heavy-duty coders developing complex applications.

    Visual Basic 6 accomplished its goals by abstracting away the complexity of the underlying Windows OS. Simple things were very simple to accomplish. On the other hand, complex things, such as dealing with threads, were impossible. My rule of thumb for Visual Basic 6 was: if I couldn’t do it within 10 minutes, I couldn’t do it at all.

    Almost all Visual Basic 6 programmers were content with what Visual Basic 6 did. They were happy to be bus drivers: to leave the office at 5 p.m. (or 4:30 p.m. on a really nice day) instead of working until midnight; to play with their families on weekends instead of trudging back to the office; to sleep with their spouses instead of pulling another coding all-nighter and eating cold pizza for breakfast. They didn’t lament the lack of operator overloading or polymorphism in Visual Basic 6, so they didn’t say much.

    The voices that Microsoft heard, however, came from the 3 percent of Visual Basic 6 bus drivers who actively wished to become fighter pilots. These guys took the time to attend conferences, to post questions on CompuServe forums, to respond to articles. Not content to merely fantasize about shooting a Sidewinder missile up the tailpipe of the car that had just cut them off in traffic, they demanded that Microsoft install afterburners on their busses, along with missiles, countermeasures and a head-up display. And Microsoft did.

    But giving Visual Basic .NET to the Visual Basic 6 community was like raising a coyote as a domestic dog, then releasing him into the woods, shouting, “Hunt for your dinner as God intended, you magnificent, wild creature!” Most of them said, “Heck with that. I’m staying on my nice warm cushion by the fire while you open me a can of Alpo.” And Visual Basic 6 kept right on going.

    Ryan Lucas then goes on to argue that because Microsoft made it very difficult—if not impossible—to move from Visual Basic 6 to VB.NET, it ended up costing Microsoft the battle for the web.

    Microsoft had broken the trust of its army of Visual Basic developers. Faced with the options of either starting over from scratch in VB.NET or moving to new web-native languages like JavaScript and PHP, most developers chose the latter—a brutal unforced error by Microsoft. (It’s easy to forget the pole position that Microsoft had on the web in 2001: Internet Explorer had 96% market share, and Visual Basic apps could even be embedded into web pages via ActiveX controls.)

  • Quoting Micha Kaufman

    Micha Kaufman’s email that is doing the rounds of the internet.

    You must understand that what was once considered ‘easy tasks’ will no longer exist; what was considered ‘hard tasks’ will be the new easy, and what was considered ‘impossible tasks’ will be the new hard.

  • LLMs as index funds

    Venkatesh Rao giving an analogy between LLMs and index funds.

    Foundation models like GPT and Claude now serve as the index funds of language. Trained on enormous corpora of human text, they do not try to innovate. Instead, they track the center of linguistic gravity: fluent, plausible, average-case language. They provide efficient, scalable access to verbal coherence, just as index funds offer broad exposure to market returns. For most users, most of the time, this is enough. LLMs automate fluency the way passive investing automates exposure. They flatten out risk and elevate reliability.

    But they also suppress surprise. Like index funds, LLMs are excellent at covering known territory but incapable of charting new ground. The result is a linguistic landscape dominated by synthetic norms: smooth, predictable, uncontroversial. Writing with an LLM is increasingly like buying the market—safe, standardized, and inherently unoriginal.

    In this new environment, the act of writing raw, unassisted text begins to resemble picking penny stocks. It’s risky, inefficient, and potentially seen as naïve. Yet it remains the only place where genuine linguistic alpha—the surplus value of originality—can be found. Alpha lives in human voice, conceptual invention, emotional charge, and expressive risk. It emerges from the irreducible tensions of context, personality, and thought. And like financial alpha, it is quickly absorbed and neutralized by the systems it disrupts. What begins as a surprise becomes a template; what once felt radical becomes the new benchmark.

    As a result, the most original language is retreating into private markets. In Substacks, Signal threads, Discord servers, and private memos, new forms are being tested in semi-anonymous, high-context settings. These are the linguistic equivalents of venture capital and private equity—spaces of risk, scarcity, and concentrated attention. Just as companies now avoid going public too soon, writers may delay or even refuse public release, fearing dilution or misappropriation. Only once an idea matures might it “IPO” into the public sphere—perhaps as a viral tweet, a manifesto, or a cultural phrase. But even then, its time is limited: LLMs will soon flatten it into beta.

    This is part of Venkatesh Rao’s AI slop writing, where he shares a “recipe”—a set of high-level ideas—that he uses to generate posts with an LLM. I didn’t realise I was reading AI slop until I reached the Recipe section.

  • Moving from Bluehost to WordPress.com

    When I started this blog, I’d decided to use Bluehost to host my custom self-hosted WordPress blog. Why? Because when I Googled it, Bluehost came up as the top result. Looking back, I think Bluehost had bought ads for that Google query, and Google showed its ad at the top.

    Over the years, I realized Bluehost’s performance was subpar. My blog was intermittently unavailable due to various server issues at their end, and even when it was up, page load speeds were sluggish.

    Eventually, this led me to abandon Bluehost and move my site to WordPress.com.

    The process was fairly straightforward. I went to my Bluehost WordPress site and exported my data via Tools → Export. This generated an XML file, which could then be imported into my new WordPress.com site. The XML file doesn’t contain the images themselves—instead, it includes their URLs.

    Figure 1

    At my WordPress.com site, I simply imported the previously generated XML file via Tools → Import. It took care of the rest, including fetching and uploading images to the new site. That’s why it’s imperative to keep the old site active—including the domain name—until all your data is imported and verified.
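    To make that concrete, here’s a heavily simplified sketch of what an image entry in the exported WXR file looks like (tag names follow the WordPress WXR export format; the example values are made up). Only the URL is recorded, which is why the importer has to fetch each file from the old site:

```xml
<item>
  <title>my-photo</title>
  <wp:post_type>attachment</wp:post_type>
  <!-- The image file itself is not embedded; the importer downloads it
       from this URL, so the old site must stay online during the import. -->
  <wp:attachment_url>https://example.com/wp-content/uploads/2020/01/my-photo.jpg</wp:attachment_url>
</item>
```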

    To transfer my domain from Bluehost to WordPress.com, I had to follow two steps:

    1. Disable the transfer lock
    2. Get the Transfer Authorization / EPP Code

    Disabling the transfer lock was straightforward. You can toggle this setting easily in Bluehost’s domain settings section.

    Getting the EPP code took a bit more time. When I first generated the EPP code from Bluehost and entered it into WordPress.com, I got an error saying the code was invalid. I tried a few times, even manually typing it to avoid any copy-paste errors with hidden characters, but I still couldn’t proceed. I finally reached out to Bluehost’s chat support. They gave me a different EPP code, which did work. It took a few days for the domain to be fully transferred to WordPress.com.

    Once the domain was transferred, I linked it to my new WordPress.com site—and I was done.