Category: Technology

  • Quoting Micha Kaufman

    Micha Kaufman’s email that has been doing the rounds on the internet.

    You must understand that what was once considered ‘easy tasks’ will no longer exist; what was considered ‘hard tasks’ will be the new easy, and what was considered ‘impossible tasks’ will be the new hard.

  • LLMs as index funds

    Venkatesh Rao drawing an analogy between LLMs and index funds.

    Foundation models like GPT and Claude now serve as the index funds of language. Trained on enormous corpora of human text, they do not try to innovate. Instead, they track the center of linguistic gravity: fluent, plausible, average-case language. They provide efficient, scalable access to verbal coherence, just as index funds offer broad exposure to market returns. For most users, most of the time, this is enough. LLMs automate fluency the way passive investing automates exposure. They flatten out risk and elevate reliability.

    But they also suppress surprise. Like index funds, LLMs are excellent at covering known territory but incapable of charting new ground. The result is a linguistic landscape dominated by synthetic norms: smooth, predictable, uncontroversial. Writing with an LLM is increasingly like buying the market—safe, standardized, and inherently unoriginal.

    In this new environment, the act of writing raw, unassisted text begins to resemble picking penny stocks. It’s risky, inefficient, and potentially seen as naïve. Yet it remains the only place where genuine linguistic alpha—the surplus value of originality—can be found. Alpha lives in human voice, conceptual invention, emotional charge, and expressive risk. It emerges from the irreducible tensions of context, personality, and thought. And like financial alpha, it is quickly absorbed and neutralized by the systems it disrupts. What begins as a surprise becomes a template; what once felt radical becomes the new benchmark.

    As a result, the most original language is retreating into private markets. In Substacks, Signal threads, Discord servers, and private memos, new forms are being tested in semi-anonymous, high-context settings. These are the linguistic equivalents of venture capital and private equity—spaces of risk, scarcity, and concentrated attention. Just as companies now avoid going public too soon, writers may delay or even refuse public release, fearing dilution or misappropriation. Only once an idea matures might it “IPO” into the public sphere—perhaps as a viral tweet, a manifesto, or a cultural phrase. But even then, its time is limited: LLMs will soon flatten it into beta.

    This is part of Venkatesh Rao’s AI slop writing, where he shares a “recipe”—a set of high-level ideas—that he uses to generate posts with an LLM. I didn’t realise I was reading AI slop until I reached the Recipe section.

  • Moving from Bluehost to WordPress.com

    When I started this blog, I’d decided to use Bluehost to host my custom self-hosted WordPress blog. Why? Because when I Googled it, Bluehost came up as the top result. Looking back, I think Bluehost had bought ads for that Google query, and Google showed its ad at the top.

    Over the years, I realized Bluehost’s performance was subpar. My blog was intermittently unavailable due to various server issues at their end, and even when it was up, page loads were sluggish.

    Eventually, this led me to abandon Bluehost and move my site to WordPress.com.

    The process was fairly straightforward. I went to my Bluehost WordPress site and exported my data via Tools → Export. This generated an XML file, which could then be imported into the new WordPress.com site. The XML file doesn’t contain the images themselves, only their URLs.
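
    If you want to see which image URLs the export references before migrating, a quick sketch like the one below can list them from the WXR file. This is my own illustration, not part of the WordPress tooling, and the wp namespace version (1.2 here) is an assumption that may differ between exports.

    ```python
    # List the attachment (image) URLs referenced by a WordPress export (WXR) file.
    # Assumption: the export uses wp namespace version 1.2; older exports may use
    # 1.1, so adjust WP_NS if needed. "myblog.wordpress.xml" is a stand-in name.
    import xml.etree.ElementTree as ET

    WP_NS = "{http://wordpress.org/export/1.2/}"

    tree = ET.parse("myblog.wordpress.xml")
    for item in tree.getroot().iter("item"):
        if item.findtext(f"{WP_NS}post_type") == "attachment":
            print(item.findtext(f"{WP_NS}attachment_url"))
    ```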

    At my WordPress.com site, I simply imported the previously generated XML file via Tools → Import. It took care of the rest, including fetching and uploading images to the new site. That’s why it’s imperative to keep the old site active—including the domain name—until all your data is imported and verified.

    To transfer my domain from Bluehost to WordPress.com, I had to follow two steps:

    1. Disable the transfer lock
    2. Get the Transfer Authorization / EPP Code

    Disabling the transfer lock was straightforward. You can toggle this setting easily in Bluehost’s domain settings section.
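
    One way to double-check that the lock is really off (an extra step of my own, not something Bluehost asks for) is to look for the clientTransferProhibited status in the domain’s WHOIS record:

    ```python
    # Check whether a domain still carries the registrar transfer lock by looking
    # for the "clientTransferProhibited" EPP status in its WHOIS record.
    # Assumes the `whois` command-line tool is installed; example.com is a stand-in.
    import subprocess

    result = subprocess.run(["whois", "example.com"], capture_output=True, text=True)
    if "clientTransferProhibited" in result.stdout:
        print("Transfer lock is still enabled")
    else:
        print("No transfer lock found, safe to request the EPP code")
    ```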

    Getting the EPP code took a bit more time. When I first generated the EPP code from Bluehost and entered it into WordPress.com, I got an error saying the code was invalid. I tried a few times, even manually typing it to avoid any copy-paste errors with hidden characters, but I still couldn’t proceed. I finally reached out to Bluehost’s chat support. They gave me a different EPP code, which did work. It took a few days for the domain to be fully transferred to WordPress.com.

    Once the domain was transferred, I linked it to my new WordPress.com site—and I was done.

  • AI can empower developers to rewrite code without regret

    Matthew Sinclair talks about how AI can help programmers scrap the code they—or rather, the AI—have written and start over when they realise the approach won’t work, will lead to technical debt, or is flawed for any number of other reasons, because new code can be generated fairly quickly.

    Working with Claude Code has fundamentally shifted how I think about the economics of programming time. Traditionally, coding involves three distinct “time buckets”:

    • Why am I doing this? Understanding the business problem and value 
    • What do I need to do? Designing the solution conceptually 
    • How am I going to do it? Actually writing the code 

    For decades, that last bucket consumed enormous amounts of our time. We’d spend hours, days or weeks writing, debugging, and refining. With Claude, that time cost has plummeted to nearly zero. I can generate thousands of lines of functional code in a sitting—something that is, frankly, mind-blowing.

    And there’s a new skill that emerges: wielding the knife. With code generation being essentially free, we need to become much more comfortable with throwing away entire solutions. The sunk cost fallacy hits programmers hard—we hate discarding code we’ve invested in, fearing we might break something important or never get back to a working state.

    But when your assistant can rewrite everything in minutes, that calculus changes completely. Three times during my backend project, I looked at substantial amounts of code—thousands of lines that technically worked—and decided to scrap it entirely because the approach wasn’t right. This wasn’t easy. My instinct was still to try to salvage and refactor. But the right move was to step back, rethink the approach, and direct the AI down a different path.

    This willingness to cut ruthlessly is a muscle most developers haven’t developed yet. It requires confidence in your architectural judgment and a radical shift in how you value implementation time.

  • Program vs Product

    Dylan Beattie explaining the problem with vibe coding by contrasting a program with a product.

    Probably the single most important lesson I’ve learned in my career, the thing that I would argue is the hallmark of “experience”, is understanding just how much work it takes to turn a working program into a viable product. It’s why developer estimates are so notoriously optimistic – and why experienced developers are so notoriously cynical. Let’s say you crank out a bit of code that’ll take responses from a web form and add them in an Excel spreadsheet. That’s not that hard… yay! we just built a Typeform competitor in one afternoon! Except, no, you didn’t. You made one thing work one time on one computer. You haven’t considered encoding, internationalization, concurrency, authentication, telemetry, billing, branding, mobile devices, deployment. You haven’t hit any of the weird limits yet – ever had a system work brilliantly for the first 65,535 requests and then fall over? You don’t have a product. At best, you have a proof-of-concept of a good idea that, if some very smart people work very hard, might become a viable product.
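
    That 65,535 isn’t an arbitrary number: it’s the largest value an unsigned 16-bit integer can hold, so a counter stored in one silently wraps to zero on the next request. A toy illustration (mine, not from the post):

    ```python
    # Toy illustration of the 65,535 limit: a request counter stored in an
    # unsigned 16-bit field wraps back to 0 after 2**16 - 1 = 65,535.
    import ctypes

    counter = ctypes.c_uint16(0)
    for request in range(1, 70_001):
        counter.value += 1          # silently wraps past 65,535
        if counter.value == 0:
            print(f"counter wrapped on request #{request}")   # request 65,536
            break
    ```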

  • Programming in natural language

    Edsger W. Dijkstra explaining why using natural language for programming is a bad idea.

    In order to make machines significantly easier to use, it has been proposed (to try) to design machines that we could instruct in our native tongues. This would, admittedly, make the machines much more complicated, but, it was argued, by letting the machine carry a larger share of the burden, life would become easier for us. It sounds sensible provided you blame the obligation to use a formal symbolism as the source of your difficulties. But is the argument valid? I doubt.

    We know in the meantime that the choice of an interface is not just a division of (a fixed amount of) labour, because the work involved in co-operating and communicating across the interface has to be added. We know in the meantime —from sobering experience, I may add— that a change of interface can easily increase at both sides of the fence the amount of work to be done (even drastically so). Hence the increased preference for what are now called “narrow interfaces”. Therefore, although changing to communication between machine and man conducted in the latter’s native tongue would greatly increase the machine’s burden, we have to challenge the assumption that this would simplify man’s life.

    Instead of regarding the obligation to use formal symbols as a burden, we should regard the convenience of using them as a privilege: thanks to them, school children can learn to do what in earlier days only genius could achieve.

    I saw this post on Hacker News and this comment sums up Dijkstra’s point succinctly.

    Natural language has embedded limitations that stem from our own mental limitations - the human mind thinks sometimes too abstract or too specific things, and misses important details or generalizations.

    As a programmer, I know first hand that the problems or even absurdities of some assignments only become apparent after one has begun to implement the code as code, i.e. as strict symbolisms.

    Not to mention that it often takes more time to explain something accurately in natural language than it takes to just write the algorithm as code.
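
    A small example of that last point (mine, not the commenter’s): even a simple operation takes several careful sentences to pin down in English, while the code states it exactly.

    ```python
    # In natural language: "Given a list of numbers, return the largest value that
    # is strictly smaller than the maximum; if the list has fewer than two distinct
    # values, report that no such value exists."
    # In code, the same specification is shorter and unambiguous:
    def second_largest(numbers: list[float]) -> float | None:
        distinct = set(numbers)
        if len(distinct) < 2:
            return None
        distinct.remove(max(distinct))
        return max(distinct)
    ```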

  • Learn to code

    Kyrylo Silin reflecting on the importance of learning to code.

    AI is the new shiny toy everyone wants to play with. And to be honest, it’s impressive. The problem with AI is that with every year, it gets better and better. Wait what? How’s that a problem? Well, with every new year you lose control.

    The more you rely on AI, the less you understand what you’re doing. The less you understand, the more AI vendors can control you. And the more control they have, the more they can charge you. It’s a vicious cycle.

    This shift was inevitable. Humanity must adapt to this new reality. AI isn’t going away, and we need to learn how to use it to our advantage.

    The large language models (LLMs) created by tech giants have absorbed decades of knowledge — our knowledge. They’ve been trained on our work.

    Now they’re selling it back to us and telling us we only need to learn English to code. This is a lie. As a new programmer, I don’t want you to fall for it. There’s no corner-cutting. Get your shit together and learn to code.

    Ha!

    While I agree that programmers need to learn to code, the unfortunate reality is that AI vendors will charge us to write code with models trained on our own work. The charge is for completing a function in minutes, sometimes even seconds. The charge is for speed and efficiency.

  • AI Einstein

    Thomas Wolf arguing why we won’t have an AI Einstein—at least not with AI in its current form.

    …the skill to ask the right questions and to challenge even what one has learned. A real science breakthrough is Copernicus proposing, against all the knowledge of his days -in ML terms we would say “despite all his training dataset”-, that the earth may orbit the sun rather than the other way around.

    To create an Einstein in a data center, we don’t just need a system that knows all the answers, but rather one that can ask questions nobody else has thought of or dared to ask. One that writes ‘What if everyone is wrong about this?’ when all textbooks, experts, and common knowledge suggest otherwise.

    Wonderful insight. I am pretty sure that if AI had been available before we discovered that the Earth orbits the Sun—and not the other way around—it wouldn’t have dared to question the prevailing belief.

  • Knowledge collapse

    A long but worthwhile paper by Andrew J. Peterson: AI and the Problem of Knowledge Collapse.

    We identify conditions under which AI, by reducing the cost of access to certain modes of knowledge, can paradoxically harm public understanding. While large language models are trained on vast amounts of diverse data, they naturally generate output towards the ‘center’ of the distribution. This is generally useful, but widespread reliance on recursive AI systems could lead to a process we define as “knowledge collapse”, and argue this could harm innovation and the richness of human understanding and culture. However, unlike AI models that cannot choose what data they are trained on, humans may strategically seek out diverse forms of knowledge if they perceive them to be worthwhile. 

    The author analyzes how our dependence on AI could limit our knowledge to a narrow subset of views—the views the AI was trained on—until the “long-tail” ideas are eventually forgotten. We see this happening right now when we ask DeepSeek, the Chinese AI model, about Tiananmen Square or Taiwan.
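
    A toy sketch of that narrowing (my own illustration, not the paper’s model): if each generation trains only on the average-case middle of the previous generation’s output, the tails disappear within a few rounds.

    ```python
    # Toy sketch of "knowledge collapse": each generation trains only on the middle
    # 80% of the previous generation's output, so the distribution's tails vanish.
    # This is my illustration of the idea, not the model used in the paper.
    import random
    import statistics

    population = [random.gauss(0, 1) for _ in range(10_000)]
    print(f"start: stdev = {statistics.stdev(population):.3f}")

    for generation in range(1, 6):
        cuts = statistics.quantiles(population, n=10)
        middle = [x for x in population if cuts[0] <= x <= cuts[-1]]   # drop the tails
        population = [random.choice(middle) for _ in range(10_000)]    # "retrain"
        print(f"gen {generation}: stdev = {statistics.stdev(population):.3f}")
    ```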

    With increasing integration of LLM-based systems, certain popular sources or beliefs which were common in the training data may come to be reinforced in the public mindset (and within the training data), while other “long-tail” ideas are neglected and eventually forgotten. 

    Such a process might be reinforced by an ‘echo chamber’ or information cascade effect, in which repeated exposure to this restricted set of information leads individuals to believe that the neglected, unobserved tails of knowledge are of little value.

    We have all seen the echo-chamber effect on social media. An echo chamber of AI responses would only further solidify people’s beliefs—whether right or wrong.

    But knowledge collapse has been happening throughout human evolution.

    …traditional hunter-gatherers could identify thousands of different plants and knew their medicinal usages, whereas most humans today only know a few dozen plants and whether they can be purchased in a grocery store. This could be seen as a more efficient form of specialization of information across individuals, but it might also impact our beliefs about the value of those species or of a walk through a forest, or influence scientific or policy-relevant judgements.

    This will be a long-term effect of AI, whose implications will become visible only after a decade or more.

  • Simon Willison shares how he uses LLMs to write code

    Simon Willison has published a post on how he uses LLMs to write code. It is filled with great tips and nuggets of wisdom. Here are my favourites.

    If someone tells you that coding with LLMs is easy they are (probably unintentionally) misleading you. They may well have stumbled on to patterns that work, but those patterns do not come naturally to everyone.

    If you assume that this technology will implement your project perfectly without you needing to exercise any of your own skill you’ll quickly be disappointed.

    Instead, use them to augment your abilities. My current favorite mental model is to think of them as an over-confident pair programming assistant who’s lightning fast at looking things up, can churn out relevant examples at a moment’s notice and can execute on tedious tasks without complaint.

    A crucial characteristic of any model is its training cut-off date. This is the date at which the data they were trained on stopped being collected. For OpenAI’s models this is usually October of 2023. Anthropic and Gemini and other providers may have more recent dates.

    This is extremely important for code, because it influences what libraries they will be familiar with. If the library you are using had a major breaking change since October 2023, OpenAI models won’t know about it!

    I find LLMs respond extremely well to function signatures like the one I use here. I get to act as the function designer, the LLM does the work of building the body to my specification.
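
    A hypothetical example of what handing the model a signature might look like (my sketch, not the function from Simon’s post): fix the interface and the docstring, and ask the LLM to write only the body.

    ```python
    # A signature-first prompt: the human fixes the interface and the contract,
    # the LLM is asked to fill in the body. Hypothetical example, not the one
    # from Simon Willison's post.
    from pathlib import Path

    def count_lines_by_extension(root: Path, extensions: set[str]) -> dict[str, int]:
        """Recursively walk `root` and return a mapping from file extension
        (e.g. ".py") to the total number of lines across all files with that
        extension, counting only the given extensions.
        """
        ...  # the LLM writes this part to the specification above
    ```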

    Your responsibility as a software developer is to deliver working systems. If you haven’t seen it run, it’s not a working system. You need to invest in strengthening those manual QA habits.

    it’s not about getting work done faster, it’s about being able to ship projects that I wouldn’t have been able to justify spending time on at all.

    And my favourite.

    LLMs amplify existing expertise