Category: Artificial Intelligence

  • Programming in natural language

    Edsger W. Dijkstra explaining why using natural language for programming is a bad idea.

    In order to make machines significantly easier to use, it has been proposed (to try) to design machines that we could instruct in our native tongues. This would, admittedly, make the machines much more complicated, but, it was argued, by letting the machine carry a larger share of the burden, life would become easier for us. It sounds sensible provided you blame the obligation to use a formal symbolism as the source of your difficulties. But is the argument valid? I doubt.

    We know in the meantime that the choice of an interface is not just a division of (a fixed amount of) labour, because the work involved in co-operating and communicating across the interface has to be added. We know in the meantime —from sobering experience, I may add— that a change of interface can easily increase at both sides of the fence the amount of work to be done (even drastically so). Hence the increased preference for what are now called “narrow interfaces”. Therefore, although changing to communication between machine and man conducted in the latter’s native tongue would greatly increase the machine’s burden, we have to challenge the assumption that this would simplify man’s life.

    Instead of regarding the obligation to use formal symbols as a burden, we should regard the convenience of using them as a privilege: thanks to them, school children can learn to do what in earlier days only genius could achieve.

    I saw this post on Hacker News and this comment sums up Dijkstra’s point succinctly.

    Natural language has embedded limitations that stem from our own mental limitations: the human mind sometimes thinks in terms that are too abstract or too specific, and misses important details or generalizations.

    As a programmer, I know first hand that the problems or even absurdities of some assignments only become apparent after one has begun to implement them as code, i.e. as strict symbolism.

    Not to mention that it often takes more time to explain something accurately in natural language than it takes to just write the algorithm as code.
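
    That last point is easy to see with a small example of my own (not from the comment). Take the requirement “give me each customer’s three most recent orders”: the English sentence never says how to break ties on the same date, what to do with orders that have no date, or whether fewer than three is acceptable. Writing it as code forces every one of those decisions into the open. A minimal sketch, with hypothetical field names:

    ```python
    # Hypothetical requirement: "give me each customer's three most recent orders".
    # Every ambiguity in the English sentence has to be resolved explicitly here.
    from collections import defaultdict

    def latest_orders(orders, per_customer=3):
        """Return up to `per_customer` most recent orders for each customer.

        Ties on date are broken by order id (newest id wins); orders with no
        date are skipped -- choices the English sentence never mentions.
        """
        by_customer = defaultdict(list)
        for order in orders:
            if order.get("date") is not None:
                by_customer[order["customer"]].append(order)
        return {
            customer: sorted(rows, key=lambda o: (o["date"], o["id"]), reverse=True)[:per_customer]
            for customer, rows in by_customer.items()
        }
    ```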

  • Learn to code

    Kyrylo Silin reflecting on the importance of learning to code.

    AI is the new shiny toy everyone wants to play with. And to be honest, it’s impressive. The problem with AI is that with every year, it gets better and better. Wait what? How’s that a problem? Well, with every new year you lose control.

    The more you rely on AI, the less you understand what you’re doing. The less you understand, the more AI vendors can control you. And the more control they have, the more they can charge you. It’s a vicious cycle.

    This shift was inevitable. Humanity must adapt to this new reality. AI isn’t going away, and we need to learn how to use it to our advantage.

    The large language models (LLMs) created by tech giants have absorbed decades of knowledge — our knowledge. They’ve been trained on our work.

    Now they’re selling it back to us and telling us we only need to learn English to code. This is a lie. As a new programmer, I don’t want you to fall for it. There’s no corner-cutting. Get your shit together and learn to code.

    Ha!

    While I agree that programmers need to learn to code, the unfortunate reality is that AI vendors will charge us for code generated by models trained on our own work. The charge is for completing a function in minutes, sometimes seconds. The charge is for speed and efficiency.

  • AI Einstein

    Thomas Wolf arguing why we won’t have an AI Einstein—at least not with AI in its current form.

    …the skill to ask the right questions and to challenge even what one has learned. A real science breakthrough is Copernicus proposing, against all the knowledge of his days -in ML terms we would say “despite all his training dataset”-, that the earth may orbit the sun rather than the other way around.

    To create an Einstein in a data center, we don’t just need a system that knows all the answers, but rather one that can ask questions nobody else has thought of or dared to ask. One that writes ‘What if everyone is wrong about this?’ when all textbooks, experts, and common knowledge suggest otherwise.

    Wonderful insight. I am pretty sure that if AI had been available before we discovered that Earth orbits the Sun—and not the other way around—AI wouldn’t have dared to question that knowledge.

  • Knowledge collapse

    A long but worthwhile paper by Andrew J. Peterson on AI and the Problem of Knowledge Collapse.

    We identify conditions under which AI, by reducing the cost of access to certain modes of knowledge, can paradoxically harm public understanding. While large language models are trained on vast amounts of diverse data, they naturally generate output towards the ‘center’ of the distribution. This is generally useful, but widespread reliance on recursive AI systems could lead to a process we define as “knowledge collapse”, and argue this could harm innovation and the richness of human understanding and culture. However, unlike AI models that cannot choose what data they are trained on, humans may strategically seek out diverse forms of knowledge if they perceive them to be worthwhile. 

    The author analyzes how our dependence on AI could limit our knowledge to a narrow subset of views—the views the AI was trained on—and eventually lead us to forget the “long-tail” ideas. We see this happening right now when we ask DeepSeek, the Chinese AI model, about Tiananmen Square or Taiwan.

    With increasing integration of LLM-based systems, certain popular sources or beliefs which were common in the training data may come to be reinforced in the public mindset (and within the training data), while other “long-tail” ideas are neglected and eventually forgotten. 

    Such a process might be reinforced by an ‘echo chamber’ or information cascade effect, in which repeated exposure to this restricted set of information leads individuals to believe that the neglected, unobserved tails of knowledge are of little value.

    We have all seen the effect of the echo chamber in social media. An echo chamber built on AI responses would only further solidify people’s beliefs—whether right or wrong.
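
    The mechanism is easy to mimic in a toy simulation. This is my own sketch, not the paper’s model, and the 90% popularity cutoff is an arbitrary assumption: if each generation of training data keeps only the most popular ideas from the previous one, the long tail disappears within a handful of iterations.

    ```python
    # Toy sketch of "knowledge collapse": each generation re-learns idea popularity
    # only from the most common ideas of the previous generation, so rare ("long-tail")
    # ideas are progressively forgotten. The Zipf parameter and 90% cutoff are arbitrary.
    import numpy as np

    rng = np.random.default_rng(0)
    ideas = rng.zipf(a=2.0, size=50_000)                 # heavy-tailed population of "ideas"

    def next_generation(sample, keep_fraction=0.9):
        """Keep the most popular ideas covering `keep_fraction` of mentions, then resample."""
        values, counts = np.unique(sample, return_counts=True)
        order = np.argsort(counts)[::-1]                 # most popular first
        cumulative = np.cumsum(counts[order]) / counts.sum()
        kept = values[order][cumulative <= keep_fraction]
        if kept.size == 0:                               # always keep at least the top idea
            kept = values[order][:1]
        survivors = sample[np.isin(sample, kept)]
        return rng.choice(survivors, size=sample.size)   # training data for the next model

    for generation in range(1, 6):
        ideas = next_generation(ideas)
        print(f"generation {generation}: {np.unique(ideas).size} distinct ideas remain")
    ```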

    But knowledge collapse has been happening throughout human evolution.

    …traditional hunter-gatherers could identify thousands of different plants and knew their medicinal usages, whereas most humans today only know a few dozen plants and whether they can be purchased in a grocery store. This could be seen as a more efficient form of specialization of information across individuals, but it might also impact our beliefs about the value of those species or of a walk through a forest, or influence scientific or policy-relevant judgements.

    This will be a long-term effect of AI, whose implications will only become visible after a decade or more.

  • Simon Willison shares how he uses LLMs to write code

    Simon Willison has published a write-up of how he uses LLMs to write code. It is filled with great tips and nuggets of wisdom. Here are my favourites.

    If someone tells you that coding with LLMs is easy they are (probably unintentionally) misleading you. They may well have stumbled on to patterns that work, but those patterns do not come naturally to everyone.

    If you assume that this technology will implement your project perfectly without you needing to exercise any of your own skill you’ll quickly be disappointed.

    Instead, use them to augment your abilities. My current favorite mental model is to think of them as an over-confident pair programming assistant who’s lightning fast at looking things up, can churn out relevant examples at a moment’s notice and can execute on tedious tasks without complaint.

    A crucial characteristic of any model is its training cut-off date. This is the date at which the data they were trained on stopped being collected. For OpenAI’s models this is usually October of 2023. Anthropic and Gemini and other providers may have more recent dates.

    This is extremely important for code, because it influences what libraries they will be familiar with. If the library you are using had a major breaking change since October 2023, OpenAI models won’t know about it!

    I find LLMs respond extremely well to function signatures like the one I use here. I get to act as the function designer, the LLM does the work of building the body to my specification.
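
    A rough illustration of what he means (my own example, not one from his post): the human writes the signature and docstring, and the prompt asks the model to fill in only the body.

    ```python
    # Signature-first prompting: the interface below is what the human specifies;
    # the body is the part the LLM is asked to supply. (Hypothetical example.)
    from pathlib import Path

    def count_file_extensions(root: Path, include_hidden: bool = False) -> dict[str, int]:
        """Recursively count files under `root`, keyed by lowercased extension.

        Files with no extension are counted under ''. Hidden files (names
        starting with '.') are skipped unless `include_hidden` is True.
        """
        counts: dict[str, int] = {}
        for path in root.rglob("*"):
            if not path.is_file():
                continue
            if not include_hidden and path.name.startswith("."):
                continue
            counts[path.suffix.lower()] = counts.get(path.suffix.lower(), 0) + 1
        return counts
    ```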

    Your responsibility as a software developer is to deliver working systems. If you haven’t seen it run, it’s not a working system. You need to invest in strengthening those manual QA habits.

    it’s not about getting work done faster, it’s about being able to ship projects that I wouldn’t have been able to justify spending time on at all.

    And my favourite.

    LLMs amplify existing expertise

  • Services as Software

    Eurasia Review talking about the rise of Services as Software.

    The traditional Software-as-a-Service (SaaS) model disrupted enterprise Information Technology (IT) by replacing expensive, on-premise software solutions with cloud-based applications. Databases were maintained by the SaaS providers remotely, and the per-seat license model evolved rapidly to annuity payments and the rise of Annualised Recurring Revenue (ARR). SaaS has dominated the technology world for the better part of two decades. Today, AI is pushing the envelope by turning services built to be used by humans as ‘self-serve’ utilities into automatically-running software solutions that execute autonomously—a paradigm shift the venture capital world, in particular, has termed ‘Services as Software’.

    A little later in the article. 

    The AI-driven shift brings into question the traditional notion of availing an ‘expert service’. Software development, legal, and financial services are all coveted industries where workers are considered ‘experts’ delivering specialised services. The human role will undergo tremendous redefinition and will require calibrated re-skilling.

  • Another take on AI and jobs

    Phillip Carter talking about the impact of AI on jobs.

    A lot of dumbasses in company leadership see AI and salivate at the idea of reducing headcount so “AI can do the work”. This is clearly a fear that a lot of people who earn paychecks for a living have. I have two thoughts on this topic:

    Firstly, if you’re a company leader who sees a wave as large as the introduction of the computer coming and your thought is to “use less resources to do the same work”, you’re an uncreative hack and it’s you who deserves to be fired. The goal should be how you can accomplish more when you have cognitive co-processors at your disposal.

    Secondly, it is undeniable that a shift will occur and with that there will be damage done. A lot of people are uninterested in learning new skills for work because work is just … work. It’s their means to earn a paycheck so they can do what they actually care about. I won’t judge that behavior because it’s not inherently right or wrong. But I will say that if you don’t want to be caught with your pants down when your workplace does expect you to do more and different things with this technology, there’s no better time than now to start learning how to use it.

    The second paragraph echoes the reasoning of Dustin Ewers, who argues that AI will create more software jobs rather than eliminate them.

    And this gives me hope.

  • Economics of using AI for development

    This is probably the first article I have read on the economics of using AI for development. Vikram Sreekanti and Joseph E. Gonzalez talk about their experience of using Devin for a month.

    When Devin works, the economics of using it are pretty good. You currently pay $500 for 250 ACUs, and the small tasks that Devin succeeded at took 1-5 ACUs ($2-10). Paying a few dollars to fix small bugs and save even just one hour per-bug is a great tradeoff — one that we would make any day of the week. The issue is that there’s a very narrow set of tasks that are long enough to require an engineer to context switch and short enough to be in Devin’s working window.

    When Devin doesn’t work, the economics start to look suspect. The 3 bigger tasks we tried averaged about 20 ACUs and 2 of the 3 didn’t yield usable results. While $40 would be extremely cheap for implementing these larger tasks, our (to be fair, limited) sample indicates that these larger tasks consume a disproportional number of ACUs — these tasks weren’t 5-10x harder than the smaller ones that succeeded. More importantly, they often fail, so you get nothing for your $40.

    The last statement is crucial. If you pay a developer $40 and they don’t deliver, you have the option to go back and say, “Hey, this isn’t what I wanted. I expected…”—and still get value for your money.

    But with AI, if you spend $40 and it doesn’t deliver, that money is gone. Poof!
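
    To put rough numbers on that, here is a back-of-the-envelope calculation using only the figures quoted above and the one-in-three success rate from their (tiny) sample:

    ```python
    # Back-of-the-envelope cost of a "big" Devin task, from the article's figures:
    # $500 buys 250 ACUs, big tasks averaged ~20 ACUs, and 1 of 3 attempts succeeded.
    dollars_per_acu = 500 / 250              # $2 per ACU
    acus_per_big_task = 20
    success_rate = 1 / 3                     # from a very small sample

    cost_per_attempt = dollars_per_acu * acus_per_big_task
    expected_cost_per_success = cost_per_attempt / success_rate

    print(f"${cost_per_attempt:.0f} per attempt")                 # $40
    print(f"${expected_cost_per_success:.0f} per usable result")  # $120
    ```

    Forty dollars per attempt still sounds cheap, but $120 per usable result, with no partial value from the failures, is a much less attractive deal.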

    That said, I don’t want to get carried away. What if, a year from now, AI actually starts delivering?

  • Model Autophagy Disorder

    Interesting read on Livescu.

    …when AI models generate things—text, images, sound—and then those generated products are used to train a subsequent model, the new model actually gets worse at generating images and texts. Over a few generations it can fail completely, producing only a string of gibberish or a single same image over and over again.

    And this is how AI goes ‘MAD’. Later in the article, the author describes a funny little analogy for how to tell rich data from poor data.
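
    The failure mode is easy to reproduce in miniature. Here is a toy sketch of my own (not from the article): fit a Gaussian to a finite sample, generate a new synthetic dataset from the fit, refit, and repeat. The estimated spread typically drifts toward zero, and after enough generations the “model” produces nearly the same value over and over.

    ```python
    # Toy demonstration of a generative model trained on its own output: each
    # generation fits a Gaussian to data sampled from the previous fit. The
    # estimated spread tends to shrink until the outputs are nearly identical.
    import numpy as np

    rng = np.random.default_rng(42)
    data = rng.normal(loc=0.0, scale=1.0, size=50)          # "real" data, generation 0

    for generation in range(1, 201):
        mu, sigma = data.mean(), data.std()                 # fit a Gaussian to the current data
        data = rng.normal(loc=mu, scale=sigma, size=50)     # next generation trains on synthetic data
        if generation % 50 == 0:
            print(f"generation {generation:3d}: fitted sigma = {sigma:.4f}")
    ```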

  • Comparative advantage

    Noah Smith explaining what comparative advantage is, while arguing that we will still have jobs when AI takes over the world—with some caveats.

    Comparative advantage actually means “who can do a thing better relative to the other things they can do”. So for example, suppose I’m worse than everyone at everything, but I’m a little less bad at drawing portraits than I am at anything else. I don’t have any competitive advantages at all, but drawing portraits is my comparative advantage. 

    The key difference here is that everyone — every single person, every single AI, everyone — always has a comparative advantage at something!

    To help illustrate this fact, let’s look at a simple example. A couple of years ago, just as generative AI was getting big, I co-authored a blog post about the future of work with an OpenAI engineer named Roon. In that post, we gave an example illustrating how someone can get paid — and paid well — to do a job that the person hiring them would actually be better at doing:

    Imagine a venture capitalist (let’s call him “Marc”) who is an almost inhumanly fast typist. He’ll still hire a secretary to draft letters for him, though, because even if that secretary is a slower typist than him, Marc can generate more value using his time to do something other than drafting letters. So he ends up paying someone else to do something that he’s actually better at

    (In fact, we lifted this example from an econ textbook by Greg Mankiw, who in turn lifted it from Paul Samuelson.) 

    Note that in our example, Marc is better than his secretary at every single task that the company requires. He’s better at doing VC deals. And he’s also better at typing. But even though Marc is better at everything, he doesn’t end up doing everything himself! He ends up doing the thing that’s his comparative advantage — doing VC deals. And the secretary ends up doing the thing that’s his comparative advantage — typing. Each worker ends up doing the thing they’re best at relative to the other things they could be doing, rather than the thing they’re best at relative to other people.
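
    A toy version of the arithmetic behind that story (my numbers, not Noah Smith’s): what decides who should type is each person’s opportunity cost of a page, not who types faster.

    ```python
    # Comparative advantage in the Marc-and-the-secretary example, with made-up numbers.
    # Marc is better at both tasks (absolute advantage), but typing costs him far more
    # in forgone deal value, so the secretary has the comparative advantage at typing.
    marc_deal_value_per_hour = 1000.0   # $ of deal value Marc creates per hour of VC work
    marc_pages_per_hour = 10            # pages Marc can type per hour
    sec_other_value_per_hour = 25.0     # $ of value the secretary creates at other admin work
    sec_pages_per_hour = 6              # pages the secretary can type per hour

    marc_cost_per_page = marc_deal_value_per_hour / marc_pages_per_hour   # $100.00
    sec_cost_per_page = sec_other_value_per_hour / sec_pages_per_hour     # ~$4.17

    print(f"Marc's opportunity cost per page:      ${marc_cost_per_page:.2f}")
    print(f"Secretary's opportunity cost per page: ${sec_cost_per_page:.2f}")
    ```

    With these numbers, any typing wage between roughly $4 and $100 per page leaves both of them better off, which is exactly why Marc hires the secretary even though he is the faster typist.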