• Outreach

    This comment by th explains how DEI is essentially an outreach program. This is on the news of Python Software Foundations’ decision to withdraw from $1.5 million proposal for US government grant program.

    It seems like a number of the “DEI is anti-merit discrimination” messages in this thread are overlooking how DEI work usually works.

    A relevant tweet from 2016 (https://x.com/jessicamckellar/status/737299461563502595):

    > Hello from your @PyCon Diversity Chair. % PyCon talks by women: (2011: 1%), (2012: 7%), (2013: 15%), (2014/15: 33%), (2016: 40%). #pycon2016

    Increased diversity in communities usually comes from active outreach work. PyCon’s talk selection process starts blinded.

    If 300 people submit talks and 294 are men, then 98% of talks will likely be from men.

    If 500 people submit talks and 394 are men, then ~79% will likely be by men.

    Outreach to encourage folks to apply/join/run/etc. can make a big difference in the makeup of applicants and the makeup of the end results. Bucking the trend even during just one year can start a snowball effect that moves the needle further in future years.

    The world doesn’t run on merit. Who you know, whether you’ve been invited in to the club, and whether you feel you belong all affect where you end up. So unusually homogenous communities (which feel hard for outsiders to break into) can arise even without deliberate discrimination.

    Organizations like the PSF could choose to say “let’s avoid outreach work and simply accept the status quo forever”, but I would much rather see the Python community become more diverse and welcoming over time.

    Filed under
  • Subconscious processing

    There’s a spirited discussion on the research paper A Definition of AGI on Hacker News. This comment by fnordpiglet caught my attention.

    Try this exercise. Do not think and let your mind clear. Ideas will surface. By what process did they surface? Or clear your mind entirely then try to perform some complex task. You will be able to. How did you do this without thought? We’ve all had sudden insights without deliberation or thought. Where did these come from? By what process did you arrive at them? Most of the things we do or think are not deliberative and definitely not structured with language. This process is unobservable and not measurable, and the only way we have to do so is through imperfect verbalizations that hint out some vague outline of a subconscious mind. But without being able to train a model on that subconscious process, one that can’t be expressed in language with any meaningful sufficiency, how will language models demonstrate it? Their very nature of autoregressive inference prohibits such a process from emerging at any scale. We might very well be able to fake it to an extent that it fools us, but awareness isn’t there – and I’d assert that awareness is all you need.

    Filed under
  • Understanding

    François Chollet explaining the concept of understanding.

    To really understand a concept, you have to “invent” it yourself in some capacity. Understanding doesn’t come from passive content consumption. It is always self-built. It is an active, high-agency, self-directed process of creating and debugging your own mental models.

    Filed under
  • Five years as shareholder of SRF

    While I have completed five years with my investment in SRF, the investments themselves has been very uneven. For example, my investments in FY 2023-24 alone account for more than 60% of my total investments (Figure 1). Hence, I am still early in my investment journey.

    Figure 1
    (more…)
    Filed under
  • Gold and Land vs Equity

    Shray Chandra and Deepak Shenoy’s discussion on how we treat gold, land very differently from equities.

    Shray Chandra: But when I’m looking at my portfolio, if I’m going to put them all in equity, I’m this super demanding and difficult person saying this doesn’t deserve to be in my portfolio, it’s not doing well, I’ve lost faith in management, this sector is completely screwed.

    But if 5% of that money is in gold, and gold doesn’t do well, I’m like, oh, this is fine, it’s meant to be hedge. It’s almost like I treat it completely differently. So when it goes up, I said, see, I was right.

    And when it doesn’t do well, I said, yeah, I know, because it’ll do well at the other times. So how does one deal with this sort of like, is this a rational way of thinking about it? Is it just managing emotional expectations or is it like insurance? It won’t do well for many years and some years it will do well.

    Deepak Shenoy: That’s a brilliant philosophical thought process difference. And I think real assets have this advantage. You buy land, you have a similar thing.

    Land has this advantage that people don’t re-value it. And even if they do re-value it, they say, we’ll wait. People tell me that they want to wait for the metro in Bangalore before they sell their apartment.

    I’m like, that’s at least three, four years away where I am. They’re like, huh, it’s okay. I’m like, nobody will tell me.

    I will wait three, four years for Nifty to go back up. It’s very rare to find that, right? I have to try really hard to…

    Listen, it’s not down too much. It’s only been a year. So just hang on for a few more years and it’ll be fine over the long term, right?

    You have to give any investment at least a few years before it gets in. But it’s so much easier to tell that about land or about gold than it is to tell them about stock A or portfolio A or mutual fund Y and so on. So I feel that maybe it has become a second part of nature for us. It says, well, if this mutual fund thing doesn’t work out for me in a year, I’m done. But the land bit, I want to, I don’t mind. That is actually long term.

    Shray Chandra: We’re able to bring in our long term thinking properly when it comes to or the diversification thinking when it comes to say assets like land, real assets. We struggle when it comes to financial assets where the job is, you’re supposed to make me rich. What are you doing?

    Filed under
  • Science

    Steve Blank explaining how science works. He then shares this simple table explaining the difference between theorists and experimentalists.

    Filed under
  • Vibe coding is a lot like stock-picking

    Erik D. Kennedy reflecting on why has AI not impacted designer, yet.

    My hunch: vibe coding is a lot like stock-picking – everyone’s always blabbing about their big wins. Ask what their annual rate of return is above the S&P, and it’s a quieter conversation 🤫

    Ha!

    Filed under
  • Hobby

    Mylar Melodies explaining why we should not be attaching external validations to our hobbies.

    There’s a really good book called Four Thousand Weeks by Oliver Burkeman, a book that I’ve read three times now and if you’re over 40 I cannot recommend enough. It’s basically about the sort of mortality midlife crisis that you have when you enter your 40s. About am I doing enough? Am I going to get it all done? I’m going to die someday is when you start realizing because you’re like I’m halfway through. So you panic about am I doing enough. Have I done it all? Am I getting it all done? 

    Basically the short answer of the book is, you will not get it all done and one of the things that it expounds is the notion of having a hobby and having a hobby that isn’t something that you’re good at. In fact, it goes so far as to say that actually not being good at the hobby is half of what makes the hobby good because you’re just doing it for its own intrinsic rewards. You’re not doing it as a side hustle. You’re not doing it for points. You’re not doing it as a thing you’re going to grow into this business, because by attaching all of this sort of stuff to it, all these obligations, these sort of like expectations, you turn something that is supposed to just be intrinsic. 

    Something that’s just a thing you love and do because you’re alive and you get to enjoy doing things you love while you’re alive. It turns it into something that has like expectations attached to it and that colours it because you’re then thinking well I should only be doing something that’s going to like make my life better or like you know improve you know that I’m building towards it’s going to be a hustle it’s going to earn me money someday. Don’t worry it’s not a waste of time. I’m not wasting my time. But you’re not wasting your time because the point of life is just to enjoy yourself and to live a fulfilling life in whatever form that takes it’s different for everyone. But what I’m saying is it’s really important to have things that you just enjoy doing intrinsically.

    Filed under
  • AI security trilemma

    An insightful post by Bruce Schneier on the security issues plaguing AI. He also suggests that prompt injections might be unsolvable in today’s LLMs.

    The fundamental problem is that AI must compress reality into model-legible forms. In this setting, adversaries can exploit the compression. They don’t have to attack the territory; they can attack the map. Models lack local contextual knowledge. They process symbols, not meaning. A human sees a suspicious URL; an AI sees valid syntax. And that semantic gap becomes a security gap.

    Prompt injection might be unsolvable in today’s LLMs. LLMs process token sequences, but no mechanism exists to mark token privileges. Every solution proposed introduces new injection vectors: Delimiter? Attackers include delimiters. Instruction hierarchy? Attackers claim priority. Separate models? Double the attack surface. Security requires boundaries, but LLMs dissolve boundaries. More generally, existing mechanisms to improve models won’t help protect against attack. Fine-tuning preserves backdoors. Reinforcement learning with human feedback adds human preferences without removing model biases. Each training phase compounds prior compromises.

    This is Ken Thompson’s “trusting trust” attack all over again. Poisoned states generate poisoned outputs, which poison future states. Try to summarize the conversation history? The summary includes the injection. Clear the cache to remove the poison? Lose all context. Keep the cache for continuity? Keep the contamination. Stateful systems can’t forget attacks, and so memory becomes a liability. Adversaries can craft inputs that corrupt future outputs.

    This is the agentic AI security trilemma. Fast, smart, secure; pick any two. Fast and smart—you can’t verify your inputs. Smart and secure—you check everything, slowly, because AI itself can’t be used for this. Secure and fast—you’re stuck with models with intentionally limited capabilities.

    Filed under
  • Genesis of US-East-1

    Ben Thompson talking about origins of the US-East-1, the AWS region that went down a couple of days back and along with it a significant portion of the internet.

    Northern Virginia was a place that, in the 1990s, had relatively cheap and reliable power, land, and a fairly benign natural-disaster profile; it also had one of the first major Internet exchange points, thanks to its proximity to Washington D.C., and was centrally located between the west coast and Europe. That drew AOL, the largest Internet Service Provider of the 1990s, which established the region as data center central, leading to an even larger buildout of critical infrastructure, and making it the obvious location to place AWS’s first data center in 2006.

    That data center became what is known as US-East-1, and from the beginning it has been the location with the most capacity, the widest variety of instance types, and the first region to get AWS’s newest features. It’s so critical that AWS itself has repeatedly been shown to have dependencies on US-East-1; it’s also the default location in tutorials and templates used by developers around the world.

    Filed under