• Holidays

    Deepak Shenoy talking about how money can help buy two additional days of vacation.

    A ticket to the US and back costs 1.5 lakh rupees in economy. The same ticket costs 5 lakhs in business class. If it costs 1.5 versus five, I will never spend the five.

    One thing you get with business class is reclining seats, so you can sleep, which you can’t in economy. The difference is you wake up fresh. So if you go on a holiday for 7 days, you’re not spending 2 days recovering from the aches and pains of the journey. And then on the way back, you get another 2 days. So you’re going to enjoy your holiday a little bit more. Yes, it costs a lot of money, but money is just a tool.

    That extra 3.5 lakhs gave you two more days of that holiday. And maybe tomorrow, when you think back on the holiday, those two days will be valuable in your own life.

    So with 3.5 lakhs I can start my holiday the moment I board the plane, not when I land in the US. While I won’t spend the extra 3.5 lakhs myself, this perspective is something to think about.

  • Cars and AI

    An intriguing comparison between cars and AI that highlights how we have built our cities around cars. While cars pollute and take up a lot of parking space, they are incredibly useful for humans.

    This comment on Hacker News by rukuu001.

    I think a lot about how much we altered our environment to suit cars. They’re not a perfect solution to transport, but they’ve been so useful we’ve built tons more road to accommodate them.

    So, while I don’t think AGI will happen any time soon, I wonder what ‘roads’ we’ll build to squeeze the most out of our current AI. Probably tons of power generation.

    This comment on Hacker News by sotix.

    This is a really interesting observation! Cars don’t have to dominate our city design, and yet they do in many places. In the USA, you basically only have NYC and a few less convenient cities to avoid a city designed for cars. Society has largely been reshaped with the assumption that cars will be used whether or not you’d like to use one.

    What would that look like for navigating life without AI? Living in a community similar to the Amish or Hasidic Jews that don’t integrate technology in their lives as much as the average person does? That’s a much more extreme lifestyle change than moving to NYC to get away from cars.

  • Quality vs Quantity

    James Clear’s excerpt from the book Atomic Habits, where he explains the dangers of aiming for perfection.

    On the first day of class, Jerry Uelsmann, a professor at the University of Florida, divided his film photography students into two groups.

    Everyone on the left side of the classroom, he explained, would be in the “quantity” group. They would be graded solely on the amount of work they produced. On the final day of class, he would tally the number of photos submitted by each student. One hundred photos would rate an A, ninety photos a B, eighty photos a C, and so on.

    Meanwhile, everyone on the right side of the room would be in the “quality” group. They would be graded only on the excellence of their work. They would only need to produce one photo during the semester, but to get an A, it had to be a nearly perfect image.

    At the end of the term, he was surprised to find that all the best photos were produced by the quantity group. During the semester, these students were busy taking photos, experimenting with composition and lighting, testing out various methods in the darkroom, and learning from their mistakes. In the process of creating hundreds of photos, they honed their skills. Meanwhile, the quality group sat around speculating about perfection. In the end, they had little to show for their efforts other than unverified theories and one mediocre photo. 

    It is easy to get bogged down trying to find the optimal plan for change: the fastest way to lose weight, the best program to build muscle, the perfect idea for a side hustle. We are so focused on figuring out the best approach that we never get around to taking action. As Voltaire once wrote, “The best is the enemy of the good.”

  • Growing down

    Mandy Brown explaining what growing down means, and why it is as important as growing up.

    From the post Psychology of craft.

    One of the imperatives in contemporary, professional work culture is to “grow.” There is often a sense of height or largeness with that imperative, as if growth must be measured in your distance up the ladder, your territory across the way. In The Soul’s Code, James Hillman implores us to think rather of growing down, of growth not of branch but root, of becoming more grounded, sturdier, less able to be pushed around by the whims of others.

    From the post Grow down.

    That is, we grow not only up—not only skyward—but down, into the roots, back to that from which we came and to which we will, one day, return. We become, in time, more rooted and resilient, more capable of surviving the storm, less easily shaken away from ourselves by idle wind or rain. When I think about growing down instead of up, I think about becoming centered, about knowing what work is ours to do (and, critically, what work is not), about a slow, steady power rather than a rash and inconstant one. After all, as anyone who’s ever lived among city trees can tell you, neither brick nor concrete nor iron can stop a root as it seeks out water. We should be as steady in our search for that which nurtures our own lives.

    This is such a wonderful thought.

  • Perpetual beta and perpetual uncertainty

    Charlie Warzel talking about how generative AI’s perpetual beta has put us all in perpetual uncertainty.

    The world that ChatGPT built is a world defined by a particular type of precarity. It is a world that is perpetually waiting for a shoe to drop. Young generations feel this instability acutely as they prepare to graduate into a workforce about which they are cautioned that there may be no predictable path to a career. Older generations, too, are told that the future might be unrecognizable, that the marketable skills they’ve honed may not be relevant. Investors are waiting too, dumping unfathomable amounts of capital into AI companies, data centers, and the physical infrastructure that they believe is necessary to bring about this arrival. It is, we’re told, a race—a geopolitical one, but also a race against the market, a bubble, a circular movement of money and byzantine financial instruments and debt investment that could tank the economy. The AI boosters are waiting. They’ve created detailed timelines for this arrival. Then the timelines shift.

    We are waiting because a defining feature of generative AI, according to its true believers, is that it is never in its final form. Like ChatGPT before its release, every model in some way is also a “low-key research preview”—a proof of concept for what’s really possible. You think the models are good now? Ha! Just wait. Depending on your views, this is trademark showmanship, a truism of innovation, a hostage situation, or a long con. Where you fall on this rapture-to-bullshit continuum likely tracks with how optimistic you are for the future. But you are waiting nonetheless—for a bubble to burst, for a genie to arrive with a plan to print money, for a bailout, for Judgment Day. In that way, generative AI is a faith-based technology.

    It doesn’t matter that the technology is already useful to many, that it can code and write marketing copy and complete basic research tasks. Because Silicon Valley is not selling useful; it’s selling transformation—with all the grand promises, return on investment, genuine risk, and collateral damage that entails. And even if you aren’t buying it, three years out, you’re definitely feeling it.

  • Speed vs Intelligence

    Sami Bahri emphasising that while AI accelerates the process, the intelligence to set up and monitor that process still requires a human.

    Intelligence implies wisdom, context, and nuance. While AI models are simulating reasoning better every day, in a business context, they are fundamentally pattern-matching engines. They excel at acceleration.

    • The Old Way: An analyst reads 50 contracts (unstructured), highlights risks based on gut feeling (unstructured process), and summarizes them in 3 days.
    • The AI Way: An AI scans 50 contracts and extracts specific risk clauses based on defined parameters in 3 minutes.

    The process (Review Contracts -> Identify Risk -> Summarize) hasn’t changed, but it had to be rigorously defined for the AI to work. The intelligence (knowing what a “risk” actually means) still requires human governance. What has changed is the velocity.
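    The point above, that the process must be rigorously defined before AI can accelerate it, can be sketched in code. Everything here is hypothetical: the `RISK_PATTERNS` table stands in for human-defined risk parameters, and simple keyword matching stands in for the model call a real system would make.

    ```python
    # Hypothetical sketch of the contract-review pipeline described above.
    # The process (review -> identify risk -> summarize) is defined up front;
    # what counts as a "risk" is human governance, encoded in RISK_PATTERNS.

    RISK_PATTERNS = {  # label -> phrase that signals it (human-defined)
        "auto-renewal": "auto-renew",
        "unlimited liability": "unlimited liability",
        "unilateral termination": "terminate at any time",
    }

    def identify_risks(contract_text: str) -> list[str]:
        """Return the human-defined risk labels found in one contract."""
        text = contract_text.lower()
        return [label for label, phrase in RISK_PATTERNS.items() if phrase in text]

    def review_contracts(contracts: dict[str, str]) -> dict[str, list[str]]:
        """The defined process: review each contract and summarize its risks."""
        return {name: identify_risks(text) for name, text in contracts.items()}

    contracts = {
        "vendor_a.txt": "Either party may terminate at any time without notice.",
        "vendor_b.txt": "This agreement will auto-renew each year.",
    }
    print(review_contracts(contracts))
    # {'vendor_a.txt': ['unilateral termination'], 'vendor_b.txt': ['auto-renewal']}
    ```

    Swapping the keyword matcher for a model changes the velocity of the middle step, not the shape of the pipeline, which is the author's point.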

  • Removing friction

    Martin Fowler’s foreword to the book Frictionless.

    The key to this book is that they don’t think in terms of how to whip people into greater productivity, but how to find the sources of friction that slow them down. Friction is when I have to submit a pull request that sits for a couple of days while I forget about the code, or spend two days wrangling some infrastructure that ought to be a simple API call. Smoothing away these points of friction is the essence of improving Developer Experience – and thus speeding up getting useful software into the hands of its users.

    They describe effective developer experience in terms of three elements: feedback loops, flow state, and cognitive load. We can only find out whether we are on the right path by getting rapid feedback. The longer the delay between that blue dot moving on my phone-map, the longer I walk in the wrong direction before realizing my mistake. If our feedback is rapid, we can remain in the second element, a flow state, where we can smoothly and rapidly get things done, improving our products and our motivation. Flow also depends on our ability to understand what we need to do, which means we must be wary of being overwhelmed by cognitive load, whether it comes in the form of poorly structured code, flaky tests, or interruptions that break our flow.

    Focusing on developer experience is about finding what gets in the way of these three elements. Improving developer experience leads to better outcomes for the business. Those lost hours wrangling with infrastructure is money wasted on developers’ salary, and revenue lost because the software took longer to get into production.

  • Voice

    Tony Alicea explaining why you shouldn’t use LLMs to write blog posts.

    If you rely on an LLM to write all your posts, you are making a mistake.

    Your voice is an asset. Not just what you want to say, but how you say it.

    Your voice is unique. It is formed from your lifetime of lived experiences. No one’s voice will be exactly like yours.

    Your voice becomes recognizable. Over many posts it becomes something people subconsciously connect with, recognize, trust, and look forward to.

    Your voice provides the framework for the impression you leave in a job interview, while networking at a meet-up, or with a co-worker.

  • PWAs are good

    No, not in terms of user experience. In fact, they provide a subpar user experience. And sometimes they are outright user-hostile.

    On Threads, at times the profile picture fails to load, or the media in the post itself fails to load.

    Screenshots: the profile picture and the post media failing to load.

    And many a time I can even see a loading icon if I scroll down too fast. It seems pre-fetching doesn’t work well on PWAs.

    Screenshot: still pre-fetching data.

    Twitter/X takes its own sweet time to load.

    Screenshot: still loading.

    YouTube on the web defaults to a lower video resolution.

    I could go on and on. And that is what makes them good. 

    They are good at keeping me away from meaningless content and doomscrolling. I use social media less because the experience on PWAs is bad.

    Go PWA!

  • Light day

    Piyush Gupta talking about how NASA’s Voyager 1 will soon be one light-day away from Earth. That’s one light-day, not one light-year.

    After nearly 50 years in space, NASA’s Voyager 1 is about to hit a historic milestone. By November 15, 2026, it will be 16.1 billion miles (25.9 billion km) away, meaning a radio signal will take a full 24 hours—a full light-day—to reach it. For context, a light-year is the distance light travels in a year, about 5.88 trillion miles (9.46 trillion km), so one light-day is just a tiny fraction of that.
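    The figures in the quote are easy to verify with the standard speed of light (299,792.458 km/s): light covers about 25.9 billion km in 24 hours, which matches Voyager 1’s projected distance. A quick check:

    ```python
    # Arithmetic check of the light-day figures quoted above.

    C_KM_PER_S = 299_792.458          # speed of light in km/s
    SECONDS_PER_DAY = 24 * 60 * 60    # 86,400 seconds
    KM_PER_MILE = 1.609344

    light_day_km = C_KM_PER_S * SECONDS_PER_DAY
    light_day_mi = light_day_km / KM_PER_MILE
    light_year_km = light_day_km * 365.25  # Julian year = 365.25 days

    print(f"One light-day: {light_day_km / 1e9:.1f} billion km "
          f"({light_day_mi / 1e9:.1f} billion miles)")
    print(f"One light-year: {light_year_km / 1e12:.2f} trillion km")
    # One light-day: 25.9 billion km (16.1 billion miles)
    # One light-year: 9.46 trillion km
    ```

    So a light-day really is a tiny fraction of a light-year, 1/365.25 of it, yet it has taken Voyager 1 nearly 50 years to get that far.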
