Weimin Chu has documented the scale of renewable energy in China in a series of photographs. Yale Environment 360 showcases these photographs along with details of how much energy these renewable installations can generate.
Last year China installed more than half of all wind and solar added globally. In May alone, it added enough renewable energy to power Poland, installing solar panels at a rate of roughly 100 every second.
The massive buildout is happening across the country, from crowded eastern cities increasingly topped by rooftop solar panels to remote western deserts where colossal wind farms sprawl across the landscape.
“From the ground, it’s hard to grasp the scale of these power plants,” said Chinese photographer Weimin Chu. “But when you rise into the air, you can see the geometry, the rhythm — and their relationship with the mountains, the desert, the sea.”
Matheus Lima sharing his thoughts on processing everything at 2x, just because you can.
Life happens at 1x. Every conversation you’ve ever had. Every walk, every meal, every meaningful experience. None of it comes with a speed dial. We’re biological creatures wired for real-time processing. When someone speaks to you in person, you don’t get to fast-forward through the parts you find boring.
There’s something strange about trying to shortcut how humans communicate. A podcast is just a conversation you’re eavesdropping on. The pauses, the rhythm, the way someone builds to a point. That’s all part of it. Speed it up and you get the words, sure. But you lose the texture.
Your brain needs empty space too. This is the part we’ve collectively forgotten. Boredom is a feature, not a bug. It’s where our best ideas — like starting this blog! — come from. It’s where you actually process what you’ve learned, make connections, have original thoughts. Constant consumption, even sped up, leaves no room for any of that. You need to be bored.
The irony is that consuming faster often means processing less. You’re optimizing for throughput when you should be optimizing for understanding. All those 2x podcasts blur together into background noise. What did you actually retain? What changed how you think? It’s empty calories. It’s fake productivity.
I have a theory about nostalgia: It happens because the best survival strategy in an uncertain world is to overworry. When you look back, you forget about all the things you worried about that never came true. So life appears better in the past because in hindsight there wasn’t as much to worry about as you were actually worrying about at the time.
Bryan Cantrill reflecting on the days when he decided to become a software engineer.
When I entered university in 1992, it didn’t feel like the right time: the economy for new grads was very grim — and I knew plenty of folks who were struggling to find work (and accepting part time jobs that didn’t need a college degree at all while they searched for something better). I never doubted going to school, but I also have never taken a job for granted.
When I fell in love with computer science as an undergraduate and realized that I wanted to become a software engineer, it didn’t feel like the right time: Ed Yourdon had just written “The Decline and Fall of the American Programmer”, which boldly told any young computer science student that they were wasting their time — that all programming jobs would be done by cheap labor abroad. This argument felt wrong, but I was too in love with computer science to be talked out of it anyway.
When I decided that I was specifically interested in operating systems kernel development, it definitely didn’t feel like the right time: the conventional wisdom in the mid-1990s was that operating systems were done — that Unix was in decline and that the future clearly belonged to Microsoft. I ardently disagreed with this, and my conviction in 1996 brought me to the one company that unequivocally shared it: Sun Microsystems.
Jason Fried explaining why obvious things need to be kept obvious even when the hard things become possible.
Much of the tension in product development and interface design comes from trying to balance the obvious, the easy, and the possible. Figuring out which things go in which bucket is critical to fully understanding how to make something useful.
Shouldn’t everything be obvious? Unless you’re making a product that just does one thing – like a paperclip, for example – everything won’t be obvious. You have to make tough calls about what needs to be obvious, what should be easy, and what should be possible. The more things something (a product, a feature, a screen, etc) does, the more calls you have to make.
This isn’t the same as prioritizing things. High, medium, low priority doesn’t tell you enough about the problem. “What needs to be obvious?” is a better question to ask than “What’s high priority?” Further, priority doesn’t tell you anything about cost. And the first thing to internalize is that everything has a cost.
I will be thinking about this every time I drive my car. I own a Skoda Kylaq, and my variant has automatic climate control. It seems that when automatic climate control became possible in cars, car makers assumed users would no longer touch the A/C controls and replaced them with fancy-looking controls that get activated or deactivated at the slightest accidental touch.
Touch-based controls are already difficult to use while driving, but here the layout of the touch panel is so bad that at times I take 1-3 seconds to figure out whether an action worked or not.
Here’s what my touch-based A/C controls look like (Figure 1). At first glance, everything seems to be present. But the moment you start using it, you realise how badly it is designed.
Figure 1: At first glance, it seems ok
Let me annotate the touch-based panel and show how to use it (Figure 2).
Figure 2: Yikes!
There’s a small display screen (#1) in the top-center which displays key information.
The buttons for recirculating internal/external air (#2), A/C on/off (#3) and automatic climate control (#4) are all toggle touch buttons. But touching them doesn’t light them up; instead, that information shows up on the display screen.
Similarly, the touch bars for temperature up/down (#5) and fan speed (#6) don’t give any feedback on the bar itself. That too goes to the display screen.
But the touch button to set the direction of the air outlet (#7) is a multi-option button with 4 options. Each touch enables the next option, but I haven’t memorised all the options and their order. So I cycle through them twice to find my option, and the first time around I am just learning the order. In the analog days a knob was used here, and I understand its usability—and importance—now.
The toggle touch button to switch the rear window heater on/off (#8), however, has a small LED light to indicate its on/off status. Something that should have been implemented for all the other buttons.
All this has led me to set the temperature, confirm that automatic A/C is enabled, and only then start my drive. If this is what Skoda wanted from its users, then the obvious option here should have been a physical on/off button for automatic climate control and a physical knob for setting the temperature.
What you should know about dramatic designs is that there has long been a debate about whether they’re good for a company to do or not.
Dramatic designs can draw attention; they can create aspiration, but they can also age very quickly, which is why a lot of companies are a little leery of doing super out-there designs for production cars.
You do that for concepts because the concept is about attention and not production.
Therefore, if you look at, for example, the Polo—the Polo didn’t age a lot, but it also didn’t look super fresh when it arrived. It looked like a nice, clean, fresh design, and it looked like that for a long time because it wasn’t a super dramatic design.
Whereas Lamborghini, for example, to go right to the other end of this, every time they have to build a new supercar, they have to really push out a dramatic design because it is a supercar.
It’s not going to get produced in large numbers, but it’s that drama that is part of that brand’s design story.
But when you take it to production, which will sell a large number of cars, the risk is if it’s too dramatic, then in two years it’ll look like the last trend rather than today’s fresh car.
That’s why dramatic designs—it’s a trick; it’s a good trick, but it may not always work.
Anand Sridharan talking about how predicting future events cannot help us predict their consequences.
Herein lies the true unknowability of the messy world around us. It is not just that we cannot predict major events. It is that, even if we could, we have no idea how their consequences will play out. It is impossible to reliably unravel a chain of future events in a manner that is useful in real-world decision making.
I used a 2-level thought experiment merely to illustrate that the world is unknowable at many levels. There is no reason to stop at two levels. Cause-and-effect plays out at many more levels, often with feedback loops tying certain consequences back to original events. In fact, in the real world, it is far from clear what the original causes are for whatever transpires around us.
This thread by Dror Poleg about how technology has the power to turn in-person work into scalable work.
In 1930, the union of American singers spent the equivalent of $10m on a campaign to stop people from listening to recorded music and watching movies with sound.
When films were silent, theatres employed local musicians to accompany each screening. But once films gained a soundtrack, local musicians were no longer necessary. The economic implications were significant: In 1927, around 24,000 musicians were employed in theatres across the US and Canada. But then came the first talking film — The Jazz Singer.
By 1930, some 7,200 musicians lost their jobs — 30% of the pre-talkie total. In some markets, such as New York and Cincinnati, musician unemployment reached 50-75%.
Over time, all theatre musicians were eliminated, and recorded soundtracks became par for the course.
The advent of records, radio, and talking films made creative work scalable: “300 musicians in Hollywood supply all the ‘music’ offered in thousands of theatres. Can such a tiny reservoir of talent nurture artistic progress?”
A hundred years ago, it seemed improbable that “canned music” would replace “real” music. Joseph N. Weber, president of the American Federation of Musicians, predicted that the public would not always accept “lifeless, soulless, synthetic music.”
Edward More, the Chicago Herald Tribune music critic, agreed with Weber, stating that “the films have a long way to go before they can duplicate living musicians.”
Films never managed to “duplicate” living musicians. They didn’t have to.
Disruptive technology doesn’t seek to “replicate.” More often, it sidesteps and makes old standards and processes redundant.
Records and talking films made music cheaper and accessible to a much larger audience. Most of the audience didn’t care about traditional quality.
As a result, we tend to underestimate technology’s power to turn in-person work into scalable work. In many “creative” professions, fewer people can already capture a larger share of the market than ever.
Such professions include programmers and designers, but also teachers and fitness instructors. A Peloton instructor earns about 12 times more than an offline competitor — and can service many more clients at the same time.
We assume that most professions cannot be scaled in the same way. But there is already evidence to the contrary. Many things that seem ridiculous to us now will seem obvious to our grandkids.
This comment by godelski on Hacker News about how folks in the long tail, who seem least productive, are the ones who end up changing the world.
You can’t have paradigm shifts by following the paradigm.
How I think of it is we need a distribution of people (shaped like a power law, not a normal).
Most people should be in the main body, doing what most people do. They’re probably the “most productive”.
Then you have people in the mid tail who innovate but it’s incremental and not very novel. They produce frequently (our current research paradigm optimizes for this). But there aren’t leaps and bounds. Critically it keeps pushing things forward, refining and improving.
But then there’s those in the long tail. They fail most of the time and are the “least productive”. Sometimes never doing anything of note their entire lives. But these are also the people that change the world in much bigger ways. And sometimes those that appeared to do nothing have their value found decades or centuries later.
Not everyone needs to be Newton/Leibniz. Not everyone should be. But that kind of work is critical to advancing our knowledge and wealth as a species. The problem is it is often indistinguishable from wasting time. But I’m willing to bet that the work of Newton alone has created more value to all of human civilization than every failed long tail person has cost us.
In any investment strategy you benefit from having high risk investments. Most lose you money but the ones that win reward you with much more than you lost. I’m not sure why this is so well known in the investment world but controversial in the research/academic/innovation world.
Becca Caddy talking about the future humans depicted in the Pixar movie Wall-E and how that future now looks scary.
I’ve been researching how AI shows up in sci-fi for an article I’m writing, and I keep coming back to Wall-E. Compared to The Terminator or The Matrix, nothing overtly terrifying happens. There’s no war between humans and machines, no extinction event, no malevolent intelligence plotting our downfall.
And yet, Wall-E feels more disturbing than most AI dystopias, at least to me.
Because in Wall-E’s imagined future, humans aren’t enslaved by machines – at least not in the Matrix-y sort of way we usually imagine. But they’re gradually enfeebled by them.
Enfeeblement is a really useful word here. It doesn’t mean oppression or domination. It means becoming exhausted, debilitated and weakened by lack of use. Muscles atrophy, skills fade and agency dulls.
It’s not quite the same as the idea of learned helplessness, but it’s hard not to think of it. Those experiments where animals stop trying to escape from a threat, like drowning. And it’s not because they’re restrained either, but because they’ve learned that effort no longer matters.
That’s exactly what happens in Wall-E. Systems move for humans, think for them, decide for them. Until people barely use their bodies, their attention and their capacity to choose at all. Life becomes effortless, deeply comfortable, completely frictionless and smooth.
Wall-E is one of my favourite movies. I still can’t imagine how Pixar pulled it off with no dialogue and only Wall-E’s facial features to communicate emotions.
Wall-E came out in 2008, a year after the iPhone launched. AI and robots were a faraway future. But the movie had the foresight to show how AI and robots would transform humans once they became a reality. And now that AI and robots are becoming real, the future that Wall-E depicted also seems to be coming true. Unbelievable.