AI is a Superpower, Actually
Last month my check engine light came on. A year ago, I would have driven to the mechanic, described the light, and waited for a diagnosis I couldn't evaluate. Or maybe I'd have tried YouTubing it, Googling it, reading the manual, then given up and gone to the mechanic anyway. This time, I did something different.
I described the symptoms to an AI—the light, a slight hesitation when accelerating, the mileage on my car. It walked me through the most common causes, asked clarifying questions, and helped me understand that a failing oxygen sensor was plausible given the pattern. When I got to the shop, the mechanic confirmed exactly that.
I didn't diagnose it myself. But I understood enough to have a real conversation about whether the repair was urgent, what the alternatives were, and whether the quoted price made sense.
That interaction captures something important about what AI in its current form actually is, and what most of the public conversation gets wrong.
The Narrative is Backwards
Most of the conversation around AI right now is framed the same way: AI will take your job, replace experts, and hollow out the middle class. In this narrative, you'll soon have no work to do and no purpose left.
And so far, that narrative is sticking. Recent polling shows that public sentiment toward AI remains broadly negative. A Navigator Research survey found that more Americans view AI unfavorably than favorably, with women in particular viewing it negatively by a 7-point margin. A separate Reuters/Ipsos poll found that 71% of respondents are concerned AI will put too many people out of work permanently.
On a recent episode of the All-In podcast, the hosts spoke with Tucker Carlson, a vocal skeptic of AI. What I found most interesting is that when he asked the simple question of why normal, everyday Americans should be excited about AI, the hosts couldn't offer a clear example. Everything was abstract; the benefits weren't obvious. (Here's the clip.)
To me, this was a big missed opportunity, and it underscored that the public narrative contains no coherent argument for why you should care. The benefits simply haven't spread into public discourse.
This is a shame because I think the narrative is fundamentally wrong.
AI isn't a labor-replacement technology. It's a tool. But more importantly, it's an equalizing technology—one that reshapes who gets access to knowledge and expertise, and who no longer gets to gatekeep it.
Equalizing Technologies Change Everything
In The Sovereign Individual, the authors describe how the most influential technologies don't just improve efficiency—they reset power by lowering barriers that once protected incumbents.
The most transformative innovations follow this pattern.
Gunpowder lowered the barrier to security. You no longer needed knights. A peasant with a musket could challenge a trained soldier, and feudal power structures began to collapse.
The printing press lowered the barrier to information. You no longer needed clergy. Ideas that once required hand-copied manuscripts could spread across continents, and the Church's monopoly on knowledge eroded.
The internet lowered the barrier to distribution. You no longer needed media companies. Anyone with a connection could publish to the world, and gatekeepers who once decided what got heard lost that authority.
Each time, access expanded, gatekeepers lost leverage, and power moved down the stack.
We tend to frame new technology as "disruption," which triggers fear. But if history is any guide, that fear is misplaced. Nearly every meaningful technological innovation has followed this pattern: it makes hard things easier, expensive things cheaper, and specialized knowledge more accessible. Looking back, we rarely debate whether those shifts were worth it.
AI should be evaluated through the same lens.
How AI is Helping Today
Consider a parent whose child was just diagnosed with ADHD. Before AI, they'd receive a diagnosis, maybe some pamphlets, and a recommendation that feels like it came from a black box. They might spend hours searching forums, reading contradictory articles, and still feel lost.
With AI, that same parent can ask questions in plain language: What are the different treatment approaches? What does the research say about medication versus behavioral therapy for a seven-year-old? What questions should I ask the specialist? What are the common side effects I should watch for?
They don't become a psychiatrist. But they become someone who can participate meaningfully in their child's care rather than simply deferring to whatever they're told.
Or consider signing a commercial lease for the first time. The document is dense. The lawyer costs $400 an hour, and you're not even sure what questions to ask. AI can help you understand what each clause means, flag provisions that are unusual or unfavorable, and surface the questions worth asking before you spend money on legal review.
This pattern shows up everywhere: understanding insurance options, evaluating a contractor's quote, navigating financial aid, troubleshooting an appliance before calling a repair service. The professional still matters. But the conversation changes.
What AI is doing in each of these moments is closing the gap between "completely uninformed" and "informed enough to engage." The internet gave people access to information. AI helps them understand it.
That doesn't mean AI is always right. It isn't. AI reflects the current state of knowledge, which is often incomplete, contested, or simply wrong. But that's not a flaw unique to AI. That's how expertise has always worked.
What changes is the starting point. With traditional search, you need the right vocabulary, enough background knowledge, and some familiarity with a domain just to know where to look. Without that, you're mostly guessing. AI removes that friction. You can start naive. You can ask unsophisticated questions. You can follow lines of reasoning, explore counterarguments, and get up to speed quickly.
You don't suddenly become an expert. Skimming a few answers doesn't replace the years of research and deep engagement that real expertise requires. AI is most powerful when treated as a thinking partner to interrogate, not a source to trust blindly.
Its value isn't in delivering final answers. It's in pointing you in the right direction, stress-testing ideas, and making it easier to explore multiple perspectives, including counterarguments to its own conclusions.
For the person navigating a diagnosis or a contract, this is empowering. But not everyone experiences this shift the same way.
Why This Triggers Resistance
Some pushback against AI is the familiar Luddite response: fear of job loss, displacement, uncertainty about the future. That's understandable and not unique to AI.
But there's a deeper resistance worth understanding, particularly from highly trained professionals. And I think it deserves more empathy than it usually gets.
Consider medicine. Becoming a doctor requires four years of undergraduate study, four years of medical school, three to seven years of residency, and often additional fellowships. That's a decade or more of training before you're trusted to practice independently. The gatekeeping isn't arbitrary. It exists because the stakes are high and the knowledge is genuinely complex.
For most of modern history, this created an inevitable dynamic: the informed speaking to the uninformed. Patients came in knowing little, and doctors translated complex information into recommendations. Trust was required because patients had no way to independently evaluate the reasoning.
That dynamic is shifting.
When a patient shows up having researched their symptoms, understood the differential diagnoses, and prepared specific questions about treatment tradeoffs, something changes. The doctor's expertise isn't less valuable, but it's now being engaged rather than simply accepted.
I think the long-term outcome of this is positive. Informed patients tend to have better outcomes.
But that doesn't mean the transition will be easy.
Doctors have always had biases shaped by their training and experience. When a patient challenges those biases, whether because they don't like the recommended treatment, want to explore a more natural approach, or simply disagree, it creates a completely different dynamic from blind acceptance.
This is what professionals will have to get used to in the age of AI. Their clients will be more educated. They won't always agree. And yet everyone still needs to work together to get to the best outcome.
Many professions will face this transition as AI adoption grows. I happen to work in one where it's already well underway.
What I'm Seeing in Software
If any field should be panicking about AI right now, it's software engineering.
And yet, what's actually happening looks very different from the narrative.
Prompting, AI-assisted development, and "vibe coding" are becoming part of everyday workflows. Code can go from zero to ninety percent done far faster than it used to. Debugging is streamlined. Iteration cycles are shorter.
But this hasn't meant the end of software engineers. If anything, they're more important than ever.
What has changed is the barrier to entry and how software engineers work.
Getting started used to require learning an entirely new language before you could build anything meaningful. That friction is largely gone.
And the way engineers work day-to-day looks different too. There's more iteration, more collaboration with AI tools, and less time stuck on boilerplate. Developers of the future will spend less time writing code line by line and more time building systems and solving problems.
In my opinion, that was always the real job anyway.
The analogy I keep coming back to is calculators. Calculators didn't make math irrelevant. They made arithmetic cheaper. You still need to understand the underlying logic to solve real problems; you just don't waste time doing tedious calculations by hand.
The same thing is happening with software. There will still be senior engineers, architects, and mentors. In fact, their role becomes more important.
Everything being built still needs experts and overseers. Code still needs to be reviewed, approved, and integrated into a cohesive system. Someone with experience and taste still needs to act as a gatekeeper.
AI makes early-career engineers dramatically more productive, and inexperienced engineers more hireable, provided they're curious, adaptable, and fluent in these tools. But that output still flows through experienced engineers who know what good systems look like.
That's my experience. But I'm not the only one finding value here.
Watch What People Do
Despite widespread fear, people are already using AI at unprecedented rates. According to a 2025 Menlo Ventures survey of over 5,000 U.S. adults, 61% of Americans have used AI in the past six months, and nearly one in five rely on it every day. Extrapolating globally, the report estimates that 1.7 to 1.8 billion people have used AI tools, with 500 to 600 million engaging daily.
They may say they're worried. But their actions tell a different story.
What that suggests is that people already understand, at some level, what AI offers. They may not have the language for it yet, but they feel the leverage. They feel how much easier it is to get unstuck, to learn faster, to explore ideas.
AI doesn't make people weaker or more replaceable. It makes them more capable.
The opportunity isn't to trust AI blindly. It's to engage with it as a thinking partner: skeptically, deliberately, and actively. To use it to move faster, think more clearly, and build more than was possible before.
That's not a threat. That's a superpower.