AI is a Superpower, Actually

Most of the conversation around AI right now is framed the same way:

AI will take your job.
AI will replace experts.
AI will hollow out the middle class.

These kinds of statements push the public to view AI through a particular lens. And it's working.

Recent polling shows that public sentiment toward AI remains broadly negative. A Navigator Research survey found that overall favorability skews unfavorable, with women in particular viewing AI negatively by a 7-point margin (41% favorable vs. 48% unfavorable). A separate Reuters/Ipsos poll found that 71% of respondents are concerned AI will put too many people out of work permanently.

To date, there just aren’t enough arguments in the public discourse that are meaningfully in favor of accelerating AI. Which to me is an absolute shame because I believe AI is a superpower.

I think this negative framing is fundamentally wrong. Not overstated—wrong. And it misses what actually makes AI important.

AI isn’t a labor-replacement technology, or even something to be scared of. It's a tool.

But most importantly:

It’s an equalizing technology.

AI equalizes access to knowledge and expertise. As good as that sounds, it is, in my opinion, precisely the reason so much of the narrative around AI is so negative.

Equalizing Technologies Change Power Structures

In The Sovereign Individual, the authors describe how the most influential technologies don’t just improve efficiency, but reset power by lowering barriers that once protected incumbents. The most influential technologies are the most equalizing ones.

Gunpowder lowered the barrier to security. You no longer needed knights. The printing press lowered the barrier to information. You no longer needed clergy. The internet lowered the barrier to distribution. You no longer needed media companies.

Each time, access expanded. Gatekeepers lost leverage and power moved down the stack.

We tend to frame new technology as “disruption,” and that framing almost always triggers fear. The uncertainty of what it might do in the short term pushes people into an antagonistic mindset.

But if history is any guide, that fear is misplaced.

Nearly every meaningful technological innovation has followed this pattern. It makes hard things easier, expensive things cheaper, and specialized knowledge more accessible. Over time, barriers fall and gatekeeping weakens. Looking back, we rarely debate whether those shifts were worth it, even though they felt uncomfortable at the time.

AI should be evaluated through this same lens. Not as a disruption to be feared in the short term, but as a technology that reshapes who gets access to knowledge and expertise, and most importantly, who no longer gets to gatekeep it.

How AI Reshapes Power Dynamics

The internet gave people access to information. AI helps you understand it.

That doesn’t mean it’s always right. It isn’t. AI reflects the current state of knowledge, which is often incomplete, contested, or simply wrong. New discoveries happen all the time. Dogmas collapse. Experts disagree with one another constantly.

But that’s not a flaw unique to AI. That’s how expertise has always worked.

What AI fundamentally changes is access.

With traditional search, you need the right vocabulary, enough background knowledge, and some familiarity with a domain just to know where to look or ask useful questions. Without that, you’re mostly guessing.

AI removes much of that friction.

You can start naive. You can ask unsophisticated questions. You can follow lines of reasoning, explore counterarguments, and get up to speed on what matters quickly. Instead of piecing together isolated answers, you get meaningful access to the leading views and their counterpoints, without having to sift through loads of irrelevant content.

You don’t suddenly become an expert, and AI doesn’t eliminate the need for judgment or breakthroughs. Skimming a few answers doesn’t replace expertise — that still requires time, research, and deep engagement with a domain. AI is most powerful when treated as a thinking partner to interrogate, not a source to trust.

AI’s value isn’t in delivering final answers. It’s in surfacing wrong assumptions, stress-testing ideas, and making it easier to explore multiple perspectives quickly, including counterarguments to its own conclusions.

Most important of all, the distance between “uninformed” and “meaningfully informed” collapses.

When people can quickly orient themselves in a niche, the power dynamic changes. Conversations that once required blind trust become collaborative. Expertise remains valuable, but it’s no longer opaque or unquestionable.

Why This Triggers Resistance

Some of the pushback against AI is familiar. Every major technological shift triggers Luddite-style fears about job loss, displacement, and uncertainty about the future. AI is no different in that respect, particularly given how quickly it’s progressing and how rapidly workflows are changing across industries.

What feels new about the resistance to AI is where much of it is coming from.

A meaningful share of the pushback is coming from credentialed, gatekept domains, particularly white-collar professions where value has historically been tied to asymmetric access to knowledge. People spend years studying. They accumulate credentials. They build careers around expertise that, for a long time, was difficult to access or challenge.

That expertise isn’t becoming useless, but it is becoming less opaque.

When someone who would have been completely uninformed a year ago can now show up meaningfully informed (able to follow reasoning, surface counterarguments, or hold contrarian views), it changes the dynamic. Conversations that were once one-directional become interactive. Authority becomes more conversational. Knowledge turns into a shared, truth-seeking process rather than top-down dictation.

Even when that person isn’t an equal, the compression alone can feel destabilizing. Not because expertise no longer matters, but because it no longer exists behind a moat.

Historically, this kind of shift has been a net positive. Reducing knowledge gatekeeping allows ideas to be pressure-tested, exposes weak assumptions and long-standing dogmas, and gradually replaces blind deference with informed engagement and accountability.

What That Looks Like in Practice

This shows up clearly in everyday life.

Most people have experienced the situation: your car breaks down, a mechanic names a price, and you’re left unsure whether the work is actually necessary or whether you’re being overcharged, simply because you don’t understand how cars work. The same dynamic exists in healthcare. You change primary care doctors, your history doesn’t fully transfer, or even if it does, you’re not confident the new doctor truly understands what’s going on.

These situations create uncertainty and a sense of inferiority through no fault of your own. The information asymmetry isn’t something you chose, but it puts you at a real disadvantage.

AI changes that.

You can now walk into those conversations with context. You can understand common failure modes, explore plausible causes, and follow the reasoning paths professionals typically use.

You can even document that entire learning process and bring it with you.

That doesn’t replace the expert. And it doesn’t make you right. But it does change the interaction. It creates a more balanced, transparent conversation where assumptions can be surfaced and examined.

Experts don’t have perfect information either. They work with incomplete data, probabilistic models, and experience-based heuristics. Coming in informed can surface ideas, edge cases, or patterns that might otherwise be missed.

The professional can still point out where the model is wrong, explain nuance you missed, and correct false assumptions. But you’re no longer starting from zero and the interaction becomes collaborative rather than one-directional.

AI doesn’t replace professionals. It upgrades their customers, and that usually leads to better outcomes.

However, this doesn’t mean that work, or the roles and professions built around it, won’t evolve.

When access to expertise improves, work doesn’t disappear, but the tools and skills needed to do it change. To date, nowhere is that more visible than in software.

The Frontlines

Given the hysteria around job loss, if any field should be panicking right now, it’s the one I’m in.

And yet, what’s actually happening looks very different from the narrative.

Software engineering is changing fast. Prompting, “vibe coding,” and AI-assisted development are becoming part of everyday workflows, and the output is already quite good. Code can go from zero to ninety percent far faster than it used to. Debugging is streamlined. Iteration cycles are shorter.

What this hasn’t meant is the end of software engineers. If anything, they’re more important than ever.

What has changed is the barrier to entry. Getting started used to require learning an entirely new language before you could build anything meaningful. That friction is largely gone. The developers of the future will spend less time writing individual lines of code and more time focused on building systems and solving problems.

That was always the real job anyway.

The analogy I keep coming back to is calculators. Calculators didn’t make math irrelevant. They made arithmetic cheaper. You still need to understand the underlying logic to solve real problems; you just don’t waste time doing tedious calculations by hand.

The same thing is happening with software.

There will still be senior engineers and architects. Mentorship and judgment will matter as much as ever. But AI makes early-career engineers dramatically more productive than previous generations. Inexperienced engineers are more hireable than ever, provided they’re curious, adaptable, and fluent in these tools.

AI collapses the path to meaningful work. The mechanics fade into the background, and what remains is building, reasoning, and problem-solving.

Watch What People Do, Not What They Say

There’s another signal that often gets missed in the AI debate.

Despite widespread fear, people are already using AI at an unprecedented rate. According to a recent Brookings survey on AI usage in the U.S., over half of American adults have used AI tools, and hundreds of millions of people worldwide now engage with these technologies daily.

They may say they’re worried about AI. They may express skepticism or anxiety about where it’s headed. But their actions tell a different story. AI is being integrated into daily workflows, creative processes, research, planning, and learning at a remarkable pace.

What that suggests is that, at some level, people already understand what AI offers. They may not have the language for it yet, but they feel the leverage. They feel how much easier it is to get unstuck, to learn faster, to explore ideas, and to operate with more confidence.

AI doesn’t make people weaker or more replaceable. It makes them more capable. It lowers the cost of understanding, shortens the path to competence, and gives individuals tools that were previously out of reach.

The opportunity isn’t to blindly trust AI or accept it uncritically. It’s to engage with it thoughtfully, skeptically, and deliberately. To use it as a tool to move faster, think better, and build more than was possible before.