Unhinged Conspiracies, AI Doppelgangers, and the Fractured Reality of Naomi Klein

In Doppelganger, you wrote about a South Korean politician who used AI to look younger.

The thing about the Korean example is, it was not hidden. Everyone knew. And it worked for him. So who knows? As our candidates get older, they may rely on AI doppelgangers. It’s being packaged as a way to reach younger voters, because they prefer synthetic reality.

Have you had discussions with your students about AI? Do they actually prefer synthetic reality? 

Last semester, ChatGPT was really everywhere, and we were discussing how they were not using it to write their essays. I think we’ve overfocused on the plagiarism piece of things. It’s just one element within a completely unstable and frightening future. Maybe it’s helpful for writing essays, but they also know it’s replacing entire sectors they may have been preparing for—from not being able to afford to live in the city to the acceleration of the climate crisis to AI changing the job market.

I’m aware of at least one podcasting company hoping to use AI to translate podcasts into a bunch of different languages. It sounds cool, but then you think: What about translators?

The thing I find disingenuous is when you hear, oh, we’re going to have so much leisure time, the AI will do the grunt work. What world are you living in? That’s not what happens. Fewer people will get hired. And I don’t think this is a fight between humans and machines; that’s bad framing. It’s a fight between conglomerates that have been poisoning our information ecology and mining our data. We thought it was just about tracking us to sell us things, to better train their algorithms to recommend music. It turns out we’re creating a whole doppelganger world.

We’ve provided just enough raw material.

When Shoshana Zuboff wrote The Age of Surveillance Capitalism, it was more about convincing people who’d never had a sense that they had a right to privacy—because they’d grown up with the all-seeing eye of social media—that they did have a right to privacy. Now it’s not just that, even though privacy is important. It’s about whether anything we create is going to be weaponized against us and used to replace us—a phrase that unfortunately has different connotations right now.

Take it back! The right stole “shock doctrine”; you can nab “replace us” for the AI age.

These companies knew that our data was valuable, but I don’t even think they knew exactly what they were going to do with it beyond sell it to advertisers or other third parties. We’re through the first phase now, though. Our data is being used to train the machines.

Fodder for a Doppelganger sequel.

And about what it means for our ability to think new thoughts. The idea that everything is a remix, a mimicry—it relates to what you were talking about, the various Marvel and Mattel universes. The extent to which our culture is already formulaic and mechanistic is the extent to which it’s replaceable by AI. The more predictable we are, the easier it is to mimic. I find something unbearably sad about the idea that culture is becoming a hall of mirrors, where all we see is our own reflections back.

You reached out to Naomi Wolf and she didn’t respond. If she had responded, would you want to debate her?

I think it’s important to engage with what’s being said and marshal counterfacts. But the idea of just sneering at people is dangerous. I think we do need to debate, but whether that means creating some kind of theatrical Naomi vs. Naomi spectacle—I don’t know about that.

You could be second billing to Musk vs. Zuckerberg.

Anyway, as you know from reading the book, it’s not really about her. She’s just a case study. I follow her down the rabbit hole. But I’m more interested in the rabbit hole.

