How to Start an AI Panic


Last week the Center for Humane Technology summoned over 100 leaders in finance, philanthropy, industry, government, and media to the Kissinger Room at the Paley Center for Media in New York City to hear how artificial intelligence might wipe out humanity. The two speakers, Tristan Harris and Aza Raskin, began their doom-time presentation with a slide that read: “What nukes are to the physical world … AI is to everything else.”

We were told that this gathering was historic, one we would remember in the coming years as, presumably, the four horsemen of the apocalypse, in the guise of Bing chatbots, would descend to replace our intelligence with their own. It evoked the scene in old science fiction movies—or the more recent farce Don’t Look Up—where scientists discover a menace and attempt to shake a slumbering population by its shoulders to explain that this deadly threat is headed right for us, and we will die if you don’t do something NOW.

At least that’s what Harris and Raskin seem to have concluded after, in their account, some people working inside companies developing AI approached the Center with concerns that the products they were creating were phenomenally dangerous, saying an outside force was required to prevent catastrophe. The Center’s cofounders repeatedly cited a statistic from a survey that found that half of AI researchers believe there is at least a 10 percent chance that AI will make humans extinct.

In this moment of AI hype and uncertainty, Harris and Raskin have predictably chosen themselves to be the ones who break the glass to pull the alarm. It’s not the first time they’ve triggered sirens. Tech designers turned media-savvy communicators, they cofounded the Center to inform the world that social media was a threat to society. The ultimate expression of their concerns came in their involvement in a popular Netflix documentary cum horror film called The Social Dilemma. While the film is nuance-free and somewhat hysterical, I agree with many of its complaints about social media’s attention-capture, its incentives to divide us, and its weaponization of private data, all presented through interviews, statistics, and charts. But the doc torpedoed its own credibility by cross-cutting to a hyped-up fictional narrative straight out of Reefer Madness, showing how a (made-up) wholesome heartland family is brought to ruin—one kid radicalized and jailed, another depressed—by Facebook posts.

This one-sidedness also characterizes the Center’s new campaign called, guess what, the AI Dilemma. (The Center is coy about whether another Netflix doc is in the works.) Like the previous dilemma, a lot of points Harris and Raskin make are valid—such as our current inability to fully understand how bots like ChatGPT produce their output. They also gave a nice summary of how AI has so quickly become powerful enough to do homework, power Bing search, and express love for New York Times columnist Kevin Roose, among other things.

I don’t want to dismiss entirely the worst-case scenario Harris and Raskin invoke. That alarming statistic about AI experts believing their technology has a shot at killing us all actually checks out, kind of. In August 2022, an organization called AI Impacts reached out to 4,271 people who authored or coauthored papers presented at two AI conferences and asked them to fill out a survey. Only about 738 responded, and some of the results are a bit contradictory, but, sure enough, 48 percent of respondents saw at least a 10 percent chance of an extremely bad outcome, namely human extinction. AI Impacts, I should mention, is supported in part by the Centre for Effective Altruism and other organizations that have shown an interest in far-off AI scenarios. In any case, the survey didn’t ask the authors why, if they thought catastrophe possible, they were writing papers to advance this supposedly destructive science.
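It’s worth doing the back-of-envelope arithmetic on those numbers. Here is a minimal sketch using only the figures cited above (the variable names and framing are mine, not AI Impacts’):

```python
# Back-of-envelope arithmetic on the AI Impacts survey figures cited above.
contacted = 4271   # paper authors invited to take the survey
responded = 738    # approximate number who actually responded
doom_share = 0.48  # share of respondents seeing >= 10% chance of extinction

response_rate = responded / contacted
doom_count = doom_share * responded
doom_share_of_contacted = doom_count / contacted

print(f"Response rate: {response_rate:.1%}")                 # ~17.3%
print(f"Respondents at >=10% doom: ~{doom_count:.0f}")       # ~354
print(f"Share of all contacted: {doom_share_of_contacted:.1%}")  # ~8.3%
```

In other words, the “half of AI researchers” framing rests on roughly 354 self-selected respondents, about 8 percent of everyone asked—one reason the statistic checks out only “kind of.”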
