The ‘Manhattan Project’ Theory of Generative AI


The pace of change in generative AI right now is insane. OpenAI released ChatGPT to the public just four months ago. It took only two months to reach 100 million users. (TikTok, the internet's previous instant sensation, took nine.) Google, scrambling to keep up, has rolled out Bard, its own AI chatbot, and there are already various ChatGPT clones as well as new plug-ins to make the bot work with popular websites like Expedia and OpenTable. GPT-4, the new version of OpenAI's model released last month, is both more accurate and "multimodal," accepting images as well as text as input. Image generation is advancing at a similarly frenetic pace: the latest release of Midjourney has given us the viral deepfake sensations of Donald Trump's "arrest" and the Pope looking fly in a silver puffer jacket, which make it clear that you will soon have to treat every single image you see online with suspicion.

And the headlines! Oh, the headlines. AI is coming to schools. Sci-fi writing. The law. Gaming! It's making video. Fighting security breaches. Fueling culture wars. Creating black markets. Triggering a startup gold rush. Taking over search. DJ'ing your music. Coming for your job.

In the midst of this frenzy, I’ve now twice seen the birth of generative AI compared to the creation of the atom bomb. What’s striking is that the comparison was made by people with diametrically opposed views about what it means.

One of them is the closest person the generative AI revolution has to a chief architect: Sam Altman, the CEO of OpenAI, who in a recent interview with The New York Times called the Manhattan Project “the level of ambition we aspire to.” The others are Tristan Harris and Aza Raskin of the Center for Humane Technology, who became somewhat famous for warning that social media was destroying democracy. They are now going around warning that generative AI could destroy nothing less than civilization itself, by putting tools of awesome and unpredictable power in the hands of just about anyone.

Altman, to be clear, doesn’t disagree with Harris and Raskin that AI could destroy civilization. He just claims that he’s better-intentioned than other people, so he can try to ensure the tools are developed with guardrails—and besides, he has no choice but to push ahead because the technology is unstoppable anyway. It’s a mind-boggling mix of faith and fatalism.

For the record, I agree that the tech is unstoppable. But I think the guardrails being put in place at the moment, like filtering hate speech or criminal advice out of ChatGPT's answers, are laughably weak. It would be a fairly trivial matter, for example, for companies like OpenAI or Midjourney to embed hard-to-remove digital watermarks in all their AI-generated images to make deepfakes like the Pope pictures easier to detect. A coalition called the Content Authenticity Initiative is doing a limited form of this; its protocol lets artists voluntarily attach metadata to AI-generated pictures. But I don't see any of the major generative AI companies joining such efforts.
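To make the watermarking idea concrete, here is a minimal sketch using a naive least-significant-bit (LSB) scheme in Python with Pillow and NumPy. To be clear, the function names and the TAG marker are my own illustrative choices, not anything OpenAI, Midjourney, or the Content Authenticity Initiative actually ships, and a production watermark would need to survive cropping, re-encoding, and deliberate attack in ways this one does not:

```python
# Illustrative only: a naive LSB watermark. Real "hard-to-remove" schemes
# use robust signal-processing techniques; this just shows the basic shape.
import numpy as np
from PIL import Image

TAG = "AI-GENERATED"  # hypothetical marker; a real scheme would sign and randomize this

def embed_watermark(img: Image.Image, tag: str = TAG) -> Image.Image:
    """Hide `tag` in the least significant bits of the red channel."""
    pixels = np.array(img.convert("RGB"))
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    flat = pixels[..., 0].flatten()                        # copy of the red channel
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits  # overwrite the low bits
    pixels[..., 0] = flat.reshape(pixels.shape[:2])
    return Image.fromarray(pixels)

def detect_watermark(img: Image.Image, tag: str = TAG) -> bool:
    """Check whether the red-channel LSBs start with `tag`."""
    pixels = np.array(img.convert("RGB"))
    n_bits = len(tag.encode()) * 8
    lsbs = pixels[..., 0].flatten()[:n_bits] & 1
    return np.packbits(lsbs).tobytes() == tag.encode()

# Usage:
#   marked = embed_watermark(Image.open("generated.png"))
#   marked.save("marked.png")   # PNG is lossless, so the hidden bits survive
#   assert detect_watermark(Image.open("marked.png"))
```

The fragility is the point: re-encoding this image as a JPEG would scramble the low-order bits and erase the mark, which is why genuinely hard-to-remove watermarks, or cryptographically signed provenance metadata of the kind the Content Authenticity Initiative promotes, are the harder and more worthwhile engineering problem.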
