Gary Marcus Used to Call AI Stupid—Now He Calls It Dangerous


Back then, only months ago, Marcus’ quibbling was technical. But now that large language models have become a global phenomenon, his focus has shifted. The crux of Marcus’ new message is that the chatbots from OpenAI, Google, and others are dangerous entities whose powers will lead to a tsunami of misinformation, security bugs, and defamatory “hallucinations” that will automate slander. This seems to court a contradiction. For years Marcus had charged that the claims of AI’s builders were overhyped. Why is AI now so formidable that society must restrain it?

Marcus, always loquacious, has an answer: “Yes, I’ve said for years that [LLMs] are actually pretty dumb, and I still believe that. But there’s a difference between power and intelligence. And we are suddenly giving them a lot of power.” In February he decided that the situation was alarming enough that he should spend the bulk of his energy addressing the problem. Eventually, he says, he’d like to head a nonprofit organization devoted to making the most, and avoiding the worst, of AI.

Marcus argues that to counter the potential harms, policymakers, governments, and regulators have to hit the brakes on AI development. Along with Elon Musk and dozens of other scientists, policy nerds, and just plain freaked-out observers, he signed the now-famous petition demanding a six-month pause in training new LLMs. But he admits that he doesn’t really think such a pause would make a difference and that he signed mostly to align himself with the community of AI critics. Instead of a training time-out, he’d prefer a pause in deploying new models or iterating current ones. This would presumably have to be forced on companies, since there’s fierce, almost existential, competition between Microsoft and Google, with Apple, Meta, Amazon, and uncounted startups wanting to get into the game.

Marcus has an idea for who might do the enforcing. He has lately been insistent that the world needs, immediately, “a global, neutral, nonprofit International Agency for AI,” which would be referred to with an acronym that sounds like a scream (Iaai!).

As he outlined in an op-ed he coauthored in the Economist, such a body might work like the International Atomic Energy Agency, which conducts audits and inspections to identify nascent nuclear programs. Presumably this agency would monitor algorithms to make sure they don’t include bias or promote misinformation or take over power grids while we aren’t looking. While it seems a stretch to imagine the United States, Europe, and China all working together on this, the threat of an alien, if homegrown, intelligence overthrowing our species might lead them to act in the interests of Team Human. Hey, it worked with that other global threat, climate change! Uh …

In any case, the discussion about controlling AI will gain even more steam as the technology weaves itself deeper and deeper into our lives. So expect to see a lot more of Marcus and a host of other talking heads. And that’s not a bad thing. Discussion about what to do with AI is healthy and necessary, even if the fast-moving technology may well develop regardless of any measures that we painstakingly and belatedly adopt. The rapid ascension of ChatGPT into an all-purpose business tool, entertainment device, and confidant indicates that, scary or not, we want this stuff. Like every other huge technological advance, superintelligence seems destined to bring us irresistible benefits, even as it changes the workplace, our cultural consumption, and inevitably, us.
