Now That ChatGPT Is Plugged In, Things Could Get Weird

1 year ago 79

A number of open source projects such as LangChain and LLamaIndex are also exploring ways of building applications using the capabilities provided by large language models. The launch of OpenAI’s plugins threatens to torpedo these efforts, Guo says. 

Plugins might also introduce risks that plague complex AI models. ChatGPT’s own plugin red team members found they could “send fraudulent or spam emails, bypass safety restrictions, or misuse information sent to the plugin,” according to Emily Bender, a linguistics professor at the University of Washington. “Letting automated systems take action in the world is a choice that we make,” Bender adds.

Dan Hendrycks, director of the Center for AI Safety, a non-profit, believes plugins make language models more risky at a time when companies like Google, Microsoft, and OpenAI are aggressively lobbying to limit liability via the AI Act. He calls the release of ChatGPT plugins a bad precedent and suspects it could lead other makers of large language models to take a similar route.

And while there might be a limited selection of plugins today, competition could push OpenAI to expand its selection. Hendrycks sees a distinction between ChatGPT plugins and previous efforts by tech companies to grow developer ecosystems around conversational AI—such as Amazon’s Alexa voice assistant.

GPT-4 can, for example, execute Linux commands, and the GPT-4 red-teaming process found that the model can explain how to make bioweapons, synthesize bombs, or buy ransomware on the dark web. Hendrycks suspects extensions inspired by ChatGPT plugins could make tasks like spear phishing or phishing emails a lot easier.

Going from text generation to taking actions on a person’s behalf erodes an air gap that has so far prevented language models from taking actions. “We know that the models can be jailbroken and now we’re hooking them up to the internet so that it can potentially take actions,” says Hendrycks. “That isn’t to say that by its own volition ChatGPT is going to build bombs or something, but it makes it a lot easier to do these sorts of things.”

Part of the problem with plugins for language models is that they could make it easier to jailbreak such systems, says Ali Alkhatib, acting director of the Center for Applied Data Ethics at the University of San Francisco. Since you interact with the AI using natural language, there are potentially millions of undiscovered vulnerabilities. Alkhatib believes plugins carry far-reaching implications at a time when companies like Microsoft and OpenAI are muddling public perception with recent claims of advances toward artificial general intelligence.

“Things are moving fast enough to be not just dangerous, but actually harmful to a lot of people,” he says, while voicing concern that companies excited to use new AI systems may rush plugins into sensitive contexts like counseling services.

Adding new capabilities to AI programs like ChatGPT could have unintended consequences, too, says Kanjun Qiu, CEO of Generally Intelligent, an AI company working on AI-powered agents. A chatbot might, for instance, book an overly expensive flight or be used to distribute spam, and Qiu says we will have to work out who would be responsible for such misbehavior.

But Qiu also adds that the usefulness of AI programs connected to the internet means the technology is unstoppable. “Over the next few months and years, we can expect much of the internet to get connected to large language models,” Qiu says. 

Read Original