Each evaluation is a window into an AI model, Solaiman says, not a perfect readout of how it will always perform. But she hopes to make it possible to identify and stop harms that AI can cause because alarming cases have already arisen, including players of the game AI Dungeon using GPT-3 to generate text describing sex scenes involving children. “That’s an extreme case of what we can’t afford to let happen,” Solaiman says.
Solaiman’s latest research at Hugging Face found that major tech companies have taken an increasingly closed approach to the generative models they released from 2018 to 2022. That trend accelerated with Alphabet’s AI teams at Google and DeepMind, and more widely across companies working on AI after the staged release of GPT-2. Companies that guard their breakthroughs as trade secrets can also make the forefront of AI less accessible for marginalized researchers with few resources, Solaiman says.
As more money gets shoveled into large language models, closed releases are reversing the trend seen throughout the history of the field of natural language processing. Researchers have traditionally shared details about training data sets, parameter weights, and code to promote reproducibility of results.
“We have increasingly little knowledge about what data systems were trained on or how they were evaluated, especially for the most powerful systems being released as products,” says Alex Tamkin, a Stanford University PhD student whose work focuses on large language models.
He credits people in the field of AI ethics with raising public consciousness about why it’s dangerous to move fast and break things when technology is deployed to billions of people. Without that work in recent years, things could be a lot worse.
In fall 2020, Tamkin co-led a symposium with OpenAI’s policy director, Miles Brundage, about the societal impact of large language models. The interdisciplinary group emphasized the need for industry leaders to set ethical standards and take steps like running bias evaluations before deployment and avoiding certain use cases.
Tamkin believes external AI auditing services need to grow alongside the companies building on AI because internal evaluations tend to fall short. He believes participatory methods of evaluation that include community members and other stakeholders have great potential to increase democratic participation in the creation of AI models.
Merve Hickok, a research director at an AI ethics and policy center at the University of Michigan, says trying to get companies to puncture AI hype, regulate themselves, and adopt ethics principles isn’t enough. Protecting human rights means moving past conversations about what’s ethical and into conversations about what’s legal, she says.
Hickok and Hanna of DAIR are both watching the European Union finalize its AI Act this year to see how it treats models that generate text and imagery. Hickok says she’s especially interested in seeing how European lawmakers treat liability for harm involving models created by companies like Google, Microsoft, and OpenAI.
“Some things need to be mandated because we have seen over and over again that if not mandated, these companies continue to break things and continue to push for profit over rights, and profit over communities,” Hickok says.
While policy gets hashed out in Brussels, the stakes remain high. A day after the Bard demo mistake, a drop in Alphabet’s stock price erased about $100 billion of the company’s market value. “It’s the first time I’ve seen this destruction of wealth because of a large language model error on that scale,” says Hanna. She is not optimistic this will convince the company to slow its rush to launch, however. “My guess is that it’s not really going to be a cautionary tale.”