Why Halt AI Research When We Already Know How To Make It Safer


Last week, the Future of Life Institute published an open letter proposing a six-month moratorium on the “dangerous” AI race. It has since been signed by over 3,000 people, including some influential members of the AI community. But while it is good that the risks of AI systems are gaining visibility within the community and across society, both the issues described and the actions proposed in the letter are unrealistic and unnecessary.

The call for a pause on AI work is not only vague, but also unfeasible. While the training of large language models by for-profit companies gets most of the attention, it is far from the only type of AI work taking place. In fact, AI research and practice are happening in companies, in academia, and in Kaggle competitions all over the world on a multitude of topics ranging from efficiency to safety. This means that there is no magic button that anyone can press that would halt “dangerous” AI research while allowing only the “safe” kind. And the risks named in the letter are all hypothetical, rooted in a longtermist mindset that tends to overlook real, present-day harms like algorithmic discrimination and predictive policing, which are hurting individuals now, in favor of potential existential risks to humanity.

Instead of focusing on ways that AI may fail in the future, we should focus on clearly defining what constitutes an AI success in the present. This path is eminently clear: Instead of halting research, we need to improve transparency and accountability while developing guidelines around the deployment of AI systems. Policy, research, and user-led initiatives along these lines have existed for decades in different sectors, and we already have concrete proposals to work with to address the present risks of AI.

Regulatory authorities across the world are already drafting laws and protocols to manage the use and development of new AI technologies. The US Senate’s Algorithmic Accountability Act and similar initiatives in the EU and Canada are among those helping to define what data can and cannot be used to train AI systems, address issues of copyright and licensing, and weigh the special considerations needed for the use of AI in high-risk settings. One critical part of these rules is transparency: requiring the creators of AI systems to provide more information about technical details like the provenance of the training data, the code used to train models, and how features like safety filters are implemented. Both the developers of AI models and their downstream users can support these efforts by engaging with their representatives and helping to shape legislation around the questions described above. After all, it’s our data being used and our livelihoods being impacted.

But making this kind of information available is not enough on its own. Companies developing AI models must also allow external audits of their systems and be held accountable for addressing the risks and shortcomings those audits identify. For instance, many of the most recent AI models, such as ChatGPT, Bard, and GPT-4, are also the most restrictive, available only via an API or gated access that is wholly controlled by the companies that created them. This essentially makes them black boxes whose output can change from one day to the next or produce different results for different people. While there has been some company-approved red teaming of tools like GPT-4, there is no way for researchers to access the underlying systems, making independent scientific analysis and audits impossible. This goes against the approaches to auditing AI systems proposed by scholars like Deborah Raji, who has called for oversight at different stages of the model development process so that risky behaviors and harms are detected before models are deployed into society.

Another crucial step toward safety is collectively rethinking the way we create and use AI. AI developers and researchers can start establishing norms and guidelines for AI practice by listening to the many individuals who have been advocating for more ethical AI for years. This includes researchers like Timnit Gebru, who proposed a “slow AI” movement, and Ruha Benjamin, who stressed the importance of creating guiding principles for ethical AI during her keynote presentation at a recent AI conference. Community-driven initiatives, like the Code of Ethics being implemented by the NeurIPS conference (an effort I am chairing), are also part of this movement, and aim to establish guidelines around what is acceptable in terms of AI research and how to consider its broader impacts on society.
