Another warning about the AI apocalypse? I don’t buy it | Ivana Bartoletti


AI tools like ChatGPT are everywhere. The combination of computational power and the availability of data has driven the surge in AI technology, but the reason models such as ChatGPT and Bard have made such a spectacular splash is that they have reached our own homes, with around 100 million people currently using them.

This has led to a very fraught public debate. It is predicted that a quarter of all jobs will be affected one way or another by AI and some companies are holding back on recruitment to see which jobs can be automated. Fears about AI can move markets, as we saw yesterday when Pearson shares tumbled over concerns that AI would disrupt its business. And, looming above the day-to-day debate are the sometimes apocalyptic warnings about the long-term dangers of AI technologies – often from loud and arguably authoritative voices belonging to executives and researchers who developed these technologies.

Last month, science, tech and business leaders signed a letter calling for a pause in AI development. And this week, the pioneering AI researcher Geoffrey Hinton said he feared that AI could rapidly become smarter than humanity, and could easily be put to malicious use.

So, are people right to raise the spectre of apocalyptic AI-driven destruction? In my view, no. I agree that there are some sobering risks. But people are beginning to understand that these are socio-technical systems: not just neutral tools, but an inextricable bundle of code, data, subjective parameters and people. AI’s end uses, and the direction it develops in, aren’t inevitable. And addressing the risks of AI isn’t simply a question of “stop” or “proceed”.

Researchers such as Joy Buolamwini, Ruha Benjamin and Timnit Gebru have long highlighted how the context in which AI technologies are produced and used shapes what we get out of them. They have explained why AI systems can produce discriminatory outcomes: allocating less credit to women, failing to recognise black faces, or incorrectly determining that immigrant families are at higher risk of committing fraud (pushing many into destitution). These are societal problems we already recognise, and they show the need for society to reach a consensus on the right direction for technical innovation, the responsible use of these technologies and the constraints that should be imposed on them.


Fortunately, countries around the world are already grappling with these issues. The US, the EU, India, China and others are rolling out controls and revising regulatory approaches. Meanwhile, global standards are emerging. President Joe Biden wants a bill of rights to cater for people’s rights in the age of AI, and the UN has announced a global digital compact to ensure existing human rights can be upheld in the digital age. Global campaigns such as AUDRi are pushing for the digital compact to be effective worldwide.

Companies are aware of these issues as they work on new systems. OpenAI, the company behind ChatGPT, sums them up pretty well: it recognises that, while a lot has been done to root out racism and other forms of hate from ChatGPT’s responses, manipulation and hallucination (producing content that is nonsensical or untruthful, essentially making things up) still happen. I am confident that trial and error, plus burgeoning research in this area, will help.

Specific and worrying new problems arising from AI technologies also need to be addressed. The biggest risk is the erosion of democracy and of “ground truth” through the proliferation of deepfakes and other AI-generated misinformation. What will happen to our public discourse if we cannot trust any sources, faces or facts?

However imperfect its intervention, the Italian privacy watchdog had good reason to put its foot down and temporarily ban ChatGPT: it was an attempt to make plain that even groundbreaking technologies must be subject to the rules, like all other products. While calling for new laws, we can also start by applying the ones we already have. One of them is the General Data Protection Regulation (GDPR), often bitterly condemned, but so far the only tool that has upheld the rights of citizens, as well as workers, in the age of AI and the algorithmic management of hiring and firing. Privacy law may need updating, but its role demonstrates the importance of regulation; OpenAI did make changes in order for the ban to be lifted.

We should also remember that AI presents great opportunities. For example, an AI tool can identify whether abnormal growths found on CT scans are cancerous. Last year, DeepMind predicted the structure of almost every protein so far catalogued by science, cracking one of the great challenges of biology that had flummoxed the world for nearly 50 years.

There is both excitement and fear about this technology. But apocalyptic scenarios like those depicted in the Terminator films should not blind us to a more realistic and pragmatic vision, one that sees the good in AI while addressing the real risks. Rules of the game are necessary, and global agreements are vital if we want to move from the somewhat mindless development of AI to the responsible and democratised adoption of this new power.

  • Ivana Bartoletti is a privacy and data protection professional, visiting cybersecurity and privacy fellow at Virginia Tech and founder of the Women Leading in AI Network
