AI Pioneer Leaves Google and Warns of Dangers Ahead

As companies improve their AI systems, “they become increasingly dangerous.” Credit: mikemacmarketing / Creative Commons Attribution 2.0 Generic / Wikipedia

Artificial intelligence (AI) pioneer Geoffrey Hinton has left Google, warning of the growing dangers posed by developments in the field.

Hinton, who nurtured the technology at the heart of chatbots like ChatGPT for half a century, told the New York Times (NYT) on Monday: “It is hard to see how you can prevent the bad actors from using it for bad things.”

Asked in a BBC interview to elaborate on this, he replied: “This is just a kind of worst-case scenario, kind of a nightmare scenario.

“You can imagine, for example, some bad actor like [Russian President Vladimir] Putin decided to give robots the ability to create their own sub-goals.” The scientist warned that this eventually might “create sub-goals like ‘I need to get more power'”.

In 2012, Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.

Google spent $44 million to acquire a company started by Hinton and his two students. Their system paved the way for increasingly powerful technologies, including chatbots like ChatGPT and Google Bard.

As companies improve their AI systems, he told the NYT, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of AI technology. “Take the difference and propagate it forward. That’s scary.”

Dangers as tech giants are locked in an AI competition

Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology.

The tech giants are locked in a competition that might be impossible to stop, Hinton said.

His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”

Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze.

This becomes an issue, he said, as individuals and companies allow AI systems not only to generate their own computer code but also to run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that,” Hinton told the NYT.

In late March, Tesla and Twitter CEO Elon Musk and Apple co-founder Steve Wozniak were among several tech experts who called for a pause on AI development.

In a letter, the experts warned of potential risks to society and humanity as tech giants such as Google and Microsoft race to build AI programs that can learn independently.
