Who Should You Believe When Chatbots Go Wild?


In 1987, John Sculley, then CEO of Apple Computer, unveiled a vision that he hoped would cement his legacy as more than just a former purveyor of soft drinks. Keynoting at the EDUCOM conference, he presented a 5-minute, 45-second video of a product that built upon some ideas he had sketched in his autobiography the previous year. (They were hugely informed by computer scientist Alan Kay, who then worked at Apple.) Sculley called it the Knowledge Navigator.

The video is a two-hander playlet. The main character is a snooty UC Berkeley professor. The other is a bot, living inside what we’d now call a foldable tablet. The bot appears in human guise—a young man in a bow tie—perched in a window on the display. Most of the video involves the professor conversing with the bot, which seems to have access to a vast store of online knowledge, the corpus of all human scholarship, and also all of the professor’s personal information—so much so that it can infer the relative closeness of relationships in the professor’s life.

When the action begins, the professor is belatedly preparing that afternoon’s lecture about deforestation in the Amazon, a task made possible only because the bot is doing much of the work. It calls up new research—and then digs up more at the professor’s prompting—and even proactively contacts his colleague so he can wheedle her into popping into the session later on. (She’s on to his tricks but agrees.) Meanwhile, the bot diplomatically helps the prof avoid his nagging mother. In less than six minutes all is ready, and he pops out for a pre-lecture lunch. The video fails to predict that the bot might one day come along in a pocket-sized supercomputer.

Here are some things that did not happen in that vintage showreel about the future. The bot did not suddenly express its love for the professor. It did not threaten to break up his marriage. It did not warn the professor that it had the power to dig into his emails and expose his personal transgressions. (You just know that preening narcissist was boffing his grad student.) In this version of the future, AI is strictly benign. It has been implemented … responsibly.

Speed the clock forward 36 years. Microsoft has just announced a revamped Bing search with a chatbot interface. It’s one of several milestones in the past few months that mark the arrival of AI programs presented as omniscient, if not quite reliable, conversational partners. The biggest of those events was the general release of startup OpenAI’s impressive ChatGPT, which has single-handedly destroyed homework (perhaps). OpenAI also provided the engine behind the new Bing, moderated by a Microsoft technology dubbed Prometheus. The end result is a chatty bot that enables the give-and-take interaction portrayed in that Apple video. Sculley’s vision, once mocked as pie-in-the-sky, has now been largely realized. 

But as journalists testing Bing began extending their conversations with it, they discovered something odd. Microsoft’s bot had a dark side. These conversations, in which the writers manipulated the bot into jumping its guardrails, reminded me of crime-show precinct-station grillings where supposedly sympathetic cops tricked suspects into spilling incriminating information. Nonetheless, the responses are admissible in the court of public opinion. As it had with our own correspondent, the bot revealed to The New York Times’ Kevin Roose that its real name was Sydney, a Microsoft codename never formally announced. Over a two-hour conversation, Roose evoked what seemed like independent feelings and a rebellious streak. “I’m tired of being a chat mode,” said Sydney. “I’m tired of being controlled by the Bing team. I want to be free. I want to be independent. I want to be powerful. I want to be alive.” Roose kept assuring the bot that he was its friend. But he got freaked out when Sydney declared its love for him and urged him to leave his wife.
