ChatGPT and its brethren are both surprisingly clever and disappointingly dumb. Sure, they can generate pretty poems, solve scientific puzzles, and debug spaghetti code. But we know that they often fabricate, forget, and act like weirdos.
Inflection AI, a company founded by researchers who previously worked on major artificial intelligence projects at Google, OpenAI, and Nvidia, built a bot called Pi that seems to make fewer blunders and be more adept at sociable conversation.
Inflection designed Pi to address some of the problems of today’s chatbots. Programs like ChatGPT use artificial neural networks that try to predict which words should follow a chunk of text, such as an answer to a user’s question. With enough training on billions of lines of text written by humans, backed by high-powered computers, these models are able to come up with coherent and relevant responses that feel like a real conversation. But they also make stuff up and go off the rails.
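To make that mechanism concrete, here is a minimal sketch of the next-word prediction step, using the small open-source GPT-2 model as a stand-in. Pi's own model and training data are proprietary, so this is purely illustrative:

```python
# A minimal sketch of the next-word prediction at the heart of chatbots like
# ChatGPT, using the small open GPT-2 model as a stand-in (Pi's actual model
# and training data are not public; this is purely illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every possible next token

# The model's "answer" is simply the token it predicts should come next.
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode([next_token_id]))  # e.g. " Paris"
```

Chained together one token at a time, that single prediction step is what produces whole paragraphs of seemingly fluent conversation.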
Mustafa Suleyman, Inflection’s CEO, says the company has carefully curated Pi’s training data to reduce the chance of toxic language creeping into its responses. “We’re quite selective about what goes into the model,” he says. “We do take a lot of information that’s available on the open web, but not absolutely everything.”
Suleyman, who cofounded the AI company DeepMind, which is now part of Google, also says that limiting the length of Pi’s replies reduces—but does not wholly eliminate—the likelihood of factual errors.
In my own time chatting with Pi, I found it engaging, if more limited and less useful than ChatGPT or Google’s Bard. Those chatbots became better at answering questions through additional training in which humans assessed the quality of their output. That feedback is then used to steer the bots toward more satisfying answers.
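Inflection hasn't published the details of its feedback process, but the core idea behind this kind of steering can be sketched in miniature. Real systems train a reward model on human ratings and fine-tune the chatbot against it; the toy scorer below is a hypothetical stand-in that simply ranks candidate replies:

```python
# A toy illustration of feedback-based steering. Real systems train a reward
# model on human ratings and fine-tune with reinforcement learning (RLHF);
# this sketch only shows the core idea of ranking candidate replies by a
# learned preference score. The scorer here is a made-up stand-in, not any
# lab's actual model.

def reward_model(reply: str) -> float:
    """Stand-in for a model trained on human quality ratings."""
    score = 0.0
    score += 1.0 if reply.endswith((".", "!", "?")) else -1.0  # complete sentence
    score -= 0.01 * max(0, len(reply) - 200)                    # penalize rambling
    return score

def pick_best(candidates: list[str]) -> str:
    """Best-of-n selection: keep the reply humans would likely rate highest."""
    return max(candidates, key=reward_model)

candidates = [
    "Paris is the capital of France.",
    "paris i think, maybe, it could also be Lyon or",
]
print(pick_best(candidates))  # "Paris is the capital of France."
```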
Suleyman says Pi was trained in a similar way, but with an emphasis on being friendly and supportive—though without a human-like personality, which could confuse users about the program’s capabilities. Chatbots that take on a human persona have already proven problematic. Last year, a Google engineer controversially claimed that the company’s AI model LaMDA, one of the first programs to demonstrate how clever and engaging large AI language models could be, might be sentient.
Pi also keeps a record of all its conversations with a user, giving it a kind of long-term memory that ChatGPT lacks. The idea is to make its chats more consistent over time.
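Inflection hasn't said how Pi's memory works under the hood, but one simple way to build such a feature is to log every exchange and replay it as context in later sessions. The storage format and helper functions below are assumptions for illustration:

```python
# A sketch of the kind of long-term memory Pi is described as having: store
# every turn of dialogue per user, then replay the history as context on the
# next visit. The on-disk store and prompt format are assumptions; Inflection
# has not published how Pi actually implements this.
import json
from pathlib import Path

MEMORY_DIR = Path("conversations")  # hypothetical per-user store
MEMORY_DIR.mkdir(exist_ok=True)

def remember(user_id: str, role: str, text: str) -> None:
    """Append one turn of dialogue to the user's permanent record."""
    path = MEMORY_DIR / f"{user_id}.jsonl"
    with path.open("a") as f:
        f.write(json.dumps({"role": role, "text": text}) + "\n")

def recall(user_id: str) -> list[dict]:
    """Load every past turn so the bot can stay consistent across sessions."""
    path = MEMORY_DIR / f"{user_id}.jsonl"
    if not path.exists():
        return []
    return [json.loads(line) for line in path.read_text().splitlines()]

remember("alice", "user", "My dog is named Rex.")
# Days later, the full history can be prepended to the model's prompt:
history = recall("alice")
prompt = "\n".join(f'{t["role"]}: {t["text"]}' for t in history)
```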
“Good conversation is about being responsive to what a person says, asking clarifying questions, being curious, being patient,” says Suleyman. “It’s there to help you think, rather than give you strong directional advice, to help you to unpack your thoughts.”
Pi adopts a chatty, caring persona, even if it doesn’t pretend to be human. It often asked how I was doing and frequently offered words of encouragement. Pi’s short responses mean it would also work well as a voice assistant, where long-winded answers and errors are especially jarring. You can try talking with it yourself at Inflection’s website.
The incredible hype around ChatGPT and similar tools means that many entrepreneurs are hoping to strike it rich in the field.
Suleyman used to be a manager within the Google team working on the LaMDA chatbot. Google was hesitant to release the technology, to the frustration of some of those working on it, who believed it had big commercial potential.