During an interview with Tucker Carlson, Elon Musk said that he plans to create an artificial intelligence (AI) platform called “TruthGPT” to compete with rival AI platforms like OpenAI’s ChatGPT and Google’s DeepMind.
Musk was an early investor and co-founder of OpenAI but has since grown deeply critical of the company. He told Carlson that TruthGPT would be more accurate and honest than ChatGPT, which he has criticized for having a liberal bias.
Musk also expressed his belief that AI requires more regulation to ensure that safety and ethical standards are met as the technology progresses. He said that this was essential to prevent AI from one day trying to “annihilate humans”.
Elon Musk plans to launch TruthGPT AI
Musk said that he envisions TruthGPT as a “maximum truth-seeking AI that tries to understand the nature of the universe.”
According to some reports, the Tesla, SpaceX, and Twitter CEO is still assembling a team to work on his AI startup.
A Nevada business filing revealed that Musk incorporated a new business dubbed X.AI Corp on March 9. According to the filing, Musk is listed as the director, with his adviser, Jared Birchall, as secretary.
In December 2015, Musk co-founded OpenAI, which would go on to release a free preview of ChatGPT in December 2022. Since its release, ChatGPT has become one of the most popular AI chatbots and has rarely been out of the news cycle.
However, Musk left the board of OpenAI just three years after co-founding the company. Since then, he has been critical of the company’s ethical approach. He has accused the developers of ChatGPT of training the AI language model “to be politically correct.”
The dangers posed by AI
During the interview, which aired on Fox News’ “Tucker Carlson Tonight”, Musk expressed his concerns regarding the potential biases of AI, and claimed that it is “being trained to be politically correct, which is simply another way of… saying untruthful things.”
He also warned that AI poses an existential threat to humanity and proposed regulation as part of the solution to limit those risks.
“I think we should be cautious with AI,” he told Carlson. “There should be some government oversight because it’s a danger to the public.”
“So, I think we should take this seriously and we should have a regulatory agency. I think we need to start with a group that initially seeks insights into AI, then solicits opinion from industry, and then has proposed rulemaking. And then those rules, you know, will probably hopefully begrudgingly be accepted by the major players in AI,” Musk continued.
Regarding his own proposed AI platform, Musk said that it would be directed to seek understanding of the universe, which he argued would limit the danger it poses to humanity.
“I think this might be the best path to safety, in the sense that an AI that cares about understanding the universe, it is unlikely to annihilate humans because we are an interesting part of the universe,” said Musk.
Earlier this year, Musk, together with Apple co-founder Steve Wozniak and several other prominent figures in the tech industry, signed an open letter calling for a pause on the development of AI systems that can compete with human-level intelligence.
“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?” read the letter.