It can be hard to train a chatbot. Last month, OpenAI rolled back an update to ChatGPT because its “default personality” was too sycophantic. (Maybe the company’s training data was taken from transcripts of US President Donald Trump’s cabinet meetings . . .)
The artificial intelligence company had wanted to make its chatbot more intuitive but its responses to users’ enquiries skewed towards being overly supportive and disingenuous. “Sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short and are working on getting it right,” the company said in a blog post.
Reprogramming sycophantic chatbots may not be the most crucial dilemma facing OpenAI but it chimes with its biggest challenge: creating a trustworthy personality for the company as a whole. This week, OpenAI was forced to roll back its latest planned corporate update designed to turn the company into a for-profit entity. Instead, it will transition to a public benefit corporation, remaining under the control of a non-profit board.
That will not resolve the structural tensions at the core of OpenAI. Nor will it satisfy Elon Musk, one of the company’s co-founders, who is pursuing legal action against OpenAI for straying from its original purpose. Does the company accelerate AI product deployment to keep its financial backers happy? Or does it pursue a more deliberative scientific approach to remain true to its humanitarian intentions?
OpenAI was founded in 2015 as a non-profit research lab dedicated to developing artificial general intelligence for the benefit of humanity. But the company’s mission — as well as the definition of AGI — has since blurred.
Sam Altman, OpenAI’s chief executive, quickly realised that the company needed vast amounts of capital to pay for the research talent and computing power required to stay at the forefront of AI research. To that end, OpenAI created a for-profit subsidiary in 2019. Such was the breakout success of chatbot ChatGPT that investors have been happy to throw money at it, valuing OpenAI at $260bn during its latest fundraise. With 500mn weekly users, OpenAI has become an “accidental” consumer internet giant.
Altman, who was fired and rehired by the non-profit board in 2023, now says that he wants to build a “brain for the world” that might require hundreds of billions, if not trillions, of dollars of further investment. The only trouble with his wild-eyed ambition — as the tech blogger Ed Zitron rants about in increasingly salty terms — is that OpenAI has yet to develop a viable business model. Last year, the company spent $9bn and lost $5bn. Is its financial valuation based on a hallucination? There will be mounting pressure from investors for OpenAI to commercialise its technology rapidly.
Moreover, the definition of AGI keeps shifting. Traditionally, it has referred to the point at which machines surpass humans across a wide range of cognitive tasks. But in a recent interview with Stratechery’s Ben Thompson, Altman acknowledged that the term had been “almost completely devalued”. He did accept, however, a narrower definition of AGI as an autonomous coding agent that could write software as well as any human.
On that score, the big AI companies seem to think they are close to AGI. One giveaway is reflected in their own hiring practices. According to Zeki Data, the top 15 US AI companies had been frantically hiring software engineers at a rate of up to 3,000 a month, recruiting a total of 500,000 between 2011 and 2024. But lately their net monthly hiring rate has dropped to zero as these companies anticipate that AI agents can perform many of the same tasks.
A recent research paper from Google DeepMind, which also aspires to develop AGI, highlighted four main risks of increasingly autonomous AI models: misuse by bad actors; misalignment when an AI system does unintended things; mistakes that cause unintended harm; and multi-agent risks when unpredictable interactions between AI systems produce bad outcomes. These are all mind-bending challenges that carry potentially catastrophic risks and may require collaborative solutions. The more potent AI models become, the more cautious developers should be in deploying them.
How frontier AI companies are governed is therefore not just a matter for corporate boards and investors, but for all of us. OpenAI is still worryingly deficient in that regard, with conflicting impulses. Wrestling with sycophancy is going to be the least of its problems as we get closer to AGI, however you define it.