Character.ai bets on companionship chatbots

The chief executive of artificial intelligence chatbot maker Character.ai believes most people will have “AI friends” in the future, even as the company faces a string of lawsuits over alleged harm to children and advocacy groups call for a ban on “companionship” apps.

The San Francisco-based start-up — backed by top Silicon Valley investors such as Andreessen Horowitz and a past acquisition target of Meta — is at the vanguard of tech groups building AI-powered chatbots with different personas that interact with people.

It offers AI chatbots with personas such as an “Egyptian pharaoh”, an “HR manager” or a “toxic girlfriend”, which have proved popular with young users.

“They will not be a replacement for your real friends, but you will have AI friends, and you will be able to take learnings from those AI friend conversations into your real-life conversations,” said Karandeep Anand, who took over as CEO in June.

He was appointed just under a year after Google poached the founders of Character.ai in a $2.7bn deal. The company said it has 20mn monthly active users, around half of them female and half Gen Z or Gen Alpha, people born after 1997.

However, Character.ai is also the subject of lawsuits from families that allege their children have suffered real-world harms from using the platform.

One case in Florida claims the platform played a role in the suicide of a 14-year-old. A complaint in Texas cites a teenager whose chatbot allegedly suggested killing his parents as a solution to a screen-time dispute, as well as a nine-year-old girl who was exposed to hypersexualised conversations.

The company declined to comment on pending litigation but highlighted recent changes, including launching a separate AI model for under-18s and notifying users when they have spent more than an hour on the platform.

Character.ai also prohibits non-consensual sexual content, graphic or specific descriptions of sexual acts, and the promotion or depiction of self-harm or suicide. “Trust and safety is non-negotiable,” said Anand. “We are constantly evolving how to make it safer.”

Advocacy groups, such as US-based Common Sense Media, have been pushing for US legislation that would ban minors from using such AI companionship apps.

The group’s survey of more than 1,000 US teens found that 39 per cent had transferred social skills they practised with AI companions to real-life situations, while 33 per cent had chosen to discuss important or serious matters with AI companions instead of real people.

“These products are designed to create emotional attachment and dependency, which is particularly concerning for developing adolescent brains that are still learning how to navigate social interactions and relationships,” said Robbie Torney, senior director of AI programs at Common Sense Media.

Character.ai has recently introduced advertising and plans to add creator-led monetisation features, such as tipping and in-app purchases of digital content. Its primary revenue comes from subscriptions, which it said have increased 250 per cent year on year, though it declined to give specific financials. It charges $9.99 a month or $120 a year.

The average Character.ai user spends 80 minutes a day on the app. Anand said it offers a more immersive and entertaining experience than passive media because users control the narrative, “as opposed to purely lean-back content consumption”. The company launched its version of a social media feed this month, further emphasising its focus on entertainment.

Another emerging trend is the use of such platforms for romantic and suggestive conversations. Elon Musk’s Grok, created by his start-up xAI, recently launched AI characters that engage in explicit role-play, and Meta’s AI chatbot will also allow romantic interactions for adults.

Character.ai permits romantic conversations with adult users, but not sexually explicit ones. “There’s a whole plethora of 18+ applications which, unfortunately, the feedback loop for them . . . is they want more of it,” Anand said, adding that safety “will never be a trade-off against engagement”.

Meta boss Mark Zuckerberg, who is also pushing AI chatbots to the company’s users, told podcaster Dwarkesh Patel in April that interacting with AI as a friend could help address loneliness.

Anand also argues that such offerings will help people with real-life human interactions. “I see a very utopian world where AI makes us better,” he said. “People end up using these characters . . . as a test bed for making the real-life relationships a lot deeper, a lot more useful and a lot healthier.”
