Italian brainrot memes — surreal artificial intelligence-generated creatures with flamboyant Italian-sounding names that have gone viral on TikTok — are just the latest internet craze popularising AI-generated content.
Italian brainrot content is very obviously fake. But the increasing sophistication of AI technology means that so-called “deepfakes” (AI-generated images, video or audio so lifelike that they might trick users) are becoming more common. And so are articles and social media posts designed to spread untruths. How can we separate fact from fiction?
Many experts use “misinformation” as a catch-all term for the spread of false or misleading information, whether intentional or not.
“Disinformation” refers more specifically to the deliberate spread of lies, typically to manipulate public opinion and influence politics. Often these operations are covert: the people behind them create fake profiles, pose as others, or persuade unwitting influencers to spread their messages.
Young people are “particularly vulnerable” to misinformation, according to Timothy Caulfield, a law professor at the University of Alberta. “Not because they are less smart. It’s because of exposure,” he says. “They are completely and constantly bombarded with information.”
At the same time, people must contend with changes in how big social media platforms such as X and Meta (owner of Facebook and Instagram) police content. Instead of hiring professional teams to fact-check posts, for example, they now largely rely on users themselves to add context.
Historically, experts in the field of misinformation have pointed to tell-tale signs for spotting deepfakes: perhaps the edges of a person’s face are a bit blurry, or the shadows in the image do not make sense.
But “AI is only going to continue advancing,” says Neha Shukla, a student and founder of Innovation For Everyone, a youth-led movement campaigning for the responsible use of technology. “It is simply not enough to say to students to look for anomalies — or look for the person with 13 fingers.”
Instead, Shukla says, “this is the time we have to think critically”.
This means understanding how tech platforms operate. Platforms’ algorithms are designed to keep users engaged for as long as possible in order to show them advertising, and controversial content tends to engage users. An algorithm may play to your emotions or fears. As a result, compelling misinformation and disinformation can spread faster than the truth.
Shukla points out that when Hurricane Helene devastated Florida in September 2024, spreaders of disinformation got tens of millions of views of their content on X, whereas “fact-checkers and truth tellers got thousands”.
“Students need to know that a lot of these platforms are not designed to spread truth,” Shukla says.
Meanwhile, Dr Jen Golbeck, a professor at the University of Maryland who focuses on social media, says those who push misinformation may have different reasons for doing so.
Some may have “an agenda” — often political. But there are also those with no agenda who “just want to make money”, she warns.
Checklist to spot misinformation
- Think critically about content: ask who has created it and why
- Understand how social media platforms serve you content and what their incentives are
- Cross-check information with trusted sources
- Take some time offline to seek out other views
- Look out for tell-tale signs that an image might be fake: perhaps it is a bit blurry, or the shadows do not make sense
Against this backdrop, it is vital to consider the source of information. “Think through the incentives that people might have to present something a certain way,” says Sam Hiner, the 22-year-old executive director of the Young People’s Alliance, a non-profit focused on advocacy for youth issues.
“We need to understand what other people’s values are and that can be a source of trust . . . It’s not just knowing the facts, it’s understanding how people may sway you and what language they would use to do so,” he adds.
Cross-checking can also help, Shukla says. Simply copying and pasting a headline into a Google search is not the answer, because some AI-generated news outfits flood the internet with multiple versions of the same false article. Instead, she adds, check the work of verified journalists or official government resources.
Experts are split about the usefulness of the new crowdsourced moderation systems on Meta and X, known as Community Notes. Here, people with differing points of view work together to decide whether to add a comment to clarify a post.
Hiner says this type of shared decision-making is “probably going to be the future” when it comes to helping young people establish facts.
But others believe that these labels can be gamed and may still not be factual if they rely on non-professionals. “Because of these changes, young people might think that truth isn’t something that is objective but something you can argue and debate and settle on compromise in the middle,” says Shukla. “That isn’t always the case.”
Simply getting offline is one of the best ways to ensure we are thinking critically, rather than being sucked into echo chambers or inadvertently manipulated by algorithms. Hiner also advises finding people with different views offline, “to get a real diversity of perspectives”.
Despite the dangers, Shukla remains optimistic. “If anybody is equipped to handle this information integrity crisis, it’s young people,” she says. “If the pandemic has taught us anything, it’s that Gen Z is scrappy and resilient and can handle so much.”