Viral misinformation on social media has prompted officials to investigate how platforms are failing to stop the spread of false claims. Image: Karolina Grabowska, via Pexels
Viral misinformation on social media regarding the Israel–Hamas war has prompted officials to investigate how platforms are failing to stop the spread of false claims online.
NewsGuard, an organisation dedicated to tracking misinformation, reported that at least 14 false claims related to the war garnered a combined 22 million views across TikTok, Instagram and X within three days of the Hamas attack.
Misleading content shared online since October 7 has included video footage from older conflicts, video game graphics and digitally altered documents.
Lisa Kaplan, founder of Alethea, a company which tracks misinformation and disinformation, described how false claims spread in times of conflict.
“In times of general chaos and conflict, we do see a lot of disinformation and misinformation,” said Kaplan.
“This was true during the invasion of Ukraine, the withdrawal of troops from Afghanistan, and now, with the conflict between Israel and Hamas.”
She also said that those who spread misinformation included both profit-driven and politically motivated actors.
“We see commercial actors latching on to news coverage and playing both sides of the conflict to get things trending and to build a following for ads or spam campaigns,” Kaplan said.
In other cases, politically motivated users post or share false information because they feel pressure to defend their ideological position, particularly when it matches their “preconceived notions or confirmation bias.”
On Tuesday, U.S. Senator Michael Bennet sought information on what tech giants Meta, X, TikTok and Google were doing to stop the spread of misinformation.
“Deceptive content has ricocheted across social media sites since the conflict began, sometimes receiving millions of views,” Bennet said in a letter addressed to the company chiefs.
The companies have offered some detail on their responses: TikTok said it had brought on additional Arabic- and Hebrew-speaking content moderators; Meta said it had removed or flagged more than 795,000 pieces of content in Hebrew or Arabic in the three days following the Hamas attack; and both X and YouTube said they had removed harmful content.
However, Bennet said the measures taken by the companies were not enough.
He also criticised all four companies for laying off staff over the past year from their trust and safety teams, which were responsible for monitoring harmful and misleading content.
“The mountain of false content clearly demonstrates that your current policies and protocols are inadequate,” he said in the letter.
“Your platforms are helping produce an information ecosystem in which basic facts are increasingly in dispute, while untrustworthy sources are repeatedly designated as authoritative.”
