How Musk’s X Is Failing To Stem the Surge of Misinformation About Israel and Gaza

In late September, Elon Musk wrote on X that he hoped “people around the world engage in citizen journalism, so we know what’s truly happening and we get real-time, on-the-ground coverage!” He didn’t have to wait long. When Hamas attacked Israel and the country retaliated with an assault on Gaza, posts purporting to show graphic violence in real time went viral on the platform. Unfortunately, much of that content was false.

X, formerly Twitter, has been working to convince advertisers to return to the site despite its loosened content rules, and amid a widespread pause on ad spending that followed Musk’s endorsement of an antisemitic post. Musk has pointed to Community Notes — a program run by X in which thousands of fact-checking volunteers from around the world can flag posts on the platform when they lack context or are wrong — as a way to allow all kinds of speech to continue on the site.

But according to a new analysis by Bloomberg and interviews with misinformation experts, the system has operated slowly and inconsistently at a time when global tensions are especially high and misinformation can have real-world consequences.

“Elon Musk bears direct and personal responsibility for fueling the misinformation crisis on X,” said Emerson Brooking, a resident senior fellow at the Atlantic Council, a Washington-based international-affairs think tank. “This crisis has harmed public understanding of the conflict. It has given rise to countless viral falsehoods, which may be driving policy and even military decisions.”

X didn’t respond to multiple requests for comment.

Prior to Musk’s takeover, Twitter had a system for limiting the distribution of potentially problematic tweets, making sure they couldn’t do more damage before its trust and safety team had a chance to review them and decide whether the content violated rules or needed a debunking label. Routine spreaders of misinformation would have their accounts suspended or banned. The teams were particularly proactive in times of crisis, such as the onset of the Covid-19 pandemic and the US presidential election.

Musk undid much of Twitter’s protocol, saying he prefers to let more people speak their mind. In late October, he declared that posts that were corrected by Community Notes wouldn’t be eligible to make money from X’s creator compensation program, but could generally remain active.

X is a difficult platform to moderate because its recommendation system is so attuned to breaking news events, said Laura Edelson, an assistant professor at Northeastern University who has studied misinformation in large online networks. The content on X — including viral falsehoods on the platform — “accelerates very, very fast,” she said in an interview.

At the same time, Edelson said, X could be, and had been, doing more to combat misinformation on its site. The systematic dismantling of the platform’s trust and safety team “is tantamount to negligence,” she said. The company cut much of the staff responsible for responding to harmful content after Musk bought the site for $44 billion last year. While crowdsourcing context for social media posts through Community Notes “is a good idea, it’s absolutely no substitute for actual moderation.”

In order to understand how X has moderated its platform as the Israel-Hamas war has carried on, Bloomberg analyzed hundreds of viral posts on X from Oct. 7, when news of Hamas’ attack on Israel first emerged, to Oct. 21, drawing from a publicly available database of Community Notes posted online. Bloomberg collected notes about the Israel-Gaza conflict that were rated “helpful” by users and an algorithm and thus visible to the public. Reporters filtered the data for keywords related to the conflict, including terms like “Gaza,” “Israel,” “Hamas” and “Palestine.” Bloomberg manually checked posts to confirm they were related to the conflict, then checked whether they contained false information — for example, claims that footage from a video game was actually of the war between Israel and Hamas, which is designated a terrorist organization by the US and the EU. Posts containing opinion or sharing fast-moving breaking news — even if they were later understood to be inaccurate — were not labeled as misinformation in the database.
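
For readers who want to reproduce that filtering step, here is a minimal sketch, assuming a local copy of X’s publicly available Community Notes export as a tab-separated file with a free-text summary column; the file name and column name here are illustrative assumptions, not confirmed fields.

```python
import pandas as pd

# Conflict-related keywords Bloomberg cites.
KEYWORDS = ["gaza", "israel", "hamas", "palestine"]

# Load the public notes export (illustrative file name).
notes = pd.read_csv("notes.tsv", sep="\t")

# Keep notes whose free-text summary mentions any conflict keyword.
summary = notes["summary"].fillna("").str.lower()
mask = summary.apply(lambda text: any(k in text for k in KEYWORDS))
conflict_notes = notes[mask]

print(f"{len(conflict_notes)} of {len(notes)} notes mention the conflict")
```

From there, each matching post would still need the manual review described above; keyword filtering alone can’t separate misinformation from opinion or fast-moving breaking news.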

X has said that it recently added more contributors to Community Notes and that it tries to notify people who have engaged with a post that later receives a note. The company also said that notes were appearing faster on the platform than they used to. But Bloomberg’s analysis found that there was still usually a significant delay before a Community Notes label became visible on the platform.

Across nearly 400 posts with misinformation that Bloomberg checked, a typical note took more than seven hours to show up, while some took as long as 70 hours — a crucial window during which a particular lie had the chance to travel far and wide on the platform. As Bloomberg has previously reported, notes only become visible to the public if users from a “diversity of perspectives” agree that a note is “helpful.” A note may also be discarded even after it’s deemed helpful, if its support later skews toward one side of the opinion spectrum.
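
A rough version of that delay measurement can be derived from the same public data. The sketch below assumes a note-status history export with hypothetical column names for a note’s creation time and the moment it first left the default “needs more ratings” state — one component of the post-to-label delay Bloomberg measured.

```python
import pandas as pd

# Illustrative file and column names, not confirmed fields.
status = pd.read_csv("noteStatusHistory.tsv", sep="\t")

# Notes that reached public "helpful" status.
helpful = status[status["currentStatus"] == "CURRENTLY_RATED_HELPFUL"]

# Hours from a note's creation to its first decisive rating — a proxy
# for how long a correction stayed invisible to readers.
delay_hours = (
    helpful["timestampMillisOfFirstNonNMRStatus"] - helpful["createdAtMillis"]
) / (1000 * 60 * 60)

print(f"median: {delay_hours.median():.1f}h  max: {delay_hours.max():.1f}h")
```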

In the Atlantic Council’s study of Community Notes, according to Brooking, researchers saw partisan groups hijacking the system for their own purposes. They also observed that notes could “radically change” after a few days, including disappearing and reappearing multiple times.

“It seems that X is putting so much emphasis on Community Notes because the platform has run out of resources to do anything else,” Brooking said. “Community Notes aren’t even a band-aid for the misinformation problem.”

In many cases, the accounts pushing misleading war footage and unsubstantiated narratives had blue checks by their names, indicating they paid X at least $8 a month for premium status, which includes “prioritized” ranking for their posts. Under the prior regime, the blue check meant a user was a public figure with a verified identity. Under Musk, the platform’s own features could amplify accounts spreading misinformation about the Israel-Hamas war, helping their falsehoods travel even further.

In Bloomberg’s analysis, 83 distinct X Premium accounts purchased the special status in the last four months and went on to circulate misinformation about the conflict. That finding is based on a comparison with a database compiled by the independent researcher Travis Brown, who documented the active Twitter Blue accounts as of July 2023.
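
That comparison amounts to a set difference between two account lists: accounts circulating conflict misinformation with premium status today, minus those already subscribed in Brown’s July 2023 snapshot. A hedged sketch, with stand-in file names rather than the actual datasets:

```python
def load_handles(path: str) -> set[str]:
    """Read one lowercase account handle per line into a set."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

# Stand-in file names; the real datasets are not distributed this way.
misinfo_premium = load_handles("misinfo_premium_accounts.txt")
blue_july_2023 = load_handles("twitter_blue_july_2023.txt")

# Premium accounts spreading misinformation that were NOT subscribers in
# July 2023 — i.e., they purchased the status in the last four months.
newly_premium = misinfo_premium - blue_july_2023
print(f"{len(newly_premium)} accounts bought premium after July 2023")
```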

The way people have used the new feature on X has real consequences. On Oct. 17, a verified account under the name Farida Khan posted, “I have video of that Hamas missile landing in the hospital.” The account also claimed to belong to a journalist with Al Jazeera working in Gaza.

[Screenshot of the post, via archive.li]

But the post was false. The account had been created just days before it weighed in on the violence in Gaza, and it had been posting about unrelated matters, like stocks. Soon after, Al Jazeera’s official account stated that Farida Khan had no ties to the network. X ultimately suspended the account.

Several accounts that spread misinformation in the first two weeks of the Israel-Hamas war can be traced back to accounts that had previously been suspended and whose bans were reversed after Musk took over the company.

For example, the far-right activist Laura Loomer, who has called herself a “proud Islamophobe” and was banned from Twitter in 2018 for violating its rules against hateful conduct, had her account restored by Musk last year. Loomer said on Oct. 12 that a “pro-Hamas caravan” had driven through the streets of London while shouting racial epithets through a bullhorn, framing the incident as if it had just occurred.

But the video was from May 2021. British police arrested four people involved in the incident at the time, several news outlets reported.

Reached for comment, Loomer asked why Bloomberg News was trying to “attack a Jewish journalist.”

Since the conflict broke out, X has carried out some moderation of posts. Bloomberg found that the company suspended more than a dozen accounts in its examination of Community Notes about the Israel-Hamas war. But with the platform offering little to no transparency about how it enforces its rules, users are left to guess what threshold of policy violation will result in a suspension. Some repeat offenders have remained active on the platform, with no public explanation from the company of why their rule-breaking was not egregious enough to merit a ban from X.

One account that appeared multiple times in Bloomberg’s dataset, and that has been known to spread antisemitic hate speech in the past, belongs to Jackson Hinkle.

In the first two weeks of the conflict, Hinkle falsely said that footage of paragliders showed Hamas surging into Israel. But the video, which featured parachute jumpers in Egypt, predated the conflict and had been online since at least September. He also said that Colombia had expelled its Israeli ambassador, a claim reporters proved false. In late October, he made the extraordinary claim that Israel had lied about the Oct. 7 attacks, citing the reporting of the Israeli newspaper Haaretz, which swiftly debunked the lie.

Hinkle, who didn’t respond to requests for comment, continues to post dozens of times a day about the Israel-Gaza conflict on X. In October, he called his account the “most viral worldwide,” claiming it had surpassed even Musk’s. At the beginning of the year, Hinkle had around 100,000 followers on the platform; now he has 2.1 million. Musk has 163 million followers on X.

On Oct. 29, Musk said that “any posts that are corrected by @CommunityNotes become ineligible for revenue share. The idea is to maximize the incentive for accuracy over sensationalism.” Two days later, Hinkle, potentially looking for another avenue to monetize his feed on X, urged his followers to subscribe to his account for $3 in order to help him “DEFEAT THE ZIONIST LIES.”

Hinkle’s is far from the only account that has pivoted from spreading misinformation about other topics to spreading it about the Israel-Hamas war.

A new crop of influencers on X with premium status has found virality by repeatedly spreading misinformation on the platform since the Israel-Hamas war broke out, according to Bloomberg’s analysis. Among them are @visegrad24, a right-wing Polish news aggregator that claimed without evidence that the Taliban had asked the governments of Iran, Iraq and Jordan for passage to join up with Hamas, and @MarioNawfal, an account that once focused on cryptocurrency and that Musk has routinely engaged with. Nawfal posted about the destruction of a building in Gaza from May 2021 and a clip of a Syrian refugee camp from December 2020, falsely framing the videos as recent footage from the conflict.

Several of the repeat misinformation spreaders in Bloomberg’s analysis overlapped with a recent report from the University of Washington’s Center for an Informed Public, which dubbed a handful of high-performing accounts on X the “new elites” because they exercised “disproportionate power and influence” over news about the Israel-Hamas war on the platform.

Visegrád 24 and Nawfal didn’t respond to requests for comment.

Nora Benavidez, the senior counsel at Free Press, a digital civil rights group, said that the changes to X’s platform under Musk’s governance have made the social network worse for users around the world and “incentivized grifters, conspiracy theorists and propagandists to drown the platform in disinformation” amid a rising tide of antisemitic and Islamophobic speech online.

“Musk has transformed one of the world’s most widely used sources of breaking news and information into his plaything to promote bigotry,” Benavidez said. “With a platform like Twitter in such shambles, the risks to democracy, national security, and user safety are astounding.”