Bloomberg: Hamas attack on Israel was a test for Musk’s X – and it failed

Posts about the attack on Israel have led to confusion, disinformation, and conflict on Elon Musk’s Twitter, showing that new ownership and policy changes have turned the platform into an unreliable resource during the crisis, researchers say.

Source: Bloomberg.

Hours after armed Hamas militants from the Gaza Strip stormed into Israel in the largest attack on the country in decades, unverified photos and videos of air strikes, destroyed buildings and homes, and other scenes of military violence in Israel and Gaza flooded social media.

Many posts recycled old images from past armed conflicts, passing them off as new, and were shared by anonymous accounts with blue check marks, a signal that they had purchased verification through X's paid Premium subscription.

Other accounts passed off video game footage as combat footage. Far-right commentators spread fabricated claims about the conflict that went viral, a common tactic for boosting user engagement, the publication writes.

Bloomberg notes that after its purchase by Elon Musk, the former Twitter weakened its moderation and content policies, and the consequences have become apparent during the current geopolitical crisis.

For example, a significant number of fakes were promoted by anonymous accounts with a blue tick. On the "old" Twitter, that badge served as a marker of authenticity; on the new platform it merely indicates paid verification at $8 per month.

At the same time, fact-checking on the platform is increasingly outsourced to volunteers through the Community Notes service.

Posting is further incentivized by the ad revenue sharing program that X introduced in July for high-reach accounts. "I think a lot of them looked at the performance of Russia/Ukraine posts in 2022 and wanted to repeat it," says Emerson Brooking, an internet researcher at the Atlantic Council.

“It’s about his [Musk’s] behavior on the platform: he thinks conspiracy theories are okay, and he spreads them himself, and these things come from the top down,” says Kayla Gogarty, research director at Media Matters. And the platform failed that test.

The European Union has launched an investigation into the social network over disinformation under the recently adopted Digital Services Act. The company potentially faces fines of up to 5% of daily turnover for each violation.

“Our policy is open source and transparency, which I know the EU supports. Please list the violations on X to which you refer, so that users can see them. Merci beaucoup,” Musk replied to European Commissioner Thierry Breton.

Fakes about Israel’s war with Hamas have been plentiful on other platforms as well, but on X they have reached a new level, says Mike Rothschild, a researcher who studies conspiracy theories.

“It has become almost impossible to distinguish facts from rumors, trolling, or conspiracy theories,” the researcher says. “The changes Musk has made have left X not just useless in this crisis, but much worse than useless.”

Bloomberg cites several examples of malicious disinformation breaking through the weakened filters. The first is a video that allegedly showed Palestinians killing Jewish settlers in their own homes. It was posted by Malaysian commentator Ian Miles Cheong, who interacts with Musk personally.

By the time it emerged that the footage actually showed Israeli law enforcement officers, the video had been viewed 12.7 million times. X did not delete it, only adding a Community Notes label.

Fakes about the US allocating $8 billion in military aid to Israel and about the Taliban's intention to join the conflict racked up hundreds of thousands to millions of views.

A post about weapons allegedly supplied to Hamas from Ukraine was viewed 7 million times, and a clip of an Israeli helicopter being shot down by a MANPADS, taken from the video game Arma 3, was viewed almost as many times.

David Frum, a former speechwriter for President George W. Bush, wrote on X that 20 minutes on Twitter used to provide clearly more informational value than watching the news on TV.

“Now there is still useful information out there, but finding it, selecting it, and distinguishing it from fake news has become much more difficult than it was a year ago,” Frum writes.

X said it had flagged or removed tens of thousands of posts containing disinformation in the early days of the conflict. The effort was handled by a dedicated task force led by CEO Linda Yaccarino, who even canceled her appearance at The Wall Street Journal’s technology conference.

However, the share of moderated posts on X was significantly lower than on Facebook, even adjusted for audience size: about 8,900 versus 415,000 per day, the WSJ notes. In 75% of cases, the post was merely labeled NSFW (restricted viewing) rather than deleted.
