Elon Musk’s rebranded social media platform, X (previously known as Twitter), recently removed the feature that allowed users to report misinformation, drawing criticism from users and watchdog organizations. The decision was especially concerning to many because it came just before a significant referendum in Australia.
Introduced in 2021, the misinformation reporting feature was a welcome addition for users increasingly concerned about the spread of false or misleading information. The recent update, however, removed this option. Now, when users try to report a post, they are presented with choices such as spam, deceptive identities, sensitive material, or hate speech, but not misinformation.
Alice Dawkins, the executive director of the research group Reset, voiced her concern over the rollback. She pointed out that in a global “bumper year” of elections, it was alarming to see X seemingly retreat from its commitment to counter misinformation of the kind that had previously fueled political upheaval in places like the US.
Moreover, Reset noticed that X had eliminated the “politics” reporting category in every region except the European Union, where the stringent Digital Services Act (DSA) requires large platforms to curb the spread of misinformation. The timing of the removal is particularly contentious, as Australia is on the brink of a landmark referendum: a vote on enshrining an Indigenous voice in its parliament. This significant step towards recognizing the rights and role of the Indigenous community makes the referendum a focal point where the spread of misinformation could be especially damaging.
Reset also warned that X might be violating DIGI’s Australian Code of Practice on Disinformation and Misinformation, which emphasizes the importance of allowing users to report misleading content.
Back in 2021, when it was still Twitter, the platform had announced with much fanfare the introduction of the misinformation reporting feature. The rollout was initiated in the U.S., South Korea, and Australia, enabling users to flag tweets they deemed misleading.
However, since Musk’s takeover of the platform the previous year, X has faced repeated accusations of allowing misinformation to spread unchecked. The European Commission’s Vice President, Vera Jourova, noted that accounts spreading disinformation appear to gain more traction on X than on rival platforms.
This is not the first time X has faced regulatory scrutiny. Earlier in the year, the EU designated X as one of 19 platforms subject to strict oversight under the DSA. Yet within a month, Musk withdrew X from the EU’s voluntary Code of Practice on Disinformation, a decision that came soon after EU officials raised concerns about X’s role in amplifying the Kremlin’s propaganda during the war in Ukraine.
Adding to the pressure, Australia’s eSafety Commissioner, Julie Inman Grant, said she had sent a legal notice to X seeking an explanation for surging complaints about hate speech and misinformation. Emphasizing that transparency is essential for accountability, Inman Grant stated that platforms like X need to safeguard their users from the harmful effects of false information.
In the ever-evolving world of social media, where misinformation can sway opinions and influence events, the move by X to remove its reporting feature leaves many questioning its responsibility and role in fostering a well-informed digital community.