Cambridge University’s Misinformation Susceptibility Test found that only 11% of 18–29-year-olds have a high rate of success in identifying misinformation (Louis Ashworth for Varsity)

Civil war in the UK is “inevitable”, cats and dogs being eaten by “illegal” immigrants across the USA, and the spread of the “woke mind virus” globally? No, I haven’t descended into some Reddit rabbit hole or joined a splinter of the now-defunct conspiracy group QAnon; these are actual tweets by X’s owner Elon Musk to his almost 200 million followers.

Suspicion and conspiracy on social media – or any media for that matter – have always been there. But since Musk bought Twitter, populist messaging has been given licence to play out at the front and centre of our public discourse.

This is the paradox that democracies now face: the digital world is perhaps the most democratising force in the modern world, yet it is being actively used to erode trust in politics and our institutions. If the Capitol Insurrection wasn’t enough, the Brazilian Congress uprising and now the Southport Riots have demonstrated that an unregulated digital world is unsustainable.

“Fake news can garner literally millions of reads in a very short timespan”

So, what do Cambridge students across different subjects think about these rapidly emerging threats to political stability, happening on the very platforms we use daily? I spoke to some, who will remain anonymous, to get their views on Musk’s X and creating a safer digital world.

Several students quickly pointed to the practical difficulties of regulating any platform. As a second year Natural Scientist put it, “Fake news can garner literally millions of reads in a very short timespan, and even if it’s taken down or debunked there’s no guarantee all the original readers will see that.”

The Southport riots exemplified the difficulties the UK authorities face in confronting misinformation on the internet. Take Tommy Robinson’s false claims that two white men had been stabbed by ‘Muslim gangs’ – his post received over three million views, but Staffordshire police’s repudiation of these claims received fewer than 200,000.

This has drawn criticism of the Online Safety Act’s focus on moderating content after the fact. There is doubt over whether this ‘world-first’ law can keep pace with the digital age, given its adoption of a content-based model that treats the symptom rather than the cause.

“Politically, the democratic balancing act also makes effective regulation difficult”

A Trinity Hall second year was sceptical of reliance on regulations as a long-term solution, noting that “technological advancements are likely to outpace any regulatory mechanisms”. Since the Online Safety Act was first published, the digital landscape has already changed immensely with the emergence of deepfake technology – take the faked images of Taylor Swift endorsing Trump that he shared himself.

Practical concerns aside, the democratic balancing act also makes effective regulation politically difficult. Initially, the Online Safety Act contained provisions that protected adults from “legal but harmful” content, but these were removed during the report stage of the bill.

A pertinent example of this kind of borderline content is “two-tier Keir”, a hashtag that amassed 100 million views on X during the riots, which is part of a wider conspiracy theory that white protestors are treated less favourably by the police than protestors from ethnic minorities.

We know the effect that these racist dog whistles can have on our communities, but there is also concern about limiting free speech without any guarantee that this will reduce misinformation – one student stressed that “any regulation of free speech should be very precise in what it targets.”

“It is wishful thinking that any financial penalty from the Online Safety Act can force the hand of the world’s richest man”

What makes X stand out in all of this is that Musk isn’t playing by the same rulebook as most social media CEOs. Whilst other companies scramble to shore up their advertising revenue by at least appearing to tackle misinformation, a second year Education student pointed out that Musk has only further reduced content moderation.

Another Natural Sciences student agreed that for Musk, X is more of a passion project than a money-making machine: “X has become his playground”. Most tech CEOs would be put off by a widespread advertising boycott of their platform, but the plummeting commercial viability of X appears to have only made Musk more resolute in his agenda. It is wishful thinking that any financial penalty from the Online Safety Act can force the hand of the world’s richest man.

This air of impunity makes Brazil’s decision to ban X after Musk broke local regulations attractive, at least on a rhetorical level – big bad tech CEO is finally dethroned by none other than the Rule of Law.

“Wherever users relocate to, the essence of social media as a means of capturing consumer attention for the sake of advertisers will not change”

However, it is also a reminder of how easily regulations can tip the scales in the wrong direction. After the attack on the Brazilian Congress, Alexandre de Moraes, the senior Brazilian judge who took X offline, used the expanded powers he was granted to curb misinformation to jail five people for populist messaging without any trial.

Students raised concern about the Brazilian response, with one noting that Brazil “now finds itself in a group of nations that includes North Korea, Iran and Venezuela.”

To the question of a boycott, students were mostly supportive but ultimately unconvinced. The immediate relocation of Brazilian ex-Twitter users to the platform BlueSky – which received two million new users in just four days – suggests X is not irreplaceable.

But for as long as X remains online, basic self-interest means that even journalists who criticise Musk are unlikely to ditch a platform on which they have cultivated a following. In any case, one student suggested that leaving may only further cement X as an echo chamber.

What is clear is that no short-term measure – whether it be regulation, a boycott, or even banning X – can permanently shore up digital safety. Wherever users relocate to, the essence of social media as a means of capturing consumer attention for the sake of advertisers will not change.


Cambridge University’s Misinformation Susceptibility Test discovered that only 11% of those aged 18 to 29 have a high rate of success in identifying misinformation – and algorithms that prioritise sensationalist and divisive content will continue to exploit this.

The long-term solution is equipping users with the skill set to navigate the digital world safely. The rapid expansion of the digital world means that regulations may always be fighting to catch up, but greater media literacy can help shore up online safety without the democratic sacrifice of pulling the plug.