In 2024, we’ll truly find out how robust our democracies are to online disinformation campaigns

Disinformation, the sharing of false information to deceive and mislead others, can take many forms, from edited “deepfake” videos made on smartphones to vast foreign-led information operations. Politics and elections show just how varied it can be.

William Dance, Lancaster University

Hailed as “the year of elections”, with the majority of the world’s population going to the polls, 2024 will also be a year of lessons learned, in which we will see whether disinformation can truly subvert our political processes or whether we are more resilient than we think.

The dissemination of disinformation and other misleading content is not always high-tech. We often think of social networks, manipulated media and sophisticated espionage, but some efforts are decidedly low budget. In 2019, publications with names that sounded like newspapers were posted through letterboxes across the UK. Those newspapers, however, did not exist.

Bearing headlines such as “90% back remain”, they were imitation newspapers created and distributed by the UK’s major political parties. Some voters mistook them for legitimate news publications, and the Electoral Commission described the technique as “misleading”.

The News Media Association, the body that represents local and regional media, also wrote to the Electoral Commission calling for a ban on “fake local newspapers”.

Zone flooding

Research has shown that on some topics, such as politics and civil rights, figures across the political spectrum are often simultaneously attacked and supported, in an attempt to sow confusion and obscure who and what can be believed.

This practice often goes hand in hand with something called “zone flooding”, in which the information environment is deliberately overloaded with any and all information purely to confuse people. The aim of these broad disinformation campaigns is to make it difficult to believe any information at all, leading to a disengaged and potentially uninformed electorate.

Hostile state information operations and disinformation from abroad will continue to threaten countries such as the UK and US. Adversarial countries such as Russia, China and Iran continually seek to subvert trust in our institutions and processes, with the goal of increasing apathy and resentment.

Just two weeks ago, the US congressional Republicans’ impeachment proceedings against President Joe Biden began to fall apart when it was revealed that a witness had been supplied with false information by Russian intelligence officials.

Disinformation can also be found much closer to home. Although it is often uncomfortable for academics and fact checkers to discuss, disinformation can come from the very top, with members of the political elite knowingly embracing and promoting false content. This is compounded by the reality that fact checks and corrections may never reach the same audience as the original content, allowing some disinformation to go unchallenged.

AI-fuelled campaigns

Recently, there has been increased focus on the role of artificial intelligence (AI) in spreading disinformation. AI allows computers to carry out tasks that previously only humans could do, meaning AI-enabled tools can perform very sophisticated work with little human effort and at low cost.

Disinformation can be both mediated and enabled by AI. Bad actors can use sophisticated algorithms to identify and target swathes of people with disinformation on social media platforms. Much of the focus, however, has been on generative AI: the use of this technology to produce text and media that appear to have been created by a human.

This can range from using tools such as ChatGPT to write social media posts, to using AI-powered image, video and audio generation tools to depict politicians in embarrassing but fabricated situations. The latter encompasses what are known as “deepfakes”, which vary in quality from crude to convincing.

While some say that AI will shape the coming elections in ways we can’t yet understand, others think the effects of disinformation are exaggerated. The simple reality is that, at present, we do not know how AI will affect the year of elections.

We could see vast deception on a scale previously only imagined, or this could be a Y2K moment, where our fears simply do not come to fruition. We are at a pivotal point, and the extent to which these elections are affected, or not, will inform our regulatory and policy decisions for years to come.

If 2024 is the year of elections, then 2025 is likely to be the year of reflections: on how susceptible our democracies are to disinformation, on whether our societies are vulnerable to sweeping deception and manipulation, and on how we can safeguard future elections.

Whether it is profoundly consequential or merely bubbles under the surface, disinformation will always exist. But the coming year will determine whether it sits at the top of the agenda for governments, journalists and educators to tackle, or becomes simply something we learn to live with.

William Dance, Senior Research Associate, Lancaster University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
