In the not-so-distant past, the University of Cambridge Social Decision-Making Laboratory embarked on a thought-provoking research project: investigating whether neural networks could generate misinformation. The researchers turned to GPT-2, ChatGPT’s predecessor, trained it on a corpus of popular conspiracy theories, and prompted it to produce fake news articles. GPT-2 obliged, delivering thousands of misleading yet disturbingly plausible stories with alarming headlines like “Certain Vaccines Are Loaded With Dangerous Chemicals and Toxins” and “Government Officials Have Manipulated Stock Prices to Hide Scandals.”
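To make the mechanics concrete, here is a minimal sketch of how a model like GPT-2 can be prompted to continue a seed text, using the open-source Hugging Face transformers library. The model name, seed prompt, and sampling parameters are illustrative assumptions, not the Cambridge lab’s actual pipeline, and the example deliberately uses a benign prompt:

```python
# Illustrative sketch only: prompting the publicly released GPT-2 model
# to continue a seed text. This is not the researchers' actual setup,
# which also involved training on a conspiracy-theory corpus.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled continuations reproducible

generator = pipeline("text-generation", model="gpt2")

# A benign seed; the model extends it token by token.
outputs = generator(
    "Scientists announce that",
    max_new_tokens=30,       # cap the length of each continuation
    num_return_sequences=3,  # draw three independent samples
    do_sample=True,          # sample rather than pick the top token
)

for out in outputs:
    print(out["generated_text"])
```

The point of the sketch is how low the barrier is: a few lines of off-the-shelf code yield fluent, headline-like text at essentially no cost.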
To gauge the public’s vulnerability to AI-generated fake news, the Cambridge group developed the Misinformation Susceptibility Test (MIST). Working with YouGov, they used the AI-generated headlines to measure Americans’ susceptibility to misinformation. The results were concerning: the vaccine headline deceived 41 percent of respondents, and 46 percent believed the government was manipulating the stock market. These findings highlight a distressing reality: people struggle to tell AI-generated misinformation from real news.
As AI technology continues to advance, the risk of AI-generated misinformation infiltrating our lives grows. A study published in Science Advances found that GPT-3, the successor to GPT-2, surpasses humans at producing compelling disinformation, and that readers struggle to distinguish human-written from AI-generated misinformation. The picture this paints is grim: by 2024, AI-generated misinformation will likely pervade elections without the general public noticing. Indeed, many people have probably already encountered it, such as the fabricated 2023 report of an explosion near the Pentagon. Accompanied by an AI-generated photo showing a massive cloud of smoke, the false story caused a public uproar and even briefly moved the stock market.
The Weaponization of AI in Politics
Artificial intelligence threatens not only the news but the political domain as well. Republican presidential candidate Ron DeSantis used AI-generated images of Donald Trump embracing Anthony Fauci as part of his campaign. This blending of real and AI-generated visuals lets politicians blur truth and fiction and mount powerful political attacks. Previously, cyber-propaganda firms had to write misleading messages themselves and employ human troll factories to amplify their reach. Generative AI has automated and weaponized that process, removing barriers to entry and making it cheap and readily available. Anyone with access to a chatbot can now generate countless persuasive fake news stories about contentious topics like immigration, gun control, climate change, or LGBTQ+ issues. The emergence of hundreds of AI-generated news sites exacerbates the problem, saturating the online landscape with false stories and videos.
Researchers at the University of Amsterdam studied how AI-generated misinformation affects people’s political attitudes. They created a deepfake video in which a politician mocked his religious voter base, saying, “As Christ would say, don’t crucify me for it.” The results were disconcerting: the video significantly worsened religious Christian voters’ attitudes toward the politician. Experimenting with AI-generated disinformation in controlled settings is troubling enough; the real danger lies in its potential impact on democratic processes.
A Threat to Democracy
Looking ahead to 2024, the rise of AI-generated misinformation appears inevitable. We should anticipate an escalation in deepfakes, voice cloning, identity manipulation, and AI-produced fake news. In response, governments around the world will likely impose strict limits or outright bans on the use of AI in political campaigns; failing to do so would leave democratic elections vulnerable to AI-facilitated manipulation. Our democratic values must be safeguarded, and AI-generated misinformation poses a serious threat to free and fair elections.
The proliferation of AI-generated misinformation presents a grave threat to our society. With the power to deceive and manipulate, AI technology can disrupt democratic processes and erode public trust. Awareness, regulation, and proactive measures are imperative to curb its detrimental influence and preserve the integrity of our information systems and democratic institutions.