Generative AI is set to play a significant role in the 2024 US Presidential election. With chatbots and deepfakes in wide circulation, political campaigns are expected to be heavily influenced by artificial intelligence. At the same time, AI’s entanglement with politics may stall regulatory efforts in the field. Nathan Lambert, a machine learning researcher at the Allen Institute for AI and co-host of The Retort AI podcast, believes that AI regulation in the US will be impeded by the political sensitivity of an election year. This article explores the potential consequences of generative AI for the electoral process and the challenges it poses to AI regulation.
The Challenges of AI Attribution in Politics
Lambert predicts that the US election will shape the discourse surrounding AI regulation. Misuse of AI products, whether by campaigns, bad actors, or companies like OpenAI, is expected to complicate the attribution of AI-generated content. As individuals use tools like ChatGPT and DALL-E to create election-related material, the spread of misinformation becomes a serious concern. Lambert describes the situation as a “hot mess,” reflecting the difficulty of dealing with AI-generated content in a political context.
The Early Warning Signs
Even before the 2024 US Presidential election, the use of AI in political campaigns has raised serious concerns. Florida Governor Ron DeSantis’ campaign has already used AI-generated images and audio of former President Donald Trump. Additionally, a recent poll by The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy found that 58% of adults believe AI tools will facilitate the spread of false and misleading information during the upcoming elections. The public’s apprehension about AI’s impact on the political landscape underscores the urgent need for regulation and responsible AI deployment.
In response to public concerns, prominent tech companies are taking steps to address the risks AI poses to political campaigns. Google announced that it will restrict the election-related prompts its chatbot Bard and its search generative experience will respond to in the lead-up to the US Presidential election. Meta, the parent company of Facebook, has decided to bar political campaigns from using its new AI advertising products, and advertisers who use AI tools to alter or create election ads on Facebook and Instagram will be required to disclose that usage. OpenAI has also moved to combat disinformation and offensive content by tightening how ChatGPT and its other AI products handle such material.
The Copilot Controversy
Despite the efforts of some tech companies, concerns persist about the impact of generative AI on the electoral process. Wired reported that Microsoft’s Copilot, previously known as Bing Chat, has been found to spread conspiracy theories, misinformation, and outdated or incorrect information. The systemic nature of these issues casts doubt on whether reliable information about the election can be maintained at all. The potential consequences extend beyond the 2024 Presidential race: generative AI tools, whether used intentionally or not, have the power to undermine the foundations of democracy.
Alicia Solow-Niederman, an associate professor of law at George Washington University Law School specializing in law and technology, emphasizes the severity of generative AI’s impact on democracy. She references legal scholars Danielle Citron and Robert Chesney, who coined the term “the liar’s dividend.” This concept revolves around the erosion of trust and the destabilization of the electoral system when distinguishing truth from falsehood becomes increasingly challenging. The potential consequences of generative AI tools, particularly through disinformation campaigns, pose a significant threat to the fabric of democracy itself.
With generative AI at the forefront of political campaigns, the 2024 US Presidential election is poised for unprecedented challenges. Attributing AI-generated content is, and will likely remain, a complex problem requiring careful consideration from both regulators and the media. The steps taken by tech companies, though commendable, may not fully address the underlying risks of AI in politics. As the relationship between AI and elections evolves, a responsible and balanced approach to regulation becomes increasingly critical to safeguard democracy from the potential harms of AI-generated content.