Recent reports have highlighted the alarming rise of deepfake pornography, with an estimated 90 percent of deepfake videos consisting of nonconsensual pornography, mainly targeting women. Despite the prevalence of the problem, legislative attention has largely gone to political deepfakes rather than to nonconsensual intimate imagery.
Kaylee Williams, a researcher at Columbia University who has been tracking nonconsensual deepfake legislation, notes that many legislators prioritize protecting electoral integrity over addressing the intimate image question. That skewed focus is illustrated by Matthew Bierlein, a Republican state representative in Michigan, who began with legislation on political deepfakes before turning to nonconsensual ones.
In Michigan, Bierlein and Democratic representative Penelope Tsernoglou collaborated on a package of nonconsensual deepfake bills, spurred by the viral spread of nonconsensual deepfakes of Taylor Swift. They saw an opportunity to take action and position Michigan as a regional leader in addressing the issue. However, the variability in penalties and protections across states complicates the enforcement of nonconsensual deepfake laws.
According to Williams, the US legal landscape on nonconsensual deepfakes is highly inconsistent: some states allow both civil and criminal cases against perpetrators, while others apply only to minors or only to adults. Laws such as Mississippi's target minors, reflecting growing concern over students using generative AI to create explicit content of classmates. Laws covering nonconsensual deepfakes of adults, by contrast, run into ethical ambiguities and the difficulty of proving intent to harm.
While there is broad consensus that nonconsensual deepfakes involving minors are inherently wrong, legislation covering adult victims is more complex. What counts as "ethical" in these cases remains unsettled, and many laws require proof of malicious intent on the part of those who create or share the material.
Overall, the fragmented approach to nonconsensual deepfake legislation at the state level emphasizes the urgent need for a coordinated and comprehensive legal framework to address this growing threat to privacy and autonomy. As advancements in AI technology continue to facilitate the creation of increasingly convincing deepfakes, lawmakers must prioritize the protection of individuals from exploitation and harm in the digital realm.