From 2019 to 2023, the amount of AI-generated disinformation, or deepfakes, increased by 552% and is only expected to grow, according to Will Freedman, a Utah Valley University student studying national security and a research assistant at the university’s Herbert Institute for Public Policy.
Students from three UVU organizations, the Herbert Institute for Public Policy, the Center for National Security Studies and the Vivint SMARTLab, researched just how persuasive deepfakes are to the public and what impact they could have on elections.
“This is the first election cycle where AI-generated content, namely deepfakes, are projected to play a prominent role,” Freedman said Monday morning to a crowd on the UVU campus, where he and his classmates shared their research findings.
“Seventy percent of Americans say they are concerned about AI deepfake political robocalls, and 71% say that battleground states are the most likely targets of AI-generated disinformation.”
One high-profile example of AI-generated disinformation involved a deepfake audio message purporting to be President Joe Biden, used in a robocall to discourage New Hampshire voters from participating in a primary election earlier this year. The recording, crafted to sound like Biden, urged voters to wait for the general election in November, creating confusion among recipients before officials confirmed it was fake.
“Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again,” the voice sounding like Biden said, per The Associated Press. “Your vote makes a difference in November, not this Tuesday.”
Deepfakes are a problem unique to the 21st century.
As more people become virtually connected at younger ages, the amount of misinformation spread across the internet has increased, and advances in artificial intelligence have made it nearly impossible to discern what is real and what is not online.
“The first step of addressing any problem is understanding the problem,” Hope Fager said during the student presentation Monday. Fager, a UVU student studying national security studies and computer science, leads the strategic research team in the Emerging Tech Policy Lab at the Center for National Security Studies.
“Therefore, the overall goal of this study is to give policymakers and campaigns the information they need in order to combat this issue effectively,” she added. “In this study, we simulated as best as possible the natural circumstances of scrolling through social media and seeing videos, and then we looked at how those videos affected people that saw them.”
Fager said it took her just a weekend, working alone with a free online AI generator, to create a deepfake: the tool took two real videos, one of herself and one of her team member Leah, swapped their faces and “replaced the audio of the video with an AI-generated audio of Leah’s voice.”
Three questions were laid out before the study began.
A total of 244 participants were included in the study nationwide, 40 of whom participated in person at the Vivint SMARTLab, where biometric technologies were used to analyze nonconscious responses.
Mauricio Cornejo Nava, a business and analytics student at UVU and a customer experience researcher for the UVU SMARTLab, said participants were divided into four groups (real video, fake video, real audio and fake audio) to eliminate as much bias as possible.
“After the participants were exposed to the media content, we asked them a series of questions to gather their thoughts and impressions,” Nava said. “We used eye tracking, which tracks where the participants were looking when they were watching the content, and facial expression analysis, which can read over 3,000 micro-expressions and translate that into several emotions.”
“What we found is that the deepfake video performed better in four out of the six metrics than the real video. It was perceived as more knowledgeable, as more trustworthy, as more persuasive, and of better content quality,” Nava added.
Once participants were informed of the study’s true nature and that they might have fallen victim to a deepfake, not all were confident about whether the content they had seen was real or fake.
Nava said that “50% of the people that saw the deepfake content were confident in their response. For the real audio, 60% were confident. And for the real video, 70% were confident. So even though they saw the real content, they were still not confident at all.”
Most participants admitted that it didn’t cross their minds that the content they viewed would be AI-generated, and others were concerned that AI-generated material has become so realistic that it’s not identifiable.
“Once people have made an assessment, they’re generally going to stick to their guns, and they’re going to assume that they’re right, even if they’re wrong,” Fager added.
“This means that, with a good deepfake, a stranger could fraudulently become law enforcement, a field expert, a personal friend or a politician; based on our research, someone could adopt their identity, authority and expertise with at least a 50% accuracy rate.”