The rapid advancement of artificial intelligence (AI) has brought a myriad of benefits to society. One of the most notable applications of AI is the development of synthetic voices, which have become increasingly prevalent in the tech industry. While these AI-generated voices, nearly indistinguishable from a human’s, may seem harmless, there are several potential dangers associated with their use.
Over the past week, a song created entirely by AI went viral on TikTok. “Heart on My Sleeve” featured vocals by an AI-generated Drake alongside an AI-generated The Weeknd, and the TikTok account Ghostwriter977 claimed credit for the song. Because of its widespread attention and ambiguous legality, the song was removed from all streaming platforms, but not before garnering over 15 million streams across Spotify, Apple Music, TikTok and YouTube.
On the track, Drake’s chorus sounded nearly identical to his real voice. However, his voice glitched a few times, exposing the imperfections of the AI. The lyrics resembled Drake’s discography as well, but it is unclear whether they were written by AI or by Ghostwriter977.
The Weeknd’s feature was also nearly flawless, with his voice and tone sounding identical to his real songs. Although the vocals in “Heart on My Sleeve” were on point, the melody was basic, and the mixing and production lacked polish because the song wasn’t produced in an actual studio.
Following “Heart on My Sleeve,” multiple other AI-generated songs went viral on social media. “Ivory & Gold,” with AI vocals of Travis Scott and Baby Keem, and “Winter’s Cold,” with an AI Drake, are two of the most streamed AI-generated songs.
Another trend went viral on TikTok thanks to AI-generated voices. It placed U.S. presidents and celebrities in video game voice chats or fast-food drive-thrus, simulating funny conversations between these figures. One video in particular featured Elon Musk, Donald Trump, Barack Obama, Ben Shapiro, Drake, Joe Biden, and Tucker Carlson all in the same car going through the Chick-fil-A drive-thru and ordering food, making fun of each other as they gave their orders.
Senior Parth Choudhary enjoys watching these AI-generated TikToks. “I always love how Joe Biden and Trump get into a roast session. I think they [the TikTok videos] are particularly funny because these are conversations that they would never have and would never say,” he said. “But it is a little creepy that the AI can make them sound exactly like the real people.”
Although it is fun to listen to these songs and watch these TikToks, AI-generated voices bring forth a huge problem: the potential for malicious misuse. For instance, these voices can be used to impersonate individuals, manipulate audio recordings or create fake news.
One recent consequence of AI-generated voices involved a ransom call. Arizona mom DeLynne Bock received a phone call from an unknown number, and when she picked up, her daughter Payton seemed to be on the other end. A man claimed that her daughter had crashed into his car and that he was holding her against her will in the back of his truck. As the man shouted profanities at the Bock parents, Payton could be heard on the phone saying, “Mom, I don’t want to die.”
The mother called the police, believing without a doubt that the voice on the other end was her daughter. Thankfully, when the police called Payton’s phone, she was safe at work.
One of the big questions in this case is how the scammer got Payton’s voice. Some speculated that, because she is a popular TikTok creator, the scammer simply cloned it from her videos. But Payton disproved that theory: she wasn’t on TikTok until after the scam call happened. So how did this scammer capture her voice and use AI to make a ransom call?
Although this scammer couldn’t have used TikTok videos to recreate Payton’s voice, the case poses another big risk to popular creators on social media: it’s becoming increasingly easy for scammers to build fake voice recordings from the content creators post online. Because of our current “cancel culture,” it’s easy for creators and celebrities to be “canceled.” Who is stopping someone from using a creator’s voice to make them say something that could lead to their downfall?
With voice recordings admissible in court, what stops someone from creating a fake audio recording and using it as evidence? How will court systems and legislative bodies respond to new AI technologies? When will an AI-generated voice be used to make politicians say something they never said? Will it ruin campaigns and undermine democracy? There are so many potential catastrophes that AI could create.
Senior Nikhil Ramaraju has concerns about the new AI technologies. “I am uncertain about AI. I think it’s going to be an issue at some point,” he noted. “I’m afraid that it will be used to create havoc more frequently as it gets more popular.”
While AI-generated voices may seem innocuous, they pose several potential dangers to society. As we have seen, AI can do it all, from funny TikToks to ransom calls. And as AI continues to improve, it will become increasingly difficult to distinguish between real and synthetic voices, which could lead to a rise in misinformation and propaganda. It is crucial that we take a thoughtful, proactive approach to their regulation and use, ensuring that the benefits of these technologies are not outweighed by their potential harms.