
AI threats in voice processing

Deepfakes

In 2017, Lyrebird (now Descript’s artificial intelligence research department) released a product that could imitate any person’s voice from just one minute of audio. Today, similar products can reproduce a human voice from only two or three seconds of recording. Demand for human-like voices is high: a Voicebot study found that 71.6 percent of customers prefer human voices to synthetic ones when interacting with companies.

However, there are also risks. In a well-known fraud case in Hong Kong, an employee transferred $25 million to five bank accounts after an online meeting with audio and video deepfakes of the company’s top management.

“Everyone needs to be concerned about the technology being used for nefarious purposes,” says Bob Long, president of the Americas at Daon, a biometrics and identity assurance software company. “It’s no longer where you’re looking at just famous people; it’s everyone.”