Cyber Fraud Can Be Performed with Voice Deepfake Tools

The Beatles recently released a new song, created by combining parts of an old recording and improving its sound quality with artificial intelligence (AI), delighting their millions of fans around the world once again. But alongside the joy of the band's new masterpiece, there is also a dark side to using AI to create fake voices and images.

What can be done with voice deepfakes?

OpenAI recently demonstrated an Audio API model that can synthesize realistic human speech from input text. For now, this OpenAI software is about the closest synthesized speech gets to a real human voice.

In the future, such models may become a new tool in the hands of attackers. The Audio API voices the text it is given, and users can choose which of several preset voices will read the text aloud. Although the OpenAI model cannot be used to create deepfake voices in its current form, its capabilities show how rapidly voice-generation technology is developing.
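
For illustration, here is a minimal sketch of what such a text-to-speech call looks like with OpenAI's Python SDK. The model and voice names ("tts-1", "alloy") reflect the public API at the time of writing and are assumptions that may change:

```python
# A minimal sketch, assuming the `openai` package is installed and the
# OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()

# Ask the Audio API to voice the given text with one of the preset voices.
response = client.audio.speech.create(
    model="tts-1",   # OpenAI's text-to-speech model (assumed name)
    voice="alloy",   # one of several selectable preset voices
    input="Hello! This sentence was synthesized, not recorded.",
)

# The response body is the audio itself; write the MP3 bytes to disk.
with open("speech.mp3", "wb") as f:
    f.write(response.content)
```

The point of the sketch is not the specific vendor but how little code now stands between plain text and convincing audio.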

Few tools available today can produce deepfake audio of such high quality that it is indistinguishable from real human speech. But in the last few months more and more voice-generation tools have been released, and while they once required basic programming skills, they are becoming easier to use by the day.
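
To illustrate how low the barrier has become, here is a hedged sketch using the open-source Coqui TTS library (not a tool named in this article); the model identifier is one of its published English models and may change between releases:

```python
# A minimal sketch, assuming the open-source Coqui `TTS` package is
# installed (pip install TTS); the first run downloads the model weights.
from TTS.api import TTS

# Load a pretrained English text-to-speech model.
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")

# Synthesize speech and write it straight to a WAV file.
tts.tts_to_file(
    text="Generating a human-sounding voice now takes three lines of code.",
    file_path="generated_voice.wav",
)
```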

Fraud using artificial intelligence is still rare, but there are already “successful” cases. In mid-October 2023, American venture capitalist Tim Draper warned his Twitter followers that scammers could use his voice: requests for money made in his voice, he said, were the work of artificial intelligence that is getting smarter every day.

How can you protect yourself from this?

For now, the best way to protect yourself is to listen carefully to what the caller tells you on the phone. If the recording quality is poor, there is background noise, or the voice sounds robotic, that is reason enough not to trust what you hear.
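
If you want a programmatic signal rather than your ear alone, one crude heuristic is spectral flatness, which tends to be higher for noisy or synthetic-sounding audio. The sketch below uses the librosa library; the 0.3 threshold is an arbitrary illustrative value, not a validated detector:

```python
# A rough illustrative sketch, not a real deepfake detector: it flags
# recordings with unusually high average spectral flatness, one crude
# proxy for "noisy or robotic" audio. Assumes librosa and numpy are
# installed; the threshold is an assumption chosen for illustration.
import librosa
import numpy as np

def sounds_suspicious(path: str, threshold: float = 0.3) -> bool:
    y, sr = librosa.load(path, sr=None)                 # load the recording
    flatness = librosa.feature.spectral_flatness(y=y)   # shape (1, n_frames)
    return float(np.mean(flatness)) > threshold

if __name__ == "__main__":
    print(sounds_suspicious("incoming_call.wav"))
```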

Another good way to test the “humanity” of the other party is to ask an unconventional question. If a voice model is calling you, for example, a question about its favorite color may throw it off, since that is rarely something it is prepared for. Even if the attacker manually cues up and plays back a response at that point, the delay in answering makes it obvious that you are being tricked.

Another safe option is to use a reliable, comprehensive security solution. Although such solutions cannot detect deepfake voices with 100 percent accuracy, they can help users avoid suspicious websites, payments, and malware downloads by protecting browsers and checking every file on the computer.