AI Voice Deepfakes Con Customers $200K in Three Days

Pindrop’s review of phone calls revealed that financial call centers receive more than 5 billion calls annually.  Between 1,000 and 10,000 calls out of these are fraudsters trying to reap where they did not sow.
Voice cloning is real, and it keeps delivering new paydays to fraudsters. Scammers now challenge established bank security systems by using AI voice deepfakes to create realistic voices that are almost impossible to detect. Haywood Talcove of LexisNexis acknowledged the difficulty these AI systems create: "I am seeing a highly concerning rise in criminals using advanced technology – AI-generated deepfakes and cloned voices – to perpetrate very devious schemes that are almost impossible to detect."

In March, CBC News featured the story of Jane (not her real name). She received a distressed call from her grandson claiming that he had been involved in a car accident and arrested. The caller explained that police had found drugs in the trunk, and that he had called Jane because she would help without judging him.

Out of a grandmother's love, Jane offered to help with bail without informing his parents. The supposed grandson handed the phone to a "police officer," who gave instructions on how to post bail. By the end of the day, Jane had sent $58,350. Within three days, eight people were scammed to the tune of $200,000. In some instances, victims are connected to a supposed lawyer who demands a hefty retainer. These claims tug at victims' heartstrings and trigger emotional reactions.

Sample responses from victims after the scam:

"I swear on my mom's grave. It was so convincing. I know my granddaughter's voice and it was her."

"I really believed it was him."

AI Voice Deepfakes Targeting Banks

Initially, criminals used AI to target women and children with nonconsensual deepfake pornography, which law enforcement agents could at least detect and pursue. The use of AI-cloned voices in family-emergency scams is a newer challenge: not even parents suspect these criminals when the callers mimic their children's voices.

Synthetic voice generation is not only unnerving; it also threatens to render the voice-verification systems long used by financial institutions obsolete. Fraudsters already took a shot at Florida investor Clive Kabatznik. He called his local bank to discuss a large money transfer. Shortly afterwards the bank received another call, but this time it was not from Kabatznik: an AI voice deepfake attempted to persuade the banker to move the money elsewhere. The attempt failed, but it reminds bankers and customers of the subtle nature of these scamming techniques.

Technological progress has also worked in the fraudsters' favor, since audio content is freely available on social media platforms such as TikTok and Instagram.

Microsoft's VALL-E leads the pack, able to aid voice deepfakes from just seconds of audio. The New York Times reported an instance in which Rachel Tobac cloned the voice of Sharyn Alfonsi and used the clone to obtain Ms. Alfonsi's passport number. There are genuine concerns that AI deepfakes will remain a challenge, especially when scammers target individuals directly.

How to detect AI Voice Deepfakes

Detecting AI voice deepfakes can be challenging, as the technology used to create them is constantly evolving. However, there are several methods and techniques you can use to identify potential AI voice deepfakes.

Let's take a look at a few techniques that can help.

Listen Closely

Start by carefully listening to the voice. While some deepfake technologies are highly sophisticated, they may still produce subtle anomalies in speech patterns, pronunciation, or intonation that a human voice wouldn’t typically exhibit.

Analyze the Content

Pay attention to the content of the speech. Deepfake creators often use pre-written scripts or sentences that may sound unnatural or out of context.

Check for Inconsistencies

Look for inconsistencies in the voice. For example, if the voice suddenly changes pitch, speed, or tone during a conversation, it could be a sign of a deepfake.
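The pitch-consistency idea can be sketched in code. This is a toy illustration only: the synthetic sine-wave "frames," the crude autocorrelation pitch estimator, and the 60 Hz jump threshold are all assumptions for demonstration, not a production detector.

```python
import math

def estimate_pitch(frame, sample_rate, fmin=60, fmax=400):
    """Crude autocorrelation pitch estimator for one audio frame."""
    lo = int(sample_rate / fmax)            # shortest lag to try
    hi = int(sample_rate / fmin)            # longest lag to try
    best_lag, best_corr = lo, 0.0
    for lag in range(lo, min(hi, len(frame) - 1)):
        corr = sum(frame[t] * frame[t + lag] for t in range(len(frame) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag           # Hz

def flag_pitch_jumps(frames, sample_rate, max_jump_hz=60):
    """Return indices of frames whose pitch jumps abruptly from the previous frame."""
    pitches = [estimate_pitch(f, sample_rate) for f in frames]
    return [i for i in range(1, len(pitches))
            if abs(pitches[i] - pitches[i - 1]) > max_jump_hz]

# Three synthetic 100 ms frames: two at ~120 Hz, then an abrupt shift to ~240 Hz,
# standing in for a voice whose pitch suddenly changes mid-conversation.
sr = 8000
def tone(freq, n=800):
    return [math.sin(2 * math.pi * freq * t / sr) for t in range(n)]

frames = [tone(120), tone(120), tone(240)]
print(flag_pitch_jumps(frames, sr))  # [2] — the third frame's pitch jump is flagged
```

Real audio would of course be noisier than pure tones, so a practical tool would smooth the pitch track and tolerate natural prosodic variation before flagging anything.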

Speaker Verification

Use speaker verification software or services. Speaker verification technology can compare a voice to a known voice sample to determine if they match. However, this method may not always work if the deepfake is highly convincing.
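As a rough sketch of how speaker verification works under the hood: a model converts each voice sample into an embedding vector ("voiceprint"), and verification reduces to comparing embeddings. The four-dimensional vectors and the 0.75 threshold below are made-up toy values; real systems use neural speaker encoders that produce embeddings with hundreds of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify_speaker(enrolled, candidate, threshold=0.75):
    """Accept the candidate voice only if its embedding is close
    enough to the enrolled reference embedding."""
    return cosine_similarity(enrolled, candidate) >= threshold

# Toy voiceprints (hypothetical values, not real encoder output).
enrolled  = [0.9, 0.1, 0.4, 0.2]
same      = [0.85, 0.15, 0.38, 0.22]   # small deviation: same speaker
different = [0.1, 0.9, 0.05, 0.7]      # large deviation: different speaker

print(verify_speaker(enrolled, same))       # True
print(verify_speaker(enrolled, different))  # False
```

The threshold choice is the whole game in practice: a convincing deepfake can land close enough to the enrolled embedding to pass, which is exactly the limitation noted above.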

Spectral Analysis

Use spectral analysis software to examine the audio waveform. Deepfake voices may have different spectral patterns than genuine voices.
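To illustrate what spectral analysis means, here is a minimal sketch that computes a discrete Fourier transform of a synthetic 440 Hz tone and locates the spectral peak. The pure tone is an assumption for demonstration; real deepfake forensics would run an FFT over actual speech frames and look for unnatural patterns in the spectrum.

```python
import math
import cmath

def dft_magnitudes(samples):
    """Naive discrete Fourier transform; returns one magnitude per frequency bin
    (positive frequencies only)."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

# Synthesize one 50 ms frame of a 440 Hz tone at an 8 kHz sampling rate.
sample_rate = 8000
n = 400
samples = [math.sin(2 * math.pi * 440 * t / sample_rate) for t in range(n)]

mags = dft_magnitudes(samples)
peak_bin = max(range(len(mags)), key=mags.__getitem__)
peak_hz = peak_bin * sample_rate / n        # bin width is sr / n = 20 Hz
print(peak_hz)  # 440.0 — the spectrum peaks at the tone's frequency
```

A naive DFT like this is O(n²) and only suitable for tiny frames; any serious analysis would use an FFT library instead.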

In Sum

It’s important to note that while these methods can help in detecting AI voice deepfakes, there is no foolproof technique, and some deepfakes may be challenging to identify. As technology advances, so do the capabilities of deepfake creators, making it an ongoing challenge to stay ahead in the detection game. Always exercise caution when dealing with sensitive information and be skeptical of any content that seems suspicious.

