
Friend or Foe:
The Truth Behind Deepfakes in Business

By Sarah Tijou | 11th November 2022

As Deepfake technology becomes more sophisticated, it will be increasingly difficult to tell fake videos apart from real ones. And what’s more alarming is how this technology is being used for cybercrime, becoming a huge security risk for organizations of all sizes. 

We’ve seen manipulations causing immense ramifications for celebrities and politicians, but what about senior business leaders?  

The First Noted Deepfake Business Scam

In 2019, the CEO of an unnamed UK-based energy firm transferred $243,000 to the bank account of a Hungarian supplier, acting on urgent orders from his superior. Or so he thought. The Wall Street Journal reports that the superior’s “distinctive accent and slightly melodious way” of speaking came through the phone, but it was in fact AI voice technology, and he had been scammed.

Elon Musk Promoting Crypto Scams

Earlier this year, crypto scammers produced a Deepfake video interview of SpaceX founder, Elon Musk, promoting a cryptocurrency scam called BitVex. The video syncs his lips to a script delivered by a software-generated voice that sounds just like his. In the video, fake Musk claims that BitVex is a project he created to ensure Bitcoin is widely adopted and promises 30 percent returns every day over three months on any crypto deposited.  

Fake Meetings with a “Binance” Executive

The world’s largest crypto exchange, Binance, came under threat when scammers made a Deepfake of Chief Strategy Officer, Patrick Hillmann, to trick contacts into taking meetings. In his blog he writes, “Other than the 15 pounds that I gained during COVID being noticeably absent, this Deepfake was refined enough to fool several highly intelligent crypto community members.” 

Are You Sure You’re Dealing with Real People?

These cases are becoming increasingly common. Rick McElroy, principal cybersecurity strategist at VMware says, “Two out of three respondents in our (Global Incident Response Threat) report saw malicious Deepfake attacks, a 13% increase from last year.” 

According to the FBI, scammers are also using Deepfakes to pose as job applicants during remote interviews, so they can gain access to company IT databases, proprietary information, or consumer or financial data.  

In a public advisory, the FBI notes, “In these interviews, the actions and lip movement of the person seen interviewed on-camera do not completely coordinate with the audio of the person speaking. At times, actions such as coughing, sneezing, or other auditory actions are not aligned with what is presented visually.” 

“By 2025, AI will power 95% of all customer interactions”

Servion Global Solutions predicts that “by 2025 AI will power 95% of all customer interactions.” This includes live telephone and online conversations that will leave customers unable to ‘spot the bot’. Finance Digest reports that, at the same time, “consumer expectations that businesses use visual technologies such as virtual and augmented reality, and holograms, are set to skyrocket.

“Businesses that fail to prepare for this future now face a severe risk of being left behind by their competitors. But, the digital-first business economy needs trust, and this trust is under daily assault.” Deepfakes are a major driver of that assault, undermining the ability of companies and consumers to trust one another online.

Defense Against the Deepfakes

The sophistication of AI means that, in most cases, we are unable to recognize Deepfakes at all. “Big tech companies like Microsoft and Google have been developing tools to detect these threats, and federal legislation is also in the works in an attempt to limit the damage,” the Guardian reports. LinkedIn has also introduced new security features to combat fake accounts.

Last month, the platform deployed deep-learning technology that analyzes profile pictures and videos to distinguish genuine uploads from AI creations. It looks for subtle image artifacts, which may be invisible to the naked eye, associated with images created using AI. Accounts with positive detections will be removed before they can be used to reach out to members. But these steps can only go so far.
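LinkedIn has not published the internals of its detector, but one widely studied family of techniques looks for frequency-domain artifacts that image generators tend to leave behind, such as the periodic “checkerboard” patterns produced by some upsampling layers. The sketch below is purely illustrative, not LinkedIn’s actual pipeline; the function names and the low-frequency radius are assumptions for the example.

```python
import numpy as np

def high_freq_energy_ratio(img: np.ndarray) -> float:
    """Fraction of spectral energy outside a central low-frequency band.

    Periodic upsampling artifacts from some image generators show up as
    excess high-frequency energy; this crude statistic is one kind of
    signal a detector might combine with many others.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 8  # low-frequency radius (illustrative choice)
    low = spectrum[cy - r:cy + r, cx - r:cx + r].sum()
    return float(1.0 - low / spectrum.sum())

# A smooth gradient (stand-in for a natural photo) versus the same image
# with a faint checkerboard overlaid (stand-in for a generator artifact).
smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
grid = smooth + 0.3 * (np.indices((64, 64)).sum(axis=0) % 2)
print(high_freq_energy_ratio(smooth) < high_freq_energy_ratio(grid))  # True
```

A production system would learn such features from labeled data rather than hand-tune a single statistic, but the example shows why artifacts “invisible to the naked eye” can still be measurable.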

So How Do We Further Protect Our Businesses from This Growing Danger?

Preparing now is necessary for protection. After a recent roundtable event about fighting cyber fraud, Mary Ann Miller, VP of Client Experience, Fraud and Cybercrime Executive Advisor at Prove, spoke to GDS again to share her advice on protecting businesses from Deepfake scams. Here are her three top points to consider:

One: “Any security routine that relies on facial or voice recognition needs to think about how Deepfakes can beat the solutions. Ensure that the vendors or the providers you’re working with have ways to detect Deepfakes. It’s going to require more and more subject matter experts and teams that understand the subject matter.” 

Two: “Expect these Deepfakes to get better, and if they do get better then it could affect some of your security routines. I always recommend that in any kind of identity, authentication, or fraud routine, make sure there are multiple layers and signals.” Mary Ann says that’s the only time to ‘green light an interaction.’ 

Three: “The first thing is to educate. Ensure your security teams and your executives are educated about the sophistication of Deepfakes. Don’t be naive about how sophisticated they are. Deepfakes can be very dangerous, and we can’t underestimate that.” 
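Mary Ann’s second point, requiring multiple layers and signals before green-lighting an interaction, can be sketched as a simple policy that refuses to let any single check carry the decision. The signal names, scores, and thresholds below are hypothetical, for illustration only, and are not Prove’s actual product logic.

```python
from dataclasses import dataclass

@dataclass
class Signals:
    """Independent checks an identity routine might collect.

    Each score is in [0, 1]; the names and values are illustrative only.
    """
    face_match: float      # biometric comparison score
    voice_liveness: float  # anti-spoofing / liveness check
    device_trust: float    # known device, account tenure, etc.
    behavior: float        # session behavior, typing cadence

def green_light(s: Signals, floor: float = 0.6, mean_bar: float = 0.75) -> bool:
    """Approve only when every layer clears a floor AND the overall picture
    is strong, so one spoofed signal (e.g. a Deepfaked face) cannot pass
    the routine on its own."""
    scores = [s.face_match, s.voice_liveness, s.device_trust, s.behavior]
    return min(scores) >= floor and sum(scores) / len(scores) >= mean_bar

# A convincing Deepfake can max out face_match, but the weak liveness
# signal still blocks the interaction.
spoof = Signals(face_match=0.99, voice_liveness=0.2, device_trust=0.9, behavior=0.8)
legit = Signals(face_match=0.9, voice_liveness=0.85, device_trust=0.9, behavior=0.8)
print(green_light(spoof), green_light(legit))  # False True
```

The design choice here is the `min(scores)` floor: averaging alone would let one near-perfect Deepfaked signal mask a failed liveness check, which is exactly the single-layer weakness Mary Ann warns against.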

GDS Summits are tailored three-day virtual conferences that bring together business leaders and solution providers to accelerate sales cycles, industry conversations, and outcomes. At the Digital Innovation Summit, 75% of delegates said their overall experience was above average or excellent, and 75% of delegates who responded said the summit provided them with actionable outcomes to support their current initiatives.

Apply to Attend 
