Deepfakes, phoneys and fraud: Why GenAI is changing the game for security teams

Article - Security
By Ben Thompson | 15th January 2024

As a journalist and presenter, I talk about the intersection of people and technology all the time.

The impact that emerging tech has on the way we work – on our teams, on our processes, on the way we structure our organisations and business models – is an endlessly fascinating topic. Given the rapid rate of technology change, it’s an environment that shifts constantly. As the old saying goes: change has never been this fast, and it will never be this slow again.

But unbridled adoption of new tools and ways of working also brings fresh risks and new challenges. And nowhere is this more apparent than in the fast-moving area of generative AI (GenAI).

The impact of GenAI on cybersecurity

A couple of weeks ago I had my mind officially blown.

While hosting GDS Group’s Security Summit, I was moderating a panel discussion on emerging digital technologies in the cybersecurity space. We were looking at how developments in generative AI are lowering barriers to entry for cybercriminals and making it easier to create effective social engineering scams and phishing campaigns.

It was a lively conversation. We had representatives from the Royal Bank of Canada, the United States Space Force, and security firm ExtraHop – until it was revealed that, in fact, we didn’t. Because one of our guests was a fake.

And not just any fake. A deepfake.

Sure, he looked like our expected guest. He moved and sounded like him. But he was an AI-generated avatar, placed there by our speaker to highlight the increasing difficulty organizations and individuals alike face in distinguishing between fact and fiction.

Deepening the deepfake debate

The reveal was a genuine jaw-drop moment – for both myself and my fellow panellists – not least because it perfectly realized many of the concerns we’d been discussing at the summit. How attacks were becoming more sophisticated. How technology was making it easier for bad actors to penetrate our defences. And how the speed at which GenAI was being adopted posed huge challenges to our ability to respond as security professionals.

And as Denny Prvu, Global Director for Architecture, Innovation Labs, Immersive, Quantum and Generative AI over at Royal Bank of Canada – and the man behind the deepfake stunt – pointed out, understanding that this is our new reality (pun most definitely intended) is critical.

“I think it’s key that we don’t fear incoming advances in technology,” he told an audience made up of many of North America’s leading security execs. “It’s not so long ago that Wi-Fi was new to all of us, and now we’re going from mobile devices to VR and AR environments, to digital twins, to generative AI and more. Technological progress is natural and irreversible.”

Instead, he said, we need to better understand the human dimension when it comes to mitigating the impact – and then marry that insight with the application of emerging technologies. “We need to look at the ways people interact with systems. Use common sense. Look at patterns. There are so many great behavioural technologies out there, and if we do our due diligence, we’ll be able to chain them all together and get the best out of them.

“It’s exciting to explore these new fields. We just need to apply some common sense.”

A clear and present danger

Prvu’s fellow panellists agreed that this is something we need to focus on as a matter of urgency. “I think this is a real threat, and one that goes beyond just impersonation,” said Thomas Clavel, Senior Director of Product at cybersecurity firm ExtraHop. “Once you start using avatars and other AI technologies that are capable of imitation, you can penetrate networks much more easily – and from there you can do a lot of damage.”

Brian Hostetler, Director of Cyber Operations at the US Space Force, agreed and called for more collaboration around how the sector is evolving. “There are really no governmental or industry regulations around AI as yet,” he said. “We need to ensure the use of such technologies is both ethical and legal, and as we start maturing, we need to implement guardrails to govern what that looks like moving forwards.”

As ever, finding that balance between technology, governance and the humans in the loop will be key. “At the end of the day, you need to be able to identify behaviour, and find that edge between what’s human and what’s not human, between what’s human behaviour and what’s suspicious behaviour,” said Clavel. “It’s an arms race. You need to deploy AI technologies, apply intelligence, and only that way can you combat the threat.”

Spot the difference

And while there was genuine concern around the speed at which deepfake technology was evolving – and the proliferation of tools available on the dark web for criminals to access – there are some surprisingly analogue responses emerging to help combat the threat.

“Even with the best training and the best people, it’s very hard to spot these fakes,” said Clavel. “But while training is not going to make you foolproof, it is essential in reducing the margin of error. Beyond that, you also need to have guardrails in place around what is acceptable behaviour versus what is not acceptable, and what behaviours might require a higher level of security. Governance and education remain critical.”

There were some great ideas shared amongst the wider audience, too. One came from Adam Powell, Executive Director of the Election Security Initiative at USC. He and his team are working on tightening cybersecurity ahead of the 2024 election, and he offered some practical advice gleaned from working with high-level politicians and their security teams.

“We advise that the first line of defense when receiving a voice call from someone you suspect could be a deepfake is to say, ‘Thank you very much, let me call you right back’. And for video calls, we suggest asking them to turn their heads, because the software currently isn’t very good at rendering ears! That will change in time, of course, so the key is to remain vigilant.”

Staying ahead of the game

From operations to customer and employee experience, from product innovation to complex supplier and partner ecosystems, how we interact with new and emerging technologies is shaping the way we do business.

And of course, the speed at which things are evolving unlocks huge opportunities: new products and services, faster time-to-market, more effective use of time and resources. But making the most of those opportunities means keeping security top of mind. And as the pace of change continues to increase, that gets harder to do.

According to the World Economic Forum, 66% of cybersecurity professionals experienced deepfake attacks within their respective organizations in 2022. And researchers predict that as much as 90% of online content may be synthetically generated by 2026.

Ensuring your business is ready to meet that threat is one of the greatest challenges we face in the next few years. Because if security professionals can’t stay ahead of the deepfakes, frauds and phoneys, what hope is there for the rest of us?

Join us at the next GDS Security Summit to collaborate with some of North America’s leading security specialists and find out what the future holds. We can’t wait to see you there!
