As AI-powered impersonation attacks escalate, businesses find themselves in a precarious position, facing threats that were once the stuff of science fiction. We’re joined by Sofia Khaira, an expert on the human-centric vulnerabilities that cybercriminals exploit within corporate structures. Today, we’ll explore the “perfect storm” of deepfake technology, dissecting how these scams unfold in critical departments like HR and IT, examining the emerging danger of autonomous AI agents, and discussing the fundamental shifts required to verify the human behind the screen.
The Nametag report calls this a “perfect storm” for impersonation attacks. Considering the $25 million Arup heist, what specific organizational vulnerabilities are fraudsters exploiting with deepfakes, and what does this “storm” look like on the ground for a typical C-suite leader today?
The “perfect storm” is the collision of hyper-realistic, accessible deepfake technology with our ingrained trust in digital communication. The Arup case is a terrifyingly perfect example. Scammers didn’t just hack a system; they hacked a person’s identity, using a deepfake of the CFO on a video call to trick an employee into authorizing a $25 million transfer. This exploits a core vulnerability: the chain of command and the implicit trust we place in a senior leader’s voice or face. For a C-suite leader today, this storm feels like a constant state of digital paranoia. Every video call, every voice message, every WhatsApp text—even those appearing to come from your CEO, like the attempt on Ferrari—is now suspect. The tools that enabled global business are now potential weapons, giving new superpowers to bad actors and forcing leaders to question the very reality of their daily interactions.
With Gartner predicting one in four job candidates will be fake by 2028, could you walk us through a hiring fraud scenario? Please detail how scammers use deepfake technology in the interview process and what specific red flags HR teams should be trained to spot.
It’s a scenario that keeps me up at night, and we’re already seeing it happen, as with the unsuccessful attack on Pindrop. Imagine a scammer applies for a high-paying remote developer role. They’ve stolen the identity of a top-tier programmer from their online profiles. During the video interview, they use a real-time deepfake, so the hiring manager sees and hears this legitimate, highly qualified professional. The scammer, who might have no technical skill, is simply a puppet master behind the screen. The vulnerability here is that the hiring process is often fragmented across teams, creating what the research calls “predictable openings.” HR teams need to be trained to spot the subtle tells: a slight lag between audio and video, unnatural or repetitive facial movements, or a strange lack of emotion. A huge red flag is a candidate who resists any form of secondary, out-of-band verification. That chilling Gartner prediction means we must treat candidate verification with the same rigor as financial auditing.
The report identifies IT help desks as a prime target. Describe how a scammer might use a deepfake voice to socially engineer a technician into resetting credentials. What updated, step-by-step verification protocols are essential to counter this specific threat beyond a simple phone call?
This is one of the most direct and dangerous attack vectors because the help desk holds the keys to the kingdom. A scammer can now use a perfect voice clone of an employee—let’s say a senior executive, to add a sense of urgency and authority—and call the help desk. They’ll sound stressed, claim they’re locked out of their account right before a board meeting, and demand an immediate password or multi-factor authentication reset. The technician, hearing a voice they recognize, is socially engineered to bypass protocol. To counter this, we must accept that deepfake impersonation will become a standard tactic in these playbooks. A simple phone call is no longer proof of identity. The new protocol must be multi-layered. For a sensitive request, the technician should have to initiate a live, unscheduled video call and ask the user to perform a “liveness” check, like holding up three fingers or reading a randomly generated phrase from the screen. This is far more difficult to fake in real-time than just a voice.
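To make that liveness step a little more concrete, here is a minimal Python sketch of how a help desk tool might generate a one-time challenge phrase and check the caller’s response; the word list, function names, and the speech-to-text step it relies on are illustrative assumptions, not a prescribed implementation.

```python
import secrets
import time

# Hypothetical word list; a real deployment would use a larger, curated vocabulary.
WORDS = ["copper", "meadow", "falcon", "quartz", "harbor", "violet", "ember", "juniper"]

def issue_liveness_challenge(num_words: int = 4, ttl_seconds: int = 60) -> dict:
    """Generate a random phrase the caller must read aloud on a live video call.

    The phrase is unpredictable, so it cannot be pre-recorded, and it expires
    quickly, so an attacker has little time to synthesize a deepfake response.
    """
    phrase = " ".join(secrets.choice(WORDS) for _ in range(num_words))
    return {"phrase": phrase, "expires_at": time.time() + ttl_seconds}

def verify_liveness_response(challenge: dict, spoken_transcript: str) -> bool:
    """Compare the transcript of what the caller said against the challenge.

    `spoken_transcript` is assumed to come from a speech-to-text step on the
    live call; this sketch only checks normalized text and the expiry clock.
    """
    if time.time() > challenge["expires_at"]:
        return False  # Challenge expired; a fresh one must be issued.
    return spoken_transcript.strip().lower() == challenge["phrase"].lower()

# Example: the technician issues a challenge, reads it to the caller on video,
# then confirms the response before touching any credentials.
challenge = issue_liveness_challenge()
print("Ask the caller to read aloud:", challenge["phrase"])
print("Verified:", verify_liveness_response(challenge, challenge["phrase"]))
```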
Looking at the emerging threat of “agentic AI,” what does it mean for a scammer to hijack an autonomous agent? Can you provide an example of the damage a compromised agent could do and what new security frameworks are needed to oversee them?
Agentic AI is the next frontier of this battle. Think of an AI agent as an automated employee with its own credentials and autonomy to perform tasks. Hijacking one is like turning a trusted insider into a malicious actor who works 24/7. The real danger is that its actions look entirely legitimate because the agent is authorized to be in the system. For instance, a compromised agent designed for logistics could be secretly instructed to reroute high-value shipments. Or an agent with access to financial systems could initiate thousands of micro-transactions that funnel money to an external account, all while bypassing the normal human oversight. We urgently need new security frameworks built on a zero-trust model for these agents, constantly monitoring their behavior for anomalies and restricting their access to the absolute minimum required to do their job.
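To give a flavor of what least-privilege oversight for an agent could look like, here is a minimal Python sketch of a policy gate that an agent’s actions would have to pass through; the policy fields, action names, and thresholds are hypothetical, and a real framework would also log and monitor agent behavior for anomalies over time.

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Least-privilege policy for one AI agent: explicit allowlist plus hard limits."""
    allowed_actions: set = field(default_factory=set)   # e.g. {"create_shipping_label"}
    max_transaction_usd: float = 0.0                     # hard ceiling per action
    require_human_approval_above_usd: float = 0.0        # step-up threshold

def authorize_agent_action(policy: AgentPolicy, action: str, amount_usd: float = 0.0) -> str:
    """Return 'allow', 'escalate', or 'deny' for a proposed agent action.

    Anything outside the allowlist or above the hard ceiling is denied outright;
    legitimate-but-large actions are escalated to a human reviewer instead of
    relying on the agent's own credentials as proof of intent.
    """
    if action not in policy.allowed_actions:
        return "deny"
    if amount_usd > policy.max_transaction_usd:
        return "deny"
    if amount_usd > policy.require_human_approval_above_usd:
        return "escalate"
    return "allow"

# Example: a logistics agent tries to reroute a high-value shipment.
logistics_policy = AgentPolicy(
    allowed_actions={"create_shipping_label", "update_delivery_window"},
    max_transaction_usd=10_000,
    require_human_approval_above_usd=1_000,
)
print(authorize_agent_action(logistics_policy, "reroute_shipment", 50_000))      # deny: not allowlisted
print(authorize_agent_action(logistics_policy, "create_shipping_label", 5_000))  # escalate to a human
```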
The research advises a “fundamental shift” in verifying the “human behind the keyboard.” Beyond current multi-factor authentication, what does this new approach to identity verification look like in practice? Please share a few concrete, tactical steps a company can implement immediately.
This “fundamental shift” is about moving from trusting a device to verifying a person. We can no longer blindly trust that whoever can click a link or tap a push notification is who they claim to be. In practice, this means implementing more active and dynamic identity checks. One powerful step is to introduce biometric liveness verification for high-risk actions. Before an employee can access critical financial data or approve a large payment, they must complete a quick selfie-video check to prove they are physically present. Another is context-aware security; if an employee is logging in from a new device or a strange location, the system should automatically trigger a higher level of identity verification. It’s about creating an intelligent, risk-based approach to security that can differentiate between a routine login and a potential attack in progress.
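One way to picture that risk-based, step-up logic is a small decision function like the Python sketch below; the context fields, verification tiers, and scoring weights are illustrative assumptions rather than a specific product’s behavior.

```python
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool
    usual_location: bool
    high_risk_action: bool   # e.g. approving a large payment or pulling financial data

def required_verification(ctx: LoginContext) -> str:
    """Map a login or action context to a verification tier.

    Routine activity from a known device stays frictionless; anything unusual
    or high-stakes steps up to a biometric liveness check of the actual person.
    """
    risk = 0
    risk += 0 if ctx.known_device else 1
    risk += 0 if ctx.usual_location else 1
    risk += 2 if ctx.high_risk_action else 0

    if risk == 0:
        return "standard_mfa"          # routine login, existing factors suffice
    if risk == 1:
        return "out_of_band_confirm"   # confirmation on a separately enrolled device
    return "biometric_liveness_check"  # selfie-video liveness proof before proceeding

# Example: new device, unusual location, attempting a large payment approval.
print(required_verification(LoginContext(known_device=False, usual_location=False, high_risk_action=True)))
```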
What is your forecast for the evolution of AI-powered impersonation attacks? Looking ahead, what single capability will most significantly shift the balance of power between fraudsters and corporate defenders over the next few years?
My forecast is that these attacks will become exponentially more sophisticated and pervasive. The data is already staggering—the number of online deepfakes exploded from around 500,000 to nearly eight million in just two years. We’re moving from one-off attacks to fully orchestrated campaigns run by AI. Looking ahead, the single capability that will most significantly shift the balance of power is the malicious use of agentic AI. When a fraudster no longer has to personally execute an attack but can simply hijack and deploy an autonomous AI agent to do their bidding—exfiltrating data, deploying software, or initiating transactions—the scale and speed of these attacks will be unlike anything we’ve ever seen. This transforms the threat from a human-speed problem to a machine-speed crisis, and our defenses will need a revolutionary leap forward just to keep pace.
