    Deepfake attack: 'Many people could have been cheated'
    By Ross Williams · 6 min read
    • An Arup employee transferred $25m to criminals after a video call with deepfaked colleagues in what Hong Kong police called a sophisticated fraud operation
    • Creating a convincing deepfake attack now costs as little as $500, with more sophisticated operations targeting executives costing between $5,000 and $10,000
    • Deepfake incidents have increased roughly 3,000% over two years according to LastPass data, democratising a threat once limited to well-resourced state actors
    • Proper detection systems require military-grade biometric verification analysing blood flow patterns and micro-expressions—technology most SMEs cannot afford or implement

    A finance employee at Arup's Hong Kong office joined what appeared to be a routine video call last year with the company's chief financial officer and several colleagues from London. Nothing seemed amiss—multiple people he recognised were on screen, the conversation followed internal protocols, and he authorised the transfer of $25m to five bank accounts as instructed. Every person on that call was fake.

    The Arup case, first reported by Hong Kong police in 2024, represents something more significant than a spectacular fraud. It marks the point at which video calls—the verification method businesses adopted precisely because it seemed more secure than email or voice—became fundamentally untrustworthy. If a multinational engineering firm with sophisticated security protocols can be duped by deepfaked colleagues on a video conference, the implications for smaller businesses without dedicated cybersecurity teams are stark.


    The economics have shifted

    Creating a convincing deepfake attack now costs as little as $500, according to Matt Lovell, chief executive of UK-based cybersecurity firm CloudGuard. That figure drops further when criminals use free tools widely available online. A more sophisticated operation targeting specific executives might cost between $5,000 and $10,000—either way, the technology takes minutes to deploy.

    Compare this to the investment required to defend against such attacks. Proper detection systems analyse subtle physiological markers—blood flow patterns beneath eyelids and in cheeks, the precise mechanics of how someone turns their head, micro-expressions that AI struggles to replicate perfectly. This isn't software you download for a few hundred pounds. Most small and medium-sized businesses lack even the in-house expertise to assess their vulnerability, let alone implement military-grade biometric verification.
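
    What would such analysis look like under the hood? Commercial detectors are proprietary, but the idea behind one physiological check, remote photoplethysmography, fits in a few lines of Python. The sketch below is illustrative only: it assumes the face-region colour signal has already been extracted from video, runs on synthetic data, and bears no resemblance to any vendor's product.

```python
# A minimal sketch of one physiological check the article describes:
# live skin changes colour faintly with each heartbeat (remote
# photoplethysmography), and many synthetic faces lack that pulse.
# Purely illustrative; real detectors analyse many more signals.
import numpy as np

def pulse_score(green_means: np.ndarray, fps: float = 30.0) -> float:
    """Fraction of (non-DC) signal energy inside the human heart-rate
    band of 0.7-4.0 Hz, i.e. 42-240 beats per minute. A strong in-band
    peak is weak evidence the face on camera is a live person."""
    signal = green_means - green_means.mean()           # remove DC offset
    power = np.abs(np.fft.rfft(signal)) ** 2            # power spectrum
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)   # Hz per FFT bin
    band = (freqs >= 0.7) & (freqs <= 4.0)              # heart-rate band
    total = power[1:].sum()                             # skip the DC bin
    return float(power[band].sum() / total) if total > 0 else 0.0

# Demo: a "live" signal with a 72 bpm pulse buried in noise, versus a
# "synthetic" signal of pure noise. green_means would normally be the
# average green-channel value of the face region in each video frame.
rng = np.random.default_rng(0)
t = np.arange(10 * 30) / 30.0                           # 10 s at 30 fps
live = 0.05 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0.0, 0.05, t.size)
fake = rng.normal(0.0, 0.05, t.size)
print(f"live face:      {pulse_score(live):.2f}")       # noticeably higher
print(f"synthetic face: {pulse_score(fake):.2f}")       # near the noise baseline
```

    The obvious caveat, returned to later in this piece, is that any published marker becomes a training target for the next generation of generators.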

    The asymmetry is brutal. A criminal with basic technical skills and pocket change can target dozens of companies. Each business must invest thousands in defence systems and ongoing monitoring. The mathematics favour the attackers.
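
    A back-of-envelope calculation makes the asymmetry concrete. Only the $500 figure below comes from this article; every other number is an assumption chosen purely for illustration.

```python
# Back-of-envelope arithmetic behind "the mathematics favour the
# attackers". The $500 attack cost comes from this article; the number
# of targets, success rate, average theft, and annual defence cost are
# illustrative assumptions, not reported figures.
attack_cost = 500            # $ per campaign (article's low-end figure)
targets = 50                 # firms hit by one campaign (assumption)
success_rate = 0.02          # one victim in fifty (assumption)
avg_theft = 250_000          # $ per successful fraud (assumption)
defence_cost = 20_000        # $ per firm per year (assumption)

expected_haul = targets * success_rate * avg_theft
print(f"attacker spends ${attack_cost:,}, expects ${expected_haul:,.0f}")
print(f"those {targets} firms spend ${targets * defence_cost:,} a year defending")
```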

    Data from LastPass suggests deepfake incidents have increased roughly 3,000% over two years, though the cybersecurity industry's incentive to emphasise threats means such figures warrant scrutiny. What's harder to dispute is the breadth of targets. Karim Toubba, LastPass's chief executive, was himself deepfaked in 2024, when an employee received a WhatsApp message with AI-generated audio impersonating him and requesting urgent assistance. That attempt failed: the employee recognised that WhatsApp wasn't an authorised company channel and that the message had arrived on a personal rather than a corporate device.
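
    That instinct can be codified. The sketch below writes the employee's reasoning down as a simple policy check; the channel names and rules are hypothetical illustrations, not LastPass's actual controls.

```python
# The instinct that saved LastPass, written down as policy. Channel
# names and rules are hypothetical, for illustration only.
from dataclasses import dataclass

APPROVED_CHANNELS = {"corporate_email", "teams", "slack"}  # hypothetical allow-list

@dataclass
class Message:
    channel: str          # e.g. "whatsapp", "corporate_email"
    device_managed: bool  # did it arrive on a corporate-managed device?
    claims_urgency: bool  # manufactured urgency is a classic social-engineering cue

def escalation_reasons(msg: Message) -> list[str]:
    """Reasons a request must be re-verified through a known-good channel."""
    reasons = []
    if msg.channel not in APPROVED_CHANNELS:
        reasons.append(f"unapproved channel: {msg.channel}")
    if not msg.device_managed:
        reasons.append("arrived on a personal, unmanaged device")
    if msg.claims_urgency:
        reasons.append("urgency pressure: slow down and verify out of band")
    return reasons

# The Toubba impersonation attempt trips every rule:
print(escalation_reasons(Message("whatsapp", device_managed=False, claims_urgency=True)))
```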


    Sundararaman Ramamurthy, chief executive of the Bombay Stock Exchange, discovered his deepfaked likeness circulating on Indian social media platforms at the start of this year, offering stock tips to investors. The exchange lodged complaints and worked to remove the videos, but Ramamurthy acknowledges a troubling reality: "We don't know how many people have seen this video. We can't really judge if it's had a big impact or not."

    That uncertainty is part of the weapon's design. Unlike traditional fraud, which leaves clear trails and identifiable victims, deepfake attacks create ambient doubt. Every video call now carries a whisper of suspicion. Every executive communication requires additional verification steps that slow business and corrode trust.

    The race nobody's winning

    Whether defence technology can catch up to attack capabilities depends on which cybersecurity executive you ask. Toubba at LastPass strikes an optimistic note, pointing to significant investment flowing into detection technologies that should "accelerate the pace with which organisations will develop technologies to detect and ultimately block these things."

    Lovell at CloudGuard offers a grimmer assessment. "Attack vectors are accelerating faster than we can accelerate defence automation and protection," he says. "Are people moving fast enough to respond to the speed the threat is developing? Absolutely not."

    The divergence in views reflects different business positions, but also a genuine uncertainty about whether defensive measures can ever achieve parity. Detection systems that analyse blood flow patterns are sophisticated, certainly—whether they'll remain effective once criminals train AI models specifically to spoof those biological markers is an open question that the cybersecurity industry prefers not to dwell on.

    What's interesting here is how quickly the threat landscape democratised. Deepfake technology began as the province of well-resourced state actors targeting specific high-value individuals. That phase lasted about five minutes. The same AI tools that power legitimate applications are now accessible to opportunistic fraudsters with no particular technical expertise. The risk surface hasn't just expanded—it's exploded.

    The skills problem

    Tech researcher Stephanie Hare, who co-presents the BBC's AI Decoded programme, says the personnel gap is as critical as the technology deficit. "We have a shortage of cybersecurity professionals worldwide," she notes. Businesses can purchase detection software, but someone needs to implement it, monitor it, update it as threats evolve, and train employees to recognise when verification protocols should override apparent urgency.


    Larger corporations can afford dedicated chief information security officers and teams. The warehouse distributor in Leeds or the law firm in Bristol likely cannot. Yet both are equally vulnerable to a deepfaked CEO voice message instructing an urgent bank transfer to a "new supplier account."

    Companies are beginning to treat cybersecurity as a board-level concern rather than an IT department issue, particularly once executives themselves become impersonation targets. That's progress of a sort. But awareness doesn't solve the underlying problem: defence costs more, requires more expertise, and offers no guarantee of effectiveness against a threat that reinvents itself every few months.

    The Arup employee who transferred $25m believed he was following proper procedures. He verified identities through a video call with multiple colleagues—he did exactly what businesses tell employees to do when large sums are involved. The attack succeeded not through negligence but through the erosion of what constitutes reliable verification. As that erosion accelerates, businesses face an uncomfortable choice between paranoid friction at every decision point or accepting fraud as a cost of doing business. Neither option is sustainable.

    • Video verification—the gold standard for authorising sensitive transactions—is no longer trustworthy. Businesses must implement multi-layered authentication protocols that assume any single verification method can be compromised (see the sketch after this list).
    • The cost asymmetry between attack and defence means SMEs are disproportionately vulnerable. Without access to sophisticated detection systems or dedicated security personnel, smaller firms face existential risk from threats that cost criminals hundreds to deploy.
    • Watch for regulatory intervention. As deepfake fraud scales beyond what individual businesses can manage, expect governments to mandate minimum security standards—potentially creating compliance burdens that further disadvantage smaller operators whilst criminals simply move to softer targets.
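
    What the first takeaway's "multi-layered authentication" might look like in practice: a minimal sketch in which a large transfer is released only when several independent checks agree, and never on a video call alone. The threshold, check names, and three-check rule are illustrative assumptions, not any firm's real policy.

```python
# What "assume any single verification method can be compromised" might
# look like as a transfer-release rule: several independent checks must
# pass, and a video call alone is never enough. All names and thresholds
# here are hypothetical.
LARGE_TRANSFER_THRESHOLD = 10_000   # currency units; set by policy

INDEPENDENT_CHECKS = {
    "video_call",                # spoofable, as the Arup case showed
    "callback_to_known_number",  # outbound call to a pre-registered number
    "internal_ticket",           # the request exists in the firm's own system
    "second_approver",           # a person who never saw the original request
}

def may_release(amount: float, passed: set[str]) -> bool:
    """Release a large transfer only if at least three recognised checks
    passed, at least two of which are not the video call."""
    if amount < LARGE_TRANSFER_THRESHOLD:
        return True
    passed = passed & INDEPENDENT_CHECKS    # ignore unrecognised checks
    return len(passed) >= 3 and len(passed - {"video_call"}) >= 2

# A convincing video call on its own is rejected; corroborated requests pass.
print(may_release(25_000_000, {"video_call"}))                        # False
print(may_release(25_000_000, {"video_call", "callback_to_known_number",
                               "internal_ticket"}))                   # True
```
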
    Ross Williams

    Co-Founder

    Multi-award winning serial entrepreneur and founder/CEO of Venntro Media Group, the company behind White Label Dating. Founded his first agency while at university in 1997. Awards include Ernst & Young Entrepreneur of the Year (2013) and IoD Young Director of the Year (2014). Co-founder of Business Fortitude.
