Early this year, a seemingly innocuous video surfaced across Indian social media platforms featuring Sundararaman Ramamurthy, chief executive of the Bombay Stock Exchange (BSE). In the clip, Ramamurthy appeared to offer investors advice on which stocks to buy, promising substantial returns to those who followed his recommendations. The video was a fabrication: a deepfake, created with artificial intelligence, in which Ramamurthy’s likeness and voice were digitally manipulated to deliver a false message.
"It was in the public domain where many people could see it, and get cheated into buying or selling stocks, as if I’d recommended them," Ramamurthy explained in an interview. The danger of such an attack is clear: a fabricated endorsement from a figure as prominent as the BSE chief executive could push people into ill-advised investment decisions and significant financial losses.
Ramamurthy elaborated on the immediate actions taken by the BSE in response to such incidents. "When we see an incident like this, we immediately lodge a complaint. We go to Instagram and other places where it’s posted to get the video taken down. And we regularly write to the market warning people not to believe in fake videos." This proactive approach underscores the ongoing battle against the proliferation of misinformation through AI-generated content.
The sheer reach and virality of social media make it incredibly challenging to ascertain the exact impact of these deepfake attacks. "We don’t know how many people have seen this video, it’s really difficult to find out, so we can’t really judge if it’s had a big impact or not," Ramamurthy admitted, expressing a fervent hope that the damage would be minimal. "What we want is for it to have had no impact at all. No one should incur a loss because they believe something that is untrue."

Ramamurthy and the Bombay Stock Exchange are far from isolated victims in this escalating digital threat landscape. Karim Toubba, the chief executive of US-based password security company LastPass, shared alarming statistics that paint a grim picture of the rise of deepfakes. "The latest data shows that over the past two years or so, we’ve seen an increase of almost 3,000% in the number of deepfakes being utilized," Toubba stated, underscoring the exponential growth of this malicious technology.
Toubba himself experienced a deepfake attack in 2024, a chilling testament to the personal and professional risks involved. "One of our employees in Europe received an audio message and a text message from someone alleging to be me, urgently requesting some help from him," he recounted. Fortunately, the employee's vigilance served as a critical safeguard. "The message was on WhatsApp, which for us is not a sanctioned communication channel," Toubba noted. "Also, we have corporate sanctioned mobile devices and this came in via his personal phone. So that made him think this was potentially a little murky, a little fishy." That awareness, and adherence to established security protocols, averted what could have been a serious breach: the employee promptly reported the incident to LastPass’s cyber-security team, and no harm was done.
While LastPass managed to thwart the attack, other organizations have not been as fortunate. The British engineering firm Arup fell victim to one of the most sophisticated deepfake attacks recorded in the corporate sphere in 2024. According to reports from the Hong Kong police, an Arup employee working in the region received a communication, purportedly from the firm’s chief financial officer (CFO) based in London, detailing a "confidential transaction."
The situation escalated when the employee joined a video call with the purported CFO and other colleagues. Believing the instructions came from a legitimate executive, the employee transferred a staggering $25 million (approximately £18.5 million) to five different bank accounts. Only later was it discovered that the individuals on the video call, including the CFO, had all been expertly crafted deepfakes.
Stephanie Hare, a prominent tech researcher and co-presenter of the BBC’s "AI Decoded" TV program, emphasized the alarming ease with which such scams can now be executed. "You would never want to simply jump on a video call with someone and transfer $25m," Hare remarked, highlighting the inherent risk in unverified digital interactions. "Companies are having to take extra steps to secure these types of communications. That’s the brave new world we’re in now."

The relentless pace of AI development means that deepfake technology is becoming increasingly sophisticated and indistinguishable from reality. Matt Lovell, co-founder and CEO of UK-based cyber-security company CloudGuard, articulated the frightening accessibility of these tools. "Deepfakes are becoming very, very easy to do," Lovell stated. "To generate video and audio quality of extremely accurate specifications – it takes minutes."
Furthermore, the cost associated with creating convincing deepfakes has plummeted, making them accessible to a wider range of malicious actors. "For, say, a simple, single individual-led attack, you’re looking at $500 to $1,000 with the use of largely free tools," Lovell explained. "For a more sophisticated attack, you’re looking at between $5,000 and $10,000." This affordability democratizes the ability to perpetrate large-scale fraud and deception.
In parallel with the advancement of deepfake creation, significant strides are being made in developing countermeasures. Companies are now deploying advanced verification software capable of analyzing subtle physiological cues that are difficult for AI to replicate convincingly. These tools can assess a person’s facial expressions, head movements, and even the minute changes in blood flow beneath the skin. "In your cheeks or just underneath your eyelids, we’ll be looking for changes in blood flow when a person is talking or presenting," Lovell elaborated. "That’s really where we can tease out whether it’s AI-generated or it’s real." By detecting these biological markers, security systems can differentiate between authentic human presence and AI-generated simulations.
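Lovell did not detail CloudGuard’s own methods, but the underlying idea behind this kind of blood-flow check can be sketched in a few lines of Python. The example below is a minimal, illustrative demonstration rather than any vendor’s actual product: it tracks the average green-channel brightness of a cheek region across successive video frames and looks for a pulse-like rhythm. The file name, face region, frame rate and frequency band are assumptions chosen purely for demonstration.

```python
# Illustrative sketch only: a crude remote-photoplethysmography (rPPG) style check,
# not CloudGuard's or any other vendor's actual detection system.
import cv2
import numpy as np

def cheek_green_signal(video_path: str, max_frames: int = 300) -> np.ndarray:
    """Average green-channel intensity over a cheek region, frame by frame.
    Real skin shows a small periodic variation driven by the pulse; many
    AI-generated faces do not reproduce it."""
    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    samples = []
    while len(samples) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]
        # Rough cheek region: lower half of the face, inset from the edges.
        roi = frame[y + h // 2 : y + h, x + w // 4 : x + 3 * w // 4]
        samples.append(roi[:, :, 1].mean())  # green channel carries most of the pulse signal
    cap.release()
    signal = np.array(samples)
    return signal - signal.mean()  # remove the constant offset, keep the variation

if __name__ == "__main__":
    sig = cheek_green_signal("call_recording.mp4")  # hypothetical input file
    # A resting pulse typically sits around 0.7-3 Hz; energy in that band is one
    # (very rough) indicator that live blood flow is present in the footage.
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1 / 30.0)  # assumes roughly 30 fps video
    band = (freqs > 0.7) & (freqs < 3.0)
    print("pulse-band energy ratio:", spectrum[band].sum() / (spectrum.sum() + 1e-9))
```

Commercial systems combine many such signals, alongside checks on facial movement and lighting consistency, but the principle is the same: look for physiological detail that current generative models struggle to fake.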
However, the landscape remains a dynamic battleground, with organizations engaged in a continuous arms race against evolving cyber threats. "It’s a race, between who can deploy a technology and who can thwart that technology as quickly as possible," observed LastPass’s Toubba. "Luckily, there seems to be quite a bit of money flowing into this, which will only accelerate the pace with which organisations will develop technologies to detect and ultimately block these things." The influx of investment into cybersecurity research and development offers a glimmer of hope in this escalating conflict.
Despite these advancements in detection, some experts remain cautious. CloudGuard’s Lovell offered a more somber assessment of the current situation. "Attack vectors are accelerating faster than we can accelerate defence automation and protection," he stated grimly. "Are people moving fast enough to respond to the speed the threat is developing? Absolutely not." His point underlines the urgency of building and deploying defences as quickly as the threats themselves evolve.

Stephanie Hare pointed out the critical shortage of skilled professionals capable of combating these sophisticated cyber threats. "We have a shortage of cybersecurity professionals worldwide. We need more people to get into this," she urged, emphasizing the growing demand for expertise in this field.
Hare also noted a palpable shift in corporate awareness of the severity of deepfake risks. "In the past it was not considered a priority to secure your operations in quite the same way as it is now," she observed. "Now that we have these types of risks, with the leaders at companies, with CEOs, being deepfaked, I think company executives will be spending more time with their chief information security officers and teams than before. And that is a good thing." With attacks now reaching high-profile executives, businesses are treating cybersecurity as a strategic priority and drawing leadership closer to their security teams, a shift Hare sees as both positive and necessary.






