Fake Buffett, Real Reputation Risk: How Deepfakes Are Reshaping the Cyber Landscape

How deepfakes are threatening businesses’ reputations and cyber resilience.
By: Autumn Demberger | April 7, 2026

In November 2025, Warren Buffett, business mogul, billionaire and CEO of Berkshire Hathaway, appeared widely in TikTok videos promoting specific stocks, cryptocurrency investments and similar schemes.

Except it wasn’t him.

It was a deepfake, an AI-generated video using Buffett’s likeness to lure users into crypto giveaways and investment schemes. The bad actors behind the scam wanted to exploit Buffett’s name for their own gain, at the expense of his reputation.

So realistic were these deepfakes to the untrained eye that one channel referenced in the videos amassed more than 17,000 subscribers.

Of course, Buffett has the resources and backing to address something like this in real time and have it corrected, which is exactly what his company did. But not every business has multibillion-dollar backing to right the wrongs of a deepfake reputational attack.

“More and more cybercriminals are realizing they can cause this kind of harm for organizations much smaller and much less equipped to handle it than Berkshire Hathaway. And so, we want to offer active protection to our client base, because we think we do see it becoming more prevalent every day,” said Michael Phillips, head of cyber portfolio underwriting, Coalition.

Deepfake Fallout: The Harm That Comes from Misinformation

AI-generated videos spreading misinformation serve two main purposes: to swindle the end user, most often for monetary gain, or to harm the reputation of the subject.

“Deepfake technology allows a threat actor to take actual audio or video and manipulate it to create synthetic media,” said John Farley, cyber liability practice lead, Gallagher.

“When they’re able to do that, they’re able to make someone appear to do or say something that they never did. Threat actors can use this to create a false narrative about an individual or an organization, and that can certainly lead to reputational harm.”

But it’s more than that, Farley said.

“It can impact virtually every aspect of society,” he stressed. As an example, he pointed to the judicial system. “We rely on video and audio to determine someone’s guilt or innocence, and now it almost becomes flipped. Are we going to have to prove a negative in that scenario? You could look at how a deepfake video could impact a publicly traded organization in terms of their stock price, at least in the short term.

“There are massive implications for not only an organization, but for society at large,” Farley said.

The scale of the problem is growing rapidly. According to a study released by Deepstrike, a penetration testing firm that probes systems for vulnerabilities, “The volume of deepfake content shared online is exploding. After an estimated 500,000 deepfakes were shared across social media platforms in 2023, that number [wa]s projected to skyrocket to 8 million by 2025.

“This is consistent with a growth rate where the volume of deepfake videos increases by 900% annually. This isn’t linear growth; it’s a viral proliferation that outpaces nearly every other cyber threat.”

Accessibility and Believability Rising

Businesses can face backlash and lose the trust of their customer base. The individual at the center of the deepfake may suffer reputational damage, and the financial fallout can bring operations to a halt.

Plus, it’s becoming ever easier for cybercriminals to trick people into believing their lies.

“More cybercriminals are turning to deepfake technology because organizations are more prepared against the traditional types of cyber threats,” said Phillips, referencing phishing attempts or ransomware events.

“If the technology you have at your company is very secure, then it’s much harder for a cybercriminal to have that way in, so they need a different way to cause harm,” he said.

“Deepfakes enable cybercriminals to cause different types of harm than they historically have, and so what we’re seeing is that more and more cybercriminals are motivated to cause harm that’s not directly about stealing money but is also about stealing trust and really harming enterprise value.”

In essence, deepfakes are being used as social smear campaigns, often with the hope of building distrust between an organization and its consumer base.

“The common thread is that deepfakes reduce friction for the attacker. They shortcut trust. Instead of convincing you with a story, the attacker shows you ‘proof,’ and people act faster. That is why verification controls and escalation protocols matter as much as technical detection,” added Ryan Kratz, head of cyber, North America, MSIG USA.

We might assume detecting an AI-manipulated image or video should be easy; the unfortunate truth is that the technology is keeping pace with detection efforts. Coupled with that, attackers are well aware of the staying power of the internet: once a video is online, it’s unlikely to go away completely.

“The quality [of technology] is improving and the barrier to entry is dropping. What is changing is not only realism, but also volume and targeting. It is becoming easier to generate content that matches a specific person, context, and moment,” said Kratz.

Another factor fueling the rise of deepfakes is access.

“[Cybercriminals] no longer have to know how to code in Python and PowerShell,” said Kyle Lutterman, VP, cyber product leader & cybersecurity risk engineering, Arch Insurance. In the traditional cybercrime world of phishing emails and vishing calls, bad actors needed at least a basic understanding of how to code malware.

But now, anyone with internet access and the ability to type a good prompt can create AI-generated images.

The Cyber Overlap

While an individual’s reputation can hang in the balance when a deepfake takes aim, experts agreed the threat bleeds into the cybersecurity space, particularly because of the means and technology used to create these images.

“From a risk standpoint, the key is that deepfakes are not just ‘misinformation.’ They are often a delivery mechanism for fraud, extortion, and social engineering, and that is where cyber and crime overlap becomes very real,” said Kratz.

Cyber insurance provides crisis management services for ransomware, phishing and other “traditional” attacks companies can face. But cybercrime has now evolved to include deepfake technology, and many policyholders want to make sure their cyber insurance responds accordingly.

“First and foremost, each cyber insurance policy has its own terms and conditions, so you need to be mindful of any coverage implications around deepfake incidents,” said Farley.

But, he cautioned, there is nuance in each policy. For example, “In the scenario where your CEO is imitated via deepfake technology and there is no unauthorized access to a network, you may or may not have coverage, because some policies may require an unauthorized access to your network to trigger coverage.”

Threat actors generally use deepfake technology to carry out social engineering crimes, which are typically covered under a cyber policy. Additionally, “We have had … developments in the market where we’ve had carriers come out and affirmatively state that they are covering these deepfake attacks through endorsements just to clarify coverage,” Farley added.

But policyholders should conduct a thorough review of policy language around deepfakes to better protect against this risk.

Protecting Against the Risk of Deepfakes

Case in point: The team at Coalition understood the cyber implications of AI-generated deepfakes clearly enough to create a coverage endorsement designed to protect against exactly that.

“We’ve started seeing more macro or high-level targeted campaigns in which C-suite members, board members, other senior executives at enterprises around the world are being targeted by deepfakes that harm their credibility, their brand value, that disrupts operations and trust within the organization and between an organization and its clients,” Phillips explained. “That’s what led us to develop our deepfake response endorsement, something that can respond to that kind of pressure.”

Reputational risk insurance and D&O coverage are two other prominent coverages that can bolster resilience against deepfake harm.

“From an insurance perspective, this is a good reminder for buyers to stress-test how their cyber, crime, D&O, and media-related coverages work together, because deepfake events do not respect policy silos,” said Kratz.

Then there are the proactive measures businesses should take, particularly when it comes to spotting and mitigating deepfake risk.

“You can’t really control if someone outside your organization has a philosophical difference in your business’s operations and chooses to do a smear campaign,” said Lutterman. “That’s completely out of your hands. But in terms of deepfakes being used to leverage network access, or potentially a funds transfer, training employees is vital.”

As generative AI tools become cheaper and more accessible, experts said deepfake incidents will likely shift from headline-grabbing scams to routine elements of cybercrime. For organizations, that means reputation management, employee training and insurance coverage will need to evolve just as quickly as the technology itself.

“Deepfakes will get better, there’s no question. We need to make sure organizational controls and insurance architecture are keeping pace,” Kratz said. &

Autumn Demberger is a freelance writer and can be reached at [email protected].
