AI Deepfakes: Is Security Awareness Training Keeping Up with the Growing Threat?

August 5, 2024 | AI

By: Sarah Varnell

The threat of deepfakes being used in social engineering attacks against enterprise organizations has been a growing concern for several years.

The term “deepfake” refers to videos, audio recordings, and other pieces of media that are meant to appear real but are actually generated by artificial intelligence (AI) tools. As AI technology evolves, it’s becoming harder to detect deepfakes—and enterprises are discovering this firsthand.

Deepfakes: A Brief History

In 2019, Trend Micro highlighted what was (at the time) an “unusual” case of CEO fraud, in which fraudsters used deepfake audio to call a company’s chief executive while pretending to be the CEO of the parent company. They demanded an urgent wire transfer of close to $250,000, which the executive made.

More recently, a multinational company in Hong Kong was scammed out of $25 million this past February after an employee in the finance department attended a video conference call with several executives and other employees, all but one of whom were actually deepfake recreations. By staying in contact with the employee after the call, the fraudsters were able to initiate as many as 15 transactions. The fraud was uncovered only after the employee spoke with an actual member of management.

Deepfakes are rarely used alone; they typically supplement the existing “attacker toolkit” of more familiar phishing and vishing techniques. It’s no coincidence that as the use of AI technology increases, both phishing attempts and successful phishing attacks are on the rise.

The Big Picture

So is employee cybersecurity training evolving to keep up with this threat?

Yes, but only if enterprises are willing to look beyond “check the box” services. Many organizations today view security awareness training as something they must provide simply to achieve and maintain certifications or licensing, and they aren’t willing to invest the resources needed to pay for and then enforce the training. What’s more, many training providers offer a condensed, and often less costly, version of their courses aimed squarely at organizations taking this low-effort approach to security training.

Enterprise security leaders need to ask what their current training actually covers. If social engineering attacks are not sufficiently addressed, they should determine how to improve and build on that training to ensure all employees know how to identify and report such attacks.

Important questions for executives to consider include:

  1. Do my employees know how to verify communications? Do they know what to look for to see if there is cause for suspicion? 
  2. Do my employees know how and where to report suspicious communications? 
  3. Do my employees have sufficient access to accurate contact information? If they are told to call Bill at the number provided in the email, will they use that number, or will they look up Bill’s number in the directory? 
  4. If there is an incident due to an employee falling victim to a phishing, vishing, or deepfake attack, do my employees know how to respond?

Beyond security training, enterprises must also implement appropriate and sufficient incident response training. Are you testing your plan annually? Is it the same exercise each year, or do you consider new risk areas? Conducting a tabletop exercise and considering a scenario like an attempted or successful deepfake attack is an excellent way to assess your organization’s risk levels and readiness.

With these tactics in place, your team will be better prepared to prevent and respond to threats posed by AI and other new and developing technologies.

Is it time to update your organization’s security training program? Our experts can help you map out a plan and put you on the road to long-term cyber resilience. Contact us today to find out how.

About the Author

Sarah Varnell
Senior Consultant, Attest Services

As a senior consultant in BARR Advisory’s attest services practice, Sarah plans and executes IT audits and risk assessments for clients in the healthcare industry. She also has experience directing cybersecurity governance programs, building privacy programs, and developing security policies and procedures. Sarah holds the CISA, HITRUST CCSFP, and HITRUST CHQP certifications and is an ISO Lead Auditor.
