Emerging deepfake technology could have security implications for businesses

Authors: Kathy Jambor and Randy Pargman

Tom Cruise made headlines recently, but not for a new Mission Impossible movie or jumping on Oprah’s couch. His likeness was used in a viral TikTok deepfake video that so closely resembled Cruise that many could not tell the difference. (It even fooled deepfake detection software.)

While this may seem like a harmless parlor trick or gimmick to get likes and views on social media, deepfakes could have more criminal uses than people realize.

Deepfake videos are videos in which the likeness of one person is replaced with that of another using a type of artificial intelligence called "deep learning," allowing the creator to manipulate and deceive viewers with the fabricated likeness. Using this technology, criminals could impersonate a trusted leader and disseminate misinformation; in fact, this tactic has already been used by political parties in various countries to create unrest. Deepfake audio is another emerging area of concern: a voice is modified with similar technology to sound like another person.
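For readers curious about the mechanics, the core idea behind most face-swap deepfakes is a shared encoder that learns a general representation of faces, paired with a separate decoder trained for each person; swapping decoders at inference time redraws one person's expressions with another person's face. The sketch below is a minimal, simplified illustration of that idea, assuming PyTorch is available; the class names, layer sizes, and 64x64 input are illustrative choices, and real tools add face detection, alignment, much larger networks, and frame-by-frame blending.

```python
import torch
import torch.nn as nn

# Minimal sketch of the shared-encoder / per-person-decoder idea behind
# face-swap deepfakes. Layer sizes are illustrative, not production values.

class Encoder(nn.Module):
    """Maps a 64x64 RGB face crop to a compact latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a 64x64 face crop from the latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

# One shared encoder, one decoder per identity.
encoder = Encoder()
decoder_a = Decoder()  # trained to reconstruct person A's face
decoder_b = Decoder()  # trained to reconstruct person B's face

# After training, the "swap" is simply encoding a frame of person A
# and decoding it with person B's decoder.
frame_of_a = torch.rand(1, 3, 64, 64)
swapped = decoder_b(encoder(frame_of_a))
```

The shared encoder is what makes the transfer possible: it captures expression and pose in a form that either decoder can render with its own identity's features.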

Several apps that allow users to create deepfake content are easy to find. Creating a convincing deepfake, however, still takes a great deal of skill and attention to detail. The creator of the Tom Cruise deepfakes spent months training the AI on images of the actor and then edited the videos frame by frame to make them look authentic. The average person could not produce anything close to the Cruise deepfake, or to this video from a few years ago in which comedian Jordan Peele voices former President Barack Obama: https://youtu.be/cQ54GDm1eL0

Security implications around deepfakes

Most deepfakes created today can still be detected, either by the human eye or by detection software. As deepfake apps and the underlying technology improve, however, businesses should take note of the security implications.

One of the most financially damaging types of cyberattack against businesses in the US is wire transfer fraud, usually the result of a business email compromise. The FBI reported that in 2019, reported losses from business email compromise totaled $1.7 billion, roughly half of all losses from cybercrime that year. The scam typically works like this: cybercriminals break into an email account belonging to the CEO, CFO, or another executive and send a message to an employee responsible for making external payments. Posing as the executive, the criminal instructs the finance employee to wire a large amount of money to an account that supposedly belongs to a supplier or partner business but is actually controlled by the criminals. There are many variations of this scam, such as criminals using a compromised email account belonging to an actual supplier or business partner, or monitoring an ongoing email thread between two people and quietly changing the details of messages that contain real wiring instructions. Those variations are very difficult to detect.

Consider how audio could be added to the mix to make this request seem much more legitimate, and you start to see the implications of deepfakes to businesses.

How to avoid being scammed by a deepfake

To protect against these scams, it's important for organizations to enact clear business policies around wire transfers. Defining how transfers must be verified is an important first step. Require employees to make outgoing calls to known phone numbers to verify new or changed wire instructions before initiating large transfers, even if the incoming email or phone call requesting the change seems convincing. Include a specific dollar threshold that triggers verification, for instance anything over $50,000 (or whatever amount makes sense for your business).
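To make the policy concrete, here is a minimal, hypothetical sketch of how such a rule could be encoded in an internal payments tool, written in Python. The threshold value, the WireTransferRequest fields, and the function names are assumptions for illustration, not references to any real product; the important property is that large transfers and any change of beneficiary account are held until someone completes an outgoing callback to a phone number already on file.

```python
from dataclasses import dataclass

# Hypothetical example of a threshold-based verification rule for outgoing
# wire transfers. Names, fields, and the threshold are illustrative only.

VERIFICATION_THRESHOLD_USD = 50_000  # pick a value that fits your business

@dataclass
class WireTransferRequest:
    amount_usd: float
    beneficiary_account: str
    account_on_file: str      # account previously verified for this payee
    callback_verified: bool   # confirmed via a known phone number on file

def requires_callback(req: WireTransferRequest) -> bool:
    """Large transfers and any change of beneficiary account need a callback."""
    over_threshold = req.amount_usd >= VERIFICATION_THRESHOLD_USD
    account_changed = req.beneficiary_account != req.account_on_file
    return over_threshold or account_changed

def approve(req: WireTransferRequest) -> bool:
    """Approve only when no callback is needed or the callback already happened."""
    return not requires_callback(req) or req.callback_verified

# Example: a $75,000 transfer to a new account is held until someone calls
# the payee back on a number that was already on file.
request = WireTransferRequest(
    amount_usd=75_000,
    beneficiary_account="NEW-ACCOUNT-123",
    account_on_file="KNOWN-ACCOUNT-987",
    callback_verified=False,
)
print(approve(request))  # False -- blocked pending out-of-band verification
```

Whatever form the control takes, the callback must go out to a number obtained independently of the request itself; otherwise the attacker can simply answer their own verification call.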

Many companies have implemented these policies after losing money to such a scam or learning from another company that was victimized. But what happens if the voice on the other end of the phone is a deepfake? This would require the criminal to initiate the phone call, but it is easy to imagine a scenario in which a finance department employee receives a call from a criminal using spoofed caller ID and a deepfake voice that sounds like an executive, demanding that a wire transfer be sent urgently and then providing the details by email. That kind of scam could bypass the protections of business policies that treat any phone call as sufficient verification.

Trust your gut instinct: if something seems suspicious, always double-check with a trusted source such as your manager. If you aren't usually asked to complete the kind of task being requested, that could be a sign of a scam.

Final thoughts on deepfakes

Even the creator of the Tom Cruise deepfake video thinks that deepfake technology should be regulated to prevent criminals from using it to scam people. This article isn't intended to make you fear or doubt every video you see, but awareness that the technology is out there, and advancing, is good knowledge for businesses to have moving forward.

Sources:

https://fortune.com/2021/03/05/tom-cruise-deepfake-creator-technology-should-be-regulated/

https://www.continuitycentral.com/index.php/news/erm-news/5559-deepfakes-are-a-threat-that-businesses-need-to-take-seriously