Opinions expressed by londonbusinessblog.com contributors are their own.
Technological advancements drive business and society forward. However, progress also brings new risks that are difficult to manage. Artificial intelligence (AI) is at the forefront of emerging technology and is finding its way into more applications than ever.
From automating administrative tasks to identifying hidden business drivers, AI has enormous business potential. However, malicious AI use can harm businesses and lead to an extreme loss of credibility.
The FBI recently flagged a rising trend driven by the adoption of remote work: malicious actors using deepfakes to impersonate interviewees for jobs at US companies. These actors stole the identities of US citizens in order to gain access to corporate systems. The implications for corporate espionage and security are immense.
How can companies combat the increasing use of deepfakes, even as the technology behind them grows more sophisticated? Here are a few ways to mitigate the security risks.
Related: A Successful Cybersecurity Company Isn’t About Fancy Technology
Go back to basics

Going back to basics often works best when fighting advanced technology. Deepfakes are created by stealing a person's identifiers, such as photos and ID information, and using an AI engine to generate a digital likeness. Malicious actors often use existing video, audio and images to mimic their victim's mannerisms and speech.
A recent case highlighted the extremes to which malicious actors will take this technology. Several European political leaders believed they were in conversation with the mayor of Kiev, Vitali Klitschko, only to learn that they had been interacting with a deepfake.
The Berlin mayor’s office eventually discovered the trick after a phone call to the Ukrainian embassy revealed that Klitschko was engaged elsewhere. Companies would do well to study the lessons of this incident: identity verification and seemingly simple checks can reveal the use of a deepfake.
Companies face deepfake risks when interviewing candidates for external job openings. Rolling back remote-working arrangements isn’t practical if companies want to hire top talent these days. However, by asking candidates to show some form of official identification, recording video interviews and requiring new employees to visit company premises at least once soon after hiring, companies can limit the risk of hiring a deepfake actor.
While these methods do not prevent deepfake attempts outright, they reduce the chances of a malicious actor gaining access to trade secrets when deployed together. Just as two-factor authentication blocks malicious access to systems, these analog methods create roadblocks for deepfake use.
Other analog methods include verifying an applicant’s credentials, including their photo and identity. For example, send the applicant’s photo to their listed reference and ask them to confirm that they know the person, and verify the reference’s own credentials by contacting them through official or corporate domains.
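Taken together, these checks work like multi-factor authentication for identity. As a rough illustration (the check names and the candidate record below are invented for this sketch, not a real HR system API), a screening workflow might clear a candidate only when every independent check passes:

```python
# Hypothetical sketch: combining several "analog" identity checks so that a
# single spoofed signal is never enough, similar to two-factor authentication.

def verify_candidate(candidate: dict) -> bool:
    """Return True only if every independent check passes."""
    checks = [
        candidate.get("showed_official_id", False),        # ID shown on camera
        candidate.get("interview_recorded", False),        # video kept for review
        candidate.get("onsite_visit_scheduled", False),    # in-person visit after hiring
        candidate.get("references_contacted_via_corporate_domain", False),
    ]
    # One failed check blocks onboarding and routes the case to manual review.
    return all(checks)

applicant = {
    "showed_official_id": True,
    "interview_recorded": True,
    "onsite_visit_scheduled": False,   # never scheduled an office visit
    "references_contacted_via_corporate_domain": True,
}
print(verify_candidate(applicant))  # False: one missing factor blocks onboarding
```

The design point is the same as with two-factor authentication: an attacker who can fake one signal (a convincing video presence, say) still has to defeat every other independent check.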
Fight fire with fire
Deepfake technology uses deep learning (DL) algorithms to mimic a person’s actions and mannerisms. The results can be uncanny: given only a few data points, AI can generate moving images and seemingly realistic videos of a person.
Analog methods can combat deepfakes, but they take time. One way to detect deepfakes quickly is to turn the technology against itself. If DL algorithms can create deepfakes, why not use them to detect deepfakes too?
In 2020, Maneesh Agrawala of Stanford University developed a solution that allowed filmmakers to insert words into the sentences of subjects on camera. To the naked eye, nothing looked wrong. Filmmakers rejoiced because they no longer had to reshoot scenes over flawed audio or dialogue. However, the negative implications of this technology were enormous.
Agrawala and his team, aware of this problem, countered their own software with another AI-based tool that detects anomalies between lip movements and word pronunciation. Deepfakes that superimpose new words onto a video in a subject’s voice cannot fully match the corresponding lip movements and facial expressions.
Agrawala’s solution can also be used to detect facial impositions and other standard deepfake techniques. As with all AI applications, a lot depends on the data fed to the algorithm. However, even this variable reveals a link between deepfake technology and the solution to combat it.
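As a rough sketch of the underlying idea (real detectors use learned audiovisual features; the correlation heuristic, threshold and synthetic signals below are purely illustrative assumptions), a detector can flag footage in which mouth movement fails to track the audio:

```python
# Toy illustration: in genuine footage, mouth movement tends to track the
# audio's loudness over time; in word-swapped footage it often does not.

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def looks_dubbed(mouth_openness, audio_energy, threshold=0.5):
    """Flag footage whose lip motion poorly tracks the audio envelope."""
    return pearson(mouth_openness, audio_energy) < threshold

# Synthetic per-frame measurements: the genuine clip's mouth motion follows
# the audio, the tampered clip's moves against it.
audio    = [0.1, 0.8, 0.9, 0.2, 0.7, 0.1]
genuine  = [0.2, 0.7, 0.8, 0.3, 0.6, 0.2]
tampered = [0.8, 0.1, 0.2, 0.9, 0.1, 0.8]
print(looks_dubbed(genuine, audio))    # False
print(looks_dubbed(tampered, audio))   # True
```

A production system would extract the mouth-openness series from facial landmarks and the energy series from the audio track; the principle of checking cross-modal consistency is the same.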
Deepfakes rely on synthetic data: datasets extrapolated from real-world events to account for many possible situations. For example, synthetic data algorithms can process data from a military battlefield incident and extrapolate it to generate many more simulated incidents. These algorithms can vary ground conditions, participant readiness, weapon states and so on, and feed the results into simulations.
Companies can use this type of synthetic data to counter deepfake use cases. By extrapolating from current usage data, AI can predict and detect edge cases and deepen our understanding of how deepfakes evolve.
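A minimal sketch of this extrapolation idea, assuming invented scenario fields and jitter ranges: start from one recorded scenario and vary its numeric parameters to synthesize many plausible variants for training or stress-testing a detector.

```python
import random

# Hedged sketch of synthetic-data extrapolation. The field names and the
# +/-20% jitter range are illustrative assumptions, not a real pipeline.

def synthesize_variants(base: dict, n: int, seed: int = 0) -> list:
    """Generate n variants of a base scenario by jittering numeric fields."""
    rng = random.Random(seed)  # seeded so runs are reproducible
    variants = []
    for _ in range(n):
        variant = dict(base)
        for key, value in base.items():
            if isinstance(value, (int, float)):
                jittered = value * rng.uniform(0.8, 1.2)
                # Keep integer fields integral, round floats for readability.
                variant[key] = round(jittered) if isinstance(value, int) else round(jittered, 3)
        variants.append(variant)
    return variants

base_scenario = {"visibility_km": 5.0, "participants": 12, "duration_min": 30}
variants = synthesize_variants(base_scenario, n=100)
print(len(variants))  # 100
```

Each variant stays close to a real observation, so a model trained on the expanded set sees many near-realistic situations it was never directly shown.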
Related: A Beginner’s Guide to Cybersecurity for Business Leaders
Accelerate digital transformation and education
Despite the advanced technology being used to fight deepfakes, Agrawala warns that there is no long-term solution to them. On the surface, this is a disturbing message. However, companies can combat deepfakes by accelerating their digital transformation and educating employees on best practices.
For example, deepfake awareness helps employees analyze and question the information they receive. Any circulating material that seems strange or out of proportion can be flagged immediately. Companies can develop processes to verify identities in remote-work situations and ensure that their employees know how to respond to deepfake threats.
Again, none of these methods can combat deepfake dangers on its own. However, combined with the techniques mentioned earlier, they give companies a robust framework that minimizes deepfake threats.
Advanced technology calls for innovative solutions
The ultimate solution to deepfake threats lies in technological advancement. Ironically, the answer to deepfakes lies in the technology that powers them. The future will no doubt reveal new ways to deal with this threat. Meanwhile, companies must remain aware of the risks associated with deepfakes and work to mitigate them.
Related: The Importance of Training: Cybersecurity Awareness as a Human Firewall