Does Artificial Intelligence Pose a Risk to National Security?

By Michela Menting | 4Q 2018 | IN-5262


Deepfakes Under Scrutiny

NEWS


On September 13, 2018, three members of the U.S. Congress, Adam Schiff (D-CA), Stephanie Murphy (D-FL), and Carlos Curbelo (R-FL), sent a joint letter to Daniel Coats, the U.S. Director of National Intelligence. In the letter, they raised concerns about the use of deepfakes: hyper-realistic digital forgeries that leverage Machine Learning (ML) to fabricate audio, video, and still images of individuals. The letter comes at a time when U.S. election meddling by foreign agents is a high-profile issue, not just around the electoral process itself, but also concerning news outlets and social media coverage. The proliferation of propaganda and misinformation through bots, fake accounts, and spoofed online identities is at an all-time high. The three lawmakers urge Director Coats to prepare a report to Congress detailing how deepfakes could be maliciously leveraged to harm U.S. national security interests, including reporting on use cases, potential countermeasures, the development of monitoring facilities, and recommendations on how to address the issue.

Privacy and Security Concerns

IMPACT


Biometric spoofing through deepfakes is an undeniably growing threat, with adversaries using biometric traits (taken either from public sources or from leaked private databases) to impersonate legitimate users. Deepfakes emerged on social media networks and streaming platforms (notably Reddit, YouTube, and Twitch) late in 2017. Video-based face swapping using open-source ML algorithms quickly became an online phenomenon, most notoriously for substituting adult film performers' faces with those of celebrities. Legal debate has focused on whether such substitutions amount to privacy violations (difficult to argue in the case of celebrities), copyright infringement, or even defamation.
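
To illustrate the kind of open-source tooling involved, the sketch below shows the shared-encoder/dual-decoder autoencoder design popularized by early face-swap projects: a single encoder learns features common to both identities, each identity gets its own decoder, and decoding person A's face with person B's decoder produces the swap. All layer sizes, names, and data here are illustrative assumptions, not any specific project's code.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 64x64 RGB face crop to a latent code shared by both identities."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """One decoder per identity; reconstructs a face from the shared code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training: each decoder learns to reconstruct its own identity's faces
# through the shared encoder. faces_a would be aligned crops of person A.
faces_a = torch.rand(8, 3, 64, 64)  # placeholder batch
loss_a = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)

# The "swap": encode person A's face but decode with B's decoder, yielding
# A's pose and expression rendered with B's appearance.
fake_b = decoder_b(encoder(faces_a))
```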

The technology behind deepfakes produces highly convincing material that can easily be believed. Researchers at the University of Washington recently published work that included a convincing video in which former U.S. President Barack Obama appears to give a speech, lip-synced through an ML algorithm to a performance by actor Jordan Peele. The researchers' goal is to dig deeper into the phenomenon and develop methods to authenticate content and distinguish genuine footage from deepfakes.
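
A common starting point for such authentication research is to treat detection as frame-level binary classification. The sketch below fine-tunes a pretrained image classifier to score frames as genuine or manipulated; it is a minimal illustration of that general approach on assumed placeholder data, not the University of Washington team's method.

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse an ImageNet-pretrained backbone and swap its head for a single
# real-vs-fake logit.
detector = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
detector.fc = nn.Linear(detector.fc.in_features, 1)

optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()

# Placeholder batch: video frames with labels 1.0 = manipulated,
# 0.0 = genuine. A real pipeline would sample frames from labeled clips.
frames = torch.rand(16, 3, 224, 224)
labels = torch.randint(0, 2, (16, 1)).float()

# One fine-tuning step.
optimizer.zero_grad()
loss = criterion(detector(frames), labels)
loss.backward()
optimizer.step()
```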

As such, the potential nefarious impact is far-reaching. Cybercriminal efforts built on identity theft and phishing could be vastly improved through deepfakes; most significantly, such techniques could sharpen spear-phishing and whaling attacks, and even vishing. Beyond that, the implications for national security are serious. Threat actors, especially politically motivated and foreign state-backed ones, may well have the resources to target high-profile government and military targets. Congressman Schiff, Congresswoman Murphy, and Congressman Curbelo have every reason to be concerned about the fallout of the technologies underpinning the deepfake phenomenon.

A Dual-Use Technology 

RECOMMENDATIONS


Much like human intelligence, Artificial Intelligence (AI) can be put to either beneficial or harmful ends. The first concern is that AI systems in use today suffer from several vulnerabilities, notably the exploitation of design flaws through adversarial manipulation, and most of these vulnerabilities are nowhere near being resolved. Deep convolutional neural networks for object recognition, for example, can be fooled by input images perturbed in a visually indistinguishable manner, as the sketch below illustrates. The second concern is the deliberate creation of AI models for harmful purposes, whether malicious (the perpetration of crime, both cyber and physical) or within the context of military or law enforcement operations (to defensive or offensive ends). Deepfakes fall squarely within this concern.
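
As a concrete illustration of that vulnerability, the following sketch applies the fast gradient sign method (FGSM), one well-known way of crafting such perturbations: it nudges every pixel a tiny step in the direction that increases the classifier's loss. The model, input, label, and epsilon here are placeholder assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Any pretrained classifier will do; ResNet-18 is used here for brevity.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Placeholder input and label; in practice this would be a real photo
# and its correct class.
image = torch.rand(1, 3, 224, 224, requires_grad=True)
label = torch.tensor([207])  # an arbitrary ImageNet class index

# Backpropagate the classification loss to the input pixels.
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

# Move every pixel a tiny step (epsilon) in the direction that increases
# the loss; at 2/255 the change is visually indistinguishable, yet on real
# images it is often enough to flip the model's prediction.
epsilon = 2.0 / 255.0
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

print("clean prediction:      ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```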

As AI becomes more efficient, more affordable, and more readily available, and as companion products such as hardware, processing power, and data storage become cheaper and more accessible, a growing number of users will be able to leverage it for criminal intent. Further, much of the research work done in AI is published in open-access academic journals, and additional material is freely available on online platforms (e.g., GitHub), making it more accessible still. Malicious actors will be better able to seek out targets, manipulate and take advantage of users and systems more accurately, and even automate specifically targeted attacks. Machine translation and speech synthesis, for example, could enable threat actors to minimize the amount of customization and manual supervision needed to carry out successful cyberattack campaigns.

Consequently, it is realistic to expect a growing number of cyberattacks leveraging AI in the near future, and AI designers should strengthen their models to account for adversarial manipulation; one common hardening technique is sketched below. Today, most threat actors still lack the capability to leverage AI maliciously, but as AI research continues to progress, both academically and commercially, the likelihood of misuse grows in parallel. The use of ML for cybercrime is likely to occur sooner rather than later. Certainly, the U.S. lawmakers are right to worry; there is little doubt that state-sponsored threat actors with the resources to engage in such attacks are already exploring how AI technologies can aid their efforts.
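
One widely studied hardening technique is adversarial training, in which perturbed inputs are generated on the fly and folded back into the training loss. The sketch below shows a single such training step; the toy model, FGSM crafting, and placeholder data are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy classifier and optimizer, standing in for a production model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128),
                      nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
epsilon = 0.1  # perturbation budget

def fgsm(x, y):
    """Craft an FGSM-perturbed copy of batch x against the current model."""
    x = x.clone().requires_grad_(True)
    nn.functional.cross_entropy(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

# One adversarial-training step on placeholder data.
x = torch.rand(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))

adv = fgsm(x, y)        # craft the perturbed batch first...
optimizer.zero_grad()   # ...then discard gradients from the crafting pass

# The loss mixes clean and adversarial batches so the model learns to
# classify both correctly, making this attack harder to mount.
loss = 0.5 * nn.functional.cross_entropy(model(x), y) \
     + 0.5 * nn.functional.cross_entropy(model(adv), y)
loss.backward()
optimizer.step()
```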
