Microsoft and MITRE Release Open Framework to Assist Against AI Cyberattacks on Security Systems


4Q 2020 | IN-5968

AI security is back in the spotlight amid the pandemic threat. Organizations are being forced to adapt their security strategies, investing in both human expertise and AI tools. Microsoft and MITRE's new open framework builds on the ATT&CK framework to address these objectives.



Introducing the New ML Security Framework


On October 23, 2020, Microsoft, in collaboration with MITRE, announced the release of the Adversarial Machine Learning (ML) Threat Matrix, an industry-focused open framework aimed at assisting security analysts in detecting, responding to, and remediating threats against ML systems. This is a significant development for security engineers and software developers. Acknowledging the damage that AI-powered cyberattacks can inflict across a wide spectrum of organizations, Microsoft and MITRE have developed an open framework with the human element as its main focal point.
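To make the class of threat concrete, the kind of adversary behavior the matrix catalogs can be sketched with a toy evasion attack. This is purely illustrative and not taken from the Threat Matrix itself: the model, weights, and sample below are all hypothetical, and the attack shown is the well-known Fast Gradient Sign Method (FGSM), which nudges an input against the model's gradient so a malicious sample scores as benign.

```python
import math

# Hypothetical trained logistic-regression "detector" (illustrative only).
WEIGHTS = [2.0, -1.0, 0.5]
BIAS = -0.2

def predict(x):
    """Probability that input x is classified 'malicious'."""
    z = sum(w * xi for w, xi in zip(WEIGHTS, x)) + BIAS
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, epsilon=0.3):
    """FGSM-style evasion: shift each feature against the gradient of the
    'malicious' score. For logistic regression the gradient of the score
    with respect to x_i has the sign of w_i, so subtracting
    epsilon * sign(w_i) lowers the score."""
    return [xi - epsilon * math.copysign(1.0, w)
            for xi, w in zip(x, WEIGHTS)]

sample = [1.0, 0.2, 0.8]           # scored as malicious by the toy model
adversarial = fgsm_perturb(sample)
print(predict(sample))             # high 'malicious' score
print(predict(adversarial))        # lower score after the perturbation
```

Real attacks against production ML systems are far more involved, but the principle — small, targeted input perturbations that flip a model's decision — is exactly what the matrix's adversary behaviors describe.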

Robustly Built on the ATT&CK Matrix


With contributions from leading organizations (including Microsoft, MITRE, Bosch, IBM, Nvidia, Deep Instinct, and Airbus) and influential security-focused academic institutions (including the University of Toronto, Cardiff University, and the Software Engineering Institute at Carnegie Mellon University), the framework is seeded with a curated set of vulnerabilities and adversary behaviors that Microsoft and MITRE have vetted as effective against production ML systems. As an open industry framework, the Adversarial ML Threat Matrix will be powered by a constantly expanding cybersecurity and software development community, and it closely resembles another open MITRE framework that many security analysts already know: the MITRE ATT&CK Matrix. The ATT&CK framework was meticulously designed to cover cybersecurity operations across different enterprise systems and a plethora of attack vectors, including threat subterfuge, client exploitation, remote command and control, Advanced Persistent Threats (APTs), credential fraud, and privilege escalation. As such, the Adversarial ML Threat Matrix starts with a substantial head start and a powerful arsenal of security tools and intelligence, meant to extend its coverage far beyond standard enterprise networks and position it across next-generation IoT and AI-driven systems.

Having a standardized yet open-source security framework that security engineers can rely upon to cover the entirety of constantly evolving IoT cyberthreats is a tall order. It is certainly an ambitious endeavor, one that is sure to require multiple rounds of improvement and face a fair number of challenges. ABI Research posits, however, that it is a necessary step in the right direction, and one that can assist organizations' security engineers and software developers by supplementing their respective expertise, tracking down emerging threats as part of a collaborative effort, and deploying security-first solutions that tap into a collective pool of knowledge.

In many research interviews with IoT-focused companies as well as security software and services organizations, it became apparent that while there is some effort to share anonymous feedback and insight about internal cybersecurity breaches and threats between companies, a barrier to the active exchange of information remains. There are certainly valid concerns that impede active information sharing by victims of cyberattacks (loss of customer trust, public image, and shareholder ramifications chief among them). Even so, an open platform that keeps software developers and security engineers actively engaged, enriching the framework in a collaborative effort, can not only create more security-focused IoT deployments down the line, but also greatly increase ROI through less downtime and fewer operational discrepancies. This is not a panacea by itself, but rather a necessary tool.

The Pandemic Boosts AI Transformation but to What End?


We are traversing a technological era that is becoming increasingly dependent on ML systems, accompanied by a resolute vision of an AI-driven future with "mostly positive" results on the horizon, albeit with limited consideration of many technological factors (and even less of the vital socioeconomic ones). The ongoing COVID-19 pandemic has radically transformed this technological evolution across the board. AI-powered cybersecurity attacks are on the rise, greatly amplified by COVID-themed campaigns preying on citizens' thirst for information vital to their well-being. With the rise of remote work, organizational endpoints with elevated credentials capable of accessing company secrets have been spread on a global scale across different countries and home networks, away from the IT network umbrella. Industrial facilities have looked further into automating and digitizing their assets to counterbalance the effect of a limited workforce on the factory floor due to epidemiological regulations. New IoT networks are emerging, oftentimes improperly secured with limited security protocols in place, favoring connectivity over encryption or protection.

Most importantly, there are not enough cybersecurity personnel to combat the emerging threat horizon, making security automation a necessity against the alert fatigue afflicting human analysts. At the same time, these same security personnel are also expected to deal with cyberattacks on ML systems, an issue that initiatives like the Adversarial ML Threat Matrix will hopefully help address. ABI Research posits that amid the multitude of AI-powered cyberattacks, the raging seas of DDoS attacks, IoT botnet armies, top-tier company hacks, and governmental cyberwarfare, there is a kernel of truth: while AI can be used to launch new cyberattacks, it is also, if properly used, a necessary agent of change. Powered by machine learning, automation is a much-needed weapon in organizations' arsenals. Advanced security analytics must be able to assist security automation by providing intelligence about the precise nature, type, gravity, and sophistication of incoming threats, giving software developers and engineers the chance to hone their skills and prepare for the storm ahead. Thus, market players should aim to produce results not only that humans, with our exquisite level of sophistication and intelligence, can comprehend, but also that other ML processes can understand, filter, and train upon in order to produce greater levels of reliable automation. Microsoft and MITRE's new Adversarial ML Threat Matrix is a shot in the arm against the increased AI-borne cyberthreats instigated by the pandemic and, hopefully, one of many much-needed security initiatives to come.
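One way to make security analytics consumable by both human analysts and downstream ML processes is to emit alerts in a structured, machine-readable shape. A minimal sketch of that idea follows; the alert schema and the severity-times-confidence triage score are hypothetical illustrations, not part of any framework discussed above.

```python
import json

def make_alert(threat_type, severity, confidence, indicators):
    """Build an alert record. Field names are illustrative, not a standard schema."""
    return {
        "threat_type": threat_type,   # e.g. "credential_theft"
        "severity": severity,         # 1 (low) .. 5 (critical)
        "confidence": confidence,     # detector confidence, 0.0-1.0
        "indicators": indicators,     # IoCs for automated matching
    }

def triage(alerts, min_score=2.0):
    """Rank alerts by severity * confidence so automation can suppress
    low-value noise and surface likely-real threats first."""
    scored = [(a["severity"] * a["confidence"], a) for a in alerts]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [a for score, a in scored if score >= min_score]

alerts = [
    make_alert("phishing", 2, 0.4, ["mail.example.test"]),
    make_alert("credential_theft", 5, 0.9, ["10.0.0.5"]),
]
for alert in triage(alerts):
    print(json.dumps(alert))   # JSON lines: readable by humans and ML pipelines
```

Because each alert is a plain JSON record, the same stream can feed an analyst's dashboard and a downstream model's training set, which is precisely the dual audience described above.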

The code for the Adversarial ML Threat Matrix open framework is available on GitHub.
