The Prospect of EU-Led Artificial Intelligence Regulation

4Q 2021 | IN-6354

Furthering its investment in AI regulation, the EU has released a proposed Act that lays out legal guidelines for managing risk in AI-based technologies.


What the EU Proposes to Do to Regulate AI

NEWS


Building on the guiding principle of “trustworthy” Artificial Intelligence (AI) first articulated by the European Union (EU) in 2019, the European Commission published its highly anticipated Proposal for an AI Act (Regulation (EU) 2021/0106) in April 2021. The Act lays out a legal framework to regulate AI systems following a proportionate, risk-based approach: it defines a four-tiered scheme, with each tier setting requirements and obligations for providers and users that reflect the range of potential risks AI systems can pose to health, safety, and fundamental rights. It is the most ambitious attempt to regulate AI technologies to date. Given its extraterritorial scope, the Act is very likely to produce a “Brussels Effect”: businesses that place AI systems on the EU market, or whose systems produce outputs used in the EU, will have to adapt to the rules. Indeed, much as with the General Data Protection Regulation (GDPR), EU rules are often treated as an international gold standard (over 100 countries have adopted data protection laws modeled on the GDPR, for instance), and ABI Research fully expects the EU AI Act to have a similar impact.

A Risk-Based Approach to AI Legislation

IMPACT


The EU AI Act builds on the work of a high-level expert group on AI composed of fifty-two experts, and it was framed to be consistent with existing EU rules and regulations, most notably the EU Charter of Fundamental Rights, the 2016 Law Enforcement Directive, and, of course, the GDPR. The Act is meant to complement the GDPR, which already covers the use of AI systems whenever personal data is involved. The EU’s approach to AI is, moreover, two-pronged: the Act was published alongside a Coordinated Plan on AI aimed at increasing investment in AI technologies in the EU through ‘responsible research and innovation,’ an approach meant to anticipate and assess the potential implications of, and societal expectations surrounding, AI research.

As mentioned, the Act sorts AI systems into four tiers according to the risk they can pose to the public. It defines AI rather broadly, covering not only machine learning and logic-based systems but also statistical approaches (e.g., Bayesian estimation), and assigns systems to four major categories (a schematic sketch in code follows the list):

  • Unacceptable-risk AI, such as subliminal techniques, real-time biometric identification in public spaces, AI that exploits the vulnerabilities of children, or social scoring, will be banned (with exemptions for scientific research).
  • High-risk AI, such as creditworthiness assessment, recruitment, or biometric identification in non-public spaces, will require providers to carry out conformity assessments, cyber-risk management practices, post-market monitoring, and self-reporting.
  • Limited-risk AI, such as chatbots, will be subject to transparency obligations, and providers will be encouraged to adopt a voluntary code of conduct, allowing users to make an informed decision about whether to interact with the system.
  • Minimal-risk AI, such as spam filters, will face no restrictions, though providers may choose to adhere to a code of conduct.
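To make the tiering concrete, the Python sketch below maps the example use cases from the list to their tiers and headline obligations. This is an illustrative fragment only: the mapping table, function name, and obligation summaries are hypothetical simplifications for exposition, not an official classification under the Act.

from enum import Enum

class RiskTier(Enum):
    # Headline obligations paraphrased from the four-tier scheme above.
    UNACCEPTABLE = "banned outright"
    HIGH = "conformity assessment, risk management, post-market monitoring, self-reporting"
    LIMITED = "transparency obligations"
    MINIMAL = "no restrictions; voluntary code of conduct"

# Hypothetical triage table based on the examples cited in the text above;
# a real classification would require legal analysis of the system in question.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "creditworthiness_assessment": RiskTier.HIGH,
    "recruitment_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Return the tier and headline obligations for a known example use case."""
    tier = USE_CASE_TIERS.get(use_case)
    if tier is None:
        raise ValueError(f"No tier on record for {use_case!r}")
    return f"{use_case}: {tier.name} risk -> {tier.value}"

for case in USE_CASE_TIERS:
    print(obligations_for(case))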

Enforcement will fall under the purview of individual EU member states and will involve regulatory sandboxes under the supervision of national authorities; in addition, a new European AI Board will be established to facilitate consistent implementation across the Union. The Act establishes that fines will be proportionate to company and market size; the maximum fine, for instance, will be either €30 million or 6% of total worldwide annual turnover for the preceding financial year, whichever is higher. It is worth stressing that the legislation will undergo significant negotiation before it comes into effect. If past practice is any guide, the European Parliament will probably seek to strengthen certain aspects of the Act, while individual member states may push to reduce its overall impact. Outside lobbying will also be a factor, not only from private industry but from non-governmental organizations (NGOs) as well. Human Rights Watch, for instance, while approving of the regulation as a step in the right direction, has rightly pointed out that the Act carves out significant exceptions for law enforcement, migration control authorities, and indeed all military uses, which may well result in abuses by state actors.
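The penalty ceiling above is simple arithmetic: the higher of a fixed €30 million floor or 6% of worldwide annual turnover. A minimal sketch follows; the function name and the example turnover figure are hypothetical.

def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Headline penalty ceiling: the higher of EUR 30 million
    or 6% of total worldwide annual turnover."""
    return max(30_000_000.0, 0.06 * worldwide_annual_turnover_eur)

# Example: a firm with EUR 2 billion in turnover faces a ceiling of
# EUR 120 million, since 6% of turnover exceeds the EUR 30 million floor.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 120,000,000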

The Need for Regulation

RECOMMENDATIONS


Most stakeholders agree that AI regulation is needed; the real question is how to adopt rules that safeguard citizens’ rights without curtailing research and innovation (though there is evidence that “hard laws” can accelerate innovation). There has certainly been much variability in how different actors have approached this balancing act, as ABI Research has discussed in An Analysis Of AI Ethics and Governance, but a consensus can be outlined nonetheless. In 2019, an overview of the ethical guidelines adopted by diverse countries, businesses, and stakeholders identified five points of convergence: transparency, justice and fairness, non-maleficence, responsibility, and privacy. More recently, close to two hundred countries adopted UNESCO’s recommendation on AI ethics. ABI Research believes the EU Act, with its focus on trustworthy AI and responsible research and innovation, is well placed to cover these criteria, and that risk-based approaches will become the international standard, though it remains to be seen whether big players such as the US and China will follow suit and, if so, how.

Nevertheless, any AI regulation will necessarily have to remain open to new developments in technology and, crucially, adapt its risk-based requirements accordingly. Discussions of AI risk are often treated as if they were purely technical problems (this is the risk a technology poses, this is its potential impact, this is the desired outcome) when in reality many of the ethical questions raised by AI remain unsettled or simply unknown. There is significant understanding, though by no means comprehensive agreement, of the ethics of some technologies that will become central to our lives in the short term, such as liability for autonomous vehicles or machine bias in law. For medium-term developments such as AI governance or human-machine interaction, the ethical issues are still being worked out, and for possible long-term developments related to AI, such as mass unemployment or space colonization, the ethical problems are not yet well defined. Naturally, there will be mistakes in the early stages of the Act’s implementation, as regulators can hardly anticipate every issue that will arise.

ABI Research believes that, much as with the development of medical ethics, an interdisciplinary approach to the ethics and regulation of AI is required, though such an approach is so far lacking. Academia will play an important role here, and there has been much activity on this front recently; for instance, Oxford University founded an Institute for Ethics in AI within its world-leading Faculty of Philosophy in February 2021. Be that as it may, businesses worldwide will have to adapt quickly, as the EU AI Act will be a trend-setter.

 
