Upcoming Regulation on Artificial Intelligence to Drive Demand for Privacy Enhancing Technologies


By Michela Menting | 1Q 2024 | IN-7233

Mounting concerns over the security of Artificial Intelligence (AI) applications are driving increased regulatory oversight. This may put a short-term damper on AI growth, but it is also opening new opportunities for Privacy-Enhancing Technologies (PETs).



Privacy Concerns Mar Growth Prospects of AI


Italy’s Data Protection Authority (DPA, or Garante) recently notified OpenAI, the developer of ChatGPT, that it had breached data protection laws as enshrined in the European Union’s (EU) General Data Protection Regulation (GDPR). The DPA had already imposed a temporary ban on the use of ChatGPT by companies and individuals in the country in March 2023. This notification will likely signal to other EU member states to take a similar stand or, at the least, to start their own investigations. Certainly, in light of the progress of the EU’s AI Act, the EU is clearly on track to ensure that Artificial Intelligence (AI) is used within a regulated framework of safeguards covering privacy, citizen rights, and the prevention of disinformation. As a result, developers of AI applications, including generative AI, will face new barriers to growth.

Slower Growth versus Longer-Term Impact


Regulation in the AI space will inevitably slow the growth and adoption of AI-related applications. This is the primary argument many pundits raise against regulation: that it stifles innovation and prospective opportunities. However, the potential risks of unbridled AI evolution have been debated and voiced repeatedly over the last decade: initially, the concern was that AI development would outpace government control; increasingly, and more urgently today, it is that AI will be used maliciously (even when deployed legitimately) and cause harm. It is this latter concern that is the primary driver of regulations such as the EU AI Act, which seeks to ban or limit certain applications because of the risk and threat of misuse.

The most imminent threat is to privacy: according to Italy’s Garante, OpenAI’s ChatGPT is breaching data protection and privacy laws. Given the way Large Language Models (LLMs) are trained, it is entirely possible that private conversations and other confidential information find their way into the training datasets; and once there, that information can be extracted through the model’s outputs.

But beyond that, the goal of regulation is also to prevent the processing of public data (e.g., data gleaned from social media profiles, or data harvested through opt-in mechanisms by companies) to create AI models that can impinge upon basic human rights, such as through the biometric categorization of sensitive characteristics, facial image scraping, or even the manipulation of human behavior. This means that model developers will now have to work within the guardrails set by such legislation, which will constrain and hamper Research and Development (R&D) to some extent.

Time for Privacy-Enhancing Technologies to Shine


Ultimately, regulatory guardrails present an opportunity for Privacy-Enhancing Technologies (PETs) to come into their own. PETs refer to a long-standing set of hardware and software approaches to achieving privacy and data protection, including homomorphic encryption, multi-party computation, differential privacy, zero-knowledge proofs, synthetic data, and federated learning. With PETs, public, and even private, data could be used in AI modeling in a way that protects the data from being revealed to either the model creators or the eventual users. Their use would still have to comply with regulations such as those being developed by the EU, but PET controls could provide some of the privacy protections outlined therein.
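As an illustration of how one such PET works in practice, the short sketch below applies differential privacy's Laplace mechanism to release a noisy average of a sensitive numeric attribute. This is a minimal, generic example (the function names and parameters are our own, not drawn from any particular product or regulation): each record's value is clipped to a known range to bound its influence, and calibrated noise is added so that no single individual's contribution can be confidently inferred from the published result.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Sample from a Laplace(0, scale) distribution via inverse CDF sampling.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_mean(values: list, lower: float, upper: float, epsilon: float) -> float:
    """Differentially private mean of `values` under privacy budget `epsilon`."""
    # Clip each value so one record's influence on the mean is bounded.
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / len(clipped)
    # Sensitivity of the mean over n records is (upper - lower) / n.
    sensitivity = (upper - lower) / len(clipped)
    return true_mean + laplace_noise(sensitivity / epsilon)

# Example: release an average age without exposing any individual's age.
ages = [34, 45, 29, 61, 38, 52, 27, 44]
noisy_avg = dp_mean(ages, lower=18, upper=90, epsilon=1.0)
```

A smaller `epsilon` means stronger privacy but noisier answers. Production deployments rely on audited implementations (e.g., open-source differential privacy libraries) rather than hand-rolled noise generation.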

Some of these PETs have been around for years but have not been commercially viable; others are fairly complex to implement, requiring algorithmic optimization or more streamlined operation for better usability. The chief obstacle has been the need for better hardware capabilities, but advances in AI are starting to address it: the more cost-effective and performant hardware being developed for AI workloads will drive the commercialization of PETs alongside it. With additional regulatory requirements making security and privacy a priority, PETs stand a strong chance of finally reaching practical applicability and, hopefully, lucrative business models that, in turn, can support continued AI innovation.