French Data Authority Issues AI Game Plan, with Special Mention of Video Surveillance after Olympics Controversy


By Elizabeth Stokes | 2Q 2023 | IN-6991

Last month, France’s data protection agency, the CNIL, released an Artificial Intelligence (AI) action plan designed to regulate and audit AI systems. The plan singles out AI-equipped video surveillance cameras, recommitting the agency to auditing their use. Its release comes a few months after the French government passed a bill legalizing the use of AI video surveillance cameras ahead of the 2024 Paris Olympics, inflaming an already fraught debate in the country over privacy and individual freedom.


The CNIL's 2023 AI Action Plan

NEWS


France’s data protection authority, the CNIL, announced in May an Artificial Intelligence (AI) action plan, introducing a new framework to regulate the technology and defend citizens' rights to privacy. The framework includes four objectives, one of which features a renewed focus on auditing the use of AI-based video surveillance cameras.

This action plan comes a year before the Paris 2024 Olympics, which have been partially mired in controversy since the French National Assembly approved the use of smart cameras for the games earlier this year. Privacy activists derided the decision, accusing the French government of using the Olympics to usher in indefinite surveillance.

Paris Olympics Test the CNIL's Smart Camera Framework

IMPACT


In the summer of 2022, the CNIL outlined acceptable use cases for smart cameras in public spaces. The organization listed use cases it considers potentially legitimate, such as analyzing building occupancy to estimate energy consumption or monitoring traffic patterns to plan new roads. However, it warned that the presence of AI video surveillance cameras in public spaces poses a significant threat to citizens’ rights and freedoms, and that the devices should therefore be strictly regulated on a case-by-case basis. The organization stated that most camera deployments should require authorization from a legislative or regulatory authority.

Additionally, the 2022 framework was adamant that law enforcement in France cannot use smart cameras to detect criminal offenses. According to the CNIL, the French police are barred from using smart cameras to detect suspicious or abnormal behavior. Public authorities may use these devices only if authorized by a specific law.

The legal process allowing authorities to use surveillance cameras played out in real time earlier this year, when the French government introduced a bill proposing the use of AI video surveillance cameras during the Paris 2024 Olympics. The National Assembly passed the bill in March, and France’s Constitutional Council upheld it last month. The bill makes France the first country in the European Union (EU) to approve the use of AI video surveillance.

The CNIL supported the bill so long as the French government guaranteed that no biometric data would be collected. The French government noted that it would not use facial recognition technology, but opponents argue that collecting data to determine suspicious behavior inherently involves collecting biometric data. Though the CNIL has been at the forefront of technology regulation and privacy protection, its leadership in video surveillance regulation has been tested by the new bill. Other privacy organizations, such as Big Brother Watch, have publicly expressed their disapproval.

Prepare for Smart Cameras to Be Implicated in the Global AI Debate

RECOMMENDATIONS


The CNIL stated in the action plan released last month that it intends to enforce compliance with its 2022 smart camera framework. The rest of the AI action plan is focused on understanding the functions of AI systems, supporting AI innovators who respect individual rights, and auditing other AI-based technologies.

France’s concern about video surveillance and AI comes at a time when the European Parliament is introducing groundbreaking legislation to regulate AI technology. The parliament recently passed a draft law, known as the AI Act, which would limit the use of facial recognition software and require AI vendors to disclose more information about the data their systems use.

Europe has organized quickly and shown more resolve in regulating AI and video surveillance than the United States, which has, so far, taken a fractured approach to auditing the technology. Individual states and cities have enacted varying ordinances governing the use of AI video surveillance and biometric data collection. Vendors of AI systems and video surveillance should anticipate that regulations will remain fragmented across states, counties, and countries.

Video surveillance vendors should also be aware that their devices will be implicated in the growing debate surrounding generative AI. The release of ChatGPT has forced citizens and governments to confront both the power of machine intelligence and the privacy concerns it raises. As citizens debate the ethics of generative AI models, the conversation will inevitably extend to other controversial AI-enabled technologies, such as smart cameras. Just as the CNIL’s AI action plan singles out smart cameras in its regulatory recommendations, future bills concerning AI systems will include provisions on video surveillance, video analytics, and biometric data collection. Video surveillance vendors that assume AI systems like ChatGPT will absorb most of the current scrutiny should expect to be drawn into these debates rather than ignored.

And unlike ChatGPT, video surveillance cameras are not a novel technology; they are already deployed around the world in vast numbers. According to ABI Research, the global installed base of video surveillance cameras will reach 1.2 billion by 2030. The debate surrounding ChatGPT has given citizens the vocabulary to understand the fundamentals of machine intelligence, and video surveillance vendors should expect citizens and governments to feel better equipped to question the use of smart cameras.
