The First City in the United States to Outright Ban Facial Recognition Use by Government and Law Enforcement
NEWS
A few weeks ago, in May 2019, San Francisco passed a new bill that officially bans the use of facial recognition technology by city agencies. Additionally, San Francisco will now implement a carefully laid-out plan governing which surveillance tools the city’s law enforcement agencies may use (border control applications are exempt, as they must conform to United States and international requirements). Essentially, the bill imposes strict oversight over these tools: the data they process, the agencies with which that data can be shared, the algorithms used, the companies that provide them, and many other aspects. These restrictions ultimately go beyond existing biometric laws and regulations and will cover most (if not all) major applications of facial recognition across civil, law enforcement, and government verticals.
Is There Any Merit to Banning Facial Recognition Surveillance?
IMPACT
How did this bill pass in San Francisco, a city that gave rise to so many of Silicon Valley’s digital pioneers and that remains one of the most technologically innovative cities in the United States, if not the world? San Francisco regulators, California lawmakers, members of Congress, spokespeople from the American Civil Liberties Union, and other entities believe that the technology can be abused by governmental and law enforcement agencies and that there is a high chance that many countries will evolve into surveillance states.
But is there any merit in this decision? As it turns out, yes there is, and quite a sizable amount for that matter. While it is an unprecedented turn of events for a city or state in the United States to go to such lengths to ban a specific type of biometric technology, there have been, in fact, many instances in the past where:
Additionally, rising concerns regarding overt or covert usage, the lack of transparency around data processing, Artificial Intelligence (AI) and technological upgrades outpacing regulation, algorithm limitations, and a host of other problems associated with facial recognition-empowered surveillance likely also contributed to the bill’s passage. So, there is definitely some merit in banning facial recognition, especially given some of the decades-long covert operations mentioned previously.
But what should companies expect in the aftermath? Given the proven public-security benefits of biometrics-based surveillance, is an outright ban the best option?
What Should Companies Expect?
RECOMMENDATIONS
This is to be expected; pushback has always been part of the technological evolution of biometrics: First and foremost, implementers should note that even such an extreme measure is to be expected. It is only natural for companies to anticipate at least some continuation of this fallout and negative sentiment toward facial recognition in the near future. This response should be treated as part of the natural evolution of biometrics over time. As one may recall, a decade ago even the mention of facial ID would conjure up images of governmental oppression straight out of Hollywood movies. Biometrics has, in general, encountered a lot of pushback over the years and, as the technology continues to evolve, that pushback will not stop anytime soon.
Do not release incomplete products! Facial recognition has issues, but they should be hammered out: It is important to note that facial recognition technologies do have a lot of issues, mostly related to algorithm accuracy. Companies should not release any unfinished product or any software that has not been properly “trained” to work on a specific population, under specific lighting conditions, or within other deployment constraints (e.g., stadiums and public events, casinos and the entertainment industry, etc.).
Are algorithms racist? Embrace and improve upon the operational limitations and do not, under any circumstances, deny them: A sizable portion of the negative discourse in the United States (and particularly in San Francisco) originates from the fact that many facial recognition algorithms have been characterized as “racist” (as indicated by independent research groups testing leading software products). In technical terms, this translates to the following two outcomes, observed across multiple studies: 1) higher False Positive rates for African Americans, and 2) higher False Negative rates for Caucasians.
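For clarity on the terminology, the short sketch below shows how such per-cohort error rates are typically computed from verification scores. This is a minimal illustration only: the cohort names, scores, and the 0.80 decision threshold are hypothetical and are not drawn from the studies referenced above.

```python
# Illustrative sketch: per-cohort False Positive Rate (FPR) and False
# Negative Rate (FNR) from face-verification comparison pairs.
# All values below are synthetic; the threshold is a hypothetical choice.
from collections import defaultdict

THRESHOLD = 0.80  # hypothetical match-score decision threshold

def per_cohort_error_rates(pairs):
    """pairs: iterable of (cohort, score, is_genuine) tuples."""
    stats = defaultdict(lambda: {"fp": 0, "impostor": 0, "fn": 0, "genuine": 0})
    for cohort, score, is_genuine in pairs:
        s = stats[cohort]
        if is_genuine:
            s["genuine"] += 1
            if score < THRESHOLD:      # genuine pair wrongly rejected
                s["fn"] += 1
        else:
            s["impostor"] += 1
            if score >= THRESHOLD:     # impostor pair wrongly accepted
                s["fp"] += 1
    return {
        cohort: {
            "FPR": s["fp"] / s["impostor"] if s["impostor"] else float("nan"),
            "FNR": s["fn"] / s["genuine"] if s["genuine"] else float("nan"),
        }
        for cohort, s in stats.items()
    }

# Tiny synthetic example; scores and labels are made up purely for illustration.
pairs = [
    ("cohort_A", 0.85, False), ("cohort_A", 0.90, True), ("cohort_A", 0.70, True),
    ("cohort_B", 0.60, False), ("cohort_B", 0.75, True), ("cohort_B", 0.95, True),
]
print(per_cohort_error_rates(pairs))
```

A disparity in FPR between cohorts at the same threshold is exactly the kind of result the independent studies above report; comparing rates at a fixed, shared threshold is what makes such comparisons meaningful.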
In short, data variability and cohort representation are not the culprits here; rather, the causes are other effects connected to physiological factors, technical limitations, and, most importantly, image quality and conformity to International Civil Aviation Organization (ICAO) and National Institute of Standards and Technology (NIST) standards when designing recognition software.
Any algorithm designer would immediately point to a potential mistake in the Machine Learning (ML) data: variability and cohort representation. In simple terms, this argument holds that some populations (in this case, African Americans) are underrepresented in training data, giving the algorithm greater variation in Caucasian faces to train on and leaving other populations at a disadvantage. This, however, does not seem to be the case. First, building a representative sample, meaning adjusting the ratio of populations accordingly, is a statistically sound technique; statistically and modeling-wise, it is correct not to use equal numbers from each population, since equal numbers would not represent the actual population. For example, it would make no sense to increase the number of Caucasians to match the number of people of Asian descent simply because a facial recognition product deployed in China or most other countries in Asia-Pacific (APAC) exhibits a bias. Second, more than anything else, image quality and conformity to ICAO standards for facial recognition have been shown to increase correct identification rates. Third, certain populations, particularly those of African descent, are inherently more difficult to identify correctly, a fact that points toward lighting effects related to skin pigmentation.
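To make the statistical point above concrete, here is a minimal, purely hypothetical sketch of proportion-matched sampling, where the training set mirrors the deployment population’s cohort ratios instead of forcing equal counts per cohort. The cohort names, population shares, and dataset are invented for illustration.

```python
# Illustrative sketch: draw a training sample whose cohort ratios match a
# (hypothetical) deployment population, rather than equalizing cohort counts.
import random

random.seed(42)

# Hypothetical deployment-population proportions.
population_share = {"cohort_A": 0.60, "cohort_B": 0.13, "cohort_C": 0.27}

def representative_sample(images_by_cohort, total):
    """Draw a training set whose cohort ratios mirror population_share."""
    sample = []
    for cohort, share in population_share.items():
        n = round(total * share)
        pool = images_by_cohort[cohort]
        sample.extend(random.sample(pool, min(n, len(pool))))
    return sample

# Dummy image identifiers standing in for a labeled face dataset.
images_by_cohort = {
    c: [f"{c}_img_{i}" for i in range(10_000)] for c in population_share
}
train_set = representative_sample(images_by_cohort, total=5_000)
print(len(train_set))  # ~5,000 images, split 60/13/27 across cohorts
```

The design choice this illustrates is the one argued above: matching the target population is statistically defensible, which is precisely why underrepresentation alone does not explain the observed error-rate disparities.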
Instead of denying such limitations out of fear that they can put a company in the spotlight, ABI Research recommends that implementers embrace and acknowledge said technical issues and make it clear that operational limitations are observed and will be hammered out eventually as part of the ML process. This will not only improve scientific discourse but also increase governmental, public, and consumer confidence in the process.
The technology is merely a tool; implementers should accept that oversight is needed: A knife can be used as a weapon to inflict harm, or in the hands of a medical professional to save a life. Facial recognition, by itself, is obviously neither “good” nor “evil,” and it has proven time and time again that it can help identify dangerous criminals, prevent violent crimes, and even predict crimes given the rapid advance of AI technologies and behavioral analytics. It has also been used as a means to severely damage civil liberties, at least in countries where that term actually carries weight. Objectively speaking, and as made clear in the previous section, it is perfectly understandable that the need to ban certain technologies will arise. On the other hand, the technology itself is not to blame for how certain governmental agencies use it. One could easily extend this philosophy and simply state that governments should ban themselves over indiscriminate (and quite possibly illegal) usage of biometric surveillance technologies, but that is not the case. Implementers should understand that oversight and regulation are needed. Microsoft has publicly urged lawmakers and governments to move quickly on further regulation and oversight of biometric surveillance, out of concern (a very sensible response indeed) that privacy is at stake like never before.
Understand the negative feedback, conform to regulatory standards, and invest in future applications: ABI Research strongly suggests that it is not enough for companies to simply accept the negative pushback; they should make an actual effort to comprehend the message that civil liberties are, indeed, at stake and that biometrics combined with AI has the potential to spiral out of control. If there is a gap in regulation, then address it! Doing so will not only raise a company’s standing but also help develop safe applications for the technology. For example, in the Internet of Things (IoT) space, leading companies (e.g., ARM) that spot a gap in cybersecurity applications actively engage with it and push for regulatory standardization. The same can be achieved with facial recognition surveillance (as well as the rest of biometric technologies), and it can sit at the very core of the next investment rounds for future-looking facial recognition applications with inherent risks (e.g., smart city, smart home, automotive).
For more information regarding future-looking biometric surveillance applications across multiple verticals, please see Transformative Horizon: Biometric Surveillance (PT-2217).