Fellow, Artificial Intelligence and Machine Learning, World Economic Forum
Bans on facial recognition may not be the right approach to effectively mitigate the risks of an already ubiquitous technology. A risk-based approach could be the answer.
Facial Recognition Technology (FRT) has come under increasing scrutiny for potentially undermining our privacy, misidentifying people, perpetuating systemic racism, and contributing to surveillance infrastructure.
So far, the response from policymakers has predominantly focused on banning the use of FRT to prevent any harm. In the US, San Francisco and Boston have set the tone for others to follow.
Given the current shortcomings of the technology, this response may seem persuasive for high-risk situations. But most use cases don’t carry the same level of risk. Tailored regulation – rather than bans – should be considered a viable alternative.
Risks vary depending on the context
Each scenario carries a different risk level; for example, using FRT in a criminal investigation carries far more weight than streamlining an airplane boarding process. Tailored regulation is essential to resolving the significant concerns involved with the technology.
Considering the context in which FRT is used can help accurately characterise risks. Once these risks are identified, organisations can then set internal processes to mitigate them by training their staff to recognise potential biases and identify risks that could emerge. These “risk-mitigation processes” are similar to the already ubiquitous “privacy-by-design” approach.
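To make the idea concrete, a risk-based framework could map each deployment context to a risk tier, with stricter mitigation controls attached to higher tiers. The sketch below is purely illustrative: the contexts, tiers, and controls are hypothetical assumptions chosen for this example, not drawn from any actual regulation or standard.

```python
# Illustrative sketch of a risk-based approach to FRT governance.
# All contexts, tiers, and controls below are hypothetical examples.

RISK_TIERS = {
    "airport_boarding": "low",         # opt-in convenience use, supervised setting
    "device_unlock": "low",            # on-device matching, user-initiated
    "retail_analytics": "medium",      # passive capture of people in a store
    "criminal_investigation": "high",  # misidentification can cost someone their liberty
}

CONTROLS = {
    "low": ["privacy notice", "opt-out mechanism"],
    "medium": ["privacy notice", "opt-out mechanism", "bias audit"],
    "high": ["privacy notice", "bias audit",
             "human review of every match", "external certification"],
}

def required_controls(context: str) -> list[str]:
    """Return the mitigation controls required for a given deployment context."""
    # Unknown contexts default to the highest tier, erring on the side of caution.
    tier = RISK_TIERS.get(context, "high")
    return CONTROLS[tier]
```

Under this kind of scheme, a boarding-pass check would only require lightweight transparency measures, while a criminal investigation would trigger human review and external certification by default.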
A robust digital identity tool to harness
Our dependency on the digital world is growing at an unprecedented pace. A technology like facial recognition carries the potential for strengthening trust on the internet. It can help protect our digital identities by replacing logins and passwords with the simple use of our face.
We must find ways to chart a path that continues to improve this technology and ultimately leverages its full potential, mitigating risks along the way rather than obliterating the technology altogether.
Self-assessment and certification
To gain trust from end-users and citizens, tech companies and FRT users (such as airports or law enforcement agencies) need to take further steps, such as setting up internal self-assessments and external certification processes. Certification bodies provide labels of compliance in many areas, such as personal data protection. In many industries, policymakers and technology users already use these mechanisms to build trust and transparency. Facial recognition should follow this path rather than being treated as a one-size-fits-all solution.