Unmasking Facial Recognition: Trust, Governance, and Ethical Dimensions

Umesh Kumawat
May 5, 2024


Introduction

The human face is a unique and important aspect of human identity: the distinctiveness of each face is what makes it possible to tell one person from another. Because of this universality and uniqueness, face recognition has become the most widely used and accepted AI biometric method. It is a way of recognizing or verifying a person's identity by examining their face in pictures, video, or in real time using AI facial recognition software. The first work on facial recognition systems dates back to AI laboratory research in the late 1950s (Press, 2022), and after several years of research by the Israeli company Face-Six, a commercial face recognition system was released for consumer use in 2002. While recognition technology is mostly utilized for police security and law enforcement, there is expanding interest in other applications such as banking, unlocking phones, airports and border control, improving retail experiences, monitoring gambling addiction, and marketing and advertising. With the release of Apple Face ID, millions of people now have face recognition technology in the palms of their hands (Tillman, 2022). Big companies like Panasonic, IBM, Microsoft and NEC Corporation have been investing heavily in it since the COVID-19 crisis (Yogendra, Khan, Borasi and Kumar, 2022).

Facial recognition systems use AI and machine learning algorithms to pick out specific, distinguishing characteristics of a person's face. Deep learning is the dominant approach to facial analysis, with algorithms routinely trained to learn and extract facial features and attributes from large datasets. The method has four general phases. First, the camera detects and tracks the image of a face, whether the person is alone or in a crowd; the subject may be looking straight at the camera or in any other direction. Next, an image of the target face is captured. Because 2D images are easier to match against public photos or a given database, most AI recognition technology prefers 2D images over 3D. The software reads geometrical features of the face, such as the distance between the eyes, the gap between forehead and chin, and the shape of the cheekbones. The capture process then converts this analogue information into digital form, and the analysis of the face is translated into a mathematical representation known as a faceprint. Each person's faceprint is distinct. Finally, the faceprint is compared against databases of other people's faces (What is Facial Recognition — Definition and Explanation, n.d.).
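
To make these phases concrete, here is a minimal sketch using the open-source face_recognition Python library; this is one possible implementation under stated assumptions, not the pipeline of any particular vendor, and the image file names are placeholders.

```python
# pip install face_recognition  (wraps dlib's face detector and encoder)
import face_recognition

# Phases 1-2: detect and capture the face in a probe image (placeholder path).
probe_image = face_recognition.load_image_file("probe.jpg")
face_locations = face_recognition.face_locations(probe_image)  # bounding boxes

# Phase 3: convert the facial geometry into a numeric "faceprint"
# (here, a 128-dimensional encoding produced by a deep network).
probe_encodings = face_recognition.face_encodings(probe_image, face_locations)

# Phase 4: compare the faceprint against a known face from a database photo.
known_image = face_recognition.load_image_file("database_photo.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

if probe_encodings:
    # Smaller distance = more similar; 0.6 is the library's default threshold.
    distance = face_recognition.face_distance([known_encoding], probe_encodings[0])[0]
    print(f"distance={distance:.3f} -> {'match' if distance < 0.6 else 'no match'}")
```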

However, as with every innovation, adopting this new system in our society brings downsides and concerns. The growth of facial recognition, and what it is capable of, brings with it a host of privacy and civil liberties issues. In a significant court judgement in the United Kingdom, the use of facial recognition by police was declared unlawful because it breaches people's essential right to privacy (Staff, 2020). An analysis of over 100 facial recognition algorithms found that Asian and African people were up to a hundred times more likely than white men to be misidentified (Porter, 2019). Since COVID-19 made face masks an important part of daily life, previously commercialized facial recognition algorithms have also struggled to cope. The following paper addresses the key governance and ethical concerns, as well as trust issues, associated with the deployment of AI in the facial recognition sector. The principles of the EU's proposed AIA and the GDPR, together with the Data for Humanity principles, are used to identify the important governance issues in the following case study.

Trust (Mass Surveillance and Privacy concerns)

As previously discussed, face recognition relies on collecting, analyzing and storing unique photographs for identification purposes. AI-powered face recognition technologies, particularly second-wave biometrics, employ increasingly complex technologies and algorithms to collect highly sensitive and personal data (Renda et al., 2022). Rapid growth in AI generates ever more personal and sensitive data, which is regularly monitored through devices such as surveillance cameras. Cameras in public places may infringe on people's autonomy and seriously affect their freedom; the most problematic issues are those of ethics and privacy (Artificial Intelligence in Society, 2019). Governments have been known to collect photos of residents without their permission. In 2020 the European Commission said it was considering a five-year ban on face recognition technology in public settings, to give it time to develop a legal framework to prevent privacy and ethical violations (Hellard, 2020). Authorization and management risks arise wherever there are opportunities to extend a facial recognition system beyond its original purpose: for example, facial recognition may be extended from passport checking to airport payments and subsequently to an entire city. Such activities raise serious issues under the right to personal data protection in Article 5 of the GDPR and the right to privacy reflected in the EU AIA (Art. 5 GDPR, n.d.; ARTIFICIAL INTELLIGENCE ACT, 2021). Face recognition technology is rapidly being deployed in public places, and it is widely believed that it will soon become the global standard. As a result, such a mass monitoring system may produce new problems, such as the inability to move anonymously in public spaces or a conformism that is harmful to freedom of choice. Furthermore, the collection and storage of facial recognition data pose security vulnerabilities, including the potential for data breach and abuse. Many vendors have been accused of scraping publicly available facial pictures from other websites to fill their databases, and some researchers simply stopped requesting people's permission (Hammer, 2022). Hackers defeated Apple's Face ID authentication mechanism in just two minutes during a Black Hat hackers event (Winder, 2019). In 2020, Clearview AI, a company holding billions of photographs for facial recognition, disclosed that it had been hacked and that its entire client list had been stolen (O'Flaherty, 2020).

[Figure: Accuracy rate of face recognition technologies (Najibi, 2020).]

Ethics (Accuracy and Discrimination Concerns)

Face recognition technology is widely used, but human oversight is difficult to apply. Face recognition systems differ in their ability to recognize people, and no algorithm is entirely accurate in all situations; this is a major cause for concern. Empirical studies show that the technical performance of most facial recognition systems is still limited, and face detection algorithms can make two types of error: a false negative, when the FRT system fails to locate a face that is present in a photograph, and a false positive, when it misidentifies a non-facial structure as a real face (Grother, Ngan and Hanaoka, 2019). When comparing faces against a watchlist, police in South Wales found that 91% of matches were false positives, with the system making 2,451 wrong identifications and only 234 correct ones (Chertoff, 2021). Recent studies have also shown that cameras themselves influence a facial recognition engine: a poor image can make recognition difficult, and some subjects, such as small children or exceptionally tall people, may yield images unsuitable for face recognition. FRT systems are also less accurate when there are significant age gaps. Differences can be introduced at the capture stage, when a single image is taken before any comparison with other photographs, and error rates rise when comparing photographs with different brightness, shadows, backgrounds, poses or expressions, or when using low-quality images. Camera defects, settings, and client-side detection or quality-rating algorithms all affect a facial recognition system, and a lack of training data is another source of algorithmic bias (Leslie, 2018). Given the general risk of errors, several companies have opted to leave the FRT business: Axon, a leading provider of police body cameras in the United States, declined to commercialize face-matching technology due to significant ethical concerns and technological limitations (Coldewey, 2019), while Microsoft and Amazon have halted sales of facial recognition software and services to police, and IBM has indicated that it will exit the industry (Asher Hamilton, 2020).
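
As an illustration of how these two error rates are quantified, the sketch below computes false negative and false positive rates from match scores; the scores are synthetic and the 0.6 threshold is an assumption chosen for the example, not an industry standard.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic similarity scores for illustration only: genuine pairs (same
# person) tend to score high, impostor pairs (different people) score low.
genuine_scores = rng.normal(0.75, 0.10, 10_000)
impostor_scores = rng.normal(0.45, 0.10, 10_000)

threshold = 0.60  # scores above this count as a "match"

# False negative rate: genuine pairs wrongly rejected as non-matches.
fnr = np.mean(genuine_scores <= threshold)
# False positive rate: impostor pairs wrongly accepted as matches.
fpr = np.mean(impostor_scores > threshold)

print(f"FNR: {fnr:.1%}  FPR: {fpr:.1%}")
# Lowering the threshold catches more genuine matches (lower FNR) but admits
# more impostors (higher FPR); tuning this trade-off badly is one way a
# watchlist deployment ends up with mostly false positives.
```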

In addition, one of the biggest issues with face recognition technology is racial bias. Discrimination means treating one individual differently from another in a similar situation. Bias in algorithmic decision-making can arise during the creation, training and execution of face recognition algorithms, from bias embedded in the algorithm itself, or from how the results are interpreted by the person or agency operating the system. Even though face recognition algorithms promise classification accuracy above 90%, these results are not consistent across groups. Troubling findings that call the ethics of facial recognition into question have surfaced repeatedly in recent years. Nearly 117 million adults in the United States have images on law enforcement's face recognition networks, and system faults are more prevalent on dark-skinned faces, whereas light-skinned faces show fewer problems. Independent NIST evaluations of 189 algorithms found that face recognition technology was least accurate for women of color (Grother, Ngan and Hanaoka, 2019). These computational biases have significant real-world consequences. Face recognition is used by different levels of law enforcement and by US Customs and Border Protection to augment policing and airport inspections respectively, and it is increasingly used to examine who gets a job or a place to live. According to an analyst at the American Civil Liberties Union, wrong matches may result in delayed flights, extended interrogations, watchlist placements, unwanted police interactions, mistaken arrests or, worse, wrongful convictions. Even if developers can make the algorithms more equitable, some activists fear that law enforcement will use the technology unfavorably, affecting the poor disproportionately. False positives have been found to disproportionately affect people of color in the United States, undermining the usual presumption of innocence in criminal trials by placing more of the burden on suspects and defendants to prove their innocence (Najibi, 2020). Such outcomes conflict with the European Union AI Act and with Article 21 of the EU Charter of Fundamental Rights, which prohibits discrimination of any kind (Art. 5 GDPR, n.d.; ARTIFICIAL INTELLIGENCE ACT, 2021).
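
To make this kind of audit concrete, here is a minimal sketch of a per-group false positive measurement, using entirely synthetic data; the group labels, sizes and error rates are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic audit data: each record is an impostor comparison (two different
# people) with a demographic group label and the system's match decision.
groups = rng.choice(["group_a", "group_b"], size=20_000)
# Simulate a biased system: impostors in group_b are wrongly matched far
# more often than those in group_a.
p_false_match = np.where(groups == "group_b", 0.030, 0.003)
wrongly_matched = rng.random(20_000) < p_false_match

# Per-group false positive rate: share of impostor pairs accepted as matches.
for g in ("group_a", "group_b"):
    mask = groups == g
    print(f"{g}: FPR = {wrongly_matched[mask].mean():.2%}")
# A tenfold gap like the one simulated here mirrors the kind of demographic
# differential NIST reported, and is exactly what such an audit surfaces.
```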

Governance

In reality, these fundamental rights are still developing and extending their reach into the private sector; they provide some practical direction for the use of facial recognition but cover only the basics, such as data protection and the development of technology. Under Article 2(1) GDPR, the General Data Protection Regulation applies to both automated processing of personal data and manual processing that forms part of a filing system. Both Article 5(1)(f) GDPR and the Data for Humanity principles require data to be handled in a way that ensures appropriate security for personal data, including protection against unauthorized or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organizational measures. Under Article 5(1)(e) GDPR, surveillance footage used for identification should be deleted after a few days, ideally automatically. Data processing software should be designed in line with the data protection by design and by default requirements of Article 25 GDPR. To comply with the GDPR's transparency standards, a two-layered approach is recommended for video surveillance, including warning signs at the entrance to the monitored area. Under Article 9 GDPR and the Data for Humanity principles, no personal data revealing racial or ethnic origin, religion, sexual orientation or other uniquely identifying characteristics may be processed without the prior consent of the person concerned; if the person is incapable of giving consent, the data may be processed only with safeguards, and data must never be disclosed to anyone without prior notification of the subject. Article 22 provides that no decision based solely on automated processing may be made about a data subject without their consent, unless authorized by Union or Member State law with suitable safeguards for the subject's rights and freedoms. Article 28 clarifies that when data is processed on behalf of the controller by other processors, (general or specific) authorization is also required, and it must state all relevant information, including but not limited to the subject matter, duration, nature and purpose of the processing (General Data Protection Regulation (GDPR) — Official Legal Text, 2018; ARTIFICIAL INTELLIGENCE ACT, 2021; V. Zicari and Zwitter, n.d.).
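
As a sketch of how the Article 5(1)(e) storage-limitation point (automatic deletion after a few days) might be operationalized, assuming a hypothetical footage directory and an illustrative 72-hour retention window, neither of which the GDPR itself prescribes:

```python
import time
from pathlib import Path

# Assumed values for illustration; the GDPR fixes no specific number of days,
# it requires storage to be limited to what the stated purpose needs.
FOOTAGE_DIR = Path("/var/surveillance/footage")
RETENTION_SECONDS = 72 * 3600  # e.g. a 72-hour retention policy

def purge_expired_footage() -> None:
    """Delete surveillance clips older than the retention window."""
    cutoff = time.time() - RETENTION_SECONDS
    for clip in FOOTAGE_DIR.glob("*.mp4"):
        if clip.stat().st_mtime < cutoff:
            clip.unlink()  # automatic erasure, per storage limitation

if __name__ == "__main__":
    purge_expired_footage()  # typically run on a schedule, e.g. via cron
```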

The proposed European Union AI Act likewise states that recognition technologies, which are increasingly being used to identify persons, may be extremely beneficial in ensuring public safety and security. However, they can be intrusive, and the chance of algorithmic mistakes is significant. The use of these technologies can therefore affect people's fundamental rights, result in discrimination, violate the right to privacy, and even lead to mass monitoring. Certain especially harmful AI practices are prohibited (Article 5) as a violation of Union values, because they endanger people's safety, livelihoods and freedoms; this covers both government social scoring and technology that subliminally manipulates human behaviour, which is banned under Article 5 because it violates human rights and poses unforeseeable risk. Under Article 61, high-risk AI systems are subject to post-market monitoring, complementing the conformity assessment required before they are placed on the market. AI technology applied in critical infrastructure, education, administration, and migration and border control management is classified as high-risk (a list extendable under Article 7) because of its impact on fundamental rights. The new framework, built on a risk-based approach, establishes a slew of rules and requirements for AI system development, market placement and use; under this pyramidal approach, AI systems that pose a demonstrable threat to people's safety and basic rights would be barred from the EU market due to the unacceptable risk they represent (ARTIFICIAL INTELLIGENCE ACT, 2021).

Impact of Governance

The GDPR was formally adopted by the European Parliament in April 2016. It applies to everyone involved in processing individuals' data in the context of selling facial recognition services in the European Union. Article 6 GDPR sets out six lawful bases, outside which companies cannot legally process data, and additional care is required for sensitive personal data. Video surveillance is a high-risk operation because the information is highly valuable and can easily be used to detect theft and fraud; it should only record data for its intended purpose and for no other reason. The GDPR also specifies that the controller cannot process data without a lawful basis and, in the case of a breach, must notify the supervisory authority within 72 hours. Article 35 requires documented compliance, notably a data protection impact assessment (DPIA), especially where processing involves very sensitive data. Seven principles must be followed when processing personal data: lawfulness, purpose limitation, accuracy, storage limitation, data minimization, integrity and confidentiality, and accountability. Organizations can be fined up to 4% of annual global turnover for non-compliance (Barnoviciu et al., 2019).

In February 2020, the European Commission released a White Paper on Artificial Intelligence, underlining the consequences of using remote biometric identification AI systems, particularly face recognition technology, in the EU. The proposed law contemplated a moratorium, and even a permanent ban, on the use of facial recognition in public places, following a risk-based classification: risks are categorized as prohibited, high, limited and minimal. The Commission also proposed that real-time processing should not be used for petty offences but only for serious criminal offences such as terrorism, while ordinary uses of facial recognition, for instance in grocery stores and shopping malls, would not be affected. The Commission further wants to introduce a self-assessment system to standardize how systems handling people's information are evaluated. There would be major consequences, especially for those who supply high-risk facial recognition AI systems: to be able to sell their AI goods and services in the Union, they would have to meet a range of regulations aimed at protecting consumers' safety, health and basic rights. National market surveillance agencies would be in charge of ensuring that face recognition businesses adhere to the requirements and rules for high-risk AI systems, with the power to restrict them or remove them from the market if they do not.

Finally, the Data for Humanity principles address challenges such as climate change, migration and violations of people's personal data, among others. The ability to use data, particularly personal data, comes with the obligation to use it equitably for the general benefit of all people, that is, to serve humanity. To use data fairly, companies must follow principles including: using data in ways that do not harm people, supporting peaceful coexistence, helping those in need, protecting the environment, and promoting equality among people throughout the world. These criteria must be observed when using face recognition data, and no data may be shared or used for purposes other than those stated without prior authorization (V. Zicari and Zwitter, n.d.).

Conclusion

Due to the rapid growth of artificial intelligence, facial recognition technology is increasingly being deployed by government authorities for security and by commercial organizations to improve productivity. At the same time, facial recognition technology jeopardizes basic rights such as the right to privacy and data protection, can result in discrimination, and has an unpredictable and difficult-to-measure influence on democracy and liberty. Artificial intelligence, and facial recognition in particular, is governed in the European Union by existing legal norms and national data protection laws, primarily data protection rules. Because face recognition technology processes biometric data and puts individual rights at risk, the GDPR imposes strict criteria and restrictions on it.

Although existing data privacy standards must be enforced and implemented, they may not be sufficient to address all the problems raised by the use of facial recognition technology. Beyond the potential risks to basic rights and freedoms, the broader influence of modern technologies on society must be examined. More limits on the use of facial recognition technology by commercial and government entities are urgently needed to build confidence in these technologies. It is vital to clarify when, why and how they may be used, as well as the significance of accuracy and fairness assessments and the extent of government oversight.

References

ARTIFICIAL INTELLIGENCE ACT, 2021. [online] Available at: <https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206&from=EN>

Asher Hamilton, I., 2020. Outrage over police brutality has finally convinced Amazon, Microsoft, and IBM to rule out selling facial recognition tech to law enforcement. Here’s what’s going on. [online] Business Insider. Available at: <https://www.businessinsider.com/amazon-microsoft-ibm-halt-selling-facial-recognition-to-police-2020-6?r=US&IR=T>

Barnoviciu, E., Ghenescu, V., Vasile Carata, S., Ghenescu, M., Mihaescu, R. and Chindea, M., 2019. GDPR compliance in Video Surveillance and Video Processing Application. [online] Ieeexplore.ieee.org. Available at: <https://ieeexplore.ieee.org/abstract/document/8906553>

Coldewey, D., 2019. Police body cam maker Axon says no to facial recognition, for now. [online] TechCrunch. Available at: <https://techcrunch.com/2019/06/27/police-body-cam-maker-axon-says-no-to-facial-recognition-for-now/>

General Data Protection Regulation (GDPR). n.d. Art. 5 GDPR — Principles relating to processing of personal data — General Data Protection Regulation (GDPR). [online] Available at: <https://gdpr-info.eu/art-5-gdpr/>

Chertoff, P., 2021. Facial Recognition Has Its Eye on the U.K.. [online] Lawfare. Available at: <https://www.lawfareblog.com/facial-recognition-has-its-eye-uk>

Grother, P., Ngan, M. and Hanaoka, K., 2019. Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects. [online] National Institute of Standards and Technology. Available at: <https://doi.org/10.6028/NIST.IR.8280>

Hammer, A., 2022. Clearview AI seeking to put 100b photos in facial recognition database. [online] Mail Online. Available at: <https://www.dailymail.co.uk/news/article-10523739/Clearview-AI-seeking-100-billion-photos-facial-recognition-database.html>

Hellard, B., 2020. EU considers a five-year ban on facial recognition. [online] IT Pro. Available at: <https://www.itpro.co.uk/security/privacy/354570/eu-considers-a-five-year-ban-on-facial-recognition>

O’Flaherty, K., 2020. Clearview AI, The Company Whose Database Has Amassed 3 Billion Photos, Hacked. [online] Forbes. Available at: <https://www.forbes.com/sites/kateoflahertyuk/2020/02/26/clearview-ai-the-company-whose-database-has-amassed-3-billion-photos-hacked/?sh=69e8ffeb7606>

Leslie, D., 2018. Understanding bias in facial recognition technologies. [online] Turing.ac.uk. Available at: <https://www.turing.ac.uk/sites/default/files/2020-10/understanding_bias_in_facial_recognition_technology.pdf>

Najibi, A., 2020. Racial Discrimination in Face Recognition Technology — Science in the News. [online] Science in the News. Available at: <https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/>

Porter, J., 2019. Federal study of top facial recognition algorithms finds ‘empirical evidence’ of bias. [online] The Verge. Available at: <https://www.theverge.com/2019/12/20/21031255/facial-recognition-algorithm-bias-gender-race-age-federal-nest-investigation-analysis-amazon>

Press, G., 2022. A Very Short History Of Artificial Intelligence (AI). [online] Forbes. Available at: <https://www.forbes.com/sites/gilpress/2016/12/30/a-very-short-history-of-artificial-intelligence-ai/?sh=433706666fba>

Renda, A., Arroyo, J., Sipiczki, A., Maridis, G., Fernandes, M., Endrodi, G., Milio, S., Devenyi, V., Georgiev, S. and de Pierrefeu, G., 2022. Study supporting the impact assessment of the AI regulation. [online] Shaping Europe’s digital future. Available at: <https://digital-strategy.ec.europa.eu/en/library/study-supporting-impact-assessment-ai-regulation>

Staff, R., 2020. [online] Reuters. Available at: <https://www.reuters.com/article/us-britain-tech-privacy-idUSKCN2572B8>

Tillman, M., 2022. What is Apple Face ID and how does it work?. [online] Pocket-lint. Available at: <https://www.pocket-lint.com/phones/news/apple/142207-what-is-apple-face-id-and-how-does-it-work>

V. Zicari, R. and Zwitter, A., n.d. Data for Humanity — Big Data Lab. [online] Bigdata.uni-frankfurt.de. Available at: <http://www.bigdata.uni-frankfurt.de/dataforhumanity/> [Accessed 31 May 2022].

Winder, D., 2019. Apple’s iPhone FaceID Hacked In Less Than 120 Seconds. [online] Forbes. Available at: <https://www.forbes.com/sites/daveywinder/2019/08/10/apples-iphone-faceid-hacked-in-less-than-120-seconds/?sh=c4bf12521bc3>

Yogendra, B., Khan, S., Borasi, P. and Kumar, V., 2022. Facial Recognition Market Size, Trends | Growth Factors — 2030. [online] Allied Market Research. Available at: <https://www.alliedmarketresearch.com/facial-recognition-market>
