International Research Journal of Engineering and Technology (IRJET)
e-ISSN: 2395-0056 | p-ISSN: 2395-0072
Volume: 12 Issue: 12 | Dec 2025
www.irjet.net
The Privacy–Bias–Accountability Triad: Diagnosing Systemic Ethical Failures in Facial Recognition Technology

Rania Khan1, Safa Fatima2, Nazifa Hajwane3, Shashi Saxena4, Prof. Ashok Yadav5

1,2,3,4 Student, Department of Information Technology, Reena Mehta College of Arts, Commerce, Science & Management Studies, Mira Bhayandar West, Maharashtra 401101, India
5 Professor and Head, Department of Information Technology, Reena Mehta College of Arts, Commerce, Science & Management Studies, Mira Bhayandar West, Maharashtra 401101, India

***
Abstract - Facial recognition technology (FRT) is a rapidly growing technology that has already established its presence in many sectors, offering substantial benefits in security, convenience, and efficiency. At the same time, its deployment at such a large scale raises serious ethical issues that have attracted scholarly attention. This paper presents the ethical landscape of FRT in a thorough manner, discussing three major aspects: individual privacy violation, systemic algorithmic bias, and institutional accountability. A qualitative research design comprising a systematic literature review, legal-policy mapping, and case-study synthesis is used to characterize the current state of the ethical debate and regulatory response. The main results show a conflict between technological utility and fundamental human rights, particularly with respect to anonymity, dignity, and non-discrimination. The paper proposes a conceptual model (the Privacy–Bias–Accountability Triad) to clarify the relationship between these ethical dimensions. In the absence of strong regulatory frameworks, the ethical risks brought about by FRT can exceed its social benefits.

Key Words: Privacy, Algorithmic Bias, Accountability, Surveillance, Ethics, Facial Recognition, Governance.

1. INTRODUCTION The widespread use of digital technologies has led to biometrics, and especially facial images, being used more and more often as identifiers. Facial Recognition Technology (FRT) relies on deep learning algorithms to map and authenticate facial features, making it a transformative tool in security, commerce, and everyday digital activities. Its uses range from unlocking smartphones to recognizing people in large gatherings, and they continue to grow. However, as its use moves from controlled settings into open public spaces, the ethical implications become clearer. FRT raises questions about the right to remain anonymous, the need for consent, and the quality of participation in democratic processes. Recent literature has pointed out that privacy loss and lack of governance in AI surveillance systems are among the main concerns [1], [2], [8]. This paper addresses how societies can retain the legitimate advantages of FRT while simultaneously protecting individual rights.

1.1 BACKGROUND AND CONTEXT The development of FRT accelerated at the end of the 20th century and the beginning of the 21st century, as it evolved from basic pattern-matching systems into complex neural networks capable of high-speed recognition. The adoption of FRT by both the public and private sectors has fueled a worldwide debate about the extent of privacy invasion, misuse of data, and worst-case scenarios. Among the fears are constant surveillance, police overreach, and biometric monitoring becoming commonplace.

1.2 STATEMENT OF THE PROBLEM The use of unregulated FRT in many places threatens democratic values and civil liberties, despite its operational benefits. Without a globally accepted standard, societies will suffer the consequences of privacy violations, inequitable outcomes, and a lack of responsibility. Misidentification in law enforcement creates serious ethical and legal risks, including wrongful arrests and racial discrimination [9].
© 2025, IRJET | Impact Factor value: 8.315
1.3 RESEARCH OBJECTIVES
1. Determine the ways in which FRT violates individuals' right to privacy.
2. Investigate the causes and consequences of algorithmic bias in FRT systems.
3. Examine the existing regulations for holding FRT operators accountable in cases of misuse and error.
4. Build a conceptual model that connects the areas of privacy, bias, and accountability.
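To make objective 2 concrete, the demographic bias discussed in this paper is often quantified in the fairness literature as a disparity in false match rates (FMR) across population groups. The sketch below is a minimal, hypothetical illustration of that measurement, not part of the paper's methodology; all scores and group names are fabricated for demonstration.

```python
# Hypothetical sketch: measuring demographic disparity in false match
# rates (FMR), one common way algorithmic bias in FRT is quantified.
# All data below is fabricated for illustration only.

def false_match_rate(impostor_scores, threshold):
    """Fraction of impostor (different-person) comparisons whose
    similarity score meets the match threshold, i.e. false matches."""
    if not impostor_scores:
        return 0.0
    false_matches = sum(1 for s in impostor_scores if s >= threshold)
    return false_matches / len(impostor_scores)

def fmr_disparity(scores_by_group, threshold):
    """Per-group FMRs plus the ratio of the worst (highest) group FMR
    to the best (lowest); a ratio of 1.0 would indicate parity."""
    rates = {g: false_match_rate(s, threshold)
             for g, s in scores_by_group.items()}
    worst, best = max(rates.values()), min(rates.values())
    ratio = worst / best if best > 0 else float("inf")
    return rates, ratio

# Fabricated impostor similarity scores for two demographic groups.
scores = {
    "group_a": [0.21, 0.35, 0.92, 0.18, 0.40],   # 1 of 5 scores >= 0.9
    "group_b": [0.91, 0.95, 0.30, 0.93, 0.22],   # 3 of 5 scores >= 0.9
}
rates, ratio = fmr_disparity(scores, threshold=0.9)
print(rates)   # per-group false match rates
print(ratio)   # disparity ratio; values well above 1 signal unequal error rates
```

A disparity ratio well above 1 is the kind of inequitable outcome the paper's accountability discussion targets: the same threshold produces different error rates for different groups.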
ISO 9001:2008 Certified Journal | Page 48