
International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056

Volume: 12 Issue: 12 | Dec 2025 www.irjet.net p-ISSN: 2395-0072

The Privacy-Bias-Accountability Triad: Diagnosing Systemic Ethical Failures in Facial Recognition Technology

Rania Khan1, Safa Fatima2, Nazifa Hajwane3, Shashi Saxena4, Prof. Ashok Yadav5

1,2,3,4Student, Department of Information Technology, Reena Mehta College of Arts, Commerce, Science & Management Studies, Mira Bhayandar West, Maharashtra 401101, India
5Professor and Head, Department of Information Technology, Reena Mehta College of Arts, Commerce, Science & Management Studies, Mira Bhayandar West, Maharashtra 401101, India
***

Abstract - Facial recognition technology (FRT) is a rapidly growing technology that has established a presence in many sectors, offering substantial benefits in security, convenience, and efficiency. At the same time, its deployment at such a large scale raises serious ethical issues that have attracted scholarly attention. This paper presents the ethical landscape of FRT in a thorough manner, organized around three major dimensions: violation of individual privacy, systemic algorithmic bias, and institutional accountability. A qualitative research design comprising a systematic literature review, legal-policy mapping, and case-study synthesis is used to characterize the current state of the ethical debate and the regulatory response. The main results show a conflict between technological utility and fundamental human rights, particularly concerning anonymity, dignity, and non-discrimination. The paper proposes a conceptual model, the Privacy–Bias–Accountability (PBA) Triad, to clarify the relationships between these ethical dimensions. In the absence of strong regulatory frameworks, the ethical risks posed by FRT can exceed its social benefits.

Key Words: Privacy, Algorithmic Bias, Accountability, Surveillance, Ethics, Facial Recognition, Governance.

1. INTRODUCTION

The widespread use of digital technologies has reached the point where biometrics, and especially facial images, are used more and more often as identifiers. Facial Recognition Technology (FRT) relies on deep learning algorithms to map and authenticate facial features, and has thus become a transformative tool in security, commerce, and everyday digital activities. Its uses range from unlocking smartphones to recognizing people in large gatherings, and they continue to grow. However, as its use moves from controlled settings into open public environments, the ethical implications become clearer. FRT raises questions about the right to remain anonymous, the need for consent, and the quality of participation in democratic processes. Recent literature has pointed out that privacy loss and lack of governance in AI surveillance systems are among the main concerns [1], [2], [8]. This paper addresses how societies can retain the legitimate advantages of FRT while simultaneously protecting individual rights.

1.1 BACKGROUND AND CONTEXT

The development of FRT accelerated at the end of the 20th century and the beginning of the 21st, as it evolved from basic pattern-matching systems into complex neural networks capable of high-speed recognition. The use of FRT by both the public and private sectors has been the subject of a worldwide debate about the extent of privacy invasion, misuse of data, and worst-case scenarios. Among the fears are constant surveillance, police overreach, and the normalization of biometric monitoring.

1.2 STATEMENT OF THE PROBLEM

The use of unregulated FRT in many settings threatens democratic values and civil liberties, despite its operational benefits. Without a globally accepted standard, societies will suffer the consequences of privacy violations, inequitable outcomes, and a lack of responsibility. Misidentification in law enforcement creates serious ethical and legal risks, including wrongful arrests and racial discrimination [9].

1.3 RESEARCH OBJECTIVES

1. Determine the ways in which FRT violates individuals' right to privacy.

2. Examine the causes and consequences of bias in the algorithms that underpin FRT systems.

3. Assess the regulations in place to hold FRT operators accountable in cases of misuse and mistakes.

4. Build a conceptual model that connects the areas of privacy, bias, and accountability.



1.4 SIGNIFICANCE OF THE STUDY

By bringing the ideas of privacy, bias, and accountability together into one analytical model, this research advances an important argument in technology ethics. The results not only help decision-makers design regulations, but also support developers in incorporating ethics into their designs, and make the public more aware of the social trade-offs that come with biometric surveillance.

2. LITERATURE REVIEW

2.1 Technical Literature

Technical literature shows that FRT accuracy is strongly affected by lighting conditions, facial angle, resolution, and demographic factors. A pioneering work by Buolamwini and Gebru pointed out large racial and gender differences in the performance of commercial systems [1]. A 2023–2024 multistage survey traces racial bias across the entire FRT pipeline, from image capture to verification, and identifies training data and system architecture as the main sources of this bias [2]. A 2025 review of privacy-preserving FRT methods favors synthetic data, cryptographic masking, and federated learning architectures, but still warns of large accuracy–privacy trade-offs [10].
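The disaggregated evaluation at the heart of this literature can be made concrete with a short sketch. The Python snippet below, using entirely hypothetical records, illustrates the kind of per-group accuracy comparison an audit in the spirit of Buolamwini and Gebru [1] performs; a real audit would use a benchmark dataset with verified demographic labels and standardized metrics.

```python
from collections import defaultdict

def disaggregated_accuracy(records):
    """Compute per-group accuracy from (group, predicted, actual) records."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical verification outcomes, tagged by demographic group.
records = [
    ("group_a", "match", "match"), ("group_a", "match", "match"),
    ("group_a", "no_match", "no_match"), ("group_a", "match", "match"),
    ("group_b", "no_match", "match"), ("group_b", "match", "match"),
    ("group_b", "no_match", "match"), ("group_b", "no_match", "no_match"),
]

rates = disaggregated_accuracy(records)
print(rates)  # {'group_a': 1.0, 'group_b': 0.5}
print("accuracy gap:", max(rates.values()) - min(rates.values()))  # 0.5
```

The audit's finding is the gap itself: a system can report high aggregate accuracy while performing far worse for one group, which is exactly the disparity documented in [1] and [2].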

2.2 Legal and Regulatory Literature

According to the legal literature, the regulatory framework is fragmented. The GDPR provides some of the strictest biometric safeguards in Europe, whereas most other parts of the world have no similar regulations. The U.S. is notably decentralized, adopting city-level restrictions instead of federal regulation. A 2024 review suggests that consistent privacy enforcement and governance remain remote, even as AI regulatory efforts emerge [8]. Table 1 summarizes selected regulatory responses.

Table -1: Selected Global Regulatory Responses to FRT

| Country/City       | Measure               | Year | Description                                                   |
|--------------------|-----------------------|------|---------------------------------------------------------------|
| San Francisco, USA | Government ban        |      | First major city to ban police use of FRT.                    |
| Boston, USA        | Citywide ban          | 2020 | Prohibited use by city agencies due to civil rights concerns. |
| European Union     | Proposed AI Act       | 2021 | Restricts real-time biometric surveillance in public spaces.  |
| Toronto, Canada    | Temporary moratorium  | 2022 | Halted FRT use in public housing due to privacy risks.        |
| India              | No formal ban         |      | Police use continues; legal frameworks under debate.          |
| UK (London)        | Regulated use         | 2020 | Police use allowed with strict proportionality rules.         |

2.3 Ethical Discourse

Ethicists have raised concerns about "surveillance capitalism," the continuous tracking of people's behavior which, according to its critics, leads to the loss of control and anonymity [4]. The ethical issues become more serious as FRT applications spread beyond their initial deployments, a phenomenon known as "mission creep." The existing accountability mechanisms are weak; hence they neither challenge nor control the prevailing privacy violations and biased outcomes. The contemporary academic debate emphasizes that a lack of accountability amplifies privacy and bias harms [11].

2.4 Identification of Research Gaps

The literature comprehensively covers the issues of privacy and the harms caused by algorithmic bias, but does not connect them to accountability as a major factor that influences both. Moreover, in the rare cases where all three factors - liability, auditability, and transparency - are brought together, they are not incorporated into governance models. The proposed study fills this gap with the PBA Triad.

3. RESEARCH METHODOLOGY

3.1 Research Design

The basis of this investigation is a systematic review of the preexisting literature on Facial Recognition Technology's (FRT) ethical landscape. In addition, we conducted a comparative policy analysis to assess the various regulatory measures. This qualitative method is well suited to multidimensional ethical issues, where interpretation and normative reasoning play a central role.

3.2 Data Collection Methods

Aiming to form a complete understanding, we drew the literature from four major academic databases: Google Scholar (for the widest coverage), IEEE Xplore (for the technical side), Scopus (for the social sciences), and JSTOR (for law and philosophy). Our search strategy used specific terms such as "facial recognition", "ethics", "privacy", "bias", and "accountability" to find the most relevant literature. Our main focus was on publications from 2018 to 2025, covering both the foundational studies [1], [4] and the latest debates [2], [8], [10], [11].

3.3 Sampling Technique

The most informative sources were selected through purposive sampling. We applied fixed criteria: included sources had to be peer-reviewed articles or major institutional reports offering in-depth treatment of privacy, bias, or accountability in FRT. Technical papers lacking an ethical discourse were excluded. The selection procedure consisted of an initial screening of titles and abstracts followed by a full-text reading, filtering a large initial set of results down to a focused, high-quality collection that was then subjected to in-depth analysis.

3.4 Data Analysis Methods

The selected literature was analyzed in two interlinked steps:

3.4.1 Thematic Analysis: First, we read and reread the texts thoroughly, pinpointing and coding the main areas of discourse and the arguments that recurred (for example, loss of anonymity or demographic disparity). These codes were then gathered and assigned to the overarching themes that form the backbone of this paper's findings: privacy infringement, algorithmic bias, and the accountability deficit.
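To make the coding step tangible, here is an illustrative sketch assuming a simple keyword-based first pass; in the actual study, codes were assigned through close reading rather than automatically, and the excerpt text below is invented for the example.

```python
# Indicator terms per theme; a real codebook would be richer and hand-curated.
THEME_CODES = {
    "privacy_infringement": ["anonymity", "consent", "surveillance"],
    "algorithmic_bias": ["disparity", "misidentification", "demographic"],
    "accountability_deficit": ["liability", "audit", "redress"],
}

def code_excerpt(text):
    """Return the themes whose indicator terms appear in an excerpt."""
    lowered = text.lower()
    return {theme for theme, terms in THEME_CODES.items()
            if any(term in lowered for term in terms)}

excerpt = ("Deployment without consent erodes anonymity, and victims of "
           "misidentification have no route to redress.")
print(sorted(code_excerpt(excerpt)))
# ['accountability_deficit', 'algorithmic_bias', 'privacy_infringement']
```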

3.4.2 Normative Synthesis: Next, we went further than mere theme identification: we critically evaluated the arguments and weighed opposing values and solutions. It was during this synthesis that the Privacy–Bias–Accountability (PBA) Triad emerged as a new model showing how the ethical challenges are interrelated.

4. CONCEPTUAL MODEL: THE PRIVACY–BIAS–ACCOUNTABILITY (PBA) TRIAD

The PBA Triad demonstrates the dynamic interaction of the three dimensions of privacy, bias, and accountability:

• Privacy: FRT leads to the risk of anonymity loss, purpose creep, and violation of consent.

• Bias: The problems relate to differences in accuracy, misidentification, and impact on certain groups.

• Accountability: There are issues such as lack of liability, absence of auditing, and poor redress mechanisms.

An accountability failure will result in an escalation of privacy and bias issues, while a biased system will erode trust in privacy safeguards. This comprehensive model is the basis for ethical governance.
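The Triad's central claim, that weak accountability amplifies the other two harms, can be expressed as a toy model. The multipliers below are illustrative assumptions, not empirical estimates; the sketch only encodes the direction of the interaction described above.

```python
from dataclasses import dataclass

@dataclass
class PBAAssessment:
    privacy_risk: float     # 0..1, e.g. from a data-protection impact review
    bias_risk: float        # 0..1, e.g. from a disaggregated accuracy audit
    accountability: float   # 0..1, strength of auditing, liability, and redress

    def effective_risks(self):
        """Weak accountability scales the other two harms upward (assumed linear)."""
        amplifier = 2.0 - self.accountability  # 1.0 (strong) up to 2.0 (absent)
        return {
            "privacy": round(min(1.0, self.privacy_risk * amplifier), 2),
            "bias": round(min(1.0, self.bias_risk * amplifier), 2),
        }

weak = PBAAssessment(privacy_risk=0.4, bias_risk=0.3, accountability=0.1)
strong = PBAAssessment(privacy_risk=0.4, bias_risk=0.3, accountability=0.9)
print(weak.effective_risks())    # {'privacy': 0.76, 'bias': 0.57}
print(strong.effective_risks())  # {'privacy': 0.44, 'bias': 0.33}
```

The same baseline risks produce very different effective harms depending on the accountability dimension, which is the Triad's diagnostic point.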

5. RESULTS AND DISCUSSION

Table -2: Summary of Key Ethical Challenges in FRT Deployment

| Ethical Concern        | Domain Affected           | Observed Consequence                                                                        |
|------------------------|---------------------------|---------------------------------------------------------------------------------------------|
| Privacy violation      | Individual rights         | Chilling effect on freedom of speech and assembly; loss of anonymity in public spaces.       |
| Algorithmic bias       | Social equity             | Misidentification, wrongful arrests, discrimination in access to services.                   |
| Accountability deficit | Governance & rule of law  | Institutional denial of responsibility, slow regulatory response, lack of redress for harm.  |

5.1 Privacy as the Central Ethical Loss

Among the ethical issues raised by present-day automated facial recognition, the violation of privacy is the most significant. Although it is to some extent possible to consent to the processing of facial data, that consent rarely extends to the purposes for which the data is later repurposed; this is the problem of function creep. Effective privacy protections must therefore be structural safeguards that prevent misuse, extending beyond the mere consent given by data subjects.

5.2 Persistence of Algorithmic Bias

Bias in FRT is not simply a technical error; it signifies a systemic problem caused by prejudiced datasets, poorly built models, and inappropriate socio-technical deployment contexts. A 2024 pipeline-wide analysis confirmed the existence of racial disparities at every phase [2]. Recent policing studies demonstrate that enforcement contexts amplify such biases, producing wrongful arrests and racial disparities [9].

5.3 Accountability Deficit

Failures of accountability represent one of the main gaps in governance. The ambiguity over whether responsibility lies with developers, integrators, or deployers allows each to evade liability. In the absence of external audits and mandated transparency, commitments to privacy and fairness will remain ineffective, as there will be no means of enforcing the rights of the subjects involved. The PBA Triad highlights that accountability is the keystone on which privacy and fairness protections rely.


6. COMPARISON WITH PREVIOUS STUDIES

The documented privacy violations and discrimination confirm the foundational arguments of researchers such as Zuboff [4] and Buolamwini and Gebru [1]. However, the main contribution of the current paper is not to restate these problems but to link them through the Privacy–Bias–Accountability (PBA) Triad. Previous studies have frequently analyzed these issues separately; this framework shows their inherent connections and places the lack of accountability as the main problem exacerbating privacy and bias harms. For example, bias is documented empirically in [1], [3], while [4] frames surveillance as a moral problem; the PBA Triad acts as a common diagnostic framework linking the cause (accountability failure) to the effects (privacy and bias harms). This integrated view, alluded to in the emerging governance literature [11], is here mapped out as a unified framework, presenting a new way of diagnosing and addressing the systemic ethical failures of FRT.

7. CONCLUSION AND RECOMMENDATIONS

7.1 Summary of Key Findings

This study finds that the ethical challenges of facial recognition technology (FRT) are not independent matters but an interlocking convergence of privacy invasion, algorithmic bias, and a lack of accountability. The PBA Triad model leads to the conclusion that no single issue can be resolved in isolation. They can only be addressed through integrated governance that simultaneously secures the privacy rights of individuals, guarantees fairness in the algorithmic process, and assigns accountability to the developers and users of FRT. Otherwise, FRT will remain a double-edged sword cutting ever deeper into democratic values and social justice.

7.2 Limitations of the Study

Being a qualitative study based on a literature review, this research lacks the full authority that empirical work can provide. The findings would gain weight and strength from studies that measure and report the psychological and behavioral impact of FRT surveillance on different communities.

7.3 Suggestions for Future Research

• Standardized Audit Frameworks: The first step is to create and validate specific, replicable protocols for bias audits of FRT systems, including the use of standardized datasets and evaluation metrics.

• Cross-Cultural Governance Models: The next step is to carry out comparative studies of FRT regulation in different political and cultural contexts to identify transferable best practices.

• Technical Solutions Evaluation: The third suggestion is an empirical assessment of privacy-enhancing technologies (PETs) in real-world FRT deployments, focusing on the trade-offs between privacy preservation and system performance (a toy sketch of such a measurement follows this list).

• Longitudinal Behavioral Studies: Finally, research is needed to measure the impact of long-term exposure to FRT surveillance on public behavior, free expression, and participation in public life among various demographic groups.
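As a sketch of the trade-off measurement proposed in the third point above, the snippet compares a baseline matcher against a privacy-enhanced variant on identical trials. Both matchers are hypothetical stand-ins rather than real PET implementations; only the evaluation protocol is the point.

```python
import random

def accuracy(match_fn, trials):
    """Fraction of (probe, gallery, same_person) trials decided correctly."""
    return sum(match_fn(p, g) == same for p, g, same in trials) / len(trials)

random.seed(0)
# Toy trials: even indices are genuine pairs, odd indices are impostor pairs.
trials = [(i, i if i % 2 == 0 else i + 1, i % 2 == 0) for i in range(200)]

baseline = lambda probe, gallery: probe == gallery
# Stand-in for a PET-protected matcher: the added protection flips ~10% of decisions.
pet = lambda probe, gallery: (probe == gallery) if random.random() > 0.1 else (probe != gallery)

base_acc, pet_acc = accuracy(baseline, trials), accuracy(pet, trials)
print(f"baseline accuracy: {base_acc:.1%}")            # 100.0% on this toy data
print(f"PET accuracy:      {pet_acc:.1%}")             # lower
print(f"utility cost:      {base_acc - pet_acc:.1%}")  # the accuracy-privacy trade-off
```

Reported this way, the "utility cost" makes the accuracy–privacy trade-off noted in [10] directly comparable across deployments.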

7.4 Practical Recommendations

• Tiered Moratorium: As an immediate measure, disallow facial recognition technology (FRT) in mass surveillance and social scoring, while applying pre-approval criteria to public-sector applications of FRT that are not yet in place.

• Mandatory Algorithmic Auditing: Before new deployments, require an independent, third-party bias audit, followed by regular audits, with the results and remediation plans made public.

• Clear Liability Frameworks: It is essential to map out the responsibility of all actors at each stage of the FRT lifecycle, from developers and integrators through end users, so that there are clear routes to legal action when people suffer harm.

• Data Governance Protocols: Apply rigorous data governance, including keeping no more data than necessary, specifying the purposes for which data is collected, and imposing timelines for deletion, with technical safeguards such as encryption and federated learning deployed in addition (a minimal retention sketch follows this list).

• Adopt an Integrated Governance Approach: The PBA Triad is one framework regulators could use to ensure that a policy addressing one ethical dimension (privacy, for example) does not inadvertently exacerbate others, such as by making the surrounding systems less transparent and accountable.

• Multi-Stakeholder Oversight: Regulatory frameworks should create independent oversight bodies, with public representation, to approve and monitor the deployment of FRT, especially in sensitive areas such as law enforcement and public services.
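A minimal sketch of the retention and purpose-limitation rules in the data-governance recommendation above, assuming each stored record carries a collection purpose and a timestamp; the field names and the 30-day window are illustrative assumptions.

```python
from datetime import datetime, timedelta, timezone

# Per-purpose deletion timelines; only listed purposes are approved at all.
RETENTION = {"access_control": timedelta(days=30)}

def enforce_retention(records, now=None):
    """Keep only records with an approved purpose that are inside their window."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records
            if r["purpose"] in RETENTION
            and now - r["collected_at"] <= RETENTION[r["purpose"]]]

now = datetime.now(timezone.utc)
records = [
    {"id": 1, "purpose": "access_control", "collected_at": now - timedelta(days=5)},
    {"id": 2, "purpose": "access_control", "collected_at": now - timedelta(days=45)},  # expired
    {"id": 3, "purpose": "marketing", "collected_at": now},  # purpose never approved
]
print([r["id"] for r in enforce_retention(records)])  # [1]
```

In a real deployment such a rule would run as a scheduled job against the biometric store, with deletions logged for the independent auditors the earlier recommendations call for.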


REFERENCES

[1] J. Buolamwini and T. Gebru, "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification," Proc. FAT, pp. 77–91, 2018.

[2] S. Yücer et al., "Racial Bias Within Face Recognition: A Survey," arXiv:2305.00817, 2023.

[3] I. Raji and J. Buolamwini, "Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results," Proc. AAAI/ACM AIES, pp. 429–435, 2019.

[4] S. Zuboff, The Age of Surveillance Capitalism. PublicAffairs, 2019.

[5] European Commission, "Artificial Intelligence Act," COM(2021) 206 final, 2021.

[6] Privacy International, "Facial Recognition: A Threat to Privacy, Democracy, and Freedoms," 2020.

[7] G. Smith and T. Miller, "Regulating Facial Recognition Technology: Balancing Innovation and Human Rights," Computer Law & Security Review, vol. 41, 2021.

[8] X. Wang et al., "Beyond Surveillance: Privacy, Ethics, and Regulations in Face Recognition Technology," Frontiers in Big Data, 2024.

[9] S. Pour, "Police Use of Facial Recognition Technology and Racial Bias," American Journal of Artificial Intelligence, vol. 7, no. 1, pp. 17–23, 2023.

[10] "Ensuring Privacy in Face Recognition: A Survey on Data Generation, Inference and Storage," Discover Applied Sciences, vol. 7, 2025.

[11] "A New Epoch of Face Analytics: Technological Evolution Through Ethical and Legal Challenges," AI and Ethics, 2025.
