International Research Journal of Engineering and Technology (IRJET) Volume: 11 Issue: 08 | Aug 2024
www.irjet.net
e-ISSN: 2395-0056 p-ISSN: 2395-0072
How does the rise of deepfake technology undermine democratic processes and erode public trust, and what strategies can be implemented to mitigate these threats?

SHRAVIL AGGARWAL

Abstract: Deepfakes are synthetic media in which artificial intelligence superimposes a person's face onto an existing video of someone else. This research explores the evolution of deepfakes, the risks they present, and their broader implications in political, psychological, and social contexts. Deepfakes have raised serious concerns due to their ability to manipulate information. Originally developed for entertainment purposes, they have quickly evolved into a tool for misinformation, harassment, and fraud. The current state of deepfake technology, although still developing, is already capable of causing significant harm to existing democratic institutions. Politically, deepfakes pose a serious threat to democracy and societal trust: they can be used to spread false information, manipulate public opinion, and destabilize political systems. The psychological impact of deepfakes is equally concerning, as they can be used to harass individuals, create false narratives, and undermine personal relationships. Their potential to damage an individual's reputation, private life, and profession is significant, because they can be weaponized to create damaging content that appears real. Addressing the threats posed by deepfakes requires practical and feasible approaches, discussed in this research, such as developing AI-driven detection tools, legal frameworks, and public awareness campaigns. It is also crucial to explore future trends in AI and understand how they might prove detrimental to the public.
Introduction: Artificial Intelligence (AI) has come a long way since its origins in the 1950s, shaped by the foundational work of Alan Turing and John McCarthy. Since then, AI has been the subject of extensive study. Deepfakes, a significant advancement in AI and machine learning, are a relatively new topic but have sparked considerable interest and debate in the community. By leveraging sophisticated algorithms, particularly deep learning techniques, deepfake software can create hyper-realistic but synthetic videos, audio, and images that cannot be distinguished from reality by an untrained eye. This has raised several security concerns: deepfakes contribute to the spread of misinformation, invade personal privacy, and can ruin the reputations of public figures. They can also create mass hysteria among the public, which is detrimental to national security. Although deepfakes have legitimate uses in the cinematography and editing industries, the risks involved in their unrestricted use outweigh these benefits; the potential for misuse and for breaking societal trust is too high. Hence, it is crucial to identify the victims of this rampant technology and to suggest effective solutions, temporary or permanent, to the threat that deepfakes pose. Many research papers have proposed solutions to these threats, and some have even discussed the failure of the judicial system in the face of manipulated evidence. However, the victims of this technology remain anonymous and continue to suffer the consequences of something they did not do, because "legal loopholes don't help victims of deepfakes abuse." This research aims to unveil these problems related to deepfakes and to propose some solutions.
Literature Review:

● Deep fakes, fake news, and what comes next by Sean Dack (The Henry M. Jackson School of International Studies, 2019)
The article examines how, in both the 2016 American presidential election and the 2017 French presidential election, Russian syndicates targeted the then-candidates. For example, Emmanuel Macron was a target of such a campaign in the 2017
© 2024, IRJET | Impact Factor value: 8.226 | ISO 9001:2008 Certified Journal | Page 260