Seeing is no longer believing: a study of Deep Fake identification ability and its social impact
Research Article
Open Access
CC BY

Yijun Xie 1*
1 Department of Security and Crime Science, University College London
*Corresponding author: leooooooxie@outlook.com
Published on 30 October 2025
AEI Vol.16 Issue 10
ISSN (Print): 2977-3911
ISSN (Online): 2977-3903

Abstract

This research explores university students’ ability to identify deepfake images generated by artificial intelligence, and the criminological implications of their encounters with manipulated content. Through an online questionnaire with 129 participants, the study combined deepfake identification tasks and attitude scales with demographic, cultural, and experiential factors. The results indicate an overall deepfake identification accuracy of 55.8%, consistent with previous meta-analyses. Regression analysis showed that education level and experience with AI tools positively predicted performance, while age and gender had no significant effect. Students who had received prior deepfake training achieved significantly higher identification accuracy; cultural background did not reach statistical significance. More importantly, exposure to deepfakes reduced trust in digital images and confidence in one’s own judgement. These findings indicate that, despite their heavy reliance on digital platforms, university students are not naturally more resilient to deepfakes and remain susceptible to crimes such as fraud, identity theft, and forged evidence. Overall, this paper demonstrates that deepfakes present novel victimisation risks within criminology while undermining the credibility of digital evidence, highlighting the importance of advancing relevant education and prevention strategies within higher education and the criminal sciences.
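The kind of regression reported in the abstract can be illustrated with a minimal sketch. This is not the study’s analysis or data: the predictors, effect sizes, and simulated outcomes below are all hypothetical, chosen only to show how a logistic regression of identification accuracy on experiential factors such as education level, AI-tool experience, and prior deepfake training might be set up.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 129  # matches the study's sample size

# Hypothetical predictors (illustrative only, not the study's data):
educ = rng.integers(1, 4, n)     # education level (1 = BSc .. 3 = PhD)
ai_exp = rng.integers(0, 2, n)   # prior AI-tool experience (0/1)
train = rng.integers(0, 2, n)    # prior deepfake training (0/1)
X = np.column_stack([np.ones(n), educ, ai_exp, train]).astype(float)

# Simulated outcome: whether a participant correctly labels an image,
# with assumed positive effects for education, AI experience, training.
true_beta = np.array([-1.5, 0.5, 0.8, 0.9])
p = 1 / (1 + np.exp(-X @ true_beta))
y = (rng.random(n) < p).astype(float)

# Fit logistic regression by gradient ascent on the log-likelihood.
beta = np.zeros(4)
for _ in range(5000):
    pred = 1 / (1 + np.exp(-X @ beta))
    beta += 0.01 * X.T @ (y - pred) / n

print("estimated coefficients:", np.round(beta, 2))
```

In practice one would use a statistics package that also reports standard errors and p-values, which is how significance claims like those in the abstract are assessed.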

Keywords:

deepfake detection, artificial intelligence, media credibility, user perception, AI-generated content

Xie, Y. (2025). Seeing is no longer believing: a study of Deep Fake identification ability and its social impact. Advances in Engineering Innovation, 16(10), 28-40.

References

[1]. Dong, R., Yuan, D., Wei, X., Cai, J., Ai, Z., & Zhou, S. (2025). Exploring the relationship between social media dependence and internet addiction among college students from a bibliometric perspective. Front. Psychol., 16. https://doi.org/10.3389/fpsyg.2025.1463671

[2]. Tambe, S. N., & Hussein, N. A.-H. K. (2023). Exploring the Impact of Digital Literacy on Media Consumer Empowerment in the Age of Misinformation. MEDAAD, 2023, 1–9. https://doi.org/10.70470/medaad/2023/001

[3]. Hasan, Ala Bawazir, Mustafa Abdulraheem Alsabri, Alharbi, A., & Abdelmohsen Hamed Okela. (2024). Artificial intelligence literacy among university students—a comparative transnational survey. Front. Commun., 9. https://doi.org/10.3389/fcomm.2024.1478476

[4]. Russell, S. J., & Norvig, P. (2022). Artificial Intelligence: A Modern Approach (4th US ed.). https://aima.cs.berkeley.edu/

[5]. Poole, D. L., & Mackworth, A. K. (2023). Artificial Intelligence: Foundations of Computational Agents. https://artint.info/

[6]. Nilsson, N. J. (2009). The Quest for Artificial Intelligence. https://doi.org/10.1017/cbo9780511819346

[7]. CSRC Content Editor. (2025). Generative artificial intelligence – Glossary. CSRC. https://csrc.nist.gov/glossary/term/generative_artificial_intelligence

[8]. Babaei, R., Cheng, S., Duan, R., & Zhao, S. (2025). Generative Artificial Intelligence and the Evolving Challenge of Deepfake Detection: A Systematic Analysis. J. Sens. Actuator Netw., 14(1), 17. https://doi.org/10.3390/jsan14010017

[9]. Westerlund, M. (2019). The Emergence of Deepfake Technology: A Review. Technol. Innov. Manag. Rev., 9(11). https://timreview.ca/article/1282

[10]. Hawkins, W., Russell, C., & Mittelstadt, B. (2025). Deepfakes on Demand: the rise of accessible non-consensual deepfake image generators. ArXiv. https://arxiv.org/abs/2505.03859

[11]. Zendran, M., & Rusiecki, A. (2021). Swapping Face Images with Generative Neural Networks for Deepfake Technology – Experimental Study. Procedia Comput. Sci., 192, 834–843. https://doi.org/10.1016/j.procs.2021.08.086

[12]. Citron, D., & Chesney, R. (2019). Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. Calif. Law Rev., 107(6), 1753. https://scholarship.law.bu.edu/faculty_scholarship/640/

[13]. Mukta, M. S. H., Ahmad, J., Raiaan, M. A. K., Islam, S., Azam, S., Ali, M. E., & Jonkman, M. (2023). An Investigation of the Effectiveness of Deepfake Models and Tools. J. Sens. Actuator Netw., 12(4), 61. https://doi.org/10.3390/jsan12040061

[14]. Kaur, A., Hoshyar, A. N., Saikrishna, V., Firmin, S., & Xia, F. (2024). Deepfake video detection: challenges and opportunities. Artif. Intell. Rev., 57(6). https://doi.org/10.1007/s10462-024-10810-6

[15]. Anastasiia, I. (2023). Fake news as a Distortion of Media Reality: Tell-truth Strategy in the post-truth era. ProQuest. https://www.proquest.com/openview/e394a9e0817455abea421ff3394a1d8c/1?pq-origsite=gscholar&cbl=396497

[16]. Pfänder, J., & Altay, S. (2025). Spotting false news and doubting true news: a systematic review and meta-analysis of news judgements. Nat. Hum. Behav., 1–12. https://doi.org/10.1038/s41562-024-02086-1

[17]. Søe, S. O. (2019). A unified account of information, misinformation, and disinformation. Synthese, 198. https://doi.org/10.1007/s11229-019-02444-x

[18]. Gelfert, A. (2018). Fake News: a Definition. Inform. Log., 38(1), 84–117.

[19]. Gelfert, A. (2021). What is fake news? Routledge eBooks, 171–180. https://doi.org/10.4324/9780429326769-22

[20]. Appel, M., & Prietzel, F. (2022). The detection of political deepfakes. J. Comput.-Mediat. Commun., 27(4). https://doi.org/10.1093/jcmc/zmac008

[21]. Sandoval, M.-P., De Almeida Vau, M., Solaas, J., & Rodrigues, L. (2024). Threat of Deepfakes to the Criminal Justice system: a Systematic Review. Crime Sci., 13(1). https://doi.org/10.1186/s40163-024-00239-1

[22]. Ahmed, S. (2021). Who inadvertently shares deepfakes? Analyzing the role of political interest, cognitive ability, and social network size. Telemat. Inform., 57, 101508. https://doi.org/10.1016/j.tele.2020.101508

[23]. Vaccari, C., & Chadwick, A. (2020). Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News. Soc. Media Soc., 6(1). https://doi.org/10.1177/2056305120903408

[24]. Hameleers, M. (2024). Cheap Versus Deep Manipulation: The Effects of Cheapfakes Versus Deepfakes in a Political Setting. Int. J. Public Opin. Res., 36(1). https://doi.org/10.1093/ijpor/edae004

[25]. Ranka, H., Surana, M., Kothari, N., Pariawala, V., Banerjee, P., Surve, A., Reddy, S. S., Jain, R., Lalwani, J., & Mehta, S. (2024). Examining the Implications of Deepfakes for Election Integrity. ArXiv. https://arxiv.org/abs/2406.14290

[26]. Farmer, L. (2022). Visual literacy and fake news: Gaining a visual voice. Stud. Technol. Enhanc. Learn. https://doi.org/10.21428/8c225f6e.b34036b2

[27]. Croitoru, F.-A., Hiji, A.-I., Hondru, V., Ristea, Nicolae Catalin, Irofti, P., Popescu, M., Rusu, C., Ionescu, R. T., Khan, F. S., & Shah, M. (2024). Deepfake Media Generation and Detection in the Generative AI Era: A Survey and Outlook. ArXiv. https://arxiv.org/abs/2411.19537

[28]. Diel, A., Lalgi, T., Schröter, I. C., MacDorman, K. F., Teufel, M., & Bäuerle, A. (2024). Human performance in detecting deepfakes: A systematic review and meta-analysis of 56 papers. Comput. Hum. Behav. Rep., 16, 100538. https://doaj.org/article/a2803c8a1b9441458f000fd0fe82ea47

[29]. Feng, K. J., Ritchie, N., Blumenthal, P., Parsons, A., & Zhang, A. X. (2023). Examining the Impact of Provenance-Enabled Media on Trust and Accuracy Perceptions. ArXiv. https://arxiv.org/abs/2303.12118

[30]. da Gama Batista, J., Bouchaud, J.-P., & Challet, D. (2015). Sudden trust collapse in networked societies. Eur. Phys. J. B, 88(3). https://doi.org/10.1140/epjb/e2015-50645-1

[31]. Groh, M., Sankaranarayanan, A., Singh, N., Kim, D. Y., Lippman, A., & Picard, R. (2022). Human Detection of Political Speech Deepfakes across Transcripts, Audio, and Video. ArXiv. https://arxiv.org/abs/2202.12883

[32]. Hameleers, M., van der Meer, T. G. L. A., & Dobber, T. (2024). Distorting the truth versus blatant lies: The effects of different degrees of deception in domestic and foreign political deepfakes. Comput. Hum. Behav., 152, 108096. https://doi.org/10.1016/j.chb.2023.108096

[33]. Weikmann, T., Greber, H., & Nikolaou, A. (2024). After Deception: How Falling for a Deepfake Affects the Way We See, Hear, and Experience Media. Int. J. Press Polit., 30(1). https://doi.org/10.1177/19401612241233539

[34]. Kim, J.-H. (2022). The Excessive Use of Social-Media Among College Students: The Role of Mindfulness. Open Access J. Youth Subst. Use Suicide Behav., 5(5), 1–8. https://irispublishers.com/oajap/fulltext/the-excessive-use-of-social-media-among-college-students-the-role-of-mindfulness.ID.000624.php

[35]. Attewell, S. (2025, May 21). Student Perceptions of AI 2025. Jisc National Centre for AI. https://nationalcentreforai.jiscinvolve.org/wp/2025/05/21/student-perceptions-of-ai-2025/

[36]. Krupp, L., Steinert, S., Kiefer-Emmanouilidis, M., Avila, K. E., Lukowicz, P., Kuhn, J., Küchemann, S., & Karolus, J. (2023). Unreflected Acceptance – Investigating the Negative Consequences of ChatGPT-Assisted Problem Solving in Physics Education. ArXiv. https://arxiv.org/abs/2309.03087

[37]. Roe, J., Perkins, M., & Furze, L. (2024). Deepfakes and Higher Education: A Research Agenda and Scoping Review of Synthetic Media. J. Univ. Teach. Learn. Pract., 21(10). https://doi.org/10.53761/2y2np178

[38]. Nygren, T., Wiksten Folkeryd, J., Liberg, C., & Guath, M. (2020). Students Assessing Digital News and Misinformation. Disinformation in Open Online Media, 12259, 63–79. https://doi.org/10.1007/978-3-030-61841-4_5

[39]. Leeder, C. (2019). How college students evaluate and share “fake news” stories. Libr. Inf. Sci. Res., 41(3), 100967. https://doi.org/10.1016/j.lisr.2019.100967

[40]. Alexander, S. (2025). Deepfake Cyberbullying: The Psychological Toll on Students and Institutional Challenges of AI-Driven Harassment. Clear. House, 1–15. https://doi.org/10.1080/00098655.2025.2488777

[41]. Cosme Torres, L. (2025). Law Student Allegedly Used AI to Create Porn of Fellow Students — Then Tried to Apologize. People. https://people.com/law-student-allegedly-used-ai-create-porn-fellow-students-11773557

[42]. ondyari. (2022, December 2). FaceForensics++: Learning to Detect Manipulated Facial Images. GitHub. https://github.com/ondyari/FaceForensics


Data availability

The datasets used and/or analysed during the current study are available from the authors upon reasonable request.
