ISSN 1309-1581

Early Detection of Lone-Wolf Radicalization: The Role of Conversational Artificial Intelligence

Yalnız Aktör Radikalleşmesinin Erken Tespiti: Sohbet Tabanlı Yapay Zekanın Rolü
DOI: 10.5824/ajite.2026.01.001.x
Pages: 1-25
EN Abstract

Early Detection of Lone-Wolf Radicalization: The Role of Conversational Artificial Intelligence

This article examines the potential of conversational artificial intelligence as an early-warning tool for detecting lone-wolf radicalization. Drawing on psychological approaches to radicalization, particularly Moghaddam's "staircase to terrorism" and Horgan's model of terrorist engagement, the study analyzes how dialogue-based AI systems might identify behavioral and linguistic cues of extremist trajectories. It also evaluates the ethical and governance implications of integrating AI into counter-radicalization frameworks, with reference to the EU AI Act and UNESCO's Recommendation on the Ethics of Artificial Intelligence. The paper argues that responsibly deployed AI can complement traditional prevention mechanisms by enhancing situational awareness and early-intervention capacity. Ultimately, the study helps bridge the gap between digital ethics and security studies by offering an agenda for ethically aligned, preventive AI governance.
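As a purely illustrative companion to the abstract, the sketch below shows one simplistic way a dialogue system could surface linguistic cues for human review. The cue lexicons, category names, function names, and threshold are all invented for demonstration; they are not the article's method, and any operational system would need validated instruments plus the ethical and legal safeguards the paper discusses.

```python
# Illustrative sketch only: a minimal lexicon-based cue counter for
# hypothetical "grievance" and "dehumanization" categories. The cue
# lists and the review threshold are arbitrary placeholders, NOT a
# validated risk instrument.

GRIEVANCE_CUES = {"betrayed", "humiliated", "they took everything"}
DEHUMANIZATION_CUES = {"vermin", "subhuman", "parasites"}

def cue_counts(message: str) -> dict:
    """Count cue occurrences per category in a single message."""
    text = message.lower()
    return {
        "grievance": sum(cue in text for cue in GRIEVANCE_CUES),
        "dehumanization": sum(cue in text for cue in DEHUMANIZATION_CUES),
    }

def flag_conversation(messages: list[str], threshold: int = 2) -> bool:
    """Refer a conversation for human review when cumulative cues
    across all messages reach the (arbitrary) threshold."""
    total = sum(sum(cue_counts(msg).values()) for msg in messages)
    return total >= threshold
```

The key design point the abstract implies is that such a component would only escalate to human reviewers, never act autonomously; a real system would also need contextual language models rather than keyword matching.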
TR Öz

Yalnız Aktör Radikalleşmesinin Erken Tespiti: Sohbet Tabanlı Yapay Zekanın Rolü

Bu makale, sohbet tabanlı yapay zekânın yalnız aktör radikalleşmesini erken aşamada tespit edebilecek bir uyarı aracı olarak kullanılma ihtimalini incelemektedir. Radikalleşmenin psikolojik kuramlarından Moghaddam'ın "terörizme giden merdiven" modeli ve Horgan'ın katılım süreci yaklaşımı çerçevesinde, sohbet tabanlı yapay zekâ sistemlerinin aşırı eğilimlerle bağlantılı dilsel ve davranışsal göstergeleri nasıl algılayabileceği analiz edilmektedir. Ayrıca, bu teknolojilerin radikalleşmeyle mücadele modellerine entegre edilmesinin etik ve yönetişim boyutları, AB Yapay Zekâ Yasası ve UNESCO Yapay Zekâ Etiği Tavsiyesi kapsamında değerlendirilmektedir. Çalışmanın hipotezi, etik gözetim altında geliştirilen yapay zekâ uygulamalarının durumsal farkındalığı artırarak önleyici politika araçlarını güçlendirebileceğidir. Çalışma, dijital etik ile güvenlik çalışmaları arasındaki boşluğu doldurmayı amaçlayarak etik uyumlu bir yapay zekâ yönetişimi yaklaşımı önermektedir.
References (33)
  1. AlgorithmWatch. (2021). UNESCO's AI ethics recommendation: Promise and pitfalls. https://algorithmwatch.org
  2. Borum, R. (2011). Radicalization into violent extremism I: A review of social science theories. Journal of Strategic Security, 4(4), 7-36. https://doi.org/10.5038/1944-0472.4.4.1
  3. Cacioppo, J. T., & Patrick, W. (2008). Loneliness: Human nature and the need for social connection. W. W. Norton.
  4. Cambria, E., Poria, S., Hazarika, D., & Kwok, K. (2017). SenticNet 5: Discovering conceptual primitives for sentiment analysis by means of context embeddings. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10655
  5. Cath, C. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180080. https://doi.org/10.1098/rsta.2018.0080
  6. Conway, M. (2017). Determining the role of the Internet in violent extremism and terrorism: Six suggestions for progressing research. Studies in Conflict & Terrorism, 40(1), 77-98. https://doi.org/10.1080/1057610X.2016.1157408
  7. European Commission. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 on artificial intelligence (AI Act). Official Journal of the European Union.
  8. Floridi, L. (2013). The ethics of information. Oxford University Press.
  9. Floridi, L. (2019). The logic of information: A theory of philosophy as conceptual design. Oxford University Press.
  10. Floridi, L. (2019). Translating principles into practices of digital ethics: Five risks of being unethical. Philosophy & Technology, 32(2), 185-193. https://doi.org/10.1007/s13347-019-00354-x
  11. Franklin, M., Moreira Tomei, R., & Gorman, T. (2023). Critical reflections on the EU AI Act: Between innovation and regulation. Computer Law & Security Review, 49, 105773. https://doi.org/10.1016/j.clsr.2023.105773
  12. George, A. L., & Bennett, A. (2005). Case studies and theory development in the social sciences. MIT Press.
  13. Horgan, J. (2005). The psychology of terrorism. Routledge.
  14. Horgan, J. (2014). The psychology of terrorism (2nd ed.). Routledge.
  15. Judiciary of England and Wales. (2023). R v Jaswant Singh Chail: Sentencing remarks and court documents. The National Archives (UK).
  16. Kruglanski, A. W., Gelfand, M. J., Bélanger, J. J., Sheveland, A., Hetiarachchi, M., & Gunaratna, R. (2014). The psychology of radicalization and deradicalization: How significance quest impacts violent extremism. Political Psychology, 35(S1), 69-93. https://doi.org/10.1111/pops.12163
  17. Lygre, R., Eid, J., Larsson, G., & Ranstorp, M. (2011). Terrorist groups and lone actors: A comparison of psychological profiles. Studies in Conflict & Terrorism, 34(6), 495-515. https://doi.org/10.1080/1057610X.2011.571193
  18. Mathur, A., Broekaert, E., & Clarke, M. (2024). Artificial intelligence and radicalization: Risks, ethics, and governance. International Centre for Counter-Terrorism. https://icct.nl/publication/ai-and-radicalization
  19. Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1-21. https://doi.org/10.1177/2053951716679679
  20. Moghaddam, F. M. (2005). The staircase to terrorism: A psychological exploration. American Psychologist, 60(2), 161-169. https://doi.org/10.1037/0003-066X.60.2.161
  21. Novelli, C., Hacker, P., Morley, J., Trondal, J., & Floridi, L. (2024). Institutionalizing trustworthy AI in Europe: The architecture of the EU AI Act. AI & Society, 39(3), 1123-1141. https://doi.org/10.1007/s00146-024-01867-0
  22. Picard, R. W. (1997). Affective computing. MIT Press.
  23. Reuters. (2023, February 3). Man who plotted to kill Queen encouraged by AI chatbot, UK court hears. https://www.reuters.com
  24. Schmidt, A., & Wiegand, M. (2017). A survey on hate speech detection using natural language processing. Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media (SocialNLP 2017), 1-10. https://doi.org/10.18653/v1/W17-1101
  25. Spaaij, R. (2012). Understanding lone wolf terrorism: Global patterns, motivations and prevention. Springer.
  26. The Ethics of AI or Techno-Solutionism. (2025). Critical perspectives on global AI governance. Springer Nature.
  27. Tucker, J. A., Guess, A., Barberá, P., Vaccari, C., Siegel, A., Sanovich, S., Stukal, D., & Nyhan, B. (2018). Social media, political polarization, and political disinformation: A review of the scientific literature. Political Science Quarterly, 133(3), 555-593. https://doi.org/10.1002/polq.12791
  28. Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. Basic Books.
  29. UNESCO. (2021). Recommendation on the ethics of artificial intelligence. https://unesdoc.unesco.org
  30. van Wynsberghe, A., Cath, C., Jobin, A., & Floridi, L. (2025). From principles to practice: Challenges for global AI governance. AI and Ethics, 5(1), 23-41. https://doi.org/10.1007/s43681-024-00351-7
  31. Weaver, M. (2023, February 3). Queen assassination plot: AI chatbot urged man to carry out attack, court hears. The Guardian. https://www.theguardian.com
  32. Weizenbaum, J. (1966). ELIZA: A computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36-45. https://doi.org/10.1145/365153.365168
  33. Yin, R. K. (2018). Case study research and applications: Design and methods (6th ed.). SAGE Publications.