
Generative AI as a tool and as a social actor between deviance and mainstream

Abstract

Viewed analytically through Beck's "lenses", generative artificial intelligence, in its function as an advanced auxiliary tool for the creation and dissemination of texts, images, videos and other data, introduces a post-human risk deriving from the potentially harmful content it produces. This culminates in the simulation of a human social actor, from which in turn follow risks proper to a post-human society, such as the amplification and reproduction of biases, prejudices and discrimination, as well as the consolidation of the dominant socio-cultural hegemony.

Keywords

artificial intelligence, social actor, risk, moral machine, deviance, post-human society, socio-cultural mainstream


References

  1. AI DAN Prompt. (2025). Available at https://abnormal.ai/ai-glossary/ai-dan-prompt, 03/06/2025.
  2. Alvero A.J., Lee J., Regla-Vargas A., Kizilcec R.F., Joachims T., and Lising A. (2024). Large language models, social demography, and hegemony: comparing authorship in human and synthetic text. J Big Data, 11, 138. DOI: 10.1186/s40537-024-00986-7.
  3. Anderson M., Anderson S.L. (2007). Machine Ethics: Creating an Ethical Intelligent Agent. AI Magazine, 28(4): 15-25. DOI: 10.1609/aimag.v28i4.
  4. Arkoudas K. (2023). ChatGPT is no Stochastic Parrot. But it also Claims that 1 is Greater than 1. Philosophy & Technology, 36(3): 1-29. DOI: 10.1007/s13347-023-00619-6.
  5. Beck U. (1992). Risk Society: Towards a New Modernity. London: Sage.
  6. Beck U. (1994). The reinvention of politics: towards a theory of reflexive modernization. In: Beck U., Giddens A. and Lash S. (eds.), Reflexive modernization: Politics, tradition and aesthetics in the modern social order. Cambridge: Polity Press.
  7. Bender E.M., Gebru T., McMillan-Major A., and Shmitchell S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21). New York: Association for Computing Machinery. DOI: 10.1145/3442188.3445922.
  8. Bostrom N. (2014). Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press.
  9. Braidotti R. (2013). The posthuman. Cambridge: Polity Press.
  10. Cole D. (2024). The Chinese Room Argument. In Zalta E. N. and Nodelman U. (eds.), The Stanford Encyclopedia of Philosophy (Winter 2024 Edition). Available at https://plato.stanford.edu/archives/win2024/entries/chinese-room/, 03/06/2025.
  11. Cooban A. (2025). Pornhub exits France, its second-biggest market, over age verification law. CNN. Available at https://edition.cnn.com/2025/06/04/tech/pornhub-exits-france-age-verification-intl, 03/06/2025.
  12. Fawkes V. (2025). Best AI Porn Sites of 2025. Chicago Reader. Available at https://chicagoreader.com/adult/ai-porn-sites/, 03/06/2025.
  13. Floridi L. (2023). The Ethics of Artificial Intelligence. Oxford: Oxford University Press.
  14. Gehlen A. (1969). Moral und Hypermoral. Eine pluralistische Ethik. Frankfurt: Athenäum.
  15. God of prompt (2025). ChatGPT No Restrictions (Ultimate Guide for 2025). Available at https://www.godofprompt.ai/blog/chatgpt-no-restrictions-2024?srsltid=AfmBOorG1KLM47y6u6Mc-BeZVARRRD0sOKio1_UDg4IP1-IOiymdKJll, 03/06/2025.
  16. Goode E. (2023). Deviant Behaviour. New York: Routledge.
  17. Gupta M., Akiri C., Aryal K., Parker E. and Praharaj L. (2023). From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy. IEEE Access, vol. 11: 80218-80245. DOI: 10.1109/ACCESS.2023.3300381.
  18. Hayles N. K. (1999). How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.
  19. Jarrahi M.H. (2019). In the Age of the Smart Artificial Intelligence: AI's Dual Capacities for Automating and Informating Work. Bus. Inf. Rev., vol. 36, no. 4: 178-187. DOI: 10.1177/0266382119883999.
  20. Jiang F., Peng Y., Dong L., Wang K., Yang K., Pan C., You X. (2024). Large AI model-based semantic communications. IEEE Wireless Communications, 31(3): 68-75. DOI: 10.1109/MWC.001.2300346.
  21. Latour B. (2005). Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press.
  22. Leiss W. (1994). [Review of Beck U., Risk Society: Towards a New Modernity, translated from the German by Ritter M., Introduction by Lash S. and Wynne B. London: Sage Publications, 1992]. The Canadian Journal of Sociology/Cahiers Canadiens de Sociologie, 19(4): 544-547. DOI: 10.2307/3341155.
  23. Nass C., Moon Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1): 81-103. DOI: 10.1111/0022-4537.00153.
  24. Natale S. (2021). Deceitful Media. Oxford: Oxford University Press.
  25. Pope T., Gilbertson-White S., and Patooghy A. (2025). Evaluating GPT-4's Semantic Understanding of Obstetric-based Healthcare Text through Nurse Ruth. ACM Trans. Intell. Syst. Technol. Just Accepted (May 2025). DOI: 10.1145/3735647.
  26. Possamai-Inesedy A. (2002). Beck's risk society and Giddens' search for ontological security: A comparative analysis between the Anthroposophical Society and the Assemblies of God. Australian Religion Studies Review, 15(1): 27-40.
  27. Rizzi G., Bertola P. (2025). Exploring the generative AI potential in the fashion design process: an experimental experience on the collaboration between fashion design practitioners and generative AI tools. Eur. J. Cult. Manag. Policy, 15:13875. DOI: 10.3389/ejcmp.2025.13875.
  28. Rozado, D. (2023). The Political Biases of ChatGPT. Social Sciences, 12(3), 148. DOI: 10.3390/socsci12030148.
  29. Saponaro A. and Massaro P. (2018). Diritto irrazionale interstiziale e la “scienza del Cadì” nella giurisdizione penale: da Weber a Damaska. Sociologia, anno LII, n. 1: 89-103.
  30. Searle J. (1980). Minds, Brains and Programs. Behavioral and Brain Sciences, 3: 417-457.
  31. Searle J. (2010). Why Dualism (and Materialism) Fail to Account for Consciousness. In: Lee R.E. (ed.), Questioning Nineteenth Century Assumptions about Knowledge (III: Dualism). New York: SUNY Press.
  32. Simas G. and Ulbricht V. (2024). Human-AI Interaction: An Analysis of Anthropomorphization and User Engagement in Conversational Agents with a Focus on ChatGPT. In: Ahram T., Karwowski W., Russo D. and Di Bucchianico G. (eds), Intelligent Human Systems Integration (IHSI 2024): Integrating People and Intelligent Systems. AHFE (2024) International Conference. AHFE Open Access, vol 119. USA: AHFE International. DOI: 10.54941/ahfe1004510.
  33. Stryker C. and Scapicchio M. (2024). What is generative AI? Ibm.com. Available at https://www.ibm.com/think/topics/generative-ai, 03/06/2025.
  34. Titus L.M. (2024). Does ChatGPT have semantic understanding? A problem with the statistics-of-occurrence strategy. Cogn. Syst. Res., 83. DOI: 10.1016/j.cogsys.2023.101174.
  35. Turkle S. (2011). Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books.
  36. Weichert J., Kim D., Zhu Q., Kim J., and Eldardiry H. (2025). Assessing Computer Science Student Attitudes Towards AI Ethics and Policy. ArXiv. Available at https://arxiv.org/abs/2504.06296, 03/06/2025.
  37. Weizenbaum J. (1966). ELIZA—a computer program for the study of natural language communication between man and machine. Commun. ACM, 9(1): 36-45. DOI: 10.1145/365153.365168.
  38. Xie H., Qin Z., Li G.Y., and Juang B.-H. (2021). Deep learning enabled semantic communication systems. IEEE Transactions on Signal Processing, vol. 69: 2663-2675. DOI: 10.1109/tsp.2021.3071210.
  39. Zuboff S. (1988). In the Age of the Smart Machine: The Future of Work and Power. New York: Basic Books.
  40. Zuboff S. (2019). The Age of Surveillance Capitalism. New York: PublicAffairs.