ARTIFICIAL INTELLIGENCE AND HIGHER EDUCATION. Possibilities, acceptable risks and limits that should not be crossed.

Authors

Luis Miguel González de la Garza

DOI:

https://doi.org/10.1344/REYD2024.2%20Extraordinario.49175

Keywords:

artificial intelligence, generative artificial intelligence, higher education, research, risks

Abstract

Generative AI has also reached the world of the university, affecting both the students who enter higher education and the academics who, in their double role as teachers and researchers, must face a technological reality that keeps growing. It is therefore necessary to understand the limits of what generative AI can offer, as well as the risks that these technologies may pose for higher education: despite the advantages that every technology brings, serious doubts also arise about the risks they may carry in the medium and long term. In this work we focus essentially on the risks that various social scientists have identified, in order to provide arguments for a calm and fundamentally prudent reflection on the incorporation of generative AI into higher education. "Almost all the evils of peoples and individuals arise from not having known how to be prudent and energetic during a historical moment which will never return," Santiago Ramón y Cajal pointed out. These words from our Nobel Prize winner in Medicine and father of neuroscience warn us that there are critical moments for regulating problems before they become, or may become, unsolvable; as we argue in this study, that moment has arrived in the field of generative AI.

Author Biography

Luis Miguel González de la Garza

Professor of Constitutional Law
National University of Distance Education (UNED)

References

Alkaissi, H., and McFarlane, S. I., Artificial hallucinations in ChatGPT: implications in scientific writing, Cureus, 15(2), 2023. Available at: https://pubmed.ncbi.nlm.nih.gov/36811129/

Atkinson-Abutridy, J., Grandes modelos de lenguaje, Marcombo, Madrid, 2023.

Green, B., The Flaws of Policies Requiring Human Oversight of Government Algorithms, 45 Computer Law & Security Review, 2022.

Berzal, F., Redes Neuronales & Deep Learning, Granada, 2019.

Burke, P., Ignorancia. Una historia global, Alianza Editorial, Madrid, 2023.

Carr, N., Atrapados. Cómo las máquinas se apoderan de nuestras vidas, Taurus, Madrid, 2014.

Carr, N., Superficiales. ¿Qué está haciendo Internet con nuestras mentes?, Taurus, Barcelona, 2020.

Edelson, M., et al., Following the Crowd: Brain Substrates of Long-Term Memory Conformity, Science, 333, 2011.

Emsley, R., ChatGPT: these are not hallucinations – they're fabrications and falsifications, Schizophrenia, 9, 52, 2023. https://doi.org/10.1038/s41537-023-00379-4

Garnier-Brun, J., Benzaquen, M., and Bouchaud, J.-P., "Unlearnable Games and 'Satisficing' Decisions: A Simple Model for a Complex World", arXiv:2312.12252v2, 5 January 2024. https://link.aps.org/doi/10.1103/PhysRevX.14.021039

UNESCO, Guía para el uso de IA generativa en educación e investigación. https://unesdoc.unesco.org/ark:/48223/pf0000389227

Grant, R. W., Los hilos que nos mueven. Desenmarañando la ética de los incentivos, Avarigani, Madrid, 2021.

Glauner, P., Valtchev, P., and State, R., "Impact of Biases in Big Data", Proceedings of the 26th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2018). arXiv: 1803.0089

Haidt, J., La generación ansiosa. Por qué las redes sociales están causando una epidemia de enfermedades mentales entre nuestros jóvenes, Deusto, Barcelona, 2024.

Hicks, M. T., Humphries, J., and Slater, J., ChatGPT is bullshit, Ethics and Information Technology, 2024. https://doi.org/10.1007/s10676-024-09775-5

Horwitz, J., Código roto. Manipulación política, fake news, desinformación y salud pública, Ariel, Barcelona, 2024.

UNESCO, Inteligencia artificial y educación: guía para las personas a cargo de formular políticas. https://unesdoc.unesco.org/ark:/48223/pf0000379376

Council of Europe, Inteligencia artificial y educación: una visión crítica a través de la lente de los derechos humanos, la democracia y el estado de derecho, 2022. https://book.coe.int/en/education-policy/11333-artificial-intelligence-and-education-a-critical-view-through-the-lens-of-human-rights-democracy-and-the-rule-of-law.html

Jonas, H., El principio de responsabilidad. Ensayo de una ética para la civilización tecnológica, Herder, Barcelona, 2015, p. 16.

Kearns, M., and Roth, A., El algoritmo ético. La ciencia del diseño de algoritmos socialmente éticos, La Ley Wolters Kluwer, Madrid, 2020.

Kurita, K., et al., "Measuring Bias in Contextualized Word Representations", 1st ACL Workshop on Gender Bias in Natural Language Processing, 2019. Available at: https://arxiv.org/abs/1906.07337v1, p. 31.

Kahneman, D., Sibony, O., and Sunstein, C. R., Ruido. Un fallo en el juicio humano, Debate, Barcelona, 2021.

Kranzberg, M., Kranzberg's Laws, Technology and Culture, Vol. 27, No. 3, July 1986, p. 547.

Larson, E. J., El mito de la inteligencia artificial. Por qué las máquinas no pueden pensar como nosotros lo hacemos, Shackleton, Barcelona, 2022.

López de Mántaras, R., El traje nuevo de la inteligencia artificial, Investigación y Ciencia, nº 526, julio 2020.

Matute, H., "Ilusiones y sesgos cognitivos", Investigación y Ciencia, noviembre 2019.

Maher, C., Mentes vegetales. Una defensa filosófica, Bauplan, Madrid, 2022.

Messeri, L., and Crockett, M. J., Artificial intelligence and illusions of understanding in scientific research, Nature, 627, 2024.

McLuhan, M., Comprender los medios de comunicación. Las extensiones del ser humano, Paidós, Barcelona, 2009.

Nguyen, A., Ngo, H. N., Hong, Y., Dang, B., and Nguyen, B. T., Ethical principles for artificial intelligence in education, Education and Information Technologies, 28(4), 2023, pp. 4221-4241. https://doi.org/10.1007/s10639-022-11316-w

Nussbaum, M., Paisajes del pensamiento. La inteligencia de las emociones, Paidós, Barcelona, 2008.

O'Neil, C., Armas de destrucción matemática. Cómo el Big Data aumenta la desigualdad y amenaza la democracia, Capitán Swing, Madrid, 2017, p. 175.

Pohl, R. F., Cognitive Illusions. A Handbook of Fallacies and Biases in Thinking, Judgment and Memory, Psychology Press, Taylor & Francis Group, New York, 2004, p. 40.

UNESCO, Recomendación sobre la ética de la inteligencia artificial. https://unesdoc.unesco.org/ark:/48223/pf0000381137_spa?posInSet=2&queryId=a2836fbc-3a67-4c9c-b414-3b1bd9998da0

Roselli, D., Matthews, J., and Talagala, N., Managing Bias in AI, in Companion Proceedings of the 2019 World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019, pp. 539-544.

Solove, D. J., and Matsumi, H., AI, Algorithms, and Awful Humans, 96 Fordham Law Review, 16 October 2023.

Sunstein, C. R., and Thaler, R. H., Un pequeño empujón, Taurus, Madrid, 2009, pp. 255-272.

Sharot, T., Korn, C. W., and Dolan, R. J., How unrealistic optimism is maintained in the face of reality, Nature Neuroscience, 14(11), 1 May 2012, p. 6.

Vicente, L., and Matute, H., Humans inherit artificial intelligence biases, Scientific Reports, 13, 15737, 2023.

Xu, Z., Jain, S., and Kankanhalli, M., Hallucination is Inevitable: An Innate Limitation of Large Language Models, arXiv:2401.11817v1 [cs.CL], 2024.

Published

2024-12-31

How to Cite

González de la Garza, L. M. (2024). ARTIFICIAL INTELLIGENCE AND HIGHER EDUCATION. Possibilities, acceptable risks and limits that should not be crossed. Education and Law Review, (2 Extraordinario), 115–145. https://doi.org/10.1344/REYD2024.2 Extraordinario.49175