Can Artificial Intelligence help provide more sustainable feedback?
DOI:
https://doi.org/10.1344/der.2024.45.50-58

Keywords:
Artificial Intelligence, Assessment, Feedback, Higher Education

Abstract
Peer assessment is a strategy wherein students evaluate the level, value, or quality of their peers' work within the same educational setting. Research has demonstrated that peer evaluation processes positively impact skill development and academic performance. By applying evaluation criteria to their peers' work and offering comments, corrections, and suggestions for improvement, students not only enhance their own work but also cultivate critical thinking skills. To effectively nurture students' role as evaluators, deliberate and structured opportunities for practice, along with training and guidance, are essential.
Artificial Intelligence (AI) can offer a means to assess peer evaluations automatically, ensuring their quality and assisting students in executing assessments with precision. This approach allows educators to focus on evaluating student productions without necessitating specialized training in feedback evaluation.
This paper presents the process developed to automate the assessment of feedback quality. Using feedback fragments evaluated by researchers against pre-established criteria, an Artificial Intelligence (AI) Large Language Model (LLM) was trained to perform the evaluation automatically. The findings show the similarity between human and automated evaluation, which supports expectations regarding the potential of AI for this purpose. The challenges and prospects of this process are discussed, along with recommendations for optimizing results.
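The pipeline the abstract describes — feedback fragments rated by researchers against pre-established criteria, then a model trained to reproduce those ratings — can be illustrated with a minimal text-classification sketch. This is not the paper's model: the fragments, labels, and the TF-IDF + logistic-regression baseline below are invented stand-ins for the fine-tuned LLM the authors actually used.

```python
# Illustrative baseline only (not the paper's model): learn to score the
# quality of peer-feedback fragments from rater-labelled examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: fragments with a quality label assigned by human raters
# (all examples here are invented for illustration).
fragments = [
    "Good job.",
    "Nice work overall.",
    "The argument in section 2 lacks evidence; cite the sources you mention.",
    "Your conclusion restates the intro; add a concrete recommendation instead.",
    "I liked it.",
    "The method is unclear: explain how the sample was selected and why.",
]
labels = ["low", "low", "high", "high", "low", "high"]

# TF-IDF features + logistic regression: a transparent, cheap stand-in
# for a fine-tuned large language model.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(fragments, labels)

# Score previously unseen feedback automatically.
new_feedback = [
    "Well done!",
    "Figure 3 needs axis labels so readers can compare conditions.",
]
predictions = model.predict(new_feedback)
print(list(predictions))
```

In practice the paper's approach replaces this bag-of-words baseline with an LLM, but the supervision signal is the same: human quality judgements on example fragments serve as training labels.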
License
Copyright (c) 2024 Eloi Puertas Prats, María Elena Cano García
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
The authors who publish in this journal agree to the following terms:
- Authors retain copyright and grant the journal the right of first publication.
- The texts published in Digital Education Review, DER, are under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 Spain license. All conditions of use are available from Creative Commons.
- When citing the works, credit must be given to the authors and to this journal.
- Digital Education Review, DER, does not accept any responsibility for the points of view and statements made by the authors in their work.