Research at the University of Reading shows that AI-generated answers often evade detection in academic assessments and can outperform student responses, prompting calls for a global update to educational AI policies and practices.
Researchers have discovered that even seasoned exam graders may find it difficult to identify responses produced by Artificial Intelligence (AI). This study, carried out at the University of Reading in the UK, is part of an initiative by university administrators to assess the risks and benefits of AI in research, teaching, learning, and assessment. As a consequence of their findings, updated guidelines have been distributed to faculty and students.
The researchers are calling on the global education sector to follow the example of the University of Reading and other institutions now developing new policies and guidance, and to do more to address this emerging issue.
In a rigorous blind test of a real-life university examinations system, recently published in the peer-reviewed journal PLOS ONE, the researchers found that AI-generated submissions went largely undetected and, on average, earned higher grades than real student work.