AI Outperforms Students in Real-World “Turing Test”


A study at the University of Reading found that experienced exam markers failed to detect 94% of AI-generated exam answers, which also achieved higher grades on average than real student submissions. The researchers call for the global education sector to develop new policies and guidance to address this issue. The study emphasizes the need for a sector-wide agreement on the use of AI in education and highlights the responsibility of educators to maintain academic integrity. The University of Reading is already taking steps to incorporate AI into teaching and assessment to better prepare students for the future.

Research at the University of Reading shows that AI-generated answers often evade detection in academic assessments and can outperform student responses, urging a global update in educational AI policies and practices.

Researchers have discovered that even seasoned exam graders may find it difficult to identify responses produced by Artificial Intelligence (AI). This study, carried out at the University of Reading in the UK, is part of an initiative by university administrators to assess the risks and benefits of AI in research, teaching, learning, and assessment. As a consequence of their findings, updated guidelines have been distributed to faculty and students.

The researchers are calling on the global education sector to follow the example of Reading and other institutions that are already forming new policies and guidance, and to do more to address this emerging issue.

In a rigorous blind test of a real-life university examinations system, recently published in the peer-reviewed journal PLOS ONE, the researchers found that 94% of AI-generated submissions went undetected and that, on average, they received higher grades than real student submissions.