The Impact of Allowing ChatGPT on Student Scores and Test Completion Time in a Mid-Term Math Exam
Abstract
This study examines the impact of ChatGPT use on student performance in a mid-term mathematics exam, focusing on two research questions. The first is whether there is a significant difference in average total scores between students who used ChatGPT and those who did not. The findings indicate that students who used ChatGPT scored significantly lower on average than their peers who did not, with fewer high-achieving students and a higher proportion failing to meet the minimum performance threshold. The second is whether there is a significant difference in the average time taken to complete the exam between the two groups. Surprisingly, no notable difference in completion time was observed, challenging the assumption that ChatGPT users would either finish the exam faster or spend additional time comparing their answers with those generated by the model. These results highlight the need for further investigation into the role of AI tools such as ChatGPT in education, particularly their effectiveness in improving learning outcomes in mathematics.