At the moment, generative AI like ChatGPT is capable of writing essays and answering highly complex questions with surprising nuance. This has created a major problem for higher education – students are now regularly using AI to cheat, and there is no reliable way to prove it.
Humanities departments are currently the most vulnerable, but there are many indications that these language models will soon become a significant problem in STEM departments as well. Speaking with instructors, I hear many different strategies for dealing with this form of cheating – some want to fight it, while others are choosing to embrace it.
The first general strategy is to make it difficult to use AI in the first place. An instructor may try to develop course material in a way that would make it less likely that ChatGPT would be able to answer questions about it.
I’ve tried this method myself but it has severe limitations. ChatGPT is capable of answering specific questions about highly tailored material. For example, I asked ChatGPT to explain a problem that “Professor Perez” presented in class against a particular theory. It was able to accurately guess, partly because there are only so many problems that have been presented against a given view. In cases where the chatbot gets the answer wrong, it’s often still good enough to warrant partial credit.
Another strategy that has gained traction is incorporating AI into the learning and assessment experience. This can be done by having students critically engage with ChatGPT’s answers to course questions, or by having them use the chatbot to create an essay outline and then write the paper themselves. The thought here, I assume, is that we’re teaching students that it’s perfectly fine to use AI as a supplementary resource. The problem is that nothing stops students from using that same resource to simply write the rest of the paper for them.
Some of my colleagues are now considering having students hand-write all assignments in class. While this would certainly prevent students from using AI, it also comes with several drawbacks. Class time is valuable and having students write their homework in class would significantly cut back on the time available for actual teaching. It’s also just not feasible to have students write entire papers during class time.
It’s actually not terribly difficult to discern when a student has used AI on an assignment. GPT simply doesn’t write like an undergrad, and it will often employ bizarre reasoning or state outright falsehoods. The problem is that unless the student confesses, it can be nearly impossible to prove that they cheated.
Some instructors have turned to online AI detectors to catch instances of cheating. Unfortunately, these detectors often classify completely original work as AI-generated, so they aren’t much help in proving that an assignment is not the student’s own. Thankfully, when I ran this very article through a detector, I was relieved to see that it calculated a 0% chance that it was AI-generated.
Perhaps we shouldn’t worry about students using AI to cheat because they won’t really end up using the knowledge they’re meant to be acquiring anyway. We might think that much of college education is useless. Of all the math I learned in college, I’ve used only a small fraction, and I’ve never employed what I learned in a course on “Magic, Religion, and Witchcraft.”
But college is about more than gaining particular pieces of knowledge. A bachelor’s diploma also signals to prospective employers that you can follow directions and sit in one spot for hours to complete a task. A student who cheated their way through college is unlikely to be in a good position to become a valuable long-term employee.
Many of us teaching at the college level are optimistic that if we just talk to our students about the importance of learning and enriching their lives, we can help them see that they are only hurting their future selves by cheating. But this may be overly optimistic. Whether they submit their own work would still rest entirely in students’ hands.
We simply cannot allow this to be the case. Students will cheat if left to their own devices – even after a fiery speech about the value of knowledge. Therefore, to protect students from themselves, the onus is on the colleges and on the instructors to limit the opportunities that students have to cheat, even if it entails unfortunate trade-offs.
Rafael Perez is a doctoral candidate in philosophy at the University of Rochester. You can reach him at [email protected].