Surviving AI this term

Last spring, I was flooded with AI-generated exams as the term went on and more students learned what ChatGPT could do. So, for the fall, my options were either to abandon take-home essay exams/papers or to make them AI-resistant. (Attempting to make AI-proof assignments is likely a fool’s errand.)*

I opted for the latter, and I can report that my efforts were largely successful, at least as far as I could tell.

In playing around with ChatGPT, I found that it could easily summarize my assigned readings and answer my exam questions. But it would not use quotations from the readings. Even when I asked it to select the most important quotes from these readings, it simply refused to do so. (It’s now been several months since I’ve tested this, and it may have changed already.) Google’s Bard was the same. This seemed to me to be a feature of LLMs that I could exploit to make my exams AI-resistant.

First, I broke up my exams into two phases with separate due dates. I gave students the questions they’d eventually have to write essays on (for phase 2). But for phase 1, they just had to select 3-5 quotes from each reading that they planned to use in their essays. That was 20% of the exam. After they submitted them, I graded their quotes, checking whether they had merely picked random sentences and whether the selected quotes could actually be used to answer the essay questions. Students were even allowed to re-submit based on that feedback and select better quotes, if needed.

Then they had to write their essays, using those quotes. Because the quotes had to be in there, it was much harder for them to get ChatGPT to write their essays for them. When some tried it anyway, their essays stood out: typically there were no quotes in them, an obvious red flag when grading. And if TurnItIn’s AI checker (or ZeroGPT or some other similar service) also flags such an essay, then I’m much more confident it isn’t a false positive.

To be clear, ChatGPT can still generate essays that include the quotes, but the student has to work harder to get them. They have to revise the prompt significantly, instead of simply pasting in my essay question. At least one student tried it, but their essay still had telltale signs of AI generation that I, my chair, and my associate dean all agreed sufficiently demonstrated the student didn’t write it. Maybe some students got away with it, but if so, I believe they were few.

* This is primarily focused on my Intro to Philosophy course. For upper-level courses, my strategy will be different and will include some assignments that incorporate AI use.