Here I am, still shouting from the rooftops about “AI-detection” and why it’s so harmful to students

whitney gegg-harrison
5 min read · Aug 22, 2024


I’ve been working on a particularly distressing case recently, providing support in the appeals process to a student at another institution (which I am being careful not to name!) who was (I believe) falsely accused of submitting AI-generated work in one of their classes. Since I’ve been writing publicly about my opposition to the use of AI-detection tools, this student reached out to me for support. I’m going to try to describe here what I find so distressing about this case, without revealing any identifying details.

This student had been proactive about documenting their process, in part because they had read articles about how L2 writers (like them) are more likely to be falsely flagged as “AI”, and as a result, decided to screen-record themself writing their papers last semester in case that ever happened to them. One of those short papers got flagged by Turnitin as “30% AI” and was referred to the academic integrity office. I’ll note here that in the little corpus study I did last summer, which I gave a talk about at CCCCs this year, several of my old essays were flagged as “30% AI” by Turnitin’s detector; Turnitin even admits that the false positive rates are especially high for items flagged at “20%” or less, and while they are not very transparent about how they test these things, I suspect that “30%” isn’t much different on that front.

The academic integrity office wouldn’t accept the student’s screen-recording as evidence, because, they said, the student could have been transcribing from ChatGPT in another window; I’ve watched the whole almost-four-hour-long video and, y’all, it’s a bog-standard example of a student writing, deleting what they’ve written, writing it a different way, writing more stuff, going back to earlier stuff and changing it…exactly what I’d expect to see. (Also, 3+ hours of watching someone write is *boring*.) They also said that the student’s handwritten notes and outline (which clearly connect to the final essay) could have been transcribed from ChatGPT, and they rejected the student’s evidence demonstrating that the flagged paper was similar in terms of writing style, word choices, etc., to ones they’d submitted that hadn’t been flagged; after all, the university argued, those essays could still have been AI-generated. No, I’m not kidding, that’s literally the argument — I’ve seen the transcript of the meeting. It’s infuriating!

(Seriously, how is a student ever supposed to prove that something is their own writing if every piece of evidence they offer is explained away as “well, that could be AI, too!”? I’m so troubled by how much AI has destroyed our ability to trust each other, and I’m especially concerned about students who can’t point to earlier college writing as evidence of their writing style, because these tools came out nearly 2 years ago, when they were still in high school, and a skeptical person can simply dismiss everything they’ve written since as “possibly also AI”.)

The evidence the university DID use in declaring this student guilty also infuriates me, and it’s the main reason I’m writing this post: I want everyone to know NOT TO DO THIS.

They took the professor’s prompt, which was quite prescriptive about the format the essay should take but gave students the opportunity to choose a particular focus. They updated it with some details to reflect this student’s particular choice, then prompted ChatGPT to produce an essay, which they argued had “striking similarities” to the essay this student submitted. Y’all…the “similarities” they’re pointing to can literally ALL be explained by the prompt and the choice of topic itself. Literally. If you give students a prompt that lays out the exact format a response essay should take (which is arguably a good thing to do, in terms of transparent assignment design), that prompt will also produce an essay with that format when given to ChatGPT. This isn’t actually suspicious; it’s just how LLM-based chatbots work!

And not only that, they flagged particular words and phrases in this student’s essay as ones “frequently produced by chatbots”, and y’all, this list is so terrible it would be funny if it weren’t hurting this student. Things like “in this essay, I will”, which is so often used in student writing that it’s become an online trope/meme! Utterly normal words of English like “significant” and “moreover” are flagged, too. It’s ridiculous. I’ve been putting together a little corpus analysis based on texts in MICUSP (which are from 2010–2012, so hopefully immune to the “it could still be AI!” crap) to show just how wrong-headed this list is. (Spoiler alert: students have been using these supposed “words frequently produced by chatbots” in their academic writing since at least 2010!)
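(If you want to run a similar sanity check on a corpus of your own, here’s a rough sketch of the kind of thing I mean. It assumes the papers are saved as plain-text files in a local folder; the folder name and the short phrase list below are just placeholders, not the actual list from this student’s case.)

```python
import re
from pathlib import Path

# Placeholder path: a local folder of pre-chatbot student papers saved as .txt files.
CORPUS_DIR = Path("micusp_texts")
# Illustrative phrases only, not the actual list used against this student.
FLAGGED_PHRASES = ["in this essay, i will", "significant", "moreover"]

def doc_frequency(corpus_dir, phrases):
    """Count how many documents in the corpus contain each phrase at least once."""
    counts = {p: 0 for p in phrases}
    files = list(corpus_dir.glob("*.txt"))
    for txt_file in files:
        text = txt_file.read_text(encoding="utf-8", errors="ignore").lower()
        for p in phrases:
            # Whole-word/phrase match, so "significant" doesn't also count "insignificant".
            if re.search(r"\b" + re.escape(p) + r"\b", text):
                counts[p] += 1
    return counts, len(files)

if __name__ == "__main__":
    counts, total = doc_frequency(CORPUS_DIR, FLAGGED_PHRASES)
    for phrase, n in counts.items():
        print(f'"{phrase}" appears in {n} of {total} papers written in 2010-2012')
```

Document frequency (how many papers contain the word at all) is enough for this purpose; the point isn’t how often these words appear, just that they show up in plenty of student writing from long before chatbots existed.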

I know it’s not my job to defend every student who gets caught up in these cases just because I’ve been pretty public about my views on “AI detection”, and I need to not let it consume me. I know that part of why it consumes me is that I identify so strongly with these students — I know that it’s likely only a matter of time before someone accuses me of submitting AI-generated work, and then what am I to do? (I’d share artifacts of my writing process, that’s what I’d do…and that’s why this case upsets me so much, because the student DID do that; they did exactly what I’d recommend someone do if they wanted to be able to demonstrate that they were the author of their work, and they’re still not believed.)

I don’t know how many institutions are using approaches like these (hopefully not many!), but I want to scream from every rooftop I can possibly access that THIS IS NOT OK.

It’s fine to say you don’t want your students to use generative AI in their assignments and to penalize them for violating your policy if they do. I’m actually quite sympathetic to that as someone who cares deeply about students doing their own learning (though I also think it’s possible to create a classroom where students are doing their own learning while engaging with AI tools in various ways, and I’m not inclined towards punitive approaches in my classroom).

But: if you’re going to ban AI in your classes and penalize students for using it, you owe it to your students to teach them how to document their process in such a way that you would be satisfied that they truly are the author of their own work. (You should really be doing that anyway; a focus on process rather than just the final product is a good pedagogical approach for other reasons, too.)

So anyway, this is me screaming from a rooftop I happen to have access to, hoping that people will listen.

EDIT (10/1/2024): GOOD NEWS! The appeal went through and the case was decided in the student’s favor! If only I could produce a 60+ page analysis/advocacy letter for every student in this position, but alas, there’s only one of me and I am tired.
