Maintaining the value of writing and thinking in the age of LLMs

whitney gegg-harrison
7 min read · Jun 16, 2023


I hadn’t intended to drop off the map after writing my post in February, but just a few days later, my kid brought COVID home from school and I’m only just now, in mid-June, feeling like I have anything like the kind of energy I had before. (Finishing out the second half of the semester while dealing with COVID and its aftermath was NOT fun!) Of course, lots and lots of things have changed in the world of LLMs and generative AI since my February post, and I’m both gratified and horrified to see that much of what I had to say about the inherent flaws of AI detection has been borne out by the real-world experiment that Turnitin imposed on students and teachers this Spring.

Because of the particular sorts of expertise I have, I’ve also been asked to put together workshops for faculty at my university who need help thinking through how to (re)design their classes in light of the developments in generative AI, and in particular, thinking about writing assignments in the context of LLMs that produce writing that cannot be reliably distinguished from student writing. And what I keep coming back to are these questions: why are we having our students submit writing? What is it that we’re hoping that writing will help them to learn, and what is it about their learning that we’re hoping to be able to assess via their writing? We need clear-eyed answers to those questions before we can start deciding whether and how to adapt our assignments.

This piece in The Conversation captures so much of my own thinking about generative AI and writing. I have been pondering lately why it is that I feel no desire whatsoever to use tools like ChatGPT, and while a big part of that is that I’m very aware of the ethical issues inherent to this technology (on that, see this response to the “AI Pause Letter”), what I’m realizing is that one of the reasons I don’t personally see a place for tools like ChatGPT in my own writing process is that I am someone who writes in order to think — like, writing is how I figure out what I think, and how I grapple with tricky things and make sense of them. (I doubt this surprises anyone who knows me!) So the idea of replacing any part of that process with something automated just makes no sense to me at all. It feels like missing the point — if I’m not the one doing the thinking, then the writing doesn’t have any value to me. But this is not a judgment on anyone who does see a valuable place for those tools in their own writing process! I’m actually quite curious to understand the ways in which the tools affect writers’ processes (for better and for worse)!

One idea that keeps popping up is that what LLMs do is “democratize” education, eliminating “busywork” by allowing everyone to start from a place where writing skills aren’t getting in the way of their ability to do whatever it is that they actually need to do. And perhaps that is a valuable use of these tools in some contexts. I think there are likely productive ways of using the technology, but as a cognitive scientist, I’m very skeptical about the claims I’ve seen arguing that using tools like ChatGPT is a way of eliminating “busywork” that will open up space for students to do higher level work. To be clear, I do think that there could be some truth to the claims, perhaps most obviously for assignments that require writing but aren’t about learning to write; if writing isn’t the learning goal, then maybe a tool like ChatGPT is leveling the playing field and allowing students who don’t have strong writing skills to show that they do have a good conceptual understanding.

But that’s not an answer that works for writing professors: the thing we want our students to do is actually the writing itself. For us, writing isn’t busywork that distracts from the actual learning goal…it IS the learning goal. And even when writing isn’t actually the learning goal, I think it really depends on how tools like ChatGPT are being used, because if the learning goal is to see how well students understand a concept but they’re able to pass muster with a piece of writing that required no thinking from them, then you aren’t measuring their understanding at all. And the claims about the benefits of eliminating “busywork” depend a lot on what we mean by “busywork”.

The only way to build skills is to do the work, to practice, and unfortunately “practice” often feels like “busywork” to students if we instructors aren’t thoughtful about creating meaningful arenas for practice. In my non-academic life as a violinist, I am currently engaged in my 7th round of “100 Days of Practice” (here’s a long post I wrote about the 100 Days project after my 5th round), so practice is something I think about quite a lot. One of the things that can be very frustrating for new violin students is that a great deal of practice needs to happen in order to simply gain technique before the really fun music becomes accessible; you have to know how to hold the instrument, how to have a solid “left-hand frame”, how to manipulate and coordinate your right-hand fingers, wrist, elbow, and shoulder to produce a straight bow stroke, etc. There are all sorts of tricks that good teachers use to make the technical practice feel fun, and it’s been really interesting to learn some of these techniques by observing my daughter’s teachers (she started at just shy of 4, whereas I was already 10 before I started, so I’ve learned a lot about early childhood music education as a parent!).

But some students, understandably, don’t see the value in doing scales and technique-focused etudes…they want to get to the “real” music. And while there are definitely ways to incorporate “real music” into technique practice, if you barge ahead and play technically demanding music without having done the work that should have led up to it, you simply won’t have the skills you need, and it will show. You won’t be able to make informed choices about how to play what you’re playing; you’ll find yourself scrambling to make your fingers go where they need to go or to do something interesting with the bow. You’re also a lot more likely to get injured!

Those who argue for incorporating “real” music into the violin learning process have a really good point: if you’re just practicing scales, then not only is that going to be boring, but you’re also never practicing applying the skills that you’re building in the context where you’ll actually be using them — what’s actually important is to be able to use the skills in the context of making music. And that’s what’s important for student writers, too — they need to practice thinking, and practice turning thoughts into sentences, and practice organizing those sentences into paragraphs, and practice revising those paragraphs, sentences, and thoughts…all so that they can produce writing that says what they want it to say and does what they want it to do in the context of the classroom and beyond. If they skip that practice, they’re going to lack the skills they need to critically evaluate whatever writing gets generated for them by ChatGPT, or Claude, or Bing, or whatever LLM they’re using, and that’s going to be limiting.

If you never practice doing your own thinking, guess what: you’re not going to build that skill. And that strikes me as very dangerous for the students growing up with these tools. It’s one thing to be a very experienced writer and thinker who decides to play around with LLMs, but we truly don’t know yet what the impact will be on students who never actually have to do the work that we did in the process of gaining our experience. Skill-building of any kind requires doing the work — there just aren’t any shortcuts. We learn through practice, by doing, and if we fail to practice foundational skills, we pay a price later. The problem, when it comes to writing, is that the way we ask students to practice foundational skills often feels very much like busywork. We have to make it meaningful and interesting, just like thoughtfully-chosen “real” music and well-designed but musically satisfying etudes do for young violinists.

So that is what we need to be doing as instructors: designing opportunities for students to practice skill-building in ways that do NOT feel like the kind of busywork best avoided by turning to ChatGPT. I was recently re-reading James Lang’s book “Cheating Lessons”, and though it was written a decade ago and as such doesn’t say anything about generative AI, I actually think it’s a GREAT reference for how to create these kinds of assignments. Really, the thesis of the book is that the same pedagogical strategies that make cheating less likely are the ones that also promote the best learning. Which I think is absolutely true! But it’s also perfect because it means that the book can be a great resource for people like me, whose concern is that LLMs/ChatGPT are going to negatively impact students’ learning and skill development, AND for people whose concerns are about using LLMs to cheat (and for that latter group, I hope you’ll also read the piece I posted to Medium back in February explaining why “AI-detection” tools should not be part of your anti-cheating efforts; more recent news articles have proven that piece to be prescient).

What I hope to do the rest of this summer is use this space to brainstorm ideas about how to apply the principles from Lang’s book (and other solid pedagogical principles) to the questions about how to approach AI in the writing classroom. So stay tuned — I’m definitely not done thinking about this!


whitney gegg-harrison

linguist. cognitive scientist. writing teacher. mama. knitter. violinist. vegetarian. working towards a better world.