A Behavioral Science take on ChatGPT Cheating: What Can Instructors Do?

Photo by Alex Knight on Unsplash

Ask ten teachers or professors about ChatGPT, and you are likely to hear ten different responses.  Some of my colleagues seem willing to embrace it: they view it as the future reality, are exploring it as a tool to promote thinking and improve writing, and lump opponents of the chatbot in with those who once wished to ban the calculator.  Others seem flummoxed by AI, fearing that it will make cheaters of all students, end original writing and any reasonable way to assess it, and, in extreme cases, spell the doom of higher education.  Forgive our divergence of opinion – the technology is new, we’re all scrambling to keep up, and we’re a stubbornly independent lot anyway.

My present focus

I’ll lay my cards on the table: I am not interested in using AI to teach writing or improve thinking, though I don’t begrudge colleagues who are.  My concern at this moment (and in this blog) lies exclusively in trying to prevent students from using the chatbot as a substitute for their own original writing.  If you share this concern about plagiarism – about students dishonestly passing off the work of the chatbot as their own – please read on.

Why is this my focus?  I think the calculator comparison is flawed.  If you’re a screenwriter or an attorney, I can perhaps condone the use of ChatGPT to let you do more in less time, with fewer errors, because you already have expertise.  You know how to do your job.  But my students aren’t a finished product by any measure.  To reach their full potential, they need to be able to think critically, to craft original arguments, to research a topic successfully, to incorporate evidence skillfully, and to communicate a certain logic of thought.  To develop these dispositions, students need to practice them independently.  Having someone or something else generate arguments doesn’t build that skill in oneself.  My assignments are intended to assess these skills along with students’ general understanding of the material.  I can’t accurately evaluate their understanding if their writing isn’t their own, and call me old school, but it delegitimizes grading when students are awarded grades for work they didn’t create.

What to do then?

There are three principal ways to discourage students from using ChatGPT or future AI to cheat.  The first is to rule by fear – to police the use of chatbots.  At this point, AI-detection software is unreliable, but students may not know this, as claims and reports about identifying ChatGPT’s work vary widely.  I see no problem with including a deterrent statement on the syllabus such as, “I reserve the right to run any suspicious papers through detection software.”  At an elite high school in my hometown, administrators warned students that their work would be subject to such checks and, according to the detection tools, found a very low rate of chatbot usage.  It would also not be a bad idea to tell your students that you plan on submitting the assignment to a chatbot yourself, so that you understand how it answers the question and can recognize submissions that mimic it.

A second way to dissuade chatbot use involves modifying assignments or the writing process to make them ChatGPT-proof, or at least more difficult to complete with AI.  Some strategies are to have students connect the material with something deeply personal, or to write about a local issue or a very recent news event – instances where, at least for now, the bot is out of its comfort zone.  Process changes include asking students to turn in an outline or drafts of their work, asking students to summarize or explain their work (perhaps without warning), requiring them to present their work orally, or forgoing technology entirely and having them handwrite essays in class.  None of these is an ideal solution, however – some are not foolproof, and others require a greater time investment or sacrificing preferred methods of assessment.

Confession: I am not an expert on detection software or on how to restructure assignments to be more ChatGPT-proof.  My aim is to help you apply knowledge from the behavioral sciences to discourage ChatGPT cheating in your classroom.

There is one key insight the behavioral sciences offer: people will cheat to the extent that they can do so and still view themselves as wonderful human beings.  The behavioral economist Dan Ariely refers to our ability to rationalize dishonest, self-serving behavior as the “fudge factor.”  But there are limits to our cognitive flexibility – if the cheating seems too extreme, we cannot rationalize the dishonesty.

Critically, two features of the current student experience encourage ChatGPT cheating because they ignore the fudge factor: (1) the lack of clear and consistent guidelines denouncing the use of chatbots; and (2) pluralistic ignorance – perceptions that everyone else is using AI and that abstaining means falling behind or suffering an unfair disadvantage.

The lack of clear and consistent guidelines

The first issue is problematic because, as instructors hold divergent views on chatbots and their permissible use on assignments, students receive mixed messages.  Some teachers may require AI’s use on papers, others may expressly forbid it, and still others may not mention AI at all.  I embarrassingly admit to falling in that last category until this upcoming semester.  All this variation increases the fudge factor – students can convince themselves that using ChatGPT when it is not explicitly prohibited falls within the realm of being an honest student.  To fight the fudge factor, the policy against AI must be explicit and drawn with a hard line in the sand.  Students must be told that using the bot at all, for any part of the paper, constitutes cheating – that there is no such thing as only a “little” cheating with AI.  Even better, work on framing suggests labeling such students as cheaters, because people are highly motivated to avoid defining themselves in such negative terms.  The key is to block students’ ability to rationalize cheating as acceptable and within the bounds of honesty.  A comprehensive institutional policy across all courses would help in this regard.

As it stands, the fudge factor is large.  One recent study found that although students expressed positive attitudes toward AI, many feel anxious and lack clear guidance on how to use it.  They weren’t clear on where the boundary for cheating lies, nor did they know whether their institution had any rules or guidelines for using AI responsibly.  It’s a fudge factor nightmare.  One can easily imagine a student who cheats with the chatbot convincing themselves that they’re not really acting dishonestly – that it must be okay because it’s used in some classes, that their school doesn’t seem to have a policy, that the teacher hasn’t mentioned it, and so on.  Or perhaps they feel honest because they only used AI to help with the paper, not to write the entire document.

And there is another layer here: a fundamental disconnect between students and their perceptions of the value of writing assignments.  According to Charlie Murr, a rising sophomore at Purdue University, for many students the fudge factor is enhanced by beliefs that written assessments are not foundational to education but instead serve as busywork.  As Murr states, “many students believe there are no long-term consequences of not learning the material if we can just Google it or use ChatGPT for it.”  From our conversation, I surmise that in addition to laying out clear, absolute guidelines against chatbots, instructors need to explain that the goal isn’t simply to write papers but to learn how to write papers well – and that ChatGPT short-circuits much of that process.  Students won’t see themselves as cheaters unless they understand which skills and attributes they forfeit developing when they rely on chatbots.  A tough sell, perhaps, but otherwise they may rationalize cheating while still viewing themselves as honorable.

All of this may help explain why, at present, one study found that only 51% of students believe using AI for exams or papers is cheating.  Only 41% of students believe using AI for assignments is “morally wrong.”  Good luck discouraging its use as long as those views remain unchanged.

Perceptions that everyone else is doing it

The reality is that we do not have a consistent, reliable understanding of how many students use AI to cheat, how often they do so, and to what extent in their assignments.  But the number may not be as high as students would guess.  If a May 2023 article in the Chronicle of Higher Education with the headline “I’m a student.  You have no idea how much we’re using ChatGPT” is any indication, students believe AI cheating is rampant.  Here is the article’s chilling opening: “Look at any student academic-integrity policy, and you’ll find the same message: Submit work that reflects your own thinking or face discipline. A year ago, this was just about the most common-sense rule on Earth. Today, it’s laughably naïve.”

But the data – admittedly quickly outdated, sparse, and reliant on self-reporting – don’t necessarily corroborate such a bleak view.  One survey of 1,000 undergraduate and graduate students found that only about 20% admitted to using AI for their coursework.  A 2023 study reported that 30% of college students admit to having used ChatGPT for their homework.  Of course, other estimates are higher, depending in part on the populations surveyed.  Online course provider study.com found that over half of students admitted to using ChatGPT to write an essay.  Interestingly, almost three-quarters said they wanted the chatbot banned, which tells me they fear that many of their peers are using it.  The situation becomes a prisoner’s dilemma: if you do the right thing and eschew AI for papers while everyone else uses it, they ultimately gain at your expense.

The fudge factor looms large here as well.  If students believe most of their classmates are using chatbots, doing so begins to feel more acceptable and less dishonest.  Under the principle of social proof, we often judge a behavior’s appropriateness by how common it is.  To fight rationalizations that “everyone else is using it,” why not share data with students indicating that only a minority are using it to write papers and complete assignments?  In other areas, such as energy conservation, people conserve more when they see how their behavior compares to that of other residents, or even when they simply learn that their neighbors care about conservation.  Our behavior often falls short of our own standards because we imagine other people’s behavior falls short, or that they don’t really value those standards.

The dastardly side of me wonders whether an anonymous reporting system, whereby students could secretly report peers who have admitted to using ChatGPT, would be beneficial.  I suggest this not with the expectation that it would produce a list of students to be investigated, but because it would discourage students from sharing their use of AI with classmates, thereby breaking through pluralistic ignorance and the sense that everyone else is doing it.

Reminders of ethical standards

Research by Ariely and colleagues has shown, across different settings, that reminding people of ethical standards lowers cheating.  Princeton students, for instance, did not cheat at all in one experiment when asked to sign their honor code, but cheated just as much as their counterparts from Yale and MIT when no mention was made of it.  Why not ask students to sign a statement that they did not use AI to help write their paper – something along the lines of “I promise that the work I am turning in is entirely my own and that I did not use ChatGPT or any AI to assist me”?  Such pledges would work even better if institutions developed strong norms against cheating (there is no better time!) and if students were asked to sign the pledge before they began working on the assignment.

Of course, not every student will be moved by such moral reminders.  Those who are barely invested – perhaps taking courses they care little about, resenting the rising cost of education, feeling frustrated by their economic prospects, or just generally disillusioned – may be unmoved by any of the interventions I have suggested.  If you feel bitter or trapped, the fudge factor may not mean much.  But for countless others, the hope is that it will.