If AI Misconduct is Suspected
Suspected misuse of generative AI can be a sensitive and uncertain area. While academic integrity must be upheld, students are still learning the boundaries of appropriate AI use, and many may not yet recognize when their use crosses into misconduct. The best response begins long before an issue arises, with proactive steps early in the semester and a thoughtful, student-centered approach if concerns emerge.
Early in the Semester: Set the Stage
- Get to know your students—their stories, their writing style, and how they approach work in your class. Familiarity with students’ voices makes it easier to notice when a submission seems out of character.
- Collect brief writing samples through low-stakes assignments, either in class or online. These early pieces help establish a baseline, and low-stakes work is less likely to be AI-generated.
- Clearly define your AI expectations. Share your policy on AI tool use in your syllabus and assignment guidelines. Transparency helps students understand boundaries and reduces confusion.
- Retain copies of student work from the start of the course to use for comparison if concerns arise.
When You Suspect Misuse
- Don’t rely solely on AI detection scores. These tools have high false positive rates and are not reliable as standalone evidence. Some detection tools also show bias against multilingual or non-native English speakers. A score may raise a flag, but it is not proof.
- Compare with prior work. Look for differences in vocabulary, tone, sentence structure, or depth of thinking. If the new submission feels overly generic or inconsistent, note specific examples.
Meeting with the Student
- Request a conversation. Frame the meeting as a check-in rather than an accusation. Approach with curiosity and assume good intent.
- Begin with questions. Instead of saying their work is suspect, ask for clarification about “muddy points” in their submission. You might say, “I found this section intriguing—can you tell me more about what you meant here?” or “This is a complex idea. Can you walk me through how you arrived at it?”
- Assess understanding. If the student can explain their ideas clearly and with confidence, their work may well be genuine. Proceed with good judgment.
- If concerns persist, share your observations. If the student struggles to explain the work or seems unfamiliar with its content, raise your concerns gently. Show examples of how this writing differs from earlier work, and invite the student to reflect on why this might be the case.
- Use AI detection scores sparingly and cautiously. They may help support your case, but they should not be your first or only piece of evidence.
- If the student admits using AI improperly, you are within your rights to apply academic consequences, as appropriate under your institutional policy. However, consider approaching the situation with grace. Many students may not fully understand that their use of AI crossed a line; this can be a teachable moment. You might offer an opportunity to revise the assignment or resubmit the work under supervision, while still documenting the incident as required.
- If the student denies misconduct but you have strong reason to believe it occurred, explain your concerns and your regret about the consequences that must follow. Document the evidence thoroughly in case of a grade appeal.
- Notify your department chair and other relevant administrators. Keeping others informed ensures consistency and support in handling the situation.
References and Resources
- ChatGPT. OpenAI, 22 July 2025, chat.openai.com/chat. Assisted with revising, synthesizing sources, and editing this page.
- AI detectors: An ethical minefield
- Coley, Michael. “Guidance on AI Detection and Why We’re Disabling Turnitin’s AI Detector.” Vanderbilt University, 16 Aug. 2023.
- Video: How to Have Conversations with Students You Suspect Have Misused Generative AI (run time: 28:00). Slides are available for later reference.
Download these suggestions as an accessible PDF document.
How to Avoid Instances of Misconduct
There is no panacea that will keep students from misusing Gen AI (or from other types of cheating). However, we can incorporate some instructional measures that may decrease misuse.
Discuss the purpose of education
Early in the semester, talk about why students are here in higher education. If they are here to learn material and skills that will help them succeed in a future career, it's important that they engage in the "struggle of learning," even when it is hard and uncomfortable. This means not using AI when the learning needs to be done on their own.
Discuss academic integrity
Looking beyond the misuse of Gen AI, explore with your class who is harmed when students are dishonest about their work. (Consider not just the students themselves, but also their peers, the institution, their future coworkers, and anyone else they may one day serve.)
The Integrity Game is a free European resource you can assign students to "play" before or during class. (Students start with a case on the home page and choose their own adventure.) While AI is not incorporated into these cases, you can fold discussion of AI use into a debriefing discussion with your class.
Set course expectations together
Create an early-semester activity where students contribute to a Gen AI policy for your class. What are their expectations, and what do they think constitutes "cheating with AI?" You may not agree with their responses, which can be a good starting point for further discussion. However, your students may also surprise you with their suggestions!
Be sure to have a clear syllabus policy
It's hard to have a discussion with students you believe have misused Gen AI if you haven't explained clearly what constitutes misuse in your syllabus. We offer some suggestions for how to do this on the Syllabus page of our Gen AI Toolkit.
Foster an environment of trust, not policing
This article provides a thoughtful discussion of the harmful effects of relying on AI detectors, which are not reliable enough to serve as conclusive evidence of AI misconduct.
Falsely accusing students, in addition to fostering an atmosphere of mistrust, creates an environment of policing rather than mentorship. These conditions are far less conducive to the good learning we wish to support in our classes.
Despite the availability of AI detection in D2L, try to avoid using these scores as your "gotcha" evidence. (And remember: while they may be correct in many cases, every once in a while they falsely flag human writing as AI-generated. Would you want to be in that student's shoes? How could they prove otherwise?)
Instead, follow some of the suggestions above and below in this accordion to create a positive course atmosphere. In addition, research indicates that, in comparison to AI-generated writing, human writing is
- Easier to read
- Less complex or syntactically dense
- Longer
Adjust instruction and strategies
See our Advice & Strategies page of this Toolkit for more ideas!
This webpage from Northern Illinois University's CITL also offers some good advice; scroll to "What You Can Do Instead/Rethink Assessment."
Have other suggestions? Email us at teaching@etsu.edu so we can consider including them!