You have been waiting for this chapter. Every student reading this book has the same nagging question: what counts as cheating, and what doesn't? The internet is full of bad answers — either "all AI use is cheating" or "AI is just a tool, anything goes." Both positions are wrong. The actual answer requires you to think for ten minutes about what you're trying to learn, and that's why most students avoid it.
This chapter is the ten minutes.
The wrong question
"Is AI cheating?" is the wrong question. It treats AI as a single thing with a single moral status, which it isn't. Asking AI to write your essay is different from asking AI to quiz you on the chapter. Pretending these are the same is what gets students into trouble — they either use AI for everything or for nothing, when the real answer requires sorting case by case.
The right question is: what is this assignment trying to teach me, and does this use of AI help me learn it or skip the learning?
That sentence is the whole framework. Re-read it. Memorize it. It will resolve 95% of the cases you'll ever face.
The three categories
Every conceivable use of AI on schoolwork falls into one of three buckets.
Category 1: clearly fine
Uses that help you learn the thing the assignment is trying to teach.
- Using AI as a tutor to explain concepts
- Generating practice quiz questions for yourself
- Asking AI to critique your draft and identify weak arguments (not rewrite them)
- Brainstorming essay topics or research directions
- Asking AI for analogies or alternative explanations
- Using AI to plan your study schedule
Use them aggressively. They make you smarter.
Category 2: clearly cheating
Uses where AI does the thing the assignment was supposed to teach you. These are cheating regardless of what the syllabus says.
- Submitting AI-generated prose as your own writing
- Submitting AI-generated code as your own when the assignment is a programming exercise
- Having AI take your online quiz or exam
- Having AI write your discussion board posts
- Generating fake citations and submitting them
- Using AI to translate in a language class where translation IS the assignment
The gut check: would you want your professor to see the chat history?
Category 3: the murky middle
Where most real situations live and where you actually have to think.
- Asking AI to outline an essay before you write it. Probably fine; check the syllabus.
- Using AI to fix grammar in writing you produced. Mostly fine, but heavy editing crosses a line.
- Asking AI to help debug code you wrote yourself. Almost always fine; it's the same kind of help you'd get from a TA.
- Having AI write a single difficult sentence in an otherwise-yours paper. Usually too far. Rewrite it yourself, even if the result is worse.
- Generating first-draft notes from a lecture you missed. Murky.
- Asking AI to rewrite your introduction more engagingly. Usually too far.
- Asking AI to suggest counterarguments you should address. Fine if you engage with them in your own words.
Most violations come from convincing yourself something was Category 1 when it was Category 2.
The "30-minute rule"
Here is a sanity check that resolves most murky cases. Ask yourself:
If I did this assignment without AI, would it take me no more than 30 minutes longer than doing it with AI?
If the answer is yes, you're probably fine. AI is acting as a force multiplier on work you could do yourself.
If the answer is "no, this would take me hours longer without AI" or "I genuinely couldn't do this without AI" — pause. The assignment is probably testing skills you don't yet have, which means it's doing exactly what it's supposed to do, and AI is helping you skip the learning.
This rule isn't perfect, but it catches a lot of bad decisions before you make them.
How to read a syllabus's AI policy
Three policies are common right now. You should know which one your class uses before you do anything.
Policy A: AI is prohibited entirely. Anything generated by AI counts as plagiarism. These classes usually exist because the assignments are skill-building (intro composition, foundational math, language learning). If you're in this class, the right move is simple: don't use AI for anything graded, including outlines and edits. Use AI for studying outside the assignments, and stop there.
Policy B: AI is allowed with disclosure. You can use AI but you have to say how. Usually this means a paragraph at the end of the assignment listing what you used AI for. Take this seriously — being honest about your AI use is how this policy stays alive. The professors offering this policy are doing you a favor; reward them by being precise.
Policy C: AI is allowed without restriction. Treat this with caution. Even here, submitting AI-generated prose as your own work is still cheating in spirit: the syllabus permits it, but it skips the learning, so the framework above still applies. The professor is trusting you to use the tool well, not testing whether you'll cheat.
If your syllabus is silent on AI — which still happens — assume Policy A and ask the professor before you do anything else. Silence does not mean permission.
Talk to your professor early
This is the most underrated move in the whole AI ethics question. Asking "what's your policy on using AI for outlining?" early in the semester does three things:
- Establishes you as a thoughtful student who takes integrity seriously
- Gets you a clear rule you can rely on
- Removes the ambiguity that causes most violations
Office hours, week two, three minutes:
"Hi professor, I want to be clear about your AI policy. The syllabus says X. Could you walk me through what counts as fine versus what crosses the line? I want to make sure I'm using it the way you'd want me to."
You'll be in the top 5% of students who bothered to ask, and those are the students professors advocate for later when recommendation letters come due.
When you're not sure, don't
The final rule. If you're sitting at your desk at 11 p.m. wondering whether some particular use of AI is okay, the answer is almost always "don't, finish the assignment without it, and ask the professor about the gray area tomorrow." The downside of being conservative is one B+ on one assignment. The downside of being wrong is academic probation.
For a fuller treatment of these issues — including how AI ethics applies to research, professional contexts, and your eventual career — AI Ethics and Responsible AI is the longer course. But the framework in this chapter, applied honestly, will keep you out of almost every kind of trouble.
You're not trying to figure out how much AI you can use without getting caught. You're trying to figure out how much AI you can use while still becoming the kind of person you want to be. Those are the same question, and the answer is in the framework above.

