Generative AI (GenAI) tools such as Copilot, ChatGPT, GrammarlyGO, and Google Gemini are now widely accessible to students. These tools can quickly generate text, summarise articles, and polish writing to a high standard. While AI can support learning when used ethically, it becomes a concern when it replaces original student work. For you, detecting AI-generated content in assessments is now part of maintaining academic integrity. This section looks at some ways of identifying the use of GenAI in your students' submitted assessments.
Examples of GenAI Use in Submissions
- Overly academic language that doesn’t match the student’s style
Example: A student who usually writes in a clear but simple style suddenly submits an essay that includes phrases like “This paradigm elucidates the intersectionality of post-structuralist critique and contemporary social discourse.”
This may be too advanced or abstract for the student’s known writing ability.
- Perfect structure with no deeper analysis
Example: A report follows textbook structure (introduction, discussion, conclusion) with no errors, but fails to provide any original interpretation, case examples, or reflection on local practice.
- Unrealistic improvement in a short time
Example: A student who received feedback on weak referencing and basic structure suddenly submits a flawless assignment with no errors, improved referencing, and an academic tone, without attending tutorials or seeking support.
- Generic content that avoids context
Example: An assessment on New Zealand health policies includes general information about healthcare systems but lacks specific mention of the NZ context, Māori health models, or local frameworks.
- Inclusion of incorrect or made-up references
Example: The reference list includes journal articles with no Digital Object Identifier (DOI) or that cannot be found. Some titles sound plausible but do not exist in real databases.
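When checking a suspicious reference list, a DOI can first be checked for valid syntax and then resolved against the public Crossref REST API. The sketch below is an illustrative aid under assumptions, not a definitive tool: the helper names (`looks_like_doi`, `doi_resolves`) are my own, the regular expression is only an approximation of valid DOI syntax, and a DOI that fails to resolve is a prompt for a manual search, not proof of fabrication.

```python
import re
import urllib.parse
import urllib.request

# Approximate pattern for modern DOIs ("10.", a 4-9 digit prefix, a suffix).
# A syntactically valid DOI can still refer to nothing.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$", re.IGNORECASE)

def looks_like_doi(doi: str) -> bool:
    """Cheap syntax check before going to the network."""
    return bool(DOI_PATTERN.match(doi.strip()))

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Ask the public Crossref API whether this DOI is registered.

    Returns False on any error (e.g. HTTP 404 for an unknown DOI,
    or a network failure), so treat False as "check by hand".
    """
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi.strip())
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False
```

The syntax check alone catches obviously malformed DOIs in a reference list; the network lookup should be used sparingly and only as one input alongside a manual database search.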
Things to Look Out For
- Sudden shift in writing quality
Compare with past work or in-class writing. AI-generated content is often more polished, structured, and error-free than previous submissions. This is a red flag.
- Lack of personal voice or specific examples
AI tools often generate broad content that avoids cultural references, lived experience, or course-specific material.
- Too much content in a short time
A student submits a 1,000-word essay within an hour of receiving the task. AI can write quickly, but students need time to research, plan, and draft.
- Overuse of formal connectors or transitions
Words like “moreover,” “consequently,” or “in conclusion” are used excessively. AI tools often rely on these patterns for flow.
- Incorrect or vague content that sounds confident
AI-generated text may include factual errors, made-up theories, or vague descriptions while still sounding authoritative.
- Matching AI detection patterns
AI-detection tools (e.g., Turnitin’s AI checker) may help, but results should be considered carefully. These tools are not 100% reliable and should be used alongside professional judgement.
- Unusual referencing or citation style
References may be formatted inconsistently or follow a style that we do not use at MIT (e.g., MLA instead of the expected APA 7). Some may even be entirely fabricated.
Hints and Tips
- While it is important to maintain academic standards, it is also important to talk to your students rather than assume misconduct.
- If you suspect GenAI use, ask them to explain their work or describe their writing process, including drafts and earlier versions of the assignment.
- Lastly, for ākonga misconduct, refer to the Akonga-Policy-2025-Formerly-Student-Regulations (1).pdf, Sections 13.3 and/or 15.
What’s Next?
What is Copilot and How Do We Use It?
How to Access and Start to Use Copilot
How to Use Copilot to Write a Good Assessment
How to Spot Student Use of GenAI in Assessment Submissions
How to Use Turnitin’s AI Writing Detection Feature