Microsoft Copilot is an AI assistant designed to help you with a wide range of tasks. It can help you write your assessments, class activities, and rubrics, among other things. If you provide good prompts (insert link to the 'How to Write a Good Prompt' page here), Copilot will produce the information you want within seconds. It’s like having a smart friend who can assist you with almost anything.
However, Copilot and other AI tools do have limitations. AI might give information that is incorrect, outdated, or too general. It cannot understand personal context or your exact classroom needs unless you explain them clearly in your prompt. AI also cannot make decisions for you or replace your professional judgement. It does not know your learners or their specific abilities. Here is a look at the limitations of Copilot.
- Output is only as good as the input: Like other GenAI tools, Copilot depends heavily on the prompts it is given. Vague or overly broad prompts can lead to unhelpful or generic responses. Getting useful output often requires refining your questions and giving clear context.
- Not always accurate: These tools can sometimes “hallucinate”—producing content that sounds plausible but is factually incorrect or outdated. This is especially important when generating academic content, where accuracy and citation matter.
- Lacks pedagogical judgement: While Copilot can help create content, it doesn’t understand your students, your teaching style, or the curriculum’s intent. It can’t replace the professional judgement of an educator when it comes to what’s appropriate, engaging, or inclusive.
- Doesn’t understand local context: Copilot may not reflect local policies, cultural nuances, or institutional practices unless you explicitly include them in your prompt. This can be a risk if you’re relying on it for assessment design or course materials.
- Ethical and academic integrity risks: Using Copilot with students requires clear guidelines. Uncritical use may lead to overreliance or even academic misconduct if students copy AI-generated content without attribution or understanding.
- Don’t believe everything Copilot says: Context is important. Be careful when using Copilot not to expect more of it than its limitations allow. For example, if you ask Copilot to produce an image of a brown dog, it will produce an image of a brown dog, but it could be a cartoon, an emoji, or a photorealistic one unless you specify which you want.
- Don’t use Copilot to mark students’ work: Copilot lacks the human insight needed to assess nuance, context, and originality. It also can’t apply your institution’s marking rubrics or academic integrity standards reliably. Your students are looking forward to your feedback on their work, and it is important you provide feedback from your perspective to help them improve their writing.
- Don’t use Copilot to replace academic judgement: Like other GenAI, Copilot can summarise content or generate questions, but it doesn’t know your learning outcomes, programme goals, or the needs of your students. It can’t decide what’s pedagogically sound, inclusive, or developmentally appropriate.
- Don’t rely on it for factual accuracy without checking: Copilot can make up references, misstate facts, or give outdated information. Always fact-check and verify anything it generates, especially when preparing teaching content or assessments.
- Don’t use it to write assessment questions without review: While it can help brainstorm or draft quiz and exam questions, Copilot doesn’t understand your marking criteria or how students might interpret a question. Any Copilot-generated assessment content must be critically reviewed and tested.
- Don’t let it replace your engagement with students: Using GenAI to write all announcements, feedback, or discussion prompts can make communication feel impersonal. Students value authentic interaction and your expertise.
- Don’t use confidential student or staff data in prompts: GenAI tools—especially cloud-based ones—should not be fed sensitive, identifying, or proprietary information. This can violate privacy policies or data protection rules.
- Don’t assume it understands local or cultural context: Copilot is trained on broad, mostly global data. It might overlook New Zealand-specific content, cultural perspectives, or Te Tiriti obligations unless you explicitly include them in your prompts.
Hints and Tips
- MIT uses Copilot because of its privacy protections.
- MIT currently has limited institution-wide policies on GenAI; individual schools set their own rules for its use.
- Join a group or take part in discussions about the use of AI.
- Follow MIT policies and procedures regarding the use of GenAI, including Copilot.
What’s Next?
What is Copilot and How Do We Use It?
How to Access and Start to Use Copilot
How to Use Copilot to Write a Good Assessment
How to Spot Student Use of GenAI in Assessment Submissions
How to Use Turnitin’s AI Writing Detection Feature