There are things AI can’t do yet, but ‘yet’ is the important word: AI is improving at an exponential rate.
10 ways ChatGPT may be less helpful than you think
- ChatGPT’s knowledge is limited to its training data, which ends in mid-2021. If you require information from after that date, ChatGPT will struggle to provide it.
- It doesn’t ask the user to clarify their prompts. Users need to be skilled at writing prompts (prompt engineering) and at critiquing its responses.
- It’s sensitive to the phrasing of prompts: it may claim to be unable to answer, yet answer readily after a slight rephrasing. Users need to be aware of this and learn how to rephrase prompts.
- It can’t do logical reasoning, so activities that require logical reasoning will be harder to do with ChatGPT.
- It can misinterpret the language it is working with when writing a response, e.g., taking a metaphor literally. Students need to be aware of this and fact-check its output. Activities written to leverage this limitation will be harder for ChatGPT to do.
- Its responses can lack insight and depth. It doesn’t understand the meaning behind the words it uses: it is a large language model trained to generate text by predicting the most likely next word in a sentence (a toy sketch of this idea follows the list). Activities that require insight and depth will be harder for ChatGPT to do.
- Responses can be factually wrong yet still sound plausible and coherent (hallucinations). Users need to be skilled at evaluating responses and fact-checking.
- It invents references that look correct but don’t exist (also hallucinations). Students need to check references and provide live links to them where applicable.
- It will sometimes respond to harmful instructions or show bias. Users need to be aware of this and develop their critical-thinking skills so they can evaluate generated responses and rewrite them where needed.
- Indigenous knowledge and culture that are held orally are not in the model.
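To make the ‘most likely next word’ point concrete, here is a minimal sketch of statistical text generation. It is a toy bigram model written for this article, not ChatGPT’s actual architecture (ChatGPT uses a far larger transformer neural network), and the training text is invented, but it illustrates the same core idea: the model produces fluent-looking text purely from word-sequence statistics, with no understanding of meaning.

```python
from collections import Counter, defaultdict

# Invented training text for the toy model.
training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
)

# Count which word follows which (a bigram model -- vastly simpler than
# the network behind ChatGPT, but generation is likewise driven by
# statistics over word sequences, not by meaning).
next_word = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_word[current][following] += 1

def generate(start: str, length: int = 8) -> str:
    """Generate text by always picking the most frequent next word."""
    output = [start]
    for _ in range(length):
        candidates = next_word.get(output[-1])
        if not candidates:
            break  # the model has no data on what follows this word
        output.append(candidates.most_common(1)[0][0])
    return " ".join(output)

# Prints fluent-looking but circular, meaningless text:
# "the cat sat on the cat sat on the"
print(generate("the"))
```

The output is grammatical because the statistics were learned from grammatical text, yet the model has no idea what a cat is; scaled up enormously, this is why ChatGPT’s responses can sound confident while lacking genuine insight.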