It is important to fact-check the information you receive from AI tools to ensure it is accurate.
Approach any information the AI tool produces cautiously (be a critical reader).
AI tools should only be used as a supplement to your research, not as a replacement.
Check your prompts. The information you get out is only as good as the requests you put in.
AI should be used ethically and in accordance with university student regulations. If you are unsure, follow the instructions given by your lecturers.
Using AI in a way that undermines your skills as a university student means you will not be achieving your course's learning outcomes, which may affect your academic development.
Because the content generated by ChatGPT and similar tools cannot be reproduced by another person or linked to, we recommend that you reference outputs from generative AI, such as ChatGPT, in the same way that you would a personal communication or correspondence.
There is Harvard guidance on referencing generative AI on Cite Them Right.
Cite Them Right also has some background information on AI tools and academic work.
Risks with using information from ChatGPT
Information obtained from ChatGPT should not be considered a primary source and should be used in conjunction with other sources.
Additionally, since AI models can sometimes produce incorrect or biased information, it's crucial to verify information obtained from ChatGPT with other sources before including it in your work.
Use AI with CAUTION
Understand that Large Language Models (including ChatGPT) are designed only to summarise, predict and generate text. They won't do the thinking for you.
Never submit chunks of text produced by AI as your own work. Doing so may put you in breach of the academic conduct regulations.