Artificial Intelligence (AI) has advanced rapidly in recent months, and every industry and community is likely to be affected by the opportunities it presents. However, as with any new and emerging technology, we must ensure we use it responsibly, and that we understand both what it is capable of and what its limitations are.
This page is intended to clarify your obligations around the use of AI, should you choose to engage with this technology. If you have any further questions or wish to speak to someone about AI, you can contact the Library and Learning kaimahi for support using the links to the left.
In general you are free to use AI at any stage of your education at UCOL. In return, we expect you to display transparency around AI use when submitting work for assessment. This means that if you choose to use AI tools at any stage of working on an assessment, you must declare this use.
AI is good at many tasks, but poor at others. Understanding this can be useful when determining how to use AI to assist your learning. The section ‘About AI’ below provides a brief overview of how AI works and some of the pros and cons of AI tools.
There are other tools you may already use which involve AI, such as tools for translation and checking grammar.
If you remember the following statement, then you will generally be on the right track:
Before you use AI to help you, you should ask yourself – am I using AI to help me approach my work and build my understanding of the topic, or am I using it to avoid having to understand the topic?
You may use AI in the same way you would use a friend or kaiako to help. Use AI to help you understand the assessment, to get guidance on how to approach it, and to learn and understand the topic the assessment covers, but do not ask it to do your work for you. Remember, the goal of using AI is to help you understand, and the purpose of assessments is to assess your understanding.
The use of AI while studying will fall into three broad categories, each with different requirements around acknowledging or referencing. Further information and examples can be found in the APA Guide.
The output of any AI software is not inherently trustworthy; any information you obtain from an AI should be verified against another source before you rely on it or include it in your work.
You should therefore cite the source you used to verify the information, not the AI itself. As a result, you are unlikely to need to cite AI in most situations.
While you are unlikely to need to cite AI, there may be occasions when you want to quote an AI because the way it concisely or clearly stated something resonates with you. If, after verifying that the information provided by the AI is correct, you still wish to quote the AI, then this must be done according to the APA 7 referencing style (see the link above, or in the Useful Links box to the right, for further details and examples of referencing).
Beyond citing or quoting AI, there are many other ways in which AI can be useful when completing an assessment. Any such use of an AI must be declared by including a brief statement at the start of your reference list explaining which AI software you used, and how you used it (see the link above, or in the Useful Links box to the right, for further details and examples of declaring AI use).
AI can be a great tool when developing or refining understanding of a topic or concept. If you use it in this way prior to actually starting work on an assessment, then you do not need to declare its use.
AI is software that uses large amounts of data to make predictions in response to new data it receives. Some AI is used to analyse information it is given, such as looking for patterns in medical and pharmaceutical data.
More recently, ‘generative’ AI has been making headlines. This is the sort of AI we are talking about here, and it is able to create (or ‘generate’) new content in response to prompts. Chatbots, like ChatGPT and Bard, and image AI, like Midjourney and DALL·E, are generative AI.
AI is impressive software, but it is only software. One of the key differences between AI and human intelligence is understanding. Humans have consciousness behind their thoughts and, when discussing a topic, are conveying their understanding. AI never genuinely understands anything that it talks about because behind the text it produces are just mathematical models and data. There is nothing there to do the understanding.
Yes – in fact, it already is! Non-generative AI is being used in many different applications, from finance to healthcare. The recent surge in generative AI has the potential to result in even more significant changes, as industries all around the world establish ways in which AI can be incorporated into their processes. This frees human workers from a multitude of different tasks, but it’s important to remember that AI doesn’t understand anything it talks about – so it will be more important than ever for humans to actually understand the content, while harnessing AI to do some of the more tedious work.
Because AI works by looking at a lot of data and finding patterns, it is great at structuring and organising information, but it is entirely dependent on the data it was trained on. Therefore, each AI system’s strengths will depend on this data set.
ChatGPT was trained on a great deal of text and will be good at creating text that is similar to its training data. While AI appears to know a lot about many different topics, it is in reality limited to the information it was trained on. The result is that AI is not always accurate, and you should always verify any information provided by AI.
In general, AI is more likely to be correct about common topics that appear frequently in its training data, and more likely to make errors when discussing niche topics that appeared less often. Do not get complacent about fact-checking AI, especially as you move into more advanced topics.
It is also useful to realise that AI produces averaged content based on the patterns it has learned from its training data. While AI might be able to produce decent content about common topics, it will not produce great content, or reliable content about niche topics, on its own. Great content comes from a user who knows how AI works and uses it as a tool to help communicate their own understanding.