Library.MacEwan.ca

Artificial Intelligence - Student Guide

Assessing Generative AI

As with any tool used to inform your understanding, further your learning, or assist with content creation, ethical, evaluative, and appropriate use is key. Whether you are considering which AI tool to use or wondering when to trust generative AI outputs, the following can help:

Assessing GenAI Output

Accuracy

  • Generative AI tools like ChatGPT can produce many kinds of content: quick answers to questions, cover letters, poems, short stories, outlines, even complete essays and reports. However, the content they create should be carefully checked, as it may contain errors, false claims, or plausible-sounding but completely incorrect or nonsensical answers.
  • Generative AI can also create fake images and videos so convincing that they are increasingly difficult to detect. Be careful which images and videos you trust, as they may have been created to spread disinformation.

Bias 

  • Generative AI relies on information found on the internet to create new output. Because that information is often biased, the newly generated content may carry similar biases. Examples of potential bias include gender bias, racial bias, cultural bias, political bias, and religious bias. Scrutinize AI-generated content closely for inherent biases.

Comprehensiveness

  • AI content may be selective, as it depends on the algorithm used to create responses, and although these tools draw on a huge amount of information found on the internet, they may not be able to access subscription-based information secured behind paywalls.
  • Content may also lack depth, be vague rather than specific, and be full of clichés, repetitions, and even contradictions.

Currency

  • AI tools may not always use the most current information in the content they create. In some disciplines it is crucial to have the most recent and updated information available. Think, for example, of the recent pandemic: research was moving at a very fast pace, and it was important to have not only the most comprehensive and reliable data available, but also the most recent. Technology is another area that is constantly changing, and information that is valid one year may not be valid the next. There are many other examples, so always check the publication dates of any sources of information used in AI-generated texts.

Sources

  • Generative AI tools don't always include citations to the sources of their information. They are also known to create incorrect citations and to simply make up citations to non-existent sources (sometimes referred to as AI hallucination).
  • To avoid accidentally relying on an AI hallucination, always confirm claims with a separate, reliable source.
  • Failing to credit sources of information and creating fake citations are both cases of academic misconduct and breaches of academic integrity. See MacEwan's Academic Integrity page.

Copyright

  • Generative AI tools rely on the vast repositories of existing work they draw on to create new work, and a new work may infringe on copyright if it uses copyrighted work in its creation.
  • For example, there have been several lawsuits against tech companies that use images found on the internet to train their AI tools. One such lawsuit in the United States was filed by Getty Images, which accuses Stability AI, the maker of Stable Diffusion, of using millions of pictures from Getty's library to train its AI tool. Getty is claiming damages of up to US $1.8 trillion.
  • There is much debate about who owns the copyright to a product created by AI. Is it the person who wrote the code for the AI tool, the person who came up with the prompt, or the AI tool itself? Although AI-generated works are currently not copyright protected in Canada, this may change in the future. Also note that laws in other countries may differ from those in Canada.

Assessing GenAI Tools: The ROBOT test

There are a number of frameworks for assessing AI tools, including the ROBOT test developed by The LibrAIry.

Reliability

  • How reliable is the information available about the AI technology?
  • If produced by another party, what are the author’s credentials? Biases?

Objective

  • What is the goal or objective of the use of AI?
  • What is the goal of sharing information about it?
    • To inform? To convince?

Bias

  • What could create bias in the AI technology?
  • Are there ethical issues associated with this?
  • Are bias or ethical issues acknowledged?

Owner

  • Who is the owner or developer of the AI technology?
  • Who is responsible for it?
  • Who can use it?

Type

  • Which subtype of AI is it?
  • What kind of information system does it rely on?
  • Does it rely on human intervention? 

The ROBOT test was created by Hervieux and Wheatley and is made available under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Further Learning

Bloomberg Live. (2019, March 29). The Coded Gaze: Bias in Artificial Intelligence [Video]. YouTube. https://youtu.be/eRUEVYndh9c?si=HtaVlBF6fpGS0BYU

Jarry, Jonathan. (2024, March 4). How to Spot AI Fakes (For Now). McGill Office for Science and Society. https://www.mcgill.ca/oss/article/critical-thinking-technology/how-spot-ai-fakes-now

Bond, Shannon. (2023, June 13). AI-generated images are everywhere. Here's how to spot them. NPR. https://www.npr.org/2023/06/07/1180768459/how-to-identify-ai-generated-deepfake-images

Credit

Artificial Intelligence by Ulrike Kestler at the Kwantlen Polytechnic University Library is licensed under a CC-BY-NC-SA 4.0 licence. Teaching with Generative AI by BCIT Library Services is licensed under a CC-BY-NC 4.0 licence.

Licensed under CC BY-NC-SA