Generative AI tools can assist us in our daily lives, at work, or when studying. As with any tool, using it ethically, critically, and appropriately is key. Below are ethical and harm considerations connected to generative AI for you to explore.
"Some Harm Considerations of Large Language Models (LLMs)" by Rebecca Sweetman is licensed under CC BY-NC-SA 4.0.
Building, training, and using generative AI models requires a significant amount of energy and contributes to carbon emissions. It also consumes a lot of water for cooling. Researchers and companies are exploring ways to make generative AI more sustainable, but it is still important to consider whether your use of AI is worth the environmental impact and to use generative AI tools as efficiently as you can.
Although many generative AI tools are currently free, more and more are charging for access or for premium features. This creates barriers for those who cannot afford access. However, generative AI tools can also function as accessibility aids. For example, Maggie Melo wrote about ChatGPT as an assistive technology for students and faculty with ADHD.
University experiences develop your knowledge and skills so that when you finish a degree, you are well equipped for employment or further study. Using generative AI to create content that you have not expanded on, modified, or meaningfully engaged with, without acknowledgment, means you are presenting work that is not your own, and you have not developed your knowledge and skills. Getting a generative AI tool to create or rewrite your assignment, and then submitting that work as your own, is cheating. It is similar to asking another person to do your work for you. If you use generative AI, you need to disclose which tool(s) you used and in what way. For more information on academic integrity, consult MacEwan University's Academic Integrity Page and MacEwan's Principles for Ethical Use of Generative AI.
Furthermore, if you intend to publish work incorporating AI-generated content, you should check the publisher's guidelines about what is allowed.
There are several copyright issues relevant to the development and use of generative AI tools. How the training data for the AI tool is gathered, whether it includes copyright-protected material, and whether permission or a licence from the rights holder has been acquired, or needs to be acquired, are all important considerations. Using substantial portions of copyright-protected works as inputs or as certain types of outputs with AI tools may also have copyright implications. While at present there seems to be no statutory basis for copyright protection of AI-generated outputs in Canada, such outputs can be infringing of other copyrights, and the liability for such infringements can be an issue for both developers and users of generative AI tools. The Government of Canada is exploring these issues through its Consultation on Copyright in the Age of Generative Artificial Intelligence and the Artificial Intelligence and Data Act.
Generative AI presents complex challenges for rights management. The technology is moving quickly, and regulatory activity needs time to respond to and reflect these changes. One example of this issue relates to artists and writers whose content has been used to train generative AI.
The implications of your content contribution are a critical part of rights management that you should be aware of before using generative AI tools. By submitting content to AI platforms through prompts or uploads, you may grant an AI tool the right to reuse and distribute this content, and that might result in a breach of copyright or privacy. You should use caution when submitting content, especially information or data you did not create, to AI platforms.
Like other digital tools, generative AI tools collect and store user data. Signing up to use generative AI tools allows companies to collect data about you. This data can be used to make changes to tools to keep you engaged.
User data may also be sold or given to third parties for marketing or surveillance purposes. When interacting with AI tools, you should be cautious about supplying sensitive information, including personal, confidential, or proprietary information or data.
Generative AI tools can create biased content, for any or all of these reasons:
People may embed their biases when they create them.
There can be biases in the datasets used to train them.
Generative AI may create its own biases from how it interprets the data it has been trained on.
Companies often do not disclose the data they used to train a generative AI model. A generative AI tool cannot tell a user what data it used to generate particular content, nor can it accurately cite its sources or produce a reliable bibliography. Because of this, content from generative AI cannot be used as a credible and reliable information source.
AI models sometimes produce incorrect, biased, or outdated information. In some cases, a generative AI tool will state that it is unable to provide a correct answer, but in other cases, it may generate a false answer that appears to be correct. This is known as a “hallucination.” For example, ChatGPT sometimes fabricates citations to sources that do not exist. To avoid using or spreading misinformation, verify the accuracy of AI-generated content using reliable sources before including it in your work.
Using Generative AI by Deakin University Library is licensed under a CC BY-NC 4.0 licence.
Using Generative AI by University of Alberta Library is licensed under a CC BY-NC 4.0 licence.