There are many ethical considerations when using generative AI (GenAI) tools. It is important to understand these factors to ensure your use of GenAI is ethical, responsible, and supports collective AI digital literacy. We can manage the risks by choosing when and how to use GenAI tools.
Adapted from VCC Guidelines for Generative AI in Teaching and Learning.
Bias and Discrimination
GenAI learns from a lot of data. Sometimes this data has mistakes or stereotypes. This means GenAI can create content that is also biased and may perpetuate sexism, ableism, racism, and other forms of discrimination. This has been shown in both text and image generation tools.
Unreliable Content (Hallucinations)
GenAI can “hallucinate.” This means it gives answers that sound true but are wrong. Always check the information. |
Equity in Access |
Not everyone can use AI easily. Access requires a reliable internet connection, a suitable device, and digital skills, and some tools are not accessible to people with disabilities.
Data Collection
GenAI uses books, websites, and images, even if they are copyrighted. This means the creators of that content may not have given permission for it to be used. It can also collect your personal information, like photos, names, or text messages, which might be shared or stolen.
Indigenous Knowledges and Relationships |
GenAI can harm Indigenous rights, culture, and knowledge (see the First Nations Principles of OCAP). It may reinforce stereotypes or lead to cultural appropriation.
Lack of Human Interaction |
If you rely too much on GenAI for learning, you might miss real connections with teachers and classmates. AI is a tool, but it cannot replace human interaction. |
Environmental Impact |
Training GenAI models requires large amounts of electricity and freshwater, and there are additional environmental costs every time these tools are used after training. Current estimates suggest that generating one image uses about as much energy as charging a cell phone, and that 20-50 prompts consume roughly 2 cups of water. A GenAI search uses 4-5 times the energy of a conventional web search. These impacts can improve with smaller, localized GenAI models.
Ownership and Control of Generated Content |
When GenAI creates new content, it can be difficult to determine who owns the resulting work. This raises questions about intellectual property rights and who has the right to use or distribute the generated content.
Unethical Labour Practices
Developing GenAI tools relies on human workers to train and review the models and to moderate content. This work has involved the exploitation of workers, particularly in the Global South.
Privacy Invasion Through Re-Identification
GenAI can sometimes identify people in photos or videos, even if names are hidden. This is a privacy risk. |
Critical Thinking and Creativity |
If you rely too much on GenAI to read, write, or think for you, you might lose some of your own literacy, creativity, and critical thinking skills. It’s important to use AI as a tool, not as a replacement for your own thinking. |
Changing Rules and Policies |
AI rules and policies change often. Check for updates about privacy, pricing, and intellectual property rights so you can make informed decisions.
"Some Harm Considerations of Large Language Models (LLMs)" by Rebecca Sweetman is licensed under CC BY-NC-SA 4.0.
Content by Vancouver Community College Library is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.