Generative AI can produce information that sounds accurate but is actually incorrect. This is known as a “hallucination.” It happens because the AI predicts what seems likely based on patterns in data, rather than verifying facts. For instance, GenAI might make up statistics, historical events, scientific facts, or citations.
GenAI learns from lots of data. Sometimes, that data is out of date, flawed, or biased. This means GenAI can create content that is also outdated, misleading, or biased, and may perpetuate sexism, ableism, racism, and other forms of discrimination.
Adapted from Using Generative AI: Evaluating Generative AI (University of Alberta).
Generative AI tools can create many types of content—like quick answers, cover letters, poems, short stories, outlines, essays, and reports. But even though the results may sound convincing, they can include mistakes, false claims, or nonsensical information. That’s why it’s important to verify the content to catch these problems.
Generative AI can also be used to create fake images and videos that look real. Be careful and think critically.
Adapted from Artificial Intelligence: Assessing AI-generated content (Camosun College).
Generative AI tools don’t always show where their information comes from. Sometimes, they even make up fake citations; this is called an “AI hallucination.” The tool might list a real author or journal, but the article title, the page numbers, or even the entire source may be invented.
Using information without giving proper credit and citing made-up sources are both forms of plagiarism. Always double-check any sources AI gives you. You can use tools like the VCC Library Search to see if the source is real and reliable.
Adapted from Artificial Intelligence: Assessing AI-generated content (Camosun College).
AI tools don’t always use the most up-to-date information. They may not include the latest research or news, especially if the topic is changing quickly.
In some disciplines, it is crucial to have the most current information available. For example, during the COVID-19 pandemic, new studies and updates were coming out all the time. It was important to have the most recent and reliable information, not just general facts. The same goes for fast-moving fields like technology, where something that was true last year may already be outdated.
That’s why it’s important to check the dates on any sources mentioned in AI-generated content. Make sure the information is still current and relevant for your topic.
Adapted from Artificial Intelligence: Assessing AI-generated content (Camosun College).
Generative AI creates content by learning from information found online. But not everything on the internet is fair or balanced. If the information it learns from is biased, the content it creates can also be biased. This bias can show up in different ways—like sexism, racism, cultural bias, political bias, or religious bias.
That’s why it’s important to think critically when using AI tools. Always check for bias.
Adapted from Artificial Intelligence: Assessing AI-generated content (Camosun College).
AI-generated content can be limited or incomplete. Even though AI tools pull from a huge amount of information online, they often can’t access content that’s behind paywalls or locked in private databases—like academic journals or subscription-only websites.
Also, the way AI creates responses depends on how it’s programmed. This means the content might be vague, overly general, or missing important details, interconnections, and wider context. It can also include clichés, repeat the same ideas, or even contradict itself.
Adapted from Artificial Intelligence: Assessing AI-generated content (Camosun College).
Generative AI tools draw on vast repositories of existing material to create new work, and that new work may infringe copyright if it reproduces copyrighted material.
For example, some tech companies have been sued for using copyrighted images to train their AI tools. One major case in the United States involves Getty Images, which claims that the AI tool Stable Diffusion used millions of its photos without permission. Getty is asking for $1.8 trillion in damages.
There’s also a lot of debate about who owns the copyright to something made by AI. Is it the person who wrote the AI code? The person who gave the prompt? Or the AI tool itself? In Canada, AI-generated content is not currently protected by copyright, but this could change. Keep in mind that copyright laws vary by country, so what’s true in Canada might not apply elsewhere.
Adapted from Artificial Intelligence: Assessing AI-generated content (Camosun College).
Recent research has raised concerns about how AI learns over time. As more content is created by generative AI, that same AI-generated content may be used to train future AI tools. This can cause problems because if the original content has mistakes or lacks depth, those issues can get repeated and become even worse in newer versions of AI.
A study by Shumailov et al. (2023) found that this can lead to something called “model collapse.” This means that over time, AI tools may start to forget what real, high-quality information looks like. The result is an AI that becomes less accurate and less useful.
Adapted from Artificial Intelligence: Assessing AI-generated content (Camosun College).
Content by Vancouver Community College Library is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.