Using AI ethically and effectively in academic research can be complex and challenging, but AI can also be a highly valuable tool. It is significantly affecting the entire scholarly community, including students, educators, researchers, writers, educational institutions, and publishers.
VCC-produced LibGuides & Videos:
Using Copilot to Generate Keywords for Database Searches.
Using Copilot to Deepen Research Ideas.
Using Generative AI to Do Research (York University). This comprehensive guide outlines the various types of AI tools and their respective specialties.
AI tools can support researchers, writers, and students throughout the academic research process, including both the wordsmithing and the research stages. The University of Arizona describes wordsmithing tasks as those that do not require search: generating ideas, honing them, and forming the building blocks of an academic research paper by focusing on writing level and style.
As highlighted in a Harvard Business Publishing article, AI can be a valuable partner in academic research, not a replacement. The benefits of generative AI in the educational sector revolve around the technology's potential to make the writing process more efficient and to help researchers communicate their findings more effectively.
A 2024 systematic review found that AI can support academic writing and research by helping to manage complex ideas and comprehensive information. Specifically, it can help in six core ways:
Using AI tools to speed up parts of the writing and research process may also help studies reach academic journals more quickly, allowing fellow researchers to draw upon the findings and begin their own studies sooner, ultimately leading to a faster influx of research into the publishing sphere. AI tools can also help researchers whose native language is not English to write and gather information more easily and efficiently.
Researchers can use AI during the ideation and writing process for tasks such as:
Whether an AI tool is grounded in fact-based sources is crucial when choosing one to support academic research. One way to gauge a tool's reliability is to examine the extent to which it draws on objective, verifiable data rather than on its training data alone.
The University of Arizona states that ChatGPT 4o mini and Claude 3.5 Sonnet, both free versions, are not grounded in fact-based sources; they operate solely on their training data.
When tools like these rely only on training data, the information they provide can quickly become outdated, limited, and inaccurate. ChatGPT 4o mini's training data extends only to October 2023, while Claude 3.5 Sonnet's extends to April 2024.
Other accessible AI tools are grounded in fact-based sources, meaning they can draw on web search results or other search results alongside their AI-generated findings to provide more comprehensive insights into a particular area of research.
ChatGPT Plus, ChatGPT 4o (available for limited use in free accounts), Perplexity AI (available in both free and pro versions), Microsoft Copilot (in free and pro versions) and Google Gemini (in free and pro versions) are examples of these available and grounded AI tools.
Many tools have free and pro versions. Free versions often provide limited functions and impose usage caps, while pro versions offer more extensive capabilities and higher or no limits.
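To make the grounding distinction above concrete, the short sketch below contrasts an ungrounded query, where the model answers from its training data alone, with a grounded one, where retrieved search snippets are passed in as context. It is a minimal illustration under stated assumptions: call_model is a hypothetical placeholder for whatever chat or completion API you actually use, and the prompt format is an invented example, not any vendor's interface.

```python
# Minimal sketch of ungrounded vs grounded prompting.
# call_model is a hypothetical stand-in for a real chat/completion API call.

def call_model(prompt: str) -> str:
    """Placeholder for an actual model call (vendor SDK, local model, etc.)."""
    return f"[model response to a prompt of {len(prompt)} characters]"

def ungrounded_query(question: str) -> str:
    # The model can only draw on what it learned before its training cutoff.
    return call_model(question)

def grounded_query(question: str, search_snippets: list[str]) -> str:
    # Retrieved, dated sources are supplied alongside the question, so the
    # answer can reflect material newer than the model's training cutoff.
    sources = "\n".join(f"- {snippet}" for snippet in search_snippets)
    prompt = (
        "Answer using only the sources below and cite them.\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {question}"
    )
    return call_model(prompt)

if __name__ == "__main__":
    question = "What are current best practices for disclosing AI use in journal articles?"
    print(ungrounded_query(question))
    print(grounded_query(question, ["Publisher policy page, updated 2025: ..."]))
```

The design point is simply that a grounded tool adds a retrieval step before generation; the quality of its answers then depends on the quality and currency of the sources it retrieves.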
There is no standard definition of, or consensus on, the use of AI tools in academic research, which makes it harder for researchers to understand best practices and limitations when using AI tools to support their work. Typically, journal policies stipulate that it is the author's responsibility to ensure the validity of information provided by AI.
The use of AI in academic research is becoming a field of study in its own right. Guillaume Cabanac, a professor of computer science at the University of Toulouse, explored this subject in 2021. With his team, Cabanac identified several telltale signs of text-generator use in academic research, including "tortured phrases": complicated or convoluted wording used in place of simple, established terminology.
AI detection tools are one way to counter unethical uses of AI and a lack of vetting or disclosure. In 2023, researchers reported a tool that can distinguish human-written science articles from text created by ChatGPT with 99% accuracy. Rather than adopting a "one-size-fits-all" approach, the researchers aimed to build an accurate tool tailored to a specific type of writing.
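The "tailored rather than one-size-fits-all" idea can be illustrated with a short, generic sketch. The code below is not the tool from the 2023 study; it is a minimal example of training a text classifier on a small set of labelled passages, assuming scikit-learn is installed and that you supply your own labelled human-written and AI-generated examples (the four passages here are invented toy data).

```python
# Illustrative only: a generic human-vs-AI text classifier for one genre of writing.
# This is NOT the 2023 study's tool; it only sketches the general approach.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy data: real work would require many labelled passages per class.
texts = [
    "We collected 412 soil samples across three growing seasons.",            # human (0)
    "Statistical significance was assessed with a two-sided t-test.",         # human (0)
    "As an AI language model, I cannot generate specific tables.",            # AI (1)
    "It is important to note that further research is needed in this area.",  # AI (1)
]
labels = [0, 0, 1, 1]

# Word and bigram frequencies feed a simple linear classifier trained on one
# specific type of writing, rather than a one-size-fits-all detector.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(texts, labels)

print(classifier.predict(
    ["Please note that as an AI language model, I am unable to conduct tests."]
))
```

With only toy data this predicts nothing reliable; the point is to show the shape of the approach, where accuracy comes from tailoring the features and training data to one specific kind of scientific writing, as described above.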
As AI continues to advance, academia must prioritize education on AI tools and the opportunities and challenges associated with their use to maintain scientific integrity, trust, and credibility.
Researchers can use AI during the research process for tasks such as:
In 2024, at the Special Libraries Association's annual conference, hosted at the University of Rhode Island, Brian Pitchman, Director of Strategic Innovation at Evolve Project, discussed AI’s new frontiers, including the challenges it’s likely to encounter as it evolves.
One of these is described as "garbage in, garbage out," emphasizing the importance of precise, high-quality data. The essential rule of thumb is that AI results are only as good as the data put into the system in the first place.
1. Accuracy of results and the challenge of generative inbreeding in AI content
A key downside of using AI in academic research and writing is that it may lack accuracy: tools risk producing false references and other fictional information, known as hallucinations. AI tools' capacity to learn user biases and feed them back into their outputs can also produce offensive material, including sexist and racist content.
Whether AI can detect AI is also a problem today because of the sheer amount of content AI generates. If AI tools consume that content and populate the research sphere with even more AI-based content, it becomes difficult to know what is AI-generated and what is not. The term for this is generative inbreeding.
2. Unethical uses of AI tools in academic research and writing
With the rapid rise of generative AI, academic institutions (schools, colleges, universities, and professional development organizations) have growing concerns about the proliferation of the technology in education.
Peer-reviewed academic journals are now also concerned about the rate and extent to which AI is being deployed to support researchers with writing, from creating research outlines and drafts to completing entire papers.
If authors do not disclose AI tools in their work or submit it to vetting by publishing houses or academics, using AI tools may be considered plagiarism. AI tools could also spread fake references and insights, producing an inaccurate and non-credible picture of the research space. Failing to make clear exactly how and where AI has been used in a journal article compounds the problem.
3. Difficulty in detecting AI, restricting trust and credibility
AI increasingly appears in academic journal searching and writing, ultimately finding its way into final journal articles. However, while it has many uses, it is often hard to detect, limiting its acceptance and uptake in the research community. This risks eroding academia's trust in the research process and, potentially, in the findings and conclusions themselves, lowering their credibility.
Although ethical watchdogs investigate instances of generative AI use in academic research that make their way into scientific writing, there is no detection method advanced enough to match AI's sophistication.
In August 2023, the online publication WIRED drew attention to a peer-reviewed study in the Elsevier journal Resources Policy that contained the sentence: "Please note that as an AI language model, I am unable to generate specific tables or conduct tests, so the actual results should be included in the table."
Apart from this sentence, the journal article looked like any other academic research paper: the study's authors were listed by name and institution, and nothing else suggested that AI language models had been used. After another researcher posted a screenshot of the sentence on X (formerly Twitter), Elsevier began investigating. In response, Elsevier highlighted its publishing ethics on X, referencing its rules on acceptable use and required disclosure.
While the publishing house does not prohibit the use of AI tools, it does require disclosure. Without disclosure, readers, including other researchers and the publishers, cannot know what methods were used, a growing number of which may rely on AI to support the writing and research process. This draws a veil over writing and research methods.
This section of the guide was written by Natasha Spencer-Jolliffe, Lion Spirit Media, and added in 2025.
Content has been adapted from IFIS (https://ifis.libguides.com/literature_search_best_practice/artificialintelligence) and is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License.
Content by Vancouver Community College Library is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.