Citing Sources

Learn about frequently used citation styles and citation tools.

Citing Generative AI

This guide's pages about APA, MLA, and Chicago styles (also known as the "Big Three") each include a tab explaining how to cite the use of generative AI, and the tools used, in that style.
All of the styles listed on the More Styles page also include this information.

1. For Authors -- Most publishers have policies on using and citing genAI tools. When you are considering submitting to a journal or conference, check its author guidelines for its genAI policies.

2. Always Check Citations -- GenAI tools are quite poor at producing correct citations. You will need to verify them in scholarly sources such as library databases or Google Scholar. That's a good reason to save time by *starting* with scholarly sources when searching for information.

3. How Recent is the Information? -- If you need up-to-date information, include a word like “recent,” “current,” or “latest” in your prompt; otherwise the genAI tool will most likely answer from its training data without mentioning when it was last updated. I asked Google Gemini, whose most recent training data was from June 2024, about this*: "Could the time difference between the LLM's training date and the agent [which does a real-time web search] cause incorrect information being retrieved?"

ANSWER: "Yes, the time gap between my last training update (June 2024) and real-time web search can sometimes lead to incorrect or misleading information in a few key ways:

1. Outdated Assumptions from the LLM:

  • Since my core knowledge is frozen at June 2024, I might make assumptions based on past trends that are no longer true.
  • Example: If a politician was in power in 2024 but was recently ousted, my LLM might still refer to them as the leader unless I pull fresh data from the web.

2. Mismatch between LLM Knowledge and New Facts

  • My LLM-generated response might not always perfectly integrate with the new information I find online.
  • Example: If you ask about the "current inflation rate," I might mix old economic trends from my training data with fresh numbers from a web search, leading to a confusing or inconsistent answer.

3. Contextual Errors in Interpreting New Events

  • If a major shift happens (like a war breaking out, a country changing its name, or a corporation merging), I might not fully grasp the implications since I lack historical context beyond my last training date.
  • Example: If a new scientific breakthrough disproves a widely accepted theory, my LLM might still reference outdated research unless I directly search for updates.

4. Limitations of Web Search

  • Web sources can be biased, inconsistent, or even incorrect. If I rely on them too much without proper validation, I might amplify misinformation.
  • Example: If an early news report on a breaking event is later corrected, but I fetched the initial incorrect report, I might unintentionally share outdated or misleading details."

*On March 18, 2025.

ASTM (American Society for Testing and Materials)

 
  • Banner at the top of ASTM Compass, their database of standards: "Using Artificial Intelligence (AI) on ASTM standards and related intellectual property is prohibited. Violations will result in suspension of access."

COPE (Committee on Publication Ethics) 

  • COPE is "committed to educating and supporting editors, publishers, universities, research institutes, and all those involved in publication ethics. COPE aims to move the culture of publishing towards one where ethical practices become a normal part of the culture itself."
     
  • Here is their position statement about "Authorship and AI Tools."

Hastings Center Report