Skip to Main Content
Research Guides Homepage

Artificial Intelligence (AI): Limitations

The focus of this research guide is the educational applications of generative AI.

Although generative AI tools offer many exciting possibilities for students and faculty, users need to be aware of the limitations involved.  Here are some examples of limitations:

Limitations of AI-Based Tools

1.  Accuracy - A typical AI model is not assessing whether the information it provides is correct.  Its goal when it receives a prompt is to generate what it thinks is the most likely string of words to answer that prompt.  Sometimes this results in a correct answer, but sometimes it does not - and the AI cannot interpret or distinguish between the two.

AI can be wrong in multiple ways:

  • It can give a wrong or misleading answer
  • It can omit information by mistake
  • It can make up completely fake people, events, and articles - called hallucinations
  • It cannot accurately produce its sources
  • It can interpret your prompts in an unexpected way
  • It can mix truth and fiction  (University of Maryland Libraries)

2.  Bias - Text or images generated by AI tools have no human author, but they are trained on materials created by humans with human biases.  Unlike humans, AI tools cannot reliably distinguish between biased material and unbiased material when using information to construct their responses.  (University of South Florida Libraries)

3.  Cost - At present, many generative AI tools are free to use, which is one reason for their popularity.  However, no one seems to know how long free will last.  At least two pricing models exist among those generative AI tools that are not free to use:  subscription based, which can be monthly, semi-annual, or annual; and pay as you go.  (for more information, see Generating Content and Profits, by Pedro Palandrani and Why Is ChatGPT Free?, by Eric Martin)

4.  Privacy - LLMs' privacy policies may allow the creators to sell and profit off of your personal information.  Anything you submit to an LLM may become a part of the LLM's learning corpus.  (University of Washington Health Sciences Library)

5.  Security - One can use indirect prompt injections, such as hidden instructions on a web page, to make an AI system steal data, manipulate a resume, or run code remotely on a machine.  Indirect prompt injections have been ranked as the top vulnerability for those deploying and managing LLMs.  (from Generative AI's Biggest Security Flaw Is Not Easy to Fix, by Matt Burgess)

6.  Timeliness - Many common generative AI tools are not connected to the internet and cannot update or verify the content they generate.  Also, many tools (including ChatGPT) are trained on data with cutoff dates, resulting in outdated information or the inability to provide answers about current information and events.  (University of Southern California Libraries)  

https://www.sanjac.edu/library| Central Library: 281-476-1850 | Generation Park Campus: 281-998-6350 x8133 | North Library: 281-459-7116 | South Library: 281-998-6350 ext. 3306