Generative AI was trained on information from the open web. That information includes quality sources, such as The New York Times and academic articles, as well as less reputable sources, like blogs and social media posts. Generative AI cannot tell the difference between accurate and inaccurate information; it is programmed to be convincing, not necessarily accurate.
AI output is not always correct, and its mistakes are often called hallucinations. "These systems use mathematical probabilities to guess the best response, not a strict set of rules defined by human engineers. So they will always make a certain number of mistakes" (Metz and Weise 2025).
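To see why a system that guesses by probability will "always make a certain number of mistakes," consider this toy Python sketch. The words and probabilities below are invented for illustration and are not drawn from any real model:

```python
import random

# Toy illustration (invented numbers, not any real model): a language
# model assigns a probability to each possible next word and samples
# from that distribution.
next_word_probs = {
    "Paris": 0.90,   # accurate completion of "The capital of France is ..."
    "Lyon": 0.07,    # plausible but wrong
    "London": 0.03,  # wrong
}

def sample_next_word(probs):
    words = list(probs)
    weights = [probs[w] for w in words]
    # random.choices picks one word at random, weighted by probability.
    return random.choices(words, weights=weights, k=1)[0]

# Over many runs, roughly 10% of completions come out wrong: a built-in
# error rate, which is one way to think about "hallucinations."
trials = 10_000
wrong = sum(sample_next_word(next_word_probs) != "Paris" for _ in range(trials))
print(f"Wrong completions: {wrong / trials:.1%}")
```

Even when the accurate answer is the most likely one, sampling from probabilities means a wrong answer is produced some fraction of the time.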
Bias in AI models refers to output errors caused by skewed training data. Such bias can cause models to produce inaccurate, offensive, or misleading predictions, and it arises when algorithms prioritize irrelevant or misleading data traits over meaningful patterns (When AI Gets It Wrong, 2025).
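As a rough illustration of how skewed data can lead a model to latch onto an irrelevant trait, here is a toy Python sketch. The lending scenario, zip codes, and scores are invented for illustration and do not come from the cited source:

```python
from collections import Counter

# Toy illustration (invented data, not a real AI system): a "model" that
# learns whichever trait correlates with approval in its training data.
# The sample below is skewed: every approved application happens to come
# from zip code "46601," so the model keys on zip code, an irrelevant
# trait, instead of a meaningful pattern like credit score.
training_data = [
    # (zip_code, credit_score, approved?)
    ("46601", 720, True),
    ("46601", 680, True),
    ("46601", 700, True),
    ("46220", 710, False),  # the skew: a strong applicant, rejected in the sample
    ("46220", 650, False),
]

# "Training": count approvals per zip code.
approvals_by_zip = Counter(z for z, _, approved in training_data if approved)
totals_by_zip = Counter(z for z, _, _ in training_data)

def predict(zip_code, credit_score):
    # The model ignores credit score entirely and predicts from zip code.
    seen = totals_by_zip.get(zip_code, 0)
    return seen > 0 and approvals_by_zip.get(zip_code, 0) / seen > 0.5

# A strong applicant from the "wrong" zip code is rejected anyway:
print(predict("46220", 790))  # False -- a misleading, biased prediction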
ChatGPT and other generative AI tools do not fact-check. They do not account for bias, whether the information is complete, where it came from, or whether it is useful. Use AI with caution and care, especially if you are using it for school.
Ivy Tech faculty may or may not allow students to use AI when completing assignments and coursework. Before you use AI for your assignments, check your syllabus and ask your professor if using AI is allowed in their class. You can find your class syllabus on IvyLearn.
Before you use AI for an assignment, ask yourself the following questions and keep these guidelines in mind:
Does my instructor allow me to use AI?
Why would I use AI for this task?
How would it help me? (Be sure to select an AI tool that is appropriate for the task.)
Review and evaluate any responses that you get from AI. Remember that these tools can "hallucinate," meaning they sometimes provide false responses.
If you use AI in an assignment, cite the output appropriately.