Faculty Mentor
Hayden Wimmer
Faculty Mentor Email Address
hwimmer@georgiasouthern.edu
Patterns in AI-Driven Misinformation: A Comparative Analysis of ChatGPT and Gemini
Location
Russell Union Ballroom
Type of Research
Proposed
I am a... (Select all that apply)
Graduate Student
Session Format
Poster Presentation
Select Your Campus
Statesboro Campus
College
Allen E. Paulson College of Engineering & Computing
Department
Information Technology
Abstract
As Large Language Models (LLMs) see wider global use and increasingly shape public opinion, it becomes ever more important to understand their distinct tendencies to produce, amplify, and disseminate misleading information. Preliminary findings indicate that, although these models are all susceptible to hallucinations, their outputs vary in linguistic sophistication and accuracy. Earlier studies have shown that LLM-generated misinformation is harder to identify than human-written content, and recent comparative assessments have revealed significant disparities in accuracy, language style, and fact-checking ability. However, a research gap remains in systematically comparing the specific misinformation patterns of competing LLMs such as ChatGPT and Gemini, especially in real-world contexts. This research addresses that gap by directly examining and contrasting their outputs to identify distinct trends and vulnerabilities. By considering the implications for misinformation detection, governance, and public digital literacy, it aims to provide actionable insights for developing more resilient countermeasures to AI-driven misinformation.
Program Description
.
Author Rights: Apply an Embargo
2-11-2026
Presentation Type and Release Option
Event
Start Date
4-23-2026 10:00 AM
End Date
4-23-2026 12:00 PM
Recommended Citation
Akinbanjo, Marvel, "Patterns in AI-Driven Misinformation: A Comparative Analysis of ChatGPT and Gemini" (2026). GS4 Student Scholars Symposium. 22.
https://digitalcommons.georgiasouthern.edu/research_symposium/2026/2026/22