AI Forensics
Research Thrust Summary
This research thrust establishes AI Forensics as a critical research domain, addressing the investigation, security, and ethical challenges posed by Generative AI models, AI-driven platforms, and emerging technologies. The research explores forensic methodologies for various AI systems, including large language models (LLMs) like ChatGPT and Gemini, AI-powered image and video generators, and consumer AI-driven devices such as smart glasses, AR/VR headsets, and mobile operating systems like Android and Windows.
The long-term goal is to develop forensic techniques for detecting AI-generated content, tracing AI-driven cybercrimes, and analyzing digital artifacts left by AI applications across platforms. This involves investigating AI-assisted fraud, misinformation, deepfakes, and privacy risks, while assessing the limitations of current forensic tools in analyzing AI-generated data. The project integrates cybersecurity, digital forensics, and criminology, aiming to equip law enforcement, forensic professionals, and researchers with the skills to handle AI-powered cyber threats and security vulnerabilities.
Beyond research, this initiative advances AI forensic education and workforce training, incorporating AI forensics into cybersecurity and criminal justice curricula. Through collaborations with industry, academia, and policymakers, the project will contribute to AI governance, policy recommendations, and the development of forensic solutions for next-generation AI technologies.
This study presents a holistic forensic analysis of the ChatGPT Windows application, focusing on identifying and recovering digital artifacts for investigative purposes. Using widely available digital forensics tools such as Autopsy, FTK Imager, Magnet RAM Capture, Wireshark, and Hex Workshop, the research explores methods to extract and analyze cache files, chat logs, metadata, and network traffic from the application.
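A first step in this kind of artifact analysis is building a verifiable inventory of recovered files. The sketch below, which is illustrative rather than the study's actual workflow, walks a directory and records each file's size, SHA-256 hash, and modification time for chain-of-custody purposes; the cache path shown in the usage line is an assumed location, not a confirmed ChatGPT artifact directory.

```python
import hashlib
import os
from datetime import datetime, timezone

def inventory_artifacts(root):
    """Walk a directory tree and record path, size, SHA-256, and mtime
    for every file, producing a chain-of-custody style inventory."""
    records = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                # Hash in chunks so large artifacts do not exhaust memory.
                for chunk in iter(lambda: f.read(65536), b""):
                    digest.update(chunk)
            stat = os.stat(path)
            records.append({
                "path": path,
                "size": stat.st_size,
                "sha256": digest.hexdigest(),
                "modified_utc": datetime.fromtimestamp(
                    stat.st_mtime, tz=timezone.utc).isoformat(),
            })
    return records

if __name__ == "__main__":
    # Hypothetical Windows app-data location (assumption, not a verified path).
    for rec in inventory_artifacts(os.path.expandvars(r"%LOCALAPPDATA%\Packages")):
        print(rec["sha256"], rec["modified_utc"], rec["path"])
```

Hashing each artifact at collection time lets an examiner later prove the analyzed copies match what was acquired, which is standard practice regardless of which imaging tool produced the files.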
This forensic analysis of Ray-Ban Meta smart glasses examines their technology and the privacy risks it poses. Equipped with cameras, microphones, GPS, and Bluetooth, the glasses capture sensitive data, raising legal concerns. The study explores how data is stored and transmitted, and whether it can be recovered after deletion. Key aspects include photo/video quality, Meta View integration, and metadata analysis (geolocation, timestamps, device details). Using forensic tools such as Autopsy, metadata is extracted from platforms such as Facebook and Instagram to detect tampering and recover deleted data. Privacy risks include bystander exposure, storage vulnerabilities, and hacking threats.
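One concrete metadata-recovery technique relevant here can be sketched briefly. EXIF date fields (e.g., DateTimeOriginal) are stored inside image files as the ASCII string `YYYY:MM:DD HH:MM:SS`, so they can be carved out of raw bytes even when a file's structure is damaged or the file sits in unallocated space; the function below is an illustrative sketch, not the study's actual pipeline.

```python
import re

# EXIF DateTime values are encoded as the ASCII string
# "YYYY:MM:DD HH:MM:SS", so a byte-level regex can locate them
# without parsing the surrounding JPEG/TIFF structure.
EXIF_DATETIME = re.compile(rb"\d{4}:\d{2}:\d{2} \d{2}:\d{2}:\d{2}")

def carve_exif_timestamps(data: bytes) -> list:
    """Return every EXIF-style timestamp string found in a byte buffer."""
    return [m.group().decode("ascii") for m in EXIF_DATETIME.finditer(data)]

# Usage: scan a recovered image or a raw dump for embedded timestamps.
# with open("recovered_photo.jpg", "rb") as f:   # hypothetical evidence file
#     print(carve_exif_timestamps(f.read()))
```

String carving of this sort complements tool-based extraction (e.g., Autopsy's metadata modules): it still yields candidate timestamps when headers are stripped by a social platform's re-encoding or by partial deletion.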
Participants
SHIELD Lab Team
Siva Chaithanya Guttapati
Malithi Wanniarachchi Kankanamge
Syed Mhamudul Hasan
Collaborators
Dr. Ahmed Imteaj, Assistant Professor, Southern Illinois University Carbondale
Dr. Sujung Cho, Associate Professor, Southern Illinois University Carbondale
Dr. Mijing Kim, Assistant Professor, Illinois State University
Publications
ChatGPT Forensics (Under Review)