Current Research
This project investigates the security, privacy, and forensic challenges emerging from multimodal human-AI interaction, where users engage with AI systems through text, speech, gestures, images, and immersive platforms. As large language models (LLMs), vision-language models (VLMs), and other large multimodal models become embedded in collaborative tools, smart devices, and extended reality environments, they introduce new risks, ranging from unauthorized inference of personal traits to malicious content generation and behavioral surveillance. Our research agenda develops forensic tools to audit AI behavior, privacy-enhancing technologies to protect user signals, and security frameworks for collaborative systems. By addressing these risks across modalities and contexts, we aim to build a foundation for emerging AI ecosystems that are trustworthy, accountable, and privacy-preserving.
Wearable AI is revolutionizing sectors such as construction, healthcare, and sports by enhancing safety, personal health monitoring, and performance optimization. This research aims to strengthen the security, privacy, and trustworthiness of Wearable AI systems throughout their entire lifecycle, from development to deployment. Employing advanced machine learning techniques, these devices integrate into daily activities but face vulnerabilities during data collection, processing, and testing. The goal is to create a sustainable, holistic framework that improves system security and trust, ensuring the solutions are lightweight, efficient, and implementable in real-time.
As artificial intelligence (AI) systems are increasingly deployed in latency-critical and resource-constrained environments, such as autonomous vehicles, embedded healthcare, and edge-based decision-making platforms, the intersection of security, robustness, and system efficiency has emerged as a critical research frontier. While advances in adversarial resilience, differential privacy, and multimodal learning have strengthened the integrity of AI models, these improvements often introduce significant operational overheads: elevated energy consumption, increased inference latency, and greater carbon intensity. Moreover, emerging classes of attacks now exploit these very inefficiencies, using techniques like energy-latency sponge attacks or throughput flooding to degrade performance or exhaust resources. These developments challenge the assumption that security and robustness are orthogonal to system performance, and highlight the need for new design methodologies that jointly optimize security, responsiveness, and sustainability.
This research thrust examines the integration of Language Models (LMs), including both large and small variants, into Cyber-Physical Systems (CPS) to address functionality, sustainability, security, and privacy challenges. CPS, which blend computation, software, and physical components, face issues like data fragmentation and significant privacy concerns. LMs can substantially improve data processing and enable sophisticated, human-like text interactions. The goal is to determine how LMs can enhance decision-making and system efficiency while adhering to sustainability, security, and privacy standards in various industrial CPS applications.
Previous Research Projects
This research thrust focuses on developing a secure, privacy-preserving, and resource-aware blockchain framework for interdependent smart cities. In smart city environments, various interconnected systems rely on data sharing and collaboration, making security and privacy crucial concerns. The proposed blockchain solution aims to ensure the integrity and confidentiality of data while optimizing resource utilization. By leveraging advanced cryptographic techniques and distributed consensus mechanisms, the framework will provide a trustworthy platform for secure transactions and data exchange. Additionally, resource-awareness will enable efficient utilization of computing resources, addressing the scalability challenges associated with blockchain technology. The research project aims to contribute to the development of sustainable and resilient smart city infrastructures by enabling secure and privacy-preserving interconnectivity between diverse systems.
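The integrity guarantee that blockchains offer such interdependent systems comes from hash-linking each record to its predecessor. A minimal, stdlib-only sketch of that idea (the block fields and the example smart-city sensor data are illustrative, not the project's actual design):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's full contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: dict) -> None:
    """Link a new block to the chain via the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def verify_chain(chain: list) -> bool:
    """Recompute every link; tampering with any block breaks all later links."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

# Hypothetical readings shared between smart-city subsystems
chain: list = []
append_block(chain, {"sensor": "traffic", "value": 41})
append_block(chain, {"sensor": "air_quality", "value": 12})
```

A full framework adds distributed consensus and confidentiality on top of this linking, but the sketch shows why any retroactive edit is detectable by every participant.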
This research thrust focuses on developing innovative machine learning techniques that prioritize privacy preservation, transparency, and interpretability in digital healthcare. It involves designing models that provide accurate predictions while safeguarding sensitive patient data through differential privacy techniques. The project also emphasizes creating explainable machine learning algorithms to enhance trust and understanding in healthcare decision-making. Collaboration with healthcare institutions and real-world data validation aim to advance privacy-aware and transparent machine learning solutions for accurate predictions and informed healthcare choices, while upholding patient privacy and regulatory compliance.
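To illustrate the differential privacy building block mentioned above, here is a minimal sketch of the classic Laplace mechanism on a counting query (the patient records are hypothetical, and real deployments use calibrated, audited libraries rather than this toy):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has L1 sensitivity 1 (adding or removing one
    patient changes the count by at most 1), so Laplace noise with
    scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical patient records: (age, diagnosis)
records = [(64, "diabetes"), (51, "healthy"), (70, "diabetes")]
noisy = dp_count(records, lambda r: r[1] == "diabetes", epsilon=1.0)
```

Smaller epsilon means stronger privacy but noisier answers; choosing that trade-off for clinical utility is part of what this thrust studies.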
The research thrust aims to develop a usable privacy-preserving mechanism for continuous location sharing in Internet of Social Things (IoST) systems. In these environments, continuous location sharing is crucial for various applications, including social networking and personalized services. However, privacy concerns arise as users are required to share their location data with IoST systems. This project seeks to design a mechanism that allows users to maintain control over their location information while still enjoying the benefits of these services. The proposed solution will employ privacy-enhancing techniques such as secure data aggregation and differential privacy to protect sensitive location information.
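One standard way differential privacy is adapted to locations is the planar Laplace mechanism from the geo-indistinguishability literature, sketched below with the standard library only. The coordinates and the degrees-to-meters conversion are illustrative assumptions, not the project's actual mechanism:

```python
import math
import random

def perturb_location(lat: float, lon: float, epsilon: float,
                     meters_per_degree: float = 111_320.0):
    """Geo-indistinguishable location perturbation (planar Laplace).

    Draws an angle uniformly and a radius (in meters) from
    Gamma(2, 1/epsilon), which together realize the two-dimensional
    Laplace distribution; epsilon is the privacy level per meter.
    """
    theta = random.uniform(0.0, 2.0 * math.pi)
    r = random.gammavariate(2.0, 1.0 / epsilon)  # noise radius in meters
    dlat = (r * math.sin(theta)) / meters_per_degree
    dlon = (r * math.cos(theta)) / (meters_per_degree * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

# Hypothetical coordinates near Carbondale, IL
noisy_lat, noisy_lon = perturb_location(37.72, -89.22, epsilon=0.01)
```

The reported point stays useful for nearby-friend or recommendation features while hiding the user's exact position, which is the usability-privacy balance this thrust targets.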
Secure and Trustworthy Intelligent Systems (SHIELD) Lab
EGRA-409D
1230 Lincoln Dr, Carbondale, IL 62901