SecureCPS-LM: Leveraging Language Models for Trustworthy Cyber-Physical Intelligence
🧠📡🔒 Research Description
🔍 Overview
Cyber-Physical Systems (CPS) are foundational to critical infrastructure, enabling real-time coordination of sensing, computation, and actuation across domains such as transportation, manufacturing, energy, and healthcare. These systems are inherently complex, operate under tight resource constraints, and demand strong guarantees around safety, privacy, and resilience. Meanwhile, the emergence of large and small language models (LLMs and SLMs) has introduced powerful new capabilities in abstraction, contextual inference, and language-grounded reasoning. This research investigates how the unique strengths of language models can be strategically leveraged to enhance the security, privacy, and trustworthiness of CPS, with the models serving not just as augmentative tools but as active components in system introspection, anomaly detection, and privacy mediation.
Rather than treating language models as passive interfaces or inference engines, we explore their role as semantic intermediaries that can monitor, explain, and defend CPS behavior across the cyber-physical boundary. This includes generating interpretable representations of complex system dynamics, reasoning over multimodal data streams for anomaly detection, and acting as privacy-aware mediators between human operators and sensitive control policies. At the same time, we address the risks of LM integration, including model leakage, prompt-based manipulation, and latency overhead, and develop deployment strategies that are secure, efficient, and CPS-aware.
🧪 Research Thrusts
This project establishes a rigorous framework for designing and deploying language models in CPS with a dual purpose: (1) enhancing the functionality and transparency of CPS operation, and (2) enabling language-model-driven security and privacy analysis within constrained and safety-critical environments. The project’s core thrusts include:
LM-Driven Security and Anomaly Detection in CPS: Leverage LMs to interpret sensor logs, command histories, and system telemetry to detect behavioral anomalies, misconfigurations, and cyber-physical inconsistencies, serving as a semantic filter over noisy, high-dimensional data (see the first sketch after this list).
Privacy-Aware Language Interfaces for Control and Monitoring: Develop secure, LM-mediated interfaces that provide interpretable summaries of system behavior to operators while protecting sensitive data through context-aware redaction, abstraction, and bounded reasoning (see the redaction sketch after this list).
Trust Calibration and Policy Verification via LMs: Use language models to generate natural-language explanations and structured reasoning chains for CPS decisions, enabling human-in-the-loop validation, transparency in automated control, and post-hoc auditing for compliance.
Secure and Sustainable Deployment of LMs in CPS Architectures: Explore model selection, compression, and scheduling strategies to deploy LMs in a tiered manner, matching SLMs to embedded devices and LLMs to supervisory layers, while accounting for latency, energy, and attack surfaces (see the routing sketch after this list).
Threat Modeling and LM-Augmented Defense Mechanisms: Formalize new threat models where LMs play an active role in defending CPS against both traditional and LM-specific attacks, including semantic spoofing, command injection, and sensor fusion attacks.
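To make the first thrust concrete, the following is a minimal sketch of an LM acting as a semantic filter over telemetry. It assumes the Hugging Face transformers library with an off-the-shelf zero-shot classifier; the model name, candidate labels, telemetry fields, and threshold are illustrative assumptions, not project specifications.

```python
# Minimal sketch: an LM as a semantic filter over CPS telemetry.
# Assumes the Hugging Face `transformers` library; the model, labels,
# telemetry fields, and threshold are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

def summarize_window(readings):
    """Render a telemetry window as natural language for the LM."""
    return "; ".join(f"{name}={value}" for name, value in readings)

def flag_anomaly(readings, threshold=0.7):
    """Return True when the LM judges the window anomalous."""
    labels = ["nominal operation",
              "sensor fault or anomaly",
              "possible command injection"]
    result = classifier(summarize_window(readings), candidate_labels=labels)
    top_label, top_score = result["labels"][0], result["scores"][0]
    return top_label != "nominal operation" and top_score >= threshold

# Example: reported flow contradicts the commanded valve/pump state.
window = [("valve_cmd", "CLOSED"), ("pump_state", "OFF"),
          ("flow_rate_lpm", 42.7), ("pressure_kpa", 310)]
print(flag_anomaly(window))
```

The point of the sketch is architectural: the LM never touches the control loop directly; it only scores a language rendering of the data, which bounds its influence and attack surface.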
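For the privacy-mediation thrust, here is a minimal sketch of a redaction layer applied to an LM-generated summary before it reaches an operator. All patterns and identifiers are hypothetical placeholders; a deployed system would draw them from a site-specific privacy policy and pair them with context-aware abstraction.

```python
# Minimal sketch of a privacy-aware mediation layer: redact sensitive
# fields in an LM-produced summary before it reaches the operator.
# All patterns and identifiers below are hypothetical placeholders.
import re

REDACTION_RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),             # US SSN
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "[DEVICE-IP]"), # IPv4
    (re.compile(r"patient_id=\w+"), "patient_id=[REDACTED]"),
]

def redact(summary: str) -> str:
    """Apply the redaction rules to an LM-produced summary."""
    for pattern, replacement in REDACTION_RULES:
        summary = pattern.sub(replacement, summary)
    return summary

raw = "Alarm from 10.0.8.14: patient_id=A1097 fell; SSN 123-45-6789 on file."
print(redact(raw))
# Alarm from [DEVICE-IP]: patient_id=[REDACTED] fell; SSN [SSN] on file.
```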
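For the tiered-deployment thrust, the sketch below routes requests between an embedded SLM and a supervisory LLM under a latency budget. The query functions and latency constants are hypothetical stand-ins for real inference backends and measured timings.

```python
# Minimal sketch of tiered scheduling: keep a request on the embedded
# SLM unless it can afford escalation to a supervisory LLM. The two
# query functions and latency constants are hypothetical stand-ins.
from dataclasses import dataclass

SLM_LATENCY_MS = 40    # assumed worst-case on-device inference time
LLM_LATENCY_MS = 900   # assumed round trip to the supervisory layer

@dataclass
class Request:
    prompt: str
    deadline_ms: int        # hard latency budget at this control layer
    safety_critical: bool   # must stay on the audited edge model

def query_slm(prompt: str) -> str:   # hypothetical edge backend
    return f"[SLM] {prompt}"

def query_llm(prompt: str) -> str:   # hypothetical supervisory backend
    return f"[LLM] {prompt}"

def route(req: Request) -> str:
    # Safety-critical or tight-deadline requests never leave the edge,
    # which also shrinks the remote attack surface.
    if req.safety_critical or req.deadline_ms < LLM_LATENCY_MS:
        return query_slm(req.prompt)
    return query_llm(req.prompt)

print(route(Request("Why did valve V3 close?", 50, True)))    # -> SLM
print(route(Request("Summarize last shift.", 5000, False)))   # -> LLM
```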
🌐 Research Impact
This research defines a novel paradigm in which language models are active agents of cyber-physical security and trust, rather than external observers or passive components. By positioning LMs as semantic intermediaries between physical processes, digital signals, and human reasoning, we unlock new modes of system verification, explainability, and control. The project advances both the science and engineering of trustworthy CPS, yielding secure-by-design frameworks, LM-powered diagnostic tools, and sustainable deployment strategies. It sets the foundation for a new generation of CPS that are not only autonomous and adaptive, but also intelligible, defensible, and ethically aligned.
Current Projects
This research project investigates the integration of large and small language models (LLMs and SLMs) into Cyber-Physical Systems (CPS) to enhance functionality, responsiveness, and human-machine interaction across safety-critical domains such as healthcare, smart homes, and public safety. Language models offer powerful capabilities in semantic reasoning, contextual understanding, and real-time communication, but their deployment in CPS also introduces risks, including privacy leakage, control manipulation, and resource inefficiency. The project aims to establish a secure and scalable framework for LM-enabled CPS through three core objectives:
Analyze security and privacy risks introduced by LMs in CPS environments, including adversarial prompts, data inference, and unsafe autonomous behaviors;
Develop lightweight, privacy-aware deployment strategies using energy-efficient fine-tuning, edge-based control, and runtime safeguards (see the fine-tuning sketch after this list);
Demonstrate real-world CPS-LM prototypes, such as fall detection systems, emergency response agents, and medical triage assistants, and evaluate them in terms of accuracy, latency, and trust.
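As one illustration of the second objective, the following sketch shows parameter-efficient fine-tuning of a small model with LoRA adapters via the peft library, which updates only a small fraction of the weights and therefore suits energy-constrained edge devices. The base model and hyperparameters are illustrative assumptions, not project settings.

```python
# Minimal sketch of energy-efficient fine-tuning with LoRA adapters via
# the `peft` library: only a small fraction of weights is trained, which
# suits edge-class SLMs. Base model and hyperparameters are assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0")   # assumed small base model

lora = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # Llama-style attention projections
)

model = get_peft_model(base, lora)
model.print_trainable_parameters()  # typically well under 1% trainable
# Training then proceeds with a standard Trainer loop over domain data
# (e.g., fall-detection transcripts), updating only the adapter weights.
```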
Topics: LLM-Augmented Cyber-Physical Systems · Semantic Reasoning for Real-Time CPS · Trustworthy Language Model Integration · Small Language Models for CPS · Privacy and Control in LM-CPS Interaction
As large language models (LLMs) become increasingly embedded in decision-making pipelines within Cyber-Physical Systems (CPS), their ability to reason about security, safety, and system anomalies is critical. These models offer promising capabilities in natural language interpretation, abstraction, and knowledge recall, making them attractive tools for threat detection, adversarial analysis, and response planning in dynamic CPS environments. However, their effectiveness in complex, real-world settings remains uncertain, particularly when facing distributional drift, incomplete information, or ambiguous system feedback.
This research investigates the strengths and limitations of LLMs in understanding, reasoning about, and simulating security threats across the full lifecycle of AI-integrated CPS. From initial data collection and model training to deployment, feedback, and adaptation, we assess how well LLMs support situational awareness, anomaly detection, and adversarial reasoning in environments with high uncertainty and real-time constraints. Our objectives are threefold:
Characterize the reasoning capabilities and failure modes of LLMs in threat identification tasks under shifting system behavior, sensor drift, and adversarial ambiguity—including evaluating their performance on edge cases, emergent behavior, and open-world conditions;
Simulate security incidents and threat vectors using LLMs to test CPS resilience, enable synthetic adversarial generation, and uncover gaps in model-based decision-making and safety protocols (see the simulation sketch after this list);
Model feedback loops and lifecycle-aware vulnerabilities in LLM-enabled CPS, capturing how inference errors, delayed updates, or ambiguous outputs propagate through autonomous control pipelines and introduce new classes of systemic risk.
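As a sketch of the second objective, LLM-driven threat simulation, the snippet below prompts a model to synthesize plausible-but-anomalous telemetry for resilience testing. The query_llm function is a hypothetical stand-in for any chat-completion backend; here it returns a canned record so the sketch runs end to end.

```python
# Minimal sketch of LLM-driven threat simulation: prompt a model to
# synthesize plausible-but-anomalous telemetry for resilience testing.
# `query_llm` is a hypothetical stand-in for a chat-completion backend;
# here it returns a canned record so the sketch runs end to end.
import json

SCENARIO_TEMPLATE = (
    "You are red-teaming a {system}. Generate {n} telemetry records as a "
    "JSON list that are individually plausible but jointly indicate a "
    "{attack} attack. Fields: timestamp, sensor_id, value, unit."
)

def query_llm(prompt: str) -> str:
    # Hypothetical stub; replace with a real inference call.
    return json.dumps([{"timestamp": "2025-01-01T00:00:00Z",
                        "sensor_id": "flow_03", "value": 9999.0,
                        "unit": "lpm"}])

def synthesize_incident(system: str, attack: str, n: int = 5):
    prompt = SCENARIO_TEMPLATE.format(system=system, n=n, attack=attack)
    return json.loads(query_llm(prompt))   # validate before any replay

print(synthesize_incident("water treatment plant", "sensor spoofing"))
```

Replaying such synthetic records through the detection pipeline (e.g., the anomaly filter sketched earlier) gives a cheap, repeatable probe of CPS resilience without touching live hardware.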
Topics: LLM-Based Threat Reasoning in CPS · Common Sense and Adversarial Understanding · Lifecycle-Aware CPS Security · LLM-Driven Threat Simulation
Publications
Malithi Wanniarachchi Kankanamge, Abdur R. Shahid, Ning Yang, S. M. Jamil Uddin, Rahul Biswas, "Development of a Fall Detection and Safety Communication System Using Small Language Models", in the 42nd International Symposium on Automation and Robotics in Construction (ISARC 2025).
Awal Ahmed Fime, Md Zarif Hossain, Saika Zaman, Abdur R. Shahid, Ahmed Imteaj, "Towards Trustworthy Autonomous Vehicles with Vision-Language Models Under Targeted and Untargeted Adversarial Attacks", in Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops, 2025, pp. 619–628.
W. K. M. Mithsara, Abdur R. Shahid, Ning Yang, "Leveraging Large Language Models for Zero-Shot Detection and Mitigation of Data Poisoning in Wearable AI Systems", in the NeurIPS Workshop on GenAI for Health: Potential, Trust and Policy Compliance, 2024.
W. K. M. Mithsara, Abdur R. Shahid, Ning Yang, "Zero-Shot Detection and Sanitization of Data Poisoning Attacks in Wearable AI Using Large Language Models", in the 23rd International Conference on Machine Learning and Applications (ICMLA 2024).
W. K. M. Mithsara, Abdur R. Shahid, Ning Yang, "Intelligent Fall Detection and Emergency Response for Smart Homes Using Language Models", in the 23rd International Conference on Machine Learning and Applications (ICMLA 2024).
Malithi Wanniarachchi Kankanamge, Syed Mhamudul Hasan, Abdur R. Shahid, and Ning Yang, "Large Language Model Integrated Healthcare Cyber-Physical Systems Architecture", in the 48th IEEE International Conference on Computers, Software, and Applications (COMPSAC 2024).
Abdur R. Shahid, Syed Mhamudul Hasan, Malithi Wanniarachchi Kankanamge, Md Zarif Hossain, and Ahmed Imteaj, "WatchOverGPT: A Framework for Real-Time Crime Detection and Response Using Wearable Camera and Large Language Model (LLM)", in the 48th IEEE International Conference on Computers, Software, and Applications (COMPSAC 2024).
Syed Mhamudul Hasan, Alaa M. Alotaibi, Sajedul K. Talukder, and Abdur R. Shahid, "Distributed Threat Intelligence at the Edge Devices: A Large Language Model-Driven Approach", in the 48th IEEE International Conference on Computers, Software, and Applications (COMPSAC 2024).