Wearable AI: Enhancing Security, Privacy, and Trust Across the System Lifecycle
🧠⌚🔒 Research Description
🔍 Overview
Wearable AI systems are rapidly becoming integral to modern life, enhancing safety in construction, enabling continuous health monitoring, and advancing athletic performance optimization. As these devices collect, process, and act upon sensitive behavioral and physiological data, they introduce a range of new vulnerabilities and attack surfaces. This research program investigates the security, privacy, and trustworthiness of Wearable AI across its entire lifecycle, from development and data acquisition to model training, deployment, and real-world interaction. We aim to establish a comprehensive framework that safeguards data integrity, mitigates adversarial threats, and ensures sustained trust and performance in dynamic, high-stakes environments.
🧪 Research Thrusts
Our vision integrates three interrelated thrusts:
Lifecycle Security in Wearable AI Systems: We systematically examine security vulnerabilities across the full development pipeline of wearable AI, including training data collection, ML model development, firmware integration, and deployment in the field. We focus on risks such as data poisoning, model inversion, and firmware-level tampering, and propose lightweight, real-time defense strategies suited for constrained devices (a toy data-poisoning sketch follows this list).
Privacy-Preserving and Trustworthy Inference: Wearable devices often perform real-time inference on sensitive user data, including physiological signals, gestures, and movement patterns. We develop methods to protect against privacy attacks such as membership inference, feature leakage, and context-based de-anonymization, while ensuring robust functionality and user-centric transparency in edge-based AI architectures (a minimal membership inference sketch also follows the list).
Resilience and Sustainability of On-Device Intelligence: To ensure long-term reliability and operational viability, we design security mechanisms that are energy-efficient, latency-aware, and compatible with real-time constraints. This thrust addresses the trade-offs between protection and performance, aiming to make security an embedded, sustainable component of wearable AI rather than an afterthought.
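To make the first thrust's threat model concrete, here is a minimal sketch of a targeted label-flipping poisoning attack against a toy activity classifier, in the spirit of our SSCI 2022 study below. The synthetic features, two-class setup, and flip rates are illustrative assumptions, not our actual pipeline.

```python
# Minimal sketch: targeted label-flipping poisoning of a toy HAR classifier.
# Synthetic features and flip rates are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "wearable" features for two activities (0 = walk, 1 = run).
n, d = 2000, 6
X = np.vstack([rng.normal(0.0, 1.0, (n // 2, d)),
               rng.normal(1.5, 1.0, (n // 2, d))])
y = np.repeat([0, 1], n // 2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def flip_class0_labels(labels, rate, rng):
    """Relabel a fraction `rate` of class-0 samples as class 1 (targeted flip)."""
    poisoned = labels.copy()
    idx0 = np.flatnonzero(labels == 0)
    chosen = rng.choice(idx0, size=int(rate * len(idx0)), replace=False)
    poisoned[chosen] = 1
    return poisoned

for rate in (0.0, 0.2, 0.4):
    clf = LogisticRegression(max_iter=1000).fit(X_tr, flip_class0_labels(y_tr, rate, rng))
    print(f"flip rate {rate:.0%}: clean test accuracy = {clf.score(X_te, y_te):.3f}")
```

The quantity of interest is how quickly clean-test accuracy degrades as the flip rate grows; an on-device defense must detect this drift under tight compute budgets.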
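For the second thrust, one of the simplest privacy attacks we study is confidence-threshold membership inference: an adversary guesses that a record was in the training set whenever the model is unusually confident on it. Below is a minimal sketch under a worst-case memorization assumption (random labels, no bagging); the threshold and data shapes are illustrative.

```python
# Minimal sketch: confidence-threshold membership inference on an overfit model.
# Random labels and the 0.9 threshold are illustrative, worst-case assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
d = 6
X_members = rng.normal(0.0, 1.0, (300, d))      # records seen during training
X_nonmembers = rng.normal(0.0, 1.0, (300, d))   # records never seen
y_members = rng.integers(0, 2, 300)             # random labels force pure memorization

# bootstrap=False lets every tree memorize the full training set.
model = RandomForestClassifier(n_estimators=50, bootstrap=False,
                               random_state=1).fit(X_members, y_members)

def guessed_member(model, X, threshold=0.9):
    """Guess 'member' when the top-class confidence exceeds the threshold."""
    return model.predict_proba(X).max(axis=1) > threshold

tpr = guessed_member(model, X_members).mean()     # fraction of members flagged
fpr = guessed_member(model, X_nonmembers).mean()  # fraction of non-members flagged
print(f"attack flags {tpr:.0%} of members vs. {fpr:.0%} of non-members")
```

The gap between the two rates is precisely the leakage that defenses such as differential privacy aim to shrink.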
🌐 Research Impact
This project establishes a foundational framework for building secure, privacy-aware, and trustworthy Wearable AI systems that are ready for deployment in safety-critical and real-time settings. By addressing vulnerabilities throughout the AI lifecycle, this work will enable resilient and ethical integration of AI into human-centered environments. Our contributions will inform industry standards, influence regulatory development, and help shape a new generation of responsible, transparent, and secure AI-enabled wearables.
Current Projects
This research project explores the emerging potential of large language models (LLMs) to interpret, summarize, and contextualize human behavior from multimodal activity data. As wearable sensors and smart environments increasingly track physical, physiological, and contextual signals, such as movement patterns, environmental cues, and interaction logs, there is a growing opportunity to generate rich, narrative-level insights into daily life. LLMs, with their strong capabilities in pattern abstraction, contextual inference, and natural language generation, offer a powerful means to transform low-level activity traces into meaningful, human-centered representations. These capabilities can enable applications such as automated life journaling, behavior-aware digital assistants, and personalized health and productivity insights. However, using LLMs in this context also introduces novel risks, including biased interpretation of activity patterns, unintended behavioral profiling, and privacy violations from inferred context. This project aims to develop a foundational framework for using LLMs in human activity recognition (HAR), with an emphasis on interpretability, personalization, and responsible deployment. The objectives of this project are as follows (a minimal sketch of the sensor-to-language idea appears after the list).
Design LLM-augmented models that semantically enrich and interpret multimodal sensor data for accurate, personalized activity recognition and context understanding;
Construct and utilize paired datasets that link structured behavioral data with natural language summaries, enabling training and evaluation of narrative-level HAR systems;
Develop privacy-aware, bias-mitigated methods for transforming activity traces into language-based reflections, supporting applications in life logging, wellness analytics, and assistive technologies.
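To ground the sensor-to-language idea, the sketch below converts a window of accelerometer samples into a short textual summary and assembles it into a prompt that could be handed to any LLM. The statistics, intensity thresholds, and prompt wording are illustrative assumptions; no specific model API is assumed.

```python
# Minimal sketch: summarizing a window of accelerometer data as an LLM prompt.
# Thresholds and wording are illustrative assumptions, not a fixed pipeline.
import numpy as np

def describe_window(acc: np.ndarray, hz: int = 50) -> str:
    """Convert a (samples, 3) accelerometer window into a short textual summary."""
    magnitude = np.linalg.norm(acc, axis=1)
    mean_g, std_g = magnitude.mean(), magnitude.std()
    seconds = len(acc) / hz
    # Crude intensity labels; a deployed system would use a calibrated model.
    if std_g < 0.05:
        intensity = "stationary"
    elif std_g < 0.5:
        intensity = "light movement, consistent with walking"
    else:
        intensity = "vigorous movement, consistent with running or exercise"
    return (f"Over {seconds:.0f} seconds, mean acceleration was {mean_g:.2f} g "
            f"with variability {std_g:.2f} g, suggesting {intensity}.")

def build_prompt(summaries: list) -> str:
    """Assemble sensor summaries into a prompt for a text-generation model."""
    joined = "\n".join(f"- {s}" for s in summaries)
    return ("You are a life-journaling assistant. Using only the sensor "
            "summaries below, write a one-paragraph, privacy-respecting "
            f"description of the wearer's activity:\n{joined}")

rng = np.random.default_rng(2)
window = rng.normal([0, 0, 1.0], 0.4, size=(500, 3))  # 10 s of fake data at 50 Hz
print(build_prompt([describe_window(window)]))
```

Keeping the LLM's view limited to aggregate summaries, rather than raw traces, is one simple lever for the privacy-aware objective above.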
Keywords: LLM-Based Human Activity Recognition · Multimodal Behavior Understanding · Narrative Generation from Sensor Data · Privacy-Preserving Life Logging · Personalized Contextual AI
The proliferation of wearable technologies in health monitoring, activity recognition, and intelligent transportation systems has introduced a growing reliance on machine learning (ML) models for real-time, context-aware inference. However, integrating ML into resource-constrained, distributed, and privacy-sensitive wearable ecosystems exposes the underlying models to a range of sophisticated vulnerabilities, including data poisoning, inference-time attacks, and energy-latency sponge attacks, which can undermine system reliability, compromise user safety, and degrade trust in AI-assisted decision-making.
This research project investigates the full-spectrum vulnerability landscape of wearable ML systems, from data collection and training to deployment and inference. We aim to develop principled, efficient, and context-aware defense mechanisms capable of securing these systems under adversarial and uncertain conditions. Building on our prior work in federated learning, differential privacy, spatiotemporal poisoning, and trust-aware inference on wearables, this project systematically addresses both known and emerging threats across the ML lifecycle. Our objectives are threefold:
Characterize attack surfaces and failure modes in wearable ML systems, including spatiotemporal data poisoning, label-flipping attacks, inference-time adversarial perturbations, and energy-latency sponge attacks targeting resource exhaustion and performance degradation;
Develop mitigation techniques tailored for wearable and edge environments, such as privacy-preserving robust training methods, anomaly-aware aggregation in federated learning (a robust-aggregation sketch follows this list), runtime inference filtering, and trust-calibrated fusion of multi-sensor data streams;
Quantitatively evaluate defense strategies under real-world constraints, focusing on adversarial robustness, energy efficiency, computational overhead, detection latency, and overall system utility in applications including activity recognition, cognitive state monitoring, and behavior-driven interventions.
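As one concrete point in the defense space of the second objective, the sketch below contrasts plain federated averaging with coordinate-wise median aggregation, a standard robust baseline related to the anomaly-aware aggregation we develop, when a minority of client updates are poisoned. Client counts, dimensions, and the attack magnitude are illustrative assumptions.

```python
# Minimal sketch: mean vs. coordinate-wise median aggregation in federated
# learning when a minority of client updates are poisoned. Client counts and
# the attack magnitude are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_clients, dim, n_malicious = 20, 10, 4

true_update = np.ones(dim)  # the honest update direction, for reference
updates = true_update + rng.normal(0, 0.1, (n_clients, dim))
updates[:n_malicious] = -10.0 * true_update  # malicious clients push the opposite way

fedavg = updates.mean(axis=0)        # plain federated averaging
robust = np.median(updates, axis=0)  # coordinate-wise median aggregation

print("error of FedAvg :", np.linalg.norm(fedavg - true_update))
print("error of median :", np.linalg.norm(robust - true_update))
```

Because the median ignores extreme coordinates as long as honest clients form a majority, its error stays near the benign noise floor while the plain average is dragged far off; the practical question on wearables is achieving this robustness within energy and latency budgets.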
Keywords: Adversarial Machine Learning in Wearables · Data Poisoning Attacks and Defenses · Energy-Latency Sponge Attacks · Federated Learning Security · Robust Human Activity Recognition
Publications
Abdur R. Shahid, Syed Mhamudul Hasan, Ahmed Imteaj, and Shahriar Badsha, "Context-Aware Spatiotemporal Poisoning Attacks on Wearable-Based Activity Recognition," in IEEE International Conference on Computer Communications (INFOCOM), 2024 (poster).
Ahmed Imteaj, Tanveer Rahman, Saika Zaman, Md Zarif Hossain, and Abdur R. Shahid, "Enhancing Road Safety through Cost-Effective, Real-Time Monitoring of Driver Awareness with Resource-Constrained IoT Devices," in The 48th IEEE International Conference on Computers, Software, and Applications (COMPSAC), 2024.
Abdur R. Shahid, Ahmed Imteaj, and Md Zarif Hossain, "Assessing Wearable Human Activity Recognition Systems Against Data Poisoning Attacks in Differentially-Private Federated Learning," in IEEE SmartSys @ IEEE SmartComp, 2023.
Abdur R. Shahid, Ahmed Imteaj, Peter Y. Wu, Diane A. Igoche, and Tauhidul Alam, "Label Flipping Data Poisoning Attack Against Wearable Human Activity Recognition System," in IEEE SSCI, 2022.
Abdur R. Shahid and Sajedul Talukder, "Privacy-Preserving Activity Recognition from Sensor Data," in Proceedings of the 37th ACM CCSC Eastern Conference (ACM CCSC), October 2021.
Yujian Tang, Samia Tasnim, Niki Pissinou, S. S. Iyengar, and Abdur R. Shahid, "Reputation-Aware Data Fusion and Malicious Participant Detection in Mobile Crowdsensing," in IEEE International Conference on Big Data (Big Data), pp. 4820-4828, 2018.