Trustworthy and Resource-Aware AI: Balancing Security, Latency, and Sustainability
🧠⚡🔒 Research Description
🔍 Overview
As artificial intelligence (AI) systems are increasingly deployed in latency-critical and resource-constrained environments, such as autonomous vehicles, embedded healthcare, and edge-based decision-making platforms, the intersection of security, robustness, and system efficiency has emerged as a critical research frontier. While advances in adversarial resilience, differential privacy, and multimodal learning have strengthened the integrity of AI models, these improvements often introduce significant operational overheads: elevated energy consumption, increased inference latency, and greater carbon intensity. Moreover, emerging classes of attacks now exploit these very inefficiencies, using techniques like energy-latency sponge attacks or throughput flooding to degrade performance or exhaust resources. These developments challenge the assumption that security and robustness are orthogonal to system performance, and highlight the need for new design methodologies that jointly optimize security, responsiveness, and sustainability.
🧪 Research Thrusts
This project aims to establish a principled framework for modeling, measuring, and mitigating the performance and sustainability costs of secure AI across diverse architectures and deployment environments. Targeted platforms include large language models (LLMs), vision-language models (VLMs), GPU-based inference systems, and lightweight edge devices common in automotive and wearable applications. Our work integrates both proactive and reactive approaches, modeling threats that target computational inefficiencies and developing metrics and mitigations that enable sustainable-by-design secure AI. The core objectives are as follows:
Model and analyze resource-sensitive threat landscapes across heterogeneous AI deployments, including inference-time attacks that exploit energy usage, model robustness, and latency constraints in edge and automotive systems.
Develop formal sustainability-aware security metrics, such as the Robust-Carbon Trade-Off Index (RCTI), Energy-Based Attack Efficiency (EAE), and Cost per Unit of Robustness Change (CRC), enabling system designers to reason quantitatively about the environmental cost of security and the attack surface of resource-exhaustion vulnerabilities.
Design mitigation techniques for secure and sustainable AI, including lightweight adversarial defense algorithms, energy-aware privacy-preserving mechanisms, and resource-bounded robust inference pipelines optimized for deployment in constrained environments.
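The metrics named above (RCTI, EAE, CRC) can be read as ratios of a security quantity to its resource cost. As a minimal sketch, the definitions below are illustrative assumptions only; the exact formulations are part of the ongoing research and are not specified here:

```python
# Illustrative, assumed definitions of the sustainability-aware security
# metrics; each is a ratio of a security quantity to its resource cost.

def rcti(robust_acc_gain: float, carbon_kg: float) -> float:
    """Robust-Carbon Trade-Off Index (assumed form):
    robust-accuracy gain per kg of CO2-equivalent emitted during training."""
    return robust_acc_gain / carbon_kg

def eae(accuracy_drop: float, attack_energy_j: float) -> float:
    """Energy-Based Attack Efficiency (assumed form):
    model-accuracy degradation per joule the attacker spends."""
    return accuracy_drop / attack_energy_j

def crc(cost_usd: float, robustness_delta: float) -> float:
    """Cost per Unit of Robustness Change (assumed form):
    monetary cost per point of robustness gained or lost."""
    return cost_usd / robustness_delta

# Example: adversarial training raises robust accuracy by 12 points
# at a cost of 3.0 kg CO2e and $45 of compute.
print(rcti(0.12, 3.0))
print(crc(45.0, 0.12))
```

Under these assumed forms, a lower RCTI flags a defense whose robustness gain is expensive in carbon terms, and a high EAE flags a resource-exhaustion vulnerability that is cheap for an attacker to exploit.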
🌐 Research Impact
This research effort establishes a novel research direction at the intersection of AI security, system efficiency, and sustainable computing. It responds to pressing challenges in deploying robust AI systems in real-world environments where energy, latency, and resource constraints are critical. By modeling new classes of efficiency-targeted threats, developing metrics to quantify the hidden costs of security, and designing lightweight, sustainable defenses, this project enables the next generation of AI systems to be not only secure and trustworthy but also operationally viable and scalable. The outcomes will inform future AI design principles, deployment standards, and policy frameworks for building efficient, resilient, and responsible AI across sectors such as automotive and healthcare.
Current Projects
This research project addresses the growing need to design adaptive, efficient, and secure federated learning (FL) systems capable of operating across heterogeneous and resource-constrained environments. As FL becomes a foundational approach for privacy-preserving AI in applications such as mobile health, smart vehicles, and wearable sensing, participating devices increasingly vary in their computational power, network quality, energy availability, and latency constraints. At the same time, ensuring system-wide robustness and data privacy often incurs significant environmental and operational costs, including increased energy usage, carbon emissions, and degraded responsiveness. These trade-offs, if unmanaged, limit the scalability and sustainability of federated AI.
This project aims to build a comprehensive framework for optimizing federated learning across multiple, often competing constraints (security, latency, energy consumption, and carbon impact), tailored to diverse real-world deployment contexts. Our objectives are threefold:
Systematically investigate how heterogeneity in hardware, connectivity, and task priorities affects the performance and efficiency of federated learning, especially under energy- or latency-critical scenarios;
Develop formal multi-objective optimization models that capture the trade-offs among robustness, energy use, responsiveness, and environmental impact in FL systems, enabling adaptive client selection and aggregation based on contextual priorities;
Design and evaluate dynamic FL protocols and metrics that support sustainable, secure, and performant learning, including techniques for energy-aware contribution weighting, latency-sensitive aggregation, and carbon-conscious participation incentives.
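To make the adaptive client-selection objective concrete, the sketch below scores candidate clients by a weighted combination of energy cost, latency, grid carbon intensity, and expected contribution quality. All names, weights, and normalization bounds are hypothetical placeholders, not the project's actual protocol:

```python
# Minimal sketch of context-aware FL client selection, assuming each client
# reports an energy estimate (J), expected latency (s), and local grid carbon
# intensity (gCO2/kWh). Weights and bounds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Client:
    cid: str
    energy_j: float        # estimated energy per training round
    latency_s: float       # expected round-trip time
    carbon_gpkwh: float    # local grid carbon intensity
    data_quality: float    # proxy for expected contribution (0..1)

def score(c: Client, w_energy=0.3, w_latency=0.3, w_carbon=0.2, w_quality=0.2,
          e_max=100.0, l_max=10.0, ci_max=500.0) -> float:
    """Lower resource use and higher data quality yield a higher score."""
    return (w_energy * (1 - c.energy_j / e_max)
            + w_latency * (1 - c.latency_s / l_max)
            + w_carbon * (1 - c.carbon_gpkwh / ci_max)
            + w_quality * c.data_quality)

def select_clients(clients, k):
    """Pick the k clients with the best combined score for this round."""
    return sorted(clients, key=score, reverse=True)[:k]

clients = [
    Client("phone-a", energy_j=20, latency_s=1.2, carbon_gpkwh=120, data_quality=0.8),
    Client("car-b",   energy_j=60, latency_s=0.5, carbon_gpkwh=400, data_quality=0.9),
    Client("watch-c", energy_j=10, latency_s=4.0, carbon_gpkwh=90,  data_quality=0.4),
]
print([c.cid for c in select_clients(clients, 2)])
```

Shifting the weights per round is one way to encode the "contextual priorities" above: a latency-critical deployment would raise `w_latency`, while a carbon-conscious incentive scheme would raise `w_carbon`.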
Energy-Aware Distributed AI · Secure and Sustainable Vision-Language Models (VLMs) · Trustworthy Multimodal Intelligence at the Edge
This research project addresses the growing need to detect and mitigate a new class of threats targeting the energy and latency profiles of AI-enabled cyber-physical systems (CPS). As AI models, particularly those embedded in autonomous vehicles, wearables, and edge devices, become more complex and resource-intensive, they also become vulnerable to attacks that exploit their energy consumption and real-time performance constraints. These include energy-latency sponge attacks, inference flooding, and adaptive computational exhaustion, which can degrade system responsiveness, drain power, and jeopardize safety in latency-critical environments. Despite growing deployment of AI in constrained physical systems, little attention has been paid to adversarial behaviors that intentionally manipulate system efficiency as a vector for disruption. This project aims to establish a foundational framework for characterizing, detecting, and mitigating energy-latency-based attacks in AI-driven CPS environments. Our objectives are threefold:
Formally model energy-latency sponge attacks and other resource-targeting adversarial behaviors across AI systems deployed in edge, embedded, and real-time CPS settings;
Develop lightweight and context-aware detection techniques that monitor energy, compute, and timing signatures to identify abnormal model execution patterns and dynamic degradation indicative of adversarial behavior;
Design and evaluate defense mechanisms, including runtime throttling, selective inference, and attack-resilient scheduling, that restore system stability without compromising core functionality, safety, or resource availability.
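The detection objective above amounts to comparing each inference's resource signature against a learned baseline. The sketch below is one simple instantiation under assumed thresholds: a rolling latency baseline with a z-score test, which could gate a runtime throttling decision. It is a toy illustration, not the project's detector:

```python
# Hedged sketch of latency-signature monitoring for sponge-like inputs:
# maintain a rolling baseline of per-inference latency and flag requests
# whose cost deviates sharply. Window size and threshold are assumptions.

from collections import deque
import statistics

class SpongeMonitor:
    def __init__(self, window=50, z_threshold=3.0):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, latency_ms: float) -> bool:
        """Record one inference latency; return True if it looks anomalous."""
        if len(self.history) >= 10:
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            anomalous = (latency_ms - mean) / stdev > self.z_threshold
        else:
            anomalous = False  # not enough baseline yet
        self.history.append(latency_ms)
        return anomalous

monitor = SpongeMonitor()
# Normal traffic around 10 ms, then a sponge-like input at 80 ms.
flags = [monitor.observe(t) for t in [10, 11, 9, 10, 12, 10, 11, 9, 10, 11, 80]]
print(flags[-1])  # the 80 ms outlier is flagged
```

In a real deployment the same pattern would extend to energy and compute counters (e.g. per-request joules from a power sensor), and a positive flag would trigger the throttling or selective-inference defenses listed above rather than a hard rejection.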
Energy-Latency Sponge Attacks · Adversarial Resource Exploitation · Real-Time AI Threat Detection · Resilient Edge and Embedded Intelligence
Publications
Syed Mhamudul Hasan, Taminul Islam, Munshi Saifuzzaman, Khaled R Ahmed, Chun-Hsi Huang, Abdur R. Shahid, "Carbon Emission Quantification of Machine Learning: A Review", IEEE Transactions on Sustainable Computing (2025)
Syed Mhamudul Hasan, Hussein Zangoti, Iraklis Anagnostopoulos, Abdur R. Shahid, "Sponge Attacks on Sensing AI: Energy-Latency Vulnerabilities and Defense via Model Pruning", arXiv preprint arXiv:2505.06454
Syed Mhamudul Hasan, Abdur R. Shahid, Ahmed Imteaj, "Evaluating Sustainability and Social Costs of Adversarial Training in Machine Learning", IEEE Consumer Electronics Magazine (2024)
Syed Mhamudul Hasan, Abdur R. Shahid, Ahmed Imteaj, "Towards Sustainable SecureML: Quantifying Carbon Footprint of Adversarial Machine Learning", In the GreenNet Workshop of the IEEE International Conference on Communications (IEEE ICC 2024)
Syed Mhamudul Hasan, Abdur R. Shahid, Ahmed Imteaj, "The Environmental Price of Intelligence: Evaluating the Social Cost of Carbon in Machine Learning", In the 11th IEEE Conference on Technology for Sustainability (SusTech 2024)