Privacy-Preserving Deep Reinforcement Learning for Secure Resource Orchestration in Cyber-Physical Systems
DOI: https://doi.org/10.26438/ijsrnsc.v13i2.268

Keywords: Privacy, Deep Reinforcement Learning, Resource, Cyber-Physical Systems, Attack, Sensitive

Abstract
This research addresses the critical challenge of secure and efficient resource allocation in Cyber-Physical Systems (CPS) by introducing a Deep Reinforcement Learning (DRL) framework integrated with privacy-preserving federated learning. Unlike traditional methods, our approach keeps raw data localized, thereby mitigating privacy risks and enhancing trust within the CPS ecosystem. A custom-designed reward function is proposed to optimize both resource utilization and privacy assurance, balancing performance and security goals. To strengthen data confidentiality, we incorporate a variant of Differential Privacy that makes efficient use of the privacy budget without significantly compromising data utility, achieving a privacy guarantee of 0.8 while maintaining over 92% model accuracy. Experimental validation on a smart grid test bed demonstrates the efficacy of the proposed model, achieving a 17.6% improvement in resource allocation efficiency, a 23% reduction in communication overhead, and a 12% increase in system throughput compared to baseline DRL models without privacy constraints. Overall, the framework delivers state-of-the-art performance in optimizing resources in complex, distributed CPS environments while upholding stringent privacy requirements, offering a scalable and secure solution for next-generation CPS applications in smart infrastructure.
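The abstract does not include implementation details, but the minimal Python sketch below illustrates the two mechanisms it names: a reward that trades off resource utilization against a privacy penalty, and federated updates in which only clipped, noise-perturbed model updates leave each CPS node. All names, the trade-off weight, the clipping bound, and the use of the Laplace mechanism are illustrative assumptions rather than details of the published method; only the privacy budget value of 0.8 is taken from the abstract.

```python
import numpy as np

# Illustrative constants; only EPSILON (the privacy budget of 0.8) comes from the abstract.
LAMBDA_PRIVACY = 0.5            # hypothetical weight on the privacy penalty in the reward
EPSILON = 0.8                   # privacy budget reported in the abstract
CLIP_NORM = 1.0                 # illustrative clipping bound on local updates
NOISE_SCALE = CLIP_NORM / EPSILON  # illustrative Laplace scale tied to the privacy budget


def reward(utilization: float, privacy_leakage: float) -> float:
    """Illustrative reward: favor high resource utilization and penalize
    estimated privacy leakage (both assumed normalized to [0, 1])."""
    return utilization - LAMBDA_PRIVACY * privacy_leakage


def privatize_update(local_update: np.ndarray) -> np.ndarray:
    """Clip a local model update and add Laplace noise before it leaves the
    node, so only a perturbed update is shared with the aggregator."""
    norm = np.linalg.norm(local_update)
    clipped = local_update * min(1.0, CLIP_NORM / (norm + 1e-12))
    noise = np.random.laplace(0.0, NOISE_SCALE, size=clipped.shape)
    return clipped + noise


def federated_average(private_updates: list[np.ndarray]) -> np.ndarray:
    """Server-side aggregation: average the noised updates; raw data and
    unperturbed gradients never leave the local CPS nodes."""
    return np.mean(np.stack(private_updates), axis=0)


# Example: three CPS nodes each contribute a privatized update.
local_updates = [np.random.randn(4) * 0.1 for _ in range(3)]
global_step = federated_average([privatize_update(u) for u in local_updates])
print("aggregated update:", global_step)
print("example reward:", reward(utilization=0.9, privacy_leakage=0.2))
```

In a full federated DRL loop, a function like privatize_update would be applied to each agent's policy update before aggregation, while the reward would drive local policy optimization; the sketch shows only the privacy-aware aggregation step.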
License
Copyright (c) 2025 Manas Kumar Yogi, A.S.N. Chakravarthy

This work is licensed under a Creative Commons Attribution 4.0 International License.