Comparative analysis of Q-learning, SARSA, and deep Q-network for microgrid energy management.

Sreyas Ramesh, Sukanth B N, Sri Jaswanth Sathyavarapu, Vishwash Sharma, Nippun Kumaar A A, Manju Khanna
Author Information
  1. Sreyas Ramesh: Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa Vidyapeetham, Bengaluru, India.
  2. Sukanth B N: Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa Vidyapeetham, Bengaluru, India.
  3. Sri Jaswanth Sathyavarapu: Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa Vidyapeetham, Bengaluru, India.
  4. Vishwash Sharma: Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa Vidyapeetham, Bengaluru, India.
  5. Nippun Kumaar A A: Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa Vidyapeetham, Bengaluru, India. aa_nippunkumaar@blr.amrita.edu.
  6. Manju Khanna: Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa Vidyapeetham, Bengaluru, India.

Abstract

The growing integration of renewable energy sources within microgrids necessitates innovative approaches to optimize energy management. While microgrids offer advantages in energy distribution, reliability, efficiency, and sustainability, the variable nature of renewable energy generation and fluctuating demand pose significant challenges for optimizing energy flow. This research presents a novel application of Reinforcement Learning (RL) algorithms, specifically Q-Learning, SARSA, and Deep Q-Network (DQN), for optimal energy management in microgrids. Utilizing the PyMGrid simulation framework, this study not only develops intelligent control strategies but also integrates advanced mathematical control techniques, such as Model Predictive Control (MPC) and Kalman filters, within the Markov Decision Process (MDP) framework. The innovative aspect of this research lies in its comparative analysis of these RL algorithms, demonstrating that DQN outperforms Q-Learning and SARSA by 12% and 30%, respectively, while achieving a 92% improvement over scenarios without an RL agent. This study addresses the unique challenges of energy management in microgrids and provides practical insights into the application of RL techniques, thereby contributing to the advancement of sustainable energy solutions.
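To make the comparison between the tabular methods concrete, the sketch below shows the only point at which Q-Learning and SARSA differ: the bootstrap target of the value update. It is a minimal illustration, not the paper's implementation; the ToyGridEnv environment, its reward signal, and all hyperparameters are placeholder assumptions and do not reproduce the authors' PyMGrid setup.

import numpy as np

# Hypothetical discrete environment interface (NOT the paper's PyMGrid configuration):
# reset() -> state index, step(action) -> (next_state, reward, done).
class ToyGridEnv:
    """Toy stand-in with a handful of discrete 'net-load' states and battery actions."""
    def __init__(self, n_states=8, n_actions=3, horizon=24, seed=0):
        self.n_states, self.n_actions, self.horizon = n_states, n_actions, horizon
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.t = 0
        self.s = int(self.rng.integers(self.n_states))
        return self.s

    def step(self, a):
        self.t += 1
        self.s = int(self.rng.integers(self.n_states))   # random transition (illustrative only)
        reward = -abs(a - self.s % self.n_actions)        # placeholder operating-cost signal
        return self.s, reward, self.t >= self.horizon

def run_episode(env, Q, alpha=0.1, gamma=0.99, eps=0.1, sarsa=False,
                rng=np.random.default_rng()):
    """One training episode of tabular Q-Learning (off-policy) or SARSA (on-policy)."""
    def act(s):
        # epsilon-greedy action selection
        return int(rng.integers(env.n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))

    s = env.reset()
    a = act(s)
    done = False
    while not done:
        s2, r, done = env.step(a)
        a2 = act(s2)
        # SARSA bootstraps on the action actually taken next; Q-Learning on the greedy max.
        target = r + (0.0 if done else gamma * (Q[s2, a2] if sarsa else Q[s2].max()))
        Q[s, a] += alpha * (target - Q[s, a])
        s, a = s2, a2
    return Q

env = ToyGridEnv()
Q = np.zeros((env.n_states, env.n_actions))
for _ in range(500):
    run_episode(env, Q, sarsa=False)   # set sarsa=True for the on-policy variant

DQN follows the same Q-Learning target but replaces the lookup table with a neural network trained on stored transitions, which is what allows it to scale to the larger state spaces considered in the study.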

