Distributed Q-Learning for energy harvesting heterogeneous networks
Document type: Conference paper (text in conference proceedings)
Publisher: Institute of Electrical and Electronics Engineers
Access conditions: Open access
We consider a two-tier urban Heterogeneous Network where small cells powered with renewable energy are deployed in order to provide capacity extension and to offload macro base stations. We use reinforcement learning techniques to design an algorithm that autonomously learns energy inflow and traffic demand patterns. This algorithm is based on a decentralized multi-agent Q-learning technique that, by interacting with the environment, obtains optimal policies aimed at improving the system performance in terms of drop rate, throughput and energy efficiency. Simulation results show that our solution effectively adapts to changing environmental conditions and meets most of our performance objectives. At the end of the paper we identify areas for improvement.
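The decentralized multi-agent Q-learning approach described in the abstract can be illustrated with a minimal tabular sketch, in which each small cell runs its own independent agent. The state encoding (coarse battery-level and traffic-load bins), the ON/OFF action set, the reward shaping, and all names below are illustrative assumptions for exposition only; they are not the exact formulation used in the paper.

```python
import random
from collections import defaultdict


class QLearningAgent:
    """Tabular Q-learning agent for one small cell (illustrative sketch)."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # Q[(state, action)] -> estimated value
        self.actions = actions        # e.g. switch the small cell ON or OFF
        self.alpha = alpha            # learning rate
        self.gamma = gamma            # discount factor
        self.epsilon = epsilon        # exploration probability

    def choose_action(self, state):
        # epsilon-greedy policy: explore occasionally, otherwise exploit
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update:
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])


def toy_step(action):
    """Hypothetical environment: returns (next_state, reward).

    The battery bin stands in for harvested-energy availability and the
    traffic bin for local demand; the reward is a stand-in, not the
    paper's drop-rate/throughput/energy-efficiency objective.
    """
    battery = random.randint(0, 3)   # harvested-energy level bin
    traffic = random.randint(0, 3)   # local traffic-demand bin
    reward = traffic - 1 if action == "ON" else 1 - traffic
    return (battery, traffic), reward


# One independent agent per small cell (decentralized, no message exchange)
agents = [QLearningAgent(actions=["ON", "OFF"]) for _ in range(3)]
states = [(0, 0)] * len(agents)
for _ in range(1000):
    for i, agent in enumerate(agents):
        action = agent.choose_action(states[i])
        next_state, reward = toy_step(action)
        agent.update(states[i], action, reward, next_state)
        states[i] = next_state
```

Each agent only observes its own local state and reward, which is what makes the scheme decentralized; coordination emerges implicitly through the environment rather than through explicit signaling between cells.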
© 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Citation: Miozzo, Marco [et al.]. Distributed Q-Learning for energy harvesting heterogeneous networks. In: Workshop on Green Communications and Networks with Energy Harvesting, Smart Grids, and Renewable Energies. "2015 IEEE International Conference on Communication Workshop". Institute of Electrical and Electronics Engineers, 2015, p. 2006-2011.
Publisher's version: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=7247475&tag=1