Publisher: Institute of Electrical and Electronics Engineers
Rights access: Open Access
Abstract: We consider a two-tier urban Heterogeneous Network where small cells powered by renewable energy are deployed to provide capacity extension and to offload macro base stations. We use reinforcement learning techniques to design an algorithm that autonomously learns energy inflow and traffic demand patterns. This algorithm is based on a decentralized multi-agent Q-learning technique that, by interacting with the environment, obtains optimal policies aimed at improving system performance in terms of drop rate, throughput and energy efficiency. Simulation results show that our solution effectively adapts to changing environmental conditions and meets most of our performance objectives. At the end of the paper we identify areas for improvement.
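As a point of reference for the technique named in the abstract, the following is a minimal sketch of a tabular Q-learning update with epsilon-greedy exploration, the building block each agent in a decentralized multi-agent scheme would run locally. The state, action, reward, and transition functions here are illustrative placeholders, not the paper's actual formulation of energy inflow and traffic demand.

```python
import random

def q_learning_step(Q, state, actions, reward_fn, next_state_fn,
                    alpha=0.1, gamma=0.9, epsilon=0.1):
    """Perform one tabular Q-learning update and return the next state.

    Q is a dict mapping (state, action) pairs to estimated values.
    reward_fn and next_state_fn stand in for the agent's interaction
    with its environment (hypothetical interfaces for illustration).
    """
    # Epsilon-greedy action selection: explore with probability epsilon.
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: Q.get((state, a), 0.0))

    reward = reward_fn(state, action)
    next_state = next_state_fn(state, action)

    # Standard Q-learning target: reward plus discounted best next value.
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return next_state
```

In a decentralized deployment each small cell would hold its own Q table and apply this update from its local observations, with coordination emerging only through the shared environment rather than through explicit message passing.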
Citation: Miozzo, Marco [et al.]. Distributed Q-Learning for energy harvesting heterogeneous networks. In: Workshop on Green Communications and Networks with Energy Harvesting, Smart Grids, and Renewable Energies. "2015 IEEE International Conference on Communication Workshop". Institute of Electrical and Electronics Engineers, 2015, p. 2006-2011.
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public communication or transformation of this work are prohibited without permission of the copyright holder. If you wish to make any use of the work not provided for in the law, please contact: firstname.lastname@example.org