Adaptive optics control with multi-agent model-free reinforcement learning
Cite as: hdl:2117/362076
Document type: Article
Defense date: 2022-01-14
Rights access: Open Access
All rights reserved. This work is protected by the corresponding intellectual and industrial
property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public
communication or transformation of this work are prohibited without permission of the copyright holder
Abstract
We present a novel formulation of closed-loop adaptive optics (AO) control as a multi-agent reinforcement learning (MARL) problem in which the controller is able to learn a non-linear policy and does not need a priori information on the dynamics of the atmosphere. We identify the different challenges of applying a reinforcement learning (RL) method to AO and, to solve them, propose the combination of model-free MARL for control with an autoencoder neural network to mitigate the effect of noise. Moreover, we extend existing methods of error budget analysis to include an RL controller. The experimental results for an 8 m telescope equipped with a 40x40 Shack-Hartmann system show a significant increase in performance over the integrator baseline and comparable performance to a model-based predictive approach, a linear quadratic Gaussian controller with perfect knowledge of atmospheric conditions. Finally, the error budget analysis provides evidence that the RL controller is partially compensating for bandwidth error and is helping to mitigate the propagation of aliasing.
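For context, the integrator baseline that the RL controller is compared against can be sketched as a simple closed-loop update in which reconstructed wavefront-sensor slopes are accumulated into deformable-mirror commands. The minimal sketch below is not the authors' code; the toy dimensions, gain value, and random reconstructor `R` are illustrative assumptions only.

```python
import numpy as np

# Toy setup: a hypothetical small Shack-Hartmann system with 32 slope
# measurements and 16 actuator commands (the paper uses a 40x40 system).
rng = np.random.default_rng(0)
n_slopes, n_act = 32, 16
R = rng.standard_normal((n_act, n_slopes)) * 0.1  # stand-in reconstructor

def integrator_step(commands, slopes, gain=0.5):
    """One closed-loop integrator update: c_{t+1} = c_t + g * (R @ s_t)."""
    return commands + gain * (R @ slopes)

commands = np.zeros(n_act)
for _ in range(10):
    slopes = rng.standard_normal(n_slopes)  # stand-in for measured WFS slopes
    commands = integrator_step(commands, slopes)
print(commands.shape)  # (16,)
```

Unlike this fixed-gain linear law, the paper's MARL controller learns a non-linear policy per actuator region without a model of the atmospheric dynamics.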
Citation: Pou, B. [et al.]. Adaptive optics control with multi-agent model-free reinforcement learning. "Optics express", 14 January 2022, vol. 30, no. 2, p. 2991-3015.
ISSN: 1094-4087
File | Size | Format
---|---|---
oe-30-2-2991.pdf | 2.864 MB | PDF