On the convergence of block majorization-minimization algorithms on the Grassmann manifold
Cite as:
hdl:2117/416725
Document type: Article
Defense date: 2024
Rights access: Open Access
All rights reserved. This work is protected by the corresponding intellectual and industrial
property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public
communication or transformation of this work are prohibited without permission of the copyright holder
Abstract
The Majorization-Minimization (MM) framework is widely used to derive efficient algorithms for specific problems that require the optimization of a cost function (convex or not). It is based on the sequential optimization of a surrogate function over closed convex sets. A natural extension incorporates ideas from Block Coordinate Descent (BCD) algorithms into the MM framework, known as block MM. The rationale behind the block extension is to partition the optimization variables into several independent blocks, obtain a surrogate for each block, and optimize the surrogate of each block cyclically. However, known convergence proofs of the block MM are valid only under the assumption that the constraint sets are closed and convex. Hence, classical proofs do not ensure the global convergence of the block MM over non-convex sets, even though such convergence is required by iterative schemes that naturally emerge in a wide range of subspace-based signal processing applications. For this purpose, the aim of this letter is to review the convergence proof of the block MM and extend it to blocks constrained to the Grassmann manifold.
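The block MM scheme described in the abstract (partition the variables into blocks, majorize the cost per block, minimize each block's surrogate cyclically) can be illustrated on a simple convex instance. The sketch below is not the letter's algorithm; it is a minimal, assumed example on a Euclidean least-squares cost, where each block surrogate is the standard Lipschitz-gradient quadratic majorizer, minimized in closed form by a gradient step on that block. The function name and block partition are illustrative choices.

```python
import numpy as np

def block_mm_least_squares(A, b, n_iter=500):
    """Illustrative block MM for f(x) = ||A x - b||^2 with two coordinate blocks.

    For each block, the surrogate is the quadratic upper bound
        g(x_blk; x_t) = f(x_t) + grad_blk^T (x_blk - x_t,blk)
                        + (L_blk / 2) ||x_blk - x_t,blk||^2,
    which majorizes f in that block and is minimized by x_blk - grad_blk / L_blk.
    Cyclic minimization of the surrogates yields a monotonically
    nonincreasing cost sequence, the key property used in MM proofs.
    """
    m, n = A.shape
    x = np.zeros(n)
    # Illustrative partition: first half / second half of the variables.
    blocks = [np.arange(n // 2), np.arange(n // 2, n)]
    # Per-block Lipschitz constant of the block gradient: 2 * sigma_max(A_blk)^2.
    L = [2.0 * np.linalg.norm(A[:, blk], 2) ** 2 for blk in blocks]
    history = []
    for _ in range(n_iter):
        for blk, Lb in zip(blocks, L):
            grad = 2.0 * A[:, blk].T @ (A @ x - b)  # gradient w.r.t. this block
            x[blk] -= grad / Lb                      # minimize the block surrogate
        history.append(float(np.linalg.norm(A @ x - b) ** 2))
    return x, history
```

The monotone decrease of `history` is exactly the descent property that classical block MM proofs combine with closedness and convexity of the constraint sets; the letter's contribution is to recover convergence when a block instead lives on the (non-convex) Grassmann manifold.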
Citation: Lopez, C.; Riba, J. On the convergence of block majorization-minimization algorithms on the Grassmann manifold. "IEEE Signal Processing Letters", 2024, vol. 31, p. 1314-1318.
ISSN: 1070-9908
Publisher version: https://ieeexplore.ieee.org/document/10518081
Files | Description | Size | Format | View
---|---|---|---|---
MM_convergence_Grassmann_Letter_posprint.pdf | | 242,4Kb | | View/Open