Nonconvex regularizers for sparse optimization and rank minimization

Authors
Sagan, April
Other Contributors
Mitchell, John E.
Lai, Rongjie
Xu, Yangyang
Wang, Meng
Issue Date
2021-05
Keywords
Mathematics
Degree
PhD
Terms of Use
Attribution-NonCommercial-NoDerivs 3.0 United States
This electronic version is a licensed copy owned by Rensselaer Polytechnic Institute, Troy, NY. Copyright of original work retained by author.
Abstract
This dissertation addresses the problem of minimizing a nonconvex relaxation of the rank of a matrix. In the first of three works presented in this dissertation, we formulate rank minimization as a semidefinite program with complementarity constraints, and show connections between relaxations of the complementarity constraint formulation and other formulations with nonconvex regularizers. In the second, we show how to use the low rank factorization of a semidefinite matrix to derive computationally efficient algorithms for minimizing a nonconvex relaxation of the rank function. Lastly, we analyze a very general class of problems involving nonconvex regularizers that promote sparse and low rank structures, and present a novel analysis of a commonly used class of algorithms, guaranteeing convergence to a matrix close to the underlying ground truth low rank matrix.
Data analysis techniques that rely upon a matrix being low rank have received much attention in the past decade, with impressive computational results on large matrices and theoretical results guaranteeing the success of Robust PCA and matrix completion. Many of these results are based on minimizing the nuclear norm of a matrix (defined as the sum of its singular values) as a surrogate for the rank function, much as the $l_1$ norm is minimized to promote sparsity in a vector.
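As a concrete illustration (the notation here is standard rather than taken from the dissertation), the noisy low rank recovery problem is often relaxed from rank minimization to a nuclear norm penalized least-squares problem,
\[
\min_{X} \; \tfrac{1}{2}\,\|\mathcal{A}(X) - b\|_2^2 + \lambda \|X\|_*, \qquad \|X\|_* = \sum_i \sigma_i(X),
\]
which plays the same role for low rank structure that the $l_1$-penalized LASSO plays for sparse vectors.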
While the convex relaxation is an incredibly useful technique in many applications, minimizing the nuclear norm of a matrix has been shown to introduce a (sometimes very large) estimator bias. Intuitively, this bias arises because recovering a rank $r$ matrix requires placing enough weight on the nuclear norm term to drive the $(r+1)$th singular value to zero. By the nature of the nuclear norm, this also penalizes the first $r$ singular values, resulting in a bias towards zero proportional to the spectral norm of the noise added to the true data matrix.
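One way to make this intuition precise (a standard argument, not specific to this dissertation) is through the proximal operator of the nuclear norm, which soft-thresholds the singular values: for observed data $Y = M + E$ with $\mathrm{rank}(M) = r$ and SVD $Y = U \,\mathrm{diag}(\sigma(Y))\, V^T$,
\[
\mathrm{prox}_{\lambda\|\cdot\|_*}(Y) = U\,\mathrm{diag}\big((\sigma_i(Y) - \lambda)_+\big)\,V^T .
\]
Zeroing out $\sigma_{r+1}(Y)$ requires $\lambda \ge \sigma_{r+1}(Y)$, which is on the order of $\|E\|_2$, and the same $\lambda$ is subtracted from each of the leading $r$ singular values, producing the bias described above.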
Fortunately, recent work has shown that the estimator bias from convex regularizers can be reduced (or even eliminated, for well conditioned matrices) by using nonconvex regularizers such as the Schatten-$p$ norm or the minimax concave penalty (MCP).
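For reference, the two regularizers named here have standard forms (stated on scalars or singular values; the dissertation may use a slightly different parameterization). The Schatten-$p$ quasi-norm penalizes $\|X\|_p^p = \sum_i \sigma_i(X)^p$ with $0 < p < 1$, while the MCP applies
\[
P_{\lambda,\gamma}(t) \;=\;
\begin{cases}
\lambda |t| - \dfrac{t^2}{2\gamma}, & |t| \le \gamma\lambda,\\[4pt]
\dfrac{\gamma \lambda^2}{2}, & |t| > \gamma\lambda,
\end{cases}
\]
to each singular value. Because $P_{\lambda,\gamma}$ is constant for $|t| > \gamma\lambda$, large singular values incur no additional shrinkage, which is the mechanism behind the reduced bias.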
Description
May 2021
School of Science
Department
Dept. of Mathematical Sciences
Publisher
Rensselaer Polytechnic Institute, Troy, NY
Relationships
Rensselaer Theses and Dissertations Online Collection
Access
CC BY-NC-ND. Users may download and share copies with attribution in accordance with a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License. No commercial use or derivatives are permitted without the explicit approval of the author.