
dc.rights.license: Users may download and share copies with attribution in accordance with a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License. No commercial use or derivatives are permitted without the explicit approval of the author.
dc.contributor: Mitchell, John E.
dc.contributor: Lai, Rongjie
dc.contributor: Xu, Yangyang
dc.contributor: Wang, Meng
dc.contributor.author: Sagan, April
dc.date.accessioned: 2021-11-03T09:25:51Z
dc.date.available: 2021-11-03T09:25:51Z
dc.date.created: 2021-07-09T09:12:53Z
dc.date.issued: 2021-05
dc.identifier.uri: https://hdl.handle.net/20.500.13015/2708
dc.description: May 2021
dc.description: School of Science
dc.description.abstract: This dissertation addresses the problem of minimizing a nonconvex relaxation of the rank of a matrix. In the first of three works presented in this dissertation, we formulate rank minimization as a semidefinite program with complementarity constraints, and show connections between relaxations of the complementarity constraint formulation and other formulations with nonconvex regularizers. In the next, we show how to use the low rank factorization of a semidefinite matrix to derive computationally efficient algorithms for minimizing a nonconvex relaxation of the rank function. Lastly, we analyze a very general class of minimization problems involving nonconvex regularizers that promote sparse and low rank structure, and present a novel analysis of a commonly used class of algorithms, guaranteeing convergence to a matrix close to the underlying ground truth low rank matrix.
dc.description.abstract: Data analysis techniques that rely on a matrix being low rank have received much attention in the past decade, with impressive computational results on large matrices and theoretical results guaranteeing the success of Robust PCA and matrix completion. Many of these results are based on minimizing the nuclear norm of a matrix (defined as the sum of its singular values) as a surrogate for the rank function, similar to minimizing the $l_1$ norm to promote sparsity in a vector.
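The definition above can be checked numerically. This is a minimal sketch (not from the thesis) assuming NumPy is available: it confirms that the nuclear norm reported by `np.linalg.norm(..., ord="nuc")` equals the sum of the singular values of a random test matrix.

```python
import numpy as np

# Illustrative check: the nuclear norm ||A||_* is the sum of the
# singular values of A; NumPy exposes it directly via ord="nuc".
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

nuclear = np.linalg.norm(A, ord="nuc")             # nuclear norm of A
sv_sum = np.linalg.svd(A, compute_uv=False).sum()  # sum of singular values
```

The two quantities agree to floating-point precision, mirroring how the $l_1$ norm of a vector is the sum of the absolute values of its entries.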
dc.description.abstract: While the convex relaxation is an incredibly useful technique in many applications, minimizing the nuclear norm of a matrix has been shown to introduce a (sometimes very large) estimator bias. Intuitively, we expect to see this bias because if we hope to recover a rank $r$ matrix, we must impose enough weight on the nuclear norm term so that the $(r+1)$th singular value is zero. By the nature of the nuclear norm, this requires also putting weight on minimizing the first $r$ singular values, resulting in a bias towards zero proportional to the spectral norm of the noise added to the true data matrix.
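The shrinkage mechanism described above can be seen directly in the proximal operator of the nuclear norm, which soft-thresholds the singular values. A minimal sketch (illustrative only; the matrix sizes, noise level, and rank are assumed, not taken from the thesis): choosing a threshold large enough to zero out the $(r+1)$th singular value necessarily subtracts that same threshold from the leading $r$ singular values, which is exactly the bias described.

```python
import numpy as np

rng = np.random.default_rng(1)

# Rank-2 ground truth plus small noise (sizes chosen for illustration).
U = rng.standard_normal((20, 2))
V = rng.standard_normal((2, 20))
M = U @ V + 0.1 * rng.standard_normal((20, 20))

Us, s, Vt = np.linalg.svd(M, full_matrices=False)

# To kill every singular value past the second, the soft-threshold
# (the prox of lam * ||.||_*) must be at least s[2] ...
lam = s[2]
s_thresh = np.maximum(s - lam, 0.0)
X = Us @ np.diag(s_thresh) @ Vt

# ... so the surviving singular values are each shrunk by lam,
# biasing the estimate X toward zero.
```

Nonconvex penalties such as MCP avoid this by penalizing large singular values less than small ones, so the leading values need not be shrunk.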
dc.description.abstract: Fortunately, recent work has shown that the estimator bias from convex regularizers can be reduced (or even eliminated, for well-conditioned matrices) by using nonconvex regularizers, such as the Schatten-p norm or the minimax concave penalty (MCP).
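To make the MCP concrete, here is a minimal sketch of the scalar penalty (the standard textbook form, not code from the thesis; the function name and parameter choices are illustrative). The key property is that the penalty flattens to a constant beyond $\gamma\lambda$, so large values incur no marginal penalty and hence no shrinkage bias, unlike the $l_1$ norm or nuclear norm.

```python
import numpy as np

def mcp(x, lam, gamma):
    """Minimax concave penalty (MCP), applied elementwise.

    Equals lam*|x| - x^2/(2*gamma) for |x| <= gamma*lam, and the
    constant gamma*lam^2/2 beyond that point, so sufficiently large
    entries all share one fixed cost.
    """
    x = np.abs(np.asarray(x, dtype=float))
    small = x <= gamma * lam
    return np.where(small, lam * x - x**2 / (2 * gamma), gamma * lam**2 / 2)

# Beyond gamma*lam = 2.0, the penalty is constant at gamma*lam^2/2 = 1.0.
vals = mcp([0.0, 0.5, 2.0, 10.0], lam=1.0, gamma=2.0)
```

Applied to the singular values of a matrix, this is the matrix MCP regularizer; the flat region is what allows the leading singular values of a low rank matrix to be recovered without the downward bias of the nuclear norm.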
dc.language.iso: ENG
dc.publisher: Rensselaer Polytechnic Institute, Troy, NY
dc.relation.ispartof: Rensselaer Theses and Dissertations Online Collection
dc.subject: Mathematics
dc.title: Nonconvex regularizers for sparse optimization and rank minimization
dc.type: Electronic thesis
dc.type: Thesis
dc.digitool.pid: 180621
dc.digitool.pid: 180622
dc.digitool.pid: 180623
dc.rights.holder: This electronic version is a licensed copy owned by Rensselaer Polytechnic Institute, Troy, NY. Copyright of original work retained by author.
dc.description.degree: PhD
dc.relation.department: Dept. of Mathematical Sciences

