
    Nonconvex regularizers for sparse optimization and rank minimization

    Author
    Sagan, April
    View/Open
    180622_Sagan_rpi_0185E_11857.pdf (3.946 MB)
    Other Contributors
    Mitchell, John E.; Lai, Rongjie; Xu, Yangyang; Wang, Meng
    Date Issued
    2021-05
    Subject
    Mathematics
    Degree
    PhD
    Terms of Use
    This electronic version is a licensed copy owned by Rensselaer Polytechnic Institute, Troy, NY. Copyright of original work retained by author.
    URI
    https://hdl.handle.net/20.500.13015/2708
    Abstract
    This dissertation addresses the problem of minimizing a nonconvex relaxation of the rank of a matrix. In the first of three works presented in this dissertation, we formulate rank minimization as a semidefinite program with complementarity constraints, and show connections between relaxations of the complementarity constraint formulation and other formulations with nonconvex regularizers. In the second, we show how to use the low-rank factorization of a semidefinite matrix to derive computationally efficient algorithms for minimizing a nonconvex relaxation of the rank function. Lastly, we analyze a very general class of problems that minimize nonconvex regularizers to promote sparse and low-rank structures, and present a novel analysis of a commonly used class of algorithms, guaranteeing convergence to a matrix close to the underlying ground-truth low-rank matrix.

    Data analysis techniques that rely upon a matrix being low rank have received much attention in the past decade, with impressive computational results on large matrices and theoretical results guaranteeing the success of Robust PCA and matrix completion. Many of these results are based on minimizing the nuclear norm of a matrix (defined as the sum of its singular values) as a surrogate for the rank function, much as the $l_1$ norm is minimized to promote sparsity in a vector.

    While this convex relaxation is an incredibly useful technique in many applications, minimizing the nuclear norm of a matrix has been shown to introduce a (sometimes very large) estimator bias. Intuitively, we expect this bias because, if we hope to recover a rank $r$ matrix, we must impose enough weight on the nuclear norm term that the $(r+1)$th singular value is driven to zero. By the nature of the nuclear norm, this also puts weight on minimizing the first $r$ singular values, resulting in a bias towards zero proportional to the spectral norm of the noise added to the true data matrix.

    Fortunately, recent work has shown that the estimator bias from convex regularizers can be reduced (or even eliminated, for well-conditioned matrices) by using nonconvex regularizers, such as the Schatten-$p$ norm or the minimax concave penalty (MCP).
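    The bias argument above can be seen numerically. The following is a minimal sketch, not taken from the thesis: the matrix sizes, noise level, and threshold choice are illustrative assumptions. It denoises a noisy rank-$r$ matrix by shrinking its singular values, once with the proximal operator of the nuclear norm (soft-thresholding) and once with the proximal operator of MCP (firm thresholding).

    import numpy as np

    def svd_shrink(Y, shrink):
        """Apply a scalar shrinkage rule to the singular values of Y."""
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        return (U * shrink(s)) @ Vt

    def soft(s, lam):
        # Proximal operator of the nuclear norm: every singular value is
        # pulled toward zero by lam, which is the source of the estimator bias.
        return np.maximum(s - lam, 0.0)

    def firm(s, lam, gamma=3.0):
        # Proximal operator of MCP ("firm thresholding", gamma > 1): small
        # values are zeroed, large values pass through unchanged (no bias).
        return np.where(s <= lam, 0.0,
               np.where(s <= gamma * lam, (s - lam) / (1.0 - 1.0 / gamma), s))

    rng = np.random.default_rng(0)
    n, r, sigma = 200, 5, 0.1                                      # assumed sizes
    L = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r ground truth
    Y = L + sigma * rng.standard_normal((n, n))                    # noisy observation

    lam = 2 * sigma * np.sqrt(n)  # roughly the spectral norm of the noise
    err_nuc = np.linalg.norm(svd_shrink(Y, lambda s: soft(s, lam)) - L)
    err_mcp = np.linalg.norm(svd_shrink(Y, lambda s: firm(s, lam)) - L)
    print(f"nuclear norm error: {err_nuc:.2f}   MCP error: {err_mcp:.2f}")

    With these assumed parameters, the MCP estimate typically lands noticeably closer to the ground truth in Frobenius norm: firm thresholding leaves the $r$ large singular values untouched, while soft-thresholding shifts each of them down by a quantity on the order of the spectral norm of the noise, exactly the bias the abstract describes.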
    Description
    May 2021; School of Science
    Department
    Dept. of Mathematical Sciences
    Publisher
    Rensselaer Polytechnic Institute, Troy, NY
    Relationships
    Rensselaer Theses and Dissertations Online Collection
    Access
    Users may download and share copies with attribution in accordance with a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License. No commercial use or derivatives are permitted without the explicit approval of the author.
    Collections
    • RPI Theses Online (Complete)
    • RPI Theses Open Access
