## Topics in matrix approximation

##### Author

Nambirajan, Srinivas

##### Other Contributors

Kramer, Peter Roland, 1971-; Magdon-Ismail, Malik; McLaughlin, Joyce; Mitchell, John E.

##### Date Issued

2015-12

##### Subject

Applied mathematics

##### Degree

PhD

##### Terms of Use

This electronic version is a licensed copy owned by Rensselaer Polytechnic Institute, Troy, NY. Copyright of original work retained by author. Attribution-NonCommercial-NoDerivs 3.0 United States

##### Abstract

A fundamental need in computational linear algebra is to compute with matrices quickly but approximately. This is commonly achieved by approximating the matrices themselves, deterministically or randomly, so that the structure in them that is essential to the computation is preserved well. We study two natural and useful problems in this area: one involving the deterministic, low-rank approximation of a matrix, and the other involving randomized approximation.

First, we study the low-rank approximation of a matrix $\C \in \reals^{m, n}$ by a matrix of rank at most $k < \min(m, n)$ under the spectral (operator) norm, with the additional constraint that the columns of the approximation belong to a specified $r$-dimensional subspace $\sB$. The analogous problem under the \emph{Frobenius} norm admits a quick solution in $O(T_{svd}(\B))$ operations, where $T_{svd}(\B)$ is the number of operations needed to compute the full singular value decomposition of a matrix $\B \in \reals^{m, n}$ whose range is $\sB$. However, no algorithm was previously known for the problem under the \emph{spectral} norm. We provide the first closed-form solution to this problem and an algorithm to compute it that runs in $O(T_{svd}(\C))$ operations. We then use this algorithm to improve an existing result in low-rank approximation substantially: the best known result for computing a general low-rank approximation guarantees only a \emph{relative-error} approximation, whereas we guarantee the existence of \emph{optimal} low-rank approximations.

Next, we study randomized approximation of a matrix to obtain good preconditioners for it. A ubiquitous operation in computational linear algebra is the solution of a linear system $\A \x = \b$. A standard technique for quickly obtaining relative-error solutions to such systems with high probability is to find a good randomized preconditioner for $\A$ and use it in an appropriate iterative algorithm, such as Chebyshev iteration or Conjugate Gradient. An established result for such preconditioning of symmetric, diagonally dominant (SDD) matrices has recently been extended to finite element matrices arising from finite element meshes for elliptic PDEs. Computing these preconditioners is expensive, requiring $O(rn^2 + n^3)$ operations for a matrix $\A \in \reals^{n, n}$, where $r > n$ is of the order of the number of elements in the finite element mesh. We provide a method that computes them in $\tilde{O}(n^3 \log (rn))$ operations (where $\tilde{O}$ hides poly-logarithmic factors), a significant improvement for $r = \omega(n)$.
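The Frobenius-norm baseline the abstract refers to has a classical SVD-based solution: the best rank-$k$ Frobenius approximation of $\C$ with columns in $\sB$ is the rank-$k$ truncation of the orthogonal projection of $\C$ onto $\sB$ (Eckart–Young applied to the projected matrix). The following is a minimal numpy sketch of that known baseline only; it is not the thesis's spectral-norm algorithm, and the function name and random test data are illustrative assumptions.

```python
import numpy as np

def constrained_lowrank_frobenius(C, B, k):
    """Best rank-<=k Frobenius-norm approximation of C whose columns
    lie in range(B). Sketch of the classical result: truncate the SVD
    of the orthogonal projection of C onto range(B)."""
    # Orthonormal basis Q for range(B) via thin QR.
    Q, _ = np.linalg.qr(B)
    # Project C onto the subspace: P_B C = Q (Q^T C).
    PC = Q @ (Q.T @ C)
    # Rank-k truncation of the projected matrix (Eckart-Young).
    U, s, Vt = np.linalg.svd(PC, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

# Tiny usage example with assumed random data.
rng = np.random.default_rng(0)
C = rng.standard_normal((6, 5))
B = rng.standard_normal((6, 3))   # spans a 3-dimensional subspace sB
X = constrained_lowrank_frobenius(C, B, k=2)
assert np.linalg.matrix_rank(X) <= 2
# Columns of X lie in range(B): projecting X onto range(B) leaves it fixed.
Q, _ = np.linalg.qr(B)
assert np.allclose(Q @ (Q.T @ X), X)
```

The key design point is that the rank-$k$ truncation of the projected matrix automatically has columns in $\sB$, so the constraint and the rank bound are satisfied simultaneously; the thesis shows the spectral-norm analogue requires a genuinely different closed form.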

##### Description

December 2015; School of Science

##### Department

Dept. of Mathematical Sciences

##### Publisher

Rensselaer Polytechnic Institute, Troy, NY

##### Relationships

Rensselaer Theses and Dissertations Online Collection

##### Access

CC BY-NC-ND. Users may download and share copies with attribution in accordance with a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License. No commercial use or derivatives are permitted without the explicit approval of the author.
