## Topics in matrix approximation

##### Authors

Nambirajan, Srinivas

##### Issue Date

2015-12

##### Type

Electronic thesis

Thesis

##### Language

ENG

##### Keywords

Applied mathematics

##### Abstract

A fundamental need in computational linear algebra is computing with matrices quickly but approximately. This is commonly achieved by approximating the matrices themselves, either deterministically or randomly, so that the structure in these matrices essential to the computation is preserved well. We study two useful and natural problems in this area: one involving the deterministic, low-rank approximation of a matrix, and the other involving randomized approximation.

First, we study the low-rank approximation of a matrix $\C \in \reals^{m, n}$, using a matrix of rank at most $k < \min (m, n)$ under the spectral (operator) norm, with the additional constraint that the columns of the approximation belong to a specified $r$-dimensional subspace $\sB$. We derive a closed-form expression for the solution to this problem and present an algorithm to compute it. The similarly constrained approximation under the \emph{Frobenius} norm admits a quick solution obtained in $O(T_{svd}(\B))$, where $T_{svd}(\B)$ is the number of operations taken to compute the full singular value decomposition of a matrix $\B \in \reals^{m, n}$ whose range is $\sB$. However, there was no known algorithm for the problem in \emph{spectral} norm. We provide the first closed-form solution to the problem and an algorithm to compute it that runs in $O(T_{svd}(\C))$. We then use this algorithm to drastically improve an existing result in low-rank approximation: the best known result in computing a general low-rank approximation of a matrix guarantees only a \emph{relative-error} approximation; we guarantee the existence of \emph{optimal} low-rank approximations.
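In the abstract's own notation (with $\sB$ the range of $\B$), the constrained spectral-norm problem described above can be formalized as follows; this is a restatement consistent with the description, not a formula taken from the thesis itself:

```latex
\min_{\substack{X \in \reals^{m, n} \\ \mathrm{rank}(X) \le k,\;\; \mathrm{range}(X) \subseteq \sB}} \left\| \C - X \right\|_2
```

The Frobenius-norm variant replaces $\| \cdot \|_2$ with $\| \cdot \|_F$ over the same feasible set.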

Next, we study a randomized approximation of a matrix to obtain good preconditioners for it. A ubiquitous operation in computational linear algebra is the solution of a linear system $\A \x = \b$. The technique used to quickly obtain relative-error solutions to such systems with high probability is finding good randomized preconditioners for $\A$ for use in an appropriate iterative algorithm, such as Chebyshev iteration or Conjugate Gradient. An established result for such preconditioning of symmetric, diagonally dominant (SDD) matrices has recently been extended to finite element matrices arising from finite element meshes for elliptic PDEs. The computation of such preconditioners is expensive, requiring $O(rn^2 + n^3)$ operations for a matrix $\A \in \reals^{n, n}$, where $r > n$ is of the order of the number of elements in the finite element mesh. We provide a method that computes these preconditioners in $\tilde{O}(n^3 \log (rn))$ (where $\tilde{O}$ hides poly-logarithmic factors), which is a significant improvement for $r = \omega(n)$.
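As context for the preconditioned iterative solves mentioned above, the following is a minimal preconditioned conjugate gradient sketch in Python/NumPy. It uses a simple Jacobi (diagonal) preconditioner as an illustrative stand-in, not the thesis's randomized construction, and the system matrix here is an arbitrary symmetric positive definite example:

```python
import numpy as np

def pcg(A, b, apply_M_inv, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradient for a symmetric positive
    definite A. apply_M_inv(r) applies the preconditioner's inverse
    to a residual vector."""
    x = np.zeros_like(b)
    r = b - A @ x                    # initial residual
    z = apply_M_inv(r)               # preconditioned residual
    p = z.copy()                     # initial search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)        # optimal step length
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = apply_M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p    # conjugate direction update
        rz = rz_new
    return x

# Example system: a well-conditioned SPD matrix with a Jacobi
# (diagonal) preconditioner standing in for the randomized
# preconditioners discussed in the abstract.
n = 50
rng = np.random.default_rng(0)
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)          # symmetric positive definite
b = rng.standard_normal(n)
d = np.diag(A)                       # Jacobi preconditioner M = diag(A)
x = pcg(A, b, lambda r: r / d)
```

In practice the quality of the preconditioner $M$ (how well $M^{-1}\A$ is conditioned) governs the iteration count, which is what the randomized constructions above aim to make small cheaply.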

##### Description

December 2015

School of Science

##### Publisher

Rensselaer Polytechnic Institute, Troy, NY