#### LOW-RANK MATRIX ESTIMATION (COMPLETION and EXTRACTION)

**COMPLETION**: Given some entries of a matrix, discover the rest

**EXTRACTION**: Recover a matrix from its superposition with another

**MODELING-BASED PAPERS**

**Matrix Completion from Power-Law Distributed Samples Meka, Jain and Dhillon, NIPS 2009**

**(COMPLETION)** This paper notes that the assumption that observed entries are uniformly distributed does not hold in real-world settings like the Netflix problem. It provides an innovative bipartite graph-based representation of the matrix and employs random graph models to model the observed entries. Based on this representation, it gives an iterative cascading algorithm that estimates the remaining entries by successively solving systems of linear equations. In effect, this paper performs **Completion** and **Factorization** simultaneously.

**Robust video denoising using low rank matrix completion Ji, Liu, Shen and Xu, CVPR 2010**

**(COMPLETION)** This is a straightforward application of the Candes-Recht setting to video denoising. Patch matrices (spatio-temporal fragments reshaped into matrices) from videos are modelled as low-rank matrices. Entries are marked as noisy or genuine based on their neighbors; the noisy entries are treated as missing, posing the problem as low-rank completion of the patch matrices. It is clearly unable to handle corrupted frames, as that amounts to a whole column being unknown.
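
The patch-matrix construction can be sketched in NumPy (the function name and the fixed-location patch are my own simplification; the paper matches similar patches across frames rather than using a single fixed window):

```python
import numpy as np

def patch_matrix(video, y, x, p):
    # Stack the p x p patch at (y, x) from every frame as the columns of a
    # matrix; for static content the columns are near-identical, hence the
    # patch matrix is (approximately) low-rank.
    T = video.shape[0]
    return video[:, y:y+p, x:x+p].reshape(T, p * p).T   # (p*p) x T

# A perfectly static 5-frame video yields a rank-1 patch matrix.
frame = np.arange(64, dtype=float).reshape(8, 8)
video = np.tile(frame, (5, 1, 1))
M = patch_matrix(video, 2, 2, 4)
```

Marking some entries of `M` as missing then reduces denoising to exactly the completion problem of Candes-Recht.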

**RASL: Robust Alignment by Sparse and Low-rank Decomposition for Linearly Correlated Images Peng, Ganesh, Wright, Xu and Ma, IEEE Transactions on Pattern Analysis and Machine Intelligence 2011**

**(EXTRACTION)** Standard Robust PCA, except that the columns of the observed matrix undergo affine transformations, which have to be learned. Formally, P∘T = Q + R, where P is the observed matrix, T is the unknown set of affine transformations, Q is low-rank and R is sparse noise.

**Accelerated Low-Rank Visual Recovery by Random Projection Mu, Dong, Yuan and Yan, CVPR 2011**

**(EXTRACTION)** Robust PCA (P=Q+R) involves SVDs of matrices to estimate the low-rank Q, which can be very slow when the matrices are large. This paper discusses minimizing the rank of a low-dimensional surrogate matrix that approximates a random projection of Q into a low-dimensional space. The Frobenius-norm distance between the surrogate matrix and the random projection of Q is used as the objective. Bounds are given on the rank of Q in terms of the rank of its random projection to lower dimensions.
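
A minimal sketch of the underlying fact (the sizes and the Gaussian projection are illustrative, not the paper's exact construction): a random projection of a rank-r matrix to k ≥ r dimensions preserves its rank almost surely, so SVDs can run on the much smaller sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, k = 200, 5, 20                      # ambient size, true rank, sketch size
Q = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r matrix
G = rng.standard_normal((k, n)) / np.sqrt(k)                   # random projection
sketch = G @ Q                            # k x n: far cheaper to factor than Q
rank_Q = np.linalg.matrix_rank(Q)
rank_sketch = np.linalg.matrix_rank(sketch)
```

An SVD of the k × n sketch costs O(k²n) instead of O(n³), which is the source of the claimed acceleration.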

**Estimation of Simultaneously Sparse and Low Rank Matrices Richard, Savalle and Vayatis, ICML 2012**

**(COMPLETION)** Assumes the same matrix is both sparse and low-rank. Low-rankness is induced through the nuclear norm and sparsity through the L1-norm, both on the same matrix.
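
In NumPy terms, the combined penalty is simply the sum of the two norms evaluated on the same matrix X (`gamma` is an illustrative trade-off weight, not the authors' notation):

```python
import numpy as np

def sparse_low_rank_penalty(X, gamma):
    # Nuclear norm (sum of singular values) induces low rank;
    # entrywise L1 norm induces sparsity.
    return np.linalg.norm(X, 'nuc') + gamma * np.abs(X).sum()
```

For X = I₃ both norms equal 3, so with gamma = 1 the penalty is 6.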

**Robust Matrix Completion via Joint Schatten p-Norm and l_p-Norm Minimization Nie, Wang, Huang, and Ding, ICDM 2012**

**(COMPLETION and EXTRACTION)** Considers alternative norms (the Schatten p-norm and the l_p-norm) for inducing low-rankness and sparsity.

**Tensor Completion for Estimating Missing Values in Visual Data Liu, Musialski, Wonka, and Ye, IEEE Transactions on Pattern Analysis and Machine Intelligence 2012**

**(COMPLETION)** Extends low-rank completion from matrices to tensors.
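
The basic device for defining tensor rank penalties is the mode-n unfolding, whose matrix nuclear norms are combined. A minimal NumPy sketch (the function name is mine):

```python
import numpy as np

def unfold(tensor, mode):
    # Mode-n unfolding: the mode-n fibers of the tensor become the
    # columns of a matrix of shape (tensor.shape[mode], -1).
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

X = np.arange(24).reshape(2, 3, 4)   # a 2 x 3 x 4 tensor
```

The tensor completion objective then penalizes, e.g., a weighted sum of the nuclear norms of `unfold(X, 0)`, `unfold(X, 1)` and `unfold(X, 2)`.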

**BAYESIAN MODELLING**

**Bayesian Robust Principal Component Analysis Ding, He and Carin, IEEE Transactions on Image Processing, 2011**

Bayesian modelling of the singular values, singular vectors and sparse error matrix, applied to background subtraction in videos.

**Sparse Bayesian Methods for Low-Rank Matrix Estimation Babacan, Luessi, Molina and Katsaggelos, IEEE Transactions on Signal Processing 2012**

Factorization-based approach to enforcing the low-rank criterion. Can be used for both completion and extraction.

**THEORETICAL PAPERS**

**Exact Matrix Completion via Convex Optimization Candes and Recht, Foundations of Computational Mathematics, 2009**

**(COMPLETION)** This is the seminal paper in this field. The aim is to recover all entries of a partially known matrix whose rank is low. The optimization problem minimizes the rank of the matrix subject to constraints on the observed entries. Since rank is a nonconvex function, its convex relaxation, the nuclear norm, is used as the objective. They guarantee **perfect recovery with high probability** provided the rank is low enough (O(n^(1/5))) and the number of observed entries is O(n^(6/5) r log(n)). Also, for any rank r, the number of entries needed is O(n^(5/4) r log(n)). However, the matrix is required to follow the **random orthogonal model**, the observed positions need to be sampled uniformly, and at least one entry from each row and column needs to be observed.
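
To illustrate the problem setup (not the paper's nuclear-norm program), here is a simple SVD-imputation heuristic that assumes the rank is known: alternately fit a rank-r truncated SVD and reset the observed entries.

```python
import numpy as np

def svd_impute(P_obs, mask, rank, n_iter=200):
    # Heuristic completion: fill unknowns from a rank-r SVD truncation,
    # then restore the known entries, and repeat. A stand-in sketch for
    # the convex nuclear-norm minimization analyzed in the paper.
    X = np.where(mask, P_obs, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X = np.where(mask, P_obs, low_rank)   # keep observed entries fixed
    return low_rank
```

On a synthetic rank-2 matrix with 60% of entries observed uniformly at random, this typically recovers the matrix to high accuracy, consistent with the sampling regime the theory describes.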

**Matrix Completion from a few Entries Keshavan, Montanari and Oh, IEEE Transactions on Information Theory, 2010**

**(COMPLETION)** This is an improvement over the previous paper. A new algorithm called spectral matrix completion can recover an n×n matrix of rank r from O(nr) known entries with bounded RMS error. Moreover, if O(n log(n)) entries are known, the matrix can be recovered perfectly with high probability.

**Matrix Completion from Noisy Entries Keshavan, Montanari and Oh, Journal of Machine Learning Research, 2010**

**(COMPLETION and EXTRACTION)** This paper considers the case where even the observed entries are perturbed. In other words, the observation model is N_{\Omega}=(M+Z)_{\Omega}, where M is low-rank and Z is a perturbation matrix with small entries. An algorithm called OPTSPACE is provided, based on trimming over-represented entries of \Omega and taking low-rank projections. Error bounds are given in terms of the Frobenius norm of Z_{\Omega}. This is one formulation of a **near low-rank matrix**.
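
The trimming step can be sketched as zeroing the observations in any row or column that holds far more entries than average (the threshold factor of 2 here is illustrative; OPTSPACE trims degrees above twice the average):

```python
import numpy as np

def trim(mask, factor=2.0):
    # Zero all observations in rows/columns whose count of observed
    # entries exceeds `factor` times the average per row/column.
    m = mask.astype(float).copy()
    heavy_rows = m.sum(axis=1) > factor * m.sum() / m.shape[0]
    heavy_cols = m.sum(axis=0) > factor * m.sum() / m.shape[1]
    m[heavy_rows, :] = 0.0
    m[:, heavy_cols] = 0.0
    return m
```

Trimming prevents a few over-sampled rows or columns from dominating the spectrum of the observed matrix before the low-rank projection.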

**Robust Principal Component Analysis Wright, Sastry, Peng, Ganesh, Rao and Ma, NIPS 2009**

**Robust Principal Component Analysis Candes, Wright, Ganesh, Rao and Ma, Arxiv preprint**

**(EXTRACTION)** These two papers consider the observation model P=Q+R, where P is fully observed, Q is low-rank and R is sparse. R is regularized by the L1-norm in the convex optimization problem. Guarantees are provided for convergence of the program to the actual solution (Q_0, R_0) based on conditions (incoherence) on these two matrices. The NIPS paper shows applications of the technique in computer vision, in problems similar to video denoising.
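
A compact ADMM sketch of this convex program (often called principal component pursuit) in NumPy, using the standard default λ = 1/√n; the penalty parameter choice and iteration count here are illustrative, not the papers' prescriptions:

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: prox of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    # Entrywise soft-thresholding: prox of tau * L1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_admm(P, n_iter=300):
    # ADMM for min ||Q||_* + lam * ||R||_1  s.t.  P = Q + R.
    m, n = P.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / (np.abs(P).sum() + 1e-12)
    Q = np.zeros_like(P); R = np.zeros_like(P); Y = np.zeros_like(P)
    for _ in range(n_iter):
        Q = svt(P - R + Y / mu, 1.0 / mu)       # low-rank update
        R = shrink(P - Q + Y / mu, lam / mu)    # sparse update
        Y = Y + mu * (P - Q - R)                # dual ascent
    return Q, R
```

On an incoherent synthetic instance (low rank, sparse corruption at random positions), this recovers Q and R nearly exactly, as the guarantees predict.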

**Robust Matrix Completion and Corrupted Columns Xu, Caramanis and Sanghavi, ICML 2011**

**Robust Matrix Completion with Corrupted Columns Xu, Caramanis and Sanghavi, arxiv preprint**

**(COMPLETION and EXTRACTION)** These two papers consider the case where a low-rank matrix L is superposed with a column-sparse matrix C; the observation model is once again P=L+C. The non-zero columns of C are unknown, and entries of P are revealed uniformly at random, including entries from the corrupted columns. The aim is to identify the non-zero columns of C (the corrupted columns of P) and completely recover the remaining (uncorrupted) columns. Guarantees are provided for convergence to the actual solution (L_0, C_0) based on L_0, the fraction of corrupted columns, and the fraction of entries revealed in the non-corrupted columns.

**Noisy Matrix Decompositions via Convex Relaxations: Optimal rates in high dimensions Agarwal, Negahban and Wainwright, ICML 2011**

**Noisy Matrix Decompositions via Convex Relaxations: Optimal rates in high dimensions Agarwal, Negahban and Wainwright, arxiv preprint**

**(EXTRACTION)** These two papers consider the observation model P=Q+R+W, where Q is low-rank and W is a noise matrix. They generalize the class of regularizers on R, including the sparsity-inducing L1-norm and the column-sparsity-inducing (2,1)-norm. Error bounds on the solution (\hat{Q},\hat{R}) of their convex program (with respect to the actual solution (Q_0, R_0)) are provided in terms of the regularizer on R, the noise matrix W, and even the regularization coefficients of Q and R. They claim results for approximately low-rank Q, but I could not understand what they mean by this; perhaps the noise matrix W is viewed as a perturbation from low-rankness.
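
As an example of the regularizer family, the proximal map of the column-sparsity-inducing (2,1)-norm shrinks whole columns at once, zeroing columns whose norm falls below the threshold (a generic sketch, not the authors' algorithm):

```python
import numpy as np

def prox_l21(X, tau):
    # Prox of tau * ||X||_{2,1}: soft-threshold each column's Euclidean
    # norm, which zeroes entire columns -- the source of column-sparsity.
    norms = np.linalg.norm(X, axis=0)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale
```

Swapping this prox for the entrywise soft-threshold used with the L1-norm is exactly how the same framework switches from sparse to column-sparse R.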