Translated Abstract
From the viewpoint of signal sampling, compressed sensing effectively reveals the essential characteristics of a signal. Compressed sensing exploits the compressibility of the signal through sparse representation, and sparse representation theory and models have been successfully applied to signal and image processing. Sparse representation expresses a signal succinctly: most of the atomic coefficients are zero, and the few large nonzero coefficients reveal the intrinsic structure of the signal or image. Sparse representation therefore extracts the essential features of a signal or image, which benefits subsequent compression. Recently, as a counterpart to the sparse representation model, scholars have proposed the cosparse analysis model. The two models are complementary: the sparse representation model focuses on the nonzero elements of the representation vector x, whereas the cosparse analysis model applies a pre-set analysis operator Ω to the vector x and emphasizes the zero entries of Ωx. Moreover, the number of zeros exploited by the cosparse analysis model is typically larger than that exploited by the sparse representation model.
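The contrast between the two views can be illustrated numerically. In this sketch the finite-difference operator is only one common illustrative choice of analysis operator Ω; a piecewise-constant signal is sparse in the analysis sense because its differences are mostly zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthesis (sparse) view: the coefficient vector x itself has few nonzeros.
n = 20
x = np.zeros(n)
x[rng.choice(n, size=3, replace=False)] = rng.standard_normal(3)

# Analysis (cosparse) view: a pre-set analysis operator Omega (here a
# first-order finite-difference operator, an illustrative choice) applied
# to a piecewise-constant signal s yields a vector Omega @ s with many zeros.
s = np.repeat([1.0, -2.0, 3.0], 7)            # piecewise-constant signal, length 21
Omega = np.eye(20, 21, k=1) - np.eye(20, 21)  # (Omega @ s)[i] = s[i+1] - s[i]
cosupport = int(np.sum(np.isclose(Omega @ s, 0)))  # number of zeros in Omega @ s

print(int(np.count_nonzero(x)), cosupport)    # 3 nonzeros vs. 18 zeros
```

Here the analysis vector Ωs has 18 zeros out of 20 entries, a larger count than the 3 nonzeros tracked by the synthesis model, matching the complementarity described above.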
Another problem closely related to compressed sensing is low-rank matrix approximation. In essence, low-rank matrix decomposition extends compressed sensing from vectors to matrices. In the big-data era, the principal components of many data sets lie in a low-dimensional space and are often corrupted by sparse noise, so the study of the sparse and low-rank matrix decomposition problem is very meaningful. In this dissertation, we focus on the cosparse analysis model and the low-rank matrix decomposition problem. The research contents are summarized as follows:
1. For the cosparse analysis model, building on the greedy analysis pursuit (GAP) algorithm, we propose an ℓp-weighted relaxation method that solves the model by constructing an adaptive weight matrix W. Between greedy and relaxation methods there is a gap between the non-convex and convex formulations, and the weight matrix allows us to bridge this gap. Theoretically, based on the restricted isometry property (RIP), we give an error analysis of the cosparse analysis model in the noisy setting. Experimental results show that our method reconstructs cosparse signals faster and more efficiently.
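The flavor of such a weighted relaxation can be sketched with a generic iteratively reweighted least-squares (IRLS) scheme for min ‖WΩx‖p subject to Mx = y. The weight update below is the standard IRLS choice, not the dissertation's adaptive matrix W, and the operator, sizes, and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

n, m = 30, 20
Omega = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)  # illustrative analysis operator
x_true = np.repeat([0.0, 2.0, -1.0], 10)          # piecewise constant: Omega @ x_true is mostly zero
M = rng.standard_normal((m, n))                   # measurement matrix
y = M @ x_true

# IRLS sketch for min ||W Omega x||_p s.t. M x = y: each step solves a
# weighted least-norm problem in closed form, then re-weights so that
# small analysis coefficients are penalized more strongly (p < 1).
p, eps = 0.7, 1e-6
w = np.ones(n - 1)
for _ in range(50):
    Q = Omega.T @ (w[:, None] * Omega) + 1e-6 * np.eye(n)  # small ridge keeps Q invertible
    Qinv_Mt = np.linalg.solve(Q, M.T)
    x = Qinv_Mt @ np.linalg.solve(M @ Qinv_Mt, y)          # minimizer subject to M x = y
    w = (np.abs(Omega @ x) ** 2 + eps) ** (p / 2.0 - 1.0)  # standard IRLS weight update
```

Each iterate satisfies the measurement constraint Mx = y by construction; as the weights adapt, the analysis coefficients Ωx are driven toward cosparsity.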
2. In the sense of unitarily invariant norms, we study the perturbation theory of low-rank matrix approximation. Suppose the matrix A is a low-rank approximation of the observed data matrix D and E is the perturbation matrix. Using the well-known decomposition of the pseudo-inverse difference (D† − A†) and the properties of matrix projections, we present lower bounds on the low-rank approximation error (D − A) in different cases. Experiments verify the theoretical results when the perturbation E is a sparse matrix.
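The classical Eckart–Young–Mirsky theorem, which such perturbation analyses build on, can be checked numerically (the matrix sizes and rank below are arbitrary): the best rank-k approximation in any unitarily invariant norm is obtained by truncating the SVD, and in the Frobenius norm the error equals the norm of the discarded singular values.

```python
import numpy as np

rng = np.random.default_rng(2)

D = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(D, full_matrices=False)

k = 2
Dk = (U[:, :k] * s[:k]) @ Vt[:k]            # truncated-SVD rank-k approximation

err = np.linalg.norm(D - Dk, "fro")         # approximation error
tail = np.sqrt(np.sum(s[k:] ** 2))          # norm of discarded singular values
print(np.isclose(err, tail))                # -> True
```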
3. Based on the restricted isometry property (RIP), this dissertation gives sufficient conditions for the exact reconstruction of sparse matrices under ideal conditions. For the noisy case, the robustness of sparse matrix recovery is analyzed and an upper bound on the approximation error is given. Numerical experiments confirm these results. In addition, for the robust principal component analysis (RPCA) model of sparse and low-rank matrix decomposition, exploiting the separability of the linearly constrained convex optimization problem, we present a separable surrogate function (SSF) method that differs from existing approaches. Based on this method, two iterative schemes are designed: a proximal point iterative thresholding (PPIT) algorithm, and an SSF-IALM algorithm built on the inexact augmented Lagrange multiplier (IALM) method. Theoretically, the convergence analysis of both algorithms is given. Simulations on real data and applications to astronomical images and standard gray-scale images show the feasibility and effectiveness of the proposed algorithms.
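For reference, a minimal sketch of the classical inexact-ALM iteration for the RPCA model min ‖A‖* + λ‖E‖₁ s.t. D = A + E is given below. It is not the dissertation's SSF, PPIT, or SSF-IALM scheme; the penalty initialization and growth factor are common illustrative heuristics:

```python
import numpy as np

def svt(X, tau):
    """Singular-value thresholding: prox operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    """Entrywise soft thresholding: prox operator of tau * l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_ialm(D, iters=200):
    """Minimal inexact-ALM sketch for min ||A||_* + lam*||E||_1 s.t. D = A + E."""
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))        # standard RPCA weight
    mu = m * n / (4.0 * np.abs(D).sum())  # common heuristic initial penalty
    A = np.zeros_like(D); E = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(iters):
        A = svt(D - E + Y / mu, 1.0 / mu)       # low-rank update
        E = shrink(D - A + Y / mu, lam / mu)    # sparse update
        Y += mu * (D - A - E)                   # dual update
        mu *= 1.05                              # gradually increase the penalty
    return A, E

rng = np.random.default_rng(3)
L = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 40))  # rank-2 component
S = np.zeros((40, 40))
mask = rng.random((40, 40)) < 0.05
S[mask] = 10.0 * rng.standard_normal(mask.sum())                 # sparse corruption
A, E = rpca_ialm(L + S)
rel_err = np.linalg.norm(A - L) / np.linalg.norm(L)  # typically a few percent or less
```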
4. Since the singular value decomposition (SVD) is time-consuming, we use the full-rank factorization of a matrix to characterize its low-rank component and propose a novel model named sparse and low-rank factorization (SLRF). Theoretical analysis proves the equivalence of the two models. Based on the SLRF model, we design two algorithms for the sparse and low-rank decomposition of a matrix: a penalty function method (PFM) and an augmented Lagrange multiplier method (ALMM), and give convergence theorems for both. Numerical experiments show that the SLRF method is superior to the RPCA method. Applied to airport-lobby video surveillance, the proposed method effectively separates the background (low-rank component) from the moving foreground (sparse component).
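A generic SVD-free sketch of the factorization idea, assuming the target rank r is known: the low-rank part is written as a product U V of thin factors and updated by cheap least-squares steps, with a soft-thresholding step for the sparse part. The alternating scheme and the threshold are illustrative and are not the dissertation's PFM or ALMM algorithms:

```python
import numpy as np

def shrink(X, tau):
    """Entrywise soft thresholding."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

rng = np.random.default_rng(4)

m, n, r = 50, 50, 3
L_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r part
S_true = np.zeros((m, n))
mask = rng.random((m, n)) < 0.05
S_true[mask] = 8.0 * rng.standard_normal(mask.sum())                # sparse part
D = L_true + S_true

# Represent the low-rank component as U @ V (no SVD needed) and alternate
# least-squares factor updates with a soft-thresholding update of S.
U = rng.standard_normal((m, r))
V = rng.standard_normal((r, n))
S = np.zeros_like(D)
tau = 0.5                                   # illustrative threshold
for _ in range(100):
    U = (D - S) @ np.linalg.pinv(V)         # fit U with V fixed
    V = np.linalg.pinv(U) @ (D - S)         # fit V with U fixed
    S = shrink(D - U @ V, tau)              # absorb large residuals into S

# By construction, every entry of the residual D - U @ V - S has magnitude
# at most tau, and U @ V has rank at most r.
```

The factor updates cost only small least-squares solves, which is the computational appeal of factorization models over repeated SVDs.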
Translated Keywords
[Cosparse analysis model, Error estimation, Low-rank matrix decomposition, Robust principal component analysis, Sparse representation]
Corresponding author's email