Nullspace property

In compressed sensing, the nullspace property gives necessary and sufficient conditions for the reconstruction of sparse signals using the techniques of \ell_1-relaxation. The term "nullspace property" originates from Cohen, Dahmen, and DeVore.[1] The nullspace property is often difficult to check in practice, and the restricted isometry property is a more modern condition in the field of compressed sensing.

The technique of \ell_1-relaxation

The non-convex \ell_0-minimization problem,

\min\limits_{x} \|x\|_0 subject to Ax = b,

is a standard problem in compressed sensing. However, \ell_0-minimization is known to be NP-hard in general.[2] As such, the technique of \ell_1-relaxation is sometimes employed to circumvent the difficulties of signal reconstruction using the \ell_0-norm. In \ell_1-relaxation, the \ell_1 problem,

\min\limits_{x} \|x\|_1 subject to Ax = b,

is solved in place of the \ell_0 problem. This relaxation is convex; indeed, it can be recast as a linear program, which makes it amenable to standard, computationally efficient solvers. Naturally, we wish to know when \ell_1-relaxation will give the same answer as the \ell_0 problem; the nullspace property is one way to guarantee agreement. A sketch of the linear-programming recast is given below.
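As an illustration, the \ell_1 problem can be recast as a linear program by splitting x into positive and negative parts. The following is a minimal sketch for real-valued A and b (the complex case leads instead to a second-order cone program), using NumPy and SciPy; the helper name l1_minimize is an assumption of this sketch, not a standard routine.

    import numpy as np
    from scipy.optimize import linprog

    def l1_minimize(A, b):
        # Solve min ||x||_1 subject to Ax = b as a linear program:
        # write x = u - v with u, v >= 0, so that ||x||_1 = sum(u) + sum(v)
        # at the optimum and the constraint becomes A u - A v = b.
        _, n = A.shape
        c = np.ones(2 * n)                  # objective: sum(u) + sum(v)
        A_eq = np.hstack([A, -A])           # [A, -A] @ [u; v] = b
        res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
        if not res.success:
            raise RuntimeError(res.message)
        u, v = res.x[:n], res.x[n:]
        return u - v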

Definition

An m \times n complex matrix A has the nullspace property of order s if for all index sets S with |S| \leq s we have that \|\eta_S\|_1 < \|\eta_{S^C}\|_1 for all \eta \in \ker{A} \setminus \left\{0\right\}. Here \eta_S denotes the vector obtained from \eta by zeroing out the entries outside S, and S^C denotes the complement of S in \{1, \ldots, n\}.
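Verifying the nullspace property directly is difficult in general, as noted above. In the special case where \ker{A} is one-dimensional, however, the condition collapses to a single test: the inequality is invariant under scaling \eta, and for a fixed \eta the tightest index set S collects its s largest-magnitude entries. The sketch below assumes that special case (the function name has_nsp_order_s is an assumption of this sketch).

    import numpy as np
    from scipy.linalg import null_space

    def has_nsp_order_s(A, s):
        # Sketch for the case dim ker(A) = 1; verifying the nullspace
        # property for a higher-dimensional kernel is itself hard.
        N = null_space(A)                   # orthonormal basis of ker(A)
        if N.shape[1] != 1:
            raise ValueError("sketch assumes a one-dimensional kernel")
        # The inequality ||eta_S||_1 < ||eta_{S^C}||_1 is invariant under
        # scaling eta, so one basis vector suffices, and the worst-case S
        # holds the s largest magnitudes, so only that one set is tested.
        mags = np.sort(np.abs(N[:, 0]))[::-1]
        return mags[:s].sum() < mags[s:].sum()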

Recovery condition

The following theorem gives a necessary and sufficient condition for the recoverability of a given s-sparse vector in \mathbb{C}^n. The proof of the theorem is standard, and the version supplied here is summarized from Holger Rauhut.[3]

\textbf{Theorem:} Let A be an m \times n complex matrix. Then every s-sparse signal x \in \mathbb{C}^n is the unique solution to the \ell_1-relaxation problem with b = Ax if and only if A satisfies the nullspace property of order s.

\textit{Proof:} For the forward direction, let \eta \in \ker{A} \setminus \left\{0\right\} and let S be an index set with |S| \leq s. The vectors \eta_S and -\eta_{S^C} are distinct (otherwise \eta = \eta_S + \eta_{S^C} = 0) and satisfy A(\eta_S) = A(-\eta_{S^C}), since A\eta = 0 and A is linear. Because \eta_S is s-sparse, it is by hypothesis the unique solution to the \ell_1-relaxation problem with b = A\eta_S, and hence \|\eta_S\|_1 < \|-\eta_{S^C}\|_1 = \|\eta_{S^C}\|_1, as desired. For the backward direction, let x be s-sparse and let z be another (not necessarily s-sparse) vector such that z \neq x and Az = Ax. Define the non-zero vector \eta = x - z and notice that it lies in the nullspace of A. Let S be the support of x, so that |S| \leq s, x = x_S, \eta_S = x - z_S, and \eta_{S^C} = -z_{S^C}. The result then follows from the triangle inequality and the nullspace property: \|x\|_1 \leq \|x - z_S\|_1 + \|z_S\|_1 = \|\eta_S\|_1 + \|z_S\|_1 < \|\eta_{S^C}\|_1 + \|z_S\|_1 = \|z_{S^C}\|_1 + \|z_S\|_1 = \|z\|_1, establishing the minimality of x. \square
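The theorem can be illustrated numerically: draw a matrix expected to admit \ell_1 recovery (random Gaussian matrices typically do for suitable dimensions), plant an s-sparse vector, and check that \ell_1-relaxation returns it. The sizes below are illustrative assumptions, and l1_minimize refers to the earlier sketch.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, s = 20, 40, 3                     # illustrative sizes (assumption)
    A = rng.standard_normal((m, n))

    x = np.zeros(n)                         # plant an s-sparse vector
    support = rng.choice(n, size=s, replace=False)
    x[support] = rng.standard_normal(s)

    x_hat = l1_minimize(A, A @ x)           # recover via l1-relaxation
    print(np.allclose(x, x_hat, atol=1e-6)) # True when recovery succeeds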

References

  1. Cohen, Albert; Dahmen, Wolfgang; DeVore, Ronald (2009-01-01). "Compressed sensing and best k-term approximation". Journal of the American Mathematical Society 22 (1): 211–231. doi:10.1090/S0894-0347-08-00610-3. ISSN 0894-0347.
  2. Natarajan, B. K. (1995-04-01). "Sparse Approximate Solutions to Linear Systems". SIAM J. Comput. 24 (2): 227–234. doi:10.1137/S0097539792240406. ISSN 0097-5397.
  3. Rauhut, Holger (2010). "Compressive Sensing and Structured Random Matrices". In Fornasier, Massimo (ed.). Theoretical Foundations and Numerical Methods for Sparse Recovery. Radon Series on Computational and Applied Mathematics 9. Berlin: De Gruyter. pp. 1–92.