# Seminar: April 2

## Greedy algorithms in compressed sensing

We study sparse representations and sparse approximations with respect to incoherent dictionaries, and address the problem of designing and analyzing greedy methods of approximation. A key question in this regard is: how should the efficiency of a specific algorithm be measured? To answer it, we prove Lebesgue-type inequalities for the algorithms under consideration.

A very important new ingredient of the talk is that the analysis is carried out in a Banach space rather than a Hilbert space. In many numerical problems users are content with the Hilbert space setting and do not consider the more general Banach space setting. There are known arguments that justify interest in Banach spaces, and this talk supplies one more argument in favor of greedy approximation in Banach spaces. We introduce the concept of an $M$-coherent dictionary in a Banach space, which generalizes the corresponding concept in a Hilbert space, and we analyze the Quasi-Orthogonal Greedy Algorithm (QOGA), a generalization to Banach spaces of the Orthogonal Greedy Algorithm (Orthogonal Matching Pursuit). It is known that the QOGA recovers $S$-sparse signals exactly after $S$ iterations provided $S\le(1+1/M)/2$; this result is well known for the Orthogonal Greedy Algorithm in Hilbert spaces.

The following question is therefore of great importance: are there dictionaries in $\mathbb R^n$ whose coherence in $\ell_p^n$, for some $p\in(1,\infty)$, is smaller than their coherence in $\ell_2^n$? We show that the answer is "yes". Thus, for such dictionaries, replacing the Hilbert space $\ell_2^n$ by the Banach space $\ell_p^n$ improves the upper bound on the sparsity that guarantees exact recovery of a signal.
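As an illustration of the Hilbert-space ($\ell_2^n$) case only, here is a NumPy sketch of the ingredients above: the mutual coherence $M$ of a dictionary and the Orthogonal Greedy Algorithm (Orthogonal Matching Pursuit) recovering an $S$-sparse signal when $S\le(1+1/M)/2$. The function names (`coherence`, `omp`) and the choice of a spikes-plus-DCT dictionary, whose coherence is roughly $\sqrt{2/n}$, are mine, not from the talk; the Banach-space QOGA replaces the inner products and least-squares projection below with their $\ell_p$ analogues.

```python
import numpy as np

def coherence(D):
    """Mutual coherence M: largest |<d_i, d_j>| over distinct unit-norm atoms."""
    G = np.abs(D.T @ D)
    np.fill_diagonal(G, 0.0)
    return G.max()

def omp(D, y, n_iter):
    """Orthogonal Greedy Algorithm (OMP) in l2: greedy atom selection
    followed by orthogonal projection onto the span of the chosen atoms."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(n_iter):
        # Greedy step: pick the atom most correlated with the residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Orthogonal step: best l2 approximation from the selected atoms.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Spikes + orthonormal DCT-II dictionary in R^64 (coherence ~ sqrt(2/64)).
n = 64
k = np.arange(n)[:, None]
j = np.arange(n)[None, :]
C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
C[0, :] = np.sqrt(1.0 / n)
D = np.hstack([np.eye(n), C])

M = coherence(D)
S = 3  # here S <= (1 + 1/M)/2, so exact recovery is guaranteed
x0 = np.zeros(2 * n)
x0[[5, 70, 100]] = [1.0, -2.0, 1.5]  # an S-sparse signal
y = D @ x0
x_hat = omp(D, y, S)
```

After `S` iterations the residual vanishes and `x_hat` coincides with `x0` up to numerical precision, exactly as the recovery guarantee predicts for this coherence level.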