Mirror Image

Mostly AR and Stuff

Compressive Sensing and Computer Vision

Thanks to Igor Carron for pointing out this video lecture:
Compressive Sensing for Computer Vision: Hype vs Hope
It starts with a comprehensible explanation of what compressive sensing is about (BTW, the Wikipedia article on compressive sensing is wholly inadequate).
Basically, it’s about modeling the lower-dimensional signal (image) as the projection of a mostly-zero high-dimensional vector by a rectangular matrix. It turns out that this sparse high-dimensional vector can be recovered if the matrix is almost orthonormal (the Restricted Isometry Property). The Discrete Fourier Transform and random matrices have that property.
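To make that concrete, here is a minimal numerical sketch (the dimensions, the Gaussian matrix, and the choice of ISTA as the solver are mine, not from the lecture): a sparse high-dimensional vector is measured through a random rectangular matrix and recovered by iterative soft thresholding, one of the simpler L^{1} solvers.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 400, 100, 10                 # signal dim, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random matrix, RIP w.h.p.
y = A @ x_true                                # low-dimensional measurements

# ISTA for min_x 0.5*||A x - y||_2^2 + lam*||x||_1
lam = 0.05
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    z = x - A.T @ (A @ x - y) / L             # gradient step on the quadratic term
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # should be small
```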
This sparse vector can be seen as a classification space for the original signal, so the application of Compressive Sensing to Computer Vision is mostly about classification or recognition. Since the methods used by CS are convex and linear programming, they are not run-time methods and would not help much in real-time tracking. There is CS-inspired advice at the end of the lecture about trying to replace L^{2}-norm optimization with the L^{1} norm, which could actually be helpful in some cases. If L^{1} is approximated as iteratively reweighted L^{2}, it’s essentially the same as robustification of the least squares method, as the sketch below shows.
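A minimal sketch of that last point (the function name and constants are my illustration): approximating the L^{1} residual norm by iteratively reweighted L^{2} is exactly a robust least squares loop, with weights that downweight large residuals.

```python
import numpy as np

def irls_l1(A, y, iters=50, eps=1e-6):
    # Approximate argmin_x ||A x - y||_1 by iteratively reweighted L^2:
    # weighting each residual by 1/|r_i| turns the quadratic cost into an
    # absolute-value one, i.e. a robustified least squares.
    x = np.linalg.lstsq(A, y, rcond=None)[0]           # plain L^2 start
    for _ in range(iters):
        r = A @ x - y
        w = 1.0 / np.sqrt(np.maximum(np.abs(r), eps))  # sqrt of IRLS weights; eps avoids 1/0
        x = np.linalg.lstsq(w[:, None] * A, w * y, rcond=None)[0]
    return x
```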


7, December, 2009 - Posted by mirror2image | Coding AR

3 Comments

  1. Sergey,

    Iterative reweighted schemes look like L2, but their main problem is that they are real slooooow. There are new solvers right now that are pretty fast compared to, say, two to three years ago, when linear programming seemed to be the only way to go (with some greedy algorithms). The recent AMP algorithm by Donoho et al. seems to fall in that category, for instance.

    The next frontier in that field is really this whole manifold-based signal processing. In effect, it is one thing to care about sparsity, but sparsity is hardly a good measure of objects and more complex things around us. For instance, there are embedded statistics in a cube (several straight lines connected together in a certain fashion) which might be more helpful than saying that a cube is made out of 12 lines.

    Igor.

    Comment by Igor Carron | 7, December, 2009

    I’ve looked through the AMP algorithm, without digging into it. As I understand it, this “Message Passing” or “belief propagation” is a form of Jacobi iteration, and in terms of robustification AMP is ideologically equivalent to using the first derivative of the robustifier, not only the robustifier itself. Derivatives of the robustifier are already used in reprojection error minimization (Triggs), and BTW the Jacobi method could be used too as a cheap replacement for Levenberg-Marquardt/line search (a toy sketch of that idea follows the comments). “Manifold-based signal processing” – do you mean “manifold lifting”, or something else?

    Comment by mirror2image | 8, December, 2009

    Manifold-based signal processing means that you do identification/clustering/classification operations on random projections of the elements (not the elements themselves). It turns out the random projections are smaller but, because of the RIP, allow for comparisons equivalent to what you’d do with the full (and very large) elements (images, …) – see the second sketch after the comments.

    Comment by Igor Carron | 8, December, 2009
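
Two toy sketches for the points raised in the comments above. First, the Jacobi iteration mentioned in comment 2: approximating the Gauss-Newton step by solving the normal equations with only diagonal inverses (my illustration; it converges when J^T J is diagonally dominant, which is not guaranteed in general).

```python
import numpy as np

def jacobi_gn_step(J, r, iters=20):
    # Approximate the Gauss-Newton step solving (J^T J) delta = -J^T r
    # by Jacobi iteration: only the diagonal of J^T J is ever inverted,
    # so each sweep is cheap compared to a full Levenberg-Marquardt solve.
    # Converges when J^T J is diagonally dominant (not guaranteed!).
    A = J.T @ J
    b = -J.T @ r
    d = np.diag(A)
    delta = b / d                                  # diagonal-only first guess
    for _ in range(iters):
        delta = (b - (A @ delta - d * delta)) / d  # x <- D^{-1} (b - R x)
    return delta
```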

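Second, the random-projection comparison from comment 3 (the dimensions and the Gaussian projection are my choice): pairwise distances between high-dimensional elements survive projection to a much smaller space almost unchanged, so comparison and classification can work on the small sketches.

```python
import numpy as np

rng = np.random.default_rng(1)

d, m, n_pts = 2000, 200, 50                   # ambient dim, sketch dim, points
X = rng.standard_normal((n_pts, d))           # "full" elements (images, ...)
P = rng.standard_normal((d, m)) / np.sqrt(m)  # random projection matrix
Y = X @ P                                     # small sketches of the elements

def sq_dists(Z):
    # Squared pairwise Euclidean distances via the Gram matrix.
    sq = (Z ** 2).sum(1)
    return np.maximum(sq[:, None] + sq[None, :] - 2 * Z @ Z.T, 0.0)

mask = ~np.eye(n_pts, dtype=bool)
ratio = np.sqrt(sq_dists(Y)[mask] / sq_dists(X)[mask])
print("distance ratios, min/max:", ratio.min(), ratio.max())  # both near 1
```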

