Mirror Image

Mostly AR and Stuff

Compressive Sensing and Computer Vision

Thanks to Igor Carron for pointing out this video lecture:
Compressive Sensing for Computer Vision: Hype vs Hope
It starts with a comprehensible explanation of what compressive sensing is about (BTW the wiki article on compressive sensing is wholly inadequate).
Basically it's about treating the lower-dimensional signal (image) as a projection of a mostly-zero high-dimensional vector by a rectangular matrix. It happens that this sparse high-dimensional vector can be recovered if the matrix is almost orthonormal (the Restricted Isometry Property). The Discrete Fourier Transform and random matrices have that property.
This sparse vector can be considered as a classification space for the original signal, so applications of Compressive Sensing to Computer Vision are mostly about classification or recognition. As the methods used by CS are convex and linear programming, they are not run-time methods and would not help much in real-time tracking. There is a piece of CS-inspired advice at the end of the lecture: try replacing L^{2} norm optimization with the L^{1} norm. That could actually be helpful in some cases. If L^{1} is approximated as iteratively reweighted L^{2}, it's essentially the same as robustification of the least squares method.
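To make the last point concrete, here is a minimal sketch (my own, not from the lecture) of L^{1} fitting via iteratively reweighted L^{2} for a linear model Ax ≈ b; the function name and the eps constant are just illustrative placeholders.

import numpy as np

def irls_l1(A, b, n_iter=20, eps=1e-6):
    # Approximate the L1 solution of A x ~= b by iteratively reweighted
    # least squares: each pass solves a weighted L2 problem with row
    # weights ~ 1/|residual|, which downweights outliers - essentially
    # a robustified least squares.
    x = np.linalg.solve(A.T @ A, A.T @ b)      # plain L2 starting point
    for _ in range(n_iter):
        r = A @ x - b
        w = 1.0 / np.sqrt(np.abs(r) + eps)     # w^2 ~ 1/|r|; eps avoids division by zero
        Aw = A * w[:, None]                    # scale each row by its weight
        x = np.linalg.solve(Aw.T @ Aw, Aw.T @ (b * w))
    return x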

7, December, 2009 Posted by | Coding AR | 3 Comments

Testing a new descriptor

Trying a new descriptor, inspired by SURF and SIFT. I want to use gradients instead of Haar wavelet responses of the intensity, but with lower dimensionality than SURF. I also don't need rotation/scale invariance, because I'm using incremental tracking.
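To make the idea concrete, here is a toy sketch of an upright gradient-histogram patch descriptor with no rotation/scale normalization; the cell and bin counts are arbitrary illustration values, not the actual descriptor I'm testing.

import numpy as np

def upright_descriptor(patch, cells=2, bins=4):
    # Toy upright descriptor: per-cell histograms of gradient orientation,
    # weighted by gradient magnitude. With cells=2, bins=4 it is
    # 16-dimensional (SURF is 64-dimensional). No rotation or scale
    # invariance on purpose, since tracking is incremental.
    gy, gx = np.gradient(patch.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2.0 * np.pi)
    h, w = patch.shape
    desc = []
    for i in range(cells):
        for j in range(cells):
            ys = slice(i * h // cells, (i + 1) * h // cells)
            xs = slice(j * w // cells, (j + 1) * w // cells)
            hist, _ = np.histogram(ang[ys, xs], bins=bins,
                                   range=(0.0, 2.0 * np.pi),
                                   weights=mag[ys, xs])
            desc.append(hist)
    desc = np.concatenate(desc)
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc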

20, November, 2009 Posted by | Coding AR | 4 Comments

Switching to OpenCV 2.0 with VS2005

I'm using OpenCV for some tests, and for some reasons (freelance gigs and the Symbian SDK) I'm using MS Visual Studio. As the new and shiny OpenCV 2.0 is out, I decided to switch to it. As it happens, one absolutely has to read the readme buried in the download section before doing anything.
The thing is, OpenCV 2.0 doesn't include lib files for VS. They have to be built by the user.
So here is a step-by-step retelling of the readme:
1. Rename your old OpenCV installation to save it, just in case
2. Download and install OpenCV 2.0a
3. Download and install CMake
4. Reboot (or CMake won't work)
5. Go to C:\Program Files\CMake 2.6\bin and run cmake-gui.exe
6. In the “Where is the source code” field choose your new OpenCV directory (C:\OpenCV)
In “Where to build the binaries” choose a directory for the VS-compiled OpenCV (C:\OpenCV\VS2005)
7. Press the Configure button and choose VS2005 (or whatever) as the build environment
8. Press Generate and VS project will be generated in the C:\OpenCV\VS2005
9. Launch the solution and build it. For the debug build some projects require debug Python libraries. As riseriyo pointed out in the comments, having a Python version other than 2.6 installed can cause problems.
10. Copy *.lib from C:\OpenCV\vs2005\lib\release (or debug) to C:\OpenCV\lib
Copy *.dll from C:\OpenCV\vs2005\bin\release to C:\OpenCV\bin
11. Now reconfigure your application project. The include directory is now “C:\OpenCV\include\opencv” instead of “C:\OpenCV\include”
12. All libraries are renamed from *.lib to *200.lib (cv.lib to cv200.lib), or *200d.lib for debug. Rename them, or change your project settings.

PS If you need Python and still have a problem with cvpy:
From the readme:
Known issues:
=============
1. Python 2.6 bindings for OpenCV are included within the package,
but not installed.
You can copy the subdirectory opencv/Python2.6/Lib/site-packages into
the respective directory of the Python installation.
Here is riseriyo's explanation of how he dealt with the Python problem
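If you just want to sanity-check that the copied bindings are importable, a two-line test in the bundled Python 2.6 is enough. I'm assuming here that the module is exposed as cv; adjust the name to whatever actually landed in site-packages.

# Run with the Python 2.6 that the package targets; if the import
# succeeds, the site-packages copy went to the right place.
import cv
print "OpenCV bindings imported from:", cv.__file__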

PPS A comment by rise about a VS2008 issue:
dll and manifest file version conflict in msvc 2008. the only way i was able to fix this was to completely uninstall msvc 2008 and then do a clean install w/o updating it with the sp1 packages.
See his blog for how he was troubleshooting the issue (for days).

PPPS As Niklas pointed out, if you get an "omp.h not found" error, that means you forgot to turn off OpenMP in CMake.

That's it. The project should compile now. If not, you still have your old OpenCV installation.

20, October, 2009 Posted by | Coding AR | 57 Comments

Still checking Gauss-Newton

Though Levenberg-Marquardt works, I'm still trying to save Gauss-Newton, especially as I've read a paper saying that Gauss-Newton with a dogleg trust region works well for bundle adjustment. I'll probably try direct substitution with a Cholesky rank-1 update and constrained optimization.
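For reference, here is a bare-bones sketch of what a single dogleg step looks like for the least-squares case, in the textbook form from Nocedal & Wright. It is only an illustration, not my bundle adjustment code, and it leaves out the trust-region radius update and the Cholesky rank-1 machinery.

import numpy as np

def dogleg_step(J, r, radius):
    # One dogleg trust-region step for min 0.5*||r(x)||^2, given the
    # Jacobian J and the residual vector r at the current point.
    g = J.T @ r                                  # gradient of 0.5*||r||^2
    p_gn = -np.linalg.solve(J.T @ J, g)          # full Gauss-Newton step
    if np.linalg.norm(p_gn) <= radius:
        return p_gn                              # GN step fits inside the trust region
    Jg = J @ g
    p_sd = -(g @ g) / (Jg @ Jg) * g              # Cauchy (steepest-descent) point
    if np.linalg.norm(p_sd) >= radius:
        return -radius / np.linalg.norm(g) * g   # truncated steepest-descent step
    # Walk from the Cauchy point toward the GN step until the trust-region
    # boundary is hit: solve ||p_sd + t*(p_gn - p_sd)||^2 = radius^2 for t.
    d = p_gn - p_sd
    a, b, c = d @ d, 2.0 * (p_sd @ d), p_sd @ p_sd - radius ** 2
    t = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return p_sd + t * d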

13, October, 2009 Posted by | Coding AR, computer vision | Comments Off on Still checking Gauss-Newton

Solution – free gauge

Looks like the problem was not the large Gauss-Newton residual. The problem was gauge fixing.
Most bundle adjustment algorithms are not inherently gauge invariant (for details check Triggs, "Bundle adjustment – a modern synthesis", chapter 9 "Gauge Freedom"). Practically, that means the method has one or more free parameters which can be chosen arbitrarily (for example scale), but which influence the solution in a non-invariant way (or don't influence the solution at all if the algorithm is gauge invariant). Gauge fixing is the choice of values for those free parameters. There exists at least one gauge invariant bundle adjustment method (a generalization of Levenberg-Marquardt with a complete matrix correction instead of a diagonal-only correction), but it is an order of magnitude more computationally expensive.
I've used fixing the coordinates of one of the 3D points for gauge fixing. Because the method is not gauge invariant, the solution depends on the choice of that fixed point. The problem occurs when the chosen point is "bad" – the feature point detector error for that point is so big that it contradicts the rest of the picture. A mismatch in the point correspondences can cause the same problem.
In my case, fixing the coordinates of the chosen point caused "accumulation" of residual error in that point. This is easy to explain – other points can decrease the reprojection error both by moving/rotating the camera and by shifting their own coordinates, but the fixed point can do it only by moving/rotating the camera. It looks like if the point was "bad" from the start, it can become even worse on the next iteration as the error accumulates – a positive feedback loop causing the method to become unstable. That's of course only my observation, I didn't do any formal analysis.
The obvious solution is to redistribute the residual error among all the points – that means dropping gauge fixing and using a free gauge. A free gauge causes arbitrary scaling of the result, but the result can be rescaled later. However, there is a cost. A free gauge means the matrix is singular – not invertible – and the Gauss-Newton method can not work. So I have to switch to the less efficient and more computationally expensive Levenberg-Marquardt. For now it seems to work.
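For reference, the difference is just the damping term in the normal equations. Gauss-Newton solves
(J^{T}J)\delta = -J^{T}r
while Levenberg-Marquardt solves
(J^{T}J + \lambda \, \mathrm{diag}(J^{T}J))\delta = -J^{T}r
and for \lambda > 0 the damped matrix stays invertible even when J^{T}J is rank-deficient, or nearly so, along the gauge directions.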
PS The free gauge matrix is not singular, just not well-defined, and has a degenerate minimum. So constrained optimization may still work.
PPS Gauge invariance is also an important concept in physics and geometry.
PPPS While messing with quasi-Newton – it seems there is an error in chapter 10.2 of "Numerical Optimization" by Nocedal & Wright. In the secant equation, instead of S_{k+1}(x_{k+1} - x_{k}) = J^{T}_{k+1}r_{k+1} - J^{T}_{k}r_{k} it should be S_{k+1}(x_{k+1} - x_{k}) = J^{T}_{k+1}r_{k+1} - J^{T}_{k}r_{k+1}

11, October, 2009 Posted by | Coding AR, computer vision | Comments Off on Solution – free gauge

Problems

During the tests I've found out that bundle adjustment is failing on some "bad frames". There are two ways to deal with it – reject the bad frames, or try to understand what happened – who set up us a bomb? :-) Any problem is also an opportunity to understand the subject better. For now I suspect Gauss-Newton is failing due to a too-large residual. Just adding the Hessian term to J^{T}J does not help – I'm getting a negative eigenvalue. So now I'm trying quasi-Newton from the excellent book by Nocedal & Wright. If that doesn't help I'll try the hybrid Fletcher method.
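For context, the exact Hessian of the least-squares cost f = \frac{1}{2}\left\| r(x) \right\|^{2} is
\nabla^{2}f = J^{T}J + \sum_{i} r_{i}\nabla^{2}r_{i}
Gauss-Newton keeps only the J^{T}J part, which is a good approximation only when the residuals r_{i} are small. With a large residual the dropped second-order term dominates, and the full Hessian can become indefinite, which would explain the negative eigenvalue.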

PS It looks like the problem was not the large residual.

6, October, 2009 Posted by | Coding AR, Uncategorized | Comments Off on Problems

What’s going on

The code of the markerless tracker is finished for the emulator. It's in a minimal configuration for now, without some optimizations and bells and whistles like combined point-edge pose estimation. Now it's bug squashing and testing with different video feeds for some time. The modified bundle adjustment is the nicest part – it seems pretty stable and robust.

15, September, 2009 Posted by | Coding AR | 2 Comments

Symbian Multimarker Tracking Library

#augmentedreality
A demo version of the binary Symbian multimarker tracking library SMMT is available for download.
The SMMT library is a SLAM multimarker tracker for Symbian. The library works on Symbian S60 9.1 devices like the Nokia N73 and on Symbian 9.2 devices like the Nokia N95 and N82. It may also work on some other later versions. This version supports only landscape 320×240 resolution for an algorithmic reason – that size is used in the optimization.
This is a slightly more advanced version of the tracker used in the AR Tower Defense game.
PS The corrupted file is fixed.

5, September, 2009 Posted by | Coding AR | Comments Off on Symbian Multimarker Tracking Library

Some phase correlation tricks

When doing phase correlation on low-resolution, or extremely low-resolution (like below 32×32), images, noise can become a serious problem, up to making the result completely useless. Fortunately there are some tricks which help in this situation. Some of them I stumbled upon myself, and some I picked up in relevant papers.
The first is obvious – pass the image through a smoothing filter. A pretty simple window (box) filter computed from an integral image can help here.
Second – check the consistency of the result. A histogram of the cross-power spectrum can help here. There is a wheel within the wheel here, which I found out the hard way – discard the lower and right sectors of the cross-power spectrum for the histogram; they are produced from the high-frequency parts of the spectrum and are almost always noise, even if the cross-power spectrum itself is quite sane.
Now for more academic tricks:
You can extract sub-pixel information from the cross-power spectrum. There are a lot of ways to do it – just google/citeseer for it. Some are fast and unreliable, some slow and reliable.
The last one is really nice; I picked it up from the Carneiro & Jepson paper about phase-based features.
For the cross-power spectrum calculation, instead of
\frac{F_{1}\cdot F_{2}^{*}} {\left| F_{1}\cdot F_{2} \right|}
use
\frac{F_{1}\cdot F_{2}^{*}} {a + \left| F_{1}\cdot F_{2} \right|}
where a is a small positive parameter.
This way harmonics with small amplitude are excluded from the calculation. This is pretty logical – near-zero harmonics have undefined phase and are almost pure noise.
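Here is a minimal numpy sketch of phase correlation with this regularized cross-power spectrum. It is my own illustration: the value of a has to be tuned to the image intensity scale, and since \left| F_{1}\cdot F_{2}^{*} \right| = \left| F_{1}\cdot F_{2} \right|, the code uses the conjugate product in the denominator.

import numpy as np

def phase_correlate(img1, img2, a=1e-3):
    # Phase correlation with the regularized cross-power spectrum
    # F1*conj(F2) / (a + |F1*conj(F2)|); the small positive `a`
    # suppresses low-amplitude harmonics whose phase is mostly noise.
    F1 = np.fft.fft2(img1)
    F2 = np.fft.fft2(img2)
    cross = F1 * np.conj(F2)
    R = cross / (a + np.abs(cross))
    corr = np.real(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks in the far half of the array correspond to negative shifts
    if dy > img1.shape[0] // 2:
        dy -= img1.shape[0]
    if dx > img1.shape[1] // 2:
        dx -= img1.shape[1]
    return dy, dx   # estimated integer shift between the two images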

PS
Another problem with extra-low-resolution phase correlation is that sometimes the motion vector appears not as the primary but as a secondary peak, due to ambiguity in the relation between the images. I have yet to find out what to do in this situation…

29, August, 2009 Posted by | Coding AR | Comments Off on Some phase correlation tricks

Importance of phase

Here are some nice pictures illustrating the importance of Fourier phase.

27, August, 2009 Posted by | Coding AR | Comments Off on Importance of phase