## Solution – free gauge

It looks like the problem was not a large Gauss-Newton residue after all. The problem was gauge fixing.

Most bundle adjustment algorithms are not inherently gauge invariant (for details check Triggs et al., “Bundle Adjustment – A Modern Synthesis”, chapter 9, “Gauge Freedom”). Practically that means the method has one or more free parameters which can be chosen arbitrarily (for example scale), but which influence the solution in a non-invariant way (or don’t influence the solution at all if the algorithm is gauge invariant). Gauge fixing is the choice of values for those free parameters. There exists at least one gauge invariant bundle adjustment method (a generalization of Levenberg-Marquardt with a complete matrix correction instead of the diagonal-only correction), but it is an order of magnitude more computationally expensive.

I used fixing the coordinates of one of the 3D points for gauge fixing. Because the method is not gauge invariant, the solution depends on the choice of that fixed point. The problem occurs when the chosen point is “bad” – the feature detector’s error for this point is so big that it contradicts the rest of the picture. A mismatch in the point correspondences can cause the same problem.

In my case, fixing the coordinates of the chosen point caused “accumulation” of residual error in that point. This is easy to explain – other points can decrease the reprojection error both by moving/rotating the camera and by shifting their own coordinates, but the fixed point can do it only by moving/rotating the camera. It looks like if the point was “bad” from the start, it can become even worse in the next iteration as the error accumulates – a positive feedback loop causing the method to become unstable. That’s of course only my observation; I didn’t do any formal analysis.

The obvious solution is to redistribute the residual error among all the points – that means dropping gauge fixing and using a free gauge. A free gauge causes arbitrary scaling of the result, but the result can be rescaled later. However, there is a cost. A free gauge means the normal matrix is singular – not invertible – so the Gauss-Newton method cannot work. So I had to switch to the less efficient and more computationally expensive Levenberg-Marquardt. For now it seems to be working.

PS The free-gauge matrix is not strictly singular, just not well-defined, and the minimum is degenerate. So constrained optimization may still work.

PPS Gauge invariance is also an important concept in physics and geometry.

PPPS While messing with quasi-Newton – it seems there is an error in chapter 10.2 of “Numerical Optimization” by Nocedal & Wright: in the secant equation, instead of … there should be ….

## Problems

During the tests I found out that bundle adjustment is failing on some “bad frames”. There are two ways to deal with it – reject bad frames, or try to understand what happened – who set up us the bomb? :-) Any problem is also an opportunity to understand the subject better. For now I suspect Gauss-Newton is failing due to a too-big residue. Just adding the full Hessian term to the Gauss-Newton approximation does not help – I’m getting negative eigenvalues. So now I’m trying quasi-Newton from the excellent book by Nocedal & Wright. If that doesn’t help I’ll try the hybrid Fletcher method.

## What’s going on

The code of the markerless tracker is finished for the emulator. It’s in a minimal configuration for now, without some optimizations and bells and whistles like combined point–edge pose estimation. Now it’s bug squashing and testing with different video feeds for some time. The modified bundle adjustment is the nicest part – it seems pretty stable and robust.

## Symbian Multimarker Tracking Library

#augmentedreality

A demo version of the binary Symbian multimarker tracking library SMMT is available for download.

The SMMT library is a SLAM multimarker tracker for Symbian. The library works on Symbian S60 9.1 devices like the Nokia N73 and Symbian 9.2 devices like the Nokia N95 and N82. It may also work on some other, later versions. This version supports only landscape 320×240 resolution, for an algorithmic reason – the size is used in the optimization.

This is a slightly more advanced version of the tracker used in the AR Tower Defense game.

PS The corrupted file is fixed.

## Some phase correlation tricks

When doing phase correlation on low-resolution, or extremely low-resolution (like below 32×32) images, noise can become a serious problem, up to making the result completely useless. Fortunately there are some tricks which help in this situation. Some of them I stumbled upon myself, and some I picked up from relevant papers.

The first is obvious – pass the image through a smoothing filter. A pretty simple window (box) filter built from an integral image can help here.

Second – check the consistency of the result. A histogram of the cross-power spectrum can help here. There is a wheel within the wheel here, which I found out the hard way – discard the lower and right sectors of the cross-power spectrum for the histogram; they are produced from the high-frequency parts of the spectrum and are almost always noise, even if the cross-power spectrum itself is quite sane.

Now more academic tricks:

You can extract sub-pixel information from the cross-power spectrum. There are a lot of ways to do it – just google/citeseer for it. Some are fast and unreliable, some slow and reliable.

The last one is really nice; I picked it up from the Carneiro & Jepson paper about phase-based features.

For the cross-power spectrum calculation, instead of

R(f) = F1(f)·F2*(f) / |F1(f)·F2*(f)|

use

R(f) = F1(f)·F2*(f) / ( |F1(f)·F2*(f)| + ε )

where ε is a small positive parameter.

This way harmonics with small amplitude are excluded from the calculation. This is pretty logical – near-zero harmonics have an undefined phase and are almost pure noise.

PS

Another problem with extra-low-resolution phase correlation is that sometimes the motion vector appears not as the primary but as a secondary peak, due to ambiguity of the image relations. I have yet to find out what to do in this situation…

## Importance of phase

Here are some nice pictures illustrating the importance of the Fourier phase.

## Augmented reality on S60 – basics

Blair MacIntyre asked on ARForum how to get video out of the Symbian image data structure and upload it into an OpenGL ES texture. So here is how I did it for my games:

I get the viewfinder RGB bitmap, access its RGB data and use glTexImage2D to upload it into a background texture, which I stretch over the background rectangle. On top of the background rectangle I draw the 3D models.

This code snippet is for a 320×240 screen and OpenGL ES 1.x (WordPress completely screwed the tabs).

PS Here is a binary static library for multimarker tracking for S60 which uses this method.

```cpp
#define VFWIDTH  320
#define VFHEIGHT 240
```

Two textures are used for the background, because texture sizes should be powers of two (2^n): 256×256 and 256×64.

```cpp
#define BKG_TXT_SIZEY0 256
#define BKG_TXT_SIZEY1  64
```

The Nokia camera example can be used as the base.

1. Overwrite the ViewFinderFrameReady function:

```cpp
void CCameraCaptureEngine::ViewFinderFrameReady(CFbsBitmap& aFrame)
{
    iController->ProcessFrame(&aFrame);
}
```

2. iController->ProcessFrame calls CCameraAppBaseContainer->ProcessFrame:

```cpp
void CCameraAppBaseContainer::ProcessFrame(CFbsBitmap* pFrame)
{
    // here the RGB buffer for the background is filled
    iGLEngine->FillRGBBuffer(pFrame);
    // and the greyscale buffer for tracking is filled
    iTracker->FillGreyBuffer(pFrame);

    // tracking
    TBool aCaptureSuccess = iTracker->Capture();

    // physics
    if(aCaptureSuccess)
    {
        iPhEngine->Tick();
    }

    // rendering
    glClear(GL_DEPTH_BUFFER_BIT);
    iGLEngine->SetViewMatrix(iTracker->iViewMatrix);
    iGLEngine->Render();
    iGLEngine->Swap();
}

void CGLengine::Swap()
{
    eglSwapBuffers(m_display, m_surface);
}
```

3. Now, how the buffers are filled. The RGB buffers are filled and bound to the textures:

```cpp
inline unsigned int byte_swap(unsigned int v)
{
    return (v<<16) | (v&0xff00) | ((v>>16)&0xff);
}

void CGLengine::FillRGBBuffer(CFbsBitmap* pFrame)
{
    pFrame->LockHeap(ETrue);
    unsigned int* ptr_vf = (unsigned int*)pFrame->DataAddress();
    FillBkgTxt(ptr_vf);
    pFrame->UnlockHeap(ETrue); // unlock global heap

    BindRGBBuffer(m_bkgTxtID0, m_rgbxBuffer0, BKG_TXT_SIZEY0);
    BindRGBBuffer(m_bkgTxtID1, m_rgbxBuffer1, BKG_TXT_SIZEY1);
}

void CGLengine::FillBkgTxt(unsigned int* ptr_vf)
{
    unsigned int* ptr_dst0 = m_rgbxBuffer0 +
        (BKG_TXT_SIZEY0-VFHEIGHT)*BKG_TXT_SIZEY0;
    unsigned int* ptr_dst1 = m_rgbxBuffer1 +
        (BKG_TXT_SIZEY0-VFHEIGHT)*BKG_TXT_SIZEY1;
    for(int j = 0; j < VFHEIGHT; j++)
        for(int i = 0; i < BKG_TXT_SIZEY0; i++)
        {
            ptr_dst0[i + j*BKG_TXT_SIZEY0] = byte_swap(ptr_vf[i + j*VFWIDTH]);
        }
    ptr_vf += BKG_TXT_SIZEY0;
    for(int j = 0; j < VFHEIGHT; j++)
        for(int i = 0; i < BKG_TXT_SIZEY1; i++)
        {
            ptr_dst1[i + j*BKG_TXT_SIZEY1] = byte_swap(ptr_vf[i + j*VFWIDTH]);
        }
}

void CGLengine::BindRGBBuffer(TInt aTxtID, GLvoid* aPtr, TInt aYSize)
{
    glBindTexture(GL_TEXTURE_2D, aTxtID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, aYSize, BKG_TXT_SIZEY0, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, aPtr);
}
```

4. The greyscale buffer is filled, smoothed via the integral image. (WordPress ate several loop headers and bodies below; the reconstructed lines are marked with comments and follow the pattern of the surviving code.)

```cpp
void CTracker::FillGreyBuffer(CFbsBitmap* pFrame)
{
    pFrame->LockHeap(ETrue);
    unsigned int* ptr = (unsigned int*)pFrame->DataAddress();
    if(m_bIntegralImg)
    {
        // calculate integral image values
        unsigned int rs = 0;
        for(int j = 0; j < VFWIDTH; j++)
        {
            // cumulative row sum
            rs = rs + Raw2Grey(ptr[j]);
            m_integral[j] = rs;
        }
        for(int i = 1; i < VFHEIGHT; i++)
        {
            unsigned int rs = 0;
            for(int j = 0; j < VFWIDTH; j++)
            {
                rs = rs + Raw2Grey(ptr[i*VFWIDTH+j]); // reconstructed line
                m_integral[i*VFWIDTH+j] = m_integral[(i-1)*VFWIDTH+j] + rs;
            }
        }

        // downsample: corner, borders (2x2 areas), interior (4x4 areas)
        iRectData.iData[0] = m_integral[1*VFWIDTH+1]>>2;
        int aX, aY;
        for(aY = 1; aY < MAX_SIZE_Y-1; aY++)
        {
            iRectData.iData[aY*MAX_SIZE_X] = Area(0, 2*aY, 2, 2)>>2; // reconstructed line
            iRectData.iData[MAX_SIZE_X-1 + aY*MAX_SIZE_X] = Area(2*MAX_SIZE_X-2, 2*aY, 2, 2)>>2;
        }
        for(aX = 1; aX < MAX_SIZE_X-1; aX++)
        {
            iRectData.iData[aX] = Area(2*aX, 0, 2, 2)>>2; // reconstructed line
            iRectData.iData[aX + (MAX_SIZE_Y-1)*MAX_SIZE_X] = Area(2*aX, 2*MAX_SIZE_Y-2, 2, 2)>>2;
        }
        for(aY = 1; aY < MAX_SIZE_Y-1; aY++)
            for(aX = 1; aX < MAX_SIZE_X-1; aX++)
            {
                // reconstructed: interior cells average a 4x4 area
                iRectData.iData[aX + aY*MAX_SIZE_X] = Area(2*aX-1, 2*aY-1, 4, 4)>>4;
            }
    }
    else
    {
        // reconstructed: the original bodies of these loops were lost;
        // plain 2x2 averaging / subsampling assumed
        if(V2RX == 2 && V2RY == 2)
            for(int j = 0; j < MAX_SIZE_Y; j++)
                for(int i = 0; i < MAX_SIZE_X; i++)
                {
                    iRectData.iData[i + j*MAX_SIZE_X] =
                        (Raw2Grey(ptr[2*i   + 2*j*VFWIDTH]) +
                         Raw2Grey(ptr[2*i+1 + 2*j*VFWIDTH]) +
                         Raw2Grey(ptr[2*i   + (2*j+1)*VFWIDTH]) +
                         Raw2Grey(ptr[2*i+1 + (2*j+1)*VFWIDTH]))>>2;
                }
        else
            for(int j = 0; j < MAX_SIZE_Y; j++)
                for(int i = 0; i < MAX_SIZE_X; i++)
                {
                    iRectData.iData[i + j*MAX_SIZE_X] =
                        Raw2Grey(ptr[V2RX*i + V2RY*j*VFWIDTH]);
                }
    }
    pFrame->UnlockHeap(ETrue); // unlock global heap
}
```

The background can be rendered like this:

```cpp
#define GLUNITY (1<<16)

static const TInt quadTextureCoords[4 * 2] =
{
    0, GLUNITY,
    0, 0,
    GLUNITY, 0,
    GLUNITY, GLUNITY
};

static const GLubyte quadTriangles[2 * 3] =
{
    0,1,2,
    0,2,3
};

static const GLfloat quadVertices0[4 * 3] =
{
    0, 0, 0,
    0, BKG_TXT_SIZEY0, 0,
    BKG_TXT_SIZEY0, BKG_TXT_SIZEY0, 0,
    BKG_TXT_SIZEY0, 0, 0
};

static const GLfloat quadVertices1[4 * 3] =
{
    BKG_TXT_SIZEY0, 0, 0,
    BKG_TXT_SIZEY0, BKG_TXT_SIZEY0, 0,
    BKG_TXT_SIZEY0+BKG_TXT_SIZEY1, BKG_TXT_SIZEY0, 0,
    BKG_TXT_SIZEY0+BKG_TXT_SIZEY1, 0, 0
};

void CGLengine::RenderBkgQuad()
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrthof(0, VFWIDTH, 0, VFHEIGHT, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glViewport(0, 0, VFWIDTH, VFHEIGHT);
    glClear(GL_DEPTH_BUFFER_BIT);

    glDisable(GL_BLEND);
    glDisable(GL_ALPHA_TEST);
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_CULL_FACE);
    glColor4x(GLUNITY, GLUNITY, GLUNITY, GLUNITY);

    glBindTexture(GL_TEXTURE_2D, m_bkgTxtID0);
    glVertexPointer(3, GL_FLOAT, 0, quadVertices0);
    glTexCoordPointer(2, GL_FIXED, 0, quadTextureCoords);
    glDrawElements(GL_TRIANGLES, 2 * 3, GL_UNSIGNED_BYTE, quadTriangles);

    glBindTexture(GL_TEXTURE_2D, m_bkgTxtID1);
    glVertexPointer(3, GL_FLOAT, 0, quadVertices1);
    glTexCoordPointer(2, GL_FIXED, 0, quadTextureCoords);
    glDrawElements(GL_TRIANGLES, 2 * 3, GL_UNSIGNED_BYTE, quadTriangles);

    glEnable(GL_CULL_FACE);
    glEnable(GL_BLEND);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_ALPHA_TEST);
}
```

## Marker vs markerless (bundle adjustment)

#augmentedreality

Here is a sample of image registration with a fiducial marker (actually the marker I used in my games) vs registration with bundle adjustment. Blue lines are point heights (relative to the marker plane) calculated using marker registration and triangulation. White lines are the same, calculated using (modified) bundle adjustment. Points are extracted with multiscale FAST and matched with log-polar Fourier descriptors for correspondence (actually the SURF descriptor produces the same correspondences).

As you can see, markerless is in no way worse than markers, at least in this example ))).

## Augmented Reality on Android – now with NDK

With the release of the native code kit, Android now looks more like a functional AR platform. The NDK allows native C/C++ libraries, though a complete application still seems to need a Java wrapper. It’s still not clear to me how accessible the video and OpenGL APIs are from the NDK – I have to look into it.

On a related note – there are rumors about a pretty powerful 1GHz phone for Android 2.0.

## Why 3d markerless tracking is difficult for mobile augmented reality

I often hear sentiments from users that they don’t like markers, and they wonder why there is relatively little markerless AR around. First I want to say that there is no excuse for using markers in a static scene with an immobile camera, or if a desktop computer is used. Brute-force methods for tracking like bundle adjustment and the fundamental matrix are well developed and have been used for years and years in computer vision and photogrammetry. However, those methods in their original form could hardly produce an acceptable frame rate on mobile devices. On the other hand, marker trackers on mobile devices can be made fast, stable and robust.

So why are markers easy and markerless not?

The problem is the structure, or “shape”, of the point cloud generated by the feature detector of the markerless tracker. The problem with structure is that the depth coordinates of the points are not easily calculated. That is even more difficult because camera frames taken from a mobile device have a narrow baseline – frames are taken from positions close to one another, so “stereo” depth perception is quite rough. This is called the structure from motion problem.

In the case of the marker tracker, all feature points of the marker are on the same plane, which allows calculating the position of the camera (up to a constant scale factor) from a single frame. Essentially, if all the points produced by the detector are on the same plane – like, for example, points from pictures lying on a table – the problem of *structure from motion* goes away. A planar cloud of points is essentially the same as a set of markers – for example, any four points can be considered a marker, and the same algorithm applies. The *structure from motion* problem is why there is no easy step from a “planar only” tracker to a real 3D markerless tracker.

However, not everything is so bad for the mobile markerless tracker. If the tracking environment is indoors or a cityscape, there are a lot of rectangles, parallel lines and other planar structures around. Those can be used as an initial approximation for one of the structure from motion algorithms, and/or as substitutes for markers.

Another approach, of course, is to find some variation of the structure from motion method which is fast and works on mobile. Some variation of the bundle adjustment algorithm looks most promising to me.

PS The PTAM tracker, which has been ported to the iPhone, uses yet another approach – instead of running bundle adjustment for each frame, bundle adjustment runs asynchronously in a separate thread, and a simpler method is used for frame-to-frame tracking.

PPS And the last thing, from 2011: