Mirror Image

Mostly AR and Stuff

Solution – free gauge

It looks like the problem was not the large Gauss-Newton residue. The problem was gauge fixing.
Most bundle adjustment algorithms are not inherently gauge invariant (for details see Triggs, “Bundle Adjustment – A Modern Synthesis”, chapter 9, “Gauge Freedom”). In practice that means the method has one or more free parameters which can be chosen arbitrarily (for example scale), but which influence the solution in a non-invariant way (or don’t influence the solution at all if the algorithm is gauge invariant). Gauge fixing is the choice of values for those free parameters. There exists at least one gauge invariant bundle adjustment method (a generalization of Levenberg-Marquardt with a full matrix correction instead of a diagonal-only correction), but it is an order of magnitude more computationally expensive.
I used fixing the coordinates of one of the 3D points as my gauge fixing. Because the method is not gauge invariant, the solution depends on the choice of that fixed point. The problem occurs when the chosen point is “bad” – the feature detector error for that point is so big that it contradicts the rest of the picture. A mismatch in the point correspondence can cause the same problem.
In my case, fixing the coordinates of the chosen point caused “accumulation” of residual error in that point. This is easy to explain – other points can decrease the reprojection error both by moving/rotating the camera and by shifting their own coordinates, but the fixed point can do it only by moving/rotating the camera. It looks like if the point was “bad” from the start it can become even worse on the next iteration as the error accumulates – a positive feedback loop causing the method to become unstable. That is, of course, only my observation; I didn’t do any formal analysis.
The obvious solution is to redistribute the residual error among all the points – that means dropping gauge fixing and using a free gauge. The free gauge causes arbitrary scaling of the result, but the result can be rescaled later. However, there is a cost. With a free gauge the normal matrix is singular – not invertible – and the Gauss-Newton method cannot work. So I have to switch to the less efficient and more computationally expensive Levenberg-Marquardt. For now it seems to be working.
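To make the difference concrete (my notation, not tied to any particular implementation): with residual vector r and Jacobian J, the Gauss-Newton step solves the normal equations

(J^{T}J)\delta = -J^{T}r

which requires J^{T}J to be invertible, while Levenberg-Marquardt damps them,

(J^{T}J + \lambda I)\delta = -J^{T}r, \quad \lambda > 0

so the system stays positive definite even when the free gauge makes J^{T}J rank-deficient.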
PS The free gauge matrix is not singular, just not well-defined, and has a degenerate minimum. So constrained optimization may still work.
PPS Gauge invariance is also an important concept in physics and geometry.
PPPS While messing with quasi-Newton – it seems there is an error in chapter 10.2 of “Numerical Optimization” by Nocedal & Wright. In the secant equation, instead of S_{k+1}(x_{k+1} - x_{k}) = J^{T}_{k+1}r_{k+1} - J^{T}_{k}r_{k} it should be S_{k+1}(x_{k+1} - x_{k}) = J^{T}_{k+1}r_{k+1} - J^{T}_{k}r_{k+1}
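My reasoning, for the record (the standard structured quasi-Newton argument, not quoted from the book): S_{k} approximates the second-order part \sum_{i} r_{i}\nabla^{2}r_{i} of the least-squares Hessian, and the secant condition comes from differencing J(x)^{T}r with the residual held at its newest value:

S_{k+1}(x_{k+1}-x_{k}) \approx (J_{k+1}-J_{k})^{T}r_{k+1} = J^{T}_{k+1}r_{k+1} - J^{T}_{k}r_{k+1}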

11, October, 2009 Posted by | Coding AR, computer vision | , , , , , | Comments Off on Solution – free gauge

Problems

During the tests I found out that bundle adjustment is failing on some “bad frames”. There are two ways to deal with it – reject bad frames, or try to understand what happened – who set up us a bomb? :-) Any problem is also an opportunity to understand the subject better. For now I suspect Gauss-Newton is failing due to a too-large residue. Just adding the Hessian to J^{T}J does not help – I’m getting a negative eigenvalue. So now I’m trying quasi-Newton from the excellent book by Nocedal & Wright. If that does not help I’ll try the hybrid Fletcher method.
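For reference (a standard fact about nonlinear least squares, my summary): the full Hessian of f(x)=\frac{1}{2}\left\| r(x) \right\|^{2} is

\nabla^{2}f = J^{T}J + \sum_{i} r_{i}\nabla^{2}r_{i}

Gauss-Newton keeps only the first term; when the residuals r_{i} are large, the second term can dominate and make the Hessian indefinite, which would explain the negative eigenvalue.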

PS It looks like the problem was not the large residue

6, October, 2009 Posted by | Coding AR, Uncategorized | , , , , , | Comments Off on Problems

Some phase correlation tricks

When doing phase correlation on low-resolution, or extremely low-resolution (below 32×32) images, noise can become a serious problem, up to making the result completely useless. Fortunately there are some tricks which help in this situation. Some of them I stumbled upon myself, and some I picked up from relevant papers.
The first is obvious – pass the image through a smoothing filter. A pretty simple window (box) filter computed from an integral image can help here.
The second – check the consistency of the result. A histogram of the cross-power spectrum can help here. There is a wheel within the wheel here, which I found out the hard way – discard the lower and right sectors of the cross-power spectrum for the histogram; they are produced from the high-frequency parts of the spectrum and are almost always noise, even if the cross-power spectrum itself looks quite sane.
Now the more academic tricks:
You can extract sub-pixel information from the cross-power spectrum. There are a lot of ways to do it – just google/CiteSeer for it. Some are fast and unreliable, some slow and reliable.
The last one is really nice; I picked it up from the Carneiro & Jepson paper about phase-based features.
For the cross-power spectrum calculation, instead of
\frac{F_{1}\cdot F_{2}^{*}} {\left| F_{1}\cdot F_{2} \right|}
use
\frac{F_{1}\cdot F_{2}^{*}} {a + \left| F_{1}\cdot F_{2} \right|}
where a is a small positive parameter.
This way, harmonics with small amplitude are excluded from the calculation. This is pretty logical – near-zero harmonics have undefined phase, so they are almost pure noise.
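A minimal sketch of that normalization (my own illustration, not code from the post; it assumes the DFTs F1 and F2 of the two images are already computed with whatever FFT you use):

#include <complex>
#include <vector>

// Regularized cross-power spectrum: F1 * conj(F2) / (a + |F1 * conj(F2)|).
// Harmonics with near-zero amplitude (undefined phase, mostly noise) are
// suppressed instead of being normalized up to unit magnitude.
std::vector< std::complex<float> > RegularizedCrossPowerSpectrum(
    const std::vector< std::complex<float> >& F1,
    const std::vector< std::complex<float> >& F2,
    float a) // small positive parameter
{
    std::vector< std::complex<float> > R(F1.size());
    for (size_t k = 0; k < F1.size(); ++k)
    {
        std::complex<float> c = F1[k] * std::conj(F2[k]);
        R[k] = c / (a + std::abs(c));
    }
    return R; // the inverse FFT of R gives the correlation surface; its peak is the shift
}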

PS
Another problem with extremely low-resolution phase correlation is that sometimes the motion vector appears not as the primary but as a secondary peak, due to the ambiguity of the image relations. I have yet to find out what to do in this situation…

29, August, 2009 Posted by | Coding AR | , , , | Comments Off on Some phase correlation tricks

Importance of phase

Here are some nice pictures illustrating the importance of the Fourier phase.

27, August, 2009 Posted by | Coding AR | , , | Comments Off on Importance of phase

Bundle Adjustment on Mars with a Rover

Just found out – the Mars Rovers used bundle adjustment for their localization and rock modeling:
“Purpose of algorithm:
To perform autonomous long-range rover localization based on bundle adjustment (BA) technology.
Processing steps of the algorithm include interest point extraction and matching, intra- and inter- stereo tie point selection, automatic cross-site tie point selection by rock extraction, modeling and matching, and bundle adjustment”

6, August, 2009 Posted by | computer vision | , , , , , | Comments Off on Bundle Adjustment on Mars with a Rover

Augmented reality on S60 – basics

Blair MacIntyre asked on the ARForum how to get video out of the Symbian image data structure and upload it into an OpenGL ES texture. So here is how I did it for my games:
I get the viewfinder RGB bitmap, access its RGB data and use glTexImage2D to upload it into a background texture, which I stretch over the background rectangle. On top of the background rectangle I draw the 3D models.
This code snippet is for a 320×240 screen and OpenGL ES 1.x (WordPress completely screwed up the tabs).

PS Here is a binary static library for multimarker tracking on S60 which uses that method.

#define VFWIDTH 320
#define VFHEIGHT 240

Two textures are used for the background, because texture sizes must be powers of two (2^n): 256×256 and 256×64.

#define BKG_TXT_SIZEY0 256
#define BKG_TXT_SIZEY1 64

The Nokia camera example can be used as the base.

1. Override the ViewFinderFrameReady function

void CCameraCaptureEngine::ViewFinderFrameReady(CFbsBitmap& aFrame)
{
iController->ProcessFrame(&aFrame);
}

2. iController->ProcessFrame calls CCameraAppBaseContainer->ProcessFrame

void CCameraAppBaseContainer::ProcessFrame(CFbsBitmap* pFrame)
{
// here RGB buffer for background is filled
iGLEngine->FillRGBBuffer(pFrame);
//and greyscale buffer for tracking is filled
iTracker->FillGreyBuffer(pFrame);

//tracking
TBool aCaptureSuccess = iTracker->Capture();
//physics
if(aCaptureSuccess)
{
iPhEngine->Tick();
}
//rendering
glClear( GL_DEPTH_BUFFER_BIT);
iGLEngine->SetViewMatrix(iTracker->iViewMatrix);
iGLEngine->Render();

iGLEngine->Swap();
};
void CGLengine::Swap()
{
eglSwapBuffers( m_display, m_surface);
};

3. Now, how the buffers are filled: the RGB buffers are filled and bound to the textures

// swap the red and blue bytes of a 32-bit pixel (the alpha byte is not preserved,
// but it is ignored by the background rendering anyway)
inline unsigned int byte_swap(unsigned int v)
{
	return (v<<16) | (v&0xff00) | ((v >> 16)&0xff);
}

void CGLengine::FillRGBBuffer(CFbsBitmap* pFrame)
{
pFrame->LockHeap(ETrue);
unsigned int* ptr_vf = (unsigned int*)pFrame->DataAddress();

FillBkgTxt(ptr_vf);

pFrame->UnlockHeap(ETrue); // unlock global heap

BindRGBBuffer(m_bkgTxtID0, m_rgbxBuffer0, BKG_TXT_SIZEY0);
BindRGBBuffer(m_bkgTxtID1, m_rgbxBuffer1, BKG_TXT_SIZEY1);
}

void CGLengine::FillBkgTxt(unsigned int* ptr_vf)
{
unsigned int* ptr_dst0 = m_rgbxBuffer0 +
(BKG_TXT_SIZEY0-VFHEIGHT)*BKG_TXT_SIZEY0;
unsigned int* ptr_dst1 = m_rgbxBuffer1 +
(BKG_TXT_SIZEY0-VFHEIGHT)*BKG_TXT_SIZEY1;

for(int j =0; j < VFHEIGHT; j++)
for(int i =0; i < BKG_TXT_SIZEY0; i++)
{
ptr_dst0[i + j*BKG_TXT_SIZEY0] = byte_swap(ptr_vf[i + j*VFWIDTH]);
}

ptr_vf += BKG_TXT_SIZEY0;

for(int j =0; j < VFHEIGHT; j++)
for(int i =0; i < BKG_TXT_SIZEY1; i++)
{
ptr_dst1[i + j*BKG_TXT_SIZEY1] = byte_swap(ptr_vf[i + j*VFWIDTH]);
}

}

void CGLengine::BindRGBBuffer(TInt aTxtID, GLvoid* aPtr, TInt aYSize)
{
glBindTexture( GL_TEXTURE_2D, aTxtID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, aYSize, BKG_TXT_SIZEY0, 0,
GL_RGBA, GL_UNSIGNED_BYTE, aPtr);
}

4. The greyscale buffer is filled, smoothed via the integral image:

void CTracker::FillGreyBuffer(CFbsBitmap* pFrame)
{

pFrame->LockHeap(ETrue);
unsigned int* ptr = (unsigned int*)pFrame->DataAddress();

if(m_bIntegralImg)
{
// calculate integral image values

unsigned int rs = 0;
for(int j=0; j < VFWIDTH; j++)
{
// cumulative row sum
rs = rs+ Raw2Grey(ptr[j]);
m_integral[j] = rs;
}

for(int i=1; i< VFHEIGHT; i++)
{
unsigned int rs = 0;
for(int j=0; j < VFWIDTH; j++)
{
// cumulative row sum plus the value accumulated in the row above
rs = rs + Raw2Grey(ptr[i*VFWIDTH+j]);
m_integral[i*VFWIDTH+j] = m_integral[(i-1)*VFWIDTH+j] + rs;
}
}
}

// downsample the viewfinder into the MAX_SIZE_X x MAX_SIZE_Y grey buffer,
// averaging small windows of the integral image (2x2 windows on the border,
// 4x4 windows in the interior)

iRectData.iData[0] = m_integral[1*VFWIDTH+1]>>2;

int aX, aY;

for(aY = 1; aY < MAX_SIZE_Y-1; aY++)
{
iRectData.iData[aY*MAX_SIZE_X] = Area(0, 2*aY, 2, 2)>>2;
iRectData.iData[MAX_SIZE_X-1 + aY*MAX_SIZE_X] = Area(2*MAX_SIZE_X-2, 2*aY, 2, 2)>>2;
}

for(aX = 1; aX < MAX_SIZE_X-1; aX++)
{
iRectData.iData[aX] = Area(2*aX, 0, 2, 2)>>2;
iRectData.iData[aX + (MAX_SIZE_Y-1)*MAX_SIZE_X] = Area(2*aX, 2*MAX_SIZE_Y-2, 2, 2)>>2;
}

for(aY = 1; aY < MAX_SIZE_Y-1; aY++)
for(aX = 1; aX < MAX_SIZE_X-1; aX++)
{
iRectData.iData[aX + aY*MAX_SIZE_X] = Area(2*aX-1, 2*aY-1, 4, 4)>>4;
}

}
else
{
// no smoothing: plain decimation of the viewfinder frame

if(V2RX == 2 && V2RY ==2)
for(int j =0; j < MAX_SIZE_Y; j++)
for(int i =0; i < MAX_SIZE_X; i++)
{
// average a 2x2 block of viewfinder pixels
iRectData.iData[i + j*MAX_SIZE_X] =
( Raw2Grey(ptr[2*i + 2*j*VFWIDTH]) + Raw2Grey(ptr[2*i+1 + 2*j*VFWIDTH]) +
Raw2Grey(ptr[2*i + (2*j+1)*VFWIDTH]) + Raw2Grey(ptr[2*i+1 + (2*j+1)*VFWIDTH]) )>>2;
}
else
for(int j =0; j < MAX_SIZE_Y; j++)
for(int i =0; i < MAX_SIZE_X; i++)
{
// nearest-pixel decimation with the viewfinder-to-buffer scale factors
iRectData.iData[i + j*MAX_SIZE_X] = Raw2Grey(ptr[V2RX*i + V2RY*j*VFWIDTH]);
}
}

pFrame->UnlockHeap(ETrue); // unlock global heap

}
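The Area() helper used above is not shown in the post; here is a minimal sketch of what such a “rectangle sum from the integral image” function typically looks like (the signature and member names are my assumption, matching the calls above – the real Area() in the library may differ):

// Sum of grey values over the w x h rectangle whose top-left corner is (x, y),
// taken from the integral image in O(1) with the usual four-corner formula.
unsigned int CTracker::Area(int x, int y, int w, int h)
{
const unsigned int* I = m_integral;
unsigned int A = (x > 0 && y > 0) ? I[(y-1)*VFWIDTH + (x-1)] : 0;
unsigned int B = (y > 0) ? I[(y-1)*VFWIDTH + (x+w-1)] : 0;
unsigned int C = (x > 0) ? I[(y+h-1)*VFWIDTH + (x-1)] : 0;
unsigned int D = I[(y+h-1)*VFWIDTH + (x+w-1)];
return D - B - C + A;
}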

The background can be rendered like this:

#define GLUNITY (1<<16)
static const TInt quadTextureCoords[4 * 2] =
{
0, GLUNITY,
0, 0,
GLUNITY, 0,
GLUNITY, GLUNITY
};

static const GLubyte quadTriangles[2 * 3] =
{
0,1,2,
0,2,3
};

static const GLfloat quadVertices0[4 * 3] =
{
0, 0, 0,
0, BKG_TXT_SIZEY0, 0,
BKG_TXT_SIZEY0, BKG_TXT_SIZEY0, 0,
BKG_TXT_SIZEY0, 0, 0
};

static const GLfloat quadVertices1[4 * 3] =
{
BKG_TXT_SIZEY0, 0, 0,
BKG_TXT_SIZEY0, BKG_TXT_SIZEY0, 0,
BKG_TXT_SIZEY0+BKG_TXT_SIZEY1, BKG_TXT_SIZEY0, 0,
BKG_TXT_SIZEY0+BKG_TXT_SIZEY1, 0, 0
};

void CGLengine::RenderBkgQuad()
{
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0, VFWIDTH, 0, VFHEIGHT, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glViewport(0, 0, VFWIDTH, VFHEIGHT);

glClear( GL_DEPTH_BUFFER_BIT);
glDisable(GL_BLEND);
glDisable(GL_ALPHA_TEST);
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);

glColor4x(GLUNITY, GLUNITY, GLUNITY, GLUNITY);

glBindTexture( GL_TEXTURE_2D, m_bkgTxtID0);
glVertexPointer( 3, GL_FLOAT, 0, quadVertices0 );
glTexCoordPointer( 2, GL_FIXED, 0, quadTextureCoords );
glDrawElements( GL_TRIANGLES, 2 * 3, GL_UNSIGNED_BYTE, quadTriangles );

glBindTexture( GL_TEXTURE_2D, m_bkgTxtID1);
glVertexPointer( 3, GL_FLOAT, 0, quadVertices1 );
glTexCoordPointer( 2, GL_FIXED, 0, quadTextureCoords );
glDrawElements( GL_TRIANGLES, 2 * 3, GL_UNSIGNED_BYTE, quadTriangles );

glEnable(GL_CULL_FACE);
glEnable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
glEnable(GL_ALPHA_TEST);

}
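One note: RenderBkgQuad assumes texturing and the vertex/texcoord client arrays were enabled beforehand. A typical one-time setup (my sketch, not from the original code; InitBkgState is a hypothetical helper name) would be:

void CGLengine::InitBkgState() // hypothetical helper, called once after the textures are created
{
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

// bilinear filtering for both background textures
glBindTexture(GL_TEXTURE_2D, m_bkgTxtID0);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_2D, m_bkgTxtID1);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}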

27, July, 2009 Posted by | Coding AR | , , , , , , , , | Comments Off on Augmented reality on S60 – basics

Video Surveillance is Useless

Found this interesting slide presentation from Peter Kovesi, inventor of the phase congruency edge detector. It basically says that at the current tech level video surveillance is useless for face identification. What follows is that it is actually harmful, due to the wrong impression of its reliability.
Also on his page – some fun animations, or How to Animate Impossible Objects.
PS The Fourier phase approach to feature detection looks really promising, especially if some low-computational-cost modification can be found.

18, July, 2009 Posted by | computer vision | , , , , | 3 Comments

Computer vision accelerator in FPGA for smartphone

#augmentedreality
Tony Chun from Intel’s integrated platform research group talks about a “methodology” for putting computer vision algorithms (or speech recognition) into hardware. He specifically mentions smartphones and mobile augmented reality. Tony suggests that this accelerator should be programmable, with some software language to make it flexible. It’s not clear if he is talking about an FPGA prototype, or about putting an FPGA into a smartphone. The idea of using an FPGA chip for mobile CV tasks is not new; for example, in this LinkedIn discussion Stanislav Sinyagin suggested some specific hardware to play with.

Thanks to artimes.rouli.net for pointing this one out.

7, July, 2009 Posted by | Augmented Reality | , , , | 2 Comments

Air-fueled batteries

As I have already written, I think battery life is the key to adoption of high-performance mobile devices strong enough for advanced image processing and real-time augmented reality.
Here is some news – Technology Review reports that there are some advances in lithium-air batteries. Air-fueled batteries are somewhat similar to fuel-air explosives: like FAE, air-fueled batteries do not store the oxidizer in themselves, but use oxidizer from the air. AFB should allow ten times the energy density of common batteries.
Lithium AFB are being developed by IBM and Hitachi, and could use not only lithium but also zinc and aluminium.

28, June, 2009 Posted by | Mobile | , , | Comments Off on Air-fueled batteries

Nokia considers Maemo Linux as an alternative to Symbian?

As CNET points out, Symbian is not mentioned in the joint Intel-Nokia press release about 3G and open source software collaboration. Only Maemo and Moblin are mentioned. Symbian, though also open sourced, is left out. It could be that Nokia is less enthusiastic about Symbian OS now. Existing Symbian OS UIs are inferior to the iPhone UI, Symbian OS third-party applications are not getting enough traction, and most Symbian users are not even aware they exist. Symbian Signed restrictions are not helping either. BTW, most Symbian users are not even aware they are Symbian users.
So Nokia seems to be hedging its bets with Maemo Linux. CNET thinks Nokia could switch to Maemo for high-end devices and leave Symbian for the mid-range.

25, June, 2009 Posted by | Mobile, Symbian | , , , , , | 1 Comment