Mirror Image

Mostly AR and Stuff

Problems

During the tests I’ve found out that bundle adjustment is failing on some “bad frames”. There are two ways to deal with it – reject the bad frames, or try to understand what happened – who set up us the bomb? :-) Any problem is also an opportunity to understand the subject better. For now I suspect Gauss-Newton is failing due to a too-large residual. Just adding the second-order (Hessian) term to J^{T}J does not help – I’m getting negative eigenvalues. So now I’m trying quasi-Newton methods from the excellent book by Nocedal & Wright. If that doesn’t help I’ll try Fletcher’s hybrid method.
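For reference, the expansion behind all this (standard notation: r_{i} are the residuals, J the Jacobian of r):

f = \frac{1}{2}\sum_{i} r_{i}^{2}, \quad \nabla f = J^{T} r, \quad \nabla^{2} f = J^{T}J + \sum_{i} r_{i} \nabla^{2} r_{i}

Gauss-Newton keeps only the J^{T}J part, so for large residuals the dropped second-order term dominates, and the full Hessian can become indefinite – hence the negative eigenvalues.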

PS It looks like the problem was not the large residual after all

6, October, 2009 Posted by | Coding AR, Uncategorized | , , , , , | Comments Off on Problems

What’s going on

Code of the markerless tracker is finished for the emulator. It’s in a minimal configuration for now, without some optimizations and bells and whistles like combined point-edge pose estimation. Now it’s bug squashing and testing with different video feeds for some time. The modified bundle adjustment is the nicest part – it seems pretty stable and robust.

15, September, 2009 Posted by | Coding AR | , , , , | 2 Comments

Symbian Multimarker Tracking Library

#augmentedreality
A demo version of the binary Symbian multimarker tracking library SMMT is available for download.
The SMMT library is a SLAM multimarker tracker for Symbian. The library works on Symbian S60 9.1 devices like the Nokia N73, and on Symbian 9.2 devices like the Nokia N95 and N82. It may also work on some other, later versions. This version supports only landscape 320×240 resolution, for an algorithmic reason – the size is used in the optimization.
This is a slightly more advanced version of the tracker used in the AR Tower Defense game.
PS The corrupted file is fixed

5, September, 2009 Posted by | Coding AR | , , , , , , , , , | Comments Off on Symbian Multimarker Tracking Library

Some phase correlation tricks

When doing phase correlation on low-resolution, or extremely low-resolution (like below 32×32) images, noise can become a serious problem, up to the point of making the result completely useless. Fortunately there are some tricks which help in this situation. Some of them I stumbled upon myself, and some I picked up from relevant papers.
First is obvious – pass the image through a smoothing filter. A pretty simple window (box) filter computed from an integral image can help here.
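As a reminder, with an integral image S any box sum takes just four lookups, so the window size is essentially free:

\sum_{x_{1} < x \le x_{2},\; y_{1} < y \le y_{2}} I(x,y) = S(x_{2},y_{2}) - S(x_{1},y_{2}) - S(x_{2},y_{1}) + S(x_{1},y_{1})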
Second – check the consistency of the result. A histogram of the cross-power spectrum can help here. There is a wheel within a wheel here, which I found out the hard way – discard the lower and right sectors of the cross-power spectrum for the histogram; they are produced from the high-frequency parts of the spectrum and are almost always noise, even when the cross-power spectrum itself is quite sane.
Now, more academic tricks:
You can extract sub-pixel information from the cross-power spectrum. There are a lot of ways to do it – just google/citeseer for it. Some are fast and unreliable, some slow and reliable.
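For example, the cheapest variant (not from any particular paper) is a parabolic fit through the correlation peak c_{0} and its neighbors c_{\pm 1} along each axis:

\delta = \frac{c_{+1} - c_{-1}}{2\left( 2c_{0} - c_{+1} - c_{-1} \right)}

which gives the sub-pixel offset of the peak along that axis.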
The last one is really nice; I picked it up from the Carneiro & Jepson paper about phase-based features.
For the cross-power spectrum calculation, instead of
\frac{F_{1}\cdot F_{2}^{*}} {\left| F_{1}\cdot F_{2} \right|}
use
\frac{F_{1}\cdot F_{2}^{*}} {a + \left| F_{1}\cdot F_{2} \right|}
where a is a small positive parameter.
This way harmonics with small amplitude are excluded from the calculation. This is pretty logical – near-zero harmonics have undefined phase, which is almost pure noise.
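A minimal sketch of that per-bin calculation (plain C++ with std::complex; the function name and layout are mine, and the FFT itself is assumed to be done elsewhere):

#include <complex>
#include <vector>

// Regularized cross-power spectrum: F1, F2 are the FFTs of the two
// images (flattened to 1D), a is the small positive parameter above.
std::vector< std::complex<float> >
CrossPowerSpectrum(const std::vector< std::complex<float> >& F1,
                   const std::vector< std::complex<float> >& F2,
                   float a)
{
    std::vector< std::complex<float> > R(F1.size());
    for(size_t i = 0; i < F1.size(); i++)
    {
        std::complex<float> num = F1[i] * std::conj(F2[i]);
        // a in the denominator suppresses near-zero harmonics,
        // whose phase is undefined and almost pure noise
        R[i] = num / (a + std::abs(num));
    }
    return R; // inverse FFT of R gives the phase-correlation surface
}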

PS
Another problem with extra-low-resolution phase correlation is that sometimes the motion vector appears not as the primary but as a secondary peak, due to the ambiguity of the relation between the images. I have yet to find out what to do in this situation…

29, August, 2009 Posted by | Coding AR | , , , | Comments Off on Some phase correlation tricks

Importance of phase

Here are some nice pictures illustrating the importance of Fourier phase

27, August, 2009 Posted by | Coding AR | , , | Comments Off on Importance of phase

Bundle Adjustment on Mars with the Rover

Just found out – the Mars Rovers used bundle adjustment for their localization and rock modeling:
“Purpose of algorithm:
To perform autonomous long-range rover localization based on bundle adjustment (BA) technology.
Processing steps of the algorithm include interest point extraction and matching, intra- and inter- stereo tie point selection, automatic cross-site tie point selection by rock extraction, modeling and matching, and bundle adjustment”

6, August, 2009 Posted by | computer vision | , , , , , | Comments Off on Bundle Adjustment on Mars with the Rover

Augmented reality on S60 – basics

Blair MacIntyre asked on the ARForum how to get video out of the Symbian image data structure and upload it into an OpenGL ES texture. So here is how I did it for my games:
I get the viewfinder RGB bitmap, access its RGB data and use glTexImage2D to upload it into the background textures, which I stretch over the background rectangle. On top of the background rectangle I draw the 3D models.
This code snippet is for a 320×240 screen and OpenGL ES 1.x (WordPress completely screwed up the tabs).

PS Here is a binary static library for multimarker tracking for S60 which uses this method.

#define VFWIDTH 320
#define VFHEIGHT 240

Two textures are used for the background, because texture sizes must be powers of two: 256×256 and 256×64.

#define BKG_TXT_SIZEY0 256
#define BKG_TXT_SIZEY1 64

The Nokia camera example can be used as the base.

1. Override the ViewFinderFrameReady function

void CCameraCaptureEngine::ViewFinderFrameReady(CFbsBitmap& aFrame)
{
iController->ProcessFrame(&aFrame);
}

2. iController->ProcessFrame calls CCameraAppBaseContainer::ProcessFrame

void CCameraAppBaseContainer::ProcessFrame(CFbsBitmap* pFrame)
{
// here RGB buffer for background is filled
iGLEngine->FillRGBBuffer(pFrame);
//and greyscale buffer for tracking is filled
iTracker->FillGreyBuffer(pFrame);

// tracking
TBool aCaptureSuccess = iTracker->Capture();
//physics
if(aCaptureSuccess)
{
iPhEngine->Tick();
}
//rendering
glClear( GL_DEPTH_BUFFER_BIT);
iGLEngine->SetViewMatrix(iTracker->iViewMatrix);
iGLEngine->Render();

iGLEngine->Swap();
};
void CGLengine::Swap()
{
eglSwapBuffers( m_display, m_surface);
};

3. Now, how the buffers are filled: the RGB buffers are filled and bound to the textures

inline unsigned int byte_swap(unsigned int v)
{
	// swap the red and blue channels for glTexImage2D's RGBA upload;
	// the unused alpha byte gets garbage, which the background quad ignores
	return (v<<16) | (v&0xff00) | ((v >> 16)&0xff);
}

void CGLengine::FillRGBBuffer(CFbsBitmap* pFrame)
{
pFrame->LockHeap(ETrue);
unsigned int* ptr_vf = (unsigned int*)pFrame->DataAddress();

FillBkgTxt(ptr_vf);

pFrame->UnlockHeap(ETrue); // unlock global heap

BindRGBBuffer(m_bkgTxtID0, m_rgbxBuffer0, BKG_TXT_SIZEY0);
BindRGBBuffer(m_bkgTxtID1, m_rgbxBuffer1, BKG_TXT_SIZEY1);
}

void CGLengine::FillBkgTxt(unsigned int* ptr_vf)
{
	// destination pointers skip the 256-240=16 unused rows of each texture
	unsigned int* ptr_dst0 = m_rgbxBuffer0 +
		(BKG_TXT_SIZEY0-VFHEIGHT)*BKG_TXT_SIZEY0;
	unsigned int* ptr_dst1 = m_rgbxBuffer1 +
		(BKG_TXT_SIZEY0-VFHEIGHT)*BKG_TXT_SIZEY1;

	// left 256 columns of the frame go into the 256x256 texture
	for(int j =0; j < VFHEIGHT; j++)
		for(int i =0; i < BKG_TXT_SIZEY0; i++)
		{
			ptr_dst0[i + j*BKG_TXT_SIZEY0] = byte_swap(ptr_vf[i + j*VFWIDTH]);
		}

	// remaining 64 columns go into the 256x64 texture
	ptr_vf += BKG_TXT_SIZEY0;

	for(int j =0; j < VFHEIGHT; j++)
		for(int i =0; i < BKG_TXT_SIZEY1; i++)
		{
			ptr_dst1[i + j*BKG_TXT_SIZEY1] = byte_swap(ptr_vf[i + j*VFWIDTH]);
		}
}

void CGLengine::BindRGBBuffer(TInt aTxtID, GLvoid* aPtr, TInt aYSize)
{
glBindTexture( GL_TEXTURE_2D, aTxtID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, aYSize, BKG_TXT_SIZEY0, 0,
GL_RGBA, GL_UNSIGNED_BYTE, aPtr);
}

4. The greyscale buffer is filled and smoothed via the integral image:

void CTracker::FillGreyBuffer(CFbsBitmap* pFrame)
{
	pFrame->LockHeap(ETrue);
	unsigned int* ptr = (unsigned int*)pFrame->DataAddress();

	if(m_bIntegralImg)
	{
		// calculate integral image values
		// first row - cumulative row sum
		unsigned int rs = 0;
		for(int j=0; j < VFWIDTH; j++)
		{
			rs = rs + Raw2Grey(ptr[j]);
			m_integral[j] = rs;
		}

		// other rows - row sum plus the value from the row above
		// (WordPress ate parts of the loops below; bodies reconstructed)
		for(int i=1; i < VFHEIGHT; i++)
		{
			unsigned int rs = 0;
			for(int j=0; j < VFWIDTH; j++)
			{
				rs = rs + Raw2Grey(ptr[i*VFWIDTH+j]);
				m_integral[i*VFWIDTH+j] = m_integral[(i-1)*VFWIDTH+j] + rs;
			}
		}

		// downsample 2x with box smoothing; Area(x, y, w, h) is a
		// rectangle sum taken from the integral image
		iRectData.iData[0] = m_integral[1*VFWIDTH+1]>>2;

		int aX, aY;

		// left and right border columns - 2x2 boxes
		// (the first Area() arguments per loop are assumed)
		for(aY = 1; aY < MAX_SIZE_Y-1; aY++)
		{
			iRectData.iData[aY*MAX_SIZE_X] = Area(0, 2*aY, 2, 2)>>2;
			iRectData.iData[MAX_SIZE_X-1 + aY*MAX_SIZE_X] = Area(2*MAX_SIZE_X-2, 2*aY, 2, 2)>>2;
		}

		// top and bottom border rows - 2x2 boxes
		for(aX = 1; aX < MAX_SIZE_X-1; aX++)
		{
			iRectData.iData[aX] = Area(2*aX, 0, 2, 2)>>2;
			iRectData.iData[aX + (MAX_SIZE_Y-1)*MAX_SIZE_X] = Area(2*aX, 2*MAX_SIZE_Y-2, 2, 2)>>2;
		}

		// interior - 4x4 boxes, >>4 divides by 16
		for(aY = 1; aY < MAX_SIZE_Y-1; aY++)
			for(aX = 1; aX < MAX_SIZE_X-1; aX++)
			{
				iRectData.iData[aX + aY*MAX_SIZE_X] = Area(2*aX-1, 2*aY-1, 4, 4)>>4;
			}
	}
	else
	{
		// no smoothing - plain downsampling (bodies reconstructed:
		// averaging 2x2 blocks, or point sampling for other ratios)
		if(V2RX == 2 && V2RY == 2)
			for(int j =0; j < MAX_SIZE_Y; j++)
				for(int i =0; i < MAX_SIZE_X; i++)
				{
					iRectData.iData[i + j*MAX_SIZE_X] =
						( Raw2Grey(ptr[2*i   +  2*j   *VFWIDTH]) +
						  Raw2Grey(ptr[2*i+1 +  2*j   *VFWIDTH]) +
						  Raw2Grey(ptr[2*i   + (2*j+1)*VFWIDTH]) +
						  Raw2Grey(ptr[2*i+1 + (2*j+1)*VFWIDTH]) )>>2;
				}
		else
			for(int j =0; j < MAX_SIZE_Y; j++)
				for(int i =0; i < MAX_SIZE_X; i++)
				{
					iRectData.iData[i + j*MAX_SIZE_X] = Raw2Grey(ptr[V2RX*i + V2RY*j*VFWIDTH]);
				}
	}

	pFrame->UnlockHeap(ETrue); // unlock global heap
}

The background can be rendered like this:

#define GLUNITY (1<<16)
static const TInt quadTextureCoords[4 * 2] =
{
0, GLUNITY,
0, 0,
GLUNITY, 0,
GLUNITY, GLUNITY
};

static const GLubyte quadTriangles[2 * 3] =
{
0,1,2,
0,2,3
};

static const GLfloat quadVertices0[4 * 3] =
{
0, 0, 0,
0, BKG_TXT_SIZEY0, 0,
BKG_TXT_SIZEY0, BKG_TXT_SIZEY0, 0,
BKG_TXT_SIZEY0, 0, 0
};

static const GLfloat quadVertices1[4 * 3] =
{
BKG_TXT_SIZEY0, 0, 0,
BKG_TXT_SIZEY0, BKG_TXT_SIZEY0, 0,
BKG_TXT_SIZEY0+BKG_TXT_SIZEY1, BKG_TXT_SIZEY0, 0,
BKG_TXT_SIZEY0+BKG_TXT_SIZEY1, 0, 0
};

void CGLengine::RenderBkgQuad()
{
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0, VFWIDTH, 0, VFHEIGHT, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glViewport(0, 0, VFWIDTH, VFHEIGHT);

glClear( GL_DEPTH_BUFFER_BIT);
glDisable(GL_BLEND);
glDisable(GL_ALPHA_TEST);
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);

glColor4x(GLUNITY, GLUNITY, GLUNITY, GLUNITY);

glBindTexture( GL_TEXTURE_2D, m_bkgTxtID0);
glVertexPointer( 3, GL_FLOAT, 0, quadVertices0 );
glTexCoordPointer( 2, GL_FIXED, 0, quadTextureCoords );
glDrawElements( GL_TRIANGLES, 2 * 3, GL_UNSIGNED_BYTE, quadTriangles );

glBindTexture( GL_TEXTURE_2D, m_bkgTxtID1);
glVertexPointer( 3, GL_FLOAT, 0, quadVertices1 );
glTexCoordPointer( 2, GL_FIXED, 0, quadTextureCoords );
glDrawElements( GL_TRIANGLES, 2 * 3, GL_UNSIGNED_BYTE, quadTriangles );

glEnable(GL_CULL_FACE);
glEnable(GL_BLEND);
glEnable(GL_DEPTH_TEST);
glEnable(GL_ALPHA_TEST);

}

27, July, 2009 Posted by | Coding AR | , , , , , , , , | Comments Off on Augmented reality on S60 – basics

Why 3d markerless tracking is difficult for mobile augmented reality

I often hear from users that they don’t like markers, and they wonder why there is relatively little markerless AR around. First I want to say that there is no excuse for using markers in a static scene with an immobile camera, or if a desktop computer is used. Brute-force methods for tracking, like bundle adjustment and fundamental matrix estimation, are well developed and have been used for years and years in computer vision and photogrammetry. However, those methods in their original form can hardly produce an acceptable frame rate on mobile devices. On the other hand, marker trackers on mobile devices can be made fast, stable and robust.
So why are markers easy and markerless is not?
The problem is the structure, or “shape”, of the point cloud generated by the feature detector of the markerless tracker. The problem with the structure is that the depth coordinates of the points are not easily calculated. It is even more difficult because camera frames taken from a mobile device have a narrow baseline – the frames are taken from positions close to one another, so “stereo” depth perception is quite rough. This is called the structure from motion problem.
In the case of a marker tracker, all feature points of the markers are on the same plane, and that allows calculating the position of the camera (up to a constant scale factor) from a single frame. Essentially, if all the points produced by the detector are on the same plane – like, for example, points from pictures lying on a table – the structure from motion problem goes away. A planar cloud of points is essentially the same as a set of markers – for example, any four points could be considered a marker and the same algorithm applies. The structure from motion problem is why there is no easy step from a “planar only” tracker to a real 3D markerless tracker.
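A sketch of why the planar case is easy (assuming a calibrated camera with intrinsics matrix K): points on the plane z=0 project through a homography H = K \left[ r_{1} \; r_{2} \; t \right], which can be estimated from just four point correspondences. The pose then comes from the columns h_{i} of H:

\lambda = 1 / \left\| K^{-1}h_{1} \right\|, \quad r_{1} = \lambda K^{-1}h_{1}, \quad r_{2} = \lambda K^{-1}h_{2}, \quad r_{3} = r_{1} \times r_{2}, \quad t = \lambda K^{-1}h_{3}

(with R = \left[ r_{1} \; r_{2} \; r_{3} \right] re-orthogonalized afterwards, since noise breaks orthogonality).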
However, not everything is so bad for a mobile markerless tracker. If the tracking environment is indoors, or a cityscape, there are a lot of rectangles, parallel lines and other planar structures around. Those can be used as an initial approximation for one of the structure from motion algorithms, and/or as substitutes for markers.
Another approach, of course, is to find some variation of a structure from motion method which is fast and works on mobile. Some variation of the bundle adjustment algorithm looks most promising to me.
PS The PTAM tracker, which has been ported to the iPhone, uses yet another approach – instead of running bundle adjustment for each frame, bundle adjustment runs asynchronously in a separate thread, and a simpler method is used for frame-to-frame tracking.
PPS And the last thing, from 2011:

30, March, 2009 Posted by | Coding AR | , , , , , , , , | 4 Comments

Tracking planes in the city

In relation to tracking cityscapes I did some planar segmentation tests. I segmented FAST-generated corners with a simple 5-point projective invariant.
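For reference, one standard projective invariant of five coplanar points (a sketch, not necessarily the exact form I used): with p_{i} the homogeneous image coordinates and \left| p_{i} \, p_{j} \, p_{k} \right| a 3×3 determinant,

I = \frac{\left| p_{4} \, p_{3} \, p_{1} \right| \left| p_{5} \, p_{2} \, p_{1} \right|}{\left| p_{4} \, p_{2} \, p_{1} \right| \left| p_{5} \, p_{3} \, p_{1} \right|}

is unchanged by any projective transformation of the image, so points from the same plane should give consistent values.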
In some cases the 5-point invariant gives a rough approximation:
[image: planar segments]
In some cases the outliers are quite bad – some points have very close projective invariants but still lie in different planes.
[image: bad segment]
So the simple method doesn’t quite work…

19, March, 2009 Posted by | Coding AR, computer vision | , , , , , , , , , | 4 Comments

Oriented descriptors vs upright

I have tested oriented SURF descriptors vs upright ones on approximately horizontally oriented camera images, and got lower feature density for oriented than for upright. The repeatability of the oriented descriptors was worse too…

17, March, 2009 Posted by | Coding AR | , , , | 2 Comments