Mirror Image

Mostly AR and Stuff

Augmented reality on S60 – basics

Blair MacIntyre asked on ARForum how to get video out of the Symbian image data structure and upload it into an OpenGL ES texture. So here is how I do it in my games:
I get the viewfinder RGB bitmap, access its RGB data and use glTexImage2D to upload it into a background texture, which I stretch over a background rectangle. On top of the background rectangle I draw the 3D models.
This code snippet is for a 320×240 screen and OpenGL ES 1.x (WordPress completely screwed the tabs).

PS Here is a binary static library for multimarker tracking on S60 which uses this method.

#define VFWIDTH 320
#define VFHEIGHT 240

Two textures are used for the background, because texture sizes should be powers of two: 256×256 and 64×256 (together covering the 320-pixel width).

#define BKG_TXT_SIZEY0 256
#define BKG_TXT_SIZEY1 64
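
The post doesn't show texture creation; here is a minimal sketch of how the two background textures could be set up, assuming m_bkgTxtID0/m_bkgTxtID1 are GLuint members (the filter settings are my assumption, not from the original):

// Hedged sketch, not from the original post: one-time creation of the
// two power-of-two background textures used below.
void CGLengine::CreateBkgTextures()
{
    glGenTextures(1, &m_bkgTxtID0);
    glBindTexture(GL_TEXTURE_2D, m_bkgTxtID0);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenTextures(1, &m_bkgTxtID1);
    glBindTexture(GL_TEXTURE_2D, m_bkgTxtID1);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}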

The Nokia camera example could be used as the base.

1. Override the ViewFinderFrameReady() callback

void CCameraCaptureEngine::ViewFinderFrameReady(CFbsBitmap& aFrame)
{
    iController->ProcessFrame(&aFrame);
}
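
For context (this setup is not in the original post): ViewFinderFrameReady() is the MCameraObserver callback, and it only starts firing after the camera has been reserved, powered on, and the bitmap viewfinder started. Roughly, assuming the standard ECam sequence:

// Hedged sketch of the assumed ECam startup sequence:
iCamera = CCamera::NewL(*this, 0);  // 'this' implements MCameraObserver
iCamera->Reserve();                 // completes in ReserveComplete()
// in ReserveComplete():
iCamera->PowerOn();                 // completes in PowerOnComplete()
// in PowerOnComplete():
TSize vfSize(VFWIDTH, VFHEIGHT);
iCamera->StartViewFinderBitmapsL(vfSize); // ViewFinderFrameReady() now fires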

2. iController->ProcessFrame() calls CCameraAppBaseContainer::ProcessFrame()

void CCameraAppBaseContainer::ProcessFrame(CFbsBitmap* pFrame)
{
    // here the RGB buffer for the background is filled
    iGLEngine->FillRGBBuffer(pFrame);
    // and the greyscale buffer for tracking is filled
    iTracker->FillGreyBuffer(pFrame);

    // tracking
    TBool aCaptureSuccess = iTracker->Capture();
    // physics
    if(aCaptureSuccess)
    {
        iPhEngine->Tick();
    }
    // rendering
    glClear(GL_DEPTH_BUFFER_BIT);
    iGLEngine->SetViewMatrix(iTracker->iViewMatrix);
    iGLEngine->Render();

    iGLEngine->Swap();
}

void CGLengine::Swap()
{
    eglSwapBuffers(m_display, m_surface);
}

3. Now, how the buffers are filled: the RGB buffers are filled and bound to the textures.

// Swap the R and B channels to convert Symbian's 0x00RRGGBB pixels to the
// R,G,B,A byte order glTexImage2D expects (the alpha byte ends up as
// garbage, but it's unused since blending is off for the background).
inline unsigned int byte_swap(unsigned int v)
{
    return (v<<16) | (v&0xff00) | ((v >> 16)&0xff);
}

void CGLengine::FillRGBBuffer(CFbsBitmap* pFrame)
{
    pFrame->LockHeap(ETrue); // lock global heap
    unsigned int* ptr_vf = (unsigned int*)pFrame->DataAddress();

    FillBkgTxt(ptr_vf);

    pFrame->UnlockHeap(ETrue); // unlock global heap

    BindRGBBuffer(m_bkgTxtID0, m_rgbxBuffer0, BKG_TXT_SIZEY0);
    BindRGBBuffer(m_bkgTxtID1, m_rgbxBuffer1, BKG_TXT_SIZEY1);
}

void CGLengine::FillBkgTxt(unsigned int* ptr_vf)
{
    // the 240-pixel-high frame sits at a 16-row offset inside the
    // 256-pixel-high textures (256 - 240 = 16)
    unsigned int* ptr_dst0 = m_rgbxBuffer0 +
        (BKG_TXT_SIZEY0-VFHEIGHT)*BKG_TXT_SIZEY0;
    unsigned int* ptr_dst1 = m_rgbxBuffer1 +
        (BKG_TXT_SIZEY0-VFHEIGHT)*BKG_TXT_SIZEY1;

    // left 256 columns of the frame go into the 256-wide texture
    for(int j = 0; j < VFHEIGHT; j++)
        for(int i = 0; i < BKG_TXT_SIZEY0; i++)
        {
            ptr_dst0[i + j*BKG_TXT_SIZEY0] = byte_swap(ptr_vf[i + j*VFWIDTH]);
        }

    ptr_vf += BKG_TXT_SIZEY0;

    // remaining 64 columns go into the 64-wide texture
    for(int j = 0; j < VFHEIGHT; j++)
        for(int i = 0; i < BKG_TXT_SIZEY1; i++)
        {
            ptr_dst1[i + j*BKG_TXT_SIZEY1] = byte_swap(ptr_vf[i + j*VFWIDTH]);
        }
}

void CGLengine::BindRGBBuffer(TInt aTxtID, GLvoid* aPtr, TInt aYSize)
{
    glBindTexture(GL_TEXTURE_2D, aTxtID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, aYSize, BKG_TXT_SIZEY0, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, aPtr);
}
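
A possible optimization, not in the original code: glTexImage2D respecifies the whole texture every frame; once the textures exist, glTexSubImage2D updates the existing storage in place and is usually cheaper:

// Hedged alternative: assumes the texture was already created once with
// glTexImage2D at the same size and format.
glBindTexture(GL_TEXTURE_2D, aTxtID);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, aYSize, BKG_TXT_SIZEY0,
                GL_RGBA, GL_UNSIGNED_BYTE, aPtr);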

4. The greyscale buffer is filled and smoothed via an integral image:

void CTracker::FillGreyBuffer(CFbsBitmap* pFrame)
{
    pFrame->LockHeap(ETrue); // lock global heap
    unsigned int* ptr = (unsigned int*)pFrame->DataAddress();

    if(m_bIntegralImg)
    {
        // calculate integral image values
        // (WordPress ate the text between '<' and '>' in the original
        // listing; the loop conditions and bodies here are reconstructed
        // from the standard integral-image recurrence)

        unsigned int rs = 0;
        for(int j=0; j < VFWIDTH; j++)
        {
            // cumulative row sum, first row
            rs = rs + Raw2Grey(ptr[j]);
            m_integral[j] = rs;
        }

        for(int i=1; i < VFHEIGHT; i++)
        {
            unsigned int rs = 0;
            for(int j=0; j < VFWIDTH; j++)
            {
                rs = rs + Raw2Grey(ptr[i*VFWIDTH+j]);
                m_integral[i*VFWIDTH+j] = m_integral[(i-1)*VFWIDTH+j] + rs;
            }
        }

        // downsample into the tracking grid: border cells average a 2x2
        // pixel area (>>2), interior cells a 4x4 area (>>4)
        iRectData.iData[0] = m_integral[1*VFWIDTH+1]>>2;

        int aX, aY;

        for(aY = 1; aY < MAX_SIZE_Y-1; aY++)
        {
            iRectData.iData[aY*MAX_SIZE_X] = Area(0, 2*aY, 2, 2)>>2;
            iRectData.iData[MAX_SIZE_X-1 + aY*MAX_SIZE_X] = Area(2*MAX_SIZE_X-2, 2*aY, 2, 2)>>2;
        }

        for(aX = 1; aX < MAX_SIZE_X-1; aX++)
        {
            iRectData.iData[aX] = Area(2*aX, 0, 2, 2)>>2;
            iRectData.iData[aX + (MAX_SIZE_Y-1)*MAX_SIZE_X] = Area(2*aX, 2*MAX_SIZE_Y-2, 2, 2)>>2;
        }

        for(aY = 1; aY < MAX_SIZE_Y-1; aY++)
            for(aX = 1; aX < MAX_SIZE_X-1; aX++)
            {
                iRectData.iData[aX + aY*MAX_SIZE_X] = Area(2*aX-1, 2*aY-1, 4, 4)>>4;
            }
    }
    else
    {
        // no integral image: plain downsampling (also reconstructed)
        if(V2RX == 2 && V2RY == 2)
        {
            // 2x downsampling: average each 2x2 pixel block
            for(int j=0; j < MAX_SIZE_Y; j++)
                for(int i=0; i < MAX_SIZE_X; i++)
                {
                    iRectData.iData[i + j*MAX_SIZE_X] =
                        (Raw2Grey(ptr[2*i   +  2*j   *VFWIDTH]) +
                         Raw2Grey(ptr[2*i+1 +  2*j   *VFWIDTH]) +
                         Raw2Grey(ptr[2*i   + (2*j+1)*VFWIDTH]) +
                         Raw2Grey(ptr[2*i+1 + (2*j+1)*VFWIDTH]))>>2;
                }
        }
        else
        {
            // generic viewfinder-to-grid ratio: point sampling
            for(int j=0; j < MAX_SIZE_Y; j++)
                for(int i=0; i < MAX_SIZE_X; i++)
                {
                    iRectData.iData[i + j*MAX_SIZE_X] =
                        Raw2Grey(ptr[V2RX*i + V2RY*j*VFWIDTH]);
                }
        }
    }

    pFrame->UnlockHeap(ETrue); // unlock global heap
}
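
The Area() helper used above isn't shown in the post. A plausible sketch, assuming m_integral holds inclusive sums at viewfinder resolution, is the standard four-corner integral-image lookup:

// Hedged sketch of Area() (assumed, not from the post): the sum of grey
// values over the aW x aH pixel rectangle at (aX, aY), via four lookups
// into the integral image; the aX == 0 / aY == 0 border cases are handled.
inline unsigned int CTracker::Area(TInt aX, TInt aY, TInt aW, TInt aH)
{
    TInt x2 = aX + aW - 1;
    TInt y2 = aY + aH - 1;
    unsigned int s = m_integral[y2*VFWIDTH + x2];
    if(aX > 0)           s -= m_integral[y2*VFWIDTH + (aX-1)];
    if(aY > 0)           s -= m_integral[(aY-1)*VFWIDTH + x2];
    if(aX > 0 && aY > 0) s += m_integral[(aY-1)*VFWIDTH + (aX-1)];
    return s;
}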

The background can then be rendered like this:

#define GLUNITY (1<<16)
static const TInt quadTextureCoords[4 * 2] =
{
0, GLUNITY,
0, 0,
GLUNITY, 0,
GLUNITY, GLUNITY
};

static const GLubyte quadTriangles[2 * 3] =
{
0,1,2,
0,2,3
};

static const GLfloat quadVertices0[4 * 3] =
{
0, 0, 0,
0, BKG_TXT_SIZEY0, 0,
BKG_TXT_SIZEY0, BKG_TXT_SIZEY0, 0,
BKG_TXT_SIZEY0, 0, 0
};

static const GLfloat quadVertices1[4 * 3] =
{
BKG_TXT_SIZEY0, 0, 0,
BKG_TXT_SIZEY0, BKG_TXT_SIZEY0, 0,
BKG_TXT_SIZEY0+BKG_TXT_SIZEY1, BKG_TXT_SIZEY0, 0,
BKG_TXT_SIZEY0+BKG_TXT_SIZEY1, 0, 0
};

void CGLengine::RenderBkgQuad()
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrthof(0, VFWIDTH, 0, VFHEIGHT, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glViewport(0, 0, VFWIDTH, VFHEIGHT);

    glClear(GL_DEPTH_BUFFER_BIT);
    glDisable(GL_BLEND);
    glDisable(GL_ALPHA_TEST);
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_CULL_FACE);

    glColor4x(GLUNITY, GLUNITY, GLUNITY, GLUNITY);

    glBindTexture(GL_TEXTURE_2D, m_bkgTxtID0);
    glVertexPointer(3, GL_FLOAT, 0, quadVertices0);
    glTexCoordPointer(2, GL_FIXED, 0, quadTextureCoords);
    glDrawElements(GL_TRIANGLES, 2 * 3, GL_UNSIGNED_BYTE, quadTriangles);

    glBindTexture(GL_TEXTURE_2D, m_bkgTxtID1);
    glVertexPointer(3, GL_FLOAT, 0, quadVertices1);
    glTexCoordPointer(2, GL_FIXED, 0, quadTextureCoords);
    glDrawElements(GL_TRIANGLES, 2 * 3, GL_UNSIGNED_BYTE, quadTriangles);

    glEnable(GL_CULL_FACE);
    glEnable(GL_BLEND);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_ALPHA_TEST);
}
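
One thing this snippet assumes (it isn't shown in the post): texturing and the vertex/texcoord arrays must already be enabled somewhere in the GL setup, e.g.:

// Assumed one-time GL state setup (my addition, not from the original post)
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);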

27, July, 2009 | Coding AR | Comments Off on Augmented reality on S60 – basics

Marker vs markerless (bundle adjustment)

Here is a sample of image registration with a fiducial marker (actually the marker I used in my games) vs registration with bundle adjustment. Blue lines are point heights (relative to the marker plane) calculated using marker registration and triangulation. White lines are the same using (modified) bundle adjustment. Points were extracted with multiscale FAST and fitted with log-polar Fourier descriptors for correspondence (actually a SURF descriptor produces the same correspondences).
[image: marker vs markerless]
As you can see, markerless is in no way worse than markers, at least on this example ))).

23, July, 2009 | Coding AR | 2 Comments

Augmented Reality on Android – now with NDK

With the release of the Native Development Kit (NDK), Android now looks more like a functional AR platform. The NDK allows native C/C++ libraries, though a complete application still seems to need a Java wrapper. It's not yet clear to me how accessible the video and OpenGL APIs are from the NDK – I'll have to look into it.
On a related note – there are rumors about a pretty powerful 1GHz phone for Android 2.0.

5, July, 2009 | Augmented Reality, Coding AR | Comments Off on Augmented Reality on Android – now with NDK

Why 3d markerless tracking is difficult for mobile augmented reality

I often hear from users that they don't like markers, and they wonder why there is relatively little markerless AR around. First I want to say that there is no excuse for using markers in a static scene with an immobile camera, or if a desktop computer is used. Brute-force methods for tracking, like bundle adjustment and the fundamental matrix, are well developed and have been used for years and years in computer vision and photogrammetry. However, those methods in their original form could hardly produce an acceptable frame rate on mobile devices. On the other hand, marker trackers on mobile devices can be made fast, stable and robust.
So why are markers easy and markerless not?
The problem is the structure, or “shape”, of the point cloud generated by the feature detector of the markerless tracker. The problem with structure is that the depth coordinate of the points is not easily calculated. It is even more difficult because camera frames taken from a mobile device have a narrow baseline – frames are taken from positions close to one another, so “stereo” depth perception is quite rough. This is called the structure-from-motion problem.
In the case of a marker tracker, all feature points of the marker are on the same plane, which allows calculating the position of the camera (up to a constant scale factor) from a single frame, as the sketch below shows. Essentially, if all the points produced by the detector are on the same plane – for example, points from pictures lying on a table – the structure-from-motion problem goes away. A planar point cloud is essentially the same as a set of markers: any four points can be considered a marker, and the same algorithms apply. The structure-from-motion problem is why there is no easy step from a “planar only” tracker to a real 3D markerless tracker.
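
To make the planar case concrete: if the plane-to-image homography H and the camera intrinsics K are known, the pose follows almost directly. A minimal sketch of the standard decomposition (plain C++, my illustration – this code is not from my tracker, and a real tracker would re-orthonormalize R afterwards):

#include <cmath>

// Recover camera pose [R|t] from a plane-to-image homography H given the
// inverse intrinsics Kinv. The first two columns of Kinv*H are the scaled
// rotation columns r1, r2; r3 = r1 x r2. Matrices are row-major double[9].
void PoseFromHomography(const double Kinv[9], const double H[9],
                        double R[9], double t[3])
{
    double M[9]; // M = Kinv * H
    for(int r = 0; r < 3; r++)
        for(int c = 0; c < 3; c++)
            M[r*3+c] = Kinv[r*3+0]*H[0*3+c] + Kinv[r*3+1]*H[1*3+c]
                     + Kinv[r*3+2]*H[2*3+c];

    // normalize so the first rotation column has unit length;
    // this is the "constant scale factor" mentioned above
    double s = 1.0 / std::sqrt(M[0]*M[0] + M[3]*M[3] + M[6]*M[6]);

    double r1[3] = { s*M[0], s*M[3], s*M[6] };
    double r2[3] = { s*M[1], s*M[4], s*M[7] };
    double r3[3] = { r1[1]*r2[2] - r1[2]*r2[1],   // r3 = r1 x r2
                     r1[2]*r2[0] - r1[0]*r2[2],
                     r1[0]*r2[1] - r1[1]*r2[0] };

    for(int i = 0; i < 3; i++)
    {
        R[i*3+0] = r1[i];
        R[i*3+1] = r2[i];
        R[i*3+2] = r3[i];
        t[i] = s*M[i*3+2]; // translation, up to the same scale
    }
}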
However, not everything is so bad for a mobile markerless tracker. If the tracking environment is indoors, or a cityscape, there are a lot of rectangles, parallel lines and other planar structures around. Those can be used as an initial approximation for a structure-from-motion algorithm, and/or as substitutes for markers.
Another approach, of course, is to find some variation of a structure-from-motion method which is fast and works on mobile. Some variation of the bundle adjustment algorithm looks most promising to me.
PS The PTAM tracker, which has been ported to the iPhone, uses yet another approach – instead of running bundle adjustment on each frame, bundle adjustment runs asynchronously in a separate thread, and a simpler method is used for frame-to-frame tracking.
PPS And the last thing, from 2011:

30, March, 2009 | Coding AR | 4 Comments

Tracking planes in the city

In relation to tracking cityscapes I did a planar segmentation test: segmenting FAST-generated corners with a simple 5-point projective invariant.
In some cases the 5-point invariant gives a rough approximation:
[image: planar segments]
In some cases the outliers are quite bad – some points have a very close projective invariant but still lie in different planes:
[image: bad segment]
So the simple method does not quite work…
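
For reference, here is a minimal sketch of the kind of 5-point invariant I mean (my illustration, not the exact code used): for five coplanar points, cross-ratios of triangle-area determinants are preserved under projective transformations, so points from the same plane should give close values.

// Hedged sketch: one projective invariant of five coplanar points,
// I1 = |m431||m521| / (|m421||m531|), where |m_ijk| is the determinant of
// homogeneous points i, j, k (a second independent invariant is built the
// same way with a different index choice).
struct Pt { double x, y; };

// determinant of homogeneous points (x, y, 1) = 2 * signed triangle area
static double Det3(const Pt& a, const Pt& b, const Pt& c)
{
    return (b.x - a.x)*(c.y - a.y) - (b.y - a.y)*(c.x - a.x);
}

double ProjInvariant5(const Pt p[5]) // p[0]..p[4] are points 1..5
{
    double num = Det3(p[3], p[2], p[0]) * Det3(p[4], p[1], p[0]);
    double den = Det3(p[3], p[1], p[0]) * Det3(p[4], p[2], p[0]);
    return num / den;
}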

19, March, 2009 | Coding AR, computer vision | 4 Comments

Symbian foundation is considering options for improving Symbian Signed

David Wood from the Symbian Foundation answered a question about Symbian Signed (the mandatory digital signature for all Symbian OS apps with advanced capabilities) in the comments to his blog post about the Symbian release plan:

“>What about Symbian Signed? Are there any plans to drop it, or at least relax it…?
A number of options for improving the operation of Symbian Signed are under active consideration.”

So it seems the Symbian Foundation is listening to developers' and end users' lamentations, and things could get better soon.

18, March, 2009 | Coding, Mobile, Symbian | Comments Off on Symbian foundation is considering options for improving Symbian Signed

Oriented descriptors vs upright

I have tested oriented SURF descriptors vs upright descriptors on approximately horizontally oriented camera images, and got lower feature density for oriented than for upright. Repeatability of the oriented descriptors was worse too…

17, March, 2009 | Coding AR | 2 Comments

Tracking cityscape

One of the big problems in image registration / structure from motion / 3D tracking is using the global information of the image. Feature/blob extractors like SIFT, SURF or FAST use only local information around the point. Region detectors like MSER use area information, but MSER is not good at tracking textures and is not quite stable on complex scenes. Edge detection provides some non-local information, but requires processing the edges; that could be computationally heavy, but looks promising anyway. There are a lot of methods which use global information – all kinds of texture segmentation, epitomes, snakes/appearance models – but those are computationally heavy and not suitable for mobiles. The question is how to incorporate global information from the image into the tracker with a minimal amount of operations. One way is to optimize the tracker for a specific environment – for example, exploit the properties of a cityscape: a lot of planar structures and straight lines. Such a multiplanar tracker wouldn't work in a forest or a park, but could be a working compromise.

12, March, 2009 | Coding AR | Comments Off on Tracking cityscape

Region Tracking

Experimenting with MSER region tracking.
The problem is that the regions seem not stable enough and way too big.
MSER of downsampled images, original images, MSER of mean-shift-filtered images, MSER of smoothed images:
[image: mser]
Mean shift filtering seems to capture more features, but it's too computationally expensive.

3, March, 2009 | Coding AR | Comments Off on Region Tracking

New version of Augmented Reality Tower Defense

A new version of AR Tower Defense – v0.03. Some bugs are fixed (the black-screen bug), plus a minor tracking improvement.

12, February, 2009 | Augmented Reality, Coding AR, Demo, Mobile, Nokia N95 | Comments Off on New version of Augmented Reality Tower Defense