Mirror Image

Mostly AR and Stuff

Computer vision accelerator in FPGA for smartphone

#augmentedreality
Tony Chun from Intel's integrated platform research group talks about a "methodology" for putting computer vision algorithms (or speech recognition) into hardware. He specifically mentions smartphones and mobile augmented reality. Tony suggests that such an accelerator should be programmable, with some software language to make it flexible. It's not clear whether he is talking about an FPGA prototype, or about putting an FPGA into a smartphone. The idea of using an FPGA chip for mobile CV tasks is not new; for example, in this LinkedIn discussion Stanislav Sinyagin suggested some specific hardware to play with.

Thanks to artimes.rouli.net for pointing this one out.

7, July, 2009 Posted by | Augmented Reality | , , , | 2 Comments

Bing vs Google for augmented reality and computer vision

I'm using Google a lot for my work, looking for articles, definitions and techniques unknown to me, and so on. So I've decided to check Microsoft Bing too.
First test – augmented reality
Google – definition in the first line; the links give pretty comprehensive coverage for a beginner
Bing – four obscure links with job and PhD references

Second test – MSER definition
Google – gives the definition in the first line
Bing – unrelated garbage

Third test – preserving symmetry in Cholesky decomposition
Google result
Bing result
Similar results. Both engines rely heavily on Wikipedia.

Fourth test: “multiscale segmentation”
Google result
Bing result
Surprisingly, I like the Bing results better.

Conclusion:
The Google engine seems to have more “common sense” and is more useful for an introduction to a subject. Could be because of a bigger indexed base.
Bing could actually be useful in specific searches.

15, June, 2009 Posted by | Uncategorized | , , , | 4 Comments

Open Source programmable camera for image processing

Interesting product – a camera for computer vision applications, with an open-sourced DSP
camera
From sci.image.processing:
“The entire camera (hardware as well as software) is open source. It features a 752×480 pixel CMOS sensor, 64MB of SDRAM and 4MB of flash, Ethernet and div. IOs.
The camera runs a uClinux and comes with an image processing framework.”
The datasheet is here

14, June, 2009 Posted by | Uncategorized | , , | Comments Off on Open Source programmable camera for image processing

Why 3d markerless tracking is difficult for mobile augmented reality

I often hear from users that they don't like markers, and they wonder why there is relatively little markerless AR around. First I want to say that there is no excuse for using markers in a static scene with an immobile camera, or if a desktop computer is used. Brute-force methods for tracking, like bundle adjustment and fundamental matrix estimation, are well developed and have been used for years and years in computer vision and photogrammetry. However, those methods in their original form could hardly produce an acceptable frame rate on mobile devices. On the other hand, marker trackers on mobile devices can be made fast, stable and robust.
So why are markers easy and markerless not?
The problem is the structure, or "shape", of the point cloud generated by the feature detector of the markerless tracker. The problem with structure is that the depth coordinate of the points is not easily calculated. That is even more difficult because camera frames taken from a mobile device have a narrow baseline – the frames are taken from positions close to one another, so "stereo" depth perception is quite rough. This is called the structure from motion problem.
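To put a number on how narrow baselines hurt depth perception, here is a minimal sketch using the rectified-stereo relation depth = f·B/disparity. All the numbers (focal length, depths, errors) are invented for illustration, not measured from any real device:

```python
# Depth from disparity in rectified stereo: Z = f * B / d.
# The same fixed matching error in disparity produces a much larger
# depth error when the baseline B is small.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth of a point given focal length (pixels) and baseline (meters)."""
    return f_px * baseline_m / disparity_px

f = 800.0    # focal length in pixels (assumed)
Z = 5.0      # true depth of the point, meters
err = 0.5    # half-pixel matching error in disparity

for B in (0.50, 0.05):                 # wide vs narrow baseline
    d = f * B / Z                      # true disparity
    Z_noisy = depth_from_disparity(f, B, d - err)
    print(f"baseline={B:.2f} m  disparity={d:.1f} px  "
          f"depth error={abs(Z_noisy - Z):.2f} m")
```

With these numbers, shrinking the baseline by 10× blows the depth error up by roughly 10× for the same half-pixel matching error – which is why frames taken a few centimeters apart give only a rough depth estimate.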
In the case of a marker tracker, all feature points of the marker lie in the same plane, and that allows calculating the position of the camera (up to a constant scale factor) from a single frame. Essentially, if all the points produced by the detector lie in the same plane – for example, points from pictures lying on a table – the structure from motion problem goes away. A planar cloud of points is essentially the same as a set of markers: for example, any four points can be considered a marker and the same algorithm applies. The structure from motion problem is why there is no easy step from a "planar only" tracker to a real 3D markerless tracker.
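To illustrate why a planar point set makes single-frame pose easy, here is a NumPy sketch on synthetic data: estimate the homography from four coplanar points with the DLT, then decompose it into camera rotation and translation using H ~ K[r1 r2 t] for the plane Z=0. The camera intrinsics and pose below are made up for the example:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate H (up to scale) from >= 4 point correspondences via the DLT."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    return vt[-1].reshape(3, 3)          # null-space vector, reshaped to 3x3

def pose_from_homography(H, K):
    """For points on the plane Z=0, H ~ K [r1 r2 t]; recover R and t."""
    A = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(A[:, 0])  # r1 must be a unit vector
    if A[2, 2] < 0:                      # pick the sign that puts the plane
        lam = -lam                       # in front of the camera
    r1, r2 = lam * A[:, 0], lam * A[:, 1]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    t = lam * A[:, 2]
    return R, t

# Synthetic example: a known camera looking at 4 points on the plane Z=0.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R_true = np.eye(3)
t_true = np.array([0.1, -0.2, 3.0])
plane_pts = np.array([[-1, -1], [1, -1], [1, 1], [-1, 1]], float)

img_pts = []
for X, Y in plane_pts:
    p = K @ (R_true @ np.array([X, Y, 0.0]) + t_true)
    img_pts.append(p[:2] / p[2])

H = homography_dlt(plane_pts, img_pts)
R, t = pose_from_homography(H, K)
print("recovered t:", t)   # matches t_true up to numerical error
```

Here the scale comes out exactly because the plane coordinates are metric; with a marker of known size the same thing happens, which is the "up to a constant scale factor" caveat above.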
However, not everything is so bad for a mobile markerless tracker. If the tracking environment is indoors, or a cityscape, there are a lot of rectangles, parallel lines and other planar structures around. Those can be used as an initial approximation for one of the structure from motion algorithms, and/or as substitutes for markers.
Another approach, of course, is to find some variation of a structure from motion method which is fast and works on mobile. Some variation of the bundle adjustment algorithm looks most promising to me.
PS The PTAM tracker, which has been ported to the iPhone, uses yet another approach – instead of running bundle adjustment on each frame, bundle adjustment runs asynchronously in a separate thread, and a simpler method is used for frame-to-frame tracking.
PPS And the last thing, from 2011:

30, March, 2009 Posted by | Coding AR | , , , , , , , , | 4 Comments

Tracking planes in the city

In relation to tracking cityscapes I did a planar segmentation test: I segmented FAST-generated corners with a simple 5-point projective invariant.
In some cases the 5-point invariant gives a rough approximation:
planar segments
In some cases the outliers are quite bad – some points have very close projective invariants but still lie in different planes.
bad segment
So the simple method does not quite work…
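For reference, one common form of a 5-point projective invariant is a ratio of 3×3 determinants of the points' homogeneous coordinates, balanced so that the transform and per-point scale factors cancel. This sketch (plain NumPy; the particular determinant combination and test points are my choice for illustration, not necessarily the exact invariant used in the test above) shows the value surviving an arbitrary projective transform – which is also why unrelated points can land on nearly the same value:

```python
import numpy as np

def five_point_invariants(pts2d):
    """Two projective invariants of 5 coplanar points (no 3 collinear),
    built from balanced ratios of 3x3 determinants of homogeneous coords."""
    p = np.column_stack([np.asarray(pts2d, float), np.ones(5)])
    m = lambda i, j, k: np.linalg.det(p[[i, j, k]])
    # Each point index appears equally often in numerator and denominator,
    # so det(T) and the per-point homogeneous scales cancel out.
    i1 = (m(0, 1, 2) * m(0, 3, 4)) / (m(0, 1, 3) * m(0, 2, 4))
    i2 = (m(1, 2, 3) * m(0, 1, 4)) / (m(0, 1, 2) * m(1, 3, 4))
    return i1, i2

def apply_homography(H, pts2d):
    """Map 2D points through a projective transform H."""
    p = np.column_stack([np.asarray(pts2d, float), np.ones(len(pts2d))]) @ H.T
    return p[:, :2] / p[:, 2:3]

pts = np.array([[0, 0], [4, 0], [4, 3], [0, 3], [1, 1]], float)
H = np.array([[1.1,  0.2,  5.0],
              [-0.1, 0.9,  2.0],
              [0.001, 0.002, 1.0]])   # arbitrary projective transform

print(five_point_invariants(pts))
print(five_point_invariants(apply_homography(H, pts)))  # same up to float error
```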

19, March, 2009 Posted by | Coding AR, computer vision | , , , , , , , , , | 4 Comments

Oriented descriptors vs upright

I have tested oriented SURF descriptors against upright descriptors on approximately horizontally oriented camera images, and got lower feature density for oriented than for upright. Repeatability of the oriented descriptors was worse too…

17, March, 2009 Posted by | Coding AR | , , , | 2 Comments

Tracking cityscape

One of the big problems in image registration / structure from motion / 3D tracking is using the global information of the image. Feature/blob extractors like SIFT, SURF, FAST etc. use only local information around a point. A region detector like MSER uses area information, but MSER is not good at tracking textures and is not quite stable on complex scenes. Edge detection provides some non-local information, but requires processing the edges; that can be computationally heavy, but looks promising anyway. There are a lot of methods which use global information – all kinds of texture segmentation, epitomes, snakes/appearance models – but those are computationally heavy and not suitable for mobiles. The question is how to incorporate global information from the image into the tracker with a minimal amount of operations. One way is to optimize the tracker for a specific environment – for example, use the properties of a cityscape: a lot of planar structures and straight lines. Such a multiplanar tracker wouldn't work in a forest or park, but could be a working compromise.
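As a toy illustration of how cheaply straight-line structure can be pulled out of an image, here is a minimal Hough-style voting sketch in plain NumPy. The "edge map" here is a synthetic list of points; a real tracker would feed in edge pixels from a detector, and all sizes and thresholds below are invented for the example:

```python
import numpy as np

def hough_peak_line(edge_points, img_diag, n_theta=180):
    """Vote edge points into a (rho, theta) accumulator; return the peak line
    in the normal form x*cos(theta) + y*sin(theta) = rho."""
    thetas = np.deg2rad(np.arange(n_theta))
    rhos = np.arange(-img_diag, img_diag + 1)
    acc = np.zeros((len(rhos), n_theta), dtype=np.int32)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in edge_points:
        # one vote per theta bin: the rho this point would lie on
        r = np.round(x * cos_t + y * sin_t).astype(int) + img_diag
        acc[r, np.arange(n_theta)] += 1
    r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[t_idx], rhos[r_idx]

# Synthetic "edge map": points on the vertical line x = 40, plus noise.
pts = [(40, y) for y in range(60)] + [(7, 13), (25, 52), (55, 3)]
theta, rho = hough_peak_line(pts, img_diag=100)
print(np.rad2deg(theta), rho)   # 0 degrees, rho = 40: the line x = 40
```

The accumulator loop is just additions and a rounding per point, which is the kind of cheap global evidence a mobile tracker could afford.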

12, March, 2009 Posted by | Coding AR | , , , , , , , , , , , , | Comments Off on Tracking cityscape

New version of Augmented Reality Tower Defense

A new version of AR Tower Defense – v0.03. Some bugs fixed (the black screen bug) and minor tracking improvements.

12, February, 2009 Posted by | Augmented Reality, Coding AR, Demo, Mobile, Nokia N95 | , , , , , , , , , , | Comments Off on New version of Augmented Reality Tower Defense

Markerless tracking with FAST

Testing outdoor markerless tracking with a FAST/SURF feature detector.
The plane of the camera is not parallel to the ground, which makes it difficult for the eye to estimate precision.
registration

29, January, 2009 Posted by | Uncategorized | , , , , , , , , | Comments Off on Markerless tracking with FAST

FAST with SURF descriptor

Features detected with multistage FAST and fitted with SURF descriptors
FAST SURF
A less strict threshold gives a lot more correspondences, but also some false positives
FAST SURF
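One standard way to keep the extra correspondences a loose threshold brings while cutting the false positives is Lowe's ratio test: accept a match only when the best descriptor distance is clearly smaller than the second best. A sketch in plain NumPy, with made-up descriptors standing in for real SURF output:

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.7):
    """Match each descriptor in desc_a to desc_b, keeping only matches whose
    best distance is < ratio * second-best distance (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        best, second = np.argsort(dists)[:2]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

rng = np.random.default_rng(0)
desc_b = rng.normal(size=(50, 64))                            # "scene" descriptors
desc_a = desc_b[:10] + rng.normal(scale=0.01, size=(10, 64))  # noisy true matches
matches = ratio_test_matches(desc_a, desc_b)
print(matches)   # each query i matches scene descriptor i
```

An ambiguous query (one that is nearly equidistant from two scene descriptors) fails the ratio and is simply dropped, which is what suppresses the false positives a loose detector threshold lets through.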

25, January, 2009 Posted by | Coding AR | , , , , , , | 44 Comments