Mirror Image

Mostly AR and Stuff

Samsung SARI 1.5 Augmented Reality SDK is out in the wild

Something I did for Samsung (the kernel of the tracker). The biggest improvement in SARI 1.5 is the sensor fusion, which allows for a lot more robust tracking.
Here is an example of run-time localization and mapping with SARI 1.5:

This is the AR EdiBear game (free in the Samsung Apps store)

14, November, 2011 Posted by | Augmented Reality, Coding AR, computer vision, Demo, Games, mobile games | , , , , , , , , , , | 3 Comments

Stuff and AR

I once asked what 3D registration/reconstruction/pose estimation is really about – optimization or statistics? The more I think about it, the more I'm convinced it's at least 80% statistics. Even specifically optimization tricks like Tikhonov regularization have a statistical underpinning. Stability of optimization is robust statistics (yes, I know, I repeat it way too often). The cost function formulation is a formulation of the error distribution, and it defines the convergence speed.
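To make the Tikhonov point concrete, here is a minimal sketch (my own illustration in Python/NumPy, not code from any tracker mentioned here) showing that the regularizer is literally a Gaussian prior: Tikhonov-regularized least squares and MAP estimation under a Gaussian prior give the same answer.

```python
import numpy as np

def tikhonov_ls(A, b, lam):
    """argmin_x ||A x - b||^2 + lam * ||x||^2  (Tikhonov / ridge)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Statistical reading: for b = A x + noise with noise ~ N(0, sigma^2 I)
# and prior x ~ N(0, tau^2 I), the MAP estimate is exactly tikhonov_ls
# with lam = sigma^2 / tau^2 -- the "optimization trick" is a prior.
```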

Now some (almost) unrelated AR stuff:
I already mentioned on Twitter that the version of the markerless tracker I did a lot of work on is part of the Samsung AR SDK (SARI) for Android and Bada. It was shown at AP2011 (presentation, which also includes nice Bada code). The AR SDK presentation is here.
Some videos from the presentation – the Edi Bear game demo with non-reference tracking at the end of the video, and less trivial elements of SLAM tracking. Another application of the SARI SDK – PBI (this one seems to use an earlier version).

4, July, 2011 Posted by | Uncategorized | , , , , , | 1 Comment

SMMT is now open sourced

SMMT is now open sourced under a BSD-like license. The Sourceforge page is here.

21, February, 2010 Posted by | Coding AR | , , , , , | 4 Comments

TI demoed tablet with stereocamera

In relation to this post, TI demoed an OMAP3 tablet with a dual camera capable of recording 3D images.
TI promises the dual-core OMAP4 will be even better at this.

18, February, 2010 Posted by | Augmented Reality, Mobile | , , , , , | Comments Off on TI demoed tablet with stereocamera

Augmented reality: from Tangible Space to Intelligent Space

There is such a thing as Milgram's Reality-Virtuality Continuum:
Milgram's continuum
Milgram's continuum shows the progression of the interface from the raw environment to a completely synthetic environment.
It looks like it's possible to add another dimension to this picture. There is a concept of "Tangible Space" in AR. "Tangible Space" basically means that the user can interact with real-world objects, and those actions affect the virtual environment – for example, an AR game which uses real-world objects as part of the gameplay, tracking the positions and any changes of state of those objects. Essentially, "Tangible Space" is a virtual wrapping around real-world interaction.
However, that line of thought can be stretched beyond augmented reality. In the "Tangible Space", real-world interaction affects the virtual environment. What if virtual interaction affected the real-world environment? In that case we would have "Intelligent Space", or iSpace.
DIND
It is built around the DIND – Distributed Intelligent Networked Device. iSpace is an augmented (or virtual) reality environment "augmented" with mobile robots and/or other actuators. The intelligent network now not only tracks the physical environment but also actively interacts with it using physical agents. If Augmented Reality is an extension of the eye, Intelligent Space is an extension of both eye and hands. Not only is the real environment part of the interface now (as in "Tangible Space"), it also actively helps the human perform some task, and should even guess how to do it. Human and robots are now an integrated system, something like a distributed exoskeleton.
Now we have a new dimension for Milgram’s Continuum:
Passive View of Real Environment->Augmented Reality->Tangible Space->Intelligent Space
If you remember Vernor Vinge's "Rainbows End", the environment in it is not just Augmented Reality – it's an Intelligent Space.

8, February, 2010 Posted by | Augmented Reality | , , , , | Comments Off on Augmented reality: from Tangible Space to Intelligent Space

Symbian Multimarker Tracking Library updated

Symbian Multimarker Tracking Library updated to v0.5. Some bugs are fixed, and markers can now be moved at run-time. The download is here.

24, January, 2010 Posted by | Coding AR | , , , , , , , , , | Comments Off on Symbian Multimarker Tracking Library updated

I want a smartphone with a stereocamera

#augmentedreality
A smartphone with a stereocamera is not exactly a new concept:
Motorola stereo camera phone
But 3D registration, rangefinding, and augmented reality would be a lot more robust and efficient with a stereocamera.
Of course, it should be implemented properly: the distance between the lenses should be as big as possible, preferably with the lenses near opposite ends of the phone. That increases the baseline, which increases 3D precision.
A special geek model could have the second camera on a retractable extender for even more precision.
A stereocamera would make markerless AR tracking trivial. The 3D structure of the scene can be triangulated in one step from a single stereo frame.
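A back-of-the-envelope sketch of why the baseline matters (toy numbers of my own, not any real phone): for a rectified stereo pair, depth is Z = f·B/d, so at a fixed depth a bigger baseline B means a bigger disparity d, and the same half-pixel matching error costs far less depth precision.

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Rectified stereo pinhole model: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

f = 700.0          # focal length in pixels (assumed)
Z = 2.0            # object at 2 m
err = 0.5          # half-pixel disparity error

for B in (0.02, 0.12):                # 2 cm vs 12 cm between lenses
    d = f * B / Z                     # disparity produced at 2 m
    Z_bad = depth_from_disparity(f, B, d - err)
    print(f"B = {B*100:.0f} cm: d = {d:.1f} px, "
          f"half-pixel error shifts Z from {Z:.2f} to {Z_bad:.2f} m")
```

With these numbers, the 2 cm baseline turns a half-pixel error into about 15 cm of depth error at 2 m, while the 12 cm baseline keeps it near 2 cm.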

12, January, 2010 Posted by | Augmented Reality | , , , , , | 2 Comments

Vuzix introduces some serious Augmented Reality eyewear

Via Marketwire. Here it is, the Wrap 920AR:
Wrap 920AR
Specs:
* 1/3-inch wide VGA Digital Image Sensor
* Resolution: 752(H) x 480(V) per lens
* Frame rate: 60 fps
* High-speed USB 2.0
* some kind of 6DoF tracker (probably a 3-axis accelerometer and/or e-compass; I don't have high hopes for a gyroscope)
* Supported by Vuzix Software Developer Program
$799.99, expected availability in the 2nd quarter of 2010.
The Wrap 920AR's stereo camera assembly and 6DoF tracker will also be available separately for upgrading existing Wrap video eyewear. Here is the Wrap 920AR at the Vuzix homepage.

7, January, 2010 Posted by | Augmented Reality | , , , | 1 Comment

More bundle adjustment

Here is some more narrow-baseline local bundle adjustment, from only two camera frames.

The outlier is drawn in red. Some points are not detected as outliers, but still are not localized properly.
Multiscale FAST was used for detection. No descriptors were used for point correspondence; instead, incremental tracking with search in a sliding window by average gradient responses was used (there are three tracked frames between those two). I think those bad points could be isolated with some geometric consistency rules, presuming the landscape is smooth.
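As an illustration of what such a rule could look like (a sketch of my own, not the actual tracker code): flag a triangulated point as suspect when its depth disagrees with the median depth of its image-space neighbors – which is just the "smooth landscape" assumption made explicit.

```python
import numpy as np

def flag_depth_outliers(pts_xy, depths, k=6, rel_tol=0.3):
    """Flag points whose depth deviates from the median depth of their
    k nearest image-space neighbors by more than rel_tol (relative).
    Assumes the scene is a smooth surface, e.g. a landscape."""
    pts_xy = np.asarray(pts_xy, dtype=float)
    depths = np.asarray(depths, dtype=float)
    n = len(depths)
    suspect = np.zeros(n, dtype=bool)
    for i in range(n):
        # k nearest neighbors in the image plane, excluding the point itself
        dist = np.linalg.norm(pts_xy - pts_xy[i], axis=1)
        nn = np.argsort(dist)[1:k + 1]
        med = np.median(depths[nn])
        suspect[i] = abs(depths[i] - med) > rel_tol * med
    return suspect
```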

6, January, 2010 Posted by | Coding AR | , , , , , , | Comments Off on More bundle adjustment

Feature matching and geometric consistency

Here I want to talk about matching in image registration. We are doing registration in 3D or 2D, using feature points. The next stage after extracting feature points from the image is finding corresponding points in two (or more) images. Usually it's done with descriptors like SIFT, SURF, DAISY, etc. Sometimes randomized trees are used for it. Whatever method is used, it usually has around 0.5% false positives. False positives create outliers in the registration algorithm. That is not a big problem in planar trackers or model/marker trackers. It could be a problem for Structure From Motion, though. If CPU power is not limited, the problem is not very serious: heavy-duty algorithms like full-sequence bundle adjustment and RANSAC cope with outliers pretty well. However, even for high-end mobile phones such algorithms are problematic. Some tricks can help – Georg Klein put full-sequence bundle adjustment into a separate thread of the PTAM tracker to run asynchronously – but I'm trying to do local, 2-4 frame bundle adjustment here. The problem of false positives is especially difficult for images of patterned environments, where some image parts are similar or repeated.
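For reference, this is the RANSAC idea in its barest form (a generic sketch of my own, not the tracker's code): fit a model to minimal random samples and keep the one with the most inliers. For feature matches, fit_model would be e.g. a homography or fundamental-matrix estimate, and residuals a reprojection or epipolar distance.

```python
import numpy as np

def ransac(data, fit_model, residuals, n_sample,
           n_iters=500, inlier_thresh=2.0, seed=0):
    """Generic RANSAC: fit on minimal random samples, keep the model
    with the largest inlier set, then refit on those inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(data), dtype=bool)
    for _ in range(n_iters):
        sample = data[rng.choice(len(data), n_sample, replace=False)]
        model = fit_model(sample)
        if model is None:               # degenerate sample, skip
            continue
        inliers = residuals(model, data) < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_model(data[best_inliers]), best_inliers
```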
Here the mismatched correspondence is marked with a blue line (points 15-28).

As you can see, it's not easy for any descriptor to tell the difference between points 13 (correct) and 15 (wrong) in the left image – their neighborhoods are practically the same:


Such situations can easily happen not only indoors, but also in cityscapes, industrial, and other regular environments.
One solution for such cases is to increase the descriptor radius – to process a bigger patch around the point – but that would create problems of its own, for example too many false negatives.
The other approach is to use the geometric consistency of the image point positions.
There are at least two ways to do it.
One is to consider displacements of corresponding points between frames. Here is an example from the paper by Kanazawa et al., "Robust Image Matching Preserving Global Geometric Consistency":

This method first gathers local displacement statistics around each point, filters out outliers, and applies a smoothing filter. Here are the original matches, the matches after applying the consistency check, and the matches after applying the smoothing filter.

However, this method works best for dense, regular sets of feature points. For a small, sparse set of points it does not improve the situation much.
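A crude sketch of that first approach (my own simplification, not the authors' algorithm): compare each match's displacement vector with the median displacement of its neighboring matches, and drop the matches that disagree with the local consensus motion.

```python
import numpy as np

def displacement_consistency(pts1, pts2, k=8, tol_px=5.0):
    """Keep matches whose displacement vector agrees with the median
    displacement of their k nearest neighbors in the first image."""
    pts1, pts2 = np.asarray(pts1, float), np.asarray(pts2, float)
    disp = pts2 - pts1                     # per-match displacement
    keep = np.ones(len(pts1), dtype=bool)
    for i in range(len(pts1)):
        dist = np.linalg.norm(pts1 - pts1[i], axis=1)
        nn = np.argsort(dist)[1:k + 1]     # neighbors, excluding self
        med = np.median(disp[nn], axis=0)  # local consensus motion
        keep[i] = np.linalg.norm(disp[i] - med) < tol_px
    return keep
```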
The second approach: build a graph out of the feature points of each frame.

The local topological structure of the two graphs is different because of the false positives. It's easy to find the graph vertices/edges which cause the inconsistency – the edges marked blue. They can be found, for example, by the signs of cross products between edges. After the offending vertices are found, they are removed:

There are different ways to build a graph out of the feature points. The simplest is nearest neighbors, but maybe Delaunay triangulation or DSP could do better.
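To make the cross-product test concrete, here is a small sketch (my own illustration of the idea, with the vertex triples supplied by whatever graph you built): the sign of the 2D cross product of two edges sharing a vertex encodes their relative orientation; if the sign flips between the two frames, the local structure is inconsistent and one of the three matches is likely a false positive.

```python
import numpy as np

def cross2(u, v):
    """z-component of the 2D cross product u x v."""
    return u[0] * v[1] - u[1] * v[0]

def orientation_flips(pts1, pts2, triples):
    """For each vertex triple (i, j, k) from the match graph, compare
    the orientation of the edge pair (i->j, i->k) in both frames.
    A sign flip of the cross product marks an inconsistent triple."""
    pts1 = np.asarray(pts1, dtype=float)
    pts2 = np.asarray(pts2, dtype=float)
    flips = []
    for i, j, k in triples:
        c1 = cross2(pts1[j] - pts1[i], pts1[k] - pts1[i])
        c2 = cross2(pts2[j] - pts2[i], pts2[k] - pts2[i])
        if c1 * c2 < 0:                  # orientation reversed between frames
            flips.append((i, j, k))
    return flips
```

Vertices appearing in many flipped triples are the natural candidates for removal.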

11, December, 2009 Posted by | Coding AR, computer vision | , , , , , , , , | 2 Comments