There are two recent developments related to Augmented Reality and Google: Google Goggles and the integration of QR codes into Google Maps. While talking on Twitter with @noazark, the question arose whether Google Goggles, which doesn't do real-time localization of the user, counts as a "real" AR.
Here come QR codes. QR codes are extremely easy to recognize in the camera image, and their square shape allows for fast calculation of the camera position relative to the code. In fact, each QR code includes three fiducial markers:
Well-known marker-tracking techniques are easily applied to them. Marker tracking could be augmented (pun intended) by planar tracking of the corners of the pattern itself. That allows attaching virtual 3d objects/animations to QR codes, but there is more to it. As a QR code can contain more than 4K of data, the exact GPS coordinates, orientation and size of the pattern could be encoded in the pattern itself. That way a mobile phone seeing the code can easily calculate its own exact 3d coordinates and orientation, not only relative to the QR code, but absolute.
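Turning the tracker's relative pose into an absolute one is then just a composition of two rigid transforms: the marker-in-camera pose from the tracker, and the marker-in-world pose decoded from the payload. A minimal sketch in plain Python (rotation matrices as nested lists; the function names and pose conventions are mine, not from any tracker API):

```python
def mat_vec(R, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(R):
    return [[R[j][i] for j in range(3)] for i in range(3)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]

def absolute_camera_pose(R_cm, t_cm, R_wm, t_wm):
    """R_cm, t_cm: marker pose in the camera frame (from the tracker).
    R_wm, t_wm: marker pose in the world (decoded from the QR payload).
    Returns the camera's absolute pose in world coordinates."""
    # invert the observation to get the camera in the marker frame
    R_mc = transpose(R_cm)
    t_mc = [-x for x in mat_vec(R_mc, t_cm)]
    # lift into world coordinates via the marker's encoded pose
    R_wc = mat_mul(R_wm, R_mc)
    t_wc = [a + b for a, b in zip(mat_vec(R_wm, t_mc), t_wm)]
    return R_wc, t_wc
```

For example, a marker seen head-on two meters in front of the camera, encoded as sitting at world position (10, 20, 0), puts the camera at (10, 20, -2) in world coordinates.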
More than that: a QR code can carry the coordinates of nearby QR codes, creating a kind of localization grid, which can guide the user to any location covered by that grid with an arrow on the screen of the phone.
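The on-screen arrow itself is cheap to compute: bearing to the target minus the device heading from the e-compass. A sketch, assuming (hypothetically) that the grid payload gives positions in a local east/north meter grid:

```python
import math

def arrow_angle(cur_xy, target_xy, heading_deg):
    """Angle (degrees, clockwise from the top of the screen) at which
    to draw the guidance arrow. cur_xy/target_xy are (east, north)
    positions in a local meter grid; heading_deg is the compass
    heading of the device."""
    dx = target_xy[0] - cur_xy[0]  # east offset
    dy = target_xy[1] - cur_xy[1]  # north offset
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0
    return (bearing - heading_deg) % 360.0
```

A target due east of the user shows as a 90-degree arrow when the phone points north, and as a straight-ahead arrow once the phone is turned to face east.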
Now to markerless tracking: QR codes can be used to jump-start a markerless tracker and assist it with error correction (drift compensation), especially the mentioned grid of codes. That is especially relevant to markerless trackers which rely on planar structures and straight edges.
Now there is one problem here: a white QR code is easy to segment out of a dark background, but on a white background it's not so easy to recognize, and the embedded fiducial markers will not be visible from afar. Here is a suggestion: put a thick black frame around the QR code, and make it part of an extended standard. This square shape would be easy to recognize even when it's only a couple of dozen pixels in diameter. With incremental tracking the phone would be able to track it (after an initial close-up) even when moved quite far from the code. And if this square frame is part of the standard, always having the same size relative to the code, it could be used for distance estimation.
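The distance estimate follows directly from the pinhole camera model: a frame of known physical size that appears N pixels wide sits at roughly focal_length_px * real_size / N. A one-liner sketch (my own naming, assuming the frame size is fixed by the standard and the focal length in pixels is known):

```python
def distance_to_marker(frame_size_m, frame_size_px, focal_len_px):
    """Pinhole-model distance estimate to a frame of known physical
    size (meters) appearing frame_size_px wide in the image."""
    return focal_len_px * frame_size_m / frame_size_px
```

For instance, a 10 cm frame seen 40 pixels wide by a camera with an 800-pixel focal length is about 2 meters away.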
Now combine that with real-time Google Goggles and you have functional AR with 3d registration.
I have been struck off the list of the Nokia Augmented Reality co-creation session, so here is the gist of what I was intending to say about AR-friendly mobile devices.
I will not repeat the obvious here (requirements for CPU, FPU, RAM etc.) but concentrate on things which are often missed.
I. Hardware side
1. Battery life is the most important thing here. AR applications eat battery extremely fast: full CPU load, memory access, a working camera, and on top of that wireless data access, GPS and e-compass.
It's not realistic to expect dramatic improvements in battery life in the near future, though fuel cells and air-fueled batteries give some hope. Thinking short term, a dual battery is the most realistic solution. AR-capable devices tend to be quite heavy and not exactly slim anyway, so a second battery would not make a dramatic difference (the iPhone could be an exception here).
Now how to make the most of it? Make the batteries hot-swappable with separate slots, and provide a separate battery charger. If the user is indoors he/she can remove the empty battery and put it on charge while the device runs on the second.
2. Heating. Up until now no one has paid attention to the heating of mobile devices, mostly because CPU-heavy apps are still very few (maybe only 3d games). An AR application produces even more heat than a 3d game, and the device can become quite hot. So heatsinks and heat pumps are on the agenda.
3. Camera. For AR, the speed of the camera is more important than its resolution. A slow camera produces blurred images which are extremely hard to process (extracting features, edges etc.).
Position of the camera. Most users hold the device horizontally while using AR. A specific of mobile AR is that the user simultaneously gets input from peripheral vision. To produce a picture consistent with peripheral vision, the camera should be in the center of the device, not on the extreme edge as in the N900.
The absence of skew, off-center, radial and rolling-shutter distortions in the camera is another factor. In this respect Nokia phone cameras are quite good for now, unlike the iPhone's.
4. Buttons. A touchscreen is not very helpful for AR; all screen real estate should be dedicated to representing the environment. While it's quite possible to make a completely gesture-driven AR interface, buttons are still helpful. There should be at least one easily accessible button on the front panel. The N95 with the slider out to the right is an almost perfect setup: one big button on the front panel and some on the slider on the opposite side. The N900, with buttons only on a slider that slides only down and none on the front panel, is an example of unhelpful button placement.
II. Software side
1. Platform fragmentation is the bane of mobile developers, especially when several new models are launched every quarter. One of the reasons for the phenomenal success of the iPhone application platform is that there is no fragmentation whatsoever. With a huge zoo of models it's practically impossible to support all that are in the suitable hardware range. That is especially difficult with AR apps, which are closely coupled to camera technical specifications, display size and ratio etc. If manufacturers want to make it easy for devs they should concentrate on one AR-friendly line of devices, with binary, or at least source-code, compatibility between models.
2. Easy access to the DSP in the API. It would effectively give the developer a second CPU.
3. Access to raw data from the camera. Why raw camera data is not accessible from the ordinary API and is only available to a selected elite of developer houses is a mystery to me. Right now, for example, the Symbian OS camera viewfinder converts data to YUV422, then from YUV422 to BMP, and the ordinary viewfinder API has access to the BMP only. Quite an overhead.
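To see why the BMP round-trip hurts: for most vision work all you need is the luma plane, which with raw YUV422 access is a trivial byte-stride read, no color conversion at all. A sketch, assuming the common packed YUYV byte order (if the device uses UYVY instead, the luma bytes are at odd offsets):

```python
def luma_plane(yuv422, width, height):
    """Extract the grayscale (Y) plane from a packed YUYV buffer:
    in YUYV every even byte is luma, so no BMP conversion is needed."""
    assert len(yuv422) == width * height * 2  # 2 bytes per pixel
    return yuv422[0::2]
```

One slice per frame instead of two full-frame format conversions; exactly the kind of shortcut a raw-data API would allow.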
4. An API to access internal camera parameters: focal distance etc. Otherwise every device has to be calibrated by the developer.
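What such an API would save the developer is, essentially, reconstructing the pinhole intrinsics per device. A rough sketch of what could be derived from datasheet-style numbers, under the simplifying assumptions of square pixels and a centered principal point (real calibration also recovers distortion, which this ignores):

```python
def intrinsic_matrix(focal_mm, sensor_w_mm, img_w_px, img_h_px):
    """Approximate pinhole intrinsics K from focal length and sensor
    width, assuming square pixels and a centered principal point."""
    fx = focal_mm / sensor_w_mm * img_w_px  # focal length in pixels
    return [[fx,  0.0, img_w_px / 2.0],
            [0.0, fx,  img_h_px / 2.0],
            [0.0, 0.0, 1.0]]
```

With the numbers exposed by the device this is one line of setup; without them, it's a checkerboard calibration session per phone model.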
As CNET points out, Symbian is not mentioned in the joint Intel-Nokia press release about 3G and Open Source Software collaboration; only Maemo and Moblin are. Symbian, though also open-sourced, is left out. It could be that Nokia is less enthusiastic about Symbian OS now. Existing Symbian OS UIs are inferior to the iPhone UI, Symbian OS third-party applications are not getting enough traction, and most Symbian users are not even aware they exist. The Symbian Signed restrictions are not helping either. BTW, most Symbian users are not even aware they are Symbian users.
So Nokia seems to be hedging its bets with Maemo Linux. CNET thinks Nokia could switch to Maemo for high-end devices and leave Symbian for the mid-range.
Polynesian stick charts were a completely different way of navigating: they mapped not only locations, but also oceanic swells and patterns of waves.
The specific map encoding was a closely guarded secret, known only to the group of navigators who owned the charts.
Navigating by the wave patterns, a navigator "would crouch in the bow of his canoe and literally feel every motion of the vessel." They "concentrated on refraction of swells as they came in contact with undersea slopes of islands and the bending of swells around islands as they interacted with swells coming from opposite directions."
Fascinating stuff, the kind of technology that could have been developed by aliens, or in an alternate history line.
A time-domain smoothing filter is a somewhat controversial matter in image registration. Video is inherently smooth, so the need for time-domain smoothing is usually a sign of instability in the registration algorithm. Taking an average over several frames is especially notorious. Ideally a good, stable algorithm doesn't need any smoothing in the time domain. But what to do if there is small jitter in an otherwise stable and fast real-time algorithm? Just a couple of pixels in amplitude, it's not noticeable on big virtual objects, but on small, especially stretched or linear objects it's quite noticeable. I have found that, at least in my case, a simple several-frame smoothing filter removes the jitter without causing any visible alignment artifacts.
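For the record, even a one-pole exponential filter (a cheaper cousin of the several-frame average described above; this is a generic sketch, not my exact filter) is enough to kill couple-of-pixel jitter:

```python
class PoseSmoother:
    """Exponential moving-average filter for 2d screen positions.
    alpha near 1.0 means little smoothing (and little lag);
    lower alpha trades lag for stability."""
    def __init__(self, alpha=0.6):
        self.alpha = alpha
        self.state = None

    def update(self, x, y):
        if self.state is None:
            self.state = (x, y)  # first frame: nothing to blend with
        else:
            ax, ay = self.state
            self.state = (self.alpha * x + (1 - self.alpha) * ax,
                          self.alpha * y + (1 - self.alpha) * ay)
        return self.state
```

The trade-off is lag: too low an alpha and the overlay visibly trails fast camera motion, which is usually worse than the jitter it hides.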
The first reception of the demo is positive. It seems users generally have no problem with the concept of fiducial markers and keeping them in the camera frame. Mostly people want more stuff and gameplay, as always, but there seems to be no problem with usability. Well, of course there is the problem of Nokia platform fragmentation, but that is not exactly AR-related. Or it could be: mobile AR applications are CPU-hungry, which forces them to squeeze the device hardware to its extreme, and that makes AR apps especially vulnerable to platform fragmentation. I've already mentioned the strange problem with the N96 in a previous post.
The concept of a tabletop marker-based game seems viable.
The main question is: how to bring more real-world interaction into the game?
Right now we have the phone as a 3d mouse and small objects scanned and placed into the game.
Make the markers movable? Wouldn't that hinder gameplay? Scan the surface of the table and use it as the game "terrain", with game physics properties imported from the real table surface/objects?