Mirror Image

Mostly AR and Stuff

Augmented reality: from Tangible Space to Intelligent Space

There is such a thing as Milgram's Reality-Virtuality Continuum.
[Image: Milgram's continuum]
Milgram's continuum shows the progression of an interface from the raw environment to a completely synthetic environment.
It looks like it's possible to add another dimension to this picture. There is a concept of "Tangible Space" in AR. "Tangible Space" basically means that the user can interact with real-world objects, and those actions affect the virtual environment. For example, an AR game which uses real-world objects as part of the gameplay, tracking their positions and any changes in their state. Essentially, "Tangible Space" is a virtual wrapping around real-world interaction.
However, that line of thought could be stretched beyond augmented reality. In "Tangible Space", real-world interaction affects the virtual environment. What if virtual interaction affected the real-world environment? In that case we would have "Intelligent Space", or iSpace.
[Image: DIND]
iSpace is based on DINDs – Distributed Intelligent Networked Devices. It is an augmented (or virtual) reality environment "augmented" with mobile robots and/or other actuators. The intelligent network now not only tracks the physical environment, but also actively interacts with it using physical agents. If Augmented Reality is an extension of the eye, Intelligent Space is an extension of both the eye and the hands. Not only is the real environment now a part of the interface (as in "Tangible Space"), it actively helps the human perform a task, and it also has to guess how to do it. Human and robots become an integrated system, something like a distributed exoskeleton.
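A minimal sketch of that loop might look like the Python below. The class and method names are purely illustrative – this is not taken from any actual iSpace/DIND implementation – but it shows the two directions of the loop: physical state flowing into the virtual model, and virtual interactions flowing back out as physical actions.

class NetworkedDevice:
    # One DIND node: it can sense part of the physical environment
    # and act on it (e.g. a camera, a mobile robot, an actuator).
    def __init__(self, name):
        self.name = name

    def sense(self):
        # Placeholder: a real node would return tracked object poses etc.
        return {"node": self.name, "objects": []}

    def act(self, command):
        # Placeholder: a real node would drive a robot or actuator.
        print(f"{self.name}: executing '{command}'")


class IntelligentSpace:
    # The "eye and hands" loop: physical state flows into the virtual
    # model (Tangible Space direction), and virtual interactions flow
    # back out as physical actions (Intelligent Space direction).
    def __init__(self, nodes):
        self.nodes = nodes
        self.world_model = {}

    def update_world_model(self):
        for node in self.nodes:
            self.world_model[node.name] = node.sense()

    def on_virtual_interaction(self, event):
        for node in self.nodes:
            node.act(f"assist with {event}")


space = IntelligentSpace([NetworkedDevice("robot-1"), NetworkedDevice("camera-1")])
space.update_world_model()
space.on_virtual_interaction("bring the highlighted object to the user")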
Now we have a new dimension for Milgram’s Continuum:
Passive View of Real Environment -> Augmented Reality -> Tangible Space -> Intelligent Space
If you remember Vernor Vinge's "Rainbows End", the environment in it is not just Augmented Reality – it's an Intelligent Space.

8 February 2010 | Posted in Augmented Reality | Comments Off on Augmented reality: from Tangible Space to Intelligent Space

My take on the next gen mobile augmented reality device specs

Everyone is talking about what a near-future AR device should look like, so I'd like to as well.

The first possibility – videoglasses with a camera + a lightweight PC.
A dedicated wearable PC is out, IMO – it's too hardcore stuff.
The next closest thing is a netbook. A netbook could be used both in its main capacity and as an AR platform. However, the problem here is the weight. Somehow I don't think the average user would want to carry around a 1 kg netbook in a backpack while using AR. Anything lighter wouldn't have enough processing power to track mid-resolution stereo cameras in real time. Nevertheless, the weight could be reduced.
Make the display and keyboard detachable, use a carbon fiber case and easily replaceable dual (or maybe triple) batteries. If all this reduced the weight below 400 g while keeping the CPU above 1.5 GHz, with 2.5 hours of full-load battery life (5 hours with dual batteries), such a netbook could be a viable platform for AR with videoglasses.

The second possibility – a handheld/smartphone. Here we have severe limitations on the battery/CPU/GPU. That's why I don't think a high-resolution display would be beneficial for such an AR device. Processing high-resolution images requires a lot of CPU power, and on a small display hi-res wouldn't look much better than low/mid res. 320×240 is good enough; 400×320 is probably optimal. For the same reason a 1-megapixel camera is enough too, but it should be fast – preferably 60 fps – with a high-quality sensor, no distortion, and good low-light performance. I'm not sure about auto-focus; slow auto-focus could be a problem. Accelerometers and a compass would be good. GPS is a must. CPU – 600 MHz at least, with hardware floating point. A lightweight GPU. The most important thing is the API: complete access to the image processor (if present), the digital signal processor, and raw camera data. That kind of API is not easily accessible on most modern smartphones.
Actually, if existing smartphones had such an API opened up right now, there would already be a breakthrough in mobile AR. Access to the DSP could make image processing a lot faster.
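To illustrate why raw camera access matters, here is a minimal sketch (assuming desktop OpenCV and a generic camera – this is not any phone vendor's API) of the per-frame work an AR tracker has to do: grab the raw frame, convert it, and extract features. On a phone, this is exactly the stage that an open DSP/image-processor API could accelerate.

import time
import cv2

def measure_tracking_rate(camera_index=0, max_frames=120):
    # Hypothetical test loop: grab raw frames and run a cheap feature
    # detector on them, to see what frame rate the CPU alone can sustain.
    cap = cv2.VideoCapture(camera_index)
    detector = cv2.FastFeatureDetector_create(threshold=25)
    frames = 0
    start = time.time()
    while frames < max_frames:
        ok, frame = cap.read()  # raw camera data - the access that matters
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        keypoints = detector.detect(gray)  # candidate features for tracking
        frames += 1
    cap.release()
    elapsed = time.time() - start
    if frames and elapsed > 0:
        print(f"{frames} frames, {frames / elapsed:.1f} fps, "
              f"{len(keypoints)} keypoints in last frame")

measure_tracking_rate()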

Now, would any such device allow real-time AR like in the Coca-Cola avatar ad? Definitely not.
The bottleneck here is the battery, IMO. We will have to wait for some new future tech, like carbon nanotube supercapacitors, to see full-coverage AR.

1 February 2009 | Posted in Augmented Reality | 5 Comments

The Greater Augmented Reality

This was written in response to an excellent post by Tim.

I completely agree that "self" is becoming a progressively fuzzier concept. For example, the concept of "self" usually includes one's memories. But what if some of my memories are stored outside of me, and my brain stores only the search keys to them? Yes, I mean Google. Google already works as "Augmented Memory". Ironically, I'm developing mobile augmented reality apps while using Google as augmented-reality memory, in a weirdly recursive loop…

14 November 2008 | Posted in Augmented Reality | Comments Off on The Greater Augmented Reality

Augmented reality contact lenses could be a little closer

Slashdot reports that Samsung has shown a prototype of color e-paper based on carbon nanotubes. The carbon nanotubes are used as the underlying electrodes; they are completely transparent and very thin. So we may be a bit closer to the AR holy grail – working active-matrix contact lenses (Vinge's "Rainbows End", anyone?). Or at least good enough transparent see-through display glasses.

28 October 2008 | Posted in Augmented Reality | Comments Off on Augmented reality contact lenses could be a little closer