There is such a thing as Milgram’s Reality-Virtuality Continuum.
Milgram’s continuum shows the progression of the interface from the raw environment to a completely synthetic environment.
It looks like it’s possible to add another dimension to this picture. There is a concept of “Tangible Space” in AR. “Tangible Space” basically means that the user can interact with real-world objects, and those actions affect the virtual environment: for example, an AR game which uses real-world objects as part of the gameplay, tracking the positions and any changes of state of those objects. Essentially, “Tangible Space” is a virtual wrapping around real-world interaction.
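The “Tangible Space” idea can be sketched in a few lines: the tracker’s view of real objects is mirrored into the virtual scene, so a physical change (a card flipped on the table) becomes a virtual event. Everything here – the function name, the dictionary layout, the fake tracker output – is an illustrative assumption, not any real AR API.

```python
# Toy "Tangible Space" loop: tracked states of real-world objects
# drive the state of a virtual scene. All names and the fake tracker
# data below are invented for illustration.

def update_virtual_scene(scene, tracked_objects):
    """Mirror each tracked real object into the virtual scene."""
    for obj_id, state in tracked_objects.items():
        scene[obj_id] = {
            "position": state["position"],  # pose reported by the tracker
            "flipped": state["flipped"],    # a tracked change of state
        }
    return scene

# Fake tracker output: a real card the player moved and flipped over.
tracked = {"card_7": {"position": (0.42, 0.10), "flipped": True}}
scene = update_virtual_scene({}, tracked)
print(scene["card_7"]["flipped"])  # True: the virtual card reacts to the real flip
```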
However, that line of thought can be stretched beyond augmented reality. In the “Tangible Space”, real-world interaction affects the virtual environment. What if virtual interaction affected the real-world environment? In that case we would have “Intelligent Space”, or iSpace.
iSpace is based on DINDs – Distributed Intelligent Networked Devices. It is an augmented (or virtual) reality environment “augmented” with mobile robots and/or other actuators. The intelligent network now not only tracks the physical environment, but also actively interacts with it using physical agents. If Augmented Reality is an extension of the eye, Intelligent Space is an extension of both the eye and the hands. Not only is the real environment a part of the interface now (as in “Tangible Space”), it also actively helps the human to perform some task – and it should even guess how to do it. Human and robots are now an integrated system, something like a distributed exoskeleton.
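The inverse direction that defines iSpace – a virtual action producing a real-world effect – can be sketched the same way. The `StubRobot` class and the action format are invented for illustration; a real DIND network would dispatch such commands to actual actuators.

```python
# Toy "Intelligent Space" dispatch: an action taken in the virtual
# environment is translated into a command for a physical agent.
# The robot here is a stub that just logs what it was told to do.

class StubRobot:
    def __init__(self):
        self.log = []

    def move_to(self, x, y):
        # A real robot would drive here; the stub records the command.
        self.log.append(("move_to", x, y))

def handle_virtual_action(robot, action):
    """Translate a virtual-world action into a real-world robot command."""
    if action["type"] == "drag_object":
        # The user dragged a virtual proxy; send the robot to the real spot.
        robot.move_to(*action["target"])

robot = StubRobot()
handle_virtual_action(robot, {"type": "drag_object", "target": (1.5, 2.0)})
print(robot.log)  # [('move_to', 1.5, 2.0)]
```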
Now we have a new dimension for Milgram’s Continuum:
Passive View of Real Environment → Augmented Reality → Tangible Space → Intelligent Space
If you remember Vernor Vinge’s “Rainbows End”, the environment in it is not just an Augmented Reality – it’s an Intelligent Space.
Everyone is talking about what a near-future AR device should look like, so I’d like to weigh in too.
First possibility – video glasses with a camera, plus a lightweight PC.
A dedicated wearable PC is out, IMO – it’s too hardcore.
The next closest thing is a netbook. A netbook could be used both in its main capacity and as an AR platform. The problem here, however, is the weight. Somehow I don’t think the average user would want to carry around a 1 kg netbook in a backpack while using AR. Anything lighter wouldn’t have enough processing power to track mid-resolution stereo cameras in real time. Nevertheless, the weight could be reduced.
Make the display and keyboard detachable, use a carbon fiber case and easily replaceable dual (or maybe triple) batteries. If all this reduced the weight below 400 g while keeping the CPU above 1.5 GHz with 2.5 hours of full-load battery life (5 hours with dual batteries), such a netbook could be a viable platform for AR with video glasses.
Second possibility – a handheld/smartphone. Here we have severe limitations on the battery/CPU/GPU. That’s why I don’t think a high-resolution display would be beneficial for such an AR device: processing high-resolution images requires a lot of CPU power, and with the small size of the display, hi-res wouldn’t look much better than low/mid res. 320×240 is good enough; 400×320 is probably optimal. For the same reason a 1-megapixel camera is enough too, but it should be fast – preferably 60 fps – with a high-quality sensor, without distortions, and good in low-light conditions. I’m not sure about auto-focus; slow auto-focus could be a problem. Accelerometers and a compass would be good. GPS is a must. CPU – 600 MHz at least, with hardware floating point. A lightweight GPU. The most important thing is the API: complete access to the Image Processor (if present), the Digital Signal Processor, and raw camera data. That kind of API is not easily accessible on most modern smartphones.
Actually, if existing smartphones had such an API open right now, there would already be a breakthrough in mobile AR. Access to the DSP could make image processing a lot faster.
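To illustrate why raw-frame and DSP access matter, here is a toy, pure-Python version of about the simplest per-pixel job an AR tracker does: thresholding a frame for bright marker candidates. The frame is synthetic and the numbers are made up; the point is just that even this trivial pass touches every one of the 320×240 pixels on every frame – exactly the kind of work a DSP is built to offload from the CPU.

```python
# Naive CPU thresholding of one synthetic grayscale frame.
# On a DSP this is a single vectorized pass; in application code on a
# ~600 MHz CPU it competes with everything else running on the phone.

WIDTH, HEIGHT, THRESHOLD = 320, 240, 200

# Synthetic 320x240 frame: dark background (50) with one bright 10x10 blob.
frame = [[50] * WIDTH for _ in range(HEIGHT)]
for y in range(100, 110):
    for x in range(150, 160):
        frame[y][x] = 255

# Count bright marker-candidate pixels: one comparison per pixel, 76800 total.
bright = sum(1 for row in frame for px in row if px > THRESHOLD)
print(bright)  # 100 pixels in the 10x10 blob
```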
Now, would any such device allow real-time AR like in the Coca-Cola avatar ad? Definitely not.
The bottleneck here is the battery, IMO. We’ll have to wait for some new future tech, like carbon nanotube supercapacitors, to see complete-coverage AR.
That was in response to an excellent post by Tim.
I completely agree that “self” is becoming a progressively fuzzier concept. For example, the concept of “self” usually includes one’s memories. But what if some of my memories are stored outside of me, and my brain stores only the search keys to them? Yes, I mean Google. Google already works as “Augmented Memory”. Ironically, I’m developing mobile augmented reality apps while using Google as augmented-reality memory, in a weirdly recursive loop…
Slashdot reports that Samsung showed a prototype of colored e-paper based on carbon nanotubes. The carbon nanotubes are used as the underlying electrodes; they are completely transparent and very thin. So we may be closer to the AR holy grail – working active-matrix contact lenses (Vinge’s “Rainbows End”, anyone?) – or at least good enough transparent see-through display glasses.