Very interesting article in Wired: Recipe for Disaster: The Formula That Killed Wall Street. I’m not a statistician, but I’ll try to explain it. The gist of the article is that at the heart of the current financial crisis is David X. Li’s formula, which uses a “Gaussian copula function” for risk estimation. The idea of the formula is that the joint probability of two random events can be estimated with a simple expression that uses only the probability distribution of each event, as if they were independent, plus a single parameter – the statistical correlation. So instead of looking into the relationships and connections between events, bankers just calculated one statistical parameter and used it for risk estimation. Even more – they applied the same formula to the results of those relatively simple calculations and built pyramids of estimations, each step applying the same simple formula to the results of the previous one. As a result, extremely complex behavior was reduced to a simple linear model which had little in common with reality.
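Here is a minimal sketch of that idea (not Li’s actual pricing model, just the core trick): map each marginal probability to a standard-normal quantile, then read the joint probability off a bivariate normal with one correlation parameter.

```python
from scipy.stats import norm, multivariate_normal

def joint_probability(p_a, p_b, rho):
    # Gaussian copula in miniature: push each marginal probability
    # through the inverse normal CDF, then evaluate the bivariate
    # normal CDF with correlation rho.
    x = norm.ppf(p_a)
    y = norm.ppf(p_b)
    cov = [[1.0, rho], [rho, 1.0]]
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([x, y])

# With zero correlation the events are treated as independent:
print(joint_probability(0.1, 0.2, 0.0))  # ~0.02, i.e. 0.1 * 0.2
# Positive correlation raises the chance both events happen together:
print(joint_probability(0.1, 0.2, 0.8))
```

Whatever the real dependence between the two events looks like, this function only ever sees the single number `rho` – which is exactly the problem.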
And now – an illustration from the wiki of what exactly this single parameter, correlation, is:
Here are several two-variable distributions and their correlation coefficients. For linear relationships (middle row), correlation captures the dependence of the variables perfectly. For the upper row – normal distributions – it captures the essence of the dependency: knowing one variable and the correlation, we can say something about the other. For the complex shapes in the lower row, the correlation is zero in every case. As far as correlation is concerned, each of the lower shapes looks like the upper central shape (the fuzzy ball). Correlation captures nil information about how one variable depends on another for the lower shapes; it can represent any shape only as a fuzzy ellipse. Li’s formula reduces dimensionality. The thing is, dimensionality is a topological property, and you don’t mess with topological properties easily. Imagine bankers using a fuzzy ball instead of a ring for risk estimation…
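The ring case is easy to check numerically: sample points on a circle, where y obviously depends on x, and the correlation coefficient still comes out near zero.

```python
import numpy as np

# Points on a ring: y is completely determined by x (up to sign),
# yet the linear correlation between them is essentially zero.
theta = np.random.default_rng(0).uniform(0.0, 2.0 * np.pi, 100_000)
x, y = np.cos(theta), np.sin(theta)
print(np.corrcoef(x, y)[0, 1])  # close to 0
```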
Now to image processing. Most feature detection in image processing is done on grayscale images. The original image is usually RGB, but before feature extraction it is converted to grayscale.
Since the original image is colored anyway, why not use the colors for feature detection? For example, detect features in each color channel separately?
The thing is, the pictures in each color channel are very similar.
Extracting blobs in each channel will in most cases triple the work without gaining significant new information – all the channels will give roughly the same blobs.
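This is easy to demonstrate on synthetic data (a stand-in for a real photo, which I assume here for illustration): a shared luminance pattern plus a small per-channel tint gives channels that are almost perfectly correlated.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical photo model: one shared luminance layer plus a little
# independent per-channel variation, as in most natural images.
luma = rng.random((64, 64))
r = np.clip(luma + 0.05 * rng.standard_normal(luma.shape), 0.0, 1.0)
g = np.clip(luma + 0.05 * rng.standard_normal(luma.shape), 0.0, 1.0)
b = np.clip(luma + 0.05 * rng.standard_normal(luma.shape), 0.0, 1.0)

# The channels carry nearly the same structure:
print(np.corrcoef(r.ravel(), g.ravel())[0, 1])  # typically > 0.9
```

A blob detector run on `r`, `g` and `b` separately would mostly find the same blobs three times.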
Nevertheless it’s obvious there is some nontrivial information about the image encoded in its colors.
Why doesn’t blob detection on each color channel give access to it?
The reason is the same as for the current financial crisis – dimensionality. By treating each color channel separately we replace the five-dimensional RGB+coordinates space with three three-dimensional color+coordinates spaces. The relationships between the color channels are lost. The topology of the color structure is lost.
To actually use the color information, the statistical relationships between the colors of the image should be explored – something like a three-dimensional histogram of color bins, essentially converting the image from RGB to indexed color.
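A sketch of such a joint histogram (bin count chosen arbitrarily here): unlike three separate one-dimensional histograms, it records which R, G and B values actually co-occur, preserving the relationships between channels.

```python
import numpy as np

def color_histogram(image, bins=8):
    # Joint 3-D histogram over (R, G, B). Each non-empty bin is in
    # effect one entry of an indexed-color palette.
    pixels = image.reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=bins, range=[(0, 256)] * 3)
    return hist

img = np.random.default_rng(2).integers(0, 256, (32, 32, 3))
h = color_histogram(img)
print(h.shape, h.sum())  # (8, 8, 8) 1024.0 -- one count per pixel
```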
Returning to my old post about DSi and AR: it’s known now that the main DSi CPU runs at 133 MHz. That is marginally acceptable for marker-based AR games. What kind of performance can be achieved on DSi?
Here is my old Nokia 6600 demo, which ran on a phone with a 107 MHz CPU. With the DSi’s 133 MHz CPU plus a second 33 MHz CPU, I’d estimate the same game could run on DSi at 8-10 fps. That’s not a stellar frame rate, but it is playable. It could be faster with some aggressive optimization or simplifications.
Nokia announces Ovi Application store
for Flash, Java applications and other(?) content. Symbian apps are not mentioned explicitly, but presumably they will be available too. The developer or content provider will get a 70% revenue share. Not much is known about the store and its policies yet. On the publisher site there is a form for e-mail and content submission for Nokia’s consideration. No online registration for publishers is available yet.
PS. Symbian applications will be accepted – confirmed by Nokia.
No self-signed applications will be allowed in the store.
As I have already written in this blog, the main factor slowing mobile AR development is battery life. A faster CPU requires more energy and drains the device battery very fast.
One way to work around the problem is to use a different, task-specific architecture for CPU units/coprocessors, and thus get more processing power for the same power consumption.
Another is to get more energy. That’s what Samsung did with the “Blue Earth” phone. The phone has a full solar panel on its back.
I was intrigued by reports of ultra-efficient chips based on probabilistic logic – PCMOS. After some googling I found this pdf, which clears up the subject somewhat. It seems probabilistic logic doesn’t really enter into the equation. Instead, this architecture pairs a normal, deterministic CPU with a probabilistic coprocessor. The coprocessor uses noise as the source for a random number generator (essentially an analog random number generator), which can feed various Monte-Carlo algorithms: random neural networks, probabilistic cellular automata and the like. It seems to me the gain could be achieved only for specific applications that use random number generators. In this respect PCMOS is no different from a GPU, DSP or other task-specific accelerators.
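As a toy example of the Monte-Carlo class of workloads such a coprocessor could accelerate, here is the classic π estimate; on PCMOS hardware the random numbers would come from the analog noise source rather than a software PRNG, which is where the claimed efficiency gain lives.

```python
import random

def estimate_pi(samples=100_000):
    # Draw random points in the unit square and count those inside
    # the quarter circle; the ratio approaches pi/4.
    inside = sum(
        random.random() ** 2 + random.random() ** 2 <= 1.0
        for _ in range(samples)
    )
    return 4.0 * inside / samples

random.seed(0)
print(estimate_pi())  # roughly 3.14
```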
A new version of AR Tower Defense – v0.03. Some bugs fixed (the black screen bug) and minor tracking improvements.
Engadget reports Nokia may open a software portal for its Symbian OS applications, with a formal announcement to come at the Mobile World Congress. Rumors about a Nokia Symbian application store were floating around the Nokia developer forum a couple of months ago. Actually, I have also written about it in this blog, in the post “What iPhone can teach Nokia” :)
Everyone is talking about what a near-future AR device should look like, so I’d like to as well.
First possibility – videoglasses with camera + lightweight PC.
A dedicated wearable PC is out IMO – it’s too hardcore a setup.
The next closest thing is a netbook. A netbook could be used both in its main capacity and as an AR platform. The problem here, however, is the weight. Somehow I don’t think the average user would want to carry a 1 kg netbook around in a backpack while using AR, and anything lighter wouldn’t have enough processing power to track mid-resolution stereo cameras in real time. Nevertheless, the weight could be reduced.
Make the display and keyboard detachable, use a carbon fiber case and easily replaceable dual (or maybe triple) batteries. If all this brought the weight below 400 g while keeping the CPU above 1.5 GHz, with 2.5 hours of full-load battery life (5 hours with dual batteries), such a netbook could be a viable platform for AR with videoglasses.
Second possibility – handheld/smartphone. Here we have severe limitations on the battery/CPU/GPU. That’s why I don’t think a high-resolution display would be beneficial for such an AR device. Processing high-resolution images requires a lot of CPU power, and at the small size of the display, hi-res wouldn’t look much better than low/mid res. 320×240 is good enough; 400×320 is probably optimal. For the same reason a 1-megapixel camera is enough too, but it should be fast – preferably 60 fps – with a high-quality sensor, no distortion, and good low-light performance. I’m not sure about auto-focus; slow auto-focus could be a problem. Accelerometers and a compass would be good. GPS is a must. CPU – 600 MHz at least, with hardware floating point. A lightweight GPU. The most important thing is the API: complete access to the Image Processor (if present), the Digital Signal Processor and raw camera data. That kind of API is not easily accessible on most modern smartphones.
Actually, if existing smartphones had such an API opened right now, there would already be a breakthrough in mobile AR. Access to the DSP could make image processing a lot faster.
Now, would any such device allow real-time AR like in the Coca-Cola avatar ad? Definitely not.
The bottleneck here is the battery, IMO. We have to wait for some future tech like carbon nanotube supercapacitors to see full-coverage AR.