## Randomness: our brains deceive us

Here are two point distributions: one is random, and one is not:

Which is which?

The thing is, the left image is not random, and the right one is.

Sean Carroll of Cosmic Variance writes:

“Humans are not very good at generating random sequences; when asked to come up with a “random” sequence of coin flips from their heads, they inevitably include too few long strings of the same outcome. In other words, they think that randomness looks a lot more uniform and structureless than it really does. The flip side is that, when things really are random, they see patterns that aren’t really there. It might be in coin flips or distributions of points, or it might involve the Virgin Mary on a grilled cheese sandwich, or the insistence on assigning blame for random unfortunate events.”
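Carroll's point about "too few long strings of the same outcome" is easy to check numerically. A small simulation (illustrative only, not from the original post) measures the typical longest streak in 100 fair coin flips:

```python
import random

def longest_run(flips):
    """Length of the longest streak of identical consecutive outcomes."""
    best = run = 1
    for a, b in zip(flips, flips[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

random.seed(0)
# Average longest streak over 1000 sequences of 100 fair flips.
runs = [longest_run([random.randint(0, 1) for _ in range(100)])
        for _ in range(1000)]
avg = sum(runs) / len(runs)
```

The average comes out at roughly seven in a row, while people asked to fake a random sequence of that length rarely write streaks longer than three or four.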

## Sorry, no warp drive

The Star Trek-esque warp drive – the Alcubierre drive – was a mathematical curiosity in general relativity which allowed faster-than-light travel inside a bubble of warped space-time. Of course it had some problems: the bubble of space-time could be created only if some matter was already moving faster than light, it required exotic matter, and it required three solar masses to transport a single atom. Now it looks like quantum mechanics has finally put it out of its misery. Exploring the mechanism of creation of a warp bubble out of flat space-time using a semiclassical approach, a Spanish-Italian team showed that the energy at the front edge of the warp bubble would grow exponentially with time, which means the warp drive would be unstable.

Via Slashdot

## Algebra and geometry

Something I’ve picked up at The n-Category Café. Algebra and geometry are analogous to syntax and semantics, with syntax corresponding to algebra and semantics to geometry. This broad statement has a precise meaning, which can be expressed as a duality between Boolean algebras and specific topological spaces, and which is used in the study of formal semantics of computer languages.
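The duality in question is presumably Stone duality (my reading – the post doesn't name it): every Boolean algebra arises as the algebra of clopen sets of a compact, totally disconnected Hausdorff space, and the correspondence is a contravariant equivalence of categories:

```latex
% Stone duality: Boolean algebras vs. Stone spaces
\mathbf{Bool}^{\mathrm{op}} \;\simeq\; \mathbf{Stone},
\qquad
B \;\mapsto\; \mathrm{Ult}(B) \ \text{(the space of ultrafilters of } B\text{)},
\qquad
X \;\mapsto\; \mathrm{Clop}(X) \ \text{(the algebra of clopen subsets of } X\text{)}
```

The "syntax" side (algebraic/logical operations) and the "geometry" side (points and topology) determine each other completely.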

## Mobile OS for Augmented Reality

Which platform suits mobile AR better? Each has its pluses and minuses. I’m trying to make an overall estimation, not only from a prototype-development point of view.

1. iPhone

+ beautiful phone

+! no platform fragmentation

+ application store

+ growing market share

+ 3d accelerator, GPS, accelerometer

+ active developer community

-!! No official camera API for now; direct access to the camera requires undocumented APIs

– slow camera on the existing model (better in the next model?)

– CPU underclocked to 412MHz on the existing model (better in the next model?)

2. Android

+ Open source

+ good CPU in the existing model (528MHz for the G1)

+ 3d accelerator, GPS, accelerometer in the existing model

+ active developer community

+ application store

+ a completely open phone model available for developers

-! officially Java only (10–100× slower than native code for numerical tasks); installing native-code apps requires a hack on the consumer model

– low market penetration for now (will it improve?)

3. Symbian

+! Big market share

+ some models have a good CPU (up to 600MHz)

+ some models have fast camera

+ some models have 3d accelerator, GPS, accelerometer and even electronic compass

+ application store coming soon for Nokia models

+ will be open source soon

+ situation with Symbian Signed may improve in the future.

-! platform fragmentation; different OS versions are only partially compatible

– Symbian Signed prevents self-signed applications from accessing GPS/accelerometer on early versions (S60 FP3)

-! for signed apps, each binary version must be paid for and signed separately, and requires an expensive Publisher ID

– no self-signed applications allowed in the app store

– steep learning curve

– market share is shrinking now, eaten away by the iPhone

4. WinMobile

Not many specific pluses or minuses.

– Small market share

5. Other flavors of Linux – the situation is not clear yet.

## GE “Smart Grid” conspiracy unveiled :)

The GE “Plug Into the Smart Grid” conspiracy was unveiled by xkcd:

## From financial crisis to image processing: Ignore Topology At Your Own Risk.

Very interesting article in Wired: Recipe for Disaster: The Formula That Killed Wall Street. I’m not a statistician, but I’ll try to explain it. The gist of the article is that at the heart of the current financial crisis is David X. Li’s formula, which uses a “Gaussian copula function” for risk estimation. The idea of the formula is that the joint probability of two random events can be estimated with a simple formula which uses only the probability distribution of each event, as if they were independent, plus a single parameter – the statistical correlation. So instead of looking into the relationships and connections between events, bankers just calculated one single statistical parameter and used it for risk estimation. Even more, they applied the same formula to the results of those relatively simple calculations and built pyramids of estimations, at each step applying the same simple formula to the results of the previous step. As a result, extremely complex behavior was reduced to a simple linear model which had little in common with reality.
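The mechanics of "one parameter summarizes the whole dependence" can be sketched in a few lines. This is a minimal, illustrative Gaussian-copula sampler (the function name and interface are my own, not from the article): the only thing tying the two variables together is the correlation `rho` of the underlying Gaussians – every other detail of the joint behavior is discarded.

```python
import numpy as np
from statistics import NormalDist  # stdlib normal CDF

def gaussian_copula_samples(n, rho, inv_cdf_x, inv_cdf_y, seed=0):
    """Draw (X, Y) with arbitrary marginals whose dependence is described
    by a single number: the correlation rho of the latent Gaussians."""
    rng = np.random.default_rng(seed)
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
    nd = NormalDist()
    u = np.array([nd.cdf(v) for v in z1])  # map to uniform marginals
    v = np.array([nd.cdf(w) for w in z2])
    return inv_cdf_x(u), inv_cdf_y(v)     # map to the desired marginals

# Example: two exponential marginals tied together only by rho.
inv_exp = lambda u: -np.log(1.0 - u)      # inverse CDF of Exp(1)
x, y = gaussian_copula_samples(5000, 0.8, inv_exp, inv_exp, seed=1)
```

Whatever the real dependence between two default events looks like, this construction can only ever reproduce the "fuzzy ellipse" family of joint shapes.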

And now, an illustration from the wiki of what exactly this single parameter – correlation – is:

Here are several two-variable distributions and their correlation coefficients. It can be seen that for linear relationships (middle row) correlation captures the dependence of the variables perfectly. For the upper row – normal distributions – it captures the essence of the dependency: knowing one variable and the correlation, we can say something about the other variable. But for the complex shapes in the lower row, the correlation is zero for each. Every one of the lower shapes would be represented the same way as the upper central shape (a fuzzy ball) with zero correlation. Correlation captures no information about how one variable depends on another for those shapes; it can represent any shape only as a fuzzy ellipse. Li’s formula reduces dimensionality. The thing is, dimensionality is a topological property, and you don’t mess with topological properties easily. Imagine bankers using a fuzzy ball instead of a ring for risk estimation…

Now to image processing. Most feature detection in image processing is done on a grayscale image. The original image is usually RGB, but before feature extraction it is converted to grayscale.

However, since the original image is colored, why not use color for feature detection? For example, detect features in each color channel separately?

The thing is, the pictures in each color channel are very similar.

Extracting blobs in each channel will in most cases triple the work without gaining significant new information – all the channels will give about the same blobs.

Nevertheless, there obviously is some nontrivial information about the image encoded in its colors.

Why doesn’t blob detection per color channel give access to it?

The reason is the same as for the current financial crisis – dimensionality. By treating each color channel separately we replace the five-dimensional RGB+coordinates space with three three-dimensional color+coordinates spaces. The relationships between the color channels are lost. The topology of the color structure is lost.

To actually use the color information, the statistical relationships between the colors of the image should be explored – something like a three-dimensional histogram of color bins, essentially converting the image from RGB to indexed color.
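A minimal sketch of such a joint histogram (my illustration; the function name and bin count are arbitrary): instead of three separate per-channel histograms, each pixel votes into one cell of an R×G×B cube, so combinations of channel values – the inter-channel relationships – are preserved.

```python
import numpy as np

def color_histogram(img, bins=8):
    """3-D joint histogram over (R, G, B) for an HxWx3 uint8 image.
    Keeps inter-channel structure that three 1-D histograms would lose."""
    # Quantize each channel from 0..255 down to 0..bins-1.
    q = (img.astype(np.uint16) * bins // 256).reshape(-1, 3)
    # Combine the three quantized channels into a single joint bin index.
    idx = (q[:, 0] * bins + q[:, 1]) * bins + q[:, 2]
    return np.bincount(idx, minlength=bins**3).reshape(bins, bins, bins)
```

The non-empty cells of this cube are essentially the palette of an indexed-color version of the image.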

## Nokia announces Ovi Application store

Nokia announces Ovi Application store

for Flash, Java applications and other(?) content. Symbian apps are not mentioned explicitly, but presumably they will be available too. Developers and content providers will get a 70% revenue share. Not much is known about the store and its policies yet. On the publisher site there is a form for e-mail and content submission for Nokia’s consideration. No online registration for publishers is available yet.

PS. Symbian applications will be accepted – confirmed by Nokia.

No self-signed applications are allowed in the store.

## “Probabilistic” CMOS

I was intrigued by reports of ultra-efficient chips based on probabilistic logic – PCMOS. After some googling I found this pdf, which clears up the subject somewhat. It seems probabilistic logic doesn’t really enter the picture. Instead, this architecture suggests a normal, deterministic CPU with a probabilistic coprocessor. The coprocessor uses noise as the source for a random number generator (essentially an analog random number generator), and this generator can be used in various Monte Carlo algorithms – random neural networks, probabilistic cellular automata and the like. It seems to me the gain can be achieved only for specific applications which use random number generators. In this, PCMOS is no different from GPUs, DSPs and other task-specific accelerators.
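For a sense of what such a coprocessor would accelerate, here is the textbook Monte Carlo example (my illustration, not from the PCMOS paper) – estimating π by random sampling. The only "hardware" this workload really demands is a fast stream of random numbers, which is exactly what an analog noise source provides:

```python
import random

def mc_pi(n, rng=random.random):
    """Monte Carlo estimate of pi: fraction of random points in the
    unit square that land inside the quarter circle, times 4."""
    inside = sum(1 for _ in range(n) if rng()**2 + rng()**2 <= 1.0)
    return 4.0 * inside / n

random.seed(0)
estimate = mc_pi(100_000)
```

Swap `rng` for a hardware noise source and the deterministic CPU is left doing only the trivial counting.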

## Markerless tracking with FAST

Testing outdoor markerless tracking with a FAST/SURF feature detector.

The plane of the camera is not parallel to the ground, which makes it difficult for the eye to estimate precision.
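For readers unfamiliar with FAST: it is a segment-test corner detector. A pixel is a corner if, on a 16-pixel Bresenham circle of radius 3 around it, there is an arc of at least 9 contiguous pixels all brighter (or all darker) than the center by a threshold. This is a naive per-pixel sketch of that test, not the optimized decision-tree implementation used in real trackers:

```python
import numpy as np

# 16 offsets (dr, dc) on a Bresenham circle of radius 3, in circular order.
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
          (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def is_fast_corner(img, r, c, t=20, n=9):
    """FAST segment test: True if >= n contiguous circle pixels are all
    brighter than img[r,c]+t or all darker than img[r,c]-t."""
    p = int(img[r, c])
    vals = np.array([int(img[r + dr, c + dc]) for dr, dc in CIRCLE])
    for mask in (vals > p + t, vals < p - t):
        doubled = np.concatenate([mask, mask])  # handle wrap-around arcs
        run = best = 0
        for hit in doubled:
            run = run + 1 if hit else 0
            best = max(best, run)
        if min(best, 16) >= n:
            return True
    return False

# Synthetic check: a bright square on a dark background.
img = np.zeros((20, 20), dtype=np.uint8)
img[8:, 8:] = 255
```

At the square's corner the dark arc covers 11 circle pixels, so the test fires; on a straight edge the dark arc is only 7 pixels long, so edges are rejected.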

## Polynesian stick charts were mapping wave patterns

Polynesian stick charts were a completely different way of navigation: they mapped not only locations, but also ocean swells and wave patterns.

The specific map encoding was a closely guarded secret, known only to the group of navigators who owned them.

Navigating by the wave pattern, the navigator “would crouch in the bow of his canoe and literally feel every motion of the vessel.” They “concentrated on refraction of swells as they came in contact with undersea slopes of islands and the bending of swells around islands as they interacted with swells coming from opposite directions.”

Fascinating stuff – the kind of technology which could have been developed by aliens, or in an alternate history line.