New Scientist reports that a new method of improving vision has been patented. The idea is to amplify the light hitting the retina of the vision impaired: nanoscale specks of semiconductor (quantum dots) are injected into the eye. They fluoresce when hit by photons, making the light that reaches the retinal cells brighter. Tests on rats showed that rats injected with quantum dots have more electrical activity in the retina. No word on whether an actual improvement in the rats' sight was observed.
While everyone is twittering about the new Vuzix Wrap 920AV glasses, it's not clear to me from the photo whether they have a camera or not.
The old Vuzix SightMate has a clearly visible camera.
I don't see anything like that on the new glasses. Vuzix promises "augmented reality features", but no camera means no AR. What would be the point of stylish video glasses with an ad-hoc attached camera?
I continue to test SURF with respect to scale space. A scale space is essentially a pyramid of progressively more blurred or lower-resolution images. The idea of scale-invariant feature detection is that a "real" feature should be present at several scales, that is, it should be clearly detectable at several image resolution/blur levels. The interesting thing I see is that for SURF, at least for the test images from Mikolajczyk's dataset, scale space doesn't seem to affect the detection rate under viewpoint change. I mean that it makes no difference whether a feature is distinct at several scales or only at one. That's actually reasonable: scale space obviously benefits detection in blurred or noisy images, and repeatability/correspondence in scaled images, while the "viewpoint" images from Mikolajczyk's dataset are clear, high-resolution and all at about the same scale. Nevertheless there is some possibility for optimization here.
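To make the pyramid idea concrete, here is a minimal blur-then-downsample sketch. It only illustrates the general scale-space concept; it is not SURF's actual implementation (SURF keeps the image at full size and grows its box filters instead), and all function names here are my own.

```python
# Minimal scale-space pyramid sketch: a grayscale image is a list of rows.
# Assumption: toy 3-tap box blur stands in for a proper Gaussian.

def blur3(row):
    """1D box blur with a 3-tap kernel, edges clamped."""
    n = len(row)
    return [(row[max(i - 1, 0)] + row[i] + row[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def blur(img):
    """Separable blur: filter rows, then columns."""
    rows = [blur3(r) for r in img]
    cols = [blur3(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def downsample(img):
    """Keep every second pixel in each dimension."""
    return [row[::2] for row in img[::2]]

def build_pyramid(img, octaves):
    """One progressively blurrier, smaller image per octave."""
    pyramid = [img]
    for _ in range(octaves - 1):
        img = downsample(blur(img))
        pyramid.append(img)
    return pyramid

# An 8x8 test image gives octaves of size 8, 4 and 2.
image = [[float((x + y) % 7) for x in range(8)] for y in range(8)]
pyramid = build_pyramid(image, 3)
print([len(level) for level in pyramid])  # [8, 4, 2]
```

A feature that is "distinct at several scales" would then produce a strong detector response at the same location in more than one of these octaves.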
Thanks to Blogmas I know that Eric Drexler has a blog.
What caught my attention was 3D imaging of biological nanostructures: electron tomography was used to reconstruct the 3D structure of biological, well, nanostructures.
I have tested several modifications of SURF: the original SURF Hessian, the extremum of a SURF-based Laplacian, Hessian-Laplace (extremum of both the Hessian and the Laplacian), and the minimal eigenvalue of the Hessian. They all give about the same detection rate, but the original SURF Hessian gives the best results. The minimal eigenvalue of the Hessian seems to scale better with the threshold value: the absolute value of the original Hessian response can be very low, while the eigenvalues are not. So this approach may have some advantage where precision loss is a concern, for example in fixed-point calculations. A lot of high-end mobile phones are still shipped without hardware floating point, so it could still be useful in AR or computer vision applications.
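The magnitude difference is easy to see on a 2x2 Hessian. A minimal sketch (my own variable names; SURF also weights the mixed derivative by about 0.9, which I omit here): the determinant is a product of second derivatives and can be tiny even when the entries are moderate, while the smaller eigenvalue stays on the scale of the entries themselves.

```python
import math

def hessian_det(dxx, dyy, dxy):
    """SURF-style response: determinant of the 2x2 Hessian
    [[dxx, dxy], [dxy, dyy]]. (SURF's ~0.9 weight on dxy omitted.)"""
    return dxx * dyy - dxy * dxy

def hessian_min_eig(dxx, dyy, dxy):
    """Smaller eigenvalue of the symmetric 2x2 Hessian."""
    trace_half = (dxx + dyy) / 2.0
    delta = math.sqrt(((dxx - dyy) / 2.0) ** 2 + dxy * dxy)
    return trace_half - delta

# A near-degenerate point: entries of order 0.1,
# but the determinant is of order 1e-3.
dxx, dyy, dxy = 0.11, 0.10, 0.10
print(hessian_det(dxx, dyy, dxy))      # 0.001  -- easily lost in fixed point
print(hessian_min_eig(dxx, dyy, dxy))  # ~0.0049 -- noticeably larger
```

In a fixed-point pipeline with, say, 8 fractional bits, a response of 0.001 quantizes to zero while the eigenvalue survives, which is the advantage mentioned above.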
I've started experimenting with markerless tracking. I've captured several cityscape image sequences and processed them with the SURF detector, using Nokia N95 viewfinder frames. Here the descriptors were oriented:
There are some corresponding features detected in both images, but their descriptors don't match.
Interestingly, upright (not oriented) descriptors give a slightly different picture: some new correspondences are found, some are lost.
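The matching step behind these pictures can be sketched as a nearest-neighbour search with a distance-ratio test (a common approach for this kind of descriptor matching; the toy 4-vectors below stand in for real 64-dimensional SURF descriptors, and all names are my own):

```python
import math

def dist(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(desc1, desc2, ratio=0.7):
    """For each descriptor in desc1, find its nearest neighbour in desc2,
    accepting the match only if it is clearly better than the second
    nearest (the distance-ratio test)."""
    matches = []
    for i, d1 in enumerate(desc1):
        dists = sorted((dist(d1, d2), j) for j, d2 in enumerate(desc2))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:
            matches.append((i, best[1]))
    return matches

# Toy example: descriptor 0 has one clear match, descriptor 1 is
# ambiguous between two candidates and is rejected.
a = [[1.0, 0.0, 0.0, 0.0], [0.5, 0.5, 0.0, 0.0]]
b = [[1.0, 0.1, 0.0, 0.0], [0.5, 0.55, 0.0, 0.0], [0.5, 0.45, 0.0, 0.0]]
print(match(a, b))  # [(0, 0)]
```

"Corresponding features whose descriptors don't fit" are exactly the pairs that fail this test: the detector fires at the same physical point, but the descriptor vectors end up too far apart.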
Slashdot, Engadget and others report that a research group in Kyoto succeeded in reading 10×10 pixel images from the visual cortex.
The visual cortex being a neural net, its encoding/hashing could be personal to each brain, but the researchers seem to have been able to use the same response dataset to reconstruct images for two different people.
AR Tower Defense is fullscreen now. I've also changed the default mode from portrait to landscape.
Symbian C++ development environment Carbide.c++ 2.0 from Nokia
The fullscreen version of AR Tower Defense is moving along nicely. I have changed the default screen orientation from portrait to landscape and don't see any noticeable slowdown now. I'm thinking about increasing the number of markers from 6 to 9, but I'm not sure. Too many markers would clog the table…
PS: It's here already.