Mirror Image

Mostly AR and Stuff

ConvNet for windows

I have seen an excellent walkthrough on building Alex Krizhevsky’s cuda-convnet for Windows, but differences in configuration and installed packages can make that tiresome. So here is a complete build of cuda-convnet for 64-bit Windows:

https://github.com/s271/win_convnet

It requires a GPU with CUDA compute capability 2.0 or better (of course), 64-bit Windows, Visual Studio with the 64-bit toolset and 64-bit Python with NumPy installed. The rest of the libs and DLLs are precompiled. In building it I followed the instructions by Yalong Bai (Wyvernbai) from http://www.asiteof.me/archives/50.

Read Readme.md before installation – you may (or may not) need the PYTHONPATH environment variable set.
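
Since mixing 32-bit and 64-bit pieces is the most common source of trouble, a quick sanity check before building doesn’t hurt. This is only an illustrative sketch – the build directory below is an assumed example path, not something the repo prescribes:

import struct, sys
import numpy

# the interpreter itself must be 64-bit (should print 64), and NumPy must import
print("Python is", 8 * struct.calcsize("P"), "bit")
print("NumPy", numpy.__version__)

# example only -- point this at wherever you built/installed win_convnet
build_dir = r"C:\dev\win_convnet"
print("on sys.path:", any(build_dir.lower() in p.lower() for p in sys.path))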

On a side note, I’ve used WinPython both for the libraries and for running the package. WinPython seems a very nice distribution, and it includes the Spyder IDE. I have had some problems with Spyder though – sometimes it behaves erratically while debugging/running the code. Could be my inexperience with Spyder though. Another nice distribution – Python(x,y) – regretfully cannot be used: it has only a 32-bit version and will not compile with or run 64-bit modules.

24, October, 2013 Posted by | Coding, Uncategorized | Comments Off on ConvNet for windows

CUDA gamification experiment

This demo was inspired by the Escapist article “I hate magic” by Robert Rath ( http://www.escapistmagazine.com/artic… ) and the “continuous game of life” (http://arxiv.org/abs/1111.1567 ) by Stephan Rafler.
It is an attempt to show how GPGPU could be used to model magic as a different kind of physics.
The demo runs at 70 fps on a laptop with a GeForce GTX 670M. With some tradeoffs it could be made to run much faster.
It requires a CUDA GPU with compute capability 2.0 or higher (practically all modern cards, GeForce GTX 550 or better).
Source code:
https://github.com/s271/cuMagic/

29, September, 2013 Posted by | Coding, Demo, Games, GPGPU | Comments Off on CUDA gamification experiment

__volatile__ in #cuda reduce sample

“Reduce” is one of the most useful samples in the NVIDIA CUDA SDK. It’s an implementation of a highly optimized CUDA algorithm for the sum of the elements of an array of arbitrary length. It’s hardly possible to make anything better and still generic enough on the existing GPGPU architecture (if anyone knows something as generic but considerably more efficient, I’d like to know too). One of the big pluses of the reduce algorithm is that it works for any commutative, associative binary operation – like min, max, multiply etc. And the NVIDIA sample provides this ability – it’s implemented as a reduce over a template class, so all one has to do is implement a class that overloads the addition and assignment operators.

However there is one obstacle – the volatile qualifier in the code (the sample reads its shared memory through a volatile pointer in the warp-synchronous part). A simple overload of the “=”, “+=” and “+” operators in a class LSum causes a compiler error like

error: no operator “+” matches these operands
1> operand types are: LSum + volatile LSum

The answer is to add volatile to all the class operators, but there is a trick here:

For “=”, just

volatile LSum& operator=(volatile LSum &rhs)

is not enough. You should add volatile at the end too, to mark not only the input and output but the member function itself as volatile.

In the end the class looks like:

class LSum
{
public:
    // data members and the actual accumulation are elided here,
    // as in the original post – what matters is where volatile goes

    __device__ LSum& operator+=(volatile LSum &rhs)
    {
        return *this;
    }

    __device__ LSum operator+(volatile LSum &rhs)
    {
        LSum res = *this;
        res += rhs;
        return res;
    }

    __device__ LSum& operator=(const float &rhs)
    {
        return *this;
    }

    // the trick: the assignment used on volatile shared memory is itself volatile
    __device__ volatile LSum& operator=(volatile LSum &rhs) volatile
    {
        return *this;
    }
};

11, May, 2013 Posted by | Uncategorized | Comments Off on __volatile__ in #cuda reduce sample

Split-Bregman for total variation: parameter choice

The split Bregman method is one of the most used methods for solving convex optimization problems with non-smooth regularization. Mostly it is used for L_1 regularization, and here I want to talk about TV-L1 and similar regularizers (like \|\Psi u\|_1 ).
For the simplest possible one-dimensional denoising case
min_{u} \quad (u-f)^2 + \mu\|\frac{\partial u}{\partial x}\|_1 (1)
the split Bregman iteration looks like
min_{u} \quad (u-f)^2 + \mu\lambda(\frac{\partial u}{\partial x}+b-d)^2 (2)
d = Shrink(\frac{\partial u}{\partial x} + b, \frac{1}{\lambda}) (3)
b = b + \frac{\partial u}{\partial x} - d (4)
There is a lot of literature about choosing the \mu parameter from (1) for denoising. Methods include the Morozov discrepancy principle, the L-curve and more – it is just the parameter of Tikhonov-style regularization.
There are far fewer methods for choosing \lambda for split Bregman.
Theoretically \lambda shouldn’t be very important – if split Bregman converges, the solution does not depend on \lambda. In practice, of course, convergence speed and stability depend on \lambda a lot. Here I’ll point out some simple considerations which may help to choose \lambda.
Assume for simplicity that u is twice differentiable almost everywhere; then the optimality condition of (2) becomes
Lu = u-f - \mu\lambda(\frac{\partial^2 u}{\partial x^2} + \frac{\partial b}{\partial x} - \frac{\partial d}{\partial x}) = 0 (5)
We know that if split Bregman converges, d \to \frac{\partial u}{\partial x},
which means \frac{\partial b}{\partial x} \to \frac{1}{\lambda}\frac{\partial}{\partial x} sign(\frac{\partial u}{\partial x}), where the outer derivative is understood as a weak derivative.
At the solution b is therefore locally constant, and we even know which constant it is.
Recalling
Shrink(x, \frac{1}{\lambda}) = \begin{cases} 0, & |x| \leq \frac{1}{\lambda} \\ x - sign(x)\frac{1}{\lambda}, & |x| > \frac{1}{\lambda} \end{cases}
For |x| > \frac{1}{\lambda} we have
d = \frac{\partial u}{\partial x} + b \pm \frac{1}{\lambda}
and accordingly
b = \mp  \frac{1}{\lambda}
For the converged solution
b(x) = \frac{1}{\lambda}sign(\frac{\partial u}{\partial x})
everywhere where \frac{\partial u}{\partial x} is not zero, with the possible exception of a measure-zero set.
Now back to the choice of \lambda for the split Bregman iterations.
From (4) we see that for small values of b, b grows with step \frac{\partial u}{\partial x} until it reaches its maximum absolute value \pm\frac{1}{\lambda}.
Also, if b is “wrong” and we want it to switch value in one iteration, |\frac{\partial u}{\partial x}| should be more than \frac{2}{\lambda}.
That gives us a lower bound for \lambda:
for most x, |\frac{\partial u}{\partial x}| \geq \frac{2}{\lambda}.
What happens if on some interval b reaches \mp\frac{1}{\lambda} for two consecutive iterations?
Then on the next iteration, from (5) and (3), on that interval
u_k-f - \mu\lambda(\frac{\partial^2 u_k}{\partial x^2}-\frac{\partial^2 u_{k-1}}{\partial x^2} ) = 0
or
u_k = \arg\min_{u} \quad (u-f)^2 + \mu\lambda(\frac{\partial u}{\partial x} - \frac{\partial u_{k-1}}{\partial x})^2
We see that with \mu\lambda big enough the sign of \frac{\partial u_k}{\partial x} stabilizes with high probability, and we could be close to the true solution. The “could” in the last sentence is there because we are inside the interval, and if the values of u at the ends of the interval are wrong, the b “flipping” can propagate into our interval.
It’s more difficult to estimate an upper bound for \lambda.
Obviously, for (2) or (5) to be easy to solve, \lambda shouldn’t be too big, so that the eigenvalues of the operator L wouldn’t be too close to 1. Because the solutions of (2) are inexact in split Bregman, we obviously want L to have bigger eigenvalues, so that a single iteration (or a small number of iterations) can suppress the error well enough for the (2) subproblem.
So in conclusion (and in my limited experience): if you can estimate the average |\frac{\partial u}{\partial x}| of the solution, \lambda = \frac{2}{avg|\frac{\partial u}{\partial x}|} could be a good value to start testing with.
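
To make that recipe concrete, here is a minimal NumPy sketch of 1D split Bregman TV denoising following equations (2)–(4), with \lambda initialized by the 2/avg rule. Using the gradient of the noisy signal to estimate avg|\frac{\partial u}{\partial x}| is my own shortcut (the true solution isn’t known in advance), so treat it as an illustration, not a reference implementation:

import numpy as np

def shrink(x, t):
    # Shrink(x, t) = sign(x) * max(|x| - t, 0)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tv_denoise_1d(f, mu, n_iter=100):
    n = len(f)
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)        # forward differences
    lam = 2.0 / (np.mean(np.abs(D @ f)) + 1e-12)        # lambda = 2 / avg|du/dx|
    A = np.eye(n) + mu * lam * D.T @ D                  # matrix of subproblem (2)
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    u = f.copy()
    for _ in range(n_iter):
        u = np.linalg.solve(A, f + mu * lam * D.T @ (d - b))   # (2)
        Du = D @ u
        d = shrink(Du + b, 1.0 / lam)                          # (3)
        b = b + Du - d                                         # (4)
    return u, lam

# toy example: noisy step signal
rng = np.random.default_rng(0)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
u, lam = tv_denoise_1d(f, mu=0.5)
print("lambda used:", lam)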

2, January, 2013 Posted by | computer vision | 2 Comments

Algebraic topology – now in compressed sensing

Thanks to Igor Carron for pointing me to this workshop – Algebraic Topology and Machine Learning. There is a very interesting paper there, Persistent Homological Structures in Compressed Sensing and Sparse Likelihood by Moo K. Chung, Hyekyung Lee and Matthew Arnold. The paper is very comprehensive and requires only a minimal understanding of some algebraic topology concepts (which is exactly where I am in relation to algebraic topology). Basically it’s an application of topological data analysis to compressed sensing. They use such things as persistent homology and “barcodes”. Previously, persistent homology and barcodes were used for things like extracting a solid structure from a noisy point cloud. A barcode is a noise-stable record of how some topological invariants depend on some parameter. In the point cloud case the parameter is the radius of the ball around each point. As the radius goes from very big to zero, the topology of the union of balls changes, and those changes of topology make up the barcode. Because the barcode is a stable topological invariant, learning the barcode is the same as learning the topology of the solid structure underlying the point cloud.
In the paper the authors use graphical lasso (glasso) with an L_1 regularizer to find the interdependency between a set of sampled variables. However, if one considers the \lambda parameter of the regularizer as a kind of ball radius, this problem acquires a persistent homology and a barcode. The correlation matrix is thresholded by \lambda and becomes the adjacency matrix of some graph. The barcode is now the dependence of the topology of that graph on the parameter \lambda. What is especially spectacular is that no glasso iterations are needed to calculate the barcode – it is obtained by simple thresholding of the correlation matrix. Thus the barcode is easily found, and with it the topology of the correlations of the variables. Well, at least that is how I understood the paper.
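
A toy sketch of that thresholding idea, as I understand it – the data here is random, and using the number of connected components (the 0-th Betti number) as the simplest barcode is my own illustration, not the paper’s code:

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))          # 200 samples of 20 variables
C = np.abs(np.corrcoef(X, rowvar=False))    # absolute correlation matrix

# sweep the threshold down and record a crude "barcode":
# the number of connected components of the thresholded graph
barcode = []
for lam in np.linspace(1.0, 0.0, 51):
    A = (C >= lam).astype(int)              # adjacency matrix at threshold lam
    np.fill_diagonal(A, 0)
    n_comp, _ = connected_components(csr_matrix(A), directed=False)
    barcode.append((lam, n_comp))

for lam, n_comp in barcode[::10]:
    print(f"lambda = {lam:.2f}  components = {n_comp}")
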
PS: Using this approach for total variation denoising, the barcode would include the dependence of the size function on the smoothing parameter.

11, December, 2012 Posted by | Uncategorized | Comments Off on Algebraic topology – now in compressed sensing

Deleting files which couldn’t be deleted by Administrator in Win 7

If you are an unfortunate person who has to use Windows 7 a lot (like me), undeletable files happen to you from time to time. They could be the result of a disk failure, or an interrupted boot, or appear for no reason at all.
The problem is, sometimes they can’t be deleted even by Administrator. To fix this, Administrator has to “take ownership” of the file in question.
It goes like this:
1. run from the command line
chkdsk /f
This step is necessary because the file permissions could be corrupted.
2. login as Administrator.
3. from the command line
takeown /f full_directory_and_path_name
4. from the command line
icacls full_directory_and_path_name /GRANT ADMINISTRATORS:F
5. Now Administrator can delete the file. If even that does not help, you can try deleting it from Safe Mode.

1, March, 2012 Posted by | Coding | Comments Off on Deleting files which couldn’t be deleted by Administrator in Win 7

Total Variation in Image Processing and classical Action

This post is inspired by “Extremal Principles in Classical, Statistical and Quantum Mechanics” on the Azimuth blog.
Total variation is used a lot in image processing: image denoising, optical flow, depth map processing. The standard form of total variation for the L_2 norm is minimizing an “energy” of the form
\int_\Omega f(u(x)) + (\nabla u(x))^2 dx
(I’m talking about Total Variation-L_2 for now, not L_1) over all functions u:\ \Omega \rightarrow R^n
In case of image denoising it would be
\int_\Omega (v(x)-u(x))^2 + (\mu/2)(\nabla u(x))^2 dx
where v is the original image and u is the denoised image.
The part f(u(x)) is called the “fidelity term” and (\nabla u(x))^2 is the “regularizer”.
The regularizer provides smoothness of the solution, and the fidelity term forces the smooth solution to resemble the original image (in the image denoising case).
Now if we turn to the classical action, the motion of a point is defined by the minimum of the functional
A(u) = \int K(u(t)) - V(u(t)) dt over trajectories u(t), where K is the kinetic energy and V is the potential energy, or
A(u) = \int f(u(t)) + (m/2)(\nabla u(t))^2 dt, \ f=-V
One-dimensional L_2 total variation for image denoising is the same as the classical mechanics of a particle, with the potential energy defined by the image and the smoothness of the denoised image as the kinetic energy! For optical flow the potential energy is the difference between the transformed first image and the second,
(I_0(u(x)) - I_1(x))^2, and the kinetic energy is the smoothness of the optical flow.
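Spelling out that identification for the one-dimensional denoising case (just my own bookkeeping of the formulas above):
\int_\Omega (v(x)-u(x))^2 + (\mu/2)(\nabla u(x))^2 dx \quad \leftrightarrow \quad A(u) = \int -V(u(t)) + (m/2)(\nabla u(t))^2 dt
with x \leftrightarrow t, \quad \mu \leftrightarrow m, \quad (v(x)-u(x))^2 \leftrightarrow -V(u(t))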
Of course the strict equality holds only for a one-dimensional image and L_2, and the potential energy is quite strange – it depends not on the coordinate but on the velocity, like some kind of friction.
While it holds some practical meaning, most practical tasks have a two- or higher-dimensional image and an L_1 or L_p regularizer. So in terms of classical mechanics we have motion in multidimensional time with the non-classical kinetic energy
K = \mu/2 \sqrt{v^2}
which bears an uncanny resemblance to the Lagrangian of a relativistic particle
-m c^2\sqrt{1-v^2/c^2}
So total variation in image processing is equivalent to the physics of non-classical motion in multidimensional time, in a field with potential energy defined by the image. I have no idea what it signifies, but it sounds cool :) . Holographic principle? Maybe the crowd from Azimuth or the n-Category Café will give some explanation eventually…
And another, related question about the regularizer in total variation: there is an inherent connection between regularizers and Bayesian priors. What does the TV-L1 regularizer mean from the Bayesian statistics point of view?

PS: I’m posting mostly on my Google+ now, so this blog carries only a small part of my posts.

14, January, 2012 Posted by | Uncategorized | Comments Off on Total Variation in Image Processing and classical Action

Samsung SARI 1.5 Augmented Reality SDK is out in the wild

Something I did for Samsung (the kernel of the tracker). The biggest improvement in SARI 1.5 is the sensor fusion, which allows for a lot more robust tracking.
Here is an example of run-time localization and mapping with SARI 1.5:

This is the AR EdiBear game (free in the Samsung Apps store).

14, November, 2011 Posted by | Augmented Reality, Coding AR, computer vision, Demo, Games, mobile games | 3 Comments

Interior point method – if all you have is a hammer

The interior point method for nonlinear optimization is often considered complex, highly nontrivial, etc. The fact is that for “simple” nonlinear optimization it’s quite simple, manageable and can even be explained in #3tweets. For those not familiar with it there is a simple introduction on Wikipedia, which in turn follows an excellent paper by Margaret H. Wright.
Now about “if all you have is a hammer, everything looks like a nail”: some applications of the interior point method can be quite unexpected.
Everyone who has worked with the Levenberg-Marquardt minimization algorithm knows how much pain the choice of the small parameter \lambda is. Levenberg-Marquardt can also be seen as a modification of Gauss-Newton with a trust region. The \lambda of Levenberg-Marquardt does correspond to the trust region radius, but that dependence is highly complex and difficult to estimate. You want a trust region of radius r, but what should the value of \lambda be? There is no easy answer to that question; there are some complex methods, or there is testing with subdivision, which is what the original Levenberg-Marquardt implementation does.
Interior point can help here.
If we choose the shape of the trust region for Gauss-Newton as a hypercube, simplex or the like, we can formulate it as a set of coordinate-wise inequality constraints on \Delta x. And that is the domain of the interior point method! For the hypercube |\Delta x_i| \leq \epsilon the resulting equations look especially nice
\begin{pmatrix} W & -I & I \\ -I & diag & 0 \\ I & 0 & diag \end{pmatrix}
W – the Hessian, I – the identity, diag – diagonal blocks
That is a banded arrowhead matrix, and for it the Cholesky decomposition costs insignificantly more than the decomposition of the original W. The matrix is not positive definite – Cholesky without the square root (LDL^T) should be used.
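
A small NumPy/SciPy sketch of that system, purely as an illustration: the diagonal blocks, the barrier parameter \mu and the right-hand side below are schematic placeholders for whatever your primal-dual bookkeeping produces, not a full interior point implementation.

import numpy as np
from scipy.linalg import ldl, solve

n = 5
rng = np.random.default_rng(0)
J = rng.standard_normal((8, n))
W = J.T @ J                      # Gauss-Newton approximation of the Hessian
g = J.T @ rng.standard_normal(8) # gradient
eps = 0.1                        # half-width of the hypercube trust region
mu = 1e-2                        # barrier parameter (initial value, see text)

# schematic diagonal blocks from the slacks/multipliers of eps -+ dx_i >= 0
D1 = np.diag(np.full(n, eps * eps / mu))
D2 = np.diag(np.full(n, eps * eps / mu))

I = np.eye(n)
Z = np.zeros((n, n))
K = np.block([[W, -I,  I],
              [-I, D1, Z],
              [ I, Z,  D2]])     # the banded arrowhead matrix from the text

# "Cholesky without the square root": LDL^T, since K is symmetric indefinite
L, D, perm = ldl(K)

rhs = np.concatenate([-g, np.full(n, mu), np.full(n, mu)])  # schematic rhs
step = solve(K, rhs)             # in practice you would reuse the LDL factors
dx = step[:n]                    # the constrained Gauss-Newton update
print(dx)
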
Now there is a temptation to use the single constraint ||\Delta x||_2 \leq \epsilon instead of the set of constraints |\Delta x_i| \leq \epsilon, but that will not work. ||\Delta x||_2 \leq \epsilon would have to be linearized to be tractable, but it’s a second-order condition – its linear part is zero, so the linearization doesn’t constrain anything.
The same method can be used whenever we have to put a constraint on the value of the Gauss-Newton update and the shape of the constraint is not important (or is polygonal).
Now the last touch – the interior point method has a small parameter of its own, usually called \mu. In the “normal” method there is a nice rule for its update – take it as \mu = \lambda C (in the notation of the Wikipedia article, C is the value of the constraint and \lambda is the value of the slack variable at the previous iteration). That rule is usually stated explicitly in articles about the interior point method (IPM) for linear programming, but omitted (probably as obvious) in papers about IPM for nonlinear optimization.
In our case (IPM for a trust region) we don’t need to update \mu at all – we move the boundary of the region with each iteration, so each \mu is an initial value. Remember, \mu is not the size of the trust region but the strength of its enforcement.

4, September, 2011 Posted by | computer vision, sci | Comments Off on Interior point method – if all you have is a hammer

Effectiveness of compressed sensing in image processing and other stuff

I seldom post in this blog now, mostly because I’m posting on Twitter and G+ a lot lately. I still haven’t figured out which posts should go where – blog, G+ or Twitter – so it’s kind of chaotic for now.
What of interest is going on: there are two papers at CVPR11 which claim that compressed sensing (sparse recovery) is not applicable to some of the most important computer vision tasks:
Is face recognition really a Compressive Sensing problem?
Are Sparse Representations Really Relevant for Image Classification?
Both papers claim that the space of natural images (or important subsets of it) is not really sparse.
Those claims, however, don’t square with the claimed high effectiveness of the compact signature of Random Ferns.
Could both of those be true? In my opinion – yes. The difference between the two approaches is that the first two papers assumed explicit sparsity – that is, they enforced sparsity on the feature vector. The compressed signature approach uses implicit sparsity – the feature vector underlying the signature is assumed sparse but is not explicitly reconstructed. Why does the compressed signature work while the explicit approach didn’t? That could be the case if the image space is sparse in a different coordinate system – that is, one is dealing with a union of subspaces. The assumption not of simple sparsity but of a union of subspaces is called blind compressed sensing.
Now if we look at the space of natural images it’s easy to see why it is low-dimensional. A natural image is the image of some scene, and that scene has a limited number of moving objects. So the dimension of the space of images of the scene is approximately the sum of the degrees of freedom of the objects (and camera) of the scene, plus the effects of occlusions, illumination and noise. Now if we add a strong enough random error to the image, it stops being a natural image (that is, an image of any scene). That means the manifold of images of a scene is isolated – there are no natural images in its neighborhood. That hints that, up to some error, the space of natural images is at least a union of isolated low-dimensional manifolds. The union of manifolds is obviously a more complex structure than the union of subspaces, but methods of blind compressed sensing could be applicable to it too. Of course, thinking about a union of manifolds is necessary only if the space of images is not a union of subspaces, which is obviously the preferable case.

6, August, 2011 Posted by | computer vision | 1 Comment