Mirror Image

Mostly AR and Stuff

Deriving Gibbs distribution from stochastic gradients

Stochastic gradient descent is one of the most important tools in optimization and machine learning (especially for deep learning – see for example ConvNet). One of its advantages is that its behavior is well understood in the general case, through methods of statistical mechanics.

In its general form, stochastic gradient descent can be written as
w_{n+1} = w_n - \alpha \nabla E(w_n) + F_n
where F_n is a random variable with zero expectation.
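As a toy illustration (my own minimal sketch, not from any textbook – the quadratic E and the Gaussian noise are arbitrary choices), the update rule can be simulated directly:

```python
import random

def sgd_step(w, grad_E, alpha=0.1, noise_scale=0.05):
    # w_{n+1} = w_n - alpha * grad E(w_n) + F_n, with E[F_n] = 0
    return w - alpha * grad_E(w) + random.gauss(0.0, noise_scale)

# Example: E(w) = w^2 / 2, so grad E(w) = w
random.seed(0)
w = 5.0
for _ in range(1000):
    w = sgd_step(w, lambda x: x)
# w fluctuates around the minimum at 0 instead of converging exactly
```

The noise keeps the iterate jittering around the minimizer – exactly the behavior the continuous-time analysis below describes.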
To apply methods of statistical mechanics we can rewrite it in continuous form, as a stochastic gradient flow
\partial J/\partial t = - \nabla_J E(J) + F(t)
where the random term F(t) is assumed to be white noise for simplicity.
At this point most textbooks and papers invoke "methods of statistical mechanics" to show that
the stochastic gradient flow has an invariant probability distribution, called the Gibbs distribution
p(x) = \frac {\exp(-\beta E(x))}{Z}
and from there derive some interesting things like temperature and free energy.
The question is: how is the Gibbs distribution derived from the stochastic gradient flow?

First we have to understand what the stochastic gradient flow really means.
It's not a partial differential equation (PDE), because it includes a random variable, which is not a function \mathbb{R} \to \mathbb{R}. In fact it's a stochastic differential equation (SDE). SDEs use some complex mathematical machinery and relate to partial differential equations, probability and statistics, measure theory and ergodic theory. However, they are used a lot in finance, so there is quite a number of textbooks on SDEs for non-mathematicians. As a short, minimalistic and accessible book on stochastic differential equations I can't recommend highly enough the introduction to SDE by L.C. Evans.

The SDE in question is called an Itô diffusion. The solution of such an equation is a stochastic process – a collection of random variables parametrized by time. A sample path of the stochastic process, as a function of time, is nowhere differentiable – it's difficult to talk about it in terms of derivatives, so the equation is defined through its integral form.
First I'll note that the integral of white noise is actually Brownian motion, or the Wiener process.
Assume we have a stochastic differential equation written in the informal manner
\partial X / \partial t = b(X, t) + g(X, t) F(t), with X a stochastic process and F(t) white noise.
Its integral form is
X(T) - X(0) = \int_{0}^{T} b(X, t) dt + \int_{0}^{T} g(X, t) dW(t) \ \ (1)
where W(t) is a Wiener process
W(t) = \int F(t) dt
This equation is usually written in the form
dX = b(X,t)dt + g(X,t)dW \ \ (2)
This is only a notation for the integral equation; d here is not a differential.
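Incidentally, notation (2) is exactly what one discretizes to simulate an SDE. A minimal Euler–Maruyama sketch (my own; the drift and diffusion functions below are arbitrary examples):

```python
import math
import random

def euler_maruyama(b, g, x0, T=1.0, n=1000, seed=0):
    # Simulate dX = b(X,t) dt + g(X,t) dW with the Euler-Maruyama scheme:
    # X_{k+1} = X_k + b(X_k, t_k) dt + g(X_k, t_k) * sqrt(dt) * N(0, 1)
    rng = random.Random(seed)
    dt = T / n
    x, t = x0, 0.0
    path = [x]
    for _ in range(n):
        dW = math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x = x + b(x, t) * dt + g(x, t) * dW
        t += dt
        path.append(x)
    return path

# Ornstein-Uhlenbeck-like example: dX = -X dt + 0.1 dW, started at 2
path = euler_maruyama(lambda x, t: -x, lambda x, t: 0.1, x0=2.0)
```

Each increment dW is drawn as sqrt(dt) times a standard normal – the discrete counterpart of the Wiener increment.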
Returning to (1):
\int_{0}^{T} b(X, t) dt is an integral along a sample path; its meaning is obvious, or it can be defined as a limit of Riemann sums with respect to time.

The most notable thing here is
\int_{0}^{T} g(X, t) dW(t) – the integral with respect to the Wiener process (3)
It's a stochastic integral, and it's defined in courses on stochastic differential equations as a limit of Riemann sums of random variables, in a manner similar to the definition of the ordinary integral.
Curiously, the stochastic integral is not quite uniquely defined. Depending on the form of the sum it produces different results, like the Itô integral:

\int_0^t g \,d W =\lim_{n\rightarrow\infty} \sum_{[t_{i-1},t_i]\in\pi_n}g_{t_{i-1}}(W_{t_i}-W_{t_{i-1}})
Different Riemann sums produce a different integral – the Stratonovich integral:
\int_0^t g \,d W =\lim_{n\rightarrow\infty} \sum_{[t_{i-1},t_i]\in\pi_n}\frac{g_{t_{i-1}} + g_{t_i}}{2}(W_{t_i}-W_{t_{i-1}})
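The difference is easy to see numerically for g = W itself (a sketch of mine, not from any of the textbooks mentioned): for \int_0^T W dW the Itô sum converges to (W_T^2 - T)/2 while the Stratonovich sum gives W_T^2/2, so the two limits differ by T/2.

```python
import math
import random

# Compare Riemann sums for \int_0^T W dW evaluated at different points
rng = random.Random(1)
n, T = 100000, 1.0
dt = T / n
W = [0.0]
for _ in range(n):
    W.append(W[-1] + math.sqrt(dt) * rng.gauss(0.0, 1.0))

# Ito: evaluate the integrand at the left endpoint t_{i-1}
ito = sum(W[i] * (W[i + 1] - W[i]) for i in range(n))
# Stratonovich: evaluate the average of the two endpoints
strat = sum(0.5 * (W[i] + W[i + 1]) * (W[i + 1] - W[i]) for i in range(n))

# Limits: Ito -> (W_T^2 - T)/2, Stratonovich -> W_T^2/2
```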

The Itô integral is used more often in statistics because it uses g_{t_{i-1}} – it doesn't "look forward" – while the Stratonovich integral is used more in theoretical physics.

Returning to the Itô integral: the Itô integral is a stochastic process itself, and it has expectation zero for each t.
From the definition of the Itô integral follows one of the most important tools of stochastic calculus – Itô's lemma (or Itô's formula).
Itô's lemma states that for a solution of SDE (2)
dX = b(X, t)dt + g(X, t)dW, with X, b, W vectors and g a matrix,
where W is a Wiener process (actually some more general process) and b and g are good enough,

du(\mathbf{X}, t) = \frac{\partial u}{\partial t} dt + (\nabla_\mathbf{X}^{\mathsf T} u) d\mathbf{X} + \tfrac{1}{2} \sum_{i,j} (gg^\mathsf{T})_{i,j}\tfrac{ \partial^2 u}{\partial x_i \partial x_j} dt

where \nabla_\mathbf{X} u is the gradient.
From Itô's lemma follows the Itô product rule for scalar processes: applying Itô's formula to the process V = { X \choose Y}, combined from two processes X and Y, with the function u(V) = XY gives
d(XY) = XdY + YdX + g_x \cdot g_y dt
Using Itô's formula and the Itô product rule it is possible to get the Feynman–Kac formula (the derivation can be found on Wikipedia; it uses only Itô's formula, the Itô product rule and the fact that the expectation of the Itô integral (3) is zero):
for the partial differential equation (PDE)
\frac{\partial u}{\partial t} + (\nabla_\mathbf{X}u)^{\mathsf T} b + \tfrac{1}{2} \sum_{i,j} (gg^\mathsf{T})_{i,j}\tfrac{ \partial^2 u}{\partial x_i \partial x_j} - vu = f
with terminal condition
u(x, T) = \psi(x) \ \ (4)
the solution can be written as a conditional expectation:
u(x,t) = E\left[ \int_t^T e^{- \int_t^r v(X_\tau,\tau)\, d\tau}f(X_r,r)dr + e^{-\int_t^T v(X_\tau,\tau)\, d\tau}\psi(X_T) \Bigg| X_t=x \right]
The Feynman–Kac formula establishes a connection between PDEs and stochastic processes.
From the Feynman–Kac formula, taking v\equiv 0 and f \equiv 0, we get the Kolmogorov backward equation:
for
Lu = (\nabla_\mathbf{X}u)^{\mathsf T} b + \tfrac{1}{2} \sum_{i,j} (gg^\mathsf{T})_{i,j}\tfrac{ \partial^2 u}{\partial x_i \partial x_j}
equation
\frac{\partial u}{\partial t} + Lu = 0 \ \ (5)
with terminal condition (4) has the solution given by the conditional expectation
u(x, t) = E(\psi(X_T) | X_t = x) \ \ (6)
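Representation (6) can be sanity-checked by Monte Carlo (a sketch of mine; the drift b = 0, diffusion g = 1 and \psi(x) = x^2 below are arbitrary test choices, for which the PDE solution is known in closed form):

```python
import math
import random

def mc_expectation(psi, b, g, x, t, T, n_paths=2000, n_steps=100, seed=0):
    # Estimate u(x, t) = E[psi(X_T) | X_t = x] by simulating SDE paths
    # with the Euler-Maruyama scheme and averaging psi at the endpoint.
    rng = random.Random(seed)
    dt = (T - t) / n_steps
    total = 0.0
    for _ in range(n_paths):
        y, s = x, t
        for _ in range(n_steps):
            y += b(y, s) * dt + g(y, s) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            s += dt
        total += psi(y)
    return total / n_paths

# For dX = dW and psi(x) = x^2 the backward-equation solution is
# u(x, t) = x^2 + (T - t), so u(1, 0) should be close to 2
u_est = mc_expectation(lambda y: y * y, lambda y, s: 0.0, lambda y, s: 1.0,
                       x=1.0, t=0.0, T=1.0)
```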
From the Kolmogorov backward equation we can obtain the Kolmogorov forward equation, which describes the evolution of the probability density of the random process X in (2).
In SDE courses it's established that (2) is a Markov process and has a transition probability P and transition density p:
p(x, s, y, t) = probability density of being at y at time t, conditioned on starting at x at time s.
Taking u – the solution of (5) with terminal condition (4) – we have
u(x, s) = E(\psi(X_T) | X_s = x) = \int p(x, s, z, T) \psi(z) dz
From the Markov property
p(x, s, z, T) = \int p(x, s, y, t)p(y, t, z, T) dy
from here
u(x, s) = \int \int p(x, s, y, t) p(y, t, z, T) \psi(z) dz dy
u(x, s) = \int p(x, s, y, t) u(y, t) dy
and since u(x, s) does not depend on the intermediate time t,
0 = \frac{\partial u}{\partial t} = \int \frac {\partial p(x, s, y, t)}{\partial t}u(y, t) + p(x, s, y, t) \frac {\partial u(y, t)}{\partial t} dy
from (5)
\int \frac{\partial p(x, s, y, t)}{\partial t}u(y, t) - (Lu) p(x, s, y, t) dy = 0 \ \ (7)
Now we introduce the dual (adjoint) operator L^*
\int p (Lu) dy = \int (L^{*}p) u dy
By integration by parts we get
L^*p = -\nabla_\mathbf{X}^{\mathsf T}(p\,b) + \tfrac{1}{2} \sum_{i,j} \tfrac{ \partial^2 }{\partial x_i \partial x_j}\left((gg^\mathsf{T})_{i,j}\,p\right)
and from (7)
\int (\frac{\partial p(x, s, y, t)}{\partial t} - L^{*}p) u(y, t) dy = 0
For t=T
\int (\frac{\partial p(x, s, y, T)}{\partial t} - L^{*}p(x, s, y, T)) \psi(y) dy = 0
This is true for any \psi, which is independent of p,
and we get the Kolmogorov forward equation for p. Integrating over x we get the same equation for the probability density at any moment T:
\frac{\partial p}{\partial t} - L^{*}p = 0
Now we return to the Gibbs invariant distribution for the stochastic gradient flow
\tfrac{\partial J}{\partial t} = - \nabla_J E(J) + \sqrt{\tfrac{2}{\beta}}F(t)
The stochastic gradient flow in SDE notation is
dJ = - \nabla_J E(J)dt + \sqrt{\tfrac{2}{\beta}} dW
(the integral of white noise is a Wiener process).
We want to find the invariant probability density p_G. Invariant means it doesn't change with time: \tfrac{\partial p_G}{\partial t} = 0,
so from the Kolmogorov forward equation
L^*p_G = 0
or
\nabla^{\mathsf T}(p_G\nabla E)+ \tfrac{1}{\beta} \nabla^{\mathsf T} \nabla p_G = 0
Removing the outer divergence,
p_G\nabla E + \tfrac{1}{\beta} \nabla p_G = C
with C = 0 because we want p_G integrable; dividing by p_G,
\nabla(E + \tfrac{1}{\beta} \log(p_G)) = 0
and at last we get the Gibbs distribution
p_G = \frac{\exp(-\beta E)}{Z}
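The whole result can be checked by simulation (again a sketch of mine: for E(x) = x^2/2 the Gibbs density \exp(-\beta x^2/2)/Z is a Gaussian with variance 1/\beta, so the long-run samples of the discretized flow should have that variance):

```python
import math
import random

def langevin_samples(grad_E, beta=1.0, x0=0.0, dt=0.01, n=200000, seed=0):
    # Discretized stochastic gradient flow:
    # x_{k+1} = x_k - grad_E(x_k) dt + sqrt(2 dt / beta) * N(0, 1)
    rng = random.Random(seed)
    x = x0
    samples = []
    for i in range(n):
        x += -grad_E(x) * dt + math.sqrt(2.0 * dt / beta) * rng.gauss(0.0, 1.0)
        if i > n // 10:          # discard burn-in
            samples.append(x)
    return samples

# E(x) = x^2/2: the invariant Gibbs density is N(0, 1/beta)
s = langevin_samples(lambda x: x, beta=2.0)
var = sum(v * v for v in s) / len(s)   # should approach 1/beta = 0.5
```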
Recalling again the chain of reasoning:
Wiener process →

Itô integral →

SDE + Itô's lemma + Itô product rule + zero expectation of the Itô integral →

Feynman–Kac formula →

Kolmogorov backward equation + Markov property of the SDE →

Kolmogorov forward equation for the probability density →

Gibbs invariant distribution


13, November, 2013 Posted by | Uncategorized | Comments Off on Deriving Gibbs distribution from stochastic gradients

ConvNet for windows

I have seen an excellent walkthrough on building Alex Krizhevsky's cuda-convnet for Windows, but differences in configuration and installed packages could be tiresome. So here is a complete build of convnet for Windows 64:

https://github.com/s271/win_convnet

It requires a GPU with CUDA compute capability 2.0 or better, of course, plus Windows 64-bit, Visual Studio 64-bit and Python 64-bit with NumPy installed. The rest of the libs and dlls are precompiled. In building it I've followed instructions by Yalong Bai (Wyvernbai) from http://www.asiteof.me/archives/50.

Read Readme.md before installation – you may (or may not) need the PYTHONPATH environment variable set.

On a side note, I've used WinPython both for the libraries and for running the package. WinPython seems a very nice package, which includes the Spyder IDE. I had some problems with Spyder though – sometimes it behaves erratically during debugging/running the code. Could be my inexperience with Spyder. Another nice package – PythonXY – regretfully cannot be used: it has only a 32-bit version and will not compile with or run 64-bit modules.

24, October, 2013 Posted by | Coding, Uncategorized | , , , , , , , | Comments Off on ConvNet for windows

__volatile__ in #cuda reduce sample

“Reduce” is one of the most useful samples in the NVIDIA CUDA SDK. It's an implementation of a highly optimized CUDA algorithm for the sum of the elements of an array of arbitrary length. It's hardly possible to make anything better and generic enough with the existing GPGPU architecture (if anyone knows something as generic but considerably more efficient, I'd like to know too). One big plus of the reduce algorithm is that it works for any binary commutative associative operation – like min, max, multiply etc. And the NVIDIA sample provides this ability – it's implemented as a reduce on a template class, so all one has to do is implement a class with overloads of the addition and assignment operations.

However there is one obstacle – it's the __volatile__ qualifier in the code. A simple overload of the “=”, “+=” and “+” operations in the class LSum causes a compiler error like

error: no operator “+” matches these operands
1> operand types are: LSum + volatile LSum

The answer is to add __volatile__ to all class operations, but there is a trick here:

for “=”, just

volatile LSum& operator =(volatile LSum &rhs)

is not enough. You should add volatile at the end too, to mark not only the input and output, but the member function itself as volatile.

In the end the class looks like this (bodies shown here for a simple float sum):

class LSum
{
public:
    float sum;

    __device__ LSum& operator+=(volatile LSum &rhs)
    {
        sum += rhs.sum;
        return *this;
    };

    __device__ LSum operator+(volatile LSum &rhs)
    {
        LSum res = *this;
        res += rhs;
        return res;
    };

    __device__ LSum& operator=(const float &rhs)
    {
        sum = rhs;
        return *this;
    };

    __device__ volatile LSum& operator=(volatile LSum &rhs) volatile
    {
        sum = rhs.sum;
        return *this;
    };
};

11, May, 2013 Posted by | Uncategorized | Comments Off on __volatile__ in #cuda reduce sample

Algebraic topology – now in compressed sensing

Thanks to Igor Carron for pointing me to this workshop – Algebraic Topology and Machine Learning. There is a very interesting paper there: Persistent Homological Structures in Compressed Sensing and Sparse Likelihood by Moo K. Chung, Hyekyung Lee and Matthew Arnold. The paper is very comprehensive and requires only a minimal understanding of some algebraic topology concepts (which is exactly where I am in relation to algebraic topology). Basically it's an application of topological data analysis to compressive sensing. They use such things as persistent homology and "barcodes". Before, persistent homology and barcodes were used for such things as extracting a solid structure from a noisy point cloud. A barcode is the noise-stable dependence of some topological invariants on some parameter. In the case of a point cloud the parameter is the radius of the ball around each point. As the radius goes from very big to zero the topology of the union of balls changes, and those changes of topology make the barcode. Because the barcode is a stable topological invariant, learning the barcode is the same as learning the topology of the solid structure underlying the point cloud.
In the paper the authors use graphical lasso (glasso) with an L_1 regularizer to find the interdependency between a set of sampled variables. However, if one considers the \lambda parameter of the regularizer as a kind of ball radius, this problem acquires a persistent homology and a barcode. The correlation matrix is thresholded by \lambda and becomes the adjacency matrix of some graph. The barcode is now the dependence of the topology of that graph on the parameter \lambda. What is especially spectacular is that no glasso iterations are needed to calculate the barcode – it is obtained by simple thresholding of the correlation matrix. Thus the barcode is easily found, and with it the topology of the correlations of the variables. Well, at least that is how I understood the paper.
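To make the thresholding idea concrete, here is a toy sketch (entirely mine – the correlation matrix is made up, and I track only Betti-0, the number of connected components, rather than the full persistence machinery of the paper):

```python
def betti0(adj):
    # Number of connected components (Betti-0) via union-find
    n = len(adj)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if adj[i][j]:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})

def barcode(corr, thresholds):
    # For each threshold lambda, keep edges with |corr| >= lambda
    # and record the component count: a Betti-0 "barcode"
    n = len(corr)
    return [betti0([[abs(corr[i][j]) >= lam and i != j for j in range(n)]
                    for i in range(n)]) for lam in thresholds]

# Toy correlation matrix: two tightly correlated pairs, weakly linked
corr = [[1.0, 0.9, 0.2, 0.1],
        [0.9, 1.0, 0.1, 0.2],
        [0.2, 0.1, 1.0, 0.8],
        [0.1, 0.2, 0.8, 1.0]]
bars = barcode(corr, [0.05, 0.5, 0.95])
# components merge as the threshold decreases: bars == [1, 2, 4]
```

No optimization is run anywhere – the whole "barcode" comes from thresholding alone, which is the point made above.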
PS Using this approach for total variation denoising, the barcode would include the dependence of the size function on the smoothing parameter.

11, December, 2012 Posted by | Uncategorized | Comments Off on Algebraic topology – now in compressed sensing

Total Variation in Image Processing and classical Action

This post is inspired by Extremal Principles in Classical, Statistical and Quantum Mechanics in the Azimuth blog.
Total Variation is used a lot in image processing: image denoising, optical flow, depth map processing. The standard form of Total Variation for the L_2 norm is minimizing an "energy" of the form
\int_\Omega f(u(x)) + (\nabla u(x))^2 dx
(I'm talking about Total Variation-L_2 for now, not L_1) over all functions u:\ \Omega \rightarrow R^n
In the case of image denoising it would be
\int_\Omega (v(x)-u(x))^2 + (\mu/2)(\nabla u(x))^2 dx
where v is the original image and u is the denoised image.
The part f(u(x)) is called the "fidelity term" and (\nabla u(x))^2 is the "regularizer".
The regularizer part provides smoothness of the solution, and the fidelity term forces the smooth solution to resemble the original image (that is, in the case of image denoising).
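In one dimension the fidelity/regularizer trade-off is easy to see with plain gradient descent on the discretized energy (my own sketch; the signal and parameters are made up):

```python
def denoise_tv_l2(v, mu=2.0, step=0.05, iters=2000):
    # Gradient descent on sum (v_i - u_i)^2 + (mu/2) * sum (u_{i+1} - u_i)^2
    u = list(v)
    n = len(u)
    for _ in range(iters):
        g = [0.0] * n
        for i in range(n):
            g[i] = 2.0 * (u[i] - v[i])            # fidelity term
            if i > 0:
                g[i] += mu * (u[i] - u[i - 1])    # regularizer (left neighbor)
            if i < n - 1:
                g[i] += mu * (u[i] - u[i + 1])    # regularizer (right neighbor)
        u = [u[i] - step * g[i] for i in range(n)]
    return u

# Noisy step signal: the result is smoother but stays near the data
v = [0.0, 0.1, -0.1, 0.05, 1.0, 0.9, 1.1, 0.95]
u = denoise_tv_l2(v)
```

Larger mu weights the regularizer more, pulling the solution toward a flat (maximally smooth) signal.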
Now if we turn to the classical action, the movement of a point is defined by the minimum of the functional
A(u) = \int K(u(t)) - V(u(t)) dt, over trajectories u(t), where K is kinetic energy and V is potential energy, or
A(u) = \int f(u(t)) + (m/2)(\nabla u(t))^2 dt, \ f=-V
One-dimensional L_2 total variation for image denoising is the same as the classical mechanics of a particle, with potential energy defined by the image and the smoothness of the denoised image as kinetic energy! For optical flow the potential energy is the difference between the transformed first image and the second,
(I_0(u(x)) - I_1(x))^2, and the kinetic energy is the smoothness of the optical flow.
Of course the strict equality holds only for a one-dimensional image and L_2, and the potential energy is quite strange – it depends not on coordinate but on velocity, like some kind of friction.
While this holds some practical meaning, most practical tasks have a two- or more dimensional image and an L_1 or L_p regularizer. So in terms of classical mechanics we have movement in multidimensional time with a non-classical kinetic energy
K = \mu/2 \sqrt{v^2}
which has an uncanny resemblance to the Lagrangian of a relativistic particle
-m c^2\sqrt{1-v^2/c^2}
So total variation in image processing is equivalent to the physics of non-classical movement in multidimensional time, in a field with potential energy defined by the image. I have no idea what it signifies, but it sounds cool :) . Holographic principle? Maybe the crowd from Azimuth or the n-Category Café will give some explanation eventually…
And another, related question: the regularizer in Total Variation. There is an inherent connection between regularizers and Bayesian priors. What does the TV-L1 regularizer mean from the Bayesian statistics point of view?

PS I’m posting mostly on my google plus now, so this blog is a small part of my posts.

14, January, 2012 Posted by | Uncategorized | Comments Off on Total Variation in Image Processing and classical Action

Stuff and AR

I once asked: what is 3d registration/reconstruction/pose estimation about – optimization or statistics? The more I think about it, the more I'm convinced it's at least 80% statistics. Often specifically optimization tricks like Tikhonov regularization have a statistical underpinning. Stability of optimization is robust statistics (yes I know, I repeat it way too often). The cost function formulation is a formulation of the error distribution, and it defines convergence speed.

Now unrelated (almost) AR stuff:
I already mentioned on Twitter that the version of the markerless tracker for which I did a lot of work is part of the Samsung AR SDK (SARI) for Android and Bada. It was shown at AP2011 (presentation; it also includes nice Bada code). The AR SDK presentation is here.
Some videos from the presentation – the Edi Bear game demo with non-reference tracking at the end of the video, and less trivial elements of SLAM tracking. Another application of the SARI SDK – PBI (this one seems to use an earlier version).

4, July, 2011 Posted by | Uncategorized | , , , , , | 1 Comment

How Kinect depth sensor works – stereo triangulation?

Kinect uses a depth sensor produced by PrimeSense, but how exactly it works is not obvious at first glance. Some person here, claiming to be a specialist, assured me that the PrimeSense sensor uses a time-of-flight depth camera. Well, he was wrong. In fact PrimeSense explicitly says they are not using time-of-flight, but something they call "light coding", and they use a standard off-the-shelf CMOS sensor which is not capable of extracting the time of return from modulated light.
Daniel Reetz did excellent work taking IR photos of the Kinect laser emitter and analyzing its characteristics. He confirms PrimeSense's statement – the IR laser is not modulated. All the laser does is project a static pseudorandom pattern of speckles onto the environment. PrimeSense uses only one IR sensor. How is it possible to extract depth information from a single IR image of the speckle pattern? Stereo triangulation requires two images to get the depth of each point (speckle). Here is the trick: there are actually not one but two images. One image is what we see on the photo – the image of the speckles captured by the IR sensor. The second image is invisible – it's the hardwired pattern of speckles which the laser projects. That second image should be hardcoded into the chip logic. Those images are not equivalent – there is some distance between laser and sensor, so the images correspond to different camera positions, and that allows using stereo triangulation to calculate the depth of each speckle.
triangulation
The difference here is that the second image is "virtual" – the position of the second point y_2 is already hardcoded into memory. Because laser and sensor are aligned, the task is even easier: all one has to do is measure the horizontal offset of the speckle in the first image relative to the hardcoded position (after correcting lens distortion, of course).
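In code, the relation is just the standard stereo formula Z = f·b/d (the focal length, baseline and disparity below are made-up illustrative numbers, not Kinect's actual calibration):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    # Aligned stereo pair: depth Z = focal_length * baseline / disparity
    return focal_px * baseline_m / disparity_px

# e.g. focal length 580 px, baseline 7.5 cm, measured disparity 20 px
z = depth_from_disparity(20.0, 580.0, 0.075)
# z = 2.175 meters
```

The smaller the disparity (offset), the farther the speckle – which is why precision degrades with distance.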
That also explains the pseudorandom pattern of the speckles. A pseudorandom pattern makes matching the speckles in the two images easier, as each speckle has a locally different neighborhood. Can it be called a "structured light" sensor? With some stretch of the definition. Structured light usually projects a grid of regular lines instead of pseudorandom points. At least PrimeSense objects to calling their method "structured light".

30, November, 2010 Posted by | Uncategorized | 10 Comments

New laptop, new Ubuntu

Got myself a new and shiny Asus u35jc-a1 and set up dual-boot Ubuntu on it. I have Ubuntu as wubi (Ubuntu in a Windows file) on my old desktop-replacement Dell XPS, so I started with wubi for the u35jc too. Wubi worked from the start, and the wifi card works without problems. However, NVIDIA completely screwed up the hybrid driver for Linux (there are 2 video cards in the u35jc, one integrated and the other NVIDIA); it's completely unusable, so the NVIDIA driver should be disabled. Happily there are detailed instructions on ubuntuforums. They are for 10.4, but work for 10.10 too. Suspend was not working quite stably though, even after the fix from ubuntuforums. It's tricky – suspension requires turning the NVIDIA driver on and off. Suspend was working on AC power, but suspend on battery power sometimes caused a system hang. I proceeded to the multitouch fix from the list. The multitouch fix requires creating or modifying xorg.conf, which requires stopping X. Stopping X caused a crash which permanently killed the Ubuntu wubi install. That was quite scary, so I decided to forgo wubi and make a complete Ubuntu installation into a partition.
Installation was not completely smooth. Probably because of some quirks of the initial Asus partitioning, the Ubuntu installer refused to create any partition but the first in the free space. After creation of the first boot partition the remaining free disk space became unusable. So I skipped the swap partition and installed Ubuntu into a single / partition. After that I applied only the acpi_call and suspend fixes. Suspend now works like a charm, both on AC and battery. The multitouch fix I put aside for now – it's not critical. It seems all the problems – the necessity of a separate suspend fix, the problems with wubi and with stopping X – were caused by the NVIDIA driver. Hope NVIDIA will fix it eventually – I want to play with GPGPU without booting into Windows.

19, November, 2010 Posted by | Uncategorized | Comments Off on New laptop, new Ubuntu

Is Ouidoo a rebranded Zii Egg?

There is some buzz on the net about a new smartphone specifically designed for augmented reality – Ouidoo. Some of the specs of the device sound outlandish – 26 CPUs, 8 GFLOPS and a new OS.
However, one commenter on cn.engadget.com pointed out that this device could actually be real – its specs resemble those of the new Creative Zii Egg OEM platform, which seems available for developers now. The Zii Egg features a new OS – "Plasma OS" – but also supports Android. So what's the deal with 26 CPUs?
Those are not general-purpose CPUs of course, but the Zii Egg has some kind of vector processor (like in the Cell processor or modern GPUs; some high-end smartphones also have one) with 24 floating point units. Add the CPU itself – an ARM Cortex-A8 – and possibly a dedicated GPU, and you have 24 + 1 + 1 = 26. Also the specs claimed by Creative for the Zii Egg floating point vector processor – 8 GFLOPS – are exactly those claimed by QderoPateo for the Ouidoo.

25, April, 2010 Posted by | Uncategorized | 1 Comment

Technology behind project Natal

Latest info is here.

23, January, 2010 Posted by | Uncategorized | 9 Comments