## Total Variation in Image Processing and the Classical Action

This post is inspired by “Extremal Principles in Classical, Statistical and Quantum Mechanics” on the Azimuth blog.

Total Variation is used a lot in image processing: image denoising, optical flow, depth-map processing. The standard Total Variation approach minimizes an “energy” that is the sum of a data term and a total-variation term (I’m talking about $TV$-$L^1$ for now, not $TV$-$L^2$), over all functions $u$.

In the case of image denoising it would be

$$E(u) = \lambda \int |u - f|\,dx + \int |\nabla u|\,dx,$$

where $f$ is the original image and $u$ is the denoised image.

The $\lambda \int |u - f|\,dx$ part is called the “fidelity term” and $\int |\nabla u|\,dx$ is the “regularizer”.

The regularizer provides smoothness of the solution, and the fidelity term forces the smooth solution to resemble the original image (in the case of image denoising, that is).
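
To make the two terms concrete, here is a minimal discretized sketch of the $TV$-$L^1$ denoising energy for a 1D signal (the function name, the forward-difference discretization, and the placement of $\lambda$ are my choices, not from the original post):

```python
import numpy as np

def tv_l1_energy(u, f, lam=1.0):
    """Discrete TV-L1 energy of a candidate denoised signal u
    against the observed signal f:
        lam * sum_i |u_i - f_i|  +  sum_i |u_{i+1} - u_i|
    The first sum is the fidelity term, the second the regularizer."""
    fidelity = lam * np.abs(u - f).sum()   # stay close to the data
    smoothness = np.abs(np.diff(u)).sum()  # total variation of u
    return fidelity + smoothness
```

A denoiser would minimize this energy over `u`; a perfectly flat `u` pays nothing in the regularizer but may pay a lot in fidelity.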

Now if we return to the classical action: the motion of a point particle is defined by the minimum of the functional

$$S = \int (T - V)\,dt$$

over trajectories $q(t)$, where $T = \tfrac{1}{2} m \dot{q}^2$ is the kinetic energy and $V(q)$ is the potential energy, or

$$S = \int \left( \tfrac{1}{2} m \dot{q}^2 - V(q) \right) dt.$$

*One-dimensional total variation for image denoising is the same as the classical mechanics of a particle, with the potential energy defined by the image and the smoothness of the denoised image playing the role of the kinetic energy!* For optical flow the potential energy is the difference between the transformed first image and the second image, and the kinetic energy is the smoothness of the optical flow field.

Of course the strict equality holds only for a one-dimensional image, and the mechanical analogy is quite strange – the total-variation term depends not on the coordinate but on the velocity, and only linearly in it, like some kind of friction.
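
Spelling the one-dimensional dictionary out explicitly (the notation and the sign convention here are my reading of the analogy, not from the original post):

```latex
% Denoising energy over the image coordinate x:
E[u] = \int \Big( \underbrace{|u'(x)|}_{\text{``kinetic'' term}}
       + \underbrace{\lambda\,|u(x) - f(x)|}_{\text{``potential'' term}} \Big)\,dx

% Classical action over time t:
S[q] = \int \big( T(\dot q) - V(q, t) \big)\,dt

% Matching the two:  x \leftrightarrow t,  u \leftrightarrow q,
% T(\dot q) = |\dot q|,  \quad V(q, t) = -\lambda\,|q - f(t)|
```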

While the one-dimensional case holds some practical meaning, most practical tasks have a two- or higher-dimensional image and a gradient-magnitude regularizer. So in terms of classical mechanics we have motion in multidimensional “time” with the non-classical kinetic energy

$$T = |\nabla u| = \sqrt{u_x^2 + u_y^2},$$

which has an uncanny resemblance to the Lagrangian of a relativistic particle:

$$L = -mc^2 \sqrt{1 - \frac{v^2}{c^2}}.$$
*So total variation in image processing is equivalent to the physics of non-classical motion in multidimensional time, in a field with potential energy defined by the image.* I have no idea what this signifies, but it sounds cool :) . Holographic principle? Maybe the crowd from Azimuth or the n-Category Café will come up with some explanation eventually…

And another, related question about the regularizer in Total Variation: there is an inherent connection between regularizers and Bayesian priors. What does the $TV$-$L^1$ regularizer mean from the Bayesian statistics point of view?
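
For what it’s worth, the generic version of that connection, sketched in my notation: minimizing a regularized energy is MAP estimation for the posterior whose negative log is that energy,

```latex
p(u \mid f) \;\propto\;
\underbrace{e^{-\lambda \int |u - f|\,dx}}_{\text{likelihood: Laplacian noise}}
\;\cdot\;
\underbrace{e^{-\int |\nabla u|\,dx}}_{\text{prior: Laplacian on the gradient}}
```

so the $TV$-$L^1$ energy corresponds, at least formally, to heavy-tailed Laplacian models for both the noise and the image gradients.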

PS I’m posting mostly on my Google+ now, so this blog carries only a small part of my posts.
