## Robust estimators – understand or die… err… be bored trying

This is a continuation of my attempt to understand the internal mechanics of robust statistics. First I want to say that robust statistics "just works": it is not necessary to have a deep understanding of it to use it, or even to use it creatively. However, without that deeper understanding I feel kind of blind. I can modify or invent robust estimators empirically, but I cannot clearly see the reasons why to use this and not that modification.

Now about robust estimators. They can be divided into two groups: maximum likelihood estimators (M-estimators), which in robust statistics usually, but not always, are redescending estimators (a notable **not** redescending estimator is the $\ell_1$ norm), and all the rest of the estimators.

This second, "all the rest" group includes the subset of L-estimators (think of the median, which is also an M-estimator with the $\ell_1$ norm; yes, it's kind of messy), S-estimators (which use a global scale estimate for all the measurements), and R-estimators, which like L-estimators use order statistics, but use them for weights. There may be some others too, but I don't know much about this second group.

It's easy to understand what M-estimators do: just find the value of the parameter which gives the maximum probability of the given set of measurements,

$$\hat{\theta} = \arg\max_\theta \prod_i p(x_i \mid \theta),$$

or

$$\hat{\theta} = \arg\max_\theta \sum_i \log p(x_i \mid \theta),$$

which gives us the traditional M-estimator form

$$\hat{\theta} = \arg\min_\theta \sum_i \rho(x_i, \theta), \qquad \rho = -\log p,$$

or, differentiating,

$$\sum_i \psi(x_i, \theta) = 0, \qquad \psi = \frac{\partial \rho}{\partial \theta}.$$

In practice we usually work not with the measurements per se, but with some cost function of the measurements, the residuals $r_i(\theta)$, so with $\rho = -\log p$ it becomes

$$\hat{\theta} = \arg\min_\theta \sum_i \rho(r_i(\theta)).$$

It is the same as the previous equation, just written so as to separate the statistical part $\rho$ from the cost function part $r_i$.

Now if we define a set of weights $w_i = \psi(r_i)/r_i$, where $\psi = \rho'$, it becomes

$$\hat{\theta} = \arg\min_\theta \sum_i w_i(\theta)\, r_i(\theta)^2.$$

We see that it can be treated as a "nonlinear least squares" problem, which can be solved with iteratively reweighted least squares: fix the weights, solve the resulting weighted least squares problem, recompute the weights from the new residuals, and repeat.
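Here is a minimal sketch of that iteration for a one-dimensional location estimate (all the names here are mine, and the Cauchy weight is just one possible choice of $w(r)$):

```python
import numpy as np

def irls_location(x, weight_fn, iters=50, tol=1e-8):
    """Iteratively reweighted least squares for a 1-D location estimate.

    weight_fn(r) must return the M-estimator weight w(r) = psi(r)/r
    evaluated at the residuals r (up to a constant factor).
    """
    theta = np.median(x)                  # robust starting point
    for _ in range(iters):
        r = x - theta
        w = weight_fn(r)
        theta_new = np.sum(w * x) / np.sum(w)   # weighted least squares step
        if abs(theta_new - theta) < tol:
            break
        theta = theta_new
    return theta

def cauchy_weight(r, gamma=1.0):
    # weight derived from rho(r) = log(1 + (r/gamma)^2), up to a constant
    return 1.0 / (1.0 + (r / gamma) ** 2)

data = np.array([0.1, -0.2, 0.05, 0.0, 100.0])   # one gross outlier
print(irls_location(data, cauchy_weight))   # stays near 0
print(np.mean(data))                        # pulled to ~20 by the outlier
```

The outlier gets a tiny weight on every iteration, so the estimate stays with the bulk of the data, while the ordinary mean is dragged far off.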

Now for the second group of estimators we have the probability of the joint distribution,

$$\hat{\theta} = \arg\max_\theta\, p(x_1, x_2, \dots, x_n \mid \theta).$$

All the global factors (sort order, global scale, etc.) are incorporated into the dependence between the measurements.

It seems the difference between this formulation for the second group of estimators and M-estimators is that the assumption of conditional independence between the measurements is dropped.

Another interesting thing is that if some of the measurements do not depend on the others, this formulation gives us a Bayesian network: the joint density factorizes over the dependency graph, $p(x_1, \dots, x_n \mid \theta) = \prod_i p(x_i \mid \mathrm{pa}(x_i), \theta)$, where $\mathrm{pa}(x_i)$ are the parents of $x_i$ in the graph.

Now let's return to M-estimators. An M-estimator is defined by an assumption about the probability distribution of the measurements.

So an M-estimator and the *probability distribution* through which it is defined are essentially the same thing. Least squares, for example, is produced by the normal (Gaussian) distribution: just take the sum of the negative logarithms of the Gaussian density and you get the least squares estimator, since $-\log e^{-r^2/2\sigma^2} = r^2/2\sigma^2$.
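A quick numeric check of that claim, with made-up data and a brute-force grid search: minimizing the Gaussian negative log-likelihood over a location parameter lands exactly on the sample mean, which is the least squares solution.

```python
import numpy as np

# Negative log-likelihood of i.i.d. unit-variance Gaussian measurements:
# -sum_i log N(x_i | theta, 1) = n/2*log(2*pi) + 1/2 * sum_i (x_i - theta)^2,
# so maximizing the likelihood is exactly least squares, and the minimizer
# is the sample mean.
x = np.array([1.0, 2.0, 4.0, 5.0])
thetas = np.linspace(-10.0, 10.0, 20001)
nll = 0.5 * ((x[:, None] - thetas[None, :]) ** 2).sum(axis=0)
theta_hat = thetas[np.argmin(nll)]
print(theta_hat, x.mean())   # both 3.0
```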

If we are talking about normal (pun intended), non-robust estimators, their defining feature is the finite variance of the distribution.

We have the central limit theorem, which says that for any distribution with finite variance, the mean of the samples will have an approximately normal (Gaussian) distribution.

From this follows the property of asymptotic normality: for an estimator with finite variance, its distribution around the true value of the parameter approximates a normal distribution.

We are discussing robust estimators, which are stable against gross errors and correspond to "thick-tailed" distributions, so we *cannot* assume finite variance of the distribution.

Nevertheless, to get a "true" result we want some form of probabilistic convergence of the measurements to the true value. As it happens, such a class of distributions with infinite variance exists. They are called alpha-stable distributions.

Alpha-stable distributions are those for which a linear combination of independent copies of the random variable has the same distribution, up to scale and shift. From this follows an analog of the central limit theorem for stable distributions: suitably normalized sums of i.i.d. heavy-tailed variables converge to a stable law, not to a Gaussian.
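A small simulation illustrating the contrast (the names and sample sizes here are mine): for Gaussian samples the spread of the sample mean shrinks with $n$, while for Cauchy samples, which are alpha-stable with $\alpha = 1$, the mean of $n$ samples has exactly the same Cauchy distribution as a single sample, so it never concentrates.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials, n = 2000, 1000
gauss_means = rng.normal(size=(n_trials, n)).mean(axis=1)
cauchy_means = rng.standard_cauchy((n_trials, n)).mean(axis=1)

def iqr(a):
    # interquartile range as a robust spread measure
    # (the variance of the Cauchy sample means is infinite)
    q75, q25 = np.percentile(a, [75, 25])
    return q75 - q25

print(iqr(gauss_means))   # shrinks like 1/sqrt(n), tiny
print(iqr(cauchy_means))  # stays ~2, the IQR of a standard Cauchy
```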

The most well-known alpha-stable distribution is the Cauchy distribution,

$$p(x) = \frac{1}{\pi}\,\frac{\gamma}{\gamma^2 + x^2},$$

which corresponds to the widely used redescending estimator

$$\rho(r) = \log\!\left(1 + \frac{r^2}{\gamma^2}\right).$$

The Cauchy distribution can be generalized in several ways, including the recent GCD, the generalized Cauchy distribution (Carrillo et al.), with density function

$$f(x) = \frac{a\,\sigma}{\left(\sigma^p + |x|^p\right)^{2/p}}, \qquad a = \frac{p\,\Gamma(2/p)}{2\,\Gamma(1/p)^2},$$

and corresponding estimator

$$\rho(x) = \log\!\left(\sigma^p + |x|^p\right).$$

Carrillo also introduces a Cauchy-distribution-based "norm" (it's not a real norm, obviously), which he calls the "Lorentzian norm":

$$\|u\|_{LL_2,\gamma} = \sum_i \log\!\left(1 + \frac{u_i^2}{\gamma^2}\right).$$

It corresponds to the classical Cauchy distribution.
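A minimal numeric sketch of why this "norm" is robust (the function name and test values are mine): a single gross outlier blows up the squared $\ell_2$ cost, while the Lorentzian cost grows only logarithmically.

```python
import numpy as np

def lorentzian_norm(u, gamma=1.0):
    # "Lorentzian norm" ||u||_{LL2,gamma} = sum_i log(1 + u_i^2 / gamma^2).
    # Not a true norm: it is not homogeneous and fails the triangle
    # inequality, but every term has bounded influence.
    return np.sum(np.log1p((u / gamma) ** 2))

spiky = np.zeros(10)
spiky[3] = 1000.0                  # one huge outlier

print(np.sum(spiky ** 2))          # 1e6: the L2 cost explodes
print(lorentzian_norm(spiky))      # ~log(1 + 1e6) ~ 13.8: barely moves
```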

He successfully applied Lorentzian-norm-based basis pursuit to the compressed sensing problem, which supports the idea that compressed sensing and robust statistics are dual to each other.
