Conditional probability always baffled me. The empirical, frequentist meaning is clear, but the abstract definition, originating with Kolmogorov – what is its mathematical meaning? How can it be derived? It is a nontrivial definition, and it appears in textbooks out of thin air, without the measure-theoretic intuition behind it.
Here I mostly follow the Chang & Pollard paper "Conditioning as disintegration". Beware that the paper uses non-standard notation; this post follows the more common notation, the same as in Wikipedia.
Here is an example from the Chang & Pollard paper:
Suppose we have a distribution on the plane concentrated on two straight lines $\ell_1$ and $\ell_2$ through the origin, with respective arc-length densities $f_1$, $f_2$ and angles $\theta_1$, $\theta_2$ with the X axis. An observation $(X, Y)$ is taken, giving $X = x_0$; what is the probability that the point lies on the line $\ell_1$?

The standard approach would be to approximate the event $\{X = x_0\}$ with $\{x_0 \le X \le x_0 + \delta\}$ and take the limit as $\delta \to 0$:

$$P(\ell_1 \mid X = x_0) = \lim_{\delta \to 0} \frac{P(\ell_1,\ x_0 \le X \le x_0 + \delta)}{P(x_0 \le X \le x_0 + \delta)}.$$

Not only is taking this limit cumbersome, it is also not at all obvious that it yields the same conditional probability as the one given by the abstract definition – we are replacing a ratio with a limit here.
Now, what is the "correct" way to define conditional probabilities, especially for continuous distributions?
For simplicity we will first talk about a single scalar random variable $X$ defined on a probability space. We will think of the random variable $X$ as a function on the sample space. Now the condition $X = x$ defines a fiber – the inverse image $X^{-1}(x)$ of the point $x$.
The disintegration theorem says that the probability measure $\mu$ on the sample space can be decomposed into two kinds of measures – a parametric family of measures induced by the original probability on each fiber, and an "orthogonal" measure on $\mathbb{R}$ – the parameter space of that family. Here $\mathbb{R}$ is the space of values of $X$, and it serves as the parameter space for the measures on fibers. The second measure is induced through inverse images of the function (random variable) $X$, taken for each measurable set on $\mathbb{R}$. This second measure is called the pushforward measure, written $X_*\mu$: for a measurable set $B$ on $\mathbb{R}$ (in our case), take its inverse image $X^{-1}(B)$ in the sample space and measure it with $\mu$.
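To make the pushforward concrete, here is a minimal Python sketch on a made-up four-point sample space (the outcomes, their weights, and the values of $X$ are all invented for illustration):

```python
from fractions import Fraction

# A toy sample space Omega with a measure mu, and a random variable X on it.
# (The four outcomes and their weights are made up for illustration.)
mu = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 8), "d": Fraction(1, 8)}
X = {"a": 0, "b": 0, "c": 1, "d": 2}

def pushforward(mu, X):
    """X_* mu: the weight of a value x is mu of the inverse image X^{-1}(x)."""
    out = {}
    for omega, weight in mu.items():
        out[X[omega]] = out.get(X[omega], Fraction(0)) + weight
    return out

print(pushforward(mu, X))  # X_* mu gives 0 -> 3/4, 1 -> 1/8, 2 -> 1/8
```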
The fiber is in fact a sample space for the conditional event, and the measure on the fiber is our conditional distribution.
The full statement of the theorem requires some terms from measure theory. Following Wikipedia: let $Y$ and $X$ be Radon spaces, $\mu$ a probability measure on $Y$, $\pi : Y \to X$ a Borel-measurable function, and $\nu = \pi_*\mu$ the pushforward measure on $X$.
* Then there exists a ν-almost everywhere uniquely determined family of probability measures $\{\mu_x\}_{x \in X} \subseteq P(Y)$ such that
* the function $x \mapsto \mu_x$ is Borel measurable, in the sense that $x \mapsto \mu_x(B)$ is a Borel-measurable function for each Borel-measurable set $B \subseteq Y$;
* $\mu_x$ "lives on" the fiber $\pi^{-1}(x)$: for ν-almost all $x \in X$, $\mu_x\big(Y \setminus \pi^{-1}(x)\big) = 0$, and so $\mu_x(E) = \mu_x\big(E \cap \pi^{-1}(x)\big)$;
* for every Borel-measurable function $f : Y \to [0, \infty]$,
$$\int_Y f(y)\,\mathrm{d}\mu(y) = \int_X \int_{\pi^{-1}(x)} f(y)\,\mathrm{d}\mu_x(y)\,\mathrm{d}\nu(x).$$
From here, for any event $E \subseteq Y$ (taking $f = \mathbf{1}_E$),
$$\mu(E) = \int_X \mu_x(E)\,\mathrm{d}\nu(x).$$
This is the complete statement of the disintegration theorem.
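The integral identity is easy to verify numerically on a finite toy space. Below is a sketch assuming a made-up joint distribution $\mu$ on a 3×4 grid $Y$, with $\pi$ projecting onto the first coordinate; $\nu$ is the pushforward and $\mu_x$ is the renormalized restriction of $\mu$ to the fiber $\pi^{-1}(x)$:

```python
import random
from collections import defaultdict

# A made-up joint distribution mu on a finite space Y = 3x4 grid,
# with pi((i, j)) = i as the map we disintegrate along.
random.seed(0)
points = [(i, j) for i in range(3) for j in range(4)]
raw = [random.random() for _ in points]
mu = {p: w / sum(raw) for p, w in zip(points, raw)}

pi = lambda p: p[0]

# Pushforward nu = pi_* mu on X = {0, 1, 2}.
nu = defaultdict(float)
for p, w in mu.items():
    nu[pi(p)] += w

# Conditional family mu_x: mu restricted to the fiber pi^{-1}(x), renormalized.
mu_x = {x: {p: w / nu[x] for p, w in mu.items() if pi(p) == x} for x in nu}

# Disintegration identity for a test function f:
#   int_Y f dmu  ==  int_X ( int_{fiber} f dmu_x ) dnu
f = lambda p: (p[0] + 1) * p[1] ** 2
lhs = sum(f(p) * w for p, w in mu.items())
rhs = sum(nu[x] * sum(f(p) * w for p, w in fam.items()) for x, fam in mu_x.items())
assert abs(lhs - rhs) < 1e-12  # the two integrals agree
```

Each $\mu_x$ is a probability measure living on its fiber, and integrating the fiber integrals against $\nu$ recovers the integral against $\mu$, exactly as the theorem states.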
Now let us return to the Chang & Pollard example. For a formal derivation I refer you to the original paper; here we will just "guess" the family $\mu_{x_0}$, and by uniqueness it gives us the disintegration. Writing $f_i$ for the arc-length density on line $\ell_i$ and $\theta_i$ for its angle with the X axis, our conditional distribution for $X = x_0$ will be just point masses at the intersections $P_1$, $P_2$ of the lines $\ell_1$ and $\ell_2$ with the vertical line $x = x_0$:

$$\mu_{x_0} = \frac{1}{Z}\left(\frac{f_1(x_0/\cos\theta_1)}{\cos\theta_1}\,\delta_{P_1} + \frac{f_2(x_0/\cos\theta_2)}{\cos\theta_2}\,\delta_{P_2}\right),$$

where $Z$ is the normalizing constant. Here $\delta_P$ is the delta function – a point mass at $P$. The weights arise because a point of $\ell_i$ with $X$-coordinate $x_0$ lies at arc-length distance $x_0/\cos\theta_i$ from the origin, and projecting arc length onto the X axis contributes a Jacobian factor $1/\cos\theta_i$. Our conditional probability that the point lies on $\ell_1$ given $X = x_0$, from this conditional density, is thus

$$P(\ell_1 \mid X = x_0) = \frac{f_1(x_0/\cos\theta_1)/\cos\theta_1}{f_1(x_0/\cos\theta_1)/\cos\theta_1 + f_2(x_0/\cos\theta_2)/\cos\theta_2}.$$
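We can sanity-check that the limit-of-ratios approach and the disintegration answer agree with a quick Monte Carlo sketch. The concrete choices below are mine, not the paper's: each line carries weight 1/2, the arc length $S$ along a line is Exp(1)-distributed, $\theta_1 = \pi/6$, $\theta_2 = \pi/3$, and $x_0 = 0.7$:

```python
import math, random

# Monte Carlo check of the two-lines example, under concrete choices of my
# own (not from the paper): each line has weight 1/2, the arc length S
# along a line is Exp(1)-distributed, theta1 = pi/6, theta2 = pi/3.
random.seed(1)
theta = [math.pi / 6, math.pi / 3]
x0, delta, n = 0.7, 0.02, 2_000_000

hits = on_line1 = 0
for _ in range(n):
    line = random.randrange(2)      # pick a line with probability 1/2
    s = random.expovariate(1.0)     # arc-length position on that line
    x = s * math.cos(theta[line])   # observed X coordinate
    if x0 <= x < x0 + delta:        # crude conditioning on X being near x0
        hits += 1
        on_line1 += (line == 0)

# Disintegration answer: point masses weighted by the X-densities
# f_i(x0 / cos theta_i) / cos theta_i with f(s) = exp(-s).
g = [math.exp(-x0 / math.cos(t)) / math.cos(t) for t in theta]
exact = g[0] / (g[0] + g[1])

print(on_line1 / hits, exact)  # the two numbers should be close
```

Shrinking `delta` (and increasing `n`) drives the empirical ratio toward the disintegration answer, which is what the limit definition promises.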
Another example from Chang & Pollard relates to sufficient statistics. The term sufficient statistic is used when we have a probability distribution depending on some parameter, as in maximum likelihood estimation. A sufficient statistic is a function of the sample from whose values alone the parameter can be estimated in the best possible way – adding more data from the sample gives no extra information about the parameter of the distribution.
Let $P_\theta$ be the uniform distribution on the square $[0, \theta]^2$. In that case $M = \max(X, Y)$ is a sufficient statistic for $\theta$. How do we show it?

Let us take our function $M(x, y) = \max(x, y)$ and form the disintegration.
$\mu_m$ is the uniform distribution on the two edges where $x = m$ (with $0 \le y \le m$) or $y = m$ (with $0 \le x \le m$), of total length $2m$, and the pushforward measure $M_* P_\theta$ has density $2m/\theta^2$ on $[0, \theta]$.

$\mu_m$ is the conditional probability distribution given $M = m$, and it does not depend on $\theta$.

This holds for any $\theta$ – which means that $M$ is sufficient.
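Here is a small simulation sketch of this sufficiency claim (the values $\theta = 1$ and $\theta = 3$ and the conditioning window are my own choices): conditionally on $M$ falling in a fixed quantile window, the ratio $\min(X, Y)/M$ should be approximately Uniform(0, 1) – in particular, have mean about 1/2 – for every $\theta$:

```python
import random

# For the uniform distribution on [0, theta]^2, the conditional law given
# M = max(X, Y) ~ m should not depend on theta.  We check that min(X, Y)/M,
# conditioned on M falling in a fixed quantile window, has mean ~ 1/2
# (as it would if the conditional law were uniform on the two edges).
random.seed(2)

def conditional_mean(theta, n=400_000):
    lo, hi = 0.6 * theta, 0.62 * theta  # same quantile window for every theta
    vals = []
    for _ in range(n):
        x, y = random.uniform(0, theta), random.uniform(0, theta)
        m = max(x, y)
        if lo <= m < hi:
            vals.append(min(x, y) / m)
    return sum(vals) / len(vals)

m1, m3 = conditional_mean(1.0), conditional_mean(3.0)
print(m1, m3)  # both should be close to 0.5
```

The two empirical means agree with each other and with 1/2, regardless of $\theta$ – the disintegration along $M$ really does not see the parameter.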
It seems that in most cases disintegration is not a tool for finding the conditional distribution. Rather, it can help you guess it and then, by uniqueness, prove that the guess is correct. That correctness can be nontrivial – there are paradoxes similar to the Borel paradox in the Chang & Pollard paper.