Law of total variance

Theorem in probability theory

In probability theory, the law of total variance,[1] also known as the variance decomposition formula, the conditional variance formula, the law of iterated variances, or Eve's law,[2] states that if X and Y are random variables on the same probability space, and the variance of Y is finite, then

{\displaystyle \operatorname {Var} (Y)=\operatorname {E} [\operatorname {Var} (Y\mid X)]+\operatorname {Var} (\operatorname {E} [Y\mid X]).}

In language perhaps better known to statisticians than to probability theorists, the two terms are the "unexplained" and the "explained" components of the variance respectively (cf. fraction of variance unexplained, explained variation). In actuarial science, specifically credibility theory, the first component is called the expected value of the process variance (EVPV) and the second is called the variance of the hypothetical means (VHM).[3] These two components are also the source of the term "Eve's law", from the initials EV VE for "expectation of variance" and "variance of expectation".

Explanation

To understand the formula above, we need to comprehend the random variables E[Y | X] and Var(Y | X). These variables depend on the value of X: for a given x, E[Y | X = x] and Var(Y | X = x) are constant numbers. Essentially, we use the possible values of X to group the outcomes and then compute the expected values and variances for each group.

The "unexplained" component E[Var(Y | X)] is simply the average of the variances of Y within each group. The "explained" component Var(E[Y | X]) is the variance of the expected values; that is, it represents the part of the variance that is explained by the variation of the average value of Y across the groups.
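This grouping can be made concrete with a short computation. The sketch below (plain Python; the observations are made up for illustration) groups outcomes by the value of X, computes the two components as empirical averages, and checks that they add up to the total variance:

```python
from collections import defaultdict

# Hypothetical (x, y) observations; probabilities are the empirical frequencies.
data = [("a", 1.0), ("a", 3.0), ("b", 2.0), ("b", 6.0), ("b", 10.0)]

def mean(vals):
    return sum(vals) / len(vals)

def var(vals):  # population variance
    m = mean(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

# Group the y-values by the value of x.
groups = defaultdict(list)
for x, y in data:
    groups[x].append(y)

n = len(data)
# "Unexplained": average of the within-group variances, weighted by group frequency.
unexplained = sum(len(ys) * var(ys) for ys in groups.values()) / n
# "Explained": variance of the group means, weighted by group frequency.
overall_mean = mean([y for _, y in data])
explained = sum(len(ys) * (mean(ys) - overall_mean) ** 2
                for ys in groups.values()) / n

total = var([y for _, y in data])
print(unexplained, explained, total)  # unexplained + explained equals total
```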

Weight of dogs by breed

For an illustration, consider the example of a dog show (a selected excerpt of Analysis_of_variance#Example). Let the random variable Y correspond to the dog's weight and X correspond to the breed. In this situation, it is reasonable to expect that the breed explains a major portion of the variance in weight, since there is a large variance among the breeds' average weights. Of course, there is still some variance in weight within each breed, which is taken into account in the "unexplained" term.

Note that the "explained" term actually means "explained by the averages." If the variances for each fixed X (e.g., for each breed in the example above) are very different from one another, those variances are still combined in the "unexplained" term.

Examples

Example 1

Five graduate students take an exam that is graded from 0 to 100. Let Y denote a student's grade and let X indicate whether the student is international or domestic. The data are summarized as follows:

Student   Y     X
1         20    International
2         30    International
3         100   International
4         40    Domestic
5         60    Domestic

Among international students, the mean is E[Y | X = International] = 50 and the variance is Var(Y | X = International) = 3800/3 ≈ 1266.67.

Among domestic students, the mean is E[Y | X = Domestic] = 50 and the variance is Var(Y | X = Domestic) = 100.

X              P(X)   E[Y | X]   Var(Y | X)
International  3/5    50         3800/3
Domestic       2/5    50         100

The part of the variance of Y "unexplained" by X is the mean of the variances for each group. In this case, it is (3/5)(3800/3) + (2/5)(100) = 760 + 40 = 800. The part of the variance of Y "explained" by X is the variance of the means of Y inside each group defined by the values of X. In this case, it is zero, since the mean is the same for each group. So the total variance is

{\displaystyle \operatorname {Var} (Y)=\operatorname {E} [\operatorname {Var} (Y|X)]+\operatorname {Var} (\operatorname {E} [Y|X])=800+0=800.}
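The arithmetic of Example 1 can be reproduced directly; a minimal Python check:

```python
grades = {"International": [20, 30, 100], "Domestic": [40, 60]}

def mean(vals):
    return sum(vals) / len(vals)

def pvar(vals):  # population variance
    m = mean(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

n = sum(len(vals) for vals in grades.values())
all_grades = [g for vals in grades.values() for g in vals]
overall = mean(all_grades)

# E[Var(Y | X)]: within-group variances weighted by P(X)
unexplained = sum(len(vals) / n * pvar(vals) for vals in grades.values())
# Var(E[Y | X]): variance of the group means, weighted by P(X)
explained = sum(len(vals) / n * (mean(vals) - overall) ** 2
                for vals in grades.values())

print(unexplained, explained, pvar(all_grades))  # unexplained ≈ 800, explained = 0
```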

Example 2

Suppose X is a coin flip with probability h of heads. Suppose that when X = heads, Y is drawn from a normal distribution with mean μh and standard deviation σh, and that when X = tails, Y is drawn from a normal distribution with mean μt and standard deviation σt. Then the first, "unexplained" term on the right-hand side of the above formula is the weighted average of the conditional variances, hσh² + (1 − h)σt², and the second, "explained" term is the variance of the distribution that gives μh with probability h and μt with probability 1 − h.
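This can be checked without simulation: the mixture's total variance, computed from its first two moments, must equal the sum of the EVPV and VHM terms. The parameter values below are arbitrary, chosen only for illustration:

```python
# Arbitrary illustrative parameters for the coin-flip mixture.
h = 0.3                     # P(heads)
mu_h, sigma_h = 5.0, 2.0    # Y | heads ~ N(mu_h, sigma_h^2)
mu_t, sigma_t = 1.0, 0.5    # Y | tails ~ N(mu_t, sigma_t^2)

# Total variance via E[Y^2] - E[Y]^2 for the two-component mixture.
ey = h * mu_h + (1 - h) * mu_t
ey2 = h * (sigma_h**2 + mu_h**2) + (1 - h) * (sigma_t**2 + mu_t**2)
total_var = ey2 - ey**2

# Law of total variance: EVPV + VHM.
evpv = h * sigma_h**2 + (1 - h) * sigma_t**2          # E[Var(Y | X)]
vhm = h * (mu_h - ey)**2 + (1 - h) * (mu_t - ey)**2   # Var(E[Y | X])

assert abs(total_var - (evpv + vhm)) < 1e-12
```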

Formulation

There is a general variance decomposition formula for c ≥ 2 components (see below).[4] For example, with two conditioning random variables:

{\displaystyle \operatorname {Var} [Y]=\operatorname {E} \left[\operatorname {Var} \left(Y\mid X_{1},X_{2}\right)\right]+\operatorname {E} [\operatorname {Var} (\operatorname {E} \left[Y\mid X_{1},X_{2}\right]\mid X_{1})]+\operatorname {Var} (\operatorname {E} \left[Y\mid X_{1}\right]),}
which follows from the law of total conditional variance:[4]
{\displaystyle \operatorname {Var} (Y\mid X_{1})=\operatorname {E} \left[\operatorname {Var} (Y\mid X_{1},X_{2})\mid X_{1}\right]+\operatorname {Var} \left(\operatorname {E} \left[Y\mid X_{1},X_{2}\right]\mid X_{1}\right).}
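The three-term identity can be verified numerically on any discrete joint distribution. The sketch below uses an illustrative joint pmf over (X1, X2, Y) (not from the source) and computes all three terms:

```python
from collections import defaultdict

# Illustrative joint pmf over (x1, x2, y); probabilities sum to 1.
pmf = {
    (0, 0, 1.0): 0.10, (0, 0, 3.0): 0.15,
    (0, 1, 2.0): 0.20, (0, 1, 5.0): 0.05,
    (1, 0, 4.0): 0.25, (1, 1, 6.0): 0.25,
}

def moments(weighted):  # list of (value, weight) -> (mean, variance)
    p = sum(w for _, w in weighted)
    m = sum(v * w for v, w in weighted) / p
    var = sum((v - m) ** 2 * w for v, w in weighted) / p
    return m, var

# Var(Y) from the marginal of Y.
_, var_y = moments([(y, w) for (x1, x2, y), w in pmf.items()])

# Group y-values (with weights) by (x1, x2) and by x1 alone.
by_x1x2, by_x1 = defaultdict(list), defaultdict(list)
for (x1, x2, y), w in pmf.items():
    by_x1x2[(x1, x2)].append((y, w))
    by_x1[x1].append((y, w))

# Term 1: E[Var(Y | X1, X2)]
term1 = sum(sum(w for _, w in g) * moments(g)[1] for g in by_x1x2.values())

# Term 3: Var(E[Y | X1])
p_x1 = {x1: sum(w for _, w in g) for x1, g in by_x1.items()}
m_x1 = {x1: moments(g)[0] for x1, g in by_x1.items()}
ey = sum(p_x1[x1] * m_x1[x1] for x1 in p_x1)
term3 = sum(p_x1[x1] * (m_x1[x1] - ey) ** 2 for x1 in p_x1)

# Term 2: E[Var(E[Y | X1, X2] | X1)]
term2 = 0.0
for x1 in p_x1:
    pairs = [(moments(g)[0], sum(w for _, w in g))
             for (a, b), g in by_x1x2.items() if a == x1]
    term2 += p_x1[x1] * moments(pairs)[1]

assert abs(var_y - (term1 + term2 + term3)) < 1e-9
```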

Note that the conditional expected value E(Y | X) is a random variable in its own right, whose value depends on the value of X. The conditional expected value of Y given the event X = x is a function of x (this is where adherence to the conventional, rigidly case-sensitive notation of probability theory becomes important!). If we write E(Y | X = x) = g(x), then the random variable E(Y | X) is just g(X). Similar comments apply to the conditional variance.

One special case (similar to the law of total expectation) states that if A1, ..., An is a partition of the whole outcome space, that is, these events are mutually exclusive and exhaustive, then

{\displaystyle {\begin{aligned}\operatorname {Var} (X)={}&\sum _{i=1}^{n}\operatorname {Var} (X\mid A_{i})\Pr(A_{i})+\sum _{i=1}^{n}\operatorname {E} [X\mid A_{i}]^{2}(1-\Pr(A_{i}))\Pr(A_{i})\\[4pt]&{}-2\sum _{i=2}^{n}\sum _{j=1}^{i-1}\operatorname {E} [X\mid A_{i}]\Pr(A_{i})\operatorname {E} [X\mid A_{j}]\Pr(A_{j}).\end{aligned}}}

In this formula, the first component is the expectation of the conditional variance; the other two components are the variance of the conditional expectation.
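A quick numerical check of this special case (the partition probabilities, conditional means, and variances below are illustrative values; any partition works):

```python
# A three-event partition with conditional means/variances of X given each event.
p = [0.2, 0.5, 0.3]     # Pr(A_i)
m = [1.0, 4.0, -2.0]    # E[X | A_i]
v = [0.5, 2.0, 1.0]     # Var(X | A_i)

# Right-hand side of the special-case formula above.
rhs = sum(v[i] * p[i] for i in range(3))
rhs += sum(m[i] ** 2 * (1 - p[i]) * p[i] for i in range(3))
rhs -= 2 * sum(m[i] * p[i] * m[j] * p[j]
               for i in range(1, 3) for j in range(i))

# Standard two-term decomposition: E[Var(X | A)] + Var(E[X | A]).
ex = sum(m[i] * p[i] for i in range(3))
lhs = sum(v[i] * p[i] for i in range(3)) \
    + sum(p[i] * (m[i] - ex) ** 2 for i in range(3))

assert abs(lhs - rhs) < 1e-12
```

The check works because the second and third sums on the right-hand side together expand to Σ p_i m_i² − (Σ p_i m_i)², which is exactly the variance of the conditional means.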

Proof

Finite case

Let (x1, y1), ..., (xn, yn) be the observed values of (X, Y), with repetitions.

Set ȳ = E[Y] and, for each possible value x of X, set ȳ_x = E[Y | X = x].

Note that

{\displaystyle (y_{i}-{\bar {y}})^{2}=\left(y_{i}-{\bar {y}}_{x_{i}}+{\bar {y}}_{x_{i}}-{\bar {y}}\right)^{2}=(y_{i}-{\bar {y}}_{x_{i}})^{2}+({\bar {y}}_{x_{i}}-{\bar {y}})^{2}+2(y_{i}-{\bar {y}}_{x_{i}})({\bar {y}}_{x_{i}}-{\bar {y}}).}

Summing these over 1 ≤ i ≤ n, the last term becomes

{\displaystyle \sum _{i=1}^{n}2(y_{i}-{\bar {y}}_{x_{i}})({\bar {y}}_{x_{i}}-{\bar {y}})=2\sum _{x}\left(\sum _{\{1\leq i\leq n|x_{i}=x\}}(y_{i}-{\bar {y}}_{x})\right)({\bar {y}}_{x}-{\bar {y}})=2\sum _{x}0\cdot ({\bar {y}}_{x}-{\bar {y}})=0.}

Hence,

{\displaystyle \operatorname {Var} (Y)={\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-{\bar {y}})^{2}={\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-{\bar {y}}_{x_{i}})^{2}+{\frac {1}{n}}\sum _{i=1}^{n}({\bar {y}}_{x_{i}}-{\bar {y}})^{2}=\operatorname {E} [\operatorname {Var} (Y\mid X)]+\operatorname {Var} (\operatorname {E} [Y\mid X]).}

General case

The law of total variance can be proved using the law of total expectation.[5] First,

{\displaystyle \operatorname {Var} [Y]=\operatorname {E} \left[Y^{2}\right]-\operatorname {E} [Y]^{2}}
from the definition of variance. Again, from the definition of variance, and applying the law of total expectation, we have
{\displaystyle \operatorname {E} \left[Y^{2}\right]=\operatorname {E} \left[\operatorname {E} [Y^{2}\mid X]\right]=\operatorname {E} \left[\operatorname {Var} [Y\mid X]+\operatorname {E} [Y\mid X]^{2}\right].}

Now we subtract E[Y]² from both sides; applying the law of total expectation to E[Y] on the right-hand side gives E[Y] = E[E[Y | X]], so

{\displaystyle \operatorname {E} \left[Y^{2}\right]-\operatorname {E} [Y]^{2}=\operatorname {E} \left[\operatorname {Var} [Y\mid X]+\operatorname {E} [Y\mid X]^{2}\right]-\operatorname {E} [\operatorname {E} [Y\mid X]]^{2}.}

Since the expectation of a sum is the sum of expectations, the terms can now be regrouped:

{\displaystyle =\left(\operatorname {E} [\operatorname {Var} [Y\mid X]]\right)+\left(\operatorname {E} \left[\operatorname {E} [Y\mid X]^{2}\right]-\operatorname {E} [\operatorname {E} [Y\mid X]]^{2}\right).}

Finally, we recognize the terms in the second set of parentheses as the variance of the conditional expectation E[Y | X]:

{\displaystyle =\operatorname {E} [\operatorname {Var} [Y\mid X]]+\operatorname {Var} [\operatorname {E} [Y\mid X]].}

General variance decomposition applicable to dynamic systems

The following formula shows how to apply the general, measure-theoretic variance decomposition formula[4] to stochastic dynamic systems. Let Y(t) be the value of a system variable at time t. Suppose we have the internal histories (natural filtrations) H_{1t}, H_{2t}, ..., H_{c−1,t}, each one corresponding to the history (trajectory) of a different collection of system variables. The collections need not be disjoint. The variance of Y(t) can be decomposed, for all times t, into c ≥ 2 components as follows:

{\displaystyle {\begin{aligned}\operatorname {Var} [Y(t)]={}&\operatorname {E} (\operatorname {Var} [Y(t)\mid H_{1t},H_{2t},\ldots ,H_{c-1,t}])\\[4pt]&{}+\sum _{j=2}^{c-1}\operatorname {E} (\operatorname {Var} [\operatorname {E} [Y(t)\mid H_{1t},H_{2t},\ldots ,H_{jt}]\mid H_{1t},H_{2t},\ldots ,H_{j-1,t}])\\[4pt]&{}+\operatorname {Var} (\operatorname {E} [Y(t)\mid H_{1t}]).\end{aligned}}}

The decomposition is not unique. It depends on the order of the conditioning in the sequential decomposition.

The square of the correlation and explained (or informational) variation

In cases where (Y, X) are such that the conditional expected value is linear; that is, in cases where

{\displaystyle \operatorname {E} (Y\mid X)=aX+b,}

it follows from the bilinearity of covariance that

{\displaystyle a={\operatorname {Cov} (Y,X) \over \operatorname {Var} (X)}}

and

{\displaystyle b=\operatorname {E} (Y)-{\operatorname {Cov} (Y,X) \over \operatorname {Var} (X)}\operatorname {E} (X)}

and the explained component of the variance divided by the total variance is just the square of the correlation between Y and X; that is, in such cases,

{\displaystyle {\operatorname {Var} (\operatorname {E} (Y\mid X)) \over \operatorname {Var} (Y)}=\operatorname {Corr} (X,Y)^{2}.}

One example of this situation is when (X, Y) have a bivariate normal (Gaussian) distribution.
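The linear case can be checked analytically. The sketch below assumes the model Y = aX + ε with ε independent of X (a standard way to realize a linear conditional mean, e.g. the bivariate normal); the parameter values are illustrative:

```python
import math

# Assumed linear model: Y = a*X + b + eps, eps independent of X,
# so E(Y | X) = a*X + b.
a, b = 2.0, 1.0
var_x = 3.0      # Var(X)
var_eps = 4.0    # Var(eps)

var_y = a**2 * var_x + var_eps       # total variance of Y
explained = a**2 * var_x             # Var(E(Y | X)) = Var(a*X + b)
cov_xy = a * var_x                   # Cov(X, Y)
corr2 = cov_xy**2 / (var_x * var_y)  # squared correlation of X and Y

# Explained fraction of variance equals the squared correlation.
assert math.isclose(explained / var_y, corr2)
```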

More generally, when the conditional expectation E(Y | X) is a non-linear function of X,[4]

{\displaystyle \iota _{Y\mid X}={\operatorname {Var} (\operatorname {E} (Y\mid X)) \over \operatorname {Var} (Y)}=\operatorname {Corr} (\operatorname {E} (Y\mid X),Y)^{2},}

which can be estimated as the R² from a non-linear regression of Y on X, using data drawn from the joint distribution of (X, Y). When E(Y | X) has a Gaussian distribution (and is an invertible function of X), or Y itself has a (marginal) Gaussian distribution, this explained component of variation sets a lower bound on the mutual information:[4]

{\displaystyle \operatorname {I} (Y;X)\geq \ln \left([1-\iota _{Y\mid X}]^{-1/2}\right).}

Higher moments

A similar law for the third central moment μ3 says

{\displaystyle \mu _{3}(Y)=\operatorname {E} \left(\mu _{3}(Y\mid X)\right)+\mu _{3}(\operatorname {E} (Y\mid X))+3\operatorname {cov} (\operatorname {E} (Y\mid X),\operatorname {var} (Y\mid X)).}

For higher cumulants, a generalization exists. See law of total cumulance.
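The third-moment law above can also be verified on a small discrete joint distribution (an arbitrary pmf, chosen only for illustration):

```python
from collections import defaultdict

# Illustrative joint pmf over (x, y); probabilities sum to 1.
pmf = {(0, 1.0): 0.2, (0, 4.0): 0.3, (1, 2.0): 0.1, (1, 7.0): 0.4}

def central_moments(weighted):  # (value, weight) pairs -> (mean, var, mu3)
    p = sum(w for _, w in weighted)
    mean = sum(v * w for v, w in weighted) / p
    var = sum((v - mean) ** 2 * w for v, w in weighted) / p
    mu3 = sum((v - mean) ** 3 * w for v, w in weighted) / p
    return mean, var, mu3

groups = defaultdict(list)
for (x, y), w in pmf.items():
    groups[x].append((y, w))

p_x = {x: sum(w for _, w in g) for x, g in groups.items()}
cond = {x: central_moments(g) for x, g in groups.items()}  # (m, v, mu3) per x

# Left-hand side: third central moment of Y itself.
_, _, lhs = central_moments([(y, w) for (x, y), w in pmf.items()])

# Right-hand side: E[mu3(Y|X)] + mu3(E(Y|X)) + 3*cov(E(Y|X), Var(Y|X)).
e_mu3 = sum(p_x[x] * cond[x][2] for x in p_x)
_, _, mu3_of_means = central_moments([(cond[x][0], p_x[x]) for x in p_x])
em = sum(p_x[x] * cond[x][0] for x in p_x)
ev = sum(p_x[x] * cond[x][1] for x in p_x)
cov_mv = sum(p_x[x] * (cond[x][0] - em) * (cond[x][1] - ev) for x in p_x)

assert abs(lhs - (e_mu3 + mu3_of_means + 3 * cov_mv)) < 1e-9
```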


References

  1. ^ Neil A. Weiss, A Course in Probability, Addison–Wesley, 2005, pages 385–386.
  2. ^ Joseph K. Blitzstein and Jessica Hwang, Introduction to Probability, Chapman & Hall/CRC, 2014.
  3. ^ Mahler, Howard C.; Dean, Curtis Gary (2001). "Chapter 8: Credibility" (PDF). In Casualty Actuarial Society (ed.). Foundations of Casualty Actuarial Science (4th ed.). Casualty Actuarial Society. pp. 525–526. ISBN 978-0-96247-622-8. Retrieved June 25, 2015.
  4. ^ a b c d e Bowsher, C.G. and P.S. Swain, Identifying sources of variation and the flow of information in biochemical networks, PNAS May 15, 2012 109 (20) E1320-E1328.
  5. ^ Neil A. Weiss, A Course in Probability, Addison–Wesley, 2005, pages 380–383.
  • Blitzstein, Joe. "Stat 110 Final Review (Eve's Law)" (PDF). stat110.net. Harvard University, Department of Statistics. Retrieved 9 July 2014.
  • Billingsley, Patrick (1995). Probability and Measure. New York, NY: John Wiley & Sons, Inc. ISBN 0-471-00710-2. (Problem 34.10(b))