Variance estimation is a statistical inference problem in which a sample is used to produce a point estimate of the variance of an unknown distribution.
The problem is typically solved by using the sample variance as an estimator of the population variance.
In this lecture, we present two examples, concerning:
IID samples from a normal distribution whose mean is known;
IID samples from a normal distribution whose mean is unknown.
For each of these two cases, we derive the expected value, the variance, the distribution, the mean squared error and the asymptotic properties of the variance estimators.
In the first example of variance estimation (normal IID samples with known mean), we make assumptions that are similar to those made for the mean estimation of normal IID samples.
The sample is made of independent draws from a normal distribution.
Specifically, we observe the realizations of $n$ independent random variables $X_1$, ..., $X_n$, all having a normal distribution with:
known mean $\mu$;
unknown variance $\sigma^2$.
We use the following estimator of variance:
$$\widehat{\sigma}^2_n = \frac{1}{n}\sum_{i=1}^{n}(X_i-\mu)^2.$$
The expected value of the estimator $\widehat{\sigma}^2_n$ is equal to the true variance $\sigma^2$:
$$\operatorname{E}\big[\widehat{\sigma}^2_n\big] = \sigma^2.$$
This can be proved using the linearity of the expected value:
$$\operatorname{E}\big[\widehat{\sigma}^2_n\big] = \operatorname{E}\Big[\frac{1}{n}\sum_{i=1}^{n}(X_i-\mu)^2\Big] = \frac{1}{n}\sum_{i=1}^{n}\operatorname{E}\big[(X_i-\mu)^2\big] = \frac{1}{n}\sum_{i=1}^{n}\operatorname{Var}\big[X_i\big] = \frac{1}{n}\,n\sigma^2 = \sigma^2.$$
Therefore, the estimator is unbiased.
The variance of the estimator is
$$\operatorname{Var}\big[\widehat{\sigma}^2_n\big] = \frac{2\sigma^4}{n}.$$
This can be proved using the fact that $\operatorname{E}\big[(X_i-\mu)^4\big] = 3\sigma^4$ for a normal distribution and the formula for the variance of a sum of independent random variables:
$$\operatorname{Var}\big[\widehat{\sigma}^2_n\big] = \frac{1}{n^2}\sum_{i=1}^{n}\operatorname{Var}\big[(X_i-\mu)^2\big] = \frac{1}{n^2}\sum_{i=1}^{n}\Big(\operatorname{E}\big[(X_i-\mu)^4\big] - \big(\operatorname{E}\big[(X_i-\mu)^2\big]\big)^2\Big) = \frac{1}{n^2}\sum_{i=1}^{n}\big(3\sigma^4-\sigma^4\big) = \frac{2\sigma^4}{n}.$$
Therefore, the variance of the estimator tends to zero as the sample size tends to infinity.
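As a quick numerical sanity check (not part of the original lecture), the following Python sketch simulates the known-mean estimator and compares its empirical mean and variance with the formulas above; the parameter values, sample size and number of replications are arbitrary choices.

```python
import numpy as np

# Monte Carlo check of the known-mean variance estimator (all numbers below are arbitrary assumptions)
rng = np.random.default_rng(0)
mu, sigma2, n, reps = 5.0, 4.0, 20, 200_000

samples = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
est = np.mean((samples - mu) ** 2, axis=1)   # (1/n) * sum (X_i - mu)^2, one value per replication

print("empirical mean of estimator:", est.mean(), "  theory:", sigma2)                 # ~ 4.0
print("empirical variance of estimator:", est.var(), "  theory:", 2 * sigma2**2 / n)   # ~ 1.6
```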
The estimator $\widehat{\sigma}^2_n$ has a Gamma distribution with parameters $n$ and $\sigma^2$.
The estimator can be written as
$$\widehat{\sigma}^2_n = \frac{1}{n}\sum_{i=1}^{n}(X_i-\mu)^2 = \frac{\sigma^2}{n}\sum_{i=1}^{n}\Big(\frac{X_i-\mu}{\sigma}\Big)^2 = \frac{\sigma^2}{n}\,Z,$$
where the variables $\frac{X_i-\mu}{\sigma}$ are independent standard normal random variables and
$$Z = \sum_{i=1}^{n}\Big(\frac{X_i-\mu}{\sigma}\Big)^2,$$
being a sum of squares of $n$ independent standard normal random variables, has a Chi-square distribution with $n$ degrees of freedom (see the lecture entitled Chi-square distribution for more details). Multiplying a Chi-square random variable with $n$ degrees of freedom by $\sigma^2/n$, one obtains a Gamma random variable with parameters $n$ and $\sigma^2$ (see the lecture entitled Gamma distribution for more details).
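To illustrate the distributional claim numerically, the short sketch below rescales the estimator as $n\widehat{\sigma}^2_n/\sigma^2$ and checks that its first two moments match those of a Chi-square distribution with $n$ degrees of freedom (mean $n$, variance $2n$); the seed and parameter values are again arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma2, n, reps = 0.0, 2.0, 10, 200_000

samples = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
est = np.mean((samples - mu) ** 2, axis=1)
q = n * est / sigma2   # should behave like a Chi-square with n degrees of freedom

print("mean of q:", q.mean(), "  expected:", n)          # ~ 10
print("variance of q:", q.var(), "  expected:", 2 * n)   # ~ 20
```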
The mean squared error of the estimator is
$$\operatorname{MSE}\big(\widehat{\sigma}^2_n\big) = \operatorname{E}\Big[\big(\widehat{\sigma}^2_n-\sigma^2\big)^2\Big] = \operatorname{Var}\big[\widehat{\sigma}^2_n\big] = \frac{2\sigma^4}{n},$$
where the second equality holds because the estimator is unbiased.
The estimator
$$\widehat{\sigma}^2_n = \frac{1}{n}\sum_{i=1}^{n}(X_i-\mu)^2$$
can be viewed as the sample mean of a sequence $\{Y_i\}$ whose generic term is
$$Y_i = (X_i-\mu)^2.$$
Since $\{Y_i\}$ is an IID sequence with finite mean, it satisfies the conditions of Kolmogorov's Strong Law of Large Numbers.
Therefore, the sample mean of $Y_i$ converges almost surely to the true mean $\operatorname{E}\big[Y_i\big] = \sigma^2$:
$$\widehat{\sigma}^2_n \xrightarrow{\text{a.s.}} \sigma^2.$$
In other words, the estimator is strongly consistent.
It is also weakly consistent, because almost sure convergence implies convergence in probability:
$$\operatorname*{plim}_{n\to\infty}\widehat{\sigma}^2_n = \sigma^2.$$
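The consistency can also be visualized numerically. The sketch below (with arbitrary, assumed parameter values) computes the running value of the estimator along a single simulated path and shows it approaching $\sigma^2$ as the sample size grows.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma2 = 1.0, 3.0
n_max = 100_000

x = rng.normal(mu, np.sqrt(sigma2), size=n_max)
running_est = np.cumsum((x - mu) ** 2) / np.arange(1, n_max + 1)   # estimator computed on the first n draws

for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"n = {n:6d}   estimate = {running_est[n - 1]:.4f}   (true sigma^2 = {sigma2})")
```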
This example of variance estimation is similar to the previous one. The only difference is that we relax the assumption that the mean of the distribution is known.
The sample is made of independent draws from a normal distribution.
Specifically, we observe the realizations of $n$ independent random variables $X_1$, ..., $X_n$, all having a normal distribution with:
unknown mean $\mu$;
unknown variance $\sigma^2$.
In this example, the mean $\mu$ of the distribution is also unknown, so it too needs to be estimated.
It is estimated with the sample mean $\bar{X}_n$:
$$\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n}X_i.$$
We use the following estimators of variance: the unadjusted sample variance
$$S_n^2 = \frac{1}{n}\sum_{i=1}^{n}\big(X_i-\bar{X}_n\big)^2$$
and the adjusted sample variance
$$s_n^2 = \frac{1}{n-1}\sum_{i=1}^{n}\big(X_i-\bar{X}_n\big)^2.$$
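As a minimal sketch (not part of the original lecture), the two estimators can be computed in Python as follows; the sample values are made up for illustration, and NumPy's ddof argument switches between the two divisors.

```python
import numpy as np

x = np.array([2.1, -0.3, 1.7, 0.9, 3.2])   # hypothetical sample
n = x.size
xbar = x.mean()

unadjusted = np.sum((x - xbar) ** 2) / n        # divides by n
adjusted = np.sum((x - xbar) ** 2) / (n - 1)    # divides by n - 1

print(unadjusted, np.var(x))           # np.var uses ddof=0 (unadjusted) by default
print(adjusted, np.var(x, ddof=1))     # ddof=1 gives the adjusted sample variance
```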
The expected value of the unadjusted sample variance is
$$\operatorname{E}\big[S_n^2\big] = \frac{n-1}{n}\sigma^2.$$
This can be proved as follows. First note that
$$S_n^2 = \frac{1}{n}\sum_{i=1}^{n}\big(X_i-\bar{X}_n\big)^2 = \frac{1}{n}\sum_{i=1}^{n}X_i^2 - \bar{X}_n^2,$$
so that
$$\operatorname{E}\big[S_n^2\big] = \frac{1}{n}\sum_{i=1}^{n}\operatorname{E}\big[X_i^2\big] - \operatorname{E}\big[\bar{X}_n^2\big] = \sigma^2 + \mu^2 - \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n}\operatorname{E}\big[X_iX_j\big].$$
But $\operatorname{E}\big[X_iX_j\big] = \operatorname{E}\big[X_i\big]\operatorname{E}\big[X_j\big] = \mu^2$ when $i\neq j$ (because $X_i$ and $X_j$ are independent when $i\neq j$ - see Mutual independence via expectations), while $\operatorname{E}\big[X_iX_j\big] = \operatorname{E}\big[X_i^2\big] = \sigma^2+\mu^2$ when $i=j$. Therefore,
$$\operatorname{E}\big[S_n^2\big] = \sigma^2 + \mu^2 - \frac{1}{n^2}\big(n(\sigma^2+\mu^2) + n(n-1)\mu^2\big) = \sigma^2 + \mu^2 - \frac{\sigma^2}{n} - \mu^2 = \frac{n-1}{n}\sigma^2.$$
Therefore, the unadjusted sample variance $S_n^2$ is a biased estimator of the true variance $\sigma^2$.
The adjusted sample variance $s_n^2$, on the contrary, is an unbiased estimator of variance:
$$\operatorname{E}\big[s_n^2\big] = \sigma^2.$$
This can be proved as follows:
$$\operatorname{E}\big[s_n^2\big] = \operatorname{E}\Big[\frac{n}{n-1}S_n^2\Big] = \frac{n}{n-1}\operatorname{E}\big[S_n^2\big] = \frac{n}{n-1}\cdot\frac{n-1}{n}\sigma^2 = \sigma^2.$$
Thus, when the mean $\mu$ also needs to be estimated, we must divide by $n-1$ rather than by $n$ to obtain an unbiased estimator.
Intuitively, by considering squared deviations from the sample mean rather than squared deviations from the true mean, we are underestimating the true variability of the data.
In fact, the sum of squared deviations from the true mean is always larger than the sum of squared deviations from the sample mean, because the sample mean is precisely the value that minimizes the sum of squared deviations.
Dividing by $n-1$ rather than by $n$ exactly corrects this bias. The number $n-1$ by which we divide is called the number of degrees of freedom and it is equal to the number of sample points ($n$) minus the number of other parameters to be estimated (in our case $1$, the true mean $\mu$).
The factor by which we need to multiply the biased estimator $S_n^2$ to obtain the unbiased estimator $s_n^2$ is
$$\frac{n}{n-1}.$$
This factor is known as the degrees of freedom adjustment, which explains why $S_n^2$ is called the unadjusted sample variance and $s_n^2$ is called the adjusted sample variance.
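The bias and the degrees-of-freedom adjustment can be checked by simulation; the sketch below is not from the original lecture and uses arbitrary, assumed parameter values.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma2, n, reps = 0.0, 1.0, 5, 500_000

samples = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
unadjusted = samples.var(axis=1, ddof=0)   # S_n^2 for each replication
adjusted = samples.var(axis=1, ddof=1)     # s_n^2 for each replication

print("mean of unadjusted:", unadjusted.mean(), "  theory:", (n - 1) / n * sigma2)   # biased downward
print("mean of adjusted:  ", adjusted.mean(), "  theory:", sigma2)                    # unbiased
print("adjusted equals n/(n-1) times unadjusted:",
      np.allclose(adjusted, n / (n - 1) * unadjusted))
```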
The variance of the unadjusted sample variance is
$$\operatorname{Var}\big[S_n^2\big] = \frac{2(n-1)\sigma^4}{n^2}.$$
This is proved in the following subsection (distribution of the estimator).
The variance of the adjusted sample variance is
$$\operatorname{Var}\big[s_n^2\big] = \frac{2\sigma^4}{n-1}.$$
This is also proved in the following subsection (distribution of the estimator).
Therefore, both the variance of $S_n^2$ and the variance of $s_n^2$ converge to zero as the sample size $n$ tends to infinity.
Note that the unadjusted sample variance $S_n^2$, despite being biased, has a smaller variance than the adjusted sample variance $s_n^2$, which is instead unbiased.
The unadjusted sample variance $S_n^2$ has a Gamma distribution with parameters $n-1$ and $\frac{n-1}{n}\sigma^2$.
To prove this result, we need to use some facts on quadratic forms involving normal random variables, which have been introduced in the lecture entitled Normal distribution - Quadratic forms. To understand this proof, you need to first read that lecture, in particular the section entitled Sample variance as a quadratic form. Define the matrix
$$M = I - \frac{1}{n}\iota\iota^{\top},$$
where $I$ is an $n\times n$ identity matrix and $\iota$ is an $n\times 1$ vector of ones. $M$ is symmetric and idempotent. Denote by $X$ the $n\times 1$ random vector whose $i$-th entry is equal to $X_i$. The random vector $X$ has a multivariate normal distribution with mean $\mu\iota$ and covariance matrix $\sigma^2 I$.
Using the fact that the matrix $M$ is symmetric and idempotent, the unadjusted sample variance can be written as
$$S_n^2 = \frac{1}{n}\big(MX\big)^{\top}\big(MX\big) = \frac{1}{n}X^{\top}MX.$$
By using the fact that the random vector
$$Z = \frac{1}{\sigma}\big(X - \mu\iota\big)$$
has a standard multivariate normal distribution and the fact that $M\iota = 0$, we can rewrite
$$S_n^2 = \frac{1}{n}X^{\top}MX = \frac{\sigma^2}{n}Z^{\top}MZ.$$
In other words, $S_n^2$ is proportional to a quadratic form in a standard normal random vector ($Z^{\top}MZ$) and the quadratic form involves a symmetric and idempotent matrix whose trace is equal to $n-1$. Therefore, the quadratic form $Z^{\top}MZ$ has a Chi-square distribution with $n-1$ degrees of freedom. Finally, we can write
$$S_n^2 = \frac{n-1}{n}\sigma^2\cdot\frac{Z^{\top}MZ}{n-1},$$
that is, $S_n^2$ is a Chi-square random variable divided by its number of degrees of freedom and multiplied by $\frac{n-1}{n}\sigma^2$. Thus, $S_n^2$ is a Gamma random variable with parameters $n-1$ and $\frac{n-1}{n}\sigma^2$ (see the lecture entitled Gamma distribution for an explanation). Also, by the properties of Gamma random variables, its expected value is
$$\operatorname{E}\big[S_n^2\big] = \frac{n-1}{n}\sigma^2$$
and its variance is
$$\operatorname{Var}\big[S_n^2\big] = \frac{2(n-1)\sigma^4}{n^2}.$$
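A small numerical check of the quadratic-form representation (not in the original lecture): the code below builds the matrix M, verifies that it is symmetric and idempotent with trace n-1, and checks that (1/n) X'MX reproduces the unadjusted sample variance; the sample itself is simulated with arbitrary parameters.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
mu, sigma2 = 2.0, 1.5

iota = np.ones((n, 1))
M = np.eye(n) - iota @ iota.T / n            # M = I - (1/n) * iota * iota'

print("symmetric:", np.allclose(M, M.T))
print("idempotent:", np.allclose(M @ M, M))
print("trace:", np.trace(M), "= n - 1 =", n - 1)

x = rng.normal(mu, np.sqrt(sigma2), size=(n, 1))
quadratic_form = (x.T @ M @ x).item() / n    # (1/n) x' M x
unadjusted = np.var(x)                       # (1/n) * sum (x_i - xbar)^2
print("quadratic form / n:", quadratic_form, "  unadjusted sample variance:", unadjusted)
```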
The adjusted sample variance $s_n^2$ has a Gamma distribution with parameters $n-1$ and $\sigma^2$.
The proof of this result is similar to the proof for the unadjusted sample variance found above. It can also be found in the lecture entitled Normal distribution - Quadratic forms. Here, we just notice that $s_n^2$, being a Gamma random variable with parameters $n-1$ and $\sigma^2$, has expected value
$$\operatorname{E}\big[s_n^2\big] = \sigma^2$$
and variance
$$\operatorname{Var}\big[s_n^2\big] = \frac{2\sigma^4}{n-1}.$$
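An equivalent, well-known way of stating this is that $(n-1)s_n^2/\sigma^2$ has a Chi-square distribution with $n-1$ degrees of freedom. The sketch below checks its first two moments against $n-1$ and $2(n-1)$ by simulation; the parameter values are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
mu, sigma2, n, reps = 0.0, 2.5, 8, 300_000

samples = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
s2 = samples.var(axis=1, ddof=1)
q = (n - 1) * s2 / sigma2   # should behave like a Chi-square with n - 1 degrees of freedom

print("mean:", q.mean(), "  expected:", n - 1)            # ~ 7
print("variance:", q.var(), "  expected:", 2 * (n - 1))   # ~ 14
```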
The mean squared error of the unadjusted sample variance is
$$\operatorname{MSE}\big(S_n^2\big) = \frac{2n-1}{n^2}\sigma^4.$$
It can be proved as follows:
$$\operatorname{MSE}\big(S_n^2\big) = \operatorname{Var}\big[S_n^2\big] + \big(\operatorname{E}\big[S_n^2\big]-\sigma^2\big)^2 = \frac{2(n-1)\sigma^4}{n^2} + \Big(\frac{n-1}{n}\sigma^2-\sigma^2\Big)^2 = \frac{2(n-1)\sigma^4}{n^2} + \frac{\sigma^4}{n^2} = \frac{2n-1}{n^2}\sigma^4.$$
The mean squared error of the adjusted sample variance is
$$\operatorname{MSE}\big(s_n^2\big) = \frac{2\sigma^4}{n-1}.$$
It can be proved as follows:
$$\operatorname{MSE}\big(s_n^2\big) = \operatorname{Var}\big[s_n^2\big] + \big(\operatorname{E}\big[s_n^2\big]-\sigma^2\big)^2 = \frac{2\sigma^4}{n-1} + 0 = \frac{2\sigma^4}{n-1}.$$
Therefore, the mean squared error of the unadjusted sample variance is always smaller than the mean squared error of the adjusted sample variance:
$$\operatorname{MSE}\big(S_n^2\big) = \frac{2n-1}{n^2}\sigma^4 < \frac{2}{n-1}\sigma^4 = \operatorname{MSE}\big(s_n^2\big).$$
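The MSE ranking can also be verified by simulation; the following sketch (with arbitrary, assumed parameters) estimates both mean squared errors directly and compares them with the formulas above.

```python
import numpy as np

rng = np.random.default_rng(6)
mu, sigma2, n, reps = 0.0, 1.0, 5, 500_000

samples = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
unadjusted = samples.var(axis=1, ddof=0)
adjusted = samples.var(axis=1, ddof=1)

mse_unadjusted = np.mean((unadjusted - sigma2) ** 2)
mse_adjusted = np.mean((adjusted - sigma2) ** 2)

print("MSE unadjusted:", mse_unadjusted, "  theory:", (2 * n - 1) / n**2 * sigma2**2)   # 0.36 for n = 5
print("MSE adjusted:  ", mse_adjusted, "  theory:", 2 / (n - 1) * sigma2**2)            # 0.50 for n = 5
```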
Both the unadjusted and the adjusted sample variances are consistent estimators of the unknown variance $\sigma^2$.
The unadjusted sample variance can be written as
$$S_n^2 = \frac{1}{n}\sum_{i=1}^{n}\big(X_i-\bar{X}_n\big)^2 = \frac{1}{n}\sum_{i=1}^{n}X_i^2 - \bar{X}_n^2 = \bar{Q}_n - \bar{X}_n^2,$$
where we have defined
$$\bar{Q}_n = \frac{1}{n}\sum_{i=1}^{n}X_i^2.$$
The two sequences $\bar{Q}_n$ and $\bar{X}_n$ are the sample means of $\{X_i^2\}$ and $\{X_i\}$ respectively. The latter both satisfy the conditions of Kolmogorov's Strong Law of Large Numbers (they form IID sequences with finite means), which implies that the sample means $\bar{Q}_n$ and $\bar{X}_n$ converge almost surely to their true means:
$$\bar{Q}_n \xrightarrow{\text{a.s.}} \operatorname{E}\big[X_i^2\big] = \mu^2+\sigma^2, \qquad \bar{X}_n \xrightarrow{\text{a.s.}} \mu.$$
Since the function
$$g(q,x) = q - x^2$$
is continuous and almost sure convergence is preserved by continuous transformations, we obtain
$$S_n^2 = \bar{Q}_n - \bar{X}_n^2 \xrightarrow{\text{a.s.}} \mu^2+\sigma^2-\mu^2 = \sigma^2.$$
Therefore the estimator $S_n^2$ is strongly consistent. It is also weakly consistent because almost sure convergence implies convergence in probability:
$$\operatorname*{plim}_{n\to\infty}S_n^2 = \sigma^2.$$
The adjusted sample variance can be written as
$$s_n^2 = \frac{n}{n-1}S_n^2.$$
The ratio $\frac{n}{n-1}$ can be thought of as a constant random variable $Z_n$ defined as follows:
$$Z_n = \frac{n}{n-1},$$
which converges almost surely to $1$. Therefore,
$$s_n^2 = Z_nS_n^2,$$
where both $Z_n$ and $S_n^2$ are almost surely convergent. Since the product is a continuous function and almost sure convergence is preserved by continuous transformations, we have
$$s_n^2 = Z_nS_n^2 \xrightarrow{\text{a.s.}} 1\cdot\sigma^2 = \sigma^2.$$
Thus, also $s_n^2$ is strongly consistent.
Below you can find some exercises with explained solutions.
You observe three independent draws from a normal distribution having unknown mean $\mu$ and unknown variance $\sigma^2$. Their values are 50, 100 and 150.
Use these values to produce an unbiased estimate of the variance of the distribution.
The sample mean is
$$\bar{x} = \frac{1}{3}\big(50 + 100 + 150\big) = 100.$$
An unbiased estimate of the variance is provided by the adjusted sample variance:
$$s^2 = \frac{1}{3-1}\Big(\big(50-100\big)^2 + \big(100-100\big)^2 + \big(150-100\big)^2\Big) = \frac{2500 + 0 + 2500}{2} = 2500.$$
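The same computation can be reproduced in Python (a sketch, not part of the original exercise), using the adjusted sample variance via ddof=1:

```python
import numpy as np

x = np.array([50.0, 100.0, 150.0])
print("sample mean:", x.mean())                       # 100.0
print("adjusted sample variance:", x.var(ddof=1))     # 2500.0
```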
A machine (a laser rangefinder) is used to measure the distance between the machine itself and a given object.
When measuring the distance to an object located 10 meters away, the measurement errors committed by the machine are normally and independently distributed and are on average equal to zero.
The variance of the measurement errors is less than 1 squared centimeter, but its exact value is unknown and needs to be estimated.
To estimate it, we repeatedly measure the same distance and compute the sample variance of the measurement errors (which we can observe exactly, because we know the true distance).
How many measurements do we need to take to obtain an estimator of variance having a standard deviation less than 0.1 squared centimeters?
Denote the measurement errors by $e_1$, ..., $e_n$. Since their mean is known to be zero, the following estimator of variance is used:
$$\widehat{\sigma}^2_n = \frac{1}{n}\sum_{i=1}^{n}e_i^2.$$
The variance of this estimator is
$$\operatorname{Var}\big[\widehat{\sigma}^2_n\big] = \frac{2\sigma^4}{n}.$$
Thus
$$\operatorname{Std}\big[\widehat{\sigma}^2_n\big] = \sqrt{\frac{2\sigma^4}{n}} = \sigma^2\sqrt{\frac{2}{n}}.$$
We need to ensure that
$$\sigma^2\sqrt{\frac{2}{n}} < 0.1$$
or
$$\sqrt{\frac{2}{n}} < \frac{0.1}{\sigma^2},$$
which, because $\sigma^2 < 1$, is certainly verified if
$$\sqrt{\frac{2}{n}} \leq 0.1$$
or
$$n \geq 200.$$
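A short numerical check of the sample-size requirement (a sketch, not part of the original solution), using the conservative bound $\sigma^2 \leq 1$ assumed in the exercise:

```python
import math

sigma2_bound = 1.0   # the variance is known to be less than 1 squared centimeter
target = 0.1         # required bound on the standard deviation of the estimator

# Std[estimator] = sigma^2 * sqrt(2 / n) <= sqrt(2 / n) when sigma^2 <= 1,
# so it suffices to choose n with sqrt(2 / n) <= 0.1, i.e. n >= 2 / 0.1^2 = 200.
n_required = math.ceil(2 * sigma2_bound**2 / target**2)
print("measurements needed:", n_required)                 # 200
print("check:", math.sqrt(2 / n_required) <= target)      # True
```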
Please cite as:
Taboga, Marco (2021). "Estimation of the variance", Lectures on probability theory and mathematical statistics. Kindle Direct Publishing. Online appendix. https://www.statlect.com/fundamentals-of-statistics/variance-estimation.