The Basics of Interval Estimation

The Random Sample

Suppose that we have a basic random experiment with a random variable X. The distribution of X has an unknown parameter c of interest that we would like to estimate. We repeat the basic experiment n times to generate a random sample of size n from the distribution of X:

(X1, X2, ..., Xn)

Recall that these are independent variables, each with the same distribution as X.
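As a minimal sketch of drawing such a sample in Python, suppose (purely for illustration) that X is normally distributed with mean 5 and standard deviation 2; the distribution, its parameter values, and the sample size here are hypothetical choices, not part of the general setup.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    n = 30                                        # sample size
    # X1, X2, ..., Xn: independent copies of X, here taken to be N(5, 2^2)
    sample = rng.normal(loc=5, scale=2, size=n)

    print(sample[:5])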

Point Estimation

Any estimation procedure should result not only in an estimate of c, but also in some measure of the quality of that estimate. The general method of point estimation results in a statistic

W = h(X1, X2, ..., Xn)

as an estimator of c, with the mean square error

MSE(W) = E[(W - c)^2]

as the measure of the quality of the estimator.
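As a sketch of how the mean square error can be approximated by simulation, suppose (hypothetically) that X is normal with mean c = 5 and standard deviation 2, and that W is the sample mean used to estimate c. The simulation below repeats the sampling experiment many times and averages (W - c)^2; for the sample mean the result should be close to the known value sigma^2 / n.

    import numpy as np

    rng = np.random.default_rng(seed=0)
    c, sigma, n = 5.0, 2.0, 30       # hypothetical true mean, standard deviation, sample size
    reps = 100_000                   # number of simulated samples

    samples = rng.normal(loc=c, scale=sigma, size=(reps, n))
    W = samples.mean(axis=1)         # W = h(X1, ..., Xn): the sample mean of each sample
    mse = np.mean((W - c) ** 2)      # Monte Carlo approximation of E[(W - c)^2]

    print(mse)                       # close to sigma^2 / n = 4 / 30, about 0.133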

Interval Estimation

In this module we consider another approach, one that in some respects is better. We want to find a random interval that contains the parameter c with a specified probability. Such an interval is called a confidence interval, and the specified probability is called the confidence level.

More precisely, a 1 - α confidence interval for c is an interval (L, R), where L and R are statistics

L = g(X1, X2, ..., Xn)
R = h(X1, X2, ..., Xn)

satisfying

P(L < c < R) = 1 - α
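As a concrete sketch of this definition, suppose (hypothetically) that X is normal with unknown mean c and known standard deviation sigma, so that the classical two-sided z-interval applies: L = M - z * sigma / sqrt(n) and R = M + z * sigma / sqrt(n), where M is the sample mean and z is the 1 - α/2 standard normal quantile. The simulation below checks that the proportion of intervals containing c is close to 1 - α.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(seed=0)
    c, sigma, n = 5.0, 2.0, 30        # hypothetical true mean, known sd, sample size
    alpha = 0.05                      # significance level; confidence level 1 - alpha = 0.95
    z = norm.ppf(1 - alpha / 2)       # 1 - alpha/2 standard normal quantile

    reps = 100_000
    samples = rng.normal(loc=c, scale=sigma, size=(reps, n))
    M = samples.mean(axis=1)          # sample mean of each simulated sample
    half_width = z * sigma / np.sqrt(n)
    L, R = M - half_width, M + half_width

    coverage = np.mean((L < c) & (c < R))
    print(coverage)                   # close to 1 - alpha = 0.95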

In particular, a 1 - α confidence lower bound for c is a statistic L satisfying

P(L < c) = 1 - α

(this is a special case of a confidence interval with positive infinity as the right endpoint). Similarly, a 1 - α confidence upper bound for c is a statistic R satisfying

P(c < R) = 1 - α

(this is a special case of a confidence interval with negative infinity as the left endpoint). In this module we are interested in confidence intervals for the mean and variance of the distribution, two of the simplest but most important problems in inferential statistics.
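The same hypothetical setup (normal X with known standard deviation sigma) gives a simple sketch of a one-sided bound: L = M - z * sigma / sqrt(n), with z now the 1 - α standard normal quantile, is a 1 - α confidence lower bound for the mean, since P(L < c) = 1 - α.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(seed=0)
    c, sigma, n = 5.0, 2.0, 30        # hypothetical true mean, known sd, sample size
    alpha = 0.05
    z = norm.ppf(1 - alpha)           # one-sided 1 - alpha quantile

    reps = 100_000
    samples = rng.normal(loc=c, scale=sigma, size=(reps, n))
    M = samples.mean(axis=1)
    L = M - z * sigma / np.sqrt(n)    # 1 - alpha confidence lower bound for c

    print(np.mean(L < c))             # close to 1 - alpha = 0.95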

