A point estimator (PE) is a sample statistic used to estimate an unknown population parameter. It is a random variable and therefore varies from sample to sample. A good example of an estimator is the sample mean, \(\bar{X}\), which helps statisticians estimate the population mean, \(\mu\). There are three desirable properties every good estimator should possess. These are: unbiasedness, efficiency, and consistency.
Let us now look at each property in detail.
We say that a PE \(\hat{\beta}_j\) is an unbiased estimator of the true population parameter \(\beta_j\) if the expected value of \(\hat{\beta}_j\) is equal to the true \(\beta_j\). Putting this in standard mathematical notation, an estimator is unbiased if:

\(E(\hat{\beta}_j) = \beta_j\) as long as the sample size \(n\) is finite.
Bias is the difference between the expected value of the estimator and the true value of the parameter: \(\text{Bias}(\hat{\beta}_j) = E(\hat{\beta}_j) - \beta_j\). This difference is zero if and only if the estimator is unbiased, so a non-zero difference indicates bias. A biased estimator can overestimate or underestimate the true parameter, giving rise to positive or negative bias, respectively.
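To make this concrete, here is a minimal Python sketch (an illustration added here, not part of the original text; it assumes NumPy is available and the parameter values are arbitrary). It approximates the expected value of each estimator by averaging it over many simulated samples: the sample mean comes out unbiased for \(\mu\), while the variance estimator that divides by \(n\) is biased for \(\sigma^2\).

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, n, trials = 5.0, 2.0, 30, 100_000

# Draw many independent samples of size n; averaging an estimator
# across trials approximates its expected value.
samples = rng.normal(mu, sigma, size=(trials, n))

mean_est = samples.mean(axis=1)             # sample mean: unbiased for mu
var_mle = samples.var(axis=1, ddof=0)       # divides by n: biased for sigma^2
var_corrected = samples.var(axis=1, ddof=1) # divides by n - 1: unbiased

print(f"E[sample mean]    ~ {mean_est.mean():.4f}   (true mu      = {mu})")
print(f"E[var, ddof=0]    ~ {var_mle.mean():.4f}   (true sigma^2 = {sigma**2})")
print(f"E[var, ddof=1]    ~ {var_corrected.mean():.4f}")
```

The `ddof=1` version divides by \(n-1\) instead of \(n\) (Bessel's correction), which is exactly what removes the bias in the naive variance estimator.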
Suppose we have two unbiased estimators, \(\hat{\beta}_{j,1}\) and \(\hat{\beta}_{j,2}\), of the population parameter \(\beta_j\):

\(E(\hat{\beta}_{j,1}) = \beta_j\) and \(E(\hat{\beta}_{j,2}) = \beta_j\)

We say that \(\hat{\beta}_{j,1}\) is more efficient than \(\hat{\beta}_{j,2}\) if the variance of the sampling distribution of \(\hat{\beta}_{j,1}\) is less than that of \(\hat{\beta}_{j,2}\) for all finite sample sizes, that is, if \(\text{Var}(\hat{\beta}_{j,1}) < \text{Var}(\hat{\beta}_{j,2})\).
In short, given two unbiased estimators, we prefer the one with the smaller variance because it is more precise in statistical terms. It is also important to note that efficiency only applies in the presence of unbiasedness: we only compare the variances of unbiased estimators.
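As an illustrative sketch (an example chosen for this note, not drawn from the original text): for normally distributed data, both the sample mean and the sample median are unbiased estimators of the center \(\mu\), but the mean has the smaller sampling variance, making it the more efficient of the two. The short NumPy simulation below, with arbitrary parameter values, exhibits this.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, trials = 0.0, 1.0, 50, 100_000

# Both estimators are unbiased for the center of a normal
# distribution, so comparing their variances is meaningful.
samples = rng.normal(mu, sigma, size=(trials, n))
means = samples.mean(axis=1)
medians = np.median(samples, axis=1)

print(f"Var(sample mean)   ~ {means.var():.5f}")    # ~ sigma^2 / n = 0.02
print(f"Var(sample median) ~ {medians.var():.5f}")  # ~ pi*sigma^2 / (2n) ~ 0.0314
```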
Let \(\hat{\beta}_{j,n}\) denote an estimator of \(\beta_j\) computed from a sample of size \(n\). We would consider \(\hat{\beta}_{j,n}\) a consistent point estimator of \(\beta_j\) if it converges in probability to the true value of the population parameter \(\beta_j\) as \(n\) tends to infinity.
Intuitively, this means that if a PE is consistent, its sampling distribution becomes more and more concentrated around the true value of the population parameter. As such, as \(n\) increases, the probability that the estimator lies within any fixed distance of the true value approaches 1.
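A minimal simulation sketch of this idea (again assuming NumPy, with arbitrary parameter values): taking the sample mean as an estimator of \(\mu\), the estimated probability of landing within a fixed tolerance \(\varepsilon\) of the true value rises toward 1 as \(n\) grows.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, eps, trials = 5.0, 2.0, 0.25, 10_000

# For each sample size n, estimate P(|sample mean - mu| < eps) by
# simulation; consistency predicts this probability tends to 1.
for n in (10, 100, 1_000):
    means = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)
    prob = np.mean(np.abs(means - mu) < eps)
    print(f"n = {n:>5}: P(|mean - mu| < {eps}) ~ {prob:.4f}")
```

With these values the estimated probability climbs from roughly 0.3 at \(n = 10\) to nearly 1 at \(n = 1{,}000\), which is the 'closing in' behavior described above.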