A point estimator (PE) is a sample statistic used to estimate an unknown population parameter. It is a random variable and therefore varies from sample to sample. A good example of an estimator is the sample mean x̄, which helps statisticians estimate the population mean, μ. There are three desirable properties every good estimator should possess: unbiasedness, efficiency, and consistency.
Let’s now look at each property in detail.
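First, a quick illustration of point estimation itself. The sketch below (Python, using a simulated normal population whose mean we choose ourselves; the seed, sample size, and distribution parameters are purely illustrative) draws one random sample and reports the sample mean x̄ as a point estimate of μ.

```python
# Minimal sketch (assumed setup): one sample from a population with a known mean,
# with the sample mean x̄ used as a point estimate of μ.
import numpy as np

rng = np.random.default_rng(seed=42)

mu = 5.0                                            # true (normally unknown) population mean
sample = rng.normal(loc=mu, scale=2.0, size=50)     # one random sample of n = 50

x_bar = sample.mean()                               # the point estimate of μ
print(f"point estimate x̄ = {x_bar:.3f}, true μ = {mu}")
```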
We say that the PE β̂j is an unbiased estimator of the true population parameter βj if the expected value of β̂j is equal to the true βj. Putting this in standard mathematical notation, an estimator is unbiased if:
E(β̂j) = βj for any finite sample size n.
Bias is the difference between the expected value of the estimator and the true value of the parameter. For an unbiased estimator, this difference is zero; a non-zero difference indicates bias. A biased estimator can overestimate or underestimate the true parameter, giving rise to positive or negative bias, respectively.
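To see unbiasedness at work, the sketch below (Python; the normal population with μ = 5, the sample size, and the number of replications are all assumptions for illustration) draws many samples and checks that the average of the sample means sits very close to the true μ, i.e. the simulated bias is approximately zero.

```python
# Minimal sketch (assumed: normal population with μ = 5, n = 30, 10,000 replications)
# illustrating unbiasedness: the average of the sample means across many samples
# is very close to the true μ, so the estimated bias ≈ 0.
import numpy as np

rng = np.random.default_rng(seed=0)
mu, n, reps = 5.0, 30, 10_000

sample_means = rng.normal(loc=mu, scale=2.0, size=(reps, n)).mean(axis=1)

bias_estimate = sample_means.mean() - mu   # simulated E(x̄) - μ
print(f"estimated bias of x̄: {bias_estimate:.4f}")   # close to zero
```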
Suppose we have two unbiased estimators – β̂j1 and β̂j2 – of the population parameter, βj:
E(β̂j1) = βj and E(β̂j2) = βj
We say that β̂j1 is more efficient than β̂j2 if the variance of the sampling distribution of β̂j1 is less than that of β̂j2 for all finite sample sizes.
In short, if we have two unbiased estimators, we should prefer the one with the smaller variance because it is more precise in statistical terms. It is also important to note that the property of efficiency only applies in the presence of unbiasedness, since we only compare the variances of unbiased estimators.
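A simple way to compare efficiency is by simulation. In the sketch below (Python; the normal population and the choice of the sample median as the competing estimator are assumptions made for illustration), both the sample mean and the sample median are unbiased estimators of μ, but the mean exhibits the smaller sampling variance and is therefore the more efficient of the two.

```python
# Minimal sketch (assumed normal population): compare the sampling variances of two
# unbiased estimators of μ - the sample mean and the sample median. The smaller
# variance identifies the more efficient (more precise) estimator.
import numpy as np

rng = np.random.default_rng(seed=1)
mu, n, reps = 5.0, 30, 10_000

samples = rng.normal(loc=mu, scale=2.0, size=(reps, n))
means = samples.mean(axis=1)
medians = np.median(samples, axis=1)

print(f"variance of sample means:   {means.var():.4f}")
print(f"variance of sample medians: {medians.var():.4f}")  # larger => less efficient
```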
Let β̂j(N) denote an estimator of βj where N represents the sample size. We would consider β̂j(N) a consistent point estimator of βj if its sampling distribution converges to, or collapses on, the true value of the population parameter βj as N tends to infinity.
This intuitively means that if a PE is consistent, its distribution becomes more and more concentrated around the real value of the population parameter involved. Therefore, we could say that as N increases, the probability that the estimator ‘closes in’ on the actual value of the parameter approaches 1.
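The sketch below illustrates this intuition (Python; the normal population, tolerance band, and sample sizes are assumptions for illustration): as N grows, the probability that the sample mean falls within a small band around μ approaches 1.

```python
# Minimal sketch (assumed normal population): as the sample size N grows, the
# sampling distribution of x̄ concentrates around μ, so P(|x̄ - μ| < tol) -> 1.
import numpy as np

rng = np.random.default_rng(seed=2)
mu, reps, tol = 5.0, 1_000, 0.25

for N in (10, 100, 1_000, 10_000):
    means = rng.normal(loc=mu, scale=2.0, size=(reps, N)).mean(axis=1)
    prob_close = np.mean(np.abs(means - mu) < tol)   # simulated P(|x̄ - μ| < 0.25)
    print(f"N = {N:>6}: P(|x̄ - μ| < {tol}) ≈ {prob_close:.3f}")
```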
Reading 10 LOS 10g:
Identify and describe desirable properties of an estimator.