Question
(a) Let $\theta_n$ be an estimator of a population parameter $\theta$. Explain what this means for a random sample $X_1, \ldots, X_n$ which are distributed with parameter $\theta$, and give an example and a non-example of an estimator. Give definitions of what it means for $\theta_n$ to be an unbiased estimator of $\theta$ and a consistent estimator of $\theta$.

(b) Suppose that $\theta_n$ is an unbiased estimator of a population parameter $\theta$. (i) If $\operatorname{Var}(\theta_n) \to 0$ as $n \to \infty$, prove that $\theta_n$ is a consistent estimator of $\theta$. (ii) Use the proof of part (i) to determine an approximate confidence interval for $\theta$ by proving that $P\left[|\theta_n - \theta| \le \frac{1}{\sqrt{\alpha}}\sqrt{\operatorname{Var}[\theta_n]}\right] \ge 1 - \alpha$ for any $\alpha \in (0, 1)$.

(c) Suppose that $X_1, \ldots, X_n$ is a sequence of independent random variables with unknown mean $\mu$, and suppose that $\mu_n = \frac{1}{n}\sum_{i=1}^{n} X_i$. Suppose further that the variance of each $X_i$ is known and equal to $\sigma^2$. Deduce formulae for the mean and variance of $\mu_n$, and prove for any $\alpha \in (0, 1)$ that $P\left[|\mu_n - \mu| \le \frac{\sigma}{\sqrt{\alpha}\sqrt{n}}\right] \ge 1 - \alpha$.

(d) Suppose that $\sigma = 0.1$ and it is desired to estimate $\mu$ to within $0.001$ with at least 95% confidence. What, according to part (c), is the minimal sample size that must be considered in order to achieve this? Compare your answer with the result you might obtain from applying the Central Limit Theorem.

Explanation / Answer
An estimator of $\theta$ is a statistic, that is, a function $u(X_1, X_2, \ldots, X_n)$ of the sample alone. If
$$E[u(X_1, X_2, \ldots, X_n)] = \theta,$$
then the statistic $u(X_1, X_2, \ldots, X_n)$ is an unbiased estimator of the parameter $\theta$. Otherwise, $u(X_1, X_2, \ldots, X_n)$ is a biased estimator of $\theta$.
For example, if each $X_i$ is a Bernoulli random variable with parameter $p$, then $E(X_i) = p$. Therefore, for the maximum likelihood estimator $\hat{p} = \frac{1}{n}\sum_{i=1}^{n} X_i$ (the sample proportion):
$$E(\hat{p}) = E\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{1}{n}\sum_{i=1}^{n} E(X_i) = \frac{1}{n}\sum_{i=1}^{n} p = \frac{1}{n}(np) = p.$$
The first equality holds because we have merely replaced $\hat{p}$ with its definition. The second equality holds by the rules of expectation for a linear combination. The third equality holds because $E(X_i) = p$. The fourth equality holds because adding the value $p$ to itself $n$ times gives $np$. And, of course, the last equality is simple algebra.
In summary, we have shown that
$$E(\hat{p}) = p.$$
Therefore, the maximum likelihood estimator $\hat{p}$ is an unbiased estimator of $p$.
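As a quick sanity check (not part of the original answer), the following Python sketch simulates many Bernoulli samples and verifies empirically that the average of $\hat{p}$ across replications is close to $p$. The values $p = 0.3$, $n = 50$ and the number of replications are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

p, n, reps = 0.3, 50, 200_000        # illustrative values, not from the question
# Each row is one sample X_1, ..., X_n of Bernoulli(p) variables.
samples = rng.binomial(1, p, size=(reps, n))
# p_hat = (1/n) * sum(X_i) for each replication.
p_hats = samples.mean(axis=1)

# Averaging p_hat over many replications approximates E[p_hat],
# which should be close to p if the estimator is unbiased.
print(p_hats.mean())                 # ~0.300
```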
One desirable property of estimators is consistency. If we collect a large number of observations, we hope we have a lot of information about the unknown parameter $\theta$, and thus we hope we can construct an estimator with a very small mean squared error (MSE). We call an estimator $\hat{\theta}$ consistent if
$$\lim_{n \to \infty} \operatorname{MSE}(\hat{\theta}) = 0,$$
which means that as the number of observations increases, the MSE descends to 0. (By Chebyshev's inequality, $P(|\hat{\theta} - \theta| > \varepsilon) \le \operatorname{MSE}(\hat{\theta})/\varepsilon^2$, so an estimator whose MSE tends to 0 also converges to $\theta$ in probability.) For example, if $X_1, \ldots, X_n \sim N(\mu, 1)$, then the MSE of $\bar{x}$ is $1/n$. Since $\lim_{n \to \infty}(1/n) = 0$, $\bar{x}$ is a consistent estimator of $\mu$.
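To illustrate this numerically (again a minimal sketch, not part of the original answer), one can estimate $\operatorname{MSE}(\bar{x})$ by Monte Carlo for increasing $n$ and check that it tracks $1/n$. The mean $\mu = 2.0$ and the number of replications below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

mu, reps = 2.0, 10_000               # illustrative values, not from the question
for n in (10, 100, 1000):
    # reps independent samples of size n from N(mu, 1).
    x = rng.normal(mu, 1.0, size=(reps, n))
    x_bar = x.mean(axis=1)
    # Monte Carlo estimate of MSE(x_bar) = E[(x_bar - mu)^2]; should be ~1/n.
    mse = np.mean((x_bar - mu) ** 2)
    print(f"n={n}: MSE ~ {mse:.5f}  (1/n = {1/n:.5f})")
```

The printed MSE shrinks roughly like $1/n$, matching the claim that $\bar{x}$ is consistent.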