The parameter space is often denoted \(\Theta\).
In the context of point estimation, the decision rule is called an estimator (also termed a statistic) of \(\theta\) and is usually denoted \(\widehat{\theta}\): it is any function of the data (a sample of the random variable), \(\widehat{\theta}=d(x)\).
We might want to estimate some parameter of \(f\) or some feature such as \(\mu\).
Parameters that we do not want to estimate are called nuisance parameters.
The distribution of \(\widehat{\theta}\) is called the sampling distribution.
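As a minimal sketch of these ideas (assuming, purely for illustration, \(X_1,\dots,X_n \sim N(\mu,1)\) and the sample mean as estimator), the sampling distribution of \(\widehat{\theta}\) can be simulated directly:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimator(x):
    # A point estimator is just a function of the data; here, the sample mean.
    return x.mean()

mu, n, reps = 2.0, 50, 5000
# Apply the estimator to many independent samples: the resulting values
# approximate the sampling distribution of the estimator.
estimates = np.array([estimator(rng.normal(mu, 1.0, size=n)) for _ in range(reps)])

print("mean of estimates:", estimates.mean())  # close to mu, so the bias is near 0
print("sd of estimates:  ", estimates.std())   # close to 1/sqrt(n), the standard error
```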
\[bias(\widehat{\theta}) = E[\widehat{\theta}] - \theta\]
\[se(\widehat{\theta})=\sqrt{V(\widehat{\theta})}\] Often \(se\) is difficult to calculate because the variance depends on the unknown distribution \(F\). A typical solution is to approximate \(F\) with the empirical distribution \(\widehat{F}\) and use the bootstrap.
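A minimal sketch of that plug-in idea (reusing the simulated sample and sample-mean estimator assumed above): resampling from \(\widehat{F}\) amounts to resampling the data with replacement, and the standard deviation of the bootstrap replicates estimates \(se(\widehat{\theta})\).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, size=50)   # observed sample (simulated here for illustration)

def estimator(x):
    return x.mean()

B = 2000
# Resampling from the empirical distribution F_hat means sampling the observed
# data with replacement; the estimator is recomputed on each bootstrap sample.
boot = np.array([estimator(rng.choice(x, size=x.size, replace=True)) for _ in range(B)])

se_boot = boot.std(ddof=1)          # bootstrap estimate of se(theta_hat)
print("bootstrap se:", se_boot)     # for the mean of n = 50, the true se is 1/sqrt(50), about 0.14
```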
\(\widehat{\theta}_n\) is consistent if it converges in probability to \(\theta\).
\(\widehat{\theta}_n\) is asymptotically normal if \(\frac{\widehat{\theta}_n - \theta}{se}\) converges in distribution to \(N(0,1)\).
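For example, for the mean of an i.i.d. sample with finite variance \(\sigma^2\), the law of large numbers and the central limit theorem give both properties: \[\bar{X}_n \xrightarrow{P} \mu, \qquad \frac{\bar{X}_n - \mu}{\sigma/\sqrt{n}} \xrightarrow{d} N(0,1),\] so \(\widehat{\mu}_n = \bar{X}_n\) is consistent and asymptotically normal with \(se = \sigma/\sqrt{n}\).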
…
…
Squared error loss: \(L(\theta,\widehat{\theta})=(\theta - \widehat{\theta})^2\)
Under squared error loss, the risk function is called the mean squared error (MSE).
\(MSE = R(\theta,\widehat{\theta})=E\!\left[L(\theta,\widehat{\theta})\right]= \int \left(\theta - \widehat{\theta}(x)\right)^2 f(x;\theta) \, dx\)
\(MSE = bias(\widehat{\theta})^2 + var(\widehat{\theta})\)
Demonstration: ...
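Sketch of the standard argument, writing \(\bar{\theta}=E[\widehat{\theta}]\): \[E\!\left[(\widehat{\theta}-\theta)^2\right] = E\!\left[(\widehat{\theta}-\bar{\theta}+\bar{\theta}-\theta)^2\right] = E\!\left[(\widehat{\theta}-\bar{\theta})^2\right] + 2(\bar{\theta}-\theta)\,E[\widehat{\theta}-\bar{\theta}] + (\bar{\theta}-\theta)^2 = var(\widehat{\theta}) + bias(\widehat{\theta})^2,\] since the cross term vanishes because \(E[\widehat{\theta}-\bar{\theta}]=0\).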
\(X \sim N(\mu,1)\)
\(\widehat{\mu}_1 = 4\)
\(MSE_1 = bias(\widehat{\mu}_1)^2 + var(\widehat{\mu}_1)= \left( E[\widehat{\mu}_1] - \mu \right)^2+ var(\widehat{\mu}_1) = \left(4-\mu \right)^2 + 0 = \left(4-\mu \right)^2\)
\(\widehat{\mu}_2 = X\)
\(MSE_2 = bias(\widehat{\mu}_2)^2 + var(\widehat{\mu}_2)= \left( E[\widehat{\mu}_2] - \mu \right)^2+ var(\widehat{\mu}_2) = \left(E[X]-\mu \right)^2 + var(X) = \left(\mu-\mu \right)^2 + 1 = 1\)
If the parameter happens to be close to 4, the risk of the first estimator is smaller than that of the second. Otherwise, the second is better.
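A quick numerical check of this comparison (a simulation sketch; the true values of \(\mu\) below are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
reps = 100_000

for mu in (3.8, 4.0, 6.0):              # a few hypothetical true values of the parameter
    x = rng.normal(mu, 1.0, size=reps)  # one observation X per replication
    est1 = np.full(reps, 4.0)           # mu_hat_1 = 4 (ignores the data)
    est2 = x                            # mu_hat_2 = X
    mse1 = np.mean((est1 - mu) ** 2)    # empirical risk of the constant estimator
    mse2 = np.mean((est2 - mu) ** 2)    # empirical risk of X, close to 1 for every mu
    print(f"mu={mu}: MSE_1={mse1:.3f}  MSE_2={mse2:.3f}")
```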
Absolute error loss: \(L(\theta,\widehat{\theta})=|\theta - \widehat{\theta}|\)
Kullback-Leibler loss: \(L(\theta,\widehat{\theta})=\int \log \left( \frac{f(x;\theta)}{f(x;\widehat{\theta})} \right) f(x;\theta) \, dx\)
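For instance, in the \(N(\theta,1)\) model of the example above, the Kullback-Leibler loss reduces to half the squared error loss: \[\int \log\left(\frac{f(x;\theta)}{f(x;\widehat{\theta})}\right) f(x;\theta)\,dx = E\!\left[\frac{(X-\widehat{\theta})^2 - (X-\theta)^2}{2}\right] = \frac{(\theta-\widehat{\theta})^2}{2}.\]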