Estimation Variance
Since the mean estimator \(\hat{M}\) is unbiased (\(E\{\hat{M}\}=m\)), the estimation variance is equal to:
\(\sigma_E^2=E\left\{\left(\hat{M}-m\right)^2\right\}=E\left\{{\hat{M}}^2\right\}-m^2\)
If m is known, \(S^2=\frac{1}{N}\sum_{i=1}^{N}\left(X_i-m\right)^2\) is unbiased for \(\sigma^2\) .
If m is unknown, \(S^2=\frac{1}{N-1}\sum_{i=1}^{N}\left(X_i-\bar{X}\right)^2\) is unbiased for \(\sigma^2\), where \(\bar{X}=\frac{1}{N}\sum_{i=1}^{N}X_i\).
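The two variance estimators above can be checked numerically. The sketch below (an illustration, not part of the original text; the Gaussian distribution and the values of m, sigma, N are arbitrary choices) averages both estimators over many trials and confirms that each one is close to \(\sigma^2\):

```python
import random

# Monte Carlo check (illustrative sketch): compare the mean-known estimator
# (divide by N) with the mean-unknown estimator (divide by N-1) on samples
# drawn from a Gaussian with mean m and standard deviation sigma.
random.seed(0)
m, sigma, N, trials = 2.0, 3.0, 5, 200_000

s2_known, s2_unknown = 0.0, 0.0
for _ in range(trials):
    x = [random.gauss(m, sigma) for _ in range(N)]
    xbar = sum(x) / N
    s2_known += sum((xi - m) ** 2 for xi in x) / N            # m known
    s2_unknown += sum((xi - xbar) ** 2 for xi in x) / (N - 1)  # m unknown

# Both averages should be close to sigma^2 = 9.
print(s2_known / trials)
print(s2_unknown / trials)
```

Dividing by N instead of N-1 in the mean-unknown case would give an average noticeably below \(\sigma^2\), which is exactly the bias that the \(N-1\) denominator removes.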
With a moment of order two:
\(E\left\{{\hat{M}}^2\right\}=E\left\{\frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}x\left(i\right)x\left(j\right)\right\}=\frac{1}{N^2}\left[\sum_{i=1}^{N}E\left\{x^2(i)\right\}+\sum_{\substack{i,j=1\\i\neq j}}^{N}E\left\{x(i)x(j)\right\}\right]\) (17)
If the samples are independent or decorrelated, which can happen when they are sufficiently far from each other, we have:
\(E\left\{x(i)x(j)\right\}=m^2,\quad i\neq j\) (18)
The moment of order two then becomes:
\(E\left\{{\hat{M}}^2\right\}=\frac{1}{N^2}\left[N\left(\sigma^2+m^2\right)+\left(N^2-N\right)m^2\right]=\frac{\sigma^2}{N}+m^2\) (19)
\(\sigma_E^2=\frac{\sigma^2}{N}\) (20)
Relation (20) shows that the precision of the mean estimate improves with N: the estimation variance decreases as \(\frac{1}{N}\), so the estimation error in amplitude decreases as \(\frac{1}{\sqrt N}\).
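Relation (20) is easy to verify empirically. The sketch below (an illustration with arbitrarily chosen parameters, assuming independent Gaussian samples) estimates the variance of the sample mean for several values of N and compares it with \(\sigma^2/N\):

```python
import random

# Empirical check of relation (20): the variance of the mean of N independent
# samples should be close to sigma^2 / N, i.e. it shrinks by 4 each time N
# is multiplied by 4.
random.seed(1)
m, sigma, trials = 0.0, 2.0, 20_000

results = {}
for N in (4, 16, 64):
    means = [sum(random.gauss(m, sigma) for _ in range(N)) / N
             for _ in range(trials)]
    avg = sum(means) / trials
    results[N] = sum((x - avg) ** 2 for x in means) / trials
    print(N, results[N], sigma ** 2 / N)  # empirical vs theoretical variance
```

Each printed empirical variance should be close to the theoretical value \(\sigma^2/N\) beside it.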
When the samples are correlated, the cross terms of (17) can be grouped by lag \(k=j-i\); using the stationarity and the evenness of the autocorrelation:
\(\sum_{\substack{i,j=1\\i\neq j}}^{N}E\left\{x(i)x(j)\right\}=2\sum_{k=1}^{N-1}\left(N-k\right)R_x\left(k\right)\quad\left(k=j-i\right)\) (21)
And the moment of order two becomes:
\(E\left\{{\hat{M}}^2\right\}=\frac{2}{N^2}\sum_{k=1}^{N-1}{\left(N-k\right)R_x\left(k\right)}+\frac{\sigma^2+m^2}{N}\) (22)
The autocorrelation function being an even function, verifying \(R_x\left(0\right)=\sigma^2+m^2\) , we deduce the expression of the estimation variance:
\(\sigma_E^2=\frac{1}{N}\sum_{k=-\left(N-1\right)}^{N-1}{\frac{N-\left|k\right|}{N}R_{xc}\left(k\right)}\) (23)
With \(R_{xc}\left(k\right)=R_x\left(k\right)-m^2\) , the autocorrelation of the centred process.
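The expression of \(\sigma_E^2\) in terms of the centred autocorrelation can also be checked numerically. The sketch below (an illustration, not from the text) assumes a stationary AR(1) process, for which \(R_{xc}(k)=\sigma^2\rho^{\left|k\right|}\), and compares the lag-weighted sum \(\frac{1}{N}\sum_{k=-(N-1)}^{N-1}\frac{N-\left|k\right|}{N}R_{xc}(k)\) with the empirical variance of the sample mean:

```python
import math
import random

# Sketch (assumed model): stationary AR(1) process x(i) = m + z(i) with
# centred autocorrelation R_xc(k) = sigma^2 * rho^|k|. Compare the formula
# sigma_E^2 = (1/N) * sum_k (N-|k|)/N * R_xc(k) with simulation.
random.seed(2)
m, sigma, rho, N, trials = 1.0, 1.0, 0.6, 50, 10_000

theory = sum((N - abs(k)) / N * sigma ** 2 * rho ** abs(k)
             for k in range(-(N - 1), N)) / N

means = []
for _ in range(trials):
    z = random.gauss(0, sigma)  # stationary initial state
    xs = []
    for _ in range(N):
        xs.append(m + z)
        z = rho * z + math.sqrt(1 - rho ** 2) * sigma * random.gauss(0, 1)
    means.append(sum(xs) / N)

avg = sum(means) / trials
empirical = sum((x - avg) ** 2 for x in means) / trials
print(theory, empirical)  # the two values should agree closely
```

Note that with positive correlation (\(\rho>0\)) the estimation variance is larger than the independent-sample value \(\sigma^2/N\): correlated samples carry less information about the mean.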
Expression (12) can be written in discrete form. The value of \(\sigma_E^2\) for independent or decorrelated samples coincides with the term at the origin (k = 0) of \(\sigma_E^2\) for dependent or correlated samples. Indeed, from
\(R_{xc}\left(k\right)=E\left\{\left(x\left(i\right)-m\right)\left(x\left(i-k\right)-m\right)\right\}=\sigma^2\delta\left(k\right)\) (24)
We verify that
\(\sigma_E^2=\frac{1}{N}\sum_{k=-(N-1)}^{N-1}{\frac{N-\left|k\right|}{N}\sigma^2\delta\left(k\right)}=\frac{\sigma^2}{N}\) (25)
In the same way as for continuous processes, the discrete estimator of the mean is a consistent estimator.
Example:
Consider a random variable X such that \(E(X)=\mu\) and \(\mathrm{var}(X)=\sigma^2\). Let \(\epsilon\colon X_1,\ldots,X_n\) be a sample associated with X. Define:
\({\hat{\mu}}_1=X_1\) ,
\({\hat{\mu}}_2=\frac{\left(X_1+X_2\right)}{2}\) ,
\({\hat{\mu}}_3=\frac{1}{n}\sum_{i=1}^{n}X_i\) .
Which of the following estimators is unbiased and has minimal variance?
These three estimators are in the class of unbiased estimators:
\(E\left({\hat{\mu}}_1\right)=E\left(X_1\right)=\mu\) ,
\(E\left({\hat{\mu}}_2\right)=\frac{\left\{E(X_1)+E\left(X_2\right)\right\}}{2}=\frac{\left(\mu+\mu\right)}{2}=\mu\) ,
\(E\left({\hat{\mu}}_3\right)=\frac{1}{n}\sum_{i=1}^{n}E\left(X_i\right)=\frac{1}{n}\,n\mu=\mu\) .
We also find:
\(\mathrm{var}({\hat{\mu}}_1)=\sigma^2\); \(\mathrm{var}({\hat{\mu}}_2)=\frac{\sigma^2}{2}\) and \(\mathrm{var}({\hat{\mu}}_3)=\frac{\sigma^2}{n}\)
Since \(\frac{\sigma^2}{n}\le\frac{\sigma^2}{2}\le\sigma^2\) for \(n\geq2\), \({\hat{\mu}}_3\) is the unbiased estimator with minimal variance.
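These three variances can be illustrated by simulation. The sketch below (an illustration; the Gaussian distribution and the values of \(\mu\), \(\sigma\), n are arbitrary choices, not from the text) estimates the mean and variance of each estimator:

```python
import random

# Monte Carlo illustration: empirical mean and variance of the three
# estimators mu_hat_1 = X1, mu_hat_2 = (X1+X2)/2, mu_hat_3 = sample mean,
# for a Gaussian sample with mu = 5, sigma = 2, n = 10.
random.seed(3)
mu, sigma, n, trials = 5.0, 2.0, 10, 50_000

samples = {1: [], 2: [], 3: []}
for _ in range(trials):
    x = [random.gauss(mu, sigma) for _ in range(n)]
    samples[1].append(x[0])               # mu_hat_1 = X1
    samples[2].append((x[0] + x[1]) / 2)  # mu_hat_2 = (X1 + X2)/2
    samples[3].append(sum(x) / n)         # mu_hat_3 = sample mean

variances = {}
for k, vals in samples.items():
    mean = sum(vals) / trials
    variances[k] = sum((v - mean) ** 2 for v in vals) / trials
    print(k, round(mean, 3), round(variances[k], 3))
# All three means should be close to mu = 5; the variances should be close
# to sigma^2 = 4, sigma^2/2 = 2 and sigma^2/n = 0.4 respectively.
```

All three estimators are centred on \(\mu\), but only \({\hat{\mu}}_3\) exploits the whole sample, which is why its variance is the smallest.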