Prediction Model
The most common representation is:
\(\hat{x}(n)=\sum_{i=1}^{p}{a_ix\left(n-i\right)}\) (54)
Here, \(\hat{x}(n)\) represents the predicted signal value, \(x(n-i)\) are the previously observed values with \(p<n\), and \(a_i\) are the predictor coefficients. This estimate yields the following error:
\(e\left(n\right)=x\left(n\right)-\hat{x}\left(n\right)\) (55)
where \(x(n)\) is the true (observed) value of the signal.
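Equations (54) and (55) can be sketched directly in code. The following Python snippet is illustrative only; the function names `predict` and `prediction_error` are chosen here for clarity and are not part of the original text.

```python
import numpy as np

def predict(x, a, n):
    """One-step linear prediction, Eq. (54):
    x_hat(n) = sum_{i=1}^{p} a_i * x(n - i).

    x : 1-D array of observed samples
    a : predictor coefficients (a_1, ..., a_p)
    n : index to predict, with n >= p
    """
    p = len(a)
    # x[n-p:n] is [x(n-p), ..., x(n-1)]; reversing pairs
    # x(n-1) with a_1, x(n-2) with a_2, and so on.
    past = x[n - p:n][::-1]
    return float(np.dot(a, past))

def prediction_error(x, a, n):
    """Prediction error, Eq. (55): e(n) = x(n) - x_hat(n)."""
    return float(x[n]) - predict(x, a, n)
```

For example, the coefficients \(a_1=2,\ a_2=-1\) predict any linear trend exactly, so the error on \(x(n)=n\) is zero.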
These equations are applicable to all types of linear (one-dimensional) predictions. The differences lie in how the predictor coefficients \(a_i\) are chosen.
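One common way to choose the coefficients \(a_i\) is to minimize the sum of squared errors \(\sum_n e(n)^2\) over the observed data. The sketch below does this with an ordinary least-squares solve; the function name `fit_lpc` and the least-squares formulation are one possible choice among several (autocorrelation and lattice methods are common alternatives), not a method prescribed by the text.

```python
import numpy as np

def fit_lpc(x, p):
    """Choose a_1..a_p by least squares: minimize sum_n e(n)^2
    over all n that have a full history of p past samples."""
    x = np.asarray(x, dtype=float)
    # Design matrix: the row for index n holds [x(n-1), ..., x(n-p)].
    A = np.array([[x[n - i] for i in range(1, p + 1)]
                  for n in range(p, len(x))])
    b = x[p:]  # targets x(n)
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a
```

A sinusoid \(x(n)=\cos(\omega n)\) obeys the exact recursion \(x(n)=2\cos(\omega)\,x(n-1)-x(n-2)\), so a second-order fit recovers those two coefficients.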
For multidimensional signals, the error metric is often defined as:
\(e\left(n\right)=\left\|x\left(n\right)-\hat{x}\left(n\right)\right\|\) (56)
Here \(\left\|\cdot\right\|\) represents a chosen vector norm. Predictions such as \(\hat{x}(n)\) are commonly employed in Kalman filters and smoothers to estimate current and past signal values, respectively.
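As a small illustration of Eq. (56), the scalar error for a vector-valued signal reduces to a single norm evaluation; the `ord` parameter below selects which norm, and the function name is again illustrative rather than from the text.

```python
import numpy as np

def vector_prediction_error(x_true, x_hat, ord=2):
    """Eq. (56): e(n) = ||x(n) - x_hat(n)|| for a vector-valued
    signal; ord selects the norm (2 = Euclidean, 1 = taxicab, ...)."""
    diff = np.asarray(x_true, dtype=float) - np.asarray(x_hat, dtype=float)
    return float(np.linalg.norm(diff, ord=ord))
```

With the Euclidean norm, a true value of \((3,4)\) and a prediction of \((0,0)\) give an error of 5.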