This is a good place to pause for a short summary of what we have learned so far. As you remember from the "One-dimensional Kalman Filter" section (if you don't remember, please review it), the Kalman Filter computations are based on five equations:
- Two prediction equations: the State Extrapolation Equation and the Covariance Extrapolation Equation.
- Two update equations: the State Update Equation and the Covariance Update Equation.
- The Kalman Gain Equation, which is required for computing the update equations. The Kalman Gain is a "weighting" parameter: it defines the weight of the past estimate and the weight of the measurement in estimating the current state.
So far, we have learned the two prediction equations in matrix notation and several auxiliary equations that are required for computing the main equations.
The general form of the State Extrapolation Equation in matrix notation is:

\[ \boldsymbol{\hat{x}_{n+1,n} = F\hat{x}_{n,n} + Gu_{n} + w_{n}} \]

where:
\( \boldsymbol{\hat{x}_{n+1,n}} \) | is the predicted system state vector at time step \( n + 1 \) |
\( \boldsymbol{\hat{x}_{n,n}} \) | is the estimated system state vector at time step \( n \) |
\( \boldsymbol{u_{n}} \) | is the control variable or input variable - a measurable (deterministic) input to the system |
\( \boldsymbol{w_{n}} \) | is the process noise or disturbance - an unmeasurable input that affects the state |
\( \boldsymbol{F} \) | is the state transition matrix |
\( \boldsymbol{G} \) | is the control matrix or input transition matrix (mapping control to state variables) |
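As a rough sketch (not part of the original text), the state extrapolation step can be written in NumPy. The constant-velocity model, the time step `dt`, and all numeric values below are illustrative assumptions:

```python
import numpy as np

# Illustrative constant-velocity model with an acceleration control input.
# State vector x = [position, velocity]; dt is an assumed time step.
dt = 0.1
F = np.array([[1.0, dt],
              [0.0, 1.0]])   # state transition matrix
G = np.array([[0.5 * dt**2],
              [dt]])         # control (input transition) matrix

x_est = np.array([[2.0],     # current state estimate, x_hat(n,n)
                  [1.0]])
u = np.array([[0.5]])        # measurable (deterministic) control input

# State Extrapolation: x_hat(n+1,n) = F x_hat(n,n) + G u_n
# (the process noise w_n is unknown, so it is not added in practice)
x_pred = F @ x_est + G @ u
```

Note that the unknown process noise \( \boldsymbol{w_{n}} \) is modeled through the covariance \( \boldsymbol{Q} \) rather than added to the predicted state.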
The general form of the Covariance Extrapolation Equation is given by:

\[ \boldsymbol{P_{n+1,n} = FP_{n,n}F^{T} + Q} \]

where:
\( \boldsymbol{P_{n,n}} \) | is the estimate uncertainty (covariance) matrix of the current state |
\( \boldsymbol{P_{n+1,n}} \) | is the predicted estimate uncertainty (covariance) matrix for the next state |
\( \boldsymbol{F} \) | is the state transition matrix that we derived in the "Modeling linear dynamic systems" section |
\( \boldsymbol{Q} \) | is the process noise matrix |
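A minimal NumPy sketch of the covariance extrapolation step, reusing the illustrative constant-velocity matrices from above; the values of \( \boldsymbol{P} \) and \( \boldsymbol{Q} \) are assumptions, not from the original text:

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt],
              [0.0, 1.0]])           # state transition matrix
P = np.array([[0.5, 0.0],
              [0.0, 0.5]])           # current estimate uncertainty P(n,n)
# Assumed process noise matrix (discrete constant-acceleration form):
Q = np.array([[0.25 * dt**4, 0.5 * dt**3],
              [0.5 * dt**3,  dt**2]]) * 0.01

# Covariance Extrapolation: P(n+1,n) = F P(n,n) F^T + Q
P_pred = F @ P @ F.T + Q
```

The result is again a symmetric covariance matrix: propagating through \( \boldsymbol{F \cdot F^{T}} \) and adding the symmetric \( \boldsymbol{Q} \) preserves symmetry.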
The generalized measurement equation in matrix form is given by:

\[ \boldsymbol{z_{n} = Hx_{n} + v_{n}} \]

where:
\( \boldsymbol{z_{n}} \) | is the measurement vector |
\( \boldsymbol{x_{n}} \) | is the true system state (hidden state) |
\( \boldsymbol{v_{n}} \) | is a random measurement noise vector |
\( \boldsymbol{H} \) | is the observation matrix |
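To make the measurement equation concrete, here is an illustrative sketch (my own, not from the text) in which a sensor observes only the position component of a two-dimensional `[position, velocity]` state:

```python
import numpy as np

# Observation matrix: the sensor measures position only.
H = np.array([[1.0, 0.0]])
x_true = np.array([[2.0],        # hidden true state x_n (illustrative)
                   [1.0]])

rng = np.random.default_rng(0)
v = rng.normal(0.0, 0.1, size=(1, 1))  # random measurement noise v_n

# Measurement equation: z_n = H x_n + v_n
z = H @ x_true + v
```

The observation matrix \( \boldsymbol{H} \) maps the hidden state to the measured quantities; here it simply selects the position.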
The terms \( \boldsymbol{w} \) and \( \boldsymbol{v} \), which correspond to the process and measurement noise vectors, do not typically appear directly in the equations of interest, since they are unknown. Instead, these terms are used to model the uncertainty (or noise) in the equations themselves.
All the uncertainty terms are covariance matrices of the form:

\[ E \left( \boldsymbol{ee^{T}} \right) \]

i.e. the expectation of the squared error. See the background break page for more details.
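A quick numerical illustration (my own, not from the text) of what \( E \left( \boldsymbol{ee^{T}} \right) \) means: averaging the outer products \( \boldsymbol{ee^{T}} \) over many samples of a zero-mean error vector recovers its covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
true_cov = np.array([[1.0, 0.3],
                     [0.3, 0.5]])
# Draw many samples of a zero-mean error vector e with known covariance.
e = rng.multivariate_normal(mean=[0.0, 0.0], cov=true_cov, size=100_000)

# Sample estimate of E(e e^T): the average of the outer products e e^T.
cov_est = (e.T @ e) / e.shape[0]   # approaches true_cov for large samples
```

With enough samples, `cov_est` converges to `true_cov`, which is exactly the quantity the covariance matrices below represent.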
The measurement uncertainty is given by:

\[ \boldsymbol{R_{n} = E \left( v_{n}v_{n}^{T} \right) } \]

where:
\( \boldsymbol{R_{n}} \) | is the covariance matrix of the measurement |
\( \boldsymbol{v_{n}} \) | is the measurement error |
The process noise uncertainty is given by:

\[ \boldsymbol{Q_{n} = E \left( w_{n}w_{n}^{T} \right) } \]

where:
\( \boldsymbol{Q_{n}} \) | is the covariance matrix of the process noise |
\( \boldsymbol{w_{n}} \) | is the process noise |
The estimation uncertainty is given by:

\[ \boldsymbol{P_{n,n} = E \left( e_{n}e_{n}^{T} \right) = E \left( \left( x_{n} - \hat{x}_{n,n} \right) \left( x_{n} - \hat{x}_{n,n} \right)^{T} \right) } \]

where:
\( \boldsymbol{P_{n,n}} \) | is the covariance matrix of the estimation error |
\( \boldsymbol{e_{n}} \) | is the estimation error |
\( \boldsymbol{x_{n}} \) | is the true system state (hidden state) |
\( \boldsymbol{\hat{x}_{n,n}} \) | is the estimated system state vector at time step \( n \) |
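The estimation uncertainty follows the same \( E \left( \boldsymbol{ee^{T}} \right) \) pattern. As an illustrative sketch (all values assumed), simulating many estimates scattered around a fixed true state and averaging the outer products of the estimation errors recovers the estimation-error covariance:

```python
import numpy as np

rng = np.random.default_rng(2)
x_true = np.array([2.0, 1.0])        # hidden true state x_n (illustrative)
P_true = np.array([[0.04, 0.0],
                   [0.0,  0.01]])    # assumed estimation-error covariance
# Simulate many estimates x_hat(n,n) scattered around the true state.
x_hat = x_true + rng.multivariate_normal([0.0, 0.0], P_true, size=50_000)

# Estimation error e_n = x_n - x_hat(n,n); P(n,n) = E(e e^T).
e = x_true - x_hat
P_est = (e.T @ e) / e.shape[0]
```

In the actual filter, \( \boldsymbol{P_{n,n}} \) is not estimated from samples like this; it is propagated analytically by the covariance equations above. The simulation only illustrates what the matrix represents.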