This is a good point to stop and summarize what we have learnt so far.
As you remember from the "One-dimensional Kalman Filter" section (if you don't remember, please review it), the Kalman Filter computations are based on five equations.
Two prediction equations:
Two update equations:
The Kalman Gain Equation – required for computing the update equations. The Kalman Gain is a "weighting" parameter: it defines the weight given to the past estimate and the weight given to the measurement when estimating the current state.
So far, we have learnt the two prediction equations in matrix notation, along with several auxiliary equations that are required for their computation.
The general form of the state extrapolation equation in matrix notation is:

\[ \boldsymbol{\hat{x}_{n+1,n} = F\hat{x}_{n,n} + G\hat{u}_{n,n} + w_{n}} \]
\( \boldsymbol{\hat{x}_{n+1,n}} \) | is a predicted system state vector at time step \( n + 1 \) |
\( \boldsymbol{\hat{x}_{n,n}} \) | is an estimated system state vector at time step \( n \) |
\( \boldsymbol{\hat{u}_{n,n}} \) | is a control variable or input variable - a measurable (deterministic) input to the system |
\( \boldsymbol{w_{n}} \) | is a process noise or disturbance - an unmeasurable input that affects the state |
\( \boldsymbol{F} \) | is a state transition matrix |
\( \boldsymbol{G} \) | is a control matrix or input transition matrix (mapping control to state variables) |
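As a minimal numeric sketch, the state extrapolation equation \( \hat{x}_{n+1,n} = F\hat{x}_{n,n} + G\hat{u}_{n,n} \) can be evaluated directly with matrix arithmetic. The constant-velocity model, time step, and all numeric values below are hypothetical, chosen only to illustrate the computation:

```python
import numpy as np

dt = 1.0  # time step (assumed)

# State transition matrix F for a constant-velocity model: state = [position, velocity]
F = np.array([[1.0, dt],
              [0.0, 1.0]])

# Control (input) matrix G, mapping a commanded acceleration u to the state
G = np.array([[0.5 * dt**2],
              [dt]])

x_est = np.array([[0.0],   # estimated position at time step n
                  [1.0]])  # estimated velocity at time step n
u = np.array([[2.0]])      # measurable (deterministic) control input

# State extrapolation: x(n+1,n) = F x(n,n) + G u(n,n)
# (the process noise w is unknown, so it cannot be added explicitly)
x_pred = F @ x_est + G @ u   # predicted state: position 2.0, velocity 3.0
```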
The general form of the Covariance Extrapolation Equation is given by:

\[ \boldsymbol{P_{n+1,n} = FP_{n,n}F^{T} + Q} \]
\( \boldsymbol{P_{n,n}} \) | is an estimate uncertainty (covariance) matrix of the current state |
\( \boldsymbol{P_{n+1,n}} \) | is a predicted estimate uncertainty (covariance) matrix for the next state |
\( \boldsymbol{F} \) | is the state transition matrix that we've derived in the "Modeling linear dynamic systems" section |
\( \boldsymbol{B} \) | is an input matrix |
\( \boldsymbol{Q} \) | is a process noise matrix |
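The covariance extrapolation \( P_{n+1,n} = FP_{n,n}F^{T} + Q \) is likewise a few matrix products. The numeric values of \( P \) and \( Q \) below are assumptions for illustration only:

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt],   # state transition matrix (constant-velocity model)
              [0.0, 1.0]])

P = np.eye(2)              # current estimate uncertainty (assumed: identity)

Q = 0.1 * np.array([[0.25, 0.5],   # process noise covariance (assumed values)
                    [0.5,  1.0]])

# Covariance extrapolation: P(n+1,n) = F P(n,n) F^T + Q
P_pred = F @ P @ F.T + Q
```

Note that even if the current uncertainty `P` is diagonal, the predicted uncertainty `P_pred` acquires off-diagonal terms, because `F` couples position and velocity.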
The generalized measurement equation in matrix form is given by:

\[ \boldsymbol{z_{n} = Hx_{n} + v_{n}} \]
\( \boldsymbol{z_{n}} \) | is a measurement vector |
\( \boldsymbol{x_{n}} \) | is a true system state (hidden state) |
\( \boldsymbol{v_{n}} \) | is a random measurement noise vector |
\( \boldsymbol{H} \) | is an observation matrix |
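A small sketch of the measurement equation \( z_{n} = Hx_{n} + v_{n} \): the hidden state contains position and velocity, but the (hypothetical) sensor observes position only, so \( H \) selects the first state variable. All numbers are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# True (hidden) system state: [position, velocity]
x_true = np.array([[10.0],
                   [2.0]])

# Observation matrix H: the sensor measures position only
H = np.array([[1.0, 0.0]])

# Random measurement noise v, drawn here with an assumed standard deviation of 0.2
v = rng.normal(0.0, 0.2, size=(1, 1))

# Measurement equation: z(n) = H x(n) + v(n)
z = H @ x_true + v   # a noisy position measurement near 10.0
```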
The terms \( \boldsymbol{w} \) and \( \boldsymbol{v} \), which correspond to the process and measurement noise vectors, do not typically appear directly in the equations of interest, since they are unknown.
Instead, these terms are used to model the uncertainty (or noise) in the equations themselves.
All the covariance equations involve covariance matrices of the form:
\[ E \left( \boldsymbol{ee^{T}} \right) \]
i.e., the expectation of the squared error. See the background break page for more details.
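The form \( E\left(ee^{T}\right) \) can be checked numerically: averaging the outer products \( ee^{T} \) over many samples of a zero-mean error vector recovers its covariance matrix. The covariance values and sample count below are arbitrary choices for this illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# True covariance of a zero-mean 2-D error vector e (assumed values)
cov_true = np.array([[1.0, 0.3],
                     [0.3, 0.5]])

# Draw many error samples e ~ N(0, cov_true)
samples = rng.multivariate_normal(np.zeros(2), cov_true, size=200_000)

# Sample average of the outer products e e^T, i.e. an estimate of E(e e^T)
cov_est = samples.T @ samples / len(samples)   # approaches cov_true as the sample count grows
```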
The measurement uncertainty is given by:

\[ \boldsymbol{R_{n} = E\left( v_{n}v_{n}^{T} \right)} \]
\( \boldsymbol{R_{n}} \) | is a covariance matrix of the measurement |
\( \boldsymbol{v_{n}} \) | is a measurement error |
The process noise uncertainty is given by:

\[ \boldsymbol{Q_{n} = E\left( w_{n}w_{n}^{T} \right)} \]
\( \boldsymbol{Q_{n}} \) | is a covariance matrix of the process noise |
\( \boldsymbol{w_{n}} \) | is a process noise |
The estimation uncertainty is given by:

\[ \boldsymbol{P_{n,n} = E\left( e_{n}e_{n}^{T} \right) = E\left( \left( x_{n} - \hat{x}_{n,n} \right)\left( x_{n} - \hat{x}_{n,n} \right)^{T} \right)} \]
\( \boldsymbol{P_{n,n}} \) | is a covariance matrix of the estimation error |
\( \boldsymbol{e_{n}} \) | is an estimation error |
\( \boldsymbol{x_{n}} \) | is a true system state (hidden state) |
\( \boldsymbol{\hat{x}_{n,n}} \) | is an estimated system state vector at time step \( n \) |