Covariance Update Equation

The Covariance Update Equation is given by:

\[ \boldsymbol{P}_{n,n} = \left( \boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right) \boldsymbol{P}_{n,n-1} \left( \boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right)^{T} + \boldsymbol{K}_{n}\boldsymbol{R}_{n}\boldsymbol{K}_{n}^{T} \]
where:
\( \boldsymbol{P}_{n,n} \) is the covariance matrix of the current state estimate
\( \boldsymbol{P}_{n,n-1} \) is the prior estimate covariance matrix of the current state (predicted at the previous state)
\( \boldsymbol{K}_{n} \) is the Kalman Gain
\( \boldsymbol{H} \) is the observation matrix
\( \boldsymbol{R}_{n} \) is the measurement noise covariance matrix
\( \boldsymbol{I} \) is an Identity Matrix (a square matrix with ones on the main diagonal and zeros elsewhere)
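
The following is a minimal numerical sketch of this equation in Python (using NumPy). All matrix values here are made-up examples for a two-state, one-measurement filter, not taken from any particular problem:

```python
import numpy as np

# Made-up example values for a filter with 2 states and 1 measurement
P_prior = np.array([[4.0, 0.5],
                    [0.5, 1.0]])    # prior estimate covariance P(n,n-1)
H = np.array([[1.0, 0.0]])          # observation matrix
R = np.array([[0.25]])              # measurement noise covariance
K = np.array([[0.8],
              [0.1]])               # Kalman Gain (example value, not computed here)
I = np.eye(2)                       # identity matrix

# Covariance Update Equation
P = (I - K @ H) @ P_prior @ (I - K @ H).T + K @ R @ K.T
print(P)
```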

Covariance Update Equation Derivation

This section presents the derivation of the Covariance Update Equation. Some of you may find it too detailed, while others will find it helpful for a better understanding.

You can jump to the next topic if you are not interested in the derivation.

For the derivation, I use the following four equations:

Equation 1 (State Update Equation): \( \boldsymbol{\hat{x}}_{n,n} = \boldsymbol{\hat{x}}_{n,n-1} + \boldsymbol{K}_{n} ( \boldsymbol{z}_{n} - \boldsymbol{H \hat{x}}_{n,n-1} ) \)
Equation 2 (Measurement Equation): \( \boldsymbol{z}_{n} = \boldsymbol{Hx}_{n} + \boldsymbol{v}_{n} \)
Equation 3 (Estimate Covariance): \( \boldsymbol{P}_{n,n} = E\left( \boldsymbol{e}_{n}\boldsymbol{e}_{n}^{T} \right) = E\left( \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n} \right) \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n} \right)^{T} \right) \)
Equation 4 (Measurement Covariance): \( \boldsymbol{R}_{n} = E\left( \boldsymbol{v}_{n}\boldsymbol{v}_{n}^{T} \right) \)
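
To make the notation concrete before the derivation, here is a small sketch that evaluates equations 1 and 2 for made-up numbers (the true state, the noise sample, and the Kalman Gain below are arbitrary illustrative values):

```python
import numpy as np

# Arbitrary illustrative values (2 states, 1 measurement)
x_true  = np.array([[10.0], [2.0]])   # true state x(n)
x_prior = np.array([[9.0], [1.5]])    # prior estimate x^(n,n-1)
H       = np.array([[1.0, 0.0]])      # observation matrix
v       = np.array([[0.3]])           # one sample of the measurement noise v(n)
K       = np.array([[0.8], [0.1]])    # Kalman Gain (example value)

# Equation 2: Measurement Equation
z = H @ x_true + v

# Equation 1: State Update Equation
x_post = x_prior + K @ (z - H @ x_prior)
print(x_post)
```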

We derive the Current Estimate Covariance ( \( \boldsymbol{P}_{n,n} \) ) as a function of the Kalman Gain \( \boldsymbol{K}_{n} \).

\( \boldsymbol{\hat{x}}_{n,n} = \boldsymbol{\hat{x}}_{n,n-1} + \boldsymbol{K}_{n} ( \boldsymbol{z}_{n} - \boldsymbol{H \hat{x}}_{n,n-1} ) \) State Update Equation
\( \boldsymbol{\hat{x}}_{n,n} = \boldsymbol{\hat{x}}_{n,n-1} + \boldsymbol{K}_{n} ( \boldsymbol{Hx}_{n} + \boldsymbol{v}_{n} - \boldsymbol{H \hat{x}}_{n,n-1} ) \) Plug the Measurement Equation into the State Update Equation
\( \boldsymbol{e}_{n} = \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n} \) Estimate error
\( \boldsymbol{e}_{n} = \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} - \boldsymbol{K}_{n} \left(\boldsymbol{Hx}_{n} + \boldsymbol{v}_{n} - \boldsymbol{H \hat{x}}_{n,n-1} \right) \) Plug \( \boldsymbol{\hat{x}}_{n,n} \)
\( \boldsymbol{e}_{n} = \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} - \boldsymbol{K}_{n}\boldsymbol{Hx}_{n} - \boldsymbol{K}_{n}\boldsymbol{v}_{n} + \boldsymbol{K}_{n}\boldsymbol{H \hat{x}}_{n,n-1} \) Expand
\( \boldsymbol{e}_{n} = \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} - \boldsymbol{K}_{n}\boldsymbol{H}\left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} \right) - \boldsymbol{K}_{n}\boldsymbol{v}_{n} \) Factor \( \boldsymbol{K}_{n}\boldsymbol{H} \) out to group the \( (\boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1}) \) terms
\( \boldsymbol{e}_{n} = \left( \boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right) \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} \right) - \boldsymbol{K}_{n}\boldsymbol{v}_{n}\)
\( \boldsymbol{P}_{n,n} = E\left( \boldsymbol{e}_{n}\boldsymbol{e}_{n}^{T} \right) = E\left( \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n} \right) \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n} \right)^{T} \right) \) Estimate Covariance
\( \boldsymbol{P}_{n,n} = E\left( \left( \left( \boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right) \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} \right) - \boldsymbol{K}_{n}\boldsymbol{v}_{n} \right) \left( \left( \boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right) \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} \right) - \boldsymbol{K}_{n}\boldsymbol{v}_{n} \right)^{T} \right) \) Plug \( \boldsymbol{e}_{n} \)
\( \boldsymbol{P}_{n,n} = E\left( \left( \left( \boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right) \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} \right) - \boldsymbol{K}_{n}\boldsymbol{v}_{n} \right) \left( \left( \left( \boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right) \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} \right) \right)^{T} - \left( \boldsymbol{K}_{n}\boldsymbol{v}_{n}\right)^{T} \right) \right) \) Expand
\( \boldsymbol{P}_{n,n} = E\left( \left( \left( \boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right) \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} \right) - \boldsymbol{K}_{n}\boldsymbol{v}_{n} \right) \left( \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} \right)^{T} \left( \boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right)^{T} - \left( \boldsymbol{K}_{n}\boldsymbol{v}_{n}\right)^{T} \right) \right) \) Apply the matrix transpose property: \( (\boldsymbol{AB})^{T} = \boldsymbol{B}^{T}\boldsymbol{A}^{T} \)
\( \boldsymbol{P}_{n,n} = E \left( \left( \boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right) \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} \right) \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} \right)^{T} \left( \boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right)^{T} - \left( \boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right) \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} \right) \left( \boldsymbol{K}_{n}\boldsymbol{v}_{n} \right)^{T} - \boldsymbol{K}_{n}\boldsymbol{v}_{n} \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} \right)^{T} \left( \boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right)^{T} + \boldsymbol{K}_{n}\boldsymbol{v}_{n} \left( \boldsymbol{K}_{n}\boldsymbol{v}_{n} \right)^{T} \right) \) Expand
\( \boldsymbol{P}_{n,n} = E \left( \left( \boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right) \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} \right) \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} \right)^{T} \left( \boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right)^{T} \right) - \color{red}{E \left( \left( \boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right) \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} \right) \left( \boldsymbol{K}_{n}\boldsymbol{v}_{n} \right)^{T} \right)} - \color{red}{E \left( \boldsymbol{K}_{n}\boldsymbol{v}_{n} \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} \right)^{T} \left( \boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right)^{T} \right)} + E \left( \color{blue}{\boldsymbol{K}_{n}\boldsymbol{v}_{n} \left( \boldsymbol{K}_{n}\boldsymbol{v}_{n} \right)^{T}} \right) \) Apply the rule \( E(X \pm Y) = E(X) \pm E(Y) \)
\( \color{red}{E \left( \left( \boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right) \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} \right) \left( \boldsymbol{K}_{n}\boldsymbol{v}_{n} \right)^{T} \right) = 0} \)
\( \color{red}{E \left( \boldsymbol{K}_{n}\boldsymbol{v}_{n} \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} \right)^{T} \left( \boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right)^{T} \right) = 0} \)
\( (\boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1}) \) is the error of the prior estimate. It is uncorrelated with the current measurement noise \( \boldsymbol{v}_{n} \), and both have zero mean. The expectation of the product of two uncorrelated, zero-mean random variables is zero.
\( \boldsymbol{P}_{n,n} = E \left( \left( \boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right) \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} \right) \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} \right)^{T} \left( \boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right)^{T} \right) + E \left( \color{blue}{\boldsymbol{K}_{n}\boldsymbol{v}_{n}\boldsymbol{v}_{n}^{T}\boldsymbol{K}_{n}^{T}} \right) \) Apply the matrix transpose property: \( (\boldsymbol{AB})^{T} = \boldsymbol{B}^{T}\boldsymbol{A}^{T} \)
\( \boldsymbol{P}_{n,n} = \left( \boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right) \color{green}{E \left( \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} \right) \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} \right)^{T} \right)} \left( \boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right)^{T} + \boldsymbol{K}_{n} \color{blue}{E \left( \boldsymbol{v}_{n}\boldsymbol{v}_{n}^{T} \right)} \boldsymbol{K}_{n}^{T} \) Apply the rule \( E(aX) = aE(X) \)
\( \color{green}{E \left( \left(\boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} \right) \left(\boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} \right)^{T} \right) = \boldsymbol{P}_{n,n-1}} \)
\( \color{blue}{ E \left( \boldsymbol{v}_{n}\boldsymbol{v}_{n}^{T}\right) = \boldsymbol{R}_{n}} \)
\( \color{green}{\boldsymbol{P}_{n,n-1}} \) is the prior estimate covariance
\( \color{blue}{\boldsymbol{R}_{n}} \) is the measurement covariance
\( \boldsymbol{P}_{n,n} = \left(\boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right) \color{green}{\boldsymbol{P}_{n,n-1} } \left(\boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right)^{T} + \boldsymbol{K}_{n} \color{blue}{ \boldsymbol{R}_{n}} \boldsymbol{K}_{n}^{T} \) Covariance Update Equation!
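
As a sanity check of the result, the following sketch draws many samples of the prior estimate error and of the measurement noise (for arbitrary example matrices), builds the estimation error \( \boldsymbol{e}_{n} = \left( \boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right) \left( \boldsymbol{x}_{n} - \boldsymbol{\hat{x}}_{n,n-1} \right) - \boldsymbol{K}_{n}\boldsymbol{v}_{n} \), and compares its sample covariance with the right-hand side of the Covariance Update Equation. The two should agree up to Monte Carlo noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary example matrices (2 states, 1 measurement)
P_prior = np.array([[4.0, 0.5],
                    [0.5, 1.0]])   # prior estimate covariance P(n,n-1)
R = np.array([[0.25]])             # measurement noise covariance
H = np.array([[1.0, 0.0]])         # observation matrix
K = np.array([[0.8], [0.1]])       # Kalman Gain (example value)
I = np.eye(2)

N = 200_000
prior_err = rng.multivariate_normal(np.zeros(2), P_prior, size=N)  # rows of (x(n) - x^(n,n-1))^T
v = rng.normal(0.0, np.sqrt(R[0, 0]), size=(N, 1))                 # measurement noise samples

# e(n) = (I - K H)(x(n) - x^(n,n-1)) - K v(n), one row per sample
e = prior_err @ (I - K @ H).T - v @ K.T

P_sample  = e.T @ e / N                                            # sample covariance of e(n)
P_formula = (I - K @ H) @ P_prior @ (I - K @ H).T + K @ R @ K.T    # Covariance Update Equation

print(np.round(P_sample, 3))
print(np.round(P_formula, 3))
```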