Covariance Update Equation

The Covariance Update Equation is given by:

\[ \boldsymbol{ P_{n,n} = \left( I - K_{n}H \right) P_{n,n-1} \left( I - K_{n}H \right)^{T} + K_{n}R_{n}K_{n}^{T} } \]
where:
\( \boldsymbol{P_{n,n} } \) is the estimate uncertainty (covariance) matrix of the current state
\( \boldsymbol{P_{n,n-1}} \) is the prior estimate uncertainty (covariance) matrix of the current state (predicted at the previous step)
\( \boldsymbol{K_{n}} \) is the Kalman Gain
\( \boldsymbol{H} \) is the observation matrix
\( \boldsymbol{R_{n}} \) is the Measurement Uncertainty (measurement noise covariance matrix)
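
To make the equation concrete, here is a minimal numerical sketch in Python (NumPy). The matrices are made-up illustrative values for a two-state system, not taken from the text, and the gain is computed with the standard Kalman Gain formula only so that the example is self-contained.

```python
import numpy as np

# Made-up example: a 2-state system (e.g., position and velocity) where only
# the first state is measured. The numbers are illustrative.
P_prior = np.array([[9.0, 0.0],
                    [0.0, 4.0]])   # prior estimate uncertainty P(n,n-1)
H = np.array([[1.0, 0.0]])         # observation matrix
R = np.array([[1.0]])              # measurement noise covariance R(n)
I = np.eye(2)

# Kalman Gain (standard gain formula, included only to get a plausible K)
K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)

# Covariance Update Equation:
# P(n,n) = (I - K H) P(n,n-1) (I - K H)^T + K R(n) K^T
P_post = (I - K @ H) @ P_prior @ (I - K @ H).T + K @ R @ K.T
print(P_post)   # updated uncertainty; smaller than P_prior for the measured state
```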

Covariance Update Equation Derivation

I will provide the derivation of the Covariance Update Equation in as much detail as possible, without shortcuts, so it is going to be long. Some of you may find it too detailed; on the other hand, it will help others to understand the material better.

If you are not interested in the derivation, you can jump to the next topic.

For the derivation, I will use the following four equations:

Equation 1 (State Update Equation): \( \boldsymbol{\hat{x}_{n,n} = \hat{x}_{n,n-1} + K_{n} ( z_{n} - H \hat{x}_{n,n-1} )} \)
Equation 2 (Measurement Equation): \( \boldsymbol{z_{n} = Hx_{n} + v_{n}} \)
Equation 3 (Estimate Uncertainty): \( \boldsymbol{P_{n,n}} = E\left( \boldsymbol{e_{n}e_{n}^{T}} \right) = E\left( \left( \boldsymbol{x_{n} - \hat{x}_{n,n}} \right) \left( \boldsymbol{x_{n} - \hat{x}_{n,n}} \right)^{T} \right) \)
Equation 4 (Measurement Uncertainty): \( \boldsymbol{R_{n}} = E\left( \boldsymbol{v_{n}v_{n}^{T}} \right) \)

We are going to derive the Current Estimate Uncertainty ( \( \boldsymbol{P_{n,n}} \) ) as a function of the Kalman Gain \( \boldsymbol{K_{n}} \).

The derivation proceeds step by step; a short note follows each equation.
\( \boldsymbol{\hat{x}_{n,n} = \hat{x}_{n,n-1} + K_{n} ( z_{n} - H \hat{x}_{n,n-1} )} \) State Update Equation
\( \boldsymbol{\hat{x}_{n,n} = \hat{x}_{n,n-1} + K_{n} ( Hx_{n} + v_{n} - H \hat{x}_{n,n-1} )} \) Plug the Measurement Equation into the State Update Equation
\( \boldsymbol{e_{n}} = \boldsymbol{x_{n} - \hat{x}_{n,n}} \) Estimate error
\( \boldsymbol{e_{n}} = \boldsymbol{x_{n} - \hat{x}_{n,n-1} - K_{n} \left( Hx_{n} + v_{n} - H \hat{x}_{n,n-1} \right)} \) Plug \( \boldsymbol{\hat{x}_{n,n}} \) into the estimate error
\( \boldsymbol{e_{n}} = \boldsymbol{x_{n} - \hat{x}_{n,n-1} - K_{n}Hx_{n} - K_{n}v_{n} + K_{n}H \hat{x}_{n,n-1}} \) Open the brackets
\( \boldsymbol{e_{n}} = \boldsymbol{x_{n} - \hat{x}_{n,n-1} - K_{n}H\left( x_{n} - \hat{x}_{n,n-1} \right) - K_{n}v_{n}} \) Group the \( (\boldsymbol{ x_{n} - \hat{x}_{n,n-1}}) \) terms
\( \boldsymbol{e_{n}} = \boldsymbol{ \left( I - K_{n}H \right) \left( x_{n} - \hat{x}_{n,n-1} \right) - K_{n}v_{n}} \) Factor out \( \boldsymbol{\left( x_{n} - \hat{x}_{n,n-1} \right)} \)
\( \boldsymbol{P_{n,n}} = E\left( \boldsymbol{e_{n}e_{n}^{T}} \right) = E\left( \left( \boldsymbol{x_{n} - \hat{x}_{n,n}} \right) \left( \boldsymbol{x_{n} - \hat{x}_{n,n}} \right)^{T} \right) \) Estimate Uncertainty
\( \boldsymbol{P_{n,n}} = E\left( \boldsymbol{\left( \left( I - K_{n}H \right) \left( x_{n} - \hat{x}_{n,n-1} \right) - K_{n}v_{n} \right) \times \\ \times \left( \left( I - K_{n}H \right) \left( x_{n} - \hat{x}_{n,n-1} \right) - K_{n}v_{n} \right)^{T}} \right) \) Plug \( \boldsymbol{e_{n}} \)
\( \boldsymbol{P_{n,n}} = E\left( \boldsymbol{\left( \left( I - K_{n}H \right) \left( x_{n} - \hat{x}_{n,n-1} \right) - K_{n}v_{n} \right) \times \\ \times \left( \left( \left( I - K_{n}H \right) \left( x_{n} - \hat{x}_{n,n-1} \right) \right)^{T} - \left( K_{n}v_{n}\right) ^{T} \right)} \right) \) Apply the matrix transpose property: \( \boldsymbol{(A - B)^{T} = A^{T} - B^{T}} \)
\( \boldsymbol{P_{n,n}} = E\left( \boldsymbol{\left( \left( I - K_{n}H \right) \left( x_{n} - \hat{x}_{n,n-1} \right) - K_{n}v_{n} \right) \times \\ \times \left( \left( x_{n} - \hat{x}_{n,n-1} \right)^{T} \left( I - K_{n}H \right)^{T} - \left( K_{n}v_{n}\right) ^{T} \right)} \right) \) Apply the matrix transpose property: \( \boldsymbol{(AB)^{T} = B^{T}A^{T}} \)
\( \boldsymbol{P_{n,n}} = E \left( \boldsymbol{ \left( I - K_{n}H \right) \left( x_{n} - \hat{x}_{n,n-1} \right) \left( x_{n} - \hat{x}_{n,n-1} \right)^{T} \left( I - K_{n}H \right)^{T} - \\ - \left( I - K_{n}H \right) \left( x_{n} - \hat{x}_{n,n-1} \right) \left( K_{n}v_{n} \right)^{T} - \\ - K_{n}v_{n} \left( x_{n} - \hat{x}_{n,n-1} \right)^{T} \left( I - K_{n}H \right)^{T} + \\ + K_{n}v_{n} \left( K_{n}v_{n} \right)^{T} } \right) \) Open the brackets
\( \boldsymbol{P_{n,n}} = E \left( \boldsymbol{ \left( I - K_{n}H \right) \left( x_{n} - \hat{x}_{n,n-1} \right) \left( x_{n} - \hat{x}_{n,n-1} \right)^{T} \left( I - K_{n}H \right)^{T} }\right) - \\ - \color{red}{E \left( \boldsymbol{ \left( I - K_{n}H \right) \left( x_{n} - \hat{x}_{n,n-1} \right) \left( K_{n}v_{n} \right)^{T} }\right)} - \\ - \color{red}{E \left( \boldsymbol{ K_{n}v_{n} \left( x_{n} - \hat{x}_{n,n-1} \right)^{T} \left( I - K_{n}H \right)^{T} }\right)} + \\ + E \left( \color{blue}{\boldsymbol{ K_{n}v_{n} \left( K_{n}v_{n} \right)^{T} }}\right) \) Apply the rule \( E(X \pm Y) = E(X) \pm E(Y) \)
\( \color{red}{E \left( \boldsymbol{ \left( I - K_{n}H \right) \left( x_{n} - \hat{x}_{n,n-1} \right) \left( K_{n}v_{n} \right)^{T} }\right) = 0} \)
\( \color{red}{E \left( \boldsymbol{ K_{n}v_{n} \left( x_{n} - \hat{x}_{n,n-1} \right)^{T} \left( I - K_{n}H \right)^{T} }\right) = 0} \)
\( (\boldsymbol{ x_{n} - \hat{x}_{n,n-1}}) \) is the error of the prior estimate relative to the true value; it is uncorrelated with the current measurement noise \( \boldsymbol{ v_{n} } \). Since both are zero-mean and independent, the expectation of their product is zero, so the two cross terms vanish.
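
As a quick sanity check of this zero cross-term argument, the following sketch (an assumed scalar example with made-up variances) draws independent zero-mean samples for the prior error and the measurement noise and shows that their product averages to approximately zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

prior_err = rng.normal(0.0, 3.0, size=n)   # stands in for x(n) - x_hat(n,n-1), zero mean
v = rng.normal(0.0, 1.0, size=n)           # stands in for the measurement noise v(n), zero mean

# Independent zero-mean variables: the sample mean of the product is close to zero,
# which is why the two cross terms drop out of the expectation.
print(np.mean(prior_err * v))
```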
\( \boldsymbol{P_{n,n}} = E \left( \boldsymbol{ \left( I - K_{n}H \right) \left( x_{n} - \hat{x}_{n,n-1} \right) \left( x_{n} - \hat{x}_{n,n-1} \right)^{T} \left( I - K_{n}H \right)^{T} }\right) + \\ + E \left( \color{blue}{\boldsymbol{ K_{n}v_{n} v_{n}^{T} K_{n}^{T} }}\right) \) Apply the matrix transpose property: \( \boldsymbol{(AB)^{T} = B^{T}A^{T}} \)
\( \boldsymbol{P_{n,n}} = \boldsymbol{ \left( I - K_{n}H \right)} \color{green}{E \left( \boldsymbol{ \left( x_{n} - \hat{x}_{n,n-1} \right) \left( x_{n} - \hat{x}_{n,n-1} \right)^{T} }\right)} \boldsymbol{ \left( I - K_{n}H \right)^{T}} + \\ + \boldsymbol{K_{n}} \color{blue}{ E \left( \boldsymbol{ v_{n} v_{n}^{T} }\right) } \boldsymbol{ K_{n}^{T} } \) \( \boldsymbol{K_{n}} \) and \( \boldsymbol{H} \) are deterministic, so they can be taken outside the expectation: \( E\left(AXA^{T}\right) = AE(X)A^{T} \)
\( \color{green}{E \left( \boldsymbol{ \left( x_{n} - \hat{x}_{n,n-1} \right) \left( x_{n} - \hat{x}_{n,n-1} \right)^{T} }\right) = \boldsymbol{P_{n,n-1}}} \)
\( \color{blue}{ E \left( \boldsymbol{ v_{n} v_{n}^{T} }\right) = \boldsymbol{R_{n}}} \)
\( \color{green}{\boldsymbol{P_{n,n-1}}} \) is the prior estimate uncertainty
\( \color{blue}{\boldsymbol{R_{n}}} \) is the measurement uncertainty
\( \boldsymbol{P_{n,n}} = \boldsymbol{ \left( I - K_{n}H \right)} \color{green}{\boldsymbol{ P_{n,n-1}} } \boldsymbol{ \left( I - K_{n}H \right)^{T}} + \boldsymbol{K_{n}} \color{blue}{ \boldsymbol{ R_{n} } } \boldsymbol{ K_{n}^{T} } \) Covariance Update Equation!
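
Note that this result holds for any value of the Kalman Gain, not only the optimal one. As a final check, here is a small Monte Carlo sketch (assumed scalar state, made-up numbers) that simulates the Measurement Equation and the State Update Equation many times and compares the empirical variance of the estimate error with the Covariance Update Equation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200_000

P_prior = 9.0    # prior estimate uncertainty (variance), scalar example
R = 1.0          # measurement noise variance
H = 1.0          # scalar observation "matrix"
K = 0.4          # arbitrary gain; the equation holds for any K(n)

x_true = 5.0                                                    # true state
x_prior = x_true - rng.normal(0.0, np.sqrt(P_prior), n_trials)  # prior estimates with error variance P_prior
v = rng.normal(0.0, np.sqrt(R), n_trials)                       # measurement noise
z = H * x_true + v                                              # Measurement Equation

x_post = x_prior + K * (z - H * x_prior)                        # State Update Equation
e = x_true - x_post                                             # estimate error

empirical = np.var(e)
predicted = (1 - K * H) * P_prior * (1 - K * H) + K * R * K     # Covariance Update Equation
print(empirical, predicted)   # the two values agree closely
```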