The Kalman Gain

The final equation is the Kalman Gain Equation.

The Kalman Gain in matrix notation is given by:

\[ \boldsymbol{K}_{n} = \boldsymbol{P}_{n,n-1}\boldsymbol{H}^{T}\left(\boldsymbol{HP}_{n,n-1}\boldsymbol{H}^{T} + \boldsymbol{R}_{n} \right)^{-1} \]
Where:
\( \boldsymbol{K}_{n} \) is the Kalman Gain
\( \boldsymbol{P}_{n,n-1} \) is the prior estimate covariance matrix of the current state (predicted at the previous step)
\( \boldsymbol{H} \) is the observation matrix
\( \boldsymbol{R}_{n} \) is the measurement noise covariance matrix
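
To make the matrix dimensions concrete, here is a minimal numeric sketch in Python. The two-state (position, velocity) model with a single position measurement, and all the numbers, are assumptions chosen for illustration only:

```python
import numpy as np

# Hypothetical 2-state model (position, velocity); numbers are illustrative.
P = np.array([[9.0, 3.0],
              [3.0, 4.0]])   # prior estimate covariance P(n,n-1)
H = np.array([[1.0, 0.0]])   # observation matrix: we measure position only
R = np.array([[2.0]])        # measurement noise covariance R(n)

# Innovation covariance: H P H^T + R  (a 1x1 matrix here)
S = H @ P @ H.T + R

# Kalman Gain: K = P H^T (H P H^T + R)^{-1}
K = P @ H.T @ np.linalg.inv(S)

print(K)  # 2x1 gain: the weight given to the measurement for each state
```

Note the dimensions: with \( n \) states and \( m \) measurements, \( \boldsymbol{H} \) is \( m \times n \), so \( \boldsymbol{K}_{n} \) comes out \( n \times m \), mapping the measurement residual back into the state space.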

Kalman Gain Equation Derivation

This chapter includes the derivation of the Kalman Gain Equation. You can jump to the next topic if you are not interested in the derivation.

First, let's rearrange the Covariance Update Equation:

\[ \boldsymbol{P}_{n,n} = \left(\boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right) \boldsymbol{P}_{n,n-1} \color{blue}{\left(\boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right)^{T}} + \boldsymbol{K}_{n} \boldsymbol{R}_{n}\boldsymbol{K}_{n}^{T} \]

The Covariance Update Equation

\[ \boldsymbol{P}_{n,n} = \left(\boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right) \boldsymbol{P}_{n,n-1} \color{blue}{\left(\boldsymbol{I} - \left(\boldsymbol{K}_{n}\boldsymbol{H}\right)^{T}\right)} + \boldsymbol{K}_{n} \boldsymbol{R}_{n} \boldsymbol{K}_{n}^{T} \]

Distribute the transpose over the difference and use \( \boldsymbol{I}^{T} = \boldsymbol{I} \)

\[ \boldsymbol{P}_{n,n} = \color{green}{\left(\boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right) \boldsymbol{P}_{n,n-1}} \color{blue}{\left(\boldsymbol{I} - \boldsymbol{H}^{T}\boldsymbol{K}_{n}^{T}\right)} + \boldsymbol{K}_{n} \boldsymbol{R}_{n} \boldsymbol{K}_{n}^{T} \]

Apply the matrix transpose property: \( (\boldsymbol{AB})^{T} = \boldsymbol{B}^{T}\boldsymbol{A}^{T} \)

\[ \boldsymbol{P}_{n,n} = \color{green}{\left(\boldsymbol{P}_{n,n-1} - \boldsymbol{K}_{n}\boldsymbol{H}\boldsymbol{P}_{n,n-1} \right)} \left(\boldsymbol{I} - \boldsymbol{H}^{T}\boldsymbol{K}_{n}^{T}\right) + \boldsymbol{K}_{n} \boldsymbol{R}_{n} \boldsymbol{K}_{n}^{T} \]

Multiply \( \left(\boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H}\right) \) by \( \boldsymbol{P}_{n,n-1} \)

\[ \boldsymbol{P}_{n,n} = \boldsymbol{P}_{n,n-1} - \boldsymbol{P}_{n,n-1}\boldsymbol{H}^{T}\boldsymbol{K}_{n}^{T} - \boldsymbol{K}_{n}\boldsymbol{H}\boldsymbol{P}_{n,n-1} + \color{#7030A0}{\boldsymbol{K}_{n}\boldsymbol{H}\boldsymbol{P}_{n,n-1}\boldsymbol{H}^{T}\boldsymbol{K}_{n}^{T} + \boldsymbol{K}_{n} \boldsymbol{R}_{n} \boldsymbol{K}_{n}^{T}} \]

Expand the product

\[ \boldsymbol{P}_{n,n} = \boldsymbol{P}_{n,n-1} - \boldsymbol{P}_{n,n-1}\boldsymbol{H}^{T}\boldsymbol{K}_{n}^{T} - \boldsymbol{K}_{n}\boldsymbol{H}\boldsymbol{P}_{n,n-1} + \color{#7030A0}{\boldsymbol{K}_{n} \left( \boldsymbol{H} \boldsymbol{P}_{n,n-1}\boldsymbol{H}^{T} + \boldsymbol{R}_{n} \right) \boldsymbol{K}_{n}^{T}} \]

Factor \( \boldsymbol{K}_{n} \) and \( \boldsymbol{K}_{n}^{T} \) out of the last two terms
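
Before moving on, this rearrangement is easy to sanity-check numerically. The following Python sketch confirms that the expanded form equals the original Covariance Update Equation; the dimensions and random matrices are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n, m = 3, 2  # assumed state and measurement dimensions (illustrative)

K = rng.standard_normal((n, m))   # an arbitrary gain K(n)
H = rng.standard_normal((m, n))   # an arbitrary observation matrix H
A = rng.standard_normal((n, n))
P = A @ A.T                       # symmetric prior covariance P(n,n-1)
B = rng.standard_normal((m, m))
R = B @ B.T                       # symmetric measurement noise covariance R(n)
I = np.eye(n)

# Starting point: the Covariance Update Equation
start = (I - K @ H) @ P @ (I - K @ H).T + K @ R @ K.T

# End point: the expanded and grouped form derived above
end = (P - P @ H.T @ K.T - K @ H @ P
       + K @ (H @ P @ H.T + R) @ K.T)

print(np.allclose(start, end))  # True -- the rearrangement is an identity
```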

The Kalman Filter is an optimal filter. Thus, we seek a Kalman Gain that minimizes the estimate variance.

The main diagonal (from the upper left to the lower right) of the covariance matrix \( \boldsymbol{P}_{n,n} \) contains the variances of the state estimate. Therefore, to minimize the estimate variance, we need to minimize the sum of the main-diagonal elements of \( \boldsymbol{P}_{n,n} \).

The sum of the main-diagonal elements of a square matrix is called the trace of the matrix. Thus, we need to minimize \( tr(\boldsymbol{P}_{n,n}) \). To find the conditions that produce a minimum, we differentiate the trace of \( \boldsymbol{P}_{n,n} \) with respect to \( \boldsymbol{K}_{n} \) and set the result to zero.
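
As a sketch of where this leads, using the standard matrix-calculus identities \( \frac{d}{d\boldsymbol{A}}tr(\boldsymbol{AB}) = \boldsymbol{B}^{T} \), \( \frac{d}{d\boldsymbol{A}}tr(\boldsymbol{BA}^{T}) = \boldsymbol{B} \), and \( \frac{d}{d\boldsymbol{A}}tr(\boldsymbol{ABA}^{T}) = 2\boldsymbol{AB} \) (for symmetric \( \boldsymbol{B} \)), together with the symmetry of \( \boldsymbol{P}_{n,n-1} \), the differentiation gives:

\[ \frac{d \, tr\left(\boldsymbol{P}_{n,n}\right)}{d\boldsymbol{K}_{n}} = -2\boldsymbol{P}_{n,n-1}\boldsymbol{H}^{T} + 2\boldsymbol{K}_{n}\left(\boldsymbol{H}\boldsymbol{P}_{n,n-1}\boldsymbol{H}^{T} + \boldsymbol{R}_{n}\right) = \boldsymbol{0} \]

Solving for \( \boldsymbol{K}_{n} \) recovers the Kalman Gain Equation stated at the top of this section:

\[ \boldsymbol{K}_{n} = \boldsymbol{P}_{n,n-1}\boldsymbol{H}^{T}\left(\boldsymbol{H}\boldsymbol{P}_{n,n-1}\boldsymbol{H}^{T} + \boldsymbol{R}_{n}\right)^{-1} \]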
