The final equation is the Kalman Gain Equation. In matrix notation, the Kalman Gain is given by:

\( \boldsymbol{K}_{n} = \boldsymbol{P}_{n,n-1}\boldsymbol{H}^{T}\left(\boldsymbol{H}\boldsymbol{P}_{n,n-1}\boldsymbol{H}^{T} + \boldsymbol{R}_{n}\right)^{-1} \)

where: | |
---|---|
\( \boldsymbol{K}_{n} \) | is the Kalman Gain |
\( \boldsymbol{P}_{n,n-1} \) | is the prior estimate uncertainty (covariance) matrix of the current state (predicted at the previous step) |
\( \boldsymbol{H} \) | is the observation matrix |
\( \boldsymbol{R}_{n} \) | is the Measurement Uncertainty (measurement noise covariance matrix) |
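If you want to try the equation numerically, here is a minimal NumPy sketch that evaluates it for a made-up two-state system with a single measurement. The values of \( \boldsymbol{P}_{n,n-1} \), \( \boldsymbol{H} \), and \( \boldsymbol{R}_{n} \) below are illustrative placeholders, not values taken from this chapter.

```python
import numpy as np

# Illustrative (made-up) matrices for a two-state system with one measurement
P_prior = np.array([[4.0, 1.0],
                    [1.0, 3.0]])   # prior estimate covariance P(n,n-1)
H = np.array([[1.0, 0.0]])         # observation matrix (we measure the first state only)
R = np.array([[0.25]])             # measurement noise covariance

# Kalman Gain: K = P H^T (H P H^T + R)^(-1)
S = H @ P_prior @ H.T + R          # innovation covariance
K = P_prior @ H.T @ np.linalg.inv(S)

print(K)                           # 2x1 gain that weights the measurement against the prediction
```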
This chapter includes the derivation of the Kalman Gain Equation. You can jump to the next topic if you don't care about the derivation.
First, let's rearrange the Covariance Update Equation:
Notes | |
---|---|
\( \boldsymbol{P}_{n,n} = \left(\boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right) \boldsymbol{P}_{n,n-1} \color{blue}{\left(\boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right)^{T}} + \boldsymbol{K}_{n} \boldsymbol{R}_{n}\boldsymbol{K}_{n}^{T} \) | Covariance Update Equation |
\( \boldsymbol{P}_{n,n} = \left(\boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right) \boldsymbol{P}_{n,n-1} \color{blue}{\left(\boldsymbol{I} - \left(\boldsymbol{K}_{n}\boldsymbol{H}\right)^{T}\right)} + \boldsymbol{K}_{n} \boldsymbol{R}_{n} \boldsymbol{K}_{n}^{T} \) | \( \boldsymbol{I}^{T} = \boldsymbol{I} \) |
\( \boldsymbol{P}_{n,n} = \color{green}{\left(\boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right) \boldsymbol{P}_{n,n-1}} \color{blue}{\left(\boldsymbol{I} - \boldsymbol{H}^{T}\boldsymbol{K}_{n}^{T}\right)} + \boldsymbol{K}_{n} \boldsymbol{R}_{n} \boldsymbol{K}_{n}^{T} \) | Apply the matrix transpose property: \( (\boldsymbol{AB})^{T} = \boldsymbol{B}^{T}\boldsymbol{A}^{T} \) |
\( \boldsymbol{P}_{n,n} = \color{green}{\left(\boldsymbol{P}_{n,n-1} - \boldsymbol{K}_{n}\boldsymbol{H}\boldsymbol{P}_{n,n-1} \right)} \left(\boldsymbol{I} - \boldsymbol{H}^{T}\boldsymbol{K}_{n}^{T}\right) + \boldsymbol{K}_{n} \boldsymbol{R}_{n} \boldsymbol{K}_{n}^{T} \) | Multiply \( \left(\boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H}\right) \) by \( \boldsymbol{P}_{n,n-1} \) |
\( \boldsymbol{P}_{n,n} = \boldsymbol{P}_{n,n-1} - \boldsymbol{P}_{n,n-1}\boldsymbol{H}^{T}\boldsymbol{K}_{n}^{T} - \boldsymbol{K}_{n}\boldsymbol{H}\boldsymbol{P}_{n,n-1} + \\ + \color{#7030A0}{\boldsymbol{K}_{n}\boldsymbol{H}\boldsymbol{P}_{n,n-1}\boldsymbol{H}^{T}\boldsymbol{K}_{n}^{T} + \boldsymbol{K}_{n} \boldsymbol{R}_{n} \boldsymbol{K}_{n}^{T} } \) | Expand |
\( \boldsymbol{P}_{n,n} = \boldsymbol{P}_{n,n-1} - \boldsymbol{P}_{n,n-1}\boldsymbol{H}^{T}\boldsymbol{K}_{n}^{T} - \boldsymbol{K}_{n}\boldsymbol{H}\boldsymbol{P}_{n,n-1} + \\ + \color{#7030A0}{\boldsymbol{K}_{n} \left( \boldsymbol{H} \boldsymbol{P}_{n,n-1}\boldsymbol{H}^{T} + \boldsymbol{R}_{n} \right) \boldsymbol{K}_{n}^{T} } \) | Group the last two terms |
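If you want to convince yourself that this rearrangement is an identity, the following sketch compares the original (Joseph form) covariance update with the grouped form above for randomly generated matrices. The dimensions and the random matrices are assumptions made only for this check.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 3, 2                               # assumed state and measurement dimensions
A = rng.standard_normal((n, n))
P = A @ A.T + n * np.eye(n)               # random symmetric positive-definite P(n,n-1)
B = rng.standard_normal((m, m))
R = B @ B.T + m * np.eye(m)               # random measurement noise covariance
H = rng.standard_normal((m, n))           # observation matrix
K = rng.standard_normal((n, m))           # an arbitrary (not necessarily optimal) gain
I = np.eye(n)

joseph  = (I - K @ H) @ P @ (I - K @ H).T + K @ R @ K.T
grouped = P - P @ H.T @ K.T - K @ H @ P + K @ (H @ P @ H.T + R) @ K.T

print(np.allclose(joseph, grouped))       # True: the rearrangement is an identity
```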
The Kalman Filter is an optimal filter. Thus, we seek a Kalman Gain that minimizes the estimate variance.
To minimize the estimate variance, we need to minimize the sum of the elements of the main diagonal (from the upper left to the lower right) of the covariance matrix \( \boldsymbol{P}_{n,n} \), since these diagonal elements are the variances of the state estimates.
The sum of the elements of the main diagonal of a square matrix is the trace of the matrix. Thus, we need to minimize \( tr(\boldsymbol{P}_{n,n}) \). To find the conditions required to produce a minimum, we differentiate the trace of \( \boldsymbol{P}_{n,n} \) with respect to \( \boldsymbol{K}_{n} \) and set the result to zero.
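The table that follows carries out this differentiation analytically. As an optional cross-check, here is a small finite-difference sketch (with made-up dimensions and random matrices) that compares the numerical gradient of \( tr(\boldsymbol{P}_{n,n}) \) with the analytic expression \( -2\left(\boldsymbol{HP}_{n,n-1}\right)^{T} + 2\boldsymbol{K}_{n}\left(\boldsymbol{HP}_{n,n-1}\boldsymbol{H}^{T} + \boldsymbol{R}_{n}\right) \) obtained below.

```python
import numpy as np

rng = np.random.default_rng(1)

n, m = 3, 2                                   # assumed dimensions (illustrative)
A = rng.standard_normal((n, n))
P = A @ A.T + n * np.eye(n)                   # prior covariance P(n,n-1), symmetric positive-definite
B = rng.standard_normal((m, m))
R = B @ B.T + m * np.eye(m)                   # measurement noise covariance
H = rng.standard_normal((m, n))               # observation matrix
K = rng.standard_normal((n, m))               # arbitrary gain at which the gradient is evaluated
I = np.eye(n)

def trace_P(K):
    """tr(P(n,n)) from the Joseph-form covariance update."""
    M = I - K @ H
    return np.trace(M @ P @ M.T + K @ R @ K.T)

# Analytic gradient derived in the table below: -2(HP)^T + 2K(HPH^T + R)
grad_analytic = -2 * (H @ P).T + 2 * K @ (H @ P @ H.T + R)

# Central finite differences, element by element
eps = 1e-6
grad_fd = np.zeros_like(K)
for i in range(n):
    for j in range(m):
        dK = np.zeros_like(K)
        dK[i, j] = eps
        grad_fd[i, j] = (trace_P(K + dK) - trace_P(K - dK)) / (2 * eps)

print(np.allclose(grad_analytic, grad_fd, atol=1e-4))   # True
```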
Notes | |
---|---|
\( tr\left( \boldsymbol{P}_{n,n} \right) = tr\left( \boldsymbol{P}_{n,n-1}\right) - \\ - \color{#FF8C00}{tr\left( \boldsymbol{P}_{n,n-1}\boldsymbol{H}^{T}\boldsymbol{K}_{n}^{T} \right) - tr\left( \boldsymbol{K}_{n}\boldsymbol{H}\boldsymbol{P}_{n,n-1} \right)} + \\ + tr\left(\boldsymbol{K}_{n} \left(\boldsymbol{H}\boldsymbol{P}_{n,n-1}\boldsymbol{H}^{T} + \boldsymbol{R}_{n} \right) \boldsymbol{K}_{n}^{T} \right) \) | Trace of the Covariance Update Equation |
\( tr\left( \boldsymbol{P}_{n,n} \right) = tr\left( \boldsymbol{P}_{n,n-1}\right) - \color{#FF8C00}{ 2tr\left( \boldsymbol{K}_{n}\boldsymbol{H}\boldsymbol{P}_{n,n-1} \right)} + \\ + tr\left(\boldsymbol{K}_{n} \left(\boldsymbol{H}\boldsymbol{P}_{n,n-1}\boldsymbol{H}^{T} + \boldsymbol{R}_{n} \right)\boldsymbol{K}_{n}^{T} \right) \) | The trace of a matrix is equal to the trace of its transpose; since \( \boldsymbol{P}_{n,n-1} \) is symmetric, \( tr\left( \boldsymbol{P}_{n,n-1}\boldsymbol{H}^{T}\boldsymbol{K}_{n}^{T} \right) = tr\left( \boldsymbol{K}_{n}\boldsymbol{H}\boldsymbol{P}_{n,n-1} \right) \) |
\( \frac{d\left( tr\left( \boldsymbol{P}_{n,n} \right) \right)}{d\boldsymbol{K}_{n}} = \color{blue}{\frac{d\left( tr\left( \boldsymbol{P}_{n,n-1}\right) \right)}{d\boldsymbol{K}_{n}}} - \color{green}{ \frac{d\left( 2tr\left( \boldsymbol{K}_{n}\boldsymbol{H}\boldsymbol{P}_{n,n-1}\right) \right) }{d\boldsymbol{K}_{n}} } + \\ + \color{#7030A0}{\frac{ d\left( tr\left(\boldsymbol{K}_{n} \left(\boldsymbol{HP}_{n,n-1}\boldsymbol{H}^{T} + \boldsymbol{R}_{n} \right) \boldsymbol{K}_{n}^{T} \right) \right) }{d\boldsymbol{K}_{n}}} = 0 \) | Differentiate the trace of \( \boldsymbol{P}_{n,n} \) with respect to \( \boldsymbol{K}_{n} \) |
\( \frac{d\left( tr\left( \boldsymbol{P}_{n,n} \right) \right)}{d\boldsymbol{K}_{n}} = \color{blue}{0} - \color{green}{ 2 \left( \boldsymbol{HP}_{n,n-1}\right)^{T} } + \\ + \color{#7030A0}{2\boldsymbol{K}_{n} \left(\boldsymbol{HP}_{n,n-1}\boldsymbol{H}^{T} + \boldsymbol{R}_{n} \right) } = 0 \) | \( \color{green}{\frac{d}{d\boldsymbol{A}} \left( tr\left( \boldsymbol{AB} \right) \right) = \boldsymbol{B}^{T} } \\ \color{#7030A0}{\frac{d}{d\boldsymbol{A}} \left( tr\left( \boldsymbol{ABA}^{T} \right) \right) = 2\boldsymbol{AB} } \) (the second identity holds for a symmetric \( \boldsymbol{B} \), and \( \boldsymbol{HP}_{n,n-1}\boldsymbol{H}^{T} + \boldsymbol{R}_{n} \) is symmetric). See the proof here. |
\( \color{green}{ \left( \boldsymbol{HP}_{n,n-1} \right)^{T} } = \color{#7030A0}{\boldsymbol{K}_{n} \left(\boldsymbol{HP}_{n,n-1}\boldsymbol{H}^{T} + \boldsymbol{R}_{n} \right) } \) | Divide both sides by 2 and rearrange |
\( \boldsymbol{K}_{n} = \left(\boldsymbol{HP}_{n,n-1} \right)^{T} \left(\boldsymbol{HP}_{n,n-1}\boldsymbol{H}^{T} + \boldsymbol{R}_{n} \right)^{-1} \) | Multiply both sides on the right by \( \left(\boldsymbol{HP}_{n,n-1}\boldsymbol{H}^{T} + \boldsymbol{R}_{n} \right)^{-1} \) |
\( \boldsymbol{K}_{n} = \boldsymbol{P}_{n,n-1}^{T}\boldsymbol{H}^{T} \left(\boldsymbol{HP}_{n,n-1}\boldsymbol{H}^{T} + \boldsymbol{R}_{n} \right)^{-1} \) | Apply the matrix transpose property: \( (\boldsymbol{AB})^{T} = \boldsymbol{B}^{T}\boldsymbol{A}^{T} \) |
\( \boldsymbol{K}_{n} = \boldsymbol{P}_{n,n-1}\boldsymbol{H}^{T} \left(\boldsymbol{HP}_{n,n-1}\boldsymbol{H}^{T} + \boldsymbol{R}_{n} \right)^{-1} \) | The Covariance matrix is a symmetric matrix: \( \boldsymbol{P}_{n,n-1}^{T} = \boldsymbol{P}_{n,n-1} \) |
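As a final sanity check, the sketch below builds the gain from this equation for a randomly generated system (the dimensions and matrices are assumptions made for illustration) and verifies that perturbing the gain only increases \( tr(\boldsymbol{P}_{n,n}) \), i.e. that the derived \( \boldsymbol{K}_{n} \) indeed minimizes the estimate variance.

```python
import numpy as np

rng = np.random.default_rng(2)

n, m = 3, 2                                   # assumed dimensions (illustrative)
A = rng.standard_normal((n, n))
P = A @ A.T + n * np.eye(n)                   # prior covariance P(n,n-1)
B = rng.standard_normal((m, m))
R = B @ B.T + m * np.eye(m)                   # measurement noise covariance
H = rng.standard_normal((m, n))               # observation matrix
I = np.eye(n)

S = H @ P @ H.T + R                           # innovation covariance
K_opt = P @ H.T @ np.linalg.inv(S)            # K = P H^T (H P H^T + R)^(-1)

def trace_P(K):
    """tr(P(n,n)) from the Joseph-form covariance update."""
    M = I - K @ H
    return np.trace(M @ P @ M.T + K @ R @ K.T)

best = trace_P(K_opt)
# Any perturbed gain should give a larger (or equal) trace
worse = all(trace_P(K_opt + 0.1 * rng.standard_normal((n, m))) >= best
            for _ in range(100))
print(worse)                                  # True: K_opt minimizes tr(P(n,n))
```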