One-dimensional Kalman Gain Derivation

There are several ways to derive the one-dimensional Kalman Gain equation. I present the simplest one.

Given the measurement \( z_{n} \) and the prior estimate \( \hat{x}_{n,n-1} \), we are interested in finding the optimum combined estimate \( \hat{x}_{n,n} \) based on both the measurement and the prior estimate.

The optimum combined estimate is a weighted mean of the prior estimate and the measurement:

\[ \hat{x}_{n,n} = k_{1}z_{n} + k_{2}\hat{x}_{n,n-1} \]

Where \( k_{1} \) and \( k_{2} \) are the weights of the measurement and the prior estimate, respectively. The weights must sum to one:

\[ k_{1} + k_{2} = 1 \]

Substituting \( k_{2} = 1 - k_{1} \) into the combined estimate yields:

\[ \hat{x}_{n,n} = k_{1}z_{n} + (1 - k_{1})\hat{x}_{n,n-1} \]
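
As a quick numeric sketch (the values here are hypothetical, chosen only for illustration), the combined estimate is simply a weighted mean of the measurement and the prior estimate:

```python
z_n = 52.0        # measurement (hypothetical value)
x_prior = 50.0    # prior estimate (hypothetical value)
k1 = 0.6          # weight given to the measurement (assumed for illustration)

# Weighted mean of the measurement and the prior estimate
x_combined = k1 * z_n + (1 - k1) * x_prior
print(x_combined)  # 51.2
```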

The relation between variances is given by:

\[ p_{n,n} = k_{1}^{2}r_{n} + (1 - k_{1})^{2}p_{n,n-1} \]

Where:

\( p_{n,n} \) is the variance of the optimum combined estimate \( \hat{x}_{n,n} \)

\( p_{n,n-1} \) is the variance of the prior estimate \( \hat{x}_{n,n-1} \)

\( r_{n} \) is the variance of the measurement \( z_{n} \)

Note: for any normally distributed random variable \( x \) with variance \( \sigma^{2} \), the scaled variable \( kx \) is normally distributed with variance \( k^{2}\sigma^{2} \).
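
This relation follows from the variance-scaling rule in the note above, together with the assumption (implicit in this derivation) that the measurement and the prior estimate are independent, so their variances add:

\[ p_{n,n} = \mathrm{Var}\left( k_{1}z_{n} + (1 - k_{1})\hat{x}_{n,n-1} \right) = k_{1}^{2}\mathrm{Var}(z_{n}) + (1 - k_{1})^{2}\mathrm{Var}(\hat{x}_{n,n-1}) = k_{1}^{2}r_{n} + (1 - k_{1})^{2}p_{n,n-1} \]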

Since we are looking for an optimum estimate, we are interested in minimizing \( p_{n,n} \).

To find \( k_{1} \) that minimizes \( p_{n,n} \), we differentiate \( p_{n,n} \) with respect to \( k_{1} \) and set the result to zero.

\[ \frac{dp_{n,n}}{dk_{1}} = 2k_{1}r_{n} - 2(1 - k_{1})p_{n,n-1} \]

Setting the result to zero:

\[ 2k_{1}r_{n} - 2(1 - k_{1})p_{n,n-1} = 0 \]

Hence:

\[ k_{1}r_{n} = p_{n,n-1} - k_{1}p_{n,n-1} \]

\[ k_{1}p_{n,n-1} + k_{1}r_{n} = p_{n,n-1} \]

\[ k_{1} = \frac{p_{n,n-1}}{p_{n,n-1} + r_{n}} \]

We have derived the Kalman Gain!
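
As a sanity check, here is a small sketch (assuming Python with the SymPy library is available; the symbol names are chosen only for this example) that repeats the minimization symbolically:

```python
import sympy as sp

# Symbols for the measurement weight, measurement variance, and prior variance
k1, r_n, p_prior = sp.symbols('k1 r_n p_prior', positive=True)

# Variance of the combined estimate as a function of the weight k1
p_combined = k1**2 * r_n + (1 - k1)**2 * p_prior

# Differentiate with respect to k1, set the derivative to zero, and solve
optimal_k1 = sp.solve(sp.diff(p_combined, k1), k1)
print(optimal_k1)                  # [p_prior/(p_prior + r_n)]

# The second derivative is 2*(r_n + p_prior) > 0, so this is indeed a minimum
print(sp.diff(p_combined, k1, 2))  # 2*p_prior + 2*r_n
```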

Since the Kalman Gain yields the minimum variance estimate, the Kalman Filter is also called an optimal filter.
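
To illustrate the behavior of the gain (with made-up variances, purely for illustration): when the measurement variance \( r_{n} \) is small relative to the prior variance, the gain approaches 1 and the estimate follows the measurement; when \( r_{n} \) is large, the gain approaches 0 and the estimate stays close to the prior.

```python
def kalman_gain(p_prior: float, r_n: float) -> float:
    """One-dimensional Kalman Gain: the weight given to the measurement."""
    return p_prior / (p_prior + r_n)

# Hypothetical prior variance of 4.0 against two measurement variances
print(kalman_gain(4.0, 0.01))   # ~0.998 -> accurate measurement dominates
print(kalman_gain(4.0, 100.0))  # ~0.038 -> noisy measurement is mostly ignored
```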
