One-dimensional Kalman Gain Derivation

There are several ways to derive the one-dimensional Kalman Gain equation. I present the simplest one.

Given the measurement \( z_{n} \) and the prior estimate \( \hat{x}_{n,n-1} \), we are interested in finding the optimum combined estimate \( \hat{x}_{n,n} \) that is based on both of them.

The optimum combined estimate is a weighted mean of the prior estimate and the measurement:

\[ \hat{x}_{n,n} = w_{1}z_{n} + w_{2}\hat{x}_{n,n-1} \]

Where \( w_{1} \) and \( w_{2} \) are the weights of the measurement and the prior estimate, respectively. The weights sum to one:

\[ w_{1} + w_{2} = 1 \]

Substituting \( w_{2} = 1 - w_{1} \) gives:

\[ \hat{x}_{n,n} = w_{1}z_{n} + (1 - w_{1})\hat{x}_{n,n-1} \]
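
As a purely illustrative example (the numbers are hypothetical): if the measurement is \( z_{n} = 50.5 \), the prior estimate is \( \hat{x}_{n,n-1} = 48.3 \), and the measurement weight is \( w_{1} = 0.6 \), then:

\[ \hat{x}_{n,n} = 0.6 \cdot 50.5 + 0.4 \cdot 48.3 = 49.62 \]

The combined estimate always lies between the measurement and the prior estimate.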

The relation between variances is given by:

\[ p_{n,n} = w_{1}^{2}r_{n} + (1 - w_{1})^{2}p_{n,n-1} \]

Where:

\( p_{n,n} \) is the variance of the optimum combined estimate \( \hat{x}_{n,n} \)

\( p_{n,n-1} \) is the variance of the prior estimate \( \hat{x}_{n,n-1} \)

\( r_{n} \) is the variance of the measurement \( z_{n} \)

Note: for any normally distributed random variable \( x \) with variance \( \sigma^{2} \), the scaled variable \( wx \) is normally distributed with variance \( w^{2}\sigma^{2} \).
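
To see where the variance relation comes from, apply the note above to each term and assume that the measurement error and the prior estimation error are uncorrelated (the standard Kalman Filter assumption), so their variances add:

\[ p_{n,n} = Var\left( w_{1}z_{n} + \left( 1 - w_{1} \right)\hat{x}_{n,n-1} \right) = w_{1}^{2}r_{n} + \left( 1 - w_{1} \right)^{2}p_{n,n-1} \]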

Since we are looking for an optimum estimate, we want to minimize \( p_{n,n} \).

To find \( w_{1} \) that minimizes \( p_{n,n} \), we differentiate \( p_{n,n} \) with respect to \( w_{1} \) and set the result to zero.

\[ \frac{dp_{n,n}}{dw_{1}} = 2w_{1}r_{n} - 2(1 - w_{1})p_{n,n-1} \]

Setting the derivative to zero and dividing both sides by 2:

\[ w_{1}r_{n} - (1 - w_{1})p_{n,n-1} = 0 \]

Hence

\[ w_{1}r_{n} = p_{n,n-1} - w_{1}p_{n,n-1} \]

\[ w_{1}p_{n,n-1} + w_{1}r_{n} = p_{n,n-1} \]

\[ w_{1} = \frac{p_{n,n-1}}{p_{n,n-1} + r_{n}} \]

We have derived the Kalman Gain!
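
Substituting this weight back into the weighted-mean equation rewrites the estimate in the familiar update form, where the gain scales the difference between the measurement and the prior estimate:

\[ \hat{x}_{n,n} = \hat{x}_{n,n-1} + w_{1}\left( z_{n} - \hat{x}_{n,n-1} \right) \]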

Since the Kalman Gain yields the minimum variance estimate, the Kalman Filter is also called an optimal filter.
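
As a check on the minimum-variance claim, substituting the optimal \( w_{1} \) back into the variance relation gives:

\[ p_{n,n} = \left( 1 - w_{1} \right)p_{n,n-1} = \frac{r_{n}p_{n,n-1}}{p_{n,n-1} + r_{n}} \]

which is never larger than either \( p_{n,n-1} \) or \( r_{n} \), so combining the prior estimate with the measurement never increases the uncertainty.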
