Adding process noise

In this chapter, we add process noise to the one-dimensional Kalman Filter model.

The complete model of the one-dimensional Kalman Filter

The Process Noise

In the real world, there are uncertainties in the system dynamic model. For example, when we want to estimate the resistance value of a resistor, we assume a constant dynamic model, i.e., the resistance doesn't change between the measurements. However, the resistance can change slightly due to fluctuations in the environment temperature. When tracking ballistic missiles with radar, the uncertainty of the dynamic model includes random changes in the target acceleration. The uncertainties are much more significant for an aircraft due to possible aircraft maneuvers.

On the other hand, when we estimate the location of a static object using a GPS receiver, the uncertainty of the dynamic model is zero since the static object doesn't move. The uncertainty of the dynamic model is called the Process Noise. In the literature, it is also called plant noise, driving noise, dynamics noise, model noise, and system noise. The process noise produces estimation errors.

In the previous example, we estimated the height of the building. Since the building height doesn't change, we didn't consider the process noise.

The Process Noise Variance is denoted by the letter \( q \).

The Covariance Extrapolation Equation must include the Process Noise Variance.

The Covariance Extrapolation Equation for constant dynamics is:

\[ p_{n+1,n}= p_{n,n}+ q_{n} \]

These are the updated Kalman Filter equations in one dimension:

The alternative equation names used in the literature are given in parentheses.

State Update Equation (Filtering Equation):

\[ \hat{x}_{n,n}= \hat{x}_{n,n-1}+ K_{n} \left( z_{n}- \hat{x}_{n,n-1} \right) \]

Covariance Update Equation (Corrector Equation):

\[ p_{n,n}= \left( 1-K_{n} \right) p_{n,n-1} \]

Kalman Gain Equation (Weight Equation):

\[ K_{n}= \frac{p_{n,n-1}}{p_{n,n-1}+r_{n}} \]

State Extrapolation Equation (Predictor Equation, Transition Equation, Prediction Equation, Dynamic Model, State Space Model):

For constant dynamics:

\[ \hat{x}_{n+1,n}= \hat{x}_{n,n} \]

For constant velocity dynamics:

\[ \hat{x}_{n+1,n}= \hat{x}_{n,n}+ \Delta t\hat{\dot{x}}_{n,n} \]

\[ \hat{\dot{x}}_{n+1,n}= \hat{\dot{x}}_{n,n} \]

Covariance Extrapolation Equation (Predictor Covariance Equation):

For constant dynamics:

\[ p_{n+1,n}= p_{n,n} + q_{n} \]

For constant velocity dynamics:

\[ p_{n+1,n}^{x}= p_{n,n}^{x} + \Delta t^{2} \cdot p_{n,n}^{v} \]

\[ p_{n+1,n}^{v}= p_{n,n}^{v} + q_{n} \]
Note 1: The State Extrapolation Equation and the Covariance Extrapolation Equation depend on the system dynamics.
Note 2: The table above demonstrates the special form of the Kalman Filter equations tailored for the specific case. The general form of the equation is presented later in matrix notation. For now, our goal is to understand the concept of the Kalman Filter.
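For the constant-dynamics case, the equations above can be condensed into a short predict/update routine. The following Python sketch is only an illustration of the equations; the function and variable names are our own choices, not a standard API:

```python
def update(x_pred, p_pred, z, r):
    """State Update, Covariance Update, and Kalman Gain (one dimension)."""
    K = p_pred / (p_pred + r)        # Kalman Gain
    x = x_pred + K * (z - x_pred)    # State Update
    p = (1 - K) * p_pred             # Covariance Update
    return x, p, K


def predict(x, p, q):
    """State and Covariance Extrapolation for constant dynamics."""
    x_pred = x       # constant dynamics: the state doesn't change
    p_pred = p + q   # the process noise variance inflates the uncertainty
    return x_pred, p_pred
```

Each filter iteration is one `update` call followed by one `predict` call, as in the examples below.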

Example 6 – Estimating the temperature of the liquid in a tank

We want to estimate the temperature of the liquid in a tank.

Estimating the liquid temperature

We assume that at a steady state, the liquid temperature is constant. However, some fluctuations in the true liquid temperature are possible. We can describe the system dynamics by the following equation:

\[ x_{n}=T+ w_{n} \]

Where:

\( T \) is the constant temperature

\( w_{n} \) is a random process noise with variance \( q \)
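As an illustration of this model (our own simulation, not part of the numerical example below), \( w_{n} \) can be drawn from a zero-mean normal distribution with variance \( q \):

```python
import random

T = 50.0        # constant true temperature (degrees Celsius)
q = 0.0001      # process noise variance (degrees Celsius squared)

random.seed(1)  # arbitrary seed, for reproducibility only
# x_n = T + w_n, with w_n ~ N(0, q); the standard deviation is sqrt(q) = 0.01
true_temps = [T + random.gauss(0.0, q ** 0.5) for _ in range(10)]
```

The resulting values fluctuate by a few hundredths of a degree around \( T \), like the true-temperature list in the numerical example below.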

The numerical example

  • Let us assume a true temperature of 50 degrees Celsius.
  • We assume that the model is accurate. Thus we set the process noise variance ( \( q \) ) to 0.0001\( ^{\circ}C^{2} \).
  • The measurement error (standard deviation) is 0.1\( ^{\circ}C \).
  • The measurements are taken every 5 seconds.
  • Due to the process noise, the true liquid temperature values at the measurement points are: 50.005\( ^{\circ}C \), 49.994\( ^{\circ}C \), 49.993\( ^{\circ}C \), 50.001\( ^{\circ}C \), 50.006\( ^{\circ}C \), 49.998\( ^{\circ}C \), 50.021\( ^{\circ}C \), 50.005\( ^{\circ}C \), 50\( ^{\circ}C \), and 49.997\( ^{\circ}C \).
  • The measurements are: 49.986\( ^{\circ}C \), 49.963\( ^{\circ}C \), 50.09\( ^{\circ}C \), 50.001\( ^{\circ}C \), 50.018\( ^{\circ}C \), 50.05\( ^{\circ}C \), 49.938\( ^{\circ}C \), 49.858\( ^{\circ}C \), 49.965\( ^{\circ}C \), and 50.114\( ^{\circ}C \).

The following chart compares the true liquid temperature and the measurements.

True temperature vs. measurements

Iteration Zero

Before the first iteration, we must initialize the Kalman Filter and predict the following state (which is the first state).

Initialization

We don't know the true temperature of the liquid in a tank, and our guess is 60\( ^{\circ}C \).

\[ \hat{x}_{0,0}=60^{\circ}\textrm{C} \]

Our guess is imprecise, so we set our initialization estimate error \( \sigma \) to 100\( ^{\circ}C \). The Estimate Variance of the initialization is the error variance \( \left( \sigma ^{2} \right) \):

\[ p_{0,0}=100^{2}=10,000^{\circ}C^{2} \]

This variance is very high. We get faster Kalman Filter convergence if we initialize with a more meaningful value.

Prediction

Since our model has constant dynamics, the predicted estimate is equal to the current estimate:

\[ \hat{x}_{1,0}=60^{\circ}C \]

The extrapolated estimate variance:

\[ p_{1,0}= p_{0,0}+q=10000+ 0.0001=10000.0001^{\circ}C^{2} \]

First Iteration

Step 1 - Measure

The measurement value:

\[ z_{1}=~ 49.986^{\circ}C \]

Since the measurement error is 0.1 ( \( \sigma \) ), the variance ( \( \sigma ^{2} \) ) would be 0.01; thus, the measurement variance is:

\[ r_{1}= 0.01^{\circ}C^{2} \]

Step 2 - Update

Kalman Gain calculation:

\[ K_{1}= \frac{p_{1,0}}{p_{1,0}+r_{1}}= \frac{10000.0001}{10000.0001+0.01} = 0.999999 \]

The Kalman Gain is almost 1, meaning that our estimate error is much bigger than the measurement error. Thus, the weight of the estimate is negligible, while the measurement weight is almost 1.

Estimating the current state:

\[ \hat{x}_{1,1}=~ \hat{x}_{1,0}+ K_{1} \left( z_{1}- \hat{x}_{1,0} \right) =60+0.999999 \left( 49.986-60 \right) = 49.986^{\circ}C \]

Update the current estimate variance:

\[ p_{1,1}=~ \left( 1-K_{1} \right) p_{1,0}= \left( 1-0.999999 \right) 10000.0001=0.01^{\circ}C^{2} \]

Step 3 - Predict

Since our system's Dynamic Model is constant, i.e., the liquid temperature doesn't change:

\[ \hat{x}_{2,1}=\hat{x}_{1,1}= 49.986^{\circ}C \]

The extrapolated estimate variance is:

\[ p_{2,1}= p_{1,1}+q=0.01+ 0.0001=0.0101^{\circ}C^{2} \]
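The first-iteration arithmetic above can be checked with a few lines of Python, using the same symbols as the text:

```python
x_pred, p_pred = 60.0, 10000.0001   # iteration-zero prediction
z, r, q = 49.986, 0.01, 0.0001      # first measurement and variances

K = p_pred / (p_pred + r)           # Kalman Gain, about 0.999999
x = x_pred + K * (z - x_pred)       # updated estimate, about 49.986
p = (1 - K) * p_pred                # updated variance, about 0.01
p_next = p + q                      # extrapolated variance, about 0.0101

print(round(K, 6), round(x, 3), round(p, 4), round(p_next, 4))
```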

Second Iteration

Step 1 - Measure

The measurement value:

\[ z_{2}=~ 49.963^{\circ}C \]

Since the measurement error is 0.1 ( \( \sigma \) ), the variance ( \( \sigma^{2} \) ) would be 0.01; thus, the measurement variance is:

\[ r_{2}= 0.01^{\circ}C^{2} \]

Step 2 - Update

Kalman Gain calculation:

\[ K_{2}= \frac{p_{2,1}}{p_{2,1}+r_{2}}= \frac{0.0101}{0.0101+0.01} = 0.5 \]

The Kalman Gain is 0.5, i.e., the weight of the estimate and the measurement weight are equal.

Estimating the current state:

\[ \hat{x}_{2,2}=~ \hat{x}_{2,1}+ K_{2} \left( z_{2}- \hat{x}_{2,1} \right) =49.986+0.5 \left( 49.963-49.986 \right) =49.974^{\circ}C \]

Update the current estimate variance:

\[ p_{2,2}=~ \left( 1-K_{2} \right) p_{2,1}= \left( 1-0.5 \right) 0.0101=0.005^{\circ}C^{2} \]

Step 3 - Predict

Since the dynamic model of the system is constant, i.e., the liquid temperature doesn't change:

\[ \hat{x}_{3,2}=\hat{x}_{2,2}= 49.974^{\circ}C \]

The extrapolated estimate variance is:

\[ p_{3,2}= p_{2,2}+q=0.005+ 0.0001=0.0051^{\circ}C^{2} \]

Iterations 3-10

The calculations for the subsequent iterations are summarized in the following table:

\( n \) \( z_{n} \) Current state estimates ( \( K_{n} \) , \( \hat{x}_{n,n} \) , \( p_{n,n} \) ) Prediction ( \( \hat{x}_{n+1,n} \) , \( p_{n+1,n} \) )
3 \( 50.09^{\circ}C \) \[ K_{3}= \frac{0.0051}{0.0051+0.01}=0.3388 \] \[ \hat{x}_{3,3}=~ 49.974+0.3388 \left( 50.09-49.974 \right) = 50.016^{\circ}C \] \[ p_{3,3}= \left( 1-0.3388 \right)0.0051 =0.0034^{\circ}C^{2} \] \[ \hat{x}_{4,3}= \hat{x}_{3,3} = 50.016^{\circ}C \] \[ p_{4,3}= 0.0034+0.0001=0.0035^{\circ}C^{2} \]
4 \( 50.001^{\circ}C \) \[ K_{4}= \frac{0.0035}{0.0035+0.01}=0.2586 \] \[ \hat{x}_{4,4}=~ 50.016+0.2586 \left( 50.001-50.016 \right) = 50.012^{\circ}C \] \[ p_{4,4}= \left( 1-0.2586 \right) 0.0035=0.0026^{\circ}C^{2} \] \[ \hat{x}_{5,4}= \hat{x}_{4,4}=50.012^{\circ}C \] \[ p_{5,4}= 0.0026+0.0001=0.0027^{\circ}C^{2} \]
5 \( 50.018^{\circ}C \) \[ K_{5}= \frac{0.0027}{0.0027+0.01}=0.2117 \] \[ \hat{x}_{5,5}= 50.012+0.2117 \left( 50.018-50.012 \right) =50.013^{\circ}C \] \[ p_{5,5}= \left( 1-0.2117 \right) 0.0027=0.0021^{\circ}C^{2} \] \[ \hat{x}_{6,5}= \hat{x}_{5,5}=50.013^{\circ}C \] \[ p_{6,5}= 0.0021+0.0001=0.0022^{\circ}C^{2} \]
6 \( 50.05^{\circ}C \) \[ K_{6}= \frac{0.0022}{0.0022+0.01}=0.1815 \] \[ \hat{x}_{6,6}=~ 50.013+0.1815 \left( 50.05-50.013 \right) = 50.02^{\circ}C \] \[ p_{6,6}= \left( 1-0.1815 \right) 0.0022=0.0018^{\circ}C^{2} \] \[ \hat{x}_{7,6}= \hat{x}_{6,6}=50.02^{\circ}C \] \[ p_{7,6}= 0.0018+0.0001=0.0019^{\circ}C^{2} \]
7 \( 49.938^{\circ}C \) \[ K_{7}= \frac{0.0019}{0.0019+0.01}=0.1607 \] \[ \hat{x}_{7,7}=~ 50.02 + 0.1607 \left( 49.938-50.02 \right) = 50.007^{\circ}C \] \[ p_{7,7}= \left( 1-0.1607 \right) 0.0019=0.0016^{\circ}C^{2} \] \[ \hat{x}_{8,7}= \hat{x}_{7,7}=50.007^{\circ}C \] \[ p_{8,7}= 0.0016+0.0001=0.0017^{\circ}C^{2} \]
8 \( 49.858^{\circ}C \) \[ K_{8}= \frac{0.0017}{0.0017+0.01}=0.1458 \] \[ \hat{x}_{8,8}= 50.007+0.1458 \left( 49.858-50.007 \right) =49.985^{\circ}C \] \[ p_{8,8}= \left( 1-0.1458 \right) 0.0017=0.0015^{\circ}C^{2} \] \[ \hat{x}_{9,8}= \hat{x}_{8,8}=49.985^{\circ}C \] \[ p_{9,8}= 0.0015+0.0001=0.0016^{\circ}C^{2} \]
9 \( 49.965^{\circ}C \) \[ K_{9}= \frac{0.0016}{0.0016+0.01}=0.1348 \] \[ \hat{x}_{9,9}=~ 49.985+0.1348 \left( 49.965-49.985 \right) =49.982^{\circ}C \] \[ p_{9,9}= \left( 1-0.1348 \right) 0.0016=0.0014^{\circ}C^{2} \] \[ \hat{x}_{10,9}= \hat{x}_{9,9}=49.982^{\circ}C \] \[ p_{10,9}= 0.0014+0.0001=0.0015^{\circ}C^{2} \]
10 \( 50.114^{\circ}C \) \[ K_{10}= \frac{0.0015}{0.0015+0.01}=0.1265 \] \[ \hat{x}_{10,10}=~ 49.982+0.1265 \left( 50.114 -49.982 \right) =49.999^{\circ}C \] \[ p_{10,10}= \left( 1-0.1265 \right) 0.0015=0.0013^{\circ}C^{2} \] \[ \hat{x}_{11,10}= \hat{x}_{10,10}=49.999^{\circ}C \] \[ p_{11,10}= 0.0013+0.0001=0.0014^{\circ}C^{2} \]
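The whole table can be reproduced with a short loop. The sketch below uses the measurement list from the text; the last digits differ slightly from the table because the table rounds intermediate values:

```python
q, r = 0.0001, 0.01
measurements = [49.986, 49.963, 50.09, 50.001, 50.018,
                50.05, 49.938, 49.858, 49.965, 50.114]

x, p = 60.0, 100.0 ** 2   # initialization: guess and its variance
p += q                    # iteration-zero covariance extrapolation

for z in measurements:
    K = p / (p + r)       # Kalman Gain
    x += K * (z - x)      # State Update
    p *= (1 - K)          # Covariance Update
    p += q                # Covariance Extrapolation (constant dynamics)

print(round(K, 4), round(x, 3))   # final gain and estimate, near 0.1265 and 49.999
```

The final estimate lands within a few thousandths of a degree of the true 50\( ^{\circ}C \), while single measurements are off by up to 0.14\( ^{\circ}C \).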

Results analysis

The following chart describes the Kalman Gain.

The Kalman Gain

As you can see, the Kalman Gain gradually decreases; therefore, the KF converges.

The following chart compares the true value, measured values, and estimates. The confidence interval is 95%.

You can find the guidelines for a confidence interval calculation here.

True value, measured values and estimates

As you can see, the estimated value converges toward the true value. However, the KF estimate uncertainties are too high for the 95% confidence level.

Example summary

We measured a liquid temperature using the one-dimensional Kalman Filter. Although the system dynamics include a random process noise, the Kalman Filter provides a good estimation.

Example 7 – Estimating the temperature of a heated liquid

Like in the previous example, we estimate the temperature of a liquid in a tank. In this case, the dynamic model of the system is not constant - the liquid is getting heated at a rate of 0.1\( ^{\circ}C \) every second.

The numerical example

The Kalman Filter parameters are similar to the previous example:

  • We assume that the model is accurate. Thus we set the process noise variance ( \( q \) ) to 0.0001\( ^{\circ}C^{2} \).
  • The measurement error (standard deviation) is 0.1\( ^{\circ}C \).
  • The measurements are taken every 5 seconds.
  • The dynamic model of the system is constant.

Note: although the true dynamic model of the system is not constant (since the liquid is getting heated), we treat the system as a system with a constant dynamic model (the temperature doesn't change).

  • The true liquid temperature values at the measurement points are: 50.505\( ^{\circ}C \), 50.994\( ^{\circ}C \), 51.493\( ^{\circ}C \), 52.001\( ^{\circ}C \), 52.506\( ^{\circ}C \), 52.998\( ^{\circ}C \), 53.521\( ^{\circ}C \), 54.005\( ^{\circ}C \), 54.5\( ^{\circ}C \), and 54.997\( ^{\circ}C \).
  • The measurements are: 50.486\( ^{\circ}C \), 50.963\( ^{\circ}C \), 51.597\( ^{\circ}C \), 52.001\( ^{\circ}C \), 52.518\( ^{\circ}C \), 53.05\( ^{\circ}C \), 53.438\( ^{\circ}C \), 53.858\( ^{\circ}C \), 54.465\( ^{\circ}C \), and 55.114\( ^{\circ}C \).

The following chart compares the true liquid temperature and the measurements.

True temperature vs. measurements

Iteration Zero

Iteration zero is similar to the previous example.

Before the first iteration, we must initialize the Kalman Filter and predict the following state (which is the first state).

Initialization

We don't know the true temperature of the liquid in a tank, and our guess is 10\( ^{\circ}C \).

\[ \hat{x}_{0,0}=10^{\circ}C \]

Our guess is imprecise, so we set our initialization estimate error ( \( \sigma \) ) to 100\( ^{\circ}C \). The Estimate Variance of the initialization is the error variance \( \left( \sigma ^{2} \right) \):

\[ p_{0,0}=100^{2}=10,000^{\circ}C^{2} \]

This variance is very high. We get faster Kalman Filter convergence if we initialize with a more meaningful value.

Prediction

Now, we shall predict the next state based on the initialization values.

Since our model has constant dynamics, the predicted estimate is equal to the current estimate:

\[ \hat{x}_{1,0}=10^{\circ}C \]

The extrapolated estimate variance:

\[ p_{1,0}= p_{0,0}+q=10000+ 0.0001=10000.0001^{\circ}C^{2} \]

Iterations 1-10

The calculations for the subsequent iterations are summarized in the following table:

\( n \) \( z_{n} \) Current state estimates ( \( K_{n} \) , \( \hat{x}_{n,n} \) , \( p_{n,n} \) ) Prediction ( \( \hat{x}_{n+1,n} \) , \( p_{n+1,n} \) )
1 \( 50.486^{\circ}C \) \[ K_{1}= \frac{10000.0001}{10000.0001+0.01} = 0.999999 \] \[ \hat{x}_{1,1}=~ 10+0.999999 \left( 50.486-10 \right) = 50.486^{\circ}C \] \[ p_{1,1}= \left( 1-0.999999 \right) 10000.0001=0.01^{\circ}C^{2} \] \[ \hat{x}_{2,1}= \hat{x}_{1,1}=50.486^{\circ}C \] \[ p_{2,1}= 0.01+0.0001=0.0101^{\circ}C^{2} \]
2 \( 50.963^{\circ}C \) \[ K_{2}= \frac{0.0101}{0.0101+0.01}=0.5025 \] \[ \hat{x}_{2,2}=~ 50.486+0.5025 \left( 50.963-50.486 \right) =50.726^{\circ}C\] \[ p_{2,2}= \left( 1-0.5025 \right) 0.0101=0.005^{\circ}C^{2} \] \[ \hat{x}_{3,2}= \hat{x}_{2,2}=50.726^{\circ}C \] \[ p_{3,2}= 0.005+0.0001=0.0051^{\circ}C^{2} \]
3 \( 51.597^{\circ}C \) \[ K_{3}= \frac{0.0051}{0.0051+0.01}=0.3388 \] \[ \hat{x}_{3,3}=~ 50.726+0.3388 \left( 51.597-50.726 \right) = 51.021^{\circ}C\] \[ p_{3,3}= \left( 1-0.3388 \right) 0.0051=0.0034^{\circ}C^{2} \] \[ \hat{x}_{4,3}= \hat{x}_{3,3}=51.021^{\circ}C \] \[ p_{4,3}= 0.0034+0.0001=0.0035^{\circ}C^{2} \]
4 \( 52.001^{\circ}C \) \[ K_{4}= \frac{0.0035}{0.0035+0.01}=0.2586 \] \[ \hat{x}_{4,4}=~ 51.021+0.2586 \left( 52.001-51.021 \right) =51.274^{\circ}C \] \[ p_{4,4}= \left( 1-0.2586 \right) 0.0035=0.0026^{\circ}C^{2} \] \[ \hat{x}_{5,4}= \hat{x}_{4,4}=51.274^{\circ}C \] \[ p_{5,4}= 0.0026+0.0001=0.0027^{\circ}C^{2} \]
5 \( 52.518^{\circ}C \) \[ K_{5}= \frac{0.0027}{0.0027+0.01}=0.2117 \] \[ \hat{x}_{5,5}= 51.274+0.2117 \left( 52.518-51.274 \right) =51.538^{\circ}C \] \[ p_{5,5}= \left( 1-0.2117 \right) 0.0027=0.0021^{\circ}C^{2} \] \[ \hat{x}_{6,5}= \hat{x}_{5,5}=51.538^{\circ}C \] \[ p_{6,5}= 0.0021+0.0001=0.0022^{\circ}C^{2} \]
6 \( 53.05^{\circ}C \) \[ K_{6}= \frac{0.0022}{0.0022+0.01}=0.1815 \] \[ \hat{x}_{6,6}=~ 51.538+0.1815 \left( 53.05-51.538 \right) = 51.812^{\circ}C \] \[ p_{6,6}= \left( 1-0.1815 \right) 0.0022=0.0018^{\circ}C^{2} \] \[ \hat{x}_{7,6}= \hat{x}_{6,6}=51.812^{\circ}C \] \[ p_{7,6}= 0.0018+0.0001=0.0019^{\circ}C^{2} \]
7 \( 53.438^{\circ}C \) \[ K_{7}= \frac{0.0019}{0.0019+0.01}=0.1607 \] \[ \hat{x}_{7,7}=~ 51.812+0.1607 \left( 53.438-51.812 \right) =52.0735^{\circ}C \] \[ p_{7,7}= \left( 1-0.1607 \right) 0.0019=0.0016^{\circ}C^{2} \] \[ \hat{x}_{8,7}= \hat{x}_{7,7}=52.0735^{\circ}C \] \[ p_{8,7}= 0.0016+0.0001=0.0017^{\circ}C^{2} \]
8 \( 53.858^{\circ}C \) \[ K_{8}= \frac{0.0017}{0.0017+0.01}=0.1458 \] \[ \hat{x}_{8,8}= 52.0735+0.1458 \left( 53.858-52.0735 \right) =52.334^{\circ}C \] \[ p_{8,8}= \left( 1-0.1458 \right) 0.0017=0.0015^{\circ}C^{2} \] \[ \hat{x}_{9,8}= \hat{x}_{8,8}=52.334^{\circ}C \] \[ p_{9,8}= 0.0015+0.0001=0.0016^{\circ}C^{2} \]
9 \( 54.465^{\circ}C \) \[ K_{9}= \frac{0.0016}{0.0016+0.01}=0.1348 \] \[ \hat{x}_{9,9}= 52.334+0.1348 \left( 54.465-52.334 \right) =52.621^{\circ}C \] \[ p_{9,9}= \left( 1-0.1348 \right) 0.0016=0.0014^{\circ}C^{2} \] \[ \hat{x}_{10,9}= \hat{x}_{9,9}=52.621^{\circ}C \] \[ p_{10,9}= 0.0014+0.0001=0.0015^{\circ}C^{2} \]
10 \( 55.114^{\circ}C \) \[ K_{10}= \frac{0.0015}{0.0015+0.01}=0.1265 \] \[ \hat{x}_{10,10}= 52.621+0.1265 \left( 55.114 -52.621 \right) =52.936^{\circ}C \] \[ p_{10,10}= \left( 1-0.1265 \right) 0.0015=0.0013^{\circ}C^{2} \] \[ \hat{x}_{11,10}= \hat{x}_{10,10}=52.936^{\circ}C \] \[ p_{11,10}= 0.0013+0.0001=0.0014^{\circ}C^{2} \]
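Running the same constant-dynamics loop on the heated-liquid measurements makes the lag error visible. A sketch (measurements taken from the text):

```python
q, r = 0.0001, 0.01
true_final = 54.997       # true temperature at the last measurement point
measurements = [50.486, 50.963, 51.597, 52.001, 52.518,
                53.05, 53.438, 53.858, 54.465, 55.114]

x, p = 10.0, 100.0 ** 2   # initialization
p += q                    # iteration-zero covariance extrapolation

for z in measurements:
    K = p / (p + r)
    x += K * (z - x)
    p = (1 - K) * p + q   # update, then extrapolate

print(round(x, 3), round(true_final - x, 3))   # about 52.936, lagging about 2 degrees
```

With \( q \) this small, the filter barely trusts the measurements once it has converged, so the estimate trails the rising true temperature by about two degrees.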

Results analysis

The following chart compares the true value, measured values, and estimates.

True value, measured values and estimates

As you can see, the Kalman Filter has failed to provide a reliable estimation. There is a lag error in the Kalman Filter estimation. We've already encountered the lag error in Example 3, where we estimated the position of an accelerating aircraft using the \( \alpha - \beta \) filter that assumes constant aircraft velocity. We got rid of the lag error in Example 4, where we replaced the \( \alpha - \beta \) filter with the \( \alpha -\beta -\gamma \) filter that assumes acceleration.

There are two reasons for the lag error in our Kalman Filter example:

  • The dynamic model doesn't fit the case.
  • We have chosen very low process noise \( \left( q=0.0001^{\circ}C^{2} \right) \) while the true temperature fluctuations are much more significant.

Note: The lag error is constant. Therefore, the estimate curve should have the same slope as the true-value curve. The figure above presents only the first 10 measurements, which is not enough for convergence. The figure below presents the first 100 measurements with a constant lag error.

True value, measured values and estimates

There are two possible ways to fix the lag error:

  • If we know that the liquid temperature can change linearly, we can define a new model that considers a possible linear change in the liquid temperature. We did this in Example 4. This method is preferred. However, this method won't improve the Kalman Filter performance if the temperature change can't be modeled.
  • On the other hand, since our model is not well defined, we can adjust the process model reliability by increasing the process noise \( \left( q \right) \). See the next example for details.

Another problem is the low estimate uncertainty. The KF failed to provide accurate estimates, yet it is confident in its estimates. This is an example of a bad KF design.

Example summary

In this example, we measured the temperature of a heating liquid using a one-dimensional Kalman Filter with a constant dynamic model. We've observed the lag error in the Kalman Filter estimation. The wrong dynamic model and process model definitions cause the lag error.

An appropriate dynamic model or process model definition can fix the lag error.

Example 8 – Estimating the temperature of a heated liquid

This example is similar to the previous example, with only one change. Since our process is not well-defined, we increase the process variance \( \left( q \right) \) from 0.0001\(^{\circ}C^{2}\) to 0.15\(^{\circ}C^{2}\).

The numerical example

Iteration Zero

Before the first iteration, we must initialize the Kalman Filter and predict the following state (which is the first state).

Initialization

The initialization is similar to the previous example.

We don't know the true temperature of the liquid in a tank, and our guess is 10\( ^{\circ}C \).

\[ \hat{x}_{0,0}=10^{\circ}C \]

Our guess is imprecise, so we set our initialization estimate error ( \( \sigma \) ) to 100. The Estimate Variance of the initialization is the error variance \( \left( \sigma ^{2} \right) \):

\[ p_{0,0}=100^{2}=10,000^{\circ}C^{2} \]

Prediction

Now, we shall predict the next state based on the initialization values.

Since our model has constant dynamics, the predicted estimate is equal to the current estimate:

\[ \hat{x}_{1,0}=10^{\circ}C \]

The extrapolated estimate variance:

\[ p_{1,0}= p_{0,0}+q=10000+ 0.15=10000.15^{\circ}C^{2} \]

Iterations 1-10

The calculations for the successive iterations are summarized in the following table:

\( n \) \( z_{n} \) Current state estimates ( \( K_{n} \) , \( \hat{x}_{n,n} \) , \( p_{n,n} \) ) Prediction ( \( \hat{x}_{n+1,n} \) , \( p_{n+1,n} \) )
1 \( 50.486^{\circ}C \) \[ K_{1}= \frac{10000.15}{10000.15+0.01} = 0.999999 \] \[ \hat{x}_{1,1}=~ 10+0.999999 \left( 50.486-10 \right) =50.486^{\circ}C \] \[ p_{1,1}= \left( 1-0.999999 \right)10000.15=0.01^{\circ}C^{2} \] \[ \hat{x}_{2,1}= \hat{x}_{1,1}=50.486^{\circ}C \] \[ p_{2,1}= 0.01+0.15=0.16^{\circ}C^{2} \]
2 \( 50.963^{\circ}C \) \[ K_{2}= \frac{0.16}{0.16+0.01}=0.9412 \] \[ \hat{x}_{2,2}=~ 50.486+0.9412 \left( 50.963-50.486 \right) =50.934^{\circ}C \] \[ p_{2,2}= \left( 1-0.9412 \right) 0.16=0.0094^{\circ}C^{2} \] \[ \hat{x}_{3,2}= \hat{x}_{2,2}=50.934^{\circ}C \] \[ p_{3,2}= 0.0094+0.15=0.1594^{\circ}C^{2} \]
3 \( 51.597^{\circ}C \) \[ K_{3}= \frac{0.1594}{0.1594+0.01}=0.941 \] \[ \hat{x}_{3,3}=~ 50.934+0.941 \left( 51.597-50.934 \right) =51.556^{\circ}C\] \[ p_{3,3}= \left( 1-0.941 \right) 0.1594=0.0094^{\circ}C^{2} \] \[ \hat{x}_{4,3}= \hat{x}_{3,3}=51.556^{\circ}C \] \[ p_{4,3}= 0.0094+0.15=0.1594^{\circ}C^{2} \]
4 \( 52.001^{\circ}C \) \[ K_{4}= \frac{0.1594}{0.1594+0.01}=0.941 \] \[ \hat{x}_{4,4}=~ 51.556+0.941 \left( 52.001-51.556 \right) = 51.975^{\circ}C \] \[ p_{4,4}= \left( 1-0.941 \right) 0.1594=0.0094^{\circ}C^{2} \] \[ \hat{x}_{5,4}= \hat{x}_{4,4}=51.975^{\circ}C \] \[ p_{5,4}= 0.0094+0.15=0.1594^{\circ}C^{2} \]
5 \( 52.518^{\circ}C \) \[ K_{5}= \frac{0.1594}{0.1594+0.01}=0.941 \] \[ \hat{x}_{5,5}= 51.975+0.941 \left( 52.518-51.975 \right) =52.486^{\circ}C \] \[ p_{5,5}= \left( 1-0.941 \right) 0.1594=0.0094^{\circ}C^{2} \] \[ \hat{x}_{6,5}= \hat{x}_{5,5}=52.486^{\circ}C \] \[ p_{6,5}= 0.0094+0.15=0.1594^{\circ}C^{2} \]
6 \( 53.05^{\circ}C \) \[ K_{6}= \frac{0.1594}{0.1594+0.01}=0.941 \] \[ \hat{x}_{6,6}=~ 52.486+0.941 \left( 53.05-52.486 \right) =53.017^{\circ}C \] \[ p_{6,6}= \left( 1-0.941 \right) 0.1594=0.0094^{\circ}C^{2} \] \[ \hat{x}_{7,6}= \hat{x}_{6,6}=53.017^{\circ}C \] \[ p_{7,6}= 0.0094+0.15=0.1594^{\circ}C^{2} \]
7 \( 53.438^{\circ}C \) \[ K_{7}= \frac{0.1594}{0.1594+0.01}=0.941 \] \[ \hat{x}_{7,7}=~ 53.017+0.941 \left( 53.438-53.017 \right) =53.413^{\circ}C \] \[ p_{7,7}= \left( 1-0.941 \right) 0.1594=0.0094^{\circ}C^{2} \] \[ \hat{x}_{8,7}= \hat{x}_{7,7}=53.413^{\circ}C \] \[ p_{8,7}= 0.0094+0.15=0.1594^{\circ}C^{2} \]
8 \( 53.858^{\circ}C \) \[ K_{8}= \frac{0.1594}{0.1594+0.01}=0.941 \] \[ \hat{x}_{8,8}= 53.413+0.941 \left( 53.858-53.413 \right) =53.832^{\circ}C \] \[ p_{8,8}= \left( 1-0.941 \right) 0.1594=0.0094^{\circ}C^{2} \] \[ \hat{x}_{9,8}= \hat{x}_{8,8}=53.832^{\circ}C \] \[ p_{9,8}= 0.0094+0.15=0.1594^{\circ}C^{2} \]
9 \( 54.465^{\circ}C \) \[ K_{9}= \frac{0.1594}{0.1594+0.01}=0.941 \] \[ \hat{x}_{9,9}=~ 53.832+0.941 \left( 54.465-53.832 \right) =54.428^{\circ}C \] \[ p_{9,9}= \left( 1-0.941 \right) 0.1594=0.0094^{\circ}C^{2} \] \[ \hat{x}_{10,9}= \hat{x}_{9,9}=54.428^{\circ}C \] \[ p_{10,9}= 0.0094+0.15=0.1594^{\circ}C^{2} \]
10 \( 55.114^{\circ}C \) \[ K_{10}= \frac{0.1594}{0.1594+0.01}=0.941 \] \[ \hat{x}_{10,10}=~ 54.428+0.941 \left( 55.114 -54.428 \right) =55.074^{\circ}C \] \[ p_{10,10}= \left( 1-0.941 \right) 0.1594=0.0094^{\circ}C^{2} \] \[ \hat{x}_{11,10}= \hat{x}_{10,10}=55.074^{\circ}C \] \[ p_{11,10}= 0.0094+0.15=0.1594^{\circ}C^{2} \]
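Repeating the loop with the increased process noise variance reproduces the table (up to the table's intermediate rounding). A sketch:

```python
q, r = 0.15, 0.01         # increased process noise variance
measurements = [50.486, 50.963, 51.597, 52.001, 52.518,
                53.05, 53.438, 53.858, 54.465, 55.114]

x, p = 10.0, 100.0 ** 2   # initialization
p += q                    # iteration-zero covariance extrapolation

for z in measurements:
    K = p / (p + r)       # Kalman Gain
    x += K * (z - x)      # State Update
    p = (1 - K) * p + q   # update, then extrapolate

print(round(K, 3), round(x, 3))   # gain near 0.941, estimate near 55.07
```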

Results analysis

The following chart compares the true value, measured values, and estimates.

True value, measured values and estimates

As you can see, the estimates follow the measurements. There is no lag error.

We can eliminate the lag error by setting a high process uncertainty. However, since our model is not well-defined, the noisy estimates are almost equal to the measurements, and we miss the goal of the Kalman Filter.

Let us take a look at the Kalman Gain.

The Kalman Gain

Due to the high process uncertainty, the measurement weight is much higher than the weight of the estimate. Thus, the Kalman Gain is high, and it converges to 0.94.
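The 0.94 limit can also be derived in closed form. At steady state, the predicted variance \( p \) repeats itself each cycle: \( p = (1-K)p + q \) with \( K = p/(p+r) \), which reduces to the quadratic \( p^{2} = qp + qr \). A quick check of this reasoning (our own derivation, consistent with the table values):

```python
import math

q, r = 0.15, 0.01
# Positive root of p**2 - q*p - q*r = 0 (steady-state predicted variance)
p = (q + math.sqrt(q * q + 4.0 * q * r)) / 2.0
K = p / (p + r)
print(round(p, 4), round(K, 3))   # 0.1594 0.941, matching the table
```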

The good news is that we can trust the estimates of this KF. The true values (the green line) are within the 95% confidence region.

Example summary

The best Kalman Filter implementation involves a model that is very close to reality, leaving little room for process noise. However, a precise model is not always available - for example, an airplane pilot may decide to perform a sudden maneuver that changes the predicted airplane trajectory. In this case, the process noise should be increased.
