Kalman Filter in one dimension

This chapter describes the Kalman Filter in one dimension. The main goal of this chapter is to explain the Kalman Filter concept simply and intuitively without using math tools that may seem complex and confusing.

We are going to advance toward the Kalman Filter equations step by step.

  • First, we derive the Kalman Filter equations for a simple example without process noise.
  • Second, we add process noise.

One-dimensional Kalman Filter without process noise

As I've mentioned earlier, the Kalman Filter is based on five equations. We are already familiar with two of them:

  • The state update equations.
  • The dynamic model equations.

In this chapter, we derive the other three Kalman Filter equations.

Let's recall our first example (gold bar weight measurement), in which we made multiple measurements and computed the estimate by averaging.

We obtained the following result:

Measurements vs. True value vs. Estimates

On the above plot, you can see the true values, the estimated values, and measurements vs. the number of measurements.

The differences between the measurements (blue samples) and the true values (green line) are measurement errors. Since the measurement errors are random, we can describe them by variance ( \( \sigma ^{2} \) ). The variance of the measurement errors could be provided by the scale vendor or derived by a calibration procedure. The variance of the measurement errors is the measurement uncertainty.

Note: In some literature, the measurement uncertainty is also called the measurement error.

We denote the measurement uncertainty by \( r \).

The difference between the estimates (the red line) and the true values (the green line) is the estimate error. As you can see, the estimate error becomes smaller and smaller as we make additional measurements, and it converges towards zero, while the estimated value converges towards the true value. We don't know the estimate error, but we can quantify the estimate uncertainty.

We denote the estimate uncertainty by \( p \).

Let's look at the weight measurements PDF (Probability Density Function).

The following plot shows ten measurements of the gold bar weight.

  • The blue circles describe the measurements.
  • The true values are described by the red dashed line.
  • The green line describes the probability density function of the measurement.
  • The bold green area is the standard deviation ( \( \sigma \) ) of the measurement, i.e., there is a probability of 68.26% that the measurement value lies within this area.

As you can see, 8 out of 10 measurements are close enough to the true value; they lie within the \( 1 \sigma \) boundaries.

The measurement uncertainty ( \( r \) ) is the variance of the measurement ( \( \sigma ^{2} \) ).

Measurements Probability Density Function

The Kalman Gain equation in 1d

I would like to present the intuitive derivation of the Kalman Gain Equation – the third Kalman Filter equation. The mathematical derivation will be shown in the following chapters.

The Kalman Gain (denoted by \( K_{n} \)) determines how much weight is given to the measurement relative to the current state estimate. Unlike the \( \alpha \) -\( \beta \) (-\( \gamma \) ) parameters, the Kalman Gain is calculated dynamically for each filter iteration.

In one dimension, the Kalman Gain Equation is the following:

\[ K_{n}= \frac{\text{Uncertainty in Estimate}}{\text{Uncertainty in Estimate}+\text{Uncertainty in Measurement}}= \frac{p_{n,n-1}}{p_{n,n-1}+r_{n}} \]
Where:
\( p_{n,n-1} \) is the extrapolated estimate uncertainty
\( r_{n} \) is the measurement uncertainty

The Kalman Gain is a number between zero and one:

\[ 0 \leq K_{n} \leq 1 \]

Let's rewrite the state update equation:

\[ \hat{x}_{n,n}=~ \hat{x}_{n,n-1}+ K_{n} \left( z_{n}- \hat{x}_{n,n-1} \right) = \left( 1-K_{n} \right) \hat{x}_{n,n-1}+ K_{n}z_{n} \]

As you can see, the Kalman Gain \( \left( K_{n} \right) \) is the measurement weight, and the \( \left( 1-K_{n} \right) \) term is the weight of the current state estimate.

When the measurement uncertainty is large and the estimate uncertainty is low, the Kalman Gain is close to zero. Hence we give significant weight to the estimate and a small weight to the measurement.

On the other hand, when the measurement uncertainty is low and the estimate uncertainty is large, the Kalman Gain is close to one. Hence we give a low weight to the estimate and a significant weight to the measurement.

If the measurement uncertainty equals the estimate uncertainty, then the Kalman gain equals 0.5.
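
The following is a minimal sketch in Python (the function name kalman_gain is our own, not a standard API) that makes this behavior concrete:

```python
def kalman_gain(p_extrapolated: float, r: float) -> float:
    """One-dimensional Kalman Gain: K_n = p_(n,n-1) / (p_(n,n-1) + r_n)."""
    return p_extrapolated / (p_extrapolated + r)

# Low estimate uncertainty, large measurement uncertainty -> K close to 0
print(kalman_gain(1.0, 100.0))   # ~0.0099: trust the estimate

# Large estimate uncertainty, low measurement uncertainty -> K close to 1
print(kalman_gain(100.0, 1.0))   # ~0.9901: trust the measurement

# Equal uncertainties -> K = 0.5
print(kalman_gain(25.0, 25.0))   # 0.5: equal weights
```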

The Kalman gain tells how much the measurement changes the estimate.

The Kalman Filter is optimal since the Kalman Gain minimizes the estimate uncertainty. The mathematical derivation for the one-dimensional Kalman Filter is presented in a later chapter.

The Kalman Gain equation is the third Kalman filter equation.

The estimate uncertainty update in 1d

The following equation defines the estimate uncertainty update:

\[ p_{n,n}=~ \left( 1-K_{n} \right) p_{n,n-1} \]
Where:
\( K_{n} \) is the Kalman Gain
\( p_{n,n-1} \) is the extrapolated estimate uncertainty calculated during the previous filter iteration
\( p_{n,n} \) is the estimate uncertainty of the current state

This equation updates the estimate uncertainty of the current state. It is called the Covariance Update Equation. Why covariance? We will see this in the following chapters.

It is quite clear from the equation that the estimate uncertainty is constantly getting smaller with each filter iteration, since \( \left( 1-K_{n} \right) \leq 1 \). When the measurement uncertainty is large, the Kalman gain is low. Therefore, the convergence of the estimate uncertainty would be slow. However, the Kalman gain is high when the measurement uncertainty is small. Therefore, the estimate uncertainty would quickly converge towards zero.
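
A short numerical sketch (a toy loop with arbitrary numbers, not taken from the text) illustrates how the measurement uncertainty controls the convergence speed of \( p_{n,n} \):

```python
def covariance_update(K: float, p_extrapolated: float) -> float:
    """Covariance Update Equation: p_(n,n) = (1 - K_n) * p_(n,n-1)."""
    return (1.0 - K) * p_extrapolated

for r in (1.0, 100.0):   # low vs. high measurement uncertainty
    p = 225.0            # an arbitrary initial estimate uncertainty
    for _ in range(5):   # constant dynamics: p carries over between iterations
        K = p / (p + r)
        p = covariance_update(K, p)
    print(f"r = {r}: p after 5 iterations = {p:.3f}")
# r = 1 drives p toward zero much faster than r = 100.
```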

The Covariance Update equation is the fourth Kalman Filter Equation.

The estimate uncertainty extrapolation in 1d

Like state extrapolation, the estimate uncertainty extrapolation is done with the dynamic model equations.

In our second example, the one-dimensional radar case, the predicted target position is:

\[ \hat{x}_{n+1,n}= \hat{x}_{n,n}+ \Delta t\hat{\dot{x}}_{n,n} \] \[ \hat{\dot{x}}_{n+1,n}= \hat{\dot{x}}_{n,n} \]

i.e., the predicted position equals the current estimated position plus the currently estimated velocity multiplied by time. The predicted velocity equals the current velocity estimate (assuming a constant velocity model).

The estimate uncertainty extrapolation would be:

\[ p_{n+1,n}^{x}= p_{n,n}^{x} + \Delta t^{2} \cdot p_{n,n}^{v} \] \[ p_{n+1,n}^{v}= p_{n,n}^{v} \]
Where:
\( p^{x} \) is the position estimate uncertainty
\( p^{v} \) is the velocity estimate uncertainty

i.e., the predicted position estimate uncertainty equals the current position estimate uncertainty plus current velocity estimate uncertainty multiplied by time squared. The predicted velocity estimate uncertainty equals the current velocity estimate uncertainty (assuming a constant velocity model).

Note: If you are wondering why the time is squared in \( p_{n+1,n}^{x}= p_{n,n}^{x} + \Delta t^{2} \cdot p_{n,n}^{v} \) equation, take a look at the expectation of variance derivation.
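
In brief: assuming the position and velocity estimation errors are uncorrelated, the variance of their sum is the sum of their variances, and \( VAR \left( aX \right) =a^{2}VAR \left( X \right) \) for a constant \( a \), so:

\[ VAR \left( \hat{x}+ \Delta t\hat{\dot{x}} \right) =VAR \left( \hat{x} \right) + \Delta t^{2}VAR \left( \hat{\dot{x}} \right) \]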

In our first example (gold bar weight measurement), the system dynamics are constant. Thus, the estimate uncertainty extrapolation would be:

\[ p_{n+1,n}= p_{n,n} \]
Where:
\( p \) is the gold bar's weight estimate uncertainty

The estimate uncertainty extrapolation equation is called the Covariance Extrapolation Equation, which is the fifth Kalman Filter equation.

Putting it all together

This chapter combines all of these pieces into a single algorithm. Like the \( \alpha -\beta -(\gamma) \) filter, the Kalman filter utilizes the "Measure, Update, Predict" algorithm.

The following chart provides a low-level schematic description of the algorithm:

Schematic description of the Kalman Filter algorithm

The filter inputs are:

  • Initialization

    The initialization is performed only once, and it provides two parameters:

    • Initial System State ( \( \hat{x}_{1,0} \) )
    • Initial State Uncertainty ( \( p_{1,0} \) )

    The initialization parameters can be provided by another system, another process (for instance, a search process in radar), or an educated guess based on experience or theoretical knowledge. Even if the initialization parameters are not precise, the Kalman filter would be able to converge close to the true value.

  • Measurement

    The measurement is performed for every filter cycle, and it provides two parameters:

    • Measured System State ( \( z_{n} \) )
    • Measurement Uncertainty ( \( r_{n} \) )

In addition to a measured value, the Kalman filter requires a measurement uncertainty parameter. Usually, this parameter is provided by the equipment vendor or can be derived by measurement equipment calibration. The radar measurement uncertainty depends on several parameters such as SNR (Signal to Noise Ratio), beam width, bandwidth, time on target, clock stability, and more. Every radar measurement has a different SNR, beam width, and time on target. Therefore, the radar calculates each measurement's uncertainty and reports it to the tracker.

The filter outputs are:

  • System State Estimate ( \( \hat{x}_{n,n} \) )
  • Estimate Uncertainty ( \( p_{n,n} \) )

In addition to the System State Estimate, the Kalman filter also provides the Estimate Uncertainty! As I've already mentioned, the estimate uncertainty is given by:

\[ p_{n,n}=~ \left( 1-K_{n} \right) p_{n,n-1} \]

and \( p_{n,n} \) is constantly getting smaller with each filter iteration, since \( \left( 1-K_{n} \right) \leq 1 \).

So, it is up to us to decide how many measurements to take. If we are measuring a building height, and we are interested in a precision of 3 centimeters ( \( \sigma \) ), we should make measurements until the Estimation Uncertainty ( \( \sigma ^{2} \) ) is less than \( 9~centimeters^{2} \).
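
As a sketch of this stopping rule (with made-up numbers; the simulated measure() below is a hypothetical stand-in for a real instrument):

```python
import random

def measure() -> float:
    """Hypothetical instrument: true value 50, measurement sigma 5."""
    return random.gauss(50.0, 5.0)

x, p = 60.0, 225.0   # initial guess and its uncertainty (variance)
r = 25.0             # measurement variance (sigma = 5)
n = 0
while p >= 9.0:      # keep measuring until sigma^2 drops below the target
    z = measure()
    K = p / (p + r)          # Kalman Gain
    x = x + K * (z - x)      # State Update
    p = (1.0 - K) * p        # Covariance Update
    n += 1
print(f"stopped after {n} measurements: estimate {x:.2f}, variance {p:.2f}")
```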

The following list summarizes the five Kalman Filter equations, together with alternative names used in the literature:

  • State Update (also called the Filtering Equation):
    \( \hat{x}_{n,n}=~ \hat{x}_{n,n-1}+ K_{n} \left( z_{n}- \hat{x}_{n,n-1} \right) \)
  • State Extrapolation (also called the Predictor Equation, Transition Equation, Prediction Equation, Dynamic Model, or State Space Model), here for constant velocity dynamics:
    \( \hat{x}_{n+1,n}= \hat{x}_{n,n}+ \Delta t\hat{\dot{x}}_{n,n} \), \( \hat{\dot{x}}_{n+1,n}= \hat{\dot{x}}_{n,n} \)
  • Kalman Gain (also called the Weight Equation):
    \( K_{n}= \frac{p_{n,n-1}}{p_{n,n-1}+r_{n}} \)
  • Covariance Update (also called the Corrector Equation):
    \( p_{n,n}=~ \left( 1-K_{n} \right) p_{n,n-1} \)
  • Covariance Extrapolation (also called the Predictor Covariance Equation), here for constant dynamics:
    \( p_{n+1,n}= p_{n,n} \)

Note 1: The State Extrapolation Equation and the Covariance Extrapolation Equation depend on the system dynamics.
Note 2: The list above demonstrates a special form of the Kalman Filter equations tailored for a specific case. The general form of the equations is presented later in matrix notation. Right now, our goal is to understand the concept of the Kalman Filter.
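
Before moving on to the detailed block diagram, here is how the five equations might look in code. This is a minimal sketch for the constant-dynamics case (the function and variable names are our own), where the prediction step leaves both the state and its uncertainty unchanged:

```python
from typing import Iterable, Iterator, Tuple

def kalman_1d(x0: float, p0: float,
              measurements: Iterable[Tuple[float, float]]) -> Iterator[Tuple[float, float]]:
    """One-dimensional Kalman Filter, constant dynamics, no process noise.

    x0, p0       -- Initial System State and Initial State Uncertainty
    measurements -- pairs (z_n, r_n): measured value and measurement variance
    Yields (x, p): the state estimate and estimate uncertainty per cycle.
    """
    x, p = x0, p0                # Step 0: Initialization (also the first prediction)
    for z, r in measurements:    # Step 1: Measure
        K = p / (p + r)          # Kalman Gain
        x = x + K * (z - x)      # Step 2: State Update
        p = (1.0 - K) * p        #         Covariance Update
        yield x, p               # Step 3: Predict -- constant dynamics, so the
                                 # extrapolated x and p equal the updated ones
```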

The following figure provides a detailed description of the Kalman Filter's block diagram.

Detailed description of the Kalman Filter algorithm
  • Step 0: Initialization

    As mentioned above, the initialization is performed only once, and it provides two parameters:

    • Initial System State ( \( \hat{x}_{1,0} \) )
    • Initial State Uncertainty ( \( p_{1,0} \) )

    The initialization is followed by prediction.

  • Step 1: Measurement

    The measurement process provides two parameters:

    • Measured System State ( \( z_{n} \) )
    • Measurement Uncertainty ( \( r_{n} \) )
  • Step 2: State Update

    The state update process is responsible for the system's current state estimation.

    The state update process inputs are:

    • Measured Value ( \( z_{n} \) )
    • The Measurement Uncertainty ( \( r_{n} \) )
    • Previous System State Estimate ( \( \hat{x}_{n,n-1} \) )
    • Estimate Uncertainty ( \( p_{n,n-1} \) )

    Based on the inputs, the state update process calculates the Kalman Gain and provides two outputs:

    • Current System State Estimate ( \( \hat{x}_{n,n} \) )
    • Current State Estimate Uncertainty ( \( p_{n,n} \) )

    These parameters are the Kalman Filter outputs.

  • Step 3: Prediction

    The prediction process extrapolates the current system state and the uncertainty of the current system state estimate to the next system state, based on the system's dynamic model.

    At the first filter iteration the initialization outputs are treated as the Previous State Estimate and Uncertainty.

    The prediction outputs become the Previous State Estimate and Uncertainty on the following filter iterations.

Kalman Gain intuition

The Kalman Gain defines the weight of the measurement and the weight of the previous estimate when forming a new estimate.

High Kalman Gain

A low measurement uncertainty relative to the estimate uncertainty would result in a high Kalman Gain (close to 1). Therefore the new estimate would be close to the measurement. The following figure illustrates the influence of a high Kalman Gain on the estimate in an aircraft tracking application.

High Kalman Gain

Low Kalman Gain

A high measurement uncertainty relative to the estimate uncertainty would result in a low Kalman Gain (close to 0). Therefore the new estimate would be close to the previous estimate. The following figure illustrates the influence of a low Kalman Gain on the estimate in an aircraft tracking application.

Low Kalman Gain

Now we understand the Kalman Filter algorithm, and we are ready for the first numerical example.

Note: If you are curious about the math behind the Kalman Gain, take a look at the One-Dimensional Kalman Gain Derivation.

Example 5 – Estimating the height of a building

Assume that we would like to estimate the height of a building using a very imprecise altimeter.

We know that building height doesn't change over time, at least during the short measurement process.

Estimating the building height

The numerical example

  • The true building height is 50 meters.
  • The altimeter measurement error (standard deviation) is 5 meters.
  • The ten measurements are: 48.54m, 47.11m, 55.01m, 55.15m, 49.89m, 40.85m, 46.72m, 50.05m, 51.27m, 49.95m.

Iteration Zero

Initialization

One can estimate the building height simply by looking at it.

The estimated building height is:

\[ \hat{x}_{0,0}=60m \]

Now we shall initialize the estimate uncertainty. A human's estimation error (standard deviation) is about 15 meters: \( \sigma =15 \) . Consequently the variance is 225: \( \sigma ^{2}=225 \).

\[ p_{0,0}=225 \]

Prediction

Now, we shall predict the next state based on the initialization values.

Since our system's Dynamic Model is constant, i.e., the building doesn't change its height:

\[ \hat{x}_{1,0}=\hat{x}_{0,0}= 60m \]

The extrapolated estimate uncertainty (variance) also doesn't change:

\[ p_{1,0}= p_{0,0}=225 \]

First Iteration

Step 1 - Measure

The first measurement is: \( z_{1}=48.54m \) .

Since the standard deviation ( \( \sigma \) ) of the altimeter measurement error is 5, the variance ( \( \sigma ^{2} \) ) would be 25, thus, the measurement uncertainty is: \( r_{1}=25 \) .

Step 2 - Update

Kalman Gain calculation:

\[ K_{1}= \frac{p_{1,0}}{p_{1,0}+r_{1}}= \frac{225}{225+25}=0.9 \]

Estimating the current state:

\[ \hat{x}_{1,1}=~ \hat{x}_{1,0}+ K_{1} \left( z_{1}- \hat{x}_{1,0} \right) =60+0.9 \left( 48.54-60 \right) =49.69m \]

Update the current estimate uncertainty:

\[ p_{1,1}=~ \left( 1-K_{1} \right) p_{1,0}= \left( 1-0.9 \right) 225=22.5 \]

Step 3 - Predict

Since our system's Dynamic Model is constant, i.e., the building doesn't change its height:

\[ \hat{x}_{2,1}=\hat{x}_{1,1}= 49.69m \]

The extrapolated estimate uncertainty (variance) also doesn't change:

\[ p_{2,1}= p_{1,1}=22.5 \]

Second Iteration

After a unit time delay, the predicted estimate from the previous iteration becomes the previous estimate in the current iteration:

\[ \hat{x}_{2,1}=49.69m \]

The extrapolated estimate uncertainty becomes the previous estimate uncertainty:

\[ p_{2,1}= 22.5 \]

Step 1 - Measure

The second measurement is: \( z_{2}=47.11m \)

The measurement uncertainty is: \( r_{2}=25 \)

Step 2 - Update

Kalman Gain calculation:

\[ K_{2}= \frac{p_{2,1}}{p_{2,1}+r_{2}}= \frac{22.5}{22.5+25}=0.47 \]

Estimating the current state:

\[ \hat{x}_{2,2}=~ \hat{x}_{2,1}+ K_{2} \left( z_{2}- \hat{x}_{2,1} \right) =49.69+0.47 \left( 47.11-49.69 \right) =48.47m \]

Update the current estimate uncertainty:

\[ p_{2,2}=~ \left( 1-K_{2} \right) p_{2,1}= \left( 1-0.47 \right) 22.5=11.84 \]

Step 3 - Predict

Since our system's Dynamic Model is constant, i.e., the building doesn't change its height:

\[ \hat{x}_{3,2}=\hat{x}_{2,2}= 48.47m \]

The extrapolated estimate uncertainty (variance) also doesn't change:

\[ p_{3,2}= p_{2,2}=11.84 \]

Iterations 3-10

The calculations for the subsequent iterations are summarized in the following table:

\( n \) \( z_{n} \) Current state estimates ( \( K_{n} \) , \( \hat{x}_{n,n} \) , \( p_{n,n} \) ) Prediction ( \( \hat{x}_{n+1,n} \) , \( p_{n+1,n} \) )
3 \( 55.01m \) \[ K_{3}= \frac{11.84}{11.84+25}=0.32 \] \[ \hat{x}_{3,3}=~ 48.47+0.32 \left( 55.01 -48.47 \right) =50.57m \] \[ p_{3,3}= \left( 1-0.32 \right) 11.84=8.04 \] \[ \hat{x}_{4,3}= \hat{x}_{3,3}=50.57m \] \[ p_{4,3}= p_{3,3}=8.04 \]
4 \( 55.15m \) \[ K_{4}= \frac{8.04}{8.04+25}=0.24 \] \[ \hat{x}_{4,4}=~ 50.57+0.24 \left( 55.15 -50.57 \right) =51.68m \] \[ p_{4,4}= \left( 1-0.24 \right) 8.04=6.08 \] \[ \hat{x}_{5,4}= \hat{x}_{4,4}=51.68m \] \[ p_{5,4}= p_{4,4}=6.08 \]
5 \( 49.89m \) \[ K_{5}= \frac{6.08}{6.08+25}=0.2 \] \[ \hat{x}_{5,5}= 51.68+0.2 \left( 49.89 -51.68 \right) =51.33m \] \[ p_{5,5}= \left( 1-0.2 \right) 6.08=4.89 \] \[ \hat{x}_{6,5}= \hat{x}_{5,5}=51.33m \] \[ p_{6,5}= p_{5,5}=4.89 \]
6 \( 40.85m \) \[ K_{6}= \frac{4.89}{4.89+25}=0.16 \] \[ \hat{x}_{6,6}=~ 51.33+0.16 \left( 40.85 -51.33 \right) =49.62m \] \[ p_{6,6}= \left( 1-0.16 \right) 4.89=4.09 \] \[ \hat{x}_{7,6}= \hat{x}_{6,6}=49.62m \] \[ p_{7,6}= p_{6,6}=4.09 \]
7 \( 46.72m \) \[ K_{7}= \frac{4.09}{4.09+25}=0.14 \] \[ \hat{x}_{7,7}=~ 49.62+0.14 \left( 46.72 -49.62 \right) =49.21m \] \[ p_{7,7}= \left( 1-0.14 \right) 4.09=3.52 \] \[ \hat{x}_{8,7}= \hat{x}_{7,7}=49.21m \] \[ p_{8,7}= p_{7,7}=3.52 \]
8 \( 50.05m \) \[ K_{8}= \frac{3.52}{3.52+25}=0.12 \] \[ \hat{x}_{8,8}= 49.21+0.12 \left( 50.05 -49.21 \right) =49.31m \] \[ p_{8,8}= \left( 1-0.12 \right) 3.52=3.08 \] \[ \hat{x}_{9,8}= \hat{x}_{8,8}=49.31m \] \[ p_{9,8}= p_{8,8}=3.08 \]
9 \( 51.27m \) \[ K_{9}= \frac{3.08}{3.08+25}=0.11 \] \[ \hat{x}_{9,9}=~ 49.31+0.11 \left( 51.27 -49.31 \right) =49.53m \] \[ p_{9,9}= \left( 1-0.11 \right) 3.08=2.74 \] \[ \hat{x}_{10,9}= \hat{x}_{9,9}=49.53m \] \[ p_{10,9}= p_{9,9}=2.74 \]
10 \( 49.95m \) \[ K_{10}= \frac{2.74}{2.74+25}=0.1 \] \[ \hat{x}_{10,10}=~ 49.53+0.1 \left( 49.95 -49.53 \right) =49.57m \] \[ p_{10,10}= \left( 1-0.1 \right) 2.74=2.47 \] \[ \hat{x}_{11,10}= \hat{x}_{10,10}=49.57m \] \[ p_{11,10}= p_{10,10}=2.47 \]
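
The table's numbers can be reproduced (up to rounding) with a few lines of code. A minimal sketch:

```python
measurements = [48.54, 47.11, 55.01, 55.15, 49.89,
                40.85, 46.72, 50.05, 51.27, 49.95]    # meters
x, p, r = 60.0, 225.0, 25.0   # initial guess, its variance, measurement variance
for n, z in enumerate(measurements, start=1):
    K = p / (p + r)           # Kalman Gain
    x = x + K * (z - x)       # State Update
    p = (1.0 - K) * p         # Covariance Update (prediction leaves x, p unchanged)
    print(f"n={n:2d}  K={K:.2f}  x={x:.2f}  p={p:.2f}")
# final line: n=10  K=0.10  x=49.57  p=2.47
```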

The following chart compares the true value, measured values, and estimates.

True value, measured values and estimates

As you can see, the estimated value converges to about 49.5 meters after 7 measurements.

The following chart compares the measurement uncertainty and the estimate uncertainty.

Measurement uncertainty and estimate uncertainty

At the first filter iteration, the estimate uncertainty is close to the measurement uncertainty and quickly decreases. After 10 measurements, the estimate uncertainty ( \( \sigma ^{2} \) ) is 2.47, i.e., the estimate error standard deviation is: \( \sigma = \sqrt[]{2.47}=1.57m \).

So we can say that the building height estimate is: \( 49.57 \pm 1.57m \).

The following chart shows the Kalman Gain.

The Kalman Gain

As you can see, the Kalman Gain is decreasing, making the measurement weight smaller and smaller.

Example summary

We measured the building height using the one-dimensional Kalman Filter in this example. Unlike the \( \alpha -\beta -(\gamma) \) filter, the Kalman Gain is dynamic and depends on the precision of the measurement device.

The initial value used by the Kalman Filter is not precise. Therefore, the measurement weight in the State Update Equation is high, and the estimate uncertainty is high.

With each iteration, the measurement weight is lower; therefore, the estimate uncertainty is lower.

The Kalman Filter output includes the estimate and the estimate uncertainty.

The complete model of the one-dimensional Kalman Filter

To complete the one-dimensional Kalman Filter model, we need to add the process noise variable to the Covariance Extrapolation Equation.

The Process Noise

In the real world, there are uncertainties in the system dynamic model. For example, when we want to estimate the resistance value of a resistor, we assume a constant dynamic model, i.e., the resistance doesn't change between the measurements. However, the resistance can change slightly due to fluctuations in the environment temperature. When tracking ballistic missiles with radar, the uncertainty of the dynamic model includes random changes in the target acceleration. The uncertainties are much more significant for an aircraft due to possible aircraft maneuvers.

On the other hand, when we estimate the location of a static object using a GPS receiver, the uncertainty of the dynamic model is zero since the static object doesn't move. The uncertainty of the dynamic model is called the Process Noise. In the literature, it is also called plant noise, driving noise, dynamics noise, model noise, and system noise. The process noise produces estimation errors.

In the previous example, we estimated a building's height. Since the building's height doesn't change, we didn't consider the process noise.

The Process Noise Variance is denoted by the letter \( q \).

The Covariance Extrapolation Equation shall include the Process Noise Variance.

The Covariance Extrapolation Equation for constant dynamics is:

\[ p_{n+1,n}= p_{n,n}+ q_{n} \]

These are the updated Kalman Filter equations in one dimension:

  • State Update (also called the Filtering Equation):
    \( \hat{x}_{n,n}=~ \hat{x}_{n,n-1}+ K_{n} \left( z_{n}- \hat{x}_{n,n-1} \right) \)
  • State Extrapolation (also called the Predictor Equation, Transition Equation, Prediction Equation, Dynamic Model, or State Space Model), here for constant velocity dynamics:
    \( \hat{x}_{n+1,n}= \hat{x}_{n,n}+ \Delta t\hat{\dot{x}}_{n,n} \), \( \hat{\dot{x}}_{n+1,n}= \hat{\dot{x}}_{n,n} \)
  • Kalman Gain (also called the Weight Equation):
    \( K_{n}= \frac{p_{n,n-1}}{p_{n,n-1}+r_{n}} \)
  • Covariance Update (also called the Corrector Equation):
    \( p_{n,n}=~ \left( 1-K_{n} \right) p_{n,n-1} \)
  • Covariance Extrapolation (also called the Predictor Covariance Equation), here for constant dynamics:
    \( p_{n+1,n}= p_{n,n}+ q_{n} \)

Note 1: The State Extrapolation Equation and the Covariance Extrapolation Equation depend on the system dynamics.
Note 2: The list above demonstrates the special form of the Kalman Filter equations tailored for the specific case. The general form of the equations is presented later in matrix notation. For now, our goal is to understand the concept of the Kalman Filter.
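
In code, the only change relative to the no-process-noise filter is in the Covariance Extrapolation step. A minimal sketch of the prediction step for constant dynamics:

```python
def predict_constant_dynamics(x: float, p: float, q: float) -> tuple[float, float]:
    """Covariance Extrapolation with process noise (constant dynamics):
    the state carries over unchanged, while the uncertainty grows by q."""
    return x, p + q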

Example 6 – Estimating the temperature of the liquid in a tank

We would like to estimate the temperature of the liquid in a tank.

Estimating the liquid temperature

We assume that at a steady state, the liquid temperature is constant. However, some fluctuations in the true liquid temperature are possible. We can describe the system dynamics by the following equation:

\[ x_{n}=T+ w_{n} \]

where:

\( T \) is the constant temperature

\( w_{n} \) is a random process noise with variance \( q \)

The numerical example

  • Let us assume a true temperature of 50 degrees Celsius.
  • We assume that the model is accurate. Thus we set the process noise variance ( \( q \) ) to 0.0001.
  • The measurement error (standard deviation) is 0.1 degrees Celsius.
  • The measurements are taken every 5 seconds.
  • The true liquid temperature values at the measurement points are: 49.979\( ^{o}C \), 50.025\( ^{o}C \), 50\( ^{o}C \), 50.003\( ^{o}C \), 49.994\( ^{o}C \), 50.002\( ^{o}C \), 49.999\( ^{o}C \), 50.006\( ^{o}C \), 49.998\( ^{o}C \), and 49.991\( ^{o}C \).
  • The measurements are: 49.95\( ^{o}C \), 49.967\( ^{o}C \), 50.1\( ^{o}C \), 50.106\( ^{o}C \), 49.992\( ^{o}C \), 49.819\( ^{o}C \), 49.933\( ^{o}C \), 50.007\( ^{o}C \), 50.023\( ^{o}C \), and 49.99\( ^{o}C \).

The following chart compares the true liquid temperature and the measurements.

True temperature vs. measurements

Iteration Zero

Before the first iteration, we must initialize the Kalman Filter and predict the following state (which is the first state).

Initialization

We don't know the true temperature of the liquid in a tank, and our guess is 10\( ^{o}C \).

\[ \hat{x}_{0,0}=10^{o}C \]

Our guess is imprecise, so we set our initialization estimate error \( \sigma \) to 100. The Estimate Uncertainty of the initialization is the error variance \( \left( \sigma ^{2} \right) \):

\[ p_{0,0}=100^{2}=10,000 \]

This variance is very high. We get faster Kalman Filter convergence if we initialize with a more meaningful value.

Prediction

Now, we shall predict the next state based on the initialization values.

Since our model has constant dynamics, the predicted estimate is equal to the current estimate:

\[ \hat{x}_{1,0}=10^{o}C \]

The extrapolated estimate uncertainty (variance):

\[ p_{1,0}= p_{0,0}+q=10000+ 0.0001=10000.0001 \]

First Iteration

Step 1 - Measure

The measurement value:

\[ z_{1}=~ 49.95^{o}C \]

Since the measurement error is 0.1 ( \( \sigma \) ), the variance ( \( \sigma ^{2} \) ) would be 0.01; thus, the measurement uncertainty is:

\[ r_{1}= 0.01 \]

Step 2 - Update

Kalman Gain calculation:

\[ K_{1}= \frac{p_{1,0}}{p_{1,0}+r_{1}}= \frac{10000.0001}{10000.0001+0.01} = 0.999999 \]

The Kalman Gain is almost 1, i.e., our estimate error is much bigger than the measurement error. Thus the weight of the estimate is negligible, while the measurement weight is almost 1.

Estimating the current state:

\[ \hat{x}_{1,1}=~ \hat{x}_{1,0}+ K_{1} \left( z_{1}- \hat{x}_{1,0} \right) =10+0.999999 \left( 49.95-10 \right) =49.95^{o}C \]

Update the current estimate uncertainty:

\[ p_{1,1}=~ \left( 1-K_{1} \right) p_{1,0}= \left( 1-0.999999 \right) 10000.0001=0.01 \]

Step 3 - Predict

Since our system's Dynamic Model is constant, i.e., the liquid temperature doesn't change:

\[ \hat{x}_{2,1}=\hat{x}_{1,1}= 49.95^{o}C \]

The extrapolated estimate uncertainty (variance) is:

\[ p_{2,1}= p_{1,1}+q=0.01+ 0.0001=0.0101 \]

Second Iteration

Step 1 - Measure

The measurement value:

\[ z_{2}=~ 49.967^{o}C \]

Since the measurement error is 0.1 ( \( \sigma \) ), the variance ( \( \sigma^{2} \) ) would be 0.01; thus, the measurement uncertainty is:

\[ r_{2}= 0.01 \]

Step 2 - Update

Kalman Gain calculation:

\[ K_{2}= \frac{p_{2,1}}{p_{2,1}+r_{2}}= \frac{0.0101}{0.0101+0.01} = 0.5 \]

The Kalman Gain is 0.5, i.e., the weight of the estimate and the measurement weight are equal.

Estimating the current state:

\[ \hat{x}_{2,2}=~ \hat{x}_{2,1}+ K_{2} \left( z_{2}- \hat{x}_{2,1} \right) =49.95+0.5 \left( 49.967-49.95 \right) =49.959^{o}C \]

Update the current estimate uncertainty:

\[ p_{2,2}=~ \left( 1-K_{2} \right) p_{2,1}= \left( 1-0.5 \right) 0.0101=0.005 \]

Step 3 - Predict

Since our system's Dynamic Model is constant, i.e., the liquid temperature doesn't change:

\[ \hat{x}_{3,2}=\hat{x}_{2,2}= 49.959^{o}C \]

The extrapolated estimate uncertainty (variance) is:

\[ p_{3,2}= p_{2,2}+q=0.005+ 0.0001=0.0051 \]

Iterations 3-10

The calculations for the successive iterations are summarized in the following table:

\( n \) \( z_{n} \) Current state estimates ( \( K_{n} \) , \( \hat{x}_{n,n} \) , \( p_{n,n} \) ) Prediction ( \( \hat{x}_{n+1,n} \) , \( p_{n+1,n} \) )
3 \( 50.1^{o}C \) \[ K_{3}= \frac{0.0051}{0.0051+0.01}=0.3388 \] \[ \hat{x}_{3,3}=~ 49.959+0.3388 \left( 50.1-49.959 \right) =50.007^{o}C \] \[ p_{3,3}= \left( 1-0.3388 \right)0.0051 =0.0034 \] \[ \hat{x}_{4,3}= \hat{x}_{3,3}=50.007^{o}C \] \[ p_{4,3}= 0.0034+0.0001=0.0035 \]
4 \( 50.106^{o}C \) \[ K_{4}= \frac{0.0035}{0.0035+0.01}=0.2586 \] \[ \hat{x}_{4,4}=~ 50.007+0.2586 \left( 50.106-50.007 \right) =50.032^{o}C \] \[ p_{4,4}= \left( 1-0.2586 \right) 0.0035=0.0026 \] \[ \hat{x}_{5,4}= \hat{x}_{4,4}=50.032^{o}C \] \[ p_{5,4}= 0.0026+0.0001=0.0027 \]
5 \( 49.992^{o}C \) \[ K_{5}= \frac{0.0027}{0.0027+0.01}=0.2117 \] \[ \hat{x}_{5,5}= 50.032+0.2117 \left( 49.992-50.032 \right) =50.023^{o}C \] \[ p_{5,5}= \left( 1-0.2117 \right) 0.0027=0.0021 \] \[ \hat{x}_{6,5}= \hat{x}_{5,5}=50.023^{o}C \] \[ p_{6,5}= 0.0021+0.0001=0.0022 \]
6 \( 49.819^{o}C \) \[ K_{6}= \frac{0.0022}{0.0022+0.01}=0.1815 \] \[ \hat{x}_{6,6}=~ 50.023+0.1815 \left( 49.819-50.023 \right) =49.987^{o}C \] \[ p_{6,6}= \left( 1-0.1815 \right) 0.0022=0.0018 \] \[ \hat{x}_{7,6}= \hat{x}_{6,6}=49.987^{o}C \] \[ p_{7,6}= 0.0018+0.0001=0.0019 \]
7 \( 49.933^{o}C \) \[ K_{7}= \frac{0.0019}{0.0019+0.01}=0.1607 \] \[ \hat{x}_{7,7}=~ 49.987+0.1607 \left( 49.933-49.987 \right) =49.978^{o}C \] \[ p_{7,7}= \left( 1-0.1607 \right) 0.0019=0.0016 \] \[ \hat{x}_{8,7}= \hat{x}_{7,7}=49.978^{o}C \] \[ p_{8,7}= 0.0016+0.0001=0.0017 \]
8 \( 50.007^{o}C \) \[ K_{8}= \frac{0.0017}{0.0017+0.01}=0.1458 \] \[ \hat{x}_{8,8}= 49.978+0.1458 \left( 50.007-49.978 \right) =49.983^{o}C \] \[ p_{8,8}= \left( 1-0.1458 \right) 0.0017=0.0015 \] \[ \hat{x}_{9,8}= \hat{x}_{8,8}=49.983^{o}C \] \[ p_{9,8}= 0.0015+0.0001=0.0016 \]
9 \( 50.023^{o}C \) \[ K_{9}= \frac{0.0016}{0.0016+0.01}=0.1348 \] \[ \hat{x}_{9,9}=~ 49.983+0.1348 \left( 50.023-49.983 \right) =49.988^{o}C \] \[ p_{9,9}= \left( 1-0.1348 \right) 0.0016=0.0014 \] \[ \hat{x}_{10,9}= \hat{x}_{9,9}=49.988^{o}C \] \[ p_{10,9}= 0.0014+0.0001=0.0015 \]
10 \( 49.99^{o}C \) \[ K_{10}= \frac{0.0015}{0.0015+0.01}=0.1265 \] \[ \hat{x}_{10,10}=~ 49.988+0.1265 \left( 49.99 -49.988 \right) =49.988^{o}C \] \[ p_{10,10}= \left( 1-0.1265 \right) 0.0015=0.0013 \] \[ \hat{x}_{11,10}= \hat{x}_{10,10}=49.988^{o}C \] \[ p_{11,10}= 0.0013+0.0001=0.0014 \]

The following chart compares the true value, measured values, and estimates.

True value, measured values and estimates

As you can see, the estimated value converges toward the true value.

The following chart shows the estimate uncertainty.

Estimate uncertainty

The estimate uncertainty quickly goes down. After 10 measurements, the estimate uncertainty ( \( \sigma ^{2} \) ) is 0.0013, i.e., the estimate error standard deviation is: \( \sigma = \sqrt[]{0.0013}=0.036^{o}C \).

So we can say that the liquid temperature estimate is: \( 49.988 \pm 0.036_{ }^{o}C \).

The following chart describes the Kalman Gain.

The Kalman Gain

As you can see, the Kalman Gain is going down, making the measurement weight smaller and smaller.

Example summary

We measured a liquid temperature using the one-dimensional Kalman Filter in this example. Although the system dynamics include a random process noise, the Kalman Filter can provide a good estimation.

Example 7 – Estimating the temperature of a heating liquid

Like in the previous example, we estimate the temperature of a liquid in a tank. In this case, the dynamic model of the system is not constant - the liquid is heating at a rate of 0.1\( ^{o}C \) every second.

The Kalman Filter parameters are similar to the previous example:

  • We assume that the model is accurate. Thus we set the process noise variance ( \( q \) ) to 0.0001.
  • The measurement error (standard deviation) is 0.1\( ^{o}C \).
  • The measurements are taken every 5 seconds.
  • The dynamic model of the system is constant.

Note: although the true dynamic model of the system is not constant (since the liquid is heating), we treat the system as a system with a constant dynamic model (the temperature doesn't change).

  • The true liquid temperature values at the measurement points are: 50.479\( ^{o}C \), 51.025\( ^{o}C \), 51.5\( ^{o}C \), 52.003\( ^{o}C \), 52.494\( ^{o}C \), 53.002\( ^{o}C \), 53.499\( ^{o}C \), 54.006\( ^{o}C \), 54.498\( ^{o}C \), and 54.991\( ^{o}C \).
  • The measurements are: 50.45\( ^{o}C \), 50.967\( ^{o}C \), 51.6\( ^{o}C \), 52.106\( ^{o}C \), 52.492\( ^{o}C \), 52.819\( ^{o}C \), 53.433\( ^{o}C \), 54.007\( ^{o}C \), 54.523\( ^{o}C \), and 54.99\( ^{o}C \).

The following chart compares the true liquid temperature and the measurements.

True temperature vs. measurements

Iteration Zero

Iteration zero is similar to the previous example.

Before the first iteration, we must initialize the Kalman Filter and predict the following state (which is the first state).

Initialization

We don't know the true temperature of the liquid in a tank, and our guess is 10\( ^{o}C \).

\[ \hat{x}_{0,0}=10^{o}C \]

Our guess is very imprecise, so we set our initialization estimate error ( \( \sigma \) ) to 100. The Estimate Uncertainty of the initialization is the error variance \( \left( \sigma ^{2} \right) \):

\[ p_{0,0}=100^{2}=10,000 \]

This variance is very high. We get faster Kalman Filter convergence if we initialize with a more meaningful value.

Prediction

Now, we shall predict the next state based on the initialization values.

Since our model has constant dynamics, the predicted estimate is equal to the current estimate:

\[ \hat{x}_{1,0}=10^{o}C \]

The extrapolated estimate uncertainty (variance):

\[ p_{1,0}= p_{0,0}+q=10000+ 0.0001=10000.0001 \]

Iterations 1-10

The calculations for the successive iterations are summarized in the following table:

\( n \) \( z_{n} \) Current state estimates ( \( K_{n} \) , \( \hat{x}_{n,n} \) , \( p_{n,n} \) ) Prediction ( \( \hat{x}_{n+1,n} \) , \( p_{n+1,n} \) )
1 \( 50.45^{o}C \) \[ K_{1}= \frac{10000.0001}{10000.0001+0.01} = 0.999999 \] \[ \hat{x}_{1,1}=~ 10+0.999999 \left( 50.45-10 \right) =50.45^{o}C \] \[ p_{1,1}= \left( 1-0.999999 \right) 10000.0001=0.01 \] \[ \hat{x}_{2,1}= \hat{x}_{1,1}=50.45^{o}C \] \[ p_{2,1}= 0.01+0.0001=0.0101 \]
2 \( 50.967^{o}C \) \[ K_{2}= \frac{0.0101}{0.0101+0.01}=0.5025 \] \[ \hat{x}_{2,2}=~ 50.45+0.5025 \left( 50.967-50.45 \right) =50.71^{o}C\] \[ p_{2,2}= \left( 1-0.5025 \right) 0.0101=0.005 \] \[ \hat{x}_{3,2}= \hat{x}_{2,2}=50.71^{o}C \] \[ p_{3,2}= 0.005+0.0001=0.0051 \]
3 \( 51.6^{o}C \) \[ K_{3}= \frac{0.0051}{0.0051+0.01}=0.3388 \] \[ \hat{x}_{3,3}=~ 50.71+0.3388 \left( 51.6-50.71 \right) =51.011^{o}C\] \[ p_{3,3}= \left( 1-0.3388 \right) 0.0051=0.0034 \] \[ \hat{x}_{4,3}= \hat{x}_{3,3}=51.011^{o}C \] \[ p_{4,3}= 0.0034+0.0001=0.0035 \]
4 \( 52.106^{o}C \) \[ K_{4}= \frac{0.0035}{0.0035+0.01}=0.2586 \] \[ \hat{x}_{4,4}=~ 51.011+0.2586 \left( 52.106-51.011 \right) =51.295^{o}C \] \[ p_{4,4}= \left( 1-0.2586 \right) 0.0035=0.0026 \] \[ \hat{x}_{5,4}= \hat{x}_{4,4}=51.295^{o}C \] \[ p_{5,4}= 0.0026+0.0001=0.0027 \]
5 \( 52.492^{o}C \) \[ K_{5}= \frac{0.0027}{0.0027+0.01}=0.2117 \] \[ \hat{x}_{5,5}= 51.295+0.2117 \left( 52.492-51.295 \right) =51.548^{o}C \] \[ p_{5,5}= \left( 1-0.2117 \right) 0.0027=0.0021 \] \[ \hat{x}_{6,5}= \hat{x}_{5,5}=51.548^{o}C \] \[ p_{6,5}= 0.0021+0.0001=0.0022 \]
6 \( 52.819^{o}C \) \[ K_{6}= \frac{0.0022}{0.0022+0.01}=0.1815 \] \[ \hat{x}_{6,6}=~ 51.548+0.1815 \left( 52.819-51.548 \right) =51.779^{o}C \] \[ p_{6,6}= \left( 1-0.1815 \right) 0.0022=0.0018 \] \[ \hat{x}_{7,6}= \hat{x}_{6,6}=51.779^{o}C \] \[ p_{7,6}= 0.0018+0.0001=0.0019 \]
7 \( 53.433^{o}C \) \[ K_{7}= \frac{0.0019}{0.0019+0.01}=0.1607 \] \[ \hat{x}_{7,7}=~ 51.779+0.1607 \left( 53.433-51.779 \right) =52.045^{o}C \] \[ p_{7,7}= \left( 1-0.1607 \right) 0.0019=0.0016 \] \[ \hat{x}_{8,7}= \hat{x}_{7,7}=52.045^{o}C \] \[ p_{8,7}= 0.0016+0.0001=0.0017 \]
8 \( 54.007^{o}C \) \[ K_{8}= \frac{0.0017}{0.0017+0.01}=0.1458 \] \[ \hat{x}_{8,8}= 52.045+0.1458 \left( 54.007-52.045 \right) =52.331^{o}C \] \[ p_{8,8}= \left( 1-0.1458 \right) 0.0017=0.0015 \] \[ \hat{x}_{9,8}= \hat{x}_{8,8}=52.331^{o}C \] \[ p_{9,8}= 0.0015+0.0001=0.0016 \]
9 \( 54.523^{o}C \) \[ K_{9}= \frac{0.0016}{0.0016+0.01}=0.1348 \] \[ \hat{x}_{9,9}=~ 52.331+0.1348 \left( 54.523-52.331 \right) =52.626^{o}C \] \[ p_{9,9}= \left( 1-0.1348 \right) 0.0016=0.0014 \] \[ \hat{x}_{10,9}= \hat{x}_{9,9}=52.626^{o}C \] \[ p_{10,9}= 0.0014+0.0001=0.0015 \]
10 \( 54.99^{o}C \) \[ K_{10}= \frac{0.0015}{0.0015+0.01}=0.1265 \] \[ \hat{x}_{10,10}=~ 52.626+0.1265 \left( 54.99 -52.626 \right) =52.925^{o}C \] \[ p_{10,10}= \left( 1-0.1265 \right) 0.0015=0.0013 \] \[ \hat{x}_{11,10}= \hat{x}_{10,10}=52.925^{o}C \] \[ p_{11,10}= 0.0013+0.0001=0.0014 \]

The following chart compares the true value, measured values, and estimates.

True value, measured values and estimates

As you can see, the Kalman Filter has failed to provide a reliable estimation. There is a lag error in the Kalman Filter estimation. We've already encountered the lag error in Example 3, where we estimated the position of an accelerating aircraft using the \( \alpha - \beta \) filter that assumes constant aircraft velocity. We got rid of the lag error in Example 4, where we replaced the \( \alpha - \beta \) filter with the \( \alpha -\beta -\gamma \) filter that assumes acceleration.

There are two reasons for the lag error in our Kalman Filter example:

  • The dynamic model doesn't fit the case.
  • We have chosen very low process noise \( \left( q=0.0001 \right) \) while the true temperature fluctuations are much more significant.
Note: The lag error is constant; therefore, the estimate curve should have the same slope as the true value curve. The figure above presents only the first 10 measurements, which is not enough for convergence. The figure below presents the first 100 measurements with a constant lag error.
True value, measured values and estimates

There are two possible ways to fix the lag error:

  • If we know that the liquid temperature can change linearly, we can define a new model that considers a possible linear change in the liquid temperature. We did this in Example 4. This method is preferred. However, this method won't improve the Kalman Filter performance if the temperature change can't be modeled.
  • On the other hand, since our model is not well defined, we can adjust the process model reliability by increasing the process noise \( \left( q \right) \). See the next example for details, and the short sketch below for a quick comparison of the two settings.
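
The following sketch (our own comparison, reusing the measurements listed above) runs the same constant-dynamics filter with both process noise settings:

```python
measurements = [50.45, 50.967, 51.6, 52.106, 52.492,
                52.819, 53.433, 54.007, 54.523, 54.99]   # degrees Celsius
r = 0.01                                                 # measurement variance

for q in (0.0001, 0.15):
    x, p = 10.0, 10000.0     # initialization
    for z in measurements:
        p = p + q            # Covariance Extrapolation (constant dynamics)
        K = p / (p + r)      # Kalman Gain
        x = x + K * (z - x)  # State Update
        p = (1.0 - K) * p    # Covariance Update
    print(f"q={q}: final estimate {x:.2f} (true temperature is 54.991)")
# q=0.0001 lags about two degrees behind; q=0.15 tracks the heating closely.
```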

Example summary

In this example, we measured the temperature of a heating liquid using a one-dimensional Kalman Filter with a constant dynamic model. We've observed a lag error in the Kalman Filter estimation. The lag error is caused by an unsuitable dynamic model and an underestimated process noise.

An appropriate dynamic model or process model definition can fix the lag error.

Example 8 – Estimating the temperature of a heating liquid

This example is similar to the previous example, with only one change. Since our process is not well-defined, we increase the process uncertainty \( \left( q \right) \) from 0.0001 to 0.15.

Iteration Zero

Before the first iteration, we must initialize the Kalman Filter and predict the following state (which is the first state).

Initialization

The initialization is similar to the previous example.

We don't know the true temperature of the liquid in a tank, and our guess is 10\( ^{o}C \).

\[ \hat{x}_{0,0}=10^{o}C \]

Our guess is very imprecise, so we set our initialization estimate error ( \( \sigma \) ) to 100. The Estimate Uncertainty of the initialization is the error variance \( \left( \sigma ^{2} \right) \):

\[ p_{0,0}=100^{2}=10,000 \]

This variance is very high. We get faster Kalman Filter convergence if we initialize with a more meaningful value.

Prediction

Now, we shall predict the next state based on the initialization values.

Since our model has constant dynamics, the predicted estimate is equal to the current estimate:

\[ \hat{x}_{1,0}=10^{o}C \]

The extrapolated estimate uncertainty (variance):

\[ p_{1,0}= p_{0,0}+q=10000+ 0.15=10000.15 \]

Iterations 1-10

The calculations for the successive iterations are summarized in the following table:

\( n \) \( z_{n} \) Current state estimates ( \( K_{n} \) , \( \hat{x}_{n,n} \) , \( p_{n,n} \) ) Prediction ( \( \hat{x}_{n+1,n} \) , \( p_{n+1,n} \) )
1 \( 50.45^{o}C \) \[ K_{1}= \frac{10000.15}{10000.15+0.01} = 0.999999 \] \[ \hat{x}_{1,1}=~ 10+0.999999 \left( 50.45-10 \right) =50.45^{o}C \] \[ p_{1,1}= \left( 1-0.999999 \right)10000.15=0.01 \] \[ \hat{x}_{2,1}= \hat{x}_{1,1}=50.45^{o}C \] \[ p_{2,1}= 0.01+0.15=0.16 \]
2 \( 50.967^{o}C \) \[ K_{2}= \frac{0.16}{0.16+0.01}=0.9412 \] \[ \hat{x}_{2,2}=~ 50.45+0.9412 \left( 50.967-50.45 \right) =50.94^{o}C\] \[ p_{2,2}= \left( 1-0.9412 \right) 0.16=0.0094 \] \[ \hat{x}_{3,2}= \hat{x}_{2,2}=50.94^{o}C \] \[ p_{3,2}= 0.0094+0.15=0.1594 \]
3 \( 51.6^{o}C \) \[ K_{3}= \frac{0.1594}{0.1594+0.01}=0.941 \] \[ \hat{x}_{3,3}=~ 50.94+0.941 \left( 51.6-50.94 \right) =51.56^{o}C\] \[ p_{3,3}= \left( 1-0.941 \right) 0.1594=0.0094 \] \[ \hat{x}_{4,3}= \hat{x}_{3,3}=51.56^{o}C \] \[ p_{4,3}= 0.0094+0.15=0.1594 \]
4 \( 52.106^{o}C \) \[ K_{4}= \frac{0.1594}{0.1594+0.01}=0.941 \] \[ \hat{x}_{4,4}=~ 51.56+0.941 \left( 52.106-51.56 \right) =52.07^{o}C \] \[ p_{4,4}= \left( 1-0.941 \right) 0.1594=0.0094 \] \[ \hat{x}_{5,4}= \hat{x}_{4,4}=52.07^{o}C \] \[ p_{5,4}= 0.0094+0.15=0.1594 \]
5 \( 52.492^{o}C \) \[ K_{5}= \frac{0.1594}{0.1594+0.01}=0.941 \] \[ \hat{x}_{5,5}= 52.07+0.941 \left( 52.492-52.07 \right) =52.47^{o}C \] \[ p_{5,5}= \left( 1-0.941 \right) 0.1594=0.0094 \] \[ \hat{x}_{6,5}= \hat{x}_{5,5}=52.47^{o}C \] \[ p_{6,5}= 0.0094+0.15=0.1594 \]
6 \( 52.819^{o}C \) \[ K_{6}= \frac{0.1594}{0.1594+0.01}=0.941 \] \[ \hat{x}_{6,6}=~ 52.47+0.941 \left( 52.819-52.47 \right) =52.8^{o}C \] \[ p_{6,6}= \left( 1-0.941 \right) 0.1594=0.0094 \] \[ \hat{x}_{7,6}= \hat{x}_{6,6}=52.8^{o}C \] \[ p_{7,6}= 0.0094+0.15=0.1594 \]
7 \( 53.433^{o}C \) \[ K_{7}= \frac{0.1594}{0.1594+0.01}=0.941 \] \[ \hat{x}_{7,7}=~ 52.8+0.941 \left( 53.433-52.8 \right) =53.4^{o}C \] \[ p_{7,7}= \left( 1-0.941 \right) 0.1594=0.0094 \] \[ \hat{x}_{8,7}= \hat{x}_{7,7}=53.4^{o}C \] \[ p_{8,7}= 0.0094+0.15=0.1594 \]
8 \( 54.007^{o}C \) \[ K_{8}= \frac{0.1594}{0.1594+0.01}=0.941 \] \[ \hat{x}_{8,8}= 53.4+0.941 \left( 54.007-53.4 \right) =53.97^{o}C \] \[ p_{8,8}= \left( 1-0.941 \right) 0.1594=0.0094 \] \[ \hat{x}_{9,8}= \hat{x}_{8,8}=53.97^{o}C \] \[ p_{9,8}= 0.0094+0.15=0.1594 \]
9 \( 54.523^{o}C \) \[ K_{9}= \frac{0.1594}{0.1594+0.01}=0.941 \] \[ \hat{x}_{9,9}=~ 53.97+0.941 \left( 54.523-53.97 \right) =54.49^{o}C \] \[ p_{9,9}= \left( 1-0.941 \right) 0.1594=0.0094 \] \[ \hat{x}_{10,9}= \hat{x}_{9,9}=54.49^{o}C \] \[ p_{10,9}= 0.0094+0.15=0.1594 \]
10 \( 54.99^{o}C \) \[ K_{10}= \frac{0.1594}{0.1594+0.01}=0.941 \] \[ \hat{x}_{10,10}=~ 54.49+0.941 \left( 54.99 -54.49 \right) =54.96^{o}C \] \[ p_{10,10}= \left( 1-0.941 \right) 0.1594=0.0094 \] \[ \hat{x}_{11,10}= \hat{x}_{10,10}=54.96^{o}C \] \[ p_{11,10}= 0.0094+0.15=0.1594 \]

The following chart compares the true value, measured values, and estimates.

True value, measured values and estimates

As you can see, the estimates follow the measurements. There is no lag error.

The following chart shows the Kalman Gain.

The Kalman Gain

Due to the high process uncertainty, the measurement weight is much higher than the weight of the estimate. Thus, the Kalman Gain is high, and it converges to 0.94.
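
As a back-of-the-envelope check (our own derivation, not one of the tutorial's five equations): once the filter converges, the extrapolated uncertainty \( p^{-} \) repeats itself every cycle, so

\[ p^{-}= \left( 1-K \right) p^{-}+q \quad \Rightarrow \quad Kp^{-}=q \]

Substituting \( p^{-}=q/K \) into \( K=p^{-}/ \left( p^{-}+r \right) \) gives \( rK^{2}+qK-q=0 \), and with \( q=0.15 \) and \( r=0.01 \):

\[ K= \frac{-0.15+ \sqrt{0.15^{2}+4 \cdot 0.01 \cdot 0.15}}{2 \cdot 0.01} \approx 0.941 \]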

Example summary

We can eliminate the lag error by setting a high process uncertainty. However, since our model is not well-defined, we get noisy estimates that are almost equal to the measurements, and we miss the goal of the Kalman Filter.

The best Kalman Filter implementation would involve a model that is very close to reality, leaving little room for process noise. However, a precise model is not always available - for example, an airplane pilot may decide to perform a sudden maneuver that changes the predicted airplane trajectory. In this case, the process noise would be increased.
