Kalman Filter in one dimension

This chapter describes the Kalman Filter in one dimension. The main goal of this chapter is to explain the Kalman Filter concept in a simple and intuitive way without using math tools that may seem complex and confusing.

We are going to advance towards the Kalman Filter equations step by step.

  • First, we are going to derive the Kalman Filter equations for a simple example, without the process noise.
  • Second, we will add the process noise.

One-dimensional Kalman Filter without the process noise

As I've mentioned earlier, the Kalman Filter is based on five equations. We are already familiar with two of them:

  • The state update equations.
  • The dynamic model equations.

In this chapter, we are going to derive another three Kalman Filter Equations.

Let's recall our first example (gold bar weight measurement), in which we made multiple measurements and computed the estimate by averaging.

We’ve received the following result:

Measurements vs. True value vs. Estimates

On the above plot, you can see the true value, the estimated value, and the measurements versus the number of measurements.

The differences between the measurements (blue samples) and the true value (green line) are measurement errors. Since the measurement errors are random, we can describe them by their variance ( \( \sigma ^{2} \) ). The variance of the measurement errors could be provided by the scale vendor or derived by a calibration procedure. The variance of the measurement errors is actually the measurement uncertainty.

Note: In some literature, the measurement uncertainty is also called the measurement error.

We will denote the measurement uncertainty by \( r \) .

The difference between the estimate (the red line) and the true value (green line) is the estimate error. As you can see, the estimate error becomes smaller and smaller as we make more measurements: it converges towards zero, while the estimated value converges towards the true value. We don’t know the actual estimate error, but we can estimate the uncertainty in the estimate.

We will denote the estimate uncertainty by \( p \) .

Let's take a look at the weight measurements PDF (Probability Density Function).

On the following plot we can see ten measurements of the gold bar weight.

  • The measurements are described by the blue line.
  • The true value is described by the red dashed line.
  • The green line describes the probability density function of the measurement.
  • The bold green area is the standard deviation ( \( \sigma \) ) of the measurement, i.e. there is a probability of 68.26% that the true value lies within this area.

As you can see, 8 out of 10 measurements are close enough to the true value to lie within the \( 1 \sigma \) boundaries.

The measurement uncertainty ( \( r \) ) is the variance of the measurement ( \( \sigma ^{2} \) ).
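To make the relation between \( \sigma \) and \( r \) concrete, here is a small sketch (using hypothetical calibration numbers, not the chapter's data) that estimates the measurement variance from repeated weighings of a known reference weight:

```python
import numpy as np

# Hypothetical calibration: weigh a reference bar of known weight many times.
# The spread of the readings around the truth gives the measurement variance r.
true_weight = 1000.0                                      # grams (reference)
rng = np.random.default_rng(seed=42)
readings = true_weight + rng.normal(0.0, 5.0, size=1000)  # scale sigma = 5 g

r = np.mean((readings - true_weight) ** 2)  # measurement uncertainty (variance)
sigma = np.sqrt(r)                          # standard deviation of the scale
print(f"sigma ~ {sigma:.2f} g, r ~ {r:.2f} g^2")
```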

Measurements Probability Density Function

The Kalman Gain equation in 1d

We are going to derive the third equation which is the Kalman Gain Equation. Right now, I will present the intuitive derivation of the Kalman Gain Equation. The mathematical derivation will be shown in the following chapters.

In a Kalman filter, the \( \alpha \) -\( \beta \) (-\( \gamma \) ) parameters are calculated dynamically for each filter iteration. This dynamically calculated parameter is called the Kalman Gain and is denoted by \( K_{n} \).

The Kalman Gain Equation is the following:

\[ K_{n}= \frac{Uncertainty \quad in \quad Estimate}{Uncertainty \quad in \quad Estimate \quad + \quad Uncertainty \quad in \quad Measurement}= \frac{p_{n,n-1}}{p_{n,n-1}+r_{n}} \]
Where:
\( p_{n,n-1} \) is the extrapolated estimate uncertainty
\( r_{n} \) is the measurement uncertainty

The Kalman Gain is a number between zero and one:

\[ 0 \leq K_{n} \leq 1 \]

Let’s rewrite the state update equation:

\[ \hat{x}_{n,n}=~ \hat{x}_{n,n-1}+ K_{n} \left( z_{n}- \hat{x}_{n,n-1} \right) = \left( 1-K_{n} \right) \hat{x}_{n,n-1}+ K_{n}z_{n} \]

As you can see, the Kalman Gain \( \left( K_{n} \right) \) is the weight that we give to the measurement, and \( \left( 1-K_{n} \right) \) is the weight that we give to the estimate.

When the measurement uncertainty is very large and the estimate uncertainty is very small, the Kalman Gain is close to zero. Hence we give a big weight to the estimate and a small weight to the measurement.

On the other hand, when the measurement uncertainty is very small and the estimate uncertainty is very large, the Kalman Gain is close to one. Hence, we give a small weight to the estimate and a big weight to the measurement.

If the measurement uncertainty is equal to the estimate uncertainty, then the Kalman Gain equals 0.5.

The Kalman Gain tells us how much we should change our estimate, given a measurement.

The Kalman Gain equation is the third Kalman filter equation.
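These limiting cases are easy to check numerically. A minimal sketch (the function name is mine, not from the text):

```python
def kalman_gain(p, r):
    """Kalman Gain in one dimension: the weight given to the measurement."""
    return p / (p + r)

# Large measurement uncertainty, small estimate uncertainty -> gain near 0:
# we trust the estimate.
print(kalman_gain(p=1.0, r=100.0))
# Small measurement uncertainty, large estimate uncertainty -> gain near 1:
# we trust the measurement.
print(kalman_gain(p=100.0, r=1.0))
# Equal uncertainties -> gain is exactly 0.5.
print(kalman_gain(p=25.0, r=25.0))
```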

The estimate uncertainty update in 1d

The following equation defines the estimate uncertainty update:

\[ p_{n,n}=~ \left( 1-K_{n} \right) p_{n,n-1} \]
Where:
\( K_{n} \) is the Kalman Gain
\( p_{n,n-1} \) is the estimate uncertainty that was calculated during the previous filter estimation
\( p_{n,n} \) is the estimate uncertainty of the current state

This equation updates the estimate uncertainty of the current state. It is called the Covariance Update Equation. Why covariance? We will see why in the following chapters.

It is quite clear from the equation that the estimate uncertainty is always getting smaller with each filter iteration, since \( \left( 1-K_{n} \right) \leq 1 \) . When the measurement uncertainty is large, then the Kalman gain will be low, therefore, the convergence of the estimate uncertainty would be slow. However, when the measurement uncertainty is small, then the Kalman gain will be high and the estimate uncertainty would quickly converge towards zero.

The Covariance Update equation is the fourth Kalman Filter Equation.
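The monotone shrinking of the estimate uncertainty is easy to see numerically. A minimal sketch, using arbitrary starting numbers ( \( p=100 \), \( r=25 \) ) rather than any example from the text:

```python
def update_uncertainty(p, r):
    """One covariance-update step: p_new = (1 - K) * p with K = p / (p + r)."""
    k = p / (p + r)          # Kalman Gain
    return (1 - k) * p       # Covariance Update

p, r = 100.0, 25.0           # arbitrary starting uncertainties
history = []
for _ in range(10):
    p = update_uncertainty(p, r)
    history.append(round(p, 2))
print(history)               # strictly decreasing towards zero
```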

The estimate uncertainty extrapolation in 1d

Like state extrapolation, the estimate uncertainty extrapolation is done with the dynamic model equations.

In our second example, in one-dimensional radar case, the predicted target position is:

\[ \hat{x}_{n,n-1}= \hat{x}_{n-1,n-1}+ \Delta t\hat{\dot{x}}_{n-1,n-1} \] \[ \hat{\dot{x}}_{n,n-1}= \hat{\dot{x}}_{n-1,n-1} \]

That is, the predicted position equals the current estimated position plus the current estimated velocity multiplied by time. The predicted velocity equals the current velocity estimate (assuming the constant velocity model).

The estimate uncertainty extrapolation would be:

\[ p_{n,n-1}^{x}= p_{n-1,n-1}^{x} + \Delta t^{2} \cdot p_{n-1,n-1}^{v} \] \[ p_{n,n-1}^{v}= p_{n-1,n-1}^{v} \]
Where:
\( p^{x} \) is the position estimate uncertainty
\( p^{v} \) is the velocity estimate uncertainty

That is, the predicted position estimate uncertainty equals the current position estimate uncertainty plus the current velocity estimate uncertainty multiplied by the time squared. The predicted velocity estimate uncertainty equals the current velocity estimate uncertainty (assuming the constant velocity model).

Note: If you are wondering why the time is squared in \( p_{n,n-1}^{x}= p_{n-1,n-1}^{x} + \Delta t^{2} \cdot p_{n-1,n-1}^{v} \), take a look at the expectation of variance derivation.
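The two extrapolation formulas translate directly into code. A sketch under the constant velocity assumption (the function and variable names are mine):

```python
def predict_constant_velocity(x, v, px, pv, dt):
    """State and uncertainty extrapolation for the constant velocity model.

    x, v   : current position and velocity estimates
    px, pv : position and velocity estimate uncertainties (variances)
    dt     : time step
    """
    x_pred = x + dt * v        # predicted position
    v_pred = v                 # velocity assumed constant
    px_pred = px + dt**2 * pv  # variance of (x + dt*v) for independent x, v
    pv_pred = pv
    return x_pred, v_pred, px_pred, pv_pred

# hypothetical numbers: position 100 m (variance 4), velocity 10 m/s
# (variance 1), time step 2 s
print(predict_constant_velocity(100.0, 10.0, px=4.0, pv=1.0, dt=2.0))
```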

In our first example (gold bar weight measurement) the system dynamics is constant. Thus, the estimate uncertainty extrapolation would be:

\[ p_{n,n-1}= p_{n-1,n-1} \]
Where:
\( p \) is the gold bar weight estimate uncertainty

The estimate uncertainty extrapolation equation is called Covariance Extrapolation Equation and this is the fifth Kalman Filter equation.

Putting all together

In this chapter, we are going to combine all of the pieces into a single algorithm. Like the \( \alpha \) -\( \beta \) (-\( \gamma \) ) filter, the Kalman filter utilizes the "Measure, Update, Predict" algorithm.

The following chart provides a low-level schematic description of the algorithm:

Schematic description of the Kalman Filter algorithm

The filter inputs are:

  • Initialization
  • The initialization is performed only once, and it provides two parameters:

    • Initial System State ( \( \hat{x}_{1,0} \) )
    • Initial State Uncertainty ( \( p_{1,0} \) )

    The initialization parameters can be provided by another system, another process (for instance, a search process in radar), or an educated guess based on experience or theoretical knowledge. Even if the initialization parameters are not precise, the Kalman filter will be able to converge close to the real value.

  • Measurement
  • The measurement is performed for every filter cycle, and it provides two parameters:

    • Measured System State ( \( z_{n} \) )
    • Measurement Uncertainty ( \( r_{n} \) )

In addition to the measured value, the Kalman filter requires the measurement uncertainty parameter. Usually, this parameter is provided by the equipment vendor, or it can be derived by measurement equipment calibration. The radar measurement uncertainty depends on several parameters, such as SNR (Signal-to-Noise Ratio), beam width, bandwidth, time on target, clock stability, and more. Every radar measurement has a different SNR, beam width, and time on target. Therefore, the radar calculates the measurement uncertainty for each measurement and reports it to the tracker.

The filter outputs are:

  • System State Estimate ( \( \hat{x}_{n,n} \) )
  • Estimate Uncertainty ( \( p_{n,n} \) )

In addition to the System State Estimate, the Kalman filter also provides the Estimate Uncertainty! In other words, we have a measure of the estimate precision. As I’ve already mentioned, the estimate uncertainty is given by:

\[ p_{n,n}=~ \left( 1-K_{n} \right) p_{n,n-1} \]

and \( p_{n,n} \) is always getting smaller with each filter iteration, since \( \left( 1-K_{n} \right) \leq 1 \)

So it is up to us to decide how many measurements to take. If we are measuring the building height, and we are interested in a precision of 3 centimeters ( \( \sigma \) ), we will make measurements until the Estimate Uncertainty ( \( \sigma ^{2} \) ) is less than 9 square centimeters.
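This stopping rule can be sketched as a short loop. A minimal sketch with illustrative numbers (initial variance 225, \( r=25 \), target variance 9 — i.e. \( \sigma =3 \) in whatever length unit is used):

```python
def measurements_needed(p0, r, target_variance):
    """Count update steps until the estimate variance drops below a target
    (constant dynamics, no process noise)."""
    p, n = p0, 0
    while p >= target_variance:
        k = p / (p + r)      # Kalman Gain
        p = (1 - k) * p      # Covariance Update
        n += 1
    return n

# start with p0 = 225 (sigma = 15), measurement variance r = 25 (sigma = 5),
# stop once the estimate variance is below 9 (sigma = 3)
print(measurements_needed(p0=225.0, r=25.0, target_variance=9.0))
```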

The following summarizes the five Kalman Filter equations, with the alternative names used in the literature:

  • State Update (also called the Filtering Equation):
\[ \hat{x}_{n,n}=~ \hat{x}_{n,n-1}+ K_{n} \left( z_{n}- \hat{x}_{n,n-1} \right) \]
  • State Extrapolation (also called the Predictor Equation, Transition Equation, Prediction Equation, Dynamic Model, or State Space Model), for constant velocity dynamics:
\[ \hat{x}_{n,n-1}= \hat{x}_{n-1,n-1}+ \Delta t\hat{\dot{x}}_{n-1,n-1} \]
\[ \hat{\dot{x}}_{n,n-1}= \hat{\dot{x}}_{n-1,n-1} \]
  • Kalman Gain (also called the Weight Equation):
\[ K_{n}= \frac{p_{n,n-1}}{p_{n,n-1}+r_{n}} \]
  • Covariance Update (also called the Corrector Equation):
\[ p_{n,n}=~ \left( 1-K_{n} \right) p_{n,n-1} \]
  • Covariance Extrapolation (also called the Predictor Covariance Equation), for constant dynamics:
\[ p_{n,n-1}= p_{n-1,n-1} \]
Note 1: The State Extrapolation Equation and the Covariance Extrapolation Equation depend on the system dynamics.
Note 2: The table above demonstrates the special form of the Kalman Filter equations tailored for the specific case. The general form of the equation will be presented later in a matrix notation. Right now, our goal is to understand the concept of the Kalman Filter.
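The whole "Measure, Update, Predict" cycle for the one-dimensional, constant-dynamics case fits in a few lines. The following is a sketch (my own function, without process noise for now):

```python
def kalman_1d(measurements, r, x0, p0):
    """One-dimensional Kalman Filter for constant dynamics, no process noise.

    measurements : list of measured values z_n
    r            : measurement uncertainty (variance)
    x0, p0       : initialization (state guess and its variance)
    """
    x, p = x0, p0              # iteration zero: the prediction equals the guess
    estimates = []
    for z in measurements:
        k = p / (p + r)        # Kalman Gain
        x = x + k * (z - x)    # State Update
        p = (1 - k) * p        # Covariance Update
        estimates.append((x, p))
        # Prediction: for constant dynamics, x and p carry over unchanged
    return estimates

# toy run: noisy readings of a constant quantity, guess 60 with variance 225
est = kalman_1d([48.54, 47.11, 55.01], r=25.0, x0=60.0, p0=225.0)
print([(round(x, 2), round(p, 2)) for x, p in est])
```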

The following figure provides a detailed description of the Kalman Filter’s block diagram.

Detailed description of the Kalman Filter algorithm
  • Step 0: Initialization
  • As mentioned above, the initialization is performed only once, and it provides two parameters:

    • Initial System State ( \( \hat{x}_{1,0} \) )
    • Initial State Uncertainty ( \( p_{1,0} \) )

    The initialization is followed by prediction.

  • Step 1: Measurement
  • The measurement process shall provide two parameters:

    • Measured System State ( \( z_{n} \) )
    • Measurement Uncertainty ( \( r_{n} \) )
  • Step 2: State Update
  • The state update process is responsible for the system's current state estimation.

    The state update process inputs are:

    • Measured Value ( \( z_{n} \) )
    • The Measurement Uncertainty ( \( r_{n} \) )
    • Previous System State Estimate ( \( \hat{x}_{n,n-1} \) )
    • Estimate Uncertainty ( \( p_{n,n-1} \) )

    Based on the inputs, the state update process calculates the Kalman Gain and provides two outputs:

    • Current System State Estimate ( \( \hat{x}_{n,n} \) )
    • Current State Estimate Uncertainty ( \( p_{n,n} \) )

    These parameters are the Kalman Filter outputs.

  • Step 3: Prediction
  • The prediction process extrapolates the current system state and the uncertainty of the current system state estimate to the next system state, based on the system's dynamic model.

    At the first filter iteration the initialization outputs are treated as the Previous State Estimate and Uncertainty.

    On the next filter iterations, the prediction outputs become the Previous State Estimate and Uncertainty.

The Kalman Gain intuition

The Kalman Gain defines the weight of the measurement and the weight of the previous estimate when forming a new estimate.

High Kalman Gain

A low measurement uncertainty relative to the estimate uncertainty would result in a high Kalman Gain (close to 1). As a result, the new estimate would be close to the measurement. The following figure illustrates the influence of a high Kalman Gain on the estimate in an aircraft tracking application.

High Kalman Gain

Low Kalman Gain

A high measurement uncertainty relative to the estimate uncertainty would result in a low Kalman Gain (close to 0). As a result, the new estimate would be close to the previous estimate. The following figure illustrates the influence of a low Kalman Gain on the estimate in an aircraft tracking application.

Low Kalman Gain

Now we understand the Kalman Filter algorithm, and we are ready for the first numerical example.

Example 5 – Estimating the building height

Assume that we would like to estimate the height of a building using a very imprecise altimeter.

We know for sure that the building height doesn’t change over time, at least during the short measurement process.

Estimating the building height

The numerical example

  • The true building height is 50 meters.
  • The altimeter measurement error (standard deviation) is 5 meters.
  • The set of ten measurements is: 48.54m, 47.11m, 55.01m, 55.15m, 49.89m, 40.85m, 46.72m, 50.05m, 51.27m, 49.95m.

Iteration Zero

Initialization

One can estimate the building height simply by looking at it.

The estimated building height is:

\[ \hat{x}_{0,0}=60m \]

Now we shall initialize the estimate uncertainty. A human’s estimation error (standard deviation) is about 15 meters: \( \sigma =15 \) . Consequently, the variance is 225: \( \sigma ^{2}=225 \) .

\[ p_{0,0}=225 \]

Prediction

Now, we shall predict the next state based on the initialization values.

Since the system’s dynamic model is constant, i.e. the building doesn’t change its height, then:

\[ \hat{x}_{1,0}=\hat{x}_{0,0}= 60m \]

The extrapolated estimate uncertainty (variance) also doesn’t change:

\[ p_{1,0}= p_{0,0}=225 \]

First Iteration

Step 1 - Measure

The first measurement is: \( z_{1}=48.54m \) .

Since the standard deviation ( \( \sigma \) ) of the altimeter measurement error is 5, the variance ( \( \sigma ^{2} \) ) would be 25, thus the measurement uncertainty is: \( r_{1}=25 \) .

Step 2 - Update

Kalman Gain calculation:

\[ K_{1}= \frac{p_{1,0}}{p_{1,0}+r_{1}}= \frac{225}{225+25}=0.9 \]

Estimating the current state:

\[ \hat{x}_{1,1}=~ \hat{x}_{1,0}+ K_{1} \left( z_{1}- \hat{x}_{1,0} \right) =60+0.9 \left( 48.54-60 \right) =49.69m \]

Update the current estimate uncertainty:

\[ p_{1,1}=~ \left( 1-K_{1} \right) p_{1,0}= \left( 1-0.9 \right) 225=22.5 \]

Step 3 - Predict

Since the system’s dynamic model is constant, i.e. the building doesn’t change its height, then:

\[ \hat{x}_{2,1}=\hat{x}_{1,1}= 49.69m \]

The extrapolated estimate uncertainty (variance) also doesn’t change:

\[ p_{2,1}= p_{1,1}=22.5 \]

Second Iteration

After a unit time delay, the predicted estimate from the previous iteration becomes the previous estimate in the current iteration:

\[ \hat{x}_{2,1}=49.69m \]

The extrapolated estimate uncertainty becomes the previous estimate uncertainty:

\[ p_{2,1}= 22.5 \]

Step 1 - Measure

The second measurement is: \( z_{2}=47.11m \)

The measurement uncertainty is: \( r_{2}=25 \)

Step 2 - Update

Kalman Gain calculation:

\[ K_{2}= \frac{p_{2,1}}{p_{2,1}+r_{2}}= \frac{22.5}{22.5+25}=0.47 \]

Estimating the current state:

\[ \hat{x}_{2,2}=~ \hat{x}_{2,1}+ K_{2} \left( z_{2}- \hat{x}_{2,1} \right) =49.69+0.47 \left( 47.11-49.69 \right) =48.47m \]

Update the current estimate uncertainty:

\[ p_{2,2}=~ \left( 1-K_{2} \right) p_{2,1}= \left( 1-0.47 \right) 22.5=11.84 \]

Step 3 - Predict

Since the system’s dynamic model is constant, i.e. the building doesn’t change its height, then:

\[ \hat{x}_{3,2}=\hat{x}_{2,2}= 48.47m \]

The extrapolated estimate uncertainty (variance) also doesn’t change:

\[ p_{3,2}= p_{2,2}=11.84 \]

Iterations 3-10

The calculations for the next iterations are summarized in the next table:

\( n \) \( z_{n} \) Current state estimates ( \( K_{n} \) , \( \hat{x}_{n,n} \) , \( p_{n,n} \) ) Prediction ( \( \hat{x}_{n+1,n} \) , \( p_{n+1,n} \) )
3 \( 55.01m \) \[ K_{3}= \frac{11.84}{11.84+25}=0.32 \] \[ \hat{x}_{3,3}=~ 48.47+0.32 \left( 55.01 -48.47 \right) =50.57m \] \[ p_{3,3}= \left( 1-0.32 \right) 11.84=8.04 \] \[ \hat{x}_{4,3}= \hat{x}_{3,3}=50.57m \] \[ p_{4,3}= p_{3,3}=8.04 \]
4 \( 55.15m \) \[ K_{4}= \frac{8.04}{8.04+25}=0.24 \] \[ \hat{x}_{4,4}=~ 50.57+0.24 \left( 55.15 -50.57 \right) =51.68m \] \[ p_{4,4}= \left( 1-0.24 \right) 8.04=6.08 \] \[ \hat{x}_{5,4}= \hat{x}_{4,4}=51.68m \] \[ p_{5,4}= p_{4,4}=6.08 \]
5 \( 49.89m \) \[ K_{5}= \frac{6.08}{6.08+25}=0.2 \] \[ \hat{x}_{5,5}= 51.68+0.2 \left( 49.89 -51.68 \right) =51.33m \] \[ p_{5,5}= \left( 1-0.2 \right) 6.08=4.89 \] \[ \hat{x}_{6,5}= \hat{x}_{5,5}=51.33m \] \[ p_{6,5}= p_{5,5}=4.89 \]
6 \( 40.85m \) \[ K_{6}= \frac{4.89}{4.89+25}=0.16 \] \[ \hat{x}_{6,6}=~ 51.33+0.16 \left( 40.85 -51.33 \right) =49.62m \] \[ p_{6,6}= \left( 1-0.16 \right) 4.89=4.09 \] \[ \hat{x}_{7,6}= \hat{x}_{6,6}=49.62m \] \[ p_{7,6}= p_{6,6}=4.09 \]
7 \( 46.72m \) \[ K_{7}= \frac{4.09}{4.09+25}=0.14 \] \[ \hat{x}_{7,7}=~ 49.62+0.14 \left( 46.72 -49.62 \right) =49.21m \] \[ p_{7,7}= \left( 1-0.14 \right) 4.09=3.52 \] \[ \hat{x}_{8,7}= \hat{x}_{7,7}=49.21m \] \[ p_{8,7}= p_{7,7}=3.52 \]
8 \( 50.05m \) \[ K_{8}= \frac{3.52}{3.52+25}=0.12 \] \[ \hat{x}_{8,8}= 49.21+0.12 \left( 50.05 -49.21 \right) =49.31m \] \[ p_{8,8}= \left( 1-0.12 \right) 3.52=3.08 \] \[ \hat{x}_{9,8}= \hat{x}_{8,8}=49.31m \] \[ p_{9,8}= p_{8,8}=3.08 \]
9 \( 51.27m \) \[ K_{9}= \frac{3.08}{3.08+25}=0.11 \] \[ \hat{x}_{9,9}=~ 49.31+0.11 \left( 51.27 -49.31 \right) =49.53m \] \[ p_{9,9}= \left( 1-0.11 \right) 3.08=2.74 \] \[ \hat{x}_{10,9}= \hat{x}_{9,9}=49.53m \] \[ p_{10,9}= p_{9,9}=2.74 \]
10 \( 49.95m \) \[ K_{10}= \frac{2.74}{2.74+25}=0.1 \] \[ \hat{x}_{10,10}=~ 49.53+0.1 \left( 49.95 -49.53 \right) =49.57m \] \[ p_{10,10}= \left( 1-0.1 \right) 2.74=2.47 \] \[ \hat{x}_{11,10}= \hat{x}_{10,10}=49.57m \] \[ p_{11,10}= p_{10,10}=2.47 \]
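The whole table can be reproduced with a short loop. A sketch using the example's numbers:

```python
measurements = [48.54, 47.11, 55.01, 55.15, 49.89,
                40.85, 46.72, 50.05, 51.27, 49.95]
x, p, r = 60.0, 225.0, 25.0    # initialization and measurement variance
for z in measurements:
    k = p / (p + r)            # Kalman Gain
    x = x + k * (z - x)        # State Update
    p = (1 - k) * p            # Covariance Update
    # constant dynamics: the prediction leaves x and p unchanged
print(round(x, 2), round(p, 2))  # ~ 49.57 and 2.47, matching the table
```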

The following chart compares the true value, measured values and estimates.

True value, measured values and estimates

As you can see, the estimated value converges to about 49.5 meters after 7 measurements.

The next chart compares the measurement uncertainty and the estimate uncertainty.

Measurement uncertainty and estimate uncertainty

At the first filter iteration, the estimate uncertainty is close to the measurement uncertainty, and it quickly goes down. After 10 measurements the estimate uncertainty ( \( \sigma ^{2} \) ) is 2.47, i.e. the estimate error standard deviation is: \( \sigma = \sqrt[]{2.47}=1.57m \)

So we can say that the building height estimate is: \( 49.57 \pm 1.57m \)

The next chart shows the Kalman Gain.

The Kalman Gain

As you can see, the Kalman Gain is going down, making the measurement weight smaller and smaller.

Example summary

In this example, we've measured the building height using the one-dimensional Kalman Filter. Unlike the \( \alpha -\beta -(\gamma) \) filter, the Kalman Gain is dynamic and depends on the precision of the measurement device.

At the beginning, the Kalman Filter initialization is not precise. Therefore, the measurement weight in the State Update Equation is high, and the estimate uncertainty is high.

With each iteration, the measurement weight becomes smaller, and so does the estimate uncertainty.

The Kalman Filter output includes the estimate and the estimate uncertainty.

The complete model of the one-dimensional Kalman Filter

Now, we are going to update the Covariance Extrapolation Equation with the process noise variable.

The Process Noise

In the real world, there are uncertainties in the system dynamic model. For example, when we want to estimate the resistance value of a resistor, we assume a constant dynamic model, i.e. the resistance doesn’t change between the measurements. However, the resistance can slightly change due to fluctuations of the environment temperature. When tracking ballistic missiles with radar, the uncertainty of the dynamic model includes random changes in the target acceleration. For an aircraft, the uncertainties are much greater due to possible aircraft maneuvers.

On the other hand, when we estimate the location of a static object using a GPS receiver, the uncertainty of the dynamic model is zero, since the static object doesn’t move. The uncertainty of the dynamic model is called the Process Noise. In the literature, it is also called plant noise, driving noise, dynamics noise, model noise, and system noise. The process noise produces estimation errors.

In the previous example we've estimated the building height. The building height doesn't change. Therefore, we didn't take the process noise into consideration.

The Process Noise Variance is denoted by the letter \( q \).

The Covariance Extrapolation Equation shall include the Process Noise Variance.

The Covariance Extrapolation Equation for constant dynamics is:

\[ p_{n,n-1}= p_{n-1,n-1}+ q_{n} \]
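One consequence of adding \( q \) is that the estimate uncertainty no longer converges to zero; it settles at a small steady-state value instead. A sketch of my own illustrating this, using \( q=0.0001 \) and \( r=0.01 \) as example numbers:

```python
def settled_uncertainty(q, r, iters=1000):
    """Iterate covariance extrapolation and update (constant dynamics)
    until the estimate uncertainty settles."""
    p = r                              # arbitrary starting uncertainty
    for _ in range(iters):
        p_pred = p + q                 # Covariance Extrapolation with process noise
        k = p_pred / (p_pred + r)      # Kalman Gain
        p = (1 - k) * p_pred           # Covariance Update
    return p

print(settled_uncertainty(q=0.0001, r=0.01))  # small, but not zero
print(settled_uncertainty(q=0.0, r=0.01))     # without process noise: towards zero
```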

These are the updated Kalman Filter equations in one dimension:

  • State Update (also called the Filtering Equation):
\[ \hat{x}_{n,n}=~ \hat{x}_{n,n-1}+ K_{n} \left( z_{n}- \hat{x}_{n,n-1} \right) \]
  • State Extrapolation (also called the Predictor Equation, Transition Equation, Prediction Equation, Dynamic Model, or State Space Model), for constant velocity dynamics:
\[ \hat{x}_{n,n-1}= \hat{x}_{n-1,n-1}+ \Delta t\hat{\dot{x}}_{n-1,n-1} \]
\[ \hat{\dot{x}}_{n,n-1}= \hat{\dot{x}}_{n-1,n-1} \]
  • Kalman Gain (also called the Weight Equation):
\[ K_{n}= \frac{p_{n,n-1}}{p_{n,n-1}+r_{n}} \]
  • Covariance Update (also called the Corrector Equation):
\[ p_{n,n}=~ \left( 1-K_{n} \right) p_{n,n-1} \]
  • Covariance Extrapolation (also called the Predictor Covariance Equation), for constant dynamics:
\[ p_{n,n-1}= p_{n-1,n-1} + q_{n} \]
Note 1: The State Extrapolation Equation and the Covariance Extrapolation Equation depend on the system dynamics.
Note 2: The table above demonstrates the special form of the Kalman Filter equations tailored for the specific case. The general form of the equation will be presented later in a matrix notation. Right now, our goal is to understand the concept of the Kalman Filter.

Example 6 – Estimating the temperature of the liquid in a tank

We would like to estimate the temperature of the liquid in a tank.

Estimating the liquid temperature

We assume that at the steady state the liquid temperature is constant. However, some fluctuations in the true liquid temperature are possible. We can describe the system dynamics by the following equation:

\[ x_{n}=T+ w_{n} \]

where:

\( T \) is the constant temperature

\( w_{n} \) is a random process noise with variance \( q \)
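The system model \( x_{n}=T+ w_{n} \) can be simulated directly. A sketch generating synthetic true temperatures and measurements under the example's assumptions (the random seed is arbitrary, so these are not the example's exact values):

```python
import numpy as np

T, q = 50.0, 0.0001        # constant temperature and process noise variance
sigma_meas = 0.1           # measurement error (standard deviation)
rng = np.random.default_rng(seed=1)

# the true temperature fluctuates around T with variance q
true_temps = T + rng.normal(0.0, np.sqrt(q), size=10)
# each measurement adds independent measurement noise
measurements = true_temps + rng.normal(0.0, sigma_meas, size=10)

print(np.round(true_temps, 3))
print(np.round(measurements, 3))
```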

The numerical example

  • Let us assume a true temperature of 50 degrees Celsius.
  • We think that we have an accurate model, thus we set the process noise variance ( \( q \) ) to 0.0001.
  • The measurement error (standard deviation) is 0.1 degrees Celsius.
  • The measurements are taken every 5 seconds.
  • The true liquid temperature at the measurement points is: 49.979\( ^{o}C \), 50.025\( ^{o}C \), 50\( ^{o}C \), 50.003\( ^{o}C \), 49.994\( ^{o}C \), 50.002\( ^{o}C \), 49.999\( ^{o}C \), 50.006\( ^{o}C \), 49.998\( ^{o}C \), and 49.991\( ^{o}C \).
  • The set of measurements is: 49.95\( ^{o}C \), 49.967\( ^{o}C \), 50.1\( ^{o}C \), 50.106\( ^{o}C \), 49.992\( ^{o}C \), 49.819\( ^{o}C \), 49.933\( ^{o}C \), 50.007\( ^{o}C \), 50.023\( ^{o}C \), and 49.99\( ^{o}C \).

The following chart compares the true liquid temperature and the measurements.

True temperature vs. measurements

Iteration Zero

Before the first iteration, we must initialize the Kalman Filter and predict the next state (which is the first state).

Initialization

We don't know what the temperature of the liquid is, and our guess is 10\( ^{o}C \).

\[ \hat{x}_{0,0}=10^{o}C \]

Our guess is very imprecise, so we set our initialization estimate error ( \( \sigma \) ) to 100. The Estimate Uncertainty of the initialization is the error variance \( \left( \sigma ^{2} \right) \):

\[ p_{0,0}=100^{2}=10,000 \]

This variance is very high. If we initialize with a more meaningful value, we will get faster Kalman Filter convergence.

Prediction

Now, we shall predict the next state based on the initialization values.

Since our dynamic model is constant, the predicted estimate is equal to the current estimate:

\[ \hat{x}_{1,0}=10^{o}C \]

The extrapolated estimate uncertainty (variance):

\[ p_{1,0}= p_{0,0}+q=10000+ 0.0001=10000.0001 \]

First Iteration

Step 1 - Measure

The measurement value:

\[ z_{1}=~ 49.95^{o}C \]

Since the measurement error is 0.1 ( \( \sigma \) ), the variance ( \( \sigma ^{2} \) ) would be 0.01, thus the measurement uncertainty is:

\[ r_{1}= 0.01 \]

Step 2 - Update

Kalman Gain calculation:

\[ K_{1}= \frac{p_{1,0}}{p_{1,0}+r_{1}}= \frac{10000.0001}{10000.0001+0.01} = 0.999999 \]

The Kalman Gain is almost 1, i.e. our estimate error is much bigger than the measurement error. Thus the estimate weight is negligible, while the measurement weight is almost 1.

Estimating the current state:

\[ \hat{x}_{1,1}=~ \hat{x}_{1,0}+ K_{1} \left( z_{1}- \hat{x}_{1,0} \right) =10+0.999999 \left( 49.95-10 \right) =49.95^{o}C \]

Update the current estimate uncertainty:

\[ p_{1,1}=~ \left( 1-K_{1} \right) p_{1,0}= \left( 1-0.999999 \right) 10000.0001=0.01 \]

Step 3 - Predict

Since the system’s dynamic model is constant, i.e. the liquid temperature doesn’t change, then:

\[ \hat{x}_{2,1}=\hat{x}_{1,1}= 49.95^{o}C \]

The extrapolated estimate uncertainty (variance) is:

\[ p_{2,1}= p_{1,1}+q=0.01+ 0.0001=0.0101 \]

Second Iteration

Step 1 - Measure

The measurement value:

\[ z_{2}=~ 49.967^{o}C \]

Since the measurement error is 0.1 ( \( \sigma \) ), the variance ( \( \sigma^{2} \) ) would be 0.01, thus the measurement uncertainty is:

\[ r_{2}= 0.01 \]

Step 2 - Update

Kalman Gain calculation:

\[ K_{2}= \frac{p_{2,1}}{p_{2,1}+r_{2}}= \frac{0.0101}{0.0101+0.01} = 0.5 \]

The Kalman Gain is 0.5, i.e. the estimate weight and the measurement weight are equal.

Estimating the current state:

\[ \hat{x}_{2,2}=~ \hat{x}_{2,1}+ K_{2} \left( z_{2}- \hat{x}_{2,1} \right) =49.95+0.5 \left( 49.967-49.95 \right) =49.959^{o}C \]

Update the current estimate uncertainty:

\[ p_{2,2}=~ \left( 1-K_{2} \right) p_{2,1}= \left( 1-0.5 \right) 0.0101=0.005 \]

Step 3 - Predict

Since the system’s dynamic model is constant, i.e. the liquid temperature doesn’t change, then:

\[ \hat{x}_{3,2}=\hat{x}_{2,2}= 49.959^{o}C \]

The extrapolated estimate uncertainty (variance) is:

\[ p_{3,2}= p_{2,2}+q=0.005+ 0.0001=0.0051 \]

Iterations 3-10

The calculations for the next iterations are summarized in the next table:

\( n \) \( z_{n} \) Current state estimates ( \( K_{n} \) , \( \hat{x}_{n,n} \) , \( p_{n,n} \) ) Prediction ( \( \hat{x}_{n+1,n} \) , \( p_{n+1,n} \) )
3 \( 50.1^{o}C \) \[ K_{3}= \frac{0.0051}{0.0051+0.01}=0.3388 \] \[ \hat{x}_{3,3}=~ 49.959+0.3388 \left( 50.1-49.959 \right) =50.007^{o}C \] \[ p_{3,3}= \left( 1-0.3388 \right) 0.0051=0.0034 \] \[ \hat{x}_{4,3}= \hat{x}_{3,3}=50.007^{o}C \] \[ p_{4,3}= 0.0034+0.0001=0.0035 \]
4 \( 50.106^{o}C \) \[ K_{4}= \frac{0.0035}{0.0035+0.01}=0.2586 \] \[ \hat{x}_{4,4}=~ 50.007+0.2586 \left( 50.106-50.007 \right) =50.032^{o}C \] \[ p_{4,4}= \left( 1-0.2586 \right) 0.0035=0.0026 \] \[ \hat{x}_{5,4}= \hat{x}_{4,4}=50.032^{o}C \] \[ p_{5,4}= 0.0026+0.0001=0.0027 \]
5 \( 49.992^{o}C \) \[ K_{5}= \frac{0.0027}{0.0027+0.01}=0.2117 \] \[ \hat{x}_{5,5}= 50.032+0.2117 \left( 49.992-50.032 \right) =50.023^{o}C \] \[ p_{5,5}= \left( 1-0.2117 \right) 0.0027=0.0021 \] \[ \hat{x}_{6,5}= \hat{x}_{5,5}=50.023^{o}C \] \[ p_{6,5}= 0.0021+0.0001=0.0022 \]
6 \( 49.819^{o}C \) \[ K_{6}= \frac{0.0022}{0.0022+0.01}=0.1815 \] \[ \hat{x}_{6,6}=~ 50.023+0.1815 \left( 49.819-50.023 \right) =49.987^{o}C \] \[ p_{6,6}= \left( 1-0.1815 \right) 0.0022=0.0018 \] \[ \hat{x}_{7,6}= \hat{x}_{6,6}=49.987^{o}C \] \[ p_{7,6}= 0.0018+0.0001=0.0019 \]
7 \( 49.933^{o}C \) \[ K_{7}= \frac{0.0019}{0.0019+0.01}=0.1607 \] \[ \hat{x}_{7,7}=~ 49.987+0.1607 \left( 49.933-49.987 \right) =49.978^{o}C \] \[ p_{7,7}= \left( 1-0.1607 \right) 0.0019=0.0016 \] \[ \hat{x}_{8,7}= \hat{x}_{7,7}=49.978^{o}C \] \[ p_{8,7}= 0.0016+0.0001=0.0017 \]
8 \( 50.007^{o}C \) \[ K_{8}= \frac{0.0017}{0.0017+0.01}=0.1458 \] \[ \hat{x}_{8,8}= 49.978+0.1458 \left( 50.007-49.978 \right) =49.983^{o}C \] \[ p_{8,8}= \left( 1-0.1458 \right) 0.0017=0.0015 \] \[ \hat{x}_{9,8}= \hat{x}_{8,8}=49.983^{o}C \] \[ p_{9,8}= 0.0015+0.0001=0.0016 \]
9 \( 50.023^{o}C \) \[ K_{9}= \frac{0.0016}{0.0016+0.01}=0.1348 \] \[ \hat{x}_{9,9}=~ 49.983+0.1348 \left( 50.023-49.983 \right) =49.988^{o}C \] \[ p_{9,9}= \left( 1-0.1348 \right) 0.0016=0.0014 \] \[ \hat{x}_{10,9}= \hat{x}_{9,9}=49.988^{o}C \] \[ p_{10,9}= 0.0014+0.0001=0.0015 \]
10 \( 49.99^{o}C \) \[ K_{10}= \frac{0.0015}{0.0015+0.01}=0.1265 \] \[ \hat{x}_{10,10}=~ 49.988+0.1265 \left( 49.99 -49.988 \right) =49.988^{o}C \] \[ p_{10,10}= \left( 1-0.1265 \right) 0.0015=0.0013 \] \[ \hat{x}_{11,10}= \hat{x}_{10,10}=49.988^{o}C \] \[ p_{11,10}= 0.0013+0.0001=0.0014 \]
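The table above can also be reproduced with a short loop. A sketch using the example's numbers, this time with the process noise term in the prediction step:

```python
measurements = [49.95, 49.967, 50.1, 50.106, 49.992,
                49.819, 49.933, 50.007, 50.023, 49.99]
x, p = 10.0, 10000.0           # initialization: guess and its variance
r, q = 0.01, 0.0001            # measurement and process noise variances
for z in measurements:
    p = p + q                  # Covariance Extrapolation with process noise
    k = p / (p + r)            # Kalman Gain
    x = x + k * (z - x)        # State Update
    p = (1 - k) * p            # Covariance Update
print(round(x, 3), round(p, 4))  # ~ 49.988 and 0.0013, matching the table
```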

The following chart compares the true value, measured values and estimates.

True value, measured values and estimates

As you can see the estimated value converges towards the true value.

The next chart shows the estimate uncertainty.

Estimate uncertainty

The estimate uncertainty quickly goes down. After 10 measurements, the estimate uncertainty ( \( \sigma ^{2} \) ) is 0.0013, i.e. the estimate error standard deviation is: \( \sigma = \sqrt[]{0.0013}=0.036^{o}C \)

So we can say that the liquid temperature estimate is: \( 49.988 \pm 0.036_{ }^{o}C \)

The next chart shows the Kalman Gain.

The Kalman Gain

As you can see, the Kalman Gain is going down, making the measurement weight smaller and smaller.

Example summary

In this example, we've measured the liquid temperature using the one-dimensional Kalman Filter. Although the system dynamics include a random process noise, the Kalman Filter provides a good estimation.

Example 7 – Estimating the temperature of the heating liquid

Like in the previous example, we are going to estimate the temperature of the liquid in the tank. This time, however, the system dynamics is not constant: the liquid is heating at a rate of 0.1\( ^{o}C \) every second.

The Kalman Filter parameters are similar to the previous example:

  • We think that we have an accurate model, thus we set the process noise variance (q) to 0.0001.
  • The measurement error (standard deviation) is 0.1\( ^{o}C \).
  • The measurements are taken every 5 seconds.
  • The system dynamics is constant.

Note that although the real system dynamics is not constant (since the liquid is heating), we are going to treat the system as a system with constant dynamics (as if the temperature doesn't change).

  • The true liquid temperature at the measurement points is: 50.479\( ^{o}C \), 51.025\( ^{o}C \), 51.5\( ^{o}C \), 52.003\( ^{o}C \), 52.494\( ^{o}C \), 53.002\( ^{o}C \), 53.499\( ^{o}C \), 54.006\( ^{o}C \), 54.498\( ^{o}C \), and 54.991\( ^{o}C \).
  • The set of measurements is: 50.45\( ^{o}C \), 50.967\( ^{o}C \), 51.6\( ^{o}C \), 52.106\( ^{o}C \), 52.492\( ^{o}C \), 52.819\( ^{o}C \), 53.433\( ^{o}C \), 54.007\( ^{o}C \), 54.523\( ^{o}C \), and 54.99\( ^{o}C \).
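As a quick sanity check (not part of the original example), we can verify that the listed measurement errors are on the order of the stated 0.1\( ^{o}C \) standard deviation:

```python
import math

true_temps = [50.479, 51.025, 51.5, 52.003, 52.494,
              53.002, 53.499, 54.006, 54.498, 54.991]
measurements = [50.45, 50.967, 51.6, 52.106, 52.492,
                52.819, 53.433, 54.007, 54.523, 54.99]

# Measurement errors and their standard deviation (10 samples only,
# so we expect the result to be near 0.1, not exactly 0.1)
errors = [z - t for z, t in zip(measurements, true_temps)]
std = math.sqrt(sum(e * e for e in errors) / len(errors))

print(f"measurement error std: {std:.3f} deg C")
```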

The following chart compares the true liquid temperature and the measurements.

True temperature vs. measurements

Iteration Zero

Iteration zero is similar to the previous example.

Before the first iteration, we must initialize the Kalman Filter and predict the next state (which is the first state).

Initialization

We don't know what the temperature of the liquid in the tank is, so our initial guess is 10\( ^{o}C \).

\[ \hat{x}_{0,0}=10^{o}C \]

Our guess is very imprecise, so we set the initialization estimate error ( \( \sigma \) ) to 100. The estimate uncertainty of the initialization is the error variance \( \left( \sigma ^{2} \right) \):

\[ p_{0,0}=100^{2}=10,000 \]

This variance is very high. If we initialize with a more meaningful value, we will get faster Kalman Filter convergence.

Prediction

Now, we shall predict the next state based on the initialization values.

Since our model assumes constant dynamics, the predicted estimate is equal to the current estimate:

\[ \hat{x}_{1,0}=10^{o}C \]

The extrapolated estimate uncertainty (variance):

\[ p_{1,0}= p_{0,0}+q=10000+ 0.0001=10000.0001 \]

Iterations 1-10

The calculations for the next iterations are summarized in the next table:

\( n \) \( z_{n} \) Current state estimates ( \( K_{n} \) , \( \hat{x}_{n,n} \) , \( p_{n,n} \) ) Prediction ( \( \hat{x}_{n+1,n} \) , \( p_{n+1,n} \) )
1 \( 50.45^{o}C \) \[ K_{1}= \frac{10000.0001}{10000.0001+0.01} = 0.999999 \] \[ \hat{x}_{1,1}=~ 10+0.999999 \left( 50.45-10 \right) =50.45^{o}C \] \[ p_{1,1}= \left( 1-0.999999 \right) 10000.0001=0.01 \] \[ \hat{x}_{2,1}= \hat{x}_{1,1}=50.45^{o}C \] \[ p_{2,1}= 0.01+0.0001=0.0101 \]
2 \( 50.967^{o}C \) \[ K_{2}= \frac{0.0101}{0.0101+0.01}=0.5025 \] \[ \hat{x}_{2,2}=~ 50.45+0.5025 \left( 50.967-50.45 \right) =50.71^{o}C\] \[ p_{2,2}= \left( 1-0.5025 \right) 0.0101=0.005 \] \[ \hat{x}_{3,2}= \hat{x}_{2,2}=50.71^{o}C \] \[ p_{3,2}= 0.005+0.0001=0.0051 \]
3 \( 51.6^{o}C \) \[ K_{3}= \frac{0.0051}{0.0051+0.01}=0.3388 \] \[ \hat{x}_{3,3}=~ 50.71+0.3388 \left( 51.6-50.71 \right) =51.011^{o}C\] \[ p_{3,3}= \left( 1-0.3388 \right) 0.0051=0.0034 \] \[ \hat{x}_{4,3}= \hat{x}_{3,3}=51.011^{o}C \] \[ p_{4,3}= 0.0034+0.0001=0.0035 \]
4 \( 52.106^{o}C \) \[ K_{4}= \frac{0.0035}{0.0035+0.01}=0.2586 \] \[ \hat{x}_{4,4}=~ 51.011+0.2586 \left( 52.106-51.011 \right) =51.295^{o}C \] \[ p_{4,4}= \left( 1-0.2586 \right) 0.0035=0.0026 \] \[ \hat{x}_{5,4}= \hat{x}_{4,4}=51.295^{o}C \] \[ p_{5,4}= 0.0026+0.0001=0.0027 \]
5 \( 52.492^{o}C \) \[ K_{5}= \frac{0.0027}{0.0027+0.01}=0.2117 \] \[ \hat{x}_{5,5}= 51.295+0.2117 \left( 52.492-51.295 \right) =51.548^{o}C \] \[ p_{5,5}= \left( 1-0.2117 \right) 0.0027=0.0021 \] \[ \hat{x}_{6,5}= \hat{x}_{5,5}=51.548^{o}C \] \[ p_{6,5}= 0.0021+0.0001=0.0022 \]
6 \( 52.819^{o}C \) \[ K_{6}= \frac{0.0022}{0.0022+0.01}=0.1815 \] \[ \hat{x}_{6,6}=~ 51.548+0.1815 \left( 52.819-51.548 \right) =51.779^{o}C \] \[ p_{6,6}= \left( 1-0.1815 \right) 0.0022=0.0018 \] \[ \hat{x}_{7,6}= \hat{x}_{6,6}=51.779^{o}C \] \[ p_{7,6}= 0.0018+0.0001=0.0019 \]
7 \( 53.433^{o}C \) \[ K_{7}= \frac{0.0019}{0.0019+0.01}=0.1607 \] \[ \hat{x}_{7,7}=~ 51.779+0.1607 \left( 53.433-51.779 \right) =52.045^{o}C \] \[ p_{7,7}= \left( 1-0.1607 \right) 0.0019=0.0016 \] \[ \hat{x}_{8,7}= \hat{x}_{7,7}=52.045^{o}C \] \[ p_{8,7}= 0.0016+0.0001=0.0017 \]
8 \( 54.007^{o}C \) \[ K_{8}= \frac{0.0017}{0.0017+0.01}=0.1458 \] \[ \hat{x}_{8,8}= 52.045+0.1458 \left( 54.007-52.045 \right) =52.331^{o}C \] \[ p_{8,8}= \left( 1-0.1458 \right) 0.0017=0.0015 \] \[ \hat{x}_{9,8}= \hat{x}_{8,8}=52.331^{o}C \] \[ p_{9,8}= 0.0015+0.0001=0.0016 \]
9 \( 54.523^{o}C \) \[ K_{9}= \frac{0.0016}{0.0016+0.01}=0.1348 \] \[ \hat{x}_{9,9}=~ 52.331+0.1348 \left( 54.523-52.331 \right) =52.626^{o}C \] \[ p_{9,9}= \left( 1-0.1348 \right) 0.0016=0.0014 \] \[ \hat{x}_{10,9}= \hat{x}_{9,9}=52.626^{o}C \] \[ p_{10,9}= 0.0014+0.0001=0.0015 \]
10 \( 54.99^{o}C \) \[ K_{10}= \frac{0.0015}{0.0015+0.01}=0.1265 \] \[ \hat{x}_{10,10}=~ 52.626+0.1265 \left( 54.99 -52.626 \right) =52.925^{o}C \] \[ p_{10,10}= \left( 1-0.1265 \right) 0.0015=0.0013 \] \[ \hat{x}_{11,10}= \hat{x}_{10,10}=52.925^{o}C \] \[ p_{11,10}= 0.0013+0.0001=0.0014 \]
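The table above can be reproduced with a short loop (a sketch in Python, using the same measurements and parameters; the table rounds intermediate values, so tiny differences are expected):

```python
measurements = [50.45, 50.967, 51.6, 52.106, 52.492,
                52.819, 53.433, 54.007, 54.523, 54.99]

r = 0.01             # measurement variance: (0.1 deg C) squared
q = 0.0001           # process noise variance
x, p = 10.0, 10000.0 # initialization

for z in measurements:
    p = p + q                  # covariance extrapolation
    K = p / (p + r)            # Kalman gain
    x = x + K * (z - x)        # state update
    p = (1 - K) * p            # covariance update

print(f"x_10,10 = {x:.3f}, K_10 = {K:.4f}, p_10,10 = {p:.4f}")
# The final estimate (~52.925) lags far behind the true 54.991 deg C
```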

The following chart compares the true value, measured values and estimates.

True value, measured values and estimates

As you can see, the Kalman Filter has failed to provide a trustworthy estimation. There is a lag error in the Kalman Filter estimation. We've already encountered the lag error in Example 3, where we estimated the position of an accelerating aircraft using the \( \alpha - \beta \) filter that assumes constant aircraft velocity. We got rid of the lag error in Example 4, where we replaced the \( \alpha - \beta \) filter with the \( \alpha -\beta -\gamma \) filter that takes acceleration into account.

There are two reasons that cause the lag error in our Kalman Filter example:

  • The dynamic model doesn't fit the case.
  • The process model reliability: we have chosen a very low process noise \( \left( q=0.0001 \right) \), while the real temperature fluctuations are much bigger.
Note: The lag error shall be constant, therefore the estimate curve shall have the same slope as the true-value curve. The figure above presents only the first 10 measurements, which is not enough for convergence. The figure below presents the first 100 measurements with the constant lag error.
True value, measured values and estimates

There are two possible ways to fix the lag error:

  • If we know that the liquid temperature can change linearly, we can define a new model that takes a possible linear change in the liquid temperature into account, as we did in Example 4. This is the preferred method. However, if the temperature change can't be modeled, this method won't improve the Kalman Filter performance.
  • On the other hand, since our model is not well defined, we can adjust the process model reliability by increasing the process noise \( \left( q \right) \). See the next example for details.

Example summary

In this example, we've measured the temperature of a heating liquid using the one-dimensional Kalman Filter with a constant dynamics model. We've observed a lag error in the Kalman Filter estimation, caused by a wrong dynamic model definition and a wrong process model definition.

The lag error can be fixed by appropriate definition of the dynamic model or process model.

Example 8 – Estimating the temperature of the heating liquid

This example is similar to the previous example with only one change. Since our process is not well defined, we will increase the process uncertainty \( \left( q \right) \) from 0.0001 to 0.15.

Iteration Zero

Before the first iteration, we must initialize the Kalman Filter and predict the next state (which is the first state).

Initialization

The initialization is similar to the previous example.

We don't know what the temperature of the liquid in the tank is, so our initial guess is 10\( ^{o}C \).

\[ \hat{x}_{0,0}=10^{o}C \]

Our guess is very imprecise, so we set the initialization estimate error ( \( \sigma \) ) to 100. The estimate uncertainty of the initialization is the error variance \( \left( \sigma ^{2} \right) \):

\[ p_{0,0}=100^{2}=10,000 \]

This variance is very high. If we initialize with a more meaningful value, we will get faster Kalman Filter convergence.

Prediction

Now, we shall predict the next state based on the initialization values.

Since our model assumes constant dynamics, the predicted estimate is equal to the current estimate:

\[ \hat{x}_{1,0}=10^{o}C \]

The extrapolated estimate uncertainty (variance):

\[ p_{1,0}= p_{0,0}+q=10000+ 0.15=10000.15 \]

Iterations 1-10

The calculations for the next iterations are summarized in the next table:

\( n \) \( z_{n} \) Current state estimates ( \( K_{n} \) , \( \hat{x}_{n,n} \) , \( p_{n,n} \) ) Prediction ( \( \hat{x}_{n+1,n} \) , \( p_{n+1,n} \) )
1 \( 50.45^{o}C \) \[ K_{1}= \frac{10000.15}{10000.15+0.01} = 0.999999 \] \[ \hat{x}_{1,1}=~ 10+0.999999 \left( 50.45-10 \right) =50.45^{o}C \] \[ p_{1,1}= \left( 1-0.999999 \right) 10000.15=0.01 \] \[ \hat{x}_{2,1}= \hat{x}_{1,1}=50.45^{o}C \] \[ p_{2,1}= 0.01+0.15=0.16 \]
2 \( 50.967^{o}C \) \[ K_{2}= \frac{0.16}{0.16+0.01}=0.9412 \] \[ \hat{x}_{2,2}=~ 50.45+0.9412 \left( 50.967-50.45 \right) =50.94^{o}C\] \[ p_{2,2}= \left( 1-0.9412 \right) 0.16=0.0094 \] \[ \hat{x}_{3,2}= \hat{x}_{2,2}=50.94^{o}C \] \[ p_{3,2}= 0.0094+0.15=0.1594 \]
3 \( 51.6^{o}C \) \[ K_{3}= \frac{0.1594}{0.1594+0.01}=0.941 \] \[ \hat{x}_{3,3}=~ 50.94+0.941 \left( 51.6-50.94 \right) =51.56^{o}C\] \[ p_{3,3}= \left( 1-0.941 \right) 0.1594=0.0094 \] \[ \hat{x}_{4,3}= \hat{x}_{3,3}=51.56^{o}C \] \[ p_{4,3}= 0.0094+0.15=0.1594 \]
4 \( 52.106^{o}C \) \[ K_{4}= \frac{0.1594}{0.1594+0.01}=0.941 \] \[ \hat{x}_{4,4}=~ 51.56+0.941 \left( 52.106-51.56 \right) =52.07^{o}C \] \[ p_{4,4}= \left( 1-0.941 \right) 0.1594=0.0094 \] \[ \hat{x}_{5,4}= \hat{x}_{4,4}=52.07^{o}C \] \[ p_{5,4}= 0.0094+0.15=0.1594 \]
5 \( 52.492^{o}C \) \[ K_{5}= \frac{0.1594}{0.1594+0.01}=0.941 \] \[ \hat{x}_{5,5}= 52.07+0.941 \left( 52.492-52.07 \right) =52.47^{o}C \] \[ p_{5,5}= \left( 1-0.941 \right) 0.1594=0.0094 \] \[ \hat{x}_{6,5}= \hat{x}_{5,5}=52.47^{o}C \] \[ p_{6,5}= 0.0094+0.15=0.1594 \]
6 \( 52.819^{o}C \) \[ K_{6}= \frac{0.1594}{0.1594+0.01}=0.941 \] \[ \hat{x}_{6,6}=~ 52.47+0.941 \left( 52.819-52.47 \right) =52.8^{o}C \] \[ p_{6,6}= \left( 1-0.941 \right) 0.1594=0.0094 \] \[ \hat{x}_{7,6}= \hat{x}_{6,6}=52.8^{o}C \] \[ p_{7,6}= 0.0094+0.15=0.1594 \]
7 \( 53.433^{o}C \) \[ K_{7}= \frac{0.1594}{0.1594+0.01}=0.941 \] \[ \hat{x}_{7,7}=~ 52.8+0.941 \left( 53.433-52.8 \right) =53.4^{o}C \] \[ p_{7,7}= \left( 1-0.941 \right) 0.1594=0.0094 \] \[ \hat{x}_{8,7}= \hat{x}_{7,7}=53.4^{o}C \] \[ p_{8,7}= 0.0094+0.15=0.1594 \]
8 \( 54.007^{o}C \) \[ K_{8}= \frac{0.1594}{0.1594+0.01}=0.941 \] \[ \hat{x}_{8,8}= 53.4+0.941 \left( 54.007-53.4 \right) =53.97^{o}C \] \[ p_{8,8}= \left( 1-0.941 \right) 0.1594=0.0094 \] \[ \hat{x}_{9,8}= \hat{x}_{8,8}=53.97^{o}C \] \[ p_{9,8}= 0.0094+0.15=0.1594 \]
9 \( 54.523^{o}C \) \[ K_{9}= \frac{0.1594}{0.1594+0.01}=0.941 \] \[ \hat{x}_{9,9}=~ 53.97+0.941 \left( 54.523-53.97 \right) =54.49^{o}C \] \[ p_{9,9}= \left( 1-0.941 \right) 0.1594=0.0094 \] \[ \hat{x}_{10,9}= \hat{x}_{9,9}=54.49^{o}C \] \[ p_{10,9}= 0.0094+0.15=0.1594 \]
10 \( 54.99^{o}C \) \[ K_{10}= \frac{0.1594}{0.1594+0.01}=0.941 \] \[ \hat{x}_{10,10}=~ 54.49+0.941 \left( 54.99 -54.49 \right) =54.96^{o}C \] \[ p_{10,10}= \left( 1-0.941 \right) 0.1594=0.0094 \] \[ \hat{x}_{11,10}= \hat{x}_{10,10}=54.96^{o}C \] \[ p_{11,10}= 0.0094+0.15=0.1594 \]
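As before, the table can be reproduced with the same short loop, changing only the process noise (a sketch in Python):

```python
measurements = [50.45, 50.967, 51.6, 52.106, 52.492,
                52.819, 53.433, 54.007, 54.523, 54.99]

r = 0.01             # measurement variance
q = 0.15             # increased process noise variance
x, p = 10.0, 10000.0 # initialization

for z in measurements:
    p = p + q                  # covariance extrapolation
    K = p / (p + r)            # Kalman gain
    x = x + K * (z - x)        # state update
    p = (1 - K) * p            # covariance update

print(f"x_10,10 = {x:.2f}, K_10 = {K:.3f}")
# The final estimate (~54.96) tracks the true 54.991 deg C: no lag error
```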

The following chart compares the true value, measured values and estimates.

True value, measured values and estimates

As you can see, the estimates are following the measurements. There is no lag error.

The next chart shows the Kalman Gain.

The Kalman Gain

Due to the high process uncertainty, the measurement weight is much higher than the estimate weight, thus the Kalman Gain is high, converging to 0.94.

Example summary

We can get rid of the lag error by setting a high process uncertainty. However, since our model is not well defined, we get noisy estimates that are almost equal to the measurements, and we miss the goal of the Kalman Filter.

The best Kalman Filter implementation shall involve a model that is very close to reality, leaving little room for the process noise. However, a precise model is not always available; for example, an airplane pilot may decide to perform a sudden maneuver that changes the predicted airplane trajectory. In such cases, the process noise shall be increased.
