# Examples

This is the final part of the Multidimensional Kalman Filter chapter.

It includes two numerical examples. In the first example, we design a six-dimensional Kalman Filter without a control input. In the second example, we design a two-dimensional Kalman Filter with a control input.

## Example 9 – vehicle location estimation

In the following example, we implement the Multidimensional Kalman Filter using the material we've learned.

In this example, we would like to estimate the vehicle's location on the $$XY$$ plane.

The vehicle has an onboard location sensor that reports the $$X$$ and $$Y$$ coordinates of the vehicle.

We assume constant acceleration dynamics.

### The state extrapolation equation

First, we derive the state extrapolation equation. As you remember, the general form of the state extrapolation equation in matrix notation is:

$\boldsymbol{\hat{x}}_{n+1,n}=\boldsymbol{F\hat{x}}_{n,n} + \boldsymbol{Gu}_{n} + \boldsymbol{w}_{n}$
Where:

• $$\boldsymbol{\hat{x}}_{n+1,n}$$ is the predicted system state vector at time step $$n + 1$$
• $$\boldsymbol{\hat{x}}_{n,n}$$ is the estimated system state vector at time step $$n$$
• $$\boldsymbol{u}_{n}$$ is the control variable
• $$\boldsymbol{w}_{n}$$ is the process noise
• $$\boldsymbol{F}$$ is the state transition matrix
• $$\boldsymbol{G}$$ is the control matrix

In this example, there is no control variable $$\boldsymbol{u}$$ since there is no control input.

For this example, the state extrapolation equation can be simplified to:

$\boldsymbol{\hat{x}}_{n+1,n} = \boldsymbol{F\hat{x}}_{n,n}$

The system state $$\boldsymbol{x}_{n}$$ is defined by:

$\boldsymbol{x}_{n}= \left[ \begin{matrix} x_{n}\\ \dot{x}_{n}\\ \ddot{x}_{n}\\ y_{n}\\ \dot{y}_{n}\\ \ddot{y}_{n}\\ \end{matrix} \right]$

The extrapolated vehicle state for time $$n + 1$$ can be described by the following system of equations:

$\begin{cases} \hat{x}_{n+1,n} = \hat{x}_{n,n} + \hat{\dot{x}}_{n,n} \Delta t+ \frac{1}{2}\hat{\ddot{x}}_{n,n}\Delta t^{2}\\ \hat{\dot{x}}_{n+1,n} = \hat{\dot{x}}_{n,n} + \hat{\ddot{x}}_{n,n}\Delta t\\ \hat{\ddot{x}}_{n+1,n} = \hat{\ddot{x}}_{n,n}\\ \hat{y}_{n+1,n} = \hat{y}_{n,n} + \hat{\dot{y}}_{n,n} \Delta t+ \frac{1}{2}\hat{\ddot{y}}_{n,n}\Delta t^{2}\\ \hat{\dot{y}}_{n+1,n} = \hat{\dot{y}}_{n,n} + \hat{\ddot{y}}_{n,n}\Delta t\\ \hat{\ddot{y}}_{n+1,n} = \hat{\ddot{y}}_{n,n} \end{cases}$

In matrix form:

$\left[ \begin{matrix} \hat{x}_{n+1,n}\\ \hat{\dot{x}}_{n+1,n}\\ \hat{\ddot{x}}_{n+1,n}\\ \hat{y}_{n+1,n}\\ \hat{\dot{y}}_{n+1,n}\\ \hat{\ddot{y}}_{n+1,n}\\ \end{matrix} \right] = \left[ \begin{matrix} 1 & \Delta t & 0.5\Delta t^{2} & 0 & 0 & 0\\ 0 & 1 & \Delta t & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & \Delta t & 0.5\Delta t^{2}\\ 0 & 0 & 0 & 0 & 1 & \Delta t\\ 0 & 0 & 0 & 0 & 0 & 1\\ \end{matrix} \right] \left[ \begin{matrix} \hat{x}_{n,n}\\ \hat{\dot{x}}_{n,n}\\ \hat{\ddot{x}}_{n,n}\\ \hat{y}_{n,n}\\ \hat{\dot{y}}_{n,n}\\ \hat{\ddot{y}}_{n,n}\\ \end{matrix} \right]$
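With $$\Delta t = 1s$$ (the value used in the numerical example below), the state extrapolation can be sketched in Python with NumPy; the state values here are hypothetical, chosen only to check the matrix against the system of equations above:

```python
import numpy as np

dt = 1.0  # time between samples (1 s, as in the numerical example below)

# State transition matrix for the 6-state constant-acceleration model
# with state ordering (x, x_dot, x_ddot, y, y_dot, y_ddot)
F = np.array([
    [1, dt, 0.5 * dt**2, 0, 0, 0],
    [0, 1,  dt,          0, 0, 0],
    [0, 0,  1,           0, 0, 0],
    [0, 0,  0, 1, dt, 0.5 * dt**2],
    [0, 0,  0, 0, 1,  dt],
    [0, 0,  0, 0, 0,  1],
])

x = np.array([100.0, 10.0, 2.0, 50.0, -5.0, 1.0])  # hypothetical state
x_pred = F @ x  # e.g. x: 100 + 10*1 + 0.5*2*1^2 = 111
```

Each component of `x_pred` matches the corresponding line of the system of equations above.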

### The covariance extrapolation equation

The general form of the Covariance Extrapolation Equation is given by:

$\boldsymbol{P}_{n+1,n} = \boldsymbol{FP}_{n,n}\boldsymbol{F}^{T} + \boldsymbol{Q}$
Where:

• $$\boldsymbol{P}_{n,n}$$ is the uncertainty (covariance) matrix of the current state estimation
• $$\boldsymbol{P}_{n+1,n}$$ is the uncertainty (covariance) matrix of the next state estimation (prediction)
• $$\boldsymbol{F}$$ is the state transition matrix that we derived in the "Modeling linear dynamic systems" section
• $$\boldsymbol{Q}$$ is the process noise matrix

The estimate uncertainty in matrix form is:

$\boldsymbol{P} = \left[ \begin{matrix} p_{x} & p_{x\dot{x}} & p_{x\ddot{x}} & p_{xy} & p_{x\dot{y}} & p_{x\ddot{y}} \\ p_{\dot{x}x} & p_{\dot{x}} & p_{\dot{x}\ddot{x}} & p_{\dot{x}y} & p_{\dot{x}\dot{y}} & p_{\dot{x}\ddot{y}} \\ p_{\ddot{x}x} & p_{\ddot{x}\dot{x}} & p_{\ddot{x}} & p_{\ddot{x}y} & p_{\ddot{x}\dot{y}} & p_{\ddot{x}\ddot{y}} \\ p_{yx} & p_{y\dot{x}} & p_{y\ddot{x}} & p_{y} & p_{y\dot{y}} & p_{y\ddot{y}} \\ p_{\dot{y}x} & p_{\dot{y}\dot{x}} & p_{\dot{y}\ddot{x}} & p_{\dot{y}y} & p_{\dot{y}} & p_{\dot{y}\ddot{y}} \\ p_{\ddot{y}x} & p_{\ddot{y}\dot{x}} & p_{\ddot{y}\ddot{x}} & p_{\ddot{y}y} & p_{\ddot{y}\dot{y}} & p_{\ddot{y}} \\ \end{matrix} \right]$

The elements on the main diagonal of the matrix are the variances of the estimation:

• $$p_{x}$$ is the variance of the $$X$$ coordinate position estimation
• $$p_{\dot{x}}$$ is the variance of the $$X$$ coordinate velocity estimation
• $$p_{\ddot{x}}$$ is the variance of the $$X$$ coordinate acceleration estimation
• $$p_{y}$$ is the variance of the $$Y$$ coordinate position estimation
• $$p_{\dot{y}}$$ is the variance of the $$Y$$ coordinate velocity estimation
• $$p_{\ddot{y}}$$ is the variance of the $$Y$$ coordinate acceleration estimation
• The off-diagonal entries are covariances

We assume that the estimation errors in $$X$$ and $$Y$$ axes are not correlated so that the mutual terms can be set to zero.

$\boldsymbol{P} = \left[ \begin{matrix} p_{x} & p_{x\dot{x}} & p_{x\ddot{x}} & 0 & 0 & 0 \\ p_{\dot{x}x} & p_{\dot{x}} & p_{\dot{x}\ddot{x}} & 0 & 0 & 0 \\ p_{\ddot{x}x} & p_{\ddot{x}\dot{x}} & p_{\ddot{x}} & 0 & 0 & 0 \\ 0 & 0 & 0 & p_{y} & p_{y\dot{y}} & p_{y\ddot{y}} \\ 0 & 0 & 0 & p_{\dot{y}y} & p_{\dot{y}} & p_{\dot{y}\ddot{y}} \\ 0 & 0 & 0 & p_{\ddot{y}y} & p_{\ddot{y}\dot{y}} & p_{\ddot{y}} \\ \end{matrix} \right]$

We have already derived the state transition matrix $$\boldsymbol{F}$$. Now we shall derive the process noise $$\boldsymbol{Q}$$ matrix.

#### The process noise matrix

We assume a discrete noise model - the noise is different at each time sample but is constant between time samples.

The process noise matrix for the two-dimensional constant acceleration model looks as follows:

$\boldsymbol{Q} = \left[ \begin{matrix} \sigma_{x}^{2} & \sigma_{x\dot{x}}^{2} & \sigma_{x\ddot{x}}^{2} & \sigma_{xy}^{2} & \sigma_{x\dot{y}}^{2} & \sigma_{x\ddot{y}}^{2} \\ \sigma_{\dot{x}x}^{2} & \sigma_{\dot{x}}^{2} & \sigma_{\dot{x}\ddot{x}}^{2} & \sigma_{\dot{x}y}^{2} & \sigma_{\dot{x}\dot{y}}^{2} & \sigma_{\dot{x}\ddot{y}}^{2} \\ \sigma_{\ddot{x}x}^{2} & \sigma_{\ddot{x}\dot{x}}^{2} & \sigma_{\ddot{x}}^{2} & \sigma_{\ddot{x}y}^{2} & \sigma_{\ddot{x}\dot{y}}^{2} & \sigma_{\ddot{x}\ddot{y}}^{2} \\ \sigma_{yx}^{2} & \sigma_{y\dot{x}}^{2} & \sigma_{y\ddot{x}}^{2} & \sigma_{y}^{2} & \sigma_{y\dot{y}}^{2} & \sigma_{y\ddot{y}}^{2} \\ \sigma_{\dot{y}x}^{2} & \sigma_{\dot{y}\dot{x}}^{2} & \sigma_{\dot{y}\ddot{x}}^{2} & \sigma_{\dot{y}y}^{2} & \sigma_{\dot{y}}^{2} & \sigma_{\dot{y}\ddot{y}}^{2} \\ \sigma_{\ddot{y}x}^{2} & \sigma_{\ddot{y}\dot{x}}^{2} & \sigma_{\ddot{y}\ddot{x}}^{2} & \sigma_{\ddot{y}y}^{2} & \sigma_{\ddot{y}\dot{y}}^{2} & \sigma_{\ddot{y}}^{2} \\ \end{matrix} \right]$

We assume that the process noise in $$X$$ and $$Y$$ axes is not correlated so that the mutual terms can be set to zero.

$\boldsymbol{Q} = \left[ \begin{matrix} \sigma_{x}^{2} & \sigma_{x\dot{x}}^{2} & \sigma_{x\ddot{x}}^{2} & 0 & 0 & 0 \\ \sigma_{\dot{x}x}^{2} & \sigma_{\dot{x}}^{2} & \sigma_{\dot{x}\ddot{x}}^{2} & 0 & 0 & 0 \\ \sigma_{\ddot{x}x}^{2} & \sigma_{\ddot{x}\dot{x}}^{2} & \sigma_{\ddot{x}}^{2} & 0 & 0 & 0 \\ 0 & 0 & 0 & \sigma_{y}^{2} & \sigma_{y\dot{y}}^{2} & \sigma_{y\ddot{y}}^{2} \\ 0 & 0 & 0 & \sigma_{\dot{y}y}^{2} & \sigma_{\dot{y}}^{2} & \sigma_{\dot{y}\ddot{y}}^{2} \\ 0 & 0 & 0 & \sigma_{\ddot{y}y}^{2} & \sigma_{\ddot{y}\dot{y}}^{2} & \sigma_{\ddot{y}}^{2} \\ \end{matrix} \right]$

We've already derived the Q matrix for the constant acceleration motion model. The Q matrix for our example is:

$\boldsymbol{Q} = \left[ \begin{matrix} \frac{\Delta t^{4}}{4} & \frac{\Delta t^{3}}{2} & \frac{\Delta t^{2}}{2} & 0 & 0 & 0 \\ \frac{\Delta t^{3}}{2} & \Delta t^{2} & \Delta t & 0 & 0 & 0 \\ \frac{\Delta t^{2}}{2} & \Delta t & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{\Delta t^{4}}{4} & \frac{\Delta t^{3}}{2} & \frac{\Delta t^{2}}{2} \\ 0 & 0 & 0 & \frac{\Delta t^{3}}{2} & \Delta t^{2} & \Delta t \\ 0 & 0 & 0 & \frac{\Delta t^{2}}{2} & \Delta t & 1 \\ \end{matrix} \right] \sigma_{a}^{2}$
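The block-diagonal structure of $$\boldsymbol{Q}$$ can be built programmatically; this is a sketch assuming $$\Delta t = 1s$$ and $$\sigma_{a} = 0.2\frac{m}{s^{2}}$$, the values used in the numerical example below:

```python
import numpy as np

dt, sigma_a = 1.0, 0.2  # values from the numerical example below

# One-axis process noise block for the constant-acceleration model
q = np.array([
    [dt**4 / 4, dt**3 / 2, dt**2 / 2],
    [dt**3 / 2, dt**2,     dt],
    [dt**2 / 2, dt,        1],
])

# Block-diagonal: process noise in X and Y is uncorrelated
Q = np.block([
    [q, np.zeros((3, 3))],
    [np.zeros((3, 3)), q],
]) * sigma_a**2
```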

Where:

• $$\Delta t$$ is the time between successive measurements
• $$\sigma_{a}^{2}$$ is the variance of the random acceleration

Now we can write down the covariance extrapolation equation for our example:

$\boldsymbol{P}_{n+1,n} = \boldsymbol{FP}_{n,n}\boldsymbol{F}^{T} + \boldsymbol{Q}$

$\left[ \begin{matrix} p_{x_{n+1,n}} & p_{x\dot{x}_{n+1,n}} & p_{x\ddot{x}_{n+1,n}} & 0 & 0 & 0 \\ p_{\dot{x}x_{n+1,n}} & p_{\dot{x}_{n+1,n}} & p_{\dot{x}\ddot{x}_{n+1,n}} & 0 & 0 & 0 \\ p_{\ddot{x}x_{n+1,n}} & p_{\ddot{x}\dot{x}_{n+1,n}} & p_{\ddot{x}_{n+1,n}} & 0 & 0 & 0 \\ 0 & 0 & 0 & p_{y_{n+1,n}} & p_{y\dot{y}_{n+1,n}} & p_{y\ddot{y}_{n+1,n}} \\ 0 & 0 & 0 & p_{\dot{y}y_{n+1,n}} & p_{\dot{y}_{n+1,n}} & p_{\dot{y}\ddot{y}_{n+1,n}} \\ 0 & 0 & 0 & p_{\ddot{y}y_{n+1,n}} & p_{\ddot{y}\dot{y}_{n+1,n}} & p_{\ddot{y}_{n+1,n}} \\ \end{matrix} \right] = \left[ \begin{matrix} 1 & \Delta t & 0.5\Delta t^{2} & 0 & 0 & 0\\ 0 & 1 & \Delta t & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & \Delta t & 0.5\Delta t^{2}\\ 0 & 0 & 0 & 0 & 1 & \Delta t\\ 0 & 0 & 0 & 0 & 0 & 1\\ \end{matrix} \right] \times \left[ \begin{matrix} p_{x_{n,n}} & p_{x\dot{x}_{n,n}} & p_{x\ddot{x}_{n,n}} & 0 & 0 & 0 \\ p_{\dot{x}x_{n,n}} & p_{\dot{x}_{n,n}} & p_{\dot{x}\ddot{x}_{n,n}} & 0 & 0 & 0 \\ p_{\ddot{x}x_{n,n}} & p_{\ddot{x}\dot{x}_{n,n}} & p_{\ddot{x}_{n,n}} & 0 & 0 & 0 \\ 0 & 0 & 0 & p_{y_{n,n}} & p_{y\dot{y}_{n,n}} & p_{y\ddot{y}_{n,n}} \\ 0 & 0 & 0 & p_{\dot{y}y_{n,n}} & p_{\dot{y}_{n,n}} & p_{\dot{y}\ddot{y}_{n,n}} \\ 0 & 0 & 0 & p_{\ddot{y}y_{n,n}} & p_{\ddot{y}\dot{y}_{n,n}} & p_{\ddot{y}_{n,n}} \\ \end{matrix} \right] \times \left[ \begin{matrix} 1 & 0 & 0 & 0 & 0 & 0\\ \Delta t & 1 & 0 & 0 & 0 & 0\\ 0.5\Delta t^{2} & \Delta t & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & \Delta t & 1 & 0\\ 0 & 0 & 0 & 0.5\Delta t^{2} & \Delta t & 1\\ \end{matrix} \right] + \left[ \begin{matrix} \frac{\Delta t^{4}}{4} & \frac{\Delta t^{3}}{2} & \frac{\Delta t^{2}}{2} & 0 & 0 & 0 \\ \frac{\Delta t^{3}}{2} & \Delta t^{2} & \Delta t & 0 & 0 & 0 \\ \frac{\Delta t^{2}}{2} & \Delta t & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{\Delta t^{4}}{4} & \frac{\Delta t^{3}}{2} & \frac{\Delta t^{2}}{2} \\ 0 & 0 & 0 & \frac{\Delta t^{3}}{2} & \Delta t^{2} & \Delta t \\ 0 & 0 & 0 & \frac{\Delta t^{2}}{2} & \Delta t & 1 \\ \end{matrix} \right] \sigma_{a}^{2}$

### The measurement equation

The generalized measurement equation in the matrix form is given by:

$\boldsymbol{z}_{n} = \boldsymbol{Hx}_{n} + \boldsymbol{v}_{n}$
Where:

• $$\boldsymbol{z}_{n}$$ is the measurement vector
• $$\boldsymbol{x}_{n}$$ is the true system state (hidden state)
• $$\boldsymbol{v}_{n}$$ is the random measurement noise vector
• $$\boldsymbol{H}$$ is the observation matrix

The measurement provides only the $$X$$ and $$Y$$ coordinates of the vehicle: $$\boldsymbol{z}_{n} = \left[ \begin{matrix} x_{n,measured}\\ y_{n,measured}\\ \end{matrix}\right]$$

$\boldsymbol{z}_{n} = \boldsymbol{Hx}_{n}$

$\left[ \begin{matrix} x_{n,measured}\\ y_{n,measured}\\ \end{matrix}\right] = \boldsymbol{H} \left[ \begin{matrix} x_{n}\\ \dot{x}_{n} \\ \ddot{x}_{n}\\ y_{n} \\ \dot{y}_{n} \\ \ddot{y}_{n}\\ \end{matrix} \right]$

The dimension of $$\boldsymbol{z}_{n}$$ is $$2 \times 1$$ and the dimension of $$\boldsymbol{x}_{n}$$ is $$6 \times 1$$. Therefore the dimension of the observation matrix $$\boldsymbol{H}$$ shall be $$2 \times 6$$.

$\boldsymbol{H} = \left[ \begin{matrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ \end{matrix} \right]$
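A quick sketch shows how $$\boldsymbol{H}$$ picks the position components out of the state vector; the state values are hypothetical:

```python
import numpy as np

H = np.zeros((2, 6))
H[0, 0] = 1.0  # selects the X position from the state vector
H[1, 3] = 1.0  # selects the Y position

x = np.array([-393.66, 1.0, 0.1, 300.4, 2.0, 0.2])  # hypothetical state
z = H @ x  # only the two position components survive
```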

### The measurement uncertainty

The measurement covariance matrix is:

$\boldsymbol{R}_{n} = \left[ \begin{matrix} \sigma_{x_{m}}^{2} & \sigma_{yx_{m}}^{2} \\ \sigma_{xy_{m}}^{2} & \sigma_{y_{m}}^{2} \\ \end{matrix} \right]$

The subscript $$m$$ is for measurement uncertainty.

Assume that the $$x$$ and $$y$$ measurements are uncorrelated, i.e., the error in the $$x$$ coordinate measurement doesn't depend on the error in the $$y$$ coordinate measurement.

$\boldsymbol{R}_{n} = \left[ \begin{matrix} \sigma_{x_{m}}^{2} & 0 \\ 0 & \sigma_{y_{m}}^{2} \\ \end{matrix} \right]$

In real-life applications, the measurement uncertainty can differ between measurements. In many systems, the measurement uncertainty depends on the measurement SNR (signal-to-noise ratio), the angle between the sensor (or sensors) and target, signal frequency, and many other parameters.

For simplicity, we assume a constant measurement uncertainty:

$\boldsymbol{R}_{1} = \boldsymbol{R}_{2} = \ldots = \boldsymbol{R}_{n-1} = \boldsymbol{R}_{n} = \boldsymbol{R}$

### The Kalman Gain

The Kalman Gain in matrix notation is given by:

$\boldsymbol{K}_{n} = \boldsymbol{P}_{n,n-1}\boldsymbol{H}^{T}\left(\boldsymbol{HP}_{n,n-1}\boldsymbol{H}^{T} + \boldsymbol{R}_{n} \right)^{-1}$
Where:

• $$\boldsymbol{K}_{n}$$ is the Kalman Gain
• $$\boldsymbol{P}_{n,n-1}$$ is the prior estimate uncertainty (covariance) matrix of the current state (predicted at the previous step)
• $$\boldsymbol{H}$$ is the observation matrix
• $$\boldsymbol{R}_{n}$$ is the measurement uncertainty (measurement noise covariance matrix)

We have already derived all the building blocks of the Kalman Gain:

$\left[ \begin{matrix} K_{1,1_{n}} & K_{1,2_{n}} \\ K_{2,1_{n}} & K_{2,2_{n}} \\ K_{3,1_{n}} & K_{3,2_{n}} \\ K_{4,1_{n}} & K_{4,2_{n}} \\ K_{5,1_{n}} & K_{5,2_{n}} \\ K_{6,1_{n}} & K_{6,2_{n}} \\ \end{matrix} \right] = \left[ \begin{matrix} p_{x_{n,n-1}} & p_{x\dot{x}_{n,n-1}} & p_{x\ddot{x}_{n,n-1}} & 0 & 0 & 0 \\ p_{\dot{x}x_{n,n-1}} & p_{\dot{x}_{n,n-1}} & p_{\dot{x}\ddot{x}_{n,n-1}} & 0 & 0 & 0 \\ p_{\ddot{x}x_{n,n-1}} & p_{\ddot{x}\dot{x}_{n,n-1}} & p_{\ddot{x}_{n,n-1}} & 0 & 0 & 0 \\ 0 & 0 & 0 & p_{y_{n,n-1}} & p_{y\dot{y}_{n,n-1}} & p_{y\ddot{y}_{n,n-1}} \\ 0 & 0 & 0 & p_{\dot{y}y_{n,n-1}} & p_{\dot{y}_{n,n-1}} & p_{\dot{y}\ddot{y}_{n,n-1}} \\ 0 & 0 & 0 & p_{\ddot{y}y_{n,n-1}} & p_{\ddot{y}\dot{y}_{n,n-1}} & p_{\ddot{y}_{n,n-1}} \\ \end{matrix} \right] \left[ \begin{matrix} 1 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 1 \\ 0 & 0 \\ 0 & 0 \\ \end{matrix} \right] \times \left( \left[ \begin{matrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ \end{matrix}\right] \left[ \begin{matrix} p_{x_{n,n-1}} & p_{x\dot{x}_{n,n-1}} & p_{x\ddot{x}_{n,n-1}} & 0 & 0 & 0 \\ p_{\dot{x}x_{n,n-1}} & p_{\dot{x}_{n,n-1}} & p_{\dot{x}\ddot{x}_{n,n-1}} & 0 & 0 & 0 \\ p_{\ddot{x}x_{n,n-1}} & p_{\ddot{x}\dot{x}_{n,n-1}} & p_{\ddot{x}_{n,n-1}} & 0 & 0 & 0 \\ 0 & 0 & 0 & p_{y_{n,n-1}} & p_{y\dot{y}_{n,n-1}} & p_{y\ddot{y}_{n,n-1}} \\ 0 & 0 & 0 & p_{\dot{y}y_{n,n-1}} & p_{\dot{y}_{n,n-1}} & p_{\dot{y}\ddot{y}_{n,n-1}} \\ 0 & 0 & 0 & p_{\ddot{y}y_{n,n-1}} & p_{\ddot{y}\dot{y}_{n,n-1}} & p_{\ddot{y}_{n,n-1}} \\ \end{matrix} \right] \left[ \begin{matrix} 1 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 1 \\ 0 & 0 \\ 0 & 0 \\ \end{matrix} \right] + \left[ \begin{matrix} \sigma_{x_{m}}^{2} & 0 \\ 0 & \sigma_{y_{m}}^{2} \\ \end{matrix} \right] \right)^{-1}$

### The state update equation

The State Update Equation in matrix form is given by:

$\boldsymbol{\hat{x}}_{n,n} = \boldsymbol{\hat{x}}_{n,n-1} + \boldsymbol{K}_{n} ( \boldsymbol{z}_{n} - \boldsymbol{H \hat{x}}_{n,n-1} )$
Where:

• $$\boldsymbol{\hat{x}}_{n,n}$$ is the estimated system state vector at time step $$n$$
• $$\boldsymbol{\hat{x}}_{n,n-1}$$ is the predicted system state vector at time step $$n$$ (predicted at time step $$n - 1$$)
• $$\boldsymbol{K}_{n}$$ is the Kalman Gain
• $$\boldsymbol{z}_{n}$$ is the measurement
• $$\boldsymbol{H}$$ is the observation matrix

We have already defined all the building blocks of the state update equation.

### The covariance update equation

The Covariance Update Equation is given by:

$\boldsymbol{P}_{n,n} = \left(\boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right) \boldsymbol{P}_{n,n-1} \left(\boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right)^{T} + \boldsymbol{K}_{n}\boldsymbol{R}_{n}\boldsymbol{K}_{n}^{T}$
Where:

• $$\boldsymbol{P}_{n,n}$$ is the estimate uncertainty (covariance) matrix of the current state
• $$\boldsymbol{P}_{n,n-1}$$ is the prior estimate uncertainty (covariance) matrix of the current state (predicted at the previous step)
• $$\boldsymbol{K}_{n}$$ is the Kalman Gain
• $$\boldsymbol{H}$$ is the observation matrix
• $$\boldsymbol{R}_{n}$$ is the measurement uncertainty (measurement noise covariance matrix)

We have already defined all the building blocks of the covariance update equation.
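The update formula above (the so-called Joseph form) can be wrapped in a small helper; the 2-state numbers below are hypothetical and serve only to show that the result stays symmetric:

```python
import numpy as np

def joseph_update(P_prior, K, H, R):
    """Covariance update in Joseph form, which preserves the symmetry and
    positive semi-definiteness of P even in the presence of rounding errors."""
    I = np.eye(P_prior.shape[0])
    I_KH = I - K @ H
    return I_KH @ P_prior @ I_KH.T + K @ R @ K.T

# Tiny 2-state illustration with hypothetical numbers
P_prior = np.diag([10.0, 10.0])
H = np.array([[1.0, 0.0]])   # only the first state is measured
R = np.array([[5.0]])
K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)
P_post = joseph_update(P_prior, K, H, R)
```

Note that only the measured state's variance shrinks; the unmeasured state keeps its prior uncertainty.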

### The numerical example

Now we are ready to solve the numerical example. Let us assume a vehicle moving in a straight line in the $$X$$ direction with a constant velocity. After traveling 400 meters the vehicle turns right, with a turning radius of 300 meters. During the turning maneuver, the vehicle experiences acceleration due to the circular motion (angular acceleration).

The following chart depicts the vehicle movement.

• The measurements period: $$\Delta t = 1s$$
• The random acceleration standard deviation: $$\sigma_{a} = 0.2\frac{m}{s^{2}}$$
• The measurement error standard deviation: $$\sigma_{x_{m}} = \sigma_{y_{m}} = 3m$$
• The state transition matrix $$\boldsymbol{F}$$ would be:

$\boldsymbol{F} = \left[ \begin{matrix} 1 & \Delta t & 0.5\Delta t^{2} & 0 & 0 & 0\\ 0 & 1 & \Delta t & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & \Delta t & 0.5\Delta t^{2}\\ 0 & 0 & 0 & 0 & 1 & \Delta t\\ 0 & 0 & 0 & 0 & 0 & 1\\ \end{matrix} \right] = \left[ \begin{matrix} 1 & 1 & 0.5 & 0 & 0 & 0\\ 0 & 1 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 1 & 0.5\\ 0 & 0 & 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 0 & 0 & 1\\ \end{matrix} \right]$

• The process noise matrix $$\boldsymbol{Q}$$ would be:

$\boldsymbol{Q} = \left[ \begin{matrix} \frac{\Delta t^{4}}{4} & \frac{\Delta t^{3}}{2} & \frac{\Delta t^{2}}{2} & 0 & 0 & 0 \\ \frac{\Delta t^{3}}{2} & \Delta t^{2} & \Delta t & 0 & 0 & 0 \\ \frac{\Delta t^{2}}{2} & \Delta t & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{\Delta t^{4}}{4} & \frac{\Delta t^{3}}{2} & \frac{\Delta t^{2}}{2} \\ 0 & 0 & 0 & \frac{\Delta t^{3}}{2} & \Delta t^{2} & \Delta t \\ 0 & 0 & 0 & \frac{\Delta t^{2}}{2} & \Delta t & 1 \\ \end{matrix} \right] \sigma_{a}^{2} = \left[ \begin{matrix} \frac{1}{4} & \frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 \\ \frac{1}{2} & 1 & 1 & 0 & 0 & 0 \\ \frac{1}{2} & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{1}{4} & \frac{1}{2} & \frac{1}{2} \\ 0 & 0 & 0 & \frac{1}{2} & 1 & 1 \\ 0 & 0 & 0 & \frac{1}{2} & 1 & 1 \\ \end{matrix} \right] 0.2^{2}$

• The measurement uncertainty $$\boldsymbol{R}$$ would be:

$\boldsymbol{R}_{n} = \left[ \begin{matrix} \sigma_{x_{m}}^{2} & 0 \\ 0 & \sigma_{y_{m}}^{2} \\ \end{matrix} \right] = \left[ \begin{matrix} 9 & 0 \\ 0 & 9 \\ \end{matrix} \right]$

The following table contains the set of 35 noisy measurements:

| $$n$$ | $$x$$ (m) | $$y$$ (m) |
| --- | --- | --- |
| 1 | -393.66 | 300.4 |
| 2 | -375.93 | 301.78 |
| 3 | -351.04 | 295.1 |
| 4 | -328.96 | 305.19 |
| 5 | -299.35 | 301.06 |
| 6 | -273.36 | 302.05 |
| 7 | -245.89 | 300 |
| 8 | -222.58 | 303.57 |
| 9 | -198.03 | 296.33 |
| 10 | -174.17 | 297.65 |
| 11 | -146.32 | 297.41 |
| 12 | -123.72 | 299.61 |
| 13 | -103.47 | 299.6 |
| 14 | -78.23 | 302.39 |
| 15 | -52.63 | 295.04 |
| 16 | -23.34 | 300.09 |
| 17 | 25.96 | 294.72 |
| 18 | 49.72 | 298.61 |
| 19 | 76.94 | 294.64 |
| 20 | 95.38 | 284.88 |
| 21 | 119.83 | 272.82 |
| 22 | 144.01 | 264.93 |
| 23 | 161.84 | 251.46 |
| 24 | 180.56 | 241.27 |
| 25 | 201.42 | 222.98 |
| 26 | 222.62 | 203.73 |
| 27 | 239.4 | 184.1 |
| 28 | 252.51 | 166.12 |
| 29 | 266.26 | 138.71 |
| 30 | 271.75 | 119.71 |
| 31 | 277.4 | 100.41 |
| 32 | 294.12 | 79.76 |
| 33 | 301.23 | 50.62 |
| 34 | 291.8 | 32.99 |
| 35 | 299.89 | 2.14 |

#### Iteration Zero

##### Initialization

We don't know the vehicle location; we set the initial position, velocity, and acceleration to 0.

$\boldsymbol{\hat{x}}_{0,0}= \left[ \begin{matrix} 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ \end{matrix} \right]$

Since our initial state vector is a guess, we set a very high estimate uncertainty. A high estimate uncertainty yields a high Kalman Gain, which gives a high weight to the measurement.

$\boldsymbol{P}_{0,0}= \left[ \begin{matrix} 500 & 0 & 0 & 0 & 0 & 0 \\ 0 & 500 & 0 & 0 & 0 & 0 \\ 0 & 0 & 500 & 0 & 0 & 0 \\ 0 & 0 & 0 & 500 & 0 & 0 \\ 0 & 0 & 0 & 0 & 500 & 0 \\ 0 & 0 & 0 & 0 & 0 & 500 \\ \end{matrix} \right]$

##### Prediction

Now we can predict the next state based on the initialization values.

$\boldsymbol{\hat{x}}_{1,0} = \boldsymbol{F\hat{x}}_{0,0} = \left[ \begin{matrix} 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ \end{matrix} \right]$

$\boldsymbol{P}_{1,0} = \boldsymbol{FP}_{0,0}\boldsymbol{F}^{T} + \boldsymbol{Q} = \left[ \begin{matrix} 1125 & 750 & 250 & 0 & 0 & 0 \\ 750 & 1000 & 500 & 0 & 0 & 0 \\ 250 & 500 & 500 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1125 & 750 & 250 \\ 0 & 0 & 0 & 750 & 1000 & 500 \\ 0 & 0 & 0 & 250 & 500 & 500 \\ \end{matrix} \right]$
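As a sanity check, $$\boldsymbol{P}_{1,0}$$ can be reproduced numerically; the contribution of $$\boldsymbol{Q}$$ is at most 0.04 here, which disappears in the rounding above:

```python
import numpy as np

dt, sigma_a = 1.0, 0.2

F = np.array([
    [1, dt, 0.5 * dt**2, 0, 0, 0],
    [0, 1, dt, 0, 0, 0],
    [0, 0, 1, 0, 0, 0],
    [0, 0, 0, 1, dt, 0.5 * dt**2],
    [0, 0, 0, 0, 1, dt],
    [0, 0, 0, 0, 0, 1],
])
q = np.array([
    [dt**4 / 4, dt**3 / 2, dt**2 / 2],
    [dt**3 / 2, dt**2, dt],
    [dt**2 / 2, dt, 1],
])
Q = np.block([[q, np.zeros((3, 3))], [np.zeros((3, 3)), q]]) * sigma_a**2

P00 = np.eye(6) * 500.0          # initialization
P10 = F @ P00 @ F.T + Q          # covariance extrapolation
```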

#### First Iteration

##### Step 1 - Measure

The measurement values:

$\boldsymbol{z}_{1} = \left[ \begin{matrix} -393.66 \\ 300.4 \\ \end{matrix} \right]$

##### Step 2 - Update

The Kalman Gain calculation:

$\boldsymbol{K}_{1} = \boldsymbol{P}_{1,0}\boldsymbol{H}^{T}\left(\boldsymbol{HP}_{1,0}\boldsymbol{H}^{T} + \boldsymbol{R} \right)^{-1} = \left[ \begin{matrix} 0.9921 & 0 \\ 0.6614 & 0 \\ 0.2205 & 0 \\ 0 & 0.9921 \\ 0 & 0.6614 \\ 0 & 0.2205 \\ \end{matrix} \right]$

As you can see, the Kalman Gain for the position is 0.9921, which means that the weight of the first measurement is much higher than the weight of the estimate; the estimate's weight is negligible.
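The gain computation can be verified numerically with the $$\boldsymbol{P}_{1,0}$$ matrix from the prediction step (a sketch; the tiny contribution of $$\boldsymbol{Q}$$ to $$\boldsymbol{P}_{1,0}$$ is neglected):

```python
import numpy as np

# One-axis block of P_{1,0} from the iteration-zero prediction
P10_block = np.array([
    [1125.0, 750.0, 250.0],
    [750.0, 1000.0, 500.0],
    [250.0, 500.0, 500.0],
])
P10 = np.block([[P10_block, np.zeros((3, 3))],
                [np.zeros((3, 3)), P10_block]])

H = np.zeros((2, 6))
H[0, 0] = H[1, 3] = 1.0
R = np.eye(2) * 9.0  # sigma_m = 3 m

K1 = P10 @ H.T @ np.linalg.inv(H @ P10 @ H.T + R)
```

The first column of `K1` reproduces the 0.9921, 0.6614, and 0.2205 values above, since each is the corresponding element of $$\boldsymbol{P}_{1,0}\boldsymbol{H}^{T}$$ divided by $$1125 + 9$$.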

Estimate the current state:

$\boldsymbol{\hat{x}}_{1,1} = \boldsymbol{\hat{x}}_{1,0} + \boldsymbol{K}_{1} (\boldsymbol{z}_{1} - \boldsymbol{H \hat{x}}_{1,0} ) = \left[ \begin{matrix} -390.54 \\ -260.36 \\ -86.8 \\ 298.02 \\ 198.7 \\ 66.23 \\ \end{matrix} \right]$

Update the current estimate uncertainty:

$\boldsymbol{P}_{1,1} = \left(\boldsymbol{I} - \boldsymbol{K}_{1}\boldsymbol{H} \right) \boldsymbol{P}_{1,0} \left(\boldsymbol{I} - \boldsymbol{K}_{1}\boldsymbol{H} \right)^{T} + \boldsymbol{K}_{1}\boldsymbol{RK}_{1}^{T} = \left[ \begin{matrix} 8.93 & 5.95 & 2 & 0 & 0 & 0 \\ 5.95 & 504 & 334.7 & 0 & 0 & 0 \\ 2 & 334.7 & 444.9 & 0 & 0 & 0 \\ 0 & 0 & 0 & 8.93 & 5.95 & 2 \\ 0 & 0 & 0 & 5.95 & 504 & 334.7 \\ 0 & 0 & 0 & 2 & 334.7 & 444.9 \\ \end{matrix} \right]$

##### Step 3 - Predict

$\boldsymbol{\hat{x}}_{2,1} = \boldsymbol{F\hat{x}}_{1,1} = \left[ \begin{matrix} -694.3 \\ -347.15 \\ -86.8 \\ 529.8 \\ 264.9 \\ 66.23 \\ \end{matrix} \right]$

$\boldsymbol{P}_{2,1} = \boldsymbol{FP}_{1,1}\boldsymbol{F}^{T} + \boldsymbol{Q} = \left[ \begin{matrix} 972 & 1236 & 559 & 0 & 0 & 0 \\ 1236 & 1618 & 780 & 0 & 0 & 0 \\ 559 & 780 & 445 & 0 & 0 & 0 \\ 0 & 0 & 0 & 972 & 1236 & 559 \\ 0 & 0 & 0 & 1236 & 1618 & 780 \\ 0 & 0 & 0 & 559 & 780 & 445 \\ \end{matrix} \right]$

Our prediction uncertainty is still very high.

#### Second Iteration

##### Step 1 - Measure

The measurement values:

$\boldsymbol{z}_{2} = \left[ \begin{matrix} -375.93 \\ 301.78 \\ \end{matrix} \right]$

##### Step 2 - Update

The Kalman Gain calculation:

$\boldsymbol{K}_{2} = \boldsymbol{P}_{2,1}\boldsymbol{H}^{T}\left(\boldsymbol{HP}_{2,1}\boldsymbol{H}^{T} + \boldsymbol{R} \right)^{-1} = \left[ \begin{matrix} 0.9908 & 0 \\ 1.26 & 0 \\ 0.57 & 0 \\ 0 & 0.9908 \\ 0 & 1.26 \\ 0 & 0.57 \\ \end{matrix} \right]$

Estimate the current state:

$\boldsymbol{\hat{x}}_{2,2} = \boldsymbol{\hat{x}}_{2,1} + \boldsymbol{K}_{2} (\boldsymbol{z}_{2} - \boldsymbol{H \hat{x}}_{2,1} ) = \left[ \begin{matrix} -378.9 \\ 53.8 \\ 94.5 \\ 303.9 \\ -22.3 \\ -63.6 \\ \end{matrix} \right]$

Update the current estimate uncertainty:

$\boldsymbol{P}_{2,2} = \left(\boldsymbol{I} - \boldsymbol{K}_{2}\boldsymbol{H} \right)\boldsymbol{P}_{2,1} \left(\boldsymbol{I} - \boldsymbol{K}_{2}\boldsymbol{H} \right)^{T} + \boldsymbol{K}_{2}\boldsymbol{RK}_{2}^{T} = \left[ \begin{matrix} 8.92 & 11.33 & 5.13 & 0 & 0 & 0 \\ 11.33 & 61.1 & 75.4 & 0 & 0 & 0 \\ 5.13 & 75.4 & 126.5 & 0 & 0 & 0 \\ 0 & 0 & 0 & 8.92 & 11.33 & 5.13 \\ 0 & 0 & 0 & 11.33 & 61.1 & 75.4 \\ 0 & 0 & 0 & 5.13 & 75.4 & 126.5 \\ \end{matrix} \right]$

##### Step 3 - Predict

$\boldsymbol{\hat{x}}_{3,2} = \boldsymbol{F\hat{x}}_{2,2} = \left[ \begin{matrix} -277.8 \\ 148.3 \\ 94.5 \\ 249.8 \\ -85.9 \\ -63.6 \\ \end{matrix} \right]$

$\boldsymbol{P}_{3,2} = \boldsymbol{FP}_{2,2}\boldsymbol{F}^{T} + \boldsymbol{Q} = \left[ \begin{matrix} 204.9 & 254 & 143.8 & 0 & 0 & 0 \\ 254 & 338.5 & 202 & 0 & 0 & 0 \\ 143.8 & 202 & 126.5 & 0 & 0 & 0 \\ 0 & 0 & 0 & 204.9 & 254 & 143.8 \\ 0 & 0 & 0 & 254 & 338.5 & 202 \\ 0 & 0 & 0 & 143.8 & 202 & 126.5 \\ \end{matrix} \right]$

Our prediction uncertainty is still very high.

At this point, it would be reasonable to jump to the last Kalman Filter iteration.

#### Thirty-Fifth Iteration

##### Step 1 - Measure

The measurement values:

$\boldsymbol{z}_{35} = \left[ \begin{matrix} 299.89 \\ 2.14 \\ \end{matrix} \right]$

##### Step 2 - Update

The Kalman Gain calculation:

$\boldsymbol{K}_{35} = \boldsymbol{P}_{35,34}\boldsymbol{H}^{T}\left(\boldsymbol{HP}_{35,34}\boldsymbol{H}^{T} + \boldsymbol{R} \right)^{-1} = \left[ \begin{matrix} 0.5556 & 0 \\ 0.2222 & 0 \\ 0.0444 & 0 \\ 0 & 0.5556 \\ 0 & 0.2222 \\ 0 & 0.0444 \\ \end{matrix} \right]$

The Kalman Gain for the position has converged to 0.56, meaning that the measurement and estimation weights are almost equal.

Estimate the current state:

$\boldsymbol{\hat{x}}_{35,35} = \boldsymbol{\hat{x}}_{35,34} + \boldsymbol{K}_{35} (\boldsymbol{z}_{35} - \boldsymbol{H \hat{x}}_{35,34} ) = \left[ \begin{matrix} 299.2 \\ 0.25 \\ -1.9 \\ 3.3 \\ -25.5 \\ -0.64 \\ \end{matrix} \right]$

Update the current estimate uncertainty:

$\boldsymbol{P}_{35,35} = \left(\boldsymbol{I} - \boldsymbol{K}_{35}\boldsymbol{H} \right) \boldsymbol{P}_{35,34} \left(\boldsymbol{I} - \boldsymbol{K}_{35}\boldsymbol{H} \right)^{T} + \boldsymbol{K}_{35}\boldsymbol{RK}_{35}^{T} = \left[ \begin{matrix} 5 & 2 & 0.4 & 0 & 0 & 0 \\ 2 & 1.4 & 0.4 & 0 & 0 & 0 \\ 0.4 & 0.4 & 0.16 & 0 & 0 & 0 \\ 0 & 0 & 0 & 5 & 2 & 0.4 \\ 0 & 0 & 0 & 2 & 1.4 & 0.4 \\ 0 & 0 & 0 & 0.4 & 0.4 & 0.16 \\ \end{matrix} \right]$

At this point, the position uncertainty is $$p_{x} = p_{y} = 5$$, which means that the standard deviation of the position estimate is $$\sqrt{5} \approx 2.24m$$.

##### Step 3 - Predict

$\boldsymbol{\hat{x}}_{36,35} = \boldsymbol{F\hat{x}}_{35,35} = \left[ \begin{matrix} 298.5 \\ -1.65 \\ -1.9 \\ -22.5 \\ -26.1 \\ -0.64 \\ \end{matrix} \right]$

$\boldsymbol{P}_{36,35} = \boldsymbol{FP}_{35,35}\boldsymbol{F}^{T} + \boldsymbol{Q} = \left[ \begin{matrix} 11.25 & 4.5 & 0.9 & 0 & 0 & 0 \\ 4.5 & 2.4 & 0.6 & 0 & 0 & 0 \\ 0.9 & 0.6 & 0.2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 11.25 & 4.5 & 0.9 \\ 0 & 0 & 0 & 4.5 & 2.4 & 0.6 \\ 0 & 0 & 0 & 0.9 & 0.6 & 0.2 \\ \end{matrix} \right]$

### Example summary

The following chart compares the true, measured, and estimated values of the vehicle position.

As you can see, the Kalman Filter tracks the vehicle quite well. However, the estimates are inaccurate when the vehicle starts the turning maneuver. After a while, the Kalman Filter accuracy improves.

While the vehicle travels along a straight line, the acceleration is constant and equal to zero. However, during the turning maneuver, the vehicle experiences acceleration due to the circular motion (angular acceleration).

Recall from introductory physics that the angular acceleration is $$\alpha = \frac{\Delta V}{R \Delta t}$$, where $$\Delta t$$ is the time interval, $$\Delta V$$ is the change in velocity within the time interval, and $$R$$ is the circle radius.

Although the angular acceleration is constant, the angular acceleration projection on the $$x$$ and $$y$$ axes is not constant, therefore $$\ddot{x}$$ and $$\ddot{y}$$ are not constant.

Our Kalman Filter is designed for a constant acceleration model. Nevertheless, it succeeds in tracking the maneuvering vehicle due to a properly chosen $$\sigma_{a}^{2}$$ parameter.

I would like to encourage the reader to implement this example in software and see how different values of $$\sigma_{a}^{2}$$ and $$\boldsymbol{R}$$ influence the Kalman Filter accuracy, the Kalman Gain convergence, and the estimation uncertainty.
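As a starting point for such an experiment, here is a minimal end-to-end sketch in Python with NumPy that runs all 35 measurements from the table through the filter; the variable names and loop structure are my own, not part of the original design:

```python
import numpy as np

dt, sigma_a, sigma_m = 1.0, 0.2, 3.0

F = np.array([
    [1, dt, 0.5 * dt**2, 0, 0, 0],
    [0, 1, dt, 0, 0, 0],
    [0, 0, 1, 0, 0, 0],
    [0, 0, 0, 1, dt, 0.5 * dt**2],
    [0, 0, 0, 0, 1, dt],
    [0, 0, 0, 0, 0, 1],
])
q = np.array([
    [dt**4 / 4, dt**3 / 2, dt**2 / 2],
    [dt**3 / 2, dt**2, dt],
    [dt**2 / 2, dt, 1],
])
Q = np.block([[q, np.zeros((3, 3))], [np.zeros((3, 3)), q]]) * sigma_a**2
H = np.zeros((2, 6))
H[0, 0] = H[1, 3] = 1.0
R = np.eye(2) * sigma_m**2

# The 35 noisy measurements from the table above
zs_x = [-393.66, -375.93, -351.04, -328.96, -299.35, -273.36, -245.89,
        -222.58, -198.03, -174.17, -146.32, -123.72, -103.47, -78.23,
        -52.63, -23.34, 25.96, 49.72, 76.94, 95.38, 119.83, 144.01,
        161.84, 180.56, 201.42, 222.62, 239.4, 252.51, 266.26, 271.75,
        277.4, 294.12, 301.23, 291.8, 299.89]
zs_y = [300.4, 301.78, 295.1, 305.19, 301.06, 302.05, 300.0, 303.57,
        296.33, 297.65, 297.41, 299.61, 299.6, 302.39, 295.04, 300.09,
        294.72, 298.61, 294.64, 284.88, 272.82, 264.93, 251.46, 241.27,
        222.98, 203.73, 184.1, 166.12, 138.71, 119.71, 100.41, 79.76,
        50.62, 32.99, 2.14]

x = np.zeros(6)           # initial guess: everything zero
P = np.eye(6) * 500.0     # very high initial uncertainty

# Iteration zero: predict before the first measurement arrives
x, P = F @ x, F @ P @ F.T + Q

I6 = np.eye(6)
for zx, zy in zip(zs_x, zs_y):
    z = np.array([zx, zy])
    # Update
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)
    P = (I6 - K @ H) @ P @ (I6 - K @ H).T + K @ R @ K.T
    x_est, P_est = x.copy(), P.copy()   # x_{n,n}, P_{n,n}
    # Predict
    x, P = F @ x, F @ P @ F.T + Q
```

After the loop, the position gain has converged to roughly 0.56 and the final position estimates are close to the values derived in the thirty-fifth iteration above.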

## Example 10 – rocket altitude estimation

In this example, we estimate the altitude of a rocket. The rocket is equipped with an onboard altimeter that provides altitude measurements. In addition to an altimeter, the rocket is equipped with an accelerometer that measures the rocket's acceleration.

The accelerometer serves as a control input to the Kalman Filter.

We assume constant acceleration dynamics.

Accelerometers don't sense gravity. An accelerometer at rest on a table measures $$1g$$ upwards, while an accelerometer in free fall measures zero acceleration. Thus, we need to subtract the gravitational acceleration constant $$g$$ from each accelerometer measurement.

The accelerometer measurement at time step $$n$$ is:

$a_{n} = \ddot{x} - g + \epsilon$

Where:

• $$\ddot{x}$$ is the actual acceleration of the object (the second derivative of the object position)
• $$g$$ is the gravitational acceleration constant; $$g = -9.8\frac{m}{s^{2}}$$
• $$\epsilon$$ is the accelerometer measurement error

### The state extrapolation equation

The general form of the state extrapolation equation in matrix notation is:

$\boldsymbol{\hat{x}}_{n+1,n} = \boldsymbol{F\hat{x}}_{n,n} + \boldsymbol{Gu}_{n} + \boldsymbol{w}_{n}$
Where:

• $$\boldsymbol{\hat{x}}_{n+1,n}$$ is the predicted system state vector at time step $$n + 1$$
• $$\boldsymbol{\hat{x}}_{n,n}$$ is the estimated system state vector at time step $$n$$
• $$\boldsymbol{u}_{n}$$ is the control variable
• $$\boldsymbol{w}_{n}$$ is the process noise
• $$\boldsymbol{F}$$ is the state transition matrix
• $$\boldsymbol{G}$$ is the control matrix

In this example, we have a control variable $$\boldsymbol{u}$$, which is based on the accelerometer measurement.

The system state $$\boldsymbol{x}_{n}$$ is defined by:

$\boldsymbol{x}_{n} = \left[ \begin{matrix} x_{n} \\ \dot{x}_{n} \\ \end{matrix} \right]$

Where:

• $$x_{n}$$ is the rocket altitude at time $$n$$
• $$\dot{x}_{n}$$ is the rocket velocity at time $$n$$

We can express the state extrapolation equation as follows:

$\left[ \begin{matrix} \hat{x}_{n+1,n}\\ \hat{\dot{x}}_{n+1,n}\\ \end{matrix} \right] = \left[ \begin{matrix} 1 & \Delta t \\ 0 & 1 \\ \end{matrix} \right] \left[ \begin{matrix} \hat{x}_{n,n}\\ \hat{\dot{x}}_{n,n}\\ \end{matrix} \right] + \left[ \begin{matrix} 0.5 \Delta t^{2} \\ \Delta t \\ \end{matrix} \right] \left( a_{n} + g \right)$

In the above equation:

$\boldsymbol{F}= \left[ \begin{matrix} 1 & \Delta t \\ 0 & 1 \\ \end{matrix} \right]$

$\boldsymbol{G}= \left[ \begin{matrix} 0.5 \Delta t^{2} \\ \Delta t \\ \end{matrix} \right]$

$\boldsymbol{u}_{n} = \left( a_{n} + g \right)$
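The prediction step with the control input can be sketched as follows; the sample period, state values, and accelerometer reading below are hypothetical:

```python
import numpy as np

dt = 0.25   # hypothetical sample period (s)
g = -9.8    # gravitational acceleration constant as defined above (m/s^2)

F = np.array([[1, dt],
              [0, 1]])
G = np.array([0.5 * dt**2, dt])   # control matrix

x = np.array([10.0, 20.0])   # hypothetical state: altitude 10 m, velocity 20 m/s
a_meas = 30.0                # hypothetical accelerometer reading (m/s^2)

u = a_meas + g               # control variable: remove the gravity component
x_pred = F @ x + G * u       # state extrapolation with control input
```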

### The covariance extrapolation equation

The general form of the Covariance Extrapolation Equation is:

$\boldsymbol{P}_{n+1,n} = \boldsymbol{FP}_{n,n}\boldsymbol{F}^{T} + \boldsymbol{Q}$
Where:

• $$\boldsymbol{P}_{n,n}$$ is the uncertainty (covariance) matrix of the current state estimation
• $$\boldsymbol{P}_{n+1,n}$$ is the uncertainty (covariance) matrix of the next state estimation (prediction)
• $$\boldsymbol{F}$$ is the state transition matrix that we derived in the "Modeling linear dynamic systems" section
• $$\boldsymbol{Q}$$ is the process noise matrix

The estimate uncertainty in matrix form is:

$\boldsymbol{P} = \left[ \begin{matrix} p_{x} & p_{x\dot{x}} \\ p_{\dot{x}x} & p_{\dot{x}} \\ \end{matrix} \right]$

The elements of the main diagonal of the matrix are the variances of the estimation:

• $$p_{x}$$ is the variance of the altitude estimation
• $$p_{\dot{x}}$$ is the variance of the velocity estimation
• The off-diagonal entries are covariances

We have already derived the state transition matrix $$\boldsymbol{F}$$. Now we shall derive the process noise $$\boldsymbol{Q}$$ matrix.

#### The process noise matrix

We assume a discrete noise model - the noise is different at each time sample but is constant between time samples.

The process noise matrix for a constant acceleration model looks like this:

$\boldsymbol{Q} = \left[ \begin{matrix} \sigma_{x}^{2} & \sigma_{x\dot{x}}^{2} \\ \sigma_{\dot{x}x}^{2} & \sigma_{\dot{x}}^{2} \\ \end{matrix} \right]$

We've already derived the Q matrix for the constant acceleration model. The Q matrix for our example is:

$\boldsymbol{Q} = \left[ \begin{matrix} \frac{\Delta t^{4}}{4} & \frac{\Delta t^{3}}{2} \\ \frac{\Delta t^{3}}{2} & \Delta t^{2} \\ \end{matrix} \right] \epsilon^{2}$

Where:

• $$\Delta t$$ is the time between successive measurements
• $$\epsilon^{2}$$ is the variance of the accelerometer measurement error

In the previous example, we used the system's random acceleration variance $$\sigma_{a}^{2}$$ as a multiplier of the process noise matrix. But here, we have an accelerometer that measures the system's random acceleration. The accelerometer error $$\epsilon$$ is much lower than the system's random acceleration; therefore, we use $$\epsilon^{2}$$ as a multiplier of the process noise matrix.

It makes our estimation uncertainty much lower!

Now we can write down the covariance extrapolation equation for our example:

$\boldsymbol{P}_{n+1,n} = \boldsymbol{FP}_{n,n}\boldsymbol{F}^{T} + \boldsymbol{Q}$

$\left[ \begin{matrix} p_{x_{n+1,n}} & p_{x\dot{x}_{n+1,n}} \\ p_{\dot{x}x_{n+1,n}} & p_{\dot{x}_{n+1,n}} \\ \end{matrix} \right] = \left[ \begin{matrix} 1 & \Delta t \\ 0 & 1 \\ \end{matrix} \right] \times \left[ \begin{matrix} p_{x_{n,n}} & p_{x\dot{x}_{n,n}} \\ p_{\dot{x}x_{n,n}} & p_{\dot{x}_{n,n}} \\ \end{matrix} \right] \times \left[ \begin{matrix} 1 & 0 \\ \Delta t & 1 \\ \end{matrix} \right] + \left[ \begin{matrix} \frac{\Delta t^{4}}{4} & \frac{\Delta t^{3}}{2} \\ \frac{\Delta t^{3}}{2} & \Delta t^{2} \\ \end{matrix} \right] \epsilon^{2}$

Note: According to the “Constructing the process noise matrix” chapter, the size of the $$\boldsymbol{Q}$$ matrix for the constant velocity model should be $$2 \times 2$$, and the size of the $$\boldsymbol{Q}$$ matrix for the constant acceleration model should be $$3 \times 3$$. In this example, the acceleration is handled by the control input (therefore, it is not part of the $$\boldsymbol{F}$$ matrix, and the $$\boldsymbol{Q}$$ matrix is $$2 \times 2$$). Without an external control input, the $$\boldsymbol{Q}$$ matrix would be $$3 \times 3$$.
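A Python/NumPy sketch of this extrapolation, using $$\Delta t = 0.25s$$ and $$\epsilon = 0.1\frac{m}{s^{2}}$$ as in the numerical example further down (the initial covariance is the high-uncertainty one used there as well):

```python
import numpy as np

dt = 0.25    # time between measurements (s)
eps = 0.1    # accelerometer measurement error standard deviation (m/s^2)

F = np.array([[1.0, dt],
              [0.0, 1.0]])

# Discrete constant-acceleration process noise, scaled by eps^2:
Q = np.array([[dt**4 / 4, dt**3 / 2],
              [dt**3 / 2, dt**2]]) * eps**2

P = np.array([[500.0, 0.0],       # a high initial estimate uncertainty
              [0.0, 500.0]])

# Covariance extrapolation: P(n+1,n) = F P(n,n) F^T + Q
P_pred = F @ P @ F.T + Q
```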

### The measurement equation

The generalized measurement equation in matrix form is given by:

$\boldsymbol{z}_{n} = \boldsymbol{Hx}_{n} + \boldsymbol{v}_{n}$
Where:

• $$\boldsymbol{z}_{n}$$ is the measurement vector
• $$\boldsymbol{x}_{n}$$ is the true system state (hidden state)
• $$\boldsymbol{v}_{n}$$ is the random noise vector
• $$\boldsymbol{H}$$ is the observation matrix

The measurement provides only the altitude of the rocket: $$\boldsymbol{z}_{n} = \left[ x_{n,measured} \right]$$

$\boldsymbol{z}_{n} = \boldsymbol{Hx}_{n}$

$\left[ x_{n,measured} \right] = \boldsymbol{H} \left[ \begin{matrix} x_{n}\\ \dot{x}_{n} \\ \end{matrix} \right]$

The dimension of $$\boldsymbol{z}_{n}$$ is $$1 \times 1$$ and the dimension of $$\boldsymbol{x}_{n}$$ is $$2 \times 1$$, so the dimension of the observation matrix $$\boldsymbol{H}$$ is $$1 \times 2$$.

$\boldsymbol{H} = \left[ \begin{matrix} 1 & 0 \\ \end{matrix} \right]$
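A quick dimensional check in Python/NumPy (the state values are made-up numbers for illustration):

```python
import numpy as np

H = np.array([[1.0, 0.0]])   # 1x2 observation matrix

x_true = np.array([[120.0],  # hypothetical true altitude (m)
                   [25.0]])  # hypothetical true velocity (m/s)

# The measurement model exposes only the altitude component of the state:
z = H @ x_true               # 1x1 measurement vector
```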

### The measurement uncertainty

The measurement covariance matrix is:

$\boldsymbol{R}_{n} = \left[ \sigma_{x_{m}}^{2} \right]$

The subscript $$m$$ is for measurement uncertainty.

For the sake of the example simplicity, we assume a constant measurement uncertainty:

$\boldsymbol{R}_{1} = \boldsymbol{R}_{2} = \ldots = \boldsymbol{R}_{n-1} = \boldsymbol{R}_{n} = \boldsymbol{R}$

### The Kalman Gain

The Kalman Gain in matrix notation is given by:

$\boldsymbol{K}_{n} = \boldsymbol{P}_{n,n-1}\boldsymbol{H}^{T}\left(\boldsymbol{HP}_{n,n-1}\boldsymbol{H}^{T} + \boldsymbol{R}_{n} \right)^{-1}$
Where:

• $$\boldsymbol{K}_{n}$$ is the Kalman Gain
• $$\boldsymbol{P}_{n,n-1}$$ is the prior estimate uncertainty (covariance) matrix of the current state (predicted at the previous step)
• $$\boldsymbol{H}$$ is the observation matrix
• $$\boldsymbol{R}_{n}$$ is the Measurement Uncertainty (measurement noise covariance matrix)

We have already derived all the building blocks of the Kalman Gain:

$\left[ \begin{matrix} K_{1,1_{n}} \\ K_{2,1_{n}} \\ \end{matrix} \right] = \left[ \begin{matrix} p_{x_{n,n-1}} & p_{x\dot{x}_{n,n-1}} \\ p_{\dot{x}x_{n,n-1}} & p_{\dot{x}_{n,n-1}} \\ \end{matrix} \right] \left[ \begin{matrix} 1 \\ 0 \\ \end{matrix} \right] \times \left( \left[ \begin{matrix} 1 & 0 \\ \end{matrix}\right] \left[ \begin{matrix} p_{x_{n,n-1}} & p_{x\dot{x}_{n,n-1}} \\ p_{\dot{x}x_{n,n-1}} & p_{\dot{x}_{n,n-1}} \\ \end{matrix} \right] \left[ \begin{matrix} 1 \\ 0 \\ \end{matrix} \right] + \left[ \sigma_{x_{m}}^{2} \right] \right)^{-1}$
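In Python/NumPy this computation looks as follows; the prior covariance used here is the iteration-zero prediction $$\boldsymbol{P}_{1,0}$$ computed later in this example, so the resulting gain matches the first-iteration value:

```python
import numpy as np

H = np.array([[1.0, 0.0]])
R = np.array([[400.0]])               # sigma_xm^2 = 20^2

P_prior = np.array([[531.25, 125.0],  # P(1,0) from the iteration-zero prediction
                    [125.0, 500.0]])

# K = P H^T (H P H^T + R)^-1
S = H @ P_prior @ H.T + R             # 1x1 innovation covariance
K = P_prior @ H.T @ np.linalg.inv(S)  # 2x1 Kalman Gain
```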

### The state update equation

The State Update Equation in matrix form is given by:

$\boldsymbol{\hat{x}}_{n,n} = \boldsymbol{\hat{x}}_{n,n-1} + \boldsymbol{K}_{n} (\boldsymbol{z}_{n} - \boldsymbol{H \hat{x}}_{n,n-1} )$
Where:

• $$\boldsymbol{\hat{x}}_{n,n}$$ is the estimated system state vector at time step $$n$$
• $$\boldsymbol{\hat{x}}_{n,n-1}$$ is the prediction of the system state at time step $$n$$, made at time step $$n-1$$
• $$\boldsymbol{K}_{n}$$ is the Kalman Gain
• $$\boldsymbol{z}_{n}$$ is the measurement
• $$\boldsymbol{H}$$ is the observation matrix

We have already defined all the building blocks of the state update equation.

### The covariance update equation

The Covariance Update Equation is given by:

$\boldsymbol{P}_{n,n} = \left(\boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right) \boldsymbol{P}_{n,n-1} \left(\boldsymbol{I} - \boldsymbol{K}_{n}\boldsymbol{H} \right)^{T} + \boldsymbol{K}_{n}\boldsymbol{R}_{n}\boldsymbol{K}_{n}^{T}$
Where:

• $$\boldsymbol{P}_{n,n}$$ is the estimate uncertainty (covariance) matrix of the current state
• $$\boldsymbol{P}_{n,n-1}$$ is the prior estimate uncertainty (covariance) matrix of the current state (predicted at the previous step)
• $$\boldsymbol{K}_{n}$$ is the Kalman Gain
• $$\boldsymbol{H}$$ is the observation matrix
• $$\boldsymbol{R}_{n}$$ is the Measurement Uncertainty (measurement noise covariance matrix)

We have already defined all the building blocks of the covariance update equation.
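The two update equations can be sketched together in Python/NumPy. The prior state, prior covariance, and measurement below are the iteration-zero prediction and the first measurement of this example, so the results should be close to the first-iteration values shown later (small differences come from rounding the prior state to one decimal):

```python
import numpy as np

H = np.array([[1.0, 0.0]])
R = np.array([[400.0]])
I2 = np.eye(2)

x_prior = np.array([[0.3], [2.45]])    # x(1,0), rounded as in the text
P_prior = np.array([[531.25, 125.0],
                    [125.0, 500.0]])   # P(1,0)
z = np.array([[-32.4]])                # first altitude measurement

# Kalman Gain
K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)

# State update: x(n,n) = x(n,n-1) + K (z - H x(n,n-1))
x_post = x_prior + K @ (z - H @ x_prior)

# Covariance update (numerically stable Joseph form):
P_post = (I2 - K @ H) @ P_prior @ (I2 - K @ H).T + K @ R @ K.T
```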

### The numerical example

Let us assume a vertically boosting rocket with constant acceleration. The rocket is equipped with an altimeter that provides altitude measurements and an accelerometer that serves as a control input.

• The measurements period: $$\Delta t = 0.25s$$
• The rocket acceleration: $$\ddot{x} = 30\frac{m}{s^{2}}$$
• The altimeter measurement error standard deviation: $$\sigma_{x_{m}} = 20m$$
• The accelerometer measurement error standard deviation: $$\epsilon = 0.1\frac{m}{s^{2}}$$
• The state transition matrix $$\boldsymbol{F}$$ would be:

$\boldsymbol{F} = \left[ \begin{matrix} 1 & \Delta t \\ 0 & 1 \\ \end{matrix} \right] = \left[ \begin{matrix} 1 & 0.25 \\ 0 & 1 \\ \end{matrix} \right]$

• The control matrix $$\boldsymbol{G}$$ would be:

$\boldsymbol{G} = \left[ \begin{matrix} 0.5 \Delta t^{2} \\ \Delta t \\ \end{matrix} \right] = \left[ \begin{matrix} 0.0313 \\ 0.25 \\ \end{matrix} \right]$

• The process noise matrix $$\boldsymbol{Q}$$ would be:

$\boldsymbol{Q} = \left[ \begin{matrix} \frac{\Delta t^{4}}{4} & \frac{\Delta t^{3}}{2} \\ \frac{\Delta t^{3}}{2} & \Delta t^{2} \\ \end{matrix} \right] \epsilon^{2} = \left[ \begin{matrix} \frac{0.25^{4}}{4} & \frac{0.25^{3}}{2} \\ \frac{0.25^{3}}{2} & 0.25^{2} \\ \end{matrix} \right] 0.1^{2}$

• The measurement uncertainty $$\boldsymbol{R}$$ would be:

$\boldsymbol{R}_{n} = \boldsymbol{R} = \left[ \sigma_{x_{m}}^{2} \right] = \left[ 400 \right]$

The following tables contain the set of 30 noisy measurements:

| $$n$$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Altitude $$x_{n}(m)$$ | -32.4 | -11.1 | 18 | 22.9 | 19.5 | 28.5 | 46.5 | 68.9 | 48.2 | 56.1 | 90.5 | 104.9 | 140.9 | 148 | 187.6 |
| Acceleration $$a_{n}\left( \frac{m}{s^{2}} \right)$$ | 39.72 | 40.02 | 39.97 | 39.81 | 39.75 | 39.6 | 39.77 | 39.83 | 39.73 | 39.87 | 39.81 | 39.92 | 39.78 | 39.98 | 39.76 |

| $$n$$ | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Altitude $$x_{n}(m)$$ | 209.2 | 244.6 | 276.4 | 323.5 | 357.3 | 357.4 | 398.3 | 446.7 | 465.1 | 529.4 | 570.4 | 636.8 | 693.3 | 707.3 | 748.5 |
| Acceleration $$a_{n}\left( \frac{m}{s^{2}} \right)$$ | 39.86 | 39.61 | 39.86 | 39.74 | 39.87 | 39.63 | 39.67 | 39.96 | 39.8 | 39.89 | 39.85 | 39.9 | 39.81 | 39.81 | 39.68 |
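The whole example can be reproduced end to end. The following Python/NumPy sketch (variable names are our own) runs the initialization, the iteration-zero prediction with an initial control input of $$9.8\frac{m}{s^{2}}$$, and then the 30 update/predict cycles described in the iterations that follow:

```python
import numpy as np

dt, g, eps, sigma_xm = 0.25, -9.8, 0.1, 20.0

F = np.array([[1.0, dt], [0.0, 1.0]])
G = np.array([[0.5 * dt**2], [dt]])
H = np.array([[1.0, 0.0]])
Q = np.array([[dt**4 / 4, dt**3 / 2],
              [dt**3 / 2, dt**2]]) * eps**2
R = np.array([[sigma_xm**2]])
I2 = np.eye(2)

# The 30 altitude measurements (m) and accelerometer readings (m/s^2):
z = [-32.4, -11.1, 18, 22.9, 19.5, 28.5, 46.5, 68.9, 48.2, 56.1,
     90.5, 104.9, 140.9, 148, 187.6, 209.2, 244.6, 276.4, 323.5, 357.3,
     357.4, 398.3, 446.7, 465.1, 529.4, 570.4, 636.8, 693.3, 707.3, 748.5]
a = [39.72, 40.02, 39.97, 39.81, 39.75, 39.6, 39.77, 39.83, 39.73, 39.87,
     39.81, 39.92, 39.78, 39.98, 39.76, 39.86, 39.61, 39.86, 39.74, 39.87,
     39.63, 39.67, 39.96, 39.8, 39.89, 39.85, 39.9, 39.81, 39.81, 39.68]

# Initialization and iteration-zero prediction (u0 = 9.8 m/s^2):
x = np.array([[0.0], [0.0]])
P = np.array([[500.0, 0.0], [0.0, 500.0]])
x = F @ x + G * 9.8
P = F @ P @ F.T + Q

for zn, an in zip(z, a):
    # Update
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([[zn]]) - H @ x)
    P = (I2 - K @ H) @ P @ (I2 - K @ H).T + K @ R @ K.T
    x_filt, P_filt = x.copy(), P.copy()   # filtered estimate of this step
    # Predict with control input u = a + g
    x = F @ x + G * (an + g)
    P = F @ P @ F.T + Q

print(x_filt.ravel())  # final filtered [altitude, velocity]
```

The final filtered state and covariance should match the thirtieth-iteration values derived below, up to the rounding used in the text.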

#### Iteration Zero

##### Initialization

We don't know the rocket's location; we set the initial position and velocity to 0.

$\boldsymbol{\hat{x}}_{0,0}= \left[ \begin{matrix} 0\\ 0\\ \end{matrix} \right]$

We also don't know the rocket's acceleration, but we can assume it's greater than zero. Let's assume:

$\boldsymbol{u}_{0} = -g = 9.8 \frac{m}{s^{2}}$

Since our initial state vector is a guess, we set a very high estimate uncertainty. The high estimate uncertainty results in high Kalman Gain, giving a high weight to the measurement.

$\boldsymbol{P}_{0,0}= \left[ \begin{matrix} 500 & 0 \\ 0 & 500 \\ \end{matrix} \right]$

##### Prediction

Now we can predict the next state based on the initialization values.

$\boldsymbol{\hat{x}}_{1,0} = \boldsymbol{F\hat{x}}_{0,0} + \boldsymbol{Gu}_{0} = \left[ \begin{matrix} 1 & 0.25 \\ 0 & 1 \\ \end{matrix} \right] \left[ \begin{matrix} 0 \\ 0 \\ \end{matrix} \right] + \left[ \begin{matrix} 0.0313 \\ 0.25 \\ \end{matrix} \right]9.8 = \left[ \begin{matrix} 0.3 \\ 2.45 \\ \end{matrix} \right]$

$\boldsymbol{P}_{1,0} = \boldsymbol{FP}_{0,0}\boldsymbol{F}^{T} + \boldsymbol{Q} = \left[ \begin{matrix} 1 & 0.25 \\ 0 & 1 \\ \end{matrix} \right] \left[ \begin{matrix} 500 & 0 \\ 0 & 500 \\ \end{matrix} \right] \left[ \begin{matrix} 1 & 0 \\ 0.25 & 1 \\ \end{matrix} \right] + \left[ \begin{matrix} \frac{0.25^{4}}{4} & \frac{0.25^{3}}{2} \\ \frac{0.25^{3}}{2} & 0.25^{2} \\ \end{matrix} \right] 0.1^{2} = \left[ \begin{matrix} 531.25 & 125 \\ 125 & 500 \\ \end{matrix} \right]$

#### First Iteration

##### Step 1 - Measure

The measurement values:

$\boldsymbol{z}_{1} = -32.4$ $a_{1} = 39.72$

##### Step 2 - Update

The Kalman Gain calculation:

$\boldsymbol{K}_{1} = \boldsymbol{P}_{1,0}\boldsymbol{H}^{T}\left(\boldsymbol{HP}_{1,0}\boldsymbol{H}^{T} + \boldsymbol{R} \right)^{-1} = \left[ \begin{matrix} 0.57 \\ 0.13 \\ \end{matrix} \right]$

Estimate the current state:

$\boldsymbol{\hat{x}}_{1,1} = \boldsymbol{\hat{x}}_{1,0} + \boldsymbol{K}_{1} (\boldsymbol{z}_{1} - \boldsymbol{H\hat{x}}_{1,0} ) = \left[ \begin{matrix} -18.35 \\ -1.94 \\ \end{matrix} \right]$

Update the current estimate uncertainty:

$\boldsymbol{P}_{1,1} = \left(\boldsymbol{I} - \boldsymbol{K}_{1}\boldsymbol{H} \right) \boldsymbol{P}_{1,0} \left(\boldsymbol{I} - \boldsymbol{K}_{1}\boldsymbol{H} \right)^{T} + \boldsymbol{K}_{1}\boldsymbol{RK}_{1}^{T} = \left[ \begin{matrix} 228.2 & 53.7 \\ 53.7 & 483.2 \\ \end{matrix} \right]$

##### Step 3 - Predict

$\boldsymbol{\hat{x}}_{2,1} = \boldsymbol{F\hat{x}}_{1,1} + \boldsymbol{Gu}_{1} = \left[ \begin{matrix} 1 & 0.25 \\ 0 & 1 \\ \end{matrix} \right] \left[ \begin{matrix} -18.35 \\ -1.94 \\ \end{matrix} \right] + \left[ \begin{matrix} 0.0313 \\ 0.25 \\ \end{matrix} \right] \left( 39.72 + (-9.8) \right) = \left[ \begin{matrix} -17.9 \\ 5.54 \\ \end{matrix} \right]$

$\boldsymbol{P}_{2,1} = \boldsymbol{FP}_{1,1}\boldsymbol{F}^{T} + \boldsymbol{Q} = \left[ \begin{matrix} 285.2 & 174.5 \\ 174.5 & 483.2 \\ \end{matrix} \right]$

Our prediction uncertainty is still very high.

#### Second Iteration

##### Step 1 - Measure

The measurement values:

$\boldsymbol{z}_{2} = -11.1$ $a_{2} = 40.02$

##### Step 2 - Update

The Kalman Gain calculation:

$\boldsymbol{K}_{2} = \boldsymbol{P}_{2,1}\boldsymbol{H}^{T}\left(\boldsymbol{HP}_{2,1}\boldsymbol{H}^{T} + \boldsymbol{R} \right)^{-1} = \left[ \begin{matrix} 0.42 \\ 0.26 \\ \end{matrix} \right]$

Estimate the current state:

$\boldsymbol{\hat{x}}_{2,2} = \boldsymbol{\hat{x}}_{2,1} + \boldsymbol{K}_{2} (\boldsymbol{z}_{2} - \boldsymbol{H \hat{x}}_{2,1} ) = \left[ \begin{matrix} -15.1 \\ 7.3 \\ \end{matrix} \right]$

Update the current estimate uncertainty:

$\boldsymbol{P}_{2,2} = \left(\boldsymbol{I} - \boldsymbol{K}_{2}\boldsymbol{H}\right) \boldsymbol{P}_{2,1} \left(\boldsymbol{I} - \boldsymbol{K}_{2}\boldsymbol{H} \right)^{T} + \boldsymbol{K}_{2}\boldsymbol{RK}_{2}^{T} = \left[ \begin{matrix} 166.5 & 101.9 \\ 101.9 & 438.8 \\ \end{matrix} \right]$

##### Step 3 - Predict

$\boldsymbol{\hat{x}}_{3,2} = \boldsymbol{F\hat{x}}_{2,2} + \boldsymbol{Gu}_{2} = \left[ \begin{matrix} -12.3 \\ 14.8 \\ \end{matrix} \right]$

$\boldsymbol{P}_{3,2} = \boldsymbol{FP}_{2,2}\boldsymbol{F}^{T} + \boldsymbol{Q} = \left[ \begin{matrix} 244.9 & 211.6 \\ 211.6 & 438.8 \\ \end{matrix} \right]$

At this point, it would be reasonable to jump to the last Kalman Filter iteration.

#### Thirtieth Iteration

##### Step 1 - Measure

The measurement values:

$\boldsymbol{z}_{30} = 748.5$ $a_{30} = 39.68$

##### Step 2 - Update

The Kalman Gain calculation:

$\boldsymbol{K}_{30} = \boldsymbol{P}_{30,29}\boldsymbol{H}^{T}\left(\boldsymbol{HP}_{30,29}\boldsymbol{H}^{T} + \boldsymbol{R} \right)^{-1} = \left[ \begin{matrix} 0.12 \\ 0.02 \\ \end{matrix} \right]$

The Kalman Gain for altitude converged to 0.12, which means that the estimation weight is much higher than the measurement weight.

Estimate the current state:

$\boldsymbol{\hat{x}}_{30,30} = \boldsymbol{\hat{x}}_{30,29} + \boldsymbol{K}_{30} (\boldsymbol{z}_{30} - \boldsymbol{H\hat{x}}_{30,29}) = \left[ \begin{matrix} 776.7 \\ 215.4 \\ \end{matrix} \right]$

Update the current estimate uncertainty:

$\boldsymbol{P}_{30,30} = \left(\boldsymbol{I} - \boldsymbol{K}_{30}\boldsymbol{H} \right) \boldsymbol{P}_{30,29} \left(\boldsymbol{I} - \boldsymbol{K}_{30}\boldsymbol{H} \right)^{T} + \boldsymbol{K}_{30}\boldsymbol{RK}_{30}^{T} = \left[ \begin{matrix} 49.3 & 9.7 \\ 9.7 & 2.6 \\ \end{matrix} \right]$

At this point, the altitude estimate variance is $$p_{x} = 49.3$$, which means that the standard deviation of the altitude estimate is $$\sqrt{49.3}m \approx 7.02m$$ (remember that the standard deviation of the measurement is $$20m$$).

##### Step 3 - Predict

$\boldsymbol{\hat{x}}_{31,30} = \boldsymbol{F\hat{x}}_{30,30} + \boldsymbol{Gu}_{30} = \left[ \begin{matrix} 831.5 \\ 222.94 \\ \end{matrix} \right]$

$\boldsymbol{P}_{31,30} = \boldsymbol{FP}_{30,30}\boldsymbol{F}^{T} + \boldsymbol{Q} = \left[ \begin{matrix} 54.3 & 10.4 \\ 10.4 & 2.6 \\ \end{matrix} \right]$

### Example summary

The following chart compares the true, measured, and estimated values of the rocket altitude.

The following chart depicts the Kalman Gain.

In the beginning, the estimated altitude is influenced by measurements, and it is not aligned well with the true rocket altitude since the measurements are very noisy.

But as the Kalman Gain converges, the noisy measurement has less influence, and the estimated altitude is well aligned with the true altitude.

In this example, we don't have any maneuvers that cause acceleration changes, but if we had, the accelerometer readings would feed those changes into the state extrapolation equation through the control input.