Previous post: Kalman Filter VI: Further examples
Read next: The Kalman Filter VIII: Square root form
Consider the dynamical system
$$\begin{aligned}
x_{t+1} {}={} & Ax_t + w_t,
\\
y_t {}={} & Cx_t + v_t,
\end{aligned}$$
under the independence assumptions we stated earlier and assuming that $w_t$, $v_t$, and $x_0$ are normally distributed, that is,
$$\begin{aligned}
p_{w_t}(w) {}\propto{} & \exp \left[-\tfrac{1}{2}\|w\|_{Q^{-1}}^2\right],
\\
p_{v_t}(v) {}\propto{} & \exp \left[-\tfrac{1}{2}\|v\|_{R^{-1}}^2\right],
\\
p_{x_0}(x_0) {}\propto{} & \exp \left[-\tfrac{1}{2}\|x_0-\bar{x}_0\|_{P_{0}^{-1}}^{2}\right].
\end{aligned}$$
Given a set of measurements $y_0, y_1, \ldots, y_{N-1}$, we need to estimate $x_0, x_1, \ldots, x_{N}$.
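To make the setting concrete, here is a minimal simulation sketch in Python/NumPy. All numerical values of $A$, $C$, $Q$, $R$, $P_0$ and $\bar{x}_0$ below are illustrative assumptions, not values from this series.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Illustrative system matrices (assumed for this sketch only)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])       # state transition
C = np.array([[1.0, 0.0]])       # output map
Q = 0.01 * np.eye(2)             # process noise covariance
R = np.array([[0.1]])            # measurement noise covariance
P0 = np.eye(2)                   # covariance of the prior on x_0
x_bar0 = np.array([0.0, 1.0])    # mean of the prior on x_0

N = 50                           # we observe y_0, ..., y_{N-1}

# Sample x_0 ~ N(x_bar0, P0) and roll the dynamics forward
x = rng.multivariate_normal(x_bar0, P0)
states, outputs = [x], []
for t in range(N):
    y = C @ x + rng.multivariate_normal(np.zeros(1), R)   # y_t = C x_t + v_t
    outputs.append(y)
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)   # x_{t+1} = A x_t + w_t
    states.append(x)

states = np.array(states)     # true states x_0, ..., x_N (kept only for comparison)
outputs = np.array(outputs)   # measurements y_0, ..., y_{N-1}
```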
Useful results
In what follows we use the convenient notation $x_{0:N}=(x_0, \ldots, x_N)$, and likewise we define $y_{0:N}$, $w_{0:N}$ and so on.
Proposition VII.1 (Joint distribution of states). For any $N\in{\rm I\!N}$,
$$p(x_{0:N})
{}={}
p(x_0)\prod_{t=0}^{N-1}p_{w_t}(x_{t+1} - Ax_{t}).$$
Proof. We have
$$\begin{aligned}
p(x_{0:N}) {}={}& p(x_0)p(x_{1:N} \mid x_0)
\\
{}={}& p(x_0)p(x_1 \mid x_0) p(x_{2:N} \mid x_0,x_1)
\\
{}={}& p(x_0)p(x_1 \mid x_0) p(x_2 \mid x_{0}, x_{1})\cdot\ldots\cdot p(x_N \mid x_{0:N-1}).
\end{aligned}$$
By the Markovianity of the sequence of states,
$$\begin{aligned}
p(x_{0:N}) {}={}& p(x_0)p(x_1 \mid x_0) p(x_2 \mid x_{1})\cdot\ldots\cdot p(x_N \mid x_{N-1})
\\
{}={}& p(x_0) \prod_{t=0}^{N-1} p_{w_t}(x_{t+1} - Ax_t),
\end{aligned}$$
where in the last step we used Proposition I.1. This completes the proof. $\Box$
Proposition VII.2 (Joint distribution of outputs). Let $N\in{\rm I\!N}$. Then
$$p(y_{0:N})
{}={}
p(y_0)\prod_{t=1}^{N}p(y_t {}\mid{} y_{0:t-1}),$$
where $y_{0:t} = (y_0, \ldots, y_t)$.
Proof. Similar to the proof of Proposition VII.1. $\Box$
Proposition VII.3 (Joint distribution of states and $w$'s). For any $N\in{\rm I\!N}$,
$$p(x_{0:N}, w_{0:N-1})
{}={}
\begin{cases}
0, & \text{if } x_{t+1} \neq Ax_t + w_t \text{ for some } t,
\\
p_{x_0}(x_0)\prod_{t=0}^{N-1}p_{w_t}(w_t), & \text{otherwise.}
\end{cases}$$
Proposition VII.4 (Joint distribution of states given outputs). For any $N\in{\rm I\!N}$,
$$p(x_{0:N} {}\mid{} y_{0:N-1})
{}\propto{}
p_{x_0}(x_0)\prod_{t=0}^{N-1}p_{v_t}(y_t - Cx_t)\,p_{w_t}(x_{t+1}-Ax_t).$$
From Proposition VII.4, the logarithm of the posterior (omitting the constant term) is
$$\begin{aligned}
& \log p(x_0,\ldots, x_{N} {}\mid{} y_0, \ldots, y_{N-1})\\
& \qquad{}={}
\log p_{x_0}(x_0)
{}+{}
\sum_{t=0}^{N-1}\left[\log p_{v_t}(y_t - Cx_t) + \log p_{w_t}(x_{t+1}-Ax_t)\right]
\\
& \qquad{}={}
-\tfrac{1}{2}\|x_0 - \bar{x}_0\|_{P_0^{-1}}^{2}
- \tfrac{1}{2}\sum_{t=0}^{N-1} \left[\|y_t - Cx_t\|_{R^{-1}}^{2} {}+{} \|x_{t+1}-Ax_t\|_{Q^{-1}}^{2}\right].
\end{aligned}$$
Using the result of this exercise, we have the maximum a posteriori (MAP) estimate
$$\begin{aligned}
(\hat{x}_t)_{t=0}^{N}
{}={} &
\argmax_{x_0,\ldots,x_{N}} p(x_0,\ldots, x_{N} {}\mid{} y_0, \ldots, y_{N-1})
\\
{}={} &
\argmax_{x_0,\ldots,x_{N}} \log p(x_0,\ldots, x_{N} {}\mid{} y_0, \ldots, y_{N-1})
\\
{}={} &
\argmax_{x_0,\ldots,x_{N}}
-\tfrac{1}{2}\|x_0 - \bar{x}_0\|_{P_0^{-1}}^{2}
- \tfrac{1}{2}\sum_{t=0}^{N-1} \left[\|y_t - Cx_t\|_{R^{-1}}^{2} {}+{} \|x_{t+1}-Ax_t\|_{Q^{-1}}^{2}\right]
\\
{}={} &
\argmin_{x_0,\ldots,x_{N}}
\tfrac{1}{2}\|x_0 - \bar{x}_0\|_{P_0^{-1}}^{2}
+ \tfrac{1}{2}\sum_{t=0}^{N-1} \left[\|y_t - Cx_t\|_{R^{-1}}^{2} {}+{} \|x_{t+1}-Ax_t\|_{Q^{-1}}^{2}\right].
\end{aligned}$$
In fact, we can write this as
$$(\hat{x}_t)_{t=0}^{N}
{}={}
\argmin_{
\substack{
x_0,\ldots,x_{N},
\\
w_0, \ldots, w_{N-1},
\\
v_0, \ldots, v_{N-1},
\\
x_{t+1} = Ax_t + w_t,\ t\in{\rm I\!N}_{[0, N-1]},
\\
y_{t} = Cx_t + v_t,\ t \in {\rm I\!N}_{[0, N-1]}
}
}
\tfrac{1}{2}\|x_0 - \bar{x}_0\|_{P_0^{-1}}^{2}
+ \sum_{t=0}^{N-1} \tfrac{1}{2}\|v_t\|_{R^{-1}}^{2} {}+{} \tfrac{1}{2}\|w_t\|_{Q^{-1}}^{2}.$$
We need to solve the problem
$$\begin{aligned}
\operatorname*{minimise}_{
\substack{
x_0,\ldots,x_{N},
\\
w_0,\ldots, w_{N-1},
\\
v_0, \ldots, v_{N-1}
}
}\ &
\tfrac{1}{2}\|x_0 - \bar{x}_0\|_{P_0^{-1}}^{2}
+ \sum_{t=0}^{N-1} \tfrac{1}{2}\|v_t\|_{R^{-1}}^{2} {}+{} \tfrac{1}{2}\|w_t\|_{Q^{-1}}^{2},
\\
\text{subject to: } & x_{t+1} = Ax_t + w_t,\ t\in{\rm I\!N}_{[0, N-1]},
\\
& y_{t} = Cx_t + v_t,\ t \in {\rm I\!N}_{[0, N-1]},
\end{aligned}$$
which is akin to a linear-quadratic optimal control problem. The key differences are the presence of an arrival cost instead of a terminal cost and the fact that the initial condition is not fixed. Eliminating $v_t$ via the output equation, the problem can be written as follows:
$$\begin{aligned}
\operatorname*{minimise}_{
\substack{
x_0,\ldots,x_{N},
\\
w_0,\ldots, w_{N-1}
}
}\ &
\underbrace{\tfrac{1}{2}\|x_0 - \bar{x}_0\|_{P_0^{-1}}^{2}}_{\ell_{x_0}(x_0)}
+ \sum_{t=0}^{N-1} \underbrace{
\tfrac{1}{2}\|y_t-Cx_t\|_{R^{-1}}^{2} {}+{} \tfrac{1}{2}\|w_t\|_{Q^{-1}}^{2}
}_{\ell_t(x_t, w_t)},
\\
\text{subject to: } & x_{t+1} = Ax_t + w_t,\ t\in{\rm I\!N}_{[0, N-1]}.
\end{aligned}$$
This is known as the full information estimation (FIE) problem. Later we will show that FIE is equivalent to the Kalman filter.
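Before moving on, here is a sketch of how one might solve the FIE problem numerically: after eliminating $v_t$ and $w_t$ through the constraints, it is a linear least-squares problem in $x_0,\ldots,x_N$, which NumPy can solve directly. The function below, and the matrices it expects, are illustrative assumptions that match the simulation sketch earlier in this post; they are not code from the series.

```python
import numpy as np

def solve_fie(A, C, Q, R, P0, x_bar0, outputs):
    """Solve the FIE problem as a linear least-squares problem.

    Minimises 0.5*||x_0 - x_bar0||^2_{P0^{-1}}
            + sum_t 0.5*||y_t - C x_t||^2_{R^{-1}} + 0.5*||x_{t+1} - A x_t||^2_{Q^{-1}}
    over x_0, ..., x_N, where N = len(outputs).
    """
    n, N = A.shape[0], len(outputs)

    # For a weight M = Cov^{-1}, ||v||^2_M = ||L^T v||^2 where M = L L^T (Cholesky factor)
    Lp = np.linalg.cholesky(np.linalg.inv(P0)).T
    Lr = np.linalg.cholesky(np.linalg.inv(R)).T
    Lq = np.linalg.cholesky(np.linalg.inv(Q)).T

    nz = (N + 1) * n          # decision vector z = (x_0, ..., x_N)

    def block(mat, t):
        """Place `mat` in the columns of z that correspond to x_t."""
        row = np.zeros((mat.shape[0], nz))
        row[:, t * n:(t + 1) * n] = mat
        return row

    rows, rhs = [], []
    # Arrival cost: L_p^T (x_0 - x_bar0)
    rows.append(block(Lp, 0))
    rhs.append(Lp @ x_bar0)
    for t, y in enumerate(outputs):
        # Output error: L_r^T (C x_t - y_t)
        rows.append(block(Lr @ C, t))
        rhs.append(Lr @ y)
        # Process noise: L_q^T (x_{t+1} - A x_t)
        rows.append(block(-Lq @ A, t) + block(Lq, t + 1))
        rhs.append(np.zeros(n))

    H, h = np.vstack(rows), np.concatenate(rhs)
    z = np.linalg.lstsq(H, h, rcond=None)[0]
    return z.reshape(N + 1, n)    # row t is the FIE estimate of x_t

# Example (reusing the simulated data from the previous sketch):
# x_fie = solve_fie(A, C, Q, R, P0, x_bar0, outputs)
```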
So what?
Firstly, the full information estimation formulation allows us to interpret the Kalman filter as a Bayesian estimator and work in terms of prior and posterior distributions instead of conditional expectations as we did earlier. This way we can get a better understanding of how the Kalman filter works.
By looking at the cost function of the FIE problem we see that we are trying to minimise:
- A penalty on the distance between the estimated initial state, $x_0$, and the expected initial state, $\bar{x}_0$, of our prior on $x_0$
- A penalty on the output error, $y_t - Cx_t$ (what we observe minus what we should be observing based on the estimated state)
- A penalty on the process noise (because we want to avoid an estimator that attributes everything to process noise)
The above penalties are weighted by the inverses of the corresponding variance-covariance matrices. This means that the "higher" a covariance, the "lower" its inverse will be, entailing a lower penalty. For example, if we don't trust our sensors very much, we will have a "large" $R$, so a "small" $R^{-1}$, so a small penalty on $y_t - Cx_t$, thus allowing the estimator to predict states which lead to large observation errors.
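As a quick numerical illustration of this trade-off (with made-up scalar values, and using the corrected estimate $x^\star = \bar{x} + P_0C^\intercal(R + CP_0C^\intercal)^{-1}(y - C\bar{x})$ that we derive later in this post): the larger $R$ is, the less the estimate moves from the prior mean towards the measurement.

```python
import numpy as np

P0 = np.array([[1.0]])     # prior covariance (made-up)
C = np.array([[1.0]])      # scalar output map
x_bar = np.array([0.0])    # prior mean
y = np.array([2.0])        # observed output

for r in [0.01, 1.0, 100.0]:
    R = np.array([[r]])
    S = R + C @ P0 @ C.T                                           # innovation covariance
    x_corr = x_bar + P0 @ C.T @ np.linalg.solve(S, y - C @ x_bar)  # corrected estimate
    print(f"R = {r:6.2f}  ->  corrected estimate = {x_corr[0]:.3f}")

# Small R: the estimate is pulled almost all the way to the measurement (close to 2)
# Large R: the output-error penalty is weak, so the estimate stays near the prior (close to 0)
```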
Lastly, the Bayesian approach allows us to generalise to nonlinear systems. We can easily repeat the above procedure assuming we have nonlinear dynamics and/or a nonlinear relationship between states and outputs. This will lead to the moving horizon estimation method.
Forward dynamic programming
The estimation problem becomes
$$\begin{aligned}
\operatorname*{minimise}_{
\substack{
x_0,\ldots,x_{N},
\\
w_0,\ldots,w_{N-1}
}
}\ &
\ell_{x_0}(x_0) + \sum_{t=0}^{N-1} \ell_t(x_t, w_t),
\\
\text{subject to: } & x_{t+1} = Ax_t + w_t,\ t\in{\rm I\!N}_{[0, N-1]}.
\end{aligned}$$
We apply the DP procedure in a forward fashion:
$$\begin{aligned}
\widehat{V}_N^\star
{}={} &
\min_{\substack{x_0,\ldots,x_N \\ w_0,\ldots,w_{N-1}}}
\left\{
\ell_{x_0}(x_0) + \sum_{t=0}^{N-1} \ell_t(x_t, w_t)
\;\middle|\;
\begin{array}{l}
x_{t+1} = Ax_t + w_t,
\\
t\in{\rm I\!N}_{[0, N-1]}
\end{array}
\right\}
\\
{}={} &
\min_{\substack{x_1,\ldots,x_N \\ w_1,\ldots,w_{N-1}}}
\left\{
\underbrace{
\min_{x_0, w_0}
\left\{
\ell_{x_0}(x_0) + \ell_0(x_0, w_0)
{}\mid{}
x_1 = Ax_0 + w_0
\right\}
}_{V_1^{\star}(x_1)}
{}+{}
\sum_{t=1}^{N-1} \ell_t(x_t, w_t)
\;\middle|\;
\begin{array}{l}
x_{t+1} = Ax_t + w_t,
\\
t\in{\rm I\!N}_{[1, N-1]}
\end{array}
\right\}
\\
{}={} &
\min_{\substack{x_1,\ldots,x_N \\ w_1,\ldots,w_{N-1}}}
\left\{
V_1^{\star}(x_1)
{}+{}
\sum_{t=1}^{N-1} \ell_t(x_t, w_t)
\;\middle|\;
\begin{array}{l}
x_{t+1} = Ax_t + w_t,
\\
t\in{\rm I\!N}_{[1, N-1]}
\end{array}
\right\}.
\end{aligned}$$
The forward dynamic programming procedure can be written as
$$\begin{aligned}
V_0^\star(x_0) {}={} & \ell_{x_0}(x_0),
\\
V_{t+1}^{\star}(x_{t+1}) {}={} & \min_{x_t, w_t}
\left\{
V_t^\star(x_t) + \ell_t(x_t, w_t) {}\mid{} x_{t+1} = Ax_t + w_t
\right\},
\\
\widehat{V}_N^\star {}={} & \min_{x_{N}}V_N^\star(x_N).
\end{aligned}$$
We shall prove that the solution of this problem yields the Kalman filter!
The Kalman Filter is a MAP estimator
To show that the forward DP yields the Kalman filter, we need two lemmata. The first is Woodbury's matrix inversion lemma, which we state without proof.
Lemma VII.5 (Woodbury's identity). For matrices of compatible dimensions, and provided all the inverses below exist, the following holds
$$(S+UCV)^{-1} = S^{-1} - S^{-1}U(C^{-1} + VS^{-1}U)^{-1}VS^{-1}.$$
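As a quick sanity check, one can verify Woodbury's identity numerically on random matrices; the dimensions and random seed below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2

M = rng.standard_normal((n, n))
S = M @ M.T + n * np.eye(n)                         # symmetric positive definite, hence invertible
Cmat = rng.standard_normal((k, k)) + 2 * np.eye(k)  # (almost surely) invertible
U = rng.standard_normal((n, k))
V = rng.standard_normal((k, n))

lhs = np.linalg.inv(S + U @ Cmat @ V)
Sinv = np.linalg.inv(S)
rhs = Sinv - Sinv @ U @ np.linalg.inv(np.linalg.inv(Cmat) + V @ Sinv @ U) @ V @ Sinv

print(np.allclose(lhs, rhs))   # True, up to floating-point error
```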
Secondly, we state and prove the following lemma.
Lemma VII.6 (Factorisation of a sum of quadratics). Let $Q_1\in\mathbb{S}_{++}^n$, $Q_2\in\mathbb{S}_{++}^m$, $F\in{\rm I\!R}^{m\times n}$, $\bar{x}\in{\rm I\!R}^n$, $b\in{\rm I\!R}^m$. Define
$$q(x) = \tfrac{1}{2}\|x-\bar{x}\|_{Q_1^{-1}}^{2} + \tfrac{1}{2}\|Fx-b\|_{Q_2^{-1}}^2.$$
Then, for all $x\in{\rm I\!R}^n$,
$$q(x){}={}\tfrac{1}{2}\|x-x^{\star}\|_{W^{-1}}^2 + c,$$
where
$$\begin{aligned}
x^\star {}={} & \bar{x} + Q_1F^\intercal S^{-1}(b-F\bar{x}),
\\
W {}={} & Q_1 - Q_1F^\intercal S^{-1}F Q_1,
\\
c {}={} & \tfrac{1}{2}\|F\bar{x}-b\|_{S^{-1}}^2,
\end{aligned}$$
and where $S = Q_2 + FQ_1F^\intercal$. Moreover,
$$x^\star = \argmin_x q(x),$$
and
$$\min_x q(x) = c.$$
Proof. Since $q$ is a convex quadratic function, it has a minimiser, $x^\star$, which satisfies $\nabla q(x^\star) = 0$. The idea is to use Taylor's expansion to write $q$ as follows:
$$\begin{aligned}
q(x) {}={} & q(x^\star) + \nabla q(x^\star)^\intercal (x-x^\star) + \tfrac{1}{2}\|x-x^\star\|_{\nabla^2 q(x^\star)}^2
\\
{}={} & q(x^\star) + \tfrac{1}{2}\|x-x^\star\|_{\nabla^2 q(x^\star)}^2.
\end{aligned}$$
It suffices to determine $x^\star$, $q(x^\star)$ and $\nabla^2 q(x^\star)$. Let us first determine $x^\star$. We have $\nabla q(x) {}={} Q_1^{-1}(x-\bar{x}) + F^\intercal Q_2^{-1}(Fx-b)$, and we need to solve $\nabla q(x^\star) = 0$, that is,
$$Q_1^{-1}(x^\star-\bar{x}) + F^\intercal Q_2^{-1}(Fx^\star-b) {}={} 0,$$
from which
$$\begin{aligned}
x^\star {}={} & (Q_1^{-1} + F^\intercal Q_2^{-1}F)^{-1}(Q_1^{-1}\bar{x} + F^\intercal Q_2^{-1}b)
\\
{}={} &
\textcolor{red}{\bar{x}}
+
(Q_1^{-1} + F^\intercal Q_2^{-1}F)^{-1}
\left(Q_1^{-1}\bar{x} + F^\intercal Q_2^{-1}b
\textcolor{red}{ {}-{} (Q_1^{-1} + F^\intercal Q_2^{-1}F)\bar{x}}\right)
\\
{}={} &
\bar{x} + (Q_1^{-1} + F^\intercal Q_2^{-1}F)^{-1}F^\intercal Q_2^{-1}(b-F\bar{x}),
\end{aligned}$$
and now we apply Woodbury's matrix identity (Lemma VII.5) to the term $(Q_1^{-1} + F^\intercal Q_2^{-1}F)^{-1}$ to obtain
$$\begin{aligned}
{}={} &
\bar{x} + \textcolor{blue}{(Q_1 - Q_1 F^\intercal (Q_2 + FQ_1F^\intercal)^{-1} FQ_1 )} F^\intercal Q_2^{-1}(b-F\bar{x})
\\
{}={} &
\bar{x} + Q_1 F^\intercal \left[
I - (\underbrace{Q_2 + FQ_1F^\intercal}_{S})^{-1}\underbrace{FQ_1 F^\intercal}_{S - Q_2}
\right]
Q_2^{-1}(b-F\bar{x})
\\
{}={} &
\bar{x} + Q_1 F^\intercal S^{-1}(b-F\bar{x}).
\end{aligned}$$
Next, let us determine $q(x^\star)$:
$$\begin{aligned}
q(x^\star)
{}={} &
\tfrac{1}{2}\|x^\star-\bar{x}\|_{Q_1^{-1}}^{2} + \tfrac{1}{2}\|Fx^\star-b\|_{Q_2^{-1}}^2
\\
{}={} &
\tfrac{1}{2}\|Q_1 F^\intercal S^{-1} (b-F\bar{x})\|_{Q_1^{-1}}^{2}
{}+{}
\tfrac{1}{2}\|F\bar{x} - b + FQ_1F^\intercal S^{-1}(b-F\bar{x})\|_{Q_2^{-1}}^2
\\
{}={} &
\tfrac{1}{2}\|Q_1 F^\intercal S^{-1} (b-F\bar{x})\|_{Q_1^{-1}}^{2}
{}+{}
\tfrac{1}{2}\| \left(I - FQ_1F^\intercal S^{-1}\right)(b-F\bar{x})\|_{Q_2^{-1}}^2
\\
{}={} &
\tfrac{1}{2}\|Q_1 F^\intercal S^{-1} (b-F\bar{x})\|_{Q_1^{-1}}^{2}
{}+{}
\tfrac{1}{2}\| \left(I - (S-Q_2) S^{-1}\right)(b-F\bar{x})\|_{Q_2^{-1}}^2
\\
{}={} &
\tfrac{1}{2}\|Q_1 F^\intercal S^{-1} (b-F\bar{x})\|_{Q_1^{-1}}^{2}
{}+{}
\tfrac{1}{2}\| Q_2 S^{-1}(b-F\bar{x})\|_{Q_2^{-1}}^2
\\
{}={} &
\tfrac{1}{2}(b-F\bar{x})^\intercal S^{-1} FQ_1 F^\intercal S^{-1}(b-F\bar{x})
{}+{}
\tfrac{1}{2}(b-F\bar{x})^\intercal S^{-1} Q_2 S^{-1} (b-F\bar{x})
\\
{}={} &
\tfrac{1}{2}(b-F\bar{x})^\intercal S^{-1} (\underbrace{FQ_1 F^\intercal + Q_2}_{S}) S^{-1}(b-F\bar{x})
\\
{}={} &
\tfrac{1}{2}(b-F\bar{x})^\intercal S^{-1} (b-F\bar{x}) = c.
\end{aligned}$$
Lastly, it is easy to see that
$$\nabla^2 q(x) = Q_1^{-1} + F^\intercal Q_2^{-1} F = W^{-1},$$
where the second equality is due to Lemma VII.5. This completes the proof. $\Box$
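Here is a small numerical sanity check of Lemma VII.6 on randomly generated data (all values below are arbitrary; this is only meant to illustrate the statement, not replace the proof).

```python
import numpy as np

rng = np.random.default_rng(42)
n, m = 3, 2

# Random Q1 (n x n, SPD), Q2 (m x m, SPD), F, b, x_bar
M1 = rng.standard_normal((n, n))
M2 = rng.standard_normal((m, m))
Q1 = M1 @ M1.T + n * np.eye(n)
Q2 = M2 @ M2.T + m * np.eye(m)
F = rng.standard_normal((m, n))
b = rng.standard_normal(m)
x_bar = rng.standard_normal(n)

Q1inv, Q2inv = np.linalg.inv(Q1), np.linalg.inv(Q2)

def quad(v, Minv):
    """Return 0.5 * ||v||^2_{Minv}."""
    return 0.5 * v @ Minv @ v

q = lambda x: quad(x - x_bar, Q1inv) + quad(F @ x - b, Q2inv)

# Quantities defined in Lemma VII.6
S = Q2 + F @ Q1 @ F.T
Sinv = np.linalg.inv(S)
x_star = x_bar + Q1 @ F.T @ Sinv @ (b - F @ x_bar)
W = Q1 - Q1 @ F.T @ Sinv @ F @ Q1
c = quad(F @ x_bar - b, Sinv)

# Check q(x) = 0.5*||x - x_star||^2_{W^{-1}} + c at a few random points
Winv = np.linalg.inv(W)
for _ in range(5):
    x = rng.standard_normal(n)
    print(np.isclose(q(x), quad(x - x_star, Winv) + c))   # True

print(np.isclose(q(x_star), c))   # the minimum value of q is c
```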
We can now state the main result of this section.
Theorem VII.7 (MAP Estimator = KF). Suppose that
$$\begin{aligned}
\ell_{x_0}(x_0) {}={} & \tfrac{1}{2}\|x_0 - \bar{x}_0\|_{P_0^{-1}}^2,
\\
\ell_t(x_t, w_t) {}={} & \tfrac{1}{2}\|y_t-Cx_t\|_{R^{-1}}^2 + \tfrac{1}{2}\|w_t\|_{Q^{-1}}^2.
\end{aligned}$$
Then, the forward DP yields the KF equations.
Indeed, the first step in the forward DP is
$$\begin{aligned}
V_{1}^{\star}(x_{1})
{}={} & \min_{x_0, w_0}
\left\{
V_0^\star(x_0) + \ell_0(x_0, w_0) {}\mid{} x_{1} = Ax_0 + w_0
\right\}
\\
{}={} & \min_{x_0, w_0}
\left\{
\tfrac{1}{2}\|x_0 - \bar{x}_0\|_{P_0^{-1}}^2 + \tfrac{1}{2}\|y_0-Cx_0\|_{R^{-1}}^2 + \tfrac{1}{2}\|w_0\|_{Q^{-1}}^2 {}\mid{} x_{1} = Ax_0 + w_0
\right\}
\\
{}={} & \min_{x_0}
\left\{
\tfrac{1}{2}\|x_0 - \bar{x}_0\|_{P_0^{-1}}^2 + \tfrac{1}{2}\|y_0-Cx_0\|_{R^{-1}}^2 + \tfrac{1}{2}\|x_1 - Ax_0\|_{Q^{-1}}^2
\right\}.
\end{aligned}$$
We can use Lemma VII.6 (with $Q_1 = P_0$, $Q_2 = R$, $F = C$, $b = y_0$ and $\bar{x} = \bar{x}_0$) to write the sum of the first two terms as follows:
$$\tfrac{1}{2}\|x_0 - \bar{x}_0\|_{P_0^{-1}}^2 + \tfrac{1}{2}\|y_0-Cx_0\|_{R^{-1}}^2
{}={}
\tfrac{1}{2}\|x_0-x_0^\star\|_{W_{0}^{-1}}^2 + \tfrac{1}{2}\|y_0 - C\bar{x}_0\|_{S_0^{-1}}^2,$$
where
$$\begin{aligned}
S_0 {}={} & R + CP_0C^\intercal,
\\
W_0 {}={} & P_0 - P_0 C^\intercal S_0^{-1} CP_0,
\\
x_0^\star {}={} & \bar{x}_0 + P_0C^\intercal S_0^{-1} (y_0 - C\bar{x}_0).
\end{aligned}$$
Note that $W_0 = \Sigma_{0\mid 0}$ and $x_0^\star = \hat{x}_{0\mid 0}$; these are precisely the measurement update equations of the Kalman filter. Next, we have
$$V_1^\star(x_1)
{}={}
\underbrace{\tfrac{1}{2}\|y_0 - C\bar{x}_0\|_{S_0^{-1}}^2}_{\text{constant}}
{}+{}
\min_{x_0}
\left\{
\tfrac{1}{2}\|x_0-x_0^\star\|_{W_{0}^{-1}}^2
{}+{}
\tfrac{1}{2}\|x_1 - Ax_0\|_{Q^{-1}}^2
\right\}.$$
Applying Lemma VII.6 once more (now with $Q_1 = W_0$, $Q_2 = Q$, $F = A$, $b = x_1$ and $\bar{x} = x_0^\star$), we have
$$V_1^\star(x_1)
{}={}
\text{constant}
{}+{}
\tfrac{1}{2}\|x_1 - A\hat{x}_{0\mid 0}\|^2_{\overline{S}_{0}^{-1}},$$
where $\hat{x}_{1\mid 0} = A\hat{x}_{0\mid 0}$ and
$$\overline{S}_{0} = Q + AW_0A^\intercal = \Sigma_{1\mid 0}$$
(compare this with the time update step of the Kalman filter).
Note that $V_1^\star$ has the same form as $V_0^\star$ (a quadratic in its argument plus a constant), so the same procedure can be repeated at every step; this yields exactly the Kalman filter iterates.
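To see the equivalence in action, here is a sketch that runs the recursion we just derived (measurement update with $S_t$ and $W_t$, followed by the time update) and compares its one-step prediction $\hat{x}_{N\mid N-1}$ with the $x_N$-component of the FIE solution; since the posterior is Gaussian, its joint MAP coincides with its mean, so the two agree to numerical precision. The helper `solve_fie` and all matrices refer to the illustrative sketches earlier in this post.

```python
import numpy as np

def kalman_predict(A, C, Q, R, P0, x_bar0, outputs):
    """Run the recursion derived above and return x_hat_{N|N-1} and Sigma_{N|N-1}."""
    x_hat, Sigma = x_bar0, P0                       # x_hat_{0|-1}, Sigma_{0|-1}
    for y in outputs:
        # Measurement update (cf. S_0, W_0 and x_0^star above)
        S = R + C @ Sigma @ C.T
        K = Sigma @ C.T @ np.linalg.inv(S)
        x_hat = x_hat + K @ (y - C @ x_hat)         # x_hat_{t|t}
        Sigma = Sigma - K @ C @ Sigma               # Sigma_{t|t} = W_t
        # Time update (cf. S-bar_0 above)
        x_hat = A @ x_hat                           # x_hat_{t+1|t}
        Sigma = A @ Sigma @ A.T + Q                 # Sigma_{t+1|t}
    return x_hat, Sigma

# Using the data and solve_fie from the earlier sketches:
# x_fie = solve_fie(A, C, Q, R, P0, x_bar0, outputs)
# x_pred, _ = kalman_predict(A, C, Q, R, P0, x_bar0, outputs)
# print(np.allclose(x_fie[-1], x_pred))   # the two x_N estimates coincide
```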