The auto-regressive block is defined by
$$\Phi(B) y_t = \epsilon_t$$
where:
$$\Phi(B) = 1 + \varphi_1 B + \cdots + \varphi_p B^p$$
is an auto-regressive polynomial.
Let $\gamma_i$ be the autocovariances of the process. Using this notation, the state-space block can be written as follows:
$$ \alpha_t= \begin{pmatrix} y_t \\ y_{t-1} \\ \vdots \\ y_{t-p+1} \end{pmatrix}$$
The state block can be extended with additional lags, which can be useful in complex (multivariate) models.
$$ T_t = \begin{pmatrix}-\varphi_1 & -\varphi_2 & \cdots & -\varphi_p \\ 1 & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots\\ 0 & 0 & 1 & 0 \end{pmatrix}$$
$$ S_t = \sigma_{ar} \begin{pmatrix} 1 \\ 0 \\ \vdots\\ 0 \end{pmatrix} $$
$$V_t = S S'$$
$$ Z_t = \begin{pmatrix} 1 & 0 & \cdots & 0\end{pmatrix}$$
$$ \alpha_{-1} = \begin{pmatrix}0 \\ 0 \\ \vdots\\ 0 \end{pmatrix} $$
$$P_* = \Omega$$
$\Omega$ is the unconditional covariance of the state array; it can easily be derived from the autocovariances of the process (which in turn follow from the MA representation). Since the state collects successive lags of $y_t$, $\Omega$ is simply the Toeplitz matrix of autocovariances:
$$\Omega(i, j) = \gamma_{|i-j|}, \qquad \text{in particular} \quad \Omega(i, 0) = \gamma_i$$
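To make the construction concrete, the following base-R sketch assembles these matrices for a given auto-regressive polynomial. It is only an illustration of the formulas above, not the code of the package: the helper name `ar_block_matrices` is ours, and the autocovariances are approximated from a truncated $\psi$-expansion via `stats::ARMAtoMA` (which expects the coefficients of $y_t = a_1 y_{t-1} + \cdots + e_t$, hence the sign flip).

```r
# Illustrative sketch (not the package implementation) of the AR block,
# with Phi(B) y_t = eps_t and Phi(B) = 1 + phi_1 B + ... + phi_p B^p (p >= 2).
ar_block_matrices <- function(phi, sigma_ar = 1, lag_max = 500) {
  p <- length(phi)
  # companion transition matrix: first row -phi, shifted identity below
  T_mat <- rbind(-phi, cbind(diag(p - 1), 0))
  S <- sigma_ar * c(1, rep(0, p - 1))
  Z <- c(1, rep(0, p - 1))
  # psi-weights of the MA representation; stats::ARMAtoMA uses the
  # y_t = a_1 y_{t-1} + ... convention, hence ar = -phi
  psi <- c(1, ARMAtoMA(ar = -phi, ma = numeric(), lag.max = lag_max))
  # autocovariances gamma_k ~ sigma^2 * sum_i psi_i psi_{i+k} (truncated)
  gamma <- sapply(0:(p - 1), function(k) {
    n <- length(psi) - k
    sigma_ar^2 * sum(psi[seq_len(n)] * psi[k + seq_len(n)])
  })
  # Omega is the Toeplitz matrix of autocovariances of (y_t, ..., y_{t-p+1})
  list(T = T_mat, S = S, Z = Z, P0 = toeplitz(gamma))
}

ar_block_matrices(phi = c(-.2, .4, -.1))
```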
An alternative representation of the auto-regressive block is useful when the model has to reflect expectations. The process is defined as above:
$$\Phi(B) y_t = \epsilon_t$$
where:
$$\Phi(B) = 1 + \varphi_1 B + \cdots + \varphi_p B^p$$
is an auto-regressive polynomial. However, modeling data that refers to expectations may require including conditional expectations in the state vector. Thus, the same type of representation that is used for the ARMA model will be considered here.
Let $\gamma_i$ be the autocovariances of the model. We define the size of the core state vector as $r_0 = \max(p, h + 1)$, where $h$ is the forecast horizon desired by the user, and we write $s = r_0 - 1$. If the user also requests $nlags$ lagged values (zero by default), the size of the state vector becomes $r = r_0 + nlags$.
Using this notation, the state-space model can be written as follows:
$$ \alpha_t= \begin{pmatrix} y_{t-nlags} \\ \vdots \\ y_{t-1} \\ \hline y_{t} \\ y_{t+1|t} \\ \vdots \\ y_{t+h|t} \end{pmatrix}$$
where $y_{t+i|t}$ is the orthogonal projection of $y_{t+i}$ on the subspace generated by $\{y(s) : s \leq t\}$. Thus, it is the forecast function with respect to the semi-infinite sample. We also have $y_{t+i|t} = \sum_{j=i}^\infty {\psi_j \epsilon_{t+i-j}}$.
$$ T_t = \begin{pmatrix} 0 &1 & 0 & \cdots & 0 \\0& 0 & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 1\\ -\varphi_r & \cdots & \cdots & \cdots &-\varphi_1 \end{pmatrix}$$
with $\varphi_j = 0$ for $j > p$.
$$ S_t = \sigma_{ar} \begin{pmatrix} 0 \\ \vdots \\ 0\\ \hline 1 \\ \psi_1 \\ \vdots\\ \psi_s \end{pmatrix} $$
$$V_t = S S'$$
$$ Z_t = \begin{pmatrix} 0 & \cdots &0 & | & 1 & 0 & \cdots & 0\end{pmatrix}$$
$$ \alpha_{-1} = \begin{pmatrix} 0 \\ \vdots \\ 0\\ \hline 0 \\ 0 \\ \vdots\\ 0 \end{pmatrix} $$
$$P_* = \Omega$$
$\Omega$ is the unconditional covariance of the state array; it can easily be derived from the MA representation. We have:
$$\Omega(i, 0) = \gamma_i$$
$$\Omega(i, j) = \Omega(i - 1, j - 1) - \psi_{i-1} \psi_{j-1}$$
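To illustrate how the pieces of this representation fit together, here is a rough base-R sketch that builds $T$, $S$ and $Z$ from $\varphi$, the horizon $h$ and the number of lags. The helper name and its exact layout are ours and simply mirror the formulas above (they are not the package's API); the $\psi$-weights are obtained from `stats::ARMAtoMA`, which expects the coefficients of $y_t = a_1 y_{t-1} + \cdots + e_t$, hence the sign flip.

```r
# Illustrative sketch (not the package implementation) of the expectation-based
# AR block: nlags lagged values on top of the state, then y_t and its forecasts.
ar_expectation_block <- function(phi, h, nlags = 0, sigma_ar = 1) {
  p  <- length(phi)
  r0 <- max(p, h + 1)   # current value + conditional forecasts
  r  <- r0 + nlags      # total state dimension
  # transition: shift the state up by one place, AR recursion in the last row
  T_mat <- matrix(0, r, r)
  T_mat[cbind(1:(r - 1), 2:r)] <- 1
  T_mat[r, ] <- c(rep(0, r - p), -rev(phi))   # phi_j = 0 for j > p
  # innovation loadings: zero on the lags, (1, psi_1, ..., psi_{r0-1}) below
  psi <- c(1, ARMAtoMA(ar = -phi, ma = numeric(), lag.max = r0 - 1))
  S <- sigma_ar * c(rep(0, nlags), psi)
  # the observation picks y_t, the element located just after the lags
  Z <- c(rep(0, nlags), 1, rep(0, r0 - 1))
  list(T = T_mat, S = S, Z = Z)
}

ar_expectation_block(phi = c(-.2, .4, -.1), h = 4, nlags = 2)
```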
The ARMA block is defined by
$$\Phi(B) y_t = \Theta(B) \epsilon_t$$
where:
$$\Phi(B) = 1 + \varphi_1 B + \cdots + \varphi_p B^p$$
$$\Theta(B) = 1 + \theta_1 B + \cdots + \theta_q B^q$$
are the auto-regressive and the moving average polynomials.
The MA representation of the process is $y_t=\sum_{i=0}^\infty {\psi_i \epsilon_{t-i}}$. Let $\gamma_i$ be the autocovariances of the model. We also define $r = \max(p, q + 1)$ and $s = r - 1$.
Using this notation, the state-space block can be written as follows:
$$ \alpha_t= \begin{pmatrix} y_t \\ y_{t+1|t} \\ \vdots \\ y_{t+s|t} \end{pmatrix}$$
where $y_{t+i|t}$ is the orthogonal projection of $y_{t+i}$ on the subspace generated by $\{y(s) : s \leq t\}$. Thus, it is the forecast function with respect to the semi-infinite sample. We also have $y_{t+i|t} = \sum_{j=i}^\infty {\psi_j \epsilon_{t+i-j}}$.
$$ T_t = \begin{pmatrix}0 &1 & 0 & \cdots & 0 \\0& 0 & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 1\\ -\varphi_r & \cdots & \cdots & \cdots &-\varphi_1 \end{pmatrix}$$
with $\varphi_j = 0$ for $j > p$.
$$ S_t = \begin{pmatrix}1 \\ \psi_1 \\ \vdots\\ \psi_s \end{pmatrix} $$
$$V_t = S S'$$
$$ Z_t = \begin{pmatrix} 1 & 0 & \cdots & 0\end{pmatrix}$$
$$ \alpha_{-1} = \begin{pmatrix}0 \\ 0 \\ \vdots\\ 0 \end{pmatrix} $$
$$P_* = \Omega$$
$\Omega$ is the unconditional covariance of the state array; it can easily be derived from the MA representation. We have:
$$\Omega(i, 0) = \gamma_i$$
$$\Omega(i, j) = \Omega(i - 1, j - 1) - \psi_{i-1} \psi_{j-1}$$
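These matrices can also be sketched in base R, as below. This is only a hand-rolled illustration of the formulas above, not the code of the package: the helper name is ours, the autocovariances are approximated by truncating the $\psi$-expansion, and `stats::ARMAtoMA` again requires the sign flip on the AR coefficients. The result can be compared with the output of the package shown next.

```r
# Illustrative sketch (not the package implementation) of the ARMA block,
# with Phi(B) = 1 + phi_1 B + ... and Theta(B) = 1 + theta_1 B + ...
arma_block_matrices <- function(phi, theta, lag_max = 500) {
  p <- length(phi)
  r <- max(p, length(theta) + 1)
  s <- r - 1
  # transition: shift up by one place, AR recursion (phi_j = 0 for j > p) last
  T_mat <- matrix(0, r, r)
  T_mat[cbind(1:s, 2:r)] <- 1
  T_mat[r, ] <- c(rep(0, r - p), -rev(phi))
  # psi-weights of the MA representation (sign flip for stats::ARMAtoMA)
  psi <- c(1, ARMAtoMA(ar = -phi, ma = theta, lag.max = lag_max))
  S <- psi[1:(s + 1)]
  Z <- c(1, rep(0, s))
  # Omega: first column holds gamma_i, then
  # Omega(i, j) = Omega(i-1, j-1) - psi_{i-1} psi_{j-1}
  omega <- matrix(0, r, r)
  for (i in 0:s) {
    n <- length(psi) - i
    omega[i + 1, 1] <- sum(psi[seq_len(n)] * psi[i + seq_len(n)])
  }
  omega[1, ] <- omega[, 1]
  for (i in 1:s) for (j in 1:s)
    omega[i + 1, j + 1] <- omega[i, j] - psi[i] * psi[j]
  list(T = T_mat, S = S, Z = Z, P0 = omega)
}
```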
b_arma <- arma("arma", ar = c(-.2, .4, -.1), ma = c(.3, .6))
knit_print(block_t(b_arma))
#> [,1] [,2] [,3]
#> [1,] 0.0 1.0 0.0
#> [2,] 0.0 0.0 1.0
#> [3,] 0.1 -0.4 0.2
knit_print(block_p0(b_arma))
#> [,1] [,2] [,3]
#> [1,] 1.3501359 0.6394319 0.2517752
#> [2,] 0.6394319 0.3501359 0.1394319
#> [3,] 0.2517752 0.1394319 0.1001359
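For the same coefficients, `arma_block_matrices(phi = c(-.2, .4, -.1), theta = c(.3, .6))` from the sketch above should reproduce these two matrices, up to the error introduced by truncating the $\psi$-expansion when approximating the autocovariances.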