
Chapter 25. State Space Models and the Kalman Filter

The EViews sspace (state space) object provides a straightforward, easy-to-use interface for specifying, estimating, and working with the results of your single or multiple equation dynamic system. EViews provides a wide range of specification, filtering, smoothing, and other forecasting tools which aid you in working with dynamic systems specified in state space form.

A wide range of time series models, including the classical linear regression model and ARIMA models, can be written and estimated as special cases of a state space specification. State space models have been applied in the econometrics literature to model unobserved variables: (rational) expectations, measurement errors, missing observations, permanent income, unobserved components (cycles and trends), and the non-accelerating inflation rate of unemployment (NAIRU). Extensive surveys of applications of state space models in econometrics can be found in Hamilton (1994a, Chapter 13; 1994b) and Harvey (1989, Chapters 3, 4).

There are two main benefits to representing a dynamic system in state space form. First, the state space allows unobserved variables (known as the state variables) to be incorporated into, and estimated along with, the observable model. Second, state space models can be analyzed using a powerful recursive algorithm known as the Kalman (Bucy) filter. The Kalman filter algorithm has been used, among other things, to compute exact, finite sample forecasts for Gaussian ARMA models, multivariate (vector) ARMA models, MIMIC (multiple indicators and multiple causes), Markov switching models, and time varying (random) coefficient models.

Those of you who have used early versions of the sspace object will note that much was changed with the EViews 4 release. We strongly recommend that you read “Converting from Version 3 Sspace” on page 775 before loading existing workfiles and before beginning to work with the new state space routines.

Background

We present here a very brief discussion of the specification and estimation of a linear state space model. Those desiring greater detail are directed to Harvey (1989), Hamilton (1994a, Chapter 13, 1994b), and especially the excellent treatment of Koopman, Shephard and Doornik (1999).

Specification

A linear state space representation of the dynamics of the n × 1 vector yt is given by the system of equations:


$$y_t = c_t + Z_t \alpha_t + \epsilon_t \tag{25.1}$$

$$\alpha_{t+1} = d_t + T_t \alpha_t + v_t \tag{25.2}$$

where $\alpha_t$ is an m × 1 vector of possibly unobserved state variables, where $c_t$, $Z_t$, $d_t$ and $T_t$ are conformable vectors and matrices, and where $\epsilon_t$ and $v_t$ are vectors of mean zero, Gaussian disturbances. Note that the unobserved state vector is assumed to move over time as a first-order vector autoregression.

We will refer to the first set of equations as the “signal” or “observation” equations and the second set as the “state” or “transition” equations. The disturbance vectors $\epsilon_t$ and $v_t$ are assumed to be serially independent, with contemporaneous variance structure:

$$\Omega_t = \mathrm{var}\begin{pmatrix} \epsilon_t \\ v_t \end{pmatrix} = \begin{pmatrix} H_t & G_t \\ G_t' & Q_t \end{pmatrix} \tag{25.3}$$

where $H_t$ is an n × n symmetric variance matrix, $Q_t$ is an m × m symmetric variance matrix, and $G_t$ is an n × m matrix of covariances.

In the discussion that follows, we will generalize the specification given in (25.1)–(25.3) by allowing the system matrices and vectors $\Xi_t \equiv \{c_t, d_t, Z_t, T_t, H_t, Q_t, G_t\}$ to depend upon observable explanatory variables $X_t$ and unobservable parameters $\theta$. Estimation of the parameters $\theta$ is discussed in “Estimation” beginning on page 757.
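To fix ideas, the system matrices can be collected in a simple container. The following Python sketch is purely illustrative and is not EViews syntax: it assumes the time-invariant special case in which the matrices do not vary with $t$, and the class name and local level example are hypothetical.

```python
import numpy as np
from dataclasses import dataclass

# Minimal container for the system matrices of (25.1)-(25.3), specialized
# to the time-invariant case. Field names follow the chapter's notation.
@dataclass
class StateSpace:
    c: np.ndarray   # n x 1 signal intercept
    Z: np.ndarray   # n x m signal matrix
    d: np.ndarray   # m x 1 state intercept
    T: np.ndarray   # m x m transition matrix
    H: np.ndarray   # n x n var(eps_t)
    Q: np.ndarray   # m x m var(v_t)
    G: np.ndarray = None  # n x m cov(eps_t, v_t); None is treated as zero

# Example: a local level model, y_t = alpha_t + eps_t with a random walk
# state alpha_{t+1} = alpha_t + v_t. The variance values are arbitrary.
local_level = StateSpace(
    c=np.zeros((1, 1)), Z=np.ones((1, 1)),
    d=np.zeros((1, 1)), T=np.ones((1, 1)),
    H=np.array([[0.8]]), Q=np.array([[0.2]]),
)
```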

Filtering

Consider the conditional distribution of the state vector $\alpha_t$ given information available at time $s$. We can define the mean and variance matrix of the conditional distribution as:

$$a_{t|s} \equiv E_s(\alpha_t) \tag{25.4}$$

$$P_{t|s} \equiv E_s[(\alpha_t - a_{t|s})(\alpha_t - a_{t|s})'] \tag{25.5}$$

where the subscript on the expectation operator indicates that expectations are taken using the conditional distribution for that period.

One important conditional distribution is obtained by setting $s = t-1$, so that we obtain the one-step ahead mean $a_{t|t-1}$ and one-step ahead variance $P_{t|t-1}$ of the states $\alpha_t$. Under the Gaussian error assumption, $a_{t|t-1}$ is also the minimum mean square error estimator of $\alpha_t$ and $P_{t|t-1}$ is the mean square error (MSE) of $a_{t|t-1}$. If the normality assumption is dropped, $a_{t|t-1}$ is still the minimum mean square linear estimator of $\alpha_t$.

Given the one-step ahead state conditional mean, we can also form the (linear) minimum MSE one-step ahead estimate of $y_t$:

$$\tilde{y}_{t|t-1} \equiv E_{t-1}(y_t) = E(y_t \mid a_{t|t-1}) = c_t + Z_t a_{t|t-1} \tag{25.6}$$


The one-step ahead prediction error is given by:

$$\tilde{\epsilon}_t = \tilde{\epsilon}_{t|t-1} \equiv y_t - \tilde{y}_{t|t-1} \tag{25.7}$$

and the prediction error variance is defined as:

$$\tilde{F}_t = \tilde{F}_{t|t-1} \equiv \mathrm{var}(\tilde{\epsilon}_{t|t-1}) = Z_t P_{t|t-1} Z_t' + H_t \tag{25.8}$$

The Kalman (Bucy) filter is a recursive algorithm for sequentially updating the one-step ahead estimate of the state mean and variance given new information. Details on the recursion are provided in the references above. For our purposes, it is sufficient to note that given initial values for the state mean and covariance, values for the system matrices $\Xi_t$, and observations on $y_t$, the Kalman filter may be used to compute one-step ahead estimates of the state and the associated mean square error matrix, $\{a_{t|t-1}, P_{t|t-1}\}$, the contemporaneous or filtered state mean and variance, $\{a_t, P_t\}$, and the one-step ahead prediction, prediction error, and prediction error variance, $\{\tilde{y}_{t|t-1}, \tilde{\epsilon}_{t|t-1}, \tilde{F}_{t|t-1}\}$. Note that we may also obtain the standardized prediction residual, $e_{t|t-1}$, by dividing $\tilde{\epsilon}_{t|t-1}$ by the square root of the corresponding diagonal element of $\tilde{F}_{t|t-1}$.
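To make the recursion concrete, here is a minimal numpy sketch of one filtering pass for the time-invariant container above. It is not the EViews implementation: it assumes uncorrelated disturbances ($G_t = 0$) and uses a crude large-variance initialization in place of a proper diffuse prior.

```python
def kalman_filter(ss, y):
    """Run the Kalman filter on a (T x n) array of signals y, returning the
    one-step ahead state means/variances, the filtered means/variances, and
    the one-step ahead prediction errors and their variances."""
    T_obs, n = y.shape
    m = ss.T.shape[0]
    a_pred = np.zeros((T_obs, m, 1)); P_pred = np.zeros((T_obs, m, m))
    a_filt = np.zeros((T_obs, m, 1)); P_filt = np.zeros((T_obs, m, m))
    err = np.zeros((T_obs, n, 1));    F = np.zeros((T_obs, n, n))

    a, P = np.zeros((m, 1)), 1e6 * np.eye(m)   # crude diffuse-style start
    for t in range(T_obs):
        a_pred[t], P_pred[t] = a, P            # a_{t|t-1}, P_{t|t-1}
        err[t] = y[t].reshape(-1, 1) - (ss.c + ss.Z @ a)  # error (25.7)
        F[t] = ss.Z @ P @ ss.Z.T + ss.H        # error variance (25.8)
        K = P @ ss.Z.T @ np.linalg.inv(F[t])   # Kalman gain
        a_filt[t] = a + K @ err[t]             # filtered state mean a_t
        P_filt[t] = P - K @ ss.Z @ P           # filtered state variance P_t
        a = ss.d + ss.T @ a_filt[t]            # predict next state via (25.2)
        P = ss.T @ P_filt[t] @ ss.T.T + ss.Q
    return a_pred, P_pred, a_filt, P_filt, err, F
```

Standardized prediction residuals can then be formed by dividing each element of `err[t]` by the square root of the corresponding diagonal element of `F[t]`.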

Fixed-Interval Smoothing

Suppose that we observe the sequence of data up to time period T . The process of using this information to form expectations at any time period up to T is known as fixed-interval smoothing. Despite the fact that there are a variety of other distinct forms of smoothing (e.g., fixed-point, fixed-lag), we will use the term smoothing to refer to fixed-interval smoothing.

Additional details on the smoothing procedure are provided in the references given above. For now, note that smoothing uses all of the information in the sample to provide smoothed estimates of the states, $\hat{\alpha}_t \equiv a_{t|T} \equiv E_T(\alpha_t)$, and smoothed estimates of the state variances, $V_t \equiv \mathrm{var}_T(\alpha_t)$. The matrix $V_t$ may also be interpreted as the MSE of the smoothed state estimate $\hat{\alpha}_t$.

As with the one-step ahead states and variances above, we may use the smoothed values to form smoothed estimates of the signal variables,

$$\hat{y}_t \equiv E(y_t \mid \hat{\alpha}_t) = c_t + Z_t \hat{\alpha}_t \tag{25.9}$$

and to compute the variance of the smoothed signal estimates:

$$S_t \equiv \mathrm{var}(\hat{y}_{t|T}) = Z_t V_t Z_t' \tag{25.10}$$

 

Lastly, the smoothing procedure allows us to compute smoothed disturbance estimates, $\hat{\epsilon}_{t|T} \equiv E_T(\epsilon_t)$ and $\hat{v}_{t|T} \equiv E_T(v_t)$, and a corresponding smoothed disturbance variance matrix:

$$\hat{\Omega}_t = \mathrm{var}_T\begin{pmatrix} \epsilon_t \\ v_t \end{pmatrix} \tag{25.11}$$

Dividing the smoothed disturbance estimates by the square roots of the corresponding diagonal elements of the smoothed variance matrix yields the standardized smoothed disturbance estimates $\hat{e}_t$ and $\hat{\nu}_t$.
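The backward pass can be sketched with the classical fixed-interval (Rauch-Tung-Striebel) recursion, which combines the filtered output above with the smoothed quantities one period ahead. This is an illustrative sketch under the same assumptions as the filter, not necessarily the algorithm EViews uses internally; it returns smoothed states and variances only, not the smoothed disturbances.

```python
def kalman_smooth(ss, y):
    """Fixed-interval smoothing: return smoothed state means a_{t|T} and
    smoothed state variances V_t, built on the filter sketch above."""
    a_pred, P_pred, a_filt, P_filt, _, _ = kalman_filter(ss, y)
    T_obs = y.shape[0]
    a_sm, V = a_filt.copy(), P_filt.copy()   # at t = T-1, smoothed = filtered
    for t in range(T_obs - 2, -1, -1):
        # Backward gain: how much the surprise in the smoothed t+1 state
        # revises the filtered estimate at t.
        J = P_filt[t] @ ss.T.T @ np.linalg.inv(P_pred[t + 1])
        a_sm[t] = a_filt[t] + J @ (a_sm[t + 1] - a_pred[t + 1])
        V[t] = P_filt[t] + J @ (V[t + 1] - P_pred[t + 1]) @ J.T
    return a_sm, V
```

Smoothed signal estimates as in (25.9) and (25.10) then follow as `ss.c + ss.Z @ a_sm[t]`, with variance `ss.Z @ V[t] @ ss.Z.T`.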

Forecasting

There are a variety of types of forecasting which may be performed with state space models. These methods differ primarily in what information is used and how it is used. We will focus on the three methods that are supported by the EViews built-in forecasting routines.

n-Step Ahead Forecasting

Earlier, we examined the notion of one-step ahead prediction. Consider now the notion of multi-step ahead prediction of observations, in which we take a fixed set of information available at a given period, and forecast several periods ahead. Modifying slightly the expressions in (25.4)–(25.8) yields the n-step ahead state conditional mean and variance:

$$a_{t+n|t} \equiv E_t(\alpha_{t+n}), \tag{25.12}$$

$$P_{t+n|t} \equiv E_t[(\alpha_{t+n} - a_{t+n|t})(\alpha_{t+n} - a_{t+n|t})'] \tag{25.13}$$

the n-step ahead forecast,

$$\tilde{y}_{t+n|t} \equiv E_t(y_{t+n}) = c_{t+n} + Z_{t+n} a_{t+n|t} \tag{25.14}$$

and the corresponding n-step ahead forecast MSE matrix:

$$\tilde{F}_{t+n|t} \equiv \mathrm{MSE}(\tilde{y}_{t+n|t}) = Z_{t+n} P_{t+n|t} Z_{t+n}' + H_{t+n} \tag{25.15}$$

for $n = 1, 2, \ldots$. As before, $a_{t+n|t}$ may also be interpreted as the minimum MSE estimate of $\alpha_{t+n}$ based on the information set available at time $t$, and $P_{t+n|t}$ is the MSE of the estimate.

It is worth emphasizing that the definitions given above for the forecast MSE matrices do not account for extra variability introduced in the estimation of any unknown parameters $\theta$. In this setting, the $\tilde{F}_{t+n|t}$ will understate the true variability of the forecast, and should be viewed as being computed conditional on the specific values of the estimated parameters.

It is also worth noting that the n-step ahead forecasts may be computed using a slightly modified version of the basic Kalman recursion (Harvey 1989). To forecast at period $s = t + n$, simply initialize a Kalman filter at time $t+1$ with the values of the predicted states and state covariances using information at time $t$, and run the filter forward $n-1$ additional periods using no additional signal information. This procedure is repeated for each observation in the forecast sample, $s = t+1, \ldots, t+n$.
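In code, this amounts to iterating the transition equation forward from the time-$t$ state with no measurement updates. The sketch below (same illustrative assumptions as above, with the system matrices held constant over the forecast horizon) returns the forecasts (25.14) and MSE matrices (25.15) for horizons $1, \ldots, n$.

```python
def forecast(ss, a_t, P_t, n):
    """n-step ahead forecasts from a filtered state (a_t, P_t): run the
    state recursion forward without new signal information, as in
    (25.12)-(25.15). Parameter uncertainty is ignored, as noted in the
    text."""
    a, P, out = a_t, P_t, []
    for _ in range(n):
        a = ss.d + ss.T @ a               # a_{t+j|t}   (25.12)
        P = ss.T @ P @ ss.T.T + ss.Q      # P_{t+j|t}   (25.13)
        y_hat = ss.c + ss.Z @ a           # forecast    (25.14)
        F = ss.Z @ P @ ss.Z.T + ss.H      # forecast MSE (25.15)
        out.append((y_hat, F))
    return out
```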

Dynamic Forecasting

The concept of dynamic forecasting should be familiar to you from other EViews estimation objects. In dynamic forecasting, we start at the beginning of the forecast sample $t$, and compute a complete set of n-period ahead forecasts for each period in the forecast interval. Thus, if we wish to start at period $t$ and forecast dynamically to $t+n$, we would compute a one-step ahead forecast for $t+1$, a two-step ahead forecast for $t+2$, and so forth, up to an $n$-step ahead forecast for $t+n$. It may be useful to note that as with n-step ahead forecasting, we simply initialize a Kalman filter at time $t+1$ and run the filter forward additional periods using no additional signal information. For dynamic forecasting, however, only one n-step ahead forecast is required to compute all of the forecast values, since the information set is not updated from the beginning of the forecast period.
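Because the information set is frozen at the start of the forecast period, a dynamic forecast reduces to a single n-step pass. A hypothetical usage of the sketches above, where `y_hist` is simulated stand-in data, not data from the manual:

```python
# Simulate a stand-in history for the local level example, filter it, and
# produce the dynamic forecast path for the next 8 periods in one pass.
rng = np.random.default_rng(0)
y_hist = np.cumsum(rng.standard_normal((100, 1)), axis=0)
_, _, a_filt, P_filt, _, _ = kalman_filter(local_level, y_hist)
dynamic_path = forecast(local_level, a_filt[-1], P_filt[-1], n=8)
```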

Smoothed Forecasting

Alternatively, we can compute smoothed forecasts which use all available signal data over the forecast sample (for example, $a_{t+n|t+n}$). These forward looking forecasts may be computed by initializing the states at the start of the forecast period, and performing a Kalman smooth over the entire forecast period using all relevant signal data. This technique is useful in settings where information on the entire path of the signals is used to interpolate values throughout the forecast sample.

We make one final comment about the forecasting methods described above. For traditional n-step ahead and dynamic forecasting, the states are typically initialized using the one-step ahead forecasts of the states and variances at the start of the forecast window. For smoothed forecasts, one would generally initialize the forecasts using the corresponding smoothed values of states and variances. There may, however, be situations where you wish to choose a different set of initial values for the forecast filter or smoother. The EViews forecasting routines (described in “State Space Procedures” beginning on page 771) provide you with considerable control over these initial settings. Be aware, however, that the interpretation of the forecasts in terms of the available information will change if you choose alternative settings.

Estimation

To implement the Kalman filter and the fixed-interval smoother, we must first replace any unknown elements of the system matrices by their estimates. Under the assumption that the $\epsilon_t$ and $v_t$ are Gaussian, the sample log likelihood:

$$\log L(\theta) = -\frac{nT}{2}\log 2\pi - \frac{1}{2}\sum_t \log\left|\tilde{F}_t(\theta)\right| - \frac{1}{2}\sum_t \tilde{\epsilon}_t(\theta)'\,\tilde{F}_t(\theta)^{-1}\tilde{\epsilon}_t(\theta) \tag{25.16}$$

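This likelihood, built from the one-step ahead prediction errors $\tilde{\epsilon}_t$ and variances $\tilde{F}_t$, can be assembled directly from the filter output. A minimal sketch under the same illustrative assumptions as the filter above; in particular, it ignores the initialization issues a production evaluator must handle:

```python
def log_likelihood(ss, y):
    """Gaussian sample log likelihood (25.16), assembled from the one-step
    ahead prediction errors and variances produced by the filter sketch."""
    _, _, _, _, err, F = kalman_filter(ss, y)
    T_obs, n = y.shape
    ll = -0.5 * n * T_obs * np.log(2.0 * np.pi)
    for t in range(T_obs):
        ll -= 0.5 * np.log(np.linalg.det(F[t]))
        ll -= 0.5 * (err[t].T @ np.linalg.inv(F[t]) @ err[t]).item()
    return ll
```

In practice, the unknown parameters $\theta$ enter through the system matrices, and estimates are obtained by maximizing this function numerically with respect to $\theta$.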