
Chapter 33. State Space Models and the Kalman Filter
The EViews sspace (state space) object provides a straightforward, easy-to-use interface for specifying, estimating, and working with the results of your single or multiple equation dynamic system. EViews provides a wide range of specification, filtering, smoothing, and other forecasting tools which aid you in working with dynamic systems specified in state space form.
A wide range of time series models, including the classical linear regression model and ARIMA models, can be written and estimated as special cases of a state space specification. State space models have been applied in the econometrics literature to model unobserved variables: (rational) expectations, measurement errors, missing observations, permanent income, unobserved components (cycles and trends), and the non-accelerating inflation rate of unemployment (NAIRU). Extensive surveys of applications of state space models in econometrics can be found in Hamilton (1994a, Chapter 13; 1994b) and Harvey (1989, Chapters 3, 4).
There are two main benefits to representing a dynamic system in state space form. First, the state space allows unobserved variables (known as the state variables) to be incorporated into, and estimated along with, the observable model. Second, state space models can be analyzed using a powerful recursive algorithm known as the Kalman (Bucy) filter. The Kalman filter algorithm has been used, among other things, to compute exact, finite sample forecasts for Gaussian ARMA models, multivariate (vector) ARMA models, MIMIC (multiple indicators and multiple causes), Markov switching models, and time varying (random) coefficient models.
Those of you who have used early versions of the sspace object will note that much was changed with the EViews 4 release. We strongly recommend that you read “Converting from Version 3 Sspace” on page 509 before loading existing workfiles and before beginning to work with the new state space routines.
Background
We present here a very brief discussion of the specification and estimation of a linear state space model. Those desiring greater detail are directed to Harvey (1989), Hamilton (1994a, Chapter 13; 1994b), and especially the excellent treatment of Koopman, Shephard and Doornik (1999).
Specification
A linear state space representation of the dynamics of the $n \times 1$ vector $y_t$ is given by the system of equations:

$$y_t = c_t + Z_t \alpha_t + \epsilon_t \qquad (33.1)$$

$$\alpha_{t+1} = d_t + T_t \alpha_t + \nu_t \qquad (33.2)$$
where $\alpha_t$ is an $m \times 1$ vector of possibly unobserved state variables, where $c_t$, $Z_t$, $d_t$, and $T_t$ are conformable vectors and matrices, and where $\epsilon_t$ and $\nu_t$ are vectors of mean zero, Gaussian disturbances. Note that the unobserved state vector is assumed to move over time as a first-order vector autoregression.
We will refer to the first set of equations as the "signal" or "observation" equations and the second set as the "state" or "transition" equations. The disturbance vectors $\epsilon_t$ and $\nu_t$ are assumed to be serially independent, with contemporaneous variance structure:

$$\Omega_t = \mathrm{var}\begin{bmatrix} \epsilon_t \\ \nu_t \end{bmatrix} = \begin{bmatrix} H_t & G_t \\ G_t' & Q_t \end{bmatrix} \qquad (33.3)$$

where $H_t$ is an $n \times n$ symmetric variance matrix, $Q_t$ is an $m \times m$ symmetric variance matrix, and $G_t$ is an $n \times m$ matrix of covariances.
In the discussion that follows, we will generalize the specification given in (33.1)–(33.3) by allowing the system matrices and vectors $\Psi_t \equiv \{c_t, d_t, Z_t, T_t, H_t, Q_t, G_t\}$ to depend upon observable explanatory variables $X_t$ and unobservable parameters $\theta$. Estimation of the parameters $\theta$ is discussed in "Estimation," beginning on page 491.
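To make the notation concrete, consider the local level (random walk plus noise) model, in which $y_t = \alpha_t + \epsilon_t$ and $\alpha_{t+1} = \alpha_t + \nu_t$, a simple special case of (33.1)–(33.3). The sketch below (illustrative NumPy code, not EViews syntax; the variance values are arbitrary) shows how such a model maps into the system matrices:

```python
import numpy as np

# Local level (random walk plus noise) model cast in the state space
# form of (33.1)-(33.3): n = 1 signal, m = 1 state.
n, m = 1, 1
c = np.zeros(n)           # signal intercept c_t
Z = np.array([[1.0]])     # n x m loading matrix Z_t
d = np.zeros(m)           # state intercept d_t
T = np.array([[1.0]])     # m x m transition matrix T_t (random walk)
H = np.array([[0.5]])     # n x n var(eps_t); illustrative value
Q = np.array([[0.1]])     # m x m var(nu_t); illustrative value
G = np.zeros((n, m))      # n x m cov(eps_t, nu_t); zero here
```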
Filtering
Consider the conditional distribution of the state vector $\alpha_t$ given information available at time $s$. We can define the mean and variance matrix of the conditional distribution as:

$$a_{t|s} \equiv E_s(\alpha_t) \qquad (33.4)$$

$$P_{t|s} \equiv E_s[(\alpha_t - a_{t|s})(\alpha_t - a_{t|s})'] \qquad (33.5)$$
where the subscript below the expectation operator indicates that expectations are taken using the conditional distribution for that period.
One important conditional distribution is obtained by setting $s = t - 1$, so that we obtain the one-step ahead mean $a_{t|t-1}$ and one-step ahead variance $P_{t|t-1}$ of the states $\alpha_t$. Under the Gaussian error assumption, $a_{t|t-1}$ is also the minimum mean square error estimator of $\alpha_t$, and $P_{t|t-1}$ is the mean square error (MSE) of $a_{t|t-1}$. If the normality assumption is dropped, $a_{t|t-1}$ is still the minimum mean square linear estimator of $\alpha_t$.
Given the one-step ahead state conditional mean, we can also form the (linear) minimum MSE one-step ahead estimate of $y_t$:

$$\tilde{y}_t = y_{t|t-1} \equiv E_{t-1}(y_t) = E(y_t \mid a_{t|t-1}) = c_t + Z_t a_{t|t-1} \qquad (33.6)$$
The one-step ahead prediction error is given by,
$$\tilde{\epsilon}_t = \tilde{\epsilon}_{t|t-1} \equiv y_t - \tilde{y}_{t|t-1} \qquad (33.7)$$

and the prediction error variance is defined as:

$$\tilde{F}_t = F_{t|t-1} \equiv \mathrm{var}(\tilde{\epsilon}_{t|t-1}) = Z_t P_{t|t-1} Z_t' + H_t \qquad (33.8)$$
The Kalman (Bucy) filter is a recursive algorithm for sequentially updating the one-step ahead estimate of the state mean and variance given new information. Details on the recursion are provided in the references above. For our purposes, it is sufficient to note that given initial values for the state mean and covariance, values for the system matrices $\Psi_t$, and observations on $y_t$, the Kalman filter may be used to compute one-step ahead estimates of the state and the associated mean square error matrix, $\{a_{t|t-1}, P_{t|t-1}\}$, the contemporaneous or filtered state mean and variance, $\{a_t, P_t\}$, and the one-step ahead prediction, prediction error, and prediction error variance, $\{y_{t|t-1}, \tilde{\epsilon}_{t|t-1}, F_{t|t-1}\}$. Note that we may also obtain the standardized prediction residual, $e_{t|t-1}$, by dividing $\tilde{\epsilon}_{t|t-1}$ by the square root of the corresponding diagonal element of $F_{t|t-1}$.
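For intuition, the following is a bare-bones NumPy sketch of the recursion for the time-invariant case with uncorrelated disturbances ($G_t = 0$); it illustrates the textbook algorithm rather than the internal EViews routine, and all function and variable names are ours:

```python
import numpy as np

def kalman_filter(y, c, Z, d, T, H, Q, a0, P0):
    """One-step ahead and filtered state estimates for the time-invariant
    model (33.1)-(33.2) with G_t = 0. y is (T_obs, n); a0, P0 are the
    initial state mean and variance."""
    T_obs, n = y.shape
    m = a0.shape[0]
    a_pred = np.empty((T_obs, m)); P_pred = np.empty((T_obs, m, m))
    a_filt = np.empty((T_obs, m)); P_filt = np.empty((T_obs, m, m))
    e = np.empty((T_obs, n)); F = np.empty((T_obs, n, n))
    a, P = a0, P0
    for t in range(T_obs):
        a_pred[t], P_pred[t] = a, P          # a_{t|t-1}, P_{t|t-1}
        e[t] = y[t] - c - Z @ a              # prediction error (33.7)
        F[t] = Z @ P @ Z.T + H               # its variance (33.8)
        K = P @ Z.T @ np.linalg.inv(F[t])    # filtering gain
        a_filt[t] = a + K @ e[t]             # filtered mean a_t
        P_filt[t] = P - K @ Z @ P            # filtered variance P_t
        a = d + T @ a_filt[t]                # one-step prediction (33.2)
        P = T @ P_filt[t] @ T.T + Q
    return a_pred, P_pred, a_filt, P_filt, e, F
```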
Fixed-Interval Smoothing
Suppose that we observe the sequence of data up to time period $T$. The process of using this information to form expectations at any time period up to $T$ is known as fixed-interval smoothing. Although there are a variety of other distinct forms of smoothing (e.g., fixed-point, fixed-lag), we will use the term smoothing to refer to fixed-interval smoothing.
Additional details on the smoothing procedure are provided in the references given above. For now, note that smoothing uses all of the information in the sample to provide smoothed estimates of the states, $\hat{\alpha}_t \equiv a_{t|T} \equiv E_T(\alpha_t)$, and smoothed estimates of the state variances, $V_t \equiv \mathrm{var}_T(\alpha_t)$. The matrix $V_t$ may also be interpreted as the MSE of the smoothed state estimate $\hat{\alpha}_t$.
As with the one-step ahead states and variances above, we may use the smoothed values to form smoothed estimates of the signal variables,
$$\hat{y}_t \equiv E(y_t \mid \hat{\alpha}_t) = c_t + Z_t \hat{\alpha}_t \qquad (33.9)$$

and to compute the variance of the smoothed signal estimates:

$$S_t \equiv \mathrm{var}(y_{t|T}) = Z_t V_t Z_t' \qquad (33.10)$$
Lastly, the smoothing procedure allows us to compute smoothed disturbance estimates, $\hat{\epsilon}_t \equiv \epsilon_{t|T} \equiv E_T(\epsilon_t)$ and $\hat{\nu}_t \equiv \nu_{t|T} \equiv E_T(\nu_t)$, and a corresponding smoothed disturbance variance matrix:

$$\hat{\Omega}_t = \mathrm{var}_T\begin{bmatrix} \epsilon_t \\ \nu_t \end{bmatrix} \qquad (33.11)$$

Dividing the smoothed disturbance estimates by the square roots of the corresponding diagonal elements of the smoothed variance matrix yields the standardized smoothed disturbance estimates $\hat{e}_t$ and $\hat{n}_t$.
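To illustrate how the smoothed state moments may be computed, here is a NumPy sketch of a standard fixed-interval (Rauch-Tung-Striebel) backward pass applied to the output of the filtering sketch above, again assuming time-invariant matrices and $G_t = 0$; EViews' internal smoothing routines are more general, so treat this purely as an illustration of the idea:

```python
import numpy as np

def rts_smoother(a_pred, P_pred, a_filt, P_filt, T):
    """Fixed-interval smoothing: combines the forward filter output into
    full-sample estimates a_{t|T} and their MSE matrices V_t."""
    T_obs, m = a_filt.shape
    a_sm = a_filt.copy()     # a_{T|T} initializes the backward pass
    V = P_filt.copy()
    for t in range(T_obs - 2, -1, -1):
        J = P_filt[t] @ T.T @ np.linalg.inv(P_pred[t + 1])  # smoother gain
        a_sm[t] = a_filt[t] + J @ (a_sm[t + 1] - a_pred[t + 1])
        V[t] = P_filt[t] + J @ (V[t + 1] - P_pred[t + 1]) @ J.T
    return a_sm, V
```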
Forecasting
There are several types of forecasting which may be performed with state space models. These methods differ primarily in what information is used and how it is used. We will focus on the three methods that are supported by the built-in EViews forecasting routines.
n-Step Ahead Forecasting
Earlier, we examined the notion of one-step ahead prediction. Consider now the notion of multi-step ahead prediction of observations, in which we take a fixed set of information available at a given period and forecast several periods ahead. Modifying slightly the expressions in (33.4)–(33.8) yields the $n$-step ahead state conditional mean and variance:

$$a_{t+n|t} \equiv E_t(\alpha_{t+n}), \qquad (33.12)$$

$$P_{t+n|t} \equiv E_t[(\alpha_{t+n} - a_{t+n|t})(\alpha_{t+n} - a_{t+n|t})'] \qquad (33.13)$$

the $n$-step ahead forecast,

$$y_{t+n|t} \equiv E_t(y_{t+n}) = c_{t+n} + Z_{t+n} a_{t+n|t} \qquad (33.14)$$

and the corresponding $n$-step ahead forecast MSE matrix:

$$\tilde{F}_{t+n|t} \equiv \mathrm{MSE}(\tilde{y}_{t+n|t}) = Z_{t+n} P_{t+n|t} Z_{t+n}' + H_{t+n} \qquad (33.15)$$
for $n = 1, 2, \ldots$. As before, $a_{t+n|t}$ may also be interpreted as the minimum MSE estimate of $\alpha_{t+n}$ based on the information set available at time $t$, and $P_{t+n|t}$ is the MSE of the estimate.
It is worth emphasizing that the definitions given above for the forecast MSE matrices do not account for the extra variability introduced by the estimation of any unknown parameters $\theta$. In this setting, the $\tilde{F}_{t+n|t}$ will understate the true variability of the forecast, and should be viewed as being computed conditional on the specific values of the estimated parameters.
It is also worth noting that the $n$-step ahead forecasts may be computed using a slightly modified version of the basic Kalman recursion (Harvey 1989). To forecast at period $s = t + n$, simply initialize a Kalman filter at time $t + 1$ with the values of the predicted states and state covariances using information at time $t$, and run the filter forward $n - 1$ additional periods using no additional signal information. This procedure is repeated for each observation in the forecast sample, $s = t + 1, \ldots, t + n$.
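Under the same simplifying assumptions as the earlier sketches (time-invariant matrices, $G_t = 0$), this recursion amounts to iterating the transition equation forward with no measurement updates; a minimal NumPy illustration:

```python
import numpy as np

def n_step_forecast(a_tt, P_tt, c, Z, d, T, H, Q, n):
    """Forecast n periods ahead from the filtered state (a_{t|t}, P_{t|t});
    no signal information beyond time t is used."""
    a, P = a_tt, P_tt
    for _ in range(n):
        a = d + T @ a            # a_{t+k|t}, k = 1,...,n   (33.12)
        P = T @ P @ T.T + Q      # P_{t+k|t}                (33.13)
    y_fc = c + Z @ a             # y_{t+n|t}                (33.14)
    F_fc = Z @ P @ Z.T + H       # forecast MSE             (33.15)
    return y_fc, F_fc, a, P
```

Note that collecting the intermediate values of this loop for a fixed base period $t$ yields exactly the dynamic forecasts described next.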

Dynamic Forecasting
The concept of dynamic forecasting should be familiar to you from other EViews estimation objects. In dynamic forecasting, we start at the beginning of the forecast sample $t$, and compute a complete set of $n$-period ahead forecasts for each period $n = 1, \ldots, n^*$ in the forecast interval. Thus, if we wish to start at period $t$ and forecast dynamically to $t + n^*$, we would compute a one-step ahead forecast for $t + 1$, a two-step ahead forecast for $t + 2$, and so forth, up to an $n^*$-step ahead forecast for $t + n^*$. It may be useful to note that, as with $n$-step ahead forecasting, we simply initialize a Kalman filter at time $t + 1$ and run the filter forward additional periods using no additional signal information. For dynamic forecasting, however, only one $n^*$-step ahead forecast is required to compute all of the forecast values, since the information set is not updated from the beginning of the forecast period.
Smoothed Forecasting
Alternatively, we can compute smoothed forecasts which use all available signal data over the forecast sample (for example, $a_{t+n|t+n}$). These forward-looking forecasts may be computed by initializing the states at the start of the forecast period, and performing a Kalman smooth over the entire forecast period using all relevant signal data. This technique is useful in settings where information on the entire path of the signals is used to interpolate values throughout the forecast sample.
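In terms of the sketches above, a smoothed forecast is simply a filter pass followed by a smoothing pass over the forecast sample. A hypothetical usage fragment, where ys holds the signal data observed over the forecast sample and a_init, P_init are the chosen initial values:

```python
# Reuses kalman_filter and rts_smoother from the earlier sketches.
out = kalman_filter(ys, c, Z, d, T, H, Q, a_init, P_init)
a_pred, P_pred, a_filt, P_filt, e, F = out
a_smooth, V_smooth = rts_smoother(a_pred, P_pred, a_filt, P_filt, T)
```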
We make one final comment about the forecasting methods described above. For traditional n-step ahead and dynamic forecasting, the states are typically initialized using the one-step ahead forecasts of the states and variances at the start of the forecast window. For smoothed forecasts, one would generally initialize the forecasts using the corresponding smoothed values of states and variances. There may, however, be situations where you wish to choose a different set of initial values for the forecast filter or smoother. The EViews forecasting routines (described in “State Space Procedures,” beginning on page 505) provide you with considerable control over these initial settings. Be aware, however, that the interpretation of the forecasts in terms of the available information will change if you choose alternative settings.
Estimation
To implement the Kalman filter and the fixed-interval smoother, we must first replace any unknown elements of the system matrices by their estimates. Under the assumption that the $\epsilon_t$ and $\nu_t$ are Gaussian, the sample log likelihood:
$$\log L(\theta) = -\frac{nT}{2}\log 2\pi - \frac{1}{2}\sum_t \log\left|\tilde{F}_t(\theta)\right| - \frac{1}{2}\sum_t \tilde{\epsilon}_t(\theta)'\, \tilde{F}_t(\theta)^{-1}\, \tilde{\epsilon}_t(\theta) \qquad (33.16)$$
may be evaluated using the Kalman filter. Using numeric derivatives, standard iterative techniques may be employed to maximize the likelihood with respect to the unknown parameters $\theta$ (see Appendix B. "Estimation and Solution Options," on page 755).
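As a sketch of how (33.16) may be assembled from the filter output (the prediction errors and variances returned by the kalman_filter sketch above; illustrative NumPy, not the EViews implementation), consider:

```python
import numpy as np

def log_likelihood(e, F):
    """Gaussian log likelihood (33.16) built from the prediction errors
    e_t and their variance matrices F_t; n is the signal dimension."""
    T_obs, n = e.shape
    ll = -0.5 * n * T_obs * np.log(2.0 * np.pi)
    for t in range(T_obs):
        sign, logdet = np.linalg.slogdet(F[t])
        ll -= 0.5 * logdet                              # log|F_t|
        ll -= 0.5 * e[t] @ np.linalg.solve(F[t], e[t])  # quadratic form
    return ll
```

In practice, one would wrap this in a function of the parameters $\theta$ that rebuilds the system matrices, reruns the filter, and hands $-\log L(\theta)$ to a numerical optimizer.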