

Structural (Identified) VARs—471
Procs of a VAR
Most of the procedures available for a VAR are common to those available for a system object (see “System Procs” on page 435). Here, we discuss only those procedures that are unique to the VAR object.
Make System
This proc creates a system object that contains an equivalent VAR specification. If you want to estimate a non-standard VAR, you may use this proc as a quick way to specify a VAR in a system object which you can then modify to meet your needs. For example, while the VAR object requires each equation to have the same lag structure, you may want to relax this restriction. To estimate a VAR with unbalanced lag structure, use the Proc/Make System procedure to create a VAR system with a balanced lag structure and edit the system specification to meet the desired lag specification.
The By Variable option creates a system whose specification (and coefficient number) is ordered by variables. Use this option if you want to edit the specification to exclude lags of a specific variable from some of the equations. The By Lag option creates a system whose specification (and coefficient number) is ordered by lags. Use this option if you want to edit the specification to exclude certain lags from some of the equations.
For vector error correction (VEC) models, treating the coefficients of the cointegrating vector as additional unknown coefficients will make the resulting system unidentified. In this case, EViews will create a system object where the coefficients for the cointegrating vectors are fixed at the estimated values from the VEC. If you want to estimate the coefficients of the cointegrating vector in the system, you may edit the specification, but you should make certain that the resulting system is identified.
You should also note that while the standard VAR can be estimated efficiently by equation-by-equation OLS, this is generally not the case for the modified specification. You may wish to use one of the system-wide estimation methods (e.g. SUR) when estimating non-standard VARs using the system object.
Estimate Structural Factorization
This procedure is used to estimate the factorization matrices for a structural (or identified) VAR. The details for this procedure are provided in “Structural (Identified) VARs” below. You must first estimate the structural factorization matrices using this proc in order to use the structural options in impulse responses and variance decompositions.
Structural (Identified) VARs
The main purpose of structural VAR (SVAR) estimation is to obtain non-recursive orthogonalization of the error terms for impulse response analysis. This alternative to the recursive

472—Chapter 32. Vector Autoregression and Error Correction Models
Cholesky orthogonalization requires the user to impose enough restrictions to identify the orthogonal (structural) components of the error terms.
Let y_t be a k-element vector of the endogenous variables and let Σ = E[e_t e_t′] be the residual covariance matrix. Following Amisano and Giannini (1997), the class of SVAR models that EViews estimates may be written as:

	A e_t = B u_t	(32.12)

where e_t and u_t are vectors of length k. e_t is the vector of observed (or reduced form) residuals, while u_t is the vector of unobserved structural innovations. A and B are k × k matrices to be estimated. The structural innovations u_t are assumed to be orthonormal, i.e. their covariance matrix is the identity matrix, E[u_t u_t′] = I. The assumption of orthonormal innovations u_t imposes the following identifying restrictions on A and B:

	A Σ A′ = B B′.	(32.13)

Noting that the expressions on either side of (32.13) are symmetric, this imposes k(k + 1)/2 restrictions on the 2k² unknown elements in A and B. Therefore, in order to identify A and B, you need to supply at least 2k² − k(k + 1)/2 = k(3k − 1)/2 additional restrictions.
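The restriction counting above is easy to verify with a short script. The sketch below (plain Python, purely illustrative; the function name is our own) reproduces the arithmetic for several VAR sizes:

```python
def svar_required_restrictions(k):
    """Minimum number of user-supplied restrictions for a k-variable SVAR.

    A and B contain 2k^2 unknown elements; the symmetry of A*Sigma*A' = B*B'
    supplies only k(k+1)/2 identifying restrictions, so the user must add
    the difference: 2k^2 - k(k+1)/2 = k(3k-1)/2.
    """
    unknowns = 2 * k * k
    from_orthonormality = k * (k + 1) // 2
    return unknowns - from_orthonormality

# A 3-variable VAR needs at least 12 additional restrictions
print([svar_required_restrictions(k) for k in (2, 3, 4)])  # [5, 12, 22]
```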
Specifying the Identifying Restrictions
As explained above, in order to estimate the orthogonal factorization matrices A and B , you need to provide additional identifying restrictions. We distinguish two types of identifying restrictions: short-run and long-run. For either type, the identifying restrictions can be specified either in text form or by pattern matrices.
Short-run Restrictions by Pattern Matrices
For many problems, the identifying restrictions on the A and B matrices are simple zero exclusion restrictions. In this case, you can specify the restrictions by creating a named “pattern” matrix for A and B . Any elements of the matrix that you want to be estimated should be assigned a missing value “NA”. All non-missing values in the pattern matrix will be held fixed at the specified values.
For example, suppose you want to restrict A to be a lower triangular matrix with ones on the main diagonal and B to be a diagonal matrix. Then the pattern matrices (for a k = 3 variable VAR) would be:
	    | 1   0   0 |            | NA  0   0 |
	A = | NA  1   0 | ,      B = | 0   NA  0 | .	(32.14)
	    | NA  NA  1 |            | 0   0   NA |

You can create these matrices interactively. Simply use Object/New Object... to create two new 3 × 3 matrices, A and B, and then use the spreadsheet view to edit the values. Alternatively, you can issue the following commands:

	matrix(3,3) pata
	' fill matrix in row major order
	pata.fill(by=r) 1,0,0, na,1,0, na,na,1
	matrix(3,3) patb = 0
	patb(1,1) = na
	patb(2,2) = na
	patb(3,3) = na
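Outside EViews, the same pattern matrices can be sketched with NumPy, using NaN for the free (NA) elements. This is illustrative only; the names pata and patb simply mirror the commands above:

```python
import numpy as np

na = np.nan  # EViews' NA marker: elements left free for estimation

# A: lower triangular with ones on the main diagonal (filled by row)
pata = np.array([[1,  0,  0],
                 [na, 1,  0],
                 [na, na, 1]])

# B: diagonal, with every diagonal element left free
patb = np.zeros((3, 3))
patb[0, 0] = patb[1, 1] = patb[2, 2] = na

# Count the free parameters: 3 in A plus 3 in B.  Together with the
# 6 symmetry restrictions from A*Sigma*A' = B*B', the 12 fixed pattern
# entries just-identify a k = 3 SVAR (18 unknowns in total).
free = np.isnan(pata).sum() + np.isnan(patb).sum()
print(free)  # 6
```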
Once you have created the pattern matrices, select Proc/Estimate Structural Factorization... from the VAR window menu. In the SVAR Options dialog, click the Matrix button and the Short-Run Pattern button and type in the name of the pattern matrices in the relevant edit boxes.
Short-run Restrictions in Text Form
For more general restrictions, you can specify the identifying restrictions in text form. In text form, you will write out the relation A e_t = B u_t as a set of equations, identifying each element of the e_t and u_t vectors with special symbols. Elements of the A and B matrices to be estimated must be specified as elements of a coefficient vector.
To take an example, suppose again that you have a k = 3 variable VAR where you want to restrict A to be a lower triangular matrix with ones on the main diagonal and B to be a
diagonal matrix. Under these restrictions, the relation A e_t = B u_t can be written as:

	e1 = b11 u1
	e2 = −a21 e1 + b22 u2	(32.15)
	e3 = −a31 e1 − a32 e2 + b33 u3
To specify these restrictions in text form, select Proc/Estimate Structural Factorization...
from the VAR window and click the Text button. In the edit window, you should type the following:
@e1 = c(1)*@u1
@e2 = -c(2)*@e1 + c(3)*@u2
@e3 = -c(4)*@e1 - c(5)*@e2 + c(6)*@u3
The special symbols "@e1," "@e2," "@e3," represent the first, second, and third elements of the e_t vector, while "@u1," "@u2," "@u3" represent the first, second, and third elements of the u_t vector. In this example, all unknown elements of the A and B matrices are represented by elements of the C coefficient vector.
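For given values of the C coefficient vector, the text-form equations above correspond to a lower triangular A and diagonal B. The sketch below (NumPy, not EViews code; the numeric values for c(1)–c(6) are hypothetical) builds the matrices and checks that solving A e = B u reproduces the second equation:

```python
import numpy as np

# Hypothetical estimates for c(1)..c(6) from the text-form specification
c = {1: 0.5, 2: 0.3, 3: 0.7, 4: 0.2, 5: 0.4, 6: 0.9}

# @e1 =  c(1)*@u1
# @e2 = -c(2)*@e1 + c(3)*@u2
# @e3 = -c(4)*@e1 - c(5)*@e2 + c(6)*@u3
# Moving the e-terms to the left-hand side gives A*e = B*u with:
A = np.array([[1.0,  0.0,  0.0],
              [c[2], 1.0,  0.0],
              [c[4], c[5], 1.0]])
B = np.diag([c[1], c[3], c[6]])

# Pick arbitrary structural shocks u, solve A*e = B*u for e, and check
# the second equation e2 = -c(2)*e1 + c(3)*u2 holds.
u = np.array([1.0, -2.0, 0.5])
e = np.linalg.solve(A, B @ u)
print(np.isclose(e[1], -c[2] * e[0] + c[3] * u[1]))  # True
```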

Long-run Restrictions
The identifying restrictions embodied in the relation A e_t = B u_t are commonly referred to as short-run restrictions. Blanchard and Quah (1989) proposed an alternative identification method based on restrictions on the long-run properties of the impulse responses. The (accumulated) long-run response C to structural innovations takes the form:
	C = Ψ∞ A⁻¹ B	(32.16)

where Ψ∞ = (I − A1 − … − Ap)⁻¹ is the estimated accumulated response to the reduced form (observed) shocks. Long-run identifying restrictions are specified in terms of the elements of this C matrix, typically in the form of zero restrictions. The restriction Ci,j = 0 means that the (accumulated) response of the i-th variable to the j-th structural shock is zero in the long-run.
It is important to note that the expression for the long-run response (32.16) involves the inverse of A. Since EViews currently requires all restrictions to be linear in the elements of A and B, if you specify a long-run restriction, the A matrix must be the identity matrix.
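A long-run restriction can be checked numerically. The sketch below (NumPy, not EViews code; the VAR(1) lag matrix and B are hypothetical) computes the accumulated response Ψ∞ and verifies a zero long-run response of the second variable to the first shock:

```python
import numpy as np

# Hypothetical lag matrix for a stationary k = 2 VAR(1)
A1 = np.array([[0.5, 0.1],
               [0.2, 0.3]])

# Accumulated reduced-form response: Psi_inf = (I - A1)^(-1)
psi_inf = np.linalg.inv(np.eye(2) - A1)

# With A restricted to the identity, C = Psi_inf @ B.  This hypothetical
# B is chosen so that C[1, 0] = 0, i.e. the second variable has no
# long-run response to the first structural shock.
B = np.array([[1.0,  0.0],
              [-0.4, 0.6]])
C = psi_inf @ B

print(abs(C[1, 0]) < 1e-12, np.isclose(C[0, 0], 2.0))  # True True
```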
To specify long-run restrictions by a pattern matrix, create a named matrix that contains the pattern for the long-run response matrix C . Unrestricted elements in the C matrix should be assigned a missing value “NA”. For example, suppose you have a k = 2 variable VAR where you want to restrict the long-run response of the second endogenous variable to the
first structural shock to be zero, C2,1 = 0. Then the long-run response matrix will have the following pattern:

	C = | NA  NA |	(32.17)
	    | 0   NA |
You can create this matrix with the following commands:
	matrix(2,2) patc = na
	patc(2,1) = 0
Once you have created the pattern matrix, select Proc/Estimate Structural Factorization...
from the VAR window menu. In the SVAR Options dialog, click the Matrix button and the Long-Run Pattern button and type in the name of the pattern matrix in the relevant edit box.
To specify the same long-run restriction in text form, select Proc/Estimate Structural Factorization... from the VAR window and click the Text button. In the edit window, you would type the following:
	@lr2(@u1)=0 ' zero LR response of 2nd variable to 1st shock
where everything on the line after the apostrophe is a comment. This restriction begins with the special keyword “@LR#”, with the “#” representing the response variable to restrict.

Inside the parentheses, you must specify the impulse keyword “@U” and the innovation number, followed by an equal sign and the value of the response (typically 0). We caution you that while you can list multiple long-run restrictions, you cannot mix short-run and long-run restrictions.
Note that it is possible to specify long-run restrictions as short-run restrictions (by obtaining the infinite MA order representation). While the estimated A and B matrices should be the same, the impulse response standard errors from the short-run representation would be incorrect (since it does not take into account the uncertainty in the estimated infinite MA order coefficients).
Some Important Notes
Currently we have the following limitations for the specification of identifying restrictions:
•The A and B matrices must be square and non-singular. In text form, there must be exactly as many equations as there are endogenous variables in the VAR. For short-run restrictions in pattern form, you must provide the pattern matrices for both the A and B matrices.
•The restrictions must be linear in the elements of A and B. Moreover, the restrictions on A and B must be independent (no restrictions across elements of A and B).
•You cannot impose both short-run and long-run restrictions.
•Structural decompositions are currently not available for VEC models.
•The identifying restriction assumes that the structural innovations u_t have unit variances. Therefore, you will almost always want to estimate the diagonal elements of the B matrix so that you obtain estimates of the standard deviations of the structural shocks.
•It is common in the literature to assume that the structural innovations have a diagonal covariance matrix rather than an identity matrix. To compare your results to those from these studies, you will have to divide each column of the B matrix by the diagonal element in that column (so that the resulting B matrix has ones on the main diagonal). To illustrate this transformation, consider a simple k = 2 variable model with A = I:
	e1,t = b11 u1,t + b12 u2,t	(32.18)
	e2,t = b21 u1,t + b22 u2,t

where u1,t and u2,t are independent structural shocks with unit variances, as assumed in the EViews specification. To rewrite this specification with a B matrix containing ones on the main diagonal, define a new set of structural shocks by the

transformations v1,t = b11 u1,t and v2,t = b22 u2,t. Then the structural relation can be rewritten as,

	e1,t = v1,t + (b12/b22) v2,t	(32.19)
	e2,t = (b21/b11) v1,t + v2,t

where now:

	B = | 1        b12/b22 |	(32.20)
	    | b21/b11  1       |

and the new shocks v_t = (v1,t, v2,t)′ have mean zero and diagonal covariance matrix:

	    | b11²  0    |
	    | 0     b22² |
Note that the transformation involves only rescaling elements of the B matrix, not the A matrix. For the case where B is a diagonal matrix, the elements on the main diagonal are simply the estimated standard deviations of the structural shocks.
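The column rescaling described above can be verified numerically. The sketch below (NumPy, not EViews code; the B coefficients are hypothetical) checks that the rescaled B with unit diagonal, combined with the new shock variances, implies the same reduced-form covariance:

```python
import numpy as np

# Hypothetical estimates under the EViews normalization: unit-variance
# structural shocks u, with A = I and an unrestricted B
B = np.array([[0.8, 0.2],
              [0.3, 0.5]])

# Divide each column of B by its diagonal element so the rescaled
# matrix has ones on the main diagonal ...
B_tilde = B / np.diag(B)          # column j divided by B[j, j]
# ... and the new shocks v = diag(b11, b22) u pick up the variances
v_cov = np.diag(np.diag(B) ** 2)  # cov(v) = diag(b11^2, b22^2)

# The implied reduced-form residual covariance is unchanged:
# B @ I @ B' == B_tilde @ cov(v) @ B_tilde'
print(np.allclose(B @ B.T, B_tilde @ v_cov @ B_tilde.T))  # True
```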
Identification Conditions
As stated above, the assumption of orthonormal structural innovations imposes k(k + 1)/2 restrictions on the 2k² unknown elements in A and B, where k is the number of endogenous variables in the VAR. In order to identify A and B, you need to provide at least 2k² − k(k + 1)/2 = k(3k − 1)/2 additional identifying restrictions. This is a necessary order condition for identification and is checked by counting the number of restrictions provided.
As discussed in Amisano and Giannini (1997), a sufficient condition for local identification can be checked via the invertibility of the "augmented" information matrix. This local identification condition is evaluated numerically at the starting values. If EViews returns a singularity error message for different starting values, you should make certain that your restrictions identify the A and B matrices.
We also require the A and B matrices to be square and non-singular. The non-singularity condition is checked numerically at the starting values. If the A or B matrix is singular at the starting values, an error message will ask you to provide a different set of starting values.
Sign Indeterminacy
For some restrictions, the signs of the A and B matrices are not identified; see Christiano, Eichenbaum, and Evans (1999) for a discussion of this issue. When the sign is indeterminate, we choose a normalization so that the diagonal elements of the factorization matrix A⁻¹B are all positive. This normalization ensures that all structural impulses have positive signs (as does the Cholesky factorization). The default is to always apply this normalization rule whenever applicable. If you do not want to switch the signs, deselect the Normalize Sign option from the Optimization Control tab of the SVAR Options dialog.
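The normalization amounts to flipping the sign of any column of the factorization matrix F = A⁻¹B whose diagonal element is negative, which leaves FF′ (and hence the implied covariance) unchanged. A minimal sketch (NumPy, not EViews code; the F values are hypothetical):

```python
import numpy as np

def normalize_signs(F):
    """Flip column signs so diag(F) > 0; F @ F.T is unaffected."""
    signs = np.where(np.diag(F) < 0, -1.0, 1.0)
    return F * signs  # multiplies column j by signs[j]

# Hypothetical factorization matrix F = inv(A) @ B with a negative
# diagonal element in the second column
F = np.array([[0.9, -0.2],
              [0.4, -0.6]])
G = normalize_signs(F)

print(np.all(np.diag(G) > 0), np.allclose(F @ F.T, G @ G.T))  # True True
```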

Estimation of A and B Matrices
Once you provide the identifying restrictions in any of the forms described above, you are ready to estimate the A and B matrices. Simply click the OK button in the SVAR Options dialog. You must first estimate these matrices in order to use the structural option in impulse responses and variance decompositions.
A and B are estimated by maximum likelihood, assuming the innovations are multivariate normal. We evaluate the likelihood in terms of unconstrained parameters by substituting out the constraints. The log likelihood is maximized by the method of scoring (with a Marquardt-type diagonal correction; see "Marquardt" on page 758), where the gradient and expected information matrix are evaluated analytically. See Amisano and Giannini (1997) for the analytic expressions for these derivatives.
Optimization Control
Options for controlling the optimization process are provided in the Optimization Control tab of the SVAR Options dialog. You have the option to specify the starting values, maximum number of iterations, and the convergence criterion.
The starting values are those for the unconstrained parameters after substituting out the constraints. Fixed sets all free parameters to the value specified in the edit box. User Specified uses the values in the coefficient vector as specified in text form as starting values. For restrictions specified in pattern form, user specified starting values are taken from the first m elements of the default C coefficient vector, where m is the number of free parameters. Draw from... options randomly draw the starting values for the free parameters from the specified distributions.
Estimation Output
Once convergence is achieved, EViews displays the estimation output in the VAR window. The point estimates, standard errors, and z-statistics of the estimated free parameters are reported together with the maximized value of the log likelihood. The estimated standard errors are based on the inverse of the estimated information matrix (negative expected value of the Hessian) evaluated at the final estimates.
For overidentified models, we also report the LR test for over-identification. The LR test statistic is computed as:

	LR = 2(l_u − l_r) = T (tr(P) − log|P| − k)	(32.21)

where l_u and l_r are the maximized log likelihoods of the unrestricted and restricted models and P = A′(B′)⁻¹B⁻¹AΣ. Under the null hypothesis that the restrictions are valid, the LR statistic is asymptotically distributed χ²(q − k), where q is the number of identifying restrictions.
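The statistic in (32.21) can be sketched numerically. In the sketch below (NumPy, not EViews code; A, B, T are hypothetical), Σ is constructed so that AΣA′ = BB′ holds exactly, in which case P = I and the LR statistic is zero:

```python
import numpy as np

def svar_lr_stat(A, B, Sigma, T):
    """LR over-identification statistic: T*(tr(P) - log|P| - k),
    with P = A' B^{-T} B^{-1} A Sigma."""
    k = Sigma.shape[0]
    Binv = np.linalg.inv(B)
    P = A.T @ Binv.T @ Binv @ A @ Sigma
    sign, logdet = np.linalg.slogdet(P)
    return T * (np.trace(P) - logdet - k)

# Hypothetical exactly fitting case: Sigma = inv(A) B B' inv(A)',
# so the restrictions hold at the estimates and LR = 0.
A = np.array([[1.0, 0.0], [0.4, 1.0]])
B = np.diag([0.8, 0.5])
Sigma = np.linalg.inv(A) @ B @ B.T @ np.linalg.inv(A).T
print(abs(svar_lr_stat(A, B, Sigma, T=100)) < 1e-8)  # True
```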