
Dependent Variable: I
Method: Generalized Method of Moments
Date: 08/10/09   Time: 10:48
Sample (adjusted): 1921 1941
Included observations: 21 after adjustments
Continuously updating weights & coefficients
Estimation weighting matrix: HAC (Prewhitening with lags = 1, Tukey-Hanning kernel, Andrews bandwidth = 2.1803)
Standard errors & covariance computed using estimation weighting matrix
Convergence achieved after 30 iterations
No d.f. adjustment for standard errors & covariance
Instrument specification: C P(-1) K(-1) X(-1) TM WG G T

Variable      Coefficient     Std. Error     t-Statistic     Prob.
C               22.20609       5.693625        3.900168      0.0012
Y              -0.261377       0.277758       -0.941024      0.3599
Y(-1)           0.935801       0.235666        3.970878      0.0010
K(-1)          -0.157050       0.024042       -6.532236      0.0000

R-squared             0.659380     Mean dependent var      1.266667
Adjusted R-squared    0.599271     S.D. dependent var      3.551948
S.E. of regression    2.248495     Sum squared resid       85.94740
Durbin-Watson stat    1.804037     Instrument rank         8
J-statistic           1.949180     Prob(J-statistic)       0.745106
Note that the header for this equation shows slightly different information from the previous estimation. The inclusion of the HAC weighting matrix yields information on the prewhitening choice (lags = 1) and on the kernel specification, including the bandwidth chosen by the Andrews procedure (2.1803). Since the CUE procedure is used, the number of optimization iterations required for convergence is also reported (30).
IV Diagnostics and Tests
EViews offers several IV- and GMM-specific diagnostics and tests.
Instrument Summary
The Instrument Summary view of an equation is available for non-panel equations estimated by GMM, TSLS or LIML. The summary will display the number of instruments specified, the instrument specification, and a list of the instruments that were used in estimation.
For most equations, the instruments used will be the same as those specified in the equation. However, if two or more of the instruments are collinear, EViews will automatically drop instruments until the instrument matrix is of full rank. In such cases, the summary will list which instruments were dropped.

The Instrument Summary view may be found under View/IV Diagnostics & Tests/Instrument Summary.
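EViews' internal rule for choosing which collinear instruments to drop is not spelled out above. As a rough, hypothetical illustration of the idea (keep an instrument column only if it increases the rank of the instrument matrix), the following NumPy sketch uses an illustrative helper name and simulated data; it is not EViews' implementation.

```python
import numpy as np

def drop_collinear_instruments(Z, names, tol=1e-10):
    """Keep a maximal set of linearly independent instrument columns (left to right)."""
    kept_cols, kept_names, dropped = [], [], []
    for j, name in enumerate(names):
        trial = Z[:, kept_cols + [j]]
        # A column is kept only if it raises the rank of the instrument matrix.
        if np.linalg.matrix_rank(trial, tol=tol) > len(kept_cols):
            kept_cols.append(j)
            kept_names.append(name)
        else:
            dropped.append(name)
    return Z[:, kept_cols], kept_names, dropped

# Example: the third instrument duplicates the second and is dropped.
rng = np.random.default_rng(0)
Z = rng.normal(size=(50, 3))
Z[:, 2] = Z[:, 1]
_, kept, dropped = drop_collinear_instruments(Z, ["C", "P(-1)", "P(-1)_COPY"])
print("kept:", kept, "dropped:", dropped)
```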
Instrument Orthogonality Test
The Instrument Orthogonality test, also known as the C-test or the Eichenbaum, Hansen and Singleton (EHS) test, evaluates the orthogonality condition of a subset of the instruments. This test is available for non-panel equations estimated by TSLS or GMM.
Recall that the central assumption of instrumental variable estimation is that the instruments are orthogonal to a function of the parameters of the model:
$$E(Z'u(\beta)) = 0 \qquad (20.45)$$

The Instrument Orthogonality Test evaluates whether this condition possibly holds for a subset of the instruments but not for the remaining instruments:

$$E(Z_1'u(\beta)) = 0 \qquad E(Z_2'u(\beta)) \neq 0 \qquad (20.46)$$

where $Z = (Z_1, Z_2)$, and $Z_1$ are the instruments for which the orthogonality condition is assumed to hold.
The test statistic, $C_T$, is calculated as the difference in J-statistics between the original equation and a secondary equation estimated using only $Z_1$ as instruments:

$$C_T = \frac{1}{T}u(\hat\beta)'Z\hat W_T^{-1}Z'u(\hat\beta) - \frac{1}{T}u(\tilde\beta)'Z_1\hat W_{T1}^{-1}Z_1'u(\tilde\beta) \qquad (20.47)$$

where $\hat\beta$ are the parameter estimates from the original TSLS or GMM estimation, $\hat W_T$ is the original weighting matrix, $\tilde\beta$ are the estimates from the test equation, and $\hat W_{T1}^{-1}$ is the matrix for the test equation formed by taking the subset of $\hat W_T^{-1}$ corresponding to the instruments in $Z_1$. The test statistic is Chi-squared distributed with degrees of freedom equal to the number of instruments in $Z_2$.
To perform the Instrument Orthogonality Test in EViews, click on View/IV Diagnostics & Tests/Instrument Orthogonality Test. A dialog box will then open, asking you to enter a list of the $Z_2$ instruments for which the orthogonality condition may not hold. Click on OK and the test results will be displayed.
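To make the difference-in-J-statistics construction in (20.47) concrete, here is a minimal NumPy sketch. It is not EViews' implementation: it substitutes a simple 2SLS-style weighting matrix ($Z'Z/T$) for the estimation weighting matrix, and the function names (`gmm_linear`, `c_test`) and the simulated data are purely illustrative.

```python
import numpy as np

def gmm_linear(y, X, Z, Winv):
    """Linear GMM: minimize (1/T) u'Z Winv Z'u with u = y - X @ b."""
    A = X.T @ Z @ Winv @ Z.T @ X
    b = np.linalg.solve(A, X.T @ Z @ Winv @ Z.T @ y)
    u = y - X @ b
    J = (u @ Z) @ Winv @ (Z.T @ u) / len(y)
    return b, J

def c_test(y, X, Z, test_idx):
    """C-test in the spirit of (20.47); test_idx marks the columns of Z treated as Z2."""
    T = len(y)
    keep = [j for j in range(Z.shape[1]) if j not in set(test_idx)]   # the Z1 instruments
    Winv = np.linalg.inv(Z.T @ Z / T)          # stand-in for the estimation weighting matrix
    _, J_full = gmm_linear(y, X, Z, Winv)
    Winv1 = Winv[np.ix_(keep, keep)]           # subset of W^{-1} corresponding to Z1
    _, J_sub = gmm_linear(y, X, Z[:, keep], Winv1)
    return J_full - J_sub, len(test_idx)       # statistic and chi-squared degrees of freedom

# Illustrative use with simulated data: four instruments, question the last one.
rng = np.random.default_rng(0)
T = 200
Z = np.column_stack([np.ones(T), rng.normal(size=(T, 3))])
x_endog = Z[:, 1:3] @ np.array([0.5, 0.3]) + rng.normal(size=T)
X = np.column_stack([np.ones(T), x_endog])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=T)
CT, df = c_test(y, X, Z, test_idx=[3])
print(f"C-test statistic = {CT:.4f} with {df} degree(s) of freedom")
```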
Regressor Endogeneity Test
The Regressor Endogeneity Test, also known as the Durbin-Wu-Hausman Test, tests for the endogeneity of some, or all, of the equation regressors. This test is available for non-panel equations estimated by TSLS or GMM.
A regressor is endogenous if it is explained by the instruments in the model, whereas exogenous variables are those which are not explained by instruments. In EViews' TSLS and GMM estimation, exogenous variables may be specified by including a variable as both a regressor and an instrument, whereas endogenous variables are those which are specified in the regressor list only.
The Endogeneity Test evaluates whether a subset of the endogenous variables is actually exogenous. The statistic is calculated by running a secondary estimation in which the test variables are treated as exogenous rather than endogenous, and then comparing the J-statistics of this secondary estimation and the original estimation:
$$H_T = \frac{1}{T}u(\tilde\beta)'\tilde Z\tilde W_T^{-1}\tilde Z'u(\tilde\beta) - \frac{1}{T}u(\hat\beta)'Z\hat W_T^{*-1}Z'u(\hat\beta) \qquad (20.48)$$

where $\hat\beta$ are the parameter estimates from the original TSLS or GMM estimation obtained using weights $\hat W_T$, and $\tilde\beta$ are the estimates from the test equation estimated using $\tilde Z$, the instruments augmented by the variables which are being tested, and $\tilde W_T$ is the weighting matrix from the secondary estimation.

Note that in the case of GMM estimation, the matrix $\hat W_T^{*-1}$ should be a sub-matrix of $\tilde W_T^{-1}$ to ensure positivity of the test statistic. Accordingly, in computing the test statistic, EViews first estimates the secondary equation to obtain $\tilde\beta$, and then forms a new matrix $\tilde W_T^{*-1}$, which is the subset of $\tilde W_T^{-1}$ corresponding to the original instruments $Z$. A third estimation is then performed using the subset matrix for weighting, and the test statistic is calculated as:

$$H_T = \frac{1}{T}u(\tilde\beta)'\tilde Z\tilde W_T^{-1}\tilde Z'u(\tilde\beta) - \frac{1}{T}u(\hat\beta)'Z\tilde W_T^{*-1}Z'u(\hat\beta) \qquad (20.49)$$
The test statistic is distributed as a Chi-squared random variable with degrees of freedom equal to the number of regressors tested for endogeneity.
To perform the Regressor Endogeneity Test in EViews, click on View/IV Diagnostics & Tests/Regressor Endogeneity Test. A dialog box will then open, asking you to enter a list of regressors to test for endogeneity. Once you have entered those regressors, click OK and the test results will be shown.
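The calculation can also be sketched in NumPy, mirroring the three steps described above (secondary estimation with augmented instruments, extraction of a sub-matrix of the weighting matrix, and re-estimation with that sub-matrix). The simple $Z'Z/T$ weighting and the names `j_stat` and `endogeneity_test` are illustrative assumptions, not EViews' internal code.

```python
import numpy as np

def j_stat(y, X, Z, Winv):
    """GMM estimate and J-statistic for the objective (1/T) u'Z Winv Z'u."""
    A = X.T @ Z @ Winv @ Z.T @ X
    b = np.linalg.solve(A, X.T @ Z @ Winv @ Z.T @ y)
    u = y - X @ b
    return (u @ Z) @ Winv @ (Z.T @ u) / len(y)

def endogeneity_test(y, X, Z, test_cols):
    """Durbin-Wu-Hausman style difference in J-statistics, in the spirit of (20.48)-(20.49)."""
    T = len(y)
    Z_aug = np.column_stack([Z, X[:, test_cols]])     # secondary estimation: test regressors
    Winv_aug = np.linalg.inv(Z_aug.T @ Z_aug / T)     # treated as exogenous (added to Z)
    J_secondary = j_stat(y, X, Z_aug, Winv_aug)
    k = Z.shape[1]
    Winv_sub = Winv_aug[:k, :k]                       # subset corresponding to the original Z
    J_original = j_stat(y, X, Z, Winv_sub)            # third estimation with the subset weights
    return J_secondary - J_original, len(test_cols)   # statistic and chi-squared df
```

In practice the returned statistic would be compared against a Chi-squared distribution with the returned degrees of freedom.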
Weak Instrument Diagnostics
The Weak Instrument Diagnostics view provides diagnostic information on the instruments used during estimation. This information includes the Cragg-Donald statistic, the associated Stock and Yogo critical values, and Moment Selection Criteria (MSC). The Cragg-Donald statistic and its critical values are available for equations estimated by TSLS, GMM or LIML, while the MSC are available for equations estimated by TSLS or GMM only.
The Cragg-Donald statistic is proposed by Stock and Yogo as a measure of the validity of the instruments in an IV regression. Instruments that are only marginally valid, known as weak instruments, can lead to biased inference based on the IV estimates, so testing for the presence of weak instruments is important. For a discussion of the properties of IV estimation when the instruments are weak, see, for example, Moreira (2001), Stock and Yogo (2004), or Stock, Wright and Yogo (2002).

Although the Cragg-Donald statistic is only valid for TSLS and other K-class estimators, EViews also reports it for equations estimated by GMM for comparative purposes.
The Cragg-Donald statistic is calculated as:
$$G_t = \frac{T - k_1 - k_2}{k_2}\,(X_E'M_{XZ}X_E)^{-1/2}(M_X X_E)'M_X Z_Z\bigl((M_X Z_Z)'(M_X Z_Z)\bigr)^{-1}(M_X Z_Z)'(M_X X_E)(X_E'M_{XZ}X_E)^{-1/2} \qquad (20.50)$$

where:

$Z_Z$ = instruments that are not in the regressor list
$X_Z = (X_X \;\; Z_Z)$
$X_X$ = exogenous regressors (regressors in both the regressor and instrument lists)
$X_E$ = endogenous regressors (regressors that are not in the instrument list)
$M_{XZ} = I - X_Z(X_Z'X_Z)^{-1}X_Z'$
$M_X = I - X_X(X_X'X_X)^{-1}X_X'$
$k_1$ = number of columns of $X_X$
$k_2$ = number of columns of $Z_Z$
The statistic does not follow a standard distribution; however, Stock and Yogo provide a table of critical values for certain combinations of numbers of instruments and endogenous variables. EViews will report these critical values if they are available for the specified number of instruments and endogenous variables in the equation.
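As a worked illustration of (20.50), the sketch below forms the matrices from the partitioned data and, following the Stock and Yogo convention, reports the smallest eigenvalue of the resulting matrix as the scalar statistic. That final eigenvalue step, and the helper names, are assumptions added here for clarity rather than part of the definition above.

```python
import numpy as np

def annihilator(A):
    """M = I - A (A'A)^{-1} A', the residual-maker for the columns of A (clear, not efficient)."""
    return np.eye(A.shape[0]) - A @ np.linalg.solve(A.T @ A, A.T)

def cragg_donald(XX, XE, ZZ):
    """Evaluate the matrix in (20.50) and return its smallest eigenvalue."""
    T, k1 = XX.shape
    k2 = ZZ.shape[1]
    XZ = np.column_stack([XX, ZZ])                    # full instrument set X_Z = (X_X  Z_Z)
    M_XZ = annihilator(XZ)
    M_X = annihilator(XX)
    S = XE.T @ M_XZ @ XE                              # X_E' M_XZ X_E
    w, V = np.linalg.eigh(S)
    S_inv_half = V @ np.diag(w ** -0.5) @ V.T         # symmetric inverse square root
    MZ, ME = M_X @ ZZ, M_X @ XE
    core = ME.T @ MZ @ np.linalg.solve(MZ.T @ MZ, MZ.T @ ME)
    G = (T - k1 - k2) / k2 * (S_inv_half @ core @ S_inv_half)
    return np.min(np.linalg.eigvalsh(G))
```

The returned value would then be compared against the Stock and Yogo critical values for the relevant numbers of instruments and endogenous regressors.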
Moment Selection Criteria (MSC) are a form of information criteria that can be used to compare different instrument sets. Comparing the MSC from equations estimated with different instruments can help determine which instruments perform best. EViews reports three different MSC: two proposed by Andrews (1999), one based on the Schwarz criterion and one based on the Hannan-Quinn criterion, and a third proposed by Hall, Inoue, Jana and Shin (2007), the Relevant Moment Selection Criterion. They are calculated as follows:
$$\text{SIC-based} = J_T - (c - k)\ln(T)$$
$$\text{HQIQ-based} = J_T - 2.01(c - k)\ln(\ln(T))$$
$$\text{Relevant MSC} = \ln(|TQ|) + \frac{1}{\tau}(c - k)\ln(\tau)$$

where $c$ is the number of instruments, $k$ is the number of regressors, $T$ is the number of observations, $Q$ is the estimation covariance matrix, and $\tau = (T/b)^{1/2}$, where $b$ is equal to 1 for TSLS and White GMM estimation and equal to the bandwidth used in HAC GMM estimation.
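A direct transcription of the three criteria into NumPy is shown below. The determinant applied to $TQ$ and the sign of the Relevant MSC penalty follow the reconstructed formulas above and should be treated as assumptions, as should the function name.

```python
import numpy as np

def moment_selection_criteria(J, c, k, T, Q, bandwidth=1.0):
    """MSC values for one instrument set.

    J is the J-statistic, c the number of instruments, k the number of regressors,
    T the number of observations, Q the estimation covariance matrix (2-D array),
    and bandwidth is 1 for TSLS/White GMM or the HAC bandwidth otherwise.
    """
    tau = (T / bandwidth) ** 0.5
    sic_based = J - (c - k) * np.log(T)
    hqiq_based = J - 2.01 * (c - k) * np.log(np.log(T))
    # Penalty sign on the Relevant MSC follows the formula above (an assumption).
    relevant = np.log(np.linalg.det(T * np.asarray(Q))) + (1.0 / tau) * (c - k) * np.log(tau)
    return {"SIC-based": sic_based, "HQIQ-based": hqiq_based, "Relevant MSC": relevant}
```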
To view the Weak Instrument Diagnostics in EViews, click on View/IV Diagnostics & Tests/Weak Instrument Diagnostics.
GMM Breakpoint Test
The GMM Breakpoint test is similar to the Chow Breakpoint Test, but it is geared towards equations estimated via GMM rather than least squares.
EViews calculates three different types of GMM breakpoint test statistics: the Andrews-Fair (1988) Wald Statistic, the Andrews-Fair LR-type Statistic, and the Hall and Sen (1999) O-Statistic. The first two statistics test the null hypothesis that there are no structural breaks in the equation parameters. The third statistic tests the null hypothesis that the over-identifying restrictions are stable over the entire sample.
All three statistics are calculated in a similar fashion to the Chow Statistic: the data are partitioned into different subsamples, and the original equation is re-estimated for each of these subsamples. However, unlike the Chow Statistic, which is calculated on the basis that the variance-covariance matrix of the error terms remains constant throughout the entire sample (i.e., $\sigma^2$ is the same between subsamples), the GMM breakpoint statistic lets the variance-covariance matrix of the error terms vary between the subsamples.
The Andrews-Fair Wald Statistic is calculated, in the single breakpoint case, as:
$$AF_1 = (v_1 - v_2)'\left(\frac{1}{T_1}V_1^{-1} + \frac{1}{T_2}V_2^{-1}\right)^{-1}(v_1 - v_2) \qquad (20.51)$$

where $v_i$ refers to the coefficient estimates from subsample $i$, $T_i$ refers to the number of observations in subsample $i$, and $V_i$ is the estimate of the variance-covariance matrix for subsample $i$.
The Andrews-Fair LR-type statistic is a comparison of the J-statistics from each of the subsample estimations:
$$AF_2 = J_R - (J_1 + J_2) \qquad (20.52)$$

where $J_R$ is a J-statistic calculated with the original equation's residuals, but with a GMM weighting matrix equal to the weighted (by number of observations) sum of the estimated weighting matrices from each of the subsample estimations.
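Given the subsample estimates, both statistics can be evaluated directly. The sketch below simply transcribes (20.51) and (20.52) as written above, with all inputs (coefficient vectors, covariance matrices, subsample sizes, and the three J-statistics) supplied by the user; the function names and calling conventions are illustrative only.

```python
import numpy as np

def andrews_fair_wald(v1, v2, V1, V2, T1, T2):
    """AF1: Wald comparison of the two subsample coefficient vectors, as in (20.51)."""
    d = np.asarray(v1) - np.asarray(v2)
    middle = np.linalg.inv(np.linalg.inv(V1) / T1 + np.linalg.inv(V2) / T2)
    return d @ middle @ d

def andrews_fair_lr(J_restricted, J1, J2):
    """AF2: restricted J-statistic minus the sum of the subsample J-statistics, as in (20.52)."""
    return J_restricted - (J1 + J2)
```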