- •Table of Contents
- •What’s New in EViews 5.0
- •What’s New in 5.0
- •Compatibility Notes
- •EViews 5.1 Update Overview
- •Overview of EViews 5.1 New Features
- •Preface
- •Part I. EViews Fundamentals
- •Chapter 1. Introduction
- •What is EViews?
- •Installing and Running EViews
- •Windows Basics
- •The EViews Window
- •Closing EViews
- •Where to Go For Help
- •Chapter 2. A Demonstration
- •Getting Data into EViews
- •Examining the Data
- •Estimating a Regression Model
- •Specification and Hypothesis Tests
- •Modifying the Equation
- •Forecasting from an Estimated Equation
- •Additional Testing
- •Chapter 3. Workfile Basics
- •What is a Workfile?
- •Creating a Workfile
- •The Workfile Window
- •Saving a Workfile
- •Loading a Workfile
- •Multi-page Workfiles
- •Addendum: File Dialog Features
- •Chapter 4. Object Basics
- •What is an Object?
- •Basic Object Operations
- •The Object Window
- •Working with Objects
- •Chapter 5. Basic Data Handling
- •Data Objects
- •Samples
- •Sample Objects
- •Importing Data
- •Exporting Data
- •Frequency Conversion
- •Importing ASCII Text Files
- •Chapter 6. Working with Data
- •Numeric Expressions
- •Series
- •Auto-series
- •Groups
- •Scalars
- •Chapter 7. Working with Data (Advanced)
- •Auto-Updating Series
- •Alpha Series
- •Date Series
- •Value Maps
- •Chapter 8. Series Links
- •Basic Link Concepts
- •Creating a Link
- •Working with Links
- •Chapter 9. Advanced Workfiles
- •Structuring a Workfile
- •Resizing a Workfile
- •Appending to a Workfile
- •Contracting a Workfile
- •Copying from a Workfile
- •Reshaping a Workfile
- •Sorting a Workfile
- •Exporting from a Workfile
- •Chapter 10. EViews Databases
- •Database Overview
- •Database Basics
- •Working with Objects in Databases
- •Database Auto-Series
- •The Database Registry
- •Querying the Database
- •Object Aliases and Illegal Names
- •Maintaining the Database
- •Foreign Format Databases
- •Working with DRIPro Links
- •Part II. Basic Data Analysis
- •Chapter 11. Series
- •Series Views Overview
- •Spreadsheet and Graph Views
- •Descriptive Statistics
- •Tests for Descriptive Stats
- •Distribution Graphs
- •One-Way Tabulation
- •Correlogram
- •Unit Root Test
- •BDS Test
- •Properties
- •Label
- •Series Procs Overview
- •Generate by Equation
- •Resample
- •Seasonal Adjustment
- •Exponential Smoothing
- •Hodrick-Prescott Filter
- •Frequency (Band-Pass) Filter
- •Chapter 12. Groups
- •Group Views Overview
- •Group Members
- •Spreadsheet
- •Dated Data Table
- •Graphs
- •Multiple Graphs
- •Descriptive Statistics
- •Tests of Equality
- •N-Way Tabulation
- •Principal Components
- •Correlations, Covariances, and Correlograms
- •Cross Correlations and Correlograms
- •Cointegration Test
- •Unit Root Test
- •Granger Causality
- •Label
- •Group Procedures Overview
- •Chapter 13. Statistical Graphs from Series and Groups
- •Distribution Graphs of Series
- •Scatter Diagrams with Fit Lines
- •Boxplots
- •Chapter 14. Graphs, Tables, and Text Objects
- •Creating Graphs
- •Modifying Graphs
- •Multiple Graphs
- •Printing Graphs
- •Copying Graphs to the Clipboard
- •Saving Graphs to a File
- •Graph Commands
- •Creating Tables
- •Table Basics
- •Basic Table Customization
- •Customizing Table Cells
- •Copying Tables to the Clipboard
- •Saving Tables to a File
- •Table Commands
- •Text Objects
- •Part III. Basic Single Equation Analysis
- •Chapter 15. Basic Regression
- •Equation Objects
- •Specifying an Equation in EViews
- •Estimating an Equation in EViews
- •Equation Output
- •Working with Equations
- •Estimation Problems
- •Chapter 16. Additional Regression Methods
- •Special Equation Terms
- •Weighted Least Squares
- •Heteroskedasticity and Autocorrelation Consistent Covariances
- •Two-stage Least Squares
- •Nonlinear Least Squares
- •Generalized Method of Moments (GMM)
- •Chapter 17. Time Series Regression
- •Serial Correlation Theory
- •Testing for Serial Correlation
- •Estimating AR Models
- •ARIMA Theory
- •Estimating ARIMA Models
- •ARMA Equation Diagnostics
- •Nonstationary Time Series
- •Unit Root Tests
- •Panel Unit Root Tests
- •Chapter 18. Forecasting from an Equation
- •Forecasting from Equations in EViews
- •An Illustration
- •Forecast Basics
- •Forecasting with ARMA Errors
- •Forecasting from Equations with Expressions
- •Forecasting with Expression and PDL Specifications
- •Chapter 19. Specification and Diagnostic Tests
- •Background
- •Coefficient Tests
- •Residual Tests
- •Specification and Stability Tests
- •Applications
- •Part IV. Advanced Single Equation Analysis
- •Chapter 20. ARCH and GARCH Estimation
- •Basic ARCH Specifications
- •Estimating ARCH Models in EViews
- •Working with ARCH Models
- •Additional ARCH Models
- •Examples
- •Chapter 21. Discrete and Limited Dependent Variable Models
- •Binary Dependent Variable Models
- •Estimating Binary Models in EViews
- •Procedures for Binary Equations
- •Ordered Dependent Variable Models
- •Estimating Ordered Models in EViews
- •Views of Ordered Equations
- •Procedures for Ordered Equations
- •Censored Regression Models
- •Estimating Censored Models in EViews
- •Procedures for Censored Equations
- •Truncated Regression Models
- •Procedures for Truncated Equations
- •Count Models
- •Views of Count Models
- •Procedures for Count Models
- •Demonstrations
- •Technical Notes
- •Chapter 22. The Log Likelihood (LogL) Object
- •Overview
- •Specification
- •Estimation
- •LogL Views
- •LogL Procs
- •Troubleshooting
- •Limitations
- •Examples
- •Part V. Multiple Equation Analysis
- •Chapter 23. System Estimation
- •Background
- •System Estimation Methods
- •How to Create and Specify a System
- •Working With Systems
- •Technical Discussion
- •Chapter 24. Vector Autoregression and Error Correction Models
- •Vector Autoregressions (VARs)
- •Estimating a VAR in EViews
- •VAR Estimation Output
- •Views and Procs of a VAR
- •Structural (Identified) VARs
- •Cointegration Test
- •Vector Error Correction (VEC) Models
- •A Note on Version Compatibility
- •Chapter 25. State Space Models and the Kalman Filter
- •Background
- •Specifying a State Space Model in EViews
- •Working with the State Space
- •Converting from Version 3 Sspace
- •Technical Discussion
- •Chapter 26. Models
- •Overview
- •An Example Model
- •Building a Model
- •Working with the Model Structure
- •Specifying Scenarios
- •Using Add Factors
- •Solving the Model
- •Working with the Model Data
- •Part VI. Panel and Pooled Data
- •Chapter 27. Pooled Time Series, Cross-Section Data
- •The Pool Workfile
- •The Pool Object
- •Pooled Data
- •Setting up a Pool Workfile
- •Working with Pooled Data
- •Pooled Estimation
- •Chapter 28. Working with Panel Data
- •Structuring a Panel Workfile
- •Panel Workfile Display
- •Panel Workfile Information
- •Working with Panel Data
- •Basic Panel Analysis
- •Chapter 29. Panel Estimation
- •Estimating a Panel Equation
- •Panel Estimation Examples
- •Panel Equation Testing
- •Estimation Background
- •Appendix A. Global Options
- •The Options Menu
- •Print Setup
- •Appendix B. Wildcards
- •Wildcard Expressions
- •Using Wildcard Expressions
- •Source and Destination Patterns
- •Resolving Ambiguities
- •Wildcard versus Pool Identifier
- •Appendix C. Estimation and Solution Options
- •Setting Estimation Options
- •Optimization Algorithms
- •Nonlinear Equation Solution Methods
- •Appendix D. Gradients and Derivatives
- •Gradients
- •Derivatives
- •Appendix E. Information Criteria
- •Definitions
- •Using Information Criteria as a Guide to Model Selection
- •References
- •Index
- •Symbols
- •.DB? files 266
- •.EDB file 262
- •.RTF file 437
- •.WF1 file 62
- •@obsnum
- •Panel
- •@unmaptxt 174
- •~, in backup file name 62, 939
- •Numerics
- •3sls (three-stage least squares) 697, 716
- •Abort key 21
- •ARIMA models 501
- •ASCII
- •file export 115
- •ASCII file
- •See also Unit root tests.
- •Auto-search
- •Auto-series
- •in groups 144
- •Auto-updating series
- •and databases 152
- •Backcast
- •Berndt-Hall-Hall-Hausman (BHHH). See Optimization algorithms.
- •Bias proportion 554
- •fitted index 634
- •Binning option
- •classifications 313, 382
- •Boxplots 409
- •By-group statistics 312, 886, 893
- •coef vector 444
- •Causality
- •Granger's test 389
- •scale factor 649
- •Census X11
- •Census X12 337
- •Chi-square
- •Cholesky factor
- •Classification table
- •Close
- •Coef (coefficient vector)
- •default 444
- •Coefficient
- •Comparison operators
- •Conditional standard deviation
- •graph 610
- •Confidence interval
- •Constant
- •Copy
- •data cut-and-paste 107
- •table to clipboard 437
- •Covariance matrix
- •HAC (Newey-West) 473
- •heteroskedasticity consistent of estimated coefficients 472
- •Create
- •Cross-equation
- •Tukey option 393
- •CUSUM
- •sum of recursive residuals test 589
- •sum of recursive squared residuals test 590
- •Data
- •Database
- •link options 303
- •using auto-updating series with 152
- •Dates
- •Default
- •database 24, 266
- •set directory 71
- •Dependent variable
- •Description
- •Descriptive statistics
- •by group 312
- •group 379
- •individual samples (group) 379
- •Display format
- •Display name
- •Distribution
- •Dummy variables
- •for regression 452
- •lagged dependent variable 495
- •Dynamic forecasting 556
- •Edit
- •See also Unit root tests.
- •Equation
- •create 443
- •store 458
- •Estimation
- •EViews
- •Excel file
- •Excel files
- •Expectation-prediction table
- •Expected dependent variable
- •double 352
- •Export data 114
- •Extreme value
- •binary model 624
- •Fetch
- •File
- •save table to 438
- •Files
- •Fitted index
- •Fitted values
- •Font options
- •Fonts
- •Forecast
- •evaluation 553
- •Foreign data
- •Formula
- •forecast 561
- •Freq
- •DRI database 303
- •F-test
- •for variance equality 321
- •Full information maximum likelihood 698
- •GARCH 601
- •ARCH-M model 603
- •variance factor 668
- •system 716
- •Goodness-of-fit
- •Gradients 963
- •Graph
- •remove elements 423
- •Groups
- •display format 94
- •Groupwise heteroskedasticity 380
- •Help
- •Heteroskedasticity and autocorrelation consistent covariance (HAC) 473
- •History
- •Holt-Winters
- •Hypothesis tests
- •F-test 321
- •Identification
- •Identity
- •Import
- •Import data
- •See also VAR.
- •Index
- •Insert
- •Instruments 474
- •Iteration
- •Iteration option 953
- •in nonlinear least squares 483
- •J-statistic 491
- •J-test 596
- •Kernel
- •bivariate fit 405
- •choice in HAC weighting 704, 718
- •Kernel function
- •Keyboard
- •Kwiatkowski, Phillips, Schmidt, and Shin test 525
- •Label 82
- •Last_update
- •Last_write
- •Latent variable
- •Lead
- •make covariance matrix 643
- •List
- •LM test
- •ARCH 582
- •for binary models 622
- •LOWESS. See also LOESS
- •in ARIMA models 501
- •Mean absolute error 553
- •Metafile
- •Micro TSP
- •recoding 137
- •Models
- •add factors 777, 802
- •solving 804
- •Mouse 18
- •Multicollinearity 460
- •Name
- •Newey-West
- •Nonlinear coefficient restriction
- •Wald test 575
- •weighted two stage 486
- •Normal distribution
- •Numbers
- •chi-square tests 383
- •Object 73
- •Open
- •Option setting
- •Option settings
- •Or operator 98, 133
- •Ordinary residual
- •Panel
- •irregular 214
- •unit root tests 530
- •Paste 83
- •PcGive data 293
- •Polynomial distributed lag
- •Pool
- •Pool (object)
- •PostScript
- •Prediction table
- •Principal components 385
- •Program
- •p-value 569
- •for coefficient t-statistic 450
- •Quiet mode 939
- •RATS data
- •Read 832
- •CUSUM 589
- •Regression
- •Relational operators
- •Remarks
- •database 287
- •Residuals
- •Resize
- •Results
- •RichText Format
- •Robust standard errors
- •Robustness iterations
- •for regression 451
- •with AR specification 500
- •workfile 95
- •Save
- •Seasonal
- •Seasonal graphs 310
- •Select
- •single item 20
- •Serial correlation
- •theory 493
- •Series
- •Smoothing
- •Solve
- •Source
- •Specification test
- •Spreadsheet
- •Standard error
- •Standard error
- •binary models 634
- •Start
- •Starting values
- •Summary statistics
- •for regression variables 451
- •System
- •Table 429
- •font 434
- •Tabulation
- •Template 424
- •Tests. See also Hypothesis tests, Specification test and Goodness of fit.
- •Text file
- •open as workfile 54
- •Type
- •field in database query 282
- •Units
- •Update
- •Valmap
- •find label for value 173
- •find numeric value for label 174
- •Value maps 163
- •estimating 749
- •View
- •Wald test 572
- •nonlinear restriction 575
- •Watson test 323
- •Weighting matrix
- •heteroskedasticity and autocorrelation consistent (HAC) 718
- •kernel options 718
- •White
- •Window
- •Workfile
- •storage defaults 940
- •Write 844
- •XY line
- •Yates' continuity correction 321
712—Chapter 23. System Estimation
substitution between capital and labor is given by 1+c(3)/(C_K*C_L). Note that the elasticity of substitution is not a constant, and depends on the values of C_K and C_L. To create a series containing the elasticities computed for each observation, select Quick/Generate Series…, and enter:
es_kl = 1 + sys1.c(3)/(c_k*c_l)
To plot the series of elasticity of substitution between capital and labor for each observation, double click on the series name ES_KL in the workfile and select View/Line Graph:
While it varies over the sample, the elasticity of substitution is generally close to one, which is consistent with the assumption of a Cobb-Douglas cost function.
Technical Discussion
While the discussion to follow is expressed in terms of a balanced system of linear equations, the analysis carries forward in a straightforward way to unbalanced systems containing nonlinear equations.
Denote a system of $M$ equations in stacked form as:

$$
\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_M \end{bmatrix} =
\begin{bmatrix}
X_1 & 0 & \cdots & 0 \\
0 & X_2 & & \vdots \\
\vdots & & \ddots & 0 \\
0 & \cdots & 0 & X_M
\end{bmatrix}
\begin{bmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_M \end{bmatrix} +
\begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_M \end{bmatrix}
\tag{23.4}
$$

where $y_m$ is a $T$-vector, $X_m$ is a $T \times k_m$ matrix, and $\beta_m$ is a $k_m$-vector of coefficients. The error terms $\epsilon$ have an $MT \times MT$ covariance matrix $V$. The system may be written in compact form as:
$$ y = X\beta + \epsilon . \tag{23.5} $$

Under the standard assumptions, the residual variance matrix from this stacked system is given by:

$$ V = E(\epsilon\epsilon') = \sigma^2 (I_M \otimes I_T) . \tag{23.6} $$
Other residual structures are of interest. First, the errors may be heteroskedastic across the $M$ equations. Second, they may be heteroskedastic and contemporaneously correlated. We can characterize both of these cases by defining the $M \times M$ matrix of contemporaneous correlations, $\Sigma$, where the $(i,j)$-th element of $\Sigma$ is given by $\sigma_{ij} = E(\epsilon_{it}\epsilon_{jt})$ for all $t$. If the errors are contemporaneously uncorrelated, then $\sigma_{ij} = 0$ for $i \neq j$, and we can write:

$$ V = \operatorname{diag}(\sigma_1^2, \sigma_2^2, \ldots, \sigma_M^2) \otimes I_T \tag{23.7} $$

More generally, if the errors are heteroskedastic and contemporaneously correlated:

$$ V = \Sigma \otimes I_T . \tag{23.8} $$
Lastly, at the most general level, there may be heteroskedasticity, contemporaneous correlation, and autocorrelation of the residuals. The general variance matrix of the residuals may be written:

$$
V = \begin{bmatrix}
\sigma_{11}\Sigma_{11} & \sigma_{12}\Sigma_{12} & \cdots & \sigma_{1M}\Sigma_{1M} \\
\sigma_{21}\Sigma_{21} & \sigma_{22}\Sigma_{22} & & \vdots \\
\vdots & & \ddots & \\
\sigma_{M1}\Sigma_{M1} & \cdots & & \sigma_{MM}\Sigma_{MM}
\end{bmatrix}
\tag{23.9}
$$

where $\Sigma_{ij}$ is an autocorrelation matrix for the $i$-th and $j$-th equations.
Ordinary Least Squares
The OLS estimator of the estimated variance matrix of the parameters is valid under the assumption that $V = \sigma^2(I_M \otimes I_T)$. The estimator for $\beta$ is given by,

$$ b_{LS} = (X'X)^{-1} X'y \tag{23.10} $$

and the variance estimator is given by:

$$ \operatorname{var}(b_{LS}) = s^2 (X'X)^{-1} \tag{23.11} $$

where $s^2$ is the residual variance estimate for the stacked system.
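The stacked computation in (23.10) and (23.11) can be sketched numerically. The following is a minimal NumPy illustration with synthetic data, not EViews code; all names, dimensions, and parameter values are assumptions for the example.

```python
import numpy as np

# Two-equation system stacked as in (23.4): block-diagonal X, stacked y.
rng = np.random.default_rng(0)
T = 50
X1 = np.column_stack([np.ones(T), rng.normal(size=T)])
X2 = np.column_stack([np.ones(T), rng.normal(size=T)])
y1 = X1 @ np.array([1.0, 2.0]) + rng.normal(size=T)
y2 = X2 @ np.array([-0.5, 0.3]) + rng.normal(size=T)

X = np.block([[X1, np.zeros_like(X2)], [np.zeros_like(X1), X2]])
y = np.concatenate([y1, y2])

# b_LS = (X'X)^{-1} X'y, equation (23.10)
b_ls = np.linalg.solve(X.T @ X, X.T @ y)

# var(b_LS) = s^2 (X'X)^{-1}, equation (23.11), with s^2 from stacked residuals
resid = y - X @ b_ls
s2 = resid @ resid / (len(y) - X.shape[1])
var_b = s2 * np.linalg.inv(X.T @ X)
```

Because $X$ is block diagonal, the stacked coefficients coincide with equation-by-equation OLS; only the common $s^2$ ties the equations together.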
714—Chapter 23. System Estimation
Weighted Least Squares
The weighted least squares estimator is given by:

$$ b_{WLS} = (X'\hat{V}^{-1}X)^{-1} X'\hat{V}^{-1} y \tag{23.12} $$

where $\hat{V} = \operatorname{diag}(s_{11}, s_{22}, \ldots, s_{MM}) \otimes I_T$ is a consistent estimator of $V$, and $s_{ii}$ is the residual variance estimator,

$$ s_{ij} = \big((y_i - X_i b_{LS})'(y_j - X_j b_{LS})\big) / \max(T_i, T_j) \tag{23.13} $$

where the inner product is taken over the non-missing common elements of $i$ and $j$. The max function in Equation (23.13) is designed to handle the case of unbalanced data by down-weighting the covariance terms. Provided the missing values are asymptotically negligible, this yields a consistent estimator of the variance elements. Note also that there is no adjustment for degrees of freedom.
When specifying your estimation, you are given a choice of which coefficients to use in computing the $s_{ij}$. If you choose not to iterate the weights, the OLS coefficient estimates will be used to estimate the variances. If you choose to iterate the weights, the current parameter estimates (which may be based on the previously computed weights) are used in computing the $s_{ij}$. This latter procedure may be iterated until the weights and coefficients converge.
The estimator for the coefficient variance matrix is:

$$ \operatorname{var}(b_{WLS}) = (X'\hat{V}^{-1}X)^{-1} . \tag{23.14} $$
The weighted least squares estimator is efficient, and the variance estimator consistent, under the assumption that there is heteroskedasticity, but no serial or contemporaneous correlation in the residuals.
It is worth pointing out that if there are no cross-equation restrictions on the parameters of the model, weighted LS on the entire system yields estimates that are identical to those obtained by equation-by-equation LS. Consider the following simple model:
$$
\begin{aligned}
y_1 &= X_1\beta_1 + \epsilon_1 \\
y_2 &= X_2\beta_2 + \epsilon_2
\end{aligned}
\tag{23.15}
$$

If $\beta_1$ and $\beta_2$ are unrestricted, the WLS estimator given in Equation (23.12) yields:

$$
b_{WLS} =
\begin{bmatrix}
\big((X_1'X_1)/s_{11}\big)^{-1}\big((X_1'y_1)/s_{11}\big) \\
\big((X_2'X_2)/s_{22}\big)^{-1}\big((X_2'y_2)/s_{22}\big)
\end{bmatrix}
=
\begin{bmatrix}
(X_1'X_1)^{-1}X_1'y_1 \\
(X_2'X_2)^{-1}X_2'y_2
\end{bmatrix} .
\tag{23.16}
$$
The expression on the right is equivalent to equation-by-equation OLS. Note, however, that even without cross-equation restrictions, the standard errors are not the same in the two cases.
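The equivalence claim is easy to verify numerically. Below is a hedged NumPy sketch of feasible WLS per (23.12) and (23.13) on synthetic balanced data (so $\max(T_i, T_j) = T$); it is an illustration, not EViews code.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
X1 = np.column_stack([np.ones(T), rng.normal(size=T)])
X2 = np.column_stack([np.ones(T), rng.normal(size=T)])
y1 = X1 @ np.array([1.0, 2.0]) + rng.normal(scale=1.0, size=T)
y2 = X2 @ np.array([0.5, -1.0]) + rng.normal(scale=3.0, size=T)
X = np.block([[X1, np.zeros_like(X2)], [np.zeros_like(X1), X2]])
y = np.concatenate([y1, y2])

b = np.linalg.solve(X.T @ X, X.T @ y)       # OLS starting values
for _ in range(20):                         # iterate weights and coefficients
    e1, e2 = y1 - X1 @ b[:2], y2 - X2 @ b[2:]
    s11, s22 = e1 @ e1 / T, e2 @ e2 / T     # s_ii per (23.13), no d.o.f. adjustment
    w = np.concatenate([np.full(T, 1.0 / s11), np.full(T, 1.0 / s22)])
    b_new = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    if np.allclose(b_new, b):
        break
    b = b_new
```

With no cross-equation restrictions the loop converges immediately: reweighting each equation by a scalar leaves the per-equation OLS solution unchanged, exactly as in (23.16).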
Seemingly Unrelated Regression (SUR)
SUR is appropriate when all the right-hand side regressors $X$ are assumed to be exogenous, and the errors are heteroskedastic and contemporaneously correlated so that the error variance matrix is given by $V = \Sigma \otimes I_T$. Zellner's SUR estimator of $\beta$ takes the form:

$$ b_{SUR} = \big(X'(\hat{\Sigma} \otimes I_T)^{-1}X\big)^{-1} X'(\hat{\Sigma} \otimes I_T)^{-1} y , \tag{23.17} $$

where $\hat{\Sigma}$ is a consistent estimate of $\Sigma$ with typical element $s_{ij}$, for all $i$ and $j$.
If you include AR terms in equation $j$, EViews transforms the model (see “Estimating AR Models” on page 497) and estimates the following equation:

$$ y_{jt} = X_{jt}\beta_j + \sum_{r=1}^{p_j} \rho_{jr}\big(y_{j(t-r)} - X_{j(t-r)}\beta_j\big) + \epsilon_{jt} \tag{23.18} $$

where $\epsilon_j$ is assumed to be serially independent, but possibly correlated contemporaneously across equations. At the beginning of the first iteration, we estimate the equation by nonlinear LS and use the estimates to compute the residuals $\hat{\epsilon}$. We then construct an estimate of $\Sigma$ using $s_{ij} = (\hat{\epsilon}_i'\hat{\epsilon}_j)/\max(T_i, T_j)$ and perform nonlinear GLS to complete one iteration of the estimation procedure. These iterations may be repeated until the coefficients and weights converge.
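The two-step SUR computation can be sketched as follows. This is a hedged NumPy illustration of (23.17) on synthetic data; it is not EViews code, and the Kronecker-product inverse is formed explicitly only because the example is tiny.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 60
X1 = np.column_stack([np.ones(T), rng.normal(size=T)])
X2 = np.column_stack([np.ones(T), rng.normal(size=T)])
Sigma = np.array([[1.0, 0.8], [0.8, 2.0]])     # true contemporaneous covariance
E = rng.multivariate_normal([0.0, 0.0], Sigma, size=T)
y1 = X1 @ np.array([1.0, 2.0]) + E[:, 0]
y2 = X2 @ np.array([0.5, -1.0]) + E[:, 1]

X = np.block([[X1, np.zeros_like(X2)], [np.zeros_like(X1), X2]])
y = np.concatenate([y1, y2])

# Step 1: OLS residuals give the typical elements s_ij of Sigma_hat
b_ols = np.linalg.solve(X.T @ X, X.T @ y)
e1, e2 = y1 - X1 @ b_ols[:2], y2 - X2 @ b_ols[2:]
S = np.array([[e1 @ e1, e1 @ e2], [e2 @ e1, e2 @ e2]]) / T

# Step 2: feasible GLS, using (Sigma_hat kron I_T)^{-1} = Sigma_hat^{-1} kron I_T
Vinv = np.kron(np.linalg.inv(S), np.eye(T))
b_sur = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
```

In practice the Kronecker structure is exploited rather than materialized; the explicit form above simply mirrors the algebra of (23.17).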
Two-Stage Least Squares (TSLS) and Weighted TSLS
TSLS is a single equation estimation method that is appropriate when some of the variables in $X$ are endogenous. Write the $j$-th equation of the system as,

$$ Y\Gamma_j + XB_j + \epsilon_j = 0 \tag{23.19} $$

or, alternatively:

$$ y_j = Y_j\gamma_j + X_j\beta_j + \epsilon_j = Z_j\delta_j + \epsilon_j \tag{23.20} $$

where $\Gamma_j' = (-1, \gamma_j', 0)$, $B_j' = (\beta_j', 0)$, $Z_j' = (Y_j', X_j')$ and $\delta_j' = (\gamma_j', \beta_j')$. $Y$ is the matrix of endogenous variables and $X$ is the matrix of exogenous variables.

In the first stage, we regress the right-hand side endogenous variables $Y_j$ on all exogenous variables $X$ and get the fitted values:

$$ \hat{Y}_j = X(X'X)^{-1}X'Y_j . \tag{23.21} $$

In the second stage, we regress $y_j$ on $\hat{Y}_j$ and $X_j$ to get:

$$ \hat{\delta}_{2SLS} = (\hat{Z}_j'\hat{Z}_j)^{-1}\hat{Z}_j'y_j \tag{23.22} $$

where $\hat{Z}_j = (\hat{Y}_j, X_j)$.
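The two stages in (23.21) and (23.22) amount to two ordinary regressions. Below is a hedged NumPy sketch on synthetic data with one endogenous regressor and two instruments; it is an illustration, not EViews code, and all names are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 500
const = np.ones(T)
z1, z2 = rng.normal(size=T), rng.normal(size=T)
common = rng.normal(size=T)                   # source of endogeneity
y_endog = 1.0 + 0.7 * z1 - 0.4 * z2 + common + rng.normal(size=T)
eps = common + rng.normal(size=T)             # correlated with y_endog
y = 2.0 + 1.5 * y_endog + eps

X = np.column_stack([const, z1, z2])          # all exogenous variables
Y_j = y_endog.reshape(-1, 1)                  # right-hand side endogenous variable

# First stage: Y_hat = X (X'X)^{-1} X' Y_j, equation (23.21)
Y_hat = X @ np.linalg.solve(X.T @ X, X.T @ Y_j)

# Second stage: regress y on Z_hat = (Y_hat, X_j); here X_j is just the constant
Z_hat = np.column_stack([Y_hat, const])
delta_2sls = np.linalg.solve(Z_hat.T @ Z_hat, Z_hat.T @ y)
```

Here `delta_2sls[0]` estimates the structural slope consistently, whereas plain OLS of `y` on `y_endog` would be biased upward by the common error component.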
Weighted TSLS applies the weights in the second stage so that:

$$ \hat{\delta}_{W2SLS} = (\hat{Z}_j'\hat{V}^{-1}\hat{Z}_j)^{-1}\hat{Z}_j'\hat{V}^{-1}y_j \tag{23.23} $$

where the elements of the variance matrix are estimated in the usual fashion using the residuals from unweighted TSLS.

If you choose to iterate the weights, $\hat{V}$ is estimated at each step using the current values of the coefficients and residuals.
Three-Stage Least Squares (3SLS)
Since TSLS is a single equation estimator that does not take account of the covariances between residuals, it is not, in general, fully efficient. 3SLS is a system method that estimates all of the coefficients of the model, then forms weights and reestimates the model using the estimated weighting matrix. It should be viewed as the endogenous variable analogue to the SUR estimator described above.
The first two stages of 3SLS are the same as in TSLS. In the third stage, we apply feasible generalized least squares (FGLS) to the equations in the system in a manner analogous to the SUR estimator.
SUR uses the OLS residuals to obtain a consistent estimate of the cross-equation covariance matrix $\Sigma$. This covariance estimator is not, however, consistent if any of the right-hand side variables are endogenous. 3SLS uses the 2SLS residuals to obtain a consistent estimate of $\Sigma$.
In the balanced case, we may write the equation as,

$$ \hat{\delta}_{3SLS} = \big(Z'(\hat{\Sigma}^{-1} \otimes X(X'X)^{-1}X')Z\big)^{-1} Z'(\hat{\Sigma}^{-1} \otimes X(X'X)^{-1}X')y , \tag{23.24} $$

where $\hat{\Sigma}$ has typical element:

$$ s_{ij} = \big((y_i - Z_i\hat{\delta}_{2SLS})'(y_j - Z_j\hat{\delta}_{2SLS})\big) / \max(T_i, T_j) . \tag{23.25} $$

If you choose to iterate the weights, the current coefficients and residuals will be used to estimate $\hat{\Sigma}$.
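The 3SLS formula (23.24) reuses the first-stage projection $X(X'X)^{-1}X'$ from TSLS. The following hedged NumPy sketch on a synthetic two-equation system illustrates the computation; it is not EViews code, and the Kronecker product is formed explicitly only because the example is small.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 200
const = np.ones(T)
z1, z2 = rng.normal(size=T), rng.normal(size=T)
common = rng.normal(size=T)
w = 0.8 * z1 - 0.5 * z2 + common + rng.normal(size=T)   # endogenous regressor
e1 = common + rng.normal(size=T)
e2 = 0.5 * common + rng.normal(size=T)                  # correlated across equations
y1 = 1.0 + 2.0 * w + e1
y2 = -0.5 + 1.0 * w + e2

X = np.column_stack([const, z1, z2])                    # exogenous + instruments
P = X @ np.linalg.solve(X.T @ X, X.T)                   # projection X(X'X)^{-1}X'
Z1 = np.column_stack([w, const])
Z2 = np.column_stack([w, const])
Z = np.block([[Z1, np.zeros_like(Z2)], [np.zeros_like(Z1), Z2]])
y = np.concatenate([y1, y2])

# Equation-by-equation 2SLS and the typical elements s_ij of (23.25)
d1 = np.linalg.solve(Z1.T @ P @ Z1, Z1.T @ P @ y1)
d2 = np.linalg.solve(Z2.T @ P @ Z2, Z2.T @ P @ y2)
r1, r2 = y1 - Z1 @ d1, y2 - Z2 @ d2
S = np.array([[r1 @ r1, r1 @ r2], [r2 @ r1, r2 @ r2]]) / T

# Third stage, equation (23.24): delta = (Z'(S^{-1} kron P)Z)^{-1} Z'(S^{-1} kron P)y
W = np.kron(np.linalg.inv(S), P)
d_3sls = np.linalg.solve(Z.T @ W @ Z, Z.T @ W @ y)
```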
Generalized Method of Moments (GMM)
The basic idea underlying GMM is simple and intuitive. We have a set of theoretical moment conditions that the parameters of interest θ should satisfy. We denote these moment conditions as:
$$ E(m(y, \theta)) = 0 . \tag{23.26} $$
The method of moments estimator is defined by replacing the moment condition (23.26) by its sample analog:

$$ \Big(\sum_t m(y_t, \theta)\Big) \Big/ T = 0 . \tag{23.27} $$
However, condition (23.27) will not be satisfied for any θ when there are more restrictions m than there are parameters θ . To allow for such overidentification, the GMM estimator is defined by minimizing the following criterion function:
$$ \sum_t m(y_t, \theta)'\, A(y_t, \theta)\, m(y_t, \theta) \tag{23.28} $$
which measures the “distance” between m and zero. A is a weighting matrix that weights each moment condition. Any symmetric positive definite matrix A will yield a consistent estimate of θ . However, it can be shown that a necessary (but not sufficient) condition to obtain an (asymptotically) efficient estimate of θ is to set A equal to the inverse of the covariance matrix Ω of the sample moments m . This follows intuitively, since we want to put less weight on the conditions that are more imprecise.
To obtain GMM estimates in EViews, you must be able to write the moment conditions in Equation (23.26) as an orthogonality condition between the residuals of a regression equation, u( y, θ, X) , and a set of instrumental variables, Z , so that:
$$ m(\theta, y, X, Z) = Z'u(\theta, y, X) \tag{23.29} $$
For example, the OLS estimator is obtained as a GMM estimator with the orthogonality conditions:
$$ X'(y - X\beta) = 0 . \tag{23.30} $$
For the GMM estimator to be identified, there must be at least as many instrumental variables Z as there are parameters θ . See the section on “Generalized Method of Moments (GMM)” beginning on page 488 for additional examples of GMM orthogonality conditions.
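The OLS case in (23.30) is a useful sanity check: at the OLS solution, the sample moment conditions hold exactly, so OLS is GMM with instruments $Z = X$. A minimal NumPy sketch on synthetic data (not EViews code):

```python
import numpy as np

rng = np.random.default_rng(5)
T = 100
X = np.column_stack([np.ones(T), rng.normal(size=T), rng.normal(size=T)])
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=T)

b = np.linalg.solve(X.T @ X, X.T @ y)   # OLS coefficients
moments = X.T @ (y - X @ b)             # sample analog of (23.30)
```

The `moments` vector is zero up to numerical round-off, confirming that the orthogonality conditions are satisfied exactly at the OLS estimates.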
An important aspect of specifying a GMM problem is the choice of the weighting matrix $A$. EViews uses the optimal $A = \hat{\Omega}^{-1}$, where $\hat{\Omega}$ is the estimated covariance matrix of the sample moments $m$. EViews uses the consistent TSLS estimates for the initial estimate of $\theta$ in forming the estimate of $\Omega$.
White’s Heteroskedasticity Consistent Covariance Matrix
If you choose the GMM-Cross section option, EViews estimates Ω using White’s heteroskedasticity consistent covariance matrix:
$$ \hat{\Omega}_W = \hat{\Gamma}(0) = \frac{1}{T-k}\sum_{t=1}^{T} Z_t' u_t u_t' Z_t \tag{23.31} $$

where $u_t$ is the vector of residuals, and $Z_t$ is a $k \times p$ matrix such that the $p$ moment conditions at $t$ may be written as $m(\theta, y_t, X_t, Z_t) = Z_t' u(\theta, y_t, X_t)$.
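With a single residual per observation, (23.31) reduces to a weighted outer-product sum. The following is a hedged NumPy sketch with synthetic heteroskedastic data (not EViews code; the OLS setup with $Z = X$ is an assumption for the example):

```python
import numpy as np

rng = np.random.default_rng(6)
T, k = 150, 2
Z = np.column_stack([np.ones(T), rng.normal(size=T)])   # instrument matrix, p = 2
X = Z                                                   # OLS case: instruments = regressors
y = X @ np.array([1.0, 0.5]) + rng.normal(size=T) * (1.0 + np.abs(Z[:, 1]))
b = np.linalg.solve(X.T @ X, X.T @ y)
u = y - X @ b

# Omega_W = (1/(T-k)) * sum_t u_t^2 z_t z_t', equation (23.31) with scalar u_t
Omega_W = (Z * (u**2)[:, None]).T @ Z / (T - k)
```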
Heteroskedasticity and Autocorrelation Consistent (HAC) Covariance Matrix
If you choose the GMM-Time series option, EViews estimates $\Omega$ by,

$$ \hat{\Omega}_{HAC} = \hat{\Gamma}(0) + \sum_{j=1}^{T-1} \kappa(j, q)\big(\hat{\Gamma}(j) + \hat{\Gamma}'(j)\big) \tag{23.32} $$

where:

$$ \hat{\Gamma}(j) = \frac{1}{T-k}\sum_{t=j+1}^{T} Z_{t-j}' u_{t-j} u_t' Z_t . \tag{23.33} $$

You also need to specify the kernel $\kappa$ and the bandwidth $q$.
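The sums in (23.32) and (23.33) can be computed directly from the per-period moments $m_t = Z_t' u_t$. Below is a hedged NumPy sketch on synthetic serially correlated data, using the Bartlett weights described in the kernel options that follow; it is an illustration, not EViews code.

```python
import numpy as np

rng = np.random.default_rng(7)
T, k, q = 300, 2, 4
Z = np.column_stack([np.ones(T), rng.normal(size=T)])
u = np.convolve(rng.normal(size=T + 2), np.ones(3) / 3, mode="valid")  # MA(2) residuals
M = Z * u[:, None]                      # T x p matrix of per-period moments

def gamma(j):
    # Gamma_hat(j) = (1/(T-k)) sum_{t=j+1}^{T} m_{t-j} m_t', equation (23.33)
    return M[: T - j].T @ M[j:] / (T - k)

Omega = gamma(0)
for j in range(1, q + 1):
    w = 1.0 - j / q                     # Bartlett weight kappa(j, q)
    Omega = Omega + w * (gamma(j) + gamma(j).T)
```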
Kernel Options
The kernel $\kappa$ is used to weight the covariances so that $\hat{\Omega}$ is ensured to be positive semidefinite. EViews provides two choices for the kernel, Bartlett and quadratic spectral (QS). The Bartlett kernel is given by,

$$
\kappa(j, q) = \begin{cases}
1 - (j/q) & 0 \le j \le q \\
0 & \text{otherwise}
\end{cases}
\tag{23.34}
$$

while the quadratic spectral (QS) kernel is given by:

$$
\kappa(j/q) = \frac{25}{12(\pi x)^2}\left(\frac{\sin(6\pi x/5)}{6\pi x/5} - \cos(6\pi x/5)\right)
\tag{23.35}
$$

where $x = j/q$. The QS has a faster rate of convergence than the Bartlett and is smooth and not truncated (Andrews 1991). Note that even though the QS kernel is not truncated, it still depends on the bandwidth $q$ (which need not be an integer).
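Both kernels are simple scalar functions. The following hedged Python sketch of (23.34) and (23.35) also handles the removable singularity of the QS expression at $x = 0$, where its limit is 1:

```python
import math

def bartlett(j, q):
    # Equation (23.34): linear decay on 0 <= j <= q, zero beyond
    return 1.0 - j / q if 0 <= j <= q else 0.0

def quadratic_spectral(j, q):
    # Equation (23.35), with x = j/q
    x = j / q
    if x == 0.0:
        return 1.0                      # limit of the expression as x -> 0
    a = 6.0 * math.pi * x / 5.0
    return 25.0 / (12.0 * (math.pi * x) ** 2) * (math.sin(a) / a - math.cos(a))
```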
Bandwidth Selection
The bandwidth q determines how the weights given by the kernel change with the lags in the estimation of Ω . Newey-West fixed bandwidth is based solely on the number of observations in the sample and is given by:
$$ q = \operatorname{int}\big(4(T/100)^{2/9}\big) \tag{23.36} $$

where $\operatorname{int}(\cdot)$ denotes the integer part of the argument.
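The fixed rule (23.36) is a one-liner; the sketch below is a hedged Python illustration, not EViews code:

```python
# q = int(4 (T/100)^{2/9}), equation (23.36)
def nw_fixed_bandwidth(T):
    return int(4.0 * (T / 100.0) ** (2.0 / 9.0))
```

For example, T = 100 gives q = 4.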
EViews also provides two “automatic”, or data dependent bandwidth selection methods that are based on the autocorrelations in the data. Both methods select the bandwidth according to:
$$
q = \begin{cases}
\operatorname{int}\!\big(1.1447\,(\hat{\alpha}(1)\,T)^{1/3}\big) & \text{for the Bartlett kernel} \\
1.3221\,(\hat{\alpha}(2)\,T)^{1/5} & \text{for the QS kernel}
\end{cases}
\tag{23.37}
$$

The two methods, Andrews and Variable-Newey-West, differ in how they estimate $\hat{\alpha}(1)$ and $\hat{\alpha}(2)$.
Andrews (1991) is a parametric method that assumes the sample moments follow an AR(1) process. We first fit an AR(1) to each sample moment (23.29) and estimate the autocorrelation coefficients $\hat{\rho}_i$ and the residual variances $\hat{\sigma}_i^2$ for $i = 1, 2, \ldots, zn$, where $z$ is the number of instrumental variables and $n$ is the number of equations in the system.
Then $\hat{\alpha}(1)$ and $\hat{\alpha}(2)$ are estimated by:

$$
\begin{aligned}
\hat{\alpha}(1) &= \sum_{i=1}^{zn} \frac{4\hat{\rho}_i^2 \hat{\sigma}_i^4}{(1-\hat{\rho}_i)^6 (1+\hat{\rho}_i)^2} \Bigg/ \sum_{i=1}^{zn} \frac{\hat{\sigma}_i^4}{(1-\hat{\rho}_i)^4} \\
\hat{\alpha}(2) &= \sum_{i=1}^{zn} \frac{4\hat{\rho}_i^2 \hat{\sigma}_i^4}{(1-\hat{\rho}_i)^8} \Bigg/ \sum_{i=1}^{zn} \frac{\hat{\sigma}_i^4}{(1-\hat{\rho}_i)^4}
\end{aligned}
\tag{23.38}
$$

Note that we weight all moments equally, including the moment corresponding to the constant.
Newey-West (1994) is a nonparametric method based on a truncated weighted sum of the estimated cross-moments $\hat{\Gamma}(j)$. $\hat{\alpha}(1)$ and $\hat{\alpha}(2)$ are estimated by,

$$ \hat{\alpha}(p) = \frac{l'\hat{F}(p)\,l}{l'\hat{F}(0)\,l} \tag{23.39} $$

where $l$ is a vector of ones and:

$$ \hat{F}(p) = \hat{\Gamma}(0) + \sum_{i=1}^{L} i^p \big(\hat{\Gamma}(i) + \hat{\Gamma}'(i)\big) , \tag{23.40} $$

for $p = 1, 2$.
One practical problem with the Newey-West method is that we have to choose a lag selection parameter L . The choice of L is arbitrary, subject to the condition that it grow at a certain rate. EViews sets the lag parameter to
$$
L = \begin{cases}
\operatorname{int}\big(4(T/100)^{2/9}\big) & \text{for the Bartlett kernel} \\
T & \text{for the QS kernel}
\end{cases}
\tag{23.41}
$$
Prewhitening
You can also choose to prewhiten the sample moments m to “soak up” the correlations in m prior to GMM estimation. We first fit a VAR(1) to the sample moments:
$$ m_t = A m_{t-1} + v_t . \tag{23.42} $$

Then the variance $\Omega$ of $m$ is estimated by $\hat{\Omega} = (I - \hat{A})^{-1} \hat{\Omega}^* (I - \hat{A}')^{-1}$, where $\hat{\Omega}^*$ is the variance of the residuals $v_t$ and is computed using any of the above methods. The GMM estimator is then found by minimizing the criterion function:

$$ u'Z \hat{\Omega}^{-1} Z'u \tag{23.43} $$
Note that while Andrews and Monahan (1992) adjust the VAR estimates to avoid singularity when the moments are near unit root processes, EViews does not perform this eigenvalue adjustment.
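The prewhitening step in (23.42) is an ordinary VAR(1) fit to the sample moments, followed by recoloring. Below is a hedged NumPy sketch on simulated persistent moments; it is an illustration, not EViews code, and (as noted above) no eigenvalue adjustment is attempted.

```python
import numpy as np

rng = np.random.default_rng(8)
T, p = 400, 2
A_true = np.array([[0.5, 0.1], [0.0, 0.3]])
m = np.zeros((T, p))
for t in range(1, T):                      # simulate persistent sample moments
    m[t] = m[t - 1] @ A_true.T + rng.normal(size=p)

# Fit the VAR(1) by multivariate least squares: A' = (M0'M0)^{-1} M0'M1
M0, M1 = m[:-1], m[1:]
A_hat = np.linalg.solve(M0.T @ M0, M0.T @ M1).T
v = M1 - M0 @ A_hat.T                      # prewhitened residuals
Omega_star = v.T @ v / len(v)              # simple variance of v_t (no HAC step here)

# Recolor: Omega_hat = (I - A)^{-1} Omega* (I - A')^{-1}
IA = np.linalg.inv(np.eye(p) - A_hat)
Omega_hat = IA @ Omega_star @ IA.T
```

In the full procedure, `Omega_star` would itself be estimated with one of the kernel methods above before recoloring.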