- •Table of Contents
- •What’s New in EViews 5.0
- •What’s New in 5.0
- •Compatibility Notes
- •EViews 5.1 Update Overview
- •Overview of EViews 5.1 New Features
- •Preface
- •Part I. EViews Fundamentals
- •Chapter 1. Introduction
- •What is EViews?
- •Installing and Running EViews
- •Windows Basics
- •The EViews Window
- •Closing EViews
- •Where to Go For Help
- •Chapter 2. A Demonstration
- •Getting Data into EViews
- •Examining the Data
- •Estimating a Regression Model
- •Specification and Hypothesis Tests
- •Modifying the Equation
- •Forecasting from an Estimated Equation
- •Additional Testing
- •Chapter 3. Workfile Basics
- •What is a Workfile?
- •Creating a Workfile
- •The Workfile Window
- •Saving a Workfile
- •Loading a Workfile
- •Multi-page Workfiles
- •Addendum: File Dialog Features
- •Chapter 4. Object Basics
- •What is an Object?
- •Basic Object Operations
- •The Object Window
- •Working with Objects
- •Chapter 5. Basic Data Handling
- •Data Objects
- •Samples
- •Sample Objects
- •Importing Data
- •Exporting Data
- •Frequency Conversion
- •Importing ASCII Text Files
- •Chapter 6. Working with Data
- •Numeric Expressions
- •Series
- •Auto-series
- •Groups
- •Scalars
- •Chapter 7. Working with Data (Advanced)
- •Auto-Updating Series
- •Alpha Series
- •Date Series
- •Value Maps
- •Chapter 8. Series Links
- •Basic Link Concepts
- •Creating a Link
- •Working with Links
- •Chapter 9. Advanced Workfiles
- •Structuring a Workfile
- •Resizing a Workfile
- •Appending to a Workfile
- •Contracting a Workfile
- •Copying from a Workfile
- •Reshaping a Workfile
- •Sorting a Workfile
- •Exporting from a Workfile
- •Chapter 10. EViews Databases
- •Database Overview
- •Database Basics
- •Working with Objects in Databases
- •Database Auto-Series
- •The Database Registry
- •Querying the Database
- •Object Aliases and Illegal Names
- •Maintaining the Database
- •Foreign Format Databases
- •Working with DRIPro Links
- •Part II. Basic Data Analysis
- •Chapter 11. Series
- •Series Views Overview
- •Spreadsheet and Graph Views
- •Descriptive Statistics
- •Tests for Descriptive Stats
- •Distribution Graphs
- •One-Way Tabulation
- •Correlogram
- •Unit Root Test
- •BDS Test
- •Properties
- •Label
- •Series Procs Overview
- •Generate by Equation
- •Resample
- •Seasonal Adjustment
- •Exponential Smoothing
- •Hodrick-Prescott Filter
- •Frequency (Band-Pass) Filter
- •Chapter 12. Groups
- •Group Views Overview
- •Group Members
- •Spreadsheet
- •Dated Data Table
- •Graphs
- •Multiple Graphs
- •Descriptive Statistics
- •Tests of Equality
- •N-Way Tabulation
- •Principal Components
- •Correlations, Covariances, and Correlograms
- •Cross Correlations and Correlograms
- •Cointegration Test
- •Unit Root Test
- •Granger Causality
- •Label
- •Group Procedures Overview
- •Chapter 13. Statistical Graphs from Series and Groups
- •Distribution Graphs of Series
- •Scatter Diagrams with Fit Lines
- •Boxplots
- •Chapter 14. Graphs, Tables, and Text Objects
- •Creating Graphs
- •Modifying Graphs
- •Multiple Graphs
- •Printing Graphs
- •Copying Graphs to the Clipboard
- •Saving Graphs to a File
- •Graph Commands
- •Creating Tables
- •Table Basics
- •Basic Table Customization
- •Customizing Table Cells
- •Copying Tables to the Clipboard
- •Saving Tables to a File
- •Table Commands
- •Text Objects
- •Part III. Basic Single Equation Analysis
- •Chapter 15. Basic Regression
- •Equation Objects
- •Specifying an Equation in EViews
- •Estimating an Equation in EViews
- •Equation Output
- •Working with Equations
- •Estimation Problems
- •Chapter 16. Additional Regression Methods
- •Special Equation Terms
- •Weighted Least Squares
- •Heteroskedasticity and Autocorrelation Consistent Covariances
- •Two-stage Least Squares
- •Nonlinear Least Squares
- •Generalized Method of Moments (GMM)
- •Chapter 17. Time Series Regression
- •Serial Correlation Theory
- •Testing for Serial Correlation
- •Estimating AR Models
- •ARIMA Theory
- •Estimating ARIMA Models
- •ARMA Equation Diagnostics
- •Nonstationary Time Series
- •Unit Root Tests
- •Panel Unit Root Tests
- •Chapter 18. Forecasting from an Equation
- •Forecasting from Equations in EViews
- •An Illustration
- •Forecast Basics
- •Forecasting with ARMA Errors
- •Forecasting from Equations with Expressions
- •Forecasting with Expression and PDL Specifications
- •Chapter 19. Specification and Diagnostic Tests
- •Background
- •Coefficient Tests
- •Residual Tests
- •Specification and Stability Tests
- •Applications
- •Part IV. Advanced Single Equation Analysis
- •Chapter 20. ARCH and GARCH Estimation
- •Basic ARCH Specifications
- •Estimating ARCH Models in EViews
- •Working with ARCH Models
- •Additional ARCH Models
- •Examples
- •Binary Dependent Variable Models
- •Estimating Binary Models in EViews
- •Procedures for Binary Equations
- •Ordered Dependent Variable Models
- •Estimating Ordered Models in EViews
- •Views of Ordered Equations
- •Procedures for Ordered Equations
- •Censored Regression Models
- •Estimating Censored Models in EViews
- •Procedures for Censored Equations
- •Truncated Regression Models
- •Procedures for Truncated Equations
- •Count Models
- •Views of Count Models
- •Procedures for Count Models
- •Demonstrations
- •Technical Notes
- •Chapter 22. The Log Likelihood (LogL) Object
- •Overview
- •Specification
- •Estimation
- •LogL Views
- •LogL Procs
- •Troubleshooting
- •Limitations
- •Examples
- •Part V. Multiple Equation Analysis
- •Chapter 23. System Estimation
- •Background
- •System Estimation Methods
- •How to Create and Specify a System
- •Working With Systems
- •Technical Discussion
- •Vector Autoregressions (VARs)
- •Estimating a VAR in EViews
- •VAR Estimation Output
- •Views and Procs of a VAR
- •Structural (Identified) VARs
- •Cointegration Test
- •Vector Error Correction (VEC) Models
- •A Note on Version Compatibility
- •Chapter 25. State Space Models and the Kalman Filter
- •Background
- •Specifying a State Space Model in EViews
- •Working with the State Space
- •Converting from Version 3 Sspace
- •Technical Discussion
- •Chapter 26. Models
- •Overview
- •An Example Model
- •Building a Model
- •Working with the Model Structure
- •Specifying Scenarios
- •Using Add Factors
- •Solving the Model
- •Working with the Model Data
- •Part VI. Panel and Pooled Data
- •Chapter 27. Pooled Time Series, Cross-Section Data
- •The Pool Workfile
- •The Pool Object
- •Pooled Data
- •Setting up a Pool Workfile
- •Working with Pooled Data
- •Pooled Estimation
- •Chapter 28. Working with Panel Data
- •Structuring a Panel Workfile
- •Panel Workfile Display
- •Panel Workfile Information
- •Working with Panel Data
- •Basic Panel Analysis
- •Chapter 29. Panel Estimation
- •Estimating a Panel Equation
- •Panel Estimation Examples
- •Panel Equation Testing
- •Estimation Background
- •Appendix A. Global Options
- •The Options Menu
- •Print Setup
- •Appendix B. Wildcards
- •Wildcard Expressions
- •Using Wildcard Expressions
- •Source and Destination Patterns
- •Resolving Ambiguities
- •Wildcard versus Pool Identifier
- •Appendix C. Estimation and Solution Options
- •Setting Estimation Options
- •Optimization Algorithms
- •Nonlinear Equation Solution Methods
- •Appendix D. Gradients and Derivatives
- •Gradients
- •Derivatives
- •Appendix E. Information Criteria
- •Definitions
- •Using Information Criteria as a Guide to Model Selection
- •References
- •Index
- •Symbols
- •.DB? files 266
- •.EDB file 262
- •.RTF file 437
- •.WF1 file 62
- •@obsnum
- •Panel
- •@unmaptxt 174
- •~, in backup file name 62, 939
- •Numerics
- •3sls (three-stage least squares) 697, 716
- •Abort key 21
- •ARIMA models 501
- •ASCII
- •file export 115
- •ASCII file
- •See also Unit root tests.
- •Auto-search
- •Auto-series
- •in groups 144
- •Auto-updating series
- •and databases 152
- •Backcast
- •Berndt-Hall-Hall-Hausman (BHHH). See Optimization algorithms.
- •Bias proportion 554
- •fitted index 634
- •Binning option
- •classifications 313, 382
- •Boxplots 409
- •By-group statistics 312, 886, 893
- •coef vector 444
- •Causality
- •Granger's test 389
- •scale factor 649
- •Census X11
- •Census X12 337
- •Chi-square
- •Cholesky factor
- •Classification table
- •Close
- •Coef (coefficient vector)
- •default 444
- •Coefficient
- •Comparison operators
- •Conditional standard deviation
- •graph 610
- •Confidence interval
- •Constant
- •Copy
- •data cut-and-paste 107
- •table to clipboard 437
- •Covariance matrix
- •HAC (Newey-West) 473
- •heteroskedasticity consistent of estimated coefficients 472
- •Create
- •Cross-equation
- •Tukey option 393
- •CUSUM
- •sum of recursive residuals test 589
- •sum of recursive squared residuals test 590
- •Data
- •Database
- •link options 303
- •using auto-updating series with 152
- •Dates
- •Default
- •database 24, 266
- •set directory 71
- •Dependent variable
- •Description
- •Descriptive statistics
- •by group 312
- •group 379
- •individual samples (group) 379
- •Display format
- •Display name
- •Distribution
- •Dummy variables
- •for regression 452
- •lagged dependent variable 495
- •Dynamic forecasting 556
- •Edit
- •See also Unit root tests.
- •Equation
- •create 443
- •store 458
- •Estimation
- •EViews
- •Excel file
- •Excel files
- •Expectation-prediction table
- •Expected dependent variable
- •double 352
- •Export data 114
- •Extreme value
- •binary model 624
- •Fetch
- •File
- •save table to 438
- •Files
- •Fitted index
- •Fitted values
- •Font options
- •Fonts
- •Forecast
- •evaluation 553
- •Foreign data
- •Formula
- •forecast 561
- •Freq
- •DRI database 303
- •F-test
- •for variance equality 321
- •Full information maximum likelihood 698
- •GARCH 601
- •ARCH-M model 603
- •variance factor 668
- •system 716
- •Goodness-of-fit
- •Gradients 963
- •Graph
- •remove elements 423
- •Groups
- •display format 94
- •Groupwise heteroskedasticity 380
- •Help
- •Heteroskedasticity and autocorrelation consistent covariance (HAC) 473
- •History
- •Holt-Winters
- •Hypothesis tests
- •F-test 321
- •Identification
- •Identity
- •Import
- •Import data
- •See also VAR.
- •Index
- •Insert
- •Instruments 474
- •Iteration
- •Iteration option 953
- •in nonlinear least squares 483
- •J-statistic 491
- •J-test 596
- •Kernel
- •bivariate fit 405
- •choice in HAC weighting 704, 718
- •Kernel function
- •Keyboard
- •Kwiatkowski, Phillips, Schmidt, and Shin test 525
- •Label 82
- •Last_update
- •Last_write
- •Latent variable
- •Lead
- •make covariance matrix 643
- •List
- •LM test
- •ARCH 582
- •for binary models 622
- •LOWESS. See also LOESS
- •in ARIMA models 501
- •Mean absolute error 553
- •Metafile
- •Micro TSP
- •recoding 137
- •Models
- •add factors 777, 802
- •solving 804
- •Mouse 18
- •Multicollinearity 460
- •Name
- •Newey-West
- •Nonlinear coefficient restriction
- •Wald test 575
- •weighted two stage 486
- •Normal distribution
- •Numbers
- •chi-square tests 383
- •Object 73
- •Open
- •Option setting
- •Option settings
- •Or operator 98, 133
- •Ordinary residual
- •Panel
- •irregular 214
- •unit root tests 530
- •Paste 83
- •PcGive data 293
- •Polynomial distributed lag
- •Pool
- •Pool (object)
- •PostScript
- •Prediction table
- •Principal components 385
- •Program
- •p-value 569
- •for coefficient t-statistic 450
- •Quiet mode 939
- •RATS data
- •Read 832
- •CUSUM 589
- •Regression
- •Relational operators
- •Remarks
- •database 287
- •Residuals
- •Resize
- •Results
- •RichText Format
- •Robust standard errors
- •Robustness iterations
- •for regression 451
- •with AR specification 500
- •workfile 95
- •Save
- •Seasonal
- •Seasonal graphs 310
- •Select
- •single item 20
- •Serial correlation
- •theory 493
- •Series
- •Smoothing
- •Solve
- •Source
- •Specification test
- •Spreadsheet
- •Standard error
- •binary models 634
- •Start
- •Starting values
- •Summary statistics
- •for regression variables 451
- •System
- •Table 429
- •font 434
- •Tabulation
- •Template 424
- •Tests. See also Hypothesis tests, Specification test and Goodness of fit.
- •Text file
- •open as workfile 54
- •Type
- •field in database query 282
- •Units
- •Update
- •Valmap
- •find label for value 173
- •find numeric value for label 174
- •Value maps 163
- •estimating 749
- •View
- •Wald test 572
- •nonlinear restriction 575
- •Watson test 323
- •Weighting matrix
- •heteroskedasticity and autocorrelation consistent (HAC) 718
- •kernel options 718
- •White
- •Window
- •Workfile
- •storage defaults 940
- •Write 844
- •XY line
- •Yates' continuity correction 321
480—Chapter 16. Additional Regression Methods
Note that EViews augments the instrument list by adding lagged dependent and regressor variables. Note, however, that each MA term involves an infinite number of AR terms. Since it is clearly impossible to add an infinite number of lags to the instrument list, EViews performs an ad hoc approximation by adding a truncated set of instruments involving the MA order and an additional lag. If, for example, you have an MA(5), EViews will add lagged instruments corresponding to lags 5 and 6.
Nonlinear Least Squares
Suppose that we have the regression specification:

y_t = f(x_t, β) + ε_t,    (16.26)
where f is a general function of the explanatory variables xt and the parameters β . Least squares estimation chooses the parameter values that minimize the sum of squared residuals:
S(β) = Σ_t (y_t − f(x_t, β))² = (y − f(X, β))′(y − f(X, β))    (16.27)
We say that a model is linear in parameters if the derivatives of f with respect to the parameters do not depend upon β ; if the derivatives are functions of β , we say that the model is nonlinear in parameters.
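This definition can be checked numerically: compute the derivative of f with respect to a parameter by central differences at two different parameter vectors and see whether it changes. The sketch below is plain Python, not EViews syntax, and the model functions and values are invented for illustration:

```python
import math

def dfdb(f, x, beta, i, h=1e-6):
    """Central-difference derivative of f(x, beta) with respect to beta[i]."""
    up = list(beta); up[i] += h
    dn = list(beta); dn[i] -= h
    return (f(x, up) - f(x, dn)) / (2 * h)

lin = lambda L, b: b[0] + b[1] * math.log(L)   # linear in parameters
nl  = lambda L, b: b[0] * L ** b[1]            # nonlinear in parameters

# d(lin)/db[1] = log(L), the same at any parameter vector ...
assert abs(dfdb(lin, 2.0, [1.0, 1.0], 1) - dfdb(lin, 2.0, [5.0, -3.0], 1)) < 1e-6
# ... while d(nl)/db[0] = L**b[1] changes when b[1] changes.
assert abs(dfdb(nl, 2.0, [1.0, 1.0], 0) - dfdb(nl, 2.0, [1.0, 2.0], 0)) > 0.5
```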
For example, consider the model given by:

y_t = β₁ + β₂ log L_t + β₃ log K_t + ε_t.    (16.28)
It is easy to see that this model is linear in its parameters, implying that it can be estimated using ordinary least squares.
In contrast, the equation specification:

y_t = β₁ L_t^β₂ K_t^β₃ + ε_t    (16.29)
has derivatives that depend upon the elements of β. There is no way to rearrange the terms in this model so that ordinary least squares can be used to minimize the sum-of-squared residuals. We must use nonlinear least squares techniques to estimate the parameters of the model.
Nonlinear least squares minimizes the sum-of-squared residuals with respect to the choice of parameters β . While there is no closed form solution for the parameter estimates, the estimates satisfy the first-order conditions:
(G(β))′(y − f(X, β)) = 0,    (16.30)
where G(β) is the matrix of first derivatives of f(X, β) with respect to β (to simplify notation we suppress the dependence of G upon X). The estimated covariance matrix is given by:

Σ̂_NLLS = s²(G(b_NLLS)′G(b_NLLS))⁻¹,    (16.31)
where b_NLLS are the estimated parameters. For additional discussion of nonlinear estimation, see Pindyck and Rubinfeld (1991, pp. 231–245) or Davidson and MacKinnon (1993).
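For intuition about how first-order conditions like (16.30) are solved, here is a minimal Gauss-Newton iteration in Python for the invented model y = c1·exp(c2·x), with analytic derivatives and a hand-rolled 2×2 solve. This is a sketch of the general technique under made-up data, not EViews' actual algorithm (which is discussed in Appendix C):

```python
import math

def gauss_newton(x, y, c1, c2, tol=1e-10, max_iter=50):
    """Minimal Gauss-Newton for the illustrative model y = c1*exp(c2*x) + e."""
    for _ in range(max_iter):
        u  = [yi - c1 * math.exp(c2 * xi) for xi, yi in zip(x, y)]   # residuals
        g1 = [math.exp(c2 * xi) for xi in x]                         # df/dc1
        g2 = [c1 * xi * math.exp(c2 * xi) for xi in x]               # df/dc2
        # Solve the normal equations (G'G) d = G'u via an explicit 2x2 inverse.
        s11 = sum(v * v for v in g1)
        s12 = sum(v * w for v, w in zip(g1, g2))
        s22 = sum(v * v for v in g2)
        r1  = sum(v * w for v, w in zip(g1, u))
        r2  = sum(v * w for v, w in zip(g2, u))
        det = s11 * s22 - s12 * s12
        d1  = (s22 * r1 - s12 * r2) / det
        d2  = (s11 * r2 - s12 * r1) / det
        c1, c2 = c1 + d1, c2 + d2
        if max(abs(d1), abs(d2)) < tol:      # simple convergence test
            break
    return c1, c2

# Noise-free data generated with c1 = 2, c2 = 0.5; start from a rough guess.
xs = [i / 10 for i in range(10)]
ys = [2.0 * math.exp(0.5 * xi) for xi in xs]
c1, c2 = gauss_newton(xs, ys, 1.5, 0.4)
assert abs(c1 - 2.0) < 1e-6 and abs(c2 - 0.5) < 1e-6
```

Near a solution with small residuals, steps like these converge rapidly, which is one reason the choice of starting values (discussed below) matters so much.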
Estimating NLS Models in EViews
It is easy to tell EViews that you wish to estimate the parameters of a model using nonlinear least squares. EViews automatically applies nonlinear least squares to any regression equation that is nonlinear in its coefficients. Simply select Object/New Object.../Equation, enter the equation in the equation specification dialog box, and click OK. EViews will do all of the work of estimating your model using an iterative algorithm.
A full technical discussion of iterative estimation procedures is provided in Appendix C, “Estimation and Solution Options”, beginning on page 951.
Specifying Nonlinear Least Squares
For nonlinear regression models, you will have to enter your specification in equation form using EViews expressions that contain direct references to coefficients. You may use elements of the default coefficient vector C (e.g. C(1), C(2), C(34), C(87)), or you can define and use other coefficient vectors. For example:
y = c(1) + c(2)*(k^c(3)+l^c(4))
is a nonlinear specification that uses the first through the fourth elements of the default coefficient vector, C.
To create a new coefficient vector, select Object/New Object.../Matrix-Vector-Coef/Coefficient Vector in the main menu and provide a name. You may now use this coefficient vector in your specification. For example, if you create a coefficient vector named CF, you can rewrite the specification above as:
y = cf(11) + cf(12)*(k^cf(13)+l^cf(14))
which uses the eleventh through the fourteenth elements of CF. You can also use multiple coefficient vectors in your specification:
y = c(11) + c(12)*(k^cf(1)+l^cf(2))
which uses both C and CF in the specification.
It is worth noting that EViews implicitly adds an additive disturbance to your specification. For example, the input
y = (c(1)*x + c(2)*z + 4)^2
is interpreted as y_t = (c(1)·x_t + c(2)·z_t + 4)² + ε_t, and EViews will minimize:

S(c(1), c(2)) = Σ_t (y_t − (c(1)·x_t + c(2)·z_t + 4)²)²    (16.32)
If you wish, the equation specification may be given by a simple expression that does not include a dependent variable. For example, the input,
(c(1)*x + c(2)*z + 4)^2
is interpreted by EViews as −(c(1)·x_t + c(2)·z_t + 4)² = ε_t, and EViews will minimize:

S(c(1), c(2)) = Σ_t (−(c(1)·x_t + c(2)·z_t + 4)²)²    (16.33)
While EViews will estimate the parameters of this last specification, the equation cannot be used for forecasting and cannot be included in a model. This restriction also holds for any equation that includes coefficients to the left of the equal sign. For example, if you specify,
x + c(1)*y = z^c(2)
EViews will find the values of C(1) and C(2) that minimize the sum of squares of the implicit equation:
x_t + c(1)·y_t − z_t^c(2) = ε_t    (16.34)
The estimated equation cannot be used in forecasting or included in a model, since there is no dependent variable.
Estimation Options
Starting Values. Iterative estimation procedures require starting values for the coefficients of the model. There are no general rules for selecting starting values for parameters. The closer to the true values the better, so if you have reasonable guesses for parameter values, these can be useful. In some cases, you can obtain good starting values by estimating a restricted version of the model using least squares. In general, however, you will have to experiment in order to find starting values.
EViews uses the values in the coefficient vector at the time you begin the estimation procedure as starting values for the iterative procedure. It is easy to examine and change these coefficient starting values.
To see the starting values, double click on the coefficient vector in the workfile directory. If the values appear to be reasonable, you can close the window and proceed with estimating your model.
If you wish to change the starting values, first make certain that the spreadsheet view of your coefficients is in edit mode, then enter the coefficient values. When you are finished setting the initial values, close the coefficient vector window and estimate your model.
You may also set starting coefficient values from the command window using the PARAM command. Simply enter the PARAM keyword, followed by each coefficient and desired value:
param c(1) 153 c(2) .68 c(3) .15
sets C(1)=153, C(2)=.68, and C(3)=.15.
See Appendix C, “Estimation and Solution Options” on page 951, for further details.
Derivative Methods. Estimation in EViews requires computation of the derivatives of the regression function with respect to the parameters. EViews provides you with the option of computing analytic expressions for these derivatives (if possible), or computing finite difference numeric derivatives in cases where the derivative is not constant. Furthermore, if numeric derivatives are computed, you can choose whether to favor speed of computation (fewer function evaluations) or whether to favor accuracy (more function evaluations).
Additional issues associated with ARIMA models are discussed in “Estimation Options” on page 509.
Iteration and Convergence Options. You can control the iterative process by specifying the convergence criterion and the maximum number of iterations. Press the Options button in the equation dialog box and enter the desired values.
EViews will report that the estimation procedure has converged if the convergence test value is below your convergence tolerance. See “Iteration and Convergence Options” on page 953 for details.
In most cases, you will not need to change the maximum number of iterations. However, for some difficult-to-estimate models, the iterative procedure will not converge within the maximum number of iterations. If your model does not converge within the allotted number of iterations, simply click on the Estimate button and, if desired, increase the maximum number of iterations. Click on OK to accept the options, and click on OK to begin estimation. EViews will start estimation using the last set of parameter values as starting values.
These options may also be set from the global options dialog. See Appendix A, “Estimation Defaults” on page 941.
Output from NLS
Once your model has been estimated, EViews displays an equation output screen showing the results of the nonlinear least squares procedure. Below is the output from a regression of LOG(CS) on C and the Box-Cox transform of GDP:
Dependent Variable: LOG(CS)
Method: Least Squares
Date: 10/15/97 Time: 11:51
Sample(adjusted): 1947:1 1995:1
Included observations: 193 after adjusting endpoints
Convergence achieved after 80 iterations
LOG(CS)= C(1)+C(2)*(GDP^C(3)-1)/C(3)
                      Coefficient    Std. Error    t-Statistic    Prob.

C(1)                    2.851780      0.279033      10.22024      0.0000
C(2)                    0.257592      0.041147      6.260254      0.0000
C(3)                    0.182959      0.020201      9.056824      0.0000

R-squared               0.997252    Mean dependent var        7.476058
Adjusted R-squared      0.997223    S.D. dependent var        0.465503
S.E. of regression      0.024532    Akaike info criterion    -4.562220
Sum squared resid       0.114350    Schwarz criterion        -4.511505
Log likelihood          443.2542    F-statistic               34469.84
Durbin-Watson stat      0.134628    Prob(F-statistic)         0.000000
If the estimation procedure has converged, EViews will report this fact, along with the number of iterations that were required. If the iterative procedure did not converge, EViews will report “Convergence not achieved after” followed by the number of iterations attempted.
Below the line describing convergence, EViews will repeat the nonlinear specification so that you can easily interpret the estimated coefficients of your model.
EViews provides you with all of the usual summary statistics for regression models. Provided that your model has converged, the standard statistical results and tests are asymptotically valid.
Weighted NLS
Weights can be used in nonlinear estimation in a manner analogous to weighted linear least squares. To estimate an equation using weighted nonlinear least squares, enter your specification, press the Options button and click on the Weighted LS/TSLS option. Fill in the blank after Weight: with the name of the weight series and then estimate the equation.
EViews minimizes the sum of the weighted squared residuals:
S(β) = Σ_t w_t² (y_t − f(x_t, β))² = (y − f(X, β))′W′W(y − f(X, β))    (16.35)
with respect to the parameters β, where w_t are the values of the weight series and W is the matrix of weights. The first-order conditions are given by,
(G(β))′W′W(y − f(X, β)) = 0    (16.36)
and the covariance estimate is computed as:
Σ̂_WNLLS = s²(G(b_WNLLS)′W′W G(b_WNLLS))⁻¹.    (16.37)
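One way to see what the weighted objective (16.35) does: it is simply the ordinary sum of squares of the scaled residuals w_t·(y_t − f(x_t, β)), so a weighted problem can be handed to an unweighted NLS routine after scaling. A small Python check, where the regression function and all data are made up for illustration:

```python
# Illustrative regression function (not from the manual).
def f(x, b):
    return b[0] + x ** b[1]

xs   = [1.0, 2.0, 3.0, 4.0]   # invented data
ys   = [2.1, 3.0, 4.2, 5.1]
ws   = [1.0, 0.5, 2.0, 1.5]   # weight series
beta = [1.0, 0.8]             # arbitrary trial parameters

# Weighted SSR as in (16.35) ...
s_w = sum(w ** 2 * (y - f(x, beta)) ** 2 for x, y, w in zip(xs, ys, ws))
# ... equals the plain SSR of the scaled series w*y versus w*f(x, beta).
s_u = sum((w * y - w * f(x, beta)) ** 2 for x, y, w in zip(xs, ys, ws))
assert abs(s_w - s_u) < 1e-9
```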
NLS with AR errors
EViews will estimate nonlinear regression models with autoregressive error terms. Simply select Object/New Object.../Equation… or Quick/Estimate Equation… and specify your model using EViews expressions, followed by an additive term describing the AR correction enclosed in square brackets. The AR term should consist of a coefficient assignment for each AR term, separated by commas. For example, if you wish to estimate,
CS_t = c₁ + GDP_t^c₂ + u_t    (16.38)
u_t = c₃ u_{t−1} + c₄ u_{t−2} + ε_t
you should enter the specification:
cs = c(1) + gdp^c(2) + [ar(1)=c(3), ar(2)=c(4)]
See “How EViews Estimates AR Models” on page 500 for additional details. EViews does not currently estimate nonlinear models with MA errors, nor does it estimate weighted models with AR terms—if you add AR terms to a weighted nonlinear model, the weighting series will be ignored.
Nonlinear TSLS
Nonlinear two-stage least squares refers to an instrumental variables procedure for estimating nonlinear regression models involving functions of endogenous and exogenous variables and parameters. Suppose we have the usual nonlinear regression model:
y_t = f(x_t, β) + ε_t,    (16.39)
where β is a k-dimensional vector of parameters, and x_t contains both exogenous and endogenous variables. In matrix form, if we have m ≥ k instruments z_t, nonlinear two-stage least squares minimizes:
S(β) = (y − f(X, β))′Z(Z′Z)⁻¹Z′(y − f(X, β))    (16.40)
with respect to the choice of β .
While there is no closed form solution for the parameter estimates, the parameter estimates satisfy the first-order conditions:
G(β)′Z(Z′Z)⁻¹Z′(y − f(X, β)) = 0    (16.41)
with estimated covariance given by:
Σ̂_TSNLLS = s²(G(b_TSNLLS)′Z(Z′Z)⁻¹Z′G(b_TSNLLS))⁻¹.    (16.42)
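To make the objective (16.40) concrete, the sketch below evaluates u′Z(Z′Z)⁻¹Z′u for a fixed residual vector u = y − f(X, β) and two instruments (a constant and one variable), with the 2×2 inverse written out by hand. All numbers are invented. Since Z(Z′Z)⁻¹Z′ is a projection matrix, the objective always lies between zero and the ordinary sum of squared residuals:

```python
u = [0.5, -0.2, 0.1, 0.4]                         # residuals at some trial beta
Z = [[1.0, 0.3], [1.0, -0.1], [1.0, 0.7], [1.0, 0.2]]  # constant + 1 instrument
T = len(u)

zu = [sum(Z[t][j] * u[t] for t in range(T)) for j in range(2)]   # Z'u
zz = [[sum(Z[t][i] * Z[t][j] for t in range(T)) for j in range(2)]
      for i in range(2)]                                         # Z'Z
det = zz[0][0] * zz[1][1] - zz[0][1] * zz[1][0]
inv = [[ zz[1][1] / det, -zz[0][1] / det],
       [-zz[1][0] / det,  zz[0][0] / det]]                       # (Z'Z)^{-1}
S = sum(zu[i] * inv[i][j] * zu[j] for i in range(2) for j in range(2))

# Projection bound: 0 <= u'P_Z u <= u'u.
assert 0.0 <= S <= sum(v * v for v in u)
```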
How to Estimate Nonlinear TSLS in EViews
EViews performs the estimation procedure in a single step so that you don’t have to perform the separate stages yourself. Simply select Object/New Object.../Equation… or Quick/Estimate Equation…, choose TSLS from the Method: combo box, enter your nonlinear specification and the list of instruments, and click OK.
With nonlinear two-stage least squares estimation, you have a great deal of flexibility in your choice of instruments. Intuitively, you want instruments that are correlated with G(β). Since G is nonlinear, you may begin to think about using more than just the exogenous and predetermined variables as instruments. Various nonlinear functions of these variables, for example, cross-products and powers, may also be valid instruments. One should be aware, however, of the possible finite sample biases resulting from using too many instruments.
Weighted Nonlinear Two-stage Least Squares
Weights can be used in nonlinear two-stage least squares estimation. Simply add weighting to your nonlinear TSLS specification above by pressing the Options button, selecting Weighted LS/TSLS option, and entering the name of the weight series.
The objective function for weighted TSLS is,
S(β) = (y − f(X, β))′W′WZ(Z′W′WZ)⁻¹Z′W′W(y − f(X, β)).    (16.43)
The reported standard errors are based on the covariance matrix estimate given by:
Σ̂_WTSNLLS = s²(G(b)′W′WZ(Z′W′WZ)⁻¹Z′W′W G(b))⁻¹    (16.44)
where b ≡ b_WTSNLLS. Note that if you add AR or MA terms to a weighted specification, the weighting series will be ignored.
Nonlinear Two-stage Least Squares with AR errors
While we will not go into much detail here, note that EViews can estimate non-linear TSLS models where there are autoregressive error terms. EViews does not currently estimate nonlinear models with MA errors.
To estimate your model, simply open your equation specification window, and enter your nonlinear specification, including all AR terms, and provide your instrument list. For example, you could enter the regression specification:
cs = exp(c(1) + gdp^c(2)) + [ar(1)=c(3)]
with the instrument list:
c gov
EViews will transform the nonlinear regression model as described in “Estimating AR Models” on page 497, and then estimate nonlinear TSLS on the transformed specification using the instruments C and GOV. For nonlinear models with AR errors, EViews uses a Gauss-Newton algorithm. See “Optimization Algorithms” on page 956 for further details.
Solving Estimation Problems
EViews may not be able to estimate your nonlinear equation on the first attempt. Sometimes, the nonlinear least squares procedure will stop immediately. Other times, EViews may stop estimation after several iterations without achieving convergence. EViews might even report that it cannot improve the sum of squares. While there are no specific rules on how to proceed if you encounter these estimation problems, there are a few general areas you might want to examine.
Starting Values
If you experience problems with the very first iteration of a nonlinear procedure, the problem is almost certainly related to starting values. See the discussion above for how to examine and change your starting values.
Model Identification
If EViews goes through a number of iterations and then reports that it encounters a “Near Singular Matrix”, you should check to make certain that your model is identified. Models are said to be non-identified if there are multiple sets of coefficients which identically yield the minimized sum-of-squares value. If this condition holds, it is impossible to choose between the coefficients on the basis of the minimum sum-of-squares criterion.
For example, the nonlinear specification:

y_t = β₁ β₂ + β₂² x_t + ε_t    (16.45)
is not identified, since any coefficient pair (β₁, β₂) is indistinguishable from the pair (−β₁, −β₂) in terms of the sum-of-squared residuals.
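A two-line check of this non-identification, using arbitrary parameter values:

```python
# For (16.45), a parameter pair and its sign-flipped twin produce identical
# fitted values, so the sum of squared residuals cannot distinguish them.
fit = lambda x, b1, b2: b1 * b2 + b2 ** 2 * x

for x in [0.0, 1.0, 2.5, 4.0]:
    assert fit(x, 1.5, -0.7) == fit(x, -1.5, 0.7)
```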
For a thorough discussion of identification of nonlinear least squares models, see Davidson and MacKinnon (1993, Sections 2.3, 5.2 and 6.3).
Convergence Criterion
EViews may report that it is unable to improve the sum of squares. This result may be evidence of non-identification or model misspecification. Alternatively, it may be the result of