- Table of Contents
- What’s New in EViews 5.0
- What’s New in 5.0
- Compatibility Notes
- EViews 5.1 Update Overview
- Overview of EViews 5.1 New Features
- Preface
- Part I. EViews Fundamentals
- Chapter 1. Introduction
- What is EViews?
- Installing and Running EViews
- Windows Basics
- The EViews Window
- Closing EViews
- Where to Go For Help
- Chapter 2. A Demonstration
- Getting Data into EViews
- Examining the Data
- Estimating a Regression Model
- Specification and Hypothesis Tests
- Modifying the Equation
- Forecasting from an Estimated Equation
- Additional Testing
- Chapter 3. Workfile Basics
- What is a Workfile?
- Creating a Workfile
- The Workfile Window
- Saving a Workfile
- Loading a Workfile
- Multi-page Workfiles
- Addendum: File Dialog Features
- Chapter 4. Object Basics
- What is an Object?
- Basic Object Operations
- The Object Window
- Working with Objects
- Chapter 5. Basic Data Handling
- Data Objects
- Samples
- Sample Objects
- Importing Data
- Exporting Data
- Frequency Conversion
- Importing ASCII Text Files
- Chapter 6. Working with Data
- Numeric Expressions
- Series
- Auto-series
- Groups
- Scalars
- Chapter 7. Working with Data (Advanced)
- Auto-Updating Series
- Alpha Series
- Date Series
- Value Maps
- Chapter 8. Series Links
- Basic Link Concepts
- Creating a Link
- Working with Links
- Chapter 9. Advanced Workfiles
- Structuring a Workfile
- Resizing a Workfile
- Appending to a Workfile
- Contracting a Workfile
- Copying from a Workfile
- Reshaping a Workfile
- Sorting a Workfile
- Exporting from a Workfile
- Chapter 10. EViews Databases
- Database Overview
- Database Basics
- Working with Objects in Databases
- Database Auto-Series
- The Database Registry
- Querying the Database
- Object Aliases and Illegal Names
- Maintaining the Database
- Foreign Format Databases
- Working with DRIPro Links
- Part II. Basic Data Analysis
- Chapter 11. Series
- Series Views Overview
- Spreadsheet and Graph Views
- Descriptive Statistics
- Tests for Descriptive Stats
- Distribution Graphs
- One-Way Tabulation
- Correlogram
- Unit Root Test
- BDS Test
- Properties
- Label
- Series Procs Overview
- Generate by Equation
- Resample
- Seasonal Adjustment
- Exponential Smoothing
- Hodrick-Prescott Filter
- Frequency (Band-Pass) Filter
- Chapter 12. Groups
- Group Views Overview
- Group Members
- Spreadsheet
- Dated Data Table
- Graphs
- Multiple Graphs
- Descriptive Statistics
- Tests of Equality
- N-Way Tabulation
- Principal Components
- Correlations, Covariances, and Correlograms
- Cross Correlations and Correlograms
- Cointegration Test
- Unit Root Test
- Granger Causality
- Label
- Group Procedures Overview
- Chapter 13. Statistical Graphs from Series and Groups
- Distribution Graphs of Series
- Scatter Diagrams with Fit Lines
- Boxplots
- Chapter 14. Graphs, Tables, and Text Objects
- Creating Graphs
- Modifying Graphs
- Multiple Graphs
- Printing Graphs
- Copying Graphs to the Clipboard
- Saving Graphs to a File
- Graph Commands
- Creating Tables
- Table Basics
- Basic Table Customization
- Customizing Table Cells
- Copying Tables to the Clipboard
- Saving Tables to a File
- Table Commands
- Text Objects
- Part III. Basic Single Equation Analysis
- Chapter 15. Basic Regression
- Equation Objects
- Specifying an Equation in EViews
- Estimating an Equation in EViews
- Equation Output
- Working with Equations
- Estimation Problems
- Chapter 16. Additional Regression Methods
- Special Equation Terms
- Weighted Least Squares
- Heteroskedasticity and Autocorrelation Consistent Covariances
- Two-stage Least Squares
- Nonlinear Least Squares
- Generalized Method of Moments (GMM)
- Chapter 17. Time Series Regression
- Serial Correlation Theory
- Testing for Serial Correlation
- Estimating AR Models
- ARIMA Theory
- Estimating ARIMA Models
- ARMA Equation Diagnostics
- Nonstationary Time Series
- Unit Root Tests
- Panel Unit Root Tests
- Chapter 18. Forecasting from an Equation
- Forecasting from Equations in EViews
- An Illustration
- Forecast Basics
- Forecasting with ARMA Errors
- Forecasting from Equations with Expressions
- Forecasting with Expression and PDL Specifications
- Chapter 19. Specification and Diagnostic Tests
- Background
- Coefficient Tests
- Residual Tests
- Specification and Stability Tests
- Applications
- Part IV. Advanced Single Equation Analysis
- Chapter 20. ARCH and GARCH Estimation
- Basic ARCH Specifications
- Estimating ARCH Models in EViews
- Working with ARCH Models
- Additional ARCH Models
- Examples
- Chapter 21. Discrete and Limited Dependent Variable Models
- Binary Dependent Variable Models
- Estimating Binary Models in EViews
- Procedures for Binary Equations
- Ordered Dependent Variable Models
- Estimating Ordered Models in EViews
- Views of Ordered Equations
- Procedures for Ordered Equations
- Censored Regression Models
- Estimating Censored Models in EViews
- Procedures for Censored Equations
- Truncated Regression Models
- Procedures for Truncated Equations
- Count Models
- Views of Count Models
- Procedures for Count Models
- Demonstrations
- Technical Notes
- Chapter 22. The Log Likelihood (LogL) Object
- Overview
- Specification
- Estimation
- LogL Views
- LogL Procs
- Troubleshooting
- Limitations
- Examples
- Part V. Multiple Equation Analysis
- Chapter 23. System Estimation
- Background
- System Estimation Methods
- How to Create and Specify a System
- Working With Systems
- Technical Discussion
- Chapter 24. Vector Autoregression and Error Correction Models
- Vector Autoregressions (VARs)
- Estimating a VAR in EViews
- VAR Estimation Output
- Views and Procs of a VAR
- Structural (Identified) VARs
- Cointegration Test
- Vector Error Correction (VEC) Models
- A Note on Version Compatibility
- Chapter 25. State Space Models and the Kalman Filter
- Background
- Specifying a State Space Model in EViews
- Working with the State Space
- Converting from Version 3 Sspace
- Technical Discussion
- Chapter 26. Models
- Overview
- An Example Model
- Building a Model
- Working with the Model Structure
- Specifying Scenarios
- Using Add Factors
- Solving the Model
- Working with the Model Data
- Part VI. Panel and Pooled Data
- Chapter 27. Pooled Time Series, Cross-Section Data
- The Pool Workfile
- The Pool Object
- Pooled Data
- Setting up a Pool Workfile
- Working with Pooled Data
- Pooled Estimation
- Chapter 28. Working with Panel Data
- Structuring a Panel Workfile
- Panel Workfile Display
- Panel Workfile Information
- Working with Panel Data
- Basic Panel Analysis
- Chapter 29. Panel Estimation
- Estimating a Panel Equation
- Panel Estimation Examples
- Panel Equation Testing
- Estimation Background
- Appendix A. Global Options
- The Options Menu
- Print Setup
- Appendix B. Wildcards
- Wildcard Expressions
- Using Wildcard Expressions
- Source and Destination Patterns
- Resolving Ambiguities
- Wildcard versus Pool Identifier
- Appendix C. Estimation and Solution Options
- Setting Estimation Options
- Optimization Algorithms
- Nonlinear Equation Solution Methods
- Appendix D. Gradients and Derivatives
- Gradients
- Derivatives
- Appendix E. Information Criteria
- Definitions
- Using Information Criteria as a Guide to Model Selection
- References
- Index
Chapter 16. Additional Regression Methods
The first portion of this chapter describes special terms that may be used to estimate models with Polynomial Distributed Lags (PDLs) or dummy variables.
In addition, we describe weighted least squares, heteroskedasticity and autocorrelation consistent covariance estimation, two-stage least squares (TSLS), nonlinear least squares, and generalized method of moments (GMM). Note that most of these methods are also available in systems of equations; see Chapter 23, “System Estimation”, on page 696.
Parts of this chapter refer to estimation of models which have autoregressive (AR) and moving average (MA) error terms. These concepts are discussed in greater depth in Chapter 17, “Time Series Regression”, on page 493.
Special Equation Terms
EViews provides you with special terms that may be used to specify and estimate equations with PDLs, dummy variables, or ARMA errors. We begin with a discussion of PDLs and dummy variables, and defer the discussion of ARMA estimation to “Time Series Regression” on page 493.
Polynomial Distributed Lags (PDLs)
A distributed lag is a relation of the type:
y_t = w_t δ + β_0 x_t + β_1 x_{t−1} + … + β_k x_{t−k} + ε_t    (16.1)
The coefficients β describe the lag in the effect of x on y. In many cases, the coefficients can be estimated directly using this specification. In other cases, the high collinearity of current and lagged values of x will defeat direct estimation.

You can reduce the number of parameters to be estimated by using polynomial distributed lags (PDLs) to impose a smoothness condition on the lag coefficients. Smoothness is expressed as requiring that the coefficients lie on a polynomial of relatively low degree. A polynomial distributed lag model with order p restricts the β coefficients to lie on a p-th order polynomial of the form,
β_j = γ_1 + γ_2 (j − c) + γ_3 (j − c)^2 + … + γ_{p+1} (j − c)^p    (16.2)
for j = 0, 1, …, k, where c is a pre-specified constant given by:

c = k / 2          if k is even
c = (k − 1) / 2    if k is odd    (16.3)
462—Chapter 16. Additional Regression Methods
The PDL is sometimes referred to as an Almon lag. The constant c is included only to avoid numerical problems that can arise from collinearity and does not affect the estimates of β.

This specification allows you to estimate a model with k lags of x using only p parameters (if you choose p > k, EViews will return a “Near Singular Matrix” error).
If you specify a PDL, EViews substitutes Equation (16.2) into (16.1), yielding:

y_t = α + γ_1 z_1 + γ_2 z_2 + … + γ_{p+1} z_{p+1} + ε_t    (16.4)

where:

z_1 = x_t + x_{t−1} + … + x_{t−k}
z_2 = −c x_t + (1 − c) x_{t−1} + … + (k − c) x_{t−k}
…
z_{p+1} = (−c)^p x_t + (1 − c)^p x_{t−1} + … + (k − c)^p x_{t−k}    (16.5)
Once we estimate γ from Equation (16.4), we can recover the parameters of interest β and their standard errors using the relationship described in Equation (16.2). This procedure is straightforward since β is a linear transformation of γ.
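This linearity is easy to verify numerically. The sketch below (Python with NumPy; the lag length, polynomial degree, and data are arbitrary illustrative choices, not values from the text) builds the z_m regressors of Equation (16.5) and checks that the two parameterizations produce identical fitted lag terms:

```python
import numpy as np

k, p = 8, 3                                   # illustrative lag length and polynomial degree
c = k // 2 if k % 2 == 0 else (k - 1) // 2    # centering constant, Equation (16.3)

rng = np.random.default_rng(0)
gamma = rng.normal(size=p + 1)                # hypothetical polynomial coefficients γ_1..γ_{p+1}

# β_j = γ_1 + γ_2 (j − c) + … + γ_{p+1} (j − c)^p, Equation (16.2)
j = np.arange(k + 1)
beta = sum(g * (j - c) ** m for m, g in enumerate(gamma))

# Build the z_m regressors of Equation (16.5) from an illustrative series x
T = 200
x = rng.normal(size=T)
t = np.arange(k, T)
lags = np.stack([x[t - jj] for jj in j], axis=1)   # columns: x_t, x_{t−1}, …, x_{t−k}
z = np.stack([(lags * (j - c) ** m).sum(axis=1) for m in range(p + 1)], axis=1)

# Σ_j β_j x_{t−j} equals Σ_m γ_m z_m at every observation
assert np.allclose(lags @ beta, z @ gamma)
```

Because β is a linear function of γ, the covariance matrix of the implied β estimates follows from the usual variance formula for a linear transformation, which is how the standard errors can be recovered.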
The specification of a polynomial distributed lag has three elements: the lag length k, the degree of the polynomial (the highest power in the polynomial) p, and the constraints that you want to apply. A near end constraint restricts the one-period lead effect of x on y to be zero:
β_{−1} = γ_1 + γ_2 (−1 − c) + … + γ_{p+1} (−1 − c)^p = 0    (16.6)
A far end constraint restricts the effect of x on y to die off beyond the number of specified lags:
β_{k+1} = γ_1 + γ_2 (k + 1 − c) + … + γ_{p+1} (k + 1 − c)^p = 0    (16.7)
If you restrict either the near or far end of the lag, the number of γ parameters estimated is reduced by one to account for the restriction; if you restrict both the near and far end of the lag, the number of γ parameters is reduced by two.
By default, EViews does not impose constraints.
How to Estimate Models Containing PDLs
You specify a polynomial distributed lag using the pdl term, with the following information in parentheses, separated by commas, in this order:
•The name of the series.
•The lag length (the number of lagged values of the series to be included).
•The degree of the polynomial.
•A numerical code to constrain the lag polynomial (optional):
| Code | Effect |
| --- | --- |
| 1 | constrain the near end of the lag to zero |
| 2 | constrain the far end |
| 3 | constrain both ends |
You may omit the constraint code if you do not want to constrain the lag polynomial. Any number of pdl terms may be included in an equation. Each one tells EViews to fit distributed lag coefficients to the series and to constrain the coefficients to lie on a polynomial.
For example, the command:
ls sales c pdl(orders,8,3)
fits SALES to a constant and a distributed lag of current and eight lagged values of ORDERS, where the lag coefficients of ORDERS lie on a third-degree polynomial with no endpoint constraints. Similarly:
ls div c pdl(rev,12,4,2)
fits DIV to a constant and a distributed lag of current and 12 lags of REV, where the coefficients of REV lie on a fourth-degree polynomial with a constraint at the far end.
The pdl specification may also be used in two-stage least squares. If the series in the pdl is exogenous, you should include the PDL of the series in the instruments as well. For this purpose, you may specify pdl(*) as an instrument; all pdl variables will be used as instruments. For example, if you specify the TSLS equation as,
sales c inc pdl(orders(-1),12,4)
with instruments:
fed fed(-1) pdl(*)
the distributed lag of ORDERS will be used as instruments together with FED and FED(–1).
Polynomial distributed lags cannot be used in nonlinear specifications.
Example
The distributed lag model of industrial production (IP) on money (M1) yields the following results:
Dependent Variable: IP
Method: Least Squares
Date: 08/15/97 Time: 17:09
Sample(adjusted): 1960:01 1989:12
Included observations: 360 after adjusting endpoints
| Variable | Coefficient | Std. Error | t-Statistic | Prob. |
| --- | --- | --- | --- | --- |
| C | 40.67568 | 0.823866 | 49.37171 | 0.0000 |
| M1 | 0.129699 | 0.214574 | 0.604449 | 0.5459 |
| M1(-1) | -0.045962 | 0.376907 | -0.121944 | 0.9030 |
| M1(-2) | 0.033183 | 0.397099 | 0.083563 | 0.9335 |
| M1(-3) | 0.010621 | 0.405861 | 0.026169 | 0.9791 |
| M1(-4) | 0.031425 | 0.418805 | 0.075035 | 0.9402 |
| M1(-5) | -0.048847 | 0.431728 | -0.113143 | 0.9100 |
| M1(-6) | 0.053880 | 0.440753 | 0.122245 | 0.9028 |
| M1(-7) | -0.015240 | 0.436123 | -0.034944 | 0.9721 |
| M1(-8) | -0.024902 | 0.423546 | -0.058795 | 0.9531 |
| M1(-9) | -0.028048 | 0.413540 | -0.067825 | 0.9460 |
| M1(-10) | 0.030806 | 0.407523 | 0.075593 | 0.9398 |
| M1(-11) | 0.018509 | 0.389133 | 0.047564 | 0.9621 |
| M1(-12) | -0.057373 | 0.228826 | -0.250728 | 0.8022 |

| R-squared | 0.852398 | Mean dependent var | 71.72679 |
| --- | --- | --- | --- |
| Adjusted R-squared | 0.846852 | S.D. dependent var | 19.53063 |
| S.E. of regression | 7.643137 | Akaike info criterion | 6.943606 |
| Sum squared resid | 20212.47 | Schwarz criterion | 7.094732 |
| Log likelihood | -1235.849 | F-statistic | 153.7030 |
| Durbin-Watson stat | 0.008255 | Prob(F-statistic) | 0.000000 |
Taken individually, none of the coefficients on lagged M1 is statistically different from zero. Yet the regression as a whole has a reasonable R-squared with a very significant F-statistic (though with a very low Durbin-Watson statistic). This is a typical symptom of high collinearity among the regressors and suggests fitting a polynomial distributed lag model.
To estimate a fifth-degree polynomial distributed lag model with no constraints, set the sample using the command,
smpl 1959:01 1989:12
then estimate the equation specification:
ip c pdl(m1,12,5)
either by issuing an ls command with this specification or by entering the specification in the Equation Estimation dialog.
The following result is reported at the top of the equation window:
Dependent Variable: IP
Method: Least Squares
Date: 08/15/97 Time: 17:53
Sample(adjusted): 1960:01 1989:12
Included observations: 360 after adjusting endpoints
| Variable | Coefficient | Std. Error | t-Statistic | Prob. |
| --- | --- | --- | --- | --- |
| C | 40.67311 | 0.815195 | 49.89374 | 0.0000 |
| PDL01 | -4.66E-05 | 0.055566 | -0.000839 | 0.9993 |
| PDL02 | -0.015625 | 0.062884 | -0.248479 | 0.8039 |
| PDL03 | -0.000160 | 0.013909 | -0.011485 | 0.9908 |
| PDL04 | 0.001862 | 0.007700 | 0.241788 | 0.8091 |
| PDL05 | 2.58E-05 | 0.000408 | 0.063211 | 0.9496 |
| PDL06 | -4.93E-05 | 0.000180 | -0.273611 | 0.7845 |

| R-squared | 0.852371 | Mean dependent var | 71.72679 |
| --- | --- | --- | --- |
| Adjusted R-squared | 0.849862 | S.D. dependent var | 19.53063 |
| S.E. of regression | 7.567664 | Akaike info criterion | 6.904899 |
| Sum squared resid | 20216.15 | Schwarz criterion | 6.980462 |
| Log likelihood | -1235.882 | F-statistic | 339.6882 |
| Durbin-Watson stat | 0.008026 | Prob(F-statistic) | 0.000000 |
This portion of the view reports the estimated coefficients γ of the polynomial in Equation (16.2) on page 461. The terms PDL01, PDL02, PDL03, …, correspond to z_1, z_2, … in Equation (16.4).

The implied coefficients of interest β_j in Equation (16.1) are reported at the bottom of the table, together with a plot of the estimated polynomial.
The Sum of Lags reported at the bottom of the table is the sum of the estimated coefficients on the distributed lag and has the interpretation of the long run effect of M1 on IP, assuming stationarity.
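The arithmetic behind the Sum of Lags can be reproduced from the reported γ estimates. The sketch below (Python with NumPy) evaluates Equation (16.2) at each lag using the PDL01–PDL06 point estimates from the table above, with c = 6 since k = 12 is even; the printed value is the result of this arithmetic, not a figure quoted from the EViews output:

```python
import numpy as np

# γ_1..γ_6: point estimates PDL01..PDL06 from the table above
gamma = np.array([-4.66e-05, -0.015625, -0.000160, 0.001862, 2.58e-05, -4.93e-05])
k, c = 12, 6                           # lag length 12; c = k/2 since k is even

# implied lag coefficients β_j from Equation (16.2)
j = np.arange(k + 1)
beta = sum(g * (j - c) ** m for m, g in enumerate(gamma))

print(round(beta.sum(), 4))            # Sum of Lags: prints 0.0877
```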
Note that selecting View/Coefficient Tests for an equation estimated with PDL terms tests the restrictions on γ , not on β . In this example, the coefficients on the fourth- (PDL05) and fifth-order (PDL06) terms are individually insignificant and very close to zero. To test
the joint significance of these two terms, click View/Coefficient Tests/Wald-Coefficient Restrictions… and enter:
c(6)=0, c(7)=0
in the Wald Test dialog box (see “Wald Test (Coefficient Restrictions)” on page 572 for an extensive discussion of Wald tests in EViews). EViews displays the result of the joint test:
Wald Test:
Equation: IP_PDL
| Test Statistic | Value | df | Probability |
| --- | --- | --- | --- |
| F-statistic | 0.039852 | (2, 353) | 0.9609 |
| Chi-square | 0.079704 | 2 | 0.9609 |

Null Hypothesis Summary:

| Normalized Restriction (= 0) | Value | Std. Err. |
| --- | --- | --- |
| C(6) | 2.58E-05 | 2448.827 |
| C(7) | -4.93E-05 | 5550.537 |
Restrictions are linear in coefficients.
There is no evidence to reject the null hypothesis, suggesting that you could have fit a lower order polynomial to your lag structure.
Automatic Categorical Dummy Variables
EViews equation specifications support expressions of the form:
@EXPAND(ser1[, ser2, ser3, ...][, drop_spec])
which creates a set of dummy variables that span the unique integer or string values of the input series.
For example, consider the following two variables:
•SEX is a numeric series which takes the values 1 and 0.
•REGION is an alpha series which takes the values “North”, “South”, “East”, and “West”.
The equation list specification
income age @expand(sex)
is used to regress INCOME on the regressor AGE, and two dummy variables, one for “SEX=0” and one for “SEX=1”.
Similarly, the @EXPAND statement in the equation list specification,
income @expand(sex, region) age
creates 8 dummy variables corresponding to:
sex=0, region="North"
sex=0, region="South"
sex=0, region="East"
sex=0, region="West"
sex=1, region="North"
sex=1, region="South"
sex=1, region="East"
sex=1, region="West"
Note that our two example equation specifications did not include an intercept. This is because, by default, @EXPAND creates a full set of dummy variables, which precludes including an intercept.
You may wish to drop one or more of the dummy variables. @EXPAND takes several options for dropping variables.
The option @DROPFIRST specifies that the first category should be dropped, so that in:
@expand(sex, region, @dropfirst)
no dummy is created for “SEX=0, REGION="North"”.
Similarly, @DROPLAST specifies that the last category should be dropped. In:
@expand(sex, region, @droplast)
no dummy is created for “SEX=1, REGION="West"”.
You may explicitly specify the dummy variables to be dropped using the modifier option @DROP(val1[, val2, val3,...]), where each argument corresponds to a successive category in @EXPAND. For example, in the expression:
@expand(sex, region, @drop(0,"West"), @drop(1,"North"))
no dummy is created for “SEX=0, REGION="West"” and “SEX=1, REGION="North"”.
When you specify drops by explicit value you may use the wild card "*" to indicate all values of a corresponding category. For example:
@expand(sex, region, @drop(1,*))
specifies that dummy variables for all values of REGION where “SEX=1” should be dropped.
We caution you to take some care in using @EXPAND since it is very easy to generate excessively large numbers of regressors.
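For readers working outside EViews, the construction is easy to mimic. The sketch below (Python with pandas; the data are hypothetical) uses pd.get_dummies as a rough analogue of @EXPAND — note that its drop_first option drops the first column in sorted order, which only approximates @DROPFIRST:

```python
import pandas as pd

# hypothetical data covering all 2 x 4 = 8 (sex, region) categories
df = pd.DataFrame({
    "sex": [0, 0, 0, 0, 1, 1, 1, 1],
    "region": ["North", "South", "East", "West"] * 2,
})

# analogue of @expand(sex, region): one dummy per observed value pair
combo = df["sex"].astype(str) + "," + df["region"]
full = pd.get_dummies(combo)
print(full.shape[1])                  # prints 8: a full set, so the regression must omit C

# dropping one category (rough analogue of @dropfirst) allows an intercept
smaller = pd.get_dummies(combo, drop_first=True)
print(smaller.shape[1])               # prints 7
```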
Example
Following Wooldridge (2000, Example 3.9, p. 106), we regress the log median housing price, LPRICE, on a constant, the log of the amount of pollution (LNOX), and the average number of rooms per house in the community, ROOMS, using data from Harrison and Rubinfeld (1978).
We expand the example to include a dummy variable for each value of the series RADIAL, representing an index for community access to highways. We use @EXPAND to create the dummy variables of interest, with a list specification of:
lprice lnox rooms @expand(radial)
We deliberately omit the constant term C since @EXPAND creates a full set of dummy variables. The top portion of the results is depicted below:
Dependent Variable: LPRICE
Method: Least Squares
Date: 12/30/03 Time: 16:49
Sample: 1 506
Included observations: 506
| Variable | Coefficient | Std. Error | t-Statistic | Prob. |
| --- | --- | --- | --- | --- |
| LNOX | -0.487579 | 0.084998 | -5.736396 | 0.0000 |
| ROOMS | 0.284844 | 0.018790 | 15.15945 | 0.0000 |
| RADIAL=1 | 8.930255 | 0.205986 | 43.35368 | 0.0000 |
| RADIAL=2 | 9.030875 | 0.209225 | 43.16343 | 0.0000 |
| RADIAL=3 | 9.085988 | 0.199781 | 45.47970 | 0.0000 |
| RADIAL=4 | 8.960967 | 0.198646 | 45.11016 | 0.0000 |
| RADIAL=5 | 9.110542 | 0.209759 | 43.43330 | 0.0000 |
| RADIAL=6 | 9.001712 | 0.205166 | 43.87528 | 0.0000 |
| RADIAL=7 | 9.013491 | 0.206797 | 43.58621 | 0.0000 |
| RADIAL=8 | 9.070626 | 0.214776 | 42.23297 | 0.0000 |
| RADIAL=24 | 8.811812 | 0.217787 | 40.46069 | 0.0000 |
Note that EViews has automatically created dummy variable expressions for each distinct value in RADIAL. If we wish to renormalize our dummy variables with respect to a different omitted category, we may include the C in the regression list, and explicitly exclude a value. For example, to exclude the category RADIAL=24, we use the list:
lprice c lnox rooms @expand(radial, @drop(24))
Estimation of this specification yields: