

Chapter 13. Statistical Graphs from Series and Groups
Note that we select the Exact method option since there are only 69 observations to evaluate the kernel. The kernel density result is depicted below:
[Figure: Kernel Density (Normal, h = 0.0800); estimated density of CDRATE, horizontal axis from 7.4 to 8.8, vertical axis from 0.0 to 2.0]
This density estimate has about the right degree of smoothing. Interestingly enough, this density has a trimodal shape with modes at the “focal” numbers 7.5, 8.0, and 8.5.
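For readers who want the computation spelled out, the following sketch implements a fixed-bandwidth kernel density estimate in Python with NumPy. It mirrors the settings in the figure (normal kernel, h = 0.08), but the data below are randomly generated stand-ins for the 69 CDRATE observations, and the function is an illustration rather than EViews's internal routine.

import numpy as np

def kernel_density(data, grid, h):
    """Kernel density estimate on `grid`: (1/(N*h)) * sum_i K((x - X_i)/h),
    here with K the standard normal density."""
    u = (grid[:, None] - data[None, :]) / h        # scaled distances, shape (grid, N)
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)   # normal kernel weights
    return k.sum(axis=1) / (len(data) * h)

# Hypothetical stand-in for the CDRATE series.
cdrate = np.random.default_rng(0).uniform(7.4, 8.8, 69)
grid = np.linspace(7.4, 8.8, 200)
density = kernel_density(cdrate, grid, h=0.08)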
Scatter Diagrams with Fit Lines
The view menu of a group includes five variants of scatterplot diagrams. Click on View/Graph/Scatter, then select Simple Scatter to plot a scatter diagram with the first series on the horizontal axis and the remaining series on the vertical axis. The XY Pairs form of the scatterplot graph plots scatter diagrams in pairs, with the first series plotted against the second, the third against the fourth, and so on.

The remaining three graphs, Scatter with Regression, Scatter with Nearest Neighbor Fit, and Scatter with Kernel Fit, plot fitted lines for the scatterplot of the second series in the group against the first series.
Scatter with Regression
This view fits a bivariate regression of transformations of the second series in the group Y on transformations of the first series in the group X (and a constant).
The following transformations of the series are available for the bivariate fit:
Transformation    y                 x
None              y                 x
Logarithmic       log(y)            log(x)
Inverse           1/y               1/x
Power             y^a               x^b
Box-Cox           (y^a − 1)/a       (x^b − 1)/b
Polynomial        —                 1, x, x^2, …, x^b
where you specify the parameters a and b in the edit field. Note that the Box-Cox transformation with parameter zero is the same as the log transformation.
•If any of the transformed values are not available, EViews returns an error message. For example, if you take logs of negative values, noninteger powers of nonpositive values, or inverses of zeros, EViews will stop processing and issue an error message.

•If you specify a high-order polynomial, EViews may be forced to drop some of the high order terms to avoid collinearity.
When you click OK, EViews displays a scatter diagram of the series together with a line connecting the fitted values from the regression. You may optionally save the fitted values as a series. Type a name for the fitted series in the Fitted Y series edit field.
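The note above, that the Box-Cox transformation approaches the log transformation as its parameter goes to zero, can be checked numerically. A small Python sketch (the function name box_cox and the sample values are ours; EViews performs the transformation internally):

import numpy as np

def box_cox(y, a):
    """Box-Cox transform (y**a - 1)/a; its a -> 0 limit is log(y)."""
    return np.log(y) if a == 0 else (y**a - 1) / a

y = np.array([0.5, 1.0, 2.0, 4.0])
print(box_cox(y, 1e-8))   # approximately [-0.6931, 0., 0.6931, 1.3863]
print(np.log(y))          # matches the logarithmic transformation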
Robustness Iterations
The least squares method is very sensitive to the presence of even a few outlying observations. The Robustness Iterations option carries out a form of weighted least squares where outlying observations are given relatively less weight in estimating the coefficients of the regression.
For any given transformation of the series, the Robustness Iteration option carries out robust fitting with bisquare weights. Robust fitting estimates the parameters a and b to minimize the weighted sum of squared residuals,

\sum_{i=1}^{N} r_i (y_i - a - x_i b)^2    (13.7)

where y_i and x_i are the transformed series and the bisquare robustness weights r_i are given by:

r_i = \begin{cases} (1 - e_i^2 / (36 m^2))^2 & \text{for } e_i^2 / (36 m^2) < 1 \\ 0 & \text{otherwise} \end{cases}    (13.8)

where e_i = y_i - a - x_i b is the residual from the previous iteration (the first iteration weights are determined by the OLS residuals), and m is the median of |e_i|. Observations with large residuals (outliers) are given small weights when forming the weighted sum of squared residuals.
To choose robustness iterations, click on the check box for Robustness Iterations and specify an integer for the number of iterations.
See Cleveland (1993) for additional discussion.
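To make the procedure concrete, here is a minimal Python sketch of the robustness iterations for the untransformed linear case, following equations (13.7) and (13.8). It is an illustrative reimplementation, not EViews's internal code; the function name robust_fit and the default iteration count are our own, and it assumes a nonzero median absolute residual.

import numpy as np

def robust_fit(x, y, iterations=2):
    """Bisquare-weighted fit of y = a + b*x per (13.7)-(13.8)."""
    r = np.ones_like(y, dtype=float)        # first pass: OLS (unit weights)
    for _ in range(iterations + 1):
        X = np.column_stack([np.ones_like(x), x])
        W = np.diag(r)
        a, b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # weighted least squares
        e = y - a - b * x                   # residuals from this iteration
        m = np.median(np.abs(e))            # median of |e_i|
        u = e**2 / (36 * m**2)
        r = np.where(u < 1, (1 - u)**2, 0.0)   # bisquare weights, equation (13.8)
    return a, b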
Scatter with Nearest Neighbor Fit
This view displays local polynomial regressions with bandwidth based on nearest neighbors. Briefly, for each data point in a sample, we fit a locally weighted polynomial regression. It is a local regression since we use only the subset of observations which lie in a neighborhood of the point to fit the regression model; it may be weighted so that observations further from the given data point are given less weight.

This class of regressions includes the popular Loess (also known as Lowess) techniques described by Cleveland (1993, 1994). Additional discussion of these techniques may be found in Fan and Gijbels (1996), and in Chambers, Cleveland, Kleiner, and Tukey (1983).
Method
You should choose between computing the local regression at each data point in the sample, or using a subsample of data points.
•Exact (full sample) fits a local regression at every data point in the sample.
•Cleveland subsampling performs the local regression at only a subset of points. You should provide the size of the subsample M in the edit box.
The number of points at which the local regressions are computed is approximately equal to M . The actual number of points will depend on the distribution of the explanatory variable.
Since the exact method computes a regression at every data point in the sample, it may be quite time consuming when applied to large samples. For samples with over 100 observations, you may wish to consider subsampling.
The idea behind subsampling is that the local regression computed at two adjacent points should differ by only a small amount. Cleveland subsampling provides an adaptive algorithm for skipping nearby points in such a way that the subsample includes all of the representative values of the regressor.
It is worth emphasizing that at each point in the subsample, EViews uses the entire sample in determining the neighborhood of points. Thus, each regression in the Cleveland subsample corresponds to an equivalent regression in the exact computation. For large data sets, the computational savings are substantial, with very little loss of information.
Specification
For each point in the sample selected by the Method option, we compute the fitted value by running a local regression using data around that point. The Specification option determines the rules employed in identifying the observations to be included in each local regression, and the functional form used for the regression.
Bandwidth span determines which observations should be included in the local regressions. You should specify a number α between 0 and 1. The span controls the smoothness of the local fit; a larger fraction α gives a smoother fit. The fraction α instructs EViews to

include the αN observations nearest to the given point, where αN is 100α% of the total sample size, truncated to an integer. For example, with 100 observations and a span of α = 0.3, each local regression uses the 30 observations nearest to the evaluation point (a small code sketch of this rule follows the options below).
Note that this standard definition of nearest neighbors implies that the number of points need not be symmetric around the point being evaluated. If desired, you can force symmetry by selecting the Symmetric neighbors option.
Polynomial degree specifies the degree of polynomial to fit in each local regression.
If you mark the Bracket bandwidth span option, EViews displays three nearest neighbor fits with spans of 0.5α, α, and 1.5α.
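Here is that sketch of the span rule in Python (the helper name neighborhood is ours, not an EViews routine): given an evaluation point, keep the truncated αN observations nearest in x, which need not be symmetric around the point.

import numpy as np

def neighborhood(x, x0, span):
    """Indices of the int(span*N) observations of x nearest to x0."""
    k = int(span * len(x))          # 100*span percent of N, truncated
    d = np.abs(x - x0)              # distance of each observation from x0
    return np.argsort(d)[:k]        # the k nearest, possibly one-sided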
Other Options
Local Weighting (Tricube) weights the observations of each local regression. The weighted regression minimizes the weighted sum of squared residuals
\sum_{i=1}^{N} w_i (y_i - a - x_i b_1 - x_i^2 b_2 - \cdots - x_i^k b_k)^2.    (13.9)

The tricube weights w_i are given by:

w_i = \begin{cases} (1 - |d_i / d(\alpha N)|^3)^3 & \text{for } |d_i / d(\alpha N)| < 1 \\ 0 & \text{otherwise} \end{cases}    (13.10)

where d_i = |x_i - x| and d(αN) is the αN-th smallest such distance. Observations that are relatively far from the point being evaluated get small weights in the sum of squared residuals. If you turn this option off, each local regression is unweighted with w_i = 1 for all i.
Robustness Iterations iterates the local regressions by adjusting the weights to downweight outlier observations. The initial fit is obtained using weights w_i, where w_i is tricube if you choose Local Weighting and 1 otherwise. The residuals e_i from the initial fit are used to compute the robustness bisquare weights r_i as given in equation (13.8) above. In the second iteration, the local fit is obtained using weights w_i r_i. We repeat this process for the user-specified number of iterations, where at each iteration the robustness weights r_i are recomputed using the residuals from the last iteration.
Symmetric Neighbors forces the local regression to include the same number of observations to the left and to the right of the point being evaluated. This approach violates the definition, though not the spirit, of nearest neighbor regression.
To save the fitted values as a series, type a name in the Fitted series field box. If you have specified subsampling, EViews will linearly interpolate to find the fitted value of y for the actual value of x. If you have marked the Bracket bandwidth span option, EViews saves three series with _L, _M, _H appended to the name, each corresponding to bandwidths of 0.5α, α, and 1.5α, respectively.
Note that Loess is a special case of nearest neighbor fit, with a polynomial of degree 1, and local tricube weighting. The default EViews options are set to provide Loess estimation.
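Putting the pieces together, here is a hedged Python sketch of a single evaluation of that default fit: a degree-1 local regression with tricube weights as in equations (13.9) and (13.10). Robustness iterations are omitted for brevity, the function name loess_at is ours, and this is an illustration rather than the EViews implementation.

import numpy as np

def loess_at(x, y, x0, span=0.3):
    """Fitted value at x0 from a tricube-weighted local linear regression."""
    idx = np.argsort(np.abs(x - x0))[:int(span * len(x))]   # the alpha*N nearest points
    xs, ys = x[idx], y[idx]
    u = np.abs(xs - x0) / np.abs(xs - x0).max()   # d_i / d(alpha*N)
    w = np.where(u < 1, (1 - u**3)**3, 0.0)       # tricube weights, equation (13.10)
    X = np.column_stack([np.ones_like(xs), xs - x0])   # regressors centered at x0
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ ys)
    return beta[0]   # with centered regressors, the intercept is the fit at x0

Evaluating loess_at over a grid of points traces out the fitted line that EViews draws.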
Scatter with Kernel Fit
This view displays fits of local polynomial kernel regressions of the second series in the group Y on the first series in the group X. Both the nearest neighbor fit, described above, and the kernel fit are nonparametric regressions that fit local polynomials. The two differ in how they define “local” in the choice of bandwidth. The effective bandwidth in nearest neighbor regression varies, adapting to the observed distribution of the regressor. For the kernel fit, the bandwidth is fixed but the local observations are weighted according to a kernel function.
Extensive discussion may be found in Simonoff (1996), Hardle (1991), and Fan and Gijbels (1996).
Local polynomial kernel regressions fit Y at each value x by choosing the parameters β to minimize the weighted sum of squared residuals:

m(x) = \sum_{i=1}^{N} (Y_i - \beta_0 - \beta_1 (x - X_i) - \cdots - \beta_k (x - X_i)^k)^2 \, K((x - X_i) / h)    (13.11)

where N is the number of observations, h is the bandwidth (or smoothing parameter), and K is a kernel function that integrates to one. Note that the minimizing estimates of β will differ for each x.
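For the simplest case, k = 0, the minimization in (13.11) has a closed form: the fit at each evaluation point is a kernel-weighted average of the Y values. A hedged Python sketch (illustrative only, using the Epanechnikov kernel described below; it assumes every evaluation point has at least one observation within the bandwidth):

import numpy as np

def nadaraya_watson(x_data, y_data, x_eval, h):
    """k = 0 case of (13.11): kernel-weighted local average of Y."""
    u = (x_eval[:, None] - x_data[None, :]) / h
    K = np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)   # Epanechnikov kernel
    return (K * y_data).sum(axis=1) / K.sum(axis=1)        # weighted average at each point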
When you select the Scatter with Kernel Fit view, the Kernel Fit dialog appears.
You will need to specify the form of the local regression, the kernel, the bandwidth, and other options to control the fit procedure.
Regression
Specify the order of polynomial k to be fit at each data point. The Nadaraya-Watson option sets k = 0 and locally fits a constant at each x. Local Linear sets k = 1 at each x. For higher order polynomials, mark the Local Polynomial option and type an integer in the field box to specify the order of the polynomial.

Kernel
The kernel is the function used to weight the observations in each local regression. EViews provides the option of selecting one of the following kernel functions:
Epanechnikov (default)    (3/4)(1 − u^2) I(|u| ≤ 1)
Triangular                (1 − |u|) I(|u| ≤ 1)
Uniform (Rectangular)     (1/2) I(|u| ≤ 1)
Normal (Gaussian)         (1/√(2π)) exp(−u^2/2)
Biweight (Quartic)        (15/16)(1 − u^2)^2 I(|u| ≤ 1)
Triweight                 (35/32)(1 − u^2)^3 I(|u| ≤ 1)
Cosinus                   (π/4) cos(πu/2) I(|u| ≤ 1)
where u is the argument of the kernel function and I is the indicator function that takes a value of one if its argument is true, and zero otherwise.
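The table translates directly into code. A sketch of the seven kernels in Python (the dictionary and helper I are our own constructions for illustration, not part of EViews):

import numpy as np

I = lambda c: np.where(c, 1.0, 0.0)   # indicator function from the table

kernels = {
    "Epanechnikov": lambda u: 0.75 * (1 - u**2) * I(np.abs(u) <= 1),
    "Triangular":   lambda u: (1 - np.abs(u)) * I(np.abs(u) <= 1),
    "Uniform":      lambda u: 0.5 * I(np.abs(u) <= 1),
    "Normal":       lambda u: np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi),
    "Biweight":     lambda u: (15 / 16) * (1 - u**2)**2 * I(np.abs(u) <= 1),
    "Triweight":    lambda u: (35 / 32) * (1 - u**2)**3 * I(np.abs(u) <= 1),
    "Cosinus":      lambda u: (np.pi / 4) * np.cos(np.pi * u / 2) * I(np.abs(u) <= 1),
}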
Bandwidth
The bandwidth h determines the weights to be applied to observations in each local regression. The larger the h, the smoother the fit. By default, EViews arbitrarily sets the bandwidth to:

h = 0.15 (X_U - X_L)    (13.12)

where (X_U - X_L) is the range of X.
For nearest neighbor bandwidths, see Scatter with Nearest Neighbor Fit.
To specify your own bandwidth, mark User Specified and enter a nonnegative number for the bandwidth in the edit box.
The Bracket Bandwidth option fits three kernel regressions using bandwidths 0.5h, h, and 1.5h.

Number of grid points
You must specify the number of points M at which to evaluate the local polynomial regression. The default is M = 100 points; you can specify any integer in the field. Suppose the range of the series X is [X_L, X_U]. Then the polynomial is evaluated at M equispaced points:

x_i = X_L + i \, (X_U - X_L) / M    for i = 0, 1, …, M − 1    (13.13)
Method
Given a number of evaluation points, EViews provides you with two additional computational options: exact computation and linear binning.
The Exact method performs a regression at each x_i, using all of the data points (X_j, Y_j), for j = 1, 2, …, N. Since the exact method computes a regression at every grid point, it may be quite time consuming when applied to large samples. In these settings, you may wish to consider the linear binning method.
The Linear Binning method (Fan and Marron 1994) approximates the kernel regression by binning the raw data Xj fractionally to the two nearest evaluation points, prior to evaluating the kernel estimate. For large data sets, the computational savings may be substantial, with virtually no loss of precision.
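The following Python sketch illustrates the binning step (our reading of Fan and Marron's linear binning, not EViews code): each observation is split fractionally between its two nearest grid points, after which the kernel sums can be formed over the M grid points rather than the N raw observations.

import numpy as np

def linear_bin(x, y, grid):
    """Fractionally assign each (x, y) pair to the two nearest grid points.
    Assumes an equispaced grid that spans the data."""
    delta = grid[1] - grid[0]
    pos = (x - grid[0]) / delta                    # fractional grid position of each x
    lo = np.clip(np.floor(pos).astype(int), 0, len(grid) - 2)
    frac = pos - lo                                # share going to the upper neighbor
    counts = np.zeros(len(grid))                   # binned weights
    sums = np.zeros(len(grid))                     # binned responses
    np.add.at(counts, lo, 1 - frac)
    np.add.at(counts, lo + 1, frac)
    np.add.at(sums, lo, (1 - frac) * y)
    np.add.at(sums, lo + 1, frac * y)
    return counts, sums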
To save the fitted values as a series, type a name in the Fitted Series field box. EViews will save the fitted Y to the series, linearly interpolating points computed on the grid to find the appropriate value. If you have marked the Bracket Bandwidth option, EViews saves three series with “_L”, “_M”, “_H” appended to the name, each corresponding to bandwidths 0.5h, h, and 1.5h, respectively.
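The save-and-interpolate step can be mimicked with a short sketch (np.interp is our stand-in for whatever interpolation routine EViews uses internally; the grid and fitted values below are placeholders):

import numpy as np

grid = np.linspace(0.0, 1.0, 100)          # the M evaluation points
fit = np.sin(grid)                         # stand-in for fitted values on the grid
x = np.array([0.123, 0.456, 0.789])        # the actual observations
fitted_series = np.interp(x, grid, fit)    # linear interpolation between grid points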
Example
As an example, we estimate a bivariate relation for a simulated data set of the type used by Hardle (1991). The data were generated by:
' pi = 4*arctan(1)
scalar pi = @atan(1)*4
' x is uniform on [0,1)
series x = rnd
' y = sin(2*pi*x^3)^3 plus normal noise with variance 0.1
series y = sin(2*pi*x^3)^3 + nrnd*(0.1^.5)
The simple scatter of Y and the “true” conditional mean of Y against X looks as follows:

[Figure: scatter of Y and YTRUE against X; X runs from 0.0 to 1.0, vertical axis from −1.5 to 2.0]
The “+” shapes in the middle of the scatterplot trace out the “true” conditional mean of Y. Note that the true mean reaches a peak around x = 0.6, a valley around x = 0.9, and a saddle around x = 0.8.
To fit a nonparametric regression of Y on X, you first create a group containing the two series. The order in which you enter the series is important: the explanatory variable must be the first series in the group. Highlight the series name X and then Y, double click in the highlighted area, and select Open Group. Then select View/Graph/Scatter/Scatter with Nearest Neighbor Fit, and repeat the procedure for Scatter with Kernel Fit.
The two fits, computed using the EViews default settings, are shown below:
[Figure: two panels plotting Y against X (X from 0.0 to 1.0, Y from −1.5 to 2.0); left: LOESS Fit (degree = 1, span = 0.3000); right: Kernel Fit (Epanechnikov, h = 0.1488)]
Both local regression lines seem to capture the peak, but the kernel fit is more sensitive to the upturn in the neighborhood of X=1. Of course, the fitted lines change as we modify the options, particularly when we adjust the bandwidth h and window width α.