
- Table of Contents
- Foreword
- Chapter 1. A Quick Walk Through
- Workfile: The Basic EViews Document
- Viewing an individual series
- Looking at different samples
- Generating a new series
- Looking at a pair of series together
- Estimating your first regression in EViews
- Saving your work
- Forecasting
- What’s Ahead
- Chapter 2. EViews—Meet Data
- The Structure of Data and the Structure of a Workfile
- Creating a New Workfile
- Deconstructing the Workfile
- Time to Type
- Identity Noncrisis
- Dated Series
- The Import Business
- Adding Data To An Existing Workfile—Or, Being Rectangular Doesn’t Mean Being Inflexible
- Among the Missing
- Quick Review
- Appendix: Having A Good Time With Your Date
- Chapter 3. Getting the Most from Least Squares
- A First Regression
- The Really Important Regression Results
- The Pretty Important (But Not So Important As the Last Section’s) Regression Results
- A Multiple Regression Is Simple Too
- Hypothesis Testing
- Representing
- What’s Left After You’ve Gotten the Most Out of Least Squares
- Quick Review
- Chapter 4. Data—The Transformational Experience
- Your Basic Elementary Algebra
- Simple Sample Says
- Data Types Plain and Fancy
- Numbers and Letters
- Can We Have A Date?
- What Are Your Values?
- Relative Exotica
- Quick Review
- Chapter 5. Picture This!
- A Simple Soup-To-Nuts Graphing Example
- A Graphic Description of the Creative Process
- Picture One Series
- Group Graphics
- Let’s Look At This From Another Angle
- To Summarize
- Categorical Graphs
- Togetherness of the Second Sort
- Quick Review and Look Ahead
- Chapter 6. Intimacy With Graphic Objects
- To Freeze Or Not To Freeze Redux
- A Touch of Text
- Shady Areas and No-Worry Lines
- Templates for Success
- Point Me The Way
- Your Data Another Sorta Way
- Give A Graph A Fair Break
- Options, Options, Options
- Quick Review?
- Chapter 7. Look At Your Data
- Sorting Things Out
- Describing Series—Just The Facts Please
- Describing Series—Picturing the Distribution
- Tests On Series
- Describing Groups—Just the Facts—Putting It Together
- Chapter 8. Forecasting
- Just Push the Forecast Button
- Theory of Forecasting
- Dynamic Versus Static Forecasting
- Sample Forecast Samples
- Facing the Unknown
- Forecast Evaluation
- Forecasting Beneath the Surface
- Quick Review—Forecasting
- Chapter 9. Page After Page After Page
- Pages Are Easy To Reach
- Creating New Pages
- Renaming, Deleting, and Saving Pages
- Multi-Page Workfiles—The Most Basic Motivation
- Multiple Frequencies—Multiple Pages
- Links—The Live Connection
- Unlinking
- Have A Match?
- Matching When The Identifiers Are Really Different
- Contracted Data
- Expanded Data
- Having Contractions
- Two Hints and A GotchYa
- Quick Review
- Chapter 10. Prelude to Panel and Pool
- Pooled or Paneled Population
- Nuances
- So What Are the Benefits of Using Pools and Panels?
- Quick (P)review
- Chapter 11. Panel—What’s My Line?
- What’s So Nifty About Panel Data?
- Setting Up Panel Data
- Panel Estimation
- Pretty Panel Pictures
- More Panel Estimation Techniques
- One Dimensional Two-Dimensional Panels
- Fixed Effects With and Without the Social Contrivance of Panel Structure
- Quick Review—Panel
- Chapter 12. Everyone Into the Pool
- Getting Your Feet Wet
- Playing in the Pool—Data
- Getting Out of the Pool
- More Pool Estimation
- Getting Data In and Out of the Pool
- Quick Review—Pools
- Chapter 13. Serial Correlation—Friend or Foe?
- Visual Checks
- Testing for Serial Correlation
- More General Patterns of Serial Correlation
- Correcting for Serial Correlation
- Forecasting
- ARMA and ARIMA Models
- Quick Review
- Chapter 14. A Taste of Advanced Estimation
- Weighted Least Squares
- Heteroskedasticity
- Nonlinear Least Squares
- Generalized Method of Moments
- Limited Dependent Variables
- ARCH, etc.
- Maximum Likelihood—Rolling Your Own
- System Estimation
- Vector Autoregressions—VAR
- Quick Review?
- Chapter 15. Super Models
- Your First Homework—Bam, Taken Up A Notch!
- Looking At Model Solutions
- More Model Information
- Your Second Homework
- Simulating VARs
- Rich Super Models
- Quick Review
- Chapter 16. Get With the Program
- I Want To Do It Over and Over Again
- You Want To Have An Argument
- Program Variables
- Loopy
- Other Program Controls
- A Rolling Example
- Quick Review
- Appendix: Sample Programs
- Chapter 17. Odds and Ends
- How Much Data Can EViews Handle?
- How Long Does It Take To Compute An Estimate?
- Freeze!
- A Comment On Tables
- Saving Tables and Almost Tables
- Saving Graphs and Almost Graphs
- Unsubtle Redirection
- Objects and Commands
- Workfile Backups
- Updates—A Small Thing
- Updates—A Big Thing
- Ready To Take A Break?
- Help!
- Odd Ending
- Chapter 18. Optional Ending
- Required Options
- Option-al Recommendations
- More Detailed Options
- Window Behavior
- Font Options
- Frequency Conversion
- Alpha Truncation
- Spreadsheet Defaults
- Workfile Storage Defaults
- Estimation Defaults
- File Locations
- Graphics Defaults
- Quick Review
- Index
- Symbols

Chapter 14. A Taste of Advanced Estimation
Estimation is econometric software’s raison d’être. This chapter presents a quick taste of some of the many techniques built into EViews. We’re not going to explore all the nuanced variations. If you find an interesting flavor, visit the User’s Guide for in-depth discussion.
Weighted Least Squares
Ordinary least squares attaches equal weight to each observation. Sometimes you want certain observations to count more than others. One reason for weighting is to make sub-population proportions in your sample mimic sub-population proportions in the overall population. Another reason is to downweight observations with high error variance. The version of least squares that attaches weights to each observation is conveniently named weighted least squares, or WLS.
In Chapter 8, “Forecasting,” we looked at the growth of currency in the hands of the public, estimating the equation shown here. We used ordinary least squares as the estimation technique, but you may remember that the residuals were much noisier early in the sample than they were later on. We might get a better estimate by giving less weight to the early observations.
As a rough and ready adjustment after looking at the residual plot, we’ll choose to give more weight to observations from 1952 on and less to those earlier.
We used a Stats By Classification… view of RESID to find error standard deviations for each subperiod.
You can see that the residual standard deviation falls by more than half beginning in 1952. We’ll use this information to create a series, ROUGH_W, for weighting observations:
series rough_w = 14*(@year<1952) + 6*(@year>=1952)
That’s the heart of the trick in instructing EViews to do weighted least squares—you need to create a series which holds the weight for every observation. When performing weighted least squares using the default settings, EViews multiplies each observation by the weight you supply. Essentially, this is equivalent to replicating each observation in proportion to its weight.
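The weight-multiplication idea is easy to check numerically. The sketch below is a minimal illustration in numpy using made-up data (not the currency series): it confirms that the textbook WLS formula matches ordinary least squares run on data scaled by the square root of inverse-variance weights, and that rescaling all the weights by a constant, such as dividing by the mean weight, leaves the coefficients unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data: y = 1 + 2x + noise whose standard deviation differs across
# two regimes, loosely mimicking "noisy early, quiet late" residuals.
n = 200
x = rng.normal(size=n)
sigma = np.where(np.arange(n) < 100, 14.0, 6.0)  # error std dev per observation
y = 1.0 + 2.0 * x + sigma * rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

# Inverse-variance weights: w_i = 1 / sigma_i^2
w = 1.0 / sigma**2

# Textbook WLS: b = (X'WX)^{-1} X'Wy
W = np.diag(w)
b_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Equivalent trick: ordinary least squares on data scaled by sqrt(w_i)
s = np.sqrt(w)
b_scaled, *_ = np.linalg.lstsq(X * s[:, None], y * s, rcond=None)

# Dividing every weight by the mean weight changes nothing in the coefficients.
s_norm = np.sqrt(w / w.mean())
b_norm, *_ = np.linalg.lstsq(X * s_norm[:, None], y * s_norm, rcond=None)

assert np.allclose(b_wls, b_scaled)
assert np.allclose(b_wls, b_norm)
print(b_wls)
```

Note that the exact convention for how the Weight series enters depends on the Type you select in the EViews dialog; the sketch above uses plain inverse-variance weights.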
Hint: In fact, if the weight is wᵢ, the EViews default scaling multiplies the data by wᵢ ⁄ w̄, the observation weight divided by the mean weight. In theory this makes no difference, but sometimes the normalization by the denominator helps with numerical computation issues.
The Weighted Option
Open the least squares equation EQ01 in the workfile, click the Estimate button, and switch to the Options tab. In the Weights groupbox, select Inverse std.dev. from the Type dropdown and enter the weight series in the Weight series field. Notice that we’ve entered 1/ROUGH_W. That’s because 1/ROUGH_W is roughly proportional to the inverse of the error standard deviation. As is generally true in EViews, you can enter an expression wherever a series is called for.
The weighted least squares estimates include two summary statistics panels. The first panel is calculated from the residuals of the weighted regression, while the second is based on the unweighted residuals. Notice that the unweighted R² from weighted least squares is a little lower than the R² reported in the original ordinary least squares estimate, just as it should be.
Heteroskedasticity
One of the statistical assumptions underneath ordinary least squares is that the error terms for all observations have a common variance; that is, they are homoskedastic. Errors with varying variances are said, in contrast, to be heteroskedastic. EViews offers both tests for heteroskedasticity and methods for producing correct standard errors in the presence of heteroskedasticity.
Tests for Heteroskedastic Residuals
The Residual Diagnostics/Heteroskedasticity Tests... view of an equation offers a variety of heteroskedasticity tests, including two variants of the White heteroskedasticity test. The White test is essentially a test of whether values of the right-hand side variables and/or their squares and cross terms—x₁², x₁×x₂, x₂², etc.—help explain the squared residuals. To perform a White test with only the squared terms (no cross terms), uncheck the Include White cross terms box.
Here are the results of the White test (without cross terms) on our currency growth equation. The F- and χ²-statistics reported in the top panel decisively reject the null hypothesis of homoskedasticity.
The bottom panel, only part of which is shown, displays the auxiliary regression used to compute the test statistics.
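The mechanics behind the no-cross-terms version of the test can be sketched in a few lines of numpy. This is an illustration on made-up heteroskedastic data, not EViews’ exact implementation: regress the squared OLS residuals on the regressors and their squares, then compare n·R² from that auxiliary regression to a chi-square distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up data where the error variance grows with |x|, so the
# homoskedasticity null is false by construction.
n = 500
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + np.abs(x) * rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

# Step 1: ordinary least squares and its residuals.
b, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ b

# Step 2: auxiliary regression of squared residuals on the regressors
# and their squares (no cross terms, matching the unchecked box).
Z = np.column_stack([np.ones(n), x, x**2])
g, *_ = np.linalg.lstsq(Z, e**2, rcond=None)
fitted = Z @ g
r2 = 1.0 - ((e**2 - fitted) ** 2).sum() / ((e**2 - (e**2).mean()) ** 2).sum()

# Step 3: the chi-square form of the statistic is n * R^2, compared
# against chi2 with (auxiliary regressors - 1) degrees of freedom.
lm_stat = n * r2
print(lm_stat)  # large values reject the homoskedasticity null
```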
Heteroskedasticity Robust Standard Errors
One approach to dealing with heteroskedasticity is to weight observations so that the weighted data are homoskedastic. That’s essentially what we did in the previous section. A different approach is to stick with least squares estimation, but to correct the standard errors to account for heteroskedasticity. Click the Estimate button in the equation window and switch to the Options tab. Select either White or HAC (Newey-West) from the dropdown in the Coefficient covariance matrix group. As an example, we’ll trumpet the White results.
Compare the results here to the least squares results shown on page 335. The coefficients, as well as the summary panel at the bottom, are identical. This reinforces the point that we’re still doing a least squares estimation, but adjusting the standard errors.
The reported t-statistics and p-values reflect the adjusted standard errors. Some are smaller than before and some are larger. Hypothesis tests computed using Coefficient Diagnostics/Wald-Coefficient Restrictions… correctly account for the adjusted standard errors. The Omitted Variables and Redundant Variables tests do not use the adjusted standard errors.
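The “same coefficients, different standard errors” point can be seen directly from the White sandwich formula. The sketch below, again on made-up data, computes the plain HC0 version, (X′X)⁻¹ X′diag(e²)X (X′X)⁻¹; EViews may apply additional small-sample adjustments, so treat this as the idea rather than a replication of its output.

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up heteroskedastic data standing in for the currency equation.
n = 300
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + np.abs(x) * rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

# Ordinary least squares coefficients -- unchanged by the robust option.
XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
e = y - X @ b

# Conventional OLS standard errors: s^2 (X'X)^{-1}
s2 = (e @ e) / (n - X.shape[1])
se_ols = np.sqrt(np.diag(s2 * XtX_inv))

# White (HC0) sandwich: (X'X)^{-1} X' diag(e^2) X (X'X)^{-1}
meat = X.T @ (X * (e**2)[:, None])
se_white = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

print(se_ols, se_white)  # coefficients b are identical either way
```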