
Weighted Least Squares—469

Dependent Variable: LPRICE
Method: Least Squares
Date: 12/30/03   Time: 16:57
Sample: 1 506
Included observations: 506

Variable             Coefficient   Std. Error   t-Statistic   Prob.
C                       8.811812     0.217787     40.46069    0.0000
LNOX                   -0.487579     0.084998    -5.736396    0.0000
ROOMS                   0.284844     0.018790     15.15945    0.0000
RADIAL=1                0.118444     0.072129     1.642117    0.1012
RADIAL=2                0.219063     0.066055     3.316398    0.0010
RADIAL=3                0.274176     0.059458     4.611253    0.0000
RADIAL=4                0.149156     0.042649     3.497285    0.0005
RADIAL=5                0.298730     0.037827     7.897337    0.0000
RADIAL=6                0.189901     0.062190     3.053568    0.0024
RADIAL=7                0.201679     0.077635     2.597794    0.0097
RADIAL=8                0.258814     0.066166     3.911591    0.0001

R-squared               0.573871   Mean dependent var       9.941057
Adjusted R-squared      0.565262   S.D. dependent var       0.409255
S.E. of regression      0.269841   Akaike info criterion    0.239530
Sum squared resid       36.04295   Schwarz criterion        0.331411
Log likelihood         -49.60111   F-statistic              66.66195
Durbin-Watson stat      0.671010   Prob(F-statistic)        0.000000

Weighted Least Squares

Suppose that you have heteroskedasticity of known form, and that there is a series w whose values are proportional to the reciprocals of the error standard deviations. You can use weighted least squares, with weight series w, to correct for the heteroskedasticity.

EViews performs weighted least squares by first dividing the weight series by its mean, then multiplying all of the data for each observation by the scaled weight series. The scaling of the weight series is a normalization that has no effect on the parameter results, but makes the weighted residuals more comparable to the unweighted residuals. The normalization does imply, however, that EViews weighted least squares is not appropriate in situations where the scale of the weight series is relevant, as in frequency weighting.

Estimation is then completed by running a regression using the weighted dependent and independent variables to minimize the sum-of-squared residuals:

S(\beta) = \sum_t w_t^2 (y_t - x_t'\beta)^2   (16.8)


with respect to the k-dimensional vector of parameters β. In matrix notation, let W be a diagonal matrix containing the scaled w along the diagonal and zeroes elsewhere, and let y and X be the usual matrices associated with the left and right-hand side variables. The weighted least squares estimator is,

b_{WLS} = (X'W'WX)^{-1} X'W'W y ,   (16.9)

and the estimated covariance matrix is:

\hat{\Sigma}_{WLS} = s^2 (X'W'WX)^{-1} .   (16.10)

To estimate an equation using weighted least squares, first go to the main menu and select Quick/Estimate Equation…, then choose LS—Least Squares (NLS and ARMA) from the combo box. Enter your equation specification and sample in the Specification tab, then select the Options tab and click on the Weighted LS/TSLS option.

Fill in the blank after Weight with the name of the series containing your weights, and click on OK. Click on OK again to accept the dialog and estimate the equation.
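As an illustrative sketch (not EViews code; all names are made up), the estimator in (16.9) and the covariance in (16.10), including the mean-scaling of the weight series, can be written in NumPy as:

```python
import numpy as np

def wls(y, X, w):
    """Weighted least squares following the EViews convention:
    the weight series is first divided by its mean (a normalization
    that has no effect on the parameter estimates)."""
    ws = np.asarray(w, dtype=float)
    ws = ws / ws.mean()
    Wy = ws * y                      # weighted dependent variable
    WX = ws[:, None] * X             # weighted regressors
    XtWWX = WX.T @ WX                # X'W'WX
    b = np.linalg.solve(XtWWX, WX.T @ Wy)   # (X'W'WX)^{-1} X'W'Wy, eq. (16.9)
    resid = Wy - WX @ b              # weighted residuals
    T, k = X.shape
    s2 = resid @ resid / (T - k)
    cov = s2 * np.linalg.inv(XtWWX)  # s^2 (X'W'WX)^{-1}, eq. (16.10)
    return b, cov
```

Because of the normalization, rescaling the weight series by any positive constant leaves both the estimates and the covariance unchanged.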


Dependent Variable: LOG(X)
Method: Least Squares
Date: 10/15/97   Time: 11:10
Sample(adjusted): 1891 1983
Included observations: 93 after adjusting endpoints
Weighting series: POP

Variable             Coefficient   Std. Error   t-Statistic   Prob.
C                       0.004233     0.012745     0.332092    0.7406
LOG(X(-1))              0.099840     0.112539     0.887163    0.3774
LOG(W(-1))              0.194219     0.421005     0.461322    0.6457

Weighted Statistics
R-squared               0.016252   Mean dependent var       0.009762
Adjusted R-squared     -0.005609   S.D. dependent var       0.106487
S.E. of regression      0.106785   Akaike info criterion   -1.604274
Sum squared resid       1.026272   Schwarz criterion       -1.522577
Log likelihood          77.59873   F-statistic              0.743433
Durbin-Watson stat      1.948087   Prob(F-statistic)        0.478376

Unweighted Statistics
R-squared              -0.002922   Mean dependent var       0.011093
Adjusted R-squared     -0.025209   S.D. dependent var       0.121357
S.E. of regression      0.122877   Sum squared resid        1.358893
Durbin-Watson stat      2.086669

EViews will open an output window displaying the standard coefficient results, and both weighted and unweighted summary statistics. The weighted summary statistics are based on the fitted residuals, computed using the weighted data:

\tilde{u}_t = w_t (y_t - x_t' b_{WLS}) .   (16.11)

The unweighted summary results are based on the residuals computed from the original (unweighted) data:

u_t = y_t - x_t' b_{WLS} .   (16.12)

Following estimation, the unweighted residuals are placed in the RESID series.
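The two residual definitions can be sketched as follows (illustrative NumPy, not EViews code; the function name is made up):

```python
import numpy as np

def wls_residuals(y, X, b_wls, w):
    """Weighted residuals (16.11) and unweighted residuals (16.12),
    using the same mean-scaled weights as in estimation."""
    ws = np.asarray(w, dtype=float)
    ws = ws / ws.mean()
    u = y - X @ b_wls          # unweighted residuals; EViews puts these in RESID
    u_tilde = ws * u           # weighted residuals
    return u_tilde, u
```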

If the residual variance assumptions are correct, the weighted residuals should show no evidence of heteroskedasticity. Under those same assumptions, the unweighted residuals should be heteroskedastic, with the standard deviation of the residual at each period t inversely proportional to w_t.

The weighting option will be ignored in equations containing ARMA specifications. Note also that the weighting option is not available for binary, count, censored and truncated, or ordered discrete choice models.

Heteroskedasticity and Autocorrelation Consistent Covariances

When the form of heteroskedasticity is not known, it may not be possible to obtain efficient estimates of the parameters using weighted least squares. OLS provides consistent parameter estimates in the presence of heteroskedasticity, but the usual OLS standard errors will be incorrect and should not be used for inference.

Before we describe the techniques for HAC covariance estimation, note that:

• Using the White heteroskedasticity consistent or the Newey-West HAC consistent covariance estimates does not change the point estimates of the parameters, only the estimated standard errors.

• There is nothing to keep you from combining various methods of accounting for heteroskedasticity and serial correlation. For example, weighted least squares estimation might be accompanied by White or Newey-West covariance matrix estimates.

Heteroskedasticity Consistent Covariances (White)

White (1980) has derived a heteroskedasticity consistent covariance matrix estimator which provides correct estimates of the coefficient covariances in the presence of heteroskedasticity of unknown form. The White covariance matrix is given by:

\hat{\Sigma}_W = \frac{T}{T - k} (X'X)^{-1} \left( \sum_{t=1}^{T} u_t^2 x_t x_t' \right) (X'X)^{-1} ,   (16.13)

where T is the number of observations, k is the number of regressors, and u_t is the least squares residual.
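Formula (16.13) translates directly into code; the following is an illustrative NumPy sketch (the function name is made up):

```python
import numpy as np

def white_cov(y, X, b):
    """White heteroskedasticity-consistent covariance, eq. (16.13):
    (T/(T-k)) * (X'X)^{-1} (sum_t u_t^2 x_t x_t') (X'X)^{-1}"""
    T, k = X.shape
    u = y - X @ b                          # least squares residuals
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = (X * (u ** 2)[:, None]).T @ X   # sum_t u_t^2 x_t x_t'
    return (T / (T - k)) * XtX_inv @ meat @ XtX_inv
```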


EViews provides you the option to use the White covariance estimator in place of the standard OLS formula. Open the equation dialog and specify the equation as before, then push the Options button. Next, click on the check box labeled Heteroskedasticity Consistent Covariance and click on the White radio button. Accept the options and click OK to estimate the equation.

EViews will estimate your equation and compute the variances using White’s covariance estimator. You can always tell when EViews is using White covariances, since the output display will include a line to document this fact:

Dependent Variable: LOG(X)
Method: Least Squares
Date: 10/15/97   Time: 11:11
Sample(adjusted): 1891 1983
Included observations: 93 after adjusting endpoints
Weighting series: POP
White Heteroskedasticity-Consistent Standard Errors & Covariance

Variable             Coefficient   Std. Error   t-Statistic   Prob.
C                       0.004233     0.012519     0.338088    0.7361
LOG(X(-1))              0.099840     0.137262     0.727369    0.4689
LOG(W(-1))              0.194219     0.436644     0.444800    0.6575
