Suppose that the estimated QML equation is named EQ1. Then you can use EViews to compute the QLR statistic and the associated p-value, placing the results in scalars using the following commands:

scalar qlr = eq1.@lrstat/2.226420477

scalar qpval = 1-@cchisq(qlr, 2)

You can examine the results by clicking on the scalar objects in the workfile window and viewing the results in the status line. The QLR statistic is 7.5666, and the p-value is 0.023. The statistic and p-value are valid under the weaker conditions that the conditional mean is correctly specified, and that the conditional variance is proportional (but not necessarily equal) to the conditional mean.
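As a quick cross-check outside EViews, the same upper-tail probability can be reproduced with any chi-square routine; the short Python sketch below (an illustration, not part of the EViews workflow) uses the QLR value reported above.

```python
# Reproduce the QLR p-value reported above (chi-square with 2 degrees of freedom).
from scipy.stats import chi2

qlr = 7.5666               # QLR statistic from the text
pval = chi2.sf(qlr, 2)     # upper-tail probability, same as 1 - @cchisq(qlr, 2)
print(round(pval, 3))      # prints 0.023
```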

Technical Notes

Huber/White (QML) Standard Errors

The Huber/White option for robust standard errors computes the quasi-maximum likelihood (or pseudo-ML) standard errors:

$$\widehat{\mathrm{var}}_{\mathrm{QML}}(\hat{\beta}) \;=\; \hat{H}^{-1}\,\hat{g}\hat{g}'\,\hat{H}^{-1}\,, \qquad (21.49)$$

where $\hat{g}$ and $\hat{H}$ are the gradient (or score) and Hessian of the log likelihood evaluated at the ML estimates.

Note that these standard errors are not robust to heteroskedasticity in binary dependent variable models. They are robust to certain misspecifications of the underlying distribution of y .
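For readers who want to reproduce (21.49) outside EViews, here is a minimal Python sketch. It assumes you have already computed the Hessian and an N × K matrix of per-observation score contributions at the ML estimates, and it interprets ĝĝ′ as the usual sum of outer products of the individual scores; the function and argument names are illustrative.

```python
import numpy as np

def qml_covariance(H, scores):
    """Sandwich (Huber/White) covariance of equation (21.49).

    H      : K x K Hessian of the log likelihood at the ML estimates.
    scores : N x K matrix of per-observation score contributions.
    """
    H_inv = np.linalg.inv(H)
    opg = scores.T @ scores          # sum over i of g_i g_i'
    return H_inv @ opg @ H_inv       # H^{-1} (sum g_i g_i') H^{-1}
```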

GLM Standard Errors

Many of the discrete and limited dependent variable models described in this chapter belong to a class of models known as generalized linear models (GLM). The assumption of GLM is that the distribution of the dependent variable yi belongs to the exponential family and that the conditional mean of yi is a (smooth) nonlinear transformation of the linear part xiβ:

$$E(y_i \mid x_i, \beta) \;=\; h(x_i'\beta)\,. \qquad (21.50)$$

Even though the QML covariance is robust to general misspecification of the conditional distribution of yi , it does not possess any efficiency properties. An alternative consistent estimate of the covariance is obtained if we impose the GLM condition that the (true) variance of yi is proportional to the variance of the distribution used to specify the log likelihood:

$$\mathrm{var}(y_i \mid x_i, \beta) \;=\; \sigma^2\,\mathrm{var}_{\mathrm{ML}}(y_i \mid x_i, \beta)\,. \qquad (21.51)$$

In other words, the ratio of the (conditional) variance to the mean is some constant σ² that is independent of x. The most empirically relevant case is σ² > 1, which is known as overdispersion. If this proportional variance condition holds, a consistent estimate of the GLM covariance is given by:

$$\widehat{\mathrm{var}}_{\mathrm{GLM}}(\hat{\beta}) \;=\; \hat{\sigma}^2\,\widehat{\mathrm{var}}_{\mathrm{ML}}(\hat{\beta})\,, \qquad (21.52)$$

where

$$\hat{\sigma}^2 \;=\; \frac{1}{N-K}\sum_{i=1}^{N}\frac{(y_i-\hat{y}_i)^2}{v(x_i,\hat{\beta},\hat{\gamma})} \;=\; \frac{1}{N-K}\sum_{i=1}^{N}\frac{\hat{u}_i^2}{v(x_i,\hat{\beta},\hat{\gamma})}\,. \qquad (21.53)$$

If you select GLM standard errors, the estimated proportionality term σ̂² is reported as the variance factor estimate in EViews.
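A minimal sketch of (21.52) and (21.53), assuming the fitted means, the ML variance function values v(xi, β̂, γ̂), and the ML coefficient covariance are already available; the names here are illustrative, not EViews objects.

```python
import numpy as np

def glm_covariance(y, y_hat, v_hat, cov_ml, K):
    """Scale the ML covariance by the estimated variance factor.

    y      : observed dependent variable (length N array).
    y_hat  : fitted conditional means.
    v_hat  : ML variance function v(x_i, beta_hat, gamma_hat) per observation.
    cov_ml : ML covariance matrix of the coefficient estimates.
    K      : number of estimated coefficients.
    """
    N = len(y)
    sigma2_hat = np.sum((y - y_hat) ** 2 / v_hat) / (N - K)   # equation (21.53)
    return sigma2_hat * cov_ml, sigma2_hat                    # equation (21.52)
```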

For more discussion of GLM and the phenomenon of overdispersion, see McCullagh and Nelder (1989) or Fahrmeir and Tutz (1994).

The Hosmer-Lemeshow Test

Let the data be grouped into j = 1, 2, …, J groups, and let nj be the number of observations in group j. Define the number of yi = 1 observations and the average of predicted values in group j as:

$$y(j) \;=\; \sum_{i \in j} y_i \qquad\qquad \bar{p}(j) \;=\; \sum_{i \in j} \hat{p}_i \,/\, n_j \;=\; \sum_{i \in j} \bigl(1 - F(-x_i'\hat{\beta})\bigr) \,/\, n_j \qquad (21.54)$$

The Hosmer-Lemeshow test statistic is computed as:

$$HL \;=\; \sum_{j=1}^{J} \frac{\bigl(y(j) - n_j\,\bar{p}(j)\bigr)^2}{n_j\,\bar{p}(j)\bigl(1 - \bar{p}(j)\bigr)}\,. \qquad (21.55)$$

The distribution of the HL statistic is not known; however, Hosmer and Lemeshow (1989, p. 141) report evidence from extensive simulation indicating that when the model is correctly specified, the distribution of the statistic is well approximated by a χ² distribution with J − 2 degrees of freedom. Note that these findings are based on a simulation where J is close to n.
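As an illustration of (21.54) and (21.55), the sketch below computes the HL statistic in Python, grouping observations into deciles of the fitted probabilities (one common, but not the only, grouping choice); y and p_hat are assumed to be NumPy arrays of 0/1 outcomes and fitted probabilities.

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p_hat, J=10):
    """HL statistic of equation (21.55) using J groups sorted by fitted probability."""
    order = np.argsort(p_hat)
    groups = np.array_split(order, J)       # J groups of (nearly) equal size
    hl = 0.0
    for idx in groups:
        n_j = len(idx)
        y_j = y[idx].sum()                  # number of y = 1 observations in group j
        p_bar = p_hat[idx].mean()           # average predicted probability in group j
        hl += (y_j - n_j * p_bar) ** 2 / (n_j * p_bar * (1 - p_bar))
    return hl, chi2.sf(hl, J - 2)           # approximate chi-square with J - 2 df
```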

The Andrews Test

Let the data be grouped into j = 1, 2, …, J groups. Since y is binary, there are 2J cells into which any observation can fall. Andrews (1988a, 1988b) compares the 2J vector of the actual number of observations in each cell to those predicted from the model, forms a quadratic form, and shows that the quadratic form has an asymptotic χ² distribution if the model is specified correctly.

Andrews suggests three tests depending on the choice of the weighting matrix in the quadratic form. EViews uses the test that can be computed by an auxiliary regression as described in Andrews (1988a, 3.18) or Andrews (1988b, 17).

Briefly, let Ã be an n × J matrix with element ãij = 1(i ∈ j) − p̂i, where the indicator function 1(i ∈ j) takes the value one if observation i belongs to group j with yi = 1, and zero otherwise (we drop the columns for the groups with y = 0 to avoid singularity). Let B be the n × K matrix of the contributions to the score ∂l(β)/∂β. The Andrews test statistic is n times the R² from regressing a constant (one) on each column of Ã and B. Under the null hypothesis that the model is correctly specified, nR² is asymptotically distributed χ² with J degrees of freedom.
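The auxiliary regression can be mimicked outside EViews as in the sketch below, which assumes the matrices Ã and B described above have already been built and uses the uncentered R² (the regressand is a constant, so the centered R² is not defined).

```python
import numpy as np
from scipy.stats import chi2

def andrews_test(A_tilde, B):
    """n times the (uncentered) R^2 from regressing a vector of ones on [A_tilde, B]."""
    n, J = A_tilde.shape
    X = np.hstack([A_tilde, B])             # regressors of the auxiliary regression
    ones = np.ones(n)
    coef, *_ = np.linalg.lstsq(X, ones, rcond=None)
    resid = ones - X @ coef
    r2 = 1.0 - resid @ resid / n            # uncentered R^2 since the regressand is constant
    stat = n * r2
    return stat, chi2.sf(stat, J)           # asymptotically chi-square with J df
```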
