
684—Chapter 22. The Log Likelihood (LogL) Object

Troubleshooting

Because the logl object provides a great deal of flexibility, you are more likely to experience problems with estimation using the logl object than with EViews’ built-in estimators.

If you are experiencing difficulties with estimation, the following suggestions may help you solve the problem:

Check your likelihood specification. A simple error involving a wrong sign can easily stop the estimation process from working. You should also verify that the parameters of the model are really identified (in some specifications you may have to impose a normalization across the parameters). Also, every parameter which appears in the model must feed directly or indirectly into the likelihood contributions. The Check Derivatives view is particularly useful in helping you spot the latter problem.

Choose your starting values. If any of the likelihood contributions in your sample cannot be evaluated due to missing values or because of domain errors in mathematical operations (logs and square roots of negative numbers, division by zero, etc.) the estimation will stop immediately with the message: “Cannot compute @logl due to missing values”. In other cases, a bad choice of starting values may lead you into regions where the likelihood function is poorly behaved. You should always try to initialize your parameters to sensible numerical values. If you have a simpler estimation technique available which approximates the problem, you may wish to use estimates from this method as starting values for the maximum likelihood specification.

Make sure lagged values are initialized correctly. In contrast to most other estimation routines in EViews, the logl estimation procedure will not automatically drop observations with NAs or lags from the sample when estimating a log likelihood model. If your likelihood specification involves lags, you will either have to drop observations from the beginning of your estimation sample, or you will have to carefully code the specification so that missing values from before the sample do not cause NAs to propagate through the entire sample (see the AR(1) and GARCH examples for a demonstration).

Since the series used to evaluate the likelihood are contained in your workfile (unless you use the @temp statement to delete them), you can examine the values in the log likelihood and intermediate series to find problems involving lags and missing values.
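To see why an uninitialized lag can poison an entire sample, consider the following sketch in plain Python (not EViews code; the series values and coefficients are made up). A recursive series seeded with a missing pre-sample value, as in a GARCH variance recursion, turns every subsequent observation into an NA:

```python
import math

# Made-up squared residuals standing in for a workfile series.
eps2 = [0.5, 1.2, 0.3, 0.9]

def garch_var(h0, omega=0.1, alpha=0.2, beta=0.7):
    """Variance recursion h(t) = omega + alpha*eps2(t-1) + beta*h(t-1), seeded with h0."""
    h, prev = [], h0
    for e in eps2:
        prev = omega + alpha * e + beta * prev
        h.append(prev)
    return h

bad = garch_var(math.nan)                 # uninitialized pre-sample lag: every value is NaN
good = garch_var(sum(eps2) / len(eps2))   # one common fix: seed with the sample mean
```

Every element of `bad` is NaN while `good` is finite throughout, which is why the text above recommends coding the specification so that pre-sample missing values cannot propagate.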

Verify your derivatives. If you are using analytic derivatives, use the Check Derivatives view to make sure you have coded the derivatives correctly. If you are using numerical derivatives, consider specifying analytic derivatives or adjusting the options for derivative method or step size.


Reparametrize your model. If you are having problems with parameter values causing mathematical errors, you may wish to consider reparameterizing the model to restrict the parameter within its valid domain. See the discussion below for examples.

Most of the error messages you are likely to see during estimation are self-explanatory. The error message “near singular matrix” may be less obvious. This error message occurs when EViews is unable to invert the matrix of the sum of the outer product of the derivatives so that it is impossible to determine the direction of the next step of the optimization. This error may indicate a wide variety of problems, including bad starting values, but will almost always occur if the model is not identified, either theoretically, or in terms of the available data.

Limitations

The likelihood object can be used to estimate parameters that maximize (or minimize) a variety of objective functions. Although the main use of the likelihood object will be to specify a log likelihood, you can specify least squares and minimum distance estimation problems with the likelihood object as long as the objective function is additive over the sample.

You should be aware that the algorithm used in estimating the parameters of the log likelihood is not well suited to solving arbitrary maximization or minimization problems. The algorithm forms an approximation to the Hessian of the log likelihood, based on the sum of the outer product of the derivatives of the likelihood contributions. This approximation relies on both the functional form and statistical properties of maximum likelihood objective functions, and may not be a good approximation in general settings. Consequently, you may or may not be able to obtain results with other functional forms. Furthermore, the standard error estimates of the parameter values will only have meaning if the series describing the log likelihood contributions are (up to an additive constant) the individual contributions to a correctly specified, well-defined theoretical log likelihood.
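A small plain-Python sketch of this outer-product-of-gradients approximation (with made-up data; not part of any EViews program) also shows why the "near singular matrix" error accompanies an unidentified parameter: a parameter that never feeds into the likelihood contributions has an identically zero gradient column, so the approximate Hessian cannot be inverted.

```python
# Per-observation scores for a unit-variance normal mean model; the second
# parameter is deliberately unidentified (it never enters the likelihood),
# so its score is identically zero.
data = [0.3, -1.2, 0.8, 2.1, -0.5]
mu = 0.1

def score(x):
    return [x - mu, 0.0]   # d logl_i/d(mu), d logl_i/d(unused parameter)

# Outer-product-of-gradients approximation to the (negative) Hessian.
opg = [[0.0, 0.0], [0.0, 0.0]]
for x in data:
    g = score(x)
    for r in range(2):
        for c in range(2):
            opg[r][c] += g[r] * g[c]

det = opg[0][0] * opg[1][1] - opg[0][1] * opg[1][0]   # zero: matrix is singular
```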

Currently, the expressions used to describe the likelihood contribution must follow the rules of EViews series expressions. This restriction implies that we do not allow matrix operations in the likelihood specification. In order to specify likelihood functions for multiple equation models, you may have to write out the expression for the determinants and quadratic forms. Although possible, this may become tedious for models with more than two or three equations. See the multivariate GARCH sample programs for examples of this approach.
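For a two-equation system, "writing out the determinant and quadratic form" amounts to the scalar expressions below, shown here in plain Python with made-up residuals and covariance elements (in an actual logl specification these would be series expressions):

```python
import math

e1, e2 = 0.4, -0.9                 # residuals from the two equations (made up)
s11, s12, s22 = 1.0, 0.3, 2.0      # elements of the 2x2 covariance matrix

det = s11 * s22 - s12 ** 2                                    # |Sigma| written out
quad = (s22 * e1**2 - 2 * s12 * e1 * e2 + s11 * e2**2) / det  # e' Sigma^(-1) e
# bivariate normal log likelihood contribution
logl = -math.log(2 * math.pi) - 0.5 * math.log(det) - 0.5 * quad
```

With three or more equations the written-out determinant and inverse quickly become unwieldy, which is the tedium the text refers to.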

Additionally, the logl object does not directly handle optimization subject to general inequality constraints. There are, however, a variety of well-established techniques for imposing simple inequality constraints; we provide examples below. The underlying idea is to apply a monotonic transformation to the coefficient so that the new coefficient takes on values only in the desired range. The commonly used transformations are @exp for one-sided restrictions, and @logit and @arctan for two-sided restrictions.

You should be aware of the limitations of the transformation approach. First, the approach only works for relatively simple inequality constraints. If you have several cross-coefficient inequality restrictions, the solution will quickly become intractable. Second, in order to perform hypothesis tests on the untransformed coefficient, you will have to obtain an estimate of the standard errors of the associated expressions. Since the transformations are generally nonlinear, you will have to compute linear approximations to the variances yourself (using the delta method). Lastly, inference will be poor near the boundary values of the inequality restrictions.

Simple One-Sided Restrictions

Suppose you would like to restrict the estimate of the coefficient of X to be no larger than 1. One way you could do this is to specify the corresponding subexpression as follows:

' restrict coef on x to not exceed 1

res1 = y - c(1) - (1-exp(c(2)))*x

Note that EViews will report the point estimate and the standard error for the parameter C(2), not the coefficient of X. To find the standard error of the expression 1-exp(c(2)), you will have to use the delta method; see for example Greene (1997), Theorems 4.15 and 4.16.
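For illustration, here is the delta-method computation in plain Python; the point estimate and standard error of C(2) are made-up stand-ins for whatever EViews reports:

```python
import math

c2, se_c2 = -0.7, 0.15            # hypothetical estimate and std. error of C(2)

beta_x = 1 - math.exp(c2)         # implied coefficient on X (always less than 1)
dg = -math.exp(c2)                # derivative of 1 - exp(c) with respect to c
se_beta = abs(dg) * se_c2         # delta method: se(g(c)) ~= |g'(c)| * se(c)
```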

Simple Two-Sided Restrictions

Suppose instead that you want to restrict the coefficient for X to be between -1 and 1. Then you can specify the expression as:

' restrict coef on x to be between -1 and 1

res1 = y - c(1) - (2*@logit(c(2))-1)*x

Again, EViews will report the point estimate and standard error for the parameter C(2). You will have to use the delta method to compute the standard error of the transformation expression 2*@logit(c(2))-1.

More generally, if you want to restrict the parameter to lie between L and H, you can use the transformation:

(H-L)*@logit(c(1)) + L

where C(1) is the parameter to be estimated. In the above example, L=-1 and H=1.


Examples

In this section, we provide extended examples of working with the logl object to estimate a multinomial logit and a maximum likelihood AR(1) specification. Example programs for these and several other specifications are provided in your default EViews data directory. If you set your default directory to point to the EViews data directory, you should be able to issue a RUN command for each of these programs to create the logl object and to estimate the unknown parameters.

Multinomial Logit (mlogit1.prg)

In this example, we demonstrate how to specify and estimate a simple multinomial logit model using the logl object. Suppose the dependent variable Y can take one of three values: 1, 2, or 3. Further suppose that there are data on two regressors, X1 and X2, that vary across observations (individuals). Standard examples include variables such as age and level of education. Then the multinomial logit model assumes that the probability of observing each category in Y is given by:

$$
\Pr(y_i = j) \;=\; \frac{\exp(\beta_{0j} + \beta_{1j} x_{1i} + \beta_{2j} x_{2i})}
{\sum_{k=1}^{3} \exp(\beta_{0k} + \beta_{1k} x_{1i} + \beta_{2k} x_{2i})} \;=\; P_{ij}
\qquad (22.8)
$$

for j = 1, 2, 3. Note that the parameters β are specific to each category, so there are 3 × 3 = 9 parameters in this specification. The parameters are not all identified unless we impose a normalization (see for example Greene, 1997, chapter 19.7), so we normalize the parameters of the first choice category j = 1 to be all zero: β₀₁ = β₁₁ = β₂₁ = 0.

The log likelihood function for the multinomial logit can be written as:

$$
l = \sum_{i=1}^{N} \sum_{j=1}^{3} d_{ij} \log(P_{ij})
\qquad (22.9)
$$

where d_ij is a dummy variable that takes the value 1 if observation i has chosen alternative j and 0 otherwise. The first-order conditions are:

$$
\frac{\partial l}{\partial \beta_{kj}} = \sum_{i=1}^{N} (d_{ij} - P_{ij})\, x_{ki}
\qquad (22.10)
$$

for k = 0, 1, 2 and j = 1, 2, 3.
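These formulas are easy to verify numerically. The plain-Python sketch below (made-up regressor values and coefficients; not one of the example programs) computes the normalized probabilities, the log likelihood contribution for one observation, and checks the analytic score d_ij − P_ij against a finite-difference derivative:

```python
import math

x1, x2 = 0.5, -1.0                          # regressor values for one observation
b = {1: (0.0, 0.0, 0.0),                    # category 1 normalized to zero
     2: (0.2, -0.4, 0.1),
     3: (-0.3, 0.6, 0.5)}

def probs(coefs):
    u = {j: math.exp(b0 + b1 * x1 + b2 * x2) for j, (b0, b1, b2) in coefs.items()}
    s = sum(u.values())
    return {j: uj / s for j, uj in u.items()}

chosen = 2                                   # this observation picked category 2
p = probs(b)
loglik = math.log(p[chosen])

analytic = 1.0 - p[2]                        # score w.r.t. beta_02: d_i2 - P_i2

h = 1e-6                                     # finite-difference check
b_up = dict(b)
b_up[2] = (b[2][0] + h, b[2][1], b[2][2])
numeric = (math.log(probs(b_up)[chosen]) - loglik) / h
```

The analytic and numerical derivatives agree to several decimal places, which is exactly the agreement the Check Derivatives view looks for.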

 

 

We have provided, in the Example Files subdirectory of your default EViews directory, a workfile MLOGIT.WK1 containing artificial multinomial data. The program begins by loading this workfile:

' load artificial data


%evworkfile = @evpath + "\example files\logl\mlogit"

load "{%evworkfile}"

This loads the workfile from the EViews example directory.

Next, we declare the coefficient vectors that will contain the estimated parameters for each choice alternative:

' declare parameter vector

coef(3) b2

coef(3) b3

As an alternative, we could have used the default coefficient vector C.

We then set up the likelihood function by issuing a series of append statements:

' set up likelihood
logl mlogit
mlogit.append @logl logl1
mlogit.append xb2 = b2(1)+b2(2)*x1+b2(3)*x2
mlogit.append xb3 = b3(1)+b3(2)*x1+b3(3)*x2

' define prob for each choice
mlogit.append denom = 1+exp(xb2)+exp(xb3)
mlogit.append pr1 = 1/denom
mlogit.append pr2 = exp(xb2)/denom
mlogit.append pr3 = exp(xb3)/denom

' specify likelihood
mlogit.append logl1 = (1-dd2-dd3)*log(pr1)+dd2*log(pr2)+dd3*log(pr3)

Since the analytic derivatives for the multinomial logit are particularly simple, we also specify the expressions for the analytic derivatives to be used during estimation and the appropriate @deriv statements:

' specify analytic derivatives
for !i = 2 to 3
  mlogit.append @deriv b{!i}(1) grad{!i}1 b{!i}(2) grad{!i}2 b{!i}(3) grad{!i}3
  mlogit.append grad{!i}1 = dd{!i}-pr{!i}
  mlogit.append grad{!i}2 = grad{!i}1*x1
  mlogit.append grad{!i}3 = grad{!i}1*x2
next

Note that if you were to specify this likelihood interactively, you would simply type the expression that follows each append statement directly into the MLOGIT object.


This concludes the actual specification of the likelihood object. Before estimating the model, we get the starting values by estimating a series of binary logit models:

' get starting values from binomial logit
equation eq2.binary(d=l) dd2 c x1 x2
b2 = eq2.@coefs
equation eq3.binary(d=l) dd3 c x1 x2
b3 = eq3.@coefs

To check whether you have specified the analytic derivatives correctly, choose View/ Check Derivatives or use the command:

show mlogit.checkderiv

If you have correctly specified the analytic derivatives, they should be fairly close to the numeric derivatives.

We are now ready to estimate the model. Either click the Estimate button or use the command:

' do MLE

mlogit.ml(showopts, m=1000, c=1e-5)

show mlogit.output

Note that you can examine the derivatives for this model using the Gradient Table view, or you can examine the series in the workfile containing the gradients. You can also look at the intermediate results and log likelihood values. For example, to look at the likelihood contributions for each individual, simply double click on the LOGL1 series.

AR(1) Model (ar1.prg)

In this example, we demonstrate how to obtain full maximum likelihood estimates of an AR(1) model. The maximum likelihood procedure uses the first observation in the sample, in contrast to the built-in AR(1) procedure in EViews, which treats the first observation as fixed and maximizes the conditional likelihood for the remaining observations by nonlinear least squares.

As an illustration, we first generate data that follows an AR(1) process:

' make up data
create m 80 89
rndseed 123
series y = 0
smpl @first+1 @last
y = 1 + 0.85*y(-1) + nrnd


The exact Gaussian likelihood function for an AR(1) model is given by:

$$
f(y_t, \theta) =
\begin{cases}
\dfrac{1}{\sqrt{2\pi\,\sigma^2/(1-\rho^2)}}\,
\exp\!\left(-\dfrac{\bigl(y_t - c/(1-\rho)\bigr)^2}{2\,\sigma^2/(1-\rho^2)}\right) & t = 1 \\[2ex]
\dfrac{1}{\sqrt{2\pi\,\sigma^2}}\,
\exp\!\left(-\dfrac{\bigl(y_t - c - \rho\, y_{t-1}\bigr)^2}{2\,\sigma^2}\right) & t > 1
\end{cases}
\qquad (22.11)
$$

where c is the constant term, ρ is the AR(1) coefficient, and σ² is the error variance, all to be estimated (see for example Hamilton, 1994a, chapter 5.2).

Since the likelihood function evaluation differs for the first observation in our sample, we create a dummy variable indicator for the first observation:

' create dummy variable for first obs

series d1 = 0

smpl @first @first

d1 = 1

smpl @all

Next, we declare the coefficient vectors to store the parameter estimates and initialize them with the least squares estimates:

' set starting values to LS (drops first obs)

equation eq1.ls y c ar(1)

coef(1) rho = c(2)

coef(1) s2 = eq1.@se^2

We then specify the likelihood function. We make use of the @recode function to differentiate the evaluation of the likelihood for the first observation from that of the remaining observations. Note that @recode is used here with its updated syntax; consult the current documentation for details.

' set up likelihood
logl ar1

ar1.append @logl logl1

ar1.append var = @recode(d1=1,s2(1)/(1-rho(1)^2),s2(1))

ar1.append res = @recode(d1=1,y-c(1)/(1-rho(1)),y-c(1)-rho(1)*y(-1))

ar1.append sres = res/@sqrt(var)

ar1.append logl1 = log(@dnorm(sres))-log(var)/2
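The same logic can be written out in plain Python (made-up data and parameter values; not one of the example programs) to make the @recode branching explicit: the first observation draws on the stationary mean and variance, later observations on the conditional ones:

```python
import math

c, rho, s2 = 1.0, 0.85, 1.0               # hypothetical parameter values
y = [6.2, 6.9, 7.4, 6.8]                  # made-up AR(1)-looking data

def log_dnorm(z):
    """Log of the standard normal density (the @dnorm analogue)."""
    return -0.5 * z * z - 0.5 * math.log(2 * math.pi)

def contrib(t):
    if t == 0:                            # plays the role of the d1 = 1 dummy
        var = s2 / (1 - rho ** 2)         # stationary variance
        res = y[t] - c / (1 - rho)        # deviation from the stationary mean
    else:
        var = s2
        res = y[t] - c - rho * y[t - 1]
    return log_dnorm(res / math.sqrt(var)) - math.log(var) / 2

loglik = sum(contrib(t) for t in range(len(y)))
```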


The likelihood specification uses the built-in function @dnorm for the standard normal density. The second term is the Jacobian term that arises from transforming the standard normal variable to one with non-unit variance. (You could, of course, write out the likelihood for the normal distribution without using the @dnorm function.)
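The Jacobian claim is easy to confirm numerically: the standard normal log density at the standardized residual, minus log(var)/2, equals the log density of an N(0, var) variable at the raw residual (plain Python, arbitrary values):

```python
import math

def log_dnorm(z):
    """Log of the standard normal density."""
    return -0.5 * z * z - 0.5 * math.log(2 * math.pi)

res, var = 0.7, 2.5                   # arbitrary residual and variance
lhs = log_dnorm(res / math.sqrt(var)) - math.log(var) / 2
rhs = -0.5 * math.log(2 * math.pi * var) - res ** 2 / (2 * var)
# lhs and rhs agree up to floating point
```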

The program displays the MLE together with the least squares estimates:

' do MLE

ar1.ml(showopts, m=1000, c=1e-5)

show ar1.output

' compare with EViews AR(1) which ignores first obs

show eq1.output

Additional Examples

The following additional example programs can be found in the “Example Files” subdirectory of your default EViews directory.

Conditional logit (clogit1.prg): estimates a conditional logit with 3 outcomes and both individual specific and choice specific regressors. The program also displays the prediction table and carries out a Hausman test for independence of irrelevant alternatives (IIA). See Greene (1997, chapter 19.7) for a discussion of multinomial logit models.

Box-Cox transformation (boxcox1.prg): estimates a simple bivariate regression with an estimated Box-Cox transformation on both the dependent and independent variables. Box-Cox transformation models are notoriously difficult to estimate and the results are very sensitive to starting values.

Disequilibrium switching model (diseq1.prg): estimates the switching model in exercise 15.14–15.15 of Judge et al. (1985, pages 644–646). Note that there are some typos in Judge et al. (1985, pages 639–640). The program uses the likelihood specification in Quandt (1988, page 32, equations 2.3.16–2.3.17).

Multiplicative heteroskedasticity (hetero1.prg): estimates a linear regression model with multiplicative heteroskedasticity. Replicates the results in Greene (1997, example 12.14).

Probit with heteroskedasticity (hprobit1.prg): estimates a probit specification with multiplicative heteroskedasticity. See Greene (1997, example 19.7).

Probit with grouped data (gprobit1.prg): estimates a probit with grouped data (proportions data). Estimates the model in Greene (1997, exercise 19.6).


Nested logit (nlogit1.prg): estimates a nested logit model with 2 branches. Tests the IIA assumption by a Wald test. See Greene (1997, chapter 19.7.4) for a discussion of nested logit models.

Zero-altered Poisson model (zpoiss1.prg): estimates the zero-altered Poisson model. Also carries out the non-nested LR test of Vuong (1989). See Greene (1997, chapter 19.9.6) for a discussion of zero-altered Poisson models and Vuong’s nonnested likelihood ratio test.

Heckman sample selection model (heckman1.prg): estimates Heckman’s two equation sample selection model by MLE using the two-step estimates as starting values.

Weibull hazard model (weibull1.prg): estimates the uncensored Weibull hazard model described in Greene (1997, example 20.18). The program also carries out one of the conditional moment tests in Greene (1997, example 20.19).

GARCH(1,1) with t-distributed errors (arch_t1.prg): estimates a GARCH(1,1) model with t-distribution. The log likelihood function for this model can be found in Hamilton (1994a, equation 21.1.24, page 662). Note that this model may more easily be estimated using the standard ARCH estimation tools provided in EViews.

GARCH with coefficient restrictions (garch1.prg): estimates an MA(1)-GARCH(1,1) model with coefficient restrictions in the conditional variance equation. This model is estimated by Bollerslev, Engle, and Nelson (1994, equation 9.1, page 3015) for different data.

EGARCH with generalized error distributed errors (egarch1.prg): estimates Nelson’s (1991) exponential GARCH with generalized error distribution. The specification and likelihood are described in Hamilton (1994a, pages 668–669). Note that this model may more easily be estimated using the standard ARCH estimation tools provided in EViews (Chapter 20, “ARCH and GARCH Estimation”, on page 601).

Multivariate GARCH (bv_garch.prg and tv_garch.prg): estimates the bi- or trivariate version of the BEKK GARCH specification (Engle and Kroner, 1995).
