

Troubleshooting—367
Troubleshooting
Because the logl object provides a great deal of flexibility, you are more likely to experience problems with estimation using the logl object than with EViews’ built-in estimators.
If you are experiencing difficulties with estimation, the following suggestions may help you solve the problem:
•Check your likelihood specification. A simple error involving a wrong sign can easily stop the estimation process from working. You should also verify that the parameters of the model are really identified (in some specifications you may have to impose a normalization across the parameters). Also, every parameter which appears in the model must feed directly or indirectly into the likelihood contributions. The Check Derivatives view is particularly useful in helping you spot the latter problem.
•Choose your starting values. If any of the likelihood contributions in your sample cannot be evaluated due to missing values or because of domain errors in mathematical operations (logs and square roots of negative numbers, division by zero, etc.) the estimation will stop immediately with the message: “Cannot compute @logl due to missing values”. In other cases, a bad choice of starting values may lead you into regions where the likelihood function is poorly behaved. You should always try to initialize your parameters to sensible numerical values. If you have a simpler estimation technique available which approximates the problem, you may wish to use estimates from this method as starting values for the maximum likelihood specification.
•Make sure lagged values are initialized correctly. In contrast to most other estimation routines in EViews, the logl estimation procedure will not automatically drop observations with NAs or lags from the sample when estimating a log likelihood model. If your likelihood specification involves lags, you will either have to drop observations from the beginning of your estimation sample, or you will have to carefully code the specification so that missing values from before the sample do not cause NAs to propagate through the entire sample (see the AR(1) and GARCH examples for a demonstration).
Since the series used to evaluate the likelihood are contained in your workfile (unless you use the @temp statement to delete them), you can examine the values in the log likelihood and intermediate series to find problems involving lags and missing values.
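The effect of an uninitialized presample value can be sketched outside EViews. The following Python fragment (an illustration only, not EViews code) shows how a single missing lag propagates through a recursively defined series, just as an NA would in a logl specification with lags:

```python
import math

# Toy recursion h(t) = 0.1 + 0.8*h(t-1), mirroring a GARCH-style
# variance recursion. If the presample value is missing (NaN), every
# subsequent value is NaN -- the missing value propagates through
# the whole sample.
def recurse(h_init, n=5):
    h, out = h_init, []
    for _ in range(n):
        h = 0.1 + 0.8 * h
        out.append(h)
    return out

bad = recurse(float("nan"))   # uninitialized presample value
good = recurse(0.5)           # explicitly initialized presample value

print(all(math.isnan(v) for v in bad))        # every value is NaN
print(all(not math.isnan(v) for v in good))   # all values are valid
```

This is why the AR(1) and GARCH examples below take care to treat the first observation(s) specially.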
•Verify your derivatives. If you are using analytic derivatives, use the Check Derivatives view to make sure you have coded the derivatives correctly. If you are using numerical derivatives, consider specifying analytic derivatives or adjusting the options for derivative method or step size.
•Reparameterize your model. If particular parameter values are causing mathematical errors, you may wish to reparameterize the model so that the parameter is restricted to its valid domain. See the discussion below for examples.

Most of the error messages you are likely to see during estimation are self-explanatory. The error message “near singular matrix” may be less obvious. This error message occurs when EViews is unable to invert the matrix of the sum of the outer product of the derivatives so that it is impossible to determine the direction of the next step of the optimization. This error may indicate a wide variety of problems, including bad starting values, but will almost always occur if the model is not identified, either theoretically, or in terms of the available data.
Limitations
The likelihood object can be used to estimate parameters that maximize (or minimize) a variety of objective functions. Although the main use of the likelihood object will be to specify a log likelihood, you can specify least squares and minimum distance estimation problems with the likelihood object as long as the objective function is additive over the sample.
You should be aware that the algorithm used in estimating the parameters of the log likelihood is not well suited to solving arbitrary maximization or minimization problems. The algorithm forms an approximation to the Hessian of the log likelihood, based on the sum of the outer product of the derivatives of the likelihood contributions. This approximation relies on both the functional form and statistical properties of maximum likelihood objective functions, and may not be a good approximation in general settings. Consequently, you may or may not be able to obtain results with other functional forms. Furthermore, the standard error estimates of the parameter values will only have meaning if the series describing the log likelihood contributions are (up to an additive constant) the individual contributions to a correctly specified, well-defined theoretical log likelihood.
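The Hessian approximation described above (the outer-product-of-gradients, or BHHH, estimator) is easy to sketch. The Python fragment below is an illustration only, with hypothetical gradient values; it is not part of any EViews program:

```python
# Outer-product-of-gradients (BHHH) approximation to the Hessian:
# H is approximated by the sum over observations of g_i g_i', where
# g_i is the gradient of observation i's log-likelihood contribution.
def opg_approximation(gradients):
    k = len(gradients[0])
    H = [[0.0] * k for _ in range(k)]
    for g in gradients:
        for a in range(k):
            for b in range(k):
                H[a][b] += g[a] * g[b]
    return H

# Hypothetical per-observation gradient vectors (two parameters)
grads = [(1.0, 0.5), (-0.5, 1.0), (0.25, -0.75)]
H = opg_approximation(grads)
# If the per-observation gradients are (nearly) proportional, H is
# (near) singular -- the source of the "near singular matrix" error
# discussed above.
```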
Currently, the expressions used to describe the likelihood contribution must follow the rules of EViews series expressions. This restriction implies that we do not allow matrix operations in the likelihood specification. In order to specify likelihood functions for multiple equation models, you may have to write out the expression for the determinants and quadratic forms. Although possible, this may become tedious for models with more than two or three equations. See the multivariate GARCH sample programs for examples of this approach.
Additionally, the logl object does not directly handle optimization subject to general inequality constraints. There are, however, a variety of well-established techniques for imposing simple inequality constraints. We provide examples below. The underlying idea is to apply a monotonic transformation to the coefficient so that the new coefficient term takes on values only in the desired range. The commonly used transformations are the @exp for one-sided restrictions and the @logit and @atan for two-sided restrictions.
You should be aware of the limitations of the transformation approach. First, the approach only works for relatively simple inequality constraints. If you have several cross-coefficient inequality restrictions, the solution will quickly become intractable. Second, in order to perform hypothesis tests on the untransformed coefficient, you will have to obtain an estimate of the standard errors of the associated expressions. Since the transformations are generally nonlinear, you will have to compute linear approximations to the variances yourself (using the delta method). Lastly, inference will be poor near the boundary values of the inequality restrictions.
Simple One-Sided Restrictions
Suppose you would like to restrict the estimate of the coefficient of X to be no larger than 1. One way you could do this is to specify the corresponding subexpression as follows:
' restrict coef on x to not exceed 1
res1 = y - c(1) - (1-exp(c(2)))*x
Note that EViews will report the point estimate and the standard error for the parameter C(2), not the coefficient of X. To find the standard error of the expression 1-exp(c(2)), you will have to use the delta method; see for example Greene (2008).
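The delta-method computation can be sketched as follows. In this Python fragment (an illustration using hypothetical estimates, not EViews code), the variance of 1-exp(c(2)) is approximated by the squared derivative of the transformation times the variance of C(2):

```python
import math

# Delta-method standard error for the transformed coefficient
# g(c2) = 1 - exp(c2). Since dg/dc2 = -exp(c2), the approximate
# variance is exp(2*c2) * var(c2).
def delta_se(c2, se_c2):
    grad = -math.exp(c2)       # derivative of the transformation
    return abs(grad) * se_c2   # sqrt(grad^2 * se_c2^2)

c2_hat, c2_se = -0.35, 0.10    # hypothetical estimate and std. error
coef_x = 1 - math.exp(c2_hat)  # implied coefficient on X (always < 1)
coef_se = delta_se(c2_hat, c2_se)
```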
Simple Two-Sided Restrictions
Suppose instead that you want to restrict the coefficient for X to be between -1 and 1. Then you can specify the expression as:
' restrict coef on x to be between -1 and 1
res1 = y - c(1) - (2*@logit(c(2))-1)*x
Again, EViews will report the point estimate and standard error for the parameter C(2). You will have to use the delta method to compute the standard error of the transformation expression 2*@logit(c(2))-1.
More generally, if you want to restrict the parameter to lie between L and H, you can use the transformation:
(H-L)*@logit(c(1)) + L
where C(1) is the parameter to be estimated. In the above example, L=-1 and H=1.
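A quick way to convince yourself that this transformation respects the bounds is to evaluate it over a range of unrestricted values. The Python sketch below (illustrative only; the logistic function here plays the role of the EViews @logit transform) verifies that the transformed parameter always lies strictly inside (L, H):

```python
import math

def logit(c):
    """Logistic function, analogous to the EViews @logit transform."""
    return 1.0 / (1.0 + math.exp(-c))

def bounded(c, L, H):
    """Map an unrestricted parameter c into the open interval (L, H)."""
    return (H - L) * logit(c) + L

# Any real value of c yields a transformed parameter strictly in (-1, 1)
for c in (-20.0, -1.0, 0.0, 1.0, 20.0):
    assert -1.0 < bounded(c, -1.0, 1.0) < 1.0

# c = 0 maps to the midpoint of the interval
assert bounded(0.0, -1.0, 1.0) == 0.0
```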
Examples
In this section, we provide extended examples of working with the logl object to estimate a multinomial logit and a maximum likelihood AR(1) specification. Example programs for these and several other specifications are provided in your default EViews data directory. If you set your default directory to point to the EViews data directory, you should be able to issue a RUN command for each of these programs to create the logl object and to estimate the unknown parameters.

Multinomial Logit (mlogit1.prg)
In this example, we demonstrate how to specify and estimate a simple multinomial logit model using the logl object. Suppose the dependent variable Y can take one of three categories 1, 2, and 3. Further suppose that there are data on two regressors, X1 and X2 that vary across observations (individuals). Standard examples include variables such as age and level of education. Then the multinomial logit model assumes that the probability of observing each category in Y is given by:
$$\Pr(y_i = j) \;=\; \frac{\exp(b_{0j} + b_{1j}x_{1i} + b_{2j}x_{2i})}{\displaystyle\sum_{k=1}^{3} \exp(b_{0k} + b_{1k}x_{1i} + b_{2k}x_{2i})} \;=\; P_{ij} \qquad (29.8)$$

for $j = 1, 2, 3$. Note that the parameters $b$ are specific to each category, so there are $3 \times 3 = 9$ parameters in this specification. The parameters are not all identified unless we impose a normalization, so we normalize the parameters of the first choice category $j = 1$ to be all zero: $b_{0,1} = b_{1,1} = b_{2,1} = 0$ (see, for example, Greene (2008, Section 23.11.1)).

The log likelihood function for the multinomial logit can be written as:

$$l \;=\; \sum_{i=1}^{N} \sum_{j=1}^{3} d_{ij}\,\log(P_{ij}) \qquad (29.9)$$

where $d_{ij}$ is a dummy variable that takes the value 1 if observation $i$ has chosen alternative $j$ and 0 otherwise. The first-order conditions are:

$$\frac{\partial l}{\partial b_{kj}} \;=\; \sum_{i=1}^{N} (d_{ij} - P_{ij})\,x_{ki} \qquad (29.10)$$

for $k = 0, 1, 2$ and $j = 1, 2, 3$.
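Before turning to the EViews program, the probabilities in (29.8) and the likelihood contribution in (29.9) can be sketched in Python (an illustration with hypothetical coefficient values; this is not part of mlogit1.prg):

```python
import math

def mlogit_probs(x1, x2, b2, b3):
    # Linear index for each category; category 1 is normalized to zero,
    # as in (29.8) with b(0,1) = b(1,1) = b(2,1) = 0.
    xb = [0.0,
          b2[0] + b2[1]*x1 + b2[2]*x2,
          b3[0] + b3[1]*x1 + b3[2]*x2]
    denom = sum(math.exp(v) for v in xb)
    return [math.exp(v) / denom for v in xb]

def logl_contrib(p, j):
    # Contribution of one observation choosing alternative j, as in (29.9)
    return math.log(p[j - 1])

# Hypothetical coefficients and regressor values
p = mlogit_probs(1.0, 2.0, b2=(0.5, -0.2, 0.1), b3=(-0.3, 0.4, 0.0))
assert abs(sum(p) - 1.0) < 1e-12   # probabilities sum to one
```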
We have provided, in the Example Files subdirectory of your default EViews directory, a workfile "Mlogit.WK1" containing artificial multinomial data. The program begins by loading this workfile from the EViews example directory:
' load artificial data
%evworkfile = @evpath + "\example files\logl\mlogit"
load "{%evworkfile}"
Next, we declare the coefficient vectors that will contain the estimated parameters for each choice alternative:
' declare parameter vector
coef(3) b2

coef(3) b3
As an alternative, we could have used the default coefficient vector C.
We then set up the likelihood function by issuing a series of append statements:
mlogit.append xb2 = b2(1)+b2(2)*x1+b2(3)*x2
mlogit.append xb3 = b3(1)+b3(2)*x1+b3(3)*x2
'define prob for each choice
mlogit.append denom = 1+exp(xb2)+exp(xb3)
mlogit.append pr1 = 1/denom
mlogit.append pr2 = exp(xb2)/denom
mlogit.append pr3 = exp(xb3)/denom
'specify likelihood
mlogit.append logl1 = (1-dd2-dd3)*log(pr1) +dd2*log(pr2)+dd3*log(pr3)
Since the analytic derivatives for the multinomial logit are particularly simple, we also specify the expressions for the analytic derivatives to be used during estimation and the appropriate @deriv statements:
' specify analytic derivatives
for !i = 2 to 3
mlogit.append @deriv b{!i}(1) grad{!i}1 b{!i}(2) grad{!i}2 b{!i}(3) grad{!i}3
mlogit.append grad{!i}1 = dd{!i}-pr{!i}
mlogit.append grad{!i}2 = grad{!i}1*x1
mlogit.append grad{!i}3 = grad{!i}1*x2
next
Note that if you were to specify this likelihood interactively, you would simply type the expression that follows each append statement directly into the MLOGIT object.
This concludes the actual specification of the likelihood object. Before estimating the model, we get the starting values by estimating a series of binary logit models:
' get starting values from binomial logit
equation eq2.binary(d=l) dd2 c x1 x2
b2 = eq2.@coefs
equation eq3.binary(d=l) dd3 c x1 x2
b3 = eq3.@coefs
To check whether you have specified the analytic derivatives correctly, choose View/Check Derivatives or use the command:
show mlogit.checkderiv

If you have correctly specified the analytic derivatives, they should be fairly close to the numeric derivatives.
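The idea behind Check Derivatives can be reproduced in a few lines. This Python sketch (hypothetical coefficient and regressor values; not EViews code) compares the analytic score of the multinomial logit from (29.10), d_ij - P_ij times the regressor, with a central finite-difference derivative:

```python
import math

def probs(b2_0, x1, x2):
    # Category-2 intercept b2_0 varies; the remaining coefficients
    # are held at hypothetical values for illustration.
    xb = [0.0, b2_0 - 0.2*x1 + 0.1*x2, -0.3 + 0.4*x1]
    denom = sum(math.exp(v) for v in xb)
    return [math.exp(v) / denom for v in xb]

def logl(b2_0, x1, x2, choice):
    return math.log(probs(b2_0, x1, x2)[choice - 1])

x1, x2, choice, b = 1.0, 2.0, 2, 0.5
# Analytic score for the category-2 intercept: d_i2 - P_i2 (x_ki = 1)
analytic = (1.0 if choice == 2 else 0.0) - probs(b, x1, x2)[1]
# Central finite-difference derivative of the same contribution
h = 1e-6
numeric = (logl(b + h, x1, x2, choice) - logl(b - h, x1, x2, choice)) / (2*h)
assert abs(analytic - numeric) < 1e-6   # "fairly close", as the text says
```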
We are now ready to estimate the model. Either click the Estimate button or use the command:
' do MLE
mlogit.ml(showopts, m=1000, c=1e-5)
show mlogit.output
Note that you can examine the derivatives for this model using the Gradient Table view, or you can examine the series in the workfile containing the gradients. You can also look at the intermediate results and log likelihood values. For example, to look at the likelihood contributions for each individual, simply double click on the LOGL1 series.
AR(1) Model (ar1.prg)
In this example, we demonstrate how to obtain full maximum likelihood estimates of an AR(1) model. The maximum likelihood procedure uses the first observation in the sample, in contrast to the built-in AR(1) procedure in EViews, which treats the first observation as fixed and maximizes the conditional likelihood for the remaining observations by nonlinear least squares.
As an illustration, we first generate data that follows an AR(1) process:
' make up data
create m 80 89
rndseed 123
series y=0
smpl @first+1 @last
y = 1+0.85*y(-1) + nrnd
The exact Gaussian likelihood function for an AR(1) model is given by:
$$f(y_t, \theta) \;=\;
\begin{cases}
\dfrac{1}{\sigma\sqrt{2\pi/(1-\rho^2)}}\,
\exp\!\left(-\dfrac{\bigl(y_t - c/(1-\rho)\bigr)^2}{2\bigl(\sigma^2/(1-\rho^2)\bigr)}\right) & t = 1 \\[2ex]
\dfrac{1}{\sigma\sqrt{2\pi}}\,
\exp\!\left(-\dfrac{(y_t - c - \rho y_{t-1})^2}{2\sigma^2}\right) & t > 1
\end{cases}
\qquad (29.11)$$

where $c$ is the constant term, $\rho$ is the AR(1) coefficient, and $\sigma^2$ is the error variance, all to be estimated (see, for example, Hamilton, 1994a, Chapter 5.2).
Since the likelihood function evaluation differs for the first observation in our sample, we create a dummy variable indicator for the first observation:

' create dummy variable for first obs
series d1 = 0
smpl @first @first
d1 = 1
smpl @all
Next, we declare the coefficient vectors to store the parameter estimates and initialize them with the least squares estimates:
' set starting values to LS (drops first obs)
equation eq1.ls y c ar(1)
coef(1) rho = c(2)
coef(1) s2 = eq1.@se^2
We then specify the likelihood function. We use the @recode function to differentiate the evaluation of the likelihood for the first observation from that for the remaining observations. Note that the @recode function shown here uses the updated syntax; please check the current documentation for details.
' set up likelihood
logl ar1
ar1.append @logl logl1
ar1.append var = @recode(d1=1,s2(1)/(1-rho(1)^2),s2(1))
ar1.append res = @recode(d1=1,y-c(1)/(1-rho(1)),y-c(1)-rho(1)*y(-1))
ar1.append sres = res/@sqrt(var)
ar1.append logl1 = log(@dnorm(sres))-log(var)/2
The likelihood specification uses the built-in function @dnorm for the standard normal density. The second term is the Jacobian term that arises from transforming the standard normal variable to one with non-unit variance. (You could, of course, write out the likelihood for the normal distribution without using the @dnorm function.)
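The per-observation contribution defined by these append statements can be mirrored in Python (illustrative only, with hypothetical parameter values), making the two branches of the @recode explicit:

```python
import math

def dnorm(z):
    """Standard normal density, analogous to EViews @dnorm."""
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def ar1_logl_contrib(y_t, y_lag, first, c, rho, s2):
    if first:
        # first observation: stationary mean c/(1-rho), variance s2/(1-rho^2)
        var = s2 / (1 - rho**2)
        res = y_t - c / (1 - rho)
    else:
        var = s2
        res = y_t - c - rho * y_lag
    sres = res / math.sqrt(var)
    # -log(var)/2 is the Jacobian term from standardizing the residual
    return math.log(dnorm(sres)) - math.log(var) / 2

# Hypothetical values: the first-observation branch uses the stationary
# variance, which exceeds the innovation variance s2 when |rho| < 1
l1 = ar1_logl_contrib(1.0, None, True,  c=1.0, rho=0.85, s2=1.0)
lt = ar1_logl_contrib(1.0, 0.9,  False, c=1.0, rho=0.85, s2=1.0)
```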
The program displays the MLE together with the least squares estimates:
' do MLE
ar1.ml(showopts, m=1000, c=1e-5)
show ar1.output
' compare with EViews AR(1) which ignores first obs
show eq1.output
Additional Examples
The following additional example programs can be found in the “Example Files” subdirectory of your default EViews directory.

•Conditional logit (clogit1.prg): estimates a conditional logit with 3 outcomes and both individual specific and choice specific regressors. The program also displays the prediction table and carries out a Hausman test for independence of irrelevant alternatives (IIA). See Greene (2008, Chapter 23.11.1) for a discussion of multinomial logit models.
•Box-Cox transformation (boxcox1.prg): estimates a simple bivariate regression with an estimated Box-Cox transformation on both the dependent and independent variables. Box-Cox transformation models are notoriously difficult to estimate and the results are very sensitive to starting values.
•Disequilibrium switching model (diseq1.prg): estimates the switching model in exercise 15.14–15.15 of Judge et al. (1985, p. 644–646). Note that there are some typos in Judge et al. (1985, p. 639–640). The program uses the likelihood specification in Quandt (1988, page 32, equations 2.3.16–2.3.17).
•Multiplicative heteroskedasticity (hetero1.prg): estimates a linear regression model with multiplicative heteroskedasticity.
•Probit with heteroskedasticity (hprobit1.prg): estimates a probit specification with multiplicative heteroskedasticity.
•Probit with grouped data (gprobit1.prg): estimates a probit with grouped data (proportions data).
•Nested logit (nlogit1.prg): estimates a nested logit model with 2 branches. Tests the IIA assumption by a Wald test. See Greene (2008, Chapter 23.11.4) for a discussion of nested logit models.
•Zero-altered Poisson model (zpoiss1.prg): estimates the zero-altered Poisson model. Also carries out the non-nested LR test of Vuong (1989). See Greene (2008, Chapter 25.4) for a discussion of zero-altered Poisson models and Vuong’s non-nested likelihood ratio test.
•Heckman sample selection model (heckman1.prg): estimates Heckman’s two equation sample selection model by MLE using the two-step estimates as starting values.
•Weibull hazard model (weibull1.prg): estimates the uncensored Weibull hazard model described in Greene (2008, example 25.4).
•GARCH(1,1) with t-distributed errors (arch_t1.prg): estimates a GARCH(1,1) model with t-distribution. The log likelihood function for this model can be found in Hamilton (1994a, equation 21.1.24, page 662). Note that this model may more easily be estimated using the standard ARCH estimation tools provided in EViews (Chapter 24. “ARCH and GARCH Estimation,” on page 195).
•GARCH with coefficient restrictions (garch1.prg): estimates an MA(1)-GARCH(1,1) model with coefficient restrictions in the conditional variance equation. This model is