
EViews also performs a crude trial-and-error search to determine the scale factor $\alpha$ for the Marquardt and quadratic hill-climbing methods.
Derivative-Free Methods
Other optimization routines do not require the computation of derivatives; grid search is a leading example. Grid search simply evaluates the objective function on a grid of parameter values and chooses the parameter values that yield the highest objective value. Grid search is computationally costly, especially for multi-parameter models.
EViews uses (a version of) grid search for the exponential smoothing routine.
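As a concrete illustration of the technique (not of EViews internals), here is a minimal Python sketch of grid search; the objective function and grids are hypothetical:

```python
import itertools

import numpy as np

def grid_search(objective, grids):
    """Evaluate `objective` at every point of the Cartesian product of the
    one-dimensional `grids` and return the best point and its value."""
    best_params, best_value = None, -np.inf
    for point in itertools.product(*grids):
        value = objective(np.array(point))
        if value > best_value:
            best_params, best_value = np.array(point), value
    return best_params, best_value

# Hypothetical concave objective with its maximum at (1, 2).
objective = lambda p: -((p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2)
grids = [np.linspace(-5, 5, 101), np.linspace(-5, 5, 101)]
params, value = grid_search(objective, grids)  # params -> [1.0, 2.0]
```

Even this small two-parameter example requires $101^2 = 10{,}201$ function evaluations, which is why the cost of grid search grows rapidly with the number of parameters.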
Nonlinear Equation Solution Methods
When solving a nonlinear equation system, EViews first analyzes the system to determine whether it can be separated into two or more blocks of equations that can be solved sequentially rather than simultaneously. Technically, this is done by using a graph representation of the equation system in which each variable is a vertex and each equation provides a set of edges. A well-known algorithm from graph theory is then used to find the strongly connected components of the directed graph.
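To illustrate the idea (this is a sketch of the technique, not of EViews internals), the following Python snippet builds the directed dependency graph for a hypothetical three-equation system and finds its strongly connected components with SciPy; variables in the same component must be solved simultaneously, while distinct blocks can be solved sequentially:

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

# Hypothetical system:  x0 = f0(x1),  x1 = f1(x0),  x2 = f2(x0, x1).
# adj[i, j] = 1 if the equation determining variable i depends on variable j.
adj = np.array([[0, 1, 0],
                [1, 0, 0],
                [1, 1, 0]])

n_blocks, labels = connected_components(adj, directed=True, connection="strong")
# n_blocks == 2: x0 and x1 share a label (a simultaneous block),
# while x2 sits in its own block and can be evaluated afterwards.
print(n_blocks, labels)
```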
Once the blocks have been determined, each block is solved for in turn. If the block contains no simultaneity, each equation in the block is simply evaluated once to obtain values for each of the variables.
If a block contains simultaneity, the equations in that block are solved by either a Gauss-Seidel or a Newton method, depending on how the solver options have been set.
Gauss-Seidel
By default, EViews uses the Gauss-Seidel method when solving systems of nonlinear equations. Suppose the system of equations is given by:
$$
\begin{aligned}
x_1 &= f_1(x_1, x_2, \ldots, x_N, z) \\
x_2 &= f_2(x_1, x_2, \ldots, x_N, z) \\
&\;\;\vdots \\
x_N &= f_N(x_1, x_2, \ldots, x_N, z)
\end{aligned}
\tag{B.4}
$$
where x are the endogenous variables and z are the exogenous variables.
The problem is to find a fixed point such that x = f(x, z). Gauss-Seidel employs an iterative updating rule of the form:
$$x^{(i+1)} = f(x^{(i)}, z) \tag{B.5}$$

to find the solution. At each iteration, EViews solves the equations in the order that they appear in the model. If an endogenous variable that has already been solved for in that iteration appears later in some other equation, EViews uses the value as solved in that iteration. For example, the k-th variable in the i-th iteration is solved by:
$$x_k^{(i)} = f_k\bigl(x_1^{(i)}, x_2^{(i)}, \ldots, x_{k-1}^{(i)}, x_k^{(i-1)}, x_{k+1}^{(i-1)}, \ldots, x_N^{(i-1)}, z\bigr) \tag{B.6}$$
The performance of the Gauss-Seidel method can be affected by the ordering of the equations. If the Gauss-Seidel method converges slowly or fails to converge, you should try moving equations with relatively few and unimportant right-hand side endogenous variables toward the beginning of the model.
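To make the updating rule (B.6) concrete, here is a minimal Python sketch of Gauss-Seidel iteration for a hypothetical two-equation system; the functions, tolerance, and iteration limit are illustrative and do not reflect EViews' solver settings:

```python
import numpy as np

def gauss_seidel(f_list, x0, tol=1e-9, max_iter=1000):
    """Solve x_k = f_k(x) by Gauss-Seidel: equations are evaluated in order,
    immediately reusing values already updated in the current iteration."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_prev = x.copy()
        for k, f_k in enumerate(f_list):
            x[k] = f_k(x)  # x already holds this iteration's x_1 .. x_{k-1}
        if np.max(np.abs(x - x_prev)) < tol:
            return x
    raise RuntimeError("Gauss-Seidel failed to converge")

# Hypothetical system:  x0 = 0.5*x1 + 1,  x1 = 0.25*x0 + 2.
solution = gauss_seidel([lambda x: 0.5 * x[1] + 1.0,
                         lambda x: 0.25 * x[0] + 2.0],
                        x0=[0.0, 0.0])
print(solution)  # approximately [2.2857, 2.5714]
```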
Newton's Method
Newton’s method for solving a system of nonlinear equations consists of repeatedly solving a local linear approximation to the system.
Consider the system of equations written in implicit form:

$$F(x, z) = 0 \tag{B.7}$$
where F is the set of equations, x is the vector of endogenous variables and z is the vector of exogenous variables.
In Newton’s method, we take a linear approximation to the system around some values $x^{*}$ and $z^{*}$:

$$F(x, z) = F(x^{*}, z^{*}) + \frac{\partial}{\partial x} F(x^{*}, z^{*}) \, \Delta x = 0 \tag{B.8}$$

and then use this approximation to construct an iterative procedure for updating our current guess for $x$:

$$x_{t+1} = x_t - \left[ \frac{\partial}{\partial x} F(x_t, z) \right]^{-1} F(x_t, z) \tag{B.9}$$

where raising to the power of $-1$ denotes matrix inversion.
The procedure is repeated until the changes in $x$ between successive iterations are smaller than a specified tolerance.
Note that in contrast to Gauss-Seidel, the ordering of equations under Newton does not affect the rate of convergence of the algorithm.
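A minimal Python sketch of the iteration in (B.9), assuming a finite-difference Jacobian and a hypothetical two-equation system (EViews' actual solver options and derivative computation differ):

```python
import numpy as np

def newton_solve(F, x0, tol=1e-9, max_iter=50, h=1e-7):
    """Solve F(x) = 0 by Newton's method, approximating the Jacobian
    by forward differences and solving the local linear system."""
    x = np.array(x0, dtype=float)
    n = x.size
    for _ in range(max_iter):
        Fx = F(x)
        if np.max(np.abs(Fx)) < tol:
            return x
        J = np.empty((n, n))
        for j in range(n):  # build the Jacobian column by column
            x_step = x.copy()
            x_step[j] += h
            J[:, j] = (F(x_step) - Fx) / h
        x = x - np.linalg.solve(J, Fx)  # the Newton step of (B.9)
    raise RuntimeError("Newton's method failed to converge")

# Hypothetical system in implicit form:
#   x0**2 + x1 - 3 = 0,   x0 + x1**2 - 5 = 0   (solution: x = [1, 2])
F = lambda x: np.array([x[0] ** 2 + x[1] - 3.0,
                        x[0] + x[1] ** 2 - 5.0])
print(newton_solve(F, x0=[1.0, 1.0]))
```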
Broyden's Method
Broyden's method is a modification of Newton's method that reduces the computational cost of each iteration by using an approximation to the derivatives of the equation system, rather than the true derivatives, when calculating the Newton step. That is, at each iteration, Broyden's method takes a step:
$$x_{t+1} = x_t - J_t^{-1} F(x_t, z) \tag{B.10}$$

where $J_t$ is the current approximation to the matrix of derivatives of the equation system.
As well as updating the value of $x$ at each iteration, Broyden's method updates the Jacobian approximation $J_t$, based on the difference between the observed change in the residuals of the equation system and the change predicted by a linear approximation using the current Jacobian approximation.
In particular, Broyden's method uses the following equation to update $J$:

$$J_{t+1} = J_t + \frac{\bigl( F(x_{t+1}, z) - F(x_t, z) - J_t \, \Delta x \bigr) \, \Delta x'}{\Delta x' \, \Delta x} \tag{B.11}$$

where $\Delta x = x_{t+1} - x_t$. This update has a number of desirable properties (see Chapter 8 of Dennis and Schnabel (1983) for details).
In EViews, the Jacobian approximation is initialized by taking the true derivatives of the equation system at the starting values of $x$. The updating procedure given above is repeated until the changes in $x$ between iterations become smaller than a specified tolerance. In some cases the method may stall before reaching a solution, in which case a fresh set of derivatives of the equation system is taken at the current values of $x$, and the updating is continued using these derivatives as the new Jacobian approximation.
Broyden's method shares many of the properties of Newton's method including the fact that it is not dependent on the ordering of equations in the system and that it will generally converge quickly in the vicinity of a solution. In comparison to Newton's method, Broyden's method will typically take less time to perform each iteration, but may take more iterations to converge to a solution. In most cases Broyden's method will take less overall time to solve a system than Newton's method, but the relative performance will depend on the structure of the derivatives of the equation system.
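The following Python sketch combines the step (B.10) with the rank-one update (B.11); the finite-difference initialization stands in for the true derivatives described above, and the example system is the same hypothetical one used in the Newton sketch:

```python
import numpy as np

def broyden_solve(F, x0, tol=1e-9, max_iter=200, h=1e-7):
    """Solve F(x) = 0 by Broyden's method: Newton-like steps computed from
    an approximate Jacobian that is updated by a rank-one correction."""
    x = np.array(x0, dtype=float)
    n = x.size
    Fx = F(x)
    J = np.empty((n, n))  # initialize J from derivatives at the start values
    for j in range(n):
        x_step = x.copy()
        x_step[j] += h
        J[:, j] = (F(x_step) - Fx) / h
    for _ in range(max_iter):
        if np.max(np.abs(Fx)) < tol:
            return x
        dx = -np.linalg.solve(J, Fx)           # the step of (B.10)
        x_new = x + dx
        F_new = F(x_new)
        # Rank-one update (B.11): correct J by the linear prediction error.
        J += np.outer(F_new - Fx - J @ dx, dx) / (dx @ dx)
        x, Fx = x_new, F_new
    raise RuntimeError("Broyden's method failed to converge")

# Same hypothetical system as in the Newton sketch (solution: x = [1, 2]).
F = lambda x: np.array([x[0] ** 2 + x[1] - 3.0,
                        x[0] + x[1] ** 2 - 5.0])
print(broyden_solve(F, x0=[1.0, 1.0]))
```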
References
Amemiya, Takeshi (1983). “Nonlinear Regression Models,” Chapter 6 in Z. Griliches and M. D. Intriligator (eds.), Handbook of Econometrics, Volume 1, Amsterdam: Elsevier Science Publishers B.V.
Dennis, J. E. and R. B. Schnabel (1983). “Secant Methods for Systems of Nonlinear Equations,” Chapter 8 in Numerical Methods for Unconstrained Optimization and Nonlinear Equations, London: Prentice-Hall.
Kincaid, David, and Ward Cheney (1996). Numerical Analysis, 2nd edition, Pacific Grove, CA: Brooks/Cole Publishing Company.
Press, W. H., S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery (1992). Numerical Recipes in C, 2nd edition, Cambridge University Press.
Quandt, Richard E. (1983). “Computational Problems and Methods,” Chapter 12 in Z. Griliches and M. D. Intriligator (eds.), Handbook of Econometrics, Volume 1, Amsterdam: Elsevier Science Publishers B.V.
Thisted, Ronald A. (1988). Elements of Statistical Computing, New York: Chapman and Hall.