

If your original model is for a quantile other than the median, you will be offered a third choice of performing the test using only the estimated quantile. For example, if the model is fit to the 0.6 quantile, an additional radio button will appear: Estimation quantile only (0.6).
If you choose this form of the test, there will be a single set of restrictions:
(b(0.4) + b(0.6))/2 = b(0.5).
Also, if it is known a priori that the errors are i.i.d., but possibly not symmetrically distributed, one can restrict the null to examine only the restriction associated with the intercept. To perform this restricted version of the test, simply click on Intercept only in the Test Specification portion of the page.
Lastly, you may use the Output page to save the results from the supplementary process estimation. You may provide a name for the vector of quantiles, the matrix of process coefficients, and the covariance matrix of the coefficients.
The default test of symmetry for the basic median Engel curve specification is given below:
Symmetric Quantiles Test
Equation: UNTITLED
Specification: Y C X
Test statistic compares all coefficients

Test Summary        Chi-Sq. Statistic   Chi-Sq. d.f.    Prob.
Wald Test                    0.530024              2   0.7672

Restriction Detail: b(tau) + b(1-tau) - 2*b(.5) = 0

Quantiles      Variable   Restr. Value   Std. Error    Prob.
0.25, 0.75     C             -5.084370     34.59898   0.8832
               X             -0.002244     0.045012   0.9602

We see that the test compares estimates at the first and third quartile with the median specification. While earlier we saw strong evidence that the slope coefficients are not constant across quantiles, we now see that there is little evidence of departures from symmetry. The overall p-value for the test is around 0.75, and the individual coefficient restriction test values show even less evidence of asymmetry.
Background
We present here a brief discussion of quantile regression. As always, the discussion is necessarily brief and omits considerable detail. For a book-length treatment of quantile regression see Koenker (2005).

The Model
Suppose that we have a random variable $Y$ with probability distribution function
$$F(y) = \mathrm{Prob}(Y \leq y) \tag{28.2}$$
so that for $0 < \tau < 1$, the $\tau$-th quantile of $Y$ may be defined as the smallest $y$ satisfying $F(y) \geq \tau$:
$$Q(\tau) = \inf\{y \colon F(y) \geq \tau\} \tag{28.3}$$
Given a set of $n$ observations on $Y$, the traditional empirical distribution function is given by:
$$F_n(y) = \frac{1}{n}\sum_i 1(Y_i \leq y) \tag{28.4}$$
where $1(z)$ is an indicator function that takes the value 1 if the argument $z$ is true and 0 otherwise. The associated empirical quantile is given by,
$$Q_n(\tau) = \inf\{y \colon F_n(y) \geq \tau\} \tag{28.5}$$
or equivalently, in the form of a simple optimization problem:
$$Q_n(\tau) = \operatorname*{argmin}_{y}\left\{\sum_{i\colon Y_i \geq y} \tau\,|Y_i - y| + \sum_{i\colon Y_i < y} (1-\tau)\,|Y_i - y|\right\} = \operatorname*{argmin}_{y}\sum_i \rho_\tau(Y_i - y) \tag{28.6}$$
where $\rho_\tau(u) = u(\tau - 1(u < 0))$ is the so-called check function, which weights positive and negative values asymmetrically.
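To make the check function concrete, the following minimal Python/NumPy sketch (illustrative only, not EViews code; all names are ours) evaluates the check function and verifies numerically that minimizing the sum of check-function losses over a scalar reproduces the ordinary empirical quantile.

```python
# Sketch: the check function rho_tau and the empirical quantile as a minimizer.
import numpy as np

def rho(u, tau):
    """Check function rho_tau(u) = u * (tau - 1(u < 0))."""
    return u * (tau - (u < 0))

rng = np.random.default_rng(0)
y = rng.normal(size=1000)
tau = 0.75

# Minimize sum_i rho_tau(y_i - c) over a grid of candidate values c;
# the minimizer coincides (up to grid resolution) with the empirical quantile.
grid = np.linspace(y.min(), y.max(), 5001)
obj = np.array([rho(y - c, tau).sum() for c in grid])
print(grid[obj.argmin()], np.quantile(y, tau))  # nearly identical
```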
Quantile regression extends this simple formulation to allow for regressors $X$. We assume a linear specification for the conditional quantile of the response variable $Y$ given values for the $p$-vector of explanatory variables $X$:
$$Q(\tau \mid X_i, \beta(\tau)) = X_i'\beta(\tau) \tag{28.7}$$
where $\beta(\tau)$ is the vector of coefficients associated with the $\tau$-th quantile.
Then the analog to the unconditional quantile minimization above is the conditional quantile regression estimator:
$$\hat{\beta}_n(\tau) = \operatorname*{argmin}_{\beta(\tau)}\sum_i \rho_\tau(Y_i - X_i'\beta(\tau)) \tag{28.8}$$

Estimation
The quantile regression estimator can be obtained as the solution to a linear programming problem. Several algorithms for obtaining a solution to this problem have been proposed in the literature. EViews uses a modified version of the Koenker and D'Orey (1987) implementation of the Barrodale and Roberts (1973) simplex algorithm.
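As an illustration of the linear programming formulation (and not of the modified Barrodale-Roberts simplex that EViews actually uses), the following Python sketch casts the minimization in Equation (28.8) as a standard LP by splitting each residual into positive and negative parts, and solves it with SciPy's general-purpose solver; the function and variable names are ours.

```python
# Sketch of the LP formulation of the quantile regression estimator (Eq. 28.8).
import numpy as np
from scipy.optimize import linprog

def quantreg_lp(y, X, tau):
    n, p = X.shape
    # Variables: [beta (p, free), u (n, >= 0), v (n, >= 0)], where u - v is the residual.
    c = np.concatenate([np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])        # X beta + u - v = y
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=200)
print(quantreg_lp(y, X, 0.5))   # median regression coefficients
```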
The Barrodale and Roberts (BR) algorithm has received more than its fair share of criticism for being computationally inefficient, with dire theoretical results for worst-case scenarios in problems involving large numbers of observations. Simulations showing poor relative performance of the BR algorithm as compared with alternatives such as interior point methods appear to bear this out, with estimation times that are roughly quadratic in the number of observations (Koenker and Hallock, 2001; Portnoy and Koenker, 1997).
Our experience with a suitably optimized version of the BR algorithm is that its performance is certainly better than commonly portrayed. Using various subsets of the low-birthweight data described in Koenker and Hallock (2001), we find that while certainly not as fast as Cholesky-based linear regression (and most likely not as fast as interior point methods), the estimation times for the modified BR algorithm are quite reasonable.
For example, estimating a 16 explanatory variable model for the median using the first 20,000 observations of the data set takes a bit more than 1.2 seconds on a 3.2GHz Pentium 4, with 1.0Gb of RAM; this time includes both estimation and computation of a kernel based estimator of the coefficient covariance matrix. The same specification using the full sample of 198,377 observations takes under 7.5 seconds.
Overall, our experience is that estimation times for the modified BR are roughly linear in the number of observations through a broad range of sample sizes. While our results are not definitive, we see no real impediment to using this algorithm for virtually all practical problems.
Asymptotic Distributions
Under mild regularity conditions, quantile regression coefficients may be shown to be asymptotically normally distributed (Koenker, 2005) with different forms of the asymptotic covariance matrix depending on the model assumptions.
Computation of the coefficient covariance matrices occupies an important place in quantile regression analysis. In large part, this importance stems from the fact that the covariance matrix of the estimates depends on one or more nuisance quantities which must be estimated. Accordingly, a large literature has developed to consider the relative merits of various approaches to estimating the asymptotic variances (see Koenker (2005), for an overview).

We may divide the estimators into three distinct classes: (1) direct methods for estimating the covariance matrix in i.i.d. settings; (2) direct methods for estimating the covariance matrix in independent but not-identically distributed (i.n.i.d.) settings; (3) bootstrap resampling methods for both i.i.d. and i.n.i.d. settings.
Independent and Identical
Koenker and Bassett (1978) derive asymptotic normality results for the quantile regression estimator in the i.i.d. setting, showing that under mild regularity conditions,
$$\sqrt{n}(\hat{\beta}(\tau) - \beta(\tau)) \sim N\bigl(0,\ \tau(1-\tau)s(\tau)^2 J^{-1}\bigr) \tag{28.9}$$
where:
$$J = \lim_{n\to\infty}\sum_i X_i X_i'/n = \lim_{n\to\infty}(X'X/n) \tag{28.10}$$
$$s(\tau) = F^{-1\prime}(\tau) = 1/f(F^{-1}(\tau))$$
and $s(\tau)$, which is termed the sparsity function or the quantile density function, may be interpreted either as the derivative of the quantile function or the inverse of the density function evaluated at the $\tau$-th quantile (see, for example, Welsh, 1988). Note that the i.i.d. error assumption implies that $s(\tau)$ does not depend on $X$, so that the quantile functions depend on $X$ only in location, hence all conditional quantile planes are parallel.
Given the value of the sparsity at a given quantile, direct estimation of the coefficient covariance matrix is straightforward. In fact, the expression for the asymptotic covariance in Equation (28.9) is analogous to the ordinary least squares covariance in the i.i.d. setting, with $\tau(1-\tau)s(\tau)^2$ standing in for the error variance in the usual formula.
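A minimal Python/NumPy sketch of this direct i.i.d. covariance computation, assuming an estimate s_hat of the sparsity is already in hand (the estimators are described in the next section); the function and variable names are ours.

```python
# Sketch: direct i.i.d. covariance estimate from Equation (28.9),
# tau*(1-tau)*s(tau)^2 * (X'X)^{-1}, given a sparsity estimate s_hat.
import numpy as np

def iid_covariance(X, tau, s_hat):
    J = X.T @ X / len(X)                      # sample analogue of J
    return tau * (1 - tau) * s_hat**2 * np.linalg.inv(J) / len(X)
```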
Sparsity Estimation
We have seen the importance of the sparsity function in the formula for the asymptotic covariance matrix of the quantile regression estimates for i.i.d. data. Unfortunately, the sparsity is a function of the unknown distribution F , and therefore is a nuisance quantity which must be estimated.
EViews provides three methods for estimating the scalar sparsity $s(\tau)$: two Siddiqui (1960) difference quotient methods (Koenker, 1994; Bassett and Koenker, 1982) and one kernel density estimator (Powell, 1986; Jones, 1992; Buchinsky, 1995).
Siddiqui Difference Quotient
The first two methods are variants of a procedure originally proposed by Siddiqui (1960; see Koenker, 1994), where we compute a simple difference quotient of the empirical quantile function:
$$\hat{s}(\tau) = [\hat{F}^{-1}(\tau + h_n) - \hat{F}^{-1}(\tau - h_n)]/(2h_n) \tag{28.11}$$

for some bandwidth $h_n$ tending to zero as the sample size $n \to \infty$. $\hat{s}(\tau)$ is in essence computed using a simple two-sided numeric derivative of the quantile function. To make this procedure operational we need to determine: (1) how to obtain estimates of the empirical quantile function $F^{-1}(\tau)$ at the two evaluation points, and (2) what bandwidth to employ.
The first approach to evaluating the quantile functions, which EViews terms Siddiqui (mean fitted), is due to Bassett and Koenker (1982). The approach involves estimating two additional quantile regression models for $\tau - h_n$ and $\tau + h_n$, and using the estimated coefficients to compute fitted quantiles. Substituting the fitted quantiles into the numeric derivative expression yields:
$$\hat{s}(\tau) = x'(\hat{\beta}(\tau + h_n) - \hat{\beta}(\tau - h_n))/(2h_n) \tag{28.12}$$
for an arbitrary $x$. While the i.i.d. assumption implies that $x$ may be set to any value, Bassett and Koenker propose using the mean value $x = \overline{X}$, noting that the mean possesses two very desirable properties: the precision of the estimate is maximized at that point, and the empirical quantile function is monotone in $\tau$ when evaluated at $x = \overline{X}$, so that $\hat{s}(\tau)$ will always yield a positive value for suitable $h_n$.
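A minimal Python sketch of the Siddiqui (mean fitted) calculation in Equation (28.12), using statsmodels' QuantReg as a stand-in for the quantile regression estimator (EViews performs the equivalent fits internally); names are ours, and h is assumed to satisfy 0 < tau - h and tau + h < 1.

```python
# Sketch of the Siddiqui (mean fitted) sparsity estimate: fit at tau - h and
# tau + h and form a difference quotient at the mean of X.
import numpy as np
import statsmodels.api as sm

def sparsity_mean_fitted(y, X, tau, h):
    b_hi = np.asarray(sm.QuantReg(y, X).fit(q=tau + h).params)
    b_lo = np.asarray(sm.QuantReg(y, X).fit(q=tau - h).params)
    x_bar = X.mean(axis=0)
    return x_bar @ (b_hi - b_lo) / (2 * h)
```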
A second, less computationally intensive approach evaluates the quantile functions by computing the $\tau + h_n$ and $\tau - h_n$ empirical quantiles of the residuals from the original quantile regression equation, as in Koenker (1994). Following Koenker, we compute quantiles for the residuals excluding the $p$ residuals that are set to zero in estimation, and interpolate values to get a piecewise linear version of the quantile function. EViews refers to this method as Siddiqui (residual).
Both Siddiqui methods require specification of a bandwidth hn . EViews offers the Bofinger (1975), Hall-Sheather (1988), and Chamberlain (1994) bandwidth methods (along with the ability to specify an arbitrary bandwidth).
The Bofinger bandwidth, which is given by:
$$h_n = n^{-1/5}\left[\frac{4.5\,(\phi(\Phi^{-1}(\tau)))^4}{[2(\Phi^{-1}(\tau))^2 + 1]^2}\right]^{1/5} \tag{28.13}$$
(approximately) minimizes the mean square error (MSE) of the sparsity estimates.
Hall-Sheather proposed an alternative bandwidth that is designed specifically for testing. The Hall-Sheather bandwidth is given by:
$$h_n = n^{-1/3}\, z_\alpha^{2/3}\left[\frac{1.5\,(\phi(\Phi^{-1}(\tau)))^2}{2(\Phi^{-1}(\tau))^2 + 1}\right]^{1/3} \tag{28.14}$$
where $z_\alpha = \Phi^{-1}(1 - \alpha/2)$, for $\alpha$ the parameter controlling the size of the desired $1-\alpha$ confidence intervals.

A similar testing motivation underlies the Chamberlain bandwidth:
$$h_n = z_\alpha\sqrt{\frac{\tau(1-\tau)}{n}} \tag{28.15}$$
which is derived using the exact and normal asymptotic confidence intervals for the order statistics (Buchinsky, 1995).
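For reference, a small Python sketch of the three bandwidth rules in Equations (28.13)-(28.15), with the standard normal density and quantile function used for the plug-in terms; alpha is the size parameter for the Hall-Sheather and Chamberlain rules. Function names are ours.

```python
# Sketch of the Bofinger, Hall-Sheather, and Chamberlain bandwidth rules.
import numpy as np
from scipy.stats import norm

def bofinger(n, tau):
    q = norm.ppf(tau)
    return n ** (-1 / 5) * (4.5 * norm.pdf(q) ** 4 / (2 * q**2 + 1) ** 2) ** (1 / 5)

def hall_sheather(n, tau, alpha=0.05):
    q = norm.ppf(tau)
    z = norm.ppf(1 - alpha / 2)
    return n ** (-1 / 3) * z ** (2 / 3) * (1.5 * norm.pdf(q) ** 2 / (2 * q**2 + 1)) ** (1 / 3)

def chamberlain(n, tau, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)
    return z * np.sqrt(tau * (1 - tau) / n)
```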
Kernel Density
Kernel density estimators of the sparsity offer an important alternative to the Siddiqui approach. Most of the attention has focused on kernel methods for estimating $F^{-1\prime}(\tau)$ directly (Falk, 1988; Welsh, 1988), but one may also estimate $s(\tau)$ using the inverse of a kernel density function estimator (Powell, 1986; Jones, 1992; Buchinsky, 1995). In the present context, we may compute:
$$\hat{s}(\tau) = \left[(1/n)\sum_{i=1}^{n} c_n^{-1} K(\hat{u}_i(\tau)/c_n)\right]^{-1} \tag{28.16}$$
where $\hat{u}_i(\tau)$ are the residuals from the quantile regression fit. EViews supports the latter density function approach, which is termed the Kernel (residual) method, since it is closely related to the more commonly employed Powell (1984, 1989) kernel estimator for the non-i.i.d. case described below.
Kernel estimation of the density function requires specification of a bandwidth $c_n$. We follow Koenker (2005, p. 81) in choosing:
$$c_n = k(\Phi^{-1}(\tau + h_n) - \Phi^{-1}(\tau - h_n)) \tag{28.17}$$
where $k = \min(s, \mathrm{IQR}/1.34)$ is the Silverman (1986) robust estimate of scale (where $s$ is the sample standard deviation and $\mathrm{IQR}$ the interquartile range) and $h_n$ is the Siddiqui bandwidth.
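A minimal Python/NumPy sketch of the Kernel (residual) sparsity estimate of Equations (28.16)-(28.17), assuming a Gaussian kernel for K (the kernel choice here is ours, purely for illustration); names are ours.

```python
# Sketch of the Kernel (residual) sparsity estimator.
import numpy as np
from scipy.stats import norm

def kernel_sparsity(resid, tau, h):
    """resid: residuals u_hat_i(tau) from the quantile regression fit; h: Siddiqui bandwidth."""
    iqr = np.subtract(*np.percentile(resid, [75, 25]))
    k = min(np.std(resid, ddof=1), iqr / 1.34)           # Silverman robust scale
    c = k * (norm.ppf(tau + h) - norm.ppf(tau - h))       # Equation (28.17)
    f_hat = np.mean(norm.pdf(resid / c) / c)              # kernel density estimate at zero
    return 1.0 / f_hat                                    # sparsity = 1 / density
```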
Independent, Non-Identical
We may relax the assumption that the quantile density function does not depend on $X$. The asymptotic distribution of $\sqrt{n}(\hat{\beta}(\tau) - \beta(\tau))$ in the i.n.i.d. setting takes the Huber sandwich form (see, among others, Hendricks and Koenker, 1992):
$$\sqrt{n}(\hat{\beta}(\tau) - \beta(\tau)) \sim N\bigl(0,\ \tau(1-\tau)H(\tau)^{-1} J H(\tau)^{-1}\bigr) \tag{28.18}$$
where $J$ is as defined earlier,
$$J = \lim_{n\to\infty}\sum_i X_i X_i'/n \tag{28.19}$$
and:
$$H(\tau) = \lim_{n\to\infty}\sum_i X_i X_i' f_i(q_i(\tau))/n \tag{28.20}$$
$f_i(q_i(\tau))$ is the conditional density function of the response, evaluated at the $\tau$-th conditional quantile for individual $i$. Note that if the conditional density does not depend on the observation, the Huber sandwich form of the variance in Equation (28.18) reduces to the simple scalar sparsity form given in Equation (28.9).
Computation of a sample analogue to $J$ is straightforward, so we focus on estimation of $H(\tau)$. EViews offers a choice of two methods for estimating $H(\tau)$: a Siddiqui-type difference method proposed by Hendricks and Koenker (1992), and a Powell (1984, 1989) kernel method based on residuals of the estimated model. EViews labels the first method Siddiqui (mean fitted), and the latter method Kernel (residual).
The Siddiqui-type method proposed by Hendricks and Koenker (1992) is a straightforward generalization of the scalar Siddiqui method (see "Siddiqui Difference Quotient," beginning on page 344). As before, two additional quantile regression models are estimated for $\tau - h$ and $\tau + h$, and the estimated coefficients may be used to compute the Siddiqui difference quotient:
$$\hat{f}_i(q_i(\tau)) = 2h_n / (\hat{F}_i^{-1}(\tau + h) - \hat{F}_i^{-1}(\tau - h)) = 2h_n / (X_i'(\hat{\beta}(\tau + h) - \hat{\beta}(\tau - h))) \tag{28.21}$$
Note that in the absence of identically distributed data, the quantile density function $\hat{f}_i(q_i(\tau))$ must be evaluated for each individual. One minor complication is that Equation (28.21) is not guaranteed to be positive except at $X_i = \overline{X}$. Accordingly, Hendricks and Koenker modify the expression slightly to use only positive values:
$$\hat{f}_i(q_i(\tau)) = \max\bigl(0,\ 2h_n / (X_i'(\hat{\beta}(\tau + h) - \hat{\beta}(\tau - h)) - \delta)\bigr) \tag{28.22}$$
where $\delta$ is a small positive number included to prevent division by zero.
The estimated quantile densities $\hat{f}_i(q_i(\tau))$ are then used to form an estimator $\hat{H}_n$ of $H$:
$$\hat{H}_n = \sum_i \hat{f}_i(q_i(\tau)) X_i X_i'/n \tag{28.23}$$
The Powell (1984, 1989) kernel approach replaces the Siddiqui difference with a kernel density estimator using the residuals of the original fitted model:
$$\hat{H}_n = (1/n)\sum_i c_n^{-1} K(\hat{u}_i(\tau)/c_n) X_i X_i' \tag{28.24}$$
where $K$ is a kernel function that integrates to 1, and $c_n$ is a kernel bandwidth. EViews uses the Koenker (2005) kernel bandwidth as described in "Kernel Density" on page 346 above.
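A minimal Python/NumPy sketch combining the Powell kernel estimator of Equation (28.24) with the sandwich form of Equation (28.18) to produce a finite-sample covariance estimate; a Gaussian kernel is assumed and the bandwidth c is taken as given. Names are ours.

```python
# Sketch: Powell kernel estimate of H(tau) and the Huber sandwich covariance.
import numpy as np
from scipy.stats import norm

def sandwich_covariance(X, resid, tau, c):
    n = len(resid)
    w = norm.pdf(resid / c) / c                       # c^{-1} K(u_hat_i / c)
    H = (X * w[:, None]).T @ X / n                    # Equation (28.24)
    J = X.T @ X / n
    Hinv = np.linalg.inv(H)
    return tau * (1 - tau) * Hinv @ J @ Hinv / n      # finite-sample covariance
```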

Bootstrapping
The direct methods of estimating the asymptotic covariance matrices of the estimates require the estimation of the sparsity nuisance parameter, either at a single point, or conditionally for each observation. One method of avoiding this cumbersome estimation is to employ bootstrapping techniques for the estimation of the covariance matrix.
EViews supports four different bootstrap methods: the residual bootstrap (Residual), the design, or XY-pair, bootstrap (XY-pair), and two variants of the Markov Chain Marginal Bootstrap (MCMB and MCMB-A).
The following discussion provides a brief overview of the various bootstrap methods. For additional detail, see Buchinsky (1995), He and Hu (2002), and Kocherginsky, He, and Mu (2005).
Residual Bootstrap
The residual bootstrap is constructed by resampling (with replacement) separately from the residuals $\hat{u}_i(\tau)$ and from the $X_i$.
Let $u^*$ be an $m$-vector of resampled residuals, and let $X^*$ be an $m \times p$ matrix of independently resampled $X$. (Note that $m$ need not be equal to the original sample size $n$.) We form the dependent variable using the resampled residuals, resampled data, and estimated coefficients, $Y^* = X^*\hat{\beta}(\tau) + u^*$, and then construct a bootstrap estimate of $\beta(\tau)$ using $Y^*$ and $X^*$.
This procedure is repeated for $B$ bootstrap replications, and the estimator of the asymptotic covariance matrix is formed from:
$$\hat{V}(\hat{\beta}) = \frac{m}{n}\cdot\frac{1}{B}\sum_{j=1}^{B}\bigl(\hat{\beta}_j(\tau) - \bar{\hat{\beta}}(\tau)\bigr)\bigl(\hat{\beta}_j(\tau) - \bar{\hat{\beta}}(\tau)\bigr)' \tag{28.25}$$
where $\bar{\hat{\beta}}(\tau)$ is the mean of the bootstrap elements. The bootstrap covariance matrix $\hat{V}(\hat{\beta})$ is simply a (scaled) estimate of the sample variance of the bootstrap estimates of $\beta(\tau)$.
Note that the validity of using separate draws from $\hat{u}_i(\tau)$ and $X_i$ requires independence of the $u$ and the $X$.
XY-pair (Design) Bootstrap
The XY-pair bootstrap is the most natural form of bootstrap resampling, and is valid in settings where $u$ and $X$ are not independent. For the XY-pair bootstrap, we simply form $B$ randomly drawn (with replacement) subsamples of size $m$ from the original data, then compute estimates of $\beta(\tau)$ using the $(y^*, X^*)$ for each subsample. The asymptotic covariance matrix is then estimated from the sample variance of the bootstrap results using Equation (28.25).
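A minimal Python sketch of the XY-pair bootstrap covariance, with statsmodels' QuantReg standing in for the quantile regression estimator and the scaling of Equation (28.25); the function name and defaults are ours.

```python
# Sketch of the XY-pair (design) bootstrap covariance of Equation (28.25).
import numpy as np
import statsmodels.api as sm

def xy_bootstrap_cov(y, X, tau, B=200, m=None, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    m = n if m is None else m
    betas = []
    for _ in range(B):
        idx = rng.choice(n, size=m, replace=True)     # resample (y, X) pairs
        betas.append(np.asarray(sm.QuantReg(y[idx], X[idx]).fit(q=tau).params))
    betas = np.array(betas)
    dev = betas - betas.mean(axis=0)
    return (m / n) * dev.T @ dev / B                  # Equation (28.25)
```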

Markov Chain Marginal Bootstrap
The primary disadvantage to the residual and design bootstrapping methods is that they are computationally intensive, requiring estimation of a relatively difficult p -dimensional linear programming problem for each bootstrap replication.
He and Hu (2002) proposed a new method for constructing bootstrap replications that reduces each $p$-dimensional bootstrap optimization to a sequence of $p$ easily solved one-dimensional problems. The sequence of one-dimensional solutions forms a Markov chain whose sample variance, computed using Equation (28.25), consistently approximates the true covariance for large $n$ and $M$.
One problem with the MCMB is that high autocorrelation in the MCMB sequence for specific coefficients will result in poor estimates of the asymptotic covariance for a given chain length $M$, and may result in non-convergence of the covariance estimates for any chain of practical length.
Kocherginsky, He, and Mu (KHM, 2005) propose a modification to MCMB, which alleviates autocorrelation problems by transforming the parameter space prior to performing the MCMB algorithm, and then transforming the result back to the original space. Note that the resulting MCMB-A algorithm requires the i.i.d. assumption, though the authors suggest that the method is robust against heteroskedasticity.
Practical recommendations for the MCMB-A are provided in KHM. Summarizing, they recommend that the methods be applied to problems where $n\min(\tau, 1-\tau) > 5p$, with $M$ between 100 and 200 for relatively small problems ($n \leq 1000$, $p \leq 10$). For moderately large problems with $np$ between 10,000 and 2,000,000, they recommend $M$ between 50 and 200, depending on one's level of patience.
Model Evaluation and Testing
Evaluation of the quality of a quantile regression model may be conducted using goodness-of-fit criteria, as well as formal testing using quasi-likelihood ratio and Wald tests.
Goodness-of-Fit
Koenker and Machado (1999) define a goodness-of-fit statistic for quantile regression that is analogous to the $R^2$ from conventional regression analysis. We begin by recalling our linear quantile specification, $Q(\tau \mid X_i, \beta(\tau)) = X_i'\beta(\tau)$, and assume that we may partition the data and coefficient vector as $X_i = (1, X_{i1}')'$ and $\beta(\tau) = (\beta_0(\tau), \beta_1(\tau)')'$, so that
$$Q(\tau \mid X_i, \beta(\tau)) = \beta_0(\tau) + X_{i1}'\beta_1(\tau) \tag{28.26}$$
We may then define:
$$\hat{V}(\tau) = \min_{\beta(\tau)}\sum_i \rho_\tau(Y_i - \beta_0(\tau) - X_{i1}'\beta_1(\tau)) \qquad \tilde{V}(\tau) = \min_{\beta_0(\tau)}\sum_i \rho_\tau(Y_i - \beta_0(\tau)) \tag{28.27}$$
the minimized unrestricted and intercept-only objective functions. The Koenker and Machado goodness-of-fit criterion is given by:
$$R^1(\tau) = 1 - \hat{V}(\tau)/\tilde{V}(\tau) \tag{28.28}$$
This statistic is an obvious analogue of the conventional $R^2$. $R^1(\tau)$ lies between 0 and 1, and measures the relative success of the model in fitting the data for the $\tau$-th quantile.
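A minimal Python sketch of $R^1(\tau)$: the unrestricted objective is computed from a fitted model, and the intercept-only objective from the empirical $\tau$-quantile of the response (which is the intercept-only minimizer). statsmodels' QuantReg is used as a stand-in estimator; names are ours.

```python
# Sketch of the Koenker-Machado goodness-of-fit R1(tau) of Equation (28.28).
import numpy as np
import statsmodels.api as sm

def rho(u, tau):
    return u * (tau - (u < 0))

def r1(y, X, tau):
    fit = sm.QuantReg(y, X).fit(q=tau)                     # unrestricted fit
    V_hat = rho(y - np.asarray(fit.fittedvalues), tau).sum()
    V_tilde = rho(y - np.quantile(y, tau), tau).sum()      # intercept-only objective
    return 1.0 - V_hat / V_tilde
```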
Quasi-Likelihood Ratio Tests
Koenker and Machado (1999) describe quasi-likelihood ratio tests based on the change in the optimized value of the objective function after relaxation of the restrictions imposed by the null hypothesis. They offer two test statistics, which they term quantile-$\rho$ tests, though as Koenker (2005) points out, they may also be thought of as quasi-likelihood ratio tests.
We define the test statistics:
$$L_n(\tau) = \frac{2(\tilde{V}(\tau) - \hat{V}(\tau))}{\tau(1-\tau)s(\tau)} \qquad \tilde{L}_n(\tau) = \frac{2\tilde{V}(\tau)\log(\tilde{V}(\tau)/\hat{V}(\tau))}{\tau(1-\tau)s(\tau)} \tag{28.29}$$
which are both asymptotically $\chi^2_q$, where $q$ is the number of restrictions imposed by the null hypothesis.
You should note the presence of the sparsity term $s(\tau)$ in the denominator of both expressions. Any of the sparsity estimators outlined in "Sparsity Estimation," on page 344 may be employed for either the null or alternative specifications; EViews uses the sparsity estimated under the alternative. The presence of $s(\tau)$ should be a tipoff that these test statistics require that the quantile density function does not depend on $X$, as in the pure location-shift model.
Note that EViews will always compute an estimate of the scalar sparsity, even when you specify a Huber sandwich covariance method. This value of the sparsity will be used to compute QLR test statistics which may be less robust than the corresponding Wald counterparts.
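A minimal Python sketch of the two QLR statistics in Equation (28.29), given the restricted and unrestricted objective values and a scalar sparsity estimate; names are ours.

```python
# Sketch: quasi-likelihood ratio statistics of Equation (28.29).
import numpy as np

def qlr_statistics(V_restricted, V_unrestricted, tau, s_hat):
    scale = tau * (1 - tau) * s_hat
    ln1 = 2 * (V_restricted - V_unrestricted) / scale
    ln2 = 2 * V_restricted * np.log(V_restricted / V_unrestricted) / scale
    return ln1, ln2   # both asymptotically chi-squared with q degrees of freedom
```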
Coefficient Tests
Given estimates of the asymptotic covariance matrix for the quantile regression estimates, you may construct Wald-type tests of hypotheses and construct coefficient confidence ellipses as in “Coefficient Diagnostics,” beginning on page 140.

Quantile Process Testing
The focus of our analysis thus far has been on the quantile regression model for a single quantile, $\tau$. In a number of cases, we may instead be interested in forming joint hypotheses using coefficients for more than one quantile. We may, for example, be interested in evaluating whether the location-shift model is appropriate by testing for equality of slopes across quantile values. Consideration of more than one quantile regression at the same time comes under the general category of quantile process analysis.
While the EViews equation object is set up to consider only one quantile at a time, specialized tools allow you to perform the most commonly performed quantile process analyses.
Before proceeding to the hypothesis tests of interest, we must first outline the required distributional theory. Define the process coefficient vector:
$$b = (\beta(\tau_1)', \beta(\tau_2)', \ldots, \beta(\tau_K)')' \tag{28.30}$$
Then
$$\sqrt{n}(\hat{b} - b) \sim N(0, Q) \tag{28.31}$$
where $Q$ has blocks of the form:
$$Q_{ij} = [\min(\tau_i, \tau_j) - \tau_i\tau_j]\,H^{-1}(\tau_i)\, J\, H^{-1}(\tau_j) \tag{28.32}$$
In the i.i.d. setting, $Q$ simplifies to,
$$Q = Q_0 \otimes J^{-1} \tag{28.33}$$
where $Q_0$ has representative element:
$$q_{ij} = \frac{\min(\tau_i, \tau_j) - \tau_i\tau_j}{f(F^{-1}(\tau_i))\,f(F^{-1}(\tau_j))} \tag{28.34}$$
Estimation of Q may be performed directly using (28.32), (28.33) and (28.34), or using one of the bootstrap variants.
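A minimal Python/NumPy sketch of the direct i.i.d.-case construction of $Q$ from Equations (28.33)-(28.34), assembling $Q_0$ and taking a Kronecker product; the standard normal density and quantile are plugged in for $f$ and $F$ purely for illustration, and the function name is ours.

```python
# Sketch: i.i.d.-case quantile process covariance Q = Q0 (kron) J^{-1}.
import numpy as np
from scipy.stats import norm

def process_covariance_iid(taus, X):
    taus = np.asarray(taus)
    f = norm.pdf(norm.ppf(taus))                            # f(F^{-1}(tau_k)), normal plug-in
    mins = np.minimum.outer(taus, taus)
    Q0 = (mins - np.outer(taus, taus)) / np.outer(f, f)     # Equation (28.34)
    J_inv = np.linalg.inv(X.T @ X / len(X))
    return np.kron(Q0, J_inv)                               # Equation (28.33)
```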
Slope Equality Testing
Koenker and Bassett (1982a) propose testing for slope equality across quantiles as a robust test of heteroskedasticity. The null hypothesis is given by:
$$H_0\colon \beta_1(\tau_1) = \beta_1(\tau_2) = \cdots = \beta_1(\tau_K) \tag{28.35}$$
which imposes $(p-1)(K-1)$ restrictions on the coefficients. We may form the corresponding Wald statistic, which is distributed as a $\chi^2_{(p-1)(K-1)}$.