
The Wright variance ratio test statistics are obtained by computing the Lo and MacKinlay homoskedastic test statistic using the ranks or rank scores in place of the original data. Under the i.i.d. null hypothesis, the exact sampling distribution of the statistics may be approximated using a permutation bootstrap.
Sign Test
Wright also proposes a modification of the homoskedastic Lo and MacKinlay statistic in which each Δy_t is replaced by its sign. This statistic is valid under the m.d.s. null hypothesis, and under the assumption that μ = 0, the exact sampling distribution may also be approximated using a permutation bootstrap. (EViews does not allow for non-zero means when performing the sign test, since allowing μ ≠ 0 introduces a nuisance parameter into the sampling distribution.)
Panel Statistics
EViews offers two approaches to variance ratio testing in panel settings.
First, under the assumption that cross-sections are independent, with cross-section heterogeneity of the processes, we may compute separate joint variance ratio tests for each cross-section, then combine the p-values from the cross-section results using the Fisher approach as in Maddala and Wu (1999). If we define p_i to be the p-value from the i-th cross-section, then under the hypothesis that the null holds for all N cross-sections,
$$-2 \sum_{i=1}^{N} \log(p_i) \;\rightarrow\; \chi^{2}_{2N} \qquad (30.60)$$

as $T \rightarrow \infty$.
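As a concrete illustration of the Fisher combination in (30.60), the following Python sketch (illustrative only, not EViews code; the function name and the example p-values are hypothetical) computes the combined statistic and its asymptotic chi-squared p-value from a set of per-cross-section p-values:

```python
import numpy as np
from scipy.stats import chi2

def fisher_combined_pvalue(pvalues):
    """Maddala-Wu (1999) style combination of independent p-values.

    Under the joint null, -2 * sum(log(p_i)) is asymptotically chi-squared
    with 2*N degrees of freedom, where N is the number of cross-sections.
    """
    p = np.asarray(pvalues, dtype=float)
    stat = -2.0 * np.sum(np.log(p))
    df = 2 * len(p)
    return stat, chi2.sf(stat, df)

# Hypothetical p-values from per-cross-section variance ratio tests
stat, p_joint = fisher_combined_pvalue([0.12, 0.03, 0.40, 0.07])
print(stat, p_joint)
```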
Alternately, if we assume homogeneity across all cross-sections, we may stack the panel observations and compute the variance ratio test for the stacked data. In this approach, the only adjustment for the panel nature of the stacked data is in ensuring that lag calculations do not span cross-section boundaries.
BDS Independence Test
This series view carries out the BDS test for independence as described in Brock, Dechert, Scheinkman and LeBaron (1996).
The BDS test is a portmanteau test for time-based dependence in a series. It can be used to test against a variety of possible deviations from independence, including linear dependence, non-linear dependence, or chaos.
The test can be applied to a series of estimated residuals to check whether the residuals are independent and identically distributed (iid). For example, the residuals from an ARMA model can be tested to see if there is any non-linear dependence in the series after the linear ARMA model has been fitted.
The idea behind the test is fairly simple. To perform the test, we first choose a distance, ε. We then consider a pair of points. If the observations of the series truly are iid, then for any pair of points the probability that the distance between them is less than or equal to ε will be constant. We denote this probability by c_1(ε).
We can also consider sets consisting of multiple pairs of points. One way to choose sets of pairs is to move through the consecutive observations of the sample in order. That is, given an observation s and an observation t of a series X, we can construct a set of pairs of the form:
$$\{\{X_s, X_t\},\, \{X_{s+1}, X_{t+1}\},\, \{X_{s+2}, X_{t+2}\},\, \ldots,\, \{X_{s+m-1}, X_{t+m-1}\}\} \qquad (30.61)$$
where m is the number of consecutive points used in the set, or embedding dimension. We denote the joint probability that every pair of points in the set satisfies the ε condition by c_m(ε).
The BDS test proceeds by noting that under the assumption of independence, this probability will simply be the product of the individual probabilities for each pair. That is, if the observations are independent,
$$c_m(\epsilon) = c_1(\epsilon)^m. \qquad (30.62)$$
When working with sample data, we do not directly observe c_1(ε) or c_m(ε). We can only estimate them from the sample. As a result, we do not expect this relationship to hold exactly, but only with some error. The larger the error, the less likely it is that the error is caused by random sample variation. The BDS test provides a formal basis for judging the size of this error.
To estimate the probability for a particular dimension, we simply go through all the possible sets of that length that can be drawn from the sample and count the number of sets which satisfy the ε condition. The ratio of the number of sets satisfying the condition to the total number of sets provides the estimate of the probability. Given a sample of n observations of a series X, this condition can be stated in mathematical notation as:
$$c_{m,n}(\epsilon) = \frac{2}{(n-m+1)(n-m)} \sum_{s=1}^{n-m+1} \;\sum_{t=s+1}^{n-m+1} \;\prod_{j=0}^{m-1} I_\epsilon(X_{s+j}, X_{t+j}) \qquad (30.63)$$
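The correlation integral in (30.63) can be computed by brute-force enumeration of pairs of m-histories. The following Python sketch is a direct transcription of the formula, intended only as an illustration (it is not EViews' implementation):

```python
import numpy as np

def correlation_integral(x, m, eps):
    """Sample correlation integral c_{m,n}(eps) of equation (30.63).

    Counts pairs (s, t) with s < t whose m-histories {x[s+j], x[t+j]},
    j = 0, ..., m-1, all lie within eps of each other, then normalizes
    by the total number of such pairs, (n-m+1)(n-m)/2.
    """
    x = np.asarray(x, dtype=float)
    n_hist = len(x) - m + 1                     # number of m-histories
    count = 0
    for s in range(n_hist):
        for t in range(s + 1, n_hist):
            if np.all(np.abs(x[s:s + m] - x[t:t + m]) <= eps):
                count += 1
    return 2.0 * count / (n_hist * (n_hist - 1))
```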
where $I_\epsilon$ is the indicator function:

$$I_\epsilon(x, y) = \begin{cases} 1 & \text{if } |x - y| \le \epsilon \\ 0 & \text{otherwise.} \end{cases} \qquad (30.64)$$
Note that the statistics c_{m,n} are often referred to as correlation integrals.

We can then use these sample estimates of the probabilities to construct a test statistic for independence:
$$b_{m,n}(\epsilon) = c_{m,n}(\epsilon) - c_{1,\,n-m+1}(\epsilon)^m \qquad (30.65)$$
where the second term discards the last m – 1 observations from the sample so that it is based on the same number of terms as the first statistic.
Under the assumption of independence, we would expect this statistic to be close to zero. In fact, it is shown in Brock et al. (1996) that
$$\sqrt{n-m+1}\;\frac{b_{m,n}(\epsilon)}{\sigma_{m,n}(\epsilon)} \;\rightarrow\; N(0, 1) \qquad (30.66)$$
where
$$\sigma^{2}_{m,n}(\epsilon) = 4\left(k^m + 2\sum_{j=1}^{m-1} k^{m-j} c_1^{2j} + (m-1)^2 c_1^{2m} - m^2 k c_1^{2m-2}\right) \qquad (30.67)$$
and where c_1 can be estimated using c_{1,n}. k is the probability of any triplet of points lying within ε of each other, and is estimated by counting the number of sets satisfying the sample condition:
$$k_n(\epsilon) = \frac{2}{n(n-1)(n-2)} \sum_{t=1}^{n} \;\sum_{s=t+1}^{n} \;\sum_{r=s+1}^{n} \bigl(I_\epsilon(X_t, X_s) I_\epsilon(X_s, X_r) + I_\epsilon(X_t, X_r) I_\epsilon(X_r, X_s) + I_\epsilon(X_s, X_t) I_\epsilon(X_t, X_r)\bigr) \qquad (30.68)$$
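Putting equations (30.63) through (30.68) together, a complete (if slow) calculation of the BDS z-statistic can be sketched in a few dozen lines of Python. This is a direct transcription of the formulas above, intended only to illustrate the mechanics; EViews' own implementation is optimized and may differ in numerical detail:

```python
import numpy as np
from scipy.stats import norm

def bds_statistic(x, m, eps):
    """BDS z-statistic for embedding dimension m, following (30.63)-(30.68)."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def corr_integral(z, dim):
        # Correlation integral c_{dim, len(z)}(eps), equation (30.63)
        nh = len(z) - dim + 1
        cnt = sum(np.all(np.abs(z[s:s + dim] - z[t:t + dim]) <= eps)
                  for s in range(nh) for t in range(s + 1, nh))
        return 2.0 * cnt / (nh * (nh - 1))

    c_m = corr_integral(x, m)
    c_1 = corr_integral(x[:n - m + 1], 1)       # c_{1, n-m+1}: drops last m-1 obs
    b = c_m - c_1 ** m                          # equation (30.65)

    # k: probability that any triplet lies within eps, equation (30.68)
    I = (np.abs(x[:, None] - x[None, :]) <= eps).astype(float)
    k = 0.0
    for t in range(n):
        for s in range(t + 1, n):
            for r in range(s + 1, n):
                k += I[t, s] * I[s, r] + I[t, r] * I[r, s] + I[s, t] * I[t, r]
    k *= 2.0 / (n * (n - 1) * (n - 2))

    # Asymptotic variance, equation (30.67)
    var = 4.0 * (k ** m
                 + 2.0 * sum(k ** (m - j) * c_1 ** (2 * j) for j in range(1, m))
                 + (m - 1) ** 2 * c_1 ** (2 * m)
                 - m ** 2 * k * c_1 ** (2 * m - 2))

    z = np.sqrt(n - m + 1) * b / np.sqrt(var)   # equation (30.66)
    return z, 2.0 * norm.sf(abs(z))             # statistic and two-sided p-value
```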
To calculate the BDS test statistic in EViews, simply open the series you would like to test in a window, and choose View/BDS Independence Test.... A dialog will appear prompting you to input options.
To carry out the test, we must choose ε, the distance used for testing proximity of the data points, and the dimension m, the number of consecutive data points to include in the set.
The dialog provides several choices for how to specify ε:
•Fraction of pairs: ε is calculated so as to ensure that a certain fraction of the total number of pairs of points in the sample lie within ε of each other.
•Fixed value: ε is fixed at a raw value specified in the same units as the data series.
•Standard deviations: ε is calculated as a multiple of the standard deviation of the series.
•Fraction of range: ε is calculated as a fraction of the range (the difference between the maximum and minimum values) of the series.
The default is to specify ε as a fraction of pairs, since this method is the least sensitive to the distribution of the underlying series.
You must also specify the value used in calculating ε. The meaning of this value depends on the chosen method. The default value of 0.7 provides a good starting point for the default method when testing shorter dimensions. For testing longer dimensions, you should generally increase the value of ε to improve the power of the test.
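One natural way to implement the default "Fraction of pairs" choice is to take ε as the corresponding quantile of the pairwise absolute differences of the series. The sketch below does this in Python; it is an interpretation of the option described above, and EViews' internal calculation may differ in detail:

```python
import numpy as np

def eps_from_pair_fraction(x, fraction=0.7):
    """Choose eps so that roughly `fraction` of all pairs (x_i, x_j), i < j,
    satisfy |x_i - x_j| <= eps (the 'Fraction of pairs' option)."""
    x = np.asarray(x, dtype=float)
    i, j = np.triu_indices(len(x), k=1)         # indices of all distinct pairs
    return np.quantile(np.abs(x[i] - x[j]), fraction)
```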
EViews also allows you to specify the maximum correlation dimension for which to calculate the test statistic. EViews will calculate the BDS test statistic for all dimensions from 2 to the specified value, using the same value of ε for each dimension. Note that the same ε is used across dimensions only for computational efficiency; it may be better to vary ε with the correlation dimension to maximize the power of the test.
In small samples or in series that have unusual distributions, the distribution of the BDS test statistic can be quite different from the asymptotic normal distribution. To compensate for this, EViews offers you the option of calculating bootstrapped p-values for the test statistic. To request bootstrapped p-values, simply check the Use bootstrap box, then specify the number of repetitions in the field below. A greater number of repetitions will provide a more accurate estimate of the p-values, but the procedure will take longer to perform.
When bootstrapped p-values are requested, EViews first calculates the test statistic for the data in the order in which it appears in the sample. EViews then carries out a set of repetitions; for each repetition, a set of observations of the same size as the original data is drawn randomly with replacement from the original data. For each repetition, EViews recalculates the BDS test statistic for the randomly drawn data and compares it to the statistic obtained from the original data. When all the repetitions are complete, EViews forms the final estimate of the bootstrapped p-value by dividing the lesser of the number of repetitions above or below the original statistic by the total number of repetitions, then multiplying by two (to account for the two tails).
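The resampling scheme just described is straightforward to sketch in Python. The function below reuses the hypothetical bds_statistic helper from the earlier sketch and follows the two-tailed counting rule described above; it is an illustration of the procedure, not a description of EViews' internal code:

```python
import numpy as np

def bds_bootstrap_pvalue(x, m, eps, reps=1000, seed=12345):
    """Two-sided bootstrap p-value for the BDS statistic."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    stat0, _ = bds_statistic(x, m, eps)         # statistic on the original data
    above = below = 0
    for _ in range(reps):
        resample = rng.choice(x, size=len(x), replace=True)
        stat, _ = bds_statistic(resample, m, eps)
        if stat > stat0:
            above += 1
        else:
            below += 1
    # Lesser tail count, divided by the number of repetitions, times two
    return 2.0 * min(above, below) / reps
```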
As an example of a series where the BDS statistic will reject independence, consider a series generated by the non-linear moving average model:
$$y_t = u_t + 8 u_{t-1} u_{t-2} \qquad (30.69)$$
where u_t is a normal random variable. On simulated data, the correlogram of this series shows no statistically significant correlations, yet the BDS test strongly rejects the hypothesis that the observations of the series are independent (note that the Q-statistics on the squared levels of the series also reject independence).
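The following short simulation sketches this example in Python (the sample size, seed, and the choice of ε as 0.7 times the standard deviation are arbitrary, and bds_statistic refers to the earlier illustrative sketch): the first-order autocorrelation of the simulated series is negligible, while the BDS statistic is expected to be large.

```python
import numpy as np

rng = np.random.default_rng(12345)
n = 250
u = rng.standard_normal(n + 2)
y = u[2:] + 8.0 * u[1:-1] * u[:-2]              # y_t = u_t + 8 u_{t-1} u_{t-2}

print(np.corrcoef(y[1:], y[:-1])[0, 1])          # close to zero: no linear dependence
print(bds_statistic(y, m=2, eps=0.7 * np.std(y)))  # expect a large z-statistic
```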