
Covariance specification: Constant Conditional Correlation

GARCH(i) = M(i) + A1(i)*RESID(i)(-1)^2 + B1(i)*GARCH(i)(-1)

COV(i,j) = R(i,j)*@SQRT(GARCH(i)*GARCH(j))

Transformed Variance Coefficients

              Coefficient   Std. Error   z-Statistic    Prob.

M(1)           5.84E-06      1.30E-06      4.482923    0.0000
A1(1)          0.062911      0.010085      6.238137    0.0000
B1(1)          0.916958      0.013613      67.35994    0.0000
M(2)           4.89E-05      1.72E-05      2.836869    0.0046
A1(2)          0.063178      0.012988      4.864469    0.0000
B1(2)          0.772214      0.064005      12.06496    0.0000
M(3)           1.47E-05      3.11E-06      4.735844    0.0000
A1(3)          0.104348      0.009262      11.26665    0.0000
B1(3)          0.828536      0.017936      46.19308    0.0000
R(1,2)         0.571323      0.018238      31.32550    0.0000
R(1,3)        -0.403219      0.023634     -17.06082    0.0000
R(2,3)        -0.677329      0.014588     -46.43002    0.0000

Is this model better than the previous model? While the log likelihood value is lower, the model also has fewer coefficients. We may compare the two systems by looking at model selection criteria. The Akaike, Schwarz, and Hannan-Quinn information criteria all show lower values for the VECH model than for the CCC specification, suggesting that the time-varying Diagonal VECH specification may be preferred.

Technical Discussion

While the discussion to follow is expressed in terms of a balanced system of linear equations, the analysis carries forward in a straightforward way to unbalanced systems containing nonlinear equations.

Denote a system of M equations in stacked form as:

    [ y_1 ]   [ X_1   0    ⋯    0  ] [ b_1 ]   [ e_1 ]
    [ y_2 ] = [  0   X_2        ⋮  ] [ b_2 ] + [ e_2 ]                (31.8)
    [  ⋮  ]   [  ⋮          ⋱   0  ] [  ⋮  ]   [  ⋮  ]
    [ y_M ]   [  0    ⋯    0   X_M ] [ b_M ]   [ e_M ]

where y_m is a T vector, X_m is a T × k_m matrix, b_m is a k_m vector of coefficients, and the error terms e have an MT × MT covariance matrix V. The system may be written in compact form as:

    y = Xb + e.                (31.9)


Under the standard assumptions, the residual variance matrix from this stacked system is given by:

    V = E(ee′) = σ²(I_M ⊗ I_T).                (31.10)

Other residual structures are of interest. First, the errors may be heteroskedastic across the M equations. Second, they may be heteroskedastic and contemporaneously correlated. We can characterize both of these cases by defining the M × M matrix of contemporaneous correlations, Σ, where the (i,j)-th element of Σ is given by σ_ij = E(e_it e_jt) for all t. If the errors are contemporaneously uncorrelated, then σ_ij = 0 for i ≠ j, and we can write:

    V = diag(σ_1², σ_2², ..., σ_M²) ⊗ I_T                (31.11)

More generally, if the errors are heteroskedastic and contemporaneously correlated:

    V = Σ ⊗ I_T.                (31.12)

Lastly, at the most general level, there may be heteroskedasticity, contemporaneous correlation, and autocorrelation of the residuals. The general variance matrix of the residuals may be written:

        [ σ_11 Σ_11   σ_12 Σ_12   ⋯   σ_1M Σ_1M ]
    V = [ σ_21 Σ_21   σ_22 Σ_22        ⋮        ]                (31.13)
        [     ⋮                   ⋱             ]
        [ σ_M1 Σ_M1       ⋯          σ_MM Σ_MM ]

where Σ_ij is an autocorrelation matrix for the i-th and j-th equations.

Ordinary Least Squares

The OLS estimator of the estimated variance matrix of the parameters is valid under the assumption that V = σ²(I_M ⊗ I_T), as in Equation (31.10). The estimator for b is given by,

    b_LS = (X′X)⁻¹X′y                (31.14)

and the variance estimator is given by:

    var(b_LS) = s²(X′X)⁻¹                (31.15)

where s² is the residual variance estimate for the stacked system.
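As a concrete illustration, the stacked computation in Equations (31.14) and (31.15) may be sketched in NumPy. This is an illustrative sketch under the balanced, linear setup of (31.8), not EViews' implementation; the helper name stacked_ols is hypothetical.

    import numpy as np

    def stacked_ols(ys, Xs):
        # ys: list of M response vectors of length T; Xs: list of M (T, k_m) regressor matrices
        y = np.concatenate(ys)                      # MT-vector of stacked responses, as in (31.8)
        rows = sum(X.shape[0] for X in Xs)
        cols = sum(X.shape[1] for X in Xs)
        X = np.zeros((rows, cols))                  # block-diagonal stacked regressor matrix
        r = c = 0
        for Xm in Xs:
            X[r:r + Xm.shape[0], c:c + Xm.shape[1]] = Xm
            r, c = r + Xm.shape[0], c + Xm.shape[1]
        b = np.linalg.solve(X.T @ X, X.T @ y)       # b_LS, Equation (31.14)
        e = y - X @ b
        s2 = e @ e / len(y)                         # stacked residual variance (no DF adjustment)
        var_b = s2 * np.linalg.inv(X.T @ X)         # var(b_LS), Equation (31.15)
        return b, var_b, X, y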

 

Weighted Least Squares

The weighted least squares estimator is given by:

 

    b_WLS = (X′V̂⁻¹X)⁻¹X′V̂⁻¹y                (31.16)

where V̂ = diag(s_11, s_22, ..., s_MM) ⊗ I_T is a consistent estimator of V, and s_ii is the residual variance estimator:

    s_ij = (y_i − X_i b_LS)′(y_j − X_j b_LS) / max(T_i, T_j)                (31.17)

where the inner product is taken over the non-missing common elements of i and j. The max function in Equation (31.17) is designed to handle the case of unbalanced data by down-weighting the covariance terms. Provided the missing values are asymptotically negligible, this yields a consistent estimator of the variance elements. Note also that there is no adjustment for degrees of freedom.

When specifying your estimation, you are given a choice of which coefficients to use in computing the s_ij. If you choose not to iterate the weights, the OLS coefficient estimates will be used to estimate the variances. If you choose to iterate the weights, the current parameter estimates (which may be based on the previously computed weights) are used in computing the s_ij. This latter procedure may be iterated until the weights and coefficients converge.

The estimator for the coefficient variance matrix is:

    var(b_WLS) = (X′V̂⁻¹X)⁻¹.                (31.18)

The weighted least squares estimator is efficient, and the variance estimator consistent, under the assumption that there is heteroskedasticity, but no serial or contemporaneous correlation in the residuals.
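The mechanics can be sketched as follows, reusing the hypothetical stacked_ols helper from the OLS sketch above. Rescaling each equation by its estimated residual standard deviation applies the diagonal V̂ of Equation (31.16); this is illustrative only, not EViews' implementation, and it assumes balanced data.

    def system_wls(ys, Xs, iterations=1):
        b, _, _, _ = stacked_ols(ys, Xs)                 # OLS coefficients for the first weights
        splits = np.cumsum([Xm.shape[1] for Xm in Xs])[:-1]
        for _ in range(iterations):
            bs = np.split(b, splits)                     # per-equation coefficient vectors
            s = [np.mean((ym - Xm @ bm) ** 2)            # s_ii as in (31.17), balanced case
                 for ym, Xm, bm in zip(ys, Xs, bs)]
            yw = [ym / np.sqrt(si) for ym, si in zip(ys, s)]
            Xw = [Xm / np.sqrt(si) for Xm, si in zip(Xs, s)]
            b, _, Xw_stacked, _ = stacked_ols(yw, Xw)    # b_WLS, Equation (31.16)
        var_b = np.linalg.inv(Xw_stacked.T @ Xw_stacked) # (X'V^-1 X)^-1, Equation (31.18)
        return b, var_b

Setting iterations greater than one iterates the weights and coefficients as described above.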

It is worth pointing out that if there are no cross-equation restrictions on the parameters of the model, weighted LS on the entire system yields estimates that are identical to those obtained by equation-by-equation LS. Consider the following simple model:

    y_1 = X_1 b_1 + e_1
    y_2 = X_2 b_2 + e_2                (31.19)

If b_1 and b_2 are unrestricted, the WLS estimator given in Equation (31.16) yields:

    b_WLS = [ ((X_1′X_1)/s_11)⁻¹((X_1′y_1)/s_11) ] = [ (X_1′X_1)⁻¹X_1′y_1 ]                (31.20)
            [ ((X_2′X_2)/s_22)⁻¹((X_2′y_2)/s_22) ]   [ (X_2′X_2)⁻¹X_2′y_2 ]

The expression on the right is equivalent to equation-by-equation OLS. Note, however, that even without cross-equation restrictions, the standard errors are not the same in the two cases.

Seemingly Unrelated Regression (SUR)

SUR is appropriate when all the right-hand side regressors X are assumed to be exogenous, and the errors are heteroskedastic and contemporaneously correlated so that the error variance matrix is given by V = Σ ⊗ I_T. Zellner's SUR estimator of b takes the form:

    b_SUR = (X′(Σ̂ ⊗ I_T)⁻¹X)⁻¹ X′(Σ̂ ⊗ I_T)⁻¹y,                (31.21)

where Σ̂ is a consistent estimate of Σ with typical element s_ij, for all i and j.

If you include AR terms in equation j, EViews transforms the model (see "Estimating AR Models" on page 89) and estimates the following equation:

    y_jt = X_jt β_j + Σ_{r=1}^{p_j} ρ_jr (y_j,t−r − X_j,t−r β_j) + ε_jt                (31.22)

where ε_j is assumed to be serially independent, but possibly correlated contemporaneously across equations. At the beginning of the first iteration, we estimate the equation by nonlinear LS and use the estimates to compute the residuals ε̂. We then construct an estimate of Σ using s_ij = (ε̂_i′ε̂_j) / max(T_i, T_j) and perform nonlinear GLS to complete one iteration of the estimation procedure. These iterations may be repeated until the coefficients and weights converge.

Two-Stage Least Squares (TSLS) and Weighted TSLS

TSLS is a single equation estimation method that is appropriate when some of the variables in X are endogenous. Write the j-th equation of the system as,

    YΓ_j + XB_j + e_j = 0                (31.23)

or, alternatively:

 

 

    y_j = Y_j γ_j + X_j β_j + e_j = Z_j δ_j + e_j                (31.24)

where Γ_j′ = (−1, γ_j′, 0), B_j′ = (β_j′, 0), Z_j′ = (Y_j′, X_j′), and δ_j′ = (γ_j′, β_j′). Y is the matrix of endogenous variables and X is the matrix of exogenous variables; Y_j is the matrix of endogenous variables not including y_j.

In the first stage, we regress the right-hand side endogenous variables Y_j on all exogenous variables X and get the fitted values:

 

 

 

 

 

    Ŷ_j = X(X′X)⁻¹X′Y_j.                (31.25)

 

 

In the second stage, we regress y_j on Ŷ_j and X_j to get:

    δ̂_2SLS = (Ẑ_j′Ẑ_j)⁻¹ Ẑ_j′y_j                (31.26)

where Ẑ_j = (Ŷ_j, X_j). The residuals from an equation using these coefficients are used to form weights.

Weighted TSLS applies the weights in the second stage so that:

    δ̂_W2SLS = (Ẑ_j′V̂⁻¹Ẑ_j)⁻¹ Ẑ_j′V̂⁻¹y_j                (31.27)

where the elements of the variance matrix are estimated in the usual fashion using the residuals from unweighted TSLS.
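The two stages of Equations (31.25) and (31.26) can be sketched as follows. This is a NumPy illustration with hypothetical argument names, not EViews code; note that the residuals used to form weights are computed from the original, not the fitted, regressors.

    def tsls(y, Yj, Xj, X_all):
        # y: dependent variable; Yj: included endogenous regressors;
        # Xj: included exogenous regressors; X_all: all exogenous variables (instruments)
        P = X_all @ np.linalg.solve(X_all.T @ X_all, X_all.T)   # projection X(X'X)^-1 X'
        Y_hat = P @ Yj                                          # first stage, Equation (31.25)
        Z_hat = np.column_stack([Y_hat, Xj])                    # Z_j-hat = (Y_j-hat, X_j)
        d = np.linalg.solve(Z_hat.T @ Z_hat, Z_hat.T @ y)       # delta_2SLS, Equation (31.26)
        resid = y - np.column_stack([Yj, Xj]) @ d               # residuals for weight formation
        return d, resid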


If you choose to iterate the weights, V̂ is estimated at each step using the current values of the coefficients and residuals.

Three-Stage Least Squares (3SLS)

Since TSLS is a single equation estimator that does not take account of the covariances between residuals, it is not, in general, fully efficient. 3SLS is a system method that estimates all of the coefficients of the model, then forms weights and reestimates the model using the estimated weighting matrix. It should be viewed as the endogenous variable analogue to the SUR estimator described above.

The first two stages of 3SLS are the same as in TSLS. In the third stage, we apply feasible generalized least squares (FGLS) to the equations in the system in a manner analogous to the SUR estimator.

SUR uses the OLS residuals to obtain a consistent estimate of the cross-equation covariance matrix Σ. This covariance estimator is not, however, consistent if any of the right-hand side variables are endogenous. 3SLS uses the 2SLS residuals to obtain a consistent estimate of Σ.

In the balanced case, we may write the equation as,

    δ̂_3SLS = (Z′(Σ̂⁻¹ ⊗ X(X′X)⁻¹X′)Z)⁻¹ Z′(Σ̂⁻¹ ⊗ X(X′X)⁻¹X′)y                (31.28)

where Σ̂ has typical element:

    s_ij = ((y_i − Z_i δ̂_2SLS)′(y_j − Z_j δ̂_2SLS)) / max(T_i, T_j)                (31.29)

If you choose to iterate the weights, the current coefficients and residuals will be used to estimate Σ̂.
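In the balanced case, the third stage of Equation (31.28) may be sketched by combining per-equation 2SLS output (the hypothetical tsls helper above) with a stacked GLS step. As with the SUR sketch, the explicit Kronecker product is illustrative, not how one would implement this at scale.

    def three_sls(ys, Yjs, Xjs, X_all):
        # ys[j], Yjs[j], Xjs[j]: per-equation data; X_all: all exogenous variables
        T, M = len(ys[0]), len(ys)
        Zs = [np.column_stack([Yj, Xj]) for Yj, Xj in zip(Yjs, Xjs)]
        resid = np.column_stack([tsls(yj, Yj, Xj, X_all)[1]
                                 for yj, Yj, Xj in zip(ys, Yjs, Xjs)])
        S = resid.T @ resid / T                              # Sigma-hat via (31.29), balanced
        y = np.concatenate(ys)
        Z = np.zeros((M * T, sum(Zj.shape[1] for Zj in Zs))) # block-diagonal stacked Z
        r = c = 0
        for Zj in Zs:
            Z[r:r + T, c:c + Zj.shape[1]] = Zj
            r, c = r + T, c + Zj.shape[1]
        P = X_all @ np.linalg.solve(X_all.T @ X_all, X_all.T)
        W = np.kron(np.linalg.inv(S), P)                     # Sigma^-1 x X(X'X)^-1X', (31.28)
        return np.linalg.solve(Z.T @ W @ Z, Z.T @ W @ y)     # delta_3SLS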

Generalized Method of Moments (GMM)

The basic idea underlying GMM is simple and intuitive. We have a set of theoretical moment conditions that the parameters of interest θ should satisfy. We denote these moment conditions as:

    E(m(y, θ)) = 0.                (31.30)

The method of moments estimator is defined by replacing the moment condition (31.30) by its sample analog:

 

 

    Σ_t m(y_t, θ) / T = 0.                (31.31)

 

 

 

However, condition (31.31) will not be satisfied for any θ when there are more restrictions m than there are parameters θ. To allow for such overidentification, the GMM estimator is defined by minimizing the following criterion function:

 


 

 

    Σ_t m(y_t, θ)′ A(y_t, θ) m(y_t, θ)                (31.32)

 

which measures the “distance” between m and zero. A is a weighting matrix that weights each moment condition. Any symmetric positive definite matrix A will yield a consistent estimate of θ. However, it can be shown that a necessary (but not sufficient) condition to obtain an (asymptotically) efficient estimate of θ is to set A equal to the inverse of the covariance matrix Q of the sample moments m. This follows intuitively, since we want to put less weight on the conditions that are more imprecise.

To obtain GMM estimates in EViews, you must be able to write the moment conditions in Equation (31.30) as an orthogonality condition between the residuals of a regression equation, u(y, θ, X), and a set of instrumental variables, Z, so that:

    m(θ, y, X, Z) = Z′u(θ, y, X)                (31.33)

For example, the OLS estimator is obtained as a GMM estimator with the orthogonality conditions:

    X′(y − Xb) = 0.                (31.34)

For the GMM estimator to be identified, there must be at least as many instrumental variables Z as there are parameters θ. See the section on “Generalized Method of Moments,” beginning on page 67 for additional examples of GMM orthogonality conditions.
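For a linear model with residuals u = y − Xb and moments as in (31.33), the minimizer of (31.32) has a closed form, sketched below. This is an illustrative NumPy sketch with hypothetical names; EViews' internals differ.

    def linear_gmm(y, X, Z, A=None):
        # Moment conditions m = Z'(y - Xb)/T, as in (31.33); A is the weighting matrix.
        if A is None:
            A = np.linalg.inv(Z.T @ Z)               # a common first-step (2SLS) weighting
        XtZ = X.T @ Z
        G = XtZ @ A
        b = np.linalg.solve(G @ XtZ.T, G @ Z.T @ y)  # minimizes the quadratic form (31.32)
        u = y - X @ b                                # residuals, used to update A = Q^-1
        return b, u

With the orthogonality conditions (31.34), i.e. Z = X, this collapses to OLS regardless of the choice of A.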

An important aspect of specifying a GMM problem is the choice of the weighting matrix A. EViews uses the optimal A = Q̂⁻¹, where Q̂ is the estimated long-run covariance matrix of the sample moments m. EViews uses the consistent TSLS estimates for the initial estimate of θ in forming the estimate of Q̂.

White’s Heteroskedasticity Consistent Covariance Matrix

If you choose the GMM-Cross section option, EViews estimates Q using White’s heteroskedasticity consistent covariance matrix:

    Q̂ = Γ̂(0) = (1/(T − k)) Σ_{t=1}^{T} Z_t′ u_t u_t′ Z_t                (31.35)

where u is the vector of residuals, and Z_t is a k × p matrix such that the p moment conditions at t may be written as m(θ, y_t, X_t, Z_t) = Z_t′u(θ, y_t, X_t).

 

Heteroskedasticity and Autocorrelation Consistent (HAC) Covariance Matrix

 

If you choose the GMM-Time series option, EViews estimates Q by,

 

    Q̂_HAC = Γ̂(0) + Σ_{j=1}^{T−1} k(j, q) (Γ̂(j) + Γ̂′(j))                (31.36)

where:

    Γ̂(j) = (1/(T − k)) Σ_{t=j+1}^{T} Z_{t−j}′ u_{t−j} u_t′ Z_t.                (31.37)

You also need to specify the kernel function k and the bandwidth q.
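The autocovariance terms (31.37) and the weighted sum (31.36) can be sketched directly once the per-period moments m_t = Z_t′u_t are collected into a (T, p) array; the kernel is passed in as a function k(j, q) such as those defined below. Hypothetical helper names, illustrative only:

    def gamma_hat(moments, j, k):
        # Gamma-hat(j) of Equation (31.37); j: lag, k: number of estimated parameters
        T = moments.shape[0]
        return moments[:T - j].T @ moments[j:] / (T - k)

    def q_hac(moments, q, kernel, k):
        Q = gamma_hat(moments, 0, k)                     # Gamma-hat(0)
        for j in range(1, moments.shape[0]):
            w = kernel(j, q)
            if w == 0.0:
                continue                                 # truncated kernels vanish for j >= q
            G = gamma_hat(moments, j, k)
            Q += w * (G + G.T)                           # Equation (31.36)
        return Q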

Kernel Options

The kernel function k is used to weight the covariances so that Q̂ is ensured to be positive semi-definite. EViews provides two choices for the kernel, Bartlett and quadratic spectral (QS). The Bartlett kernel is given by:

    k(x) = { 1 − x    0 ≤ x ≤ 1                (31.38)
           { 0        otherwise

while the quadratic spectral (QS) kernel is given by:

    k(j/q) = (25 / (12π²x²)) [ sin(6πx/5) / (6πx/5) − cos(6πx/5) ]                (31.39)

where x = j/q. The QS kernel has a faster rate of convergence than the Bartlett kernel and is smooth and not truncated (Andrews 1991). Note that even though the QS kernel is not truncated, it still depends on the bandwidth q (which need not be an integer).
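Both kernels are simple to write down; the sketch below (illustrative, with the QS limit k(0) = 1 handled explicitly) can be passed to the hypothetical q_hac helper above:

    import math

    def bartlett(j, q):
        x = j / q
        return 1.0 - x if 0.0 <= x <= 1.0 else 0.0       # Equation (31.38)

    def quadratic_spectral(j, q):
        x = j / q
        if x == 0.0:
            return 1.0                                   # limit of (31.39) as x -> 0
        z = 6.0 * math.pi * x / 5.0
        return 25.0 / (12.0 * (math.pi * x) ** 2) * (math.sin(z) / z - math.cos(z))  # (31.39)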

Bandwidth Selection

The bandwidth q determines how the weights given by the kernel change with the lags in the estimation of Q. The Newey-West fixed bandwidth is based solely on the number of observations in the sample and is given by:

    q = int(4(T/100)^(2/9))                (31.40)

where int( ) denotes the integer part of the argument.
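For instance, the fixed rule in (31.40) is a one-liner:

    def nw_fixed_bandwidth(T):
        return int(4 * (T / 100.0) ** (2.0 / 9.0))       # Equation (31.40); T = 100 gives q = 4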

EViews also provides two “automatic”, or data-dependent, bandwidth selection methods that are based on the autocorrelations in the data. Both methods select the bandwidth according to the rule:

    q = { int(1.1447(α̂(1)T)^(1/3))    for the Bartlett kernel                (31.41)
        { 1.3221(α̂(2)T)^(1/5)         for the QS kernel

The two methods, Andrews and Variable-Newey-West, differ in how they estimate α̂(1) and α̂(2).

Andrews (1991) is a parametric method that assumes the sample moments follow an AR(1) process. We first fit an AR(1) to each sample moment (31.33) and estimate the autocorrelation coefficients ρ̂_i and the residual variances σ̂_i² for i = 1, 2, ..., p. Then α̂(1) and α̂(2) are estimated by:

    α̂(1) = [ Σ_{i=1}^{p} 4ρ̂_i² σ̂_i⁴ / ((1 − ρ̂_i)⁶ (1 + ρ̂_i)²) ] / [ Σ_{i=1}^{p} σ̂_i⁴ / (1 − ρ̂_i)⁴ ]
                                                                                (31.42)
    α̂(2) = [ Σ_{i=1}^{p} 4ρ̂_i² σ̂_i⁴ / (1 − ρ̂_i)⁸ ] / [ Σ_{i=1}^{p} σ̂_i⁴ / (1 − ρ̂_i)⁴ ]

Note that we weight all moments equally, including the moment corresponding to the constant.

Newey-West (1994) is a nonparametric method based on a truncated weighted sum of the estimated cross-moments Γ̂(j). Here, α̂(1) and α̂(2) are estimated by,

 

 

    α̂(p) = ( l′F̂(p)l / l′F̂(0)l )²                (31.43)

 

 

 

 

 

 

 

 

where l is a vector of ones and:

 

    F̂(p) = (p − 1)Γ̂(0) + Σ_{i=1}^{L} i^p (Γ̂(i) + Γ̂′(i)),                (31.44)

for p = 1, 2.

One practical problem with the Newey-West method is that we have to choose a lag selection parameter L. The choice of L is arbitrary, subject to the condition that it grow at a certain rate. EViews sets the lag parameter to:

    L = int(4(T/100)^a)                (31.45)

where a = 2/9 for the Bartlett kernel and a = 4/25 for the quadratic spectral kernel.

Prewhitening

You can also choose to prewhiten the sample moments m to “soak up” the correlations in m prior to GMM estimation. We first fit a VAR(1) to the sample moments:

    m_t = A m_{t−1} + v_t.                (31.46)

Then the variance of m is estimated by Q̂ = (I − Â)⁻¹ Q̂* (I − Â′)⁻¹, where Q̂* is the long-run variance of the residuals v_t computed using any of the above methods. The GMM estimator is then found by minimizing the criterion function:

 

 

 

 

 

    u′Z Q̂⁻¹ Z′u                (31.47)


Note that while Andrews and Monahan (1992) adjust the VAR estimates to avoid singularity when the moments are near unit root processes, EViews does not perform this eigenvalue adjustment.
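A sketch of the prewhitening computation, assuming the per-period moments are in a (T, p) array and reusing the hypothetical q_hac helper from the HAC sketch above (illustrative only):

    def prewhitened_q(moments, q, kernel, k):
        # Fit the VAR(1) of Equation (31.46): m_t = A m_{t-1} + v_t
        M0, M1 = moments[:-1], moments[1:]
        A = np.linalg.lstsq(M0, M1, rcond=None)[0].T     # A-hat; no eigenvalue adjustment
        v = M1 - M0 @ A.T                                # VAR(1) residuals v_t
        Q_star = q_hac(v, q, kernel, k)                  # long-run variance of v_t
        IA = np.linalg.inv(np.eye(A.shape[0]) - A)
        return IA @ Q_star @ IA.T                        # "recolored" Q-hat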

Multivariate ARCH

ARCH estimation uses maximum likelihood to jointly estimate the parameters of the mean and the variance equations.

Assuming multivariate normality, the log likelihood contributions for GARCH models are given by:

    l_t = −(m/2) log(2π) − (1/2) log|H_t| − (1/2) e_t′H_t⁻¹e_t                (31.48)

where m is the number of mean equations, and e_t is the m vector of mean equation residuals. For Student's t-distribution, the contributions are of the form:

 

    l_t = log[ Γ((ν + m)/2) / ( Γ(ν/2) ((ν − 2)π)^(m/2) ) ] − (1/2) log|H_t|
              − ((ν + m)/2) log[ 1 + e_t′H_t⁻¹e_t / (ν − 2) ]                (31.49)

where ν is the estimated degrees of freedom.
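Both contributions are easy to evaluate numerically. A minimal sketch (illustrative, not EViews' internals), using the log-determinant for numerical stability:

    import math
    import numpy as np

    def ll_normal(e_t, H_t):
        # Gaussian contribution, Equation (31.48); e_t: (m,) residuals, H_t: (m, m) covariance
        m = len(e_t)
        _, logdet = np.linalg.slogdet(H_t)
        quad = e_t @ np.linalg.solve(H_t, e_t)
        return -0.5 * (m * math.log(2.0 * math.pi) + logdet + quad)

    def ll_student_t(e_t, H_t, nu):
        # Student's t contribution, Equation (31.49); nu > 2 degrees of freedom
        m = len(e_t)
        _, logdet = np.linalg.slogdet(H_t)
        quad = e_t @ np.linalg.solve(H_t, e_t)
        const = (math.lgamma((nu + m) / 2.0) - math.lgamma(nu / 2.0)
                 - 0.5 * m * math.log((nu - 2.0) * math.pi))
        return const - 0.5 * logdet - 0.5 * (nu + m) * math.log1p(quad / (nu - 2.0))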

Given a specification for the mean equation and a distributional assumption, all that we require is a specification for the conditional covariance matrix. We consider, in turn, each of the three basic specifications: Diagonal VECH, Constant Conditional Correlation (CCC), and Diagonal BEKK.

Diagonal VECH

Bollerslev et al. (1988) introduce a restricted version of the general multivariate VECH model of the conditional covariance with the following formulation:

    H_t = Q + A • e_{t−1}e_{t−1}′ + B • H_{t−1}                (31.50)

where the coefficient matrices A, B, and Q are N × N symmetric matrices, and the operator “•” is the element-by-element (Hadamard) product. The coefficient matrices may be parameterized in several ways. The most general way is to allow the parameters in the matrices to vary without any restrictions, i.e., to parameterize them as indefinite matrices. In that case, the model may be written in single equation format as:

    (H_t)_ij = (Q)_ij + (A)_ij e_{i,t−1}e_{j,t−1} + (B)_ij (H_{t−1})_ij                (31.51)

where, for instance, (H_t)_ij is the element in the i-th row and j-th column of the matrix H_t.
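Given coefficient matrices, the recursion is a one-pass filter over the residuals. A minimal sketch, assuming a (T, N) residual array and user-supplied presample values; NumPy's elementwise "*" is the Hadamard product of the text:

    def vech_filter(E, Q, A, B, H0):
        # Diagonal VECH recursion of Equation (31.50)
        T, N = E.shape
        H = np.empty((T, N, N))
        H_prev, e_prev = H0, np.zeros(N)                 # presample initialization (an assumption)
        for t in range(T):
            H[t] = Q + A * np.outer(e_prev, e_prev) + B * H_prev
            H_prev, e_prev = H[t], E[t]
        return H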


Each matrix contains N(N + 1)/2 parameters. This model is the most unrestricted version of a Diagonal VECH model. At the same time, it does not ensure that the conditional covariance matrix is positive semidefinite (PSD). As summarized in Ding and Engle (2001), there are several approaches for specifying coefficient matrices that restrict H to be PSD, possibly by reducing the number of parameters. One example is:

    H_t = Q̃Q̃′ + ÃÃ′ • e_{t−1}e_{t−1}′ + B̃B̃′ • H_{t−1}                (31.52)

where the raw matrices Ã, B̃, and Q̃ may be any matrix of rank up to N. For example, one may use

the rank N Cholesky factorized matrix of the coefficient matrix. This method is labeled the Full Rank Matrix in the coefficient Restriction selection of the system ARCH dialog. While this method contains the same number of parameters as the indefinite version, it does ensure that the conditional covariance is PSD.

A second method, which we term Rank One, reduces the number of parameters estimated to N and guarantees that the conditional covariance is PSD. In this case, the estimated raw matrix is restricted, with all but the first column of coefficients equal to zero.

 

 

In both of these specifications, the reported raw variance coefficients are elements of Ã, B̃, and Q̃. These coefficients must be transformed to obtain the matrices of interest: A = ÃÃ′, B = B̃B̃′, and Q = Q̃Q̃′. These transformed coefficients are reported in the extended variance coefficient section at the end of the system estimation results.

There are two other covariance specifications that you may employ. First, the values in the N × N matrix may be a constant, so that:

    B = b ii′                (31.53)

where b is a scalar and i is an N × 1 vector of ones. This Scalar specification implies that for a particular term, the parameters of the variance and covariance equations are restricted to be the same. Alternately, the matrix coefficients may be parameterized as Diagonal so that all off-diagonal elements are restricted to be zero. In both of these parameterizations, the coefficients are not restricted to be positive, so that H is not guaranteed to be PSD.

Lastly, for the constant matrix Q, we may also impose a Variance Target on the coefficients which restricts the values of the coefficient matrix so that:

    Q = Q_0 • (ii′ − A − B)                (31.54)

where Q_0 is the unconditional sample variance of the residuals. When using this option, the constant matrix is not estimated, reducing the number of estimated parameters.
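Under variance targeting, Q is computed rather than estimated. A sketch under the assumption that A and B are already in transformed (Hadamard) form:

    def variance_target_q(E, A, B):
        # Q of Equation (31.54), pinned down by the unconditional residual covariance Q_0
        Q0 = np.cov(E, rowvar=False, bias=True)          # Q_0: sample covariance of residuals
        ones = np.ones_like(Q0)
        return Q0 * (ones - A - B)                       # Hadamard product with (ii' - A - B)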

You may specify a different type of coefficient matrix for each term. For example, if one estimates a multivariate GARCH(1,1) model with an indefinite matrix coefficient for the constant while specifying the coefficients of the ARCH and GARCH terms to be rank one matrices, then the number of parameters will be N((N + 1)/2) + 2N, instead of 3N((N + 1)/2).
