
- •3. Describe the method of OLS.
- •4. The method of testing statistical hypotheses of R. A. Fisher.
- •6. Explain the difference between ratio, interval, ordinal, and nominal scales. Give an example of each. (pp. 30–31)
- •Example 9.5
- •Consequences of Micronumerosity
- •24) Explain the difference between positive and negative autocorrelation. Illustrate. (p. 449).
- •26) Explain and illustrate the graphical method to detect autocorrelation. (pp. 462–465).
- •Efficient estimator.
24) Explain the difference between positive and negative autocorrelation. Illustrate. (p. 449).
No autocorrelation between the disturbances: given any two X values, Xi and Xj (i ≠ j), the correlation between any two disturbances ui and uj (i ≠ j) is zero. Symbolically,
cov(ui, uj | Xi, Xj) = E{[ui − E(ui)] | Xi} {[uj − E(uj)] | Xj} = E(ui | Xi) E(uj | Xj) = 0,
where i and j are two different observations and cov means covariance. In words, (3.2.5) postulates that the disturbances ui and uj are uncorrelated. Technically, this is the assumption of no serial correlation, or no autocorrelation. It means that, given Xi, the deviations of any two Y values from their mean value do not exhibit patterns such as those shown in Figure 3.6a and b. In Figure 3.6a the u's are positively correlated: a positive u tends to be followed by a positive u, and a negative u by a negative u. In Figure 3.6b the u's are negatively correlated: a positive u tends to be followed by a negative u, and vice versa.
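As an illustration (not from the textbook), the following Python sketch simulates disturbances from a first-order autoregressive scheme u_t = ρ u_{t−1} + ε_t with a positive and a negative ρ; the sign of the lag-1 sample correlation reproduces the two patterns described above. The function name and parameter values are illustrative.

```python
# Illustrative sketch: simulate AR(1) disturbances u_t = rho*u_{t-1} + e_t
# to show positively vs negatively autocorrelated patterns.
import numpy as np

rng = np.random.default_rng(0)

def ar1_disturbances(rho, n=100):
    """Generate n disturbances following u_t = rho * u_{t-1} + e_t."""
    e = rng.standard_normal(n)
    u = np.zeros(n)
    for t in range(1, n):
        u[t] = rho * u[t - 1] + e[t]
    return u

u_pos = ar1_disturbances(rho=0.8)   # positive autocorrelation: long runs of the same sign
u_neg = ar1_disturbances(rho=-0.8)  # negative autocorrelation: signs tend to alternate

# Lag-1 sample correlation confirms the sign of the dependence.
print(np.corrcoef(u_pos[:-1], u_pos[1:])[0, 1])   # clearly positive
print(np.corrcoef(u_neg[:-1], u_neg[1:])[0, 1])   # clearly negative
```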
26) Explain and illustrate the graphical method to detect autocorrelation. (pp. 462–465).
Graphical Method. If there is no a priori information about the nature of the autocorrelation, in practice one can run the regression on the assumption that there is no autocorrelation and then do a postmortem examination of the residuals ût to see whether they exhibit any systematic pattern. Such an examination can also suggest how to transform the data so that, in the regression on the transformed data, the disturbances are no longer serially correlated.
There are various ways of examining the residuals. A time sequence plot of the residuals can be produced. Alternatively, we can plot the standardized residuals against time; the standardized residuals are simply the residuals divided by the standard error of the regression. If the plot of the actual or standardized residuals shows a pattern, then the errors may not be random.
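A minimal sketch of this graphical check, assuming the OLS residuals are available in time order; the helper name plot_residuals and the use of a two-coefficient model in the standard-error formula are illustrative assumptions, not from the textbook.

```python
# Hedged sketch of the graphical method: plot residuals and standardized
# residuals against time and look for a systematic pattern.
import numpy as np
import matplotlib.pyplot as plt

def plot_residuals(resid):
    """resid: array of OLS residuals in time order."""
    # Standard error of the regression, assuming a two-variable model (k = 2).
    std_err = np.sqrt(np.sum(resid ** 2) / (len(resid) - 2))
    standardized = resid / std_err

    fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
    ax1.plot(resid, marker="o")
    ax1.set_ylabel("residuals")
    ax2.plot(standardized, marker="o")
    ax2.set_ylabel("standardized residuals")
    ax2.set_xlabel("time")
    plt.show()
```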
25) What are two (out of seven) possible causes of autocorrelation? (pp. 445–448)
27) Explain and illustrate the Durbin-Watson test to detect autocorrelation. (pp. 467–470 until “. . . the scope of this book.”).
The Durbin–Watson test uses a test statistic, d, to detect the presence of autocorrelation (a relationship between values separated from each other by a given time lag) in the residuals (prediction errors) from a regression analysis.
A great advantage of the Durbin–Watson test is that it is based on the estimated residuals, which are routinely computed in regression analysis. It rests on the following assumptions:
The regression model includes the intercept term.
The explanatory variables are nonstochastic, or fixed in repeated sampling.
The disturbances are generated by the first-order autoregressive scheme u_t = ρ u_{t−1} + ε_t.
The error term is assumed to be normally distributed.
The regression model does not include lagged value(s) of the dependent variable among the explanatory variables.
There are no missing values in the data.
d = Σ_{t=2}^{n} (û_t − û_{t−1})² / Σ_{t=1}^{n} û_t²
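A minimal sketch of computing the d statistic from a vector of OLS residuals, using the formula above; the function name is illustrative.

```python
# Compute the Durbin-Watson d statistic from OLS residuals.
import numpy as np

def durbin_watson(resid):
    resid = np.asarray(resid)
    # Numerator: sum of squared successive differences; denominator: sum of squared residuals.
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

# Rule of thumb: d near 2 suggests no first-order autocorrelation;
# d well below 2 suggests positive, d well above 2 negative autocorrelation.
```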
28) What is the difference between a parameter and an estimate of a regression function? Between the stochastic disturbance term ui and the residual term ˆui? (p. 49).
A parameter (for example, β2 in the population regression function) is a fixed but unknown constant, whereas an estimate (for example, β̂2) is the numerical value of that parameter obtained from a particular sample by an estimator such as OLS. Conceptually, ûi is analogous to ui and can be regarded as an estimate of ui. The residual ûi is the difference between the observed Y and the estimated regression line (Ŷ), while the stochastic disturbance (error) term ui is the difference between the observed Y and the true regression function (the expected value of Y). The error term is a theoretical concept that can never be observed, but the residual is a real-world value that is calculated for each observation every time a regression is run; the residual can thus be thought of as an estimate of the error term.
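The distinction can be made concrete with simulated data, where the true parameters and disturbances are known by construction. This hedged sketch fits a two-variable OLS regression and compares the residuals ûi with the true ui; all names and numbers are illustrative.

```python
# Illustrative sketch: with simulated data the true disturbances u_i are known,
# so we can see that the residuals u_hat_i only approximate them.
import numpy as np

rng = np.random.default_rng(1)
n = 50
beta1, beta2 = 2.0, 0.5          # true (population) parameters
X = rng.uniform(0, 10, n)
u = rng.normal(0, 1, n)          # true disturbances (never observed in practice)
Y = beta1 + beta2 * X + u

# OLS estimates of the parameters
b2 = np.cov(X, Y, bias=True)[0, 1] / np.var(X)
b1 = Y.mean() - b2 * X.mean()
u_hat = Y - (b1 + b2 * X)        # residuals: estimates of the u_i

print(b1, b2)                        # estimates, close to but not equal to 2.0 and 0.5
print(np.corrcoef(u, u_hat)[0, 1])   # residuals closely track the true disturbances
```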
29) What does it mean when we say that the least squares estimator is the best linear unbiased estimator? (p. 79, pp. 899–901)
Given the assumptions of the classical linear regression model, the least-squares estimates possess some ideal or optimum properties. These properties are contained in the well-known Gauss–Markov theorem. To understand this theorem, we need to consider the best linear unbiasedness property of an estimator. An estimator, say the OLS estimator β̂2, is said to be a best linear unbiased estimator (BLUE) of β2 if the following hold:
1. It is linear, that is, a linear function of a random variable, such as the dependent variable Y in the regression model.
2. It is unbiased, that is, its average or expected value, E(β̂2), is equal to the true value, β2.
3. It has minimum variance in the class of all such linear unbiased estimators; an unbiased estimator with the least variance is known as an efficient estimator.
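A small Monte Carlo sketch (assuming fixed X values and i.i.d. normal disturbances, both illustrative choices) comparing the OLS slope estimator with another linear unbiased estimator, the "two-point" slope (Y_n − Y_1)/(X_n − X_1). Both are unbiased, but OLS has the smaller sampling variance, which is what "best" means in BLUE.

```python
# Monte Carlo sketch: both estimators of the slope are linear and unbiased,
# but the OLS estimator has the smaller variance (the "best" property).
import numpy as np

rng = np.random.default_rng(2)
beta1, beta2, n, reps = 1.0, 3.0, 30, 5000
X = np.linspace(1, 10, n)                 # fixed in repeated sampling

ols, two_point = [], []
for _ in range(reps):
    Y = beta1 + beta2 * X + rng.normal(0, 2, n)
    b2 = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
    ols.append(b2)
    two_point.append((Y[-1] - Y[0]) / (X[-1] - X[0]))  # also linear and unbiased

print(np.mean(ols), np.mean(two_point))   # both close to the true beta2 = 3.0 (unbiased)
print(np.var(ols), np.var(two_point))     # OLS variance is clearly the smaller of the two
```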