
- 3. Describe the method of OLS.
- 4. The method of testing statistical hypotheses of R. A. Fisher.
- 6. Explain the difference between ratio, interval, ordinal, and nominal scales. Give an example of each. (pp. 30–31)
- Example 9.5
- Consequences of micronumerosity
- 24. Explain the difference between positive and negative autocorrelation. Illustrate. (p. 449)
- 26. Explain and illustrate the graphical method to detect autocorrelation. (pp. 462–465)
- Efficient estimator.
1. What are the steps in the traditional methodology of econometrics? (p. 3, Figure I.4, p. 10). Illustrate using an example (other than the consumption function of pp. 4–10).
Traditional econometric methodology proceeds along the following lines:
1. Statement of theory or hypothesis
2. Specification of the mathematical model of the theory
3. Specification of the statistical, or econometric, model
4. Obtaining the data
5. Estimation of the parameters of the econometric model
6. Hypothesis testing
7. Forecasting or prediction
8. Using the model for control or policy purposes
Example.
1. Assume a theory predicting that more schooling increases the wage.
2. The mathematical model is the equation
Y = β1 + β2X,
where Y is the wage, X is a measure of schooling (e.g., the number of years in school), β1 is a constant, and β2 is the coefficient of schooling. We also call β1 the intercept and β2 the slope coefficient.
3. Here we assume that the mathematical model is correct, but we need to account for the fact that it may not be. We therefore add an error term u, also called a random (stochastic) variable. It represents other non-quantifiable or unknown factors that affect Y, as well as measurement errors that may have entered the data. The econometric model is
Y = β1 + β2X + u.
The error term is assumed to follow some statistical distribution; this will be important later on.
4. We need data on the variables above. These can be obtained from government statistical agencies and other sources; a great deal of data is also available on the Internet nowadays. The skill lies in finding appropriate data in the ever-growing volume available.
5. Here we quantify β1 and β2, i.e., we obtain numerical estimates. This is done by the statistical technique called regression analysis.
6. Now we return to the economic theory, which predicted that schooling increases the wage. Does the econometric model support this hypothesis? What we do here is called statistical inference (hypothesis testing). Technically speaking, the β2 coefficient should be greater than 0.
7. If the hypothesis test supports the theory, we can forecast the wage for given values of schooling. For example, how much would someone earn for an additional year of schooling? If X is the years of schooling, the β2 coefficient answers this question.
8. Lastly, if the theory seems to make sense and the econometric model was not refuted by the hypothesis test, we can go on to use the model for policy recommendations. A numerical sketch of steps 5–7 is given below.
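To make steps 5–7 concrete, here is a minimal Python sketch on synthetic data. All numbers (sample size, the "true" coefficients, the error variance) are invented for illustration and are not from any real wage dataset.

```python
# Sketch of steps 5-7 of the methodology on synthetic wage/schooling data.
import numpy as np

rng = np.random.default_rng(0)

n = 200
X = rng.integers(8, 21, size=n).astype(float)   # years of schooling (invented)
u = rng.normal(0, 2.0, size=n)                  # stochastic error term
Y = 5.0 + 1.5 * X + u                           # assumed "true" wage equation

# Step 5: estimate beta1 and beta2 by OLS.
x_dev = X - X.mean()
beta2_hat = (x_dev * (Y - Y.mean())).sum() / (x_dev ** 2).sum()
beta1_hat = Y.mean() - beta2_hat * X.mean()

# Step 6: test H0: beta2 = 0 against H1: beta2 > 0.
resid = Y - (beta1_hat + beta2_hat * X)
s2 = (resid ** 2).sum() / (n - 2)               # estimated error variance
se_beta2 = np.sqrt(s2 / (x_dev ** 2).sum())
t_stat = beta2_hat / se_beta2                   # compare with a t(n-2) critical value

# Step 7: forecast the wage for, say, 16 years of schooling.
print(beta2_hat, t_stat, beta1_hat + beta2_hat * 16)
```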
2. Explain the difference between conditional and unconditional expected values. Illustrate in a diagram. (pp. 38–39)
Conditional expected values are so called because they depend on the given values of the conditioning variable X. Symbolically, we denote them E(Y | X), read as "the expected value of Y given the value of X." It is important to distinguish these conditional expected values from the unconditional expected value of weekly consumption expenditure, E(Y). If we add the weekly consumption expenditures of all 60 families in the population and divide this number by 60, we get $121.20 ($7272/60), which is the unconditional mean, or expected, value of weekly consumption expenditure; it is unconditional in the sense that in arriving at this number we have disregarded the income levels of the various families. Obviously, the various conditional expected values of Y given in Table 2.1 are different from the unconditional expected value of Y of $121.20. When we ask, "What is the expected value of weekly consumption expenditure of a family?" we get the answer $121.20 (the unconditional mean). But if we ask, "What is the expected value of weekly consumption expenditure of a family whose monthly income is, say, $140?" we get the answer $101 (the conditional mean).
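As a small illustration (with an invented toy population, not the Table 2.1 data), the following Python sketch computes the unconditional mean E(Y) and the conditional means E(Y | X) for each income level:

```python
# Toy population of (income, consumption) pairs; the numbers are invented.
import numpy as np

income = np.array([80, 80, 80, 100, 100, 100, 120, 120, 120])
consumption = np.array([55, 65, 75, 70, 77, 84, 80, 89, 98])

# Unconditional mean: average over the whole population, ignoring income.
print("E(Y) =", consumption.mean())

# Conditional means: average within each income level.
for x in np.unique(income):
    print(f"E(Y | X={x}) =", consumption[income == x].mean())
```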
3. Describe the method of OLS.
In statistics, ordinary least squares (OLS), or linear least squares, is a method for estimating the unknown parameters in a linear regression model. The method minimizes the sum of squared vertical distances between the observed responses in the dataset and the responses predicted by the linear approximation. The resulting estimator can be expressed by a simple formula, especially in the case of a single regressor on the right-hand side.

The OLS estimator is consistent when the regressors are exogenous and there is no perfect multicollinearity, and optimal in the class of linear unbiased estimators when the errors are homoscedastic and serially uncorrelated. Under these conditions, the method of OLS provides minimum-variance mean-unbiased estimation when the errors have finite variances. Under the additional assumption that the errors are normally distributed, OLS is the maximum likelihood estimator. OLS is used in economics (econometrics), political science, and electrical engineering (control theory and signal processing), among many other areas of application.
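As a sketch of the "simple formula" mentioned above, the following Python code computes the OLS estimate β̂ = (X′X)⁻¹X′y by solving the normal equations; the data are synthetic and the model (an intercept plus one regressor) is chosen only for illustration.

```python
# Minimal OLS via the normal equations on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = rng.uniform(0, 10, size=n)
y = 2.0 + 0.8 * x + rng.normal(0, 1.0, size=n)

X = np.column_stack([np.ones(n), x])            # design matrix with intercept
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)    # solves (X'X) beta = X'y

fitted = X @ beta_hat
print("estimates:", beta_hat)
print("sum of squared residuals:", ((y - fitted) ** 2).sum())
```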
4. The method of testing statistical hypotheses of R. A. Fisher.
A statistical hypothesis test is a method of making decisions using data from a scientific study. In statistics, a result is called statistically significant if it has been predicted as unlikely to have occurred by chance alone, according to a pre-determined threshold probability, the significance level. The phrase "test of significance" was coined by the statistician Ronald Fisher.

These tests are used in determining what outcomes of a study would lead to a rejection of the null hypothesis for a pre-specified level of significance; this can help to decide whether results contain enough information to cast doubt on conventional wisdom, given that conventional wisdom has been used to establish the null hypothesis. The critical region of a hypothesis test is the set of all outcomes which cause the null hypothesis to be rejected in favor of the alternative hypothesis. Statistical hypothesis testing is sometimes called confirmatory data analysis, in contrast to exploratory data analysis, which may not have pre-specified hypotheses. Statistical hypothesis testing is a key technique of frequentist inference.
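A minimal sketch of a Fisher-style test of significance, assuming Python with numpy and scipy; the data are synthetic and the 5% significance level is an arbitrary illustrative choice.

```python
# One-sample t-test of H0: mean = 0, rejecting H0 when the p-value
# falls below a pre-specified significance level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sample = rng.normal(0.5, 1.0, size=30)          # true mean is 0.5, so H0 is false

alpha = 0.05                                    # pre-determined significance level
t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)

print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
print("reject H0" if p_value < alpha else "fail to reject H0")
```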
5. Explain the difference between linear in the variables and linear in the parameters. Give examples of each. (p. 42)
Linearity in the Variables
The first and perhaps more "natural" meaning of linearity is that the conditional expectation of Y is a linear function of Xi, such as, for example, E(Y | Xi) = β1 + β2Xi. Geometrically, the regression curve in this case is a straight line. In this interpretation, a regression function such as E(Y | Xi) = β1 + β2Xi² is not a linear function because the variable X appears with a power, or index, of 2.
Linearity in the Parameters
The second interpretation of linearity is that the conditional expectation of Y, E(Y | Xi), is a linear function of the parameters, the β's; it may or may not be linear in the variable X. In this interpretation E(Y | Xi) = β1 + β2Xi² is a linear (in the parameters) regression model. To see this, let us suppose X takes the value 3. Therefore, E(Y | X = 3) = β1 + 9β2, which is obviously linear in β1 and β2. All the models shown in Figure 2.3 are thus linear regression models, that is, models linear in the parameters.
Now consider the model E(Y | Xi) = β1 + β2²Xi. Now suppose X = 3; then we obtain E(Y | Xi) = β1 + 3β2², which is nonlinear in the parameter β2. The preceding model is an example of a nonlinear (in the parameters) regression model.
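The practical consequence is that a model linear in the parameters can still be estimated by ordinary least squares after relabeling the regressor. A minimal Python sketch on synthetic data (the coefficients and sample are illustrative assumptions):

```python
# Y = beta1 + beta2 * X^2 is nonlinear in X but linear in the parameters,
# so OLS applies once we treat X^2 as the regressor.
import numpy as np

rng = np.random.default_rng(3)
n = 100
x = rng.uniform(0, 5, size=n)
y = 1.0 + 0.5 * x ** 2 + rng.normal(0, 0.5, size=n)

# Regress y on a constant and x^2: still a linear least-squares problem.
Z = np.column_stack([np.ones(n), x ** 2])
beta_hat = np.linalg.solve(Z.T @ Z, Z.T @ y)
print("beta1, beta2 estimates:", beta_hat)

# By contrast, Y = beta1 + beta2**2 * X cannot be written as a linear
# function of the parameters, so linear least squares does not apply.
```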